When we started the Bump.sh adventure, having API documentation was mostly reserved for A-players like Stripe, Twilio, or Square. For everyone else, it was at best a hidden page buried deep in the dev portal, or a Swagger UI for the most modern teams. At worst, a PDF sent by email or even… nothing at all.
Over the past five years, things have changed. An API documentation portal has become a must-have for any company exposing APIs to its clients. Creating an API is no longer enough: it needs to be found, understood, and used. That’s why it’s critical to have up-to-date documentation, with good SEO, a killer developer experience, and everything in place to optimize the time-to-first-call. And that’s why we built Bump.sh in the first place.
But since 2024, we’ve entered a new era. The APIs we build don’t just need to be discoverable, understandable, and usable by humans. They also need to be usable by AI agents.
And that changes everything, again.
MCP, the turning point
At the end of 2024, Anthropic announced its MCP (Model Context Protocol), designed to allow LLM-based applications and agents, such as Claude or ChatGPT, to interact in a standardized way with external tools like APIs, databases, file systems, etc. Its goal is to make these natural language applications operational (not just talking, but acting within real systems) by providing a standardized way to use tools without retraining or custom integrations.
Ask ChatGPT to book a train ticket today: it might explain which site to use, maybe even do the search for you, but that’s where it stops. With an MCP server exposing a “book a train ticket” tool, if you plug that server into ChatGPT, it instantly becomes capable of searching for the train ticket, adding it to your cart, and paying for it.
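Under the hood, MCP is JSON-RPC: once such a server is plugged in, the model invokes a tool with a message along these lines (the tool name and arguments here are hypothetical, for illustration only):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "book_train_ticket",
    "arguments": {
      "origin": "Paris",
      "destination": "Lyon",
      "travelDate": "2025-06-01",
      "passenger": "Ada Lovelace"
    }
  }
}
```

The server runs the tool and returns the result; the model never needs to know how the booking actually happens.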
To me, it feels a lot like the “App Store” moment we had in 2008, a year after the iPhone came out. Suddenly, phones that had been limited to Apple-built features could do… anything.
It fundamentally changed the way we use smartphones and created a whole new ecosystem… and that’s exactly what’s happening now as MCP becomes widely adopted.
From OpenAPI to MCP: the challenge
I think most of us initially thought something like:
“Wait, we already describe our APIs with OpenAPI. Let’s just generate and expose an MCP server based on that, and voilà, our APIs are AI-ready!”
The idea seemed great: start from an existing standard and use already structured data to generate these servers and make APIs usable by agents instantly… Except it doesn’t work.
Back to my train booking example. In my OpenAPI spec, I’d have something like:
```yaml
paths:
  /search:
    post:
      ...
  /book:
    post:
      ...
  /pay:
    post:
      ...
  /download:
    get:
      ...
```
Following a principle of separation of concerns, we designed this API with multiple endpoints to perform clear actions, each returning specific responses. What happens if I generate a matching MCP server?
I’ll get a list of tools matching my endpoints exactly:
- “Run a search”
- “Book a ticket”
- “Pay for the ticket”
- “Download the ticket”
Now imagine an AI agent asked to book a train ticket. Naturally, it will go straight to the /book endpoint (it sounds like the right one for reservations) and it will immediately hit an error. Why? Because before booking, it actually needs to call /search to retrieve a trip ID and travel details.
The agent can’t guess that sequence any more than a human could. Just like you’d explain to a new developer that they must search before booking, the agent also needs that guidance: it doesn’t “intuit” API logic.
Now suppose that after several trial-and-error attempts, the agent finally learns to start with /search. What next? It faces a massive JSON payload full of data it doesn’t yet understand. Which fields matter for the /book call? Without a clear structure or optimized responses, it’s forced to sift through irrelevant noise, wasting tokens, increasing the risk of errors, and slowing everything down. If it keeps getting stuck in this cycle of guesswork and oversized responses, it eventually gives up… and fails the user’s request.
Using an API means chaining calls to match real-world workflows
An API reference is essential. It gives a clear view of the API landscape: what can be done and how.
But the info it provides is limited to the structure of the API itself. It describes a list of operations that can be done, but not the business actions. In our example, there’s a difference between the final goal (as a human or agent) of booking a train ticket, and the API call to the /book endpoint. The former involves an entire process requiring several operations. The latter refers to a specific call.
An API reference does a good job describing the latter, but not the former. That’s why some OpenAPI users add “Getting Started” sections to their references, and why companies like ReadMe created the concept of “Recipes”: to help human developers understand how to chain API calls to accomplish complex real-world tasks.
Today, only a small part of API documentation provides this kind of help: most leave humans to figure things out on their own, generally relying on trial and error. In the context of AI agents, we absolutely must avoid this. For reasons of efficiency, of course, but also for cost-related reasons: multiple attempts, incorrect call sequences, and massive JSON responses that bloat the context will consume more tokens and therefore increase the cost of using your API for the end user.
We need a way to abstract this complexity away: the weight of these call sequences and their responses. In fact, in my opinion, these chains should be completely invisible to the LLM: in our train ticket example, the agent shouldn’t care that booking a ticket takes four different API calls. What it wants is to book a ticket for a given date, person, origin, and destination. That’s it.
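To make the idea concrete, here is a minimal sketch of that abstraction, with hypothetical stub functions standing in for the four HTTP calls (none of these names or payloads come from a real API): the agent-facing surface is a single `book_ticket` function, and the whole call chain lives inside it.

```python
# Hypothetical stand-ins for the /search, /book, /pay, and /download calls.
def search_trains(origin, destination, date, passenger):
    return {"offers": [{"id": "trip-42", "price": 59.0}]}

def book_offer(trip_id):
    return {"booking": {"id": "bk-7"}, "payment": {"amount": 59.0}}

def pay_booking(booking_id, amount):
    return {"status": "paid"}

def download_ticket(booking_id):
    return {"ticketUrl": f"https://example.test/tickets/{booking_id}.pdf"}

def book_ticket(origin, destination, travel_date, passenger):
    """The single business action an agent should see: the four-call
    chain and its intermediate payloads stay hidden inside."""
    offers = search_trains(origin, destination, travel_date, passenger)
    trip_id = offers["offers"][0]["id"]          # data flows from search to book
    booking = book_offer(trip_id)
    booking_id = booking["booking"]["id"]        # and from book to pay/download
    pay_booking(booking_id, booking["payment"]["amount"])
    return download_ticket(booking_id)["ticketUrl"]
```

The caller provides only origin, destination, date, and passenger, and gets back a ticket URL; the sequencing and the intermediate IDs never enter its context.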
Arazzo: the underrated standard
In May 2024, the OpenAPI community released a new standard, complementary to OpenAPI: Arazzo. Here’s how they describe it:
The Arazzo Specification provides a mechanism that can define sequences of calls and their dependencies to be woven together and expressed in the context of delivering a particular outcome or set of outcomes when dealing with API descriptions (such as OpenAPI descriptions).
Basically, Arazzo lets you describe multi-step API workflows. It defines how requests chain together, depend on each other, and share data to reach a specific outcome. The philosophy is simple: move from a list of endpoints to a clear, coherent user journey.
Here’s what an Arazzo document for a train ticket booking process might look like:
```yaml
arazzo: 1.0.1
info:
  title: Train Ticket Booking
  version: 1.0.0
  summary: Search, book, pay, and download a train ticket.
sourceDescriptions:
  - name: trainApi
    url: ./train-api.yaml
    type: openapi
workflows:
  - workflowId: bookTrainTicket
    summary: End-to-end train ticket purchase
    inputs:
      type: object
      properties:
        origin:
          type: string
        destination:
          type: string
        travelDate:
          type: string
        passenger:
          type: string
    steps:
      - stepId: search
        operationId: searchTrains
        requestBody:
          payload:
            origin: $inputs.origin
            destination: $inputs.destination
            date: $inputs.travelDate
            passenger: $inputs.passenger
        outputs:
          tripId: $response.body#/offers/0/id
      - stepId: book
        operationId: bookOffer
        requestBody:
          payload:
            tripId: $steps.search.outputs.tripId
        outputs:
          bookingId: $response.body#/booking/id
          amount: $response.body#/payment/amount
      - ...
    outputs:
      ticketUrl: $steps.download.outputs.ticketUrl
```
It allows us to describe exactly what an API user needs to understand how to chain calls and achieve our real-world use case. Of course, it’s incredibly useful for developer-facing documentation, but doesn’t this also look exactly like what an AI agent needs?
With an approach based on Arazzo, we can describe the superpowers of our API instead of a bunch of small tools the agents have to figure out. They only see business actions, the necessary inputs, and the specific outputs they need, keeping the context clean and simple. This would not only make agents far more efficient, but also drastically reduce token usage and overall execution cost.
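As a sketch, a generated MCP server could expose the whole workflow above as one tool whose input schema mirrors the Arazzo inputs (the field names come from the example; the exact generated shape is an assumption):

```json
{
  "name": "bookTrainTicket",
  "description": "End-to-end train ticket purchase: search, book, pay, and download the ticket.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "origin": { "type": "string" },
      "destination": { "type": "string" },
      "travelDate": { "type": "string" },
      "passenger": { "type": "string" }
    },
    "required": ["origin", "destination", "travelDate", "passenger"]
  }
}
```

The agent sees this single tool, calls it once, and gets back the ticket URL defined as the workflow’s output; the four underlying endpoints never appear in its context.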
The future we’re building at Bump.sh
Bump.sh is going to evolve a lot.
We will add native Arazzo support across the board: in the docs (describing workflows for humans), in our API explorer (by automatically chaining calls), and in brand new production-ready MCP servers directly generated from your OpenAPI and Arazzo files.
Why is this a real game-changer? Because you’ll be able to make your APIs AI-ready with zero configuration, zero effort, and zero maintenance. The core of Bump.sh, understanding your OpenAPI and detecting changes, will allow you to maintain workflows without breaking anything, abstracting implementation details through simple input/output parameters.
Our product has always been built around API standards (Swagger/OpenAPI, AsyncAPI, JSON Schema, Overlays). Supporting Arazzo and MCP is a natural continuation of our mission: abstracting away the technical complexity so humans can focus on what matters, building products and APIs that other users, human or not, can use efficiently.
Be among the first to expose truly AI-ready APIs through optimized MCP servers, with no effort. Join the early access list.