Hacker News | new | past | comments | ask | show | jobs | submit | gertjandewilde's comments

We built a unified API with a large surface area and ran into a problem when building our MCP server: tool definitions alone burned 50,000+ tokens before the agent touched a single user message.

The fix that worked for us was giving agents a CLI instead. ~80 tokens in the system prompt, progressive discovery through --help, and permission enforcement baked into the binary rather than prompts.

The post covers the benchmarks (Scalekit's 75-run comparison showed 4-32x token overhead for MCP vs CLI), the architecture, and an honest section on where CLIs fall short (streaming, delegated auth, distribution).
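To make the progressive-discovery idea concrete, here is a minimal sketch (hypothetical `acme` CLI, not the actual product) of how a sub-command tree keeps the up-front context tiny: the top-level help lists only command names, and flag details cost tokens only when the agent drills into a specific sub-command.

```python
import argparse

# Hypothetical CLI: the top-level help lists only sub-command names, so an
# agent pays for detail only when it asks for it.
parser = argparse.ArgumentParser(prog="acme", description="Acme unified API CLI")
sub = parser.add_subparsers(dest="command", metavar="<command>")

issue = sub.add_parser("issue", help="create and manage issues")
issue.add_argument("--title", required=True, help="issue title")
issue.add_argument("--body", help="issue body text")

pr = sub.add_parser("pr", help="create and manage pull requests")
pr.add_argument("--base", help="base branch")

# Top-level help: just the command names, a handful of tokens.
top_help = parser.format_help()
# Sub-command help: fetched only when the agent drills into "issue".
issue_help = issue.format_help()

print(len(top_help.split()), len(issue_help.split()))
```

Note that `--title` never appears in the top-level help, which is exactly the branching behavior the comments below describe.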


How is progressive discovery not more expensive due to the increased number of steps?

I assume because the discovery is branching. If an agent using the CLI for GitHub needs to make an issue, it can check the help message for the issue sub-command and go from there; it doesn't need to know anything about pull requests, pipelines, account configuration, etc., so it doesn't query those subcommands.

Compare this to an MCP server, where my understanding is that the entire set of tool definitions is injected into the context up front.
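A rough sketch of why that difference matters, using whitespace word count as a crude stand-in for tokens (real tokenizers differ, and the schema here is a made-up example, not any real server's):

```python
import json

# One made-up MCP-style tool schema; an MCP client injects one of these
# per tool before the first user message.
tool_schema = {
    "name": "create_issue",
    "description": "Create an issue in a repository",
    "inputSchema": {
        "type": "object",
        "properties": {
            "repo": {"type": "string", "description": "owner/name of the repository"},
            "title": {"type": "string", "description": "issue title"},
            "body": {"type": "string", "description": "issue body text"},
        },
        "required": ["repo", "title"],
    },
}

# MCP-style: full schemas for all 50 tools in context up front.
mcp_context = "\n".join(json.dumps(tool_schema) for _ in range(50))
# CLI-style: one help line per sub-command; details fetched on demand.
cli_context = "\n".join(
    "create_issue   Create an issue in a repository" for _ in range(50)
)

print(len(mcp_context.split()), len(cli_context.split()))
```

The gap widens further because the CLI help line is all an agent sees until it actually needs a given command.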


In short: JSON. Plain prose or markdown is way more token efficient than JSON. I think that responding in JSON was always a mistake in the spec; it should have been free-form text (which could then be JSON if required).
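A quick illustration of the JSON overhead, with character count as a crude proxy for tokens (punctuation-heavy JSON tends to tokenize worse too; the payload is an invented example):

```python
import json

# The same answer expressed two ways.
as_json = json.dumps({
    "status": "success",
    "results": [
        {"name": "alpha", "stars": 120},
        {"name": "beta", "stars": 45},
    ],
})

as_prose = "Found 2 repos: alpha (120 stars), beta (45 stars)."

print(len(as_json), len(as_prose))
```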

It depends on what your "currency" is: inference cost vs. models getting dumber/slower with a fuller context.

Most APIs were designed for human developers, not autonomous agents. As LLMs start selecting endpoints and generating arguments directly from your schema, ambiguity and weak error semantics become production issues. This post outlines practical API design patterns that make APIs more reliable for agent-driven workflows.
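One of the patterns in question, sketched with hypothetical payloads: an opaque error forces the agent to guess, while an actionable error tells it exactly which field to fix and how, so it can repair its own call without a second round trip.

```python
# Hypothetical error payloads, for illustration only.
opaque = {"error": "Bad Request"}

actionable = {
    "error": "validation_failed",
    "message": "Field 'due_date' must be an ISO 8601 date (e.g. 2024-05-01).",
    "field": "due_date",
    "retryable": True,
}

# The actionable payload names the offending field and the expected format;
# an agent can regenerate just that argument and retry.
print(actionable["field"], actionable["retryable"])
```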


Analyze codebases using AI - generate architectural overviews, documentation, explanations, bug reports and more

Would love to hear your thoughts, feedback, and ideas for improvement!


We’ve overhauled our SDKs with Speakeasy, leaving the limitations of our old OpenAPI generator behind. The new versions deliver major upgrades in usability, error handling, and performance.


Great use case. Is the demo broken?



Hehe, fair point! Most of those products have startup programs so they don't break the bank.

In case you want to go the open-source route you have this handy overview https://www.btw.so/open-source-alternatives


Thanks for the feedback!

I agree some categories are indeed thin, which is one of the reasons we added the submission flow, so potentially valuable tools can get listed.


Thanks for sharing. RAFT looks great


Good question! To transform the OpenAPI spec to a Postman collection, we're using the handy openapi-to-postman package from Postman.

The test automation is where the real magic happens.
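For anyone curious what that transformation boils down to: the actual conversion uses Postman's openapi-to-postman Node package, but here's an illustrative Python sketch (not that package) of the core mapping from OpenAPI paths to Postman collection items, for a minimal made-up spec.

```python
# Minimal made-up OpenAPI spec for illustration.
openapi = {
    "info": {"title": "Demo API"},
    "paths": {
        "/users": {"get": {"summary": "List users"}},
        "/users/{id}": {"get": {"summary": "Get a user"}},
    },
}

def to_postman(spec):
    """Map each (path, method) operation to a Postman-style request item."""
    items = []
    for path, ops in spec["paths"].items():
        for method, op in ops.items():
            items.append({
                "name": op.get("summary", f"{method.upper()} {path}"),
                "request": {
                    "method": method.upper(),
                    "url": {"raw": "{{baseUrl}}" + path},
                },
            })
    return {"info": {"name": spec["info"]["title"]}, "item": items}

collection = to_postman(openapi)
print([i["name"] for i in collection["item"]])
```

The real package handles far more (parameters, request bodies, auth, examples); this only shows the basic shape of the output collection.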

