Hacker News | new | past | comments | ask | show | jobs | submit | cadamsdotcom's comments

Those with fond memories of a childhood spent playing games, typing code from magazines, and having low-fidelity conversations with faraway like-minded folks need to know how lucky they were. These days that stuff may still be there, but for kids it pales next to addictive social media.

Why wouldn’t you just give the agent a shell (and by implication a sandbox)?

Seems like unnecessarily constraining it.


Most of the time you should. But it depends on what you're wrapping. Exa is a good example of where MCP makes sense: it's not just one API call, it's four different tools (web search, code search, crawling, advanced search) plus embedded skills for chaining them. One MCP connection and the agent discovers all of that at runtime. Doing that with a CLI means building a multi-command script and hoping the agent figures out the orchestration.

On the other hand, something like context7 is just `npx ctx7 resolve <lib>` then `npx ctx7 docs <id>` — two stateless shell calls, done. No server to maintain, no protocol overhead. CLI is the right tool there.


Why not put all of that into a skill file? The context overhead from an MCP connection is significantly higher.

You're right, actually. Exa's MCP server is stateless, just a REST wrapper. A skill + CLI would do the same job with way less context cost. Someone already built that (https://github.com/tobalsan/exa).

Gotta get there somehow.

Congratulations on learning to prompt websites into existence! It is a miracle we can do this.

It’s also great that you acknowledge the hard work of the humans that got Go into the great state it’s in.

Your blog post needs heavy edits to let your voice speak through over the AI you started from: for example, your headings say “The X.”, “The Y.” Please edit away the AI-isms before publishing. Leaving them in shows a lack of care, and I’m sure that is not the impression you wanted.


Ask Claude to do that deterministic search & replace. Best of both worlds, and now you’re prompt engineering :)
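As a sketch of what that deterministic pass could look like (the patterns here are my own illustrations, not a standard list):

```shell
# Hypothetical cleanup pass over a draft post: swap em-dashes for a
# plain " - " and strip one stock AI filler phrase. Extend the -e
# expressions with whatever tics you want gone.
cleanup_aiisms() {
  sed -e 's/—/ - /g' \
      -e 's/It is worth noting that //g' "$1"
}
```

Run it once, diff the output, and keep it around for every future post.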

Very cool!

It’d be very cool to have a “remove signs of AI writing” feature (based on https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing) - wishing you great success reinventing this space for the new era!


Thank you! I have considered adding some settings like this, but don't want to encourage the use of this software for cheating.

For now, the catch-all solution that anyone can attempt to use is the custom prompt, under account settings. You can instruct it never to use em dashes, to avoid certain clichés, or simple things like that. But you have to write it yourself for now; there are no convenient presets or anything like that.

If you have ideas please email me art@revise.io


Why make it your business what people use it for?

We don’t do that with hammers or guns. In the case of the latter, manufacturers outsource policing the thing’s use to society, and everyone understands that.


My line is I don’t want to market it as a tool that’s obviously for cheating/deception by building features for that. But beyond that, I don’t want to make it my business.

This is excellent; silly laws on the books should exclude countries from access to things.

Unfortunately it’s not enough because there’s also a need to work to get the laws repealed AND stop the endless attempts to bring them back.


A thermostat’s capabilities and what’s expected of it won’t change even if the tech gets better, though, and that’s the key difference.

> My job went from connecting these two things being the hard and rewarding part, to just mopping up how poorly they’ve been connected.

That’s only half of the transition.

The other half - and when you know you’ve made it through the “AI sux” phase - is when you learn to automate the mopping up. Give the agent the info it needs to know whether it did good work - and if it didn’t, give it enough information to know what to fix. Trust that it wants to fix those things. Automate how that info is provided (using code!) and suddenly you are out of the loop. The amount of code needed is surprisingly small, and your agent can write it! Hook a few hundred lines of script up to your harness at key moments and you will never see dumb AI mistakes again - it fixed them before presenting the work to you, because your script told it about the mistakes while you were off doing something else.

Think of it like linting but far more advanced - your script can walk the code AST and assess anything, or use regex; your agent will make that call when you ask for the script. If the script has an exit code of 2, stderr is shown to the agent! So you (via your script) can print to stderr what the agent did wrong - what line, what file, what mistake.
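A minimal sketch of such a check, assuming a harness that feeds stderr back to the agent on exit code 2 (as described above) - the "no debug prints" rule is just an illustrative example:

```shell
# Hypothetical quality-gate check: report any leftover debug prints
# in the given files on stderr and signal the harness with exit
# status 2, so the agent sees exactly what to fix and where.
check_debug_prints() {
  found=0
  for f in "$@"; do
    # grep -n prints file line numbers alongside each offending line
    hits=$(grep -n 'console\.log' "$f" 2>/dev/null || true)
    if [ -n "$hits" ]; then
      printf 'Debug print left in %s:\n%s\n' "$f" "$hits" >&2
      found=1
    fi
  done
  # Status 2 tells the harness to show the stderr above to the agent;
  # status 0 means the work passed the gate.
  [ "$found" -eq 1 ] && return 2
  return 0
}
```

Hook it up to run on the files the agent just touched and the feedback loop closes without you.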

It’s what I do every day and it works (200k LOC codebase, 99.5% AI-coded) - there’s info and ideas here: https://codeleash.dev/docs/code-quality-checks

This is just another technique to engineer quality outcomes; you’re just working from a different starting point.


This.

Might be worth updating the link.

