AFAIK what they do is calculate a hash of the true thinking trace, save it in a database, and only send those hashes back to you (try to man-in-the-middle Claude Code and you'll see them). Then when you send back your session's history you include those hashes, they look them up in their database, replace them with the real thinking traces, and hand that off to the LLM to continue generation. (All SOTA LLMs nowadays retain reasoning content from previous turns, including Claude.)
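A minimal sketch of what that round-trip could look like (function names and message shape are my guesses, not Anthropic's actual implementation):

```python
import hashlib

# Hypothetical server-side stash: hash -> real thinking trace
trace_store: dict[str, str] = {}

def redact_thinking(trace: str) -> str:
    """Store the real trace server-side and return only its hash."""
    digest = hashlib.sha256(trace.encode()).hexdigest()
    trace_store[digest] = trace
    return digest

def rehydrate(history: list[dict]) -> list[dict]:
    """On the next turn, swap hashes back for the real traces before
    handing the conversation to the model."""
    return [
        {**msg, "thinking": trace_store.get(msg["thinking"], msg["thinking"])}
        if msg.get("thinking") else msg
        for msg in history
    ]
```

The client only ever sees the opaque digest, but the server can still reconstruct the full reasoning context when the session continues.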
So we are paying the price for the infra needed to protect their asset, which was trained on data derived from the work of others while ignoring that same principle? I need this to make sense.
I see. If those are just hashes and not encrypted content, I can't see how they can resume old sessions properly. IIRC they have a 30-day retention policy, and surely the thinking traces must count as data. Wonder how this works with the zero-retention enterprise plans...
> I wonder if there is a more general solution that can make models spend more compute on making important choices, while making generation of the "obvious" tokens cheaper and faster.
I think speculative decoding counts as a (perhaps crude) way of implementing this?
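A toy illustration of the idea, using greedy agreement only (real implementations sample and use a probabilistic acceptance/rejection rule, so this is a simplification):

```python
def speculative_decode(target, draft, prompt, k=4, max_len=12):
    """Cheap draft model proposes k tokens; expensive target model verifies
    them and keeps the longest agreeing prefix, so "obvious" tokens cost
    roughly one target pass per k tokens instead of one per token."""
    out = list(prompt)
    while len(out) < max_len:
        # Draft proposes k tokens autoregressively (cheap).
        proposal = []
        for _ in range(k):
            proposal.append(draft(out + proposal))
        # Target verifies: accept the longest prefix it agrees with.
        accepted = []
        for tok in proposal:
            if target(out + accepted) == tok:
                accepted.append(tok)
            else:
                break
        # Target always contributes one token of its own after the prefix.
        accepted.append(target(out + accepted))
        out += accepted
    return out[:max_len]
```

When draft and target mostly agree, each loop iteration emits up to k+1 tokens for a single (batched) target verification; when they disagree, it degrades to ordinary one-token-at-a-time decoding.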
The tools were mostly already known, no? (I wish they had a "present" tool which allowed the model to copy-paste from files/context/etc., showing the user some content without forcing it through the model)
Yeah, in fact one thing Claude is freaking great at is decompilation.
If you can download it client side, you can likely place a copy in a folder and ask Claude:
'decompile the app in this folder to answer further questions on how it works. As an example first question, explain what happens when a user does X'.
I do this with obscure video games where I want a guide on how some mechanics work. E.g.
https://pastes.io/jagged-all-69136 as a result of a session.
It can ruin some games, but despite the possibility of hallucinations I find it waaay more reliable than random internet answers.
Works for apps too. Obfuscation doesn’t seem to stop it.
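For anyone who wants to try this, the rough shape of the workflow (paths are examples, and I'm assuming the Claude Code CLI, where `-p` runs a single non-interactive prompt):

```shell
# Stage a copy of the downloaded binary in its own folder
APP=/path/to/app   # wherever the downloaded client/game binary lives
mkdir -p "$HOME/decompile-work"
if [ -f "$APP" ]; then cp "$APP" "$HOME/decompile-work/"; fi
cd "$HOME/decompile-work"

# Ask Claude Code to work from the binary in the current folder
if command -v claude >/dev/null; then
  claude -p "Decompile the app in this folder and explain what happens when a user does X"
fi
```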
> This is a production-grade agentic system that happens to live in your terminal.
You read the code?
> The ink/ directory — roughly 50 files — is not the popular npm ink package. Anthropic built their own React-based terminal rendering engine from scratch.
To be fair, they display it reasonably prominently on GitHub when you are logged in. Given that, I feel the post title falls under the clickbait category. I was fully aware of the Copilot opt-out change, but still clicked due to the phrasing of the title.
It wasn't about keeping up. It was 100% about Google putting billions into advertising and abusing their dominance. Besides legit stuff like paying millions (or more likely billions) for billboards, spots on TV/radio/etc., there were monopoly "ads" on the google.com, gmail.com, and youtube.com homepages. And of course the classic of blocking features based on user agent alone, lying to people that they needed to use Chrome to access a product or feature. They just needed to manipulate the masses, and now almost everyone uses a browser from an advertising company and they can keep pulling the rug.