>AI tends to accept conventional wisdom. Because of this, it struggles with genuine critical thinking and cannot independently advance the state of the art.
All AI works on patterns; it's not very different from playing chess. Chess engines use a similar method: learn patterns, then apply them.
It's true that the training data is what creates the patterns, so you don't get any new "pattern" that isn't already in the data.
But the interesting thing is when a pattern is applied to the external world -> you get some effect
when the pattern then works on that effect -> it creates some other effect
This is also how you came into existence, through genetic recombination.
Even though your ancestral DNA is copied forward, the copying is lossy and the effect of the environment can be profound. You probably don't look very different from your grandparents, but your grandchildren may look very different from your grandparents.
At some point you are so many iterations removed from the "original" pattern that it's indistinguishable from a "new thing".
In simple terms: combinatorial explosion + environment interaction.
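To make the "combinatorial explosion + environment interaction" point concrete, here's a toy sketch (purely illustrative, not a model of any real training process — the `recombine` and `environment` functions are invented for this example): patterns recombine each generation, the environment perturbs the result, and descendants drift far from the original even though every step only copies and remixes existing material.

```python
import random

random.seed(0)

def recombine(a, b):
    """Crossover: each position of the child comes from one of the two parents."""
    return [random.choice(pair) for pair in zip(a, b)]

def environment(pattern, noise=0.1):
    """Environment interaction: occasionally perturb a position."""
    return [g + 1 if random.random() < noise else g for g in pattern]

original = [0] * 20                              # the "original" pattern
population = [original[:] for _ in range(10)]    # all copies of it at first

for generation in range(50):
    population = [
        environment(recombine(random.choice(population), random.choice(population)))
        for _ in range(10)
    ]

# distance of a typical descendant from the original pattern
drift = sum(abs(a - b) for a, b in zip(population[0], original))
print(drift)
```

No step ever invents a value from nothing, yet after enough generations the result is indistinguishable from a "new thing" relative to the starting pattern.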
There is such a mind-bogglingly huge amount of waste in IT services worldwide, particularly in the consulting and offshoring areas, that big swings, up and down, in that area don’t actually have anything to do with what works well or doesn’t. Decisions are made to offshore work or drop offshore contracts based on the latest hype cycle, not whether it is effective or worthwhile.
So while there may be lots of consultants losing their jobs, that’s not because AI tools do the work better. It’s because management thinks investors will accept the story that AI tools will do the work better and save money. Management, and investors, don’t know, can’t judge, and honestly don’t actually care if it’s better or worse. And they run things so poorly it would be impossible to tell anyway.
It just means Cursor is sharing data with a Chinese LLM provider, which enables them to improve their LLM by training on the inputs and outputs of all the data Cursor collects.
Claude Code might be subsidized, but there are other risks.
For one, if any agent can use Claude models, that exposes them to distillation risk: data gathered from millions of such agent sessions can easily be used to train a competing model, eroding their model's edge.
Second, to improve their own coding model, you need predictable input.
If the input to the model is all over the place (each different harness adds additional entropy to the data), then it's hard to improve the model along one axis.
Cache is a money saver in computing. Their own client is probably a lot better at exploiting caches than any other agent, so they don't want to lose money and still end up with disgruntled customers saying Claude isn't working as well.
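The cache argument can be made concrete with a toy model of prompt-prefix caching (illustrative only; the `PrefixCache` class and its hit/miss accounting are invented for this sketch, not Anthropic's actual system): a client that always sends one stable harness prefix gets nearly all cache hits, while the same traffic spread across many different harness prompts mostly misses.

```python
import hashlib

class PrefixCache:
    """Toy prompt-prefix cache: a request is 'cheap' if its prefix was seen before."""
    def __init__(self):
        self.seen = set()
        self.hits = 0
        self.misses = 0

    def request(self, prefix, user_msg):
        key = hashlib.sha256(prefix.encode()).hexdigest()
        if key in self.seen:
            self.hits += 1       # prefix already processed: cheap
        else:
            self.misses += 1     # full recompute: expensive
            self.seen.add(key)

cache = PrefixCache()
# First-party client: one stable harness prompt for every request.
for i in range(100):
    cache.request("STABLE_HARNESS_PROMPT", f"task {i}")
first_party_hit_rate = cache.hits / (cache.hits + cache.misses)

cache = PrefixCache()
# Many third-party harnesses, each prepending its own scaffolding.
for i in range(100):
    cache.request(f"HARNESS_{i % 50}_PROMPT", f"task {i}")
third_party_hit_rate = cache.hits / (cache.hits + cache.misses)

print(first_party_hit_rate, third_party_hit_rate)  # 0.99 vs 0.5
```

If cache misses cost the provider real compute, a fragmented harness ecosystem is directly more expensive to serve than a single first-party client.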
And if a user can simply switch models inside an agent, what moat does Anthropic have? Claude Code will not include other companies' models, and that lets them make Claude Code more "complex" over time, so the workflows become ingrained in users' psyche to the point where using anything else is difficult and the user quickly returns to Claude Code.
They are not entitled to a moat, and their customers do not owe them one. Several companies have narrow or no moats. Dell and HP are two examples when it comes to their PC business.
This idea that companies should be allowed to lock down their products just so they can have moats is how we ended up with printer ink being more expensive than crude oil or champagne.
Companies are absolutely allowed to lock down their own products. Netflix is a great example: you don't bring your own client to Netflix.
The whining/entitlement in this thread is ridiculous. The API is always there for you to use as you desire.
If you want to use the loss leader on the other hand, you agree to abide by certain terms. But if you don't want to do that, just use the API. It's not that hard.
> Cache is a money saver in computing. Their own client is probably a lot better at exploiting caches than any other agent, so they don't want to lose money and still end up with disgruntled customers saying Claude isn't working as well.
I’d bet a reasonable amount that this is the case. They are strongly incentivized to maximize cache use when subscribers aren’t paying per token.
This is literally the first time I've heard this. What is your source? I can type the exact same query three times and though the general meaning may be the same, the actual output is unique every single time. How do you explain this if it's cached?
In this case LLMs were obviously used to dress the code up as more legitimate, adding more human or project relevant noise. It's social engineering, but you leave the tedious bits to an LLM. The sophisticated part is the obscurity in the whole process, not the code.