Wow, I thought 300 km/h was some kind of physical limit. I mean, every high-speed train in the world used to max out at 300.
Now it feels like it was just a lack of competition. Maybe now other countries will start producing lines and trains capable of 400 km/h, and hopefully it's not a China-only thing going forward.
There is show and there is reality: the French TGV achieved 574.8 km/h in 2007 for show, but that was under specific conditions, not real-world ones.
While it is technically proven that 400+ km/h on rail is possible, it's not practical: maintenance, wear, noise, curves, junctions, and overall cost are among the many considerations that are probably less important for Chinese railways right now, which need some "show".
You should update your data; in 2013, China's high-speed rail reached 605 km/h on experimental lines. The CR450 is scheduled to enter commercial service in 2026.
Like pretty much everything else, it's an optimization problem rather than a physical limit.
So running a train at 350 km/h is more expensive than 300 km/h, in both per-distance and per-unit-time terms. But if you can run more services that way, then sufficient demand might make it economical. Also, if it's too slow, people may choose flying instead.
Maglev can go even faster, but it has never really been made economical. It's much more complicated and expensive.
It's a bit like how commercial planes have actually gotten slower. 747s used to fly closer to Mach 0.9; now most commercial planes fly at around Mach 0.8. There are physical problems with flying between Mach 0.8 and 1.2, but sometimes that doesn't matter, so the best private planes top out at about Mach 0.93. Even then, they rarely fly that fast.
300 km/h is the limit because aerodynamics make that about the best compromise on the efficiency curve. Higher speeds are completely possible, but airplanes, running with much less atmospheric drag, start to become the better option.
Of course, the above is all about compromise, and you can emphasize whatever numbers you want to get different results.
Edit: it is often a good idea to have everything capable of faster speeds, say 350 km/h. You don't normally want to use those speeds, but if a train gets delayed (as happens), you can use that extra speed to make up time. Just don't let this become a normal thing.
The losses from weight are linear with speed; at high speed they are completely dwarfed by the losses from pushing air out of the way, which are quadratic with speed.
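A rough numeric sketch of that trade-off, using the standard quadratic resistance form with purely made-up coefficients (these are not real train data):

```python
# Train resistance sketch: F(v) = a + b*v + c*v^2, where the constant
# and linear terms stand in for weight/mechanical losses and the v^2
# term for aerodynamic drag. All coefficients invented for illustration.

def resistive_force(v_ms: float, a: float = 5000.0, b: float = 100.0,
                    c: float = 8.0) -> float:
    """Total resistive force in newtons at speed v_ms (m/s)."""
    return a + b * v_ms + c * v_ms ** 2

for kph in (200, 300, 400):
    v = kph / 3.6                       # km/h -> m/s
    aero = 8.0 * v ** 2                 # aerodynamic part of the force
    share = aero / resistive_force(v)
    print(f"{kph} km/h: aero drag is {share:.0%} of total resistance")
```

Even with invented numbers, the shape of the curve is the point: the aerodynamic share keeps climbing with speed, which is why the cost of each extra 50 km/h grows so quickly.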
The wings on race cars are pointed down: they add downforce (effectively more weight) to keep the car on the ground, at the expense of more drag, which they overcome with a bigger engine (and more fuel use).
We train the model with `explanations`. Most training asks the model to predict the next token or group of tokens. Our training says, predict the next group of tokens (causal diffusion), but also these tokens should be about {sports/art/coding/etc}. So in addition to token supervision, the model gets concept level supervision. The model is forced to more quickly learn these high level concepts.
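A toy numeric sketch of that dual supervision (all numbers and shapes invented for illustration; a real implementation would use a framework like PyTorch over batches of logits):

```python
import math

def cross_entropy(logits: list[float], target: int) -> float:
    """Negative log-probability of `target` under softmax(logits)."""
    z = max(logits)
    log_sum = z + math.log(sum(math.exp(l - z) for l in logits))
    return log_sum - logits[target]

token_logits = [2.0, 0.5, -1.0]   # next-token head over a toy 3-word vocab
concept_logits = [0.1, 1.5]       # concept head, e.g. {sports, coding}

token_loss = cross_entropy(token_logits, target=0)
concept_loss = cross_entropy(concept_logits, target=1)  # label: coding
# 0.5 is a made-up weight balancing the concept-level supervision.
total_loss = token_loss + 0.5 * concept_loss
print(f"token={token_loss:.3f} concept={concept_loss:.3f} total={total_loss:.3f}")
```

The point is just that the same forward pass gets two gradients: one from predicting tokens, one from predicting the high-level concept label.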
It's weird to see the expectation that the result should be perfect.
All said and done, that its even possible is remarkable. Maybe these all go into training the next Opus or Sonnet and we start getting models that can create efficient compilers from scratch. That would be something!
"It's like if a squirrel started playing chess and instead of "holy shit this squirrel can play chess!" most people responded with "But his elo rating sucks""
It's more like "We were promised, over and over again, that the squirrel would be an autonomous grandmaster. We spent insane amounts of money, labour, and opportunity costs of human progress on this. Now, here's a very expensive squirrel that still needs guidance from a human grandmaster, and most of its moves are just replications of existing games. Oh, it also can't move the pieces by itself, so it depends on a Piece Mover library."
Even a squirrel that needs guidance from a human grandmaster, is heavily inspired by existing games, and can use a Piece Mover library is incredible. Five years ago the squirrel was just a squirrel. Then it was able to make legal moves. Now it can play a whole game from start to finish, with help. That is incredible.
Any way you slice it: LLMs provide real utility today, right now. Even yesterday, before Opus/Codex were updated. So the money was not all for naught. It seems very plausible given the progress made so far that this new industry will continue to deliver significant productivity gains.
If you want to worry about something, let's worry about what happens to humanity when the world we've become accustomed to is yanked out from underneath us in a span of 10-20 years.
For reference, I use LLMs daily for coding. I do think they are useful.
I am speaking about corporations and sales tactics, because this VERY experiment was done by exactly such a corporation. How about you think about how "this whole thing works", and apply it to their post? What did they not write? How many worse experiments did they not post about to not jeopardize investments?
I don't find this impressive, because it doesn't do anything I'd want, anything I'd need, anything the world needs, and it doesn't do anything new compared to my personal experience. Which, just to reiterate, is that LLMs are useful, just nowhere close to as world-shattering/ending as the CEOs are selling them. Acknowledging that has nothing to do with being a Luddite.
To be a bit pedantic, I'm not accusing you of being a Luddite. That would mean that you were fundamentally opposed to a new technology that's obviously more useful.
Instead, in my opinion you are not giving enough grace to what is being demonstrated today.
This is my analogy: you're seeing electrical demonstrations in front of your very eyes, but because the charlatans who are funding the research haven't quite figured out how to harness it, you're dismissing the wonder. "That's all well and good, but my beeswax candles and gas lamps light my apartment just fine."
It is very impressive indeed, but impressiveness is not the same as usefulness.
If important further features can't get implemented anymore, the usefulness is pretty limited.
And usefulness further needs to be weighed against cost.
This is a really questionable outcome. So you'll have your own custom OS riddled with holes that AI won't be capable of fixing, because the context and complexity have become so high that any small bug fix would cost thousands of dollars in tokens.
Is this how the tech field ends? Overengineered, brittle, black-box monstrosities that nobody understands, because the important thing for the business was "it does A, B, and C" and it doesn't matter how.
But the Squirrel is only playing chess because someone stuffed the pieces with food and it has learned that the only way to release it is by moving them around in some weird patterns.
But people have been telling us for years that the squirrel was going to improve at chess at an exponential rate and take over the world through sheer chess-mastery.
>It's weird to see the expectation that the result should be perfect.
Given that they spent $20k on it and it's basically just advertising targeted at convincing greedy execs to fire as many of us as they can, yeah it should be fucking perfect.
A symptom of the increasing backlash against generative AI (both in creative industries and in coding) is that any flaw in the resulting product is taken as grounds to call it AI slop, even if it's very explicitly upfront that it's an experimental demo/proof of concept and not the NEXT BIG THING being hyped by influencers. That nuance is dead even outside of social media.
AI companies set that expectation when their CEOs ran around telling anyone who would listen that their product is a generational paradigm shift that will completely restructure both labor markets and human cognition itself. There is no nuance in their own PR, so why should they benefit from any when their product can't meet those expectations?
Because it leads to poor and nonconstructive discourse that doesn't educate anyone about the implications of the tech, which is expected on social media but has annoyingly leaked to Hacker News.
There's been more than enough drive-by comments from new accounts/green names even in this HN submission alone.
It cannot be overstated how absurd the marketing campaign for AI was. OpenAI and Anthropic have convinced half the world that AI is going to become a literal god. They deserve to eat a lot of shit for those outright lies.
Maybe the general population will be willing to have a more constructive discussion about this tech once the trillion-dollar companies stop pillaging everything they see in front of them and cease acting like sociopaths whose only objectives seem to be concentrating power, sowing division, and harvesting wealth.
When we were trying to build our own agents, we put quite a bit of effort into evals, which was useful. But after switching over to using coding agents, we never did the same. It feels like building an eval set will be an important part of what engineering orgs do going forward.
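A minimal sketch of what such an eval set could look like: each task is paired with a programmatic checker, and a pass rate comes out. The `agent` function here is a hypothetical stub standing in for a real coding-agent invocation.

```python
# Eval set: (task prompt, checker) pairs. Checkers validate the agent's
# raw string output programmatically.
EVALS = [
    ("reverse the string 'abc'", lambda out: out == "cba"),
    ("uppercase the string 'hi'", lambda out: out == "HI"),
    ("compute 2 + 2", lambda out: out.strip() == "4"),
]

def agent(prompt: str) -> str:
    # Stub with canned answers so this sketch runs; in practice this
    # would call out to your actual coding agent.
    canned = {
        "reverse the string 'abc'": "cba",
        "uppercase the string 'hi'": "HI",
    }
    return canned.get(prompt, "")

def run_evals() -> tuple[int, int]:
    passed = sum(1 for prompt, check in EVALS if check(agent(prompt)))
    return passed, len(EVALS)

print(run_evals())  # the stub above passes 2 of 3 tasks
```

The value isn't the harness itself, it's that the eval set encodes what your org actually means by "the agent did the task".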
Big fan of FastAPI, but I think SQLModel leads to the wrong mental model: that the DB model and the API schema are somehow the same thing.
Therefore I insist on using SQLAlchemy for DB models and Pydantic for API schemas, as a mental boundary.