Hacker Timesnew | past | comments | ask | show | jobs | submit | whinvik's commentslogin

Yeah I have always struggled to figure out why I would use SQLModel.

Big fan of FastAPI but I think SQLModel leads to the wrong mental model that somehow db model and api schema are the same.

Therefore I insist on using SQLAlchemy for db models and pydantic for api schemas as a mental boundary.


This is my current position as well.

I think we are going through the same cycle of http leading to https, the rise of oauth and oidc. Its just way faster now.


Wow I though 300 Kph was some kind of physical limit. I mean every high speed train in the world used to max out at 300.

Now it feels like it was just lack of competition. Maybe now other countries will start producing lines and trains capable of 400 Kph and hopefully its not a China only thing going forward.


There is show and there is reality: French TGV achieved 574,8 km/h in 2007 for show, but it was under specific conditions, not in real world conditions.

While it is technically proven that it is possible to do 400+km/h on rail, it's not practical: maintenance, wear, noise, turns, embranchement, and overall cost, ... many considerations that are probably less important for Chinese railway now, which needs some "show".


You should update your data; in 2013, China's high-speed rail reached 605 km/h on experimental lines. The CR450 is scheduled to enter commercial service in 2026.

Sorry if I wasn't clear but was not talking about demo runs. There are plenty of those. Was more meaning operational speeds having a limit.

Like pretty much everything else, it's an optimization problem rather than a physical limit.

So running a train at 350kph is more expensive than 300kph, both in per-distance and pre-unit time terms. But if you can run more services that way then sufficient demand might make it economical. Also, if it's too slow, people may choose flying instead.

Maglev can go even faster but those have never been made economical, really. It's much more complicated and expensive.

It's a bit like how commercial planes have actually gotten slower. 747s used to fly closer to Mach 0.9. Now most commercial planes fly at around Mach 0.8. There are physical problems flying between Mach 0.8 and 1.2 but sometimes that doesn't matter so the best private planes top out at about Mach 0.93. Even then they rarely fly that fast.


In the case of private jets, the Mach figure is mostly a proxy for other performance metrics.

Flying an aircraft at max cruise can save a lot of time on longer flights, but it's also substantially more expensive.


300kph is the limit because aerodynamics make that about the best compromise on the effeciency cury. higher speeds are completely possibly - but air planes running with much less atmospheric drag start to become the better option.

of course the above is all about compromise and you can emphasize whatever numbers you want to get different results.

Edit: it is often a good idea to have everything capable of faster speeds - say 350km/h. You don't normally want to use those speeds, but if a train gets delayed (as happens) you can use that extra speed to make up time. Just don't let this become a normal thing.


What about if they added “wings” to trains? That could generate some lift reducing the effective weight is my shower thought.

No idea how much the wings would add versus the lift help.


The friction is almost entirely from drag/air resistance, not from the resistance of the rails.

the losses from weight are linear with speed - at high speed completely dwarfed by losses from pushing air out of the way which is quadradic with speed.

the wings on race cars are poited down - they increase weight to keep the car on the ground at the expense of more drag, which they overcome with a bigger engine (and more fuel use)


The French TGV managed to reach 574km/h, so 300km/h is not an hard limit. https://www.youtube.com/watch?v=EOdATLzRGHc

Looks very interesting. Can you comment on why you think this model can give comparable performance with less training data?


We train the model with `explanations`. Most training asks the model to predict the next token or group of tokens. Our training says, predict the next group of tokens (causal diffusion), but also these tokens should be about {sports/art/coding/etc}. So in addition to token supervision, the model gets concept level supervision. The model is forced to more quickly learn these high level concepts.


Another vote for handy. I am using with Parakeet and its pretty good.

Now its mostly about models getting better.


Thanks for the Handy info. (New to me)

Haven’t used Parakeet, but noting it too.

Commenting here so I come back to it.


It's weird to see the expectation that the result should be perfect.

All said and done, that its even possible is remarkable. Maybe these all go into training the next Opus or Sonnet and we start getting models that can create efficient compilers from scratch. That would be something!


This is firmly where I am. "The wonder is not how well the dog dances, it is that it dances at all."


"It's like if a squirrel started playing chess and instead of "holy shit this squirrel can play chess!" most people responded with "But his elo rating sucks""


It's more like "We were promised, over and over again, that the squirrel would be autonomous grand master level. We spent insane amounts of money, labour, and opportunity costs of human progress on this. Now, here's a very expensive squirrel, that still needs guidance from a human grandmaster, and most of it's moves are just replications of existing games. Oh, it also can't move the pieces by itself, so it depends on Piece Mover library."


even a squirrel that needs guidance from a human grandmaster, is heavily inspired by existing games, and who can use Piece Mover library is incredible. 5 years ago the squirrel was just a squirrel. then it was able to make legal moves. now it can play a whole game from start to finish, with help. that is incredible


I think the post you're responding to would agree, but is trying to make the argument that it isn't worth the cost:

> spent insane amounts of money, labour, and opportunity costs of human progress on this

That said, I would 100% approve of certain people pouring all their energy into AI to rather focus on teaching squirrels chess!


Any way you slice it: LLMs provide real utility today, right now. Even yesterday, before Opus/Codex were updated. So the money was not all for naught. It seems very plausible given the progress made so far that this new industry will continue to deliver significant productivity gains.

If you want to worry about something, let's worry about what happens to humanity when the world we've become accustomed to is yanked out from underneath us in a span of 10-20 years.


My opinion: you are critiquing electricity because the candles are still better / more affordable / more honestly made.

You seem to be mad that companies are in the business of selling us things. It's the way this whole thing works.

If you don't think this is impressive: stop everything you're doing and go make a c compiler that can build the Linux kernel.


For reference, I use LLMs daily for coding. I do think they are useful.

I am speaking about corporations and sales tactics, because this VERY experiment was done by exactly such a corporation. How about you think about how "this whole thing works", and apply it to their post? What did they not write? How many worse experiments did they not post about to not jeopardize investments?

I don't find this impressive, because it doesn't do anything I'd want, anything I'd need, anything the world needs, and it doesn't do anything new compared to my personal experience. Which, just to reiterate, is that LLMs are useful, just not nowhere close to as world shattering/ending as the CEOs are selling it. Acknowledging that has nothing to do with being a luddite.


To be a bit pedantic, I'm not accusing you of being a Luddite. That would mean that you were fundamentally opposed to a new technology that's obviously more useful.

Instead, in my opinion you are not giving enough grace to what is being demonstrated today.

This is my analogy: you're seeing electrical demonstrations in front of your very eyes, but because the charlatans who are funding the research haven't quite figured out how to harness it, you're dismissing the wonder. "That's all well and good, but my beeswax candles and gas lamps light my apartment just fine."


Until the juice is worth the squeeze, the beeswax candles and gas lamps are likely more than fine.


It is very impressive indeed, but impressiveness is not the same as usefulness. If important further features can’t get implemented anymore The usefulness is pretty limited. And usefulness further needs to be weighed against cost.


I'm not trying to get coached in chess by the squirrel for 200 per month though.


"The squirrel can do my job and more? It can do five years of my work in a month? For only $20k? Pssh, but I bet it copied someone's homework."

Developer salaries are about to tank.

This is the end of the line. People are just in denial.

Soon companies will hire the squirrel instead of you. And the squirrel will transform into enormous infrastructure we can't afford ourselves.

"One mega squirrel to implement your own operating system overnight. Just $100k."

It's going to be out of the reach of humans / ICs soon. Purely industrial. And all innovation will accrue to the capital holders.

Open weights models are our only hope of keeping a foot in the door.


This is really questionable outcome. So you'll have your own custom OS riddled with holes that AI won't be capable of fixing because the context and complexity became so high that running any small bug fix would cost thousands of dollars in tokens.

Is this how tech field ends? Overengineered brittle black-box monstrosities that nobody understands because important thing for business was "it does A, B, and C" and it doesn't matter how.


IF you want the code to be reviewed and maintained you still need a developer. A developer can craft a better spec.


But the Squirrel is only playing chess because someone stuffed the pieces with food and it has learned that the only way to release it is by moving them around in some weird patterns.


But people have been telling us for years that the squirrel was going to improve at chess at an exponential rate and take over the world through sheer chess-mastery.


I was also startled when I learned about the human ancestor who was the first to see a mirror.

The brilliance of AI is that it copies(mirrors) imperfectly and you can only look at part_of_the_copy(inference) at a time.


>It's weird to see the expectation that the result should be perfect.

Given that they spent $20k on it and it's basically just advertising targeted at convincing greedy execs to fire as many of us as they can, yeah it should be fucking perfect.


A symptom of the increasing backlash against generative AI (both in creative industries and in coding) is that any flaw in the resulting product is predicate to call it AI slop, even if it's very explicitly upfront that it's an experimental demo/proof of concept and not the NEXT BIG THING being hyped by influencers. That nuance is dead even outside of social media.


AI companies set that expectation when their CEOs ran around telling anyone who would listen that their product is a generational paradigm shift that will completely restructure both labor markets and human cognition itself. There is no nuance in their own PR, so why should they benefit from any when their product can't meet those expectations?


Because it leads to poor and nonconstructive discourse that doesn't educate anyone about the implications of the tech, which is expected on social media but has annoyingly leaked to Hacker News.

There's been more than enough drive-by comments from new accounts/green names even in this HN submission alone.


It does lead to poor non-constructive discourse. That's why we keep calling those CEOs to task on it. Why are you not?


The CEOs aren't here in the comments.


Which is why we ought to always bring up their BS every time people try to pretend it didn't happen.

The promises made are ABSOLUTELY relevant to how promising or not these experiments are.


I bet you get upset when you buy a new iPhone and don't love it, because Tim Cook said on the ad that they think you're going to love it.


It cannot be overstated how absurd the marketing campaign for AI was. OpenAI and Anthropic have convinced half the world that AI is going to become a literal god. They deserve to eat a lot of shit for those outright lies.


It's not just social media, it's IRL too.

Maybe the general population will be willing to have a more constructive discussions about this tech once the trillion dollar companies stop pillaging everything they see in front of them and cease acting like sociopaths whose only objectives seem to be concentrating power, generating dissidence and harvesting wealth.


Came here to ask the same question!


Interesting. That's exactly what I feel about most subreddits. Go to r/Python for example.

It's an endless stream of basic tool/library questions. Put me off reddit quite a bit.


Curious if anyone has experimented with dotenvx - https://dotenvx.com/


What would stop the agent from writing+running its own script wrapped in `dotenvx run` to access the secrets?


One can put `dotenvx` into the deny list for the agent but there will definitely be ways around of it.


When we were trying to build our own agents we put quite a bit of effort on evals which was useful.

But switching over to using coding agents we never did the same. Feels like building an eval set will be an important part of what engg orgs do going forward.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: