
It uses the Homebrew API, with its own dependency resolver and linker, to pull Homebrew's precompiled packages.

Claude Code was released for general use in May 2025. It's only March.

Also using PyPI as a benchmark is incredibly myopic. Github's 2025 Octoverse[0] is more informative. In that report, you can see a clear inflection point in total users[1] and total open source contributions[2].

The report also notes:

> In 2025, 81.5% of contributions happened in private repositories, while 63% of all repositories were public

[0]: https://github.blog/news-insights/octoverse/octoverse-a-new-...

[1]: https://github.blog/wp-content/uploads/2025/10/octoverse-202...

[2]: https://github.blog/wp-content/uploads/2025/10/octoverse-202...


> Claude Code was released for general use in May 2025. It's only March.

Detractors of AI are often accused of moving the goalposts, but I think your comment is guilty of the same. Before Claude Code, we had Cursor, Github Copilot, and more. Each of these was purportedly revolutionizing software engineering.

Further, the core claim for AI coding is that it lets you ship code 10x or 100x faster. So why do we need to wait years to see the result? Shouldn't there be an explosion in every type of software imaginable?


> Detractors of AI are often accused of moving the goalposts, but I think your comment is guilty of the same. Before Claude Code, we had Cursor, Github Copilot, and more. Each of these was purportedly revolutionizing software engineering.

What's sauce for the goose is sauce for the gander. If you make the argument that 'I don't believe in kinks or discontinuities in code release due to AI, because so many AI coding systems have come out incrementally since 2020', then OP does provide strong evidence for an AI acceleration - the smooth exponential!


Amongst people who use AI regularly, November 2025 is widely regarded as a watershed moment. Opus 4.5 was head and shoulders above anything that came before it. It marked the first time my previously AI-disliking friends begrudgingly came to accept that it may actually be useful.

You absolutely can see a difference. [0] The term of art is "Runway Incursions", and the stats definitely show our airports are working at the limits of safety.

[0]: https://www.buckycountry.com/2025/09/22/runway-close-calls-u...


That's a 7-year graph, where category A incursions change by 0.7σ, and total incursions are basically horizontal.

What statistical conclusion are you taking from it?


Category A and B incursions increased by 2.8σ. Further, it was 7 years of increases in a row. Either factor on its own would indicate a process out of statistical control.
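To put numbers on the "out of statistical control" part: the standard control-chart run rules (e.g. Nelson rule 3, six or more points in a row trending the same direction) flag exactly this pattern, independent of how many σ the change spans. A sketch with made-up counts, not the actual incursion data:

```
// Illustrative only: checks the control-chart "run" rule (six or more points
// in a row, all increasing), which a seven-year streak of increases would trip.
#include <iostream>
#include <vector>

// True if the series contains a strictly increasing run of at least `points` points.
bool increasing_run(const std::vector<double>& s, int points = 6) {
    int run = 1;
    for (size_t i = 1; i < s.size(); ++i) {
        run = (s[i] > s[i - 1]) ? run + 1 : 1;
        if (run >= points) return true;
    }
    return false;
}

int main() {
    // Hypothetical yearly incursion counts, purely for illustration.
    std::vector<double> yearly = {12, 13, 15, 16, 18, 21, 24, 27};
    std::cout << (increasing_run(yearly) ? "run rule tripped: out of control\n"
                                         : "no run-rule signal\n");
}
```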

In clinical settings and situations where probabilities really matter, it's a better fit.

I studied stats at Duke, which is a Bayesian academy. Almost every problem comes from regimes with small sample sizes. Given that Duke houses the largest academic clinical research organization globally, having a stats and biostats department with this bent is useful: samples are tiny in clinical trials compared to most big data settings.

The biggest problem with the whole Bayesian regime IMO is that, as the data gets larger, its selling point vanishes. If your data is big or is normal (mean-based statistics), a frequentist/bootstrapped CI approximates the Bayesian CI anyway.
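To make that concrete, here's a toy sketch (simulated data, nothing real) where the bootstrap percentile CI for a mean lands essentially on top of the plain normal-approximation CI once the sample is moderately large:

```
// Illustrative only: compare a frequentist normal-approximation 95% CI with a
// bootstrap percentile 95% CI on a simulated sample; for large-ish n they agree.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

double mean(const std::vector<double>& v) {
    return std::accumulate(v.begin(), v.end(), 0.0) / v.size();
}

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> pop(10.0, 2.0);

    // Simulated sample: n = 500 draws from N(10, 2^2).
    std::vector<double> sample(500);
    for (auto& x : sample) x = pop(rng);

    const double m = mean(sample);
    double var = 0.0;
    for (double x : sample) var += (x - m) * (x - m);
    var /= (sample.size() - 1);
    const double se = std::sqrt(var / sample.size());

    // Frequentist normal-approximation 95% CI for the mean.
    std::cout << "normal approx: [" << m - 1.96 * se << ", " << m + 1.96 * se << "]\n";

    // Bootstrap percentile 95% CI: resample with replacement, take the 2.5%/97.5% quantiles.
    std::uniform_int_distribution<size_t> idx(0, sample.size() - 1);
    std::vector<double> boot_means(2000);
    for (auto& bm : boot_means) {
        double s = 0.0;
        for (size_t i = 0; i < sample.size(); ++i) s += sample[idx(rng)];
        bm = s / sample.size();
    }
    std::sort(boot_means.begin(), boot_means.end());
    std::cout << "bootstrap:     [" << boot_means[50] << ", " << boot_means[1949] << "]\n";
}
```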

Furthermore, many of us work in settings where we're trying to sell toothpaste: we don't need the Bayesian guarantees that an insurer might.


I highly recommend people watch the video from the trial--specifically the officer testimonies. It's absurd this lawsuit was even fit for trial.

The bits I watched were so captivating I had to turn them off, otherwise I wouldn't be getting anything done this afternoon.

Honestly, someone could adapt it to a script and run it in a live theater.

Now I know what I'm watching tonight!


Do you have a link? There are plenty of snippets that are easy to find, is there one canonical full video?


I ended up watching https://www.youtube.com/watch?v=-ozZIWy7OWk from that playlist, which has some strange commentary, and I'm not sure if it's AI or what's going on. At one point she said Afroman called one of the police officers a "PDF", which is the first time I've heard that. It doesn't seem Afroman actually said something like that, and I don't understand at all what that's supposed to mean. It doesn't seem to be a typo either?

"PDF" is a zoomer euphemism for "pedophile".

Funnily, this started because they're trying to avoid perceived insta shadow banning.

Yes, this sort of thing has been a plague against the English language for a few years now, responsible for such ugly constructions as "to unalive".

(I understand and respect the linguists who would maintain a descriptivist view of this sort of thing, but I'm not a linguist and I'm not required to suspend my aesthetic opinions whenever an ugly fashion rears its head.)


> It's absurd this lawsuit was even fit for trial

Is it just me or did the judge seem biased towards the cops? He also dismissed Afroman's counter-suit.


Unfortunately, cops have the right to break down your door if the warrant warrants it--like having a sex dungeon.

"..raid my house and then get pissed, because the dungeon don't exist.."

Bars.

That guy is like a court jester of old unleashed onto an unsuspecting (and corrupt) 21st century steward class.


Other countries need to invest collectively in open alternatives, and AI must be considered critical infrastructure rather than a commercial venture. Building small firms to compete against behemoths will not accomplish that.

And by open I mean open weights AND open training pipelines.


It's complicated.

The obvious path is the Grok path. That is, anybody with a big pile of money can read some papers and hire some people and make a model which is at or near the frontier. Beating current models by a hair at current benchmarks is not as hard as it looks because you will be building the system to beat those benchmarks from the beginning. [1]

Six months or a year later, people will start to realize that you're not really improving or making progress, though, because that's something entirely different from beating benchmarks.

Now real advances in the long term are going to come out of smaller companies working on things like

https://en.wikipedia.org/wiki/Mamba_(deep_learning_architect... [2]

The frontier models are just too expensive to do the experimental work that will lead to advances in the science. The science is going to advance through work on little models, whereas companies like OpenAI and Anthropic are very committed to maximizing the performance of their existing systems in the short term, and it is the intense competition that will keep them in an "Innovator's Dilemma" situation where their customers will reject anything really new that doesn't perform the same. [3]

[1] ... and even if you don't cheat the model will cheat for you

[2] ... not necessarily that one in particular, but something like that

[3] Companies like Microsoft that "disrupt themselves" by ignoring their customers are afraid of an Innovator's Dilemma situation, but paradoxically are not stuck in it because they are monopolies that can force their customers to accept something they don't like.


The other consideration is that the kill switch is ultimately controlled by the US. The US government can easily commandeer Starlink or jail Musk, but other countries use Starlink at the pleasure of both Musk and the US government.

That's the part that makes allied nations nervous. If you're running military comms through Starlink and the US decides to play hardball in some trade dispute, your entire C2 network just became a bargaining chip. Ukraine showed how quickly access decisions become political. I think we'll see European and Asian allies start investing in their own LEO constellations specifically because of this - nobody wants their military dependent on another country's CEO.

Most countries would not need to make their C2 infrastructure fully dependent on Starlink, because most countries are not big enough and cannot project enough power globally to make this an actual requirement, and the few countries who can project power globally can afford multiple communications layers. But your core idea is true.

This is explicitly one reason the US marketed the F-35 so hard to their allies. In addition to giving their allies a good capability, it made their air force dependent on continuing US support, so politicians wishing to go against US positions have to be willing to sacrifice their military power to do so. This gives the US a strong lever in negotiating.


European and Asian allies would have to start by investing in low-cost launch capabilities. So far they're making approximately zero progress in that area.

The reality is that all US allies except for maybe France no longer have the capability to project power much outside their own territory without active US support. It's not only satellites. They're also missing just about everything else such as logistics, specialized aircraft, air defense, amphibious capabilities, intelligence, etc. With largely stagnant economies there's no way they can sustain the funding necessary to close those gaps unless they join together in closer alliances with each other.


Most European countries (except France and the UK) are not interested in projecting power outside of a fairly narrow geographic area (mostly the European continent and adjacent seas).

These “military starlinks” will be much smaller systems than actual Starlink. The German one plans for 100 satellites.

Source: https://www.bloomberg.com/news/articles/2026-03-07/airbus-te...


I'm betting on every single implementation costing $10B minimum

You're right that the launch cost gap is the real barrier. Europe's been talking about sovereign launch capability for years but Ariane 6 still can't compete on cost with SpaceX. I think the more likely path is that smaller nations lease capacity on someone else's constellation rather than building their own. The question is whether that actually solves the dependency problem or just moves it from one provider to another.

LEO is pretty expensive. Smaller countries might be better off with cheaper Astranis GEO satellites.

There's other interesting middle ground options, like O3b's equatorial MEO ring, that has coverage similar to GEO as far as latitudes go, but better latency.

I never understood how people use compiled languages for video games, let alone simple GUIs. Even though I'm now competent in a few, and I have LLMs at my disposal, I fall back to Electron or React Native just because it's such a pain in the ass to iterate with anything static.

Native devs: what are your go-to quality of life improvements?


Having a visual builder tool in an IDE like Delphi or Visual Basic or any of the others.

They ship with an existing library of components, you drag and drop them onto a blank canvas, move them around, live preview how they’ll change at different screen sizes, etc… then switch to the code to wire up all the event handlers etc.

All the iteration on design happens before you start compiling, let alone running.


What does compilation have to do with iteration speed? There are a lot of ways to get a feedback loop similar to what you'd get in something like React, like separating out your core gameplay loop into its own compilation unit / DLL and reloading it on any changes inside your application.


Yeah... that's way, way, way more complex than `npm run dev`


NPM is absurdly complex in comparison, it's just neatly abstracted. Maybe somebody will write a cross-platform reactive layer which can compile both natively and to the web?


If I wrap a bunch of abstractions in a `make run` command, what's the difference?


Hot reloading is about the only difference if you're doing incremental builds.

For that, some languages are blocked by runtimes that don't support it. C can do it [0] so it's not a limitation of the static/dynamic divide.

[0] https://www.slembcke.net/blog/HotLoadC/
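A bare-bones sketch of that idea, using POSIX dlopen with hypothetical names (libgame.so, game_update) and error handling mostly omitted: keep the gameplay code in a shared library, watch its mtime, and reload it when it changes.

```
// Illustrative only (link with -ldl on Linux). Real setups usually copy the
// .so to a temp path before dlopen so the compiler can overwrite the original.
#include <dlfcn.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdio>

using UpdateFn = void (*)(void* game_state);

int main() {
    void* handle = nullptr;
    UpdateFn update = nullptr;
    time_t last_mtime = 0;
    char game_state[4096] = {};  // stand-in for the real, host-owned state

    for (;;) {
        struct stat st;
        if (stat("./libgame.so", &st) == 0 && st.st_mtime != last_mtime) {
            if (handle) dlclose(handle);
            handle = dlopen("./libgame.so", RTLD_NOW);
            if (handle) {
                update = reinterpret_cast<UpdateFn>(dlsym(handle, "game_update"));
                last_mtime = st.st_mtime;
                std::printf("reloaded libgame.so\n");
            }
        }
        if (update) update(game_state);  // run one tick of the freshly loaded code
        usleep(16000);                   // ~60 fps tick
    }
}
```

State owned by the host process survives the reload; anything the module forgets to re-initialize does not, which is where the crashes mentioned elsewhere in this thread come from.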


Video games generally have various editors and that is where the major iteration happens. It's not like HTML where you type some tags and refresh. Instead you have/make editors to design your levels, UIs, characters.

Most video game teams are < 30% programmers.


> video games

Often use dynamic/scripting languages to improve iteration on gameplay code, even if a lot of the fundamental underlying code is native. And add dev-time hot reloading wherever we can so when you change a texture, it reloads ≈immediately without needing to so much as restart the level. We exile as much as we can to tables and other structured data formats which can easily be tweaked and verified by non-coders so we're not a bottleneck for the game designers and artists who want to tweak things, and make that stuff hot-reloadable if possible as well.

We also often have in-house build server farms full of testing code, because it's such a pain in the ass to iterate with anything dynamic. After all, games are huge, and sufficient testing to make sure all your uncompiled unanalyzed typecheckless code works is basically impossible - things are constantly breaking as committed during active development, and a decent amount of engineering work is frequently dedicated to such simple tasks as triaging, collecting, and assigning bugs and crash reports such that whoever broke it knows they need to fix it, as well as allowing devs and designers to work from previous "known good" commits and builds so they aren't blocked - which means internal QA helping identify what's actually "known good", hosting and distributing multiple build versions internally such that people don't have to rebuild the universe themselves (because that's several hours of build time), etc.

Some crazy people invest in hot-reloadable native code. There's all kinds of limits on what kinds of changes you can make in such a scenario, but it's entirely possible to build a toolchain where you save a .cpp file, and your build tooling automatically kicks off a rebuild of the affected module(s), triggering a hot reload of the appropriate .dll, causing your new behavior to be picked up without restarting your game process. Which probably means it'll immediately crash due to a null pointer dereference or somesuch because some new initialization code was never triggered by the hot reloading, but hey, at least it theoretically works!

And, of course, nothing is stopping you from creating isolated sandboxes/examples/test cases where you skip all the menuing, compiling unrelated modules, etc. and iterating in that faster context instead of the cumbersome monolith for most of your work.


Having a faster build step helps: I just stepped back into C recently, and I don't even want to imagine doing it without ccache and meson.


Why not ninja?


Meson uses ninja under the hood.


Not game dev related, but I program in both Go and Python, and there really is no difference in my feedback loop / iteration because Go builds are so fast and cache unchanged parts.


re, iteration: Have you encountered ImGui [0]? It's basically standard when prototyping any sort of graphical application.

re, GUIs in statically typed languages: As you might expect, folks typically use a library. See Unreal Engine, raylib, Godot, Qt, etc. Sans that, any sort of 2D graphics library can get the job done with a little work.

You might also take a look at SwiftUI if you have an Apple device.

[0]: https://github.com/ocornut/imgui


> It's basically standard when prototyping any sort of graphical application.

While imgui is super-cool, this is wildly overstating its reach or significance. It also embodies a very particular style of GUI programming (so-called "immediate mode", hence the "Im" part of the name) that is very well suited to some sorts of GUI applications and less so for others. The other style, usually called "retained mode", is the one used by most native toolkits, and it is very far from trivial to just switch an application between the two.

So, while there are plenty of good reasons to consider imgui for a graphical application, there are also many reasons why you would not want to use it. It is very far from "standard" in terms of prototyping such apps.
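To make the distinction concrete, here's roughly what the immediate-mode style looks like in Dear ImGui (window/backend setup omitted, so this fragment won't run standalone): the UI is re-declared every frame and interaction comes back as return values, rather than via callbacks registered on retained widget objects.

```
// Illustrative per-frame body only; assumes an ImGui backend (e.g. GLFW + OpenGL)
// has already been initialized and NewFrame()/Render() are called around this.
#include "imgui.h"

void draw_debug_panel(float* volume, bool* show_stats) {
    ImGui::Begin("Debug");                       // declare a window, this frame
    ImGui::Text("Frame time: %.2f ms", 1000.0f / ImGui::GetIO().Framerate);
    ImGui::SliderFloat("Volume", volume, 0.0f, 1.0f);
    if (ImGui::Button("Toggle stats"))           // true only on the frame it's clicked
        *show_stats = !*show_stats;
    ImGui::End();
}
```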


As the kids say, AI makes everyone a 10x engineer. Who would you want to 10x?


But it's measuring quantity, not quality.

https://www.folklore.org/Negative_2000_Lines_Of_Code.html

What we really need is the -10X engineer ;)

Alas, his job would entirely consist of debloating the slop everyone else is pumping out "at inference speed".

https://steipete.me/posts/2025/shipping-at-inference-speed


Even though a lot of what people do with agents is reckless, they often build their own guillotine in the process too.

Problem #1: He decided to shoehorn two projects into one even though Claude told him not to.

Problem #2: Claude started creating a bunch of unnecessary resources because another archive was unpacked. Despite his "terror," the author let Claude continue and did not investigate.

Problem #3: He approved "terraform destroy" which obviously nukes the DB! It's clear he didn't understand, and he didn't even have a backup!

> That looked logical: if Terraform created the resources, Terraform should remove them. So I didn’t stop the agent from running terraform destroy


> Problem #3: He approved "terraform destroy" which obviously nukes the DB! It's clear he didn't understand

The biggest danger of agents is that the agent is just as willing to take action in areas where the human supervisor is unqualified to supervise it as in those where the supervisor is qualified, which is exacerbated by the fact that relying on agents to do work [0] reduces learning of new skills.

[0] "to do work" here is in large part to distinguish use that focuses on the careful, disciplined use of agents as a tool to aid learning which involves a different pattern of use. I am not sure how well anyone actually sticks to it, but at least in principal it could have the opposite effect on learning of trust-the-agent-and-go vibe engineering.


His backup plan prior to the event had large obvious issues.

His backup plan after the fact seems suspicious as well because he is making it much harder than it has to be.

Between that and a glance at the home page, it feels like someone doing AI vibe work who is not comfortable in the space they are working.

Who is the intended audience? Other vibe coders? I just think it's weird that, given his backup solution, he likely asked the AI to create it. Whatever hot-wash he did for this event was invalidated.

