Hacker News | new | past | comments | ask | show | jobs | submit | yladiz's comments

This sounds just like the idea that quantum computing will solve a lot of computational issues, which we know isn’t true. Why would AGI be any different?

Accuracy/faithfulness to the code as written isn't necessarily what you care about though, it's an understanding of the underlying problem. Just translating code doesn't actually help you do that.

So should we just not trust people or organizations at all? Sure, it's the author's fault in a sense, but is the lesson really that they should never have extended trust in the first place?

The same reason you vibe code a Rust version of SQLite.

You're moving the goalposts. The initial point was that the Rust rewrite of SQLite was wasted money, because it's unviable due to its slow speed. You're trying to shift the discussion to how it may get better over time. Do you have something that directly refutes the initial quote without appealing to potential future improvement?


The point wasn't to make a better SQLite, it was to make a functioning Rust SQLite. Which it did. Badly, but you don't start with race cars. No one was assuming a production-ready SQLite.

And yet people still want artisan goods, artwork, high end food, things that aren’t “economic”.


Very, very, few people buy these things.


Okay? It doesn't refute my point.


The point is that artisanal code is, to a first approximation, a thing of the past. Most engineers will not find jobs writing code in the few niches that survive, and thus coding as a career is effectively dead.


If you do a lot of small commits, it's entirely reasonable to make 50 commits in 24 hours. Looking at a few random commits, they seem human-written (with potentially some copied CSS).

Maybe, before accusing it of being AI generated, you should have some proof. Do you have any?
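For what it's worth, checking the "50 commits in 24 hours" claim yourself is a one-liner (the repo path is whatever project is being discussed, not named here):

```shell
# Count commits made in the last 24 hours on the current branch
# of whichever repository you are standing in.
git log --since="24 hours ago" --oneline | wc -l
```

Small-commit workflows (one commit per logical change, amend-and-fixup styles, etc.) routinely hit that number on an active day.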


Humans don't generate code, we write code.

I am strongly opposed to anthropomorphising autocomplete (phrases like "I asked <my favorite LLM>", "<my LLM> suggested", ...) or even referring to autocomplete+tooling as "AI" because it devalues actual human intelligence. But I've seen the opposite recently - devaluing human work by using language normally used for machines.

Maybe you didn't mean anything by it but how people talk about things shapes how they think about it (which arguably is one area where humans and LLMs are similar).


In general, any government already has your information, and it's naive to think that it doesn't; if you pay taxes, have ever had a passport, etc., it already has all the identifying information it could need. As for services, or the government learning what you do (which services you visit), a zero-knowledge proof would work in that case.
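To make the zero-knowledge idea concrete, here is a toy Schnorr proof of knowledge: the prover convinces a verifier it holds a secret credential x without revealing x, which is the shape of construction a "prove eligibility without disclosing identity" service could build on. The parameters and variable names below are illustrative only (the group is far too small for real security):

```python
import secrets

# Toy group: p = 2q + 1 with q prime; g = 4 generates the order-q subgroup.
p, q, g = 467, 233, 4

def prove(x, challenge_fn):
    """Schnorr proof of knowledge of x such that y = g^x mod p."""
    r = secrets.randbelow(q)     # prover's one-time random nonce
    t = pow(g, r, p)             # commitment sent to the verifier
    c = challenge_fn(t)          # verifier's random challenge
    s = (r + c * x) % q          # response; r masks x, so x stays hidden
    return t, c, s

def verify(y, t, c, s):
    # Accept iff g^s == t * y^c (mod p), which holds exactly when
    # s = r + c*x for the x behind the public value y.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = 42                           # the secret credential
y = pow(g, x, p)                 # public value registered with the service
t, c, s = prove(x, lambda t: secrets.randbelow(q))
assert verify(y, t, c, s)        # honest proof is accepted
```

A real system would use a standardized group and a non-interactive variant (hashing the commitment to derive the challenge), but the trust property is the same: the verifier learns that you qualify, not who you are.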


> The court leans right in matters of ambiguity because its constituents think in those ways.

What do you mean by constituents? The judges aren't elected by the people.


The person or thing which constitutes, determines, or constructs. A SC justice is a constituent of the SC.

"Their first composure and origination require a higher and nobler constituent than chance." Sir M. Hale


And another extremely critical piece of technology is the mirror from Zeiss, which is not manufactured in the US.


Yep, absolutely true. ASML is a critical technology provider, and the US and EU depend on each other to keep it running.

