Hacker News new | past | comments | ask | show | jobs | submit | patrick451's comments

> Sure, we built something in 2 mos that would have otherwise taken us 6 mos, but now I'm fixing the mess that we caused.

You didn't actually build it in 2 months.


Even if it takes me a month to fix (likely a week, tbh), then it took us 3 months to build.

A mere 2x productivity improvement sounds like something you could achieve by introducing new tools that are predictable (i.e., not AI).

Perhaps. 2x is still 2x. And new tools still need to be vetted and learned.

It's strange that the goalposts seem to have moved from "AI is net negative to productivity" to "only a 2x improvement isn't worth it."


There are countless other stories about AIs spouting complete bullshit. These easily waste as much time as they save.

Context: I work in robotics. We use mostly C++ and Python. The entire team is about 200 people, though the subset I regularly interact with is maybe 50.

I basically don't use AI for coding at all. When I have tried it, it's just half working garbage and trying to describe what I want in natural language is just miserable. It feels like trying to communicate via smoke signals.

I'll be a classical engineer until they fire me and then go do something else. So far, that's working. We've had multiple rounds of large layoffs in the last year and somehow I'm still here.


This comparison is useless until Rust commits to a stable ABI.


> The first thing you need when you make something new is making it work, it is much better that it works badly than having something not working at all.

It is better for something to not exist than for a shitty version to exist. Software doesn't get better over time, it gets worse. If you make a bad, suboptimal choice today, chances are that solution becomes permanent. It's telling that all of your examples of increasing efficiency are not software.

If you aren't going to do it well, don't do it.


Tiny suggestion: make the visualizations for torch.zeros and torch.ones share the same y-axis limits, so the difference between them is visually apparent.
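A minimal sketch of that suggestion, assuming matplotlib and torch are available (the filenames and figure layout here are illustrative, not from the original post):

```python
# Render torch.zeros and torch.ones side by side with a shared, fixed
# y-axis so the 0-vs-1 difference is visually obvious.
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import torch

zeros = torch.zeros(5)
ones = torch.ones(5)

fig, (ax0, ax1) = plt.subplots(1, 2, sharey=True)  # sharey links the y-axes
ax0.bar(range(5), zeros.tolist())
ax0.set_title("torch.zeros(5)")
ax1.bar(range(5), ones.tolist())
ax1.set_title("torch.ones(5)")
ax0.set_ylim(-0.1, 1.1)  # same limits on both panels
fig.savefig("zeros_vs_ones.png")
```

With sharey=True, setting the limits on one axis propagates to the other, so both bars are drawn against the same scale.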


At least you can read the switch statement. One of the worst features of C++ is all of the code that gets generated for you automatically.


No human needs to have seen an elephant standing in the road before to know that you should not drive through an elephant standing in the road. These are not "long tail" events, as Waymo claims. It's a big object in the road. You have seen big objects in the road hundreds of thousands of times. Calling that a long-tail event is an admission that your model has zero ability to generalize.


Ideally, we would just ban AI content altogether.


I don't think there's any way for that to happen, and IF we could create a solid legislative framework, AI could definitely (at some point in the future) contribute more good than bad to society.


It's basically the same dynamic as hedonic adjustment in the CPI calculations. Cars may cost twice as much, but now they have USB chargers built in, so inflation isn't really that bad.
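For what it's worth, the hedonic-adjustment mechanics being alluded to look roughly like this (all prices and the quality estimate below are invented for illustration):

```python
# Toy hedonic quality adjustment (all numbers hypothetical).
old_price = 30_000       # last year's car
new_price = 33_000       # this year's car, now with a built-in USB charger
quality_value = 2_000    # statistician's estimate of the new feature's worth

# Raw price change vs. change after subtracting the imputed quality value.
raw_inflation = new_price / old_price - 1
adjusted_inflation = (new_price - quality_value) / old_price - 1

print(f"raw: {raw_inflation:.1%}, adjusted: {adjusted_inflation:.1%}")
```

The sticker price rose 10%, but the reported figure is only about 3.3%, because part of the increase is attributed to "quality improvement" rather than inflation.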

