Context: I work in robotics. We use mostly C++ and Python. The entire team is about 200, though the subset I regularly interact with is maybe 50.
I basically don't use AI for coding at all. When I have tried it, the output is half-working garbage, and trying to describe what I want in natural language is miserable. It feels like trying to communicate via smoke signals.
I'll be a classical engineer until they fire me, and then I'll go do something else. So far, that's working: we've had multiple rounds of large layoffs in the last year, and somehow I'm still here.
> The first thing you need when you make something new is making it work, it is much better that it works badly than having something not working at all.
It is better for something not to exist than for a shitty version of it to exist. Software doesn't get better over time; it gets worse. If you make a bad, suboptimal choice today, chances are that solution becomes permanent. It's telling that all of your examples of increasing efficiency are not software.
No human needs to have seen an elephant standing in the road before to know that you should not drive through an elephant standing in the road. These are not "long tail" events, as Waymo claims. It's a big object in the road. You have seen that hundreds of thousands of times. Calling it a long-tail event is an admission that your model has zero ability to generalize.
I don't think there's any way for that to happen, and IF we could create a solid legislative framework, AI could definitely (at some point in the future) contribute more good than bad to society.
It's basically the same dynamic as hedonic adjustment in CPI calculations: cars may cost twice as much, but now that they have USB chargers built in, inflation isn't really that bad.
You didn't actually build it in 2 months.