Hacker Times

In practice, is AI incapable of performing human-judgement ... at the levels most humans do?



If we are talking about jobs (quantity), maybe to some extent. But if we want to be honest, it's a qualitative (human-judgment) question. And even if a job seems totally AI-ready on paper, it might have invisible side effects.

(Thought experiment: do I want an AI robot to perform surgery on me if it has even a 2% chance of hallucinating? My answer is no, bring the surgeon.)


> if it only has 2% chance of hallucinating

I want people to have jobs.

Setting that aside, it depends on the error rate of human surgeons, right?


I wonder if we will see perverse incentives emerge to make the AI seem even better. For example, say a well-rested, stress-free surgeon has a 1% error rate. Well, let's make the job harder then: fatigue the surgeons, lay many of them off (or just don't rehire as they leave), and spread the remainder thin. Make them hit a 3% error rate. Then fire the lot, because it would be malpractice not to.

If that's the dystopia we end up living in, I'd imagine an alternate healthcare/legal system would emerge. Also, personally I'm far more forgiving of human error than of machine error.

For the Musk class, maybe. But for you and me?



