
As usual, science fiction is a cautionary tale about the future. The more effort we put into robots, the bigger and stronger they get. Sooner or later one of them will hurt a human, and then what do we do? You can’t hold a robot accountable.

Well, where are we now? If that robot is a driverless car, it can pretty much run over humans with impunity. The owner and manufacturer of the car will suffer minor penalties (compared with the victim being dead or maimed). They will not be required to change their actions. The robots will continue to be allowed to harm people in public.

Personally, I liked it better when we told ourselves stories about breaking the damn things as soon as they hurt someone.



>Sooner or later one of them will hurt a human, and then what do we do?

You might not be aware of the multiple times this has already occurred; the first took place before I was born.

https://en.wikipedia.org/wiki/Robert_Williams_(robot_fatalit...


Thank you, I do know of it; many thanks for the link. It is at exactly that point that we should have stopped and taken better stock of our situation. The companies involved dragged the case through the courts for a decade and paid a pittance. We decided years ago that companies are allowed to purchase a license for robots to kill humans. It’s gross.


Industrial robots have killed a great many people. Automation without intelligence means robots that mindlessly repeat a task, and people get crushed when they are in the way of a moving arm or other apparatus.

Ironically, adding intelligence will probably result in robots that are far safer and kill fewer people.


Robots don’t keep humans safe. Humans keep humans safe. An industrial machine stays put in its own context; humans can be trained to work around it, and at least notionally consent to be in its presence. A roaming machine means every human nearby must be constantly vigilant, and none of us can revoke consent.

In the second case, machine intelligence is supposed to keep us safe. That intelligence is controlled by people or companies that may or may not have the public’s benefit as a motive. The typical response to that is to legislate, or to advocate publicly for change. But what if the entity that controls the robots also controls the laws? Then there is no way for ordinary people to revoke consent to the presence of dangerous robots.

So, concretely: what if the CEO of a self-driving car company donates money to a government that then grants it immunity from the actions of its robots? Who do we trust in that case?

I still prefer a world where we solve the robot problem early, with clubs and fire.



