
Imagine you are OpenAI. AI is going to be used for "Military and Warfare" whether you want it to be or not. Do you:

A) opt out of participating to absolve yourself of future sins, or

B) create the systems yourself, ensuring you will have a say in the ethical rules engineered into the weapons?

If you actually give a shit about ethics and safety (as opposed to the appearance thereof), the only logical choice is B.



Imagine you are Microsoft. Two decades ago the state regulated you. Now you get the opportunity to have it eat from your hand. Who cares about ethics and safety?


By the same logic, chemists in the USA should work on nerve gas, because if they don't, North Korea will?


If said nerve gas were a decisive weapon capable of giving one side an absolute advantage, chemists in the USA, or any other country for that matter, would absolutely work on it.


This is terrible logic; we (the international community) have banned several kinds of terrible weapons precisely to avoid this kind of lose-lose escalation.


The only reason the US, or any other country, gave up chemical weapons is that they are nearly useless anyway.

There are plenty of other weapons (such as mines) that the “international community” has “banned” but that are very useful in a war. Any country that doesn’t or can’t expect the US to come to its rescue ignores such bans and still manufactures them in great quantities.


That's not the same logic at all.

OP's choice was between protesting and participating with influence toward safer outcomes. Your choice is between protesting and participating with no influence toward safer outcomes.

Also, the AI participant would be OpenAI either way, whereas your inadequate alternative is "the US participates or NK will." Again, not the same.

So, wrong on two counts.


That is not valid logic. The USA ratified the Chemical Weapons Convention in 1997, and there are various Acts of Congress which make most work on nerve gas a federal felony. There are no such legal prohibitions on AI development.


We are debating ethics and morality surrounding a rapidly evolving field, not regurgitating trivia about the arbitrary legal status quo in the country you live in. Think for a moment about the various events in human history perpetrated by a government which considered those actions perfectly legal, then come back with something to contribute to the discussion beyond a pathetic, thought-terminating appeal to authority.


1. The initial “pathetic” thought-terminator was the comparison to nerve gas.

2. Nerve gas is not strategic. A better comparison would be nukes in WWII.

3. Nerve gas has no other uses, unlike AI.

4. Nerve gas can only be used to hurt, unlike AI.

5. If military AI is so dangerous, should the US just sit and do nothing while China/Russia deploy it fully? What is your suggestion here, specifically?


> ensuring you will have a say

Suppliers don't get to pick which house the missile lands on.


"Once the rockets are up, who cares where they come down? That's not my department" says Werner Von Braun.


If you really know anything about supplier networks, government, and the military, you know this is a losing game that is better not played.



