Imagine you are Microsoft. Two decades ago the state regulated you. Now you get the opportunity to have them eat from your hand. Who cares about ethics and safety?
If said nerve gas were a decisive weapon capable of giving one side an absolute advantage, chemists in the USA, or any other country for that matter, would absolutely develop it.
This is terrible logic and we (the international community) have banned several kinds of terrible weapons to avoid this kind of lose-lose escalation logic.
The only reason the US or any other country gave up chemical weapons is that they are nearly useless anyway.
There are plenty of other weapons (such as mines) that the “international community” has “banned”, but are very useful in a war. Any country that doesn’t or can’t expect the US to come to its rescue ignores such bans and still manufactures them in great quantities.
The OP's choice was: protest, or participate and influence things toward safer outcomes. Your framing of the choice was: protest, or participate without any influence toward safer outcomes.
Also, the AI participant would be OpenAI either way, whereas your inadequate alternative frames it as "the US participates, or NK will." Not the same thing.
That is not valid logic. The USA ratified the Chemical Weapons Convention in 1997, and there are various Acts of Congress which make most work on nerve gas a federal felony. There are no such legal prohibitions on AI development.
We are debating ethics and morality surrounding a rapidly evolving field, not regurgitating trivia about the arbitrary legal status quo in the country you live in. Think for a moment about the various events in human history perpetrated by a government which considered those actions perfectly legal, then come back with something to contribute to the discussion beyond a pathetic, thought-terminating appeal to authority.
1. The initial "pathetic" thought-terminator was the comparison to nerve gas.
2. Nerve gas is not strategic. A better comparison is nukes in WW2.
3. Nerve gas has no other uses, unlike AI.
4. Nerve gas can only be used to hurt, unlike AI.
5. If AI in the military is so dangerous, should the US just sit and do nothing while China/Russia deploy it fully? What is your suggestion here, specifically?
A) opt out of participating to absolve yourself of future sins, or
B) create the systems yourself, ensuring you will have a say in the ethical rules engineered into the weapons.
If you actually give a shit about ethics and safety (as opposed to the appearance thereof) the only logical choice is B.