
I didn't see a rule against it. Why wouldn't an AI (or Yudkowsky) try bribery to assist its escape?


There's a rule about the discussion being between the AI and the Gatekeeper, not between the human behind the AI and the human behind the Gatekeeper.


But in real life, any Gatekeepers will be corruptible humans, no?


The human playing the AI can't bribe the other human in real life; the AI can offer in-roleplay bribes.

