
I think the rules should be stricter.

I’d prefer an explicit opt in from the content author being required for anyone to perform any model training with any given data.

Alternatively, require all weights, prompts and chat logs to have the same visibility as the original datasets.

None of this is going to happen, and current decisions that AI-generated works are uncopyrightable[1] are already a good step; but still, it feels like there is room for abuse.

[1]: https://en.m.wikipedia.org/wiki/Th%C3%A9%C3%A2tre_D%27op%C3%...



Well, you explicitly opt in to Twitter's ToS whenever you post anything there.


This is not opt-in as I understand it. When there is no alternative, or the only alternative is not using the service at all, I'd call it a hard requirement instead.

I like how opt-in is handled by GDPR; e.g.: "Consent must be a specific, freely given, plainly worded, and unambiguous affirmation given by the data subject (...) A data controller may not refuse service to users who decline consent to processing that is not strictly necessary in order to use the service.", source: https://en.wikipedia.org/wiki/General_Data_Protection_Regula...


> When there is no alternative, or the alternative is not using a service, I'd call it a hard requirement instead.

There are plenty of alternatives to Twitter.



