
> Who knows how long will it take to progress the tech to the point where anyone will be able to train and run models unrestricted without dealing with lawyer nonsense.

These are orthogonal issues at this point.

The one concern I do have is that the “lawyer nonsense” (read: AI companies playing fast and loose with current laws) will stack the regulatory deck against AI technology unnecessarily - essentially because of an unforced error that brings negative attention to the technology.

Put another way, these companies are asking to have a spotlight put on them by being so flippant about copyright and ethics issues. This spotlight could have been avoided with better behavior, and the tech would still appear magical and remain one of the most impactful jumps in tech in decades.



It's not 'playing fast and loose'.

It's an area where there are no existing laws. We're not going to stop AI because some furry deviant art artist complains loudly online.


Are the existing laws written in a way that is favorable to generative AI? No. But the laws do exist.

Whether or not one believes those laws apply to generative AI seems to be based on one's belief in how similar that AI software is to humans.

I'd argue that systematically ingesting 2.3 billion images is not remotely human (one of a myriad of reasons the comparisons break down), and that it is a long stretch to claim that this falls into the realm of fair use as originally envisioned.

It is this insistence that the software is human enough to be granted human-like status that is playing fast and loose with the definitions of things, ranging from consciousness, to learning, to the interpretation of those concepts relative to current laws.

I believe new laws will be written, and old laws will be updated. There's no question that the current legal system is not well equipped for various generative AI systems. But I don't think the current laws have nothing to say.

And I'd still argue that this conversation can be separated from the one about indiscriminately slurping up artists' content.

> We're not going to stop AI because some furry deviant art artist complains loudly online.

Please don't argue against straw men. There are legitimate concerns from artists across disciplines and genres, and this isn't just isolated shrieking.

Artist backlash is frankly one of the most natural outcomes I could imagine from a system that uses their work without permission. Many of the people who are complaining loudly are not against AI, just against the use of their work without consent or attribution.

I'm both extremely excited about the possibilities the software unlocks and concerned about the implications. AI can exist without ignoring the rights of artists.


There are exactly zero laws about using openly published materials for learning. Human learning, but also machine learning.

There's an implicit assumption that if you can get a hold of a copy and manage to learn from it, you are free to use what you learned in your creations.


> There's an implicit assumption that if you can get a hold of a copy and manage to learn from it

But there are explicit laws about how you acquire copies of things, and whether they apply seems to be based on what someone believes “learning” to be.

Your claim relies on the belief that a computer ingesting images is similar to a human learning from those images.


Isn't that how people learn art and writing: they study good artists and writers?

In the United States, a legal derivative work that isn't a parody needs to make substantive changes from the original. It's fair to say that creating new works in the same style or 'look-and-feel' of an artist would satisfy that prima facie.


There should be no "AI companies" in the first place. This stuff should be running on our own computers. That way they cannot set any stupid limits on it.


For the sake of argument, let's assume the six-figure (seven, maybe?) price tag on the hardware were no longer a factor and it were possible to train the models locally. I think the sources of the content everyone is trying to train their local models against would quickly shut down the inundation of traffic from hundreds of thousands of individual computers all trying to build their own "unlimited" model.

The computing requirements enforce the current reality that the training of models will be centralized.

This places a larger ethical burden on those central entities, IMO.


Decentralize it somehow. People contribute computing power to projects like folding@home. Why not do the same thing for AI? A distributed, decentralized, censorship-resistant AI model anyone can contribute to would be world-changing.
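For concreteness, the usual framing for this is something like federated averaging: volunteers compute gradients on whatever data they hold locally, and only the aggregated update touches the shared model. A toy sketch of that idea (the linear model and all names here are made up for illustration, not any existing project's API):

    import numpy as np

    def local_gradient(weights, shard):
        # Each volunteer computes a gradient on its own private data
        # (toy linear regression stands in for a real model).
        X, y = shard
        return X.T @ (X @ weights - y) / len(y)

    def aggregate_round(weights, shards, lr=0.01):
        # A coordinator (or the peers themselves) averages the gradients
        # and applies a single shared update step.
        grads = [local_gradient(weights, s) for s in shards]
        return weights - lr * np.mean(grads, axis=0)

    # Three "volunteers", each holding a private shard.
    rng = np.random.default_rng(0)
    shards = [(rng.normal(size=(32, 4)), rng.normal(size=32)) for _ in range(3)]
    weights = np.zeros(4)
    for _ in range(100):
        weights = aggregate_round(weights, shards)

The open problems are the same ones any volunteer network has: verifying that contributed gradients are honest, and distributing the weights without a central chokepoint.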


Somebody should make a crypto where mining is based on doing backpropagation.
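If anyone tried it, the usual "proof-of-useful-work" framing is that validators re-check a claimed training improvement instead of a hash puzzle. A toy illustration, entirely hypothetical and not how any real chain works:

    import numpy as np

    def loss(weights, X, y):
        # Mean squared error on a public validation set.
        return float(np.mean((X @ weights - y) ** 2))

    def validate_block(prev_weights, new_weights, X_val, y_val, min_gain=1e-3):
        # A block is accepted only if the miner's submitted weights
        # measurably reduce validation loss versus the previous block.
        return loss(prev_weights, X_val, y_val) - loss(new_weights, X_val, y_val) >= min_gain

The hard part is that "did the loss go down" is much easier to game than a hash puzzle.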



