Hacker News | eman2d's comments

https://projectsam.com/libraries/the-free-orchestra — play around with something like this, and you'll be surprised by what you can come up with. Modern DAWs have given me a ton of freedom to create all types of music.


Location: Toronto, Canada

Remote: Yes

Willing to relocate: Yes

Technologies: NextJS, Vue, TypeScript, AWS, React, Some ML

Résumé/CV: https://e2d.me

Email: evankoumarelas@hotmail.com

Looking to work on creative projects. I focus on building interesting things that people want to use or that make them think. Open to whatever.


A killer ad would be converting each vocal track back into the original song.




"OpenAI faces multiple lawsuits over ChatGPT’s use of books, news articles, and other copyrighted material in its vast corpus of training data. Suno’s founders decline to reveal details of just what data they’re shoveling into their own model, other than the fact that its ability to generate convincing human vocals comes in part because it’s learning from recordings of speech, in addition to music."

Surely they own the audio they're using and haven't just dumped every song in existence.


Specifics of the law aside, it seems quite unfair to not allow AI to learn from copyrighted examples. Imagine an aspiring human musician not being allowed to listen to copyrighted music for inspiration!


Humans and computers are different. It’s perfectly reasonable to treat a human being listening to something and taking inspiration differently from a computer listening to everything ever recorded.


Why is it perfectly reasonable to discriminate against the computer? Would the same apply to someone with an equivalent photographic memory who listens to a ton of music and is thus inspired?


I think one difference is in how humans learn vs. how AI learns. A human could hear a very small amount of music and start mimicking it. AI, at least in its present state, needs to be trained on a huge quantity of music before it does anything productive. What that difference means, exactly, is debatable, but it's a difference, and, I believe, one that calls for deeper contemplation of laws surrounding AI use of copyrighted materials.

Another difference is one of scale. We've seen this in other areas, like surveillance. A random security camera here and there, or a random street photographer here and there, most people didn't really find objectionable. Besides which, any captured photographs or video had to be manually inspected for persons of interest. But start surveilling everyone in public, all the time, and using technology to track everyone automatically, and it starts to seem like a different construct.

A human producer putting out material that another human learns from, quotes from, paraphrases from, builds new things based on... well, that still requires time and dedication on the part of the learner. And there will be a correspondingly limited amount of new derivative material based on what the "teacher" producer created. AI systems offer the possibility for effectively unlimited derivative material, produced at an unprecedented rate.

Existing copyright laws, and existing social norms around the creation of "intellectual property", were formed with humans in mind: humans who operate at a human rate of production, and who learn with a human style of learning. Some may be more efficient than others, but not so drastically different in form as AI.


AI-generated content can be either derivative or transformative. For example, if I use AI to paraphrase books or articles, that would be a derivative use.

But an AI that searches the web, news, and scientific papers for references and then outputs a Wikipedia-style article on a given topic would be a transformative use because it does a lot of work synthesizing multiple sources into a coherent piece, and only uses limited factual references.

Or we can do something more advanced: solve a task with a student model and, in parallel, solve the same task with a teacher model empowered with RAG, web search, and code-execution tools. Then you have two answers. Use an examination prompt to extract what the student model got wrong as a new training example.

That would be transformative, and targeted. There's no need to risk collecting content the student model already knows. It would be more like "machine studying" or "machine teaching", because it creates targeted lessons for the student.
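The loop described above can be sketched in a few lines. This is a hypothetical illustration, not a real pipeline: the `student`, `teacher`, and `examine` functions are stubs standing in for actual LLM calls (the teacher would be the RAG/search/code-execution-augmented model, and `examine` would be the examination prompt).

```python
# Sketch of the student/teacher "machine teaching" loop. All three model
# functions are stubs; in a real system they would wrap LLM endpoints.

def student(task: str) -> str:
    # Stub: a weaker model answering from its own weights alone.
    return "Paris is the capital of France, founded around 1800."

def teacher(task: str) -> str:
    # Stub: a stronger model augmented with RAG, web search, and tools.
    return "Paris is the capital of France; the settlement dates to antiquity."

def examine(task: str, student_answer: str, teacher_answer: str):
    # Stub for the "examination prompt": compare the two answers and emit
    # a targeted correction only where the student went wrong.
    if student_answer != teacher_answer:
        return f"Task: {task}\nCorrection: {teacher_answer}"
    return None  # Student already knows this; no training example needed.

def build_lessons(tasks):
    # Collect only the targeted corrections as new training examples.
    lessons = []
    for task in tasks:
        s, t = student(task), teacher(task)
        correction = examine(task, s, t)
        if correction is not None:
            lessons.append(correction)
    return lessons
```

The key property is the `None` branch: tasks the student already gets right produce no training data, so the dataset that comes out is corrective lessons rather than a copy of the teacher's source material.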


Because computers don't have any of the limitations that most humans, even ones with photographic memories, have.


Why are limitations relevant?


Do you think computers may have the ability to become like humans? If you do, like I do, then it is not reasonable to keep them from achieving that potential because of purely legal restrictions.


> Humans and computers are different.

Here you're wrong. It's not humans vs computers, but humans with older methods vs humans with newer methods.


> it seems quite unfair to not allow AI to learn from copyrighted examples

AIs aren't human. Copyright laws were written for humans.


That's why they're suing OpenAI, as opposed to ChatGPT.


That happens with art in general.

British museums are full of ancient Greek and Egyptian statues, and even a New York museum will not return thousands of mammoth bones to their Canadian owner and excavator.

Even if you are not OK with everyone stealing art and ideas from everybody, everywhere, all the time, you will get used to it at some point.


They could still use public domain data.

