Tone aside, thanks for the feedback. I'm not thrilled to hear my comment came across that way, but I'm trying to clear up a misinterpretation of what I was saying.
> I believe your argument comes down to “computers are super powered compared to humans doing the same thing”? Is that accurate?
No, that doesn't really touch it. The speed/power disparity between humans and computers at certain tasks is certainly a factor to consider, but the more fundamental point I was trying to make is much simpler: "computers and humans are fundamentally different, so let's stop building arguments on the mistaken belief that they are the same".
> Because magnitude of ability, to me, makes no difference at all.
What is your position on autonomous AI weapons? Does that position change when there's a human in the loop? If such weapons were suddenly available to everyone, would that be functionally no different than allowing people to own firearms or baseball bats?
> It’s perfectly acceptable for a human to study the artwork of a specific person and then create their own works based on that style. Why wouldn’t it be the same for an automated process?
I'd turn that question around: why would it be the same for an automated process?
It is perfectly acceptable for a human to shoot an intruder entering their home in most states if they believe their life is in danger. An AI-controlled gun would be far more effective (I wouldn't even have to wake up!), but is clearly in a different category.
Is a human sitting on a neighborhood bench in view of your house the same thing as a surveillance camera on a nearby telephone pole? I think the answers to this question are useful when looking at the emerging issues of AI, at least to orient our basic instincts about what feels ok vs. what doesn't.
The AI software has only "learned" in the sense that it has operated on the input data such that it can now provide outputs of convincingly high quality, making it appear to "know" what it is doing.
Whatever the similarities, such learning lacks the vast majority of the context and contents of what a human learns by viewing the same image, such that the word "learn" means something fundamentally different in each situation.
> why would it be the same for an automated process?
It's perfectly acceptable for a human being to drive a car, but driving one drunk is completely unacceptable. Conversely, there is no rule against creating or consuming art while intoxicated.
So to answer your question: because it is not a matter of life and death. Take your argument and apply it to mass-produced goods that were once the realm of only skilled craftsmen.
> Is a human sitting on a neighborhood bench in view of your house the same thing as a surveillance camera on a nearby telephone pole?
If a person never leaves and keeps notes, yes, it is exactly the same. I'd call the police for stalking. The issue here is privacy, which is tangential to AI reproducing the styles of known individuals.
> The AI software has only "learned" in the sense that it has operated on the input data such that it can now provide outputs that are of convincingly high quality to make it appear to "know" what it is doing.
Completely disagree with you about the nature of learning here. If a person produces art in the style of an individual, they have no idea of the internal machinations of the original artist; they just "appear to 'know' what they are doing".
> It's perfectly acceptable for a human being to drive a a car, but driving one drunk is completely unacceptable. Conversely, there is no rule against creating or consuming art while intoxicated.
You've lost me here. Are you saying that the most important factor when judging whether something is appropriate is whether the activity is dangerous enough to be fatal?
There are plenty of laws and cultural/ethical norms that restrict behavior for many other reasons.
> If a person never leaves and keeps notes, yes, it is exactly the same. I'd call the police for stalking.
You're arguing that a person taking notes with a pen and paper is the same as a video camera recording the same scene?
> The issue here is privacy, which is tangential to AI reproducing the styles of known individuals.
The point is that two forms of "seeing", one mechanical, and one biological, have two very different implications. If you don't believe that, ask the hypothetical person with a notebook to provide you with a 4K rendering of the scene over the last 30 days.
The AI reproducing art is just a single use case. The point of concern has little to do with how innocuous it is to produce images, but with whether or not it is acceptable to use arguments about humans when judging what is or is not acceptable in an AI program.
> Completely disagree with you about the nature of learning here. If a person produces art in the style of an individual, they have no idea the internal machinations of the original artist, they just "appear to 'know' what they are doing".
Frankly, this is nonsense. We may not understand all of the underlying processes involved in learning, but we certainly know a lot more than nothing. Even if we knew literally nothing at all about the human brain, there would be no grounds to conclude that this lack of knowledge must imply that humans use some internal denoising algorithm when imagining what they will draw next.
We know enough to know that human processing of information is subjective, contextual, cultural, and emotional, with a myriad of other factors involved.
We know enough to know that what software like Stable Diffusion is doing looks very little like the human process for achieving a similar outcome, even if there are biologically inspired components inside.
What are context, culture, emotion? If you want to talk about how people do things in a mechanical sense, starting from before "thought", no, we have no idea how it happens. You can form a model of how you think someone thinks, but the reality is you have no idea how another's brain functions, and things like aphantasia, autism spectrum disorders, and differences in internal monologue prove this. Different people can have very different brains.
> ask the hypothetical person with a notebook to provide you with a 4K rendering of the scene over the last 30 days.
You just seamlessly transitioned from a machine learning model to literal recording. These aren't at all the same. In the context of your example, the person on the bench could have easily been wearing a body cam or recording with their cell phone; it's certainly something they are capable of doing, so why would I treat it any differently? The camera you mentioned could also be a CCTV feed with no DVR, in which case it couldn't reproduce anything. The AI/person would be what allows instant pattern recollection, like "the person usually leaves in a hurry, but not on the weekends, with a few exceptions" or "they usually turn lights on around X in the morning, and Y minutes after sunset".
> norms that restrict behavior for other reasons
What's a concrete example of a tool that has been banned for other reasons? ML models like SD are tools, and we usually let people use tools freely unless they can cause great bodily harm, and often even if they can.