
I’ll be honest, this sounds like you’ve already decided your stance and are now building false distinctions to reinforce your own bias.

You said a lot of words, but I believe your argument comes down to “computers are super powered compared to humans doing the same thing”? Is that accurate? Because magnitude of ability, to me, makes no difference at all. It’s perfectly acceptable for a human to study the artwork of a specific person and then create their own works based on that style. Why wouldn’t it be the same for an automated process?



Tone aside, thanks for the feedback. I'm not thrilled to hear my comment came across that way, but let me try to clear up what I think was a misinterpretation of what I was trying to say.

> I believe your argument comes down to “computers are super powered compared to humans doing the same thing”? Is that accurate?

No, that doesn't really touch it. The speed/power disparity between humans and computers at certain tasks is certainly a factor to consider, but the more fundamental point I was trying to make is much simpler: "computers and humans are fundamentally different, so let's stop building arguments on the mistaken belief that they are the same".

> Because magnitude of ability, to me, makes no difference at all.

What is your position on autonomous AI weapons? Does that position change when there's a human in the loop? If such weapons were suddenly available to everyone, would that be functionally no different than allowing people to own firearms or baseball bats?

> It’s perfectly acceptable for a human to study the artwork of a specific person and then create their own works based on that style. Why wouldn’t it be the same for an automated process?

I'd turn that question around: why would it be the same for an automated process?

It is perfectly acceptable for a human to shoot an intruder entering their home in most states if they believe their life is in danger. An AI-controlled gun would be far more effective (I wouldn't even have to wake up!), but is clearly in a different category.

Is a human sitting on a neighborhood bench in view of your house the same thing as a surveillance camera on a nearby telephone pole? I think the answers to this question are useful when looking at the emerging issues of AI, at least to orient our basic instincts about what feels ok vs. what doesn't.

The AI software has only "learned" in the sense that it has operated on the input data such that it can now produce outputs of convincingly high quality, enough to make it appear to "know" what it is doing.

Whatever the similarities, such learning lacks the vast majority of the context and contents of what a human learns by viewing the same image, such that the word "learn" means something fundamentally different in each situation.
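
To make concrete what that kind of "learning" amounts to mechanically, here is a minimal sketch of a denoising training loop, the kind of objective behind models like Stable Diffusion. It is a toy stand-in, not the real architecture: the linear "model", the shapes, and the random "images" are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "model": a single linear map trained to predict the noise that
    # was mixed into an image vector (the core of a denoising objective).
    # Everything here (shapes, model, data) is an illustrative stand-in.
    dim = 16
    weights = rng.normal(scale=0.1, size=(dim, dim))
    lr = 0.01

    for step in range(1000):
        image = rng.normal(size=dim)           # stand-in for a training image
        noise = rng.normal(size=dim)           # noise mixed into that image
        noisy = image + noise
        err = weights @ noisy - noise          # prediction error (MSE gradient)
        weights -= lr * np.outer(err, noisy)   # one gradient-descent update

The point of the sketch: the "learning" is an optimization loop over numbers, nothing more.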


> why would it be the same for an automated process?

It's perfectly acceptable for a human being to drive a car, but driving one drunk is completely unacceptable. Conversely, there is no rule against creating or consuming art while intoxicated.

So to answer your question: because it is not a matter of life and death. Take your argument and apply it to mass-produced goods that were once the realm of only skilled craftsmen.

> Is a human sitting on a neighborhood bench in view of your house the same thing as a surveillance camera on a nearby telephone pole?

If a person never leaves and keeps notes, yes, it is exactly the same. I'd call the police for stalking. The issue here is privacy, which is tangential to AI reproducing the styles of known individuals.

> The AI software has only "learned" in the sense that it has operated on the input data such that it can now produce outputs of convincingly high quality, enough to make it appear to "know" what it is doing.

Completely disagree with you about the nature of learning here. If a person produces art in the style of an individual, they have no idea of the internal machinations of the original artist; they just "appear to 'know' what they are doing".


> It's perfectly acceptable for a human being to drive a car, but driving one drunk is completely unacceptable. Conversely, there is no rule against creating or consuming art while intoxicated.

You've lost me here. Are you saying that the most important factor when judging whether or not something is appropriate is based on whether or not the activity is dangerous enough to be fatal?

There are plenty of laws and cultural/ethical norms that restrict behavior for many other reasons.

> If a person never leaves and keeps notes, yes, it is exactly the same. I'd call the police for stalking.

You're arguing that a person taking notes with a pen and paper is the same as a video camera recording the same scene?

> The issue here is privacy, which is tangential to AI reproducing the styles of known individuals.

The point is that two forms of "seeing", one mechanical, and one biological, have two very different implications. If you don't believe that, ask the hypothetical person with a notebook to provide you with a 4K rendering of the scene over the last 30 days.

The AI reproducing art is just a single use case. The point of concern has little to do with how innocuous it is to produce images, and much more to do with whether it is acceptable to use arguments about humans when judging what is or is not acceptable in an AI program.

> Completely disagree with you about the nature of learning here. If a person produces art in the style of an individual, they have no idea of the internal machinations of the original artist; they just "appear to 'know' what they are doing".

Frankly, this is nonsense. We may not understand all of the underlying processes involved in learning, but we certainly know a lot more than nothing. And even if we knew literally nothing at all about the human brain, there would be no basis to conclude that this lack of knowledge implies humans use some internal denoising algorithm when imagining what they will draw next.

We know enough to know that human processing of information is subjective, contextual, cultural, and emotional, with a myriad of other factors involved.

We know enough to know that what software like Stable Diffusion is doing looks very little like the human process for achieving a similar outcome, even if there are biologically inspired components inside.
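
For a sense of how different the mechanics are, what a diffusion model does at generation time is roughly a loop like this. A bare-bones sketch only: real samplers (DDPM/DDIM) use carefully derived noise schedules, and predict_noise below is a fake stand-in for the trained network.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 16
    steps = 50

    def predict_noise(x, t):
        # Fake stand-in for the trained network's noise estimate.
        return 0.1 * x

    # Start from pure noise and repeatedly peel away the estimated noise.
    # This shows only the shape of the computation, not a faithful sampler.
    x = rng.normal(size=dim)
    for t in reversed(range(steps)):
        x = x - predict_noise(x, t) / steps
    # x now plays the role of the "generated image" in this toy setting.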


What are context, culture, and emotion? If you want to talk about how people do things in a mechanical sense, starting from before “thought”, no, we have no idea how it happens. You can form a model of how you think someone thinks, but the reality is you have no idea how another’s brain functions, and things like aphantasia, autism spectrum disorders, and differences in internal monologue prove this. Different people can have very different brains.

> ask the hypothetical person with a notebook to provide you with a 4K rendering of the scene over the last 30 days.

You just seamlessly transitioned from a machine learning model to literal recording. These aren’t at all the same. In the context of your example, the person on the bench could have easily been wearing a body cam or recording with their cell phone; it’s certainly something they are capable of doing, so why would I treat it any differently? The camera you mentioned could also be a CCTV feed with no DVR, in which case it couldn’t reproduce anything. The AI/person would be what allows instant pattern recollection, like “the person usually leaves in a hurry, but not on the weekends, with a few exceptions” or “they usually turn lights on around X in the morning, and Y minutes after sunset”.
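
To illustrate the pattern-recollection point: once observations are logged, whether in a notebook or by a camera, summarizing habits is trivial. A toy sketch, with timestamps made up for the example:

    from collections import defaultdict
    from datetime import datetime

    # Hypothetical logged departure times (made up for this example).
    departures = [
        "2022-09-05 08:12", "2022-09-06 08:03", "2022-09-10 10:41",
        "2022-09-12 08:07", "2022-09-13 07:58", "2022-09-17 11:02",
    ]

    by_weekday = defaultdict(list)
    for stamp in departures:
        dt = datetime.strptime(stamp, "%Y-%m-%d %H:%M")
        by_weekday[dt.strftime("%A")].append(dt.hour + dt.minute / 60)

    for day, hours in sorted(by_weekday.items()):
        print(f"{day}: usually leaves around {sum(hours) / len(hours):.1f}h")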

> norms that restrict behavior for other reasons

What’s a concrete example of a tool that has been banned for other reasons? ML models like SD are tools; we usually let people use tools freely unless they can cause great bodily harm, and often even if they can.


I think you're both barking up the wrong tree. A person, and even an animal, possibly even a plant or members of other kingdoms and domains, sees. A computer does not see any more than a lens sees, or, to take it to an extreme, any more than an empty paper towel roll sees. The computer, lens, and empty paper towel roll have no "I," no ego. In order to see, there must be something, or more accurately someone, seeing. AI is just a complex program, which is ultimately an algorithm and, to be very simplistic, a recipe. A recipe can never be conscious, can never have a sense of self nor a sense of anything. Just because a photocopier can reproduce an image doesn't remotely mean that it, or anything within it, could ever see anything.


I should have known my comment was doomed for downvoting. There are many coders here, and many among them believe Strong AI is attainable. Everyone has self-bias and tends to believe their own beliefs are correct and true. Anyone who believes Strong AI is attainable will evaluate that belief as correct, even against overwhelming evidence to the contrary. It is not a deficiency of programming that Strong AI will never be achieved; rather, it is an insurmountable problem of philosophy. No one takes philosophy seriously except philosophers. Coders, a large percentage of them, because they are creators, often take themselves too seriously, and with that comes an attachment to their beliefs, which they find nearly impossible to relinquish, even when it is shown beyond doubt that those beliefs are not realistic. Strong AI can never be attained due to what computers are and the way computers work, and also what code is and how code works. This is not to say striving for Strong AI is a bad idea, because it isn't. Great things will come from that struggle, just not Strong AI.

No one knows why we are conscious. We have sliced the brain up a thousand ways and we will slice it up a million more, and we will never find consciousness, because it is an emergent property of a healthy brain, just as light is an emergent property of a working light bulb. No matter how you disassemble a light bulb, you will never find the light; though I grant you'll eventually figure out how light is produced, the assumption that a light bulb contains light is wrongheaded. It's just a metaphor.

There is no worse slander than the truth: Strong AI cannot be achieved, not with digital computers, programming, and machine learning, and most likely not by any other method either. Please, please grow up and set aside your childish beliefs, because we need you now more than ever, here, in the real world.


I didn’t downvote you (tbh I don’t even know how to downvote). But I didn’t respond to you because I don’t understand the relevance of what you are saying. You said we’re both wrong and then went on to talk about how inanimate objects can’t see? It just doesn’t make sense to me what you’re trying to say.


The crux of it is that it is a false assumption, or more accurately a wrongheaded one, to suggest that Stable Diffusion sees anything, or to equate or compare what Stable Diffusion does with biological sight. Only an individual, whether that be a person, animal, plant, etc., can see. A program, no matter how complex, no matter how advanced its hardware, will never be an individual, an ego, something that sees. It can only mimic and fool us into believing something in there is seeing, but we should know better.


Now hold on a second. You seem very certain of "individual", what it is, and what it is not. I am not so sure that we're not actually creating an individual here, or at least parts of an individual, as snapshots of qualia and the ability to recall or "hallucinate" them. Does no amount of mimicry bring a program closer to an individual? What if we made a really clever model that learns on the fly via constant streams of qualia and adapts as best as it can, and it could do all of the things you could do, maybe not very well, or only to the same level as a service animal; is that any closer to an individual? I believe it is far more of a spectrum than the binary "will never be an individual".

Regardless of the details of how a brain is conscious, it can be reduced to its constituent pieces, or nuts and bolts, so to speak. We can partially explain everything from the electrochemical potentials within neurons to the encoded chemical information in the form of DNA and RNA that spontaneously replicates and orchestrates a maddening array of complexities. Even if our explanation is basically parts in a bucket, that's enough to paint a future where humanity understands enough of those processes to replicate consciousness without actually understanding why it works. Perhaps we don't need to understand the emergent properties, but merely discover them, like the standard model in physics. We equally can't explain /why/ the fundamental physics constants are the way they are, but we can use them to do extraordinary things.


I sure hope you're wrong, because what you're describing is terrifying and horrendously cruel.



