Hacker News | rikschennink's comments

When I noticed the article header image was generated with AI my interest in reading the article itself dropped to zero.

Can you point out what made you think the images were AI generated? I suspected they were (before reading in this thread that everything is AI generated), but I couldn't find any of the usual signs.

I thought they were AI because I suspected nobody would pay an illustrator/actually spend time making those illustrations for a story like this.

The fact that the whole text was AI came as a surprise. I did notice that weird inconsistency about feed pricing mentioned in another comment but just thought the author made an error or I misunderstood something.


The combination of a "hand-drawn" art style, with text that is obviously not hand-lettered, is a dead giveaway. It would be very weird for a human to do that.

If you have an eye for fonts, the text itself stands out too, at least to me. The font style of "HARTMANN SOFTWARE MECHANICS" is a particular combination of clean, bland shapes and rounded corners that you rarely see in human-designed fonts, but it's super common in AI-synthesized text. I guess it's sort of an average middle ground in the abstract space of letter forms, and the lack of distinguishing features is what creates the impression.


Thanks. That's interesting. I haven't paid particular attention to the fonts. I do draw quite a lot these days but I don't have a particular eye for fonts.

I personally have on occasion added software rendered fonts into hand-drawn images. Sometimes instead of directly adding it with a text tool I would add it on a temporary layer and then trace it over by hand. This results in similar looking text with clean shapes and rough lines that fit better with the other parts of the drawing.

To me the only thing that stands out in the image is the view through the laundry shop windows. The line of laundry machines doesn't look aligned at a right angle to the window; given that the tiles in front of the window clearly establish perspective lines, it's a mistake that seems hard to make and would be pretty apparent in the early stages of drawing this.

In fact looking closely, the perspective of the building itself doesn't match the perspective of the fields behind it, but I can see myself doing something like this if it's not that noticeable and gives me better composition.


It seemed to be a common AI style, so I was suspicious. Zoomed in on the laundromat window sign and it says “vioice”, so yea.

Looking at it again now, things like the electrical wires not being aligned, or going nowhere are always obvious tells. The outlines on the A in “laundromat” are okay but for some reason the vertical line on the R isn’t open.

It’s impressive that this can be generated with AI. I just wish it would come with a “generated with llm-name” label.


The main building's roof doesn't make any sense (we should be able to see the top). The font choices are odd. Some straight lines look like they were made with a digital line tool, and others look freehand. The perspective of the signs is wrong in a strange way.

You might like /r/antiai.

This.

We need a way to flag AI generated articles.


Looks like the article was 80% AI generated as well.


> No AI-generated media is allowed (art, images, videos, audio, etc.). Text and code are the only acceptable AI-generated content, per the other rules in this policy.

I find this distinction between media and text/code so interesting. To me it sounds like they think "text and code" are free from the controversy surrounding AI-generated media.

But judging from how AI companies grabbed all the art, images, videos, and audio they could get their hands on to train their LLMs it's naive to think that they didn't do the same with text and code.


> To me it sounds like "text and code" are free from the controversy surrounding AI-generated media.

It really isn't. Don't you recall the "protests" against Microsoft starting to use repositories hosted on GitHub to train their own coding models? There were lots of articles and sentiment everywhere at the time.

That seems to have died down though, probably because most developers now use LLMs in some capacity. Some just use them as a search engine replacement, others to compose snippets they copy-paste, and others hardly type code anymore, just instructions, then review the output.

I'm guessing Ghostty feels like if they'd ban generated text/code, they'd block almost all potential contributors. Not sure I agree with that personally, but I'm guessing that's their perspective.


Right, that's what I'm thinking too (I'll update my statement a bit to make that more clear), but I constantly hear this perspective that it's all good for text and code but when it's media, then it's suddenly problematic. It's equally problematic for text and code.


I bet they aren't honoring the terms of the MIT license I use for my repos. It's pretty lenient and I bet they're still not compliant.


And to be frank, why would they? Who would stop them? It would take a massive case for them to be compelled to stop, and no one seems to care about attribution anymore, or about licensing at all in most cases. Companies are using torrents to download copyrighted material, something individuals have gone to prison for before, and they hardly even get a slap on the wrist.


I see a downvote. All righty then: cite where they credited me. Go on :)


It's not that code is distinct or "less than" art. It's an authority and boundaries question.

I've written a fair amount of open source code. On anything like a per-capita basis, I'm way above median in terms of what I've contributed (without consent) to the training of these tools. I'm also specifically "in the crosshairs" in terms of work loss from automation of software development.

I don't find it hard to convince myself that I have moral authority to think about the usage of gen AI for writing code.

The same is not true for digital art.

There, the contribution-without-consent, aka theft (I could frame it differently when I was the victim, but here I can't), is entirely from people other than me. The current and future damages won't be borne by me.


Alright, if I understand correctly, what you're saying is they make this distinction because they operate in the "text and code" space but not in the media space.

I've written _a lot_ of open source MIT licensed code, and I'm on the fence about that being part of the training data. I've published it as much for other people to use for learning purposes as I did for fun.

I also build and sell closed source commercial JavaScript packages, and more than likely those have ended up in the training data as well, obviously without consent. So this is why I feel strongly about this separation between code and media: from my perspective it all has the same problem.


I agree it does all have the same problem, but on balance: it's much easier to rationalize my own use of genAI to augment my programming skillset and (maybe) stay employable, than it is to rationalize using genAI to do commercial artwork.


re: MIT license, I generally tell people they have to credit and that's functionally the only requirement. Are they crediting? That's really the lowest imaginable bar, they're not asked to do ANYTHING else.


I don’t think the scraping party cares about the license, if the JavaScript code is linked online they’ll just take it. Source: see the art industry


So nice, it’s just unfortunate that even fun experiences like this first show you a cookie popup.


Recently a customer pasted a complete ChatGPT chat in the support system and then wrote “it doesn’t work” as subject. I kindly declined.

I’ve also received tickets where the code snippets contained API calls that I never added to the API. A real “am I crazy” situation where I started to doubt whether I had added them and had to double-check.

On top of that you get “may I get a refund” emails but expanded to four paragraphs by our friend Chat. It’s getting kinda ridiculous.

Overall it’s been a huge additional time drain.

I think it may be time to update the “what’s included in support” section of my software’s license agreement.


This post ticks all the AI boxes.


Working on FilePond v5.

Entering year three of a complete rewrite. It’s kind of ridiculous, but as I’m still enjoying the process of trying to build/craft a performant and flexible file upload web component, I just keep going.

V4 is live on https://filepond.com, plan to release v5 before the end of summer.
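As an aside, a lot of the work in a file upload component is the unglamorous validation that runs before anything touches the network. A minimal sketch of that idea in plain JavaScript (a hypothetical helper, not FilePond's actual API; the names, limits, and allowed extensions are made up for illustration):

```javascript
// Hypothetical pre-upload validation helper (not part of FilePond).
// Checks a file's extension and size before handing it to an uploader.
function validateUpload(name, sizeBytes, options = {}) {
  const {
    maxBytes = 5 * 1024 * 1024,          // assumed 5 MB cap
    allowed = ['png', 'jpg', 'pdf'],     // assumed whitelist
  } = options;

  // Extract the extension, if any, and normalize its case.
  const dot = name.lastIndexOf('.');
  const ext = dot > 0 ? name.slice(dot + 1).toLowerCase() : '';

  if (!allowed.includes(ext)) return { ok: false, reason: 'type' };
  if (sizeBytes > maxBytes) return { ok: false, reason: 'size' };
  return { ok: true };
}
```

A component would typically run something like `validateUpload(file.name, file.size)` on each dropped file and surface the `reason` to the user before starting the request.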


I tried to read this on mobile but the blinking cursor makes it impossible.


Removed it! I agree it was distracting.

