Hacker News

They've taken it down now and replaced with an arguably even less helpful diagram, but the original is archived: https://archive.is/twft6


Wow, it’s even worse than I thought. I thought the unconvincing morphing would be the only problem. But no: the nonsensical and inconsistent arrowheads, the missing annotations, the missing bubbles. The “tirm” axis…

That this was ever published shows a supreme lack of care.


The turn axis is great! Not only have they invented their own letter (it's not r, or n, or m, but one more than m!), but it also points the wrong way.


Lots of the AI-isms with letters remind me of tom7's SIGBOVIK video "Uppestcase and Lowestcase Letters [advances in derp learning]":

https://www.youtube.com/watch?v=HLRdruqQfRk


It's like the Pokémon evolution of n through m, we need to notify the Unicode Consortium.


And that's what they dared to show to the public. I shudder thinking about the state of their code...


It really is wild, and telling, how fundamentally AI can screw up something as basic as... an arrow.


Is it truly possible to make GitFlow look worse than reality?


This passage from the post by the original creator of the diagram summarises our Bruh New World:

"What's dispiriting is the (lack of) process and care: take someone's carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own. This isn't a case of being inspired by something and building on it. It's the opposite of that. It's taking something that worked and making it worse. Is there even a goal here beyond "generating content"?


That reminds me of the Apple of earlier years, when people said Apple just copied its competitors. Well, they took the good parts and improved on the bad parts. That's the level of excellence you can achieve when copying.

This here is just so cheap I wouldn't even dare call it a copy.


Apparently the new diagram is now a rip-off of another one, from Atlassian: https://bsky.app/profile/vurobinut.bsky.social/post/3mf52hmw...



It looks like typical "memorization" in image-generation models. The author likely just prompted for the image.

Model makers attempt to add guardrails to prevent this, but they're not perfect. It seems a lot of large AI models basically just copy the training data and add slight modifications.


Remember: mass copyright infringement is prosecuted if you're Aaron Swartz but legal if you're an AI megacorp.


> It seems a lot of large AI models basically just copy the training data and add slight modifications

Copyright laundering is the fundamental purpose of LLMs, yes. It's why all the big companies are pushing it so much: they can finally freely ignore copyright law by laundering it through an AI.


> It seems a lot of large AI models basically just copy the training data and add slight modifications

This happens even to human artists who aren't trying to plagiarize - for example, guitarists often come up with a riff that turns out to be very close to one they heard years ago, even if it feels original to them in the moment.


TIMMMAYYY



