Hacker News | new | past | comments | ask | show | jobs | submit | fzeroracer's comments

It's actually not that hard to prove. For example, PSN now has dynamic pricing for their games which can vary quite wildly and all it takes is a small number of consumers with price differences to prove it. The same is true for grocery stores or whatever else.

Enforcing it is another question, though, and you're right that companies will likely just accept the fine. That's all the more reason why this sort of thing needs to be aggressively legislated against and denied.


How many times are we gonna have to see businesses get caught sharing customer data before we learn to not just trust them?

What software from what companies do you use to store your personal data?

I'm curious: If your boss emailed you and all of your coworkers with mass buyout offers and demanded that they quit their job how many do you think would take up the offer? 10%? 20%? Do you think it would be enough to cause significant organizational issues?

I don't think it's as much an 'attitude' problem as it is a 'wealth' problem.

The richest folk in this country have bought out every single media apparatus they can get their hands on and have spread decades of propaganda. The 'philanthropic' billionaires who spent their wealth to have a building or initiative named after them have vanished, giving way to the methhead billionaires who rip up the wiring of the country to sell for pennies.


Reading this thread has definitely sheared off a few of my brain cells, seeing people so collectively deluded about Chuck Norris. As you said, he was a totality of capitalism, a product wrapped in human skin. He's only truly notable for the jokes people made (myself included) at the dawn of the internet. As a person, what he actually accomplished was nothing at best, and at worst actively damaging to multiple groups that didn't deserve the heat.

The only good thing out of this mess is that the universe felt cosmically aligned enough to have his death occur on the same day as Mr. Rogers' birthday, someone who genuinely did fight for a better world.


Well, his administration has ignored the constitutional rights of this country multiple times at best, and at worst outright violated them resulting in killing American citizens with zero justice or recourse. There's a million different alternative reasons people could come up with, but we can just go with the classic 'treason' and line them up accordingly.

If you commission a baker to bake you a cake, did you make the cake? What if you added sprinkles on top?

If you commission a baker, another person, with wants and desires of their own, is involved.

If you use an AI, there isn't.

Either way, it's clear that the author (yes, the author) put a lot of work into this by iterating and shaping it to what he wanted, and that's a lot more than sprinkles.


> If you commission a baker, another person, with wants and desires of their own, is involved.

> If you use an AI, there isn't.

What is the functional difference here? You are commissioning (see: prompting) someone (see: an AI) for a piece of work, or artwork, or whatever. The output is out of your control, and I don't think the presence or absence of a human on the other end materially matters.

If we had hyper-advanced ovens from The Jetsons where we could type a prompt using a fold-out keyboard and it would magically generate whatever cake we ask of it: did we or did we not bake that cake? And I do not think it is clear the author put a lot of work iterating and shaping it into what he wanted; we have zero insight into that.


I didn't say the difference was functional. If you don't think the presence of a human on the other end matters (materially or not), feel free to continue this conversation with an LLM simulation of me. You can even prompt it so that you logically triumph and convince "me".

I'm asking you to explain what the actual difference is and you're avoiding the question.

If we had a complete black box where you submitted Prompt and out came Thing, and you had zero clue what said black box actually did, could you claim creation over Thing? What does knowing that it's a human vs LLM make materially different in terms of whether or not you created it?


And I - or did I turn this thread over to an LLM already? - am asking you a question in return, whose answer should give you the answer you want.

No please, I also agree with parent poster. Talk to the LLM, cause the human ain't listening.

Because 'quality' is a misnomer. LLM writing has quality in the same way that a press release from a big company has quality, or a professional contract written by a lawyer has quality. It is functional, generally typo-free and conforms to most standards but that doesn't mean it has flavor or spice to it.

Creative writing is the intent to convey feelings and thoughts, to create atmosphere. Here's a great example of the failure to do so, in a way that even most terrible writers would avoid.

> “It just said harvest,” she told Tom. She was sitting in one of the plastic chairs, holding a cup of the adequate coffee.

The coffee in this story is conveyed as being 'perfectly adequate'. But how do you convey adequacy? When you simply say 'the coffee is adequate', there's nothing there. It could be conveyed by establishing that the coffee is always at perfect room temperature, or carries the merest hint of bitterness and sweetness, or tastes like every other brand out there. In many respects this story is exactly like that 'perfectly adequate' coffee: functional, unexciting and ultimately flavorless.


Well-put.

This "flavorlessness" is all over the story, and paired with the obviously genAI images is how I realized as I read that this was either generated or at the least deeply driven by AI.

It constantly described facial expressions, tones of voice, and other emotional cues in generic, dry terms that communicated nothing but the abstract notion of "this person felt a particular way about what happened and it's up to you, the reader, to imagine what that feeling was."

It felt very much like it was prompted to "show, don't tell," by someone who has no idea what that phrase actually means.

As a professional programmer with a deep background in literature and music, this is yet another example that if you aren't an expert in a field, you will get mediocre results at best from an LLM, while being deceived into thinking they're great.


> obviously genAI images

Five years ago and before, the blog post author would have gone to Fiverr and asked an artist from a developing country to create some illustrations. There are many, many images on the Internet from five years ago (and before) that look similar. I object to your use of the adverb "obviously".

No, I clocked the AI images before I noticed the text. I think the "obviously" is earned.

You are correct that a previous era would have included a bunch of Fiverr images that would be in sort of that style, but it's not the style that's the problem. None of the images say more than the text that they're illustrating. It's subtle, but once you notice the lack of information density it becomes starkly apparent.


I took that phrase differently. The story makes the point that the AIs fail when metrics of quality can't be expressed in words. The use of a bare "adequate" reinforces the opacity of the coffee's quality. Certainly it would have worked well to use more words to convey specifics of the "adequacy" as you mention, but IMO that would have undercut the link back to the theme of human ineffability.

Obviously everyone's mileage may vary, but I didn't see this as a huge defect, and actually felt it worked pretty well.


Adequate coffee almost works as an image.

In the hands of Douglas Adams or Kurt Vonnegut it could be spun into a whole recurring motif.

In this case it's merely... adequate. It almost captures the density of ideas packed into something like "The ships hung in the sky in much the same way that bricks don't", but doesn't quite manage the same effect.


> But on reflection and discussion with the author, we decided that enough HN users may find that it gratifies intellectual curiosity, because it's interesting to see how a human and an AI bot can collaborate to create writing like this.

I can't say I agree, at all. This is essentially just your average post on Facebook or Linkedin made relevant on HN through telling a story about software mechanics. I don't find it interesting to 'read' collaborations between human and AI bots there and I would greatly prefer it if they don't infest HN as well.


> I don't find it interesting

That's fine. Nothing on HN is of interest to everyone. But the post spent 20 hours on the front page and earned over 450 upvotes and 300 comments. It was clearly interesting to a lot of the community and activated a worthwhile discussion.

> I would greatly prefer it if they don't infest HN as well

We are actively working against AI-generated/bot-posted comments "infesting" HN. LinkedIn-style marketing slop has always been unwelcome on HN, whether it's AI-generated or not. In this case a collaboration between a human and AI produced an interesting result, as evidenced by the community's response.


I feel like there was something else at play.

For reference, I moved to Austin in 2018, when the rent for my apartment was about $1,200/month. In 2022 (the year I left), my rent suddenly jumped to $1,600/month despite new apartments going up near me, and all of the apartments I looked into had seen similar jumps. Anecdotally, my coworkers all reported similar massive rent spikes.

It feels more like this is associated with the tech industry cooling significantly in Austin, so landlords can't get away with price bumps anymore. This isn't to say new housing doesn't help, but it certainly didn't prevent me from getting fucked on rent.

