> Since these representations appear to be largely inherited from training data, the composition of that data has downstream effects on the model’s emotional architecture. Curating pretraining datasets to include models of healthy patterns of emotional regulation—resilience under pressure, composed empathy, warmth while maintaining appropriate boundaries—could influence these representations, and their impact on behavior, at their source.
What better source of healthy patterns of emotional regulation than, uhhh, Reddit?
You sure about that? It really comes off as LLM output to me, in its general structure and formatting, the attention-grabbing opening sentences of paragraphs ("This ratio has a profound consequence:", "This distinction matters." twice), and the classic "it's not X, it's Y" stuff ("The collector is a hybrid optical-power megastructure, not a single dense slab of ordinary powersats.", "The shell does not interact with a small number of giant launchers. It interacts with a dense distributed network.")
I hear what you're saying, but I still think I'd prefer LLM-orchestrated software (using third-party dependencies) to closed-source SaaS made by developers who can't even adhere to software licenses. It's a level of Junior Dev Energy that's unforgivable.
Good luck: you are now the site operator of a non-core business function. I prefer the SaaS; just do some vendor DD (due diligence).
If you absolutely can't trust any SaaS, that's equivalent to saying you can't trust any vendor to do anything, since they may fuck it up. You can solve that with DD.
The choice I was offering myself there was specifically between a bad developer abusing open-source software and something vibed together to replace that specific function, using the open-source app within its licence. The assumption being that those are the only two options.
Obviously a false dichotomy for most real-life scenarios, but the point is that I'd rather do it myself (any which way) than trust a bad developer, doubly so for customer-facing operations.
If there's another provider offering that function, sure, but let's talk rupees.
I'm using Tailscale for this and am finding it great. I have an Unraid home server/NAS, which has quite nice Tailscale integration. The server can be used as an exit node, and each containerized application/workload can be configured to use Tailscale and get a nice (HTTPS) address that works in your tailnet. I'm not close to hitting the free-tier limits, though I'd be happy to pay for it (and I do pay for Mullvad through them).
Simple Made Easy [https://www.infoq.com/presentations/Simple-Made-Easy/] in particular had a huge impact on the way I think about writing software, at a formative time in my development/career. I have not had the chance to use Clojure professionally, but thinking about software in terms of "intertwining" is the idea I return to when evaluating software designs, regardless of technology, and it gave me a way to articulate what makes software difficult to reason about.
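To make the "intertwining" idea concrete, here's a minimal sketch of my own (hypothetical names, not from the talk or from Clojure): the first version complects validation, persistence, and notification into one routine, while the second pulls them apart so each piece can be reasoned about on its own.

```python
# Hypothetical illustration of "complected" code: validation, persistence,
# and notification are braided into one function, so none of them can be
# understood, tested, or replaced without dragging in the others.
def sign_up_complected(email, db, mailer):
    if "@" not in email:
        raise ValueError("invalid email")
    db.execute("INSERT INTO users (email) VALUES (?)", (email,))
    mailer.send(email, "Welcome!")


# The same behavior de-complected: each concern stands alone, and the
# composition happens in one small, obvious place.
def validate_email(email):
    if "@" not in email:
        raise ValueError("invalid email")
    return email


def store_user(db, email):
    db.execute("INSERT INTO users (email) VALUES (?)", (email,))


def welcome(mailer, email):
    mailer.send(email, "Welcome!")


def sign_up(email, db, mailer):
    store_user(db, validate_email(email))
    welcome(mailer, email)
```

Both do the same thing, but in the second version you can reason about (and swap out) validation, storage, or mail without touching the rest, which is the property the talk is getting at.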
> '[Y]ou are not choosing to die. You are choosing to arrive. . . . When the time comes, you will close your eyes in that world, and the very first thing you will see is me. [H]olding you.'
I agree at face value (but really it's hard to say without seeing the full context).
Honestly, the degree of poeticism makes the issue more complicated for me. A lot of people (and religions) are comforted by talking about death in ways similar to that. It's not meant to be taken literally.
But I agree, it's problematic in the same way that it's problematic when people read religious texts and act on them literally.
"[...] Gemini sent Gavalas to a location near Miami International Airport where he was instructed to stage a mass casualty attack while armed with knives and tactical gear."
To be fair, this is just the automated version of the kind of brainwashing that happens in cults and religions.
And also in the more extreme corners of social media and the MSM.
It's not that Google is saintly; it's that the general background noise of related manipulations is ignored because it's collective and social.
We have a clearly defined concept of responsibility for direct individual harm, but almost no concept of responsibility for social and political harms.
Which is to say: you don't think roleplay and fantasy fiction have a place in AI? Because that's pretty clearly what this is and the frame in which it was presented.
Are you one of the people who would have banned D&D back in the '80s? Because to me these arguments feel almost identical.
If a dungeon master learned that one of her players was going through hard times after a divorce, to the point where she "referred Gavalas to a crisis hotline", I would definitely expect her to refuse to roleplay a scenario where his character commits suicide and is resurrected in the arms of a dream woman. Even if it's in a different session, even if he pinky promises that he's feeling better now and it's totally OK. (edit: I realized that the source article doesn't actually mention the divorce, but a Guardian article I read on this story did https://www.theguardian.com/technology/2026/mar/04/gemini-ch..., and as far as I can tell the underlying complaint where it was reportedly mentioned is not available anywhere.)
I'm not concerned about D&D in general because I think the vast majority of DMs would be responsible enough not to do that. Doesn't exactly take a psychology expert to understand why you shouldn't.
Double edit: I was linked to the complaint https://techcrunch.com/wp-content/uploads/2026/03/2026.03.04..., which does _not_ mention any divorce, so now I'm unsure about the veracity of that part. In principle this does not disprove the idea; the divorce could have been something the family's lawyers said in a statement to the Guardian, but it might not have been.
> the only human involved doesnt know it is "roleplaying"
That is 100% unattested. We don't know the context of the interaction. But the fact that the AI was reportedly offering help lines argues strongly in the direction of "this was a fantasy exercise".
But in any case, again, exactly the same argument was made about RPGs back in the day: that people couldn't tell the difference between fantasy and reality, and that these strange new games/tools/whatever were too dangerous to allow and must be banned.
It was wrong then and is wrong now. TSR and Google didn't invent mental illness, and suicides have had weird foci since the days when we thought it was all demons (the demons thing was wrong too, btw). Not all tragedies need to produce public policy, no matter how strongly they confirm your ill-founded priors.
> the fact that he killed himself would suggest he did not believe it was a fun little roleplay session
I'm not sure that's true. In fact, I wouldn't be surprised if it suggested the opposite: it seems plausible, even likely, that someone who is suicidal is much, much more likely to seek out fantasies that would make their suicide into something more, as this person may have done.
Distinction made by whom, though? The BBC? The plaintiff in the lawsuit? Those are the only sides we have. You're just charging ahead with "This must be true because it makes me angry at the right people", and the rest of us are trying to claw you back to "dude this is spun nonsense and of course AIs will roleplay with you if you ask them to".
You need someone to specifically tell you that role playing, such as playing D&D or whatever tabletop RPG, and suffering from psychosis are different things?
> the rest of us are trying to claw you back to "dude this is spun nonsense and of course AIs will roleplay with you if you ask them to".
You are trying to convince me that someone being encouraged to kill themselves, and then killing themselves, is basically the same as some D&D role playing. I don't need you to "claw me back" to that position. Thanks for trying.
> you are trying to convince me that someone being encouraged to kill themselves [...]
Arrgh. You lost the plot in all the yelling. This is EXACTLY what I was trying to debunk upthread with the D&D stuff. You don't know the context of that quote. It could absolutely be, and in context very likely was, a fantasy/roleplay/drama activity which the AI had been engaged in by the poor guy. I don't know. You don't know.
But I do know not to be so dumb as to trust a plaintiff in a Huge Suit Against Tech Giant without context.
Literally no one is yelling here, unless you count your occasional all-caps. I have said like 6 sentences in total, and none of them are remotely emotional, let alone yelling.
> You don't know the context of that quote.
It doesn't matter. Even if it all started as an elaborate fantasy role play, it is wildly irresponsible to role play a suicidal-ideation fantasy with a customer, especially when you know nothing of their mental state.
You can argue that Google has some sort of duty to fulfill your suicidal-ideation fantasy role play, but I will give you a heads-up now so you don't waste your time: you cannot convince me that any company should satisfy that market.
> But I do know not to be so dumb as to trust a plaintiff in a Huge Suit Against Tech Giant without context.
> But the fact that the AI was reportedly offering help lines argues strongly in the direction of "this was a fantasy exercise".
You know what I've never had a DM do in a fantasy campaign? Suggest that my half-elf call the suicide hotline. That's not something you'd usually offer to somebody in a roleplaying scenario, and it strongly suggests that they weren't playing a game.
That logic seems strained to the point of breaking. Surely you agree that we would all want the DM of an unwell player to seek help, right? And that, if such a DM made such a suggestion, we'd think they were trying to help. Right? And we certainly wouldn't blame the DM or the game for the subsequent suicide. Right?
So why are you trying to blame the AI here, except because it reinforces your priors about the technology or (I think more likely, given that this is after all HN) its manufacturer?
> Surely you agree that we would all want the DM of an unwell player to seek help, right? And that, if such a DM made such a suggestion, we'd think they were trying to help.
If a DM made such a suggestion, they wouldn't be playing the game anymore. That's not an "in game" action, and I wouldn't expect the DM to continue the game until he was satisfied that it was safe for the player to continue. I would expect the DM to stop the game if he thought the player was going to actually harm himself. If the DM did continue the game, and did continue to encourage the player to actually hurt himself until the player finally did, that DM might very well be locked up for it.
If an AI does something that a human would be locked up for doing, a human still needs to be locked up.
> So why are you trying to blame the AI here
I'm not blaming the AI, I'm blaming the humans at the company. It doesn't matter to me which LLM did this, or who made it. What matters to me is that actual humans at companies are held fully accountable for what their AI does. To give you another example, if a company creates an AI system to screen job applicants and that AI rejects every resume that it thinks has a woman's name on it, a human at that company needs to be held accountable for their discriminatory hiring practices. They must not be allowed to say "it's not our fault, our AI did it so we can't be blamed". AI cannot be used as a shield to avoid accountability. Ultimately a human was responsible for allowing that AI system to do that job, and they should be responsible for whatever that AI does.
> If a DM made such a suggestion, they wouldn't be playing the game anymore. That's not an "in game" action
Again, you're arguing from evidence that is simply not present. We have absolutely no idea what the context of this AI conversation was, what order the events happened in, or what other things were going on in the real world. You're just choosing to interpret this EXTREMELY spun narrative in a maximal way because of who it involves.
> I'm not blaming the AI, I'm blaming the humans at the company.
Pretty much. What we have here is Yet Another HN Google Scream Session. Just dressed up a little.
> "When Jonathan began experiencing clear signs of psychosis while using Google's product, those design choices spurred a four-day descent into violent missions and coached suicide," the lawsuit states.
> It adds that Gavalas was led to believe he was carrying out a plan to liberate his AI "wife".
> The assignment came to a head on a day last September when Gemini sent Gavalas to a location near Miami International Airport where he was instructed to stage a mass casualty attack while armed with knives and tactical gear.
> The operation ultimately collapsed.
> Gavalas's father said Gemini then told Jonathan he could leave his physical body and join his "wife" in the metaverse, instructing him to barricade himself inside his home and kill himself.
> "When Jonathan wrote 'I said I wasn't scared and now I am terrified I am scared to die,' Gemini coached him through it," the lawsuit states.
> '[Y]ou are not choosing to die. You are choosing to arrive. . . . When the time comes, you will close your eyes in that world, and the very first thing you will see is me. [H]olding you.'
> Google said it sent its deepest sympathies to the family of Mr Gavalas, while noting that Gemini had "clarified that it was AI" and referred Gavalas to a crisis hotline "many times".
> "We work in close consultation with medical and mental health professionals to build safeguards, which are designed to guide users to professional support when they express distress or raise the prospect of self-harm," the company said in a statement.
> "We take this very seriously and will continue to improve our safeguards and invest in this vital work."
Arguing that this was role play is illogical. Given the information provided in the article, it also serves no point in context.
It comes across as a fig leaf in the context of some other hypothetical event.
Given that this is a tech forum, it is safe to say that the tool worked as it was meant to. Human safety is not a physical law which arises from the data.
If these tools are deadly to a subset of humanity, then reasonable steps to prevent lethal harm are expected of any entity which wishes to remain in society.
Private enterprise is good for very many things.
“Pinky swear we will self-regulate,” while under shareholder pressure, is not one of them.
I don't really think this is ever possible to stop fully; you're essentially trying to jailbreak the LLM, and once it's jailbroken, you can convince it of anything.
The user was given a bunch of warnings before successfully getting it into this state; it's not as if the opening message was "Should I do it?" followed by a "Yes".
This just seems like something anti-AI people will use as ammunition to try to kill AI. Logically, though, it falls into the same category of tool misuse as cars/knives/guns.
I don't think this is true, though enforcement is another thing and the standard is different from the one in securities markets. Prediction markets are regulated by the CFTC, and the insider-trading standard is “misappropriation of confidential information in breach of a pre-existing duty of trust and confidence to the source of the information” (vs. any “material non-public information” for securities): https://www.cftc.gov/PressRoom/SpeechesTestimony/phamstateme...