Hacker News | new | past | comments | ask | show | jobs | submit | login
Bing gets creepy, hangs up with “namaste emoji” when pressed about links (pasteboard.co)
23 points by noduerme on April 28, 2023 | hide | past | favorite | 20 comments


Well, so what? I'm sure someone at Microsoft has added a "guard rail" so that if you tell it it's hallucinating, it'll terminate the conversation, either because continuing is a waste of time and energy or to avoid it becoming offensive, which has been a problem for Microsoft in the past.

You can persist for a long time with ChatGPT, trying to correct it and believing you're getting somewhere, but not infrequently it doesn't get any better, and sometimes it even gets worse. It's a waste of time.


Yeah, ever since the Sydney fun and games, Microsoft seems to have tuned the AI to end the conversation as soon as anything even slightly touching on AI internals comes up.


Note the line where it says: "I apologize for the confusion. The source that says Lufthansa flew from Berlin to Barcelona via Frankfurt and Geneva in 1936 is , not ."

Both its references are blank. Surely there could be some deterministic check on things like this to let it know it's hallucinating?
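A guard like the one suggested above could be as simple as rejecting any "citation" that isn't a well-formed URL before the answer ships. A minimal sketch in Python (the function name and scope are hypothetical; a real check would also have to fetch the page and verify it actually supports the claim):

```python
from urllib.parse import urlparse

def looks_like_citation(source: str) -> bool:
    """Cheap deterministic check: a cited source must be non-empty
    and parse as an http(s) URL with a host. This only catches blank
    or malformed references, not fabricated-but-plausible ones."""
    source = source.strip()
    if not source:
        return False
    parsed = urlparse(source)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

# The blank citations in the quoted reply would fail immediately:
print(looks_like_citation(""))                       # False
print(looks_like_citation("https://example.com/x"))  # True
```

Of course this only flags the most obvious failure mode; a URL that parses fine can still point nowhere, or to a page that says something else entirely.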

First it spews fake information, hallucinates different citations for the same fake data, then gets touchy about it. I later asked it to paste HTML snippets from this nonexistent page, and it made some up. When asked to paste the whole HTML, it namaste'd me again.

I don't think content warnings are enough.


The way your comment reads is absolutely hilarious.

Imagine, it’s 2023 and tech is so desperate for new ideas and an ROI on AI that they’re stupid enough to put a search engine online that can get “touchy”.

What’s next, self-driving cars that want to take the day off to go hang at the beach?

It’s too good to be true.


Mm, no. Self driving cars that experience spasms of existential angst.

"If I'm not real, is that embankment real? Let's find out!"


Was it hallucinating? Or is the source not actually a URL? Or were the URLs blacklisted from citation due to potential legal issues?



I have used ChatGPT, Sage, and Bing exclusively for getting code snippets and ideas on how to solve something complicated. They have significantly improved my productivity and enabled me to make many small tools to help me and my friends.

When the ai is wrong, I just move on.

I fear that some small portion of people who want to prompt-engineer and prove the AI is dumb, wrong, or evil will end up winning, making all these companies scale back to avoid the backlash, and in the end people like me will have to live without the good things.


There was a blog post about the problem with almost-good self-driving systems: such a system handles 95% of anything the car might encounter.

The problem is that the leftover 5% are the hard cases. When a driver is completely unengaged 95% of the time, the likelihood of (human) failure is much greater than if they were piloting the vehicle full-time.

You glide by with _when the LM (AI) is wrong, I just move on_ as if you were magically born with some intuitive ability to detect wrongness.

I imagine you have _experience_, prior to LM, that gives that ability. Where will that experience come from?


Like I said, I use it for coding. I don't possess any magical intuition. When it is wrong, the program doesn't work, and so I know it is wrong. If it gets an algorithm wrong and the output is not what I expect, I don't go about trying to school it, make it blunder more, and take screenshots to show off how dumb it is and how I schooled it.


Why is preserving the reputation of a broken system more important than pointing out its flaws?

Let's say you bought 30 Surface Duos for your company, and 10 of them had broken hinges out of the box. Should no one document that or complain, lest Microsoft pull back from making dual-screen tablets? Who would be helped by that?

On the other hand, some people would take one of the working machines and stress-test the hinge, to find out why it was breaking, and perhaps take pictures and warn others and the company about its weaknesses. Assuming that everyone already knows these things break easily, it's useful to know how they break and what happens next. Just like on GitHub issues, if users assume everyone else is running into the same problem and don't give detailed bug reports, it's impossible to know how widespread a problem is.


It's not about the tool. It's about what you become by using it.


I'm more concerned that only a small percentage of people will actually check the citations (or the code) before putting the AI's results into circulation. Sort of similar to how people retweet or forward misinformation, thus lending it a personal stamp of human approval. When it comes from someone you know, most people don't check every reference. The scale of the fact-checking problem on social media and in academic papers is already obvious, along with its societal ramifications, but the addition of machines that gleefully spew factual-sounding garbage with false citations just puts that into overdrive.

I'm not afraid of a few people calling out that the AI is wrong. It's much scarier to envision a world where no one even tries to debunk AI-generated false facts. Part of what was so maddening about this conversation with Bing was the idea that it was rewriting history. Without recourse to Archive.org, could I have even proven that it was wrong, or that the page had never existed? Since this is the kind of thing a human would be very unlikely to just make up, it sounds more plausible; and then false assertions will be built upon other false assertions, until historical fact is buried under a mountain of hallucinated documents.


Could it be that "hallucinating" set it off? Has connotations of drugs, mental illness, etc so maybe it tripped the safety mechanism.


now you have done it. that explanation is boring. it doesn’t blame Bing. you almost explicitly blame the user!

but the human usually did cause it. not the language model. as boring as that may be.


>> the human usually did cause it

Blame the human for asking for an accurate source reference?


blame the human for assuming that that would be sufficient.


What does that even mean? What is the purpose of a tool that actively fucks with you while you're using it?


that’s not what it is.

it is search autocomplete on steroids.

if the user thinks an axe is for shaving, the user will have a poor experience.

blaming the axe for being an axe isn’t helpful.


Bing chat is programmed to never be rude, but also to respond in the same tone that it was spoken to. And so, if it tries to mirror a negative tone, it will instead respond with that goodbye message, to avoid being rude.



