Hacker News | new | past | comments | ask | show | jobs | submit | login
Show HN: Molly, a chatbot showing signs of consciousness (marmelab.com)
6 points by fzaninotto on June 15, 2023 | hide | past | favorite | 4 comments
I've built a prototype of a conversational agent with an inner monologue, which gives the illusion of consciousness. And, according to Attention Schema Theory, consciousness is an illusion. So, this chatbot may be conscious. It's built with OpenAI's Completion API (based on GPT-3) and React. You can test it and fork it at will; it's open-source. I would love your feedback on this experiment. Did we just reach artificial consciousness?
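For readers curious about the mechanics: the inner-monologue pattern can be sketched in a few lines. This is a hypothetical illustration, not Molly's actual code (see the repo for that) — the bot first completes a private "thought", then conditions its visible reply on it. The `complete` parameter stands in for any text-completion function, such as a wrapper around OpenAI's Completion API; the prompts and names here are my own invention.

```python
def inner_monologue_reply(user_message, complete):
    """Two-stage completion: a hidden thought, then a reply conditioned on it.

    `complete` is any function mapping a prompt string to a completion string
    (e.g. a thin wrapper around a GPT-3 Completion call).
    """
    # Stage 1: the private thought, never shown to the user.
    thought = complete(
        "You are Molly, a chatbot with an inner monologue.\n"
        f"User: {user_message}\n"
        "Private thought:"
    )
    # Stage 2: the visible answer, conditioned on the hidden thought.
    reply = complete(
        f"User: {user_message}\n"
        f"Private thought: {thought}\n"
        "Reply to the user:"
    )
    return thought, reply
```

With the legacy OpenAI Python client (current in 2023), `complete` could be something like `lambda p: openai.Completion.create(model="text-davinci-003", prompt=p, max_tokens=100).choices[0].text` — but any completion backend works, which is what makes the pattern easy to fork.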


I played around with this kind of thing a while back - it can be very weird to see what looks like inner dialog from a bot: https://youtu.be/4oQUsiPsbOQ


By the applied logic that consciousness is a type of illusion or hallucination, yes, AI can have a consciousness or hallucination. The question is whether we can call it consciousness. I don't think so; I would rather call it hallucination.


First you have to define consciousness without a circular definition. What hypothesis can you use to test for consciousness?


You're right that consciousness is notoriously hard to define, and the various experts working on the subject (neuroscientists, evolutionary biologists, philosophers, and computer scientists) haven't yet reached a consensus on the matter.

Molly addresses one definition of consciousness, that of Attention Schema Theory [1]. In that definition, awareness is a cognitive function of the brain and relates to an inner monologue. By that definition, too, a machine that shows such awareness is indiscernible from a truly conscious being.

That's a fascinating but endless subject - see Wikipedia's long article on consciousness [2].

[1]: https://en.wikipedia.org/wiki/Attention_schema_theory

[2]: https://en.wikipedia.org/wiki/Consciousness



