
Hallucinations or lies: does Artificial Intelligence need a psychologist?


When OpenAI introduced ChatGPT, millions of people were amazed by its human-like way of answering questions, writing poetry, and discussing virtually any topic. Me too.

But the “magic” was broken as soon as we started talking about psychological studies. ChatGPT not only invented the results, but also the scientific references. The problem is that it did so in such a persuasive and fluent way that anyone unfamiliar with the subject would have assumed it was true.

It wasn’t until Bard, Google’s Artificial Intelligence, made a blatant public mistake, crediting the James Webb Space Telescope with an achievement that actually belongs to the European Southern Observatory’s VLT telescope, that its creators were forced to acknowledge that Artificial Intelligence invents things, and a lot of them.

They called this tendency to falsify, confuse and mix data “hallucinations.” But does the AI really suffer from hallucinations, is it a pathological liar, or is there something else wrong with it? Since engineers use psychological terms to label the experiences of their machines, I believe that the voice of psychologists is important in analyzing a phenomenon that will end up influencing our lives and changing our society.

Hallucinations or illusions?

A hallucination is a false perception that does not correspond to any external physical stimulus, but that we perceive as real. Auditory hallucinations, for example, are the most common and consist of hearing voices that do not exist.

When reference is made to AI hallucinations, it is because the machine provides answers that do not match reality. In its early versions, ChatGPT did not acknowledge that it had “invented” the data. After much insistence, it would finally admit that it was wrong, only to continue the conversation by creating more false data. And so on, ad infinitum…

Therefore, there was no “awareness” of the invention per se. In its newer versions, it warns us that it may generate answers that do not fit reality.

However, a deeper analysis of its answers reveals that Artificial Intelligence does not really invent anything, but only mixes information to provide more or less coherent answers. Therefore, since there is a flow of objective stimuli (data), we could not speak of hallucinations, but rather of illusions.

In Psychology, illusions are distortions in the perception of an external stimulus through our senses. For example, we may believe that we have seen a person in what is only a shadow. Unlike hallucinations, our eyes actually captured a stimulus, but our brain misprocessed it and convinced us it was something else.


In the same way, Artificial Intelligence uses the hodgepodge of information available to it to create a moderately convincing discourse, without worrying about whether it is true or reflects reality. This has led some to claim that it could be a pathological liar.

Is AI a pathological liar?

Pseudologia fantastica, as mythomania is also known, is characterized by telling stories that are not entirely improbable, since they often contain some glimmer of truth. The stories are not delusions; if pressed, the person may admit that they are not true, but they often divert the conversation with other lies simply because they cannot stop telling falsehoods.

This behavior is quite similar to that shown by AI algorithms. However, ChatGPT recognizes that “I do not have the ability to lie or tell the truth in the human sense, since I am an Artificial Intelligence program.” It also tells us that “Artificial intelligences do not have their own experiences, sensations or perceptions. They are computer programs designed to process and generate information based on input data.”

And precisely in this response lies the key.

The idea that AI can hallucinate, have delusions, conspire, or even lie is simply an attempt by the companies that created it to present it to us from a human perspective. In a way, they are leveraging the Pratfall effect, according to which making small mistakes makes us more likeable in the eyes of others, since they identify with us more. So instead of dismissing AI as an unreliable tool, we simply embrace it as an imperfect human being.

Without consciousness, any attempt to humanize machines is marketing

AI algorithms do not “hallucinate the answer,” as IBM wrote, nor do they “confabulate,” as Meta’s head of AI said. They are not pathological liars either, as many users on social networks claim.

All these explanatory attempts arise from the tendency to anthropomorphize the actions of machines. The truth is less romantic. Large language models are simply trained to produce a plausible-sounding answer to users’ questions, regardless of its veracity.

Programs like ChatGPT or Bard rely on a technology called a Large Language Model, or LLM, which learns its skills by analyzing huge amounts of digital text, including books, articles, and online chat conversations. At the moment, they can only identify patterns in all that data and use them to create a plausible answer.
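
To see why such fluent fabrication is possible, here is a minimal, purely illustrative Python sketch of that mechanism; the vocabulary and probabilities are invented for the example. The model estimates how likely each continuation is given the words so far and picks a plausible one, with no step that checks whether the resulting claim is true.

```python
import random

# Toy illustration only: a real LLM computes these probabilities with a
# neural network trained on enormous amounts of text; the continuations
# and numbers below are made up for the example.
next_word_probs = {
    "The study was published in": {
        "Nature": 0.40,                        # sounds plausible
        "Science": 0.35,                       # also sounds plausible
        "a journal that never existed": 0.25,  # equally acceptable to the model
    }
}

def continue_text(prompt: str) -> str:
    """Sample the next words according to the learned probabilities.

    Nothing in this step checks whether the finished sentence is true;
    only how likely the continuation is, given the text so far."""
    options = next_word_probs[prompt]
    choice = random.choices(list(options), weights=list(options.values()))[0]
    return f"{prompt} {choice}"

print(continue_text("The study was published in"))
```

A fluent but false reference can come out of exactly the same sampling step as a correct one; the model has no separate notion of “invented” versus “real.”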


And the problem is not simply that the Internet is full of false information which these systems repeat; it is much more complex than that. In my conversations, ChatGPT did not reproduce false information; rather, it mixed data from different studies to produce a coherent answer that sounded good – often too good to be true.

At this point, the problem of veracity is not easy to solve, as the programmers themselves have recognized, because these systems operate with probabilities and are “designed to be persuasive, not truthful,” according to an internal Microsoft document mentioned by The New York Times. That means their answers may seem very realistic but include statements that are not true.

And since these systems can answer almost any question in an endless number of ways, there is no way to determine for sure how often they get it wrong. Obviously, that would not be a problem if we used them just for chatting, but it is a serious risk for anyone using them for medical, legal, or scientific purposes.

A study conducted at Harvard University (Logg et al., 2019) found that we already trust the advice of algorithms more than that of people, even when those people are specialists in their field. And that’s bad news.

It is bad because Artificial Intelligence cannot do inductive reasoning or grasp the meaning of words. It does not understand whether the patterns it has found have meaning or not. It also has no common sense, knowledge of the truth, or awareness of itself or the real world. Therefore, these machines are not really intelligent, or at least not in the sense of human intelligence.

In fact, “In the age of AI and Big Data, the real danger is not that computers are smarter than us. It’s just that we believe they are,” as Gary Smith, an economics professor at Pomona College, wrote.

References:

Smith, G. (2018) Beware the AI delusion. FastCompany.

Logg, J. M. et al. (2019) Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes; 151: 90-103.


Jennifer Delgado


I am a psychologist and I spent several years writing articles for scientific journals specialized in Health and Psychology. I want to help you create great experiences. Learn more about me.
