
Psychology Spot

All About Psychology


Stop asking AI what you can understand with common sense


We used to listen to the village priest. Now we listen to Artificial Intelligence. If before it was enough for the priest to say “God will punish you!” to make everyone tremble, today it seems that all it takes is for an AI to say something, even something ridiculous, for the majority to accept it as absolute truth.

We’re exchanging one religion for another, even if it’s hard for us to admit it. After all, a religion is nothing more than what we blindly believe in without deigning to check it out for ourselves.

That’s why every time I see someone asking AI, “Grok, is this true?” about something that clearly isn’t true or real, my blood boils. And it boils because of how easily we’re killing off what little critical thinking we have left.

We’re using Artificial Intelligence as a cognitive crutch for everything. Not to inspire us or make our daily lives easier, but to delegate thinking, to avoid the effort involved in connecting the dots, analyzing, and drawing conclusions on our own. The problem isn’t that we’re using AI, it’s that we’re stopping using our brains.

The danger isn’t AI itself. The danger is us, handing over our autonomy without flinching. The risk isn’t that artificial intelligence “controls” us, but that we stop questioning, verifying, and thinking. The risk is that we turn it into a kind of surreptitious religion in which everything the algorithm says is sacred.

The digital oracle: the new opium of the people?

History repeats itself. Humans have always sought authority figures to free them from the burden and responsibility of thinking for themselves. Before, it was the priest, then the motivational guru, and then the trendy coach. Now, it’s an algorithm trained on millions of texts that spits out answers with a convincing veneer. And while this technology may seem fascinating and novel to us, the psychological mechanism behind it has been the same for centuries: believing blindly because thinking for ourselves takes effort.


Ironically, artificial intelligence is designed to sound convincing and convey a sense of authority, not necessarily to be trustworthy. The New York Times obtained an internal Microsoft document acknowledging that these systems operate on probabilities and are “designed to be persuasive, not truthful.” That means their responses may seem very realistic, yet include claims that aren’t true. This tendency toward inaccuracy and misrepresentation is conveniently labeled “AI hallucination.”

But most people don’t know – or don’t want to know.

As a result, some people are asking AI to help them resolve relationship conflicts, as if the algorithm could understand the unique emotional dynamics that occur between two human beings. Or parents are asking AI how best to raise their children, as if a machine’s answer could replace judgment, empathy, or personal experience.

Others ask AI what the true meaning of life is, as if an algorithm could answer what philosophers haven’t managed to settle in centuries of history. Not to mention those who ask things like: “Is it true that drinking lemon water cures cancer?” (Spoiler: no, no, and no).

We ask AI everything, as if it were the embodiment of wisdom: from what the best medical treatment is to what it means to dream about a green cat in a hat. It doesn’t matter if the answers are a mix of probabilities, scraps of stolen data, and statistical formulas: as long as “the algorithm” says it, it seems sufficient. Suddenly, it’s as if evidence, common sense, or critical thinking are no longer necessary. Whatever the digital oracle says is the law.

Quick certainties, zero thinking

Psychology reminds us that the brain tends to seek mental shortcuts. And AI is the perfect shortcut: quick, effortless answers worded to sound truthful. But psychology also reminds us that cognitive shortcuts are often traps. They make us feel like we know, when in reality we don’t. They make us believe we’re thinking, when in reality we’re blindly believing what we’ve been told.


Recently, MIT researchers asked groups of people to write an essay either on their own, using Google’s search engine, or using ChatGPT. By recording their brain activity, they found that ChatGPT users showed the lowest neural activation and the poorest performance at the linguistic and behavioral levels. Over the several months of the study, they also found that ChatGPT users became lazier, and their essays increasingly resembled copy-pastes of the algorithm’s answers.

It’s not as if a study were needed to realize that asking an AI everything makes us intellectually lazier (translation: more stupid), although it’s always interesting to have scientific confirmation.

Don’t get me wrong: AI is a powerful tool and can help us be more creative and efficient, but it can’t come at the expense of our humanity and critical thinking. Using it doesn’t mean worshipping it. It’s one thing to rely on the machine, and quite another to turn it into a substitute for our judgment. Because when we banish common sense, what becomes common is nonsense.

We used to listen to the priest because he promised us heaven. Now we listen to AI because it promises us immediate certainties. Ultimately, it’s the same old story repeating itself: we prefer blind faith to uncomfortable doubt and the effort it takes to search for answers on our own. If we continue like this, we risk losing what makes us special: critical thinking.

Source:

Kosmyna, N. et al. (2025) Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. Massachusetts Institute of Technology; arXiv:2506.08872.


Jennifer Delgado
I am a psychologist (Registered at Colegio Oficial de la Psicología de Las Palmas No. P-03324) and I spent more than 20 years writing articles for scientific journals specialized in Health and Psychology. I want to help you create great experiences. Learn more about me.

