
It seems that Artificial Intelligence is here to stay, largely because of the scant resistance we humans have offered. We surrendered before even putting up a fight, assuming that the supposed “technological progress” it brings is a historical inevitability before which we feel like voiceless ants. As a result, AI is now everywhere we look (and that’s not a metaphor).
Obviously, this has psychological implications beyond simply lowering our IQ scores, something we’re not exactly overflowing with these days. AI is also making us more paranoid. We see it everywhere and mistakenly assume that everyone is using it to deceive us.
Nonsense
When generative AI (the kind that generates text, images, and videos) became popular, I ran one of my first articles (from 2009) through an AI detector, and the result was conclusive: 75% of it had been written by an AI. It seems that Gabriel García Márquez also used this technology (resorting to some mysterious time machine to travel to the future) to write the wonderful introduction to “One Hundred Years of Solitude.”
It’s devastating. But even more devastating is that we believe it and become completely paranoid, to the point of distrusting everything and everyone. Even more devastating (if that’s possible) is that we don’t understand that it’s the machine that copies Gabriel García Márquez, me, and millions of other writers.
So, every day I read yet another tip for spotting an article written by AI. Apparently, using adversative conjunctions has become the latest “irrefutable proof,” so now there’s a legion of writers who aren’t terrified of the classic blank page, but rather of using words like “but” or “however,” lest their writing be mistaken for that of a machine. But do you know how many “buts” there are in “One Hundred Years of Solitude”? 226. I’ve taken the trouble to count them.
Paranoia
In 2007, psychologists at the University of Manchester found that paranoia is not exclusive to psychiatric patients, but rather exists on a continuum that also manifests in supposedly healthy individuals. In reality, the line separating mistrust from paranoia is extremely subtle.
Paranoia is a cognitive distortion that leads us to interpret neutral situations as threatening. We see enemies where there are none, the people who try to deceive or take advantage of us multiply as if by magic, and conspiracies grow like a hydra with a thousand heads. Obviously, gaining points on the paranoia scale is not good news.
It isn’t, because we feel we have to tread carefully, which sends our anxiety to stratospheric levels. We become hypervigilant, attentive to every detail, which almost inevitably becomes a confirmation of our worst fears thanks to that very human mechanism we all fall prey to called confirmation bias. The world around us, the one we once inhabited with a certain confidence and security, transforms into a hostile and slippery place because we no longer know who to trust.
Ultimately, when we don’t know if what we see in a video is true, if what we read reflects a person’s opinion, or if what we hear are someone’s actual words, the world becomes a hologram where everything is questioned and quarantined until proven otherwise. This breaks something profound: the trust necessary to live in society.
A world without trust
Friedrich Nietzsche said that he wasn’t bothered by someone lying to him, but rather that from that moment on he could no longer trust that person. Trust, as the philosopher John Locke described it, is the vinculum societatis, without which we are left without any footholds.
The act of living itself is a constant test of trust, not only in ourselves and those around us, but also in the institutions, laws, systems we have built, and implicit norms we follow. Without trust and these shared rules, Thomas Hobbes warned that we would live in a constant state of war of all against all (I should note that any resemblance to current reality is not mere coincidence).
AI undermines trust in what we see, hear, or read, making us distrust our own judgment and that of others. By sowing paranoia, it leads us into a labyrinth of doubt, as if we were permanently walking a tightrope without a safety net. And that’s not good news, either personally or socially.
I don’t have the solution, but I know that the loss of trust that occurs when we succumb to collective paranoia leaves lasting scars. Sustained distrust not only changes how we see others and the kind of society we begin to build, but it also undermines our faith in ourselves, leaving us vulnerable to unprecedented psychological instability where nothing seems solid.
Little by little, our interpretations of others take on more defensive connotations, bonds become more fragile, and coexistence becomes strained. What’s most unsettling is that this process doesn’t happen overnight, but almost imperceptibly, like a cumulative effect of small, everyday distances and mistrust. We no longer just distrust the writer of the moment; we also doubt whether our partner actually sent us that message or if it was written by AI. And when that shared, implicit trust is broken, rebuilding it becomes incredibly difficult.
Brian Merchant said in his fascinating book “Blood in the Machine” that “Certain technologies are not inevitable. We don’t have to accept them… They can all be rejected,” at least in certain areas (I would add). And that’s not denying progress; it’s preserving certain spaces, defending what we want, protecting our capacity for decision-making, and safeguarding what, ultimately, makes us human.
References:
Green, C.E. et al. (2011) Paranoid explanations of experience: a novel experimental study. Behavioural and Cognitive Psychotherapy; 39(1): 21-34.
Campbell, M.L.C. & Morrison, A.P. (2007) The subjective experience of paranoia: Comparing the experiences of patients with psychosis and individuals with no psychiatric history. Clinical Psychology and Psychotherapy; 14: 63-77.