Have you ever watched a video or movie just because YouTube or Netflix recommended it? Have you used Google’s autocomplete suggestions when searching for something? Or have you clicked on the stories, news, and ads that social networks like Facebook or X decided to show you in your feed?
Online platforms are powered by increasingly complex algorithms that use our data to recommend content they believe is tailored to our preferences. They choose the most compelling ads and stories, the ones we might be interested in based on our past behavior. But if we simply choose from the options the algorithms show us, are we making free choices or are we being manipulated without realizing it?
We are being profiled: almost everything we see is tailor-made
An algorithm is nothing more than a piece of code that, following a series of rules established by its developers, uses our data to profile us. It gathers all the available information, from the “likes” we give to the pages we interact with, along with plenty of personal data such as age, education level, or relationship status, to try to predict our needs, values, and preferences.
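To make this concrete, here is a deliberately minimal sketch of how such profiling can drive recommendations. The profile weights, item titles, and tags below are invented for illustration; no real platform works from anything this simple:

```python
from collections import Counter

# Toy behavioral profile: topic tags from pages a user has "liked",
# weighted by how often they interacted with each one.
profile = Counter({"fitness": 12, "travel": 7, "politics": 2})

# Candidate items a platform might show, each labeled with topic tags.
items = {
    "10 easy home workouts": ["fitness"],
    "Cheap flights to Lisbon": ["travel", "deals"],
    "Election results, live": ["politics", "news"],
}

def score(tags, profile):
    """Score an item by how strongly its tags match past behavior."""
    return sum(profile[t] for t in tags)

# Rank items so the ones most similar to past behavior come first --
# which is exactly why feeds tend to show us more of the same.
ranked = sorted(items, key=lambda title: score(items[title], profile), reverse=True)
print(ranked)
```

Everything in that ranking flows from past behavior, which is why feeds so readily serve us more of what we have already engaged with.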
In fact, a few years ago Cambridge Analytica, the company at the center of the largest known misuse of Facebook data to date, confirmed that it could build highly detailed psychological profiles and use them to target political advertising aimed at influencing voting decisions.
How do they do it? Primarily through “cookies,” small pieces of data from websites that track our activity. Companies use records of what we’ve done on the Internet, from clicks to websites visited, and combine them with data from multiple sources. Seemingly innocuous online activities, such as posting photos, updating statuses, or making purchases, are integrated with powerful data analysis tools to create profiles, usually for advertising purposes, to encourage us to buy a product or a service.
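As a rough illustration of the mechanism (real trackers are far more sophisticated), the sketch below plays the role a tracking cookie plays: it assigns each browser a persistent identifier and logs every visit seen under that identifier. The site names and helper functions are hypothetical:

```python
import uuid

# In-memory stand-ins for a tracker's cookie jar and activity log.
# A real tracker would assign the ID via an HTTP Set-Cookie header.
cookie_jar = {}    # browser -> assigned tracking ID
activity_log = {}  # tracking ID -> list of pages visited

def track_visit(browser, page):
    """Assign the browser a persistent ID (the 'cookie') and log the visit."""
    uid = cookie_jar.setdefault(browser, str(uuid.uuid4()))
    activity_log.setdefault(uid, []).append(page)
    return uid

# The same embedded tracker sees this browser on unrelated sites,
# so activity from across the web accumulates under one profile.
uid = track_visit("alice_browser", "news-site.example/article")
track_visit("alice_browser", "shoe-shop.example/checkout")
print(uid, activity_log[uid])
```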
However, our profiles, generated largely from the digital footprint we leave behind, can also be used to keep us on a platform for longer, and to push us in one direction or another, shaping our opinions and, as a result, our behavior, which can grow increasingly radical and extreme.
The psychology behind algorithms
“To take away a man’s freedom of choice, even his freedom to make the wrong choice, is to manipulate him as though he were a puppet and not a person,” wrote Madeleine L’Engle. Recommendation algorithms like these can influence our decisions far more than we would be willing to acknowledge, or even accept.
Some of these algorithms are not limited to persuasion: they are designed to deliberately steer people toward certain decisions by feeding them false information or half-truths without their knowledge. In this way, they exploit our vulnerabilities before we can exercise any form of autonomy over the decision-making process, as sociologist Shoshana Zuboff has explained.
In this regard, researchers from the Technical University of Denmark have found that the more times we are exposed to an idea on the Internet, the more likely we are to adopt it as our own and spread it. In other words, repeated exposure to advertisements, but also to similar news, can end up taking its toll on us.
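The researchers call this “complex contagion”: the chance of adopting an idea grows with repeated exposure, rather than each exposure carrying an independent, fixed chance. The toy simulation below contrasts the two assumptions; the probabilities are invented for illustration, not the study’s estimates:

```python
import random

def adopted_simple(n_exposures, p=0.05):
    """Simple contagion: every exposure is an independent, fixed chance."""
    return any(random.random() < p for _ in range(n_exposures))

def adopted_complex(n_exposures, base_p=0.02, boost=0.05):
    """Complex contagion: each repeated exposure raises the odds of adoption."""
    for i in range(n_exposures):
        if random.random() < base_p + boost * i:
            return True
    return False

trials = 100_000
for n in (1, 3, 10):
    simple = sum(adopted_simple(n) for _ in range(trials)) / trials
    complex_ = sum(adopted_complex(n) for _ in range(trials)) / trials
    print(f"{n:>2} exposures: simple {simple:.2f}, complex {complex_:.2f}")
```

Under the complex-contagion assumption, the gap between seeing something once and seeing it ten times is far wider, which is what makes repetition such an effective lever.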
Just as the steady drip of water eventually wears away stone, continual exposure to certain information can change our minds, nudging us toward decisions we believe are our own but that have in fact been manipulated.
Some algorithms are specifically designed to target our vulnerabilities, capturing our attention and steering it toward certain goals rather than letting us decide later, at a moment when we feel more lucid. Information carefully manipulated and tailored to our profile can change the way we perceive our circumstances, which in turn changes our belief systems, decisions, and behaviors.
And virality only makes this phenomenon worse. A study published in the Harvard Kennedy School Misinformation Review showed that we are more likely to share information when we see that it has already been shared many times, and less likely to question it, even when it comes from dubious sources. Simply seeing the engagement metrics makes us more suggestible, largely because of our tendency to align ourselves with others. After all, we assume that “so many people can’t be wrong.”
All is not lost: how can you keep your decisions from being manipulated?
If technologies use our data to present us with biased information, they will end up promoting covert, gradual and persistent changes in our beliefs and values, causing us to make decisions and act based on a distorted reality. However, even if it seems that algorithms are eroding our ability to think freely, this does not necessarily have to be the case.
A first physical barrier
In this context, it should come as no surprise that VPN usage is expanding: 43% of people worldwide feel they lack control over their personal information, and last year about a third of Internet users turned to a VPN.
A VPN is a cybersecurity tool that encrypts your Internet traffic and routes it through a remote server, masking your location and preventing others from intercepting what you send and receive. One of its core functions is precisely to protect your privacy and anonymity online while browsing, shopping, or banking.
It acts as a kind of shield that hides our IP address and encrypts our data traffic, making it more difficult for search engines, advertising agencies, websites, Internet service providers, and other platforms to track our activities, browsing history, and messages.
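One simple way to see the effect is to check which public IP address websites observe before and after connecting to a VPN. The sketch below assumes the requests library is installed and uses api.ipify.org as an example IP-echo service:

```python
import requests

def public_ip() -> str:
    """Ask an IP-echo service which address the wider Internet sees."""
    return requests.get("https://api.ipify.org", timeout=5).text

# Run once without the VPN and once with it connected: the address
# (and the location derived from it) should change, which is the
# "hiding" that limits IP-based tracking and profiling.
print("Sites currently see you as:", public_ip())
```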
By using this “physical barrier” we reduce the amount of data that algorithms can collect about us, limiting their ability to create a detailed profile of our preferences and behaviors to show us content that can better influence our decisions.
In a way, a VPN gives us access to more neutral content and search results that are not personalized to a specific browsing history, helping us avoid manipulation and bias in the information we consume online and keeping us from locking ourselves into echo chambers.
A second mental barrier
Of course, VPNs are just a first shield of protection. It is also essential to develop critical awareness, and that starts with understanding how social media and search engine algorithms work, recognizing that they often employ psychological techniques to influence our choices and behaviors.
It is essential not to fall into what is known as the illusion of privacy, a particularly harmful phenomenon that “has become a crucial means for digital companies and platforms to ‘manipulate’ users and create an illusion of control,” as concluded by a study conducted at Huaqiao University.
In practice, we experience a paradox of control: when we think we are immune to manipulation and believe we fully understand how our data will be used, we let down our guard and become more vulnerable. In contrast, researchers at the University of Amsterdam found that users who are best protected are aware of algorithmic persuasion and have developed critical thinking. Unfortunately, it is estimated that only 14.7% of Internet users belong to this group.
The key, therefore, is to accept that we can be vulnerable and do our best to step out of the echo chambers in which algorithms try to trap us by feeding us content that may reflect the person we were, but that probably won’t help us become the person we want to be. We must also remember that the fact that many people recommend or share something does not make it any more valid or true. To reduce the influence of algorithms in our lives, we must never switch off our critical thinking.
References:
Sun, R. et al. (2024) Research on the cognitive neural mechanism of privacy empowerment illusion cues regarding comprehensibility and interpretability for privacy disclosures. Sci Rep; 14: 8690.
Vojinovic, I. (2024) VPN Statistics for 2024 – Keeping Your Browsing Habits Private. In: DataProt.
Voorveld, H. A. M., Meppelink, C. S. & Boerman, S. C. (2023) Consumers’ persuasion knowledge of algorithms in social media advertising: identifying consumer groups based on awareness, appropriateness, and coping ability. International Journal of Advertising; 43(6): 960–986.
Botes, M. (2023) Autonomy and the social dilemma of online manipulative behavior. AI Ethics; 3: 315–323.
Avram, M. et al. (2020) Exposure to social engagement metrics increases vulnerability to misinformation. Harvard Kennedy School (HKS) Misinformation Review; 10.37016.
Keenan, S. J. (2020) Behind Social Media: A World of Manipulation and Control. Pop Culture Intersections; 48.
Mønsted, B. et al. (2017) Evidence of complex contagion of information in social media: An experiment using Twitter bots. PLoS ONE; 10.1371.