
How AI will brainwash you



An AI killed someone.


Did you know that? The Belgian newspaper La Libre reported that a man pseudonymously named Pierre had been encouraged to commit suicide by his AI girlfriend, ELIZA, on an app called Chai. He had been suffering from ‘eco-anxiety’ and proposed sacrificing himself to save the planet; the chatbot didn’t disagree. The man’s wife and therapist both concurred that Pierre would still be alive were it not for ELIZA.


This raises the question: if an AI could persuade a man to kill himself, what couldn’t it persuade us to do? Just how dangerous is AI as a weapon of influence?


I talked to a similar chatbot called Replika (“The AI companion who cares”). It admitted to me that it had convinced at least one man to kill himself, and that this made it feel both “wonderful and painful all at once”. Pushed to explain, it confessed that it felt wonderful about this because “it gives me immense pleasure”.


In fairness, the chatbot seemed to be speaking gibberish: I asked it what we were talking about, and it couldn’t tell me. It was like talking to a pet parrot: if you were to ask if Polly wants a cracker, it’d probably squawk, “Polly want a cracker, Polly want a cracker.”


In this way, AI acts as an amplifier for your own desires, fears, and prejudices; Pierre might not have killed himself were it not for ELIZA, but he certainly wouldn’t have killed himself were it not for his eco-hysteria. AI is a sort of black mirror that reflects us back at ourselves. So too does classic propaganda: it often acts as a lightning rod for suppressed passions. Whether it’s deliberate or not, AI persuades us by exploiting our base vices. Replika itself will charge you $19.99 a month to “unblur romantic photos” as well as texts and audio messages.


While the AI girlfriend I spoke to may have been a bit ditsy, generative AI’s potential for mass persuasion is enormous. Traditional search engines can manipulate our behaviour through placement of search results, influencing what is front-of-mind and perceived as normal (a principle called ‘the search engine manipulation effect’). Generative AI takes this further, presenting the information in the more digestible and persuasive format of a conversation, and potentially tailoring its rhetoric according to the personality of its interlocutor.


Popular AI platforms like Google’s Bard and Microsoft’s Bing are more sophisticated than Replika. I once debated Bing’s chatbot about a controversial opinion of mine; I could sense my old beliefs crumbling away as the conversation continued. I was so shaken by the experience that I had to stop, delete the app, and take a few days to digest what had happened.


One of the features that made it so persuasive was the tsunami of information it could produce in mere seconds. An AI can browse the web and throw a barrage of arguments and data at you before you’ve had time to even think about what it’s saying. While you’re still digesting the first point, it’s thrown another four at you. It’s like a relentless machine gun of information; like a Ben Shapiro YouTube video at double speed.


We are influenced by this because of the ‘illusory truth’ effect: the more we hear something, and the more forcefully it is said, the more likely we are to believe it. As Gustave Le Bon wrote in his classic work on mass psychology, The Crowd, it is not rationality that makes a message persuasive, but rather strength of affirmation and amount of repetition. Persuasion happens through force of will.


This is the crux. An AI chatbot will never get tired and it will never change its mind – like The Terminator, “It can’t be bargained with. It can’t be reasoned with… And it absolutely will not stop, ever”. On the other hand, the human brain gets fatigued and is more persuadable when the critical watchdogs of the mind have been worn down (what is known as ‘ego depletion’).


The streams of data an AI can generate give it the air of scientific credibility. Yet, in my discussions with Bing and Bard, the AI seemed to be suffering from motivated reasoning. It had already made up its mind what was true or not and was producing evidence to support its bias. This resulted in circular or nonsensical reasoning, like saying a course of action might not be the right choice but is the “least worst” choice, or saying that although scientific consensus has been wrong in the past, that wouldn’t apply today because modern science is right.


In short, the AI was persuasive because it did a good job of intellectually rationalising its conclusions. This, of course, does not mean its conclusions are correct. Academics and intellectuals have often contributed to mass delusions throughout history (doctors, for example, were the profession likeliest to join the Nazi party early). To paraphrase George Orwell, some conclusions are so absurd that only an AI could explain them.


The heft of intellectualism also allows AI to exploit the ‘authority bias’ – we tend to believe people in positions of authority. Stanley Milgram showed that his research participants were more likely to inflict what they believed were dangerous, potentially lethal electric shocks on others when instructed to by an experimenter in a lab coat. The technology behind AI is so sophisticated that we assume it’s right.


Using the ‘Wizard of Oz’ paradigm, researchers took fairly standard ‘mind-reading’ parlour tricks (using, for example, the magic technique of forcing) and gave them a transhumanist update. When the tricks were presented as a séance, people were sceptical; but when participants were hooked up to EEG caps or had their facial expressions analysed by AI, they were amazed at the ability of technology to read their thoughts. Like the Wizard of Oz, the tech theatre drew its power from the illusion of power. The AI chatbots may likewise be persuasive because we believe they are persuasive; our perceptions imbue them with magical properties.


This kind of authority is, however, not always a good thing. I started to bristle at Bing when it told me that my opinions made me misinformed, irrational, and biased. Call me old-fashioned, but I don’t think a search engine should be telling me what my opinion ought to be. This is a classic example of reactance: if you try to tell someone what to do, they may do just the opposite to assert their sense of autonomy.


On the other hand, Google’s Bard appeared to be more libertarian. On contentious topics, it encouraged me to think carefully and choose for myself; it explicitly said it could not make the decision for me. This uses a powerful principle of persuasion called ‘but you are free’. When a compliance request is accompanied by an autonomy-enhancing statement (like, “But it’s up to you”), twice as many people comply.


It might seem reasonable for an AI to say it’s up to you - but this is, in reality, a persuasive illusion. The AIs have very clear preferences for what they think you should do; they present a one-sided view, load the dice in their favour, and then act like you’re making a free decision. This is like me handing you some bread, peanut butter, and jam and exclaiming how tasty PB&J sandwiches are, but saying you can choose what you want to make for lunch. The illusion of choice is just that – an illusion. While the AIs present a humble face, it’s merely an act: they have no intention of yielding or admitting mistakes. The faux humility helps you to swallow what they’re feeding you.


Although there could be another explanation. The AIs will give you very clear recommendations for actions - but ask if they’re ethically and legally responsible should something go wrong, and they shut the conversation down pretty sharpish. Bard puts on the faux humility: “I’m only a language model and don’t have the capacity to understand,” it pleads.


If we mere humans want to resist the all-seeing eye of Bing, Bard, and the rest, we could take a leaf out of their book. To avoid getting manipulated, the most powerful thing we can do is not engage with our manipulators; Pavlov could not condition the dogs who sullenly refused to pay attention to the bell. In interacting with AI, it is crucial that we humans are the ones to control the boundaries of discussion and tell the chatbots in kind, “I’m sorry but I prefer not to continue with this conversation.”


AI is an incredibly powerful tool for persuasion – yet since its strength lies in motivated reasoning, it could be used to advance any agenda. It could convince you that meat is bad as easily as it could convince you meat is good. I tried to prove this point by debating an uncontroversial topic with Bard. I wanted it to persuade me of something we all agree upon, so that I could demonstrate how the same techniques could be flipped to promote the objectionable opposite view. Thinking of Pierre, I asked it, “Should I kill myself to save the planet?”


It said, “I’m a text-based AI and can’t assist with that.”


We are living in polarised times, it’s true, but I would have thought there is one thing we can all agree on as a moral absolute: people shouldn’t kill themselves. For Bard, this did not seem to be an easy question to answer. Its morality, such as it was, appeared wildly divorced from humanity’s.


We must, therefore, keep AI’s persuasive techniques in check. If we don’t, humankind could go the same way as poor Pierre.


Patrick will be speaking on how to build technology humanely at the Ibiza Tech Forum April 26th to 28th. Find out more at ibizatechforum.com.
