
Opinion
Why I consider Mark Zuckerberg one of the most dangerous people in tech
by Debora Pape
I consider Meta CEO Mark Zuckerberg to be one of the most dangerous people in tech. Using his enormous influence, he’s building a platform that’s gradually changing our society for the worse. And hardly anyone’s standing in his way.
Galaxus advertises on Facebook and Instagram. For the simple reason that our customers are there. From a business point of view, there’s basically no way around it. But frankly, we’d rather not place ads on those platforms.
Meta gave up the idea of being purely a social media company a long time ago, morphing into an addictive system that’s trying harder and harder to replace real relationships with artificial ones.
On an episode of the Dwarkesh podcast, Zuckerberg revealed that he sees a future where «friendship» might also involve interacting with AI chatbots, be it as therapists, close confidants or simply conversation partners.
«For people who don’t have a person who’s a therapist, I think everyone will have an AI.»
The thing is, these artificial «friends» won’t really be yours. They’ll only be available to you for as long as you pay for a subscription. And the stronger people’s emotional attachment to such a system grows, the easier it’ll be for the platform to raise prices. It’s a business model that thrives on emotional dependency.
According to Zuckerberg’s vision of friendship, you could end up pouring your heart out to an algorithm. Chatbots would turn into therapists. Emotional connections would be simulated – and marketed – by a product.
More importantly, these digital companions won’t be able to do anything real when times get tough. You won’t be able to crash at their place if you need company or a shoulder to cry on. They won’t be able to lend you a helping hand. Although they’ll listen to you, they’ll still be simulations incapable of providing genuine help or care.
This isn’t science fiction any more – it’s product strategy. And if we’re not careful, it’ll become part of our day-to-day life.
To get a glimpse of the kind of ubiquitous monitoring the future holds, you needn’t look much further than the Ray-Ban Meta glasses. They’ve been EssilorLuxottica’s best-selling model since getting an AI upgrade, beating brands such as Oakley and Prada (linked article in German). With products like these, Meta’s made it clear that it’s on a road towards constant AI interaction. If that wasn’t enough, OpenAI has announced that it’s working on a product that could record your whole life, much like the one in the Black Mirror episode «The Entire History of You».
If people start spending their time with AI bots instead of other people, it’ll change more than just the way we communicate. It’ll change our self-image, our relationships and the way we behave around each other. AI doesn’t contradict you. Nor does it challenge you. It just does what you want it to. But the very thing that makes real relationships special is their imperfection.
If you’re always talking to AI, you’ll forget how to deal with real people. Patience, empathy, the ability to compromise – all of this will atrophy, leaving us with a new form of loneliness. A simulated, comfortable, but profoundly empty loneliness. One that’s been linked to increased levels of depression.
Young people are at particularly high risk (article in German): during developmental phases, real and sometimes difficult relationships are crucial for reaching emotional maturity.
You reveal intimate information with every sentence you write to an AI bot. While you’re conversing, you’re being analysed. Meta knows when you’re sad, what scares you and what hurts you. And that’s no coincidence. In fact, it’s the company’s business model.
This data flows into a hyper-personalised marketing machine, becoming even more targeted, even more manipulative and even harder to recognise. Rather than receiving therapy, you’re being manipulated. At the very moment you’re most emotionally vulnerable, no less.
Put the issue under even closer scrutiny, and an interesting contradiction arises. The people who’ve developed these technologies deliberately keep their own children away from them. Just look at how they approach their kids’ screen time.
It should be obvious to everyone that this is no coincidence. These people are aware of the psychological impact and consequences of their products. They know how much their platforms rely on dopamine, social validation, endless scrolling and emotional manipulation.
They also know that if you start early, you won’t learn to regulate yourself and you’ll become addicted. And they’ve taken action. For their own kids. For their own families. Perhaps we should do the same.
If even the developers distrust their products, perhaps it’s time for us to think twice about using them. Personally, I only use social media for professional purposes (LinkedIn for communicating and exchanging ideas). In my private life, I have no interest in using social media platforms; they only interest me as a cultural phenomenon, on a sociological level.
Although Galaxus still advertises on Meta, we’re constantly questioning our decision to do so. We’d certainly like to do it less often. However, as long as the platforms still have users, Meta ads make sense from a business point of view, and are therefore indispensable.
You can decide to spend less time on these platforms. Not only that, but you can critically question who you’re paying attention to.
Every decision against these platforms is a decision in favour of real relationships, your own attention and independent sources of information.
That way, you take your attention away from a company like Meta, undermining the basis of its business model. After all, attention is what Meta sells to advertisers, and there’s no growth without advertising.
Why not also invest money in high-quality local media? In platforms where journalists work to a professional ethos, not ones dominated by money-grabbing influencers and vanity? Without high-quality information, you lose the ability to make well-informed decisions.
Zuckerberg isn’t going to change his course. Why should he? He’s building the product most profitable for Meta – and legally, he’s allowed to do so. The responsibility lies with us. As users, as a society.
If we don’t want faceless algorithms to define our values, we have to make a conscious decision about how to deal with them. About what kind of digital world we want to live in. We also need political and legal regulation of digital spaces, just like we have for any other industry that’s potentially destructive to society.