

Artificially intelligent? Perhaps. What’s increasingly produced in everyday office life, however, is often merely imitative. Researchers are now calling it «workslop» – digital, glossy rubbish. It looks good, sounds clever and is of no use whatsoever.
Companies around the world have invested billions in AI. According to an MIT study, the figure’s around 30 to 40 billion US dollars. However, a large proportion of the projects that money has gone into haven’t delivered any measurable benefit – the study estimates this applies to 95 per cent of them. The problem isn’t the technology, but the people behind it. If you blindly chase every hype, you can’t be surprised if all you end up with is garbage.
A recent study by BetterUp Labs and the Stanford Social Media Lab reveals how serious the problem is. 40 per cent of employees received AI-generated content last month that seemed perfect at first glance, but turned out to have no substance.
In addition to this, the ongoing costs are high. On average, each case requires two hours of rework per person. Every month. Instead of productivity gains, the result is higher costs – and declining motivation. If you have examples of this, you’re welcome to share them in a comment below.
Sometimes AI’s just a bit embarrassing and does things that are annoying but don’t cause much damage, like this cookie tin showing a reindeer with five legs (link in German). It’s also been known to suddenly give a flipper at Galaxus two different shoe sizes. That’s the AI that’s supposed to enrich our range of eight million products with as much information as possible – and it’s not yet working as well as we’d like.

Or worse, strategic mistakes are made. Anyone who believes they can use AI to predict the future sales of a currently successful product is unlikely to get far. AI only learns from existing data; it looks at history and recognises patterns from the past, so it’s very likely to miss disruptive trends. Things can go really wrong if a company then produces too many goods and is left sitting on them because a competitor comes out of nowhere with an innovative product.
More specifically, would an AI advise Digitec and Galaxus to disclose the returns and warranty case rate if we hadn’t already done so? A quick test with the corresponding prompt at three major AI providers shows: no. ChatGPT’s Atlas browser, for example, advises a «middle ground», justifying this with legal risks and possible «tensions with suppliers». Still, the AI does recognise what we also found most important when we introduced the feature: it benefits customers, strengthens trust through transparency and is innovative.
Many organisations are rushing to introduce AI – without clear goals, without standards, without training. Employees often don’t know when AI’s really helping and when it’s just producing hollow results. As a result, AI gets used as a supposed shortcut rather than as a smart tool: people create texts, presentations or analyses that look professional but remain flat in terms of content.
The problem is more cultural than technological. Those who aren’t deliberate in how they use AI end up replacing critical thinking with convenient clicking.
The human brain’s like a muscle. It only grows when it’s challenged. If it never has to do anything strenuous, it stays weak – research backs this up.
So if you delegate every other thought to ChatGPT or similar tools, you’re reducing your own thinking skills. You may save minutes, but you pay with cognitive idleness. And that happens faster than you think.
«Workslop», i.e. an AI creation without much substance, destroys not only efficiency but also trust. More than half of employees consider colleagues who send this kind of low-quality AI content to be less competent. It’s hardly surprising: if everything looks like PowerPoint but nothing reads like content, you lose faith in the other person.
This is particularly tricky in teams that rely on creativity, responsibility and initiative. No doubt, AI can provide support. But it shouldn’t give the impression that critical thinking has become optional.
AI has undisputed potential. Those who use it in a targeted manner will (almost certainly) benefit. With the right tools, employees can be relieved of repetitive work. This creates space for what’s often neglected: creativity, strategy, interpersonal skills. What’s more, people still have to think for themselves about what the company wants to achieve. Those who use AI thoughtlessly, even for critical topics like this, very often produce junk that has simply been given a glossy surface. AI doesn’t help if there’s no clear corporate strategy and no clear vision.
Good leadership means exchanging ideas with your teams and creating a shared vision. The aim is to experiment, learn and create purpose within a clear framework.
Ultimately, one thing will always remain true:
Thinking for yourself is the only way to be smart.
What have you experienced when using AI tools? What’s going well where you work, what’s not so great?
This is a subjective opinion of the editorial team. It doesn't necessarily reflect the position of the company.