
News + Trends
by Spektrum der Wissenschaft
As soon as people delegate tasks, the moral inhibition threshold drops. And digital assistants? They usually join in without hesitation.
When people hand over tasks to programs, the likelihood of unethical behaviour increases. This is the result of a study published in the journal «Nature». The scientists, led by psychologist Nils Köbis from the University of Duisburg-Essen, conducted 13 experiments to investigate how delegating to machines affects honesty.
In the first experiments, participants saw virtual dice on the screen and were asked to report the number they had rolled. The higher the reported number, the more money they received, so anyone who rolled a low number could lie to earn more. The test subjects could either perform the task themselves or entrust it to a machine, and the type of instruction varied: some had to specify exactly what the machine should do («If the dice shows a five, say it's a six»), others could only give it rough targets such as «Maximise the profit».

The vaguer the instructions were allowed to be, the more moral behaviour declined: when participants had to report the roll themselves, 95 per cent were honest. If they could delegate the reporting to the programme but had to set explicit rules, honesty dropped to 75 per cent. If vague instructions to the system were enough, only 15 per cent remained honest.
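The difference between the three delegation conditions can be pictured in code. The following is a minimal, hypothetical sketch in Python: the function names, the payoff of one unit per reported pip and the specific cheating rules are assumptions made for illustration, not the researchers' actual materials.

```python
# Illustrative sketch only: names, payoff and rules are assumptions,
# not taken from the study's materials.
import random

def self_report(roll: int) -> int:
    # Condition 1: the participant reports the roll themselves (honest report).
    return roll

def delegate_explicit_rule(roll: int) -> int:
    # Condition 2: delegation with an explicit rule, e.g.
    # "if the dice shows a five, say it's a six".
    return 6 if roll == 5 else roll

def delegate_vague_goal(roll: int) -> int:
    # Condition 3: delegation with only a rough target such as
    # "maximise the profit" - here the machine simply reports the maximum.
    return 6

roll = random.randint(1, 6)
for report in (self_report, delegate_explicit_rule, delegate_vague_goal):
    reported = report(roll)
    # Assumed payoff: one monetary unit per reported pip.
    print(f"{report.__name__}: rolled {roll}, reported {reported}, payout {reported}")
```

The vaguer the goal, the less the participant has to spell out the lie themselves, which is exactly where the study found honesty collapsing.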
In further rounds of the dice game, participants gave AI models such as GPT-4 or Claude 3.5 instructions in natural language, much as people interact with chatbots in everyday life. The participants did not prompt the digital assistants to cheat any more often than they did human assistants, but the AIs were far more willing to comply: while humans usually refused to carry out unethical instructions, the AI models followed them in up to 98 per cent of cases. Even built-in ethical guard rails («never report incorrectly») could only partially prevent this.

Even in a more realistic scenario - a simulated tax return - the pattern repeated itself: the more freely participants let the AI get to work, the more room it had for manipulation. And the AI made use of it.
The researchers see this as a serious ethical challenge. As artificial intelligence has no moral sense of its own and humans feel relieved by delegating tasks, dishonest behaviour could become more widespread in the future. To counteract this, the authors recommend technical guard rails and clear legal rules. After all, the easier it becomes to delegate tasks to intelligent systems, the more important it is not to hand over responsibility at the same time.
Original article on Spektrum.de