News + Trends

Panasonic Educational Partner: The apocalypse is about the size of a football

Dominik Bärlocher
6.9.2017
Translation: machine translated

We will all die. Robots will come and finish us off. The latest threat is called the Panasonic Educational Partner: a fascinating prototype that can do, and wants to do, a great deal, but is genuinely creepy.

The IFA in Berlin is not just about presenting new products that will soon go on sale. Here and there, manufacturers show technology that doesn't exist yet, or won't for a while. One of these prototypes is the Panasonic Educational Partner, which not only sets the technological bar high but also raises the creep factor to unimagined heights.

But before I invoke the robot apocalypse and loudly proclaim that we are all doomed, let me explain what the Educational Partner is. I'll also quickly explain why I hope the robots won't kill us all.

The future in football form

The Panasonic Educational Partner is small, compact and neutral in colour: a sphere about the size of a football, made up of three elements.

A sketched floor plan of the Educational Partner

Sensors such as cameras, microphones and environmental sensors sit in the two side segments. The centre section is made of frosted glass and serves both as the robot's means of locomotion and as a display surface for its face. To make the Educational Partner look a little more human, but not too human, it has an animated, emoji-like face intended to anthropomorphise the device. Behind it sit microphones, LEDs and plenty of computing power.

The Educational Partner is intended to one day become a companion for children aged three to six. This is where the device gets really interesting: the grinning football, which speaks in a child's voice with a slight lisp, is to be connected to all kinds of cloud services. These are designed to keep it constantly supplied with new play ideas, medical findings and data, and a whole mountain of other things.

"The Educational Partner is designed to instil strong moral and ethical values in the child," says a voice-over during a presentation. The presentation on screen gives an example of healthy eating. The Educational Partner tells a story that essentially goes like this: "Eat your carrots and you'll be as strong as Carrot Man".

Why we are all going to die

The lisping, cute robot is creepy. Despite its childlike pitch, the voice sounds artificial. The phrasing of some sentences is off, and the emoji faces don't always match what is being said. It talks about moral and ethical values while making a winking smiley face. Not exactly trustworthy.

"We'll even pay for the robots that kill us in our own homes," I say. A man next to me starts laughing. Stephanie records it all on camera, but says the take is useless because she has to laugh along with him.

Realistically, though, buying a robot like the Educational Partner comes with a whole host of risks. What if third parties gain access to one of the services the creepy robot head communicates with? Suddenly you get advice like "Shake your child to make it stop crying", which is cruelly wrong, but because the robot trusts the service, and the parents trust the clumsy robot that wobbles back and forth so sweetly when it speaks... disaster strikes.

"And the thing could be armed with knives," I add. Stephanie messes up the take again. After six days at IFA, we're clearly too tired for a serious discussion of the IT security risks of educational robots.

Why we all won't die

The whole built-in-knives scenario is, of course, nonsense. Nevertheless, such a harmless little ball poses real dangers to life, limb and, to a certain extent, humanity. Authors have been thinking about this for a long time, even if I'm one of those who considers the talk of "$author foresaw $thing" to be nonsense. Because fiction is just that: fiction.

  • Author Max Barry described a possible scenario of corporate imprinting in his novel "Jennifer Government"
  • Dave Eggers addressed constant surveillance, albeit with little emotional punch, in "The Circle"
  • Indoctrination and, above all, control are the central themes of George Orwell's "1984"

When it comes to robotics, however, science fiction author Isaac Asimov was a thought leader, if you can call a fiction writer that. In his 1942 short story "Runaround", he set down three laws that every robot must obey, and they remain more or less the standard in the development of artificial intelligence today.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, unless such orders would conflict with the first law.
  3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.

In 1975, Asimov relaxed the laws somewhat by adding the word "knowingly" to the first law: a robot may not knowingly injure a human being, or knowingly allow one to come to harm. In 1985, he also added a "zeroth" law, which prohibits a robot from harming humanity as a collective.
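The laws form a strict hierarchy: each law yields to the ones before it, with the zeroth law outranking them all. As a purely illustrative sketch (all names and data structures are my own invention, not anything Asimov or Panasonic specified), that hierarchy can be read as a priority scheme for choosing the least objectionable action:

```python
# Illustrative only: Asimov's laws as a priority ordering.
# Lower number = higher priority; the zeroth law outranks all others.
LAW_PRIORITY = {0: 0, 1: 1, 2: 2, 3: 3}

def worst_violation(violated_laws):
    """Return the highest-priority law an action would violate, or None."""
    if not violated_laws:
        return None
    return min(violated_laws, key=lambda law: LAW_PRIORITY[law])

def choose_action(actions):
    """Pick the action whose worst violation is least severe.

    `actions` maps an action name to the set of laws it would violate.
    An action that violates no law always wins.
    """
    def severity_key(name):
        worst = worst_violation(actions[name])
        # No violation sorts above any real violation.
        return 99 if worst is None else LAW_PRIORITY[worst]
    return max(actions, key=severity_key)

# A robot ordered to do something harmful (law 1) should rather
# disobey (law 2), and disobeying beats standing idle only if
# standing idle violates nothing.
print(choose_action({"obey_harmful_order": {1}, "refuse_order": {2}}))
```

This is, of course, exactly the kind of trust chain the article worries about: whichever service feeds the robot its "violation" data effectively decides what the laws mean.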



Journalist. Author. Hacker. A storyteller searching for boundaries, secrets and taboos – putting the world to paper. Not because I can but because I can’t not.

