The art of co-existing with intelligent machines
As companies start to deploy advanced AI applications — from self-driving cars to robot co-workers — they need to better understand the new psychology of human-machine interaction, says Dr Franz Josef Radermacher, professor of artificial intelligence.
Anthropomorphism is regarded as an innate tendency of human psychology. The application of human traits to non-human entities — whether an animal, a god, a machine or an object — is irresistible and natural.
But as digital devices have grown in intelligence, enhanced with the capability to learn, that anthropomorphic tendency needs to be reconsidered, argues one of the world’s leading authorities on AI, Dr Franz Josef Radermacher.
“As humans we often project onto another party. So when a machine behaves in a certain way then we immediately interpret what the system is doing as if a human would be doing it, effectively giving it a human face.”
For example, a multi-factor online security check means (to the user) the bank’s system is “nervous” or “suspicious” when all it is doing is running an algorithm, says Ulm University’s Radermacher. “We have the tendency to over-project and see intention, autonomy or even consciousness in machines,” he says.
And that trait will create opaque situations and misunderstandings as people begin to interpret the decisions and actions of technologies such as self-driving cars.
It is a situation that is challenging psychologists and next-generation vehicle developers at German premium car companies who are working on the issue.
Trust in the machine
“We have to study the effects that a car has on a human when it is behaving autonomously,” he says — both inside and outside the vehicle. “On seeing a problem ahead, a human may feel they don’t need to react because they have so much trust in the machine and an expectation that it will react appropriately, as it has on previous occasions,” he says. “So, sometimes, it’s hard for the person to interfere.”
At some point in the future that trust might be justified but for the next few decades, intelligent, highly connected cars will have to co-exist on the road with more traditional vehicles.
Radermacher cites an everyday example. Many drivers approaching a green traffic light tend to speed up slightly so they get through while it is still green. But a self-driving vehicle may already know precisely whether it will make it through on green and, if not, it will automatically choose to slow down. “These different levels of information and interpretation may create very dangerous situations,” he argues, with some drivers speeding up as intelligent cars brake.
“There is a lot of research by car companies into the issues created when different levels of information are available to different actors in traffic. And they are coming up with some very interesting insights,” he says — in particular, when four wheels meet two wheels.
“One of the most interesting things is seeing how smart cars react in combination with a motorbike, where the human has a much stronger role,” he says. “The more automatic cars are programmed to be careful and can only behave reasonably, the more fun it will be for humans on motorbikes to trick these automatic cars.”
A motorbike weaving between cars will, he suggests, be interpreted by the autonomous cars as a dangerous situation and, with nearby cars sharing that information, the natural reaction will be to slow or bring four-wheel traffic to a halt.
This is just one of the subtleties being explored, and it will take plenty of deep thinking on the psychology of human-machine interaction to come up with solutions that will work in practice, he concludes.
• Photography by Enno Kapitza