
The nature of intelligence and the limits of AI

Kenny MacIver — August 2017
Some fundamental differences exist between human and machine intelligence that will define the scope of AI — at least for the next half a century, says Dr Franz Josef Radermacher, professor of artificial intelligence at the University of Ulm.

The phrase ‘artificial intelligence’ may have entered the wider public vocabulary in recent years but there is a widespread misunderstanding (not to mention fear) about the nature and scope of the intelligence that is being developed for deployment in non-human entities.

For Dr Franz Josef Radermacher, professor of artificial intelligence at the University of Ulm in Germany and a co-founder of the eco-social initiative the Global Marshall Plan, a better appreciation will only emerge from a clearer definition of what we mean by ‘intelligence.’

“There is no common agreement on what constitutes ‘intelligence,’” says Radermacher. So applying it in a non-human context is as complicated as it is controversial.

He highlights the importance of separating intelligence from consciousness and from intention. In particular, any definition should be distinct from feelings, what in philosophical terms is known as ‘qualia,’ he says.

When considering AI, he prefers a tighter scope: “Intelligence is the ability to solve certain difficult problems — difficult in the sense that they are not easily solved by humans. And the more intelligent you are, the better you can solve those problems.”

Different types of intelligence

Clearly, as information technology has evolved into areas such as expert systems and machine learning, the ability to solve corresponding problems as well as — if not better than — humans has given rise to a perception of machine intelligence. But human intelligence and machine intelligence are not the same, Radermacher stresses.

“Just because a machine can solve a problem doesn’t mean that it does so in the same way,” he says, citing the analogy of flight. “You could say that an eagle is an interesting solution for flying and, separately, that an airplane is a good solution to the same problem. You may find aspects where the eagle is better than the airplane and you will certainly find aspects where the airplane is better than the eagle but there is no doubt about the fact that both do fly.”

It is similar with artificial intelligence and human intelligence, he says — notwithstanding the unstoppable rise in computing power.

“No doubt in the machine world the power of abstract processing will improve. At some point Moore’s law will come to an end, but before then we may see further improvements in efficiency by a factor of 1,000 or even 1 million in comparison to today’s systems (in regard to elementary bit operations).”

But even when machines become that powerful, it is not clear to him that they will be able to match the scope of intellectual capabilities that are possible in the human brain — let alone the human body. “Up to now we have seen nothing similar in artificial intelligence and it’s not clear to me whether machines will actually get there.”

Moreover, for the time being such machines will not come close to the multitude of abilities of the human body, he says. And that is before even considering the key characteristic of ‘qualia.’

Thinking — and feeling — machines?

“There is a question of whether a machine will ever really feel something or whether it can only be in a software-induced state that suggests to those around it that it is feeling something.”

For Radermacher, that points to a fundamental limit to what people should expect of AI in the future. “It may be that the ability to feel something is somehow coupled to human and higher animal life. And by the way, to feel something, be it positive or negative, is maybe the only phenomenon that gives life a meaning. That would then mean life is somehow special, that there could never be real artificial life.”

Certainly, in his view, it would be difficult for anyone to argue that even in 50 years’ time we will have intelligent machines that are similar to humans in the broad sense.

Photography by Enno Kapitza
First published: August 2017
About: Dr Franz Josef Radermacher
A German mathematician and economist with multiple doctorates to his name, Franz Josef Radermacher is professor of artificial intelligence at the University of Ulm and director of the Research Institute for Applied Knowledge Processing (FAW/n). A campaigner for a more positive future for humankind, he is a member of the global think tank the Club of Rome and a co-founder of the Global Marshall Plan movement, an eco-social initiative to rebalance global wealth while protecting natural resources.