By Andrew Jeavons
There seems to be a growing discussion about what Artificial Intelligence (AI) is. One thing we do know is that it is artificial; usually this is taken to mean computer-based. The intelligence part is rather more amorphous.
What is intelligent behavior?
If we can identify that, we may get a bit closer to understanding what AI is.
What IQ tests measure is not intelligence; they are a rough gauge of abilities that may or may not be useful in life. There is no help from psychometrics in defining AI. We need a more fundamental definition of what intelligent behavior is.
From the 1950s onwards, a theory about the development of children’s cognitive skills became prominent. Its author was Jean Piaget. Many people referred to him as a psychologist, but that is not what he called himself. Piaget described himself as a “genetic epistemologist”, which is a complicated way of saying that he studied the development of knowledge. Piaget was trained as a biologist, and in the early part of his career he studied water snails in a lake in Switzerland. He then turned to studying children, initially his own. While much of Piaget’s theory of children’s cognitive development has long since been discarded, there are fundamental elements of his approach that can help us understand the concept of Artificial Intelligence. Piaget retained his roots in biology; he was interested in all forms of intelligent behavior, be it in a plant, a child or a water snail.
For Piaget, “the nature of intelligence is adaptation”. When he studied children he developed theories that stressed the constant change and adaptation of the internal mental structures that encode knowledge. Central to Piaget’s theory is the idea that mental structures need to maintain a certain stability: they must adapt to new information and grow in complexity to preserve that stability. These knowledge structures allowed children to react to new situations by drawing on their previous knowledge of the world. They could adapt mentally to new experiences.
Piaget’s idea of the adaptation of knowledge structures was not restricted to humans. He saw all animals as having “biological knowledge” of their environment, which enables them to adapt to changes in that environment. The idea is elucidated in a conversation with the French journalist Jean-Claude Bringuier.
Piaget: I am convinced there is no sort of boundary between the living and the mental or between the biological and the psychological. From the moment an organism takes account of a previous experience and adapts to a new situation, that very much resembles psychology.
Bringuier: For instance, when sunflowers turn towards the sun, that’s psychology?
Piaget: I think, in fact, it is behaviour.
This gives us a definition of intelligence that we can apply to AI. For something to be defined as intelligent it must adapt, which implies that the intelligence must be constantly active. AI is not analytical; it is adaptive. So a system that monitors advert placements based on CPM and changes those placements according to the results is intelligent: it is an AI. Using a deep learning algorithm to identify the content of images is not AI; it is analysis. The deep learning algorithm may have been inspired by neural network theory, but that does not make it AI. AI is a dynamic behavior, not a static process.
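To make the distinction concrete, here is a minimal sketch of the kind of adaptive loop described above: a system that keeps watching CPM results and shifts spend towards the placements that are currently performing better. The placement names, the observe_cpm() feed and the reallocation rule are all hypothetical illustrations, not any particular ad platform’s API.

```python
import random

def observe_cpm(placement):
    """Stand-in for a real measurement of cost per mille for a placement."""
    return random.uniform(2.0, 10.0)

def run_adaptive_placement(placements, rounds=100, learning_rate=0.1):
    # Start with an even split of budget across placements.
    allocation = {p: 1.0 / len(placements) for p in placements}
    for _ in range(rounds):
        cpms = {p: observe_cpm(p) for p in placements}
        # Lower CPM is better, so score placements by 1/CPM ...
        scores = {p: 1.0 / cpms[p] for p in placements}
        total = sum(scores.values())
        # ... and nudge the allocation towards the better performers.
        for p in placements:
            target = scores[p] / total
            allocation[p] += learning_rate * (target - allocation[p])
    return allocation

if __name__ == "__main__":
    print(run_adaptive_placement(["homepage", "sidebar", "newsletter"]))
```

The point is not the particular rule used; it is that the loop never stops. The system’s behavior today depends on what it observed yesterday, which is what makes it adaptive rather than merely analytical.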
Piaget is proposing that any cybernetic system is intelligent. The term cybernetics was coined by Norbert Wiener in 1948 as the scientific study of control and communication in organisms. The word “cyber” comes from the Greek word for a steersman or navigator. Over the years the term cybernetics has become associated with computers and technology, but that is not what it really means. Both Wiener and Piaget were interested in homeostatic regulation, the mechanisms organisms use to maintain constant states. This could be internal body temperature for animals or optimal orientation for photosynthesis in sunflowers; both are examples of cybernetic systems. And for Piaget, the regulation of internal mental states and knowledge structures is another example of a cybernetic system.
Unless a deep learning algorithm, neural network or expert system is constantly working to maintain some state, it is not intelligent. The target state could be market share, ROI or any number of KPIs (key performance indicators). Any AI must have a goal, an outcome it can “navigate” towards, to be termed intelligent. For sunflowers it is simple: they seek the sun, and hence we can call them intelligent.
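In control terms, such a goal-seeking system is a homeostat. The sketch below, under made-up assumptions (a KPI that responds linearly to spend, an arbitrary target and gain), shows the shape of that loop: measure the KPI, compare it to the goal, and keep adjusting the controllable input until the system settles at its target state.

```python
def measure_kpi(spend):
    """Stand-in for observing the real KPI (e.g. market share) at the current spend."""
    return 0.08 * spend  # assume each unit of spend yields 0.08 points of share

def homeostat(target_kpi=25.0, spend=100.0, gain=2.0, steps=50):
    for _ in range(steps):
        kpi = measure_kpi(spend)
        error = target_kpi - kpi
        # Adapt the controllable input in proportion to the distance from the goal.
        spend += gain * error
    return spend, measure_kpi(spend)

if __name__ == "__main__":
    print(homeostat())  # settles near the spend that holds the KPI at its target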
Claiming to use “AI” to tackle a problem is becoming increasingly common within the research world, yet most of these claims are false. Deep learning, machine learning and neural networks are very sophisticated and capable methods of analysis, but they are not AI. They can form the basis of an AI, but merely using them is not AI.
We probably need to adapt what we term AI.
By Andrew Jeavons, Mass Cognition
1 comment
Andrew,
I agree with all you said BUT …
… rather than seeing the term from an academic viewpoint, we might judge it from a different perspective: is it useful to use the term?
Automobiles have never been “auto”, i.e. self-driving. Still, it was a useful term to promote a useful innovation. In the same sense I believe in the productive hype around the A.I. term. Most high-end A.I. techniques have been around for at least 20 years but are still rarely used. Why? Business people love simplicity and hate complex things. But complex matters must be managed with complex approaches (Einstein). We need help to break through on this.
Therefore it is to the benefit of all to make “A.I.” techniques look magical to non-technicians. It is a good thing, because those techniques drive impact and should be used much more intensively.
Frank