I’ve been reading a recent book by Max Bennett called “A Brief History of Intelligence”. Funny the number of titles with the expression “… a brief history …” ever since Stephen Hawking’s book about time. Initially I was not enthralled, despite its superb reviews; then there were some interesting things, but I am still unsure what I think. The subtitle hints at AI and there are bits and pieces, but mostly it is about the evolution of homo sapiens’ brains. That alone is worthwhile. The problem is I know a lot about some aspects of AI and the biology of intelligence, and I know a bit about evolution and the human species and brain structure, so I am a critical reader.
The bottom line is that Bennett feels there are five evolutionary breakthroughs that have resulted in the human brain with its human intelligence. He calls them “breakthroughs”, which is not really how evolution works. It is not a “eureka, I got there!” process; that implies direction or goal orientation. There are five mutations, or five sets of numerous simultaneous mutations, that produced the changes which so far have led to us. But that is nitpicking.
1: bilateralism, direction, valence
2: reinforced, model-free learning
3: modelled learning, world simulation
4: theory of mind, internal simulation of self
5: language
His five historical stages actually make sense. They are not stages leading to us, but stages leading to our brains.
His first one is when animal body configurations became bilateral. Before this, for most of the history of life on Earth, organisms had been single-celled. Once multicellularity arose (not a breakthrough, because the majority of life is still single-celled), any number of body shapes and configurations abounded, most of them blobs. But a mobile organism with bilateral symmetry, that is, with a front and back and a left and right, is better placed than a blob to use whatever mobility mechanism it has to choose between going left or right for some purpose. Even single cells and many of the amorphous blobs had mobility, but choosing to orient right or left has less meaning if you are an amorphous disc or a sphere or a branching thing. Multicellularity can be of two types: one where every cell is the same, and one where different cells have different features. Then there is the critical second part: the purpose. There needs to be a mechanism for deciding on left or right, attraction or avoidance, what I always called tropism and what now seems to be termed “valence”. Bennett describes the evolution of some cell types specialized as what we now call neurons, nerve cells acting like communication lines, with complex connections to other neurons that are comparable to computers, using chemical transmitters in lieu of digital signals, some ending at the mobility structure, so that a calculation, the valence, decides on moving left or right. To my mind, “breakthrough” number 1 was really a series of steps, each a critical “breakthrough” in itself. This section of his book covers the most ground, from the origin of life through to the first hints of what we think of as intelligence.
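The left-or-right decision driven by valence can be sketched in a few lines. This is my own toy illustration, not anything from the book; the function and parameter names are invented for the example. Two sensors (one per side of a bilateral body) feed a single calculation whose sign, the valence, turns attraction into avoidance:

```python
def steer(left_sense: float, right_sense: float, valence: float) -> str:
    """Decide a turn from two sensor readings and a valence.

    valence > 0 means the stimulus is attractive (turn toward the
    stronger signal); valence < 0 means it is aversive (turn away).
    All names here are illustrative, not Bennett's.
    """
    gradient = right_sense - left_sense   # which side senses more?
    drive = gradient * valence            # sign of valence flips the response
    if drive > 0:
        return "right"
    if drive < 0:
        return "left"
    return "straight"

# A chemical gradient stronger on the right, with positive valence (food):
print(steer(left_sense=0.2, right_sense=0.9, valence=+1.0))  # right
# The same gradient with negative valence (a toxin):
print(steer(left_sense=0.2, right_sense=0.9, valence=-1.0))  # left
```

The point of the sketch is that the same wiring serves both tropisms; only the sign of the valence changes.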
His second stage was the earliest forms of learning: trial and error, reinforcement. Remembering what to do next time something happens similar to something that happened before; remembering what you did last time and repeating it if the result was favourable, or doing the opposite if it was unfavourable, assuming you are still alive. There is no internal model of what the environment looks like or of what the good choices are. It is all reinforcement on experience. The admittedly simplistic mechanism for favourable and unfavourable is described as the presence of either dopamine or serotonin, acting as positive or negative reinforcers. These are two of the hundreds of neurotransmitters, the chemicals that transmit the effect of a signal in one neuron to the neuron it connects to, in one sense the equivalent of a computer program’s add or subtract instructions. The more times a good result from turning one way or the other is registered by the accumulation of these chemical signals, the stronger the learning, the fundamental “remembering”, which is what learning is all about in my view. Some AI is discussed here, Pavlovian conditioning in robots for instance. Something the Tasman Turtles were doing in 1979, but Bennett forgot to mention that.
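This kind of model-free learning is easy to demonstrate. Here is a minimal sketch of my own (not Bennett’s, and deliberately cruder than any real reinforcement-learning algorithm): the agent keeps a running value for each action, nudged up by a positive signal and down by a negative one, standing in for the book’s simplified dopamine/serotonin picture. The reward values and names are invented for the example:

```python
import random

def update(value: float, reward: float, lr: float = 0.5) -> float:
    """One model-free update: nudge the stored value toward the reward."""
    return value + lr * (reward - value)

# Two possible turns; the world (unknown to the agent) rewards "left".
values = {"left": 0.0, "right": 0.0}
rewards = {"left": 1.0, "right": -1.0}  # +1 ~ "dopamine", -1 ~ "serotonin"

random.seed(0)
for _ in range(50):
    # Mostly repeat whatever has worked before, but explore occasionally.
    if random.random() > 0.1:
        action = max(values, key=values.get)
    else:
        action = random.choice(["left", "right"])
    values[action] = update(values[action], rewards[action])

print(max(values, key=values.get))  # the agent ends up preferring "left"
```

Note there is no model of the environment anywhere in the loop; the accumulated values are the whole of the “remembering”.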
His third stage is where it got interesting for me. It is about how we seem to have an inbuilt representation of what our senses tell us is out there. I think of it as an internal image, not a thought, because the image must be there before we can think about it, so it is more fundamental. Bennett calls it a simulation. He discusses the evolution of the part of the brain where it is housed, the neocortex, and its structure. My familiarity is with the structure and function of the visual cortex, so it was interesting to learn that the whole neocortex has the same “columnar” subunits. Now behaviour can be played out in advance of any action by running it through the simulation instead of through the real world. So mistakes, options, outcomes, causes and so on can be assessed ahead of time, leading to more intelligent, less risky behaviour and more favourable outcomes.
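The contrast with the previous stage can be made concrete. In this toy sketch of mine (the actions, outcomes, and numbers are all invented; nothing here is from the book), the agent scores each candidate action by rolling it out in an internal model before touching the real world:

```python
# A toy internal "world model": for each action, the outcomes the agent
# can imagine and how good or bad each outcome would be.
world_model = {
    "cross_open_ground": {"reach_food": 0.9, "meet_predator": -1.0},
    "skirt_the_edge":    {"reach_food": 0.6, "meet_predator": -0.1},
}
# The agent's believed probability of each outcome for each action.
beliefs = {
    "cross_open_ground": {"reach_food": 0.5, "meet_predator": 0.5},
    "skirt_the_edge":    {"reach_food": 0.8, "meet_predator": 0.2},
}

def simulate(action: str) -> float:
    """Play an action out in the model, not the world, and score it."""
    return sum(beliefs[action][outcome] * world_model[action][outcome]
               for outcome in world_model[action])

best = max(world_model, key=simulate)
print(best)  # the cautious route wins in simulation: skirt_the_edge
```

The mistake (meeting the predator) is made and paid for inside the simulation, which is exactly the advantage Bennett attributes to the neocortex.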
The fourth part progresses from having a raw model of the world around us, as presented by sensory information, to a newly modified part of the neocortex that makes a simulation of our own thoughts and beliefs about ourselves. A so-called “theory of mind”. The benefit is that we can use this structure to assess what other minds may be thinking, and plan our actions accordingly, for good or ill: altruism or cruelty. This section of Bennett’s book starts to be too speculative for me, but there is definitely a model of the world and of our internal state somewhere in our brains, and the experimental evidence suggests it is where he says it is.
The final section brings us to language, pointing out that all the past traits we thought, or wished, were unique to humans, such as tool making, are not supported by fact, but that language, a generative language, does seem to be ours alone. He makes the distinction between communication and language: animal communication can be sophisticated, but it is not what he calls a “curriculum”, a grammar machine with infinite potential constructs. He discusses the limits of sign language used by our closest primate relatives. Here he gets back to AI and talks about GPT chats. My prior understanding of GPT was not changed by learning more about the colossal training on colossal computing power behind these chat systems, and it reinforced my disdain for comparing them with how our brains generate language. Bennett confirms that opinion. Again, there is so much speculation and guessing in this section that it reduces its value for me. I like facts.
So I am interested to know what others thought of the book or these ideas.