I’ve been reading a recent book by Max Bennett called “A Brief History of Intelligence”. It is funny how many titles have used the expression “… a brief history …” ever since Stephen Hawking’s one about time. Initially I was not enthralled, despite the book’s superb reviews; then there were some interesting things, but I am still unsure what I think. The subtitle hints at AI, and there are bits and pieces, but mostly it is about the evolution of the Homo sapiens brain. That alone is worthwhile. The problem is that I know a lot about some aspects of AI and the biology of intelligence, and I know a bit about evolution, the human species and brain structure, so I am a critical reader.
The bottom line is that Bennett argues there were five evolutionary breakthroughs that resulted in the human brain and its human intelligence. He calls them “breakthroughs”, which is not really how evolution works. It is not a “eureka, I got there!” process; that would imply direction or goal orientation. Rather, there were five mutations, or five sets of numerous accumulated mutations, whose changes have, so far, led to us. But that is nitpicking.
1: bilateralism, direction, valence
2: model-free reinforcement learning
3: model-based learning, world simulation
4: theory of mind, internal simulation of self
5: language
His five historical stages actually make sense. They are not stages leading to us, but stages leading to our brains.
His first one is when animal body configurations became bilateral. Before this, for most of the history of life on Earth, organisms had been single-celled. Once multicellularity arose (not itself a breakthrough, because the majority of life is still single-celled), any number of body shapes and configurations abounded, most of them blobs. But for a mobile organism with bilateral symmetry, that is, with a front and back and a left and right, it became practical, far more than for blobs, to use whatever mobility mechanism it had to choose between going left or right for some purpose. Even single cells and many of the amorphous blobs had mobility, but choosing to orient left or right has less meaning if you are an amorphous disc or a sphere or a branching thing. (Multicellularity can be of two types: one where every cell is the same, and the other where different cells have different, specialized features.) Then there is the critical second part: the purpose. There needs to be a mechanism for deciding on left or right. Attraction or avoidance, what I always called tropism and which now seems to be termed “valence”. Bennett describes the evolution of some cell types specialized as what we now call neurons, nerve cells: cells acting like communication lines, with complex connections to other neurons that are comparable to computers, using chemical transmitters in lieu of digital signals, some of them ending at the mobility structure, so that a calculation, the valence, decides on moving left or right. To my mind, “breakthrough” number 1 was really a series of steps, each a critical “breakthrough” in itself. This section of his book covers the most ground, from the origin of life through to the first hints of what we think of as intelligence.
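To make that concrete, here is a minimal sketch of valence-driven steering. It is entirely my own illustration, not code from the book, and the sensor and valence values are made up:

```python
# A toy bilateral "valence" steerer: two sensors, one decision.
# My own illustration of the idea, not Bennett's code.

def steer(left_sense: float, right_sense: float, valence: float) -> str:
    """Turn toward a stimulus if valence is positive (attraction),
    away from it if valence is negative (avoidance)."""
    difference = left_sense - right_sense  # which side is the stimulus stronger on?
    if difference == 0:
        return "straight"
    toward = "left" if difference > 0 else "right"
    away = "right" if difference > 0 else "left"
    return toward if valence > 0 else away

# A food smell stronger on the left, positive valence: turn toward it.
print(steer(left_sense=0.8, right_sense=0.2, valence=+1.0))  # -> "left"
# The same gradient but negative valence (a toxin): turn away.
print(steer(left_sense=0.8, right_sense=0.2, valence=-1.0))  # -> "right"
```

The point is only that once a body has a left and a right, a single signed quantity, the valence, is enough to turn sensing into purposeful movement.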
His second stage was the earliest form of learning: trial and error, reinforcement. Remembering what you did last time something similar happened, repeating it if the result was favourable, and doing the opposite if it was unfavourable, that is, if you are still alive. There is no internal model of what the environment looks like or of what the good choices are; it is all reinforcement of experience. The admittedly simplistic mechanism for favourable and unfavourable is described as the presence of either dopamine or serotonin, as positive or negative reinforcers. These are two of the hundreds of neurotransmitters, the chemicals that transmit the effect of a signal in one neuron to the neuron it connects to; in one sense they are the equivalent of a computer program’s add or subtract instructions. The more times a good result from turning one way or the other is registered by the accumulation of these chemical signals, the stronger the learning, the fundamental “remembering”, which is what learning is all about in my view. Some AI is discussed here, Pavlovian conditioning in robots for instance. Something the Tasman Turtles were doing in 1979, but Bennett forgot to mention that.
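For the curious, here is a tiny model-free reinforcement learner. It is my own toy sketch, and the dopamine/serotonin analogy in the comments follows Bennett’s simplified telling, not real neurochemistry:

```python
# Minimal model-free reinforcement learning: repeat what was rewarded,
# avoid what was punished. There is no model of the world anywhere.
import random

values = {"left": 0.0, "right": 0.0}  # learned worth of each action
LEARNING_RATE = 0.1

def choose() -> str:
    """Mostly exploit what has worked before; occasionally explore."""
    if random.random() < 0.1:
        return random.choice(list(values))
    return max(values, key=values.get)

def reinforce(action: str, reward: float) -> None:
    # Positive reward (the "dopamine" of the analogy) strengthens the
    # action; negative reward (the "serotonin") weakens it.
    values[action] += LEARNING_RATE * (reward - values[action])

for _ in range(200):
    action = choose()
    reward = 1.0 if action == "left" else -1.0  # in this toy world, food is always left
    reinforce(action, reward)

print(values)  # "left" climbs toward 1.0, "right" sinks toward -1.0
```

Nothing in this learner knows why left is better; it only accumulates the equivalent of those chemical tallies.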
His third stage is where it got interesting for me. It is about how we seem to have an inbuilt representation of what our senses tell us is out there. I think of it as an internal image, not a thought, because the image must be there before we can think about it, so it is more fundamental. Bennett calls it a simulation. He discusses the evolution of the part of the brain where it is housed, the neocortex, and its structure. My familiarity is with the structure and function of the visual cortex, so it was interesting to learn that the whole neocortex has the same “columnar” subunits. Now behaviour can be played out in advance of any action by running it through the simulation instead of through the real world. So mistakes, options, outcomes, causes and so on can be assessed ahead of time, leading to more intelligent behaviour, with less risk and more favourable outcomes.
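To contrast with the model-free sketch above, here is the model-based way: play each action out through an internal model before committing to any of them. The model, the actions and the costs are all invented by me for illustration:

```python
# A toy model-based planner: "run the movie forward" in an internal
# simulation, then act. My own illustration, not code from the book.

# A made-up internal model: the predicted outcome of each candidate action.
world_model = {
    "cross_open_ground":     {"reward": 5.0, "predator_risk": 0.7},
    "take_long_cover_route": {"reward": 5.0, "predator_risk": 0.1},
    "stay_put":              {"reward": 0.0, "predator_risk": 0.0},
}

def simulate(action: str, predator_cost: float = 20.0) -> float:
    """Play the action out in imagination and score the expected outcome."""
    outcome = world_model[action]
    return outcome["reward"] - outcome["predator_risk"] * predator_cost

best = max(world_model, key=simulate)
print(best)  # -> "take_long_cover_route"
```

The mistake of crossing open ground is made, and discarded, inside the simulation rather than in the world, which is the whole advantage Bennett is pointing to.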
The fourth part progresses from having a raw model of the world around us, as presented by sensory information, to a newly modified part of the neocortex that makes a simulation of our own thoughts and beliefs about ourselves: a so-called “theory of mind”. The benefit now is that we can use this structure to assess what other minds may be thinking, and to plan our actions accordingly, for good or bad, altruism or cruelty. This section of Bennett’s book starts to be too speculative for me, but there is definitely a model of the world and of our internal state somewhere in our brains, and the experimental evidence suggests it is where he says it is.
The final section brings us to language, pointing out that all the traits we once thought, or wished, were unique to humans, such as tool making, have not held up, but language, a generative language, does seem to be ours alone. He makes the distinction between communication and language. Animal communication can be sophisticated, but it is not what he calls a “curriculum”, a grammar machine with infinite potential constructs. He discusses the limits of the sign language used by our closest primate relatives. Here he gets back to AI and talks about GPT chatbots. My prior understanding of GPT was not changed by learning more about the colossal training on colossal computing power behind these chat systems, and it reinforced my disdain for comparing them with how our brains generate language. Bennett confirms that opinion. Again, there is so much speculation and guessing in this section that it reduces its value for me. I like facts.
So I am interested to know what others thought of the book or these ideas.
The model is still a mechanistic one.
Every living thing has something which makes it “alive” as opposed to “dead”.
AI is not, and never will be, alive.
I agree. AI is not a living thing. Hence the word “artificial” in AI. Even when and if it approaches the competence of humans, or of any sapient animal for that matter, it will still be a machine, and not a living one. I cannot see AI ever nearing what the human brain achieves. That is not to say that AI cannot achieve amazing performance in some areas, just as a computer does memory and search better than we do.
Where to start, Allan.
The book is informative and interesting for sure.
I agree totally that “breakthroughs” is not really an appropriate term for the evolution of neurological traits; maybe milestones or stages, some word that conveys more of a gradual process?
Much of AI is loosely modeled on the neuroscience understanding of brain function, so those comparisons were interesting.
Some discussion of what is missing would have been fun.
I have heard it said that there is an emotional component to every thought; how could we get computers to take emotions as input?
Hi Cheryl, it was you who urged me to persist with the book when I was uninspired at the beginning, not because it was not good, just that there was nothing new, at least for me. But then I learned stuff, so I am glad you pushed me. And it was our mutual friend who put me onto the book in the first place. It was such a vast undertaking by the author that I feel it would be trite to single out missing things. Bennett goes to lengths to define the limitations of his discourse. Emotions would be another book, I think. We do not yet know the answer to many human brain features: soul, emotion, awareness, consciousness, and so on. Like Jen says, AI is not alive. In fact the things AI can do now, image analysis, pattern recognition, natural language understanding, driving cars over cliffs, are all our attempts to reproduce these abilities on digital systems, probably nothing like how our brains do it, despite the fact that we are using our brains as models or prototypes.
In regard to emotions, Cheryl: when I finished the book I started to read it all over again. The first time through there were parts I ended up skimming in order to get through it, and I always intended to go back. I have just reached an earlier section that he calls “The Origin of Emotion”. I had forgotten about that, and no wonder; it is such a “big call” on his part to suppose this is where emotion comes from. To me, emotion necessarily includes a conscious, aware feeling, not just observable reactions or behaviours. Bennett talks about a two-axis matrix of valence and arousal. Valence is the attraction or repulsion to some environmental factor such as food or heat, and arousal is the level and type of activity, like “move fast and away” or “stay here and look around”. It is sufficient to describe observable behaviour in simpler organisms (he introduces it with nematodes), but it is far too simplistic to relate, I think, to any genuine definition of emotion. Maybe when a mouse bolts at seeing you, or stops to eat a morsel it finds, behaviours identical in appearance to those of the worms, it is feeling emotions like we feel them. We are far from knowing the answers because we do not know if the question is even valid yet.
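Reduced to a table, the valence-arousal scheme looks something like the sketch below. The mapping of quadrants to behaviours is my own paraphrase of the idea, not Bennett’s wording:

```python
# Bennett's two-axis valence/arousal scheme as a bare lookup table.
# Quadrant descriptions are my own paraphrase, for illustration only.

def respond(valence: str, arousal: str) -> str:
    table = {
        ("positive", "high"): "move fast toward it (pursue)",
        ("positive", "low"):  "stay here and consume or explore",
        ("negative", "high"): "move fast away (flee)",
        ("negative", "low"):  "freeze, or quietly avoid",
    }
    return table[(valence, arousal)]

print(respond("negative", "high"))  # the mouse bolting at the sight of you
```

Which is exactly my complaint: two axes and four quadrants describe the worm’s behaviour completely, and say nothing about whether anything is felt.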
Does Bennett’s book mention any genetic influence on the brain, any evolved behaviors that are established before we start learning from others? I recently struggled with “Evolutionary Psychology” by Buss but was disappointed by the absence of recommendations for actions to better accommodate ancient predispositions. I did write “Happiness is in your genes” about 20 years ago, which attempts to identify behaviors that would have been vital a million years ago but are now a bother, and what to do about them. Would there be interest in a series of posts to BlackJay based on parts of that book?
Hi Geoff, actually a good point, because although Bennett’s book is mostly about evolution, he rarely mentions genes. That is not his aim, and he does not profess to be any sort of scientist. He is highlighting the results of evolution in relation to the emergence of intelligence, not the processes such as the genes involved. He does refer at times to DNA, and Dawkins’ book “The Selfish Gene” (one of my favourites) is cited, but little else. There is a brief speculation about the origins of life and the random appearance of reproducing polymers at those underwater thermal vents. Your point about vestigial emotions is also interesting; Cheryl mentions emotions in her comment. One aspect of Bennett’s thesis is that each step in the evolution of intelligence is built on the prior step, sometimes adapted or modified or reused. So all the vestigial processes are kept and allow more and more sophisticated intelligent behaviours. The earlier evolved features are not wasted, not bothersome like your ancient predispositions seem to be. In regard to your book and your question about posting something from it, I will ask John Reid, who is the owner and founder of Blackjay.
I enjoyed your review and all the comments, Allan.
I think I approach these topics differently than you, since I don’t know much about them to begin with. So while I can be misled by misrepresented facts at times, I get value from learning about the main messages…in this case the ‘breakthroughs’ as Bennett calls them, and then from contemplating the larger possibilities that could emerge from those.
Btw, I think he used the word ‘breakthrough’ because of what you wrote in your last comment…each one set up the next, and so collectively they could represent a timeline/progression of the evolution of the functional complexity that our minds currently possess.
Also, I will stick to my theory on generative AI…I think that could be exactly how System 1 thought (a la Kahneman) occurs in our minds. In any case it all emerged from trying to predict the next word, so is it any surprise that it faithfully recaps System 1 thought? Anyway, I still see the generative machine as the base upon which a superstructure is built (factoring in emotion, for instance) to yield our current minds such as they are.
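To make “trying to predict the next word” concrete, the barest possible version of the idea is a bigram table: count which word follows which, then always emit the most frequent follower. Real GPT systems are vastly larger transformer networks, so this toy of mine shows only the bare principle:

```python
# The simplest next-word predictor: a bigram frequency table.
# My own toy illustration; real GPT models are enormously more complex.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

followers = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    followers[word][next_word] += 1

def predict(word: str) -> str:
    """Return the most frequent word seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict("the"))  # -> "cat": fluent-looking output from pure statistics
```

Scale that statistical trick up by many orders of magnitude and you get something that sounds a lot like effortless System 1 speech.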
I am currently reading “Why We Remember” by Charan Ranganath, a book on how our memories operate. It is also a good book and, again perhaps due to my ignorance going in, it clarifies some things for me about how we think. All I will say on this for now is that one of generative AI’s current issues may be that it has something like total recall (and no filter/editing/analysis). Humans have selective memorization AND recall, and that is influenced by many things including emotion.
A defining feature of human intelligence, to me, is bias (prioritization, if you will, but basically that is a form of bias). And that affects, and in turn also gets affected by, the selective processes of our memories…all of which makes reality subjective for each one of us.
Aside, this is also why I think it is vital that we not corrupt the meanings of words…reality is subjective, so to have a functioning civilization we need a standard of agreed-upon myths that we call our shared reality. And when words lose their meanings, soon our very system of logic begins to crumble.
Anyway, to me, the combo of ‘Sapiens’, ‘Being You’, ‘A Brief History of Intelligence’, ‘The Alignment Problem’ and ‘Why We Remember’ makes a terrific scaffold upon which to construct a good understanding of human intelligence (and consciousness, btw). To me that scaffold already shows why and how most humans operate on generative intelligence…System 1 thought…most of the time. Of course it is energetically most efficient, but those books collectively seem, to my mind at least, to show how and why the mind came to be the way it is.
Starting with, of course, how an analytical intelligence could evolve in the first place. And for that part I will always owe “A Brief History of Intelligence” a debt of gratitude, because it introduced me to how that could reasonably happen.
Cheers.
You are the one who put me and my friends onto this book in the first place, for which I thank you, and now you mention “The Alignment Problem” and “Why We Remember” by Charan Ranganath, so I have more searching and reading to do. I think the word “breakthroughs” was probably more a commercial decision by marketers than a serious technical term. It reads well and grabs the attention of the lay reader. I like it as a title, just not its inaccuracy about evolution. While I believe we are a ways off understanding what intelligence, emotion, consciousness, memory and all those vague things really are, every time we make progress in leaps, like expert systems, or bounds, like generative language prediction, we get a better feel for what it is and, more importantly, what it is not. I like your words superstructure and scaffold, which is what we are building. I suspect that we are treating it as more complex than it really is, and it is time for a consolidation and a reduction, a “breakthrough” in thinking about thinking, pun intended.