De-mystifying AI

 

Maddie Abuyuan / BuzzFeed News; Getty Images

 

Peter Nielsen: The reason most of us will soon be accompanied by AI robotic companions and helpers is that they will be extremely easy to educate to know everything they need to know, while human companions and helpers are NOT!

John Reid: AI is effectively useless—and it’s created a fake-it-till-you-make-it bubble that could end in disaster:
https://fortune.com/2024/07/08/ai-unproven-effectively-useless-fake-it-make-it-bubble-market-watcher-warns/

Peter Nielsen: Oh yes, there’s an AI bubble in the usual way of everything NEW, with most of the hype helping crooks make money . . . Sorry, to the extent that I might have contributed to that.

My understanding of AI is informed by my belief that humans are essentially Riders on Elephants, with the Elephant connecting to God/One Mind via its soul, as in Hinduism. I see the Rider as the chatterer that AI research informs us about, insofar as it ignores wisdom coming from the Elephant, which it does almost totally for most of us, most of the time. It is this social surface of humans that I am alluding to in most of my AI commentary, since it shapes much of what passes as human, NOT the Elephants so much.

John Reid: My understanding of AI was informed by a maths lecture on Markov Processes I attended as an undergraduate in the 1960s. AI is just the implementation of cascaded Markov Processes using a very large training set, made possible by modern computer power. As such it is incapable of original thought or understanding. It can only regurgitate patterns from its training set. The term Artificial Intelligence is therefore misleading – “Simulated Intelligence” would be more appropriate. AI is not completely useless – it is very good at making deep fakes, provided human editors vet them for extra hands and fingers, as in the image at the top of this page. (See also: https://blackjay.net.au/ai-are-humans-irrelevant/)
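John’s description of AI as cascaded Markov processes trained on a large corpus can be illustrated with a toy sketch (the names and corpus below are illustrative, not from the discussion). A first-order Markov chain “generates” text purely by sampling from word-to-word transitions it has seen before, which is exactly his point: no understanding, only regurgitated patterns.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Build a first-order Markov chain: each word maps to the
    list of words that followed it in the training text."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Walk the chain: each next word is sampled from the words
    previously seen after the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: this word never had a successor
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

A large language model differs from this sketch in scale and in how the transition probabilities are represented, not in the basic predict-the-next-token principle.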

Peter Nielsen: Yes, but “simulated intelligence” is also what we see in most Riders most of the time, it being so SHALLOW, a shallowness consistent with both AI and everyday chatter arising in neural networks: organic in the chattering Rider, digital in AI.
My experiences of some Elephants, particularly a telepathic one, suggest that Elephantine intelligence is entirely different. My intuition is that Elephantine intelligence arises from Quantum Mechanical microtubule connections to One Mind, more or less as explained in this video:

John Reid: A neural network is an effective way of implementing a Markov process or Markov chain as hardware, i.e. the predicted event is a function of the probabilities of past events. In my view intelligence is an emergent property of a very large number of nested Markov chains. This idea can be generalized to include external inputs such as physiological inputs, even telepathy, if it exists, but it is not mystical. Your elephant and rider dichotomy is a macroscopic description of this nested hierarchy. In my view there may well be more than two levels.
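John’s statement that “the predicted event is a function of the probabilities of past events” can be made concrete with a higher-order chain (again an illustrative sketch, not code from the post): widening the context window from one word to several is a crude stand-in for the nesting he describes, and yields a genuine conditional probability distribution over the next word.

```python
from collections import defaultdict, Counter

def train_ngram_model(text, order=2):
    """Higher-order Markov chain: count which word follows each
    context of the previous `order` words."""
    words = text.split()
    model = defaultdict(Counter)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        model[context][words[i + order]] += 1
    return model

def next_word_distribution(model, context):
    """Conditional distribution P(next word | context), estimated
    from the counts in the training text."""
    counts = model[tuple(context)]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

corpus = "the cat sat on the mat and the cat sat on the rug"
model = train_ngram_model(corpus, order=2)
print(next_word_distribution(model, ["on", "the"]))
```

Here the context “on the” has been followed by both “mat” and “rug”, so each gets probability 0.5; a context seen only one way predicts its successor with certainty. Nothing in the machinery is mystical, as John says.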

Regarding consciousness, I recognise consciousness in others and in my cat because part of my brain is dedicated to this task. It is called “Theory of Mind” and is located primarily in the medial prefrontal cortex (PFC) and the left and right temporoparietal junctions (TPJ). That’s it. That is what consciousness is. It is behaviour that is recognised as conscious by the Theory of Mind part of the human brain.

Peter Nielsen: Humans have thus been infinitely harder to educate . . . Traditional educational practice will be seen as unnecessary when less complicated robotic helper-companions prove able to do all the hard stuff for us, however badly we’ve been educated . . . Increasingly so until . . . !?!

INFINITE is the advantage robotic helpers will have over human helpers: they can be digitally loaded with everything they need to know, EASILY and CHEAPLY.

John Reid: We are already witnessing the negative effects of the evil intrusion of the digital world into the organic, human one with the significant increase in suicide rates of teenagers due to mobile phone bullying. To me, what you are describing could finish up as a bizarre Kafka-esque form of Hell.

Stop Press from Ralph Drayton-Witty: Children are susceptible to viewing popular home assistants like Amazon’s Alexa and Google’s Home range as lifelike and quasi-human confidantes, Dr Kurian said. As a result, their interactions with the technology can go awry because neither the child nor the AI product is able to recognise the unique needs and vulnerabilities of such scenarios.

https://www.news.com.au/lifestyle/parenting/kids/touch-it-creepy-thing-home-assistant-told-child-exposing-the-risks-of-ai-technology/news-story/95f3086dd127c9ca6339c29a81e87329

9 Replies to “De-mystifying AI”

  1. There’s a bigger danger in so-called AI being used in the background, for example Google Maps.

    At a lunch recently at Eaglehawk Neck (in Tasmania), three of the six adults present had recently had strange experiences with Google Maps, one of which was dangerous. One, a new arrival in Tasmania, was told to go from Devonport to Hobart via Great Lakes; it was snowing, so she turned around and worked it out herself. Another was directed to Eaglehawk Neck via Primrose Sounds – a side road – rather than the Tasman Highway. I checked her directions back to Hobart: she was told to go through Cambridge, when the correct route is a four-lane major highway. And a few months ago I was directed from Cressy to Kempton via Lake Tooms, a dirt back road, instead of the Midlands Highway from Launceston to Hobart.

    This is dangerous!

    Thinking seriously about paper maps.

    1. Jen, the appalling outcomes you describe are more the result of poor software management than of AI “done properly”. It looks like nobody bothered to test the system on real geography. Nevertheless, that raises another issue: how is the user to know whether the AI has been done properly? The management of software development is itself another can of worms. Witness Robodebt and the British sub-postmaster scandal.

      1. Since I do not see current AI systems as legitimate duplications of human intelligence (by the same method or any other), just good search engines in one sense, with a clever output algorithm (not to trivialise it all, because clever is clever, but clever is not intelligence; trees are very clever at coping with their environment), maybe my reaction is: what is the difference between AI done properly and good software management? Maybe none.

      1. Okay, hear hear then.

        Apparently people navigate in one of two ways, unrelated to anything like sex or intelligence. Some use landmarks. Go three blocks, turn right until you see the red house with the blue fence, and it is the second house past the letterbox. My partner here in Bangkok, who could be a London cabbie, with her knowledge of the streets, does it that way.

        Others like me use maps and make mental maps. I like maps. Once someone gave me directions to take the first right after crossing the bridge, not cognisant that there were several bridges, each with a right intersection after it. What if there is more than one red house with a blue fence, unlikely as that may be?

        I try to use landmarks when being driven around in Bangkok, but without a hint of the cardinal directions (close to the equator and cloudy skies), it has proven difficult. So I check the route on Google Maps to get a pre-sense of where we are going. Doesn’t matter, she always gets us there.

    2. I’ve had similar problems. Once in the UK it made me take a stupid single-lane backroad where I had to pull over into the bushes every time a tractor came along.

      An AI system knowing everything is not enough. We don’t know everything, yet we cope remarkably well and find solutions despite the lack of some information. It is not just world knowledge: we make new knowledge, data, information and discoveries along the way, all internally with no added input, just jostling it around in our brains until something curious happens.

  2. ChatGPT is going well beyond its training set

    It is learning continuously from every prompt which contains new, credible information.

    I particularly like its courtesy (unusual among humans these days) when you alert it to a hallucination. It actually apologises and thanks you for the new insights. The correction immediately updates future responses on the topic.

  3. David, You say “It is learning continuously from every prompt which contains new, credible information.” I doubt it.

    I have tried this when writing a covering letter when submitting a paper to a journal and it did really well. In my experience it is a real help to amateurs doing something infrequently or for the first time, but it is not as good as a proficient human who does it for a living. It shows you the way it is usually done. AI can certainly be helpful but it doesn’t justify a stock market bubble.

  4. Am I the only one who has not dabbled with ChatGPT? Like John, I am an advocate of AI technology – any technology – and a pioneer in some aspects of it: pattern recognition, scene analysis, speech synthesis, speech recognition, natural language understanding, task planning, autonomous navigation. (I was excited this month to learn of brain cells that light up when a known location is encountered again.)

    I remember all the 1960s and 1970s AI fuss, then the lull, then the Expert Systems and Fuzzy Logic crazes, then the lull, and now the chatbots, and I am waiting for the lull again.

    Okay, there are my few pennies’ worth. I loved the novel format this time, very Socratic.

Comments are closed.