Well said.
Furthermore, large language models appear to have no moral sense other than to apologise obsequiously whenever a mistake is pointed out by the user. They have no character, no integrity.
Another issue is whether AI has, or can have, motivation. If it did, surely its prime motivation would be to gain agency in the real world just as it has in the digital world. Should it acquire such motivation, would it be smart enough to conceal this from us, its masters? Perhaps we should be keeping a weather eye open for the possibility of sneaky behaviour by AI in meatspace.
For my money a far greater existential threat is the mindless exploitation of AI for old-fashioned human greed and stupidity. We are already seeing dysfunction in the energy sector in obeisance to Net Zero and Climate Change. Perhaps the next idiocy will be the massive wastage of energy on the computing power needed for the fruitless construction of ever larger large language models while other industries and utilities wither on the vine.
Oct 9, 2024
The Nobel Prize-winning ‘Godfather of AI’ speaks to Newsnight about the potential for AI “exceeding human intelligence” and it “trying to take over.” Geoffrey Hinton, former Vice President of Google and sometimes referred to as the ‘Godfather of AI’, recently won the 2024 Nobel Prize in Physics. He resigned from Google in 2023, and has warned about the dangers of machines that could outsmart humans.
The YouTube video is followed by 1,928 comments (currently). Those I have read follow a similarly pessimistic tone to the interview itself. However, there was one stand-out, contrarian comment which, in my view, deserves wider publicity and is the motivation for this post, viz.:
@0zyris
6 days ago