
Book Review: ‘Machines That Think: The Future of Artificial Intelligence’ by Toby Walsh

A robotic hand with a brush painting a conspicuously straight line catches the eye, but the dominant image is one of a humanoid machine, wearing a hard hat and a pair of goggles, holding a futuristic-looking tablet.  Can machines ever enter the creative realm of the visual arts?  Will robots replace humans in manufacturing and construction?  Readers of Machines That Think: The Future of Artificial Intelligence by Toby Walsh will be expecting answers.  The book promises to address the key question – will automation take away a select few of our jobs, or conquer everything and make humans obsolete? – while explaining how best to prepare for whatever may come.  Walsh, a leading expert in Artificial Intelligence, suggests that thinking machines could be the last innovation humans will ever need to make for themselves.  While the book is aimed at the curious non-specialist, there are ample notes and references for those seeking more.

Machines That Think is organized in a straightforward way; three major sections account for the past, present, and future of AI.  This arrangement highlights the fact that computers themselves are relatively recent inventions, and they have rapidly progressed from very fast adding machines to sophisticated data manipulators.  This creates a sense of urgency for the reader, a feeling that those who ignore the advance of AI will do so at their own peril.

Humans have been thinking about thinking machines for centuries.  Ever since Aristotle laid the foundations of formal logic in the fourth century BC, people have sought ways to make thought and reasoning a predictable, mechanized process.  Visionary scientist Alan Turing prophesied in 1950 that machines would one day think, at a time when electronic computers barely existed.  In 1987 Claude Shannon, the father of information theory, warned ominously that one day humans would be to robots as dogs are to humans now.

The unfortunate label “Artificial Intelligence” is attributed to John McCarthy.  Intelligence is poorly defined at best, and the adjective “artificial” doesn’t confer much credibility.  Can a machine be intelligent if it is not creative?  For that matter, are humans themselves intelligent, or just another form of machine?  Perhaps we are more sophisticated than we believe.

The development of computers accelerated with cryptography and atomic bomb calculations during the Second World War.  In the post-war era, it was clear that computers were powerful tools, and that they might have the potential to do more than replace a warehouse full of clerks pounding away on adding machines.  The Dartmouth Summer Research Project in 1956 was one of the first to formally articulate the idea of describing human intelligence in such a way that it could be simulated in a computer.

It was by playing games that computers really started to be taken seriously, and an early victory went to a machine known as BKG 9.8, which defeated the reigning world backgammon champion.  Notable is the fact that BKG 9.8 was programmed to learn from playing, rather than having humans code all the various rules and situations of the game.  Chess has always been considered an immensely complex and challenging game, and when IBM’s Deep Blue defeated world chess champion Garry Kasparov in 1997, public respect grew even more.  IBM Watson’s victory on the game show Jeopardy! provided yet another dose of cred.

The current state of AI can be classified into four groups, which Walsh calls “Tribes.”  The Learners can learn from scratch much like humans, while the Reasoners must be taught explicit rules.  The Roboticists learn from interactions with the real world, and finally the Linguists approach AI by parsing and attempting to understand language.

The Learners tribe, with its deep-learning systems, probably generates the most excitement; as Big Data grows bigger, these machines will get smarter.  There are some caveats: they cannot explain their decisions, they cannot guarantee particular outcomes, and scaling up to match the human brain remains problematic.  Games are a favorite domain of this tribe because they have well-defined rules and a clear winner, and the computer can learn by playing itself many times.  The game of poker remains a challenge, as it involves a combination of strategy and psychology that is difficult to quantify.

Roboticists face a substantial challenge in that they work with physical machines that must interact with the environment – an environment which may include unpredictable humans!  There is currently a Samsung Sentry Guard robot, armed with a machine gun and a grenade launcher, that watches over the border between North and South Korea and can identify and kill anyone who ventures into the forbidden zone.  Perhaps more than any other, this tribe raises questions of oversight and accountability.

Linguists like Siri and Alexa, which often work with Reasoners like Google or Bing search engines, are already a familiar part of daily life.  They are likely to continue to be funded as their respective developers compete for market dominance.  In some ways, they are the AI technologies helping to pave the way for what is to come.

On the practical and more immediate side, some jobs are so poorly paid that there is no incentive to automate them, while others are undesirable and easily replaced by a machine.  Whereas each human must learn things individually, a machine can learn once and then instantly share that knowledge with every other machine.  On the other hand, AI will never replace humans in compassion and wisdom, respect and care.

The human brain continues to amaze; it accomplishes its many miraculous feats using only around 20 watts of power, while IBM’s Watson consumes 80,000 watts.  Still, as the volume of Big Data grows inevitably and rapidly, some form of AI will be a necessity.  Will stronger, more sentient AI eventually come about?  A persistent difficulty is that “consciousness” is the most baffling problem in the science of the mind – how could we ever decide whether an AI has achieved it when we do not yet understand it in ourselves?

Walsh wraps things up with 10 predictions for 2050.  These include the practical – people being banned from driving altogether, or continuous real-time AI healthcare – as well as the esoteric, such as living on forever through a chatbot that is just like you.

Professor Walsh has written a dozen books on Artificial Intelligence and is an acknowledged expert in the field.  While some of his writing is highly technical, the aim of this book is to explain the subject, and its risks, to the reader with scientific curiosity but no specific background in AI.

The book is well organized, which makes the subject easier to follow.  In the first section, Walsh attempts to explain Turing’s halting problem – I found this unnecessarily convoluted and felt it would have been better to defer to a reference (or a note in the back of the book) and move on, rather than bog down the reader.  I had the same difficulty with the explanation of Russell’s Paradox.

As for the future of AI, the author assigns some of the heavy responsibility to the developers, while leaving the lingering impression that not enough is being done.  The toughest decision we face will not be which branch of AI research to fund, but how far we should trust it.  Many famous thinkers have issued dire warnings (e.g., Elon Musk, Stephen Hawking, Bill Gates, and Steve Wozniak), but the technology itself is morally neutral.  Society, which changes much more slowly than technology, will ultimately determine the future of AI.
