In the Battle for Artificial Intelligence, Winner Takes All

No doubt, AI has a lot of promise. Already today, nascent AIs can fly drones, beat the best human game players, translate languages, drive cars, trade stocks, develop new drug treatments, discover planets and much more.

Another driver of the field’s intense activity is the huge prize. In the battle for artificial intelligence supremacy, winner takes all. Put another way, the first team that invents a “strong AI” will quickly render all other competitors irrelevant. Some experts have theorized that the first strong AI will also be the last human invention, because of a strong AI’s capacity for rapid self-improvement.

Weak vs. Strong AI

All of today’s AI is so-called “weak AI,” which has narrow, predefined capabilities. Alexa and Siri are frequently cited examples of weak AI. While impressive in their own right and able to interact elegantly with humans, they are limited in what they can do. There’s no possibility or expectation that Alexa or Siri, as currently constructed, will ever perform beyond their well-defined duties.

Weak AI that’s equipped with machine learning may make novel observations and may even outperform humans at specific tasks. However, it’s still limited to the scope of its design and constrained by the assumptions baked into its original models.

In contrast, “strong AI” (a.k.a. artificial general intelligence) demonstrates a human-like ability to reason and grow, mimicking the human mind. Alan Turing proposed that a machine could be judged intelligent if it could hold a conversation with a human indistinguishable from another human’s. As sci-fi fans already know, this threshold is referred to as the Turing Test.

Based on advancements in software and hardware (e.g. quantum computing), many experts in the field believe that strong AI is achievable within 30 years. Some believe strong AI could emerge even sooner.

Intelligence Explosion

It’s generally theorized that once an AI reaches even modest human-level intelligence, it can become ultra-intelligent in a matter of days or even hours, driven by recursive self-improvement. This prediction is known as the “intelligence explosion,” and we’ve already observed an early example of it.
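
To make that compounding dynamic concrete, here’s a toy numerical sketch in Python. It’s purely our illustration, not a formal model from the literature: we simply assume that each generation of an AI improves its successor in proportion to its own capability, with 1.0 standing in for a nominal human baseline.

    # Toy model of recursive self-improvement (an illustrative assumption,
    # not a validated forecast). Capability 1.0 = nominal human baseline.
    def intelligence_explosion(capability=0.9, rate=0.1, max_generations=50):
        for generation in range(max_generations):
            # Each generation designs a successor whose improvement is
            # proportional to the designer's current capability.
            capability *= 1.0 + rate * capability
            print(f"generation {generation:2d}: capability = {capability:,.2f}")
            if capability > 1e6:
                print("capability has left any human-comprehensible range")
                break

    intelligence_explosion()

Run it and the pattern is striking: growth crawls while capability hovers near the human baseline, then turns explosive within a handful of generations. That is the intuition behind “days or hours.”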

Shortly after Google’s AlphaGo Master beat the world’s best human player at the board game Go, it was greatly surpassed by its successor, AlphaGo Zero. The latter had no human training at all: it learned entirely by playing against virtual copies of itself, without even using human-played games as an initial seed.
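
As a loose illustration of learning without human examples, here’s a deliberately simplified self-play sketch in Python. To be clear, this is not AlphaGo Zero’s actual method (which pairs Monte Carlo tree search with a deep neural network); it’s a tiny tabular learner that teaches itself the game of Nim, where players alternate taking 1–3 stones and whoever takes the last stone wins.

    import random
    from collections import defaultdict

    Q = defaultdict(float)      # Q[(stones_left, take)] -> value for the mover
    ALPHA, EPSILON = 0.1, 0.2   # learning rate, exploration rate

    def legal_moves(stones):
        return [take for take in (1, 2, 3) if take <= stones]

    def choose(stones, explore=True):
        moves = legal_moves(stones)
        if explore and random.random() < EPSILON:
            return random.choice(moves)
        return max(moves, key=lambda take: Q[(stones, take)])

    def self_play_episode(start=21):
        """Both 'players' share one Q-table: the agent trains against itself."""
        stones, history = start, []
        while stones > 0:
            take = choose(stones)
            history.append((stones, take))
            stones -= take
        # Whoever took the last stone won. Propagate +1/-1 backward through
        # the move history, flipping sign because adjacent moves belong to
        # opposing players.
        reward = 1.0
        for state, take in reversed(history):
            Q[(state, take)] += ALPHA * (reward - Q[(state, take)])
            reward = -reward

    for _ in range(50_000):
        self_play_episode()

    # Optimal Nim play leaves the opponent a multiple of 4 stones.
    for stones in (5, 6, 7, 9, 13):
        print(f"{stones} stones left -> take {choose(stones, explore=False)}")

After enough self-play games, the greedy policy rediscovers Nim’s known optimal strategy (always leave the opponent a multiple of four) without ever seeing a human game: the same “no human seed” principle, at miniature scale.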

The irony, of course, is that the student has become the teacher. Human players who once trained the AI are now desperately trying to learn from it. It remains to be seen whether the AI’s insights can be meaningfully used to improve human play. Players describe facing AlphaGo as playing against a distinctly non-human “personality” (a misnomer, but a telling one), which may make knowledge transfer challenging. Consider that even after decades of studying computer chess games, top human players can no longer beat the strongest chess engines.

The Sky’s the Limit, But …

At this point in the post, the “winner takes all” outcome should be obvious.

Once an ultra-intelligent artificial general intelligence exists and is able to self-improve and operate beyond human understanding, it may be directed to solve not just a single problem but all (solvable) problems. It can even invent novel ways to improve itself.

As I. J. Good reasoned in 1965:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

With AI, the possibilities for improving society are limitless.

At the same time, there is clearly great risk of unintended consequences and bad actors. Since winner takes all, corporations, governments and others will all race to be first. As an industry and a society, we need to design and implement safeguards with equal urgency.

It’s unlikely that meaningful protections will be as simple and elegant as Asimov’s Laws (also known as the Three Laws of Robotics), which have been widely popularized by Hollywood and just as widely criticized by experts as too limited.

I suspect we will discover that the only way to protect the human race from an ultra-intelligent AI is… you guessed it: an ultra-intelligent AI.
