Is Intelligence Toxic?

Paul Bassett

Introduction

Once upon a time, as a young researcher in the nascent field of artificial intelligence (AI), I was full of excitement and optimism. Sooner or later AI was going to be the secret sauce that would transform the world; discovering the recipe was my mission in life. But back then the idea of “thinking machines” seemed absurd to most, especially thought leaders—among them philosopher John Searle, mathematician Roger Penrose, and theoretical biologist Stuart Kauffman. Feeling threatened, they publicly dismissed and ridiculed the idea of “strong AI.” Human minds, they insisted, were somehow fundamentally different from anything a machine could achieve.

Fast forward several decades. Today we are dazzled by machines that quickly learn to outplay the best of us at Jeopardy!, poker, chess, and Go. They translate languages, recognize faces, diagnose diseases, and on and on. Every day the list of computer skills once believed to be uniquely human grows. While strong AI remains a good way off—machines still lack many characteristics, such as consciousness and common sense—we have barely scratched the surface.

Ironically, as AI got better, my optimism faded. It slowly dawned on me that the more potent our secret sauce, the more toxic it would become. Despite knowing since Hiroshima how easily we can devastate ourselves and the biosphere, Homo sapiens is spending billions in a race to put intelligence on steroids. Stephen Hawking, Yuval Harari, Nick Bostrom, Elon Musk, and others are voicing their well-founded concerns about malevolent AIs, autonomous weapons, and a super-intelligent “singleton” whose inscrutable goals might run counter to our wellbeing.

Whether AIs ultimately treat us as pets or livestock, humanity’s freight train is careening out of control down a track fraught with existential cliffs. But not just humanity’s. I suggest that all technological life in the universe sooner or later falls victim to its own smarts. AI is merely an accelerant. Let me explain why.

Intelligence and Its Limits

Let’s start with a simple (arguably simplistic) definition: Intelligence is the skill of learning new skills.

To qualify as learning, the input to a (living or artificial) system must be totally independent of how the system uses that input. In contrast, a conventional computer’s[1] skills (aka programs, algorithms) were designed externally, not learned by that computer. An intelligent system programs itself by filtering, transforming, and honing its raw input data into skills (including knowledge, expertise, pattern recognition, etc.). The faster a system learns and/or the fewer inputs it needs to acquire a given skill, the more intelligent it is. (Today’s “deep learning” AIs are relatively stupid because they require vast amounts of training data.) Animal skills that take generations to evolve are called instincts. Today’s AIs are “born” with instinctive abilities to learn new skills because their “parents” are the several generations of human researchers who evolved the AIs’ learning algorithms.

Algorithms/skills are composed of sequences of actions, including deciding what action comes next. Merely walking, for example, requires learning many interdependent algorithms, such as balancing and navigating. Dynamic balance requires myriad muscles working in tandem to continuously shift weight from leg to leg; all the while, the brain navigates by integrating data from eyes, ears, and muscles. More generally, one’s whole body receives, filters, processes, and generates a maelstrom of signals in real time. I never cease to be amazed at how we keep up with it all. Our systems and subsystems are massively parallel processors, and virtually all of it runs sub rosa.

But here’s the thing: Algorithms can both modify and be modified on the fly. Indeed, neurons change their ability to process data every time a signal travels through them—a.k.a. neural plasticity. Self-modification is not just weird; it allows systems to learn, to be intelligent. But it’s also potentially toxic—so much so that conventional computer systems go to great lengths to prevent programs from modifying themselves on the fly. Changing a single binary bit of a program can cause its computer to malfunction. Changing the wrong bit in the wrong program can damage hardware. And not just by accident: in 2010, the infamous Stuxnet malware deliberately destroyed hundreds of Iran’s uranium-enrichment centrifuges. Potency and toxicity are two sides of the same coin.
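
To see how little it takes, here is a toy illustration in Python. The mini-machine below is my own invention, not any real instruction set; the point is only that flipping a single bit of a stored program silently turns addition into subtraction.

```python
# A toy illustration (an invented mini-machine, not any real
# instruction set) of why one flipped bit can wreck a program:
# the bit flip silently turns ADD into SUB.

ADD, SUB, HALT = 0x01, 0x03, 0x00

def run(program, a, b):
    """Execute a list of one-byte opcodes on two operands."""
    acc = a
    for op in program:
        if op == ADD:
            acc += b
        elif op == SUB:
            acc -= b
        elif op == HALT:
            return acc
        else:
            raise RuntimeError(f"illegal opcode 0x{op:02x}")
    return acc

program = [ADD, HALT]
print(run(program, 10, 4))                      # 14, as intended

corrupted = [program[0] ^ 0b010] + program[1:]  # flip one bit of ADD
print(run(corrupted, 10, 4))                    # 6 -- same shape, wrong answer
```

Real processors are vastly more intricate, which is exactly why real bit flips are so much harder to diagnose.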

Are there limits to intelligence? Absolutely. While a countably infinite number of problems is solvable (that is, the problems have algorithms), an uncountable infinity of them are not. So even super-intelligences will be stumped by all but a vanishingly small fraction of possible problems. And as if that were not enough, so-called “NP-hard” problems are solvable in principle but, as far as anyone knows, inherently intractable. Consider the proverbial “traveling salesperson” who wants to visit, say, twenty cities via the shortest circuit. He or she must do more than a thousand times as much searching as for ten cities.[2] (That said, an intelligent system can come up with algorithms that quickly find routes close to the shortest circuit. And doing so epitomizes learning a new skill, the very definition of intelligence.) But I’m getting ahead of myself; more on intractability later.
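
For readers who enjoy experiments, the sketch below (random cities, illustrative numbers only) pits exhaustive search against the classic nearest-neighbor guess on a nine-city tour:

```python
# A small experiment contrasting exhaustive search with a cheap
# heuristic for the traveling-salesperson problem.

import itertools
import math
import random

random.seed(1)
N = 9  # small enough that (N-1)! = 40,320 routes is still checkable
cities = [(random.random(), random.random()) for _ in range(N)]

def tour_length(route):
    """Length of the closed tour visiting cities in the given order."""
    return sum(math.dist(cities[route[i]], cities[route[(i + 1) % N]])
               for i in range(N))

# Brute force: fix city 0 first and try every ordering of the rest.
best = min(([0] + list(p) for p in itertools.permutations(range(1, N))),
           key=tour_length)

# Heuristic: always hop to the nearest city not yet visited.
tour, unvisited = [0], set(range(1, N))
while unvisited:
    nearest = min(unvisited,
                  key=lambda j: math.dist(cities[tour[-1]], cities[j]))
    tour.append(nearest)
    unvisited.remove(nearest)

print(f"optimum tour    : {tour_length(best):.3f} "
      f"({math.factorial(N - 1):,} routes examined)")
print(f"nearest-neighbor: {tour_length(tour):.3f} "
      f"(at most {N * N} distances examined)")
```

On typical runs the heuristic lands respectably close to the optimum while doing a minuscule fraction of the work, which is the whole appeal of good guessing.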

How much intelligence is toxic? To be sure, a little toxicity can be a very good thing. Tiny doses of snake venoms, for instance, have antitumor, antimicrobial, anticoagulant, and analgesic properties. Intelligence is vital to every multi-celled species. After hundreds of millions of years, one kind of network, the neural kind, underwent an evolutionary phase change akin to ice changing to water. Yes, degrees of language, reasoning, and sociality exist in many species, but hominids alone tamed fire. Crossing that intellectual Rubicon opened the way to virtually unlimited energy and technology. Indeed, we dominate the earth in ways that no other species can even approach.

As a result, humans now swarm (arguably, infect) the earth. We exploit more natural resources than all other species combined. We’ve so altered the planet that geologists want to name a new epoch after us, the Anthropocene. With neither feathers nor wings, people fly far faster, farther, and higher than birds—indeed, we have flown to the moon. And now we’ve discovered deep learning, a way of simulating neural networks that promises to give rise to intelligences that far exceed our own. With neither meat nor blood, inorganic brains are poised to transcend organic ones as steam transcends water—another phase change.

Yes, a little toxicity has accomplished a lot.

If All Species Go Extinct, Can We Be Any Different?

Earth’s 10–14 million species (86 percent of them undescribed) are estimated to constitute less than 0.3 percent of the billions that ever existed. Sanity check: Life dates back 3.5 billion years, but multi-celled life evolved much more recently, about 600 million years ago. If each multi-celled species were to split into two at the (conservative) rate of once every 10 million years with no extinctions, then earth would now be home to over 10¹⁸ species—a hundred billion times the 10–14 million alive today. Clearly, extinction has been a very grim reaper. If environmental changes afflict species faster than they can adapt, they vanish. But hey, we have unparalleled adaptability! We can live in any environment, even the vacuum of space. With our secret sauce we’ll surely beat the odds, right?
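
The arithmetic is easy to verify; a few lines of Python, using the essay’s own assumptions, reproduce it:

```python
# A quick check of the back-of-the-envelope arithmetic above:
# 600 million years of splitting once every 10 million years,
# with no extinctions, starting from a single species.

doublings = 600_000_000 // 10_000_000   # 60 doublings
species = 2 ** doublings                # ~1.15 x 10^18 species
print(f"{species:.2e} species")
print(f"{species / 12e6:.1e} times the ~10-14 million alive today")
```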

Not so fast. A mammalian species on average lasts about a million years. Modern Homo sapiens (Latin for “wise man”) is only about 200,000 years old. Our hominid ancestors spent millions of years subsisting as foragers. And modern humans also hunted and gathered until about 10,000 years ago, when we started transitioning to farming. Language and culture blossomed. Today finds us rapidly urbanizing while trying to cope with lifestyles that are increasingly high-tech, alien to most of what evolution prepared us for.

On one hand, we stand at the brink of fantastic futures—genetically engineered bodies, unlimited fusion energy, nanotech medicine, superintelligence. On the other, those same fruits of our intellectual labors engender radical, even more rapid lifestyle shifts. Look at the shift rate: from millions of years to 200 millennia to 100 centuries to decades to … Can we possibly absorb this exponential rate of change for another century, let alone 800,000 years? Not so fast indeed!

We are the first species to have more or less insulated itself from the natural world with our urban lifestyles, office towers, cars, planes, and space stations. All are products of our wondrous intelligence. But just how smart are we? Not so wondrous is the waste and pollution we wittingly emit: billions of tons of plastics and industrial chemicals every year. Worse, we’ve unwittingly accumulated the Great Pacific Garbage Patch, oceanic dead zones, nuclear waste, 30+ billion tons per year of CO2, and the trash of 7.8 billion consumers.

Literal mountains of waste are one thing. Another is social media’s data detritus: mangled facts, cyber-bullying, scams, and disinformation/propaganda are rampant. Facebook Messenger and WhatsApp handle 700,000 messages per second; hundreds of thousands of hours of video are uploaded to YouTube daily. While not all of it is trivia or worse, much is. And if the sheer volume of the information bombardment does not unravel our social fabric, how about when “deep fakes” stop us from trusting our own eyes and ears? Does any of this help Homo sapiens persist even to the mammalian average?

According to Elizabeth Warren, the top 0.1 percent of Americans now control as much wealth as the bottom 90 percent. What happens to a society when only its elites can afford genetic enhancements such as “designer babies” and perpetual youth? In Homo Deus[3] Yuval Harari describes the rise of the “useless class.” Its harbinger, populism,[4] is already here. Is there a realistic remedy that we can get to from here?

Humanity’s insatiable competitive instinct drives innovation. We furiously invent new and improved “solutions” to perceived problems—what optimists call progress. It’s something we can’t stop even if we wanted to. Horseless carriages are in the process of going driverless. Computers are in the process of becoming our intellectual superiors. The tech explosion has already radically altered our external world; now it’s in the process of reworking us internally. Moreover, technology is not neutral. For every intended, constructive application, many more are “off label” and potentially destructive. As much as intelligence bucks the tendency toward disarray, the likelihood of accidents and malice tips the balance toward disorder, a natural asymmetry that intelligence cannot change.

Novelty Begets Complexity

The flip side of innovation is complexity. What most people fail to appreciate is that the complexity inherent in a game-changing technology pales in comparison with that of its consequences. To see this, look no farther than your smartphone. Right out of the box it starts accumulating “skills” from the app store. Before long, the apps’ combined complexity dwarfs that of each phone’s hardware; collectively, the apps dominate the complexity of the entire phone network.

Or take modern transportation—according to one NASA study, the greatest contributor to global warming. In turn, global warming exacerbates extreme weather, ocean inundation and acidification, drought and desertification, glacier loss and freshwater evaporation, to name a few inconvenient truths. These stresses then ride our transportation networks right back at us, intensifying and accelerating mass migrations and pandemics. Such positive feedback loops are not only complex; they have the power to destroy civilization.

If global warming doesn’t light your fire, let’s look at the internet’s glut of negative side effects: malware, cyber warfare, the dark web, China’s Social Credit citizen-control system, not to mention ubiquitous surveillance by social media and the malice of innumerable trolls. For the tech titans, a main effect is their ballooning wealth, which intensifies their power to warp the economic playing field in their favor. Meanwhile, after a century of steady gain, U.S. life expectancy is on the decline; health-related bankruptcies are up, as are drug overdoses and food insecurity.

Global Tech Poses Global Danger

Technology’s nuisances matter most when they impact civilization on a global scale. Long before AI, nuclear weapons gained the potential to unleash catastrophe worldwide. And, as Eric Schlosser chillingly describes in his book Command and Control, they have almost done so on multiple occasions. Now militaries around the world are engaged in a new arms race to equip themselves with autonomous weapons. AI has the potential to make matters much worse.

Cyber warfare has also reared its ugly head. I already mentioned the Stuxnet computer worm’s destruction of Iran’s uranium-enrichment centrifuges. In 2015, Russian hackers demonstrated their power by remotely shutting down parts of Ukraine’s power grid. In cyber circles it’s understood that key infrastructure virtually anywhere is vulnerable to cyberattack. Indeed, in March 2020, the Russians infiltrated many thousands of government and private organizations without being noticed until December.

Then there’s biowarfare. It doesn’t take a millionaire with a PhD to wreak havoc with a CRISPR gene editor. Anyone with an undergraduate degree in biochemistry can do it. What’s to stop bioterrorists from creating highly infectious viruses with more lethality than SARS-CoV-2, which causes COVID-19? Sooner or later, someone will, just because they can.

As environmental stresses rise, extinction rates accelerate. It’s been 65 million years since so many species have disappeared so quickly—up to 100 times faster than the background rate. We, the agents of the Anthropocene, are inadvertently ushering in the sixth mass extinction. But it’s also becoming easier and easier for malicious people to quicken the process. Not surprisingly, the seventy-five-year-old Bulletin of the Atomic Scientists recently reset its Doomsday Clock to an unprecedented 100 seconds to midnight. Its January 23, 2020, statement announced:

Humanity continues to face two simultaneous existential dangers—nuclear war and climate change—that are compounded by a threat multiplier, cyber-enabled information warfare, that undercuts society’s ability to respond. The international security situation is dire, not just because these threats exist, but because world leaders have allowed the international political infrastructure for managing them to erode.

Can we adapt fast enough to control our self-created complexity bomb?

Chaos Theory

Back in the 1960s, MIT meteorologist Edward Lorenz was repeating a simulation he’d run earlier. It was based on twelve variables, representing data such as temperature and wind speed. He left his office to get a coffee while the computer ran. Upon returning, he noticed a result that would alter the course of science. Lorenz had rounded off a single variable from .506127 to .506. To his astonishment, that tiny difference drastically transformed two months of simulated weather.
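
Anyone can replay the gist of Lorenz’s accident. The sketch below integrates his famous three-variable convection model (not the twelve-variable model he was actually running, and with made-up starting values), once from full precision and once from a rounded copy:

```python
# A minimal sketch of Lorenz's discovery: integrate his famous
# three-variable convection model twice, the second time with one
# coordinate rounded, and watch the trajectories part company.

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz equations."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt=0.01):
    """Advance the system one step with fourth-order Runge-Kutta."""
    def nudge(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = lorenz(state)
    k2 = lorenz(nudge(state, k1, dt / 2))
    k3 = lorenz(nudge(state, k2, dt / 2))
    k4 = lorenz(nudge(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

a = (1.0, 1.0, 1.506127)  # "full precision" start
b = (1.0, 1.0, 1.506)     # same start, rounded like Lorenz's printout

for step in range(3001):
    if step % 500 == 0:
        gap = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        print(f"t = {step * 0.01:5.1f}  separation = {gap:.6f}")
    a, b = rk4_step(a), rk4_step(b)
```

Within a few simulated time units the rounded run bears no resemblance to the original, just as Lorenz’s two months of simulated weather diverged.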

In 1972, Lorenz published a now famous paper: Predictability: Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas? The butterfly effect was a revolutionary idea—at least to those who still assumed that we live in a predictable world. Other scientists had also noticed their systems exhibiting bizarre behaviors. To characterize their nature, Lorenz pioneered a new branch of mathematics: chaos theory. In 1979, Benoit Mandelbrot demonstrated that even simple deterministic systems, ones characterized by a single equation, could lead to infinitely complex, hence unforeseeable, “fractal” patterns.[5] Today, applications of chaos theory are widespread across biology, chemistry, physics, economics, and other fields. Lorenz’s characterization of chaos is elegant: “When the present determines the future, but the approximate present does not approximately determine the future.”
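
A few lines of code suffice to glimpse that infinite complexity. This sketch tests whether points survive the iteration described in note 5 and prints a crude ASCII portrait of Mandelbrot’s set (the grid resolution and iteration cap are arbitrary choices):

```python
# A crude ASCII portrait of the Mandelbrot set (see note 5): a point
# c belongs when repeated z -> z*z + c stays bounded.

def stays_bounded(c, max_iter=60):
    """True if the orbit of 0 under z -> z*z + c has not escaped."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # once |z| > 2, the orbit must fly off to infinity
            return False
    return True

for row in range(21):
    y = 1.2 - 0.12 * row
    print("".join("#" if stays_bounded(complex(-2.1 + 0.05 * col, y)) else "."
                  for col in range(64)))
```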

Thanks to Lorenz, Mandelbrot, science writer James Gleick,[6] and others, we now know that in many systems immeasurably small causes can profoundly alter what happens over the course of time, in ways that are inherently unpredictable. Therein lies an existential threat with cosmic implications.

As I said earlier, almost all problems are either unsolvable or get exponentially harder with size. Even a super-intelligent AI cannot change that reality. That said, today’s AIs excel at inventing heuristics—that is, coming up with good guesses. For example, DeepMind’s AlphaFold approximates a protein’s 3D shape, the key to its biological activity, from its linear amino-acid sequence. Predicting protein folding is NP-hard. AI’s impressive heuristic feats, from the arts, to games, to medical diagnoses, to translation, to … are way too many to list here.

This seems to be a very good-news story: AI is emerging just in time to help us as we slam our heads against a rapidly growing list of complex, interrelated, global threats. But can AI really help? Unfortunately, chaos theory suggests the answer is worse than “No.”

AI is actually an accelerant, turning our sleepwalk toward the existential abyss into a run. Why? The more intelligent our tools become, the more complex the innovations they usher in, the still more complex their consequences, and the faster they all arrive to interact with us, with our other artifacts, and with the biosphere.

The consequences are often nonlinear and spread virally. As with the chaos triggered by a butterfly’s wings, they are inherently unpredictable and often destabilizing. What cannot be foreseen cannot be avoided. Even if no single technology is enough to do us in, sooner or later the ultimate “perfect storm” will arise. And even if we are not totally exterminated, survivors could well be knocked back into the Stone Age—effectively, economic and cultural extinction.

What about heuristics? AI’s forte! Yes, AI savants can certainly alert us to many more potentially dangerous outcomes than we can otherwise foresee. But every time a red flag goes up, will we heed the warning? What will powerful vested interests do when the warning threatens their economic and/or political interests? So far, our track record has been abysmal. Did we heed Bill Gates’s pandemic warning in 2015? How about when COVID-19 first started killing us? How many times are warnings forgotten and ignored after disasters such as the 1918 flu pandemic subside? Will the United States stop secretly developing autonomous weapons if rivals solemnly promise not to do so? No watchdog can know what every secret government and private lab is up to, let alone stop any of them on the say-so of an AI savant.

And even if we succeeded in forcing global halts to all existentially hazardous activities, what about exciting new discoveries whose foreseeable consequences our best minds believe are worth the risks? Suppose an AI with an IQ of 6000 strains itself to invent a wonderful, game-changing technology, say cheap and limitless controlled fusion power—an irresistible boon to humankind, who collectively will, of course, also attempt to prevent every misuse imaginable. Too bad many misuses will be unimaginable, let alone controllable by an inventor already pushed to its limits. On top of that, chaos theory ensures unpredictable outcomes. If I were an eschatologist, I’d be quite excited.

Are Our Problems Unique?

The Kepler space telescope’s data suggest as many as 40 billion Earth-sized planets orbit Milky Way stars in their Goldilocks zones of habitability. Can you imagine if the Chicxulub asteroid had missed the Earth 65 million years ago and tyrannosaurs had evolved human-level intelligence 40 million years ago? I can’t, and Arthur C. Clarke’s third law explains why: any sufficiently advanced technology is indistinguishable from magic.

Given billions of years and even more billions of habitable planets, intelligence has had ample opportunity to evolve many times. If any are like my hypothetical tyrannosaurs, their civilizations are already millions of years old. It’s plausible that in our galactic neighborhood at least one has engaged in interstellar engineering projects, say building Dyson spheres.[7] Such projects would inadvertently radiate non-random signatures that are detectable by our radio telescopes. Sadly, in over fifty years of trying, SETI, the search for extraterrestrial intelligence, has observed nothing, nada, zilch, no signatures of intelligence.

Of course, people offer innumerable optimistic explanations of SETI’s dead silence. Occam’s Razor[8] tells me that the absence of evidence is evidence of absence—we are alone in the cosmos, despite many technological civilizations having emerged over the vastness of deep time. Why? I contend that high-tech civilizations flash in and out of existence in the proverbial blink of an eye. A thousand years is a very long time to live with an exponentially growing list of existential threats such as I have been at pains to describe. But a thousand years is only a millionth of a billion years. Thus, the chance of two high-tech civilizations existing within hailing distance at the same time is tiny. Shakespeare was prescient: “A poor player that struts and frets his hour upon the stage, and then is heard no more. It is a tale told by an idiot, full of sound and fury, signifying nothing.”
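
The odds are easy to simulate. In the toy model below, every number is my assumption, purely for illustration: one hundred civilizations each shine for a thousand years at random moments across five billion years, and almost never do any two coexist:

```python
# A toy overlap model (all numbers are assumptions for illustration):
# N civilizations each "shine" for L years at a random moment in a
# T-year history. How often do any two overlap in time?

import random

T = 5_000_000_000   # years of galactic history
L = 1_000           # technological lifetime of each civilization
N = 100             # civilizations arising in our neighborhood
TRIALS = 10_000

overlaps = 0
for _ in range(TRIALS):
    starts = sorted(random.uniform(0, T) for _ in range(N))
    if any(later - earlier < L for earlier, later in zip(starts, starts[1:])):
        overlaps += 1

print(f"any two civilizations coexist in {100 * overlaps / TRIALS:.2f}% of trials")
```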

Final Thoughts

Chicken Little is not my twin. Warnings of impending doom are a dime a dozen, and yet we’re still here. We have a deep desire to stay optimistic, to cling to convenient fictions rather than face hard truths—that desire has kept religions in business for millennia. So, what will it take to beat the odds? Some truths to ponder:

  • As science unlocks more and more of nature’s secrets, we invent ever more ways to erase life as we know it. We cannot unlearn them. It takes only one rogue intelligence with the mastery, means, and motivation to wreak havoc. Preventing all existential threats all the time everywhere entails levels of global will and cooperation far beyond what humanity thus far has been willing and able to muster. If you know of any feasible way to prevent the weaponization of AI, for example, speak up.
  • Should we strive for some form of “social engineering” that edits out greed and maliciousness? Could we actually do this without also harming the better angels of our nature? Would we even want to, given the unprecedented amount of surveillance and interference with our personal freedoms that it entails? Hell, the United States can’t even curtail its massive and ever-growing use of handguns!
  • Should humans enhance their own intelligence by hybridizing with AI? If so, consider the side effects of a schism between the (few, powerful, elite) post-humans and the rest of humanity.
  • We prize unfettered innovation. We punish criminals but do little to prevent crime. We enshrine free speech despite some of it being highly dangerous in the wrong hands.
  • Will my warning be heeded any more than Bill Gates’s? I highly doubt it.

How might we end? The conditions for collapse[9] may stealthily accumulate in civilization’s nooks and crannies until a subtle interconnection among them enables a chain reaction of cascading catastrophes, the so-called perfect storm. Or events may unfold more like a slow-motion train wreck. Indeed, Prof. Sid Smith contends we’ve already passed the point of no return without knowing it.[10]

Despite my pessimism, I still hope I’m wrong. I hope we somehow wake up in time to find a way to muddle through. I fervently hope that we on this pale blue dot are not fated to add one more confirming statistic to the unfortunate cosmic norm.

Notes

  1. What is a computer? It is any system that uses rules to interpret data from the outside world to behave in various ways. Thus, your brain is one of many different kinds of computer, one that happens to be made out of meat.
  2. In general, to find the shortest circuit that visits 2n cities takes roughly 2ⁿ times longer than to find the one for n cities; going from ten to twenty cities thus multiplies the work by about 2¹⁰, or more than a thousand.
  3. My review of Harari’s book appeared in the August/September 2017 issue of this magazine.
  4. The poorly educated, who in their inchoate anger about being impoverished in ways that they are helpless to stop, turn to demagogues who play to their fears, convincing them to vote against their own best interests!
  5. His famous and beautiful Mandelbrot set is the set of points c in the complex plane for which the sequence zₙ₊₁ = zₙ² + c (starting with z₀ = 0) remains bounded as n goes to infinity.
  6. Author of the influential book Chaos: Making a New Science (Penguin Books, 1988).
  7. A Dyson Sphere encloses a star to harness all or most of its energy.
  8. Occam’s Razor: All else being equal, the explanation with the fewest ad hoc assumptions is the more likely.
  9. For historical precedents, read Jared Diamond’s Collapse: How Societies Choose to Fail or Succeed (Viking, 2005).
  10. https://www.youtube.com/watch?v=5WPB2u8EzL8.

Paul Bassett

Paul Bassett is past president of the Central Ontario Humanists Association. He taught computer science at York University and cofounded two software engineering companies. He is the author of Framing Software Reuse (Prentice Hall, 1997).