The Greatest Danger With New Advances in Artificial Intelligence, and Understanding Where the Necessary Antidote Starts

The recent dramatic appearance of large language A.I. models has reawakened conversations about both the dangers and the benefits of artificial intelligence. The potential benefits are considerable, and I don’t wish to minimize them. But along with many of the best thinkers in the A.I. field, I think that the dangers are very real. As a futurist and psychiatrist, I also think that we have tended to miss, or at least failed to fully appreciate, the danger with the greatest potential to be the end of us.

Some of those who are now speaking up emphasize the dangers of disinformation and bias. Others express concern that bad actors on the world stage could wage machine-learning-based attacks. And others still are even more cataclysmic in their warnings, pointing toward how a kind of artificial general intelligence could out-compete us and essentially take over the world. I think the danger that puts us most at risk is more basic, more psychological and societal.

We see its initial manifestations today in addiction to our electronic devices. Increasingly, our devices are designed to capture our attention, pretty much whatever it takes to do so. And machine learning plays an increasing role in how they accomplish this. The mechanisms of device addiction are essentially the same as those that make addictive drugs so attractive: our devices create artificial stimulation that substitutes for the bodily feedback that would normally tell us that something matters. Today, machine learning algorithms compound those mechanisms many times over, supporting the creation of ever more powerful digital designer drugs, with increasingly destructive results.

Perhaps surprisingly, this outcome requires no ill intent. Simply give a program the instruction to maximize “eyeballs,” which is exactly what we now routinely do, and with time it will create the most distracting and addictive content possible. This would be a problem at any time, but in ours it is of particular concern. I’ve written extensively about how we confront a Crisis of Purpose, something we see manifest in today’s growing prevalence of depression, suicide, degenerative diseases, and gun violence. Distraction and addiction only take us away from what engaging real purpose requires of us.
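To make the point concrete, consider a toy sketch, purely illustrative and not any platform’s actual system. The content labels and “hold times” below are hypothetical, and the audience is simulated. The learner is a simple epsilon-greedy algorithm given nothing but the instruction to maximize watch time; there is no malice anywhere in the code, yet the objective alone pulls it toward the most gripping material.

```python
import random

# Hypothetical content types and their (unknown to the algorithm)
# average ability to hold attention, in seconds.
TRUE_HOLD = {"calm_essay": 20, "news_clip": 45, "outrage_bait": 120}

estimates = {item: 0.0 for item in TRUE_HOLD}  # learned average watch times
counts = {item: 0 for item in TRUE_HOLD}       # how often each was served

def serve(epsilon=0.1):
    """Epsilon-greedy choice: occasionally explore, mostly exploit
    whatever has held attention best so far."""
    if random.random() < epsilon:
        return random.choice(list(TRUE_HOLD))
    return max(estimates, key=estimates.get)

for _ in range(10_000):
    item = serve()
    # Simulated viewer: watch time is noisy around the item's true hold.
    watch_time = random.gauss(TRUE_HOLD[item], 10)
    # The only instruction: update estimates toward maximum watch time.
    counts[item] += 1
    estimates[item] += (watch_time - estimates[item]) / counts[item]

# With high probability the system settles on "outrage_bait" —
# no one told it to; the objective did.
print(max(estimates, key=estimates.get))
```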

And we confront the critical fact that this mechanism is inherently self-amplifying. Once started, there is really nothing of a technical sort to stop it. It may well be that there is no way to stop it. Of all the dangers that could be our ultimate undoing, whether nuclear annihilation, pandemic, or climate change and environmental destruction, it is this one that I think is most likely to succeed.

In my most recent book, Intelligence’s Creative Multiplicity, I argue that any possibility of avoiding the most cataclysmic of potential outcomes with A.I. lies with us. I also argue that the place to start is with understanding how fundamentally different machine intelligence is from human intelligence. In fact, the two have little to do with one another.

A first recognition is both basic and radical. There is an important sense in which human intelligence is not just more complex than machine learning in its considerations; it is inherently purposeful. Human intelligence is “designed” to engage us in questions of value and meaning. This in itself takes us a long way toward an antidote. Engage human intelligence deeply, and anything that distances us from our felt sense of human purpose, as addictive dynamics directly do, is experienced as a violation.

The book draws on Creative Systems Theory’s picture of intelligence’s multiplicity to take this kind of recognition further. Creative Systems Theory describes how human intelligence, with its multiple aspects, is specifically structured to support and drive our toolmaking, meaning-making natures. And it goes further to delineate how effectively engaging all of intelligence’s multiple aspects is essential to thinking with the sophistication the future will require. What the theory calls culturally mature understanding requires that we draw not just on our rationality (in which we take appropriate pride), but also on the world of feelings and emotions (that informs human relations), on the language of imagination (that inspires art and myth), and on the intelligence of the body (that provides a foundation for all the rest). We are by nature reflective, creative beings.

In contrast, while artificial intelligence can potentially be almost infinitely complex, at best it mimics one aspect of human intelligence, the rational, and even that only imperfectly. (Our rationality works in ways that are much more nuanced than we tend to appreciate.) In the end, it is machine intelligence. This recognition is critical if we care to avoid calamity. Certainly it is essential when it comes to confronting today’s Crisis of Purpose. Machine learning is a tool, and one with great potential for good. But in contrast with human intelligence, there is nothing in it that makes it inherently purposeful, or even simply good. The recognition is also essential when it comes more broadly to the kind of understanding needed to take us forward. Artificial intelligence fails to reflect the fact that we are alive, and alive in the particular way that makes us human.

Today, we easily miss these critical distinctions. Indeed, because we so readily idealize the technological (in effect, making it our god), we can get things turned around completely. Caught in techno-utopian bliss, we can make machine learning the thing we celebrate. And that is just a start. In an odd way, machine learning becomes what we emulate. As attention spans grow shorter and we give up more and more of our attention to our devices, cognitive changes are taking place in response. Arguably, today, it is less that our machines are coming to think more like us than that we are coming to think more and more like our machines.

We let this happen at our peril. Our ultimate task as toolmakers is to be sure that we use our ever more amazing tools not just intelligently, but wisely. That starts with being able to clearly distinguish ourselves from our tools. Machine learning, and the ever more complex and often amazing forms it will surely take in times ahead, will provide a particularly defining test of this essential ability, one on which our survival may depend.
