Physicist Stephen Hawking recently proposed that artificial intelligence, as it comes ever more to exceed the processing speed of human intelligence, will, with time, be our undoing. Others, including Tesla’s Elon Musk, have seconded the warning. Are the dangers as extreme as these men propose? And if so, is there anything to be done? I would suggest that the dangers are real. I would also suggest that there is much we can do about them. The answer to avoiding perhaps terminal calamity lies in the particular nature of human intelligence.
To make sense of this conclusion, we need two essential recognitions. The first notes that artificial intelligence and human intelligence involve very different kinds of information processing (a recognition we can miss because of the limited way we tend to think about intelligence). If we restrict what we mean by intelligence to how we most commonly conceive of it, then the extreme picture put forward by Hawking and Musk is likely correct. In a purely rational, computational sense, computers are already more intelligent than we are. And as computers of the future become increasingly capable of evolving in self-organizing ways—not just learning, but replicating themselves in ever more complex forms—they will rapidly outpace us not just in processing speed, but in the complexity of interrelationships they can take into account. In a purely technical sense, artificial intelligence will readily out-compete us. And the time is not far off. Given that there is nothing inherently moral in machine intelligence, being outcompeted could very well result in our demise.
But artificial intelligence and human intelligence are not the same, and how they differ means that this image of direct competition as a function of speed and complexity of processing is not correct. Creative Systems Theory proposes that our Modern Age concept of intelligence represents but a narrow, abstracted slice of the full complexity of human intelligence’s toolmaking, meaning-making—we could say simply “creative”—makeup. It describes how intelligence has multiple dimensions—along with its appropriately celebrated more rational aspects, also aspects that draw on more emotional, imaginal, and bodily ways of knowing. It goes on to describe how these various intelligences, working in conjunction with conscious awareness, function to support and drive our audaciously creative natures. This kind of processing involves a fundamentally different kind of complexity than we find with artificial intelligence. If this observation is accurate, to think in terms of direct competition misrepresents the challenge. It mixes apples and oranges.
The second recognition turns to a specific characteristic of human intelligence that is essential to appreciate if this observation of differences is to support my claim that there is much we can do. Creative Systems Theory describes how the specifically creative kind of intelligence that makes us particular, if not unique, as creatures differs in a key way: it is inherently moral. “Creative” in the way I am using the word involves more than just innovating—which self-organizing machines can do very well. The way human intelligence works makes it concerned ultimately with human betterment (and not just in a technical sense), and as attentive to larger well-being as to competitive advantage. Creative Systems Theory also describes how, if anything defines Cultural Maturity, it is the ability to draw in newly conscious and integrated ways on intelligence in this more complex, explicitly creative, and ultimately moral sense. If Cultural Maturity is realizable, then not only will artificial intelligence not be our undoing, we should be able to harness it for making the kind of wisdom-based decisions on which our future depends.
A look to the cognitive reordering that produces Cultural Maturity’s changes helps clarify both of these recognitions. With Cultural Maturity, what Creative Systems Theory calls Integrative Meta-perspective replaces the from-a-balcony-like, objectivist/rationalist view that defined Modern Age understanding (and successfully took us beyond the mystical assumptions of medieval belief) with a vantage that is at once more conscious and also more encompassing. Part of what it more fully encompasses is the multiple aspects of intelligence. Modern Age from-a-balcony “objective” perspective understands the world in rational, machine terms—Descartes’ great clockworks—and lumps other intelligences together as aspects of the subjective (which at best provides decoration for the machine). Computer intelligence is a logical extension of this machine-language kind of understanding—which explains why, in modern times, we might confuse it with intelligence as a whole.
Integrative Meta-perspective’s more encompassing vantage alters how we think about intelligence, and, with this, how we understand more generally. It produces a new, explicitly integrative, more dynamic and systemic view of whatever it is we might wish to consider, including the moral. Creative Systems Theory adds that we can use a creative frame both to describe intelligence’s systemic workings and to capture this further result.
When I say that human intelligence is inherently moral, I am referring to intelligence in Integrative Meta-perspective’s more inclusive and specifically generative sense. Creative Systems Theory proposes that intelligence has always been inclusive and “creative,” simply that before now we have not been able to appreciate this fact. Three observations help capture what makes such intelligence moral and the particular implications for our time.
First, the generative nature of such intelligence means that it is inherently engaged in helping things become “more,” and not just in a technical sense. In Creative Systems terms, it produces increases in Capacitance—a growing ability to hold and further life. We see this with any kind of human developmental process, from the stages of individual growth to the evolution of culture. Seen from a culturally mature perspective, moral acts are acts that are life-affirming—put more abstractly, they support Capacitance. Human intelligence’s creative nature thus means that, in this most basic sense, human intelligence is engaged in a moral enterprise.
Second is how intelligence in this more complete sense involves not just aspects that are concerned primarily with categories and differences, as is the case with rationality, but also aspects that are as much or more concerned with connectedness and common good. For example, Creative Systems Theory describes how bodily intelligence is primary during tribal times in culture. In tribal times, connectedness with nature and with one another in tribal bonds is of much greater importance than the distinctions of analysis or the delineations of individual identity. It is in the nature of human intelligence that the kind of “more” it is concerned with involves not just individual benefit, but also larger well-being.
The third observation relates more specifically to now and the changes that define our time. I’ve described how intelligence as understood in culture’s Modern Age was not much different from machine intelligence. Indeed, with what Creative Systems Theory calls Transition, human intelligence becomes reduced to little more than this. It is thus not surprising that in our times we so often see behavior that is essentially amoral—whether based on an essentially materialistic ethic, or on a different-strokes-for-different-folks moral relativism that in the end is not moral at all. But because Integrative Meta-perspective more fully engages the whole of intelligence, it also more deeply engages our moral—life-affirming—natures. It offers that we might not just be more consciously moral, but also bring a degree of wisdom and an encompassing embrace to our moral discernments that has not before been possible.
We can miss the ultimately moral nature of human intelligence for a couple of reasons beyond just the fact that we’ve not had conscious access to intelligence’s creative multiplicity. There is how, historically, we’ve tended to place the source of moral guidance and advocacy outside of ourselves—in divine forces. There is also how we humans often do decidedly immoral things—both as individuals and collectively. Culturally mature perspective addresses both concerns. It helps us appreciate the personal and the spiritual not as separate, but as aspects of a larger systemic picture. It also helps us see how acts that in limited ways do harm—often significant harm—can come with the way creative systems work. To say human intelligence is inherently moral is not to say we are inherently good. Things creative involve uncertainty and the possibility of error, and may also produce results that can be unfair and cause pain. It is also in our natures that we can respond to threat in unsavory ways.
Consistent with this creative picture, the more complete picture of human intelligence I have described does not necessarily mean our future will be positive. It is quite possible that what we will become capable of creating with our audacious tool-making capacities will indeed be the end of us. What these reflections on human intelligence do is make clear that we are wired in a way that, at least in potential, supports the managing of future invention—including kinds of intelligence we might invent—in ways that are ultimately moral, and even wise.