My contribution as a futurist focuses on the future’s human dimension. The technological is part of this. What makes us unique as a species is the audacity of our tool-making prowess. But I am most concerned with how we humans understand our worlds and the values we hold. In particular, I am concerned with how our future well-being depends on the wisdom we bring to future decisions.
Technological innovation will, without question, have an essential, indeed dramatically transforming role going forward. But for critical reasons, some inherent to the nature of invention, some new to our time, future inventions will present major dangers as well as potential benefits. My intent here is to offer perspective that might help us use future technologies more wisely.
Because I will give greatest attention to where caution is warranted, I should emphasize that I am not at all a technophobe. In my writing I emphasize the essential contributions that technologies of the future will make in meeting critical challenges before us as a species. But very often our modern tools have effects other than those we intend. And many tools that we are only now beginning to imagine warrant particular scrutiny.
I will attempt here to bring big-picture perspective to a few areas of innovation where apparent advancement easily becomes stupidity … or worse. Sometimes this is because the innovation itself presents real dangers. As often, it is more that in some way we lose track of the fact that our new tools are just that, tools. Technology today easily becomes ideology. We make our tools truth, if not gods. In the process we lose the ability to see clearly. A hammer is a tool, and a great one—at least for certain purposes. But if you met a person who thought a hammer could solve all his problems, you would appropriately be concerned for him. And if that person went further and thought the hammer to be truth itself—or even further and made it divine truth—you would appropriately assume him to be deranged.
With increasing frequency today we distort the significance of our tools, in particular our digital tools. Creative Systems Theory predicts we would find this in our time. It describes how it is in our nature to “overshoot the mark” as we make the transition from one cultural story into a needed new cultural chapter. We extend the previous cultural narrative to the point of ludicrousness. CST predicts we would be doing so today with the mechanistic worldview that gave us modern age understanding and the heroic, onward-and-upward assumptions that fueled the industrial age (see Transitional Absurdity).
The future will require greater ability than in times past to step back and deeply consider the consequences of what we create—if for no other reason than that we are now capable of so much that would do great harm as well as great good. Needed new understanding starts with an observation that might seem self-evident, but which can nonetheless escape us: Inventing and using invention wisely are not at all the same. Two things happen when technology becomes ideology. We lose perspective with regard to what our technologies can and can’t accomplish. And we fail to recognize unintended consequences. Today we confront dynamics that at the least distract us from the essential tasks at hand. And sometimes the potential outcomes are calamitous.
The CST concept of Cultural Maturity argues that a new chapter in the human narrative is critical—and happening. That new chapter has essential implications for invention, including implications for what we are capable of inventing. But most specifically it is about changes in ourselves, changes in how we understand ourselves and our worlds. With regard to technology it highlights how even just making useful sense of key technology-related questions requires that we think in more sophisticated ways.
Cultural Maturity’s changes also make possible new human capacities that will be essential going forward to making good choices in every part of our lives (see Capacities). One such new capacity has particular pertinence to the technological: We must learn to better appreciate the fact of real limits, both to what our tools can accomplish and to our own ability to fully predict the effects of our tools. A new, more mature—post-heroic—relationship to limits will be essential if we are to effectively assess benefit and risk and apply new technologies wisely. It will also be essential if we are to appreciate a limit that is particularly critical to effective decision-making in our time. While new technologies will play important roles going forward, very few of the important human challenges ahead have technological fixes. Addressing them will require changes in how we think and act.
In the end, Cultural Maturity alters not just what we understand, but how understanding works. It involves specific cognitive changes that make it possible to think in ways that are more encompassing and nuanced—we could say simply systemic (see Integrative Meta-perspective). As a start with regard to the technological, thinking more systemically means better taking into account all of the factors involved, including those that might affect whether a new technology benefits us. In addition, culturally mature perspective’s more systemic picture makes every question ultimately a moral question, a question of value. This includes technology-related questions that in times past we might have thought of as value-free (see The Future of Morality). In the end, culturally mature perspective challenges us to take a newly conscious responsibility in an ultimate technology-related moral/ethical question: just what it means to advance as a species (here specifically in our time). It requires us to rethink progress itself (see Rethinking Wealth and Progress).
Here I will touch briefly on seven specific technology-related topics where we are often not as smart as we think—and sometimes dangerously deranged: device addiction, distracted driving, unquestioned assumptions about self-driving cars, increasingly expensive technological advances in health care, the exhilarating but also deceptive promises of space travel, the potential pitfalls of artificial intelligence, and what we can miss with virtual reality. I will also touch on a couple of misleading big-picture technology-related conclusions: the popular notion that we can appropriately think of the Information Age as a “second industrial revolution,” and the assumptions of the more extreme of techno-utopian beliefs.
With the first topic, addiction to our electronic devices, the dangers easily sneak up on us. Our devices can do so many things that we find useful—and are often just plain fun. But the dangers here may prove the most ultimately significant.
If we are to use our electronic devices wisely, we need to examine not just what they can do, but our relationship to them. With growing frequency today that relationship is not a healthy one. I am comfortable asserting that the greatest danger with today’s addiction epidemic, certainly in the long term, lies not with addictive drugs or obesity, but with addiction to our electronic devices.
Addictive dynamics are most obvious with video games, where shootings and explosions create readily repeatable jolts of excitement. Addiction works by replacing real fulfillment with artificial substitutes. In an experiment often described in psychology classes to teach about addiction, wires are run from excitement centers in a rat’s brain to a depressible pedal in its cage. After the rat discovers the pedal, he pushes it with ever greater frequency, in the end abandoning other activities including eating, and dies. The neurological feedback used to keep our eyes glued to the screen with many video games is little different in its mechanisms.
Such dynamics today present a danger as great with our cell phones—or greater—in part just because cell phones have become an aspect of most everyone’s life, in part because of the immense commercial rewards that come with their ability to control our attention. Several times a week I walk around a lake near my home in Seattle. I see a deeply disturbing trend. Nearly half of the people walking around the lake—it is in one of Seattle’s most popular parks—do so while staring at their cell phones. Often this is the case even for couples. Not only are they missing the beauty of the lake, they are substituting the stimulation of their devices for real relationship, a phenomenon we see with growing frequency more generally with social media.
It is important to recognize that what we see is not simply a product of the usefulness of our devices. There are specific chemical reasons why people feel like they have to check their cell phones every few minutes. It is a dirty little secret of the tech world that programmers consciously design their software to be addictive. They build in rewards that make visiting a favorite site like playing a slot machine. And they make it so we feel anxiety if we are away from our devices for long. The fact that most of the content on our cell phones is advertising-driven means that we should find such addictive methodologies becoming only more sophisticated in the time ahead.
These concerns are amplified by what CST refers to as our modern Crisis of Purpose (see Crisis of Purpose). As traditional cultural beliefs stop providing needed guidance, we can be left feeling adrift and alone. This circumstance makes us particularly vulnerable to addiction. It also means that we pay a particularly high price—and not just personally, but as a species—when we confuse addictive pseudo-significance with meaning. Earlier, I described how culturally mature perspective reveals all questions to be moral questions—questions of value. Given today’s Crisis of Purpose, the selling of addictive substitutes for meaning could not be more immoral—this in the sense of diminishing who we are and undermining future possibility. The antidote to our modern Crisis of Purpose lies with asking just what in fact most matters to us with new depth and courage. Being distracted and addicted undermines our capacity to take on this essential task.
The internet promised a new democratization of information. It has often provided just that. But too often today we are bombarded with “information” that has little to do with anything that matters. And the internet also offers new opportunities for deception—as we see with incessant “click bait” enticements and the growing proliferation of “fake news.” If we do not pay attention, rather than freeing us and making communication more democratic, the information revolution could end up undermining the democratic experiment—and even put the larger human experiment at risk. In his dystopian novel 1984, George Orwell warned of Big Brother taking control of our minds. The real danger in the future is not government manipulation, but artificial stimulation masquerading as substance and information being used in ways that ultimately disconnect us from real importance.
The second example, distracted driving, is more specific, but related. We’ve recently seen the biggest spike in traffic deaths in fifty years. The culprit is distracted driving. It turns out that distracted driving is more dangerous than drunk driving. And the issue is not whether we might be taking our hands off the wheel, but interference with the attention needed for the complex task of driving.
We are beginning to pass laws that make driving while holding a device illegal. But such laws are partial solutions at best. In part this is because holding the device is not the problem. More it is because making such behavior illegal is not a sufficient deterrent to override the hold our devices have on us. In fact none of this need be a problem. There is a straightforward technological solution that has been well worked out. We need simply make it so that cell phones do not function if a car is in motion. (Either the car or the phone can be programmed to produce this result. And it can be done in such a way that others in the car besides the driver are not affected.)
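The logic of such a safeguard is simple enough to sketch. What follows is an illustrative sketch only—the function name, the seat-detection idea, and the speed threshold are all hypothetical, not any real phone or vehicle API:

```python
# Illustrative sketch of a driving-mode lockout. All names and the
# 10 mph threshold are hypothetical assumptions; a real system would
# rely on the phone's or vehicle's actual sensor and telematics APIs.

DRIVING_SPEED_MPH = 10  # hypothetical cutoff for "car in motion"

def should_lock_phone(speed_mph: float, seat: str) -> bool:
    """Lock the phone only for the driver while the car is moving.

    The `seat` argument stands in for whatever mechanism distinguishes
    the driver from passengers, so that passengers' phones keep
    working (as the text notes is possible).
    """
    if seat != "driver":
        return False
    return speed_mph >= DRIVING_SPEED_MPH
```

On this sketch, a passenger’s phone stays usable at highway speed while the driver’s phone locks as soon as the car is meaningfully in motion—the behavior the paragraph above describes.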
The reasons this obvious step has not been implemented? The device industry and the car industry would each lose money. And we would hear an uproar from all the people addicted to their cell phones. But the most basic of public health principles make the need to apply some version of such technology obvious. One of the first steps we need to take if we are to be intelligent in relation to our digital tools is to implement this kind of safeguard.
With self-driving cars the issue is not so much harm as the insights we can gain from what we too often miss. I suspect that self-driving cars will in time be a good thing. Eventually they should help us address not just traffic fatalities, but also roadway congestion and fuel consumption. They should also make time spent in traffic more tolerable. But I find it fascinating how quickly people have been willing to assume that self-driving cars will be here tomorrow and how little attention is often given to concerns that could be real problems. Wholly autonomous driving—not just augmented driving as we are seeing now—should likely in time be quite workable. But I suspect that getting there will be more difficult than our idealized pictures tend to assume.
A very basic reason has to do not with the cars themselves but with the roads they must drive on. Wholly autonomous vehicles are essentially ready to go when it comes to well-marked highways. But winding country roads, complex intersections, and road surfaces that show any great degree of neglect are another matter. Refinement of technology can rectify some of this. But the truth is that really broad implementation of fully autonomous self-driving technology will require also taking on the task of repairing and updating roadway infrastructure (a monumental task at least in the United States). This could drag out the timeline considerably.
As far as the vehicles, even the experts have often been remarkably blind to real risks inherent to the transition. Three issues most stand out. One is the need for exceptional levels of reliability and safety. The software in devices we are more familiar with is not designed to operate indefinitely without ever crashing or freezing, errors that could be deadly in a car. And we are only just beginning to appreciate the dangers that could arise from the software in self-driving cars being hacked. The second issue is the “hand-off” problem—how to have a person take over in an emergency when the technology is specifically encouraging the person to be inattentive. The “hand-off” problem is very real and not easily addressed. Even granting that fully autonomous vehicles are a workable technology, it could make getting from here to there considerably more difficult than people tend to assume.
The third issue—what has been called the “trolley problem”—confronts us with the question of what kinds of moral choices we are going to program into our “intelligent” machines. Some of those choices present moral quandaries that lack obvious answers. Imagine you are driving along in your self-driving car and a bunch of kids run out in front of you. Your car has three options. It can swerve out into traffic and quite possibly kill you. It can swerve right and collide with whatever is there—a wall, an embankment, or perhaps more pedestrians. Or it can keep going straight ahead and hit the kids. Note that the moral dilemma this situation presents is not just complex, it is one for which no acceptable choice exists. (We’ve failed to recognize this kind of dilemma in the past because we responded to such situations reactively. Take time to think through the options consciously and the absence of acceptable choices becomes inescapable.) Not just software engineers, but lawmakers, insurance companies, and ethicists have a lot of work to do before we have at all acceptable solutions.
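The structure of this dilemma can be made concrete with a toy sketch. Everything here is invented for illustration—the option names and harm flags are assumptions, and no real system would score moral choices this crudely—but it shows the essential point: every available option carries serious harm, so the set of harm-free choices is empty.

```python
# Toy illustration of the "trolley problem" for a self-driving car.
# The three options mirror the scenario in the text; the harm flags
# are invented assumptions for the sake of the example.

options = {
    "swerve_left_into_traffic": {"harms_occupant": True, "harms_others": False},
    "swerve_right_into_obstacle": {"harms_occupant": True, "harms_others": True},
    "continue_straight_at_kids": {"harms_occupant": False, "harms_others": True},
}

def acceptable(outcome: dict) -> bool:
    """An 'acceptable' choice, for this sketch, would harm no one."""
    return not outcome["harms_occupant"] and not outcome["harms_others"]

# The point of the dilemma: no option passes the test.
acceptable_choices = [name for name, o in options.items() if acceptable(o)]
```

Whatever rule a programmer writes must therefore select among harmful outcomes rather than avoid harm, which is exactly why the paragraph above calls this a dilemma with no acceptable choice.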
Self-driving technologies also present inescapable economic concerns. We tend to assume that new technologies will produce economic growth. But this benefit can be highly unequal in its distribution. Autonomous cars will surely benefit the corporations that build them and utilize them. They will also benefit stockholders who invest in those corporations. But at the same time, they will eliminate a huge number of jobs. Very often these will be jobs held by the people least in a position to weather economic disruption. The growing gap between the planet’s haves and have-nots is one of the dynamics most likely, long term, to result in major disruption and even calamity on the planet.
Again, none of this is to argue that interest in self-driving technology is misplaced. It is only to highlight how readily we can leave out essential factors when a new technology affirms ideological beliefs. We are not very good at assessing—or even recognizing—risks when doing so runs counter to our modern heroic narrative.
Health Care Technologies
I’ve written extensively about the dangers of clinging to the past’s heroic mindset in my own field of health care. Continuing to view medicine’s task only as fighting death and disease threatens to derail the whole health care endeavor. I’ve described how inescapable economic limits mean that with time we will have no choice but to consciously limit available care. I’ve also described how doing so will require a maturity in our relationship with limits—including life’s ultimate limit, death—that as a species we have not before been capable of (see Health Care Reform as Political Crazy-maker).
A major part of that need to accept health care limits is a product of new technical advancements. Many innovations now on the horizon will have great benefit. And some new approaches can reduce costs. But the larger portion of new treatments—highly individualized medicines, evermore complex transplant techniques, increasingly sophisticated gene-based approaches—are hugely expensive. Often on the same page of the newspaper we find one article on spiraling health care costs and another celebrating some dramatic new high-priced treatment. That there is no discussion of a possible conflict between these images reflects the degree to which a heroic mythology continues to determine our health care perceptions and drive medical decision-making.

New health care technologies even more directly confront us with the kind of economic quandary that I described for self-driving cars. Only the wealthy will be able to afford the latest and most expensive advances. It is also very possible that genetic modifications, again available only to the advantaged, will result in social inequality being programmed into our DNA. New health care technologies, unless we manage them very wisely, could serve to further bifurcate humanity in particularly dangerous ways.
Space Travel and Colonizing Mars
Space travel is the realm of human activity that in the last century has most explicitly symbolized our heroic, onward-and-upward mindset. Today we find the goal of colonizing Mars. Again, the degree of unquestioning acceptance is both striking and informative.
The scientist in me agrees that going to Mars might be fun and that attempting to establish a colony there could be a fascinating thing to try. But the argument most often used to justify the effort—that we need to inhabit Mars as a safeguard given damage we are doing to the earth—just doesn’t hold up. If our concern is human survival, energy and attention are much better spent working to assure that the earth remains healthy and hospitable for the long term than trying to figure out if we can make a decidedly inhospitable place at least habitable.
It is great if we wish to go to Mars. But we shouldn’t do so based on claims that it will save humanity. We should be honest and admit that the idea excites us because it represents a particularly dramatic and colorful extension of a kind of endeavor we find inspiring. At the same time, we should then give huge priority to issues that will directly affect long-term planetary and social well-being—such as the extinction of species, climate change, loss of natural resources including clean air and water, food supplies for the future, and proliferation of weapons of mass destruction.
Again here there is also a generally unacknowledged economic factor. Even if we could drastically reduce costs, travel to Mars would be too expensive for anyone but the most well-off.
On all these fronts, we need to find greater social maturity here on earth before we take our ultimately unsustainable beliefs off to other planets.
While artificial intelligence holds great promise, its promise is decidedly two-edged. A particularly extreme sort of danger has gotten the most attention of late. Several years back physicist Stephen Hawking proposed that the full development of A.I. could spell the end of the human race. Other highly respected thinkers have since supported his concern. A.I. should with time have no problem out-thinking the human species. The ready application of evolutionary principles—the use of programs that not only self-replicate, but generate novelty and select for advantage—means that it should not be long before A.I. out-competes us as well. I think this danger is very real and essential to address. I’ve also argued that there is a solution.
Appreciating how this might be so starts with a more basic recognition. The term “artificial intelligence” is a misnomer. Better we should speak of machine intelligence or digital intelligence. In fact the use of the word intelligence at all is misleading. Anyone who has worked at all deeply with human intelligence knows that both the mechanisms and the results differ fundamentally. CST lets us be more specific. It describes how human intelligence has multiple layers, each of which organizes experience in distinct ways (at the theory’s simplest, it speaks of body intelligence, imaginal intelligence, emotional intelligence, and rational intelligence). These various layers work together to make us the meaning-making, relationship-making, civilization-making, audaciously creative creatures that we are. CST argues that not only can’t intelligence as a whole be reduced to digital processing, neither can any of its aspects (even, ultimately, the rational). (In fact it argues that even the intelligence of the most humble life forms is of a fundamentally different sort.)
I could imagine someone responding, but what about the Turing Test? (Alan Turing’s test proposes that if a computer responds to your questions and you can’t tell it is a computer, then artificial intelligence has been achieved.) Computers are now passing it. My response is that the Turing test is bad science. Think about it. Imagine a bright red toy sports car made out of candy that someone pulls along with an invisible string. From a distance, you can’t tell that it is not real. Now such a toy might be fun and useful for many things, even amazing things. But that does not make it a car.
Miss this more basic recognition and we become vulnerable to getting ourselves in trouble with artificial intelligence. We end up with distorted ideas about what A.I. can accomplish. We also leave ourselves helpless to manage its unintended consequences. Managing artificial intelligence wisely—and that includes avoiding the kinds of outcomes that Hawking and others have warned of—depends on drawing on precisely what makes living intelligence, and in particular human intelligence, different.
Making sense of this assertion requires an appreciation for the cognitive changes that produce culturally mature perspective (see Integrative Meta-perspective: Cultural Maturity’s Cognitive Reordering). Those changes make it newly possible to recognize intelligence’s full multiplicity and apply it in consciously integrative ways. Inherent to the Modern Age’s heroic narrative was the belief that rationality and intelligence were one and the same. In a rational/computational sense computers can already outthink us—indeed many times over. But Integrative Meta-perspective reveals a more complex, encompassing, and powerful picture of intelligence’s workings. Here A.I., at least as we today conceive of it, does not begin to come close.
I’ve written about how there is an important sense in which human intelligence—when all its complexity is included—is not just “creative” in a sense we are only beginning to grasp, but also inherently “moral” (see The Key to Artificial Intelligence Not Being the End of Us). I’ve described how the fact that we have always had moral codes, including the different forms they have taken, can be understood to follow from how human intelligence works. This includes intelligence as we have thought about it in modern times—our capacity for rational processing. But this dynamic breaks down if we “overshoot the mark” and extend the modern age definition of intelligence beyond its timeliness. Intelligence becomes evermore machine-like and evermore vulnerable to exploitation (see Transitional Absurdity).
How do we keep Artificial Intelligence from being our undoing? We must manage it using the newly mature and systemic kind of perspective that comes with Cultural Maturity. That means consciously drawing on the whole of intelligence. If we can do this, not only can we make wise choices in A.I.’s application, A.I. can serve us as a rich and powerful tool. Indeed it can free us to use our great creative capacities in evermore rewarding ways.
With virtual reality again we see numerous beneficial applications—for training of all sorts, to enhance architectural design, as an adjunct to new robotic tools in medicine, and much more. As far as dangers, most have to do with V.R.’s vulnerability to misuse.
Most obviously V.R. can be used to confuse or deceive. “Fake news” lies and distorts. “Fake realities” have even greater potential to be used for demagoguery and manipulation.
A more immediate danger could also prove ultimately most significant. I’ve described how the kind of “artificial stimulation in the name of meaning” we currently find with video games readily translates into ever-more sophisticated digital “designer drugs.” Virtual reality provides one of the most ready means to achieve this highly destructive end. I suspect this second kind of misuse will, at least initially, drive V.R.’s most profitable and widespread applications.
The antidote in each case again comes back to Cultural Maturity’s cognitive changes. As with pseudo-meaning more generally, these changes help provide needed larger perspective. They make us less easily deceived and less vulnerable to addictive dynamics.
Particularly with regard to the second kind of misuse, Integrative Meta-perspective’s more encompassing grasp of intelligence in potential also plays a critical role. Video games gain much of their attraction by drawing on the imaginal aspects of intelligence. We see this in the common use of mythic themes and imagery. If virtual reality is used to augment the video game experience, this effect could be amplified considerably. I’ve described how, because today we have only the last faint remnants of these aspects of intelligence available to us, they are readily exploited (again see Transitional Absurdity). The answer to safeguarding against the exploiting of the imaginal lies in a deeper engagement with the whole of intelligence and, with this, an appreciation of the essential ways that each aspect of intelligence contributes to making us human (in a similar way to how the answer to not being tricked by someone’s misuse of logic lies with understanding logic sufficiently that one can’t be out-witted).
A Second Industrial Revolution
Respected thinkers have suggested that it is legitimate to think of the digital revolution as a second industrial revolution. At one level the observation is reasonable. The first industrial revolution replaced muscle power with machine power. The digital revolution is similarly replacing brain power, here with computer code. But any conclusion that the significance is analogous is, I think, again at least premature.
Computers and cell phones have changed much in our lives. But it is a considerable stretch to suggest that the changes they’ve produced, at least thus far, have had the same magnitude of effect as the advent of industrial production or the widespread introduction of running water and electricity. That we might think that they have is possible only because these earlier, more fundamentally transforming advances have become so commonplace that we can’t imagine life without them.
I am also not the first to point out that it is far from certain that further innovations just now being introduced—such as virtual reality, the smart house, big data, or the broader “internet of things”—are going to have the same level of social and economic effect as earlier digital advances. Certainly there will be impacts, and in some cases major impacts. But much that is being presented with great fanfare today seems more clever than life-changing. For example, personally I find descriptions of the “smart house” of the future decidedly underwhelming. There are some useful advances, but most of what adding digital connectivity accomplishes does little that I can’t do quite well right now in simpler ways.
The critical observation I made earlier that computers and A.I. are not really about replacing brain power, but about replacing one very specific aspect of cognitive functioning is again pertinent. I’ve emphasized the profound implications that understanding this distinction has for the question of whether future digital advances will ultimately benefit us. But this distinction also has implication for the more basic question of whether future information age advances will affect our lives as fundamentally as idealized images can suggest. Artificial intelligence can powerfully augment human intelligence. That should result in many helpful and even transforming applications. But replacing human intelligence in any ultimate way is quite a different thing and not what we will see unless we badly lose our way.
Technological gospel beliefs can today be extrapolated to the point that they become literally religious. I think in particular of the techno-utopian assertion of people like futurist Ray Kurzweil, who proposes that we are rapidly approaching a point in history—what he calls the “singularity”—when artificial intelligence will surpass human intelligence. He proposes that a whole new form of existence will result, one that will transcend not just our biology, but also our mortality. Kurzweil describes digitally downloading our neurological contents and through this attaining eternal life—which he hopes to be able to do in his lifetime.
Technologies of the future will dramatically impact how we think about ourselves. But it is important to appreciate that while modern day techno-utopian thinking is put forward as radical in its newness, it is not new in any fundamental sense. As I described with misconceptions common with A.I., such thinking falls prey to an outmoded picture of intelligence. And in an even more culminating way than space travel, it reflects an ultimate expression of the Modern Age’s heroic, onward-and-upward story. And we can rightfully tie techno-utopian thinking to even older impulses. It also extends—here to a ludicrous degree—the separation of polar opposites that has marked the historical evolution of understanding. In the end, it gets around apparent contradiction by simply eliminating the “offending” half of polarity: the body in favor of an exultant (though ultimately mechanical) mind, the unconscious in favor of a now all-knowing (but in fact absent of real human knowing) conscious, and death in favor of a now triumphant digital immortality (see the Dilemma of Trajectory). Far from being new to our time, efforts to eliminate the body, the unconscious, and death have been common to utopian beliefs for thousands of years.
Bringing Perspective to the Future

What do these observations together tell us about what is being asked of us with regard to the technological going forward? Certainly a recognition noted in getting started is key: Very few of the really important challenges ahead are amenable to purely technological solutions. Climate change makes the most obvious example. Its solution necessarily lies with changes in how we think and the choices we make. The same ultimately holds for avoiding nuclear catastrophe, guaranteeing clean air and water and adequate food for the world’s people, addressing the dangerously growing gap between the world’s haves and have-nots, and slowing the ever-increasing rate of species extinctions. None of these questions have technological fixes.
I’ve described how CST’s concept of Cultural Maturity takes this observation further and lets us be more specific. It argues that good choices in our times will require a new kind of cultural narrative and with it, fundamentally new, more mature ways of understanding and acting. The fact of new invention is exciting, and inventions yet to come are important to contemplate. But more important, ultimately, is the need for new ways of thinking about ourselves and finding right relationships to the technologies we create.

I am saddened whenever I meet a smart young person who assumes—without giving it much thought—that having the latest app or hand-held gizmo means he is an innovator, on the cutting edge. The cutting edge today is so much more demanding and significant—and interesting—than this. Being an innovator in any ultimately meaningful sense in our time is about bringing to bear needed new, more mature ways of understanding ourselves and our worlds. Technical invention has a place with the leadership tasks ahead. But the most important of those leadership tasks ask more and are more specifically about us.
CST proposes that what makes us humans unique—or at least strikingly unusual—is the audacity of our toolmaking, meaning-making, we could say simply “creative” natures. The concept of Cultural Maturity goes on to describe how our times are challenging us to better understand our toolmaking prowess and to find a new, more mature relationship to our tools and what we use them to create. In modern age times, we’ve elevated and idealized our tools. Indeed, as I’ve suggested, in an important sense we’ve made them truth itself—whether particular cognitive tools (“I think, therefore I am”) or the tools and products of industrial achievement. With Cultural Maturity, we become better able to draw consciously on the whole of our cognitive complexity and also to step back and appreciate our tools as simply tools. We also find the possibility of new skills and capacities that can help us utilize our tools in the most life-enhancing ways.
Our future depends on the greater responsibility and appreciation of both possibility and limitations that come with our time’s needed more mature ways of understanding and acting. With the perspective that results, our relationship to invention changes fundamentally. We better appreciate that the task with our machines is to have them serve what we at our best are and machines are not—moral, creative, loving, capable of being not just intelligent but wise. There lies the critical fork in the road. Our tools can free us or replace us depending on how we understand them and understand ourselves.