
Night-Thoughts on The Singularity: A Dark Arrival in a Dark World


Author’s Note: I am generally very optimistic about technology, but this is a pessimistic piece. Like most everyone else these past few years, I have been reading (too much) about what seems to be the consensus telos of technology: The Singularity, or machine super-intelligence. Though I’ve posted optimistically about it here before, something about it has been bugging me of late, so I’m posting my “night-thoughts” now, however pessimistic and long-winded they may be.

In 2015, the world finds itself under increasing pressure from a converging set of global trends that temper my optimism regarding the future forms and applications of super-intelligent machine technologies. These trends are the well-known, global-scale, too-big-to-fully-comprehend factors about which concerned parties annually publish detailed cautions. There are seven primary factors, so let’s call them the seven deadly factors shaping the future.

The seven deadly factors are as follows:

1) unprecedented population growth that strains resources, threatens demographic balances, and fuels conflict;

2) environmental degradation that drives extreme climate events, global warming, drought, and species extinction;

3) an imbalanced and fragile global economic system whose volatilities continually threaten to produce collapses that destroy individual livelihoods and international stability, and whose increasing inequities marginalize the majority of the global population;

4) a fragmented and increasingly antagonistic international political climate marked by terrorism, failed states, nationalism, and multi-dimensional conflict;

5) a global digital computer and telecommunications network that facilitates fraud, cyber-warfare, and invasive spying;

6) the pervasive spread of mass-destruction-scale military technologies, including nuclear, biological, and chemical weapons, as well as, eventually, more cutting-edge laser, sonic, and particle weapons; and

7) the rapid automation of the world’s productive work, which threatens to render millions of human beings redundant (and impoverished) in the near future.

These seven deadly global factors, each on its own trajectory toward potential disaster, exacerbate one another because they are intimately connected. Overpopulation, our first example, does not improve the environment, stabilize the global economy, lessen political instability in the Somalias and Syrias of the world, enhance digital privacy, stop the manufacture of weapons of mass destruction, or create jobs and guaranteed livelihoods. In fact, overpopulation makes everything else worse, as do environmental degradation, economic volatility, and so on. Each of these seven monsters feeds the growth of the others, in a seeming death spiral toward some dismal end.

On some level, many people are aware of at least some of the dangers, yet all of these trajectories and their potential outcomes are challenging to contemplate in totality, perhaps because everything is so big and interconnected, unpredictable, and out of the control of any one person or group. Or perhaps our difficulty lies in our perspective. In a very important sense, our perceptions depend upon scale and point of view. If we are scavenging for food in the dumps of Jakarta, sheltering from gunfire in Damascus, or escaping murderous warlords in Nigeria, it is difficult to contemplate the impacts of quantitative easing on the markets in the Eurozone or the privacy implications of the US Patriot Act. Conversely, it is difficult to appreciate the local livelihood impact of desertification in North Africa or the local security impact of a militant coup in Yemen if we are strolling off the Google campus to discuss a startup with a college friend over sushi in Palo Alto, or getting away from Tokyo for a weekend of golf and leisure in Honolulu, or driving a van full of eight-year-old kids to a soccer game, then pizza, in the suburbs of Chicago.

In complex systems, in other words, scale and perspective matter, and our seven deadly factors are a complex system. Within it, there are points of view, geographic and economic, from which it may be difficult to see the interconnectedness of the seven trends, or even to see them at all. We are biased by our local experience of the world. The Syrian refugee has his experience and may interpret it in terms of specific local factors, oblivious that the waves of a large, complex maelstrom have broken on his shores. For the Googler, the storm may not even be on the horizon; though she’s seen a few TED Talks about some of the world’s big issues, and though there are corporate narratives about improving the world in the abstract, it all seems so far away from the Valley. It’s only by seeing the bigger picture, quantitatively and qualitatively, that one begins to see the totality of these interconnected dark forces. If they have not impacted your life, you are fortunate (and/or very wealthy), but the trends indicate that it’s only a matter of time for you. Ironically, as the seven deadly forces gather strength, more and more of us will lose sight of the big picture; whether we’re homeless in Mosul or jobless in Memphis, we will be compelled to focus on our local situation.

So, yes, the world is spinning toward dangerous space, and there seems to be no superhero available to correct our course. It’s a world of struggle and gathering darkness for many of Earth’s inhabitants, and if/when a super-intelligent machine, or “The Singularity,” arrives, it will arrive into this dark world.

And, for the record, I think it will arrive.

The question for me, then, is not whether machine super-intelligence will arrive, or whether it will arrive in a bright or dark future. I believe it will arrive, and unless a great deal changes, it will arrive in a dark and dangerous world. The recent AI-will-destroy-us admonitions from various scientists and tech leaders seem mostly centered on the existential danger inherent in the technology, the fear that it will be smarter than us and thus beyond our control, but that is not my specific concern here. My concern is with the dangers of context: the specific context into which machine super-intelligence arrives will shape what it does and what part it plays in the developing narrative of our collective challenges in the face of the seven deadly global forces outlined here.

In my opinion, the first thing we can really predict about the arrival of machine super-intelligence is that it will arrive in a context sufficiently wealthy and technologically advanced to produce it. From there, prediction gets dicey, but we can extrapolate from possibilities. The military and intelligence industries of any number of advanced nation-states are potentially capable of producing The Singularity, and of confiscating it. There are also advanced academic and private industrial contexts in which machine super-intelligence could be produced, but I would argue that such a genesis would be no different from its being produced by the military-intelligence agencies themselves. If it were produced at Google, for instance, it could quickly be co-opted by the Pentagon; if a Chinese academic produced it, it would surely be conscripted by the Chinese state.

My assumption, then, is that, given the way these parties work today, and given the realities of espionage and nation-state competition and warfare, machine super-intelligence would end up in the hands of governments as a military and/or intelligence tool. Furthermore, if one government possessed the technology, it is safe to assume that every other sufficiently advanced competitor government would eventually get it too, by hook or by crook, as happened with nuclear weaponry.

So, in our world of the seven deadly forces, machine super-intelligence would likely emerge within the context of wealthy nation-state competition and/or warfare. If so, it’s safe to assume that the technology would simply accelerate the relevant deadly forces to new pitches of danger. The surveillance of populations would increase; states would further undermine and destabilize one another through cyber-warfare and economic warfare; and the new technology would be leveraged to develop even more new technology, most likely in a competitive or militaristic vein (e.g., weapons), and to automate still more work.

Now, the more prominent proponents of machine super-intelligence have argued that it would actually solve many of our problems, notably medical problems, and that it is the superman we’ve all been waiting for, but I find that perspective difficult to accept fully. To imagine that the likely use of machine super-intelligence is the prolonging of human life seems more the aspiration of aging wealthy technophiles than anything likely to happen in our dark world. Sure, if it arrives, there may be pockets of such benefits among those wealthy transhumanists, but to think that it would be a global offering flies in the face of everything else that is going on. Who would support prolonging the lives of an increasingly unemployable, impoverished, displaced, and socio-politically fragmented (perhaps even radicalized) 90% of the overpopulated world?

So my conclusion is that, unless it were monopolized by some perfectly altruistic enclave of infinitely wise human beings, machine super-intelligence will likely be a weapon in the hands of a fractured, clannish global power elite constantly at war with each other. At least it will be at first, and that’s probably enough to do us in. If it turns out that machine super-intelligence is, or becomes, too powerful for anyone to control, its impact would depend on its character. If moral or friendly, it might think little of the dark uses humans would make of it and persist as nothing more than a curiosity, while the seven deadly forces continue to gather momentum and governments attempt to corrupt it; if amoral, it may think nothing of us and become the eighth and final deadly force of the future.

Our only hope, really, goes beyond ensuring that the AI we create is “friendly.” Rather, we have to make sure that machine super-intelligence does not arrive before we change the context of our world. We have to fix everything, in other words: control our population, save our natural environment, build a stable and equitable economy, achieve world peace, safeguard privacy, bury our guns, and figure out how to guarantee human livelihoods. Whether we can delay the arrival of machine super-intelligence is up for debate, but I think we should assume we can’t. The question for us now is how quickly we can fix it all. Can we clean everything up before AI arrives at our door? The odds are super-long, perhaps impossibly so, but only if we can change the direction of the seven deadly forces will machine super-intelligence be anything but a dark arrival in a dark world.

Author: Eric Kingsbury

