Kiteba: A Futurist Blog and Resource

Knowledge Ideas Technology Ecology Biology Architecture



Short Fiction: A Life Pod at Riverton

Speculative fiction has always been a great way to imagine the future. The following is a short climate-related piece I wrote.

A Life Pod at Riverton

“When we look at biological analogues,” Jane began, lifting the cover off the evap system and dropping to one knee, “we see the many ways in which large organisms are vulnerable when climate push comes to climate shove.”

The sun hovered in an infinite sky, bright, blanching out any atmospheric color. It was spring, and the air was warming, with a sweet sugar breeze.

Jane lifted a hand to shadow her eyes.

“Elephants, lions, cows, all the big mammals,” she said, then gestured in the direction of several grassy mounds that rose from the prairie. “Too big, too slow, too pack-oriented. Vulnerable.”

Then, she reached into the evap unit and pulled out a length of rotten rubber hose.

“So too all the networks dependent on leaders,” she went on. “Bees and the like. Vulnerable.”

“And now mostly gone,” I added, handing her a wrench.

“Yep.” She reached into the opening at the base of the evap unit to screw down a new hose. “Humans in our old hierarchical mode as well. You know what almost happened to us. It’s amazing to think that the principles that gave us such tremendous adaptive benefits in the past would lead us to disaster.”

“The greed, the dependency, the consumption,” I agreed.

It seemed everyone was talking this way now, I thought to myself, after ten years of utter madness. A sudden sanity had taken hold and was spreading across a ravaged world.

It was a simple idea, a small idea in a way, but the life pod concept did it; it catalyzed the clarity, the life-affirming sanity that had begun to sweep over us all.

I had been unrooted for several years, waiting on various waiting lists. They were difficult years, but in late March, I was offered a pod in this section of the plains, and so I had made my way by foot and auto from the east coast to this new Midwestern place, Riverton. After everything went down, and society fell apart, we all abandoned the old places, the cities and towns, just left them behind where they were. We made new places, like Riverton here.

“I get it now,” she said, flipping the master switch on my life pod. “It took me a while. The problem was always the grid systems. Millions, billions of people depending on these artificial networks — agriculture, economy, power — that others control, that bunched up masses of people. Exploitation was inevitable.”

The machinery within the life pod woke up and began to hum its soft, green-power hum. Water condensing from the air, circulating in the grow systems. Photovoltaics and wind turbines charging batteries. Air scrubbers.

Jane had been one of the earliest Riverton residents, she told me, and now it was her turn to welcome me, the newest resident.

As we waited for the life pod to flush its air and water circulators through the three rooms, the garden terrace, the aquaponics, Jane offered me a glass of water from a bottle dangling from her belt. We sat on a bench in front of the main door.

“I met Sam Turner once,” she said, then laughed at my surprised expression. “He came through here seeding the pod plans, the 3D printers, helping us put it all together. I’ve been here since the beginning.”

“Wow,” I said, sipping from the glass of water. It was fresh and clear. “They say he’s disappeared.”

“Well, he moved on from here. But the pods are everywhere. They’re how we live now.”

I nodded.

Yeah, it was a small idea, a simple idea. Just give everyone a life support system, a life pod. Provide each person with an automated domicile that produces water, fish, vegetables, fresh air, security. One integrated life support system, complete in itself, powered by itself, easy to build with surviving technology, easy to maintain.

And the repercussions were simple too: no money, no economy, no deprivation, no starvation, everyone with their own place, with the freedom of guaranteed sustenance. You owned a life pod, and it was all you could own, all you needed to own. The small barter markets of goods and services were for entertainment, diversion. Nobody’s life depended on them.

That was Turner’s gift to a world that needed both liberation and healing.

After a few minutes, the light on the front of my life pod toggled from red to green with an audible click.

“Well, here you are.” Jane smiled and patted my knee. “Welcome home.”

She stood and walked off into the bright April day, waving at me. I waved back and looked again at the rising mounds to the east, now luminous with spring grass.

Here I am.




Tomorrow’s Yesterday is Today; or History of/and/or/in the Post-Information, Post-Truth Age; or A Dispatch from a Computer Simulation Beyond the Time Singularity that Just Happened, Whenever that Was

The following is a speculative piece, or screed or manifesto perhaps, that purports neither to be true nor to be real-time.


Part One: Just Take Those “Old” Records Off the Shelf

I wanted to begin with the suggestion that, in “2015,” something deep and fundamental changed in our world. I wanted to write, also, for the first time in history — it’s such a rare opportunity that one gets to use those words in that order — so I wanted to write, for the first time in history, old music outsold new music (Pugsley). But qualifications are always in order. “History” here means the time period during which relevant sales records were kept, extending presumably back to when “music,” by which we must mean recorded music, was first packaged for sale in the United States. These sales figures exclude streaming music, paid or otherwise, which would probably not change the results anyway. As it is, it’s an amazing anomalistic “fact,” and “true,” though we will have to keep qualifying quotes around those words for “now,” by which I mean the duration of this narrative. So yes, I wanted to present that significant “fact,” i.e., the comparative statistic that people in the United States bought more old music than new music for the first “time” ever in “2015,” and I thought it would be useful to develop what such a historical occurrence might mean from a social change perspective.

So pursuing this line, I would go on to say, for one thing, that this SoundScan inversion means in “fact” that, in “2015,” and I’ll explain those quotes later, our collective ontological base shifted as we pushed past the point where music as physical object, and thus physical constraint, had anything to do with physical consumption, because it’s almost fully digital “now,” of course, “bits” in our terminology here, and thus no longer embodied or constrained in “atoms” that must be “produced” or “shipped” rather than “copied” or “downloaded.” Now, the music industry, such as it is, may wring its collective hands over what it may think this old-music-outsold-new-music “fact” says about the quality of the new music available, as if there’s some talent shortage, among performers or consumers, one’s not sure exactly which, if not both, but even if there were universal shortages of talent, which from other evidence seems quite possible, that’s not what’s happening to music sales. Something very different explains the transactional triumph of back catalogs over, well, front catalogs.

Here it is, then, and it’s obvious — back in the previous analog world, when recorded music was a physical good embodied in atoms of vinyl, or later polycarbonate plastic, and distributed through physical stores, only so much music could be economically produced and distributed in significant quantities. And well, the people wanted “new” music, and the music industry wanted “new” music, which was inherently riskier but more exciting for all parties concerned. There was a systemic bias toward “new” music, in other words, when it was embodied in atoms. You could find “old” music then, but it was not as easy as finding “new” music; you had to commit to searching for “old” music in the delightfully crepuscular world of bargain bins and used record stores. “Now,” however, the systemic paradigm has shifted fully with the nature of the recorded music object itself, from atoms to bits. In the post-atomic ontological paradigm, in which all media is digital, and thus wholly intangible, and the distribution channels themselves are digital, again, nothing material needs to be produced, and thus there is zero marginal cost to every piece of music sold, “new” or “old,” high-demand or low. Plus, all that “old” music is actually “new” music to all the people who have never heard it before, suggesting that the power to define or decree what is worth listening to has shifted, or distributed, from a centralized authority to a decentralized everyone and anyone, with interesting broader relativistic implications we can’t pull this essay over to inspect just yet. Anyway, in the digital paradigm, all music is “now” available for consumption at all “times,” free or paid, and because there is more “old” music than “new” music, and this is more “true” every year, it will not stop, and nothing like it will stop, because the entire ontological context of cultural production, and thus the largely digital-mediated reality in which the vast majority of human beings currently exist, is shifting from atoms to bits right before our eyes, with everything that implies, and the repercussions are enormous and everywhere.
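To make the arithmetic behind “it will not stop” concrete, here is a minimal toy model (a sketch of my own, with arbitrary numbers, not Pugsley’s sales data): if every track in the catalog is equally available at zero marginal cost, then even granting “new” music a generous per-track demand boost, the “old” catalog’s share of sales rises every “year,” simply because the back catalog never stops accumulating.

    # Toy model of the back-catalog argument (illustrative only; the
    # numbers are arbitrary assumptions, not figures from the essay).
    # Each "year" adds the same number of new releases, and everything
    # older than a year counts as "old" catalog.
    NEW_RELEASES_PER_YEAR = 1000  # arbitrary scale; only the ratios matter
    NOVELTY_MULTIPLIER = 20       # assumed per-track demand boost for new music

    old_catalog = 0  # number of tracks more than a year old
    for year in range(1, 51):
        new_weight = NEW_RELEASES_PER_YEAR * NOVELTY_MULTIPLIER
        old_weight = old_catalog  # each old track sells at the baseline rate
        old_share = old_weight / (old_weight + new_weight)
        if year % 10 == 0:
            print(f"year {year}: old-catalog share of sales = {old_share:.0%}")
        old_catalog += NEW_RELEASES_PER_YEAR

Under these assumed numbers, the “old” share crosses 50% around year 21 and keeps climbing indefinitely; no claim about the quality of “new” music is needed to produce the inversion.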

Part Two: The Black Cat Crosses Our Path All Over Again

Well, I had meant to write all that stuff in Part One, and more in the same vein, to tease out those enormous, everywhere social-change implications of how we interact with digital media “now,” but then I looked at this “old”-music-outsold-“new”-music “fact” again and saw the black cat from The Matrix. Let me explain the metaphor. In the Wachowski Brothers’ “1999” film, the main characters inhabit a large-scale computer simulation, called The Matrix, which masks the reality that they are essentially enslaved by intelligent machines. At one point, the characters see a black cat twice, a déjà vu glitch that occurs when the computer code behind the simulation is updated or changed, usually to further some nefarious objective of the code’s architects. The “old” music sales “fact” is the same as seeing a black cat twice in The Matrix: it’s a small thing after all, not a big thing, but it coincides with and thus signals something else that is itself large-scale and profound. In foresight, we might call it a “weak signal,” I suppose. On one hand, the “old” music sales “fact” indicates that, as atoms shift to bits, as we’ve noted, all music that has ever been produced will be available to everyone everywhere all at once. On the other hand, as we haven’t yet noted explicitly, it means there are plenty of people who feel no reservations about consuming old music, people for whom old music is cool, which is in itself a social trend worth analyzing.

But, on another other hand, and here’s the big scary picture, this “fact” also suggests that sequential time may be breaking down, for all intents and purposes, and if so, this “fact” should be additional “evidence,” as if, after the various events of “2016,” one needs any more “evidence,” that, in “2015,” the technological singularity or something as pervasive and bizarre happened, yanking the ontological rug out from under humans on earth, and we are now living, as Elon Musk and other ostensibly reasonable people have suggested, in something like a sophisticated computer simulation (Anderson), specifically one in which all cultural artifacts exist, and are at the same time produced and reproduced, in the present all at once, with merely an illusion of sequential time that is in the end indistinguishable from variations in aesthetic style; and that we live in a simulation, or something indistinguishable from a simulation, in which all cultural artifacts are being slowly and permanently detached from any relationship to authoritative “reality,” “history,” or “facts.” In this indistinguishable-from-simulation “reality,” in other words, time-and-space flattens further with each computing cycle, or “year,” as everything gets edited and remixed and thus rendered continually anew, streaming timelessly in the present; temporal constructs hold less and less meaning; and perceptual time accelerates in some fashion similar to Terence McKenna’s time-wave-zero (Eden), collapsing all probability waves into an increasingly information-dense, inescapable and eventually eternal-present-moment. This is why, for one thing, I’m putting dubiousness quotes around “2015” and similar temporal concepts. But more on this Big Scary Picture later.

On yet another other but related hand, and this is the point we need to stop and inspect here in this section, what’s happening with music, and thus all cultural artifacts, signifies that history and derivationally also a certain sense of “truth,” as well as certain aspects of perceived “time,” are no longer what they were. Of course, we always knew that “history” and “truth” to a certain extent could be shaped by authority to support its ideological objectives, in the same way the music industry shaped the production and distribution system of music to suit its economic objectives, back in the days of atom-based musical artifacts. But in those “old” paradigms, there was an authority, a central and hegemonic body with a narrative that could be referenced and inevitably contested/resisted. What’s happened since in the code of “the Matrix,” both figuratively and literally speaking at once, is that authority may have finally and completely collapsed, and the transcendental signifier of authority and any associated dominant narrative may have fragmented completely, and long may it rest in peace. “Now” and going “forward,” then, “history” (and “truth” and perceived “time”) will likely no longer depend upon the atomic integrity of cultural artifacts, arranged in sequential narrative strata that both produce and reproduce cultural meaning; these concepts, rather, will continue to exist only in limited and provisional usages, essentially for the temporary fulfillment of human needs.

How so? When everything — music, books, movies, images, ideas, expressions, maps, relationships, conversations, and infinitely more — is digitized into bits, everything is curate-able, editable, re-mixable, spinnable, re-sequenceable, and when there is no authority, explicit or implicit, legal or normative, to preserve an ontological model of “what is real” and an epistemological model of “what is true,” ideological or otherwise, and everyone has access to the raw material of culture and the tools to curate, edit, and reinterpret, express anew, we’ve crossed a threshold of which “social change” seems like a modest, even quaint description. This is the hidden meaning behind the seemingly banal and circuitous title, “Tomorrow’s Yesterday is Today” — in crossing this threshold into this post-temporal future (tomorrow), we have rendered history (yesterday) fully and finally a product of our present activity (today). We are free to say anything happened, whenever we want to say it happened, in whatever way we want to say it, and the evaluative criteria for the validity or legitimacy of any production, reproduction, or consumption of a cultural artifact, by which we mean everything humans express or do, will not be veracity, but rather other criteria, such as agreement, emotional content, aesthetics, originality, or whatever. Whether something “is” “true” or “really” “happened” is rapidly becoming irrelevant, and in blunt terms, it’s the “true” death of “history” as we knew it, or even deeper, the death of the supporting ontological and epistemological structures that made history what it was, both time- and score-keeper in an all-pervasive culturally constructed narrative context. The social impact is just beginning to express itself more broadly in what has been labelled the “post-information” or “post-fact” or “post-truth” era in which we “now” are presumed to live.

Illustrative Interlude: The Curious Case of Boilerplate

In the online version of the Merriam-Webster dictionary, the word boilerplate has three definitions:

  1. syndicated material supplied especially to weekly newspapers in matrix or plate form
  2. a: standardized text
     b: formulaic or hackneyed language <bureaucratic boilerplate>
  3. tightly packed icy snow (Merriam-Webster)

If you click the link labeled “See boilerplate defined for English-language learners,” you get this much more to-the-point definition: “phrases or sentences that are a standard way of saying something and are often used” (Merriam-Webster). Forgetting snow for now, let’s think of boilerplate as standardized language, often used in legal contexts, for the express purpose of saving time; boilerplate texts are shortcuts, linguistic organisms, if you will, bred to be widely reproduced through judicious cutting and pasting, but they’re often so successful at reproducing that they have apparently overwhelmed some linguistic contexts, stimulating the slightly pejorative definition 2b: “formulaic or hackneyed language.”

In the transition of cultural production and reproduction from atoms to bits, boilerplate language and things like boilerplate language, such as clichés and other overwhelmingly useful expressions, can be thought of as an analog to genetic, or better yet, memetic, material. They get spread around because they are useful and transmittable. Copy-paste. Similar cultural genetic/memetic material includes rhythms or beats, which get infinitely sampled and re-combined, iconic imagery, pleasing visual compositions, compelling conversations, commonplace situations. In short, in this new age in which ontology and epistemology as we knew them have collapsed into something entirely new, any cultural ingredient that can be edited, copied, and transmitted — the current apotheosis of which is now the internet “meme,” a picture with a little bit of text that hieroglyphically communicates a widely shareable idea, and which, when combined together, form a kind of metalanguage that has never existed before — any cultural ingredient, then, that can be copied is subject to continual redefinition and has thus lost any pretension to fixed or permanent meaning.

But now, meet another boilerplate, Boilerplate the Robot. Here’s a photo of Boilerplate with President Teddy Roosevelt after the battle of San Juan Hill, in “1898”:

And here’s Boilerplate sizing up boxer Jack Johnson, sometime around “1910”:

The product of Paul Guinan and Anina Bennett and their Adobe Photoshop license, Boilerplate the Robot appears in dozens of historical photographs, the robot itself a piece of cultural genetic/memetic material spliced right into the artifacts of history, artifacts now translated from atoms to bits, re-mixed with new time- and truth-independent content, and sent out into the digital distribution network that is the internet. Needless to say, Boilerplate has become a “steampunk” sensation, with an elaborate backstory developed in books and the potential of a film by none other than uber-geek director JJ Abrams (Zutter). Steampunk, incidentally, is a science fiction genre, general aesthetic, and subculture that imagines a fusion of advanced technology and the 19th century. It is a revisionist history, to most sensibilities, created in a crowd-sourced fashion for the entertainment of a devoted subculture; the steampunk narrative exists in parallel with the base historical narrative and projects both forward and backward in imagined “time,” an act which I call collapsing history, and creates a charming time dislocation, as we see in the photographs of Boilerplate.

So let’s get to the brass tacks of Boilerplate and things like Boilerplate, where cultural genetic/memetic material spills outside of the nicely organized sequential strata of the prevailing parallel historical narrative. These photos. Boilerplate. It’s not history, is it? Not “real” history? Not “true?” Or is it? For my part, I can hold the thought that it’s fiction in my mind; I know my way around Photoshop and know what it can do; yet I have never seen a photo of Teddy Roosevelt standing on San Juan Hill without Boilerplate standing there with him. I have never seen a photo of boxer Jack Johnson without Boilerplate clowning with him. Which is not to say such artifacts don’t exist, I will admit that possibility, but now that I’ve seen Boilerplate there, I can’t erase him. This new cultural artifact exists, contextualized among other cultural artifacts; it can’t be denied. It’s like old music and new music, coalescing and collapsing in the mix. What’s “true” doesn’t matter, and is rather a separate question, no? In Vintage Tomorrows: A Historian and a Futurist Journey Through Steampunk into the Future of Technology, futurist Brian David Johnson says Boilerplate is “better history than most ‘real’ history [co-author James H. Carrott] has seen — and that’s saying something” (Carrott and Johnson, 95).

So there’s a thing called “better history” that is distinct from “real history,” and is generally preferable to it. Let that sink in. And tell me if this looks familiar:

Part Three and Final: A User’s Guide to the Post-Information Age, or the Infinite SimCity

Now, let’s get back to looking more directly at the significant social change field radiating out from the atoms-to-bits singularity. In an only-slightly-hysterical “2014” piece in The Guardian, media commentator Bob Garfield writes, “Facts are over, replaced by feelings and free-floating certainty … for everything that matters, as of now, we are smack in the post-information age” (Garfield). The piece is representative of many that have been written in a similar vein over the past two years. After citing several statistics that illustrate the power of self-serving beliefs triumphing in the face of what Garfield considers “facts,” he sums up the stakes from his point of view: “What makes this all so dangerous is that it not only corrupts policy debates, it undermines serious journalism – and science and history and all other rational disciplines – by rendering their output mere arguments, no more or less credible than someone’s dogma, superstition or gut hunch” (Garfield). The loss of respect for, or relevance of, or deference to, serious journalism, science, history, etc. — in other words, authority — is what Garfield and people like Garfield have confronted.

It might be easy to dismiss such frustrations as Garfield’s if they weren’t so widespread. Dismissing them would be, again, missing the signal. It’s also tempting to blame people who don’t accept “facts,” say they’re intellectually deficient, point out the “dumbing-down” of society and failure of public education. But that’s missing the point too. It’s not the players; it’s the game. Like it or not, my argument here implies, there has indeed been a significant social change, and human beings are doing what they have always done — live according to the values that work for them in the situations in which they find themselves. I’ve articulated it already here, but to sum up: as part of our ontological shift from atoms to bits, a corresponding epistemological shift is occurring (and perhaps even metaphysical, but we can’t get into that here). As human beings, our ontological being is built upon the objects we encode with meaning, and how we interact with those objects. Because digital artifacts are editable and easily duplicated, not fixed or distinctive, so too will everything that depends upon digital artifacts be editable and easily duplicated, and I am suggesting that these things are big, fundamental, interrelated things like “history,” “time,” and “truth.” As virtual and augmented realities develop and become pervasive, digital objects will be everywhere and increasingly every thing we interact with. Pokémon Go was just a weak shot across the bow from the new paradigm. If we are honest with ourselves, concepts such as “history,” “time,” and “truth” are not ancient and eternal; they arose within the context of human evolution and human cultural development. In Western culture, when “rationality” replaced a largely religious/superstitious worldview during the Enlightenment period, we experienced an ontological and epistemological shift from a somewhat animistic/spiritual experience of existence to the materialistic world with which we are familiar “now.” If you believe in progress and/or evolution, you would most likely at least entertain the idea that this post-information age is something we’ve arrived at because it’s better in some way, but to be clear, there’s no necessary reason to believe in such things (unless it suits the objective of your narrative “now”).

How this new paradigm might be better is what’s eating at everyone, what everyone is trying to parse. The post-information age to which Garfield and others have referred has also been called the “post-truth” and “post-fact” age. Oxford Dictionaries named “post-truth” the “2016” word of the year, defining it as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief” (Oxford). Here, as in Garfield, the concept is positioned within an existing rationalist framework: objective fact vs. emotion and personal belief. To be clear, I’m not denying the existence of either facts or beliefs as concepts; my point is that what matters is rather the actions one takes, and that a society collectively takes, in this new reality, where the ontological and epistemological paradigms have possibly shifted beyond immediate recognition. While there are certainly people in the world with a medieval animistic worldview, they do not define the larger global culture, and it does no good to stomp your feet and protest change. Yet, that is an open option. However, most of us will choose to deal with this post-truth reality “going forward.” The questions to ask are: what does the change mean, and how will it manifest itself in the future? How will we live in this post-information age?

The answer to these questions may require another essay to fully explain, but here’s the short of it, and it derives from the Big Scary Picture already described: we can’t prove we’re not in a computer simulation; you “now” have the power to fabricate and edit your reality; let’s collectively and individually make it work for us. Whether or not we “really” crossed some strange time-and-space singularity where we fully entered an eternal computer simulation, there’s no reason not to live as though we did. An infinite SimCity. The cyberpunk future is here: you can hack your own reality. It’s now made of hackable bits, not atoms. If we embrace the new reality, which again might as well be a simulation, we have to face the possibility of unprecedented freedom and power at the individual level, which is both thrilling and frightening. Individuals “now” have the power to create their own complete realities. You are free, no matter how much that bothers the people who have in the past depended on controlling you. And it’s weird. And new. All this retreat from “facts” and “truth” is nothing short of a new paradigm, and the potential liberation of over 7 billion creators, free to write the endless futures of humanity.

Works Cited

Anderson, Mark Robert. “Elon Musk Says We’re Probably Living in a Computer Simulation — Here’s the Science.” Singularity Hub. June 23, 2016. http://singularityhub.com/2016/06/23/elon-musk-says-were-probably-living-in-a-computer-simulation-heres-the-science/

Carrott, James H., and Brian David Johnson. Vintage Tomorrows: A Historian and a Futurist Journey through Steampunk into the Future of Technology. Sebastopol, CA: O’Reilly, 2013.

Eden, Dan. “Terence McKenna’s Time Wave Theory.” Viewzone.com. n.d. http://www.mondovista.com/timewavex.html

Garfield, Bob. “Who needs facts? We appear to be in the Post-Information Age now.” The Guardian. January 3, 2014. https://www.theguardian.com/commentisfree/2014/jan/03/post-information-age-benghazi-gop

Merriam-Webster. “Definition of boilerplate.” http://www.merriam-webster.com/dictionary/boilerplate

Oxford Dictionaries. “Word of the Year: Post-Truth.” https://en.oxforddictionaries.com/word-of-the-year/word-of-the-year-2016

Pugsley, Adam. “Old Music is Outselling New Music for the First Time in History.”



Disruptive Futures: A Workshop on the Future of Nuclear Weapons

Just recently, I noted the new president’s statements on nuclear weapons, in which he said the following: “It would be wonderful, a dream would be that no country would have nukes, but if countries are going to have nukes, we’re going to be at the top of the pack.”

The implication there, I suspect, is that “dream” is code for impossibility, and thus Trump advocates expanding or somehow improving the nuclear capabilities of the United States. Perhaps it’s just a continuation of the trillion-dollar modernization program discussed by Obama, or perhaps it’s a new global arms race. I don’t know for certain, and I wonder if anybody else does, including the President. This uncertainty at the state level, along with the increasing sophistication of terrorists, reminds me, and should remind everyone, that nuclear weapons remain a tremendous existential threat to humanity, and one that futurists should engage as much as possible.

To that end, this is a great occasion to showcase an extraordinary futures-related summit that happened in December 2016, called Disruptive Futures: Nuclear Weapons Summit.

The Disruptive Futures: Nuclear Weapons Summit in Santa Fe, New Mexico, from December 4-7, 2016, was designed to foster a new type of discussion about nuclear security. Over the course of three days, 45 interdisciplinary leaders from across the country, including futurist fellows like myself from the World Future Society, were immersed in the history of nuclear weapons, discussed present-day nuclear threats and — most importantly — explored ‘what if’ scenarios about the future of global security. To accomplish this innovative model for a convening about nuclear weapons, Creative Santa Fe partnered with N Square, NTI (Nuclear Threat Initiative), and PopTech.

One of the unique aspects of the event was the degree to which the public in Santa Fe was engaged. Public events kicked off and closed the three-day summit. The large public opening event was “A Conversation with William J. Perry and Eric Schlosser.”

Here is a full video of that conversation, as well as associated video content:

William Perry served as Secretary of Defense from 1994 to 1997, and in more recent years, he’s become a strong advocate for reducing the risks of nuclear weapons. Here’s Perry’s Nuclear Project site. Eric Schlosser is an American journalist, author and filmmaker known for investigative journalism, such as in his books Fast Food Nation (2001), Reefer Madness (2003), and Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety (2013).

As the final part of the futures process, the summit participants presented 2045 future scenarios to the public at the Violet Crown theater in Santa Fe. The days between the two public events were filled with talks and tours on the past and present, but also qigong and collaborative exercises designed by Rhode Island School of Design industrial designers. The success of the summit is in large part due to the disruptive nature of the program itself.

Here is a video showing scenes from the final presentations:

One of Creative Santa Fe’s primary economic objectives is to shine an international spotlight on Santa Fe. They believe that Santa Fe can become a global destination for leaders to tackle some of the world’s most challenging issues by leveraging New Mexico’s key assets: art, culture, science, technology, environment, and heritage. To that end, they have launched Disruptive Futures, and more futures-oriented events promise to follow.

A hugely important topic. A powerful process. Futures in action. I encourage everyone interested in positive futures to engage with the nuclear issue at least enough to get a lot better informed than our president seems to be at this point.

The future may depend on it.

[Thanks to the folks at Creative Santa Fe for the videos and some of the event summary above.]



ASU’s Emerge: A Festival of Futures, 2017

If you’re in the Phoenix area this Saturday, February 25, 2017, be sure to visit this year’s edition of Emerge: A Festival of Futures at Arizona State University’s University Club, from 3:00 pm to 9:00 pm.

Here’s the official description:

Emerge: A Festival of Futures

EMERGE is an annual transmedia art, science and technology festival designed to engage diverse publics in the creative exploration of our possible futures. The festival’s 2017 theme is Frankenstein, a 200-year-old novel that still motivates us to think critically about our creative agency and scientific responsibility. This year EMERGE invites visitors into a house of wonder filled with speculative technologies, fortune tellers, music and film, and performative experiments that blur the boundaries between art and science. The festival revisits the past in order to reframe our sense of the present and inspire imagination of plausible futures, and asks what we can learn today by looking at emerging science and technology through the lens of art.

Held concurrently with Night of the Open Door, during which ASU invites the public into its laboratories and studios, EMERGE focuses a critical eye on the future implications of research taking place on campus and around the world. Visit us at the University Club and the Piper Lawn February 25th, from 3-9PM for installations and performances designed for all ages.
Every year, it’s awesome and FREE. RSVP here.



Great New Futurist Site: Seeking Delphi

Seeking Delphi is a great new blog from futurist Mark Sackler. Mark is a Houston Foresight colleague of mine and a sharp thinker with a wealth of experience and perspective on key future issues.

With Mark’s permission, I’d like to showcase his ongoing podcast series here. So far, Mark has addressed longevity and fuel cell technology with engaging interviews of people in the know. Here’s a sample:


I highly recommend that folks interested in the future check out the insights at Seeking Delphi.



Three Interesting New Social Robots for 2017

Social robotics continues to develop, and new robots are appearing on the market all the time. According to reports from this year’s Consumer Electronics Show (CES), robots stole the show. Typical of the reporting, USA Today wrote, “We saw robots to make your morning coffee, pour candy, fold your clothes, turn on and off your lights, project a movie on the wall, handle your daily chores and most impressively, look just like a human, or in this case, legendary scientist Albert Einstein, with facial expressions and movement.”

Turn on and off your lights? Well, all these little household applications may seem like small, even trivial steps along the way to the robotic future of our favorite sci-fi movies, but they are steps, and consumer demand for social robots, i.e., robots that interact with us socially and/or play predominantly social roles in our lives, I would argue, is key to the development of useful and human-supportive (as opposed to destructive) artificial intelligence.

Although I didn’t attend CES this year, here are three relatively new social robots that intrigue me:

Pillo, The Home Health Robot for Your Family

Offered currently on Indiegogo, Pillo is one of several applications of social robotics to the provision of healthcare. As I’ve written before, as medicine becomes more and more automated, there will be value in the automation having some “bedside manner,” that is, exhibiting behavior that is informative, comforting, social, and friendly.

Here’s the description of Pillo from its creators: “In today’s hectic world it can be easy to forget the things that truly matter, like the health & wellbeing of you & your loved ones. That’s why we created Pillo. Pillo can answer healthcare Q&A, connect with doctors, sync with mobile & wireless devices, store & dispense vitamins & medication, & can even re-order them for you from your favorite pharmacy. What’s more, his skills will grow as we build additional applications on Pillo’s platform. Stay healthy & discover true peace of mind with Pillo.”

Mykie, “My Kitchen Elf,” The Home Kitchen Assistant

From Bosch, Mykie is at this point a concept robot that was exhibited this year at CES, but as a kitchen robot type, he exhibits an IoT connectivity that is likely to develop further in the future, especially in smart home and smart city contexts. Mykie can suggest recipes but can also connect to a smart kitchen environment to optimize recommendations. As IEEE Spectrum wrote, “Bosch is hoping that a substantial amount of Mykie’s usefulness will come from the way it can integrate into the rest of your kitchen. For example, you can ask Mykie to come up with recipes that use the food you currently have in your smart fridge, and as you start cooking, the robot will preheat the oven to the right temperature for you at the right time. You can also use Mykie’s ‘virtual social cooking’ to remotely attend cooking classes in real time, following along in your kitchen at home as both Mykie and a human instructor help you cook something that you might not otherwise be comfortable cooking on your own.”

Gatebox’s Virtual Home Robot

Last but not least is Gatebox’s Virtual Girlfriend, uh, I mean Virtual Home Robot. Gatebox claims to be “the world-first virtual home robot with which you can spend your everyday life with your favorite characters.” In the video, of course, the application is that of a romantic or emotional companion for a lonely male corporate worker. It may be a sad reflection of the increasing isolation in our increasingly digitized global society, but it’s clear that robots and tech in general will have a valuable role to play in caring about human beings in the future. An article from the UK outlet Daily Mirror asks whether Gatebox is “romantic or incredibly creepy?” But as the saying goes, the answer is likely all in the eye of the beholder.



The Puritan Work Ethic, Automation, and the “Job Crisis”

When looking at the prevailing trends and forecasts related to automation and technological unemployment, it’s clear a crisis-level conflict is coming to many developed countries, but it’s not the kind of conflict many pundits think it is. It’s not going to be billionaire Nick Hanauer’s torches-and-pitchforks scenario of income inequality, for example, where we had better find some way to keep the 99% from rioting against the 1% elite (Hanauer). Similarly, it’s not going to be that huge expansion of the unemployed population that will require some centralized economic intervention like the universal basic income, where everyone, not just the unemployed, receives a dividend from the government. It’s a very different kind of crisis heading our way. But before we identify what it is, let’s look at a quick sample of those automation trends and forecasts and get a feel for the two horns of the current dilemma, as it’s framed.

According to a recent article in The Economist, “47% of workers in America had jobs at high risk of potential automation” (Automation). But it’s not just the manual labor factory jobs at risk; big data, analytics, machine learning, and similar technologies are poised to impact “legal, medical, marketing, education, and even technological industries” (Alton). There seems to be broad consensus on this point, but two primary perspectives on where it goes. On one hand, Moshe Vardi, professor of computer engineering at Rice University, “foresees unemployment as surpassing 50 percent by 2045” (Matyszczyk). On the other hand, James Bessen, an economist at Boston University, notes that “while electronic discovery software has become a billion-dollar business since the late 1990s, jobs for paralegals and legal-support workers actually grew faster than the labor force as a whole, adding over 50,000 jobs since 2000, according to data from the U.S. Census Bureau. The number of lawyers increased by a quarter of a million” (Bessen).

And so the debate has been framed in the collective consciousness thus: in the future, new technologies, specifically automation, will either eliminate jobs, and create a widespread crisis of the social and economic order, or else will create new jobs we haven’t imagined yet, preserving our livelihoods, social structure, and global economy. Rather than come down on one or the other side of the debate, I’d like to suggest the crisis has less to do with technology than most think. While I will grant that the nature of the technologies coming online is unprecedented, especially advanced artificial intelligence, humans have used technologies of one sort or another to solve problems and create value for thousands of years. No, technology isn’t the problem. To think that technology is taking “jobs” from human beings is a failure to understand what’s really going on. We have a larger structural and conceptual issue coming for us: the breakdown of the concept of employment. Technology taking jobs isn’t the problem; the problem lies in what we think a “job” is, and what it increasingly isn’t.

More specifically, the coming crisis, if there is really to be one, will be a crisis of meaning, not of livelihood, and this is the key distinction. In the debate on technological unemployment, most commentators equate employment with livelihood, and it has certainly been that in our legacy capitalist system. It is natural to think concretely here, that if there is less demand for humans in the workforce, there will be more human beings who lack a means of livelihood, the means to meet their material needs. So, it’s also natural to think we have to determine how to make sure those means of livelihood exist, either in other, new jobs or through some systemic correction like a universal basic income.

But the “job” in our developed capitalist society is more than a livelihood. Setting aside for the moment the idea that it’s a social relation, a contract between capital and labor, a job is most importantly a morally loaded social concept derived from what Weber called the Puritan Work Ethic, where the work a human being does is tied up with Christian concepts of virtue. As Noble notes, “the emergence of that rational ethic, indispensable to the development of industrial capitalism, was closely associated with the rational religious ethic of the Puritans” (125). The following excerpt from Weber sums up how close employment (or “calling” in Weber’s terms) is bound up with moral (and thus social) virtue:

“The working life of man [sic] is to be a consistent ascetic exercise in virtue, a proof of one’s state of grace in the conscientiousness which is apparent in the careful and methodical way in which one follows one’s calling. Not work as such, but rational work in a calling, is what God requires” (quoted in Noble, 126).

And yes, the moral power of the Puritan work ethic not coincidentally reinforces the exploitative nature of the social relations that preserve class distinctions in capitalism, as it equates metaphysical virtue with social obedience in selling one’s labor through employment.

Even today, this close equation of employment (in the capitalist economic structure) with virtue, or to put it more plainly, being good and valuable, is one of the most powerful unacknowledged social forces in modern Western cultures. A study from Northeastern University found that employers exhibit a strong hiring bias against job seekers who are currently unemployed (Ghayad), and unemployed people have been targets of other, less-formal expressions of prejudice and exclusion as well. Unemployed recipients of welfare benefits are often perceived as disadvantaged and morally suspect, stereotyped as lazy and borderline criminal, often, in the United States at least, with racist overtones (Schneiderman). Though our cultural and religious views no longer reflect the old Calvinist strictures, the employment-related biases of the Puritan Work Ethic persist, in the absence of any evidence that hard work, wealth, and virtue are statistically correlated.

In our cultural unconscious at the present time, then, jobs are sacred, and having a job, building a career, working hard — all of these things create social status, self-esteem, and legitimacy. When jobs disappear because human labor is not needed in the quantities it once was needed, it’s not the missing livelihood that becomes the crisis. We can likely fix that, economically, if we have the will to fix it. Corporations, actually, may find motivation to support universal basic income schemes, as they may need consumer spending to support their operations. Regardless, as the future unfolds, and technology expands in the way it has the potential to expand, automation, artificial intelligence, and other advanced technologies will not only replace jobs, these technologies may provide material abundance at a level we’ve only dreamed of. Tech guru Peter Diamandis, for one, believes this technological bounty is certain; his book Abundance: The Future Is Better Than You Think is an extended paean to the abundant future. And if technological abundance happens, fewer “jobs” will be needed, and we’ll have to give up our religious attachment to employment as a source of human value.

Finally, as humans in developed global capitalist cultures, we simply depend too much on the “job” to give us personal meaning. And that’s not just an individual issue; it’s a potential collective issue too: our religious attachment to “jobs” may do more to damage our environment and broader economy than widespread unemployment. The attachment to “jobs” may lead policymakers to invent myopic, damaging schemes such as reopening coal mines so that former miners can work. Since most mines are automated now, policymakers would have to prohibit automation in those industries. They would also need to stimulate coal demand by promoting dirty energy in the face of advances in efficiencies and demand for clean energy, as well as a need to protect the environment. Leaders also may need to financially subsidize such industries as coal in a global market environment where commodities like coal are cheap. In other words, such schemes would work against market forces to create sloppy, unsustainable solutions in order to preserve the “job” economy. Meanwhile, younger generations have entered the “gig” economy, becoming YouTube stars, entrepreneurs, and digital nomads, reinventing the concept of work to fit their own aspirations, and thus abandoning the biases of their fathers and mothers. Ironically, then, in the future, we may have to abandon “employment” in order to prosper.

References:

Alton, Larry. “What Will Happen When AI Starts Replacing White-Collar Jobs.” www.forbes.com. Forbes Magazine. May 25, 2016.

“Automation and Anxiety.” www.economist.com. The Economist Newspaper Limited. June 25, 2016. 

Bessen, James. “The Automation Paradox: When computers start doing the work of people, the need for people often increases.” The Atlantic. 19 January 2016. 

Ghayad, Rand. “The Jobless Trap.” Northeastern University and Federal Reserve Bank of Boston, 2014. 

Hanauer, Nick. “The Pitchforks are Coming … For Us Plutocrats.” www.politico.com. July/August 2014. 

Matyszczyk, Chris. “Robots Could Push Unemployment to 50% in 30 Years, Prof Says.” www.cnet.com. February 14, 2016. 

Noble, Trevor. Social Theory and Social Change. New York: Palgrave, 2000.

Schneiderman, R.M. “Why Do Americans Still Hate Welfare?” Economix Blog. The New York Times. December 10, 2008.