Kiteba: A Futurist Blog and Resource

Knowledge Ideas Technology Ecology Biology Architecture


August World Future Society Arizona Meeting: Natasha Vita-More on Transhumanism

If you’re in Arizona and interested in futures topics, I’d like to invite you to join the WFS Arizona chapter for our August meeting. I’m very excited about this month’s topic and speaker, and I hope you can join us.

On August 26, 2015, our featured presentation topic and speaker will be as follows:

Natasha Vita-More on Transhumanism

If you’re unfamiliar with Natasha’s work, here’s her bio:

Natasha Vita-More, PhD is a designer and author whose research concerns the technological design of human enhancement. Wired magazine called Natasha an “early adopter of revolutionary ideas” and Village Voice claimed she is “a role model for superlongevity”.

She has been published in numerous academic journals, such as Metaverse Creativity and Technoetic Arts – A Journal of Speculative Research, New Realities: Being Syncretic, Beyond Darwin, and D’ARS. Vita-More is also a contributing author to chapters of the books AI Society, Anticipating 2025, and Intelligence Unbound. Her own book (below) is published by the renowned Wiley-Blackwell.

Vita-More received Special Recognition at Women in Video and has exhibited at the London Contemporary Art Museum, Niet Normaal, and the Moscow Film Festival. She is best known for having designed the pioneering whole-body prototype known as “Platform Diverse Body” (f/k/a Primo Posthuman) and the networked identity “Substrate Autonomous Identity”. With a background as a fine artist, videographer, and bioartist, her work evolved into the field of design, linking transdisciplinary practices across multimedia design, science, and technology. Her current work includes scientific research on the memory of the C. elegans nematode after vitrification.

Vita-More is co-editor and contributing author of The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Future Human. Featured in LAWeekly, The New York Times, and U.S. News & World Report, Vita-More has appeared in over two dozen televised documentaries on the future, including PBS, BBC, TLC, ABC, NBC, and CBS. Dr. Vita-More is a professor at the University of Advancing Technology, Chairman of Humanity+, and a Fellow at the Institute for Ethics & Emerging Technologies.

Join us on Wednesday, August 26, 2015, from 6:30 pm to 8:30 pm, at the Scottsdale Public Library Civic Center at 3839 North Drinkwater Boulevard, Scottsdale, AZ (we’ll be in the Gold room on the first floor).

To attend our meetings, you don’t have to be a formal WFS member; you just have to be interested.

RSVP at our meetup site here.


The Debate on Autonomous Weapons and Weaponized AI

I recently joined over 2,000 scientists, researchers, businesspeople, and other informed and interested parties in signing an open letter against the development of autonomous weapons. Sponsored by the Future of Life Institute, an organization dedicated to “safeguarding life and developing positive visions of the future,” this open letter proposes “a ban on offensive autonomous weapons beyond meaningful human control.”

Since its release, the open letter has achieved its aim of raising the issue publicly and stimulating awareness and open debate about the state of the technology and the accompanying ethical issues.

Here’s CNN’s report on the issue and the open letter:

It’s important to realize, of course, that this issue didn’t come out of nowhere. The incremental march of technology has led us to this latest inflection point: AI is slowly becoming more sophisticated, and the drone culture of remote warfare more embedded in military thinking, so that the two trends form an inevitable intersection pointing to the future.

The tech signatories of this FLI Open Letter, then, are simply bringing to light and providing support for an issue which many concerned parties have been discussing for a couple of years. There is an active non-profit dedicated to the issue, the International Committee for Robot Arms Control, and last year, the Red Cross held an expert meeting on the subject. You can read the Red Cross’s report here: 4221-002-autonomous-weapons-systems-full-report.

It seems like a no-brainer to suggest a ban on the development of “killer robots,” but like so many technological issues, it’s complicated.

Writing in IEEE Spectrum, Evan Ackerman provides more than a contrarian view when he writes that We Should Not Ban Killer Robots. His excellent point is simply that it’s pointless to ban them because “no letter, UN declaration, or even a formal ban ratified by multiple nations is going to prevent people from being able to build autonomous, weaponized robots.” Instead, Ackerman argues, we need to accept that it will happen and work not on bans, but the technology to instill ethical behavior in autonomous weapons. To quote, “What we really need, then, is a way of making autonomous armed robots ethical, because we’re not going to be able to prevent them from existing.”

In a remote way, Ackerman’s point is very similar to the one I made in a previous post, where I argued that AI is likely inevitable, but that, should it arrive in our world, its character and use would depend a great deal on the conditions of our global society when it arrives. To quote that piece, “my assumption then is that, given the way the [governments and societies] work today, and given all the implications of the factors of spying and nation-state competition/warfare, machine super-intelligence [or AI] would end up in the hands of governments as a military and/or intelligence tool.”

As I argued then, it’s not enough to work on ethical or friendly AI — we should do that, sure — but it’s not enough. Rather, we need to work on the ethical context in which AI will emerge. To quote again, our best hope of preventing the weaponization of full AI “goes beyond ensuring that the AI we create is ‘friendly.’ Rather, we have to make sure that machine super-intelligence [or AI] does not arrive before we change the context of our world.”

And by change, I mean improve. The context of our world needs to become more moral and informed, more cooperative and less warlike, in order to avoid the potential dangers of weaponized AI. It needs to be a world in which we are less interested in weaponizing AI or anything else in the first place. It sounds utopian, sure, but I think an effort like the FLI’s open letter is a small example of an emergent behavior in the right spirit — people connecting to face an issue, communicating a position on it, and inviting discussion.

That’s the right stuff, in my opinion. So to me, it’s worth signing.

If you agree, add your signature here.


Sadly, hitchBOT Didn’t Survive Philadelphia

A couple of weeks ago, I wrote about a robot named hitchBOT thumbing his way across America. As I wrote then, hitchBOT’s plan was to travel, take photos, and post to social media, but the crux of the journey apparently involved interaction and trust with humans.

Here’s hitchBOT starting his American odyssey:

hitchBOT had already successfully made it across Canada and Germany, but guess what? Two weeks was all it took for hitchBOT’s American adventure to meet an untimely end.

Apparently, today, hitchBOT was found largely destroyed on the side of a road in Philadelphia. Read about it here and here.

Here’s a sad photo of the scene:

hitchBOT, who again had planned to hitchhike all the way to California, only made it this far:

I won’t moralize too much here about hitchBOT’s fate and what it says about our culture in the United States. Yes, it’s tragic and pointless as these things go, but if it was truly an experiment, among other things, that involved human trust and human-machine interaction, it’s fair to draw whatever conclusions are fair to draw.

I think it’s obvious hitchBOT was perceived (and treated) by his assailants as an object, not a subject, and furthermore, in many news reports, the word “vandalized” was used, a word that certainly means violence against objects, not subjects. Will there be a future where machine subjectivity is sufficiently advanced that an act like this could be called murder? Where is the intelligence/personality threshold at which smart machines win some measure of respect as subjects? In other words, where on the spectrum of machine intelligence will human attitudes change? Will it require full AI and/or consciousness, or some sufficiently developed point along the way? Or will human attitudes ever change?

Finally, I saw one person on Twitter comment, “America, this is why the world hates you,” but I’ll go ahead and add, “Humans, maybe this is why we should worry about SkyNet.” And things like autonomous weapons.

RIP hitchBOT.


Digital Nomads: Harbingers of the Future of Work

At the Rise conference this weekend in Hong Kong, Google for Work president Amit Singh noted the coming end of the desktop computer, and thereby the end of the traditional work desk, as more and more people and technology become mobile. According to coverage in TNW, Singh said workers are “increasingly getting more done on mobile devices. In the future, you’ll be spending even more time on them, away from your desk.”

Further, Singh indicated that, with its acquisition of machine learning firm DeepMind, Google is working on AI assistants that will further facilitate the liberation of workers from their desks. To quote Singh’s talk at the Rise conference:

“We’ve been thinking a lot about the increasing importance of mobility at work. We’re currently taking traditional data and tools and unlocking them from your desk. But creating an intelligent assistant that goes where you do and helps you out by surfacing data when you need it, in context, cognitive in real-time — I believe that’s the future.”

That the future of digital communications and productivity is mobile is no newsflash; it’s a trend that’s been building for some time. There will be more and more opportunities for remote and place-independent work, and more tools like AI assistants available to remote workers.

What’s interesting now is the emergence of the cultural corollary: the rise of the “digital nomad” as a distinctive lifestyle choice and self-identification for (at present) mostly young, educated, tech-savvy people. To the point, also occurring this weekend was DNX 2015, the Global Digital Nomad Conference in Berlin. According to its web site, DNX sees remote work as a kind of moral revolution, and the organizers go so far as encouraging people to quit conventional jobs and join the nomadic horde:

“DNX is changing lives and inspiring people to start to work and live location independent. Our vision is that more and more people live their lives free and self-determined. We strongly believe that meeting other cultures makes us personally richer and the world a better place. Everybody can find his or her passion, live their dreams and work self-determined. DNX is part of the freedom revolution, in that people take ownership of their jobs, time and life. People quit their conventional jobs to reclaim the freedom to design their own lives.”

Here also is a video clip of the founders of DNX talking about DNX:

In addition to technology tools and at least one conference, there are other services (some in development) that support digital nomads. Sharing economy stalwarts such as Uber and Airbnb come immediately to mind as nomad-friendly, but there are many more. One interesting example is Nomad House, “a housing solution that offers flexible living arrangements while bringing together great people; to stimulate ideas, incubate projects, and create the best possible home; in the best locations in the world.” With Nomad House, apparently, you subscribe to digital nomad community housing credits that you can consume in locations around the world.

All of this labor freedom seems presently restricted to a limited set of high-tech skills and trades, of course, but as this lifestyle movement grows in possibility, and the tools to support it improve, I predict that we will all become more place-independent and thus potentially nomadic. Ironically, I suspect corporations will have embraced the shift before governments do. Governments, for their part, may need to adapt to millions or even billions of fully employed wanderers crossing and recrossing international borders, working for various companies in various countries, and banking, shopping, and investing outside of traditional rooted-national-citizen patterns.


Bob Bergman on Systems Thinking and Chronic Social Problems

At the July 29, 2015, meeting of the Arizona Chapter of the World Future Society, I and 21 other Arizona futurists were pleased to hear Bob Bergman present and discuss “Systems Thinking and Chronic Social Problems.” Bob is head of Arizona Decision Sciences and a long-time technology, strategy, and decision sciences professional. Specifically, at our meeting, Bob shared with us his application of systems thinking to the homeless problem in Arizona, on which he did work for the Maricopa Association of Governments (MAG) in 2014.

Here is Bob’s full presentation, which includes a great introductory overview of systems thinking:

A big part of the work Bob did for MAG was the development of a systems model for homelessness, as expressed in a simulation. Scenarios and simulations are standard futures methods, of course, and have their uses and limits. In this case, by taking all the variables associated with local homelessness, as defined by representatives of MAG, Bob was able to create a simulator that facilitated assessing various drivers of the problem, as well as various outcomes of possible policy decisions.

Here is Bob’s Arizona homeless problem simulation. You can access it yourself online and play with variables such as homeless shelter capacity and recidivism, and thereby get a feel for how such systems models work, the complexity of the homeless problem, and the challenges of defining and analyzing complex systems.
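To make the stock-and-flow idea concrete, here is a minimal sketch in Python of the kind of loop such a simulator runs. To be clear, this is an illustrative toy I’ve constructed, not Bob’s actual MAG model; the variables and rates (monthly inflow, shelter capacity, exit rate, recidivism) are simplified stand-ins for the levers his simulator exposes.

```python
# Toy stock-and-flow model of a homeless population (illustrative only).
# Each month: some of the sheltered population exits to housing, a
# fraction of those exits returns (recidivism), and new people flow in.

def simulate(months=24, inflow=120, shelter_capacity=800,
             exit_rate=0.15, recidivism=0.30, initial_homeless=1000):
    """Return the month-by-month homeless population under fixed rates."""
    homeless = initial_homeless
    history = []
    for _ in range(months):
        sheltered = min(homeless, shelter_capacity)  # shelter is a bottleneck
        exits = sheltered * exit_rate                # housed this month
        returns = exits * recidivism                 # later return to homelessness
        homeless = homeless + inflow - exits + returns
        history.append(round(homeless))
    return history

trajectory = simulate()
print(trajectory[-1])  # population after 24 months under these assumptions
```

Even a toy like this shows why the discussion centers on drivers: raising shelter capacity or cutting recidivism changes the trajectory far more than small changes in inflow, which is exactly the kind of policy comparison the real simulator is built for.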

Of course, our group of futurists had many questions and challenges for the model, most of them related to how the problem is defined, system boundaries, and scoping, all of which is part of the point of studying complex systems, and part of the fun of talking about them.

If you’re in Arizona and interested in the future, I encourage you to join us at one of our WFS Arizona monthly meetings. Join our meetup group here. Our meetings are free and open to everyone.