
The Debate on Autonomous Weapons and Weaponized AI


I recently joined over 2,000 scientists, researchers, businesspeople, and other informed and interested parties in signing an open letter against the development of autonomous weapons. Sponsored by the Future of Life Institute, an organization dedicated to “safeguarding life and developing positive visions of the future,” this open letter proposes “a ban on offensive autonomous weapons beyond meaningful human control.”

Since its release, the open letter has achieved its aim of raising the issue publicly and stimulating awareness and open debate about the state of the technology and the accompanying ethical issues.

CNN also ran a report on the issue and the open letter.

It’s important to realize, of course, that this issue didn’t come out of nowhere. The incremental march of technology has led us to this latest inflection point: AI is slowly becoming more sophisticated, and the drone culture of remote warfare is becoming more embedded in military thinking, so autonomous weapons sit at an inevitable intersection of trends pointing to the future.

The tech signatories of the FLI open letter, then, are simply bringing to light and lending support to an issue that many concerned parties have been discussing for a couple of years. There is an active non-profit dedicated to the issue, the International Committee for Robot Arms Control, and last year the Red Cross held an expert meeting on the subject. You can read the Red Cross’s report here: 4221-002-autonomous-weapons-systems-full-report.

It seems like a no-brainer to suggest a ban on the development of “killer robots,” but like so many technological issues, it’s complicated.

Writing in IEEE Spectrum, Evan Ackerman provides more than a contrarian view when he argues that “We Should Not Ban Killer Robots.” His excellent point is simply that it’s pointless to ban them because “no letter, UN declaration, or even a formal ban ratified by multiple nations is going to prevent people from being able to build autonomous, weaponized robots.” Instead, Ackerman argues, we need to accept that it will happen and work not on bans, but on the technology to instill ethical behavior in autonomous weapons. To quote, “What we really need, then, is a way of making autonomous armed robots ethical, because we’re not going to be able to prevent them from existing.”

In a remote way, Ackerman’s point is very similar to the one I made in a previous post, where I argued that AI is likely inevitable, but that, should it arrive in our world, its character and use would depend a great deal on the conditions of our global society when it arrives. To quote that piece, “my assumption then is that, given the way the [governments and societies] work today, and given all the implications of the factors of spying and nation-state competition/warfare, machine super-intelligence [or AI] would end up in the hands of governments as a military and/or intelligence tool.”

As I argued then, working on ethical or friendly AI is worthwhile, and we should certainly do it, but it’s not enough. Rather, we need to work on the ethical context in which AI will emerge. To quote again, our best hope of preventing the weaponization of full AI “goes beyond ensuring that the AI we create is ‘friendly.’ Rather, we have to make sure that machine super-intelligence [or AI] does not arrive before we change the context of our world.”

And by change, I mean improve. The context of our world needs to become more moral and informed, more cooperative and less warlike, if we are to avoid the potential dangers of weaponized AI. It needs to be a world in which we are less interested in weaponizing AI, or anything else, in the first place. It sounds utopian, sure, but I think efforts such as the FLI’s open letter are a small example of emergent behavior in the right spirit: people connecting to face an issue, communicating a position on it, and inviting discussion.

That’s the right stuff, in my opinion. So to me, it’s worth signing.

If you agree, add your signature here.


Author: Eric Kingsbury

