A couple of weeks ago, I wrote about a robot named hitchBOT thumbing his way across America. As I wrote then, hitchBOT’s plan was to travel, take photos, and post to social media, but the crux of the journey apparently involved interaction and trust with humans.
Here’s hitchBOT starting his American odyssey:
hitchBOT had already successfully made it across Canada and Germany, but guess what? Two weeks was all it took for hitchBOT’s American adventure to meet an untimely end.
Here’s a sad photo of the scene:
hitchBOT, who again had planned to hitchhike all the way to California, only made it this far:
I won’t moralize too much here about hitchBOT’s fate and what it says about our culture in the United States. Yes, as these things go it’s tragic and pointless, but if this was truly an experiment in human trust and human–machine interaction, among other things, then the results are fair game for whatever conclusions they support.
I think it’s obvious hitchBOT was perceived (and treated) by his assailants as an object, not a subject. Notably, many news reports used the word “vandalized” — a word that connotes violence against objects, not subjects. Will there be a future where machine subjectivity is sufficiently advanced that an act like this could be called murder? Where is the intelligence/personality threshold at which smart machines win some measure of respect as subjects? In other words, where on the spectrum of machine intelligence will human attitudes change? Will it require full AI and/or consciousness, or some sufficiently developed point along the way? Or will human attitudes ever change at all?
Finally, I saw one person on Twitter comment, “America, this is why the world hates you,” but I’ll go ahead and add, “Humans, maybe this is why we should worry about Skynet.” And about things like autonomous weapons.