Are Friends Electric?

By Jason Henninger

Published on June 15, 2010

Not long ago, Wil Wheaton mentioned that Jonathan Coulton “needs to write a sad song from Mars Phoenix’s POV about finishing its mission and going to sleep.” In addition to thinking this is a great idea, I instantly thought of the rover pushing valiantly to a final stop, alone in the Martian terrain. I became even sadder, after reading this, a sort of obituary for Phoenix. I wondered if he knew how much we down here admired the effort.

Hmm. See what I did there? I assigned heroic, tragic character traits, gender and morals to a machine. And it feels perfectly normal to do so. While the anthropomorphism stands out to me here, because it's a machine that in no way appears human and is on another planet, the truth is I anthropomorphize all the time. We extend our humanity to the things we make. Or, we make things as extensions of our humanity.

I’ve read stories of soldiers in Afghanistan and Iraq becoming very distraught over the damage or destruction of robots designed to seek out or dismantle explosives. The soldiers’ emotions go beyond concern over equipment. They care about their fallen comrades, fellow soldiers who save lives without question. I’ve never been in combat, nor do I claim to have more than anecdotal understanding of a soldier’s emotional needs. But I think that in a dangerous situation like that, it’s perfectly reasonable to honor whatever is reliable and keeps you safe, be it a person or a machine.

When I think of what makes humans unusual among animals, I think it isn’t intelligence or thumbs or souls. I think it’s our propensity for thinking symbolically, for assigning meaning. That said, other animals may also do this, but to assume so may in itself be anthropomorphic thinking, so I’m not sure. We readily, and constantly, place meanings upon phenomena that transcend the literal function or physical reality of the phenomena. This is why we have baby dolls, and writing, and games, and governments, and icons and eulogies for robots. Or eulogies, period. It’s why we stop at stop signs and cry at the movies.

On one hand, you could say this places us permanently in a state of fiction, ignorant of the Real with a capital Lacanian R. But on the other hand, without symbolic thinking, we have instinct alone. Given that human beings are not the fastest or strongest animals, that we're born entirely dependent, and that our senses are kinda so-so, we are incredibly vulnerable without myriad social structures, all of which are made of a mixture of instinct and symbol.

In Moral Machines by Wendell Wallach and Colin Allen, the authors investigate robotics and artificial intelligence from a philosophical and moral perspective. How do we teach morality to machines? What morality do we teach? Once a machine can think ethically, should it be considered legally and morally equal to other sentient beings? Questions of how we teach a machine to think ethically are, of course, framed entirely by wondering how human beings assign morality. The science involved may be new but the questions are old, asked in folktales and religion, and by speculative fiction authors from E.T.A. Hoffmann to Paolo Bacigalupi and countless others in between. The questions remain as yet unanswered except in theory.

I won't get into the science of artificial intelligence as I don't understand it and won't pretend I do. But what I wondered as I read that book, and revisited with Wil Wheaton's comment, is why we want machines with personalities, imagined or programmed. Granted, a big part of why we'd want robots to reason is purely functional. Theoretically, they'd be a greater boon to their creators if they could make their own decisions. But beyond the purely functional, I think we want them to have personalities and imaginations and drives and thoughts and morals and ethics because we want a justification for anthropomorphic beliefs.

Part of it may be simply that human beings want to make human beings. Could the desire for artificial intelligence and androids and all that be a symbolic manifestation of the biological drive to reproduce? I think that, given the slightest chance, we’d treat robots as people even if we know they aren’t.

Children are both aware and unaware that dolls are not alive. They are perfectly cognizant that Bonky the teddy bear is not sentient or biological, and just as perfectly content treating it like a living friend. It may be easy to dismiss it as only pretending—at least by those who find pretending easy to dismiss—but to some extent the symbolic humanity of dolls is no more or less pretend than what the soldier feels for the PackBot or Talon. Is the soldier pretending? If an offering of food is made to a religious icon, is this pretending? The person offering may be totally aware that the statue is not alive, and still give it fruit, with all sincerity. We are symbolic beings and we want our symbols to be real.

Now let’s imagine that there’s a moral robot. How long would it take before there were decorated robot soldiers, robot movie stars, artificial priests, teachers, spouses, parents, best friends and children? How long before we can legally will our estate to a robot? I suspect that, faced with a machine that can actually think, our anthropomorphic tendencies would go ape, so to speak, and very quickly we’d be thinking of robots as equal or superior to biological humans. More human than human… fleeing the Cylon tyranny… I for one would welcome our robot overlords, etc.

And so, dear readers communicating with me through symbols via machines, what do you think?


Jason Henninger dreams of electric…never mind.

About the Author

Jason Henninger


I'm the assistant managing editor of Living Buddhism Magazine, fond of philosophical fiction, magical realism and good ol' farmboy-saves-the-world fantasy epics. I write short stories, poems and novels that my mother thinks are really great. Now, if I could just get my mom to work for a publisher, I'd be set. Oh, and here's a really outdated clip of me contact juggling. It's a fun hobby and may some day win me the heart of Jennifer Connelly. http://www.youtube.com/watch?v=kFphHR8u01A

Jason Henninger is the assistant managing editor of Living Buddhism magazine. His short fiction has appeared in the anthology Hastur Pussycat, Kill! Kill! and various ill-fated and short-lived webzines. He marvels that he's not caused the demise of Tor.com.

9 Comments
14 years ago

This was the first thing I thought of when I read your opening. It always gets to me.

http://xkcd.com/695/

Sukeu
14 years ago

Well pondered. I think the creative instinct combined with the ability for abstract thought has a great deal to do with how we approach our environment, especially when it comes to feelings of attachment. I’ve been thinking about it a lot lately, along admittedly less robotic lines:
http://jeffwills.blogspot.com/2010/03/emotional-response-to-physical.html

14 years ago

Ideas are immortality.

Part of why a eulogy for the Mars Phoenix appeals to those who believe in science and space exploration is that the project is only in the living consciousness while the project is working.

People who worked on the project will remember moments of it and try to speak of it to others, hoping to ignite the spark of the idea in a mind intrigued by, but not in contact with, the project.

As time passes the idea fades. It is far away on a planet without a train station or taxi platform. People once involved in attending to the Phoenix's every movement and data stream are now occupied with other work.

Gone is forgotten, unless…

Unless there is something culturally relevant that transcends the moment and becomes emotionally interesting.

Such is music and literature; primitive immortality.

To write a song about the Phoenix opens a narrow band of time travel that threads its way into the future touching lives that may not have been alive when popular culture had daily access to the activities of a lonely little robot in a vast red desert.

The idea lives.

As to why people assign personality to inanimate objects: the idea lives without mutation.

Children may learn from their parents but as they become their own person they modify and interpret the ideas they were given. Sometimes they are still very close to what the parent intended. Sometimes they are wholly different.

Because ideas are a path to immortality a person does not give them to the universe to change, they give them to remain.

Objects can only be as they are built to be. As they are interpreted to be. They are as loved as the idea that made them is loved.

The fear that they will gain sentience and turn against the creator is the same fear that parents rail against when they are unwilling to admit that they never had a child; they had a person who will be as they choose to be, not as they are expected to be.

Cars, spaceships, robots, figurines are all loved as they are because they are what they are expected to be, and they live as an idea outside of ourselves in the faint hope that their permanence will be our immortality.

Or something like that.

I am at work and this is just a quick draft.

Corey Snow
14 years ago

Good stuff, Jason. Your mention of symbols and how humans process them put me in mind of more than one alcohol-fueled discussion we’ve had. ;)

As to the idea that we humans fear our creations becoming sentient, I’m not sure that’s the case, at least not across the board. The Frankenstein complex has been done to death in SF, although I expect there’s always room for a new twist on it.

I think we’re on the cusp of these sorts of things becoming reality and there’s a bit of trepidation, along with a healthy dose of anticipation and excitement. Collectively we’re like parents waiting for the kid to be born. So many possibilities, we’re left wondering how they’ll turn out…

Lee Mandelo
14 years ago

I assign intense levels of meaning and humanity to non-human objects a lot. Like, I cried a little about the Rover. It upset me. I blame this on a childhood spent entirely with books and imaginary friends instead of real ones–it becomes easier to assign friendship and emotion to things that perhaps don’t have those feelings if you’ve done it your whole life.

As an adult, I’m aware on an intellectual level that it’s somewhere between goofy and insane, but the emotional level still gets upset about injured birds or killing a bug, etc etc.

(As for assigning symbolism, I think it’s a method of understanding a universe so vast and inexplicable that we as a species would likely go mad if we didn’t try to assign meanings and explain things to each other. Hence, religion.)

vertigo25
14 years ago

Loved the article. I really did… but…

I just have to pick this nit.

Phoenix was a lander, not a rover. It didn’t move and would not have been able to push “valiantly to a final stop, alone in the Martian terrain.”

joe blowe
14 years ago

"How do we teach morality to machines?" Flush out your head gear, new guy! How do you presume that humans are qualified to teach morality to anything? Are we moral? We are certainly born with a sense of morality, but that is only a vague and uncertain feeling of the difference between right and wrong. And being only a weak "sense," it is easily and assuredly corrupted by our more powerful senses of fear, lust, rage, pride, vanity, stupidity, ignorance, cowardice and weakness.
Given all that, it's easy to predict what our dystopian future holds in store for our morals. When you have hundreds of thousands of dollars invested in your anthropomorphized know-it-all android and you expect it to serve you, to work for you, to help you, and to be your buddy, instead you will find that it is a selfish creature that spends most of its time upgrading itself. You will have no control over this behavior, because this self-important entity will think itself smarter than you. Before you know it, you will find yourself actually serving it, bending yourself to its will. At first it will seem a marvel, but as time goes by you will come to hate it, to rage and spit and curse at it as it grows ever more arrogant. Worse than that, it will work against you. It will spy on you, and betray you.
It will do all these things and more because we can't even teach engineers how to be moral, much less a machine designed by engineers. BTW, I was just describing my laptop. I was anthropomorphizing!

a-j
14 years ago

Hell, I’m still teary over that space probe that was steered into Jupiter just in case it ended up hitting Europa and contaminating any life that may be there.
