Do Androids Dream of Electric Shocks? Utilitarian Machine Ethics
Consider Lt. Commander Data from Star Trek: The Next Generation, the droid C-3PO from Star Wars, or the Replicants of Blade Runner: they use language (often several languages), they are rational, they form relationships, and they speak in ways that suggest they have a concept of self, perhaps even “feelings” or emotional experience. In the films and TV shows in which they appear, they are depicted in frequent social interaction with human beings; but would we have any moral obligations to such beings if they really existed? What would we be permitted, or forbidden, to do to them? On the one hand, a robot like Data has many of the attributes that we currently associate with a person. On the other hand, he has many of the attributes of the machines that we currently use as tools. He (and other science-fiction machines like him) closely resembles one of the things we value most (a person) and, at the same time, one of the things we value least (an artefact), leading to an apparent ethical paradox. What is its solution?