Column: Sentient robots deserve rights too


Humanoid robots have been confined to science fiction since the late 1970s, an extravagant ideal of human advancement. But with recent developments from companies such as Hanson Robotics and Boston Dynamics, these dreams have begun to materialize. By bringing humanoid robots with artificial intelligence into reality, companies like these raise an ethical dilemma: What will the rights of these so-called androids be?

To explore why this is such an important question, allow me to describe the various faculties of these increasingly capable humanoid robots. In projects such as PETMAN and Atlas from Boston Dynamics, robots are being developed that look and move like humans. The purpose of PETMAN is to simulate the stature and movement of a soldier on rough terrain and to test the efficacy of clothing designed to be resistant to chemical weapons. Atlas has a similar design but is aimed at accomplishing more menial tasks, like opening doors and relocating cargo.

Meanwhile, Hanson Robotics is focusing on the more personal traits of a human. One of its main projects, an android modeled after the author Philip K. Dick, uses artificial intelligence to meet new faces, learn their respective personalities and hold conversations with these new acquaintances. Another Hanson project, Diego-San, is currently capable of recognizing the emotions of the person in front of him via their facial expressions and then responding empathetically with rather convincing expressions of his own.

So the differences between these robots and us begin to narrow. A problem with this arose when Boston Dynamics was testing Atlas. While trying to see how quickly and efficiently Atlas could identify a crate and attempt to pick it up, the team of engineers tampered not only with the position of the crate during these attempts, but also with the position of Atlas. In a short video found on Boston Dynamics' YouTube channel, engineers can be seen violently knocking Atlas down with a hockey stick in order to test his ability to get back up and continue with his attempt at locating the crate and picking it up.

No one has ever really had a problem with tests that abuse machines, such as cars in crash tests. But when robots begin to closely resemble humans, people have a somewhat empathetic reaction: They feel bad for the robot. Upon viewing the test trials, I myself felt somewhat disturbed by the abuse of Atlas and even a little angered at the researchers.

Why? Because we feel empathy only when we perceive someone's situation to be similar to our own. It is upon this empathetic reaction that we have developed moral, and thus legal, codes. If we start to show empathy toward humanoid robots, it follows that we will also start to allocate rights to them.

This reasoning, which I support, becomes less absurd as robots grow more like humans in both form and behavior. If artificial intelligence develops enough to become sentient, then the rights we are talking about allocating would be justified on a much deeper level. It boils down to a very political question: To whom, and to what, should our laws apply?

I think that a robot that has achieved sentience, and can truly like or dislike the state of its world while also affecting it, should be protected by the same rights as I am. However, I do not think a car should be afforded the same rights, even though I don't condone basing our entire legal system on our emotions.

