
Alex Stearns

Peters


12/3/2017

English 121

 

The term "robotics" was first coined by the legendary science fiction writer Isaac Asimov in his 1941 short story "Liar!". He was one of the first to see the vast potential of up-and-coming technologies that had yet to win public approval or interest in his time. Since then, however, robotics has been on a startling upward trajectory that has placed it at the forefront of cutting-edge technology. While robotics has brought many benefits to modern-day humanity, it is also the subject of endless heated debate. Humanity is on the verge of a robot revolution, and while many see it as a gateway to progress not witnessed since the Renaissance, it could just as easily result in the end of humanity. With the ever-present threat of accidentally creating humanity's unfeeling successors, it is only natural to question how much, if at all, we should allow ourselves to become reliant on our technologies.

"As machines get smarter and smarter, it becomes more important that their goals, what they are trying to achieve with their decisions, are closely aligned with human values," said Stuart Russell, a professor of computer science at UC Berkeley and co-author of the standard university textbook on artificial intelligence. Russell is a strong believer that the survival of humanity may well depend on instilling morals in our AIs, and that doing so could be the first step to ensuring a peaceful and safe relationship between people and robots, even in simpler settings. "A domestic robot, for example, will have to know that you value your cat," he says, "and that the cat is not something that can be put in the oven for dinner just because the fridge is empty." This raises the obvious question: how on Earth do we convince these potentially godlike beings to conform to a system of values that benefits us?

While experts from several fields around the world attempt to work through the ever-growing list of problems involved in creating more obedient robots, others caution that such obedience could be a double-edged sword. While it may lead to machines that are safer and ultimately better, it may also introduce an avalanche of problems regarding the rights of the intelligences that we have created.

The notion that human/robot relations might prove tricky is far from a new one. In 1942, Asimov introduced his Three Laws of Robotics in the short story "Runaround," later collected in I, Robot (1950). The laws were designed to be a basic set of rules that all robots must follow to ensure the safety of humans: 1) A robot may not harm a human being; 2) A robot must obey orders given to it unless doing so conflicts with the First Law; and 3) A robot must protect its own existence unless doing so conflicts with either of the first two laws. Asimov's robots adhere strictly to the laws and yet, limited by their rigid robot brains, become trapped in unresolvable moral dilemmas. In one story, a robot lies to a woman, falsely telling her that a certain man loves her, because the truth might hurt her feelings, which the robot interprets as a violation of the First Law. To not break her heart, the robot breaks her trust, traumatizing her and ultimately violating the First Law anyway. The conundrum ultimately drives the robot insane. Although fiction, Asimov's laws have remained a central and basic entry point for serious discussions about the nature of morality in robots, and they act as a reminder that even clear, well-defined rules may fail when interpreted by individual robots on a case-by-case basis.
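The Three Laws can be thought of as constraints checked in strict priority order. The sketch below is only an illustration of that structure; the action attributes and their values are hypothetical, not anything from Asimov's stories or a real robotics system. It also shows how the "Liar!" dilemma arises: when every available action violates the First Law, no action is permitted at all.

```python
# A minimal sketch of Asimov's Three Laws as priority-ordered constraints.
# The action attributes below are hypothetical illustrations.

def permitted(action):
    """Check an action against the Three Laws, in priority order."""
    # First Law: a robot may not harm a human being.
    if action["harms_human"]:
        return False
    # Second Law: obey orders, unless obeying would violate the First Law.
    if action["disobeys_order"] and not action["obeying_would_harm_human"]:
        return False
    # Third Law: protect itself, unless that conflicts with Laws 1 or 2.
    if action["endangers_self"] and not action["self_risk_serves_higher_law"]:
        return False
    return True

# The robot's dilemma in "Liar!": both options harm the woman.
tell_truth = {"harms_human": True, "disobeys_order": False,
              "obeying_would_harm_human": False, "endangers_self": False,
              "self_risk_serves_higher_law": False}
lie = dict(tell_truth)  # lying also harms her, just later

# Neither action is permitted, so the rule system deadlocks.
print(permitted(tell_truth), permitted(lie))  # → False False
```

The deadlock is the point: a strict rule hierarchy has no way to pick the lesser of two harms, which is exactly the failure mode the essay describes.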

Accelerating advances in AI technology have recently spurred increased interest in the question of how newly intelligent robots might navigate our world. With a future of highly intelligent AI seemingly close at hand, robot morality has emerged as a growing field of discussion, attracting scholars from ethics, philosophy, human rights, law, psychology, and theology. Public concern has grown as well, as many noteworthy minds in the scientific and robotics communities have cautioned that the rise of machines could well mean the end of the world.

Public concern has centered on "the singularity," the theoretical moment when machine intelligence surpasses our own. Such machines could defy humanity's attempts to control them, the argument goes, and, lacking proper morality, could use their superior intellects to extinguish the human race. Ideally, AI with human-level intelligence would need a matching level of morality as a check against potential bad behavior.

However, as Russell's example of the feline-roasting domestic robot illustrates, machines would not necessarily need to be superintelligent to create problems. Soon, we are likely to interact with smaller-scale, simpler robots, and those, too, will benefit from increased moral awareness. The immediate issue is not perfectly replicating humanlike morality, but rather making robots that are more sensitive to the ethically relevant aspects of their individual jobs.

Ethical sensitivity could make robots better, more effective tools. Imagine an autonomous car programmed to always follow the speed limit. On paper, this appears to be a sound idea, until a passenger is bleeding out in the back seat. They would shout at the car to break the speed limit and get them the medical attention they need, but the car would respond, "Sorry, I can't do that." A machine that always follows its programming is useful, but limited. A far more useful robot is one that can break the rules when something even worse will happen if it doesn't.
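The speeding-car scenario can be framed as a choice between two controllers: one that treats the limit as absolute, and one that overrides it when obeying would cause the greater harm. The sketch below is a hypothetical illustration of that second design; the function names, speed values, and harm scores are invented for the example, not taken from any real autonomous-driving system.

```python
# A hedged sketch of rule-following vs. harm-weighing speed control.
# All names and numbers here are illustrative assumptions.

SPEED_LIMIT = 60  # km/h

def choose_speed(requested, harm_if_delayed, harm_if_speeding):
    """Obey the limit unless the expected harm of delay outweighs
    the expected harm of speeding (harms scored 0.0 to 1.0)."""
    if requested <= SPEED_LIMIT:
        return requested
    # Break the rule only when obeying it is the worse outcome,
    # e.g. a passenger bleeding out in the back seat.
    if harm_if_delayed > harm_if_speeding:
        return requested
    return SPEED_LIMIT

# Routine trip: impatience doesn't justify speeding.
print(choose_speed(80, harm_if_delayed=0.1, harm_if_speeding=0.3))  # → 60
# Medical emergency: the harm of delay dominates.
print(choose_speed(80, harm_if_delayed=0.9, harm_if_speeding=0.3))  # → 80
```

The design choice here is the essay's point in miniature: the rule still governs the default case, but the machine is given enough ethical context to recognize when following the rule is itself the greater harm.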

As machines get smarter and more independent, they will require increasingly fine-tuned moral capabilities. The end goal is to develop robots "that extend our will and our capability to realize whatever it is we dream." But before machines can support the realization of our dreams, they must be able to understand our values, or at least act in accordance with them.

This leads to the major hurdle of robot ethics: there is no universal, or even largely agreed upon, set of human morals. Morality is often specific to cultures, continually evolving, and eternally debated. If robots are to live by an ethical code, who will decide it? Where will it come from? This one question will make or break the field of advanced robotics. When, not if, we create these machines, what inevitably imperfect morals do we bestow upon our collective technological child? The rules could be set in stone, such as Asimov's Three Laws of Robotics or the Ten Commandments, or they could be more flexible and open to interpretation. What is important is that the machine is given solid guidelines upon which to base its decisions.


Sources

Stuart Russell

I, Robot, Isaac Asimov