Saturday, March 10, 2007


Pre-Futurism:
The Ethics
Of Robotics



The future is coming at us so fast that it is palpable now. We must begin making the ethical decisions for our future now.

Try wrapping your head around this. If you don't, a robot will, eventually:


If the idea of robot ethics sounds like something out of science fiction, think again, writes Dylan Evans.

Scientists are already beginning to think seriously about the new ethical problems posed by current developments in robotics.

This week, experts in South Korea said they were drawing up an ethical code to prevent humans abusing robots, and vice versa. And a group of leading roboticists called the European Robotics Network (Euron) has even started lobbying governments for legislation.

At the top of their list of concerns is safety. Robots were once confined to specialist applications in industry and the military, where users received extensive training on their use, but they are increasingly being used by ordinary people.

Robot vacuum cleaners and lawn mowers are already in many homes, and robotic toys are increasingly popular with children.

As these robots become more intelligent, it will become harder to decide who is responsible if they injure someone. Is the designer to blame, or the user, or the robot itself?

Decisions

Software robots - basically, just complicated computer programs - already make important financial decisions. Whose fault is it if they make a bad investment?

Isaac Asimov was already thinking about these problems back in the 1940s, when he developed his famous "three laws of robotics".

He argued that intelligent robots should all be programmed to obey the following three laws:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

3) A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

These three laws might seem like a good way to keep robots from harming people. But to a roboticist they pose more problems than they solve. In fact, programming a real robot to follow the three laws would itself be very difficult.

For a start, the robot would need to be able to tell humans apart from similar-looking things such as chimpanzees, statues and humanoid robots.

This may be easy for us humans, but it is a very hard problem for robots, as anyone working in machine vision will tell you.

Robot 'rights'

Similar problems arise with rule two, as the robot would have to be able to distinguish a genuine order from a casual request, which would require further research in the field of natural language processing.
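To see why roboticists are so sceptical, it helps to sketch what the three laws would actually look like as code. Here is a minimal Python sketch, a simple priority ordering over candidate actions. Be warned that everything in it is hypothetical: the predicates is_human, would_harm_human, and is_order, and the action attributes satisfies and self_preservation_score, are stand-ins for capabilities no real robot has. The ordering itself takes a dozen lines; all the difficulty hides inside the stubs.

```python
# A minimal sketch of Asimov's three laws as a priority ordering.
# Every predicate and attribute here is a hypothetical stand-in
# for an unsolved research problem; none of this exists today.

def is_human(entity):
    # Telling a human apart from a chimpanzee, a statue, or a
    # humanoid robot is the hard machine-vision problem described
    # above. Stubbed out here.
    raise NotImplementedError("open problem in machine vision")

def would_harm_human(action):
    # Deciding whether an action (or inaction) harms a human means
    # predicting consequences in the physical world - and it would
    # first need is_human() to work.
    raise NotImplementedError("open problem in prediction")

def is_order(utterance):
    # Distinguishing a genuine order from a casual request is an
    # open problem in natural language processing.
    raise NotImplementedError("open problem in NLP")

def choose_action(candidate_actions, utterances):
    """Filter candidate actions by the three laws, in strict priority."""
    # First Law: discard any action that would harm a human.
    safe = [a for a in candidate_actions if not would_harm_human(a)]

    # Second Law: among safe actions, prefer those that obey orders.
    orders = [u for u in utterances if is_order(u)]
    obedient = [a for a in safe if all(a.satisfies(o) for o in orders)]

    # Third Law: among what remains, prefer self-preservation.
    preferred = obedient or safe
    return max(preferred, key=lambda a: a.self_preservation_score)
```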

Asimov's three laws only address the problem of making robots safe, so even if we could find a way to program robots to follow them, other problems could arise if robots became sentient.

If robots can feel pain, should they be granted certain rights? If robots develop emotions, as some experts think they will, should they be allowed to marry humans? Should they be allowed to own property?


And the technology is progressing so fast that it is probably wise to start addressing the issues now.

One area of robotics that raises some difficult ethical questions, and which is already developing rapidly, is the field of emotional robotics.

This is the attempt to endow robots with the ability to recognise human expressions of emotion, and to engage in behaviour that humans readily perceive as emotional. Humanoid heads with expressive features have become alarmingly lifelike.

David Hanson, an American scientist who once worked for Disney, has developed a novel form of artificial skin that bunches and wrinkles just like human skin, and the robot heads he covers with it can smile, frown, and grimace in very human-like ways.

These robots are specifically designed to encourage human beings to form emotional attachments to them. From a commercial point of view, this is a perfectly legitimate way of increasing sales. But the ethics of robot-human interaction are murkier.

Jaron Lanier, an internet pioneer, has warned of the dangers such technology poses to our sense of our own humanity. If we see machines as increasingly human-like, will we come to see ourselves as more machine-like?


Lanier talks of the dangers of "widening the moral circle" too much.

If we grant rights to more and more entities besides ourselves, will we dilute our sense of our own specialness?

This kind of speculation may miss the point, however. More pressing moral questions are already being raised by the increasing use of robots in the military.

The US military plans to have a fifth of its combat units fully automated by the year 2020. Asimov's laws don't apply to machines that are designed to harm people. When an army can strike at an enemy with no risk to lives on its own side, it may be less scrupulous in using force.

If we are to provide intelligent answers to the moral and legal questions raised by the developments in robotics, lawyers and ethicists will have to work closely alongside the engineers and scientists developing the technology. And that, of course, will be a challenge in itself.


Yes, and of course, lawyers and ethicists have such a great track record on moral questions.

In the future, mankind will find himself up against the limits of his own humanity. It will be the first time since we tilled the fields as our primary occupation, appealing to a pantheon of "gods" for our daily rain and our daily bread, that we will be faced with the obliteration of our own humanity on an almost daily basis.

It will be the first time in thousands of years that human history will be mediated by visceral concerns rather than by philosophical, governmental, and theological texts.

We will find ourselves alone, up against the reality of the limits of our humanness. We will find ourselves alone with the Force of the Universe.

I pray that we will make the correct moral decisions. I think that we have been given the directions necessary to make these decisions, but while we were made to stand upright, we have sought out many devices.