BBC News Technology reports here on an interesting feature of Google's self-driving cars: they are programmed to break the speed limit in certain circumstances. If the surrounding cars are speeding, the Google car may deduce that sticking to the limit would be unsafe and so exceed it by up to 10mph to keep pace with the traffic around it.
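To make the reported behaviour concrete, here is a minimal sketch of what such a speed-matching rule might look like. It assumes a simple median-of-surrounding-traffic heuristic and a hard 10mph cap above the posted limit; the function name and the heuristic are my own invention for illustration, not anything from Google's actual system.

# Hypothetical sketch of a speed-matching rule; not Google's actual logic.
from statistics import median

MAX_OVERSHOOT_MPH = 10  # the cap above the posted limit reported in the article

def target_speed(speed_limit: float, surrounding_speeds: list[float]) -> float:
    """Pick a cruising speed, exceeding the limit only to match faster traffic."""
    if not surrounding_speeds:
        return speed_limit  # no traffic to match, so obey the posted limit
    traffic_flow = median(surrounding_speeds)
    if traffic_flow <= speed_limit:
        return speed_limit  # traffic is at or below the limit, so stay legal
    # Traffic is faster than the limit: match it, but never by more than 10mph.
    return min(traffic_flow, speed_limit + MAX_OVERSHOOT_MPH)

# Example: a 60mph zone with surrounding cars doing roughly 72mph gives 70mph.
print(target_speed(60, [71, 73, 72]))  # 70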
This is an example of an autonomous robot following Asimov's First and Second Laws of Robotics: obeying the orders given to it by human beings (here, the speed limit), except where following those orders would allow human beings to come to harm.
There's nothing in the article about the Third Law (self-preservation), though. Would an unoccupied Google car choose to drive itself off the road if that were the only way to avoid a collision? And what would the same car do if it were carrying passengers, or if there were also a risk of hitting pedestrians? I wonder if we are really ready for an autonomous robot that can calculate the lesser of two evils.
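To show what "calculating the lesser of two evils" might amount to in practice, here is a hypothetical sketch in which the car scores each available manoeuvre with an estimated harm value and picks the minimum. The option names and harm figures are invented purely for illustration and do not come from any real system.

# Hypothetical "lesser of two evils" choice; all values are invented for illustration.
def least_harmful(options: dict[str, float]) -> str:
    """Return the manoeuvre with the lowest estimated harm score."""
    return min(options, key=options.get)

# An unoccupied car might weigh swerving off the road against a collision.
options = {
    "brake_and_stay_in_lane": 0.8,  # likely collision with the car ahead
    "swerve_off_road": 0.3,         # damage only to the (empty) car itself
}
print(least_harmful(options))  # swerve_off_road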
The Three Laws appear throughout Asimov's many Robot novels and short stories - they're a clever framework for exploring and writing about human moral and ethical dilemmas. In the short story "Evidence", robot psychologist Susan Calvin (one of science fiction's greatest characters) spells out the links to human morality. It's interesting that we are now building robots complex enough that we need to consider robot ethics.