The Three Laws of Robotics: The Time is Now!

Granted, when robots get rights, so do Techyum minions (*evil laugh*, cue winged monkeys), but the BBC's "The ethical dilemmas of robotics" is fascinating, and provocatively timed when you consider how good gynoids are getting these days. It's a thoughtful, wonderful exploration of robotics and software ethics — a discussion I've been excited to see happen for quite some time. According to the article, there's now even legislation on the table in different countries about this… But now I wonder — do I need to start getting informed consent from my vibrator? Maybe only when it (damned freeloading machine) learns to mix me a drink and stops stealing the covers… Snip:

Scientists are already beginning to think seriously about the new ethical problems posed by current developments in robotics.
This week, experts in South Korea said they were drawing up an ethical code to prevent humans abusing robots, and vice versa. And a group of leading roboticists called the European Robotics Network (Euron) has even started lobbying governments for legislation.
(…) Isaac Asimov was already thinking about these problems back in the 1940s, when he developed his famous “three laws of robotics”.
He argued that intelligent robots should all be programmed to obey the following three laws:
* A robot may not injure a human being, or, through inaction, allow a human being to come to harm
* A robot must obey the orders given it by human beings except where such orders would conflict with the First Law
* A robot must protect its own existence as long as such protection does not conflict with the First or Second Law
These three laws might seem like a good way to keep robots from harming people. But to a roboticist they pose more problems than they solve. In fact, programming a real robot to follow the three laws would itself be very difficult.
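The roboticists' complaint in that last paragraph is easy to appreciate if you try to write the laws down. Even as a naive strict-priority filter the laws look trivial, but every interesting decision gets punted into boolean predicates ("would this harm a human?") that nobody knows how to compute. A toy sketch, with every name and predicate hypothetical, just to show the priority ordering:

```python
# Toy sketch: Asimov's three laws as a strict priority filter.
# All names here are hypothetical; the genuinely hard part is computing
# predicates like harms_human at all -- which is exactly the article's point.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool       # would executing this injure a human?
    prevents_harm: bool     # does it stop a human coming to harm?
    obeys_order: bool       # was it ordered by a human?
    self_destructive: bool  # does it endanger the robot itself?

def choose(actions: list[Action]) -> Action:
    # First Law (part one): actions that harm a human are simply off the table.
    legal = [a for a in actions if not a.harms_human]
    if not legal:
        raise RuntimeError("no action satisfies the First Law")
    # Rank the rest: First Law (no harm through inaction) outranks the
    # Second (obedience), which outranks the Third (self-preservation).
    legal.sort(key=lambda a: (
        not a.prevents_harm,   # prefer actions that prevent harm
        not a.obeys_order,     # then actions that obey orders
        a.self_destructive,    # then actions that preserve the robot
    ))
    return legal[0]

options = [
    Action("stand by", harms_human=False, prevents_harm=False,
           obeys_order=True, self_destructive=False),
    Action("pull human from fire", harms_human=False, prevents_harm=True,
           obeys_order=False, self_destructive=True),
]
print(choose(options).name)  # -> pull human from fire
```

Note that the sketch happily sacrifices the robot and ignores a direct order the moment a human is in danger, which is the ordering Asimov intended; what it can't do is tell you whether pulling someone from a fire counts as harming them.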

Link.
