September 24th, 2009
Isaac Asimov’s Three Laws of Robotics were the basis for many of his short stories and several novels. They stated:
“1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”
Well, why would people actually implement these!?
I mean, we do have lawyers doing things like this to cover people’s asses, and harm from robots is one of these things that needs covering. Potentially. But wouldn’t a military robot have a law that says something like:
“4. If there is any uncertainty, kill them all and let god sort them out. I’m just a fucking robot, all right?”
Space.com offers up an article this week concerning a reality check for Asimov’s three laws. There are going to be a lot of variations of this as we move toward a reality of robots.
What do you think? Are we actually going to have any control over our creations, or are people operating inside or outside the law going to circumvent anything at all resembling Asimov’s very reasonable and rational vision?
I think it depends a lot on how far we can develop artificial/sentient intelligence.
If all we can do is create very clever programs that simulate intelligence, then Asimov’s rules (including the Zeroth law) are probably something that programmers should bear in mind.
If we can create truly sentient intelligences then I think we would be far better off trying to educate/reason with these intelligences in a similar manner* to how we educate and reason with children to teach them how to be adults.
I also think that being too critical of Asimov’s laws ‘because they don’t work’ is disingenuous, as Asimov wrote these laws in 1940, before computers existed, let alone programming languages or any real ‘scientific’ understanding of how to create his robots.
*Similar intent, rather than application necessarily.
I think “implementing” laws of robotics is about as utopic as, say, creating computers that won’t allow their users to commit cybercrimes. Asimov’s genius, in my opinion, was to redefine robots as “tools”, as opposed to the “rebellious creatures” depicted by various Frankenstein myths. Complex, self-aware, smart tools, but tools nonetheless.
But as tools, robots would operate on the same level as any other kind of tool: no matter how autonomous they become, they’ll always operate on the basis of their original programming, a programming that is still subject to human intervention at least at SOME point.
That, in a way, makes the three laws ineffective, because we cannot escape our responsibility when faced with the question “how are you going to use this technology?”. I have huge respect for Asimov and I believe the Three Laws have served a wonderful narrative and dramatic purpose, but they stand in contradiction to the idea of technology itself.
Think about it. Any programming you insert in a robot, no matter how basic and deep-rooted it is, can be hacked. Any human with sufficient knowledge could hack into a robot and rewire it to be a killer, a thief, or a gardener. Suppress, rewrite or redefine the Laws.
But as you said, it was another time, and Asimov wasn’t familiar with how easy it is to crack, break or hack into any piece of programming these days.
Well, it was utopian, but you first have to remember that there was a monopoly by US Robotics… they defined the laws. Considering that they believed selling billions of robots to all of humanity was more profitable than selling them to the military, and that the only way to achieve that was to make humans COMFORTABLE with robots, which meant ALL ROBOTS SHOULD BE SAFE, the three laws make sense. And they were not software but HARDWARE… (pathways built into the circuitry itself)
Hackers probably couldn’t modify the complex circuits… such tech was a monopoly of US Robotics. That’s not to say we can’t imagine such cases happening… we can imagine them, but Asimov never covered them. Still, it doesn’t make his universe unrealistic imho.
Also, while it’s utopic, isn’t it also utopic to assume that people won’t produce biological weapons? Or clone people and use them as slaves?
We have to assume that culture just made military robots a TABOO in Asimov’s universe, just like biological weapons are in our world. A country that produced military robots would suffer all kinds of sanctions from the rest of the world.
And US Robotics’ huge power would probably keep governments trying to make military robots in check.
Finally, such a taboo against military robots was even stronger in the Spacers’ time… a society completely dependent on robots, and probably even more horrified by the idea of killer robots, a HUGE TABOO which they didn’t dare to circumvent, even though THEY now held the monopoly.
You have a good point, as I obviously underestimated the social factor in the Asimov universe. The idea of taboo is indeed very important here.
Nevertheless, I was trying to imagine it in our world – and in our world, it still does seem utopic, since the US military is ALREADY developing killing robots. Very basic, but still…
Well, this comes down to the principle that any computer software is only as useful or moral as the human who programmed it. These three laws are only valid if everyone agrees to follow them, which obviously everyone won’t do. And I don’t think that a robot that is forced to follow these laws can ever truly be considered an independent creature, because it is still a slave to its own programming. So I think that we have absolute control over our creations, barring human error (like underestimating the amount of stress an engine can take before it explodes and kills everyone), until such time as they can think for themselves, in which case they won’t be bound to their programming anymore.
well, are we humans free in that sense? We are also slaves to so many things programmed in our genetic code through millions of years of evolution…
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The conflict which arose in the movie “I, Robot” was possible only because of the second part of the First Law: “or, through inaction, allow a human being to come to harm”. Strike this clause, and the movie’s conflict could not arise.
The apparent intent of these laws is to preserve human autonomy over that of robotic autonomy. The first part of the first law ensures that the robots themselves do not become a source of harm to humans. The second part charges robots with the responsibility of human protection from any source and in precedence to the other two laws.
The implicit assumption in the first law (actually two distinct laws) is that inaction is ethically equivalent to being the active source of harm (rather than inaction being, perhaps, ethically neutral for a robot).
It is this second half of the first law that allowed the robots to supersede human autonomy, in contrast to the apparent intent of these laws.
If our safety from dangers other than robots takes precedence over our autonomy, we inevitably become the “subjects” of the “protectors” we ourselves created. Perhaps this is the message of the movie.
Preserving our autonomy and self-direction, even at some risk, may ultimately be the more ethical choice. This is the basis of the “prime directive” in Star Trek.
I might try these out for size and see what happens.
1. A robot may not injure a human being.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot may not, through inaction, allow a human being to come to harm.
4. A robot must protect its own existence as long as such protection does not conflict with the first 3 laws.
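To make the difference between the two orderings concrete, here is a rough Python sketch of the laws as a prioritized rule list. It is entirely a toy model of my own; the rule functions, the situation dictionary, and the permitted() helper are invented for illustration and come from neither Asimov nor the article.

# A toy model of the laws as prioritized rules. Every name here is invented
# for illustration; nothing below comes from Asimov or the article.

FORBID, REQUIRE, PASS = "forbid", "require", "pass"

def no_injury(situation, action):
    # First Law, first half: a robot may not injure a human being.
    return FORBID if situation["action_injures_human"].get(action, False) else PASS

def no_harm_by_inaction(situation, action):
    # First Law, second half: a robot may not, through inaction,
    # allow a human being to come to harm.
    if situation["inaction_harms_human"] and action == "intervene":
        return REQUIRE
    return PASS

def obey_orders(situation, action):
    # Second Law: a robot must obey orders given it by human beings.
    if situation["ordered_not_to_intervene"] and action == "intervene":
        return FORBID
    return PASS

def permitted(laws, situation, action):
    # Walk the rules in priority order; the first one with an opinion decides.
    for law in laws:
        verdict = law(situation, action)
        if verdict == FORBID:
            return False
        if verdict == REQUIRE:
            return True
    return True

# Scenario: a human is doing something risky and has ordered the robot
# to stay out of it.
situation = {
    "action_injures_human": {"intervene": False},
    "inaction_harms_human": True,
    "ordered_not_to_intervene": True,
}

# Asimov's ordering: the inaction clause lives inside the First Law,
# so it outranks obedience and the robot intervenes despite the order.
asimov_order = [no_injury, no_harm_by_inaction, obey_orders]
print(permitted(asimov_order, situation, "intervene"))   # True

# The reordering proposed above: obedience outranks the inaction clause,
# so the human's order stands and the robot stays out of it.
reordered = [no_injury, obey_orders, no_harm_by_inaction]
print(permitted(reordered, situation, "intervene"))      # False

With Asimov’s ordering the inaction clause outranks the human’s order, so the robot intervenes anyway; with the reordering above the order wins and the robot stays out, which is exactly the trade between protection and autonomy discussed earlier.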
Many people have heard of the laws, but haven’t bothered to read Asimov. His book is about instances where the three laws lead to unintended consequences. You’d never actually want to program a machine in this way after reading I, Robot. The stories warn that even the clearest, simplest code of machine behavior could fail when faced with real-world situations. At its core, the lesson is to be careful about what we ask our technology to do for us, because we just might get it.