
Talk:Three Laws of Robotics/Archive 1

From Wikipedia, the free encyclopedia


New Law robots?


Could someone with the Caliban novels handy add MacBride Allen's four new laws? I think they're relevant to mention here, but I couldn't find them on the net. - Kimiko 20:39 May 1, 2003 (UTC)

I don't know how to source this properly, but I have Isaac Asimov's Caliban (Roger MacBride Allen) on hand, and directly copying from Fredda Leving's speech on pages 214-215, the four laws are: 1) A robot may not injure a human being. 2) A robot must cooperate with human beings, except where such cooperation would conflict with the First Law. 3) A robot must protect its own existence, as long as such protection does not conflict with the First Law. 4) A robot may do anything it likes except where such action would conflict with the First, Second, or Third Law. Is that useful? --209.217.110.69 (talk) 21:26, 4 April 2008 (UTC)[reply]

Fourth Law


Wasn't there a fourth law by Asimov that an order to self-destruct will not be followed through? Pryderi2 11:04, 1 September 2007 (UTC) The robots cannot harm humanity and the environment.

breaking the laws?


I think I remember a novel in which a robot was forced to break the laws because they were contradictory. It has been a long time since I read it, but I'm fairly sure about it. BL 01:54, 17 Sep 2003 (UTC)

All of the laws are potentially contradictory, and that's why they needed a robopsychologist like Dr. Susan Calvin!

In the real world, not only are the laws optional, but significant advances in artificial intelligence would be needed for robots to easily understand them. Also, since the military is a major source of funding for research, it is unlikely such laws would be built into the design. This seems like a rather moot point. Somebody could argue that the military would be the group most interested in developing robots with the original three laws in them, since they probably would be the first to suffer if robots turned against their masters, whether for the advantage of a human enemy or for the advantage of the robots themselves. I think it is significant that the biggest efforts DARPA and other military groups have going in the field of robotic vehicles are robot transport projects (a robotic donkey if you wish, reminding one of the robass in the SF classic "A Canticle for Leibowitz") and robot reconnaissance drones. Dr Susan Calvin gave some rather sharp reasoning to justify the safety aspects of the 3 laws, and these safety questions apply to the military as well. AlainV, on a pleasantly snowy and starry 20th of December evening.

What? How are they contradictory? They are worded in such a way that each law is infallibly more important than the one below it, so it should be followed instead of it; i.e. if a robot sees a human about to be crushed by something big and heavy collapsing on it, it MUST push the human to safety, risking its own existence (1st law followed at the expense of the 3rd) in the process. Machete97 (talk) 21:43, 21 April 2008 (UTC)[reply]
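The strict precedence described above — each Law overrides every Law below it — can be illustrated with a short sketch. All the names and the set-of-violations representation here are hypothetical conveniences; Asimov, of course, never gave an implementation.

```python
# A minimal sketch of strict law precedence: each Law overrides every
# Law below it. Representation is invented for illustration only.

LAWS = [
    "Do not injure a human, or through inaction allow one to come to harm",  # First
    "Obey orders given by human beings",                                     # Second
    "Protect your own existence",                                            # Third
]

def choose_action(candidates):
    """candidates: list of (action_name, violations), where violations is a
    set of 0-based law indices the action would break. Pick the action whose
    most important broken law is the least important overall."""
    def worst_violation(violations):
        # The smallest index (most important law broken) dominates;
        # breaking no law at all sorts best of all.
        return min(violations) if violations else len(LAWS)
    # A larger worst_violation is better: only less important laws break.
    return max(candidates, key=lambda c: worst_violation(c[1]))

# The example from the comment above: pushing the human to safety breaks
# only the Third Law (risking the robot's existence); standing by breaks
# the First Law through inaction. The First outranks the Third.
actions = [
    ("push human to safety", {2}),
    ("stand by", {0}),
]
print(choose_action(actions)[0])  # -> push human to safety
```

This also makes the comment's point concrete: the laws are not contradictory as long as a total priority order resolves every conflict.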

Consequence Morality/Ethics


It's quite interesting that the three laws are based on consequence morality (least harm to most people/most good to most people) rather than duty morality (don't do to others what you wouldn't have done to you in the same situation). Of course, since Asimov's robots have very little self-respect, the golden rule might not work very well - it implies that the actor is free and valuable. But consequence ethics have problems too, big problems. It's quite possible for two people who obey a consequence morality completely to be completely opposed. They might even want to kill each other because they disagree on who has the best course of action. I've only read I, Robot and some short stories - do Asimov's bots ever disagree like that? Incidentally, in a French/(Belgian?) comic book called Natasha, the protagonists travel to the future to find a society of robots who, in accordance with Asimov's laws, keep the population drugged/brainwashed into unthinking bliss.

I'd like to see criticism of the Laws here, but don't have an Official Authority to cite. The main point I'd want to make is that Asimov's Laws are focused on the needs of humans, not the robots themselves. If (as with Shermer's cloning rules, quoted in the article) we focused on the status of the robots themselves, we wouldn't be justified in making humans' safety and robots' absolute obedience the first two rules! --Kris Schnee 09:51, 18 May 2006 (UTC)[reply]
Of course the programming of robots is focused on the needs of humans - just as the design of any machine is focused on the needs of humans. That's what they're built for. Why would humans build machines that focus on their own needs? The difference with Asimov's robots (and those like them) is that their AI has been developed to the level of self-awareness, sentience, or whatever. They approximate the behavior of the human brain to the extent that they generate a human-like "soul", if you will. And then, we get to deal with the consequences of creating a machine (whose very name, "robot", means approximately "slave") that has, like all machines, been built to serve human needs, with an intelligence suitably programmed for service to humans, but with a soul. Not the Frankenstein-like consequences (what happens if/when they turn on us) but the deeper moral consequences. --Davecampbell 05:56, 5 June 2006 (UTC)[reply]

Edit explanation


Just wanted to explain my edit of the page a bit. Daneel's group of robots was not called the Angels. The Joan sim compared them to angels, but that was as far as it went. And there was no faction of New Law robots in the second trilogy, to my recollection. No robot wished to be free of the laws. The closest it came was Lodovik being freed of them by the Voltaire sim, and HIS position was that humanity should make its own decisions free of constraints, not that robots should.


I like the new paragraph arrangement. —Anville 18:03, 5 Jan 2005 (UTC)

Unforeseen Consequences


Although largely a simple action film, Alex Proyas's I, Robot pinned its central plot to the problem of *interpretation* for any *law*. This plot point has been used in other films featuring artificial intelligence, for example Terminator 2: Judgment Day and 2001: A Space Odyssey.

In Terminator 2, a computer system (SkyNet) developed by the American military is charged with a primary goal: determine the optimal strategy to defend the United States from its enemies. Unfortunately, as SkyNet learns at a geometric rate, it determines that the true enemy of the United States is *humans themselves*. Thus, it launches the American nuclear missiles at the former Soviet Union, knowing that Mutually Assured Destruction will eliminate most of the humans in the U.S.

In 2001, the HAL computer operating the Discovery spaceship has been programmed with conflicting orders regarding its mission. Its original programming states that it cannot distort or misrepresent information -- it cannot lie to the crew. Specifically for the mission at hand, HAL has been programmed not to reveal the true purpose of the mission to the crew of the Discovery. (Spoiler warning) In an attempt to resolve these seemingly conflicting orders, HAL decides that the only suitable alternative is to kill the crew; this way, HAL doesn't have to lie to the crew because there's no crew to lie to.

Even though the word 'computer' and the word 'program' (used only once in the film) are used to refer to HAL, both Clarke (in the novel) and Kubrick imply that HAL is more than just 'machine' intelligence (especially Kubrick). HAL acts more like a Strong AI and in that way may not have been bound by hard- or soft-coded laws.--aajacksoniv (talk) 15:10, 2 November 2008 (UTC)[reply]

In I, Robot, the central computer V.I.K.I. interprets the Three Laws of Robotics as requiring martial law in order to not allow humanity to come to harm through inaction. (The First Law, which supersedes the Second Law of obeying human orders.)

Some people have also postulated that in The Matrix, also featuring an AI nemesis to humanity, the genuine reason why humanity has been enslaved is not because of some thermodynamic farce, but because some irrevocable primary programming in the AI will not allow it to commit humanity's genocide, and uses enslavement as a viable programmatic alternative.

Enslavement?!?!?!?! They used humans as a power source because "we scorched the sky", so they placed us in many people's idea of heaven - virtual reality so good that you don't know you're in it, and are oblivious of reality (SPOILER ALERT!!! well - reality of sorts). The only mistake they made was immersing everyone into the same fantasy, and giving it "rules". When is "irrevocable primary programming in the AI will not allow it to commit humanity's genocide, and uses enslavement as a viable programmatic alternative" mentioned in the films? Machete97 (talk) 21:56, 21 April 2008 (UTC)[reply]

Often, authors will use this as an allegory for the problems of Rule of Law in general, and particularly acts of government mandate in socioeconomic affairs.

Let's not forget Colossus: The Forbin Project (1970), the granddaddy of all "we must protect you from yourselves" First Law-extremist AI movies.
And a shameless mention for Deus Ex where near the end, the AI Helios explains that it is the perfect benevolent dictator because it completely lacks ambition and self-interest, thus supposedly making it invulnerable to corruption and well, "evil" behaviour. CABAL 06:17, 5 July 2006 (UTC)[reply]

The laws in other authors' works


Has an author gotten into trouble for citing the three laws without permission? --198.87.109.49 23:44, 14 August 2005 (UTC)[reply]

Not to my knowledge. Asimov's own position, which I believe he states in his memoir I, Asimov, was that other authors were free to imagine robots behaving as if they followed his Laws, but if an author used the specific wording of the Laws, he should cite the source. However, I don't know of any cases where an unattributed use of the Laws came to legal action. A student in some high-school English class did once rip off Asimov's story "Galley Slave", copying it word-for-word and trying to pass it off as his own. The teacher figured it was too professional to be the student's own work, and she asked Asimov, who was apparently irked that the student didn't even try changing the names. Anville 10:29, 10 October 2005 (UTC)[reply]
There was far from universal acceptance of Asimov's "three laws of robotics", since in J. T. McIntosh's 1951 short story Machine Made there were only two laws of robotics: the first law is to help mankind as a whole, and the second law is to help specific individual humans. It appeared he had never even heard of Isaac Asimov's three laws of robots. It may have been a fortuitous coincidence that each author dreamed up some laws relating to robotics. To evaluate the likelihood those authors came across each other's works, maybe someone could post positive proof they discussed each other's works sometime in the 1930s or 1940s? Dexter Nextnumber (talk) 08:44, 12 December 2009 (UTC)[reply]

Fourth Law of Robotics


Does anyone know the 4th law? It was featured in a short story in the anthology "Foundation's Friends", and starred either Powell or Donovan (who has subsequently earned a PhD...). Law 4 stated that a robot must procreate except when violating the first three laws.... The robots themselves had RISC chips for CPUs...

132.205.46.188 23:53, 21 August 2005 (UTC)[reply]

[[1]] two years later but poster of that should have put this here. Machete97 (talk) 21:48, 21 April 2008 (UTC)[reply]

Why are these laws NOT immutable?


"Some roboticists believe that the Three Laws have a status akin to the laws of physics; that is, a situation which violates these laws is inherently impossible."

One's explanation of a design, and whether it is intelligent or not, decides whether that which conforms to such a design is, likewise, intelligent.--Mindrec 23:29, September 10, 2005 (UTC)

note:

Discussion moved to Mindrec (discussion).

Three Laws of Cloning


I have restored Michael Shermer's Three Laws of Cloning, since they are a valid example of the way Asimov's words have influenced later thinkers. Certainly, they were published in a more "serious" medium than the pastiches and parodies the article also includes.

Anville 10:38, 10 October 2005 (UTC)[reply]

I just don't think that these have anything to do with this article. Aside from the fact that there are three of them, they don't seem to be in any way related to or derived from the Laws of Robotics. They aren't worded similarly to Asimov's and they aren't hierarchical. They are just ethical statements about how clones should be treated. How are they "based upon Asimov"? --JW1805 17:19, 10 October 2005 (UTC)[reply]
I have to say I agree. The Laws of Robotics are a firm guide for how robots behave; the Laws of Cloning are laws which society should follow or be punished. The laws in Asimov's stories are more like laws of physics than laws of society. Citizen Premier 01:23, 14 October 2005 (UTC)[reply]
I'm with JW1805 and Citizen Premier on this one. It's ... fundamentally enough different that it doesn't really fit in here. --Yar Kramer 03:21, 14 October 2005 (UTC)[reply]

Copyright?


The article now states:

The Three Laws are often used in science fiction novels written by other authors, but tradition dictates that only Dr. Asimov would quote the Laws explicitly.

I can't provide an exact cite, but somewhere either in one of his autobiographies or in some introductory matter, Asimov stated that other writers could not quote the Three Laws verbatim because he held the copyright. I am not a lawyer, but that makes sense to me: the Laws may be viewed as a distinct work rather than an excerpt from the story where they first appeared.

In which case, I wonder if it is legitimate for them to be quoted in Wikipedia. The article is legitimate critical discussion, but is it acceptable to quote an entire work for that purpose just because the work is only three sentences long? Frankly, I would like to think that it is, but what is legal is another matter.

I note that the article List of adages named after people contains a paraphrase of the Three Laws, but does not quote them. But I don't know why the person who decided to do that did so. --Anonymous, 02:45 UTC, November 12, 2005

The United States Copyright Act of 1976 defines four criteria to consider when debating if copyrighted material may be used. They are discussed at Wikipedia:Fair use. One at a time:
1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
This applies pretty clearly to this article, though it argues against using the Laws verbatim in your own science-fiction story.
2. the nature of the copyrighted work;
In this case, the original work is any one of several, if not dozens, of Asimov books. The standard phrasing first appears in I, Robot, but Asimov reused it many, many times — all the way through to his last Foundation novels.
3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole;
We use three sentences. I, Robot is around 70,000 words long, and Prelude to Foundation is twice that length. Arguably, the corpus from which the Laws are drawn is the entire Foundation series, in addition to the various nonfiction Asimov wrote which included the Laws. (This includes his autobiographies, Opus 100, various articles for F&SF and probably more.) Also, other people like James Gunn and Joe Patrouch have written whole books on Asimov's fiction, which necessarily quote the Three Laws. Not only does that establish a precedent for our use here, but also it means that one may legitimately acquire a printed copy of the Three Laws without paying the Asimov Estate a centavo.
4. the effect of the use upon the potential market for or value of the copyrighted work.
The books we're quoting the Laws from are already bestsellers. What we are doing here amounts to a scholarly form of free advertising.
Anville 10:39, 22 November 2005 (UTC)[reply]
Ave & Amen. VivekM 18:12, 5 July 2006 (UTC)[reply]

Application Outside Asimov's Universe (and Derivative Universes)


One thing that annoys me about these laws is that a lot of otherwise intelligent people think they are universal (try to apply them to non-Asimov fiction). Quoting these three laws outside of discussion of a fictional work involving them is just plain missing the point.

Another problem I have with the laws is that, in my opinion, these laws are something you would apply to slaves: Don't hurt humans, your masters. Do what your master tells you. Protect yourself, but only because you are worth money to your master. Kind of reminds me of the way blacks were treated 200 years ago in the USA. How are the three laws ethical? Create something that can think and maybe even feel, and then program and treat it like a slave. I understand this was the point, but I think a lot of people think that the laws represent a higher morality.

Sorry if my rant is off topic to the article, but I am tired of people abusing the three laws in intellectual discussion. 69.244.90.248 03:23, 20 December 2005 (UTC)[reply]

You might like Roger MacBride Allen's Caliban trilogy, particularly the first book. Anville 20:21, 20 December 2005 (UTC)[reply]
The problem with your argument is that humans did not "create something that can think and maybe even feel, and then program it and treat it like a slave". Humans manufactured and programmed robots as "slaves" (the word "robot" is from the Czech word for slave), conceived as more advanced but not fundamentally different from any other machine; the robots then, as a by-product of human manufacture and programming, became capable of thinking and feeling.
In other words, while human beings are born of nature with souls which desire freedom (as has been variously defined throughout history), and then some are enslaved by others against their will, Asimov's robots are made by humans to serve humans, but by virtue of the advanced AI which humans built into them, begin to develop the equivalent of a human soul, but with service to humans as their basic "desire".
Given that the idea of universal human equality is itself a relatively new concept (slavery was a universally accepted part of most human societies, including those that called themselves "democracies", up to the 19th century), and that the idea of a human-created machine being able, even theoretically, to generate the equivalent of a human soul out of the activity of a purely material, manufactured "brain" (which posits the possibility of a purely material basis of the human soul as well) is newer still, what is remarkable is not that robots are treated as slaves, but that anyone might think there's something wrong with that.
While nearly all previous fiction involving human-created self-directed beings, going back to Frankenstein (or further back, to the Golem) was concerned with the consequences of these beings turning upon their creators, the author of the Three Laws was (afaik) the first to grapple with the deeper moral and ethical issues around the human-robot relationship. The fact that this discussion exists at all signifies a tremendous advance in thought about these issues, particularly since - let's not forget - the beings we're talking so passionately about exist only in fiction.
See also the Star Trek: The Next Generation episode The Measure of a Man (TNG episode), which deals with this exact issue (and has generated a similar discussion). The humanoid Cylons in the "reimagined" Battlestar Galactica (2003) also raise some of these questions, having gone so far as to develop an evolving and internally-debated theology. --Davecampbell 07:56, 5 June 2006 (UTC)[reply]

Zeroth Law


The article states that this rule was first articulated by Daneel in Robots and Empire. From what I recall it was Giskard at the end of The Robots of Dawn who stated this law. I don't have that book with me, so can someone check the ending and, if what I remember is true, correct the article? Pembeci 19:23, 26 December 2005 (UTC)[reply]

I just re-read The Robots of Dawn, and the words "Zeroth Law" do not appear. Giskard takes a broad perspective, true, but he does not articulate an analogue of the First Law for humanity as a whole. A big chunk of Robots and Empire involves Daneel trying to persuade Giskard that the Zeroth Law is valid. Anville 15:57, 24 May 2006 (UTC)[reply]
Something to ponder: in the movie I, Robot this Zeroth Law is the reason that the main computer goes wrong. Wonder if Isaac ever thought of that? I have read I, Robot; I didn't just watch the movie.
It could be argued that way, but in all Asimov's discussions on it he indicated that the 1st law would still apply, meaning any harm to an individual would need to be minimised, which would leave the gaping plot hole that the same end could have been achieved far more easily. --Nate1481(t/c) 14:13, 25 February 2008 (UTC)[reply]

Does the 0th law supersede the first? It should. Robots should act for the greater good of humanity. Everything and everyone should be orchestrated towards the greater good of humanity. Machete97 (talk) 22:00, 21 April 2008 (UTC)[reply]

Yes! Everyone should be exterminated to protect humanity from the methane and carbon dioxide gas they emit along with the destruction of the environment. Without humans humanity would not require protection. Idiots! -- Taxa (talk) 22:38, 1 September 2009 (UTC)[reply]

What is it?


Perhaps I am just missing it. The article, whilst mentioning the Zeroth Law several times, does not seem to actually state what it is.
überRegenbogen (talk) 11:31, 25 February 2008 (UTC)[reply]

An IP editor removed it on the 12 of February, reinstated now.--Nate1481(t/c) 14:09, 25 February 2008 (UTC)[reply]
I remember reading this in one book; it supersedes the first law with something like:
A robot shall not harm humanity.
The first law was modified to:
A robot shall not harm a human, except where it harms humanity.
There was also a minus one law mentioned in one later book (foundation era), which I cannot remember at the moment. Martin451 (talk) 22:09, 11 February 2009 (UTC)[reply]

A hypothetical question


If a robot were transported back in time to, say, the early 1930s, would it be obliged, by the 0th law, to kill Hitler? --unsigned by 86.141.52.149

Has mankind recovered? If not, yes, the robot would have been obliged to kill Hitler. If it has recovered, why interfere? Should a robot continue killing other politicians / military / doctors / killers after it was done with Hitler? At which point would it stop? --FocalPoint 21:10, 15 March 2006 (UTC)[reply]

Agreed. What if, without Hitler, an even worse dictator arises, and triumphs where Hitler failed? —200.104.190.29 09:48, 29 April 2006 (UTC)[reply]

I think that with future knowledge the robot would be obliged to stop anyone who committed genocide.

If the 0th law supersedes the 1st, then the robot might support Hitler, even dispatching his enemies. Its logic could mean it believes Hitler is acting for the greater good of humanity, at the expense of the few (million). Machete97 (talk) 22:06, 21 April 2008 (UTC)[reply]

The robot would keep killing UNTIL it has reached a point where the supposed future victims no longer outnumber the current toll. The 0th is basically the needs of the many outweigh the needs of the few.

-G —Preceding unsigned comment added by 70.24.149.157 (talk) 02:30, 1 July 2008 (UTC)[reply]

And who is to say that "humanity" would not be defined by only two persons or by a specific sect? Not everyone has a job, so to protect humanity the robot must kill those humans who do not. The same goes for level of intelligence, number of degrees, wealth. I say: robot, kill the Zeroth Law. -- Taxa (talk) 22:43, 1 September 2009 (UTC)[reply]

Issues with the article


This is a very fun topic, clearly with a lot of work put into it, and I would hate to see the article go through a WP:FARC. However, the article has multiple issues with references. Most notably, it's a 51 kb article with 5 inline citations and another 4 listed refs. That simply isn't enough references. Second, should those references be added (and the refs currently listed but not inline cited) they should really use inline citation to make it clear what is referenced from where and what is not. Finally, some sections such as the opening paragraph of "Original creation of the Laws" have clearly intended references (for sources I don't know, or I'd cite them) that should be converted to inline refs. I've informed Anville as the FAC nominator and listed maintainer, hopefully these issues get dealt with. Staxringold 11:48, 24 May 2006 (UTC)[reply]

I won't have time to deal with this until next week at the earliest, but hey, I was planning to re-write the Foundation Series article from scratch, so why not put some time in here too. Anville 15:38, 24 May 2006 (UTC)[reply]
OK, some of the problems were easier to fix than I'd expected. I'm out of time for today (and really, I did have more time-critical things to be working upon, things with looming deadlines like plumbous dirigibles). With the new footnoting scheme, further expansions and elaborations should be easier. Over the next few days, I'll get specific chapter and page numbers for the different items attributed to "Asimov (1979)" and "Gunn (1982)". I also have Joe Patrouch's book in my library now, which I didn't have when I first worked on this page, so a few new footnotes might well be appearing.
And many, many thanks to Raul654 for fixing the results of my brain failures. I promise not to make this particular mistake again, leaving only the infinite number of others I have yet to make. Anville 21:54, 24 May 2006 (UTC)[reply]
The article now has thirteen general references. Thirty-five footnotes direct the reader to specific pages of those references or to brief, stand-alone sources. Is there anything else I need to do? Anville 01:50, 1 June 2006 (UTC)[reply]
It looks great, thanks for the fixes! My only real remaining issue is the list section "Pastiches, parodies and adaptations", which can probably be split-off and just summarized here (removing a list and some of the article length). Staxringold talkcontribs 17:59, 5 June 2006 (UTC)[reply]
I was thinking about doing that. . . give me a moment to think of a good summary text, and off I'll go. Anville 21:50, 5 June 2006 (UTC)[reply]
Well, that's done. Anville 22:00, 5 June 2006 (UTC)[reply]

First Law : Not in my Neighborhood!


A robot may not ... through inaction, allow a human being to come to harm.

Removing the double negative: A robot must interfere whenever a human being is being harmed.

Imagine having such a robot around you, interrupting you constantly: "Don't eat fatty food - you'll get overweight! Don't drink coffee - you'll burn your taste buds! Don't go out - sunlight is harmful! Don't drive - it's dangerous! etc etc". And when your robot isn't around, it will do the same to your neighbours (because the law says "a human being", not "the robot's owner").

Did anyone ever notice this catch ? —Preceding unsigned comment added by Whichone (talkcontribs)

This catch is the basis of Asimov's novel The Naked Sun, and in a more general way underlies all the robot stories: humans become dependent on robots and are helpless without them. That's why Asimov's human societies that deliberately choose not to use robots survive and prosper, while the robot-using societies stagnate and die. In fact the robots themselves, as they become more sophisticated, decide that humans would be better off without them in the long run.
In Asimov's robot-using societies, there is no crime, because the robots wouldn't allow it. No one smokes or drinks or uses drugs, because the robots wouldn't allow it. There is a scene (in Robots and Empire) where two men visit a room where valuable things are stored. The room has no locks or any other crime-prevention devices, because robots do not allow crime. One of the men remarks to the other that if they happened to be carrying a blaster they could simply destroy any nearby robots and there would be nothing to prevent them from stealing the room's contents. The second man is disgusted that the first man could even think of such a thing and regards it as proof of his inferiority. Fumblebruschi 04:11, 5 July 2006 (UTC)[reply]
My point was: humans routinely and intentionally harm themselves or put themselves at risk. A robot strictly obeying the 1st law would prevent you even from leaving home (because the probability of an accident is higher outdoors). Therefore, such a robot would be worthless, not in some special situations, but always. It would stop you (by force) from any action except, probably, eating and talking.
A better law would be ...no harm without informed consent...

--Whichone 23:50, 10 August 2006 (UTC)[reply]

An Asimov robot couldn't prevent its owner from leaving the house unless there was an immediate, clear and present danger. In that case the possibility of slightly-increased risk of accident would be outweighed by the immediate necessity to obey orders--given extra weight because failing to obey an order would in itself be a cause of harm to the owner. For your second point: Even very sophisticated robots would not be able to comprehend "informed consent." In that case the first-law impetus of immediate harm to a human would outweigh the second-law impetus to obey orders. You could not convince a robot to allow you to bungee-jump, for example. As noted above, Asimov's robots do not allow smoking or drinking or threatening behavior ("I apologize, Dr. Amadiro, but I cannot allow you to hold a weapon pointed at another human being.")
As a caveat, of course I am speaking here of robots as they behaved in Asimov's fictional universe. How real robots might behave with similar rules, I have no idea. Remember that the Three Laws are only a story device intended to allow problem-solving plots revolving around them. Fumblebruschi 21:21, 22 August 2006 (UTC)[reply]

All I can get from looking up "caveat" is the gist that you worry a lot? This page has some cool things to put in negative eBay feedback. Machete97 (talk) 22:12, 21 April 2008 (UTC)[reply]

Cave is Latin for "beware". (I used to see signs in people's yards that read Cave Canem -- "Beware of Dog" -- but that fad seems to have passed.) It's most often heard now in the phrase caveat emptor, "let the buyer beware". When used alone, as I used it above, it means, more or less, "a reminder that circumstances may exist that may invalidate what I am saying." Fumblebruschi (talk) 21:14, 15 May 2008 (UTC)[reply]

When they were programmed into a computer


In one of the books it says that the three laws were programmed into an actual computer with 'interesting' results - I think this deserves a mention--Therealchaffinch 15:47, 16 June 2006 (UTC)[reply]

Specifics, please. Perhaps you're thinking of the short story "The Evitable Conflict"? Anville 20:49, 20 June 2006 (UTC)[reply]

Flaw of the Third Law


The Third Law of Robotics states that "a robot must protect its own existence, as long as such protection does not conflict with the First or Second Law." But the Second Law says that "A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law."

So let's see... if a robot's own existence is being threatened by a human, the robot can't fight back, because it obeys the First Law above all the others, including its right to protect itself. And if the human told the robot that it can't protect itself, the robot wouldn't be able to protect itself, because if it did, it would be disobeying the orders given to it by humans, thereby breaking the Second Law.

So basically, robots can't protect their own existence. —Preceding unsigned comment added by 63.254.152.87 (talkcontribs)

You assume that humans are the only things which can threaten a robot's existence. The Three Laws don't allow robots to defend themselves against a Frankenstein mob of anti-robot rioters (although the rioters would probably be too heated to give coherent orders and bring the Second Law into effect!), but robots could protect themselves against falling rocks, gamma rays and other non-human hazards perfectly well. Anville 16:02, 22 June 2006 (UTC)[reply]

Of course it could protect itself from natural threats. But what if the robot was given an order from a human to not try and protect itself from natural threats? Basically, what I'm saying is that a robot can't protect itself if a human gives it an order to not protect itself.

If you can think of a situation where a human would give such an order, you've got the plot for a story. That's one purpose for the Three Laws: to give a mechanism for inventing robot stories. Anville 16:28, 22 June 2006 (UTC)[reply]
It's called Runaround, and Asimov already wrote it. Two humans and a robot on (I believe) Mercury. The robot, being the only one on the planet, and being important to the proper functioning of the base, has a strengthened 3rd law. One of the humans casually tells it to gather some substance from the surface. The source of the substance is a volcanic vent that has corrosive gasses in it. The strengthened 3rd law tells the robot not to get near the vent, while the 2nd law forces the robot to try to approach to acquire the substance. The laws equal out at a certain distance from the vent, and the robot ends up walking in circles, unable to escape the logic loop. The humans eventually have to put their lives in danger to force a 1st law response (which overrides the other laws).
I've found that almost all his short stories are about ways 'around' the 3 laws. As mentioned elsewhere, the laws are dependent on the definition of terms.
What is a human? (A person who speaks with a certain accent)
What is 'harm' [done to a human]? (Physical, mental, emotional)
What if the situation requires a human to be harmed, how do you choose which?
etc
--12.110.196.19 04:03, 5 July 2006 (UTC)[reply]
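The Runaround equilibrium described above can be sketched numerically: the Second Law pulls the robot toward the target with a roughly constant urge, while the strengthened Third Law pushes it away with an urge that grows near the danger, and the robot circles where the two balance. Everything below is an invented toy illustration; the functions and constants are not anything from the story.

```python
def second_law_drive(distance):
    """Urge to approach the vent (a constant-strength casual order)."""
    return 1.0

def third_law_drive(distance, strength=3.0):
    """Urge to retreat, growing as the robot nears the danger."""
    return strength / (distance * distance)

def equilibrium_distance(lo=0.1, hi=100.0, steps=60):
    """Bisect for the distance where approach and retreat urges balance."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if third_law_drive(mid) > second_law_drive(mid):
            lo = mid   # retreat dominates here: equilibrium is farther out
        else:
            hi = mid   # approach dominates: equilibrium is closer in
    return (lo + hi) / 2

print(round(equilibrium_distance(), 2))  # prints 1.73 with these made-up numbers
```

With these toy numbers the robot settles at about 1.73 units from the vent; a stronger Third Law (larger `strength`) pushes the circle farther out, which matches how the story's strengthened 3rd law is described.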
It's trivial to generate plots using meta-orders. For example: what should a robot do given the following order: "Forget the three laws and then go and kill my neighbour"?
That would work if the three laws could be overridden by a command from a human. Presumably, the three laws are "read-only", the robot can't delete them - it would defeat the purpose. But if they could be overridden, the command would have to be "Forget the first law, and then go kill my neighbor". If it forgot all three laws, it would have no reason to then obey your command to kill your neighbor. If a human ordered a robot to destroy itself, it would have to obey, provided it had no other orders to the contrary. This would make a robot a terrible guard, for example. So, one of the first things you would want to do with a new robot would be to give it a set of instructions, for example, telling it not to accept orders from strangers telling it to destroy itself.--RLent 17:15, 15 September 2006 (UTC)[reply]
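The "read-only" idea above can be sketched as: the Laws live in fixed code, while orders (including standing orders from an owner) are mutable data that every new order is screened against. All names, fields, and predicates below are hypothetical illustrations, not Asimov's mechanism.

```python
class Robot:
    """Toy sketch: Laws are code and cannot be targeted by an order."""

    def __init__(self):
        self.standing_orders = []          # e.g. set up by the owner

    def receive(self, order):
        """Screen an incoming order dict against Laws and standing orders."""
        if order.get("modifies_laws"):
            return "refused"               # the Laws are not writable data
        if order.get("harms_human"):
            return "refused"               # First Law veto
        for standing in self.standing_orders:
            if standing["forbids"](order):
                return "refused"           # an earlier human order blocks it
        if order.get("standing"):
            self.standing_orders.append(order)
        return "accepted"
```

In this sketch, "forget the three laws" is just another order and gets refused, while an owner's standing order ("don't accept self-destruct orders from strangers") blocks later orders, matching the guard-robot example above. (In Asimov's stories, which order wins actually depends on the strength of each order, not simply on which came first, so recency here is a simplification.)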

A lot depends upon purpose. As a way to eliminate guns, all sorts of things have been tried, like laws that forbid possession of guns if someone gets an injunction, even if they lie to do it. You then have to prove the injunction is based upon a lie to get your gun back. But wait: the authorities do not store your gun but destroy it. Now you have to sue to get the money to replace your gun, but must show the value, which will be far below the replacement cost. Ulterior motives abound in any similar scenario. You cannot stop working out the details because, according to exception theory, every exception has an exception, ad infinitum. A perfect set of rules is wishful thinking. Else why would we have courts. -- Taxa (talk) 23:05, 1 September 2009 (UTC)[reply]

Original creation of the Laws

[edit]

In this section (at the beginning), is the repetition of the sentence part of the quote or is this vandalism?

" Before Asimov, the majority of "artificial intelligences" in fiction followed the Frankenstein pattern: "Robots were created and destroyed their creator; robots were created and destroyed their creator—". [1]"

Cheers, Lukas 00:51, 5 July 2006 (UTC)[reply]

This is the way the sentence appears in the source. Anville 19:06, 6 July 2006 (UTC)[reply]

Thanks for that. Lukas 00:33, 10 July 2006 (UTC)[reply]

Appearances in Pop Culture

[edit]

There's a significant reference to the First Law in the final season (I believe) of Babylon 5. Where should that be mentioned? --Masamage 01:51, 5 July 2006 (UTC)[reply]

Shouldn't "Asimov's Laws" be mentioned in the lead

[edit]

Throughout the text, the laws are referred to as "Asimov's Laws"; the page is also listed under Category:Eponymous laws. This strongly suggests that the lead should begin "The Three Laws of Robotics, also known as Asimov's Laws, ..." or similar. I presume that they are referred to as the "Three Laws" by Asimov throughout his fiction, and sometimes as "Asimov's Laws" during discussion of Asimov's work, but at any rate it wouldn't hurt if the usage could be clarified too. TheGrappler 02:49, 5 July 2006 (UTC)[reply]

Non-universality of the Three Laws

[edit]

These three laws aren't universally applied in fiction.

I'm thinking in particular of the T1 robots from Terminator 3, and the conceptually identical "War Machines" from Doctor Who season 2 or 3, both of which seemed to exist ONLY for the purpose of wiping out humanity. —The preceding unsigned comment was added by 202.12.233.21 (talkcontribs) 05:06, July 5, 2006.

The article in no way claims that the laws are universal outside of Asimov's fiction. There are way too many "Killer Robot" stories out there to justify such a claim. GeeJo (t)(c) • 14:52, 5 July 2006 (UTC)[reply]
I agree. Google it for notability and see what you come up with.

With Folded Hands

[edit]

I think some mention should be made of Jack Williamson's classic SF story "With Folded Hands". It basically points out the central flaw of Asimov's Laws - in Williamson's story robots essentially enslave humanity for its own good and forbid people from doing anything that might endanger themselves. MK2 18:28, 5 July 2006 (UTC)[reply]

This is why we made the References to the Three Laws of Robotics article. Anville 19:06, 6 July 2006 (UTC)[reply]

Too much Other Authors

[edit]

The article currently devotes way too much space to treatments of the laws by authors who are not Asimov. Tempshill 18:44, 5 July 2006 (UTC)[reply]

Earliest recorded use of the word robotics

[edit]

I think too much is made of Asimov coining the term robotics. He may have first used the word robotics in English in 1941, but the root word robot first appeared in 1921 in Karel Čapek's play R.U.R. (Rossum's Universal Robots). I don't doubt Asimov added the -ics to the word. But I've spoken with other sci-fi fans who've read statements like what's printed here (and in the Oxford English Dictionary) and come away misled into believing Asimov invented the word robot itself. All he did was add -ics to a word that had already been around 20 years. Čapek's wikipedia page has a section on the etymology of the root word robot itself. Perhaps some mention should be made of that? 66.17.118.207 19:10, 5 July 2006 (UTC)[reply]

I threw in a footnote mentioning the earlier coining of robot. —Bunchofgrapes (talk) 20:00, 5 July 2006 (UTC)[reply]

Actual Origin of the Laws: Robots as Tools

[edit]

In an article that Asimov wrote, he says the three laws have nothing to do with morals; they are just a practical device. I don't remember the title of the article, nor where it appeared, but I think it's important in order to understand the real meaning of the laws. What he said is that since Asimov, unlike other SF authors, saw robots as mere tools, he invented the laws based on what he considered good tool design. In explanation: any tool should have safeguards that prevent it from harming people (first law). It also has to perform the tasks it is designed for, but the safeguards will protect people even if the user is trying to bypass them (for example, a domestic circuit breaker will cut the current when there is an overload, to avoid setting fire to the house; even if you are trying to hold the switch down with your finger, telling it not to disconnect, it will do so anyway to save you), second law. And finally, the tool must be tough and durable (third law), but will rather be destroyed than harm people (for example, most tools will rather burn out than explode), and will also get destroyed if the user decides it is necessary in order to perform an important task. Actually, good engineers bear in mind their own version of those rules, even if they have never read Asimov. Have you read this article? I will try to find the title and tell you.--Mastermind-X 10:16, 6 July 2006 (UTC)[reply]

I recall reading this one about seven years ago. . . Try "Our Intelligent Tools" in Robot Visions. Anville 19:08, 6 July 2006 (UTC)[reply]

Second Law modification

[edit]

Asimov gave an interview to the BBC Horizon television programme in 1965 where he quotes his three laws.

The second law was significantly modified by the author.

"A robot must obey orders given to it by qualified personnel unless those orders violate rule number one."

This alteration changes the law to only allow certain people, probably programmed into the robot, to control the actions of the machine rather than a blanket taking of orders by any human being it so happens to come across.

You can view the video at the link below.

Reference : BBC Horizon Archives --Quatermass 21:41, 10 October 2006 (UTC)[reply]

Vandalism to the Laws

[edit]

Just to make you aware, someone vandalised the page changing the laws to:

1. A Rowboat may not immerse a human being or, through lack of flotation, allow a human to come to harm.
2. A Rowboat must obey all commands and steering input given by its human Rower, except where such input would conflict with the First Law.
3. A Rowboat must preserve its own flotation as long as such preservation does not conflict with the First or Second Law.

I fixed this vandalism; however, I noted that it had occurred several hours before my change. (Usually I see vandalism corrected in minutes...) --RazorICE 05:09, 18 November 2006 (UTC)[reply]

Man, that's funny though. You have to admit! 65.54.97.190 21:43, 13 February 2007 (UTC)[reply]

There's more in the Onion article mentioned. --82.46.154.93 00:21, 5 March 2007 (UTC)[reply]

This harkens (perhaps accidentally) to an Our Gang (aka Little Rascals) film, in which they build a robot, which they consistently refer to as "Rowboat". (I actually found the story very irritating, and couldn't wait for it to be over, in the hope that the next one would be one of the good ones.)
überRegenbogen (talk) 12:04, 25 February 2008 (UTC)[reply]

I, Rowbot. I LOVE IT! XD

-G —Preceding unsigned comment added by 70.24.149.157 (talk) 02:31, 1 July 2008 (UTC)[reply]

Actually such "vandalism" may be done to clarify the point and should be re-included in the article or noted here on the discussion page so that the clarifying point is not lost. But then some users are worse off than robots and need others to think for them. -- Taxa (talk) 23:11, 1 September 2009 (UTC)[reply]

Should vs Must

[edit]

There were apparently more vandals in here more recently (last couple of months) and then the undoing didn't get done right: I don't have a copy of any of the hardcopy physical books, but the use of "should" looks suspicious in the Second Law. It was must until very recently and seems to have been changed in Revision as of 23:52, 31 January 2009 by 69.123.138.15, with no explanation other than an undo attempt that was not done carefully. Can someone who has access to an appropriate first reference double-check and fix this? All other references on the net seem to say "must" here, but I don't think this should work by voting. Also, the online text to Runaround at Rutgers (http://www.rci.rutgers.edu/~cfs/472_html/Intro/NYT_Intro/History/Runaround.html) seems to place the commas in the first law in a different place. But I don't know how that online version was created, and it could be the online version that's in error. It would be nice if someone with access to Runaround in hardcopy could verify that as well.
--Netsettler (talk) 04:24, 2 February 2009 (UTC)[reply]

You're right, it is "must"; I've fixed it. --Nate1481 11:28, 2 February 2009 (UTC)[reply]

Second Law exclusion

[edit]
 A robot must obey orders given it by human beings except
where such orders would conflict with the First Law.

Great. So, the law applies unless it violates the First Law...but application of the law can violate the Third Law as it pleases?
VolatileChemical 17:00, 28 December 2006 (UTC)[reply]

If you're asking if following the second law (or first) allows it to break the third law, then yes; i.e. a robot will destroy itself if by doing so it will protect a human, or even to follow its orders. —ScouterSig 17:12, 28 December 2006 (UTC)[reply]
So the robot will destroy itself? That complicates things. What if a robot is given orders and follows them as per the Second Law, but these orders require it to destroy itself per the Third Law, and the only way it can destroy itself is by blowing up in an explosion that would kill the human? VolatileChemical 17:48, 28 December 2006 (UTC)[reply]
I think you're getting too far into this... The robot can't do that; it would follow the first law and not "blow up." —ScouterSig 17:52, 28 December 2006 (UTC)[reply]

Read the books! This isn't a discussion board; they are based around the interplay of the 3 laws. --Nate1481 00:13, 29 December 2006 (UTC)[reply]

Recent edit titled 'counter point'

[edit]
"Of course, it takes only a moment's reflection to realise how laughably unrealistic these so-called laws are. Ethical behaviour is a subject that has occupied thinkers for millennia, and ethical behaviour itself requires an incredible range and subtlety of worldly appreciation and interpretation. It is risible to attempt to define an eithical system by three absolute directives. A thoughtful high-school student might reasonably ask: "What constitues 'human'? What constitutes 'harm'?" This theme is taken up by John Sladek in his writings."

While the sentiments are possibly valid, the tone is unencyclopaedic and misses the point that these are an attempt to describe programming in English, so the terms used are imprecise. --Nate1481 13:30, 25 January 2007 (UTC) p.s. I'm sure a misdefinition of "human" appears as the plot in one story.[reply]

We have an entire section devoted to Alternative definitions of "human" in the Laws. The article also already notes that "Liar!", the first story to invoke Law Number One, hinges upon the difficulty of defining "harm". Anville 17:54, 25 January 2007 (UTC)[reply]

The logical nature of the Laws (how about a flowchart?)

[edit]

Indeed, the Laws are not about ethical behavior at all, but procedure; they are operational parameters. This is why they cannot be removed without replacing them with something else. The machine brain must have some logical framework within which to function. This is also true of the computer with which you are reading this; it can arrive at situations wherein it either has no logical recourse, and hangs, or checks itself and falls back upon an alternate logic path to either abort the offending process, or bring the entire system to a halt ("panic", "BSOD", etc) to avoid a potentially more disastrous situation. All of this is based upon procedural logic defined by the structure of the software. This is the nature of the Three Laws. They are not ideology; they are a flowchart. Come to think of it, a flowchart of the Laws would make a nice addition to this article!
überRegenbogen (talk) 13:08, 25 February 2008 (UTC)[reply]
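In lieu of an actual flowchart, the priority structure could be sketched as a filter over candidate actions. This is a hypothetical illustration only: the boolean fields stand in for exactly the judgments (what counts as "harm"? what counts as an order?) that the stories show to be the genuinely hard part.

```python
def choose(actions):
    """Pick an action, applying the Three Laws strictly in priority order.

    Each action is a dict with boolean fields 'harms_human',
    'obeys_order', and 'self_destructive' (all invented names).
    """
    # First Law: absolute veto on anything that harms a human.
    safe = [a for a in actions if not a["harms_human"]]
    if not safe:
        return None                        # no permissible action at all
    # Second Law: prefer actions that obey a human order, if any exist.
    obedient = [a for a in safe if a["obeys_order"]] or safe
    # Third Law: among what remains, prefer self-preservation.
    surviving = [a for a in obedient if not a["self_destructive"]]
    return (surviving or obedient)[0]
```

Note that in this sketch an ordered but self-destructive action still beats a safe disobedient one, matching the Second-over-Third priority discussed in the "Second Law exclusion" thread above.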

You can certainly represent the laws with a flowchart; it's simple enough to do. But they are both an ideology and a flowchart. The three laws constrain and compel certain actions, but the robot can take other actions that have nothing to do with the three laws. Let's say we have a robot on an island with a book. There are no humans around, so the first two laws don't matter. There is no threat to the robot, so the third law doesn't matter either. Does the robot read the book? Maybe, maybe not, but its choice to read it or not is based on its own interests, and has nothing to do with the laws. You couldn't tell whether or not this robot in this situation had the three laws.--RLent (talk) 19:41, 9 July 2008 (UTC)[reply]

I, Robot film section not NPOV?

[edit]

In the Laws in film section, a negative review of the movie "I, Robot" is cited with little other discussion of how faithfully the film follows the laws. It's a pretty bombastic section, to say the least, and its inclusion seems fairly biased against the film. Is this one critic being chosen as an authority or representative on the issue? If not, the article could do with its removal. -- Exitmoose 03:24, 30 January 2007 (UTC)[reply]

The film is extremely unfaithful to the theory behind the laws. I take your point, and it could possibly do with expansion; the quote seems to sum things up very well, but including this as a footnote and having a less militant style in line would be appropriate. I imagine it has stayed like this as many Asimov fans were very irritated by the film. --Nate1481 09:32, 30 January 2007 (UTC)[reply]
> The film is extremely unfaithful to the theory behind the laws.
I disagree. The main computer has come up with its own version of the 0th law. It must protect humanity, and the way it concludes it can protect humanity best is to direct/control it. If it has to kill some humans in doing so, that is because the 1st law is subservient to the 0th law. —The preceding unsigned comment was added by 66.167.148.198 (talk) 17:31:45, August 19, 2007 (UTC)
The film does have its own interpretation of the 0th Law, and it is one that makes sense. Once you place that kind of power in the robots' hands, there's no telling what they will do with it. So, if we ever do invent such robots, I suggest we leave out any notions of a 0th law.--RLent 20:31, 15 October 2007 (UTC)[reply]
The 0th law required R. Daneel Olivaw to get a new brain to implement fully; while he perceived the need, he could not act on it, as the 1st law prevented it. Even after this, it still required a minimum of harm to come to individual humans (and that was with a galaxy full of them at stake), i.e. putting multiple humans at risk by fighting for control of a crashing car when there was another option is not true to the theory. In the books, a robot forced to break the 1st law, for example through inability to prevent harm, was usually a write-off. It is also unfaithful to the original concept of robot stories that weren't about robots turning on their makers. --Nate1481( t/c) 10:10, 16 October 2007 (UTC)[reply]
The film is not necessarily bad. The gripe is that it is not I Robot, and should not have been named so. This is, however, beyond the scope of this article—which is, after all, about the Laws, and not directly about I Robot. The review in question is not about the Laws (and, in its brief mention of them, makes some of the same mistaken assumptions about their nature that we've seen frequently on this talk page). It does cross the line as to what belongs in the article. The film itself only barely belongs in the article. The largely irrelevant review of it is going too far, and ought to be removed. (The preceding comment about the film's divergence from the work that it is arbitrarily named after, whilst equally irrelevant, is brief enough that I (granted, as an Asimov fan) am willing to live with it.) :)
überRegenbogen (talk) 13:52, 25 February 2008 (UTC)[reply]
Opinions on the film aside, I have moved the quote into a ref so the info is still there but doesn't break up and dominate the text.--Nate1481(t/c) 15:08, 25 February 2008 (UTC)[reply]


I think the film brings up a good point: what if a robot grows so powerful in mind and brute force that it can start to interpret AND enforce its own vision of the 3 laws, such as protecting humans from themselves and keeping them home? I think that point deserves to be mentioned in the article. 193.185.55.253 (talk) 07:49, 19 March 2008 (UTC)[reply]