Talk:Ethics of artificial intelligence


This article was the subject of a Wiki Education Foundation-supported course assignment, between 4 February 2019 and 15 March 2019. Further details are available on the course page. Student editor(s): Branden Hendricks (article contribs).

This article was the subject of a Wiki Education Foundation-supported course assignment, between 27 August 2019 and 14 December 2019. Further details are available on the course page. Student editor(s): Gordon1kuo (article contribs).

WikiProject Women in Red: BLM/Anti-discrimination (2020)
This article was created or improved during the BLM/Anti-discrimination edit-a-thon hosted by the Women in Red project from July to December 2020. The editor(s) involved may be new; please assume good faith regarding their contributions before making changes.

This article was the subject of a Wiki Education Foundation-supported course assignment, between 6 September 2020 and 6 December 2020. Further details are available on the course page. Student editor(s): Nectaros (article contribs).

Comments moved from "Talk:Philosophy of artificial intelligence"

"A major influence in the AI ethics dialogue was Isaac Asimov who fictitiously created Three Laws of Robotics to govern artificial intelligent systems." I've removed "fictitiously." While the Three Laws of Robotics were created for a fictitious universe, Asimov really did create them. It might be appropriate to somehow add that he developed them for his science fiction books. goaway110 21:57, 22 June 2006 (UTC)[reply]

I don't see why "Ethical issues of AI" should be an independent encyclopedia entry. The lexicographic lemma here is surely "Artificial Intelligence". --Fasten 14:41, 7 October 2005 (UTC)[reply]

I disagree, Fasten; I think the main AI article should briefly mention ethical issues and we should keep this as a separate article. The subject can be extended much further to include uses of AI (in wars, in saving people from dangerous conditions, in working under unhealthy circumstances such as mining), AI as a possible future Technological singularity (i.e., what will happen if AI eventually becomes more intelligent and capable than humans). It could also include deeper discussions about the possibility of sensations (qualia) and consciousness in AI, some comments on what will happen if AI becomes widespread in a future society with behavior, appearance and activities very similar to ours, and questions such as "should AI have rights and obligations?" and "does it make sense to create laws for AI beings to obey?" Rend 01:29, 17 October 2005 (UTC)[reply]

The first question I would have regarding the ethics of AI is whether it is possible for a machine to be capable of consciousness. This is obviously a very difficult question, given that no human being can really know anyone's internal existence other than his own. Hell, maybe computers really do have consciousness. But if they do, they would be the only ones who would know this for certain, since it is difficult to ask a computer if it exists without programming it to say it exists beforehand. I believe animals have consciousness and are capable of feeling emotions even though they cannot tell us this. Also, would the fact that a computer wasn't capable of consciousness, much less emotions, mean that it should not be protected and given rights? This may seem improper, but I can't help bringing to mind the Terri Schiavo case. It is very possible that she was fully conscious and fully capable of emotions even though she was in a permanently catatonic state. 207.157.121.50 12:49, 25 October 2005 (UTC)[reply]

That might get difficult without OR. I changed the merge suggestion from "Artificial Intelligence" to "Artificial intelligence (philosophy)", which is referred to by the Portal:Artificial_intelligence --Fasten 13:51, 19 October 2005 (UTC)[reply]

The subject could be extended much further to include:

  • Use of AI in wars
  • Use of AI in conditions hazardous to humans (saving people from fire, drowning, or poisoned or radioactive areas)
  • Use of AI in human activities (doing human work, AI failing, substituting for human jobs, doing unhealthy or dangerous work such as mining, what will happen if AI gets better than us at most of our work activities, what AI will not be able to do, at least in the near future)
  • AI as a possible future Technological singularity: what will happen if AI eventually becomes more intelligent and capable than humans, able to produce even more intelligent AIs, possibly to a level that we won't be able to understand.
  • Deeper discussions about the possibility of sensations (qualia) and consciousness in AI
  • Some comments on what will happen if AI becomes widespread in a future society with behavior, appearance and activities very similar to, or even better than, ours (could it bring problems about machine "treatment"? I mean, could we still throw them away as if they were simply an expensive toy, if they become better than us in all our practical activities?)
  • As AI usage and presence become greater and more widespread, should we discuss questions such as "should AI have rights and obligations?" and "does it make sense to create laws for AI beings to obey?"

I ask anyone who has references and content to include them properly. Rend 23:11, 21 October 2005 (UTC)[reply]

Just following up on some of Rend's questions...

  • Is it ethical for a person to own an AI "being"? Would an AI being necessarily "prefer" to be unowned?
  • A computer is owned by whoever owns the factory that makes it (until the factory sells the computer to a person) -- is the same true of an AI being?
  • If an AI being is unowned, and it builds another AI being, then does the first AI being own the second one?
  • Are the interests of human society served by incorporating unowned AI beings into it? Would humans in such a society be at a competitive disadvantage?
  • Would the collective wisdom of AI beings come to the conclusion that humans are but one of many forms of life on the planet, and therefore humans don't deserve any more special treatment than, say, mice? Or lichen?

Whichever way these questions are answered, more questions lie ahead. For example, if we say it isn't ethical for a person to own an AI being, then can or should society as a whole constrain the behavior of AI beings through ownership or through "laws of robotics"? If we are able to predict that the behavior of AI beings will not be readily channeled to the exclusive benefit of humans, then is there a "window of opportunity" to constrain their behavior before it gets "out of hand" (from human society's point of view)?

A survey of current philosophical thought on questions such as these (and the slippery slope issues surrounding them) would be very helpful here.—GraemeMcRaetalk 05:55, 3 November 2005 (UTC)[reply]

On whether Asimov's laws can be enforced or only taught

Not all artificially intelligent machines are necessarily "programmed" as intelligent. For example, the deep belief networks of Geoff Hinton et al. can learn yet can be implemented in unprogrammed hardware. I accept that Hinton's simulations require programs, but the programming does not embody intelligence, only neural and synaptic functions which do not in themselves incorporate meaning.

I don't want to get into semantic or philosophical arguments here about what "programmed" really means, but my key point is that Asimov's laws seem to require some sort of "override" over instinct or learning, which rather implies a high-level program. Without that, one would have to teach the laws to the AI (or, in Hinton's terminology, "learn" the AI to understand the laws), but that would leave open the possibility of the AI deciding to ignore the laws, and Terminator scenarios follow.

So for that reason, the ethics of creating AI machines deserves continuing attention. P.r.newman (talk) 08:42, 18 January 2011 (UTC)[reply]
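
A minimal sketch of the "override" structure described above, using entirely hypothetical names (make_guarded_policy and the toy actions are illustrations, not any real system): a learned policy proposes actions, and a small hand-written rule layer sitting above it can veto them. This is the kind of high-level program that enforced, rather than taught, laws would seem to require.

```python
from typing import Callable

def make_guarded_policy(
    learned_policy: Callable[[dict], str],   # "instinct"/learning: proposes an action
    forbidden: Callable[[dict, str], bool],  # the programmed "law": vetoes actions
    fallback: str = "do_nothing",
) -> Callable[[dict], str]:
    # The learned component proposes; the hard-coded rule disposes.
    def guarded(state: dict) -> str:
        action = learned_policy(state)
        if forbidden(state, action):   # the high-level override
            return fallback            # enforcement, not persuasion
        return action
    return guarded

# Toy usage: a "policy" that echoes a suggestion, and a rule vetoing harm.
policy = make_guarded_policy(
    learned_policy=lambda state: state.get("suggested", "wait"),
    forbidden=lambda state, action: action.startswith("harm"),
)
print(policy({"suggested": "harm_human"}))  # -> do_nothing
print(policy({"suggested": "assist"}))      # -> assist
```

If the laws are instead only taught, they live inside learned_policy itself, and nothing structural stops the learned component from overriding them, which is the worry raised above.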

Robot rights

Moved Robot rights from Artificial intelligence in fiction to Ethics of artificial intelligence. Thomas Kist 18:57, 15 October 2007 (UTC)[reply]

Proposed merge

Oppose -- for no other reason than the fact that philosophy of AI is already 10 pages long, and this subject probably requires about ten pages on its own. ---- CharlesGillingham (talk) 17:15, 14 August 2008 (UTC)[reply]

http://www.youtube.com/watch?v=7VpXekQGzqg — Preceding unsigned comment added by Angienoid (talkcontribs) 13:39, 19 March 2013 (UTC)[reply]

Re Rule 61 of the 2003 Loebner Prize competition

As it stands today (7/4/2013), the rule with its legal specifications says nothing at all about the subject of Robots' rights. Suggest deleting. Svato (talk) 19:31, 4 July 2013 (UTC)[reply]

Roboethics

The lead to this section currently states, "The term "roboethics" .. [refers] to the morality of how humans design, construct, use and treat robots and other artificially intelligent beings. It considers both how artificially intelligent beings may be used to harm humans and how they may be used to benefit humans." I find the two halves of this to be contradictory. The first is clear enough - if a robot is a conscious being then we must treat it ethically. The second appears to be a simple application of machine ethics to AI, and nothing whatever to do with roboethics as defined in the first sentence. — Cheers, Steelpillow (Talk) 12:08, 13 November 2014 (UTC)[reply]

We have to consider whether 'roboethics' is appropriate as a section in this article and not its own page. Of course, roboethics and the ethics of AI are deeply connected, but they are properly their own fields, given that robotics does not necessarily entail AI, and the embodiment of AI in robotics comes with its own ethical issues and has a huge literature behind it. EthicsScholar93 (talk) 09:23, 29 November 2020 (UTC)[reply]

Solzhenitsyn

re: (Sorry, the source is McCorduck, whose "Machines That Think" is the definitive work on the history of AI. She considers this an issue in the ethics of AI. The synthesis is hers, not Wikipedia's.)

In this case the section should be written in the following way: "McCorduck <bla-bla bla>. As an example of this possibility McCorduck cites Solzhenitsyn's <...>". Are you saying that the rest of the section (after the footnote) is attributed to McCorduck as well? If not, then {{cn}} is due. Staszek Lem (talk) 00:45, 15 May 2015 (UTC)[reply]

At least 50 years

I changed the sentence about how most scientists think it will be at least 50 years before we have human-equivalent AI. First, I added in the time -- 50 years from 2007 is not the same as 50 years from 2016! I also traced down the actual source, which was an Independent article from that period, about an AI symposium. The existing links were 404s, so I switched over to use the Web Archive. I think that's probably not the best for Wikipedia, but better than a lost source. --ESP (talk) 16:02, 19 January 2016 (UTC)[reply]

Hello fellow Wikipedians,

I have just modified one external link on Ethics of artificial intelligence. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).

An editor has reviewed this edit and fixed any errors that were found.

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 21:47, 20 July 2016 (UTC)[reply]

Hello fellow Wikipedians,

I have just modified one external link on Ethics of artificial intelligence. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 16:58, 26 December 2016 (UTC)[reply]

Hello fellow Wikipedians,

I have just modified one external link on Ethics of artificial intelligence. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 04:07, 24 September 2017 (UTC)[reply]

Weaponization of artificial intelligence

I started a discussion on Talk:Lethal autonomous weapon about whether to have a separate article, on the global AI arms race or perhaps instead on the policy of weaponization of AI; feel free to chime in there. There is also some overlap between the weaponization section here and the current Lethal autonomous weapon; probably Lethal autonomous weapon or a new standalone article would be a better home for such discussion, and in the long term I might end up proposing that this currently vague article be dismantled or turned into an overview page. Rolf H Nelson (talk) 20:12, 24 December 2017 (UTC)[reply]

Robot ethics

AI ethics - a big topic - seems to have short-changed robot ethics. You might want to reference other articles rather than being inaccurate. In reality, the terms roboethics and robot ethics have meant specifically different things. That is reflected in the intentions of the two groups with those names on Facebook. Roboethics has a closer relationship with philosophy and social science, among other things; on that basis it even reaches into public policy. Robot ethics is a term that has referred to the technical challenge of building ethical AI machines.

see my comment above under 'roboethics' EthicsScholar93 (talk) 09:24, 29 November 2020 (UTC)[reply]

Disorganized

This article has become very disorganized, and no longer provides a comprehensive overview of its topic. Most of the contributions seem to have been inserted piecemeal and the overall structure makes no sense. May I suggest that someone take a look at the appropriate section(s) of the article artificial intelligence and attempt to reorganize this article in a more logical form? E.g. (1) risks, unintended consequences, abuses (2) ethical reasoning, "friendly" AI, etc. (3) consciousness/sentience and robot rights. Is anyone maintaining this article? ---- CharlesGillingham (talk) 20:54, 29 July 2018 (UTC)[reply]

Addition of sub-topic: Biases in AI Systems

AI has become increasingly inherent in facial and voice recognition systems. Some of these systems have real business implications and directly impact people. These systems are vulnerable to biases and errors introduced by their human makers. Also, the data used to train these AI systems can itself have biases. For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender [23]. These AI systems were able to detect the gender of white men more accurately than the gender of darker-skinned men. Similarly, Amazon.com Inc's termination of its AI hiring and recruitment tool is another example showing that AI cannot be guaranteed to be fair: the algorithm preferred male candidates over female ones, because Amazon's system was trained with data collected over a 10-year period that came mostly from male candidates. [24] — Preceding unsigned comment added by 2604:3D08:8380:B90:856E:28B1:1641:8BFB (talk) 04:18, 29 May 2019 (UTC)[reply]
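
A minimal sketch, on made-up data, of the kind of per-group evaluation that surfaces the disparity described above: reporting a classifier's accuracy per demographic group rather than as a single overall number. The function name and the example records are fabricated for illustration only.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += int(predicted == actual)
    # Per-group accuracy; large gaps between groups indicate bias.
    return {g: correct[g] / total[g] for g in total}

# Entirely fabricated example records, for illustration only.
results = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "male", "female"),   # misclassified
    ("darker-skinned women", "female", "female"),
]
print(accuracy_by_group(results))
# -> {'lighter-skinned men': 1.0, 'darker-skinned women': 0.5}
```

An overall accuracy of 0.75 here would hide the fact that one group is served twice as reliably as the other, which is exactly the pattern the gender-detection study found.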

Addition of sub-topic: Liability for Partial or Fully Automated Cars

The wide use of partially to fully autonomous cars seems imminent. But these new technologies also bring new issues. Recently, a debate over legal liability has arisen concerning who the responsible party is if these cars get into an accident. In one report [25], a driverless car hit a pedestrian, and there was a dilemma over whom to blame for the accident. Even though the driver was inside the car during the accident, the controls were fully in the hands of the computer. Before such cars become widely used, these issues need to be tackled through new policies. — Preceding unsigned comment added by 2604:3D08:8380:B90:856E:28B1:1641:8BFB (talk) 04:23, 29 May 2019 (UTC)[reply]

Actuaries

I developed a report of the ethical use of AI for actuaries, sponsored by the Society of Actuaries. https://www.soa.org/globalassets/assets/files/resources/research-report/2019/ethics-ai.pdf Neil Raden (talk) 21:06, 25 October 2019 (UTC)[reply]

Ethics institutions involved in AI ethics

Suggestion: add a section listing the large-scale institutions involved in AI ethics, like "The Institute for Ethical AI & Machine Learning".

Problem is, many of these institutions don't have their own Wikipedia articles. RJJ4y7 (talk) 15:16, 30 June 2020 (UTC)

If there is agreement on this, then I'll attempt to start the section. — Preceding unsigned comment added by RJJ4y7 (talkcontribs)

I can't find strong WP:SECONDARY sources on "The Institute for Ethical AI & Machine Learning". Where there is at least one strong secondary source for an institution, I'm fine with it being added, even if it doesn't have its own Wikipedia page. Rolf H Nelson (talk) 04:18, 1 July 2020 (UTC)[reply]


Here are some institutions to consider adding: Future of Humanity Institute, Global Catastrophic Risk Institute, Institute for Ethics and Emerging Technologies, Future of Life Institute, Institute for Ethics and Artificial Intelligence, The Institute for Ethics in AI (Oxford). EthicsScholar93 (talk) 12:11, 28 November 2020 (UTC)[reply]