User:Eshafer68/Ethics of Artificial Intelligence
Biases in AI systems
Main article: Algorithmic bias

[[File:Kamala Harris speaks about racial bias in artificial intelligence - 2020-04-23.ogv|thumb|Then-US Senator Kamala Harris speaking about racial bias in artificial intelligence in 2020]]

AI has become increasingly integral to facial and voice recognition systems. Some of these systems have real business applications and directly impact people. These systems are vulnerable to biases and errors introduced by their human creators, and the data used to train them can itself be biased. For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender; these AI systems were able to detect the gender of white men more accurately than that of men with darker skin. Further, a 2020 study that reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they had higher error rates when transcribing black people's voices than white people's. Furthermore, Amazon ended its use of AI in hiring and recruitment because the algorithm favored male candidates over female ones. This was because Amazon's system was trained with data collected over a 10-year period that came mostly from male candidates.
Bias can creep into algorithms in many ways. The most predominant view on how bias is introduced into AI systems is that it is embedded within the historical data used to train the system. For instance, Amazon's AI-powered recruitment tool was trained with its own recruitment data accumulated over the years, during which time the candidates who successfully got the job were mostly white males. Consequently, the algorithm learned this biased pattern from the historical data and predicted that similar candidates were most likely to succeed in getting the job, so the recruitment decisions made by the AI system turned out to be biased against female and minority candidates. Friedman and Nissenbaum identify three categories of bias in computer systems: preexisting bias, technical bias, and emergent bias. In natural language processing, problems can arise from the text corpus, the source material the algorithm uses to learn about the relationships between different words.
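This mechanism can be illustrated with a short hypothetical sketch: a classifier is fit to invented historical hiring records in which otherwise identical candidates were treated differently, and it reproduces that disparity. The feature names, numbers, and outcomes below are all made up for illustration.

```python
# A minimal sketch (hypothetical data): a model fit to skewed historical
# hiring decisions reproduces the skew, even though no rule was coded in.
from sklearn.linear_model import LogisticRegression

# Each record: [years_of_experience, attended_womens_college]
# Past outcomes reflect biased human decisions, not candidate ability.
X_train = [
    [5, 0], [6, 0], [4, 0], [7, 0],   # historically hired
    [5, 1], [6, 1], [4, 1], [7, 1],   # historically rejected, equal experience
]
y_train = [1, 1, 1, 1, 0, 0, 0, 0]    # 1 = hired, 0 = rejected

model = LogisticRegression().fit(X_train, y_train)

# Two candidates with identical experience; only the second feature differs.
print(model.predict([[6, 0], [6, 1]]))   # [1 0]: the disparity was learned from data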
Large companies such as IBM and Google have made efforts to research and address these biases. One solution for addressing bias is to create documentation for the data used to train AI systems. Process mining can be an important tool for organizations to achieve compliance with proposed AI regulations by identifying errors, monitoring processes, identifying potential root causes for improper execution, and other functions.
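As a hedged sketch of what such dataset documentation might record (all field names and values here are invented, in the general spirit of "datasheets for datasets" rather than any specific standard):

```python
# Illustrative only: a machine-readable summary of a training dataset's
# provenance and known skews, so downstream users can audit for bias.
datasheet = {
    "name": "resume-screening-corpus",            # hypothetical dataset
    "source": "historical hiring decisions, 2008-2018",
    "known_skews": ["applicants ~80% male"],      # flagged for auditors
    "labeling_process": "past recruiter outcomes, not verified ability",
    "intended_use": "research on screening models",
    "unsuitable_use": "fully automated candidate rejection",
}
```

Recording known skews alongside the data itself makes it harder for a biased corpus to be reused silently in a new system.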
The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that algorithmic bias is already pervasive in many industries and that almost no one is making an effort to identify or correct it. There are some open-source tools from civil society organizations that seek to bring more awareness to biased AI.
AI is also being incorporated into the hiring processes of almost every major company. There are many examples of characteristics that the AI is less likely to choose, including associating typically white names with being more qualified and excluding anyone who went to a women's college.[1] Facial recognition has also proven to be highly biased against those with darker skin tones, often able to easily detect the faces of white people while being unable to register the faces of people who are black.[2] Similarly, the word "Muslims" has been shown to be more highly associated with violence than the name of any other religion. This is even more disconcerting considering the disproportionate use of security cameras and surveillance in communities with high percentages of black or brown people. This has been acknowledged in some states and has led to bans on police use of AI software. Even within the justice system, AI has been proven to have biases against black people, labeling black court participants as high risk at a much higher rate than white participants. AI also often struggles to determine when certain words are being used as racial slurs that need to be censored and when they are being used culturally.[2]

The reason for these biases is that AI pulls information from across the internet to influence its responses in each situation. For example, if a facial recognition system were trained and tested only on people who were white, it would only have the data and face scans of white people, making it much harder for it to interpret the facial structures and skin tones of other races and ethnicities. There is no single answer for stopping these biases; the most useful approach so far has been collaboration among data scientists, ethicists, and policymakers. Often the reason for bias within an AI system is the data behind the program rather than the algorithm itself: the system's information is frequently drawn from past human decisions or inequalities, which can carry over into its decision-making processes.[3]
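The underrepresentation mechanism described above can be sketched with a small synthetic simulation. Everything below (the group labels, the one-dimensional "detector response", and the 95/5 training split) is invented purely to show the effect:

```python
# A minimal sketch (synthetic data): a detection threshold tuned on training
# data dominated by one group misses far more faces from the other group.
import random
random.seed(0)

def detector_response(group):
    # Hypothetical scalar response; the two groups simply peak in different
    # places, standing in for different imaging statistics.
    return random.gauss(1.0 if group == "A" else 0.6, 0.2)

# Training pool: 95% group A, 5% group B.
train = [detector_response("A") for _ in range(950)] + \
        [detector_response("B") for _ in range(50)]

# Pick a threshold that accepts ~95% of the training faces overall.
threshold = sorted(train)[int(0.05 * len(train))]

for group in ("A", "B"):
    test = [detector_response(group) for _ in range(1000)]
    miss_rate = sum(score < threshold for score in test) / len(test)
    print(group, round(miss_rate, 2))   # group B's miss rate is far higher
```

No part of the detector refers to group membership; the disparity arises entirely from which faces the threshold was tuned on.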
Injustice in the use of AI will be much harder to eliminate within the healthcare system, as diseases and conditions can often affect different races and genders differently. This can lead to confusion, as the AI may be making decisions based on statistics showing that one patient is more likely to have problems due to their gender or race. This can be perceived as a bias because each patient is a different case, and the AI is making decisions based on the group it is programmed to place that individual into. This leads to a discussion about what is considered a biased decision on who receives what treatment. While it is known that diseases and injuries affect different genders and races differently, there is a discussion on whether it is fairer to incorporate this into healthcare treatments or to examine each patient without this knowledge. In modern society, there are already certain tests for diseases, such as breast cancer, that are recommended to certain groups of people over others because those groups are more likely to develop the disease in question. If AI implements these statistics and applies them to each patient, it could be considered biased.[4]
Examples of AI being proven to have bias include COMPAS, a system used to predict which defendants are more likely to commit crimes in the future, which was found to predict higher risk values for black people than their actual risk. Another example is Google's ad system, which targeted men with higher-paying jobs and women with lower-paying jobs. It can often be hard to detect AI biases within an algorithm, as the bias is frequently linked not to the actual words associated with bias but rather to words that biases can act through. An example is a person's residential area, which can be used to link them to a certain group. This can lead to problems, as businesses can avoid legal action through this loophole because the laws enforced by governments target the specific verbiage considered discriminatory rather than such indirect signals.[5]
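A hedged sketch of this proxy effect (synthetic data; the 90% area-to-group correlation and all other numbers are invented): even when the protected attribute is withheld from the model entirely, a correlated feature like residential area lets the model reconstruct the historical disparity.

```python
# Minimal sketch (synthetic data): the protected attribute is never shown to
# the model, yet a correlated proxy feature carries the bias through anyway.
import random
from sklearn.linear_model import LogisticRegression
random.seed(0)

rows, labels = [], []
for _ in range(1000):
    group = random.random() < 0.5                          # protected attribute
    area = group if random.random() < 0.9 else not group   # proxy: 90% correlated
    score = random.gauss(50, 10)                           # qualification, identical across groups
    hired = score > 55 and not group                       # biased historical decisions
    rows.append([float(area), score])                      # model sees only area + score
    labels.append(int(hired))

model = LogisticRegression(max_iter=1000).fit(rows, labels)

# Equally qualified applicants from the two areas get very different odds.
for area in (0.0, 1.0):
    print(area, round(model.predict_proba([[area, 60.0]])[0][1], 2))
```

Because "residential area" never appears in the wording a statute prohibits, a model like this can be discriminatory in effect while remaining difficult to challenge in letter.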
Increase in AI use
AI has been slowly making its presence more known throughout the world, from chatbots that seemingly have answers for every homework question to generative artificial intelligence that can create a painting about whatever one desires. AI has become increasingly popular in hiring markets, being used throughout every stage of the hiring process, from the ads that target certain people according to what they are looking for to the inspection of applications from potential hires. Events such as COVID-19 have only sped up the adoption of AI programs in the application process, as more people have had to apply electronically; with this increase in online applicants, AI made the process of narrowing down potential employees easier and more efficient.[2] AI has become more prominent as businesses have to keep up with the times and the ever-expanding internet. Processing analytics and making decisions becomes much easier with the help of AI. As tensor processing units (TPUs) and graphics processing units (GPUs) become more powerful, AI's capabilities will increase, forcing companies to use it to keep up with the competition. Managing customers' needs and automating many parts of the workplace leads to companies having to spend less money on employees.
AI has also seen increased usage in criminal justice and healthcare. For medicinal purposes, AI is being used more often to analyze patient data to make predictions about future patients' conditions and possible treatments. These programs are called clinical decision support systems (CDSS). AI's future in healthcare may develop into something more than recommended treatments, such as referring certain patients over others, leading to the possibility of inequalities.[6]
References
- ^ Aizenberg, Evgeni; Dennis, Matthew J.; van den Hoven, Jeroen (2023-10-21). "Examining the assumptions of AI hiring assessments and their impact on job seekers' autonomy over self-representation". AI & SOCIETY. doi:10.1007/s00146-023-01783-1. ISSN 0951-5666.
- ^ a b c "Artificial Intelligence (AI) — Top 3 Pros and Cons". search.credoreference.com. Retrieved 2023-12-14.
- ^ Silberg, Jake; Manyika, James (June 2019). "Notes from the AI frontier: Tackling bias in AI (and in humans)" (PDF).
- ^ Cirillo, Davide; Catuara-Solarz, Silvina; Morey, Czuee; Guney, Emre; Subirats, Laia; Mellino, Simona; Gigante, Annalisa; Valencia, Alfonso; Rementeria, María José; Chadha, Antonella Santuccione; Mavridis, Nikolaos (2020-06-01). "Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare". npj Digital Medicine. 3 (1): 1–11. doi:10.1038/s41746-020-0288-5. ISSN 2398-6352.
- ^ Ntoutsi, Eirini; Fafalios, Pavlos; Gadiraju, Ujwal; Iosifidis, Vasileios; Nejdl, Wolfgang; Vidal, Maria‐Esther; Ruggieri, Salvatore; Turini, Franco; Papadopoulos, Symeon; Krasanakis, Emmanouil; Kompatsiaris, Ioannis; Kinder‐Kurlanda, Katharina; Wagner, Claudia; Karimi, Fariba; Fernandez, Miriam (May 2020). "Bias in data‐driven artificial intelligence systems—An introductory survey". WIREs Data Mining and Knowledge Discovery. 10 (3). doi:10.1002/widm.1356. ISSN 1942-4787 – via WIREs.
- ^ Challen, Robert; Denny, Joshua; Pitt, Martin; Gompels, Luke; Edwards, Tom; Tsaneva-Atanasova, Krasimira (2019-03-01). "Artificial intelligence, bias and clinical safety". BMJ Quality & Safety. 28 (3): 231–237. doi:10.1136/bmjqs-2018-008370. ISSN 2044-5415. PMID 30636200.