Received: 20 July 2022
Revised: 3 August 2023
Accepted: 4 December 2023
DOI: 10.1002/sres.2994
RESEARCH PAPER
Professionalism in artificial intelligence: The link between
technology and ethics
Anton Klarin 1 | Hossein Ali Abadi 2 | Rifat Sharmelly 3

1 School of Management and Marketing, Curtin University, Bentley, WA, Australia
2 School of Business and Law, Edith Cowan University, Joondalup, Australia
3 Western Sydney University, Penrith, Australia

Correspondence
Anton Klarin, Curtin University, School of Management and Marketing, Building 402, Curtin Business School 1, Kent Street, Bentley, WA 6102, Australia.
Email: [email protected]

Abstract
Ethical conduct of artificial intelligence (AI) is undoubtedly becoming an ever more pressing issue, considering the inevitable integration of these technologies into our lives. The literature so far has discussed the responsibility domains of AI; this study asks how ethicality can be instilled into AI technologies. Through a three-step review of the AI ethics literature, we find that (i) the literature is weak in identifying solutions for ensuring ethical conduct of AI, (ii) the role of professional conduct is underexplored, and (iii) the values extracted from studies of AI ethical breaches can inform a solution. We thus propose a conceptual framework that offers professionalism as a means of ensuring ethical AI. The framework stipulates fairness, nonmaleficence, responsibility, freedom, and trust as values necessary for developers and operators, as well as transparency, privacy, fairness, trust, solidarity, and sustainability as organizational values to ensure sustainability in the ethical development and operation of AI.

KEYWORDS
accountability, algorithms, integrative literature review, social contract theory, systems research
1 | INTRODUCTION
The phenomenon of accelerating change suggests that technological development occurs at an increasingly rapid
pace—the greater the growth in capability of the technology, the greater the acceleration of its further development
(see, e.g., Eliazar & Shlesinger, 2018). We are currently on the cusp of large-scale integration of cyber-physical systems, otherwise known as the Fourth Industrial Revolution, where industries are turning to smart technologies communicating seamlessly via the Internet of Things (IoT), greater use of cloud computing, increasing automation, Web 3.0,
big data, and other technologies (Ghosh et al., 2021;
Lu, 2019; Nazarov & Klarin, 2020; Xu, 2020). True to the
nature of accelerating change, the world's leading
organizations and institutes are developing technologies
that venture into deep learning and working towards
general artificial intelligence (AI), which are the domains
of the Fifth Industrial Revolution (El Namaki, 2018; Petrillo
et al., 2018; Serrano, 2018; Valenduc, 2018).
Technological landscapes are evolving ever faster, to the point that policymakers and institutions struggle to regulate AI technologies and ensure their ethical conduct (Heilinger, 2022; The Bureau of
National Affairs, 2020). Hence, the questions of moral
obligation of agents engaged in the development of AI
technologies become increasingly pertinent especially
considering technology-related autonomous actions and,
more importantly, ethical outcomes that impact society
(Gibert & Martin, 2022; Matthias, 2004; Ziewitz, 2016).
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided
the original work is properly cited.
© 2024 The Authors. Systems Research and Behavioral Science published by International Federation for Systems Research and John Wiley & Sons Ltd.
Syst Res Behav Sci. 2024;1–24. wileyonlinelibrary.com/journal/sres
Research suggests that ethical considerations should be
adopted by developers and organizations throughout the
entire AI development and deployment process (Almeida
et al., 2022; Felzmann et al., 2020; K. E. Martin, 2019a;
Neubert & Montañez, 2020). For example, healthcare and
the military call for operators or professionals using these
technologies to be ultimately responsible for actions of
these technologies (Luxton, 2014a; O'Sullivan et al., 2019; RAND Corporation, 2020; Tóth et al., 2022). Research also
highlights the need for macro-level regulation of AI development and use to ensure the creation of institutional
frameworks for users and developers to operate within
(Montes & Goertzel, 2019; Raab, 2020; Weber, 2020). The
importance of professionalism consideration in developing
and deploying AI to ensure accountability of the actions
performed by the technology has been suggested by a
number of studies (e.g., Carter, 2020; Gillies &
Smith, 2022; Howard & Borenstein, 2018; Luxton, 2014a).
The concept of professionalism highlights the importance
of the contract between professional members and society
(Cruess & Cruess, 2014). The success of this contract is
dependent on maintaining public trust, which is associated
with the demonstration of knowledge, skills, and adherence to a code of ethics by those working within the profession (Pfadenhauer, 2006; Svensson, 2006). Research
argues that ethics, as a component of professionalism,
plays a crucial role in promoting trust in professional practice in society (Evetts, 2013). Research thus emphasizes the importance of placing ethics at the core of professional practice, as it allows for a clear distinction between professional and unprofessional behaviour. However, we do not know how the professionalism of AI developers and operators can play a role in mitigating the ethical issues of AI. As we witness increasing utilization of AI and growing concerns about its ethical implications for society, it is timely to study the role of professionalism in the development and deployment of AI. Generally speaking, professionalism is viewed as a means of managing unprofessional behaviour by ensuring that professionals deliver quality outputs and follow standards of practice (Abbott, 1983; Burmeister, 2017); this is especially pertinent in the emergence and integration of AI (Stahl, 2021).
Further, the literature on professionalism suggests that
ethical behaviour is one of the core elements of being a
professional, and organizations should aim to maintain
high levels of ethical conduct in the practice of their professionals (Abbott, 1983; Evetts, 2013; Goto, 2021). Therefore, our study aims to answer the following question:
How does professionalism bridge the separation between
AI technologies and ethics? We aim to utilize a systems perspective in answering the posed question.
Systems research is an interdisciplinary approach to
study complex systems in society, nature, and designed
systems (Checkland, 1999). Ethics is a social construct and
as such is a designed system. Furthermore, AI ethics is an
interdisciplinary phenomenon and is best studied through
the systems thinking approach (Jiang et al., 2022; Li
et al., 2022). Considering the multiplicity, complexity, and
interdisciplinarity of AI ethics, a systems thinking approach allows us to see links and interrelationships between subsystems and constructs, and chains of causality between constructs. This practice of analysing the whole rather than individual subsystems is referred to as holism (Nazarov & Klarin, 2020; von Bertalanffy, 1968). A holistic approach is ideal for identifying the possible causes of AI ethical breaches, the role of professionalism in AI, and the prerequisites for ensuring the ethical development and operation of AI, which is what we aim to do in this study.
The objectives of this study are threefold. First, we
aim to identify a gap in the literature of AI and ethics
through conducting a comprehensive review by way of
exploring and mapping the data that are available on the
entire Web of Science (WoS) dataset of AI and ethics literature. Given that existing studies on AI and ethics are
not constrained within specific research disciplines
(Hildt, 2019; Martin et al., 2019; Saithibvongsa &
Yu, 2018; Xu, 2022), it is important for a review to be cross-disciplinary in order to have a systems perspective of the scholarship (Galvin et al., 2021; Klarin et al., 2021)
surrounding AI and ethics. Second, this study aims to
highlight the integral relationship of professionalism and
ethics. The logic for this relationship is embedded in the
professionalism literature as well as the social contract
theory that highlights an ethical responsibility of members of a profession towards society. Third, building on
the above, we draw on the current evidence of AI and
algorithm behaviour in practice through an in-depth
analysis of available literature to identify the current state
of the link between AI and professionalism. This allows us to draw the link between AI, professionalism, and ethics.
To fulfil these objectives, this study captures the scholarship of AI and ethics through a three-step review study
and provides a systems overview of the concepts and the
relationships between them in an interdisciplinary fashion.
This study makes a number of contributions to the literature. First, it expands on the concept of professionalism and its importance within the business ethics
paradigm. We would like to emphasize the imperative
nature of professionalism in business ethics, thus stressing that labour-related ethics research (including its
relationship with automation and AI) inevitably relies on
professionalism at work. Second, we provide a systems
view of the AI and ethics scholarship to highlight the
lack of professionalism-related discussions in this vast
interdisciplinary academic scholarship. Third, we provide
10991743a, 0, Downloaded from https://onlinelibrary.wiley.com/doi/10.1002/sres.2994 by National Health And Medical Research Council, Wiley Online Library on [18/05/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
evidence to suggest that failures of AI in practice are at
least partially related to the lack of professional behaviour in the development or deployment of the technology.
In this paper, we contend that successful AI implementation depends very much on professional behaviour
instilled into the machines as well as professional behaviour practices of developers and/or deployers, which is
reflected in specific values and norms, leading to success
in AI utilization. Fourth, we draw the relationship
between AI, professionalism, and ethics, which should
form the fundamental framework of AI implementation
in practice. Finally, this study offers future research directions to strengthen academic research and guide practitioners and policymakers.
2 | PROFESSIONALISM AND AI ETHICS
2.1 | Ethics in professionalism
Professionalism refers to the beliefs that are held by
members of a profession, known as professionals, and is
exhibited in the practice of the members' profession
(Snizek, 1972). One of the fundamental objectives of professionalism is to perform and provide good practice and service for society (Pfadenhauer, 2006). Also,
the need for professionalism is central to the trust of society (Pfadenhauer, 2006) as well as the status and image
of the profession (Author, 2020). As a belief at the individual level, professionalism is closely related to professional self-concept, which is about seeing oneself as a
professional (Gibson, 2003). Professional self-concept
guides the attitudes, behaviours, and actions of individuals towards their practices (Gibson, 2003). Hence, professionalism is viewed as a way of controlling and
improving the quality of work and standards of practice
conducted by professionals (Evetts, 2011). This is mainly
due to the two core elements of professionalism: knowledge and skills as well as ethicality (see the seminal studies by Larson, 1979; Wilensky, 1964). Knowledge and
skills refer to the assumption that the work of professionals requires a specialized knowledge base as well as a
set of skills coupled with competency, which are obtained
through extensive education, training, and experience
(Hargreaves, 2000).
Ethicality, as the other core component of professionalism, refers to the professionals' adherence to the code
of ethics as set by a profession, which aims to control
behaviour and holds professionals accountable to clients
and society (Abbott, 1983; Brown, 2013). Ethical behaviour is the other key criterion for achieving professional status; it drives the work and practice of individuals by constraining self-interest and making sure clients are provided with quality service (Brown, 2013). Demonstrating ethical behaviour that is rooted in the code of ethics and norms of conduct of the profession is also considered an indicator distinguishing professional behaviour from that which is deemed unprofessional (Abbott, 1983).
We do note, however, that it is difficult to definitively
identify what constitutes professional behaviour because
of the many different ways members of a profession
might interpret what it means to act professionally
(Evetts, 2013). Primarily, codes of conduct provide guidance, but it is equally important to note that professional
behaviour is associated with individual professional characteristics and attitudes towards work. Further, each
work context has its own values reflected in norms and
presents different constraints on professionalism and a
professional's practice (Evetts, 2011).
In addition, research suggests that understanding what constitutes professionalism requires an understanding of how the work and actions of professionals
generate values and build trust in society (Evetts, 2013;
Saks, 2016). Generating values and trust underpin the
responsibility of the professionals towards clients and
society (Evetts, 2013). In light of the contribution to society, a review of application of social contract theory suggests that adherence to the expectations of society is the
key element of perceived professional behaviour
(Cruess & Cruess, 2008; Jos, 2006; Welie, 2012). Drawing
on the nature of social contract theory, it is argued that
there is an agreement between a profession and society in
relation to the details of appropriate behaviours and obligations to practice and service expected of professionals
(Donaldson & Dunfee, 1994; Mohamed et al., 2020;
Rahwan, 2018). Failure in demonstration of certain behaviour and obligations expected by society can be perceived
unethical (Jos, 2006). Therefore, one's professional behaviour is to a great extent determined through interaction
with society.
2.2 | Professionalism and ethical
conduct of AI
AI ethical codes are derived from the general
organizational code of ethics. It is crucial to ensure that
they are embedded into the business practice of organizations and individuals deploying AI technologies. In this
way, AI ethics codes guide and maximize the ethical utilization and application of AI technologies (Ayling &
Chapman, 2022). Thus, ethicality—which is acknowledged as the core component of professionalism
(Abbott, 1983; Brown, 2013)—has been regarded by AI
research as a means to interpret the outcomes of AI technology development and utilization and their impact on
society. However, it is important to note that little attention has been devoted to the investigation of the role of
professionalism in addressing and developing codes
of ethics for AI. For example, Price et al. (2019) discuss
the legal liability of physicians as professionals using AI;
however, the discussion does not venture into how professional the practices of physicians are, nor those of the
developers of the technology.
Studies that have focused on the standards expected
of professionals in a profession who are using AI technologies in delivering services offer some evidence for the
existence of the role of professionalism in developing
codes of ethics for AI. For example, Cox (2022) has introduced an educational programme that consists of eight
ethics scenarios of AI for information professions to help
the professionals to learn about the key ethical issues of
AI as a technology to increase their self-awareness of AI
operation in their professional service. In the field of radiology, there is a global consensus statement on the ethics
of AI, which requires all professionals to consider the promotion of well-being, the least harm, and an equal distribution of benefits and harms among all parties when
they are deploying AI technologies (Geis et al., 2019).
Research also recommends informed consent, a high level of safety, privacy and transparency, algorithmic fairness and biases, and optimal liability as the fundamental ethical principles that should be taken into account and addressed by all healthcare professionals to ensure that AI technologies are implemented ethically and successfully (Gerke et al., 2020). In the domain of defence,
the five ethical principles, namely, justified and overridable uses, just and transparent systems and processes,
human moral responsibility, meaningful human control,
and reliable AI systems, have been recommended for consideration to minimize the ethical issues that may occur as a result of deploying AI (Taddeo et al., 2021).
There is also increasing evidence of the lack of professionalism in AI systems, especially in terms of biased
data, which is argued to be caused by human judgment
(Nelson, 2019). Bias in AI occurs when the outcome of
the machine favours one group compared with another,
which is often perceived as unfair (Fletcher et al., 2021;
Mehrabi et al., 2021). For example, gender bias is evident
in the recruitment context. Amazon's AI hiring tool excluded women candidates until the company discovered the issue in 2018 (Dastin, 2018). The case of Amazon suggests that the company's algorithmic bias reinforced discrimination in hiring practices, an issue that can be regarded as unprofessional behaviour.
Another example can be the issue of gender bias, which
is found in statistical translation tools such as Google
Translate, as it exhibits male defaults more than female
and gender-neutral defaults in some occupations, including science, technology, engineering, and mathematics
(Prates et al., 2020). Therefore, it can be argued that AI
bias has a potential to cause repercussions when the technology is used to make decisions or solve problems
(Zajko, 2021). It is important to note that public trust in
AI is essential when it comes to the societal acceptance of
AI decision-making (Adnan et al., 2018). Public trust
depends on the ethical implication of AI for users, affecting their behavioural intention to accept the technology
(Adnan et al., 2018). Research on the profession points to
the role of professionalism in promoting trust in professional practitioner–client relations (Evetts, 2011). This
suggests that biased algorithms fed into a system are likely to result in ethical and professional issues. This also raises the question of whether professionalism can address AI ethical issues. The consideration of professionalism is important because AI technology is being
gradually integrated into business practices. Professionalism can help mitigate biased algorithms by ensuring that
the ethical implications are measured before making the
AI technologies available to the public.
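Group-level unfairness of the kind described above is often quantified as a selection-rate gap between groups, sometimes called the demographic parity difference. This is one illustrative metric among several used in the fairness literature, and the applicant data below are hypothetical:

```python
# Illustrative sketch: demographic parity difference for a binary decision.
# A gap near 0 suggests parity between groups; a large gap signals the kind
# of group-favouring outcome described above. Data are hypothetical.
def selection_rate(decisions):
    """Fraction of favourable outcomes (1s) among all decisions."""
    return sum(decisions) / len(decisions)

# 1 = favourable outcome (e.g., shortlisted), one entry per applicant
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # selection rate 0.75
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # selection rate 0.25

gap = selection_rate(group_a) - selection_rate(group_b)
print(f"demographic parity difference: {gap:.2f}")  # 0.50
```

A gap of 0.50 would indicate that one group is favoured three times as often as the other, the pattern reported in the Amazon hiring case.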
2.3 | Institutions, AI ethics, and
professionalism
Having highlighted the importance of professional conduct, research identifies that supranational institutions
including the European Commission High-Level Expert
Group on Artificial Intelligence (ECHLEG) with its
“Ethics Guidelines for Trustworthy AI” (European
Commission, 2021) and OECD values-based privacy principles that address data protection (OECD, 2019b) are
active in designing principles of ethical use and development of AI. The European Union General Data Protection
Regulation (GDPR) that took effect in May 2018 stipulates
a number of conditions on the use of data. One such condition is “the right to explanation,” which will require
transparency to build trust and reliability, thus resulting in
implied accountability placed on the manufacturers of AI
technologies (Goodman & Flaxman, 2017). On a national
level, governments invariably realize the inevitable nature
of technological adoption and thus introduce and expand
on AI implementation in their respective countries, for
example, the Italian government's White Paper on Artificial Intelligence at the service of the citizen (Vetrò
et al., 2019) and British Sociological Association's (BSA)
Annexe for Digital Research that accompanies the BSA's
Statement of Ethical Practice (Raab, 2020). On top of
national and supranational recognition, industry bodies
including Institute of Electrical and Electronics Engineers
(IEEE), Information Technology Industry Council (ITIC),
and Association for Computing Machinery (ACM) also
actively engage in development of principles and guidelines for primarily organizations and other stakeholders
developing and using AI (Clarke, 2019).
National and international institutional efforts are
being introduced to sanction breaches or issues arising
from the development and utilization of algorithms and
AI systems. Zuiderveen Borgesius (2020) highlights that
the current laws and regulations including the European
Convention on Human Rights (that was drafted in 1950)
are often inadequate in upholding ethical and legal standards in relation to the development and deployment of
AI systems and that provisions such as the GDPR are limited to a specific area of conduct. For example, the Council of Europe put forth nine principles and priorities that
are intended to underpin binding and nonbinding legal
instruments (Leslie et al., 2021). As such, the Member
States are to introduce legislation relating to AI systems
in protecting basic rights and freedoms—(i) human dignity, (ii) human freedom and autonomy, (iii) prevention
of harm, (iv) nondiscrimination, gender equality, fairness
and diversity, (v) transparency and explainability of AI
systems, (vi) data protection and the right to privacy,
(vii) accountability and responsibility, (viii) democracy,
and (ix) rule of law (Leslie et al., 2021). With the EU arguably leading institutional regulation of AI and welcoming inputs from society, professionals and organizations will inevitably be held accountable for the
development and deployment of AI.
Furthermore, the European Commission (2021),
OECD (2019a), and a number of other institutions propose principles, values, and guidelines that AI systems
should meet in order to be deemed trustworthy. For
example, Goodman and Flaxman's (2017) study describes the GDPR adopted
by the European Parliament, which requires algorithms
to operate within this new legal framework. In addition,
Wagner (2018) describes the set of ethics guidelines
developed by the European Group on Ethics in Science
and New Technologies (EGE). In addressing the question
of whose responsibility it is to assess and manage any
risks associated with AI applications, the European Parliament recommended that the deployer of an AI system
holds the responsibility of controlling any risks and the
level of risk should indicate the liability regime.
The European Parliament has endorsed the view that autonomous traffic management systems and vehicles, unmanned aircraft, autonomous cleaning devices for public places, and robots are high-risk AI systems that should be subject to a strict liability regime (Stahl, 2021). In this regard,
Borenstein and Howard (2021) argue that while professional bodies and associations provide specific guidance
on AI and related ethical issues, it is the ultimate responsibility of AI professionals (e.g., developers of AI technologies) to ensure that the AI system is intertwined with ethical requirements. This requires having a
professional mindset related to moral sensitivity, which
emphasizes that the technical aspects and functionalities of designed AI systems should reflect ethical guidelines as part of developers' professional responsibilities, without adopting the mentality that ethics is someone else's problem (Borenstein & Howard, 2021).
3 | AI ETHICS REVIEW AND EVIDENCE-BASED RESEARCH
We conducted a three-step literature review investigation
to ensure a comprehensive systems approach to explicate
the role of professionalism in AI and ethics. First, we
undertook an overarching review of the relationship
between AI technologies and ethics, mostly in social sciences literature, to identify the current issues of this contentious relationship. Second, we conducted an in-depth
analysis of the dataset that we extracted from WoS
coupled with further research into other databases to
identify specific studies that depict perceived unethical outcomes of the deployment of AI and related technologies.
The analysis involved categorization of the literature into
breaches of ethical conduct in deployment of AI and its
related technologies. Following this evidence-based
review process, we then attempt to identify the causes of
the breaches of ethical conduct by AI. Finally, from the
above, we suggest that professionalism is the key to ethicality of the development and use of AI technology.
3.1 | Step 1: An overarching view of the
relationship between AI and ethics
In the first step, the overarching review provides us with
the preliminary assessment of the literature on AI and
ethics, its main discussions, and conclusions. At this
stage, we aim to identify a gap in the literature and propose a way to fill this gap. We carry out our overarching
review using data from WoS. We chose to use the WoS
database as it is considered one of the largest scientific
knowledge databases (Crossan & Apaydin, 2010;
Podsakoff et al., 2008) and has major overlaps with its closest competitor database, Scopus, which means the results diverge only marginally between the two databases, especially when comparing large volumes of publications (Vieira & Gomes, 2009). Despite not listing extra sources, including many book chapters, meeting abstracts, news items, and proceedings, WoS is still more
popular in informetric studies as well as meta-analyses
compared with Scopus (Zhu & Liu, 2020).
The search query was set as “artificial intelligence” or
“machine learning” or “intelligent agent*” or “computational intelligence” or “neural network*” or “deep learn*”
AND “ethic*” or “moral*,” which returned 1592 publications that contain this search query within the topic areas
(titles, abstracts, and keywords) of the original works for
the period of 1980 up to 01 July 2020 (the beginning of
the available data in WoS and the end date of search).
Figure 1 demonstrates the study selection criteria.
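For reproducibility, the boolean search string above can be assembled programmatically. The sketch below shows one plausible way to compose it for a WoS advanced search, where `TS=` is the Web of Science topic-search field tag covering titles, abstracts, and keywords; the exact grouping of the OR/AND clauses is our reading of the query as described.

```python
# Sketch: assemble the Web of Science topic search (TS) used in Step 1.
# TS= searches titles, abstracts, and keywords; * is the WoS wildcard.
ai_terms = [
    '"artificial intelligence"', '"machine learning"', '"intelligent agent*"',
    '"computational intelligence"', '"neural network*"', '"deep learn*"',
]
ethics_terms = ['"ethic*"', '"moral*"']

query = f'TS=(({" OR ".join(ai_terms)}) AND ({" OR ".join(ethics_terms)}))'
print(query)
```

Pasting the printed string into the WoS advanced-search interface, with the timespan restricted to 1980 through 01 July 2020, should approximate the retrieval of the 1592 publications.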
We carefully read through titles, abstracts, and keywords of all 1592 studies to identify papers that specifically study the relationship between AI and ethics.
Although there is no commonly accepted definition of
AI, for the purposes of this study, we adopt a broad
understanding of AI as any device that perceives its environment through sensors and acts towards achieving its
goals (Russell & Norvig, 2010, p. 34). This ranges from expert systems that operate using predefined parameters to more complex self-learning technologies (Omoteso, 2012;
Ryan, 2014; Searle, 1980).
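The broad agent definition adopted above (perceive the environment through sensors, act towards goals) can be illustrated with a minimal percept-action loop in the spirit of Russell and Norvig's agent abstraction. The thermostat-like rule and all names below are our own illustrative choices, covering the simplest end of the spectrum: a predefined-parameter expert system rather than a self-learning one.

```python
# Minimal sketch of the agent abstraction: a device that perceives its
# environment through a sensor reading and acts towards achieving a goal.
# Here the illustrative "goal" is keeping a temperature near a setpoint.
def simple_agent(percept, setpoint=21.0):
    """Map a sensor reading (percept) to an action via predefined rules."""
    if percept < setpoint - 1.0:
        return "heat"
    if percept > setpoint + 1.0:
        return "cool"
    return "idle"

readings = [18.5, 20.8, 23.2]          # successive percepts from the sensor
actions = [simple_agent(t) for t in readings]
print(actions)  # ['heat', 'idle', 'cool']
```

Self-learning technologies replace the fixed rules with parameters updated from data, but the percept-to-action structure is the same.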
The great majority of the studies fleetingly mention
ethics and implications when discussing AI or related
technologies. Although there are plenty of studies that discuss AI ethics, we chose those that identify potential and uncovered threats and failures of AI, including expert systems (ESs) and various autonomous systems, identify the responsible parties, and offer resolutions, recommendations, propositions, and/or contributions; these studies are listed in Table 1. For example,
an excellent review study by Jobin et al. (2019) is not
part of Table 1 as it does not depict any specific threats
or failures; instead, it carries out a review of guidelines
put forth by various institutions in ensuring ethical
development and deployment of AI.
FIGURE 1 Results of the search and study selection criteria.
Almost all of the studies depicted in Table 1 call for the ultimate responsibility of developers and organizations in maintaining control over the development
and the use of AI and ESs. Although the studies either
specify guidelines for developers and organizations or
they call for regulatory interventions into this contentious issue, the studies that emphasize a holistic outlook
on all three levels of stakeholders are rare (see,
e.g., Munoko et al., 2020).
There are studies that show a positive trend of organizations introducing self-imposed ethics codes. For example, Carter (2018, 2020) notes that Google, Microsoft, and IBM developed their own AI ethical standards in the preceding 2–3 years, whereas Amazon
commented only that its AI developments and use are
ethical and that there is no need to formalize it. We
are inevitably left wondering as to what organizations
like Amazon consider “ethical.” The self-imposed ethics
guidelines thus result in an issue of organizations' vested
interests in directing the ethicality of development and
operation of AI systems. There is a substantial risk that
these guidelines become arbitrary, optional, and eventually meaningless (Wagner, 2018; Yeung et al., 2020).
Recent studies dismissed the self-imposed ethics guidelines as “ethics washing” (Benkler, 2019; Bietti, 2020;
Rességuier & Rodrigues, 2020).
The researchers also significantly contribute by
compiling, analysing, and offering their own expert
recommendations on the ethical outcomes of AI utilization (Clarke, 2019; Vetrò et al., 2019; Wright &
Schultz, 2018). For example, Jobin et al. (2019) carried
out a comprehensive review of ethical principles identified in existing AI guidelines, where it was found that
transparency, justice, nonmaleficence, responsibility, privacy, and several other values were common among the
proposed guidelines.
TABLE 1 Studies of the relationship between AI and ethics.

| Reference | Focus of paper/problem | Accountability^a (mi/me) | Responsibility of | Propositions/recommendations/solutions/contributions |
|---|---|---|---|---|
| Brusoni and Vaccaro (2017) | Technology for virtuous behaviour | ✓ ✓ | Managers, researchers, organizations | An invitation to consider using technologies to embed, spread, and convey ethical values. |
| Carter (2018) | Impact of AI | ✓ ✓ | Professionals and organizations | Follow data quality, data governance, and data ethics guidelines. |
| Carter (2020) | Unregulated potential threat of AI | | Primarily regulators, also self-compliance by organizations | International accords recommended; state regulation is emphasized; organizations to follow the regulators and introduce self-compliance. |
| Clarke (2019) | AI threatens society | ✓ ✓ | Executives and organizations | Proposed a set of principles that is more comprehensive than previous recommendations by individuals and institutions that organizations should follow. |
| Davenport et al. (2020) | Impact of AI on marketing | ✓ ✓ | Primarily organizations and also developers | Organizations should exceed consumer privacy expectations and overcomply with legal mandates. |
| Dignum (2018) | Potential issues of new technologies | ✓ | Human responsibility | Special issue proposing: ethics by design; ethics in design; ethics for design. |
| Dodig Crnkovic and Çürüklü (2012) | Robot safety issues and morality | ✓ ✓ | All levels within organizations | AI systems should be embedded with principles of morality and exhibit ethical behaviour. If the system evolves, so too should the morals of the system. The system and stakeholders carry accountability. |
| Etzioni and Etzioni (2017) | It is impractical to make AI ethical | ✓ ✓ | Organizations and specialists need to develop ethics bots | Ethics bots should learn from humans through observation instead of being taught ethics. |
| Felzmann et al. (2020) | The need for transparency by design | ✓ ✓ | Organizations and AI developers | Proposition of transparency by design that organizations and developers should implement from the outset of the design process. |
Johnson (2015)
Responsibility gap
✓
✓
Developers and organizations
Developers carry the ultimate
responsibility for the consequences
of AI technologies.
Kaplan and
Haenlein (2020)
Potential threat of
AI
Firms represented by
managers and society to
regulate AI
AI technologies should be launched
only after substantial trials; firms
ought to build trust with
customers; AI needs to be
transparent.
Khalil (1993)
ESs lack ethical
capacity
✓
Managers; in some cases,
software engineers
ESs to be used in advising capacity;
professionals should hold the
ultimate responsibility.
Martin (2019a, b)
Unethical conduct
of technologies
✓
Coders/developers/
programmers, wider
creators, and organizations
Organizations and developers to take
the ultimate responsibility for the
release and use of technologies.
✓
✓
✓
ma
✓
✓
(Continues)
10991743a, 0, Downloaded from https://onlinelibrary.wiley.com/doi/10.1002/sres.2994 by National Health And Medical Research Council, Wiley Online Library on [18/05/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
KLARIN ET AL.
KLARIN ET AL.
TABLE 1
a
(Continued)
Accountabilitya
Reference
Focus of paper/
problem
Martin and
Freeman (2004)
Technology and
ethics separation
Montes and
Goertzel (2019)
Potential harm of
AI to society
Munoko et al.
(2020)
Unintended
consequences of
AI use
✓
✓
Neubert and
Montañez
(2020)
Lack of ethical
guidance of AI
✓
✓
Ozkazanc-Pan
(2019)
Potential
workplace
inequality
✓
Raab (2020)
AI use ethical
concerns
✓
Sarikakis et al.
(2018)
mi
me
✓
✓
Responsibility of
Propositions/recommendations/
solutions/contributions
ma
Managers, developers, and
ultimately organizations
When designing technologies agents
should take a pragmatic approach
of the implications of the
technology on community.
✓
Implied responsibility of
regulators to curtail large
tech organizations
Development of open AI systems,
taking away the control for tech
giants.
✓
Developers, adopting firms,
professionals, regulators,
stakeholders
A tension is growing between the
three levels of stakeholders in
auditing, where ethical guidelines
ought to be created.
Primarily organizations, also
managers
Organizations to introduce virtue
frameworks for AI development
and use, e.g., Google.
✓
Regulators and organizations
Regulatory bodies will need input
from stakeholders, while
organizations need voluntary codes
of conduct.
✓
Regulators and organizations
Regulation of technologies and their
application through approval
systems with an emphasis on
ethics—fairness, accountability
and transparency.
Potential breach of
human rights
✓
Policymakers nationally and
internationally
Social control perspective—conscious
quest for impacting change and
cartographing a path of actions and
intentions.
Timmers (2019)
Country
sovereignty at
risk
✓
Primarily regulators at
national and international
levels
Public collaboration,
intergovernmental collaboration,
and supranational body
interference.
Vakkuri et al.
(2020)
AI ethics field is in
its infancy
✓
Organizations and people
within them
Guidelines for organizations
including following regulations,
greater stakeholder links, and
others.
Vetrò et al. (2019)
Discrimination and
biases
✓
Organizations
A set of principles to be followed is
advised which include openness
and transparency requirements.
Weber (2020)
AI platforms
impact on
society
✓
✓
Regulators; ethics committees
in organizations
Legal framework safeguarding socioethical values as well as
fundamental rights necessary.
Wright and
Schultz (2018)
Social contract
violations
✓
✓
Firms, policymakers, and
researchers
Integrated ethical framework:
identify stakeholders, enumerate
social contract, assess stakeholder
impact, minimize violations/
disruption.
Level of accountability: mi—micro (individuals), me—meso (groups and organizations); ma—macro (governments, societies, etc.).
As the studies in Table 1 show, the research assigns accountability to organizations and developers for ensuring that AI technologies operate to the highest ethical standards. Professional associations as well as governing institutions also play a part in regulating the development of these technologies to the highest possible standards. We agree with and build on the findings of these publications by proposing professionalism of the individuals who develop or operate the technologies as a pathway to ensuring ethical outcomes in the use of these technologies. In the next review step, we identify to what extent professionalism is discussed in the current literature on AI and ethics.
3.2 | Step 2: Perceived unethical
outcomes of AI deployment
In the second step of our analysis, to provide an overarching view of the available literature on unethical outcomes of AI technologies, we searched the WoS, Scopus, Google Scholar, and ProQuest Central databases using the following string (partially built on the search string provided by Khalil et al., 2020): "artificial intelligence" or "machine learning" or "intelligent agent*" or "computational intelligence" or "neural network*" or "deep learn*" AND "failur*" or "discriminat*" or "racis*" or "sexis*" or "mistak*" or "error*" or "bias*" or "unethical" or "harm" or "legal" or "unprofessional" or "danger" or "death." We were thus able to locate studies that identify the various unethical practices of AI technology, as well as the implications and recommendations that these studies offer to reduce the risks associated with unethical outcomes of technology use.
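As an illustration of this search protocol, the query above can be assembled programmatically. The sketch below is only indicative: real databases each have their own field tags and syntax, which are omitted here.

```python
# Illustrative sketch: assembling the Step 2 boolean query from its two
# term lists. Database-specific field tags (e.g., topic-search prefixes)
# are omitted; the lists mirror the string reported in the text.
ai_terms = [
    '"artificial intelligence"', '"machine learning"', '"intelligent agent*"',
    '"computational intelligence"', '"neural network*"', '"deep learn*"',
]
breach_terms = [
    '"failur*"', '"discriminat*"', '"racis*"', '"sexis*"', '"mistak*"',
    '"error*"', '"bias*"', '"unethical"', '"harm"', '"legal"',
    '"unprofessional"', '"danger"', '"death"',
]

def or_group(terms):
    """Join a list of quoted terms into a single parenthesized OR group."""
    return "(" + " OR ".join(terms) + ")"

# Combine the two OR groups with a single AND, as in the reported string.
query = or_group(ai_terms) + " AND " + or_group(breach_terms)
print(query)
```

The same two lists can be reused across databases, with only the surrounding field syntax changing per provider.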
This extensive search for studies portraying the state of the literature on perceived breaches of ethical conduct (as agglomerated by Jobin et al., 2019) in the deployment of AI technologies garnered a number of studies, depicted in Table 2. The studies in the first column are available in the list of references. The second column describes the unethical conduct of AI as reported in the corresponding studies; the studies also point out or imply the responsible parties in column 3, and the implications of the research, in the form of propositions, recommendations, or contributions, appear in column 4. It was thus possible to code the unethical breaches against the ethical principles proffered by Jobin et al. (2019) in column 5. The research demonstrates common causes of the perceived unethical conduct, most of which suggest that these miscarriages of technology could have been averted through appropriate professionalism checks and balances.
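The coding exercise behind the table can be illustrated with a small tally. The sketch below takes the breached-principles lists for a subset of the rows in Table 2 (simplified to top-level principle labels) and counts how often each principle recurs:

```python
from collections import Counter

# Breached principles for a subset of the studies in Table 2,
# simplified to the top-level principle labels.
breaches = {
    "Angwin et al. (2016)": ["justice", "transparency", "freedom", "trust",
                             "dignity", "nonmaleficence", "responsibility"],
    "Bishop (2018)": ["transparency", "justice", "responsibility", "trust",
                      "dignity", "freedom"],
    "Cohen (2019)": ["transparency", "justice", "responsibility", "privacy",
                     "freedom", "trust", "dignity"],
    "O'Leary (2019)": ["nonmaleficence", "trust", "justice", "privacy",
                       "freedom", "dignity", "responsibility"],
}

# Count how many studies breach each principle.
tally = Counter(p for principles in breaches.values() for p in principles)
for principle, count in tally.most_common():
    print(f"{principle}: {count}/{len(breaches)} studies")
```

Even on this small subset, justice, trust, freedom, dignity, and responsibility recur in every study, anticipating the values foregrounded in the framework proposed later in the paper.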
The studies in Table 2 show that a common cause of perceived unethical behaviour among AI-related technologies is questionable design, including rushed releases and implicit bias programmed into algorithms. Research demonstrates that virtual personal assistants (VPAs) such as Amazon's Alexa, Microsoft's Cortana, and Apple's Siri appear to enforce gender stereotypes through their programmed behaviour (Loideain & Adams, 2020; Woods, 2018). These practices are thought to infringe values such as equality (justice), integrity (nonmaleficence), and choice (freedom) and require further investigation as well as society and state involvement. The purposeful design of the Australian government's machine learning mechanism for debt recovery, Robodebt, issued false or incorrectly calculated debt notices, eventually equating to government extortion (Braithwaite, 2020; Martin, 2018). As observed by Carney (2019), the legal errors of the Robodebt programme were due to a rushed design in which the government did not follow the legal and ethical standards on machine learning provided by the Administrative Review Council in 2004, breaching solidarity, dignity, transparency, and trust; this eventually led to the programme being shut down and unfair recoveries being repaid. Further, consider Google Duplex, a virtual helper capable of imitating humanlike conversation for booking reservations and making service enquiries, complete with sounds like "umms" and "aahs"; these capabilities may be perceived as unethical and as such contravene codes prescribed by the British Standards Institution (BSI) and the IEEE, including dignity and trust in the new technology (Chen & Metz, 2019; O'Leary, 2019).
In addition to questionable design, organizations may not disclose the coding that governs the conduct of such technologies and/or algorithms. YouTube's algorithm, for instance, ultimately rewards hegemonic and normative performances of femininity, stratifying by class and gender and punishing content that does not fit specific, often advertisement-centric, criteria (Bishop, 2018). Similarly, the discrimination observed in the Google Ad service, where women are shown fewer advertisements for higher paying jobs than men, is hidden behind Google's black-box algorithms (Datta et al., 2015). Relatedly, the AI-based Facebook Business Manager and Google AdWords showed a potential for discriminatory use when employed to exclude people of a certain ethnic or racial affinity from the target audience for advertisements, owing to the datasets chosen and the targeting rules embedded in the systems (Dalenberg, 2018). In another example, the risk assessment Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) software used in various US jurisdictions to provide sentencing advice has also been
TABLE 2  Perceived breaches of ethical conduct in deployment of AI technologies.

References | Unethical or problematic conduct | Responsible | Propositions/recommendations/solutions/contributions | Breached ethical principles
Angwin et al. (2016); Martin (2019a) | Government correctional service (COMPAS) algorithms demonstrate apparent racial bias due to value-laden design. | Organizations | Organizations should take responsibility for the actions of the algorithms and technology. | Justice (e.g., fairness and equality), transparency, freedom (e.g., consent), trust, dignity, nonmaleficence (e.g., safety), responsibility
Belk (2020) | Various types of discrimination by robotics in surveillance, social engineering, and the military. | Organizations and governments | Further research, raising public awareness and scrutiny of the robots. | Justice (equality and equity), nonmaleficence, responsibility, freedom (choice and empowerment), trust, dignity
Bishop (2018) | YouTube's "black box" algorithms can be class and gender biased. | Organizations | Further research is recommended to bring the issue to light to influence positive changes. | Transparency, justice (equality, nonbias), responsibility, trust, dignity, freedom (consent)
Carney (2019) | Australian government's automated debt recovery system issued incorrectly calculated debt notices. | Government | Due to mounting criticism the system was scrapped on 29/05/2020. | Beneficence (e.g., well-being), freedom, trust, responsibility, privacy, justice, dignity, solidarity (social security)
Cohen (2019) | Discrimination against women in job application software developed by Amazon. | Organizations | Ensuring technologies are devoid of bias will enhance HR functions. | Transparency, justice (equality and nonbias), responsibility, privacy, freedom (choice and empowerment), trust, dignity
Dalenberg (2018) | Automated job advertising may be discriminating on the grounds of race, sex, class, and other associations. | Organizations | Organizations should comply with the relevant regulations when designing these systems. | Transparency, justice, responsibility, privacy, freedom, trust, dignity
Datta et al. (2015) | Google job search ads for highly paid positions are less likely to be presented to women. | Google | Further research by academics, organizations themselves, and regulators is recommended. | Transparency, justice, responsibility, privacy, freedom, trust, dignity
Etzioni and Etzioni (2017) | Errors (sometimes fatal) caused by AI-based driverless cars. | Developers, deployers, and regulators | Regulations as well as ethical guidance should be established by developers and deployers, i.e., users. | Responsibility, transparency, solidarity (social security), privacy, freedom, trust, dignity
Feuerriegel et al. (2020) | AI-based credit loan application system was found to favour certain socio-demographic groups. | Organizations | Organizations are required to understand perceptions of fair AI and align with these values, to conform to regulations and to build trust. | Justice (equity and fairness), trust, responsibility, transparency, privacy, freedom (choice and empowerment), dignity
Finlayson et al. (2019) | Minor undetectable manipulations of data can change the behaviour of AI systems. | Regulators and decision makers | Regulatory safeguards must be introduced—accountability and standards of procedure. | Nonmaleficence, trust, justice, responsibility, beneficence (well-being), freedom, dignity, solidarity
Gong et al. (2020) | Deepfake content as a threat to society, politics, and commerce. | Government and society | Legislation, company policy, publicity, training and education, and technology. | Nonmaleficence, trust, justice, privacy, freedom, dignity, responsibility, beneficence, solidarity
Howard and Borenstein (2018) | Implicit bias in the design and the data used to guide technologies leads to biased outcomes. | Organizations and all stakeholders | Community involvement, monitoring, multidisciplinary teams, litmus tests, decision transparency, selectivity of words, built-in comparative analysis. | Transparency, justice, responsibility, privacy, freedom, trust, dignity
Iacobucci (2018) | Health self-diagnosis system previously missed symptoms and generated a high rate of false positives, and regulators considered the app outside their regulatory remit. | Regulators and developers | Regulators should be proactive and test new diagnostic technology in safe limited situations, with independent trials and careful assessment. | Transparency, justice, freedom, trust, dignity, nonmaleficence, beneficence, responsibility
Loideain and Adams (2020) | Bias in the design of virtual personal assistants with an implied discrimination of women as 'submissive and secondary to men'. | Regulators and organizations | Further research is required into digitally gendered servitude, and regulators need to intervene in design. | Transparency, justice, responsibility, privacy, freedom, trust, dignity, nonmaleficence
Obermeyer et al. (2019) | Racial bias because the algorithm predicts health care costs instead of illnesses of patients. | Developers of algorithms | Changing the algorithms to be more impartial and objective by adjusting some indices. | Nonmaleficence, trust, justice (equality), responsibility, beneficence (well-being), freedom, dignity, solidarity
O'Leary (2019) | Google's Duplex is a voice assistant that is capable of impersonating humans and thus is potentially unethical. | Developers and regulators | Ensuring that the system identifies itself as a robot from the outset when performing tasks for owners. | Nonmaleficence (including protection), trust, justice, privacy, freedom, dignity, responsibility
Strickland (2019) | Oncology Expert Advisor system provided useless and sometimes dangerous recommendations. IBM Watson is generally not ready to replace medical specialists. | Developers | Currently no specific solutions are available. | Transparency, justice (remedy and redress), freedom, trust, dignity, nonmaleficence, beneficence (well-being and social good), responsibility
Zuiderveen Borgesius (2020) | Algorithm for a medical school's admission system discriminated against women and people with an immigrant background. | Regulators | Nondiscrimination and data protection laws need to be enforced to prevent discrimination by algorithms. | Transparency, justice (equality), responsibility, privacy, freedom (choice and empowerment), trust, dignity
found to be engaging in racial profiling in its decision-making, which was not "available for defendants to question" (Martin, 2019b).
The other common reason for perceived bias in
machine decision-making lurks in the data that are fed to
the algorithms, intentionally or unintentionally. In an
international beauty contest that utilized AI algorithms
to identify the most attractive contestants from a pool of
6000 entrants representing over 100 countries, the winners were predominantly Caucasian (Khalil et al., 2020).
Further, face recognition systems often discriminate against people of Asian background: a passport photo system allegedly flagged Asian applicants' eyes as closed when they were in fact open, and an iPhone failed to distinguish between faces of Asian origin (Howard & Borenstein, 2018; Tischbirek, 2020). In another example, Amazon was forced to shut down its job application rating system because it scored female applicants lower than their male counterparts; the training data were drawn from Amazon's existing pool of employees, who were mostly male (Cohen, 2019).
Drawing from the examples in Table 2, developing algorithms for AI systems requires selecting datasets as input. The moral behaviour of developers influences the data input: the type of data fed in, the suitability of the data feed (e.g., limited or biased datasets), how the data will be used, and who will use the final output in the decision-making context. AI system developers thereby take on algorithmic accountability, also referred to as artificial morality (Dodig Crnkovic & Çürüklü, 2012), and responsibility for system-generated decisions and their consequent implications, including potential biases, diminished rights, mistakes, and violated principles (Martin, 2019b). For example, an AI-based system that approves or rejects a potential customer's bank loan, credit card, or mortgage application should be traced back to the organizations and developers who decided to automate the process using logics that can potentially exhibit prejudices (Whitby, 2008).
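The loan example can be made concrete with a toy fairness check. The sketch below uses hypothetical, deliberately skewed data (not drawn from any cited study) to compute per-group approval rates in a historical dataset; this demographic-parity gap is exactly the disparity an automated approval system trained on such a history would tend to reproduce.

```python
# Toy historical loan decisions as (group, approved) pairs. The data are
# hypothetical and deliberately skewed: group A was approved 80% of the
# time, group B only 40% of the time.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 40 + [("B", False)] * 60

def approval_rate(records, group):
    """Share of applicants in `group` whose loans were approved."""
    decisions = [approved for g, approved in records if g == group]
    return sum(decisions) / len(decisions)

rate_a = approval_rate(history, "A")   # 0.8
rate_b = approval_rate(history, "B")   # 0.4
parity_gap = abs(rate_a - rate_b)      # demographic parity difference
print(f"approval A={rate_a:.2f}, B={rate_b:.2f}, gap={parity_gap:.2f}")
```

A nonzero gap in the training history does not by itself prove wrongdoing, but it flags exactly the kind of embedded prejudice that, per the argument above, should be traced back to the organizations and developers who chose to automate the decision.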
Many of the issues arguably stem from the novelty of the algorithms and technologies introduced, which, unsurprisingly, are unregulated owing to policymakers' inability to keep up with the development and subsequent utilization of these technologies. As such, the largest technology companies at the forefront of AI development remain largely unregulated in this sphere. Some of these conglomerates, including Microsoft, Alphabet, and IBM, have introduced and work within the framework of their own ethical conduct guidelines, while others, including Amazon, have not introduced any such guidelines, instead offering an assurance that ethical conduct is programmed into the new AI technologies (Carter, 2020). As described earlier, there are multinational accords, as well as various national and professional association guidelines, available to regulate and create some institutional framework for the ethical responsibilities of individuals, organizations, and the resulting technologies.
A review of the studies in Tables 1 and 2 also shows that the concept of AI ethics at the individual level is associated with perceptions embedded in experience, and its measurement should therefore capture and assess one's experience of using AI technologies. For example, the capability of Google's Duplex system to emulate a human voice has been measured and analysed, and impersonation has been found to deceive users by pretending to be human (O'Leary, 2019). Concerning discriminatory visibility, Bishop (2018) attempted to measure the impact of the self-optimization tactic of YouTube's algorithm and found that it favours middle-class users who create specific gendered content for advertisement. Within the legal field, measurement of the impact of algorithmic decision-making on human rights has focused on nondiscrimination, arguing that such systems can reproduce discrimination from biased training data that encode discriminatory human decisions (e.g., regarding gender, sexual preference, or ethnic origin) (Zuiderveen Borgesius, 2020). Similarly, automated online targeting of job advertisements has been evaluated and found to cause direct and indirect discrimination when the targeting settings are designed to exclude a specific or a larger group of people (Dalenberg, 2018).
Regarding gender stereotypes and equity, research shows that the gendering of VPAs such as Alexa, Cortana, and Siri causes indirect discrimination and societal bias towards women because it creates a notion that women are "submissive and secondary to men" (Loideain & Adams, 2020). Through an experimental technique and statistical analysis, Datta et al. (2015) found that, compared with men, women receive fewer ads encouraging them to consider applying for high-paying roles from the "Ad Settings webpages" created by Google, stressing that discrimination exists in online advertisements. Cohen (2019) likewise points to AI discrimination against women in the employee screening software developed by Amazon, arguing that the discrimination is embedded in the data, which were gathered from individuals who had already been recruited, primarily men. In addition to the role of such data, research points to the selection bias of experts, the selection of an image to represent an area of expertise, and predetermined outputs for a case as further reasons for the existence of ethical problems in AI (Howard & Borenstein, 2018).
Applying bias measures to several real datasets, Vetrò et al. (2019) explored the ethical-social issues that arise when AI agents make decisions about certain population groups (e.g., disadvantaged ones) using databases designed around narrow objectives such as efficiency and optimization goals.
Although studies have focused on the potential (un)ethical outcomes of the utilization of AI, the measurement of AI ethics remains a major challenge for researchers. This is because the ethics construct comprises different elements and deals with individuals' perceptions of what is right and wrong—all of which mean different things to different people (Nguyen et al., 2008). The challenge grows when AI ethics is measured on the basis of societal views. Given that the societal view of AI ethics is influenced by various social systems (Paraman & Anamalah, 2023), the key measurement challenge is identifying a set of variables that "corresponds to the aggregate ethical view of society" (Baum, 2020, p. 166), which is also argued to be crucial for developing a framework for a "good AI society" (Paraman & Anamalah, 2023). Hence, measurement is about selecting a procedure to identify views on different aspects of AI ethics in a specific context. For example, a survey can be designed to explore the perceptions of a select group of people about the threat of job loss as a result of using AI (Vu & Lim, 2022) or the implications of such technologies for community well-being (Musikanski et al., 2020). A scenario-based survey experiment can also be used to gather perceptions about the fairness and risk of AI systems in making a decision (Araujo et al., 2020). As part of the social system in a given context, the perceptions of policymakers, designers, and developers should also be captured and measured to obtain a composite ethical perception from different stakeholders. Once measurement is completed, the next step is the aggregation of ethical views. Drawing on social choice theory (Sen, 2008), research suggests the role of an aggregation procedure in identifying a single ethics view from the measurement outcomes (Baum, 2020). Aggregation is considered important because AI systems use one single view to decide (Baum, 2020). Therefore, it can be argued that the measurement of societal views contributes to the understanding of AI social choice ethics.
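The aggregation step described above can be sketched as a simple social-choice procedure: each stakeholder group rates a set of candidate ethical positions, and an aggregation rule reduces the ratings to the single view an AI system would act on. All ratings and weights below are hypothetical illustrations, and any real aggregation would need a defensible weighting, as Baum (2020) discusses.

```python
# Hypothetical ratings (0-10) of three candidate ethics policies by three
# stakeholder groups, plus illustrative weights per group.
ratings = {
    "public":       {"strict": 7, "balanced": 8, "permissive": 3},
    "developers":   {"strict": 4, "balanced": 7, "permissive": 6},
    "policymakers": {"strict": 8, "balanced": 6, "permissive": 2},
}
weights = {"public": 0.5, "developers": 0.25, "policymakers": 0.25}

def aggregate(ratings, weights):
    """Weighted-average aggregation into a single score per policy."""
    policies = next(iter(ratings.values())).keys()
    return {p: sum(weights[g] * r[p] for g, r in ratings.items())
            for p in policies}

scores = aggregate(ratings, weights)
chosen = max(scores, key=scores.get)   # the single view the system adopts
print(scores, "->", chosen)
```

A weighted average is only one of many possible aggregation rules; majority voting or rank-based rules from social choice theory would encode different, and contestable, notions of whose ethics count.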
3.3 | Step 3: Professionalism in AI and
ethics scholarship
As the analysis of perceived ethical conduct breaches in Step 2, summarized in Table 2, shows, specific ethical principles have been breached in each case. Individual-level ethical principles, including fairness, nonmaleficence, responsibility, freedom, and trust, as well as the organizational-level principles of transparency, privacy, fairness, trust, solidarity, and sustainability, are consistently infracted. When we talk about these moral principles, we inevitably venture into the ethicality of actions. Despite notable progress in the development of AI systems in terms of technological innovation, which is derived from knowledge and skills, we see that issues including rushed designs, insufficient attention to the information fed to the machines, and high levels of secrecy in the design process cause ethical breaches. This paper contends that trust in and acceptance of technology stem not only from the highest levels of technological innovation but also from the ethicality of the development and use of these technologies, bringing us to the construct of knowledge and skills supported by ethicality, which together constitute professionalism. To find out how much attention is given to this holistic understanding of professionalism, that is, knowledge, skills, and ethicality, we delve deeper into the literature.
In the third step of our overview of AI ethics scholarship, we identify all publications that contain terms with the professional* core. We aim to discuss the context and the role of professionalism within these studies to identify a gap in the literature. We note that professionals may be referred to by other labels, including occupation-specific ones such as surgeons or engineers; this limitation is addressed in the conclusion. However, we contend that when professions are discussed in terms of codes of conduct, the term profession* will likely appear in the discussion (professional code, professional behaviour, etc.). Terms with the professional* core appear in only 120 publications of the entire set of 1592, approximately 8% of all publications in this scholarship.
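The profession* screen amounts to a stem match. A minimal regex version is sketched below; the sample texts are invented for illustration, and a real screen would run over the actual titles, abstracts, and keywords of the 1592 records.

```python
import re

# Stem match for terms with the "professional*" core, analogous to the
# profession* wildcard used in Step 3. The sample texts are invented.
PATTERN = re.compile(r"\bprofession\w*", re.IGNORECASE)

abstracts = [
    "We survey healthcare professionals on AI-assisted diagnosis.",
    "Deep learning improves radiology workflows.",
    "Professional codes of conduct rarely mention algorithms.",
]

hits = [text for text in abstracts if PATTERN.search(text)]
share = len(hits) / len(abstracts)
print(f"{len(hits)}/{len(abstracts)} texts mention profession*")
```

The word-boundary anchor keeps the match aligned with the wildcard semantics: it catches profession, professional, professionals, and professionalism, but not words that merely contain the stem mid-word.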
We read through all 120 WoS publications that mention the term profession* and note that the majority use the term to identify a context within which a particular phenomenon is investigated. For example, Flahault et al. (2017) demonstrate how technology, including AI algorithms and machine learning, improves professionals' ability to provide healthcare with greater efficiency and effectiveness. Further related studies examine the direct impact of these technologies on professionals, from the challenges they present (see, e.g., Benke & Benke, 2018) to the potential direct replacement of professionals or at least part of their duties, for example, in the legal professions (Armour & Sako, 2020; Sutton et al., 2018). Finally, another stream of research uses professionals as samples in studies (e.g., Wangmo et al., 2019; Wiesenberg & Tench, 2020). Almost half of the studies were in the context of the healthcare sector.
Only six studies of the 120 highlight the relationship between professionalism and the ethical development/use of AI technologies. First, Delacroix (2018) looked at automation within the legal profession and suggested that automated systems have the potential to hinder the quality of professionals' legal service in this field. Similarly, Neri et al. (2020) questioned the trustworthiness and reliability of AI use in radiology and argued that
the ultimate responsibility ought to remain with radiologists in the utilization of AI. Luxton (2014a, 2014b) reported ethical issues arising from the use of AI by professionals in mental healthcare; these studies call for professionals to engage in the study, development, and guidance of these technologies given their intelligent nature. This argument is further supported by Bezemer et al. (2019), who argue that healthcare professionals ought to initiate and guide the development of support algorithms in clinical care instead of being overwhelmed by technological advancements; the study argues for developers and the relevant practitioners to collaborate in an interprofessional fashion to design these systems with ethics in mind. Finally, a study conducted by Thomson and Schmoldt (2001) in software design calls for AI development organizations to uphold professional codes of conduct to ensure values in design and the resultant success of technologies.
More recently, interdisciplinary research recognizes the role of professionalism in ensuring ethical AI. For example, Gillies and Smith (2022) outline the principles of professional codes of conduct that define professionals; the authors then find that AI systems currently do not meet the ethical standards that govern human professionals. Borenstein and Howard (2021) argue that AI ethics should be part of curricula for members of the AI community to ensure professional behaviour of AI and its users. Goto (2021) provided an in-depth analysis of AI development in auditing at the level of collective professional identity and found a mutually reinforcing relationship between managerial and professional practices when professionals adopt AI at a collective level. This argument is supported in Klarin and Xiao's (2023) recommendations for professionals in the architecture, engineering, and construction industry, whilst Qu et al. (2022) argue for the need to regulate AI development sufficiently to avoid risks in the education sector. Finally, Stahl (2021) notes that in computing, including AI, professionalism is less developed than in other areas such as medicine and law, suggesting the need for greater involvement of professional bodies in developing AI ethics standardization.
Whilst the studies depicted above are instructive in bringing attention to professional behaviour in the development and use of AI, they do not detail the implications of professionalism in the development and implementation of AI from a wider stakeholder perspective. In particular, these studies identify the terms profession and professionalism with skills and outcomes rather than the entire complexity of the professionalism construct, which also concerns perceived professionally ethical behaviour according to certain codes of conduct from the point of view of society at large. Finally, these
studies tend to be context specific, with four of the six studies that highlight the relationship between professionalism and ethical development/use of AI technologies centred on the healthcare sector.
Ultimately, we argue that designing algorithms in AI systems requires selecting data sets as input, and the moral behaviour of designers influences the role of the actor inputting the data, the type of data feed used as input, the suitability of the data feed, how the data will be used, and who will be using the final output in the decision-making context. As AI system developers/designers with specialized knowledge and abilities are uniquely positioned to design and develop AI system algorithms, they make moral choices and take responsibility for the ethical implications of algorithms, given that an algorithm can either reinforce or infringe the ethical principles of the decision-making context (Martin et al., 2019). We furthermore argue that the errors or shortcomings of certain AI systems are due to a lack of professional behaviour and accountability on the part of deployers or operators of these systems. Although it is generally understood that there should be more thorough scrutiny of professionals and organizations in ensuring ethical AI, there have been no studies that identify the values required to be upheld by individuals or organizations to ensure ethical AI, which is what we aim to propose in this study.
4 | ENSURING ETHICAL AI: DISCUSSION
It is futile to expect AI technologies to behave ethically without identifying the causes of unethical behaviour. The evidence-based approach of this review on AI and ethics was instrumental in identifying the gap in connecting AI technologies to the required ethical outcomes of the use and operation of these technologies. This study adopted a systems research approach (Klarin et al., 2023) to the interdisciplinary nature of AI ethics by merging the AI and ethics scholarships to offer a holistic and emergent perspective. The holistic systems overview identifies a clear gap, in practice and in the consequent literature, in ensuring ethicality in the development and operation of AI. The development of designed systems requires not only knowledge and skills but also ethicality, which is only achievable through instilling professionalism in designing and operating AI systems. Below, we discuss how this study derives its findings and offers a theoretical contribution of ethicality principles at the individual and organizational levels for ensuring ethical AI. The study further proffers practice and policy implications as well as directions for further research on this pertinent subject.
4.1 | Theoretical contribution and
implications
The studies that examine the relationship between AI and ethics, as depicted in Table 1, call for greater accountability of developers and operators, alongside regulatory bodies and various stakeholders, to create institutional frameworks for the development and use of AI technologies. Building on previous research, we aim to go further and ask: How do we achieve ethical behaviour of developers and operators of AI and the related technologies? We thus contend that professionalism of the accountable parties will ensure the maintenance of proper ethical standards and the possibility of eradicating perceived unethical behaviour in the development and use of such technologies (Figure 2). We specifically argue that the shortcomings of AI-based systems, whether intentional or unintentional, are due to the lack of professionalism on the part of the developers and organizations designing and releasing these systems. Consider, for example, a relatively common issue with the
data source used to feed an algorithm that eventually produces biased results. Data collected in an imperfect world (for example, skewed perceptions of the risks to society that are partially based on the racial aggregation of certain individuals), fed to a tabula rasa machine, will inevitably lead to machine preconceptions and resulting biases in the actions of the machine. It is the developers' professional responsibility not only to ensure that the machine is fed the data that exist (a reminder of the adage ‘garbage in, garbage out’) but also to ensure that the data are parsed and unbiased; this should be followed by an extensive system of checks and balances before the technology is released to serve its purpose. Arguably, there has been some progress in this sphere, where national- and transnational-level institutions design frameworks that aim to curtail the use of data and the resultant technologies. For example, Goodman and Flaxman (2017) describe the General Data Protection Regulation (GDPR) adopted by the European Parliament, which requires algorithms to operate within this new legal framework. Elsewhere, Leslie et al. (2021) establish nine principles and priorities that serve as the foundation for both binding and nonbinding legal guidelines: (i) human dignity, (ii) human freedom and autonomy, (iii) prevention of harm, (iv) nondiscrimination, gender equality, fairness and diversity, (v) transparency and explainability of AI systems, (vi) data protection and the right to privacy, (vii) accountability and responsibility, (viii) democracy, and (ix) rule of law. In addition, Wagner (2018) describes the set of ethics guidelines developed by the European Group on Ethics in Science and New Technologies (EGE).
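To make the "checks and balances" idea above concrete, the following minimal Python sketch computes a simple demographic-parity check that a professional developer might run on training data before release. It is purely illustrative: the column names ("group", "label"), the toy data, and the 0.2 threshold are our assumptions, not part of this study's framework or any regulation discussed here.

```python
from collections import defaultdict

def positive_rate_by_group(records, group_key="group", label_key="label"):
    """Share of positive labels per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(bool(r[label_key]))
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(records):
    """Largest difference in positive-label rates between any two groups."""
    rates = positive_rate_by_group(records)
    return max(rates.values()) - min(rates.values())

# Toy data: group "A" receives favourable outcomes twice as often as "B".
data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]
gap = demographic_parity_gap(data)
if gap > 0.2:  # the threshold is context dependent; 0.2 is only illustrative
    print(f"Warning: demographic parity gap {gap:.2f} exceeds threshold")
```

Such a screening step is of course only one of many possible pre-release checks; the professional judgement lies in choosing which disparities to measure and what thresholds a given decision-making context can tolerate.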
FIGURE 2 Professionalism role in the perceived ethical use of AI technologies.

Figure 2 identifies the proposed principles of professionalism in AI at the individual professional and organizational levels to ensure ethical AI. Each of the principles is detailed in the comprehensive scoping review carried out by Jobin et al. (2019); we direct readers to that study for further elaboration of each principle, as explaining each in detail falls outside the scope of this study. At the individual level, professionals ought to exhibit fairness by mitigating bias through respect, inclusion, and equality in access, use, and challenge; nonmaleficence through safety and security and the aversion of any unintentional and/or foreseeable harm; responsibility by acting with integrity and being accountable for their actions and responsibilities to society; freedom through the moral courage to do the right thing; and trust built through the right professional conduct. At the organizational level, professionalism is underpinned by transparency, which ensures clarity in processes for optimal trust; respect for privacy as a fundamental right of society; fairness through proactive consolidation of the just cause, inclusion, and equity; trust through assurance of righteous actions that secure societal confidence in the organization and its operations; solidarity towards society and the equitable distribution of AI benefits; and sustainability, as society expects that organizations, especially the larger ones, uphold the
triple bottom line responsibility towards the environment, society, and stakeholders. Having identified the key professional ethical traits for individuals and organizations, we note that some values certainly apply to both. For example, at both the individual and organizational levels, respect for privacy is expected to be of utmost importance. However, organizations carry the ultimate responsibility to ensure that their employees respect and uphold privacy rights, as we believe that individuals should exercise righteous moral standards by default to be considered truly professional in what they do. Thus, we assign privacy principles as a responsibility of organizations more so than of individual professionals.
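As an illustration only, the two sets of values proposed above can be encoded as a plain self-assessment checklist; the function name and structure below are our assumptions for the sketch, not an instrument proposed by the framework itself.

```python
# Hypothetical sketch: the proposed professionalism values (Figure 2) as a
# simple checklist an individual or organization could audit against.
INDIVIDUAL_VALUES = ["fairness", "nonmaleficence", "responsibility",
                     "freedom", "trust"]
ORGANIZATIONAL_VALUES = ["transparency", "privacy", "fairness",
                         "trust", "solidarity", "sustainability"]

def audit(upheld, required):
    """Return the required values not yet evidenced as upheld."""
    return sorted(set(required) - set(upheld))

# Example: an individual professional who so far evidences only two values.
missing = audit(upheld={"fairness", "trust"}, required=INDIVIDUAL_VALUES)
print(missing)  # the remaining individual-level values to evidence
```

The point of the sketch is simply that the individual- and organizational-level values are distinct but overlapping sets (fairness and trust appear in both), which any practical self-assessment would need to track separately.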
When professionalism in AI is viewed through the lens of social contract theory (Donaldson & Dunfee, 1994), ethical or unethical behaviour can be determined based on societal expectations of developers' practices. The theory suggests that the linkage between AI and professionalism should not be disconnected from the expectations of society. In fact, any unethical outcomes experienced or observed by society in the utilization of AI technologies jeopardize the social contract between developers and society. In other words, society plays a role in evaluating the practice of AI developers. The more developers embrace societal expectations of AI, the more likely they are to build trust with the general public (Araujo et al., 2020). Trust in applied AI is essential to overcoming society's scepticism about AI adoption (Hengstler et al., 2016). Given that society's trust and expectations have become an integral consideration for applied AI, we contend that professionalism should be exhibited by AI developers and deployers. Accordingly, trust in AI rests not only on the ethical standards considered during the development of technologies but also on society's (the general public's) evaluation of the perceived (un)professional behaviour of developers or deployers.
Indeed, developers in organizations designing AI systems are also members of a broader society; they are expected to comply with the norms of the community and be accountable for the moral consequences of algorithms as parts of sociotechnical systems, so that diverse stakeholders can realize further social values (Munoko et al., 2020; Sekiguchi & Hori, 2020; Sun et al., 2022). Drawing on professionalism, this study therefore suggests that the professional behaviours of AI technology developers or the entrusted operators, centred on the principles and norms of the profession and codes of ethics, help ensure ethically acceptable behaviour of AI technologies.
Unsurprisingly, supranational institutions, national
governments, and professional associations propose principles, values, and guidelines that AI systems should meet in
order to be deemed trustworthy. For example, in
addressing the question of whose responsibility it is to assess and manage any risks associated with AI applications, the European Parliament recommended that the deployer of an AI system holds the responsibility for controlling any risks and that the level of risk should determine the liability regime. The European Parliament has recommended that autonomous traffic management systems and vehicles, unmanned aircraft, autonomous cleaning devices for public places, and robots be treated as high-risk AI systems subject to a strict liability regime (Stahl, 2021). In this regard, Borenstein and Howard (2021) argue that while professional bodies and associations provide specific guidance on AI and its ethical issues, it is the ultimate responsibility of AI professionals, for example, developers of AI technologies, to ensure that the AI system is intertwined with ethical requirements. This requires a professional mindset attuned to moral sensitivity, whereby the technical aspects and functionalities of designed AI systems reflect ethical guidelines as part of professional responsibility, without conceiving the mentality that ethics is someone else's problem (Borenstein & Howard, 2021).
However, there are currently no discussions of what values and standards AI professionals and organizations should exhibit to develop and deploy AI systems. Based on the evidence collected from the studies compiled in Table 2, we highlight the professionalism-related breaches of values that led to the documented AI ethical failures. Thus, building on AI ethics principles (see, e.g., Jobin et al., 2019; Mittelstadt, 2019), we contend that the key to ensuring professionalism in AI is regulation by unified “fundamental principles of AI professionalism” (Figure 2).
If we adopt a systems perspective of holism, which demonstrates how the integral mechanisms of a system are interrelated, and apply it to professionalism in AI (Nazarov & Klarin, 2020), we inevitably take stakeholders and their interrelationships as the unit of analysis (Su et al., 2022; Sun et al., 2022). In Figure 3, we propose the interaction of the key AI stakeholders (Bietti, 2020; Deshpande & Sharp, 2022; Güngör, 2020; Langer et al., 2021; Meske et al., 2022) in developing and utilizing AI from the systems holism perspective, demonstrating how professionalism among developers and organizations is transferred to society, with feedback loops of resources and demands from society back to organizations and developers.
4.2 | Practice and policy implications
FIGURE 3 Interaction of stakeholders in AI development and consumption.

We also note that in certain sectors, including the military and healthcare (see, e.g., the Step 2 review studies that depict professionalism), where the perceived risks are significantly higher, the responsibility will rest with the person in the “driving seat” of the machine, that is, the operator or deployer. Consider the fact that the US military holds those in charge of autonomous machines ultimately responsible for the actions of these machines,
while China has proposed a ban on lethal autonomous
weapon systems (RAND Corporation, 2020). In the medical field, O'Sullivan et al. (2019) envision that complex
medical machines will be able to perform surgical operations under the guidance of a human surgeon, who will
hold the ultimate responsibility as a human in the driving
seat of autonomous machinery, similarly to the autonomous vehicles that roam the roads today. While these two sectors, the military and healthcare, entrust operations to machines, the responsibility for the actions of AI technologies remains in the hands of the professionals in charge of these technologies. Therefore, we are, at least for now, ruling out the possibility of completely autonomous machines performing actions that carry the possibility of harm to health or loss of life. It is thus the professional behaviour of the operators of these machines that stands between the potentially harmful conduct of the machines and society.
Furthermore, we recognize that each profession has its own ethical standards; our intention is to provide generic sets of values for the employees/professionals who develop solutions, those who utilize the solutions in practice, and the organizations that disseminate and utilize these machines. We argue that the values demonstrated in Figure 2 and discussed in the previous subsection are the key values for ensuring ethical AI in any industry and sector. We utilize the analogy of arms or vehicles being provided to users, where there is only so much manufacturers can do to prevent harm caused by the users. Nevertheless, it is important to produce to the highest quality and ethical standards, with various safety mechanisms, to ensure the products and services are as safe as they can be. Ultimately, however, it is the user who is held accountable and responsible for the use of guns or vehicles. As such, we argue that organizations and individuals that utilize AI are also required to exercise professional standards to ensure safety and risk aversion in the use of these technologies. Therefore, if all parties (regardless of industry or context) ensure professionalism in the creation, distribution, or use of AI technologies, the result will be seamless integration of technologies with minimal risks and the highest ethical standards.
Professional agencies similarly play an inseparable role in advising and guiding technological projects to ensure that impeccable ethical standards are upheld by the technology and its use. For example, Abràmoff et al. (2018) developed one of the first AI-based technologies to complete the Food and Drug Administration (FDA) regulatory process in the United States. The system was designed from data of over 1 million samples, based on an extensive publication history spanning the two previous decades. The FDA was closely involved in advising and guiding the company through a clinical trial of 900 subjects. This close collaboration undoubtedly played a part in the device being approved for medical use in diagnosing diabetic retinopathy.
We thus aim to bring forth the role of professionalism in achieving ethical outcomes, as perceived by society, in the development and operation of AI technologies. AI system developers need to exhibit professionalism in terms
of appropriate knowledge and skills and conformity to the ethical codes of conduct related to the profession. These professionals ought to design appropriate AI systems, since professional behaviour fundamentally refers to adhering to codes of conduct as part of the profession, understanding ethical implications, working with professional associations, and thus ensuring the maximization of benefits for diverse stakeholders including customers, firms, and society (Evetts, 2013; Suddaby & Muzio, 2015). Our proposition lends support to the emerging body of literature suggesting that developers and operators with specialized knowledge and skills can make moral choices and are thus responsible for the ethical consequences of the utilization of machines, given that the technologies can either reinforce or infringe ethical behaviour (Allen et al., 2000; Davis, 2010). Irrespective of whether AI technology is autonomous of human control or is operated by individuals, the ultimate responsibility should remain with the developers and/or operators, depending on context.

We do realize that many of those who develop AI are not necessarily part of one single profession, which means there is not just one code of conduct that applies to all AI developers. Furthermore, many AI developers are not members of a specific profession (in the traditional sense of the term) or a professional organization. Therefore, we call for a systematic approach to identifying the responsible parties, holding accountable either the organizations or individuals that released the technology or those that utilize AI for operational purposes.
4.3 | Future research directions
From the extensive in-depth review of the scholarship and the pertaining discussion, we propose future research directions. First, future studies need to clarify in which industries and/or sectors the responsibility for AI actions rests with the developers, the organization that developed the system, or the ultimate operator of the machine, or whether no clear responsibility exists and why. From this, policy recommendations are required to ensure consistency and the alleviation of possible breaches of ethical conduct. Second, we need to explore and measure the impact of appropriate training in the professional behaviour of developers on the conduct of AI technologies. Building on this point, further research is recommended into how organizations that develop these technologies ensure and instil ethical conduct to be followed by their employees, including the developers. How are the various regulations and recommendations from intergovernmental and intragovernmental bodies, professional
associations, and research bodies applied and/or adhered to? If none of these are employed, or they are insufficiently employed, by firms developing these technologies, what is the cause, and how are these issues to be resolved to ensure the ethical conduct of the technologies? Finally, research in relation to AI rarely, if at all, specifies who the managers, developers, engineers, deployers, and professionals are. Do the same ethical guidelines apply to a temporary contract employee who replaces one of the developers with the relevant qualifications and conduct? If so, are these temporary workers properly informed of the guidelines? In theory, this should be the case, but does it hold in practice?
5 | CONCLUSION
The aim of this paper was to build on the existing literature, which calls for greater accountability of developers and organizations in designing and using AI technologies, by proposing professionalism in the development and use of these technologies as the cornerstone of ensuring ethical outcomes. While the existing literature calls for greater accountability, we use a systems thinking approach to propose the professionalism of AI designers and operators as the means to avoid unethical outcomes. Having analysed the perceived unethical behaviour of AI technologies in the past (Table 2), we concluded that these unethical outcomes would be mitigated if organizations devoted attention to the professionalism of AI developers and deployers throughout the entire development and utilization of these technologies. Professionalism is demonstrated through knowledge and a set of skills coupled with the ethicality of its members. We thus set out several values that individual professionals and organizations ought to exhibit to reduce unethical outcomes of AI technology development and deployment (Figure 2).
Developers are inexorably a part of the real world; thus, it is up to these individuals to instil values into an inanimate object that is tasked with (semi)autonomous conduct. Once the inanimate object is capable of action, the ultimate responsibility should remain with the developer and their organization. Professionals, including scientists, engineers, and behavioural specialists, are practitioners and disseminators of specialist knowledge and are thus governed by various standards and regulations, including ethical codes of conduct. The professionals who design such systems are expected to stand by their developments and are ultimately accountable for the conduct of these developments. Therefore, when a wrongdoing is discovered, the professionals accountable should bear the full extent of the penalties. On the other hand, experiences from the medical and military
applications of AI technologies demonstrate that the responsibility for the machines may be transferred to the deployers and users of the machines. These users are also governed by professional conduct and are in agreement with society to uphold the utmost standards of ethical conduct in applying their specialist knowledge and skills whilst operating the technologies. In either of the above circumstances, it is the professional behaviour of individuals and organizations, whether in designing and instilling ethical values into the machines or in operating these machines, that will ensure the ethical conduct of the technology. To complement this, the institutional environment for regulating and enforcing ethical practices is also developing and is a necessary component, together with the responsible practices of professionals and organizations, in ensuring the ethicality of AI.
ORCID
Anton Klarin
https://orcid.org/0000-0002-5597-4027
R EF E RE N C E S
Abbott, A. (1983). Professional ethics. American Journal of Sociology, 88(5), 855–885. https://doi.org/10.1086/227762
Abràmoff, M. D., Lavin, P. T., Birch, M., Shah, N., & Folk, J. C.
(2018). Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care
offices. NPJ Digital Medicine, 1(1), 1–8. https://doi.org/10.1530/
ey.16.12.1
Adnan, N., Md Nordin, S., Bin Bahruddin, M. A., & Ali, M. (2018).
How trust can drive forward the user acceptance to the technology? In-vehicle technology for autonomous vehicle. Transportation Research Part A: Policy and Practice, 118(November), 819–
836. https://doi.org/10.1016/j.tra.2018.10.019
Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future
artificial moral agent. Journal of Experimental & Theoretical
Artificial Intelligence, 12(3), 251–261. https://doi.org/10.1080/
09528130050111428
Almeida, D., Shmarko, K., & Lomas, E. (2022). The ethics of facial
recognition technologies, surveillance, and accountability in an
age of artificial intelligence: A comparative analysis of US, EU,
and UK regulatory frameworks. AI and Ethics, 2(3), 377–387.
https://doi.org/10.1007/s43681-021-00077-w
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine
bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Araujo, T., Helberger, N., Kruikemeier, S., & de Vreese, C. H.
(2020). In AI we trust? Perceptions about automated decisionmaking by artificial intelligence. AI & Society, 35(3), 611–623.
https://doi.org/10.1007/s00146-019-00931-w
Armour, J., & Sako, M. (2020). AI-enabled business models in legal
services: From traditional law firms to next-generation law
companies? Journal of Professions and Organization, 7(1), 27–
46. https://doi.org/10.1093/jpo/joaa001
Ayling, J., & Chapman, A. (2022). Putting AI ethics to work: Are
the tools fit for purpose? AI and Ethics, 2(3), 405–429. https://
doi.org/10.1007/s43681-021-00084-x
Baum, S. D. (2020). Social choice ethics in artificial intelligence.
AI & Society, 35(1), 165–176. https://doi.org/10.1007/s00146-017-0760-1
Belk, R. (2020). Ethical issues in service robotics and artificial intelligence. Service Industries Journal, 41(13–14), 860–876. https://
doi.org/10.1080/02642069.2020.1727892
Benke, K., & Benke, G. (2018). Artificial intelligence and big data in
public health. International Journal of Environmental Research
and Public Health, 15(12), 2796. https://doi.org/10.3390/
ijerph15122796
Benkler, Y. (2019). Don't let industry write the rules for AI. Nature,
569(7755), 161. https://doi.org/10.1038/d41586-019-01413-1
Bezemer, T., De Groot, M. C. H., Blasse, E., Ten Berg, M. J.,
Kappen, T. H., Bredenoord, A. L., Van Solinge, W. W.,
Hoefer, I. E., & Haitjema, S. (2019). A human(e) factor in clinical decision support systems. Journal of Medical Internet
Research, 21(3), 1–9. https://doi.org/10.2196/11732
Bietti, E. (2020). From ethics washing to ethics bashing. Proceedings of the 2020 Conference on Fairness, Accountability, and
Transparency, 210–219.
Bishop, S. (2018). Anxiety, panic and self-optimization: Inequalities
and the YouTube algorithm. Convergence, 24(1), 69–84. https://
doi.org/10.1177/1354856517736978
Borenstein, J., & Howard, A. (2021). Emerging challenges in AI and
the need for AI ethics education. AI and Ethics, 1(1), 61–65.
https://doi.org/10.1007/s43681-020-00002-7
Braithwaite, V. (2020). Beyond the bubble that is Robodebt: How
governments that lose integrity threaten democracy. Australian
Journal of Social Issues, 55(3), 242–259. https://doi.org/10.1002/
ajs4.122
Brown, E. (2013). Vulnerability and the basis of business ethics:
From fiduciary duties to professionalism. Journal of Business
Ethics, 113(3), 489–504. https://doi.org/10.1007/s10551-012-1318-2
Brusoni, S., & Vaccaro, A. (2017). Ethics, technology and organizational innovation. Journal of Business Ethics, 143(2), 223–226.
https://doi.org/10.1007/s10551-016-3061-6
Burmeister, O. K. (2017). Professional ethics in the information age.
Journal of Information, Communication and Ethics in Society,
15(4), 348–356. https://doi.org/10.1108/JICES-11-2016-0045
Carney, T. (2019). Robo-debt illegality: The seven veils of failed
guarantees of the rule of law. Alternative Law Journal, 44(1), 4–
10. https://doi.org/10.1177/1037969X18815913
Carter, D. (2018). How real is the impact of artificial intelligence?
The business information survey 2018. Business Information
Review, 35(3), 99–115. https://doi.org/10.1177/0266382118790150
Carter, D. (2020). Regulation and ethics in artificial intelligence and
machine learning technologies: Where are we now? Who is
responsible? Can the information professional play a role? Business Information Review, 37(2), 60–68. https://doi.org/10.1177/
0266382120923962
Checkland, P. (1999). Systems thinking. In Rethinking management
information systems (pp. 45–56). Oxford University Press.
https://doi.org/10.1093/oso/9780198775331.003.0004
Chen, B., & Metz, C. (2019). Google's Duplex uses A.I. to mimic
humans. New York Times.
Clarke, R. (2019). Principles and business processes for responsible
AI. Computer Law and Security Review, 35(4), 410–422. https://
doi.org/10.1016/j.clsr.2019.04.007
10991743a, 0, Downloaded from https://onlinelibrary.wiley.com/doi/10.1002/sres.2994 by National Health And Medical Research Council, Wiley Online Library on [18/05/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
KLARIN ET AL.
Cohen, T. (2019). How to leverage artificial intelligence to meet
your diversity goals. Strategic HR Review, 18(2), 62–65. https://
doi.org/10.1108/SHR-12-2018-0105
Cox, A. (2022). The ethics of AI for information professionals: Eight
scenarios. Journal of the Australian Library and Information
Association, 71(3), 201–214. https://doi.org/10.1080/24750158.
2022.2084885
Crossan, M. M., & Apaydin, M. (2010). A multi-dimensional framework of organizational innovation: A systematic review of the
literature. Journal of Management Studies, 47(6), 1154–1191.
https://doi.org/10.1111/j.1467-6486.2009.00880.x
Cruess, R. L., & Cruess, S. R. (2008). Expectations and obligations:
Professionalism and medicine's social contract with society.
Perspectives in Biology and Medicine, 51(4), 579–598. https://doi.
org/10.1353/pbm.0.0045
Cruess, S. R., & Cruess, R. L. (2014). Professionalism and medicine's
social contract. Focus on Health Professional Education: A
Multi-Disciplinary Journal, 16(1), 4–19. https://doi.org/10.
11157/fohpe.v16i1.52
Dalenberg, D. J. (2018). Preventing discrimination in the automated
targeting of job advertisements. Computer Law and Security
Review, 34(3), 615–627. https://doi.org/10.1016/j.clsr.2017.
11.009
Dastin, J. (2018). Amazon scraps secret AI recruiting tool that
showed bias against women. In Ethics of data and analytics
(pp. 296–299). Auerbach Publications. https://doi.org/10.1201/
9781003278290-44
Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated experiments on ad privacy settings. Proceedings on Privacy Enhancing
Technologies, 2015(1), 92–112. https://doi.org/10.1515/popets-2015-0007
Davenport, T., Guha, A., Grewal, D., & Bressgott, T. (2020). How
artificial intelligence will change the future of marketing. Journal of the Academy of Marketing Science, 48, 24–42. https://doi.
org/10.1007/s11747-019-00696-0
Davis, M. (2010). Ain't no one here but us social forces: Constructing the professional responsibility of engineers. Science and
Engineering Ethics, 1–22.
Delacroix, S. (2018). Computer systems fit for the legal profession?
Legal Ethics, 21(2), 119–135. https://doi.org/10.1080/1460728x.
2018.1551702
Deshpande, A., & Sharp, H. (2022). Responsible AI systems: Who
are the stakeholders? AIES 2022 - Proceedings of the 2022
AAAI/ACM Conference on AI, Ethics, and Society, 227–236.
https://doi.org/10.1145/3514094.3534187
Dignum, V. (2018). Ethics in artificial intelligence: Introduction to
the special issue. Ethics and Information Technology, 20(1), 1–3.
https://doi.org/10.1007/s10676-018-9450-z
Dodig Crnkovic, G., & Çürüklü, B. (2012). Robots: Ethical by
design. Ethics and Information Technology, 14(1), 61–71.
https://doi.org/10.1007/s10676-011-9278-2
Donaldson, T., & Dunfee, T. W. (1994). Toward a unified conception of business ethics: Integrative social contracts theory.
Academy of Management Review, 19(2), 252–284. https://doi.
org/10.2307/258705
El Namaki, M. S. S. (2018). How companies are applying AI to the
business strategy formulation. Scholedge International Journal
of Business Policy & Governance, 5(8), 77. https://doi.org/10.
19085/journal.sijbpg050801
Eliazar, I., & Shlesinger, M. F. (2018). Universality of accelerating
change. Physica a: Statistical Mechanics and its Applications,
494, 430–445. https://doi.org/10.1016/j.physa.2017.12.021
Etzioni, A., & Etzioni, O. (2017). Incorporating ethics into artificial
intelligence. The Journal of Ethics, 21(4), 403–418. https://doi.
org/10.1007/s10892-017-9252-2
European Commission. (2021). Ethics guidelines for trustworthy
AI. Shaping Europe's digital future. https://digital-strategy.ec.
europa.eu/en/library/ethics-guidelines-trustworthy-ai
Evetts, J. (2011). A new professionalism? Challenges and opportunities. Current Sociology, 59(4), 406–422. https://doi.org/10.1177/
0011392111402585
Evetts, J. (2013). Professionalism: Value and ideology. Current Sociology, 61(5–6), 778–796. https://doi.org/10.1177/0011392113479316
Felzmann, H., Fosch-Villaronga, E., Lutz, C., & Tamò-Larrieux, A.
(2020). Towards transparency by design for artificial intelligence. Science and Engineering Ethics, 26(6), 3333–3361.
https://doi.org/10.1007/s11948-020-00276-4
Feuerriegel, S., Dolata, M., & Schwabe, G. (2020). Fair AI:
Challenges and opportunities. Business & Information Systems
Engineering, 62(4), 379–384. https://doi.org/10.1007/s12599-020-00650-3
Finlayson, S. G., Bowers, J. D., Ito, J., Zittrain, J. L., Beam, L., &
Kohane, I. S. (2019). Adversarial attacks on medical machine
learning: Emerging vulnerabilities demand new conversations.
Science, 363(6433), 1287–1290. https://doi.org/10.1126/science.
aaw4399
Flahault, A., Geissbuhler, A., Guessous, I., Guérin, P. J., Bolon, I.,
Salathé, M., & Escher, G. (2017). Precision global health in the
digital age. Swiss Medical Weekly, 147, w14423. https://doi.org/
10.4414/smw.2017.14423
Fletcher, R. R., Nakeshimana, A., & Olubeko, O. (2021). Addressing
fairness, bias, and appropriate use of artificial intelligence and
machine learning in global health. Frontiers in Artificial Intelligence, 3(April), 561802. https://doi.org/10.3389/frai.2020.561802
Galvin, P., Klarin, A., Nyuur, R., & Burton, N. (2021). A bibliometric content analysis of do-it-yourself (DIY) science: where
to from here for management research? Technology Analysis &
Strategic Management, 33(10), 1255–1266. https://doi.org/10.
1080/09537325.2021.1959031
Geis, J. R., Brady, A. P., Wu, C. C., Spencer, J., Ranschaert, E.,
Jaremko, J. L., Langer, S. G., Kitts, A. B., Birch, J.,
Shields, W. F., Van den Hoven van Genderen, R., Kotter, E.,
Gichoya, J. W., Cook, T. S., Morgan, M. B., An Tang, M.,
Safdar, N. M., & Kohl, M. (2019). Ethics of artificial intelligence
in radiology: Summary of the Joint European and North American Multisociety Statement. Radiology, 293(2), 436–440. https://
doi.org/10.1148/radiol.2019191586
Gerke, S., Minssen, T., & Cohen, G. (2020). Ethical and legal challenges of artificial intelligence-driven healthcare. In Artificial
intelligence in healthcare (pp. 295–336). Academic Press.
https://doi.org/10.1016/B978-0-12-818438-7.00012-5
Ghosh, A. K., Ullah, A. M. M. S., Teti, R., & Kubo, A. (2021). Developing sensor signal-based digital twins for intelligent machine
tools. Journal of Industrial Information Integration, 24, 100242.
https://doi.org/10.1016/j.jii.2021.100242
Gibert, M., & Martin, D. (2022). In search of the moral status of AI:
Why sentience is a strong argument. AI & Society, 37(1), 319–
330. https://doi.org/10.1007/s00146-021-01179-z
Gibson, D. E. (2003). Developing the professional self-concept: Role
model construals in early, middle, and late career stages. Organization Science, 14(5), 591–610. https://doi.org/10.1287/orsc.
14.5.591.16767
Gillies, A., & Smith, P. (2022). Can AI systems meet the ethical
requirements of professional decision-making in health care?
AI and Ethics, 2(1), 41–47. https://doi.org/10.1007/s43681-021-00085-w
Gong, D., Goh, O. S., Kumar, Y. J., Ye, Z., & Chi, W. (2020).
Deepfake forensics, an ai-synthesized detection with deep convolutional generative adversarial networks. International Journal of
Advanced Trends in Computer Science and Engineering, 9(3),
2861–2870. https://doi.org/10.30534/ijatcse/2020/58932020
Goodman, B., & Flaxman, S. (2017). European union regulations on
algorithmic decision making and a “right to explanation”. AI
Magazine, 38(3), 50–57. https://doi.org/10.1609/aimag.v38i3.
2741
Goto, M. (2021). Collective professional role identity in the age of
artificial intelligence. Journal of Professions and Organization,
8(1), 86–107. https://doi.org/10.1093/jpo/joab003
Güngör, H. (2020). Creating value with artificial intelligence: A
multi-stakeholder perspective. Journal of Creating Value, 6(1),
72–85. https://doi.org/10.1177/2394964320921071
Hargreaves, A. (2000). Four ages of professionalism and professional learning. Teachers and Teaching: Theory and Practice,
6(2), 151–182. https://doi.org/10.1080/713698714
Heilinger, J. C. (2022). The ethics of AI ethics: A constructive critique. Philosophy & Technology, 35(3), 1–20. https://doi.org/10.
1007/s13347-022-00557-9
Hengstler, M., Enkel, E., & Duelli, S. (2016). Applied artificial intelligence and trust—The case of autonomous vehicles and medical
assistance devices. Technological Forecasting and Social Change,
105, 105–120. https://doi.org/10.1016/j.techfore.2015.12.014
Hildt, E. (2019). Artificial intelligence: Does consciousness matter?
Frontiers in Psychology, 10(JUL), 1535. https://doi.org/10.3389/
fpsyg.2019.01535
Howard, A., & Borenstein, J. (2018). The ugly truth about ourselves
and our robot creations: The problem of bias and social inequity. Science and Engineering Ethics, 24(5), 1521–1536. https://
doi.org/10.1007/s11948-017-9975-2
Iacobucci, G. (2018). Babylon app will be properly regulated to
ensure safety, government insists. BMJ, 362(July), k3215.
https://doi.org/10.1136/bmj.k3215
Jiang, H., Gai, J., Zhao, S., Chaudhry, P. E., & Chaudhry, S. S.
(2022). Applications and development of artificial intelligence
system from the perspective of system science: A bibliometric
review. Systems Research and Behavioral Science, 39(3), 361–
378. https://doi.org/10.1002/sres.2865
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of
AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399.
https://doi.org/10.1038/s42256-019-0088-2
Johnson, D. G. (2015). Technology with no human responsibility.
Journal of Business Ethics, 127(4), 707–715. https://doi.org/10.
1007/s10551-014-2180-1
Jos, P. H. (2006). Social contract theory: Implications for professional ethics. The American Review of Public Administration,
36(2), 139–155. https://doi.org/10.1177/0275074005282860
Kaplan, A., & Haenlein, M. (2020). Rulers of the world, unite! The
challenges and opportunities of artificial intelligence. Business
Horizons, 63(1), 37–50. https://doi.org/10.1016/j.bushor.2019.
09.003
Khalil, A., Ahmed, S. G., Khattak, A. M., & Al-Qirim, N. (2020).
Investigating bias in facial analysis systems: A systematic
review. IEEE Access, 8, 130751–130761. https://doi.org/10.1109/
ACCESS.2020.3006051
Khalil, O. E. M. (1993). Artificial decision-making and artificial
ethics: A management concern. Journal of Business Ethics,
12(4), 313–321. https://doi.org/10.1007/BF01666535
Klarin, A., Sharmelly, R., & Suseno, Y. (2021). A systems perspective in examining industry clusters: Case studies of clusters in
Russia and India. Journal of Risk and Financial Management,
14(8), 367. https://doi.org/10.3390/jrfm14080367
Klarin, A., Suseno, Y., & Lajom, J. A. L. (2023). Systematic literature review of convergence: A systems perspective and re-evaluation of the convergence process. IEEE Transactions on
Engineering Management, 70(4), 1531–1543. https://doi.org/10.
1109/TEM.2021.3126055
Klarin, A., & Xiao, Q. (2023). Automation in architecture, engineering and construction: A scientometric analysis and implications
for management architecture. Engineering Construction and
Architectural Management, In press. https://doi.org/10.1108/
ECAM-08-2022-0770
Langer, M., Oster, D., Speith, T., Hermanns, H., Kästner, L.,
Schmidt, E., Sesing, A., & Baum, K. (2021). What do we want
from Explainable Artificial Intelligence (XAI)?—A stakeholder
perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artificial Intelligence, 296, 103473.
https://doi.org/10.1016/j.artint.2021.103473
Larson, M. S. (1979). The rise of professionalism: A sociological analysis. University of California Press.
Leslie, D., Burr, C., Aitken, M., Cowls, J., Katell, M., & Briggs, M.
(2021). Artificial intelligence, human rights, democracy, and
the rule of law: A primer. In The Council of Europe. https://
doi.org/10.2139/ssrn.3817999
Li, M., Xie, Y., Gao, Y., & Zhao, Y. (2022). Organization virtualization driven by artificial intelligence. Systems Research and
Behavioral Science, 39(3), 633–640. https://doi.org/10.1002/sres.
2863
Loideain, N. N., & Adams, R. (2020). From Alexa to Siri and the
GDPR: The gendering of virtual personal assistants and the role
of data protection impact assessments. Computer Law and Security Review, 36, 105366. https://doi.org/10.1016/j.clsr.2019.105366
Lu, Y. (2019). Artificial intelligence: A survey on evolution, models,
applications and future trends. Journal of Management Analytics, 6(1), 1–29. https://doi.org/10.1080/23270012.2019.1570365
Luxton, D. D. (2014a). Artificial intelligence in psychological practice: Current and future applications and implications. Professional Psychology: Research and Practice, 45(5), 332–339.
https://doi.org/10.1037/a0034559
Luxton, D. D. (2014b). Recommendations for the ethical use and
design of artificial intelligent care providers. Artificial Intelligence in Medicine, 62(1), 1–10. https://doi.org/10.1016/j.artmed.
2014.06.004
Martin, K. E. (2019a). Designing ethical algorithms. MIS Quarterly
Executive, 18(2), 129–142. https://doi.org/10.17705/2msqe.00012
Martin, K. E. (2019b). Ethical implications and accountability of
algorithms. Journal of Business Ethics, 160(4), 835–850. https://
doi.org/10.1007/s10551-018-3921-3
Martin, K. E., & Freeman, R. E. (2004). The separation of technology and ethics in business ethics. Journal of Business Ethics,
53(4), 353–364. https://doi.org/10.1023/B:BUSI.0000043492.
42150.b6
Martin, K. E., Shilton, K., & Smith, J. (2019). Business and the ethical implications of technology: Introduction to the symposium.
Journal of Business Ethics, 160(2), 307–317. https://doi.org/10.
1007/s10551-019-04213-9
Martin, P. (2018). Extortion is no way to fix the budget. Sydney
Morning Herald.
Matthias, A. (2004). The responsibility gap: Ascribing responsibility
for the actions of learning automata. Ethics and Information
Technology, 6(3), 175–183. https://doi.org/10.1007/s10676-004-3422-1
Mehrabi, N., Morstatter, F., Saxena, N., & Lerman, K. (2021). A survey on bias and fairness in machine learning. ACM Computing
Surveys (CSUR), 54(6), 1–35. https://doi.org/10.1145/3457607
Meske, C., Bunde, E., Schneider, J., & Gersch, M. (2022). Explainable artificial intelligence: Objectives, stakeholders, and future
research opportunities. Information Systems Management,
39(1), 53–63. https://doi.org/10.1080/10580530.2020.1849465
Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI.
Nature Machine Intelligence, 1(11), 501–507. https://doi.org/10.
1038/s42256-019-0114-4
Mohamed, S., Png, M., & Isaac, W. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence.
Philosophy & Technology, 33, 3–26. https://doi.org/10.1007/
s13347-020-00405-8
Montes, G. A., & Goertzel, B. (2019). Distributed, decentralized,
and democratized artificial intelligence. Technological Forecasting and Social Change, 141(October 2018), 354–358. https://doi.
org/10.1016/j.techfore.2018.11.010
Munoko, I., Brown-Liburd, H. L., & Vasarhelyi, M. (2020). The ethical implications of using artificial intelligence in auditing. Journal of Business Ethics, 167, 209–234. https://doi.org/10.1007/
s10551-019-04407-1
Musikanski, L., Rakova, B., Bradbury, J., Phillips, R., & Manson, M.
(2020). Artificial intelligence and community well-being: A proposal for an emerging area of research. International Journal of
Community Well-Being, 3(1), 39–55. https://doi.org/10.1007/
s42413-019-00054-6
Nazarov, D., & Klarin, A. (2020). Taxonomy of Industry 4.0
research: Mapping scholarship and industry insights. Systems
Research and Behavioral Science, 37(4), 535–556. https://doi.
org/10.1002/sres.2700
Nelson, G. S. (2019). Bias in artificial intelligence. North Carolina
Medical Journal, 80(4), 220–222. https://doi.org/10.18043/ncm.
80.4.220
Neri, E., Coppola, F., Miele, V., Bibbolino, C., & Grassi, R. (2020).
Artificial intelligence: Who is responsible for the diagnosis?
Radiologia Medica, 125(6), 517–521. https://doi.org/10.1007/
s11547-020-01135-9
Neubert, M. J., & Montañez, G. D. (2020). Virtue as a framework for
the design and use of artificial intelligence. Business Horizons,
63(2), 195–204. https://doi.org/10.1016/j.bushor.2019.11.001
Nguyen, N. T., Basuray, M. T., Smith, W. P., Kopka, D., &
McCulloh, D. N. (2008). Ethics perception: Does teaching make
a difference? Journal of Education for Business, 84(2), 66–75.
https://doi.org/10.3200/JOEB.84.2.66-75
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019).
Dissecting racial bias in an algorithm used to manage the
health of populations. Science, 366(6464), 447–453. https://doi.
org/10.1126/science.aax2342
OECD. (2019a). Forty-two countries adopt new OECD principles on
artificial intelligence. Science and Technology. https://www.
oecd.org/science/forty-two-countries-adopt-new-oecd-principles-on-artificial-intelligence.htm
OECD. (2019b). Recommendation of the council on artificial intelligence. OECD Legal Instruments. https://legalinstruments.oecd.
org/en/instruments/OECD-LEGAL-0449
O'Leary, D. E. (2019). Google's Duplex: Pretending to be human.
Intelligent Systems in Accounting, Finance and Management,
26(1), 46–53. https://doi.org/10.1002/isaf.1443
Omoteso, K. (2012). The application of artificial intelligence in auditing: Looking back to the future. Expert Systems with Applications,
39(9), 8490–8495. https://doi.org/10.1016/j.eswa.2012.01.098
O'Sullivan, S., Nevejans, N., Allen, C., Blyth, A., Leonard, S.,
Pagallo, U., Holzinger, K., Holzinger, A., Sajid, M. I., &
Ashrafian, H. (2019). Legal, regulatory, and ethical frameworks
for development of standards in artificial intelligence (AI) and
autonomous robotic surgery. International Journal of Medical
Robotics and Computer Assisted Surgery, 15(1), 1–12. https://
doi.org/10.1002/rcs.1968
Ozkazanc-Pan, B. (2019). Diversity and future of work: Inequality
abound or opportunities for all. Management Decision, 59,
2645–2659. https://doi.org/10.1108/MD-02-2019-0244
Paraman, P., & Anamalah, S. (2023). Ethical artificial intelligence
framework for a good AI society: Principles, opportunities and
perils. AI & Society, 38(2), 595–611. https://doi.org/10.1007/
s00146-022-01458-3
Petrillo, A., De Felice, F., Cioffi, R., & Zomparelli, F. (2018). Fourth
industrial revolution: Current practices, challenges, and opportunities. In A. Petrillo (Ed.), Digital transformation in smart
manufacturing (pp. 1–20). Intech Open. https://doi.org/10.
5772/32009
Pfadenhauer, M. (2006). Crisis or decline?: Problems of legitimation
and loss of trust in modern professionalism. Current Sociology,
54(4), 565–578. https://doi.org/10.1177/0011392106065088
Podsakoff, P. M., MacKenzie, S. B., Podsakoff, N. P., &
Bachrach, D. G. (2008). Scholarly influence in the field of management: A bibliometric analysis of the determinants of university and author impact in the management literature in the
past quarter century. Journal of Management, 34(4), 641–720.
https://doi.org/10.1177/0149206308319533
Prates, M. O. R., Avelar, P. H., & Lamb, L. C. (2020). Assessing gender bias in machine translation: A case study with Google
Translate. Neural Computing and Applications, 32(10), 6363–
6381. https://doi.org/10.1007/s00521-019-04144-6
Price, W. N., Gerke, S., & Cohen, I. G. (2019). Potential liability for
physicians using artificial intelligence. JAMA: The Journal of
the American Medical Association, 322(18), 1765–1766. https://
doi.org/10.1001/jama.2019.4914
Qu, J., Zhao, Y., & Xie, Y. (2022). Artificial intelligence leads the
reform of education models. Systems Research and Behavioral
Science, 39(3), 581–588. https://doi.org/10.1002/sres.2864
Raab, C. D. (2020). Information privacy, impact assessment, and
the place of ethics. Computer Law and Security Review, 37,
105404. https://doi.org/10.1016/j.clsr.2020.105404
Rahwan, I. (2018). Society-in-the-loop: Programming the algorithmic social contract. Ethics and Information Technology, 20, 5–
14. https://doi.org/10.1007/s10676-017-9430-8
RAND Corporation. (2020). Military applications of artificial intelligence: Ethical concerns in an uncertain world. In IEEE
Region 5 Conference.
Rességuier, A., & Rodrigues, R. (2020). AI ethics should not remain
toothless! A call to bring back the teeth of ethics. Big Data &
Society, 7(2), 1–5. https://doi.org/10.1177/2053951720942541
Russell, S., & Norvig, P. (2010). Artificial intelligence: A modern
approach (3rd ed.). Pearson.
Ryan, M. (2014). The digital mind: An exploration of artificial intelligence. Createspace Independent Pub.
Saithibvongsa, P., & Yu, J. E. (2018). Artificial intelligence in the
computer-age threatens human beings and working conditions
at workplaces. Electronics Science Technology and Application,
5(3), 1–12. https://doi.org/10.18686/esta.v5i3.76
Saks, M. (2016). A review of theories of professions, organizations
and society: The case for neo-Weberianism, neoinstitutionalism and eclecticism. Journal of Professions and
Organization, 3(2), 170–187. https://doi.org/10.1093/jpo/jow005
Sarikakis, K., Korbiel, I., & Piassaroli Mantovaneli, W. (2018).
Social control and the institutionalization of human rights as
an ethical framework for media and ICT corporations. Journal
of Information, Communication and Ethics in Society, 16(3),
275–289. https://doi.org/10.1108/JICES-02-2018-0018
Searle, J. R. (1980). Minds, brains, and programs. Behavioral and
Brain Sciences, 3(3), 417–424. https://doi.org/10.1017/
S0140525X00005756
Sekiguchi, K., & Hori, K. (2020). Organic and dynamic tool for use
with knowledge base of AI ethics for promoting engineers'
practice of ethical AI design. AI & Society, 35, 51–71. https://
doi.org/10.1007/s00146-018-0867-z
Sen, A. (2008). Social choice theory: A re-examination. Econometrica: Journal of the Econometric Society, 45(1), 53–89. https://
doi.org/10.2307/1913287
Serrano, W. (2018). Digital systems in smart city and infrastructure:
Digital as a service. Smart Cities, 1(1), 134–153. https://doi.org/
10.3390/smartcities1010008
Snizek, W. E. (1972). Hall's professionalism scale: An empirical
reassessment. American Sociological Review, 37(1), 109–114.
https://doi.org/10.2307/2093498
Stahl, B. C. (2021). Addressing ethical issues in AI. In B. C. Stahl
(Ed.), Artificial intelligence for a better future (pp. 55–79).
Springer. https://doi.org/10.1007/978-3-030-69978-9
Strickland, E. (2019). IBM Watson, heal thyself: How IBM overpromised and underdelivered on AI health care. IEEE Spectrum,
56(4), 24–31. https://doi.org/10.1109/MSPEC.2019.8678513
Su, H., Qu, X., Tian, S., Ma, Q., Li, L., & Chen, Y. (2022). Artificial
intelligence empowerment: The impact of research and development investment on green radical innovation in high-tech
enterprises. Systems Research and Behavioral Science, 39(3),
489–502. https://doi.org/10.1002/sres.2853
Suddaby, R., & Muzio, D. (2015). Theoretical perspectives on the
professions. In The Oxford handbook of professional service
firms. Oxford University Press.
Sun, Y., Xu, X., Yu, H., & Wang, H. (2022). Impact of value cocreation in the artificial intelligence innovation ecosystem on
competitive advantage and innovation intelligibility. Systems
Research and Behavioral Science, 39(3), 474–488. https://doi.
org/10.1002/sres.2860
Sutton, S. G., Arnold, V., & Holt, M. (2018). How much automation
is too much? Keeping the human relevant in knowledge work.
Journal of Emerging Technologies in Accounting, 15(2), 15–25.
https://doi.org/10.2308/jeta-52311
Svensson, L. G. (2006). New professionalism, trust and competence:
Some conceptual remarks and empirical data. Current Sociology, 54(4), 579–593. https://doi.org/10.1177/0011392106065089
Taddeo, M., McNeish, D., Blanchard, A., & Edgar, E. (2021). Ethical
principles for artificial intelligence in national defence. Philosophy and Technology, 34(4), 1707–1729. https://doi.org/10.1007/
s13347-021-00482-3
The Bureau of National Affairs. (2020). Regulation and Legislation
Lag Behind Constantly Evolving Technology. Bloomberg Law.
https://pro.bloomberglaw.com/regulation-and-legislation-lag-behind-technology/
Thomson, A. J., & Schmoldt, D. L. (2001). Ethics in computer software design and development. Computers and Electronics in
Agriculture, 30(1–3), 85–102. https://doi.org/10.1016/S0168-1699(00)00158-7
Timmers, P. (2019). Ethics of AI and cybersecurity when sovereignty is at stake. Minds and Machines, 29(4), 635–645. https://
doi.org/10.1007/s11023-019-09508-4
Tischbirek, A. (2020). Artificial intelligence and discrimination: Discriminating against discriminatory systems. In T. Wischmeyer &
T. Rademacher (Eds.), Regulating artificial intelligence (pp. 103–
121). Springer. https://doi.org/10.1007/978-3-030-32361-5_5
Tóth, Z., Caruana, R., Gruber, T., & Loebbecke, C. (2022). The
dawn of the AI robots: Towards a new framework of AI robot
accountability. Journal of Business Ethics, 178(4), 895–916.
https://doi.org/10.1007/s10551-022-05050-z
Vakkuri, V., Kemell, K. K., Kultanen, J., & Abrahamsson, P. (2020).
The current state of industrial practice in artificial intelligence
ethics. IEEE Software, 37(4), 50–57. https://doi.org/10.1109/MS.
2020.2985621
Valenduc, G. (2018). Technological revolutions and societal transitions. Foresight Brief, 4, 3180000. https://doi.org/10.2139/ssrn.
3180000
Vetrò, A., Santangelo, A., Beretta, E., & De Martin, J. C. (2019). AI:
From rational agents to socially responsible agents. Digital Policy, Regulation and Governance, 21(3), 291–304. https://doi.org/
10.1108/DPRG-08-2018-0049
Vieira, E. S., & Gomes, J. A. N. F. (2009). A comparison of Scopus
and Web of Science for a typical university. Scientometrics,
81(2), 587–600. https://doi.org/10.1007/s11192-009-2178-0
von Bertalanffy, L. (1968). General system theory. George Braziller.
Vu, H. T., & Lim, J. (2022). Effects of country and individual factors on public acceptance of artificial intelligence and robotics technologies: A multilevel SEM analysis of 28-country survey data. Behaviour & Information Technology, 41(7), 1515–1528. https://doi.org/10.1080/0144929X.2021.1884288
Wagner, B. (2018). Ethics as an escape from regulation: From "ethics-washing" to "ethics-shopping"? In E. Bayamlioglu, I. Baraliuc, & L. Janssens (Eds.), Being profiled: Cogitas ergo sum. 10 years of "Profiling the European Citizen" (pp. 84–88). Amsterdam University Press. https://doi.org/10.2307/j.ctvhrd092.18
Wangmo, T., Lipps, M., Kressig, R. W., & Ienca, M. (2019). Ethical
concerns with the use of intelligent assistive technology:
10991743a, 0, Downloaded from https://onlinelibrary.wiley.com/doi/10.1002/sres.2994 by National Health And Medical Research Council, Wiley Online Library on [18/05/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
KLARIN ET AL.
Findings from a qualitative study with professional stakeholders. BMC Medical Ethics, 20(1), 98. https://doi.org/10.1186/s12910-019-0437-z
Weber, R. H. (2020). Socio-ethical values and legal rules on automated platforms: The quest for a symbiotic relationship. Computer Law and Security Review, 36, 105380. https://doi.org/10.1016/j.clsr.2019.105380
Welie, J. V. M. (2012). Social contract theory as a foundation of the
social responsibilities of health professionals. Medicine, Health
Care and Philosophy, 15(3), 347–355. https://doi.org/10.1007/
s11019-011-9355-7
Whitby, B. (2008). Computing machinery and morality. AI & Society, 22(4), 551–563. https://doi.org/10.1007/s00146-007-0100-y
Wiesenberg, M., & Tench, R. (2020). Deep strategic mediatization: Organizational leaders' knowledge and usage of social bots in an era of disinformation. International Journal of Information Management, 51, 102042. https://doi.org/10.1016/j.ijinfomgt.2019.102042
Wilensky, H. L. (1964). The professionalization of everyone? American Journal of Sociology, 70(2), 137–158. https://doi.org/10.1086/223790
Woods, H. S. (2018). Asking more of Siri and Alexa: Feminine persona in service of surveillance capitalism. Critical Studies in Media Communication, 35(4), 334–349. https://doi.org/10.1080/15295036.2018.1488082
Wright, S. A., & Schultz, A. E. (2018). The rising tide of artificial intelligence and business automation: Developing an ethical framework. Business Horizons, 61(6), 823–832. https://doi.org/10.1016/j.bushor.2018.07.001
Xu, L. D. (2020). Industry 4.0—Frontiers of fourth industrial revolution. Systems Research and Behavioral Science, 37(4), 531–534.
https://doi.org/10.1002/sres.2719
Xu, L. D. (2022). Systems research on artificial intelligence. Systems Research and Behavioral Science, 39(3), 359–360. https://doi.org/10.1002/sres.2839
Yeung, K., Howes, A., & Pogrebna, G. (2020). AI governance by
human rights-centred design, deliberation and oversight: An
end to ethics washing. In M. D. Dubber, F. Pasquale, & S. Das
(Eds.), The Oxford Handbook of Ethics of AI (pp. 77–106).
Oxford University Press.
Zajko, M. (2021). Conservative AI and social inequality: Conceptualizing alternatives to bias through social theory. AI & Society, 36(3), 1047–1056. https://doi.org/10.1007/s00146-021-01153-9
Zhu, J., & Liu, W. (2020). A tale of two databases: The use of Web
of Science and Scopus in academic papers. Scientometrics,
123(1), 321–335. https://doi.org/10.1007/s11192-020-03387-8
Ziewitz, M. (2016). Governing algorithms: Myth, mess, and
methods. Science, Technology & Human Values, 41(1), 3–16.
https://doi.org/10.1177/0162243915608948
Zuiderveen Borgesius, F. J. (2020). Strengthening legal protection
against discrimination by algorithms and artificial intelligence.
International Journal of Human Rights, 24(10), 1572–1593.
https://doi.org/10.1080/13642987.2020.1743976
How to cite this article: Klarin, A., Ali Abadi, H.,
& Sharmelly, R. (2024). Professionalism in artificial
intelligence: The link between technology and
ethics. Systems Research and Behavioral Science,
1–24. https://doi.org/10.1002/sres.2994