Books by John Danaher
Automation and Utopia: Human Flourishing in a World Without Work, 2019
Human obsolescence is imminent. We are living through an era in which our activity is becoming less and less relevant to our well-being and to the fate of our planet. This trend toward increased obsolescence is likely to continue in the future, and we must do our best to prepare ourselves and our societies for this reality. Far from being a cause for despair, this is in fact an opportunity for optimism. Harnessed in the right way, the technology that hastens our obsolescence can open us up to new utopian possibilities and enable heightened forms of human flourishing.
Peer-reviewed Journals by John Danaher
Futures
Human values seem to vary across time and space. What implications does this have for the future of human value? Will our human and (perhaps) post-human offspring have very different values from our own? Can we study the future of human values in an insightful and systematic way? This article makes three contributions to the debate about the future of human values. First, it argues that the systematic study of future values is both necessary in and of itself and an important complement to other future-oriented inquiries. Second, it sets out a methodology and a set of methods for undertaking this study. Third, it gives a practical illustration of what this 'axiological futurism' might look like by developing a model of the axiological possibility space that humans are likely to navigate over the coming decades.
Cambridge Quarterly of Healthcare Ethics, 2021
Henry Shevlin's paper, "How could we know when a robot was a moral patient?", argues that we should recognize robots and artificial intelligence (AI) as psychological moral patients if they are cognitively equivalent to other beings that we already recognize as psychological moral patients (i.e., humans and, at least some, animals). In defending this cognitive equivalence strategy, Shevlin draws inspiration from the "behavioral equivalence" strategy that I have defended in previous work, but argues that it is flawed in crucial respects. Unfortunately, and I guess this is hardly surprising, I cannot bring myself to agree that the cognitive equivalence strategy is the superior one. In this article, I try to explain why in three steps. First, I clarify the nature of the question that I take both myself and Shevlin to be answering. Second, I clear up some potential confusions about the behavioral equivalence strategy, addressing some other recent criticisms of it. Third, I explain why I still favor the behavioral equivalence strategy over the cognitive equivalence one.
AI and Ethics, 2020
Rapid advances in AI-based automation have led to a number of existential and economic concerns. In particular, as automating technologies develop enhanced competency, they seem to threaten the values associated with meaningful work. In this article, we focus on one such value: the value of achievement. We argue that achievement is a key part of what makes work meaningful and that advances in AI and automation give rise to a number of achievement gaps in the workplace. This could limit people's ability to participate in meaningful forms of work. Achievement gaps are interesting, in part, because they are the inverse of the (negative) responsibility gaps already widely discussed in the literature on AI ethics. Having described and explained the problem of achievement gaps, the article concludes by identifying four possible policy responses to the problem.
Neuroethics
The idea that humans should abandon their individuality and use technology to bind themselves together into hivemind societies seems both far-fetched and frightening, something redolent of the worst dystopias from science fiction. In this article, we argue that these common reactions to the ideal of a hivemind society are mistaken. The idea that humans could form hiveminds is sufficiently plausible for its axiological consequences to be taken seriously. Furthermore, far from being a dystopian nightmare, the hivemind society could be desirable and could enable a form of sentient flourishing. Consequently, we should not be so quick to dismiss it. We provide two arguments in support of this claim (the axiological openness argument and the desirability argument) and then defend it against three major objections. "We are the Borg. Lower your shields and surrender your ships. We will add your biological and technological distinctiveness to our own. Your culture will adapt to service us. Resistance is futile."
Law, Innovation and Technology, 2020
Artificial intelligence (AI) is increasingly expected to disrupt the ordinary functioning of society. From how we fight wars or govern society, to how we work and play, and from how we create to how we teach and learn, there is almost no field of human activity which is believed to be entirely immune from the impact of this emerging technology. This poses a multifaceted problem when it comes to designing and understanding regulatory responses to AI. This article aims to: (i) defend the need for a novel conceptual model for understanding the systemic legal disruption caused by new technologies such as AI; (ii) situate this model in relation to preceding debates about the interaction of regulation with new technologies (particularly the 'cyberlaw' and 'robolaw' debates); and (iii) set out a detailed model for understanding the legal disruption precipitated by AI, examining both the pathways stemming from new affordances that can give rise to a regulatory 'disruptive moment', and the Legal Development, Displacement, or Destruction that can ensue. The article proposes that this model of legal disruption is broadly generalisable to understanding the legal effects and challenges of other emerging technologies. Thus, while our model of legal disruption is crafted in response to the specific regulatory challenges raised by AI, we believe that, with minor modifications, it can be usefully deployed to understand the challenges raised by future emerging technologies, and to structure regulatory responses to those challenges.
Techne: Research in Philosophy and Technology, 2020
Can human life have value in a world in which humans are rendered obsolete by technological advances? This article answers this question by developing an extended analysis of the axiological impact of human obsolescence. In doing so, it makes four main arguments. First, it argues that human obsolescence is a complex phenomenon that can take on at least four distinct forms. Second, it argues that one of these forms of obsolescence ('actual-general' obsolescence) is not a coherent concept and hence not a plausible threat to human well-being. Third, it argues that existing fears of technologically-induced human obsolescence are less compelling than they first appear. Fourth, it argues that there are two reasons for embracing a world of widespread, technologically-induced human obsolescence.
Social Theory and Practice, 2020
This article argues that access to meaningful sexual experience should be included within the set of goods that are subject to principles of distributive justice. It argues that some people are currently unjustly excluded from meaningful sexual experience, and that it is not implausible to suggest that they might thereby have certain claim rights to sexual inclusion. This does not entail that anyone has a right to sex with another person, but it does entail that duties may be imposed on society to foster greater sexual inclusion. This is a controversial thesis, and the article addresses the controversy by engaging with four major objections to it: the misogyny objection; the impossibility objection; the stigmatisation objection; and the unjust social engineering objection.
American Journal of Bioethics, 2019
In both the present target article (Sparrow 2019) and in his earlier work on obsolescence and the enhanced rat-race (Sparrow 2015), Robert Sparrow has identified an important and neglected concern about the impact of rapid improvements in enhancement technology on the quality of human life. For this he is to be commended. Broadly speaking, I agree with Sparrow that rapid obsolescence is a problem that proponents of enhancement need to address: a world in which I and my offspring obsolesce as quickly as the latest model of smartphone does not seem particularly inspiring at first glance. But, as Sparrow himself points out, the concern about obsolescence does not provide an all-things-considered case against the use of genetic enhancement, nor does it balance the risk of human obsolescence against other risks and rewards of rapid technological change. I believe that if we take a broader perspective on technological change and its consequences for human flourishing, we can approach Sparrow's concerns in a new light, and see human obsolescence as an opportunity, not a crisis.
Ethics and Information Technology, 2019
If a robot sends a deceptive signal to a human user, is this always and everywhere an unethical act, or might it sometimes be ethically desirable? Building upon previous work in robot ethics, this article tries to clarify and refine our understanding of the ethics of robotic deception. It does so by making three arguments. First, it argues that we need to distinguish between three main forms of robotic deception (external state deception; superficial state deception; and hidden state deception) in order to think clearly about its ethics. Second, it argues that the second type of deception, superficial state deception, is not best thought of as a form of deception, even though it is frequently criticised as such. And third, it argues that the third type of deception is best understood as a form of betrayal, because doing so captures the unique ethical harm to which it gives rise and justifies special ethical protections against its use.
AJOB Neuroscience, 2019
In her target article, Karola Kreitmair (in press) discusses what she calls direct-to-consumer neurotechnologies (DTC neurotechnologies): technologies available on the market for monitoring or modulating neurological and psychological functioning. Kreitmair’s aim is to identify a set of basic ethical concerns that apply to this class of technologies, a class which, Kreitmair acknowledges, is unwieldy. We recently undertook a similar project in a joint pair of papers (Danaher, Nyholm, and Earp 2018a, 2018b): we identified an unwieldy class of what we call quantified relationship technologies (QR technologies), technologies used for tracking or logging aspects of romantic or other intimate relationships with the aim of improving them. In our articles, we identified and critically assessed a set of ethical concerns related to such technologies. In this commentary, we wish to explore the relationship between our treatment of the ethics of QR technologies and Kreitmair’s treatment of the ethics of DTC neurotechnologies.
Science and Engineering Ethics, 2019
Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory, 'ethical behaviourism', which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position on board, it is argued that the performative threshold that robots need to cross in order to be afforded significant moral status may not be that high, and that they may soon cross it (if they haven't done so already). Finally, the implications of this for our procreative duties to robots are considered, and it is argued that we may need to take seriously a duty of 'procreative beneficence' towards robots.
Journal of Posthuman Studies
Friendship is an important part of the good life. While many roboticists are eager to create friend-like robots, many philosophers and ethicists are concerned. They argue that robots cannot really be our friends. Robots can only fake the emotional and behavioural cues we associate with friendship. Consequently, we should resist the drive to create robot friends. In this article, I argue that the philosophical critics are wrong. Using the classic virtue-ideal of friendship, I argue that robots can plausibly be considered our virtue friends, and that to regard them as such is philosophically reasonable. Furthermore, I argue that even if you do not think that robots can be our virtue friends, they can fulfil other important friendship roles, and can complement and enhance the virtue friendships between human beings.
Medical Law Review
In July 2014, the roboticist Ronald Arkin suggested that child sex robots could be used to treat those with paedophilic predilections in the same way that methadone is used to treat heroin addicts. Taking this on board, it would seem that there is reason to experiment with the regulation of this technology. But most people seem to disagree with this idea, with legal authorities in both the UK and US taking steps to outlaw such devices. In this paper, I subject these different regulatory attitudes to critical scrutiny. In doing so, I make three main contributions to the debate. First, I present a framework for thinking about the regulatory options that we confront when dealing with child sex robots. Second, I argue that there is a prima facie case for restrictive regulation, but that this is contingent on whether Arkin’s hypothesis has a reasonable prospect of being successfully tested. Third, I argue that Arkin’s hypothesis probably does not have a reasonable prospect of being successfully tested. Consequently, we should proceed with utmost caution when it comes to this technology.
Philosophy and Technology
Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling, and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. Some people have a more subtle view, arguing that it is problematic in those cases where its use may degrade important interpersonal virtues. In this article, I assess these objections to the use of AI assistants. I argue that the ethics of their use is complex. There are no quick fixes or knockdown objections to the practice, but there are some legitimate concerns. By carefully analysing and evaluating the objections that have been lodged to date, we can begin to articulate an ethics of personal AI use that navigates those concerns. In the process, we can locate some paradoxes in our thinking about outsourcing and technological dependence, and we can think more clearly about what it means to live a good life in the age of smart machines.
According to the common view, conscientious objection is grounded in autonomy or in 'freedom of conscience' and is tolerated out of respect for the objector's autonomy. Emphasising freedom of conscience or autonomy as a central concept within the issue of conscientious objection implies that the conscientious objector should have an independent choice among alternative beliefs, positions or values. In this paper it is argued that: (a) it is not true that the typical conscientious objector has such a choice when they decide to act upon their conscience and (b) it is not true that the typical conscientious objector exercises autonomy when developing or acquiring their conscience. Therefore, with regard to tolerating conscientious objection, we should apply the concept of autonomy with caution, as tolerating conscientious objection does not reflect respect for the conscientious objector's right to choose but rather acknowledges their lack of real ability to choose their conscience and to refrain from acting upon their conscience. This has both normative and analytical implications for the treatment of conscientious objectors.
American Journal of Bioethics, 2018
Our critics argue that quantified relationships (QR) will threaten privacy, undermine autonomy, reinforce problematic business models, and promote epistemic injustice. We do not deny these risks. But to determine the appropriate policy response, it will be necessary to assess their likelihood, scope, and severity; how feasibly they can be mitigated by various means; and whether and to what extent they are (or can be made to be) counterbalanced or even outweighed by the benefits QR technologies might bring for individuals, relationships, and society.
Journal of Consciousness Studies
Humans have long wondered whether they can survive the death of their physical bodies. Some people, including some prominent billionaires and tech entrepreneurs, now look to technology as a means by which this might occur, using terms such as “whole brain emulation”, “mind uploading”, and “substrate independent minds” to describe a set of hypothetical procedures for transferring or emulating the functioning of a human mind on a synthetic substrate. There has been much debate about the philosophical implications of such procedures for personal survival. Most participants in that debate assume that the continuation of identity is an objective fact that can be revealed by scientific enquiry or rational debate. We bring into this debate a perspective that has so far been neglected: that personal identities are in large part social constructs. Consequently, to enable a particular identity to survive the transference process, it is not sufficient to settle age-old philosophical questions about the nature of identity. It is also necessary to maintain certain networks of interaction between the synthetic person and its social environment, and to sustain a collective belief in the persistence of identity. We defend this position by using the example of the Dalai Lama in Tibetan Buddhist tradition, and we identify technological procedures that could increase the credibility of personal continuity between biological and artificial substrates.
This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible moral agents, and thereby reduce them to moral patients. Since that ability and willingness is central to the value system in modern liberal democratic states, the crisis of moral patiency has a broad civilization-level significance: it threatens something that is foundational to and presupposed in much contemporary moral and political discourse. I defend this argument in three parts. I start with a brief analysis of an analogous argument made (or implied) in pop culture. Though those arguments turn out to be hyperbolic and satirical, they do prove instructive as they illustrate a way in which the rise of robots could impact upon civilization, even when the robots themselves are neither malicious nor powerful enough to bring about our doom. I then introduce the argument from the crisis of moral patiency, defend its main premises, and address objections.