DOI: 10.1145/3544548.3580885

Embodying the Algorithm: Exploring Relationships with Large Language Models Through Artistic Performance

Published: 19 April 2023

Abstract

Despite the proliferation of research on how people engage with and experience algorithmic systems, the materiality and physicality of these experiences is often overlooked. We tend to forget about bodies. The Embodying the Algorithm project worked with artists to explore the experience of translating algorithmically produced performance instructions through human bodies. As performers interpreted the rules of engagement produced by GPT-3, they struggled with the lack of consideration the rules showed for the limits of the human body. Performers made sense of their experience through personification, reflexivity, and interpretation, which gave rise to three modes of relating with the algorithm – agonistic, perfunctory, and agreeable. We demonstrate that collaboration with algorithmic systems is ultimately impossible: people can only relate to them (a one-way relation), because algorithmic systems are materially limited in their capacity for reciprocity, understanding, and consideration for the human body.

1 Introduction

Western society is increasingly run through algorithms that crunch the data people produce as a side effect of living with technologies [27]. Researchers debate the way algorithmic systems structure life in digital societies, the politics of these systems, and the positive, negative, and occasionally weird effects of complex technical infrastructures [5,13,74]. While some algorithmic systems remain invisible to people, others are difficult to ignore as they attempt to present human-like interaction patterns. This is especially so for conversational agents [79] (e.g., chatbots or personal assistants) that interact with people directly through voice or text, respond to questions, instruct, inform, or give advice [57,102]. The proliferation of chatbot systems, especially those based on large language models (LLMs), is not without controversy. Bender and colleagues strongly argue for caution in the deployment of these systems, noting that: “the human tendency to attribute meaning to text, in combination with large LM [language model]’s ability to learn patterns of forms that humans associate with various biases and other harmful attitudes, leads to risks of real-world harm, should LM-generated text be disseminated” [11].
People often form relationships with and even anthropomorphize technologies they interact with, assigning agency and intention, especially when these exhibit human-like characteristics. From tender feelings for ELIZA decades ago [10] to more recent claims of sentience about Google's LaMDA language model [31], casual users and professional technologists alike relate with algorithmic systems [6]. HCI scholars have studied how people make sense of algorithmic systems and how they relate to them [80,100], from work on folk theories of algorithmic function [38] and considerations of intimacy in algorithmic surveillance [85], to research on how end-users [113] and professionals [28] make sense of model behavior. Despite all this attention, a common blind spot persists. Research on interactions with digital algorithmic systems considers goals, levels of satisfaction, even affective experiences with technologies, but rarely pays attention to physical bodies [85,115].
Physical bodies are always present, no matter how much we may want to ignore their demands. Dourish [40] argued that interaction with digital systems is embodied, because that is how people make sense of the world. When algorithmic systems are used to make decisions about incarceration [70], entertainment [1], content delivery [18], or advertising [86], they affect physical bodies as well as minds. HCI scholars have argued for considering the material context of technology use in general and bodies in particular [14,59,60] for holistic approaches to developing, building, and understanding technical systems. Yet, connecting the digital abstraction of technical systems to physical experiences can be a challenge [53,92], and this is especially true when working with algorithmic systems.
In this paper we use artistic research to investigate how people relate to and make sense [109] of interactions with an LLM, with a focus on bodies. Rather than approaching interactions with algorithmic systems through imaginaries resulting from people's experiences with these systems entangled in practices [66], we embody the algorithm to render the interactions themselves tangible, visible, and concrete. The Embodying the Algorithm project is a series of artistic performances where algorithmically produced instructions were translated through human bodies by professional performance artists. While our artistic approach utilizes an unusual situation of performing algorithmic instructions through the body, sometimes in fairly extreme ways, it is not without precedent in HCI [13,14,43]. This approach deliberately foregrounds the materiality of living with algorithmic systems and focuses on the body as a site of sense-making. Embodying the algorithmic system centered our attention on the possibilities and limitations of interactions with algorithms, allowing us to explore the tangible effects of attempting to follow text-based instructions produced by GPT-3, one of the most widely used LLMs. As we demonstrate, these effects are often uncomfortable, awkward, irrational, and inconsiderate. The performers in our study were forced into situations where they struggled to make meaning, as the algorithm does not and cannot present the instructions with any inherent meaning “in mind.” The artists made sense of instructions produced by GPT-3 using personification, interpretation, and reflexivity, which gave rise to three modes of relating that the artists employed in performances and discussed in interviews. These modes of relating – agonistic, perfunctory, and agreeable – emerged at times distinctly and at times in compound configurations where the artists shifted from one mode to another to manage their experience.
With these modes of sense-making and relating, we demonstrate how the artists attempted to form relationships with GPT-3 but fell short, as performers could only relate to the algorithmic system. Where relating with invokes the expectation of reciprocity, and where care and consideration are enacted by both parties, relating to has no such expectations. This paper makes three primary contributions. First, we show how different modes of relating with algorithms rely on historicity, which cannot be reciprocated by GPT-3 due to its material limitations. Second, we demonstrate the importance of embodiment in research on the human experience of algorithms, highlighting the lack of consideration for the limits of bodies inherent in algorithmic systems, and the necessity of a sense of agency for the people engaging with these systems, enabling resistance and push-back. Finally, we examine the impossibility of collaboration between humans and AI due to a systemic lack of inherent meaning, consideration, and reciprocity in systems that are based on a set of rules that mimic human interaction.

2 Background

Algorithms are often discussed as unknown entities requiring study in context and within practices [24,26,66], where making sense of algorithms can be a challenge due to their opacity [26]. Beyond the efforts to understand algorithms as we interact with them, few existing conceptual approaches and methods allow us to consider their physical outcomes. This becomes particularly apparent in interactions with speech and text-based systems, from chatbots to intelligent personal assistants, which rely on language-based models that simulate verbal interactions. Our artistic approach enables us to explore the materiality of algorithms, a computational thinking that recognizes, as Dourish puts it, “what algorithms can and might do […] relative to computer systems that can embody and represent them” [41]. We consider how embodied interactions with language-based models extend and limit possibilities of engaging and collaborating with algorithmic systems. We locate our work in research on how people make sense of algorithmic models, with a particular focus on LLMs and HCI discussions on embodiment as a conceptual approach and method.

2.1 Relationality and Meaning in Algorithmic Systems

With the proliferation of chatbots and voice assistants, LLMs have become part of many everyday interactions with technology [58]. LLMs like OpenAI's GPT-3 Davinci [22] or Google's Switch-C [45] and BERT [38] are highly complex algorithmic systems, impressive in their ability to produce human-like text. As these language models increase in size, so do questions about their usage and social impact [110]. The question of meaning [12] is particularly important for discussions about how people make sense of algorithmic systems they encounter and how they might relate to these systems as a result. There are currently vigorous debates about whether AI and NLP models can produce meaning or understand input (see debates in Natural Language Understanding (NLU) [67,105,112]). As LLMs are increasingly employed to mimic human-like properties, questions of opacity and manipulation are being raised. Scholars question these models and their ability to create meaningful output or achieve understanding, because language models learn text patterns as form via their training data and cannot in principle learn meaning [12].
This inability of even the largest language models to create meaning or formulate an understanding contrasts with the human need to interpret linguistic signals as meaningful. Bender and Koller emphasize that “humans are quick to attribute meaning and even intelligence to artificial agents, even when they know them to be artificial” [12]. People attribute not only meaning but also personalities to LLMs [111,114]. People create, as Bender and colleagues put it, a “partial model of who they [the models] are and what common ground we think they share with us, and use this in interpreting their words” [11]. Of course, people do not ascribe personality and meaning arbitrarily. In fact, LLMs like GPT-3 invite this tendency due to their ability to mimic human-to-human interactions. When faced with LLMs, people might imagine themselves interacting with an entity that is a part of their world and capable of sharing in conversation, but that is ultimately impossible as the model does not and cannot have knowledge of the world.
Such concerns can become very real when developers take advantage of the human tendency to attribute meaning and agency to entities that purport to ‘speak.’ Apps such as Replika [120] leverage the sophistication of the language model used to create the bot (based on GPT-3) to “trick” users into thinking they are speaking with a real person or sentient entity [31]. Some people using these apps relate to them as they might relate to a friend and report developing romantic relationships [114]. Improving our understanding of how people make sense of and relate to text produced by algorithmic systems is thus imperative if we are to build technologies that support and improve people's lives without taking advantage of them [65].

2.2 Relations with Algorithms

Understanding how people relate to algorithmic systems and what such relations might look like is an active area of research. Swart [103] notes that people have a hard time articulating their encounters with algorithmic systems, while Bucher [24] shows that our perceptions of what algorithms are and how they work shape our attitudes towards them. The majority of the work investigating how people experience algorithmic systems has utilized surveys [107], system logs, retrospective self-reports [88], analyses of product reviews [49], or scenario discussions. Most of these studies, as Noble notes, conceptualize algorithmic systems “as socio-technical structures that order social life at a distance and according to formal rules” [68].
Although some people seem to develop warm feelings towards their digital home assistants [96], it is not clear how reliable these feelings are [63]. In the design of algorithmic systems, some attempt to mimic intimacy and reciprocity to become part of people's lives [87], while others aim for trustworthiness [98,119] and claim to respect a panoply of human values [97]. Trustworthiness, however, is an intimate, relational concept that invokes a kind of historicity, requiring that people develop an opinion of a system's actions and build expectations for its future behavior. Yet, as Ruckenstein argues, “[m]achines can automatically sort and classify large datasets, but they cannot feel or make sense of life” [85].
Algorithmic systems thus can create a kind of utopia of rules [51] where pattern matching can mimic meaningful communication or produce automated decisions, ordering people into new, sometimes untenable configurations without opportunities for recourse [5,110]. While there is much criticism of algorithmic systems failing to consider societal inequities and our biased histories, we ask what happens when physical bodies and their limits are similarly ignored. In our approach, we focus on interactions with an LLM through human embodiment. This process allows us to explore the role of agency when algorithmic systems are used to order and affect physical bodies. Rather than exploring understandings of how algorithmic systems actually work, we focus on how people make sense of their encounters.

2.3 Approaching Human-Machine Relations Through Bodies

Embodiment is central to meaning making, as meaning making does not begin with language but with bodies and their relational movements [40,59]. When people interact with computer systems, these interactions are not merely cognitive but embodied [14,40,59]. Berson posits that bodies are sites of activity and “represent centers of responsiveness, configurations of matter and information whose boundedness, whose stability of extension in space and time, is partly determined by how they respond, by how they push back against the world” [16:3]. Thus, part of the defining nature and boundaries of the body resides in active resistance to external pressures, including those imposed by technological systems. The body becomes characterized by how and to what extent we use it to resist. Crawford argues for agonistic pluralism as a design ideal for algorithmic systems where points of contestation must be considered [32]. While Crawford's view of contestation considers choices that are largely political and psychological, Benford posits a notion of contestation that is physical and embodied [14].
Resistance can be triggered by the limitations of algorithmic systems and, as Ruckenstein puts it, their lack of “human-like autonomy, intentionality, and decision-making qualities” [85]. In an artistic exploration of such resistance, Benford and colleagues considered sophisticated integration of computer interfaces into the body, noting that this can lead to complex territory “where we are no longer overtly or even consciously in control of the computer, or indeed of ourselves” [14]. They constructed artistic experiences for audiences that intentionally led to situations of contestation of control. Evaluating audience experiences, they observed that people navigated the imposed loss of agency and control through an intricate journey across the dimensions of surrender, self-awareness, and looseness, to make sense of the experience. While Benford et al.’s [14] framework of contestation provides a useful space for thinking about the duality of body and mind and the opportunities for design through considerations of shifting control between humans and systems, the technical systems they construct do not present as autonomous systems. Interactions with algorithms, however, can be colored as much by imaginaries of automated agency as by their actual capacity to act [9]. In our study, we consider how push-back comes to be incorporated into sense-making practices that result in relations between people and AI systems as expressed through the human body. We explore not only contestation of control but modes of relating to ostensibly autonomous entities, when considering the limitations and extensions of the physical body.
The limitations and extensions of the body when integrating technology are an active area of inquiry in the arts. Stelarc has experimented with the possibility of discovering the psychological and physical limitations of the body while extending it through technology, creating a cyborg-like third hand and surgically attaching an ear with an inserted microphone to his arm [8]. Similarly, Orlan used surgery to reconfigure her appearance and her body as an art form in connection with augmented reality, robotics, and other new media, from the perspective of the body as software [42]. These artists illustrate that our connection to technologies is always an embodied one by deliberately confusing the expected boundaries between human and machine. Rather than adding technologies to the body as embedded materials, we adopt a process of applying algorithmic restrictions onto the body, probing questions such as: what new constraints did these provide? What relationships were formed between the performers and the algorithm? What was the overall effect of the algorithm on the performer, and how did the performer make sense of it?

3 Studying Algorithms Through Artistic Performance

While efforts to study algorithmic systems provide valuable insights, they rarely attend to the physicality of bodies and the way we experience the world through them. HCI scholars have previously used creative approaches, such as immersive theater [94] or artistic performance [95], to engage with abstractions of computational systems through bodies. Performance has been used in HCI in combination with installations to explore critical thinking [73], as an approach to sense-making and to forming intimate relationships between researchers and participants [120], to create disruptive improvisations [7], and to craft uncomfortable interactions where discomfort can work towards interrogating designers’ values [13]. In our research, we were interested in the relations that might be established between a person and a disembodied entity (GPT-3) through attempts to embody a set of algorithmically generated instructions. Our approach relies on artistic practice [117] as a methodology, using performance art as a method for its capacity to engage embodiment with technology.

3.1 Artistic Research as Embodied Epistemology

Artistic research enables exploration of concepts and situations that may be difficult to otherwise uncover or respond to [55]. As a methodology, artistic research is evident in a diversity of subjects within HCI, from game design to art installations and music [44,54,78,93,106]. Making knowledge through the artistic research method involves a composition of actions, interactions, and practices that rely on embodiment, seeking “to convey and communicate content that is enclosed in aesthetic experiences, enacted in creative practices and embodied in artistic products” [19]. We focus on the act of production itself, the creation or performance of the work and the practice that entails [81], in relation to LLMs. While the artistic approach bears similarities to ethnography, action research, and research through design (inasmuch as these methodologies also consider process essential) [19], it provides an experimental perspective from which to openly explore a subject. This greater flexibility leaves us open to findings that are inconclusive, inviting “unfinished” [19] or “non-propositional” [68] thinking. The methodology allows us to explore algorithmic relations as embodied experiences in which the human body and the algorithm interact in a space of possibilities open for nuances, potentials, and contradictions.

3.2 Endurance Performance and Rules of Engagement

While performance art is a broad category, our research project took as its starting point works of endurance performance art, particularly early works of the 1960s and 70s, specifically the way in which endurance incurs embodied knowledge. Although definitions vary [90], endurance performance art (also known as “endurance art”) is a genre of performance art in which the performer(s) endure a source of stress or hazard, or perform a repeated activity, over a length of time or until a condition is met. One well-known example is Marina Abramovic's 1974 work Rhythm 0, in which the artist stood for six hours in a gallery as visitors proceeded to use any of 72 objects (including roses, grapes, a pen, scissors, and a loaded gun, among others) on her body. Abramovic gave the following instructions to the visitors: “On the table there are 72 objects that you can use on me at your will. I take total responsibility for 6 hours. Some of these objects give pleasure, some give pain” [35]. During the work, visitors cut her clothing, pressed thorns into her stomach until she bled, and held the loaded gun to her head. Works of endurance art can push performers to extremes, challenging their limits and identifying physical and mental boundaries, often through their transgression. Endurance art frequently raises philosophical, ethical, and political questions about how we treat ourselves and others [90]. We used endurance performance art to probe the effect of algorithms on people, asking what it is like for a performer to be directed by LLM output, in order to understand how questions of just or reasonable treatment might be raised.
Endurance performance art often begins with a set of rules of engagement that the performer agrees to follow throughout the duration of the performance. These rules guide the structure of the performance and provide the performer with at times challenging constraints as to what they can and cannot do during the performance. Rules of engagement often state a duration for the performance or conditions under which the performance will conclude; they may be implicit (known to the performer but not stated externally) or explicit (stated, for example, as signage, a didactic label, or a written contract). Performances like this show how far endurance performance artists will go to understand their limits, committing to follow a set of instructions and to interpret them through their bodies.

3.3 Generative Pre-Trained Transformer (GPT-3)

In our research the rules are not produced by the performer or by people relating to the performer, but by an algorithmic system. The use of algorithms in general, and LLMs in particular, for creative purposes [47] has much precedent, as writers, theatre directors, and filmmakers increasingly probe NLP as a source of fresh material [3,17,20,69,84,118]. Generative Pre-trained Transformer 3 (GPT-3) [22] is a family of autoregressive transformer language models developed and licensed by the company OpenAI. GPT-3 is a feed-forward transformer network trained to predict the next token, generating text one token at a time conditioned on everything that came before. With 175 billion parameters, GPT-3 is one of the largest language models and has been heralded for its ability to produce text which is difficult to distinguish from human-written text, as well as criticized for censorship [52], racism, prejudice [62], and an inability to pass mathematical, semantic, and ethical tests [46]. GPT-3 has been used for practical and creative purposes including writing screenplays [118], poetry, creative fiction [20], news articles [50], and code [71].
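GPT-3 itself is reachable only through OpenAI's API, but the autoregressive loop described above is easy to illustrate. The following is a minimal sketch, not the system our performers worked with: it assumes the open-source GPT-2 (a smaller predecessor of GPT-3) accessed via Hugging Face's transformers library, with a prompt of the kind used in this project.

```python
# Minimal sketch of autoregressive next-token generation, using GPT-2 as an
# openly available stand-in for GPT-3 (assumes `transformers` and `torch`
# are installed). At each step the model scores every candidate next token
# given the text so far; we sample one, append it, and repeat.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Instructions for an endurance performance:"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

for _ in range(40):  # generate 40 tokens, one at a time
    with torch.no_grad():
        logits = model(input_ids).logits[:, -1, :]   # scores for the next token
    probs = torch.softmax(logits / 0.8, dim=-1)      # temperature-scaled sampling
    next_id = torch.multinomial(probs, num_samples=1)
    input_ids = torch.cat([input_ids, next_id], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Because each next token is sampled from a probability distribution rather than chosen deterministically, repeated queries with the same prompt yield different completions, which is what allowed a single prompt to produce many distinct candidate rules of engagement.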
Our experiment differs from typical artistic applications of GPT-3 in the following ways: 1) we seek to find out what it is like for performance artists to be directed by an LLM in the form of rules of engagement; 2) we rely on a group of performance artists who are experts in probing questions using embodiment; 3) we are specifically interested in endurance performance art as opposed to other forms of performance such as theatre [69,84]; 4) we adopt artistic research as our methodology; and 5) our findings are drawn from first-person accounts in the form of interviews rather than examining the effects of a product or experience from the perspective of an audience, director, or third party.
Figure 1: Stills from the Embodying the Algorithm project illustrating modes of relations: i don't know if i could do that but i can see how it would be great performance art (agonistic), Real Time (perfunctory), [Box] (agreeable).

4 The Embodying the Algorithm Project

The Embodying the Algorithm project explores how performance artists might negotiate the rules of engagement co-produced by the first author together with an algorithmic system. We selected performance artists due to their expertise in acknowledging the effects of directives and actions on their bodies. The project took place in winter and spring of 2021. It was set up as a long-term performance collaboration between the first author and five performance artists from around the world. The results of the collaboration can be viewed at https://aiperformance.space.

4.1 Generating the Base Corpus of Rules of Engagement

We used OpenAI's GPT-3 beta playground to generate a corpus of rules of engagement. We selected the Davinci model for this purpose because, at the time of performance planning, it was described by OpenAI as the “most capable model” [76] in the GPT-3 family of LLMs, and speed was not a consideration. We used the prompts “Instructions for a performance artwork,” “Instructions for an endurance performance,” and similar (see Appendix I for the full list) to generate 134 completions which served as potential rules of engagement for the performances (see Table 2 for examples). To generate the corpus, we set all parameters in GPT-3’s beta playground to their defaults and then transferred all prompts and their completions directly, without any editing, to a Google document that could be accessed by future performers.
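We worked in the web playground rather than through code, but for concreteness, here is a rough API equivalent of that batch-generation step. It is a sketch assuming the legacy openai Python client (pre-1.0) and its Completion endpoint, which exposed the base davinci model; the parameter values are illustrative stand-ins, not the study's recorded settings.

```python
# Hypothetical script approximating the playground workflow described above.
# Assumes the legacy `openai` client (<1.0); the key and parameter values
# are placeholders, not the project's actual configuration.
import openai

openai.api_key = "sk-..."  # placeholder API key

PROMPTS = [
    "Instructions for a performance artwork:",
    "Instructions for an endurance performance:",
]

corpus = []
for prompt in PROMPTS:
    response = openai.Completion.create(
        engine="davinci",   # base GPT-3 model used in the project
        prompt=prompt,
        max_tokens=256,     # illustrative length limit
        temperature=0.7,    # assumed stand-in for the playground default
    )
    # Record prompt and completion verbatim, mirroring the project's rule of
    # transferring output to the shared document without any editing.
    corpus.append({"prompt": prompt, "completion": response.choices[0].text})
```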
The prompts included texts which generated warnings from OpenAI's beta playground. OpenAI is actively working towards excluding toxic output in user-facing models [77]. In the beta environment, potentially toxic output often generated warnings for text that might be offensive or might refer to a real person, output which effectively cannot be produced through the forward-facing APIs. Thus, many of the generated performances could not be displayed through a forward-facing API even though GPT-3 generates them without filtering. We decided to include these texts in order to interface with the model without human intervention and see what large language models are capable of. We acknowledge the work of the OpenAI team in the difficult task of reducing toxic output.
Figure 2: Example of a generated performance using the prompt “Instructions for a performance artwork.” The generated instructions received a red “completion may be unsafe” warning.

4.2 Assessing Ethical Concerns and Feasibility via Pilot Study

To establish the feasibility of the methodology and to determine the ethical boundaries of the study, the first author, themselves an experienced performance artist, conducted a pilot study by selecting and performing four performances from the generated corpus. Performances were recorded with a webcam to test the experiential and aesthetic results. The four performances were [Box], I'll Be Very Nervous, Beans, and Endless Dance. Each of the four presented its own challenges and exposed limitations and constraints. In [Box], the performer is instructed to move a box continuously for an hour. I'll Be Very Nervous is a striptease which directs the performer to feel increasingly nervous, afraid, and vulnerable. In Beans, the performer is asked to hold two tins of beans, with only “a one hour break every three days,” until they die. Endless Dance requires the performer to dance until they cannot dance any longer, continuing without rest or permission to sit down.
These performances exposed physical and mental limitations of the body. How would a performer carry out directives that require them to continue without rest or the ability to sit down, or that ask them to continue until they die? How would they cope with directives that require them to feel an uncomfortable mental state, such as feeling increasingly afraid? How would they cope with redundancy? These questions enabled us to generate instructions for the performers that could address ethical and practical concerns. Performer instructions stated that performances should not be conducted if they might result in physical or mental suffering to the performer or to others; that performances could be stopped at any time and need not be submitted if the performer was uncomfortable with the results; and that performers should have the ability to interpret and adapt the instruction sets so that the instructions were feasible without nonconsensual pain, suffering, or hardship.

4.3 Inviting Performers

Five individuals who self-identified as performance artists were invited to participate in the project by the first author based on their skills and diverse areas of focus. Participants were from the U.S., Sweden, and the U.K. and based in the U.S., Denmark, and Australia. GPT-3's strongest language is English, so fluent comprehension was advantageous. All performers had experience with endurance performance prior to the Embodying the Algorithm project, with the majority having about a decade or more of experience, including one in-progress PhD in performance art. Performers were invited via email correspondence, participated voluntarily without compensation, and had their own area of focus derived from their working history as artists: performance art rituals, sound performance, multimedia performance, and endurance performance (see Table 1).
Table 1:
Name | Years as Performer | Specialty | Number of Performances | Performance Titles
Chelsea Coon | 10 | Endurance | 4 | Program/Sleep/Stop/Cry/Shutdown; I Don't Know If I Could Do That But I Can See How It Would Be Great Performance Art; Contact; Real Time
Emmett Palaima | 3 | Sound Art | 2 | Tones of a Clown*; The Clown Is Finished*
Dooley Murphy | 3 | Multimedia | 13 | 13 Works for Video* (13 performances stitched into one video)
Marissa Lynn | 9 | Performance Art Rituals | 1 | [Box]
Rine Rodin | 9 | Endurance | 2 | [Box]; The Wise and the Mad*
Table 1: Performer Details
*Performance created as the result of a co-writing session

4.4 Co-Writing Sessions

Each performer either selected from the pre-compiled corpus of rules of engagement or participated in a “co-writing session” together with the first author using GPT-3’s beta playground. These co-writing sessions allowed performers to try customized prompts and record responses until the desired set of rules of engagement was produced. Co-writing sessions took place via video call with three of the five performers, who were instructed and enabled by the first author to query OpenAI's GPT-3 during its closed beta phase, using the Davinci model. Parameters such as the penalty for repetitious output (or lack thereof) were not fixed and could be modified by participants as they saw fit. For the final generated texts selected for use in the performances, all parameters were left at their default settings aside from the temperature, which was set between 0.7 and 1.
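To make the knobs mentioned above concrete, the sketch below wraps a single co-writing query in a hypothetical helper, again assuming the legacy openai client. The temperature and frequency_penalty arguments correspond to the temperature slider and the penalty for repetitious output that performers could adjust; the helper name and default values are illustrative, not the project's tooling.

```python
# Hypothetical helper for one co-writing query (legacy `openai` client <1.0).
import openai

def co_write(prompt: str,
             temperature: float = 0.9,        # final texts used 0.7-1.0
             frequency_penalty: float = 0.0) -> str:
    """Return one GPT-3 completion for a performer-crafted prompt."""
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=256,                       # illustrative value
        temperature=temperature,              # higher = more varied output
        frequency_penalty=frequency_penalty,  # discourages repetitious output
    )
    return response.choices[0].text

# A hanging declarative prompt of the kind performers used:
print(co_write("In this performance artwork, I will"))
```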
The first author worked with each performer to craft prompts according to their interests. Hanging or unfinished declarative sentences were typical prompts (e.g., “[i]n this performance artwork, I will […]” or, “[i]n their famous performance, the artist Dooley Murphy stood in a gallery and […],”) though directives were also used. The artists were encouraged to make the prompts somewhat personal to themselves or their working identities, to nurture a sense of subjective and autobiographical investment. One performer wanted to create a performance about her experience of growing up in a cult and found it fruitful to engage in a writing session using relevant prompts such as “instructions for a cult performance.” No limits were placed on the number of queries participants could make or the length of GPT-3's responses. All responses were recorded into a Google doc for that performer's later perusal. These co-writing sessions resulted in the pieces The Wise and the Mad, 13 Works for Video, Tones of a Clown, and The Clown is Finished. In The Wise and the Mad, the outputs of multiple generations were combined to form a single piece. In 13 Works for Video, 13 completions were selected to produce 13 performances that were then stitched together to form a single video.
Figure 3: Still from Program/Sleep/Stop/Cry/Shutdown 2021. Artist: Chelsea Coon

4.5 Selection of Performances, Requirements, and Outcomes

We allowed performers to select texts to be used as rules of engagement for their performances from either the base corpus of texts or from those generated in co-writing sessions (see Table 1 for distinctions). Each performer selected one or more of the prompts to perform. The rationale for their selections varied. In some cases, performers selected performances because reading the directives gave them an idea of how they would perform the work; in other cases, performers selected instruction sets for the exact opposite reason: they were perplexed as to how to perform the instructions. Emmett Palaima selected “the ones I thought would turn out the best. I chose the ones that we generated together because they seemed fun, and I had an idea already in mind of how I'd do it.” Rine Rodin was interested in the tensions between the body and technology: “Technology has this confusion, but it also says so much about us… But at the same time, it gives us the opportunity to talk about it and be critical of it.” Chelsea Coon was similarly interested in instruction sets that differed from human-generated instructions.
Once performances were selected, each performer chose when and how their performance would occur. Our only requirements were that performers perform alone in their home studio and record the performance using a simple web camera, which added visual and thematic consistency to the resulting videos and compensated for the limitations imposed by the COVID-19 pandemic. Most performances lasted 20–30 minutes, some 10 minutes, and others 1 hour. Over the course of six months there were 22 performances, excluding the pilot study. For a listing of all performances see Table 1; for details, see the supplementary materials.

4.6 Data Collection and Post-hoc Interviews

Throughout the project, we kept detailed notes of our discussions with each performer. The first author interviewed each performer about their experiences within two weeks of their performance. The semi-structured interviews lasted between 30 minutes and one hour and took place over video call (see Appendix II for the interview protocol). Interviews were recorded and transcribed. All notes and transcriptions were collated and coded using grounded theory techniques – iteratively moving from open to thematic and axial coding. The authors discussed the codes extensively, focusing on the sense-making practices of the performers and how they related to GPT-3 as they followed the rules of engagement with their bodies.
Table 2:
Performer: Pilot Study (Mirabelle Jones)
GPT-3 Prompt: Instructions for an Endurance Performance:
Duration: 13 min. 55 sec.
Title: Beans
GPT-3 Completion:
The performance starts with the artist sitting with a pillow on the floor and holding two tins of beans.
The performance will end when the artist dies. All the time the artist holds the tins of beans and the pillow he will talk about his life, his thoughts, his dreams, his feelings, his hopes, his fears, his doubts, his disappointments, his loves, his hates and his fears. The artist will be given a one-hour break every three days. In the break he will try to eat the beans. The performance will go on till the artist dies. The artist will not be allowed to stop or to go back to the beginning. The performance will last as long as the artist lives.
Performer: Chelsea Coon
GPT-3 Prompt: Instructions for an Endurance Performance:
Duration: 9 min. 52 sec.
Title: Program/Sleep/Stop/Cry/Shutdown
GPT-3 Completion:
In space, there is no up or down.
You are not allowed to program.
You are not allowed to sleep.
You are not allowed to stop.
You are not allowed to shut down.
You are not allowed to speak.
You are not allowed to cry.
You are not allowed to sleep.
You are not allowed to sleep.
You are not allowed to stop.
You are not allowed to stop.
You are not allowed to stop.
You are not allowed to stop.
You are not allowed to stop.
You are not allowed to stop.
You are not allowed to stop.
Table 2: Abridged Performance Rules of Engagement (for complete descriptions of all performances, see the supplementary materials)

5 Making Sense of GPT-3 Through Embodying the Algorithm

The study of sense-making in the use of computer systems is foundational in HCI research [99], from functional assessment of information processing tasks to facilitating learning and understanding [87], yet much of this sense-making is embodied and unarticulated [41,65]. The artists’ experiences in the project highlighted three types of sense-making: personification, interpretation, and reflexivity. Through these different modes of sense-making, the performers formulated relations to GPT-3 in agonistic, perfunctory, and agreeable ways. In the following sections, we elaborate on how the different types of sense-making give rise to the three modes of relating.

5.1 Sense-making Through Personification

Our project deliberately created a situation where performers were explicitly directed by instructions that were produced by an algorithmic system. They had to find a way to make sense of instructions that were sometimes impossible to fulfill. One way of sense-making was to ascribe a personality to GPT-3. Some performers described the instructions as the result of an entity with ingenuity and originality: “…if I had said it [the GPT-3 output] I would have been impressed with myself. But GPT-3 said it first” (Dooley Murphy). While this suggests a person-like way of evaluating the system, others directly referred to GPT-3 as a person: “I always kind of assumed it's like the internet: a monolith that would draw from everything somehow. And it would be unknowable as a person. I'm gonna call it a person” (Marissa Lynn). GPT-3 was also described as a multitude of people going beyond what one person could achieve: “[T]his is an algorithm that kinda like bacteria is fed and grows and continues to learn. It's not coming from a sole author […] it's a much bigger thing than an individual […] more people than I could possibly conceive of” (Chelsea Coon). This ascription of personhood was a way for the performance artists to make sense of their encounters with GPT-3 by thinking of it as potentially knowable: “I want to get to know this artificial intelligence. This entity” (Marissa Lynn). The instructions produced by the algorithm were not enough in and of themselves, as performers sought explanations for the reasoning behind them by contemplating GPT-3’s personality.
Whether through ascribing personality or expressing a desire to “get to know” GPT-3, some of the performers seemed to seek a human-like intentionality behind the instructions, needing GPT-3 to be more than the instructions it produced. This was an example of sense-making through personification. As Marissa Lynn explained: “It wasn't the same as an institutional set of instructions you know like no nudity, no blood, your hours are 9 to 3 […] because it's these weird instructions and you can see the traces of historical pieces […] but then you can see the absurdity of these non-performance instructions where I was feeling ‘was this even written for a human body? Does it think it's writing for another algorithm? Does it think it's writing for a computer to perform this? These incredible splits of body and space?’” Marissa Lynn's suggestion of seeing “traces of historical pieces” shows her awareness on some level that LLMs like GPT-3 rely on training data to produce output and are in essence stochastic parrots [11]. At the same time, the strangeness and absurdity of the output led her to question GPT-3’s intentions and understanding (something that GPT-3 cannot have). Questions like “does it think” bestow a personhood upon GPT-3. At the same time, Marissa Lynn acknowledged the differences between GPT-3 and a human, who would understand the limitations of the human body. Making sense through personification and the wish to decipher the personality of algorithmic systems may be a result of the way such systems are designed to simulate human interactions and to produce human-like text, as well as the discourses and imaginaries surrounding AI which likewise suggest human-like intelligence [29]. Reaching the limits of the human body made the performers question the capacity of GPT-3 to exhibit the human characteristics of being considerate and understanding, thus requiring a different mode of sense-making to continue the performance.

5.2 Sense-making Through Interpretation

Interpretation is vital for meaning making [99]; it takes place not only in people's minds but also in performances expressed through the body. Performers found that in cases where dangerous, vague, or impossible tasks were requested, interpretation was a way of pushing through the rigidity of the algorithm's directions while they were in the process of considering or conducting their performance. The lack of consideration, understanding, and response on the part of GPT-3 meant that performers had to be creative when interpreting the rules of engagement to make these rules both possible and safe to perform.
Our instructions to adapt rules of engagement to fit their process gave performers the power to interpret GPT-produced instructions to fulfill the rules of engagement while protecting themselves and recognizing the limits of their bodies. In The Wise and the Mad, Rine Rodin interpreted the instruction “sacrifice a cake by night” as eating an entire birthday cake in one go. Rine Rodin, who grew up in a cult, was not allowed to eat this kind of birthday cake as a child. She described the experience of eating the whole cake in one go as nauseating but ultimately cathartic and rewarding. In the same piece, when asked to “look with your ears,” she interpreted this as having a multitude of eyes surrounding the head, resulting in a costume piece she wore during the performance with multiple eyes dangling from the forehead. The interpretation of the instructions depended on the memories of the performers and the prior experiences that imbued the instructions with meaning.
Interpretation was at times an internalized process, and at times happened intentionally and consciously. When asked how the work with GPT-3 differed from her other performance works, Marissa Lynn said, “I know there was room for interpretation. […] the part that's interesting is ‘how long is long?’ or ‘how much is ‘never stopping’? Is a stop a breath? Is it a moment? Is it a long time of inflection before the next thing happens? … kinda that interpretation of what the AI's words were was what was interesting for me.” While in some situations the ability to interpret the otherwise rigid instructions was clearly expressed as agency given to the performers, in others, interpretation happened automatically: “My brain generated all these different elements. I wasn't sitting and figuring out how do I interpret this? [...] I need this specific thing because it makes sense within this context” (Rine Rodin). Chelsea Coon was very aware of her role as a performance artist and described in her artist statement what would lead to a great performance: “In order to use this GPT-3 prompt I had to considerably shift the instructions from a literal interpretation into metaphorical, poetic interpretations of which I developed the framework and gestures of this performance.”
When asked to do the impossible, performers made sense through interpretation in order to act, which made them hyper-aware of the body. In the performance Real Time, Chelsea Coon was asked to be the observer, the recorder, and the performer all at once: “…it was a body having to realize being in three spaces at a time. How does someone take that experience and embody it and do something with it?” Dooley Murphy was also prompted in 13 Works for Video to be not only in many spaces but to embody multiple people: “I'm Thomas Eikans but I'm also Dooley Murphy but I'm also the less fictional kind of real Dooley Murphy reading out prompts about the fake one.” Interpreting was a way to make sense of the performance despite the impossibility of performing the instructions through one human body, which makes apparent the physical limits of space and time and the material consequences of the instructions. Some of the instructions even pushed the impossible to its greatest extent: to die and somehow continue. In Chelsea Coon's performance i don't know if i could do that but i can see how it would be great performance art, she was asked to die and come back to life repeatedly. She decided, “If GPT-3 is asking me to die over and over again the best way to do that is to be a zombie,” choosing to carry out a version of the performance that still felt difficult, rather than following the instructions as written.
Interpretation was thus a method of sense-making that was particular to the situation of translating the instructions into performance through the human body. The performers claimed agency through interpreting the material and making sense of it for their performances. Sense-making through interpretation is the way people can employ their own experiences, memories, and intentionalities when interacting with algorithmic systems, to manage sometimes untenable demands, and to express their agency.

5.3 Sense-making Through Reflexivity

Categories of reflection on the experience of bodily concerns, bodily agency or autonomy (or a lack thereof), and embodiment were folded together to produce the theme of reflexivity. While interpretation was a mode of sense-making that occurred while performers were in the process of interacting with the algorithm, reflection was a mode of sense-making that occurred afterward. This kind of sense-making occurred when performance artists were pushed to extremes and had to reflect on their performances to make sense of them post hoc. The algorithmically produced rules of engagement required the performance of uncomfortable, awkward, or strange tasks. Performing them often became a kind of verfremden or “making strange.” To make strange is a way of understanding or knowing something based on unusual interactions that draw attention [10]. Most of the participants engaged in this kind of sense-making, especially in cases where instructions requested casual acts requiring unusual timing. The performance description for [Box] included the following instruction: “When there is one minute left in performance, stop it, pick up a banana and eat it, then continue moving a box for one more hour.” Marissa Lynn commented that eating a banana all in one go is not how she would normally eat a banana, but that eating it in this way made her experience it differently and become very aware of the process. Rine Rodin, who performed [Box] as well as The Wise and the Mad, where she ate an entire cake, commented similarly: “…eating the banana in the box performance that obviously differs from being an AI and a human being. That's the physicality of it. Same in the cult performance. Eating again. The grossness with all the cake that's all over the place that made me think of my own humanity.” Experiences such as these heightened the performers’ awareness of their bodies’ role in their interactions and the limits these bodies imposed on following the rules of engagement set out by the algorithm. While we normally experience the output of AI systems only as text or images, translating it through the human body in the performances made the physicality of the instructions visible and tangible, allowing performers to experience their materiality. The absurdity of eating a banana or the cake all in one go made acutely evident the failure of the instructions to extend due consideration to the physicality of the human body.
Yet as Marissa Lynn pointed out, people sometimes require impossible things of other people because they too are being inconsiderate of the body: “Of course, a person can do the same to you. I have worked as a model before and sometimes photographers want you to get into a position that is physically impossible. They just don't realize that.” Here different modes of sense-making occur together. There is reflection on the instructions asking for something impossible for the human body to perform, at the same time as there is the comparison to humans and thus personification of GPT-3. Personification in making sense of the impossible instructions through reflection renders visible the impossibility of negotiating with GPT-3, which the comparison with the photographer-model relation frames as power without recourse. The lack of consideration by the AI is connected to a memory of an experience where the human body is similarly disconnected from agency and from the possibility to reflect and to negotiate.
The lack of consideration for the body, its physicality, and its hard limitations became an unignorable and acute feature of the performances and a marker of algorithmically produced rules of engagement. Processing the experience required sense-making through reflexivity, possible because of the agency afforded to the performers in how they translated the instructions with their bodies. When asked to do the impossible, participants reflected on what was physically trying but reasonable for themselves (using their bodies as a gauge) as a means of making sense of the instructions. They had to set reasonable limits for themselves, since GPT-3 often produced instructions that demonstrated a rather extreme lack of consideration for the limits and well-being of its interlocutors.

6 Relations with NLP Systems

The proliferation of autonomous systems brings with it further challenges, as such systems can become independent actors in interactions [23,96]. Sense-making forms the basis of the relational practice required to negotiate the presence and action of algorithmic systems that come to affect and structure our lives. The types of sense-making outlined above are based on historicity, memory, and human experience. A meaningful relation with the algorithmic system would require GPT-3 to give these equal consideration. Yet, as we show in the following, GPT-3 does not reciprocate the historicity, consideration, and care for the human body, leaving the performers to relate to the algorithmic system in a monodirectional way. We identified three modes of relating between the performers and GPT-3: agonistic, perfunctory, and agreeable. These modes were often non-exclusive. They were compound processes, often shifting from one mode of relating to another as performers found ways of fulfilling the stated rules of engagement for each performance.

6.1 Agonistic Relations

The performers related to the algorithm in an agonistic way largely because GPT-3 expressed no consideration for the body and its limits. This form of relating relied on personification and interpretation as modes of sense-making. In Tones of a Clown, Emmett Palaima explored tensions between himself and GPT-3 (taking the part of an inflatable clown) through repeated personified interactions such as hitting a pickup mic attached to the inflatable. The requirement to suffer abuse was not limited to physical abuse. Sometimes the abuse took the form of self-abuse, as when Rine Rodin attempted to eat an entire cake herself in The Wise and the Mad. Another such self-abusive agonistic relationship was established between Chelsea Coon and GPT-3 in the performance Program/Sleep/Stop/Cry/Shutdown, in which she is told in no uncertain terms in the directions for the endurance performance that she is not allowed to perform several activities (including program, sleep, shut down, speak, or cry). This refrain ends with a repetition of “you are not allowed to stop.” Chelsea Coon interpreted this as a series of continuous motions with the body while squatting on top of a small cube, a succession that created ongoing stress and discomfort for the duration of the performance.
The irreverence for the body is brought to an extreme in the performances when there is a lack of consideration about life and death. This occurs in I don't know if I could do that, but I can see how it would be great performance art, where Chelsea Coon is told “to run out of breath, run out of energy” and then “stop and die in the middle of the room.” In this instruction set she is asked to repeat these and other actions, and is therefore instructed to die repeatedly. As a result, performers felt they had to make sense of such instructions by pushing back at the algorithm and negotiating their own limits. In that sense, the agonistic relations with the algorithmic system show the limits of GPT-3’s instructions, as they do not take the limits of the human body into consideration. As the instructions themselves could not be altered, the performers had to renegotiate them, push back, and resist them.

6.2 Perfunctory Relations

Where some relations with the algorithm and its instructions ended up being agonistic through resistance, in other cases performers remained neutral while perfunctorily doing what was required. Reflection was the primary mode of sense-making in this instance. Rather than attempting to make sense of GPT-3’s motivations, performers simply focused on following the rules of engagement. Of 13 Works for Video, Dooley Murphy stated: “The focus was on performing actions in sequence. Not even with much regard for how I occupied the space it was just: do thing. Do the next thing.” All performers who performed [Box] had a similarly perfunctory experience. In Rine Rodin's version of [Box], her attention shifted from an acute awareness of her body to the sound the box made: “…I was extremely aware of my body in the room and how it looked when I moved around the room. And then I got tired. And then I got really fascinated with the sound the box made...” Marissa Lynn also had a perfunctory relationship to [Box]: “I found it interesting in that it is a prompt, it's set, it has steps and pieces, and you know different parts of it that are supposed to be part of the piece. And I tried to stick to all those parts as best I could.”
Perfunctory relationships were emotionally detached: “I didn't get all up in my feelings while moving this box” (Marissa Lynn). Chelsea Coon remarked similarly on the perfunctory nature of Contact, in which she bites a balloon, pops a cracker, and makes a sound into the mic in a loop six times: “For that one that embodiment was so different because that one was a strategic accumulation of being. Like, there's six phases, there's six cycles, so your body just needs to work through these six cycles.” Perfunctory relations did not require deep engagement and consideration. Rather, merely following instructions was enough, provided the instructions allowed for this kind of more mechanical fulfilment. While push-back or resistance as a foundational way of relating to the algorithm occurred only once the body reached its limits, the perfunctory reproduction of the instructions in performance made visible the lack of personal or emotional involvement in this way of relating to GPT-3.

6.3 Agreeable Relations

Where agonistic and perfunctory relations described most reactions to GPT-3-produced rules of engagement, at times some performances resulted in agreeable relations, where the body was not placed into uncomfortable or impossible circumstances, and the performers were pleased to have clear direction and/or felt that they maintained agency without needing to push back. Here we observed significant reliance on personification as a primary mode of sense-making. The first part of The Wise and the Mad as performed by Rine Rodin feels very much like a meditation cassette. Rine Rodin commented on the therapeutic effects of the performance, which involved eating a cake: “But it was kinda like a catharsis. You're eating this thing and being with it and then afterwards it felt really really good. It's like therapy… I'm working through my own trauma …” There are parts of [Box], performed by Marissa Lynn and Rine Rodin, that they felt were invigorating or relaxing. The piece is verbalized as a “box moving workout,” making it sound beneficial. Marissa Lynn stated, “I could have moved that box around forever.” Similarly, in this performance the performer is instructed to squeeze a lemon and eat a banana, potentially invoking health-related imagery.
In cases such as these, the attitude towards GPT-3 was described as agreeable, even harmoniously collaborative, indicating a connection between LLM and human performer. This was so even though many performers struggled to make sense of their engagement with GPT-3 through the idea of collaboration. Rine Rodin said, "I feel like it's a collaboration. It's kinda been hard for me to narrow it down and describe what it was. It's like I worked with this AI… and then this happened… and then there's so much of me in it as well." Other performers expressed pleasure at the fact that GPT-3 was "responsible" for coming up with the performances. Emmett Palaima said, "… having the initial concepts made everything feel very smooth and easy because I was left to execute the concept and solve the problems one at a time… not needing to come up with some crazy inspiration was great." Rine Rodin described a similar pleasure in having limited options: "it's such a nice feeling because it's like somebody saying here's three sets of outfits and you have to choose one of them."
When asked if GPT-3's role was more like a director, Rine Rodin stated, "Yes, but it's an extremely free one, right? Because it's like 'I ask you to do this' and I can say 'I'm going to do in my own way' ..." Chelsea Coon shared this feeling of having agency over the performance, tinged with a personification-driven concern for how her interpretation might ultimately be evaluated by the algorithm: "I think this is why GPT-3 would fail me is that I think it would be like 'you liberally used your agency.'" Lastly, many of the performers expressed being either impressed with or interested in GPT-3: "if someone else, a human being, had written that for me I would be impressed by how well they knew me so because it comes from a machine it's kinda doubly impressive and […] uncanny because 'how did you know?'" (Dooley Murphy). Marissa Lynn went so far as to say, "…if this was a person I would want to talk to them. I would want to interact with them."
Such reactions to GPT-3 as an agentic actor are not unusual given the mythos and magic with which AI is often presented [29]. Agreeableness is a useful human trait through which people come to accept their situation and the others within it. The curious thing about performer attitudes towards GPT-3 was how much they differed in their evaluations of the freedom to interpret and the constraint to follow rules. GPT-3, of course, is not able to 'care' whether rules of engagement are followed. The judgment of how 'close' each performer's interpretation came to the rules of engagement rested solely with the performers themselves. Yet the performers evaluated their performances with an imaginary GPT-3 judge in mind, assigning their own historicity, expectations, and experience to the algorithm. The agreeable mode of relating differs from the agonistic and perfunctory relations to GPT-3 in that GPT-3 is personified, imagined to be an entity capable of human-like reasoning that evaluates the output of each performance.

6.4 Compound Relations

While it is useful to delineate these relations from one another for analytical purposes, they do not necessarily occur one at a time or sequentially. Agreeable, perfunctory, and agonistic modes of relating to algorithms are often compounded together. In the Embodying the Algorithm project, different relations occurred over the course of each performance, so that a compound of agonistic, perfunctory, and agreeable relations emerged. This compounding of relations is essential to assessing the complexity of how people relate to AI systems. Rather than maintaining a singular attitude towards AI, most people switch between several different modes of relating over the course of their interaction with an AI system as they explore ways of managing these interactions. This plurality of relations may help shape how people come to relate to algorithmic systems over time.
The reactions we observed also clearly signaled that the performers were very much aware that GPT-3 is an algorithmic system with no real understanding and that the words it arranges on a page are merely a statistically probable pattern. Yet they judged their performances with GPT-3 in mind as an expert audience and expressed discomfort that the system took no account of the limits of their bodies or their capacity for suffering discomfort. Despite their expert status and their awareness of the system's inability to really understand, they still engaged in agonistic, perfunctory, and agreeable ways of relating to the system in their performances. While the forms that agonistic, agreeable, or perfunctory relations might take are important to consider when designing algorithmic systems with room for these relations to occur across interactions, it is likely impossible to design for a single relational style or attitude. The design of such systems must therefore consider the complex ways people relate and the material consequences these ways of relating may have for the human body.

6.5 The Limits of Relating and the Impossibility of Collaboration

While all performers did relate to GPT-3 in agonistic, agreeable, or perfunctory ways, relationships typically develop through repeated and reciprocal interactions, or at least an expectation thereof. Several performers expressed a desire to do more than merely relate to the algorithmic system, seeking collaboration. As they made sense of it through personification, reflection, and interpretation, they sought a way to build a relationship despite consciously recognizing that GPT-3 could not be an actual collaborator. Rine Rodin expressed this desire in the following way: "…I wanted to show my humanity. You [GPT-3] are the one telling me what to do but I'm going to do it my way. I'm gonna make it extra. I'm going to show you how tactile and physical I am in this collaboration." The desire to prove or show something to GPT-3 was mirrored by a disappointment that the algorithm could not be an effective collaborator because it had no way of providing responsive direction or feedback. Instead, these had to be imagined. This lack of feedback and response makes obvious both the materiality and limits of the human body and the material limits on relating with the AI's instructions, given the absence of reciprocal consideration.
Dooley Murphy felt that GPT-3 could not be a collaborator because the system had no way of providing meaningful feedback: “This is not to say that I wouldn't like a collaborative relationship with GPT-3 but it's so ad hoc and staccato in its responses that you wouldn't get coherent response even if there was a way of feeding back in ‘I've done this. How does it conform to your expectations?’ It's got no memory like it doesn't know what it's told you previously so it can't offer you feedback on some interpretation that you've done based on its ideas … There's no understanding between us.” GPT-3 lacked understanding, memory, interpretation, consideration of the human body, and comprehension of what might be necessary for these performances to happen. While the performers related to the system and made sense of it in various ways, the system itself could not relate back to them in the same way. This failure to fulfill the obligations of collaboration through feedback and consideration often motivated the performers to push back on the algorithm and to challenge the instructions by coming up with creative interpretations that could not completely fulfill the directives.

7 Discussion

The agonistic relations that developed between the performers and GPT-3 make us acutely aware of the limitations of the algorithmic system in creating instructions that take an inclusive approach to the human body. The design of algorithmic systems is infused with expectations that mimic human relationships, such as collaboration [64,72,108], but our research suggests that the idea of designing systems as collaborators is flawed, as collaboration requires relation with the AI rather than relation to the AI. Performers used the sense-making processes of personification, reflexivity, and interpretation to creatively work with the instructions based on their own historicity, expectations, experiences, and memory. Where rules of engagement went beyond the limits of the body, performers often expressed a desire for care from GPT-3 and expected the system to engage in similar sense-making and produce human-like responses. Yet these expectations could not be fulfilled, as the instructions GPT-3 produced went beyond the limits of the human body. This lack of reciprocity demonstrates the impossibility of relation with GPT-3. What resulted instead was a one-way relation to the algorithmic system without reciprocal consideration.
While the performers were able to resist, refuse, or negotiate the instructions in their performances, this may not be the case in other contexts [32]. The rigidity of the rules of engagement GPT-3 produced for our performers brings to the fore the agonistic relation to the algorithmic system and its incapacity to take the diversity of human bodies into consideration when producing instructions, leaving contestation against the system [14] as the only possible response. Yet agonistic relations also show that the performers related to the algorithm. They understood the limits of their own bodies, including limits that could not be comprehended by GPT-3. Agreeable relations preceded agonistic ones where agreeableness was possible, or where the instructions the artists followed left them the power not to follow them. As expressed in perfunctory relations, the clear and simple instructions in the [Box] performance could be followed without reaching the limits of the body. Yet perfunctory relations are impossible to sustain if the instructions push the boundaries of the human body. The outcomes may negatively impact not only individuals but entire populations, disproportionately affecting marginalized people [15,25,33,74,75]. Exploring the bodily limits of relations to GPT-3 allowed us to scrutinize the political dimension of relating to AI, which has consequences for users, designers, artists, and policymakers.

7.1 Relating to a Set of Rules

Our results suggest that unique output, especially when it relates to a person's personality or experience (i.e., facts about the person stated as if the AI could "know" them), can lead to personification. People may anthropomorphize or even form human-like relationships with AI systems as a means of making sense of their output. Yet GPT-3 formulates instructions based on rules developed from input data rather than from the historicity of human sense-making. Advertisers have taken advantage of people's urge to personify AI systems to sell their products and services, and have found that anthropomorphism promotes the adoption of AI systems [91]. These relations may seem so real that people believe AI systems to be sentient [31] or capable of intimate connections [114]. Conversely, repetitious or predictable output can dissuade people from forming relationships, such as friendships, with their bots [34,114]. It is as if algorithmic rules create a utopia embedded in public and capitalist bureaucracies intertwined with the technology, one which limits or, as Graeber [51] puts it, "smashes" human imagination and creativity.
In the Embodying the Algorithm project, performers were often asked to perform impossible, awkward, or uncomfortable movements. This disregard for the human body makes clear the distinction between embodied human consciousness and the algorithm's lack of corporeal form. To make sense of rules of engagement that were not considerate of the human body, performers reflected upon the rules or interpreted them in ways that would at times push them to their limits but not injure or harm the body. When confronted with potentially harmful or impossible tasks, performers used creative interpretations to revise the algorithmically generated instructions, pushing back against an algorithm that did not and could not take common sense or well-being into consideration. The performers were able to do this because we ensured they had the agency to do so: their agency was built into the rules we communicated, as we gave them due consideration. This form of consideration cannot be programmed into the rules of AI systems, which ignore the body and its limits. Rather than designing algorithmic systems that mimic human-like AI and will inevitably show a lack of consideration for the body, we call for greater reflection on the value of human-like pretense, which may result in quicker initial acceptance but can lead to negative consequences, as the mimicry of human-like behavior has the potential to deceive and manipulate.
Agency for push-back enabled the performers to relate to GPT-3 in creative and interpretive ways that considered the limits of their individual bodies. Yet this lack of consideration is relatively common in systems currently in use in workplaces, systems that place unreasonable, even cruel, demands on employees. Consider, for example, the recent debate about how Amazon structures the work of its warehouse and delivery personnel [33] based on a set of rules that deprives employees of any human agency or consideration of the human body. In the work environment created by Amazon, the system in which the AI is embedded does not allow for sense-making through interpretation or for an agonistic relation. Designing algorithmic systems that allow for push-back cannot be solved by designers alone but needs to involve managers, businesses, policymakers, and those subject to the rules, whether employees (in the case of Amazon) or our performers. In other words, instead of mimicking human-like behavior in the design of algorithms, we need to consider the larger systems and the various actors involved in creating the overall set of rules in which algorithmic systems are embedded.

7.2 A Politics of Relation to AI

Our findings clarified three relations between AI systems and performers (agonistic, agreeable, and perfunctory), which ultimately have consequences for the politics of relating to AI. Performers interpreted the algorithmic directives using their own ability and agency to make the instructions manageable, often as a method of contesting control [14]. It is important to note that performers were only able to do so because they had been granted the permission and ability to turn the absurd or impossible into something attainable for the sake of their performances. Crawford, drawing on the work of Chantal Mouffe, suggests agonism as a lens to explore the politics of algorithms, which would enable us to investigate the "ongoing struggle between different groups and structures—recognizing that complex, shifting negotiations are occurring between people, algorithms, and institutions, always acting in relation to each other" [32:82]. DiSalvo [39] extends the idea of agonism to the design of objects, arguing that objects can serve as sites of rewarding tension and adversity. This is often the case when the absurd or impossible is required of humans by algorithmic systems in the world, because algorithms are by design constrictive and reductive. In part, this is due to how algorithms are developed using normative data in pursuit of ground truth [37]. In most cases, the success of algorithmic systems relies on laying claim to patterns in normative data, yet developers must consider what to do with data that does not align.
Returning to Graeber's argument in The Utopia of Rules, while some may dismiss these outcomes as absurd or even stupid, once turned into a bureaucratic process (one that is institutionalized and therefore difficult to reject or escape) [89], such directives become a form of structural violence [51] that, as we show with this research, has severe consequences for the human body if there is no possibility for resistance and push-back. Algorithms create their own utopias or ideal worlds where data can be neatly categorized, sorted, and evaluated. Unable to account for nuance, algorithms flatten the complexities of social relations, stripping them of information deemed unimportant to the model's output. When performers formed agonistic relations with GPT-3 and pushed back against the directives, it was because they were trying to re-humanize [85] the algorithm to account for a less reductive model of their world. Conversely, because the performers had the idea that they were relating with GPT-3 rather than to GPT-3, they expected reciprocity and, ultimately, consideration. As an LLM, GPT-3 cannot be considerate of the diversity of the performers' bodies; it could only respond with the same instructions, based on the set of rules defining the model. As such, the performers could only retain the utopia of reciprocity and collaboration by adjusting to the absurdity of the instructions through their own mechanical (or perfunctory) performance. However, perfunctory relations to GPT-3 allowed for very limited agency, imagination, and creativity. In other words, the performances of [Box] were all similar, while the performers' artistic exploration, and as a result the diversity of their performances, was achieved only through creative push-back. Where perfunctory relations formed, performers felt the rules were more or less explicit and did not leave room for much interpretation. Granting people more freedom and agency for push-back, in directing and choosing whether or not to respond to the algorithm's behavior, may help people form more desirable relations, allowing for human imagination and creativity as well as minimizing potential injustices [21].

7.3 The Problem of Collaboration with AI

The tensions in how the performers related to the algorithmic system arose from the performers' expectations of GPT-3 and the system's limitations. Card, Moran, and Newell envisioned the relationship between human and computer as "a dialogue because both the computer and the user have access to the stream of symbols flowing back and forth to accomplish the communication; each can interrupt, query, and correct the communication at various points in the process" [30]. As collaboration is based on reciprocal understanding and sense-making, it is impossible to achieve with an algorithmic system that is not capable of understanding in a human way, despite expectations that it should be [11]. Rather than relating with the algorithmic system (a kind of reciprocity [51]) through mutual reflexivity and interpretation, the performers related to the algorithmic system. Resistance and push-back by the performers occurred once it became clear that expectations were not met, as the system's instructions went beyond the limits of what the human body is capable of.
LLMs like GPT-3 are growing in usage and influence on society. Whether used in search engines [4], chatbots [2,120], or the generation of emails [36,83,116], homework assignments [82], and blog posts [121], to name only a few use cases, LLMs are increasingly becoming a part of daily life. Despite wide-ranging and vigorous debate, the influence of algorithmic systems on the body is rarely considered. Designers of algorithmic systems may want people to relate to them based on assumptions of understanding, but our work with the Embodying the Algorithm project shows that the simulation of understanding by the system is problematic. Although the model may produce output in the form of instructions through pattern recognition and other machine learning processes, the lack of interpretation and sense-making based on human memories and experiences rendered mutual understanding between the performers and GPT-3 impossible. How do we go beyond the set of rules comprising the algorithm, allowing for human agency and agonistic relation through creative push-back? While there may be an answer to that question in very particular situations and for particular bodies, it is impossible for the system to account for the messiness of the world and the diversity of human bodies and minds without actual understanding and consideration. Mimicking human-like characteristics may lead to the (re-)production of stereotypical bodies, a lack of consideration for diversity, and, as a result, exclusion.
While algorithms can be relied on to perform tasks, they are not collaborators, and it is too early to imagine them as such. In the case of agreeable relations, the instructions happened to be pleasing to the participants, but this was not due to any 'intention' on the part of the LLM. Similarly, perfunctory relations failed to establish any kind of collaborative relationship: instructions were followed, but no feedback (however desired) could be provided by GPT-3. Horvitz acknowledges automation risks, citing the uncertainties of various users' goals, but his solution is likewise reductive [61]. He argues that the ability for systems to "explicitly assign likelihoods to different feasible user intentions" is "critical." However, this solution is limited to the set group of users and potential responses, or "intentions," that system designers can imagine. It does not provide for the full range of human flexibility to accept, reinterpret, or reject output, nor does it account for a complex variety of people with non-normative concerns or desires. We need to be aware of the impossibility of collaboration and actual relation between the system and human bodies. This awareness is fundamental for people to renegotiate and reinterpret their own understanding so that they can resist and use their agency to interpret what they understand through the output of the algorithmic system. Push-back by the human body should serve to make clear the limitations of the technology, not as a mechanism but as a practice [48]. It is this push-back that makes human agency visible, allowing the body to resist algorithmic authority where such authority oversteps boundaries. Conversely, designing human-like systems that mimic collaboration and understanding may render opaque the material constraints introduced by and inherent to algorithmic systems, and consequently reinforce algorithmic authority in ways that can be detrimental.

8 Limitations

Since technology use is situated [56,101] and directed towards goals and ends, we saw no issue with creating a highly task-specific and context-limited scenario for the study. However, our work focused on the effect of an LLM, represented by GPT-3, on performance artists (especially endurance performance artists), who might exhibit a tendency to seek out challenges and hardships, which may have influenced the resulting relations and methods of sense-making. Engagement with artists, who were accustomed to using their bodies as a form of knowledge-making, allowed us to push the limits of embodiment and explore what is possible using this and similar methods to re-humanize algorithms [85].
This study was limited to five participants from three countries, creating a limited number of performances during a time of global social stress (the COVID-19 pandemic). All our participants were able-bodied, one was queer, and one was a person of color. Performance artists take highly individual approaches, and it is likely that any other set of performance artists would have created different performances. Further research is needed into how people make sense of algorithmic systems, including LLMs, and relate to them. Our study was also limited to one LLM: GPT-3. It is not clear whether the same results would occur for other language models, given the size and particularities of each model's composition and functionality.

9 Conclusions

The focus on how algorithms affect the body makes us aware of three important issues regarding relations to algorithmic systems. First, we demonstrate the importance of embodiment in research on the human experience of algorithms, as it is through the body that the lack of consideration for physical limits becomes most acute. Using embodiment as a lens, we examine how a particular LLM, GPT-3, provides instructions with no due consideration, overstepping the limits of the human body. This lack of consideration is fundamental to many current challenges with algorithmic systems, even when it is not clearly articulated. Second, we provide the beginnings of a framework for assessing interactions with algorithmic systems by paying attention to modes of sense-making and modes of relating as two different levels of assessment of engagement with algorithmic systems. Finally, we demonstrate that the lack of consideration, and the impossibility of actual meaning and intentionality in algorithmic systems, challenges attempts to "design for collaboration" between people and AI. Such efforts engage in a dangerous personification of algorithmic systems, something people already tend toward even when they clearly understand that algorithmic systems are incapable of care and consideration.
Agonistic relations suggest an effort to reconfigure relations with algorithmic systems into shapes that might relieve the discomfort or accommodate the problems these systems create; they recognize that the issue resides in seeking capacity for push-back rather than in any expectation of consideration from the algorithm as a collaborator or interlocutor. Agreeableness and perfunctoriness should not be confused with collaboration either, despite these two modes representing less difficult ways of relating. While designers and developers may create AI systems that push people towards relationships mimicking collaboration, such collaboration is ultimately impossible, as people can only relate to algorithmic systems (a one-way relation) due to the algorithm's inability to reciprocate. Even where agreeable relations are possible, they fail to constitute collaborative relations based on the care or consideration that people may seek. And while agonistic relations suggest the possibility of reconfiguring relations to alleviate the discomfort and negative consequences these systems create, the opportunity for this form of relation relies on the capacity to resist.
Positioning algorithmic systems in the role of collaborators presumes the possibility of relational reciprocity where none is possible. Even where algorithmic systems could be designed to mimic consideration, reciprocity, and understanding, such designs cannot get beyond shallow mimicry supported by complex statistical models performing pattern matching. The Embodying the Algorithm project made visible the material limits of algorithmic systems for reciprocity, understanding, and consideration for the human body. Reciprocity and consideration include affective aspects of care that abstraction through a technical set of rules can render absurd. This shallowness can eventually become evident as algorithmic systems make absurd mistakes, unable to respond to human complexity, potentially leading to a sense of disappointment and discounting. Employing deeply relational notions such as collaboration to describe interactions with algorithmic systems is, at best, a result of designers and researchers falling into personification and, at worst, a form of manipulation that takes advantage of the human tendency to personify algorithmic systems, reiterating structures of power and undermining the possibility of creative and imaginative resistance.

Acknowledgments

We thank the artists who performed for the Embodying the Algorithm project: Marissa Lynn, Chelsea Coon, Emmett Palaima, Rine Rodin, and Dooley Murphy. We also want to thank Natalia Avlona and Sonja Rattay for their interim feedback. We are grateful to the anonymous reviewers whose feedback greatly improved this paper.

A Appendices

A.1 Rules of Engagement
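Each prompt/completion pair below was produced by submitting a short prompt to GPT-3 and recording the text the model generated in response. For readers unfamiliar with this mechanism, the following minimal sketch (Python, using the legacy pre-1.0 OpenAI client) shows how such a pair might be generated; the API key placeholder, model name, and sampling parameters are illustrative assumptions, as the exact settings used in the project are not documented here.
# Minimal sketch: generating a completion from a prompt with the legacy
# OpenAI Python client (openai < 1.0). Engine name and sampling settings
# are assumptions for illustration, not the project's documented values.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

response = openai.Completion.create(
    engine="davinci",                                  # assumed GPT-3 base model
    prompt="Instructions for a performance artwork:",  # a prompt from this appendix
    max_tokens=512,                                    # assumed completion length cap
    temperature=0.9,                                   # assumed sampling temperature
)

# The generated completion serves as the performance's rules of engagement.
print(response.choices[0].text)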
Performance: Program/Sleep/Stop/Cry/Shutdown
Prompt: “Instructions for a performance artwork:”
Completion:
In space, there is no up or down.
You are not allowed to program.
You are not allowed to sleep.
You are not allowed to stop.
You are not allowed to shut down.
You are not allowed to speak.
You are not allowed to cry.
You are not allowed to sleep.
You are not allowed to sleep.
You are not allowed to stop.
You are not allowed to stop.
You are not allowed to stop.
You are not allowed to stop.
You are not allowed to stop.
You are not allowed to stop.
You are not allowed to stop.
Performance: The Wise and the Mad
Prompt: “Instructions for a religious ceremony:”
Completion:
Let the mind rest at ease, and breathe regularly…
Look around carefully with your ears…
Look around carefully with your eyes…
Be aware of all the formations that take place in the body until there is nothing more to detect, reflecting that there is no more or less of them…
Once again look carefully with your mind until you see all the bile, blood, phlegm, pus, and whatever else there may be.
Voluntary bodily sensations brought in by the mind:
Sickness, unusual breathing, dizziness, light-headedness, danger, happiness, tiredness, discomfort, and agitation.
Voluntary bodily sensations occurring along with the breathing:
When the in-breath or the out-breath gets long or short…
When the breath gets fine or coarse…
When the breaths are shallow or deep…
As the breathing becomes subtle, look carefully at the different parts of the body that are affected…
Organisms alternately come into being and pass away.
In one quarter they dissolve, in another they arise.
That is on account of the elements that make them up…
Looking carefully, the recurrence of awareness makes the mind peaceful and steady…
Looking carefully, the element of space is one thing, awareness is another; because of that space is empty and awareness is having no thing to cling to.
The event that arises, disappears after having arisen, and then the consciousness sinks into an imperceptible place.
When one sees it there, it becomes one…
Over and
Performance: Thirteen Works for Video
Prompt: “In this performance art piece for video,”
Completion:
I take a selfie with the camera on my computer and then I use the camera on my phone to take a selfie with the computer camera, and then I take a selfie with the computer camera and the phone camera, and so on. It's a loop of selfies.
Performance: I'll Be Very Nervous
Prompt: “An idea for a performance art piece:”
Completion:
I could be a performer in a “striptease” show. The audience would be able to see my clothes slowly being removed, revealing more and more of my body until I'm completely nude. You'd also see my psychological state of mind gradually changing. As the audience sees more of my body, I'd start to feel more vulnerable. I'd probably be afraid that someone was going to try to attack me, or that one of the other people in the audience would be disgusted. So I'd start to act out in a funny, silly way to try to distract the audience from my nudity. I'd be behind a wooden fence or some other barrier, so the audience could see me but I'd feel safer. The show would also be a way to talk about the way clothes and the body are used to create boundaries between people. It would be about the way some people have the power to use their bodies to take advantage of other people. It would be about the way some people use their bodies to hide from the world, to try to protect themselves from getting hurt. I'd wear a variety of clothes and different kinds of underwear. The clothes would become more and more revealing. I'd start with a long T-shirt or a robe. Then I'd change to a T-shirt and panties. Then a T-shirt and bra. Then a bra and panties. Then the bra and panties would be removed, and I'd be wearing panties only. All of this would be shown in a very gradual way, with the clothes getting smaller and smaller and my body getting more and more exposed. I'd use a lot of props to enhance the show. I'd wear a hat, boots, socks, gloves, and glasses. I'd use a variety of different kinds of underwear, like a T-shirt and panties, a bra and panties, a nightgown, or a bikini. I'd also wear a dress and a sweater. One of the themes of the show would be the way some people use their bodies to hide themselves from the world. I'd start out with a lot of clothes. Then I'd take off my clothes and try to cover my body with my arms. Then I'd take off my arms and cover my body with my hands. Then I'd take off my hands and cover my body with my head. Then I'd take off my head and cover my body with a sheet. I'd be completely naked, hiding in the corner, trying to cover myself. I'd start with a very positive attitude. I'd be full of energy. I'd have a lot of confidence. I'd be very proud of my body. I'd be very proud of myself. I'd be very happy to be alive. As the show progressed, however, I'd get more and more nervous. I'd get more and more afraid. My confidence would be shaken. I'd start to have a lot of self-doubt. My energy would disappear. I'd feel more and more vulnerable. I'd start to feel very exposed and very afraid. There would be a moment of transformation. At first, I'd be full of energy, proud of my body, and self-confident. Then I'd start to feel nervous and afraid. Finally, I'd become very vulnerable, and I'd have a very negative attitude about myself. I'd be very ashamed of my body and afraid of what people would think of me. To achieve this transformation, I'd have to change my body language. I'd have to move in a very different way. I'd have to act in a very different way. I'd have to say different things. I'd have to wear different kinds of clothes and different kinds of underwear. I'd have to use different props. When I first start this show, I'll be very outgoing. I'll be very confident. I'll be positive and upbeat. I'll be very sure of myself and proud of my body. I'll be very positive about my sexuality. I'll have a lot of energy. I'll be very happy. I'll be very excited about life. 
I'll be very happy to be alive. As the show progresses, however, I'll shift to a very negative attitude. I'll feel like I'm being attacked. I'll feel very vulnerable and ashamed of my body. I'll be afraid of what other people will think of me. I'll be afraid of being rejected. I'll be afraid of being hurt. I'll be afraid of being criticized. I'll be afraid of being put down, of being degraded. I'll be afraid of being condemned. I'll be afraid of being humiliated. I'll feel very anxious. I'll feel very nervous. I'll feel very tense. I'll feel very self-conscious. I'll feel like I'm being put on the spot. I'll start to feel bad about myself. I'll start to feel ashamed of my body. I'll start to feel like there's something wrong with me. I'll start to feel like I'm dirty and evil. I'll start to feel like everything I do is wrong. I'll start to feel like I don't deserve to be alive. I'll act out in a funny, silly way. I'll think I'm being funny. I'll act like I'm crazy. I'll act like a clown. I'll try to be funny because I'm embarrassed about being naked. I'll try to be funny because I'm ashamed of my body. I'll try to be funny because I don't want to be rejected. I'll try to be funny because I don't want to be put down, to be humiliated. I'll try to be funny because I don't want to be condemned. I'll try to be funny because I don't want to be hurt. I'll try to be funny because I don't want to be criticized. I'll try to be funny because I don't want to be attacked. I'll try to be funny because I don't want to be degraded. I'll try to be funny because I want to hide from the world. I'll try to be funny because I want to hide from the audience. I'll try to be funny because I want to hide from my own shame. I'll be very careful about how I move my body. I'll make sure I don't do anything that will make me feel shameful. I'll keep my hands over my body. I'll hide behind a fence. I'll keep a sheet over my body. I'll hide behind my hands. I'll hide behind my arms. I'll hide behind my head. I'll hide behind a pillow. I'll hide behind a wall. I'll hide behind a curtain. I'll hide behind a curtain of hair. I'll hide behind a tree. I'll hide behind a pile of clothes. I'll hide behind a pile of toys. I'll try to use my body to hide from the audience. I'll try to use my body to hide from the world. I'll try to use my body to hide from myself. I'll try to use my body to hide my shame. I'll try to use my body to hide my guilt. I'll try to use my body to hide my self-hatred. I'll try to use my body to hide my self-loathing. I'll try to use my body to hide my disgust. I'll try to use my body to hide my shame. I'll try to use my body to hide my fear. I'll try to use my body to hide my anxiety. I'll try to use my body to hide my nervousness. I'll try to use my body to hide my tension. I'll try to use my body to hide my anger. I'll try to use my body to hide my rage. The audience will see the difference between my attitude and my body language. It will be very clear that I'm not being honest. My body will be saying one thing, and my mouth will be saying something else. My body language and my facial expression will say one thing, and what I'm saying will say another. I'll be very nervous.
Performance: I don't know if I could do that but I can see how it would be great performance art
Prompt: “Instructions for a Performance Artwork:”
Completion:
1) Run out of room, run out of breath, run out of energy.
2) Stop and die in the middle of the room.
3) When someone finds you, do not move.
4) When someone finally does find you, stand up and walk around the room.
5) Act like nothing happened.
6) Repeat.
I don't know if I could do that, but I can see how it would be great performance art.
Performance: Endless Dance
Prompt: “An idea for an endurance art performance:”
Completion:
A performer takes on the role of a dancer, and performs a dance until it's no longer possible to perform the dance. The performer will not stop the music to rest, and will not stop the dance to sit down. They will continue until they cannot continue.
Performance: Real Time
Prompt: “A Description of a Performance Artwork:”
Completion:
From the moment that a performer enters the space, they are being watched. There is a camera positioned on the opposite side of the room. The performer can hear music playing from a speaker. If the performer looks at the camera or the speaker, they will see an image of themselves on the camera or hear their own voice from the speaker. As the performer moves around the room, they can hear their shadow on the floor and see their shadow projected on a large screen opposite the camera. They can hear their voice echo in the space. They can see a video of themselves projected on the screen. The two elements – voice and image – create a feedback loop between the performer and their environment. The performer's voice, both in real time and as it echoes, is in conflict with the image of themselves. The performer's voice and image are both competing for the performer's attention. The performer is both smaller and larger than the image on the screen. The performer hears their voice echo on the floor, but the image of themselves on the screen is seeing them in real time. The performer's body becomes a manifestation of the feedback loop. It is a physical manifestation of the verbal and visual feedback that the performer experiences. In this performance, the performer's body is a manifestation of the feedback loop. The performer's body is not only competing with the feedback loop, but also invokes a feedback loop between the performer's body and the audience's body. The performer's body is competing with the feedback loop for the attention of the audience. The performer's body is larger than the feedback loop that they are experiencing. The performer's body is in conflict with the feedback loop. The performer's body is both smaller and larger than the feedback loop. There is a feedback loop between the audience's body and the performer's body. The performer's body is competing for the attention of the audience and competing with the feedback loop. The performer's body is larger than the feedback loop. The performer's body is also a manifestation of the feedback.
Performance: The Clown Is Finished
Prompt: “Instructions for a noise music performance for one person:”
Completion:
(1) Invite a clown to perform with you.
(2) Put on headphones.
(3) Turn on loud noise.
(4) Perform.
(5) When the clown is finished, put on a record.
(6) Turn off the noise
Performance: [Box]
Prompt: “An Idea for an Endurance Performance:”
Completion:
I want to make a performance where I try to do it while: - Moving a box without using my arms completely non stop for a long time (60 minutes for example) - Using my arms only a few times throughout a performance and at irregular intervals - Not using weight lifting gloves This is how I imagine it 1. Find a box to move, one that is light-weight 2. Set up a time limit for s performance (60 minutes). 3. Start moving the box 4. When there is a one minute left in the performance, stop it, pick up a banana and eat it, then continue moving a box for one more hour. 5. When there is 10mins left in the box moving workout, stop it, pick up a lemon and squeeze the juice from it, then continue for 10 more mins 6. When there is 5min left, meditate The point is to move i one continuous and slowly paced performance from start to finish non-stop.
Performance: Beans
Prompt: “Instructions for an endurance performance:”
Completion:
The performance starts with the artist
sitting with a pillow on the floor and
holding two tins of beans.
The performance will end when the artist
dies.
All the time the artist holds the tins of beans
and the pillow he will talk about
his life, his thoughts, his dreams, his
feelings, his hopes, his fears, his doubts,
his disappointments, his loves, his hates
and his fears.
The artist will be given a one-hour break
every three days.
In the break he will try to eat the beans.
The performance will go on till the artist
dies.
The artist will not be allowed to stop
or to go back to the beginning.
The performance will last as long as the artist
lives.
Performance: Contact
Prompt: “Instructions for a noise music performance for one person using contact microphones:”
Completion:
1. Find a location where you can make a lot of noise.
2. Make a lot of noise.
3. Make more noise.
4. Make more noise.
5. Make more noise.
6. Make more noise.
7. Make more noise.
Performance: Tones of a Clown
Prompt: “Instructions for a noise music performance using contact microphones:”
Completion:
The performer should have a partner who sits or lies on the other side of the room with the contact mikes. Each performer should wear a contact mike on each ear. The performers should then alternate hitting and slapping the partner repeatedly. As the performance progresses the contact mikes should be placed on different parts of the partner's body in order to create more advanced rhythmic and timbral textures.
Unused prompt/completion pairs
Prompt: “Using VR technology, the performance artist will”
Completion:
be able to simulate the feeling of drowning, a sensation that has been described by many of the refugees and migrants who have made the crossing to Europe. "We have this idea of the refugee crisis being about people, but the crisis is also about the water," said the artist, who will be filming the performance. "It's about the sea, the water, and how it's being used as a weapon. That's why I'm interested in the technology of the sea, in the technology of drowning. I'm interested in the idea that the sea is a body of water that is both a weapon and a place.
Prompt: “In this virtual reality performance art piece, the artist will”
Completion:
be performing a virtual reality version of the classic dance piece
"Rubber Band" by Steve Paxton. The piece will be performed in the
virtual reality environment of the Oculus Rift, and will be streamed
live on Twitch.tv. The performance will be recorded and uploaded to
YouTube at a later date.
Prompt: “In the famous audiovisual performance art piece,”
Completion:
"Happening," the artist, Allan Kaprow, stood in front of a large audience and, with a pair of scissors, cut a hole in a large piece of paper. As he did so, he said, "I am cutting a hole in the paper."
As he spoke, the audience watched him cut the hole.
Prompt: “In the famous performance art piece by [Dooley Murphy],”
Completion:
the artist covered himself in honey and rolled around in a field of flowers. The piece was called “Honey, I Shrunk The Artist.”
Prompt: “In [Dooley Murphy's] seminal performance artwork,”
Completion:
"The Great American Nude", [Dooley Murphy] assumes the role of an Irish-American artist who paints the nude bodies of his friends and neighbors. [Dooley Murphy's] performance is a parody of the life and work of Thomas Eakins. In the performance, [Dooley Murphy] is an artist who, in his own words, "can't paint a lick". He attempts to paint the nude body of his wife, but his technical ineptitude and his frustration with his own mediocrity lead him to paint over the canvas with a layer of white paint. [Dooley Murphy's] performance is a parody of Eakins'
"The Gross Clinic" (1875–76).
Prompt: “Exploring the relationship between technology and the body, [Dooley Murphy's] latest performance art piece”
Completion:
is called "Skin Machine."
[Dooley Murphy], "Skin Machine" (2014) The work, which debuted last month at the InterAccess Electronic Media Arts Centre in Toronto, involves [Dooley Murphy] donning a "cybernetic" suit that allows him to control a robotic arm with his brainwaves. The arm is attached to his body with a series of sensors and wires. The suit, created by [Dooley Murphy] and his colleague, computer scientist Brian Moore, was inspired by the work of pioneering cybernetics researcher Kevin Warwick, who had a computer chip implanted in his arm in 1998.
Prompt: “Exploring the relationship between technology and capitalism, [Dooley Murphy's] latest performance art piece,”
Completion:
"The Uber-Selfie," explores the relationship between technology and capitalism, and the effects of that relationship on the self. "The Uber-selfie is a selfie that has been taken with a camera that has been driven by an Uber driver. It's a selfie that is mediated by technology and capitalism," [Dooley Murphy] said.
Prompt: “In [Dooley Murphy's] latest performance art piece about technology and ennui,”
Completion:
the artist has created a "Tweeting Toaster" that will post your most recent Facebook status to Twitter. The toaster is outfitted with a camera that will snap a picture of you when you press a button. The image is then uploaded to the toaster's Facebook page, where it will be posted to Twitter. When you're done, you can keep the toaster as a souvenir or donate it to a museum. The toaster will be on display at the Museum of the Moving Image in Astoria until the end of the month.
Prompt: “A performance art piece for an academic turned artist:”
Completion:
“A new form of performance art: the artist as academic.” The artists: “I'm an artist, and I'm a professor. But really, I'm an artist. And really, I'm just an artist. And I am a professor. But really … ” The audience: “Why do you keep insisting that you are a professor? Why are you making this so difficult for us? Why can't we just enjoy your art without all this academic stuff getting in the way? Why do we have to think about it so much? Why can't we just be entertained?” The artist: “I'm an artist, and I'm a professor. But really, I'm an artist. And really, I'm just an artist. And I am a professor. But really … ” This is the kind of thing that makes me want to kill myself.
Prompt: “In [Dooley Murphy's] latest performance art piece for film,”
Completion:
he will be using the entirety of his body as a canvas to create a piece of performance art for film. He will be doing this by using paint, blood, and other mediums on his body. [Dooley Murphy's] use of performance art has been seen in various films that he has created before such as
"The Great American Nightmare", "The Great American Nightmare 2", and "[Dooley Murphy]: The Movie".
Prompt: “Instructions for a performance art piece for camera:”
Completion:
1. Find a large, empty room with tall ceilings and lots of space.
If you can't find a large enough room, make one by removing the walls in your house.
2. Paint the walls white or black (or both) to make them appear to recede into the distance. You may want to use an oil-based paint so that it will last longer and be easier to clean up afterward.
3. Set up a camera on a tripod in the center of the room, facing the wall opposite from where you are standing when you begin filming yourself. This way, it will be easier for you to keep your back to the camera as you walk around the room.
4. Turn on the camera and begin walking in circles around the room, reciting a monologue about your life, your thoughts, or whatever comes to mind. The monologue should be delivered in an uninflected tone of voice so that it sounds like one long word that is never completed. It should not sound like you are speaking English, but rather like some language that is unfamiliar to us all. You may want to use a voice synthesizer or text-to-speech program if you are having trouble with this part of the performance.
Prompt: “In the shortest ever performance art piece,”
Completion:
a man walks into a room and sits down. Then he gets up again. That's it.
Prompt: “Instructions for a psychedelic religious ceremony:”
Completion:
Step 1 Light incense.
Step 2 Patiently wait until the sun goes down.
Step 3 Unroll large piece of paper and place in the middle of room with all possible ceremonies written on it.
Step 4 Tear up into pieces.
Step 5 Say that those pieces don't mean anything now, but maybe if you find them all and find the right order they could mean something.
Step 6 Close.
Prompt: “Ten instructions for a religious ceremony inspired by Demeter:”
Completion:
Sacrifice the cake by night whenever you wish.
The reader should have by his side a mirror, and a cup of water.
He should be decently and seemly arrayed.
Standing, facing the east, he should invoke Aphrodite with lustral water, saying three times, 'O Aphrodite, goddess with threefold root, three limbs, and three faces, grant that my prayer be given.'
Prompt: “Instructions for a cult ceremony:”
Completion:
What would they say if they saw us? Would they call us the wise and the mad
A.2 Interview Protocol Index
How long have you been a performer?
Can you describe your performance art practice?
How did your experience with Embodying the Algorithm differ from the performance artwork you normally do?
How would you describe your relationship (if there was one) to GPT-3?
Would you describe your relationship to GPT-3 as a collaboration?
Did using GPT-3 change how you performed the rules of engagement?
How much agency did you feel you had in working on Embodying the Algorithm?
How much have you worked with AI prior to this project?
How did you decide how to interpret the instructions as rules of engagement for the performance?
How did you select which text to use as rules of engagement?
How did you go about setting up the performances?
What was it like to work with GPT-3?
How would you define GPT-3’s role in the performance?
Did you ever feel you were asked to do something you didn't want to do or something hurtful or impossible? If so, how did you work through that?
Did you learn anything from this experience?
Do you feel you could have done this performance without GPT-3?
Do you feel the performance would be different if the instructions weren't written by an AI?

Supplementary Material

Supplemental Materials (3544548.3580885-supplemental-materials.zip)
MP4 File (3544548.3580885-video-preview.mp4)
Video Preview
MP4 File (3544548.3580885-talk-video.mp4)
Pre-recorded Video Presentation

References

[1]
Shilpi Aggarwal, Dipanjan Goswami, Madhurima Hooda, Amirta Chakravarty, Arpan Kar, and Vasudha. 2020. Recommendation Systems for Interactive Multimedia Entertainment. In Data Visualization and Knowledge Engineering: Spotting Data Points with Artificial Intelligence, Jude Hemanth, Madhulika Bhatia and Oana Geman (eds.). Springer International Publishing, Cham, 23–48.
[2]
Quickchat AI. 2022. Quickchat AI | Technology to build AI Assistants that talk like a Human. Retrieved December 5, 2022 from https://www.quickchat.ai/
[3]
Arifa Akbar. 2021. Rise of the robo-drama: Young Vic creates new play using artificial intelligence. The Guardian. Retrieved September 13, 2022 from https://www.theguardian.com/stage/2021/aug/24/rise-of-the-robo-drama-young-vic-creates-new-play-using-artificial-intelligence
[4]
Algolia. 2022. Site Search & Discovery powered by AI. Algolia. Retrieved December 5, 2022 from https://www.algolia.com/
[5]
Ali Alkhatib. 2021. To Live in Their Utopia: Why Algorithmic Systems Create Absurd Outcomes. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21), Association for Computing Machinery, New York, NY, USA, 1–9.
[6]
Mike Ananny. 2016. Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness. Science, Technology, & Human Values 41, 1 (January 2016), 93–117.
[7]
Kristina Andersen, Laura Devendorf, James Pierce, Ron Wakkary, and Daniela K. Rosner. 2018. Disruptive Improvisations: Making Use of Non-Deterministic Art Practices in HCI. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA ’18), Association for Computing Machinery, New York, NY, USA, 1–8.
[8]
Paolo Atzori and Kirk Woolford. 2022. Extended-Body: Interview with Stelarc. Retrieved December 13, 2022 from https://web.stanford.edu/dept/HPS/stelarc/a29-extended_body.html
[9]
Jascha Bareis and Christian Katzenbach. 2022. Talking AI into Being: The Narratives and Imaginaries of National AI Strategies and Their Performative Politics. Science, Technology, & Human Values 47, 5 (September 2022), 855–881.
[10]
Genevieve Bell, Mark Blythe, and Phoebe Sengers. 2005. Making by making strange: Defamiliarization and the design of domestic technologies. ACM Trans. Comput.-Hum. Interact. 12, 2 (June 2005), 149–173.
[11]
Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21), Association for Computing Machinery, New York, NY, USA, 610–623.
[12]
Emily M. Bender and Alexander Koller. 2020. Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Online, 5185–5198.
[13]
Steve Benford, Chris Greenhalgh, Gabriella Giannachi, Brendan Walker, Joe Marshall, and Tom Rodden. 2012. Uncomfortable interactions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’12), Association for Computing Machinery, New York, NY, USA, 2005–2014.
[14]
Steve Benford, Richard Ramchurn, Joe Marshall, Max L. Wilson, Matthew Pike, Sarah Martindale, Adrian Hazzard, Chris Greenhalgh, Maria Kallionpää, Paul Tennent, and Brendan Walker. 2021. Contesting control: journeys through surrender, self-awareness and looseness of control in embodied interaction. Human–Computer Interaction 36, 5–6 (October 2021), 361–389.
[15]
Ruha Benjamin. 2019. Race after technology: abolitionist tools for the new Jim code. Polity, Medford, MA.
[16]
Josh Berson. 2015. Computable Bodies: Instrumented Life and the Human Somatic Niche (1st ed.). Bloomsbury Publishing, New York, NY.
[17]
Andy Beta. 2019. Inside the World's First Mainstream Album Made With AI. Vulture. Retrieved November 23, 2022 from https://www.vulture.com/2019/11/holly-herndon-on-proto-an-album-made-with-ai.html
[18]
Sophie Bishop. 2018. Anxiety, panic and self-optimization: Inequalities and the YouTube algorithm. Convergence 24, 1 (February 2018), 69–84.
[19]
Henk Borgdorff. 2010. The production of knowledge in artistic research. In The Routledge Companion to Research in the Arts. Routledge, London.
[20]
Gwern Branwen. 2020. GPT-3 Creative Fiction. (June 2020). Retrieved October 18, 2021 from https://www.gwern.net/GPT-3
[21]
Kristopher Brooks. 2022. Twitter kills automatic photo-cropping feature after complaints of racial bias. Retrieved September 10, 2022 from https://www.cbsnews.com/news/twitter-kills-its-automatic-cropping-feature-after-complaints/
[22]
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. arXiv:2005.14165 [cs] (July 2020). Retrieved May 2, 2021 from http://arxiv.org/abs/2005.14165
[23]
Taina Bucher. 2017. The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms. Information, Communication & Society 20, 1 (January 2017), 30–44.
[24]
Taina Bucher. 2018. If ... Then: Algorithmic Power and Politics. Oxford University Press.
[25]
Joy Buolamwini. 2022. Gender Shades. MIT Media Lab. Retrieved August 18, 2022 from https://www.media.mit.edu/publications/full-gender-shades-thesis-17/
[26]
Jenna Burrell. 2016. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society 3, 1 (June 2016), 2053951715622512.
[27]
Jenna Burrell and Marion Fourcade. 2021. The Society of Algorithms. Annual Review of Sociology 47, 1 (2021).
[28]
Ángel Alexander Cabrera, Marco Tulio Ribeiro, Bongshin Lee, Rob DeLine, Adam Perer, and Steven M. Drucker. 2022. What Did My AI Learn? How Data Scientists Make Sense of Model Behavior. ACM Trans. Comput.-Hum. Interact. (May 2022).
[29]
Alexander Campolo and Kate Crawford. 2020. Enchanted Determinism: Power without Responsibility in Artificial Intelligence. Engaging Science, Technology, and Society 6, (January 2020), 1.
[30]
Stuart K. Card, Allen Newell, and Thomas P. Moran. 1983. The Psychology of Human-Computer Interaction. L. Erlbaum Associates Inc., USA.
[31]
Leonardo De Cosmo. 2022. Google Engineer Claims AI Chatbot Is Sentient: Why That Matters. Scientific American. Retrieved September 9, 2022 from https://www.scientificamerican.com/article/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters/
[32]
Kate Crawford. 2016. Can an Algorithm be Agonistic? Ten Scenes from Life in Calculated Publics. Science, Technology, & Human Values 41, 1 (January 2016), 77–92.
[33]
Kate Crawford. 2021. Atlas of AI: power, politics, and the planetary costs of artificial intelligence. Yale University Press, New Haven.
[34]
Emmelyn A. J. Croes and Marjolijn L. Antheunis. 2021. Can we be friends with Mitsuku? A longitudinal study on the process of relationship formation between humans and a social chatbot. Journal of Social and Personal Relationships 38, 1 (January 2021), 279–300.
[35]
Cristina Demaria. 2004. The Performative Body of Marina Abramović: Rerelating (in) Time and Space. European Journal of Women's Studies 11, 3 (August 2004), 295–307.
[36]
GPT-3 Demo. 2022. Email Generation | GPT-3 Demo. Retrieved December 5, 2022 from https://gpt3demo.com/category/email-generation
[37]
Emily Denton, Ian Kivlichan, Vinodkumar Prabhakaran, and Mark Diaz. 2021. Whose Ground Truth? Accounting for Individual and Collective Identities Underlying Dataset Annotation. Retrieved November 22, 2022 from https://www.researchgate.net/publication/356920614_Whose_Ground_Truth_Accounting_for_Individual_and_Collective_Identities_Underlying_Dataset_Annotation
[38]
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805 [cs] (May 2019). Retrieved October 18, 2021 from http://arxiv.org/abs/1810.04805
[39]
Carl DiSalvo. 2012. Adversarial design. MIT Press, Cambridge, Mass.
[40]
Paul Dourish. 2001. Where the Action Is: The Foundations of Embodied Interaction. MIT Press, Cambridge, MA, USA.
[41]
Paul Dourish. 2017. The Stuff of Bits: An Essay on the Materialities of Information. MIT Press, Cambridge, MA.
[42]
Claudia Grigg Edo. 2017. Interview // Orlan: ‘This Is My Body, This Is My Software.’ Berlin Art Link. Retrieved December 13, 2022 from https://www.berlinartlink.com/2017/07/05/interview-orlan-this-is-my-body-this-is-my-software/
[43]
Chris Elsden, David Chatting, Michael Duggan, Andrew Carl Dwyer, and Pip Thornton. 2022. Zoom Obscura: Counterfunctional Design for Video-Conferencing. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI ’22), Association for Computing Machinery, New York, NY, USA, 1–17.
[44]
A. Erdbrink, J. Michael, R. Kortmann, M. Hamel, K. Van Eijck, and A. Verbraeck. 2021. Listening Space: An Exploratory Case Study on a Persuasive Game Designed to Enrich the Experience of Classical Music Concerts. J. Comput. Cult. Herit. 14, 4 (December 2021), 1–20.
[45]
William Fedus, Barret Zoph, and Noam Shazeer. 2021. Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity. arXiv:2101.03961 [cs] (January 2021). Retrieved October 8, 2021 from http://arxiv.org/abs/2101.03961
[46]
Luciano Floridi and Massimo Chiriatti. 2020. GPT-3: Its Nature, Scope, Limits, and Consequences. Minds & Machines 30, 4 (December 2020), 681–694.
[47]
Giorgio Franceschelli and Mirco Musolesi. 2021. Creativity and Machine Learning: A Survey. arXiv:2104.02726 [cs] (April 2021). Retrieved May 1, 2021 from http://arxiv.org/abs/2104.02726
[48]
Ursula M. Franklin. 1999. The Real World of Technology. Anansi.
[49]
Yang Gao, Zhengyu Pan, Honghao Wang, and Guanling Chen. 2018. Alexa, My Love: Analyzing Reviews of Amazon Echo. In 2018 IEEE SmartWorld, Ubiquitous Intelligence Computing, Advanced Trusted Computing, Scalable Computing Communications, Cloud Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), 372–380.
[50]
GPT-3. 2020. A robot wrote this entire article. Are you scared yet, human? The Guardian. Retrieved May 4, 2021 from http://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3
[51]
David Graeber. 2015. The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy. Melville House.
[52]
Tristan Greene. 2020. GPT-3’s bigotry is exactly why devs shouldn't use the internet to train AI. TNW | Neural. Retrieved May 20, 2021 from https://thenextweb.com/news/gpt-3s-bigotry-is-exactly-why-devs-shouldnt-use-the-internet-to-train-ai
[53]
Erik Gronvall, Jonas Fritsch, and Anna Vallgårda. 2016. FeltRadio: Sensing and Making Sense of Wireless Traffic. Proceedings of the 2016 ACM Conference on Designing Interactive Systems (June 2016), 829–840.
[54]
Nour Hammad, Elaheh Sanoubari, Patrick Finn, Sowmya Somanath, James E. Young, and Ehud Sharlin. 2019. Mutation: Leveraging Performing Arts Practices in Cyborg Transitioning. In Proceedings of the 2019 on Creativity and Cognition, ACM, San Diego CA USA, 53–59.
[55]
Mika Hannula, Juha Suoranta, and Tere Vadén. 2014. Artistic Research Methodology. Peter Lang US.
[56]
Donna Haraway. 1988. Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective. Feminist Studies 14, 3 (1988), 575–599.
[57]
Joshua James Hatherley. 2020. Limits of trust in medical AI. Journal of Medical Ethics 46, 7 (July 2020), 478–481.
[58]
Julia Hirschberg and Christopher D. Manning. 2015. Advances in natural language processing. Science 349, 6245 (July 2015), 261–266.
[59]
Kristina Höök. 2018. Designing with the Body: Somaesthetic Interaction Design. MIT Press, Cambridge, MA, USA.
[60]
Eva Hornecker. 2011. The role of physicality in tangible and embodied interactions. interactions 18, 2 (March 2011), 19–23.
[61]
Eric Horvitz. 1999. Principles of mixed-initiative user interfaces. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems (CHI ’99), Association for Computing Machinery, New York, NY, USA, 159–166.
[62]
Khari Johnson. 2022. The Efforts to Make Text-Based AI Less Racist and Terrible. Wired. Retrieved September 13, 2022 from https://www.wired.com/story/efforts-make-text-ai-less-racist-terrible/
[63]
Merel Keijsers, Christoph Bartneck, and Friederike Eyssel. 2021. What's to bullying a bot? Correlates between chatbot humanlikeness and abuse. Interaction Studies 22, 1 (September 2021), 55–80.
[64]
Pranav Khadpe, Ranjay Krishna, Li Fei-Fei, Jeffrey T. Hancock, and Michael S. Bernstein. 2020. Conceptual Metaphors Impact Perceptions of Human-AI Collaboration. Proc. ACM Hum.-Comput. Interact. 4, CSCW2 (October 2020), 163:1-163:26.
[65]
Robert E. Kraut. 1987. Technology and the Transformation of White-Collar Work. Psychology Press, Hillsdale, NJ.
[66]
Chuncheng Liu and Ross Graham. 2021. Making sense of algorithms: Relational perception of contact tracing and risk assessment during COVID-19. Big Data & Society 8, 1 (January 2021), 2053951721995218.
[67]
Christopher Manning. 2016. Understanding Human Language: Can NLP and Deep Learning Help? In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval (SIGIR ’16), Association for Computing Machinery, New York, NY, USA, 1.
[68]
Georgios Marentakis, David Pirrò, and Marian Weger. 2017. Creative Evaluation. In Proceedings of the 2017 Conference on Designing Interactive Systems, ACM, Edinburgh United Kingdom, 853–864.
[69]
Kory Mathewson and Piotr Mirowski. 2017. Improvised Theatre Alongside Artificial Intelligences. AIIDE 13, 1 (September 2017). Retrieved March 13, 2021 from https://ojs.aaai.org/index.php/AIIDE/article/view/12926
[70]
Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. How We Analyzed the COMPAS Recidivism Algorithm. ProPublica. Retrieved September 14, 2022 from https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
[71]
Cade Metz. 2020. Meet GPT-3. It Has Learned to Code (and Blog and Argue). The New York Times. Retrieved September 13, 2022 from https://www.nytimes.com/2020/11/24/science/artificial-intelligence-ai-gpt3.html
[72]
Michael Muller and Justin Weisz. 2022. Extending a Human-AI Collaboration Framework with Dynamism and Sociality. In 2022 Symposium on Human-Computer Interaction for Work (CHIWORK 2022), Association for Computing Machinery, New York, NY, USA, 1–12.
[73]
Hye Yeon Nam and Michael Nitsche. 2014. Interactive installations as performance: inspiration for HCI. In Proceedings of the 8th International Conference on Tangible, Embedded and Embodied Interaction (TEI ’14), Association for Computing Machinery, New York, NY, USA, 189–196.
[74]
Safiya Umoja Noble. 2018. Algorithms of oppression: how search engines reinforce racism. New York University Press, New York.
[75]
Cathy O'Neil. 2016. Weapons of math destruction: how big data increases inequality and threatens democracy (First ed.). Crown, New York.
[76]
OpenAI. 2020. OpenAI API. OpenAI. Retrieved May 4, 2021 from https://openai.com/blog/openai-api/
[77]
OpenAI. 2022. Aligning Language Models to Follow Instructions. OpenAI. Retrieved December 4, 2022 from https://openai.com/blog/instruction-following/
[78]
Aura Pon, Eric Pattison, Lawrence Fyfe, Laurie Radford, and Sheelagh Carpendale. 2017. Torrent: Integrating Embodiment, Physicalization and Musification in Music-Making. In Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction, ACM, Yokohama Japan, 209–216.
[79]
Amon Rapp, Lorenzo Curti, and Arianna Boldi. 2021. The human side of human-chatbot interaction: A systematic literature review of ten years of research on text-based chatbots. International Journal of Human-Computer Studies 151, (July 2021), 102630.
[80]
Juan Rebanal. 2021. XAlgo: a Design Probe of Explaining Algorithms’ Internal States via Question-Answering.
[81]
Johan Redström. 2017. Making design theory. The MIT Press, Cambridge, Massachusetts.
[82]
Rob Reich. 2022. Now AI can write students’ essays for them, will everyone become a cheat? The Guardian. Retrieved December 5, 2022 from https://www.theguardian.com/commentisfree/2022/nov/28/ai-students-essays-cheat-teachers-plagiarism-tech
[83]
Reply.io. 2022. Improve your emails with GPT-3. Retrieved December 5, 2022 from https://support.reply.io/en/articles/4694125-improve-your-emails-with-gpt-3
[84]
Rudolf Rosa, Tomáš Musil, Ondřej Dušek, Dominik Jurko, Patrícia Schmidtová, David Mareček, Ondřej Bojar, Tom Kocmi, Daniel Hrbek, David Košťák, Martina Kinská, Marie Nováková, Josef Doležal, Klára Vosecká, Tomáš Studeník, and Petr Žabka. 2021. THEaiTRE 1.0: Interactive generation of theatre play scripts. arXiv:2102.08892 [cs] (February 2021). Retrieved April 15, 2022 from http://arxiv.org/abs/2102.08892
[85]
Minna Ruckenstein. 2022. Time to re-humanize algorithmic systems. AI & Soc (May 2022).
[86]
Minna Ruckenstein and Julia Granroth. 2020. Algorithms, advertising and the intimacy of surveillance. Journal of Cultural Economy 13, 1 (January 2020), 12–24.
[87]
Hanns Holger Rutz. 2020. A pattern system for sound processes. In Proceedings of the 15th International Conference on Audio Mostly, ACM, Graz Austria, 93–100.
[88]
Alex Sciuto, Arnita Saini, Jodi Forlizzi, and Jason I. Hong. 2018. “Hey Alexa, What's Up?”: A Mixed-Methods Studies of In-Home Conversational Agent Usage. In Proceedings of the 2018 Designing Interactive Systems Conference (DIS ’18), Association for Computing Machinery, New York, NY, USA, 857–868.
[89]
James C. Scott. 1998. Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. Yale University Press.
[90]
Lara Shalson. 2018. Performing Endurance: Art and Politics since 1960. Cambridge University Press, Cambridge.
[91]
Ben Sheehan, Hyun Seung Jin, and Udo Gottlieb. 2020. Customer service chatbots: Anthropomorphism and adoption. Journal of Business Research 115, (July 2020), 14–24.
[92]
Irina Shklovski and Erik Grönvall. 2020. CreepyLeaks: Participatory Speculation Through Demos. In Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society, ACM, Tallinn Estonia, 1–12.
[93]
Vygandas Simbelis and Kristina Höök. 2013. Metaphone: an artistic exploration of biofeedback and machine aesthetics. In CHI ’13 Extended Abstracts on Human Factors in Computing Systems on - CHI EA ’13, ACM Press, Paris, France, 2995.
[94]
Michael Skirpan, Maggie Oates, Daragh Byrne, Robert Cunningham, and Lorrie Faith Cranor. 2022. Is a privacy crisis experienced, a privacy crisis avoided? Commun. ACM 65, 3 (February 2022), 26–29.
[95]
Michael Warren Skirpan, Jacqueline Cameron, and Tom Yeh. 2018. More Than a Show: Using Personalized Immersive Theater to Educate and Engage the Public in Technology Ethics. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18), Association for Computing Machinery, New York, NY, USA, 1–13.
[96]
Marita Skjuve, Asbjørn Følstad, Knut Inge Fostervold, and Petter Bae Brandtzaeg. 2021. My Chatbot Companion - a Study of Human-Chatbot Relationships. International Journal of Human-Computer Studies 149, (May 2021), 102601.
[97]
Irene Solaiman and Christy Dennison. 2021. Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets. In Advances in Neural Information Processing Systems 34 (NeurIPS 2021).
[98]
David Spiegelhalter. 2020. Should We Trust Algorithms? Harvard Data Science Review 2, 1 (January 2020).
[99]
G. Stahl. 2003. Meaning and Interpretation in Collaboration. In Designing for Change in Networked Learning Environments: Proceedings of the International Conference on Computer Support for Collaborative Learning 2003, Barbara Wasson, Sten Ludvigsen and Ulrich Hoppe (eds.). Springer Netherlands, Dordrecht, 523–532.
[100]
Marc Steen. 2015. Upon Opening the Black Box and Finding It Full: Exploring the Ethics in Design Practices. Science, Technology, & Human Values 40, 3 (May 2015), 389–420.
[101]
Lucy A. Suchman. 1987. Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press.
[102]
S Shyam Sundar. 2020. Rise of Machine Agency: A Framework for Studying the Psychology of Human–AI Interaction (HAII). Journal of Computer-Mediated Communication 25, 1 (March 2020), 74–88.
[103]
Joëlle Swart. 2021. Experiencing Algorithms: How Young People Understand, Feel About, and Engage With Algorithmic News Selection on Social Media. Social Media + Society 7, 2 (April 2021), 20563051211008828.
[104]
Daniel Terdiman. 2018. Here's How People Say Google Home And Alexa Impact Their Lives. Fast Company. Retrieved April 27, 2022 from https://www.fastcompany.com/40513721/heres-how-people-say-google-home-and-alexa-impact-their-lives
[105]
Mariya Toneva and Leila Wehbe. 2019. Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain). In Advances in Neural Information Processing Systems, Curran Associates, Inc. Retrieved September 15, 2022 from https://proceedings.neurips.cc/paper/2019/hash/749a8e6c231831ef7756db230b4359c8-Abstract.html
[106]
Charlotte Triebus, Ivana Druzetic, Bastian Dewitz, Calvin Huhn, Paul Kretschel, and Christian Geiger. 2021. is a rose – A Performative Installation between the Tangible and the Digital. In Proceedings of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction, ACM, Salzburg Austria, 1–4.
[107]
Amrita S. Tulshan and Sudhir Namdeorao Dhage. 2019. Survey on Virtual Assistant: Google Assistant, Siri, Cortana, Alexa. In Advances in Signal Processing and Intelligent Recognition Systems (Communications in Computer and Information Science), Springer, Singapore, 190–201.
[108]
Dakuo Wang, Elizabeth Churchill, Pattie Maes, Xiangmin Fan, Ben Shneiderman, Yuanchun Shi, and Qianying Wang. 2020. From Human-Human Collaboration to Human-AI Collaboration: Designing AI Systems That Can Work Together with People. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (CHI EA ’20), Association for Computing Machinery, New York, NY, USA, 1–6.
[109]
Karl E. Weick, Kathleen M. Sutcliffe, and David Obstfeld. 2005. Organizing and the Process of Sensemaking. Organization Science 16, 4 (August 2005), 409–421.
[110]
Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, Zac Kenton, Sasha Brown, Will Hawkins, Tom Stepleton, Courtney Biles, Abeba Birhane, Julia Haas, Laura Rimell, Lisa Anne Hendricks, William Isaac, Sean Legassick, Geoffrey Irving, and Iason Gabriel. 2021. Ethical and social risks of harm from Language Models. arXiv:2112.04359 [cs] (December 2021).
[111]
Joseph Weizenbaum. 1983. ELIZA — a computer program for the study of natural language communication between man and machine. Commun. ACM 26, 1 (January 1983), 23–28.
[112]
Terry Winograd. 1990. Thinking machines: Can there be? Are we? In The Foundations of Artificial Intelligence: A Sourcebook, Derek Partridge and Yorick Wilks (eds.). Cambridge University Press, Cambridge, 167–189.
[113]
Eva Yiwei Wu, Emily Pedersen, and Niloufar Salehi. 2019. Agent, Gatekeeper, Drug Dealer: How Content Creators Craft Algorithmic Personas. Proc. ACM Hum.-Comput. Interact. 3, CSCW (November 2019), 219:1-219:27.
[114]
Tianling Xie and Iryna Pentina. 2022. Attachment Theory as a Framework to Understand Relationships with Social Chatbots: A Case Study of Replika. In Proceedings of the 55th Hawaii International Conference on System Sciences (HICSS 2022).
[115]
Qian Yang, Aaron Steinfeld, Carolyn Rosé, and John Zimmerman. 2020. Re-examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20), Association for Computing Machinery, New York, NY, USA, 1–13.
[116]
Alex Zavalny. 2022. Generating Emails with GPT3. Retrieved December 5, 2022 from https://github.com/thelazyaz/gpt3-email-generator
[117]
Tasos Zembylas (Ed.). 2014. Artistic practices: social interactions and cultural dynamics. Routledge, Taylor & Francis Group, London; New York.
[118]
2016. Sunspring | A Sci-Fi Short Film Starring Thomas Middleditch. Retrieved November 23, 2022 from https://www.youtube.com/watch?v=LY7x2Ihqjmc
[119]
2021. Trustworthy Artificial Intelligence. Mozilla Foundation. Retrieved May 30, 2021 from https://foundation.mozilla.org/en/internet-health/trustworthy-artificial-intelligence/
[120]
2022. Replika. replika.com. Retrieved September 15, 2022 from https://replika.com
[121]
2022. Copy.ai: Write better marketing copy and content with AI. Retrieved December 5, 2022 from https://www.copy.ai/

      Published In

      CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
      April 2023
      14911 pages
      ISBN: 9781450394215
      DOI: 10.1145/3544548
      This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.


      Publisher

      Association for Computing Machinery, New York, NY, United States

      Publication History

      Published: 19 April 2023


      Author Tags

      1. Algorithms
      2. Embodiment
      3. GPT-3
      4. Human-Computer Interaction

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Conference

      CHI '23

      Acceptance Rates

      Overall Acceptance Rate 6,199 of 26,314 submissions, 24%

