DOI: https://doi.org/10.1145/3643834.3660729
Research Article | Open Access

Understanding Human-AI Workflows for Generating Personas

Published: 01 July 2024

Abstract

One barrier to deeper adoption of user-research methods is the amount of labor required to create high-quality representations of collected data. Trained user researchers need to analyze datasets and produce informative summaries pertaining to the original data. While Large Language Models (LLMs) could assist in generating summaries, they are known to hallucinate and produce biased responses. In this paper, we study human–AI workflows that differently delegate subtasks in user research between human experts and LLMs. Studying persona generation as our case, we found that LLMs are not good at capturing key characteristics of user data on their own. Better results are achieved when we leverage human skill in grouping user data by their key characteristics and exploit LLMs for summarizing pre-grouped data into personas. Personas generated via this collaborative approach can be more representative and empathy-evoking than ones generated by human experts or LLMs alone. We also found that LLMs could mimic generated personas and enable interaction with personas, thereby helping user researchers empathize with them. We conclude that LLMs, by facilitating the analysis of user data, may promote widespread application of qualitative methods in user research.
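To make the workflow the abstract describes more concrete, here is a minimal sketch of a human-in-the-loop persona pipeline: a researcher first groups interview excerpts by the key characteristics they see, an LLM then summarizes each pre-grouped set into a persona, and the same LLM can role-play that persona so researchers can interview it. This is an illustration under stated assumptions, not the authors' implementation; it assumes the OpenAI Python client, and the model name, prompts, and helper functions (summarize_group_into_persona, ask_persona) are hypothetical.

# Sketch of the collaborative workflow described above (illustrative only).
# Assumes `pip install openai` and an API key in the environment.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder; any capable chat model

# Step 1 (human): a researcher groups raw excerpts by a shared key characteristic.
grouped_excerpts = {
    "time-pressed commuters": [
        "I only check the app while waiting for the train.",
        "If it takes more than two taps, I give up.",
    ],
}

def summarize_group_into_persona(group_name: str, excerpts: list[str]) -> str:
    """Step 2 (LLM): summarize one human-curated group into a persona description."""
    prompt = (
        f"Summarize the following user-research excerpts (group: {group_name}) "
        "into one persona with a name, goals, frustrations, and typical behaviors. "
        "Stay faithful to the excerpts; do not invent details.\n\n"
        + "\n".join(f"- {e}" for e in excerpts)
    )
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_persona(persona: str, question: str) -> str:
    """Step 3 (LLM): role-play the generated persona so researchers can interview it."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": f"Answer in character as this persona:\n{persona}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    for name, excerpts in grouped_excerpts.items():
        persona = summarize_group_into_persona(name, excerpts)
        print(persona)
        print(ask_persona(persona, "How do you decide which features to use?"))

The division of labor mirrors the paper's central finding: the judgment-heavy grouping step stays with the human researcher, while the LLM handles summarization of the pre-grouped data and subsequent persona role-play.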



Published In

DIS '24: Proceedings of the 2024 ACM Designing Interactive Systems Conference
July 2024
3616 pages
ISBN:9798400705830
DOI:10.1145/3643834
This work is licensed under a Creative Commons Attribution 4.0 International License.


Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 01 July 2024


Author Tags

  1. LLM
  2. User research
  3. persona generation

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Subjective Functions
  • Human Automata
  • IFI program of the German Academic Exchange Service (DAAD)

Conference

DIS '24: Designing Interactive Systems Conference
July 1–5, 2024
Copenhagen, Denmark

Acceptance Rates

Overall Acceptance Rate 1,158 of 4,684 submissions, 25%

