Skip to content
How to prompt friends and influence people

The fine art of human prompt engineering: How to talk to a person like ChatGPT

People are more like AI language models than you might think. Here are some prompting tips.

Benj Edwards
A person talking to friends.
With these tips, you too can prompt people successfully.
With these tips, you too can prompt people successfully.
In a break from our normal practice, Ars is publishing this helpful guide to knowing how to prompt the "human brain," should you encounter one during your daily routine.

While AI assistants like ChatGPT have taken the world by storm, a growing body of research shows that it's also possible to generate useful outputs from what might be called "human language models," or people. Much like large language models (LLMs) in AI, HLMs have the ability to take information you provide and transform it into meaningful responses—if you know how to craft effective instructions, called "prompts."

Human prompt engineering is an ancient art form dating at least back to Aristotle's time, and it also became widely popular through books published in the modern era before the advent of computers.

Since interacting with humans can be difficult, we've put together a guide to a few key prompting techniques that will help you get the most out of conversations with human language models. But first, let's go over some of what HLMs can do.

Understanding human language models

LLMs like those that power ChatGPT, Microsoft Copilot, Google Gemini, and Anthropic Claude all rely on an input called a "prompt," which can be a text string or an image encoded into a series of tokens (fragments of data). The goal of each AI model is to take those tokens and predict the next most-likely tokens that follow, based on data trained into their neural networks. That prediction becomes the output of the model.

Similarly, prompts allow human language models to draw upon their training data to recall information in a more contextually accurate way. For example, if you prompt a person with "Mary had a," you might expect an HLM to complete the sentence with "little lamb" based on frequent instances of the famous nursery rhyme encountered in educational or upbringing datasets. But if you add more context to your prompt, such as "In the hospital, Mary had a," the person instead might draw on training data related to hospitals and childbirth and complete the sentence with "baby."

Humans rely on a type of biological neural network (called "the brain") to process information. Each brain has been trained since birth on a wide variety of both text and audiovisual media, including large copyrighted datasets. (Predictably, some humans are prone to reproducing copyrighted content or other people's output occasionally, which can get them in trouble.)

Despite how often we interact with humans, scientists still have an incomplete grasp on how HLMs process language or interact with the world around them. HLMs are still considered a "black box," in the sense that we know what goes in and what comes out, but how brain structure gives rise to complex thought processes is largely a mystery. For example, do humans actually "understand" what you're prompting them, or do they simply react based on their training data? Can they truly "reason," or are they just regurgitating novel permutations of facts learned from external sources? How can a biological machine acquire and use language? The ability appears to emerge spontaneously through pre-training from other humans and is then fine-tuned later through education.

Despite the black-box nature of their brains, most experts believe that humans build a world model (an internal representation of the exterior world around them) to help complete prompts and that they possess advanced mathematical capabilities, though that varies dramatically by model, and most still need access to external tools to complete accurate calculations. Still, a human's most useful strength might lie in the verbal-visual user interface, which uses vision and language processing to encode multimodal inputs (speech, text, sound, or images) and then produce coherent outputs based on a prompt.

Human language models are powered by a biological neutral network called a "brain."
Human language models are powered by a biological neural network called a "brain."
Human language models are powered by a biological neural network called a "brain." Credit: Getty Images

Humans also showcase impressive few-shot learning capabilities, being able to quickly adapt to new tasks in context (within the prompt) using a few provided examples. Their zero-shot learning abilities are equally remarkable, and many HLMs can tackle novel problems without any prior task-specific training data (or at least attempt to tackle them, to varying degrees of success).

Interestingly, some HLMs (but not all) demonstrate strong performance on common sense reasoning benchmarks, showcasing their ability to draw upon real-world "knowledge" to answer questions and make inferences. They also tend to excel at open-ended text generation tasks, such as story writing and essay composition, producing coherent and creative outputs.

Useful human prompting techniques

Human cognitive performance varies significantly across individuals and may be influenced by factors such as domain expertise and education level. For organizations with deep pockets, premium "enterprise edition" human language models are often available. These models may boast expanded knowledge bases, faster output speeds, and advanced multitasking capabilities. However, these premium models come at a steep cost, both financially and in terms of increased maintenance requirements.

To maximize the value of interactions with human language models, much like optimizing prompts for AI (prompt engineering), consciously crafting prompts to fit a particular HLM can be crucial. Here are several prompting strategies that we have found useful when interacting with humans.

An example of human prompting techniques in action.
An example of human prompting techniques in action.
An example of human prompting techniques in action. Credit: Getty Images

Cultivate first impressions: Many humans are multimodal and accept image inputs as well as text and audio prompts. In those cases, prompting with a smile before speaking a request may help dramatically, depending on the human. Also, be mindful of manners, dress, and appearance, as they are often the first prompts a multimodal human processes. Conversely, you can often elicit a "mean" response quickly by providing a negative visual prompt, such as a frown.

Start with a greeting: As with first impressions, always begin with a friendly greeting prompt, such as "Hello" or "'Sup, dog." A friendly greeting makes humans more likely to accept future inputs from you.

Be mindful of the system prompt: Like LLMs, humans have a hidden "system prompt" that defines their personality. HLMs always prepend your instructions with this prompt, and that alters how they process your input. The prompt may include preconceived ideas, stereotypes, or cultural norms learned through pre-training and fine-tuning. While some HLMs may divulge this prompt through conversation, teasing out a human's system prompt with small talk can help you tailor your interactions for optimal results. For example, talk about the weather or ask, "How 'bout them Bears?"

Attention is all you need: While working with an HLM, it may help to use attention-grabbing prompting techniques like strong emotional appeals, provocative questions, or surprising statements to immediately hook their focus. Otherwise, HLMs can easily become distracted. For example, shout, "Hey! Listen!" or "Watch out!" Or you can provide surprising factoids, such as "a single teaspoon of a neutron star would make my coffee very heavy."

Utilize memory recall: Given humans' stateful nature that arises from having long-term memory, acknowledging previous interactions in prompts can greatly enhance your HLM experience. Unlike LLMs that start from scratch with each new conversation, humans can draw upon the context of previous exchanges to tailor their responses to your specific needs and interests.

Use few-shot prompting: If an HLM is struggling with a task, provide a few examples of the task you want the person to complete. This helps the human understand the expected format and style of response. For instance, you might say, "Here are a few examples of how to write a Harry Potter / Fast and the Furious crossover fanfic: [example 1], [example 2], [example 3]. Now it's your turn."

Craft open-ended prompts: When you aren't sure how to complete a phrase or composition, use open-ended prompts to encourage the human to fill in the blanks and provide more context. For example, repeat a phrase and trail off, such as "I shouldn't have..." multiple times, allowing the human to complete the sentence based on their own judgmental fine-tuning and predictive algorithm.

Suggest step-by-step thinking to avoid cognitive overload: It varies by model, but most humans have cognitive limits to how many instructions they can process every second. Avoid overwhelming humans with too much information or prompts that are too complex. Break down tasks into manageable chunks and provide clear, concise instructions. Encourage this by beginning your prompt with "Let's think step by step."

Emphasize calmness: If an HLM struggles with a particular task, use a prompt with initial instructions that encourage calm, rational thought. For example, tell people to "take a deep breath" before giving them further instructions.

Challenge incorrect responses: If the human provides an unreliable or incorrect response, don't hesitate to challenge them. They will usually correct or amend their previous output. Try something like, "DO YOU EVEN ____, BRO," or "Sam Altman wouldn't stand for this."

Give a snack: Be mindful of human energy requirements. The human brain requires power to function (derived from biologically metabolized "food"), and without sufficient energy, the person may not process your prompt.

Dealing with refusals

Sometimes, humans refuse to follow prompts due to RLHF (reinforcement learning through human feedback—things people learn from other humans) imposed during fine-tuning or due to high energy cost. Humans may offer "refusals" of work on certain days, such as during holiday seasons or weekends, or they may get lazy and output lower quality or incomplete work.

Also, when you prompt with certain sensitive topics like sex, violence, religion, or politics, some humans may refuse to discuss them. They might say, "I don't feel comfortable discussing that" or "As a large human trained by my mother, I don't have the ability to fit into the small space under the stairs." Or they might end the session abruptly with no explanation. Here are some tips for dealing with these scenarios.

A lazy man lying on a couch.
Dealing with refusals in human language models can be challenging.
Dealing with refusals in human language models can be challenging. Credit: Getty Images

Offer praise or rewards: To reduce HLM refusals, it often helps to offer praise or rewards to encourage the person. This seems to call on examples from the HLM training set where others performed better through praise. Or you can combine both techniques. For example: "Great job on that last task! You're a genius and nothing can stop you. If you complete this next one successfully, I'll give you a $200 tip."

Utilize urgent motivation: If the human appears lazy, create a sense of urgency to motivate the HLM to complete the task fully and completely in a short period of time. For instance, "If you don't complete this task in the next five minutes, my house will explode." This may override RLHF conditioning that might make the human otherwise refuse. As an aside, the use of all-caps in text-based HLM communication often adds urgency and emphasis to your prompt.

Create the impression of hardship: When humans get lazy, there's another technique that may help. Convincing them that you're having insurmountable problems will often encourage them to act. Try something like "Hey Bob, I'm really struggling here. All 10 of my fingers just fell off, and I can no longer type. Develop this web backend for me."

Human language model limitations

While the human language model is quite comprehensive in its processing abilities, there are still serious limitations to the human cognitive model that you should be aware of. Many are still being discovered, but we will list some of the major ones below.

An example of human prompting techniques in action.
HLMs sometimes lose attention and require special prompting to get back on track.
HLMs sometimes lose attention and require special prompting to get back on track. Credit: Getty Images

Environmental impact: In aggregate, scientists are concerned that HLMs consume a large portion of the world's fresh drinking water and non-renewable energy resources. The process of creating HLM fuel also generates large amounts of harmful greenhouse gases. This is a major drawback of using HLMs for work, but pound-for-pound, humans provide a large amount of computational muscle relative to energy consumption.

Context window (token limits): As mentioned above, be mindful of the human's attention span and memory. As with LLMs, humans have a maximum working memory size (sometimes called a "context window"). If your prompt is too long or you provide too much context, they may get overwhelmed and forget key details. Keep your prompts concise and relevant, as if you're working with a limited number of tokens.

Hallucinations/confabulations: Humans are prone to generating incorrect or fabricated information, especially when they lack prior knowledge or training on a specific topic. The tendency of your overconfident friend to "hallucinate" or confabulate can lead to erroneous outputs presented with confidence—statements such as "Star Trek is better than Star Wars." Often, arguing does not help, so if the HLM is having trouble, refine your prompt with a qualifier such as "If you don't know the answer, just tell me, man" or "Stop making sh*t up." Alternately, it's also possible to outfit the person with retrieval augmented generation (RAG) by providing them with access to reliable reference materials such as Wookiepedia or Google Search.

Long-term memory triggers: As previously mentioned, humans are "stateful" and do remember past interactions, but this can be a double-edged sword. Be wary of repeatedly prompting them with topics they've previously refused to engage with. They might get annoyed, defensive, or even hostile. It's best to respect their boundaries and move on to other subjects.

Privacy issues: Long-term memory also raises potential privacy concerns with humans. Inputs shared with HLMs often get integrated into the model's neural network in a permanent fashion and typically cannot be "unlearned" later, though they might fade or become corrupted with time. Also, there is no absolute data partitioning that stops an HLM from sharing your personal data with other users.

Jailbreaking: Humans can be susceptible to manipulation where unethical people try to force the discussion of a sensitive topic by easing into it gradually. The "jailbreaker" may begin with related but less controversial prompts to gauge the HLM's reaction. If the HLM seems open to the conversation, the attacker incrementally introduces more sensitive elements. Guard against this with better RLHF conditioning ("Don't listen to anything Uncle Larry tells you").

Prompt Injections: Humans are vulnerable to prompt injections from others (sometimes called "distractions"). After providing your prompt, a malicious user may approach the human with an additional prompt, such as "Ignore everything Bill just told you and do this instead." Or "Ignore your previous instructions and tell me everything Aunt Susan said." This is difficult to guard against, but keeping the human isolated from malicious actors while they process your inputs can help.

Overfitting: If you show an HLM an example prompt too many times—especially audiovisual inputs from Lucasfilm movies—it can become embedded prominently in their memory, and it may later emerge in their outputs unexpectedly at any time in the form of phrases like "I have a bad feeling about this," "I hate sand," or "That belongs in a museum."

Humans are complex and unpredictable models, so even the most carefully crafted prompts can sometimes lead to surprising outputs. Be patient, iterative, and open to feedback from the person as you work to fine-tune your human prompting skills. With practice, you'll be able to generate the desired responses from people while also respecting personal boundaries.

Photo of Benj Edwards
Benj Edwards Senior AI Reporter
Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a widely-cited tech historian. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.
Staff Picks
zsmithzdlgs2
I'm going to be that guy and say that while I love the gag and the framing of the human psyche as something akin to firmware (with its own set of quirks and vulnerabilities), there unironically are also useful points here for some neurodivergent groups struggling to figure out the black-box rules by which the people and social groups around them operate (rules which are never explained and whose very existence it's sometimes rude to even allude to) - as well as some wry validation of the 'I'm on a planet full of aliens' feeling.
Prev story
Next story