What are large language models (LLMs)?
Large language model definition
A large language model (LLM) is a deep learning model that can perform a variety of natural language processing (NLP) tasks. Large language models use transformer architectures and are trained on massive datasets — hence, large. This enables them to recognize, translate, predict, or generate text and other content.
Large language models are built on neural networks (NNs), which are computing systems inspired by the human brain. These neural networks consist of layered nodes, much like the neurons in a brain.
In addition to teaching human languages to artificial intelligence (AI) applications, large language models can be trained to perform a variety of other tasks, such as understanding protein structures and writing software code. Like the human brain, large language models must be pre-trained and then fine-tuned so that they can solve problems such as text classification, question answering, document summarization, and text generation. Their problem-solving capabilities can be applied to fields like healthcare, finance, and entertainment, where large language models serve a variety of NLP applications, such as translation, chatbots, and AI assistants.
Large language models also have large numbers of parameters: the weights the model adjusts as it learns from training data, akin to memories it collects along the way. Think of these parameters as the model's knowledge bank.
So, what is a transformer model?
A transformer model is the most common architecture of a large language model. It consists of an encoder and a decoder. A transformer model processes data by tokenizing the input, then performing mathematical operations in parallel to discover relationships between tokens. This enables the model to recognize the patterns a human would see when reading the same text.
Transformer models work with self-attention mechanisms, which enable the model to learn more quickly than traditional architectures like long short-term memory (LSTM) models. Self-attention is what enables the transformer model to consider different parts of the sequence, or the entire context of a sentence, when generating predictions.
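To make self-attention concrete, here is a minimal NumPy sketch of scaled dot-product self-attention, the core operation inside a transformer layer. The function name, toy dimensions, and random weights are illustrative assumptions, not taken from any particular model.

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    # X: (seq_len, d_model) token embeddings
    # W_q, W_k, W_v: (d_model, d_k) learned projection matrices
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
    return weights @ V                               # context-weighted values

# Toy example: 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)        # (4, 8)
```

Each row of the attention weights says how much one token should attend to every other token, which is how the model considers the entire context of a sentence at once.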
Related: Apply transformers to your search applications
Key components of large language models
Large language models are composed of multiple neural network layers. Recurrent layers, feedforward layers, embedding layers, and attention layers work in tandem to process the input text and generate output content.
The embedding layer creates embeddings from the input text. This part of the large language model captures the semantic and syntactic meaning of the input, so the model can understand context.
The feedforward layer (FFN) of a large language model is made up of multiple fully connected layers that transform the input embeddings. In so doing, these layers enable the model to glean higher-level abstractions — that is, to understand the user's intent with the text input.
The recurrent layer interprets the words in the input text in sequence. It captures the relationship between words in a sentence.
The attention mechanism enables a language model to focus on the parts of the input text that are relevant to the task at hand. This layer helps the model generate the most accurate outputs.
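A minimal sketch of how these layers fit together, using PyTorch's built-in modules. The class name, vocabulary size, and dimensions are hypothetical; real LLMs stack many such blocks with far larger dimensions.

```python
import torch
import torch.nn as nn

class MiniTransformerBlock(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)   # embedding layer
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(                        # feedforward layer
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, token_ids):
        x = self.embed(token_ids)                # (batch, seq, d_model)
        attn_out, _ = self.attn(x, x, x)         # attention layer (self-attention)
        x = self.norm1(x + attn_out)             # residual connection + norm
        return self.norm2(x + self.ffn(x))       # residual connection + norm

block = MiniTransformerBlock()
tokens = torch.randint(0, 1000, (1, 5))          # one sequence of 5 token ids
print(block(tokens).shape)                       # torch.Size([1, 5, 64])
```

Note that this block is transformer-style: self-attention takes the place of the recurrent layer found in older sequence models like LSTMs.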
There are three main kinds of large language models:
- Generic or raw language models predict the next word based on the language in the training data. These language models perform information retrieval tasks.
- Instruction-tuned language models are trained to predict responses to the instructions given in the input. This allows them to perform sentiment analysis, or to generate text or code.
- Dialog-tuned language models are trained to have a dialog by predicting the next response. Think of chatbots or conversational AI.
What is the difference between large language models and generative AI?
Generative AI is an umbrella term that refers to artificial intelligence models that have the capability to generate content. Generative AI can generate text, code, images, video, and music. Examples of generative AI include Midjourney, DALL-E, and ChatGPT.
Large language models are a type of generative AI that are trained on text and produce textual content. ChatGPT is a popular example of generative text AI.
All large language models are generative AI.1
How do large language models work?
A large language model is based on a transformer model and works by receiving an input, encoding it, and then decoding it to produce an output prediction. But before a large language model can receive text input and generate an output prediction, it requires training, so that it can fulfill general functions, and fine-tuning, which enables it to perform specific tasks.
Training: Large language models are pre-trained using large textual datasets from sites like Wikipedia, GitHub, and others. These datasets consist of trillions of words, and their quality affects the language model's performance. At this stage, the large language model engages in unsupervised learning, meaning it processes the datasets fed to it without specific instructions. During this process, the LLM's algorithm learns the meanings of words and the relationships between them. It also learns to distinguish words based on context. For example, it learns to understand whether "right" means "correct" or the opposite of "left."
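Under the hood, "predicting the next word" is a concrete training objective. Here is a minimal PyTorch sketch, assuming a hypothetical model that maps token ids to scores (logits) over the vocabulary:

```python
import torch.nn.functional as F

def next_token_loss(model, token_ids):
    # token_ids: (batch, seq) integer tensor of tokenized training text
    inputs = token_ids[:, :-1]            # every position sees only its past...
    targets = token_ids[:, 1:]            # ...and must predict the next token
    logits = model(inputs)                # (batch, seq - 1, vocab_size)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),   # flatten all positions
        targets.reshape(-1),                   # true next tokens
    )
```

Minimizing this loss over trillions of words is what teaches the model the statistical relationships between words described above.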
Fine-tuning: In order for a large language model to perform a specific task, such as translation, it must be fine-tuned to that particular activity. Fine-tuning optimizes the model's performance on specific tasks.
Prompt-tuning fulfills a similar function to fine-tuning: it primes a model to perform a specific task through few-shot prompting or zero-shot prompting. A prompt is an instruction given to an LLM. Few-shot prompting teaches the model to predict outputs through the use of examples. For instance, in a sentiment analysis exercise, a few-shot prompt could look like this:
Customer review: This plant is so beautiful!
Customer sentiment: positive
Customer review: This plant is so hideous!
Customer sentiment: negative
The language model would understand, through the semantic meaning of "hideous," and because an opposite example was provided, that the customer sentiment in the second example is "negative."
Alternatively, zero-shot prompting does not use examples to teach the language model how to respond to inputs. Instead, it formulates the question as "The sentiment in 'This plant is so hideous' is…." This clearly indicates which task the language model should perform, but does not provide problem-solving examples.
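The practical difference between the two styles is simply how the prompt string is assembled. Here is a minimal sketch, where complete() stands in for whatever LLM client you use — a hypothetical helper, not a specific provider's API:

```python
def complete(prompt: str) -> str:
    # Hypothetical: send the prompt to your LLM of choice, return its text.
    raise NotImplementedError

FEW_SHOT_TEMPLATE = """\
Customer review: This plant is so beautiful!
Customer sentiment: positive

Customer review: This plant is so hideous!
Customer sentiment: negative

Customer review: {review}
Customer sentiment:"""

ZERO_SHOT_TEMPLATE = "The sentiment in '{review}' is"

review = "The leaves wilted after two days."
few_shot = complete(FEW_SHOT_TEMPLATE.format(review=review))    # learns from examples
zero_shot = complete(ZERO_SHOT_TEMPLATE.format(review=review))  # no examples given
```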
Large language model use cases
Large language models can be used for several purposes:
- Information retrieval: Think of Bing or Google. Whenever you use their search feature, you are relying on a large language model to produce information in response to a query. It's able to retrieve information, then summarize and communicate the answer in a conversational style.
- Sentiment analysis: As applications of natural language processing, large language models enable companies to analyze the sentiment of textual data.
- Text generation: Large language models are behind generative AI tools like ChatGPT and can generate text based on inputs. For example: "Write me a poem about palm trees in the style of Emily Dickinson."
- Code generation: Like text generation, code generation is an application of generative AI. LLMs understand patterns, which enables them to generate code.
- Chatbots and conversational AI: Large language models enable customer service chatbots or conversational AI to engage with customers, interpret the meaning of their queries or responses, and offer responses in turn.
Related: How to make a chatbot: Dos and don'ts for developers
In addition to these use cases, large language models can complete sentences, answer questions, and summarize text.
With such a wide variety of applications, large language models can be found in a multitude of fields:
- Tech: Large language models are used everywhere from enabling search engines to respond to queries to assisting developers with writing code.
- Healthcare and Science: Large language models have the ability to understand proteins, molecules, DNA, and RNA. This ability allows LLMs to assist in developing vaccines, finding cures for illnesses, and improving preventative care. LLMs are also used as medical chatbots to perform patient intake or basic diagnoses.
- Customer Service: LLMs are used across industries for customer service purposes such as chatbots or conversational AI.
- Marketing: Marketing teams can use LLMs to perform sentiment analysis, quickly generate campaign ideas or example pitch copy, and much more.
- Legal: From searching through massive textual datasets to generating legalese, large language models can assist lawyers, paralegals, and legal staff.
- Banking: LLMs can support credit card companies in detecting fraud.
Benefits of large language models
With a broad range of applications, large language models are exceptionally beneficial for problem-solving since they provide information in a clear, conversational style that is easy for users to understand.
Large set of applications: They can be used for language translation, sentence completion, sentiment analysis, question answering, solving mathematical equations, and more.
Always improving: Large language model performance continually improves as more data and parameters are added. In other words, the more it learns, the better it gets. What's more, large language models can exhibit what is called "in-context learning." Once an LLM has been pre-trained, few-shot prompting enables the model to learn from the prompt without any additional parameters. In this way, it is continually learning.
They learn fast: When demonstrating in-context learning, large language models learn quickly because they do not require additional weights, resources, or parameters for training. Learning is fast in the sense that it does not require many examples.
Limitations and challenges of LLMs
Large language models might give us the impression that they understand meaning and can respond to it accurately. However, they remain a technological tool and as such, large language models face a variety of challenges.
Hallucinations: A hallucination is when an LLM produces an output that is false or does not match the user's intent. For example, an LLM might claim that it is human, that it has emotions, or that it is in love with the user. Because large language models predict the next syntactically correct word or phrase, they can't wholly interpret human meaning. The result can sometimes be what is referred to as a "hallucination."
Security: Large language models present important security risks when not managed or monitored properly. They can leak people's private information, participate in phishing scams, and produce spam. Users with malicious intent can also steer an AI toward their own ideologies or biases and contribute to the spread of misinformation. The repercussions can be devastating on a global scale.
Bias: The data used to train language models will affect the outputs a given model produces. As such, if the data represents a single demographic, or lacks diversity, the outputs produced by the large language model will also lack diversity.
Consent: Large language models are trained on massive datasets containing trillions of words — some of which might not have been obtained consensually. When scraping data from the internet, large language models have been known to ignore copyright licenses, plagiarize written content, and repurpose proprietary content without getting permission from the original owners or artists. When a model produces results, there is no way to track data lineage, and often no credit is given to the creators, which can expose users to copyright infringement issues.
They might also scrape personal data, like the names of subjects or photographers, from the descriptions of photos, which can compromise privacy.2 LLMs have already faced lawsuits, including a prominent one filed by Getty Images,3 for violating intellectual property.
Scaling: Scaling and maintaining large language models can be difficult, time-consuming, and resource-intensive.
Deployment: Deploying large language models requires deep learning expertise, a transformer model, distributed software and hardware, and overall technical know-how.
Examples of popular large language models
Popular large language models have taken the world by storm. Many have been adopted by people across industries. You've no doubt heard of ChatGPT, a form of generative AI chatbot.
Other popular LLMs include:
- PaLM: Google's Pathways Language Model (PaLM) is a transformer language model capable of common-sense and arithmetic reasoning, joke explanation, code generation, and translation.
- BERT: The Bidirectional Encoder Representations from Transformers (BERT) language model was also developed at Google. It is a transformer-based model that can understand natural language and answer questions.
- XLNet: A permutation language model, XLNet generates output predictions in a permuted order, which distinguishes it from BERT. It assesses the pattern of encoded tokens and then predicts tokens in a permuted order, rather than a strictly sequential one.
- GPT: Generative pre-trained transformers are perhaps the best-known large language models. Developed by OpenAI, GPT is a popular foundational model whose numbered iterations are improvements on their predecessors (GPT-3, GPT-4, etc.). It can be fine-tuned to perform specific tasks downstream. Examples of this are EinsteinGPT, developed by Salesforce for CRM, and Bloomberg's BloombergGPT for finance.
Related: 2024 getting started guide to open-source LLMs
Future advancements in large language models
The arrival of ChatGPT has brought large language models to the fore and activated speculation and heated debate on what the future might look like.
As large language models continue to grow and improve their command of natural language, there is much concern regarding what their advancement would do to the job market. It's clear that large language models will develop the ability to replace workers in certain fields.
In the right hands, large language models have the ability to increase productivity and process efficiency, but their use in human society also poses ethical questions.
Related: 2024 open-source LLMs guide
Introducing the Elasticsearch Relevance Engine
To address the current limitations of LLMs, Elastic built the Elasticsearch Relevance Engine (ESRE), a relevance engine for artificial intelligence-powered search applications. With ESRE, developers are empowered to build their own semantic search applications, utilize their own transformer models, and combine NLP and generative AI to enhance their customers' search experience.
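As an illustration of what a semantic search query can look like from the application side, here is a sketch using the official Elasticsearch Python client's kNN search. The index name, vector field, and embed() helper (your own transformer model producing query vectors) are assumptions for the example, not part of any particular ESRE setup:

```python
from elasticsearch import Elasticsearch

def embed(text: str) -> list[float]:
    # Hypothetical: run your own transformer model to produce a query vector.
    raise NotImplementedError

es = Elasticsearch("http://localhost:9200")      # assumed local 8.x cluster

results = es.search(
    index="products",                             # assumed index with a
    knn={                                         # dense_vector field
        "field": "text_embedding",
        "query_vector": embed("waterproof hiking boots"),
        "k": 10,                  # nearest neighbors to return
        "num_candidates": 100,    # candidates examined per shard
    },
)
for hit in results["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("title"))
```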
Supercharge your relevance with the Elasticsearch Relevance Engine
Explore more large language model resources
- Elastic generative AI tools and capabilities
- How to choose a vector database
- How to make a chatbot: Dos and don'ts for developers
- Choosing an LLM: The 2024 getting started guide to open-source LLMs
- Language models in Elasticsearch
- 2024 technical trends: How search and generative AI are evolving
- Overview of natural language processing (NLP) in the Elastic Stack
- Compatible third-party models with the Elastic Stack
- Guide to trained models in the Elastic Stack
- The LLM Safety Assessment
Footnotes
1 Myer, Mike. "Are Generative AI and Large Language Models the Same Thing?" Quiq, May 12, 2023, https://quiq.com/blog/generative-ai-vs-large-language-models/.
2 Sheng, Ellen. "In generative AI legal Wild West, the courtroom battles are just getting started," CNBC, April 3, 2023, https://www.cnbc.com/2023/04/03/in-generative-ai-legal-wild-west-lawsuits-are-just-getting-started.html (Accessed June 29, 2023).
3 "Getty Images Statement," Getty Images, January 17, 2023, https://newsroom.gettyimages.com/en/getty-images/getty-images-statement (Accessed June 29, 2023).