
The emerging types of language models and why they matter



AI systems that understand and generate text, known as language models, are the hot new thing in the enterprise. A recent survey found that 60% of tech leaders said their budgets for AI language technologies increased by at least 10% in 2020, while 33% reported a 30% increase.

But not all language models are created equal. Several types are emerging as dominant, including large, general-purpose models like OpenAI’s GPT-3 and models fine-tuned for particular tasks (think answering IT desk questions). At the edge exists a third category of model — one that tends to be highly compressed in size and limited to few capabilities, designed specifically to run on Internet of Things devices and workstations.

These different approaches have major differences in strengths, shortcomings and requirements — here’s how they compare and where you can expect to see them deployed over the next year or two.

Large language models

Large language models are, generally speaking, tens of gigabytes in size and trained on enormous amounts of text data, sometimes at the petabyte scale. They’re also among the biggest models in terms of parameter count, where a “parameter” refers to a value the model can change independently as it learns. Parameters are the parts of the model learned from historical training data and essentially define the skill of the model on a problem, such as generating text.
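To make “parameter count” concrete, here is a minimal sketch that loads a small, publicly available model and counts its learned values. The library and model choice (Hugging Face Transformers and GPT-2, which is tiny next to the systems discussed below) are illustrative assumptions, not anything used by the companies mentioned in this article.

# Count the parameters of a small pre-trained model (GPT-2, roughly 124 million).
# The same one-liner works for any PyTorch-based language model.
from transformers import AutoModel

model = AutoModel.from_pretrained("gpt2")
total_params = sum(p.numel() for p in model.parameters())
print(f"{total_params:,} parameters")  # ~124,000,000 for GPT-2 small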

“Large models are used for zero-shot scenarios or few-shot scenarios where little domain-[tailored] training data is available and usually work okay generating something based on a few prompts,” Fangzheng Xu, a Ph.D. student at Carnegie Mellon specializing in natural language processing, told TechCrunch via email. In machine learning, “few-shot” refers to the practice of training a model with minimal data, while “zero-shot” implies that a model can learn to recognize things it hasn’t explicitly seen during training.

“A single large model could potentially enable many downstream tasks with little training data,” Xu continued.
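To make the zero-shot/few-shot distinction concrete, here is a minimal prompting sketch against a hosted large language model. The client library, engine name and prompts are illustrative assumptions; the only difference between the two calls is whether the prompt includes a handful of worked examples.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Zero-shot: the model gets only a task description.
zero_shot = (
    "Classify the sentiment of this review as positive or negative.\n"
    "Review: 'The battery dies within an hour.'\n"
    "Sentiment:"
)

# Few-shot: the same task, preceded by a few labeled examples.
few_shot = (
    "Classify the sentiment of each review as positive or negative.\n"
    "Review: 'Absolutely loved it, would buy again.'\nSentiment: positive\n"
    "Review: 'Arrived broken and support never replied.'\nSentiment: negative\n"
    "Review: 'The battery dies within an hour.'\nSentiment:"
)

for prompt in (zero_shot, few_shot):
    response = openai.Completion.create(
        model="text-davinci-002",  # assumed engine name
        prompt=prompt,
        max_tokens=3,
        temperature=0,
    )
    print(response.choices[0].text.strip())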

The usage of large language models has grown dramatically over the past several years as researchers develop newer — and bigger — architectures. In June 2020, AI startup OpenAI released GPT-3, a 175 billion-parameter model that can generate text and even code given a short prompt containing instructions. Open research group EleutherAI subsequently made available GPT-J, a smaller (6 billion parameters) but nonetheless capable language model that can translate between languages, write blog posts, complete code and more. More recently, Microsoft and Nvidia open-sourced a model dubbed Megatron-Turing Natural Language Generation (MT-NLG), which, at 530 billion parameters, is among the largest models for reading comprehension and natural language inference developed to date.

“One reason these large language models remain so remarkable is that a single model can be used for tasks” including question answering, document summarization, text generation, sentence completion, translation and more, Bernard Koch, a computational social scientist at UCLA, told TechCrunch via email. “A second reason is because their performance continues to scale as you add more parameters to the model and add more data … The third reason that very large pre-trained language models are remarkable is that they appear to be able to make decent predictions when given just a handful of labeled examples.”

Startups including Cohere and AI21 Labs also offer models akin to GPT-3 through APIs. Other companies, particularly tech giants like Google, have chosen to keep the large language models they’ve developed in house and under wraps. For example, Google recently detailed — but declined to release — a 540 billion-parameter model called PaLM that the company claims achieves state-of-the-art performance across language tasks.

Large language models, open source or no, all have steep development costs in common. A 2020 study from AI21 Labs pegged the expenses for developing a text-generating model with only 1.5 billion parameters at as much as $1.6 million. Inference — actually running the trained model — is another drain. One source estimates the cost of running GPT-3 on a single AWS instance (p3dn.24xlarge) at a minimum of $87,000 per year.
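As a rough sanity check on that figure, and assuming an effective rate of about $10 per hour for a reserved p3dn.24xlarge instance (our assumption, not a number from the source), running a single instance around the clock lands close to the quoted annual minimum:

hourly_rate = 10.0         # USD/hour, assumed effective reserved-instance rate
hours_per_year = 24 * 365  # 8,760 hours
annual_cost = hourly_rate * hours_per_year
print(f"${annual_cost:,.0f} per year")  # ~$87,600, in line with the $87,000 estimate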

“Large models will get larger, more powerful, versatile, more multimodal and cheaper to train. Only Big Tech and extremely well-funded startups can play this game,” Vu Ha, a technical director at the AI2 Incubator, told TechCrunch via email. “Large models are great for prototyping, building novel proof-of-concepts and assessing technical feasibility. They are rarely the right choice for real-world deployment due to cost. An application that processes tweets, Slack messages, emails and such on a regular basis would become cost prohibitive if using GPT-3.”

Large language models will continue to be the standard for cloud services and APIs, where versatility and enterprise access matter more than latency. But despite recent architectural innovations, these types of language models will remain impractical for the majority of organizations, whether in academia, the public sector or the private sector.

Fine-tuned language models

Fine-tuned models are generally smaller than their large language model counterparts. Examples include OpenAI’s Codex, a direct descendant of GPT-3 fine-tuned for programming tasks. While still containing billions of parameters, Codex is both smaller than GPT-3 and better at generating — and completing — strings of computer code.

Fine-tuning can improve a model’s ability to perform a task, for example answering questions or generating protein sequences (as in the case of Salesforce’s ProGen). But it can also bolster a model’s understanding of certain subject matter, like clinical research.

“Fine-tuned … models are good for mature tasks with lots of training data,” Xu said. “Examples include machine translation, question answering, named entity recognition, entity linking [and] information retrieval.”

The advantages don’t stop there. Because fine-tuned models are derived from existing language models, they don’t take nearly as much time — or compute — to train or run. (Larger models like those mentioned above can take weeks to train, or require far more computational power to train in days.) They also don’t require as much data as large language models: GPT-3 was trained on 45 terabytes of text, versus the 159 gigabytes on which Codex was trained.
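For a sense of what fine-tuning looks like in code, the sketch below adapts a small, publicly available pre-trained model to a sentiment-classification task with the Hugging Face libraries. The dataset, base model and hyperparameters are illustrative assumptions, not the recipe behind Codex or any other system named here.

# Fine-tune a small pre-trained model on a labeled, domain-specific dataset.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("imdb")  # assumed example corpus of labeled reviews
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-sentiment",
                           num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()  # adapts the general-purpose model to the target task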

Fine-tuning has been applied to many domains, but one especially strong, recent example is OpenAI’s InstructGPT. Using a technique called “reinforcement learning from human feedback,” OpenAI collected a data set of human-written demonstrations on prompts submitted to the OpenAI API and prompts written by a team of human data labelers. They leveraged these data sets to create fine-tuned offshoots of GPT-3 that — in addition to being a hundredth the size of GPT-3 — are demonstrably less likely to generate problematic text while closely aligning with a user’s intent.

In another demonstration of the power of fine-tuning, Google researchers in February published a study claiming that a model far smaller than GPT-3 — fine-tuned language net (FLAN) — bests GPT-3 “by a large margin” on a number of challenging benchmarks. FLAN, which has 137 billion parameters, outperformed GPT-3 on 19 of the 25 tasks the researchers tested it on, surpassing its performance by a large margin on 10 of them.

“I think fine-tuning is probably the most widely used approach in industry right now, and I don’t see that changing in the short term. For now, fine-tuning on smaller language models allows users more control to solve their specialized problems using their own domain-specific data,” Koch said. “Instead of distributing [very large language] models that users can fine tune on their own, companies are commercializing few-shot learning through API prompts where you can give the model short prompts and examples.”

Edge language models

Edge models, which are purposefully small in size, can take the form of fine-tuned models — but not always. Sometimes, they’re trained from scratch on small data sets to meet specific hardware constraints (e.g., phone or local web server hardware). In any case, edge models — while limited in some respects — offer a host of benefits that large language models can’t match.

Cost is a major one. With an edge model that runs offline and on-device, there aren’t any cloud usage fees to pay. (Even fine-tuned models are often too large to run on local machines; MT-NLG can take over a minute to generate text on a desktop processor.) Tasks like analyzing millions of tweets may rack up thousands of dollars in fees on popular cloud-based models.

Edge models also offer greater privacy than their internet-bound counterparts, in theory, because they don’t need to transmit or analyze data in the cloud. They’re also faster — a key advantage for applications like translation. Apps such as Google Translate rely on edge models to deliver offline translations.
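One common path to getting a model small and fast enough for offline, on-device use is post-training quantization. The sketch below applies PyTorch’s dynamic quantization to a small pre-trained model; the model choice is an illustrative assumption, and production edge deployments typically layer distillation, pruning and a mobile runtime on top of a step like this.

# Shrink a small model for on-device inference with 8-bit dynamic quantization.
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2).eval()

quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)  # 8-bit weights for Linear layers

torch.save(quantized.state_dict(), "edge_model.pt")  # smaller artifact to ship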

“Edge computing is likely to be deployed in settings where immediate feedback is needed … In general, I would think these are scenarios where humans are interacting conversationally with AI or robots or something like self-driving cars reading road signs,” Koch said. “As a hypothetical example, Nvidia has a demo where an edge chatbot has a conversation with clients at a fast food restaurant. A final use case might be automated note taking in electronic medical records. Processing conversation quickly in these situations is essential.”

Of course, small models can’t accomplish everything that large models can. They’re bound by the hardware found in edge devices, which ranges from single-core processors to GPU-equipped systems-on-chips. Moreover, some research suggests that the techniques used to develop them can amplify unwanted characteristics, like algorithmic bias.

“[There’s usually a] trade off between power usage and predictive power. Also, mobile device computation is not really increasing at the same pace as distributed high-performance computing clusters, so the performance may lag behind more and more,” Xu said.

Looking to the future

As large, fine-tuned and edge language models continue to evolve with new research, they’re likely to encounter roadblocks on the path to wider adoption. For example, while fine-tuning models requires less data compared to training a model from scratch, fine-tuning still requires a dataset. Depending on the domain — e.g., translating from a little-spoken language — the data might not exist.

“The disadvantage of fine-tuning is that it still requires a fair amount of data. The disadvantage of few-shot learning is that it doesn’t work as well as fine-tuning, and that data scientists and machine learning engineers have less control over the model because they are only interacting with it through an API,” Koch continued. “And the disadvantages of edge AI are that complex models cannot fit on small devices, so performance is strictly worse than models that can fit on a single desktop GPU — much less cloud-based large language models distributed across tens of thousands of GPUs.”

Xu notes that all language models, regardless of size, remain understudied in certain important aspects. She hopes that areas like explainability and interpretability — which aim to understand how and why a model works and expose this information to users — receive greater attention and investment in the future, particularly in “high-stakes” domains like medicine.

“Provenance is really an important next step that these models should have,” Xu said. “In the future, there will be more and more efficient fine-tuning techniques … to accommodate the increasing cost of fine-tuning a larger model in whole. Edge models will continue to be important, as the larger the model, the more research and development is needed to distill or compress the model to fit on edge devices.”
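Distillation, which Xu mentions, trains a small “student” model to mimic a large “teacher.” Below is a minimal sketch of the standard distillation loss; the temperature and weighting are conventional defaults rather than values from any system discussed above.

# Knowledge distillation: blend a soft-target loss (student matches the
# teacher's softened output distribution) with the usual hard-label loss.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard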
