AI training data has a price tag that only Big Tech can afford


Data is at the heart of today’s advanced AI systems, but it’s costing more and more — making it out of reach for all but the wealthiest tech companies.

Last year, James Betker, a researcher at OpenAI, penned a post on his personal blog about the nature of generative AI models and the datasets on which they’re trained. In it, Betker claimed that training data — not a model’s design, architecture or any other characteristic — was the key to increasingly sophisticated, capable AI systems.

“Trained on the same data set for long enough, pretty much every model converges to the same point,” Betker wrote.

Is Betker right? Is training data the biggest determiner of what a model can do, whether it’s answer a question, draw human hands, or generate a realistic cityscape?

It’s certainly plausible.

Statistical machines

Generative AI systems are basically probabilistic models — a huge pile of statistics. Drawing on vast numbers of examples, they guess which data makes the most “sense” to place where (e.g., the word “go” before “to the market” in the sentence “I go to the market”). It seems intuitive, then, that the more examples a model is trained on, the better it will perform.
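
The idea can be illustrated with a deliberately tiny sketch (this is not how any production model works — real systems use neural networks over billions of documents, but the principle of predicting from examples is the same): a bigram model that counts which word follows which in a corpus and guesses the most frequent follower.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then predict the most likely next word. The corpus here is invented.
corpus = "i go to the market . i go to the park . we go to the market ."

counts = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training data."""
    return counts[word].most_common(1)[0][0]

print(predict_next("go"))   # "to" — learned purely from counts
print(predict_next("the"))  # "market" appears more often than "park"
```

More (and cleaner) training text sharpens those counts, which is the intuition behind data-driven performance gains.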

“It does seem like the performance gains are coming from data,” Kyle Lo, a senior applied research scientist at the Allen Institute for AI (AI2), an AI research nonprofit, told TechCrunch, “at least once you have a stable training setup.”

Lo gave the example of Meta’s Llama 3, a text-generating model released earlier this year, which outperforms AI2’s own OLMo model despite being architecturally very similar. Llama 3 was trained on significantly more data than OLMo, which Lo believes explains its superiority on many popular AI benchmarks.

(I’ll point out here that the benchmarks in wide use in the AI industry today aren’t necessarily the best gauge of a model’s performance, but outside of qualitative tests like our own, they’re one of the few measures we have to go on.)

That’s not to suggest that training on exponentially larger datasets is a sure-fire path to exponentially better models. Models operate on a “garbage in, garbage out” paradigm, Lo notes, and so data curation and quality matter a great deal, perhaps more than sheer quantity.

“It is possible that a small model with carefully designed data outperforms a large model,” he added. “For example, Falcon 180B, a large model, is ranked 63rd on the LMSYS benchmark, while Llama 2 13B, a much smaller model, is ranked 56th.”

In an interview with TechCrunch last October, OpenAI researcher Gabriel Goh said that higher-quality annotations contributed enormously to the enhanced image quality in DALL-E 3, OpenAI’s text-to-image model, over its predecessor DALL-E 2. “I think this is the main source of the improvements,” he said. “The text annotations are a lot better than they were [with DALL-E 2] — it’s not even comparable.”

Many AI models, including DALL-E 3 and DALL-E 2, are trained by having human annotators label data so that a model can learn to associate those labels with other, observed characteristics of that data. For example, a model that’s fed lots of cat pictures with annotations for each breed will eventually “learn” to associate terms like bobtail and shorthair with their distinctive visual traits.
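A minimal sketch of that label-association idea (the traits and numbers below are invented for illustration — real models learn from raw pixels, not hand-picked features): each “image” is reduced to two made-up traits, human annotations supply the breed, and “training” just averages the traits seen under each label.

```python
# Hypothetical annotated data: (tail_length, fur_length) pairs with
# human-supplied breed labels. All values are invented.
annotated = [
    ((0.1, 0.3), "bobtail"),
    ((0.2, 0.2), "bobtail"),
    ((0.9, 0.2), "shorthair"),
    ((0.8, 0.3), "shorthair"),
]

# "Training": average the traits observed under each label.
centroids = {}
for _, label in annotated:
    xs = [f for f, l in annotated if l == label]
    centroids[label] = tuple(sum(c) / len(xs) for c in zip(*xs))

def classify(feats):
    """Assign the label whose averaged traits are closest (squared distance)."""
    return min(
        centroids,
        key=lambda l: sum((a - b) ** 2 for a, b in zip(feats, centroids[l])),
    )

print(classify((0.15, 0.25)))  # "bobtail"
```

Better annotations tighten the association between label and trait — which is exactly the DALL-E 3 improvement Goh describes.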

Bad behavior

Experts like Lo worry that the growing emphasis on large, high-quality training datasets will centralize AI development into the few players with billion-dollar budgets that can afford to acquire these sets. Major innovation in synthetic data or fundamental architecture could disrupt the status quo, but neither appears to be on the near horizon.

“Overall, entities governing content that’s potentially useful for AI development are incentivized to lock up their materials,” Lo said. “And as access to data closes up, we’re basically blessing a few early movers on data acquisition and pulling up the ladder so nobody else can get access to data to catch up.”

Indeed, where the race to scoop up more training data hasn’t led to unethical (and perhaps even illegal) behavior like secretly aggregating copyrighted content, it has rewarded tech giants with deep pockets to spend on data licensing.

Generative AI models such as OpenAI’s are trained mostly on images, text, audio, videos and other data — some copyrighted — sourced from public web pages (including, problematically, AI-generated ones). The OpenAIs of the world assert that fair use shields them from legal reprisal. Many rights holders disagree — but, at least for now, they can’t do much to prevent this practice.

There are many, many examples of generative AI vendors acquiring massive datasets through questionable means in order to train their models. OpenAI reportedly transcribed more than a million hours of YouTube videos without YouTube’s blessing — or the blessing of creators — to feed to its flagship model GPT-4. Google recently broadened its terms of service in part to be able to tap public Google Docs, restaurant reviews on Google Maps and other online material for its AI products. And Meta is said to have considered risking lawsuits to train its models on IP-protected content.

Meanwhile, companies large and small are relying on workers in third-world countries paid only a few dollars per hour to create annotations for training sets. Some of these annotators — employed by mammoth startups like Scale AI — work literal days on end to complete tasks that expose them to graphic depictions of violence and bloodshed without any benefits or guarantees of future gigs.

Growing cost

In other words, even the more aboveboard data deals aren’t exactly fostering an open and equitable generative AI ecosystem.

OpenAI has spent hundreds of millions of dollars licensing content from news publishers, stock media libraries and more to train its AI models — a budget far beyond that of most academic research groups, nonprofits and startups. Meta has gone so far as to weigh acquiring the publisher Simon & Schuster for the rights to e-book excerpts (ultimately, Simon & Schuster sold to private equity firm KKR for $1.62 billion in 2023).

With the market for AI training data expected to grow from roughly $2.5 billion now to close to $30 billion within a decade, data brokers and platforms are rushing to charge top dollar — in some cases over the objections of their user bases.

Stock media library Shutterstock has inked deals with AI vendors ranging from $25 million to $50 million, while Reddit claims to have made hundreds of millions from licensing data to orgs such as Google and OpenAI. Few platforms with abundant data accumulated organically over the years haven’t signed agreements with generative AI developers, it seems — from Photobucket to Tumblr to Q&A site Stack Overflow.

It’s the platforms’ data to sell — at least depending on which legal arguments you believe. But in most cases, users aren’t seeing a dime of the profits. And it’s harming the wider AI research community.

“Smaller players won’t be able to afford these data licenses, and therefore won’t be able to develop or study AI models,” Lo said. “I worry this could lead to a lack of independent scrutiny of AI development practices.”

Independent efforts

If there’s a ray of sunshine through the gloom, it’s the few independent, not-for-profit efforts to create massive datasets anyone can use to train a generative AI model.

EleutherAI, a grassroots nonprofit research group that began as a loose-knit Discord collective in 2020, is working with the University of Toronto, AI2 and independent researchers to create The Pile v2, a set of billions of text passages primarily sourced from the public domain.

In April, AI startup Hugging Face released FineWeb, a filtered version of the Common Crawl — the eponymous dataset maintained by the nonprofit Common Crawl, composed of billions upon billions of web pages — that Hugging Face claims improves model performance on many benchmarks.
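Curation pipelines like FineWeb's are far more elaborate in practice, but the flavor can be sketched with a couple of the simple heuristics such efforts typically combine — minimum length and exact deduplication. (The filters and thresholds below are illustrative assumptions, not FineWeb's actual rules.)

```python
import hashlib

def curate(pages, min_words=5):
    """Keep pages that pass basic quality heuristics (illustrative only)."""
    seen = set()
    kept = []
    for page in pages:
        if len(page.split()) < min_words:   # drop near-empty pages
            continue
        digest = hashlib.sha1(page.encode()).hexdigest()
        if digest in seen:                  # drop exact duplicates
            continue
        seen.add(digest)
        kept.append(page)
    return kept

pages = [
    "click here",                                        # too short
    "a long enough article about model training data",
    "a long enough article about model training data",   # duplicate
]
print(len(curate(pages)))  # 1
```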

A few efforts to release open training datasets, like the group LAION’s image sets, have run up against copyright, data privacy and other, equally serious ethical and legal challenges. But some of the more dedicated data curators have pledged to do better. The Pile v2, for example, removes problematic copyrighted material found in its progenitor dataset, The Pile.

The question is whether any of these open efforts can hope to keep pace with Big Tech. As long as data collection and curation remain a matter of resources, the answer is likely no — at least not until some research breakthrough levels the playing field.
