Google’s Gemini isn’t the generative AI model we expected

Google’s long-promised, next-gen generative AI model, Gemini, has arrived. Sort of.

The version of Gemini launching this week, Gemini Pro, is essentially a lightweight offshoot of a more powerful, capable Gemini model set to arrive… sometime next year. But I’m getting ahead of myself.

Yesterday in a virtual press briefing, members of the Google DeepMind team — the driving force behind Gemini, alongside Google Research — gave a high-level overview of Gemini (technically “Gemini 1.0”) and its capabilities.

Gemini, as it turns out, is actually a family of AI models — not just one. It comes in three flavors:

  • Gemini Ultra, the flagship Gemini model
  • Gemini Pro, a “lite” Gemini model
  • Gemini Nano, which is distilled to run on mobile devices like the Pixel 8 Pro*

*To make matters more confusing, Gemini Nano comes in two model sizes, Nano-1 (1.8 billion parameters) and Nano-2 (3.25 billion parameters) — targeting low- and high-memory devices, respectively.

The easiest place to try Gemini Pro is Bard, Google’s ChatGPT competitor, which as of today is powered by a fine-tuned version of Gemini Pro — at least in English in the U.S. (and only for text, not images). Sissie Hsiao, GM of Google Assistant and Bard, said during the briefing that the fine-tuned Gemini Pro delivers improved reasoning, planning and understanding capabilities over the previous model driving Bard.

We can’t independently confirm any of those improvements, I’ll note. Google didn’t allow reporters to test the models prior to their unveiling and, indeed, didn’t give live demos during the briefing.

Gemini Pro will also launch December 13 for enterprise customers using Vertex AI, Google’s fully managed machine learning platform, and then head to Google’s Generative AI Studio developer suite. (Some eagle-eyed users have already spotted Gemini model versions appearing in Vertex AI’s model garden.) Elsewhere, Gemini will arrive in the coming months in Google products like Duet AI, Chrome and Ads, as well as Search as a part of Google’s Search Generative Experience.
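
For developers wondering what calling Gemini Pro through Vertex AI might look like once it lands on December 13, here is a minimal sketch using the Vertex AI Python SDK. The module path, the "gemini-pro" model identifier and the project ID are assumptions for illustration; Google hadn't published final developer details at the time of the briefing.

```python
# A minimal sketch, assuming Gemini Pro is exposed through the Vertex AI
# Python SDK (google-cloud-aiplatform). The model name and project ID below
# are hypothetical placeholders, not confirmed details from the briefing.
import vertexai
from vertexai.preview.generative_models import GenerativeModel

vertexai.init(project="your-gcp-project", location="us-central1")  # hypothetical project

model = GenerativeModel("gemini-pro")  # assumed model identifier
response = model.generate_content(
    "Summarize the differences between Gemini Ultra, Pro and Nano in three bullet points."
)
print(response.text)
```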

Gemini Nano, meanwhile, will launch soon in preview via Google’s recently released AICore app, exclusive to Android 14 on the Pixel 8 Pro for now; Android developers interested in incorporating the model into their apps can sign up today for a sneak peek. On the Pixel 8 Pro first and other Android devices in the future, Gemini Nano will power features that Google previewed during the Pixel 8 Pro’s unveiling in October, like summarization in the Recorder app and suggested replies for supported messaging apps (starting with WhatsApp).

Natively multimodal

Gemini Pro — or at least the fine-tuned version of Gemini Pro powering Bard — isn’t much to write home about.

Hsiao says that Gemini Pro is more capable at tasks such as summarizing content, brainstorming and writing, and that it outperforms OpenAI’s GPT-3.5, the predecessor to GPT-4, on six benchmarks, including one (GSM8K) that measures grade school math reasoning. But GPT-3.5 is over a year old, hardly a challenging milestone to surpass at this point.

So what about Gemini Ultra? Surely it must be more impressive?

Somewhat.

Like Gemini Pro, Gemini Ultra was trained to be “natively multimodal” — in other words, pre-trained and fine-tuned on a large set of codebases, text in different languages, audio, images and videos. Eli Collins, VP of product at DeepMind, claims that Gemini Ultra can comprehend “nuanced” information in text, images, audio and code and answer questions relating to “complicated” topics, particularly math and physics.

In this respect, Gemini Ultra does several things better than OpenAI’s rival multimodal model, GPT-4 with Vision, which can only make sense of two modalities: words and images. Gemini Ultra can transcribe speech and answer questions about audio and video (e.g., “What’s happening in this clip?”) in addition to art and photos.

“The standard approach to creating multimodal models involves training separate components for different modalities,” Collins said during the briefing. “These models are pretty good at performing certain tasks like describing an image, but they really struggle with more complicated conceptual and complicated reasoning tasks. So we designed Gemini to be natively multimodal.”

I wish I could tell you more about Gemini’s training datasets — I’m curious myself. But Google repeatedly refused to answer questions from reporters about how it collected Gemini’s training data, where the training data came from and whether any of it was licensed from a third party.

Collins did reveal that at least a portion of the data was from public web sources and that Google “filtered” it for quality and “inappropriate” material. But he didn’t address the elephant in the room: whether creators who might’ve unknowingly contributed to Gemini’s training data can opt out or expect/request compensation.

Google’s not the first to keep its training data close to the vest. The data isn’t only a competitive advantage but also a potential source of lawsuits pertaining to fair use. Microsoft, GitHub, OpenAI and Stability AI are among the generative AI vendors facing lawsuits that accuse them of violating IP law by training their AI systems on copyrighted content, including artwork and e-books, without crediting or paying the creators.

OpenAI, joining several other generative AI vendors, recently said it would allow artists to opt out of the training datasets for its future art-generating models. Google offers no such option for art-generating models or otherwise — and it seems that policy won’t change with Gemini.

Google trained Gemini on its in-house AI chips, tensor processing units (TPUs) — specifically TPU v4 and v5e (and in the future the v5p) — and is running Gemini models on a combination of TPUs and GPUs. (According to a technical whitepaper released this morning, Gemini Pro took “a matter of weeks” to train, with Gemini Ultra presumably taking much longer.) While Collins claimed that Gemini is Google’s “most efficient” large generative AI model to date and “significantly cheaper” than its multimodal predecessors, he wouldn’t say how many chips were used to train it or how much it cost — or the environmental impact of the training.

One article estimates that training a model the size of GPT-4 emits upwards of 300 metric tons of CO2, significantly more than the annual emissions of the average person worldwide (about 5 tons of CO2). One would hope Google took steps to mitigate the impact, but since the company chose not to address the issue, at least not during the briefing this reporter attended, who can say?

A better model — marginally

In a prerecorded demo, Google showed how Gemini could be used to help with physics homework, solving problems step-by-step on a worksheet and pointing out possible mistakes in already filled-in answers.

In another demo — also prerecorded — Gemini was shown identifying scientific papers relevant to a particular problem set, extracting information from those papers and “updating” a chart from one by generating the formulas necessary to recreate the chart with more recent data.

“You can think of the work here as an extension of what [DeepMind] pioneered with ‘chain of thought prompting,’ which is that, with further instruction tuning, you can get the model to follow [more complex] instructions,” Collins said. “If you think of the physics homework example, you can give the model an image but also instructions to follow — for example, to identify the flaw in the math of the physics homework. So the model is able to handle more complicated prompts.”

Collins several times during the briefing touted Gemini Ultra’s benchmark superiority, claiming that the model exceeds current state-of-the-art results on “30 of the 32 widely used academic benchmarks used in large language model research and development.” But dive into the results, and it quickly becomes apparent that Gemini Ultra scores only marginally better than GPT-4 and GPT-4 with Vision across many of those benchmarks. 

For example, on GSM8K, Gemini Ultra answers 94.4% of the math questions correctly compared to 92% in GPT-4’s case. On the DROP benchmark for reading comprehension, Gemini Ultra barely edges out GPT-4, 82.4% to 80.9%. On VQAv2, a natural image understanding benchmark, Gemini Ultra does a measly 0.6 percentage points better than GPT-4 with Vision. And Gemini Ultra bests GPT-4 by just 0.5 percentage points on the Big-Bench Hard reasoning suite.

Collins notes that Gemini Ultra achieves a “state-of-the-art” score of 59.4% on a newer benchmark, MMMU, for multimodal reasoning — ahead of GPT-4 with Vision. But in a test set for commonsense reasoning, HellaSwag, Gemini Ultra is actually a fair bit behind GPT-4 with a score of 87.8%; GPT-4 scores 95.3%.

Asked by a reporter if Gemini Ultra, like other generative AI models, falls victim to hallucinating — i.e. confidently inventing facts — Collins said that it “wasn’t a solved research problem.” Take that how you will.

Presumably, bias and toxicity are well within the realm of possibility for Gemini Ultra too given that even the best generative AI models today respond problematically and harmfully when prompted in certain ways. It’s almost certainly as Anglocentric as other generative AI models — Collins said that, while Gemini Ultra can translate between around 100 languages, no specific work has been done to localize the model to Global South countries.

In another key limitation, while the Gemini Ultra architecture supports image generation (as does Gemini Pro, in theory), that capability won’t make its way into the productized version of the model at launch. That’s perhaps because the mechanism is slightly more complex than how, say, ChatGPT generates images; rather than feed prompts to an image generator (like DALL-E 3, in ChatGPT’s case), Gemini outputs images “natively” without an intermediary step.

Collins didn’t provide a timeline as to when image generation might arrive — only an assurance that the work is “ongoing.”

Rushed out the gate

The impression one gets from this week’s Gemini “launch” is that it was a bit of a rush job.

At its annual I/O developer conference, Google promised that Gemini would deliver “impressive multimodal capabilities not seen in prior models” and “[efficiency] at tool and API integrations.” And in an interview with Wired in June, Demis Hassabis, the head and co-founder of DeepMind, described Gemini as introducing somewhat novel capabilities to the text-generating AI domain, such as planning and the ability to solve problems.

It may well be that Gemini Ultra is capable of all of this — and more. But the briefing yesterday wasn’t especially convincing, and — given Google’s previous, recent gen AI stumbles — I’d argue that it needed to be.

Google’s been playing catch-up in generative AI since early this year, racing after OpenAI and the company’s viral sensation ChatGPT. Bard was released in February to criticism for its inability to answer basic questions correctly; Google employees, including the company’s ethics team, expressed concerns over the accelerated launch timeline.

Reports later emerged that Google hired overworked, underpaid third-party contractors from Appen and Accenture to annotate Bard’s training data. The same may be true for Gemini; Google didn’t deny it yesterday, and the technical whitepaper says only that annotators were paid “at least a local living wage.”

Now, to be fair to Google, it’s making progress: Bard has improved substantially since launch, and Google has successfully injected dozens of its products, apps and services with generative AI features driven by homegrown models like PaLM 2 and Imagen.

But reporting suggests that Gemini’s development has been troubled.

Gemini, which reportedly had direct participation from Google higher-ups, including Jeff Dean, the company’s most senior AI research executive, is said to be struggling with tasks like reliably handling non-English queries, which contributed to a delay in the launch of Gemini Ultra. (Gemini Ultra will initially be available only to select customers, developers, partners and “safety and responsibility experts” before rolling out to developers and enterprise customers, followed by Bard, “early next year,” Google says.) Google doesn’t even understand all of Gemini Ultra’s novel capabilities yet, Collins said, nor has it figured out a monetization strategy for Gemini. (Given the sky-high cost of AI model training and inferencing, I doubt it’ll be long before it does.)

So we’re left with Gemini Pro, and very possibly an underwhelming Gemini Ultra, especially if the model’s context window remains ~24,000 words as outlined in the technical whitepaper. (Context window refers to the amount of text the model considers before generating any additional text.) GPT-4’s context window is considerably larger at ~100,000 words, but context window admittedly isn’t everything; we’ll reserve judgment until we’re able to get our hands on the model.

Could it be that Google’s marketing, telegraphing that Gemini would be something truly remarkable rather than a slight move of the generative AI needle, is to blame for today’s dud of a product launch? Perhaps. Or perhaps building state-of-the-art generative AI models is really hard — even if you reorganize your entire AI division to juice up the process.
