Massachusetts Institute of Sobriety

EmTech Digital 2024: A thoughtful look at AI’s pros and cons with minimal hype

At MIT conference, experts explore AI's potential for "human flourishing" and the need for regulation.

Benj Edwards
Nathan Benaich of Air Street Capital delivers the opening presentation on the state of AI at EmTech Digital 2024 on May 22, 2024. Credit: Benj Edwards

CAMBRIDGE, Massachusetts—On Wednesday, AI enthusiasts and experts gathered to hear a series of presentations about the state of AI at EmTech Digital 2024 on the Massachusetts Institute of Technology's campus. The event was hosted by the publication MIT Technology Review. The overall consensus was that generative AI is still in its very early stages, with policy, regulations, and social norms still being established, and that its growth is likely to continue.

I was there to check out the event. MIT is the birthplace of many tech innovations, including the first action-oriented computer video game, so it felt fitting to hear talks about the latest tech craze in the same building that houses MIT's Media Lab, set on the school's sprawling and lush campus.

EmTech's speakers included AI researchers, policy experts, critics, and company spokespeople. A corporate feel pervaded the event due to strategic sponsorships, but it was handled in a low-key way that matches the level-headed tech coverage coming out of MIT Technology Review. After each presentation, MIT Technology Review staff—such as Editor-in-Chief Mat Honan and Senior Reporter Melissa Heikkilä—did a brief sit-down interview with the speaker, pushing back on some points and emphasizing others. Then the speaker took a few audience questions if time allowed.

EmTech Digital 2024 took place in building E14 on MIT's campus in Cambridge, MA. Credit: Benj Edwards

The conference kicked off with an overview of the state of AI by Nathan Benaich, founder and general partner of Air Street Capital, who rounded up news headlines about AI and several times expressed a favorable view of defense spending on AI, making a few people visibly shift in their seats. Next up, Asu Ozdaglar, deputy dean of academics at MIT's Schwarzman College of Computing, spoke about the potential for "human flourishing" through AI-human symbiosis and the importance of AI regulation.

Kari Ann Briski, VP of AI Models, Software, and Services at Nvidia, highlighted the exponential growth of AI model complexity. She shared a prediction from the consulting firm Gartner that by 2026, 50 percent of customer service organizations will have customer-facing AI agents. Of course, Nvidia's job is to drive demand for its chips, so in her presentation, Briski painted an unreservedly rosy picture of the AI space, assuming that all LLMs are (and will be) useful and reliable, despite what we know about their tendency to make things up.

The conference also addressed the legal and policy aspects of AI. Christabel Randolph from the Center for AI and Digital Policy—an organization that spearheaded a complaint about ChatGPT to the FTC last year—gave a compelling presentation about the need for AI systems to be human-centered and aligned, warning about the potential for anthropomorphic models to manipulate human behavior. She emphasized the importance of demanding accountability from those designing and deploying AI systems.

Asu Ozdaglar, deputy dean of academics at MIT's Schwarzman College of Computing, spoke with MIT Technology Review Editor-in-Chief Mat Honan at EmTech Digital on May 22, 2024.
Kari Ann Briski, VP of AI Models, Software, and Services at Nvidia, highlighted the exponential growth of AI model complexity at EmTech Digital on May 22, 2024.

Amir Ghavi, an AI, Tech, Transactions, and IP partner at Fried Frank LLP, who has defended AI companies like Stability AI in court, provided an overview of the current legal landscape surrounding AI, noting that there have been 24 lawsuits related to AI so far in 2024. He predicted that IP lawsuits would eventually diminish, and he claimed that legal scholars believe that using copyrighted works as training data constitutes fair use. He also talked about legal precedents involving photocopiers and VCRs, both technologies that IP holders demonized until courts decided their use constituted fair use. He pointed out that the entertainment industry's loss in the VCR case ended up benefiting it by opening up the VHS and DVD markets, providing a brand-new revenue channel that was valuable to those same companies.

In one of the higher-profile discussions, Meta President of Global Affairs Nick Clegg sat down with MIT Technology Review Executive Editor Amy Nordrum to discuss the role of social media in elections and the spread of misinformation, arguing that research suggests social media's influence on elections is not as significant as many believe. He acknowledged the "whack-a-mole" nature of banning extremist groups on Facebook and emphasized the changes Meta has undergone since 2016, increasing fact-checkers and removing bad actors.

Regarding AI-generated content, Clegg discussed Meta's plans to enforce labeling and ensure provenance for content created using its tools. He acknowledged the challenges posed by the immaturity of provenance techniques for audio-visual content and the likelihood of bad actors attempting to circumvent these measures. Clegg expressed hope for collaboration among AI and tech providers, along with soft and hard laws, to combat bad actors. Despite several high-profile elections worldwide since January 2024, Meta's monitoring teams have observed little AI-generated content involved, he said, and the company's tools aim to catch deceptive content regardless of its origin. Clegg also highlighted the lack of consensus on the future direction of AI technology, questioning whether large language models (LLMs) will continue to advance or eventually lose momentum.

Meta President of Global Affairs Nick Clegg sat down with MIT Technology Review Executive Editor Amy Nordrum at EmTech Digital on May 22, 2024.
Mounir Ibrahim, EVP of Public Affairs and Impact at Truepic, spoke about the importance of digital content provenance in a "zero trust world" at EmTech Digital on May 22, 2024.
Patricia Thaine, co-founder and CEO of Private AI, addressed the often-overlooked privacy problems in AI at EmTech Digital on May 22, 2024.
Jingwan (Cynthia) Lu, senior director and head of Applied Research for GenAI at Adobe, talked about Adobe's Firefly image generator at EmTech Digital on May 22, 2024.

Mounir Ibrahim, EVP of Public Affairs and Impact at Truepic, spoke about the importance of digital content provenance in a "zero trust world." He talked about helping to develop the C2PA (Coalition for Content Provenance and Authenticity) Content Credentials standard, which aims to provide interoperable, tamper-evident provenance metadata for various media formats. And Patricia Thaine, co-founder and CEO of Private AI, addressed the often-overlooked privacy problems in AI, emphasizing the need for privacy-preserving methods to maintain compliance with regulations such as GDPR.
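The C2PA standard itself specifies cryptographically signed manifests bound to media files; the minimal Python sketch below is only a rough illustration of that underlying idea, not the actual C2PA format or any vendor's API. The function names and the shared "demo-signing-key" are made up for the example (real Content Credentials use certificate-based signatures, not a shared secret), but the principle is the same: provenance claims are tied to a hash of the content, so altering either the file or the claims breaks verification.

    import hashlib
    import hmac
    import json

    # Illustrative only: NOT the C2PA manifest format. A toy scheme that binds
    # provenance claims to a content hash with a keyed digest, so tampering with
    # either the media bytes or the claims is detectable.
    SECRET_KEY = b"demo-signing-key"  # hypothetical; real systems use certificate-based signing

    def make_manifest(media_bytes: bytes, creator: str, tool: str) -> dict:
        """Bundle a content hash with provenance claims and a keyed digest."""
        claims = {
            "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
            "creator": creator,
            "generator_tool": tool,
        }
        payload = json.dumps(claims, sort_keys=True).encode()
        claims["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        return claims

    def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
        """Return True only if both the media bytes and the claims are unmodified."""
        claims = {k: v for k, v in manifest.items() if k != "signature"}
        payload = json.dumps(claims, sort_keys=True).encode()
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        return (hmac.compare_digest(expected, manifest["signature"])
                and claims["content_sha256"] == hashlib.sha256(media_bytes).hexdigest())

    media = b"\x89PNG...fake image bytes for demo"
    manifest = make_manifest(media, creator="Example Studio", tool="ExampleGen 1.0")
    print(verify_manifest(media, manifest))               # True: content and claims intact
    print(verify_manifest(media + b"tamper", manifest))   # False: content hash no longer matches

In the real standard, an X.509 certificate chain identifies the signer and the manifest travels with the file (or can be looked up separately), but the tamper-evidence property works the same way in spirit.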

To close out day one of the event, Jingwan (Cynthia) Lu, senior director and head of Applied Research for GenAI at Adobe, discussed the company's Firefly family of generative AI tools, which is designed to be "commercially safe" because it was trained solely on licensed images and public domain works. She said that Firefly, now in version 3, has already generated 7 billion images, and she emphasized that the goal of these tools is to amplify, not replace, human creativity. She also said that Firefly audio, video, and 3D models would be coming soon.

Employees of OpenAI had been scheduled to do a presentation on the company's Sora video generator to cap off the day, but they had to back out "for personal reasons" at the last minute, according to EmTech Digital organizers. It's been a rough couple of weeks for OpenAI in the news, and whether those two facts are related is unknown.

And what good would an AI conference be without overheard nuggets of gossip? During a break, I overheard a conversation about someone earning $50 an hour to solve difficult math problems that ChatGPT can't handle, with the results fed back into the model for improvement. This anecdote serves as a compelling reminder that behind the rapid advancements in generative AI, human effort remains a crucial component—even if, in some cases, humans may be training their own prospective replacements.

Listing image: Benj Edwards

Benj Edwards Senior AI Reporter
Benj Edwards is Ars Technica's Senior AI Reporter and founded the site's dedicated AI beat in 2022. He's also a widely cited tech historian. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.