We face a set of threats that put all of humanity at risk: the climate crisis, pandemics, nuclear weapons, and ungoverned AI. The ongoing harms and existential risks presented by these issues can't be tackled with short-term fixes. But with bold leadership and decisive action from world leaders, our best days can still lie ahead of us. That's why, with The Elders Foundation, we're calling on decision-makers to demonstrate the responsible governance and cooperation required to confront these shared global challenges. This #LongviewLeadership means:
⏰ Thinking beyond short-term political cycles to deliver solutions for current and future generations.
🤝 Recognising that enduring answers require compromise and collaboration for the good of the whole world.
🧍 Showing compassion for all people, designing sustainable policies which respect that everyone is born free and equal in dignity and rights.
🌍 Upholding the international rule of law and accepting that durable agreements require transparency and accountability.
🕊️ Committing to a vision of hope in humanity's shared future, not playing to its divided past.
World leaders have come together before to address catastrophic risks. We can do it again. Share and sign our open letter ⬇️ https://rb.gy/0duze1
Future of Life Institute (FLI)
Civic and Social Organizations
Campbell, California · 14,831 followers
Independent non-profit reducing extreme, large-scale risks from transformative technologies.
About us
The Future of Life Institute (FLI) is an independent nonprofit that works to reduce extreme, large-scale risks from transformative technologies, and to steer the development and use of these technologies to benefit life. The Institute's work primarily consists of grantmaking, educational outreach, and policy advocacy within the U.S. government, European Union institutions, and the United Nations, but also includes running conferences and contests. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
- Website
- http://futureoflife.org
- Industry
- Civic and Social Organizations
- Company size
- 11-50 employees
- Headquarters
- Campbell, California
- Type
- Nonprofit
- Specialties
- artificial intelligence, biotechnology, European Union, nuclear, climate change, technology policy, and grantmaking
Locations
- Primary: 300 Orchard City Dr, Campbell, California 95008, US
- Avenue des Arts / Kunstlaan 44, Brussels, 1040, BE
Employees at Future of Life Institute (FLI)
- David Nicholson, Director, Future of Life Award at Future of Life Institute
- Andrea Berman, Philanthropy - Partnerships - Program Development - Strategy
- Mark Brakel, Director of Policy at Future of Life Institute
- Risto Uuk, EU Research Lead @ Future of Life Institute | PhD Researcher @ KU Leuven | Systemic risks from general-purpose AI
Updates
- 🚨 New polling finds that 80% of the public wants California Governor Gavin Newsom to sign SB 1047, the "commonsense AI safety bill" that has received overwhelming support from lawmakers (who've already passed it in the state legislature), AI experts, industry workers and leaders, and - evidently - the public. ❓ When was the last time you can recall legislation enjoying such widespread, bipartisan support from the public? ⬇
- 🆕 🔧 Our policy team has created a new living guide to AI regulatory work, budgets, and programs across U.S. federal agencies! ➡ This agency "map" offers a thorough breakdown of AI-related activities across the Departments of Commerce, Energy, State, and Homeland Security, along with independent Executive Branch agencies. 🔗 Explore this resource now at the link in the comments:
- 📻 A new episode of the FLI Podcast is out now! 🆕 Tom Barnes from Founders Pledge joins to discuss layers of defense against unsafe AI, the shocking imbalance between funding for AI capabilities research vs. AI safety research, and how we can build a world more resilient to catastrophes. ⏯ Listen now at the link in the comments, or find it on your favourite podcast player!
- "Policymakers have a responsibility to step in and protect our members and the public. SB 1047 is a measured first step to get us there." - SAG-AFTRA
"The AI safety standards set by California will change the world." - National Organization for Women
"We should listen to these experts more interested in our wellbeing than the Big Tech executives skimping on AI safety." - FundHer
New in The Verge: the 160,000-member-strong SAG-AFTRA and two prominent women's equality groups (including the largest in the US, the National Organization for Women) are the latest prominent organizations urging Gov. Gavin Newsom to lead on safe AI innovation and sign SB 1047. 🔗 Read the full article now at the link in the comments.
- ⭐ California-based SAG-AFTRA has now joined the call for Governor Gavin Newsom to sign SB 1047, the recently passed "common sense" AI legislation. 👏
- Former OpenAI superalignment safety team co-lead and current Anthropic researcher Jan Leike joins the increasingly loud call for CA Gov. Gavin Newsom not to veto SB 1047. 🗣 From Jan's thread on X: "If your model causes mass casualties or >$500 million in damages, something has clearly gone very wrong. Such a scenario is not a normal part of innovation." 👏
- 👏 Yet another California-based founder of an AI company - Lindy's Flo Crivello - comes out in support of SB 1047: 📢 "We're on track to give birth to a species that's infinitely smarter than humans — 'please at least make sure it's safe before yolo-tweeting its magnet link' seems like a low bar to me. We have more stringent measures for cars or airplanes, and they've been an overwhelming net positive." 🔗 Read Flo's full Tweet in support of SB 1047 at the link in the comments: