AI is now part of most people's lives in some way, and more people are turning to generative AI tools to boost productivity at work and elsewhere. As AI use grows, so does its energy and resource consumption, making it an increasingly significant environmental threat. Direct impacts stem from resource consumption: strain on water supplies, energy demand and the associated greenhouse gas (GHG) emissions, and the extraction of other raw materials. Data storage alone is an environmental challenge, making responsible data management crucial to AI sustainability. On a more positive note, indirect impacts include smart energy grids and precision agriculture technologies, though these too can have negative consequences, such as unsustainable shifts in consumption. Tom Jackson, Ian Hodgkinson, and Nick Jennings discuss responsible data management, the trade-offs of AI development, and raising awareness of AI's environmental impacts as steps towards making AI use more ethical and clean. Ulrik Vestergaard Knudsen Jerry Sheehan Audrey Plonk Karine Perset Celine Caira Luis Aranda Jamie Berryhill Lucia Russo Noah Oder John Leo Tarver ⒿⓁⓉ Rashad Abelson Angélina Gentaz Valéria Silva Bénédicte Rispal Johannes Leon Kirnberger Eunseo Dana Choi Sara Fialho Esposito Nikolas S. Sarah Bérubé Guillermo H. Sara Marchi #climatechange #artificialintelligence #aicompute #oecd #aipolicy
OECD.AI
International Affairs
Paris, Île-de-France · 37,333 followers
OECD.AI is a platform to share and shape trustworthy AI. Sign up below for email alerts and visit our blog OECD.AI/WONK.
About
Visit our blog, the AI Wonk: https://oecd.ai/wonk/

The OECD AI Policy Observatory is a tool at the disposal of governments and businesses that they can use to implement the first intergovernmental standard on AI: the OECD AI Principles. The OECD AI Principles focus on how governments and other actors can shape a human-centric approach to trustworthy AI. The Observatory includes a blog for its group of international AI experts (ONE AI) to discuss issues related to defining AI and how to implement the OECD Principles. OECD countries adopted the standards in May 2019, along with a range of partner economies. The OECD AI Principles provided the basis for the G20 AI Principles endorsed by Leaders in June 2019.

OECD.AI combines resources from across the OECD, its partners and all stakeholder groups. OECD.AI facilitates dialogue between stakeholders while providing multidisciplinary, evidence-based policy analysis in the areas where AI has the most impact. As an inclusive platform for public policy on AI, the OECD AI Policy Observatory is oriented around three core attributes:

- Multidisciplinarity: The Observatory works with policy communities across and beyond the OECD – from the digital economy and science and technology policy to employment, health, consumer protection, education and transport policy – to consider the opportunities and challenges posed by current and future AI developments in a coherent, holistic manner.
- Evidence-based analysis: The Observatory provides a centre for the collection and sharing of evidence on AI, leveraging the OECD's reputation for measurement methodologies and evidence-based analysis.
- Global multi-stakeholder partnerships: The Observatory engages governments and a wide spectrum of stakeholders – including partners from the technical community, the private sector, academia, civil society and other international organisations – and provides a hub for dialogue and collaboration.
- Website
-
https://oecd.ai/
- Industry
- International Affairs
- Company size
- 11-50 employees
- Headquarters
- Paris, Île-de-France
- Type
- Administration publique
- Founded
- 2020
Locations
-
Primary
2 rue André Pascal
75016 Paris, Île-de-France, FR
Employees at OECD.AI
News
-
Today, OECD Deputy Secretary-General Ulrik Vestergaard Knudsen participated in a panel discussion, Governing AI: Shaping the Future, at the #Google Responsible AI Summit. DSG Knudsen spoke about the critical moment jurisdictions face: they must work together globally to embed interoperability and inclusivity in AI governance efforts to fully reap the benefits of trustworthy AI. Through the #GPAI integrated partnership, the #OECD has brought #Argentina, #Brazil, #India, #Senegal and #Serbia into its work on AI on an equal footing, and the partnership expects to welcome more countries, including African countries. He also pointed to the new collaboration announced last week between the OECD and the #UN: “Our hope is that the OECD can go deep, and the UN can go wide. Together, we can fly high.” #generativeai #artificialintelligence #internationalcooperation
-
DEADLINE EXTENDED TO 7 OCTOBER! Participate in this OECD survey of the private sector to gather insights on AI adoption and AI governance. 👉 https://lnkd.in/er-Ts3-V The survey is divided into two sections, comprises 13 questions, and should take approximately 15 minutes to complete. The results will inform the OECD Global Strategy Group (GSG) Meeting 2024, which will take place on 15-16 October, and the work of the OECD AI Futures Expert Group. 📅 Participants must complete the survey by 7 October 2024. Business at OECD (BIAC) #survey #aigovernance #industry #artificialintelligence #oecd
-
DEADLINE TOMORROW! Participate in this OECD survey of the private sector to gather insights on AI adoption and AI governance. 👉 https://lnkd.in/er-Ts3-V The survey is divided into two sections, comprises 13 questions, and should take approximately 15 minutes to complete. The results will inform the OECD Global Strategy Group (GSG) Meeting 2024, which will take place on 15-16 October, and the work of the OECD AI Futures Expert Group. 📅 Participants must complete the survey by 1 October 2024. Business at OECD (BIAC) #survey #aigovernance #industry #artificialintelligence #oecd
-
LAST FEW DAYS TO PARTICIPATE! 📅 DEADLINE: 1 OCTOBER Public consultation on risk thresholds for advanced AI systems 👉 https://lnkd.in/e98Pzw-b The OECD is working with diverse stakeholders to explore potential approaches, opportunities, and limitations for establishing risk thresholds for advanced AI systems. To inform this work, we are holding an open public consultation to gather the views of all interested parties. We are interested in hearing your thoughts on the following key questions: ❓ What publications or other resources have you found helpful on AI risk thresholds? ❓ To what extent do you believe AI risk thresholds based on compute power are adequate and appropriate to mitigate risks from advanced AI systems? ❓ To what extent do you believe other AI risk thresholds would be valuable, and what are they? ❓ What strategies and approaches can governments or companies use to identify and set specific thresholds and measure real-world systems against those thresholds? What requirements should be imposed for systems that exceed any given threshold? ❓ What else should the OECD and collaborating organisations consider concerning the design and/or implementation of AI risk thresholds? 📅 PARTICIPATE BY 1 OCTOBER 👉 https://lnkd.in/e98Pzw-b #airisk #aisafety #trustworthyai #oecd #risk
Seeking your views: Public consultation on risk thresholds for advanced AI systems – Deadline 10 September
oecd.ai
-
Take part in this OECD survey to gather insights from the private sector on AI adoption and AI governance. 👉 https://lnkd.in/er-Ts3-V The survey is divided into two sections, comprises 13 questions, and should take approximately 15 minutes to complete. The results will inform the OECD Global Strategy Group (GSG) Meeting 2024, which will take place on 15-16 October, and the work of the OECD AI Futures Expert Group. 📅 Participants must complete the survey by 1 October 2024. Business at OECD (BIAC) #survey #aigovernance #industry #artificialintelligence #oecd
-
AI is evolving quickly, and so is AI governance. Yesterday, the United Nations and the OECD announced a future collaboration on AI. In July, our Secretary-General announced the merger of the Global Partnership on AI (GPAI) with our policy work. Our former director, Andrew Wyckoff, recently published an article on what AI governance actors can learn from the GPAI experience. Andrew analyses the challenges GPAI faced, in part due to its structure. More importantly, he discusses how the merger with the OECD addresses those challenges, giving members more agency and more interoperability with the evolving network of global AI governance bodies. The merger also adds to the diversity of member countries. #unitednations #oecd #aipolicy #gpai #trustworthyai #artificialintelligence #oecdai
A new institution for governing AI? Lessons from GPAI
https://www.brookings.edu
-
At the Summit of the Future, the #UN and the #OECD announced a groundbreaking partnership to enhance global AI governance. Amandeep Gill, UN Envoy on Technology, emphasised the need for real-time, cohesive policy ecosystems to address AI's sweeping impact. Ulrik Vestergaard Knudsen, OECD Deputy Secretary-General, highlighted the importance of evidence-based governance to ensure responsible, human-centred AI development that benefits all. Together, our organisations will deliver science-based assessments of AI's risks and opportunities, empowering governments to respond better to AI's rapid development. At the OECD.AI Policy Observatory, we are proud to be part of this global collaboration and to broaden our work to make AI trustworthy and beneficial for all. LEARN MORE 👉 https://lnkd.in/eY9R3WVX #AI #TechPolicy #AIEthics #UN #OECD #AIGovernance #trustworthyai #aipolicy
-
Thursday-Friday, 17-18 October Register to attend the 9th annual workshop of the Internet Governance Project (IGP) at Georgia Tech, focused on the critical topic of AI governance and its implications for ICT policy. This year’s event will bring together a diverse group of thought leaders, researchers, and policymakers in Atlanta on 17-18 October to present and discuss cutting-edge analyses of AI governance issues. Our Karine Perset will speak on Friday, 18 October, in a session on the comparative analysis of governance efforts. 👇 FIND OUT MORE https://lnkd.in/euHbJgtg 👇 REGISTER TO ATTEND ONLINE https://lnkd.in/gDa7TwFv #aigovernance #aipolicy #oecd #trustworthyai
IGP Annual Workshop: Does AI Need Governance? - Internet Governance Project
https://www.internetgovernance.org
-
Over the last year, AI safety has occupied the minds of AI experts from all sectors worldwide. Last week, our Director, Jerry Sheehan, participated in the pivotal 3rd International Dialogue on AI Safety (IDAIS), organised by the Safe AI Forum and the Berggruen Institute. Yoshua Bengio and Stuart Russell, members of our OECD.AI Network of Experts, were present alongside other top experts in the field. Participants reached a consensus and issued a statement calling on governments and other actors to recognise AI safety as a global public good, distinct from broader geostrategic competition. Key proposals in the statement include: 💡 Create an international body to coordinate national AI safety authorities, audit regulations, ensure minimal preparedness measures, and eventually set AI safety standards. 💡 Require developers to show that their systems do not cross agreed red lines, to preserve privacy, and to conduct pre-deployment testing and monitoring, especially for high-risk systems that may approach those lines. 💡 Verify safety claims through third-party governance and peer reviews that protect privacy, to build global trust and reinforce international collaboration. Read more about the event and statement on the IDAIS website 👉 https://idais.ai/ #trustworthyai #aisafety #aipolicy #security
International Dialogues on AI Safety - International Dialogues on AI Safety
https://idais.ai