Center for AI and Digital Policy

Public Policy Offices

Washington, DC 59,342 followers

"Filter coffee. Not people."

About us

The Center for AI and Digital Policy aims to ensure that artificial intelligence and digital policies promote a better society, more fair, more just, and more accountable – a world where technology promotes broad social inclusion based on fundamental rights, democratic institutions, and the rule of law. As an independent non-profit corporation, the Center for AI and Digital Policy will bring together world leaders, innovators, advocates, and thinkers to promote established frameworks for AI policy – including the OECD AI Principles and the Universal Guidelines for AI – and to explore emerging challenges.

Website
https://caidp.org
Industry
Public Policy Offices
Company size
11-50 employees
Headquarters
Washington, DC
Type
Educational
Founded
2021
Specialties
Public Policy, Artificial Intelligence, Privacy, and AI

Updates

  • 📢 📜 G7 Privacy Officials Issue Statement on the Role of Data Protection Authorities in Fostering Trustworthy AI

    Privacy officials from the G7 nations, meeting in Rome, issued several statements regarding data protection and artificial intelligence. In the Statement on the Role of Data Protection Authorities in Fostering Trustworthy AI, they said:

    🔥 "We emphasize that many AI technologies, including generative AI, are based on the processing of personal data, which can subject natural persons to unfair stereotyping, bias and discrimination even when not directly processing their respective personal data. This, in turn, may influence larger societal processes with deep fakes and disinformation. Consequently, data protection and the need to protect the right to privacy are more critical than ever."

    🔥 "We reiterate that current privacy and data protection laws apply to the development and use of generative AI products, even as different jurisdictions continue to develop AI-specific laws and policies."

    🔥 "We acknowledge that the complexity of AI technologies, which often involve extensive collection of personal data and sophisticated algorithmic systems, has led DPAs to emerge as key figures in the AI governance landscape, leveraging their expertise in data protection to uphold privacy and ethical standards."

    🔥 "Therefore, we call on policymakers and regulators to make available adequate human and financial resources to DPAs, to enable our societies to adequately tackle the new, highly demanding challenges posed by developing trustworthy AI as outlined in this Statement."

    In their joint communiqué, the G7 data protection and privacy authorities note that because many AI technologies, such as generative AI, are based on the processing of personal data, the need to protect privacy is "more critical than ever." (https://lnkd.in/eHV2jMQt)

    The statement on child-appropriate AI, developed by the G7 DPA Roundtable's Emerging Technologies Working Group, examined issues related to young people's use of AI-powered technology, such as toys and educational software. It also addresses the use of AI to make decisions or evaluate information about children. (https://lnkd.in/eGfU_6qf)

    The Rome gathering marked the fourth meeting of the G7 DPA Roundtable since it was launched in 2021 during the U.K. G7 presidency to provide a forum for discussing emerging data-protection challenges and the need for closer international collaboration. The Office of the Canadian Privacy Commissioner will host the next meeting of the Roundtable next June, as Canada takes on the presidency of the global forum during the coming year.

    #aigovernance #g7 The Italian Data Protection Authority EDPS - European Data Protection Supervisor European Data Protection Board Federal Trade Commission CNIL - Commission Nationale de l'Informatique et des Libertés Information Commissioner's Office

  • 📢 CAIDP Advises Saudi Data and Artificial Intelligence Authority on Deepfake Guidelines

    In comments to the Saudi Data and Artificial Intelligence Authority (SDAIA), the Center for AI and Digital Policy provided detailed comments on the proposed Guidelines for deepfakes. CAIDP commended the SDAIA for drafting "comprehensive guidelines for technology development, content creators and users to address the implications of deepfake tools and their associated risks...[as well as] recommendations to ensure safe and ethical use of this technology in line with the [SDAIA] AI Ethics Principles and Data Privacy practices."

    CAIDP recommended additional measures to protect civil, human, and individual rights in the face of increasingly pervasive deepfake content, which causes harm to individuals, communities, and democratic processes worldwide:

    1️⃣ Expand the definition of "deepfake" to emphasize outcomes
    2️⃣ Enhance individual control over personal data
    3️⃣ Strengthen legal protections against the misuse of synthetic media
    4️⃣ Implement mandatory pre-deployment impact assessments
    5️⃣ Include specific provisions protecting the rights of children and other vulnerable groups
    6️⃣ Improve incident reporting
    7️⃣ Ensure human oversight for deepfake-related techniques
    8️⃣ Support the endorsement and ratification of the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law

    CAIDP expressed appreciation "to the SDAIA for the opportunity to provide feedback on the Deepfake Guidelines, and to recommend amendments to ensure that these effectively uphold fundamental individual, civil and human rights."

    SDAIA | سدايا #deepfakes #aigovernance Merve Hickok Marc Rotenberg Caroline Friedman Levy Monique Munarini April Yoder, PhD Heramb Podar Gamze Büşra Kaya Nidhi Sinha

  • 📢 CAIDP Provides Comments to FCC on AI, Consumer Protection, and Robocalls (Oct. 10, 2024)

    The Center for AI and Digital Policy has provided detailed comments to the Federal Communications Commission regarding a proposed rule on the implications of AI for communications technologies. CAIDP commended the FCC's initiative to "protect consumers from the abuse of artificial intelligence (AI) systems, particularly considering the exacerbation of AI-driven risks of crime, fraud, and annoyance, by undertaking rulemaking that centers the privacy and security of consumers."

    ⚠️ 🤖 📞 CAIDP explained, "As the FCC is aware, robocalls are not new. However, the breakthroughs in AI and machine learning (ML) systems also pose novel threats to human interaction and trust, and existing regulatory paradigms are inadequate to address these threats. Now more than ever, the FCC must adopt regulations that are human-centric, and place the public interest, privacy, and security of consumers above all else."

    ⚠️ 🤖 📞 CAIDP also warned, "Machine learning systems mimic and manipulate human behavior. Understanding this fundamental nature of the technology is also key to contextualizing the potential uses and abuses of AI systems."

    CAIDP recommended that the FCC:
    ➡️ Establish an "opt-in" regime for calls, messages, and AI-generated communications
    ➡️ Enhance accountability measures to limit AI risks to consumers

    CAIDP also wrote:

    🔥 "To ensure that its definition of “AI-generated calls” remains relevant and effective, the Commission should use broad, technology-neutral language that focuses on the functionality and impact of AI rather than specific technical implementations."

    🔥 "The definition adopted by the Commission should cover both generative and predictive AI technologies, as both can be used in ways that increase risks to consumers."

    CAIDP also cautioned against carrying forward consents obtained before AI-generated communications:

    🔥 "The Proposed Rule seeks to address the seismic shifts in technology with the introduction and commercialization of GenAI systems. Therefore, grandfathering existing consents under the proposed rule would defeat the very purpose of the rule. We recommend implementing an opt-in system where consumers specifically opt-in to receive AI-generated calls, even where extant requirements provide for express consent."

    The Federal Communications Commission will receive reply comments until October 25, 2024, and will then issue a final rule.

    Christabel R. Marc Rotenberg Merve Hickok Rupali Lekhi Bhawna M. Ngonidzaishe Gotora Peter Zhang

  • 📢 🏅 Hopfield and Hinton Receive Nobel Prize in Physics for "foundational discoveries and inventions that enable machine learning with artificial neural networks" (Oct. 8, 2024)

    👏🏼 The Center for AI and Digital Policy applauded the announcement and noted that Geoffrey Hinton is now leading efforts to establish limits on AI systems. Hinton resigned from his position with Google in May 2023 so that he could speak more freely about his concerns with the rapidly developing technology. In numerous interviews since, he has warned about the risks of unregulated AI. Hinton told CNN's Jake Tapper, "I want to blow the whistle and say we should worry seriously about how we stop these things getting control over us."

    Hinton endorsed SB 1047, legislation that would establish accountability for large AI systems and require the creation of a mechanism to stop AI systems no longer under human control.

    Marc Rotenberg, executive director of the Center for AI and Digital Policy, said, "Geoffrey Hinton follows in a line of distinguished scientists who are on the front lines of innovation and also the front lines of the call for regulation. In the 1980s, it was computer scientists in Silicon Valley who warned of the risk of AI warfare and established Computer Professionals for Social Responsibility (CPSR). Today it is Geoffrey Hinton, Yoshua Bengio, Stuart Russell, and others who are making the breakthroughs and simultaneously calling for accountability. The need to maintain human control is foundational for safe, secure, and trustworthy AI."

    The Center for AI and Digital Policy has long urged AI policymakers to implement the Termination Principle from the Universal Guidelines for AI (2018), a widely endorsed framework for AI governance. The Termination Principle states, "An institution that has established an AI system has an affirmative obligation to terminate the system if human control of the system is no longer possible."

    Rotenberg and Christabel R. recently published an article on "The AI Red Line Challenge" for Tech Policy Press: https://lnkd.in/eNCF9ZJg

    #aigovernance The Nobel Prize Terry Winograd Lucy Suchman Eric Roberts Barbara Simons Computer Professionals for Social Responsibility https://lnkd.in/dwQK3jnq

    The Nobel Prize in Physics 2024

    nobelprize.org

  • The Center for AI and Digital Policy is proud to partner with the Renew Democracy Initiative on the second annual "Frontlines of Freedom Conference on Transnational Repression," October 29-30 in Washington, DC. We hope you will join us!

    The first day brings the dissident community and partners together to share best practices; the second day opens to a wider audience to dive deeper into the issues and propose actionable solutions. We will explore how AI aids authoritarian governments and explain the need for countries to rally behind the Council of Europe AI treaty to support human rights, democracy, and the rule of law.

    Day 1: Tuesday, October 29th. Our Day for Dissidents will be hosted at the National Endowment for Democracy and will bring together roughly 50 dissidents and partners for practical resource sessions and community building.

    Day 2: Wednesday, October 30th. We will then unite our activist community with 100+ policymakers, business leaders, journalists, and more to raise the alarm about how transnational repression is affecting open societies, including here in the US, and discuss critical policy recommendations, such as the Council of Europe AI Treaty, for resisting authoritarian influence. Day 2 will be held at the Eaton Hotel in downtown DC.

    CAIDP's Christabel R. will be speaking on "Tyranny & Touchscreens" and the AI Treaty!

    📍 Washington, DC
    📅 October 29-30, 2024
    Register - https://lnkd.in/e3JXZPpS

  • 📢 📰 CAIDP Update 6.38 - AI Policy News (Oct. 7, 2024)

    🇺🇸 White House Sets Guidelines for AI Procurement in U.S. Government
    🇪🇺 EU Launches Code of Practice, Scrutinizes Tech Platforms
    🌐 G7 Nations Target AI Monopolies and Market Distortions
    🇯🇵 Japan AI Safety Institute Issues Guidance on AI Safety Evaluation and Red Teaming
    ⚠️ 🪫 🌲 Experts Sound Alarm on AI's Environmental Impact
    🗣️ 🇦🇺 ⚠️ CAIDP Advises Australia on Mandatory Guardrails for AI
    🗣️ 🌐 🤖 CAIDP Advises OECD on Risk Thresholds for Advanced AI Systems
    🗣️ 📖 🇹🇷 CAIDP President Launches Book, Addresses Türkiye AI Event
    ✍🏼 📜 🏛️ IBA Endorses Council of Europe AI Treaty
    🗣️ 📜 🏛️ CAIDP Council of Europe AI Treaty Observatory
    📘 CAIDP and IBA Report The Future is Now: AI and the Legal Profession
    🪦 In Memoriam: Abhishek Gupta

    The White House European Union #g7 #Japan OECD.AI Council of Europe International Bar Association Abhishek Gupta

  • 📢 CAIDP Advises Australia on Mandatory Guardrails for AI 🇦🇺

    In detailed comments to the Australian Department of Industry, Science and Resources, the Center for AI and Digital Policy set out key recommendations for the establishment of mandatory guardrails for high-risk AI systems. CAIDP said Australia should:

    ❗ Establish red lines prohibiting AI systems that violate human rights, such as those used in mass surveillance, social scoring, biometric categorization, emotion recognition, and predictive policing
    ❗ Mandate rigorous, independent impact assessments for high-risk AI systems prior to deployment, with a "go/no go" approach
    ❗ Clarify legal liability for AI-related harms to incentivize responsible development and deployment
    ❗ Enforce strict data governance measures providing transparency on data provenance, quality, and consent, and prohibit the use of non-consensually obtained data to train AI models
    ❗ Implement robust whistleblower protections to safeguard individuals who report unethical practices or safety risks in AI development and across the AI lifecycle
    ❗ Endorse and ratify the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law

    As CAIDP explained, governments are "crucial actors in creating an environment that supports safe and responsible AI use, while reducing the risks posed by these technologies." CAIDP strongly endorsed the Australian Government's commitment to establishing mandatory guardrails for high-risk AI.

    #aigovernance #guardrails #publicvoice Merve Hickok Marc Rotenberg Caroline Friedman Levy Monique Munarini April Yoder, PhD Heramb Podar Gamze Büşra Kaya Nidhi Sinha Ed Santow

  • 📢 CAIDP Advises OECD on Risk Thresholds for Advanced AI Systems

    In response to a public comment opportunity, the Center for AI and Digital Policy provided detailed comments to OECD.AI on Risk Thresholds for Advanced AI Systems. CAIDP cited several influential AI policy frameworks that address the risk challenges of advanced AI systems:

    ➡️ The Universal Guidelines for AI - Termination Principle (2018)
    ➡️ The EU AI Act - Risk Assessment (2023)
    ➡️ U.S. Executive Order on Safe, Secure, and Trustworthy AI (2023)
    ➡️ California Senate Bill 1047 (not adopted)
    ➡️ The work of Professor Stuart Russell, "Managing extreme AI risks amid rapid progress" (2023) and "Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems" (2024)

    CAIDP explained:

    ❗ "Compute thresholds are important to define and include within a comprehensive risk assessment framework, however compute thresholds as the single measure for risk mitigation is insufficient."
    ❗ "Focusing on compute thresholds as the sole determinant of risk draws attention and resources away from other factors, such as the data used for training a given model, the context in which it is deployed, and safety optimization practices."
    ❗ "Another crucial factor to take into account when determining the risk level of a given model is to consider whether or not it processes personal data. The OECD should draw a sharp distinction between those AI systems that involve the processing of personal data and those AI systems that do not."
    ❗ "We propose a proactive approach towards threshold management. Consistent with regulatory frameworks that call for AI transparency and accountability, we advocate the reporting and publishing of risk scores."
    ❗ "The OECD should promote the data transparency requirements set out in the recently enacted California Data Transparency Act, AB2013." The Act requires the developers of AI systems to post documentation regarding the data used to train the system.

    OECD - OCDE U.S. Mission to the OECD Nayyara Rahman Tatiana G. Zasheva Tim Sowa Marc Rotenberg Merve Hickok Christabel R.

  • 📢 US Releases Guidance to Advance the Responsible Acquisition of AI in Government

    📝 Today, the Office of Management and Budget (OMB) released the "Advancing the Responsible Acquisition of Artificial Intelligence in Government" memorandum (M-24-18).

    🏛️ The White House said, "Successful use of commercially-provided AI requires responsible procurement of AI. This new memo ensures that when Federal agencies acquire AI, they appropriately manage risks and performance; promote a competitive marketplace; and implement structures to govern and manage their business processes related to acquiring AI."

    📝 📝 This OMB memo builds on an earlier OMB memo (M-24-10) that established the first binding requirements for US agencies to strengthen governance, innovation, and risk management for the use of AI. Key topics in the new memo include:

    ➡️ Managing AI Risks and Performance
    ➡️ Promoting a Competitive AI Market with Innovative Acquisition
    ➡️ Ensuring Collaboration Across the Federal Government

    ⚠️ 🤖 ⚠️ The new OMB memo carries forward the emphasis on a careful evaluation of "rights-impacting" and "safety-impacting" AI systems.

    ‼️ The memo establishes "acquisition-related practices that agencies must implement to ensure effective deployment of required risk management practices for rights-impacting and safety-impacting AI. These include specific actions designed to address complex issues related to privacy, security, data ownership and rights, and interoperability that may arise in connection with the acquisition of an AI service or system."

    Agencies must:

    ❗ Address Privacy Risks Throughout the Acquisition Lifecycle
    ❗ Ensure That AI-based Biometrics Protect the Public's Rights, Safety, and Privacy
    ❗ Comply with Civil Rights Laws to Avoid Unlawful Bias, Unlawful Discrimination, and Harmful Outcomes
    ❗ Identify When Solicitations Require Compliance for Rights-Impacting and Safety-Impacting AI
    ❗ Incorporate Transparency Requirements into Contractual Terms and Solicitations to Obtain Necessary Information and Access
    ❗ Delineate Responsibilities for Ongoing Testing and Monitoring and Build Evaluations into Vendor Contract Performance
    ❗ Require AI Incident Reporting

    CAIDP President Merve Hickok said, "We welcome the new memo from OMB on the Responsible Acquisition of AI in government. Procurement is a powerful lever for effective AI policy. We urge federal agencies to examine closely the rights-impacting and safety-impacting dimensions of AI systems, prior to acquisition and throughout the AI lifecycle. Deployment must be undertaken responsibly and thoughtfully."

    Hickok is the author of the forthcoming "From Trustworthy AI Principles to Public Procurement Practices" (De Gruyter 2024). Beginning in 2021, the Center for AI and Digital Policy sent several statements to the OMB and Congressional committees, urging the adoption of strong regulations across the federal government for AI systems.

    #aigovernance #procurement https://lnkd.in/eYYBBsxv

    FACT SHEET: OMB Issues Guidance to Advance the Responsible Acquisition of AI in Government | OMB | The White House

    whitehouse.gov

  • 📢 ✍🏼 📜 The International Bar Association (IBA) has formally endorsed the Council of Europe Framework Convention on #ArtificialIntelligence, #Democracy, #HumanRights, and the #RuleOfLaw.

    📖 🤖 🏛️ This endorsement follows the recent publication of "The Future is Now: Artificial Intelligence and the Legal Profession," a report by the IBA in partnership with the Center for AI and Digital Policy. The combined expertise of the two organisations underscores the relevance of the #legal profession's involvement in AI governance developments.

    🙏🏼 🤝🏼 🌐 Marc Rotenberg, Executive Director of the Center for AI and Digital Policy, said, "We welcome the support of the International Bar Association for the AI Convention, the first internationally binding treaty for the governance of AI. We look forward to collaborating with the IBA as we promote the Treaty's ratification and implementation in countries around the world."

    American Bar Foundation American Society of International Law American Bar Association The American Law Institute European Law Institute (ELI) https://lnkd.in/g_xJgM5m

    The IBA is the first association of legal practitioners to endorse the Council of Europe Framework Convention on Artificial Intelligence

    ibanet.org
