Center for AI and Digital Policy

Public Policy Offices

Washington, DC 58,987 followers

"Filter coffee. Not people."

About us

The Center for AI and Digital Policy aims to ensure that artificial intelligence and digital policies promote a better society, more fair, more just, and more accountable – a world where technology promotes broad social inclusion based on fundamental rights, democratic institutions, and the rule of law. As an independent non-profit corporation, the Center for AI and Digital Policy will bring together world leaders, innovators, advocates, and thinkers to promote established frameworks for AI policy – including the OECD AI Principles and the Universal Guidelines for AI – and to explore emerging challenges.

Website: https://caidp.org
Industry: Public Policy Offices
Company size: 11-50 employees
Headquarters: Washington, DC
Type: Educational
Founded: 2021
Specialties: Public Policy, Artificial Intelligence, Privacy, and AI

Updates

  • 📢 CAIDP Advises Australia on Mandatory Guardrails for AI 🇦🇺

    In detailed comments to the Australian Department of Industry, Science and Resources, the Center for AI and Digital Policy set out key recommendations for the establishment of mandatory guardrails for high-risk AI systems. CAIDP said Australia should:

    ❗ Establish red lines prohibiting AI systems that violate human rights, such as those used in mass surveillance, social scoring, biometric categorization, emotion recognition, and predictive policing.
    ❗ Mandate rigorous, independent impact assessments for high-risk AI systems prior to deployment, with a “go/no go” approach.
    ❗ Clarify legal liability for AI-related harms to incentivize responsible development and deployment.
    ❗ Enforce strict data governance measures providing transparency on data provenance, quality, and consent, and prohibit the use of non-consensually obtained data to train AI models.
    ❗ Implement robust whistleblower protections to safeguard individuals who report unethical practices or safety risks in AI development and across the AI lifecycle.
    ❗ Endorse and ratify the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law.

    As CAIDP explained, governments are “crucial actors in creating an environment that supports safe and responsible AI use, while reducing the risks posed by these technologies.” CAIDP strongly endorsed the Australian Government’s commitment to establishing mandatory guardrails for high-risk AI.

    #aigovernance #guardrails #publicvoice Merve Hickok Marc Rotenberg Caroline Friedman Levy Monique Munarini April Yoder, PhD Heramb Podar Gamze Büşra Kaya Nidhi Sinha Ed Santow

  • 📢 CAIDP Advises OECD on Risk Thresholds for Advanced AI Systems

    In response to a public comment opportunity, the Center for AI and Digital Policy provided detailed comments to OECD.AI on Risk Thresholds for Advanced AI Systems. CAIDP cited several influential AI policy frameworks that address the risk challenges of advanced AI systems:

    ➡️ The Universal Guidelines for AI - Termination Principle (2018)
    ➡️ The EU AI Act - Risk Assessment (2023)
    ➡️ U.S. Executive Order on Safe, Secure, and Trustworthy AI (2023)
    ➡️ California Senate Bill 1047 (not adopted)
    ➡️ The work of Professor Stuart Russell, "Managing extreme AI risks amid rapid progress" (2023) and "Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems" (2024)

    CAIDP explained:

    ❗ "Compute thresholds are important to define and include within a comprehensive risk assessment framework, however compute thresholds as the single measure for risk mitigation is insufficient."
    ❗ "Focusing on compute thresholds as the sole determinant of risk draws attention and resources away from other factors, such as the data used for training a given model, the context in which it is deployed, and safety optimization practices."
    ❗ "Another crucial factor to take into account when determining the risk level of a given model is to consider whether or not it processes personal data. The OECD should draw a sharp distinction between those AI systems that involve the processing of personal data and those AI systems that do not."
    ❗ "We propose a proactive approach towards threshold management. Consistent with regulatory frameworks that call for AI transparency and accountability, we advocate the reporting and publishing of risk scores."
    ❗ "The OECD should promote the data transparency requirements set out in the recently enacted California Data Transparency Act, AB2013." The Act requires the developers of AI systems to post documentation regarding the data used to train the system.

    OECD - OCDE U.S. Mission to the OECD Nayyara Rahman Tatiana G. Zasheva Tim Sowa Marc Rotenberg Merve Hickok Christabel R.

  • 📢 US Releases Guidance to Advance the Responsible Acquisition of AI in Government

    📝 Today, the Office of Management and Budget (OMB) released the "Advancing the Responsible Acquisition of Artificial Intelligence in Government" memorandum (M-24-18).

    🏛️ The White House said, "Successful use of commercially-provided AI requires responsible procurement of AI. This new memo ensures that when Federal agencies acquire AI, they appropriately manage risks and performance; promote a competitive marketplace; and implement structures to govern and manage their business processes related to acquiring AI."

    📝 This OMB memo builds on an earlier OMB memo (M-24-10) that established the first binding requirements for US agencies to strengthen governance, innovation, and risk management for the use of AI. Key topics in the new memo include:

    ➡️ Managing AI Risks and Performance
    ➡️ Promoting a Competitive AI Market with Innovative Acquisition
    ➡️ Ensuring Collaboration Across the Federal Government

    ⚠️ 🤖 The new OMB memo carries forward the emphasis on a careful evaluation of "rights-impacting" and "safety-impacting" AI systems.

    ‼️ The memo establishes "acquisition-related practices that agencies must implement to ensure effective deployment of required risk management practices for rights-impacting and safety-impacting AI. These include specific actions designed to address complex issues related to privacy, security, data ownership and rights, and interoperability that may arise in connection with the acquisition of an AI service or system." Agencies must:

    ❗ Address Privacy Risks Throughout the Acquisition Lifecycle
    ❗ Ensure That AI-based Biometrics Protect the Public’s Rights, Safety, and Privacy
    ❗ Comply with Civil Rights Laws to Avoid Unlawful Bias, Unlawful Discrimination, and Harmful Outcomes
    ❗ Identify When Solicitations Require Compliance for Rights-Impacting and Safety-Impacting AI
    ❗ Incorporate Transparency Requirements into Contractual Terms and Solicitations to Obtain Necessary Information and Access
    ❗ Delineate Responsibilities for Ongoing Testing and Monitoring and Build Evaluations into Vendor Contract Performance
    ❗ Require AI Incident Reporting

    CAIDP President Merve Hickok said, "We welcome the new memo from OMB on the Responsible Acquisition of AI in government. Procurement is a powerful lever for effective AI policy. We urge federal agencies to examine closely the rights-impacting and safety-impacting dimensions of AI systems, prior to acquisition and throughout the AI lifecycle. Deployment must be undertaken responsibly and thoughtfully." Hickok is the author of the forthcoming "From Trustworthy AI Principles to Public Procurement Practices" (De Gruyter, 2024).

    Beginning in 2021, the Center for AI and Digital Policy sent several statements to the OMB and Congressional Committees urging the adoption of strong regulations across the federal government for AI systems.

    #aigovernance #procurement https://lnkd.in/eYYBBsxv

    FACT SHEET: OMB Issues Guidance to Advance the Responsible Acquisition of AI in Government | OMB | The White House

    whitehouse.gov

  • 📢 ✍🏼 📜 The International Bar Association (IBA) has formally endorsed the Council of Europe Framework Convention on #ArtificialIntelligence, #Democracy, #HumanRights, and the #RuleOfLaw.

    📖 🤖 🏛️ This endorsement follows the report ‘The Future is Now: Artificial Intelligence and the Legal Profession’, recently published by the IBA in partnership with the Center for AI and Digital Policy. The combined expertise of the two organisations underscores the relevance of the #legal profession’s involvement in AI governance developments.

    🙏🏼 🤝🏼 🌐 Marc Rotenberg, Executive Director of the Center for AI and Digital Policy, said, "We welcome the support of the International Bar Association for the AI Convention, the first internationally binding treaty for the governance of AI. We look forward to collaborating with the IBA as we promote the Treaty's ratification and implementation in countries around the world."

    American Bar Foundation American Society of International Law American Bar Association The American Law Institute European Law Institute (ELI) https://lnkd.in/g_xJgM5m

    The IBA is the first association of legal practitioners to endorse the Council of Europe Framework Convention on Artificial Intelligence

    ibanet.org

  • 📰 The Center for AI and Digital Policy Celebrates Publication of "Time for California to Act on Algorithmic Discrimination"

    Evelina Ayrapetyan has published "Time for California to Act on Algorithmic Discrimination" in Tech Policy Press, shining a light on AB2013, the Data Transparency Act, and the removal of obligations for automated decision-making technology. Evelina Ayrapetyan is a Research Fellow at the Center for AI and Digital Policy (CAIDP), where she recently launched the CAIDP California Affiliate to advocate for the safe development and deployment of emerging tech in the state.

    🔥 "Initially, AB2013 sought to create transparency for all AI systems, including GenAI and Automated Decision-Making Technology (ADMT). However, the bill's final version focuses solely on GenAI, stripping out crucial transparency requirements for ADMT."

    🔥 "This change is significant because ADMT, not GenAI, is currently being used to determine access to education, housing, credit, and employment for millions of Americans. Limiting AB2013’s scope to GenAI weakens its potential, especially in high-risk applications. The Biden Administration's 2022 Blueprint for an AI Bill of Rights emphasizes that algorithms often replicate and exacerbate existing inequalities, introducing harmful bias and discrimination. . . . Transparency is key to holding developers accountable and safeguarding the rights of all Americans."

    🔥 "Public trust in AI is plummeting, with confidence in AI systems dropping from 50% to 35%. This isn’t a partisan issue; it reflects a deep-seated concern among Americans that AI systems—especially in employment, lending, and criminal justice—are harming people."

    🔥 "I applaud Governor Newsom for signing AB2013 into law and urge California legislators to build on AB2013 to create regulatory frameworks for ADMT. By regulating ADMT, legislators can ensure fairness, transparency, and accountability in the systems that are already shaping our society."

    Read the complete article below. #aigovernance Office of California Governor Gavin Newsom #california https://lnkd.in/gRRGkQBZ

    Time for California to Act on Algorithmic Discrimination | TechPolicy.Press

    techpolicy.press

  • 📰 CAIDP Update 6.37 - AI Policy News (Sept. 30, 2024)

    🇪🇺 Over 100 Companies Pledge Early Compliance with EU AI Act
    🇺🇸 FTC Cracks Down on Deceptive AI Schemes in Nationwide Sweep
    🇺🇸 California Enacts Landmark AI Data Transparency Law
    🇺🇸 Governor Newsom Vetoes AI Safety Bill SB 1047, Despite Strong Expert Support
    🇸🇬 Singapore Courts Address AI Use in Legal Proceedings
    🇰🇷 South Korea Fines WorldCoin $850,000 for Privacy Violations
    🇳🇴 Norway Bets Big on AI in New Digital Strategy
    🇷🇺 Russia's AI Ethics Code Gains New Signatories
    🇳🇱 Dutch Regulator Seeks Input on AI Bans
    🗣️ 🇪🇺 CAIDP Europe Joins EU's AI Code of Practice Drafting
    🗣️ 🌐 CAIDP Calls for Unified AI Governance in Response to UNESCO’s Consultation
    🗣️ 🇹🇼 CAIDP Recommends Stronger Protections in Taiwan’s AI Basic Law Draft
    🗣️ 🇺🇸 CAIDP Urges U.S. Senate to Protect Workers from AI Risks
    🗣️ ✍🏼 📜 CAIDP Council of Europe AI Treaty Observatory - Article 9
    🗣️ 🇹🇷 CAIDP Executive Director Addresses Istanbul Bar Association
    🗣️ 📖 🏛️ CAIDP and IBA Release Report "The Future is Now: AI and the Legal Profession"
    🗣️ 🔢 CAIDP Europe Calls for Support to the Algorithmic Pluralism Initiative

    #AIgovernance #DataTransparency #Transparency European Union Federal Trade Commission Worldcoin Office of California Governor Gavin Newsom United States Senate UNESCO Council of Europe İstanbul Barosu International Bar Association #AIA #CodeOfPractice

    ❗ ✍🏼 📰 Subscribe to the CAIDP Update - caidp.org/caidp-update/

  • 📢 California Governor Signs AI Laws, Announces New Initiatives

    "California Governor Gavin Newsom announced a series of initiatives to further protect Californians from fast-moving and transformative GenAI technology, while vetoing legislation that falls short of providing a flexible, comprehensive solution to curbing the potential catastrophic risks."

    ✅ According to the Office of California Governor Gavin Newsom, the Governor signed 17 bills covering the deployment and regulation of GenAI technology, "the most comprehensive legislative package in the nation on this emerging industry — cracking down on deepfakes, requiring AI watermarking, protecting children and workers, and combating AI-generated misinformation."

    ✅ "The Newsom Administration will also immediately engage academia to convene labor stakeholders and the private sector to explore approaches to use GenAI technology in the workplace."

    ✅ "Governor Newsom signed legislation requiring California’s Office of Emergency Services to expand their work assessing the potential threats posed by the use of GenAI to California’s critical infrastructure, including those that could lead to mass casualty events."

    ❌ However, Governor Gavin Newsom vetoed SB 1047, a bill that would have imposed safety obligations on companies that develop large "frontier models." He explained:

    📜 "While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology." His statement on SB 1047 is here - https://lnkd.in/egtwWKUH

    The Center for AI and Digital Policy endorsed AB2013, the Data Transparency Act, sponsored by Jacqui Irwin, which Governor Newsom signed into law on Friday. 🎉

    #aigovernance Christabel R. Merve Hickok Evelina Ayrapetyan Jaya V. Kyler Zhou Nidhi Sinha https://lnkd.in/eSq32k8X

    Governor Newsom announces new initiatives to advance safe and responsible AI, protect Californians

    https://www.gov.ca.gov

  • ✍🏼 Governor Gavin Newsom Signs Data Transparency Act

    According to Bloomberg Law, California has enacted the most comprehensive US rules for disclosing information about the data that is fed into artificial intelligence systems, under legislation Gov. Gavin Newsom signed Saturday. The bill (AB 2013) by Assembly Member Jacqui Irwin applies to generative AI, the AI systems that use human prompts to create text, images, and similar content. But these systems have also raised profound concerns about the impact on privacy, intellectual property, security, and fairness.

    The Center for AI and Digital Policy California Team urged the enactment of AB 2013. As CAIDP explained, AB 2013 ("Generative artificial intelligence: training data transparency") would:

    ✅ Establish transparency obligations for foundational AI systems
    ✅ Limit the exploitation of personal data and the work of creative artists by the developers of AI systems
    ✅ Limit the creation of opaque decision-making AI systems that impact fundamental rights

    Evelina Ayrapetyan, a member of the CAIDP California team, said, "We commend Gavin Newsom for signing AB2013 and Assembly Member Jacqui Irwin for her leadership on data transparency. The Center for AI and Digital Policy met with Irwin's team, provided a statement to the California State Assembly, and worked with the National Association of Voice Actors (NAVA), Transparency Coalition.ai, and others to support the law." Evelina continued, "We now urge California lawmakers to create transparency obligations for AI systems that make decisions about people. These systems impact opportunities in education, housing, credit, and employment. There is more work to do."

    Nidhi Sinha, CAIDP Policy Coordinator, said, "Transparency is the basis of AI fairness and AI accountability. We look forward to the implementation and enforcement of this landmark law."

    Who must comply? (h/t Katharina Koerner)
    - Developers of GenAI systems, including any person, partnership, corporation, or government agency that designs, codes, produces, or substantially modifies these AI systems for public use, whether for free or for compensation.

    Developers must post on their website a summary of the datasets used, including:
    - The sources or owners of the datasets
    - A description of how the datasets align with the intended purpose of the AI system
    - The number of data points in the datasets
    - The types of data points (for labeled datasets: the types of labels used; for unlabeled datasets: general characteristics)
    - Whether the datasets include any data protected by copyright, trademark, or patent, or whether the data is in the public domain
    - Whether the datasets were purchased or licensed by the developer
    . . . (and more)

    Evelina Ayrapetyan Christabel R. Merve Hickok Marc Rotenberg Jaya V. Kyler Zhou

    Article in Bloomberg Law by Titus Wu: https://lnkd.in/gFVMMfWZ

    Bill Information

    leginfo.legislature.ca.gov

  • 👏🏼 The Center for AI and Digital Policy Celebrates the Appointment of Virginia Dignum to Chair the ACM Technology Policy Committee

    The Center for AI and Digital Policy celebrates the appointment of Professor Virginia Dignum to chair the ACM Technology Policy Committee (TPC). The TPC sets the agenda for ACM’s global policy activities and serves as the central convening point for ACM's interactions with government organizations, the computing community, and the public in all matters of public policy related to computing and information technology.

    Dignum is a professor of Responsible Artificial Intelligence and the Director of the AI Policy Lab at Umeå University. She is a member of the UN High-Level Advisory Body on AI and a senior advisor to the Wallenberg Foundations. Professor Dignum is also a member of the Center for AI and Digital Policy Global Academic Network, a network of leading academic experts in the AI field who advise CAIDP. Many members of the Network have recently endorsed the Council of Europe AI Treaty, the first internationally binding treaty for AI governance. Global Academic Network - https://lnkd.in/eKqVE6ki

    CAIDP also acknowledges Dr. Lorraine Kisselburgh, who established the Technology Policy Council and serves on both the CAIDP Global Academic Network and the CAIDP Board of Directors, and Dr. Barbara Simons, a former President of the ACM, who first urged the organization to participate in public policy and continues to support the work of the Center for AI and Digital Policy. We celebrate all three ACM leaders in computer science and public policy!

    Maria Helen Murphy and Derrick Cogburn, co-chairs, CAIDP Global Academic Network

  • 📢 CAIDP Advises US Senate to Establish AI Guardrails for the Workplace (Sept. 25, 2024)

    In detailed comments to the United States Senate Committee on Health, Education, Labor & Pensions, the Center for AI and Digital Policy has urged the Senate to safeguard workers.

    🔥 CAIDP explained that "voluntary actions by the AI industry fall short of meaningful protections for workers and do not establish guardrails. . . . Harnessing AI’s opportunities must be undertaken in tandem with addressing AI’s risks."

    CAIDP recommended:
    ✅ AI legislation with clear transparency and accountability provisions. "Congress must create guardrails to protect workers."
    ✅ Oversight to ensure the implementation of the AI Executive Order and the OMB Guidance on Agency Use of AI.

    CAIDP noted that the bipartisan Senate AI Working Group found "wide agreement that workers across the spectrum, ranging from blue-collar positions to C-suite executives, are concerned about the potential for AI to impact their jobs."

    CAIDP emphasized:
    ❗ AI poses a systematic risk to workplace privacy
    ❗ Low-wage and non-union workers are most vulnerable to AI systems
    ❗ AI systems entrench bias in the employment process

    CAIDP pointed out that President Biden’s Executive Order on AI requires federal agencies “to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement; labor standards; workplace equity, health, and safety; and data collection.” The Executive Order makes clear that irresponsible use of AI could “displace and disempower workers.”

    #aigovernance #labor The White House Merve Hickok Christabel R. Marc Rotenberg Janhvi Patel
