Center for AI Policy

Government Relations Services

Washington, DC · 5,573 followers

Developing and promoting policy to mitigate catastrophic risks from AI

About us

The Center for AI Policy (CAIP) is a nonpartisan research organization dedicated to mitigating the catastrophic risks of AI through policy development and advocacy. Operating out of Washington, DC, CAIP works to ensure AI is developed and implemented with the highest safety standards.

Website
https://aipolicy.us/
Industry
Government Relations Services
Company size
2-10 employees
Headquarters
Washington, DC
Type
Nonprofit
Founded
2023

Updates

On Tuesday, September 10, 2024, the Center for AI Policy held a briefing for House and Senate staff on Advancing Education in the AI Era: Promises, Pitfalls, and Policy Strategies. The Center's Executive Director, Jason Green-Lowe, moderated a discussion among a panel of esteemed experts:

• Michael Brickman, Education Policy Director, The Cicero Institute
• Bethany Abbate, AI Policy Manager, Software and Information Industry Association (SIIA)
• Punya Mishra, Professor, Mary Lou Fulton Teachers College at Arizona State University (ASU)
• Pati R., Senior Director of Edtech and Emerging Technologies, Digital Promise

If you missed the event, you can watch a video recording here: https://lnkd.in/efWRWmPf

AI Policy Weekly No. 44:

1) AI software startups are receiving considerable funding: $491 million to KoBold Metals, $500 million to poolside, $230 million to World Labs, and more.

2) The U.S. AI Safety Institute has issued a Request for Information on the responsible development of chemical and biological AI models.

3) According to a new study in Harvard Business Review by University of Cambridge professors and leaders at strategize.inc, GPT-4o has impressive CEO decision-making capabilities that in many ways exceed those of university students and experienced bank executives.

Quote of the Week: Geoffrey Hinton discussed Sam Altman's firing at a press conference after winning the 2024 Nobel Prize in Physics.

#AI #AIPolicy #Startups #Safety #CEO

Read the full stories: https://lnkd.in/eGw8nfJb

AI Policy Weekly #44
aipolicyus.substack.com

Preparedness: Key to Weathering Tech Disasters

With families and communities still working to recover from previous storms, like Hurricane Helene, which devastated many coastal communities and unsuspecting inland areas, America prepared this week for another monster storm: Hurricane Milton. Anticipating the oncoming storm, schools and public events were canceled, supplies were prepositioned, response personnel were activated, and Florida Governor Ron DeSantis preemptively declared a state of emergency for more than 80 percent of the counties across the state days before landfall.

From school fire drills to federal disaster coordination exercises, it is understood that preparedness is key to weathering disasters. Creating a plan, anticipating challenges, and executing a coordinated local, state, and federal response saves lives and protects communities.

Disasters are evolving. This is reflected in the July 2024 IT disaster caused by a faulty CrowdStrike update, which hobbled millions of Windows systems and cost Fortune 500 companies an estimated $5.4 billion. As lives and the economy become increasingly intertwined with technology, it is judicious and necessary that emergency preparation and capabilities adapt to new threats.

For these reasons, the Center for AI Policy (CAIP) is encouraging Congress and the Administration to strengthen America's technology security posture and emergency response capabilities. A foundation for this can be set by "wargaming" technology-related catastrophes. With the proliferation and accessibility of artificial intelligence (AI) tools in particular, critical US systems and infrastructure face an increased risk of attack.

Government agencies already conduct wargames, or tabletop exercises, focused on natural disasters, physical attacks, and cyberattacks. This planning should now be extended to anticipate the evolving threats from advanced AI. We need additional resources to simulate AI-specific catastrophes, mitigate those threats, and prepare robust responses. Only through practice can we better understand our vulnerabilities and fortify our readiness.

Crises have traditionally brought communities and the country together. As we keep those in Hurricane Milton's path in our thoughts and prayers, let's ensure a swift and effective response. And as we look toward the future of emergency preparedness, let's ensure America is ready for existing and novel threats.

-- Brian Waldrip

Today's AI Policy Daily highlights (October 10, 2024):

1. North Koreans using AI and fake American IDs for remote IT work
2. Meta is expanding its content moderation capabilities in Europe
3. Brazil lifts Twitter ban after £3.8m fine payment
4. US government's plan to break up Google's search dominance
5. Potential AI-driven electricity boom compared to '90s internet bubble
6. Amazon's new AI tool for package delivery optimization
7. OpenAI's legal battle with Elon Musk and pursuit of public benefit structure
8. Google DeepMind researchers win Nobel Prize in Chemistry
9. Google's partnership with Sequoia Capital for AI startups
10. Big Tech's role in revitalizing nuclear power for AI energy needs

Read the full newsletter here: https://lnkd.in/eDAc77ge

#ai #artificialintelligence #aipolicy #aiprogramming #airegulation #aisafety

    "The only natural antidote to an increasingly powerful and relentlessly ambitious Sam Altman is to wake up Uncle Sam. "The federal government needs to pass AI safety legislation immediately so that the American public's needs will be reflected in AI developers’ final decisions—not just their public relations campaigns, which hide their ulterior motives and ultimate goals."

Sam Altman’s Dangerous and Unquenchable Craving for Power
Center for AI Policy on LinkedIn

CAIP Comment on BIS Reporting Requirements for the Development of Advanced AI Models and Computing Clusters

On September 11, the U.S. Department of Commerce's Bureau of Industry and Security (BIS) released a proposed rule, "Establishment of Reporting Requirements for the Development of Advanced Artificial Intelligence Models and Computing Clusters." In line with Executive Order 14110, BIS has proposed a quarterly cadence for reporting the development and safety activities of the most powerful models.

The Center for AI Policy (CAIP) supports these reporting requirements and urges Congress to explicitly authorize them. They will offer BIS valuable visibility into the state and safety of America's AI industry. Such insight will enable BIS to analyze whether innovation is matching America's military needs and whether models are being safety-tested before they are released to the wider public.

Beyond the design of the rule itself, sufficient resources and communication between government departments will be crucial to achieving the intent of these reporting requirements. For example, BIS may wish to establish ongoing meetings with representatives of the DoD Chief Digital and Artificial Intelligence Office (CDAO) to understand which innovations are relevant to military usage.

Although the proposed rule is a step toward AI safety, reporting requirements are no guarantee that companies will act responsibly. Given corporate incentives, companies may rush to develop and release AI models without sufficient safety testing. Powerful but insufficiently tested models may prove deadly when deployed in high-stakes critical infrastructure contexts. Similarly, we don't want malicious actors armed with the capability to develop new pathogens. And current generative AI tendencies toward deception and power-seeking are all the more concerning as the autonomy of AI agents increases.

Only by shifting corporate incentives, through required safety measures or clarification of liabilities, can we ensure that companies don't put society at risk with technically faulty or easily misused models.

CAIP replied to BIS's request for comments to help refine the proposed reporting requirements. Read our full comment here: https://lnkd.in/e8GStSXC

Comment on BIS Reporting Requirements for the Development of Advanced AI Models and Computing Clusters | Center for AI Policy | CAIP
aipolicy.us

EU Announces Initial AI Pact Pledges

The European Commission has unveiled the EU AI Pact, a voluntary initiative with over 100 initial signatories from various sectors. The Pact encourages early adoption of EU AI Act principles before the law's full implementation over the next few years.

All participants pledge to adopt AI governance strategies, map potential high-risk AI systems, and promote AI literacy among their staff. Additionally, participants can voluntarily commit to further measures, such as:

• "Put in place processes to identify possible known and reasonably foreseeable risks to health, safety and fundamental rights."
• "Clearly and distinguishably label AI generated content including image, audio or video constituting deep fakes."
• "Ensure that individuals are informed, as appropriate, when they are directly interacting with an AI system."

The EU AI Office will publicly share the commitments that organizations intend to meet, and organizations will report on their implementation progress twelve months after that publication.

Current participants include leading AI companies like OpenAI, Google, Amazon, Microsoft, IBM, and Cohere. Some important AI companies have not yet signed, namely Meta, Anthropic, NVIDIA, and Mistral AI.

The Pact remains open for new signatories, offering companies a chance to voluntarily implement safety measures. In the coming months, the public will find out which AI companies are willing to do that, and how much they are willing to do.

#AI #AIPolicy #EU #AIAct

Today's AI Policy Daily highlights (October 9, 2024):

1. AI pioneers Geoffrey Hinton and John Hopfield awarded the 2024 Nobel Prize in Physics for foundational work in machine learning and artificial neural networks.
2. European data protection regulations are impacting Big Tech's AI plans, with companies like Google, Meta, X, and LinkedIn pausing or delaying projects in the EU.
3. Antitrust officials are considering breaking up tech giants like Google to address alleged monopoly abuses.
4. States are suing TikTok over child safety concerns related to addictive features amid ongoing national security debates.
5. Blockchain and crypto companies are investing heavily in football sponsorships, spending a record $170 million this season.

Read the full newsletter here: https://lnkd.in/eDAc77ge

#ai #artificialintelligence #aipolicy #aiprogramming #airegulation #aisafety

Deepfake Clone Targets U.S. Senator, Underscoring AI Risks

Senate Foreign Relations Committee Chairman Ben Cardin was recently targeted by a sophisticated deepfake operation. Last month, an unknown actor impersonated former Ukrainian Foreign Minister Dmytro Kuleba in a Zoom call, using advanced AI technology to create a convincing audio and video likeness.

According to a notice from the Senate's security office, Cardin grew suspicious when the fake Kuleba "began acting out of character and firmly pressing for responses to questions like 'Do you support long range missiles into Russian territory? I need to know your answer.'"

"After immediately becoming clear that the individual I was engaging with was not who they claimed to be, I ended the call and my office took swift action, alerting the relevant authorities," said Senator Cardin in a statement. The Federal Bureau of Investigation (FBI) is investigating the incident.

"We have seen an increase of social engineering threats in the last several months and years," said the Senate's security office, adding that "this attempt stands out due to its technical sophistication and believability."

This deepfake attack highlights AI's potential for political manipulation and disinformation. As AI advances, protecting the integrity of political communications and democratic processes from AI-enabled threats must be a top priority.

#AI #AIPolicy #Deepfake

Pictured: Senator Ben Cardin, Chair of the United States Senate Committee on Foreign Relations.

CAIP congratulates AI safety advocate on winning the 2024 Nobel Prize in Physics

Washington, DC—Geoffrey Hinton, along with John Hopfield, won the 2024 Nobel Prize in Physics for discoveries and inventions that laid the foundation for machine learning. Hinton, long known as the "godfather of artificial intelligence," made headlines last year when he quit his job at Google to speak more openly about the dangers of the technology he helped create.

Hinton says now is the moment to run experiments to understand artificial intelligence (AI). He has called for governments, companies, and developers to:

• Run experiments to understand AI
• Pass laws to ensure AI is used ethically
• Create a registry of large AI systems
• Require companies to report when AI behaves dangerously
• Legally protect whistleblowers
• Have AI developers focus on understanding how AI might go wrong before it's more intelligent than humans

Hinton, speaking to 60 Minutes on October 8, 2023: "It may be we look back and see this as a kind of turning point when humanity had to make the decision about whether to develop these things further and what to do to protect themselves if they did. I don't know. I think my main message is there's enormous uncertainty about what's gonna happen next. These things do understand. And because they understand, we need to think hard about what's going to happen next. And we just don't know."

"Geoffrey Hinton has been at the forefront of the relationship between humans and digital intelligence," said Jason Green-Lowe, executive director at the Center for AI Policy (CAIP). "Hinton has been sounding the alarm and telling anyone who will listen that we must worry about emerging AI technology."

Hinton long thought computer models weren't as powerful as the human brain. Now, he sees artificial intelligence as a relatively imminent "existential threat."

"Sadly, with Congress bogged down in election-year politicking, AI safety rules have taken a back seat, allowing unchecked innovation to be more important than public safety," added Green-Lowe. "Hinton's winning of the 2024 Nobel Prize in Physics should be a clarion call for elected officials to do the right thing and pass robust AI safety laws immediately."

The Center for AI Policy (CAIP) is a nonpartisan research organization dedicated to mitigating the catastrophic risks of AI through policy development and advocacy. Operating out of Washington, DC, CAIP works to ensure AI is developed and implemented with the highest safety standards. More @ aipolicy.us.

###

October 8, 2024
