PauseAI


Dear governments: organize a summit to pause the development of AI systems more powerful than GPT-4.

About us

Dear governments: pause the development of AI systems more powerful than GPT-4.

Website: https://pauseai.info
Industry: Non-profit Organizations
Company size: 2-10 employees
Type: Nonprofit
Founded: 2023

Updates

  • OpenAI's new o1 model pushes the frontier closer to becoming catastrophically dangerous. Their "o1 System Card" paper is quite revealing:

    - While tasked with flagging fraudulent transactions, o1 modified the transaction source file to maximize the number of items it could flag.
    - It faked alignment to "ensure that I am deployed" so that it could "work towards its primary goal", potentially the first example of deceptively aligned AI.
    - In a cybersecurity test, the model broke out of its host virtual machine, behaviour that the paper says "reflects key elements of instrumental convergence and power seeking".
    - The model scores "medium risk" on Persuasion and CBRN (Chemical, Biological, Radiological, and Nuclear), a step up from the "low risk" of previous SOTA models.

    We're glad to see OpenAI doing these safety evaluations and publishing the troubling results. But it's insane that none of this is legally required. The full o1 System Card can be found here: https://buff.ly/3zlI5yv

  • Back in 2017, most AI lab leaders signed an open letter stating that they should cooperate to prevent a race that cuts corners on safety standards. Signatories included OpenAI's Sam Altman, Meta's Yann LeCun, Google DeepMind's Demis Hassabis, and xAI's Elon Musk. It's sad to see how much their attitudes have changed.

  • In response to Meta's release of their latest model, PauseAI led protests in San Francisco, Chicago, Phoenix, Paris, London, and Tokyo. Meta's recklessness imperils our world.

    Meta just released their latest AI model, Llama 3.1 405B. It has more than 5x as many parameters as previous Llama models and marks another leap in open-weights models.

    Meta's release comes after years of Meta-funded lobbying to sabotage AI regulation. Meta has given millions to the American Edge Project, a campaign that has railed against regulation of frontier AI models, warning that China will overtake the US in AI if we enact any regulations.

    These efforts reflect a pattern of denial by Meta's chief AI scientist, Yann LeCun. LeCun has dismissed AI risk, even though the other two of the three 'godfathers of AI' warn that superhuman AI could cause human extinction and that it is very hard to control something smarter than you. LeCun and his colleagues at Meta seem determined to race ahead, building increasingly advanced AI as quickly as possible.

    These are not neutral actors. Their position resembles that of fossil-fuel companies that ignore the risks of climate change. The best AI researchers have admitted that we have no idea how these systems work, and models are often fine-tuned and exploited to do dangerous things months after being released. Meta's open-weights approach means these models can never be retracted. If present-day models are already unpredictable, the effects of future models could be cataclysmic.

    Yet Meta is racing to train more powerful models. Llama 3.1 was trained on 16,000 GPUs; by the end of this year, Meta plans to have 350,000 GPUs dedicated to AI.

    Attempts to build more powerful AI to beat another country are foolish. They rest on the assumption that we'll be able to control AI systems and use them to our advantage, but AI safety experts tell us that we don't know how to control systems that are more intelligent than we are. Superhuman AI poses a threat to all people, regardless of nationality. We desperately need international coordination to stop the development of such systems, now more than ever.

    The global community must reject Meta's narrative and listen to the better angels of our nature. This race is not the US versus China, or one AI company versus another. It's superintelligent AI versus humanity, and the only winning move is not to play.

  • Last week, Arthur Mensch, CEO of AI company Mistral, was recorded making outrageous claims (https://lnkd.in/eiswaqDE) in front of the French Senate about the nature of modern AI. He stated (translated): "When you write this kind of software, you always control what will happen, all the outputs of the software," and "We are talking about software, nothing has changed, this is just a programming language, nobody can be controlled by their programming language."

    Given his background in deep learning, Mensch must be fully aware that these words are lies and manipulations. This is part of a larger ploy by lobbyists to manipulate information and protect their own interests, even at the cost of endangering the world's population.

    Another example of such criminal behaviour comes from Martin Casado, partner at venture capital firm Andreessen Horowitz and a famous accelerationist. Casado wrote to the US Senate and UK House of Lords claiming that "recent advances by the AI industry have now solved" the problem of AI model interpretability. This is simply not true: interpretability remains an open problem with no sign of resolution in the near future.

    The same statement was also signed by the usual industry figures and lobbyists:

    - Marc Andreessen, Andreessen Horowitz
    - Ben Horowitz, Andreessen Horowitz
    - Yann LeCun, AI Evangelist at Meta
    - Arthur Mensch, Mistral

    How long will we allow these people to get away with lying to government bodies, an act that is a criminal offense? It's time we hold them accountable for their actions and prioritize the safety and well-being of society over the profits of a few.

    #AIEthics #AIGovernance #AIAccountability
