Deep Dives

Major AI firms commit to enhanced safety measures

July 21, 2023
We summarized this source into the key points below; see the original article for full details.

Seven leading AI companies have agreed to a voluntary commitment to put their AI models through independent security testing before launch, as part of a larger deal with the US government to enhance AI safety and alignment.

AI Security Agreement: Key tech companies have agreed to a safety commitment for AI development.
  • This commitment includes running security tests and identifying vulnerabilities before AI models go live.
  • It's part of a broader deal with the US government to improve AI safety and alignment.

  • Participating Companies: Seven top AI firms, including Amazon, Google, and Microsoft, have signed the agreement.
  • These companies will not only run tests and implement security measures but also share risk reduction strategies.
  • The firms are also expected to invest in cybersecurity measures.

  • Government Involvement: The agreement is part of the Biden administration's efforts to regulate AI development.
  • There are calls for comprehensive legislation similar to what exists in the EU, but no such law exists yet in the US.
  • The administration is working on an executive order and bipartisan legislation for responsible innovation.

  • Commitments under the Agreement: The agreement obliges companies to undertake various safety measures.
  • These include internal and external security testing, and sharing of information on managing AI risks with the industry, government, and civil society.
  • The companies are also required to facilitate third-party discovery and reporting of vulnerabilities.

  • Future AI Models: The agreement mainly applies to future models that are more powerful than those currently available.
  • It therefore does not cover existing models such as GPT-4 and Titan.
  • The companies have also committed to developing watermarking systems that make it clear when content is AI-generated.

  • Societal Impact of AI: The agreement highlights the need for research on the societal risks posed by AI.
  • These risks include harmful bias, discrimination, and privacy concerns.
  • The AI firms also agree to use AI to address significant societal challenges, such as cancer prevention and climate change mitigation.

  • Global AI Regulations: AI has become a crucial regulatory topic in the tech industry worldwide.
  • Countries around the world are working to ensure the safe development and deployment of AI.
  • OpenAI, Anthropic, and Google's DeepMind have agreed to provide early access to models for AI safety researchers in the UK.
