We summarized this source into key points to remember.
OpenAI, the creator of ChatGPT, lobbied the European Union to modify the forthcoming EU AI Act, which plans to regulate artificial intelligence use more stringently. OpenAI suggested several amendments to prevent its general-purpose AI systems from being designated as "high risk" and therefore subject to severe safety and transparency obligations. Some of these changes were incorporated into the approved version of the act.
OpenAI's Opposition to Strict Regulations: OpenAI pushed back against its AI systems being labelled as "high risk".
The AI Act originally proposed to classify general-purpose AI systems (GPAIs) like OpenAI's ChatGPT and DALL-E as "high risk", subjecting them to strict safety and transparency obligations.
OpenAI argued that such a designation should apply only to companies using AI for high-risk applications, a stance also taken by Google and Microsoft.
The company maintained that while its GPT-3 model has the potential for high-risk use, it isn't inherently a high-risk system.
Meetings and Discussions with EU Officials: OpenAI engaged with EU officials to clarify their stance on the AI Act.
In June 2022, representatives from OpenAI met with European Commission officials to discuss the risk categorizations proposed within the AI Act.
The company expressed concern that overregulation could hinder AI innovation.
Despite opposing parts of the draft, OpenAI did not propose specific alternative regulations.
Regulation of 'Foundation Models': The EU AI Act imposes transparency requirements on powerful, multi-use AI systems.
The final draft of the AI Act doesn't automatically classify GPAIs as high risk.
However, it does require greater transparency for "foundation models", such as ChatGPT, which can perform a variety of tasks.
Companies will now have to conduct risk assessments and disclose if copyrighted material has been used to train these models.
OpenAI's Stance on AI Regulations: OpenAI's position on AI regulations appears to be inconsistent.
While OpenAI's CEO Sam Altman has publicly advocated for AI regulation, he has also warned that OpenAI may withdraw from the EU market if it cannot comply with the new rules.
Despite this, OpenAI insists that its risk-mitigation strategies for GPAIs are industry-leading.
However, the company has faced criticism for its reluctance to accept regulatory guidelines that would codify those strategies.
The EU's AI Act's Future: The EU AI Act is still in the process of becoming a law.
The act will undergo further "trilogue" negotiations between the European Parliament, the Council of the EU, and the European Commission to finalize its details.
Final approval is expected by the end of this year, and it may take approximately two years to come into effect.