EU’s Preliminary Deal on AI Regulation: Implications for ChatGPT


The European Union has reached a preliminary deal setting out rules for governing advanced AI models, with particular attention to general-purpose systems such as ChatGPT. The agreement marks a significant stride toward establishing the world’s first comprehensive artificial intelligence regulation.

Transparency for AI Systems

To enhance transparency, developers of general-purpose AI systems, including ChatGPT, must meet a set of baseline requirements. These include implementing an acceptable-use policy, maintaining up-to-date information on how their models were trained, and publishing a detailed summary of the data used in training. A commitment to respecting copyright law is also mandatory.


Additional Rules for Models Posing “Systemic Risk”

Models identified as posing a “systemic risk” face more stringent regulations. The determination of this risk hinges on the amount of computing power used to train the model: any model trained with more than 10^25 floating-point operations (FLOPs) is presumed to fall under this category, and OpenAI’s GPT-4 is widely reported to be the only model that automatically qualifies. The EU’s executive arm also holds the authority to designate other models based on criteria such as data set size, number of registered business users, and number of end users.
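As a rough illustration of how the compute criterion works, the sketch below estimates a model’s training compute with the common 6 × parameters × tokens rule of thumb from the scaling-law literature and compares it to the 10^25 FLOP threshold. The approximation and the example figures are assumptions for illustration; they are not part of the EU deal itself.

```python
# Hedged sketch: checking a training run against the EU AI Act's
# presumed systemic-risk threshold of 1e25 floating-point operations.
# The 6*N*D estimate (FLOPs ~= 6 x parameters x training tokens) is a
# widely used rule of thumb, not a formula defined in the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold cited in the provisional deal


def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Rough training-compute estimate using the 6*N*D approximation."""
    return 6 * num_parameters * num_tokens


def presumed_systemic_risk(num_parameters: float, num_tokens: float) -> bool:
    """True if the estimated training compute exceeds the 1e25 FLOP threshold."""
    return estimated_training_flops(num_parameters, num_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical example: a 70B-parameter model trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"Estimated compute: {flops:.2e} FLOPs")
print("Presumed systemic risk:", presumed_systemic_risk(70e9, 2e12))
```

Under these illustrative numbers the estimate comes out around 8.4 × 10^23 FLOPs, well below the threshold; only much larger training runs would trip the presumption.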


Code of Conduct for Highly Capable Models

Highly capable models, including ChatGPT, are expected to sign up to a code of conduct while the European Commission works out more comprehensive and enduring controls. Developers that decline to sign must demonstrate to the Commission by other means that they comply with the AI Act. Notably, open-source models are exempt from some of these controls, but lose that exemption if they are deemed to pose a systemic risk.

Stringent Obligations for Models

Models falling under the regulatory framework must report their energy consumption, undergo red-teaming or adversarial testing, assess and mitigate potential systemic risks, and report any serious incidents. They must also implement robust cybersecurity controls, disclose the information used to fine-tune the model, and comply with more energy-efficient standards if such standards are developed.


Approval Process and Concerns

The European Parliament and the EU’s 27 member states have yet to approve the tentative deal. Meanwhile, France and Germany have voiced concerns that the rules risk stifling European AI competitors such as Mistral AI and Aleph Alpha, with both countries worried that excessive regulation could hamper innovation and competitiveness in the global AI landscape.


Our Say

In navigating the intricate terrain of AI regulation, the EU is attempting a delicate balance between fostering innovation and safeguarding against potential risks. The objections raised by some member states while the proposal awaits approval underscore how difficult it is to reach consensus on how far AI regulation should go. Balancing the ambitions of AI developers with the imperative of societal safety remains a pivotal task in charting the future of AI governance.
