EU countries adopt a common position on Artificial Intelligence rulebook

EU ministers gave their green light to the general approach on the AI Act at the Telecommunications Council meeting on Tuesday (6 December). EURACTIV provides an overview of the main changes.

The AI Act is a major legislative proposal to regulate Artificial Intelligence technology based on its potential to cause harm. The EU Council is the first co-legislator to complete this step of the legislative process, while the European Parliament is due to finalize its version around March next year.

“The final compromise text of the Czech presidency takes into account the key concerns of the member states and maintains the delicate balance between the protection of fundamental rights and the promotion of the use of AI technology,” said Ivan Bartoš, Deputy Prime Minister of the Czech Republic for Digitization.

AI definition

How AI is defined was a critical part of the discussions, as it determines the scope of the regulation.

Member states were concerned that traditional software would be caught, so they put forward a narrower definition covering systems developed through machine learning and logic- and knowledge-based approaches, elements the Commission could further specify or update later via delegated acts.

General purpose AI

General purpose AI covers large language models that can be adapted to perform various tasks. As such, it initially did not fall within the scope of the AI regulation, which only covered systems with a specific intended purpose.


However, member states felt that leaving these critical systems out of scope would cripple the AI rulebook, while the specificities of this nascent market called for some tailoring of the rules.

The Czech presidency resolved the issue by tasking the Commission with carrying out an impact assessment and consultation, on the basis of which it would adapt the rules for general purpose AI via an implementing act within one and a half years of the regulation entering into force.

Prohibited practices

The AI regulation altogether prohibits using the technology for subliminal techniques, exploiting vulnerabilities and Chinese-style social scoring.

The social scoring ban was extended to private actors to prevent it from being circumvented via contractors, while the concept of vulnerability was broadened to cover socio-economic aspects.

High risk categories

In Annex III, the regulation lists the AI uses considered to pose a high risk of harming people or property and which must therefore meet stricter legal obligations.

In particular, the Czech presidency introduced an additional layer, meaning that, to be classified as high-risk, a system must carry decisive weight in the decision-making process rather than being “purely accessory”, a concept left for the Commission to define via an implementing act.

The Council removed from the list deep fake detection by law enforcement authorities, crime analytics and the verification of the authenticity of travel documents. However, critical digital infrastructure and life and health insurance were added.


Another significant change is that the Commission will be able not only to add high-risk use cases to the annex, but also to remove them under certain conditions.

In addition, the obligation for providers of high-risk systems to register in an EU database has been extended to users that are public bodies, with the exception of law enforcement.

High risk obligations

High-risk systems will have to meet requirements relating to matters such as dataset quality and detailed technical documentation. For the Czech presidency, these provisions “have been clarified and adjusted in such a way that they are more technically feasible and less burdensome for those concerned to comply with them”.

The general approach also seeks to clarify the allocation of responsibilities along complex AI value chains and how the AI Act will interact with existing sectoral legislation.

Law enforcement

The member states have introduced several law enforcement carve-outs into the text, some of which are intended as “bargaining chips” for the negotiations with the European Parliament.

For example, while users of high-risk systems will have to monitor them after launch and report serious incidents to the provider, this obligation does not apply to sensitive information stemming from law enforcement activities.


What EU governments seem less keen to concede is the exclusion from the regulation’s scope of AI applications related to national security, defense and the military, as well as the possibility for law enforcement agencies to use “real-time” remote biometric identification systems in exceptional circumstances.

Governance & enforcement

The Council strengthened the AI Board, which will bring together the competent national authorities, notably by introducing elements already present in the European Data Protection Board, such as a pool of experts.

The general approach also requires the Commission to designate one or more testing facilities to provide technical support for enforcement and to adopt guidance on how to comply with the legislation.

The penalties for breaching the AI rulebook’s obligations have been made lighter for SMEs, while a set of criteria has been introduced for national authorities to consider when calculating sanctions.

Innovation

The AI Act includes the possibility of setting up regulatory sandboxes, controlled environments under the supervision of an authority where companies can test AI solutions.

The Council’s text allows such testing to take place in real-world conditions and, in certain cases, even unsupervised.

Transparency

The transparency obligations for emotion recognition systems and deep fakes have been improved.

[Edited by Nathalie Weatherald]


