What the AI Act, the first European regulation on artificial intelligence, provides

Lifegate

https://www.lifegate.it/ai-act-intelligenza-artificiale

Agreement reached between Parliament and Council on the AI Act. It bans biometric recognition and introduces various limits but, according to NGOs, it does not go far enough.
  • After 36 hours of negotiations, the European Parliament and the Council reached an agreement on the AI Act, the regulation on artificial intelligence.
  • The use of mass surveillance, such as live facial recognition and predictive policing, is prohibited, with various exceptions.
  • Human rights associations are holding back their enthusiasm: without the definitive text it is not possible to verify the level of protection of rights or the actual limits on mass surveillance.

After months of debate on how to regulate companies like OpenAI that develop artificial intelligence systems, the European Union approved the AI Act on Friday 8 December, the first package of regional rules on artificial intelligence. It is a landmark law which, in the hopes of European legislators, could become a model for the rest of the world.

What is the AI Act and how is it different from other regulations on artificial intelligence

Proposed by the European Commission on 21 April 2021 as part of the EU's digital strategy, the Artificial Intelligence Act is the European Union regulation that aims to introduce a common regulatory and legal framework for artificial intelligence.

The regulation itself is not a global first. In China, for example, rules on generative artificial intelligence already entered into force in August. What sets the AI Act apart, however, is its much broader set of rules on the use of artificial intelligence, including a large number of bans.

It was precisely over certain prohibitions that the trilogue – the negotiation between Parliament, the European Commission and the Council, the final stage of the European legislative process – risked running aground during the 36 hours of negotiations. In particular, the discussion focused on the permitted and prohibited uses of artificial intelligence by law enforcement agencies, particularly around predictive policing and real-time facial recognition. While Parliament defended the line of a total ban, the Council, representing the member states, pushed for a much more permissive approach.

The issue of surveillance by law enforcement agencies

The most controversial and divisive topic, not only between Parliament and the Commission but also with civil society, was the definition of the lawful use of artificial intelligence by law enforcement agencies.

The Council, which represents the 27 national governments of the member states, attempted to allow the use of artificial intelligence to identify people through real-time biometric facial recognition. Another request was to permit the use of these systems for predictive policing, which involves the use of algorithms to predict the probability that a crime will be committed, by whom and where.

In China there is a social credit system to rank the reputation of citizens. In the European Union it will not be allowed © AerialPerspective Works/iStockphoto

The countries that attempted to push in this direction were Italy – forgetting that its own Privacy Guarantor banned the use of real-time recognition in 2021 – Hungary and France. The latter in particular has recently pushed hard in a security-oriented direction, first with the law on "global security" and then with the law passed last April authorising the use of artificial intelligence and preventive investigations ahead of the Paris Olympics.

Not only that: concerns, especially among civil society and digital rights organizations, increased when, during the negotiations, the Council attempted to allow the use of biometric recognition on an ethnic basis.

What uses of AI for surveillance are prohibited

Parliament's opposition held firm and the use of artificial intelligence for surveillance has been banned, with a few exceptions and in any case only with the authorization of the judicial authorities. From what emerged in the following hours, the rules follow a risk-based approach, divided into four categories: minimal, limited, high and unacceptable. The higher the risk, the greater the responsibilities for those who develop and use those systems, up to an outright ban for those considered too dangerous.

The AI Act prohibits biometric categorization based on sensitive personal data, such as ethnicity, faith or sexual orientation; the mass scraping of faces from the internet; and technologies that recognize emotions (though only at work and at school). It also prohibits systems capable of manipulating people's emotions and those based on social scoring, the method of classifying citizens' reputations already adopted in China.

Real-time biometric recognition has been banned except in three situations: a terrorist threat, the search for crime victims (such as hostages) and the identification of suspects in certain "serious crimes". The first list to circulate includes human trafficking, drug trafficking, arms trafficking, child pornography and child abuse, environmental crimes, murder, but also terrorism and kidnapping, crimes that are in fact already covered by the three exceptions provided.

Predictive policing systems, meanwhile, can only be used to analyze anonymized information in order to identify crime trends, while systems that use algorithms to point to a specific suspect are prohibited. In other words, an individual cannot be investigated because they have been flagged by an algorithm.

The other rules provided for in the AI Act

An important issue the regulation addresses is transparency in the use of artificial intelligence systems. Users must be able to recognize deepfake content, such as images and videos, through clearly visible labels, and they must know whether they are interacting with a person or with a system such as a chatbot. Furthermore, digital service providers will be obliged to identify content created with artificial intelligence circulating on their platforms and to automatically mark it as such. A revolution, considering the number of deepfakes that circulate, for example, on social media.

Another theme is generative artificial intelligence, systems capable of generating text, images, videos, music or other media in response to prompts, such as ChatGPT. The regulation refers to general purpose AI systems, capable of performing different tasks and trained on huge amounts of uncategorized data. Thresholds are set to classify systems based on their impact: the greater the effects on the population, the greater the obligations to be respected.

Other issues addressed include copyright protection and the general transparency of all digital content created with artificial intelligence.

The next steps of the AI Act and the criticism received from civil society

The text of the regulation has completed its political process and has now passed into the hands of the technicians, whose task is to review the rules to verify their coherence and incorporate amendments. After the publication of the definitive text, expected towards the end of next January, the Commission will set up an office for artificial intelligence. Each state will have to designate a national authority to supervise the application of the regulation, working together with the data protection authority.

It is important to underline that, to date, there is still no definitive document available for review. Everything we know was communicated in the press conference or leaked by Brussels sources.

Civil society and European digital rights networks, including EDRi, a network of NGOs and academics working on digital rights, are very cautious in celebrating the agreement reached. Italian associations share this view, including The Good Lobby, the Hermes Center and Privacy Network, which underline their concern about the multiple exceptions to the ban on the use of facial recognition in public spaces, and above all the lack of transparency during the negotiations. On a positive note, as Diletta Huyskes of Privacy Network points out, the regulation appears to include a fundamental rights impact assessment for high-risk artificial intelligence systems, but this too can only be verified once the definitive text is published.

As with other European regulations, the most problematic details will only emerge once the text is made public. During the negotiations, governments had to admit that artificial intelligence systems are increasingly used for mass surveillance, racial profiling and other harmful and invasive purposes. The agreement contains limits, but only with the definitive text will it be possible to verify its impact, especially regarding internal surveillance and borders.

It's hard to get excited about a law that, for the first time in the European Union, has taken steps to legalize real-time facial recognition. Although Parliament has fought hard to limit the damage, the overall package on biometric surveillance and profiling does not go far enough. Our fight against mass biometric surveillance is set to continue.

Ella Jakubowska, Senior Policy Advisor, EDRi

Licensed under: CC-BY-SA