The AI Act is the first law in the world to regulate the use of AI. The legislation aims to steer the development of this technology, which seems destined to evolve like no other and to decisively shape humanity's economic and social growth, away from its most dangerous drifts and to mitigate its risks.

After a very tough negotiation that pitted the European Union's institutions and individual member states against one another, a historic agreement was reached: the AI Act sets the first limits to prevent mass surveillance, curb the proliferation of deepfakes, and protect copyright.

The use of AI for law enforcement purposes and the threat of mass surveillance

The most controversial and divisive issue has been the use of AI by law enforcement agencies. These could exploit real-time biometric identification and go as far as predictive policing: using algorithms to predict the likelihood of a crime, the possible offender, and the place where it might occur.

The AI Act regulates biometric identification, which is banned except in three cases: a foreseeable and evident threat of a terrorist attack; the search for victims; and the prosecution of serious crimes. The use of AI to evaluate individuals on the basis of sensitive data and personal characteristics, including political and religious beliefs and sexual orientation, among others, is also prohibited.

In addition, since the European law aims to protect citizens' dignity and freedom as fully as possible, as well as to safeguard their privacy and prevent mass surveillance, systems that recognize emotions or employ techniques designed to manipulate human behavior will also be banned.

Controlling Generative AI

The AI Act also regulates "generative" AI, which creates textual and visual content, to curb the uncontrolled spread of fake content and to protect content creators through copyright enforcement.

The new legislation makes digital watermarking mandatory: developers must embed a marker that signals when content is AI generated. To actively protect copyright, it will also be forbidden to use content to train advanced chatbots, such as ChatGPT or Gemini, if the rights holder explicitly opts out. The use of content already employed to train algorithms will have to be transparent, and tech companies will be required to provide detailed summaries of the material they use.
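The Act leaves the technical details of this labeling to developers and forthcoming standards. As a purely illustrative sketch, the Python snippet below shows one way a provider might attach a machine-readable disclosure to generated output; this is a simple metadata tag rather than a robust statistical watermark, and the generate_text() function and field names are hypothetical assumptions, not part of any real API or of the regulation itself.

```python
# Illustrative sketch only: the AI Act does not prescribe a specific
# watermarking mechanism. generate_text() and the metadata field names
# below are hypothetical, not a real library or regulatory schema.
import json

AI_DISCLOSURE = "This content was generated by an AI system."

def generate_text(prompt: str) -> str:
    # Placeholder for a real model call.
    return f"Generated response to: {prompt}"

def generate_with_disclosure(prompt: str) -> dict:
    """Wrap model output with a machine-readable AI-generated marker."""
    return {
        "content": generate_text(prompt),
        "metadata": {
            "ai_generated": True,          # disclosure flag
            "disclosure": AI_DISCLOSURE,   # human-readable notice
        },
    }

if __name__ == "__main__":
    print(json.dumps(generate_with_disclosure("Hello"), indent=2))
```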

A new European body

This new transnational regulatory instrument also establishes the AI Office, a new European body in Brussels with autonomous financial and technical resources to monitor compliance. Alongside this institution, each country will be able to set up its own independent national authority or entrust AI oversight to an existing one.

Enforcement in 24 months

The AI Act is expected to become fully operational within 24 months, but the most dangerous uses will already be banned within the first six, and individual countries will be able to accelerate the implementation of some of the bans.

Within this time frame, a voluntary compliance scheme, the AI Pact, will allow companies to align with the AI Act before it becomes fully operational. This incentive is motivated by the severity of the penalties for violations, which can be very high: up to 35 million euros or 7 percent of global turnover for the most serious infractions. To encourage innovation, however, exceptions allow small and medium-sized enterprises to create test environments exempt from the rules (regulatory sandboxes).

Thanks to this new approach to AI development, if large providers such as Google, Meta, or Microsoft want to continue selling their services to citizens and businesses in the European Union, they will have to guarantee and certify the quality and transparency of their algorithms and data, not least because the legislation gives citizens the right to file complaints about decisions made by high-risk AI systems.