Europe reaches deal on Artificial Intelligence Act

Members of the European Parliament (MEPs) and the European Council have reached a political deal on the Artificial Intelligence Act, which aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high-risk AI while boosting innovation and making Europe a leader in the field. The rules establish obligations for AI based on its potential risks and level of impact.

Recognising the potential threat to citizens’ rights and democracy posed by certain applications of AI, the co-legislators agreed to prohibit the following applications:

  • Biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race).
  • Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
  • Emotion recognition in the workplace and educational institutions.
  • Social scoring based on social behaviour or personal characteristics.
  • AI systems that manipulate human behaviour to circumvent people's free will.
  • AI used to exploit people's vulnerabilities (due to their age, disability, social or economic situation).

Negotiators agreed on a series of safeguards and narrow exceptions for the use of remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorisation and for strictly defined lists of crimes. “Post-remote” RBI would be used strictly in the targeted search of a person convicted or suspected of having committed a serious crime.

“Real-time” RBI would need to comply with strict conditions, and its use would be limited in time and location, for the purposes of:

  • Targeted searches of victims (abduction, trafficking, sexual exploitation).
  • Prevention of a specific and present terrorist threat.
  • The localisation or identification of a person suspected of having committed one of the specific crimes mentioned in the regulation (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation, environmental crime).

For AI systems classified as high-risk (due to their significant potential harm to health, safety, fundamental rights, the environment, democracy and the rule of law), clear obligations were agreed upon. MEPs successfully included a mandatory fundamental rights impact assessment, among other requirements, which also applies to the insurance and banking sectors. AI systems used to influence the outcome of elections and voter behaviour are also classified as high-risk. Citizens will have a right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights.

To account for the wide range of tasks AI systems can accomplish and the quick expansion of their capabilities, it was agreed that general-purpose AI (GPAI) systems, and the GPAI models they are based on, will have to adhere to transparency requirements initially proposed by Parliament. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.

For high-impact GPAI models with systemic risk, Parliament negotiators managed to secure more stringent obligations. If these models meet specific criteria, they must conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the Commission on serious incidents, ensure cybersecurity and report on their energy efficiency. MEPs also insisted that, until harmonised EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.

MEPs wanted to ensure that businesses, especially SMEs, can develop AI solutions without undue pressure from industry giants controlling the value chain. To this end, the agreement promotes so-called regulatory sandboxes and real-world testing, established by national authorities to develop and train innovative AI before its placement on the market.

Non-compliance with the rules can lead to fines ranging from €35m or 7% of global turnover to €7.5m or 1.5% of turnover, depending on the infringement and size of the company.
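To make the penalty structure concrete, here is a minimal sketch in Python, assuming the widely reported “whichever is higher” rule for combining the fixed ceiling with the turnover-based ceiling; the exact mechanics will depend on the final legal text, and the figures below are illustrative only.

```python
# Illustrative sketch only, not the legal text: this assumes the widely
# reported "whichever is higher" rule when the fixed ceiling is combined
# with the turnover-based ceiling for a given infringement tier.

def max_fine(global_turnover_eur: float,
             fixed_cap_eur: float,
             turnover_share: float) -> float:
    """Upper bound of the fine for one infringement tier."""
    return max(fixed_cap_eur, global_turnover_eur * turnover_share)

# Example: a hypothetical firm with EUR 2bn global turnover.
turnover = 2_000_000_000
print(max_fine(turnover, 35_000_000, 0.07))   # top tier: 140000000.0
print(max_fine(turnover, 7_500_000, 0.015))   # lowest tier: 30000000.0
```

For a large company, the turnover-based ceiling will typically dominate; for smaller firms, the fixed amount sets the effective maximum.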
