REXILIENCE: YOUR PARTNER FOR AI COMPLIANCE

Rexilience supports companies that already use artificial intelligence tools, helping them ensure use that is safe, compliant and consistent with the AI Act. Find out how we can help you navigate this new regulatory landscape.

AI ACT PUBLISHED IN THE OFFICIAL JOURNAL OF THE EUROPEAN UNION

THE EUROPEAN UNION RECENTLY PUBLISHED THE LONG-AWAITED AI ACT IN THE OFFICIAL JOURNAL.

This marks a crucial moment in the regulation of artificial intelligence (AI) within the European single market. The regulation represents a major turning point, due to enter into force on 1 August 2024.

 

IMPLEMENTATION PHASES

The AI Act envisages a gradual implementation in several stages, each with specific deadlines (a brief illustrative sketch of this timeline follows the list):

– 2 February 2025: Entry into force of general provisions and prohibited practices. On this date, restrictions on a number of AI practices deemed to pose unacceptable risk by the EU will become effective, including cognitive manipulation and predictive surveillance based on biometric profiles.

– 2 August 2025: Enforcement of rules on general-purpose AI models, notifying authorities and notified bodies, as well as sanctions and governance. These provisions mainly concern providers of general-purpose AI systems, such as large language models (LLMs) and foundation models, and aim to ensure that these systems comply with transparency and security requirements.

– 2 August 2026: Entry into force of most of the other provisions of the AI Act. This phase includes all obligations for high-risk AI systems used in sectors such as biometrics, education, employment, financial services and critical infrastructure.

– 2 August 2027: Application of the provisions on AI systems classified as high risk and the corresponding obligations. These rules concern in particular AI systems that are already regulated by other European legislation, such as medical devices, machinery and radio equipment.
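
As a quick orientation aid, the sketch below encodes the phased application dates listed above as a simple Python lookup. The dates and obligation groups are taken from this article; the code is purely illustrative and not a legal reference.

```python
from datetime import date

# Illustrative timeline of the AI Act's phased application dates described above.
AI_ACT_PHASES = {
    date(2025, 2, 2): "General provisions and prohibited AI practices",
    date(2025, 8, 2): "General-purpose AI models, notifying authorities and notified bodies, sanctions and governance",
    date(2026, 8, 2): "Most remaining provisions, including obligations for high-risk AI systems (biometrics, education, employment, financial services, critical infrastructure)",
    date(2027, 8, 2): "High-risk AI systems already covered by other EU legislation (medical devices, machinery, radio equipment)",
}

def obligations_in_force(as_of: date) -> list[str]:
    """Return the obligation groups whose application date has already passed."""
    return [text for deadline, text in sorted(AI_ACT_PHASES.items()) if deadline <= as_of]

for item in obligations_in_force(date(2026, 1, 1)):
    print("-", item)
```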

KEY STANDARDS AND OBLIGATIONS

THE AI ACT ESTABLISHES RULES FOR THE DEVELOPMENT, DEPLOYMENT AND USE OF AI SYSTEMS, WITH A FOCUS ON HIGH-RISK SYSTEMS. THE OBLIGATIONS VARY ACCORDING TO THE NATURE AND PURPOSE OF THE SYSTEM AND THE ORGANISATION’S ROLE IN THE SUPPLY CHAIN, BECOMING MORE STRINGENT AS THE RISK OF THE SYSTEM INCREASES.

The main AI categories under the new regulation are listed below, followed by a brief illustrative sketch:

  1. Prohibited AI practices: These include AI systems that manipulate human behaviour in harmful ways, such as social scoring and predictive surveillance.
  2. General-purpose AI systems: These systems, which do not present systemic risks, still have to comply with certain transparency rules, while those with higher risks are subject to stricter regulations.
  3. High-risk AI systems: These must meet specific transparency, security and compliance requirements in order to be distributed in the EU market. These include systems used in sensitive areas such as education and financial services.
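
To make the tiering concrete, here is a minimal Python sketch of the three categories described above. The example systems and their assignments are hypothetical illustrations only and do not constitute a legal classification.

```python
from enum import Enum

# The three tiers described above; example mappings are hypothetical.
class RiskCategory(Enum):
    PROHIBITED = "Prohibited AI practice"
    HIGH_RISK = "High-risk AI system"
    GENERAL_PURPOSE = "General-purpose AI system"

EXAMPLE_SYSTEMS = {
    "social scoring platform": RiskCategory.PROHIBITED,
    "CV-screening tool for recruitment": RiskCategory.HIGH_RISK,
    "general-purpose chatbot without systemic risk": RiskCategory.GENERAL_PURPOSE,
}

for system, category in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {category.value}")
```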

GOVERNANCE AND SANCTIONS

To ensure proper implementation of the regulation, the AI Act provides for the creation of various governance bodies, including an AI Office within the European Commission, a scientific panel of independent experts and an AI Council with representatives of the Member States.

Violations of the AI Act can result in significant penalties, with fines of up to EUR 35 million or 7% of the company’s annual global turnover, whichever is greater. Competent authorities will have the power to access source code, training data and other documentation to assess the compliance of AI systems and, if necessary, order corrections or the withdrawal of systems from the market.
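
The penalty ceiling is simply the greater of the two amounts, which a short calculation illustrates; the turnover figure below is hypothetical.

```python
# The ceiling is the greater of EUR 35 million or 7% of annual global turnover.
def max_fine_eur(annual_global_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * annual_global_turnover_eur)

# Hypothetical company with EUR 2 billion turnover:
# 7% = EUR 140 million, which exceeds the EUR 35 million floor.
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```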

MEASURES FOR ORGANISATIONS

To prepare for the implementation of the AI Act, organisations should take the following priority actions (a minimal illustrative sketch follows the list):

  1. Create an inventory of AI systems: Detail all AI systems currently developed, deployed and used by the organisation.
  2. Conduct an applicability assessment: Assess the potential impact of the regulation on the organisation and identify relevant obligations.
  3. Conduct a gap analysis: Compare the current governance measures with the requirements of the AI Act and identify the necessary corrective actions.
  4. Implement a compliance programme: Ensure that all appropriate measures are in place by the scheduled implementation dates.
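
As an illustration of steps 1 and 3, the sketch below shows one possible way to record an AI system and derive its open gaps. The field names and example requirements are hypothetical, not a prescribed template.

```python
from dataclasses import dataclass, field

# Hypothetical record for step 1 (inventory) and step 3 (gap analysis).
@dataclass
class AISystemRecord:
    name: str
    purpose: str
    role_in_supply_chain: str       # e.g. provider, deployer, importer
    presumed_risk_category: str     # e.g. "high-risk", "general-purpose"
    requirements: dict[str, bool] = field(default_factory=dict)

    def gaps(self) -> list[str]:
        """Requirements not yet met, i.e. candidate corrective actions."""
        return [req for req, met in self.requirements.items() if not met]

record = AISystemRecord(
    name="CV-screening tool",
    purpose="Ranking job applications",
    role_in_supply_chain="deployer",
    presumed_risk_category="high-risk",
    requirements={"transparency notice": True, "human oversight": False},
)
print(record.gaps())  # ['human oversight']
```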

AI COMPLIANCE: ISO/IEC 42001

The AI Act represents a significant step forward in the regulation of artificial intelligence, with the aim of promoting the development and adoption of secure and reliable AI systems. This unique legislation will not only ensure that the fundamental rights of European citizens are respected, but will also establish a global standard for AI regulation, in line with the efforts of ISO/IEC 42001.

ISO/IEC 42001 describes a Management System for AI, providing a clear and comprehensive governance model for managing risks in the use of AI. The standard includes a set of usable controls for risk management and reinforces the focus on risks to people, communities and society. It can be applied to the design, development and use of AI systems and references several other standards useful for the proper management of AI systems, at least six of which are essential to the AI Management System.

REXILIENCE IS HERE

Rexilience is ready to support companies in bringing the use of AI in their business operations up to standard, ensuring compliance and protection in an evolving regulatory environment.