
Future-proof your organization for the EU AI Act

The EU AI Act is the most consequential AI regulation to date, impacting businesses of all sizes around the world. Start preparing now.

What is the EU AI Act?

The EU AI Act sets a global precedent in AI regulation, emphasizing human rights in the development and deployment of AI systems. While the law applies directly in EU member states, its extraterritorial reach will impact global businesses in profound ways.

Global businesses producing AI-related applications or services that either impact EU citizens or supply EU-based companies are responsible for complying with the EU AI Act.


Key Requirements of the AI Act

  • Requirement: The EU AI Act outlines certain uses of AI that are prohibited, high-risk, or subject to specific disclosure obligations. Many of the Act’s compliance obligations depend on which risk category each AI use case falls into.
  • Requirement: Under the AI Act (Article 9), high-risk AI use cases must implement a risk management program in which key risks are identified, tracked, and mitigated to the appropriate level.

    How Trustible™ Helps: Guided assessments and recommendations based on Trustible’s AI risk taxonomy help you identify which risks need to be tracked. Trustible can also recommend appropriate mitigation measures and tools, and generate mitigation evidence.
  • Requirement: The AI Act requires organizations developing or deploying AI to ensure their staff are appropriately trained on AI topics. In addition, high-risk system providers must ensure AI systems have appropriate human oversight, and that users are sufficiently trained and informed about the system.

    How Trustible™ Helps: Trustible offers best-in-class AI training modules to help educate your staff about relevant AI risks. Trustible’s content can be delivered in-app or as modules for your existing LMS, and is continually updated with the latest AI risks, best practices, and regulatory interpretations.
  • Requirement: The AI Act contains several documentation requirements, including structured technical documentation for high-risk systems, record keeping of model inputs/outputs, and incident documentation from post-market monitoring activities.

    How Trustible™ Helps: Trustible outlines what documentation you need to generate, breaks it down into which sections need to be filled out and which personas need to be involved, and leverages AI to ensure core regulatory questions are covered.
  • Requirement: The EU AI Act lays out certain transparency requirements for General Purpose AI (GPAI) models to ensure that downstream deployers are fully aware of the risks. In addition, the Act imposes stricter evaluation requirements on frontier models deemed to pose ‘systemic risk’.

    How Trustible™ Helps: Trustible’s AI model ratings evaluate which GPAI models comply with the EU AI Act’s transparency requirements so you can ensure you’re using compliant models. Trustible can also help anyone fine-tuning GPAI models produce compliant documentation for downstream users.
  • Requirement: The AI Act requires high-risk AI system providers and deployers to implement a post-market monitoring system to collect information about their product’s real-world performance and to respond to reported incidents in a timely manner.

    How Trustible™ Helps: Trustible helps you document and track reported incidents and evaluate whether any are severe enough to report to regulators. In addition, Trustible monitors and analyzes public AI incidents and can alert customers when a relevant incident may impact their business.

Risk Categories

  • Unacceptable (Prohibited Systems)

  • High (Strict Compliance Obligations)

  • Limited (Certain Transparency Obligations)

  • Minimal (Voluntary Oversight)
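For teams triaging use cases in software, the four tiers above can be captured as a simple lookup. This is an illustrative sketch only: the tier names mirror the pyramid on this page, the one-line summaries are paraphrased, and nothing here is legal advice.

```python
# Illustrative only: the EU AI Act's four risk tiers, as summarized on this
# page, mapped to their headline compliance posture. Not legal advice.
RISK_TIERS = {
    "unacceptable": "Prohibited systems: may not be placed on the EU market",
    "high": "Strict compliance obligations (risk management, documentation, oversight)",
    "limited": "Certain transparency obligations (e.g., disclosing AI interaction)",
    "minimal": "Voluntary oversight (codes of conduct encouraged)",
}

def obligations_for(tier: str) -> str:
    """Return the headline obligation summary for a risk tier name."""
    try:
        return RISK_TIERS[tier.strip().lower()]
    except KeyError:
        raise ValueError(f"Unknown risk tier: {tier!r}") from None

print(obligations_for("High"))
```

A real classification exercise, of course, requires legal analysis of each use case against the Act's definitions, not a dictionary lookup.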


Navigate the EU AI Act with Trustible™

AI Inventory

Centralize EU AI Act-required documentation in a single source of truth across AI use cases


Risk Management

Assign EU AI Act-defined Risk Levels to determine relevant requirements


Reporting

Automatically prove conformity with specific EU AI Act requirements


FAQs

  • When does the EU AI Act take effect? The EU AI Act was published in the EU’s Official Journal on July 12, 2024 and entered into force on August 1, 2024. Compliance with the first set of obligations, which covers prohibited systems and AI literacy programs, begins on February 2, 2025. The remaining compliance deadlines are staggered through August 2, 2027.

  • Which AI use cases qualify as high-risk? The EU AI Act provides a list of use cases that qualify as high-risk in Annex III of the Act. These include AI systems used for biometric identification, biometric categorization, employment and employee management, access to essential services (e.g., healthcare), access to government benefits, determining creditworthiness, access to health or life insurance, certain law enforcement activities (e.g., assessing whether a person is likely to be the victim of a crime), immigration-related activities, the administration of justice, and elections.

  • What is the compliance timeline? Compliance with the EU AI Act takes a phased approach. The provisions on prohibited AI systems come into effect 6 months after the Act enters into force, and provisions pertaining to general-purpose AI systems come into effect after 12 months. Most remaining provisions, including those for the high-risk AI systems listed in Annex III, apply 24 months after entry into force, while the rules for high-risk systems embedded in products covered by Annex I legislation apply after 36 months. In practice, compliance obligations begin in early 2025, with the Act coming into full effect by August 2027.
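The staggered timeline described in the FAQs can be sketched as a small date check. The milestone dates below reflect the commonly cited application dates following entry into force on August 1, 2024; verify them against the Official Journal text before relying on them. This is an illustration, not legal advice.

```python
from datetime import date

# Hedged sketch: commonly cited EU AI Act application milestones following
# entry into force on August 1, 2024. Verify against the Official Journal.
MILESTONES = [
    (date(2025, 2, 2), "Prohibited-practice and AI literacy provisions apply"),
    (date(2025, 8, 2), "General-purpose AI (GPAI) provisions apply"),
    (date(2026, 8, 2), "Most remaining provisions, including Annex III high-risk, apply"),
    (date(2027, 8, 2), "High-risk rules for products under Annex I legislation apply"),
]

def obligations_in_force(today: date) -> list[str]:
    """Return descriptions of milestones whose application date has passed."""
    return [desc for when, desc in MILESTONES if when <= today]

for item in obligations_in_force(date(2025, 9, 1)):
    print(item)
```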

Non-Compliance 

Failure to comply with the Act carries hefty fines.

Non-compliance with prohibitions: up to €35 million or 7% of annual worldwide turnover, whichever is higher

Non-compliance with other specific obligations (e.g., those of providers, deployers, importers, and distributors): up to €15 million or 3% of annual worldwide turnover, whichever is higher

Providing incorrect, incomplete, or misleading information: up to €7.5 million or 1.5% of annual worldwide turnover, whichever is higher

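Each fine tier caps penalties at the greater of a fixed amount and a share of worldwide annual turnover, which is straightforward arithmetic. The sketch below uses the figures listed on this page; the tier keys are invented for illustration, and nothing here is legal advice.

```python
# Illustrative arithmetic only: EU AI Act fine ceilings as listed on this page.
# Each tier caps fines at the greater of a fixed amount and a percentage of
# annual worldwide turnover. Not legal advice.
FINE_TIERS = {
    "prohibitions": (35_000_000, 0.07),
    "specific_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def fine_ceiling(tier: str, annual_turnover_eur: float) -> float:
    """Upper bound of the fine: the higher of the fixed cap or turnover share."""
    fixed_cap, pct = FINE_TIERS[tier]
    return max(fixed_cap, pct * annual_turnover_eur)

# A company with EUR 1B turnover: 7% (EUR 70M) exceeds the EUR 35M floor.
print(fine_ceiling("prohibitions", 1_000_000_000))  # -> 70000000.0
```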
