The European Artificial Intelligence Act is driving new levels of human oversight and regulatory compliance for artificial intelligence (AI) within the European Union. Like the GDPR for privacy, the EU AI Act has the potential to set the tone for upcoming AI regulations worldwide.

In early 2024, the European Parliament, representing the EU's 27 member states, endorsed the EU AI Act. The act is now making its way through the final phases of the legislative process and is expected to roll out in stages in the second half of 2024. Understanding the provisions of the EU AI Act and preparing for compliance is essential for any organization that develops, deploys or uses AI, or is planning to.

The AI Act aims to “strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects set values and rules, and harnesses the potential of AI for industrial use.”

— European Parliament News

The EU AI Act in brief

The primary focus of the EU AI Act is to strengthen regulatory compliance in the areas of risk management, data protection, quality management systems, transparency, human oversight, accuracy, robustness and cybersecurity. It aims to build transparency and accountability into how AI systems are developed and deployed, helping to ensure that AI products placed on the market are safe for individuals to use.

The EU AI Act aims to meet the challenge of developing and deploying AI responsibly across industries, including highly regulated ones such as healthcare, finance and energy. For industries providing essential services to clients, such as insurance, banking and retail, the law requires a fundamental rights impact assessment that details how the use of AI will affect the rights of customers.

The cornerstone of the EU AI Act: Safeguards to prevent unacceptable risk

The EU AI Act requires that general purpose AI models, including generative AI systems such as large language models (LLMs) and foundation models, adhere to a classification system based on systemic risk tiers. Higher risk tiers carry more transparency requirements, including model evaluation, documentation and reporting. They also involve assessing and mitigating systemic risks, reporting serious incidents and ensuring adequate protection against cybersecurity threats. In addition, these transparency requirements include maintaining up-to-date technical documentation, providing a summary of the content used for model training, and complying with European copyright laws.

The EU AI Act follows a risk-based approach, using tiers to classify the level of risk that AI systems pose to an individual’s health, safety or fundamental rights. The three tiers are:

  • Low-risk systems, such as spam filters or video games, have few requirements under the law beyond transparency obligations.
  • High-risk AI systems, such as autonomous vehicles, medical devices and critical infrastructure (water, gas, electric, etc.), require developers and users to adhere to additional regulatory requirements:
    • Implement risk management, ensure accuracy and robustness, and provide a framework for accountability that includes human oversight
    • Meet requirements for transparency to users, record keeping and technical documentation
  • Prohibited systems, with few exceptions, are those posing unacceptable risk, such as social scoring, facial recognition, emotion recognition and remote biometric identification systems in public spaces.
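The tiered structure above can be sketched as a simple lookup. This is an illustrative simplification, not a reading of the legal text: the example systems and requirement lists are drawn from this article, and the tier names and function are hypothetical.

```python
# Illustrative sketch of the EU AI Act's three risk tiers.
# Examples and requirements are simplified for demonstration only.
RISK_TIERS = {
    "prohibited": {
        "examples": ["social scoring", "remote biometric identification"],
        "requirements": ["banned, with few exceptions"],
    },
    "high": {
        "examples": ["medical devices", "autonomous vehicles",
                     "critical infrastructure"],
        "requirements": ["risk management", "human oversight",
                         "record keeping", "technical documentation"],
    },
    "low": {
        "examples": ["spam filters", "video games"],
        "requirements": ["transparency obligations"],
    },
}

def tier_for(system: str) -> str:
    """Return the risk tier whose example list contains the given system."""
    for tier, info in RISK_TIERS.items():
        if system in info["examples"]:
            return tier
    raise ValueError(f"unclassified system: {system}")

print(tier_for("spam filters"))   # low
print(tier_for("medical devices"))  # high
```

In practice, classification under the Act depends on the system's intended purpose and context of use, not a fixed list.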

The EU AI Act also imposes rules on how customers must be notified when they are interacting with a chatbot or when an emotion recognition system is used. There are additional requirements for labeling deepfakes and identifying when generative AI content is used in the media.

Not complying with the EU AI Act can be costly:  

7.5 million euros or 1.5% of a company’s total worldwide annual turnover (whichever is higher) for the supply of incorrect information. 15 million euros or 3% of a company’s total worldwide annual turnover (whichever is higher) for violations of the EU AI Act’s obligations.

— VentureBeat
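The "whichever is higher" rule quoted above can be expressed as a small calculation. The figures are those cited in the quote; the function name and the example turnover amounts are illustrative.

```python
def penalty_eur(annual_turnover_eur: float,
                flat_fine_eur: float,
                pct_of_turnover: float) -> float:
    """Return the applicable fine: the higher of a flat amount
    or a percentage of worldwide annual turnover."""
    return max(flat_fine_eur, annual_turnover_eur * pct_of_turnover)

# Supplying incorrect information: 7.5M EUR or 1.5% of turnover.
# For a company with 2B EUR turnover, 1.5% (30M EUR) is the higher figure.
print(penalty_eur(2_000_000_000, 7_500_000, 0.015))  # 30000000.0

# For a smaller company (100M EUR turnover), the 7.5M EUR flat fine applies.
print(penalty_eur(100_000_000, 7_500_000, 0.015))  # 7500000.0
```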

The EU AI Act is currently the most comprehensive legal framework for AI regulation. Governments worldwide are taking note and actively discussing how to regulate AI technology to ensure their citizens, businesses and government agencies are protected from potential risks. In addition, stakeholders from corporate boards to consumers are starting to prioritize trust, transparency, fairness and accountability when it comes to AI.

Getting ready for upcoming regulations with IBM

IBM watsonx.governance helps you accelerate responsible, transparent and explainable AI workflows

IBM® watsonx.governance™ allows you to accelerate your AI governance, the directing, managing and monitoring of your organization’s AI activities. It employs software automation to strengthen your ability to mitigate risks, manage policy requirements, and govern the lifecycle of both generative AI and predictive machine learning (ML) models.

Watsonx.governance helps to drive model transparency, explainability and documentation in three key areas:

  • Compliance — help manage AI transparency and address compliance with policies and standards. Connect data to key risk controls and use factsheets to automate the capture and reporting of model metadata in support of inquiries and audits.
  • Risk management — preset risk thresholds to help proactively detect and mitigate AI model risks. Monitor for fairness, drift, bias, performance against evaluation metrics, instances of toxic language, and protection of personally identifiable information (PII). Gain insights into organizational risk with user-based dashboards and reports.
  • Lifecycle governance — help govern both generative AI and predictive machine learning models across the lifecycle using integrated workflows and approvals. Monitor the status of use cases, in-process change requests, challenges, issues and assigned tasks.

The client is responsible for ensuring compliance with laws and regulations applicable to it. IBM does not provide legal advice or represent or warrant that its services or products will ensure that the client is in compliance with any law or regulation.
