On August 1, 2024, the European Union’s AI Act entered into force. This landmark law is expected to change how AI is developed and used, not only in Europe but around the world.
But what does this mean for you and your company?
Much like the EU’s General Data Protection Regulation (GDPR) did for data privacy, the AI Act reaches beyond Europe’s borders: if your company wants to provide AI products or services to the EU, you will need to comply with it.
What is the AI Act?
The AI Act is a new law from the European Union that creates the first comprehensive set of rules for regulating artificial intelligence.
- Boosting Innovation and Safety: The Act is meant to encourage innovation in the EU and help Europe stay competitive worldwide. It also sets up rules to protect people from the risks of AI by applying the same standards across all 27 EU countries. This creates fair conditions for businesses and keeps EU citizens safe.
- Risk-Based Rules: The Act classifies AI systems based on how risky they are and adjusts the rules accordingly. This means the higher the risk, the stricter the rules, ensuring responsible AI development and strong protections where they’re most needed.
- Global Impact: Even though the AI Act is focused on Europe, it will likely affect companies outside of Europe, including in the United States. American companies that develop or use AI may need to follow these rules if their products or services reach European markets or EU citizens. The EU is setting a new standard for AI that could shape global rules, so U.S. businesses will need to comply to keep operating smoothly in Europe.
What does the EU AI Act consider an “AI System”?
An “AI system” is a machine-based system that operates with some degree of autonomy, can adapt after deployment, and infers from its inputs how to generate outputs such as predictions, content, recommendations, or decisions that can influence the real world or digital environments.
The AI Act’s definition is broad and covers a lot, including different types of AI like machine learning and knowledge-based systems. Even if your technology doesn’t seem like “AI” to you, it might still fall under the AI Act’s rules.
AI Risk Categories Under the EU AI Act
Let’s summarize how the EU AI Act classifies AI systems based on their risk levels. The Act defines four risk tiers:
- Unacceptable / Prohibited – banned outright
- High-Risk – allowed, subject to strict rules and regulations
- Limited-Risk – allowed, subject to transparency obligations
- Minimal-Risk – allowed, with no additional rules
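To make the tiering concrete, here is a minimal Python sketch of the four tiers and the baseline obligation attached to each. The enum and the obligation wording are illustrative shorthand for the summaries above, not language from the Act itself.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    PROHIBITED = "unacceptable"  # banned outright
    HIGH = "high"                # strict requirements before and after launch
    LIMITED = "limited"          # transparency obligations only
    MINIMAL = "minimal"          # no additional obligations


# Hypothetical mapping from tier to the baseline obligation described above.
BASELINE_OBLIGATIONS = {
    RiskTier.PROHIBITED: "Do not develop, sell, or deploy.",
    RiskTier.HIGH: "Conformity assessment, documentation, and monitoring.",
    RiskTier.LIMITED: "Disclose to users that AI is in use.",
    RiskTier.MINIMAL: "No extra obligations under the Act.",
}

print(BASELINE_OBLIGATIONS[RiskTier.HIGH])
```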
Let’s go into detail on the Prohibited and High-Risk levels and how they will impact you.
Unacceptable / Prohibited Applications
The following AI practices are prohibited and must not be developed, sold, or used:
- The use or sale of AI systems that secretly or unfairly influence someone’s decisions, causing them to make choices they normally wouldn’t, is not allowed, especially if it results in serious harm to them or others.
- An AI system that exploits a person’s weaknesses, like their age, disability, or social/economic situation, to influence their behavior in a harmful way is not allowed, especially if it can cause serious harm to them or others.
- AI systems that judge or rank people based on their social behavior or personal traits, leading to a social score, are not allowed if they cause unfair or negative treatment in unrelated situations or if the treatment is unjustified or too harsh based on the behavior.
- AI systems that predict whether someone might commit a crime based only on their profile or personality traits are not allowed. However, law enforcement and government agencies can still use such systems to support human assessment during investigations.
- AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage are not allowed.
- Using AI systems to detect emotions in workplaces or schools is not allowed, unless the AI is being used for medical or safety purposes.
- AI systems that categorize people based on their biometric data to infer sensitive traits, like race, political views, religion, or sexual orientation, are not allowed to be sold, used, or deployed. However, this rule doesn’t apply to the labeling or filtering of legally obtained biometric data, such as images, or to law enforcement categorizing biometric data for legal purposes.
- Using “real-time” biometric systems, like facial recognition, in public spaces for law enforcement is generally not allowed. However, it can be used if it’s absolutely necessary for specific reasons, such as:
- Finding victims of abduction, human trafficking, or sexual exploitation, or locating missing persons.
- Preventing a serious and immediate threat to people’s lives or safety or stopping a terrorist attack.
- Identifying or locating someone suspected of a serious crime for investigation or prosecution, specifically crimes punishable by at least four years in prison.
High-Risk Applications
An AI system is considered high-risk if it is used as a safety component of a product, or is itself a product, covered by existing EU product-safety laws. These systems must be reviewed and approved by an independent third party before they can be sold or used.
Additional High-Risk AI systems include:
Biometrics - refers to using physical or behavioral traits to identify people, but only if allowed by EU or national laws:
- Remote biometric identification systems: These systems identify people from a distance but do not include AI systems used only to verify someone’s identity, like confirming if a person is who they say they are.
- Biometric categorization systems: These systems sort people into groups based on sensitive or protected characteristics, like race or gender, inferred from their biometric data.
- Emotion recognition systems: These AI systems are designed to detect and interpret people’s emotions.
Critical infrastructure - includes AI systems used for safety in managing and running important services, like digital networks, road traffic, or the supply of water, gas, heating, or electricity.
Human resources, education, and job training - includes AI systems used for education, HR, and admissions:
- Admission and Placement: AI systems used to decide who gets into schools or training programs and where they should be placed.
- Evaluating Learning: AI systems used to assess how well students are learning, including guiding their learning process.
- Determining Education Level: AI systems used to decide what level of education or training someone should receive or can access.
- Monitoring Tests: AI systems used to watch students during tests to catch any cheating or rule-breaking.
- Benefits and Essential Services: AI systems used to decide access to public assistance benefits, healthcare, or credit.
- Recruiting: AI systems used to find workers and filter job applications.
Who Will the AI Act Affect?
The AI Act impacts more businesses and organizations than you might expect! Here’s why:
- Global Reach: The Act isn’t limited to Europe. If your AI system affects people in the EU, the rules apply to you, no matter where your company is located.
- Entire AI Ecosystem: Whether you’re creating AI, selling it, or just using it in your business, the Act might apply to you. It covers everyone involved, from developers to end-users.
- Existing AI Systems: Even if your AI system is already in use, you might still need to comply, especially if it involves general-purpose AI (like large language models, or LLMs) or high-risk systems (like medical devices or self-driving cars).
- Major Updates Matter: If you make significant changes to an existing AI system, it’s considered new under the Act. You can’t assume older systems automatically meet the new rules.
- All Sectors Included: No matter your industry—healthcare, finance, education, or anything else—if your AI affects EU citizens, you need to pay attention to these regulations.
Who Isn’t Affected by the AI Act?
- Non-EU government authorities or international organizations: If they’re working with the EU on law enforcement or judicial matters and have proper safeguards in place, they’re not covered by the Act.
- Military and defense: AI systems used in military or defense areas are not covered by the Act, as these are outside the EU’s authority.
- Scientific research: AI systems developed and used solely for scientific research and development are exempt from the Act.
- AI in development: AI systems that are still being researched, tested, or developed aren’t affected, as long as they haven’t been released or sold yet.
- Open-source projects: Free and open-source software is usually exempt, unless it involves AI that is banned, high-risk, or needs to be transparent.
Key Requirements in the EU AI Act: A High-level Overview
The EU AI Act not only classifies AI systems but also sets rules for everyone involved with AI. Here’s what you need to know:
If you’re involved with high-risk AI systems:
- Before launching: You must complete a conformity assessment to verify that the AI meets the Act’s requirements.
- After launching: You must keep records and monitor the AI to ensure it continues to work safely.
If you’re deploying, importing, or distributing AI systems:
- Deployers: Ensure that people can oversee the AI and assess any impacts on fundamental rights.
- Importers and Distributors: Make sure the AI follows all the rules and has the proper documentation before it’s sold or used.
Even for low-risk AI systems:
- Transparency: Clearly indicate when AI is being used, such as labeling chatbots and AI-generated content (for example, deepfakes and synthetic images).
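As a rough illustration of that transparency obligation, the sketch below attaches a disclosure label to AI-generated output before it reaches a user. The function name and label wording are hypothetical; the Act requires disclosure but does not prescribe any particular mechanism.

```python
def label_ai_output(text: str, generated_by_ai: bool) -> str:
    """Prepend a disclosure notice to AI-generated content.

    The wording and placement of the notice are illustrative only;
    the Act requires that users are told when content is AI-generated.
    """
    if generated_by_ai:
        return "[AI-generated content] " + text
    return text


print(label_ai_output("Here is a summary of your claim...", generated_by_ai=True))
```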
The Act includes specific details depending on your role in the AI ecosystem. If you’re working with AI, it’s crucial to know which rules apply to you.
Simple Explanation of the EU’s AI Act and General-Purpose AI (GPAI) Models
GPAI models are AI models capable of performing a wide range of tasks and easily integrated into a variety of downstream applications.
Key obligations for GPAI models include maintaining technical documentation, respecting copyright law, and sharing information with downstream providers. For high-impact models, additional requirements like model evaluations and risk mitigation apply.
The European Union’s AI Act sets rules for general-purpose AI (GPAI) models in several key areas:
- Risk Assessment: The AI Act separates GPAI models into two groups: those that pose systemic risks and those that don’t. A model trained with very large amounts of computation (above 10^25 floating-point operations) is presumed to pose a systemic risk, and the European Commission can also designate other models as systemic risks; a sketch of this threshold check appears after this list.
- Rules for Systemic-Risk Models: For GPAI models that pose a systemic risk, stricter rules apply. Providers of these models must:
  - Identify and reduce systemic risks.
  - Conduct thorough evaluations and tests to check for vulnerabilities.
  - Report serious incidents that occur.
  - Ensure strong cybersecurity measures are in place.
- Transparency: All GPAI models must be documented clearly enough that downstream providers can understand their capabilities and limitations and manage risks effectively.
- Timelines: The rules for GPAI models apply from August 2, 2025, twelve months after the Act entered into force. Providers of GPAI models already on the market before that date have 36 months from entry into force, until August 2, 2027, to comply.
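To make the compute presumption concrete, here is a minimal Python sketch of the 10^25 floating-point-operation threshold check mentioned above. The constant and function names are our own, and the Commission can adjust the threshold over time, so treat this as an illustration rather than a compliance tool.

```python
# Training-compute threshold above which a GPAI model is presumed to pose
# systemic risk under the AI Act (the Commission may adjust this figure).
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25


def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if training compute triggers the systemic-risk presumption."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD


# A model trained with roughly 5 x 10^25 FLOPs would be presumed to pose
# systemic risk and face the stricter obligations listed above.
print(presumed_systemic_risk(5e25))  # True
```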
EU AI Act Implementation Schedule: Key Dates to Remember
Here’s a simple timeline and deadlines for the EU AI Act:
- August 1, 2024: The AI Act officially became law.
- February 2, 2025: Certain AI practices will be banned.
- August 2, 2025: New rules for general-purpose AI models will start.
- August 2, 2026: Rules for high-risk AI systems and transparency requirements for other AI systems will take effect.
- August 2, 2027: Additional rules for high-risk AI systems must be followed, especially for those already in use before August 2025.
- December 31, 2030: Final deadline for high-risk AI systems used by public authorities to comply with the law if they were already on the market before the Act became official.
What Happens if You Don’t Follow the EU AI Act? You pay….
If your company doesn’t follow the rules of the EU AI Act, here’s what could happen:
- Serious Violations: For major breaches, like breaking the Act’s strict bans, you could be fined up to €35m ($39m USD) or 7% of your worldwide annual turnover, whichever is higher.
- High-Risk Violations: If you don’t meet the rules for high-risk or general-purpose AI systems, the fines could reach up to €15m ($16.5m USD) or 3% of your worldwide annual turnover.
- Minor Violations: Even for smaller issues, like giving incorrect information to authorities, you could be fined up to €7.5m ($8.3m USD) or 1% of your worldwide annual turnover.
For small and medium-sized enterprises, the fine is capped at the lower of the two amounts, but these fines are designed to make sure every company working with AI in the EU takes the rules seriously.
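Because each penalty tier is “whichever is higher” of a fixed cap and a share of turnover, while SMEs pay whichever is lower, the arithmetic is easy to sketch. The function below is a minimal illustration using the figures from the tiers above; it is not legal or financial advice.

```python
def fine_ceiling_eur(fixed_cap_eur: float, turnover_pct: float,
                     worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum fine: the higher of the fixed cap and the turnover share,
    or the lower of the two for small and medium-sized enterprises."""
    turnover_amount = worldwide_turnover_eur * turnover_pct
    if is_sme:
        return min(fixed_cap_eur, turnover_amount)
    return max(fixed_cap_eur, turnover_amount)


# A serious violation (EUR 35m or 7%) for a company with EUR 1 billion
# in worldwide annual turnover: the 7% share (EUR 70m) is the ceiling.
print(fine_ceiling_eur(35_000_000, 0.07, 1_000_000_000))  # 70000000.0
```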
Understanding the EU AI Act: What It Means for Your Business
We’ve discussed the key points of the EU AI Act, but this is just a summary. If the Act affects your business, it’s essential to review the full details. For regulations this complex, it’s best to refer to the original documentation at https://artificialintelligenceact.eu/.
The EU AI Act will significantly affect AI in Europe, the USA, and globally. Whether your business is involved in developing, marketing, or utilizing AI, it’s crucial to grasp the varying risk levels and the steps needed to comply, especially if your products or services are available to European customers.
Although the Act may seem complicated, remember that it’s being introduced step by step. The key is to start getting ready now: check your AI systems, understand your responsibilities, and make a plan to comply.
At JHG Consulting, we’re here to help you navigate these new challenges, ensuring you understand and comply with the requirements of the EU AI Act. Together, we can build a future where AI is both powerful and secure.