Article · 10 Sep 2024

EU AI Act - Everything You Need to Know

AI is everywhere. Every day, new applications spring up around the world, transforming people's lives. What has been in short supply, however, is regulation. Until now. Learn more about the new EU AI Act.
Christian Backlund | 7 min read

It’s undeniable that AI is becoming ubiquitous - it’s seemingly everywhere. In the last few years, services such as ChatGPT have gone from niche research tech to household names, accessible to anyone with a smartphone. No longer is AI solely the province of cyberpunk sci-fi films (think Blade Runner and its cult-favourite Voight-Kampff test); instead, it’s discussed around dining tables the world over, crossing generational boundaries.

It took the release of foundation models, such as OpenAI’s GPT-3.5 (the model behind ChatGPT), to open the door to innovation on a grander scale, outside of the research labs. With access to pre-trained models, the ever-innovative tech sector fired into startup gear and started spitting out new uses for AI. Image generation, AI pair programming, and of course… chatbots. Chatbots that will sell you a new Chevy for $1. No one said it was perfect (yet).

There was no shortage of hype, ideas (good and bad), or the usual apocalyptic fear of an impending Skynet doomsday event. What was in short supply, however, was regulation. Until now. 

In December 2023, the European Union announced that it had reached a provisional agreement on the core content of the forthcoming Artificial Intelligence Act. The legislation, known as the EU AI Act, has since been published and entered into force.

What is the EU AI Act? A brief overview for everyone!

The EU AI Act is the first of its kind, and is widely expected to have a far-reaching impact that will shape the future of AI legislation. It is a framework designed to manage how AI is developed and deployed across the EU, balancing individual security and privacy with the opportunities presented by the use of AI. 

Critically, the Act applies to providers and deployers wherever they are based, so long as their AI systems, or the outputs of those systems, are used within the EU. This broad reach means that, in effect, most providers and deployers of AI systems will need to ensure compliance with the Act, much as GDPR affects businesses across the world.

The Act takes a clever approach by implementing a sliding scale of rules, depending on the level of risk each AI system poses. Some AI uses are flat-out banned, while others will face strict scrutiny, with tough requirements for governance, risk management, and transparency.

The intent is to support the inherent potential of AI, and allow us to ride that AI wave, while providing guardrails to safeguard our privacy and ensure ethical use.

A bit of history now.

The AI Act of the European Union was first introduced by the European Commission on April 21, 2021. After its introduction, the Act underwent a series of important development stages. The initial draft sparked significant debate among EU member states, industry experts, and civil society, leading to several revisions aimed at balancing innovation with regulatory measures. 

Over the course of 2022 and 2023, the proposal was refined through a combination of consultations, expert advice, and negotiations in the European Parliament and Council. The European Parliament eventually approved the Act on March 13, 2024, with formal adoption following on May 21, 2024.

Who is the target audience of the EU AI Act?

Throughout its 458 pages, the EU AI Act targets several key players in the AI ecosystem: providers, deployers, importers, distributors, product manufacturers, and authorised representatives. Three of these roles - providers, deployers, and importers - are outlined below.

Providers

Providers refer to individuals or entities responsible for creating AI systems or general-purpose AI (GPAI) models, either directly or by commissioning others. These providers then market or deploy these systems under their own name or trademark. According to the EU AI Act, an AI system is broadly defined as one that autonomously processes inputs to generate outputs—such as predictions, recommendations, decisions, or content—that can impact both physical and virtual environments. GPAI models are those that possess a high degree of generality, allowing them to perform a diverse array of tasks and integrate into various downstream AI systems. For example, a foundation model is considered a GPAI, while a chatbot or generative AI tool built on this model is classified as an AI system.
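
To make the layering concrete, here is a minimal sketch in Python of how a GPAI model relates to an AI system built on top of it. The names and fields are hypothetical illustrations, not the Act's formal definitions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class GPAIModel:
    # A general-purpose AI model, e.g. a foundation model: defined by its
    # generality, so it can be integrated into many downstream systems.
    name: str
    provider: str

@dataclass
class AISystem:
    # A system that processes inputs into outputs (predictions,
    # recommendations, decisions, content) for a specific purpose.
    name: str
    purpose: str
    built_on: Optional[GPAIModel] = None  # may wrap a GPAI model

# A foundation model is a GPAI model...
foundation = GPAIModel(name="ExampleFM", provider="ExampleLab")
# ...while a chatbot built on it is an AI system in its own right.
chatbot = AISystem(name="SupportBot", purpose="customer service",
                   built_on=foundation)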

Deployers

Deployers are those who utilise AI systems within their operations. For instance, a company employing a third-party AI chatbot to handle customer service enquiries would be recognised as a deployer.

Importers

Importers are defined as individuals or entities within the EU that bring AI systems developed by organisations outside the EU into the European market.

What about the scope outside the EU?

The EU AI Act also extends its grasp to providers and deployers located outside the EU area, if their AI systems or the outputs of these systems are used within the EU.

The situations here are quite varied. Many companies that provide services throughout Europe actually send data back to their home country for processing, and then transmit the results back to Europe to be delivered to the end user.

Such scenarios fall squarely under the EU AI Act, and these providers must appoint authorised representatives in the EU to coordinate compliance efforts on their behalf.

Are there any exceptions?

The regulations do not apply to activities related to research, development, or prototyping that occur prior to an AI system's market release. Additionally, AI systems designed specifically for military, defence, or national security purposes are exempt from these rules, regardless of the entity responsible for their development.

EU AI Act Risk-Based Classifications

The AI Act establishes a consistent regulatory framework across all EU Member States, featuring a forward-looking definition of AI and a risk-based methodology (a first-pass triage sketch in code follows the tiers below):

Unacceptable Risks: Certain highly detrimental AI applications are banned due to their violation of EU values and fundamental rights. These include:

  • Exploiting individual vulnerabilities, using manipulative or subliminal techniques.
  • Social scoring by both public and private entities.
  • Predictive policing based solely on profiling.
  • Unrestricted collection of facial images from the internet or CCTV for database expansion.
  • Emotion recognition in workplaces and educational institutions, except for medical or safety purposes (e.g., monitoring pilot fatigue).
  • Biometric categorisation to infer personal attributes like race, political beliefs, or sexual orientation. However, labelling and categorising data for law enforcement remains permissible.
  • Real-time remote biometric identification in public spaces by law enforcement, with limited exceptions.

High-Risk AI Systems: Certain AI systems deemed high-risk due to their potential impact on safety or fundamental rights are specifically outlined in the Act. These include:

  • AI systems that determine eligibility for medical treatments, employment, or loans.
  • AI used for police profiling or crime risk assessment, unless banned under Article 5.
  • AI operating robots, drones, or medical devices.

The Act also treats AI systems subject to third-party conformity assessments under sector-specific regulations as high-risk.

Specific Transparency Risk: To build trust, the AI Act mandates transparency for specific AI applications where there is a risk of manipulation, such as chatbots or deep fakes. Users must be informed when interacting with AI systems.

Minimal Risk: Most AI systems can be developed and used under existing legislation without additional requirements. Providers may voluntarily adopt trustworthy AI practices and codes of conduct.

Systemic Risks: The Act also addresses risks associated with general-purpose AI models, such as large generative models. These models, which can perform various tasks, might pose systemic risks if they are powerful or widely deployed, potentially leading to significant accidents or widespread misuse, including harmful biases affecting multiple applications.
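
Taken together, the tiers form a decision ladder. Purely as an illustration, a first-pass triage in Python might look like the sketch below; the keyword buckets are hypothetical, and real classification turns on Article 5, the Annexes, and sector rules, which require legal analysis rather than string matching.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict governance, risk management, and conformity duties"
    TRANSPARENCY = "disclosure duties (e.g. chatbots, deep fakes)"
    MINIMAL = "no extra obligations; voluntary codes of conduct"

# Illustrative keyword buckets only -- not the Act's legal tests.
PROHIBITED = {"social scoring", "subliminal manipulation",
              "untargeted facial image scraping"}
HIGH_RISK = {"credit scoring", "recruitment screening", "medical triage"}
DISCLOSURE = {"customer service chatbot", "deep fake generation"}

def triage(use_case: str) -> RiskTier:
    # First-pass triage of a described use case into a provisional tier.
    if use_case in PROHIBITED:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK:
        return RiskTier.HIGH
    if use_case in DISCLOSURE:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

print(triage("recruitment screening").value)
# -> strict governance, risk management, and conformity duties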

What are the consequences of non-compliance?

Organisations that fail to comply with the Act face serious consequences. The European Union is making it clear that penalties for violating AI regulations will be effective - and heavy.

  • For serious violations of banned practices or failure to comply with data requirements, fines can reach up to €35 million or 7% of the total global turnover from the previous year—whichever is higher.
  • For other non-compliance issues under the Regulation, the maximum penalty is €15 million or 3% of the total global turnover from the previous year.
  • Providing false, incomplete, or misleading information to authorities can lead to fines up to €7.5 million or 1.5% of the total global turnover from the previous year.
  • For small and medium-sized enterprises (SMEs), the fines will be capped at the lower end of these thresholds, while larger companies face the higher amounts.

The European Commission also has the power to impose fines on providers of general-purpose AI models. In this case, penalties can be up to €15 million or 3% of the total global turnover from the previous year.
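
The "whichever is higher" mechanic is easy to see with two worked numbers. Below is a minimal sketch; the turnover figures are made up for illustration, and none of this is legal advice.

def fine_ceiling(turnover_eur: float, fixed_cap_eur: float,
                 turnover_pct: float) -> float:
    # The ceiling is the higher of the fixed cap and the percentage of
    # the previous year's total global turnover.
    return max(fixed_cap_eur, turnover_eur * turnover_pct)

# Prohibited-practice tier: EUR 35m or 7% of turnover, whichever is higher.
print(fine_ceiling(2_000_000_000, 35_000_000, 0.07))  # 140000000.0 (7% wins)
print(fine_ceiling(100_000_000, 35_000_000, 0.07))    # 35000000.0 (cap wins)
# Note: for SMEs the Act caps fines at the lower of the two amounts instead.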

The governance behind the machine

The AI Act introduces a sophisticated, two-tiered governance system. On one hand, national authorities are tasked with overseeing and enforcing the rules for specific AI systems within their borders. On the other hand, the EU level, through bodies like the European AI Office, takes charge of governing general-purpose AI models.

This setup means that organisations could find themselves navigating inquiries or enforcement actions from multiple national authorities at once. And unlike GDPR, where you generally deal with a single lead supervisory authority, the AI Act requires you to manage relationships with various authorities across different jurisdictions. It’s a whole new level of regulatory complexity.

The countdown to compliance

The Act entered into force on August 1, 2024, and most of its provisions apply two years later, on August 2, 2026. But some parts of it will start making waves much sooner (a date sketch follows the list):

  • The rules around prohibitions, definitions, and AI literacy apply just six months in, on February 2, 2025.
  • Next, the governance rules and obligations for general-purpose AI take effect 12 months after the Act comes into force, on August 2, 2025.
  • Finally, obligations for high-risk AI systems embedded in regulated products (Annex I) apply 36 months after the Act’s entry into force, kicking in on August 2, 2027.
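
All of these milestones hang off a single anchor date. Here is a quick sketch of the schedule; the dates are those set out in the Act's final provisions, with entry into force on August 1, 2024.

from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

# Application milestones and what starts to apply at each one.
MILESTONES = {
    date(2025, 2, 2): "prohibitions, definitions, AI literacy",
    date(2025, 8, 2): "GPAI obligations and governance rules",
    date(2026, 8, 2): "general application of the Act",
    date(2027, 8, 2): "high-risk systems embedded in Annex I products",
}

for day, scope in sorted(MILESTONES.items()):
    months = ((day.year - ENTRY_INTO_FORCE.year) * 12
              + (day.month - ENTRY_INTO_FORCE.month))
    print(f"{day}: {scope} ({months} months in)")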

So while the full framework is being progressively rolled out, certain rules are already coming into play, setting the stage for how AI will be governed in the EU.

What do you need to do?

The first step for businesses is to get informed. Understanding where your AI systems fall within the Act’s risk-based classification is crucial. Are you developing high-risk AI? Are you deploying general-purpose AI models? Or maybe you’re somewhere in the middle? The answers to these questions will determine the level of scrutiny your operations will face.

  • Evaluate Your AI Use and Risk Exposure: Begin by identifying all AI systems or models in use within your organisation. Assess the potential risks they pose, especially in light of the EU AI Act’s classifications (a minimal inventory sketch follows this list).
  • Understand Compliance Obligations: Familiarise yourself with the specific obligations tied to the risk levels of your AI systems. Each classification—from minimal to high-risk—comes with its own set of requirements that you’ll need to meet.
  • Ensure Data Transparency: Conduct a thorough review of your data practices. Ensure that your datasets are transparent and well-documented, leveraging available tools and platforms to support this transparency.
  • Develop Ethical Guidelines: Draft internal guidelines to standardise ethical AI use across your organisation.
  • Enhance AI Competence: Invest in upskilling your team on AI-related knowledge and skills.
  • Engage with EU Regulatory Bodies: Stay proactive by working with the European Commission and the AI Office.
  • Collaborate with National Governments: Maintain close communication with your headquarters and governmental representatives. Engage with the AI Board to monitor developments and influence policy decisions that impact your operations.
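
For the first two items on this list, a lightweight inventory is a practical starting artefact: one record per system, noting your role under the Act and a provisional tier pending proper legal review. A minimal sketch, with hypothetical systems and obligations:

from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    # One AI system or model in use within the organisation.
    system: str
    role: str               # "provider", "deployer", or "importer"
    use_case: str
    provisional_tier: str   # from first-pass triage, pending legal review
    obligations: list[str] = field(default_factory=list)

inventory = [
    AIInventoryEntry("SupportBot", "deployer", "customer service chatbot",
                     "transparency",
                     obligations=["tell users they are interacting with AI"]),
    AIInventoryEntry("CVScreen", "deployer", "recruitment screening",
                     "high",
                     obligations=["human oversight", "risk management",
                                  "data governance documentation"]),
]

for e in inventory:
    print(f"{e.system} ({e.role}, {e.provisional_tier}): "
          f"{'; '.join(e.obligations)}")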

Consider seeking legal advice or consulting with AI regulation experts to ensure you’re on the right track.

How can The Virtual Forge help?

Our services include comprehensive AI audits, where we assess your current systems for compliance and risk. We also offer tailored consultancy to help you design and implement the necessary changes to your AI processes, ensuring they meet the Act’s requirements. 

To learn more about how we can help you, visit our AI Services Page or reach out at connect@thevirtualforge.com. We'd love to learn about your business challenges and discuss how AI can help.
