It’s undeniable that AI is becoming ubiquitous - it’s seemingly everywhere. In the last few years, services such as ChatGPT have gone from niche research tech to household names, accessible to anyone with a smartphone. No longer is AI solely the province of cyberpunk sci-fi films (think Blade Runner and the cult favourite Voigt-Kampff test); instead, it’s discussed around dining tables the world over, crossing generational boundaries.
It took the release of foundation models, such as OpenAI’s GPT-3.5 model behind ChatGPT, to open the door to innovation on a grander scale, outside of the research labs. With access to pre-trained models, the ever-innovative tech sector fired into startup gear and started spitting out new uses for AI. Image generation, AI pair programming, and of course… chatbots. Chatbots that will sell you a new Chevy for $1. No one said it was perfect (yet).
There was no shortage of hype, ideas (good and bad), or the usual apocalyptic fear of an impending Skynet doomsday event. What was in short supply, however, was regulation. Until now.
In December 2023, the European Union announced that it had reached a provisional agreement on the basic content of the forthcoming Artificial Intelligence Act (AI Act). The legislation, known as the EU AI Act, has since been adopted and come into force.
The EU AI Act is the first of its kind, and is widely expected to have a far-reaching impact that will shape the future of AI legislation. It is a framework designed to manage how AI is developed and deployed across the EU, balancing individual security and privacy with the opportunities presented by the use of AI.
Critically, the Act applies to providers and deployers wherever they are based, if their AI systems, or the outputs of those systems, are used within the EU. This broad reach means that, in effect, all providers and deployers of AI systems will likely need to ensure compliance with the Act, much as GDPR affects businesses across the world.
The Act takes a clever approach by implementing a sliding scale of rules, depending on the level of risk each AI system poses. Some AI uses are flat-out banned, while others will face strict scrutiny, with tough requirements for governance, risk management, and transparency.
The intent is to support the inherent potential of AI, and allow us to ride that AI wave, while providing guardrails to safeguard our privacy and ensure ethical use.
A bit of history now.
The AI Act of the European Union was first introduced by the European Commission on April 21, 2021. After its introduction, the Act underwent a series of important development stages. The initial draft sparked significant debate among EU member states, industry experts, and civil society, leading to several revisions aimed at balancing innovation with regulatory measures.
Over the course of 2022 and 2023, the proposal was refined through a combination of consultations, expert advice, and negotiations in the European Parliament and Council. The European Parliament eventually approved the Act on March 13, 2024, with formal adoption following on May 21, 2024.
Throughout its 458 pages, the EU AI Act targets several key players in the AI ecosystem: providers, deployers, importers, distributors, product manufacturers, and authorised representatives. The roles of providers, deployers, and importers deserve a closer look.
Providers refer to individuals or entities responsible for creating AI systems or general-purpose AI (GPAI) models, either directly or by commissioning others. These providers then market or deploy these systems under their own name or trademark. According to the EU AI Act, an AI system is broadly defined as one that autonomously processes inputs to generate outputs—such as predictions, recommendations, decisions, or content—that can impact both physical and virtual environments. GPAI models are those that possess a high degree of generality, allowing them to perform a diverse array of tasks and integrate into various downstream AI systems. For example, a foundation model is considered a GPAI, while a chatbot or generative AI tool built on this model is classified as an AI system.
Deployers are those who utilise AI systems within their operations. For instance, a company employing a third-party AI chatbot for handling customer service enquiries would be recognised as a deployer.
Importers are defined as individuals or entities within the EU that bring AI systems developed by organisations outside the EU into the European market.
The EU AI Act also extends its grasp to providers and deployers located outside the EU area, if their AI systems or the outputs of these systems are used within the EU.
The situations here are quite varied. Many companies that provide services throughout Europe send data back to their home country for processing, then transmit the results back to Europe for delivery to the end user.
These scenarios fall squarely under the EU AI Act, and such providers must appoint authorised representatives in the EU to coordinate their compliance efforts on their behalf.
The regulations do not apply to activities related to research, development, or prototyping that occur prior to an AI system's market release. Additionally, AI systems designed specifically for military, defence, or national security purposes are exempt from these rules, regardless of the entity responsible for their development.
The AI Act establishes a consistent regulatory framework across all EU Member States, featuring a forward-looking definition of AI and a risk-based methodology:
Unacceptable Risks: Certain highly detrimental AI applications are banned due to their violation of EU values and fundamental rights. These include social scoring, AI that manipulates people or exploits their vulnerabilities, untargeted scraping of facial images to build recognition databases, emotion recognition in workplaces and schools, biometric categorisation to infer sensitive characteristics, and real-time remote biometric identification in publicly accessible spaces for law enforcement, subject to narrow exceptions.
High-Risk AI Systems: Certain AI systems deemed high-risk due to their potential impact on safety or fundamental rights are specifically outlined in the Act. These include AI used as safety components in regulated products, and AI deployed in areas such as critical infrastructure, education and vocational training, employment and worker management, access to essential public and private services, law enforcement, migration and border control, and the administration of justice.
Specific Transparency Risk: To build trust, the AI Act mandates transparency for specific AI applications where there is a risk of manipulation, such as chatbots or deepfakes. Users must be informed when interacting with AI systems.
Minimal Risk: Most AI systems can be developed and used under existing legislation without additional requirements. Providers may voluntarily adopt trustworthy AI practices and codes of conduct.
Systemic Risks: The Act also addresses risks associated with general-purpose AI models, such as large generative models. These models, which can perform various tasks, might pose systemic risks if they are powerful or widely deployed, potentially leading to significant accidents or widespread misuse, including harmful biases affecting multiple applications.
Organisations that fail to comply with the Act face serious consequences. The European Union is making it clear that penalties for violating AI regulations will be effective - and heavy. Fines for engaging in prohibited AI practices can reach €35 million or 7% of total worldwide annual turnover from the previous financial year, whichever is higher.
The European Commission also has the power to impose fines on providers of general-purpose AI models. In this case, penalties can be up to €15 million or 3% of the total global turnover from the previous year.
The AI Act introduces a sophisticated, two-tiered governance system. On one hand, national authorities are tasked with overseeing and enforcing the rules for specific AI systems within their borders. On the other hand, the EU level, through bodies like the European AI Office, takes charge of governing general-purpose AI models.
This setup means that organisations could find themselves navigating inquiries or enforcement actions from multiple national authorities at once. And unlike GDPR, where you generally deal with a single lead supervisory authority, the AI Act requires you to manage relationships with various authorities across different jurisdictions. It’s a whole new level of regulatory complexity.
The Act entered into force on August 1, 2024, and most of its provisions apply two years later, from August 2, 2026. But some parts of it will start making waves much sooner: the bans on unacceptable-risk AI practices and the AI literacy obligations apply from February 2, 2025; the rules for general-purpose AI models, along with the new governance structures, apply from August 2, 2025; and certain high-risk systems embedded in already-regulated products get an extended deadline of August 2, 2027.
So while the full framework is being progressively rolled out, certain rules are already coming into play, setting the stage for how AI will be governed in the EU.
The first step for businesses is to get informed. Understanding where your AI systems fall within the Act’s risk-based classification is crucial. Are you developing high-risk AI? Are you deploying general-purpose AI models? Or maybe you’re somewhere in the middle? The answer to these questions will determine the level of scrutiny your operations will face.
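For teams starting that exercise, it can help to keep a simple internal inventory that records, for each AI system, your role under the Act and a provisional risk tier. The sketch below shows one way such a register might look in Python; the system names and example classifications are hypothetical, and assigning a tier to a real system is ultimately a legal judgement, not something code can decide for you.

```python
# Illustrative sketch only: a starting point for an internal AI-system
# inventory, not legal advice. Tier names mirror the Act's categories,
# but the example classifications below are hypothetical.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high-risk"
    TRANSPARENCY = "specific transparency obligations"
    MINIMAL = "minimal risk"


@dataclass
class AISystem:
    name: str
    role: str        # "provider", "deployer", or "importer"
    purpose: str
    tier: RiskTier   # provisional classification, to be reviewed with counsel


# Hypothetical entries showing how an inventory might look
inventory = [
    AISystem("support-chatbot", "deployer",
             "third-party chatbot answering customer enquiries",
             RiskTier.TRANSPARENCY),
    AISystem("cv-screening", "provider",
             "ranks job applicants for interview",
             RiskTier.HIGH),
    AISystem("spam-filter", "deployer",
             "filters inbound email",
             RiskTier.MINIMAL),
]

for system in inventory:
    print(f"{system.name}: {system.role}, provisionally {system.tier.value}")
```

The value here is less in the code than in the discipline: listing every system you build or use, naming your role for each, and flagging the ones likely to attract high-risk or transparency obligations before the deadlines arrive.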
Consider seeking legal advice or consulting with AI regulation experts to ensure you’re on the right track.
Our services include comprehensive AI audits, where we assess your current systems for compliance and risk. We also offer tailored consultancy to help you design and implement the necessary changes to your AI processes, ensuring they meet the Act’s requirements.
To learn more about how we can help you, visit our AI Services Page or reach out at connect@thevirtualforge.com. We'd love to learn about your business challenges and discuss how AI can help.
Have a project in mind? No need to be shy, drop us a note and tell us how we can help realise your vision.