AI Act
The AI Act is the European Union regulation that governs artificial intelligence. It sorts AI systems by risk and places obligations on anyone who develops or deploys AI in the EU. For organisations using AI in HR, customer service, or decision-making, compliance is mandatory.
The AI Act is the European Union regulation that governs artificial intelligence. It was formally adopted in 2024, and its provisions apply in phases between 2025 and 2027. It's the first comprehensive AI law in the world and, like GDPR, has extraterritorial reach: companies outside the EU that place AI systems on the European market also have to comply.
The core of the law is a risk-based approach. AI systems are sorted into four categories, and each category gets its own rules. The greater the risk, the stricter the obligations. Systems with unacceptable risk are banned, high-risk systems carry heavy obligations, and systems with limited or minimal risk get mostly transparency requirements.
Important to understand: the AI Act doesn't decide whether you can use AI. It decides how you can use AI in specific contexts. A chatbot on your website and an AI system that decides whether someone gets a loan fall into completely different regimes.
Unacceptable risk (banned)
A number of practices are outright prohibited in the EU. Examples: social scoring of citizens by governments, real-time biometric identification in public spaces (with narrow exceptions), emotion recognition in workplaces and schools, AI that manipulates or exploits vulnerable groups. These provisions have applied since February 2025.
High risk
AI systems that significantly affect people's rights or safety. Think of AI used for recruitment and selection, performance evaluation, credit scoring, access to education, medical treatment, administration of justice, asylum procedures, and management of critical infrastructure. These systems aren't banned, but they carry very heavy obligations around risk management, data quality, human oversight, and transparency.
Limited risk (transparency obligation)
Systems that interact with people must make clear that it's AI. Chatbots must disclose, deepfakes must be labelled, emotion recognition and biometric categorisation (outside the banned domains) must be communicated to users.
Minimal risk
All other AI. Spam filters, recommendation algorithms in a webshop, AI in games. No specific obligations under the AI Act, though other regulations (GDPR, product safety, consumer law) continue to apply.
There's also a separate regime for general-purpose AI models (like LLMs) and for so-called systemic risk models (the largest models). They receive obligations around transparency about training data, safety evaluation, and reporting of serious incidents.
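The four risk tiers above can be sketched as a simple lookup. This is an illustrative sketch only: the example use cases and their mapping are assumptions for demonstration, not a legal classification, which requires analysis of the Act's prohibited-practice list and Annex III.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # heavy obligations, not banned
    LIMITED = "limited"            # mainly transparency duties
    MINIMAL = "minimal"            # no specific AI Act obligations

# Illustrative examples drawn from the categories described above.
EXAMPLE_USES = {
    "social scoring by a government": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up the risk tier for a known example use case."""
    return EXAMPLE_USES[use_case]

print(tier_for("CV screening for recruitment"))  # RiskTier.HIGH
```

The point of the sketch is the structure, not the lookup: the category a system lands in determines the entire set of obligations that follow.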
The AI Act draws a clear distinction between the roles in the AI supply chain.
Provider
The organisation that develops and places an AI system on the market. They carry the heaviest obligations, especially for high-risk systems.
Deployer
The organisation that puts an AI system to use under its own responsibility. A European bank running AI screening for loan applications is a deployer, even if it bought the model from a US vendor.
Distributor and importer
Parties in the supply chain who distribute or import AI systems. Mostly control duties: is the required labelling and documentation present?
Authorised representative
For providers outside the EU: a representative inside the EU who is addressable on their behalf.
For European organisations that don't build AI themselves but deploy it (the vast majority), the deployer regime is usually the most relevant. Don't underestimate it: as a deployer you still need to document the system in use, organise human oversight, inform end users, and report incidents where applicable.
For high-risk systems the strictest obligations apply. For providers:
A risk management system across the full lifecycle of the model.
Data governance: representative, relevant, and error-free training data, with attention to bias.
Technical documentation and logging of output.
Transparency to deployers through clear usage information.
Built-in human oversight (human-in-the-loop).
Appropriate accuracy, robustness, and cybersecurity.
Conformity assessment and CE marking before going to market.
Registration in an EU database.
For deployers of high-risk systems:
Use the system according to the provider's instructions.
Ensure human oversight by sufficiently trained staff.
Manage and log relevant input data.
Inform end users when an AI system affects them.
Carry out a fundamental rights impact assessment in certain sectors (public sector, private bodies providing public services).
Report serious incidents and malfunctions.
The AI Act formally entered into force in August 2024, but different parts apply in phases.
2 February 2025: the bans on prohibited practices apply. AI literacy obligations for staff also start to apply.
2 August 2025: rules for general-purpose AI models and systemic risk models apply. Member states must designate their competent supervisory authorities.
2 August 2026: the bulk of the regulation applies, including the rules for high-risk systems under Annex III.
2 August 2027: the last category of high-risk systems (those embedded in regulated products such as medical devices or toys) falls under the regime.
Fines go up to 35 million euros or 7% of worldwide annual turnover, whichever is higher, for the most serious violations (prohibited practices). For other violations, lower but still substantial ceilings apply.
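The "whichever is higher" mechanic is easy to miss, so here is the arithmetic as a minimal sketch (the function name and sample turnovers are illustrative):

```python
def max_fine_eur(worldwide_turnover_eur: float) -> float:
    """Ceiling for the most serious violations (prohibited practices):
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * worldwide_turnover_eur)

# For a company with EUR 200 million turnover, 7% is 14 million,
# so the flat 35 million ceiling applies.
print(max_fine_eur(200_000_000))

# For a company with EUR 1 billion turnover, 7% (70 million)
# exceeds the flat amount and becomes the ceiling.
print(max_fine_eur(1_000_000_000))
```

In other words, the flat amount acts as a floor on the ceiling: for large companies the percentage dominates, for small ones the 35 million figure does.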
A few concrete points for anyone deploying or planning AI today.
Inventory your AI use
Which AI systems are in use or planned? Who is the provider, who is the deployer? Which risk category? A simple AI register is the starting point for everything else.
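An AI register doesn't need special tooling to start with; what matters is capturing the questions above per system. A minimal sketch, with field names and the example entry being purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class AIRegisterEntry:
    """One row in a minimal AI register (field names are illustrative)."""
    system_name: str
    provider: str          # who developed the system / placed it on the market
    our_role: str          # "provider" or "deployer"
    risk_category: str     # "unacceptable", "high", "limited", "minimal"
    purpose: str
    human_oversight: bool

register = [
    AIRegisterEntry(
        system_name="CV screening tool",
        provider="ExampleVendor Inc.",  # hypothetical vendor
        our_role="deployer",
        risk_category="high",
        purpose="pre-selection of job applicants",
        human_oversight=True,
    ),
]

# High-risk entries are the ones that trigger the heavy obligations,
# so they deserve attention first.
high_risk = [e for e in register if e.risk_category == "high"]
```

A spreadsheet with the same columns works just as well; the structure, not the technology, is the point.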
Extra care for HR applications
Recruitment, selection, performance evaluation, and decisions about employment conditions are high-risk. Using AI for these purposes requires additional documentation, human oversight, and transparency towards employees.
AI literacy
The regulation requires employers to train staff who work with AI systems. A simple training on what a model can and can't do, including hallucinations and bias, is the minimum.
Vendor assessment
When procuring AI tools: ask for documentation on risk category, intended use, limitations, and human oversight. If the vendor can't deliver, your compliance is on thin ice.
Sector regulation still applies
Banks (EBA guidelines), insurers, healthcare, public bodies all have additional sector rules around AI. The AI Act sits on top of those rules, it doesn't replace them.
GDPR and the AI Act overlap and complement each other. GDPR is about personal data: can you process it, how, and on what basis? The AI Act is about AI systems: can you deploy them, how, and with what controls?
An AI system deciding on a loan application falls under both. GDPR requires a legal basis, transparency to the data subject, and the right to human intervention in automated decision-making (Article 22). The AI Act adds the high-risk obligations on top.
In practice, AI governance and privacy governance belong together. Organisations with solid GDPR compliance are better positioned to tackle the AI Act. Those still wrestling with GDPR have extra work ahead.