AI in Recruitment: Why Your HR Tool Is Suddenly High-Risk
AI Act and HR: AI in recruitment, application screening and performance evaluation is classified as high-risk. What this means for employers — and why Equal Pay makes it more complex.
AI in human resources is already an everyday reality. Pre-filtering applications, optimising job ads, evaluating performance, benchmarking salaries: in many companies, an algorithm handles all of this. Efficient, fast, scalable.
And from now on: high-risk.
The EU AI Act explicitly classifies AI systems in the area of employment and workforce management as high-risk (Annex III, point 4). This means: the strictest requirements, comprehensive documentation, human oversight — and serious penalties for non-compliance. For companies that must simultaneously implement the EU Pay Transparency Directive, this creates a dual compliance challenge.
This article explains which HR systems are affected, what employers face, and how to address both regulations together.
What exactly is "high-risk" in HR?
Annex III of the AI Act lists eight areas where AI systems are considered high-risk. Point 4 — "Employment, workers' management and access to self-employment" — is the most important for employers. It covers two categories:
Category (a): Recruitment and selection
- AI for targeted job advertisements (e.g. algorithmic distribution on LinkedIn)
- AI for analysing and filtering applications (CV screening)
- AI for evaluating candidates (assessment tools, video interview analysis)
Category (b): Employment decisions and monitoring
- AI for promotion and termination decisions
- AI for task allocation based on individual behaviour or personality traits
- AI for monitoring and evaluating performance and behaviour
Important: this is not limited to specialised HR software. If you use ChatGPT to evaluate CVs, or an AI-assisted spreadsheet for pay analysis, that use may also qualify as high-risk if it influences decisions about people.
Are there exceptions?
Yes — but they are narrow. Art. 6(3) provides four exceptions where a system listed in Annex III is not considered high-risk:
- It performs only a narrow procedural task (e.g. automated interview scheduling)
- It improves a previously completed human result (e.g. spell-checking job ads)
- It detects patterns or deviations but does not replace the human decision
- It performs a preparatory task for a human assessment
But watch out: As soon as a system performs profiling — i.e. automatically processes personal data to evaluate aspects such as work performance, reliability, or behaviour — it is always high-risk. No exception.
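The triage the law prescribes can be written down as a short decision routine. Below is a minimal sketch in Python; the field names and the example system are invented for illustration, and a real assessment of borderline cases belongs in documented legal review, not in code:

```python
from dataclasses import dataclass

@dataclass
class HrAiSystem:
    """One HR tool under assessment. Field names are invented for illustration."""
    name: str
    listed_in_annex_iii_4: bool          # recruitment, selection, promotion, monitoring ...
    performs_profiling: bool             # automated evaluation of personal aspects
    narrow_procedural_task: bool = False           # e.g. automated interview scheduling
    improves_completed_human_result: bool = False  # e.g. spell-checking job ads
    detects_patterns_only: bool = False            # does not replace the human decision
    preparatory_task_only: bool = False            # feeds a later human assessment

def is_high_risk(system: HrAiSystem) -> bool:
    """Simplified triage per Annex III point 4 and Art. 6(3). Not legal advice."""
    if not system.listed_in_annex_iii_4:
        return False  # outside Annex III point 4 entirely
    if system.performs_profiling:
        return True   # profiling overrides every Art. 6(3) exception
    exception_applies = (
        system.narrow_procedural_task
        or system.improves_completed_human_result
        or system.detects_patterns_only
        or system.preparatory_task_only
    )
    return not exception_applies

# A CV screener that profiles candidates is always high-risk:
print(is_high_risk(HrAiSystem("CV screening assistant", True, performs_profiling=True)))
```

The order of the checks matters: profiling is tested before any exception, because profiling takes the exceptions off the table.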
What must employers do as deployers?
The obligations for deployers of high-risk AI in HR are comprehensive (Art. 26). Here are the key requirements:
1. Use according to instructions
You may only use the system as the provider intended. If a tool is designed for "support in pre-selection," you may not use it for automated rejection decisions.
2. Human oversight
You must assign natural persons to oversee the system — and they must have the competence, training, and authority to understand, question, and override AI decisions. This is not a formality: the AI Act explicitly names the risk of "automation bias" — the tendency to uncritically accept AI outputs.
3. Inform workers
Before deployment, workers' representatives and affected employees must be informed. In Germany: the works council. In Luxembourg: the employee delegation. This is not a recommendation — it's a legal obligation.
4. Retain logs
Automatically generated logs must be retained for at least 6 months. In the event of an audit or legal dispute, you must be able to demonstrate how the system operated.
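Operationally, this means any log housekeeping job must treat six months as a hard floor, whatever shorter cleanup windows you use elsewhere. A minimal sketch, assuming a hypothetical directory of log files:

```python
import time
from pathlib import Path

# Art. 26(6): logs must be kept for at least six months, longer where other
# EU or national law (e.g. the GDPR) requires it. Paths here are hypothetical.
MIN_RETENTION_SECONDS = 183 * 24 * 3600  # roughly six months

def purge_expired_logs(log_dir: Path, retention_seconds: int) -> None:
    """Delete logs older than the retention window, never undercutting the minimum."""
    if not log_dir.is_dir():
        return
    retention = max(retention_seconds, MIN_RETENTION_SECONDS)
    cutoff = time.time() - retention
    for log_file in log_dir.glob("*.log"):
        if log_file.stat().st_mtime < cutoff:
            log_file.unlink()

# Even if someone configures a 30-day window, six months is enforced as the floor.
purge_expired_logs(Path("/var/log/hr-ai-system"), retention_seconds=30 * 24 * 3600)
```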
5. Monitoring and reporting
You must monitor the system's operation. If you identify risks, you must immediately suspend use and inform the provider and authorities.
6. Data protection impact assessment
For high-risk AI in HR, a DPIA under Art. 35 GDPR is generally required. The AI Act and GDPR interlock here.
The double trap: AI Act + Pay Transparency
This is where it gets particularly relevant for many companies: those using AI for pay analysis, compensation benchmarking, or performance evaluations face dual regulation:
- The EU AI Act classifies AI in employment decisions as high-risk
- The EU Pay Transparency Directive (deadline: 7 June 2026) requires transparent, gender-neutral pay criteria and gap assessments
If your compensation system relies on AI-supported analysis, you must simultaneously demonstrate that:
- The AI operates correctly and without discrimination (AI Act)
- Pay criteria are gender-neutral and transparent (Pay Transparency)
- Human oversight is ensured (both regulations)
- Affected individuals are informed (both regulations)
The good news: many measures address both requirements simultaneously. A gender-neutral job evaluation system with documented criteria satisfies both the transparency requirements of the directive and the explainability requirements of the AI Act.
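To make that overlap concrete: the pay-gap side is plain arithmetic and should be reproducible on demand. A minimal sketch of a per-category gender pay gap check; the figures and category names are invented, and the 5% flag reflects the directive's trigger for a joint pay assessment in simplified form:

```python
from statistics import mean

# Hypothetical figures; in practice these come from your payroll system.
salaries = {
    "Engineer II": {"f": [58_000, 61_000, 59_500], "m": [64_000, 66_000]},
    "HR Officer": {"f": [48_000, 49_000], "m": [48_500]},
}

def gender_pay_gap(female: list[float], male: list[float]) -> float:
    """Gap in average pay as a share of the male average (positive: men paid more)."""
    f_avg, m_avg = mean(female), mean(male)
    return (m_avg - f_avg) / m_avg

for category, pay in salaries.items():
    gap = gender_pay_gap(pay["f"], pay["m"])
    # A gap of 5% or more in a category that cannot be justified by objective,
    # gender-neutral criteria triggers a joint pay assessment (simplified).
    flag = "REVIEW" if abs(gap) >= 0.05 else "ok"
    print(f"{category}: {gap:+.1%} -> {flag}")
```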
Emotion recognition: the red line
One special case deserves particular attention: emotion recognition in the workplace is banned (Art. 5(1)(f)). Since February 2025.
This covers any system that infers emotions or intentions of employees based on biometric data — whether facial recognition, voice analysis, or body language evaluation. Some HR tools and video interview platforms use such technologies. Check your deployed systems.
The only exceptions are medical reasons (e.g. fatigue detection for professional drivers) and safety reasons.
What you should do now — a checklist
- HR AI inventory: List all AI systems used in your HR department, including the "small" tools (one way to structure such a register is sketched after this list).
- High-risk check: For each system, ask: Does it fall under Annex III point 4? Does an exception apply? Does it perform profiling?
- Check your provider: Does the provider meet their obligations? (Technical documentation, CE marking, EU database registration)
- Set up human oversight: Appoint competent individuals. Train them. Document their authority.
- Inform works council / employee delegation: Now — not after the system is already running.
- Check for emotion recognition: Do any of your systems use emotion recognition? If so: discontinue immediately.
- Conduct a DPIA: Create or update a data protection impact assessment for each HR high-risk system.
- Link to Equal Pay: If AI is used in compensation — address both regulations together.
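One way to keep this checklist from living in scattered spreadsheets is a machine-readable register with one entry per system. A sketch with illustrative fields only; a real register would follow your own documentation standards:

```python
from dataclasses import dataclass

@dataclass
class InventoryEntry:
    """One row in an HR AI register, mirroring the checklist above. Fields illustrative."""
    system: str
    vendor: str
    high_risk: bool
    provider_docs_verified: bool = False    # technical documentation, CE marking, EU database
    oversight_owner: str = ""               # named, trained person with authority to override
    works_council_informed: bool = False
    dpia_done: bool = False
    uses_emotion_recognition: bool = False  # if True: discontinue (Art. 5(1)(f))
    used_in_pay_decisions: bool = False     # if True: also in scope of Pay Transparency

def open_actions(entry: InventoryEntry) -> list[str]:
    """Outstanding checklist items for one system."""
    actions = []
    if entry.uses_emotion_recognition:
        actions.append("discontinue immediately (banned practice)")
    if entry.high_risk:
        if not entry.provider_docs_verified:
            actions.append("verify provider obligations")
        if not entry.oversight_owner:
            actions.append("appoint and train human oversight")
        if not entry.works_council_informed:
            actions.append("inform works council / employee delegation")
        if not entry.dpia_done:
            actions.append("conduct or update DPIA")
    if entry.used_in_pay_decisions:
        actions.append("align with Pay Transparency measures")
    return actions

tool = InventoryEntry(system="Video interview analysis", vendor="ExampleVendor",
                      high_risk=True, uses_emotion_recognition=True)
print(open_actions(tool))
```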
Conclusion: HR is high-risk zone No. 1
Of all the AI Act's high-risk areas, none affects as many companies as the HR domain. AI in recruitment, performance evaluation, and compensation — this is everyday practice in European businesses. The combination of the AI Act and the Pay Transparency Directive makes coordinated compliance a necessity.
The deadlines may shift. The obligations won't.
Do you use AI in HR and want to know what's coming? We help with the stocktake — for AI Act and Equal Pay alike. Book a free initial call.
Disclaimer: The contents of this article are for general information purposes only and do not constitute legal advice. For a binding assessment of your individual situation, please consult a qualified legal professional.
Jens Druckenmüller, LL.M.
Entrepreneur & Independent Advisor
20 years of experience in boardrooms, due diligence and advisory. Today an independent advisor based in Luxembourg; the topics change, but the standards never do.
Ready to prepare?
In a complimentary initial consultation, we'll assess where your organization stands and identify the right next steps.
Schedule a Consultation