Compliance & Ethics

GDPR and AI recruiting: what data you can collect and for how long

Amesha
4 min read

March 15, 2026

GDPR AI Recruiting: A Practical Compliance Guide for Hiring Teams

What GDPR Means for AI Recruiting

GDPR AI recruiting refers to the practice of using artificial intelligence tools in talent acquisition — résumé screening, automated interviews, candidate scoring, and pipeline management — in a way that complies with the General Data Protection Regulation. GDPR requires that any personal data collected from candidates is gathered lawfully, for a defined purpose, kept only as long as necessary, and protected against misuse. When AI is involved, those obligations become more complex, because AI systems process data at scale, often automatically, and frequently in ways that directly affect a candidate's chances of employment.

The regulation applies to any organization that processes personal data of individuals in the European Union — regardless of where the organization itself is based. A US company recruiting for a role that accepts applications from EU residents is subject to GDPR. A UK company operating under UK GDPR faces almost identical obligations post-Brexit. For global hiring teams, this is not a regional compliance checkbox. It is a baseline standard that shapes how AI recruiting tools can be deployed in a significant portion of the world's labor markets.

What makes GDPR particularly relevant to AI hiring is the intersection of automation and consequential decisions. When an algorithm screens out a candidate — before any human has read their application — that is a decision with significant effects, and GDPR has specific rules for exactly that scenario. Understanding those rules, and designing hiring systems around them, is what this guide covers.

What Counts as Candidate Data Under GDPR

Under GDPR, personal data means any information relating to an identified or identifiable natural person. In the context of recruitment, that covers more ground than most hiring teams initially realize. The obvious categories — name, email address, phone number, employment history — are clearly personal data. But the scope extends well beyond what is in a CV.

AI screening data is personal data. When an AI tool scores a candidate's résumé and assigns a numerical ranking, that score is derived from and linked to an identifiable individual. It is personal data. The same applies to assessment results from cognitive or personality tests administered through an AI platform. If a candidate completes an online evaluation and the system produces a competency score, that score is part of their personal data profile and must be handled accordingly.

Voice recordings from AI-powered interviews are personal data — and they are sensitive personal data if they contain information from which health conditions, ethnic origin, or religious beliefs could be inferred. The same logic applies to video recordings from automated interview platforms. Automated transcripts produced from these recordings are also personal data, even if the original recording is subsequently deleted.

Behavioral and interaction data is another category that often catches organizations off guard. If your AI interview tool tracks response latency, eye movement patterns, facial expressions, or tone of voice as inputs to scoring, that data is personal data — and depending on what is being inferred from it, it may constitute special category data under Article 9, which carries significantly higher compliance obligations. Health-related inferences, for example, are special category data regardless of whether they were the intended output of the analysis.

The practical implication of this broad scope is that you need a data inventory that maps every AI touchpoint in your hiring process to the personal data it generates. This is not a theoretical exercise. If you cannot articulate what data each tool collects, stores, and processes, you cannot demonstrate GDPR compliance — and you cannot delete data appropriately when retention periods expire.
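To make the inventory concrete, here is a minimal sketch of what a per-touchpoint data map might look like. The tool names, data categories, and retention periods are invented placeholders, not a recommended configuration; the point is the structure — every AI touchpoint mapped to the data it generates, its lawful basis, and its retention period, with an audit pass that flags the entries needing extra scrutiny.

```python
from dataclasses import dataclass

@dataclass
class TouchPoint:
    """One AI touchpoint in the hiring pipeline and the personal data it generates."""
    tool: str                       # hypothetical tool name
    data_collected: list            # categories of personal data generated
    lawful_basis: str               # Article 6 basis relied on
    retention_days: int             # documented retention period
    special_category: bool = False  # any Article 9 data involved?

# Illustrative inventory -- tools, categories, and periods are placeholders.
inventory = [
    TouchPoint("resume_screener", ["CV text", "AI ranking score"],
               "legitimate interests", 180),
    TouchPoint("video_interview", ["video recording", "transcript"],
               "legitimate interests", 90, special_category=True),
    TouchPoint("talent_pool", ["contact details", "CV text"], "consent", 730),
]

def audit(inventory):
    """Flag entries needing extra scrutiny: special category data, or
    consent-based retention long enough to require renewal."""
    flags = []
    for tp in inventory:
        if tp.special_category:
            flags.append((tp.tool, "Article 9 data: verify explicit lawful basis"))
        if tp.lawful_basis == "consent" and tp.retention_days > 365:
            flags.append((tp.tool, "retention beyond 12 months requires consent renewal"))
    return flags

for tool, note in audit(inventory):
    print(f"{tool}: {note}")
```

Even a spreadsheet version of this map serves the same purpose; what matters is that it exists, is current, and can feed your Record of Processing Activities.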

How AI Screening Changes GDPR Risk

Manual recruitment creates GDPR risk, but it creates it at human scale. When a recruiter reviews applications, they process perhaps fifty to a hundred candidate files in a day. When an AI screening tool is deployed, the same organization might process ten thousand applications in an hour. The scale change is not just operational — it is a fundamental shift in the profile of data processing risk.

At scale, data minimisation failures become significant. If your AI tool collects data that is not strictly necessary for the decision it is making — behavioral signals that are not genuinely predictive, or profile data harvested from LinkedIn integrations that goes beyond job-relevant information — you are accumulating personal data that has no lawful purpose. At a hundred applicants, this is a management issue. At ten thousand, it is a regulatory exposure.

Automation also changes the legal character of decisions. When a recruiter reads an application and decides not to advance a candidate, that is a human decision. It may be influenced by unconscious bias, but it is not a solely automated decision in the GDPR sense. When an AI tool screens out a candidate before any human has reviewed their file, and that screening directly determines whether the candidate progresses, Article 22 of GDPR is engaged. Candidates have the right not to be subject to solely automated decisions that produce significant effects on them — and employment decisions qualify.

AI tools also create risks around sensitive data that manual processes tend to avoid simply through human judgment. A recruiter reading a CV does not analyze the candidate's vocal frequency patterns for signs of neurological divergence. An AI interview platform that claims to assess enthusiasm or communication quality from acoustic features may, depending on how it works, be making inferences that touch on health or disability status — which are special category data under Article 9 and cannot be processed without explicit consent or another specifically enumerated lawful basis. This is not a hypothetical risk; it is a documented feature of several commercially deployed interview AI tools that have faced regulatory scrutiny.

The combination of scale, automation, and the risk of sensitive data inference means that AI hiring compliance under GDPR requires more deliberate architecture than manual recruitment. It is not a harder version of the same compliance problem. It is structurally different.

Lawful Basis for Processing Candidate Data

GDPR requires that every instance of personal data processing has a lawful basis from the six listed in Article 6. In recruitment, the relevant bases are typically legitimate interests, contract necessity, legal obligation, and consent. Understanding which basis applies — and when — is foundational to a defensible compliance program.

Legitimate interests

This is the most commonly used basis for recruitment data processing, and the most frequently misapplied. Legitimate interests allows processing when it is necessary for the organization's genuine interests and those interests are not overridden by the data subject's rights. In recruitment, this typically covers processing an application, reviewing a CV, and conducting assessment activities. However, it requires a three-part legitimate interests assessment: identifying the interest, demonstrating necessity, and balancing against candidate rights. Organizations that invoke legitimate interests without conducting this assessment are using it as a shortcut rather than a lawful basis.

Contract necessity

This basis applies when processing is necessary to take steps at the request of the data subject prior to entering a contract. In recruitment, this covers processing directly necessary to evaluate someone's application for a specific role. It does not cover keeping candidate data in a talent pool after a hiring process concludes — at that point, there is no longer a pre-contractual relationship, and contract necessity falls away.

Legal obligation

Some recruitment data processing is required by law — for example, collecting right-to-work documentation or maintaining records required by equal opportunities monitoring obligations. Where a legal obligation exists and requires processing, it provides a lawful basis for that specific activity. It does not extend to other data processing activities that happen to occur alongside the legally required ones.

Consent

Consent is the most discussed and most misunderstood lawful basis in recruitment. GDPR sets a high standard: consent must be freely given, specific, informed, and unambiguous. In the employment context, the power imbalance between employer and candidate means that consent is often considered not freely given — a candidate who believes that not consenting will harm their chances is not providing genuinely free consent. This doesn't mean consent is never appropriate in recruitment. Retaining a candidate's data in a talent pool after a process concludes is a case where explicit consent is both appropriate and the most legally robust basis. But using consent as the default basis for all recruitment data processing, and then relying on a standard application checkbox to satisfy it, is legally inadequate under GDPR.

Data Minimisation: What You Can and Cannot Collect

Data minimisation is one of GDPR's core principles and one of the most practically challenging to implement in AI recruiting. Article 5(1)(c) requires that personal data be adequate, relevant, and limited to what is necessary in relation to the purposes for which it is processed. In plain terms: if you don't need it for the hiring decision, you shouldn't collect it.

In practice, AI tools often collect more data than they need. Some video interview platforms capture eye movement data, facial expression analysis, and micro-expression scoring in addition to what candidates say. Some résumé screening tools harvest social media profiles and LinkedIn activity. Some assessment platforms log every mouse click and keypress during an evaluation. The fact that a tool can collect this data does not mean collecting it is lawful. Each data point needs a justified purpose connected to a legitimate hiring criterion.

What is genuinely necessary depends on the role. For a software engineering position, technical skill assessment data, employment history in relevant domains, and responses to role-specific screening questions are clearly necessary. Acoustic analysis of the candidate's tone of voice is not — unless you can demonstrate a direct, validated connection between that signal and job performance in this specific role, which is an extremely high bar that most vendors cannot meet.

Excessive collection creates multiple problems. First, it is a direct GDPR breach, regardless of whether it causes harm. Second, it increases your deletion obligations — you must be able to delete all the data you collected, not just the data you meant to collect. Third, if you use data you collected excessively in making a hiring decision, the decision itself may lack a valid lawful basis. Fourth, candidates who discover you collected more than was necessary — through a Subject Access Request, for example — have grounds for a complaint to the supervisory authority.

The practical approach to data minimisation in AI recruiting is to work backward from the hiring decision: what information genuinely informs the judgment about this candidate for this role? Configure your AI tools to collect only that. Disable features that collect additional data points — many platforms enable acoustic or behavioral analysis by default, and these need to be actively switched off unless you have a validated reason to use them. Document your minimisation decisions so you can demonstrate to a regulator that you made deliberate choices, not accidental ones.
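One way to operationalize the "disable by default" advice is to treat the vendor configuration as data and diff it against an allowlist of features you have actually justified for the role. A minimal sketch, with hypothetical feature-flag names (no real vendor API is implied):

```python
# Hypothetical vendor feature flags -- names are illustrative only.
DEFAULT_VENDOR_CONFIG = {
    "cv_parsing": True,
    "skills_assessment": True,
    "acoustic_analysis": True,       # often enabled by default
    "facial_expression_scoring": True,
    "keystroke_logging": True,
}

# Features you have documented as job-relevant for this role.
JUSTIFIED_FEATURES = {"cv_parsing", "skills_assessment"}

def minimised_config(vendor_config, justified):
    """Return a config with every unjustified collection feature switched off,
    plus the list of features that were disabled (for your documentation)."""
    disabled = sorted(f for f, on in vendor_config.items() if on and f not in justified)
    cleaned = {f: (on and f in justified) for f, on in vendor_config.items()}
    return cleaned, disabled

config, disabled = minimised_config(DEFAULT_VENDOR_CONFIG, JUSTIFIED_FEATURES)
print("Disabled:", ", ".join(disabled))
```

Keeping the `disabled` list alongside the rationale for each justified feature gives you exactly the documentation trail a regulator would ask for.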

How Long You Can Keep Candidate Data

Retention is one of the most frequently neglected GDPR obligations in recruitment. Data should be kept no longer than necessary for the purpose for which it was collected. In recruitment, that purpose — evaluating a candidate for a specific role — typically ends when the hiring process concludes. After that point, retention needs its own justification.

The table below reflects widely adopted practice in EU jurisdictions, though specific requirements vary by country and context. Some EU member states have issued supplementary guidance; always check national requirements in addition to the base GDPR obligations.

Data Type | Recommended Retention Period | Basis for Retention
Application data (CV, cover letter, screening responses) | 6 months after process conclusion | Legitimate interest — potential legal challenge defense
AI interview recordings (video/audio) | 3–6 months | Legitimate interest — limited; delete early where possible
AI interview transcripts | Up to 12 months | Legitimate interest; explicit deletion schedule required
Assessment scores and AI screening outputs | 6–12 months | Legitimate interest — discrimination claim defense window
Talent pool data (candidate opted in) | Up to 2 years with re-consent | Explicit consent — must be renewed
Right-to-work documentation | Duration of employment + 2 years | Legal obligation

The six-month window for application data is not arbitrary. It reflects the typical limitation period for bringing an employment discrimination claim in many EU jurisdictions — retaining data during this window gives you a defensible record if a claim is made. Beyond that window, the case for retention weakens considerably, and organizations that hold candidate data for years without a documented justification are in breach of the storage limitation principle.

Talent pool retention is the category that requires the most active management. Retaining a candidate's data for future opportunities is only lawful if the candidate has given explicit, informed consent for that specific purpose. That consent should state clearly what data is held, for how long, and what it will be used for. It should be easy to withdraw. And it should be renewed — most supervisory authorities treat consent for talent pool retention as requiring active renewal after twelve to twenty-four months, because candidates' circumstances change and their original consent cannot be assumed to remain meaningful indefinitely.

The practical challenge is automation. Manual deletion of candidate data at defined intervals is error-prone and inconsistent. Your ATS or AI recruiting platform should have configurable retention policies that trigger automated deletion at the end of defined periods. If your platform does not support this, it is a compliance gap that needs to be addressed — either through a process-based workaround or a platform change.
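Where the platform exposes candidate records but not retention automation, the process-based workaround can be a small scheduled job that computes which records are past their retention period. A sketch using the indicative periods from the table above — the record shape and the exact day counts are assumptions to adapt to your documented policy:

```python
from datetime import date, timedelta

# Indicative retention periods (in days) -- align these with your own
# documented policy and national requirements.
RETENTION_DAYS = {
    "application_data": 180,
    "interview_recording": 90,
    "interview_transcript": 365,
    "assessment_score": 365,
}

def deletion_due(records, today):
    """Select records whose retention period has expired.
    Each record: (candidate_id, data_type, process_concluded_on)."""
    due = []
    for candidate_id, data_type, concluded in records:
        expiry = concluded + timedelta(days=RETENTION_DAYS[data_type])
        if today >= expiry:
            due.append((candidate_id, data_type))
    return due

records = [
    ("c-101", "interview_recording", date(2026, 1, 2)),  # 90 days -> expired
    ("c-101", "application_data", date(2026, 1, 2)),     # 180 days -> not yet
]
print(deletion_due(records, today=date(2026, 4, 15)))
```

Run on a schedule with a designated owner reviewing the output, this turns "someone should delete old records" into a repeatable process with an audit trail.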

Automated Decision-Making and Candidate Rights

Article 22 of GDPR gives data subjects the right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects on them. Hiring decisions — being rejected from employment — clearly fall within this category. This provision is one of the most operationally significant GDPR requirements for AI recruiting, and one of the least understood in practice.

A decision is solely automated when no meaningful human assessment is involved before the decision takes effect. If your AI screening tool rejects applications and candidates receive rejection notifications without any human reviewing their file, that is a solely automated decision. If a recruiter technically reviews AI scores but in practice does not deviate from them and applies no independent judgment, that is functionally a solely automated decision, even if a human clicked a button. Supervisory authorities look at substance, not form.

When Article 22 applies, candidates have three rights: the right to obtain human review of the decision, the right to express their point of view, and the right to receive a meaningful explanation of the decision. That last obligation — explainability — connects directly to the broader AI hiring explainability requirements discussed in other GDPR guidance. A meaningful explanation must describe the logic applied and the factors that were most influential in the outcome. Telling a candidate that their application was processed by an automated system and found not to meet requirements is not a meaningful explanation. It is a disclosure of the process, not an account of the outcome.

There are three exceptions to the Article 22 prohibition on solely automated decisions: the decision is necessary for a contract, the decision is authorized by EU or member state law, or the data subject has given explicit consent. In recruitment, contract necessity is the most commonly invoked exception — the argument being that processing applications at scale requires automation. However, even where an exception applies, organizations must still implement suitable measures to safeguard the data subject's rights, which includes at minimum providing transparency about the automated process and offering a route to human review.

The most operationally straightforward approach to Article 22 compliance is to ensure that AI tools are advisory rather than determinative. If AI produces a ranking or score and a human recruiter reviews and makes the actual decision to advance or reject a candidate, Article 22 is not engaged — provided the human review is genuine. This means building recruiting processes where overriding AI scores is not only technically possible but organizationally expected, and where those overrides are logged.
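Logging overrides need not be elaborate. A minimal sketch of what an advisory-AI decision record might look like — the reviewer, the AI recommendation, the operative human decision, and a required rationale whenever the human deviates. Field names and the in-memory log are illustrative; in practice this would write to an append-only audit store.

```python
from datetime import datetime, timezone

decision_log = []  # illustrative; in practice an append-only audit store

def record_decision(candidate_id, ai_recommendation, human_decision, reviewer, rationale):
    """Log the human decision next to the AI recommendation; flag overrides.
    The AI output is advisory -- the logged human decision is the operative one."""
    entry = {
        "candidate": candidate_id,
        "ai_recommendation": ai_recommendation,  # e.g. "reject" / "advance"
        "human_decision": human_decision,
        "override": human_decision != ai_recommendation,
        "reviewer": reviewer,
        "rationale": rationale,                  # required when overriding
        "at": datetime.now(timezone.utc).isoformat(),
    }
    if entry["override"] and not rationale:
        raise ValueError("an override must be accompanied by a rationale")
    decision_log.append(entry)
    return entry

e = record_decision("c-101", "reject", "advance", "j.doe",
                    "relevant open-source work not captured by the CV parser")
print("override recorded:", e["override"])
```

A log like this also gives you the evidence that human review is genuine: if the override rate is zero across thousands of decisions, the review is probably theatrical, and Article 22 is probably engaged.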

AI Vendor Compliance Responsibilities

Under GDPR, when an organization engages a third-party AI recruiting tool, the organization is the data controller and the vendor is a data processor. This relationship has specific legal consequences. The data controller determines the purposes and means of processing — what the data is collected for, which candidates are in scope, what decisions it informs. The data processor acts on the controller's behalf and has specific obligations that must be formalized in a Data Processing Agreement.

Article 28 of GDPR requires that processing by a data processor be governed by a binding Data Processing Agreement (DPA). This document must cover the subject matter and duration of processing, the nature and purpose of processing, the type of personal data involved, the categories of data subjects, and the obligations and rights of the controller. Every AI hiring tool vendor that processes personal data of your candidates must have a signed DPA in place before you deploy their tool. This is not a formality — it is a legal requirement, and operating without one is a compliance breach.

The DPA must also address sub-processors — third parties that the AI vendor engages to support their service, such as cloud infrastructure providers, analytics platforms, or data enrichment services. GDPR requires that sub-processors be subject to the same data protection obligations as the primary processor, and the DPA should list approved sub-processors or establish a process for notifying the controller of changes to the sub-processor list.

International data transfers are another vendor-related compliance question. If your AI recruiting vendor processes candidate data outside the EU — for example, on servers in the US — that transfer must have a legal mechanism. Post-Schrems II, the primary mechanism for EU-US transfers is the EU-US Data Privacy Framework, which requires the US recipient to be certified. Standard Contractual Clauses (SCCs) are the alternative where framework certification is not in place. Before deploying any AI hiring tool, ask your vendor where candidate data is stored and processed, and request documentation of the transfer mechanism they use for cross-border flows.

One thing that DPAs and vendor contracts cannot do is transfer the controller's compliance obligations. You remain accountable for the lawfulness of the data processing you instruct your vendor to carry out. If your vendor processes data in a way that breaches GDPR and you have not exercised appropriate oversight, you bear responsibility. Vendor compliance is necessary but not sufficient. It supplements your own compliance program; it does not substitute for it.

Building a GDPR-Compliant AI Hiring System

A GDPR-compliant AI hiring system is not a single tool or a single policy. It is an architecture — a set of design decisions about data flow, storage, access, deletion, and oversight that, together, produce a process that is both operationally effective and legally sound. Here is how to think about building one.

Start with data mapping. Before you can manage candidate data compliantly, you need to know exactly what data exists, where it is, who can access it, and when it should be deleted. This means mapping every AI touchpoint in your recruiting process: what data enters the system, what data is generated by the AI, what data is stored in the ATS, what data flows to the vendor, and what data is retained after the hiring process concludes. This map is also the foundation of your Record of Processing Activities (ROPA), which Article 30 requires for most organizations.

Design for deletion from the start. Many organizations add deletion processes as an afterthought and find them difficult to implement consistently. If your ATS and AI platform support automated retention policies — delete application data after six months, delete recordings after three months — configure those policies when you deploy the tools, not later. If your platform does not support automated deletion, build a manual process with a designated owner and a calendar trigger. Deletion that depends on someone remembering to do it will eventually fail.

Build privacy notices that are actually informative. Candidates must be informed about how their data will be processed before they submit their application. This notice — typically provided at the point of application — must cover who the data controller is, the purposes and lawful basis for each type of processing, whether AI is used and in what way, what automated decision-making is involved and what rights they have in relation to it, how long data will be retained, and their rights to access, erasure, portability, and objection. A privacy notice that lists general categories without specifying AI processing or automated decision-making does not meet this standard.

Implement Subject Access Request (SAR) handling for candidates. Under Article 15, candidates have the right to request a copy of all personal data you hold about them, including AI-generated scores and outputs. Organizations should have a documented SAR process for candidates, with a designated handler and a response timeline of no more than one month. This includes being able to produce AI scoring data from your vendor in a readable format — which means, again, that you need vendors who can surface per-candidate data on request.
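A sketch of what assembling a SAR response might look like, assuming your ATS and AI vendor can each export per-candidate data. The field names are placeholders, and the 30-day figure approximates the Article 12 one-month deadline:

```python
import json
from datetime import date, timedelta

def build_sar_export(candidate_id, ats_data, vendor_data, received_on):
    """Assemble a Subject Access Request response for one candidate.
    ats_data / vendor_data stand in for whatever your ATS and AI vendor
    export per candidate; 30 days approximates the one-month deadline."""
    return {
        "candidate_id": candidate_id,
        "response_due_by": (received_on + timedelta(days=30)).isoformat(),
        "ats_records": ats_data,    # CV, application answers, notes
        "ai_outputs": vendor_data,  # scores, rankings, transcripts
    }

export = build_sar_export(
    "c-101",
    ats_data={"cv_on_file": True, "screening_answers": 12},
    vendor_data={"resume_score": 0.74, "interview_transcript": "available"},
    received_on=date(2026, 3, 1),
)
print(json.dumps(export, indent=2))
```

The hard part is not the assembly but the `vendor_data` input: if your vendor cannot produce per-candidate AI outputs in a readable format, this function has nothing to export, which is why SAR support belongs in vendor selection criteria.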

Conduct a Data Protection Impact Assessment (DPIA) before deploying AI hiring tools. Article 35 requires a DPIA for processing that is likely to result in a high risk to individuals' rights and freedoms. Automated decision-making that significantly affects individuals is explicitly listed as a scenario requiring a DPIA. A DPIA documents the nature and purpose of the processing, the risks to data subjects, and the measures taken to address those risks. It is not just a regulatory checkbox — it forces you to think through the privacy implications before deployment, which is exactly when problems can still be fixed.

Common GDPR Mistakes in AI Recruiting

After working through the requirements, it is worth being direct about the mistakes that recur most frequently in practice. These are not theoretical risks — they are patterns that supervisory authorities have cited in investigations and that data protection practitioners encounter regularly.

The most common mistake is collecting too much data. Organizations deploy AI tools that capture behavioral signals, audio features, and interaction metadata because those features are enabled by default, and no one actively questions whether they are necessary. The result is a data set that goes far beyond what the hiring decision requires and creates obligations — to manage, secure, explain, and delete that data — that the organization has not planned for. The fix is simple in principle but requires discipline in practice: configure AI tools to collect only what is demonstrably job-relevant, and document that decision.

The second common mistake is relying on invalid consent. An application form that says something like "by submitting this application you consent to us processing your data" is not valid consent under GDPR. Valid consent must be granular, informed, freely given, and unambiguous. In the recruitment context, it is rarely the right lawful basis for the core processing activities involved in evaluating an application — legitimate interests or contract necessity are more appropriate. Consent is best reserved for specific optional activities like talent pool retention, where it can be properly structured and freely withdrawn.

The third common mistake is having no retention policy, or having one that is not implemented. Organizations frequently have a stated retention policy in their privacy notice that does not reflect actual practice. Candidate data accumulates in the ATS indefinitely because automated deletion was never configured and manual deletion is inconsistent. If a regulator or a candidate submits a SAR, the gap between stated policy and actual practice becomes immediately visible and is difficult to explain. Retention must be operationalized, not just documented.

Operating without a DPA with AI hiring vendors is another recurring compliance gap. It often happens when a vendor is onboarded quickly — trial periods in particular tend to skip the formal documentation stage. Once the trial converts to a paid subscription, the DPA has still not been signed, and the organization is processing candidate data through a vendor without the required contractual framework. The fix is process-level: require DPA execution before any candidate data is loaded into any vendor system, including during trials.

Finally, many organizations fail to conduct DPIAs before deploying AI hiring tools. The DPIA is treated as bureaucratic overhead rather than a substantive risk management exercise. The consequence is that potential compliance problems — data minimisation failures, Article 22 exposure, international transfer gaps — are discovered reactively rather than addressed proactively. A DPIA conducted before deployment does not guarantee compliance, but it significantly increases the likelihood of catching problems while they can still be fixed without an incident.

GDPR compliance in AI recruiting is not about limiting what your hiring team can do. It is about building a system that controls unnecessary data, respects candidates as individuals, and creates the kind of trust that makes people want to apply to your organization in the first place. Done well, it is a competitive advantage, not a constraint.

Metrics That Matter in GDPR Compliance

Compliance programs that lack measurement tend to drift over time. Building a small set of meaningful metrics into your AI hiring compliance program helps maintain accountability and provides evidence of an active compliance posture if you are ever subject to regulatory scrutiny.

Metric | What it measures | Why it matters
Data retention compliance rate | Percentage of candidate records deleted within defined retention periods | Directly demonstrates storage limitation compliance; detects ATS configuration failures
Consent validity rate | Percentage of talent pool records backed by documented, current, freely given consent | Ensures lawful basis for talent pool retention; identifies records requiring re-consent or deletion
SAR response timeliness | Percentage of candidate Subject Access Requests responded to within 30 days | Article 12 compliance indicator; identifies process bottlenecks before they become violations
DPA coverage rate | Percentage of AI hiring vendors with a signed, current DPA | Confirms Article 28 compliance across vendor portfolio; identifies gaps from new vendor onboarding
DPIA completion rate | Percentage of new or materially changed AI tools with a completed DPIA on file | Demonstrates proactive risk assessment; required by Article 35 for high-risk processing
Audit readiness score | Availability and currency of ROPA, DPIAs, DPAs, retention schedules, and privacy notices | Single indicator of overall compliance documentation health; drives remediation prioritization

These metrics do not require elaborate infrastructure to track. A quarterly compliance review that checks each of these areas — ideally with a designated data protection owner in HR or legal — is sufficient for most organizations. What matters is that the review actually happens, the findings are documented, and identified gaps are assigned to owners with defined timelines for remediation.
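These rates really are simple to compute once the underlying counts are pulled from your ATS and vendor-management records. A sketch with invented quarterly numbers, covering three of the metrics above:

```python
def rate(compliant, total):
    """Percentage of compliant items, guarding against empty inputs."""
    return round(100 * compliant / total, 1) if total else 100.0

# Invented quarterly inputs -- in practice these counts come from your
# ATS deletion logs, vendor register, and SAR tracker.
deleted_on_time, deletions_due = 188, 200
vendors_with_dpa, vendors_total = 4, 5
sars_on_time, sars_total = 6, 6

report = {
    "retention_compliance_%": rate(deleted_on_time, deletions_due),
    "dpa_coverage_%": rate(vendors_with_dpa, vendors_total),
    "sar_timeliness_%": rate(sars_on_time, sars_total),
}
print(report)
```

Anything below 100% on DPA coverage or retention compliance is not a trend to watch; it is a named gap with an owner and a deadline.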

Choosing AI Recruiting Tools with Compliance in Mind

Not all AI recruiting platforms are built with the same approach to data governance, and the differences matter enormously when you are responsible for GDPR compliance. Evaluating tools on compliance infrastructure — not just features, integrations, and price — is the only way to avoid discovering gaps after deployment when fixing them is far more expensive.

The questions to ask any AI recruiting vendor before signing include: Where is candidate data stored and processed? What is your sub-processor list? Do you support automated retention and deletion policies? Can you produce per-candidate AI output data for Subject Access Requests? What does your standard DPA cover, and how current is it? Do you conduct DPIAs on your own products and can you share the output? Have you been subject to regulatory inquiry or investigation related to data protection, and what was the outcome?

When organizations compare platforms — whether evaluating ninjahire vs linkedin recruiter, assessing ninjahire vs converzai, or looking at ninjahire vs hireez — the compliance infrastructure questions above should be weighted as heavily as feature comparisons. A platform that excels on candidate experience or screening accuracy but cannot support your SAR obligations or does not offer a compliant DPA is a liability, not an asset.

The same applies to newer AI-native tools entering the recruiting space. When evaluating options like ninjahire vs tenzo ai or ninjahire vs heymilo, ask specifically about EU data residency, automated deletion capabilities, and their approach to Article 22 compliance. These are not niche questions — they are baseline requirements for any platform processing candidate data from EU residents. Vendors who cannot answer them clearly have not built compliance into their product architecture, and that gap will eventually be your problem.

Key Takeaway

GDPR and AI recruiting are not in conflict. AI can be used lawfully and effectively in talent acquisition — but only if the data architecture is deliberate, the lawful basis is correctly identified, retention policies are actually implemented, and candidates are treated with the transparency the regulation requires. The organizations that get this right gain something beyond compliance: they build a reputation as employers who respect candidate data, which in a market where trust is scarce is a genuine competitive advantage.

The practical path forward is not complex. Map your data. Audit your vendors. Configure your tools for minimisation and deletion. Make your privacy notices honest. Ensure human oversight is real, not theatrical. And review regularly — because the regulatory environment is still moving, and compliance that was adequate last year may have gaps today. Build the infrastructure once, maintain it consistently, and AI hiring compliance becomes an operational routine rather than a recurring crisis.

Make your AI hiring compliant, defensible, and scalable

NinjaHire is built for teams that take candidate data and compliance seriously — with transparent AI, built-in audit support, and data governance that holds up under scrutiny.

Try for free

Frequently Asked Questions

What data can recruiters collect under GDPR?
Recruiters can collect personal data that is adequate, relevant, and limited to what is necessary for the purpose of evaluating a candidate for a specific role. This typically includes CV and application form data, contact information, responses to role-specific screening questions, assessment results, and interview notes. It does not include behavioral data, audio features, or inferred characteristics unless those are directly job-relevant and can be justified as necessary. The data minimisation principle applies to every data point collected — if you cannot articulate why it is necessary for the hiring decision, you should not be collecting it.
How long can you keep candidate data under GDPR?
The general standard for application data is six months after the hiring process concludes — this window covers the typical period for potential discrimination claims. AI interview recordings should be deleted sooner, typically within three to six months. Transcripts may be retained for up to twelve months with a documented justification. Talent pool data requires explicit consent for retention beyond the active hiring process, and that consent should be renewed after twelve to twenty-four months. Retention periods should be automated where possible, and the actual practice should match what your privacy notice states.
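The schedule above is only defensible if deletion actually happens on time, which is why automation matters. As a rough illustration, here is a minimal Python sketch of how a retention check might be expressed in code. The record-type names and day counts are hypothetical examples mirroring the windows described above, not values prescribed by GDPR; a real system would also need audit logging and per-jurisdiction configuration.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical retention windows (in days) mirroring the schedule above.
# These are illustrative defaults, not legally mandated figures.
RETENTION_DAYS = {
    "application_data": 180,      # ~6 months after the process concludes
    "interview_recording": 90,    # recordings deleted early (3-6 months)
    "interview_transcript": 365,  # up to 12 months with documented justification
}

def is_expired(record_type: str,
               process_closed_at: datetime,
               now: Optional[datetime] = None) -> bool:
    """Return True once a record has passed its retention window."""
    now = now or datetime.now(timezone.utc)
    window = timedelta(days=RETENTION_DAYS[record_type])
    return now - process_closed_at > window

# Example: a process that closed 120 days ago.
closed = datetime.now(timezone.utc) - timedelta(days=120)
print(is_expired("interview_recording", closed))  # True: past the 90-day window
print(is_expired("application_data", closed))     # False: still within 180 days
```

A nightly job running a check like this against the candidate database, then hard-deleting expired records, is what turns a written retention policy into the "actual practice" that must match your privacy notice.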
Is AI hiring GDPR compliant?
AI hiring is GDPR compliant when it is properly configured and governed. That means having a valid lawful basis for each type of processing, collecting only necessary data, implementing deletion at defined intervals, providing candidates with transparent information about how AI is used in their evaluation, ensuring human oversight of AI-assisted decisions, and having a signed Data Processing Agreement with every AI vendor. The technology is not inherently non-compliant — but compliance requires deliberate design choices that many organizations have not yet made.
What are candidate rights regarding automated hiring decisions?
Under GDPR Article 22, candidates have the right not to be subject to decisions based solely on automated processing that significantly affect them — and employment decisions qualify. When Article 22 applies, candidates have the right to request human review of the decision, to express their point of view, and to receive a meaningful explanation of the logic applied. Even where an exception to the Article 22 prohibition applies (such as contract necessity), organizations must still implement safeguards and provide transparency about the automated process and available remedies.
What is the role of a Data Processing Agreement in AI recruiting?
A Data Processing Agreement (DPA) is a mandatory contract under GDPR Article 28 between a data controller (the employer) and a data processor (the AI recruiting vendor). It must specify the subject matter, duration, nature and purpose of processing, the type of data involved, the categories of data subjects, and the obligations of each party. It must also address sub-processors and data security obligations. A DPA must be signed before any candidate data is loaded into a vendor's system — including during trial periods. Operating without one is a compliance breach regardless of whether harm results.
Do you need to conduct a DPIA before using AI hiring tools?
Yes, in most cases. GDPR Article 35 requires a Data Protection Impact Assessment before processing that is likely to result in high risk to individuals' rights and freedoms. Automated decision-making that significantly affects individuals — which includes AI-assisted hiring decisions — is explicitly listed as a scenario requiring a DPIA. The DPIA should document the nature and purpose of the AI processing, the risks to candidates, and the measures taken to address those risks. It should be completed before the tool is deployed, not after. Many supervisory authorities have also published lists of processing activities that always require a DPIA; check the guidance from the relevant national authority for your jurisdiction.