Candidate Experience

How to tell candidates they'll be screened by AI

Bharat Sigtia
5 min read

March 15, 2026

Why AI Transparency in Hiring Matters

AI is already part of most hiring processes, whether companies openly talk about it or not. Candidates are aware of this shift. They may not know exactly how AI is used, but they assume some level of automation is involved the moment they apply.

What has changed recently is not just the use of AI, but the expectations around it. Candidates today are paying closer attention to how hiring decisions are made. They care about fairness, clarity, and whether their application is being evaluated properly. When that process feels unclear, it creates doubt.

Most companies don’t deliberately hide AI in recruiting. It simply goes unmentioned. A candidate applies, gets screened, receives a response, and moves on without ever knowing what happened in between. From the company’s perspective, the process is efficient. From the candidate’s perspective, it can feel impersonal and opaque.

That disconnect matters more than it seems.

When candidates don’t understand how they’re being evaluated, they start filling the gaps themselves. Some assume the system is biased. Others feel they were filtered out without real consideration. Even strong candidates can lose interest if the process feels like a black box.

This is where transparency plays a practical role.

Clearly communicating that AI is part of the screening process sets expectations early. It doesn’t make the process less efficient. It makes it more understandable. Candidates know what to expect and are less likely to misinterpret the outcome.

There is also a growing compliance angle. In Europe, disclosing automated decision-making is already a legal requirement in many cases. Even in markets where regulation is still evolving, the direction is clear. Companies will be expected to explain how technology is used in hiring.

But even without regulation, transparency has a direct impact on how a company is perceived.

Hiring is often the first real interaction someone has with your organisation. If the process feels unclear or overly automated, it reflects poorly on the company itself. On the other hand, when the process is explained in a straightforward way, it builds credibility.

The companies that handle this well are not avoiding AI. They are simply more deliberate about how they use it and more open about where it fits into the process.

That shift—from hidden automation to clear communication—is what candidates are starting to expect. And increasingly, it’s what separates a functional hiring process from a trustworthy one.

Do You Have to Tell Candidates About AI Screening?

The short answer: in many cases, yes. And even where it’s not mandatory yet, it’s quickly becoming expected.

If your hiring process uses AI only for support tasks like scheduling interviews or organising applications, disclosure is usually not required. But the moment AI starts influencing decisions, such as filtering resumes, ranking candidates, or analysing video interviews, the expectation changes.

In the EU, the GDPR already requires companies to inform candidates when automated decision-making is involved, especially if it has a meaningful impact on outcomes. Candidates also have the right to ask how decisions are made and, in some cases, to request human review.

In the US, the landscape is still evolving, but there is increasing scrutiny around fairness and bias in AI hiring tools. Some states and cities have already introduced guidelines or laws requiring disclosure and audit of automated hiring systems.

In India, there isn’t a strict legal requirement yet. However, with the rise of global hiring and global capability centre (GCC) setups, many companies are aligning with international standards by default. It’s less about current law and more about future-proofing the process.

But beyond compliance, there’s a simpler way to look at it.

If AI is affecting whether a candidate moves forward or not, they should know.

Not because it’s a rule everywhere, but because it’s part of a fair and transparent process. When candidates understand how decisions are made, they’re more likely to trust the outcome—even if it’s not in their favour.

On the other hand, when AI is used silently and candidates feel filtered out without clarity, it creates frustration. That frustration doesn’t stay limited to one application. It shapes how they view your company.

So while the legal requirement may depend on where you’re hiring, the practical answer is more consistent.

If AI is part of your decision-making process, it’s better to disclose it clearly and early.

Legal & Compliance Overview (Global + India)

The rules around AI in hiring are not uniform yet, but the direction is becoming clear. Most regions are moving toward more transparency, not less. If your hiring spans multiple geographies, as it does for many GCCs and global teams, you can’t rely on a single standard.

In Europe, the expectations are already well defined. Under the GDPR, if automated systems are making decisions that significantly affect candidates, companies are expected to disclose this. Candidates also have the right to understand how those decisions are made and, in some cases, to request human intervention. The EU AI Act takes this further: hiring tools that influence decisions are classified as “high-risk,” which brings stricter requirements around transparency, documentation, and oversight.

The US is taking a slightly different route, but the intent is similar. There isn’t a single federal law covering AI in hiring, but regulators are paying attention to bias and fairness. New York City’s Local Law 144, for example, requires bias audits of automated hiring tools and notice to the candidates they evaluate. The focus here is less on disclosure alone and more on ensuring that AI does not lead to discriminatory outcomes. Even where disclosure is not strictly enforced, companies are expected to be able to explain how their systems work.

In India, regulation is still evolving. No law today directly requires companies to disclose AI use in hiring. However, this doesn’t mean companies can ignore it. Many organisations hiring in India are global or work with international clients, which means they often adopt global compliance standards by default. In practice, this leads to more transparency, even without local legal pressure.

This is where things get important for GCCs and cross-border hiring.

A company might be based in one country, hiring candidates in another, and using tools built elsewhere. In such cases, the strictest applicable standard often becomes the safest approach. If one part of your hiring process falls under stricter regulation, it affects how the entire process should be designed.

What this means in practice is simple.

Even if disclosure is not legally required in every market you operate in, it is becoming a baseline expectation. Candidates are becoming more aware, and regulations are gradually catching up.

The companies that wait for strict enforcement before acting usually end up reacting late. The ones that move earlier tend to build more stable and trusted hiring processes.

So while the laws may differ by region, the underlying principle is consistent.

If AI is involved in evaluating candidates, you should be able to explain it clearly.

When You MUST Disclose AI in Hiring

Not every use of AI in recruiting needs to be explained in detail. But there’s a clear point where disclosure stops being optional and starts becoming necessary.

The easiest way to understand this is to look at impact.

If AI is simply supporting the process, like scheduling interviews or organising applications in the background, there’s usually no need to call it out explicitly. It’s operational. It doesn’t affect whether a candidate moves forward or not.

But the moment AI starts influencing decisions, the expectation changes.

For example, if your system is filtering resumes before a recruiter sees them, that directly affects who gets considered. If candidates are being ranked or scored by an algorithm, that shapes the shortlist. If video interviews are being analysed using AI, even partially, that adds another layer of evaluation.

In all of these cases, AI is not just supporting the process. It is shaping outcomes.

That’s where disclosure becomes important.

Candidates don’t need a technical breakdown of how your system works. But they do need clarity on the fact that part of the evaluation is automated. This helps them understand the process and avoids the feeling that decisions are being made without visibility.

There is also a practical reason to be consistent here.

If a candidate later finds out that AI played a role in their rejection and it was never mentioned, it raises questions. Not just about the tool, but about the transparency of the company. Even if the process was fair, the lack of communication can make it feel otherwise.

Another situation where disclosure matters is when candidates are interacting directly with AI.

This includes chat-based assessments, automated interviews, or any stage where responses are being processed by a system rather than a person. In these cases, candidates should know who or what they are interacting with. It’s a basic expectation, and increasingly, a compliance requirement in some regions.

The same applies when candidates have the option to request human review.

In some markets, if automated decision-making is involved, candidates can ask for their application to be reviewed by a person. If you’re operating in those regions, disclosure is not just about transparency; it’s about enabling that option.

So the rule becomes fairly straightforward.

If AI is making or influencing decisions, or directly interacting with candidates in a way that affects evaluation, it should be disclosed clearly.

Not as a legal formality, but as part of a process that candidates can understand and trust.

Scripts & Templates You Can Actually Use

This is where most teams get stuck. They understand the need for transparency but don’t know how to say it without sounding legal, robotic, or overly formal.

The key is to keep it simple and natural. You’re not making an announcement. You’re just setting expectations.

Here are practical ways to do it across different stages of the hiring process.

Job Description (Short Disclosure)

Add a single line toward the end of the JD. It should feel like part of the process, not a disclaimer.

“Applications may be reviewed using automated tools as part of the initial screening process. All final decisions are made by our hiring team.”

This works because it’s clear, balanced, and doesn’t over-explain.

Application Page (Before Submission)

Place this near the submit button or as a short note below the form.

“Once you apply, your profile may go through an initial automated screening to help us review applications efficiently. Every shortlisted profile is reviewed by our team.”

This reassures candidates that AI is part of the process, but not the only layer.

Application Acknowledgment Email

This is where you reinforce transparency without making the message heavy.

Subject: Application received

“Hi [Name],
Thank you for applying. We’ve received your application and our team will begin the review process shortly. As part of this, we use automated tools for initial screening to manage application volume efficiently. All shortlisted candidates are reviewed by our hiring team before moving forward.”

Simple, direct, and easy to understand.

Interview Stage (If Asked or Relevant)

You don’t need to bring it up unprompted, but if it comes up, or if AI played a role in shortlisting, acknowledge it clearly.

“For the initial screening, we use some automated tools to help us manage applications. From this stage onward, the process is fully handled by our hiring team.”

This keeps the focus on human involvement where it matters.

Candidate FAQ / Support Response

If candidates ask directly, the response should be transparent but not defensive.

“Yes, we use automated tools for parts of the screening process, mainly to manage application volume. However, all hiring decisions involve human review, and no candidate is selected or rejected purely based on automated evaluation.”

Across all of these, the pattern is consistent.

You acknowledge the use of AI, you clarify where it fits, and you reinforce that humans are still involved in decision-making.

That’s enough.

You don’t need technical explanations or long disclaimers. In fact, the more complicated it sounds, the less trust it builds.

Clear, simple communication works better, and it’s easier to maintain across the entire hiring process.

What NOT to Do When Disclosing AI in Hiring

Most mistakes here don’t come from bad intent. They come from trying to either “play it safe” or “avoid making it a big deal.”

Ironically, both approaches create more problems than they solve.

1. Hiding AI Completely

Some teams choose not to mention AI at all, assuming candidates won’t notice or won’t care.

That assumption doesn’t hold anymore.

Candidates already expect some level of automation. When it’s not disclosed and they later realise it, especially after a rejection, it creates distrust. The issue isn’t the use of AI. It’s the lack of clarity.

Even a single line early in the process is enough to avoid this.

2. Using Overly Legal or Technical Language

Another common mistake is going too far in the other direction.

Long disclaimers, complex wording, or terms like “algorithmic decision-making frameworks” make the process feel distant and impersonal. Candidates are not looking for a legal explanation. They just want to understand what’s happening.

If the message sounds like a policy document, most people will either ignore it or misunderstand it.

Clarity works better than precision here.

3. Making It Sound Fully Automated

Phrases like “your application will be processed by our AI system” can create the wrong impression.

Even if technically true at an early stage, it signals that there is little or no human involvement. For many candidates, that reduces confidence immediately.

The goal is to communicate balance.

AI supports the process. It doesn’t replace people.

4. Being Inconsistent Across the Process

Mentioning AI in one place and ignoring it elsewhere creates confusion.

For example, if it appears in the job description but not in emails or interviews, candidates may question how much of the process is actually automated.

Consistency doesn’t mean repetition. It means alignment.

Each stage should reflect the same message in a way that fits naturally into that interaction.

5. Over-Explaining the System

Some teams try to build trust by sharing too much detail—how the model works, what parameters it uses, how scoring is calculated.

This usually backfires.

Most candidates don’t need or want that level of detail. It adds complexity without improving understanding. In some cases, it can even raise more questions than it answers.

Simple explanations are more effective.

6. Treating It as a Compliance Task Only

If disclosure is handled purely as a checkbox for legal reasons, it shows.

The language becomes rigid, the tone becomes formal, and the message feels disconnected from the actual hiring experience.

Candidates can sense that.

Transparency works best when it’s treated as part of the overall candidate experience, not just a requirement to fulfil.

In the end, the goal is not to make AI the centre of the conversation.

It’s to make the process clear enough that candidates don’t have to guess how decisions are being made.

When that clarity is missing, even a well-designed hiring system can feel unfair.

When it’s present, even an automated step feels reasonable.

A Simple Framework for Ethical AI Hiring

Most teams don’t need a complex policy to get this right. What they need is a clear way to think about how AI fits into their hiring process.

A useful way to approach this is through three principles: transparency, consent, and oversight. When these are in place, most of the common issues around AI in hiring are significantly reduced.

Transparency comes first. Candidates should know where AI is being used and what role it plays. This doesn’t require detailed explanations or technical language. It simply means being upfront about whether any part of their evaluation involves automation. When candidates understand the process, they are far less likely to question its fairness.

Consent is the next layer. In some regions, this is a legal requirement. In others, it’s still emerging. But even where it’s not mandatory, giving candidates a sense of control improves the experience. This could be as simple as informing them before an automated assessment or allowing them to request a human review if needed. The idea is not to complicate the process, but to avoid making it feel one-sided.

Oversight is what keeps the system balanced. AI should not be making final decisions in isolation. There should always be a point where human judgment comes in—whether that’s during shortlisting, interviews, or final selection. This ensures that decisions are not driven purely by patterns or past data.

When these three elements are aligned, the process becomes more stable.

AI handles efficiency without creating confusion. Candidates understand how they are being evaluated. Hiring teams retain control over decisions instead of relying entirely on automated outputs.

This framework is not about limiting the use of AI. It’s about using it with clear boundaries.

And that’s usually the difference between a process that feels fair and one that doesn’t.

AI Disclosure in GCC and Global Hiring

This is where disclosure stops being a “nice to have” and becomes a practical necessity.

In local hiring, you’re usually operating within one regulatory environment and one set of candidate expectations. Once hiring becomes global, especially in GCC setups, that consistency disappears. You’re dealing with candidates from different regions, each with different expectations around privacy, transparency, and fairness.

And AI sits right in the middle of that complexity.

A candidate applying from Europe may expect clear disclosure because of GDPR norms. A candidate in the US may be more concerned about fairness and bias. In India, awareness is growing, and candidates are starting to ask more questions about how their applications are evaluated, especially for global roles.

The challenge is that your hiring process is still one system.

If AI is being used anywhere in that process, it needs to hold up across all these expectations, not just the least strict one. This is why many global teams don’t wait for local regulations. They standardise their approach based on the highest bar.

Disclosure becomes part of that standard.

There’s also a practical risk in not doing this well.

In GCC hiring, roles are often high-impact. Teams are lean, expectations are high, and hiring mistakes are costly. If candidates feel the process is unclear or overly automated, it affects not just conversion rates but also the kind of talent you attract.

Strong candidates, especially those evaluating multiple global opportunities, pay attention to how structured and transparent the process is. They’re not just assessing the role. They’re assessing how decisions are made.

Another layer to consider is cross-border compliance.

You may be sourcing candidates in one country, processing applications in another, and using tools built elsewhere. That creates overlapping obligations. In such cases, the safest approach is consistency: clear disclosure across the board, regardless of where the candidate is located.

This doesn’t mean making the process more complex.

In fact, the more global your hiring becomes, the simpler your communication needs to be. A clear, consistent message about how AI is used removes ambiguity and reduces the risk of misinterpretation across regions.

In GCC environments, where teams are built to operate across geographies, this clarity becomes even more important.

You’re not just hiring talent. You’re building a system that needs to work across different expectations.

And disclosure, done right, is what keeps that system aligned.