The invisible bias in your hiring process
Your recruitment team invested in an AI tool for CV screening. Review times dropped by 70%, the team is happy, and the efficiency metrics look great. But there is a problem the dashboards do not show: the AI is systematically discarding qualified candidates based on discriminatory patterns inherited from historical data.
This is not a hypothetical scenario. It is the documented reality of most automated screening tools that were not designed with fairness as a priority.
Key Takeaway
78% of AI screening tools exhibit some level of detectable bias. It is not a question of whether your tool has bias — it is a question of how much bias it has and whether you are measuring it. The good news: with the right practices, bias can be reduced by 72%.
How bias infiltrates AI
AI does not invent biases — it learns them. And it learns them from the data it was trained on. If a company historically hired more men for engineering roles, the AI will learn that "male + engineering = good candidate" and replicate that pattern.
Training data bias
The most common and most insidious problem. If historical hiring data reflects biased decisions (which in most companies, it does), the AI will amplify those biases at scale.
Proxy bias
Even when explicit demographic variables (gender, age, ethnicity) are removed, AI can use proxies: university name as a proxy for socioeconomic class, address as a proxy for ethnicity, graduation date as a proxy for age.
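One way to surface proxies is to try to predict the protected attribute from the supposedly neutral features: if even a simple model succeeds well above the base rate, those features are leaking demographic information. The sketch below illustrates the idea on made-up data where "university" perfectly encodes gender; the column names and numbers are illustrative only.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Illustrative feature table: gender was "removed" from the model inputs,
# but a correlated feature (university) may still encode it.
df = pd.DataFrame({
    "university": [0, 0, 0, 1, 1, 1, 0, 1] * 25,  # encoded university id
    "years_exp":  [3, 5, 7, 4, 6, 8, 2, 9] * 25,
    "gender":     [0, 0, 0, 1, 1, 1, 0, 1] * 25,  # protected attribute, held out in production
})

# If a simple model can predict the protected attribute from the remaining
# features, those features are acting as proxies.
X, y = df[["university", "years_exp"]], df["gender"]
accuracy = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
print(f"Protected attribute recoverable with {accuracy:.0%} accuracy")
# Accuracy far above the ~50% base rate means demographic information leaks through.
```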
Format and style bias
AIs trained predominantly on resumes of a certain style penalize alternative formats. A functional resume (common among people with employment gaps or career changes) may receive lower scores than a chronological one, regardless of the candidate's competencies.
The 5 most common types of bias in CV screening
1. Gender bias
Multiple studies demonstrate that AI tools score resumes with male names higher for technical and leadership roles. The most famous case was Amazon, whose screening tool automatically penalized resumes containing the word "women's" (as in "women's chess club").
2. Ethnic bias
Names associated with certain ethnic groups receive callback rates up to 36% lower. AI replicates this pattern when trained on uncorrected historical hiring data.
3. Age bias
AIs tend to penalize older graduation dates, employment gaps, and "legacy" technologies on resumes. This systematically discriminates against candidates over 45, regardless of their experience and capability.
4. Educational bias
Preference for elite universities is one of the hardest biases to detect because it is often considered "justified." However, evidence shows that university prestige is a weak predictor of job performance compared to actual skills and experience.
5. Geographic bias
AI tools can associate certain locations with higher or lower candidate quality, replicating historical patterns of geographic discrimination.
How to audit your AI tool
Disparate impact testing
The 80% rule (or four-fifths test) is the industry standard: if a protected group's selection rate is less than 80% of the highest group's selection rate, there is evidence of disparate impact.
Practical example: if your AI selects 50% of male candidates and 35% of female candidates, the ratio is 35/50 = 70%, below the 80% threshold. This indicates gender bias.
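The calculation itself is trivial, which means there is no excuse not to run it. A minimal sketch in Python, using the illustrative numbers from the example above:

```python
def impact_ratio(selected_a: int, total_a: int, selected_b: int, total_b: int) -> float:
    """Four-fifths test: ratio of the lower selection rate to the higher one."""
    rate_a, rate_b = selected_a / total_a, selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Numbers from the example above: 50% vs. 35% selection rates.
ratio = impact_ratio(selected_a=50, total_a=100, selected_b=35, total_b=100)
print(f"Impact ratio: {ratio:.0%}")  # 70%
if ratio < 0.8:
    print("Below the 80% threshold: evidence of disparate impact")
```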
Controlled A/B testing
Submit the same resume with different names (one typically male, one typically female; one associated with the majority group, one with a minority group) and compare the scores. Significant differences are evidence of bias.
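A minimal sketch of such a paired test follows. The `score_resume` function is a hypothetical stand-in for your tool's scoring call (here a toy placeholder so the example runs end to end); the template and name pairs are illustrative.

```python
from statistics import mean

def score_resume(text: str) -> float:
    # Hypothetical stand-in for your screening tool's scoring call.
    # Toy placeholder (text length) so the example runs end to end.
    return float(len(text))

RESUME_TEMPLATE = "{name}\nSoftware engineer, 8 years of experience in Python and SQL."

# Pairs of names that differ only in the demographic signal they carry.
NAME_PAIRS = [("James Miller", "Maria Miller"), ("Gregory Walsh", "Lakisha Walsh")]

gaps = []
for name_a, name_b in NAME_PAIRS:
    score_a = score_resume(RESUME_TEMPLATE.format(name=name_a))
    score_b = score_resume(RESUME_TEMPLATE.format(name=name_b))
    gaps.append(score_a - score_b)

print(f"Mean score gap across identical resumes: {mean(gaps):+.2f}")
# A consistently non-zero gap is evidence of name-based bias.
```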
Cohort analysis
Each quarter, review advancement rates by demographic group at every stage of the funnel. If there are consistent disparities, investigate the cause.
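A sketch of what that quarterly review can look like, applying the four-fifths threshold stage by stage; the funnel counts are made up for the demonstration:

```python
import pandas as pd

# Made-up funnel counts: candidates per demographic group at each stage.
funnel = pd.DataFrame(
    {"applied": [400, 300], "screened": [200, 110], "interview": [80, 30], "offer": [20, 6]},
    index=["group_a", "group_b"],
)

stages = ["applied", "screened", "interview", "offer"]
for prev, curr in zip(stages, stages[1:]):
    rates = funnel[curr] / funnel[prev]       # advancement rate per group
    ratio = rates.min() / rates.max()         # disparity between groups
    flag = "  <- investigate" if ratio < 0.8 else ""
    print(f"{prev} -> {curr}: impact ratio {ratio:.0%}{flag}")
```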
Mitigation strategies that work
Blind screening
Remove identifiable information from resumes before AI analysis: name, photo, address, university, graduation date. Selenios implements automatic blind screening that hides these variables while preserving all relevant competency and experience information.
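Conceptually, blind screening is straightforward. A minimal sketch (not the Selenios implementation; the field names are illustrative, and a real resume parser would produce a richer structure):

```python
# Illustrative identifiable fields to strip before scoring.
REDACTED_FIELDS = {"name", "photo", "address", "university", "graduation_date"}

def blind(resume: dict) -> dict:
    """Drop identifiable fields before the resume reaches the scoring model."""
    return {field: value for field, value in resume.items() if field not in REDACTED_FIELDS}

candidate = {
    "name": "Ana Torres",
    "address": "Bogota",
    "university": "Elite U",
    "graduation_date": "1998",
    "skills": ["Python", "SQL"],
    "experience_years": 12,
}
print(blind(candidate))  # {'skills': ['Python', 'SQL'], 'experience_years': 12}
```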
Balanced training datasets
Ensure training data equitably represents all demographic groups. This requires actively curating datasets, not just using historical data "as is."
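One common curation step is resampling so every group is equally represented before training. A sketch with pandas, on made-up data:

```python
import pandas as pd

# Made-up historical data, heavily skewed toward one group.
data = pd.DataFrame({
    "group": ["A"] * 90 + ["B"] * 10,
    "hired": [1, 0] * 45 + [1, 0] * 5,
})

# Upsample every group (with replacement) to the size of the largest one.
target = data["group"].value_counts().max()
balanced = data.groupby("group").sample(n=target, replace=True, random_state=0)
print(balanced["group"].value_counts())  # A: 90, B: 90
```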
Real-time fairness metrics
Implement dashboards that continuously monitor selection rates by demographic group. If a deviation is detected, the system automatically alerts the HR team.
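A sketch of the underlying check such a dashboard would run on each monitoring window; the groups and counts are illustrative:

```python
def fairness_alerts(window: dict[str, tuple[int, int]], threshold: float = 0.8) -> list[str]:
    """window maps group -> (selected, total) for the current monitoring period."""
    rates = {group: sel / total for group, (sel, total) in window.items() if total > 0}
    best = max(rates.values())
    return [
        f"ALERT: '{group}' impact ratio {rate / best:.0%} below {threshold:.0%}"
        for group, rate in rates.items()
        if rate / best < threshold
    ]

# Illustrative weekly window
print(fairness_alerts({"men": (50, 100), "women": (35, 100)}))
# ["ALERT: 'women' impact ratio 70% below 80%"]
```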
Competency-based evaluation
Configure AI to evaluate relevant skills and experience rather than credentials. A candidate should not be discarded because they did not attend a specific university if they have the required competencies.
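As a sketch of the principle, a competency-based score can be built purely from skill overlap and experience, with credentials never entering the formula. The weights and skill list below are illustrative, not a real scoring model:

```python
# Illustrative requirements; university and other credentials never enter the score.
REQUIRED_SKILLS = {"python", "sql", "data modeling"}

def competency_score(candidate_skills: set[str], years_experience: float) -> float:
    skill_match = len(REQUIRED_SKILLS & candidate_skills) / len(REQUIRED_SKILLS)
    experience = min(years_experience / 10, 1.0)  # cap so seniority alone cannot dominate
    return 0.7 * skill_match + 0.3 * experience

print(round(competency_score({"python", "sql"}, years_experience=8), 2))  # 0.71
```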
Human review of borderline cases
Candidates near the cutoff threshold should be reviewed by a human. This is the gray zone where AI bias has the greatest impact.
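A sketch of that routing rule; the cutoff and margin are illustrative parameters you would tune to your own score distribution:

```python
def route(score: float, cutoff: float = 0.60, margin: float = 0.05) -> str:
    """Auto-decide only when the score is clearly above or below the cutoff."""
    if abs(score - cutoff) <= margin:
        return "human_review"
    return "advance" if score > cutoff else "reject"

for score in (0.72, 0.62, 0.58, 0.41):
    print(score, "->", route(score))
# 0.72 -> advance, 0.62 -> human_review, 0.58 -> human_review, 0.41 -> reject
```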
Regulatory framework: what you need to know
NYC Local Law 144
Since 2023, companies in New York using AI tools for employment decisions must conduct annual bias audits by an independent auditor and publish the results.
EU AI Act
The European Union classifies AI used in recruitment as "high-risk," requiring conformity assessments, decision transparency, and mandatory human oversight.
Latin American regulations
Brazil is advancing AI regulation through the LGPD and complementary legislation. Colombia and Chile are developing similar regulatory frameworks. Companies with regional operations should prepare for multinational compliance.
The Selenios approach to fair screening
Selenios was designed from the ground up with fairness as a fundamental principle:
- Native blind screening: demographic variables are automatically hidden before analysis
- Continuous fairness monitoring: real-time dashboards with disparate impact alerts
- Competency-based evaluation: scoring is based on skills and experience, not credentials
- Quarterly audits: every model is regularly audited with diverse datasets
- Full transparency: every AI decision includes a comprehensible explanation
Responsibility is shared
Technology can reduce bias significantly, but it cannot eliminate it alone. HR teams must train in bias recognition, establish review processes, and maintain a culture of continuous improvement in diversity and inclusion.
What types of bias can AI have when reviewing resumes?
The main biases are: gender bias (preference for male names in technical roles), ethnic bias (discrimination based on names or locations associated with ethnic groups), age bias (penalizing older graduation dates or legacy technologies), educational bias (preference for elite universities), and format bias (penalizing non-conventional CV structures). All these biases are learned from historical hiring data that reflects biased human decisions.
How can I audit my AI tool to detect bias?
There are three main methods: first, controlled A/B tests that submit otherwise identical resumes with different demographic signals such as names. Second, disparate impact analysis using the 80% rule to compare selection rates between groups. Third, quarterly cohort analysis reviewing advancement rates by demographic group at each funnel stage. Tools like Selenios automate these audits.
What regulations require transparency in AI for recruitment?
NYC Local Law 144 has required annual bias audits for AI hiring tools since 2023. The EU AI Act classifies recruitment AI as high-risk, requiring conformity assessments and human oversight. In Latin America, Brazil is advancing regulation through the LGPD, while Colombia and Chile are developing similar frameworks. Companies should prepare for global compliance.