Why compliance matters in AI recruitment
The adoption of artificial intelligence in hiring processes is accelerating across Latin America. But this adoption brings legal obligations that many recruitment teams overlook or underestimate.
When an AI agent processes a candidate's resume, conducts a conversational interview, or generates a compatibility score, it is processing personal data, and in some cases sensitive personal data. Each country in the region has specific regulations governing this processing.
Key Takeaway
Using AI in recruitment is not prohibited in any Latam country, but it is regulated. Non-compliance can result in fines up to 2% of gross annual revenue, plus reputational damage. The good news: compliance is achievable with clear processes and the right tools.
This article is for informational purposes only and does not constitute legal advice. Laws and regulations vary by jurisdiction and change frequently. Always consult with qualified local legal counsel before implementing AI recruitment tools in your organization.
Legal framework by country
Mexico: LFPDPPP
The Federal Law on Protection of Personal Data Held by Private Parties (LFPDPPP) is the primary regulation. For AI recruitment, the key requirements are:
- A specific privacy notice that mentions the use of automated tools
- Express consent for personal data processing
- Purpose limited to the selection process
- Rights of access, rectification, cancellation, and opposition (ARCO rights)
- Maximum data retention period defined in the privacy notice
Fines for non-compliance range from 100 to 320,000 UMAs, which at current UMA values exceeds 30 million Mexican pesos at the upper end.
Colombia: Law 1581 of 2012
Colombia has a robust data protection regime supervised by the Superintendence of Industry and Commerce (SIC). Critical points for AI recruitment include:
- Prior, express, and informed authorization from the candidate
- Database registration with the SIC
- Published and accessible data processing policy
- Prohibition on processing sensitive data without explicit authorization
- The candidate can revoke authorization at any time
Argentina: LPDP (Law 25.326)
Argentina was a pioneer in data protection in the region. The law establishes that no one can be compelled to provide sensitive data, and that automated decisions significantly affecting a person must be reviewable by a human. This point is particularly relevant for AI screening.
Brazil: LGPD
The General Data Protection Law (LGPD) is the most rigorous in the region and imposes the highest penalties. For AI recruitment, the fundamental requirements are:
- Legal basis for processing (consent or legitimate interest)
- Data Protection Impact Assessment (DPIA) for high-risk processing
- Right of the data subject to request review of automated decisions
- Appointment of a Data Protection Officer (Encarregado)
- Notification to the ANPD in case of security incidents
Chile: Updated Law 19.628
Chile updated its data protection law in 2024, bringing it closer to European standards. The new regulation introduces the Personal Data Protection Agency as the enforcement body. For AI recruitment, this means active supervision and more severe penalties.
Legal checklist for AI recruitment
This checklist covers requirements common to all countries in the region. Each company should adapt it to its specific jurisdiction.
Consent and transparency
- Update the privacy notice or policy to explicitly mention the use of AI tools
- Obtain informed consent before processing candidate data with AI
- Inform the candidate that automated methods will participate in the evaluation
- Offer the option to request human review of automated decisions
- Document consent in an auditable manner
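One way to make consent auditable (the last item above) is to store each grant as an immutable record carrying the privacy notice version the candidate actually saw, plus a content hash so tampering is detectable. The sketch below is illustrative only; the `ConsentRecord` structure and its field names are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass(frozen=True)
class ConsentRecord:
    candidate_id: str
    privacy_notice_version: str  # which notice text the candidate saw
    purpose: str                 # e.g. "AI-assisted screening for role X"
    granted_at: str              # ISO-8601 UTC timestamp
    channel: str                 # e.g. "web_form", "chat_interview"

def record_consent(candidate_id: str, notice_version: str,
                   purpose: str, channel: str) -> dict:
    """Build an auditable consent entry with a tamper-evident hash."""
    record = ConsentRecord(
        candidate_id=candidate_id,
        privacy_notice_version=notice_version,
        purpose=purpose,
        granted_at=datetime.now(timezone.utc).isoformat(),
        channel=channel,
    )
    payload = asdict(record)
    # Hash over the canonical JSON form; store alongside the record.
    payload["sha256"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return payload
```

In production, records like this would go into an append-only store so consent (and its revocation) can be demonstrated to a regulator years later.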
Data security
- Implement encryption in transit (TLS 1.3) and at rest (AES-256)
- Establish role-based access controls for candidate data
- Conduct periodic security assessments of the AI provider
- Define a security incident response plan
- Maintain access and processing logs for the legally required period
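Role-based access control and access logging (items two and five above) can start as simply as an explicit permission map that every read of candidate data passes through. A minimal illustrative sketch; the role and action names are assumptions:

```python
from datetime import datetime, timezone

# Illustrative permission map; adapt roles and actions to your org.
ROLE_PERMISSIONS = {
    "recruiter": {"read_profile", "read_score"},
    "hiring_manager": {"read_profile"},
    "dpo": {"read_profile", "read_score", "read_audit_log", "delete"},
}

ACCESS_LOG = []  # in production: a durable, append-only store

def can_access(role: str, action: str) -> bool:
    return action in ROLE_PERMISSIONS.get(role, set())

def access(role: str, action: str, candidate_id: str) -> bool:
    """Check permission and log the attempt, whether allowed or denied."""
    allowed = can_access(role, action)
    ACCESS_LOG.append({
        "role": role, "action": action, "candidate": candidate_id,
        "allowed": allowed, "at": datetime.now(timezone.utc).isoformat(),
    })
    return allowed
```

Logging denied attempts as well as granted ones is deliberate: it is the denied accesses that demonstrate the controls actually work.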
Minimization and retention
- Collect only the data necessary for evaluation
- Define maximum retention periods by data type
- Implement automatic deletion when the retention period expires
- Do not transfer candidate data to third parties without authorization
- Anonymize data used to train or improve AI models
Bias auditing
- Periodically evaluate AI models for biases based on gender, age, ethnicity, or other protected characteristics
- Document system fairness metrics
- Implement correction mechanisms when biases are detected
- Maintain bias audit records to demonstrate due diligence to regulators
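A common starting metric for the fairness checks above is the disparate impact ratio: each group's selection rate divided by the reference group's, with values below 0.8 (the "four-fifths" rule of thumb from US employment practice) flagged for investigation. An illustrative sketch, not a complete audit:

```python
def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group, advanced?) pairs. Returns pass rate per group."""
    totals: dict[str, int] = {}
    passed: dict[str, int] = {}
    for group, ok in outcomes:
        totals[group] = totals.get(group, 0) + 1
        passed[group] = passed.get(group, 0) + (1 if ok else 0)
    return {g: passed[g] / totals[g] for g in totals}

def disparate_impact(outcomes: list[tuple[str, bool]],
                     reference_group: str) -> dict[str, float]:
    """Ratio of each group's selection rate to the reference group's.
    Ratios below 0.8 warrant investigation under the four-fifths rule."""
    rates = selection_rates(outcomes)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}
```

A single ratio is not a full bias audit, but logging it per hiring round gives regulators exactly the kind of dated, reproducible record the last checklist item calls for.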
Common mistakes and how to avoid them
Mistake 1: Using the same generic privacy notice
Many companies rely on a general privacy notice that never mentions AI or automated tools. That is a transparency failure and exposes the company to sanctions. The solution is a specific addendum for the selection process that details the tools used.
Mistake 2: Not offering a human review option
Latam data protection laws all recognize, to varying degrees, the candidate's right not to be subject to purely automated decisions. If AI rejects a candidate, there must be a clear mechanism for requesting human review.
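In practice, this means every automated rejection should carry a flag the candidate can trigger and a reviewer can resolve. A minimal sketch of such a record; the structure and field names are assumptions, not a standard:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScreeningDecision:
    candidate_id: str
    outcome: str                      # "advance" or "reject"
    automated: bool = True            # was this decision made by the AI?
    review_requested: bool = False
    reviewer_id: Optional[str] = None # set once a human takes the case

def request_human_review(decision: ScreeningDecision) -> ScreeningDecision:
    """Flag an automated decision for human review; manual ones need none."""
    if decision.automated:
        decision.review_requested = True
    return decision
```

The important property is that the request is recorded on the decision itself, so unreviewed automated rejections are queryable rather than silently final.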
Mistake 3: Retaining data indefinitely
Data from unselected candidates cannot be kept indefinitely. Each jurisdiction has different timelines, but the recommended practice is a maximum of 12 months with explicit consent, and automatic deletion when the period expires.
Mistake 4: Ignoring international data transfers
If your AI provider processes data on servers outside the candidate's country, there is an international data transfer that requires additional safeguards. Verify that your provider has standard contractual clauses or adequate certifications.
How Selenios facilitates compliance
Selenios was designed with compliance built into its architecture from the start. The platform includes:
- Consent integrated into the candidate interview flow
- Configurable retention with automatic deletion
- Complete audit logs for every interaction
- Data processing in the candidate's region
- Exportable bias audit reports
- Human review option at every stage of the process
Compliance is not an obstacle to recruitment innovation. It is a competitive differentiator that strengthens employer branding and protects both the company and candidates.
What data protection laws apply to AI recruitment in Latam?
Each country has its own law: Mexico has the LFPDPPP, Colombia has Law 1581, Argentina has the LPDP (Law 25.326), Brazil has the LGPD, and Chile has the updated Law 19.628. All require informed consent, specific purpose, and security measures for candidate personal data processed by AI tools.
Do I need candidate consent to use AI in screening?
Yes, in every Latam country. The candidate must be informed that an AI tool will process their data, for what purpose, how long data will be stored, and how they can exercise their rights to access, rectify, and delete their information. Consent must be prior, express, and documented.
What happens if my company does not comply with recruitment privacy regulations?
Penalties vary by country but include significant fines. In Brazil, they can reach 2% of gross annual revenue. In Mexico, up to 320,000 UMAs. Beyond financial risk, non-compliance damages employer branding and can trigger individual lawsuits from affected candidates.