Bias in the code: How AI recruitment processes can land employers in hot water

South Africa

Artificial intelligence (“AI”) is transforming recruitment processes as we know them, offering speed and efficiency. But what happens when an AI platform perpetuates bias? The risk is not merely theoretical. A recent case in the U.S. District Court for the Northern District of California highlights how AI can create significant legal challenges for global companies. In Mobley v Workday Inc., an AI hiring platform allegedly excluded certain groups of applicants from job opportunities, raising a red flag about how fair and neutral such platforms really are.

Mobley v Workday Inc.

Derek Mobley, a black American man over the age of 40 living with a disability, alleged that the AI applicant screening platform of Workday, a human capital management company, discriminated against him and similarly situated applicants. Workday’s algorithms allegedly and systematically disadvantaged black applicants, older applicants and applicants with disabilities, leading to their exclusion from job opportunities.

The U.S. District Court for the Northern District of California allowed Mobley’s claims to move forward, accepting that he had plausibly alleged that, even absent intentional discrimination by Workday, its AI had a disproportionately negative impact on black applicants, older individuals and people with disabilities. The case highlights a key issue: even neutral-seeming AI platforms can replicate biases if the data they are trained on is skewed. The ruling serves as a lesson to global companies, including those operating in South Africa, where employment equity laws require employers to eliminate unfair discrimination in any employment policy or practice.

The core issue is the potential for AI systems to replicate biases in the data on which they are trained. AI relies on data to predict outcomes and make recommendations. If that data reflects societal biases – whether based on race, gender, age or disability – the AI is likely to perpetuate them.

The concerns raised by Mobley are relevant for South Africans, as the Employment Equity Act 55 of 1998 (“EEA”) protects job applicants from unfair discrimination. The EEA imposes obligations on employers to ensure fair treatment and prevent unfair discrimination in the workplace. If AI hiring platforms are found to perpetuate biases, especially against historically disadvantaged groups – such as black candidates, women or people with disabilities – employers could face serious legal consequences. The EEA holds employers accountable for discriminatory practices, whether driven by human decision-making or by AI-powered systems.

The Employment Equity Act

Under the EEA, employers are obliged to eliminate unfair discrimination in any employment practice, including recruitment. Section 6(1) of the EEA explicitly prohibits unfair discrimination on grounds such as race, gender, age, disability and religion, among others. Employers must ensure that their hiring processes do not unfairly discriminate against any individual or group, directly or indirectly. Employers do retain the right to implement employment practices that align with the inherent requirements of the job. This means that employers may use criteria that are objectively justifiable, reasonable and necessary for carrying out the duties attached to the job.

If an employee or applicant for employment believes that they have been unfairly discriminated against, they may refer an unfair discrimination dispute to the Commission for Conciliation, Mediation and Arbitration for conciliation within six months of the discriminatory act. If the dispute remains unresolved after conciliation, any party may refer it to the Labour Court (“LC”) for adjudication. If the LC finds that an employee has been unfairly discriminated against, it may make any appropriate order that is just and equitable, including payment of compensation or damages, and/or an order directing the employer to take steps to prevent the same or a similar unfair discrimination from recurring.

Employers can mitigate the risk of biased data by agreeing strong oversight and auditing mechanisms for their AI systems with their AI service providers. Regular reviews of the outputs produced by AI tools should be carried out to detect any unintentional biases. Human oversight is critical, and South African employers should ensure that their recruitment practices are aligned with the EEA. This proactive approach ensures compliance with local employment laws and cultivates an inclusive workplace.
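By way of illustration only, one widely used screening check in U.S. disparate-impact analysis is the “four-fifths rule”: a group whose selection rate falls below 80% of the most-favoured group’s rate is treated as a warning sign warranting closer scrutiny. The sketch below shows how an employer’s audit of an AI tool’s shortlisting outputs might apply that rule; the group names and figures are entirely hypothetical, and the threshold is a rule of thumb, not a South African legal standard.

```python
# Hypothetical audit sketch: flag groups whose shortlisting rate falls below
# 80% of the most-favoured group's rate (the U.S. "four-fifths" rule of thumb).
# All group names and numbers below are illustrative, not real data.

def selection_rates(outcomes):
    """outcomes maps group -> (shortlisted, total_applicants)."""
    return {g: shortlisted / total for g, (shortlisted, total) in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Each group's selection rate relative to the highest-rate group."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

def flag_groups(outcomes, threshold=0.8):
    """Return groups whose adverse-impact ratio falls below the threshold."""
    return [g for g, r in adverse_impact_ratios(outcomes).items() if r < threshold]

# Illustrative screening outcomes from a hypothetical AI platform
audit = {
    "group_a": (90, 200),   # 45% shortlisted
    "group_b": (30, 100),   # 30% shortlisted -> 0.30/0.45 ≈ 0.67, below 0.8
}
print(flag_groups(audit))  # ['group_b']
```

A check like this is only a starting point: a flagged ratio does not itself establish unfair discrimination under the EEA, but it tells the employer where human review of the AI tool’s criteria and training data should focus.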

AI Regulation: South Africa

South Africa is moving toward the formal regulation of AI with the National Artificial Intelligence Policy Framework for South Africa (“Framework”), which aims to align with global AI governance. The Framework seeks public comment on the development of policies that will regulate AI across various industries. One of its key objectives is to ensure the responsible and ethical use of AI and to mitigate risks such as bias and discrimination in AI applications – critical issues in recruitment practices that use AI platforms. Employers in South Africa should be mindful that, although the Framework is still in draft form, it signals a move towards more stringent regulation.

AI offers incredible potential in recruitment, but it carries significant risks if not managed properly. As Mobley v Workday Inc. illustrates, AI can inadvertently perpetuate biases, exposing companies to legal and reputational damage. South African employers must be vigilant, as the legal risks of non-compliance with the EEA are substantial. For AI platforms to truly support fair and equitable employment practices, they must be designed and regularly audited to ensure that the underlying data and algorithms reflect the diversity and realities of the South African workforce. The Framework is the first step towards creating AI-specific laws that will complement existing laws in governing the ethical use of AI systems. In the interim, employers remain accountable under the EEA for their use of AI systems in recruitment where that use results in unfair discrimination.