How AI Can Remove Bias From The Hiring Process And Promote Diversity And Inclusion
Bias, according to science, stems from the human brain’s effort to process the enormous amounts of data it encounters in any given situation. Take, for example, HR teams that handle multiple tasks daily and are hard-pressed for time and mental bandwidth. The pressure of sifting rapidly through hundreds of applications – or interviewing large numbers of candidates per day, as in campus hiring – can be enormously taxing. Under such circumstances, the human brain resorts to generalisations or “shortcuts” – i.e. biases, conscious or unconscious, that filter data and allow it to work faster without being overwhelmed. In recruitment, this means that better-qualified candidates who do not align with the recruiter’s frame of reference can fall through the cracks.
Bias in hiring can affect a business in multiple ways: it inhibits diversity, hurts promotion and retention rates, and leads to poor decision-making. To tackle it, recruiters have developed several tools – blind resumes, video-based interviews, skill-based assessments and the like. These measures, however, still fall short of the desired outcomes; even with the best of intentions, unconscious bias can seep in and influence human interviewers in ways that are hard to recognise.
Clearly, relying solely on human intelligence to devise bias-free recruitment is unlikely to fully succeed, for the simple reason that much of the bias that seeps into recruitment processes is unconscious. Artificial intelligence (AI), however, may just have the answer – and the recruitment industry believes it does.
How does AI combat bias? For one, its superior processing power lets it crunch through masses of data, using algorithms and machine learning, at speeds far beyond the human brain – so no job application is overlooked. It analyses the data clinically, without the baggage of irrational assumptions, and selects the best-suited candidates. What is more, as new data points are fed in, AI improves its ability to recognise high-quality talent.
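To make this concrete, here is a minimal, illustrative sketch of skill-based screening: every application is scored against the required skills for a role, so nothing is filtered out before scoring and the ranking depends only on skill overlap. The skill list, field names and candidate records are hypothetical.

```python
# Illustrative sketch only: rank every application by overlap with the
# required skills, so no candidate is dropped before being scored.

REQUIRED_SKILLS = {"python", "sql", "data analysis"}

candidates = [
    {"id": "A-101", "skills": {"python", "sql", "excel"}},
    {"id": "A-102", "skills": {"java", "sql", "data analysis"}},
    {"id": "A-103", "skills": {"python", "sql", "data analysis"}},
]

def skill_match_score(candidate_skills, required_skills):
    """Fraction of the required skills that the candidate covers."""
    return len(candidate_skills & required_skills) / len(required_skills)

# Every application is scored; nothing is filtered out beforehand.
ranked = sorted(
    candidates,
    key=lambda c: skill_match_score(c["skills"], REQUIRED_SKILLS),
    reverse=True,
)

for c in ranked:
    print(c["id"], round(skill_match_score(c["skills"], REQUIRED_SKILLS), 2))
```

Real screening systems draw on far richer signals (parsed resumes, assessments, structured interviews), but the principle is the same: a score is computed for every application and rests on job-relevant attributes.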
Using AI to boost diversity and inclusion
In the business world, it is now accepted wisdom that a diverse, equitable, and inclusive workforce benefits companies and society at large. For organisations seeking to promote diversity, AI is invaluable in eliminating bias at every stage. Some ways in which this is accomplished are:
- An appropriately designed AI system gives every candidate an equal opportunity to qualify for an opening.
- At an even earlier stage, organisations can use AI – including generative AI tools – to identify flaws in job descriptions and to recognise discriminatory patterns in hiring and performance reviews (a simple check of this kind is sketched below). By correcting these flaws in their processes, HR professionals can promote better inclusivity.
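As a rough illustration of the job-description check mentioned above, the sketch below flags wording that is often cited as exclusionary or gender-coded and suggests an alternative. The word list is a small hypothetical sample, not an authoritative lexicon; production tools, including generative AI assistants, rely on far richer language models.

```python
# Illustrative sketch only: flag potentially exclusionary wording in a job
# description using a small, hypothetical term list.

import re

FLAGGED_TERMS = {
    "rockstar": "jargon that narrows the applicant pool; describe the skill instead",
    "ninja": "jargon that narrows the applicant pool; describe the skill instead",
    "young": "age-related wording; focus on the required experience",
    "aggressive": "often read as gender-coded; consider 'proactive' or 'driven'",
    "salesman": "gendered job title; consider 'salesperson'",
}

def review_job_description(text):
    """Return (term, suggestion) pairs for flagged wording found in the text."""
    findings = []
    for term, suggestion in FLAGGED_TERMS.items():
        if re.search(rf"\b{re.escape(term)}\b", text, flags=re.IGNORECASE):
            findings.append((term, suggestion))
    return findings

jd = "We need a young, aggressive rockstar salesman to join our team."
for term, suggestion in review_job_description(jd):
    print(f"'{term}': {suggestion}")
```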
When AI is employed to reduce bias at every stage of hiring, underrepresented candidates get the opportunity to showcase their capabilities. For instance, women, differently-abled individuals and people from socially disadvantaged communities are typically considered less employable than upper-class, socially privileged men, even when they are equally or more talented. AI offers a powerful way to overcome this centuries-old bias by analysing candidates solely on the basis of their skills.
Depending on a company’s recruitment objectives, AI can be programmed to disregard certain demographic attributes or to prioritise others. For instance, a company whose senior management is male-dominated may want to diversify its composition to include more women; AI can then be used to critically examine applications and surface the best-qualified women candidates for the role.
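One straightforward way to make a system disregard certain demographics is to strip those fields from candidate records before any scoring model sees them. The sketch below is a minimal illustration and assumes candidates arrive as dictionaries with hypothetical field names.

```python
# Illustrative sketch only: remove demographic fields from a candidate
# record so that downstream scoring sees only job-relevant attributes.

DEMOGRAPHIC_FIELDS = {"name", "gender", "age", "photo_url", "nationality"}

def mask_demographics(record):
    """Return a copy of the record without demographic fields."""
    return {k: v for k, v in record.items() if k not in DEMOGRAPHIC_FIELDS}

candidate = {
    "id": "C-207",
    "name": "Jane Doe",
    "gender": "F",
    "age": 34,
    "years_experience": 8,
    "skills": ["python", "sql", "leadership"],
}

print(mask_demographics(candidate))
# {'id': 'C-207', 'years_experience': 8, 'skills': ['python', 'sql', 'leadership']}
```

Removing these fields is only a first step: proxy attributes such as postcode, school name or career gaps can still encode demographic information, which is one reason the monitoring described in the next section matters.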
AI has many potential benefits for business, the economy, and for tackling society’s most pressing social challenges, including the impact of human biases. But that will only be possible if people trust these systems to produce unbiased results. AI can help humans with bias – but only if humans are working together to tackle bias in AI.
(Harvard Business Review – October 15, 2019)
AI can show bias too, but is open to correction
Artificial intelligence, despite its spectacular processing power, is only as effective as the information it receives. If, for instance, an AI system is not “taught” to ignore demographic data, it will absorb the biased data fed to it by humans and reproduce those biases in its results.
Developing AI for recruitment therefore requires ethical commitment, careful planning and regular monitoring. Some steps companies should take to correct flaws in their AI are:
- Conduct frequent audits of hiring demographics to identify patterns of discrimination (a minimal example of such an audit follows this list).
- Tweak algorithms to correct the identified flaws.
- Update data sets regularly based on the audit results.
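A demographic audit can start very simply: compare selection rates across groups in past hiring decisions and flag any group whose rate falls below the widely cited four-fifths (80%) threshold relative to the highest-rate group. The sketch below uses hypothetical records; a real audit would run on the company’s actual hiring data and be followed by a proper statistical test.

```python
# Illustrative audit sketch: compute selection rates by group and flag
# groups below the four-fifths (80%) adverse-impact threshold.

from collections import defaultdict

decisions = [  # hypothetical past hiring decisions
    {"group": "men", "hired": True},
    {"group": "men", "hired": True},
    {"group": "men", "hired": False},
    {"group": "women", "hired": True},
    {"group": "women", "hired": False},
    {"group": "women", "hired": False},
]

totals = defaultdict(int)
hires = defaultdict(int)
for d in decisions:
    totals[d["group"]] += 1
    hires[d["group"]] += d["hired"]

rates = {g: hires[g] / totals[g] for g in totals}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

The groups flagged by such an audit are exactly where the algorithm tweaks and data-set updates in the list above should be focused.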
Sources: