Artificial Intelligence (AI) has revolutionized various aspects of our lives, including the recruitment process. In a thought-provoking keynote delivered by Anita Lettink at the Breaking Bias Summit, the intersection of AI, recruitment, and bias was explored.
The role of AI in recruitment
AI is becoming increasingly integrated into recruitment and HR processes: approximately 92% of HR leaders plan to increase their use of AI in at least one area of HR within the next 12-18 months. And yes, AI has the potential to bring numerous benefits, such as increased efficiency, improved decision-making, and more personalised experiences for candidates.
However, before using AI, have you thought about the following:
- What steps must you take to ensure AI algorithms do not perpetuate biases and discrimination present in training data?
- How can you maintain fairness and equity throughout your recruitment process when using AI?
- What are the potential legal challenges and ethical dilemmas arising from unregulated AI usage in hiring?
- How can you protect against privacy violations, data security breaches, and ensure compliance with regulations?
- How can you strike a balance between AI automation and the valuable human aspects of candidate assessment, such as soft skills, cultural fit, and subjective judgments?
- What role should human judgment play alongside AI in your recruitment efforts?
Here’s the catch: as much as AI might seem like the silver bullet that makes recruitment easier than ever, it also has the potential to result in discriminatory hiring practices.
Unintended bias in AI algorithms
Lettink emphasized that AI algorithms can perpetuate biases and that, sadly, the majority of companies using AI are not even aware of this. The following examples serve as a reminder that the biases present in AI algorithms can amplify existing inequalities and contribute to discriminatory hiring practices.
Danger 1: AI-powered, video-based evaluations
Have you noticed the growing trend of video-based evaluations in recruitment, where AI algorithms analyze facial expressions and responses to assess candidates’ suitability? On paper this sounds amazing; however, a study conducted by Cambridge University demonstrated that AI-powered video-based evaluations are more harmful than we might have initially imagined.
The researchers questioned the effectiveness of video-based recruitment solutions that analyze traits based on facial expressions and responses. The study involved a person speaking to the camera and being scored on the OCEAN criteria, which assess the traits of openness, conscientiousness, extroversion, agreeableness, and neuroticism. In the baseline scenario, the person received a high score, indicating a promising candidate.
Something unexpected happened when the researchers adjusted the contrast, brightness, and saturation of the video: solely by changing the brightness, the candidate’s score dropped significantly. The facial expressions remained unchanged; only the image quality was altered, simulating the camera of a cheap laptop. This observation should raise doubts about the reliability of video-based recruitment tools, as a candidate’s score should not be affected by such superficial factors.
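To see why this failure mode is plausible, here is a minimal sketch in Python. It is emphatically not the model from the Cambridge study, and every number in it is invented: the point is that any scorer whose features are not invariant to brightness will change its verdict when only the camera changes.

```python
# Toy illustration: a naive "personality" scorer whose features leak
# camera quality. All values are invented for demonstration.

def extract_features(frame):
    """Pretend feature extractor: mean and spread of pixel intensities.
    Real systems use learned features, but brightness and contrast can
    leak into those in exactly the same way."""
    mean = sum(frame) / len(frame)
    spread = max(frame) - min(frame)
    return mean, spread

def openness_score(frame):
    """Hypothetical score that (wrongly) correlates with image statistics.
    The weights are arbitrary; the brightness sensitivity is the point."""
    mean, spread = extract_features(frame)
    return round(0.4 * (mean / 255) + 0.6 * (spread / 255), 3)

frame = [90, 120, 150, 180, 130, 110]       # same face, decent webcam
darker = [int(p * 0.6) for p in frame]      # same face, dimmer cheap camera

print(openness_score(frame))   # 0.416
print(openness_score(darker))  # 0.249 -- lower score from brightness alone
```

The candidate did nothing differently in the two "recordings"; only the pixel values were scaled, yet the score dropped.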
Danger 2: Applicant Tracking Systems
ATS (Applicant Tracking Systems) have become increasingly prevalent in companies, and automation is highly valued. However, there is a growing awareness of the prominent role algorithms play in these ATS systems, determining which candidates are deemed suitable for specific vacancies.
In fact, a significant figure in the industry, the CEO of ZipRecruiter, estimates that around three-quarters of all job resumes in the United States are read and evaluated solely by algorithms. This means that the majority of resumes never even reach human eyes for consideration. You must have noticed the widespread advice online for candidates on how to optimize their resumes and cover letters to successfully navigate this initial algorithmic screening process. This advice often encourages candidates to embellish their accomplishments or use unconventional formatting tricks (such as copy-pasting the job description and responsibilities into the resume in white font 🤦).
It is worth mentioning that these algorithms, particularly those powered by artificial intelligence, have been found to be more biased than anticipated. The stories of biased algorithms drawing conclusions based on specific demographics, such as considering males in their 30s and 40s the ideal candidates for programming roles, are never-ending. A well-known instance is Amazon’s experimental recruiting tool, which penalised female candidates because the algorithm had learned what a “suitable” candidate looked like from the profiles of its predominantly male workforce.
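How does a screening model end up like this? The sketch below is a deliberately simplified, invented illustration (not Amazon’s actual system, and the training data is made up): when the historical hires come from a male-dominated team, even a naive model learns to treat tokens like "women's" as negative signals, so two otherwise identical resumes receive different scores.

```python
# Toy sketch of proxy bias learned from historical hiring data.
# All resumes and outcomes below are invented for illustration.

from collections import Counter

# Hypothetical training set: (resume tokens, hired?) pairs reflecting
# a historically male-dominated engineering team.
history = [
    (["python", "chess", "club"], True),
    (["java", "football"], True),
    (["python", "women's", "chess", "club"], False),
    (["c++", "robotics"], True),
    (["python", "women's", "coding", "society"], False),
]

def learn_weights(history):
    """Naive per-token weight: hire rate among resumes with that token."""
    seen, hired = Counter(), Counter()
    for tokens, was_hired in history:
        for t in set(tokens):
            seen[t] += 1
            if was_hired:
                hired[t] += 1
    return {t: hired[t] / seen[t] for t in seen}

def score(tokens, weights):
    """Average learned weight of the resume's known tokens."""
    known = [weights[t] for t in tokens if t in weights]
    return sum(known) / len(known) if known else 0.5

weights = learn_weights(history)
base = score(["python", "chess", "club"], weights)
flagged = score(["python", "women's", "chess", "club"], weights)

print(weights["women's"])  # 0.0 -- the token itself became a penalty
print(base > flagged)      # True -- identical skills, lower score
```

The model never saw a gender field; it simply reproduced the pattern in its training data, which is exactly the mechanism behind the reported real-world cases.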
Danger 3: Your best friend and worst enemy - ChatGPT
While AI can be useful, we must exercise great caution when dealing with people-centric matters. So let’s talk about ChatGPT. It’s crucial to understand that ChatGPT cannot assist in finding the right candidate for a job. Yes, it can help standardize the information provided by all candidates. However, this approach may result in candidates appearing similar on paper, making it challenging for you to identify the best fit for the position.
Let’s consider a practical example: the role of Head of Customer Success. This vacancy, taken from the Equalture website, serves as a reminder that AI utilization extends beyond HR and recruitment teams. As a candidate, Anita decided to apply for this position herself. All she had to do was copy the text from the job listing into ChatGPT and ask it to create what seemed to be a perfect cover letter and resume.
After reading it for just 15 seconds, Anita was convinced that if she were the hiring manager for the role, she would invite this candidate for an interview. The application appeared flawless, except for a minor mistake in the resume (a typo or formatting error).
However, it is crucial to remember that even with the use of algorithms, the responsibility for decisions still lies with the company, the recruiter, and the HR personnel.
The responsibility falls on YOU.
Failing to explain the reasoning behind a particular decision can have serious consequences. In fact, there have been cases where companies have had to settle with employees because they couldn’t justify why they were fired solely based on algorithmic decisions.
In such cases, judges have questioned these decisions, resulting in financial settlements for the employees. Thus, companies and recruiters must remain accountable for the choices they make, even when employing algorithms in their decision-making processes.
These revelations indicate that future AI-powered systems may exhibit even more bias than we originally perceived.
So what is your role when it comes to navigating AI and bias in recruitment?
Your role in navigating AI and bias in recruitment
Lettink urges HR professionals to be proactive in familiarizing themselves with AI technologies. While AI can provide efficiencies and help manage large volumes of applications, it is crucial to ensure that these systems are used responsibly and in a way that aligns with organizational values.
Ways to reduce AI-based discrimination
- Always ask the vendor to explain how the algorithms work, how they will change in the future, and how your data, as well as your employees’ data, will be used. Only consider AI systems whose vendors provide satisfactory answers to these questions, as failing to do so can lead to legal issues.
- Ask yourself: is there a clear advantage to using AI?
- Be upfront with candidates and employees about the use of algorithms in decision-making processes.
- Ensure that AI evaluates job-relevant skills and the candidate’s ability to do the job.
- Be upfront about how people are measured to avoid any confusion, and back up your decisions with facts.
- Conduct regular audits on algorithms to address any potential bias.
- Allow people to opt out of AI-based methods.
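For the auditing step, one concrete and widely used check is the “four-fifths rule” that the US EEOC applies as a rule of thumb for adverse impact. A minimal sketch in Python (the outcome counts are invented): compare each group’s selection rate against the best-performing group’s and flag any ratio below 0.8.

```python
# Four-fifths rule audit on a screening algorithm's outcomes.
# The counts below are hypothetical; plug in your own ATS numbers.

def adverse_impact_ratios(outcomes):
    """outcomes maps group -> (selected, applicants).
    Returns each group's selection rate divided by the highest group's
    rate; under the four-fifths rule, a ratio below 0.8 is a common
    flag for potential adverse impact."""
    rates = {g: s / a for g, (s, a) in outcomes.items()}
    top = max(rates.values())
    return {g: round(r / top, 2) for g, r in rates.items()}

# Hypothetical output of an algorithmic resume screen
outcomes = {"men": (60, 100), "women": (30, 100)}
ratios = adverse_impact_ratios(outcomes)
flagged = [g for g, r in ratios.items() if r < 0.8]

print(ratios)   # {'men': 1.0, 'women': 0.5}
print(flagged)  # ['women'] -> this screen warrants a closer look
```

A failing ratio is not proof of discrimination on its own, but it tells you exactly where to start asking the vendor the questions listed above.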