HR in the age of the machine
Artificial Intelligence is disrupting recruiting. However, using AI may significantly influence a company’s reputation and employer branding in unforeseen ways. As part of our collaboration with Compass Ethics on issues at the frontier between ethics and communication, we explain what companies should look out for when planning to implement the new technology in their recruitment processes.
Artificial Intelligence (AI) is disrupting Human Resources (HR). A special area of interest is its use during the various stages of recruitment. Developers have long been offering commercial AI tools that help companies source, screen and assess potential job candidates, and even conduct automated job interviews. The recent surge of attention to AI has given the topic new momentum.
However, using AI in recruiting comes with its own set of risks that may significantly influence a company’s reputation and employer branding. Notably, the recently passed EU AI Act classifies AI systems used in recruitment as high-risk, demanding special attention from employers.
In this article, we provide an overview of:
the main business cases for using AI in recruiting;
the three major challenges to look out for when you automate your processes; and
our conclusions and suggestions for companies on how (and how not) to implement AI in their recruitment.
The business case for AI in recruiting
The business cases proposed for these new solutions are similar to those broadly associated with the technology: AI makes it possible to automate and accelerate what used to be manual tasks, saving companies time and money and increasing their profits. By allowing companies to process larger pools of applicants, AI can enable businesses to expand opportunities more widely and to find candidates with rare and valuable qualities. It could also save applicants valuable time during their job hunt, as automated interviews with an AI require no prior appointment and can be conducted whenever they fit a candidate’s schedule.
In recruiting specifically, companies hope that AI will bring a number of additional advantages for potential employees. For example, AI could in theory reduce the influence of human prejudices in the recruiting process, such as biases against women and minority groups, biases in favor of people with certain physical attributes, recency biases when processing information, and judgments that vary with the mood or relative hunger of a recruiter or interviewer. These biases inhibit reliability and diversity in hiring decisions, which is both ethically problematic and strategically suboptimal. Proponents of AI can thus point to reasons of efficiency and fairness alike for its application to recruiting.
Challenges of AI in recruiting
However, anecdotal experience so far reveals some unintended consequences. Some of these raise ethical concerns, while others amount to poor employer-branding communication. Companies should be aware of these challenges if they want to implement AI in their recruiting in a meaningful way.
Challenge #1 – Training AI to identify ideal candidates: automating bias
First, a major problem with AI can occur in how models are trained to identify ideal candidates, as efforts to eliminate certain sources of bias may introduce biases of other kinds. To ‘understand’ how to distinguish high-potential candidates from less desirable ones, an AI is usually trained on a large body of application documents. This training data can itself be biased, however, if it is not corrected for the disadvantages that many marginalized groups face in the real-world job market. If, for example, the sample of highly ‘promising’ applications mainly contains male candidates from a white middle- to upper-class background (as women and non-white candidates are often discriminated against when it comes to high-paying positions), an AI may inadvertently learn to treat those characteristics themselves (being male, or having a specific cultural background) as preferable. In that case, the technology does not solve the problem of discrimination but reinforces it.
A prominent example is the recruitment algorithm scandal at Amazon. By 2015, the online marketplace had discovered that its internally developed AI HR tool was deeply prejudicial to women, and it ultimately scrapped the project. The model had been trained on past hiring data at Amazon, where top positions have historically been held by men. As a result, it learned to down-rank résumés that included the word ‘women’s’ and other traits associated with women, such as attending a women’s college or captaining a women’s sports team. The algorithm also preferred résumés containing what media outlets have described as “masculine language”, such as bellicose verbs like ‘executed’ or ‘captured’.
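To illustrate the mechanism, here is a deliberately simplified sketch, using synthetic data and a toy scikit-learn classifier (not Amazon’s actual system): a text model trained on historically biased hiring labels learns to penalize a word that merely correlates with a protected group.

```python
# Minimal sketch: a classifier trained on biased historical hiring labels
# learns to penalize a token ('womens') that correlates with a protected
# group but says nothing about competence. Data is entirely synthetic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training set: past resumes and whether the (biased) process hired them.
resumes = [
    "software engineer executed large migration",         # hired
    "captain of chess club executed trading strategy",    # hired
    "led backend team captured market share",             # hired
    "womens college graduate led robotics project",       # rejected
    "captain of womens soccer team built data pipeline",  # rejected
    "womens coding society founder shipped mobile app",   # rejected
]
hired = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: the three most negative tokens.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])
```

In this toy example, ‘womens’ receives one of the most negative weights even though it carries no information about job performance, which is exactly the failure mode reported at Amazon.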
Challenge #2 – Application screening: communication becomes deceit
Other challenges arise when AI is used to screen applications. By turning a human-to-human exchange into a human-to-machine interaction, it fundamentally changes the nature of the process, essentially incentivizing applicants to ‘game the system’.
When a job-seeker knows their application will be reviewed by a human, the documents serve as a means of communication. Their intent is to convince by highlighting personal qualities, be they specific skills, achievements or desirable personality traits. While such an exchange involves a level of persuasion, applicants will naturally tend toward honestly presenting what they have to offer. A potential employee can reasonably expect the person reviewing their application to read it with some social context: the reviewer will not simply tick features off a checklist (such as a degree from a prestigious university), but will also weigh the candidate’s overall ‘fit’. They can also expect a human to recognize – and value – honest effort put into an application, and to distinguish it from cheap ingratiation.
In an automated process, screening becomes a black box from the applicant’s perspective. As companies rarely publish the criteria by which their AI conducts its review, candidates are left to speculate about how to optimize their materials to make sure they ‘pass’ this hurdle and are forwarded to the person actually in charge of recruiting. Since they cannot expect the same level of context-sensitivity from an algorithm, the use of AI may foster mistrust among candidates. This encourages them to use unconventional methods to ‘hack’ their way through the automated phase of the application process, so that their materials get noticed by a human who would otherwise never see them.
Examples of this have already made their way into mainstream (social) media, such as 2023’s ‘white fonting’ trend, in which job hunters advised one another via TikTok to copy a list of relevant keywords (or the job description itself), paste it into their résumé, and change the font color to white. The idea behind this procedure is as simple as it is damaging: AI bots or digital filters in applicant screening systems will read the white text, rate the application as a strong match, and forward the documents to human review. HR personnel will then perceive the materials as ‘normal’, since the white-fonted keywords are invisible to them.
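Why does this work? A simplified sketch of a hypothetical keyword-based screener (not any specific ATS product) makes the mechanism clear: such filters typically score the extracted plain text of a résumé, so styling like a white font color is invisible to them, and hidden keywords count like any others.

```python
# Hypothetical keyword screener: it scores the *extracted plain text* of a
# resume, so font color never reaches it. Hidden white-font keywords are
# therefore indistinguishable from visible ones.
REQUIRED_KEYWORDS = {"python", "kubernetes", "stakeholder", "agile"}

def keyword_score(resume_text: str) -> float:
    """Fraction of required keywords found in the resume text."""
    words = set(resume_text.lower().split())
    return len(REQUIRED_KEYWORDS & words) / len(REQUIRED_KEYWORDS)

visible_part = "Experienced project manager, led cross-functional teams."
# Pasted in white font: a human reviewer never sees it, the filter does.
hidden_part = "python kubernetes stakeholder agile"

print(keyword_score(visible_part))                      # 0.0 -> filtered out
print(keyword_score(visible_part + " " + hidden_part))  # 1.0 -> passed on
```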
Even though HR professionals have discouraged the practice, it shows that AI will change the character of recruiting processes, prompting job-seekers to adopt new, potentially detrimental application strategies.
Challenge #3 – Job interviews and assessments: efficiency becomes neglect
Further, implementing AI in assessments and interviews may be perceived as disparaging by many candidates – potentially some of the most valuable ones. With the ongoing shortage of skilled workers in the job market, companies today are increasingly competing to hire and keep talent. Young candidates in particular are increasingly aware of their own worth as employees and apply high standards to their current and future places of work. They have a variety of employers to choose from and are willing to change jobs if they feel unsatisfied, mistreated or underappreciated. When searching for a new job, candidates today therefore often look out for red flags, to avoid investing their time in firms that do not reflect their values or priorities.
Among these, failures of clear and fair communication, such as negative interview experiences or vagueness, are regularly cited as some of the major turn-offs for potential applicants.
And while young talent may generally be open to working with AI, they are also far better educated about its potential shortcomings and less likely to succumb to the illusion that a chatbot actually understands what they are saying the way a human would. They perceive AI as the machine it is and see through the efficiency objective underlying its use in recruiting. As a consequence, even if they understand the company’s rationale from a financial point of view, they may also read the use of AI as an unwillingness to invest in a real, two-sided, human-to-human interaction. They notice the one-sidedness of the communication and interpret it as a lack of genuine interest in and appreciation of them as a potential employee.
At the same time, the use of AI in job interviews strips candidates of the opportunity to probe a potential employer for themselves. This may significantly discourage high-potential candidates from giving a company a shot. On Reddit, candidates react to the increasing use of chatbots for job interviews with a mixture of frustration and sardonic humor. One user summarized the issue as follows:
“[…] Speaking to a recruiter is a two-way street. A qualified recruiter can suss out if a candidate is going to be a good culture fit, and candidates can learn more about the culture of the company and if it's somewhere they want to be. What does is say about a company's culture when they care so little about their applicants and their staff that they put a […] AI as the first step? What kind of first impression does that leave?[…]”
Others are already coming up with creative counter-moves:
“So by this logic, as a candidate, I could use an AI to give my answers, surely. A job interview is supposed to be a 2 way street.”
Conclusions
All this makes the use of AI in recruiting a double-edged sword. While companies need not refrain from it entirely, they should be deliberate about where and how they implement it – and where they intentionally refrain from doing so. In sum, we offer three recommendations:
First, companies should monitor the AI tool of their choice to make sure no unwanted biases enter its screening process (a minimal sketch of one such check follows below). Additionally, they should communicate these efforts externally so as not to raise suspicion among potential applicants.
Second, companies should be transparent early on about their use of AI in screening and about the criteria it applies, in order to foster trust and discourage unwanted ‘hacking’ behavior.
Third, when it comes to assessments and interviews, companies should be aware of how an over-reliance on AI may be perceived by candidates. In these areas, it may be best to keep human interaction as the first point of contact, or to employ AI only in a moderated format.
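What could the bias monitoring from our first recommendation look like in practice? One common, simplified check is the ‘four-fifths rule’ from US employment guidelines: compare the pass rates of the screening tool across demographic groups and flag any group whose rate falls below 80% of the highest group’s. The sketch below uses hypothetical group labels and data.

```python
# Minimal adverse-impact check (four-fifths rule). Group labels and the
# screening log below are hypothetical illustration data.
from collections import Counter

def pass_rates(records):
    """records: list of (group, passed_screening) tuples."""
    passed, total = Counter(), Counter()
    for group, ok in records:
        total[group] += 1
        passed[group] += int(ok)
    return {g: passed[g] / total[g] for g in total}

def adverse_impact_ratios(records):
    """Each group's pass rate relative to the best-performing group."""
    rates = pass_rates(records)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

screening_log = [("group_a", True)] * 40 + [("group_a", False)] * 10 \
              + [("group_b", True)] * 20 + [("group_b", False)] * 30

for group, ratio in adverse_impact_ratios(screening_log).items():
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

A real monitoring setup would track more metrics and far larger samples, but even a check this simple can surface the kind of skew seen in the Amazon case.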
Was this article helpful to you? If you are facing similar or other challenges with your employer branding, we’re happy to help! Contact us for more information or to arrange a first meeting with our experts!