However, as labor shortages necessitate widespread recruitment efforts, AI proponents argue that the technology, far from being risky, could help companies make hiring decisions that are more fair—not just faster.
The so-called Great Resignation, a mass workforce restructuring that coincided with the coronavirus pandemic, continues to loom large for businesses, with surveys of corporate leaders revealing that staffing issues remain among the most pressing near-term risks. Despite repeated warnings from regulators and experts about the potential for algorithms to effectively learn from and then magnify human biases, many have turned to AI to beef up their recruitment muscle.
Proponents, on the other hand, argue that removing the human factor can actually help. AI output is easily auditable, and computers can be stripped of some of the hidden biases that can lurk in a person's mind. Because a computer has no hometown, never attended college, and has no hobbies, it will not unconsciously warm to a friendly candidate the way a human recruiter might.
"One candidate might talk about their varsity lacrosse team and when they were captain of the team, and another candidate might say, 'I watched the football game last night,'" said Kevin Parker, who is transitioning from CEO to an executive advisory role at HireVue Inc., a hiring-technology company that provides interview-automation software.
"When you can ask all of the candidates the same question about the skills required for the job, you get a much more equitable result," Mr. Parker said. "As a result, diversity improves."
Many discussions about AI intervening in important decisions have focused on its potential to amplify biases, which can occur when the data set used to train an AI system is itself biased.
"There's a growing realization that these tools can exacerbate bias," said Matissa Hollister, an assistant professor of organizational behavior at McGill University. "I've lost count of the number of times I've heard, 'Keep the humans in human resources.'"
"Even non-shady tools can cause significant backlash," said Dr. Hollister, who recently collaborated with the World Economic Forum on a "tool kit" for AI in HR.
Some businesses have made notable AI mistakes. Amazon.com Inc., for example, reportedly scrapped an algorithm designed to aid in the hiring of top talent after discovering that it would reject candidates who listed on their résumé that they attended women's colleges or participated in women's clubs.
According to Reuters, the tool learned what Amazon was looking for in a candidate by reviewing the backgrounds of people who had submitted résumés over the previous decade, a group that was heavily skewed male. According to The Wall Street Journal, Amazon pursued the project only on a trial basis before abandoning it because the algorithms were too primitive.
The potential for AI to cause harm has piqued the interest of regulators. The Equal Employment Opportunity Commission, a federal employment law enforcer, announced in October that it would investigate the use of AI in employment decisions.
When the agency's move was announced, EEOC Chair Charlotte A. Burrows stated that AI must not become "a high-tech pathway to discrimination." The EEOC did, however, state that it would look into "promising practices" in AI and other emerging tools.
According to Frida Polli, CEO of Pymetrics Inc., while some systems can replicate human biases, others can assist businesses in weeding out bias. Pymetrics, whose clients include McDonald's Corp. and Kraft Heinz Co., employs games to assess candidates' attributes such as attention and risk tolerance and to determine whether they are a good fit for a particular job.
Dr. Polli, a neuroscientist at Harvard University and the Massachusetts Institute of Technology, said Pymetrics' algorithms have been audited by Northeastern University experts to ensure they don't discriminate inadvertently.
The tests do not always have right or wrong answers, but they can help steer a methodical person, for example, toward a job that suits that personality, helping companies find candidates who might otherwise be overlooked.
Some candidates, such as those who did not attend college, did not earn top grades, or do not know someone already working at a company, may be qualified for a job yet go undiscovered in a labor-intensive process that forces recruiters to make quick decisions and rapidly rule out scores of applicants, Dr. Polli said.
"I adore people," Dr. Polli said. "I don't believe we should be disintermediating humans anytime soon. [However], there is no research to support the notion that humans are unbiased."
HireVue's Mr. Parker said the tools his company sells can also help broaden searches. HireVue's automated interviewing software allows a company to interview hundreds or thousands of candidates, who respond to prerecorded questions, and then parses their transcribed responses to gauge candidate attributes, such as how team-oriented a candidate is.
"We're always on the lookout for bias in the process," Mr. Parker explained. "We conduct extensive testing to ensure that something undesirable does not enter the process."
The software, developed after HireVue's founder noticed he was being passed over by employers looking for Ivy League graduates, interviews about a million people per month and has been used to conduct interviews in 40 languages. According to Mr. Parker, the company has an AI ethics advisory board that helps it navigate ethical issues.
"It's not meant to be a replacement for personal engagement," Mr. Parker said of his product. "It's just determining the most efficient way to determine who you should be talking to."
At the same time, Mr. Parker believes an AI-powered platform has significant advantages in giving candidates a first look. It can sit through a thousand interviews without becoming bored or resorting to mental shortcuts that may result in a candidate being eliminated unfairly.
"I once had a customer ask me, 'Can the artificial intelligence tell me if the candidate is wearing a tie?'" he said. "Like, no way. What is it that you want to know? It should make no difference what your background is. It shouldn't make a difference what color shirt you're wearing. None of that matters."
According to Dr. Hollister of McGill, companies should be aware of the high stakes, particularly for applicants whose livelihoods are at stake, and consider being transparent with candidates about its use. Companies must also resist being sold on AI's alleged "mystery and power" when evaluating its role, she added.
"Anyone can understand the basic premise of AI," she said.
