Redefining work with AI // June 6, 2023

How HR leaders can become more AI-informed

This article is part of WorkLife’s artificial intelligence special edition, which breaks through the hype around AI – both traditional and generative – and examines what its role will be in the future of work for desk-based workers.

Will AI take the “human” out of human resources? 

That’s the question experts are trying to answer. With the boom in attention around generative AI across all sectors this year, the pressure to leverage AI developments to improve both customer and workforce experiences has intensified. But the million-dollar question remains: How can organizations capitalize on the opportunity of AI within their workforces without neglecting the vital role human employees play?

The bulk (92%) of HR leaders intend to increase their AI use in at least one area of HR, according to a recent report from AI-powered talent acquisition and recruiting platform Eightfold. But HR leaders also face the same challenge most of us do: they don’t all know how to use generative AI tools like ChatGPT, what questions to ask AI vendors, or what policies to put in place so the tech is used responsibly.

Meanwhile, the immediate benefits AI can offer in traditional HR functions, like hiring and people decisions, make it vital for HR professionals to get a handle on the complex underlying flaws the tech currently has. For example, ChatGPT is trained largely on vast datasets scraped from the web, which are rife with misinformation – a factor that contributes to the dreaded “hallucinations.” And the confidence with which the chatbot relays false information can fool even highly trained legal professionals. There are also valid concerns about AI tools being discriminatory – a vital area for HR execs to be across.

To tackle this steep learning curve, HR leaders are pooling resources to make sense of the different AI legislation coming out, choose the right AI vendors, and use programs like ChatGPT themselves. For some, it starts with simply learning the language of AI – the difference between generative, predictive and conversational AI, say – and sharing it with others. But this is just the start.

HR ‘show and tells’

That collective learning is exactly what HR professionals Alicia Henríquez and Amanda Halle want to create. In May, they hosted a chat for other HR folks to share what they know about AI, how they’re using it at work, and the potential pitfalls. 

“It seemed like a no-brainer to start figuring it out,” said Henríquez. “In the HR space, there is a wide spectrum of folks who approach AI differently, and that’s why it’s important for us to talk about it. I can’t talk to AI vendors if I haven’t figured out parameters within the organization.”

Halle admits that it can be intimidating, especially because there is so much to learn and it’s hard to know where to start. That’s why the pair wants to build information sharing around the topic.

“HR and people teams have the reputation of being late adopters and slow to be at the forefront,” said Halle. “If there is an opportunity to partner across businesses and provide opportunities for people with shared learnings, we want to do that. It’s instilling curiosity in your organization.”

“In the HR space, there is a wide spectrum of folks who approach AI differently, and that’s why it’s important for us to talk about it.”
Alicia Henríquez, head of people at Liveblocks.

Neither is claiming to be an AI expert, but they know that this sort of “show and tell” among HR professionals can help create the space for people to ask questions and collaborate more than they otherwise would.

Other HR leaders have also tried to encourage learning around AI. In May, AI governance, risk and compliance platform Holistic AI held a webinar, “Workplace Relations in the Age of AI: Practices, Regulations and the Future of Equal Opportunity Laws,” where Equal Employment Opportunity Commission commissioner Keith Sonderling shared strategies and guidance for navigating the complexities of AI in HR practices.

Getting a grip on AI bias and discrimination

Today, HR leaders can use AI across a wide range of tasks, from recruitment and onboarding to payroll and performance management. In the future, that may extend to tasks like answering employee questions (about benefits or PTO policies, for instance) and even hiring or firing employees. By 2024, as many as 80% of Global 2000 organizations will likely use AI-enabled “managers” to hire, fire and train employees, predicts market intelligence firm International Data Corporation in its latest research.

But more legislation is coming to ensure employers incorporate AI ethically and responsibly and to prevent bias and discrimination. HR professionals will need to be well-informed.

In New York City, Local Law 144 will be enforced beginning July 5. At its core, it requires employers to conduct a bias audit of an automated employment decision tool before using it. And across the pond, in May, European Parliament committees approved a draft of the European Union’s AI Act, which will provide a blueprint for AI regulation for businesses.
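The core arithmetic behind such a bias audit is straightforward: compute each demographic category’s selection rate, then divide it by the rate of the most-selected category to get an impact ratio. Here is a minimal sketch of that calculation, using made-up data – real Local Law 144 audits cover sex and race/ethnicity categories, including intersectional ones, and carry further requirements beyond this arithmetic:

```python
from collections import defaultdict

# Hypothetical screening outcomes: (category, was_selected) per candidate.
outcomes = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # category -> [selected, total]
for category, selected in outcomes:
    counts[category][0] += int(selected)
    counts[category][1] += 1

# Selection rate = selected / total for each category.
rates = {c: sel / total for c, (sel, total) in counts.items()}

# Impact ratio = a category's selection rate divided by the highest rate.
best = max(rates.values())
for category, rate in sorted(rates.items()):
    ratio = rate / best
    # The EEOC's "four-fifths rule" treats ratios below 0.8 as a red flag.
    flag = "  <-- review" if ratio < 0.8 else ""
    print(f"{category}: selection_rate={rate:.2f} impact_ratio={ratio:.2f}{flag}")
```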

While the payoff of using AI may be big, so is the risk if it’s implemented poorly, without bearing in mind the tech’s bias and discrimination flaws. For example, Amazon’s automated hiring tool from a few years ago discriminated against women. Poor implementation can also mean misuse of employee data, erosion of privacy and negative effects on well-being as AI applications reduce the need for human interaction. Given that a core initiative of HR departments is to boost diversity, equity and inclusion, there is a fine line to walk between taking advantage of AI and falling victim to its kinks.

The need for AI aptitude

AI aptitude will be critical for HR leaders: they need to know what questions to ask when vetting outside vendors in order to avoid algorithmic bias.

“There is a sense that people don’t even know how to ask the questions,” said Dan O’Connell, chief strategy officer at AI-powered customer intelligence platform Dialpad. “Ethics and AI is new for people to consider when they think about vendor selection. It’s asking people: do you use your own data, how do you go and check for bias, how do you build these models, what user feedback is in place? Those are all things that I think a lot of people will go and select a vendor and not even think about asking. They might be unintentionally using a model that perhaps has some bias because the vendor hasn’t thought about it themselves,” he said.

Traditionally, HR departments have had to develop a deep understanding of bias in humans. Now they have to understand how that plays out when AI is in the mix.

“Algorithms, because they’re all programmed, have a certain danger because you can scale bias very fast and across a huge set of decisions people are making,” said Anand Rao, global AI lead at accounting firm PwC. “These systems are being built by people, and if you bring in people that all have a similar mindset, then the bias creeps into how they build it.”

That’s why O’Connell says the best way to get a diverse model is to have a diverse team.

“There is a sense that people don’t even know how to ask the questions. Ethics and AI is new for people to consider when they think about vendor selection.”
Dan O’Connell, chief strategy officer at AI-powered customer intelligence platform Dialpad.

“It starts from the get-go,” said O’Connell. “If you want to have a model that is diverse, you want to have a subset of people that have diverse backgrounds and represent different groups and think about these things.”

He recommends HR execs question AI vendors about the diversity of the teams that built the technology before choosing to use it.

Some companies are getting it right by being intentional about how AI is used within HR. For example, recruitment software firm Joonko uses AI to make sure a candidate is the right fit for a role: it starts by recruiting underrepresented candidates, then matches them to open positions.

“Everyone in our pool is considered underrepresented,” said Albrey Brown, vp of strategy and general manager of New York at Joonko. “But we then make sure to audit how we are referring folks by demographic. We want to make sure the number of women and people of color in our pool are being referred in proportion to the number of folks that exist in the pool.”

Joonko’s leadership wanted to get ahead of the NYC law and conducted an audit to ensure the platform doesn’t hold any bias. “We wanted to make sure, as a company that’s focused on diversity, we weren’t somehow building something that was racist or sexist,” said Brown.
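In practice, a check in the spirit of the one Brown describes can be a few lines of code: compare each group’s share of referrals to its share of the candidate pool and flag gaps. A hypothetical sketch – the function name, numbers and 10-point tolerance are illustrative assumptions, not Joonko’s actual methodology:

```python
def referral_audit(pool: dict, referrals: dict, tolerance: float = 0.10) -> dict:
    """Return each group's referral share minus its pool share,
    printing a warning when a group is under-referred beyond tolerance."""
    pool_total = sum(pool.values())
    ref_total = sum(referrals.values())
    gaps = {}
    for group, n in pool.items():
        pool_share = n / pool_total
        ref_share = referrals.get(group, 0) / ref_total
        gaps[group] = ref_share - pool_share
        if gaps[group] < -tolerance:
            print(f"{group}: {ref_share:.0%} of referrals vs "
                  f"{pool_share:.0%} of pool -- under-referred")
    return gaps

# Example: women are half the pool but only 30% of referrals.
referral_audit({"women": 500, "men": 500}, {"women": 30, "men": 70})
```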

Knowing what questions to ask

No one is asking HR leaders to navigate AI alone, though. In fact, experts advise that HR execs make decisions about the tech alongside other company leaders.

“It’s critical to have a multidisciplinary team of people that reviews not just the technical side, but the suitability of when to use AI in a given situation,” said Ray Eitel-Porter, global lead for responsible AI at management consultancy Accenture. “You need to have HR professionals involved in that discussion, someone from legal, data science people who can understand how the AI might work, and probably a behavior scientist to think about the impact on people.” Together, the group can come up with the right questions to ask vendors. 

Jordan Suchow, an AI expert from the Stevens Institute of Technology, recommends looking past the AI label. “You will see the label AI applied to a broad variety of different features of HR software systems and it’s pretty easy to get lost in the hype,” said Suchow. “With respect to vendor selection, the most important thing is to look past the label of AI and ask what is actually being automated via this machine learning-backed system.”

Asking that question makes it easier to see which decisions are being handed off from the HR team and delegated to the system. From there, you can also gauge how much credibility the company has when it comes to AI and HR.

“In the context of hiring, there are regulatory frameworks that HR professionals are aware of like the ADA [Americans with Disabilities Act], and I wouldn’t want the work of an AI startup that is brand new and hasn’t shown deep understanding of these regulatory frameworks to be making decisions,” said Suchow.

“It’s critical to have a multidisciplinary team of people that reviews not just the technical side, but the suitability of when to use AI in a given situation.”
Ray Eitel-Porter, global lead for responsible AI at Accenture.

Overall, a good rule of thumb when figuring out which AI tools to use is to “ask the company the same questions you would for hiring a candidate,” he said.

Eitel-Porter says that with AI being embedded into our day-to-day lives, it’s more important than ever to draw attention to where it is being used. “HR could be procuring systems that actually have AI embedded in them and they may not realize it in some cases,” said Eitel-Porter. “They need to be aware of that and conscious of what they should be doing to check those systems.”

Beyond that, though, Eitel-Porter argues that when an HR department does decide to take advantage of AI, it should be transparent about it with the rest of the company, answering questions like why it chose one vendor over another and how the tech might impact the company, as well as any others employees might have.

This sort of governance will help ensure that bias and discrimination are lessened or avoided entirely. “It’s a necessity in adoption of AI for every organization,” said Navrina Singh, founder and CEO of AI governance platform Credo AI. “Start laying down the foundation of governance from the onset. Everything related to employment is a high-risk area.”