Technology   //   November 1, 2024

Should you make an HR AI chatbot of yourself?

While a flurry of new AI tools are becoming available to help with HR processes, few HR professionals actually have much experience with AI tech, and many remain confused or intimidated by it.

But some are highly tuned in and eager to integrate new tech into their personal lives and professional workflows, like Colleen McCreary, data platform Confluent’s new chief people officer. She’s leveraging knowledge from her past role in fintech investing, and a long career in HR, to experiment with practical use cases — including making a chatbot of herself, called “AI Colleen.”

“I think there are a lot of HR people who are either terrified of AI, or they are over the moon excited, and just not quite sure what to do with it,” McCreary said. 

She’s working with a friend at a startup, Tmpt.me, an AI platform for content creators, to make an AI version of herself that is currently in a testing phase. The tech ingests someone’s entire body of work — from emails to blogs to podcast and recording transcripts — to create a personalized chatbot powered by a large language model trained on that content.


So far she’s only made the chatbot accessible to friends, family, and close professional contacts. “Most of the people in my audience who’ve been using it are all people who worked with me before, who feel like they’re getting free Colleen advice,” she said. 

And she has no near-term plans to make it accessible to those she works with, for a number of reasons — primarily because, when it comes to the kinds of questions staff may ask, “the complexity of these situations can add up,” she said.

“These tools are going to be great for the 80% [of my job] but there’s this 20% contextual, nuanced place that I feel really good about my job security for a while,” she said.

Traditional chatbots have long existed but are getting more attention amid the current AI boom, McCreary said, though their use in HR is unlikely to ever replace the need for real humans in the profession.

Most HR chatbots are designed to answer the questions employees would typically direct to their HR department, which can vary considerably in sensitivity. For now, HR chatbots are best suited to questions they can answer from a company’s employee handbook or similar documents.

But staff also bring more personal questions to their HR department, which raises a range of issues, including those around privacy and the documentation required for certain conversations. Questions around leave policies, for example, need to be contextualized based on the kind of leave and local regulations, she said. And inquiries about time off for the death of a loved one, advice on filling out a performance review, or complaints about a manager are often more complex and shouldn’t be handled by an AI HR bot — even one with more personalized responses.

If a friend or family member were to ask AI Colleen a question they did not like the answer to, “it doesn’t necessarily hold against you,” she said. “I think at work, people might take it too seriously.”

As with all AI experiments, there is a trade-off between benefits and risks.

“The theory of ‘what if I had someone who could do some of the trickier HR stuff in chatbot form,’ is both appealing and terrifying,” said Emily Rose McRae, senior director analyst at Gartner.


“On the one hand, cool. That could be really helpful, especially for people who don’t necessarily feel comfortable reaching out to an individual for this conversation. On the other hand, that’s a really high-risk area to have technology answering questions that it may not be answering correctly,” McRae said. 

“There could be real value in this, but there is a lot of risk as well.”