WTF is Shadow AI?
Workers are excited to play around with AI and see just what it is capable of.
In a lot of ways, that's a good thing. Those early adopters are upskilling and figuring out how to leverage these tools to boost their productivity and efficiency. But that experimentation needs to happen within company guidelines that spell out what can and cannot be shared with large language models.
The unregulated use of AI within companies is increasingly being referred to as "Shadow AI." And according to a recent report, it's on the rise.
A March 2024 report from data protection company Cyberhaven found that 27.4% of the corporate data employees put into AI tools was sensitive, up from 10.7% a year earlier. The top type of sensitive data going into AI tools was customer support data (16.3%), followed by source code (12.7%), research and development material (10.8%), and unreleased marketing material (6.6%). HR and employee records accounted for 3.9% of sensitive information going to AI, including confidential details such as employee compensation and medical issues.
So what exactly is Shadow AI and what can employers do to curtail it?
What is Shadow AI?
Shadow AI is the unregulated, unsanctioned, or ad-hoc use of AI within an organization, without the knowledge or approval of IT or security teams.
“Shadow AI is anything where someone does things without the blessings or the OKs of IT,” said Tim Miner, founder and CEO of By the People Technology, an AI consultancy that works with businesses to implement internal generative AI.
But Gaurav Agarwal, chief operating officer at productivity platform ClickUp, says it’s not a completely bad thing. In fact, it shows that employees are curious about AI and want to try new ways of working to boost productivity. The trouble comes in when people share company data that can put the business at risk.
Why do employers need to be cognizant of it?
Most employees are already using AI in the workplace to help ease workloads and get their jobs done. However, fewer than 40% have actually received training on it.
“Knowledge workers could put sensitive company data at risk by utilizing unauthorized AI tools at work,” said Alexey Korotich, chief product officer at workflow platform Wrike. “These AI models could then leak company information, or unintentionally help competitors who may be utilizing the same AI models or tools.”
Miner says people turn to Shadow AI for a range of reasons, from outright company bans on AI to having no input on the AI tools that leadership chooses.
Shadow AI can cause several problems for organizations, ranging from security vulnerabilities and compliance issues to intellectual property disputes and the unintended exposure of confidential information.
Agarwal says that Shadow AI can actually end up hurting productivity because of "toggle tax," the time lost to constantly switching between tasks and apps, which is on the rise as app sprawl increases.
“What you will have is the same toggle tax problem, but in the world of AI, when the whole idea was that AI should be making us more productive,” said Agarwal.
Another way it can impact productivity is if workers try new tools but don’t really know what they’re doing. “For example, if a team member uses generative AI to develop code, but lacks the skills to debug or test the software to ensure quality, they could open their organization up to more complicated problems than they may have anticipated,” said Korotich.
How do you know if Shadow AI is pervasive in your company?
Because it happens behind the scenes, it's hard to know right away how pervasive Shadow AI is. That difficulty is compounded by the broader challenge of auditing AI usage in general.
But there are potential solutions. At ClickUp, for example, admins can review which apps employees have recently signed into and flag AI tools that weren't cleared. And because some tools cost money, ClickUp also monitors expenses and company credit card alerts as another indicator of usage. Still, there is a fine line between auditing usage and adopting a Big Brother approach.
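For illustration, here's a minimal sketch of what that kind of sign-in audit could look like in practice. The CSV export, its column names, and the approved-apps list are hypothetical assumptions for this example, not ClickUp's actual admin tooling or API.

```python
# Minimal sketch: flag sign-ins to apps that aren't on an approved list.
# The file format, columns, and allowlist are illustrative assumptions.
import csv
from collections import Counter

APPROVED_APPS = {"ClickUp", "Microsoft Copilot", "Zoom"}  # example allowlist

def flag_unapproved_signins(path: str) -> Counter:
    """Count sign-ins to apps that aren't on the approved list."""
    unapproved = Counter()
    with open(path, newline="") as f:
        # Assumes the export has columns: user, app, timestamp
        for row in csv.DictReader(f):
            app = row["app"].strip()
            if app not in APPROVED_APPS:
                unapproved[app] += 1
    return unapproved

if __name__ == "__main__":
    for app, count in flag_unapproved_signins("signin_log.csv").most_common():
        print(f"{app}: {count} sign-ins to an unapproved app")
```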
“When you look at innovation and productivity, what you need to do is offer alternatives that are so well integrated that employees would want to use it,” said Agarwal. “I think that’s how you have to really solve the problem.”
Korotich puts it simply: “If employers don’t give workers tools that easily integrate AI into their existing workflows or share guidance on best practices for using AI at work, they put their organizations at risk of this new ‘shadow AI’ phenomenon.”
How can they put guardrails in place?
With more and more employees adopting AI, employers need a plan in place that encourages best practices. That could look like anything from an AI handbook to information sessions led by IT or tech teams.
Miner says that a big way to avoid Shadow AI is to build a culture around AI that makes employees comfortable trying it under the company's direction, rather than going behind the company's back because a ban is in place.
“You have the whole spectrum,” said Miner. “You have the people who say ‘Absolutely, no AI in my company, don’t touch it,’ all the way to the people who embraced it outrightly saying ‘Yes, anything you want to.’ Both extremes are bad. It’s somewhere in between where you, as a leader, try to promote the use but say ‘Here is how to use it.’”
He says AI shouldn't be treated as a side thing, where one memo gets released and then forgotten; instead, leaders should be intentional about its use. That might mean putting together steering committees, forming groups to provide feedback, running internal pulse surveys, or some combination of those. For example, if a company has an approved AI tool but an employee finds a better one, the culture should allow that employee to suggest a switch, or at least get the new tool considered for approval.
“As AI plays a bigger role in business strategy, CTOs and CIOs will need to have an airtight data strategy to ensure their organizations are securely driving mass adoption of AI technology,” said Korotich.