Shadow AI: How to deal with unauthorized models and uncontrolled agents


Shadow AI is considered the next iteration of Shadow IT, with one big difference: in Shadow IT, a developer might use a self-contained, unauthorized tool whose use, by itself, creates little risk. With Shadow AI, the tool itself is a source of risk.

Shadow AI is particularly troublesome because an unauthorized model can gain access to databases it shouldn’t have access to, and it lacks the system and organizational context to make correct decisions. Further, Shadow AI almost always involves someone in the organization taking company intellectual property and pasting it into a public tool, leaving the destination and subsequent processing unknown.

Part of the problem, according to Brian Nathanson, head of product management for Clarity at Broadcom, is the organization’s approach to governance and security, precisely because AI is advancing so quickly and changing so constantly. Engineers feel that governance is a burden on getting their work done, and that their organizations’ governance processes are too slow to bring new models on board. “Individuals are seeing the productivity benefit of AI far more than the enterprise does, at least right now, but enterprises, because of the concerns over liability and their IP protection, have basically tried to clamp down,” Nathanson said. “They’ve said, no you can’t use AI tools, or you can only use these authorized AI tools.”

Nathanson said that puts developers in a bind: if the company authorizes only, say, Gemini, and the developer knows that Claude might give better responses for a certain task, the developer thinks, “I’ll just copy and paste into my private, personal Claude account. I’m just going to use it, because I can’t wait for the governance process to authorize the AI tools.”

Ted Way, vice president and chief product officer at SAP, said employees “just want to get stuff done,” and most of the time will ask for forgiveness later. But that’s not worth the risk of sensitive data being leaked, “and not only is it being leaked, but it’s stored and processed outside your company. It might be used to train a model. And then you have your compliance risk,” he said. “And, in the journey to get stuff done, are you actually not even doing it?” In other words, you might not be getting the accurate results you want.

What organizations can do

Getting the shadow AI issue under control involves organizational governance, policy and culture.

Some companies, instead of restricting AI, have created orchestration layers that allow engineers to use many different open-source and proprietary models in a way that is controlled by the orchestration. This reduces the need for engineers to go outside the company’s policies to get their work done with the model they choose, and thus reduces the risk that a company’s proprietary data and conversations leak out into the public.
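
To make that pattern concrete, here is a minimal sketch of what such an orchestration layer could look like. Everything in it is hypothetical: the Gateway class, the provider stubs, and the redaction rules are illustrative stand-ins, not any vendor’s actual API. The point is the shape of the design: one approved entry point that routes to the engineer’s model of choice, applies the company’s data-handling policy, and leaves an audit trail.

```python
# Minimal sketch of an internal model-orchestration layer. All names here
# (Gateway, APPROVED_MODELS, the provider stubs) are hypothetical stand-ins,
# not any vendor's actual API.
import re
from dataclasses import dataclass, field
from typing import Callable

def call_gemini(prompt: str) -> str:
    # Stand-in for the real provider client.
    return f"[gemini response to {len(prompt)} chars]"

def call_claude(prompt: str) -> str:
    # Stand-in for the real provider client.
    return f"[claude response to {len(prompt)} chars]"

# Engineers may pick any approved model; unapproved ones are rejected.
APPROVED_MODELS: dict[str, Callable[[str], str]] = {
    "gemini": call_gemini,
    "claude": call_claude,
}

# Illustrative redaction rules; a real deployment would use a DLP service.
SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-shaped strings
]

@dataclass
class Gateway:
    audit_log: list = field(default_factory=list)

    def complete(self, model: str, prompt: str, user: str) -> str:
        if model not in APPROVED_MODELS:
            raise PermissionError(f"{model!r} is not an approved model")
        # Apply the data-handling policy before anything leaves the company.
        redacted = prompt
        for pattern in SECRET_PATTERNS:
            redacted = pattern.sub("[REDACTED]", redacted)
        # Leave an audit trail so governance can see what was used, and by whom.
        self.audit_log.append({"user": user, "model": model})
        return APPROVED_MODELS[model](redacted)

gw = Gateway()
print(gw.complete("claude", "Review this design. api_key=abc123", "dev42"))
```

The engineer still chooses the model; the control lives in the gateway rather than in a blanket ban, which removes the incentive to paste work into a personal account.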

From a policy perspective, Way said, it starts with a clear position on generative AI. He explained that modern technology forces a trade-off in which organizations can achieve only two of three desired outcomes: safe, capable, and autonomous.

  • Safe and Capable: This state requires extensive “human babysitting” and is considered too slow, as every request is “gated on humans” (sketched in code after this list).
  • Capable and Autonomous: This represents the opposite extreme, a lack of oversight in which the LLM decides what is safe. Way cites an example of an LLM deciding to decrypt repository answers to achieve a better score on an evaluation.
  • Safe and Autonomous: This state is too restricted, meaning the system will not have access to the necessary tools to be capable.
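
To illustrate the first of those corners, here is a tiny hypothetical sketch of an agent that is safe and capable but slow, because every action is gated on a human reviewer. The function name and approval flow are invented for illustration.

```python
# Hypothetical sketch of the "safe and capable" corner of Way's trade-off:
# every agent action is gated on a human reviewer, which keeps the system
# safe but slow. The names and approval mechanism are invented here.
from typing import Callable

def run_with_human_gate(action: str, execute: Callable[[], str]) -> str:
    # "Gated on humans": nothing runs until a person approves it.
    answer = input(f"Agent wants to: {action!r}. Approve? [y/N] ")
    if answer.strip().lower() != "y":
        return "action blocked by reviewer"
    return execute()

print(run_with_human_gate("delete stale feature branches",
                          lambda: "branches deleted"))
```

Remove the gate and the agent becomes capable and autonomous, but the model itself now decides what is safe; lock the agent out of the tools behind execute() and it is safe and autonomous, but no longer capable.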

Addressing Shadow AI requires moving past ineffective governance models. Michael Burch, director of application security at Security Journey, suggests that while an AI team or governance committee should exist, governance is not just a “10-page policy report that nobody’s gonna read.” Instead, it must be about day-to-day practical governance: taking that 10-page report and making it actionable for individuals.

Governance, he said, “isn’t just about the policy publications and writing all the rules and buying the right tools. It’s, is all the work we put in, is it actionable? Did it actually have an impact? And did we give it to people in a way that let them actually do it day-to-day and improve the way they’re thinking and treating security?” Any governance effort must be “grounded in real truth of day-to-day workflows,” he said, to ensure people will actually adopt it. The ultimate goal is a practical system that drives adoption and gets people to hold themselves accountable for how they use AI. Burch noted that governance fails when organizations rely on policies alone to produce good decisions.

A vital step in this practical approach is building a security culture. This involves teams having a shared vocabulary, workflow guidance, and examples. If everyone understands how AI integrates into their workflows and speaks the same language, the potential for failure is significantly reduced. 

“If we’re all talking the same language, if we all understand how AI integrates in our different workflows, and we have examples to work from so we understand how to… the lift to get there is a lot smaller for us, we have a lot less chance for failure, because everybody’s kind of on that same page,” Burch explained.

 
