Compliance leaders explore a new, context-driven approach to AI risk management

Island News Desk | Sep 3, 2025 | AI Compliance

Arvinda Rao, Director of Compliance at Securiti, discusses how AI risk is defined by its context, and how use-case-based risk management is now necessary for effective AI governance.


A fundamental misunderstanding about AI is leading many organizations astray. Most assume risk lives in the system itself: the large language model, the algorithm, the black box. The truth is more nuanced. The real measure of AI risk is the technology's application, not the tool itself: an LLM considered low-risk in one context can become a catastrophic liability in another. To navigate this reality, some experts say leaders need a fresh model for governance, one that shifts focus from the tool to the task.

We spoke with Arvinda Rao, Director of Compliance for AI Governance, Responsible AI, Security, Privacy and Risk at Securiti. With over 16 years of experience in risk and compliance at firms like IBM and Accenture, Rao has been on the front lines of this enterprise challenge for some time. Today, he is the pioneer behind the Data Command Center, a centralized platform enabling the safe use of data and GenAI. According to Rao, the only way to manage AI effectively is to dismantle the one-size-fits-all approach and rebuild governance from the ground up, centered on each individual use case.

"Use-case-driven implementation is essential for success with AI," Rao explained. "The underlying technology might stay the same, but each application introduces a different risk. Now, use-case-based risk management is critical as a result. The notion that risk is defined by context rather than code is key to unlocking secure innovation," Rao said, "especially when the same AI model deployed across different business functions carries radically different threats."

  • Same model, different dangers: As an example, Rao considers HR. "When you're using an AI system to screen candidates," he explained, "the regulatory risk is higher. Denying someone an employment opportunity or unfairly judging them based on their resume is risky. Use that same AI system to generate a report, and you'll get a different type of risk. Or, in customer support, if AI hallucinates and gives you some bad advice, that creates another risk."
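
Rao's distinction lends itself to a concrete illustration. The minimal Python sketch below is hypothetical: the use cases, risk tiers, and attribute names are our own illustrative assumptions, not Securiti's implementation. It simply shows how one model can be assessed three different ways depending on where it is deployed:

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass(frozen=True)
class UseCase:
    """A deployment context for a model, assessed independently of the model itself."""
    name: str
    affects_individuals: bool   # e.g., hiring, lending, or benefits decisions
    customer_facing: bool       # e.g., support chat where hallucinations reach users


def assess_risk(use_case: UseCase) -> RiskTier:
    """Risk is a property of the use case, not the underlying model."""
    if use_case.affects_individuals:
        return RiskTier.HIGH      # regulatory exposure (e.g., candidate screening)
    if use_case.customer_facing:
        return RiskTier.MEDIUM    # hallucination / bad-advice exposure
    return RiskTier.LOW           # e.g., internal report generation


# One model, three contexts, three different risk profiles.
MODEL = "gpt-style-llm-v1"  # hypothetical model identifier
for uc in [
    UseCase("candidate-screening", affects_individuals=True, customer_facing=False),
    UseCase("internal-reporting", affects_individuals=False, customer_facing=False),
    UseCase("customer-support", affects_individuals=False, customer_facing=True),
]:
    print(f"{MODEL} / {uc.name}: {assess_risk(uc).value}")
```

Running the sketch prints a different risk tier for each context, even though the model identifier never changes, which is Rao's point in miniature.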

Unfortunately, this conundrum exposes a common yet critical error in corporate strategy. Instead of starting from scratch, many organizations opt to layer AI governance onto their existing frameworks. From Rao's perspective, this approach is destined to fail. Ignoring the unique, contextual nature of AI risk, he warned, can create a false sense of security.

  • The GRC trap: "Companies are adding AI governance on top of their existing GRC," Rao noted. "But very few recognize the need for a separate function that's focused entirely on governing AI. Most see it as one more layer when, in reality, it needs a dedicated team."


Avoiding this trap will require a strategic blueprint that embeds governance into the entire AI lifecycle. For Rao, this cross-functional effort begins long before a single line of code is deployed, starting with a clear strategy, stakeholder alignment, and a meticulous definition of what success and safety actually look like for each specific use case.

  • A blueprint for action: "The first non-negotiable step is having a good AI governance roadmap or strategy in place with business approval, followed by stakeholder buy-in from all the departments," Rao said. "From there, it's about defining success criteria and what your AI ethical scores or safety scores are." In practice, that means defining triggers for model retraining and human intervention, of the kind sketched in code after this list.

  • Legal and financial imperative: "AI will make some critical decisions," he stressed, "and stakeholders will ask questions. Determining when to bring a human into the loop: those are the kinds of decisions that must be made." With external pressure growing from the EU AI Act and its requirements for model cards and explainability, he added, this is now a legal and financial imperative.
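
To make the idea of triggers concrete, here is a minimal, hypothetical Python sketch. The metric names, thresholds, and use cases are our illustrative assumptions, not Securiti's product or any prescribed standard; the point is only that high-stakes use cases get tighter thresholds and explicit escalation paths:

```python
from dataclasses import dataclass


@dataclass
class SafetyThresholds:
    """Per-use-case success criteria, agreed with stakeholders up front."""
    min_safety_score: float   # below this, escalate to a human reviewer
    max_drift: float          # above this, trigger model retraining


def evaluate(safety_score: float, drift: float, t: SafetyThresholds) -> list[str]:
    """Return the governance actions a monitoring run should trigger."""
    actions = []
    if safety_score < t.min_safety_score:
        actions.append("route decision to human-in-the-loop review")
    if drift > t.max_drift:
        actions.append("open retraining ticket")
    return actions or ["no action: within agreed thresholds"]


# A high-stakes use case gets tighter thresholds than a low-stakes one.
screening = SafetyThresholds(min_safety_score=0.95, max_drift=0.05)  # hypothetical values
reporting = SafetyThresholds(min_safety_score=0.80, max_drift=0.15)

# The same monitoring numbers trigger human review for screening, but not for reporting.
print(evaluate(safety_score=0.91, drift=0.02, t=screening))
print(evaluate(safety_score=0.91, drift=0.02, t=reporting))
```

The design choice mirrors Rao's blueprint: the thresholds are defined per use case before deployment, so "when to bring a human in the loop" is a documented, auditable decision rather than an ad hoc one.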

As the industry buzzes with speculation about AGI, Rao cautioned that a fixation on far-off, hypothetical superintelligence is a dangerous distraction from the powerful, risky tools already operating inside most enterprises today. The greater challenge for most organizations is not preparing for the future but managing the present. "The real risk is already there in the enterprise. A robust AI governance framework structured around a risk-based approach is critical, not for tomorrow, but for right now."
