How the most proactive CISOs are striking first in the asymmetric war against AI threats
Darktrace's Nicole Carignan discusses AI's role in proactively combating cybersecurity threats, moving beyond reactive defense.

In cybersecurity, attackers move fast and play by no rules. Defenders rarely have that freedom, and by the time a threat has a name, it’s already too late. Battling AI with AI might be the only way to stop risks before they surface.
Nicole Carignan, SVP of Security and AI Strategy and Field CISO at Darktrace, views reactive mindsets as dead ends. With a background in cyber deception and two decades in U.S. intelligence, she’s focused on one thing: using AI to strike first against threats no one sees coming.
An asymmetric war: "Adversaries don't have to worry about responsibility, ethics, or accuracy to operationalize this technology," Carignan says. "We're combating an asymmetric war." She notes that malicious AI doesn't need to be precise to be dangerous. Even sloppy attacks can cause serious damage when there's no regard for ethics or accuracy. "For us, that would be unacceptable," Carignan explains.
"There's much more burden on defenders to innovate quickly, but also to do it safely, ethically, and securely." The imbalance pushes security teams beyond a purely reactive posture. By automating frontline defense, AI frees up human experts to "think more proactively, even offensively," using advanced tools like attack path modeling to lay deception traps for adversaries and redirect them into controlled environments.
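The deception idea Carignan describes can be sketched in a few lines: connections flagged as hostile (for example, by attack path modeling) are quietly rerouted to an isolated decoy instead of a production host. This is a hypothetical illustration, not Darktrace's implementation; the IPs, names, and flagging logic are all invented for the example.

```python
# Illustrative deception-trap routing: suspected adversaries are redirected
# into a controlled decoy environment. All values here are hypothetical.
SUSPICIOUS_IPS = {"198.51.100.23"}  # e.g., sources flagged by attack path modeling
PRODUCTION = "10.0.0.10:443"        # real service
HONEYPOT = "10.99.0.2:443"          # isolated decoy that records attacker behavior

def route(src_ip: str) -> str:
    """Send flagged sources to the decoy; everyone else to production."""
    return HONEYPOT if src_ip in SUSPICIOUS_IPS else PRODUCTION

print(route("198.51.100.23"))  # flagged source -> decoy
print(route("203.0.113.5"))    # normal client  -> production
```

The point of the pattern is that the adversary never sees a refusal; they are simply observed in an environment where they can do no harm.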
Pre-crime prevention: "The biggest game-changer for SOCs today is autonomous action," Carignan says. "In many cases, we have to mitigate a risk that we may not even know about yet." And the advantage isn't just theoretical. Carignan points to a recent incident where a customer's edge security device began acting erratically, attempting to connect to a rare IP address and download a script.
The autonomous system immediately severed the connections and enforced the device's normal "pattern of life," preventing it from doing anything new or risky. "Sure enough, 11 days later, a CVE was disclosed," she recalls. "The system protected them against a vulnerability they didn't know existed and couldn't have patched. That's why autonomous action is so critical."
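The "pattern of life" enforcement Carignan describes can be approximated as: learn which outbound destinations are normal for a device, then block anything that deviates, even before any vulnerability is known. This is a minimal sketch under that assumption; the class, thresholds, and method names are invented for illustration and are not Darktrace's API.

```python
# Hypothetical "pattern of life" enforcement: model a device's normal
# outbound destinations, then permit only behavior consistent with them.
from collections import Counter

class PatternOfLife:
    def __init__(self, rarity_threshold: float = 0.01):
        self.dest_counts = Counter()  # connections seen per destination
        self.total = 0
        self.rarity_threshold = rarity_threshold

    def observe(self, dest_ip: str) -> None:
        """Record a connection seen during normal operation."""
        self.dest_counts[dest_ip] += 1
        self.total += 1

    def allow(self, dest_ip: str) -> bool:
        """Permit only destinations the device routinely contacts."""
        if self.total == 0:
            return False
        frequency = self.dest_counts[dest_ip] / self.total
        return frequency >= self.rarity_threshold

pol = PatternOfLife()
for ip in ["10.0.0.5"] * 80 + ["10.0.0.9"] * 20:  # learned baseline
    pol.observe(ip)

print(pol.allow("10.0.0.5"))      # familiar destination -> True
print(pol.allow("203.0.113.77"))  # rare/unseen destination -> False, severed
```

In the incident described above, the equivalent of `allow` returning `False` on the rare IP is what severed the connection, days before the CVE existed.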
Your AI, your rules: For many organizations, handing over control to AI still feels risky. But for Carignan, the key to building trust isn't blind faith in the algorithm; it's giving security teams a powerful "control vehicle" to customize and dictate the AI's behavior, down to highly specific enforcement preferences.
In some cases, that means configuring the system to block known scanners or respond aggressively to third-party audits. "Being able to operationalize intelligence with that level of custom control is a game-changer," Carignan says. "There’s no one-size-fits-all in security, and your AI shouldn’t be either."
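The kind of per-organization control described here might look like a policy object that teams tune to their own risk appetite. The field names, options, and mapping below are purely illustrative assumptions, not a real product configuration.

```python
# Hypothetical enforcement policy: the organization, not the vendor,
# dictates how the AI responds to each class of event.
ENFORCEMENT_POLICY = {
    "block_known_scanners": True,        # drop traffic from recognized scanner IPs
    "third_party_audits": "aggressive",  # one of: "ignore", "log", "aggressive"
    "autonomous_response": {
        "enabled": True,
        "max_action": "enforce_pattern_of_life",  # cap severity of autonomous steps
    },
}

def action_for(event_type: str, policy: dict = ENFORCEMENT_POLICY) -> str:
    """Map an event type to the response the organization has chosen."""
    if event_type == "scanner" and policy["block_known_scanners"]:
        return "block"
    if event_type == "audit":
        return {"ignore": "allow", "log": "log", "aggressive": "block"}[
            policy["third_party_audits"]
        ]
    return "monitor"

print(action_for("scanner"))  # -> "block"
print(action_for("audit"))    # -> "block" under this org's aggressive setting
```

Another organization might set `"third_party_audits": "log"` and get a different response from the identical system, which is the "no one-size-fits-all" point.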
Know thy AI: Carignan sees the role of the security professional fundamentally transforming, driven by the rise of agentic AI. "Security is going to have to cross-skill into AI if we want to really harness the power of these systems," she says. "We have to ask the harder questions of these agentic systems and understand how these models are coming to their conclusions. Practitioners have to really understand what's in the guts of these systems."