AI has turned every CISO decision into a gamble between speed and security risk
Daniel Uzupis of MEP Cybersecurity discusses how cybersecurity decisions are driven by pressure and risk-taking rather than policy.

The corporate rush to adopt AI has left CISOs in an impossible bind: block progress and be sidelined, or enable it and accept the risk. With safeguards trailing innovation, cybersecurity is no longer about compliance. It’s about deciding how much risk is worth the bet.
Daniel Uzupis, vCISO of MEP Cybersecurity, argues that today’s cybersecurity decisions aren’t driven by policy. They’re shaped by pressure, perception, and the willingness to take a gamble.
High-stakes gamble: “Safety laws are written in blood because people had to be injured or killed for a precaution to become law,” Uzupis says. “If your only goal is to keep your cybersecurity just below reportable limits, are you really doing cybersecurity or are you just making a wager?” For him, the core issue isn’t compliance. It’s ethics. Like workplace safety before OSHA, real safeguards often come only after damage is done. The challenge is acting before the blood is on the floor.
Out of the shadows: “You shouldn’t stop anyone from using AI, because they’re going to find a way to use it,” Uzupis says. “Your responsibility is to provide them a secure way to do it, with policies and standards that define the limits.” The real danger, he argues, is when leaders fall back on the old mistakes of shadow IT—choosing prohibition over enablement and losing control in the process.
Growing pains: The deeper issue, Uzupis argues, is the unresolved tension between ethics and efficiency. “You get labeled as toxic in cybersecurity if at any point you prioritize the security of sensitive information over tools that make everyone’s job easier,” he says. “Business objectives and ethics are often mutually exclusive. We are in a state of adolescence where nothing has really been established as to what everyone’s responsibility is.”
Mirror, mirror: Even when AI tools are officially approved, Uzupis believes leaders often fall for the illusion that AI is a magical fix. He offers a more grounded view. “At the end of the day, if you’re using AI correctly, you’re holding up a mirror to yourself,” he says. “You’re refining your own thinking—not outsourcing it. The time you spend crafting the perfect prompt is usually the same time you would’ve spent just brainstorming.”
Check the chart: AI’s limitations require a strict safeguard. “If you’re an inpatient at a hospital, you might get a colored wristband to flag a fall risk or allergy,” explains Uzupis. “But that wristband doesn’t replace the chart. It’s just a reminder to check it. That’s our responsibility with AI. Never trust, always verify.”
Short memory: Uzupis is skeptical of the idea that companies will take security seriously just to protect their brand. “Think of any breach in the last 10 years. Target, Uber, Equifax—we all still use those companies and have looked past it. Reputation is temporary.”