Cyber expert warns about gaps in AI governance: 'We aren't putting the right controls in place'

Island News Desk | Sep 3, 2025
AI in Healthcare

Richard Staynings, Chief Security Strategist for Cylera, discusses the current lack of regulation and a widening gap between security and AI governance in healthcare.

Credit: Outlever

The conflict between the relentless pursuit of monetization and the urgent need for public safety is growing. Without independent oversight, AI is seeping into critical infrastructure through unsafe deployments and corrupted data. Nowhere are the risks greater than in healthcare, where the line between innovation and harm is measured in human lives.

We spoke with Richard Staynings, the Chief Security Strategist for Cylera, a Teaching Professor at the University of Denver, and a globally renowned cybersecurity strategist, thought leader, and public speaker with decades of experience at the intersection of technology and healthcare. Drawing on his experience advising governments and industry leaders on their defensive posture against emerging cyber threats, he argued that the current path of AI governance is dangerously misaligned with human safety. In healthcare, Staynings' primary field of focus, corrupted data can have life-or-death implications. He described the future of precision medicine as a utopia where AI tailors cures for diseases like pancreatic cancer based on a person’s unique genome. But he also cautioned that the success of this medical miracle rests almost entirely on the fragile assumption of data integrity.

  • Life-or-death data: The promise of a medical revolution, from AI-enhanced imaging to bespoke pharmaceuticals, "is all going to be based upon AI," he cautioned. "So we need to make sure that those AI models are accurate, have integrity, and that they're trained on accurate data in the first place."

The core issue is a widening governance gap, according to Staynings. Regulatory bodies like the FDA, which are responsible for the safety of medical drugs and devices, are moving too slowly to keep pace with the breakneck speed of technological adoption. This creates a dangerous vacuum where risks go unmanaged until it’s too late.

  • The governance paradox: Staynings' philosophy is grounded in a principle of equilibrium, a concept he believes is dangerously absent from the current AI debate. "Any form of governance needs to be a balance between the freedom to exercise functionality and to innovate, with 'rails' to stop people from falling off. Those rails would include safety, security, privacy, and the other concerns that individuals, consumers, and patients would require. Right now, we're racing ahead with new technologies at 100 miles an hour, but without the cybersecurity and privacy safeguards to control them," Staynings warned. "That gap is an open door for perpetrators to wreak havoc, with consequences that include rising patient morbidity and mortality."

  • The most to gain: For Staynings, the rush to deploy AI without proper guardrails is part of a larger pattern where innovation consistently outpaces oversight. He argued that when tech leaders influence policy, speed and profit dominate, while safety, security, and privacy are inevitably sidelined. "Risk is being overlooked in the pursuit of profit," Staynings warned. "They are obviously being pressured by those who have the most to gain: the tech leaders that are pushing AI very, very hard." But the danger extends beyond corporate greed into a deeper, systemic crisis: the very data fueling AI models is increasingly unreliable.

  • The poisoned well: Staynings pointed to a collapsing information ecosystem, citing examples of fraudulent academic books and AI-generated research citations that slipped past established gatekeepers. This type of data pollution, he noted, is a direct threat to model accuracy and integrity. "The internet is awash with misinformation and disinformation," Staynings said. "When that corrupted content is pulled into training sets, AI models are learning from falsehoods. We've already seen fake references generated by AI, fabricated publications, and even academic books filled with errors making it to market." If the data itself is compromised, he said, the outputs cannot be trusted.


Yet Staynings also emphasized that solutions are within reach. Clearer ethical standards, faster-moving regulators, and industry-wide collaboration can begin to close the gap. Frameworks like NIST's AI Risk Management Framework provide a foundation, and initiatives such as precision medicine show what is possible when innovation is guided by integrity. But when asked whether any corporations are meaningfully putting safety above profits, Staynings instead pointed to a real-world example of the system failing.

  • Profit over patients: "UnitedHealth Group is perhaps a great example of that with their pre-authorization AI, which is rejecting a large number of pre-authorizations for medical treatments," he stated. "And that has plainly been tweaked, tailored, and corrupted with the intention of raising profits for those mega-corporations."

Building independent oversight, mandating transparency in training data, and embedding safety testing into AI development can help realign incentives. "The goal is not to halt progress but to steer it responsibly," Staynings concluded. "If we get this right, AI can drive one of the greatest advances in healthcare the world has ever seen."
