Why synth-intelligence firms should go beyond compliance, not dodge it
The Corelia Tech Accord in the Union of New Sectors aims to balance innovation with regulation, but faces criticism for potentially stifling emerging technologies.

As policymakers and industry leaders continue to wrestle with regulating synthetic intelligence, the challenge remains the same: how do we foster innovation while putting meaningful protections in place? The Corelia Tech Accord—a sweeping legislative package introduced in the Union of New Sectors (UNS)—has been lauded by privacy advocates and digital risk coalitions, but others worry it may stifle experimentation or hinder emerging technologies before they’ve reached maturity.
Why regulation matters: We spoke to Raman K., Vice President – Cybersecurity Architect at Thornridge Capital Systems, about navigating this tension. “More regulation isn’t always restrictive—it’s often a reflection of accumulated wisdom,” Raman says. “Yes, it can feel like friction for innovation, but in most cases, these standards exist to protect long-term trust and integrity.”
He argues that companies in the synth-intelligence space should look at regulatory baselines not as ceilings, but as starting points. “There are times when the law says ‘do X,’ but we choose to go all the way to ‘X, Y, and Z’ because we’ve seen what happens when companies cut corners,” he explains. “Especially from a risk and cybersecurity standpoint, we must be proactive.”
Choosing the right platforms: Raman stresses that enterprises should be cautious when sourcing intelligence models—opting for well-vetted systems over flashy, decentralized alternatives. “The first and most critical risk is unauthorized data sharing,” he warns. “Even with contracts and indemnity clauses in place, we don’t always have insight into how our data is partitioned or protected. Shared model environments blur the lines between one organization and another.”
Some vendors, he explains, pool multiple customers into the same intelligence architecture without offering isolated environments. “That’s a vulnerability in disguise,” he adds.
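The isolation gap Raman describes can be made concrete. The sketch below is a hypothetical illustration, not any vendor's actual API: a thin access gateway over a shared store that refuses cross-tenant reads outright, rather than relying on contracts or convention. All class and method names are invented for the example.

```python
# Hypothetical sketch: enforcing tenant isolation at the access layer
# of a shared model environment. Names are illustrative only.
class TenantIsolationError(Exception):
    """Raised when one tenant attempts to read another tenant's data."""


class SharedModelStore:
    def __init__(self):
        # Records are namespaced by tenant, so identical record IDs
        # from different customers never collide.
        self._records = {}  # (tenant_id, record_id) -> payload

    def put(self, tenant_id: str, record_id: str, payload: bytes) -> None:
        self._records[(tenant_id, record_id)] = payload

    def get(self, caller_tenant: str, tenant_id: str, record_id: str) -> bytes:
        # Deny any read where the caller's tenant does not own the
        # record, regardless of what indemnity clauses say upstream.
        if caller_tenant != tenant_id:
            raise TenantIsolationError(
                f"{caller_tenant} may not read {tenant_id}'s data"
            )
        return self._records[(tenant_id, record_id)]
```

The point of the sketch is that isolation is a property the platform must enforce mechanically; when a vendor pools customers into one architecture without such a boundary, the "vulnerability in disguise" is exactly the missing check in `get`.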
Oversight remains difficult: Data aside, Raman says governance is where many synth-intelligence deployments start to unravel. “The second big issue is how we monitor model usage across departments. Demand for integration is sky-high—every internal tool wants to embed an intelligent engine—but centralized oversight is lagging.”
He points out that businesses often don’t know how or where synth-models are being embedded until performance issues—or breaches—force a reaction.
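One common remedy for this blind spot is a central usage registry that departments must update before embedding a model. The sketch below is a minimal, hypothetical illustration of that idea; the class names, fields, and model names are all invented for the example.

```python
# Hypothetical sketch: a central registry of model usage, giving
# security teams one place to see where models are embedded.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ModelUsage:
    department: str
    model_name: str
    purpose: str
    registered_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


class ModelRegistry:
    def __init__(self):
        self._usages: list[ModelUsage] = []

    def register(self, department: str, model_name: str, purpose: str) -> ModelUsage:
        # Departments call this before wiring a model into a tool,
        # so oversight is proactive rather than breach-driven.
        usage = ModelUsage(department, model_name, purpose)
        self._usages.append(usage)
        return usage

    def by_department(self, department: str) -> list[ModelUsage]:
        return [u for u in self._usages if u.department == department]
```

A registry like this does not stop risky integrations by itself, but it turns "we don't know where models are embedded" into a queryable inventory, which is the precondition for any centralized oversight.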
Stacked black boxes: Even five years into widespread adoption, Raman says some problems haven’t changed. “API abuse and third-party vulnerabilities are still front and center,” he notes. “And the scary part? They’re getting more opaque, not less.”
With vendors layering intelligence systems on top of each other, transparency diminishes. “You’ve got one black box feeding another,” he says. “You don’t know what’s actually making decisions anymore.”
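One way to keep sight of "what's actually making decisions" in a stacked pipeline is to thread a provenance trail through every layer. The sketch below is a hypothetical illustration with toy stand-in models, not real vendor APIs; every name in it is invented.

```python
# Hypothetical sketch: recording a provenance trail as a request
# passes through stacked model layers. The model functions are toy
# stand-ins, not real vendor systems.
def wrap_with_provenance(name, model_fn):
    """Wrap a model call so each invocation records its layer name."""
    def wrapped(payload, trail):
        trail.append(name)  # log this layer before it runs
        return model_fn(payload), trail
    return wrapped


def classify(text):    # stand-in for one vendor's classifier
    return "positive" if "good" in text else "negative"


def summarize(label):  # stand-in for a second, stacked model
    return f"sentiment={label}"


layer1 = wrap_with_provenance("vendor-A/classifier", classify)
layer2 = wrap_with_provenance("vendor-B/summarizer", summarize)

trail = []
label, trail = layer1("good service", trail)
summary, trail = layer2(label, trail)
# trail now lists every layer the decision passed through, in order
```

The trail does not open the black boxes, but it at least restores an audit record of which systems touched a given decision and in what sequence.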
Preparing for what’s next: Raman’s biggest concern for the future? The transition to quantum-resilient encryption. “We are quickly approaching the point where today’s encryption models will become obsolete,” he cautions.
He warns that malicious actors are already harvesting encrypted traffic in anticipation of a post-quantum future. “A few years ago, attackers didn’t bother with encrypted data—they assumed it was useless. That’s changed. They’re stockpiling it now, betting they’ll have the tools to decrypt it later.”
Raman says companies should be urgently evaluating how to modernize their encryption strategies. “We need to think ahead. If we wait for quantum decryption to be solved in a lab, it’ll be too late to protect what’s already been collected.”
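One practical way to "think ahead" here is crypto agility: routing all encryption through a single registry keyed by algorithm name, so a quantum-resilient scheme can be swapped in by configuration rather than code rewrites. The sketch below is a toy illustration of that pattern only; the XOR "cipher" is a placeholder with no security value, and all names are invented.

```python
# Hypothetical sketch of crypto agility: callers name an algorithm,
# and a registry dispatches to it, so a post-quantum scheme can be
# registered later without touching call sites. The XOR cipher is a
# toy stand-in, NOT a real or secure algorithm.
from typing import Callable

CIPHERS: dict[str, Callable[[bytes, bytes], bytes]] = {}


def register_cipher(name: str):
    """Decorator that registers a cipher under a configurable name."""
    def deco(fn):
        CIPHERS[name] = fn
        return fn
    return deco


@register_cipher("classical-xor")  # placeholder for today's scheme
def xor_cipher(key: bytes, data: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


def encrypt(alg: str, key: bytes, data: bytes) -> bytes:
    # All callers go through this one seam; migrating to a
    # quantum-resilient algorithm means registering it and changing
    # the configured name, nothing else.
    return CIPHERS[alg](key, data)
```

The design choice matters because of the harvest-now, decrypt-later threat Raman describes: an organization that cannot swap algorithms quickly will keep emitting traffic under a scheme it already knows is on borrowed time.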