Beyond AI Washing: How one Chief Legal Officer is building a blueprint for responsible innovation
Michelle Fleming, Chief Legal Officer at Bell Techlogix, discusses responsible AI use and proactive risk management.

Imagine your phone rings. It's your CEO's number, and it sounds exactly like her voice, urgently asking you to send over a sensitive contract. But it’s not her. It's an AI-powered deepfake, a vivid example of the new, sophisticated threats facing every business today. While such overt attacks are alarming, a more subtle risk is proliferating in boardrooms and marketing materials: "AI washing."
Reminiscent of the "cloud washing" trend a decade ago, companies are exaggerating their AI capabilities to attract customers and investors. But this time, federal agencies are taking notice, creating a new regulatory minefield for corporate leaders. Navigating this landscape requires a new kind of legal leadership, one that champions innovation while rigorously managing its risks.
Enter Michelle Fleming, the Chief Legal Officer at Bell Techlogix, Inc. A challenge-seeker who once ran 12 half-marathons in 12 months, Fleming brings the same relentless drive to her professional life. She is a forward-thinking leader who aims to be a change maker, not a roadblock.
Training day: "I trained 500 lawyers on responsible use of AI, so that they're not grudgingly accepting the use of AI, but advocating for it," she says. Holding a CIPP/US certification in privacy law, Fleming is crafting a new playbook for the modern CLO in the age of AI. Fleming’s approach begins with a fundamental reframing of her role, moving beyond the traditional perception of a legal department as a risk-averse function of "no."
Taking calculated risks: "I see myself as a business person with a law degree," Fleming states. "Legal leaders often want to take zero risks, but you can't advance without taking calculated risks to move forward."
This philosophy translates into a proactive, client-centric strategy: she and her team actively monitor regulatory changes and security threats affecting Bell Techlogix's key client sectors, such as education, healthcare, and aerospace, to anticipate challenges before they arise. It's a model built not on stopping change, but on safely guiding it.
Responsible driving: "I think AI is like a car," Fleming says. "A car will get you places much faster than walking, and we want to use that speed. But you are still responsible for hitting the brakes and making the turns. We want our employees to use AI, but we have to teach them 'responsible driving' through the responsible use of that AI tool." The consequences of failing to teach that responsibility can be severe. In a case that has become a cautionary tale for the industry, a recent Canadian tribunal decision held an airline responsible after its customer service chatbot provided a passenger with incorrect information about bereavement fares. For Fleming, the lesson is clear: even if you don't build the tool, you are still responsible for its output.
A blueprint for breaking silos: To translate philosophy into practice, Fleming helped implement a two-tiered governance structure at Bell Techlogix designed to foster both top-down oversight and bottom-up innovation. A leadership-level AI Governance Committee—comprising the COO, CTO, CISO, and Fleming as CLO—focuses on managing high-level risk. In parallel, an employee-led AI committee is empowered to identify and escalate new tools and use cases from the front lines. "It encourages the whole company to work cohesively instead of in silos, which is where the risk happens," Fleming explains. "It's advancing us, but doing it in a responsible way."
Achieving cohesion: This required deliberate buy-in from across the organization. The leadership team introduced the initiative at a company-wide all-hands meeting and followed up with smaller, more intimate "CTO Fireside talks" where employees were encouraged to ask questions and raise concerns, ensuring everyone felt part of the process.
As AI continues its evolution, Fleming believes the key isn't to avoid challenges, but to welcome them. "I actually like the bumps," she reflects. "If I don't hear about them, I worry about what's really going on. When I hear a lot about the bumps, I know we have an open channel of communication, and that's a good thing."