What happens when AI becomes accessible to every employee, not just data scientists? How do organizations navigate the fine line between innovation and risk when generative AI becomes part of everyday operations?
These are the questions Diane Gutiw, CGI Vice President and leader of the company’s AI Research Center, tackles in her recent conversation with Peter Scott on the AI and You podcast, which reaches 14,000 monthly listeners.
Diane and Peter offer a clear-eyed view of what it means to govern AI responsibly in today’s business and government landscapes.
Key themes covered in the podcast
AI governance: The need for holistic, evolving frameworks
Diane describes how AI is “not just another tool in the toolbox” but a lever that transforms decision-making and risk management. She outlines CGI’s approach to AI governance, which includes cross-functional collaboration across security, privacy, legal, and procurement teams. She emphasizes the importance of continuous validation and the creation of guardrails—not barriers—to foster safe innovation.
“Don’t design AI just because. Have a very specific purpose for that AI.”
— Diane Gutiw
The rise of agentic AI and second-agent fact checking
Hallucination—the tendency of generative AI to confidently deliver incorrect answers—isn’t just a quirky bug. It’s a business risk. Diane explains how CGI addresses this through agentic AI—layering fact-checking agents to validate responses and assign quality scores.
It’s not just AI literacy that matters, but AI discernment.
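The second-agent pattern described above can be sketched in a few lines. This is a minimal illustration, not CGI’s implementation: the `primary_agent` and `checker_agent` functions are hypothetical stand-ins for calls to generative models, and the scoring logic is invented for the example.

```python
def primary_agent(question: str) -> str:
    """Stand-in for a generative model producing a draft answer."""
    return f"Draft answer to: {question}"

def checker_agent(question: str, answer: str) -> float:
    """Stand-in for a second, fact-checking agent that scores the draft
    for factual accuracy on a 0.0-1.0 scale."""
    return 0.4 if "draft" in answer.lower() else 0.9

def answer_with_fact_check(question: str, threshold: float = 0.7) -> dict:
    """Generate an answer, have a second agent score it, and flag
    low-confidence responses for review instead of returning them as-is."""
    draft = primary_agent(question)
    score = checker_agent(question, draft)
    return {
        "answer": draft,
        "quality_score": score,
        "needs_review": score < threshold,
    }

result = answer_with_fact_check("What does agentic AI mean?")
print(result["needs_review"])  # low score -> flagged for human review
```

The key design choice is that the checker is a separate agent with its own prompt and context, so it is not invested in defending the first agent’s answer; responses scoring below the threshold are routed to a human rather than delivered.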
Rethinking work in the age of AI-human collaboration
As organizations face a looming talent crunch, especially in sectors like healthcare and energy, Diane reframes AI not as a job replacer, but as a capacity extender. She shares how CGI studies AI-driven productivity in development teams and contact centers, finding the most value among junior and intermediate roles. Meanwhile, senior professionals benefit by focusing on higher-order tasks.
“We are the last generation managing a purely human workforce.”
— Diane Gutiw
Responsible innovation and risk mitigation
Drawing on real-world examples—from radiology support tools to client-facing AI interfaces—Diane discusses how CGI uses purpose-built AI solutions and continuously monitored risk frameworks. This approach helps clients deliver value without sacrificing trust, safety, or compliance.
Why this matters
As generative AI reshapes industries faster than the internet or mobile ever did, the organizations that thrive will be those that embed responsibility into the core of their AI strategies. Diane’s insights reflect CGI’s deep experience helping clients accelerate innovation while managing emerging risks.
From scalable governance models to next-gen upskilling, this conversation offers valuable takeaways for leaders navigating today’s fast-moving AI landscape.
Also subscribe to our From AI to ROI podcast series.