Catch this episode on Apple, Spotify, Amazon, or YouTube.
The latest episode of GRC Uncensored dove deep into the magical world of AI governance, specifically ISO 42001. This week, our guests are Chris Honda, Whistic’s Manager of Security, Risk, and Compliance; and Jonathan LeBaron, MasterControl’s Senior GRC Engineer with the golden voice. Our duo shared their firsthand experiences navigating compliance, business adoption, and the broader implications of AI risk management.
Key Takeaways
ISO 42001 is becoming essential for companies adopting AI, not just for compliance but to build customer trust.
AI risk assessments are more complex than traditional security frameworks, requiring new approaches to impact analysis.
Shadow IT and vendor AI features introduce unexpected risks—companies must proactively monitor and review new AI functionalities.
AI governance isn’t just about compliance; it’s about trust. Businesses that prioritize transparency and ethical AI use will have a competitive edge. Also, AI may or may not be making us dumber.
Why ISO 42001 Matters Now
AI governance is quickly becoming a business priority, not just a compliance checkbox. Both Chris and Jonathan emphasized that organizations adopting AI need structured risk management frameworks, and that’s where ISO 42001 comes in.
Chris explained how pursuing ISO 42001 was both a proactive measure and a strategic decision:
“Even if we don’t have to do this, our customers care about it. If this makes them more comfortable doing business with us, and it’s financially reasonable, we’re going to pursue it.”
For Jonathan, whose company operates in the life sciences industry, certification wasn’t just a competitive advantage; it was a necessity:
“We got folks who are making needles that go into surgery or hip replacements… They come to us with regulations in one hand and say, ‘Where does this apply to your AI?’”
Challenges of Implementation
Despite AI being a widely discussed topic, implementing an AI governance framework is far more complex than many expect. Jonathan highlighted the difficulty of conducting AI-specific risk assessments:
“The standard itself is run of the mill… but once you start doing an impact assessment or a risk analysis on an AI system, that’s where it gets grittier than you think.”
Chris echoed the challenge, explaining how Whistic leveraged existing security frameworks:
“We have years of ISO 27001 under our belt, so we’re familiar with what ISO asks for, but even then, we’re trying to anticipate what auditors will ask beyond just the stated minimum.”
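Neither guest walked through their paperwork on air, but the “grittier” part Jonathan mentions is usually the impact assessment itself: enumerating who an AI system can affect and how. Here’s a minimal sketch of what one such record might capture; the field names are our illustration, not language from the standard:

```python
from dataclasses import dataclass

@dataclass
class AIImpactAssessment:
    """One record per AI system, loosely modeled on the impact-assessment
    themes discussed in the episode. Fields are illustrative only."""
    system_name: str
    intended_purpose: str
    affected_parties: list[str]       # e.g. customers, employees, regulators
    training_data_sources: list[str]  # provenance of data the model learned from
    potential_harms: list[str]        # bias, incorrect output, privacy exposure
    human_oversight: str              # who reviews outputs, and when
    residual_risk: str                # "low" / "medium" / "high" after controls

# Hypothetical example entry
assessment = AIImpactAssessment(
    system_name="support-ticket-summarizer",
    intended_purpose="Summarize inbound tickets for the support team",
    affected_parties=["customers", "support agents"],
    training_data_sources=["vendor foundation model", "internal ticket history"],
    potential_harms=["hallucinated ticket details", "exposure of customer PII"],
    human_oversight="Agent reviews every summary before it reaches a customer",
    residual_risk="medium",
)
```

Keeping these as structured records rather than prose in a shared doc makes it much easier to show an auditor coverage across every AI system in scope.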
AI Risk: The Known and the Unknown
AI presents new challenges beyond traditional cybersecurity risks. The conversation touched on AI-driven decision-making and bias, as well as the hidden risks of relying on AI-generated outputs without oversight. Sort of like this recap blog post, which kept misspelling our guests’ names and the companies they work for.
Jonathan raised concerns about unchecked AI adoption:
“People are going in and trusting AI solutions because the trade-off is incredibly desirable… but what if we’re just reinforcing the same bias over and over?”
Chris added that businesses must consider the long-term implications:
“We don’t want to be the ones saying, ‘No, you can’t use it,’ but we also don’t know the extent of the risks yet. Someone has to be willing to take that risk.”
Third-Party AI Risks & Shadow IT
One of the biggest surprises in AI governance isn’t new vendors—it’s existing vendors quietly introducing AI functionality into their products. Many businesses don’t realize that enabling a new AI feature could introduce an unapproved subprocessor into their environment.
Jonathan described the challenge:
“My biggest problem isn’t new vendors—it’s old vendors putting a button to just turn on new AI functions. And then people in the company click it without telling us.”
Chris echoed this concern, explaining how internal teams often bypass governance processes:
“We’re not saying no, we’re saying, ‘Let us know before you turn it on.’ We can get a review done same day, but if we don’t know about it, that’s when it becomes a problem.”
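Jonathan’s “button to just turn on new AI functions” problem lends itself to a simple monitoring pass. A minimal sketch of the idea, assuming you maintain a per-vendor list of reviewed AI features (all vendor and feature names here are hypothetical; in practice the observed side might come from questionnaires, release notes, or a TPRM platform export):

```python
# AI features security has already reviewed and signed off on, per vendor.
approved_ai_features = {
    "crm-vendor": {"email-draft-assistant"},
    "ticketing-vendor": set(),  # no AI features approved yet
}

# AI features currently enabled in each vendor's product.
observed_ai_features = {
    "crm-vendor": {"email-draft-assistant", "call-transcription"},  # new toggle
    "ticketing-vendor": {"auto-triage"},
}

# Flag anything turned on without a review, so the GRC team hears about it
# from a report instead of an audit finding.
for vendor, observed in observed_ai_features.items():
    unreviewed = observed - approved_ai_features.get(vendor, set())
    for feature in sorted(unreviewed):
        print(f"REVIEW NEEDED: {vendor} enabled '{feature}' without sign-off")
```

The point isn’t the tooling; it’s that “let us know before you turn it on” only works if something is watching for the features nobody mentioned.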
The Future of AI Governance & Regulation
The episode wrapped up with a discussion on the future of AI governance. While AI regulations are maturing (sort of), businesses can’t afford to wait. Implementing frameworks like ISO 42001 now positions organizations for compliance down the road.
Jonathan summarized the importance of proactive governance:
“If you think you can protect your data now, it’s already too late. Your data is out there. The only thing you can do is control what happens next.”
Chris emphasized transparency as the path forward:
“We’re not trying to be the AI police, but we have to make sure that when our customers ask how we’re protecting their data, we have a real answer.”
What are your thoughts on AI governance? Is your organization implementing frameworks like ISO 42001?