[Barbara Donatien, Manager, Attest Services at BARR Advisory:]
Responsible AI starts with accountability. Artificial intelligence brings tremendous opportunities, but it also introduces new kinds of risks. Bias in algorithms, lack of transparency, privacy concerns, and data misuse are just a few of the challenges organizations face. To truly realize the benefits of AI, businesses must ensure their systems are secure, ethical, and compliant with evolving global standards.
So what does that mean for compliance programs today? First, AI governance needs to start at the top. Privacy, fairness, and security should be built into your AI strategy from day one. At BARR, we recommend following privacy-by-design principles, conducting regular risk assessments, and being transparent with customers and clients about how AI systems collect and use data.
Strong data governance also means protecting sensitive information with measures like encryption and access controls. But equally important is education — making sure employees who handle data or interact with AI systems understand their responsibilities.
Even if your business doesn’t directly use or build AI tools, chances are your vendors do. That’s why it’s critical to update your vendor risk management process to include questions about AI use. Ask vendors if they’ve mapped out how AI fits into their GRC programs and whether they follow any recognized security frameworks, like ISO 42001 or the NIST AI Risk Management Framework.
The NIST AI Risk Management Framework provides a flexible, voluntary approach for managing AI-related risks. It’s built around four key functions: govern, map, measure, and manage. Together, these help organizations identify potential risks, evaluate them, and put safeguards in place.
For companies that need a more formal certification, ISO 42001 lays out requirements for establishing, operating, and continually improving an AI management system. ISO 42001 ensures that AI is being used safely, ethically, and in alignment with your organization’s goals.
For organizations in healthcare, financial services, and other highly regulated industries, HITRUST has also introduced two new programs: the AI Risk Management Assessment, which evaluates your organization’s AI risk posture, and the AI Security Certification, which validates that your AI systems meet prescriptive security and privacy controls.
By aligning with one or more of these frameworks, organizations can show customers, partners, and stakeholders that they’re managing AI responsibly and that trust is a top priority.
Beyond these voluntary frameworks, AI-related regulations are starting to take shape around the world. In Europe, the EU AI Act is the first major law designed to govern how AI systems are built and used. It takes a risk-based approach, banning systems that pose unacceptable risk, such as those that manipulate behavior or perform social scoring of citizens, and imposing strict requirements, including transparency obligations, on high-risk AI used in areas like healthcare and critical infrastructure. Even organizations outside the EU need to pay attention, since the law applies to AI systems placed on the EU market or used within the EU. Its impact will be global, much like GDPR before it.
In the U.S., there is no single federal AI law yet, but conversations are happening. We can expect guidance to evolve through regulators like the FTC and the SEC as they apply existing privacy, security, and consumer protection standards to AI use cases.
The takeaway? Whether you operate in one state or across multiple countries, AI governance is no longer optional. It’s becoming a key part of security compliance.
The bottom line here is that AI is here to stay, and just like cybersecurity and privacy, managing AI responsibly will become a defining factor in how organizations build trust and maintain compliance.
At BARR Advisory, we believe the goal isn’t just to comply with the new regulations. It’s to build an AI program that’s transparent, ethical, and aligned with your mission. We help you identify the standards and frameworks that fit your business, evaluate your AI risk posture, and design a compliance strategy that enables innovation rather than slowing it down.
The organizations that embrace secure, compliant, and ethical AI practices today are the ones that will earn trust, lead their industries, and thrive in an AI-driven future.
Contact us for a free consultation.