
cyBARR Chat: HITRUST Edition Episode 17, HITRUST AI Assurance Program

January 18, 2024 | AI, HITRUST

Transcript:

Hello everyone and welcome to today’s episode of cyBARR Chats. My name is Kyle Cohlmia, and I am the associate content writer at BARR Advisory. Today we are joined by the one and only Steve Ryan. He is the manager of attest services here at BARR and the HITRUST consultant we love to talk all things HITRUST with.

So Steve, in October 2023, HITRUST announced a new program called the HITRUST AI Assurance Program. It’s a program that provides a secure and sustainable strategy for trustworthy AI, leveraging the HITRUST CSF, AI-specific assurances, and shared responsibility and inheritance. As BARR’s HITRUST expert, Steve is here to give us the details about this new initiative and an overview of how it works. So let’s begin.

Thank you so much, Steve, for being here, as always. Before we dive into the exact AI Assurance program, I thought we could briefly chat about the rise in artificial intelligence and how those systems are helping organizations and then also HITRUST’s trustworthy approach to AI.

Yeah, absolutely, Kyle. And as always, it’s my pleasure to be here. So, AI foundational models that are now available from certain cloud service providers and other leading organizations allow businesses to scale AI across industries and specific enterprise needs. The implementation and use of AI algorithms and systems, such as natural language processing, the resulting large language models, and other machine learning applications, have significant potential.

These systems are being imagined and applied in novel and creative ways that are evolving at a breakneck pace. However, as we know, with any new disruptive technology comes the possibility of risk. HITRUST describes it this way: “the opaque nature of these deep neural networks introduces data privacy and security challenges that must be met with transparency and accountability.” In other words, to operate effectively, AI systems must be trustworthy, and risk management is only possible if the multiple organizations involved share responsibility for identifying, managing, and measuring those risks.

HITRUST’s trustworthy approach to AI is aided by existing and proven approaches to risk, security, and compliance management, all supported by the reliable and scalable HITRUST assurance system.

All of the major cloud service providers who are leading the adoption of AI also leverage HITRUST’s shared responsibility and inheritance program to provide reliable assurances for their subscribers today.

The addition of AI risk management considerations to the HITRUST ecosystem and shared industry leadership will enable secure and scalable AI in support of the expected benefits for companies adopting AI. That includes service providers offering compelling features such as large language models, as well as relying parties seeking evidence that the companies and services they work with are responsible in their use of AI.

Great. Thank you. It does sound like HITRUST is taking many steps to ensure the AI assurance program will be trustworthy and adaptable, which is great.

So let’s talk a little bit about the benefits of using AI systems for the HITRUST CSF specifically. Can you talk a little bit more about the current areas of focus that are already available?

Definitely. As an early adopter of AI for greater efficiency, HITRUST understands that users of AI systems can leverage those capabilities as part of their overarching risk management program, with a resulting increase in the efficiency and trustworthiness of their systems.

HITRUST has already completed a few areas of innovation that provide benefits to the HITRUST community. The first area is Mapping of Assessed Entity Policy. This innovation uses AI to help map written policies to the HITRUST CSF and relevant safeguarding requirement statements, supporting reduced assurance effort for assessed entities.

The second area is Quality Assurance Efficiency, which uses AI to accelerate HITRUST’s ability to do robust quality assurance of assessments that we all know and love.

Then the final innovation is CSF Quality and Updates, which includes the use of AI to map authoritative sources into the HITRUST CSF and keep the CSF relevant as security requirements are continually changing out there.

Awesome. Thank you so much, Steve. So HITRUST has also recently published this comprehensive AI strategy to tell the community about the secure and sustainable use of AI. The strategy encompasses a series of important elements critical to the delivery of trustworthy AI.

So there are four initiatives to this strategy, and I thought we could touch on each one briefly.

The first element is prioritizing AI risk management for the HITRUST CSF. Can you talk a little bit about the use of AI for risk management in the HITRUST CSF?

Yeah, absolutely. Beginning with the release of HITRUST CSF v11.2 in October 2023, HITRUST is incorporating AI risk management and security dimensions into the HITRUST CSF. This provides an important foundation that AI system providers and users can use to consider and identify risks and negative outcomes in their AI systems. HITRUST will provide regular updates as new controls and standards are identified and harmonized into the framework.

According to HITRUST’s AI strategy, HITRUST CSF 11.2 includes two risk management sources with plans to add additional sources through 2024. The first source is the NIST AI risk management framework. This framework provides for considerations of trustworthiness in the “design, development, use, and evaluation of AI products, services, and systems.”

The second is the ISO AI Risk Management Guideline. This guideline provides “guidance on how organizations that develop, produce, deploy, or use products, systems, and services that utilize AI can manage risks specifically related to AI.”

Great. Sounds good. So we’ll move on to the second initiative, which is providing reliable assurances around AI risks and risk management through HITRUST reports. What information do you have about that for us?

Yeah. So beginning in 2024, HITRUST assurance reports will include AI risk management so that organizations can address AI risk through a common, reliable, and proven approach. This will allow organizations that are implementing AI systems to understand the risks associated with them and reliably demonstrate their adherence to AI risk management principles with the same transparency, consistency, accuracy, and quality available through all HITRUST reports.

More specifically, both AI users and AI service providers may add AI risk management dimensions to their existing HITRUST e1, i1, and r2 assurance reports and use the resulting reports to demonstrate the presence of AI risk management on top of robust and provable cybersecurity capabilities. This will support the ever-changing cybersecurity landscape as HITRUST and industry leaders regularly add additional control considerations to the AI Assurance Program.

Awesome. Thank you. And for the third initiative, it is embracing inheritance and support of shared responsibility for AI. So what exactly does that look like?

Yes, the HITRUST shared responsibility model will allow AI service providers and their customers to agree on the distribution of AI risks and allocation of shared responsibilities. It’s important to consider those areas where the parties share risk management roles, such as when both parties have responsibility for model training, tuning, and testing with different contexts.

As part of the model, parties must demonstrate that they have considered and addressed important questions, including but not limited to: identifying the training data used by the AI system; confirming that training data is relevant and of the expected quality; ensuring safeguards are in place to prevent poisoning of data that would impact the integrity of the system; recognizing, identifying, and managing bias to minimize it; establishing clarity from model providers on the responsibilities of model users, including testing to evaluate whether the model is appropriate for the intended business outcome and further tuning of the model; and finally, identifying the distinctions required where companies choose to create their own large language models or use their organization’s data to refine or extend a model.

Awesome. The final initiative is leading industry collaboration. I know HITRUST is collaborating with some professionals in the field. Can you talk a little bit more about that?

Yeah, absolutely. So HITRUST will use its long-standing experience in control frameworks, assurance, and shared responsibility to really drive responsible and industry-led solutions for AI risk management and security. For example, the Microsoft Azure OpenAI Service supports HITRUST’s maintenance of the CSF and enables accelerated mapping of the CSF to new regulations, data protection laws, and standards.

This in turn supports the Microsoft Global Healthcare Compliance Scale program, enabling solution providers to streamline compliance for accelerated solution adoption and time-to-value.

That is great information. Thank you. There’s so much to share with this initiative, so we really appreciate you sitting down with us.

I know many organizations will find this program helpful, especially moving forward as they navigate their healthcare compliance goals. We look forward to learning more about AI for risk management, and as always, we appreciate you joining us on cyBARR Chats and keeping us up to date with all things HITRUST.

Thanks for having me back, Kyle.

Yes. And thank you everyone. We’ll see you next time.