HITRUST Announces New Initiatives for Secure and Sustainable Use of AI

November 9, 2023 | HITRUST

HITRUST recently announced a new program, the HITRUST AI Assurance Program, which provides a secure and sustainable strategy for trustworthy AI by leveraging the HITRUST Common Security Framework (CSF), AI-specific assurances, and shared responsibilities and inheritance. It is the only assurance program that enables the sharing of security control assurances for generative AI and other emerging AI model applications.

This exciting initiative comes with a few essential details to explore. Let’s take a closer look at HITRUST’s AI Assurance Program as outlined in its strategy report for the secure and sustainable use of AI.

The Rise of Artificial Intelligence 

AI foundation models, now available from cloud service providers and other leading organizations, allow businesses to scale AI across industries and specific enterprise needs. However, as with any new, disruptive technology, AI introduces new risks.

“The opaque nature of these deep neural networks introduces data privacy and security challenges that must be met with transparency and accountability,” states HITRUST. In other words, to operate effectively, AI systems must be trustworthy, and risk management is only possible if the multiple organizations involved share responsibility for identifying, managing, and measuring those risks.

HITRUST’s trustworthy approach to AI is aided by existing and proven approaches to risk, security, and compliance management, all supported by the reliable and scalable HITRUST assurance system. 

Benefits of Using AI Systems for the HITRUST CSF

As an early adopter of AI for greater efficiency, HITRUST understands that users of AI systems can leverage these capabilities as part of their overarching risk management programs to increase both the efficiency and the trustworthiness of their systems.

With its new strategy, HITRUST has already completed a few areas of innovation that benefit the HITRUST community, including:

  • Mapping of Assessed Entity Policy (patent pending): The use of AI to help map written policies to the HITRUST CSF and relevant safeguard and requirement statements supports reduced assurance effort for assessed entities.
  • Quality Assurance Efficiency: The use of AI to accelerate HITRUST’s ability to complete robust quality assurance of assessments. 
  • CSF Quality and Updates: The use of AI to map authoritative sources into the HITRUST CSF and keep the CSF relevant as security requirements are continually changing and new authoritative sources are identified.

“AI has tremendous social potential, and the cyber risks that security leaders manage every day extend to AI. Objective security assurance approaches such as the HITRUST CSF and HITRUST certification reports assess the needed security foundation that should underpin AI implementations,” said Omar Khawaja, Field CISO of Databricks.

The HITRUST AI Assurance Program

HITRUST’s strategy for the secure and sustainable use of AI encompasses a series of elements critical to the delivery of trustworthy AI. According to HITRUST, the organization and its “industry leader partners are identifying and delivering practical and scalable assurance for AI risk and security management through key initiatives providing organizations with the leadership needed to achieve the benefits of AI while managing the risks and security of their AI deployments.”

Take a look at the four key initiatives outlined in the HITRUST AI strategy. 

1. Prioritizing AI Risk Management for the HITRUST CSF

Beginning with the release of HITRUST CSF version 11.2 in October 2023, HITRUST is incorporating AI risk management and security dimensions into the HITRUST CSF. This provides an important foundation that AI system providers and users can apply to identify risks and potential negative outcomes in their AI systems. HITRUST will provide regular updates as new controls and standards are identified and harmonized into the framework.

According to the HITRUST AI strategy, HITRUST CSF 11.2 includes two risk management sources with plans to add additional sources through 2024:  

  • NIST AI Risk Management Framework: The NIST AI Risk Management Framework provides for considerations of trustworthiness in the “design, development, use and evaluation of AI products, services and systems.”  
  • ISO AI Risk Management Guidelines: ISO/IEC 23894 provides “guidance on how organizations that develop, produce, deploy or use products, systems, and services that utilize artificial intelligence (AI) can manage risk specifically related to AI.”

2. Providing Reliable Assurances around AI Risks and Risk Management through HITRUST Reports 

Beginning in 2024, HITRUST assurance reports will include AI risk management so that organizations can address AI risks through a common, reliable, and proven approach. This will allow organizations implementing AI systems to understand the associated risks and reliably demonstrate their adherence to AI risk management principles with the same transparency, consistency, accuracy, and quality available through all HITRUST reports.

More specifically, both AI users and AI service providers may add AI risk management dimensions to their existing HITRUST e1, i1, and r2 assurance reports and use the resulting reports to demonstrate AI risk management on top of robust and provable cybersecurity capabilities. This approach will keep pace with the ever-changing cybersecurity landscape as HITRUST and industry leaders regularly add control considerations to the AI Assurance Program.

3. Embracing Inheritance in Support of Shared Responsibility for AI

The HITRUST Shared Responsibility Model will allow AI service providers and their customers to agree on the distribution of AI risks and allocation of shared responsibilities. It’s important to consider those areas where the parties share risk management roles, such as when both parties have responsibility for model training, tuning, and testing with different contexts. 

As part of the model, parties must demonstrate that they have considered and addressed important considerations, including but not limited to:

  • Identification of the training data used by the AI system;
  • Confirmation that the training data is relevant and of the expected quality;
  • Safeguards to prevent data poisoning that could compromise the integrity of the system;
  • Recognition and identification of bias, and management to minimize it;
  • Clarity from model providers on the responsibilities of model users, including testing to evaluate whether the model is appropriate for the intended business outcome and further tuning of the model; and
  • Identification of required distinctions where companies choose to create their own large language models or use their organization’s data to refine or extend a model.

4. Leading Industry Collaboration 

HITRUST will use its long-standing experience in control frameworks, assurance, and shared responsibility to drive responsible and industry-led solutions for AI risk management and security.

For example, Microsoft Azure OpenAI Service supports HITRUST maintenance of the CSF and enables accelerated mapping of the CSF to new regulations, data protection laws, and standards. This, in turn, supports the Microsoft Global Healthcare Compliance Scale Program, enabling solution providers to streamline compliance for accelerated solution adoption and time-to-value.

As HITRUST continues to develop its AI Assurance Program, BARR is dedicated to helping you navigate these new initiatives and innovations for trustworthy AI. Contact us today to speak with a HITRUST specialist and begin or continue your HITRUST journey.
