
Webinar: AI Risk Management and Trustworthiness for Healthcare Organizations

Written by Palindrome Technologies | Jul 7, 2025 7:47:14 PM

When: September 25, 2025, 12:00 PM CDT

We are excited to offer this training session in collaboration with the Society of Corporate Compliance and Ethics (SCCE) and the Health Care Compliance Association (HCCA).

This webinar provides a concise overview of managing risk and trustworthiness in AI systems, focusing on prominent AI risk management frameworks including the NIST AI RMF 1.0, ISO/IEC 42001, and HITRUST AI. It examines the challenges of AI risk management, such as risk measurement, tracking emergent risks, and the limited availability of reliable metrics. The presentation also highlights key trustworthiness characteristics for AI systems, including validity, reliability, safety, security, resilience, explainability, interpretability, privacy enhancement, fairness with harmful bias managed, accountability, and transparency. Practical aspects of AI security are explored through a discussion of AI/LLM security analysis and penetration testing, outlining common attack types and essential test coverage areas.

For healthcare organizations, the insights from this webinar are particularly beneficial due to the critical nature of data and decisions in the medical field. Adopting robust AI risk management frameworks can help to:

  • Enhance Patient Safety and Trust: By implementing frameworks such as NIST AI RMF, healthcare organizations can foster trustworthy AI systems that are valid, reliable, safe, and secure, minimizing the risk of adverse outcomes for patients.
  • Ensure Data Privacy and Compliance: Frameworks such as ISO/IEC 42001 and HITRUST AI emphasize privacy-enhanced AI systems and data protection, which are crucial for handling sensitive patient information and adhering to regulations such as HIPAA. HITRUST, in particular, targets the healthcare industry and offers certification for AI cybersecurity, providing a recognized standard of assurance.
  • Mitigate Operational and Reputational Risks: By understanding and addressing challenges such as training data dependencies, inappropriate system outputs, and targeted AI system attacks, healthcare organizations can proactively manage risks that could lead to operational disruptions or damage to their reputation.
  • Improve System Accountability and Transparency: The frameworks promote accountability and transparency in AI systems, enabling healthcare providers to better understand and explain AI-driven decisions to patients and stakeholders, fostering greater confidence in AI adoption.
  • Strengthen Supply Chain Security: Given the reliance on third-party service providers in AI development and deployment, the emphasis on supply chain security within these frameworks is vital for healthcare organizations to assess and manage risks associated with external partners and software.


Learning Objectives:

  • Objective 1: Understand key AI risk management frameworks: Learn about the NIST, ISO, and HITRUST AI frameworks for managing risk and building trustworthy AI systems.
  • Objective 2: Identify AI system challenges and characteristics: Recognize AI risk measurement challenges and key trustworthiness characteristics such as safety, privacy, and accountability.
  • Objective 3: Apply AI frameworks in healthcare: Discover how these frameworks enhance patient safety, data privacy, risk mitigation, and supply chain security in healthcare.

Register HERE to attend the webinar.

Also check out our blog post, Managing Risk in Artificial Intelligence Systems - A Practitioner's Approach 2025.