Managing Risk in Artificial Intelligence Systems, a Practitioners Guide 2025

AI risks have sparked concerns for human safety and liberties, physical and digital security, and environmental and societal impacts. In response, government and industry organizations have produced guidance on identifying, measuring, and managing AI risk.

This report provides an introduction and comparison of three leading AI risk management frameworks, published by the National Institute of Standards and Technology (NIST), the International Standards Organization (ISO), and a certification assessment framework for secure AI solutions from HITRUST. Furthermore, this report e outlines recommendations for ensuring Security and Privacy in AI implementations from a practitioner’s perspective.

 

Download

Frequently asked questions

What's the right AI Risk management framework for my organization?

This depends on the following:

  • System Requirements: Regulatory, customer, partners and strategy requirements.
  • Budget: This is dictated mainly by the requirements (e.g., regulatory, partners, customers). 
  • Duration: each framework has its strengths and limitations and thus selecting the right risk management framework will impact the time required to develop and implement the AI system.  
  • Resources: engaging the right stakeholders and subject matter experts to best formulate your strategy.  

The report provides insights on selecting the framework which meets your needs best.

You also contact us for a free consultation. 

What strategy I should leverage to verify trustworthiness for my AI System?

Regardless of framework, certification, or guidance, most industry standards provide similar recommendations around governance and risk management for artificial intelligence systems. In fact, from a logistical perspective, many recommendations align with existing best practices for IT systems, and integrating artificial intelligence requires process evolution and new controls or enhancement of existing controls. The following are area of prioritization you should consider in your strategy when deploying trustworthy artificial intelligence systems.

  • Governance
  • Application, Infrastructure and Data Security
  • Risk Management
  • Supply Chain
  • Security and Verification

For more information visit AI Cyber Security  

 

How does AI penetration testing compares to traditional network penetration testing?

Although AI Penetration Testing examines similar areas  as traditional penetration testing (e.g,. infrastructure components, API's) it emphasizes security analysis around the AI model and explores specific attack vectors such as (but not limited to):

  • Prompt Injection Attacks
  • Evasion Attacks
  • Reconstruction / Inference Attacks Sensitive Information Disclosure
  • Supply Chain
  • Data and Model Poisoning
  • Improper Output Handling
  • Excessive Agency
  • System Prompt Leakage
  • Vector and Embedding Weaknesses
  • Misinformation
  • Unbounded Consumption

Implementing effective security controls helps mitigate security risks and prevent attacks against the AI system. In order to verify the effectiveness of these controls, organizations should conduct thorough security analysis and penetration testing exercises throughout the AI system’s lifecycle.

Contact us to learn more... 

Let us know if you would like to stay informed about our upcoming webinar on this topic.