
Managing Risk in Artificial Intelligence Systems: A Practitioner's Approach 2025

Overview of AI security frameworks and recommendations for practitioners

Artificial intelligence (AI) is a rapidly evolving and maturing technology with a growing footprint in enterprises globally. Consulted for its deep store of domain knowledge, real-time decision support, and generative responses, AI has proved invaluable in fields such as healthcare, finance, and information technology. While AI continues to make headlines for its impact and ingenuity, the consequences and risks of the technology should remain an equal priority for all stakeholders. Early adopters in particular must understand the dangers of leveraging a highly complex, heavily abstracted solution for business applications, since the quality of the information an AI system produces depends on the accuracy of its inputs, data sources, and training techniques.

AI risks have sparked concerns for human safety and liberties, physical and digital security, and environmental and societal impacts. In response, government and industry organizations have produced guidance on identifying, measuring, and managing AI risk.

The prevailing risk management frameworks (RMFs) for AI systems include the National Institute of Standards and Technology (NIST) AI RMF 1.0, the International Organization for Standardization's ISO/IEC 42001, and the certification assessment framework for secure AI solutions from HITRUST.

We compiled a detailed report that introduces these frameworks and gives practitioners insights on how best to navigate risk in AI systems. The report includes:

  • NIST AI RMF Overview
  • HITRUST AI Security Certification Overview
  • ISO 42001 Overview
  • Characteristics of Trustworthy AI
  • Examples of AI Security Risks, Threats and Attacks
  • Ensuring Security and Privacy in AI
    • Governance
    • Application, Infrastructure, and Data Security
    • Risk Management
    • Supply Chain
    • Security Testing and Verification
  • Conclusions

Furthermore, the report outlines recommendations for ensuring security and privacy in AI implementations from a practitioner’s perspective.

 

Get the full report:

Managing Risk in Artificial Intelligence Systems: A Practitioner's Guide 2025


Considering options for Auditing and Penetration Testing of your AI system? 
