2 min read

Webinar: The EU AI Act Deadline: Is Your Business Ready for the World's Strictest AI Law?

When: October 22, 2025, 12:00 PM CDT

We are excited to offer this training session in collaboration with the Society of Corporate Compliance and Ethics (SCCE) and the Health Care Compliance Association (HCCA).

The European Union's AI Act (Regulation (EU) 2024/1689) establishes the world's first comprehensive, horizontal legal framework for artificial intelligence, creating a de facto global standard that fundamentally re-architects the AI lifecycle. This analysis deconstructs the Act from a computer science perspective, translating its legal mandates into concrete engineering and strategic challenges. The regulation is built upon a risk-based pyramid, reserving its most stringent obligations for "high-risk" systems, for which it codifies four technical pillars:

  • Data and Data Governance (Article 10), demanding unprecedented data quality and bias mitigation;
  • Transparency and Explainability (Article 13), requiring systems to be interpretable;
  • Human Oversight (Article 14), mandating effective human-in-the-loop architectures; and
  • Accuracy, Robustness, and Cybersecurity (Article 15), which requires resilience against errors and sophisticated adversarial attacks.
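To make the Article 10 pillar concrete, here is a minimal sketch of one bias check a data-governance pipeline might run: the demographic parity gap between the positive-outcome rates of two groups. The tolerance value and the sample data are illustrative assumptions on our part, not figures prescribed by the Act.

```python
# Demographic parity difference: the absolute gap in favourable-outcome
# rates between two groups. One of many possible bias metrics an
# Article 10-style data-governance review might compute.

def positive_rate(outcomes: list[int]) -> float:
    """Fraction of favourable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_diff(group_a: list[int], group_b: list[int]) -> float:
    """Absolute gap in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical model decisions (1 = favourable outcome) for two groups:
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 5/8 = 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3/8 = 0.375

gap = demographic_parity_diff(group_a, group_b)
print(f"parity gap = {gap:.3f}")  # prints "parity gap = 0.250"
if gap > 0.1:  # illustrative tolerance, not a legal threshold
    print("flag dataset and model for bias review")
```

In practice such a metric would be one check among several (equalized odds, calibration, subgroup error rates) and would be logged as part of the documentation trail the Act requires.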

Furthermore, the Act introduces a novel, tiered regulatory regime for General-Purpose AI (GPAI) models, using a computational threshold (training with over 10²⁵ FLOPs) as a primary proxy for identifying models that pose "systemic risk." This transforms the AI supply chain, creating a cascade of liability and due diligence obligations from foundation model developers to downstream providers. For organizations, the strategic imperative is to view the Act not as a compliance burden, but as a framework for building trustworthy, defensible, and market-leading AI. This requires a paradigm shift toward integrated, "compliance-by-design" approaches, focusing on unified governance, operationalizing technical compliance in MLOps, investing in AI security research, re-architecting systems for explainability, and strategically managing the new complexities of the GPAI supply chain.
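A rough sense of where the 10²⁵ FLOP threshold falls can be had with the widely used back-of-the-envelope approximation that training compute is about 6 × parameters × training tokens. Note that the approximation and the example model sizes below are our illustrative assumptions; the Act itself specifies only the cumulative-compute threshold.

```python
# Back-of-the-envelope check against the AI Act's GPAI systemic-risk
# compute threshold of 10^25 FLOPs, using the common heuristic that
# dense-transformer training costs ~6 FLOPs per parameter per token.

SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training compute in FLOPs

def estimate_training_flops(params: float, tokens: float) -> float:
    """Approximate training compute: ~6 * parameters * training tokens."""
    return 6 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True if estimated training compute meets or exceeds the threshold."""
    return estimate_training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD

# A hypothetical 70B-parameter model trained on 15T tokens:
flops = estimate_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> systemic risk presumed: "
      f"{presumed_systemic_risk(70e9, 15e12)}")
# prints "6.30e+24 FLOPs -> systemic risk presumed: False"
```

The point of the sketch is that the threshold sits just above today's largest published training runs, which is why providers near the frontier need to track cumulative compute as a first-class compliance metric.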

Key Takeaways for the Talk

  1. Compliance is a Technical Engineering Challenge, Not Just a Legal One: The Act's core requirements for high-risk AI (data governance, transparency, human oversight, and robustness) are deeply technical and legislate solutions to open research problems in computer science. Success requires engineering and data science teams to be at the center of the compliance strategy.
  2. The Entire AI Supply Chain is Now Regulated: The Act's rules for General-Purpose AI (GPAI) models mean that no company is an island. If you use a third-party foundation model in a high-risk system, you are still liable for its compliance. This necessitates a new, critical enterprise function: rigorous GPAI model procurement, validation, and auditing.
  3. The New Competitive Advantage is Trust: The AI Act transforms "responsible AI" from a talking point into a legally mandated, auditable standard. Organizations that master the art of building, documenting, and deploying verifiably safe and trustworthy AI will not only ensure compliance but will also build significant brand equity and a powerful competitive advantage in the global market.

Register HERE to attend the webinar

Also check out our blog on Managing Risk in Artificial Intelligence Systems: A Practitioner's Approach (2025).

Poking AI in the Eye: A Practical Intro to Adversarial AI

New York Metro Joint Cyber Security Conference, September 26, 2024

OWASP Delaware Chapter Event: Security and Compression

Lucas Driscoll presents "Security and Compression", a talk on the cybersecurity risks caused by using compression, especially in a web context. Lucas...
