
Ignoring Responsible AI Could Cost Your Company Millions

Featuring Hadassah Drukarch, Director, Responsible AI Institute

Understanding Responsible AI Regulations: Navigating the EU AI Act and Beyond

In this episode of the Move 37 podcast, host Steven Walther speaks with Hadassah Drukarch, Director of Policy and Delivery at the Responsible AI Institute. They discuss the complexities of Responsible AI regulations, including the EU AI Act, and offer practical guidance for businesses navigating this rapidly evolving regulatory landscape.

Key Points:

The Need for Responsible AI

Responsible AI aims to guide organizations in ethically developing and implementing AI technologies. It addresses two main challenges:

  • Regulatory Complexity: AI evolves rapidly, making it difficult for regulations to keep pace.
  • Implementation Difficulty: Businesses struggle to practically implement evolving regulatory frameworks.

Responsible AI promotes governance frameworks that align AI usage with ethical principles, ensuring AI serves people in a safe and trustworthy manner.

The “Pacing Problem” in AI Regulation

Hadassah explains the “Collingridge Dilemma,” also known as the pacing problem, where technological advancements significantly outpace regulatory frameworks. Regulations struggle to manage risks posed by rapid AI development due to slower legislative processes.

EU AI Act Overview

The EU AI Act, a significant piece of upcoming legislation, categorizes AI systems into four risk levels:

  1. Prohibited AI Systems: Includes exploitative systems like social scoring or emotion detection in sensitive contexts. These are completely banned.
  2. High-Risk AI Systems: Systems with significant impacts on human lives (e.g., law enforcement, employment). These must meet rigorous requirements around data transparency, bias mitigation, and documentation.
  3. Limited-Risk AI Systems: Systems like deepfakes, which require transparency so users know they’re interacting with AI.
  4. Minimal-Risk AI Systems: Systems posing negligible risk, such as sports game outcome predictions, carry no special requirements.
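The four tiers and their obligations can be sketched as a simple lookup. This is an illustrative Python sketch only: the example systems and obligation summaries are paraphrased from the list above, and real classification depends on the Act's annexes and legal analysis, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical example systems, matched to the tiers described above.
EXAMPLE_SYSTEMS = {
    "social scoring": RiskTier.PROHIBITED,
    "CV screening for hiring": RiskTier.HIGH,
    "deepfake generator": RiskTier.LIMITED,
    "sports outcome predictor": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """Summarize the headline obligation for each risk tier."""
    return {
        RiskTier.PROHIBITED: "banned outright",
        RiskTier.HIGH: "data transparency, bias mitigation, documentation",
        RiskTier.LIMITED: "disclose that users are interacting with AI",
        RiskTier.MINIMAL: "no special requirements",
    }[tier]
```

For instance, `obligations(EXAMPLE_SYSTEMS["deepfake generator"])` returns the transparency duty described in tier 3.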

General Purpose AI and ChatGPT

The EU AI Act addresses “General Purpose AI” (GPAI) systems like ChatGPT separately, imposing additional transparency and documentation requirements. This special attention reflects the broad implications of GPAI systems.

Practical Guidance for Business Leaders

Business leaders should prioritize:

  • Inventory and Compliance: Identify AI systems in use and determine if they comply with existing or forthcoming regulations.
  • Consumer Trust: Actively communicate responsible AI practices to maintain consumer confidence.
  • Regulatory Debt Avoidance: Integrate compliance early in AI development processes to avoid future regulatory and technical debt.

Consequences of Non-compliance

Non-compliance with the EU AI Act could result in severe penalties, including fines of up to €35 million or 7% of global annual revenue, whichever is higher, underscoring the importance of early compliance efforts.
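The penalty ceiling for the most serious infringements is "whichever is higher" of the two figures, which a minimal sketch makes concrete (function name and the worked revenue figure are illustrative, not from the episode):

```python
def max_fine_eur(global_revenue_eur: float) -> float:
    """Upper bound of the fine for the most serious EU AI Act
    infringements: EUR 35 million or 7% of worldwide annual
    revenue, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_revenue_eur)

# For a company with EUR 1 billion in global revenue,
# 7% is EUR 70 million, exceeding the EUR 35 million floor.
```

For smaller companies the flat €35 million figure dominates; for large ones the revenue-based percentage does, which is why the exposure scales with company size.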

Role of the Responsible AI Institute

The Responsible AI Institute offers guidance, assessments, and certification programs to help organizations implement responsible AI practices. Hadassah highlights their pilot certification programs, providing companies with actionable frameworks to meet regulatory standards.

Final Advice for Organizations

Hadassah emphasizes the importance of taking proactive first steps:

  • Conduct an inventory of AI usage.
  • Join communities dedicated to responsible AI practices.
  • Engage early with compliance to manage risk effectively.

For more insights into Responsible AI, listeners are encouraged to follow community resources and expert guidance.


For more details, subscribe to the Move 37 podcast for future episodes on AI’s role in healthcare, education, and ethics.