
Beyond AI Regulations and Fines: The Importance of the Risk Culture in Organisations

Didier Vila, PhD - Founder and CEO of Alpha Matica and Alberto Barroso, PhD - Global Head of Decision Science at Tetra Pak*


In the rapidly evolving landscape of artificial intelligence, regulation and compliance are no longer optional; they will soon be mandatory. Building a robust AI risk culture is crucial for navigating the complexities of this new era. This article examines why such a culture matters, covering the upcoming EU regulations, the internal processes and guardrails organisations need, and how to foster a culture of continuous learning and change management.


1. Regulations Are Coming: The Impact of EU Regulations


The European Union is spearheading the global movement towards stringent AI regulation. The AI Act categorises AI systems into tiers based on their risk levels, ranging from prohibited AI practices (Chapter II, Article 5) to high-risk AI systems (Chapter III, Section 1, Article 6). In particular, high-risk AI systems will face rigorous requirements, including stringent data governance, transparency, and human oversight measures:


Simplified Overview of EU AI Regulations:


  • Tiered Risk Levels: AI systems will be classified into different tiers, with a focus on high-risk systems (Chapter III, Section 1, Article 6 & Annex III), which require rigorous scrutiny (Chapter III, Sections 2–5).

  • Governance, Transparency, Accountability and Post-Market Monitoring: For high-risk AI systems, organisations must meet requirements across their AI processes and remain accountable throughout the deployment life cycle (Chapter III, Sections 2–5), including post-market monitoring (Chapter IX, Section 1).

  • Fines for Non-Compliance: Non-compliance could result in hefty fines: potentially up to 7% of an organisation's worldwide annual turnover or €35 million, whichever is higher (Chapter XII, Articles 99–101).
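As a rough illustration (and not legal advice), the penalty ceiling described above is simple arithmetic: the applicable maximum is the greater of €35 million and 7% of worldwide annual turnover. A minimal sketch:

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Upper bound of the fine band described in Chapter XII:
    the greater of EUR 35 million and 7% of worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# For a firm with EUR 1 billion turnover, the 7% rule dominates:
print(max_fine_eur(1_000_000_000))  # 70000000.0
# For a smaller firm, the EUR 35 million floor applies:
print(max_fine_eur(100_000_000))    # 35000000.0
```

The actual fine imposed in a given case depends on the nature of the infringement; this only shows how the stated ceiling scales with company size.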


These regulations underscore the necessity for organisations to comply with and proactively prepare for these changes. Ignoring these developments can lead to severe financial penalties and damage to reputation.


To address imminent changes, organisations should implement various measures, as illustrated by these two examples:


  • Example 1: A leading European bank has already started to align its AI systems with the proposed AI Act by enhancing its data governance policies and implementing rigorous transparency measures.

  • Example 2: A multinational tech company is setting up dedicated AI ethics committees to oversee high-risk AI projects, ensuring compliance and ethical standards.


2. Implementing Processes and Guardrails in the Organisation


While the regulations set out requirements, obligations, standards and conformity assessments (Chapter III, Sections 2–5; Chapters IV, V & IX) as well as codes of conduct (Chapter X), implementing internal processes and guardrails is essential to ensure consistent compliance and risk management. This involves establishing robust operational frameworks for AI deployment, monitoring, and auditing:


Key Components of Effective AI Processes:


  • Risk Assessment Frameworks: Develop comprehensive risk assessment frameworks to evaluate the potential risks associated with AI applications.

  • Data Governance Policies: Implement stringent data governance policies to ensure data integrity, privacy, and security.

  • Continuous Monitoring and Auditing: Establish mechanisms for constant monitoring and auditing of AI systems to ensure ongoing compliance and performance.

  • Ethical Guidelines: Formulate and enforce ethical guidelines for AI development and deployment to align with organisational values and regulatory requirements.


By embedding these processes into the organisational fabric, companies can create a resilient structure that supports compliance and effectively mitigates risks.
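To make the idea of a risk assessment framework concrete, here is a minimal, hypothetical sketch. The tier names follow the Act's broad categories, but the gating rules are illustrative design choices, not a statement of the legal requirements:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AIUseCase:
    name: str
    tier: RiskTier
    has_human_oversight: bool = False
    has_data_governance: bool = False

def deployment_gate(use_case: AIUseCase) -> bool:
    """Return True only if the use case may proceed to deployment."""
    if use_case.tier is RiskTier.PROHIBITED:
        return False
    if use_case.tier is RiskTier.HIGH:
        # High-risk systems must show oversight and governance controls
        # before release; lower tiers pass with lighter obligations.
        return use_case.has_human_oversight and use_case.has_data_governance
    return True

# A high-risk use case without controls is blocked:
print(deployment_gate(AIUseCase("cv-screening", RiskTier.HIGH)))  # False
```

In practice, such a gate would sit inside a review workflow with documented sign-off, rather than a single boolean check.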


These two examples showcase the methodical creation of frameworks designed to oversee their respective AI ecosystems:


  • Example 1: A healthcare provider has implemented a continuous monitoring system for its AI diagnostics tools to ensure they meet regulatory standards and perform accurately.

  • Example 2: A retail giant is using AI risk assessment frameworks to evaluate new AI applications before deployment, ensuring they align with ethical guidelines and regulatory requirements.
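The continuous-monitoring idea in Example 1 can be reduced to a simple recurring check: compare live performance against a validated baseline and flag the system for audit when drift exceeds a tolerance. A hypothetical sketch, where the metric and threshold are illustrative assumptions:

```python
def drift_check(live_accuracy: float, baseline_accuracy: float,
                tolerance: float = 0.05) -> dict:
    """Flag a deployed model for human review when live accuracy
    falls more than `tolerance` below its validated baseline."""
    drift = baseline_accuracy - live_accuracy
    return {"drift": round(drift, 4), "audit_required": drift > tolerance}

# A diagnostics model validated at 92% accuracy, now measuring 85% live:
print(drift_check(0.85, 0.92))  # {'drift': 0.07, 'audit_required': True}
```

A real monitoring pipeline would track several metrics (accuracy, fairness indicators, input distribution shift) on a schedule and log each check for the auditable record the regulations expect.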


3. It’s About the Culture: Change Management and Learnings


Creating a culture that embraces AI while managing its risks involves more than just processes and regulations; it requires a paradigm shift in organisational mindset. Change management and continuous learning are pivotal in this transformation.


Fostering an AI-Ready Culture:


  • Training for All: Provide comprehensive AI training programs for employees at all levels to ensure a fundamental understanding of AI technologies and their implications.

  • Executive Training: Equip executives with the knowledge and tools to make informed decisions about AI adoption and risk management.

  • Continuous Learning: Encourage a culture of continuous learning and adaptation to keep pace with the fast-evolving AI landscape.

  • Collaboration and Communication: Promote open communication and collaboration across departments to integrate AI initiatives seamlessly into the organisational workflow.


Embracing a culture that prioritises AI literacy, ethical considerations, and proactive risk management will position organisations to leverage AI's full potential responsibly and sustainably.


With the shared goal of fostering an AI-centric culture at its core, the two initiatives below prioritise comprehensive training programs and integrative strategies:


  • Example 1: A global manufacturing firm has instituted regular AI training sessions for all employees, ensuring they stay updated on the latest developments and risks in AI.

  • Example 2: A financial services company promotes cross-departmental AI projects, fostering collaboration and integrating AI initiatives into the company's core strategies.


Conclusion


As AI regulations come into force, organisations must go beyond mere compliance to build a resilient AI risk culture. By understanding and preparing for upcoming regulations, implementing robust internal processes, and fostering a culture of continuous learning and change management, organisations can navigate the complexities of AI adoption effectively. This holistic approach not only ensures compliance but also positions companies to harness AI's transformative power while mitigating risks.


Building such a culture is not just about avoiding fines; it's about creating a sustainable, ethical, and innovative environment where AI can thrive and drive significant value for the organisation and beyond!


*The views and opinions expressed in this article are solely those of the authors.


References


European Parliament website on Artificial Intelligence:



Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations

 
 