Should privacy professionals take on AI governance roles?


As organizations adjust to the evolving landscape of digital regulations in Europe and beyond, the question of whether in-house privacy professionals, such as data protection officers, should also take on AI governance-related roles has become increasingly pertinent. This LawNow article provides insights into current practices, the pros and cons of privacy professionals taking on AI governance roles, and our perspective on the matter.

Current market trend

Organizations are increasingly recognizing the importance of establishing robust governance structures to manage AI compliance. According to the IAPP-EY Professionalizing Organizational AI Governance Report, 60% of organizations have either already established a dedicated AI governance function or plan to do so within the next 12 months. The privacy function often leads the way in acquiring additional responsibilities for AI governance, with 57% of privacy functions having taken on this role.

Pros of privacy professionals taking on AI governance roles

  1. Synergy of skills: Privacy professionals possess a deep understanding of data protection principles, which are closely aligned with responsible AI governance. Their expertise in managing data privacy, conducting impact assessments and ensuring compliance with regulations can be leveraged to address AI governance challenges.
  2. Established processes: Many organizations are building AI governance on top of existing privacy programs. This approach allows for the integration of AI impact assessments with privacy impact assessments, creating a streamlined and efficient governance process.
  3. Regulatory alignment: Privacy regulations, such as the GDPR, already encompass principles relevant to AI governance, including transparency, fairness and accountability. Privacy professionals are well-versed in these regulations and can ensure that AI systems comply with both privacy and AI governance requirements.
  4. Cross-functional collaboration: Privacy professionals are accustomed to working with various departments, including legal, IT and business units. This cross-functional approach is essential for effective AI governance, which requires collaboration across multiple disciplines.

Cons of privacy professionals taking on AI governance roles

  1. Resource constraints: Privacy professionals are already managing a significant workload, and adding AI governance responsibilities may lead to resource constraints. This could result in burnout and reduced effectiveness in both privacy and AI governance roles.
  2. Lack of specialized knowledge: While privacy professionals have a strong foundation in data protection, AI governance requires specialized knowledge of AI technologies, algorithms and their ethical implications. Without additional training, privacy professionals may struggle to effectively perform an AI governance role.
  3. Potential conflicts of interest: Combining privacy and AI governance roles may create conflicts of interest, for example when balancing the data minimization principle against the desire to leverage large datasets to improve an AI system's trustworthiness. Such tensions could compromise the integrity of both governance functions.

Our perspective

In our view, privacy professionals can play a valuable role in AI governance, but it is essential to address the challenges associated with this dual responsibility. Organizations should consider the following approaches:

  1. Training and development: Invest in training programs to equip privacy professionals with the necessary knowledge and skills to effectively govern AI systems. This includes understanding AI technologies, ethical considerations and emerging regulations.
  2. Clear role definitions: Clearly define the roles and responsibilities of privacy and AI governance professionals to avoid conflicts of interest and ensure accountability. This includes establishing reporting structures and decision-making processes that support effective governance.
  3. Cross-functional collaboration: Encourage collaboration between privacy, legal, IT and business units to address the multifaceted challenges of AI governance. This cross-functional approach ensures that AI systems are developed and deployed responsibly, with input from all relevant stakeholders. For instance, organizations can establish regular inter-departmental meetings to discuss AI projects, share best practices and address any emerging issues collectively.
  4. Leveraging existing standards: Organizations should leverage established international standards, such as ISO/IEC 42001:2023, when defining their AI governance frameworks. This standard provides comprehensive requirements and guidance for establishing, implementing, maintaining and continually improving an AI management system. By aligning their AI governance practices with ISO/IEC 42001:2023, businesses can ensure their AI systems are trustworthy, transparent and compliant with regulatory requirements. For more information about the ISO’s AI standards, see our LawNow article here.