In a recent interview with Help Net Security, David Dumont, Partner at Hunton Andrews Kurth, delved into the implications of the EU AI Act and provided insights on how organizations can navigate the new regulations while leveraging their existing GDPR frameworks. With a focus on conformity assessments, transparency requirements, and strategies for mitigating risks from national-level enforcement variations and third-party AI vendors, Dumont shed light on the evolving landscape of AI regulation in the EU.
Drawing parallels between the AI Act and the GDPR, Dumont emphasized the similarities in accountability, data quality, governance, vendor diligence, and transparency obligations between the two regulations. Organizations that have established comprehensive GDPR compliance programs can leverage their existing policies and infrastructure to address their obligations under the AI Act. However, Dumont also highlighted the need for organizations to build certain elements of their AI Act compliance programs from scratch, especially in areas such as conformity assessments for high-risk AI systems.
One key concern raised by Dumont is the potential for varying enforcement at the national level, as the AI Act grants enforcement powers to national supervisory authorities while allowing EU Member States to establish their own enforcement rules. This variation could add complexity for organizations operating across multiple EU jurisdictions, necessitating close monitoring of local legal developments to ensure compliance and mitigate risk exposure.
Looking ahead, Dumont anticipates a need for further clarifications from regulators and industry bodies as the AI Act continues to evolve. With the introduction of new legal concepts and requirements under the AI Act, guidance from the European Commission on various aspects of compliance will be crucial for organizations seeking to navigate the regulatory landscape effectively. Practical implementation of the AI Act may also rely on codes of practice and standards developed by industry bodies, further shaping the regulatory environment for AI in the EU.
Addressing the challenge of transparency requirements for high-risk AI systems, Dumont acknowledged the tension between transparency obligations and the protection of trade secrets and intellectual property. While the AI Act aims to ensure transparency in AI development and use, it also recognizes the need to safeguard intellectual property rights and confidential business information. Finding a balance between transparency and IP protection will require collaboration between AI providers, organizations, and supervisory authorities.
Finally, Dumont highlighted the importance of in-house lawyers assessing and mitigating risks when engaging third-party AI vendors. By conducting thorough diligence and updating vendor screening procedures in line with the AI Act's requirements, organizations can ensure compliance and manage the risks associated with externally sourced AI systems. Detailed technical instructions from vendors, along with targeted information obtained through vendor questionnaires, can help in-house lawyers understand and address the compliance challenges posed by third-party AI solutions.
As the regulatory landscape for AI continues to evolve, organizations will need to adapt their strategies and compliance efforts to meet the requirements of the AI Act while navigating the nuances of enforcement variations and third-party vendor engagements in the EU. By leveraging existing GDPR frameworks, staying informed about regulatory developments, and collaborating with industry stakeholders, organizations can position themselves to comply with the AI Act and foster innovation in the AI ecosystem.

