CyberSecurity SEE

AI Companies Falling Below EU AI Act Standards

Leading artificial intelligence (AI) models are falling short of key European regulatory standards for cybersecurity resilience and the prevention of discriminatory output. The European Union’s AI Act, which is being rolled out in phases over the next two years, addresses the ethical, societal, and safety implications of AI technologies increasingly embedded in daily life.

The EU AI Act is the first comprehensive AI legislation introduced by a major regulatory body. It sorts AI applications into three risk tiers, with unacceptable-risk, high-risk, and low-risk applications each subject to different legal requirements and restrictions.

Non-compliance with the AI Act can draw fines of up to 35 million euros or 7% of a company’s global annual turnover. To gauge how well generative AI models measure up to the Act, Swiss startup LatticeFlow AI, in collaboration with ETH Zurich and Bulgaria’s INSAIT, has developed a new evaluation tool. It scores AI models across categories such as technical robustness and safety, providing insight into areas that need improvement.
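As a rough illustration of the fine cap described above: for the most serious violations, the AI Act applies the higher of the two figures, which can be sketched in a few lines (the function name and example turnover are illustrative, not from the article):

```python
def max_ai_act_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound on an AI Act fine for the most serious violations:
    the greater of EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A company with EUR 1 billion in annual turnover:
print(max_ai_act_fine(1_000_000_000))  # 70000000.0
```

For smaller firms the 35 million euro floor dominates; for large multinationals the 7% turnover figure quickly becomes the binding number.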

Generative AI models from companies including Meta, OpenAI, Alibaba, and Anthropic have been assessed with this framework, with scores generally averaging above 0.75. The evaluations nonetheless flagged weaknesses in discriminatory output and cybersecurity: OpenAI’s GPT-4 Turbo and a model from Alibaba Cloud, for example, scored lower in those categories, indicating room for improvement.
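To make the 0-to-1 scoring concrete, a minimal sketch of how per-category scores might be summarized into an overall average and a weakest area is shown below. The category names and score values are invented for illustration and do not reproduce the framework’s actual results:

```python
# Illustrative only: hypothetical per-category scores on the 0-1 scale
# used by the LatticeFlow evaluation; values are invented for this sketch.
scores = {
    "technical robustness and safety": 0.82,
    "cybersecurity resilience": 0.64,
    "non-discriminatory output": 0.58,
}

average = sum(scores.values()) / len(scores)   # overall model score
weakest = min(scores, key=scores.get)          # category needing most work
print(f"average: {average:.2f}, weakest area: {weakest}")
```

A model can clear the 0.75 average cited above while still scoring poorly in an individual category, which is exactly the pattern the evaluations highlighted.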

As the EU finalizes enforcement mechanisms for the AI Act, experts are drafting an accompanying code of practice, expected to be ready by spring 2025, to help ensure AI models comply with the regulatory standards the EU has set.

Dr. Ilia Kolochenko, an expert in cybersecurity, emphasizes the importance of addressing privacy, safety, and reliability issues associated with large GenAI models. He warns of potential violations of multiple laws and regulations beyond just the EU AI Act and GDPR, highlighting the need for greater transparency and accountability among GenAI vendors.

Despite growing scrutiny and regulatory pressure, adoption of GenAI models continues to rise. Experts caution, however, that a blind pursuit of profitability by GenAI vendors may create significant risks down the line. A lack of transparency, questionable data-collection practices, and inadequate security controls raise serious concerns for an industry striving to navigate a complex regulatory landscape while meeting ethical and societal expectations.

In conclusion, while GenAI models offer immense promise, responsible development, transparent practices, and regulatory compliance are essential to deploying these technologies safely and ethically. As the EU AI Act continues to shape the future of AI regulation in Europe, companies must prioritize compliance and ethical considerations to build trust and mitigate the risks associated with AI.
