OpenAI CEO Sam Altman has suggested that his company may exit Europe if the region's rules on the use of artificial intelligence (AI) become too strict. The European Union (EU) is working on the AI Act, set to be the first comprehensive legal framework governing AI. The draft law would require businesses deploying generative AI tools, including ChatGPT, to disclose any copyrighted material used to build such systems. Altman said his company wanted to “work with the government” to prevent the technology from being destructive, and compared the swift development of AI to the invention of the printing press.
Altman acknowledged that the adoption of AI will affect jobs and that governments will need to devise ways to mitigate that impact. Speaking at a University College London event, he said, “The right answer is probably something between the traditional European-UK approach and the traditional US approach,” adding, “I hope we can all get it right together this time.”
Altman said OpenAI would try to comply with the European rules once they are finalized before considering any exit. “Either we’ll be able to solve those requirements or not. If we can comply, we will, and if we can’t, we’ll cease operating,” Altman said.
This month, EU legislators agreed on a draft of the act. Representatives of the Council, the Commission, and the Parliament will now negotiate the bill’s final details. According to Altman, “The current draft of the EU AI Act would be over-regulating, but we have heard it’s going to get pulled back. They are still talking about it.”
In a report analyzing the EU AI Act, the Future of Life Institute described general-purpose AI as AI systems with a broad range of potential uses, both intended and unanticipated by their developers. Legislators have proposed the term “general purpose AI system” to cover AI systems with multiple uses, such as ChatGPT, the generative AI model built by Microsoft-backed OpenAI.
Addressing the University College London audience, the OpenAI CEO also acknowledged concerns about AI-driven disinformation, highlighting in particular the tools’ capacity to produce false information that is “interactive, personalized [and] persuasive,” and said more work needed to be done on that front.
Nicole Gill, executive director and co-founder of Accountable Tech, wrote an op-ed for Fast Company last week comparing Altman to Meta founder and CEO Mark Zuckerberg, writing, “Lawmakers appear poised to trust Altman to self-regulate under the guise of ‘innovation,’ even as the speed of AI is ringing alarm bells for technologists, academics, civil society, and yes, even lawmakers.”
In short, new AI regulations in Europe could prompt companies such as OpenAI to leave the continent. While Altman says his company wants to work with governments to prevent harmful uses of AI, he also believes that over-regulation could be detrimental to its operations. It remains to be seen how the EU AI Act will play out and whether it will prove workable for companies that build and deploy AI technology.

