SITEMAP
Copyright © 2024 CGTN. 京ICP备20000184号
Disinformation report hotline: 010-85061466
The world's biggest technology companies have embarked on a final push to persuade the European Union (EU) to take a light-touch approach to regulating artificial intelligence as they seek to fend off the risk of billions of dollars in fines.
EU lawmakers in May agreed on the AI Act, the world's first comprehensive set of rules governing the technology, following months of intense negotiations between different political groups.
However, until the law's accompanying code of practice is finalized, it remains unclear how strictly rules around "general purpose" AI (GPAI) systems, such as OpenAI's ChatGPT, will be enforced, and how many copyright lawsuits and multi-billion dollar fines companies may face.
The EU has invited companies, academics and other stakeholders to help draft the code of practice, receiving nearly 1,000 applications, an unusually high number according to a source familiar with the matter who requested anonymity because they were not authorized to speak publicly.
The AI code of practice will not be legally binding when it takes effect late next year, but it will provide firms with a checklist they can use to demonstrate compliance. A company claiming to follow the law while ignoring the code could face a legal challenge.
"The code of practice is crucial. If we get it right, we will be able to continue innovating," said Boniface de Champris, a senior policy manager at trade organization CCIA Europe, whose members include Amazon, Google and Meta.
"If it's too narrow or too specific, that will become very difficult," he added.
Data scraping
Companies such as Stability AI and OpenAI have faced questions over whether using bestselling books or photo archives to train their models without the creators' permission constitutes a breach of copyright.
Under the AI Act, companies will be obliged to provide "detailed summaries" of the data used to train their models. In theory, a content creator who discovered their work had been used to train an AI model may be able to seek compensation, although this is being tested in the courts.
Some business leaders have argued that the required summaries should contain minimal details in order to protect trade secrets, while others assert that copyright holders have a right to know if their content has been used without permission.
OpenAI, which has drawn criticism for refusing to answer questions about the data used to train its models, has also applied to join the working groups, according to a person familiar with the matter, who declined to be named.
Google has also submitted an application, a spokesman told Reuters. Meanwhile, Amazon said it hopes to "contribute our expertise and ensure the code of practice succeeds."
Maximilian Gahntz, AI policy lead at the Mozilla Foundation, the non-profit organization behind the Firefox web browser, expressed concern that companies are "going out of their way to avoid transparency."
"The AI Act presents the best chance to shine a light on this crucial aspect and illuminate at least part of the black box," he said.
Big business and priorities
Some in business have criticized the EU for prioritizing tech regulation over innovation, leaving those tasked with drafting the text of the code of practice to strike a balance between the two.
Last week, former European Central Bank chief Mario Draghi told the bloc it needed a better-coordinated industrial policy, faster decision-making and massive investment to keep pace with China and the United States.
Thierry Breton, a vocal champion of EU regulation and critic of non-compliant tech companies, quit his role as European Commissioner for the Internal Market this week after clashing with Ursula von der Leyen, the president of the bloc's executive arm.
Against a backdrop of growing protectionism within the EU, homegrown tech companies are hoping for carve-outs to be introduced in the AI Act to benefit up-and-coming European firms.
"We've insisted these obligations need to be manageable and, if possible, adapted to startups," said Maxime Ricard, policy manager at Allied for Startups, a network of trade organizations representing smaller tech companies.
Once the code is published in the first part of next year, tech companies will have until August 2025 before their compliance efforts start being measured against it.
Non-profit organizations, including Access Now, the Future of Life Institute, and Mozilla, have also applied to help draft the code.
Gahntz said, "As we enter the stage where many of the AI Act's obligations are spelled out in more detail, we have to be careful not to allow the big AI players to water down important transparency mandates."