
AI Act: Why Proposed Regulations Miss the Mark on Tech Neutrality and Risk-Based Control

Brussels, November 27, 2023 – We, the undersigned associations, representing European and international companies from the information technology sector, have been following the developments in the inter-institutional Trilogue negotiations on the EU Artificial Intelligence Act. As the negotiations proceed towards finalizing the Act, we want to express our concerns about the direction of the current proposals to regulate General Purpose AI (GPAI) systems and AI Foundation Models, which rely on diverging and unclear scopes and definitions. These proposals do not take into account the complexity of the AI value chain and are not consistent with the AI Act’s technology-neutral, risk-based approach, which regulates the use of AI systems according to the risk they pose, not the type of technology being used.

We are equally concerned about the proposals to classify GPAIs and AI Foundation Models as “highly capable” or as having “high impact” on the basis of criteria that are not clearly linked to the level of risk an AI system developed with GPAIs and Foundation Models may pose. Furthermore, any technical measurement used to establish whether a model is to be considered “highly capable” or as having “high impact” should be based on a thorough assessment of the current scientific consensus and should follow consultation with AI industry, academic, and civil society experts. Any obligations designed for AI Foundation Models should take into account the work of ongoing multinational and multi-stakeholder fora and enable a co-regulatory process in which actors in the AI value chain and expert stakeholders can inform the future governance of the AI Act.

Given the complexity of the AI value chain, and the multifaceted compliance obligations that the AI Act would impose along that chain for high-risk use cases, the AI Act should formalize a mechanism to ensure the sharing of relevant and necessary information, including on model capabilities and limitations, between Foundation Model providers and deployers to facilitate compliance with the AI Act.

Furthermore, we are concerned about the proposals to introduce additional requirements for the use of copyrighted data to train AI systems, despite the comprehensive copyright protection and enforcement framework that already exists in the EU. That framework notably contains provisions that can help address AI-related copyright issues, such as the text and data mining exemption and the corresponding opt-out for rightsholders in Art. 4 of the Copyright Directive.

We believe this additional legal complexity is out of place in the AI Act, which is primarily focused on health, safety, and fundamental rights. Moreover, such late additions to the AI Act would not be grounded in any impact assessment or stakeholder consultation to ascertain whether and how these obligations are needed, or how they would affect the EU Digital Single Market.

Signatories:
BSA | The Software Alliance
CCIA – Computer & Communications Industry Association
Developers Alliance
DOT Europe
eco – Association of the Internet Industry
ITI, Information Technology Industry Council
