Position Paper on the EU AI Act

Just over two years after the European Commission presented its initial proposal in April 2021, the EU Artificial Intelligence (AI) Act is entering the final stage of the EU legislative process: the inter-institutional trilogue negotiations between the Commission, the Council of the European Union and the European Parliament.


Over the past years, the German AI Association has worked to ensure that the AI Act provides a regulatory framework that fosters innovation and encourages the adoption of AI across the EU, unlocking unprecedented economic benefits for all Europeans. The Act must not confront developers, providers, and deployers of AI systems with complex and impractical obligations, prohibitive compliance costs, and legal uncertainty.


In this position paper, we highlight the key issues that need to be addressed in the upcoming negotiations. We compare and contrast the relevant positions of the Commission, Council and Parliament, give our assessment and propose specific and workable solutions to the issues at stake. We also present concrete use cases of how our members and their business models would be affected by the pending legislation, providing an insight into the real-world impact of this regulatory framework.


Our Key Positions: 

The AI Act must not impose unsustainable regulatory burdens and disproportionate compliance costs on the European AI ecosystem, as this would risk a dangerous competitive disadvantage and a loss of innovation. We and our members therefore identify five key areas that should be urgently addressed in the upcoming trilogue negotiations.


Based on expert opinions from the private sector, research, and politics, as well as specific use cases from our members, we formulate five recommendations for EU decision-makers to consider during the trilogue negotiations:

  1. Make regulation of foundation models workable and proportionate:
    Foundation models should be subject to transparency and data governance requirements that are proportionate to the risk level of the specific use case. The compliance requirements currently proposed would be largely unworkable in practice.

  2. Focus on applications that are indeed high-risk:
    The high-risk classification in Annex III should be narrowed further to genuinely critical areas, should take more adequate account of the size and resources of the respective provider or deployer, and should only include use cases that are not already covered by existing regulatory frameworks.

  3. Define AI precisely and unambiguously:
    The definition of AI in the AI Act should be narrowed to ensure that the Act applies strictly to AI systems rather than to any advanced software.

  4. Promote the development of harmonised standards:
    The EU should facilitate the timely development of harmonised standards in line with the rapid technological evolution of AI. Industry experts should be closely involved in the standardisation process in order to provide more clarity and certainty for stakeholders.

  5. Strengthen support for innovation and SMEs:
    In addition to regulatory sandboxes, the AI Act should include other provisions that have greater potential to stimulate and support private sector initiatives, in particular European AI start-ups and SMEs.

We believe that addressing these issues is critical to ensuring that the AI Act acts as a catalyst for innovation, investment and adoption of AI across Europe, unlocking the technology's economic potential rather than creating obligations that hamper its development.