Statement by Daniel Abbou on the AI Act (Hearing in the Digital Committee)

Written statement by Daniel Abbou, Managing Director KI Bundesverband e.V.

For the hearing of the Committee on Digital Affairs on 26 September 2022 on the EU Regulation on Artificial Intelligence, including competitiveness in artificial intelligence and blockchain technology.

On 21 April 2021, the European Commission presented its proposal for the so-called European AI Act to regulate the development and use of Artificial Intelligence (AI) in the European Union. AI is one of the crucial technologies of our future, so it is important that it is applied in the spirit of European and democratic values.

Inadequate definition of AI
Neither in science nor in practice is there a generally accepted definition of AI. The AI regulation therefore contains its own definition of AI in Article 3.1 and Annex I, which includes all software that uses statistical methods or search and optimisation methods. This is problematic because it classifies almost all existing and future software as “AI”, even software that has nothing to do with AI.

This creates incalculable risks for companies that develop or use software. We therefore agree with the proposal of our Swedish friends from AI Sweden, the national centre for applied artificial intelligence, that the definition of AI in the EU regulation should be brought into line with the generally accepted OECD definition.

Too broad a definition of “high-risk applications”
We welcome the European Union’s approach to regulating AI in those areas where there is great potential for harm. Although we welcome the Commission’s risk-based approach, there is still considerable room for improvement in the allocation of AI applications to the different risk levels.

In particular, the definition of so-called high-risk applications is too broad. It should cover only systems that may pose a potentially high risk to security or fundamental rights. Currently, however, even applications that pose only a low risk fall within the scope of the requirements for high-risk AI.

Risk assessment
As the KI Bundesverband already wrote in its statement on the Commission’s proposal, we question the affordability of self- and third-party assessments, especially for start-ups and SMEs. In the case of third-party assessment, commissioning external experts and specialised auditing firms entails costs that only large companies can bear. The measures mentioned in the draft to avoid disadvantages for start-ups and SMEs are not sufficient and must be expanded.

In the case of self-assessments, depending on the scope and the associated liability risk, it remains questionable whether SMEs and other companies without a significant control and testing infrastructure of their own can feasibly set up a quality management system.

The risk assessment of critical applications should take into account several criteria that must be examined and fulfilled cumulatively. The proposal contains no comprehensible weighing of risk against benefit in the sense of a risk-benefit ratio, which can quickly lead to a disproportionate risk assessment.

We also advocate the creation of a harmonised sanctions infrastructure across Europe. Identical cases must not be treated differently in individual member states. The proposal does not make sufficiently clear how a harmonised approach to audit processes can be established and ensured within a European framework.

We welcome the Commission’s aim to create better data sets. The limited availability of high-quality data that accurately represents a population can be one of the biggest hurdles in developing AI. For AI software producers, however, the question remains how they are supposed to achieve the unrealistic goal of Article 10.3. After all, the most common source of bias is data that does not adequately represent the target population.

However, the proposal should take into account that if, for example, discrimination against minorities arises from the use of inadequate datasets, modifying these datasets or certain parameters is all that is required to remedy the problem. So instead of penalising AI developers for using older data that can lead to such biases, the Commission should encourage the development of fair and well-curated datasets across institutions. These should be aggregated so as to capture diversity both between and within demographic groups.

This should also be promoted much more at EU level to drive innovation.

Avoid over-regulation and unnecessary complexity
The intention of the regulation is to create a common legal framework for AI applications and a level playing field for both intra- and extra-European companies. Designing such a unified legal system is a major challenge. Nevertheless, the goal of regulation should be to keep this complexity to a minimum, so that it does not disadvantage start-ups and small companies. The linguistic complexity of the proposal is already so high that directly affected companies cannot comprehend the law, let alone its effects.

It is important to avoid over-regulation and excessive bureaucratic requirements, especially for smaller start-ups and SMEs. Unlike their large US or Chinese counterparts, they do not have the vast human and financial resources needed to implement the new framework. As a result, they can quickly fall behind their non-European competitors, and the development of European AI innovations is hindered and slowed down.

Accordingly, AI regulation must be designed in such a way that young German and European companies can also cope with it.

Unclear competences regarding implementation
Another point of criticism concerns the implementation of the regulation once it has been passed. It is not clear which authority will hold the decisive competences for implementing the regulation.

For example, will the responsibilities be divided among different ministries and authorities or will a separate institution be established?

In both cases, increased resources and capacities will be required. Although Germany certainly has these capacities, the question remains how less affluent member states, which have few resources at their disposal, will be supported in this regard. Otherwise, a regulation whose purpose is to create a level playing field will itself create an uneven one.


Based on the points mentioned above, we recommend the following:

  • Define “artificial intelligence” more precisely, so that not almost all software is classified as “AI”.
  • Make the notion of high-risk applications more realistic, so that applications that pose only a low risk do not fall under it.
  • Avoid disproportionate risk assessments, especially for young and smaller companies.
  • Do not penalise AI developers for using older datasets; instead, promote balanced population representation in future datasets.
  • Avoid over-regulation and unnecessary complexity in order to prevent competitive disadvantages for smaller European companies compared with large American and Chinese companies.
  • Determine the allocation of competences for implementing the regulation in a timely manner.

Daniel Abbou

Managing Director KI Bundesverband e.V.