FutureOfLifeInstitute2021PositionPaperAIAct

Literature note on Future of Life Institute, “FLI Position Paper on the EU AI Act”

Bibliographic info

Future of Life Institute (FLI). “FLI Position Paper on the EU AI Act” (August 2021)

Commentary

The Future of Life Institute's (FLI) feedback on the draft of the AI Act consists of three main recommendations, namely:

  1. "Account of the full (and future) risks of AI",
  2. "Enhance protections of fundamental rights" and
  3. "Boost AI innovation in Europe". For their first recommendation, FLI describes the existing occurrences of bias in for example GPT-3, which created sentences in which Muslims were often linked to "shooting, bombs, murder and violence".
    The draft of the AI Act only states that there should be legal requirements for given purposes, but the FLI wants to extends this definition by requiring a "complete risk assessment of all of an AI system's intended uses". I value this a useful addition to the current AI Act proposal, as it accounts for more future implementations of existing AI techniques. If, for example, more generalized AI were to become reality, it is important to consider all the applications of such an AI tool, instead of just considering the applications for which the techniques were originally created. In that regard, the FLI also mentions that foreseeable misuses should be assessed. They continue with this point when stating that for impact made by AI systems that has little influence on individuals, the impact on a larger scale should also be taken into consideration. Although their point of wanting to address "indirect and aggregate harms" is important, they do not come up with extensive solutions or improvements to tackle this issue. Although this piece is only a short overview of suggested improvements for the AI Act, an addition could have been to add a more detailed description of this future risk assessment.

The second recommendation focuses on the protection of fundamental rights. The FLI underlines the importance of fundamental rights protection and suggests adding a right to take legal action for people who are affected by AI outcomes.

The third recommendation argues for an improvement of the European environment for innovation. In this section, the FLI proposes a strong change, suggesting that the European Artificial Intelligence Board be authorised to implement changes to the AI Act "on its own and without restrictions". Although the authors argue that this is needed to keep up with rapid changes in the field of AI, this level of authorisation for the board seems risky. However, the authors do balance this out slightly by suggesting that the board be assisted by a group of experts, whose task it is to "ensure that beneficial technologies are accelerated and risks mitigated". Personally, I think that granting the board this much unilateral power might be too strong a change for the AI Act. Instead, the authors could have suggested mandatory consultation of the group of experts whenever new changes come up for consideration.

Altogether, the commentary by the FLI gives a clear overview of the suggested changes. Most recommendations are well-considered, but they would benefit from a more extensive practical explanation. The authors likely intended to give a concise overview of their proposed changes without going into detail on how these would influence the AI Act. It would therefore be helpful if they provided more in-depth material on their thoughts and recommendations.

Excerpts & Key Quotes

Societal-level harm

Page 4: "AI applications may cause societal-level harms, even when they cause only negligible harms to individuals"

Comment:

In this section, the authors make an interesting claim about harm that arises when the impact of AI is considered at a broader level than the individual. The FLI illustrates this point with the example of micro-targeting in elections, whose effects are visible not only at the individual level but also at the national scale. Although they make an important point, the authors could have done more to suggest a practical solution or implementation.

Whistleblowers

Page 7: "Therefore, these developers should be provided with the right to voice concerns to a relevant supervisory authority through a dedicated channel if internal company channels are insufficient, and should be able to rely on EU whistleblower protections"

Comment:

In this citation, the authors explain how they want to extend the proposed AI Act with stronger support for whistleblowers at the European level. This is an important task, and it is good that the authors spend time underlining it in a time of rapid AI research.

European Innovation

"If Europe truly wants to be at the forefront of AI development however, its approach to sandboxes could be made more ambitious."

Comment:

Whereas the previous citation suggests that employees should help call out malpractice, the authors here also set out their wishes for a more innovative field. They discuss the notion of sandboxes, "environments in which firms can try out new services without fear of penalties", which seems to me an important point of focus for Europe: it supports, for example, small start-ups working on innovative projects. The FLI, however, criticises the current ideas on sandboxes, stating that they could be more ambitious. More specifically, they propose pan-European sandboxes, accessible through an online AI portal. I like that the authors give a concrete suggestion here, and their approach seems a sound one. It is nice to see that their recommendations support both the improvement of innovation and the reduction of risks.