Brakel2021FLIPosition

Future of Life Institute (FLI): FLI Position Paper on the EU AI Act

Bibliographic info

  1. Brakel, M. (2021). Feedback from: Future of Life Institute. Artificial intelligence – ethical and legal requirements. https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Artificial-intelligence-ethical-and-legal-requirements/F2665546
  2. John, R. R. (2023). The tech giants’ anti-regulation fantasy. The Atlantic. https://www.theatlantic.com/ideas/archive/2023/11/tech-regulation-bell-system/676110/

Commentary

The Future of Life Institute (FLI) was founded in the United States and unites some of the world's most renowned researchers in artificial intelligence. While the US currently lacks comprehensive regulation of AI governance, the FLI formulated the Asilomar AI Principles, one of the earliest proposals for governing AI development. The FLI therefore supports the EU's effort, through the AI Act, to become the first major regulator to establish overarching rules on the development of AI, and it offers three recommendations intended to fully account for the current and potential risks of AI, ensure that the fundamental rights of EU citizens are protected, and support continued AI innovation within Europe [1]. What I find interesting is that, in contrast to large US technology companies that resist regulation for fear of having their freedom to innovate restricted [2], the FLI supports widespread AI regulation while jointly encouraging innovation in AI.

Excerpts & Key Quotes

Page 4:

Future AI systems will be even more generalised than GPT-3. Therefore, the proposal should require a complete risk assessment of all of an AI system’s intended uses (and foreseeable misuses) instead of categorising AI systems by a limited set of stated intended purposes.

Comment:

Although this recommendation is well intentioned in aiming to assess an AI system in its entirety, it may be impossible to fully implement: general-purpose AI systems, much like human problem-solvers, can creatively apply a capability to an entirely different problem. For example, although DALL-E is intended mainly to produce images from text prompts, it can also generate art that closely imitates the style of a well-known artist. This can lead to copyright disputes in which it is impossible to tell whether a work was produced by the artist or merely generated by AI. Since the number of such foreseeable misuses is effectively unbounded, it is very difficult to legally address all possible uses in advance.

Page 5:

When AI systems act in high-risk contexts that have traditionally been performed by humans with fiduciary duties, such as doctors, lawyers or financial advisors, they should be required to abide by the applicable professional standards.

Comment:

I think this is a wise recommendation to include as feedback. AI implemented in a high-risk industry should not only adhere to the requirements for high-risk systems but should also be constrained by the social and regulatory boundaries of the context in which it is deployed. This helps ensure that the AI's output is tailored to the interests of the individual user rather than defaulting to a one-size-fits-all response. It can also mitigate conflicts of interest, in which the AI's responses are biased towards a particular service or company instead of remaining neutral.

Page 8:

Moreover, the EU should consider opening up access to sandboxes to SMEs from outside the Union. In that way, sandboxes would promote the dissemination of EU standards to suppliers from outside its borders.

Comment:

Regulatory sandboxes are an interesting concept, but because their implementation still seems vague to me, I am unsure how beneficial they will be for SMEs from outside the EU. A sandbox would ease the testing phase of AI development without exposing the public to potential risks, but that does not necessarily mean an SME will be encouraged to use it, especially if an alternative way of testing their AI is already available to them. It is also unclear how EU standards would actually be disseminated through the sandbox, so I doubt that extending sandbox access beyond the Union will produce many valuable benefits.

