FutureSociety2021TrustInExcellence
Literature note on: "Trust in Excellence & Excellence in Trust" by The Future Society
Commentary
This text makes three principal arguments, which are elaborated into
guidelines/recommendations for the EU AI Act. What makes the text
interesting is how it grounds very broad and often vague legal terms in
real-world practice. A weakness of the paper in itself is that it
presupposes a large amount of background knowledge: several years of
policy discussion and the key developments in the field of AI. This
raises the threshold for interpreting the text well.
Excerpts & Key Quotes
Invest in Smart Governance Capabilities for AI and the Digital Single Market
Page 4: "In the AI Act, the European Commission has proposed the
development of interesting governance capabilities such as the European
AI Board and the regulatory sandboxes. We welcome this ambition:
ensuring the balance of excellence and trust in the pervasive,
fast-evolving field of AI will require a strong & effective governance
capacity."
Comment:
This quotation is central to the first of the authors' three
recommendations, acknowledging that the EU is taking the right steps
towards building capable governance. The first recommendation mostly
emphasizes this point, showing the necessity of robust and effective
governance structures to balance - as the title says - excellence with
trust. What is interesting here is that the authors frame these
mechanisms not just as regulatory tools, but as foundations upon which
an AI environment can be developed safely. This makes the basis
proactive in nature, rather than reactive.
'Ensure Governance is Adaptive and Responsive to Macro-trends'
Page 5: "The AI Act as it stands today cannot deal with the speed of
change in the technological landscape nor with the technology's
macro-implications: its focus on the intended purpose of the AI system
inherently constrains governance mostly to the immediate micro-level
interactions between the system and its environment."
Comment:
This quote gives insight into the extent to which the current AI Act
is limited in its ability to keep up with rapid technological
advancements. The authors point out that certain broader societal
effects are overlooked: effects that extend beyond the direct impact
AI systems have on society, including secondary effects that may be
unforeseen. What is interesting is that the authors recognize that the
AI Act focuses mainly on the intended purpose of AI systems, and they
critique the Act for this narrowness. They argue for a broader view
that takes in the unforeseen or unintended consequences that may
result from AI use. Their recommendation is a more flexible framework
that looks to the further future as well as the immediate future,
suggesting a form of governance that considers multiple dimensions of
AI's consequences.
'Avoid a "Lemons Market" by Fostering Trustworthy Market Dynamics'
Page 7: "To offset this trend we suggest the AI Act and broader
governance system be adapted to ensure transparency and trustworthiness
and therefore to keep the regulatory burden onto those with the most
information available."
Comment:
This quote contains the principal argument for preventing a "lemons
market": a situation in which lower-quality, untrustworthy AI systems
come to dominate the field because of information asymmetry between
providers and users. The authors argue for placing responsibility on
AI providers and developers, because they have the best understanding
of their AI systems. By requiring transparency and trustworthiness,
providers can ensure that the systems they offer can be used ethically
and safely. The argument centers on leveraging the providers'
knowledge: since they hold the most information about their products,
they are best positioned to anticipate rapid technological change and
the effects their products may have on the market.