Digital Assistant for Parliamentary Inquiries

Data-Ethical Consultation for a Digital Assistant for Parliamentary Inquiries

1. The organization

Data Science Initiative (no date) "Aan de slag met AI binnen de overheid"

Link to document (pages 14-15).
This document "Aan de slag met AI binnen de overheid" ("Getting Down to Work with AI within the Government") is written for the Dutch Rijksoverheid in the framework of the Data Science Initiative. This is an effort of the Dutch government to come up with a strategic action plan to reap benefits from AI and big data, also to contribute and meet the digital strategy of the European Commission. This document presents multiple concrete and practical action plans to implement algorithms and machine learning in government practice, while operating from and within the frames of the law and public values. The aim is that the proposed practices will lead to beneficial outcomes, while still being explainable to the public sector and beyond.
The document describes multiple pilots that use AI (machine learning, algorithms) and big data in government institutions to foster public values. I have chosen to focus on the pilot "digitale assistent voor het beantwoorden van Kamervragen" (English: "digital assistant to answer parliamentary inquiries").

2. The AI technologies Employed

⇒ What AI technology or technologies do they use and for what purposes?
I will focus on the assistant to answer parliamentary inquiries. I found this pilot very interesting, and it provides a good basis for this assignment.
This technology is a mechanism that helps policy advisors and officers in their everyday work, which partly consists of answering parliamentary inquiries. The digital assistant is a machine learning algorithm that helps employees search all publicly available sources they can use to answer these questions, which are largely political in content but can cover a wide variety of topics. The answers to these questions can, for instance, be used in parliamentary debates.
This algorithm goes beyond a mere search engine: it can scan entire sentences, learn to provide more relevant results, filter out irrelevant ones, and learn to do this better and faster over time.
This enables employees to do their job more quickly. They can quickly access relevant information they can use for the political part of answering the question. That part is not done by the algorithm; the employees formulate the answers based on the relevant sources selected by the digital assistant. There is thus already a sense of accountability in place. A rough sketch of how such a mechanism could work is given below.
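To make the description above concrete, here is a minimal, hypothetical sketch of such a mechanism: it ranks publicly available sources against an inquiry by weighted term overlap and nudges the weights when an employee marks a result as useful. The pilot's actual design is not described at this level of detail, so the function names, the weighting scheme, and the feedback step are all assumptions for illustration only.

```python
from collections import Counter

def tokenize(text: str) -> list[str]:
    return [t.strip(".,?!").lower() for t in text.split()]

def score(query: str, doc: str, term_weights: dict[str, float]) -> float:
    # Weighted overlap between the inquiry and a candidate source.
    query_terms = set(tokenize(query))
    doc_counts = Counter(tokenize(doc))
    return sum(term_weights.get(t, 1.0) * doc_counts[t] for t in query_terms)

def record_feedback(query: str, term_weights: dict[str, float], boost: float = 1.2) -> None:
    # Crude "learning" step: terms from inquiries whose results were marked
    # useful get slightly more weight in future rankings.
    for t in tokenize(query):
        term_weights[t] = term_weights.get(t, 1.0) * boost

documents = [
    "Policy brief on nitrogen emission ceilings for agriculture",
    "Annual report on railway punctuality",
    "Parliamentary letter on nitrogen reduction targets",
]
weights: dict[str, float] = {}
inquiry = "What are the current nitrogen reduction targets?"

ranked = sorted(documents, key=lambda d: score(inquiry, d, weights), reverse=True)
print(ranked[0])                   # the best-matching source is shown to the employee
record_feedback(inquiry, weights)  # the employee marks it useful; future rankings adapt
```

The point of the sketch is only to show where human judgment remains: the system proposes sources, but the employee still reads and weighs them before writing the answer.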

3. Ethical concerns

introduction

The algorithm is still in its testing phase. It is likely that the underlying logic of the algorithm, as well as its outcomes, is understandable, transparent, and explainable. Also, the use of publicly available data sources does not introduce additional issues related to, for instance, privacy. The digital assistant is a tool to reach the same types of sources, but in a way that is quicker and more relevant for the employees.
Also, the political decision making is not outsourced to "robots". From the text it becomes clear that the authors realize that, with sensitive political matters, it is important to have human accountability. The decisions are thus not opaque, but are made by a human who can be called to account for them. This does not, of course, mean that decisions made by humans are inherently better, but the person can be held accountable, the decision can be scrutinized, and there is less risk of black-boxing the outcomes of algorithmic decision making.
What becomes clear, then, is that the authors already realize that using the algorithm to this end is an inherently political process (Within, 2014); hence they still let humans decide how to answer the parliamentary inquiries.

ethics lobbying

If the working of the algorithm is opaque or imbued with the biases and values of the designers, there is a risk of ethics lobbying (Floridi, 2019): the risk of exploiting digital technologies to delay, revise, replace, or avoid legislation, its enforcement or design, or certain legal norms. If certain information is deemed irrelevant, the employees will not have access to it to inform their decision making. This is especially important when the inquiries pertain to sensitive societal issues, for instance if employees are not presented with the CO2 emission figures of certain corporations, or only get arguments from one side of important societal debates.

construct space

The algorithm maps from a construct space (the features that need to be searched or measured) to a decision space (outcomes) (Binns, 2018). However, a position in the construct space is only measured as mediated through proxy variables included in the training data. Since there is never direct access, the construct space and the observed space can differ, and the observed variables might not be legitimate grounds on which to make decisions (ibid.). Translated to our technology: how can the employees know that they can make legitimate decisions based on the information presented to them? Only the attributes initially included in the algorithmic design are selected, but this does not necessarily make the grounds for the decisions acceptable or legitimate. This is especially important for societal issues such as racism. The toy example below illustrates how the two spaces can come apart.
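The following sketch is purely illustrative and hypothetical (it does not describe the actual assistant): a toy relevance scorer whose intended construct, "is this source relevant to the inquiry?", is only reachable through proxy variables, so a popular, keyword-rich source can outrank a genuinely relevant one.

```python
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    keyword_overlap: float   # proxy 1: share of query terms appearing in the text
    click_rate: float        # proxy 2: how often employees opened this source before
    truly_relevant: bool     # the construct itself; never observed by the model

def proxy_score(s: Source) -> float:
    # The model can only combine proxies; it has no access to `truly_relevant`.
    return 0.6 * s.keyword_overlap + 0.4 * s.click_rate

sources = [
    Source("Ministry press release", keyword_overlap=0.9, click_rate=0.8, truly_relevant=False),
    Source("Independent emissions audit", keyword_overlap=0.3, click_rate=0.1, truly_relevant=True),
]

for s in sorted(sources, key=proxy_score, reverse=True):
    print(f"{s.title:28} proxy={proxy_score(s):.2f} construct={s.truly_relevant}")
# The keyword-rich, frequently clicked source ranks first even though the audit is
# the one that is actually relevant: the observed space and the construct space differ.
```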

governance and literacy

There is a risk that employees presented with information from the algorithm simply accept it as true, use only that information to answer the parliamentary inquiry, and forget that other, "irrelevant" information is not picked up by the algorithm, so that they never consider the possibility of alternative information. The digital assistant may lead employees not to interpret the provided information themselves, but to see it as the "correct" and easy answer to the question they are asked to investigate. This is troublesome when it pertains to important issues of human rights or democracy. To illustrate: how often do you look beyond the first page of Google? (This argument is inspired by the articles and discussion on public health and the diagnosis algorithm; the employees may come to think the digital assistant simply provides a correct diagnosis.)
Automation tends to increase complacency (Anderson).

4. Recommendations

To address the concerns highlighted above, I make the following recommendations.

Ethics lobbying: it must be made fully transparent, understandable, and explainable how the algorithm selects the resources and knowledge presented to the employees. It then becomes clear why certain information is deemed irrelevant, and that information can still be retrieved if necessary. A sketch of what such an explanation could look like is given below.
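As a hypothetical illustration of this recommendation (the actual assistant's internals are not public, so the feature names and weights below are assumptions), a relevance score could be decomposed into per-feature contributions, so an employee can see why a source was ranked low and decide to retrieve it anyway:

```python
def explain_score(weights: dict[str, float], features: dict[str, float]) -> list[tuple[str, float]]:
    """Return each feature's contribution to the final relevance score, largest first."""
    contributions = [(name, weights.get(name, 0.0) * value)
                     for name, value in features.items()]
    return sorted(contributions, key=lambda c: abs(c[1]), reverse=True)

# Illustrative weights and the observed features of one candidate source.
weights = {"query_term_overlap": 0.6, "source_recency": 0.3, "past_click_rate": 0.1}
features = {"query_term_overlap": 0.2, "source_recency": 0.9, "past_click_rate": 0.05}

for name, contribution in explain_score(weights, features):
    print(f"{name:20} -> {contribution:+.2f}")
print("total relevance score:", round(sum(c for _, c in explain_score(weights, features)), 2))
```

Surfacing such a breakdown does not by itself prevent ethics lobbying, but it makes the filtering criteria open to scrutiny rather than leaving them implicit.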

Construct spaces: the algorithm of the digital assistant currently focuses on the processing stage of the information, working with a set of classes described and derived from the designers' notions of law, justice, and public values. Careful reflection on the questions and features used to train the learning model is necessary (Binns, 2018). This means it is not enough for the digital assistant to learn to filter "more relevant results" in the way that, for instance, the YouTube algorithm learns to find more relevant videos. The digital assistant needs continuous, careful reflection on what "relevant" entails. Which public values are used? What are the categories? Only then can the employees draw on all the necessary information when answering the inquiries.
Governance and literacy: my recommendation is twofold: 1) if the inquiry pertains to sensitive topics, the principle of essentiality must be considered; if the matter is essential (such as concerning fundamental rights), it must be dealt with democratically, for example by having multiple employees use the digital assistant and give multiple answers to the parliamentary inquiry; 2) the employees need an attitude of critical big data and algorithmic literacy (Sander, 2020). For instance, they need training to search, aggregate, look for patterns, and scrutinize the algorithm in a way that is meaningful for their work and for public value creation.