Floridi2023MachineUnlearning

Luciano Floridi, "Machine Unlearning: Its Nature, Scope, and Importance for a “Delete Culture”"

Bibliographic info

Floridi, L. (2023). Machine Unlearning: its nature, scope, and importance for a “delete culture”. Philosophy & Technology, 36(2), 42.

Commentary

⇒ In general, what about this text makes it particularly interesting or thought-provoking? What do you see as weaknesses?
In this paper, Floridi identifies a societal shift from a culture of recording information to a new culture of deleting it. Today's information is mostly digital and therefore easy to archive, reproduce, and modify, which leads to an accumulation of information and to the information society's dependence on that accumulation. This raises the question of what information needs to be removed (e.g. inappropriate, privacy-sensitive, or illegal content). Two strategies can be employed for information removal: deletion, which makes the information unavailable, and blocking, which renders it inaccessible. The distinction matters because the two strategies differ in cost, implementability, and flexibility.
When privacy or intellectual property concerns arise, the common remedies, deleting the entire model, removing the unwanted training data and retraining, or attempting to 'block' the information, are insufficient: the first is too drastic, the second costly and resource-intensive, and the third ineffective. Floridi therefore advocates machine unlearning (MU) as a solution. MU tackles these challenges by selectively removing all traces of specific data points from the model, a kind of selective amnesia, without compromising the model's performance. As a result, the model no longer generates or provides the problematic information associated with those data points.
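To make this concrete, below is a minimal Python sketch of one well-known MU strategy, sharded training in the spirit of SISA (Bourtoule et al., 2021): instead of editing a monolithic model, training is split across shards so that unlearning a point only requires retraining the shard that saw it. All class and method names are my own illustration; Floridi's paper is conceptual and prescribes no particular algorithm.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    class ShardedUnlearner:
        """Illustrative sketch, not from the paper. Trains one model per
        data shard; to unlearn points, retrains only the shards that
        contained them. Assumes integer class labels and that every
        shard keeps at least two classes after deletion."""

        def __init__(self, n_shards=5, seed=0):
            self.n_shards = n_shards
            self.rng = np.random.default_rng(seed)

        def fit(self, X, y):
            self.X, self.y = np.asarray(X), np.asarray(y)
            # Each training point lands in exactly one shard.
            self.shard_of = self.rng.integers(0, self.n_shards, size=len(self.X))
            self.models = [self._fit_shard(s) for s in range(self.n_shards)]
            return self

        def _fit_shard(self, s):
            idx = np.where(self.shard_of == s)[0]
            return LogisticRegression(max_iter=1000).fit(self.X[idx], self.y[idx])

        def unlearn(self, forget_idx):
            # Retrain only the shards that ever saw the contested points,
            # instead of retraining on the full dataset.
            affected = np.unique(self.shard_of[forget_idx])
            keep = np.setdiff1d(np.arange(len(self.X)), forget_idx)
            self.X, self.y = self.X[keep], self.y[keep]
            self.shard_of = self.shard_of[keep]
            for s in affected:
                self.models[s] = self._fit_shard(s)

        def predict(self, X):
            # Majority vote over the per-shard models.
            votes = np.stack([m.predict(np.asarray(X)) for m in self.models])
            return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

The design trade-off is visible in the sketch: each shard model sees less data, so some predictive accuracy is sacrificed in exchange for cheap, provable removal of individual points.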

The paper is particularly interesting because of Floridi's observation that in the digital age a staggering amount of personal information is collected and stored. A delete culture emphasizes individuals' control over their data and their ability to delete it when it is no longer needed or desired, which protects personal privacy and prevents unauthorized access to sensitive information. MU can play a significant role in supporting such a culture by enabling the removal of unwanted or sensitive information from machine learning models. I understand the clear need for further research on MU; however, I would have appreciated a more comprehensive explanation of what an MU model entails and a more detailed exploration of the drawbacks of using such models.

One potential weakness of a delete culture built on MU is the risk of incomplete or imperfect removal of information. MU techniques rely on identifying and removing specific patterns or associations from trained models, yet certain traces of the deleted information may still persist within the model. These residual traces can potentially be exploited or reconstructed, leading to privacy or security vulnerabilities.
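One way such residual traces are probed in practice is with a membership-inference-style test: if a supposedly unlearned model is still systematically more confident on the forgotten points than on data it never saw, something of them persists. A minimal illustrative sketch in Python (the function name and its simple "confidence gap" output are my own assumptions, not from the paper):

    import numpy as np

    def residual_trace_gap(model, X_forget, y_forget, X_unseen, y_unseen):
        """Crude membership-inference-style probe. Compares the model's
        confidence in the true labels of supposedly forgotten points
        with its confidence on genuinely unseen points; a gap well above
        zero suggests traces of the deleted data persist. Assumes
        integer labels 0..k-1 matching the columns of predict_proba."""
        def true_label_confidence(X, y):
            proba = model.predict_proba(np.asarray(X))
            return proba[np.arange(len(y)), np.asarray(y)]
        return (true_label_confidence(X_forget, y_forget).mean()
                - true_label_confidence(X_unseen, y_unseen).mean())

A gap near zero is what successful unlearning should produce; a large positive gap is a red flag that the "forgotten" data can still be detected in the model's behaviour.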

Excerpts & Key Quotes

⇒ For 3-5 key passages, include the following: a descriptive heading, the page number, the verbatim quote, and your brief commentary on this

Unavailability and inaccessibility in the deleting culture

"... our newborn deletion culture concerns what information can, and if so, ought to be made unavailable in principle (completely removed or irrecoverably deleted), or at least inaccessible in practice—that is, still available yet not obtainable—and in what circumstances, that is, when, where, to whom, how long for, for what purposes, and so forth."

Comment:

The quote underscores the key aspect of the emerging deletion culture: determining which information should be made unavailable or inaccessible. It highlights the spectrum of possibilities, ranging from complete removal or irrecoverable deletion to practical inaccessibility while remaining technically available, and it names the contextual factors that shape the decision, such as timing, location, intended recipients, duration, and purpose. In short, a culture centered on deleting content requires careful, case-by-case judgements that balance access to information, privacy, and ethical concerns.

"Right to be unlearnt"

"In the future, users may ask search engines to comply with their “right to be forgotten” and ML systems with their “right to be unlearnt”."

Comment:

The quote envisions a future where users assert their "right to be forgotten" with search engines and extend that concept to machine learning (ML) systems as a "right to be unlearnt." It highlights the growing recognition of individuals' rights to control their personal information and the potential application of this right in the context of ML systems.

The "right to be forgotten" refers to an individual's ability to request the removal of personal data from online platforms or search engine results. Extending this concept to ML systems suggests that individuals may desire the ability to request the removal or unlearning of their personal information from machine learning models.

This quote reflects the evolving conversation around privacy, data protection, and user agency in the digital age. It suggests that individuals should have the power to influence the retention or removal of their information within ML systems, aligning with principles of privacy and control.

However, the practical implementation and implications of a “right to be unlearnt” in ML systems pose significant challenges: unlearning specific information may require complex procedures, and striking a balance between privacy rights and the utility of trained models is a difficult task.

Ethical challenges and potential misuses

"MU takes as its input a trained model that provides some undesirable or unacceptable information and produces as its output a revised model that no longer provides that information. The problem is to decide what is undesirable or unacceptable. In the hands of malicious or unscrupulous agents, from autocrats to criminals, MU could become a powerful tool for censorship, misinformation, propaganda, vandalism, cyber-attacks, or new forms of ransomware, making acceptable and desirable information no longer available."

Comment:

The quote raises a critical concern regarding the potential misuse or abuse of machine unlearning (MU) techniques. While MU offers the ability to revise a trained model to remove undesirable or unacceptable information, the determination of what falls under these categories becomes a crucial challenge.

In the wrong hands, MU could be exploited by malicious actors, ranging from autocrats to criminals, leading to detrimental consequences. The quote highlights the potential risks of MU being employed as a tool for censorship, misinformation, propaganda, vandalism, cyber-attacks, or even the development of new forms of ransomware. Such actions could result in the suppression of acceptable and desirable information, undermining the integrity of information systems and societal well-being.

The quote serves as a reminder of the importance of ethical frameworks and responsible use of technology in safeguarding against potential misuse and protecting the integrity of information ecosystems. It highlights the need for ongoing research, awareness, and proactive measures to mitigate the risks associated with MU, ultimately promoting the responsible and positive impact of this technology.


#comment/Anderson There are interesting connections with harm to memory, especially if one accepts certain robot rights: could it be ethically objectionable to destroy traces of the past?