Requirements for the Independent Work module "Ethics of AI", as a supplement to Digital Ethics for 2024-25
Updated for 2024-2025 on 2025-01-20
The requirements are as follows:
Attendance at 3 extra seminar meetings
In these 2-hour meetings, to be held during Block 3, we will discuss a selection of additional readings about the philosophy of AI.
Required readings:
A set of readings to be read by all (and discussed in the seminar meetings)
For 2024-25, we'll start in mid-February with a set of readings about the use of AI to help facilitate collective decision-making, discussing the so-called "Habermas Machine."
Further readings will be established at the first meeting.
Two additional journal articles that each of you selects (pending approval from me) on topics related to “Ethics of AI”, and that are not readings you’ve been assigned for another course (such as Philosophy of AI). Otherwise, you are free to pick your own readings, as long as each is a full-length article that is clearly in the area of ethics of AI and comes from a peer-reviewed journal.
Suitable journals include those listed in the instructions for the course paper: Big Data & Society; Ethics and Information Technology; Information, Communication & Society; International Review of Information Ethics; Surveillance & Society; Philosophy & Technology. If you have any doubts, email me a PDF of the article for quick approval.
For reference, I've listed readings from previous years at the bottom of this page.
Five short written assignments (1–2 pages each):
Three "literature notes", including comments on the reading as a whole, and on 3-5 quoted passages (follow the template p. ovided in OneDrive/Teams); please try to avoid duplications with existing literature notes. (Use the templates):
one about an assigned reading from the second meeting;
one about one of your two individually selected readings;
one about EITHER an assigned reading from the third meeting OR one of your two individually selected readings (feel free to do both, if you like!)
One "concept note" about a key concept in the ethics of AI: using a standard format in the files I've set up for you. For inspiration, you can also consult the attached file Key concepts for AI Ethics.
Later, I'll set up a form where you enter which readings, concepts, and DEC topics you'll be choosing, to avoid overlap.
Formalities:
When you've met the requirements, each of you will receive 2.5 EC, registered as the course "GWMIND1601 Independent Work"; it should be possible to add the description "Philosophy of AI for Digital Ethics students". It will be registered as a "YEAR" course, so not tied to a particular block.
Please submit the written assignments by September 1, 2025. If you need more time, or if you need a grade sooner, let me know!
Readings from previous years:
van Woudenberg, R., Ranalli, C., & Bracker, D. (2024). Authorship and ChatGPT: a Conservative View. Philosophy & Technology, 37(1), 34. https://doi.org/10.1007/s13347-024-00715-1
Porsdam Mann, S., Earp, B. D., Nyholm, S., Danaher, J., Møller, N., Bowman-Smart, H., Hatherley, J., Koplin, J., Plozza, M., Rodger, D., Treit, P. V., Renard, G., McMillan, J., & Savulescu, J. (2023). Generative AI entails a credit–blame asymmetry. Nature Machine Intelligence, 5(5), 472–475. https://doi.org/10.1038/s42256-023-00653-1
Birhane, A., Kasirzadeh, A., Leslie, D., & Wachter, S. (2023). Science in the age of large language models. Nature Reviews Physics, 5(5), 277–280. https://doi.org/10.1038/s42254-023-00581-4
1. Zuboff, S. (2015). Big other: Surveillance Capitalism and the Prospects of an Information Civilization. Journal of Information Technology, 30(1), 75–89. https://doi.org/10.1057/jit.2015.5
2. Williams, A., & Raekstad, P. (2022). Surveillance Capitalism or Information Republic? Journal of Applied Philosophy, 39(3), 421–440. https://doi.org/10.1111/japp.12570
3. Bruineberg, J. (2023). Adversarial inference: predictive minds in the attention economy. Neuroscience of Consciousness, 2023(1), niad019. https://doi.org/10.1093/nc/niad019
4. Gould, C. C. (2019). How Democracy Can Inform Consent: Cases of the Internet and Bioethics. Journal of Applied Philosophy, 36(2), 173–191. https://doi.org/10.1111/japp.12360
5. Brennan, J. (2019). Democracy as Uninformed Non‐Consent. Journal of Applied Philosophy, 36(2), 205–211. https://doi.org/10.1111/japp.12359
6. Baeza-Yates, R., & Fayyad, U. M. (2022). The Attention Economy and the Impact of Artificial Intelligence. In Perspectives on Digital Humanism (pp. 123–134). Springer International Publishing. https://doi.org/10.1007/978-3-030-86144-5_18
7. Bombaerts, G., Anderson, J., Dennis, M., Gerola, A., Frank, L., Hannes, T., Hopster, J., Marin, L., & Spahn, A. (2023). Attention as Practice: Buddhist Ethics Responses to Persuasive Technologies. Global Philosophy, 33(2). https://doi.org/10.1007/s10516-023-09680-4
Some suggestions for additional readings, to select individually (again, you are free to select any two additional readings, not necessarily from this list; just check with me first):
Abbate, F. (2023). Natural and Artificial Intelligence: A Comparative Analysis of Cognitive Aspects. Minds and Machines, 33(4), 791–815. https://doi.org/10.1007/s11023-023-09646-w
Bhargava, V. R., & Velasquez, M. (2020). Ethics of the Attention Economy: The Problem of Social Media Addiction. Business Ethics Quarterly, 1–39. https://doi.org/10.1017/beq.2020.32
Kerr, A. D., & Scharp, K. (2022). The End of Vagueness: Technological Epistemicism, Surveillance Capitalism, and Explainable Artificial Intelligence. Minds and Machines, 32(3), 585–611. https://doi.org/10.1007/s11023-022-09609-7
Sangiovanni, A. (2019). Democratic Control of Information in the Age of Surveillance Capitalism. Journal of Applied Philosophy, 36(2), 212–216. https://doi.org/10.1111/japp.12363
West, S. M. (2019). Data capitalism: Redefining the logics of surveillance and privacy. Business & Society, 58(1), 20–41. https://doi.org/10.1177/0007650317718185
Session One
Bostrom, Nick, and Eliezer Yudkowsky. 2014. “The Ethics of Artificial Intelligence.” In The Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish, and William M. Ramsey, 316–34. Cambridge: Cambridge University Press.
Peters, Uwe. 2022. "Explainable AI lacks regulative reasons: Why AI and human decision-making are not equally opaque."
Sunstein, Cass R. 2019. "Algorithms, Correcting Biases."
Coeckelbergh, Mark. 2020. AI Ethics: Responsibility and Explainability.
Session Three
Nyholm, Sven. 2022. "A new control problem? Humanoid robots, artificial intelligence, and the value of control."
Lundgren, Björn. 2021. "Ethical machine decisions and the input-selection problem."
Readings in 2021-22
Backer, Larry Catá. 2018. “And an Algorithm to Bind Them All? Social Credit, Data Driven Governance, and the Emergence of an Operating System for Global Normative Orders.” SSRN Electronic Journal.
Creemers, Rogier. Manuscript. “China’s Social Credit System: An Evolving Practice of Control.” SSRN Electronic Journal.
Floridi, L., Cowls, J., Beltrametti, M. et al. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds & Machines 28, 689–707 (2018).
Bostrom, Nick, and Eliezer Yudkowsky. 2014. “The Ethics of Artificial Intelligence.” In The Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish, and William M. Ramsey, 316–34. Cambridge: Cambridge University Press.
European Commission. 2021. “Fostering a European Approach to Artificial Intelligence.”
Page 123:
{Quotation here… Use the > sign to format as a blockquote}
Comment:
{Your comment on the quoted passage here}
Page 123:
{Quotation here… Use the > sign to format as a blockquote}
Comment:
{Your comment on the quoted passage here}
Page 123:
{Quotation here… Use the > sign to format as a blockquote}
Comment:
{Your comment on the quoted passage here}
⇒ #keyConcepts
{For this "concept note", please elaborate on a key concept of your choice (for some examples, see the file "Key concepts for AI Ethics"). This note should be 300–500 words}
Definition of {concept}
{Define this concept in 2-3 sentences, then explain how it is distinct from two other closely related concepts (see, for example, items on this list: Key concepts for AI Ethics)}
Implications of commitment to {concept}
…
Societal transformations required for addressing concerns raised by {concept}
…
1. The organization
2. The AI technologies employed
3. Ethical concerns
{⇒ In this section, discuss three ethical concerns that are raised by this organization's use of these AI technologies}
{Replace this text with a brief intro, mentioning some of the things that the organization is clearly aware of or doing well.}