artificial moral agents
This term refers to artificial agents considered in the (hypothetical?) role of making moral judgements. Note that this is a different question from whether artificial agents themselves have moral status, that is, whether they deserve respect for their autonomy and dignity or concern for their interests and well-being.
The question of when and under what conditions AI systems could be considered artificial moral agents hinges on whether they could make moral decisions and, if so, what obligations they would thereby incur.
With regard to the question of whether artificial moral agents in fact exist, we can distinguish between weak and strong AI moral agency.
- Weak moral agency refers to AI systems designed to make decisions based on programmed ethical rules or algorithms. Weak moral agents perform or behave in ways that meet ethical criteria, and may even follow decision-making procedures also used by human moral agents, but they do so without understanding or consciousness. Because their behavior is merely the output of their design, it does not meet the standard of moral autonomy.
- Strong moral agency – if it turns out to be possible – would imply an AI system that understands ethics and can make autonomous moral decisions. It would require a high level of autonomy, consciousness, and perhaps virtue, none of which is yet achievable in AI systems.
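The sense of weak moral agency described above can be illustrated with a deliberately simple sketch: a rule-based filter whose outputs satisfy ethical criteria even though the system has no understanding of why the rules hold. The `Action` fields and the two rules below are illustrative assumptions, not a proposed implementation of machine ethics.

```python
from dataclasses import dataclass


@dataclass
class Action:
    # Illustrative attributes a designer might encode; real systems
    # would need far richer representations of context.
    description: str
    causes_harm: bool
    violates_consent: bool


def is_permissible(action: Action) -> bool:
    """Apply fixed, programmed ethical rules to a candidate action.

    The agent "behaves ethically" only in the sense that its output
    conforms to these rules; no understanding is involved.
    """
    if action.causes_harm:
        return False
    if action.violates_consent:
        return False
    return True


if __name__ == "__main__":
    share_data = Action("share user data", causes_harm=False, violates_consent=True)
    greet_user = Action("greet the user", causes_harm=False, violates_consent=False)
    print(is_permissible(share_data))  # refused by a programmed rule
    print(is_permissible(greet_user))  # permitted by the same rules
```

Such a system meets ethical criteria as mere output of its design: changing the rule table changes its "morality" entirely, which is precisely why it falls short of moral autonomy.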