Agency is the freedom and capacity for an entity to act.

source

Mechanistic Agency

Mechanistic Agency is an entity’s functional capacity to take action based on the information it is given, and is commonly attributed to artificial intelligence and machines.

The agent holds representational states of how things are in its environment. It also has motivational or desire states, which specify how things ought to be. Its functional capacity is its ability to process these two and act suitably when there is a mismatch between them.
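
To make the representation/motivation split concrete, here is a minimal sketch of a mechanistic agent in a toy thermostat setting. The class, names, and numbers are illustrative assumptions, not from the source:

```python
# A minimal sketch of a mechanistic agent (illustrative, thermostat-style).

class MechanisticAgent:
    def __init__(self, desired_temp: float):
        # Motivational state: how things ought to be.
        self.desired = desired_temp

    def act(self, observed_temp: float) -> str:
        # observed_temp stands in for the representational state:
        # how things currently are in the environment.
        # The "functional capacity" is just this mapping from
        # mismatch to a suitable action.
        if observed_temp < self.desired:
            return "heat"
        if observed_temp > self.desired:
            return "cool"
        return "idle"

agent = MechanisticAgent(desired_temp=21.0)
print(agent.act(18.5))  # -> "heat": acts only to resolve the mismatch
```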

Mechanistic agents are concerned purely with outcomes (which action will lead to the best immediate result?). The mechanistic view centres on functionality and is closely related to the Platonic view of ethics. That is, ethical behaviour is not a matter of choosing between possibly conflicting “goods”, but a matter of knowledge.

On artificial intelligence:

The problem for AI, of course, is that AI has no intrinsic motivation or “desire” to be a particular “kind” of moral agent.

Moreover, a decision made at any timestep does not fundamentally change the agent itself; to put it crudely, running one forward inference pass through the model does not update its weights. Even if it somehow did (say, through some active or continual-learning setting), its weights encode only its representations of the world, not any sense of self-regard or self-perception.

See: Back Propagation, Node, Computer vision
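
The inference/training distinction above can be demonstrated directly. Here is a hedged sketch in PyTorch, with a tiny linear layer standing in for any trained model: a forward pass alone leaves the weights untouched; only backpropagation plus an optimizer step changes them.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)            # stand-in for any trained model
x = torch.randn(1, 4)              # one input, one "decision" at a timestep

before = model.weight.clone()
with torch.no_grad():
    _ = model(x)                   # inference: a single forward pass
assert torch.equal(model.weight, before)   # weights are unchanged

# Only training actually updates the weights:
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss = model(x).sum()
loss.backward()                    # backpropagation computes gradients
opt.step()                         # the optimizer step changes the weights
assert not torch.equal(model.weight, before)
```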

That said:

It follows that more information (more data, more annotations) can bring such agents closer to the ethical path. The LLM can simply be improved or patched towards some ideal.
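
What “patched towards some ideal” amounts to, in the simplest case, is supervised fine-tuning on annotated examples. A hedged sketch, where a tiny linear model stands in for an LLM and the annotations are invented for illustration:

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 2)                  # stand-in for a pretrained LLM
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Hypothetical annotations: inputs labelled 0 ("bad") or 1 ("good")
# by human annotators encoding the desired "ideal".
inputs = torch.randn(32, 8)
labels = torch.randint(0, 2, (32,))

for _ in range(100):                     # more data and more passes nudge
    opt.zero_grad()                      # the weights toward the annotated ideal
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    opt.step()
```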

Volitional Agency

Volitional Agency, on the other hand, is rooted in actively making decisions guided by internal desires.

It’s not just taking an action but evaluating one’s values, or becoming a certain kind of person. Volitional agency is rooted in Aristotle’s notion that virtue is an active and unending practice. Not only are observed actions part of morality, but so are the internal judgements leading up to those actions and the reflection on their consequences.

The implication is that volitional agency is reserved for those capable of genuine value-based deliberation and self-shaping.

See: Self-anthropology, Metacognition

Side note: Three necessary conditions for an agent to be held morally responsible

  1. Normative significance: The agent faces a morally significant choice between good and bad, right and wrong.
  2. Judgemental capacity: The agent has the understanding, and access to the evidence, needed to make informed judgements.
  3. Relevant control: The agent has control over which option it chooses.

Taking these into account, only a small subset of all AI agents can really be held responsible.

Questions

  1. Can we one day outsource moral decision-making to an LLM? Who would be held responsible for its decisions?
  2. As AI systems become more and more “agentic”, will they take away from human agency?

See also: Tools for Conviviality

New vocabulary

epistemic (adjective): relating to knowledge or the study of knowledge