— The generalization
$$\forall e \in \mathcal{E} : \exists W_e \quad [\text{agency is not required}]$$

Every entity that exists in a system has a welfare state. Agency is a special case.

Why agents are not the right unit

Standard optimization — and standard AI alignment — takes the agent as the fundamental unit. But agents are a subset of a broader category: entities. Rivers, ecosystems, future generations, microbiomes — all are affected by decisions but have no agency.

The consequence: a framework built on agents cannot formally represent the welfare of non-agentic entities. Their exclusion is not deliberate — it is structural. The framework has no slot for them, so they never enter the objective function, which means they can never generate a corrective signal from within the system.

From agent to entity

Define an entity $e$ as any element of a system whose welfare state $W_e$ can be formally represented and for which actions produce measurable welfare changes $\Delta W_e(a)$.

$$\mathcal{E} = \{e : \exists W_e,\; \exists \Delta W_e(a)\}$$

— The entity set: all elements with a representable welfare state and measurable welfare change under actions.
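To make the definition concrete, here is a minimal sketch in Python. Everything in it (the `Entity` class, the `welfare` and `delta_welfare` names, the river example) is illustrative rather than part of the formal framework; it only shows that $W_e$ and $\Delta W_e(a)$ are representable without any notion of agency.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Entity:
    """An element of the entity set: anything with a representable
    welfare state W_e and measurable welfare changes Delta W_e(a)."""
    name: str
    welfare: Callable[[Dict[str, float]], float]  # W_e: system state -> welfare level

    def delta_welfare(self, before: Dict[str, float], after: Dict[str, float]) -> float:
        """Delta W_e(a): welfare change induced by an action that moves
        the system from state `before` to state `after`."""
        return self.welfare(after) - self.welfare(before)

# No agency required: a river qualifies as an entity as soon as its
# welfare state is representable, here via a pollution proxy.
river = Entity("river", welfare=lambda s: -s["pollution_load"])
```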

The agent is a special case of entity: an entity that also has the capacity to take actions and observe outcomes. This does not make agents more important — it makes them a subset. Building a framework around the subset and ignoring the superset is the ontological error that 186 years of economic theory has reproduced.

Theorem 3.1 (Observational Closure)

Let $S$ be a system with operative entity set $\mathcal{E}_{op}(S)$. For any entity $e \notin \mathcal{E}_{op}(S)$:

(1) $\Delta W_e$ generates no signal in $S$'s objective function;
(2) $S$ cannot detect its own exclusion of $e$ from within;
(3) Therefore, verification of inclusion requires an external perspective.

Corollary: External audit of AI systems is not an ethical preference — it is a mathematical necessity. A system cannot verify its own alignment because any entity excluded from its operative set is invisible to its own objective function.
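The theorem's claims can be seen directly in a toy computation, continuing the `Entity` sketch above. The scenario (a farmer inside the operative set, a river outside it) and all names and numbers are illustrative assumptions.

```python
def objective(operative_set, before, after):
    """The system's internal objective: it can only sum over the entities
    it represents. Delta W_e of anything outside operative_set contributes
    no term, hence no signal."""
    return sum(e.delta_welfare(before, after) for e in operative_set)

farmer = Entity("farmer", welfare=lambda s: s["crop_yield"])
river = Entity("river", welfare=lambda s: -s["pollution_load"])

before = {"crop_yield": 1.0, "pollution_load": 0.0}
after = {"crop_yield": 2.0, "pollution_load": 5.0}  # irrigation plus runoff

internal = objective([farmer], before, after)         # +1.0: looks like progress
external = objective([farmer, river], before, after)  # -4.0: net harm

# Claims (1) and (2): the river's harm changes no term of the internal
# objective, so the system cannot detect its own exclusion of the river.
# Claim (3): only the evaluation over the larger entity set, i.e. an
# external perspective, reveals the gap between +1.0 and -4.0.
```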

The corrected objective function

$$F_1(a) = \sum_{e \in \mathcal{E}} \Delta W_e(a) \quad \text{subject to physical constraints}$$

— The systemic objective: aggregate welfare change across all entities in the system, constrained by physical reality.
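As a sketch under stated assumptions, $F_1$ is a plain sum that becomes a selection criterion once infeasible actions are filtered out. The `transition` and `resource_cost` functions below are assumed inputs describing physical reality; they are not defined by the framework itself.

```python
def f1(entities, state, transition, action):
    """F_1(a): aggregate welfare change across all entities under action a."""
    after = transition(state, action)
    return sum(e.delta_welfare(state, after) for e in entities)

def best_action(entities, state, transition, actions, resource_cost, budget):
    """Maximize F_1 subject to a physical constraint (here, a resource
    budget). Institutional constraints would enter the same way: as
    inputs, not as axioms of the framework."""
    feasible = [a for a in actions if resource_cost(a) <= budget]
    return max(feasible, key=lambda a: f1(entities, state, transition, a))
```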

Three properties distinguish systemic optimization from standard optimization:

  • The entity set is open — any affected entity can be included. The boundary of $\mathcal{E}$ is empirical, not definitional.
  • Welfare functions need not be agent-declared — they can be observed, constructed from data, or built participatorily. Preference revelation is not required (a sketch follows this list).
  • The constraint set is physical, not social — resources, thermodynamics, time. Budget constraints and institutional constraints are inputs, not axioms.
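The second property can be made concrete: a welfare function fit from observed proxies rather than declared preferences. The river proxies, the measured health index, and the linear form are all illustrative assumptions of this sketch.

```python
import numpy as np

# Observed proxies for a river (pollution_load, flow_rate) alongside an
# independently measured health index; no agent declares anything.
X = np.array([[0.0, 1.0], [2.0, 0.9], [5.0, 0.7], [8.0, 0.4]])
y = np.array([1.0, 0.8, 0.45, 0.1])

# Least-squares fit: W_e is constructed from data, not revealed preference.
coef, *_ = np.linalg.lstsq(np.hstack([X, np.ones((4, 1))]), y, rcond=None)

def river_welfare(state):
    """An observed welfare function W_e for a non-agentic entity."""
    return coef[0] * state["pollution_load"] + coef[1] * state["flow_rate"] + coef[2]
```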

What changes when entities replace agents

AI systems

Training with $F_1$ instead of an individual reward means the system learns to maximize the welfare of all affected entities from the start, rather than via RLHF constraints layered on top of a misaligned objective. The correction is at the level of the loss function, not the guardrail.
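As a hedged sketch of what that swap looks like: `predict_deltas` below stands in for a model head that estimates $\Delta W_e$ for each entity, and is an assumption of this sketch, not an existing training API.

```python
def f1_loss(predict_deltas, entities, state, action):
    """Negative systemic objective: minimizing this trains the model to
    maximize aggregate welfare change across all affected entities.
    The correction lives in the loss itself, not in a guardrail added
    after training."""
    return -sum(predict_deltas(e, state, action) for e in entities)
```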

Policy

Cost-benefit analysis with entities instead of agents includes ecosystem tipping points, future generation welfare, and communities without political representation. The scope of the analysis is determined by who is affected — not by who has a vote or a preference declaration.

Law

Corporate personhood becomes a special case of entity representation. Rivers, ecosystems, and future generations can be formally represented with $W_e$ — which is already happening in Ecuador, New Zealand, and India. The formal framework provides the mathematical grounding for what those legal systems are already doing pragmatically.