There is currently no accepted formal definition of an optimizing system (in the sense meant in agent foundations). People tend to have the sense that they “know it when they see it”, but pinning down an exact definition has proven challenging. This matters because the danger of powerful AI systems routes through their ability to strongly optimize their environment.
Finding such a definition is one of our core research problems.
One thing a formal definition allows you to do is say definitively whether a given system is an optimizing system. However, such a definition may not let you make this determination in practice. For this reason, we also seek an understanding of effective methods for determining whether systems are optimizing, possibly by making approximations or assumptions about the system. Such effective methods may also let you calculate things like the optimization’s strength, robustness, and target.
Given any system trajectory, one could argue that that exact outcome is precisely what the system was “optimizing” for. This is analogous to the argument that, for any sequence of choices over time, there exists a utility function that those choices maximize.
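To make this rationalization explicit, here is the standard degenerate construction (notation ours): given an observed choice sequence \(c_1, \dots, c_T\), define the utility of taking action \(a\) at time \(t\) as

```latex
u(a, t) =
\begin{cases}
  1 & \text{if } a = c_t, \\
  0 & \text{otherwise.}
\end{cases}
```

Every observed choice maximizes this utility at its step, so the bare existence of a maximizing utility function places no constraint on behavior.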
We don’t think this idea poses a serious challenge to finding a definition, but dissolving it is something any good definition is required to do.
If a dynamical system contains an attractor, then most attempts at defining optimization would categorize trajectories that start in the basin of attraction as optimizing trajectories. However, most people are interested in optimization because it is in some sense unusual, surprising, or impressive. Thus extremely simple dynamical systems with attractors do not seem to be paragons of optimization.
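To make the worry concrete, here is a minimal sketch (a toy system of ours, not a proposed definition): the one-dimensional system x′ = −x has a single attractor at the origin, and any criterion based purely on trajectories converging toward a small target set would count it as optimizing.

```python
# Toy example: x' = -x has an attractor at x = 0 whose basin is all of R.
# Every trajectory converges to the origin, so a purely convergence-based
# definition would classify this trivial system as an optimizer for x = 0.

def step(x: float, dt: float = 0.01) -> float:
    """One Euler step of the dynamics x' = -x."""
    return x - x * dt

for x0 in (-3.0, 0.5, 10.0):
    x = x0
    for _ in range(2000):
        x = step(x)
    print(f"start {x0:+5.1f} -> after 2000 steps {x:+.9f}")  # all near 0
```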
We tend to bite this bullet and say that attractors represent optimizers, but that the more interesting class of optimizers is described by what we call agents.
In some sense, optimization is not possible in ergodic systems, because every trajectory eventually “spreads out” over the state space rather than settling into any small target set. Yet the dynamical laws of physical systems are often ergodic, and we regularly see behavior in them that is intuitively optimization. So it seems that optimization can be a transient or temporary property of a trajectory.
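Here is a minimal sketch of the spreading-out phenomenon, using the chaotic logistic map as a stand-in ergodic system (a toy choice of ours):

```python
# Toy example: the logistic map x -> 4x(1 - x) is ergodic with respect to
# its invariant measure on [0, 1]. A tight cluster of initial conditions,
# initially confined to a width-1e-6 interval, spreads over essentially
# the whole interval, so no small target set is maintained in the long run.

def logistic(x: float) -> float:
    return 4.0 * x * (1.0 - x)

# 1000 points packed into [0.3, 0.3 + 1e-6].
points = [0.3 + i * 1e-9 for i in range(1000)]

for _ in range(60):
    points = [logistic(x) for x in points]

print(f"spread after 60 steps: {max(points) - min(points):.3f}")  # ~1.0
```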
For some systems, it seems that making a very small change can dramatically reduce the size of the future state space, even when we wouldn’t consider that change to be the addition of an optimizer. The canonical example is a bottle of water: with no cap, the water could come out of the bottle and end up almost anywhere; with a cap, the water is constrained to the small interior of the bottle. We would not typically consider a bottle cap to be an optimizer, nor would we consider a capped bottle sitting still to be an optimizing system.
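As a toy illustration (a model we made up, not a claim about fluid dynamics), let a random walk stand in for a water molecule; a reflecting boundary standing in for the cap collapses the reachable future state space without doing anything we would call optimization:

```python
# Toy model: a "molecule" random-walks on the integers, starting inside a
# bottle occupying positions 0..10. Uncapped, it can wander anywhere;
# capped, moves past the walls are pushed back inside.

def reachable_states(capped: bool, steps: int = 100) -> int:
    """Count positions reachable within `steps` moves, starting at 5."""
    states = {5}
    for _ in range(steps):
        new = set()
        for pos in states:
            for move in (pos - 1, pos + 1):
                if capped:
                    move = min(10, max(0, move))  # the cap blocks escape
                new.add(move)
        states |= new
    return len(states)

print("uncapped:", reachable_states(False))  # 201 reachable positions
print("capped:  ", reachable_states(True))   # 11 reachable positions
```

The cap shrinks the reachable set by an order of magnitude, yet nothing about it looks like an optimizer; a good definition should account for this.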