The Law of Requisite Knowledge

In order to adequately compensate for perturbations, a control system must "know" which action to select from its variety of available actions.


Control is not only dependent on a requisite variety of actions in the regulator: the regulator must also know which action to select in response to a given perturbation. In the simplest case, such knowledge can be represented as a one-to-one mapping from the set D of perceived disturbances to the set R of regulatory actions: f: D → R, which maps each disturbance to the appropriate action that will suppress it.

For example, a thermostat will map the perception "temperature too low" to the action "heat", and the perception "temperature high enough" to the action "do not heat". Such knowledge can also be expressed as a set of production rules of the form "if condition (perceived disturbance), then action".
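As a minimal sketch (Python, with hypothetical names; not code from the original article), such a rule set can be written directly as a mapping from perceived disturbances to actions:

    # Knowledge as a mapping f: D -> R from perceived disturbances
    # to regulatory actions (a toy illustration).
    knowledge = {
        "temperature too low": "heat",
        "temperature high enough": "do not heat",
    }

    def select_action(disturbance):
        """Production rule: if condition (perceived disturbance), then action."""
        return knowledge[disturbance]

    print(select_action("temperature too low"))  # -> heat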

This "knowledge" is embodied in different systems in different ways, for example through the specific ways designers have connected the components in artificial systems, or in organisms through evolved structures such as genes or learned connections between neurons as in the brain.

In the absence of such knowledge, the system would have to try out actions blindly, until one of them by chance eliminated the perturbation. The larger the variety of disturbances (and therefore of requisite actions), the smaller the likelihood that a randomly selected action would achieve the goal, and thus ensure the survival of the system. Therefore, increasing the variety of actions must be accompanied by increasing the constraint or selectivity in choosing the appropriate action, that is, by increasing knowledge. This requirement may be called the law of requisite knowledge. Since all living organisms are also control systems, life implies knowledge, as in Maturana's often-quoted statement that "to live is to cognize".

In practice, for complex control systems, control actions will be neither blind nor completely determined, but more like "educated guesses" that have a reasonable probability of being correct, though without a guarantee of success. Feedback may help the system to correct the errors it thus makes before it is destroyed. Goal-seeking activity thereby becomes equivalent to heuristic problem-solving.

Mathematical representation

Such incomplete or "heuristic" knowledge can be quantified as the conditional uncertainty of an action from R, given a disturbance in D: H(R|D). (The uncertainty or entropy H is calculated in the normal way, but using the conditional probabilities P(r|d).)
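Written out explicitly (this is the standard definition of conditional entropy; it is not spelled out in the original text):

    H(R|D) = - Σ_d P(d) Σ_r P(r|d) log P(r|d)

A short Python sketch (hypothetical helper names, assumed log base 2) that computes this quantity from a joint distribution P(d, r):

    import math

    def conditional_entropy(p_joint):
        """H(R|D) in bits, given p_joint[(d, r)] = P(d, r)."""
        p_d = {}  # marginal distribution P(d)
        for (d, _), p in p_joint.items():
            p_d[d] = p_d.get(d, 0.0) + p
        h = 0.0
        for (d, _), p in p_joint.items():
            if p > 0:
                h -= p * math.log2(p / p_d[d])  # p / P(d) = P(r|d)
        return h

    # A deterministic mapping has H(R|D) = 0 (complete knowledge):
    print(conditional_entropy({("low", "heat"): 0.5, ("ok", "idle"): 0.5}))  # 0.0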

H(R|D) = 0 represents the case of no uncertainty or complete knowledge, where the action is completely determined by the disturbance. H(R|D) = H(R) represents complete ignorance, where the action is selected independently of the disturbance. Aulin has shown that the law of requisite variety can be extended to include knowledge or ignorance by simply adding this conditional uncertainty term (which remained implicit in Ashby's non-probabilistic formulation of the law):

H(E) ≥ H(D) + H(R|D) - H(R) - K

This says that the variety in the essential variables E can be reduced by:

1) increasing buffering K;
2) increasing variety of action H(R); or
3) decreasing the uncertainty H(R|D) about which action to choose for a given disturbance, that is, increasing knowledge (a numerical sketch follows below).
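As a toy numerical check (a sketch under assumed uniform distributions; none of these numbers come from the original), the bound on H(E) can be evaluated for the two extreme cases of complete knowledge and complete ignorance:

    import math

    def entropy(ps):
        """Shannon entropy in bits of a probability distribution."""
        return -sum(p * math.log2(p) for p in ps if p > 0)

    H_D = entropy([0.25] * 4)  # 4 equiprobable disturbances: H(D) = 2 bits
    H_R = entropy([0.25] * 4)  # 4 equiprobable actions: H(R) = 2 bits
    K = 0.0                    # no passive buffering assumed

    # Complete knowledge: H(R|D) = 0, so the bound on H(E) is
    # H(D) + 0 - H(R) - K = 0: disturbances can be fully suppressed.
    print(H_D + 0.0 - H_R - K)  # 0.0

    # Complete ignorance: H(R|D) = H(R), and the bound rises to H(D):
    # the full variety of disturbances reaches the essential variables.
    print(H_D + H_R - H_R - K)  # 2.0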

Conclusion

This principle reminds us that a variety of actions is not sufficient for effective control: the system must also be able to (vicariously) select an appropriate one. Without knowledge, the system would have to try out an action blindly, and the larger the variety of perturbations, the smaller the probability that this action would turn out to be adequate. Notice the tension between this law and the law of selective variety: the more variety, the more difficult the selection to be made, and the more complex the requisite knowledge.

An equivalent principle was formulated by Conant and Ashby (1970) as "Every good regulator of a system must be a model of that system". Therefore the present principle can also be called the law of regulatory models.

Reference: Heylighen F. (1992): "Principles of Systems and Cybernetics: an evolutionary perspective", in: Cybernetics and Systems '92, R. Trappl (ed.), (World Scientific, Singapore), p. 3-10.



Author
F. Heylighen & C. Joslyn

Date
Sep 3, 2001 (modified)
Aug 1993 (created)
