The principle of maximum entropy
The principle of maximum entropy states that, subject to precisely stated prior data (such as a proposition that expresses testable information), the probability distribution which best represents the current state of knowledge is the one with the largest entropy.
Another way of stating this: take precisely stated prior data or testable information about a probability distribution function, and consider the set of all trial probability distributions that would encode that prior data; among these, the distribution with maximal information entropy is the best choice.
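As a small illustration, the following sketch solves a version of Jaynes's dice problem: among all distributions over the faces 1..6 of a die whose mean is constrained to equal 4.5, find the one with maximum entropy. The target mean of 4.5 is an illustrative choice. The maximizer is known to take the exponential form p_i ∝ exp(λ·i), so the code finds the Lagrange multiplier λ by bisection on the resulting mean.

```python
import math

# Maximum-entropy distribution over die faces 1..6, subject to one
# testable constraint: the mean must equal 4.5 (illustrative value).
# The solution has exponential form p_i ∝ exp(lam * i); we find the
# multiplier lam by bisection, since the mean is increasing in lam.

faces = range(1, 7)
target_mean = 4.5

def mean_for(lam):
    w = [math.exp(lam * i) for i in faces]
    z = sum(w)
    return sum(i * wi for i, wi in zip(faces, w)) / z

lo, hi = -5.0, 5.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mean_for(mid) < target_mean:
        lo = mid
    else:
        hi = mid

lam = (lo + hi) / 2
w = [math.exp(lam * i) for i in faces]
z = sum(w)
p = [wi / z for wi in w]
print([round(pi, 4) for pi in p])  # weights tilted toward higher faces
```

Note that the constraint alone does not pick a unique distribution; maximum entropy is the tie-breaking rule that selects the least committal one consistent with it.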
Jaynes was a strong advocate of this approach, claiming the maximum entropy distribution represented the least informative distribution.
Alternatively, the principle is often invoked for model specification: in this case the observed data itself is assumed to be the testable information.
The selected distribution is the one that makes the least claim to being informed beyond the stated prior data, that is to say the one that admits the most ignorance beyond the stated prior data.
In particular, Jaynes offered a new and very general rationale for why the Gibbsian method of statistical mechanics works.
He argued that the entropy of statistical mechanics and the information entropy of information theory are essentially the same concept. Consequently, statistical mechanics should be seen as just a particular application of a general tool of logical inference and information theory.
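This connection can be made concrete: maximizing entropy over a set of energy levels, subject only to a fixed mean energy, yields the Gibbs/Boltzmann distribution p_i ∝ exp(−βE_i). The sketch below (with illustrative energy levels and inverse temperature) verifies this numerically by perturbing the Gibbs distribution along a direction that preserves both normalization and mean energy, and checking that the entropy strictly decreases.

```python
import math

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Illustrative energy levels and inverse temperature.
E = [0.0, 1.0, 2.0]
beta = 0.7

# Gibbs/Boltzmann distribution: the maximum-entropy distribution
# subject to a fixed mean energy sum(p_i * E_i).
w = [math.exp(-beta * e) for e in E]
Z = sum(w)
p = [wi / Z for wi in w]
U = sum(pi * e for pi, e in zip(p, E))  # mean energy determined by p

# A feasible perturbation must satisfy sum(v) = 0 and sum(v_i * E_i) = 0;
# for these equally spaced levels, v = (1, -2, 1) works.
v = [1.0, -2.0, 1.0]
eps = 0.01
q = [pi + eps * vi for pi, vi in zip(p, v)]

assert abs(sum(q) - 1.0) < 1e-12                       # still normalized
assert abs(sum(qi * e for qi, e in zip(q, E)) - U) < 1e-12  # same mean energy
print(entropy(p) > entropy(q))  # True: Gibbs form has strictly larger entropy
```

Because entropy is strictly concave and the constraints are linear, the Gibbs distribution is the unique maximizer, so any such feasible perturbation must lose entropy.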
We have some testable information I about a quantity x which takes values in some interval of the real numbers (all integrals below are over this interval). We assume this information has the form of m constraints on the expectations of the functions f_k; that is, we require our probability density function p(x) to satisfy

∫ f_k(x) p(x) dx = F_k,   k = 1, …, m.

The invariant measure function m(x) is best understood by supposing that x is known to take values only in the bounded interval (a, b), and that no other information is given.
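In that bounded-interval case with no further constraints, the maximum-entropy density is the uniform density on (a, b). A minimal sketch on a discretized interval, assuming n equal bins and an arbitrarily chosen alternative distribution for comparison: among all distributions over n bins, the uniform one attains the largest entropy, log(n).

```python
import math

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Discretize the interval (a, b) into n equal bins; with no constraints
# beyond the support, the uniform distribution maximizes entropy.
n = 8
uniform = [1.0 / n] * n
skewed = [0.3, 0.2, 0.15, 0.1, 0.1, 0.05, 0.05, 0.05]  # any other choice

print(entropy(uniform), math.log(n))   # equal: the maximum is log(n)
print(entropy(uniform) > entropy(skewed))  # True
```

The invariant measure plays the role of this uniform reference when the interval is unbounded or when the "uninformed" baseline is not uniform.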