In mathematics, majorization is a preorder on vectors of real numbers. For a vector $\mathbf{a} \in \mathbb{R}^n$, we denote by $\mathbf{a}^{\downarrow} \in \mathbb{R}^n$ the vector with the same components, but sorted in descending order. Given $\mathbf{a}, \mathbf{b} \in \mathbb{R}^n$, we say that $\mathbf{a}$ weakly majorizes $\mathbf{b}$ from below, written as $\mathbf{a} \succ_w \mathbf{b}$, iff

$$\sum_{i=1}^{k} a_i^{\downarrow} \geq \sum_{i=1}^{k} b_i^{\downarrow} \quad \text{for all } k = 1, \dots, n.$$

Equivalently, we say that $\mathbf{b}$ is weakly majorized by $\mathbf{a}$ from below, written as $\mathbf{b} \prec_w \mathbf{a}$. If $\mathbf{a} \succ_w \mathbf{b}$ and in addition $\sum_{i=1}^{n} a_i = \sum_{i=1}^{n} b_i$, we say that $\mathbf{a}$ majorizes $\mathbf{b}$, written as $\mathbf{a} \succ \mathbf{b}$. Equivalently, we say that $\mathbf{b}$ is majorized by $\mathbf{a}$, written as $\mathbf{b} \prec \mathbf{a}$.

Note that the majorization order does not depend on the order of the components of the vectors $\mathbf{a}$ or $\mathbf{b}$. Majorization is not a partial order, since $\mathbf{a} \succ \mathbf{b}$ and $\mathbf{b} \succ \mathbf{a}$ do not imply $\mathbf{a} = \mathbf{b}$; they only imply that the components of each vector are equal, but not necessarily in the same order.

Note that the notation is inconsistent in the mathematical literature: some use the reverse notation, e.g., $\mathbf{a} \succ \mathbf{b}$ is replaced with $\mathbf{b} \succ \mathbf{a}$.

A function $f : \mathbb{R}^n \to \mathbb{R}$ is said to be Schur convex when $\mathbf{a} \succ \mathbf{b}$ implies $f(\mathbf{a}) \geq f(\mathbf{b})$. Similarly, $f$ is Schur concave when $\mathbf{a} \succ \mathbf{b}$ implies $f(\mathbf{a}) \leq f(\mathbf{b})$.

The majorization partial order on finite sets, described here, can be generalized to the Lorenz ordering, a partial order on distribution functions. For example, a wealth distribution is Lorenz-greater than another iff its Lorenz curve lies below the other. As such, a Lorenz-greater wealth distribution has a higher Gini coefficient and more income inequality.
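The prefix-sum definition above translates directly into a short computation. The following is a minimal sketch, assuming NumPy is available; the function names `weakly_majorizes` and `majorizes` are illustrative, not standard.

```python
import numpy as np

def weakly_majorizes(a, b):
    """Check a ≻_w b (from below): every partial sum of the descending
    sort of a is at least the corresponding partial sum for b."""
    a_sorted = np.sort(a)[::-1]
    b_sorted = np.sort(b)[::-1]
    return bool(np.all(np.cumsum(a_sorted) >= np.cumsum(b_sorted) - 1e-12))

def majorizes(a, b):
    """Check a ≻ b: weak majorization from below plus equal totals."""
    return weakly_majorizes(a, b) and bool(np.isclose(np.sum(a), np.sum(b)))

# A Schur-convex function such as the sum of squares respects the order:
a, b = np.array([3.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0])
print(majorizes(a, b))                   # True
print(np.sum(a**2) >= np.sum(b**2))      # True, as Schur convexity predicts
```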
Examples
The order of the entries does not affect the majorization, e.g., the statement $(1,2) \prec (0,3)$ is simply equivalent to $(2,1) \prec (3,0)$.

(Strong) majorization: $(1,2,3) \prec (0,3,3) \prec (0,0,6)$. For vectors with $n$ components,

$$\left(\tfrac{1}{n}, \ldots, \tfrac{1}{n}\right) \prec \left(\tfrac{1}{n-1}, \ldots, \tfrac{1}{n-1}, 0\right) \prec \cdots \prec \left(\tfrac{1}{2}, \tfrac{1}{2}, 0, \ldots, 0\right) \prec (1, 0, \ldots, 0).$$

(Weak) majorization: $(1,2,3) \prec_w (1,3,3) \prec_w (1,3,4)$. For vectors with $n$ components:

$$\left(\tfrac{1}{n}, \ldots, \tfrac{1}{n}\right) \prec_w \left(\tfrac{1}{n-1}, \ldots, \tfrac{1}{n-1}, 1\right).$$
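These chains can be verified numerically. A minimal, self-contained sketch follows, assuming NumPy; the helper name `prefix_sums_dominate` is illustrative, not standard.

```python
import numpy as np

def prefix_sums_dominate(a, b):
    """True when every prefix sum of sorted-descending a is >= the
    corresponding prefix sum of sorted-descending b."""
    return bool(np.all(np.cumsum(np.sort(a)[::-1]) >=
                       np.cumsum(np.sort(b)[::-1]) - 1e-12))

# Strong majorization chain: prefix sums dominate and totals agree at every link.
chain = [np.array([1., 2., 3.]), np.array([0., 3., 3.]), np.array([0., 0., 6.])]
for lo, hi in zip(chain, chain[1:]):
    print(prefix_sums_dominate(hi, lo), np.isclose(lo.sum(), hi.sum()))  # True True

# Weak majorization chain: prefix sums dominate, totals need not agree.
weak = [np.array([1., 2., 3.]), np.array([1., 3., 3.]), np.array([1., 3., 4.])]
for lo, hi in zip(weak, weak[1:]):
    print(prefix_sums_dominate(hi, lo))  # True
```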
Geometry of majorization
For $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$, we have $\mathbf{x} \prec \mathbf{y}$ if and only if $\mathbf{x}$ is in the convex hull of all vectors obtained by permuting the coordinates of $\mathbf{y}$. Figure 1 displays the convex hull in 2D for a two-component vector $\mathbf{y}$. Notice that the center of the convex hull, which is an interval in this case, is the vector $\mathbf{x} = (\bar{y}, \bar{y})$, where $\bar{y}$ is the mean of the components of $\mathbf{y}$. This is the "smallest" vector satisfying $\mathbf{x} \prec \mathbf{y}$ for this given vector $\mathbf{y}$. Figure 2 shows the convex hull in 3D. The center of the convex hull, which is a 2D polygon in this case, is again the "smallest" vector $\mathbf{x}$ satisfying $\mathbf{x} \prec \mathbf{y}$ for this given vector $\mathbf{y}$.
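The equivalence between majorization and membership in this permutation polytope can be probed numerically: any convex combination of the permutations of $\mathbf{y}$ should pass the prefix-sum test against $\mathbf{y}$. A minimal sketch, assuming NumPy; the vector `y` and the sample count are arbitrary illustrative choices.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
y = np.array([6.0, 2.0, 1.0])          # illustrative choice
perms = np.array([np.array(p) for p in itertools.permutations(y)])

def majorized_by(x, y):
    """Prefix-sum test for x ≺ y (components sorted in descending order)."""
    px = np.cumsum(np.sort(x)[::-1])
    py = np.cumsum(np.sort(y)[::-1])
    return bool(np.all(py >= px - 1e-9)) and bool(np.isclose(px[-1], py[-1]))

# Random points of the convex hull of the 3! permutations of y.
for _ in range(5):
    w = rng.dirichlet(np.ones(len(perms)))   # convex weights
    x = w @ perms                            # point in the permutohedron of y
    print(majorized_by(x, y))                # True every time

# The center of the hull is the mean vector, the "smallest" point in the order.
center = np.full_like(y, y.mean())
print(majorized_by(center, y))               # True
```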
Each of the following statements is true if and only if $\mathbf{a} \succ \mathbf{b}$:
$\mathbf{b} = D\mathbf{a}$ for some doubly stochastic matrix $D$. This is equivalent to saying $\mathbf{b}$ can be represented as a convex combination of the permutations of $\mathbf{a}$. Furthermore, by Carathéodory's theorem such a convex combination requires at most $n$ of the $n!$ permutations.
From $\mathbf{a}$ we can produce $\mathbf{b}$ by a finite sequence of "Robin Hood operations" where we replace two elements $a_i$ and $a_j < a_i$ with $a_i - \varepsilon$ and $a_j + \varepsilon$, respectively, for some $\varepsilon \in (0, a_i - a_j)$; one such operation is sketched below.
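To illustrate, the snippet below applies two Robin Hood transfers and checks that each result is majorized by its predecessor. A minimal sketch, assuming NumPy; the function name `robin_hood` and the chosen indices and $\varepsilon$ values are illustrative.

```python
import numpy as np

def robin_hood(a, i, j, eps):
    """Move eps from the larger entry a[i] to the smaller entry a[j].
    Requires 0 < eps < a[i] - a[j]; the total sum is preserved."""
    assert a[i] > a[j] and 0 < eps < a[i] - a[j]
    out = a.astype(float).copy()
    out[i] -= eps
    out[j] += eps
    return out

def majorizes(a, b):
    """Prefix-sum test for a ≻ b."""
    pa = np.cumsum(np.sort(a)[::-1])
    pb = np.cumsum(np.sort(b)[::-1])
    return bool(np.all(pa >= pb - 1e-9)) and bool(np.isclose(pa[-1], pb[-1]))

a = np.array([0.0, 0.0, 6.0])
b = robin_hood(a, 2, 0, 3.0)     # -> [3., 0., 3.]
c = robin_hood(b, 2, 1, 1.0)     # -> [3., 1., 2.]
print(majorizes(a, b), majorizes(b, c), majorizes(a, c))   # True True True
```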
Suppose that for two real vectors $\mathbf{a}, \mathbf{b} \in \mathbb{R}^n$, $\mathbf{a}$ majorizes $\mathbf{b}$. Then it can be shown that there exists a set of probabilities $p_1, \ldots, p_k$ with $\sum_{i=1}^{k} p_i = 1$ and a set of permutation matrices $P_1, \ldots, P_k$ such that $\mathbf{b} = \sum_{i=1}^{k} p_i P_i \mathbf{a}$. Alternatively, it can be shown that there exists a doubly stochastic matrix $D$ such that $\mathbf{b} = D\mathbf{a}$.
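One way to exhibit such a matrix concretely is to solve a small feasibility problem over the entries of $D$. The following is a minimal sketch, assuming NumPy and SciPy are available; the vectors `a` and `b` are illustrative, and the linear program simply searches for any doubly stochastic $D$ with $D\mathbf{a} = \mathbf{b}$.

```python
import numpy as np
from scipy.optimize import linprog

a = np.array([6.0, 0.0, 0.0])   # majorizing vector (illustrative)
b = np.array([3.0, 2.0, 1.0])   # majorized vector: a ≻ b
n = len(a)

A_eq, b_eq = [], []

# Each row of D sums to 1.
for i in range(n):
    row = np.zeros(n * n)
    row[i * n:(i + 1) * n] = 1.0
    A_eq.append(row); b_eq.append(1.0)

# Each column of D sums to 1 (the last column is implied by the others).
for j in range(n - 1):
    col = np.zeros(n * n)
    col[j::n] = 1.0
    A_eq.append(col); b_eq.append(1.0)

# D a = b, i.e. sum_j D[i, j] * a[j] = b[i] for each i.
for i in range(n):
    row = np.zeros(n * n)
    row[i * n:(i + 1) * n] = a
    A_eq.append(row); b_eq.append(b[i])

res = linprog(c=np.zeros(n * n), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0.0, 1.0)] * (n * n), method="highs")
if res.success:
    D = res.x.reshape(n, n)
    print(np.round(D, 3))   # a doubly stochastic matrix
    print(D @ a)            # reproduces b
```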
We say that a Hermitian operator, $A$, majorizes another, $B$, if the set of eigenvalues of $A$ majorizes that of $B$.
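In finite dimensions this reduces to comparing eigenvalue vectors. A minimal sketch, assuming NumPy; the matrices `A` and `B` are arbitrary illustrative choices.

```python
import numpy as np

def majorizes(a, b):
    """Prefix-sum test for a ≻ b on real vectors."""
    pa = np.cumsum(np.sort(a)[::-1])
    pb = np.cumsum(np.sort(b)[::-1])
    return bool(np.all(pa >= pb - 1e-9)) and bool(np.isclose(pa[-1], pb[-1]))

# A has eigenvalues (3, 1, 0); B is a rotated copy of diag(2, 1, 1),
# so its eigenvalues are (2, 1, 1), which A's eigenvalues majorize.
A = np.diag([3.0, 1.0, 0.0])
Q, _ = np.linalg.qr(np.random.default_rng(1).normal(size=(3, 3)))
B = Q @ np.diag([2.0, 1.0, 1.0]) @ Q.T

eig_A = np.linalg.eigvalsh(A)
eig_B = np.linalg.eigvalsh(B)
print(majorizes(eig_A, eig_B))    # True: A majorizes B in the operator sense
```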
Given two functions $f, g : \mathbb{R} \to \mathbb{R}$, $f$ is said to majorize $g$ if, for all $x$, $f(x) \geq g(x)$. If there is some $x_0$ so that $f(x) \geq g(x)$ for all $x > x_0$, then $f$ is said to dominate $g$. Alternatively, the preceding terms are often defined requiring the strict inequality $>$ instead of $\geq$ in the foregoing definitions.
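The distinction between majorizing everywhere and dominating beyond some threshold can be checked pointwise on a grid. A minimal sketch, assuming NumPy; the functions and the threshold `x0` are illustrative.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)

f = lambda t: t**2 + 1.0
g = lambda t: t**2
h = lambda t: 5.0 * t

# f majorizes g: f(x) >= g(x) at every grid point.
print(bool(np.all(f(x) >= g(x))))                    # True

# f does not majorize h everywhere, but it dominates h:
# f(x) >= h(x) for all x > x0 (any x0 >= 5 works, since t^2 + 1 > 5t for t > 5).
print(bool(np.all(f(x) >= h(x))))                    # False
x0 = 5.0
print(bool(np.all(f(x[x > x0]) >= h(x[x > x0]))))    # True
```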
Generalizations
Various generalizations of majorization are discussed in chapters 14 and 15 of the reference work Inequalities: Theory of Majorization and Its Applications. Albert W. Marshall, Ingram Olkin, Barry Arnold. Second edition. Springer Series in Statistics. Springer, New York, 2011.