If the vectors are centered random variables, the Gramian is approximately proportional to the covariance matrix, with the scaling determined by the number of elements in the vector.
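For instance, a minimal numpy sketch of this relationship (the variable names and the 1/n covariance normalization are illustrative assumptions; with explicitly centered data the proportionality is exact):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                            # samples per random variable
X = rng.normal(size=(3, n))            # three random vectors, one per row
X -= X.mean(axis=1, keepdims=True)     # center each variable exactly

G = X @ X.T                            # Gram matrix of the centered rows
C = np.cov(X, bias=True)               # sample covariance with 1/n scaling

# For centered variables, the Gram matrix is n times the covariance matrix.
print(np.allclose(G / n, C))           # True
```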
In control theory, the controllability Gramian and observability Gramian determine whether a linear system is controllable and observable, respectively.
Gramian matrices arise in covariance structure model fitting.
In the finite element method, the Gram matrix arises from approximating a function from a finite-dimensional space; the Gram matrix entries are then the inner products of the basis functions of the finite-dimensional subspace.
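As a toy illustration of such a Gram (mass) matrix, here is a sketch using piecewise-linear "hat" basis functions on a uniform grid over [0, 1] with a simple midpoint quadrature (the grid, basis, and quadrature are illustrative assumptions, not a full finite element implementation):

```python
import numpy as np

# Piecewise-linear "hat" basis functions on a uniform grid over [0, 1].
nodes = np.linspace(0.0, 1.0, 5)
h = nodes[1] - nodes[0]

def hat(i, x):
    """The i-th hat function: 1 at nodes[i], falling linearly to 0 at its neighbours."""
    return np.clip(1.0 - np.abs(x - nodes[i]) / h, 0.0, None)

# Gram (mass) matrix: entries are the L^2 inner products of the basis
# functions, approximated here by a midpoint-rule quadrature.
xs = np.linspace(0.0, 1.0, 20_001)
mid, dx = (xs[:-1] + xs[1:]) / 2, xs[1] - xs[0]
G = np.array([[np.sum(hat(i, mid) * hat(j, mid)) * dx for j in range(5)]
              for i in range(5)])

# Hats overlap only with immediate neighbours, so G is tridiagonal:
# ~2h/3 on the interior diagonal, ~h/6 on the off-diagonals.
print(np.round(G / h, 3))
```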
In machine learning, kernel functions are often represented as Gram matrices.
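For example, a small sketch of a kernel Gram matrix using a Gaussian (RBF) kernel; the kernel, its bandwidth gamma, and the data are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian (RBF) kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))            # five points in R^2

# Kernel Gram matrix: K[i, j] = k(x_i, x_j), an inner product of the
# feature-space images of x_i and x_j.
K = np.array([[rbf_kernel(xi, xj) for xj in X] for xi in X])

# Like any Gram matrix, K is symmetric positive semidefinite.
print(np.allclose(K, K.T), np.linalg.eigvalsh(K).min() > -1e-12)
```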
The Gram matrix is symmetric in the case the inner product is real-valued; it is Hermitian in the general, complex case by definition of an inner product. The Gram matrix is positive semidefinite, and every positive semidefinite matrix is the Gramian matrix for some set of vectors. The fact that the Gramian matrix is positive semidefinite can be seen from the following simple derivation:

$$x^\dagger \mathbf{G} x = \sum_{i,j} \overline{x_i} x_j \left\langle v_i, v_j \right\rangle = \left\langle \sum_i x_i v_i, \sum_j x_j v_j \right\rangle = \left\| \sum_i x_i v_i \right\|^2 \geq 0.$$

The first equality follows from the definition of matrix multiplication, the second and third from the sesquilinearity of the inner product (bilinearity, in the real case), and the last from the positive definiteness of the inner product. Note that this also shows that the Gramian matrix is positive definite if and only if the vectors $v_i$ are linearly independent.
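The derivation can also be checked numerically; a minimal sketch, with a random complex matrix of column vectors as an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(4, 6)) + 1j * rng.normal(size=(4, 6))  # 6 vectors in C^4, as columns

G = V.conj().T @ V                     # Gram matrix: G[i, j] = <v_i, v_j>

# Hermitian with nonnegative eigenvalues, i.e. positive semidefinite.
print(np.allclose(G, G.conj().T))
print(np.linalg.eigvalsh(G).min() > -1e-10)

# x^dagger G x equals ||sum_i x_i v_i||^2 >= 0 for any coefficients x.
x = rng.normal(size=6) + 1j * rng.normal(size=6)
print(np.allclose(x.conj() @ G @ x, np.linalg.norm(V @ x) ** 2))
```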
Finding a vector realization
Given any positive semidefinite matrix $M$, one can decompose it as

$$M = B^\dagger B,$$

where $B^\dagger$ is the conjugate transpose of $B$ (or $M = B^\mathsf{T} B$ in the real case). Here $B$ is a $k \times n$ matrix, where $k$ is the rank of $M$. Various ways to obtain such a decomposition include computing the Cholesky decomposition or taking the non-negative square root of $M$. The columns $b^{(1)}, \dots, b^{(n)}$ of $B$ can be seen as $n$ vectors in $\mathbb{C}^k$ (or $k$-dimensional Euclidean space $\mathbb{R}^k$, in the real case). Then

$$M_{ij} = b^{(i)} \cdot b^{(j)},$$

where the dot product $a \cdot b = \sum_{\ell=1}^{k} \overline{a_\ell} b_\ell$ is the usual inner product on $\mathbb{C}^k$. Thus a Hermitian matrix $M$ is positive semidefinite if and only if it is the Gram matrix of some vectors $b^{(1)}, \dots, b^{(n)}$. Such vectors are called a vector realization of $M$. The infinite-dimensional analog of this statement is Mercer's theorem.
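A minimal numpy sketch of computing a vector realization via the non-negative square root from an eigendecomposition (the test matrix M is an assumption; for a positive definite M, np.linalg.cholesky would give another valid B):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4))
M = A.T @ A                            # a 4x4 positive semidefinite test matrix (rank 3)

# Non-negative square root via the eigendecomposition M = Q diag(w) Q^T.
w, Q = np.linalg.eigh(M)
w = np.clip(w, 0.0, None)              # clip tiny negative rounding errors
B = np.diag(np.sqrt(w)) @ Q.T          # then M = B^T B (B^dagger B in the complex case)

# The columns of B are a vector realization of M: pairwise dot products
# of the columns reproduce the entries of M.
print(np.allclose(B.T @ B, M))
print(np.allclose(M[1, 2], B[:, 1] @ B[:, 2]))
```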
Uniqueness of vector realizations
If $M$ is the Gram matrix of vectors $v_1, \dots, v_n$ in $\mathbb{R}^k$, then applying any rotation or reflection of $\mathbb{R}^k$ (any orthogonal transformation) to the sequence of vectors results in the same Gram matrix. That is, for any $k \times k$ orthogonal matrix $Q$, the Gram matrix of $Q v_1, \dots, Q v_n$ is also $M$. This is the only way in which two real vector realizations of $M$ can differ: the vectors $v_1, \dots, v_n$ are unique up to orthogonal transformations. In other words, the dot products $v_i \cdot v_j$ and $w_i \cdot w_j$ are equal if and only if some rigid transformation of $\mathbb{R}^k$ transforms the vectors $v_1, \dots, v_n$ to $w_1, \dots, w_n$ and $0$ to $0$. The same holds in the complex case, with unitary transformations in place of orthogonal ones. That is, if the Gram matrix of vectors $v_1, \dots, v_n$ is equal to the Gram matrix of vectors $w_1, \dots, w_n$ in $\mathbb{C}^k$, then there is a unitary $k \times k$ matrix $U$ (meaning $U^\dagger U = I$) such that $v_i = U w_i$ for $i = 1, \dots, n$.
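A short numerical check of this invariance in the real case, with a random orthogonal matrix Q drawn via a QR decomposition (the data are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(3, 5))            # five vectors in R^3, as columns

# A random orthogonal matrix Q from the QR decomposition of a random matrix.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))

G_before = V.T @ V                     # Gram matrix of v_1, ..., v_5
G_after = (Q @ V).T @ (Q @ V)          # Gram matrix of Qv_1, ..., Qv_5

# Applying the same rotation/reflection to every vector leaves the
# Gram matrix unchanged.
print(np.allclose(G_before, G_after))
```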
The rank of the Gram matrix of vectors in $\mathbb{R}^k$ or $\mathbb{C}^k$ equals the dimension of the space spanned by these vectors.
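For instance, a sketch (constructing six vectors that span a 2-dimensional subspace of R^4 is an assumption for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
# Six vectors in R^4 that span (almost surely) a 2-dimensional subspace.
V = rng.normal(size=(4, 2)) @ rng.normal(size=(2, 6))

G = V.T @ V
# The rank of the Gram matrix equals the dimension of the span.
print(np.linalg.matrix_rank(G), np.linalg.matrix_rank(V))   # 2 2
```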
Gram determinant
The Gram determinant or Gramian is the determinant of the Gram matrix:

$$|G(v_1, \dots, v_n)| = \begin{vmatrix} \langle v_1, v_1 \rangle & \langle v_1, v_2 \rangle & \cdots & \langle v_1, v_n \rangle \\ \langle v_2, v_1 \rangle & \langle v_2, v_2 \rangle & \cdots & \langle v_2, v_n \rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle v_n, v_1 \rangle & \langle v_n, v_2 \rangle & \cdots & \langle v_n, v_n \rangle \end{vmatrix}.$$

If $v_1, \dots, v_n$ are vectors in $\mathbb{R}^m$, then it is the square of the $n$-dimensional volume of the parallelotope formed by the vectors. In particular, the vectors are linearly independent if and only if the parallelotope has nonzero $n$-dimensional volume, if and only if the Gram determinant is nonzero, if and only if the Gram matrix is nonsingular. The Gram determinant can also be expressed in terms of the exterior product of vectors by

$$|G(v_1, \dots, v_n)| = \left\| v_1 \wedge \cdots \wedge v_n \right\|^2.$$
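As a concrete check (the two vectors are illustrative assumptions): for $n = 2$ vectors in $\mathbb{R}^3$ the parallelotope is a parallelogram, whose area can be computed independently with the cross product:

```python
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([1.0, 2.0, 0.0])

V = np.column_stack([v1, v2])
gram_det = np.linalg.det(V.T @ V)          # Gram determinant = 4
area = np.linalg.norm(np.cross(v1, v2))    # parallelogram area = 2

# The Gram determinant is the squared n-dimensional volume.
print(np.isclose(gram_det, area ** 2))     # True
```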