Categorical distribution
In probability theory and statistics, a categorical distribution is a discrete probability distribution that describes the possible results of a random variable that can take on one of K possible categories, with the probability of each category separately specified. There is no innate underlying ordering of these outcomes, but numerical labels are often attached for convenience in describing the distribution (e.g. 1 to K). The K-dimensional categorical distribution is the most general distribution over a K-way event; any other discrete distribution over a size-K sample space is a special case. The parameters specifying the probabilities of each possible outcome are constrained only by the fact that each must be in the range 0 to 1, and all must sum to 1.
The categorical distribution is the generalization of the Bernoulli distribution for a categorical random variable, i.e. for a discrete variable with more than two possible outcomes, such as the roll of a die. On the other hand, the categorical distribution is a special case of the multinomial distribution, in that it gives the probabilities of potential outcomes of a single drawing rather than multiple drawings.
Terminology
Occasionally, the categorical distribution is termed the "discrete distribution". However, this properly refers not to one particular family of distributions but to a general class of distributions.

In some fields, such as machine learning and natural language processing, the categorical and multinomial distributions are conflated, and it is common to speak of a "multinomial distribution" when a "categorical distribution" would be more precise. This imprecise usage stems from the fact that it is sometimes convenient to express the outcome of a categorical distribution as a "1-of-K" vector rather than as an integer in the range 1 to K; in this form, a categorical distribution is equivalent to a multinomial distribution for a single observation.
However, conflating the categorical and multinomial distributions can lead to problems. For example, in a Dirichlet-multinomial distribution, which arises commonly in natural language processing models as a result of collapsed Gibbs sampling where Dirichlet distributions are collapsed out of a hierarchical Bayesian model, it is very important to distinguish categorical from multinomial. The joint distribution of the same variables with the same Dirichlet-multinomial distribution has two different forms depending on whether it is characterized as a distribution whose domain is over individual categorical nodes or over multinomial-style counts of nodes in each particular category. Both forms have very similar-looking probability mass functions, which both make reference to multinomial-style counts of nodes in a category. However, the multinomial-style PMF has an extra factor, a multinomial coefficient, that is a constant equal to 1 in the categorical-style PMF. Confusing the two can easily lead to incorrect results in settings where this extra factor is not constant with respect to the distributions of interest. The factor is frequently constant in the complete conditionals used in Gibbs sampling and the optimal distributions in variational methods.
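To make the role of that factor concrete, here is a brief worked comparison of the two PMFs, stated here for illustration using the standard formulas. For counts c_1, ..., c_K summing to n, the multinomial PMF is

(n! / (c_1! ⋯ c_K!)) · p_1^{c_1} ⋯ p_K^{c_K}.

When n = 1, exactly one c_i equals 1 and the rest are 0, so the multinomial coefficient equals 1 and the expression reduces to the categorical PMF p_i; for n > 1 the coefficient depends on the counts and cannot in general be dropped.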
Formulating distributions
A categorical distribution is a discrete probability distribution whose sample space is the set of k individually identified items. It is the generalization of the Bernoulli distribution for a categorical random variable.

In one formulation of the distribution, the sample space is taken to be a finite sequence of integers. The exact integers used as labels are unimportant; they might be {0, 1, ..., k − 1} or {1, 2, ..., k} or any other arbitrary set of values. In the following descriptions, we use {1, 2, ..., k} for convenience, although this disagrees with the convention for the Bernoulli distribution, which uses {0, 1}. In this case, the probability mass function f is:

f(x = i | p) = p_i,

where p = (p_1, ..., p_k), p_i represents the probability of seeing element i, and Σ_i p_i = 1.
Another formulation that appears more complex but facilitates mathematical manipulations is as follows, using the Iverson bracket:

f(x | p) = ∏_{i=1}^{k} p_i^{[x = i]},

where [x = i] evaluates to 1 if x = i, 0 otherwise. There are various advantages of this formulation, e.g.:
- It is easier to write out the likelihood function of a set of independent identically distributed categorical variables.
- It connects the categorical distribution with the related multinomial distribution.
- It shows why the Dirichlet distribution is the conjugate prior of the categorical distribution, and allows the posterior distribution of the parameters to be calculated.
Yet another formulation treats the categorical distribution as a special case of the multinomial distribution with the number of trials fixed at 1: the outcome is expressed as a "1-of-K" random vector x of dimension k in which the element corresponding to the chosen category equals 1 and all other elements equal 0. The probability mass function in this formulation is:

f(x | p) = ∏_{i=1}^{k} p_i^{x_i},

where p_i represents the probability of seeing element i and Σ_i p_i = 1.
This is the formulation adopted by Bishop.
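As an illustration of these formulations, the following short Python sketch (not part of the original article; the function names pmf_integer and pmf_one_hot are chosen here for illustration) evaluates the same probability under the integer-label and 1-of-K encodings:

import numpy as np

p = np.array([0.2, 0.5, 0.3])    # category probabilities, must sum to 1

def pmf_integer(i, p):
    # First formulation: x is an integer label in {1, ..., k}
    return p[i - 1]

def pmf_one_hot(x, p):
    # 1-of-K formulation: x is a 0/1 vector with exactly one 1
    return float(np.prod(p ** x))

i = 2
x = np.array([0, 1, 0])          # 1-of-K encoding of category 2
print(pmf_integer(i, p))         # 0.5
print(pmf_one_hot(x, p))         # 0.5 (same value)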
Properties
- The distribution is completely given by the probabilities associated with each number i: p_i = P(X = i), i = 1, ..., k, where Σ_i p_i = 1. The possible sets of probabilities are exactly those in the standard (k − 1)-dimensional simplex; for k = 2 this reduces to the possible probabilities of the Bernoulli distribution being the 1-simplex, p_1 + p_2 = 1, 0 ≤ p_1 ≤ 1.
- The distribution is a special case of a "multivariate Bernoulli distribution" in which exactly one of the k 0-1 variables takes the value one.
- Let X be the realisation from a categorical distribution. Define the random vector Y as composed of the elements Y_i = [X = i], where [X = i] is the Iverson bracket. Then Y has a distribution which is a special case of the multinomial distribution with parameter n = 1.
- The conjugate prior distribution of a categorical distribution is a Dirichlet distribution. See the section below for more discussion.
- The sufficient statistic from n independent observations is the set of counts of observations in each category, where the total number of trials is fixed.
- The indicator function of an observation having a value i, equivalent to the Iverson bracket function [x = i] or the Kronecker delta function δ_{xi}, is Bernoulli-distributed with parameter p_i.
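The last property can be checked numerically; the following small sketch (illustrative only, using NumPy's default random generator) verifies that the indicator of category i behaves as a Bernoulli(p_i) variable:

import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.2, 0.5, 0.3])
k = len(p)

# Draw many categorical samples (labels 1..k) and check the indicator property
samples = rng.choice(np.arange(1, k + 1), size=100_000, p=p)
for i in range(1, k + 1):
    indicator = (samples == i).astype(int)   # Bernoulli(p_i) variable
    print(i, indicator.mean())               # should be close to p[i-1]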
Bayesian inference using conjugate prior
In Bayesian statistics, the Dirichlet distribution is the conjugate prior distribution of the categorical distribution (and also of the multinomial distribution). This means that if a data point has a categorical distribution with unknown parameter vector p, and p is itself given a Dirichlet prior distribution, then the posterior distribution of p after observing the data is also a Dirichlet distribution. Formally, this can be expressed as follows. Given a model

α = (α_1, ..., α_K) (concentration hyperparameters)
p | α = (p_1, ..., p_K) ~ Dir(K, α)
X | p = (x_1, ..., x_N) ~ Cat(K, p)

then the following holds:

c = (c_1, ..., c_K), where c_i is the number of occurrences of category i, i.e. c_i = Σ_{j=1}^{N} [x_j = i]
p | X, α ~ Dir(K, c + α) = Dir(K, c_1 + α_1, ..., c_K + α_K)
This relationship is used in Bayesian statistics to estimate the underlying parameter p of a categorical distribution given a collection of N samples. Intuitively, we can view the hyperparameter vector α as pseudocounts, i.e. as representing the number of observations in each category that we have already seen. Then we simply add in the counts for all the new observations (the vector c) in order to derive the posterior distribution.
Further intuition comes from the expected value of the posterior distribution:

E[p_i | X, α] = (c_i + α_i) / (N + Σ_j α_j)
This says that the expected probability of seeing a category i among the various discrete distributions generated by the posterior distribution is simply equal to the proportion of occurrences of that category actually seen in the data, including the pseudocounts in the prior distribution. This makes a great deal of intuitive sense: if, for example, there are three possible categories, and category 1 is seen in the observed data 40% of the time, one would expect on average to see category 1 40% of the time in the posterior distribution as well.
This is indeed what the posterior mean reveals. However, the true distribution that generated the data might actually be somewhat different from these average proportions; the posterior is a distribution over possible parameter vectors, and the amount of uncertainty it expresses is governed by its variance, which decreases as more observations are accumulated.
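The conjugate update and posterior mean described above can be sketched in a few lines of Python (illustrative only; the prior values and data below are made up for the example):

import numpy as np

alpha = np.array([1.0, 1.0, 1.0])       # Dirichlet prior (pseudocounts)
data = [1, 3, 1, 2, 1, 1, 3, 3, 1, 3]   # observed category labels in {1, 2, 3}

counts = np.array([data.count(i) for i in (1, 2, 3)])  # the count vector c
posterior_alpha = alpha + counts        # posterior is Dir(c + alpha)

# Posterior mean E[p_i | data] = (c_i + alpha_i) / (N + sum(alpha))
posterior_mean = posterior_alpha / posterior_alpha.sum()
print(counts, posterior_mean)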
MAP estimation
The maximum-a-posteriori estimate of the parameter p in the above model is simply the mode of the posterior Dirichlet distribution, i.e. (provided c_i + α_i > 1 for all i),

p̂_i = (c_i + α_i − 1) / (N + Σ_j α_j − K).

In many practical applications, the only way to guarantee the condition that c_i + α_i > 1 for all i is to set α_i > 1 for all i.
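A minimal sketch of this computation (illustrative; the prior and counts are assumed values chosen so that the mode exists):

import numpy as np

alpha = np.array([2.0, 2.0, 2.0])       # prior with all alpha_i > 1 so the mode exists
counts = np.array([5, 1, 4])            # observed counts c_i, N = 10
K = len(alpha)

posterior_alpha = counts + alpha
# Mode of Dir(c + alpha): (c_i + alpha_i - 1) / (N + sum(alpha) - K)
map_estimate = (posterior_alpha - 1) / (posterior_alpha.sum() - K)
print(map_estimate)                     # entries sum to 1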
Marginal likelihood
In the above model, the marginal likelihood of the observations (i.e. the joint distribution of the observations, with the parameter vector p marginalized out) is a Dirichlet-multinomial distribution:

p(X | α) = ∫_p p(X | p) p(p | α) dp = ( Γ(Σ_i α_i) / Γ(N + Σ_i α_i) ) ∏_i ( Γ(c_i + α_i) / Γ(α_i) ).

This distribution plays an important role in hierarchical Bayesian models, because when doing inference over such models using methods such as Gibbs sampling or variational Bayes, Dirichlet prior distributions are often marginalized out. See the article on this distribution for more details.
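The log of this marginal likelihood can be evaluated directly with the log-gamma function; a minimal sketch (illustrative values, using SciPy's gammaln):

import numpy as np
from scipy.special import gammaln

alpha = np.array([1.0, 1.0, 1.0])
counts = np.array([5, 1, 4])            # counts c_i of each category among N observations
N = counts.sum()

# log marginal likelihood of the observation sequence under a Dir(alpha) prior
log_ml = (gammaln(alpha.sum()) - gammaln(N + alpha.sum())
          + np.sum(gammaln(counts + alpha) - gammaln(alpha)))
print(log_ml)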
Posterior predictive distribution
The posterior predictive distribution of a new observation in the above model is the distribution that a new observation x̃ would take given the set X of N categorical observations. As shown in the Dirichlet-multinomial distribution article, it has a very simple form:

p(x̃ = i | X, α) = (c_i + α_i) / (N + Σ_j α_j) ∝ c_i + α_i.

There are various relationships among this formula and the previous ones:
- The posterior predictive probability of seeing a particular category is the same as the relative proportion of previous observations in that category. This makes logical sense — intuitively, we would expect to see a particular category according to the frequency already observed of that category.
- The posterior predictive probability is the same as the expected value of the posterior distribution. This is explained more below.
- As a result, this formula can be expressed as simply "the posterior predictive probability of seeing a category is proportional to the total observed count of that category", or as "the expected count of a category is the same as the total observed count of the category", where "observed count" is taken to include the pseudo-observations of the prior.
The reason for this equivalence between the posterior predictive probability and the expected value of the posterior distribution of p becomes clear once the predictive probability is written as an expectation over the posterior:

p(x̃ = i | X) = ∫_p p(x̃ = i | p) p(p | X) dp
= E_{p|X}[ p(x̃ = i | p) ]
= E_{p|X}[ p_i ]
= E[ p_i | X ].

The crucial line above is the third. The second follows directly from the definition of expected value. The third line is particular to the categorical distribution, and follows from the fact that, in the categorical distribution specifically, the expected value of seeing a particular value i is directly specified by the associated parameter p_i. The fourth line is simply a rewriting of the third in a different notation, using the notation farther up for an expectation taken with respect to the posterior distribution of the parameters.
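In code, the posterior predictive probabilities are just the normalized augmented counts; a minimal sketch with assumed values:

import numpy as np

alpha = np.array([1.0, 1.0, 1.0])
counts = np.array([5, 1, 4])             # observed counts per category, N = 10

# Posterior predictive probability of each category for the next observation:
# proportional to (observed count + pseudocount)
predictive = (counts + alpha) / (counts.sum() + alpha.sum())
print(predictive)                        # identical to the posterior mean of p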
Another way to understand this formula is to imagine observing the data points one by one, each time considering the predictive probability of a point before observing it and updating the posterior afterwards. For any given data point, the probability of that point assuming a given category depends on the number of data points already in that category. In this scenario, if a category has a high frequency of occurrence, then new data points are more likely to join that category, further enriching the same category. This type of scenario is often termed a preferential attachment model. It models many real-world processes, and in such cases the choices made by the first few data points have an outsize influence on the rest of the data points.
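A short simulation makes this rich-get-richer behaviour visible. The following sketch (illustrative only; it repeatedly samples from the current posterior predictive distribution) draws points one at a time:

import numpy as np

rng = np.random.default_rng(1)
alpha = np.array([1.0, 1.0, 1.0])        # prior pseudocounts
counts = np.zeros_like(alpha)

# Draw 1000 points one by one from the current posterior predictive distribution
for _ in range(1000):
    predictive = (counts + alpha) / (counts.sum() + alpha.sum())
    i = rng.choice(len(alpha), p=predictive)
    counts[i] += 1                        # categories that get ahead tend to stay ahead

print(counts)                             # typically very uneven across runs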
Posterior conditional distribution
In Gibbs sampling, one typically needs to draw from conditional distributions in multi-variable Bayesian networks where each variable is conditioned on all the others. In networks that include categorical variables with Dirichlet priors, the Dirichlet distributions are often "collapsed out" of the network, which introduces dependencies among the various categorical nodes dependent on a given prior. One of the reasons for doing this is that in such a case, the distribution of one categorical node given the others is exactly the posterior predictive distribution of the remaining nodes.

That is, for a set of nodes X = x_1, ..., x_N, if the node in question is denoted x_n and the remainder as X^(−n), then

p(x_n = i | X^(−n), α) ∝ c_i^(−n) + α_i,

where c_i^(−n) is the number of nodes having category i among the nodes other than node n.
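A single collapsed-Gibbs resampling step for one node might be sketched as follows (illustrative only; the assignments, prior, and variable names are assumptions made for this example):

import numpy as np

rng = np.random.default_rng(2)
alpha = np.array([1.0, 1.0, 1.0])
x = np.array([0, 2, 1, 0, 0, 2])          # current category assignments of all nodes
n = 3                                      # index of the node being resampled

# Counts over the other nodes, c_i^(-n)
counts_minus_n = np.bincount(np.delete(x, n), minlength=len(alpha))

# Conditional distribution: proportional to c_i^(-n) + alpha_i
weights = counts_minus_n + alpha
x[n] = rng.choice(len(alpha), p=weights / weights.sum())
print(x)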
Sampling
There are a number of methods, but the most common way to sample from a categorical distribution uses a type of inverse transform sampling. Assume a distribution is expressed as "proportional to" some expression, with unknown normalizing constant. Before taking any samples, one prepares some values as follows:
- Compute the unnormalized value of the distribution for each category.
- Sum them up and divide each value by this sum, in order to normalize them.
- Impose some sort of order on the categories.
- Convert the values to a cumulative distribution function (CDF) by replacing each value with the sum of all of the previous values. This can be done in time O(k). The resulting value for the first category will be 0.
- Pick a uniformly distributed number between 0 and 1.
- Locate the greatest number in the CDF whose value is less than or equal to the number just chosen. This can be done in time O(log(k)), by binary search.
- Return the category corresponding to this CDF value.
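These steps can be sketched in Python as follows (a minimal illustration of the procedure above; the function name sample_categorical and the use of NumPy are choices made here, not part of the original article):

import numpy as np

def sample_categorical(unnormalized, rng):
    # Steps 1-4: normalize and build the CDF (the value for the first category is 0)
    weights = np.asarray(unnormalized, dtype=float)
    probs = weights / weights.sum()
    cdf = np.concatenate(([0.0], np.cumsum(probs)[:-1]))

    # Steps 5-7: draw a uniform number and find the last CDF entry <= it (binary search)
    u = rng.random()
    return int(np.searchsorted(cdf, u, side='right') - 1)

rng = np.random.default_rng(3)
print([sample_categorical([2.0, 5.0, 3.0], rng) for _ in range(10)])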
If many values need to be drawn from the same categorical distribution, the following approach draws n samples at once:

function draw_categorical(n) // where n is the number of samples to draw from the categorical distribution
  r = 1
  s = 0
  for i from 1 to k // where k is the number of categories
    v = draw from a binomial(n, p[i] / r) distribution // where p[i] is the probability of category i
    for j from 1 to v
      z[s++] = i // where z is an array in which the results are stored
    n = n - v
    r = r - p[i]
  shuffle (randomly re-order) the elements in z
  return z
Sampling via the Gumbel distribution
In machine learning it is typical to parametrize the categorical distribution p = (p_1, ..., p_k) via an unconstrained representation in R^k, whose components are given by:

γ_i = log p_i + C,

where C is any real constant. Given this representation, p can be recovered using the softmax function, which can then be sampled using the techniques described above. There is however a more direct sampling method that uses samples from the Gumbel distribution. Let g_1, ..., g_k be k independent draws from the standard Gumbel distribution; then

c = argmax_i (γ_i + g_i)

will be a sample from the desired categorical distribution.
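A compact sketch of this Gumbel-based sampler (illustrative only; it relies on NumPy's standard Gumbel generator and sets the arbitrary constant C to 0):

import numpy as np

rng = np.random.default_rng(4)
p = np.array([0.2, 0.5, 0.3])
gamma = np.log(p)                          # unconstrained parameters (C = 0)

def gumbel_max_sample(gamma, rng):
    g = rng.gumbel(size=gamma.shape)       # independent standard Gumbel draws
    return int(np.argmax(gamma + g))       # argmax is distributed as Categorical(p)

samples = [gumbel_max_sample(gamma, rng) for _ in range(100_000)]
print(np.bincount(samples) / len(samples)) # empirical frequencies close to p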
Related distributions
- Dirichlet distribution
- Multinomial distribution
- Bernoulli distribution
- Dirichlet-multinomial distribution