Cumulative frequency analysis
Cumulative frequency analysis is the analysis of the frequency of occurrence of values of a phenomenon less than a reference value. The phenomenon may be time- or space-dependent. Cumulative frequency is also called frequency of non-exceedance.
Cumulative frequency analysis is performed to obtain insight into how often a certain phenomenon is below a certain value. This may help in describing or explaining a situation in which the phenomenon is involved, or in planning interventions, for example in flood protection.
This statistical technique can be used to estimate how likely an event such as a flood is to occur again in the future, based on how often it occurred in the past. It can be adapted to account for factors such as climate change causing wetter winters and drier summers.
Principles
Definitions
Frequency analysis is the analysis of how often, or how frequently, an observed phenomenon occurs in a certain range. Frequency analysis applies to a record of length N of observed data X1, X2, X3, ..., XN on a variable phenomenon X. The record may be time-dependent or space-dependent or otherwise.
The cumulative frequency MXr of a reference value Xr is the frequency with which the observed values X are less than or equal to Xr.
The relative cumulative frequency Fc can be calculated from:
- Fc = MXr / N
where N is the number of data.
Briefly this expression can be noted as:
- Fc = M / N
When Xr = Xmin, where Xmin is the unique minimum value observed, it is found that Fc = 1/N, because M = 1. On the other hand, when Xr = Xmax, where Xmax is the unique maximum value observed, it is found that Fc = 1, because M = N. Hence, when Fc = 1 this signifies that Xr is a value whereby all data are less than or equal to Xr.
In percentage the equation reads:
- Fc(%) = 100 · M / N
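As a minimal, hypothetical illustration of these definitions (the record of values below is invented for the example, not taken from the article), the cumulative frequency can be computed as follows:

```python
def cumulative_frequency(data, x_ref):
    """Relative cumulative frequency Fc = M / N, where M counts the
    observations X less than or equal to the reference value Xr."""
    m = sum(1 for x in data if x <= x_ref)   # cumulative frequency M
    return m / len(data)                     # Fc = M / N

record = [12, 7, 23, 15, 9, 31, 18, 5, 27, 14]   # hypothetical record, N = 10
print(cumulative_frequency(record, 15))   # 0.6  (6 of 10 values <= 15)
print(cumulative_frequency(record, 31))   # 1.0  (Xr = Xmax, so M = N)
```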
Probability estimate
From cumulative frequency
The cumulative probability Pc of X being smaller than or equal to Xr can be estimated in several ways on the basis of the cumulative frequency M. One way is to use the relative cumulative frequency Fc as an estimate:
- Pc = M / N
Another way is to take into account the possibility that in rare cases X may assume values larger than the observed maximum Xmax. This can be done by dividing the cumulative frequency M by N + 1 instead of N. The estimate then becomes:
- Pc = M / (N + 1)
Other proposals for the denominator also exist.
By ranking technique
The estimation of probability is made easier by ranking the data. When the observed data of X are arranged in ascending order, and Ri is the rank number of the observation Xi, where the subscript i indicates the serial number in the range of ascending data, then the cumulative probability may be estimated by:
- Pc = Ri / (N + 1)
When, on the other hand, the observed data of X are arranged in descending order, the maximum first and the minimum last, and Rj is the rank number of the observation Xj, the cumulative probability may be estimated by:
- Pc = 1 − Rj / (N + 1)
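A short sketch of the ascending ranking estimate, using the same hypothetical record as above; the descending form gives identical values, and ties are not treated specially here:

```python
def ranked_probabilities(data):
    """Ranking-technique estimate Pc = Ri / (N + 1), with Ri the
    rank of each observation in ascending order (1 = smallest)."""
    n = len(data)
    return [(x, rank / (n + 1))
            for rank, x in enumerate(sorted(data), start=1)]

record = [12, 7, 23, 15, 9, 31, 18, 5, 27, 14]   # hypothetical record
for x, pc in ranked_probabilities(record):
    print(f"X = {x:2d}   Pc = {pc:.3f}")
```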
Fitting of probability distributions
Continuous distributions
To present the cumulative frequency distribution as a continuous mathematical equation instead of a discrete set of data, one may try to fit the cumulative frequency distribution to a known cumulative probability distribution. If successful, the known equation is enough to report the frequency distribution and a table of data will not be required. Further, the equation helps interpolation and extrapolation. However, care should be taken with extrapolating a cumulative frequency distribution, because this may be a source of errors. One possible error is that the frequency distribution no longer follows the selected probability distribution beyond the range of the observed data.
Any equation that gives the value 1 when integrated from a lower limit to an upper limit agreeing well with the data range, can be used as a probability distribution for fitting. A sample of probability distributions that may be used can be found in probability distributions.
Probability distributions can be fitted by several methods, for example:
- the parametric method, determining the parameters like mean and standard deviation from the X data using the method of moments, the maximum likelihood method and the method of probability weighted moments.
- the regression method, linearizing the probability distribution through transformation and determining the parameters from a linear regression of the transformed Pc on the transformed X data.
Distributions commonly used for fitting include the normal distribution, the lognormal distribution, the logistic distribution, the loglogistic distribution, the exponential distribution, the Fréchet distribution, the Gumbel distribution, the Pareto distribution, the Weibull distribution, and others.
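As one possible illustration of the regression method, the sketch below linearizes the Gumbel distribution, F(x) = exp(−exp(−(x − μ)/β)), and estimates μ and β by least squares on the reduced variate; the choice of the Gumbel distribution and the plotting position Ri/(N + 1) are assumptions made for this example, and the record is again hypothetical:

```python
import math

def fit_gumbel_by_regression(data):
    """Linearized fit of F(x) = exp(-exp(-(x - mu)/beta)):
    x = mu + beta * y, with reduced variate y = -ln(-ln(Pc))
    and plotting position Pc = Ri / (N + 1)."""
    xs = sorted(data)
    n = len(xs)
    ys = [-math.log(-math.log(r / (n + 1))) for r in range(1, n + 1)]
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # least-squares regression of x on y: slope = beta, intercept = mu
    beta = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
            / sum((y - mean_y) ** 2 for y in ys))
    mu = mean_x - beta * mean_y
    return mu, beta

record = [12, 7, 23, 15, 9, 31, 18, 5, 27, 14]   # hypothetical record
mu, beta = fit_gumbel_by_regression(record)
print(f"mu = {mu:.2f}, beta = {beta:.2f}")
```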
Discontinuous distributions
Sometimes it is possible to fit one type of probability distribution to the lower part of the data range and another type to the higher part, separated by a breakpoint, whereby the overall fit is improved. The figure gives an example of a useful introduction of such a discontinuous distribution for rainfall data in northern Peru, where the climate is subject to the behavior of the Pacific Ocean current El Niño. When El Niño extends to the south of Ecuador and enters the ocean along the coast of Peru, the climate in northern Peru becomes tropical and wet. When El Niño does not reach Peru, the climate is semi-arid. For this reason, the higher rainfalls follow a different frequency distribution than the lower rainfalls.
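One way such a discontinuous fit can be assembled is sketched below; the breakpoint, the cumulative probability assigned to it, and the exponential/Gumbel pieces with their parameters are all hypothetical placeholders, not the fitted values of the Peruvian rainfall study:

```python
import math

xb, p_break = 50.0, 0.7   # hypothetical breakpoint and its cumulative probability

def cdf_low(x, lam=0.05):
    """Exponential CDF rescaled so that cdf_low(xb) = 1 (lower part)."""
    return (1 - math.exp(-lam * x)) / (1 - math.exp(-lam * xb))

def cdf_high(x, mu=80.0, beta=15.0):
    """Gumbel CDF rescaled so that cdf_high(xb) = 0 (upper part)."""
    g = lambda v: math.exp(-math.exp(-(v - mu) / beta))
    return (g(x) - g(xb)) / (1 - g(xb))

def composite_cdf(x):
    """Overall CDF: the two pieces join continuously at the breakpoint."""
    if x <= xb:
        return p_break * cdf_low(x)
    return p_break + (1 - p_break) * cdf_high(x)

print(composite_cdf(30.0), composite_cdf(50.0), composite_cdf(100.0))
```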
Prediction
Uncertainty
When a cumulative frequency distribution is derived from a record of data, it can be questioned whether it can be used for predictions. For example, given a distribution of river discharges for the years 1950–2000, can this distribution be used to predict how often a certain river discharge will be exceeded in the years 2000–2050? The answer is yes, provided that the environmental conditions do not change. If the environmental conditions do change, such as alterations in the infrastructure of the river's watershed or in the rainfall pattern due to climatic changes, the prediction on the basis of the historical record is subject to a systematic error.
Even when there is no systematic error, there may be a random error, because by chance the observed discharges during 1950–2000 may have been higher or lower than normal, while the discharges from 2000 to 2050 may by chance be lower or higher than normal. Issues around this have been explored in the book The Black Swan.
Confidence intervals
Confidence intervals can help to estimate the range in which the random error may lie. In the case of cumulative frequency there are only two possibilities: a certain reference value X is exceeded or it is not exceeded. The sum of the frequency of exceedance and the cumulative frequency is 1 or 100%. Therefore, the binomial distribution can be used in estimating the range of the random error.
Using the normal approximation to the binomial distribution, for large N the standard deviation Sd can be calculated as follows:
- Sd = √(Pc (1 − Pc) / N)
where Pc is the cumulative probability and N is the number of data. It is seen that the standard deviation Sd decreases as the number of observations N increases.
The determination of the confidence interval of Pc makes use of Student's t-test. The value of t depends on the number of data and the confidence level of the estimate of the confidence interval. The lower (L) and upper (U) confidence limits of Pc in a symmetrical distribution are then found from:
- L = Pc − t·Sd
- U = Pc + t·Sd
This is known as the Wald interval.
However, the binomial distribution is only symmetrical around the mean when Pc = 0.5; it becomes asymmetrical and increasingly skewed as Pc approaches 0 or 1. Therefore, by approximation, Pc and 1 − Pc can be used as weight factors in the assignment of t·Sd to L and U:
- L = Pc − 2·Pc·t·Sd
- U = Pc + 2·(1 − Pc)·t·Sd
It can be seen that for Pc = 0.5 these expressions reduce to the previous ones.
Example: N = 25, Pc = 0.8, confidence level 90%, so t = 1.71 and Sd = 0.08; then L = 0.58 and U = 0.85. Thus, with 90% confidence, it is found that 0.58 < Pc < 0.85. Still, there is a 10% chance that Pc < 0.58 or Pc > 0.85.
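The worked example can be checked with a short computation; the value t = 1.71 is taken from a Student's t table as in the text:

```python
import math

def asymmetric_wald_interval(pc, n, t):
    """Wald-type interval with Pc and 1 - Pc as weight factors:
    Sd = sqrt(Pc*(1 - Pc)/N), L = Pc - 2*Pc*t*Sd, U = Pc + 2*(1 - Pc)*t*Sd."""
    sd = math.sqrt(pc * (1 - pc) / n)
    return sd, pc - 2 * pc * t * sd, pc + 2 * (1 - pc) * t * sd

sd, lower, upper = asymmetric_wald_interval(pc=0.8, n=25, t=1.71)
print(f"Sd = {sd:.2f}, L = {lower:.2f}, U = {upper:.2f}")
# -> Sd = 0.08, L = 0.58, U = 0.85
```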
Notes
- The Wald interval is known to perform poorly.
- The Wilson score interval provides a confidence interval for binomial distributions based on score tests and has better sample coverage; see binomial proportion confidence interval for a more detailed overview.
- Instead of the "Wilson score interval" the "Wald interval" can also be used provided the above weight factors are included.
Return period
The probability of exceedance Pe and the return period T follow from the cumulative probability Pc as:
- Pe = 1 − Pc
- T = 1/Pe
The upper and lower confidence limits of return periods can be found respectively as:
- TU = 1 / (1 − U)
- TL = 1 / (1 − L)
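Continuing the numerical example above (Pc = 0.8, L = 0.58, U = 0.85), the return period and its confidence limits follow directly:

```python
def return_period_with_limits(pc, lower, upper):
    """T = 1/Pe with Pe = 1 - Pc; limits TL = 1/(1 - L), TU = 1/(1 - U)."""
    return 1 / (1 - lower), 1 / (1 - pc), 1 / (1 - upper)

tl, t_est, tu = return_period_with_limits(pc=0.8, lower=0.58, upper=0.85)
print(f"TL = {tl:.1f}, T = {t_est:.1f}, TU = {tu:.1f}")
# -> TL = 2.4, T = 5.0, TU = 6.7
```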
The strict notion of return period actually has a meaning only when it concerns a time-dependent phenomenon, like point rainfall. The return period then corresponds to the expected waiting time until the exceedance occurs again. The return period has the same dimension as the time for which each observation is representative. For example, when the observations concern daily rainfalls, the return period is expressed in days, and for yearly rainfalls it is in years.
Need for confidence belts
The figure shows the variation that may occur when obtaining samples of a variate that follows a certain probability distribution. The data were provided by Benson. The confidence belt around an experimental cumulative frequency or return period curve gives an impression of the region in which the true distribution may be found.
Also, it clarifies that the experimentally found best fitting probability distribution may deviate from the true distribution.
Histogram
The observed data can be arranged in classes or groups with serial number k. Each group has a lower limit (Lk) and an upper limit (Uk). When the class k contains mk data and the total number of data is N, then the relative class or group frequency is found from:
- Fg(Lk < X ≤ Uk) = mk / N
or briefly:
- Fgk = mk / N
or in percentage:
- Fg(%) = 100 · mk / N
The histogram can also be derived from the fitted cumulative probability distribution:
- Pgk = Pc(Uk) − Pc(Lk)
where Pc(Uk) and Pc(Lk) are the cumulative probabilities at the upper and lower limits of class k.
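A small sketch showing both routes to the class frequencies, with a hypothetical record and hypothetical class limits; any fitted CDF can be passed in for the second route:

```python
def histogram_frequencies(data, limits):
    """Relative class frequencies Fgk = mk / N for classes (Lk, Uk]
    defined by consecutive pairs of class limits."""
    n = len(data)
    return [sum(1 for x in data if lo < x <= hi) / n
            for lo, hi in zip(limits, limits[1:])]

def histogram_from_cdf(cdf, limits):
    """Class probabilities from a fitted CDF: Pgk = Pc(Uk) - Pc(Lk)."""
    return [cdf(hi) - cdf(lo) for lo, hi in zip(limits, limits[1:])]

record = [12, 7, 23, 15, 9, 31, 18, 5, 27, 14]   # hypothetical record
limits = [0, 10, 20, 30, 40]                     # hypothetical class limits
print(histogram_frequencies(record, limits))     # [0.3, 0.4, 0.2, 0.1]
```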
Often it is desired to combine the histogram with a probability density function as depicted in the black and white picture.