Statistics, like all mathematical disciplines, does not infer valid conclusions from nothing. Inferring interesting conclusions about real statistical populations almost always requires some background assumptions. Those assumptions must be made carefully, because incorrect assumptions can generate wildly inaccurate conclusions.
Here are some examples of statistical assumptions:
- Independence of observations from each other (unwarranted assumptions of independence are an especially common error).
- Independence of observational error from potential confounding effects.
- Exact or approximate normality of observations (or errors).
- Linearity of graded responses to quantitative stimuli, e.g. in linear regression.
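The independence assumption at the top of this list can be probed empirically. The following sketch (the function and the simulated data are illustrative, not taken from any particular library) uses a crude diagnostic: the lag-1 sample autocorrelation of genuinely independent observations should be near zero, whereas serially dependent data, such as a random walk, show strong autocorrelation.

```python
import random
import statistics

def lag1_autocorrelation(xs):
    """Lag-1 sample autocorrelation: a crude probe of the
    independence assumption (near zero for independent data)."""
    mean = statistics.fmean(xs)
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(len(xs) - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

random.seed(0)
# Independent observations: each value is drawn afresh.
independent = [random.gauss(0, 1) for _ in range(5000)]
# A random walk violates independence: each value depends on the previous one.
walk = []
for step in independent:
    walk.append((walk[-1] if walk else 0.0) + step)

print(round(lag1_autocorrelation(independent), 2))  # near 0
print(round(lag1_autocorrelation(walk), 2))         # near 1
```

A diagnostic like this can only flag a violation; a value near zero does not prove independence, since dependence can hide at other lags or in nonlinear form.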
Classes of assumptions
There are two approaches to statistical inference: model-based inference and design-based inference. Both approaches rely on some statistical model to represent the data-generating process. In the model-based approach, the model is taken to be initially unknown, and one of the goals is to select an appropriate model for inference. In the design-based approach, the model is taken to be known, and one of the goals is to ensure that the sample data are selected sufficiently randomly for inference.
Statistical assumptions can be put into two classes, depending upon which approach to inference is used.
- Model-based assumptions. These include the following three types:
- Distributional assumptions. Where a statistical model involves terms relating to random errors, assumptions may be made about the probability distribution of these errors. In some cases, the distributional assumption relates to the observations themselves.
- Structural assumptions. Statistical relationships between variables are often modelled by equating one variable to a function of another (or several others), plus a random error. Models often involve making a structural assumption about the form of the functional relationship, e.g. as in linear regression. This can be generalised to models involving relationships between underlying unobserved latent variables.
- Cross-variation assumptions. These assumptions involve the joint probability distributions of either the observations themselves or the random errors in a model. Simple models may include the assumption that observations or errors are statistically independent.
- Design-based assumptions. These relate to the way observations have been gathered, and often involve an assumption of randomization during sampling.
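The three types of model-based assumptions can be seen together in a simple linear regression model. The sketch below (all names and parameter values are illustrative) encodes a structural assumption (the response is linear in the stimulus), a distributional assumption (errors are normal), and a cross-variation assumption (errors are independent across observations), then checks that ordinary least squares recovers the assumed structure.

```python
import random
import statistics

random.seed(1)
# Structural assumption: y = beta0 + beta1 * x + error (linear in x).
# Distributional assumption: errors are Normal(0, sigma).
# Cross-variation assumption: errors are drawn independently.
beta0, beta1, sigma = 2.0, 0.5, 1.0
xs = [float(i) for i in range(200)]
ys = [beta0 + beta1 * x + random.gauss(0, sigma) for x in xs]

# Ordinary least squares estimates of the intercept and slope.
xbar, ybar = statistics.fmean(xs), statistics.fmean(ys)
b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
     sum((x - xbar) ** 2 for x in xs)
b0 = ybar - b1 * xbar
print(round(b0, 1), round(b1, 2))  # intercept near 2.0, slope near 0.5
```

If any of the three assumptions fails (say, the errors are serially correlated, or the true relationship is curved), the estimates or their stated uncertainties can be badly misleading even though the fitting procedure runs without complaint.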
The model-based approach is the more commonly used of the two; the design-based approach is used mainly with survey sampling. With the model-based approach, all the assumptions are effectively encoded in the model.
Given that the validity of any conclusion drawn from a statistical inference depends on the validity of the assumptions made, it is clearly important that those assumptions be reviewed at some stage. Some instances, for example where data are lacking, may require that researchers judge whether an assumption is reasonable; researchers can extend this somewhat by considering what effect a departure from the assumptions would produce. Where more extensive data are available, various procedures for statistical model validation can be applied, e.g. regression model validation.
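One common validation procedure for regression examines the residuals of a fitted model: if the structural assumption holds, the residuals should show no systematic pattern. The sketch below (a minimal illustration, with invented data and an invented helper name) fits a straight line to two data sets and correlates the residuals with a quadratic term; detectable curvature in the residuals signals that the straight-line assumption is wrong.

```python
import random
import statistics

def ols(xs, ys):
    """Fit y = b0 + b1*x by ordinary least squares."""
    xbar, ybar = statistics.fmean(xs), statistics.fmean(ys)
    b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
         sum((x - xbar) ** 2 for x in xs)
    return ybar - b1 * xbar, b1

def pearson(a, b):
    """Sample correlation coefficient, computed directly."""
    abar, bbar = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - abar) * (y - bbar) for x, y in zip(a, b))
    den = (sum((x - abar) ** 2 for x in a) *
           sum((y - bbar) ** 2 for y in b)) ** 0.5
    return num / den

def curvature_in_residuals(xs, ys):
    """Validation check: correlate the residuals of a straight-line fit
    with (x - xbar)^2. Near zero when the linear assumption holds."""
    b0, b1 = ols(xs, ys)
    resid = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
    xbar = statistics.fmean(xs)
    return pearson(resid, [(x - xbar) ** 2 for x in xs])

random.seed(2)
xs = [x / 10 for x in range(-50, 51)]
linear_y = [1.0 + 2.0 * x + random.gauss(0, 0.5) for x in xs]
quadratic_y = [1.0 + 2.0 * x + 0.8 * x * x + random.gauss(0, 0.5) for x in xs]

print(round(curvature_in_residuals(xs, linear_y), 2))     # near 0
print(round(curvature_in_residuals(xs, quadratic_y), 2))  # near 1
```

This is one narrow check among many; practical model validation also examines, for example, whether the residuals look normally distributed, have constant variance, and are mutually independent.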