[[File:Biologist and statistician Ronald Fisher.jpg|thumb|right|200px|Biologist and statistician Ronald Fisher]]
'''Fiducial inference''' is one of a number of different types of [[statistical inference]]. These are rules, intended for general application, by which conclusions can be drawn from [[Sample (statistics)|samples]] of data. In modern statistical practice, attempts to work with fiducial inference have fallen out of fashion in favour of [[frequentist inference]], [[Bayesian inference]] and [[decision theory]]. However, fiducial inference is important in the [[history of statistics]] since its development led to the parallel development of concepts and tools in [[theoretical statistics]] that are widely used. Some current research in statistical methodology is either explicitly linked to fiducial inference or is closely connected to it.
==The fiducial distribution==
Fisher required the existence of a [[sufficient statistic]] for the fiducial method to apply. Suppose there is a single sufficient statistic for a single parameter. That is, suppose that the [[conditional distribution]] of the data given the statistic does not depend on the value of the parameter. For example, suppose that ''n'' independent observations are uniformly distributed on the interval <math>[0,\omega]</math>. The maximum, ''X'', of the ''n'' observations is a [[sufficient statistic]] for ω. If only ''X'' is recorded and the values of the remaining observations are forgotten, these remaining observations are equally likely to have had any values in the interval <math>[0,X]</math>. This statement does not depend on the value of ω. Then ''X'' contains all the available information about ω and the other observations could have given no further information.
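The sufficiency claim above can be checked by simulation. The sketch below (not part of Fisher's argument; the function name ``ratios_given_max`` is purely illustrative) pools the non-maximum observations divided by the sample maximum ''X''. If the conditional distribution of the data given ''X'' is uniform on <math>[0,X]</math> whatever the value of ω, these ratios should behave like draws from a uniform distribution on [0,&nbsp;1] for every choice of ω, so their mean should be close to 0.5 in each case.

```python
import random

random.seed(0)

def ratios_given_max(n, omega, trials=20000):
    """Pool the non-maximum observations divided by the sample maximum.

    Given the maximum X, the remaining n-1 observations should be
    independent uniform draws on [0, X], so these ratios should be
    approximately uniform on [0, 1] regardless of omega.
    """
    out = []
    for _ in range(trials):
        sample = [random.uniform(0, omega) for _ in range(n)]
        x = max(sample)
        out.extend(v / x for v in sample if v != x)
    return out

# The mean of the pooled ratios is close to 0.5 for very different omega,
# illustrating that the conditional distribution does not depend on omega.
means = {}
for omega in (1.0, 7.0):
    r = ratios_given_max(5, omega)
    means[omega] = sum(r) / len(r)
    print(omega, round(means[omega], 3))
```

That the means agree across values of ω is exactly the sense in which the remaining observations "could have given no further information" about ω once ''X'' is known.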
The [[cumulative distribution function]] of ''X'' is
:<math>F(x) = \Pr(X \le x) = \left(\frac{x}{\omega}\right)^n, \qquad 0 \le x \le \omega,</math>
since the maximum is at most ''x'' exactly when all ''n'' independent observations are at most ''x''.

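The formula <math>\Pr(X \le x) = (x/\omega)^n</math> can likewise be verified empirically. The following minimal sketch (the specific values of ''n'', ω and the check points ''x'' are arbitrary choices for illustration) compares the empirical distribution of simulated maxima with the theoretical curve.

```python
import random

random.seed(1)

# Simulate the maximum of n = 5 uniform observations on [0, omega].
n, omega, trials = 5, 3.0, 50000
maxima = [max(random.uniform(0, omega) for _ in range(n))
          for _ in range(trials)]

# Compare the empirical CDF with the theoretical (x / omega)^n
# at a few arbitrary points x in [0, omega].
gaps = []
for x in (1.0, 2.0, 2.5):
    empirical = sum(m <= x for m in maxima) / trials
    theoretical = (x / omega) ** n
    gaps.append(abs(empirical - theoretical))
    print(x, round(empirical, 3), round(theoretical, 3))
```

With 50,000 trials the empirical and theoretical values agree to within a few thousandths at each check point.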