# Location parameter

In statistics, a location parameter of a probability distribution is a scalar- or vector-valued parameter ${\displaystyle x_{0}}$, which determines the "location" or shift of the distribution. In the literature on location parameter estimation, probability distributions with such a parameter are formally defined in one of the following equivalent ways (a numerical illustration follows the list):

• either as having a probability density function or probability mass function ${\displaystyle f(x-x_{0})}$[1]; or
• having a cumulative distribution function ${\displaystyle F(x-x_{0})}$[2]; or
• being defined as resulting from the random variable transformation ${\displaystyle x_{0}+X}$, where ${\displaystyle X}$ is a random variable with a certain, possibly unknown, distribution[3] (see also the additive noise formulation below).
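The three definitions can also be checked numerically. The following sketch (assuming NumPy and SciPy are available, and using a standard normal as the otherwise arbitrary base distribution) shifts the distribution by ${\displaystyle x_{0}}$ and verifies each characterization; the variable names are illustrative only.

```python
import numpy as np
from scipy.stats import norm

x0 = 2.5                          # location parameter
x = np.linspace(-5.0, 10.0, 101)  # evaluation grid

base = norm()           # X: base random variable (standard normal here)
shifted = norm(loc=x0)  # same distribution shifted by x0

# 1) probability density function is f(x - x0)
assert np.allclose(shifted.pdf(x), base.pdf(x - x0))
# 2) cumulative distribution function is F(x - x0)
assert np.allclose(shifted.cdf(x), base.cdf(x - x0))
# 3) the transformation x0 + X has the shifted distribution (checked via the mean)
rng = np.random.default_rng(0)
samples = x0 + base.rvs(size=100_000, random_state=rng)
assert abs(samples.mean() - shifted.mean()) < 0.05
```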

A direct example of a location parameter is the parameter ${\displaystyle \mu }$ of the normal distribution. To see this, note that the p.d.f. (probability density function) ${\displaystyle f(x|\mu ,\sigma )}$ of a normal distribution ${\displaystyle {\mathcal {N}}(\mu ,\sigma ^{2})}$ can have the parameter ${\displaystyle \mu }$ factored out: setting ${\displaystyle y=x-\mu }$, it can be written as

${\displaystyle g(y|\sigma )={\frac {1}{\sigma {\sqrt {2\pi }}}}e^{-{\frac {1}{2}}\left({\frac {y}{\sigma }}\right)^{2}}}$

so that ${\displaystyle f(x|\mu ,\sigma )=g(x-\mu |\sigma )}$, thus fulfilling the first of the definitions given above.
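As a sanity check on this factorization, here is a minimal sketch (the helper names normal_pdf and g below are purely illustrative) confirming that the normal density at ${\displaystyle x}$ equals the zero-mean form evaluated at ${\displaystyle x-\mu }$:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    """Density f(x | mu, sigma) of N(mu, sigma^2)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def g(y, sigma):
    """Density of N(0, sigma^2), i.e. the family member with the location factored out."""
    return np.exp(-0.5 * (y / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-4.0, 8.0, 200)
mu, sigma = 2.0, 1.5
assert np.allclose(normal_pdf(x, mu, sigma), g(x - mu, sigma))
```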

The above definitions indicate that, in the one-dimensional case, increasing ${\displaystyle x_{0}}$ shifts the probability density or mass function rigidly to the right, maintaining its exact shape.

A location parameter can also be found in families having more than one parameter, such as location–scale families. In this case, the probability density function or probability mass function will be a special case of the more general form

${\displaystyle f_{x_{0},\theta }(x)=f_{\theta }(x-x_{0})}$

where ${\displaystyle x_{0}}$ is the location parameter, θ represents additional parameters, and ${\displaystyle f_{\theta }}$ is a function parametrized by the additional parameters.
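For instance, taking the base density to be a Cauchy density with scale parameter ${\displaystyle \theta }$, the following minimal sketch (assuming SciPy's cauchy family; the parameter values are arbitrary) checks the general form above:

```python
import numpy as np
from scipy.stats import cauchy

x0, theta = 1.0, 3.0               # location and additional (scale) parameter
x = np.linspace(-10.0, 10.0, 101)  # evaluation grid

f_theta = cauchy(scale=theta)             # f_theta, with no location shift
f_x0_theta = cauchy(loc=x0, scale=theta)  # f_{x0, theta}

# f_{x0, theta}(x) = f_theta(x - x0)
assert np.allclose(f_x0_theta.pdf(x), f_theta.pdf(x - x0))
```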

An alternative way of thinking of location families is through the concept of additive noise. If ${\displaystyle x_{0}}$  is a constant and W is random noise with probability density ${\displaystyle f_{W}(w),}$  then ${\displaystyle X=x_{0}+W}$  has probability density ${\displaystyle f_{x_{0}}(x)=f_{W}(x-x_{0})}$  and its distribution is therefore part of a location family.
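A brief simulation of this view (standard-normal noise is assumed purely for illustration): samples of ${\displaystyle X=x_{0}+W}$ have an empirical density close to ${\displaystyle f_{W}(x-x_{0})}$.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
x0 = 4.0
W = rng.standard_normal(200_000)  # noise with density f_W (standard normal here)
X = x0 + W                        # shifted random variable

# Compare a histogram-based density estimate of X with f_W(x - x0) on a coarse grid.
edges = np.linspace(0.0, 8.0, 41)
hist, _ = np.histogram(X, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
assert np.allclose(hist, norm.pdf(centers - x0), atol=0.02)
```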

## Proofs

For the continuous univariate case, consider a probability density function ${\displaystyle f(x|\theta ),x\in [a,b]\subset \mathbb {R} }$ , where ${\displaystyle \theta }$  is a vector of parameters. A location parameter ${\displaystyle x_{0}}$  can be added by defining:

${\displaystyle g(x|\theta ,x_{0})=f(x-x_{0}|\theta ),\;x\in [a+x_{0},b+x_{0}]}$

It can be proved that ${\displaystyle g}$ is a p.d.f. by verifying that it satisfies the two conditions[4] ${\displaystyle g(x|\theta ,x_{0})\geq 0}$ and ${\displaystyle \int _{-\infty }^{\infty }g(x|\theta ,x_{0})dx=1}$. ${\displaystyle g}$ integrates to 1 because:

${\displaystyle \int _{-\infty }^{\infty }g(x|\theta ,x_{0})dx=\int _{a+x_{0}}^{b+x_{0}}g(x|\theta ,x_{0})dx=\int _{a+x_{0}}^{b+x_{0}}f(x-x_{0}|\theta )dx}$

now making the variable change ${\displaystyle u=x-x_{0}}$  and updating the integration interval accordingly yields:

${\displaystyle \int _{a}^{b}f(u|\theta )du=1}$

because ${\displaystyle f(x|\theta )}$ is a p.d.f. by hypothesis. ${\displaystyle g(x|\theta ,x_{0})\geq 0}$ follows because ${\displaystyle g}$ takes the same values as ${\displaystyle f}$, which is non-negative everywhere since it is a p.d.f.
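As a numerical companion to this argument (a sketch assuming SciPy; the Beta(2, 5) base density on ${\displaystyle [0,1]}$ and the shift ${\displaystyle x_{0}=3}$ are arbitrary choices), the shifted density still integrates to 1 over its shifted support:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import beta

a, b, x0 = 0.0, 1.0, 3.0  # support [a, b] of the base density, and the shift
f = beta(2, 5).pdf        # f(x | theta), supported on [0, 1]

def g(x):
    """g(x | theta, x0) = f(x - x0 | theta)."""
    return f(x - x0)

total, _ = quad(g, a + x0, b + x0)  # integrate over the shifted support [a + x0, b + x0]
assert np.isclose(total, 1.0)
```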