Inverse-Wishart distribution

In statistics, the inverse Wishart distribution, also called the inverted Wishart distribution, is a probability distribution defined on real-valued positive-definite matrices. In Bayesian statistics it is used as the conjugate prior for the covariance matrix of a multivariate normal distribution.

Inverse-Wishart
Notation: $\mathcal{W}^{-1}(\mathbf{\Psi}, \nu)$
Parameters: $\nu > p - 1$ degrees of freedom (real); $\mathbf{\Psi} > 0$ scale matrix ($p \times p$, positive definite)
Support: $\mathbf{X}$ is p × p positive definite
PDF: $\frac{|\mathbf{\Psi}|^{\nu/2}}{2^{\nu p/2}\Gamma_p(\nu/2)}\, |\mathbf{X}|^{-(\nu+p+1)/2}\, e^{-\frac{1}{2}\operatorname{tr}(\mathbf{\Psi}\mathbf{X}^{-1})}$
Mean: $\frac{\mathbf{\Psi}}{\nu - p - 1}$, for $\nu > p + 1$
Mode: $\frac{\mathbf{\Psi}}{\nu + p + 1}$[1]:406
Variance: see below

We say $\mathbf{X}$ follows an inverse Wishart distribution, denoted as $\mathbf{X} \sim \mathcal{W}^{-1}(\mathbf{\Psi}, \nu)$, if its inverse $\mathbf{X}^{-1}$ has a Wishart distribution $\mathcal{W}(\mathbf{\Psi}^{-1}, \nu)$. Important identities have been derived for the inverse-Wishart distribution.[2]

Density

The probability density function of the inverse Wishart is:[3]

$$f_{\mathbf{X}}(\mathbf{X}; \mathbf{\Psi}, \nu) = \frac{|\mathbf{\Psi}|^{\nu/2}}{2^{\nu p/2}\,\Gamma_p\!\left(\frac{\nu}{2}\right)}\, |\mathbf{X}|^{-(\nu+p+1)/2}\, e^{-\frac{1}{2}\operatorname{tr}(\mathbf{\Psi}\mathbf{X}^{-1})}$$

where $\mathbf{X}$ and $\mathbf{\Psi}$ are $p \times p$ positive definite matrices, and $\Gamma_p(\cdot)$ is the multivariate gamma function.
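As a quick numerical sketch of the density formula above, the following Python snippet (assuming SciPy is available; the values of $p$, $\nu$, $\mathbf{\Psi}$ and $\mathbf{X}$ are arbitrary illustrations) writes out the formula term by term and compares it with `scipy.stats.invwishart`, which uses the same $(\mathbf{\Psi}, \nu)$ parameterization.

```python
# Minimal check of the inverse-Wishart density formula (SciPy assumed).
import numpy as np
from scipy.stats import invwishart
from scipy.special import multigammaln

p = 3
nu = 7.0                                   # degrees of freedom, nu > p - 1
Psi = np.array([[2.0, 0.3, 0.0],
                [0.3, 1.0, 0.2],
                [0.0, 0.2, 1.5]])          # positive-definite scale matrix
X = 0.8 * np.eye(p)                        # a positive-definite evaluation point

# Log-density written out exactly as in the formula above.
_, logdet_Psi = np.linalg.slogdet(Psi)
_, logdet_X = np.linalg.slogdet(X)
log_pdf = (0.5 * nu * logdet_Psi
           - 0.5 * nu * p * np.log(2.0)
           - multigammaln(0.5 * nu, p)
           - 0.5 * (nu + p + 1) * logdet_X
           - 0.5 * np.trace(Psi @ np.linalg.inv(X)))

print(np.exp(log_pdf))
print(invwishart.pdf(X, df=nu, scale=Psi))   # should agree
```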

Theorems

Distribution of the inverse of a Wishart-distributed matrix

If $\mathbf{A} \sim \mathcal{W}(\mathbf{\Sigma}, \nu)$ and $\mathbf{A}$ is of size $p \times p$, then $\mathbf{X} = \mathbf{A}^{-1}$ has an inverse Wishart distribution $\mathbf{X} \sim \mathcal{W}^{-1}(\mathbf{\Sigma}^{-1}, \nu)$.[4]
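A Monte Carlo sketch of this relationship, assuming SciPy (sample sizes and parameters are illustrative): inverting Wishart draws with scale $\mathbf{\Sigma}$ should reproduce inverse-Wishart draws with scale $\mathbf{\Sigma}^{-1}$, and both sample means should approach $\mathbf{\Sigma}^{-1}/(\nu - p - 1)$, the mean given in the Moments section below.

```python
# Monte Carlo sketch of the inverse relationship (SciPy assumed).
import numpy as np
from scipy.stats import wishart, invwishart

rng = np.random.default_rng(0)
p, nu = 3, 10.0
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])

# Invert draws A ~ W(Sigma, nu); the theorem says A^{-1} ~ W^{-1}(Sigma^{-1}, nu).
A = wishart.rvs(df=nu, scale=Sigma, size=50_000, random_state=rng)
X_from_wishart = np.linalg.inv(A)

# Direct draws from the inverse Wishart with scale Sigma^{-1}.
X_direct = invwishart.rvs(df=nu, scale=np.linalg.inv(Sigma),
                          size=50_000, random_state=rng)

# Both sample means should approach Sigma^{-1} / (nu - p - 1).
print(X_from_wishart.mean(axis=0))
print(X_direct.mean(axis=0))
print(np.linalg.inv(Sigma) / (nu - p - 1))
```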

Marginal and conditional distributions from an inverse Wishart-distributed matrix

Suppose $\mathbf{A} \sim \mathcal{W}^{-1}(\mathbf{\Psi}, \nu)$ has an inverse Wishart distribution. Partition the matrices $\mathbf{A}$ and $\mathbf{\Psi}$ conformably with each other

$$\mathbf{A} = \begin{bmatrix} \mathbf{A}_{11} & \mathbf{A}_{12} \\ \mathbf{A}_{21} & \mathbf{A}_{22} \end{bmatrix}, \qquad \mathbf{\Psi} = \begin{bmatrix} \mathbf{\Psi}_{11} & \mathbf{\Psi}_{12} \\ \mathbf{\Psi}_{21} & \mathbf{\Psi}_{22} \end{bmatrix}$$

where $\mathbf{A}_{ij}$ and $\mathbf{\Psi}_{ij}$ are $p_i \times p_j$ matrices; then we have

i) $\mathbf{A}_{11}$ is independent of $\mathbf{A}_{11}^{-1}\mathbf{A}_{12}$ and $\mathbf{A}_{22\cdot 1}$, where $\mathbf{A}_{22\cdot 1} = \mathbf{A}_{22} - \mathbf{A}_{21}\mathbf{A}_{11}^{-1}\mathbf{A}_{12}$ is the Schur complement of $\mathbf{A}_{11}$ in $\mathbf{A}$;

ii) $\mathbf{A}_{11} \sim \mathcal{W}^{-1}(\mathbf{\Psi}_{11}, \nu - p_2)$;

iii) $\mathbf{A}_{11}^{-1}\mathbf{A}_{12} \mid \mathbf{A}_{22\cdot 1} \sim MN_{p_1 \times p_2}(\mathbf{\Psi}_{11}^{-1}\mathbf{\Psi}_{12}, \mathbf{A}_{22\cdot 1} \otimes \mathbf{\Psi}_{11}^{-1})$, where $MN_{p \times q}(\cdot, \cdot)$ is a matrix normal distribution;

iv) $\mathbf{A}_{22\cdot 1} \sim \mathcal{W}^{-1}(\mathbf{\Psi}_{22\cdot 1}, \nu)$, where $\mathbf{\Psi}_{22\cdot 1} = \mathbf{\Psi}_{22} - \mathbf{\Psi}_{21}\mathbf{\Psi}_{11}^{-1}\mathbf{\Psi}_{12}$ is the Schur complement of $\mathbf{\Psi}_{11}$ in $\mathbf{\Psi}$.
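A small simulation sketch of property (ii), assuming SciPy, for a scalar upper-left block ($p_1 = 1$, $p_2 = p - 1$): in that case $\mathbf{A}_{11} \sim \mathcal{W}^{-1}(\psi_{11}, \nu - p_2)$, which is an inverse-gamma distribution with shape $(\nu - p_2)/2$ and scale $\psi_{11}/2$ (see the univariate specialization in "Related distributions" below). The partition sizes and parameters are illustrative.

```python
# Monte Carlo check of property (ii) for a scalar block (SciPy assumed).
import numpy as np
from scipy.stats import invwishart, invgamma, kstest

rng = np.random.default_rng(1)
p, nu = 3, 9.0
Psi = np.array([[2.0, 0.4, 0.1],
                [0.4, 1.0, 0.3],
                [0.1, 0.3, 1.5]])

X = invwishart.rvs(df=nu, scale=Psi, size=20_000, random_state=rng)
a11 = X[:, 0, 0]                        # scalar upper-left block of each draw

p2 = p - 1
marginal = invgamma(a=(nu - p2) / 2, scale=Psi[0, 0] / 2)
print(kstest(a11, marginal.cdf))        # large p-value -> consistent with (ii)
```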

Conjugate distribution

Suppose we wish to make inference about a covariance matrix $\mathbf{\Sigma}$ whose prior $p(\mathbf{\Sigma})$ has a $\mathcal{W}^{-1}(\mathbf{\Psi}, \nu)$ distribution. If the observations $\mathbf{X} = [\mathbf{x}_1, \ldots, \mathbf{x}_n]$ are independent $p$-variate Gaussian variables drawn from a $N(\mathbf{0}, \mathbf{\Sigma})$ distribution, then the conditional distribution $p(\mathbf{\Sigma} \mid \mathbf{X})$ has a $\mathcal{W}^{-1}(\mathbf{A} + \mathbf{\Psi}, n + \nu)$ distribution, where $\mathbf{A} = \mathbf{X}\mathbf{X}^T = \sum_{i=1}^{n} \mathbf{x}_i \mathbf{x}_i^T$.

Because the prior and posterior distributions belong to the same family, we say the inverse Wishart distribution is conjugate to the multivariate Gaussian.

Due to its conjugacy to the multivariate Gaussian, it is possible to marginalize out (integrate out) the Gaussian's parameter $\mathbf{\Sigma}$:

$$P(\mathbf{X} \mid \mathbf{\Psi}, \nu) = \int P(\mathbf{X} \mid \mathbf{\Sigma})\, p(\mathbf{\Sigma} \mid \mathbf{\Psi}, \nu)\, d\mathbf{\Sigma} = \frac{|\mathbf{\Psi}|^{\nu/2}\, \Gamma_p\!\left(\frac{\nu + n}{2}\right)}{\pi^{np/2}\, |\mathbf{\Psi} + \mathbf{A}|^{(\nu + n)/2}\, \Gamma_p\!\left(\frac{\nu}{2}\right)}$$

(This is useful because the variance matrix $\mathbf{\Sigma}$ is not known in practice, but $\mathbf{\Psi}$ is known a priori and $\mathbf{A}$ can be obtained from the data, so the right-hand side can be evaluated directly.) An inverse-Wishart prior can also be constructed from existing, transferred prior knowledge.[5]
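A compact sketch of the conjugate update and the marginal likelihood described above, assuming SciPy; the data are simulated zero-mean Gaussians and all parameter values (prior scale, degrees of freedom, sample size) are illustrative.

```python
# Conjugate update Sigma ~ W^{-1}(Psi + A, nu + n) and log marginal likelihood (SciPy assumed).
import numpy as np
from scipy.stats import invwishart
from scipy.special import multigammaln

rng = np.random.default_rng(2)
p, n = 3, 200
Sigma_true = np.array([[1.0, 0.4, 0.0],
                       [0.4, 2.0, 0.3],
                       [0.0, 0.3, 0.5]])
X = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n)   # rows are observations

# Prior Sigma ~ W^{-1}(Psi, nu); posterior is W^{-1}(Psi + A, nu + n) with A = X^T X.
nu, Psi = p + 2.0, np.eye(p)
A = X.T @ X
posterior = invwishart(df=nu + n, scale=Psi + A)

print(posterior.rvs(random_state=rng))       # one posterior draw of Sigma
print((Psi + A) / (nu + n - p - 1))          # posterior mean, for comparison

# Log marginal likelihood, evaluated directly from the closed form above.
def slogdet(M):
    return np.linalg.slogdet(M)[1]

log_ml = (0.5 * nu * slogdet(Psi)
          + multigammaln(0.5 * (nu + n), p)
          - 0.5 * n * p * np.log(np.pi)
          - 0.5 * (nu + n) * slogdet(Psi + A)
          - multigammaln(0.5 * nu, p))
print(log_ml)
```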

Moments

The following is based on Press, S. J. (1982) "Applied Multivariate Analysis", 2nd ed. (Dover Publications, New York), after reparameterizing the degrees of freedom to be consistent with the p.d.f. definition above.

The mean:[4]:85

$$\operatorname{E}[\mathbf{X}] = \frac{\mathbf{\Psi}}{\nu - p - 1}, \qquad \text{for } \nu > p + 1.$$

The variance of each element of $\mathbf{X}$:

$$\operatorname{Var}(x_{ij}) = \frac{(\nu - p + 1)\,\psi_{ij}^2 + (\nu - p - 1)\,\psi_{ii}\psi_{jj}}{(\nu - p)(\nu - p - 1)^2(\nu - p - 3)}$$

The variance of the diagonal uses the same formula as above with $i = j$, which simplifies to:

$$\operatorname{Var}(x_{ii}) = \frac{2\,\psi_{ii}^2}{(\nu - p - 1)^2(\nu - p - 3)}.$$

The covariance of elements of $\mathbf{X}$ is given by:

$$\operatorname{Cov}(x_{ij}, x_{k\ell}) = \frac{2\,\psi_{ij}\psi_{k\ell} + (\nu - p - 1)\,(\psi_{ik}\psi_{j\ell} + \psi_{i\ell}\psi_{kj})}{(\nu - p)(\nu - p - 1)^2(\nu - p - 3)}$$


The results are expressed in the more succinct Kronecker product form by Rosen[6] as follows.

 

where   and   commutation matrix. There is a typo in Rosen's paper whereby the coefficient of   is given as   rather than  . Also the expression for the mean square inverse Wishart, corollary 3.1, should read  
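The element-wise mean, variance and covariance formulas above can be checked by simulation; a minimal sketch assuming SciPy, with arbitrary illustrative parameters:

```python
# Monte Carlo check of the element-wise moment formulas (SciPy assumed).
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(3)
p, nu = 3, 12.0
Psi = np.array([[2.0, 0.5, 0.2],
                [0.5, 1.5, 0.3],
                [0.2, 0.3, 1.0]])

X = invwishart.rvs(df=nu, scale=Psi, size=100_000, random_state=rng)

d = nu - p
denom = d * (d - 1) ** 2 * (d - 3)

def var_formula(i, j):
    # Var(x_ij) with nu - p written as d.
    return ((d + 1) * Psi[i, j] ** 2 + (d - 1) * Psi[i, i] * Psi[j, j]) / denom

def cov_formula(i, j, k, l):
    # Cov(x_ij, x_kl) with nu - p written as d.
    return (2 * Psi[i, j] * Psi[k, l]
            + (d - 1) * (Psi[i, k] * Psi[j, l] + Psi[i, l] * Psi[k, j])) / denom

print(X.mean(axis=0)[0, 1], Psi[0, 1] / (d - 1))                          # mean
print(X[:, 0, 1].var(), var_formula(0, 1))                                # off-diagonal variance
print(X[:, 0, 0].var(), 2 * Psi[0, 0] ** 2 / ((d - 1) ** 2 * (d - 3)))    # diagonal variance
print(np.cov(X[:, 0, 1], X[:, 1, 2])[0, 1], cov_formula(0, 1, 1, 2))      # covariance
```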

The variances of the Wishart product are also obtained by Cook et al.[7] in the singular case and, by extension, in the full-rank case. In the complex case, the "white" inverse complex Wishart $\mathcal{W}^{-1}(\mathbf{I}, \nu)$ was shown by Shaman[8] to have a diagonal statistical structure in which the leading diagonal elements are correlated, while all other elements are uncorrelated. It was also shown by Brennan and Reed,[9] using a matrix partitioning procedure, albeit in the complex variable domain, that the marginal pdf of the [1,1] diagonal element of this matrix has an inverse-chi-squared distribution. This extends easily to all diagonal elements, since the distribution is statistically invariant under orthogonal transformations, which include interchanges of diagonal elements.

For the inverse chi-squared distribution with arbitrary $\nu$ degrees of freedom, the pdf is

$$f_x(x; \nu) = \frac{2^{-\nu/2}}{\Gamma\!\left(\frac{\nu}{2}\right)}\, x^{-\nu/2 - 1}\, e^{-\frac{1}{2x}},$$

the mean and variance of which are $\frac{1}{\nu - 2}$ and $\frac{2}{(\nu - 2)^2(\nu - 4)}$ respectively. These two parameters are matched to the corresponding inverse Wishart diagonal moments by setting the degrees of freedom to $\nu - p + 1$, and hence the marginal pdf of a diagonal element of $\mathcal{W}^{-1}(\mathbf{I}, \nu)$ becomes:

$$f_{x_{ii}}(x; \nu, p) = \frac{2^{-(\nu - p + 1)/2}}{\Gamma\!\left(\frac{\nu - p + 1}{2}\right)}\, x^{-(\nu - p + 1)/2 - 1}\, e^{-\frac{1}{2x}},$$

which, below, is generalized to all diagonal elements. Note that the mean of the complex inverse Wishart is thus $\frac{\mathbf{\Psi}}{\nu - p}$, which differs from the real-valued case, $\frac{\mathbf{\Psi}}{\nu - p - 1}$.

Related distributions

A univariate specialization of the inverse-Wishart distribution is the inverse-gamma distribution. With $p = 1$ (i.e. univariate), $\alpha = \nu/2$, $\beta = \Psi/2$ and $x = \mathbf{X}$, the probability density function of the inverse-Wishart distribution becomes

$$p(x \mid \alpha, \beta) = \frac{\beta^{\alpha}\, x^{-\alpha - 1}\, e^{-\beta/x}}{\Gamma(\alpha)},$$

i.e., the inverse-gamma distribution, where $\Gamma(\cdot)$ is the ordinary gamma function.
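A quick numerical sketch of this specialization, assuming SciPy: for $p = 1$ the inverse-Wishart density coincides with the inverse-gamma density with $\alpha = \nu/2$ and $\beta = \Psi/2$ (the chosen $\nu$ and $\Psi$ are arbitrary).

```python
# For p = 1 the inverse-Wishart density equals an inverse-gamma density (SciPy assumed).
import numpy as np
from scipy.stats import invwishart, invgamma

nu, psi = 5.0, 3.0
x = np.linspace(0.1, 10.0, 7)

pdf_iw = invwishart.pdf(x, df=nu, scale=psi)
pdf_ig = invgamma.pdf(x, a=nu / 2, scale=psi / 2)
print(np.allclose(pdf_iw, pdf_ig))     # True
```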

The inverse Wishart distribution is a special case of the inverse matrix gamma distribution when the shape parameter $\alpha = \nu/2$ and the scale parameter $\beta = 2$.


Another generalization has been termed the generalized inverse Wishart distribution, $\mathcal{GW}^{-1}$. A $p \times p$ positive definite matrix $\mathbf{X}$ is said to be distributed as $\mathcal{GW}^{-1}(\mathbf{\Psi}, \nu, \mathbf{S})$ if $\mathbf{Y} = \mathbf{X}^{1/2}\mathbf{S}^{-1}\mathbf{X}^{1/2}$ is distributed as $\mathcal{W}^{-1}(\mathbf{\Psi}, \nu)$. Here $\mathbf{X}^{1/2}$ denotes the symmetric matrix square root of $\mathbf{X}$, the parameters $\mathbf{\Psi}$ and $\mathbf{S}$ are $p \times p$ positive definite matrices, and the parameter $\nu$ is a positive scalar larger than $2p$. Note that when $\mathbf{S}$ is equal to an identity matrix, $\mathcal{GW}^{-1}(\mathbf{\Psi}, \nu, \mathbf{S}) = \mathcal{W}^{-1}(\mathbf{\Psi}, \nu)$. This generalized inverse Wishart distribution has been applied to estimating the distributions of multivariate autoregressive processes.[10]

A different type of generalization is the normal-inverse-Wishart distribution, essentially the product of a multivariate normal distribution with an inverse Wishart distribution.

When the scale matrix is an identity matrix, $\mathbf{\Psi} = \mathbf{I}$, and $\mathbf{\Phi}$ is an arbitrary orthogonal matrix, replacement of $\mathbf{X}$ by $\mathbf{\Phi}\mathbf{X}\mathbf{\Phi}^T$ does not change the pdf of $\mathbf{X}$, so $\mathcal{W}^{-1}(\mathbf{I}, \nu)$ belongs to the family of spherically invariant random processes (SIRPs) in some sense.
Thus, an arbitrary $p$-vector $\mathbf{v}$ with unit length $\mathbf{v}^T\mathbf{v} = 1$ can be rotated into the vector $\mathbf{\Phi}\mathbf{v} = [1, 0, \ldots, 0]^T$ without changing the pdf of $\mathbf{v}^T\mathbf{X}\mathbf{v}$; moreover, $\mathbf{\Phi}$ can be a permutation matrix which exchanges diagonal elements. It follows that the diagonal elements of $\mathcal{W}^{-1}(\mathbf{I}, \nu)$ are identically inverse chi-squared distributed, with pdf $f_{x_{ii}}$ given in the previous section, though they are not mutually independent. The result is known in optimal portfolio statistics, as in Theorem 2 Corollary 1 of Bodnar et al.,[11] where it is expressed in the inverse form $\frac{\mathbf{v}^T\mathbf{\Psi}\mathbf{v}}{\mathbf{v}^T\mathbf{X}\mathbf{v}} \sim \chi^2_{\nu - p + 1}$.
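A simulation sketch of this invariance property, assuming SciPy: every diagonal element of a draw from $\mathcal{W}^{-1}(\mathbf{I}, \nu)$ should follow the same inverse chi-squared law with $\nu - p + 1$ degrees of freedom, represented here via SciPy's `invgamma` with shape $(\nu - p + 1)/2$ and scale $1/2$ (SciPy has no dedicated inverse chi-squared distribution). The dimensions and degrees of freedom are illustrative.

```python
# Diagonal elements of W^{-1}(I, nu) are identically inverse chi-squared distributed (SciPy assumed).
import numpy as np
from scipy.stats import invwishart, invgamma, kstest

rng = np.random.default_rng(4)
p, nu = 4, 10.0
X = invwishart.rvs(df=nu, scale=np.eye(p), size=20_000, random_state=rng)

# Inverse chi-squared with nu - p + 1 degrees of freedom, as an inverse-gamma.
marginal = invgamma(a=(nu - p + 1) / 2, scale=0.5)
for i in range(p):
    print(i, kstest(X[:, i, i], marginal.cdf).pvalue)   # each p-value should be large
```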

References

  1. ^ A. O'Hagan, and J. J. Forster (2004). Kendall's Advanced Theory of Statistics: Bayesian Inference. 2B (2 ed.). Arnold. ISBN 978-0-340-80752-1.
  2. ^ Haff, LR (1979). "An identity for the Wishart distribution with applications". Journal of Multivariate Analysis. 9 (4): 531–544. doi:10.1016/0047-259x(79)90056-3.
  3. ^ Gelman, Andrew; Carlin, John B.; Stern, Hal S.; Dunson, David B.; Vehtari, Aki; Rubin, Donald B. (2013-11-01). Bayesian Data Analysis, Third Edition (3rd ed.). Boca Raton: Chapman and Hall/CRC. ISBN 9781439840955.
  4. ^ a b Kanti V. Mardia, J. T. Kent and J. M. Bibby (1979). Multivariate Analysis. Academic Press. ISBN 978-0-12-471250-8.
  5. ^ Shahrokh Esfahani, Mohammad; Dougherty, Edward (2014). "Incorporation of Biological Pathway Knowledge in the Construction of Priors for Optimal Bayesian Classification". IEEE Transactions on Bioinformatics and Computational Biology. 11 (1): 202–218. doi:10.1109/tcbb.2013.143. PMID 26355519.
  6. ^ Rosen, Dietrich von (1988). "Moments for the Inverted Wishart Distribution". Scandinavian Journal of Statistics. 15: 97–109 – via JSTOR.
  7. ^ Cook, R D; Forzani, Liliana (August 2019). "On the mean and variance of the generalized inverse of a singular Wishart matrix". Electronic Journal of Statistics. 5.
  8. ^ Shaman, Paul (1980). "The Inverted Complex Wishart Distribution and Its Application to Spectral Estimation" (PDF). Journal of Multivariate Analysis. 10: 51–59.
  9. ^ Brennan, L. E.; Reed, I. S. (January 1982). "An Adaptive Array Signal Processing Algorithm for Communications". IEEE Transactions on Aerospace and Electronic Systems. AES-18 (1): 120–130.
  10. ^ Triantafyllopoulos, K. (2011). "Real-time covariance estimation for the local level model". Journal of Time Series Analysis. 32 (2): 93–107. arXiv:1311.0634. doi:10.1111/j.1467-9892.2010.00686.x.
  11. ^ Bodnar, T.; Mazur, S.; Podgórski, K. (January 2015). "Singular Inverse Wishart Distribution with Application to Portfolio Theory". Working Papers in Statistics, No. 2. Department of Statistics, Lund University: 1–17.