Moment matrix

From Wikipedia, the free encyclopedia
In [[mathematics]], a '''moment matrix''' is a special symmetric square [[matrix (mathematics)|matrix]] whose rows and columns are indexed by [[monomial]]s. The entries of the matrix depend only on the product of the indexing monomials (cf. [[Hankel matrix|Hankel matrices]]).
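For example, in the univariate case, if the rows and columns are indexed by the monomials <math>1, x, x^2</math> and <math>y_k</math> denotes the moment associated with <math>x^k</math>, the moment matrix is the [[Hankel matrix]]
:<math>M = \begin{bmatrix} y_0 & y_1 & y_2 \\ y_1 & y_2 & y_3 \\ y_2 & y_3 & y_4 \end{bmatrix},</math>
whose <math>(i,j)</math> entry <math>y_{i+j}</math> depends only on the product <math>x^i \cdot x^j = x^{i+j}</math>.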


Moment matrices play an important role in [[polynomial fitting]], polynomial optimization (since [[positive semidefinite matrix|positive semidefinite]] moment matrices correspond to polynomials which are [[Polynomial SOS|sums of squares]])<ref>{{Cite book|last=Lasserre, Jean-Bernard, 1953-|url=https://www.worldcat.org/oclc/624365972|title=Moments, positive polynomials and their applications|date=2010|publisher=Imperial College Press|others=World Scientific (Firm)|isbn=978-1-84816-446-8|location=London|oclc=624365972}}</ref> and [[econometrics]].<ref>{{cite book |first=Arthur S. |last=Goldberger |author-link=Arthur Goldberger |chapter=Classical Linear Regression |title=Econometric Theory |location=New York |publisher=John Wiley & Sons |year=1964 |isbn=0-471-31101-4 |pages=[https://archive.org/details/econometrictheor0000gold/page/156 156–212] |chapter-url=https://books.google.com/books?id=KZq5AAAAIAAJ&pg=PA156 |url-access=registration |url=https://archive.org/details/econometrictheor0000gold/page/156 }}</ref>
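Whether a given moment matrix is positive semidefinite can be checked numerically. The following is a minimal sketch (not drawn from the cited sources) using the NumPy library, with the raw moments <math>1, 0, 1, 0, 3</math> of the standard normal distribution as example data:

<syntaxhighlight lang="python">
# Build the Hankel moment matrix of a univariate moment sequence y_0, ..., y_4
# and test whether it is positive semidefinite (all eigenvalues nonnegative).
import numpy as np

moments = [1.0, 0.0, 1.0, 0.0, 3.0]   # raw moments of the standard normal distribution
d = 2                                 # rows/columns indexed by the monomials 1, x, x^2
M = np.array([[moments[i + j] for j in range(d + 1)] for i in range(d + 1)])

eigenvalues = np.linalg.eigvalsh(M)   # eigenvalues of the symmetric matrix M
print(M)
print("positive semidefinite:", bool(np.all(eigenvalues >= -1e-9)))
</syntaxhighlight>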


==Application in regression==
A multiple [[linear regression]] model can be written as
:<math>y = \beta_{0} + \beta_{1} x_{1} + \beta_{2} x_{2} + \dots + \beta_{k} x_{k} + u</math>
where <math>y</math> is the explained variable, <math>x_{1}, x_{2} \dots, x_{k}</math> are the explanatory variables, <math>u</math> is the error, and <math>\beta_{0}, \beta_{1} \dots, \beta_{k}</math> are unknown coefficients to be estimated. Given observations <math>\left\{ y_{i}, x_{1i}, x_{2i}, \dots, x_{ki} \right\}_{i=1}^{n}</math>, we have a system of <math>n</math> linear equations that can be expressed in matrix notation.<ref>{{cite book |first=David S. |last=Huang |title=Regression and Econometric Methods |location=New York |publisher=John Wiley & Sons |year=1970 |isbn=0-471-41754-8 |pages=52–65 |url=https://books.google.com/books?id=5IxRAAAAMAAJ&pg=PA52 }}</ref>
:<math>\begin{bmatrix} y_{1} \\ y_{2} \\ \vdots \\ y_{n} \end{bmatrix} = \begin{bmatrix} 1 & x_{11} & x_{12} & \dots & x_{1k} \\ 1 & x_{21} & x_{22} & \dots & x_{2k} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n1} & x_{n2} & \dots & x_{nk} \\ \end{bmatrix} \begin{bmatrix} \beta_{0} \\ \beta_{1} \\ \vdots \\ \beta_{k} \end{bmatrix} + \begin{bmatrix} u_{1} \\ u_{2} \\ \vdots \\ u_{n} \end{bmatrix}</math>
or
:<math>\mathbf{y} = \mathbf{X} \boldsymbol{\beta} + \mathbf{u}</math>
where <math>\mathbf{y}</math> and <math>\mathbf{u}</math> are each a vector of dimension <math>n \times 1</math>, <math>\mathbf{X}</math> is the [[design matrix]] of order <math>n \times (k+1)</math>, and <math>\boldsymbol{\beta}</math> is a vector of dimension <math>(k+1) \times 1</math>. Under the [[Gauss–Markov theorem|Gauss–Markov assumptions]], the best linear unbiased estimator of <math>\boldsymbol{\beta}</math> is the linear [[least squares]] estimator <math>\mathbf{b} = \left( \mathbf{X}^{\mathsf{T}} \mathbf{X} \right)^{-1} \mathbf{X}^{\mathsf{T}} \mathbf{y}</math>, involving the two moment matrices <math>\mathbf{X}^{\mathsf{T}} \mathbf{X}</math> and <math>\mathbf{X}^{\mathsf{T}} \mathbf{y}</math> defined as
:<math>\mathbf{X}^{\mathsf{T}} \mathbf{X} = \begin{bmatrix} n & \sum x_{i1} & \sum x_{i2} & \dots & \sum x_{ik} \\ \sum x_{i1} & \sum x_{i1}^{2} & \sum x_{i1} x_{i2} & \dots & \sum x_{i1} x_{ik} \\ \sum x_{i2} & \sum x_{i1} x_{i2} & \sum x_{i2}^{2} & \dots & \sum x_{i2} x_{ik} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \sum x_{ik} & \sum x_{i1} x_{ik} & \sum x_{i2} x_{ik} & \dots & \sum x_{ik}^{2} \end{bmatrix}</math>
and
:<math>\mathbf{X}^{\mathsf{T}} \mathbf{y} = \begin{bmatrix} \sum y_{i} \\ \sum x_{i1} y_{i} \\ \vdots \\ \sum x_{ik} y_{i} \end{bmatrix}</math>
where <math>\mathbf{X}^{\mathsf{T}} \mathbf{X}</math> is a square [[normal matrix]] of dimension <math>(k+1) \times (k+1)</math>, and <math>\mathbf{X}^{\mathsf{T}} \mathbf{y}</math> is a vector of dimension <math>(k+1) \times 1</math>.
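The computation above is easy to carry out numerically. The following is a minimal sketch (not drawn from the cited sources) using the NumPy library, with simulated data and hypothetical coefficient values:

<syntaxhighlight lang="python">
# Form the moment matrices X'X and X'y for simulated data and solve the
# normal equations (X'X) b = X'y for the least squares estimate b.
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 2
beta_true = np.array([1.0, 2.0, -0.5])       # hypothetical intercept and slopes
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])  # design matrix with constant column
y = X @ beta_true + rng.normal(size=n)       # explained variable with random errors

XtX = X.T @ X                                # (k+1) x (k+1) moment matrix
Xty = X.T @ y                                # (k+1) x 1 moment vector
b = np.linalg.solve(XtX, Xty)                # equivalent to (X'X)^{-1} X'y
print(b)                                     # estimates close to beta_true
</syntaxhighlight>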


==See also==
* [[Design matrix]]
* [[Gramian matrix]]
* [[Projection matrix]]

==References==
{{Reflist}}


==External links==
* {{springer|title=Moment matrix|id=p/m130190}}

{{Matrix classes}}


[[Category:Matrices]]
[[Category:Least squares]]