'''Simultaneous equation models''' are a form of [[statistical model]] based on a set of [[linear simultaneous equations]]. They are often used in [[econometrics]].
== Structural and reduced form ==

Suppose there are ''m'' regression equations of the form

: <math>
y_{it} = y_{-i,t}'\gamma_i + x_{it}'\;\!\beta_i + u_{it}, \quad i=1,\ldots,m,
</math>

where ''i'' is the equation number, and {{nowrap|''t'' {{=}} 1, ..., ''T''}} is the observation index. In these equations ''x<sub>it</sub>'' is the ''k<sub>i</sub>×''1 vector of exogenous variables, ''y<sub>it</sub>'' is the dependent variable, ''y<sub>−i,t</sub>'' is the ''n<sub>i</sub>×''1 vector of all other endogenous variables which enter the ''i''<sup>th</sup> equation on the right-hand side, and ''u<sub>it</sub>'' are the error terms. The “−''i''” notation indicates that the vector ''y<sub>−i,t</sub>'' may contain any of the ''y''’s except for ''y<sub>it</sub>'' (since it is already present on the left-hand side). The regression coefficients ''β<sub>i</sub>'' and ''γ<sub>i</sub>'' are of dimensions ''k<sub>i</sub>×''1 and ''n<sub>i</sub>×''1 respectively. Vertically stacking the ''T'' observations corresponding to the ''i''<sup>th</sup> equation, we can write each equation in vector form as
: <math>
y_i = Y_{-i}\gamma_i + X_i\beta_i + u_i, \quad i=1,\ldots,m,
</math>

where ''y<sub>i</sub>'' and ''u<sub>i</sub>'' are ''T×''1 vectors, ''X<sub>i</sub>'' is a ''T×k<sub>i</sub>'' matrix of exogenous regressors, and ''Y<sub>−i</sub>'' is a ''T×n<sub>i</sub>'' matrix of endogenous regressors on the right-hand side of the ''i''<sup>th</sup> equation. Finally, we can move all endogenous variables to the left-hand side and write the ''m'' equations jointly in vector form as
: <math>
Y\Gamma = X\Beta + U.\,
</math>

This representation is known as the '''structural form'''. In this equation {{nowrap|''Y'' {{=}} [''y''<sub>1</sub> ''y''<sub>2</sub> ... ''y<sub>m</sub>'']}} is the ''T×m'' matrix of dependent variables. Each of the matrices ''Y<sub>−i</sub>'' is in fact an ''n<sub>i</sub>''-columned submatrix of this ''Y''. The ''m×m'' matrix Γ, which describes the relation between the dependent variables, has a complicated structure. It has ones on the diagonal, and all other elements of each column ''i'' are either the components of the vector ''−γ<sub>i</sub>'' or zeros, depending on which columns of ''Y'' were included in the matrix ''Y<sub>−i</sub>''. The ''T×k'' matrix ''X'' contains all exogenous regressors from all equations, but without repetitions (that is, matrix ''X'' should be of full rank). Thus, each ''X<sub>i</sub>'' is a ''k<sub>i</sub>''-columned submatrix of ''X''. Matrix Β has size ''k×m'', and each of its columns consists of the components of vectors ''β<sub>i</sub>'' and zeros, depending on which of the regressors from ''X'' were included or excluded from ''X<sub>i</sub>''. Finally, {{nowrap|''U'' {{=}} [''u''<sub>1</sub> ''u''<sub>2</sub> ... ''u<sub>m</sub>'']}} is a ''T×m'' matrix of the error terms.
Postmultiplying the structural equation by {{nowrap|Γ<sup> −1</sup>}}, the system can be written in the '''reduced form''' as

: <math>
Y = X\Beta\Gamma^{-1} + U\Gamma^{-1} = X\Pi + V.\,
</math>

This is already a simple [[general linear model]], and it can be estimated for example by [[ordinary least squares]]. Unfortunately, the task of decomposing the estimated matrix <math style="vertical-align:0">\scriptstyle\hat\Pi</math> into the individual factors Β and {{nowrap|Γ<sup> −1</sup>}} is quite complicated, and therefore the reduced form is more suitable for prediction than for inference.
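As a minimal illustration (the code and variable names below are not part of the article), the reduced-form coefficient matrix Π can be estimated column-by-column by ordinary least squares:

```python
import numpy as np

def reduced_form_ols(Y, X):
    """Estimate Pi in the reduced form Y = X Pi + V by OLS.

    Y : (T, m) matrix of endogenous variables
    X : (T, k) full-rank matrix of exogenous regressors
    Returns the (k, m) estimate of Pi = B Gamma^{-1}.
    """
    # Solve the normal equations (X'X) Pi = X'Y for all m columns at once.
    return np.linalg.solve(X.T @ X, X.T @ Y)
```

Solving the normal equations directly, rather than inverting ''X′X'', is the usual numerically preferable formulation.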
=== Assumptions ===

Firstly, the rank of the matrix ''X'' of exogenous regressors must be equal to ''k'', both in finite samples and in the limit as {{nowrap|''T'' → ∞}} (this latter requirement means that in the limit the expression <math style="vertical-align:-.4em">\scriptstyle \frac1TX'\!X</math> should converge to a nondegenerate ''k×k'' matrix). Matrix Γ is also assumed to be non-degenerate.
Secondly, the error terms are assumed to be serially [[independent and identically distributed]]. That is, if the ''t''<sup>th</sup> row of matrix ''U'' is denoted by ''u''<sub>(''t'')</sub>, then the sequence of vectors {''u''<sub>(''t'')</sub>} should be iid, with zero mean and some (unknown) covariance matrix Σ. In particular, this implies that {{nowrap|E[''U''] {{=}} 0}} and {{nowrap|E[''U′U''] {{=}} ''T'' Σ}}.
Lastly, the [[identification condition]]s require that the number of unknowns in this system of equations should not exceed the number of equations. More specifically, the ''order condition'' requires that for each equation {{nowrap|''k<sub>i</sub> + n<sub>i</sub> ≤ k''}}, which can be phrased as “the number of excluded exogenous variables is greater than or equal to the number of included endogenous variables”. The ''rank condition'' of identifiability is that {{nowrap|rank(Π<sub>''i''0</sub>) {{=}} ''n<sub>i</sub>''}}, where Π<sub>''i''0</sub> is a {{nowrap|(''k − k<sub>i</sub>'')×''n<sub>i</sub>''}} matrix which is obtained from Π by crossing out those columns which correspond to the excluded endogenous variables and those rows which correspond to the included exogenous variables.
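As a hypothetical illustration of the order condition (this example is not from the original text), consider a two-equation system in the endogenous variables ''q<sub>t</sub>'' and ''p<sub>t</sub>'', with exogenous variables ''x''<sub>1''t''</sub> and ''x''<sub>2''t''</sub>, so that {{nowrap|''k'' {{=}} 2}}:

: <math>
\begin{align}
q_t &= \gamma_1 p_t + \beta_1 x_{1t} + u_{1t},\\
q_t &= \gamma_2 p_t + \beta_2 x_{2t} + u_{2t}.
\end{align}
</math>

For the first equation {{nowrap|''n''<sub>1</sub> {{=}} 1}} (the included endogenous regressor ''p<sub>t</sub>'') and {{nowrap|''k''<sub>1</sub> {{=}} 1}}, so {{nowrap|''k''<sub>1</sub> + ''n''<sub>1</sub> {{=}} 2 ≤ ''k''}} holds with equality and the equation is exactly identified: the single excluded exogenous variable ''x''<sub>2''t''</sub> serves as the instrument for ''p<sub>t</sub>''.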
== Estimation ==

=== Two-stage least squares (2SLS) ===

The simplest and most common<ref>{{harvtxt|Greene|2003|loc=p. 398}}</ref> estimation method for the simultaneous equations model is the so-called [[two-stage least squares]] method, developed independently by {{harvtxt|Theil|1953}} and {{harvtxt|Basmann|1957}}. It is an equation-by-equation technique, where the endogenous regressors on the right-hand side of each equation are instrumented with the regressors ''X'' from all other equations. The method is called “two-stage” because it conducts estimation in two steps:<ref>{{harvtxt|Greene|2003|loc=p. 399}}</ref>
: ''Step 1'': Regress ''Y<sub>−i</sub>'' on ''X'' and obtain the predicted values <math style="vertical-align:-.2em">\scriptstyle\hat{Y}_{\!-i}</math>;

: ''Step 2'': Estimate ''γ<sub>i</sub>'', ''β<sub>i</sub>'' by the [[ordinary least squares]] regression of ''y<sub>i</sub>'' on <math style="vertical-align:-.2em">\scriptstyle\hat{Y}_{\!-i}</math> and ''X<sub>i</sub>''.
If the ''i''<sup>th</sup> equation in the model is written as

: <math>
y_i = \begin{pmatrix}Y_{-i} & X_i\end{pmatrix}\begin{pmatrix}\gamma_i\\\beta_i\end{pmatrix} + u_i
\equiv Z_i \delta_i + u_i,
</math>

where ''Z<sub>i</sub>'' is a ''T×''(''n<sub>i</sub> + k<sub>i</sub>'') matrix of both endogenous and exogenous regressors in the ''i''<sup>th</sup> equation, and ''δ<sub>i</sub>'' is an (''n<sub>i</sub> + k<sub>i</sub>'')-dimensional vector of regression coefficients, then the 2SLS estimator of ''δ<sub>i</sub>'' will be given by<ref>{{harvtxt|Greene|2003|loc=p. 399}}</ref>
: <math>
\hat\delta_i = \big(\hat{Z}'_i\hat{Z}_i\big)^{-1}\hat{Z}'_i y_i
= \big( Z'_iPZ_i \big)^{-1} Z'_iPy_i,
</math>

where {{nowrap|''P'' {{=}} ''X'' (''X'' ′''X'')<sup>−1</sup>''X'' ′}} is the projection matrix onto the linear space spanned by the exogenous regressors ''X''.
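The closed-form expression above can be sketched in a few lines of NumPy; this is an illustrative implementation (the function name and the simulated system in the accompanying test are hypothetical, not from the article):

```python
import numpy as np

def two_stage_least_squares(y_i, Y_mi, X_i, X):
    """2SLS for one equation y_i = Y_mi @ gamma_i + X_i @ beta_i + u_i,
    instrumenting the endogenous regressors Y_mi with all exogenous X.

    Returns delta_i = (Z_i' P Z_i)^{-1} Z_i' P y_i, P = X (X'X)^{-1} X'.
    """
    Z = np.column_stack([Y_mi, X_i])           # Z_i = [Y_{-i}, X_i]
    # Stage 1: fitted values Z_hat = P Z, computed from the normal
    # equations without forming the T x T projection matrix P.
    Z_hat = X @ np.linalg.solve(X.T @ X, X.T @ Z)
    # Stage 2: OLS of y_i on Z_hat, equivalent to (Z'PZ)^{-1} Z'P y_i
    # because P is symmetric and idempotent.
    return np.linalg.solve(Z_hat.T @ Z, Z_hat.T @ y_i)
```

Computing the first-stage fitted values through the normal equations avoids materialising the ''T×T'' matrix ''P'', which matters for large samples.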
=== Indirect least squares ===

Indirect least squares is an approach in [[econometrics]] where the [[coefficient]]s in a simultaneous equations model are estimated from the [[reduced form]] model using [[ordinary least squares]].<ref>Park, S-B. (1974) "On Indirect Least Squares Estimation of a Simultaneous Equation System", ''The Canadian Journal of Statistics / La Revue Canadienne de Statistique'', 2 (1), 75–82 {{JSTOR|3314964}}</ref><ref>Vajda, S., Valko, P., Godfrey, K.R. (1987) "Direct and indirect least squares methods in continuous-time parameter estimation", ''Automatica'', 23 (6), 707–718 {{DOI|10.1016/0005-1098(87)90027-6}}</ref> The structural system of equations is first transformed into the reduced form, the reduced-form coefficients are estimated by ordinary least squares, and those estimates are then converted back into the structural coefficients.
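For an exactly identified equation this conversion amounts to solving a square linear system linking the reduced-form and structural coefficients. The sketch below (not from the article; the column-matching construction of the selection matrix is an assumption of this illustration) makes that concrete:

```python
import numpy as np

def indirect_least_squares(y_i, Y_mi, X_i, X):
    """ILS for an exactly identified equation: estimate the reduced form
    by OLS, then solve  pi_i = [Pi_{-i}, S_i] @ delta_i  for the
    structural coefficients delta_i = (gamma_i, beta_i).

    Y_mi and X_i must be 2-D arrays whose columns are taken from X.
    """
    XtX = X.T @ X
    Pi_mi = np.linalg.solve(XtX, X.T @ Y_mi)   # reduced form of Y_{-i}, (k, n_i)
    pi_i = np.linalg.solve(XtX, X.T @ y_i)     # reduced form of y_i, length k
    # S_i maps beta_i into the full exogenous set; here it is built by
    # matching the columns of X_i inside X (illustrative assumption).
    S = np.array([[1.0 if np.array_equal(X[:, j], X_i[:, l]) else 0.0
                   for l in range(X_i.shape[1])]
                  for j in range(X.shape[1])])
    A = np.hstack([Pi_mi, S])                  # square (k x k) when exactly identified
    return np.linalg.solve(A, pi_i)
```

When the equation is exactly identified, this produces the same estimates as 2SLS; when it is over-identified, the system above has more equations than unknowns and ILS is not uniquely defined.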
=== Limited information maximum likelihood (LIML) ===

The “limited information” maximum likelihood method was suggested by {{harvtxt|Anderson|Rubin|1949}}. It is used when one is interested in estimating a single structural equation at a time (hence its name of limited information), say for equation ''i'':

: <math> y_i = Y_{-i}\gamma_i + X_i\beta_i + u_i \equiv Z_i \delta_i + u_i </math>

The structural equations for the remaining endogenous variables ''Y''<sub>−''i''</sub> are not specified, and they are given in their reduced form:

: <math> Y_{-i} = X \Pi + U_{-i} </math>
The explicit formula for this estimator is:<ref>{{harvtxt|Amemiya|1985|loc=p. 235}}</ref>

: <math>
\hat\delta_i = \Big(Z'_i(I-\lambda M)Z_i\Big)^{\!-1}Z'_i(I-\lambda M)y_i,
</math>

where {{nowrap|''M'' {{=}} ''I − X'' (''X'' ′''X'')<sup>−1</sup>''X'' ′}}, {{nowrap|''M<sub>i</sub>'' {{=}} ''I − X<sub>i</sub>'' (''X<sub>i</sub>''′''X<sub>i</sub>'')<sup>−1</sup>''X<sub>i</sub>''′}}, and ''λ'' is the smallest characteristic root of the matrix

: <math>
\Big(\begin{bmatrix}y_i&Y_{-i}\end{bmatrix}' M_i \begin{bmatrix}y_i&Y_{-i}\end{bmatrix} \Big)
\Big(\begin{bmatrix}y_i&Y_{-i}\end{bmatrix}' M \begin{bmatrix}y_i&Y_{-i}\end{bmatrix} \Big)^{\!-1}
</math>
In other words, ''λ'' is the smallest solution of the [[Generalized eigenvalue problem#Generalized eigenvalue problem|generalized eigenvalue problem]], see {{harvtxt|Theil|1971|loc=p. 503}}:

: <math>
\Big|\begin{bmatrix}y_i&Y_{-i}\end{bmatrix}' M_i \begin{bmatrix}y_i&Y_{-i}\end{bmatrix} -\lambda
\begin{bmatrix}y_i&Y_{-i}\end{bmatrix}' M \begin{bmatrix}y_i&Y_{-i}\end{bmatrix} \Big|=0
</math>
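The characteristic-root formulation translates directly into code. The sketch below is illustrative only (function and variable names are hypothetical, and it forms the ''T×T'' annihilator matrices explicitly, which is only sensible for small samples):

```python
import numpy as np

def liml(y_i, Y_mi, X_i, X):
    """LIML for one structural equation (illustrative sketch).

    lambda is the smallest eigenvalue of (W'M_i W)(W'M W)^{-1}
    with W = [y_i, Y_{-i}], matching the characteristic-root
    formulation in the text.
    """
    T = len(y_i)
    def annihilator(A):
        # M_A = I - A (A'A)^{-1} A'
        return np.eye(T) - A @ np.linalg.solve(A.T @ A, A.T)
    M, Mi = annihilator(X), annihilator(X_i)
    W = np.column_stack([y_i, Y_mi])
    # eig((W'M_iW)(W'MW)^{-1}) has the same spectrum as (W'MW)^{-1}(W'M_iW)
    lam = min(np.real(np.linalg.eigvals(
        np.linalg.solve(W.T @ M @ W, W.T @ Mi @ W))))
    Z = np.column_stack([Y_mi, X_i])
    A = np.eye(T) - lam * M
    return np.linalg.solve(Z.T @ A @ Z, Z.T @ A @ y_i)
```

For an exactly identified equation, LIML coincides with 2SLS.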
==== K class estimators ====

The LIML is a special case of the K-class estimators:<ref>{{harvtxt|Davidson|MacKinnon|1993|loc=p. 649}}</ref>

: <math>
\hat\delta = \Big(Z'(I-\kappa M)Z\Big)^{\!-1}Z'(I-\kappa M)y,
</math>
with:

* <math> \delta = \begin{bmatrix} \beta_i & \gamma_i\end{bmatrix} </math>
* <math> Z = \begin{bmatrix} X_i & Y_{-i}\end{bmatrix} </math>

Several estimators belong to this class:

* ''κ'' = 0: [[OLS]]
* ''κ'' = 1: 2SLS. Note indeed that in this case <math> I-\kappa M = I-M= P, </math> the usual projection matrix of the 2SLS
* ''κ'' = ''λ'': LIML
* ''κ'' = ''λ'' − ''α''/(''n'' − ''K''): the {{harvtxt|Fuller|1977}} estimator. Here ''K'' represents the number of instruments, ''n'' the sample size, and ''α'' a positive constant to specify. A value of ''α'' = 1 will yield an estimator that is approximately unbiased.<ref>{{harvtxt|Davidson|MacKinnon|1993|loc=p. 649}}</ref>
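The whole class can be written as a single function of ''κ''; the sketch below (illustrative names, explicit ''T×T'' matrices, so small samples only) recovers OLS at ''κ'' = 0 and 2SLS at ''κ'' = 1:

```python
import numpy as np

def k_class(y, Z, X, kappa):
    """K-class estimator delta = (Z'(I - kappa*M)Z)^{-1} Z'(I - kappa*M)y,
    where M = I - X(X'X)^{-1}X' annihilates the exogenous regressors X.

    kappa = 0 reproduces OLS; kappa = 1 reproduces 2SLS (I - M = P).
    """
    T = len(y)
    M = np.eye(T) - X @ np.linalg.solve(X.T @ X, X.T)
    A = np.eye(T) - kappa * M
    return np.linalg.solve(Z.T @ A @ Z, Z.T @ A @ y)
```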
=== Three-stage least squares (3SLS) ===

The three-stage least squares estimator was introduced by {{harvtxt|Zellner|Theil|1962}}. It combines [[2SLS|two-stage least squares]] (2SLS) with [[seemingly unrelated regressions]] (SUR).
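A common textbook form of the estimator, sketched below under illustrative assumptions (2SLS residual covariance estimated with a centred sample covariance, explicit Kronecker-product weight matrix, hence small samples only; names are hypothetical):

```python
import numpy as np

def three_sls(ys, Zs, X):
    """3SLS sketch: equation-by-equation 2SLS, covariance of the 2SLS
    residuals, then a joint GLS step weighted by Sigma^{-1} kron P.

    ys: list of m (T,) dependent vectors; Zs: list of (T, n_i + k_i)
    regressor matrices; X: (T, k) matrix of all exogenous regressors.
    """
    m, T = len(ys), len(ys[0])
    P = X @ np.linalg.solve(X.T @ X, X.T)
    # Stages 1-2: 2SLS for each equation and its residuals
    resid = []
    for y, Z in zip(ys, Zs):
        d = np.linalg.solve(Z.T @ P @ Z, Z.T @ P @ y)
        resid.append(y - Z @ d)
    Sigma_inv = np.linalg.inv(np.cov(np.array(resid), bias=True))
    # Stage 3: GLS on the stacked system with block-diagonal regressors
    cols = [Z.shape[1] for Z in Zs]
    Zbar = np.zeros((m * T, sum(cols)))
    c = 0
    for i, Z in enumerate(Zs):
        Zbar[i * T:(i + 1) * T, c:c + cols[i]] = Z
        c += cols[i]
    ybar = np.concatenate(ys)
    W = np.kron(Sigma_inv, P)                  # (mT x mT) weight matrix
    return np.linalg.solve(Zbar.T @ W @ Zbar, Zbar.T @ W @ ybar)
```

The cross-equation weighting by the estimated Σ is what 3SLS adds over running 2SLS equation by equation; when Σ is diagonal the two coincide asymptotically.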
== See also ==

* [[General linear model]]
* [[Seemingly unrelated regressions]]
* [[Indirect least squares]]

== Notes ==

{{reflist}}
== References ==

{{refbegin}}
* {{cite book | last = Amemiya | first = Takeshi | title = Advanced Econometrics | year = 1985 | publisher = Harvard University Press | location = Cambridge, Massachusetts | isbn = 0-674-00560-0 | ref = harv }}
* {{cite journal | last1 = Anderson | first1 = T.W. | last2 = Rubin | first2 = H. | title = Estimation of the parameters of a single equation in a complete system of stochastic equations | year = 1949 | journal = Annals of Mathematical Statistics | volume = 20 | issue = 1 | pages = 46–63 | jstor = 2236803 | ref = harv }}
* {{cite journal | last = Basmann | first = R.L. | title = A generalized classical method of linear estimation of coefficients in a structural equation | year = 1957 | journal = Econometrica | volume = 25 | issue = 1 | pages = 77–83 | jstor = 1907743 | ref = harv }}
* {{cite book | last1 = Davidson | first1 = Russell | last2 = MacKinnon | first2 = James G. | title = Estimation and Inference in Econometrics | year = 1993 | publisher = Oxford University Press | isbn = 978-0-19-506011-9 | ref = harv }}
* {{cite journal | last = Fuller | first = Wayne | author-link = Wayne Fuller | title = Some Properties of a Modification of the Limited Information Estimator | year = 1977 | journal = Econometrica | volume = 45 | issue = 4 | pages = 939–953 | ref = harv }}
* {{cite book | last = Greene | first = William H. | title = Econometric Analysis | year = 2003 | edition = 5th | publisher = Prentice Hall | isbn = 0-13-066189-9 | ref = harv }}
* {{cite book | last = Theil | first = Henri | author-link = Henri Theil | title = Principles of Econometrics | year = 1971 | publisher = John Wiley | location = New York | ref = harv }}
* {{cite journal | last1 = Zellner | first1 = Arnold | author-link1 = Arnold Zellner | last2 = Theil | first2 = Henri | author-link2 = Henri Theil | title = Three-stage least squares: simultaneous estimation of simultaneous equations | year = 1962 | journal = Econometrica | volume = 30 | issue = 1 | pages = 54–78 | jstor = 1911287 | ref = harv }}
{{refend}}
== External links ==

* [http://economics.about.com/library/glossary/bldef-ils.htm About.com: Economics] Online dictionary of economics, entry for ILS

[[Category:Multivariate statistics]]
[[Category:Econometrics]]