




Covariance Matrix of a Set of Random Variables

Let x = (x1, x2, . . . , xn)' be a set of n independent random variables with mean zero and common variance σ². Earlier we defined the scalar product x'x. If we consider xx' instead, this is an n × n matrix. The covariance matrix of the variables is (since their mean is 0)

V = E(xx') = E [ x1²    x1x2   . . .  x1xn ]
               [ x2x1   x2²    . . .  x2xn ]
               [ . . .                     ]
               [ xnx1   xnx2   . . .  xn²  ]

Since

E(xi xj) = σ²  if i = j
         = 0   if i ≠ j

we have V = σ²In. In the general case where E(xi) = μi and cov(xi, xj) = σij, we have E(x) = μ, where μ is the vector of means, and the covariance matrix is V = E[(x − μ)(x − μ)'] = Σ, an n × n matrix whose (i, j)th element is σij.
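This structure is easy to check by simulation. The following sketch (using NumPy; the sample size and the value of σ are arbitrary choices, not from the text) draws many independent realizations of x and verifies that the sample covariance matrix is close to σ²In:

```python
import numpy as np

rng = np.random.default_rng(0)
n, draws, sigma = 3, 100_000, 2.0

# Each row of X is one draw of x = (x1, ..., xn)', with independent
# components of mean 0 and common variance sigma^2.
X = rng.normal(0.0, sigma, size=(draws, n))

# Sample analogue of V = E(xx'): should be close to sigma^2 * I_n.
V = (X.T @ X) / draws
print(np.round(V, 2))
```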

Positive Definite and Negative Definite Matrices

In the case of scalars, a number y is said to be positive if y > 0, negative if y < 0, nonnegative if y ≥ 0, and nonpositive if y ≤ 0. In the case of matrices, the corresponding concepts are positive definite, negative definite, positive semidefinite, and negative semidefinite, respectively. Corresponding to a square matrix B we define the quadratic form Q = x'Bx, where x is a nonnull vector. Then:

B is said to be positive definite if Q > 0.
B is said to be positive semidefinite if Q ≥ 0.
B is said to be negative definite if Q < 0.
B is said to be negative semidefinite if Q ≤ 0.

All these relations should hold for any nonnull vector x.

For a positive definite matrix B, the leading (diagonal) determinants of all orders are > 0. In particular, the diagonal elements are > 0 and |B| > 0. For example, consider the matrix

B = [  3   1  -3 ]
    [ -4   2   2 ]
    [  6  -4   7 ]

The diagonal elements are 3, 2, and 7, all positive. The diagonal determinants of order 2 are

|  3  1 |         | 3  -3 |         |  2  2 |
| -4  2 | = 10,   | 6   7 | = 39,   | -4  7 | = 22

which are all positive. Also, |B| = 94 > 0. Hence B is positive definite.
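These checks can be reproduced numerically. The sketch below (NumPy; everything beyond the matrix itself is our own addition) recomputes the order-2 diagonal determinants and |B|, and confirms positivity of the quadratic form x'Bx by examining the symmetric part (B + B')/2, which alone determines the sign of x'Bx:

```python
import numpy as np

B = np.array([[3, 1, -3],
              [-4, 2, 2],
              [6, -4, 7]])

# Order-2 diagonal (principal) determinants: keep the same pair of
# row and column indices.
for keep in ([0, 1], [0, 2], [1, 2]):
    sub = B[np.ix_(keep, keep)]
    print(keep, round(np.linalg.det(sub)))   # 10, 39, 22

print("det B =", round(np.linalg.det(B)))    # 94

# x'Bx = x'Sx with S = (B + B')/2, so the form is positive definite
# exactly when all eigenvalues of S are positive.
S = (B + B.T) / 2
print(np.linalg.eigvalsh(S))
```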



As yet another example, consider the quadratic form

Q = 4x1² + 9x2² + 2x3² + 6x1x2 + 6x1x3 + 8x2x3

Is this positive for all values of x1, x2, and x3? To answer this question, we write Q = x'Bx. The matrix B is given by

B = [ 4  3  3 ]
    [ 3  9  4 ]
    [ 3  4  2 ]

The diagonal terms are all positive. As for the three diagonal determinants of order 2, they are

| 4  3 |        | 9  4 |             | 4  3 |
| 3  9 | = 27,  | 4  2 | = 2,  and   | 3  2 | = -1

The first two are positive, but the last one is not. Also, |B| = -19. Hence B is not positive definite. The answer to the question asked is "no."
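The conclusion can be confirmed by exhibiting a vector that makes Q negative. A minimal sketch (the particular vector is our own choice, suggested by the failing minor in the (x1, x3) coordinates):

```python
import numpy as np

# Symmetric matrix of Q = 4x1^2 + 9x2^2 + 2x3^2 + 6x1x2 + 6x1x3 + 8x2x3
B = np.array([[4.0, 3.0, 3.0],
              [3.0, 9.0, 4.0],
              [3.0, 4.0, 2.0]])

print(round(np.linalg.det(B)))    # -19: B cannot be positive definite

# The order-2 minor in the (x1, x3) coordinates is negative, so a
# vector with x2 = 0 can make the form negative:
x = np.array([1.0, 0.0, -1.5])
print(x @ B @ x)                  # 4 + 4.5 - 9 = -0.5
```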

For a negative definite matrix, all leading determinants of odd order are < 0 and all leading determinants of even order are > 0. For semidefinite matrices we replace > 0 by ≥ 0 and < 0 by ≤ 0.

Suppose that A is an m × n matrix. Then A'A and AA' are square matrices of orders n × n and m × m, respectively. We can show that both these matrices are positive semidefinite.

Consider B = A'A. Then x'Bx = x'A'Ax. Define y = Ax. Then x'Bx = y'y = Σ yi², which is ≥ 0. Hence x'Bx ≥ 0 for all x; that is, B is positive semidefinite.

Finally, consider the two positive semidefinite matrices B = A'A and C = AA'. We shall write

A = [ 1  2 ]
    [ 0  1 ]
    [ 2  1 ]

Then

B = A'A = [ 5  4 ]    and    C = AA' = [ 5  2  4 ]
          [ 4  6 ]                     [ 2  1  1 ]
                                       [ 4  1  5 ]

are both positive semidefinite. Here B is in fact positive definite, while C is singular (|C| = 0) and hence only semidefinite.
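These matrices are small enough to verify directly. A sketch confirming that both products are positive semidefinite (symmetric with nonnegative eigenvalues) and that AA' is singular:

```python
import numpy as np

A = np.array([[1, 2],
              [0, 1],
              [2, 1]])

B = A.T @ A   # 2 x 2
C = A @ A.T   # 3 x 3
print(B)      # [[5 4] [4 6]]
print(C)      # [[5 2 4] [2 1 1] [4 1 5]]

# Symmetric with nonnegative eigenvalues => positive semidefinite.
print(np.linalg.eigvalsh(B))     # both strictly positive: B is definite
print(np.linalg.eigvalsh(C))     # one eigenvalue is 0: C is only semidefinite
print(np.linalg.matrix_rank(C))  # 2, the rank of A
```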

The Multivariate Normal Distribution

Let x = (x1, x2, . . . , xn)' be a set of variables which are normally distributed with mean vector μ and covariance matrix V. Then x is said to have an n-variate normal distribution Nn(μ, V). Its density function is given by

f(x) = (2π)^(-n/2) |V|^(-1/2) exp[ -(1/2)(x − μ)' V⁻¹ (x − μ) ]

If x ~ Nn(0, V), then x'V⁻¹x ~ χ² with d.f. n.
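This distributional result is easy to check by simulation. A sketch using NumPy (the covariance matrix V below is an arbitrary choice, built as LL' to guarantee positive definiteness):

```python
import numpy as np

rng = np.random.default_rng(1)
n, draws = 3, 50_000

# Arbitrary positive definite covariance matrix V = L L'
L = np.array([[2.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [0.3, 0.2, 1.5]])
V = L @ L.T
Vinv = np.linalg.inv(V)

# x ~ N_n(0, V): draw standard normals z and set x = L z
X = rng.standard_normal((draws, n)) @ L.T

# q_i = x_i' V^{-1} x_i for each draw
q = np.einsum('ij,jk,ik->i', X, Vinv, X)

# A chi-square variable with n d.f. has mean n and variance 2n.
print(q.mean(), q.var())   # close to 3 and 6
```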

Idempotent Matrices and the χ²-Distribution

A matrix A is said to be idempotent if A² = A. For example, consider A = X(X'X)⁻¹X'. Then

A² = X(X'X)⁻¹X'X(X'X)⁻¹X' = X(X'X)⁻¹X' = A

Thus A is idempotent. Such matrices play an important role in econometrics. We shall state two important theorems regarding the relationship between idempotent matrices and the χ²-distribution (proofs are omitted).*
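The idempotency of this projection matrix is easy to verify numerically. A minimal sketch (the matrix X is random and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3

# Hypothetical n x k matrix X of full column rank
X = rng.standard_normal((n, k))

# A = X (X'X)^{-1} X'
A = X @ np.linalg.inv(X.T @ X) @ X.T

print(np.allclose(A @ A, A))      # True: A is idempotent
print(np.linalg.matrix_rank(A))   # 3, the number of columns of X
```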

Let x = (x1, x2, . . . , xn)' be a set of independent normal variables with mean 0 and variance 1. We know that x'x = Σ xi² has a χ²-distribution with degrees of freedom (d.f.) n. But some other quadratic forms also have a χ²-distribution, as stated in the following theorems.

Theorem 1: If A is an n × n idempotent matrix of rank r, then x'Ax has a χ²-distribution with d.f. r.

Theorem 2: If A1 and A2 are two idempotent matrices with ranks r1 and r2, respectively, and A1A2 = 0, then x'A1x and x'A2x have independent χ²-distributions with d.f. r1 and r2, respectively.

We shall use this result for statistical inference in the multiple regression model in Chapter 4.
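Both theorems can be illustrated with the projection matrix A = X(X'X)⁻¹X' and its complement M = I − A (a simulation sketch; the sizes and seeds are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, draws = 10, 3, 20_000

X = rng.standard_normal((n, k))
A = X @ np.linalg.inv(X.T @ X) @ X.T   # idempotent, rank k
M = np.eye(n) - A                      # also idempotent, rank n - k, and AM = 0

Z = rng.standard_normal((draws, n))    # rows are draws of x ~ N(0, I_n)

# Theorem 1: x'Ax ~ chi-square with k d.f. (mean k, variance 2k)
q = np.einsum('ij,jk,ik->i', Z, A, Z)
print(q.mean(), q.var())               # close to 3 and 6

# Theorem 2: x'Ax and x'Mx are independent chi-squares (d.f. k and n - k)
p = np.einsum('ij,jk,ik->i', Z, M, Z)
print(p.mean())                        # close to n - k = 7
print(np.corrcoef(q, p)[0, 1])         # close to 0
```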

Returning to the result that x'V⁻¹x ~ χ² with d.f. n when x ~ Nn(0, V): note that with n = 1 we have V = σ², and we can see the analogy with the density function for the univariate normal considered earlier. To prove the result, consider the simpler case μ = 0 and make the transformation

y = V^(-1/2) x

Then the y's are linear functions of the x's and hence have a normal distribution. The covariance matrix of the y's is

E(yy') = E[ V^(-1/2) x x' V^(-1/2) ] = V^(-1/2) V V^(-1/2) = In

Thus the y's are independent normal with mean 0 and variance 1. Hence Σ yi² has a χ²-distribution with degrees of freedom n. But

Σ yi² = y'y = x' V^(-1/2) V^(-1/2) x = x' V⁻¹ x

Hence we have the result.

Trace of a Matrix

The trace of a matrix A is the sum of its diagonal elements. We denote this by Tr(A). Here are a few important results regarding traces.

*For proofs, see G. S. Maddala, Econometrics (New York: McGraw-Hill, 1977), pp. 455-456.


