

Appendix to Chapter 3

In the following proofs we will use the compact notation $\sum y_i$ for $\sum_{i=1}^{n} y_i$.

(1) Proof That the Least Squares Estimators Are Best Linear Unbiased Estimators (BLUE)

Consider the regression model

$$y_i = \beta x_i + u_i \qquad i = 1, 2, \ldots, n$$

For simplicity we have omitted the constant term. We assume that the $u_i$ are independently distributed with mean 0 and variance $\sigma^2$. Since the $x_i$ are given constants, $E(y_i) = \beta x_i$ and $\operatorname{var}(y_i) = \sigma^2$. The least squares estimator of $\beta$ is

$$\hat{\beta} = \frac{\sum x_i y_i}{\sum x_i^2} = \sum c_i y_i$$

where $c_i = x_i / \sum x_i^2$. Thus $\hat{\beta}$ is a linear function of the sample observations $y_i$ and hence is called a linear estimator. Also,

$$E(\hat{\beta}) = \sum c_i E(y_i) = \beta \sum c_i x_i = \beta \frac{\sum x_i^2}{\sum x_i^2} = \beta$$
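As a quick numerical illustration (a sketch of my own, not part of the text; the data are invented), the weights $c_i = x_i/\sum x_i^2$ can be computed directly and checked to satisfy $\sum c_i x_i = 1$, with $\sum c_i y_i$ reproducing the closed-form estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # fixed regressor values
y = 2.0 * x + rng.normal(0, 1, size=x.size)    # y_i = beta*x_i + u_i with beta = 2

c = x / np.sum(x**2)                           # c_i = x_i / sum(x_j^2)
print(np.sum(c * x))                           # 1.0: the condition used above
print(np.sum(c * y))                           # beta_hat as the linear combination sum(c_i y_i)
print(np.sum(x * y) / np.sum(x**2))            # same value from the closed form
```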

Hence $\hat{\beta}$ is an unbiased linear estimator. We have to show that it is the best, i.e., that it has minimum variance in the class of linear unbiased estimators. Consider any linear estimator

$$\beta^* = \sum d_i y_i$$

If it is unbiased, we have

$$E(\beta^*) = \sum d_i E(y_i) = \beta \sum d_i x_i = \beta$$

Hence we should have $\sum d_i x_i = 1$. Since the $y_i$ are independent with a common variance $\sigma^2$, we have

$$\operatorname{var}(\beta^*) = \sigma^2 \sum d_i^2$$

We have to find the $d_i$ so that this variance is a minimum subject to the condition that $\sum d_i x_i = 1$. Hence we minimize $\sum d_i^2 - \lambda\left(\sum d_i x_i - 1\right)$, where $\lambda$ is the Lagrangean multiplier. Differentiating with respect to $d_i$ and equating to zero, we get

$$2 d_i - \lambda x_i = 0 \quad \text{or} \quad d_i = \frac{\lambda}{2} x_i$$

Multiplying both sides by $x_i$ and summing over $i$, we get

$$\sum d_i x_i = \frac{\lambda}{2} \sum x_i^2$$

But $\sum d_i x_i = 1$. Hence

$$\frac{\lambda}{2} = \frac{1}{\sum x_i^2}$$

Thus we get

$$d_i = \frac{x_i}{\sum x_i^2}$$

which are the least squares coefficients $c_i$. Thus the least squares estimator has the minimum variance in the class of linear unbiased estimators. This minimum variance is

$$\operatorname{var}(\hat{\beta}) = \frac{\sigma^2}{\sum x_i^2}$$
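To see the minimum-variance property numerically, the following sketch (my own illustration; the alternative weights $d_i = 1/(n x_i)$ are just one arbitrary choice satisfying $\sum d_i x_i = 1$) compares the Monte Carlo variance of the least squares estimator with that of another linear unbiased estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
beta, sigma, reps = 2.0, 1.0, 100_000

c = x / np.sum(x**2)                 # least squares weights
d = 1.0 / (x.size * x)               # alternative weights, sum(d_i x_i) = 1

u = rng.normal(0, sigma, size=(reps, x.size))
y = beta * x + u                     # each row is one sample of size n

b_ls = y @ c                         # least squares estimate per replication
b_alt = y @ d                        # alternative estimate per replication

print(b_ls.mean(), b_alt.mean())     # both close to beta = 2 (unbiased)
print(b_ls.var(), sigma**2 / np.sum(x**2))  # matches sigma^2 / sum(x_i^2)
print(b_alt.var())                   # noticeably larger variance
```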

Note that what we have shown is that the least squares estimator has the minimum variance among the class of linear unbiased estimators. It is possible in some cases to find nonlinear estimators that are unbiased but have a smaller variance than the linear estimators. However, if the $u_i$ are independently and normally distributed, then the least squares estimator has the minimum variance among all (linear and nonlinear) unbiased estimators. Proofs of these propositions are beyond our scope and hence are omitted.

(2) Derivation of the Sampling Distributions of the Least Squares Estimators

Consider the regression model

$$y_i = \alpha + \beta x_i + u_i \qquad u_i \sim IN(0, \sigma^2)$$

We have seen that the least squares estimators are

$$\hat{\beta} = \frac{S_{xy}}{S_{xx}} \quad \text{and} \quad \hat{\alpha} = \bar{y} - \hat{\beta}\bar{x}$$
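As a concrete check (my own illustration, with invented data), the two formulas can be evaluated on a small data set and compared against a library fit:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

S_xy = np.sum((x - x.mean()) * (y - y.mean()))
S_xx = np.sum((x - x.mean())**2)

beta_hat = S_xy / S_xx
alpha_hat = y.mean() - beta_hat * x.mean()
print(alpha_hat, beta_hat)

# np.polyfit returns [slope, intercept] for degree 1 -- the same numbers
print(np.polyfit(x, y, 1))
```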


To derive the sampling distributions, we will use the following result: If $y_1, y_2, \ldots, y_n$ are independent normal with a common variance $\sigma^2$, and if

$$L_1 = \sum c_i y_i \quad \text{and} \quad L_2 = \sum d_i y_i$$

are two linear functions of the $y_i$, then $L_1$ and $L_2$ are jointly normally distributed with

$$\operatorname{var}(L_1) = \sigma^2 \sum c_i^2 \qquad \operatorname{var}(L_2) = \sigma^2 \sum d_i^2$$

$$\operatorname{cov}(L_1, L_2) = \sigma^2 \sum c_i d_i$$
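This result is easy to verify by simulation (a sketch of mine; the weight vectors below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma, reps = 5, 1.5, 200_000

c = np.array([0.2, -0.1, 0.4, 0.0, 0.3])   # arbitrary weights for L1
d = np.array([0.1, 0.5, -0.2, 0.3, 0.1])   # arbitrary weights for L2

y = rng.normal(0, sigma, size=(reps, n))   # independent normals, common variance
L1, L2 = y @ c, y @ d

print(L1.var(), sigma**2 * np.sum(c**2))   # var(L1) = sigma^2 sum(c_i^2)
print(L2.var(), sigma**2 * np.sum(d**2))   # var(L2) = sigma^2 sum(d_i^2)
cov = np.mean((L1 - L1.mean()) * (L2 - L2.mean()))
print(cov, sigma**2 * np.sum(c * d))       # cov(L1, L2) = sigma^2 sum(c_i d_i)
```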

We now write $\hat{\beta}$ and $\hat{\alpha}$ as linear functions of the $y_i$. First we write

$$S_{xy} = \sum (x_i - \bar{x})(y_i - \bar{y}) = \sum (x_i - \bar{x}) y_i - \bar{y} \sum (x_i - \bar{x}) = \sum (x_i - \bar{x}) y_i$$

The last term is zero since $\sum (x_i - \bar{x}) = 0$. Thus

$$\hat{\beta} = \frac{S_{xy}}{S_{xx}} = \sum c_i y_i$$

where $c_i = (x_i - \bar{x})/S_{xx}$. Also,

$$\hat{\alpha} = \bar{y} - \hat{\beta}\bar{x} = \frac{1}{n}\sum y_i - \bar{x}\sum c_i y_i = \sum d_i y_i$$

where

$$d_i = \frac{1}{n} - \bar{x} c_i = \frac{1}{n} - \frac{\bar{x}(x_i - \bar{x})}{S_{xx}}$$

Thus

$$\operatorname{var}(\hat{\beta}) = \sigma^2 \sum c_i^2 = \sigma^2 \frac{\sum (x_i - \bar{x})^2}{S_{xx}^2} = \frac{\sigma^2}{S_{xx}}$$

$$\operatorname{var}(\hat{\alpha}) = \sigma^2 \sum d_i^2 = \sigma^2 \sum \left[ \frac{1}{n} - \frac{\bar{x}(x_i - \bar{x})}{S_{xx}} \right]^2$$

Since $\sum (x_i - \bar{x}) = 0$, the cross-product term vanishes and

$$\sum d_i^2 = \frac{1}{n} + \frac{\bar{x}^2 \sum (x_i - \bar{x})^2}{S_{xx}^2} = \frac{1}{n} + \frac{\bar{x}^2}{S_{xx}}$$

Hence

$$\operatorname{var}(\hat{\alpha}) = \sigma^2 \left( \frac{1}{n} + \frac{\bar{x}^2}{S_{xx}} \right) \qquad \operatorname{cov}(\hat{\alpha}, \hat{\beta}) = \sigma^2 \sum c_i d_i$$
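These identities in the weights can be verified without any simulation (my own check; the last line also evaluates $\sum c_i d_i$, which works out to $-\bar{x}/S_{xx}$ because $\sum c_i = 0$):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
n, xbar = x.size, x.mean()
S_xx = np.sum((x - xbar)**2)

c = (x - xbar) / S_xx            # weights for beta_hat
d = 1.0 / n - xbar * c           # weights for alpha_hat

print(np.sum(c**2), 1 / S_xx)                 # sum(c_i^2) = 1/S_xx
print(np.sum(d**2), 1 / n + xbar**2 / S_xx)   # sum(d_i^2) = 1/n + xbar^2/S_xx
print(np.sum(c * d), -xbar / S_xx)            # sum(c_i d_i) = -xbar/S_xx
```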


