
Sometimes, an alternative definition is used: $\hat{\theta}_n$ is an asymptotically unbiased estimator of $\theta$ if $\lim_{n \to \infty} E(\hat{\theta}_n) = \theta$. The problem with this definition is that $E(\hat{\theta}_n)$ may not exist, as in the case of the example we cited earlier.

Asymptotic Variance

Again we have two definitions: one is the variance of the limiting distribution, the other the limit of the variances. The problem with the latter definition is that $\operatorname{var}(\hat{\theta}_n)$ may not exist. The two definitions are:

1. $\operatorname{AE}[\sqrt{n}(\hat{\theta}_n - \theta)]^2$ if we consider the variance of the limiting distribution.
2. $\lim_{n \to \infty} E[\sqrt{n}(\hat{\theta}_n - \theta)]^2$ if we consider the limit of variances.

Some Examples

Consistency and asymptotic unbiasedness are different, and the two concepts of unbiasedness are also different; that is, $\operatorname{plim}(\hat{\theta}_n)$, $\operatorname{AE}(\hat{\theta}_n)$, and $\lim E(\hat{\theta}_n)$ are not all the same. Suppose that we have a sample of size $n$ from a normal distribution $N(\theta, 1)$. Consider $\hat{\theta}_n = \bar{x}_n$ as an estimator of $\theta$. Then

$$\operatorname{plim}(\hat{\theta}_n) = \operatorname{AE}(\hat{\theta}_n) = \lim_{n \to \infty} E(\hat{\theta}_n) = \theta$$

But suppose that we define

$$\hat{\theta}_n = \tfrac{1}{2}x_1 + \tfrac{1}{2}\bar{x}_n$$

Then

$$E(\hat{\theta}_n) = \tfrac{1}{2}\theta + \tfrac{1}{2}\theta = \theta \quad \text{and} \quad \lim_{n \to \infty} E(\hat{\theta}_n) = \theta$$

But $\operatorname{plim}(\hat{\theta}_n) = \tfrac{1}{2}x_1 + \tfrac{1}{2}\theta \neq \theta$. Thus $\operatorname{plim}(\hat{\theta}_n) \neq \lim E(\hat{\theta}_n)$, and we have an asymptotically unbiased estimator of $\theta$ that is not consistent. As another example, consider $1/\bar{x}_n$ as an estimator of $1/\theta$. We have $\operatorname{plim}(1/\bar{x}_n) = 1/\theta$, so the estimator is consistent. Also, $\operatorname{AE}(1/\bar{x}_n) = 1/\theta$. But $\lim E(1/\bar{x}_n)$ does not exist. Thus the estimator is asymptotically unbiased or not depending on which definition we use.
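To make the distinction concrete, here is a small Monte Carlo sketch (my own illustration, not from the text; the choice $\theta = 2$ and the variable names are arbitrary). The mean of $\hat{\theta}_n = \frac{1}{2}x_1 + \frac{1}{2}\bar{x}_n$ stays at $\theta$ for every $n$, while its variance settles near $\operatorname{var}(x_1)/4 = 1/4$ instead of shrinking to zero.

```python
# Sketch (not from the text): theta_hat_n = (1/2)x_1 + (1/2)x_bar_n is
# unbiased for every n (hence asymptotically unbiased) but not consistent:
# its variance stays near 1/4 while the variance of x_bar_n goes to zero.
import numpy as np

rng = np.random.default_rng(0)
theta, reps = 2.0, 5_000   # true mean of N(theta, 1); Monte Carlo replications

for n in (10, 100, 2_000):
    x = rng.normal(theta, 1.0, size=(reps, n))
    x_bar = x.mean(axis=1)
    theta_hat = 0.5 * x[:, 0] + 0.5 * x_bar
    print(f"n={n:>5}  mean(theta_hat)={theta_hat.mean():.3f}  "
          f"var(theta_hat)={theta_hat.var():.3f}  var(x_bar)={x_bar.var():.4f}")
# mean(theta_hat) stays near 2; var(theta_hat) stays near 0.25.
```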

Very often in the econometric literature, the asymptotic variances are obtained by replacing AE by plim and evaluating the plims. (The implicit assumption is that they are equal.) Thus the asymptotic variance of $\hat{\theta}_n$ is evaluated as

$$\operatorname{AE}[\sqrt{n}(\hat{\theta}_n - \theta)]^2 = \operatorname{plim}[\sqrt{n}(\hat{\theta}_n - \theta)]^2$$
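As a numerical illustration (my own, not from the text), the sketch below estimates the asymptotic variance of $1/\bar{x}_n$ as an estimator of $1/\theta$. The reference value $1/\theta^4$ comes from the delta method, which the text does not invoke; treat it as an assumption of this example.

```python
# Sketch (assumption: by the delta method, sqrt(n)(1/x_bar_n - 1/theta)
# has limiting distribution N(0, 1/theta^4) even though lim E(1/x_bar_n)
# does not exist).
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 5_000, 100_000

# The sample mean of n draws from N(theta, 1) is exactly N(theta, 1/n),
# so we can draw it directly instead of averaging raw samples.
x_bar = rng.normal(theta, 1.0 / np.sqrt(n), size=reps)
scaled_err = np.sqrt(n) * (1.0 / x_bar - 1.0 / theta)
print("empirical variance:", scaled_err.var())  # close to 1/theta**4 = 0.0625
```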

2.7 Sampling Distributions for Samples from a Normal Population

The most commonly used sampling distributions are those for samples from normal populations. We state the results here (without proof). Suppose that we have a sample of $n$ independent observations, namely, $y_1, y_2, \ldots, y_n$, from a normal population with mean $\mu$ and variance $\sigma^2$. Consider the following:



$$\bar{y} = \frac{1}{n}\sum_i y_i \quad \text{(sample mean)}, \qquad S^2 = \frac{1}{n-1}\sum_i (y_i - \bar{y})^2 \quad \text{(sample variance)} \tag{2.1}$$

Then

1. The sampling distribution of the sample mean $\bar{y}$ is also normal, with mean $\mu$ and variance $\sigma^2/n$.

2. $(n-1)S^2/\sigma^2$ has a $\chi^2$-distribution with degrees of freedom $(n-1)$. Further, the distributions of $\bar{y}$ and $S^2$ are independent.

3. Since

$$\frac{\sqrt{n}(\bar{y} - \mu)}{\sigma} \sim N(0, 1) \quad \text{and} \quad \frac{(n-1)S^2}{\sigma^2} \sim \chi^2_{n-1},$$

and these distributions are independent, we have (by the definition of the $t$-distribution as the distribution of a standard normal variable divided by the square root of an independent averaged $\chi^2$ variable) that $\sqrt{n}(\bar{y} - \mu)/S$ has a $t$-distribution with degrees of freedom $(n-1)$.

4. Also, $E(\bar{y}) = \mu$ and $E(S^2) = \sigma^2$, and thus $\bar{y}$ and $S^2$ are unbiased estimators for $\mu$ and $\sigma^2$, respectively.

These sampling distributions will also be used to get interval estimates and to test hypotheses about $\mu$ and $\sigma^2$.
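Results 1–3 are easy to check by simulation. The sketch below (my own illustration, not from the text; the parameter values are arbitrary) draws many samples of size $n = 20$ and compares simulated quantiles of $(n-1)S^2/\sigma^2$ and $\sqrt{n}(\bar{y} - \mu)/S$ with the theoretical $\chi^2_{19}$ and $t_{19}$ quantiles.

```python
# Monte Carlo check (sketch): (n-1)S^2/sigma^2 should match chi-square with
# n-1 d.f., and sqrt(n)(y_bar - mu)/S should match t with n-1 d.f.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, n, reps = 5.0, 3.0, 20, 10_000

y = rng.normal(mu, sigma, size=(reps, n))
y_bar = y.mean(axis=1)
s2 = y.var(axis=1, ddof=1)                         # divides by n - 1, as in (2.1)

chi_stat = (n - 1) * s2 / sigma**2                 # should be chi-square, 19 d.f.
t_stat = np.sqrt(n) * (y_bar - mu) / np.sqrt(s2)   # should be t, 19 d.f.

# Simulated vs. theoretical 97.5% quantiles.
print(np.quantile(chi_stat, 0.975), stats.chi2.ppf(0.975, df=n - 1))
print(np.quantile(t_stat, 0.975), stats.t.ppf(0.975, df=n - 1))
```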

2.8 Interval Estimation

In interval estimation we construct two functions $g_1(y_1, y_2, \ldots, y_n)$ and $g_2(y_1, y_2, \ldots, y_n)$ of the sample observations such that

$$\operatorname{Prob}(g_1 < \theta < g_2) = \alpha \quad \text{a given probability} \tag{2.2}$$

$\alpha$ is called the confidence coefficient, and the interval $(g_1, g_2)$ is called the confidence interval. Since $\theta$ is a parameter (or a constant that is unknown), the probability statement (2.2) is a statement about $g_1$ and $g_2$ and not about $\theta$. What it implies is that if we use the formulas $g_1(y_1, y_2, \ldots, y_n)$ and $g_2(y_1, y_2, \ldots, y_n)$ repeatedly with different samples, and in each case construct the confidence intervals using these formulas, then in $100\alpha$ percent of all the cases (samples) the interval given will include the true value.
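This repeated-sampling interpretation can be verified numerically. A minimal sketch (my own, not from the text), assuming a normal population and using the $t$-based interval derived in the illustration that follows; the parameter values are arbitrary:

```python
# Coverage sketch: the probability statement is about the random endpoints
# g1, g2, not about the fixed parameter. Repeating the t-interval recipe
# over many samples, about 100*alpha percent of the intervals should
# contain the true mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, n, reps, alpha = 5.0, 3.0, 20, 10_000, 0.95

t_crit = stats.t.ppf(0.5 + alpha / 2, df=n - 1)    # 2.093 for n = 20
y = rng.normal(mu, sigma, size=(reps, n))
y_bar = y.mean(axis=1)
s = y.std(axis=1, ddof=1)                          # square root of S^2 in (2.1)
half = t_crit * s / np.sqrt(n)
covered = (y_bar - half < mu) & (mu < y_bar + half)
print("coverage:", covered.mean())                 # close to alpha = 0.95
```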

As an illustration of how to use the sampling distributions to construct confidence intervals, consider a sample $y_1, y_2, \ldots, y_n$ of $n$ independent observations from a normal population with mean $\mu$ and variance $\sigma^2$. Then

$$\frac{\sqrt{n}(\bar{y} - \mu)}{\sigma} \sim N(0, 1), \qquad \frac{(n-1)S^2}{\sigma^2} \sim \chi^2_{n-1}, \qquad \frac{\sqrt{n}(\bar{y} - \mu)}{S} \sim t_{n-1}$$

where $\bar{y}$ and $S^2$ are the sample mean and sample variance defined in (2.1).

If the sample size $n$ is 20, so that the degrees of freedom $= n - 1 = 19$, we can refer to the $\chi^2$-tables with 19 degrees of freedom and say that

$$\operatorname{Prob}\left(\frac{19 S^2}{\sigma^2} > 32.852\right) = 0.025 \tag{2.3}$$

$$\operatorname{Prob}\left(10.117 < \frac{19 S^2}{\sigma^2} < 30.144\right) = 0.90 \tag{2.4}$$

Also, referring to the $t$-tables with 19 degrees of freedom, we find that

$$\operatorname{Prob}\left(-2.093 < \frac{\sqrt{n}(\bar{y} - \mu)}{S} < 2.093\right) = 0.95 \tag{2.5}$$
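As a convenience aside (not part of the text), the tabulated critical values used in (2.3)–(2.5) can be reproduced with scipy's quantile (ppf) functions:

```python
# Reproducing the printed chi-square and t table values for 19 d.f.
from scipy import stats

df = 19
print(stats.chi2.ppf(0.975, df))   # 32.852...  upper 2.5% point, used in (2.3)
print(stats.chi2.ppf(0.05, df))    # 10.117...  lower 5% point, used in (2.4)
print(stats.chi2.ppf(0.95, df))    # 30.144...  upper 5% point, used in (2.4)
print(stats.t.ppf(0.975, df))      # 2.093...   two-sided 5% point, used in (2.5)
```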

From equation (2.4) we get

$$\operatorname{Prob}\left(\frac{19 S^2}{30.144} < \sigma^2 < \frac{19 S^2}{10.117}\right) = 0.90$$

and if $S^2 = 9.0$ we get the 90% confidence interval for $\sigma^2$ as (5.7, 16.9). From equation (2.5) we get

$$\operatorname{Prob}\left(\bar{y} - \frac{2.093 S}{\sqrt{n}} < \mu < \bar{y} + \frac{2.093 S}{\sqrt{n}}\right) = 0.95$$

If $\bar{y} = 5$ and $S = 3.0$, we get the 95% confidence interval for $\mu$ as (3.6, 6.4).
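Both intervals can be reproduced directly from (2.4) and (2.5) with the text's numbers ($n = 20$; $S^2 = 9.0$ for the variance interval; $\bar{y} = 5$, $S = 3.0$ for the mean interval). A small sketch:

```python
# Reproducing the two intervals computed in the text.
import numpy as np

n, s2, y_bar = 20, 9.0, 5.0
s = np.sqrt(s2)

lo = (n - 1) * s2 / 30.144          # 90% interval for sigma^2, from (2.4)
hi = (n - 1) * s2 / 10.117
print(f"sigma^2: ({lo:.1f}, {hi:.1f})")                  # (5.7, 16.9)

half = 2.093 * s / np.sqrt(n)       # 95% interval for mu, from (2.5)
print(f"mu: ({y_bar - half:.1f}, {y_bar + half:.1f})")   # (3.6, 6.4)
```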

These intervals are called two-sided intervals. One can also construct one-sided intervals. For instance, equation (2.3) implies that

$$\operatorname{Prob}\left(\sigma^2 < \frac{19 S^2}{32.852}\right) = 0.025 \quad \text{or} \quad \operatorname{Prob}\left(\sigma^2 > \frac{19 S^2}{32.852}\right) = 0.975$$

If $S^2 = 9$, we get the 97.5% (right-sided) confidence interval for $\sigma^2$ as (5.205, $\infty$). We can construct a similar one-sided confidence interval for $\mu$.
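The one-sided bound can be checked the same way (again a sketch using the text's numbers):

```python
# From (2.3), sigma^2 exceeds 19*S^2/32.852 with probability 0.975, so with
# S^2 = 9 the right-sided 97.5% interval for sigma^2 is (5.205, infinity).
s2, df = 9.0, 19
lower = df * s2 / 32.852
print(f"97.5% interval for sigma^2: ({lower:.3f}, inf)")   # (5.205, inf)
```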

2.9 Testing of Hypotheses

We list here some essential points regarding hypothesis testing.

1. A statistical hypothesis is a statement about the values of some parameters in the hypothetical population from which the sample is drawn.


