




An important property of the normal distribution is that any linear function of normally distributed variables is also normally distributed. This is true whether the variables are independent or correlated. If

$$X_1 \sim N(\mu_1, \sigma_1^2) \quad \text{and} \quad X_2 \sim N(\mu_2, \sigma_2^2)$$

and the correlation between $X_1$ and $X_2$ is $\rho$, then

$$a_1 X_1 + a_2 X_2 \sim N(a_1 \mu_1 + a_2 \mu_2,\ a_1^2 \sigma_1^2 + a_2^2 \sigma_2^2 + 2 \rho a_1 a_2 \sigma_1 \sigma_2)$$

In particular,

$$X_1 + X_2 \sim N(\mu_1 + \mu_2,\ \sigma_1^2 + \sigma_2^2 + 2 \rho \sigma_1 \sigma_2)$$

$$X_1 - X_2 \sim N(\mu_1 - \mu_2,\ \sigma_1^2 + \sigma_2^2 - 2 \rho \sigma_1 \sigma_2)$$
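To make the property concrete, here is a minimal simulation sketch in Python (not from the text; all parameter values are illustrative choices) that draws correlated normal pairs and checks that $a_1 X_1 + a_2 X_2$ has the mean and variance given by the formula above.

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2 = 1.0, 2.0      # means of X1 and X2 (illustrative)
s1, s2 = 2.0, 3.0        # standard deviations (illustrative)
rho = 0.5                # correlation between X1 and X2
a1, a2 = 1.5, -0.7       # coefficients of the linear combination

# Draw correlated normal pairs with the implied covariance matrix.
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
x = rng.multivariate_normal([mu1, mu2], cov, size=1_000_000)
z = a1 * x[:, 0] + a2 * x[:, 1]

# Moments implied by the formula in the text.
mean_theory = a1 * mu1 + a2 * mu2
var_theory = a1**2 * s1**2 + a2**2 * s2**2 + 2 * rho * a1 * a2 * s1 * s2

print(z.mean(), mean_theory)  # simulated vs. theoretical mean
print(z.var(), var_theory)    # simulated vs. theoretical variance
```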

Related Distributions

In addition to the normal distribution, there are other probability distributions that we will be using frequently. These are the χ², t, and F distributions tabulated in the Appendix. These distributions are derived from the normal distribution and are defined as described below.

χ²-Distribution

If $x_1, x_2, \ldots, x_n$ are independent normal variables with mean zero and variance 1, that is, $x_i \sim \mathrm{IN}(0, 1)$, $i = 1, 2, \ldots, n$, then

$$Z = \sum_{i=1}^{n} x_i^2$$

is said to have the χ²-distribution with degrees of freedom $n$, and we will write this as $Z \sim \chi_n^2$. The subscript $n$ denotes the degrees of freedom. The χ²-distribution is thus the distribution of the sum of squares of $n$ independent standard normal variables.

If $x_i \sim \mathrm{IN}(0, \sigma^2)$, then $Z$ should be defined as

$$Z = \sum_{i=1}^{n} \frac{x_i^2}{\sigma^2}$$

The χ²-distribution also has an "additive property," although it is different from the additive property of the normal distribution and is much more restrictive. The property is:

If $Z_1 \sim \chi_n^2$ and $Z_2 \sim \chi_m^2$ and $Z_1$ and $Z_2$ are independent, then $Z_1 + Z_2 \sim \chi_{n+m}^2$.

Note that we need independence and we can consider simple additions only, not any general linear combinations. Even this limited property is useful in practical applications. There are many distributions for which even this limited property does not hold.
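Both the definition and the additive property can be checked by simulation. The sketch below (again illustrative, with arbitrary degrees of freedom) sums squares of standard normals to build two independent χ² variables and tests their sum against a χ² distribution with $n + m$ degrees of freedom.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, m = 5, 7  # degrees of freedom (illustrative)

# Sum of squares of independent standard normals: chi-square by definition.
z1 = (rng.standard_normal((200_000, n)) ** 2).sum(axis=1)
z2 = (rng.standard_normal((200_000, m)) ** 2).sum(axis=1)

# Additive property: the sum should be chi-square with n + m d.f.
result = stats.kstest(z1 + z2, stats.chi2(n + m).cdf)
print(result.pvalue)  # a large p-value is consistent with chi2(n + m)
```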



t-Distribution

If $x \sim N(0, 1)$ and $y \sim \chi_n^2$ and $x$ and $y$ are independent, $Z = x/\sqrt{y/n}$ has a t-distribution with degrees of freedom $n$. We write this as $Z \sim t_n$. The subscript $n$ again denotes the degrees of freedom. Thus the t-distribution is the distribution of a standard normal variable divided by the square root of an independent averaged χ² variable (a χ² variable divided by its degrees of freedom). The t-distribution is a symmetric probability distribution like the normal distribution but is flatter than the normal and has longer tails. As the degrees of freedom $n$ approach infinity, the t-distribution approaches the normal distribution.
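The construction can be written out directly. The following sketch (illustrative parameters) builds $Z = x/\sqrt{y/n}$ from a standard normal and an independent χ² variable and compares it with the $t_n$ distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 8  # degrees of freedom (illustrative)

x = rng.standard_normal(200_000)                   # x ~ N(0, 1)
y = stats.chi2(n).rvs(200_000, random_state=rng)   # y ~ chi2_n, independent of x
z = x / np.sqrt(y / n)                             # the construction in the text

print(stats.kstest(z, stats.t(n).cdf).pvalue)      # consistent with t_n
```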

F-Distribution

If $y_1 \sim \chi_{n_1}^2$ and $y_2 \sim \chi_{n_2}^2$ and $y_1$ and $y_2$ are independent, $Z = (y_1/n_1)/(y_2/n_2)$ has the F-distribution with degrees of freedom (d.f.) $n_1$ and $n_2$. We write this as

$$Z \sim F_{n_1, n_2}$$

The first subscript, $n_1$, refers to the d.f. of the numerator, and the second subscript, $n_2$, refers to the d.f. of the denominator. The F-distribution is thus the distribution of the ratio of two independent averaged χ² variables.
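As with the t-distribution, the ratio form can be checked directly. This sketch (illustrative degrees of freedom) forms the ratio of two independent averaged χ² variables and compares it with $F_{n_1, n_2}$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n1, n2 = 4, 10  # numerator and denominator d.f. (illustrative)

y1 = stats.chi2(n1).rvs(200_000, random_state=rng)
y2 = stats.chi2(n2).rvs(200_000, random_state=rng)
z = (y1 / n1) / (y2 / n2)  # ratio of averaged chi-square variables

print(stats.kstest(z, stats.f(n1, n2).cdf).pvalue)  # consistent with F(n1, n2)
```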

2.5 Classical Statistical Inference

Statistical inference is the area that describes the procedures by which we use the observed data to draw conclusions about the population from which the data came or about the process by which the data were generated. Our assumption is that there is an unknown process that generates the data we have and that this process can be described by a probability distribution, which, in turn, can be characterized by some unknown parameters. For instance, for a normal distribution the unknown parameters are $\mu$ and $\sigma^2$.

Broadly speaking, statistical inference can be classified under two headings: classical inference and Bayesian inference. Classical statistical inference is based on two premises:

1. The sample data constitute the only relevant information.

2. The construction and assessment of the different procedures for inference are based on long-run behavior under essentially similar circumstances.

In Bayesian inference we combine sample information with prior information. Suppose that we draw a random sample $y_1, y_2, \ldots, y_n$ of size $n$ from a normal population with mean $\mu$ and variance $\sigma^2$ (assumed known), and we want to make inferences about $\mu$.

In classical inference we take the sample mean $\bar{y}$ as our estimate of $\mu$. Its variance is $\sigma^2/n$. The inverse of this variance is known as the sample precision. Thus the sample precision is $n/\sigma^2$.

In Bayesian inference we have prior information on $\mu$. This is expressed in terms of a probability distribution known as the prior distribution. Suppose that the prior distribution is normal with mean $\mu_0$ and variance $\sigma_0^2$, that is, precision $1/\sigma_0^2$. We now combine this with the sample information to obtain what is known as the posterior distribution of $\mu$. This distribution can be shown to be normal. Its mean is a weighted average of the sample mean $\bar{y}$ and the prior mean $\mu_0$, weighted by the sample precision and prior precision, respectively. Thus

$$\hat{\mu}\ \text{(Bayesian)} = \frac{w_1 \bar{y} + w_2 \mu_0}{w_1 + w_2}$$

where $w_1 = n/\sigma^2$ = sample precision and $w_2 = 1/\sigma_0^2$ = prior precision.

Also, the precision (or inverse of the variance) of the posterior distribution of $\mu$ is $w_1 + w_2$, that is, the sum of the sample precision and the prior precision.

For instance, if the sample mean is 20 with variance 4 and the prior mean is 10 with variance 2, we have

$$\text{posterior mean} = \frac{\frac{1}{4}(20) + \frac{1}{2}(10)}{\frac{1}{4} + \frac{1}{2}} = 13.33$$

$$\text{posterior variance} = \left(\frac{1}{4} + \frac{1}{2}\right)^{-1} = \frac{4}{3} = 1.33$$

The posterior mean will lie between the sample mean and the prior mean. The posterior variance will be less than both the sample and prior variances.
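The weighted-average rule is easy to compute. Here is a small helper (a hypothetical function, not from the text) that combines a sample mean and a prior mean in inverse proportion to their variances and reproduces the numbers above.

```python
def combine(sample_mean, sample_var, prior_mean, prior_var):
    """Posterior mean and variance for a normal mean with a normal prior."""
    w1 = 1.0 / sample_var   # sample precision
    w2 = 1.0 / prior_var    # prior precision
    post_mean = (w1 * sample_mean + w2 * prior_mean) / (w1 + w2)
    post_var = 1.0 / (w1 + w2)  # posterior precision is w1 + w2
    return post_mean, post_var

# The example from the text: sample mean 20 (variance 4), prior mean 10 (variance 2).
print(combine(20, 4, 10, 2))  # (13.33..., 1.33...)
```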

We do not discuss Bayesian inference in this book, because this would take us into a lot more detail than we intend to cover. However, the basic notion of combining the sample mean and prior mean in inverse proportion to their variances will be a useful one to remember.

Returning to classical inference, it is customary to discuss classical statistical inference under three headings:

1. Point estimation.

2. Interval estimation.

3. Testing of hypotheses.

Point Estimation

Suppose that the probability distribution involves a parameter $\theta$ and we have a sample of size $n$, namely $y_1, y_2, \ldots, y_n$, from this probability distribution. In point estimation we construct a function $g(y_1, y_2, \ldots, y_n)$ of these observations and say that $g$ is our estimate (guess) of $\theta$. The common terminology is to call the function $g(y_1, y_2, \ldots, y_n)$ an estimator and its value in a particular sample an estimate. Thus an estimator is a random variable and an estimate is a particular value of this random variable.
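The estimator/estimate distinction is easy to see by simulation: the rule (here, the sample mean) is fixed, but its value changes from sample to sample. A brief sketch, with illustrative values of $\mu$ and $\sigma$:

```python
import numpy as np

rng = np.random.default_rng(4)
for _ in range(3):
    # Each draw is a new sample; the estimator (the sample mean as a rule)
    # is the same, but each sample gives a different numerical estimate of mu.
    sample = rng.normal(loc=5.0, scale=2.0, size=50)  # illustrative mu=5, sigma=2
    print(sample.mean())
```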

For more discussion of Bayesian econometrics, refer to A. Zellner, An Introduction to Bayesian Inference in Econometrics (New York: Wiley, 1971), and E. E. Leamer, Specification Searches: Ad Hoc Inference with Nonexperimental Data (New York: Wiley, 1978).



