




Figure A.3 Residuals from (a) the weekly and (b) the daily model.

on daily returns data. When the estimated coefficients change, so do the residuals. In this example they are quite different depending on whether daily or weekly data are used for the Brazilian model, as shown in Figure A.3.

The model coefficients are not random variables (although their values will always be unknown), but the estimators are random variables. Different data give different estimates, so the estimator distributions arise from differences in samples. For this reason they are called sampling distributions. Two types of random variables have been introduced in the context of regression: the stochastic error processes {ε_t} are assumed to have theoretical (often normal) distributions, and the coefficient estimators are random variables because different data sources, frequencies of observation, or time periods yield different values of the estimator.
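The idea of a sampling distribution can be illustrated with a short simulation (a sketch in Python; the coefficient values and sample sizes are arbitrary choices, not taken from the text): refitting the same regression to repeated samples produces a spread of slope estimates, and that spread is the sampling distribution of the estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1 = 0.5, 1.2          # "true" coefficients (unknown in practice)

def ols_slope(n):
    """OLS slope estimate from one simulated sample of size n."""
    x = rng.normal(size=n)
    y = beta0 + beta1 * x + rng.normal(size=n)   # normal errors
    return np.polyfit(x, y, 1)[0]

# Each sample gives a different estimate; together the estimates
# trace out the sampling distribution of the slope estimator.
estimates = np.array([ols_slope(50) for _ in range(2000)])
print(estimates.mean(), estimates.std())
```

The histogram of `estimates` would be centred near the true slope, with a spread that reflects the variability of the estimator across samples.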

The sampling distributions of the estimators may have more or less desirable properties. This will be determined by the method of estimation employed and the assumptions made about the distribution of {ε_t}. Two desirable properties for estimator distributions are unbiasedness and efficiency. Unbiasedness means that the expected value of the estimator equals the true value of the parameter, and efficiency means that the variance of the estimator is as small as possible.

Figure A.4 Sampling distributions of estimators: (a) unbiased and efficient; (b) biased and efficient; (c) unbiased and inefficient; (d) biased and inefficient.

Figure A.5 Distributions of a consistent estimator.

If many different estimates are obtained from an unbiased and efficient estimator, they would all be near the true parameter value, as in Figure A.4a. But if the distribution of the estimator is biased, or inefficient, or both, it would look like one of the other curves in Figure A.4. For an inefficient estimator, the estimates arising from small differences in samples are likely to vary considerably, as in Figures A.4c and A.4d. Thus estimates will not be robust even to small changes in the data. Parameter estimates may change considerably when a few more days of data are used, for no obvious reason. Perhaps the worst case is a biased and efficient estimator, where most estimates lie far away from the true value, as in Figure A.4b. In this case estimates will be quite robust, changing little even when the data change a lot. But they will almost always be far from the true model parameter.
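These trade-offs can be reproduced with three simple estimators of a population mean (a hypothetical illustration; the value μ = 10 and the shrinkage factor 0.8 are arbitrary): for normal data the sample mean is unbiased and efficient, the sample median is unbiased but less efficient, and a shrunken mean has small variance but is biased.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, n, reps = 10.0, 25, 5000
samples = rng.normal(mu, 2.0, size=(reps, n))

mean_est   = samples.mean(axis=1)           # unbiased, efficient
median_est = np.median(samples, axis=1)     # unbiased, but higher variance
shrunk_est = 0.8 * mean_est                 # biased (towards zero), low variance

for name, est in [("mean", mean_est), ("median", median_est), ("0.8*mean", shrunk_est)]:
    print(f"{name:9s} bias={est.mean() - mu:+.3f}  var={est.var():.3f}")
```

The shrunken estimator is "robust" in the sense described above: its estimates cluster tightly, but around the wrong value.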



In many financial markets it is possible to obtain hundreds if not thousands of data points. The asymptotic properties of estimators are therefore particularly relevant to models of financial markets. A consistent estimator has a distribution such as that shown in Figure A.5, which converges to the true value of the parameter as the sample size tends to infinity. That is, the probability limit (plim) of the estimator is the true parameter value, written plim b = β. The estimator may be biased and/or inefficient for small sample sizes, but as the number of observations used to make the estimate increases the distribution converges in probability to the true parameter value.
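Consistency can be seen numerically (a sketch with arbitrary parameter values): as the sample size grows, the spread of the OLS estimates collapses around the true parameter, roughly like 1/√n.

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 1.5   # true slope (illustrative)

def slope(n):
    """OLS slope through the origin from a sample of size n."""
    x = rng.normal(size=n)
    y = beta * x + rng.normal(size=n)
    return (x @ y) / (x @ x)

# The spread of the estimator shrinks as n grows: the sampling
# distribution converges in probability to the true value.
spreads = {n: np.array([slope(n) for _ in range(500)]).std() for n in (10, 100, 10_000)}
print(spreads)
```

Each successive distribution is tighter, which is exactly the picture of Figure A.5.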

OLS estimators are not always consistent (for example, when the model includes a time trend or the data are measured with error), but maximum likelihood estimators are almost always consistent, even though their small-sample properties may not be very good (§A.6.1).

Properties of OLS Estimators with Non-Stochastic Regressors

If possible, one should choose an estimation method that gives unbiased and efficient estimators when sample sizes are small. The Gauss-Markov theorem states that OLS estimators are unbiased and the most efficient of all linear unbiased estimators if:

> the explanatory variables are non-stochastic, and

> the error terms are stationary, homoscedastic, and not autocorrelated.²

To see this, substitute (A.1.9) into (A.1.10) to obtain

b = β + (X′X)⁻¹X′ε.    (A.1.12)

Assuming that X is non-stochastic, taking expectations shows that the OLS estimators are unbiased:

E(b) = E(β) + (X′X)⁻¹X′E(ε) = β.

The covariance matrix of b is

V(b) = E((b − β)(b − β)′).

Since b − β is a k × 1 vector, the covariance matrix is a symmetric k × k matrix, whose diagonal elements are the variances of each estimator, and whose off-diagonal elements are the covariances. By (A.1.12),

² A stronger assumption is that the error process is independent and identically distributed: we write ε_t ~ i.i.d.(0, σ²), where σ² denotes the variance of the process. An even stronger assumption, that the errors have independent normal distributions (ε_t ~ NID(0, σ²)), is usually necessary for the standard hypothesis tests that are outlined in Appendix 2.
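The unbiasedness result above, and the standard covariance formula V(b) = σ²(X′X)⁻¹ that follows from it under the Gauss-Markov assumptions, can be checked by Monte Carlo (a sketch; the choices of X, β and σ are arbitrary). Holding the regressor matrix X fixed across replications (non-stochastic regressors) and redrawing only the errors, the average of b = (X′X)⁻¹X′y matches β and the sample covariance of the estimates matches σ²(X′X)⁻¹.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 200, 3
beta = np.array([0.5, -1.0, 2.0])   # illustrative true coefficients
sigma = 0.3                          # error standard deviation

# Non-stochastic regressors: X is drawn once and then held fixed
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])

reps = 4000
B = np.empty((reps, k))
for i in range(reps):
    e = rng.normal(0.0, sigma, size=n)          # i.i.d. homoscedastic errors
    y = X @ beta + e
    B[i] = np.linalg.solve(X.T @ X, X.T @ y)    # b = (X'X)^{-1} X'y

print(B.mean(axis=0))    # close to beta: the estimator is unbiased
print(np.cov(B.T))       # close to sigma^2 (X'X)^{-1}
```

Violating either Gauss-Markov condition (for instance, making the error variance depend on X) would leave the reported covariance matrix different from σ²(X′X)⁻¹.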


