
Put another way, Prob(β > b - t_{ν,α}(est. s.e. b)) = 1 - α. Similarly, a lower 100(1 - α)% confidence interval is (-∞, b + t_{ν,α}(est. s.e. b)), which is another way of saying that Prob(β < b + t_{ν,α}(est. s.e. b)) = 1 - α.
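These one-sided bounds are straightforward to compute. The sketch below (assuming scipy is available; variable names are illustrative) uses the trend estimates from the numerical example of Appendix 1, b = 2.5 and est. s.e. b = √0.383 with ν = 3 degrees of freedom.

```python
# A minimal sketch of the one-sided confidence bounds above, using the
# trend estimates b = 2.5, est. s.e. b = sqrt(0.383), nu = 3 from the
# numerical example in Appendix 1 (variable names are illustrative).
from math import sqrt
from scipy import stats

b, se_b, nu, alpha = 2.5, sqrt(0.383), 3, 0.05
t_crit = stats.t.ppf(1 - alpha, nu)   # upper-tail critical value t_{nu,alpha}

print(f"upper interval: ({b - t_crit * se_b:.3f}, +inf)")  # beta > bound with prob. 1 - alpha
print(f"lower interval: (-inf, {b + t_crit * se_b:.3f})")  # beta < bound with prob. 1 - alpha
```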

A.2.2 t-Tests

To illustrate the basic idea of a t-test, consider the simple numerical example in Appendix 1 and perform a t-test of whether there is a significant deterministic trend in the data.11 Suppose the null hypothesis H0: β = 0 is tested against the alternative hypothesis H1: β ≠ 0. The test statistic is t-distributed with 3 degrees of freedom, and is given by

t = (b - β)/(est. s.e. b).

Evaluating t under the null hypothesis, given the estimated model in §A.1.2 and §A.1.4, yields a particular value for t, viz.

t* = (2.5 - 0)/√0.383 = 4.04.

Statistical tables give critical values of the t_3 distribution. For a two-sided 5% test the critical values are ±t_{3,0.025} = ±3.182, and for a 1% test the critical values are ±t_{3,0.005} = ±5.841. Since 4.04 lies in the 5% critical region of the t_3 distribution but not in the 1% critical region, we can reject the null hypothesis at 5% but not at 1%. We conclude that it is reasonable to include the trend in the model, but that it is not very highly significant.
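The test statistic and the critical values can be verified in a few lines; this sketch assumes scipy and simply mirrors the arithmetic above.

```python
# Reproducing the two-sided t-test above (a sketch, assuming scipy).
from math import sqrt
from scipy import stats

t_star = (2.5 - 0) / sqrt(0.383)        # test statistic under H0: beta = 0
crit_5pct = stats.t.ppf(1 - 0.025, 3)   # two-sided 5% critical value of t_3
crit_1pct = stats.t.ppf(1 - 0.005, 3)   # two-sided 1% critical value of t_3
print(round(t_star, 2), round(crit_5pct, 3), round(crit_1pct, 3))
# 4.04 3.182 5.841: reject H0 at 5% but not at 1%
```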

When an OLS regression is run in a statistical package, the estimated standard errors of the estimates and the t-ratios for the null hypothesis H0: β = 0 are automatically computed. For example, estimating the daily CAPM for Eletrobras with the Brazilian index Ibovespa gives the following Excel output for the period from 1 August 1994 to 30 December 1997:

              Coefficient    Standard error    t-Ratio
  Intercept    -0.00025        0.000608       -0.41369
  Ibovespa      1.211073       0.021586       56.1039

With hundreds of data points, the critical values of these t-statistics are the same as the normal critical values (Z_{0.05} = 1.645, Z_{0.025} = 1.96, Z_{0.01} = 2.326). The t-statistic of 56.1039 shows that the index is a very highly significant determinant of Eletrobras returns, but the intercept is not statistically significant (t = -0.41369), as implied by the Sharpe-Lintner model (§8.1.1).
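Output of this kind is not specific to Excel. As a sketch of how any package produces it, the following regression on simulated daily returns (synthetic data, not the actual Eletrobras/Ibovespa sample) reports coefficients, standard errors and t-ratios via statsmodels.

```python
# A sketch of how such regression output is produced, here with
# statsmodels on simulated returns (synthetic data, not the real sample).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
x = rng.normal(0.0, 0.02, 850)                          # 'index' returns
y = -0.00025 + 1.21 * x + rng.normal(0.0, 0.018, 850)   # 'stock' returns

fit = sm.OLS(y, sm.add_constant(x)).fit()
print(fit.params)    # intercept and slope estimates
print(fit.bse)       # estimated standard errors
print(fit.tvalues)   # t-ratios for H0: parameter = 0
```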

11This is not at all a good statistical test; we use it here just to illustrate ideas. Much better tests are described in §11.1.6.



The estimated standard errors of the coefficient estimates may seem redundant in the table above: the t-ratio for the hypothesis that a model parameter is zero is just the ratio of the coefficient estimate to its estimated standard error. The estimated standard errors are reported because they allow many other hypotheses to be tested, such as

H0: β = 1 against H1: β < 1.

In the current example, this alternative hypothesis is that Eletrobras is a low-risk stock. It cannot be accepted, because the critical region of the test is (-∞, -Z_α) and the test statistic t = (1.211073 - 1)/0.021586 = 0.211073/0.021586 = 9.78 does not lie in this region.

The estimated standard errors also give confidence intervals for the model parameters. For example, a two-sided 95% confidence interval for β is 1.211073 ± 1.96 × 0.021586, that is, (1.1688, 1.2534).
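Both the one-sided test of H0: β = 1 and the confidence interval follow directly from the reported estimate and standard error, as this sketch (assuming scipy) shows.

```python
# The one-sided test of H0: beta = 1 and the two-sided 95% interval,
# computed from the reported estimate and its standard error.
from scipy import stats

b, se_b = 1.211073, 0.021586
t = (b - 1) / se_b            # 9.78: nowhere near the lower tail (-inf, -Z_alpha)
z = stats.norm.ppf(0.975)     # 1.96 for a two-sided 95% interval
print(round(t, 2))                                     # 9.78
print(round(b - z * se_b, 4), round(b + z * se_b, 4))  # 1.1688 1.2534
```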

Knowledge of the estimated covariance matrix of the regression estimates makes it possible to test quite general linear hypotheses with a simple t-test, provided there is only one restriction. At the most general level the t-test statistic in a linear regression takes the form

t = f(b)/est. s.e. f(b) ~ t_{T-k},     (A.2.1)

where f(·) = 0 is some linear restriction on the model parameters, and b are the estimated parameter values.12 For example, if the null hypothesis is H0: α = β then the linear restriction is f(α, β) = α - β = 0, so the test statistic is

t = (a - b)/est. s.e.(a - b).

The denominator is calculated as the square root of the estimate of V(a - b). Since

V(a - b) = V(a) + V(b) - 2 cov(a, b),

the estimate of V(a - b) is obtained by substituting elements of the estimated covariance matrix into this formula. Similarly, for the null hypothesis H0: 2α = β - 1 the linear restriction is 2α - β + 1 = 0, so the test statistic is

t = (2a - b + 1)/est. s.e.(2a - b + 1).

Again the denominator is calculated by substituting elements of the estimated covariance matrix in a variance formula, in this case using

V(2a - b + 1) = V(2a - b) = 4V(a) + V(b) - 4 cov(a, b).
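Each of these variance formulas is the special case V(c'b) = c'Vc for the appropriate coefficient vector c. The sketch below makes the general pattern explicit; the helper name is invented and the parameter estimates are illustrative, though the covariance entries follow the trend model figures quoted in the example after this.

```python
# A generic t-statistic for one linear restriction c'theta = r, using
# V(c'b) = c' V c (helper name and estimates are illustrative).
import numpy as np

def linear_restriction_t(b, V, c, r):
    """t = (c'b - r) / sqrt(c' V c) for a single linear restriction."""
    c = np.asarray(c, dtype=float)
    return (c @ b - r) / np.sqrt(c @ V @ c)

b = np.array([8.5, 2.5])            # (a, b) estimates, as in the trend example
V = np.array([[4.217, -1.15],
              [-1.15,  0.383]])     # estimated covariance matrix (figures of SS A.1.4)

# H0: 2*alpha - beta + 1 = 0, i.e. c = (2, -1) and r = -1
print(round(linear_restriction_t(b, V, c=[2, -1], r=-1.0), 3))
```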

To see how this works in practice, again consider the deterministic trend model of Appendix 1 with the test

H0: α + β = 7 versus H1: α + β > 7.

Since V(a + b - 7) = V(a + b), the test statistic (A.2.1) is

t = [(a + b) - 7]/est. s.e.(a + b).

The estimated standard error of a + b is calculated using the formula

V(a + b) = V(a) + V(b) + 2 cov(a, b).

Putting in the estimated variances and covariances from the estimated covariance matrix in §A.1.4 and taking square roots gives

est. s.e.(a + b) = √(4.217 + 0.383 - 2 × 1.15) = 1.517.

Evaluating t under H0 and given our data gives t* = (11 - 7)/1.517 = 2.638, and since t_{3,0.05} = 2.353 we can reject the null hypothesis at 5%.
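As a numerical check on this example (a sketch assuming scipy, using a + b = 11 together with the covariance figures quoted above, where cov(a, b) = -1.15):

```python
# Verifying the one-sided test of H0: alpha + beta = 7.
from math import sqrt
from scipy import stats

se_sum = sqrt(4.217 + 0.383 - 2 * 1.15)   # est. s.e.(a + b), with cov(a, b) = -1.15
t_star = (11 - 7) / se_sum
print(round(t_star, 3), round(stats.t.ppf(0.95, 3), 3))   # 2.638 2.353 -> reject at 5%
```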

12The t-distribution of (A.2.1) is based on the normality of linear functions of OLS estimators (§A.1.3).



A.2.3 F-Tests

F-tests are commonly used to test the joint significance of several explanatory variables in a linear model. For example, they can form the basis of Granger causality tests in vector autoregressions (§11.4.3).

Complex hypotheses involving more than two parameters may be formulated as a set of q linear restrictions. It is convenient to write these in matrix form as Rβ = q, where R is a q × k matrix of coefficients on the model parameters and q is a q × 1 vector of constants. For example, the null hypothesis

β₁ = β₂,   β₁ + β₃ = 0,   β₂ + 2β₃ = β₁ + 2

may be written

    [ 1  -1   0 ] [β₁]   [0]
    [ 1   0   1 ] [β₂] = [0]
    [-1   1   2 ] [β₃]   [2]
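One can confirm that this matrix form encodes the three restrictions: any parameter vector satisfying them must give Rβ = q. A quick numpy check (the particular β used is just one solution of the restrictions):

```python
# Checking that R @ beta = q reproduces the three restrictions.
import numpy as np

R = np.array([[ 1, -1, 0],
              [ 1,  0, 1],
              [-1,  1, 2]])
q = np.array([0, 0, 2])

beta = np.array([-1.0, -1.0, 1.0])   # satisfies beta1 = beta2, beta1 + beta3 = 0,
                                     # and beta2 + 2*beta3 = beta1 + 2
print(np.allclose(R @ beta, q))      # True
```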

Complex hypotheses of the form Rβ = q can be tested using the F-statistic

F = [(RSS_R - RSS_U)/q]/[RSS_U/(T - k)] ~ F_{q,T-k}.     (A.2.2)

To calculate the F-statistic the regression model is estimated twice: first with no restrictions, to obtain the unrestricted residual sum of squares RSS_U, and then after imposing the q restrictions of the null hypothesis, to obtain the restricted residual sum of squares RSS_R.
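In code, the recipe is to fit both models and compare their residual sums of squares. The sketch below (assuming scipy; the RSS figures, T, k and q are hypothetical) evaluates (A.2.2) and its p-value.

```python
# Evaluating the F-statistic (A.2.2) from restricted and unrestricted
# fits (the RSS figures, T, k and q here are hypothetical).
from scipy import stats

RSS_U, RSS_R = 12.4, 15.9   # unrestricted / restricted residual sums of squares
T, k, q = 100, 4, 3         # sample size, parameters, number of restrictions

F = ((RSS_R - RSS_U) / q) / (RSS_U / (T - k))
p_value = stats.f.sf(F, q, T - k)   # upper-tail probability of F_{q,T-k}
print(round(F, 3), round(p_value, 4))
```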

Here is an example that illustrates how to construct a restricted model. Suppose the test is of the null hypothesis


