
[Figure 8.3 The posterior density with: (a) an uncertain prior; (b) a certain prior. Each panel plots the prior and likelihood densities against the parameter value.]

density. In this case the posterior mean will be close to the sample mean and prior beliefs will have little influence on the parameter estimates. But in Figure 8.3b, where prior beliefs are expressed with a high degree of confidence, the posterior density is much closer to the prior density and parameter estimates will be much influenced by subjective prior beliefs.

We have seen that it is not just prior expectations that influence the Bayesian estimate: the degree of confidence held in one's beliefs also has an effect. In Bayesian analysis the posterior density will take more or less account of objective sample information, depending on the confidence of beliefs as represented by the variance of the prior densities. How confident should prior beliefs be? One should always use a prior that reflects all the information, views and opinions that one holds a priori, no more and no less. This is crucial for rational descriptions and decision-making.
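To see how the variance of the prior governs this trade-off, here is a minimal sketch (hypothetical numbers, not from the text) for a scalar parameter with a normal prior and normal likelihood: as the prior standard deviation shrinks, the posterior mean moves from the sample mean toward the prior mean, exactly the behaviour pictured in Figure 8.3.

```python
def posterior_mean(prior_mean, prior_sd, sample_mean, sample_se):
    """Normal prior + normal likelihood: the posterior mean is a
    precision-weighted average of the prior and sample means."""
    w_prior = 1.0 / prior_sd ** 2    # precision = confidence in the prior
    w_sample = 1.0 / sample_se ** 2  # precision of the sample information
    return (w_prior * prior_mean + w_sample * sample_mean) / (w_prior + w_sample)

# Fixed sample evidence; only the confidence in the prior changes.
for prior_sd in (1.0, 0.1, 0.01):    # uncertain prior -> certain prior
    m = posterior_mean(0.5, prior_sd, sample_mean=1.2, sample_se=0.1)
    print(f"prior sd {prior_sd:>5}: posterior mean {m:.3f}")
```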

8.3.2 Bayesian Estimation of Factor Models

In the context of specifying a multi-factor model r = Xβ + ε, Bayes' rule becomes⁸

f(β | r, X, σ²) = k g(r | β, X, σ²) h(β),     (8.11)

where:

- h(β) is a prior density function that expresses uncertain views about the model parameters β before adding any sample information on r, X and σ²;
- g(r | β, X, σ²) is the joint density function of the dependent variable when the parameters and explanatory variables are regarded as fixed. This is obtained from the likelihood of the sample (see Appendix 6);
- f(β | r, X, σ²) is the posterior density function that expresses revised views on parameter uncertainties, given the beliefs about β expressed in the prior density and the sample information on r, X and σ²;
- k is the normalization constant that makes f(β | r, X, σ²) into a proper density function.

⁸ We assume that the variance of the error term, σ², is known, so that it is only the factor sensitivities β that will be estimated by the Bayesian method. See Greene (1998) for the generalization to the case where σ² is also estimated by Bayesian methods.
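To make (8.11) concrete, the following sketch (hypothetical code; the data are simulated, not from the text) evaluates the unnormalized posterior g(r | β, X, σ²)h(β) on a grid for a single-factor model and finds the normalization constant k numerically. Setting h(β) = 1 reproduces the normalized likelihood, which is the non-informative case discussed next.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated single-factor model r = X*beta + eps, with sigma^2 known
n, true_beta, sigma = 250, 1.2, 0.02
X = rng.normal(0.0, 0.01, size=n)
r = true_beta * X + rng.normal(0.0, sigma, size=n)

betas = np.linspace(-1.0, 3.0, 2001)   # grid of candidate betas
dbeta = betas[1] - betas[0]

# g(r | beta, X, sigma^2): Gaussian log-likelihood evaluated on the grid
resid = r[None, :] - betas[:, None] * X[None, :]
loglik = -0.5 * (resid ** 2).sum(axis=1) / sigma ** 2

# h(beta): a normal prior N(0.8, 0.1^2); replacing this with log h = 0
# (i.e. h(beta) = 1) gives the non-informative case
logprior = -0.5 * ((betas - 0.8) / 0.1) ** 2

# Unnormalized posterior, then numerical normalization (the constant k)
post = np.exp(loglik + logprior - (loglik + logprior).max())
post /= post.sum() * dbeta

print("posterior mean:", (betas * post).sum() * dbeta)
```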

The strength of Bayesian methods is their ability to take into account, via the prior density h(β), any sort of prior information about model parameters. This may be purely subjective views on the model parameters, or prior densities can be based on information from a previous model-fitting exercise. Of course, in a proper density function the area under the curve is 1, but it is usual to express total lack of prior information about the parameters by an improper density function, the non-informative prior h(β) = 1.⁹ With a non-informative prior the posterior density is just the normalized likelihood of the sample, so Bayesian estimates reduce to the estimates from standard sampling theory, such as the maximum likelihood estimates; one would not expect anything else.

⁹ The appropriate constant for normalization is found after the functional form of the posterior has been derived. Because this normalization comes at the end, it is not a problem to use improper densities in the prior.

For convenience, informative priors are commonly described using the same shape of density function as the likelihood; otherwise (8.11) would give some rather strange functional forms for the posterior density. Even with these so-called conjugate priors the algebra of normalization can become quite burdensome. However, retaining the assumption of normal disturbances, ε ~ N(0, σ²I), with known variance σ² does simplify things considerably. Since ε ~ N(0, σ²I) implies r ~ N(Xβ, σ²I), an informative prior on β that has the same functional form as the likelihood will take the form

β ~ N(β₀, Σ₀).

In this case, after rather a lot of algebra (see Greene, 1998; Griffiths et al., 1993), it may be shown that the Bayesian estimators will also be normally distributed. The revised estimators of the model parameters will lie on the line between the prior and sample parameter estimates:

b* = F β₀ + (I − F) b,

where

F = Σ* Σ₀⁻¹

and Σ* is the covariance matrix of the Bayesian estimators, given by

Σ* = (Σ₀⁻¹ + Σ⁻¹)⁻¹,

in which Σ is the covariance matrix of the OLS sample estimators, viz. σ²(X′X)⁻¹.
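Under these assumptions the update is simple to compute. The sketch below (hypothetical code with made-up numbers) forms Σ* = (Σ₀⁻¹ + Σ⁻¹)⁻¹ and F = Σ*Σ₀⁻¹, and verifies that the interpolated estimate b* = Fβ₀ + (I − F)b coincides with the precision-weighted form that appears in (8.12) below.

```python
import numpy as np

def bayes_regression_update(b, Sigma, beta0, Sigma0):
    """Posterior mean and covariance for the factor sensitivities,
    combining OLS estimates (b, Sigma) with a normal prior (beta0, Sigma0)."""
    Sigma_star = np.linalg.inv(np.linalg.inv(Sigma0) + np.linalg.inv(Sigma))
    F = Sigma_star @ np.linalg.inv(Sigma0)          # shrinkage matrix
    b_star = F @ beta0 + (np.eye(len(b)) - F) @ b   # interpolated estimate
    return b_star, Sigma_star

# Two-factor example with made-up numbers
b = np.array([1.2, 0.5])                 # OLS sample estimates
Sigma = np.diag([0.02**2, 0.05**2])      # sigma^2 (X'X)^-1
beta0 = np.array([0.8, 0.0])             # prior means
Sigma0 = np.diag([0.1**2, 0.1**2])       # prior covariance

b_star, Sigma_star = bayes_regression_update(b, Sigma, beta0, Sigma0)
print("posterior betas:", b_star)

# Equivalent precision-weighted form, as in (8.12)
alt = Sigma_star @ (np.linalg.inv(Sigma0) @ beta0 + np.linalg.inv(Sigma) @ b)
print("matches (8.12):", np.allclose(b_star, alt))
```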

Thus Bayesian betas are linearly interpolated between the OLS estimate (the mean of the likelihood) and the prior belief (the mean of the prior density). The exact point for the posterior estimate on the line between these two estimates will depend on the standard errors of these densities. In fact, Bayesian estimates of beta will move closer to the value of beta that is assumed in the prior beliefs as more confidence is expressed in those beliefs, as the example below demonstrates.

Since σ² is really unknown, it is common to replace it in the above by its posterior estimate s². In this case Greene (1998) shows that the posterior density will be t-distributed with expectation

β* = Σ*(Σ₀⁻¹ β₀ + Σ⁻¹ b),     (8.12)

and the estimated variance Σ* is given by

Σ* = δ(Σ₀⁻¹ + Σ⁻¹)⁻¹,     (8.13)

where Σ is now the estimated covariance matrix of the OLS sample estimators, viz. s²(X′X)⁻¹, and δ is a degrees-of-freedom adjustment, δ = m/(m − 2), in which m represents the degrees of freedom in the model.

To illustrate how to calculate Bayesian estimates of asset betas, consider the example of estimating a CAPM for Eletrobras in the Ibovespa index based on daily data for the period 1 August 1994 to 30 December 1997. The OLS estimate of the stock beta is 1.211, with a standard error of 0.021586 (§A.1.3 and §A.2.2). Now suppose that the prior density on this beta is normal with expected value 0.8 and standard error 0.1. Since the sample size is so large, the t-distributed posterior converges to a normal posterior, and the degrees-of-freedom correction in (8.13) is approximately 1. Thus the estimated variance of the posterior estimator computed from (8.13) is simply

Σ* = ((0.021586)⁻² + (0.1)⁻²)⁻¹ = 0.0004452.

Putting this in (8.12) gives the revised estimate for the stock beta, given both sample data and prior beliefs, as

β* = 0.0004452 × (1.211/(0.021586)² + 0.8/(0.1)²) = 1.193.
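For a quick check of this arithmetic, a few lines of Python (using only the figures quoted above) reproduce both (8.13) with δ ≈ 1 and the revised beta from (8.12):

```python
# Figures quoted in the text: OLS beta and standard error, prior mean and sd
b, se_b = 1.211, 0.021586
beta0, sd0 = 0.8, 0.1

# (8.13) with the degrees-of-freedom adjustment delta taken as 1
var_star = 1.0 / (se_b ** -2 + sd0 ** -2)
print(f"posterior variance: {var_star:.7f}")   # approx 0.0004452

# (8.12): precision-weighted combination of sample and prior estimates
beta_star = var_star * (b / se_b ** 2 + beta0 / sd0 ** 2)
print(f"posterior beta: {beta_star:.3f}")      # approx 1.193
```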


