



[Plot: CAC volatility, total volatility and specific volatility in per cent (left-hand scale) and the L'Oréal beta (right-hand scale), Jan-96 to Jan-99]

Figure 8.1 Decomposing L'Oréal total volatility into market and specific components.

portfolio level, using (8.4). At any point in time the total volatility of the stock or portfolio is attributed to two components: market volatility and specific volatility.

Figure 8.1 shows the exponentially weighted moving average estimates of total, market and specific volatilities for the stock L'Oréal in the CAC index from 2 January 1996 to 9 February 1999. The specific volatility is calculated from the stock-specific return series y_t - β̂_t X_t, where y_t is the stock return, X_t is the market return and β̂_t is the EWMA stock beta, which is also shown on the graph on the right-hand scale. All EWMAs have been calculated with a smoothing constant of 0.94. For most of the period the L'Oréal beta has been greater than 1, and the riskiness of this stock is augmented by a specific volatility of a similar order of magnitude to the market volatility. On 12 October 1998 L'Oréal volatility reached almost 90%, but only part of this was attributed to increased volatility in the CAC during the global equity market crash. A large part of this volatility was attributed to specific risk that was not captured by the market factor. Other examples can be generated using the risk decomposition spreadsheet on the CD.
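The decomposition behind Figure 8.1 is straightforward to reproduce. The sketch below (in Python; the function names, the zero-mean return convention of the standard EWMA variance estimator, and the 250-day annualization factor are assumptions for illustration, not taken from the text) computes the EWMA beta with smoothing constant 0.94 and splits the total volatility of the stock into its market and specific components.

```python
import numpy as np

LAMBDA = 0.94   # smoothing constant used in the text
ANNUAL = 250    # trading days per year (an assumed convention)

def ewma(x, lam=LAMBDA):
    # Recursive EWMA: s_t = lam * s_{t-1} + (1 - lam) * x_t
    s = np.empty(len(x))
    s[0] = x[0]
    for t in range(1, len(x)):
        s[t] = lam * s[t - 1] + (1 - lam) * x[t]
    return s

def decompose(y, X, lam=LAMBDA):
    """Annualized total, market and specific volatilities (%) and the
    EWMA beta, assuming zero-mean daily returns y (stock) and X (market)."""
    var_X = ewma(X * X, lam)          # EWMA market variance
    beta = ewma(y * X, lam) / var_X   # EWMA covariance over variance
    eps = y - beta * X                # stock-specific return y_t - beta_t * X_t
    to_vol = lambda v: 100 * np.sqrt(ANNUAL * v)
    return (to_vol(ewma(y * y, lam)),       # total volatility
            to_vol(beta ** 2 * var_X),      # market component
            to_vol(ewma(eps * eps, lam)),   # specific volatility
            beta)
```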

8.3 Bayesian Methods for Estimating Factor Sensitivities

Classical statisticians assume that at any point in time there is one true value for a model parameter. This frequentist approach to statistics focuses on the question 'what is the probability of the data given the model parameters?'. That is, the functional form and the parameters of the model are regarded as fixed, so that probabilistic statements are only made about the likelihood of the sample data given assumed parameter values.

This section examines what can be said about the uncertainty of model parameters. There may well be one true value at any point in time, but since we shall never know what it is, we could represent the possible true values of a parameter by a probability distribution. In this way probabilistic statements can also be made about model parameters, thus turning around the question above to ask 'what is the probability of the parameter given the data?'.

This approach has been named after the Rev. Thomas Bayes, whose 'Essay towards solving a problem in the doctrine of chances' was published posthumously in the Philosophical Transactions of the Royal Society of London in 1764. The Bayesian process of statistical estimation is one of continuously revising and refining our subjective beliefs as more data become available. It can be considered as an extension of, rather than an alternative to, classical inference: indeed, some of the best classical estimators may be regarded as basic forms of Bayesian estimator.6

Some securities firms (for example, Merrill Lynch) have published stock betas based on Bayesian methods. Bayesian methods extend the classical viewpoint so that prior information about model parameters becomes part of the model fitting process. In this section it will be shown that Bayesian betas can be substantially different from OLS beta estimates, depending on the strength of one's prior beliefs about the true value of a beta.

Bayesian estimates are a combination of prior beliefs and sample information. The idea is to express uncertainty about the true value of a model parameter with a prior density that describes one's beliefs about this true value. Note that these beliefs can be entirely subjective if so wished. Then more objective information is added, in the form of a historic sample. This information is summarized in a likelihood function7 that is used to update the prior density to a posterior density, using the method for multiplying conditional probabilities which is referred to as Bayes' rule.

8.3.1 Bayes' Rule

The cornerstone of Bayesian methods is the theorem of conditional probability of events X and Y:

Prob(X and Y) = Prob(X|Y)Prob(Y) = Prob(Y|X)Prob(X).

6 In fact the maximum likelihood estimator is a Bayesian estimator with a non-informative prior.

7 The difference between the prior and the likelihood is that sample information is always rooted in some real observable quantities (the new data) whereas prior densities reflect one's views prior to collecting the new data. These views may or may not have an empirical basis.



This can be rewritten in a form that is known as Bayes' rule, which shows how prior information about Y may be used to revise the probability of X:

Prob(X|Y) = Prob(Y|X)Prob(X)/Prob(Y).
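As a quick numerical illustration (the events and probabilities below are invented, not from the text), let Y be the event 'the market falls by more than 2%' and X the event 'the stock falls by more than 5%':

```python
p_Y, p_X = 0.05, 0.02   # unconditional probabilities (invented numbers)
p_Y_given_X = 0.90      # if the stock crashes, the market probably fell too

# Bayes' rule: Prob(X|Y) = Prob(Y|X) * Prob(X) / Prob(Y)
p_X_given_Y = p_Y_given_X * p_X / p_Y
print(p_X_given_Y)      # 0.36: knowing Y revises Prob(X) from 2% up to 36%
```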

When Bayes' rule is applied to distributions about model parameters, it becomes

Prob(parameters|data) = Prob(data|parameters)Prob(parameters)/Prob(data).

The unconditional probability of the data, Prob(data), only serves as a scaling constant, and the generic form of Bayes' rule is usually written:

Prob(parameters|data) ∝ Prob(data|parameters)Prob(parameters).

Prior beliefs about the parameters are given by the prior density, Prob(parameters); and the likelihood of the sample data, Prob(data|parameters), is called the sample likelihood. The product of these two densities defines the posterior density, Prob(parameters|data), which incorporates both prior beliefs and sample information into an updated view of the model parameters, as depicted in Figure 8.2.

If prior beliefs are that all possible values of parameters are equally likely, this is the same as saying there is no prior information. The prior density is just the uniform density and the posterior density is just the same as the sample likelihood. On the other hand, if sample data are not available then the posterior density is the same as the prior density. More generally, the posterior density will have a lower variance than both the prior density and the sample likelihood. The increased accuracy reflects the value of additional information, whether subjective, as encapsulated by prior beliefs, or objective, as represented in the sample likelihood.
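To make this updating concrete, the sketch below works the standard conjugate case of a normal prior combined with a normal sample likelihood for a beta (an assumption made here for illustration; the numbers are invented). Precisions, that is inverse variances, add, so the posterior variance is below both the prior variance and the sampling variance, as just described.

```python
def bayesian_beta(prior_mean, prior_var, sample_beta, sample_var):
    """Posterior of beta under a normal prior and a normal likelihood:
    precisions add, and the posterior mean is the precision-weighted
    average of the prior mean and the sample (e.g. OLS) estimate."""
    post_var = 1 / (1 / prior_var + 1 / sample_var)
    post_mean = post_var * (prior_mean / prior_var + sample_beta / sample_var)
    return post_mean, post_var

# Invented numbers: prior belief that beta = 1, versus an OLS estimate of 1.4.
print(bayesian_beta(1.0, 0.04, 1.4, 0.09))   # -> (approx. 1.12, 0.028)
```

A very diffuse prior (prior_var tending to infinity) reproduces the sample estimate, while with no sample information the posterior is just the prior, consistent with the limiting cases above.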

Subjective beliefs may have a great influence on model parameter estimates if they are expressed with a high degree of confidence. Figure 8.3 shows two posterior densities based on the same likelihood. In Figure 8.3a the prior beliefs are rather uncertain, which is represented by the large variance of the prior


[Plot: prior, likelihood and posterior densities against the parameter]

Figure 8.2 The posterior density is the product of the prior density and the sample likelihood.


