The Mean-Variance Framework for Measuring Financial Risk
2.1.1 The Normality Assumption
The traditional solution to this problem is to assume a mean-variance framework: we model financial risk in terms of the mean and variance (or standard deviation, the square root of the variance) of P/L (or returns). As a convenient (although oversimplified) starting point, we can regard this framework as underpinned by the assumption that daily P/L (or returns) obeys a normal distribution.1 A random variable X is normally distributed with mean μ and variance σ2 (or standard deviation σ) if the probability that X takes the value x, f(x), obeys the following probability density function (pdf):

f(x) = (1/(σ√(2π))) exp(−(x − μ)2/(2σ2))   (2.1)

where x is defined over −∞ < x < ∞. A normal pdf with mean 0 and standard deviation 1, known as a standard normal, is illustrated in Figure 2.1.
This pdf tells us that outcomes are more likely to occur close to the mean μ. The spread of the probability mass around the mean depends on the standard deviation σ: the greater the standard deviation, the more dispersed the probability mass. The pdf is also symmetric around the mean: X is as likely to take a particular value μ + x as to take the corresponding value μ − x. Outcomes well away from the mean are very unlikely, and the pdf tails away on both sides: the left-hand tail corresponds to extremely low realisations of the random variable, and the right-hand tail to extremely high realisations of it. In risk management, we are particularly concerned about the left-hand tail, which corresponds to high negative values of P/L — or big losses, in plain English.
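For concreteness, the density and its symmetry can be checked directly; a minimal Python sketch of the normal pdf (parameters here are illustrative):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """The normal probability density function."""
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# The standard normal density peaks at the mean: f(0) = 1/sqrt(2*pi) ≈ 0.3989
print(round(normal_pdf(0.0), 4))

# Symmetry about the mean: f(mu + x) equals f(mu - x)
print(round(normal_pdf(1.645), 4), round(normal_pdf(-1.645), 4))
```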
1 Strictly speaking, the mean-variance framework does not require normality, and many accounts of it make little or no mention of normality. Nonetheless, the statistics of the mean-variance framework are most easily understood in terms of an underlying normality assumption, and viable alternatives (e.g., assumptions of elliptical distributions) are usually harder to understand and less tractable to use.
Figure 2.1 The normal probability density function.
A pdf gives a complete representation of possible random outcomes: it tells us what outcomes are possible, and how likely these outcomes are. Such a representation enables us to answer questions about possible outcomes and, hence, about the risks we face. These questions come in two basic forms:
• The first are questions about likelihood or probability: we specify the quantity (or quantile), and then ask about the associated probability. For example, how likely is it that profit (or loss) will be greater than, or less than, a certain amount?
• The second are questions about quantiles: we specify the probability, and then ask about the associated quantity. For example, what is the maximum likely profit (or loss) at a particular level of probability?
These questions and their answers are illustrated in Figure 2.2. This figure shows the same normal pdf, but with a particular X-value equal to −1.645. We can regard this value as a profit of −1.645 or, equivalently, a loss of 1.645. The probability of a P/L value less than −1.645 is given by the left-hand tail — the area under the curve to the left of the vertical line marking off X = −1.645. This area turns out to be 0.05, or 5%, so there is a 5% probability that we will get a P/L value less than −1.645, or a loss greater than 1.645. Conversely, we can say that the maximum likely loss at a 95% probability level is 1.645. This is often put another way: we can be 95% confident of making a profit or making a loss no greater than 1.645. This value of 1.645 can then be described as the value at risk (or VaR) of our portfolio at the 95% level of confidence, and we will have more to say about this presently.
The assumption that P/L is normally distributed is attractive for three reasons. The first is that it often has some, albeit limited, plausibility in circumstances where we can appeal to the central limit theorem.
Figure 2.2 Normal quantiles and probabilities.
The second is that it provides us with straightforward formulas for both cumulative probabilities and quantiles, namely:

Prob(x ≤ X) = Φ((X − μ)/σ)   (2.2a)

x_cl = μ − α_cl σ   (2.2b)

where cl is the chosen confidence level (e.g., 95%), and α_cl is the standard normal variate for that confidence level (e.g., α_0.95 = 1.645). α_cl can be obtained from standard statistical tables or from spreadsheet functions (e.g., the 'normsinv' function in Excel or the 'norminv' function in MATLAB). Equation (2.2a) is the normal distribution (or cumulative distribution) function: it gives the normal probability of x being less than or equal to X, and enables us to answer probability questions. Equation (2.2b) is the normal quantile corresponding to the confidence level cl (i.e., the lowest value we can expect at the stated confidence level) and enables us to answer quantity questions. The normal distribution is thus very easy to apply in practice.
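These formulas are straightforward to apply in code as well; the sketch below uses Python's standard-library NormalDist as an illustrative stand-in for the Excel/MATLAB functions mentioned above, with assumed P/L parameters:

```python
from statistics import NormalDist

mu, sigma = 0.0, 1.0            # illustrative daily P/L parameters (assumed)
d = NormalDist(mu, sigma)

# Probability question: how likely is a P/L below -1.645?
p = d.cdf(-1.645)
print(round(p, 4))              # ≈ 0.05, i.e., a 5% chance

# Quantile question: the standard normal variate at the 95% level,
# then the lowest P/L we can expect at 95% confidence, mu - alpha*sigma
alpha_95 = NormalDist().inv_cdf(0.95)
print(round(mu - alpha_95 * sigma, 3))   # ≈ -1.645
```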
The third advantage of the normal distribution is that it only requires estimates of two parameters — the mean and the standard deviation (or variance)—because it is completely described by these two parameters alone.
2.1.2 Limitations of the Normality Assumption
Nonetheless, the assumption of normality also has its limitations. Ironically, the key ones stem from the last point — that the normal distribution requires only two parameters. Generally speaking, any statistical distribution can be described in terms of its moments. The first moment is the mean, and the second moment corresponds to the variance. However, there are also higher moments, and the third and fourth moments can be of great importance.

Figure 2.3 A skewed distribution.
The third moment gives an indication of the asymmetry or skewness of the distribution. This leads to the skewness coefficient:

skewness = E[(x − μ)3]/σ3   (2.3)
The skewness coefficient will be zero for a symmetric distribution, and nonzero for an asymmetric one. The sign of the coefficient indicates the direction of the skew: a positive skew indicates a short tail on the left and a long tail on the right, and a negative skew indicates the opposite.
An example of a positively skewed distribution is shown in Figure 2.3, along with the earlier symmetric normal distribution for comparison. The skew alters the whole distribution, and tends to pull one tail in whilst pushing the other tail out. If a distribution is skewed, we must therefore take account of its skewness if we are to be able to estimate its probabilities and quantiles correctly.
The fourth moment, the kurtosis, gives an indication of the flatness of the distribution. In risk measurement practice, this is usually taken to be an indication of the fatness of the tails of the distribution. The kurtosis parameter is:

kurtosis = E[(x − μ)4]/σ4   (2.4)
If we ignore any skewness for convenience, there are three cases to consider:
• If the kurtosis parameter is 3, the tails of our P/L distribution are the same as those we would get under normality.
• If the kurtosis parameter is greater than 3, our tail is fatter than under normality. Such fat tails are common in financial returns, and indicate that extreme events are more likely, and more likely to be large, than under normality.
Figure 2.4 A fat-tailed distribution.
• If the kurtosis parameter is less than 3, our tail is thinner than under normality. Thin tails indicate that extreme events are less likely, and less likely to be large, than under normality.
The effect of kurtosis is illustrated in Figure 2.4, which shows how a symmetric fat-tailed distribution (in this case, a Student t-distribution with five degrees of freedom) compares to a normal one. Because the area under the pdf curve must always be 1, the distribution with the fatter tails also has less probability mass in the centre. Tail-fatness, kurtosis in excess of 3, means that we are more likely to gain a lot or lose a lot, and the gains or losses will tend to be larger, relative to normality.
The moral of the story is that the normality assumption is only strictly appropriate if we are dealing with a symmetric (i.e., zeroskew) distribution with a kurtosis of 3. If these conditions are not met — if our distribution is skewed, or (in particular) has fat tails — then the normality assumption is inappropriate and can lead to major errors in risk analysis.
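Sample skewness and kurtosis are easy to estimate from data. The sketch below uses the moment formulas just described on a simulated normal sample (simulation parameters are illustrative), and confirms that a normal sample shows skewness near 0 and kurtosis near 3:

```python
import random

def skewness(xs):
    """Sample skewness: third central moment divided by sigma cubed."""
    n = len(xs)
    mu = sum(xs) / n
    sd = (sum((x - mu) ** 2 for x in xs) / n) ** 0.5
    return sum((x - mu) ** 3 for x in xs) / (n * sd ** 3)

def kurtosis(xs):
    """Sample kurtosis: fourth central moment divided by variance squared."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return sum((x - mu) ** 4 for x in xs) / (n * var ** 2)

random.seed(42)
normal_sample = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# For a normal sample: skewness near 0, kurtosis near 3
print(round(skewness(normal_sample), 2), round(kurtosis(normal_sample), 2))
```

A fat-tailed sample (e.g., drawn from a Student t-distribution) would instead produce a kurtosis estimate well above 3.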
Box 2.1 Other Risk Measures
The most widely used measure of risk (or dispersion) is the standard deviation (or its square, the variance), but the standard deviation has been criticised for the arbitrary way in which deviations from the mean are squared and for giving equal treatment to upside and downside outcomes. If we are concerned about these, we can use the mean absolute deviation or the downside semivariance instead: the former replaces the squared deviations in the standard deviation formula with absolute deviations and gets rid of the square root operation; the latter can be obtained from the variance formula by replacing upside values (i.e., observations above the mean) with zeros. We can also replace the standard deviation with other simple dispersion measures such as the entropy measure or the Gini coefficient (see, e.g., Kroll and Kaplanski (2001, pp. 13–14)).
A more general approach to dispersion is provided by the Fishburn α − t measures, defined as ∫ from −∞ to t of (t − x)^α f(x) dx (Fishburn (1977)). This measure is defined on two parameters: α, which describes our attitude to risk, and t, which specifies the cut-off between the downside that we worry about and other outcomes that we don't worry about. Many risk measures are special cases of the Fishburn measure or are closely related to it. These include the downside semivariance, which is very closely related to the Fishburn measure with α = 2 and t equal to the mean; Roy's safety-first criterion, where α → 0; and the expected tail loss (ETL), which is closely related to the Fishburn measure with α = 1. In addition, the Fishburn measure encompasses the stochastic dominance rules that are sometimes used for ranking risky alternatives:2 the Fishburn measure with α = n + 1 is proportional to the nth-order distribution function, so ranking risks by this Fishburn measure is equivalent to ranking by nth-order stochastic dominance.3
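The Fishburn integral can be checked numerically. The sketch below (assuming a standard normal P/L and a simple midpoint rule with a truncated lower limit, both illustrative choices) verifies that α = 2 with t equal to the mean reproduces the downside semivariance, which equals σ2/2 = 0.5 for a standard normal:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu=0.0, sigma=1.0):
    return exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * sqrt(2.0 * pi))

def fishburn(alpha, t, pdf, lower=-10.0, steps=100_000):
    """Fishburn alpha-t measure: integral of (t - x)^alpha f(x) dx over x < t.
    Midpoint rule; the lower truncation point is an assumption of this sketch."""
    h = (t - lower) / steps
    total = 0.0
    for i in range(steps):
        x = lower + (i + 0.5) * h
        total += (t - x) ** alpha * pdf(x)
    return total * h

# alpha = 2, t = mean: the downside semivariance, sigma^2/2 = 0.5 here
print(round(fishburn(2, 0.0, normal_pdf), 4))
```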
2.1.3 Traditional Approaches to Financial Risk Measurement
2.1.3.1 Portfolio Theory
It is also important to check for normality because of its close connection with some of the most popular traditional approaches to financial risk measurement. A good example is portfolio theory, whose starting point is the assumption that the behaviour of the returns to any set of assets can be described in terms of a vector of expected returns and a variance-covariance matrix that captures the relationships between individual returns. Any portfolio formed from this set of assets will then have a return whose mean and standard deviation are determined by these factors. If the specification of the behaviour of portfolio returns is to be complete, and if we leave aside various exceptions and disguises (e.g., elliptical distributions or lognormality), we then require either that individual asset returns be multivariate normally distributed, or (less restrictively) that our portfolio has a normally distributed return. Either way, we end up with a portfolio whose returns are normally distributed. If we are to use portfolio theory, we have to make assumptions somewhere along the line that lead us to normality or something closely related to it.
Unfortunately, once we are signed up to normality, we are stuck with it: we have a framework that cannot (again, honourable exceptions aside) be relied upon to give us good answers in the presence of major departures from normality, such as skewness or fat tails.
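To make the mean-variance machinery concrete, the portfolio mean and standard deviation follow directly from the weights, the expected-return vector, and the variance-covariance matrix. A two-asset sketch (all numbers below are hypothetical):

```python
# Hypothetical two-asset portfolio
w = [0.6, 0.4]                   # portfolio weights
mu = [0.08, 0.12]                # expected returns
cov = [[0.04, 0.006],            # variance-covariance matrix
       [0.006, 0.09]]

n = len(w)

# Portfolio mean: w' mu
port_mean = sum(w[i] * mu[i] for i in range(n))

# Portfolio variance: w' cov w
port_var = sum(w[i] * cov[i][j] * w[j] for i in range(n) for j in range(n))
port_std = port_var ** 0.5

print(round(port_mean, 4), round(port_std, 4))   # 0.096 and ≈ 0.178
```

Under the normality assumption discussed above, these two numbers completely describe the portfolio's return distribution.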
2.1.3.2 Duration Approaches to Fixed-Income Risk Measurement
Another traditional method is the duration approach to fixed-income risk measurement. This method gives us approximation rules that help us to determine how bond prices will change in the face of specified changes in bond yields or interest rates. For example, suppose we start with a bond's price-yield
2 An nth-order distribution function is defined as F^(n)(x) = ((n − 1)!)^(−1) ∫ from −∞ to x of (x − u)^(n−1) f(u) du, and X1 is said to be nth-order stochastically dominant over X2 if F1^(n)(x) ≤ F2^(n)(x), where F1^(n)(x) and F2^(n)(x) are the nth-degree distribution functions of X1 and X2 (Yoshiba and Yamai (2001, p. 8)). First-order stochastic dominance therefore implies that the distribution function for X1 is never above the distribution function for X2, second-order stochastic dominance implies that their second-degree distribution functions do not cross, and so on. Since a risk measure consistent with nth-degree stochastic dominance is also consistent with higher degrees of stochastic dominance, we can say that first-order stochastic dominance implies second and higher orders of stochastic dominance, but not the reverse. First-order stochastic dominance is a fairly strict condition, second-order stochastic dominance is less restrictive, and so forth: higher orders of stochastic dominance are less strict than lower orders of stochastic dominance.
3 See Ingersoll (1987, p. 139) or Yoshiba and Yamai (2001, p. 8).
relationship, P(y), and take a linear first-order approximation around the current combination of price (P) and yield (y):

ΔP ≈ (dP/dy) Δy   (2.5)

where Δy is some small change in yield. Fixed-income theory tells us that:

ΔP/P ≈ −D_m Δy   (2.6)

where D_m is the bond's modified duration (see, e.g., Fabozzi (2000, p. 66)). Expressions such as Equation (2.6) are usually used to provide approximate answers to 'what if' questions (e.g., what if yields rise by 10 basis points?) and, as such, they are useful, though limited, tools in the risk measurer's armoury.
However, risk analysis in the proper sense of the term requires that we link events (i.e., changes in bond price) to probabilities. If we are to use duration measures for risk measurement purposes in this sense, our best option is to derive the standard deviation of holding-period return and then feed that into a normal risk framework. Thus, the percentage change in bond price is:

ΔP/P ≈ −D_m Δy   (2.7)

and the volatility of the bond price is approximately:

σ_P ≈ D_m σ_y   (2.8)

where σ_y is the volatility of yield changes. If we want a risk measure, the easiest step is to assume that bond prices are approximately normal; we can then work out the probabilities of specified gains or losses, and so forth. We could also assume alternative distributions if we wished to, but the normal distribution is certainly the most convenient, and makes duration-based measures of risk more tractable than they would otherwise be.
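Putting these pieces together, a duration-based normal risk measure can be sketched as follows; the price, duration, and yield-volatility figures are illustrative assumptions, not market data:

```python
from statistics import NormalDist

# Illustrative inputs (assumed, not market data)
P = 100.0        # bond price
D_m = 5.0        # modified duration (years)
sigma_y = 0.01   # volatility of yield changes

# Duration-based price volatility: sigma_P ≈ D_m * sigma_y, scaled by price
sigma_P = D_m * sigma_y * P

# Assuming approximate normality of bond prices,
# the 95% VaR is alpha_0.95 times the price volatility
alpha_95 = NormalDist().inv_cdf(0.95)
var_95 = alpha_95 * sigma_P
print(round(var_95, 2))   # ≈ 8.22
```

So under these assumed inputs, we can be 95% confident that the holding-period loss will not exceed about 8.22, roughly 8% of the bond's price.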