Extreme Value Theory


Standard estimators of the dependence between assets are, for instance, the correlation coefficient or Spearman’s rank correlation. However, as stressed by [Embrechts et al.], these kinds of dependence measures suffer from many deficiencies. Moreover, their values are mostly controlled by relatively small moves of the asset prices around their mean. To cure this problem, it has been proposed to use correlation coefficients conditioned on large movements of the assets. But [Boyer et al.] have emphasized that this approach also suffers from a severe systematic bias leading to spurious strategies: the conditional correlation in general evolves with time even when the true unconditional correlation remains constant. In fact, [Malevergne and Sornette] have shown that any approach based on conditional dependence measures implies a spurious change of the intrinsic value of the dependence, measured for instance by copulas. Recall that the copula of several random variables is the (unique) function which completely embodies the dependence between these variables, irrespective of their marginal behavior (see [Nelsen] for a mathematical description of the notion of copula).

In view of these limitations of the standard statistical tools, it is natural to turn to extreme value theory. In the univariate case, extreme value theory is very useful and provides many tools for investigating the extreme tails of distributions of asset returns. These developments rest on a few fundamental results on extremes, such as the Gnedenko-Pickands-Balkema-de Haan theorem, which gives a general expression for the distribution of exceedances over a large threshold. In this framework, the study of large and extreme co-movements requires multivariate extreme value theory, which unfortunately does not provide equally strong results. Indeed, in contrast with the univariate case, the class of limiting extreme-value distributions is too broad and cannot be used to constrain accurately the distribution of large co-movements.

In the spirit of mean-variance portfolio theory or of utility theory, which base an investment decision on a single risk measure, we use the coefficient of tail dependence, which, to our knowledge, was first introduced in the financial context by [Embrechts et al.]. The coefficient of tail dependence between assets $X_i$ and $X_j$ is a very natural and easy-to-understand measure of extreme co-movements. It is defined as the probability that asset $X_i$ incurs a large loss (or gain) assuming that asset $X_j$ also undergoes a large loss (or gain) at the same probability level, in the limit where this probability level explores the extreme tails of the distributions of returns of the two assets. Mathematically speaking, the coefficient of lower tail dependence between the two assets $X_i$ and $X_j$, denoted by $\lambda^{-}_{ij}$, is defined by

$$\lambda^{-}_{ij} = \lim_{u \to 0} \Pr\left\{ X_i < F_i^{-1}(u) \,\middle|\, X_j < F_j^{-1}(u) \right\} \qquad (1)$$

where $F_i^{-1}(u)$ and $F_j^{-1}(u)$ represent the quantiles of assets $X_i$ and $X_j$ at the level $u$. Similarly, the coefficient of upper tail dependence is

$$\lambda^{+}_{ij} = \lim_{u \to 1} \Pr\left\{ X_i > F_i^{-1}(u) \,\middle|\, X_j > F_j^{-1}(u) \right\} \qquad (2)$$

$\lambda^{-}_{ij}$ and $\lambda^{+}_{ij}$ are of concern to investors with long (respectively short) positions. We refer to [Coles et al.] and references therein for a survey of the properties of the coefficient of tail dependence. Let us stress that the use of quantiles in the definitions of $\lambda^{-}_{ij}$ and $\lambda^{+}_{ij}$ makes them independent of the marginal distributions of the asset returns: as a consequence, the tail dependence parameters are intrinsic dependence measures. The obvious gain is an “orthogonal” decomposition of the risks into (1) the individual risks carried by the marginal distribution of each asset and (2) their collective risk described by their dependence structure or copula.
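As a concrete illustration of definitions (1) and (2), the following minimal Python sketch computes a naive plug-in estimate of the tail dependence coefficients from two return series, using empirical quantiles in place of $F_i^{-1}(u)$ and $F_j^{-1}(u)$. The function name `empirical_tail_dependence` and the synthetic returns in the example are illustrative assumptions, not part of the original text; as discussed further below, such a direct estimator becomes unreliable deep in the tails.

```python
import numpy as np

def empirical_tail_dependence(x, y, u, lower=True):
    """Naive plug-in estimate of the tail dependence coefficient at level u.

    lower=True  estimates Pr{X < F_X^{-1}(u) | Y < F_Y^{-1}(u)}  (u close to 0),
    lower=False estimates Pr{X > F_X^{-1}(u) | Y > F_Y^{-1}(u)}  (u close to 1).
    Empirical quantiles play the role of F^{-1}(u) in definitions (1)-(2).
    """
    x, y = np.asarray(x), np.asarray(y)
    qx, qy = np.quantile(x, u), np.quantile(y, u)
    if lower:
        joint = np.mean((x < qx) & (y < qy))
        cond = np.mean(y < qy)
    else:
        joint = np.mean((x > qx) & (y > qy))
        cond = np.mean(y > qy)
    return joint / cond if cond > 0 else np.nan

# Example with synthetic (hypothetical) returns: lambda^- is read off by letting
# the probability level u approach 0 and watching where the estimate stabilises
# before it becomes too noisy.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 100_000
    z = rng.standard_normal(n)
    xi = 0.7 * z + np.sqrt(1 - 0.49) * rng.standard_normal(n)
    xj = z
    for u in (0.10, 0.05, 0.01, 0.005):
        print(u, empirical_tail_dependence(xi, xj, u, lower=True))
```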

Being a probability, the coefficient of tail dependence varies between 0 and 1. A large value of $\lambda^{-}_{ij}$ means that large losses occur almost surely together. Then, large risks cannot be diversified away and the assets crash together. This investor and portfolio manager nightmare is further amplified in real-life situations by the limited liquidity of markets. When $\lambda^{-}_{ij}$ vanishes, these assets are said to be asymptotically independent, but this term hides the subtlety that the assets can still present a non-zero dependence in their tails. For instance, two normally distributed assets can be shown to have a vanishing coefficient of tail dependence. Nevertheless, unless their correlation coefficient is identically zero, these assets are never independent. Thus, asymptotic independence must be understood as the weakest dependence which can be quantified by the coefficient of tail dependence.
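The Gaussian example can be checked numerically. The short simulation below is only a sketch (the sample size, correlation and degrees-of-freedom values are arbitrary choices): it compares a correlated Gaussian pair with a bivariate Student-t pair built from the same Gaussian draws. The empirical lower tail dependence of the Gaussian pair keeps shrinking as the level u decreases (its limit is zero), while the Student-t pair levels off at a strictly positive value.

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho, nu = 1_000_000, 0.8, 3   # sample size, correlation, t degrees of freedom (illustrative)

# Correlated Gaussian pair.
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)

# Bivariate Student-t pair with the same dispersion matrix:
# divide the Gaussian pair by a common chi-square mixing variable.
w = np.sqrt(rng.chisquare(nu, n) / nu)
t1, t2 = z1 / w, z2 / w

def lower_tail_dep(x, y, u):
    # Pr{X < F_X^{-1}(u) | Y < F_Y^{-1}(u)} with empirical quantiles; the
    # conditioning probability is u by construction of the quantile.
    qx, qy = np.quantile(x, u), np.quantile(y, u)
    return np.mean((x < qx) & (y < qy)) / u

for u in (0.05, 0.01, 0.002):
    print(f"u={u:<6} gaussian={lower_tail_dep(z1, z2, u):.3f}  "
          f"student-t={lower_tail_dep(t1, t2, u):.3f}")
```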

For practical implementations, a direct application of definitions (1) and (2) fails to provide reasonable estimations due to the double curse of dimensionality and undersampling of extreme values, so that a fully non-parametric approach is not reliable. It turns out to be possible to circumvent this fundamental difficulty by considering the general class of factor models, which are among the most widespread and versatile models in finance. They come in two classes: multiplicative and additive factor models. Multiplicative factor models are generally used to model asset fluctuations due to an underlying stochastic volatility. Additive factor models relate asset fluctuations to market fluctuations, as in the Capital Asset Pricing Model (CAPM) and its generalizations, or to any set of common factors as in the Arbitrage Pricing Theory. The coefficient of tail dependence is known in closed form for both classes of factor models, which allows for an efficient empirical estimation.
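To give the flavour of such closed-form results, here is a hedged sketch for the simplest additive one-factor model $X = \beta Y + \epsilon$. Under the extra assumption (not stated in the text) that the common factor $Y$ and the idiosyncratic noise $\epsilon$ both have regularly varying, Pareto-like tails with the same tail index $\nu$, a short conditioning argument gives the asset-versus-factor coefficient $\lambda = 1/(1 + (l/\beta)^{\nu})$, where $l^{\nu}$ is the limiting ratio of the noise tail to the factor tail. The function below simply evaluates that expression; the names and the numerical example are illustrative.

```python
import numpy as np

def tail_dependence_one_factor(beta, nu, scale_ratio):
    """Asset-versus-factor tail dependence for the additive model X = beta*Y + eps.

    Simplifying assumptions (illustrative, not the general case): the factor Y and
    the noise eps both have Pareto-like tails with the same tail index nu, and
        scale_ratio**nu = lim_{x->inf} Pr{eps > x} / Pr{Y > x}.
    Conditioning on an extreme factor move then gives
        lambda = 1 / (1 + (scale_ratio / beta)**nu),
    i.e. the probability that an extreme move of X is caused by the factor rather
    than by the noise.
    """
    return 1.0 / (1.0 + (scale_ratio / beta) ** nu)

# A beta of 1 with noise tails half as heavy (scale_ratio = 0.5) and nu = 3 gives
# lambda close to 1: extremes of the asset are almost always factor-driven.
print(tail_dependence_one_factor(beta=1.0, nu=3.0, scale_ratio=0.5))   # ~0.889
```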


Speculations

swing-trading

Any system that uses only a single asset price as input (and possibly the prices of multiple assets, though this case is not completely clear) cannot make money. The price is actually secondary and typically fluctuates by a few percent a day, in contrast with the liquidity flow, which fluctuates by orders of magnitude. This also allows one to estimate the maximal workable time scale: the scale on which the execution flow fluctuates by at least an order of magnitude (a factor of ten).
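One possible way to make that time-scale estimate operational is sketched below, purely as an illustration: execution flow is proxied by traded volume per unit time on a grid of candidate scales, and the largest scale on which the flow still spans at least a factor of ten is reported. The function name, the dV/dt proxy and the order-of-magnitude threshold are assumptions chosen to match the description above, not a prescribed method.

```python
import numpy as np

def max_workable_scale(trade_times, trade_volumes, candidate_scales):
    """Largest time scale (seconds) at which execution flow I = dV/dt still
    fluctuates by at least an order of magnitude over the sample.

    trade_times: trade timestamps in seconds, trade_volumes: traded sizes.
    Returns None if no candidate scale satisfies the criterion.
    """
    trade_times = np.asarray(trade_times, dtype=float)
    trade_volumes = np.asarray(trade_volumes, dtype=float)
    t0, t1 = trade_times.min(), trade_times.max()
    best = None
    for tau in sorted(candidate_scales):
        edges = np.arange(t0, t1 + tau, tau)
        volume_per_bin, _ = np.histogram(trade_times, bins=edges, weights=trade_volumes)
        flow = volume_per_bin / tau          # execution flow I on scale tau
        flow = flow[flow > 0]                # ignore empty bins
        if len(flow) > 1 and flow.max() / flow.min() >= 10.0:
            best = tau                       # I still spans >= 10x at this scale
    return best
```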

Any system that has a built-in fixed time scale (e.g. a moving-average type of system) cannot make money: the market has no specific time scale.

Any “symmetric” system with just two signals, “buy” and “sell”, cannot make money. The minimal number of signals is four: “buy”, “sell position”, “sell short”, “cover short”. A system where, e.g., “buy” and “cover short” are the same signal will eventually lose money catastrophically on an event where the market goes against the position held. Short covering is buying back borrowed securities in order to close an open short position. Short covering refers to the purchase of the exact same security that was initially sold short, since the short-sale process involved borrowing the security and selling it in the market. For example, assume you sold short 100 shares of XYZ at $20 per share, based on your view that the shares were headed lower. When XYZ declines to $15, you buy back 100 shares of XYZ in the market to cover your short position (and pocket a gross profit of $500 from your short trade).
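The four-signal requirement can be made precise with a small state machine, sketched below in Python. The enum names and the transition table are illustrative; the point is simply that “buy” and “cover short” map to different transitions, so closing a short can never silently open a long. The last lines reproduce the short-covering arithmetic from the example above.

```python
from enum import Enum

class Signal(Enum):
    BUY = "buy"
    SELL_POSITION = "sell position"
    SELL_SHORT = "sell short"
    COVER_SHORT = "cover short"

class Position(Enum):
    FLAT = 0
    LONG = 1
    SHORT = -1

# Minimal transition table for the four-signal system described above.
TRANSITIONS = {
    (Position.FLAT,  Signal.BUY):           Position.LONG,
    (Position.FLAT,  Signal.SELL_SHORT):    Position.SHORT,
    (Position.LONG,  Signal.SELL_POSITION): Position.FLAT,
    (Position.SHORT, Signal.COVER_SHORT):   Position.FLAT,
}

def apply_signal(position, signal):
    """Return the new position; a signal not valid for the current state is ignored."""
    return TRANSITIONS.get((position, signal), position)

# Short-covering arithmetic from the text: short 100 shares at $20, cover at $15.
shares, entry, exit_ = 100, 20.0, 15.0
print((entry - exit_) * shares)   # gross profit: 500.0
```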

Any system entering a position (long or short, it does not matter) during a liquidity excess event (e.g. I > I_IH) cannot make money. During a liquidity excess the price movement is typically large, and “reversion to the moving average” types of systems often use such an event as a position-entering signal. After a liquidity excess event the market bounces a little, then typically continues in the same direction. This creates the risk of betting on the wrong outcome: “small bounce” or “follow the market”. What one should do during a liquidity excess event is to CLOSE the existing position. This is very fundamental: if you hold a position during market uncertainty, you will eventually lose money; you must have ZERO position during liquidity excess. This is a very important element of the P&L trading strategy.

Any system not entering a position during a liquidity deficit event (e.g. I < I_IL) typically loses money. Liquidity deficit periods are characterized by small price movements and are difficult to identify with price-based trading systems. A liquidity deficit actually means that at the current price buyers and sellers do not match well, and a substantial price movement is expected. This is very well known to most traders: before a large market movement, volatility (and e.g. the standard deviation as its crude measure) becomes very low. The direction (whether one should go long or short) during a liquidity deficit event can, to some extent, be determined by the balance of the supply-demand generalization.
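Putting the last two statements together, the entry/exit logic can be summarized in a few lines of Python. Everything here is a sketch: the argument names, the threshold convention (I < I_IL for deficit, I > I_IH for excess) and the ±1 direction encoding are assumptions used only to restate the rule “enter on liquidity deficit, hold zero position on liquidity excess”.

```python
def target_position(I, I_IL, I_IH, direction, current):
    """One-step position rule sketched from the two statements above.

    I          -- current execution flow (liquidity) estimate
    I_IL, I_IH -- low / high liquidity thresholds (I < I_IL: deficit, I > I_IH: excess)
    direction  -- +1 or -1, the side suggested by the supply-demand balance
    current    -- current position (+1 long, -1 short, 0 flat)
    """
    if I > I_IH:                     # liquidity excess: close whatever is open, stay flat
        return 0
    if I < I_IL and current == 0:    # liquidity deficit: enter in the indicated direction
        return direction
    return current                   # otherwise keep the existing position
```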

An important issue to discuss is: what would happen to the markets if this strategy (enter on liquidity deficit, exit on liquidity excess) were applied on a mass scale by market participants? In contrast with other trading strategies, which reduce liquidity at the current price when applied (when the price is moved into uncharted territory, the liquidity drains out because supply or demand drains), this strategy actually increases market liquidity at the current price. This insensitivity to the price value is expected to lead not to the strategy ceasing to work when applied on a mass scale by market participants, but to it working better and better, and to the markets’ destabilization in the end.