# Tranche Declension. With the CDO (collateralized debt obligation) market picking up, it is important to build a stronger understanding of pricing and risk-management models. The Gaussian copula model has well-known deficiencies and has been widely criticized, but it remains the fundamental starting point. Here, we draw attention to the applicability of Gaussian inequalities in analyzing the sensitivity of tranche losses to the correlation parameters of the Gaussian copula model.

We work with an R^N-valued Gaussian random variable X = (X_1, …, X_N), where each X_j is normalized to mean 0 and variance 1, and study the equity tranche loss

L_[0,a] = ∑_{m=1}^N l_m 1[X_m ≤ c_m] − (∑_{m=1}^N l_m 1[X_m ≤ c_m] − a)^+

where l_1, …, l_N > 0, a > 0, and c_1, …, c_N ∈ R are parameters. We establish an identity between the sensitivity of E[L_[0,a]] to the correlation r_jk = E[X_j X_k] and the parameters c_j and c_k, from which we subsequently obtain the inequality

∂E[L_[0,a]]/∂r_jk ≤ 0

Applying this inequality to a CDO containing N names whose default behavior is governed by the Gaussian variables X_j shows that an increase in name-to-name correlation decreases the expected loss in an equity tranche. This generalizes the well-known result for Gaussian copulas with uniform correlation.

Consider a CDO consisting of N names, with τ_j denoting the (random) default time of the jth name. Let

X_j = Φ^{−1}(F_j(τ_j))

where F_j is the distribution function of τ_j (relative to the market pricing measure), assumed to be continuous and strictly increasing, and Φ is the standard Gaussian distribution function. Then for any x ∈ R we have

P[X_j ≤ x] = P[τ_j ≤ F_j^{−1}(Φ(x))] = F_j(F_j^{−1}(Φ(x))) = Φ(x)

which means that X_j has the standard Gaussian distribution. The Gaussian copula model posits that the joint distribution of the X_j is Gaussian; thus,

X = (X_1, …, X_N)

is an R^N-valued Gaussian variable whose marginals are all standard Gaussian. The correlation

r_jk = E[X_j X_k]

reflects the default correlation between the names j and k. Now let

p_j = P[τ_j ≤ T] = P[X_j ≤ c_j]

be the probability that the jth name defaults within a time horizon T, which is held constant, and

c_j = Φ^{−1}(F_j(T))

is the default threshold of the jth name.
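As a quick numerical check of the transformation X_j = Φ^{−1}(F_j(τ_j)) above, one can simulate default times from an assumed parametric distribution (an exponential with hazard rate 0.1 — an invented choice standing in for F_j; any continuous, strictly increasing distribution works) and verify that the transformed variable is indeed standard Gaussian:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)
lam = 0.1                                      # assumed hazard rate
tau = rng.exponential(scale=1.0 / lam, size=50_000)   # default times tau_j

F = lambda t: 1.0 - np.exp(-lam * t)           # F_j, continuous and increasing
nd = NormalDist()                              # Phi, the standard Gaussian cdf
x = np.array([nd.inv_cdf(u) for u in F(tau)])  # X_j = Phi^{-1}(F_j(tau_j))

# Sample moments should match a standard Gaussian (mean 0, variance 1).
print(round(x.mean(), 3), round(x.std(), 3))
```

The same check goes through for any continuous, strictly increasing F_j, which is exactly why the copula construction is free of the marginal distributions.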

Schematically, the essential phenomenon is this: the default of name j, which happens if the default time τ_j falls within the time horizon T, results in a loss of amount l_j > 0 in the CDO portfolio. Thus, the total loss during the time period [0, T] is

L = ∑_{m=1}^N l_m 1[X_m ≤ c_m]

We are thus essentially working with a one-period CDO, ignoring discounting from the random time of actual default. A tranche is simply a range of losses for the portfolio; it is specified by a closed interval [a, b] with 0 ≤ a ≤ b. If the loss x is less than a, this tranche is unaffected, whereas if x ≥ b then the entire tranche value b − a is eaten up by loss; in between, if a ≤ x ≤ b, the loss to the tranche is x − a. Thus, the tranche loss function t_[a,b] is given by

t_[a,b](x) = 0 if x < a;  x − a if a ≤ x ≤ b;  b − a if x > b

or compactly,

t_[a,b](x) = (x − a)^+ − (x − b)^+
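The compact form can be checked against the three-case definition with a few lines of code (a throwaway sketch; the function names are ours):

```python
def t_piecewise(a, b, x):
    """Direct transcription of the three-case definition of t_[a,b]."""
    if x < a:
        return 0.0
    if x <= b:
        return x - a
    return b - a

def t_compact(a, b, x):
    """t_[a,b](x) = (x - a)^+ - (x - b)^+."""
    pos = lambda y: max(y, 0.0)
    return pos(x - a) - pos(x - b)

# Agreement at points below, inside, at the edges of, and above [2, 5].
assert all(abs(t_piecewise(2, 5, x) - t_compact(2, 5, x)) < 1e-12
           for x in [0.0, 1.9, 2.0, 3.7, 5.0, 6.2])
```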

From this, it is clear that t_[a,b](x) is continuous in (a, b, x), and we see that it is a non-decreasing function of x. Thus, the loss in an equity tranche [0, a] is given by

t_[0,a](L) = L − (L − a)^+

with a > 0.
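To make the correlation-sensitivity claim concrete, here is a Monte Carlo sketch of E[L_[0,a]] under a one-factor Gaussian copula with uniform pairwise correlation (a special case; the portfolio of ten unit-notional names, 5% default probability, and attachment point a = 2 are invented for illustration):

```python
import numpy as np
from statistics import NormalDist

def equity_tranche_loss(rho, a, losses, p_default, n_paths=200_000, seed=0):
    """Monte Carlo estimate of E[t_[0,a](L)] under a one-factor Gaussian
    copula with uniform pairwise correlation rho."""
    rng = np.random.default_rng(seed)
    n = len(losses)
    c = NormalDist().inv_cdf(p_default)       # default threshold c_j
    m = rng.standard_normal((n_paths, 1))     # common factor
    z = rng.standard_normal((n_paths, n))     # idiosyncratic factors
    x = np.sqrt(rho) * m + np.sqrt(1.0 - rho) * z
    loss = (x <= c) @ np.asarray(losses)      # L = sum_m l_m 1[X_m <= c_m]
    return np.minimum(loss, a).mean()         # t_[0,a](L) = L - (L - a)^+

els = [equity_tranche_loss(rho, a=2.0, losses=[1.0] * 10, p_default=0.05)
       for rho in (0.0, 0.3, 0.6)]
print(els)  # expected equity tranche loss decreases as correlation rises
```

Raising rho pushes probability mass toward joint-default scenarios whose excess over a spills into more senior tranches, which is exactly the content of the inequality ∂E[L_[0,a]]/∂r_jk ≤ 0.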

# Frege-Russell and Mathematical Identity

Frege considered it a principal task of his logical reform of arithmetic to provide absolutely determinate identity conditions for the objects of that science, i.e. for numbers. Referring to the contemporary situation in this discipline he writes:

How I propose to improve upon it can be no more than indicated in the present work. With numbers … it is a matter of fixing the sense of an identity.

Frege makes the following critically important assumption: identity is a general logical concept, which is not specific to mathematics. Frege says:

It is not only among numbers that the relationship of identity is found. From which it seems to follow that we ought not to define it specially for the case of numbers. We should expect the concept of identity to have been fixed first, and that then from it together with the concept of number it must be possible to deduce when numbers are identical with one another, without there being need for this purpose of a special definition of numerical identity as well.

In a different place Frege says clearly that this concept of identity is absolutely stable across all possible domains and contexts:

Identity is a relation given to us in such a specific form that it is inconceivable that various forms of it should occur.

Frege’s definition of natural number, as modified in Russell (Bertrand Russell, Principles of Mathematics), later became standard. Intuitively, the number 3 is what all collections consisting of three members (trios) share in common. Now instead of looking for a common form, essence or type of trios, let us simply consider all such things together. According to Frege and Russell, the collection (class, set) of all trios just is the number 3; similarly for the other numbers.

Isn’t this construction circular? Frege and Russell provide the following argument, which they claim allows us to avoid circularity here: given two different collections, we may learn whether or not they have the same number of members without knowing this number, and even without the notion of number itself. It is sufficient to find a one-one correspondence between the members of the two given collections. If there is such a correspondence, the two collections comprise the same number of members, or, to avoid any reference to numbers, we can say that the two collections are equivalent (this criterion is essentially Hume’s principle). Let us define natural numbers as equivalence classes under this relation.

This definition reduces the question of the identity of numbers to that of the identity of classes. This latter question is settled through the axiomatization of set theory in a logical calculus with identity. Thus Frege’s project is realized: it has been shown how the logical concept of identity applies to numbers. In an axiomatic setting, “identities” in Quine’s sense (that is, identity conditions) of mathematical objects are provided by an axiom schema of the form

∀x ∀y (x=y ↔ ___ )

called the Identity Schema (IS). This does not resolve the identity problem, though, because any given system of axioms, generally speaking, has multiple models. The case of isomorphic models is similar to that of equal numbers or coincident points (naively construed): there are good reasons to think of isomorphic models as one, and there is also good reason to think of them as many. So the paradox of mathematical “doubles” reappears. It is a highly non-trivial fact that different models of Peano arithmetic, ZF, and other important axiomatic systems are not necessarily isomorphic.

Thus logical analysis à la Frege-Russell certainly clarifies the mathematical concepts involved, but it does not settle the identity issue as Frege believed it did. In the recent philosophy of mathematics literature, the problem of the identity of mathematical objects is usually considered in the logical setting just mentioned: either as the problem of the non-uniqueness of the models of a given axiomatic system, or as the problem of how to fill in the Identity Schema. At first glance the Frege-Russell proposal concerning the identity issue in mathematics seems judicious and innocent (and it certainly does not depend upon the rest of their logicist project): to stick to a certain logical discipline in speaking about identity, everywhere and in particular in mathematics.
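The one-one correspondence criterion, which compares two collections without counting either of them, is simple enough to sketch as a toy program (the function name and the pairing-by-removal strategy are our own illustration):

```python
def equinumerous(xs, ys):
    """Decide whether two collections have 'the same number' of members
    by pairing members off one at a time: no counting, no numbers."""
    xs, ys = list(xs), list(ys)
    while xs and ys:          # remove one member from each collection
        xs.pop()
        ys.pop()
    return not xs and not ys  # equivalent iff both are exhausted together

assert equinumerous(["Tom", "Dick", "Harry"], ["a", "b", "c"])  # two trios
assert not equinumerous(["Tom", "Dick"], ["a", "b", "c"])
```

On the Frege-Russell construction, the number 3 is then the equivalence class of all collections that this predicate relates to some given trio.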

# Representation in the Philosophy of Science. The concept of representation has gained momentum in the philosophy of science. The simplest conceivable concept of representation is expressed by the following dyadic predicate: structure S(HeB) represents HeB. Steven French argued that to represent something in science is the same as to have a model for it, where models are set-structures; then ‘representation’ and ‘model’ become synonyms, and so do ‘to represent’ and ‘to model’. Nevertheless, this simplest conception was quickly thrown overboard as too simple by, amongst others, Ronald Giere, who replaced the dyadic predicate with a tetradic predicate to express a more involved concept of representation:

Scientist S uses model A to represent being B for purpose P,

where ‘model’ can here be identified with ‘structure’. Another step was taken by Bas van Fraassen. As early as 1994, in his contribution to J. Hilgevoord’s Physics and our View of the World, Van Fraassen brought Nelson Goodman’s distinction between representation-of and representation-as (drawn in his seminal Languages of Art) to bear on science; he went on to argue that all representation in science is representation-as. We represent a Helium atom in a uniform magnetic field as a set-theoretical wave-mechanical structure S(HeB). In his new tome Scientific Representation, Van Fraassen has essentially moved to a hexadic predicate to express the most fundamental and most involved concept of representation to date:

Repr(S, V, A, B, F, P),

which reads: subject or scientist S is V-ing artefact A to represent B as an F for purpose P. Example: in the 1920s, Heisenberg (S) constructed (V) a mathematical object (A) to represent a Helium atom (B) as a wave-mechanical structure (F) in order to calculate its electro-magnetic spectrum (P). We concentrate on the following triadic predicate, which is derived from the fundamental hexadic one:

ReprAs(A, B, F) iff ∃S ∃V ∃P : Repr(S, V, A, B, F, P)

which reads: abstract object A represents being B as an F, so that F(A).
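The quantifier structure of the derived predicate can be made vivid with a toy formalisation (all names and the tiny ‘fact base’ are invented for illustration; this is not a claim about Van Fraassen’s own formalism):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Repr:
    """One instance of the hexadic predicate Repr(S, V, A, B, F, P)."""
    subject: str    # S: the scientist
    activity: str   # V: what S is doing with the artefact
    artefact: str   # A: the representing structure
    being: str      # B: the thing represented
    kind: str       # F: what B is represented as
    purpose: str    # P: the purpose served

facts = {
    Repr("Heisenberg", "constructed", "matrix-model", "He-atom",
         "wave-mechanical structure", "calculate spectrum"),
}

def repr_as(artefact, being, kind):
    """Triadic ReprAs(A, B, F): existentially quantify out S, V and P."""
    return any(f.artefact == artefact and f.being == being and f.kind == kind
               for f in facts)

assert repr_as("matrix-model", "He-atom", "wave-mechanical structure")
```

The `any(...)` call is the existential quantifier ∃S ∃V ∃P made computational over a finite stock of representation facts.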

Giere, Van Fraassen and contemporaries are not the first to include manifestations of human agency in their analysis of models and representation in science. A little more than half a century ago, Peter Achinstein expounded the following as a characteristic of models in science:

A theoretical model is treated as an approximation useful for certain purposes. (…) The value of a given model, therefore, can be judged from different though related viewpoints: how well it serves the purposes for which it is employed, and the completeness and accuracy of the representation it proposes. (…) To propose something as a model of X is to suggest it as a way of representing X which provides at least some approximation of the actual situation; moreover, it is to admit the possibility of alternative representations useful for different purposes.

One year later, M.W. Wartofsky explicitly proposed, during the Annual Meeting of the American Philosophical Association, Western Division, Philadelphia, 1966, to consider a model as a genus of representation, to take it that representation involves “relevant respects for relevant purposes”, and to consider “the modelling relation triadically in this way: M(S,x,y), where S takes x as a model of y”. Two years later, in 1968, Wartofsky wrote in his essay ‘Telos and Technique: Models as Modes of Action’ the following:

In this sense, models are embodiments of purpose and, at the same time, instruments for carrying out such purposes. Let me attempt to clarify this idea. No entity is a model of anything simply by virtue of looking like, or being like, that thing. Anything is like anything else in an infinite number of respects and certainly in some specifiable respect; thus, if I like, I may take anything as a model of anything else, as long as I can specify the respect in which I take it. There is no restriction on this. Thus an array of teacups, for example, may be taken as a model for the employment of infantry battalions, and matchsticks as models of mu-mesons, there being some properties that any of these things share with the others. But when we choose something to be a model, we choose it with some end in view, even when that end in view is simply to aid the imagination or the understanding. In the most trivial cases, then, the model is already normative and telic. It is normative in that it is chosen to represent abstractly only certain features of the thing we model, not everything all at once, but those features we take to be important or significant or valuable. The model is telic in that significance and value can exist only with respect to some end in view or purpose that the model serves.

Further, during the 1950s and 1960s the role of analogies, besides that of models, was much discussed among philosophers of science (Hesse, Achinstein, Girill, Nagel, Braithwaite, Wartofsky).

On the basis of the general concept of representation, we can echo Wartofsky by asserting that almost anything can represent everything for someone for some purpose. In scientific representations, representans and representandum will share some features, but not all features, because to represent is neither to mirror nor to copy. Realists, a-realists and anti-realists will all agree that ReprAs(A, B, F) is true only if, on the basis of F(A), one can save all the phenomena that being B gives rise to, i.e. one can calculate or accommodate all measurement results obtained from observing B or experimenting with B. Whilst for structural empiricists like Van Fraassen this is also sufficient, for StrR it is not. StrR will want to add that structure A of type F ‘is realised’, that A of type F truly is the structure of being B or refers to B, so that also F(B). StrR will want to order the representations of being B that scientists have constructed during the course of history as approaching the one and only true structure of B, its structure an sich, the Kantian regulative ideal of StrR. But this talk of truth and reference, of beings and structures an sich, is in dissonance with the concept of representation-as.

Some being B can be represented as many other things, and all the ensuing representations are hunky-dory if each one serves some purpose of some subject. When the concept of representation-as is taken as pivotal to make sense of science, then the sort of ‘perspectivalism’ that Giere advocates is more in consonance with the ensuing view of science than realism is. Giere attempts to hammer a weak variety of realism into his ‘perspectivalism’: all perspectives are perspectives on one and the same reality, and from every perspective something is said that can be interpreted realistically: in certain respects the representans resembles its representandum to certain degrees. A single unified picture of the world is however not to be had. Nancy Cartwright’s dappled world seems closer to Giere’s residence of patchwork realism. A unified picture of the physical world that realists dream of is completely out of the picture here. With friends like that, realism needs no enemies.

There is prima facie a way, however, for realists to express themselves in terms of representation, as follows. First, fix the purpose P to be: to describe the world as it is. When this fixed purpose leaves a variety of representations on the table, then choose the representation that is empirically superior, that is, that performs best in terms of describing the phenomena, because the phenomena are part of the world. This can be established objectively. When this still leaves more than one representation on the table, which thus save the phenomena equally well, choose the one that best explains the phenomena. In this context, Van Fraassen mentions the many interpretations of QM: each one constitutes a different representation of the same beings, or of only the same observable beings (phenomena), their similarities notwithstanding. Do all these interpretations provide equally good explanations? This can be established objectively too, but every judgment here will depend on which view of explanation is employed. Suppose we are left with a single structure A, of type G. Then we assert that ‘G(B)’ is true. When this ‘G’ predicates structure to B, we still need to know what ‘structure’ literally means in order to know what it is that we attribute to B, of what A is that B instantiates, and, even more important, we need to know this for our descriptivist account of reference, which realists need in order to be realists. Yes, we now have arrived where we were at the end of the previous two Sections. We conclude that this way for realists, to express themselves in terms of representation, is a dead end. The concept of representation is not going to help them.

The need for substantive accounts of truth and reference fades away as soon as one adopts a view of science that takes the concept of representation-as as its pivotal concept. Fundamentally different kinds of mathematical structure, set-theoretical and category-theoretical, can then easily be accommodated. They are ‘only representations’. That is moving away from realism, StrR included, dissolving rather than solving the problem for StrR of clarifying its Central Claim of what it means to say that being B is or has structure S — ‘dissolved’, because ‘is or has’ is replaced with ‘is represented-as’. Realism wants to know what B is, not only how it can be represented for someone who wants to do something for some purpose. When we take it for granted that StrR needs substantive accounts of truth and reference, more specifically a descriptivist account of reference and then an account of truth by means of reference, then a characterisation of structure as directly as possible, without committing one to a profusion of abstract objects, is mandatory.

# The Characterisation of Structure

# Eliminating Implicit Reference to Elements: Via Einsteinian Algebra. Previously, we highlighted the inadequacy of implicitly quantifying over elements, and it is to circumvent this point that Jonathan Bain introduced his specific argument, to which we now turn.

G3 above yields a special translation scheme that allows one to avoid making explicit reference to elements. The key insight driving the specific argument is that, if one looks at a narrower range of cases, a rather different sort of translation scheme is possible: indeed one that not only avoids making explicit reference to elements, but also allows one to generalize the C-objects in such a way that these new C-objects can no longer be considered to have elements (or as many elements) in the sense of the original objects. According to Bain, this shows that the

‘…correlates [of elements of structured sets] are not essential to the articulation of the relevant structure.’

Implicit reference to elements is thereby claimed to be eliminated. Bain argues by appealing to a particular instance of how this translation is supposed to work, viz. the example of Einstein algebras.

Here is our abstract reconstruction of Bain’s specific argument. Let there be two theories, T1 and T2, each represented by a category of models. T1 is the original physical theory that makes reference to O-objects.

S1: T1 can be translated into T2. In particular, each T1-model can be translated into a T2-model and vice versa.

S2: T2 is contained in a strictly larger theory (i.e. a larger category of models) T2∗. In particular, T2∗ is constructed by generalizing T2-models to yield models of T2∗, typically by dropping an algebraic condition from the T2-models. We will use T2′ to denote the complement of T2 in T2∗.

S3: T2′ cannot be translated back into T1, and so its models do not contain T1-objects.

S4: T2′ is relevant for modeling some physical scenarios.

When taken together, S1-S4 are supposed to show that:

CS: The T1-object correlates in T2 do not play an essential role in articulating the physical structure (smooth structure, in Bain’s specific case) of T2∗.

Let us defer for the moment the question of exactly how the idea of ‘translation’ is supposed to work here. The key idea behind S1–S4 is that one can generalize T2 to obtain a new – more general – theory T2∗, some of whose models do not contain T1-objects (i.e. O-objects in T1).

We now discuss the premises of the argument and show that S3 rests on a technical misunderstanding; however, we will rehabilitate S3 before proceeding to argue that the argument fails. First, S1: Bain notes that space-time points are in 1-1 correspondence with ‘maximal ideals’ (an algebraic feature) in the corresponding Einstein algebra (EA) model. We are thus provided with a translation scheme: space-time points in a geometric description of GTR are translated into maximal ideals in an algebraic description of GTR. So the idea is that EA models capture the physical content of GTR without making explicit reference to points. Now the version of S2 that Bain uses is one in which T2, the category of EAs, gets generalized to T2∗, the category of sheaves of EAs over a manifold, which has a generalized notion of ‘smooth structure’. The former is a proper subcategory of the latter, because a sheaf of EAs over a point is just equivalent to an EA.

Bain then tries to obtain S3 by saying that a sheaf of EAs which is inequivalent to an EA does not necessarily have global elements (i.e. sections of a sheaf) in the sense previously defined, and so does not have points. Unfortunately, he confuses the notion of a local section of a sheaf of EAs (which assigns an element of an EA to an open subset of a manifold) with the notion of a maximal ideal of an EA (i.e. the algebraic correlate of a spacetime point). And since the two are entirely different, a lack of global sections does not imply a lack of spacetime points (i.e. O-objects). Therefore S3 needs to be repaired.

Nonetheless, we can easily rehabilitate S3 in the following manner. The key idea is that while T1 (a geometric model of GTR) and T2 (the equivalent EA model) both make reference to T1-objects (explicitly and implicitly, respectively), some sheaves of EAs do not refer to T1-objects because they have no formulation in terms of geometric models of GTR. In other words, the generalized smooth structure of T2′ cannot be described in terms of the structured sets used to define ordinary smooth structure in the case of T1 and T2.

Finally, as regards S4, various authors have taken the utility of T2′ to be e.g. the inclusion of singularities in space-time, and as a step towards formulating quantum gravity (Geroch).

We now turn to considering the inference to CS. It is not entirely clear what Bain means by ‘[the relata] do not play an essential role’ – nor does he expand on this phrase – but the most straightforward reading is that T1-objects are eliminated simpliciter from T2∗.

One might compare this situation to the way that the collection of all groups (analogous to T2) is contained in the collection of all monoids (analogous to T2∗): it might be claimed that inverses are eliminated from the collection of all monoids. One could of course speak in this way, but what this would mean is that some monoids (in particular, groups) have inverses, and some do not – a ‘monoid’ is just a general term that covers both cases. Similarly, we can see that CS does not follow from S1–S3, since T2∗ contains some models that (implicitly) quantify over T1-objects, viz. the models of T2, and some that do not, viz. the models of T2′.
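The group/monoid comparison can be made concrete with a small program (an illustrative sketch; the finite Cayley tables are our own examples). ‘Monoid’ is the general term, and membership in the subclass of groups is just the question of whether inverses happen to exist:

```python
def is_group(elements, op, identity):
    """Check that every element of a finite monoid, given by its Cayley
    table op[(a, b)] = a*b, has a two-sided inverse."""
    return all(
        any(op[(a, b)] == identity and op[(b, a)] == identity
            for b in elements)
        for a in elements
    )

z2 = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}         # Z/2Z, addition
bool_mult = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}  # {0,1}, multiplication

assert is_group([0, 1], z2, identity=0)             # a group: inverses exist
assert not is_group([0, 1], bool_mult, identity=1)  # a mere monoid: 0 has none
```

Passing to the larger collection of monoids does not ‘eliminate’ inverses; it merely admits structures in which the inverse condition may fail, just as T2∗ admits models with and without T1-objects.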

We have seen that the specific argument will not work if one is concerned with eliminating reference to T1-objects from the new and more general theory T2∗. However, what if one is concerned not with eliminating reference, but rather with downgrading the role that T1-objects play in T2∗, e.g. by claiming that the models of T2′ have a conceptual or metaphysical priority? And what would such a ‘downgrading’ even amount to?

# Extreme Value Theory. Standard estimators of the dependence between assets are, for instance, the correlation coefficient or Spearman’s rank correlation. However, as stressed by [Embrechts et al.], these kinds of dependence measures suffer from many deficiencies. Moreover, their values are mostly controlled by relatively small moves of the asset prices around their mean. To cure this problem, it has been proposed to use correlation coefficients conditioned on large movements of the assets. But [Boyer et al.] have emphasized that this approach also suffers from a severe systematic bias leading to spurious strategies: the conditional correlation in general evolves with time even when the true non-conditional correlation remains constant. In fact, [Malevergne and Sornette] have shown that any approach based on conditional dependence measures implies a spurious change of the intrinsic value of the dependence, measured for instance by copulas. Recall that the copula of several random variables is the (unique) function which completely embodies the dependence between these variables, irrespective of their marginal behavior (see [Nelsen] for a mathematical description of the notion of copula).

In view of these limitations of the standard statistical tools, it is natural to turn to extreme value theory. In the univariate case, extreme value theory is very useful and provides many tools for investigating the extreme tails of distributions of asset returns. These developments rest on a few fundamental results on extremes, such as the Gnedenko-Pickands-Balkema-de Haan theorem, which gives a general expression for the distribution of exceedances over a large threshold. In this framework, the study of large and extreme co-movements requires multivariate extreme value theory, which unfortunately does not provide equally strong results. Indeed, in contrast with the univariate case, the class of limiting extreme-value distributions is too broad and cannot be used to constrain accurately the distribution of large co-movements.

In the spirit of the mean-variance portfolio or of utility theory, which base an investment decision on a single risk measure, we use the coefficient of tail dependence, which, to our knowledge, was first introduced in the financial context by [Embrechts et al.]. The coefficient of tail dependence between assets Xi and Xj is a very natural and easy-to-understand measure of extreme co-movements. It is defined as the probability that asset Xi incurs a large loss (or gain) assuming that asset Xj also undergoes a large loss (or gain) at the same probability level, in the limit where this probability level explores the extreme tails of the distributions of returns of the two assets. Mathematically speaking, the coefficient of lower tail dependence between the two assets Xi and Xj, denoted by λ−ij, is defined by

λ−ij = lim_{u→0} Pr{Xi < Fi^{−1}(u) | Xj < Fj^{−1}(u)} —– (1)

where Fi^{−1}(u) and Fj^{−1}(u) represent the quantiles of assets Xi and Xj at level u. Similarly, the coefficient of upper tail dependence is

λ+ij = lim_{u→1} Pr{Xi > Fi^{−1}(u) | Xj > Fj^{−1}(u)} —– (2)

λ−ij and λ+ij are of concern to investors with long (respectively short) positions. We refer to [Coles et al.] and references therein for a survey of the properties of the coefficient of tail dependence. Let us stress that the use of quantiles in the definition of λ−ij and λ+ij makes them independent of the marginal distribution of the asset returns: as a consequence, the tail dependence parameters are intrinsic dependence measures. The obvious gain is an “orthogonal” decomposition of the risks into (1) individual risks carried by the marginal distribution of each asset and (2) their collective risk described by their dependence structure or copula.

Being a probability, the coefficient of tail dependence varies between 0 and 1. A large value of λ−ij means that large losses almost surely occur together. Then, large risks cannot be diversified away and the assets crash together. This investor and portfolio manager nightmare is further amplified in real-life situations by the limited liquidity of markets. When λ−ij vanishes, the assets are said to be asymptotically independent, but this term hides the subtlety that the assets can still present a non-zero dependence in their tails. For instance, two normally distributed assets can be shown to have a vanishing coefficient of tail dependence. Nevertheless, unless their correlation coefficient is identically zero, these assets are never independent. Thus, asymptotic independence must be understood as the weakest dependence which can be quantified by the coefficient of tail dependence.
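The vanishing tail dependence of normally distributed assets can be seen numerically (a Monte Carlo sketch with an assumed correlation of 0.5): the conditional probability in definition (1), evaluated at a finite level u, shrinks as u moves deeper into the lower tail.

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho = 2_000_000, 0.5                 # rho is an assumed correlation
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)

def lambda_hat(u):
    """Empirical Pr{X1 < q_u | X2 < q_u} at level u, using empirical
    quantiles so that the marginals drop out, as in definition (1)."""
    q1, q2 = np.quantile(z1, u), np.quantile(z2, u)
    joint = np.mean((z1 < q1) & (z2 < q2))
    return joint / u                    # P(X2 < q2) is u by construction

print(lambda_hat(0.05), lambda_hat(0.005))  # the second is markedly smaller
```

Pushing u further toward 0 drives the estimate toward the limit λ−ij = 0, even though the two variables remain correlated at every finite level.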

For practical implementations, a direct application of the definitions (1) and (2) fails to provide reasonable estimations due to the double curse of dimensionality and undersampling of extreme values, so that a fully non-parametric approach is not reliable. It turns out to be possible to circumvent this fundamental difficulty by considering the general class of factor models, which are among the most widespread and versatile models in finance. They come in two classes: multiplicative and additive factor models. The multiplicative factor models are generally used to model asset fluctuations due to an underlying stochastic volatility. The additive factor models relate asset fluctuations to market fluctuations, as in the Capital Asset Pricing Model (CAPM) and its generalizations, or to any set of common factors as in Arbitrage Pricing Theory. The coefficient of tail dependence is known in closed form for both classes of factor models, which allows for an efficient empirical estimation.