Optimal Hedging…..


Risk management is important in the practice of financial institutions and other corporations. Derivatives are popular instruments for hedging exposures to currency, interest-rate and other market risks. An important step in risk management is to use these derivatives in an optimal way. The most popular derivatives are forwards, options and swaps. They are the basic building blocks for all sorts of more complicated derivatives, and should be used prudently. Several parameters need to be determined in the process of risk management, and it is necessary to investigate the influence of these parameters on the aims of the hedging policies and the possibility of achieving these goals.

The problem of determining the optimal strike price and optimal hedging ratio is considered, where a put option is used to hedge market risk under a budget constraint. The chosen option is not guaranteed to finish in-the-money at maturity, so the predicted loss of the hedged portfolio may differ from the realized loss. The aim of hedging is to minimize the potential loss of the investment at a specified level of confidence. In other words, the optimal hedging strategy is the one that minimizes the Value-at-Risk (VaR) at that confidence level.

A stock is bought at time zero at price S_0, and is to be sold at time T at the uncertain price S_T. In order to hedge the market risk of the stock, the company decides to choose one of the available put options written on the same stock with maturity at time τ, where τ is prior and close to T, and the n available put options are specified by their strike prices K_i (i = 1, 2, ···, n). As the prices of the different put options differ, the company also needs to determine an optimal hedge ratio h (0 ≤ h ≤ 1) with respect to the chosen strike price. The cost of hedging should be less than or equal to the predetermined hedging budget C. In other words, the company needs to determine the optimal strike price and hedging ratio under the constraint of the hedging budget.

Suppose the market price of the stock is S_0 at time zero, the hedge ratio is h, the price of the put option is P_0, and the riskless interest rate is r. At time T, the time value of the hedging portfolio is

S_0 e^{rT} + h P_0 e^{rT} —– (1)

and the market price of the portfolio is

S_T + h (K − S_τ)^+ e^{r(T−τ)} —– (2)

therefore the loss of the portfolio is

L = (S_0 e^{rT} + h P_0 e^{rT}) − (S_T + h (K − S_τ)^+ e^{r(T−τ)}) —– (3)

where x^+ = max(x, 0), which is the payoff function of the put option at maturity.

For a given threshold v, the probability that the amount of loss exceeds v is denoted as

α = Prob{L ≥ v} —– (4)

in other words, v is the Value-at-Risk (VaR) at the α percentage level. There are several alternative measures of risk, such as CVaR (Conditional Value-at-Risk), ESF (Expected Shortfall), CTE (Conditional Tail Expectation), and other coherent risk measures. The criterion of optimality here is to minimize the VaR of the hedging strategy.

The mathematical model of the stock price is chosen to be a geometric Brownian motion, i.e.

dS_t / S_t = μ dt + σ dB_t —– (5)

where S_t is the stock price at time t (0 < t ≤ T), μ and σ are the drift and the volatility of the stock price, and B_t is a standard Brownian motion. The solution of the stochastic differential equation is

S_t = S_0 e^{σ B_t + (μ − σ²/2) t} —– (6)

where B_0 = 0, and S_t is lognormally distributed.
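Equations (1)–(4) and (6) lend themselves to direct simulation. As a minimal sketch (the parameter values are illustrative assumptions, not from the text): draw B_τ and the independent increment B_T − B_τ, build S_τ and S_T via (6), form the loss (3), and estimate (4) by the empirical frequency of {L ≥ v}.

```python
import numpy as np

# Illustrative parameters (assumptions for this sketch, not from the text)
S0, K, P0 = 100.0, 95.0, 2.5      # spot price, strike, put premium
mu, sigma, r = 0.08, 0.20, 0.03   # drift, volatility, riskless rate
tau, T, h = 0.9, 1.0, 0.8         # option maturity, horizon (tau < T), hedge ratio

def simulate_loss(n_paths=100_000, seed=0):
    """Draw B_tau and B_T - B_tau, build S_tau and S_T via (6), return L of (3)."""
    rng = np.random.default_rng(seed)
    B_tau = rng.normal(0.0, np.sqrt(tau), n_paths)
    B_T = B_tau + rng.normal(0.0, np.sqrt(T - tau), n_paths)
    S_tau = S0 * np.exp(sigma * B_tau + (mu - 0.5 * sigma**2) * tau)
    S_T = S0 * np.exp(sigma * B_T + (mu - 0.5 * sigma**2) * T)
    hedged_value = (S0 + h * P0) * np.exp(r * T)               # time value (1)
    put_leg = h * np.maximum(K - S_tau, 0.0) * np.exp(r * (T - tau))
    return hedged_value - (S_T + put_leg)                      # loss (3)

L = simulate_loss()
v = 10.0                                         # an arbitrary loss threshold
print("Q(v) = Prob{L >= v} ≈", (L >= v).mean())  # equation (4)
```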

Proposition:

For a given threshold of loss v, the probability that the loss exceeds v is

Prob {L ≥ v} = E[ I{X ≤ c_1} F_Y(g(X) − X) ] + E[ I{X ≥ c_1} F_Y(c_2 − X) ] —– (7)

where E[·] denotes expectation, I{·} is the indicator function of an event (I{X ≤ c} = 1 when X ≤ c holds, otherwise it is 0), F_Y(y) is the cumulative distribution function of the random variable Y, and

c_1 = (1/σ)[ ln(K/S_0) − (μ − σ²/2)τ ],

g(X) = (1/σ)[ ln( ((S_0 + hP_0) e^{rT} − h(K − f(X)) e^{r(T−τ)} − v) / S_0 ) − (μ − σ²/2)T ],

f(X) = S_0 e^{σX + (μ − σ²/2)τ},

c_2 = (1/σ)[ ln( ((S_0 + hP_0) e^{rT} − v) / S_0 ) − (μ − σ²/2)T ].

X and Y are independent and normally distributed, with X ∼ N(0, √τ) and Y ∼ N(0, √(T−τ)); in terms of the driving Brownian motion, X = B_τ and Y = B_T − B_τ.

For a specified hedging strategy, Q(v) = Prob {L ≥ v} is a decreasing function of v. The VaR at level α can be obtained from the equation

Q(v) = α —– (8)

The expectations in the Proposition can be calculated with Monte Carlo simulation methods, and the optimal hedging strategy with the smallest VaR can then be obtained from equation (8) by numerical search methods….
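A sketch of that procedure, reusing the illustrative (assumed) parameters above: in (7) the expectation over Y is available in closed form through the normal CDF, so only X need be simulated; writing the put payoff with (·)^+ makes g(X) reduce to c_2 on {X ≥ c_1}, so a single expression covers both terms. Equation (8) is then solved by bisection, since Q(v) is decreasing.

```python
import numpy as np
from scipy.stats import norm

# Same illustrative parameters as the earlier sketch (assumptions, not from the text)
S0, K, P0 = 100.0, 95.0, 2.5
mu, sigma, r = 0.08, 0.20, 0.03
tau, T, h = 0.9, 1.0, 0.8

def Q(v, n=200_000, seed=0):
    """Estimate Prob{L >= v} via (7), simulating X ~ N(0, sqrt(tau)) only."""
    rng = np.random.default_rng(seed)        # fixed seed: common random numbers
    X = rng.normal(0.0, np.sqrt(tau), n)
    f_X = S0 * np.exp(sigma * X + (mu - 0.5 * sigma**2) * tau)   # f(X) = S_tau
    arg = ((S0 + h * P0) * np.exp(r * T)
           - h * np.maximum(K - f_X, 0.0) * np.exp(r * (T - tau)) - v)
    safe = np.maximum(arg, 1e-300)                               # avoid log(<= 0)
    g = np.where(arg > 0,
                 (np.log(safe / S0) - (mu - 0.5 * sigma**2) * T) / sigma,
                 -np.inf)                    # arg <= 0 means {L >= v} is impossible
    return float(norm.cdf((g - X) / np.sqrt(T - tau)).mean())    # F_Y(g(X) - X)

def var_level(alpha, lo=-50.0, hi=200.0, tol=1e-3):
    """Solve Q(v) = alpha, i.e. equation (8), by bisection (Q decreases in v)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Q(mid) > alpha else (lo, mid)
    return 0.5 * (lo + hi)

print("VaR at the 5% level ≈", round(var_level(0.05), 2))
# An outer search over the strikes K_i and ratios h subject to the budget
# h * P0(K_i) <= C would then pick the pair with the smallest VaR.
```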

Topological Drifts in Deleuze. Note Quote.

Brion Gysin: How do you get in… get into these paintings?

William Burroughs: Usually I get in by a port of entry, as I call it. It is often a face through whose eyes the picture opens into a landscape and I go literally right through that eye into that landscape. Sometimes it is rather like an archway… a number of little details or a special spot of colours makes the port of entry and then the entire picture will suddenly become a three-dimensional frieze in plaster or jade or some other precious material.

The word fornix means “an archway” or “vault” (in Rome, prostitutes could be solicited there). More directly, fornicatio means “done in the archway”; thus a euphemism for prostitution.

Diagrammatic praxis proposes a contractual (push, pull) approach in which the movement between abstract machine, biogram (embodied, inflected diagram), formal diagram (drawing of, drawing off) and artaffect (realized thing) is topologically immanent. It imagines the practice of writing, of this writing, interleaved with the mapping processes with which it folds and unfolds – forming, deforming and reforming both processes. The relations of non-relations that power the diagram, the thought intensities that resonate between fragments, between content and expression, the seeable and the sayable, the discursive and the non-discursive, mark entry points; portals of entry through which all points of the diagram pass – push, pull, fold, unfold – without the designation of arrival and departure, without the input/output connotations of a black boxed confection. Ports, as focal points of passage, attract lines of resistance or lines of flight through which the diagram may become both an effectuating concrete assemblage (thing) and remain outside the stratified zone of the audiovisual. It’s as if the port itself is a bifurcating point, a figural inflected archway. The port, as a bifurcation point of resistance (contra black box), modulates and changes the unstable, turbulent interplay between pure Matter and pure Function of the abstract machine. These ports are marked out, localized, situated, by the continuous movement of power-relations:

These power-relations … simultaneously local, unstable and diffuse, do not emanate from a central point or unique locus of sovereignty, but at each moment move from one point to another in a field of forces, marking inflections, resistances, twists and turns when one changes direction or retraces one’s steps… (Gilles Deleuze, Sean Hand-Foucault)

An inflection point, marked out by the diagram, is not a symmetrical form but the difference between concavity and convexity, a pure temporality, a “true atom of form, the true object of geography.” (Bernard Cache)


Figure: Left: A bifurcating event presented figurally as an archway, a port of entry through order and chaos. Right: Event/entry with inflexion points, points of suspension, of pure temporality, that give a form “of an absolute exteriority that is not even the exteriority of any given interiority, but which arise from that most interior place that can be perceived or even conceived […] that of which the perceiving itself is radically temporal or transitory”. The passing through of passage.

Cache’s absolute exteriority is equivalent to Deleuze’s description of the Outside “more distant than any exterior […] ‘twisted’, folded and doubled by an Inside that is deeper than any interior, and alone creates the possibility of the derived relation between the interior and the exterior”. This folded and doubled interior is diagrammed by Deleuze in the Foldings chapter of Foucault.

Thinking does not depend on a beautiful interiority that reunites the visible and articulable elements, but is carried under the intrusion of an outside that eats into the interval and forces or dismembers the internal […] when there are only environments and whatever lies between them, when words and things are opened up by the environment without ever coinciding, there is a liberation of forces which come from the outside and exist only in a mixed up state of agitation, modification and mutation. In truth they are dice throws, for thinking involves throwing the dice. If the outside, farther away than any external world, is also closer than any internal world, is this not a sign that thought affects itself, by revealing the outside to be its own unthought element?

“It cannot discover the unthought […] without immediately bringing the unthought nearer to itself – or even, perhaps, without pushing it farther away, and in any case without causing man’s own being to undergo a change by the very fact, since it is deployed in the distance between them” (Gilles Deleuze, Sean Hand-Foucault)


Figure: Left: a simulation of Deleuze’s central marking in his diagram of the Foucaultian diagram. This is the line of the Outside as Fold. Right: To best express the relations of diagrammatic praxis between content and expression (theory and practice) the Fold figure needs to be drawn as a double Fold (“twice twice” as Massumi might say) – a folded möbius strip. Here the superinflections between inside/outside and content/expression provide transversal vectors.

A topology or topological becoming-shapeshift retains its connectivity, its interconnectedness, to preserve its autonomy as a singularity. All the points of all its matter reshape as difference in itself. A topology does not resemble itself. The möbius strip and the infamous torus-to-coffee-cup are examples of 2d and 3d topologies. Technically, a topological surface is totalized; it cannot comprise fragments cut or glued to produce a whole. Its change is continuous. It is not cut-copy-pasted. But the cut and its interval are requisite to an emergent new.

For Deleuze, the essence of meaning, the essence of essence, is best expressed in two infinitives: “to cut” and “to die” […] Definite tenses keeping company in time. In the slash between their future and their past: “to cut” as always timeless and alone (Massumi).

Add the individuating “to shift” to the infinitives that reside in the timeless zone of indetermination of future-past. Given the paradigm of the topological-becoming, how might we address writing in the age of copy-paste and hypertext? The seamless and the stitched? As potential is it diagram? A linguistic multiplicity whose virtual immanence is the metalanguage potentiality between the phonemes that gives rise to all language?


An overview diagram of diagrammatic praxis based on Deleuze’s diagram of the Foucaultian model shown below. The main modification is to the representation of the Fold. In the top figure, the Fold or zone of subjectification becomes a double-folded möbius strip.

Four folds of subjectification:

1. the material part of ourselves which is to be surrounded and folded

2. the fold of the relation between forces: it is always according to a particular rule that the relation between forces is bent back in order to become a relation to oneself (rule: natural, divine, rational, aesthetic, etc.)

3. the fold of knowledge constitutes the relation of truth to our being, and of our being to truth, which will serve as the formal condition for any kind of knowledge

4. the fold of the outside itself is the ultimate fold: an ‘interiority of expectation’ from which the subject, in different ways, hopes for immortality, eternity, salvation, freedom or death or detachment.

Deleuzo-Foucauldian Ontological Overview: From the Machine to the Archive. Thought of the Day 26.0

In his book on Foucault first published in 1986, Deleuze drew a diagram in the last chapter, Foldings, that depicts in overview the Outside as abstract machine, defined by the line of the outside (1), which separates the unformed interplay of forces and resistance from the strategies and strata that filter the affects of power relations to become “the world of knowledge”.


The central Fold of subjectification, of ‘Life’ is “hollowed out” and ignored by the forces of the outside as they are realized in the strata fulfilling the obligation of the diagram to “come to fruition in the archive.” This is a dual process of integration and differentiation. The residual dust of the affective relations produced by force upon force integrates into the strata even as it differentiates into forms of realization – visible or articulable. The ‘empty’ fissure/fold attracts and repels these moving curvilinear strategies as they differentiate and “hop over” it. Ostensibly, the Fold of subjectification effectuates change as both continuously topological and discontinuously catastrophic (as in leaping over). So, the process of crystallization from informal to formal paradoxically integrates as it differentiates. Deleuze’s somewhat paradoxical description follows:

The informal relations between forces differentiate from one another by creating heterogeneous curves which pass through the neighborhood of particular features (statements) and that of the scenes which distribute them into figures of light (visibilities). And at the same time the relations between forces became integrated, precisely in the formal relations between the two, from one side to the other of differentiation. This is because the relations between forces ignored the fissure within the strata, which begins only below them. They are apt to hollow out the fissure by being actualized in the strata, but also to hop over it in both senses of the term by becoming differentiated even as they become integrated. (Gilles Deleuze, Sean Hand-Foucault)

So this “pineal gland” figure of the Fold is the “center of the cyclone”, where life is lived “par excellence” as a “slow Being”.

As clarifying as Deleuze’s diagram is in summarizing the layered dimensionality of the Foucauldian/Deleuzian hybrid, some modifications will be drawn off to alternatively express the realizations of the play of informal forces as this diagram takes on the particular features of a Research Creation praxis. True to the originating wax tablet diagramma, the relations are drawn and redrawn, in recognition (after Bergson’s notion of recognition as the intensive point where memory meets action) of the contemporary social field that situates it. The shift from the 19C to 20C disciplinary diagram of Foucault’s focus modulates with the late 20C society-of-control diagram formulated by Deleuze. The shorthand for the force field relevant to the research creation diagram of practice-led arts research today is a transdisciplinary diagram, the gamespace of just-in-time capitalism, which necessarily elicits mutations in the Foucault/Deleuze model. Generating the power-resistance relations in this outside qua gamespace are, among others, the revitalized forces of the military-academic-entertainment complex that fuel economic models such as the Creative Industries that pervade the conditions of play in artistic research. McKenzie Wark concludes his book Gamer Theory with prescient comments on the black hole quality of a topology of the outside qua contemporary “gamespace”, drawing on Deleuze and Guattari (ATP) and Guy Debord: “Only by going further and further into gamespace might one come out the other side of it, to realize a topology beyond the limiting forms of the game. Deleuze and Guattari: ‘… one can never go far enough in the direction of [topology]: you haven’t seen anything yet — an irreversible process. And when we consider what there is of a profoundly artificial nature […] we cry out, “More perversion! More artifice!” — to a point where the earth becomes so artificial that the movement of [topology] creates of necessity and by itself a new earth.’”

Bayesianism in Game Theory. Thought of the Day 24.0


Bayesianism in game theory can be characterised as the view that it is always possible to define probabilities for anything that is relevant for the players’ decision-making. In addition, it is usually taken to imply that the players use Bayes’ rule for updating their beliefs. If the probabilities are to be always definable, one also has to specify what players’ beliefs are before the play is supposed to begin. The standard assumption is that such prior beliefs are the same for all players. This common prior assumption (CPA) means that the players have the same prior probabilities for all those aspects of the game for which the description of the game itself does not specify different probabilities. Common priors are usually justified with the so called Harsanyi doctrine, according to which all differences in probabilities are to be attributed solely to differences in the experiences that the players have had. Different priors for different players would imply that there are some factors that affect the players’ beliefs even though they have not been explicitly modelled. The CPA is sometimes considered to be equivalent to the Harsanyi doctrine, but there seems to be a difference between them: the Harsanyi doctrine is best viewed as a metaphysical doctrine about the determination of beliefs, and it is hard to see why anybody would be willing to argue against it: if everything that might affect the determination of beliefs is included in the notion of ‘experience’, then it alone does determine the beliefs. The Harsanyi doctrine has some affinity to some convergence theorems in Bayesian statistics: if individuals are fed with similar information indefinitely, their probabilities will ultimately be the same, irrespective of the original priors.
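A toy numerical illustration of the CPA (the numbers are assumptions, not from the text): two players share one common prior over four states, each observes a different partition, and each conditions by Bayes’ rule. Their posteriors for the same event differ, yet the difference is traceable to information alone, as the Harsanyi doctrine would have it.

```python
from fractions import Fraction

# Common prior over states (row, column); the weights are illustrative assumptions.
prior = {('r1', 'c1'): Fraction(1, 10), ('r1', 'c2'): Fraction(2, 10),
         ('r2', 'c1'): Fraction(3, 10), ('r2', 'c2'): Fraction(4, 10)}
event = {('r1', 'c1'), ('r2', 'c2')}          # the event both players assess

def posterior(info):
    """Bayes' rule: P(event | info) under the common prior."""
    mass = sum(p for s, p in prior.items() if s in info)
    return sum(p for s, p in prior.items() if s in info and s in event) / mass

p1 = posterior({s for s in prior if s[0] == 'r1'})  # player 1 observed the row
p2 = posterior({s for s in prior if s[1] == 'c1'})  # player 2 observed the column
print(p1, p2)  # 1/3 vs 1/4: same prior, different information, different beliefs
```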

The CPA however is a methodological injunction to include everything that may affect the players’ behaviour in the game: not just everything that motivates the players, but also everything that affects the players’ beliefs should be explicitly modelled by the game: if players had different priors, this would mean that the game structure would not be completely specified because there would be differences in players’ behaviour that are not explained by the model. In a dispute over the status of the CPA, Faruk Gul essentially argues that the CPA does not follow from the Harsanyi doctrine. He does this by distinguishing between two different interpretations of the common prior, the ‘prior view’ and the ‘infinite hierarchy view’. The former is a genuinely dynamic story in which it is assumed that there really is a prior stage in time. The latter framework refers to Mertens and Zamir’s construction in which prior beliefs can be consistently formulated. This framework, however, is static in the sense that the players do not have any information on a prior stage; indeed, the ‘priors’ in this framework do not even pin down a player’s priors for his own types. Thus, the existence of a common prior in the latter framework does not have anything to do with the view that differences in beliefs reflect differences in information only.

It is agreed by everyone that for most (real-world) problems there is no prior stage in which the players know each other’s beliefs, let alone that they would be the same. The CPA, if understood as a modelling assumption, is clearly false. Robert Aumann, however, defends the CPA by arguing that whenever there are differences in beliefs, there must have been a prior stage in which the priors were the same, and from which the current beliefs can be derived by conditioning on the differentiating events. If players differ in their present beliefs, they must have received different information at some previous point in time, and they must have processed this information correctly. Based on this assumption, he further argues that players cannot ‘agree to disagree’: if a player knows that his opponents’ beliefs are different from his own, he should revise his beliefs to take the opponents’ information into account. The only case where the CPA would be violated, then, is when players have different beliefs, and have common knowledge about each other’s different beliefs and about each other’s epistemic rationality. Aumann’s argument seems perfectly legitimate if it is taken as a metaphysical one, but we do not see how it could be used as a justification for using the CPA as a modelling assumption in this or that application of game theory, and Aumann does not argue that it should.


Weil Conjectures. Note Quote.


Solving Diophantine equations, that is, giving integer solutions to polynomials, is often unapproachably difficult. Weil describes one strategy in a letter to his sister, the philosopher Simone Weil: Look for solutions in richer fields than the rationals, perhaps fields of rational functions over the complex numbers. But these are quite different from the integers:

We would be badly blocked if there were no bridge between the two. And voilà god carries the day against the devil: this bridge exists; it is the theory of algebraic function fields over a finite field of constants.

A solution modulo 5 to a polynomial P(X, Y, ..Z) is a list of integers X, Y, ..Z making the value P(X, Y, ..Z) divisible by 5, or in other words equal to 0 modulo 5. For example, X² + Y² − 3 has no integer solutions. That is clear since X and Y would both have to be 0 or ±1, to keep their squares below 3, and no combination of those works. But it has solutions modulo 5 since, among others, 3² + 3² − 3 = 15 is divisible by 5. Solutions modulo a given prime p are easier to find than integer solutions and they amount to the same thing as solutions in the finite field of integers modulo p.
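The text’s example is easy to confirm by brute force; a few lines of Python listing every solution of X² + Y² − 3 ≡ 0 modulo 5:

```python
p = 5
solutions = [(x, y) for x in range(p) for y in range(p)
             if (x * x + y * y - 3) % p == 0]
print(solutions)  # [(2, 2), (2, 3), (3, 2), (3, 3)], including 3² + 3² − 3 = 15
```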

To see if a list of polynomial equations P_i(X, Y, ..Z) = 0 has a common solution modulo p we need only check p different values for each variable. Even if p is impractically large, equations are more manageable modulo p. Going farther, we might look at equations modulo p, but allow some irrationals, and ask how the number of solutions grows as we allow irrationals of higher and higher degree—roots of quadratic polynomials, roots of cubic polynomials, and so on. This is looking for solutions in all finite fields, as in Weil’s letter.

The key technical points about finite fields are: For each prime number p, the integers modulo p form a field, written F_p. For each natural number r > 0 there is (up to isomorphism) just one field with p^r elements, written as F_{p^r} or as F_q with q = p^r. This comes from F_p by adjoining the roots of a degree r polynomial. These are all the finite fields. Trivially, then, for any natural number s > 0 there is just one field with q^s elements, namely F_{p^{rs}}, which we may write F_{q^s}. The union over all r of the F_{p^r} is the algebraic closure F̄_p. By Galois theory, the roots of polynomials lying in F_{p^r} are precisely the fixed points of the r-th iterate of the Frobenius morphism, that is, of the map taking each x ∈ F̄_p to x^{p^r}.
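A small sketch of the last point for p = 5, r = 2: model F_25 as F_5[w]/(w² − 2) (the polynomial w² − 2 is irreducible mod 5 because 2 is not a square mod 5). The fixed points of the Frobenius x ↦ x^5 are exactly the embedded copy of F_5, while every element of F_25 is fixed by its second iterate x ↦ x^25.

```python
p = 5  # F_25 = F_5[w]/(w^2 - 2); an element a + b*w is stored as the pair (a, b)

def mul(u, v):
    a, b = u; c, d = v
    # (a + b w)(c + d w) = (ac + 2bd) + (ad + bc) w, since w^2 = 2
    return ((a * c + 2 * b * d) % p, (a * d + b * c) % p)

def power(u, n):
    """Square-and-multiply exponentiation in F_25."""
    result = (1, 0)
    while n:
        if n & 1:
            result = mul(result, u)
        u = mul(u, u)
        n >>= 1
    return result

field = [(a, b) for a in range(p) for b in range(p)]
frob_fixed = [u for u in field if power(u, p) == u]   # solutions of x^p = x
assert frob_fixed == [(a, 0) for a in range(p)]       # exactly the copy of F_5
assert all(power(u, p**2) == u for u in field)        # all of F_25 fixed by x -> x^25
print(len(frob_fixed), "fixed points of Frobenius; all 25 fixed by its square")
```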

Take any good n-dimensional algebraic space (any smooth projective variety of dimension n) defined by integer polynomials over a finite field F_q. For each s ∈ N, let N_s be the number of points defined on the extension field F_{q^s}. Define the zeta function Z(t) as an exponential using a formal variable t:

Z(t) = exp( ∑_{s≥1} N_s t^s / s )

The first Weil conjecture says Z(t) is a rational function:

Z(t) = P(t)/Q(t)

for integer polynomials P(t) and Q(t). This is a strong constraint on the numbers of solutions N_s. It means there are complex algebraic numbers a_1, …, a_i and b_1, …, b_j such that

N_s = (a_1^s + … + a_i^s) − (b_1^s + … + b_j^s)

and each algebraic conjugate of an a (resp. b) is also an a (resp. b).
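A sketch verifying rationality in the simplest case (the choice of variety is mine, for illustration): for the projective line over F_5 the point counts are classically N_s = 5^s + 1, and Z(t) should come out as 1/((1 − t)(1 − 5t)), i.e. a_1 = 5, a_2 = 1 and no b’s. The coefficients of exp(∑_s N_s t^s/s) follow from the recurrence Z′ = (∑_s N_s t^{s−1}) Z:

```python
from fractions import Fraction

q, n_terms = 5, 10
N = [q**s + 1 for s in range(1, n_terms + 1)]   # N_1 ... N_10 for P^1 over F_5

# Coefficients z_k of Z(t) = exp(sum_s N_s t^s / s), via Z' = (sum_s N_s t^{s-1}) Z,
# which gives (k+1) z_{k+1} = sum_{s=1}^{k+1} N_s z_{k+1-s}.
z = [Fraction(1)]
for k in range(n_terms):
    z.append(sum(N[s - 1] * z[k + 1 - s] for s in range(1, k + 2)) / Fraction(k + 1))

# 1/((1 - t)(1 - q t)) has t^k coefficient 1 + q + ... + q^k.
expected = [Fraction(sum(q**j for j in range(k + 1))) for k in range(n_terms + 1)]
assert z == expected
print("Z(t) agrees with 1/((1 - t)(1 - 5t)) through order", n_terms)
```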

The second conjecture is a functional equation:

Z(1/(q^n t)) = ± q^{nE/2} t^E Z(t)

This says the operation x → q^n/x permutes the a’s (resp. the b’s). The third is a Riemann Hypothesis:

Z(t) = (P_1(t) P_3(t) ··· P_{2n−1}(t)) / (P_0(t) P_2(t) ··· P_{2n}(t))

where each P_k is an integer polynomial with all roots of absolute value q^{−k/2}. That means each a has absolute value q^k for some 0 ≤ k ≤ n, and each b has absolute value q^{(2k−1)/2} for some 1 ≤ k ≤ n.

Over it all is the conjectured link to topology. Let B_0, B_1, …, B_{2n} be the Betti numbers of the complex manifold defined by the same polynomials. That is, each B_k gives the number of k-dimensional holes or handles on the continuous space of complex number solutions to the equations. And recall an algebraically n-dimensional complex manifold is topologically 2n-dimensional. Then each P_k has degree B_k. And E is the Euler number of the manifold, the alternating sum

E = ∑_{k=0}^{2n} (−1)^k B_k

On its face the topology of a continuous manifold is worlds apart from arithmetic over finite fields. But the topology of this manifold tells how many a’s and b’s there are with each absolute value. This implies useful numerical approximations to the numbers of roots N_s. Special cases of these conjectures, with aspects of the topology, were proved before Weil, and he proved more. All dealt with curves (1-dimensional) or hypersurfaces (defined by a single polynomial).

Weil presented the topology as motivating the conjectures for higher dimensional varieties. He especially pointed out how the whole series of conjectures would follow quickly if we could treat the spaces of finite field solutions as topological manifolds. The topological strategy was powerfully seductive but seriously remote from existing tools. Weil’s arithmetic spaces were not even precisely defined. To all appearances they would be finite or (over the algebraic closures of the finite fields) countable and so everywhere discontinuous. Topological manifold methods could hardly apply.

Abelian Categories, or Injective Resolutions are Diagrammatic. Note Quote.


Jean-Pierre Serre gave a more thoroughly cohomological turn to the conjectures than Weil had. Grothendieck says

Anyway Serre explained the Weil conjectures to me in cohomological terms around 1955 – and it was only in these terms that they could possibly ‘hook’ me …I am not sure anyone but Serre and I, not even Weil if that is possible, was deeply convinced such [a cohomology] must exist.

Specifically Serre approached the problem through sheaves, a new method in topology that he and others were exploring. Grothendieck would later describe each sheaf on a space T as a “meter stick” measuring T. The cohomology of a given sheaf gives a very coarse summary of the information in it – and in the best case it highlights just the information you want. Certain sheaves on T produced the Betti numbers. If you could put such “meter sticks” on Weil’s arithmetic spaces, and prove standard topological theorems in this form, the conjectures would follow.

By the nuts and bolts definition, a sheaf F on a topological space T is an assignment of Abelian groups to open subsets of T, plus group homomorphisms among them, all meeting a certain covering condition. Precisely these nuts and bolts were unavailable for the Weil conjectures because the arithmetic spaces had no useful topology in the then-existing sense.

At the École Normale Supérieure, Henri Cartan’s seminar spent 1948-49 and 1950-51 focussing on sheaf cohomology. As one motive, there was already de Rham cohomology on differentiable manifolds, which not only described their topology but also described differential analysis on manifolds. And during the time of the seminar Cartan saw how to modify sheaf cohomology as a tool in complex analysis. Given a complex analytic variety V, Cartan could define sheaves that reflected not only the topology of V but also complex analysis on V.

These were promising for the Weil conjectures since Weil cohomology would need sheaves reflecting algebra on those spaces. But understand, this differential analysis and complex analysis used sheaves and cohomology in the usual topological sense. Their innovation was to find particular new sheaves which capture analytic or algebraic information that a pure topologist might not focus on.

The greater challenge to the Séminaire Cartan was that, along with the cohomology of topological spaces, the seminar looked at the cohomology of groups. Here sheaves are replaced by G-modules. This was formally quite different from topology yet it had grown from topology and was tightly tied to it. Indeed Eilenberg and Mac Lane created category theory in large part to explain both kinds of cohomology by clarifying the links between them. The seminar aimed to find what was common to the two kinds of cohomology and they found it in a pattern of functors.

The cohomology of a topological space X assigns to each sheaf F on X a series of Abelian groups H^n F and to each sheaf map f : F → F′ a series of group homomorphisms H^n f : H^n F → H^n F′. The definition requires that each H^n is a functor, from sheaves on X to Abelian groups. A crucial property of these functors is:

H^n F = 0 for n > 0

for any fine sheaf F, where a sheaf is fine if it meets a certain condition borrowed from differential geometry by way of Cartan’s complex analytic geometry.

The cohomology of a group G assigns to each G-module M a series of Abelian groups H^n M and to each homomorphism f : M → M′ a series of homomorphisms H^n f : H^n M → H^n M′. Each H^n is a functor, from G-modules to Abelian groups. These functors have the same properties as topological cohomology except that:

H^n M = 0 for n > 0

for any injective module M. A G-module I is injective if: for every G-module inclusion N ⊆ M and homomorphism f : N → I there is at least one g : M → I making this commute.

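The missing figure presumably showed the standard lifting triangle; a minimal LaTeX (tikz-cd) reconstruction:

```latex
% requires \usepackage{tikz-cd}
\begin{tikzcd}
N \arrow[r, hook] \arrow[d, "f"'] & M \arrow[dl, dashed, "g"] \\
I &
\end{tikzcd}
```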

Cartan could treat the cohomology of several different algebraic structures: groups, Lie groups, associative algebras. These all rest on injective resolutions. But he could not include topological spaces, the source of the whole, and still one of the main motives for pursuing the other cohomologies. Topological cohomology rested on the completely different apparatus of fine resolutions. As to the search for a Weil cohomology, this left two questions: What would Weil cohomology use in place of topological sheaves or G-modules? And what resolutions would give their cohomology? Specifically, Cartan and Eilenberg define group cohomology (like several other constructions) as a derived functor, which in turn is defined using injective resolutions. So the cohomology of a topological space was not a derived functor in their technical sense. But a looser sense was apparently current.

Grothendieck wrote to Serre:

I have realized that by formulating the theory of derived functors for categories more general than modules, one gets the cohomology of spaces at the same time at small cost. The existence follows from a general criterion, and fine sheaves will play the role of injective modules. One gets the fundamental spectral sequences as special cases of delectable and useful general spectral sequences. But I am not yet sure if it all works as well for non-separated spaces and I recall your doubts on the existence of an exact sequence in cohomology for dimensions ≥ 2. Besides this is probably all more or less explicit in Cartan-Eilenberg’s book which I have not yet had the pleasure to see.

Here he lays out the whole paper, commonly cited as Tôhoku for the journal that published it. There are several issues. For one thing, fine resolutions do not work for all topological spaces but only for the paracompact – that is, Hausdorff spaces where every open cover has a locally finite refinement. The Séminaire Cartan called these separated spaces. The limitation was no problem for differential geometry. All differential manifolds are paracompact. Nor was it a problem for most of analysis. But it was discouraging from the viewpoint of the Weil conjectures since non-trivial algebraic varieties are never Hausdorff.

Serre replied using the same loose sense of derived functor:

The fact that sheaf cohomology is a special case of derived functors (at least for the paracompact case) is not in Cartan-Sammy. Cartan was aware of it and told [David] Buchsbaum to work on it, but he seems not to have done it. The interest of it would be to show just which properties of fine sheaves we need to use; and so one might be able to figure out whether or not there are enough fine sheaves in the non-separated case (I think the answer is no but I am not at all sure!).

So Grothendieck began rewriting Cartan-Eilenberg before he had seen it. Among other things he preempted the question of resolutions for Weil cohomology. Before anyone knew what “sheaves” it would use, Grothendieck knew it would use injective resolutions. He did this by asking not what sheaves “are” but how they relate to one another. As he later put it, he set out to:

consider the set of all sheaves on a given topological space or, if you like, the prodigious arsenal of all the “meter sticks” that measure it. We consider this “set” or “arsenal” as equipped with its most evident structure, the way it appears so to speak “right in front of your nose”; that is what we call the structure of a “category”…From here on, this kind of “measuring superstructure” called the “category of sheaves” will be taken as “incarnating” what is most essential to that space.

The Séminaire Cartan had shown this structure in front of your nose suffices for much of cohomology. Definitions and proofs can be given in terms of commutative diagrams and exact sequences without asking, most of the time, what these are diagrams of. Grothendieck went farther than any other, insisting that the “formal analogy” between sheaf cohomology and group cohomology should become “a common framework including these theories and others”. To start with, injectives have a nice categorical sense: An object I in any category is injective if, for every monic m : N → M and arrow f : N → I, there is at least one g : M → I such that f = g ∘ m, making the triangle below commute.

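The lost diagram here is presumably the same triangle in purely categorical dress, with the monic m in place of the inclusion; a tikz-cd sketch:

```latex
% requires \usepackage{tikz-cd}
\begin{tikzcd}
N \arrow[r, tail, "m"] \arrow[d, "f"'] & M \arrow[dl, dashed, "g"] \\
I &
\end{tikzcd}
```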

Fine sheaves are not so diagrammatic.

Grothendieck saw that Reinhold Baer’s original proof that modules have injective resolutions was largely diagrammatic itself. So Grothendieck gave diagrammatic axioms for the basic properties used in cohomology, and called any category that satisfies them an Abelian category. He gave further diagrammatic axioms tailored to Baer’s proof: every category satisfying these axioms has injective resolutions. Such a category is called an AB5 category, and sometimes, around the 1960s, a Grothendieck category, though that term has been used in several senses.

So sheaves on any topological space have injective resolutions and thus have derived functor cohomology in the strict sense. For paracompact spaces this agrees with cohomology from fine, flabby, or soft resolutions. So you can still use those, if you want them, and you will. But Grothendieck treats paracompactness as a “restrictive condition”, well removed from the basic theory, and he specifically mentions the Weil conjectures.

Beyond that, Grothendieck’s approach works for topology the same way it does for all cohomology. And, much further, the axioms apply to many categories other than categories of sheaves on topological spaces or categories of modules. They go far beyond topological and group cohomology, in principle, though in fact there were few if any known examples outside that framework when they were given.

Conjuncted: Demise of Ontology


The demise of ontology in string theory opens new perspectives on the positions of Quine and Larry Laudan. Laudan stressed the discontinuity of ontological claims throughout the history of scientific theories. String theory’s comment on this observation is very clear: The ontological claim is no appropriate element of highly developed physical theories. External ontological objects are reduced to the status of an approximative concept that only makes sense as long as one does not look too closely into the theory’s mathematical fine-structure. While one may consider the electron to be an object like a table, just smaller, the same verdict on, let’s say, a type IIB superstring is not justifiable. In this light it is evident that an ontological understanding of scientific objects cannot have any realist quality and must always be preliminary. Its specific form naturally depends on the type of approximation. Eventually all ontological claims are bound to evaporate in the complex structures of advanced physics. String theory thus confirms Laudan’s assertion and integrates it into a solid physical background picture.

In a remarkable way string theory awards new topicality to Quine’s notion of underdeterminism. The string-theoretical scale-limit to new phenomenology realizes Quine’s concept of a theoretical scheme that fits all possible phenomenological data. In a sense string theory moves Quine’s concept from the regime of abstract and shadowy philosophical definitions to the regime of the physically meaningful. Quine’s notion of underdeterminism also remains unaffected by the emerging principle of theoretical uniqueness, which so seriously undermines the position of modest underdeterminism. Since theoretical uniqueness reveals itself in the context of new, so far undetected phenomenology, Quine’s purely ontological approach remains safely beyond its grasp. But the best is still to come: the various equivalent superstring theories appear as empirically equivalent but ‘logically incompatible’ theories of the very type implied by Quine’s underdeterminism hypothesis. The different string theories are not theoretically incompatible and unrelated concepts. On the contrary, they are merely different representations of one overall theoretical structure. Incompatible are the ontological claims which can be imputed to the various representations. It is only at this level that Quine’s conjecture applies to string theory. And it is only at this level that it can be meaningful at all. Quine is no adherent of external realism and thus can afford a very wide interpretation of the notion ‘ontological object’. For him a world view’s ontology can well comprise oddities like spacetime points or mathematical sets. In this light the duality phenomenon could be taken to imply a shift of ontology away from an external ‘corporeal’ regime towards a purely mathematical one.

To put external and mathematical ontologies into the same category blurs the central message the new physical developments have in store for philosophy of science. This message emerges much clearer if formulated within the conceptual framework of scientific realism: An extrapolation of the notion ‘external ontological object’ from the visible to the invisible regime remains possible up to quantum field theory if one wants to have it. It fails fundamentally at the stage of string theory. String theory simply is no theory about invisible external objects.