Derivative Pricing Theory: Call and Put Options and the Black–Scholes Hedged Portfolio. Thought of the Day 152.0


Fischer Black and Myron Scholes revolutionized the pricing theory of options by showing how to hedge continuously the exposure on the short position of an option. Consider the writer of a call option on a risky asset. S/he is exposed to the risk of unlimited liability if the asset price rises above the strike price. To protect the writer’s short position in the call option, s/he should consider purchasing a certain amount of the underlying asset so that the loss in the short position in the call option is offset by the long position in the asset. In this way, the writer is adopting a hedging procedure. A hedged position combines an option with its underlying asset so that either the asset protects the option against loss or the option protects the asset against loss. By adjusting the proportion of the underlying asset and the option continuously in a portfolio, Black and Scholes demonstrated that investors can create a riskless hedging portfolio in which the risk exposure associated with the stochastic asset price is eliminated. In an efficient market with no riskless arbitrage opportunity, such a riskless portfolio must earn an expected rate of return equal to the riskless interest rate.

Black and Scholes made the following assumptions on the financial market.

  1. Trading takes place continuously in time.
  2. The riskless interest rate r is known and constant over time.
  3. The asset pays no dividend.
  4. There are no transaction costs in buying or selling the asset or the option, and no taxes.
  5. The assets are perfectly divisible.
  6. There are no penalties to short selling and the full use of proceeds is permitted.
  7. There are no riskless arbitrage opportunities.

The stochastic process of the asset price St is assumed to follow the geometric Brownian motion

dSt/St = μ dt + σ dZt —– (1)

where μ is the expected rate of return, σ is the volatility and Zt is the standard Brownian process. Both μ and σ are assumed to be constant. Consider a portfolio that involves short selling of one unit of a call option and long holding of Δt units of the underlying asset. The portfolio value Π (St, t) at time t is given by

Π = −c + Δt St —– (2)

where c = c(St, t) denotes the call price. Note that Δt changes with time t, reflecting the dynamic nature of hedging. Since c is a stochastic function of St, we apply the Ito lemma to compute its differential as follows:

dc = ∂c/∂t dt + ∂c/∂St dSt + σ²/2 St² ∂²c/∂St² dt

such that

-dc + Δt dSt = (-∂c/∂t – σ²/2 St² ∂²c/∂St²)dt + (Δt – ∂c/∂St)dSt

= [-∂c/∂t – σ²/2 St² ∂²c/∂St² + (Δt – ∂c/∂St)μSt]dt + (Δt – ∂c/∂St)σSt dZt

The cumulative financial gain on the portfolio at time t is given by

G(Π(St, t)) = ∫0t -dc + ∫0t Δu dSu

= ∫0t [-∂c/∂u – σ²/2 Su² ∂²c/∂Su² + (Δu – ∂c/∂Su)μSu]du + ∫0t (Δu – ∂c/∂Su)σSu dZu —– (3)

The stochastic component of the portfolio gain stems from the last term, ∫0t (Δu – ∂c/∂Su)σSu dZu. Suppose we adopt the dynamic hedging strategy by choosing Δu = ∂c/∂Su at all times u < t; then the financial gain becomes deterministic at all times. By virtue of no arbitrage, this gain should be the same as the gain from investing in the riskless asset with a dynamic position whose value equals -c + Su ∂c/∂Su. The deterministic gain from this dynamic position in the riskless asset is given by

Mt = ∫0t r(-c + Su ∂c/∂Su)du —– (4)

By equating these two deterministic gains, G(Π (St, t)) and Mt, we have

-∂c/∂u – σ²/2 Su² ∂²c/∂Su² = r(-c + Su ∂c/∂Su), 0 < u < t

which is satisfied for any asset price S if c(S, t) satisfies the equation

∂c/∂t + σ²/2 S² ∂²c/∂S² + rS ∂c/∂S – rc = 0 —– (5)

This parabolic partial differential equation is called the Black–Scholes equation. Strangely, the parameter μ, which is the expected rate of return of the asset, does not appear in the equation.

To complete the formulation of the option pricing model, let’s prescribe the auxiliary condition. The terminal payoff at time T of the call with strike price X is translated into the following terminal condition:

c(S, T ) = max(S − X, 0) —– (6)

for the differential equation.

Since both the equation and the auxiliary condition do not contain μ, one concludes that the call price does not depend on the actual expected rate of return of the asset price. The option pricing model involves five parameters: S, T, X, r and σ. Except for the volatility σ, all others are directly observable parameters. The independence of the pricing model from μ is related to the concept of risk neutrality. In a risk neutral world, investors do not demand extra returns above the riskless interest rate for bearing risks. This is in contrast to the usual risk averse investors, who would demand extra returns above r for risks borne in their investment portfolios. Apparently, the option is priced as if the rates of return on the underlying asset and the option are both equal to the riskless interest rate. This risk neutral valuation approach is viable if the risks from holding the underlying asset and option are hedgeable.
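Solving (5) subject to the terminal condition (6) yields the familiar closed-form Black–Scholes formula, c = S N(d1) − X e^(−rτ) N(d2) with τ = T − t. A minimal sketch in Python, using only the standard library (the function and parameter names here are illustrative):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF expressed via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, X, r, sigma, tau):
    """Black-Scholes price of a European call; tau = T - t is time to expiry."""
    d1 = (log(S / X) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return S * norm_cdf(d1) - X * exp(-r * tau) * norm_cdf(d2)

def black_scholes_put(S, X, r, sigma, tau):
    """Put price via put-call parity: p = c - S + X e^{-r tau}."""
    return black_scholes_call(S, X, r, sigma, tau) - S + X * exp(-r * tau)
```

Note that μ appears nowhere in the formula, in line with the risk-neutral argument above: only S, X, r, σ and the time to expiry τ enter.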

The governing equation for a put option can be derived similarly, and the same Black–Scholes equation is obtained. Let V(S, t) denote the price of a derivative security with dependence on S and t; it can be shown that V is governed by

∂V/∂t + σ²/2 S² ∂²V/∂S² + rS ∂V/∂S – rV = 0 —– (7)

The price of a particular derivative security is obtained by solving the Black–Scholes equation subject to an appropriate set of auxiliary conditions that model the corresponding contractual specifications in the derivative security.

The original derivation of the governing partial differential equation by Black and Scholes focuses on the financial notion of riskless hedging but misses the precise analysis of the dynamic change in the value of the hedged portfolio. The inconsistencies in their derivation stem from the assumption of keeping the number of units of the underlying asset in the hedged portfolio to be instantaneously constant. They take the differential change of portfolio value Π to be

dΠ = −dc + Δt dSt,

which misses the effect arising from the differential change in Δt. The ability to construct a perfectly hedged portfolio relies on the assumption of continuous trading and a continuous asset price path. It has been commonly agreed that the assumed geometric Brownian motion of the asset price may not truly reflect the actual behavior of the asset price process. The asset price may exhibit jumps upon the arrival of sudden news in the financial market. The interest rate is widely recognized to be fluctuating over time in an irregular manner rather than being constant. For an option on a risky asset, the interest rate appears only in the discount factor, so the assumption of a constant/deterministic interest rate is quite acceptable for a short-lived option. The Black–Scholes pricing approach assumes continuous hedging at all times. In the real world of trading with transaction costs, this would lead to infinite transaction costs in the hedging procedure.
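The last point can be illustrated numerically: rebalancing Δ at discrete dates leaves a residual hedging error whose spread shrinks (roughly like 1/√N) as rebalancing becomes more frequent. A small Monte Carlo sketch in Python, standard library only; all parameter values, including a real-world drift μ = 0.10 ≠ r, are illustrative:

```python
import random
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, tau, X, r, sigma):
    if tau <= 0:
        return max(S - X, 0.0)
    d1 = (log(S / X) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    return S * norm_cdf(d1) - X * exp(-r * tau) * norm_cdf(d1 - sigma * sqrt(tau))

def delta(S, tau, X, r, sigma):
    d1 = (log(S / X) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    return norm_cdf(d1)

def hedge_error(n_steps, n_paths=2000, S0=100.0, X=100.0,
                r=0.05, mu=0.10, sigma=0.2, T=0.5, seed=42):
    """Std. dev. of the terminal P&L of a short call hedged at n_steps dates."""
    rng = random.Random(seed)
    dt = T / n_steps
    errors = []
    for _ in range(n_paths):
        S = S0
        cash = bs_call(S, T, X, r, sigma)   # premium received for the short call
        pos = 0.0                           # units of the underlying asset held
        for k in range(n_steps):
            tau = T - k * dt
            new_pos = delta(S, tau, X, r, sigma)
            cash -= (new_pos - pos) * S     # rebalance the hedge
            pos = new_pos
            cash *= exp(r * dt)             # cash account accrues interest
            # asset follows geometric Brownian motion under the real drift mu
            S *= exp((mu - 0.5 * sigma**2) * dt + sigma * sqrt(dt) * rng.gauss(0, 1))
        errors.append(cash + pos * S - max(S - X, 0.0))  # P&L vs option payoff
    mean = sum(errors) / n_paths
    return sqrt(sum((e - mean) ** 2 for e in errors) / n_paths)

# More frequent rebalancing shrinks the error: hedge_error(64) < hedge_error(4)
```

With continuous rebalancing the error would vanish; with transaction costs, each rebalance has a price, so perfect replication becomes infinitely expensive, as noted above.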

Categories of Pointwise Convergence Topology: Theory(ies) of Bundles.

Let H be a fixed, separable Hilbert space of dimension ≥ 1. Let us denote the associated projective space of H by P = P(H). It is compact iff H is finite-dimensional. Let PU = PU(H) = U(H)/U(1) be the projective unitary group of H equipped with the compact-open topology. A projective bundle over X is a locally trivial bundle of projective spaces, i.e., a fibre bundle P → X with fibre P(H) and structure group PU(H). An application of the Banach–Steinhaus theorem shows that we may identify projective bundles with principal PU(H)-bundles, with PU(H) carrying the pointwise convergence topology.

If G is a topological group, let GX denote the sheaf of germs of continuous functions X → G, i.e., the sheaf associated to the constant presheaf given by U → F(U) = G. Given a projective bundle P → X and a sufficiently fine good open cover {Ui}i∈I of X, the transition functions between trivializations P|Ui can be lifted to bundle isomorphisms gij on double intersections Uij = Ui ∩ Uj which are projectively coherent, i.e., over each of the triple intersections Uijk = Ui ∩ Uj ∩ Uk the composition gki gjk gij is given as multiplication by a U(1)-valued function fijk : Uijk → U(1). The collection {(Uijk, fijk)} defines a U(1)-valued two-cocycle called a B-field on X, which represents a class BP in the sheaf cohomology group H2(X, U(1)X). On the other hand, the sheaf cohomology H1(X, PU(H)X) consists of isomorphism classes of principal PU(H)-bundles, and we can consider the isomorphism class [P] ∈ H1(X, PU(H)X).

There is an isomorphism

H1(X, PU(H)X) → H2(X, U(1)X)

provided by the boundary map [P] ↦ BP. There is also an isomorphism

H2(X, U(1)X) → H3(X, ZX) ≅ H3(X, Z)
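The second isomorphism is the boundary map of the exponential sheaf sequence; spelling out this standard step for completeness:

```latex
% Exponential sequence of sheaves of germs of continuous functions:
%   0 \to \mathbb{Z}_X \to \mathbb{R}_X \to U(1)_X \to 0 .
% Since \mathbb{R}_X is a fine sheaf, H^k(X, \mathbb{R}_X) = 0 for k \geq 1,
% so the boundary maps in the long exact cohomology sequence are isomorphisms:
H^2(X, U(1)_X) \xrightarrow{\ \cong\ } H^3(X, \mathbb{Z}_X) \cong H^3(X, \mathbb{Z})
```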

The image δ(P) ∈ H3(X, Z) of BP is called the Dixmier-Douady invariant of P. When δ(P) = [H] is represented in H3(X, R) by a closed three-form H on X, called the H-flux of the given B-field BP, we will write P = PH. One has δ(P) = 0 iff the projective bundle P comes from a vector bundle E → X, i.e., P = P(E). By Serre’s theorem every torsion element of H3(X,Z) arises from a finite-dimensional bundle P. Explicitly, consider the commutative diagram of exact sequences of groups given by

[commutative diagram omitted]

where we identify the cyclic group Zn with the group of n-th roots of unity. Let P be a projective bundle with structure group PU(n), i.e., with fibres P(Cn). Then the commutative diagram of long exact sequences of sheaf cohomology groups associated to the above commutative diagram of groups implies that the element BP ∈ H2(X, U(1)X) comes from H2(X, (Zn)X), and therefore its order divides n.
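The diagram omitted above is presumably the standard comparison of central extensions (a reconstruction from the surrounding text, not the original figure): the n-th roots of unity include into U(1), SU(n) includes into U(n), and both extensions present PU(n) as a quotient:

```latex
\begin{array}{ccccccccc}
1 & \to & \mathbb{Z}_n & \to & SU(n) & \to & PU(n) & \to & 1 \\
  &     & \downarrow   &     & \downarrow &   & \parallel & & \\
1 & \to & U(1)         & \to & U(n)  & \to & PU(n) & \to & 1
\end{array}
```

Taking sheaf cohomology of the top row then shows that BP lifts to a class in H2(X, (Zn)X), as stated.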

One also has δ(P1 ⊗ P2) = δ(P1) + δ(P2) and δ(P∗) = −δ(P). This follows from the commutative diagram

[commutative diagram omitted]

and the fact that P ⊗ P∗ = P(E), where E is the vector bundle of Hilbert–Schmidt endomorphisms of P. Putting everything together, it follows that the cohomology group H3(X, Z) is isomorphic to the group of stable equivalence classes of principal PU(H)-bundles P → X with the operation of tensor product.

We are now ready to define the twisted K-theory of the manifold X equipped with a projective bundle P → X, such that Px = P(H) ∀ x ∈ X. We will first give a definition in terms of Fredholm operators, and then provide some equivalent, but more geometric definitions. Let H be a Z2-graded Hilbert space. We define Fred0(H) to be the space of self-adjoint degree 1 Fredholm operators T on H such that T² − 1 ∈ K(H), together with the subspace topology induced by the embedding Fred0(H) ↪ B(H) × K(H) given by T ↦ (T, T² − 1), where the algebra of bounded linear operators B(H) is given the compact-open topology and the Banach algebra of compact operators K = K(H) is given the norm topology.

Let P = PH → X be a projective Hilbert bundle. Then we can construct an associated bundle Fred0(P) whose fibres are Fred0(H). We define the twisted K-theory group of the pair (X, P) to be the group of homotopy classes of maps

K0(X, H) = [X, Fred0(PH)]

The group K0(X, H) depends functorially on the pair (X, PH), and an isomorphism of projective bundles ρ : P → P′ induces a group isomorphism ρ∗ : K0(X, H) → K0(X, H′). Addition in K0(X, H) is defined by fibre-wise direct sum, so that the sum of two elements lies in K0(X, H2) with [H2] = δ(P ⊗ P(C²)) = δ(P) = [H]. Under the isomorphism H ⊗ C² ≅ H, there is a projective bundle isomorphism P → P ⊗ P(C²) for any projective bundle P and so K0(X, H2) is canonically isomorphic to K0(X, H). When [H] is a non-torsion element of H3(X, Z), so that P = PH is an infinite-dimensional bundle of projective spaces, then the index map K0(X, H) → Z is zero, i.e., any section of Fred0(P) takes values in the index zero component of Fred0(H).

Let us now describe some other models for twisted K-theory which will be useful in our physical applications later on. A definition in algebraic K-theory may be given as follows. A bundle of projective spaces P yields a bundle End(P) of algebras. However, if H is an infinite-dimensional Hilbert space, then one has natural isomorphisms H ≅ H ⊕ H and

End(H) ≅ Hom(H ⊕ H, H) ≅ End(H) ⊕ End(H)

as left End(H)-modules, and so the algebraic K-theory of the algebra End(H) is trivial. Instead, we will work with the Banach algebra K(H) of compact operators on H with the norm topology. Given that the unitary group U(H) with the compact-open topology acts continuously on K(H) by conjugation, to a given projective bundle PH we can associate a bundle of compact operators EH → X given by

EH = PH ×PU K

with δ(EH) = [H]. The Banach algebra AH := C0(X, EH) of continuous sections of EH vanishing at infinity is the continuous trace C∗-algebra CT(X, H). Then the twisted K-theory group K(X, H) of X is canonically isomorphic to the algebraic K-theory group K(AH).

We will also need a smooth version of this definition. Let A∞H be the smooth subalgebra of AH given by the algebra CT∞(X, H) = C∞(X, L¹PH), where L¹PH = PH ×PU L¹. Then the inclusion CT∞(X, H) → CT(X, H) induces an isomorphism K(CT∞(X, H)) → K(CT(X, H)) of algebraic K-theory groups. Upon choosing a bundle gerbe connection, one has an isomorphism K(CT∞(X, H)) ≅ K(X, H) with the twisted K-theory defined in terms of projective Hilbert bundles P = PH over X.

Finally, we propose a general definition based on K-theory with coefficients in a sheaf of rings. It parallels the bundle gerbe approach to twisted K-theory. Let B be a Banach algebra over C. Let E(B, X) be the category of continuous B-bundles over X, and let C(X, B) be the sheaf of continuous maps X → B. The ring structure in B equips C(X, B) with the structure of a sheaf of rings over X. We can therefore consider left (or right) C(X, B)-modules, and in particular the category LF(C(X, B)) of locally free C(X, B)-modules. Using the sections functor in the usual way, one obtains for X an equivalence of additive categories

E(B, X) ≅ LF (C(X, B))

Since these are both additive categories, we can apply the Grothendieck functor to each of them and obtain the abelian groups K(LF(C(X, B))) and K(E(B, X)). The equivalence of categories ensures that there is a natural isomorphism of groups

K(LF (C(X, B))) ≅ K(E(B, X))

This motivates the following general definition. If A is a sheaf of rings over X, then we define the K-theory of X with coefficients in A to be the abelian group

K(X, A) := K(LF(A))

For example, consider the case B = C. Then C(X, C) is just the sheaf of continuous functions X → C, while E(C, X) is the category of complex vector bundles over X. Using the isomorphism of K-theory groups we then have

K(X, C(X, C)) := K(LF(C(X, C))) ≅ K(E(C, X)) = K0(X)

The definition of twisted K-theory uses another special instance of this general construction. For this, we define an Azumaya algebra over X of rank m to be a locally trivial algebra bundle over X with fibre isomorphic to the algebra of m × m complex matrices over C, Mm(C). An example is the algebra End(E) of endomorphisms of a complex vector bundle E → X. We can define an equivalence relation on the set A(X) of Azumaya algebras over X in the following way. Two Azumaya algebras A, A′ are called equivalent if there are vector bundles E, E′ over X such that the algebras A ⊗ End(E), A′ ⊗ End(E′) are isomorphic. Then every Azumaya algebra of the form End(E) is equivalent to the algebra of functions C(X) on X. The set of all equivalence classes is a group under the tensor product of algebras, called the Brauer group of X and denoted Br(X). By Serre’s theorem there is an isomorphism

δ : Br(X) → tor(H3(X, Z))

where tor(H3(X, Z)) is the torsion subgroup of H3(X, Z).

If A is an Azumaya algebra bundle, then the space C(X, A) of continuous sections of A over X is a ring, and we can consider the algebraic K-theory group K(A) := K0(C(X, A)) of equivalence classes of projective C(X, A)-modules, which depends only on the equivalence class of A in the Brauer group. Under the equivalence, we can represent the Brauer group Br(X) as the set of isomorphism classes of sheaves of Azumaya algebras. Let A be a sheaf of Azumaya algebras, and LF(A) the category of locally free A-modules. Then as above there is an isomorphism

K(X, C(X, A)) ≅ K(Proj(C(X, A)))

where Proj (C(X, A)) is the category of finitely-generated projective C(X, A)-modules. The group on the right-hand side is the group K(A). For given [H] ∈ tor(H3(X, Z)) and A ∈ Br(X) such that δ(A) = [H], this group can be identified as the twisted K-theory group K0(X, H) of X with twisting A. This definition is equivalent to the description in terms of bundle gerbe modules, and from this construction it follows that K0(X, H) is a subgroup of the ordinary K-theory of X. If δ(A) = 0, then A is equivalent to C(X) and we have K(A) := K0(C(X)) = K0(X). The projective C(X, A)-modules over a rank m Azumaya algebra A are vector bundles E → X with fibre Cnm ≅ (Cm)⊕n, which is naturally an Mm(C)-module.

 

Intuition


During his attempt to axiomatize the category of all categories, Lawvere says

Our intuition tells us that whenever two categories exist in our world, then so does the corresponding category of all natural transformations between the functors from the first category to the second (The Category of Categories as a Foundation).

However, if one tries to reduce categorial constructions to set theory, one faces some serious problems in the case of a category of functors. Lawvere (who, according to his aim of axiomatization, is not concerned by such a reduction) relies here on “intuition” to stress that those working with categorial concepts despite these problems have the feeling that the envisaged construction is clear, meaningful and legitimate. Not reducibility to set theory, but an “intuition” yet to be specified accounts for the clarity, meaningfulness and legitimacy of a construction emerging in a mathematical working situation. In particular, Lawvere relies on a collective intuition, a common sense – for he explicitly says “our intuition”. Further, one obviously has to deal here with common sense on a technical level, for the “we” can only extend to a community used to working with the concepts concerned.

In the tradition of philosophy, “intuition” means immediate, i.e., not conceptually mediated cognition. The use of the term in the context of validity (immediate insight in the truth of a proposition) is to be thoroughly distinguished from its use in the sensual context (the German Anschauung). Now, language is a manner of representation, too, but contrary to language, in the context of images the concept of validity is meaningless.

Obviously, the aspect of cognition guiding is touched on here. Sensual intuition, especially, can take on the guiding (or heuristic) function. There have been many working situations in the history of mathematics in which making the objects of investigation accessible to a sensual intuition (by providing a Veranschaulichung) yielded considerable progress in the development of the knowledge concerning these objects. As an example, take the following account by Emil Artin of Emmy Noether’s contribution to the theory of algebras:

Emmy Noether introduced the concept of representation space – a vector space upon which the elements of the algebra operate as linear transformations, the composition of the linear transformation reflecting the multiplication in the algebra. By doing so she enables us to use our geometric intuition.

Similarly, Fréchet believed he had really “powered” research in the theory of functions and functionals by introducing a “geometrical” terminology:

One can [ …] consider the numbers of the sequence [of coefficients of a Taylor series] as coordinates of a point in a space [ …] of infinitely many dimensions. There are several advantages to proceeding thus, for instance the advantage which is always present when geometrical language is employed, since this language is so appropriate to intuition due to the analogies it gives birth to.

Mathematical terminology often stems from a current language usage whose (intuitive, sensual) connotation is welcomed and serves to give the user an “intuition” of what is intended. While Category Theory is often classified as a highly abstract matter quite remote from intuition, in reality it yields, together with its applications, a multitude of examples for the role of current language in mathematical conceptualization.

This notwithstanding, there is naturally also a tendency in contemporary mathematics to eliminate as much as possible commitments to (sensual) intuition in the erection of a theory. It seems that algebraic geometry fulfills only in the language of schemes that essential requirement of all contemporary mathematics: to state its definitions and theorems in their natural abstract and formal setting in which they can be considered independent of geometric intuition (Mumford D., Fogarty J. Geometric Invariant Theory).

In the pragmatist approach, intuition is seen as a relation. This means: one uses a piece of language in an intuitive manner (or not); intuitive use depends on the situation of utterance, and it can be learned and transformed. The reason for this relational point of view consists in the pragmatist conviction that each cognition of an object depends on the means of cognition employed – this means that for pragmatism there is no intuitive (in the sense of “immediate”) cognition; the term “intuitive” has to be given a new meaning.

What does it mean to use something intuitively? Heinzmann makes the following proposal: one uses language intuitively if one does not even have the idea to question validity. Hence, the term intuition in the Heinzmannian reading of pragmatism takes on a different meaning: it no longer signifies an immediate grasp. However, it is yet to be explained what it means for objects in general (and not only for propositions) to “question the validity of a use”. One uses an object intuitively if one is not concerned with how the rules of constitution of the object have been arrived at, if one does not focus on the materialization of these rules but only on the benefits of an application of the object in the present context. “In principle”, the cognition of an object is determined by another cognition, and this determination finds its expression in the “rules of constitution”; one uses it intuitively (one does not bother about the determination of its cognition) if one does not question the rules of constitution (does not focus on the cognition which determines it). This is precisely what one does when using an object as a tool – because in doing so, one does not (yet) ask which cognition determines the object. When something is used as a tool, this constitutes an intuitive use, whereas the use of something as an object does not (this defines tool and object). Here, each concept in principle can play both roles; among two concepts, one may happen to be used intuitively before and the other after the progress of insight. Note that with respect to a given cognition, Peirce, when saying “the cognition which determines it”, always thinks of a previous cognition, because he thinks of a determination of a cognition in our thought by previous thoughts.
In conceptual history of mathematics, however, one most often introduced an object first as a tool and only after having done so did it come to one’s mind to ask for “the cognition which determines the cognition of this object” (that means, to ask how the use of this object can be legitimized).

The idea that it could depend on the situation whether validity is questioned or not has formerly been overlooked, perhaps because one always looked for a reductionist epistemology where the capacity called intuition is used exclusively at the last level of regression; in a pragmatist epistemology, to the contrary, intuition is used at every level in the form of the not-yet-thematized tools. In classical systems, intuition was not simply conceived as a capacity; it was actually conceived as a capacity common to all human beings. “But the power of intuitively distinguishing intuitions from other cognitions has not prevented men from disputing very warmly as to which cognitions are intuitive”. Moreover, Peirce strongly criticises Cartesian individualism (which has it that the individual has the capacity to find the truth). We could sum up this philosophy thus: we cannot reach definitive truth, only provisional truth; significant progress is not made individually but only collectively; one cannot pretend that the history of thought did not take place and start from scratch, but every cognition is determined by a previous cognition (maybe by other individuals); one cannot uncover the ultimate foundation of our cognitions; rather, the fact that we sometimes reach a new level of insight, “deeper” than those thought of as fundamental before, merely indicates that there is no “deepest” level. The feeling that something is “intuitive” indicates a prejudice which can be philosophically criticised (even if this does not occur to us at the beginning).

In our approach, intuitive use is collectively determined: it depends on the particular usage of the community of users whether validity criteria are or are not questioned in a given situation of language use. However, it is acknowledged that for example scientific communities develop usages making them communities of language users on their own. Hence, situations of language use are not only partitioned into those where it comes to the users’ mind to question validity criteria and those where it does not, but moreover this partition is specific to a particular community (actually, the community of language users is established partly through a peculiar partition; this is a definition of the term “community of language users”). The existence of different communities with different common senses can lead to the following situation: something is used intuitively by one group, not intuitively by another. In this case, discussions inside the discipline occur; one has to cope with competing common senses (which are therefore not really “common”). This constitutes a task for the historian.

Noneism. Part 2.


Noneism is a very rigorous and original philosophical doctrine, by and large superior to the classical mathematical philosophies. But there are some problems concerning the different ways of characterizing a universe of objects. It is very easy to understand the way a writer characterizes the protagonists of the novels he writes. But what about the characterization of the universe of natural numbers? Since in most kinds of civilizations the natural numbers are characterized the same way, we have the impression that the subject does not intervene in the forging of the characteristics of natural numbers. These numbers appear to be what they are, with total independence of the creative activity of the cognitive subject. There is, of course, the creation of theorems, but the potentially infinite sequence of natural numbers resists any effort to subjectivize its characteristics. It cannot be changed. A noneist might reply that natural numbers are non-existent, that they have no being, and that, in this respect, they are identical with mythological Objects. Moreover, the formal system of natural numbers can be interpreted in many ways: for instance, with respect to a universe of Skolem numbers. This is correct, but it does not explain why the properties of some universes are independent of subjective creation. It is an undeniable fact that there are two kinds of objectual characteristics. On the one hand, we have the characteristics created by subjective imagination or speculative thought; on the other hand, we find some characteristics that are not created by anybody; their corresponding Objects are, in most cases, non-existent but, at the same time, they are not invented. They are just found. The origin of the former characteristics is very easy to understand; the origin of the latter is a mystery.

Now, the subject-independence of a universe suggests that it belongs to a Platonic realm. And as far as transfinite set theory is concerned, the subject-independence of its characteristics is much less evident than the characteristic subject-independence of the natural numbers. In the realm of the finite, both kinds of characteristics are subject-independent and can be reduced to combinatorics. The only difference between the two is that, according to the classical Platonistic interpretation of mathematics, there can only be a single mathematical universe and that, to deductively study its properties, one can only employ classical logic. But this position is not at all unobjectionable. Once the subject-independence of the natural number system’s characteristics is posited, it becomes easy to overstep the classical phobia concerning the possibility of characterizing non-classical objective worlds. Euclidean geometry is incompatible with elliptical and hyperbolic geometries and, nevertheless, the validity of the first does not invalidate the others. And vice versa: the fact that hyperbolic and other kinds of geometry are consistently characterized does not invalidate good old Euclidean geometry. And the fact that we now have several kinds of non-Cantorian set theories does not invalidate classical Cantorian set theory.

Of course, a universally non-Platonic point of view that includes classical set theory can also be assumed. But concerning natural numbers it would be quite artificial. It is very difficult not to surrender to Kronecker’s famous dictum: God created the natural numbers, men created all the rest. Anyhow, it is not at all absurd to adopt a wholly Platonistic conception of mathematics. And it is quite licit to adopt a noneist position. But if we do this, the origin of the natural numbers’ characteristics becomes misty. However, forgetting this cloudiness, the leap from noneist universes to Platonistic ones, and vice versa, becomes like a flip-flop connecting objectological with ontological (ideal) universes, like a kind of rabbit-duck Gestalt or a Sherrington staircase. So, the fundamental question with respect to subject-dependent or subject-independent mathematical theories is: are they created, or are they found? Regarding some theories, subject-dependency is far more understandable; and concerning others, subject-independency is very difficult, if not impossible, to negate.

From an epistemological point of view, the fact of non-subject-dependent characteristic traits of a universe would mean that there is something like intellectual intuition. The properties of natural numbers, the finite properties of sets (or combinatorics), some geometric axioms – for instance, in Euclidean geometry, the axioms of betweenness – would be apprehended in a manner that pretty well coincides with the (nowadays rather discredited) concept of synthetic a priori knowledge. This aspect of mathematical knowledge shows that the old problem concerning analytic and synthetic a priori knowledge, in spite of the prevailing Quinean pragmatic conception, must be radically reset.

Rhizomatic Topology and Global Politics. A Flirtatious Relationship.

 


Deleuze and Guattari see concepts as rhizomes, biological entities endowed with unique properties. They see concepts as spatially representable, where the representation contains principles of connection and heterogeneity: any point of a rhizome must be connected to any other. Deleuze and Guattari list the possible benefits of spatial representation of concepts, including the ability to represent complex multiplicity, the potential to free a concept from foundationalism, and the ability to show both breadth and depth. In this view, geometric interpretations move away from the insidious understanding of the world in terms of dualisms, dichotomies, and lines, to understand conceptual relations in terms of space and shapes. The ontology of concepts is thus, in their view, appropriately geometric: a multiplicity defined not by its elements, nor by a center of unification and comprehension, but instead measured by its dimensionality and its heterogeneity. The conceptual multiplicity is already composed of heterogeneous terms in symbiosis, and is continually transforming itself such that it is possible to follow, and map, not only the relationships between ideas but how they change over time. In fact, the authors claim that there are further benefits to geometric interpretations of concepts which are unavailable in other frames of reference. They outline the unique contribution of geometric models to the understanding of contingent structure:

Principle of cartography and decalcomania: a rhizome is not amenable to any structural or generative model. It is a stranger to any idea of genetic axis or deep structure. A genetic axis is like an objective pivotal unity upon which successive stages are organized; deep structure is more like a base sequence that can be broken down into immediate constituents, while the unity of the product passes into another, transformational and subjective, dimension. (Deleuze and Guattari)

The word that Deleuze and Guattari use for ‘multiplicities’ can also be translated as the topological term ‘manifold.’ If we thought about their multiplicities as manifolds, there is a virtually unlimited number of things one could come to know, in geometric terms, about (and with) our object of study, abstractly speaking. Among those unlimited things we could learn are properties of groups (homological, cohomological, and homeomorphic), complex directionality (maps, morphisms, isomorphisms, and orientability), dimensionality (codimensionality, structure, embeddedness), partiality (differentiation, commutativity, simultaneity), and shifting representation (factorization, ideal classes, reciprocity). Each of these functions allows for a different, creative, and potentially critical representation of global political concepts, events, groupings, and relationships. This is how concepts are to be looked at: as manifolds. With such a dimensional understanding of concept-formation, it is possible to deal with complex interactions of like entities, and interactions of unlike entities. Critical theorists have emphasized the importance of such complexity in representation a number of times, speaking about it in terms compatible with mathematical methods, if not mathematically. For example, Foucault’s declaration that “practicing criticism is a matter of making facile gestures difficult” both reflects and is reflected in many critical theorists’ projects of revealing the complexity in (apparently simple) concepts deployed in global politics. This leads to a shift in the concept of danger as well, where danger is not an objective condition but “an effect of interpretation”. Critical thinking about how-possible questions reveals a complexity to the concept of the state which is often overlooked in traditional analyses, sending a wave of added complexity through other concepts as well.
This work seeking complexity serves one of the major underlying functions of critical theorizing: finding invisible injustices in (modernist, linear, structuralist) givens in the operation and analysis of global politics.

In a geometric sense, this complexity could be thought about as multidimensional mapping. In theoretical geometry, the process of mapping conceptual spaces is not primarily empirical, but for the purpose of representing and reading the relationships between information, including identification, similarity, differentiation, and distance. The reason for defining topological spaces in math, the essence of the definition, is that there is no absolute scale for describing the distance or relation between certain points, yet it makes sense to say that an (infinite) sequence of points approaches some other (but again, no way to describe how quickly or from what direction one might be approaching). This seemingly weak relationship, which is defined purely ‘locally’, i.e., in a small locale around each point, is often surprisingly powerful: using only the relationship of approaching parts, one can distinguish between, say, a balloon, a sheet of paper, a circle, and a dot.
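A toy illustration of this last point, not taken from the text: one of the simplest topological invariants, the Euler characteristic χ = V − E + F of a triangulation, is computed from purely local combinatorial data yet already separates most of the shapes named above. (The minimal triangulations below are standard; note that χ alone does not distinguish the disk from the dot, for which dimension is also needed.)

```python
# chi = V - E + F is invariant under choice of triangulation of a shape.
def euler_characteristic(V, E, F):
    return V - E + F

shapes = {
    "sphere (balloon)": euler_characteristic(4, 6, 4),  # boundary of a tetrahedron
    "disk (sheet)": euler_characteristic(3, 3, 1),      # one filled triangle
    "circle": euler_characteristic(3, 3, 0),            # hollow triangle
    "point (dot)": euler_characteristic(1, 0, 0),
}
print(shapes)  # sphere: 2, disk: 1, circle: 0, point: 1
```

However the sphere or circle is triangulated, the same χ results, which is the sense in which a purely local definition carries global distinguishing power.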

To each delineated concept, one should distinguish and associate a topological space, in a (necessarily) non-explicit yet definite manner. Whenever one has a relationship between concepts (here we think of the primary relationship as being that of constitution, but not restrictively), one ‘specifies’ a function (or inclusion, or relation) between the topological spaces associated to the concepts. In these terms, a conceptual space is in essence a multidimensional space in which the dimensions represent qualities or features of that which is being represented. Such an approach can be leveraged for thinking about conceptual components, dimensionality, and structure. In these terms, dimensions can be thought of as properties or qualities, each with their own (often multidimensional) properties or qualities. Since a key goal of the modeling of conceptual space is representation, a key (mathematical and theoretical) goal of concept-space mapping is

associationism, where associations between different kinds of information elements carry the main burden of representation. (Conceptual Spaces as a Framework for Knowledge Representation)

To this end,

objects in conceptual space are represented by points, in each domain, that characterize their dimensional values. (A concept geometry for conceptual spaces)

These dimensional values can be arranged in relation to each other, as Gardenfors explains that

distances represent degrees of similarity between objects represented in space, and therefore conceptual spaces are “suitable for representing different kinds of similarity relation”.

These similarity relationships can be explored across ideas of a concept and across contexts, but also over time, since “with the aid of a topological structure, we can speak about continuity, e.g., a continuous change”, a possibility which is available only when concepts are treated as topological structures, and not in linguistic descriptions or set-theoretic representations.
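Gardenfors’ picture of similarity decaying with distance can be sketched in a few lines. The concept names and coordinate values below are invented for illustration, not taken from the text; the only claim embodied is that nearby points in a quality space count as more similar than distant ones.

```python
import math

# Hypothetical concepts as points in a 3-dimensional quality space.
concepts = {
    "state":  (0.9, 0.2, 0.7),
    "nation": (0.8, 0.3, 0.6),
    "danger": (0.1, 0.9, 0.4),
}

def distance(p, q):
    # Euclidean distance between two points of the conceptual space
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def similarity(p, q):
    # Gardenfors-style: similarity decays with distance in the space
    return math.exp(-distance(p, q))

print(similarity(concepts["state"], concepts["nation"]))  # close concepts
print(similarity(concepts["state"], concepts["danger"]))  # distant concepts
```

Tracking how such points move over time would be the computational analogue of the “continuous change” the quotation describes.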

Conjuncted: Gauge Theory


Weyl introduced as a phase factor an exponential in which the phase α is preceded by the imaginary unit i, e.g., e^(+iqα(x)), in the wave function for the wave equations (for instance, the Dirac equation is (iγ^μ∂_μ − m)ψ = 0). It is here that Weyl correctly formulated gauge theory as a symmetry principle from which electromagnetism could be derived. It had been shown that, for a quantum theory of charged particles interacting with the electromagnetic field, invariance under a gauge transformation of the potentials required multiplication of the wave function by the now well-known phase factor. Yang cited Weyl’s gauge theory results as reported by Pauli as a source for Yang-Mills gauge theory, although Yang did not find out until much later that these were Weyl’s results. Moreover, Pauli did not explicitly mention Weyl’s geometric interpretation. It was only well after Yang and Mills published their article that Yang realized the connection between their work and geometry. Yang says…
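The key property of Weyl’s phase factor is easy to verify numerically: multiplying the wave function by e^(iqα(x)) leaves the observable probability density |ψ|² unchanged. The values below are arbitrary illustrations.

```python
import cmath

# Assumed illustrative values for the charge, phase, and amplitude.
q, alpha = 1.0, 0.7
psi = 0.6 + 0.8j                       # wave-function amplitude at some point x
psi_gauged = cmath.exp(1j * q * alpha) * psi

print(abs(psi) ** 2, abs(psi_gauged) ** 2)  # equal up to floating-point rounding
```

This invariance of |ψ|² under the phase multiplication is exactly what makes the gauge transformation a symmetry rather than a physical change.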

Whitehead’s Anti-Substantivalism, or Process & Reality as a Cosmology to-be. Thought of the Day 39.0


Treating “stuff” as some kind of metaphysical primitive is mere substantivalism, and fundamentally question-begging. One has replaced an extra-theoretic referent of the wave-function (unless one defers to some quasi-literalist reading of the nature of the stochastic amplitude function ζ[X(t)] as somehow characterizing something akin to a “density of stuff”, while moreover the logic and probability (Born rules) must ultimately be obtained from experimentally obtained scattering amplitudes) with something at least as mystifying, as the argument against decoherence goes on to show:

In other words, you have a state vector which gives rise to an outcome of a measurement and you cannot understand why this is so according to your theory.

As a response to Platonism, one can likewise read Process and Reality as essentially anti-substantivalist.

Consider, for instance:

Those elements of our experience which stand out clearly and distinctly [giving rise to our substantial intuitions] in our consciousness are not its basic facts, [but] they are . . . late derivatives in the concrescence of an experiencing subject. . . .Neglect of this law [implies that] . . . [e]xperience has been explained in a thoroughly topsy-turvy fashion, the wrong end first (161).

To function as an object is to be a determinant of the definiteness of an actual occurrence [occasion] (243).

The phenomenological ontology offered in Process and Reality is richly nuanced (including metaphysical primitives such as prehensions, occasions, and their respectively derivative notions such as causal efficacy, presentational immediacy, nexus, etc.). None of these suggest metaphysical notions of substance (i.e., independently existing subjects) as a primitive. The case can perhaps be made concerning the discussion of eternal objects, but such notions as discussed vis-à-vis the process of concrescence are obviously not metaphysically primitive notions. Certainly these metaphysical primitives conform in a more nuanced and articulated manner to aspects of process ontology. “Embedding”, like the notion of emergence, is a crucial constituent in the information-theoretic, quantum-topological, and geometric accounts. Moreover, concerning the issue of relativistic covariance, it should be borne in mind that Process and Reality is really a sketch of a cosmology-to-be . . . [in the spirit of] Kant, [who] built on the obsolete ideas of space, time, and matter of Euclid and Newton. Whitehead set out to suggest what a philosophical cosmology might be that builds on Newton.

Optimal Hedging…..


Risk management is important in the practice of financial institutions and other corporations. Derivatives are popular instruments for hedging exposures to currency, interest rate, and other market risks. An important step in risk management is to use these derivatives in an optimal way. The most popular derivatives are forwards, options, and swaps. They are the basic building blocks for all sorts of more complicated derivatives, and should be used prudently. Several parameters need to be determined in the process of risk management, and it is necessary to investigate the influence of these parameters on the aims of the hedging policies and on the possibility of achieving these goals.

The problem of determining the optimal strike price and optimal hedging ratio is considered, where a put option is used to hedge market risk under a budget constraint. The chosen option is not guaranteed to finish in-the-money at maturity, so the predicted loss of the hedged portfolio may differ from the realized loss. The aim of hedging is to minimize the potential loss of the investment under a specified level of confidence. In other words, the optimal hedging strategy is the one that minimizes the Value-at-Risk (VaR) at a specified confidence level.

A stock is supposed to be bought at time zero with price S0, and to be sold at time T with uncertain price ST. In order to hedge the market risk of the stock, the company decides to choose one of the available put options written on the same stock with maturity at time τ, where τ is prior to and close to T, and the n available put options are specified by their strike prices Ki (i = 1, 2, ···, n). As the prices of different put options are also different, the company needs to determine an optimal hedge ratio h (0 ≤ h ≤ 1) with respect to the chosen strike price. The cost of hedging should be less than or equal to the predetermined hedging budget C. In other words, the company needs to determine the optimal strike price and hedging ratio under the constraint of the hedging budget.

Suppose the market price of the stock is S0 at time zero, the hedge ratio is h, the price of the put option is P0, and the riskless interest rate is r. At time T, the time value of the hedging portfolio is

S0e^(rT) + hP0e^(rT) —– (1)

and the market price of the portfolio is

ST + h(K − Sτ)^+ e^(r(T−τ)) —– (2)

therefore the loss of the portfolio is

L = (S0e^(rT) + hP0e^(rT)) − (ST + h(K − Sτ)^+ e^(r(T−τ))) —– (3)

where x^+ = max(x, 0), so that (K − Sτ)^+ is the payoff of the put option at its maturity τ.

For a given threshold v, the probability that the amount of loss exceeds v is denoted as

α = Prob{L ≥ v} —– (4)

in other words, v is the Value-at-Risk (VaR) at α percentage level. There are several alternative measures of risk, such as CVaR (Conditional Value-at-Risk), ESF (Expected Shortfall), CTE (Conditional Tail Expectation), and other coherent risk measures. The criterion of optimality is to minimize the VaR of the hedging strategy.

The mathematical model of stock price is chosen to be a geometric Brownian motion, i.e.

dSt/St = μdt + σdBt —– (5)

where St is the stock price at time t (0 < t ≤ T), μ and σ are the drift and the volatility of stock price, and Bt is a standard Brownian motion. The solution of the stochastic differential equation is

St = S0 e^(σBt + (μ − σ²/2)t) —– (6)

where B0 = 0, and St is lognormally distributed.
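The exact solution (6) can be sampled directly. As a sanity check (with illustrative parameters that are assumptions of this sketch, not values from the text), the Monte Carlo mean of ST should approach the known expectation E[ST] = S0e^(μT):

```python
import math
import random

random.seed(0)

# Assumed illustrative parameters
S0, mu, sigma, T = 100.0, 0.05, 0.2, 1.0
N = 100_000

# Sample S_T from (6): S_T = S0 * exp(sigma*B_T + (mu - sigma^2/2)*T),
# where B_T is normal with mean 0 and standard deviation sqrt(T).
total = 0.0
for _ in range(N):
    B_T = random.gauss(0.0, math.sqrt(T))
    total += S0 * math.exp(sigma * B_T + (mu - 0.5 * sigma ** 2) * T)

mean_ST = total / N
print(mean_ST, S0 * math.exp(mu * T))  # Monte Carlo mean vs E[S_T] = S0*e^(mu*T)
```

Sampling from the exact solution rather than discretizing (5) avoids any time-stepping bias, which is why the check converges cleanly.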

Proposition:

For a given threshold of loss v, the probability that the loss exceeds v is

Prob{L ≥ v} = E[I{X ≤ c1} FY(g(X) − X)] + E[I{X ≥ c1} FY(c2 − X)] —– (7)

where E[·] denotes expectation. I{A} is the indicator function of the event A, such that I{A} = 1 when A is true and I{A} = 0 otherwise. FY(y) is the cumulative distribution function of the random variable Y, and

c1 = (1/σ)[ln(K/S0) − (μ − σ²/2)τ],

g(X) = (1/σ)[ln(((S0 + hP0)e^(rT) − h(K − f(X))e^(r(T−τ)) − v)/S0) − (μ − σ²/2)T],

f(X) = S0 e^(σX + (μ − σ²/2)τ),

c2 = (1/σ)[ln(((S0 + hP0)e^(rT) − v)/S0) − (μ − σ²/2)T].

X and Y are independent normally distributed random variables, with X ∼ N(0, √τ) and Y ∼ N(0, √(T − τ)); here X plays the role of Bτ and Y that of the increment BT − Bτ.

For a specified hedging strategy, Q(v) = Prob{L ≥ v} is a decreasing function of v. The VaR at level α can be obtained from the equation

Q(v) = α —– (8)

The expectations in the Proposition can be calculated with Monte Carlo simulation methods, and the optimal hedging strategy, which has the smallest VaR, can be obtained from equation (8) by numerical search methods….
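A minimal Monte Carlo sketch of this procedure: instead of evaluating the expectations in (7), one can simulate the loss L of equation (3) directly and read the VaR off the empirical quantile. All numerical parameters below (strike, hedge ratio, premium, rates) are assumptions for illustration, not values from the text.

```python
import math
import random

random.seed(42)

# Assumed illustrative parameters
S0, mu, sigma, r = 100.0, 0.08, 0.25, 0.03   # stock price, drift, volatility, riskless rate
tau, T = 0.9, 1.0                            # option maturity and stock selling time
K, h, P0 = 95.0, 0.8, 4.0                    # strike, hedge ratio, put premium (assumed)
alpha = 0.05                                 # tail probability level
N = 50_000

losses = []
for _ in range(N):
    X = random.gauss(0.0, math.sqrt(tau))        # X = B_tau
    Y = random.gauss(0.0, math.sqrt(T - tau))    # Y = B_T - B_tau (independent increment)
    S_tau = S0 * math.exp(sigma * X + (mu - 0.5 * sigma ** 2) * tau)
    S_T = S_tau * math.exp(sigma * Y + (mu - 0.5 * sigma ** 2) * (T - tau))
    payoff = max(K - S_tau, 0.0) * math.exp(r * (T - tau))     # put payoff carried to T
    L = (S0 + h * P0) * math.exp(r * T) - (S_T + h * payoff)   # loss, equation (3)
    losses.append(L)

losses.sort()
var = losses[int((1 - alpha) * N)]  # empirical v with Prob{L >= v} ~ alpha
print("estimated VaR at the", alpha, "level:", round(var, 2))
```

Repeating this over the available strikes Ki and hedge ratios h that satisfy the budget constraint hP0 ≤ C, and keeping the pair with the smallest VaR, is the numerical search the text refers to.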

Badiou’s Diagrammatic Claim of Democratic Materialism Cuts Grothendieck’s Topos. Note Quote.


Let us focus on the more abstract, elementary definition of a topos and discuss materiality in the categorical context. The materiality of being can, indeed, be defined in a way that makes no material reference to the category of Sets itself.

The stakes between being and materiality are thus reverted. From this point of view, a Grothendieck-topos is not one of sheaves over sets; instead, it is a topos which is not defined on the basis of a specific geometric morphism E → Sets (a materialization), but rather one for which such a materialization exists only when the topos itself is already intervened by an explicitly given topos similar to Sets. Therefore, there is no need to start with set-theoretic structures like sieves or Badiou’s ‘generic’ filters.

Strong Postulate, Categorical Version: For a given materialization the situation E is faithful to the atomic situation of truth (Setsγ∗(Ω)op) if the materialization morphism itself is bounded and thus logical.

In particular, this alternative definition suggests that materiality itself is not inevitably a logical question. Therefore, for this definition to make sense, let us look at the question of materiality from a more abstract point of view: what are topoi or ‘places’ of reason that are not necessarily material or where the question of materiality differs from that defined against the ‘Platonic’ world of Sets? Can we deploy the question of materiality without making any reference – direct or sheaf-theoretic – to the question of what the objects ‘consist of’, that is, can we think about materiality without crossing Kant’s categorical limit of the object? Elementary theory suggests that we can.

Elementary Topos:  An elementary topos E is a category which

  1. has finite limits, or equivalently E has so-called pull-backs and a terminal object 1,
  2. is Cartesian closed, which means that for each object X there is an exponential functor (−)X : E → E which is right adjoint to the functor (−) × X, and finally
  3. (axiom of truth) has an object Ω, called the subobject classifier, which is equipped with an arrow true : 1 → Ω such that for each monomorphism σ : Y ↪ X in E there is a unique classifying map φσ : X → Ω making σ : Y ↪ X a pull-back of true along φσ.
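In the familiar topos Sets, the third condition is concrete: Ω is the two-element set {true, false}, the arrow true : 1 → Ω picks out true, and the classifying map of a subset is its characteristic function. A small sketch (the sets are arbitrary examples chosen here):

```python
# In Sets, a monomorphism is (up to isomorphism) a subset inclusion Y -> X,
# and its classifying map is the characteristic function of Y.
X = {1, 2, 3, 4, 5}
Y = {2, 4}  # a subobject of X

def phi(x):
    """Classifying map phi_sigma : X -> Omega = {True, False}."""
    return x in Y

# Pulling 'true' back along phi recovers the subobject Y exactly:
pullback = {x for x in X if phi(x)}
print(pullback == Y)  # True
```

Uniqueness is also visible here: any map X → {True, False} whose pullback of True is Y must agree with phi pointwise.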

Grothendieck-topos: With respect to this categorical definition, a Grothendieck-topos is a topos satisfying the following conditions:

(1) E has all set-indexed coproducts, and they are disjoint and universal,

(2) equivalence relations in E have universal co-equalisers,

(3) every equivalence relation in E is effective, and every epimorphism in E is a coequaliser,

(4) E has ‘small hom-sets’, i.e. for any two objects X, Y , the morphisms of E from X to Y are parametrized by a set, and finally

(5) E has a set of generators (not necessarily monic in respect to 1 as in the case of locales).

Together the five conditions can be taken as an alternative definition of a Grothendieck-topos. We should still demonstrate that Badiou’s world of T-sets is actually the category of sheaves Shvs(T, J) and that it will, consequently, satisfy the conditions of a topos listed above. To shift to the categorical setting, one first needs to define a relation between objects. These relations, the so-called ‘natural transformations’ we encountered in relation to the Yoneda lemma, should satisfy conditions Badiou regards as ‘complex arrangements’.

Relation: A relation from the object (A, Idα) to the object (B,Idβ) is a map ρ : A → B such that

Eβ ρ(a) = Eα a and ρ(a / p) = ρ(a) / p.

It is a rather easy consequence of these two presuppositions that such a map respects the order relation ≤, that is, one retains Idα(a, b) ≤ Idβ(ρ(a), ρ(b)), and that if a ‡ b are two compatible elements, then also ρ(a) ‡ ρ(b). Thus such a relation is itself compatible with the underlying T-structures.

Given these definitions, regardless of Badiou’s confusion about the structure of the ‘power-object’, it is safe to assume that Badiou has demonstrated that there is at least a category of T-sets, if not yet a topos. Its objects are defined as T-sets situated in the ‘world m’ together with their respective equalization functions Idα. It is obviously Badiou’s ‘diagrammatic’ aim to demonstrate that this category is a topos and, ultimately, to reduce any ‘diagrammatic’ claim of ‘democratic materialism’ to constituted, non-diagrammatic objects such as T-sets. That is, showing that this particular set of objects forms a category leads him to assume that every category should take a similar form: a classical mistake of reasoning referred to as affirming the consequent.

Homogeneity: Leibniz Contra Euler. Note Quote.


Euler insists that the relation of equality holds between any infinitesimal and zero. Similarly, Leibniz worked with a generalized relation of “equality” which was an equality up to a negligible term. Leibniz codified this relation in terms of his transcendental law of homogeneity (TLH), or lex homogeneorum transcendentalis in the original Latin. Leibniz had already referred to the law of homogeneity in his first work on the calculus: “the only remaining differential quantities, namely dx, dy, are found always outside the numerators and roots, and each member is acted on by either dx, or by dy, always with the law of homogeneity maintained with regard to these two quantities, in whatever manner the calculation may turn out.”

The TLH governs equations involving differentials. Bos interprets it as follows:

A quantity which is infinitely small with respect to another quantity can be neglected if compared with that quantity. Thus all terms in an equation except those of the highest order of infinity, or the lowest order of infinite smallness, can be discarded. For instance,

a + dx = a —– (1)

dx + ddx = dx

etc. The resulting equations satisfy this . . . requirement of homogeneity.

(here the expression ddx denotes a second-order differential obtained as a second difference). Thus, formulas like Euler’s

a + dx = a —– (2)

(where a “is any finite quantity” (Euler)) belong in the Leibnizian tradition of drawing inferences in accordance with the TLH, as reported by Bos in formula (1) above. The principle of cancellation of infinitesimals was, of course, the very basis of the technique. However, it was also the target of Berkeley’s charge of logical inconsistency (Berkeley). This can be expressed in modern notation by the conjunction (dx ≠ 0) ∧ (dx = 0). But the Leibnizian framework does not suffer from an inconsistency of the type (dx ≠ 0) ∧ (dx = 0), given the more general relation of “equality up to”; in other words, dx is not identical to zero but is merely discarded at the end of the calculation in accordance with the TLH.
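One modern formal echo of the TLH (a gloss of mine, not a claim from the source) is dual-number arithmetic: adjoin an infinitesimal d with d·d = 0, and higher-order terms are discarded automatically, exactly as in dx + ddx = dx.

```python
class Dual:
    """Number a + b*d with a nilpotent infinitesimal d (d*d = 0): arithmetic
    drops higher-order terms automatically, as the TLH prescribes."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b d)(c + e d) = ac + (ae + bc) d, since d*d = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__
    def standard_part(self):
        return self.a  # "a + dx = a": discard the infinitesimal at the end

dx = Dual(0.0, 1.0)
print((5.0 + dx).standard_part())  # 5.0, reproducing equation (1): a + dx = a
x = Dual(3.0, 1.0)                 # x + dx at x = 3
print((x * x).b)                   # 6.0: the coefficient of d recovers d(x^2)/dx
```

Here dx is genuinely nonzero as an object, yet vanishes once the standard part is taken, which is one way of dissolving Berkeley’s (dx ≠ 0) ∧ (dx = 0) charge in code.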

Relations of equality: What Euler and Leibniz appear to have realized more clearly than their contemporaries is that there is more than one relation falling under the general heading of “equality”. Thus, to explain formulas like (2), Euler elaborated two distinct ways, arithmetic and geometric, of comparing quantities. He described the two modalities of comparison in the following terms:

Since we are going to show that an infinitely small quantity is really zero (cyphra), we must meet the objection of why we do not always use the same symbol 0 for infinitely small quantities, rather than some special ones…

[S]ince we have two ways to compare them [a more precise translation would be “there are two modalities of comparison”], either arithmetic or geometric, let us look at the quotients of quantities to be compared in order to see the difference. (Euler)

Furthermore,

If we accept the notation used in the analysis of the infinite, then dx indicates a quantity that is infinitely small, so that both dx = 0 and a dx = 0, where a is any finite quantity. Despite this, the geometric ratio a dx : dx is finite, namely a : 1. For this reason, these two infinitely small quantities, dx and a dx, both being equal to 0, cannot be confused when we consider their ratio. In a similar way, we will deal with infinitely small quantities dx and dy.

Having defined the two modalities of comparison of quantities, arithmetic and geometric, Euler proceeds to clarify the difference between them as follows:

Let a be a finite quantity and let dx be infinitely small. The arithmetic ratio of equals is clear:

Since ndx = 0, we have

a ± ndx − a = 0 —– (3)

On the other hand, the geometric ratio is clearly of equals, since

(a ± ndx)/a = 1 —– (4)

While Euler speaks of distinct modalities of comparison, he writes them down symbolically in terms of two distinct relations, both denoted by the equality sign “=”; namely, (3) and (4). Euler concludes as follows:

From this we obtain the well-known rule that the infinitely small vanishes in comparison with the finite and hence can be neglected [with respect to it].

Note that in the Latin original, the italicized phrase reads infinite parva prae finitis evanescant, atque adeo horum respectu reiici queant. The term evanescant can mean either vanish or lapse, but the term prae makes it read literally as “the infinitely small vanishes before (or by the side of ) the finite,” implying that the infinitesimal disappears because of the finite, and only once it is compared to the finite.

A possible interpretation is that any motion or activity involved in the term evanescant does not indicate that the infinitesimal quantity is a dynamic entity that is (in and of itself) in a state of disappearing, but rather is a static entity that changes, or disappears, only “with respect to” (horum respectu) a finite entity. To Euler, the infinitesimal has a different status depending on what it is being compared to. The passage suggests that Euler’s usage accords more closely with reasoning exploiting static infinitesimals than with dynamic limit-type reasoning.
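Euler’s two modalities of comparison, (3) and (4), can be mimicked numerically with an ordinary small quantity standing in for ndx: as dx shrinks, the arithmetic difference tends to 0 while the geometric ratio tends to 1. The values of a and n are arbitrary illustrations.

```python
# Arithmetic comparison: a + n*dx - a  -> 0   (Euler's (3))
# Geometric comparison:  (a + n*dx)/a -> 1   (Euler's (4))
a, n = 5.0, 3.0
for dx in (1e-2, 1e-6, 1e-12):
    arithmetic = (a + n * dx) - a
    geometric = (a + n * dx) / a
    print(dx, arithmetic, geometric)
```

This is, of course, a limit-style surrogate; Euler’s own infinitesimals are static entities that vanish only “with respect to” the finite, as the passage above explains.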