Conjectural Existence of the Category of Generalized Complex Branes for Generalized Calabi-Yau.

Geometric Langlands Duality can be formulated as follows: Let C be a Riemann surface (compact, without boundary), G be a compact reductive Lie group, GC be its complexification, and Mflat(G, C) be the moduli space of stable flat GC-connections on C. The Langlands dual of G is another compact reductive Lie group LG defined by the condition that its weight and coweight lattices are exchanged relative to G. Let Bun(LG, C) be the moduli stack of holomorphic LG-bundles on C. One of the statements of Geometric Langlands Duality is that the derived category of coherent sheaves on Mflat(G, C) is equivalent to the derived category of D-modules over Bun(LG, C).

Mflat(G, C) is mirror to another moduli space which, roughly speaking, can be described as the cotangent bundle to Bun(LG, C). The category of A-branes on T∗Bun(LG, C) (with the canonical symplectic form) is equivalent to the category of B-branes on a noncommutative deformation of T∗Bun(LG, C). The latter is the same as the category of (analytic) D-modules on Bun(LG, C).

So what, exactly, is the relationship between A-branes and noncommutative B-branes? This relationship arises whenever the target space X is the total space of the cotangent bundle to a complex manifold Y. It is understood that the symplectic form ω is proportional to the canonical symplectic form on T∗Y. With the B-field vanishing, and Y being a complex manifold, we regard ω as the real part of a holomorphic symplectic form Ω. If q_i are holomorphic coordinates on Y, and p_i are dual coordinates on the fibers of T∗Y, then Ω can be written as

Ω = (1/ħ) dp_i ∧ dq_i = dΘ

Since ω (as well as Ω) is exact, the closed A-model of X is rather trivial: there are no nontrivial instantons, and the quantum cohomology ring is isomorphic to the classical one.

We would like to understand the category of A-branes on X = T∗Y. The key observation is that ∃ a natural coisotropic A-brane on X, well-defined up to tensoring with a flat line bundle on X. Its curvature 2-form is exact and given by

F = Im Ω

If we denote by I the natural almost complex structure on X coming from the complex structure on Y, we have F = ωI, and therefore the endomorphism ω⁻¹F = I squares to −1. Therefore any unitary connection on a trivial line bundle over X whose curvature is F defines a coisotropic A-brane.
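As a quick numerical sanity check (one complex dimension, ħ = 1, and the basis ordering (q1, q2, p1, p2) are choices of this sketch, not taken from the text above), one can verify that ω⁻¹F built from Ω = dp ∧ dq indeed squares to −1:

```python
# A sanity-check sketch, not part of the original argument: with Omega = dp ^ dq,
# omega = Re Omega and F = Im Omega, the endomorphism omega^{-1} F squares to -1.
import numpy as np

def two_form(*entries):
    """Build the antisymmetric matrix of a 2-form from ((i, j), value) entries."""
    M = np.zeros((4, 4))
    for (i, j), v in entries:
        M[i, j] += v
        M[j, i] -= v
    return M

# Basis (q1, q2, p1, p2); dp ^ dq = (dp1 + i dp2) ^ (dq1 + i dq2), so:
omega = two_form(((2, 0), 1.0), ((3, 1), -1.0))   # Re: dp1^dq1 - dp2^dq2
F     = two_form(((2, 1), 1.0), ((3, 0), 1.0))    # Im: dp1^dq2 + dp2^dq1

I = np.linalg.inv(omega) @ F
print(np.allclose(I @ I, -np.eye(4)))             # True: omega^{-1} F is an almost complex structure
```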

Now, what about the endomorphisms of the canonical coisotropic A-brane, i.e., the algebra of BRST-closed open-string vertex operators? This is easy to determine if Y is an affine space. One would like to cover Y with charts, each of which is an open subset of Cn, argue that the computation can be performed locally on each chart, and then “glue together” the results; in other words, the algebra in question should be the cohomology of a certain sheaf of algebras, whose local structure is the same as for Y = Cn. In general, the path integral defining the correlators of vertex operators does not have any locality properties in the target space, but if we work in perturbation theory in ħ, each term depends only on the infinitesimal neighbourhood of a point. This shows that the algebra of open-string vertex operators, regarded as a formal power series in ħ, is the cohomology of a sheaf of algebras, which is locally isomorphic to a similar sheaf for X = Cn × Cn.

Let us apply these observations to the canonical coisotropic A-brane on X = T∗Y. Locally, we can identify Y with a region in Cn by means of holomorphic coordinate functions q_1, . . . , q_n. Up to BRST-exact terms, the action of the A-model on a disc Σ takes the form

S = (1/ħ) ∫_{∂Σ} φ∗(p_i dq_i)

where φ is a map from Σ to X. This action is identical to the action of a particle on Y with zero Hamiltonian, except that q_i are holomorphic coordinates on Y rather than ordinary coordinates. The BRST-invariant open-string vertex operators can be taken to be holomorphic functions of p, q. Therefore quantization is locally straightforward and gives a noncommutative deformation of the algebra of holomorphic functions on T∗Y corresponding to a holomorphic Poisson bivector

P = ħ ∂/∂p_i ∧ ∂/∂q_i

One can write an explicit formula for the deformed product:

(f ⋆ g)(p, q) = exp( (ħ/2) (∂²/∂p_i∂q̃_i − ∂²/∂q_i∂p̃_i) ) f(p, q) g(p̃, q̃) |_{p̃ = p, q̃ = q}

This product is known as the Moyal-Wigner product; it is a formal power series in ħ that may have zero radius of convergence. To rectify the situation, one can restrict to functions which are polynomial in the fiber coordinates p_i. Such locally-defined functions on T∗Y can be thought of as symbols of differential operators; the Moyal-Wigner product in this case reduces to the product of symbols and is a polynomial in ħ. Thus locally the sheaf of open-string vertex operators is modelled on the sheaf of holomorphic differential operators on Y (provided we restrict to operators polynomial in p_i).
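A minimal sketch of this reduction (a single pair of coordinates (p, q) and the use of sympy are assumptions of the sketch, not part of the text): for symbols polynomial in p the double series defining the Moyal-Wigner product terminates, and the star commutator of p and q reproduces the canonical relation of the operator ħ ∂/∂q with q:

```python
# Moyal-Wigner star product in one degree of freedom, truncated at a finite order;
# for symbols polynomial in p the series terminates, so the truncation is exact here.
from math import factorial
import sympy as sp

p, q, hbar = sp.symbols('p q hbar')

def moyal(f, g, order=6):
    """Expansion of exp((hbar/2)(d_p d_qt - d_q d_pt)) f(p, q) g(pt, qt) at pt=p, qt=q."""
    result = 0
    for m in range(order + 1):
        for n in range(order + 1):
            coeff = (hbar / 2)**(m + n) * (-1)**n / (factorial(m) * factorial(n))
            result += coeff * sp.diff(f, p, m, q, n) * sp.diff(g, q, m, p, n)
    return sp.expand(result)

print(moyal(p, q) - moyal(q, p))   # hbar: the star commutator [p, q] matches [hbar d/dq, q]
print(moyal(p**2, q**2))           # p**2*q**2 + 2*hbar*p*q + hbar**2/2, a polynomial in hbar
```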

Locally, there is no difference between the sheaf of holomorphic differential operators D(Y) and the sheaf D(Y, L) of holomorphic differential operators acting on a holomorphic line bundle L over Y. Thus the sheaf of open-string vertex operators could be any of the sheaves D(Y, L). Moreover, the classical problem is symmetric under p_i → −p_i combined with the orientation reversal of Σ; if we require that quantization preserve this symmetry, then the algebra of open-string vertex operators must be isomorphic to its opposite algebra. It is known that the opposite of the sheaf D(Y, L) is the sheaf D(Y, L⁻¹ ⊗ K_Y), so symmetry under p_i → −p_i requires L to be a square root of the canonical line bundle K_Y. It does not matter which square root one takes, since they all differ by flat line bundles on Y, and tensoring L by a flat line bundle does not affect the sheaf D(Y, L). The conclusion is that the sheaf of open-string vertex operators for the canonical coisotropic A-brane α on X = T∗Y is isomorphic to the sheaf of noncommutative algebras D(Y, K_Y^{1/2}).

One can use this fact to associate to any A-brane β on X a twisted D-module, i.e., a sheaf of modules over D(Y, K_Y^{1/2}). Consider the A-model with target X on a strip Σ = I × R, where I is a unit interval, and impose boundary conditions corresponding to the branes α and β on the two boundaries of Σ. Upon quantization of this model, one gets a sheaf of vector spaces on Y which is a module over the sheaf of open-string vertex operators inserted at the α boundary. A simple example is to take β to be the zero section of T∗Y with a trivial line bundle. Then the corresponding sheaf is simply the sheaf of sections of K_Y^{1/2}, with the tautological action of D(Y, K_Y^{1/2}).

One can argue that the map from A-branes to (complexes of) D-modules can be extended to an equivalence between the category of A-branes on X and the derived category of D-modules on Y. The argument relies on the conjectural existence of the category of generalized complex branes for any generalized Calabi-Yau manifold. One can then regard Geometric Langlands Duality as a nonabelian generalization of this correspondence.


Lévy Process as Combination of a Brownian Motion with Drift and Infinite Sum of Independent Compound Poisson Processes: Introduction to Martingales. Part 4.

Every piecewise constant Lévy process X^0_t can be represented in the form X^0_t = ∑_{s∈[0,t]} ∆X^0_s = ∫_{[0,t]×Rd} x J_X(ds × dx) for some Poisson random measure J_X with intensity measure of the form ν(dx)dt, where ν is a finite measure defined by

ν(A) = E[#{t ∈ [0,1] : ∆X^0_t ≠ 0, ∆X^0_t ∈ A}], A ∈ B(Rd) —– (1)

Given a Brownian motion with drift γt + W_t, independent from X^0, the sum X_t = X^0_t + γt + W_t defines another Lévy process, which can be decomposed as:

X_t = γt + W_t + ∑_{s∈[0,t]} ∆X_s = γt + W_t + ∫_{[0,t]×Rd} x J_X(ds × dx) —– (2)

where J_X is a Poisson random measure on [0,∞[ × Rd with intensity ν(dx)dt.
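A minimal simulation sketch of (2) (the parameter values, the scalar volatility σ multiplying W_t, and the Gaussian jump-size distribution are assumptions for illustration, not part of the text):

```python
# Brownian motion with drift plus an independent compound Poisson part with
# finite jump measure nu = lam * N(0, delta^2), as in equation (2).
import numpy as np

rng = np.random.default_rng(0)
T, n_steps = 1.0, 1000
dt = T / n_steps
gamma, sigma = 0.1, 0.3              # drift and Brownian volatility (assumed)
lam, delta = 5.0, 0.5                # total mass nu(R) and jump-size std (assumed)

t = np.linspace(0.0, T, n_steps + 1)
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))])
X = gamma * t + sigma * W                      # continuous part: gamma*t + W_t

n_jumps = rng.poisson(lam * T)                 # finite nu: Poisson number of jumps on [0, T]
jump_times = rng.uniform(0.0, T, n_jumps)
jump_sizes = rng.normal(0.0, delta, n_jumps)   # jump sizes drawn from nu / lam
for s, x in zip(jump_times, jump_sizes):
    X[t >= s] += x                             # the piecewise constant part X^0 of (2)

print(f"{n_jumps} jumps on [0, {T}], X_T = {X[-1]:.4f}")
```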

Can every Lévy process be represented in this form? Given a Lévy process Xt, we can still define its Lévy measure ν as above. ν(A) is still finite for any compact set A such that 0 ∉ A: if this were not true, the process would have an infinite number of jumps of finite size on [0, T], which contradicts the cadlag property. So ν defines a Radon measure on Rd \ {0}. But ν is not necessarily a finite measure: the above restriction still allows it to blow up at zero and X may have an infinite number of small jumps on [0, T]. In this case the sum of the jumps becomes an infinite series and its convergence imposes some conditions on the measure ν, under which we obtain a decomposition of X.

Let (X_t)_{t≥0} be a Lévy process on Rd and ν its Lévy measure.

ν is a Radon measure on Rd \ {0} and verifies:

∫_{|x|≤1} |x|² ν(dx) < ∞

The jump measure of X, denoted by J_X, is a Poisson random measure on [0,∞[ × Rd with intensity measure ν(dx)dt.

∃ a vector γ and a d-dimensional Brownian motion (B_t)_{t≥0} with covariance matrix A such that

X_t = γt + B_t + X^l_t + lim_{ε↓0} X′^ε_t —– (3)

where

X^l_t = ∫_{|x|≥1, s∈[0,t]} x J_X(ds × dx)

X′^ε_t = ∫_{ε≤|x|<1, s∈[0,t]} x {J_X(ds × dx) − ν(dx)ds}

≡ ∫_{ε≤|x|<1, s∈[0,t]} x J′_X(ds × dx)

The terms in (3) are independent and the convergence in the last term is almost sure and uniform in t on [0,T].
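For reference (a standard fact added here for completeness, not stated in the text): the data γ, A and ν appearing in (3) determine the law of X_t completely through the Lévy-Khintchine representation of its characteristic function,

E[e^{i⟨z, X_t⟩}] = exp( t ( i⟨γ, z⟩ − (1/2)⟨z, Az⟩ + ∫_{Rd} (e^{i⟨z, x⟩} − 1 − i⟨z, x⟩ 1_{|x|≤1}) ν(dx) ) ), z ∈ Rd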

The Lévy-Itô decomposition entails that for every Lévy process ∃ a vector γ, a positive definite matrix A and a positive measure ν that uniquely determine its distribution. The triplet (A, ν, γ) is called the characteristic triplet or Lévy triplet of the process X_t. The term γt + B_t is a continuous Gaussian Lévy process; conversely, every Gaussian Lévy process is continuous, can be written in this form, and is described by two parameters: the drift γ and the covariance matrix A of the Brownian motion. The other two terms are discontinuous processes incorporating the jumps of X_t and are described by the Lévy measure ν. The condition ∫_{|y|≥1} ν(dy) < ∞ means that X has a finite number of jumps with absolute value larger than 1. So the sum

X^l_t = ∑_{0≤s≤t, |∆X_s|≥1} ∆X_s

contains almost surely a finite number of terms and X^l_t is a compound Poisson process. There is nothing special about the threshold ∆X = 1: for any ε > 0, the sum of jumps with amplitude between ε and 1:

X^ε_t = ∑_{0≤s≤t, ε≤|∆X_s|<1} ∆X_s = ∫_{ε≤|x|<1, s∈[0,t]} x J_X(ds × dx) —– (4)

is again a well-defined compound Poisson process. However, contrary to the compound Poisson case, ν can have a singularity at zero: there can be infinitely many small jumps, and their sum does not necessarily converge. This prevents us from letting ε go to 0 directly in (4). In order to obtain convergence we have to center the remainder term, i.e., replace the jump integral by its compensated version,

X′^ε_t = ∫_{ε≤|x|<1, s∈[0,t]} x J′_X(ds × dx) —– (5)

which is a martingale. While X^ε can be interpreted as an infinite superposition of independent Poisson processes, X′^ε should be seen as an infinite superposition of independent compensated, i.e., centered, Poisson processes, to which a central-limit-type argument can be applied to show convergence. An important implication of the Lévy-Itô decomposition is that every Lévy process is a combination of a Brownian motion with drift and a possibly infinite sum of independent compound Poisson processes. This also means that every Lévy process can be approximated with arbitrary precision by a jump-diffusion process, that is, by the sum of a Brownian motion with drift and a compound Poisson process.
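A rough numerical sketch of this approximation (the one-sided Lévy measure ν(dx) = α x^(−1−α) dx on (0, ∞) and every parameter value are assumptions, not from the text): keep only the jumps of size ≥ ε and subtract the compensator of those in [ε, 1), exactly as in (3):

```python
# Jump-diffusion approximation of an infinite-activity Lévy process:
# truncate jumps below eps and compensate the retained jumps in [eps, 1).
import numpy as np

rng = np.random.default_rng(1)
T, alpha, eps = 1.0, 1.5, 1e-2          # horizon, activity index, truncation level (assumed)
gamma, sigma = 0.0, 0.2                 # drift and Gaussian volatility (assumed)

lam_eps = eps**(-alpha)                 # nu([eps, inf)) is finite once small jumps are dropped
n = rng.poisson(lam_eps * T)
jump_times = rng.uniform(0.0, T, n)
jump_sizes = eps * rng.uniform(size=n)**(-1.0 / alpha)   # inverse-CDF samples from nu on [eps, inf)

# Compensator of the jumps in [eps, 1): subtracting it keeps the limit eps -> 0 finite.
b_eps = alpha * (1.0 - eps**(1.0 - alpha)) / (1.0 - alpha)

n_steps = 2000
t = np.linspace(0.0, T, n_steps + 1)
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(T / n_steps), n_steps))])
X = (gamma - b_eps) * t + sigma * W     # drift minus the small-jump compensator
for s, x in zip(jump_times, jump_sizes):
    X[t >= s] += x                      # add each retained jump from its arrival time onward

print(f"{n} jumps retained, compensator b_eps = {b_eps:.2f}, X_T = {X[-1]:.3f}")
```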

Fréchet Spaces and Presheaf Morphisms.


A topological vector space V is both a topological space and a vector space such that the vector space operations are continuous. A topological vector space is locally convex if its topology admits a basis consisting of convex sets (a set A is convex if (1 − t)x + ty ∈ A ∀ x, y ∈ A and t ∈ [0, 1]).

We say that a locally convex topological vector space is a Fréchet space if its topology is induced by a translation-invariant metric d and the space is complete with respect to d, that is, all the Cauchy sequences are convergent.

A seminorm on a vector space V is a real-valued function p such that ∀ x, y ∈ V and scalars a we have:

(1) p(x + y) ≤ p(x) + p(y),

(2) p(ax) = |a|p(x),

(3) p(x) ≥ 0.

The difference between the norm and the seminorm comes from the last property: we do not ask that if x ≠ 0, then p(x) > 0, as we would do for a norm.

If {p_i}_{i∈N} is a countable family of seminorms on a topological vector space V, separating points, i.e. if x ≠ 0, there is an i with p_i(x) ≠ 0, then ∃ a translation-invariant metric d inducing the topology, defined in terms of the {p_i}:

d(x, y) = ∑_{i=1}^∞ (1/2^i) p_i(x − y)/(1 + p_i(x − y))
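A minimal sketch (truncating the sum to finitely many seminorms and working on a toy space are assumptions of the sketch): the metric above computed from a finite list of seminorms; on R3 the coordinate seminorms p_i(x) = |x_i| already separate points:

```python
# Translation-invariant metric d(x, y) = sum_i 2^{-i} p_i(x - y) / (1 + p_i(x - y)),
# truncated to a finite family of seminorms for illustration.
import numpy as np

def frechet_metric(x, y, seminorms):
    diff = x - y
    return sum(p(diff) / (2**(i + 1) * (1.0 + p(diff))) for i, p in enumerate(seminorms))

seminorms = [lambda v, i=i: abs(v[i]) for i in range(3)]     # coordinate seminorms on R^3
x, y = np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.5, 3.0])
print(frechet_metric(x, y, seminorms))   # only the second seminorm differs: 0.5/(4*1.5) ~ 0.083
```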

The following characterizes Fréchet spaces, giving an effective method to construct them using seminorms.

A topological vector space V is a Fréchet space iff it satisfies the following three properties:

  • it is complete as a topological vector space;
  • it is a Hausdorff space;
  • its topology is induced by a countable family of seminorms {p_i}_{i∈N}, i.e., U ⊂ V is open iff for every u ∈ U ∃ K ≥ 0 and ε > 0 such that {v | p_k(u − v) < ε ∀ k ≤ K} ⊂ U.

We say that a sequence (xn) in V converges to x in the Fréchet space topology defined by a family of seminorms iff it converges to x with respect to each of the given seminorms. In other words, xn → x, iff pi(xn – x) → 0 for each i.

Two families of seminorms defined on the locally convex vector space V are said to be equivalent if they induce the same topology on V.

To construct a Fréchet space, one typically starts with a locally convex topological vector space V and defines a countable family of seminorms pk on V inducing its topology and such that:

  1. if x ∈ V and pk(x) = 0 ∀ k ≥ 0, then x = 0 (separation property);
  2. if (xn) is a sequence in V which is Cauchy with respect to each seminorm, then ∃ x ∈ V such that (xn) converges to x with respect to each seminorm (completeness property).

The topology induced by these seminorms turns V into a Fréchet space; property (1) ensures that it is Hausdorff, while the property (2) guarantees that it is complete. A translation-invariant complete metric inducing the topology on V can then be defined as above.

The most important example of a Fréchet space is the vector space C∞(U), the space of smooth functions on an open set U ⊆ Rn, or more generally the vector space C∞(M), where M is a differentiable manifold.

For each open set U ⊆ Rn (or U ⊂ M), for each K ⊂ U compact and for each multi-index I , we define

||ƒ||_{K,I} := sup_{x∈K} |(∂^{|I|}ƒ/∂x^I)(x)|, ƒ ∈ C∞(U)

Each ||.||_{K,I} defines a seminorm. The family of seminorms obtained by considering all of the multi-indices I and the (countable number of) compact subsets K covering U satisfies properties (1) and (2) detailed above, hence makes C∞(U) into a Fréchet space. The sets of the form

{ƒ ∈ C∞(U) : ||ƒ − g||_{K,I} < ε}

with fixed g ∈ C∞(U), K ⊆ U compact, and multi-index I are open sets and together with their finite intersections form a basis for the topology.
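A rough numerical sketch (approximating the derivatives by finite differences on a grid is an assumption of the sketch, not part of the definition): the seminorms ||ƒ||_{K,I} for U = R, K = [0, 1] and |I| ≤ 2, used to check that a sequence converges in every seminorm, i.e., in the Fréchet topology of C∞(U):

```python
# Seminorms ||f||_{K,I} = sup over K of the |I|-th derivative, approximated on a grid.
import numpy as np

K = np.linspace(0.0, 1.0, 2001)          # the compact set K = [0, 1], discretized
h = K[1] - K[0]

def seminorm(f, order):
    vals = f(K)
    for _ in range(order):
        vals = np.gradient(vals, h)      # numerical derivative (finite differences)
    return np.max(np.abs(vals))

f = np.cos
f_n = lambda x, n=50: np.cos(x + 1.0 / n)    # f_n -> cos together with all its derivatives

for order in range(3):
    print(order, seminorm(lambda x: f_n(x) - f(x), order))
# every seminorm of f_n - f is O(1/n), so f_n -> f in the Fréchet topology of C^infty(U)
```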

All these constructions and results can be generalized to smooth manifolds. Let M be a smooth manifold and let U be an open subset of M. If K is a compact subset of U and D is a differential operator over U, then

p_{K,D}(ƒ) := sup_{x∈K} |D(ƒ)(x)|

is a seminorm. The family of all the seminorms p_{K,D}, with K and D varying among all compact subsets and differential operators respectively, is a separating family of seminorms endowing C∞_M(U) with the structure of a complete locally convex vector space. Moreover, there exists an equivalent countable family of seminorms, hence C∞_M(U) is a Fréchet space. Indeed, let {V_j} be a countable open cover of U by open coordinate subsets, and let, for each j, {K_{j,i}} be a countable family of compact subsets of V_j such that ∪_i K_{j,i} = V_j. We have the countable family of seminorms

p_{K,I}(ƒ) := sup_{x∈K} |(∂^{|I|}ƒ/∂x^I)(x)|, K ∈ {K_{j,i}}

inducing the topology. C∞_M(U) is also an algebra: the product of two smooth functions is again a smooth function.

A Fréchet space V is said to be a Fréchet algebra if its topology can be defined by a countable family of submultiplicative seminorms, i.e., a countable family {q_i}_{i∈N} of seminorms satisfying

q_i(ƒg) ≤ q_i(ƒ) q_i(g) ∀ i ∈ N

Let F be a sheaf of real vector spaces over a manifold M. F is a Fréchet sheaf if:

(1)  for each open set U ⊆ M, F(U) is a Fréchet space;

(2)  for each open set U ⊆ M and for each open cover {Ui} of U, the topology of F(U) is the initial topology with respect to the restriction maps F(U) → F(Ui), that is, the coarsest topology making the restriction morphisms continuous.

As a consequence, the restriction map F(U) → F(V) (V ⊆ U) is continuous. A morphism of sheaves ψ: F → F’ is said to be continuous if the map F(U) → F'(U) is continuous for each open subset U ⊆ M.

The Statistical Physics of Stock Markets. Thought of the Day 143.0


The externalist view argues that we can make sense of, and profit from, stock markets’ behavior, or at least a few crucial properties of it, by crunching numbers and looking for patterns and regularities in certain sets of data. The notion of data, hence, is a key element in such an understanding, and the quantitative side of the problem is prominent, even if this does not mean that qualitative analysis is ignored. The point here is that the outside view maintains that it provides a better understanding than the internalist view. To this end, it endorses a functional perspective on finance and stock markets in particular.

The basic idea of the externalist view is that there are general properties and behaviors of stock markets that can be detected and studied through a mathematical lens, and that they do not depend so much on contextual or domain-specific factors. The point at stake here is that financial systems can be studied and approached at different scales, and it is virtually impossible to produce all the equations describing at a micro level all the objects of the system and their relations. So, in response, this view focuses on those properties that allow us to get an understanding of the behavior of the systems at a global level without having to produce a detailed conceptual and mathematical account of the inner ‘machinery’ of the system. Hence the two roads: the first is to embrace an emergentist view on stock markets, that is, a specific metaphysical, ontological, and methodological thesis, while the second is to embrace a heuristic view, that is, the idea that the choice to focus on those properties that are tractable by the mathematical models is a pure problem-solving option.

A typical view of the externalist approach is the one provided, for instance, by statistical physics. In describing collective behavior, this discipline neglects all the conceptual and mathematical intricacies deriving from a detailed account of the inner, individual, micro-level functioning of a system. Concepts such as stochastic dynamics, self-similarity, correlations (both short- and long-range), and scaling are tools to this end. Econophysics is a stock example in this sense: it employs methods taken from mathematics and mathematical physics in order to detect and forecast the driving forces of stock markets and their critical events, such as bubbles, crashes and their tipping points. In this respect, markets are not ‘dark boxes’: you can see their characteristics from the outside, or better, you can see specific dynamics that shape the trends of stock markets deeply and for a long time. Moreover, these dynamics are complex in the technical sense. This means that this class of behavior is such as to encompass timescales, ontology, types of agents, ecologies, regulations, laws, etc., and can be detected, even if not strictly predicted. We can focus on the stock markets as a whole, or on a few of their critical events, looking at the data of prices (or other indexes) and ignoring all the other details and factors, since they will be absorbed in these global dynamics. So this view provides a look at stock markets such that not only do they not appear as an unintelligible casino where wild gamblers face each other, but it also shows the reasons and the properties of a system that serves mostly as a means of fluid transactions that enable and ease the functioning of free markets.

Moreover, the study of complex systems theory and that of stock markets seem to offer mutual benefits. On one side, complex systems theory seems to offer a key to understanding and breaking through some of the most salient properties of stock markets. On the other side, stock markets seem to provide a ‘stress test’ for complexity theory. Didier Sornette expresses how the analogies between stock markets and phase transitions, statistical mechanics, nonlinear dynamics, and disordered systems mold the view from outside:

Take our personal life. We are not really interested in knowing in advance at what time we will go to a given store or drive to a highway. We are much more interested in forecasting the major bifurcations ahead of us, involving the few important things, like health, love, and work, that count for our happiness. Similarly, predicting the detailed evolution of complex systems has no real value, and the fact that we are taught that it is out of reach from a fundamental point of view does not exclude the more interesting possibility of predicting phases of evolutions of complex systems that really count, like the extreme events. It turns out that most complex systems in natural and social sciences do exhibit rare and sudden transitions that occur over time intervals that are short compared to the characteristic time scales of their posterior evolution. Such extreme events express more than anything else the underlying “forces” usually hidden by almost perfect balance and thus provide the potential for a better scientific understanding of complex systems.

Phase transitions, critical points, and extreme events seem to be so pervasive in stock markets that they are the crucial concepts to explain and, in some cases, foresee. And complexity theory provides us with a fruitful reading key to understand their dynamics, namely their generation, growth and occurrence. Such a reading key proposes a clear-cut interpretation of them, which can be explained again by means of an analogy with physics, precisely with the unstable position of an object. Complexity theory suggests that critical or extreme events occurring at large scale are the outcome of interactions occurring at smaller scales. In the case of stock markets, this means that, unlike many approaches that attempt to account for crashes by searching for ‘mechanisms’ that work at very short time scales, complexity theory indicates that crashes have causes that date back months or years before they occur. This reading suggests that it is the increasing, inner interaction between the agents inside the markets that builds up the unstable dynamics (typically the financial bubbles) that eventually end up in a critical event, the crash. But the specific, final step that triggers the critical event, the collapse of the prices, is not the key to its understanding: a crash occurs because the markets are in an unstable phase and any small interference or event may trigger it. The bottom line: the trigger can be virtually any event external to the markets. The real cause of the crash is its overall unstable position; the proximate ‘cause’ is secondary and accidental. Or, in other words, a crash could be fundamentally endogenous in nature, whilst an exogenous, external shock is simply the occasional triggering factor. The instability is built up by a cooperative behavior among traders, who imitate each other (in this sense it is an endogenous process) and contribute to form and reinforce trends that converge up to a critical point.

The main advantage of this approach is that the system (the market) would anticipate the crash by releasing precursory fingerprints observable in the stock market prices: the market prices contain information on impending crashes and this implies that:

if the traders were to learn how to decipher and use this information, they would act on it and on the knowledge that others act on it; nevertheless, the crashes would still probably happen. Our results suggest a weaker form of the “weak efficient market hypothesis”, according to which the market prices contain, in addition to the information generally available to all, subtle information formed by the global market that most or all individual traders have not yet learned to decipher and use. Instead of the usual interpretation of the efficient market hypothesis in which traders extract and consciously incorporate (by their action) all information contained in the market prices, we propose that the market as a whole can exhibit “emergent” behavior not shared by any of its constituents.

In a nutshell, the critical events emerge in a self-organized and cooperative fashion as the macro result of the internal and micro interactions of the traders, their imitation and mirroring.

 

Long Term Capital Management. Note Quote.

Long Term Capital Management, or LTCM, was a hedge fund founded in 1994 by John Meriwether, the former head of Salomon Brothers’ domestic fixed-income arbitrage group. Meriwether had grown the arbitrage group into Salomon’s most profitable group by 1991, when it was revealed that one of the traders under his purview had astonishingly submitted a false bid in a U.S. Treasury bond auction. Although Meriwether reported the trade immediately to CEO John Gutfreund, the outcry from the scandal forced him to resign.

Meriwether revived his career several years later with the founding of LTCM. Amidst the beginning of one of the greatest bull markets the global markets had ever seen, Meriwether assembled a team of some of the world’s most respected economic theorists to join other refugees from the arbitrage group at Salomon. The board of directors included Myron Scholes, a coauthor of the famous Black-Scholes formula used to price option contracts, and MIT Sloan professor Robert Merton, both of whom would later share the 1997 Nobel Prize for Economics. The firm’s impressive brain trust, collectively considered geniuses by most of the financial world, set out to raise a $1 billion fund by explaining to investors that their profoundly complex computer models allowed them to price securities according to risk more accurately than the rest of the market, in effect “vacuuming up nickels that others couldn’t see.”

One typical LTCM trade concerned the divergence in price between otherwise similar long-term U.S. Treasury bonds. Despite offering fundamentally the same (minimal) default risk, those issued more recently – known as “on-the-run” securities – traded more heavily than the “off-the-run” securities issued just months previously. Heavier trading meant greater liquidity, which in turn resulted in ever-so-slightly higher prices. As “on-the-run” securities become “off-the-run” upon the issuance of a new tranche of Treasury bonds, the price discrepancy generally disappears with time. LTCM sought to exploit that price convergence by shorting the more expensive “on-the-run” bond while purchasing the “off-the-run” security.

By early 1998 the intellectual firepower of its board members and the aggressive trading practices that had made the arbitrage group at Salomon so successful had allowed LTCM to flourish, growing its initial $1 billion of investor equity to $4.72 billion. However, the minuscule spreads earned on arbitrage trades could not provide the type of returns sought by hedge fund investors. In order to make transactions such as these worth their while, LTCM had to employ massive leverage in order to magnify its returns. Ultimately, the fund’s equity component sat atop more than $124.5 billion in borrowings for total assets of more than $129 billion. These borrowings were merely the tip of the iceberg; LTCM also held off-balance-sheet derivative positions with a notional value of more than $1.25 trillion.
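A back-of-the-envelope sketch using the figures just quoted (the 0.5% spread scenario and the 3.6% adverse move are assumed purely for illustration): this is how leverage magnifies both the return on a thin arbitrage spread and the losses when the spread moves the wrong way:

```python
# Leverage arithmetic from the figures quoted above; the scenarios are assumptions.
equity = 4.72e9                      # investor equity
borrowings = 124.5e9                 # on-balance-sheet borrowings
assets = 129e9                       # total assets
print(f"debt/equity   ~ {borrowings / equity:.1f}x")   # ~26x, the leverage cited below
print(f"assets/equity ~ {assets / equity:.1f}x")

spread = 0.005                       # assumed 0.5% net gain on the asset book
print(f"return on equity if spreads converge: {assets * spread / equity:.1%}")

adverse = 0.036                      # assumed 3.6% adverse move on the asset book
print(f"equity after the adverse move: ${equity - assets * adverse:,.0f}")   # nearly wiped out
```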


The fund’s success began to pose its own problems. The market lacked sufficient capacity to absorb LTCM’s bloated size, as trades that had been profitable initially became impossible to conduct on a massive scale. Moreover, a flood of arbitrage imitators tightened the spreads on LTCM’s “bread-and-butter” trades even further. The pressure to continue delivering returns forced LTCM to find new arbitrage opportunities, and the fund diversified into areas where it could not pair its theoretical insights with trading experience. Soon LTCM had made large bets in Russia and in other emerging markets, on S&P futures, and in yield curve, junk bond, merger, and dual-listed securities arbitrage.

Combined with its style drift, the fund’s more than 26-to-1 leverage put LTCM in an increasingly precarious bubble, which was eventually burst by a combination of factors that forced the fund into a liquidity crisis. In contrast to Scholes’s comments about plucking invisible, riskless nickels from the sky, financial theorist Nassim Taleb later compared the fund’s aggressive risk taking to “picking up pennies in front of a steamroller,” a steamroller that finally came in the form of 1998’s market panic. The departure of frequent LTCM counterparty Salomon Brothers from the arbitrage market that summer put downward pressure on many of the fund’s positions, and Russia’s default on its government-issued bonds threw international credit markets into a downward spiral. Panicked investors around the globe demonstrated a “flight to quality,” selling the risky securities in which LTCM traded and purchasing U.S. Treasury securities, further driving up their price and preventing a price convergence upon which the fund had bet so heavily.

None of LTCM’s sophisticated theoretical models had contemplated such an internationally correlated credit market collapse, and the fund began hemorrhaging money, losing nearly 20% of its equity in May and June alone. Day after day, every market in which LTCM traded turned against it. Its powerless brain trust watched in horror as its equity shrank to $600 million in early September without any reduction in borrowing, resulting in an unfathomable 200-to-1 leverage ratio. Sensing the fund’s liquidity crunch, Bear Stearns refused to continue acting as a clearinghouse for the fund’s trades, throwing LTCM into a panic. Without the short-term credit that enabled its entire trading operations, the fund could not continue and its longer-term securities grew more illiquid by the day.

Obstinate in their refusal to unwind what they still considered profitable trades hammered by short-term market irrationality, LTCM’s partners refused a buyout offer of $250 million by Goldman Sachs, ING Barings, and Warren Buffett’s Berkshire Hathaway. However, LTCM’s role as a counterparty in thousands of derivatives trades that touched investment firms around the world threatened to provoke a wider collapse in international securities markets if the fund went under, so the U.S. Federal Reserve stepped in to maintain order. Wishing to avoid the precedent of a government bailout of a hedge fund and the moral hazard it could subsequently encourage, the Fed invited every major investment bank on Wall Street to an emergency meeting in New York and dictated the terms of the $3.625 billion bailout that would preserve market liquidity. The Fed convinced Bankers Trust, Barclays, Chase, Credit Suisse First Boston, Deutsche Bank, Goldman Sachs, Merrill Lynch, J.P. Morgan, Morgan Stanley, Salomon Smith Barney, and UBS – many of whom were investors in the fund – to contribute $300 million apiece, with $125 million coming from Société Générale and $100 million from Lehman Brothers and Paribas. Eventually the market crisis passed, and each bank managed to liquidate its position at a slight profit. Only one bank contacted by the Fed refused to join the syndicate and share the burden in the name of preserving market integrity.

That bank was Bear Stearns.

Bear’s dominant trading position in bonds and derivatives had won it the profitable business of acting as a settlement house for nearly all of LTCM’s trading in those markets. On September 22, 1998, just days before the Fed-organized bailout, Bear put the final nail in the LTCM coffin by calling in a short-term debt in the amount of $500 million in an attempt to limit its own exposure to the failing hedge fund, rendering it insolvent in the process. Ever the maverick in investment banking circles, Bear stubbornly refused to contribute to the eventual buyout, even in the face of a potentially apocalyptic market crash and despite the millions in profits it had earned as LTCM’s prime broker. In typical Bear fashion, James Cayne ignored the howls from other banks that failure to preserve confidence in the markets through a bailout would bring them all down in flames, famously growling through a chewed cigar as the Fed solicited contributions for the emergency financing, “Don’t go alphabetically if you want this to work.”

Market analysts were nearly unanimous in describing the lessons learned from LTCM’s implosion; in effect, the fund’s profound leverage had placed it in such a precarious position that it could not wait for its positions to turn profitable. While its trades were sound in principle, LTCM’s predicted price convergence was not realized until long after its equity had been wiped out completely. A less leveraged firm, they explained, might have realized lower profits than the 40% annual return LTCM had offered investors up until the 1998 crisis, but could have weathered the storm once the market turned against it. In the words of economist John Maynard Keynes, the market had remained irrational longer than LTCM could remain solvent. The crisis further illustrated the importance not merely of liquidity but of perception in the less regulated derivatives markets. Once LTCM’s ability to meet its obligations was called into question, its demise became inevitable, as it could no longer find counterparties with whom to trade and from whom it could borrow to continue operating.

The thornier question of the Fed’s role in bailing out an overly aggressive investment fund in the name of market stability remained unresolved, despite the Fed’s insistence on private funding for the actual buyout. Though impossible to foresee at the time, the issue would be revisited anew less than ten years later, and it would haunt Bear Stearns. With the negative publicity from Bear’s $38.5 million settlement with the SEC (over charges that it had ignored fraudulent behavior by a client for whom it cleared trades) and LTCM’s collapse behind it, Bear Stearns continued to grow under Cayne’s leadership, with its stock price appreciating some 600% from his assumption of control in 1993 until 2008. However, a rapid-fire sequence of negative events began to unfurl in the summer of 2007 that would push Bear into a liquidity crunch eerily similar to the one that felled LTCM.

Accelerated Capital as an Anathema to the Principles of Communicative Action. A Note Quote on the Reciprocity of Capital and Ethicality of Financial Economics


Markowitz portfolio theory explicitly observes that portfolio managers are not (expected) utility maximisers, as they diversify, and offers the hypothesis that a desire for reward is tempered by a fear of uncertainty. This model concludes that all investors should hold the same portfolio; their individual risk-reward objectives are satisfied by weighting this ‘index portfolio’ against riskless cash in the bank, i.e., by choosing a point on the capital market line. The slope of the Capital Market Line is the market price of risk, which is an important parameter in arbitrage arguments.
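A minimal sketch with toy numbers (the expected returns, the covariance matrix and the riskless rate are all assumed): the tangency ‘index portfolio’ w ∝ Σ⁻¹(μ − r_f·1) and the slope of the Capital Market Line, i.e., the market price of risk:

```python
# Tangency ("index") portfolio and the Capital Market Line for assumed toy inputs.
import numpy as np

mu = np.array([0.08, 0.12, 0.10])              # expected returns (assumed)
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])          # return covariance matrix (assumed)
rf = 0.03                                       # riskless rate on cash in the bank (assumed)

w = np.linalg.solve(Sigma, mu - rf)             # tangency portfolio, up to normalisation
w /= w.sum()                                    # every investor holds this same risky portfolio
port_mu, port_sigma = w @ mu, np.sqrt(w @ Sigma @ w)
slope = (port_mu - rf) / port_sigma             # market price of risk = slope of the CML

print("index portfolio weights:", np.round(w, 3))
print(f"capital market line: E[R] = {rf:.2%} + {slope:.3f} * sigma")
```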

Merton had initially attempted to provide an alternative to Markowitz based on utility maximisation employing stochastic calculus. He was only able to resolve the problem by employing the hedging arguments of Black and Scholes, and in doing so built a model that was based on the absence of arbitrage, free of turpe-lucrum. The prescriptive statement “it should not be possible to make sure profits” is explicit in the Efficient Markets Hypothesis and in employing an Arrow security in the context of the Law of One Price. Based on these observations, we conjecture that the whole paradigm of financial economics is built on the principle of balanced reciprocity. In order to explore this conjecture we shall examine the relationship between commerce and themes in Pragmatic philosophy. Specifically, we highlight Robert Brandom’s (Making It Explicit: Reasoning, Representing, and Discursive Commitment) position that there is a pragmatist conception of norms – a notion of primitive correctnesses of performance implicit in practice that precede and are presupposed by their explicit formulation in rules and principles.

The ‘primitive correctnesses’ of commercial practices were recognised by Aristotle when he investigated the nature of Justice in the context of commerce, and then by Olivi when he looked favourably on merchants. They are exhibited in the doux-commerce thesis; compare Fourcade and Healey’s contemporary description of the thesis – commerce teaches ethics mainly through its communicative dimension, that is, by promoting conversations among equals and exchange between strangers – with Putnam’s description of Habermas’ communicative action, based on the norm of sincerity, the norm of truth-telling, and the norm of asserting only what is rationally warranted …[and] contrasted with manipulation (Hilary Putnam, The Collapse of the Fact/Value Dichotomy and Other Essays).

There are practices (that should be) implicit in commerce that make it an exemplar of communicative action. A further expression of markets as centres of communication is manifested in the Asian description of a market, which brings to mind Donald Davidson’s (Subjective, Intersubjective, Objective) argument that knowledge is not the product of a bipartite conversation but of a tripartite relationship between two speakers and their shared environment. Replacing the negotiation between market agents with an algorithm that delivers a theoretical price replaces ‘knowledge’, generated through communication, with dogma. The problem with the performativity that Donald MacKenzie (An Engine, Not a Camera: How Financial Models Shape Markets) is concerned with is one of monism. In employing pricing algorithms, the markets cannot perform to something that comes close to ‘true belief’, which can only be identified through communication between sapient humans. This is an almost trivial observation to (successful) market participants, but difficult to appreciate by spectators who seek to attain ‘objective’ knowledge of markets from a distance. The relevance to financial crises lies in the position that ‘true belief’ is about establishing coherence through myriad triangulations centred on an asset, rather than relying on a theoretical model.

Shifting gears now: unless the martingale measure is a by-product of a hedging approach, the price given by such martingale measures is not related to the cost of a hedging strategy, and therefore the meaning of such ‘prices’ is not clear. If the hedging argument cannot be employed, as in the markets studied by Cont and Tankov (Financial Modelling with Jump Processes), there is no conceptual framework supporting the prices obtained from the Fundamental Theorem of Asset Pricing. This lack of meaning can be interpreted as a consequence of the strict fact/value dichotomy in contemporary mathematics that came with the eclipse of Poincaré’s Intuitionism by Hilbert’s Formalism and Bourbaki’s Rationalism. The practical problem of supporting the social norms of market exchange has been replaced by a theoretical problem of developing formal models of markets. These models then legitimate the actions of agents in the market without having to make reference to explicitly normative values.

The Efficient Market Hypothesis is based on the axiom that the market price is determined by the balance between supply and demand, and so an increase in trading facilitates the convergence to equilibrium. If this axiom is replaced by the axiom of reciprocity, the justification for speculative activity in support of efficient markets disappears. In fact, the axiom of reciprocity would de-legitimise ‘true’ arbitrage opportunities, as being unfair. This would not necessarily make the activities of actual market arbitrageurs illicit, since there are rarely strategies that are without the risk of a loss; however, it would place more emphasis on the risks of speculation and inhibit the hubris that has been associated with the prelude to the recent Crisis. These points raise the question of the legitimacy of speculation in the markets. In an attempt to understand this issue Gabrielle and Reuven Brenner identify three types of market participant. ‘Investors’ are preoccupied with future scarcity and so defer income. Because uncertainty exposes the investor to the risk of loss, investors wish to minimise uncertainty at the cost of potential profits; this is the basis of classical investment theory. ‘Gamblers’ will bet on an outcome taking odds that have been agreed on by society, such as with a sporting bet or in a casino, and this relates to de Moivre’s and Montmort’s ‘taming of chance’. ‘Speculators’ bet on a mis-calculation of the odds quoted by society, and the reason why speculators are regarded as socially questionable is that they have opinions that are explicitly at odds with the consensus: they are practitioners who rebel against a theoretical ‘Truth’. This is captured in Arjun Appadurai’s argument that the leading agents in modern finance “believe in their capacity to channel the workings of chance to win in the games dominated by cultures of control . . . [they] are not those who wish to “tame chance” but those who wish to use chance to animate the otherwise deterministic play of risk [quantifiable uncertainty]”.

In the context of Pragmatism, financial speculators embody pluralism, a concept essential to Pragmatic thinking and an antidote to the problem of radical uncertainty. Appadurai was motivated to study finance by Marcel Mauss’ essay Le Don (The Gift), which explores the moral force behind reciprocity in primitive and archaic societies, and goes on to say that the contemporary financial speculator is “betting on the obligation of return”, and that this is the fundamental axiom of contemporary finance. David Graeber (Debt: The First 5,000 Years) also recognises the fundamental position reciprocity has in finance, but whereas Appadurai recognises the importance of reciprocity in the presence of uncertainty, Graeber essentially ignores uncertainty in his analysis, which ends with the conclusion that “we don’t ‘all’ have to pay our debts”. In advocating that reciprocity need not be honoured, Graeber is not just challenging contemporary capitalism but also the foundations of the civitas, based on equality and reciprocity. The origins of Graeber’s argument are in the first half of the nineteenth century. In 1836 John Stuart Mill defined political economy as being concerned with [man] solely as a being who desires to possess wealth, and who is capable of judging of the comparative efficacy of means for obtaining that end.

In Principles of Political Economy With Some of Their Applications to Social Philosophy, Mill defended Thomas Malthus’ An Essay on the Principle of Population, which focused on scarcity. Mill was writing at a time when Europe was struck by the Cholera pandemic of 1829–1851 and the famines of 1845–1851, and while Lord Tennyson was describing nature as “red in tooth and claw”. At this time, society’s fear of uncertainty seems to have been replaced by a fear of scarcity, and these standards of objectivity dominated economic thought through the twentieth century. Almost a hundred years after Mill, Lionel Robbins defined economics as “the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses”. Dichotomies emerge in the aftermath of the Cartesian revolution that aimed to remove doubt from philosophy. Theory and practice, subject and object, facts and values, means and ends are all separated. In this environment ex cathedra norms, in particular utility (profit) maximisation, encroach on commercial practice.

In order to set boundaries on commercial behaviour motivated by profit maximisation, particularly when market uncertainty returned after the Nixon shock of 1971, society imposes regulations on practice. As a consequence, two competing ethics, a functional Consequentialist ethic guiding market practices and a regulatory Deontological ethic attempting to stabilise the system, vie for supremacy. It is in this debilitating competition between two essentially theoretical ethical frameworks that we offer an explanation for the Financial Crisis of 2007-2009: profit maximisation, not speculation, is destabilising in the presence of radical uncertainty, and regulation cannot keep up with motivated profit maximisers who can justify their actions through abstract mathematical models that bear little resemblance to actual markets. An implication of reorienting financial economics to focus on markets as centres of ‘communicative action’ is that markets could become self-regulating, in the same way that the legal or medical spheres are self-regulated through professions. This is not a ‘libertarian’ argument based on freeing the Consequentialist ethic from a Deontological brake. Rather, it argues that being a market participant entails accepting restrictive norms, such as sincerity and truth-telling, that support knowledge creation, of asset prices, within a broader objective of social cohesion. This immediately calls into question the legitimacy of algorithmic/high-frequency trading, which seems an anathema in regard to the principles of communicative action.

OnionBots: Subverting Privacy Infrastructure for Cyber Attacks


Currently, bots are monitored and controlled by a botmaster, who issues commands. The transmission of these commands, which are known as C&C messages, can be centralized, peer-to-peer or hybrid. In the centralized architecture the bots contact the C&C servers to receive instructions from the botmaster. In this construction the message propagation speed and convergence is faster, compared to the other architectures. It is easy to implement, maintain and monitor. However, it is limited by a single point of failure. Such botnets can be disrupted by taking down or blocking access to the C&C server. Many centralized botnets use IRC or HTTP as their communication channel. GT-Bots, Agobot/Phatbot, and clickbot.a are examples of such botnets. To evade detection and mitigation, attackers developed more sophisticated techniques to dynamically change the C&C servers, such as: Domain Generation Algorithm (DGA) and fast-fluxing (single flux, double flux).

Single-fluxing is a special case of the fast-flux method. It maps multiple (hundreds or even thousands of) IP addresses to a domain name. These IP addresses are registered and de-registered at rapid speed, hence the name fast-flux. These IPs are mapped to particular domain names (e.g., DNS A records) with very short TTL values in a round robin fashion. Double-fluxing is an evolution of the single-flux technique: it fluxes both the IP addresses of the associated fully qualified domain names (FQDN) and the IP addresses of the responsible DNS servers (NS records). These DNS servers are then used to translate the FQDNs to their corresponding IP addresses. This technique provides an additional level of protection and redundancy. Domain Generation Algorithms (DGA) are the algorithms used to generate a list of domains for botnets to contact their C&C. The large number of possible domain names makes it difficult for law enforcement to shut them down. Torpig and Conficker are famous examples of these botnets.
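A toy sketch (the seed, the hash and the reserved ‘.example’ TLD are purely illustrative and tied to no real malware family): the kind of date-seeded domain generation that both a bot and a defender can enumerate, which is why defenders try to recover the seed and pre-register or blocklist the candidates:

```python
# Illustrative date-seeded domain generation; defenders who know the seed can
# enumerate the same candidate list in advance for blocklisting or sinkholing.
import hashlib
from datetime import date

def candidate_domains(day: date, seed: str = "example-seed", count: int = 5):
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}:{day.isoformat()}:{i}".encode()).hexdigest()
        domains.append(digest[:12] + ".example")   # '.example' is a reserved TLD
    return domains

for d in (date(2015, 6, 1), date(2015, 6, 2)):
    print(d, candidate_domains(d))
```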

A significant amount of research focuses on the detection of malicious activities from the network perspective, since the traffic is not anonymized. BotFinder uses the high-level properties of the bot’s network traffic and employs machine learning to identify the key features of C&C communications. DISCLOSURE uses features from NetFlow data (e.g., flow sizes, client access patterns, and temporal behavior) to distinguish C&C channels.
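A schematic sketch in the same spirit (the features, their distributions and the labels are synthetic assumptions, not data or code from BotFinder or DISCLOSURE): NetFlow-style features feeding a supervised classifier that separates C&C-like beaconing from benign traffic:

```python
# Synthetic flow features: [mean flow size (bytes), std of inter-arrival times (s), flows/hour].
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2000
benign = np.column_stack([rng.normal(20000, 8000, n), rng.normal(30, 15, n), rng.normal(40, 20, n)])
cnc    = np.column_stack([rng.normal(800, 300, n),    rng.normal(2, 1, n),   rng.normal(120, 30, n)])
X = np.vstack([benign, cnc])
y = np.concatenate([np.zeros(n), np.ones(n)])        # 1 = C&C-like beaconing

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```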

The next step in the arms race between attackers and defenders was moving from a centralized scheme to a peer-to-peer C&C. Some of these botnets use an already existing peer-to-peer protocol, while others use customized protocols. For example, earlier versions of Storm used Overnet, and the new versions use a customized version of Overnet, called Stormnet. Meanwhile other botnets such as Walowdac and Gameover Zeus organize their communication channels in different layers. (OnionBots: Subverting Privacy Infrastructure for Cyber Attacks)