Superconformal Spin/Field Theories: When Vector Spaces Have the Same Dimensions: Part 1. Note Quote.


A spin structure on a surface means a double covering of its space of non-zero tangent vectors which is non-trivial on each individual tangent space. On an oriented 1-dimensional manifold S it means a double covering of the space of positively-oriented tangent vectors. For purposes of gluing, this is the same thing as a spin structure on a ribbon neighbourhood of S in an orientable surface. Each spin structure has an automorphism which interchanges its sheets, and this will induce an involution T on any vector space which is naturally associated to a 1-manifold with spin structure, giving the vector space a mod 2 grading by its ±1-eigenspaces. A topological-spin theory is a functor from the cobordism category of manifolds with spin structures to the category of super vector spaces with its graded tensor structure. The functor is required to take disjoint unions to super tensor products, and additionally it is required that the automorphism of the spin structure of a 1-manifold induces the grading automorphism T = (−1)^degree of the super vector space. This choice of the supersymmetry of the tensor product rather than the naive symmetry which ignores the grading is forced by the geometry of spin structures if the possibility of a semisimple category of boundary conditions is to be allowed. There are two non-isomorphic circles with spin structure: S^1_ns, with the Möbius or "Neveu-Schwarz" structure, and S^1_r, with the trivial or "Ramond" structure. A topological-spin theory gives us state spaces C_ns and C_r, corresponding respectively to S^1_ns and S^1_r.

There are four cobordisms with spin structures which cover the standard annulus. The double covering can be identified with its incoming end times the interval [0,1], but then one has a binary choice when one identifies the outgoing end of the double covering over the annulus with the chosen structure on the outgoing boundary circle. In other words, alongside the cylinders A^+_{ns,r} = S^1_{ns,r} × [0,1] which induce the identity maps of C_{ns,r} there are also cylinders A^−_{ns,r} which connect S^1_{ns,r} to itself while interchanging the sheets. These cylinders A^−_{ns,r} induce the grading automorphism on the state spaces. But because A^−_ns ≅ A^+_ns by an isomorphism which is the identity on the boundary circles – the Dehn twist which "rotates one end of the cylinder by 2π" – the grading on C_ns must be purely even. The space C_r can have both even and odd components. The situation is a little more complicated for "U-shaped" cobordisms, i.e., cylinders with two incoming or two outgoing boundary circles. If the boundaries are S^1_ns there is only one possibility, but if the boundaries are S^1_r there are two, corresponding to A^±_r. The complication is that there seems no special reason to prefer either of the spin structures as "positive". We shall simply choose one – let us call it P – with incoming boundary S^1_r ⊔ S^1_r, and use P to define a pairing C_r ⊗ C_r → C. We then choose a preferred cobordism Q in the other direction so that when we sew its right-hand outgoing S^1_r to the left-hand incoming one of P the resulting S-bend is the "trivial" cylinder A^+_r. We shall need to know, however, that the closed torus formed by the composition P ∘ Q has an even spin structure. The Frobenius structure θ on C restricts to 0 on C_r.

There is a unique spin structure on the pair-of-pants cobordism in the figure below which restricts to S^1_ns on each boundary circle, and it makes C_ns into a commutative Frobenius algebra in the usual way.

[Figure: the pair-of-pants cobordism]

If one incoming circle is S^1_ns and the other is S^1_r then the outgoing circle is S^1_r, and there are two possible spin structures, but the one obtained by removing a disc from the cylinder A^+_r is preferred: it makes C_r into a graded module over C_ns. The chosen U-shaped cobordism P, with two incoming circles S^1_r, can be punctured to give us a pair of pants with an outgoing S^1_ns, and it induces a graded bilinear map C_r × C_r → C_ns which, composed with the trace on C_ns, gives a non-degenerate inner product on C_r. At this point the choice of symmetry of the tensor product becomes important. Let us consider the diffeomorphism of the pair of pants which shows us in the usual case that the Frobenius algebra is commutative. When we lift it to the spin structure, this diffeomorphism induces the identity on one incoming circle but reverses the sheets over the other incoming circle, and this proves that the cobordism must have the same output when we change the input from S(φ_1 ⊗ φ_2) to T(φ_1) ⊗ φ_2, where T is the grading involution and S : C_r ⊗ C_r → C_r ⊗ C_r is the symmetry of the tensor category. If we take S to be the symmetry of the tensor category of vector spaces which ignores the grading, this shows that the product on the graded vector space C_r is graded-symmetric with the usual sign; but if S is the graded symmetry then we see that the product on C_r is symmetric in the naive sense.

There is an analogue for spin theories of the theorem which tells us that a two-dimensional topological field theory "is" a commutative Frobenius algebra. It asserts that a spin-topological theory "is" a Frobenius algebra C = C_ns ⊕ C_r with the following property. Let {φ_k} be a basis for C_ns, with dual basis {φ^k} such that θ_C(φ^k φ_m) = δ^k_m, and let β_k and β^k be similar dual bases for C_r. Then the Euler elements χ_ns := ∑ φ_k φ^k and χ_r := ∑ β_k β^k are independent of the choices of bases, and the condition we need on the algebra C is that χ_ns = χ_r. In particular, this condition implies that the vector spaces C_ns and C_r have the same dimension. In fact, the Euler elements can be obtained from cutting a hole out of the torus. There are actually four spin structures on the torus. The output state is necessarily in C_ns. The Euler elements for the three even spin structures are equal to χ_e = χ_ns = χ_r. The Euler element χ_o corresponding to the odd spin structure, on the other hand, is given by χ_o = ∑ (−1)^{deg β_k} β_k β^k.

A spin theory is very similar to a Z/2-equivariant theory, which is the structure obtained when the surfaces are equipped with principal Z/2-bundles (i.e., double coverings) rather than spin structures.

It seems reasonable to call a spin theory semisimple if the algebra C_ns is semisimple, i.e., is the algebra of functions on a finite set X. Then C_r is the space of sections of a vector bundle E on X, and it follows from the condition χ_ns = χ_r that the fibre at each point must have dimension 1. Thus the whole structure is determined by the Frobenius algebra C_ns together with a binary choice at each point x ∈ X of the grading of the fibre E_x of the line bundle E at x.

We can now see that if we had not used the graded symmetry in defining the tensor category we should have forced the grading of C_r to be purely even. For on the odd part the inner product would have had to be skew, and that is impossible on a 1-dimensional space. And if both C_ns and C_r are purely even then the theory is in fact completely independent of the spin structures on the surfaces.

A concrete example of a two-dimensional topological-spin theory is given by C = C ⊕ Cη where η² = 1 and η is odd. The Euler elements are χ_e = 1 and χ_o = −1. It follows that the partition function of a closed surface with spin structure is ±1 according as the spin structure is even or odd.
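A quick check of this example, as a sketch assuming the trace is θ(a + bη) = a, so that 1 and η are self-dual bases of C_ns and C_r respectively:

\[
\chi_{ns} = 1 \cdot 1 = 1, \qquad \chi_r = \eta \cdot \eta = \eta^2 = 1, \qquad \chi_o = (-1)^{\deg \eta}\,\eta \cdot \eta = -\eta^2 = -1,
\]

so χ_ns = χ_r as the structure theorem requires, while the odd spin structure detects the sign, giving the ±1 partition function stated above.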

The most common theories defined on surfaces with spin structure are not topological: they are 2-dimensional conformal field theories with N = 1 supersymmetry. It should be noticed that if the theory is not topological then one does not expect the grading on C_ns to be purely even: states can change sign on rotation by 2π. If a surface Σ has a conformal structure then a double covering of the non-zero tangent vectors is the complement of the zero-section in a two-dimensional real vector bundle L on Σ which is called the spin bundle. The covering map then extends to a symmetric pairing of vector bundles L ⊗ L → TΣ which, if we regard L and TΣ as complex line bundles in the natural way, induces an isomorphism L ⊗_C L ≅ TΣ. An N = 1 superconformal field theory is a conformal-spin theory which assigns a vector space H_{S,L} to the 1-manifold S with the spin bundle L, and is equipped with an additional map

Γ(S, L) ⊗ H_{S,L} → H_{S,L}

(σ, ψ) ↦ G_σ ψ,

where Γ(S, L) is the space of smooth sections of L, such that G_σ is real-linear in the section σ and satisfies G_σ² = D_{σ²}, where D_{σ²} is the Virasoro action of the vector field σ² related to σ ⊗ σ by the isomorphism L ⊗_C L ≅ TΣ. Furthermore, when we have a cobordism (Σ, L) from (S_0, L_0) to (S_1, L_1) and a holomorphic section σ of L which restricts to σ_i on S_i we have the intertwining property

G_{σ_1} ∘ U_{Σ,L} = U_{Σ,L} ∘ G_{σ_0}

….

Gothic: Once Again Atheistic Materialism and Hedonistic Flirtations. Drunken Risibility.

 


The machinery of the Gothic, traditionally relegated to both a formulaic and a sensational aesthetic, gradually evolved into a recyclable set of images, motifs and narrative devices that surpass temporal, spatial and generic categories. From the moment of its appearance the Gothic has been obsessed with presenting itself as an imitation.

Recent literary theory has extensively probed into the power of the Gothic to evade temporal and generic limits and into the aesthetic, narratological and ideological implications this involves. Officially granting the Gothic the elasticity it has always entailed has resulted in a reconfiguration of its spectrum both synchronically – by acknowledging its influence on numerous postmodern fictions – and diachronically – by rescripting, in hindsight, the history of its canon so as to allow space for ambiguous presences.

Both transgressive and hybrid in form and content, the Gothic has been accepted as a malleable genre, flexible enough to create more freely, in Borgesian fashion, its own precursors. The genre flouted what are considered the basic principles of good prose writing: adherence to verisimilitude and avoidance of both narrative diversions and moralising – all of which are, of course, made to be deliberately upset. Many merely cite the epigrammatic power of the essay’s most renowned phrase, that the rise of the Gothic “was the inevitable result of the revolutionary shocks which all of Europe has suffered”.

Eighteenth-century French materialist philosophy advocated displacing metaphysical investigations into the meaning of life with materialist explorations. Julien Offray de La Mettrie, a French physician and philosopher and the earliest of the materialist writers of the Enlightenment, published the materialist manifesto L'Homme machine (Man a Machine), which did away with the transcendentalism of the soul, banished all supernatural agencies by claiming that mind is as mechanical as matter, and equated humans with machines. In his words: "The human body is a machine that winds up its own springs: it is a living image of the perpetual motion". French materialist thought resulted in the publication of the great 28-volume Encyclopédie, ou Dictionnaire raisonné des sciences, des arts et des métiers, par une société de gens de lettres, edited by Denis Diderot and Jean le Rond d'Alembert, which was grounded on purely materialist principles, against all kinds of metaphysical thinking. Diderot's atheist materialism set the tone of the Encyclopédie, which, for both editors, was the ideal vehicle […] for reshaping French high culture and attitudes, as well as the perfect instrument with which to insinuate their radical Weltanschauung surreptitiously, using devious procedures, into the main arteries of French society, embedding their revolutionary philosophic manifesto in a vast compilation ostensibly designed to provide plain information and basic orientation but in fact subtly challenging and transforming attitudes in every respect. While materialist thinkers ultimately disowned La Mettrie because he ran counter to their systematic moral, political and social naturalism, someone like Sade remained deeply influenced and inspired, indebted to La Mettrie's atheism and hedonism, and particularly to the perception of virtue and vice as relative notions − the result of socialisation and at odds with nature.

 

Two Conceptions of Morphogenesis – World as a Dense Evolutionary Plasma of Perpetual Differentiation and Innovation. Thought of the Day 57.0


Sanford Kwinter distinguishes two conceptions of morphogenesis: one appropriate to a world capable of sustaining transcendental ontological categories, the other inherent in a world of perfect immanence. According to the classical, hylomorphic model, a necessarily limited number of possibilities (forms or images) are reproduced (mirrored in reality) over a substratum, in a linear time-line. The insufficiency of such a model, however, is evident in its inability to find a place for novelty. Something either is or is not possible. This model cannot account for new possibilities and it fails to confront the inevitable imperfections and degradations evident in all of its realizations. It is indeed the inevitability of corruption and imperfection inherent in classical creation that points to the second mode of morphogenesis. This mode is dependent on an understanding of the world as a ceaseless pullulation and unfolding, a dense evolutionary plasma of perpetual differentiation and innovation. In this world forms are not carried over from some transcendent realm, but instead singularities and events emerge from within a rich plasma through the continual and dynamic interaction of forces. The morphogenetic process at work in such a world is not one whereby an active subject realizes forms from a set of transcendent possibilities, but rather one in which virtualities are actualized through the constant movement inherent in the very forces that compose the world. Virtuality is understood as the free difference or singularity, not yet combined with other differences into a complex ensemble or salient form. It is of course this immanentist description of the world and its attendant mode of morphogenesis that are viable. There is no threshold beneath which classical objects, states, or relations cease to have meaning yet beyond which they are endowed with a full pedigree and privileged status. Indeed, it is the nature of real time to ensure a constant production of innovation and change in all conditions. This is evidenced precisely by the imperfections introduced in an act of realizing a form. The classical mode of morphogenesis, then, has to be understood as a false model which is imposed on what is actually a rich, perpetually transforming universe. But the sort of novelty which the enactment of the classical model produces, a novelty which from its own perspective must be construed as a defect, is not a primary concern if the novelty is registered as having emerged from a complex collision of forces. Above all, it is a novelty uncontaminated by procrustean notions of subjectivity and creation.

Right-(Left-)derived Functors


Fix an abelian category A, let J be a Δ-subcategory of K(A), let D_J be the corresponding derived category, and let

Q = Q_J : J → D_J

be the canonical Δ-functor. For any Δ-functors F and G from J to another Δ-category E, or from D_J to E, Hom(F, G) will denote the abelian group of Δ-functor morphisms from F to G.

A Δ-functor F : J → E is right-derivable if there exists a Δ-functor

RF : D_J → E

and a morphism of Δ-functors

ζ : F → RF ◦ Q

such that for every Δ-functor G : D_J → E the composed map

Hom(RF, G) → Hom(RF ∘ Q, G ∘ Q) → Hom(F, G ∘ Q)

(the first map the natural one, the second induced by ζ) is an isomorphism. The Δ-functor F is left-derivable if there exists a Δ-functor

LF : D_J → E

and a morphism of Δ-functors

ζ : LF ◦ Q → F

such that for every Δ-functor G : D_J → E the composed map

Hom(G, LF) → Hom(G ∘ Q, LF ∘ Q) → Hom(G ∘ Q, F)

(the first map the natural one, the second induced by ζ) is an isomorphism.

The pairs (RF, ζ) and (LF, ζ) are called the right-derived and left-derived functors of F, respectively. Composition with Q gives an embedding of Δ-functor categories …
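A familiar instance, as a sketch not drawn from the text above: take A to be the category of modules over a ring R and F = Hom_R(M, −), extended to complexes. Its right-derived functor computes Ext:

\[
(RF)(N) \simeq R\mathrm{Hom}_R(M, N), \qquad H^i\, R\mathrm{Hom}_R(M, N) = \mathrm{Ext}^i_R(M, N) = H^i\big(\mathrm{Hom}_R(M, I^\bullet)\big),
\]

where N → I^• is an injective resolution. Here ζ : F → RF ∘ Q is the canonical comparison map, and the universal property above says exactly that the pair (RF, ζ) is initial among pairs (G, F → G ∘ Q).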

High Frequency Traders: A Case in Point.

Events on 6th May 2010:

At 2:32 p.m., against [a] backdrop of unusually high volatility and thinning liquidity, a large fundamental trader (a mutual fund complex) initiated a sell program to sell a total of 75,000 E-Mini [S&P 500 futures] contracts (valued at approximately $4.1 billion) as a hedge to an existing equity position. […] This large fundamental trader chose to execute this sell program via an automated execution algorithm (“Sell Algorithm”) that was programmed to feed orders into the June 2010 E-Mini market to target an execution rate set to 9% of the trading volume calculated over the previous minute, but without regard to price or time. The execution of this sell program resulted in the largest net change in daily position of any trader in the E-Mini since the beginning of the year (from January 1, 2010 through May 6, 2010). [. . . ] This sell pressure was initially absorbed by: high frequency traders (“HFTs”) and other intermediaries in the futures market; fundamental buyers in the futures market; and cross-market arbitrageurs who transferred this sell pressure to the equities markets by opportunistically buying E-Mini contracts and simultaneously selling products like SPY [(S&P 500 exchange-traded fund (“ETF”))], or selling individual equities in the S&P 500 Index. […] Between 2:32 p.m. and 2:45 p.m., as prices of the E-Mini rapidly declined, the Sell Algorithm sold about 35,000 E-Mini contracts (valued at approximately $1.9 billion) of the 75,000 intended. [. . . ] By 2:45:28 there were less than 1,050 contracts of buy-side resting orders in the E-Mini, representing less than 1% of buy-side market depth observed at the beginning of the day. [. . . ] At 2:45:28 p.m., trading on the E-Mini was paused for five seconds when the Chicago Mercantile Exchange (“CME”) Stop Logic Functionality was triggered in order to prevent a cascade of further price declines. […] When trading resumed at 2:45:33 p.m., prices stabilized and shortly thereafter, the E-Mini began to recover, followed by the SPY. [. . . ] Even though after 2:45 p.m. prices in the E-Mini and SPY were recovering from their severe declines, sell orders placed for some individual securities and Exchange Traded Funds (ETFs) (including many retail stop-loss orders, triggered by declines in prices of those securities) found reduced buying interest, which led to further price declines in those securities. […] [B]etween 2:40 p.m. and 3:00 p.m., over 20,000 trades (many based on retail-customer orders) across more than 300 separate securities, including many ETFs, were executed at prices 60% or more away from their 2:40 p.m. prices. [. . . ] By 3:08 p.m., [. . . ] the E-Mini prices [were] back to nearly their pre-drop level [. . . and] most securities had reverted back to trading at prices reflecting true consensus values.

In the ordinary course of business, HFTs use their technological advantage to profit from aggressively removing the last few contracts at the best bid and ask levels and then establishing new best bids and asks at adjacent price levels ahead of an immediacy-demanding customer. As an illustration of this "immediacy absorption" activity, consider the following stylized example, presented in the figure and described below.

[Figure: stylized limit order book illustration of immediacy absorption]

Suppose that we observe the central limit order book for a stock index futures contract. The notional value of one stock index futures contract is $50 per index point. The market is very liquid – on average there are hundreds of resting limit orders to buy or sell multiple contracts at either the best bid or the best offer. At some point during the day, due to temporary selling pressure, there is a total of just 100 contracts left at the best bid price of 1000.00. Recognizing that the queue at the best bid is about to be depleted, HFTs submit executable limit orders to aggressively sell a total of 100 contracts, thus completely depleting the queue at the best bid, and very quickly submit sequences of new limit orders to buy a total of 100 contracts at the new best bid price of 999.75, as well as to sell 100 contracts at the new best offer of 1000.00. If the selling pressure continues, then HFTs are able to buy 100 contracts at 999.75 and make a profit of $1,250 among them. If, however, the selling pressure stops and the new best offer price of 1000.00 attracts buyers, then HFTs would very quickly sell 100 contracts (which are at the very front of the new best offer queue), "scratching" the trade at the same price as they bought, and getting rid of the risky inventory in a few milliseconds.
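A minimal sketch of the arithmetic in this example (the $50-per-point multiplier and 0.25-point tick are the stated assumptions; prices and sizes are the hypothetical ones above):

```python
# Immediacy-absorption arithmetic for the stylized example above.
# Assumed: $50 per index point per contract, 0.25-point tick ($12.50/tick).
POINT_VALUE = 50.0
CONTRACTS = 100

def pnl(entry_sell: float, exit_buy: float, contracts: int = CONTRACTS) -> float:
    """Dollar P&L of a short position: sell at entry_sell, buy back at exit_buy."""
    return (entry_sell - exit_buy) * POINT_VALUE * contracts

# Branch 1: selling pressure continues -- short at 1000.00, cover at 999.75.
print(pnl(1000.00, 999.75))   # 1250.0  -> the $1,250 profit quoted in the text

# Branch 2: pressure stops -- the trade is "scratched" at the entry price.
print(pnl(1000.00, 1000.00))  # 0.0     -> flat, inventory shed in milliseconds
```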

This type of trading activity reduces, albeit for only a few milliseconds, the latency of a price move. Under normal market conditions, this trading activity somewhat accelerates price changes and adds to the trading volume, but does not result in a significant directional price move. In effect, this activity imposes a small "immediacy absorption" cost on all traders, including the market makers, who are not fast enough to cancel the last remaining orders before an imminent price move.

This activity, however, makes it both costlier and riskier for the slower market makers to maintain continuous market presence. In response to the additional cost and risk, market makers lower their acceptable inventory bounds to levels that are too small to offset temporary liquidity imbalances of any significant size. When the diminished liquidity buffer of the market makers is pierced by a sudden order flow imbalance, they begin to demand a progressively greater compensation for maintaining continuous market presence, and prices start to move directionally. Just as the prices are moving directionally and volatility is elevated, immediacy absorption activity of HFTs can exacerbate a directional price move and amplify volatility. Higher volatility further increases the speed at which the best bid and offer queues are being depleted, inducing HFT algorithms to demand immediacy even more, fueling a spike in trading volume, and making it more costly for the market makers to maintain continuous market presence. This forces more risk averse market makers to withdraw from the market, which results in a full-blown market crash.

Empirically, immediacy absorption activity of the HFTs should manifest itself in the data very differently from the liquidity provision activity of the Market Makers. To establish the presence of these differences in the data, we test the following hypotheses:

Hypothesis H1: HFTs are more likely than Market Makers to aggressively execute the last 100 contracts before a price move in the direction of the trade. Market Makers are more likely than HFTs to have the last 100 resting contracts against which aggressive orders are executed.

Hypothesis H2: HFTs trade aggressively in the direction of the price move. Market Makers get run over by a price move.

Hypothesis H3: Both HFTs and Market Makers scratch trades, but HFTs scratch more.

To statistically test our "immediacy absorption" hypotheses against the "liquidity provision" hypotheses, we divide all of the trades during the 405-minute trading day into two subsets: Aggressive Buy trades and Aggressive Sell trades. Within each subset, we further aggregate multiple aggressive buy or sell transactions resulting from the execution of the same order into Aggressive Buy or Aggressive Sell sequences. The intuition is as follows. Often a specific trade is not a stand-alone event, but a part of a sequence of transactions associated with the execution of the same order. For example, an order to aggressively sell 10 contracts may result in four Aggressive Sell transactions: for 2 contracts, 1 contract, 4 contracts, and 3 contracts, respectively, due to the specific sequence of resting bids against which this aggressive sell order was executed. Using the order ID number, we are able to aggregate these four transactions into one Aggressive Sell sequence for 10 contracts.
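A sketch of this sequence-aggregation step (field names like the order ID are hypothetical; the audit-trail data itself is not public, and trades from one order are assumed consecutive, as in an audit trail):

```python
from itertools import groupby

# Each trade: (order_id, side, quantity). Aggregate consecutive trades from the
# same aggressive order into one Aggressive Buy/Sell sequence, as described above.
trades = [
    ("A17", "sell", 2), ("A17", "sell", 1), ("A17", "sell", 4), ("A17", "sell", 3),
    ("B09", "buy", 5),
]

sequences = [
    (order_id, side, sum(q for _, _, q in group))
    for (order_id, side), group in groupby(trades, key=lambda t: (t[0], t[1]))
]
print(sequences)  # [('A17', 'sell', 10), ('B09', 'buy', 5)] -- one 10-contract sequence
```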

Testing Hypothesis H1. Aggressive removal of the last 100 contracts by HFTs; passive provision of the last 100 resting contracts by the Market Makers. Using the Aggressive Buy sequences, we label as a "price increase event" all occurrences of trading sequences in which at least 100 contracts consecutively executed at the same price are followed by some number of contracts at a higher price. To examine indications of low latency, we focus on the last 100 contracts traded before the price increase and the first 100 contracts at the next higher price (or fewer if the price changes again before 100 contracts are executed). Although we do not look directly at the limit order book data, price increase events are defined to capture occasions where traders use executable buy orders to lift the last remaining offers in the limit order book. Using Aggressive Sell trades, we define "price decrease events" symmetrically as occurrences of sequences of trades in which 100 contracts executed at the same price are followed by executions at lower prices. These events are intended to capture occasions where traders use executable sell orders to hit the last few best bids in the limit order book. The results are presented in the table below.
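Before the table, a minimal sketch of this event definition (toy data; real events are built from the audit trail):

```python
def price_events(trades, min_qty=100):
    """Scan (price, qty) trades; flag a price-increase/decrease event whenever at
    least `min_qty` contracts execute consecutively at one price and the next
    trade prints at a higher/lower price."""
    events, run_price, run_qty = [], None, 0
    for price, qty in trades:
        if price == run_price:
            run_qty += qty
        else:
            if run_price is not None and run_qty >= min_qty:
                events.append(("increase" if price > run_price else "decrease", run_price, price))
            run_price, run_qty = price, qty
    return events

ticks = [(1000.00, 60), (1000.00, 45), (1000.25, 10)]   # 105 @ 1000.00, then higher
print(price_events(ticks))  # [('increase', 1000.0, 1000.25)]
```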

[Table: trader categories' shares of Aggressive and Passive volume around price increase and price decrease events]

For price increase and price decrease events, we calculate each of the six trader categories’ shares of Aggressive and Passive trading volume for the last 100 contracts traded at the “old” price level before the price increase or decrease and the first 100 contracts traded at the “new” price level (or fewer if the number of contracts is less than 100) after the price increase or decrease event.

Table above presents, for the six trader categories, volume shares for the last 100 contracts at the old price and the first 100 contracts at the new price. For comparison, the unconditional shares of aggressive and passive trading volume of each trader category are also reported. Table has four panels covering (A) price increase events on May 3-5, (B) price decrease events on May 3-5, (C) price increase events on May 6, and (D) price decrease events on May 6. In each panel there are six rows of data, one row for each trader category. Relative to panels A and C, the rows for Fundamental Buyers (BUYER) and Fundamental Sellers (SELLER) are reversed in panels B and D to emphasize the symmetry between buying during price increase events and selling during price decrease events. The first two columns report the shares of Aggressive and Passive contract volume for the last 100 contracts before the price change; the next two columns report the shares of Aggressive and Passive volume for up to the next 100 contracts after the price change; and the last two columns report the “unconditional” market shares of Aggressive and Passive sides of all Aggressive buy volume or sell volume. For May 3-5, the data are based on volume pooled across the three days.

Consider panel A, which describes price increase events associated with Aggressive buy trades on May 3-5, 2010. High Frequency Traders participated on the Aggressive side of 34.04% of all aggressive buy volume. Strongly consistent with the immediacy absorption hypothesis, the participation rate rises to 57.70% of the Aggressive side of trades on the last 100 contracts of Aggressive buy volume before price increase events and falls to 14.84% of the Aggressive side of trades on the first 100 contracts of Aggressive buy volume after price increase events.

High Frequency Traders participated on the Passive side of 34.33% of all aggressive buy volume. Consistent with the hypothesis, the participation rate on the Passive side of Aggressive buy volume falls to 28.72% of the last 100 contracts before a price increase event. It rises to 37.93% of the first 100 contracts after a price increase event.

These results are inconsistent with the idea that high frequency traders behave like textbook market makers, suffering adverse selection losses associated with being picked off by informed traders. Instead, when the price is about to move to a new level, high frequency traders tend to avoid being run over and take the price to the new level with Aggressive trades of their own.

Market Makers follow a noticeably more passive trading strategy than High Frequency Traders. According to panel A, Market Makers are 13.48% of the Passive side of all Aggressive trades, but they are only 7.27% of the Aggressive side of all Aggressive trades. On the last 100 contracts at the old price, Market Makers’ share of volume increases only modestly, from 7.27% to 8.78% of trades. Their share of Passive volume at the old price increases, from 13.48% to 15.80%. These facts are consistent with the interpretation that Market Makers, unlike High Frequency Traders, do engage in a strategy similar to traditional passive market making, buying at the bid price, selling at the offer price, and suffering losses when the price moves against them. These facts are also consistent with the hypothesis that High Frequency Traders have lower latency than Market Makers.

Intuition might suggest that Fundamental Buyers would tend to place the Aggressive trades which move prices up from one tick level to the next. This intuition does not seem to be corroborated by the data. According to panel A, Fundamental Buyers are 21.53% of all Aggressive trades but only 11.61% of the last 100 Aggressive contracts traded at the old price. Instead, Fundamental Buyers increase their share of Aggressive buy volume to 26.17% of the first 100 contracts at the new price.

Taking into account symmetry between buying and selling, panel B shows the results for Aggressive sell trades during May 3-5, 2010, are almost the same as the results for Aggressive buy trades. High Frequency Traders are 34.17% of all Aggressive sell volume, increase their share to 55.20% of the last 100 Aggressive sell contracts at the old price, and decrease their share to 15.04% of the first 100 Aggressive sell contracts at the new price. Market Makers are 7.45% of all Aggressive sell contracts, increase their share to only 8.57% of the last 100 Aggressive sell trades at the old price, and decrease their share to 6.58% of the first 100 Aggressive sell contracts at the new price. Fundamental Sellers' shares of Aggressive sell trades behave similarly to Fundamental Buyers' shares of Aggressive buy trades. Fundamental Sellers are 20.91% of all Aggressive sell contracts, decrease their share to 11.96% of the last 100 Aggressive sell contracts at the old price, and increase their share to 24.87% of the first 100 Aggressive sell contracts at the new price.

Panels C and D report results for Aggressive Buy trades and Aggressive Sell trades for May 6, 2010. Taking into account symmetry between buying and selling, the results for Aggressive buy trades in panel C are very similar to the results for Aggressive sell trades in panel D. For example, Aggressive sell trades by Fundamental Sellers were 17.55% of Aggressive sell volume on May 6, while Aggressive buy trades by Fundamental Buyers were 20.12% of Aggressive buy volume on May 6. In comparison with the share of Fundamental Buyers and in comparison with May 3-5, the Flash Crash of May 6 is associated with a slightly lower – not higher – share of Aggressive sell trades by Fundamental Sellers.

The number of price increase and price decrease events increased dramatically on May 6, consistent with the increased volatility of the market on that day. On May 3-5, there were 4100 price increase events and 4062 price decrease events. On May 6 alone, there were 4101 price increase events and 4377 price decrease events. There were therefore approximately three times as many price increase events per day on May 6 as on the three preceding days.

A comparison of May 6 with May 3-5 reveals significant changes in the trading patterns of High Frequency Traders. Compared with May 3-5 in panels A and B, the share of Aggressive trades by High Frequency Traders drops from 34.04% of Aggressive buys and 34.17% of Aggressive sells on May 3-5 to 26.98% of Aggressive buy trades and 26.29% of Aggressive sell trades on May 6. The share of Aggressive trades for the last 100 contracts at the old price declines by even more. High Frequency Traders' participation rate on the Aggressive side of Aggressive buy trades drops from 57.70% on May 3-5 to only 38.86% on May 6. Similarly, the participation rate on the Aggressive side of Aggressive sell trades drops from 55.20% to 38.67%. These declines are largely offset by increases in the participation rate by Opportunistic Traders on the Aggressive side of trades. For example, Opportunistic Traders' share of the Aggressive side of the last 100 contracts traded at the old price rises from 19.21% to 34.26% for Aggressive buys and from 20.99% to 33.86% for Aggressive sells. These results suggest that some Opportunistic Traders follow trading strategies for which low latency is important, such as index arbitrage, cross-market arbitrage, or opportunistic strategies mimicking market making.

Testing Hypothesis H2. HFTs trade aggressively in the direction of the price move; Market Makers get run over by a price move. To examine this hypothesis, we analyze whether High Frequency Traders use Aggressive trades to trade in the direction of contemporaneous price changes, while Market Makers use Passive trades to trade in the opposite direction from price changes. To this end, we estimate the regression equation

Δy_t = α + φ·Δy_{t−1} + δ·y_{t−1} + Σ_{i=0}^{20} β_i·[Δp_{t−i}/0.25] + ε_t

(where y_t and Δy_t denote inventories and change in inventories of High Frequency Traders for each second of a trading day; t = 0 corresponds to the opening of stock trading on the NYSE at 8:30:00 a.m. CT (9:30:00 ET) and t = 24,300 denotes the close of Globex at 15:15:00 CT (4:15 p.m. ET); Δp_t denotes the price change in index point units between the high-low midpoint of second t−1 and the high-low midpoint of second t. The regression thus relates second-by-second changes in inventory levels of High Frequency Traders to the level of their inventories the previous second, the change in their inventory levels the previous second, the change in prices during the current second, and lagged price changes for each of the 20 previous seconds.)

for Passive and Aggressive inventory changes separately.
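A sketch of how such a regression could be run (pandas/statsmodels; the column names are hypothetical, since the underlying audit-trail data is not public):

```python
import pandas as pd
import statsmodels.api as sm

# df: one row per second, with columns "inventory" (y_t, net position) and
# "price" (high-low midpoint). Build dy_t, y_{t-1}, dy_{t-1} and 0..20 lags of
# tick-scaled price changes dp_{t-i}/0.25, then fit by OLS.
def fit_inventory_regression(df: pd.DataFrame, n_lags: int = 20):
    d = pd.DataFrame(index=df.index)
    d["dy"] = df["inventory"].diff()
    d["dy_lag1"] = d["dy"].shift(1)
    d["y_lag1"] = df["inventory"].shift(1)
    dp_ticks = df["price"].diff() / 0.25
    for i in range(n_lags + 1):                 # i = 0 is the contemporaneous change
        d[f"dp_tick_lag{i}"] = dp_ticks.shift(i)
    d = d.dropna()
    X = sm.add_constant(d.drop(columns="dy"))   # alpha + phi, delta, beta_0..beta_20
    return sm.OLS(d["dy"], X).fit()

# model = fit_inventory_regression(df); print(model.params["dp_tick_lag0"])
```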

[Table: regressions of net Aggressive and Passive changes in holdings on lagged inventory, lagged changes in holdings, and lagged price changes]

The table above presents the regression results of the two components of change in holdings on lagged inventory, lagged change in holdings and lagged price changes over one-second intervals. Panel A and Panel B report the results for May 3-5 and May 6, respectively. Each panel has four columns, reporting estimated coefficients where the dependent variables are net Aggressive volume (Aggressive buys minus Aggressive sells) by High Frequency Traders (ΔA_HFT), net Passive volume by High Frequency Traders (ΔP_HFT), net Aggressive volume by Market Makers (ΔA_MM), and net Passive volume by Market Makers (ΔP_MM).

We observe that for lagged HFT inventories (y_{t−1}), the estimated coefficients for Aggressive and Passive trades by High Frequency Traders are δ_A^HFT = −0.005 (t = −9.55) and δ_P^HFT = −0.001 (t = −3.13), respectively. These coefficient estimates have the interpretation that High Frequency Traders use Aggressive trades to liquidate inventories more intensively than Passive trades. In contrast, the results for Market Makers are very different. For lagged Market Maker inventories, the estimated coefficients for Aggressive and Passive volume are δ_A^MM = −0.002 (t = −6.73) and δ_P^MM = −0.002 (t = −5.26), respectively. The similarity of these coefficient estimates has the interpretation that Market Makers favor neither Aggressive trades nor Passive trades when liquidating inventories.

For contemporaneous price changes (in the current second, Δp_t), the estimated coefficients for Aggressive and Passive volume by High Frequency Traders are β_0 = 57.78 (t = 31.94) and β_0 = −25.69 (t = −28.61), respectively. For Market Makers, the estimated coefficients for Aggressive and Passive trades are β_0 = 6.38 (t = 18.51) and β_0 = −19.92 (t = −37.68). These estimated coefficients have the interpretation that in seconds in which prices move up one tick, High Frequency Traders are net buyers of about 58 contracts with Aggressive trades and net sellers of about 26 contracts with Passive trades in that same second, while Market Makers are net buyers of about 6 contracts with Aggressive trades and net sellers of about 20 contracts with Passive trades. High Frequency Traders and Market Makers are similar in that they both use Aggressive trades to trade in the direction of price changes, and both use Passive trades to trade against the direction of price changes. High Frequency Traders and Market Makers are different in that Aggressive net purchases by High Frequency Traders are greater in magnitude than the Passive net purchases, while the reverse is true for Market Makers.

For lagged price changes, coefficient estimates for Aggressive trades by High Frequency Traders and Market Makers are positive and statistically significant at lags 1-4 and lags 1-10, respectively. These results have the interpretation that both High Frequency Traders and Market Makers trade on recent price momentum, but the trading is compressed into a shorter time frame for High Frequency Traders than for Market Makers.

For lagged price changes, coefficient estimates for Passive volume by High Frequency Traders and Market Makers are negative and statistically significant at lag 1 and lags 1-3, respectively. Panel B of the table presents results for May 6. Similar to May 3-5, High Frequency Traders tend to use Aggressive trades more intensely than Passive trades to liquidate inventories, while Market Makers do not show this pattern. Also similar to May 3-5, High Frequency Traders and Market Makers use Aggressive trades to trade in the contemporaneous direction of price changes and use Passive trades to trade in the direction opposite to price changes, with Aggressive trading greater than Passive trading for High Frequency Traders and the reverse for Market Makers. In comparison with May 3-5, the coefficients are smaller in magnitude on May 6, indicating reduced liquidity at each tick. For lagged price changes, the coefficients associated with Aggressive trading by High Frequency Traders change from positive to negative at lags 1-4, and the coefficients associated with Aggressive trading by Market Makers change from being positive and statistically significant at lags 1-10 to being positive and statistically significant only at lags 1-3. These results illustrate accelerated trading velocity in the volatile market conditions of May 6.

We further examine how high frequency trading activity is related to market prices. The figure below illustrates how prices change after HFT trading activity in a given second. For an "event" second in which High Frequency Traders are net buyers, net Aggressive buyers, or net Passive buyers, the value-weighted average prices paid by the High Frequency Traders in that second are subtracted from the value-weighted average prices for all trades in the same second and in each of the following 20 seconds. The results are averaged across event seconds, weighted by the magnitude of High Frequency Traders' net position change in the event second. The upper-left panel presents results for buy trades on May 3-5, the upper-right panel presents results for buy trades on May 6, and the lower two panels present results for sell trades calculated analogously. Price differences on the vertical axis are scaled so that one unit equals one tick ($12.50 per contract).

[Figure: price changes in the 20 seconds following HFT net buying and net selling, May 3-5 and May 6]

When High Frequency Traders are net buyers on May 3-5, prices rise by 17% of a tick in the next second. When HFTs execute Aggressively or Passively, prices rise by 20% and 2% of a tick in the next second, respectively. In subsequent seconds, prices in all cases trend downward by about 5% of a tick over the subsequent 19 seconds. For May 3-5, the results are almost symmetric for selling.

When High Frequency Traders are buying on May 6, prices increase by 7% of a tick in the next second. When they are Aggressive buyers or Passive buyers, prices increase by 25% of a tick or decrease by 5% of a tick in the next second, respectively. In subsequent seconds, prices generally tend to drift downwards. The downward drift is especially pronounced after Passive buying, consistent with the interpretation that High Frequency Traders were "run over" when their resting limit buy orders were executed in the down phase of the Flash Crash. When High Frequency Traders are net sellers, the results after one second are analogous to buying. After Aggressive selling, prices continue to drift down for 20 seconds, consistent with the interpretation that High Frequency Traders made profits from Aggressive sales during the down phase of the Flash Crash.

Testing Hypothesis H3. Both HFTs and Market Makers scratch trades; HFTs scratch more. A textbook market maker will try to buy at the bid price, sell at the offer price, and capture the bid-ask spread as a profit. Sometimes, after buying at the bid price, market prices begin to fall before the market maker can make a one-tick profit by selling his inventory at the best offer price. To avoid taking losses in this situation, one component of a traditional market making strategy is to "scratch" trades in the presence of changing market conditions by quickly liquidating a position at the same price at which it was acquired. These scratched trades represent inventory management trades designed to lower the cost of adverse selection. Since many competing market makers may try to scratch trades at the same time, traders with the lowest latency will tend to be more successful in their attempts to scratch trades and thus more successful in their ability to avoid losses when market conditions change.

To examine whether and to what extent traders engage in trade scratching, we sequence each trader’s trades for the day using audit trail sequence numbers which not only sort trades by second but also sort trades chronologically within each second. We define an “immediately scratched trade” as a trade with the properties that the next trade in the sorted sequence (1) occurred in the same second, (2) was executed at the same price, (3) was in the opposite direction, i.e., buy followed by sell or sell followed by buy. For each of the trading accounts in our sample, we calculate the number of immediately scratched trades, then compare the number of scratched trades across the six trader categories.
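A sketch of this definition in code (toy trade tuples; the audit-trail sequence numbers are proxied by list order):

```python
def count_immediately_scratched(trades):
    """trades: chronologically sorted (second, price, side) tuples for one account.
    A trade is 'immediately scratched' if the next trade is in the same second,
    at the same price, and on the opposite side."""
    n = 0
    for (s0, p0, side0), (s1, p1, side1) in zip(trades, trades[1:]):
        if s0 == s1 and p0 == p1 and side0 != side1:
            n += 1
    return n

acct = [(41, 999.75, "buy"), (41, 999.75, "sell"), (42, 1000.00, "buy")]
print(count_immediately_scratched(acct))  # 1 -- the buy at 999.75 was scratched
```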

The results of this analysis are presented in the table below. Panel A provides results for May 3-5 and panel B for May 6. In each panel, there are five rows of data, one for each trader category. The first three columns report the total number of trades, the total number of immediately scratched trades, and the percentage of trades that are immediately scratched by traders in each category. For May 3-5, the reported numbers are from the pooled data.

[Table: counts and percentages of immediately scratched trades by trader category, May 3-5 and May 6]

This table presents statistics for immediate trade scratching which measures how many times a trader changes his/her direction of trading in a second aggregated over a day. We define a trade direction change as a buy trade right after a sell trade or vice versa at the same price level in the same second.

This table shows that High Frequency Traders scratched 2.84% of trades on May 3-5 and 4.26% on May 6; Market Makers scratched 2.49% of trades on May 3-5 and 5.53% of trades on May 6. While the percentage of immediately scratched trades by Market Makers is slightly higher than that for High Frequency Traders on May 6, the percentages for both groups are very similar. The fourth, fifth, and sixth columns of the table report the mean, standard deviation, and median of the number of scratched trades for the traders in each category.

Although the percentages of scratched trades are similar, the mean number of immediately scratched trades by High Frequency Traders is much greater than for Market Makers: 540.56 per day on May 3-5 and 1610.75 on May 6 for High Frequency Traders versus 13.35 and 72.92 for Market Makers. The differences between High Frequency Traders and Market Makers reflect differences in volume traded. The Table shows that High Frequency Traders and Market Makers scratch a significantly larger percentage of their trades than other trader categories.

Textual Temporality. Note Quote.


Time is essentially a self-opening and an expanding into the world. Heidegger says that it is, therefore, difficult to go any further here by comparisons. The interpretation of Dasein as temporality in a universal ontological way is an undecidable question which remains “completely unclear” to him. Time as a philosophical problem is a kind of question which no one knows how to raise because of its inseparability from our nature. As Gadamer notes, we can say what time is in virtue of a self-evident preconception of what is, for what is present is always understood by that preconception. Insofar as it makes no claim to provide a valid universality, philosophical discussion is not a systematic determination of time, i.e., one which requires going back beyond time (in its connection with other categories).

In his doctrine of the productivity of the hermeneutical circle in temporal being, Heidegger develops the primacy of futurity for possible recollection and retention of what is already presented by history. History is present to us only in the light of futurity. In Gadamer’s interpretation, it is rather our prejudices that necessarily constitute our being. His view that prejudices are biases in our openness to the world does not signify the character of prejudices which in turn themselves are regarded as an a priori text in the terms already assumed. Based upon this, prejudices in this sense are not empty, but rather carry a significance which refers to being. Thus we can say that prejudices are our openness to the being-in-the-world. That is, being destined to different openness, we face the reference of our hermeneutical attributions. Therefore, the historicity of the temporal being is anything except what is past.

Clearly, the past is not some occurrence, not some incident in my Dasein, but its past; it is not some ‘what’ about Dasein, some event that happens to Dasein and alters it. This past is not a ‘what,’ but a ‘how,’ indeed it is the authentic ‘how’ (wie) of any temporal being. The past brings all ‘what,’ all taking care of and making plans, back into the ‘how’ which is the basic stand of a historical investigation.

Rather than encountering a past-oriented object, hermeneutical experience is a concern towards the text (or texts) which has been presented to us. Understanding is not possible merely because our part of interpretation is realized only when a “text” is read as a fulfillment of all the requirements of the tradition.

For Gadamer and Ricoeur the past as a text always changes its meaning in relation to the ever-developing world of texts; so it seems that the future is recognized as textual or the textual character of the future. In this sense the text itself is not tradition, but expectation. Upon this text the hermeneutical difference essentially can be extended. Consequently, philosophy is no history of hermeneutical events, but philosophical question evokes the historicity of our thinking and knowing. It is not by accident that Hegel, who tried to write the history of philosophy, raised history itself to the state of absolute mind.

What matters in the question concerning time is attaining an answer in terms in which the different ways of being temporal become comprehensible. What matters is allowing a possible connection between that which is in time and authentic temporality to become visible from the very beginning. However, the problem behind this theory still remains, even after lengthy exposition of the Heideggerian interpretation, of whether Being-in-the-world can result from temporal being or vice versa. After the more hermeneutical investigation, it seems that Being-in-the-world must be comprehensible only through Being-in-time.

But, in The Concept of Time, Heidegger has already taken into consideration the broader grasp of the text by considering Being as the origin of the hermeneutics of time. If human Being is in time in a distinctive sense, so that we can read from it what time is, then this Dasein must be characterized by the fundamental determinations of its Being. Indeed, then being temporal, correctly understood, would be the fundamental assertion of Dasein with respect to its Being.

As a result, only the interpretation of being as its reference by way of temporality can make clear why and how this feature of being earlier, of apriority, pertains to being. The a priori character of being as the origin of temporalization calls for a specific kind of approach to being-a-priori whose basic components constitute a phenomenology which is hermeneutical.

Heidegger notes that with regard to Dasein, self-understanding reopens the possibility for a theory of time that is not self-enclosed. Dasein comes back to that which it is and takes over as the being that it is. In coming back to itself, it brings everything that it is back again into its ownmost peculiar chosen can-be. It makes it clear that, although ontologically the text is closest to each and any of its interpretations in its own event, ontically it is closest to itself. But it must be remembered that this phenomenology does not determine completely the references of the text by characterizing the temporalization of the text. Through phenomenological research regarding the text, in hermeneutics we are informed only of how the text gets exhibited and unveiled.

Badiou Contra Grothendieck Functorially. Note Quote.

What makes categories historically remarkable and, in particular, what demonstrates that the categorical change is genuine? On the one hand, Badiou fails to show that category theory is not genuine. But, on the other, it is another thing to say that mathematics itself does change, and that the ‘Platonic’ a priori in Badiou’s endeavour is insufficient, which could be demonstrated empirically.

Yet the empirical does not need to stand only in a way opposed to mathematics. Rather, it relates to results that stemmed from and would have been impossible to comprehend without the use of categories. It is only through experience that we are taught the meaning and use of categories. An experience obviously absent from Badiou’s habituation in mathematics.

To contrast, Grothendieck opened up a new regime of algebraic geometry by generalising the notion of a space first scheme-theoretically (with sheaves) and then in terms of groupoids and higher categories. Topos theory became synonymous with the study of categories satisfying the so-called Giraud axioms, based on Grothendieck's geometric machinery. By utilising such tools, Pierre Deligne was able to prove the so-called Weil conjectures, mod-p analogues of the famous Riemann hypothesis.

These conjectures – anticipated already by Gauss – concern the so-called local ζ-functions that derive from counting the number of points of an algebraic variety over a finite field, an algebraic structure similar to that of, for example, the rational numbers Q or the real numbers R, but with only a finite number of elements. By representing algebraic varieties in polynomial terms, it is possible to analyse geometric structures analogous to the Riemann hypothesis but over finite fields Z/pZ (the whole numbers modulo p). Such 'discrete' varieties had previously been excluded from topological and geometric inquiry, whereas it now emerged that geometry was no longer overshadowed by a need to decide between 'discrete' and 'continuous' modalities of the subject (which Badiou still separates).

Along with the continuous ones, discrete varieties could then be studied on the basis of Betti numbers, and, similarly to what Cohen's argument made manifest in set theory, there seemed to occur 'deeper', topological precursors that had remained invisible under the classical formalism. In particular, the so-called étale cohomology allowed topological concepts (e.g., neighbourhood) to be studied in the context of algebraic geometry, whose classical, Zariski description was too rigid to allow a meaningful interpretation. Introducing such concepts on the basis of Jean-Pierre Serre's suggestion, Alexander Grothendieck revolutionized the field of geometry, paving the way for Pierre Deligne's proof of the Weil conjectures, not to mention Wiles' work on Fermat's last theorem that subsequently followed.

Grothendieck's crucial insight drew on his observation that if morphisms of varieties were considered by their 'adjoint' field of functions, it was possible to consider geometric morphisms as equivalent to algebraic ones. The algebraic category was restrictive, however, because field-morphisms are always monomorphisms, which constrains the corresponding geometric morphisms: to generalize the notion of a neighbourhood to the algebraic category he needed to embed algebraic fields into a larger category of rings. While a traditional Kuratowski covering space is locally 'split' – as mathematicians call it – the same was not true for the dual category of fields. In other words, the category of fields did not have an operator analogous to pull-backs (fibre products) unless considered as being embedded within rings, from which pull-backs have a co-dual expressed by the tensor operator ⊗. Grothendieck thus realized he could replace 'incorporeal' or contained neighborhoods U ↪ X by a more relational description: as maps U → X that are not necessarily monic, but which correspond to ring-morphisms instead.

Topos theory applies a similar insight, not in the context of specific varieties only, but for the entire theory of sets instead. Ultimately, Lawvere and Tierney realized the importance of these ideas to the concept of classification and truth in general. Classification of elements between two sets comes down to a question: does this element belong to a given set or not? In the category of Sets this question calls for a binary answer: true or false. But not in a general topos, in which the composition of the subobject-classifier is more geometric.
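As a sketch of what this means in the simplest case (the standard pullback description, not specific to the text above): in Sets, a subset U ⊆ X is classified by its characteristic map χ_U into Ω = {true, false}, via the square

\[
\begin{array}{ccc}
U & \longrightarrow & 1\\
\big\downarrow & & \big\downarrow{\scriptstyle \mathrm{true}}\\
X & \xrightarrow{\ \chi_U\ } & \Omega
\end{array}
\qquad\qquad
\chi_U(x)=\begin{cases}\mathrm{true} & \text{if } x\in U,\\ \mathrm{false} & \text{if } x\notin U.\end{cases}
\]

In a general topos the same square, required to be a pullback for every mono U ↪ X, defines Ω; it need not be two-valued, which is exactly the 'more geometric' behaviour referred to above.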

Indeed, Lawvere and Tierney then considered this characteristic map 'either/or' as a categorical relationship instead, without referring to its 'contents'. It was the structural form of this morphism (which they called 'true'), as contrasted with other relationships, that marked the beginning of geometric logic. They thus rephrased the binary complete Heyting algebra of classical truth with the categorical version Ω, defined as an object which satisfies a specific pull-back condition. The crux of topos theory was then the so-called Freyd–Mitchell embedding theorem, which effectively guaranteed the explicit set of elementary axioms so as to formalize topos theory. The Freyd–Mitchell embedding theorem says that every abelian category is a full subcategory of a category of modules over some ring R and that the embedding is an exact functor. It is easy to see that not every abelian category is equivalent to R-Mod for some ring R. The reason is that R-Mod has all small limits and colimits, while, for instance, the category of finitely generated R-modules is an abelian category that lacks these properties.

But to understand its significance as a link between geometry and language, it is useful to see how the characteristic map (either/or) behaves in set theory. In particular, by expressing truth in this way, it became possible to reduce the Axiom of Comprehension, which states that any suitable formal condition λ gives rise to a set {x | λ(x)}, to a rather elementary statement regarding adjoint functors.

At the same time, many mathematical structures became expressible not only as general topoi but in terms of a more specific class of Grothendieck topoi. There, too, the 'way of doing mathematics' is different in the sense that the subobject-classifier is categorically defined and there is no empty set (initial object), but mathematics starts from the terminal object 1 instead. However, there is a material way to express the 'difference' such topoi make in terms of set theory: every such topos has a sheaf-form enabling it to be expressed as a category of sheaves Sh(C) for a category C with a specific Grothendieck topology.

Symmetry, Cohomology and Homotopy. Note Quote Didactic.

Given a compact Kähler manifold Y, we know that the cohomology groups Hk(Y, C) have a Hodge decomposition into pieces Hp,q. Now, because we have Poincaré duality and the comparisons between singular and de Rham cohomology, we know that any other space Z that has the same homotopy type as Y will have the same cohomology groups. Consequently they will share the same Hodge diamond, and thus its symmetries.

[Figure: the Hodge diamond]

This means that the symmetry of the Hodge diamond is mostly attached to the homotopy type of Y. This is no longer surprising, because it can already be seen from the equivalence between de Rham cohomology, which is analytic, and Betti cohomology, which is purely simplicial. In fact, it can also be seen from the (smooth) homotopy invariance of de Rham cohomology.
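To make these symmetries explicit – standard Hodge theory, recalled here as a reminder – for a compact Kähler manifold Y of complex dimension n one has

Hk(Y, C) = ⊕p+q=k Hp,q(Y), hp,q := dim Hp,q(Y)

and the Hodge numbers satisfy

hp,q = hq,p (complex conjugation), hp,q = hn−p,n−q (Serre/Poincaré duality)

which are precisely the reflection across the vertical axis of the Hodge diamond and the point reflection through its centre.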

This symmetry can be understood using the Quillen–Segal formalism as follows. Given Y, let us consider Ytop ∈ Top. We have a Quillen equivalence U : Top → sSetQ, where U = Sing is the singular functor, whose left adjoint is the geometric realization. When we consider the comma category sSetQU[Top] = sSet ↓ U, we are literally creating a “trait d’union” (French for a hyphen, a connecting stroke) between the two categories. And when we consider the subcategory of Quillen–Segal objects, we have a triangle that descends to a triangle of equivalences between the homotopy categories. In fact there is a much better statement.


It turns out that if we choose the Joyal model structure sSetJ, we get the Homotopy hypothesis.

A fibrant replacement of Y in the model category sSetQU[Top] is a trivial fibration F → U(Y), where F is fibrant in sSetQ, that is, a Kan complex. But a Kan complex is exactly an ∞-groupoid. ∞-Groupoids generalize groupoids and are still category-like. In particular we can take their opposite (or dual), just as we consider the opposite category Cop of a usual category.
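For reference – the standard definition, not spelled out above – a simplicial set X is a Kan complex when every horn can be filled: for all n and all 0 ≤ k ≤ n, every map Λnk → X extends along the inclusion Λnk ⊂ Δn to a map Δn → X. Requiring fillers only for the inner horns (0 < k < n) yields the quasicategories, which are the fibrant objects of the Joyal model structure sSetJ mentioned above; this is the bridge between the two model structures and the homotopy hypothesis.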

Given Y as above, we can think of the mirror of Y as the opposite ∞-groupoid Fop. A good approximation of Fop can be obtained by the schematization functor à la Toën applied to the simplicial set (quasicategory) underlying Fop. We can take as a model for F the fundamental ∞-groupoid Π(Y), and depending on the dimension it is enough to stop at the corresponding n-groupoid. Toën’s schematization functor can also be obtained from the Quillen–Segal formalism applied to the embedding

U : Sh(Var(C)) ↪ sPresh(Var(C)),

where on the right-hand side we consider the model category of simplicial presheaves à la Jardine–Joyal. The representability of the π0 of the schematization has to be determined by descent along the equivalence type.

Industrial Semiosis. Note Quote.


The concept of Industrial Semiosis categorizes the product life-cycle processes along three semiotic levels of meaning emergence: 1) the ontogenic level, which deals with the life-history data and future expectations about a single occurrence of a product; 2) the typogenic level, which holds the processes related to a product type or generation; and 3) the phylogenic level, which embraces the meaning-affecting processes common to all of the past and current types and occurrences of a product. The three levels naturally differ by the characteristic durations of the grouped semiosis processes: as one moves from the lowest, ontogenic level to the higher levels, the objects become larger and more complicated and have slower dynamics in both original interpretation and meaning change. The semantics of industrial semiosis investigates the relationships that hold between the syntactical elements – the signs in language, models, data – and the objects that matter in industry, such as customers, suppliers, work-pieces, products, processes, resources, tools, time, space, investments, costs, etc. The pragmatics of industrial semiosis deals with the expression and appeal functions of all kinds of languages, data and models, and their interpretations in the setting of any possible enterprise context, as part of the enterprise realising its mission by enterprising, engineering, manufacturing, servicing, re-engineering, competing, etc. The relevance of these definitions for information systems engineering is still limited and vague: they are very general and hardly reflect any knowledge about the industrial domain and its objects, nor do they reflect knowledge about the ubiquitous information infrastructure and the sign systems it accommodates.

A product (as concept) starts its development with initially coinciding onto-, typo-, and phylogenesis processes but distinct and pre-existing semiotic levels of interpretation. As the concept evolves, typogenesis works to reorganize the relationships between the onto- and phylogenesis processes, as the variety of objects involved in product development increases. Product types and their interactions mediate – filter and buffer – between the levels above and below: not all of the variety of distinctions remains available for reorganization as phyla, nor does every lowest-level object have a material relevance there. The phylogenic level is buffered against variations at the ontogenic level by the stabilizing mediations at the typogenic level.

The dynamics of the interactions between the semiotic levels can well be described in terms of the basic processes of variation and selection. In complex system evolution, variation stands for the generation of a variety of simultaneously present, distinct entities (synchronic variety), or of subsequent, distinct states of the same entity (diachronic variety). Variation makes variety increase and produces more distinctions. Selection means, in essence, the elimination of certain distinct entities and/or states, and it reduces the number of remaining entities and/or states.

From a semiotic point of view, the variety of a product intended to operate in an environment is determined by the devised product structure (i.e. the relations established between product parts – its synchronic variety) and the possible relations between the product and the anticipated environment (i.e. the product's feasible states – its potential diachronic variety), which together aggregate the product's possible configurations. The variety is defined on the ontogenic level, which includes elements for describing both the structure and the environment. Ontogenesis is driven by variation that goes through different configurations of the product and eventually discovers (by distinction selection at every stage of the product life cycle) configurations which are stable on one or another time-scale. A constraint on the configurations is then imposed, resulting in selective retention – the emergence of a new meaning for a (not necessarily new) sign – at the typogenic level. The latter decreases the variety but specializes the ontogenic level, so that only those distinctions ultimately remain which fit the environment (i.e. only dynamically stable relation patterns are preserved). Analogously, but on a slower time-scale, typogenesis results in the emergence of a new meaning on the phylogenic level that consecutively specializes the lower levels. Thus, the main semiotic principle of product development is that the dynamics of the meaning-making processes always seeks to decrease the number of possible relations between the product and its environment; hence the semiosis of the product life cycle is naturally simplified. At the same time, however, the ‘natural’ dynamics is such that it augments the evolutive potential of the product concept by increasing its organizational richness: the emergence of new signs (which may lead to the emergence of new levels of interpretation) requires a new kind of information, and new descriptive categories must be introduced to deal with the still-same product.
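As an illustrative toy model only – the names, the stability predicate, and the thresholds below are all invented for this sketch and are not part of the original account – the variation/selection dynamics across the two lower levels can be caricatured in a few lines of Python: variation enlarges the synchronic variety of configurations, selection prunes those unstable against a given environment, and the surviving pattern is retained as a ‘type’ that constrains later ontogenesis.

import random

random.seed(0)

# Ontogenic level: a 'configuration' is a tuple of relations between four
# hypothetical product parts and the environment (encoding invented here).
PARTS, RELATIONS = 4, (0, 1, 2)

def variation(population, target_size):
    # Variation: generate distinct configurations (synchronic variety grows).
    while len(population) < target_size:
        population.add(tuple(random.choice(RELATIONS) for _ in range(PARTS)))
    return population

def stable(config, environment):
    # Selection criterion (invented): survive if at least 3 of 4 relations
    # fit the environment, i.e. only dynamically stable patterns persist.
    return sum(c == e for c, e in zip(config, environment)) >= 3

environment = tuple(random.choice(RELATIONS) for _ in range(PARTS))
population = variation(set(), 50)                              # variety up
survivors = {c for c in population if stable(c, environment)}  # variety down

# Typogenic level: retain as the 'type' those relations shared by all
# survivors; the type then constrains (specializes) later ontogenesis.
product_type = []
for i in range(PARTS):
    values = {c[i] for c in survivors}
    product_type.append(values.pop() if len(values) == 1 else None)

print(len(population), "->", len(survivors), "configurations; type:", product_type)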

Philosophizing Twistors via Fibration

The basic issue is the question of the so-called arrow of time. This issue is an important subject of examination in mathematical physics, as well as in the ontology of spacetime and philosophical anthropology. It reveals a crucial contradiction between the knowledge about time provided by the mathematical models of spacetime in physics and the psychology of time and its ontology. The essence of the contradiction lies in the invariance of the majority of fundamental equations of physics under reversal of the direction of the arrow of time (i.e. the change of the variable t to −t in the equations). Neither the metric continuum constituted by the spaces of simultaneity in the spacetime of classical mechanics before the formulation of the Special Theory of Relativity (a spacetime having only affine, not metric, structure), nor Minkowski's spacetime, nor the spacetime of the GTR (pseudo-Riemannian), the latter two of which have metric structure, distinguishes the categories of past, present and future as ones that are meaningful in physics. Every event may be located with the use of four coordinates with regard to any curvilinear coordinate system. This clashes remarkably with the human perception of time and space. Penrose realizes and understands the necessity of formulating a theory of spacetime that would remove this discrepancy. He remarked that although we feel the passage of time, we do not perceive the “passage” of any of the space dimensions. Theories of spacetime in mathematical physics, while considering continua and metric manifolds, cannot explain the difference between the time dimension and the space dimensions; they are also unable to explain by means of geometry the unidirectionality of the passage of time, which can be comprehended only by means of thermodynamics. The theory of twistor spaces is aimed at a better understanding – crucial for the ontology of nature – of the problem of the uniqueness of the time dimension and the question of the arrow of time. There are hypotheses that the question of the arrow of time would be easier to solve thanks to the examination of so-called spacetime singularities and the formulation of a time-asymmetric quantum theory of gravitation – the theory of spacetime at the microscale.

The unique role of twistors in TGD

Although Lorentzian geometry is the mathematical framework of classical general relativity and can be seen as a good model of the world we live in, the theoretical-physics community has developed instead many models based on a complex space-time picture.

(1) When one tries to make sense of quantum field theory in flat space-time, one finds it very convenient to study the Wick-rotated version of Green functions, since this leads to well-defined mathematical calculations and elliptic boundary-value problems. In the end, quantities of physical interest are evaluated by analytic continuation back to real time in Minkowski space-time (a worked example follows this list).

(2) The singularity at r = 0 of the Lorentzian Schwarzschild solution disappears on the real Riemannian section of the corresponding complexified space-time, since r = 0 no longer belongs to this manifold. Hence there are real Riemannian four-manifolds which are singularity-free, and it remains to be seen whether they are the most fundamental in modern theoretical physics.

(3) Gravitational instantons shed some light on possible boundary conditions relevant for path-integral quantum gravity and quantum cosmology.

(4) Unprimed and primed spin-spaces are not (anti-)isomorphic if Lorentzian space-time is replaced by a complex or real Riemannian manifold. Thus, for example, the Maxwell field strength is represented by two independent symmetric spinor fields, and the Weyl curvature is also represented by two independent symmetric spinor fields; since such spinor fields are no longer related by complex conjugation (i.e. the (anti-)isomorphism between the two spin-spaces), one of them may vanish without the other one having to vanish as well. This property gives rise to the so-called self-dual or anti-self-dual gauge fields, as well as to self-dual or anti-self-dual space-times.

(5) The geometric study of this special class of space-time models has made substantial progress by using twistor-theory techniques. The underlying idea is that conformally invariant concepts such as null lines and null surfaces are the basic building blocks of the world we live in, whereas space-time points should only appear as a derived concept. By using complex-manifold theory, twistor theory provides an appropriate mathematical description of this key idea.
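Returning to item (1): a standard example of the procedure – supplied here for concreteness, not taken from the original – is that under the Wick rotation t → −iτ the Feynman propagator of a free scalar field,

1/(k² − m² + iε), with k² = k0² − |k|²,

continues to the Euclidean Green function 1/(kE² + m²), where kE² = k4² + |k|² ≥ 0, so the denominator never vanishes and the associated boundary-value problem is elliptic – which is exactly the convenience invoked above.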

A possible mathematical motivation for twistors can be described as follows.  In two real dimensions, many interesting problems are best tackled by using complex-variable methods. In four real dimensions, however, the introduction of two complex coordinates is not, by itself, sufficient, since no preferred choice exists. In other words, if we define the complex variables

z1 ≡ x1 + ix2 —– (1)

z2 ≡ x3 + ix4 —– (2)

we rely too much on this particular coordinate system, and a permutation of the four real coordinates x1, x2, x3, x4 would lead to new complex variables not well related to the first choice. One is thus led to introduce three complex variables u, z1u, z2u: the first variable u tells us which complex structure to use, and the next two are the complex coordinates themselves. In geometric language, we start with the complex projective three-space P3(C), with complex homogeneous coordinates (x, y, u, v), and we remove the complex projective line given by u = v = 0. Any line in P3(C) − P1(C) is thus given by a pair of equations

x = au + bv —– (3)

y = cu + dv —– (4)

In particular, we are interested in those lines for which c = −b̄ and d = ā. The determinant ∆ of (3) and (4) is thus given by

∆ = aā + bb̄ = |a|² + |b|² —– (5)

which implies that the line given above never intersects the line x = y = 0, with the obvious exception of the case when they coincide. Moreover, no two such lines intersect, and together they fill out the whole of P3(C) − P1(C). This leads to the fibration P3(C) − P1(C) → R4, obtained by assigning to each point of P3(C) − P1(C) the four coordinates Re(a), Im(a), Re(b), Im(b). Restriction of this fibration to a plane of the form

αu + βv = 0 —– (6)

yields an isomorphism C2 ≅ R4, which depends on the ratio (α,β) ∈ P1(C). This is why the picture embodies the idea of introducing complex coordinates.


Such a fibration depends on the conformal structure of R4. Hence it can be extended to the one-point compactification S4 of R4, so that we get a fibration P3(C) → S4 in which the line u = v = 0, previously excluded, sits over the point at ∞ of S4 = R4 ∪ {∞}. This fibration is naturally obtained if we use the quaternions H to identify C4 with H2 and the four-sphere S4 with P1(H), the quaternion projective line. We should now recall that the quaternions H are obtained from the real numbers R by adjoining three symbols i, j, k such that

i² = j² = k² = −1 —– (7)

ij = −ji = k, jk = −kj = i, ki = −ik = j —– (8)

Thus, a general quaternion x ∈ H is defined by

x ≡ x1 + x2i + x3j + x4k —– (9)

where x1, x2, x3, x4 ∈ R, whereas the conjugate quaternion x̄ is given by

x̄ ≡ x1 − x2i − x3j − x4k —– (10)

Note that conjugation obeys the identities

(xy)‾ = ȳ x̄ —– (11)

xx̄ = x̄x = ∑μ=1…4 xμ² ≡ |x|² —– (12)

If a quaternion does not vanish, it has a unique inverse given by

x⁻¹ ≡ x̄/|x|² —– (13)

Interestingly, if we identify i with √−1, we may view the complex numbers C as contained in H by taking x3 = x4 = 0. Moreover, every quaternion x as in (9) has a unique decomposition

x = z1 + z2j —– (14)

where z1 ≡ x1 + x2i, z2 ≡ x3 + x4i, by virtue of (8). This property enables one to identify H with C2, and finally H2 with C4, as we said following (6).
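Since (7)–(14) are purely computational, a quick numerical sanity check may help; the following minimal Python sketch (all names invented here, not part of the original text) implements the multiplication table (8) and verifies the conjugation rule (11), the inverse (13), and the splitting (14):

from dataclasses import dataclass

@dataclass(frozen=True)
class Quaternion:
    # x = x1 + x2 i + x3 j + x4 k, as in (9)
    x1: float
    x2: float
    x3: float
    x4: float

    def __mul__(self, o):
        # Multiplication table (8): ij = -ji = k, jk = -kj = i, ki = -ik = j
        return Quaternion(
            self.x1*o.x1 - self.x2*o.x2 - self.x3*o.x3 - self.x4*o.x4,
            self.x1*o.x2 + self.x2*o.x1 + self.x3*o.x4 - self.x4*o.x3,
            self.x1*o.x3 - self.x2*o.x4 + self.x3*o.x1 + self.x4*o.x2,
            self.x1*o.x4 + self.x2*o.x3 - self.x3*o.x2 + self.x4*o.x1,
        )

    def conj(self):
        # (10): x̄ = x1 - x2 i - x3 j - x4 k
        return Quaternion(self.x1, -self.x2, -self.x3, -self.x4)

    def norm2(self):
        # (12): x x̄ = x̄ x = sum of xμ² = |x|²
        return self.x1**2 + self.x2**2 + self.x3**2 + self.x4**2

    def inv(self):
        # (13): inverse is x̄ / |x|²
        n = self.norm2()
        return Quaternion(self.x1/n, -self.x2/n, -self.x3/n, -self.x4/n)

    def split(self):
        # (14): x = z1 + z2 j, identifying H with C²
        return complex(self.x1, self.x2), complex(self.x3, self.x4)

def approx_eq(p, q, tol=1e-12):
    # Componentwise comparison, tolerant of floating-point rounding.
    return all(abs(a - b) < tol for a, b in
               zip((p.x1, p.x2, p.x3, p.x4), (q.x1, q.x2, q.x3, q.x4)))

x = Quaternion(1.0, 2.0, -1.0, 0.5)
y = Quaternion(0.3, -1.0, 2.0, 1.5)

assert approx_eq((x * y).conj(), y.conj() * x.conj())          # checks (11)
assert approx_eq(x * x.inv(), Quaternion(1.0, 0.0, 0.0, 0.0))  # checks (13)
print(x.split())  # the pair (z1, z2) of (14)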

The map σ : P3(C) → P3(C) defined by

σ(x, y, u, v) = (−ȳ, x̄, −v̄, ū) —– (15)

preserves the fibration because c = −b̄, d = ā, and induces the antipodal map on each fibre.
