Hochschild Cohomology Tethers to Closed String Algebra by way of Cyclicity.


When we have an open and closed Topological Field Theory (TFT), each element ξ of the closed algebra C defines an endomorphism ξa = ιa(ξ) ∈ Oaa of each object a of B, and η ◦ ξa = ξb ◦ η for each morphism η ∈ Oba from a to b. The family {ξa} thus constitutes a natural transformation from the identity functor 1B : B → B to itself.

For any C-linear category B we can consider the ring E of natural transformations of 1B. It is automatically commutative, for if {ξa}, {ηa} ∈ E then ξa ◦ ηa = ηa ◦ ξa by the definition of naturality. (A natural transformation from 1B to 1B is a collection of elements {ξa ∈ Oaa} such that ξa ◦ f = f ◦ ξb for each morphism f ∈ Oab from b to a. But we can take a = b and f = ηa.) If B is a Frobenius category then there is a map πab : Obb → Oaa for each pair of objects a, b, and we can define jb : Obb → E by jb(η)a = πab(η) for η ∈ Obb. In other words, jb is defined so that the Cardy condition ιa ◦ jb = πab holds. But the question arises whether we can define a trace θ : E → C to make E into a Frobenius algebra, and with the property that

θa(ιa(ξ)η) = θ(ξja(η)) —– (1)

∀ ξ ∈ E and η ∈ Oaa. This is certainly true if B is a semisimple Frobenius category with finitely many simple objects, for then E is just the ring of complex-valued functions on the set of classes of these simple objects, and we can readily define θ : E → C by θ(εa) = (θa(1a))2, where a is an irreducible object, and εa ∈ E is the characteristic function of the point a in the spectrum of E. Nevertheless, a Frobenius category need not be semisimple, and we cannot, unfortunately, take E as the closed string algebra in the general case. If, for example, B has just one object a, and Oaa is a commutative local ring of dimension greater than 1, then E = Oaa, and so ιa : E → Oaa is an isomorphism, and its adjoint map ja ought to be an isomorphism too. But that contradicts the Cardy condition, as πaa is multiplication by ∑ψiψi, which must be nilpotent.

The commutative algebra E of natural endomorphisms of the identity functor of a linear category B is called the Hochschild cohomology HH0(B) of B in degree 0. The groups HHp(B) for p > 0 vanish if B is semisimple, but in the general case they appear to be relevant to the construction of a closed string algebra from B. For any Frobenius category B there is a natural homomorphism K(B) → HH0(B) from the Grothendieck group of B, which assigns to an object a the transformation whose value on b is πba(1a) ∈ Obb. In the semisimple case this homomorphism induces an isomorphism K(B) ⊗ C → HH0(B).

For any additive category B the Hochschild cohomology is defined as the cohomology of the cochain complex in which a k-cochain F is a rule that to each composable k-tuple of morphisms

Y0 →φ1 Y1 →φ2 ··· →φk Yk —– (2)

assigns F(φ1,…,φk) ∈ Hom(Y0,Yk). The differential in the complex is defined by

(dF)(φ1,…,φk+1) = F(φ2,…,φk+1) ◦ φ1 + ∑i=1..k (−1)i F(φ1,…,φi+1 ◦ φi,…,φk+1) + (−1)k+1 φk+1 ◦ F(φ1,…,φk) —– (3)

(Notice, in particular, that a 0-cochain assigns an endomorphism FY to each object Y, and is a cocycle if the endomorphisms form a natural transformation. Similarly, a 2-cochain F gives a possible infinitesimal deformation F(φ1, φ2) of the composition law (φ1, φ2) ↦ φ2 ◦ φ1 of the category, and the deformation preserves the associativity of composition iff F is a cocycle.)
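Both parenthetical claims can be read off directly from (3). The following sketch, in the notation above, writes out the k = 0 and k = 2 cases:

```latex
% k = 0: a 0-cochain F assigns F_Y \in \mathrm{Hom}(Y,Y) to each object Y.
% For a single morphism \varphi : Y_0 \to Y_1, formula (3) reduces to
(dF)(\varphi) = F_{Y_1}\circ\varphi - \varphi\circ F_{Y_0},
% so dF = 0 precisely when the F_Y commute with every morphism,
% i.e. when \{F_Y\} is a natural transformation of the identity functor.

% k = 2: for composable \varphi_1, \varphi_2, \varphi_3, formula (3) gives
(dF)(\varphi_1,\varphi_2,\varphi_3)
  = F(\varphi_2,\varphi_3)\circ\varphi_1
  - F(\varphi_2\circ\varphi_1,\varphi_3)
  + F(\varphi_1,\varphi_3\circ\varphi_2)
  - \varphi_3\circ F(\varphi_1,\varphi_2),
% whose vanishing is exactly the first-order associativity constraint on
% the deformed composition (\varphi_1,\varphi_2)\mapsto
% \varphi_2\circ\varphi_1 + t\,F(\varphi_1,\varphi_2).
```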

In the case of a category B with a single object whose algebra of endomorphisms is O, the cohomology just described is usually called the Hochschild cohomology of the algebra O with coefficients in O regarded as an O-bimodule. This must be carefully distinguished from the Hochschild cohomology with coefficients in the dual O-bimodule O∗. But if O is a Frobenius algebra it is isomorphic as a bimodule to O∗, and the two notions of Hochschild cohomology need not be distinguished. The same applies to a Frobenius category B: because Hom(Yk, Y0) is the dual space of Hom(Y0, Yk), we can think of a k-cochain as a rule which associates to each composable k-tuple of morphisms a linear function of an element φ0 ∈ Hom(Yk, Y0). In other words, a k-cochain is a rule which to each “circle” of k + 1 morphisms

··· →φ0 Y0 →φ1 Y1 →φ2 ··· →φk Yk →φ0 ··· —– (4)

assigns a complex number F(φ0, φ1,…,φk).

If in this description we restrict ourselves to cochains which are cyclically invariant under rotating the circle of morphisms (φ0, φ1,…,φk), then we obtain a sub-cochain complex of the Hochschild complex whose cohomology is called the cyclic cohomology HC(B) of the category B. The cyclic cohomology, which evidently maps to the Hochschild cohomology, is a more natural candidate for the closed string algebra associated to B than is the Hochschild cohomology. A very natural Frobenius category on which to test these ideas is the category of holomorphic vector bundles on a compact Calabi-Yau manifold.


Acceleration in String Theory – Savdeep Sethi

If it is true that string theory cannot accommodate stable dark energy, that may be a reason to doubt string theory. But it may equally be a reason to doubt dark energy – that is, dark energy in its most popular form, called a cosmological constant. The idea originated in 1917 with Einstein and was revived in 1998 when astronomers discovered that not only is spacetime expanding – the rate of that expansion is picking up. The cosmological constant would be a form of energy in the vacuum of space that never changes and counteracts the inward pull of gravity. But it is not the only possible explanation for the accelerating universe. An alternative is “quintessence,” a field pervading spacetime that can evolve. According to Cumrun Vafa of Harvard, “Regardless of whether one can realize a stable dark energy in string theory or not, it turns out that the idea of having dark energy changing over time is actually more natural in string theory. If this is the case, then one can measure this sliding of dark energy by astrophysical observations currently taking place.”

So far all astrophysical evidence supports the cosmological constant idea, but there is some wiggle room in the measurements. Upcoming experiments such as Europe’s Euclid space telescope, NASA’s Wide-Field Infrared Survey Telescope (WFIRST) and the Simons Observatory being built in Chile’s desert will look for signs that dark energy was stronger or weaker in the past than in the present. “The interesting thing is that we’re already at a sensitivity level to begin to put pressure on [the cosmological constant theory],” says Paul Steinhardt of Princeton University. “We don’t have to wait for new technology to be in the game. We’re in the game now.” And even skeptics of Vafa’s proposal support the idea of considering alternatives to the cosmological constant. “I actually agree that [a changing dark energy field] is a simplifying method for constructing accelerated expansion,” says Eva Silverstein of Stanford University. “But I don’t think there’s any justification for making observational predictions about the dark energy at this point.”

Quintessence is not the only other option. In the wake of Vafa’s papers, Ulf Danielsson, a physicist at Uppsala University, and colleagues proposed another way of fitting dark energy into string theory. In their vision our universe is the three-dimensional surface of a bubble expanding within a higher-dimensional space. “The physics within this surface can mimic the physics of a cosmological constant,” Danielsson says. “This is a different way of realizing dark energy compared to what we’ve been thinking so far.”

Closed String Algebra as a Graded-Commutative Algebra C: Cochain Complex Differentials: Part 2, Note Quote.


The most general target category we can consider is a symmetric tensor category: clearly we need a tensor product, and the axiom HY1⊔Y2 ≅ HY1 ⊗ HY2 only makes sense if there is an involutory canonical isomorphism HY1 ⊗ HY2 ≅ HY2 ⊗ HY1 .

A very common choice in physics is the category of super vector spaces, i.e., vector spaces V with a mod 2 grading V = V0 ⊕ V1, where the canonical isomorphism V ⊗ W ≅ W ⊗ V is v ⊗ w ↦ (−1)deg v · deg w w ⊗ v. One can also consider the category of Z-graded vector spaces, with the same sign convention for the tensor product.
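The Koszul sign rule can be made concrete with a small sketch. The dict-based representation and the function name here are illustrative choices, not a standard library API:

```python
# A minimal sketch of the symmetry of the super vector space tensor category.
# A tensor in V (x) W is represented as a dict mapping
# ((label, degree), (label, degree)) pairs to coefficients, degree 0 = even,
# degree 1 = odd.

def swap(tensor):
    """Apply the canonical isomorphism V (x) W -> W (x) V with the Koszul sign:
    v (x) w  |->  (-1)^{deg v * deg w}  w (x) v."""
    out = {}
    for ((v, dv), (w, dw)), c in tensor.items():
        sign = -1 if (dv % 2) and (dw % 2) else 1  # sign only for odd (x) odd
        key = ((w, dw), (v, dv))
        out[key] = out.get(key, 0) + sign * c
    return out

t = {(("v", 1), ("w", 1)): 1, (("x", 0), ("w", 1)): 2}
swapped = swap(t)
print(swapped[(("w", 1), ("v", 1))])  # -1: two odd vectors anticommute
print(swapped[(("w", 1), ("x", 0))])  # 2: an even-odd pair commutes
```

Note that `swap` is involutory: applying it twice returns the original tensor, as the axiom in the previous paragraph requires.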

In either case the closed string algebra is a graded-commutative algebra C with a trace θ : C → C. In principle the trace should have degree zero, but in fact the commonly encountered theories have a grading anomaly which makes the trace have degree −n for some integer n.

We define topological-spinc theories, which model 2d theories with N = 2 supersymmetry, by replacing “manifolds” with “manifolds with spinc structure”.

A spinc structure on a surface with a conformal structure is a pair of holomorphic line bundles L1, L2 with an isomorphism L1 ⊗ L2 ≅ TΣ of holomorphic line bundles. A spin structure is the particular case when L1 = L2. On a 1-manifold S a spinc structure means a spinc structure on a ribbon neighbourhood of S in a surface with conformal structure. An N = 2 superconformal theory assigns a vector space HS;L1,L2 to each 1-manifold S with spinc structure, and an operator

UΣ;L1,L2 : HS0;L1,L2 → HS1;L1,L2

to each spinc-cobordism Σ from S0 to S1. To explain the rest of the structure we need to define the N = 2 Lie superalgebra associated to a spinc 1-manifold (S;L1,L2). Let G = Aut(L1) denote the group of bundle isomorphisms L1 → L1 which cover diffeomorphisms of S. (We can identify this group with Aut(L2).) It has a homomorphism onto the group Diff+(S) of orientation-preserving diffeomorphisms of S, and the kernel is the group of fibrewise automorphisms of L1, which can be identified with the group of smooth maps from S to C×. The Lie algebra Lie(G) is therefore an extension of the Lie algebra Vect(S) of Diff+(S) by the commutative Lie algebra Ω0(S) of smooth real-valued functions on S. Let Λ0S;L1,L2 denote the complex Lie algebra obtained from Lie(G) by complexifying Vect(S). This is the even part of a Lie superalgebra whose odd part is Λ1S;L1,L2 = Γ(L1) ⊕ Γ(L2). The bracket Λ1 ⊗ Λ1 → Λ0 is completely determined by the property that elements of Γ(L1) and of Γ(L2) anticommute among themselves, while the composite

Γ(L1) ⊗ Γ(L2) → Λ0 → VectC(S)

takes (λ1, λ2) to λ1λ2 ∈ Γ(TS).

In an N = 2 theory we require the superalgebra Λ(S;L1,L2) to act on the vector space HS;L1,L2, compatibly with the action of the group G, and with a similar intertwining property with the cobordism operators to that of the N = 1 case. For an N = 2 theory the state space always has an action of the circle group coming from its embedding in G as the group of fibrewise multiplications on L1 and L2. Equivalently, the state space is always Z-graded.

An N = 2 theory always gives rise to two ordinary conformal field theories by equipping a surface Σ with the spinc structures (C,TΣ) and (TΣ,C). These are called the “A-model” and the “B-model” associated to the N = 2 theory. In each case the state spaces are cochain complexes in which the differential is the action of the constant section of the trivial component of the spinc-structure.

Superconformal Spin/Field Theories: When Vector Spaces have same Dimensions: Part 1, Note Quote.


A spin structure on a surface means a double covering of its space of non-zero tangent vectors which is non-trivial on each individual tangent space. On an oriented 1-dimensional manifold S it means a double covering of the space of positively-oriented tangent vectors. For purposes of gluing, this is the same thing as a spin structure on a ribbon neighbourhood of S in an orientable surface. Each spin structure has an automorphism which interchanges its sheets, and this will induce an involution T on any vector space which is naturally associated to a 1-manifold with spin structure, giving the vector space a mod 2 grading by its ±1-eigenspaces. A topological-spin theory is a functor from the cobordism category of manifolds with spin structures to the category of super vector spaces with its graded tensor structure. The functor is required to take disjoint unions to super tensor products, and additionally it is required that the automorphism of the spin structure of a 1-manifold induces the grading automorphism T = (−1)degree of the super vector space. This choice of the supersymmetry of the tensor product rather than the naive symmetry which ignores the grading is forced by the geometry of spin structures if the possibility of a semisimple category of boundary conditions is to be allowed. There are two non-isomorphic circles with spin structure: S1ns, with the Möbius or “Neveu-Schwarz” structure, and S1r, with the trivial or “Ramond” structure. A topological-spin theory gives us state spaces Cns and Cr, corresponding respectively to S1ns and S1r.

There are four cobordisms with spin structures which cover the standard annulus. The double covering can be identified with its incoming end times the interval [0,1], but then one has a binary choice when one identifies the outgoing end of the double covering over the annulus with the chosen structure on the outgoing boundary circle. In other words, alongside the cylinders A+ns,r = S1ns,r × [0,1] which induce the identity maps of Cns,r there are also cylinders A−ns,r which connect S1ns,r to itself while interchanging the sheets. These cylinders A−ns,r induce the grading automorphism on the state spaces. But because A−ns ≅ A+ns by an isomorphism which is the identity on the boundary circles – the Dehn twist which “rotates one end of the cylinder by 2π” – the grading on Cns must be purely even. The space Cr can have both even and odd components. The situation is a little more complicated for “U-shaped” cobordisms, i.e., cylinders with two incoming or two outgoing boundary circles. If the boundaries are S1ns there is only one possibility, but if the boundaries are S1r there are two, corresponding to A±r. The complication is that there seems no special reason to prefer either of the spin structures as “positive”. We shall simply choose one – let us call it P – with incoming boundary S1r ⊔ S1r, and use P to define a pairing Cr ⊗ Cr → C. We then choose a preferred cobordism Q in the other direction so that when we sew its right-hand outgoing S1r to the left-hand incoming one of P the resulting S-bend is the “trivial” cylinder A+r. We shall need to know, however, that the closed torus formed by the composition P ◦ Q has an even spin structure. The Frobenius structure θ on C restricts to 0 on Cr.

There is a unique spin structure on the pair-of-pants cobordism in the figure below, which restricts to S1ns on each boundary circle, and it makes Cns into a commutative Frobenius algebra in the usual way.


If one incoming circle is S1ns and the other is S1r then the outgoing circle is S1r, and there are two possible spin structures, but the one obtained by removing a disc from the cylinder A+r is preferred: it makes Cr into a graded module over Cns. The chosen U-shaped cobordism P, with two incoming circles S1r, can be punctured to give us a pair of pants with an outgoing S1ns, and it induces a graded bilinear map Cr × Cr → Cns which, composing with the trace on Cns, gives a non-degenerate inner product on Cr. At this point the choice of symmetry of the tensor product becomes important. Let us consider the diffeomorphism of the pair of pants which shows us in the usual case that the Frobenius algebra is commutative. When we lift it to the spin structure, this diffeomorphism induces the identity on one incoming circle but reverses the sheets over the other incoming circle, and this proves that the cobordism must have the same output when we change the input from S(φ1 ⊗ φ2) to T(φ1) ⊗ φ2, where T is the grading involution and S : Cr ⊗ Cr → Cr ⊗ Cr is the symmetry of the tensor category. If we take S to be the symmetry of the tensor category of vector spaces which ignores the grading, this shows that the product on the graded vector space Cr is graded-symmetric with the usual sign; but if S is the graded symmetry then we see that the product on Cr is symmetric in the naive sense.

There is an analogue for spin theories of the theorem which tells us that a two-dimensional topological field theory “is” a commutative Frobenius algebra. It asserts that a spin-topological theory “is” a Frobenius algebra C = (Cns ⊕ Cr, θC) with the following property. Let {φk} be a basis for Cns, with dual basis {φk} such that θC(φkφm) = δkm, and let βk and βk be similar dual bases for Cr. Then the Euler elements χns := ∑ φkφk and χr := ∑ βkβk are independent of the choices of bases, and the condition we need on the algebra C is that χns = χr. In particular, this condition implies that the vector spaces Cns and Cr have the same dimension. In fact, the Euler elements can be obtained from cutting a hole out of the torus. There are actually four spin structures on the torus. The output state is necessarily in Cns. The Euler elements for the three even spin structures are equal to χe = χns = χr. The Euler element χo corresponding to the odd spin structure, on the other hand, is given by χo = ∑ (−1)deg βk βkβk.

A spin theory is very similar to a Z/2-equivariant theory, which is the structure obtained when the surfaces are equipped with principal Z/2-bundles (i.e., double coverings) rather than spin structures.

It seems reasonable to call a spin theory semisimple if the algebra Cns is semisimple, i.e., is the algebra of functions on a finite set X. Then Cr is the space of sections of a vector bundle E on X, and it follows from the condition χns = χr that the fibre at each point must have dimension 1. Thus the whole structure is determined by the Frobenius algebra Cns together with a binary choice at each point x ∈ X of the grading of the fibre Ex of the line bundle E at x.

We can now see that if we had not used the graded symmetry in defining the tensor category we should have forced the grading of Cr to be purely even. For on the odd part the inner product would have had to be skew, and that is impossible on a 1-dimensional space. And if both Cns and Cr are purely even then the theory is in fact completely independent of the spin structures on the surfaces.

A concrete example of a two-dimensional topological-spin theory is given by C = C ⊕ Cη where η2 = 1 and η is odd. The Euler elements are χe = 1 and χo = −1. It follows that the partition function of a closed surface with spin structure is ±1 according as the spin structure is even or odd.
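The Euler elements of this example can be checked in one line each. Normalizing the trace by θ(1) = 1 is an assumption here; any nonzero value works up to rescaling:

```latex
% Basis: C_{ns} = \mathbb{C}\cdot 1 (even), C_r = \mathbb{C}\cdot\eta (odd),
% with \eta^2 = 1 and trace \theta(1) = 1, \theta(\eta) = 0
% (\theta vanishes on C_r, as noted above).
% Dual bases: \theta(1\cdot 1) = 1 and \theta(\eta\cdot\eta) = \theta(1) = 1,
% so each basis vector is its own dual.
\chi_{ns} = 1\cdot 1 = 1, \qquad
\chi_{r}  = \eta\cdot\eta = 1 = \chi_{ns}, \qquad
\chi_{o}  = (-1)^{\deg\eta}\,\eta\cdot\eta = -1.
% This reproduces \chi_e = 1, \chi_o = -1, hence the partition function
% \pm 1 according to the parity of the spin structure.
```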

The most common theories defined on surfaces with spin structure are not topological: they are 2-dimensional conformal field theories with N = 1 supersymmetry. It should be noticed that if the theory is not topological then one does not expect the grading on Cns to be purely even: states can change sign on rotation by 2π. If a surface Σ has a conformal structure then a double covering of the non-zero tangent vectors is the complement of the zero-section in a two-dimensional real vector bundle L on Σ which is called the spin bundle. The covering map then extends to a symmetric pairing of vector bundles L ⊗ L → TΣ which, if we regard L and TΣ as complex line bundles in the natural way, induces an isomorphism L ⊗C L ≅ TΣ. An N = 1 superconformal field theory is a conformal-spin theory which assigns a vector space HS,L to the 1-manifold S with the spin bundle L, and is equipped with an additional map

Γ(S,L) ⊗ HS,L → HS,L

(σ,ψ) ↦ Gσψ,

where Γ(S,L) is the space of smooth sections of L, such that Gσ is real-linear in the section σ and satisfies Gσ2 = Dσ2, where the vector field σ2 is related to σ ⊗ σ by the isomorphism L ⊗C L ≅ TΣ and Dσ2 is its Virasoro action. Furthermore, when we have a cobordism (Σ,L) from (S0,L0) to (S1,L1) and a holomorphic section σ of L which restricts to σi on Si we have the intertwining property

Gσ1 ◦ UΣ,L = UΣ,L ◦ Gσ0

….

Conjuncted: Long-Term Capital Management. Note Quote.


From Lowenstein’s:

The real culprit in 1994 was leverage. If you aren’t in debt, you can’t go broke and can’t be made to sell, in which case “liquidity” is irrelevant. But a leveraged firm may be forced to sell, lest fast-accumulating losses put it out of business. Leverage always gives rise to this same brutal dynamic, and its dangers cannot be stressed too often…

One of LTCM’s first trades involved the thirty-year Treasury bond, which is issued by the US Government to finance the federal budget. Some $170 billion of them trade every day, and they are considered the least risky investments in the world. But a funny thing happens to thirty-year Treasurys six months or so after they are issued: they are kept in safes and drawers for the long term. With fewer left in circulation, the bonds become harder to trade. Meanwhile, the Treasury issues a new thirty-year bond, which has its day in the sun. On Wall Street, the older bond, which has about 29-and-a-half years left to mature, is known as off the run, while the shiny new one is on the run. Being less liquid, the older one is considered less desirable, and begins to trade at a slight discount. And, as arbitrageurs would say, a spread opens.

LTCM with its trademark precision calculated that owning one bond and shorting another was one twenty-fifth as risky as owning either outright. Thus, it reckoned, it could prudently leverage this long/short arbitrage twenty-five times. This multiplied its potential for profit, but also its potential for loss. In any case, borrow it did. It paid for the cheaper off-the-run bonds with money it had borrowed from a Wall Street bank, or from several banks. And the other bonds, the ones it sold short, it obtained through a loan as well. Actually, the transaction was more involved, though it was among the simplest in LTCM’s repertoire. No sooner did LTCM buy the off-the-run bonds than it loaned them to some other Wall Street firm, which then wired cash to LTCM as collateral. Then LTCM turned around and used this cash as collateral on the bonds it borrowed. On Wall Street, such short-term, collateralized loans are known as “repo financing”. The beauty of the trade was that LTCM’s cash transactions were in perfect balance. The money that LTCM spent going long matched the money that it collected going short. The collateral it paid equalled the collateral it collected. In other words, LTCM pulled off the entire transaction without using a single dime of its own cash. Maintaining the position wasn’t completely cost-free, however. Though a simple trade, it actually entailed four different payment streams. LTCM collected interest on the collateral it paid out and paid interest at a slightly higher rate on the collateral it took in. It made some of this deficit back because of the difference in the initial margin, or the slightly higher coupon on the bond it owned as compared to the bond it shorted. Overall, the trade cost LTCM a few basis points each month.
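The arithmetic of the trade can be sketched numerically. Every figure below (spread levels, carry cost, equity, and the simplified form of the P&L) is an illustrative assumption, not LTCM's actual book:

```python
# Illustrative sketch of the on-the-run / off-the-run spread trade described
# above: long the cheap off-the-run bond, short the on-the-run bond, both
# legs financed via repo, so the position costs carry rather than cash.

def spread_pnl(notional, spread_open_bps, spread_close_bps,
               carry_bps_per_month=1, months=6):
    """P&L as the yield spread converges.

    Simplification: a 1 bp spread move is taken to be worth 0.01% of
    notional (duration is folded into the bp figures), and the repo carry
    deficit is a flat few-basis-points-per-month cost, as in the text.
    """
    convergence = (spread_open_bps - spread_close_bps) / 10_000 * notional
    carry_cost = carry_bps_per_month / 10_000 * notional * months
    return convergence - carry_cost

equity = 40_000_000            # capital notionally committed to the trade
leverage = 25                  # the 25x leverage mentioned above
notional = equity * leverage   # 1 billion of bonds on each leg
pnl = spread_pnl(notional, spread_open_bps=12, spread_close_bps=4)
print(pnl, pnl / equity)       # profit, and return on committed equity
```

The point of the sketch is that the convergence profit per dollar of notional is tiny, so only the 25x leverage turns it into a meaningful return on equity, which is also why losses scale the same way.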

Algorithmic Trading. Thought of the Day 151.0


One of the first algorithmic trading strategies consisted of using a volume-weighted average price, as the price at which orders would be executed. The VWAP introduced by Berkowitz et al. can be calculated as the dollar amount traded for every transaction (price times shares traded) divided by the total shares traded for a given period. If the price of a buy order is lower than the VWAP, the trade is executed; if the price is higher, then the trade is not executed. Participants wishing to lower the market impact of their trades stress the importance of market volume. Market volume impact can be measured through comparing the execution price of an order to a benchmark. The VWAP benchmark is the sum of every transaction price paid, weighted by its volume. VWAP strategies allow the order to dilute the impact of orders through the day. Most institutional trading occurs in filling orders that exceed the daily volume. When large numbers of shares must be traded, liquidity concerns can affect price goals. For this reason, some firms offer multiday VWAP strategies to respond to customers’ requests. In order to further reduce the market impact of large orders, customers can specify their own volume participation by limiting the volume of their orders to coincide with low expected volume days. Each order is sliced into several days’ orders and then sent to a VWAP engine for the corresponding days. VWAP strategies fall into three categories: sell order to a broker-dealer who guarantees VWAP; cross the order at a future date at VWAP; or trade the order with the goal of achieving a price of VWAP or better.
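The VWAP benchmark and the execution rule described above can be sketched in a few lines; the sample trades and the strict-inequality reading of the rule are illustrative assumptions:

```python
# A minimal sketch of the VWAP benchmark: dollar volume divided by share
# volume over a period, with a buy executing only below the benchmark.

def vwap(trades):
    """trades: list of (price, shares). Returns total dollars / total shares."""
    dollars = sum(p * q for p, q in trades)
    shares = sum(q for _, q in trades)
    return dollars / shares

def execute_buy(order_price, trades):
    """Per the rule above: a buy executes if its price is lower than VWAP."""
    return order_price < vwap(trades)

session = [(100.0, 500), (101.0, 300), (99.5, 200)]
print(round(vwap(session), 4))       # 100.2
print(execute_buy(100.0, session))   # True: below the benchmark
print(execute_buy(100.5, session))   # False: above the benchmark
```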

The second algorithmic trading strategy is the time-weighted average price (TWAP). TWAP allows traders to slice a trade over a certain period of time, thus an order can be cut into several equal parts and be traded throughout the time period specified by the order. TWAP is used for orders which are not dependent on volume. TWAP can overcome obstacles such as fulfilling orders in illiquid stocks with unpredictable volume. Conversely, high-volume traders can also use TWAP to execute their orders over a specific time by slicing the order into several parts so that the impact of the execution does not significantly distort the market.
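TWAP slicing as described above can be sketched as follows; the equal-split scheduling and the function name are illustrative choices:

```python
# A sketch of TWAP: a parent order is cut into equal child orders spread
# evenly across the specified time window, independent of market volume.

def twap_slices(total_shares, start_min, end_min, n_slices):
    """Return (minute, shares) child orders, equal-sized and evenly spaced.

    Any remainder after integer division is spread over the first slices,
    so the children always sum exactly to the parent quantity.
    """
    base, rem = divmod(total_shares, n_slices)
    step = (end_min - start_min) / n_slices
    return [(start_min + i * step, base + (1 if i < rem else 0))
            for i in range(n_slices)]

# 10,000 shares over a 60-minute window in 6 slices.
for minute, qty in twap_slices(10_000, 0, 60, 6):
    print(minute, qty)
```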

Yet another type of algorithmic trading strategy is the implementation shortfall, or the arrival price. The implementation shortfall is defined as the difference in return between a theoretical portfolio and an implemented portfolio. When deciding to buy or sell stocks during portfolio construction, a portfolio manager looks at the prevailing prices (decision prices). However, several factors can cause execution prices to be different from decision prices. This results in returns that differ from the portfolio manager’s expectations. Implementation shortfall is measured as the difference between the dollar return of a paper portfolio (paper return) where all shares are assumed to transact at the prevailing market prices at the time of the investment decision and the actual dollar return of the portfolio (real portfolio return). The main advantage of the implementation shortfall-based algorithmic system is to manage transactions costs (most notably market impact and timing risk) over the specified trading horizon while adapting to changing market conditions and prices.
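The paper-return-minus-real-return measure just defined can be sketched directly; the fill data and the treatment of unexecuted shares are illustrative assumptions:

```python
# A sketch of implementation shortfall for a buy order: paper return assumes
# the whole order fills at the decision price; real return uses actual fills.

def implementation_shortfall(decision_price, final_price, fills, order_qty):
    """fills: list of (exec_price, shares) actually transacted.

    Shares left unexecuted earn nothing in the real portfolio, so missing
    a rally shows up as opportunity cost alongside market impact and delay.
    """
    paper = (final_price - decision_price) * order_qty
    real = sum((final_price - p) * q for p, q in fills)
    return paper - real  # positive = cost relative to the paper portfolio

# Decision at 100.00; 800 of 1,000 shares filled at worse prices; stock
# finishes the horizon at 101.00.
fills = [(100.10, 400), (100.25, 400)]
print(round(implementation_shortfall(100.00, 101.00, fills, 1_000), 2))  # 340.0
```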

The participation algorithm or volume participation algorithm is used to trade up to the order quantity using a rate of execution that is in proportion to the actual volume trading in the market. It is ideal for trading large orders in liquid instruments where controlling market impact is a priority. The participation algorithm is similar to the VWAP except that a trader can set the volume to a constant percentage of total volume of a given order. This algorithm can represent a method of minimizing supply and demand imbalances (Kendall Kim – Electronic and Algorithmic Trading Technology).
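The constant-percentage rule described above can be sketched as follows; the 10% rate and the interval volumes are illustrative assumptions:

```python
# A sketch of the volume participation algorithm: each interval, trade a
# fixed fraction of observed market volume until the parent order is filled.

def participate(order_qty, market_volumes, rate=0.10):
    """Return child-order sizes, capped by the remaining parent quantity."""
    remaining = order_qty
    plan = []
    for vol in market_volumes:
        child = min(int(vol * rate), remaining)
        plan.append(child)
        remaining -= child
        if remaining == 0:
            break
    return plan

# A 5,000-share parent order at 10% participation.
print(participate(5_000, [20_000, 30_000, 25_000, 40_000]))  # [2000, 3000]
```

Because the child sizes track actual volume, the order's footprint stays proportional to what the market is already trading, which is the stated point of the strategy.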

Smart order routing (SOR) algorithms allow a single order to exist simultaneously in multiple markets. They are critical for algorithmic execution models. It is highly desirable for algorithmic systems to have the ability to connect different markets in a manner that permits trades to flow quickly and efficiently from market to market. Smart routing algorithms provide full integration of information among all the participants in the different markets where the trades are routed. SOR algorithms allow traders to place large blocks of shares in the order book without fear of sending out a signal to other market participants. The algorithm matches limit orders and executes them at the midpoint of the bid-ask price quoted in different exchanges.
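The midpoint-matching step attributed to SOR above can be sketched as follows; the per-venue quotes are illustrative, and real routers handle far more state (queues, fees, latencies) than this:

```python
# A sketch of crossing at the midpoint of the consolidated best bid/ask
# across several venues, as described for SOR matching of hidden blocks.

def consolidated_midpoint(quotes):
    """quotes: list of (bid, ask) per venue.

    The execution price is the midpoint of the best (highest) bid and the
    best (lowest) ask among all connected markets.
    """
    best_bid = max(bid for bid, _ in quotes)
    best_ask = min(ask for _, ask in quotes)
    return (best_bid + best_ask) / 2

venues = [(99.98, 100.04), (99.99, 100.03), (99.97, 100.05)]
print(round(consolidated_midpoint(venues), 2))  # 100.01
```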

Handbook of Trading Strategies for Navigating and Profiting From Currency, Bond, Stock Markets

“The Scam” – Debashis Basu and Sucheta Dalal – Was it the Beginning of the End?


“India is a turnaround scrip in the world market.”

“Either you kill, or you get killed” 

— Harshad Mehta

“Though normally quite reasonable and courteous, there was one breed of brokers he truly detested. To him and other kids in the money markets, brokers were meant to be treated like loyal dogs.”

— Broker

The first two claims by Harshad Mehta could be said to form the central theme of the book, The Scam, while the third statement is testimony to how compartmentalization within the camaraderie proved efficacious to the broker-trader nexus getting nixed, albeit briefly. The authors Debashis Basu and Sucheta Dalal have put rigorous investigation into unraveling the complexity of what in popular culture has come to be known as the first big securities scam in India in the early 90s. That was only the beginning, for securities scams, banking frauds and financial crimes have since become a recurrent feature, thanks to increasing mathematization and financialization of market practices, stark mismatches on the regulatory scales of the Reserve Bank of India (RBI), public sector banks and foreign banks, and stock-market-oriented economization. The last in particular has shattered the myth that stock markets are speculative and have no truck with the banking system, by capitalizing on and furthering the only link between the two: banks providing loans against shares, subject to high margins.

The scam which took the country by storm in 1992 had a central figure in Harshad Mehta, though the book does a most amazing archaeology, unearthing other equally, if not more, important figures that formed a collusive network of deceit and bilk. This almost spider-like weave – nowhere near in scale to the network that emanated from London and spread out from Tokyo, billed as the largest financial scandal for its manipulation of LIBOR, thanks to Thomas Hayes, by the turn of the century – nevertheless magnified the crevices existing within the banking system, bridging it with the once-secretive and closed bond market. So, what exactly was the scam, and why did it rock India’s economic boat, especially when the country was opening up to liberal policies and amalgamating itself with globalization?

As Basu and Dalal say, simply put, the first traces of the scam were observed when the State Bank of India (SBI), Main Branch, Mumbai discovered that it was short by Rs. 574 crore in securities. In other words, the antiquated manually written books kept at the Office of Public Debt at the RBI showed Rs. 1170.95 crore of an 11.5% central government loan of 2010 maturity standing against SBI’s name on 29th February 1992, against a figure of Rs. 1744.95 crore in SBI’s books – a clear gap of Rs. 574 crore, with the discrepancy apparently held in the Securities General Ledger (SGL). Of the Rs. 574 crore missing, Rs. 500 crore had been transferred to Harshad Mehta’s account. Now, an SGL contains the details to support the general ledger control account. For instance, the subsidiary ledger for accounts receivable contains all the information on each of the credit sales to customers, each customer’s remittance, returns of merchandise, discounts and so on. SGLs were a prime culprit when it came to conceiving the illegalities that followed: they were issued as substitutes for actual securities by a cleverly worked-out machination. Bank Receipts (BRs) were invoked as replacements for SGLs, which on the one hand confirmed that the bank had sold the securities at the rates mentioned therein, while on the other prevented the SGLs from bouncing. BRs were a shrewd plot line whereby a bank could put a deal through even if its Public Debt Office (PDO) account was in the negative. This circumvention was clever precisely because, had the transactions taken place through SGLs, they would simply have bounced; BRs acted as a convenient run-around, and also because BRs were unsupported by securities. To derive the most from BRs, the Ready Forward Deal (RFD) was introduced, which prevented the securities from actually moving back and forth. Sucheta Dalal had already exposed the use of this instrument by Harshad Mehta way back in 1992 while writing for the Times of India.
The RFD was essentially a secured short-term (generally 15-day) loan from one bank to another, with the banks lending against government securities. The borrowing bank sells the securities to the lending bank and buys them back at the end of the loan period, typically at a slightly higher price. Harshad Mehta roped in two relatively obscure little banks, the Bank of Karad and the Mumbai Mercantile Cooperative Bank (MMCB), to issue fake BRs, that is, BRs not backed by government securities. It was these fake BRs that were eventually exchanged with other banks, which paid Mehta unaware that they were in fact dealing in fakes.
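The economics of a (legitimate) RFD are easy to sketch: the difference between the sale price and the buy-back price is the lender's interest. A minimal illustration, with all figures hypothetical rather than from the book:

```python
# Sketch of a Ready Forward Deal (RFD): a secured short-term loan in which
# the borrowing bank "sells" government securities and commits to buy them
# back at a slightly higher price. The price difference is the lender's
# interest. All figures below are hypothetical.

def rfd_implied_annual_rate(sale_price: float, buyback_price: float, days: int) -> float:
    """Annualized interest rate implied by the two legs of an RFD."""
    interest = buyback_price - sale_price
    return (interest / sale_price) * (365 / days)

# A 15-day deal: securities sold for Rs. 100 crore, bought back for
# Rs. 100.50 crore -- Rs. 0.50 crore is the lender's return.
rate = rfd_implied_annual_rate(100.0, 100.5, 15)
print(f"Implied annual rate: {rate:.2%}")  # -> Implied annual rate: 12.17%
```

The fraud did not change this arithmetic; it removed its foundation, since the "securities" leg was backed by nothing but a fake BR.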

By a cunning turn of reason, and not one to rest until such payments reflected on the stock market, Harshad Mehta began artificially enhancing share prices by going on a buying spree. To maximize profits on these investments, the broker, by now the darling of the stock market and referred to as the Big Bull, decided to sell off the shares and, in the process, retire the BRs. Little did anyone know then that the day the shares were sold, the market would crash; and crash it did. Mehta's maneuvers lent a feel-good factor to the stock market until the scam erupted, and when it did, many banks were swindled to a massive loss of Rs. 4000 crore, for they were left holding BRs with no value attached to them.

The one that took the most stinging loss was the State Bank of India, and it was payback time. The mechanism by which the money was paid back cannot be understood without getting to the root of an RBI subsidiary, the National Housing Bank (NHB). When the State Bank of India directed Harshad Mehta to either produce the securities or return the money, Mehta approached the NHB for help, for the relationship between the broker and the RBI's subsidiary had warmed over the years, a discovery that appalled officials at the Reserve Bank. This only attests to the broker-banker collusion, the likes of which only got murkier as the scam unravelled. The NHB did come to Harshad Mehta's rescue by issuing a cheque in favor of ANZ Grindlays Bank. The deal again proved one-sided, as the NHB received no securities from Harshad Mehta in return; the cheque eventually found its way into Mehta's ANZ account, which helped clear the dues owed to SBI. The most pertinent question here is why the RBI's subsidiary acted so collusively. This only makes sense once one is clear that Harshad Mehta had delivered considerable profits to the NHB by way of ready forward deals.
If this was the flow chart of payment routes to SBI, the authors of The Scam point out how SBI once again debited Harshad Mehta's account, which had by then exhausted its balance. This was done by releasing a massive overdraft of Rs. 707 crore, an overdraft being an extension of credit by a lending institution when an account is exhausted. Then the incredible happened: the overdraft was released against no security at all. The deal was acquiesced to because of a widespread belief among SBI's directors that most of what was paid to the NHB would find its way back to the SBI subsidiaries from which SBI had got its money in the first place.

The Scam is neatly divided into two books comprising 23 chapters, the first part delineating the rise of Harshad Mehta as a broker superstar, the Big Bull. He is not the only character pilloried: the nexus meshed all the way from Mumbai (then Bombay) to Kolkata (then Calcutta), Bengaluru (then Bangalore), Delhi and Chennai (then Madras), with a host of jobbers, market makers, brokers and traders embezzling funds off the banks, in collusion with the banks, overheating the stock market in a country that was only officially trying to jettison the tag of Nehruvian socialism. Nor was it merely a matter of individuals: the range of complicitous relations also took in governmental and private institutions and firms. Whether it was Standard Chartered or Citibank, monetizing assets not even in their possession; forward-selling a transaction to make it appear cash-neutral; or lending money to the corporate sector as clean credit, with banks taking risks on borrowers they had not vetted because the lending did not fall under mainline corporate lending, the rules and regulations of the RBI were flouted and breached with increasing alacrity and in clear violation of guidelines. Credit is definitely due to S. Venkitaramanan, the Governor of the RBI, who in his two-year tenure at the helm of affairs exposed the scam, only to be meted out disturbing treatment at the hands of some members of the Joint Parliamentary Committee. Harshad Mehta had grown increasingly confident of his means and mechanisms for siphoning off money using inter-bank transactions; when he was finally apprehended, he was charged with 72 criminal offenses, and more than 600 civil suits were filed against him, leading to his arrest by the CBI in November 1992. Banished from the stock market, he did make a comeback as a market guru before the Bombay High Court convicted him and sent him to prison.
But, scamster that he was projected to be, he would not rest without creating chaos and commotion, and one such bomb he dropped was the claim to have paid the Congress Prime Minister P. V. Narasimha Rao a hefty sum to knock him off the scandal. Harshad Mehta died of a cardiac arrest while in prison in Thane, but his legacy continued within the folds he had inspired, spread far and wide.


Ketan Parekh forms the substantial character of Book 2 of The Scam. Often referred to as Midas for his ability to turn whatever he touched on Dalal Street into gold by financial trickery, he decided to take Harshad Mehta's unfinished project to fruition. Known for his timid demeanor, Parekh, from a broker's family and trained as a Chartered Accountant, was able to devise a trading ring that helped him rig stock prices with his vested interests at the forefront. He was a bull on a wild run, whose match was found in a bear cartel that hammered the prices of the K-10 stocks, precipitating a payment crisis. The K-10 stocks were so named colloquially because they were driven in sets of ten, and their promotion was done by creating bellwethers and seeking support from Foreign Institutional Investors (FIIs). India was already seven years into the LPG (liberalization, privatization, globalization) regime, but still sailing the rough seas of economic transition. This was not the most conducive time to appropriate profits, but prodigy that he was, his ingenuity lay in jacking up share prices and translating them into much-needed liquidity. This way he was able to keep FIIs and promoters satisfied and multiply money at his own end. In financial jargon this goes by the name of circular trading, but his brilliance was epitomized by his timing in dumping devalued shares on institutions like the Life Insurance Corporation of India (LIC) and the Unit Trust of India (UTI). What differentiated him from Harshad Mehta was his staying off public money and refraining from expropriating public institutions. Such was his prowess that share markets would catch a cold when he sneezed; his modus operandi was to invest in small companies through private placements, manipulate the markets to rig the shares, and then sell them to devalue the same.
But lady luck would not continue to shine on him: at the turn of the century Parekh, who had invested heavily in information stocks, was hit hard by the collapse of the dotcom bubble. Add to that, when the NDA government headed by Atal Bihari Vajpayee presented the Union Budget in 2001, the Bombay Stock Exchange (BSE) Sensex crashed, prompting the Government to dig deep into the market reaction. The investigation by SEBI (the Securities and Exchange Board of India) revealed the rogue nature of Ketan Parekh as a trader, who was charged with shaking the very foundations of Indian financial markets. Parekh was banned from trading until 2017, but SEBI is not comfortable with the fact that his proteges are carrying forward the master's legacy, though such allegations are yet to be put to rest.

The legacies of Harshad Mehta and Ketan Parekh continue to haunt the country's financial markets to date, and were only signatures of what was to follow in the form of the banking crisis now plaguing public sector banks. As Basu and Dalal write, “in money markets the first signs of rot began to appear in the mid-1980s. After more than a decade of so-called social banking, banks found themselves groaning under a load of investments they were forced to make to maintain the Statutory Liquidity Ratio. The investments were in low-interest bearing loans issued by the central and state governments that financed the government’s ever-increasing appetite for cash. Banks intended to hold these low-interest government bonds till maturity. But each time a new set of loans came with a slightly higher interest rate called the coupon rate, the market price of older securities fell, and thereafter banks began to book losses, which eroded their profitability.” The situation is a lot grimmer today. The RBI’s autonomy has come under increasing threat, and the question that requires the most incision is how to resolve what one Citibank executive put thus: “RBI guidelines are just that, guidelines. Not the law of the land.”

The Scam, as much a personal account of deceit faced during tumultuous times, is a brisk read, with some minor hurdles in the form of technicalities that intersperse the volume and tend to disrupt the plot lines. Such technical details belong to the realm of share markets, and unless negotiated with either prior knowledge or a little hyperlinking they tend to slow the reader down; but in no way should this count against the book. As a matter of fact, the third edition, in its fifth reprint, is testimony to the fact that the book’s market is alive and ever-growing. One only wonders at the end of it where all such journalists have disappeared to in this country. Debashis Basu and Sucheta Dalal, partners in real life, are indeed partners in crime, exposing financial crimes of such magnitude for the multitude in this country who would otherwise be bereft of such understanding.