Conjuncted: Long-Term Capital Management. Note Quote.


From Lowenstein's:

The real culprit in 1994 was leverage. If you aren't in debt, you can't go broke and can't be made to sell, in which case "liquidity" is irrelevant. But a leveraged firm may be forced to sell, lest fast-accumulating losses put it out of business. Leverage always gives rise to this same brutal dynamic, and its dangers cannot be stressed too often…

One of LTCM's first trades involved the thirty-year Treasury bond, which is issued by the US Government to finance the federal budget. Some $170 billion of them trade every day, and they are considered the least risky investments in the world. But a funny thing happens to thirty-year Treasurys six months or so after they are issued: they are put away in safes and drawers for the long term. With fewer left in circulation, the bonds become harder to trade. Meanwhile, the Treasury issues a new thirty-year bond, which has its day in the sun. On Wall Street, the older bond, which has about twenty-nine and a half years left to mature, is known as off the run, while the shiny new one is on the run. Being less liquid, the older one is considered less desirable, and begins to trade at a slight discount. And as arbitrageurs would say, a spread opens.

LTCM, with its trademark precision, calculated that owning one bond and shorting the other was one twenty-fifth as risky as owning either outright. Thus, it reckoned, it could prudently leverage this long/short arbitrage twenty-five times. This multiplied its potential for profit, but also its potential for loss. In any case, borrow it did. It paid for the cheaper off-the-run bonds with money it had borrowed from a Wall Street bank, or from several banks. And the other bonds, the ones it sold short, it obtained through a loan as well. Actually, the transaction was more involved, though it was among the simplest in LTCM's repertoire. No sooner did LTCM buy the off-the-run bonds than it loaned them to some other Wall Street firm, which then wired cash to LTCM as collateral. Then LTCM turned around and used this cash as collateral on the bonds it borrowed. On Wall Street, such short-term, collateralized loans are known as "repo financing". The beauty of the trade was that LTCM's cash transactions were in perfect balance. The money that LTCM spent going long matched the money that it collected going short. The collateral it paid equalled the collateral it collected. In other words, LTCM pulled off the entire transaction without using a single dime of its own cash. Maintaining the position wasn't completely cost free, however. Though a simple trade, it actually entailed four different payment streams. LTCM collected interest on the collateral it paid out and paid interest, at a slightly higher rate, on the collateral it took in. It made back some of this deficit because of the difference in the initial margin, or the slightly higher coupon on the bond it owned as compared to the bond it shorted. Overall, the position cost LTCM a few basis points each month.
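To make the cash-flow balance concrete, here is a minimal numeric sketch in Python of the repo-financed long/short position described above. Every notional, repo rate and coupon figure below is invented for illustration; none of them are LTCM's actual numbers.

```python
# Hedged sketch of the repo-financed long/short Treasury trade described
# above. All figures are made up for illustration.

notional = 1_000_000_000          # $1bn per leg (assumed)

cash_out_long  = notional         # paid for the off-the-run bonds (repo-financed)
cash_in_short  = notional         # proceeds of the short sale of on-the-run bonds
collateral_out = notional         # collateral posted against the bonds borrowed
collateral_in  = notional         # collateral received against the bonds lent out

net_cash = (cash_in_short - cash_out_long) + (collateral_in - collateral_out)
print(net_cash)                   # 0: the position goes on without any own cash

# The four payment streams that remain: interest received on collateral paid
# out, interest paid (slightly higher) on collateral taken in, plus the small
# coupon difference between the two bonds.
repo_rate_received = 0.0560 / 12  # monthly, assumed
repo_rate_paid     = 0.0562 / 12  # a couple of basis points higher, assumed
coupon_pickup      = 0.0001 / 12  # off-the-run coupon advantage, assumed

monthly_carry = notional * (repo_rate_received - repo_rate_paid + coupon_pickup)
print(round(monthly_carry))       # small negative number: the monthly cost of carry
```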


Algorithmic Trading. Thought of the Day 151.0


One of the first algorithmic trading strategies consisted of using a volume-weighted average price (VWAP) as the price at which orders would be executed. The VWAP, introduced by Berkowitz et al., can be calculated as the dollar amount traded over all transactions (price times shares traded) divided by the total shares traded for a given period. If the price of a buy order is lower than the VWAP, the trade is executed; if the price is higher, the trade is not executed. Participants wishing to lower the market impact of their trades stress the importance of market volume. Market-volume impact can be measured by comparing the execution price of an order to a benchmark. The VWAP benchmark is the sum of every transaction price paid, weighted by its volume. VWAP strategies allow an order to dilute its impact through the day. Most institutional trading occurs in filling orders that exceed the daily volume. When large numbers of shares must be traded, liquidity concerns can affect price goals. For this reason, some firms offer multiday VWAP strategies to respond to customers' requests. In order to further reduce the market impact of large orders, customers can specify their own volume participation by limiting the volume of their orders to coincide with low expected-volume days. Each order is sliced into several days' orders and then sent to a VWAP engine for the corresponding days. VWAP strategies fall into three categories: sell the order to a broker-dealer who guarantees VWAP; cross the order at a future date at VWAP; or trade the order with the goal of achieving a price of VWAP or better.
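As a rough illustration of the benchmark just described, the following Python sketch computes VWAP from a list of fills and applies the simple buy rule mentioned above. The function name and the sample numbers are illustrative, not taken from any trading system.

```python
# Minimal VWAP sketch: dollar volume divided by total shares for a period.

def vwap(trades):
    """trades: iterable of (price, shares) pairs for the period of interest."""
    dollar_volume = sum(price * shares for price, shares in trades)
    total_shares = sum(shares for _, shares in trades)
    return dollar_volume / total_shares

# Decide whether a buy order is acceptable against the benchmark.
trades = [(100.10, 500), (100.25, 1200), (100.05, 800)]   # illustrative fills
benchmark = vwap(trades)
buy_price = 100.12
execute = buy_price <= benchmark        # buy only at or below VWAP
print(round(benchmark, 4), execute)
```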

The second algorithmic trading strategy is the time-weighted average price (TWAP). TWAP allows traders to slice a trade over a certain period of time: an order can be cut into several equal parts and traded throughout the time period specified by the order. TWAP is used for orders which are not dependent on volume. TWAP can overcome obstacles such as fulfilling orders in illiquid stocks with unpredictable volume. Conversely, high-volume traders can also use TWAP to execute their orders over a specific time by slicing the order into several parts so that the impact of the execution does not significantly distort the market.
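A minimal sketch of the slicing idea follows; the share count, session length and number of slices are arbitrary illustrations.

```python
# Minimal TWAP sketch: cut a parent order into equal child orders spread
# evenly over a time window.

def twap_slices(total_shares, start_minute, end_minute, n_slices):
    interval = (end_minute - start_minute) / n_slices
    base, remainder = divmod(total_shares, n_slices)
    schedule = []
    for i in range(n_slices):
        qty = base + (1 if i < remainder else 0)   # spread the rounding remainder
        schedule.append((start_minute + i * interval, qty))
    return schedule

# Trade 10,000 shares over a 390-minute session in 13 equal slices.
for t, qty in twap_slices(10_000, 0, 390, 13):
    print(f"t = {t:.0f} min: send {qty} shares")
```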

Yet another type of algorithmic trading strategy is the implementation shortfall, or arrival price. The implementation shortfall is defined as the difference in return between a theoretical portfolio and an implemented portfolio. When deciding to buy or sell stocks during portfolio construction, a portfolio manager looks at the prevailing prices (decision prices). However, several factors can cause execution prices to be different from decision prices, resulting in returns that differ from the portfolio manager's expectations. Implementation shortfall is measured as the difference between the dollar return of a paper portfolio (paper return), in which all shares are assumed to transact at the prevailing market prices at the time of the investment decision, and the actual dollar return of the portfolio (real portfolio return). The main advantage of an implementation shortfall-based algorithmic system is that it manages transaction costs (most notably market impact and timing risk) over the specified trading horizon while adapting to changing market conditions and prices.
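The definition can be made concrete with a small sketch. The decision price, fills and closing price below are invented; the shortfall it reports combines the market-impact cost of the executed shares with the opportunity cost of the unexecuted ones.

```python
# Minimal implementation-shortfall sketch: paper P&L at the decision price
# minus the realized P&L of the actual fills.

def implementation_shortfall(decision_price, fills, shares_intended, final_price):
    # Paper portfolio: all intended shares assumed bought at the decision price.
    paper_pnl = shares_intended * (final_price - decision_price)
    # Real portfolio: only the executed shares, at their actual prices.
    executed = sum(q for _, q in fills)
    cost = sum(p * q for p, q in fills)
    real_pnl = executed * final_price - cost
    return paper_pnl - real_pnl

fills = [(50.10, 4000), (50.25, 3000)]                         # (price, shares) bought
print(implementation_shortfall(50.00, fills, 10_000, 51.00))   # 4150.0
```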

The participation algorithm or volume participation algorithm is used to trade up to the order quantity using a rate of execution that is in proportion to the actual volume trading in the market. It is ideal for trading large orders in liquid instruments where controlling market impact is a priority. The participation algorithm is similar to the VWAP except that a trader can set the volume to a constant percentage of total volume of a given order. This algorithm can represent a method of minimizing supply and demand imbalances (Kendall Kim – Electronic and Algorithmic Trading Technology).
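A hedged sketch of a constant-rate participation schedule is given below; the 10% participation rate and the interval volumes are assumptions for illustration.

```python
# Minimal volume-participation sketch: in each interval, trade a fixed
# percentage of observed market volume, capped by the remaining order quantity.

def participation_schedule(order_qty, market_volumes, rate=0.10):
    remaining = order_qty
    child_orders = []
    for volume in market_volumes:
        if remaining <= 0:
            break
        qty = min(int(volume * rate), remaining)
        child_orders.append(qty)
        remaining -= qty
    return child_orders, remaining

sched, left = participation_schedule(50_000, [120_000, 80_000, 200_000, 150_000])
print(sched, left)    # [12000, 8000, 20000, 10000] 0
```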

Smart order routing (SOR) algorithms allow a single order to exist simultaneously in multiple markets. They are critical for algorithmic execution models. It is highly desirable for algorithmic systems to have the ability to connect different markets in a manner that permits trades to flow quickly and efficiently from market to market. Smart routing algorithms provide full integration of information among all the participants in the different markets where the trades are routed. SOR algorithms allow traders to place large blocks of shares in the order book without fear of sending out a signal to other market participants. The algorithm matches limit orders and executes them at the midpoint of the bid-ask price quoted in different exchanges.
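As a rough sketch of the midpoint idea (venue names and quotes are invented), a router might compare bid-ask midpoints across venues and send the order where the midpoint is most favorable for its side.

```python
# Minimal smart-order-routing sketch: pick the venue with the best bid-ask
# midpoint for the given side of the order.

def best_venue(quotes, side):
    """quotes: {venue: (bid, ask)}; side: 'buy' or 'sell'."""
    mid = {venue: (bid + ask) / 2 for venue, (bid, ask) in quotes.items()}
    # A buyer prefers the lowest midpoint, a seller the highest.
    return (min if side == 'buy' else max)(mid, key=mid.get)

quotes = {"VenueA": (100.00, 100.04), "VenueB": (99.99, 100.03), "VenueC": (100.01, 100.05)}
print(best_venue(quotes, "buy"))    # VenueB, midpoint 100.01
```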

The Handbook of Trading: Strategies for Navigating and Profiting from Currency, Bond, and Stock Markets

“The Scam” – Debashis Basu and Sucheta Dalal – Was it the Beginning of the End?


“India is a turnaround scrip in the world market.”

“Either you kill, or you get killed” 

— Harshad Mehta

“Though normally quite reasonable and courteous, there was one breed of brokers he truly detested. To him and other kids in the money markets, brokers were meant to be treated like loyal dogs.”

— Broker

The first two claims by Harshad Mehta could be said to form the central theme of the book, The Scam, while the third statement is testimony to how compartmentalization within the camaraderie proved efficacious in getting the broker-trader nexus nixed, albeit briefly. The authors Debashis Basu and Sucheta Dalal have put rigorous investigation into unravelling the complexity of what in popular culture has come to be known as the first big securities scam in India, in the early 90s. That was only the beginning, for securities scams, banking frauds and financial crimes have since become a recurrent feature, thanks to the increasing mathematization and financialization of market practices, stark mismatches on the regulatory scales of the Reserve Bank of India (RBI), public sector banks and foreign banks, and stock-market-oriented economization. The last in particular has shattered the myth that stock markets are speculative and have no truck with the banking system, by capitalizing on and furthering the only link between the two: banks providing loans against shares, subject to high margins.

The scam which took the country by storm in 1992 had a central figure in Harshad Mehta, though the book does a most amazing archaeology in unearthing other equally, if not more, important figures that formed a collusive network of deceit and bilk. The almost spider-like weave, nowhere near in scale to the similar network that emanated from London and spread out to Tokyo, billed as the largest financial scandal of manipulating LIBOR thanks to Thomas Hayes by the turn of the century, nevertheless magnified the crevices existing within the banking system, bridging it with the secretive and once-closed bond market. So, what exactly was the scam, and why did it rock India's economic boat, especially when the country was opening up to liberal policies and amalgamating itself with globalization?

As Basu and Dalal say, simply put, the first traces of the scam were observed when the State Bank of India (SBI), Main Branch, Mumbai discovered that it was short by Rs. 574 crore in securities. In other words, the antiquated, manually written books kept at the Office of Public Debt at the RBI showed that Rs. 1170.95 crore of an 11.5% central government loan of 2010 maturity was standing against SBI's name as on 29th February 1992, against a figure of Rs. 1744.95 crore in SBI's own books: a clear gap of Rs. 574 crore, with the discrepancy apparently held in the Subsidiary General Ledger (SGL). Of the Rs. 574 crore missing, Rs. 500 crore had been transferred to Harshad Mehta's account. Now, an SGL contains the details to support the general ledger control account. For instance, the subsidiary ledger for accounts receivable contains all the information on each of the credit sales to customers, each customer's remittance, return of merchandise, discounts and so on. SGLs were a prime culprit when it came to conceiving the illegalities that followed. SGL forms were issued as substitutes for actual securities by a cleverly worked out machination. Bank Receipts (BRs) were then invoked as replacements for SGLs; a BR on the one hand confirmed that the bank had sold the securities at the rates mentioned therein, while on the other it prevented the SGLs from bouncing. The BR was a shrewd plot line whereby a bank could put a deal through even if its account at the Public Debt Office (PDO) was in the negative. This circumvention was clever precisely because, had the transactions taken place through SGLs, they would simply have bounced; BRs acted as a convenient run-around, not least because BRs were unsupported by securities. In order to derive the most from BRs, the Ready Forward Deal (RFD) was pressed into service, so that the securities never actually moved back and forth. Sucheta Dalal had already exposed the use of this instrument by Harshad Mehta way back in 1992 while writing for the Times of India. The RFD was essentially a secured short-term (generally 15-day) loan from one bank to another, where the banks would lend against Government securities: the borrowing bank sells the securities to the lending bank and buys them back at the end of the period of the loan, typically at a slightly higher price. Harshad Mehta roped in two relatively obscure and little-known banks, the Bank of Karad and the Mumbai Mercantile Cooperative Bank (MMCB), to issue fake BRs, that is, BRs not backed by Government securities. It was these fake BRs that were eventually exchanged with other banks, which paid Mehta unaware that they were in fact dealing with fake BRs.

By a cunning turn of reason, and not one to rest till such payments were made to reflect on the stock market, Harshad Mehta began to artificially inflate share prices by going on a buying spree. To maximize profits on such investments, the broker, by now the darling of the stock market and referred to as the Big Bull, decided to sell off the shares and in the process retire the BRs. Little did anyone know then that the day the shares were sold, the market would crash, and crash it did. Mehta's maneuvers lent a feel-good factor to the stock market until the scam erupted, and when it did erupt, many banks found themselves swindled to a massive loss of Rs. 4000 crore, for they held on to BRs that had no value attached to them. The one that took the most stinging loss was the State Bank of India, and it was payback time. The mechanism by which the money was paid back cannot be understood unless one gets to the root of an RBI subsidiary, the National Housing Bank (NHB). When the State Bank of India directed Harshad Mehta to produce either the securities or the money, Mehta approached the NHB seeking help, for the relationship between the broker and the RBI's subsidiary had warmed over the years, the discovery of which appalled officials at the Reserve Bank. This only lends credibility to the broker-banker collusion, the likes of which only got murkier as the scam was unravelled. NHB did come to rescue Harshad Mehta, by issuing a cheque in favor of ANZ Grindlays Bank. The deal again proved to be one-handed, as NHB did not get securities in return from Harshad Mehta, and eventually the cheque found its way into Mehta's ANZ account, which helped clear the dues owed to SBI. The most pertinent question here was why the RBI's subsidiary acted so collusively. This only makes sense once one is clear that Harshad Mehta had delivered considerable profits to the NHB by way of ready forward deals (RFDs). If this was the flow chart of payment routes to SBI, the authors of The Scam point out how SBI once again debited Harshad Mehta's account, which had by then exhausted its balance. This was done by releasing a massive overdraft of Rs. 707 crore, an overdraft being essentially an extension of credit by a lending institution when the account is exhausted. Then the incredible happened: the overdraft was released against no security at all, and the deal was acquiesced to since there was a widespread belief within the director-fold of the SBI that most of what was paid to the NHB would come back to SBI's subsidiaries, from where SBI had got its money in the first place.

The Scam is neatly divided into two books comprising 23 chapters, with the first part delineating the rise of Harshad Mehta as a broker superstar, the Big Bull. He is not the only character to be pilloried, as the nexus meshed all the way from Mumbai (then Bombay) to Kolkata (then Calcutta) to Bengaluru (then Bangalore) to Delhi and Chennai (then Madras), with a host of jobbers, market makers, brokers and traders who were embezzling funds off the banks, colluded with by the banks themselves, overheating the stock market in a country that was only just officially trying to jettison the tag of Nehruvian socialism. But it wasn't merely individuated: the range of complicitous relations also took in governmental and private institutions and firms. Be it Standard Chartered or Citibank, whether by monetizing assets not even in their possession, forward selling transactions to make them appear cash-neutral, or lending money to the corporate sector as clean credit (banks taking risks on borrowers they had not approved, because such lending did not fall under mainline corporate lending), the rules and regulations of the RBI were flouted and breached with increasing alacrity and in clear violation of guidelines. But credit is definitely due to S. Venkitaramanan, the Governor of the RBI, who in his two years at the helm of affairs exposed the scam, but was meted out disturbing treatment at the hands of some members of the Joint Parliamentary Committee. Harshad Mehta had grown increasingly confident of his means and mechanisms to siphon off money using inter-bank transactions, and when he was finally apprehended, he was charged with 72 criminal offences and more than 600 civil action suits were filed against him, leading to his arrest by the CBI in November 1992. Banished from the stock market, he did make a comeback as a market guru before the Bombay High Court convicted him to prison. But, scamster that he was projected to be, he wouldn't rest without creating chaos and commotion, and one such bomb was dropped by him when he claimed to have paid the Congress Prime Minister P. V. Narasimha Rao a hefty sum to get himself knocked off the scandal. Harshad Mehta passed away from a cardiac arrest while in prison in Thane, but his legacy continued within the folds he had inspired, spread far and wide.


Ketan Parekh forms a substantial character of Book 2 of The Scam. Often referred to in private as Midas for his ability to turn whatever he touched on Dalal Street into gold through financial trickery, he decided to take the unfinished project of Harshad Mehta to fruition. Known for his timid demeanor, Parekh came from a broker's family and, with his training as a Chartered Accountant, was able to devise a trading ring that helped him rig stock prices with his vested interests at the forefront. He was a bull on a wild run, whose match was found in a bear cartel that hammered the prices of the K-10 stocks, precipitating a payment crisis. The K-10 stocks were so named colloquially for being driven in a set of ten, and their promotion was done by creating bellwethers and seeking support from Foreign Institutional Investors (FIIs). India was already seven years into the LPG regime, but still sailing the rough seas of economic transition rather than smooth waters. This wasn't the most conducive of times to appropriate profits, but prodigy that he was, his ingenuity lay in instrumentalizing the jacking up of share prices and translating it into much-needed liquidity. This way, he was able to keep FIIs and promoters satisfied and multiply money at his own end. In financial jargon this goes by the name of circular trading, but his brilliance was epitomized by his timing in dumping devalued shares onto institutions like the Life Insurance Corporation of India (LIC) and the Unit Trust of India (UTI). What differentiated him from Harshad Mehta was his staying off public money and off expropriating public institutions. Such was his prowess that share markets would tend to catch a cold when he sneezed, and his modus operandi was to invest in small companies through private placements, manipulate the markets to rig their shares, and sell them to devalue the same. But lady luck wouldn't continue to shine on him: with the turn of the century, Parekh, who had invested heavily in information technology stocks, was hit hard by the collapse of the dotcom bubble. Add to that, when the NDA government headed by Atal Bihari Vajpayee presented the Union Budget in 2001, the Bombay Stock Exchange (BSE) Sensex crashed, prompting the Government to dig deep into such a market reaction. SEBI's (Securities and Exchange Board of India) investigation revealed the rogue nature of Ketan Parekh as a trader, who was charged with shaking the very foundations of Indian financial markets. Ketan Parekh was banned from trading until 2017, but SEBI isn't too comfortable with the fact that his protégés are carrying forward the master's legacy, though such allegations are yet to be put to rest.

The legacy of Harshad Mehta and Ketan Parekh continues to haunt financial markets in the country to date, and these scams were only signatures of what was to follow in the form of the banking crisis that public sector banks are now faced with. As Basu and Dalal write, "in money markets the first signs of rot began to appear in the mid-1980s. After more than a decade of so-called social banking, banks found themselves groaning under a load of investments they were forced to make to maintain the Statutory Liquidity Ratio. The investments were in low-interest bearing loans issued by the central and state governments that financed the government's ever-increasing appetite for cash. Banks intended to hold these low-interest government bonds till maturity. But each time a new set of loans came with a slightly higher interest rate called the coupon rate, the market price of older securities fell, and thereafter banks began to book losses, which eroded their profitability." The situation is a lot more grim today. RBI's autonomy has come under increased threat, and the question that requires the most incision is how to find a resolution to what one Citibank executive said: "RBI guidelines are just that, guidelines. Not the law of the land."

The Scam, as much as it is a personal account of deceit faced during tumultuous times, is a brisk read, with some minor hurdles in the form of technicalities that intersperse the volume and tend to disrupt the plot lines. Such technical details are in the realm of share markets, and unless negotiated with either prior knowledge or some hyperlinking they tend to derail the pace, but in no way should the book be considered not worth looking at. As a matter of fact, the third edition, in its fifth reprint, is testimony to the fact that the book's market is alive and ever-growing. One only wonders at the end of it where all such journalists have disappeared to in this country. Debashis Basu and Sucheta Dalal, partners in real life, are indeed partners in crime, so to speak, in exposing financial crimes of such magnitude for the multitude in this country who would otherwise have been bereft of such understanding had it not been for them.

Hyperimmunity ≈ Hypersimplicity. Algorithmic Complexities.


The notions of hyperimmune degrees have applications in computability theory and in the study of its interaction with algorithmic randomness. There are several ways to define these notions; let us concentrate on the concept of domination. A function g is dominated by a function f if g(n) ≤ f(n) for almost all n. It is sometimes technically useful to work with the following closely related concept: a function g is majorized by a function f if g(n) ≤ f(n) ∀ n.
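Stated formally, reading "almost all n" in the standard sense of "all but finitely many n":

```latex
\[
  g \text{ is dominated by } f \iff \exists N \,\forall n \ge N \;\, g(n) \le f(n),
  \qquad
  g \text{ is majorized by } f \iff \forall n \;\, g(n) \le f(n).
\]
```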

A degree a is hyperimmune if there is a function f ≤T a that is not dominated by any computable function (or, equivalently, not majorized by any computable function). Otherwise, a is hyperimmune-free.

While 0 is clearly hyperimmune-free, all other ∆02 degrees are hyperimmune.

If a < b ≤ a′, then b is hyperimmune. In particular, every nonzero degree below 0′, and hence every nonzero ∆02 degree, is hyperimmune.

We carry out the proof for the case a = 0. The general result follows by a straightforward relativization.

Let B be a set such that ∅ <T B ≤T ∅′. We need to find a function g ≤T B that is not majorized by any computable function. Since B is ∆02, it has a computable approximation {Bs}s∈N. Define

g(n) = μs ≥ n (Bs ↾ n = B ↾ n)

Note that g(n) is not necessarily the stage by which the approximation to B ↾ n has stabilized, but rather the first stage s ≥ n at which Bs ↾ n is correct. Clearly, g ≤T B. We claim that no computable function majorizes g. Suppose, for a contradiction, that h is computable and majorizes g; we show that B is then computable. To compute B ↾ m, search for an n > m such that Bt ↾ m = Bn ↾ m ∀ t ∈ [n, h(n)]. Such an n must exist because there is a stage at which the approximation to B ↾ m stabilizes. By the definition of g and the choice of h, we have g(n) ∈ [n, h(n)], so B ↾ m = Bg(n) ↾ m = Bn ↾ m. Thus B is computable, which is a contradiction.

On the other hand, there do exist nonzero hyperimmune-free degrees.

We define a noncomputable set A of hyperimmune-free degree, using a technique known as forcing with computable perfect trees. A function tree is a function T: 2<ω → 2<ω such that T(σ0) and T(σ1) are incompatible extensions of T(σ). For a function tree T, let [T] be the collection of all X for which there is an infinite binary sequence β such that T(σ) ≺ X ∀ σ ≺ β. We build a sequence {Ti}i∈N of computable function trees such that [T0] ⊇ [T1] ⊇ [T2] ⊇ [T3] ⊇ ⋯. Since each [Ti] is closed, ∩n[Tn] ≠ ∅. We take A to be any element of this intersection.

The name “hyperimmune degree” comes from the notion of a hyperimmune set. A strong array is a computable collection of disjoint finite sets {Fi}i∈N (which means not only that the Fi are uniformly computable, but that the function i ↦ max Fi is computable). A set A is hyperimmune if for every strong array {Fi}i∈N there is an i such that Fi is contained in the complement of A. A set is hypersimple if its complement is hyperimmune.

Algorithmic Randomness and Complexity

Network Theoretic of the Fermionic Quantum State – Epistemological Rumination. Thought of the Day 150.0


In quantum physics, fundamental particles are believed to be of two types, fermions or bosons, depending on the value of their spin (an intrinsic ‘angular momentum’ of the particle). Fermions have half-integer spin and cannot occupy a quantum state (a configuration with specified microscopic degrees of freedom, or quantum numbers) that is already occupied. In other words, at most one fermion at a time can occupy one quantum state. The resulting probability that a quantum state is occupied is known as the Fermi-Dirac statistics.

Now, if we want to convert this into a maximum-entropy model in which the observed heterogeneity is reproduced topologically, the starting recourse is network theory, with an ensemble of networks in which each vertex i has the same degree ki as in the real network. This choice is justified by the fact that, being a purely local topological property, the degree is expected to be directly affected by some intrinsic (non-topological) property of vertices. The caveat is that the real network should not be naively compared with the randomized one, which could otherwise lead to interpreting the observed values as ‘unavoidable’ topological constraints, in the sense that violating the observed values would lead to ‘impossible’, or at least very unrealistic, values.

The resulting model is known as the Configuration Model, and is defined as a maximum-entropy ensemble of graphs with given degree sequence. The degree sequence, which is the constraint defining the model, is nothing but the ordered vector k of degrees of all vertices (where the ith component ki is the degree of vertex i). The ordering preserves the ‘identity’ of vertices: in the resulting network ensemble, the expected degree ⟨ki⟩ of each vertex i is the same as the empirical value ki for that vertex. In the Configuration Model, the graph probability is given by

P(A) = ∏_{i<j} q_ij(a_ij) = ∏_{i<j} p_ij^{a_ij} (1 − p_ij)^{1−a_ij} —– (1)

where q_ij(a) = p_ij^a (1 − p_ij)^{1−a} is the probability that a particular entry of the adjacency matrix A takes the value aij = a, which is a Bernoulli process with different pairs of vertices characterized by different connection probabilities pij. A Bernoulli trial (or Bernoulli process) is the simplest random event, i.e. one characterized by only two possible outcomes. One of the two outcomes is referred to as the ‘success’ and is assigned a probability p. The other outcome is referred to as the ‘failure’, and is assigned the complementary probability 1 − p. These probabilities read

⟨a_ij⟩ = p_ij = x_i x_j / (1 + x_i x_j) —– (2)

where xi is the Lagrange multiplier obtained by ensuring that the expected degree of the corresponding vertex i equals its observed value: ⟨ki⟩ = ki ∀ i. As always happens in maximum-entropy ensembles, the probabilistic nature of configurations implies that the constraints are valid only on average (the angular brackets indicate an average over the ensemble of realizable networks). Also note that pij is a monotonically increasing function of xi and xj. This implies that ⟨ki⟩ is a monotonically increasing function of xi. An important consequence is that two vertices i and j with the same degree ki = kj must have the same value xi = xj.
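A minimal computational sketch of the Configuration Model just described is given below: a small, invented degree sequence, a naive fixed-point iteration for the multipliers xi enforcing ⟨ki⟩ = ki, and independent Bernoulli draws with probabilities pij from (2). It is illustrative only, not an optimized or canonical implementation.

```python
# Minimal Configuration Model sketch: fit x_i so that expected degrees match
# an observed degree sequence, then sample one graph from the ensemble.

import random

def fit_multipliers(k, iterations=5000):
    """Solve k_i = sum_{j != i} x_i x_j / (1 + x_i x_j) by fixed-point iteration."""
    n = len(k)
    x = [max(ki, 1e-6) for ki in k]                 # crude initial guess
    for _ in range(iterations):
        for i in range(n):
            s = sum(x[j] / (1 + x[i] * x[j]) for j in range(n) if j != i)
            x[i] = k[i] / s if s > 0 else 0.0       # from k_i = x_i * s
    return x

def sample_graph(x):
    """Draw each link independently with probability p_ij = x_i x_j / (1 + x_i x_j)."""
    n = len(x)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            p_ij = x[i] * x[j] / (1 + x[i] * x[j])
            if random.random() < p_ij:              # Bernoulli trial for the pair
                edges.add((i, j))
    return edges

k = [3, 2, 2, 2, 1]                                 # invented degree sequence
x = fit_multipliers(k)
expected = [sum(x[i] * x[j] / (1 + x[i] * x[j]) for j in range(len(k)) if j != i)
            for i in range(len(k))]
print([round(e, 2) for e in expected])              # should reproduce k on average
print(sample_graph(x))                              # one realization from the ensemble
```

In the same spirit, scatter-plotting an unconstrained property of the real network against its average over many such sampled realizations gives the model-versus-reality comparison discussed below.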


(2) provides an interesting connection with quantum physics, and in particular the statistical mechanics of fermions. The ‘selection rules’ of fermions dictate that only one particle at a time can occupy a single-particle state, exactly as each pair of vertices in binary networks can be either connected or disconnected. In this analogy, every pair i, j of vertices is a ‘quantum state’ identified by the ‘quantum numbers’ i and j. So each link of a binary network is like a fermion that can be in one of the available states, provided that no two objects are in the same state. (2) indicates the expected number of particles/links in the state specified by i and j. Unsurprisingly, it has the same form as the so-called Fermi-Dirac statistics describing the expected number of fermions in a given quantum state. The probabilistic nature of links also allows for the presence of empty states, whose occurrence is regulated by the probability coefficients (1 − pij). The Configuration Model allows the whole degree sequence of the observed network to be preserved (on average), while randomizing other (unconstrained) network properties. Now, comparing the higher-order (unconstrained) observed topological properties with their expected values calculated over the maximum-entropy ensemble indicates the extent to which the degree sequence is informative in explaining the rest of the topology, the latter following from the probabilities in (2). Collecting these comparisons into a scatter plot, the agreement between model and observations can be simply assessed: the less scattered the cloud of points around the identity function, the better the agreement between model and reality. In principle, a broadly scattered cloud around the identity function would indicate the limited effectiveness of the chosen constraints in reproducing the unconstrained properties, signaling the presence of genuine higher-order patterns of self-organization not simply explainable in terms of the degree sequence alone. Thus, the ‘fermionic’ character of the binary model is the mere result of the restriction that no two binary links can be placed between any two vertices, leading to a mathematical result formally equivalent to that of quantum statistics.

Skeleton of the Presentation on AIIB and Blue Economy in Mumbai during the Peoples’ Convention on 22nd June 2018

Main features in AIIB Financing

  1. investments in regional members
  2. supports longer tenors and appropriate grace period
  3. mobilize funding through insurance, banks, funds and sovereign wealth (like the China Investment Corporation (CIC) in the case of China)
  4. funds on economic/financial considerations and on project benefits, e.g. global climate, energy security, productivity improvement, etc.

Public Sector:

  1. sovereign-backed financing (sovereign guarantee)
  2. loan/guarantee

Private Sector:

  1. non-sovereign-backed financing (private sector, State Owned Enterprises (SOEs), sub-sovereign and municipalities)
  2. loans and equity
  3. bonds, credit enhancement, funds etc.

—— portfolio is expected to grow steadily with increasing share of standalone projects from 27% in 2016 to 39% in 2017 and 42% in 2018 (projected)

—— share of non-sovereign-backed projects has increased from 1% in 2016 to 36% of the portfolio in 2017, and is projected to account for about 30% in 2018


Why would AIIB be interested in the Blue Economy?

  1. To appropriate (expropriate) the potential of hinterlands
  2. increasing industrialization
  3. increasing GDP
  4. increasing trade
  5. infrastructure development
  6. Energy and Minerals in order to bring about a changing landscape
  7. Container: regional collaboration and competition

AIIB wishes to change the landscape of infrastructure funding across its partner countries, laying emphasis on cross-country and cross-sectoral investments in the shipping sector — Yee Ean Pang, Director General, Investment Operations, AIIB.

He also opined that in the shipping sector there is a need for private players to step in, with 40-45 per cent of stake in partnership being offered to private players.


Projects aligned with Sagarmala are being considered for financial assistance by the Ministry of Shipping under two main headings:

1. Budgetary Allocations from the Ministry of Shipping

    a. up to 50% of the project cost in the form of budgetary grant

    b. Projects having high social impact but low/no Internal Rate of Return (IRR) may be provided funding, in convergence with schemes of other central line ministries. IRR is a metric used in capital budgeting to estimate the profitability of potential investments: it is the discount rate that makes the net present value (NPV) of all cash flows from a particular project equal to zero, where NPV is the difference between the present value of cash inflows and the present value of cash outflows over a period of time. IRR is sometimes referred to as the “economic rate of return” or “discounted cash flow rate of return”; the use of “internal” refers to the omission of external factors, such as the cost of capital or inflation, from the calculation (a small numeric sketch of NPV and IRR follows this list).

2. Funding in the form of equity by Sagarmala Development Co. Ltd.

    a. SDCL to provide 49% equity funding to residual projects

    b. monitoring is to be jointly done by SDCL and implementing agency at the SPV level

    c.  project proponent to bear operation and maintenance costs of the project

     i. importantly, expenses incurred for project development to be treated as part of SDCL’s equity contribution

     ii. preferences to be given to projects where land is being contributed by the project proponent
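A minimal numeric sketch of the NPV and IRR definitions mentioned under heading 1(b) above: the cash flows are invented, and the IRR is found by a simple bisection search, assuming a single sign change in the cash-flow stream.

```python
# Minimal NPV / IRR sketch for a project cash-flow stream (outlay at t = 0,
# annual cash flows thereafter). All numbers are illustrative.

def npv(rate, cash_flows):
    """Net present value: discounted inflows minus outflows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-7):
    """Discount rate at which NPV crosses zero (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid          # NPV still positive: the rate can go higher
        else:
            hi = mid
    return (lo + hi) / 2

project = [-100.0, 30.0, 40.0, 50.0, 20.0]    # outlay followed by inflows
print(round(npv(0.10, project), 2))           # NPV at a 10% discount rate
print(round(irr(project) * 100, 2), "%")      # IRR: the rate making NPV zero
```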

What are the main financing issues?

  1. Role of MDBs and BDBs for promotion of shipping sector in the country
  2. provision of long-term low-cost loans to shipping companies for procurement of vessels
  3. PPPs (coastal employment zones, port connectivity projects), EPCs, ECBs (port expansion and new port development), FDI in Make in India 2.0 of which shipping is a major sector identified, and conventional bank financing for port modernization and port connectivity

The major constraining factors, however, are:

  1. uncertainty in the shipping sector, cyclical business nature
  2. immature financial markets