The Banking Business…Note Quote


Why is lending indispensable to banking? This not-so-new question has gathered a lot of steam, especially in the wake of the 2007-08 crisis. In India, however, it has become quite a staple of CSOs purportedly carrying out research and analysis into what has, albeit wrongly, begun to be considered the offshoots of neoliberal, capitalist policies favoring cronyism on the one hand and marginalizing the priority-sector focus of nationalized banks on the other. Though it would be a bit far-fetched to call this analysis mushrooming on artificially-tilled ground, the leaps such analyses take are nevertheless not justified. The purpose of this piece is precisely to demystify, and to act as a corrective to, such erroneous thinking feeding activism.

The idea is to launch from the importance of lending practices to banking, and why, if such practices weren't the norm, banking as a business would falter. Monetary and financial systems are creations of double-entry accounting, in that, when banks lend, the process creates a matrix (or matrices) of new assets and new liabilities. The monetary system is a counterfactual of sorts: a bookkeeping mechanism for the intermediation of real economic activity, giving a semblance of reality to finance capitalism in substance and form. Let us say a bank A lends to a borrower. By this process, a new asset and a new liability are created for A, in that there is a debit under bank assets and a simultaneous credit to the borrower's account. These accounting entries enlarge the bank's and the borrower's respective balance sheets, making the operation different from the opening of a bank account funded by a deposit. The bank now has an asset equal to the amount of the loan and a liability equal to the deposit. Put a bit differently, bank A writes a cheque or draft for the borrower, thus debiting the borrower's loan account and crediting a payment liability account. Now, this borrower decides to deposit this cheque/draft at a different bank B, whose balance sheet grows by the same amount, with a payment-due asset and a deposit liability. This is the matrix (or matrices) referred to at the beginning of this paragraph. The obvious complication is the duplication of the balance sheet across the banks A and B, which clearly stands in need of urgent resolution. This duplication is categorized under the accounting principle of 'float', and float is the primary requisite for resolving the duplication. Float is the amount of time it takes for money to move from one account to another. The time period is significant because it is as if the funds are in two places at once: the money is still in the cheque writer's account, while the cheque recipient may have deposited the funds at their bank as well. The resolution is reached when bank B clears the cheque/draft and receives a reserve balance credit in exchange, at which point bank A sheds both reserve balances and its payment liability. What has happened is that the systemic balance sheet has grown by the amount of the original loan and deposit, even though these are domiciled in two different banks A and B. In other words, B's balance sheet shows increased deposits and reserves, while A's balance sheet is temporarily unchanged in size, the loan issued being offset by the decline in reserves. It needs to be noted that a reserve requirement is created here in addition to a capital requirement, the former with the creation of the deposit and the latter with the creation of the loan, implying that loans create a capital requirement whereas deposits create a reserve requirement. Pari passu, bank A will seek to borrow new funding from the money markets and bank B could lend funds into these markets. This is a natural reaction to the fluctuating reserve distribution created at banks A and B, and this normalization of reserve fluctuations is a basic function of commercial bank reserve management. Though this is a stylized case involving just two banks, a meshwork of different banks and their counterparties is involved in the transactions that define the present-day banking scenario, hence the complexity referred to earlier.
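
A minimal sketch of these entries may help. Everything below is illustrative only (the `Bank` class, the opening reserve balances and the Rs./\$100 figure are invented); it simply mimics the debit/credit mechanics described above: A's loan creates a deposit, the cheque deposited at B creates float, and clearing moves reserves from A to B.

```python
# Illustrative double-entry sketch of the loan -> cheque -> clearing sequence.
# All names and figures are hypothetical; only the mechanics follow the text.

class Bank:
    def __init__(self, name, reserves):
        self.name = name
        self.assets = {"reserves": reserves, "loans": 0.0, "payment_due": 0.0}
        self.liabilities = {"deposits": 0.0, "payment_liability": 0.0}

A = Bank("A", reserves=50.0)
B = Bank("B", reserves=50.0)

# 1. Bank A lends 100: a new loan asset and a new deposit liability, both at A.
A.assets["loans"] += 100.0
A.liabilities["deposits"] += 100.0

# 2. The borrower writes a cheque deposited at B. A swaps the deposit for a
#    payment liability; B books a 'payment due' asset against a new deposit.
#    This is float: the same 100 sits on both balance sheets until clearing.
A.liabilities["deposits"] -= 100.0
A.liabilities["payment_liability"] += 100.0
B.assets["payment_due"] += 100.0
B.liabilities["deposits"] += 100.0

# 3. Clearing: B exchanges 'payment due' for reserves; A sheds reserves and its
#    payment liability. The duplication disappears, and the system as a whole
#    has grown by the amount of the original loan and deposit.
B.assets["payment_due"] -= 100.0
B.assets["reserves"] += 100.0
A.assets["reserves"] -= 100.0
A.liabilities["payment_liability"] -= 100.0

print(A.name, A.assets, A.liabilities)   # loan remains, reserves down
print(B.name, B.assets, B.liabilities)   # deposits and reserves up
```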

Now, there is something called the Cash Reserve Ratio (CRR), whereby banks in India (and elsewhere as well) are required to hold a certain proportion of their deposits in the form of cash. However, these banks don't hold this as cash with themselves; they deposit it (also known as currency chests) with the Reserve Bank of India (RBI). For example, if a bank's deposits increase by Rs. 100 and the CRR is 4% (the present CRR stipulated by the RBI), then the bank will have to hold Rs. 4 with the RBI and will be able to use only Rs. 96 for investment and lending, or credit, purposes. Therefore, the higher the CRR, the lower the amount banks can use for lending and investment. CRR is a tool used by the RBI to control liquidity in the banking system. Now, if bank A lends out Rs. 100, it incurs a reserve requirement of Rs. 4; in other words, for every Rs. 100 of loan, a simultaneous reserve requirement of Rs. 4 is created against the deposit. But there is a further ingredient to this banking complexity in the form of Tier-1 and Tier-2 capital as laid down by the BASEL Accords, to which India is a signatory. Under the accords, a bank's capital consists of Tier-1 and Tier-2 capital, where Tier-1 is the bank's core capital and Tier-2 is supplementary, the sum of the two being the bank's total capital. This is a crucial component and is considered highly significant by regulators (like the RBI, for instance), for the capital ratio is used to determine and rank a bank's capital adequacy. Tier-1 capital consists of shareholders' equity and retained earnings, and measures the extent to which the bank can absorb losses without ceasing business operations. BASEL-3 sets the minimum Tier-1 capital ratio at 6%, calculated by dividing the bank's Tier-1 capital by its total risk-weighted assets. Tier-2 capital includes revaluation reserves, hybrid capital instruments and subordinated term debt, general loan-loss reserves, and undisclosed reserves; it is supplementary since it is less reliable than Tier-1 capital. According to BASEL-3, the minimum total capital ratio is 8%, which implies that Tier-2 capital can make up at most 2 percentage points of that minimum, as against the 6% that must be Tier-1. Going by these norms, a well-capitalized bank in India must have an 8% combined Tier-1 and Tier-2 capital ratio, meaning that for every Rs. 100 of bank loan, a simultaneous regulatory capital requirement of Rs. 8 of Tier-1/Tier-2 is generated. Further, if a Rs. 100 loan has created a Rs. 100 deposit, it has actually created an asset of Rs. 100 for the bank and, at the same time, liabilities and requirements of Rs. 112, the sum of the deposit, required reserves and required capital. On the face of it, this looks like a losing deal for the bank. But there is more than meets the eye here.
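
In compact form, using the article's own figures (a Rs. 100 loan taken, purely for illustration, at a 100% risk weight, a 4% CRR on the resulting deposit, and an 8% minimum total capital ratio):

```latex
\text{Tier-1 ratio} = \frac{\text{Tier-1 capital}}{\text{risk-weighted assets}} \;\ge\; 6\%,
\qquad
\text{Total capital ratio} = \frac{\text{Tier-1} + \text{Tier-2}}{\text{risk-weighted assets}} \;\ge\; 8\%,
\\[1ex]
\underbrace{100}_{\text{deposit}}
\;+\; \underbrace{0.04 \times 100}_{\text{required reserves}}
\;+\; \underbrace{0.08 \times 100}_{\text{required capital}}
\;=\; 100 + 4 + 8 \;=\; 112 .
```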

Assume bank A lends Mr. Amit Modi Rs. 100 by crediting Mr. Modi's deposit account held at A with Rs. 100. Two new requirements are immediately created that need urgent addressing, viz. the reserve requirement and the capital requirement. One way to raise the Rs. 8 of required capital is for bank A to sell shares, raise equity-like debt or retain earnings. The other way is to attach an origination fee of 10% (apologies for the excessively high figure, but let us keep it at 10% for brevity). This 10% origination fee adds to retained earnings and helps satisfy the capital requirement. What is happening here might look unusual, but it is the key to the lending business of any bank: bank A is meeting its capital requirement by discounting a deposit it created off its own loan, thereby reducing its liability without actually reducing its asset. To put it differently, bank A extracts a 10% fee from the Rs. 100 it lends, thus crediting an actual deposit of only Rs. 90. With this, A's reserve requirement falls to Rs. 3.6 (remember, the CRR is 4%), a reduction of Rs. 0.4. This in turn means that the Rs. 100 loan made by A now carries liabilities and requirements worth Rs. 101.6 (the Rs. 90 deposit plus Rs. 3.6 of required reserves plus Rs. 8 of required capital), with the Rs. 10 of fee income available as retained earnings against the capital requirement. The RBI, which imposes the reserve requirement, will follow up new deposit creation with a systemic injection sufficient to accommodate the requirement of the bank that has issued the deposit. And this new requirement is what is termed the targeted asset for the bank. It will fund this asset in the normal course of its asset-liability management process, just as it would any other asset. At the margin, the bank actually has to compete for funding that will draw new reserve balances into its position with the RBI. This action, of course, is commingled with numerous other such transactions that occur in the normal course of reserve management. The sequence includes a time lag between the creation of the deposit and the activation of the corresponding reserve requirement against that deposit. A bank can in theory be temporarily at rest in terms of balance sheet growth and still experience continuous shifting in the mix of asset and liability types, including the shifting of deposits. Part of this deposit shifting is inherent in a private sector banking system that fosters competition for deposit funding. The birth of a demand deposit in particular is separate from retaining it through competition. Moreover, the fork in the road that was taken in order to construct a private sector banking system implies that the RBI is not a mere slush fund that provides unlimited funding to the banking system.
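
A small tally of the fee example, under the same illustrative assumptions as above (4% CRR on the deposit, 8% capital against the full Rs. 100 loan, 10% origination fee). The point is simply that the fee shrinks the deposit and its reserve requirement, and supplies retained earnings toward the capital requirement.

```python
# Hypothetical figures from the text: Rs.100 loan, 10% origination fee,
# 4% CRR on deposits, 8% capital against the (100%-risk-weighted) loan.
loan, fee_rate, crr, capital_ratio = 100.0, 0.10, 0.04, 0.08

fee_income      = fee_rate * loan          # 10.0 -> retained earnings (Tier-1)
deposit_created = loan - fee_income        #  90.0 instead of 100.0
reserve_req     = crr * deposit_created    #   3.6 instead of 4.0
capital_req     = capital_ratio * loan     #   8.0, unchanged (driven by the asset)

total_claims = deposit_created + reserve_req + capital_req   # 101.6 vs. 112.0
print(deposit_created, reserve_req, capital_req, total_claims)
print("capital requirement covered by fee income:", fee_income >= capital_req)
```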

The originating accounting entries in the above case are simple, a loan asset and a deposit liability. But this is only the start of the story. Commercial bank 'asset-liability management' functions oversee the comprehensive flow of funds in and out of individual banks. They control exposure to the basic banking risks of liquidity and interest rate sensitivity. Somewhat separately, but still connected within an overarching risk management framework, banks manage credit risk by linking line lending functions directly to the process of internal risk assessment and capital allocation. Banks require capital, especially equity capital, to take risk, and to take credit risk in particular. Interest rate risk and interest margin management are critical aspects of bank asset-liability management. The asset-liability management function provides pricing guidance for deposit products and related funding costs for lending operations. This function helps coordinate the operations of the left and the right hand sides of the balance sheet. For example, a central bank interest rate change becomes a cost-of-funds signal that transmits to commercial bank balance sheets as a marginal pricing influence. The asset-liability management function is the commercial bank's coordination function for this transmission process, as the pricing signal ripples out to various balance sheet categories. Loan and deposit pricing is directly affected because the cost of funds that anchors all pricing in finance has been changed. In other cases, a change in the term structure of market interest rates requires similar coordination of commercial bank pricing implications. And this reset in pricing has implications for commercial bank approaches to strategies and targets for the compositional mix of assets and liabilities. The life of deposits is more dynamic than their birth or death. Deposits move around the banking system as banks compete to retain or attract them. Deposits also change form. Demand deposits can convert to term deposits, as banks seek a supply of longer-duration funding for asset-liability matching purposes. And they can convert to new debt or equity securities issued by a particular bank, as buyers of these instruments draw down their deposits to pay for them. All of these changes happen across different banks, which can lead to temporary imbalances in the nominal matching of assets and liabilities, which in turn require active management of the reserve account level, with appropriate liquidity management responses through money market operations in the short term, or longer-term strategic adjustments in approaches to loan and deposit market share. The key idea here is that banks compete for deposits that currently exist in the system, including deposits that can be withdrawn on demand, or at maturity in the case of term deposits. And this competition extends more comprehensively to other liability forms such as debt, as well as to the asset side of the balance sheet through market share strategies for various lending categories. All of this balance sheet flux occurs across different banks, and requires that individual banks actively manage their balance sheets to ensure that assets are appropriately and efficiently funded with liabilities and equity. The ultimate purpose of reserve management is not reserve positioning per se; the end goal is that balance sheets are in balance. The reserve system records the effect of this balance sheet activity.
And even if loan books remain temporarily unchanged, all manner of other banking system assets and liabilities may be in motion. This includes securities portfolios, deposits, debt liabilities, and the status of the common equity and retained earnings account. And of course, loan books don’t remain unchanged for very long, in which case the loan/deposit growth dynamic comes directly into play on a recurring basis. 
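
As an illustration of the interest-rate-sensitivity side of asset-liability management discussed above, here is a minimal repricing-gap sketch. The bucket boundaries, balances and the 25 bp rate move are all invented for the example; it only shows the kind of coordination arithmetic an ALM desk performs when a policy-rate signal ripples across the balance sheet.

```python
# Minimal repricing-gap sketch: which assets/liabilities reprice in each horizon
# bucket, and the approximate earnings impact of a parallel rate move.
buckets = ["0-3m", "3-12m", ">1y"]

# Hypothetical balances (e.g. in crores) that reprice within each bucket.
repricing_assets      = {"0-3m": 400.0, "3-12m": 300.0, ">1y": 300.0}
repricing_liabilities = {"0-3m": 550.0, "3-12m": 250.0, ">1y": 200.0}

rate_move = 0.0025   # a 25 bp policy-rate change transmitted to repricing items

for b in buckets:
    gap = repricing_assets[b] - repricing_liabilities[b]
    # Positive gap: earnings rise with rates; negative gap: earnings fall.
    print(f"{b:>5}  gap = {gap:+7.1f}  approx interest-income impact = {gap * rate_move:+.3f}")
```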

Commercial banks’ ability to create money is constrained by capital. When a bank creates a new loan, with an associated new deposit, the bank’s balance sheet size increases, and the proportion of the balance sheet that is made up of equity (shareholders’ funds, as opposed to customer deposits, which are debt, not equity) decreases. If the bank lends so much that its equity slice approaches zero, as happened in some banks prior to the financial crisis, even a very small fall in asset prices is enough to render it insolvent. Regulatory capital requirements are intended to ensure that banks never reach such a fragile position. In contrast, central banks’ ability to create money is constrained by the willingness of their government to back them, and the ability of that government to tax the population. In practice, most central bank money these days is asset-backed, since central banks create new money when they buy assets in open market operations or Quantitative Easing, and when they lend to banks. However, in theory a central bank could literally conjure money out of thin air, without asset purchases or lending to banks. This is Milton Friedman’s famous helicopter drop. The central bank would become technically insolvent as a result, but provided the government is able to tax the population, that wouldn’t matter. The ability of the government to tax the population depends on the credibility of the government and the productive capacity of the economy. Hyperinflation can occur when the supply side of the economy collapses, rendering the population unable and/or unwilling to pay taxes. It can also occur when people distrust a government and its central bank so much that they refuse to use the currency that the central bank creates. Distrust can come about because people think the government is corrupt and/or irresponsible, or because they think that the government is going to fall and the money it creates will become worthless. But nowhere in the genesis of hyperinflation does central bank insolvency feature….
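
A stylized illustration of that capital constraint: as lending expands the balance sheet against a fixed equity base, the equity slice thins, and the asset-price fall needed to wipe it out shrinks with it. All figures below are invented.

```python
# Stylized balance sheet: a fixed equity base funding an ever larger loan book.
equity = 8.0
for assets in (100.0, 200.0, 400.0, 800.0):
    deposits = assets - equity           # the rest is debt (customer deposits)
    equity_slice = equity / assets       # share of the balance sheet that is equity
    # an asset-price fall of exactly this size wipes out the equity entirely
    print(f"assets {assets:6.0f}  deposits {deposits:6.0f}  "
          f"equity slice {equity_slice:5.1%}  insolvent after a {equity_slice:5.1%} fall")
```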

 

Incomplete Markets and Calibrations for Coherence with Hedged Portfolios. Thought of the Day 154.0

 


In complete market models such as the Black-Scholes model, probability does not really matter: the “objective” evolution of the asset is only there to define the set of “impossible” events and serves to specify the class of equivalent measures. Thus, two statistical models $P_1 \sim P_2$ with equivalent measures lead to the same option prices in a complete market setting.

This is not true anymore in incomplete markets: probabilities matter and model specification has to be taken seriously since it will affect hedging decisions. This situation is more realistic but also more challenging and calls for an integrated approach between option pricing methods and statistical modeling. In incomplete markets, not only does probability matter but attitudes to risk also matter: utility based methods explicitly incorporate these into the hedging problem via utility functions. While these methods are focused on hedging with the underlying asset, common practice is to use liquid call/put options to hedge exotic options. In incomplete markets, options are not redundant assets; therefore, if options are available as hedging instruments they can and should be used to improve hedging performance.

While the lack of liquidity in the options market prevents, in practice, the use of dynamic hedges involving options, options are commonly used for static hedging: call options are frequently used for dealing with volatility or convexity exposures and for hedging barrier options.

What are the implications of hedging with options for the choice of a pricing rule? Consider a contingent claim H and assume that we have as hedging instruments a set of benchmark options with prices $C_i$ and terminal payoffs $H_i$, $i = 1, \ldots, n$. A static hedge of H is a portfolio composed of the options $H_i$, $i = 1, \ldots, n$, and the numeraire, chosen to match as closely as possible the terminal payoff of H:

$H = V_0 + \sum_{i=1}^n x_i H_i + \int_0^T \phi \, dS + \varepsilon$ —– (1)

where $\varepsilon$ is a hedging error representing the nonhedgeable risk. Typically the $H_i$ are payoffs of call or put options, which cannot be replicated using the underlying, so adding them to the hedge portfolio increases the span of hedgeable claims and reduces the residual risk.

Consider a pricing rule Q. Assume that $E^Q[\varepsilon] = 0$ (otherwise $E^Q[\varepsilon]$ can be added to $V_0$). Then the claim H is valued under Q as:

$e^{-rT} E^Q[H] = V_0 + \sum_{i=1}^n x_i \, e^{-rT} E^Q[H_i]$ —– (2)

since the stochastic integral term, being a Q-martingale, has zero expectation. On the other hand, the cost of setting up the hedging portfolio is:

$V_0 + \sum_{i=1}^n x_i C_i$ —– (3)

So the value of the claim given by the pricing rule Q corresponds to the cost of the hedging portfolio if the model prices of the benchmark options $H_i$ correspond to their market prices $C_i$:

$e^{-rT} E^Q[H_i] = C_i^* \quad \forall i = 1, \ldots, n$ —– (4)

This condition is called calibration: the pricing rule is said to be calibrated to the option prices $C_i$, $i = 1, \ldots, n$. The condition is necessary to guarantee coherence between model prices and the cost of hedging with option portfolios; if the model is not calibrated, then the model price for a claim H may bear no relation to the effective cost of hedging it using the available options $H_i$. If a pricing rule Q is specified in an ad hoc way, the calibration conditions will not be verified, and so one way to ensure them is to incorporate them as constraints in the choice of the pricing measure Q.
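
A minimal numerical sketch of the idea, under invented inputs: the hedge weights $x_i$ are chosen here by least squares against simulated payoffs of H (one natural reading of "match as closely as possible", not the only one), and the resulting portfolio is then priced once with model prices $e^{-rT}E^Q[H_i]$ and once with market quotes $C_i$; the two numbers agree only if the model is calibrated. The lognormal model, strikes, exotic payoff and market quotes are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
r, T, S0 = 0.02, 1.0, 100.0

# Simulated terminal prices under some pricing model Q (here simply lognormal
# with an invented volatility -- a stand-in for whatever Q one has specified).
sigma_model = 0.25
ST = S0 * np.exp((r - 0.5 * sigma_model**2) * T
                 + sigma_model * np.sqrt(T) * rng.standard_normal(100_000))

strikes = np.array([90.0, 100.0, 110.0])
H_i = np.maximum(ST[:, None] - strikes, 0.0)      # benchmark call payoffs
H   = np.maximum(ST - 95.0, 0.0) ** 0.5           # some exotic claim H (invented)

# Static hedge: regress H on the benchmark payoffs plus a constant (numeraire).
X = np.column_stack([np.ones_like(ST), H_i])
coef, *_ = np.linalg.lstsq(X, H, rcond=None)
V0, x = coef[0] * np.exp(-r * T), coef[1:]

model_prices  = np.exp(-r * T) * H_i.mean(axis=0)   # e^{-rT} E^Q[H_i]
market_prices = np.array([13.5, 8.0, 4.2])          # invented C_i quotes

price_via_model = V0 + x @ model_prices             # eq. (2)
cost_of_hedge   = V0 + x @ market_prices            # eq. (3)
print(price_via_model, cost_of_hedge)
# The two coincide only if model_prices == market_prices, i.e. if Q is calibrated.
```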

Conjuncted: Long-Term Capital Management. Note Quote.


From Lowenstein’s When Genius Failed:

The real culprit in 1994 was leverage. If you aren’t in debt, you can’t go broke and can’t be made to sell, in which case “liquidity” is irrelevant. But a leveraged firm may be forced to sell, lest fast-accumulating losses put it out of business. Leverage always gives rise to this same brutal dynamic, and its dangers cannot be stressed too often…

One of LTCM’s first trades involved the thirty-year Treasury bond, which is issued by the US Government to finance the federal budget. Some $170 billion of them trade every day, and they are considered the least risky investments in the world. But a funny thing happens to thirty-year Treasurys six months or so after they are issued: they are put away in safes and drawers for long-term keeping. With fewer left in circulation, the bonds become harder to trade. Meanwhile, the Treasury issues a new thirty-year bond, which has its day in the sun. On Wall Street, the older bond, which has about twenty-nine and a half years left to maturity, is known as off the run, while the shiny new one is on the run. Being less liquid, the older one is considered less desirable, and it begins to trade at a slight discount. And, as arbitrageurs would say, a spread opens.

LTCM, with its trademark precision, calculated that owning one bond and shorting the other was one twenty-fifth as risky as owning either outright. Thus, it reckoned, it could prudently leverage this long/short arbitrage twenty-five times. This multiplied its potential for profit, but also its potential for loss. In any case, borrow it did. It paid for the cheaper, off-the-run bonds with money it had borrowed from a Wall Street bank, or from several banks. And the other bonds, the ones it sold short, it obtained through a loan as well. Actually, the transaction was more involved, though it was among the simplest in LTCM’s repertoire. No sooner did LTCM buy the off-the-run bonds than it loaned them to some other Wall Street firm, which then wired cash to LTCM as collateral. Then LTCM turned around and used this cash as collateral on the bonds it had borrowed. On Wall Street, such short-term, collateralized loans are known as “repo financing”. The beauty of the trade was that LTCM’s cash transactions were in perfect balance. The money that LTCM spent going long matched the money that it collected going short. The collateral it paid equalled the collateral it collected. In other words, LTCM pulled off the entire transaction without using a single dime of its own cash. Maintaining the position wasn’t completely cost-free, however. Though a simple trade, it actually entailed four different payment streams. LTCM collected interest on the collateral it paid out and paid interest at a slightly higher rate on the collateral it took in. It made some of this deficit back because of the difference in the initial margin, or the slightly higher coupon on the bond it owned as compared to the bond it shorted. Overall, this cost LTCM a few basis points each month.
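
A back-of-the-envelope sketch of that trade as described above. Every number here (position size, spread, price sensitivity, carry cost, holding period) is invented; the point is only how repo financing makes the cash legs net to roughly zero while the spread's convergence, scaled by leverage, drives the profit or loss.

```python
# Invented figures; only the mechanics follow the text.
notional        = 1_000_000_000   # $1bn long off-the-run, $1bn short on-the-run
convergence_bp  = 6               # assume the off/on-the-run spread narrows by 6 bp
dv01_per_100    = 0.14            # assumed price change per $100 face per 1 bp of yield
carry_bp_month  = 0.5             # assumed net cost of the four payment streams, per month
months_held     = 6

# Cash legs: collateral paid out on the bonds borrowed equals collateral taken
# in on the bonds lent, so the position is put on with (almost) no own cash.
own_cash_used = 0.0

profit_convergence = notional / 100 * dv01_per_100 * convergence_bp
carry_cost         = notional * carry_bp_month / 10_000 * months_held
pnl = profit_convergence - carry_cost
print(f"P&L on $1bn notional: {pnl:,.0f} "
      f"(convergence {profit_convergence:,.0f} minus carry {carry_cost:,.0f})")

# Leverage: if the notional is 25x the equity allocated to the trade, the same
# P&L is 25x larger relative to that equity -- and so is any loss if the spread
# widens instead of converging.
```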

Algorithmic Trading. Thought of the Day 151.0


One of the first algorithmic trading strategies consisted of using a volume-weighted average price (VWAP) as the price at which orders would be executed. The VWAP, introduced by Berkowitz et al., can be calculated as the dollar amount traded across all transactions (price times shares traded) divided by the total shares traded over a given period. If the price of a buy order is lower than the VWAP, the trade is executed; if the price is higher, then the trade is not executed. Participants wishing to lower the market impact of their trades stress the importance of market volume. Market volume impact can be measured by comparing the execution price of an order to a benchmark. The VWAP benchmark is the sum of every transaction price paid, weighted by its volume. VWAP strategies allow the order to dilute its impact through the day. Most institutional trading occurs in filling orders that exceed the daily volume. When large numbers of shares must be traded, liquidity concerns can affect price goals. For this reason, some firms offer multiday VWAP strategies to respond to customers’ requests. In order to further reduce the market impact of large orders, customers can specify their own volume participation by limiting the volume of their orders to coincide with low expected-volume days. Each order is sliced into several days’ orders and then sent to a VWAP engine for the corresponding days. VWAP strategies fall into three categories: sell the order to a broker-dealer who guarantees VWAP; cross the order at a future date at VWAP; or trade the order with the goal of achieving a price of VWAP or better.
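
A minimal computation of VWAP and the execution rule described above, on a few invented ticks:

```python
# Invented intraday ticks: (price, shares traded)
ticks = [(100.10, 500), (100.05, 1200), (100.20, 800), (100.00, 1500), (100.15, 700)]

dollar_volume = sum(price * qty for price, qty in ticks)
total_shares  = sum(qty for _, qty in ticks)
vwap = dollar_volume / total_shares
print(f"VWAP = {vwap:.4f}")

# Rule from the text: execute a buy order only if its price is below the VWAP.
buy_price = 100.05
print("execute buy" if buy_price < vwap else "do not execute")
```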

The second algorithmic trading strategy is the time-weighted average price (TWAP). TWAP allows traders to slice a trade over a certain period of time, thus an order can be cut into several equal parts and be traded throughout the time period specified by the order. TWAP is used for orders which are not dependent on volume. TWAP can overcome obstacles such as fulfilling orders in illiquid stocks with unpredictable volume. Conversely, high-volume traders can also use TWAP to execute their orders over a specific time by slicing the order into several parts so that the impact of the execution does not significantly distort the market.
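
And the corresponding TWAP mechanic, slicing an order into equal child orders spread evenly over a specified window. The order size, window and slice count are invented for the example.

```python
from datetime import datetime

def twap_slices(total_qty, start, end, n_slices):
    """Split an order into equal child orders spread evenly over [start, end]."""
    step = (end - start) / n_slices
    qty = total_qty // n_slices
    schedule = [(start + i * step, qty) for i in range(n_slices)]
    # put any rounding remainder into the last slice
    schedule[-1] = (schedule[-1][0], qty + total_qty - qty * n_slices)
    return schedule

for when, qty in twap_slices(10_000, datetime(2024, 1, 2, 9, 30),
                             datetime(2024, 1, 2, 15, 30), 12):
    print(when.time(), qty)
```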

Yet, another type of algorithmic trading strategy is the implementation shortfall or the arrival price. The implementation shortfall is defined as the difference in return between a theoretical portfolio and an implemented portfolio. When deciding to buy or sell stocks during portfolio construction, a portfolio manager looks at the prevailing prices (decision prices). However, several factors can cause execution prices to be different from decision prices. This results in returns that differ from the portfolio manager’s expectations. Implementation shortfall is measured as the difference between the dollar return of a paper portfolio (paper return) where all shares are assumed to transact at the prevailing market prices at the time of the investment decision and the actual dollar return of the portfolio (real portfolio return). The main advantage of the implementation shortfall-based algorithmic system is to manage transactions costs (most notably market impact and timing risk) over the specified trading horizon while adapting to changing market conditions and prices.
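
A minimal implementation-shortfall calculation in the paper-versus-real-portfolio form described above; the decision price, fills, final price and commission are invented.

```python
# Decision: buy 10,000 shares at the prevailing (decision) price of 50.00.
decision_price, target_qty = 50.00, 10_000
final_price = 51.00                                     # price at the end of the horizon

# Actual execution: partial fills at worse prices, plus commission.
fills = [(50.10, 4000), (50.25, 3000), (50.40, 2000)]   # 9,000 shares done
commission_per_share = 0.01

paper_return = target_qty * (final_price - decision_price)

filled_qty  = sum(qty for _, qty in fills)
real_return = sum(qty * (final_price - price) for price, qty in fills) \
              - commission_per_share * filled_qty

# Shortfall bundles market impact, timing and the opportunity cost of the unfilled shares.
shortfall = paper_return - real_return
print(f"paper {paper_return:,.0f}  real {real_return:,.0f}  shortfall {shortfall:,.0f}")
```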

The participation algorithm or volume participation algorithm is used to trade up to the order quantity using a rate of execution that is in proportion to the actual volume trading in the market. It is ideal for trading large orders in liquid instruments where controlling market impact is a priority. The participation algorithm is similar to the VWAP except that a trader can set the volume to a constant percentage of total volume of a given order. This algorithm can represent a method of minimizing supply and demand imbalances (Kendall Kim – Electronic and Algorithmic Trading Technology).

Smart order routing (SOR) algorithms allow a single order to exist simultaneously in multiple markets. They are critical for algorithmic execution models. It is highly desirable for algorithmic systems to have the ability to connect different markets in a manner that permits trades to flow quickly and efficiently from market to market. Smart routing algorithms provide full integration of information among all the participants in the different markets where the trades are routed. SOR algorithms allow traders to place large blocks of shares in the order book without fear of sending out a signal to other market participants. The algorithm matches limit orders and executes them at the midpoint of the bid-ask price quoted in different exchanges.

Handbook of Trading: Strategies for Navigating and Profiting from Currency, Bond, and Stock Markets

Transmission of Eventual Lending Rates: MCLRs. Note Quote.


Given that capital market instruments are not subject to MCLR/base rate regulations, issuances of commercial paper/bonds reflect current interest rates, as these are bought/subscribed at rates reflecting extant interest rates, making transmission instantaneous.

The fundamental challenge we have here is that there is no true floating rate liability structure for banks. One can argue that banks themselves will have to develop the floating rate deposit product, but customer response, given the complexity and uncertainty for the depositor, has been at best lukewarm. In an environment where the banking system is fighting multiple battles – asset quality, weak growth, challenges on transition to Ind AS accounting practice, rapid digitization leading to new competition from non-bank players, vulnerability in the legacy IT systems –  creating a mindset for floating rate deposits hardly appears to be a priority. 

In this context, it is clear that Marginal Cost of Funds Based Lending Rates (MCLRs) have largely come down in line with policy rates. MCLR is built on four components – marginal cost of funds, negative carry on account of the cash reserve ratio (CRR), operating costs and tenor premium. Marginal cost of funds is the marginal cost of borrowing and return on net worth for banks. The operating cost includes the cost of providing the loan product, including the cost of raising funds. The tenor premium arises from loan commitments with longer tenors. Some data indicate that while MCLR has indeed tracked policy rates (especially post-demonetization), as liquidity has been abundant, average lending rates have not yet reflected the fall in MCLR. This is simply because MCLR resets happen over a period of time, depending on the benchmark MCLR used for sanctioning the loans.
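
A stylized build-up of an MCLR from the four components named above. All parameter values, the 92/8 weighting of marginal borrowing cost versus return on net worth, and the negative-carry formula are assumptions made for illustration, not the RBI's published parameters.

```python
# Stylized 1-year MCLR build-up; every value below is an illustrative assumption.
marginal_cost_of_borrowings = 0.065    # blended rate on fresh deposits/borrowings
return_on_networth          = 0.010    # cost assigned to the equity component
crr                         = 0.04     # cash reserve ratio, earning nothing at the RBI
operating_cost              = 0.0050
tenor_premium_1y            = 0.0025

# Marginal cost of funds: weighted mix of borrowing cost and return on net worth
# (a 92/8 weighting is assumed here purely for illustration).
marginal_cost_of_funds = 0.92 * marginal_cost_of_borrowings + 0.08 * return_on_networth

# Negative carry on CRR: reserves have to be funded but earn nothing themselves.
negative_carry = crr * marginal_cost_of_funds / (1 - crr)

mclr_1y = marginal_cost_of_funds + negative_carry + operating_cost + tenor_premium_1y
print(f"illustrative 1-year MCLR: {mclr_1y:.2%}")
```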

Before jumping the gun and calling this a flaw in the structure because the benefit of lower interest rates arrives with a significant lag, note that the same lag will work in the borrower's favour when the interest rate cycle turns. In fact, given that MCLR benchmarks vary from one month to one year, unlike the base rate, banks are in a better position to cut MCLRs, as the entire book does not reset immediately. Stakeholders must therefore wait for a few more months before concluding on the effectiveness of transmission to eventual lending rates.

BASEL III: The Deflationary Symbiotic Alliance Between Governments and Banking Sector. Thought of the Day 139.0


The Bank for International Settlements (BIS) is steering the banks to deal with government debt, since governments have been running large deficits to deal with the catastrophe of the BASEL 2-inspired collapse of mortgage-backed securities. The deficits range anywhere between 3 and 7 per cent of GDP, and in some cases even higher. These deficits were being used to create a floor under growth by stimulating the economy and bailing out financial institutions that got carried away by the wholesale funding of real estate. And this is precisely what BASEL 2 promulgated, i.e. encouraging financial institutions to hold mortgage-backed securities as investments.

In come the BASEL 3 rules, which require that banks be in compliance with these regulations. But who gets to decide these regulations? Actually, the banks do, since they then come on board for discussions with the governments, and such negotiations are geared to bail banks out with government deficits in order to oil the engine of economic growth. The logic here underlines the fact that governments can continue to find a godown of sorts for their deficits, while the banks can buy government debt without any capital commitment and make a good spread without the risk, thus serving the interests of both parties mutually. Moreover, for the government the process is political, as no government would find it acceptable to be objective enough to let a bubble deflate, because any process of deleveraging would cause the banks to offset their lending orgy, which is detrimental to the engineered economic growth. Importantly, without these deficits, the financial system could go down a deflationary spiral, which might turn out to be a difficult proposition to recover from if there isn't any complicity in rhyme and reason accorded to this particular dysfunctional and symbiotic relationship. So, what's the implication of all this? The more government debt banks hold, the less overall capital they need. And who says so? BASEL 3.

But the mesh just seems to keep building up here. In the same way that banks engineered counterfeit AAA-backed securities that were in fact an improbable financial hoax, how can countries that have government debt/GDP ratios to the tune of 90 – 120 per cent get a Standard & Poor's rating of double-A? They have these ratings because they belong to an apical club that gives its members exclusive rights to a high rating even if they are irresponsible with their issuance of debt. Well, is it that simple? Yes and no. Yes, as above; and the no merely clothes itself in a bit of economic jargon, in that these are the countries whose government debt can be held without any capital against it. In other words, if a debt cannot be held, it cannot be issued, and that is the reason why countries strive to issue debt that carries a zero risk weighting.
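
A toy risk-weighted-assets calculation makes the zero-weighting point concrete. The portfolio, the risk weights and the 8% minimum are illustrative, but the mechanic is exactly the one described: shifting assets into zero-weighted sovereign debt shrinks the capital requirement without shrinking the balance sheet.

```python
# Illustrative portfolio (in billions) and assumed risk weights; sovereigns carry 0%.
portfolio   = {"government_bonds": 60.0, "corporate_loans": 30.0, "mortgages": 10.0}
risk_weight = {"government_bonds": 0.00, "corporate_loans": 1.00, "mortgages": 0.50}
min_capital_ratio = 0.08

rwa = sum(portfolio[k] * risk_weight[k] for k in portfolio)
print(f"RWA = {rwa:.1f}bn, required capital = {min_capital_ratio * rwa:.1f}bn")

# Shift 20bn from corporate loans into government bonds: the balance sheet is
# unchanged in size, but the capital requirement falls.
portfolio["corporate_loans"]  -= 20.0
portfolio["government_bonds"] += 20.0
rwa = sum(portfolio[k] * risk_weight[k] for k in portfolio)
print(f"after the shift: RWA = {rwa:.1f}bn, "
      f"required capital = {min_capital_ratio * rwa:.1f}bn")
```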

Let us take snippets across the gradations of BASEL 1, 2 and 3. In BASEL 1, the unintended consequence was that banks were all buying equity in cross-owned companies. When the unwinding happened, equity just fell apart, since any beginning of a financial crisis is tailored to smash bank equities first. That's the first wound to rationality. In BASEL 2, banks were told they could hold as much AAA-rated paper as they wanted with no capital against it. What happened when these ratings were downgraded? It triggered a tsunami cutting through pension and insurance schemes, forcing them to sell their paper and pile up huge losses meant to be absorbed by capital, which doesn't exist against these papers. So whatever gets sold is politically cushioned and buffered by the governments, for the risks cannot be afforded to get any denser, as that explosion would sound the catastrophic death knell for the economy. BASEL 3 doesn't really help, even as it mandates holding a concentrated portfolio of government debt without any capital against it, for the absorption of losses, should a crisis hit, would have to be exhumed through government bail-outs in scenarios where government debts are a century plus. So, are the banks in stability, or given to more instability, via BASEL 3? The incentives to hold ever more government securities increase bank exposure to sovereign bonds, adding to the existing exposure to government securities via repurchase transactions, investments and trading inventories. A ratings downgrade results in a fall in the value of bonds, triggering losses. Banks would then face calls for additional collateral, which would drain liquidity and would in turn require additional capital as compensation. Where would this capital come from, if not from the governments? One way out would be recapitalization through government debt. On the other hand, the markets are required to hedge against large holdings of government securities, and so short stocks, currencies and insurance companies are all made to stare volatility in the face as it rips through them, the net result being falling liquidity. So this vicious cycle would continue to cycle its way through any downgrades. And that's why the deflationary symbiotic alliance between the governments and the banking sector isn't anything more than high-fatigue tolerance….

Knowledge Limited for Dummies….Didactics.


Bertrand Russell with Alfred North Whitehead, in the Principia Mathematica aimed to demonstrate that “all pure mathematics follows from purely logical premises and uses only concepts defined in logical terms.” Its goal was to provide a formalized logic for all mathematics, to develop the full structure of mathematics where every premise could be proved from a clear set of initial axioms.

Russell observed of the dense and demanding work, “I used to know of only six people who had read the later parts of the book. Three of those were Poles, subsequently (I believe) liquidated by Hitler. The other three were Texans, subsequently successfully assimilated.” The complex mathematical symbols of the manuscript required it to be written by hand, and its sheer size – when it was finally ready for the publisher, Russell had to hire a panel truck to send it off – made it impossible to copy. Russell recounted that “every time that I went out for a walk I used to be afraid that the house would catch fire and the manuscript get burnt up.”

Momentous though it was, the greatest achievement of Principia Mathematica was realized two decades after its completion, when it provided the fodder for the metamathematical enterprises of an Austrian, Kurt Gödel. Although Gödel did face the risk of being liquidated by Hitler (therefore fleeing to the Institute for Advanced Study at Princeton), he was neither a Pole nor a Texan. In 1931, he wrote a treatise entitled On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which demonstrated that the goal Russell and Whitehead had so single-mindedly pursued was unattainable.

The flavor of Gödel’s basic argument can be captured in the contradictions contained in a schoolboy’s brainteaser. A sheet of paper has the words “The statement on the other side of this paper is true” written on one side and “The statement on the other side of this paper is false” on the reverse. The conflict isn’t resolvable. Or, even more trivially, a statement like; “This statement is unprovable.” You cannot prove the statement is true, because doing so would contradict it. If you prove the statement is false, then that means its converse is true – it is provable – which again is a contradiction.

The key point of contradiction for these two examples is that they are self-referential. This same sort of self-referentiality is the keystone of Gödel’s proof, where he uses statements that imbed other statements within them. This problem did not totally escape Russell and Whitehead. By the end of 1901, Russell had completed the first round of writing Principia Mathematica and thought he was in the homestretch, but was increasingly beset by these sorts of apparently simple-minded contradictions falling in the path of his goal. He wrote that “it seemed unworthy of a grown man to spend his time on such trivialities, but . . . trivial or not, the matter was a challenge.” Attempts to address the challenge extended the development of Principia Mathematica by nearly a decade.

Yet Russell and Whitehead had, after all that effort, missed the central point. Like granite outcroppings piercing through a bed of moss, these apparently trivial contradictions were rooted in the core of mathematics and logic, and were only the most readily manifest examples of a limit to our ability to structure formal mathematical systems. Just four years before Gödel had defined the limits of our ability to conquer the intellectual world of mathematics and logic with the publication of his Undecidability Theorem, the German physicist Werner Heisenberg’s celebrated Uncertainty Principle had delineated the limits of inquiry into the physical world, thereby undoing the efforts of another celebrated intellect, the great mathematician Pierre-Simon Laplace. In the early 1800s Laplace had worked extensively to demonstrate the purely mechanical and predictable nature of planetary motion. He later extended this theory to the interaction of molecules. In the Laplacean view, molecules are just as subject to the laws of physical mechanics as the planets are. In theory, if we knew the position and velocity of each molecule, we could trace its path as it interacted with other molecules, and trace the course of the physical universe at the most fundamental level. Laplace envisioned a world of ever more precise prediction, where the laws of physical mechanics would be able to forecast nature in increasing detail and ever further into the future, a world where “the phenomena of nature can be reduced in the last analysis to actions at a distance between molecule and molecule.”

What Gödel did to the work of Russell and Whitehead, Heisenberg did to Laplace’s concept of causality. The Uncertainty Principle, though broadly applied and draped in metaphysical context, is a well-defined and elegantly simple statement of physical reality – namely, the combined accuracy of a measurement of an electron’s location and its momentum cannot vary far from a fixed value. The reason for this, viewed from the standpoint of classical physics, is that accurately measuring the position of an electron requires illuminating the electron with light of a very short wavelength. But the shorter the wavelength the greater the amount of energy that hits the electron, and the greater the energy hitting the electron the greater the impact on its velocity.
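
Stated symbolically, the "fixed value" referred to above is of the order of Planck's constant: in the standard modern form, with $\hbar$ the reduced Planck constant,

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

where $\Delta x$ and $\Delta p$ are the uncertainties in position and momentum respectively.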

What is true in the subatomic sphere ends up being true – though with rapidly diminishing significance – for the macroscopic. Nothing can be measured with complete precision as to both location and velocity because the act of measuring alters the physical properties. The idea that if we know the present we can calculate the future was proven invalid – not because of a shortcoming in our knowledge of mechanics, but because the premise that we can perfectly know the present was proven wrong. These limits to measurement imply limits to prediction. After all, if we cannot know even the present with complete certainty, we cannot unfailingly predict the future. It was with this in mind that Heisenberg, ecstatic about his yet-to-be-published paper, exclaimed, “I think I have refuted the law of causality.”

The epistemological extrapolation of Heisenberg’s work was that the root of the problem was man – or, more precisely, man’s examination of nature, which inevitably impacts the natural phenomena under examination so that the phenomena cannot be objectively understood. Heisenberg’s principle was not something that was inherent in nature; it came from man’s examination of nature, from man becoming part of the experiment. (So in a way the Uncertainty Principle, like Gödel’s Undecidability Proposition, rested on self-referentiality.) While it did not directly refute Einstein’s assertion against the statistical nature of the predictions of quantum mechanics that “God does not play dice with the universe,” it did show that if there were a law of causality in nature, no one but God would ever be able to apply it. The implications of Heisenberg’s Uncertainty Principle were recognized immediately, and it became a simple metaphor reaching beyond quantum mechanics to the broader world.

This metaphor extends neatly into the world of financial markets. In the purely mechanistic universe of classical physics, we could apply Newtonian laws to project the future course of nature, if only we knew the location and velocity of every particle. In the world of finance, the elementary particles are the financial assets. In a purely mechanistic financial world, if we knew the position each investor has in each asset and the ability and willingness of liquidity providers to take on those assets in the event of a forced liquidation, we would be able to understand the market’s vulnerability. We would have an early-warning system for crises. We would know which firms are subject to a liquidity cycle, and which events might trigger that cycle. We would know which markets are being overrun by speculative traders, and thereby anticipate tactical correlations and shifts in the financial habitat. The randomness of nature and economic cycles might remain beyond our grasp, but the primary cause of market crisis, and the part of market crisis that is of our own making, would be firmly in hand.

The first step toward the Laplacean goal of complete knowledge is the advocacy by certain financial market regulators to increase the transparency of positions. Politically, that would be a difficult sell – as would any kind of increase in regulatory control. Practically, it wouldn’t work. Just as the atomic world turned out to be more complex than Laplace conceived, the financial world may be similarly complex and not reducible to a simple causality. The problems with position disclosure are many. Some financial instruments are complex and difficult to price, so it is impossible to measure precisely the risk exposure. Similarly, in hedge positions a slight error in the transmission of one part, or asynchronous pricing of the various legs of the strategy, will grossly misstate the total exposure. Indeed, the problems and inaccuracies in using position information to assess risk are exemplified by the fact that major investment banking firms choose to use summary statistics rather than position-by-position analysis for their firmwide risk management despite having enormous resources and computational power at their disposal.

Perhaps more importantly, position transparency also has implications for the efficient functioning of the financial markets beyond the practical problems involved in its implementation. The problems in the examination of elementary particles in the financial world are the same as in the physical world: Beyond the inherent randomness and complexity of the systems, there are simply limits to what we can know. To say that we do not know something is as much a challenge as it is a statement of the state of our knowledge. If we do not know something, that presumes that either it is not worth knowing or it is something that will be studied and eventually revealed. It is the hubris of man that all things are discoverable. But for all the progress that has been made, perhaps even more exciting than the rolling back of the boundaries of our knowledge is the identification of realms that can never be explored. A sign in Einstein’s Princeton office read, “Not everything that counts can be counted, and not everything that can be counted counts.”

The behavioral analogue to the Uncertainty Principle is obvious. There are many psychological inhibitions that lead people to behave differently when they are observed than when they are not. For traders it is a simple matter of dollars and cents that will lead them to behave differently when their trades are open to scrutiny. Beneficial though it may be for the liquidity demander and the investor, for the liquidity supplier transparency is bad. The liquidity supplier does not intend to hold the position for a long time, like the typical liquidity demander might. Like a market maker, the liquidity supplier will come back to the market to sell off the position – ideally when there is another investor who needs liquidity on the other side of the market. If other traders know the liquidity supplier’s positions, they will logically infer that there is a good likelihood these positions shortly will be put into the market. The other traders will be loath to be the first ones on the other side of these trades, or will demand more of a price concession if they do trade, knowing the overhang that remains in the market.

This means that increased transparency will reduce the amount of liquidity provided for any given change in prices. This is by no means a hypothetical argument. Frequently, even in the most liquid markets, broker-dealer market makers (liquidity providers) use brokers to enter their market bids rather than entering the market directly in order to preserve their anonymity.

The more information we extract to divine the behavior of traders and the resulting implications for the markets, the more the traders will alter their behavior. The paradox is that to understand and anticipate market crises, we must know positions, but knowing and acting on positions will itself generate a feedback into the market. This feedback often will reduce liquidity, making our observations less valuable and possibly contributing to a market crisis. Or, in rare instances, the observer/feedback loop could be manipulated to amass fortunes.

One might argue that the physical limits of knowledge asserted by Heisenberg’s Uncertainty Principle are critical for subatomic physics, but perhaps they are really just a curiosity for those dwelling in the macroscopic realm of the financial markets. We cannot measure an electron precisely, but certainly we still can “kind of know” the present, and if so, then we should be able to “pretty much” predict the future. Causality might be approximate, but if we can get it right to within a few wavelengths of light, that still ought to do the trick. The mathematical system may be demonstrably incomplete, and the world might not be pinned down on the fringes, but for all practical purposes the world can be known.

Unfortunately, while “almost” might work for horseshoes and hand grenades, 30 years after Gödel and Heisenberg yet a third limitation of our knowledge was in the wings, a limitation that would close the door on any attempt to block out the implications of microscopic uncertainty on predictability in our macroscopic world. Based on observations made by Edward Lorenz in the early 1960s and popularized by the so-called butterfly effect – the fanciful notion that the beating wings of a butterfly could change the predictions of an otherwise perfect weather forecasting system – this limitation arises because in some important cases immeasurably small errors can compound over time to limit prediction in the larger scale. Half a century after the limits of measurement and thus of physical knowledge were demonstrated by Heisenberg in the world of quantum mechanics, Lorenz piled on a result that showed how microscopic errors could propagate to have a stultifying impact in nonlinear dynamic systems. This limitation could come into the forefront only with the dawning of the computer age, because it is manifested in the subtle errors of computational accuracy.

The essence of the butterfly effect is that small perturbations can have large repercussions in massive, random forces such as weather. Edward Lorenz was testing and tweaking a model of weather dynamics on a rudimentary vacuum-tube computer. The program was based on a small system of simultaneous equations, but seemed to provide an inkling into the variability of weather patterns. At one point in his work, Lorenz decided to examine in more detail one of the solutions he had generated. To save time, rather than starting the run over from the beginning, he picked some intermediate conditions that had been printed out by the computer and used those as the new starting point. The values he typed in were the same as the values held in the original simulation at that point, so the results the simulation generated from that point forward should have been the same as in the original; after all, the computer was doing exactly the same operations. What he found was that as the simulated weather pattern progressed, the results of the new run diverged, first very slightly and then more and more markedly, from those of the first run. After a point, the new path followed a course that appeared totally unrelated to the original one, even though they had started at the same place.

Lorenz at first thought there was a computer glitch, but as he investigated further, he discovered the basis of a limit to knowledge that rivaled that of Heisenberg and Gödel. The problem was that the numbers he had used to restart the simulation had been reentered based on his printout from the earlier run, and the printout rounded the values to three decimal places while the computer carried the values to six decimal places. This rounding, clearly insignificant at first, propagated a slight error into the next-round results, and this error grew with each new iteration of the program as it moved the simulation of the weather forward in time. The error doubled every four simulated days, so that after a few months the solutions were going their own separate ways. The slightest of changes in the initial conditions had traced out a wholly different pattern of weather.
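
The effect is easy to reproduce. The sketch below is not Lorenz's original program: it uses the familiar three-variable Lorenz system (a later, simpler cousin of the multi-equation weather model he was actually running) and a crude Euler integrator, and it restarts one run from values rounded to three decimal places, mirroring the printout incident described above.

```python
import numpy as np

def lorenz_path(x0, y0, z0, steps=5000, dt=0.01, sigma=10.0, rho=28.0, beta=8/3):
    """Integrate the Lorenz equations with a simple Euler scheme (illustrative only)."""
    out = np.empty((steps, 3))
    x, y, z = x0, y0, z0
    for i in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out[i] = (x, y, z)
    return out

full = lorenz_path(1.0, 1.0, 1.0)

# 'Printout' restart: take the state partway through, round it to three decimal
# places, and continue the run from those slightly truncated values.
x, y, z = np.round(full[999], 3)
rerun = lorenz_path(x, y, z, steps=4000)

divergence = np.abs(full[1000:, 0] - rerun[:, 0])
for step in (0, 500, 1000, 2000, 3999):
    print(f"step {step:4d}: |difference in x| = {divergence[step]:.5f}")
# The tiny rounding error grows until the two 'weather' paths are unrelated.
```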

Intrigued by his chance observation, Lorenz wrote an article entitled “Deterministic Nonperiodic Flow,” which stated that “nonperiodic solutions are ordinarily unstable with respect to small modifications, so that slightly differing initial states can evolve into considerably different states.” Translation: Long-range weather forecasting is worthless. For his application in the narrow scientific discipline of weather prediction, this meant that no matter how precise the starting measurements of weather conditions, there was a limit after which the residual imprecision would lead to unpredictable results, so that “long-range forecasting of specific weather conditions would be impossible.” And since this occurred in a very simple laboratory model of weather dynamics, it could only be worse in the more complex equations that would be needed to properly reflect the weather. Lorenz discovered the principle that would emerge over time into the field of chaos theory, where a deterministic system generated with simple nonlinear dynamics unravels into an unrepeated and apparently random path.

The simplicity of the dynamic system Lorenz had used suggests a far-reaching result: Because we cannot measure without some error (harking back to Heisenberg), for many dynamic systems our forecast errors will grow to the point that even an approximation will be out of our hands. We can run a purely mechanistic system that is designed with well-defined and apparently well-behaved equations, and it will move over time in ways that cannot be predicted and, indeed, that appear to be random. This gets us to Santa Fe.

The principal conceptual thread running through the Santa Fe research asks how apparently simple systems, like that discovered by Lorenz, can produce rich and complex results. Its method of analysis in some respects runs in the opposite direction of the usual path of scientific inquiry. Rather than taking the complexity of the world and distilling simplifying truths from it, the Santa Fe Institute builds a virtual world governed by simple equations that when unleashed explode into results that generate unexpected levels of complexity.

In economics and finance, the institute’s agenda was to create artificial markets with traders and investors who followed simple and reasonable rules of behavior and to see what would happen. Some of the traders built into the model were trend followers, others bought or sold based on the difference between the market price and perceived value, and yet others traded at random times in response to liquidity needs. The simulations then printed out the paths of prices for the various market instruments. Qualitatively, these paths displayed all the richness and variation we observe in actual markets, replete with occasional bubbles and crashes. The exercises did not produce positive results for predicting or explaining market behavior, but they did illustrate that it is not hard to create a market that looks on the surface an awful lot like a real one, and to do so with actors who are following very simple rules. The mantra is that simple systems can give rise to complex, even unpredictable dynamics, an interesting converse to the point that much of the complexity of our world can – with suitable assumptions – be made to appear simple, summarized with concise physical laws and equations.
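
In that spirit, here is a toy artificial market of the same flavour: a trend-following rule, a value rule and liquidity-motivated noise traders submit net demand each period, and the price moves with the order imbalance. Every rule and parameter is invented; the only claim is that very simple rules already produce jagged, bubble-and-crash-looking price paths.

```python
import random

random.seed(1)
price, value = 100.0, 100.0
prices = [price]

for t in range(1000):
    last_move = 0.0 if len(prices) < 2 else prices[-1] - prices[-2]

    trend_demand = 5.0 * (1 if last_move > 0 else -1 if last_move < 0 else 0)  # chase the move
    value_demand = 0.5 * (value - price)                                       # buy cheap, sell rich
    noise_demand = random.gauss(0.0, 4.0)                                      # liquidity needs

    imbalance = trend_demand + value_demand + noise_demand
    price = max(1.0, price + 0.1 * imbalance)   # price impact of the net order flow
    value += random.gauss(0.0, 0.2)             # slowly drifting perceived value
    prices.append(price)

print(f"min {min(prices):.1f}  max {max(prices):.1f}  last {prices[-1]:.1f}")
```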

The systems explored by Lorenz were deterministic. They were governed definitively and exclusively by a set of equations where the value in every period could be unambiguously and precisely determined based on the values of the previous period. And the systems were not very complex. By contrast, whatever set of equations might be divined to govern the financial world, they are not simple and, furthermore, they are not deterministic. There are random shocks from political and economic events and from the shifting preferences and attitudes of the actors. If we cannot hope to know the course of deterministic systems like fluid mechanics, then no level of detail will allow us to forecast the long-term course of the financial world, buffeted as it is by the vagaries of the economy and the whims of psychology.

Statistical Arbitrage. Thought of the Day 123.0


In the perfect market paradigm, assets can be bought and sold instantaneously with no transaction costs. For many financial markets, such as listed stocks and futures contracts, the reality of the market comes close to this ideal – at least most of the time. The commission for most stock transactions by an institutional trader is just a few cents a share, and the bid/offer spread is between one and five cents. Also implicit in the perfect market paradigm is a level of liquidity where the act of buying or selling does not affect the price. The market is composed of participants who are so small relative to the market that they can execute their trades, extracting liquidity from the market as they demand, without moving the price.

That’s where the perfect market vision starts to break down. Not only does the demand for liquidity move prices, but it also is the primary driver of the day-by-day movement in prices – and the primary driver of crashes and price bubbles as well. The relationship between liquidity and the prices of related stocks also became the primary driver of one of the most powerful trading models in the past 20 years – statistical arbitrage.

If you spend any time at all on a trading floor, it becomes obvious that something more than information moves prices. Throughout the day, the 10-year bond trader gets orders from the derivatives desk to hedge a swap position, from the mortgage desk to hedge mortgage exposure, from insurance clients who need to sell bonds to meet liabilities, and from bond mutual funds that need to invest the proceeds of new accounts. None of these orders has anything to do with information; each one has everything to do with a need for liquidity. The resulting price changes give the market no signal concerning information; the price changes are only the result of the need for liquidity. And the party on the other side of the trade who provides this liquidity will on average make money for doing so. For the liquidity demander, time is more important than price; he is willing to make a price concession to get his need fulfilled.

Liquidity needs will be manifest in the bond traders’ own activities. If their inventory grows too large and they feel overexposed, they will aggressively hedge or liquidate a portion of the position. And they will do so in a way that respects the liquidity constraints of the market. A trader who needs to sell 2,000 bond futures to reduce exposure does not say, “The market is efficient and competitive, and my actions are not based on any information about prices, so I will just put those contracts in the market and everybody will pay the fair price for them.” If the trader dumps 2,000 contracts into the market, that offer obviously will affect the price even though the trader does not have any new information. Indeed, the trade would affect the market price even if the market knew the selling was not based on an informational edge.

So the principal reason for intraday price movement is the demand for liquidity. This view of the market – a liquidity view rather than an informational view – replaces the conventional academic perspective of the role of the market, in which the market is efficient and exists solely for conveying information. Why the change in roles? For one thing, it’s harder to get an information advantage, what with the globalization of markets and the widespread dissemination of real-time information. At the same time, the growth in the number of market participants means there are more incidents of liquidity demand. They want it, and they want it now.

Investors or traders who are uncomfortable with their level of exposure will be willing to pay up to get someone to take the position. The more uncomfortable the traders are, the more they will pay. And well they should, because someone else is getting saddled with the risk of the position, someone who most likely did not want to take on that position at the existing market price. Thus the demand for liquidity not only is the source of most price movement; it is at the root of most trading strategies. It is this liquidity-oriented, tectonic market shift that has made statistical arbitrage so powerful.

Statistical arbitrage originated in the 1980s from the hedging demand of Morgan Stanley’s equity block-trading desk, which at the time was the center of risk taking on the equity trading floor. Like other broker-dealers, Morgan Stanley continually faced the problem of how to execute large block trades efficiently without suffering a price penalty. Often, major institutions discover they can clear a large block trade only at a large discount to the posted price. The reason is simple: Other traders will not know if there is more stock to follow, and the large size will leave them uncertain about the reason for the trade. It could be that someone knows something they don’t and they will end up on the wrong side of the trade once the news hits the street. The institution can break the block into a number of smaller trades and put them into the market one at a time. Though that’s a step in the right direction, after a while it will become clear that there is persistent demand on one side of the market, and other traders, uncertain who it is and how long it will continue, will hesitate.

The solution to this problem is to execute the trade through a broker-dealer’s block-trading desk. The block-trading desk gives the institution a price for the entire trade, and then acts as an intermediary in executing the trade on the exchange floor. Because the block traders know the client, they have a pretty good idea if the trade is a stand-alone trade or the first trickle of a larger flow. For example, if the institution is a pension fund, it is likely it does not have any special information, but it simply needs to sell the stock to meet some liability or to buy stock to invest a new inflow of funds. The desk adjusts the spread it demands to execute the block accordingly. The block desk has many transactions from many clients, so it is in a good position to mask the trade within its normal business flow. And it also might have clients who would be interested in taking the other side of the transaction.

The block desk could end up having to sit on the stock because there is simply no demand and because throwing the entire position onto the floor will cause prices to run against it. Or some news could suddenly break, causing the market to move against the position held by the desk. Or, in yet a third scenario, another big position could hit the exchange floor that moves prices away from the desk’s position and completely fills existing demand. A strategy evolved at some block desks to reduce this risk by hedging the block with a position in another stock. For example, if the desk received an order to buy 100,000 shares of General Motors, it might immediately go out and buy 10,000 or 20,000 shares of Ford Motor Company against that position. If news moved the stock price prior to the GM block being acquired, Ford would also likely be similarly affected. So if GM rose, making it more expensive to fill the customer’s order, a position in Ford would also likely rise, partially offsetting this increase in cost.
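A rough way to size such a partial hedge is to regress the returns of the stock being acquired on the returns of the related stock and scale by relative notional. The sketch below does this with simulated return series standing in for historical GM/Ford data; the prices, volatilities, block size, and hedge fraction are invented for illustration.

```python
import random

# Sketch: size a pairwise hedge with an ordinary-least-squares beta of the
# target stock's returns on the hedge stock's returns. The simulated returns
# below stand in for historical data and are purely illustrative.

random.seed(7)
common = [random.gauss(0.0, 0.015) for _ in range(250)]              # shared market/sector factor
hedge_ret = [c + random.gauss(0.0, 0.010) for c in common]           # e.g. the Ford leg
target_ret = [0.8 * c + random.gauss(0.0, 0.012) for c in common]    # e.g. the GM leg

mean_h = sum(hedge_ret) / len(hedge_ret)
mean_t = sum(target_ret) / len(target_ret)
cov = sum((h - mean_h) * (t - mean_t) for h, t in zip(hedge_ret, target_ret))
var = sum((h - mean_h) ** 2 for h in hedge_ret)
beta = cov / var                        # OLS slope of target returns on hedge returns

block_shares = 100_000                  # size of the block being worked (assumed)
target_price, hedge_price = 40.0, 12.0  # assumed prices of the two stocks
hedge_fraction = 0.2                    # hedge only part of the block, as the desk did

hedge_shares = hedge_fraction * beta * block_shares * target_price / hedge_price
print(f"beta = {beta:.2f}; buy roughly {hedge_shares:,.0f} shares of the pair stock")
```

A full beta hedge (hedge_fraction = 1) would neutralize the common move of the pair entirely; the desk’s partial hedge trades off risk reduction against the cost and market footprint of the second position.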

This was the case at Morgan Stanley, where the desk maintained a list of stock pairs – stocks that moved closely together, especially in the short term – so that a partial hedge was always at the ready. By reducing risk, the pairs trade also gave the desk more time to work out of the trade. This helped to lessen the liquidity-related movement of a stock’s price during a big block trade. As a result, the strategy increased the profit for the desk.

The pairs increased profits, yet somehow that lightbulb didn’t go on in the world of equity trading, which was largely devoid of principal transactions and systematic risk taking. Instead, the block traders epitomized the image of cigar-chewing gamblers, playing market poker with millions of dollars of capital at a clip while working the phones from one deal to the next, riding in a cloud of trading mayhem. They were too busy to exploit the fact, or it never occurred to them, that the pairs hedging they routinely used held the secret to a revolutionary trading strategy that would dwarf their desk’s operations and make a fortune for a generation of less flamboyant, more analytical traders. Used on a different scale and applied for profit making rather than hedging, their pairwise hedges became the genesis of statistical arbitrage trading. The pairwise stock trades that form the elements of statistical arbitrage trading in the equity market are just one more flavor of spread trades. On an individual basis, they’re not very good spread trades. It is the diversification that comes from holding many pairs that makes the strategy a success. But even then, although its name suggests otherwise, statistical arbitrage is a spread trade, not a true arbitrage trade.
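As a sketch of what one element of such a book looks like, the snippet below trades a single pair: it tracks the spread between two related stocks, standardizes it against its recent history, fades large deviations, and closes out once the spread normalizes. The window length, entry and exit thresholds, and price paths are all illustrative assumptions; the strategy’s edge, as noted above, comes from running many such pairs at once.

```python
# One statistical-arbitrage pair: fade large deviations of the spread between
# two related stocks from its recent average. Window, thresholds, and the
# made-up price paths are illustrative assumptions.

def zscore(series):
    m = sum(series) / len(series)
    sd = (sum((x - m) ** 2 for x in series) / len(series)) ** 0.5
    return (series[-1] - m) / sd if sd > 0 else 0.0

def pair_position(prices_a, prices_b, position, window=20, entry=2.0, exit_=0.5):
    """Update the spread position: +1 = long A / short B, -1 = the reverse, 0 = flat."""
    spread = [pa - pb for pa, pb in zip(prices_a, prices_b)]
    if len(spread) < window:
        return position
    z = zscore(spread[-window:])
    if position == 0:
        if z > entry:
            return -1                  # spread rich: short A, buy B, bet on convergence
        if z < -entry:
            return +1                  # spread cheap: buy A, short B
        return 0
    return 0 if abs(z) < exit_ else position   # hold until the spread normalizes

# Usage with invented prices: stock B dislocates from A between days 40 and 60.
a = [40.0 + 0.02 * i for i in range(80)]
b = [38.0 + 0.02 * i - (1.0 if 40 < i < 60 else 0.0) for i in range(80)]

pos = 0
for t in range(21, 81):
    new = pair_position(a[:t], b[:t], pos)
    if new != pos:
        print(f"day {t}: position {pos:+d} -> {new:+d}")
    pos = new
```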

Fragmentation – Lit and Dark Electronic Exchanges. Thought of the Day 116.0


Exchanges also control the amount and degree of granularity of the information you receive (e.g., you can use the consolidated/public feed at a low cost or pay a relatively much larger cost for direct/proprietary feeds from the exchanges). They also monetise the need for speed by renting out computer/server space next to their matching engines, a process called colocation. Through colocation, exchanges can provide uniform service to trading clients at competitive rates. Having the traders’ trading engines at a common location owned by the exchange simplifies the exchange’s ability to provide uniform service, as it can control the hardware connecting each client to the trading engine, the cable (so all have the same cable of the same length), and the network. This ensures that all traders in colocation have the same fast access and are not disadvantaged (at least in terms of exchange-provided hardware). Naturally, this imposes a clear distinction between traders who are colocated and those who are not. Those not colocated will always have a speed disadvantage. It then becomes an issue for regulators, who have to ensure that exchanges keep access to colocation sufficiently competitive.

The issue of distance from the trading engine brings us to another key dimension of trading nowadays, especially in US equity markets, namely fragmentation. A trader in US equities markets has to be aware that there are up to 13 lit electronic exchanges and more than 40 dark ones. Together with this wide range of trading options, there is also specific regulation (the so-called ‘trade-through’ rules) which affects what happens to market orders sent to one exchange if there are better execution prices at other exchanges. The interaction of multiple trading venues, latency when moving between these venues, and regulation introduces additional dimensions to keep in mind when designing successful trading strategies.
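A toy illustration of what fragmentation means for a marketable order is given below: the router ranks venues by fee-adjusted displayed price and never executes at a venue while a better price is posted elsewhere. The venue names, quotes, sizes, and fees are invented, and real smart order routing must of course follow the actual order-protection rules and the live depth at each venue.

```python
# Toy order router across fragmented venues. Venues, quotes, and fee levels
# are invented; a real router must respect the actual trade-through
# (order-protection) rules and the live depth at each venue.

venues = {
    "EXCH_A": {"ask": 100.02,  "ask_size": 300, "taker_fee": 0.0030},
    "EXCH_B": {"ask": 100.01,  "ask_size": 200, "taker_fee": 0.0010},
    "DARK_C": {"ask": 100.015, "ask_size": 500, "taker_fee": 0.0000},
}

def route_buy(quantity):
    """Split a buy order across venues, best fee-adjusted price first."""
    fills = []
    ranked = sorted(venues.items(), key=lambda kv: kv[1]["ask"] + kv[1]["taker_fee"])
    for name, q in ranked:
        if quantity <= 0:
            break
        take = min(quantity, q["ask_size"])
        fills.append((name, take, q["ask"] + q["taker_fee"]))
        quantity -= take
    return fills

for venue, qty, px in route_buy(600):
    print(f"buy {qty} on {venue} at effective price {px:.4f}")
```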

The role of time is fundamental in the usual price-time priority electronic exchange, and in a fragmented market the issue becomes even more important. Traders need to be able to adjust their trading positions fast in response to, or in anticipation of, changes in market circumstances, not just at the local exchange but at other markets as well. The race to be the first in or out of a certain position is one of the focal points of the debate on the benefits and costs of ‘high-frequency trading’.

The importance of speed permeates the whole process of designing trading algorithms, from the actual code, to the choice of programming language, to the hardware it is implemented on, to the characteristics of the connection to the matching engine, and the way orders are routed within an exchange and between exchanges. Exchanges, being aware of the importance of speed, have adapted and, amongst other things, moved well beyond the basic two types of orders (Market Orders and Limit Orders). Any trader should be very well-informed regarding all the different order types available at the exchanges, what they are and how they may be used.

When coding an algorithm one should be very aware of all the possible types of orders allowed, not just in one exchange, but in all competing exchanges where one’s asset of interest is traded. Being uninformed about the variety of order types can lead to significant losses. Since some of these order types allow changes and adjustments at the trading-engine level, they cannot be beaten in terms of latency by the trader’s own engine, regardless of how efficiently one’s algorithms are coded and hardwired.


Another important issue to be aware of is that trading in an exchange is not free, and the cost is not the same for all traders. For example, many exchanges run what is referred to as a maker-taker system of fees, whereby a trader sending an MO (and hence taking liquidity away from the market) pays a trading fee, while a trader whose posted LO is filled by the MO (that is, the LO with which the MO is matched) will pay a much lower trading fee, or even receive a payment (a rebate) from the exchange for providing liquidity (making the market). On the other hand, there are markets with an inverted fee schedule, a taker-maker system where the fee structure is the reverse: those providing liquidity pay a higher fee than those taking liquidity (who may even get a rebate). The issue of exchange fees is quite important, as fees distort observed market prices (when you make a transaction, the relevant price for you is the net price you pay/receive, which is the published price net of fees).
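A tiny numerical sketch of that distortion, with an invented quote and fee/rebate levels in the typical per-share range:

```python
# Net prices under an assumed maker-taker fee schedule. The quoted price and
# the per-share fee/rebate numbers are invented for illustration.

quoted_price = 50.00      # price at which the resting LO and the incoming MO match
taker_fee    = 0.0030     # per share, paid by the aggressive (liquidity-taking) side
maker_rebate = 0.0020     # per share, received by the passive (liquidity-providing) side

buyer_net_price  = quoted_price + taker_fee      # aggressive buyer pays more than the tape shows
seller_net_price = quoted_price + maker_rebate   # passive seller receives more than the quote

print(f"printed price       : {quoted_price:.4f}")
print(f"taker's net buy px  : {buyer_net_price:.4f}")
print(f"maker's net sell px : {seller_net_price:.4f}")

# Under an inverted (taker-maker) schedule the roles swap: the resting order
# pays the fee and the aggressive order may collect the rebate.
```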

Credit Risk Portfolio. Note Quote.


The recent development in credit markets is characterized by a flood of innovative credit-risky structures. State-of-the-art portfolios contain derivative instruments ranging from simple, nearly commoditized contracts such as credit default swaps (CDS), to first-generation portfolio derivatives such as first-to-default (FTD) baskets and collateralized debt obligation (CDO) tranches, up to complex structures involving spread options and different asset classes (hybrids). These new structures allow portfolio managers to implement multidimensional investment strategies that seamlessly conform to their market view. Moreover, the exploding liquidity in credit markets makes tactical (short-term) overlay management very cost-efficient. While the outperformance potential of active portfolio management will put old-school investment strategies (such as buy-and-hold) under enormous pressure, managing a highly complex credit portfolio requires the introduction of new optimization technologies.

New derivatives allow the decoupling of business processes in the risk management industry (in banking, as well as in asset management), since credit treasury units are now able to manage specific parts of credit risk actively and independently. The traditional feedback loop between risk management and sales, which was needed to structure the desired portfolio characteristics only by selective business acquisition, is now outdated. Strategic cross asset management will gain in importance, as a cost-efficient overlay management can now be implemented by combining liquid instruments from the credit universe.

In any case, all these developments force portfolio managers to adopt an integrated approach. All involved risk factors (spread term structures including curve effects, spread correlations, implied default correlations, and implied spread volatilities) have to be captured and integrated into appropriate risk figures. We take a look at constant proportion debt obligations (CPDOs) as a leveraged exposure to credit indices, constant proportion portfolio insurance (CPPI) as a capital-guaranteed instrument, CDO tranches to tap the correlation market, and equity futures to include exposure to stock markets in the portfolio.
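Of these, CPPI has the simplest mechanics, so a minimal sketch of its allocation rule may help: risky exposure is kept at a multiple of the cushion above the guaranteed floor, with the remainder in the safe asset. The multiplier, floor, rates, and simulated risky returns below are all assumptions for illustration.

```python
import random

# Minimal CPPI allocation rule: risky exposure = multiplier x cushion over the
# guaranteed floor, remainder in the safe asset. Multiplier, floor, rates, and
# the simulated risky-asset returns are illustrative assumptions.

random.seed(1)

portfolio = 100.0
floor     = 90.0          # present value of the capital guarantee (assumed)
m         = 4.0           # CPPI multiplier (assumed)
safe_rate = 0.03 / 252    # daily accrual of the safe asset (assumed)

for day in range(252):
    cushion = max(portfolio - floor, 0.0)
    risky   = min(m * cushion, portfolio)   # cap exposure at the portfolio (no leverage here)
    safe    = portfolio - risky

    risky_return = random.gauss(0.0003, 0.01)    # invented daily credit-index return
    portfolio = risky * (1.0 + risky_return) + safe * (1.0 + safe_rate)
    floor *= (1.0 + safe_rate)                   # the floor accretes at the safe rate

print(f"year-end value {portfolio:.2f} vs guaranteed floor {floor:.2f}")
```

The rule de-risks mechanically as the cushion shrinks; its weak point is gap risk, a jump large enough to break through the floor before the allocation can be cut.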

For an integrated credit portfolio management approach, it is of central importance to aggregate risks over various instruments with different payoff characteristics. A state-of-the-art credit portfolio contains not only linear risks (CDS and CDS index contracts) but also nonlinear risks (such as FTD baskets, CDO tranches, or credit default swaptions). From a practitioner’s point of view there is a simple solution for this risk aggregation problem, namely delta-gamma management. In such a framework, one approximates the risks of all instruments in a portfolio by their first- and second-order sensitivities and aggregates these sensitivities to the portfolio level. Clearly, for a proper aggregation of risk factors, one has to take the correlation of these risk factors into account. However, for credit-risky portfolios, a simplistic sensitivity approach is inappropriate, as the following characteristics of credit portfolio risk show:

  • Credit-risky portfolios usually involve a large number of reference entities. Hence, one has to take a large number of sensitivities into account. However, this is a phenomenon that is already well known from the management of stock portfolios. The solution is to split the risk of each constituent into a systematic component (e.g., a beta with respect to a portfolio hedging tool) and an alpha component that reflects the idiosyncratic part of the risk.

  • However, in contrast to equities, credit risk is not one dimensional (i.e., one risky security per issuer) but at least two dimensional (i.e., a set of instruments with different maturities). This is reflected in the fact that there is a whole term structure of credit spreads. Moreover, taking also different subordination levels (with different average recovery rates) into account, credit risk becomes a multidimensional object for each reference entity.
  • While most market risks can be satisfactorily approximated by diffusion processes, for credit risk the consideration of events (i.e., jumps) is imperative. The most apparent reason for this is that the dominating element of credit risk is event risk. However, in a market perspective, there are more events than the ultimate default event that have to be captured. Since one of the main drivers of credit spreads is the structure of the underlying balance sheet, a change (or the risk of a change) in this structure usually triggers a large movement in credit spreads. The best-known example for such an event is a leveraged buyout (LBO).
  • For credit market players, correlation is a very special topic, as a central pricing parameter is named implied correlation. However, there are two kinds of correlation parameters that impact a credit portfolio: price correlation and event correlation. While the former simply deals with the dependency between two price (i.e., spread) time series under normal market conditions, the latter aims at describing the dependency between two price time series in case of an event. In its simplest form, event correlation can be seen as default correlation: what is the risk that company B defaults given that company A has defaulted? While it is already very difficult to model this default correlation, for practitioners event correlation is even more complex, since there are other events than just the default event, as already mentioned above. Hence, we can modify the question above: what is the risk that spreads of company B blow out given that spreads of company A have blown out? In addition, the notion of event correlation can also be used to capture the risk in capital structure arbitrage trades (i.e., trading stock versus bonds of one company). In this example, the question might be: what is the risk that the stock price of company A jumps given that its bond spreads have blown out? The complicated task in this respect is that we do not only have to model the joint event probability but also the direction of the jumps. A brief example highlights why this is important. In case of a default event, spreads will blow out accompanied by a significant drop in the stock price. This means that there is a negative correlation between spreads and stock prices. However, in case of an LBO event, spreads will blow out (reflecting the deteriorated credit quality because of the higher leverage), while stock prices rally (because of the fact that the acquirer usually pays a premium to buy a majority of outstanding shares).
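To make the default-correlation question above concrete, here is a minimal one-factor Gaussian copula simulation of P(B defaults | A has defaulted). The default probabilities, the asset correlation, and the trial count are illustrative assumptions, and the copula itself is simply the most standard device for this kind of question, not something prescribed by the text.

```python
import math
import random

# One-factor Gaussian copula sketch of default correlation: each firm defaults
# when a latent variable (common factor plus idiosyncratic noise) falls below a
# threshold set by its default probability. All parameters are assumptions.

random.seed(3)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def inv_norm_cdf(p, lo=-10.0, hi=10.0):
    # simple bisection; adequate for a sketch
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p_a, p_b = 0.02, 0.02                 # one-period default probabilities (assumed)
rho = 0.30                            # asset correlation (assumed)
k_a, k_b = inv_norm_cdf(p_a), inv_norm_cdf(p_b)

trials, a_defaults, both = 200_000, 0, 0
for _ in range(trials):
    z = random.gauss(0.0, 1.0)                                     # common factor
    x_a = math.sqrt(rho) * z + math.sqrt(1 - rho) * random.gauss(0.0, 1.0)
    x_b = math.sqrt(rho) * z + math.sqrt(1 - rho) * random.gauss(0.0, 1.0)
    if x_a < k_a:
        a_defaults += 1
        if x_b < k_b:
            both += 1

print(f"unconditional P(B defaults)     ~ {p_b:.3f}")
print(f"P(B defaults | A has defaulted) ~ {both / a_defaults:.3f}")
```

Even a modest asset correlation lifts the conditional default probability well above the unconditional one; capturing the richer events described above (spread blow-outs, LBOs, and the direction of the accompanying stock move) requires joint models of jumps, which is exactly the difficulty the text points to.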

These characteristics show that a simple sensitivity approach – e.g., calculate and tabulate all deltas and gammas and let a portfolio manager play with them – is not appropriate. Further risk aggregation (e.g., beta management) and risk factors that capture the event risk are needed. For the latter, a quick solution is the so-called instantaneous default loss (IDL). The IDL expresses the loss incurred in a credit risk instrument in case of a credit event. For a single-name CDS, this is simply the loss given default (LGD). However, for a portfolio derivative such as a mezzanine tranche, this figure does not refer directly to the LGD of the defaulted item, but to the changed subordination of the tranche because of the default. Hence, this figure allows one to aggregate various instruments with respect to credit events.
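As a closing sketch of how the two kinds of risk figures sit side by side, the snippet below aggregates a tiny invented book in both ways: a delta-gamma estimate of the P&L for a parallel spread move, and a per-name IDL tabulation for the jump-to-default scenario. The positions, sensitivities, and IDL numbers are made up for illustration.

```python
# Aggregate a tiny invented credit book two ways: (1) a delta-gamma estimate of
# the P&L for a parallel spread move, and (2) the instantaneous default loss
# (IDL) per reference entity for the jump-to-default scenario.

positions = [
    # instrument,            delta (P&L per 1bp), gamma (per 1bp^2), IDL on default of its name
    ("short-protection CDS",        -4_500.0,             0.0,          -6_000_000.0),
    ("CDO mezzanine tranche",      -12_000.0,           150.0,          -2_500_000.0),
    ("CDS index hedge",            +10_000.0,             0.0,          +1_000_000.0),
]

def delta_gamma_pnl(spread_move_bp):
    """Second-order P&L estimate for a parallel move in credit spreads (in bp)."""
    return sum(d * spread_move_bp + 0.5 * g * spread_move_bp ** 2
               for _, d, g, _ in positions)

print(f"delta-gamma P&L for +10bp widening: {delta_gamma_pnl(10):,.0f}")

# Event risk does not fall out of deltas and gammas: tabulate the IDL per position.
for name, _, _, idl in positions:
    print(f"IDL for {name}: {idl:,.0f}")
```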