Minimum Support Price (MSP) for Farmers – Ruminations for the Grassroots.


Minimum Support Price (MSP) is a form of insurance given by the Government of India to protect farmers and agricultural workers against any sharp fall in farm prices. MSP is a policy instrument at the disposal of the government, announced on the recommendations of the Commission for Agricultural Costs and Prices (CACP), generally at the beginning of the sowing season. The major objective of MSP is to protect and support farmers during bumper production periods by procuring food grains for public distribution. There are two ways in which an effective MSP can be implemented, viz. procurement of commodities and remuneration. The remunerative route compensates farmers for the difference between the MSP and the prices actually received by them.

With the agrarian crisis looming large, policies need to emphasize measures that can bring forth immediate results. These results could be achieved through two components: price and non-price factors. Non-price factors are long-term oriented and rely on market reforms, institutional reforms and innovations in technology to bring about an upward shift in the growth and income brackets of farmers. Price factors are short-term oriented and necessitate an immediate upward drift in remunerative prices for farm produce. It is within the ambit of price factors that MSP stands. The government notifies MSP for 23 commodities and FRP (fair and remunerative price) for sugarcane. These crops cover about 84% of the total area under cultivation across all seasons of a year. About 5% of the area is under fodder crops, which is not amenable to MSP intervention. According to this arithmetic, close to 90% of the total cultivated area is applicable to MSP intervention, leaving only a small segment of producers outside the ambit of price benefits, if the MSP were to be fully implemented.

So, how exactly does the CACP determine the Minimum Support Price (MSP)? CACP takes the following factors under consideration while determining the MSP:

  1. Cost of cultivation per hectare and structure of costs across various regions in the country and the changes therein.
  2. Cost of production per quintal across various regions of the country and the changes therein.
  3. Prices of various inputs and the changes therein.
  4. Market prices of products and the changes therein.
  5. Prices of commodities sold by the farmers and of those purchased by them and the changes therein.
  6. Supply-related information like area, yield and production, imports, exports and domestic availability, and stocks with the Government/public agencies or industry.
  7. Demand-related information, which includes the total and per capita consumption, trends and capacity of the processing industry.
  8. Prices in the international markets and the changes therein.
  9. Prices of the derivatives of the farm products such as sugar, jaggery, jute, edible and non-edible oils, cotton yarns and changes therein.
  10. Cost of processing of agricultural products and the changes therein.
  11. Cost of marketing and services, storage, transportation, processing, taxes/fees, and margins retained by market functionaries, and
  12. Macroeconomic variables such as general level of prices, consumer price indices and those reflecting monetary and fiscal factors.

As can be seen, this is an extensive set of parameters that the Commission relies on for calculating the Minimum Support Price (MSP). But then the question is: where does the Commission get access to this data set? The data is generally gathered from agricultural scientists, farmer leaders, social workers, central ministries, the Food Corporation of India (FCI), the National Agricultural Cooperative Marketing Federation of India (NAFED), the Cotton Corporation of India (CCI), the Jute Corporation of India, traders’ organizations and research institutes. The Commission then calculates the MSP and sends it to the Central Government for approval, which then sends it to the states for their suggestions. Once the states give their nods, the Cabinet Committee on Economic Affairs approves these figures, which are then released on the CACP portals.

During the first year of the UPA-1 Government at the centre in 2004, a National Commission on Farmers (NCF) was formed with M S Swaminathan as its Chairman. One of the major objectives of the Commission was to make farm commodities cost-competitive and profitable. To achieve this task, a three-tiered structure for calculating the farming cost was devised, viz. A2, FL and C2. A2 is the actual paid-out cost, while A2+FL is the actual paid-out cost plus the imputed value of family labour, where imputing is assigning a value to something by inference from the value of the products or processes to which it contributes. C2 is the comprehensive cost, including imputed rent and interest on owned land and capital. It is evident that C2 > A2+FL > A2.

The Commission for Agricultural Costs and Prices (CACP), while recommending prices, takes into account all important factors, including costs of production, changes in input prices, input/output price parity, trends in market prices, inter-crop price parity, the demand and supply situation, and parity between prices paid and prices received by the farmers. In fixing the support prices, CACP relies on the cost concept, which covers all items of the expenses of cultivation, including the imputed value of inputs owned by the farmers, such as the rental value of owned land and interest on fixed capital. Some of the important cost concepts are C2 and C3:

C3: C2 + 10% of C2 to account for managerial remuneration to the farmer.

The Swaminathan Commission Report categorically states that farmers should get an MSP which is 50% higher than the comprehensive cost of production. This cost + 50% formula came from the Swaminathan Commission, and it had categorically stated that the cost of production is the comprehensive cost of production, which is C2 and not A2+FL. C2 includes all actual expenses in cash and kind incurred in production by the actual owner + rent paid for leased land + imputed value of family labour + interest on the value of owned capital assets (excluding land) + rental value of owned land (net of land revenue). Costs of production are calculated on both a per-quintal and a per-hectare basis. Since cost variations are large across states, CACP recommends that MSP be considered on the basis of C2. However, increases in MSP have been so substantial in the case of paddy and wheat that in most states, MSPs are way above not only C2 but even C3.
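The gap between the two formulas can be made concrete with a few lines of arithmetic. The per-quintal figures below are purely illustrative, not actual CACP numbers; the point is only the wedge between 1.5 × (A2+FL) and 1.5 × C2:

```python
# Illustrative comparison of the two MSP formulas (hypothetical per-quintal costs)
a2 = 800.0    # A2: actual paid-out cost
fl = 200.0    # FL: imputed value of family labour
c2 = 1400.0   # C2: comprehensive cost (A2 + FL + imputed rent and interest)

msp_government = 1.5 * (a2 + fl)   # cost + 50% over A2+FL, as currently fixed
msp_demanded   = 1.5 * c2          # cost + 50% over C2, per the Swaminathan reading
c3 = 1.1 * c2                      # C3: C2 plus 10% of C2 for managerial remuneration

print(msp_government, msp_demanded, c3)
```

On these made-up numbers the farmer's demand exceeds the announced MSP by 600 per quintal, which is the entire substance of the dispute over the base cost.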


This is where the political economy of MSP stares back at the hapless farmers. Though 23 crops are notified under MSP, not more than 3 are actually ensured. The Indian farm sector is also plagued by low-scale production restricted by small-sized holdings, which ensures that the margin over cost within the prevailing system generates at best a low income for farmers. This is precisely the point of convergence of reasons why farmers have been demanding effective implementation of MSP by keeping it 50% higher than the costs incurred. Farmers and farmers’ organizations have demanded that the MSP be raised to cost of production + 50%, where, for them, cost of production means C2 and not A2+FL. At present, the CACP adds A2 and FL, and the Government then adds 50% of that sum to fix the MSP, thus ignoring C2. What the farmers and farmers’ organizations have been demanding is an addition of 50% to C2, which is sadly the whole point missing from Governmental announcements. This difference between what the farmers want and what the government gives is the reason behind so much unrest as regards support prices to the farmers.

Ramesh Chand, who is currently serving in the NITI Aayog, is still a voice of reason over and above what the Government has been implementing by way of sops. Chand has also recommended that the interest on working capital should be given for the whole season against the existing half-season, and the actual rental value prevailing in the village should be considered without a ceiling on the rent. Moreover, post-harvest costs, cleaning, grading, drying, packaging, marketing and transportation should be included. C2 should be hiked by 10% to account for the risk premium and managerial charges.

According to Ramesh Chand of NITI Aayog, there is an urgent need to take into account the market clearance price in recommending the MSP. This would reflect both the demand and supply sides. When the MSP is fixed depending on the demand-side factors, the need for government intervention to implement MSPs would be reduced to only those situations where the markets are not competitive or where private trade turns exploitative. However, if there is a deficiency price payment mechanism, or crops for which an MSP is declared but the purchase does not materialize, then the Government should compensate the farmers for the difference between the MSP and the lower market price. Such a mechanism has been implemented in Madhya Pradesh under the name of Bhavantar Bhugtan Yojana (BBY), where the Government, rather than accept its poor track record in procuring directly from the farmers, has been compensating farmers with direct cash transfers when market prices fall below the MSP. The scheme has had its downsides, with long delays in payments and heavy transaction costs. There is also a glut in supply, with the markets getting flooded with low-quality grains, which then depress the already low crop prices. Unless his and M S Swaminathan’s recommendations are taken seriously, the ‘solution’ to the agrarian crisis is heading towards a capitalist catastrophe. And why does one say that?
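A BBY-style deficiency payment reduces to a simple rule: when the market price falls below the MSP, the Government transfers the shortfall on the quantity sold. A minimal sketch, with invented figures rather than the scheme's actual parameters:

```python
def deficiency_payment(msp, market_price, quantity):
    """Cash transfer under a deficiency-payment rule: pay the shortfall
    between MSP and the market price on the quantity sold, zero otherwise."""
    shortfall = max(msp - market_price, 0.0)
    return shortfall * quantity

# Hypothetical numbers: MSP 1750/quintal, market price 1500, 20 quintals sold
payout = deficiency_payment(1750.0, 1500.0, 20.0)
print(payout)  # 5000.0
```

The rule's weakness is visible in its own terms: the payout depends on the recorded market price and quantity, so delayed or manipulated price reporting translates directly into delayed or depressed transfers.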

In order to negotiate the price deficiency mechanism towards resolution, the Government is left with another option in the form of procurement. But here is a paradox. The Government clearly does not have the bandwidth to first create a system and then manage the procurement of all the crops for which the MSP has been announced. If a dead-end is reached here, the likelihood of the Government turning towards private markets cannot be ruled out. And once that turn is taken, the markets would become vulnerable to the whims and fancies of local politicians, who would normally have influencing powers over their functioning, thus taking the system on their discretionary rides.

There obviously are certain questions that demand an answer, and these fall within the ambit of policy-making. For instance, is there a provision in the budget to increase the ambit of farmers who are covered by the MSP? Secondly, calculations of MSP involve private costs and benefits, and thus exhibit only one side of the story. For an exhaustive understanding, social costs and benefits must also be incorporated. With a focus primarily on private costs and benefits, socially wasteful production and specialization are encouraged, like paddy production in north India with the attendant consequences to which we have become grim witnesses. Whether this double-bind will ever be overcome is a policy matter, and at the moment what is being witnessed is policy paralysis and a lack of political will that transforms only into banking on the vote bank. That’s a pity!

Equilibrium Market Prices are Unique – Convexity and Concavity Utility Functions on a Linear Case. Note Quote + Didactics.


Consider a market consisting of a set B of buyers and a set A of divisible goods. Assume |A| = n and |B| = n′. We are given for each buyer i the amount ei of money she possesses and for each good j the amount bj of this good. In addition, we are given the utility functions of the buyers. Our critical assumption is that these functions are linear. Let uij denote the utility derived by i on obtaining a unit amount of good j. Thus if the buyer i is given xij units of good j, for 1 ≤ j ≤ n, then the happiness she derives is

∑j=1n uijxij —— (1)

Prices p1, . . . , pn of the goods are said to be market clearing prices if, after each buyer is assigned an optimal basket of goods relative to these prices, there is no surplus or deficiency of any of the goods. So, is it possible to compute such prices in polynomial time?

First observe that without loss of generality, we may assume that each bj is unit, by scaling the uij’s appropriately. The uij’s and ei’s are in general rational; by scaling appropriately, they may be assumed to be integral. We make the mild assumption that each good has a potential buyer, i.e., a buyer who derives nonzero utility from it. Under this assumption, market clearing prices do exist.

It turns out that equilibrium allocations for Fisher’s linear case are captured as optimal solutions to a remarkable convex program, the Eisenberg–Gale convex program.

A convex program whose optimal solution is an equilibrium allocation must have as constraints the packing constraints on the xij’s. Furthermore, its objective function, which attempts to maximize utilities derived, should satisfy the following:

  1. If the utilities of any buyer are scaled by a constant, the optimal allocation remains unchanged.
  2. If the money of a buyer b is split among two new buyers whose utility functions are the same as that of b, then the sum of the optimal allocations of the new buyers should be an optimal allocation for b.

The money-weighted geometric mean of buyers’ utilities satisfies both these conditions:

max (∏i∈B uiei)1/∑iei —– (2)

Since the exponent 1/∑iei is a positive constant, the following objective function is equivalent:

max ∏i∈B uiei —– (3)

Its log is used in the Eisenberg–Gale convex program:

maximize ∑i=1n’ ei log ui

subject to

ui = ∑j=1nuijxij ∀ i ∈ B

∑i=1n’ xij ≤ 1 ∀ j ∈ A

xij ≥ 0 ∀ i ∈ B, j ∈ A —– (4)

where xij is the amount of good j allocated to buyer i. Interpret Lagrangian variables, say pj’s, corresponding to the second set of conditions as prices of goods. Optimal solutions to xij’s and pj’s must satisfy the following:

    1. ∀ j ∈ A : pj ≥ 0
    2. ∀ j ∈ A : pj > 0 ⇒ ∑i∈B xij = 1
    3. ∀ i ∈ B, j ∈ A : uij/pj ≤ (∑j∈A uijxij)/ei
    4. ∀ i ∈ B, j ∈ A : xij > 0 ⇒ uij/pj = (∑j∈A uijxij)/ei

From these conditions, one can derive that an optimal solution to convex program (4) must satisfy the market clearing conditions.

For the linear case of Fisher’s model:

  1. If each good has a potential buyer, equilibrium exists.
  2. The set of equilibrium allocations is convex.
  3. Equilibrium utilities and prices are unique.
  4. If all uij’s and ei’s are rational, then equilibrium allocations and prices are also rational. Moreover, they can be written using polynomially many bits in the length of the instance.

Corresponding to good j there is a buyer i such that uij > 0. By the third condition as stated above,

pj ≥ eiuij/∑juijxij > 0

By the second condition, ∑i∈B xij = 1, implying that prices of all goods are positive and all goods are fully sold. The third and fourth conditions imply that if buyer i gets good j, then j must be among the goods that give buyer i the maximum utility per unit of money spent at current prices. Hence each buyer gets only a bundle consisting of her most desired goods, i.e., an optimal bundle.

The fourth condition is equivalent to

∀ i ∈ B, j ∈ A : eiuijxij/∑j∈Auijxij = pjxij

Summing over all j

∀ i ∈ B : ei (∑j∈A uijxij)/(∑j∈A uijxij) = ∑j∈A pjxij

⇒ ∀ i ∈ B : ei = ∑jpjxij

Hence the money of each buyer is fully spent completing the proof that market equilibrium exists. Since each equilibrium allocation is an optimal solution to the Eisenberg-Gale convex program, the set of equilibrium allocations must form a convex set. Since log is a strictly concave function, if there is more than one equilibrium, the utility derived by each buyer must be the same in all equilibria. This fact, together with the fourth condition, gives that the equilibrium prices are unique.
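The derivation above can be checked numerically on a small instance. The sketch below (the 2×2 utilities and budgets are made up) solves the Eisenberg–Gale program with a general-purpose solver and recovers prices from the fourth optimality condition, pj = ∑i ei uij xij / ui; it is a didactic check, not the polynomial-time algorithm alluded to in the text:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical Fisher instance: 2 buyers, 2 goods, each good in unit supply.
u = np.array([[10.0, 1.0],
              [1.0, 10.0]])   # u[i, j]: utility of buyer i per unit of good j
e = np.array([100.0, 100.0])  # money held by each buyer

nb, ng = u.shape

def neg_objective(x):
    # Eisenberg-Gale objective: maximize sum_i ei log(ui), ui = sum_j uij xij
    X = x.reshape(nb, ng)
    ui = (u * X).sum(axis=1)
    return -(e * np.log(ui + 1e-12)).sum()

# Packing constraints: sum_i xij <= 1 for every good j
cons = [{"type": "ineq", "fun": lambda x, j=j: 1.0 - x.reshape(nb, ng)[:, j].sum()}
        for j in range(ng)]
res = minimize(neg_objective, x0=np.full(nb * ng, 0.5),
               bounds=[(0.0, 1.0)] * (nb * ng), constraints=cons, method="SLSQP")

X = res.x.reshape(nb, ng)
ui = (u * X).sum(axis=1)
# Recover prices from condition 4: pj = sum_i ei * uij * xij / ui
p = (e[:, None] * u * X / ui[:, None]).sum(axis=0)

print(np.round(X, 3), np.round(p, 2))
```

For this symmetric instance each buyer spends her entire budget on her favored good, so the allocation is (approximately) the identity matrix and both prices equal 100, with all goods fully sold and all money spent, exactly the market clearing conditions derived above.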

The Statistical Physics of Stock Markets. Thought of the Day 143.0


The externalist view argues that we can make sense of, and profit from, stock markets’ behavior, or at least a few crucial properties of it, by crunching numbers and looking for patterns and regularities in certain sets of data. The notion of data, hence, is a key element in such an understanding, and the quantitative side of the problem is prominent, even if this does not mean that a qualitative analysis is ignored. The point here is that the outside view maintains that it provides a better understanding than the internalist view. To this end, it endorses a functional perspective on finance and stock markets in particular.

The basic idea of the externalist view is that there are general properties and behaviors of stock markets that can be detected and studied through a mathematical lens, and that these do not depend so much on contextual or domain-specific factors. The point at stake here is that financial systems can be studied and approached at different scales, and it is virtually impossible to produce all the equations describing, at a micro level, all the objects of the system and their relations. So, in response, this view focuses on those properties that allow us to get an understanding of the behavior of the system at a global level without having to produce a detailed conceptual and mathematical account of its inner ‘machinery’. Hence the two roads: the first is to embrace an emergentist view of the stock market, that is, a specific metaphysical, ontological, and methodological thesis, while the second is to embrace a heuristic view, that is, the idea that the choice to focus on those properties that are tractable by mathematical models is a pure problem-solving option.

A typical example of the externalist approach is the one provided, for instance, by statistical physics. In describing collective behavior, this discipline neglects all the conceptual and mathematical intricacies deriving from a detailed account of the inner, individual, micro-level functioning of a system. Concepts such as stochastic dynamics, self-similarity, correlations (both short- and long-range), and scaling are the tools to this end. Econophysics is a stock example in this sense: it employs methods taken from mathematics and mathematical physics in order to detect and forecast the driving forces of stock markets and their critical events, such as bubbles, crashes and their tipping points. In this respect, markets are not ‘dark boxes’: you can see their characteristics from the outside, or better, you can see specific dynamics that shape the trends of stock markets deeply and for a long time. Moreover, these dynamics are complex in the technical sense. This means that this class of behavior is such as to encompass timescales, ontology, types of agents, ecologies, regulations, laws, etc., and can be detected, even if not strictly predicted. We can focus on stock markets as a whole, or on a few of their critical events, looking at the data of prices (or other indexes) and ignoring all the other details and factors, since they will be absorbed into these global dynamics. So this view provides a look at stock markets on which not only do they not appear as an unintelligible casino where wild gamblers face each other, but which shows the reasons and the properties of a system that serves mostly as a means of fluid transactions that enable and ease the functioning of free markets.

Moreover, the study of complex systems theory and that of stock markets seem to offer mutual benefits. On one side, complex systems theory seems to offer a key to understanding and breaking through some of the most salient properties of stock markets. On the other side, stock markets seem to provide a ‘stress test’ for complexity theory. For Didier Sornette, the analogies between stock markets and phase transitions, statistical mechanics, nonlinear dynamics, and disordered systems mold the view from outside:

Take our personal life. We are not really interested in knowing in advance at what time we will go to a given store or drive to a highway. We are much more interested in forecasting the major bifurcations ahead of us, involving the few important things, like health, love, and work, that count for our happiness. Similarly, predicting the detailed evolution of complex systems has no real value, and the fact that we are taught that it is out of reach from a fundamental point of view does not exclude the more interesting possibility of predicting phases of evolutions of complex systems that really count, like the extreme events. It turns out that most complex systems in natural and social sciences do exhibit rare and sudden transitions that occur over time intervals that are short compared to the characteristic time scales of their posterior evolution. Such extreme events express more than anything else the underlying “forces” usually hidden by almost perfect balance and thus provide the potential for a better scientific understanding of complex systems.

Phase transitions, critical points, and extreme events seem to be so pervasive in stock markets that they are the crucial concepts to explain and, where possible, foresee. And complexity theory provides us a fruitful reading key to understanding their dynamics, namely their generation, growth and occurrence. Such a reading key proposes a clear-cut interpretation of them, which can be explained again by means of an analogy with physics, precisely with the unstable position of an object. Complexity theory suggests that critical or extreme events occurring at large scale are the outcome of interactions occurring at smaller scales. In the case of stock markets, this means that, unlike many approaches that attempt to account for crashes by searching for ‘mechanisms’ that work at very short time scales, complexity theory indicates that crashes have causes that date back months or years before they occur. This reading suggests that it is the increasing, inner interaction between the agents inside the markets that builds up the unstable dynamics (typically the financial bubbles) that eventually end in a critical event, the crash. But here the specific, final step that triggers the critical event, the collapse of prices, is not the key to its understanding: a crash occurs because the markets are in an unstable phase, and any small interference or event may trigger it. The bottom line: the trigger can be virtually any event external to the markets. The real cause of the crash is the overall unstable position; the proximate ‘cause’ is secondary and accidental. Or, in other words, a crash may be fundamentally endogenous in nature, whilst an exogenous, external shock is simply its occasional triggering factor. The instability is built up by a cooperative behavior among traders, who imitate each other (in this sense it is an endogenous process) and contribute to forming and reinforcing trends that converge up to a critical point.

The main advantage of this approach is that the system (the market) would anticipate the crash by releasing precursory fingerprints observable in the stock market prices: the market prices contain information on impending crashes and this implies that:

if the traders were to learn how to decipher and use this information, they would act on it and on the knowledge that others act on it; nevertheless, the crashes would still probably happen. Our results suggest a weaker form of the “weak efficient market hypothesis”, according to which the market prices contain, in addition to the information generally available to all, subtle information formed by the global market that most or all individual traders have not yet learned to decipher and use. Instead of the usual interpretation of the efficient market hypothesis in which traders extract and consciously incorporate (by their action) all information contained in the market prices, we propose that the market as a whole can exhibit “emergent” behavior not shared by any of its constituents.

In a nutshell, the critical events emerge in a self-organized and cooperative fashion as the macro result of the internal and micro interactions of the traders, their imitation and mirroring.

 

Financial Fragility in the Margins. Thought of the Day 114.0


If micro-economic crisis is caused by the draining of liquidity from an individual company (or household), macro-economic crisis or instability, in the sense of a reduction in the level of activity in the economy as a whole, is usually associated with an involuntary outflow of funds from companies (or households) as a whole. Macro-economic instability is a ‘real’ economic phenomenon, rather than a monetary contrivance, the sense in which it is used, for example, by the International Monetary Fund to mean price inflation in the non-financial economy. Neo-classical economics has a methodological predilection for attributing all changes in economic activity to relative price changes, specifically the price changes that undoubtedly accompany economic fluctuations. But there is sufficient evidence to indicate that falls in economic activity follow outflows of liquidity from the industrial and commercial company sector. Such outflows then lead to the deflation of economic activity that is the signal feature of economic recession and depression.

Let us start with a consideration of how vulnerable financial futures markets themselves are to illiquidity, since this would indicate whether the firms operating in the market are ever likely to need to realize claims elsewhere in order to meet their liabilities to the market. Paradoxically, the very high level of intra-broker trading is a safety mechanism for the market, since it raises the velocity of circulation of whatever liquidity there is in the market: traders with liabilities outside the market are much more likely to have claims against other traders to set against those liabilities. This may be illustrated by considering the most extreme case of a futures market dominated by intra-broker trading, namely a market in which there are only two dealers who buy and sell financial futures contracts only between each other as rentiers, in other words for a profit which may include their premium or commission. On the expiry date of the contracts, conventionally set at three-monthly intervals in actual financial futures markets, some of these contracts will be profitable, some will be loss-making. Margin trading, however, requires all the profitable contracts to be fully paid up in order for their profit to be realized. The trader whose contracts are on balance profitable therefore cannot realize his profits until he has paid up his contracts with the other broker. The other broker will return the money in paying up his contracts, leaving only his losses to be covered by an inflow of money. Thus the only net inflow of money that is required is the amount of profit (or loss) made by the traders. However, an accommodating gross inflow is needed in the first instance in order to make the initial margin payments and settle contracts so that the net profit or loss may be realized.

The existence of more traders, and the system for avoiding counterparty risk commonly found in most futures market, whereby contracts are made with a central clearing house, introduce sequencing complications which may cause problems: having a central clearing house avoids the possibility that one trader’s default will cause other traders to default on their obligations. But it also denies traders the facility of giving each other credit, and thereby reduces the velocity of circulation of whatever liquidity is in the market. Having to pay all obligations in full to the central clearing house increases the money (or gross inflow) that broking firms and investors have to put into the market as margin payments or on settlement days. This increases the risk that a firm with large net liabilities in the financial futures market will be obliged to realize assets in other markets to meet those liabilities. In this way, the integrity of the market is protected by increasing the effective obligations of all traders, at the expense of potentially unsettling claims on other markets.
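The liquidity point in the two preceding paragraphs can be made concrete with a toy calculation (the contract payoffs below are invented): under bilateral netting, only the net profit needs to flow into the market, whereas paying every contract in full through a clearing house requires the gross amount to pass through it.

```python
# Toy settlement between two dealers, A and B.
# Each entry is a contract payoff: positive = B owes A, negative = A owes B.
contracts = [30.0, -20.0, 15.0, -5.0]

# Bilateral netting: offsetting claims cancel, only the net must move.
net_flow = abs(sum(contracts))

# Central clearing house: every contract is paid up in full, so the gross
# amount must pass through the market as margin/settlement cash.
gross_flow = sum(abs(c) for c in contracts)

print(net_flow, gross_flow)  # 20.0 70.0
```

On these numbers a clearing house multiplies the cash that must enter the market on settlement day by 3.5, which is exactly the mechanism by which protecting the market's integrity increases the risk of forced asset sales elsewhere.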

This risk is enhanced by the trading of rentiers, or banks and entrepreneurs operating as rentiers, hedging their futures contracts in other financial markets. However, while such incidents generate considerable excitement around the markets at the time of their occurrence, there is little evidence that they could cause involuntary outflows from the corporate sector on such a scale as to produce recession in the real economy. This is because financial futures are still used by few industrial and commercial companies, and their demand for financial derivatives instruments is limited by the relative expense of these instruments and their own exposure to changes in financial parameters (which may more easily be accommodated by holding appropriate stocks of liquid assets, i.e., liquidity preference). Therefore, the future of financial futures depends largely on the interest in them of the contemporary rentiers in pension, insurance and various other forms of investment funds. Their interest, in turn, depends on how those funds approach their ‘maturity’.

However, the decline of pension fund surpluses poses important problems for the main securities markets of the world where insurance and pension funds are now the dominant investors, as well as for more peripheral markets like emerging markets, venture capital and financial futures. A contraction in the net cash inflow of investment funds will be reflected in a reduction in the funds that they are investing, and a greater need to realize assets when a change in investment strategy is undertaken. In the main securities markets of the world, a reduction in the ‘new money’ that pension and insurance funds are putting into those securities markets will slow down the rate of growth of the prices in those markets. How such a fall in the institutions’ net cash inflow will affect the more marginal markets, such as emerging markets, venture capital and financial futures, depends on how institutional portfolios are managed in the period of declining net contributions inflows.

In general, investment managers in their own firms, or as employees of merchant or investment banks, compete to manage institutions’ funds. Such competition is likely to increase as investment funds approach ‘maturity’, i.e., as their cash outflows to investors, pensioners or insurance policyholders, rises faster than their cash inflow from contributions and premiums, so that there are less additional funds to be managed. In principle, this should not affect financial futures markets, in the first instance, since, as argued above, the short-term nature of their instruments and the large proportion in their business of intra-market trade makes them much less dependent on institutional cash inflows. However, this does not mean that they would be unaffected by changes in the portfolio preferences of investment funds in response to lower returns from the main securities markets. Such lower returns make financial investments like financial futures, venture capital and emerging markets, which are more marginal because they are so hazardous, more attractive to normally conservative fund managers. Investment funds typically put out sections of portfolios to specialist fund managers who are awarded contracts to manage a section according to the soundness of their reputation and the returns that they have made hitherto in portfolios under their management. A specialist fund manager reporting high, but not abnormal, profits in a fund devoted to financial futures, is likely to attract correspondingly more funds to manage when returns are lower in the main markets’ securities, even if other investors in financial futures experienced large losses. In this way, the maturing of investment funds could cause an increased inflow of rentier funds into financial futures markets.

An inflow of funds into a financial market entails an increase in liabilities to the rentiers outside the market supplying those funds. Even if profits made in the market as a whole also increase, so too will losses. While brokers commonly seek to hedge their positions within the futures market, rentiers have much greater possibilities of hedging their contracts in another market, where they have assets. An inflow into futures markets means that on any settlement day there will therefore be larger net outstanding claims against individual banks or investment funds in respect of their financial derivatives contracts. With margin trading, much larger gross financial inflows into financial futures markets will be required to settle maturing contracts. Some proportion of this will require the sale of securities in other markets. But if liquidity in integrated cash markets for securities is reduced by declining net inflows into pension funds, a failure to meet settlement obligations in futures markets is the alternative to forced liquidation of other assets. In this way futures markets will become more fragile.
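The settlement pressure described above comes from daily marking to market: each day, price moves generate cash flows between the two sides of a futures contract, and the losing side must fund its margin payments in cash, possibly by liquidating assets elsewhere. A minimal sketch of these variation-margin flows (all contract sizes and prices here are hypothetical):

```python
# Minimal sketch of daily variation-margin flows on a long futures
# position. Contract size, prices and position are hypothetical.

def variation_margin(position, contract_size, daily_prices):
    """Daily cash flows (positive = received, negative = paid) from
    marking a futures position to market at each day's settlement."""
    flows = []
    for prev, curr in zip(daily_prices, daily_prices[1:]):
        flows.append(position * contract_size * (curr - prev))
    return flows

# A long position of 10 contracts, each on 1,000 units of the underlying.
flows = variation_margin(10, 1_000, [100.0, 99.5, 98.0, 98.5])
# Falling prices force the long side to pay margin on each of the first
# two days; those outflows must be met in cash on settlement day.
```

The point the paragraph makes is visible here: gross flows through the margin system can be large even when the net position is unchanged, and any shortfall in cash forces asset sales in other markets.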

Moreover, because of the hazardous nature of financial futures, high returns for an individual firm are difficult to sustain. Disappointment is more likely to be followed by the transfer of funds to management in some other peripheral market that shows a temporary high profit. While this should not affect capacity utilization in the futures market, because of intra-market trade, it is likely to cause much more volatile trading, and an increase in the pace at which new instruments are introduced (to attract investors) and fall into disuse. Pension funds whose returns fall below those required to meet future liabilities because of such instability would normally be required to obtain additional contributions from employers and employees. The resulting drain on the liquidity of the companies affected would cause a reduction in their fixed capital investment. This would be a plausible mechanism for transmitting fragility in the financial system into full-scale decline in the real economy.

The proliferation of financial futures markets has been only marginally successful in substituting futures contracts for Keynesian liquidity preference as a means of accommodating uncertainty. A closer look at the agents in those markets and their market mechanisms indicates that the price system in them is flawed, and trading hazardous risks in them adds to uncertainty rather than reducing it. The hedging of financial futures contracts in other financial markets means that the resulting forced liquidations elsewhere in the financial system are a real source of financial instability that is likely to worsen as slower growth in stock markets makes speculative financial investments appear more attractive. Capital-adequacy regulations are unlikely to reduce such instability, and may even increase it by increasing the capital committed to trading in financial futures. Such regulations can also create an atmosphere of financial security around these markets that may increase unstable speculative flows of liquidity into the markets. For the economy as a whole, the real problems are posed by the involvement of non-financial companies in financial futures markets. With the exception of a few spectacular scandals, non-financial companies have been wary of using financial futures, and it is important that they should continue to limit their interest in financial futures markets. Industrial and commercial companies, which generate their own liquidity through trade and production and hence have more limited financial assets to realize in order to meet financial futures liabilities in times of distress, are more vulnerable to unexpected outflows of liquidity in proportion to their increased exposure to financial markets. The liquidity which they need to set aside to meet such unexpected liabilities inevitably means a reduced commitment to investment in fixed capital and new technology.

Bear Stearns. Note Quote.

Like many of its competitors, Bear Stearns saw the rise of the hedge fund industry during the 1990s and began managing its own funds with outside investor capital under the name Bear Stearns Asset Management (BSAM). Unlike its competitors, Bear hired all of its fund managers internally, with each manager specializing in a particular security or asset class. Objections by some Bear executives, such as co-president Alan Schwartz, that such concentration of risk could raise volatility were ignored, and the impressive returns posted by internal funds such as Ralph Cioffi’s High-Grade Structured Credit Strategies Fund quieted any concerns.

Cioffi’s fund was invested in sophisticated credit derivatives backed by mortgage securities. When the housing bubble burst, he redoubled his bets, raising a new Enhanced Leverage High-Grade Structured Credit Strategies Fund that would use 100:1 leverage (as compared to the 35:1 leverage employed by the original fund). The market continued to turn disastrously against the fund, which was soon stuck with billions of dollars’ worth of illiquid, unprofitable mortgages. In an attempt to salvage the situation and cut his losses, Cioffi launched a vehicle named Everquest Financial and sold its shares to the public. But when journalists at the Wall Street Journal revealed that Everquest’s primary assets were the “toxic waste” of money-losing mortgage securities, Bear had no choice but to cancel the public offering. With spectacular losses mounting daily, investors attempted to withdraw their remaining holdings. In order to free up cash for such redemptions, the fund had to liquidate assets at a loss, sales that only put additional downward pressure on its already underwater positions. Lenders to the fund began making margin calls and threatening to seize its $1.2 billion in collateral.
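Why that jump in leverage was so dangerous is simple arithmetic: with assets equal to leverage times equity, a given fractional fall in asset prices is magnified into an equity loss that is leverage times larger. A minimal sketch (the percentage decline used is illustrative, not the funds' actual experience):

```python
# How leverage magnifies losses: if assets = leverage * equity, a
# fractional decline d in asset values produces an equity loss of
# leverage * d. The 1% decline below is an illustrative figure.

def equity_loss_fraction(leverage, asset_decline):
    """Fraction of equity wiped out when levered assets fall in value."""
    return leverage * asset_decline

# At 35:1 leverage, a 1% fall in asset prices erases ~35% of equity;
# at 100:1, the same 1% fall consumes the entire equity cushion.
loss_35 = equity_loss_fraction(35, 0.01)
loss_100 = equity_loss_fraction(100, 0.01)
```

At 100:1 leverage the fund had essentially no room for error: any mark-down of its mortgage holdings of even one percent was enough to trigger margin calls against its remaining equity.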

In a less turbulent market it might have worked, but the subprime crisis had spent weeks on the front page of financial newspapers around the globe, and every bank on Wall Street was desperate to reduce its own exposure. Insulted and furious that Bear had refused to inject any of its own capital to save the funds, Steve Black, J.P. Morgan Chase head of investment banking, called Schwartz and said, “We’re defaulting you.”

The default and subsequent seizure of $400 million in collateral by Merrill Lynch proved highly damaging to Bear Stearns’s reputation across Wall Street. In a desperate attempt to save face under the scrutiny of the SEC, James Cayne made the unprecedented move of using $1.6 billion of Bear’s own capital to prop up the hedge funds. By late July 2007 even Bear’s continued support could no longer prop up Cioffi’s two beleaguered funds, which paid back just $300 million of the credit its parent had extended. With their holdings virtually worthless, the funds had no choice but to file for bankruptcy protection.

On November 14, just two weeks after the Journal story questioning Cayne’s commitment and leadership, Bear Stearns reported that it would write down $1.2 billion in mortgage-related losses. (The figure would later grow to $1.9 billion.) CFO Molinaro suggested that the worst had passed, and to outsiders, at least, the firm appeared to have narrowly escaped disaster.

Behind the scenes, however, Bear management had already begun searching for a white knight, hiring Gary Parr at Lazard to examine its options for a cash injection. Privately, Schwartz and Parr spoke with Kohlberg Kravis Roberts & Co. founder Henry Kravis, who had first learned the leveraged buyout market while a partner at Bear Stearns in the 1960s. Kravis sought entry into the profitable brokerage business at depressed prices, while Bear sought an injection of more than $2 billion in equity capital (for a reported 20% of the company) and the calming effect that a strong, respected personality like Kravis would have upon shareholders. Ultimately the deal fell apart, largely due to management’s fear that KKR’s significant equity stake and the presence of Kravis on the board would alienate the firm’s other private equity clientele, who often competed with KKR for deals. Throughout the fall Bear continued to search for potential acquirers. With the market watching intently to see if Bear shored up its financing, Cayne managed to close only a $1 billion cross-investment with CITIC, the state-owned investment company of the People’s Republic of China.

Bear’s $0.89 profit per share in the first quarter of 2008 did little to quiet the growing whispers of its financial instability. It seemed that every day another major investment bank reported mortgage-related losses, and for whatever reason Bear’s name kept cropping up in discussions of the by-then infamous subprime crisis. Exacerbating Bear’s public relations problem, the SEC had launched an investigation into the collapse of the two BSAM hedge funds, and rumors of massive losses at three major hedge funds further rattled an already uneasy market. Nonetheless, Bear executives felt that the storm had passed, reasoning that its almost $21 billion in cash reserves had convinced the market of its long-term viability.

Instead, on Monday, March 10, 2008, Moody’s downgraded 163 tranches of mortgage-backed bonds issued by Bear across fifteen transactions. The credit rating agency had drawn sharp criticism for its role in the subprime meltdown from analysts who felt the company had overestimated the creditworthiness of mortgage-backed securities and failed to alert the market of the danger as the housing market turned. As a result, Moody’s was in the process of downgrading nearly all of its ratings, but as the afternoon wore on, Bear’s stock price seemed to be reacting far more negatively than those of competitor firms.

Wall Street’s drive toward ever more sophisticated communications devices had created an interconnected network of traders and bankers across the world. On most days, Internet chat and mobile e-mail devices relayed gossip about compensation, major employee departures, and even sports betting lines. On the morning of March 10, however, they were carrying one message to the exclusion of all others: Bear was having liquidity problems. At noon, CNBC took the story public on Power Lunch. As Bear’s stock price fell more than 10 percent to $63, Ace Greenberg frantically placed calls to various executives, demanding that someone publicly deny any such problems. When contacted himself, Greenberg told a CNBC correspondent that the rumors were “totally ridiculous,” angering CFO Molinaro, who felt that denying the rumor would only legitimize it and trigger further panic selling, making prophecies of Bear’s illiquidity self-fulfilling. Just two hours later, however, Bear appeared to have dodged a bullet. News of New York governor Eliot Spitzer’s involvement in a high-class prostitution ring wiped any financial rumors off the front page, leading Bear executives to believe the worst was once again behind them.

Instead, the rumors exploded anew the next day, as many interpreted the Federal Reserve’s announcement of a new $200 billion lending program to help financial institutions through the credit crisis as aimed specifically toward Bear Stearns. The stock dipped as low as $55.42 before closing at $62.97. Meanwhile, Bear executives faced a new crisis in the form of an explosion of novation requests, in which a party to a risky contract tries to eliminate its risky position by selling it to a third party. Credit Suisse, Deutsche Bank, and Goldman Sachs all reported a deluge of novation requests from firms trying to reduce their exposure to Bear’s credit risk. The speed and force of this explosion of novation requests meant that before Bear could act, both Goldman Sachs and Credit Suisse issued e-mails to their traders holding up any requests relating to Bear Stearns pending approval by their credit departments. Once again, the electronically linked gossip network of trading desks around the world dealt a blow to investor confidence in Bear’s stability, as a false rumor circulated that Credit Suisse’s memo had forbidden its traders from engaging in any trades with Bear. The decrease in confidence in Bear’s liquidity could be quantified by the rise in the cost of credit default swaps on Bear’s debt. The price of such an instrument – which effectively acts as five years of insurance against a default on $10 million of Bear’s debt – spiked to more than $626,000 from less than $100,000 in October, indicating heavy betting by some firms that Bear would be unable to pay its liabilities.
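The quoted protection costs can be restated as running spreads in basis points of notional, which is how the market usually tracks them. A minimal sketch, assuming the dollar figures in the text are annual premiums on the $10 million notional (the passage does not state the quoting convention, so that reading is an assumption):

```python
# Restating the cost of default protection as a spread in basis points.
# Assumes the quoted dollar figures are annual premiums on $10M notional;
# the text does not specify the quoting convention.

def cds_spread_bps(annual_cost, notional):
    """Annual protection cost expressed in basis points of notional."""
    return annual_cost / notional * 10_000

october_spread = cds_spread_bps(100_000, 10_000_000)  # 100 bps
march_spread = cds_spread_bps(626_000, 10_000_000)    # ~626 bps
# A more than six-fold jump in the spread over a few months signals a
# sharp repricing of Bear's perceived default risk.
```

Under this reading, protection on Bear's debt went from roughly 100 basis points in October to over 600 by March, a move of the kind normally associated with issuers on the brink of default.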


Internally, Bear debated whether to address the rumors publicly, ultimately deciding to arrange a Wednesday morning interview of Schwartz by CNBC correspondent David Faber. Not wanting to encourage rumors with a hasty departure, Schwartz did the interview live from Bear’s annual media conference in Palm Beach. Chosen because of his perceived friendliness to Bear, Faber nonetheless opened the interview with a devastating question that claimed direct knowledge of a trader whose credit department had temporarily held up a trade with Bear. Later during the interview Faber admitted that the trade had finally gone through, but he had called into question Bear’s fundamental capacity to operate as a trading firm. One veteran trader later commented,

You knew right at that moment that Bear Stearns was dead, right at the moment he asked that question. Once you raise that idea, that the firm can’t follow through on a trade, it’s over. Faber killed him. He just killed him.

Despite sentiment at Bear that Schwartz had finally put the company’s best foot forward and refuted rumors of its illiquidity, hedge funds began pulling their accounts in earnest, bringing Bear’s reserves down to $15 billion. Additionally, repo lenders – whose overnight loans to investment banks must be renewed daily – began informing Bear that they would not renew the next morning, forcing the firm to find new sources of credit. Schwartz phoned Parr at Lazard, Molinaro reviewed Bear’s plans for an emergency sale in the event of a crisis, and one of the firm’s attorneys called the president of the Federal Reserve to explain Bear’s situation and implore him to accelerate the newly announced program that would allow investment banks to use mortgage securities as collateral for emergency loans from the Fed’s discount window, normally reserved for commercial banks.

The trickle of withdrawals that had begun earlier in the week turned into an unstoppable torrent of cash flowing out the door on Thursday. Meanwhile, Bear’s stock continued its sustained nosedive, falling nearly 15% to an intraday low of $50.48 before rallying to close down 1.5%. At lunch, Schwartz assured a crowded meeting of Bear executives that the whirlwind rumors were simply market noise, only to find himself interrupted by Michael Minikes, senior managing director,

Do you have any idea what is going on? Our cash is flying out the door! Our clients are leaving us!

Hedge fund clients jumped ship in droves. Renaissance Technologies withdrew approximately $5 billion in trading accounts, and D. E. Shaw followed suit with an equal amount. That evening, Bear executives assembled in a sixth-floor conference room to survey the carnage. In less than a week, the firm had burned through all but $5.9 billion of its $18.3 billion in reserves, and was still on the hook for $2.4 billion in short-term debt to Citigroup. With a panicked market making more withdrawals the next day almost certain, Schwartz accepted the inevitable need for additional financing and had Parr revisit merger discussions with J.P. Morgan Chase CEO James Dimon that had stalled in the fall. Flabbergasted at the idea that an agreement could be reached that night, Dimon nonetheless agreed to send a team of bankers over to analyze Bear’s books.

Parr’s call interrupted Dimon’s 52nd birthday celebration at a Greek restaurant just a few blocks away from Bear headquarters, where a phalanx of attorneys had begun preparing emergency bankruptcy filings and documents necessary for a variety of cash-injecting transactions. Facing almost certain insolvency in the next 24 hours, Schwartz hastily called an emergency board meeting late that night, with most board members dialing in remotely. Bear’s nearly four hundred subsidiaries would make a bankruptcy filing impossibly complicated, so Schwartz continued to cling to the hope for an emergency cash infusion to get Bear through Friday. As J.P. Morgan’s bankers pored over Bear’s positions, they balked at the firm’s precarious position and the continued size of its mortgage holdings, insisting that the Fed get involved in a bailout they considered far too risky to take on alone.

Its role as a counterparty in trillions of dollars’ worth of derivatives contracts bore an eerie similarity to LTCM, and the Fed once again saw the potential for financial Armageddon if Bear were allowed to collapse of its own accord. An emergency liquidation of the firm’s assets would have put strong downward pressure on global securities prices, exacerbating an already chaotic market environment. Facing a hard deadline of credit markets’ open on Friday morning, the Fed and J.P. Morgan wrangled back and forth on how to save Bear. Working around the clock, they finally reached an agreement wherein J.P. Morgan would access the Fed’s discount window and in turn offer Bear a $30 billion credit line that, as dictated by a last-minute insertion by J.P. Morgan general counsel Steven Cutler, would be good for 28 days. As the press release went public, Bear executives cheered; Bear would have almost a month to seek alternative financing.

Where Bear had seen a lifeline, however, the market saw instead a last desperate gasp for help. Incredulous Bear executives could only watch in horror as the firm’s capital continued to fly out of its coffers. On Friday morning Bear burned through the last of its reserves in a matter of hours. A midday conference call in which Schwartz confidently assured investors that the credit line would allow Bear to continue “business as usual” did little to stop the bleeding, and its stock lost almost half of its already depressed value, closing at $30 per share.

All day Friday, Parr set about desperately trying to save his client, searching every corner of the financial world for potential investors or buyers of all or part of Bear. Given the severity of the situation, he could rule out nothing, from a sale of the lucrative prime brokerage operations to a merger or sale of the entire company. Ideally, he hoped to find what he termed a “validating investor,” a respected Wall Street name to join the board, adding immediate credibility and perhaps quieting the now deafening rumors of Bear’s imminent demise. Sadly, only a few such personalities with the reputation and war chest necessary to play the role of savior existed, and most of them had already passed on Bear.

Nonetheless, Schwartz left Bear headquarters on Friday evening relieved that the firm had lived to see the weekend and secured 28 days of breathing room. During the ride home to Greenwich, an unexpected phone call from New York Federal Reserve President Timothy Geithner and Treasury Secretary Henry Paulson shattered that illusion. Paulson told a stunned Schwartz that the Fed’s line of credit would expire Sunday night, giving Bear 48 hours to find a buyer or file for bankruptcy. The demise of the 28-day clause remains a mystery; the speed necessary early Friday morning and the inclusion of the clause by J.P. Morgan’s general counsel suggest that Bear executives had misinterpreted it, although others believe that Paulson and Geithner had soured both on Bear’s prospects and on market perception of an emergency loan from the Fed as Friday wore on. Either way, the Fed had made up its mind, and a Saturday morning appeal from Schwartz failed to sway Geithner.

All day Saturday prospective buyers streamed through Bear’s headquarters to pick through the rubble as Parr attempted to orchestrate Bear’s last-minute salvation. Chaos reigned, with representatives from every major bank on Wall Street, J. C. Flowers, KKR, and countless others poring over Bear’s positions in an effort to determine the value of Bear’s massive illiquid holdings and how the Fed would help in financing. Some prospective buyers wanted just a piece of the dying bank, others the whole firm, with still others proposing more complicated multiple-step transactions that would slice Bear to ribbons. One by one, they dropped out, until J. C. Flowers made an offer for 90% of Bear for a total of up to $2.6 billion, but the offer was contingent on the private equity firm raising $20 billion from a bank consortium, and $20 billion in risky credit was unlikely to appear overnight.

That left J.P. Morgan. Apparently the only bank willing to come to the rescue, J.P. Morgan had sent no fewer than 300 bankers representing 16 different product groups to Bear headquarters to value the firm. The sticking point, as with all the bidders, was Bear’s mortgage holdings. Even after a massive write-down, it was impossible to assign a value to such illiquid (and publicly maligned) securities with any degree of accuracy; after all, it was J.P. Morgan that had forced the default of the BSAM hedge funds that started this mess less than a year earlier.

On its final 10-Q in March, Bear listed $399 billion in assets and $387 billion in liabilities, leaving just $12 billion in equity for a leverage multiple of roughly 32. Bear initially estimated that this included $120 billion of “risk-weighted” assets, those that might be subject to subsequent write-downs. As J.P. Morgan’s bankers worked around the clock trying to get to the bottom of Bear’s balance sheet, they came to estimate the figure at nearly $220 billion. That pessimistic outlook, combined with Sunday morning’s New York Times article reiterating Bear’s recent troubles, dulled J.P. Morgan’s appetite for jumping onto what appeared to be a sinking ship. Later, one J.P. Morgan banker shuddered, recalling the article. “That article certainly had an impact on my thinking. Just the reputational aspects of it, getting into bed with these people.”
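The balance-sheet arithmetic behind that leverage figure can be reproduced directly. A sketch using the rounded 10-Q figures quoted above; the reported multiple of roughly 32 presumably reflects the unrounded filing numbers, since the rounded ones give closer to 33:

```python
# Balance-sheet leverage from Bear's final 10-Q (figures in $billions,
# rounded as quoted in the text). Equity is assets minus liabilities;
# the leverage multiple is assets over equity.

assets = 399.0
liabilities = 387.0
equity = assets - liabilities   # 12.0
leverage = assets / equity      # ~33 with these rounded inputs
```

A multiple in the low thirties means that a decline of about 3% in the value of Bear's assets would have been enough to wipe out its entire equity, which is why the disputed size of the "risk-weighted" pile ($120 billion versus J.P. Morgan's estimate of nearly $220 billion) mattered so much.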

On Sunday morning J.P. Morgan backed out and Dimon told a shell-shocked Schwartz to pursue any other option available to him. The problem was, no such alternative existed. Knowing this, and the possibility that the liquidation of Bear could throw the world’s financial markets into chaos, Fed representatives immediately phoned Dimon. As it had in the LTCM case a decade earlier, the Fed relied heavily on suasion, or “jawboning,” the longtime practice of attempting to influence market participants by appeals to reason rather than by declaration by fiat. For hours, J.P. Morgan’s and the Fed’s highest-ranking officials played a game of high-stakes poker, with each side bluffing and Bear’s future hanging in the balance. The Fed wanted to avoid unprecedented government participation in the bailout of a private investment firm, while J.P. Morgan wanted to avoid taking on any of the “toxic waste” in Bear’s mortgage holdings. “They kept saying, ‘We’re not going to do it,’ and we kept saying, ‘We really think you should do it,’” recalled one Fed official. “This went on for hours . . . They kept saying, ‘We can’t do this on our own.’” With the hours ticking away until Monday’s Australian markets would open at 6:00 p.m. New York time, both sides had to compromise.

On Sunday afternoon, Schwartz stepped out of a 1:00 emergency meeting of Bear’s board of directors to take the call from Dimon. The offer would come somewhere in the range of $4 to $5 per share. Hearing the news from Schwartz, the Bear board erupted with rage. Dialing in from the bridge tournament in Detroit, Cayne exploded, ranting furiously that the firm should file for bankruptcy protection under Chapter 11 rather than accept such a humiliating offer, which would reduce his 5.66 million shares – once worth nearly $1 billion – to less than $30 million in value. In reality, however, bankruptcy was impossible. As Parr explained, changes to the federal bankruptcy code in 2005 meant that a Chapter 11 filing would be tantamount to Bear falling on its sword, because regulators would have to seize Bear’s accounts, immediately ceasing the firm’s operations and forcing its liquidation. There would be no reorganization.

Even as Cayne raged against the $4 offer, the Fed’s concern over the appearance of a $30 billion loan to a failing investment bank while American homeowners faced foreclosures compelled Treasury Secretary Paulson to pour salt in Bear’s wounds. Officially, the Fed had remained hands-off in the LTCM bailout, relying on its powers of suasion to convince other banks to step up in the name of market stability. Just 10 years later, they could find no takers. The speed of Bear’s collapse, the impossibility of conducting true due diligence in such a compressed time frame, and the incalculable risk of taking on Bear’s toxic mortgage holdings scared off every buyer and forced the Fed from an advisory role into a principal role in the bailout. Worried that a price deemed at all generous to Bear might subsequently encourage moral hazard – increased risky behavior by investment banks secure in the knowledge that in a worst-case scenario, disaster would be averted by a federal bailout – Paulson determined that the transaction, while rescuing the firm, also had to be punitive to Bear shareholders. He called Dimon, who reiterated the contemplated offer range.

“That sounds high to me,” Paulson told the J.P. Morgan chief. “I think this should be done at a very low price.” It was moments later that Braunstein called Parr. “The number’s $2.” Under Delaware law, executives must act on behalf of both shareholders and creditors when a company enters the “zone of insolvency,” and Schwartz knew that Bear had rocketed through that zone over the past few days. Faced with bankruptcy or J.P. Morgan, Bear had no choice but to accept the embarrassingly low offer, a discount of more than 90% off its $30 close on Friday evening. Schwartz convinced the weary Bear board that $2 would be “better than nothing,” and by 6:30 p.m., the deal was unanimously approved.

After 85 years in the market, Bear Stearns ceased to exist.

Conjuncted: Financialization of Natural Resources – Financial Analysis of the Blue Economy: Sagarmala’s Case in Point.


The financialization of natural resources is the process of replacing environmental regulation with markets. In order to bring nature under the control of markets, the planet’s natural resources need to be made into commodities that can be bought or sold for a profit. It is a means of transferring the stewardship of our common resources to private business interests. The financialization of nature is not about protecting the environment; rather, it is about creating ways for the financial sector to continue to earn high profits. Although the sector has begun to rebound from the financial crisis, it is still below its pre-crisis levels of profit. By pushing into new areas, promoting the creation of new commodities, and exploiting the real threat of climate change for their own ends, financial companies and actors are placing the whole world at risk of precarity.

There has been a systemic increase in financial speculation on commodities, driven mainly by the deregulation of derivative markets, the increasing involvement of investment banks, hedge funds and other institutional investors in commodity speculation, and the emergence of new instruments such as index funds and exchange-traded funds. Financial deregulation over the last decade has for the first time transformed commodities into financial assets. What we might call ‘financialization’ is thus penetrating all commodity markets and their functioning. Contrary to common sense and what civil society assumes, financial markets are going deeper and deeper into the real economy as a response to the financial crisis, so that speculative capital is becoming structurally intertwined with productive capital – in this case commodities and natural resources.

Marine ecology as a natural resource is not immune to commodification, and an array of financial agents are making it their indispensable destination, thrashing out new types of alliances that converge around specific ideas about how maritime and coastal resources should be organized: to whose benefit, under which terms, and to what end. The commodification of marine ecology is what is referred to as the Blue Economy, which converges on the necessity of implementing policies across scales that are conducive to what its promoters describe as a win-win-win situation in pursuit of ‘sustainable development’, entailing pro-poor, conservation-sensitive blue growth. What one cannot fail to notice here is that the Blue Economy follows close on the heels of what Karl Marx called the necessary prerequisite to capitalism: primitive accumulation. If in the days of the industrial revolution, at the time Marx was writing, natural resources like land were converted into commercial commodities, then today, under the rubric of neoliberalism, the attack on natural resources takes the form of converting them into speculative capital. But as commercial history has changed, so has the notion of accumulation: today’s accumulation proceeds through dispossession. In the green-grabbing frame, conservation initiatives have become a key force driving primitive accumulation, although the form that primitive accumulation through conservation takes is very different from that initially described by Marx, since conservation initiatives involve taking nature out of production, as opposed to bringing it in through the initial enclosures Marx described.
Under such unfoldings, even the notion of appropriation changes: it now implies the transfer of ownership, use rights and control over resources that were once publicly or privately owned, or not even the subject of ownership, from the poor (or everyone, including the poor) into the hands of the powerful.

Moreover, for David Harvey, states under neoliberalism become increasingly oriented toward attracting foreign direct investment, i.e. specifically actors with the capital to invest, whereas all others are overlooked and/or lose out. Central in all of these dimensions is the assumption in market-based neoliberal conservation that “once property rights are established and transaction costs are minimized, voluntary trade in environmental goods and bads will produce optimal, least-cost outcomes with little or no need for state involvement.” This implies that win-win-win outcomes, with benefits on all fronts spanning corporate investors, local communities, biodiversity, national economies and so on, are possible if only the right technocratic policies are put in place. By extension, this also means side-stepping intrinsically political questions, with effective management referred instead to an economic rationality informed by cutting-edge ecological science, in turn making the transition to the ‘green economy’ conflict-free as long as the “invisible hand of the market is guided by [neutral] scientific expertise”. While marine and coastal resources may have been largely overlooked in the discussions on green grabbing and neoliberal conservation, a robust, but small, critical literature has been devoted to looking specifically into the political economy of fisheries systems. Focusing on one sector in the outlined ‘blue economy’, this literature uncovers “how capitalist relations and dynamics (in their diverse and varying forms) shape and/or constitute fisheries systems.”

The question then is: how viable or sustainable are these financial interventions? Financialization produces effects which can create long-term trends (such as those on functional income distribution) but can also change across different periods of economic growth, slowdown and recession. Interpreting the implications of financialization for sustainability therefore requires a methodologically diverse, empirical, dual-track approach which combines different methods of investigation. Even times of prosperity, despite their fragile and vulnerable nature, can endure for several years before collapsing due to high levels of indebtedness, which in turn amplify the real effects of a financial crisis and hinder economic growth. Things get more complicated when financialization interferes with the environment and natural resources, for then the losses are not merely financial. Financialization has played a significant role in the recent price shocks in food and energy markets, while the wave of speculative investment in natural resources has produced, and is likely to keep producing, perverse environmental and social impacts. Moreover, the so-called financialization of environmental conservation tends to enhance the financial value of environmental resources, but it is selective: not all stakeholders have the same opportunities, and not all uses and values of natural resources and services are accounted for. This mechanism brings new risks and challenges for environmental services and their users that are excluded by official systems of natural capital monetization and accounting. This is exactly the precarity one is staring at when dealing with the Blue Economy.

Price-Earnings Ratio. Note Quote.

The price-earnings ratio (P/E) is arguably the most popular price multiple. There are numerous definitions and variations of the price-earnings ratio. In its simplest form, the price-earnings ratio relates current share price to earnings per share.

The forward (or estimated) price-earnings ratio is based on the current stock price and the estimated earnings for future full fiscal years. Depending on how far out analysts are forecasting annual earnings (typically, for the current year and the next two fiscal years), a company can have multiple forward price-earnings ratios. The forward P/E will change as earnings estimates are revised when new information is released and quarterly earnings become available. Also, forward price-earnings ratios are calculated using estimated earnings based on current fundamentals. A company’s fundamentals could change drastically over a short period of time, and estimates may lag the changes as analysts digest the new facts and revise their outlooks.

The average price-earnings ratio attempts to smooth out the price-earnings ratio by reducing the daily variation caused by stock price movements that may result from general volatility in the stock market. Different sources may calculate this figure differently. Average P/E is defined as the average of the high and low price-earnings ratios for a given year. The high P/E is calculated by dividing the high stock price for the year by the annual fully diluted earnings per share from continuing operations. The low P/E for the year is calculated using the low stock price for the year.

The relative price-earnings ratio helps to compare a company’s price-earnings ratio to the price-earnings ratio of the overall market, both currently and historically. Relative P/E is calculated by dividing the firm’s price-earnings ratio by the market’s price-earnings ratio.
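As a quick illustration, the trailing, forward, average and relative P/E definitions above can be sketched in a few lines of Python; all input figures below are hypothetical:

```python
def trailing_pe(price, eps):
    """Current share price divided by (historical) earnings per share."""
    return price / eps

def forward_pe(price, estimated_eps):
    """Current share price divided by estimated future earnings per share."""
    return price / estimated_eps

def average_pe(high_price, low_price, eps):
    """Average of the high and low P/E ratios for a given year."""
    return (high_price / eps + low_price / eps) / 2.0

def relative_pe(firm_pe, market_pe):
    """Firm's P/E divided by the market's P/E."""
    return firm_pe / market_pe

# Hypothetical example: price 50, EPS 2.5, yearly high/low 60/40, market P/E 16.
pe = trailing_pe(50.0, 2.5)         # 20.0
avg = average_pe(60.0, 40.0, 2.5)   # 20.0
rel = relative_pe(pe, 16.0)         # 1.25
```

A relative P/E above 1 simply means the firm trades at a premium to the market multiple, below 1 at a discount.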

The price-earnings ratio is used to gauge market expectation of future performance. Even when using historical earnings, the current price of a stock is a compilation of the market’s belief in future prospects. Broadly, a high price-earnings ratio means the market believes that the company has strong future growth prospects. A low price-earnings ratio generally means the market has low earnings growth expectations for the firm, or there is high risk or uncertainty of the firm actually achieving growth. However, looking at a price-earnings ratio alone may not be too illuminating. It will always be more useful to compare the price-earnings ratios of one company to those of other companies in the same industry and to the market in general. Furthermore, tracking a stock’s price-earnings ratio over time is useful in determining how the current valuation compares to historical trends.

The Gordon growth model, a variant of the discounted cash flow model, is a method for valuing the intrinsic value of a stock or business. Much research on P/E ratios is based on this constant dividend growth model.

When investors purchase a stock, they expect two kinds of cash flows: dividends received while holding the shares, and the expected stock price at the end of the holding period. As the expected share price is itself determined by future dividends, we can discount the unlimited stream of dividends to value the stock today.

A normal model for the intrinsic value of a stock:

V = D_1/(1+R)^1 + D_2/(1+R)^2 + … + D_n/(1+R)^n = ∑_{t=1}^{n} D_t/(1+R)^t (n→∞) —– (1)

In (1):

V: intrinsic value of the stock;

D_t: dividend for the t-th year;

R: discount rate, namely the required rate of return;

t: the year of the dividend payment.

Assuming the market is efficient, the share price should equal the intrinsic value of the stock, so equation (1) becomes:

P_0 = D_1/(1+R)^1 + D_2/(1+R)^2 + … + D_n/(1+R)^n = ∑_{t=1}^{n} D_t/(1+R)^t (n→∞) —– (2)

where P_0: purchase price of the stock;

D_t: dividend for the t-th year;

R: discount rate, namely the required rate of return;

t: the year of the dividend payment.

Assuming the dividend grows stably at a constant rate g, we derive the constant dividend growth model, that is, the Gordon constant dividend growth model:

P_0 = D_1/(1+R)^1 + D_2/(1+R)^2 + … + D_n/(1+R)^n = D_0(1+g)/(1+R)^1 + D_0(1+g)^2/(1+R)^2 + … + D_0(1+g)^n/(1+R)^n = ∑_{t=1}^{n} D_0(1+g)^t/(1+R)^t —– (3)

When g is constant and R > g, equation (3) reduces to:

P_0 = D_0(1+g)/(R−g) = D_1/(R−g) —– (4)

where, P_0: purchase price of the stock;

D_0: dividend at the time of purchase;

D_1: dividend for the 1st year;

R: discount rate, namely the required rate of return;

g: the growth rate of the dividend.
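Equation (4) is just the closed form of the discounted dividend sum in (3). A minimal Python sketch (with hypothetical inputs) can confirm that the truncated sum of (3) converges to it when R > g:

```python
def gordon_price(d0, r, g):
    """Equation (4): P0 = D0*(1+g)/(R-g); requires R > g."""
    if r <= g:
        raise ValueError("the Gordon formula requires R > g")
    return d0 * (1.0 + g) / (r - g)

def discounted_dividend_sum(d0, r, g, n_years):
    """Truncated sum in equation (3): sum over t of D0*(1+g)^t/(1+R)^t."""
    return sum(d0 * (1.0 + g) ** t / (1.0 + r) ** t
               for t in range(1, n_years + 1))

# Hypothetical inputs: D0 = 2, R = 8%, g = 3%.
p0 = gordon_price(2.0, 0.08, 0.03)                        # ≈ 41.2
approx = discounted_dividend_sum(2.0, 0.08, 0.03, 2000)   # ≈ p0
```

The truncated sum agrees with (4) to high precision once enough years are included, since the per-year factor (1+g)/(1+R) is below 1.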

Suppose the dividend payout ratio b = D_1/E_1 is fixed; dividing equation (4) by E_1 gives:

P_0/E_1 = (D_1/E_1)/(R−g) = b/(R−g) —– (5)

where, P_0: purchase price of the stock;

D_1: dividend for the 1st year;

E_1: earnings per share (EPS) for the 1st year after purchase;

b: dividend payout ratio;

R: discount rate, namely the required rate of return;

g: the growth rate of the dividend.

Therefrom we derive the theoretical computation model of the P/E ratio, in which the factors directly deciding P/E appear, namely the dividend payout ratio, the required rate of return and the growth rate of the dividend. From equation (5), the P/E ratio is related positively to the dividend payout ratio and the growth rate of the dividend, and negatively to the required rate of return.
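The comparative statics of equation (5) are easy to check numerically. The sketch below, on hypothetical inputs, verifies that the theoretical P/E rises with the payout ratio b and the growth rate g, and falls with the required return R:

```python
def theoretical_pe(b, r, g):
    """Equation (5): P0/E1 = b/(R - g); requires R > g."""
    if r <= g:
        raise ValueError("requires R > g")
    return b / (r - g)

base = theoretical_pe(0.4, 0.10, 0.04)          # 0.4/0.06 ≈ 6.67
assert theoretical_pe(0.5, 0.10, 0.04) > base   # higher payout  -> higher P/E
assert theoretical_pe(0.4, 0.10, 0.05) > base   # higher growth  -> higher P/E
assert theoretical_pe(0.4, 0.12, 0.04) < base   # higher required return -> lower P/E
```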

Realistically speaking, most investors associate high P/E ratios with corporations promising fast growth of future profits. However, the risk closely linked to speedy growth is also very important, and the two can counterbalance each other. For instance, when other elements are equal, the higher the risk of a stock, the lower its P/E ratio; but a high growth rate can counterbalance the high risk, thus leading to a high P/E ratio. The P/E ratio reflects rational investors’ expectations of a company’s future growth potential and risk. The growth rate of the dividend (g) and the required rate of return (R) in the equation likewise capture the growth opportunity and the risk factors respectively.

Financial indices such as the dividend payout ratio, the liability-assets (L/A) ratio and indices reflecting growth and profitability are employed here as direct influence factors that have an impact on companies’ P/E ratios.

As derived from (5), the dividend payout ratio has a direct positive effect on the P/E ratio. When the dividend payout ratio is high, the returns and stock value investors expect will also rise, which leads to a high P/E ratio. Conversely, the P/E ratio will be correspondingly lower.

Earnings per share (EPS) is another direct factor, and its impact on the P/E ratio is negative. It reflects the relation between the capital size and the profit level of the company. For the same profit level, the larger the capital size, the lower the EPS will be, and the higher the P/E ratio will be. When the liability-assets ratio is high, meaning that the proportion of equity capital is lower than that of debt capital, the EPS will be high and the P/E ratio will consequently be low. Therefore, a company’s L/A ratio also correlates negatively with its P/E ratio.

Some other financial indices, including growth rate of EPS, ROE, growth rate of ROE, growth rate of net assets, growth rate of main business income and growth rate of main business profit, should theoretically correlate positively with P/E ratios, because if a company’s growth and profitability are both great, then investors’ expectations will be high, and the stock price and P/E ratio will be correspondingly high. Conversely, they will be low.

In the Gordon growth model, the growth of the dividend comes from reinvesting retained earnings at a rate of return r, therefore:

g = r(1−b) = retention ratio × return on retained earnings.

As a result,

P_0/E_1 = b/(R−g) = b/(R−r(1−b)) —– (6)

In particular, when the expected return on retained earnings equals the required rate of return (i.e. r = R), or when retained earnings are zero (i.e. b = 1), we obtain:

P_0/E_1 = 1/R —– (7)

Obviously, in (7) the theoretical value of the P/E ratio is the reciprocal of the required rate of return. According to the Capital Asset Pricing Model (CAPM), the average yield of the stock market should equal the risk-free yield plus the total risk premium. When no risk exists, the required rate of return equals the market interest rate, and the P/E ratio then reduces to the reciprocal of the market interest rate.

As an important influence factor, the annual interest rate affects both the market average and companies’ individual P/E ratios. On the side of the market average P/E ratio: when the interest rate declines, funds move into security markets, the increased supply of funds drives share prices up, and P/E ratios rise. In contrast, when the interest rate rises, capital flows back into banks, the supply of funds tightens, and share prices decline along with P/E ratios. On the side of companies’ individual P/E ratios: a rise in the interest rate burdens companies, so that, all other conditions remaining equal, earnings and hence equity are reduced; a large deviation between operating performance and expected returns appears, a high P/E level can no longer be supported, and stock prices decline. As a result, both the market average and companies’ individual P/E ratios are influenced by the annual interest rate.

This reasoning is also suitable for estimating the market average P/E ratio, and only when all the above assumptions are satisfied does the practical P/E ratio amount to the theoretical value. However, unlike the securities market, the interest rate is relatively rigid, especially in countries with strict interest rate controls; interest rate adjustments are infrequent and hence not synchronous with macroeconomic fundamentals. The stock market, conversely, does reflect the macroeconomic fundamentals; high investor expectations can push up stock prices and with them the aggregate value of the whole market, and other market behaviors can also change average P/E ratios. It is therefore impossible for the average P/E ratio to be identical with the theoretical one. Variance exists inevitably; the key is to measure a rational range for this variance.

For the market average P/E ratio, P should be the aggregate value of listed stocks, and E the total level of capital gains. In a mature market, the reasonable average P/E ratio should be the reciprocal of the average yield of the market; usually the annual bank interest rate is used to represent the average yield of the market.

The return on retained earnings is, in theory, an expected value, but it is hard to forecast, so the return on equity (ROE) is used to estimate it.

(6) can then evolve as,

P_0/E_1 = b/(R−g) = b/(R−r(1−b)) = b/(R−ROE(1−b)) —– (8)

From (8) we see that ROE, which measures the value a company creates for its shareholders, is one of the influence factors on the P/E ratio; it is positively correlated with it. The usefulness of any price-earnings ratio is limited to firms that have positive actual and expected earnings. Depending on the data source, companies with negative earnings will show a “null” value for the P/E, while other sources report a P/E of zero. In addition, earnings are subject to management assumptions and manipulation more than other income statement items such as sales, making it hard to get a true sense of value.
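Equation (8) and its special cases in (7) can be sketched directly; note in particular that when r = R, or when everything is paid out (b = 1), the theoretical P/E collapses to 1/R:

```python
def pe_from_roe(b, r, roe):
    """Equation (8): P0/E1 = b/(R - ROE*(1-b)); requires R > ROE*(1-b)."""
    g = roe * (1.0 - b)      # growth = retention ratio * ROE
    if r <= g:
        raise ValueError("requires R > ROE*(1-b)")
    return b / (r - g)

# Special cases reproducing equation (7), with hypothetical rates:
full_payout = pe_from_roe(1.0, 0.08, 0.15)    # b = 1  ->  1/R = 12.5
r_equals_roe = pe_from_roe(0.4, 0.10, 0.10)   # r = ROE -> 1/R = 10.0
```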

Credit Risk Portfolio. Note Quote.

The recent development in credit markets is characterized by a flood of innovative credit risky structures. State-of-the-art portfolios contain derivative instruments ranging from simple, nearly commoditized contracts such as credit default swap (CDS), to first-generation portfolio derivatives such as first-to-default (FTD) baskets and collateralized debt obligation (CDO) tranches, up to complex structures involving spread options and different asset classes (hybrids). These new structures allow portfolio managers to implement multidimensional investment strategies, which seamlessly conform to their market view. Moreover, the exploding liquidity in credit markets makes tactical (short-term) overlay management very cost efficient. While the outperformance potential of an active portfolio management will put old-school investment strategies (such as buy-and-hold) under enormous pressure, managing a highly complex credit portfolio requires the introduction of new optimization technologies.

New derivatives allow the decoupling of business processes in the risk management industry (in banking, as well as in asset management), since credit treasury units are now able to manage specific parts of credit risk actively and independently. The traditional feedback loop between risk management and sales, which was needed to structure the desired portfolio characteristics only by selective business acquisition, is now outdated. Strategic cross asset management will gain in importance, as a cost-efficient overlay management can now be implemented by combining liquid instruments from the credit universe.

In any case, all these developments force portfolio managers to adopt an integrated approach. All involved risk factors (spread term structures including curve effects, spread correlations, implied default correlations, and implied spread volatilities) have to be captured and integrated into appropriate risk figures. We have a look on constant proportion debt obligations (CPDOs) as a leveraged exposure on credit indices, constant proportion portfolio insurance (CPPI) as a capital guaranteed instrument, CDO tranches to tap the correlation market, and equity futures to include exposure to stock markets in the portfolio.

For an integrated credit portfolio management approach, it is of central importance to aggregate risks over various instruments with different payoff characteristics. A state-of-the-art credit portfolio contains not only linear risks (CDS and CDS index contracts) but also nonlinear risks (such as FTD baskets, CDO tranches, or credit default swaptions). From a practitioner’s point of view there is a simple solution for this risk aggregation problem, namely delta-gamma management. In such a framework, one approximates the risks of all instruments in a portfolio by their first- and second-order sensitivities and aggregates these sensitivities to the portfolio level. Apparently, for a proper aggregation of risk factors, one has to take the correlation of these risk factors into account. However, for credit risky portfolios, a simplistic sensitivity approach will be inappropriate, as the following characteristics of credit portfolio risks show:

  • Credit risky portfolios usually involve a larger number of reference entities. Hence, one has to take a large number of sensitivities into account. However, this is a phenomenon that is already well known from the management of stock portfolios. The solution is to split the risk for each constituent into a systematic risk (e.g., a beta with a portfolio hedging tool) and an alpha component which reflects the idiosyncratic part of the risk.

  • However, in contrast to equities, credit risk is not one dimensional (i.e., one risky security per issuer) but at least two dimensional (i.e., a set of instruments with different maturities). This is reflected in the fact that there is a whole term structure of credit spreads. Moreover, taking also different subordination levels (with different average recovery rates) into account, credit risk becomes a multidimensional object for each reference entity.
  • While most market risks can be satisfactorily approximated by diffusion processes, for credit risk the consideration of events (i.e., jumps) is imperative. The most apparent reason for this is that the dominating element of credit risk is event risk. However, in a market perspective, there are more events than the ultimate default event that have to be captured. Since one of the main drivers of credit spreads is the structure of the underlying balance sheet, a change (or the risk of a change) in this structure usually triggers a large movement in credit spreads. The best-known example for such an event is a leveraged buyout (LBO).
  • For credit market players, correlation is a very special topic, as a central pricing parameter is named implied correlation. However, there are two kinds of correlation parameters that impact a credit portfolio: price correlation and event correlation. While the former simply deals with the dependency between two price (i.e., spread) time series under normal market conditions, the latter aims at describing the dependency between two price time series in case of an event. In its simplest form, event correlation can be seen as default correlation: what is the risk that company B defaults given that company A has defaulted? While it is already very difficult to model this default correlation, for practitioners event correlation is even more complex, since there are other events than just the default event, as already mentioned above. Hence, we can modify the question above: what is the risk that spreads of company B blow out given that spreads of company A have blown out? In addition, the notion of event correlation can also be used to capture the risk in capital structure arbitrage trades (i.e., trading stock versus bonds of one company). In this example, the question might be: what is the risk that the stock price of company A jumps given that its bond spreads have blown out? The complicated task in this respect is that we do not only have to model the joint event probability but also the direction of the jumps. A brief example highlights why this is important. In case of a default event, spreads will blow out accompanied by a significant drop in the stock price. This means that there is a negative correlation between spreads and stock prices. However, in case of an LBO event, spreads will blow out (reflecting the deteriorated credit quality because of the higher leverage), while stock prices rally (because of the fact that the acquirer usually pays a premium to buy a majority of outstanding shares).

These show that a simple sensitivity approach – e.g., calculate and tabulate all deltas and gammas and let a portfolio manager play with them – is not appropriate. Further risk aggregation (e.g., beta management) and risk factors that capture the event risk are needed. For the latter, a quick solution is the so-called instantaneous default loss (IDL). The IDL expresses the loss incurred in a credit risk instrument in case of a credit event. For single-name CDS, this is simply the loss given default (LGD). However, for a portfolio derivative such as a mezzanine tranche, this figure does not directly refer to the LGD of the defaulted item, but to the changed subordination of the tranche because of the default. Hence, this figure allows one to aggregate various instruments with respect to credit events.
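To make the IDL notion concrete, here is a toy Python sketch for an equal-weighted portfolio with homogeneous LGD; the figures and the piecewise tranche payoff are the standard textbook simplifications, not the full pricing machinery:

```python
def tranche_loss(portfolio_loss, attach, detach):
    """Loss absorbed by a tranche with attachment/detachment points
    (all quantities as fractions of portfolio notional)."""
    return min(max(portfolio_loss - attach, 0.0), detach - attach)

def instantaneous_default_loss(defaults_so_far, lgd, n_names, attach, detach):
    """IDL: incremental tranche loss caused by one more credit event.
    For a single-name CDS (attach=0, detach=1, n_names=1) this is the LGD."""
    per_name = lgd / n_names
    before = tranche_loss(defaults_so_far * per_name, attach, detach)
    after = tranche_loss((defaults_so_far + 1) * per_name, attach, detach)
    return after - before

# 100 names, 60% LGD: the first default hits a 0-3% equity tranche,
# but leaves a 3-6% mezzanine tranche untouched until subordination is eaten.
idl_equity_first = instantaneous_default_loss(0, 0.6, 100, 0.00, 0.03)  # 0.006
idl_mezz_first   = instantaneous_default_loss(0, 0.6, 100, 0.03, 0.06)  # 0.0
idl_mezz_sixth   = instantaneous_default_loss(5, 0.6, 100, 0.03, 0.06)  # 0.006
```

The mezzanine IDL is zero while subordination remains, and jumps to the per-name loss once the equity tranche is exhausted, which is exactly the “changed subordination” effect described above.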

In Praise of Libertarianism. Drunken Risibility

Devotion to free markets is a sin??? Nah!!!. Like quantitative induction and philosophical deduction, economics has always had a political purpose, and the purpose has usually been libertarian. Economists are freedom nuts, which is to say that they look with suspicion on lawyerly plans to solve problems with new state compulsions and longer jail sentences. Economics at its philosophical birth, among physiocrats in Paris and moral philosophers in Edinburgh, was in favor of free markets and was suspicious of overblown states. Mostly it still is. Let things be, laissez faire, has been the economists’ cry against intervention. Let the trades begin.

True, not all economists are free traders. The non-free traders, often European and disproportionately French, point out that you can make other assumptions about how trade works, A’, and get other conclusions, C’, not so favorable to laissez faire. The free-trade theorem, which sounds so grand, is actually pretty easy to overturn. Suppose a big part of the economy – say the household – is, as the economists put it, “distorted” (e.g., suppose people in households do things for love: you can see that the economists have a somewhat peculiar idea of “distortion”). Then it follows rigorously (that is to say, mathematically) that free trade in other sectors (e.g., manufacturing) will not be the best thing. In fact it can make the average person worse off than restricted, protected, tariffed trade would.

And of course normal people – meaning non-economists – are not persuaded that free trade is always and everywhere a good thing. For example most people think free trade is a bad thing for the product or service they make. But then, by the same logic, we ought to blockade entry into the profession of being an economist: it is, all would agree, scandalous that so many unqualified quacks are bilking consumers with adulterated economics.

And very many normal people of leftish views, even after communism, even after numerous disastrous experiments in central planning, think socialism deserves a chance. They think it obvious that socialism is after all fairer than unfettered capitalism. They think it obvious that regulation is after all necessary to restrain monopoly. They don’t realize that free markets have partially broken down inequality (for example, between men and women; “partially”) and partially undermined monopolies (for example, local monopolies in retailing) and have increased the income of the poor over two centuries by a factor of 18. The felony of economics, the lefties think, lies exactly in its free-market bias.

But, my dearly beloved friends on the left, think, think again. There really is a serious case to be made against government intervention and in favor of markets. Maybe not knockdown; maybe imperfect here or there; let’s chat about it; hmm, a serious case that serious people ought to take seriously. The case in favor of markets is on the contrary populist and egalitarian and person-respecting and bad-institution-breaking libertarianism. Don’t go to government to solve problems, said Adam Smith. As he didn’t say, to do so is to put the fox in charge of the hen house. The golden rule is, those who have the gold rule: so don’t expect a government run by men to help women, or a government run by Enron executives to help Enron employees.

Libertarianism is typical of economics, especially English-speaking economics, and most especially American economics. Most Americans, if they can get clear of certain European errors, are radical libertarians under the skin. Give me liberty. Sweet land of liberty. Live free or die. But alas, no time, no time. Libraries of books have been written examining the numerous and weighty arguments for the market and against socialism. Really, that the average literary person believes the first few pages of The Communist Manifesto suffice for knowledge of economics and economic history, in which he professes great interest, is a bit of a scandal. As Cromwell said wearily to the General Assembly of the Church of Scotland, 3 August, 1650, “I beseech you, in the bowels of Christ, think it possible you may be mistaken.” Oh, permit one short libertarian riff.

Nor is government obstruction peculiar to the present-day Third World. In one decade in the eighteenth century, according to the Swedish economist and historian Eli Heckscher in his book, Mercantilism, the French government sent tens of thousands of souls to the galleys and executed 16,000 (that’s about 4.4 people a day over the ten years: you see the beauty of statistical thinking) for the hideous crime of… are you ready to hear the appalling evil these enemies of the State committed, fully justifying hanging them all, every damned one of their treasonable skins? … importing printed calico cloth. States do not change much from age to age. In view of How Muches and Oh, My Gods like these – the baleful oomph of governmental intrusions world-wide crushing harmless (indeed, beneficial) exchange, from marijuana to printed calico – perhaps laissez faire does not seem so obviously sinful, does it now? Consider, my dear leftist friends. Read and reflect. I beseech you, think it possible that, like statistics and mathematics, the libertarianism of economics is a virtue.

High Frequency Markets and Leverage

Leverage effect is a well-known stylized fact of financial data. It refers to the negative correlation between price returns and volatility increments: when the price of an asset is increasing, its volatility drops, while when it decreases, the volatility tends to become larger. The name “leverage” comes from the following interpretation of this phenomenon: When an asset price declines, the associated company becomes automatically more leveraged since the ratio of its debt with respect to the equity value becomes larger. Hence the risk of the asset, namely its volatility, should become more important. Another economic interpretation of the leverage effect, inverting causality, is that the forecast of an increase of the volatility should be compensated by a higher rate of return, which can only be obtained through a decrease in the asset value.
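The negative return/volatility correlation can be seen in a toy simulation. The sketch below runs an Euler discretization of a Heston-type stochastic volatility model with correlation ρ < 0 between the price and variance shocks (all parameters hypothetical), then measures the empirical correlation between returns and variance increments:

```python
import math
import random

def simulate_leverage(rho=-0.7, n=20000, dt=1e-3, kappa=2.0, theta=0.04,
                      xi=0.3, v0=0.04, seed=1):
    """Euler scheme for a Heston-type model:
    dv = kappa*(theta - v)*dt + xi*sqrt(v)*dW2,  return = sqrt(v)*dW1,
    with corr(dW1, dW2) = rho.  Returns (returns, variance increments)."""
    rng = random.Random(seed)
    v = v0
    rets, dvs = [], []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        dw1 = math.sqrt(dt) * z1
        dw2 = math.sqrt(dt) * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
        dv = kappa * (theta - v) * dt + xi * math.sqrt(v) * dw2
        rets.append(math.sqrt(v) * dw1)
        dvs.append(dv)
        v = max(v + dv, 1e-8)   # crude positivity fix for the Euler scheme
    return rets, dvs

def correlation(xs, ys):
    """Plain Pearson correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)
```

With ρ = −0.7 the measured correlation between returns and variance increments comes out clearly negative, which is precisely the leverage effect as defined above.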

Some statistical methods enabling the use of high frequency data have been built to measure volatility. In financial engineering, it became clear in the late eighties that it is necessary to introduce leverage effect into derivatives pricing frameworks in order to accurately reproduce the behavior of the implied volatility surface. This led to the rise of the famous stochastic volatility models, where the Brownian motion driving the volatility is (negatively) correlated with the one driving the price.

Traditional explanations for leverage effect are based on “macroscopic” arguments from financial economics. Could microscopic interactions between agents naturally lead to leverage effect at larger time scales? We would like to know whether part of the foundations for leverage effect could be microstructural. To do so, our idea is to consider a very simple agent-based model, encoding well-documented and understood behaviors of market participants at the microscopic scale. Then we aim at showing that in the long run, this model leads to a price dynamic exhibiting leverage effect. This would demonstrate that typical strategies of market participants at the high frequency level naturally induce leverage effect.

One could argue that, since transactions take place at the finest frequencies and prices are revealed through order book type mechanisms, leverage effect obviously arises from high frequency properties. Indeed, under certain market conditions, typical high frequency behaviors, probably having no connection with the concepts of financial economics, may give rise to some leverage effect at the low frequency scales. It is important to emphasize, however, that this is not to claim that leverage effect is fully explained by high frequency features.

Another important stylized fact of financial data is the rough nature of the volatility process. Indeed, for a very wide range of assets, historical volatility time-series exhibit a behavior which is much rougher than that of a Brownian motion. More precisely, the dynamics of the log-volatility are typically very well modeled by a fractional Brownian motion with Hurst parameter around 0.1, that is a process with Hölder regularity of order 0.1. Furthermore, using a fractional Brownian motion with small Hurst index also enables to reproduce very accurately the features of the volatility surface.

The fact that for basically all reasonably liquid assets volatility is rough, with the same order of magnitude for the roughness parameter, is of course very intriguing. The tick-by-tick price model is based on a bi-dimensional Hawkes process, which is a bivariate point process (N_t^+, N_t^−)_{t≥0} taking values in (R^+)^2 and with intensity (λ_t^+, λ_t^−) of the form

λ_t^+ = μ^+ + ∫_0^t φ_1(t−s) dN_s^+ + ∫_0^t φ_2(t−s) dN_s^−

λ_t^− = μ^− + ∫_0^t φ_3(t−s) dN_s^+ + ∫_0^t φ_4(t−s) dN_s^−

Here μ^+ and μ^− are positive constants, and the functions (φ_i)_{i=1,…,4} are non-negative, with an associated matrix called the kernel matrix. Hawkes processes are said to be self-exciting, in the sense that the instantaneous jump probability depends on the location of the past events. Hawkes processes are nowadays in standard use in finance, not only in the field of microstructure but also in risk management or contagion modeling. The Hawkes process generates behavior that mimics financial data in a pretty impressive way, and back-fitting yields correspondingly good results. Yet some key problems remain the same whether you use a simple Brownian motion model or this marvelous technical apparatus.

In short, back-fitting only goes so far.

  • The essentially random nature of living systems can lead to entirely different outcomes if said randomness had occurred at some other point in time or magnitude. Due to randomness, entirely different groups would likely succeed and fail every time the “clock” was turned back to time zero, and the system allowed to unfold all over again. Goldman Sachs would not be the “vampire squid”. The London whale would never have been. This will boggle the mind if you let it.

  • Extraction of unvarying physical laws governing a living system from data is in many cases NP-hard. There are far too many varieties of actors and interactions for the exercise to be tractable.

  • Even granting the possibility of their extraction, the components of a living system are not fixed in nature and not subject to unvarying physical laws – not even probability laws.

  • The conscious behavior of some actors in a financial market can change the rules of the game, some of those rules some of the time, or completely rewire the system from the bottom up. This is really just an extension of the former point.

  • Natural mutations over time lead to markets reworking their laws over time through an evolutionary process, with never a thought of doing so.

Thus, in this approach, N_t^+ corresponds to the number of upward jumps of the asset in the time interval [0,t], and N_t^− to the number of downward jumps. Hence, the instantaneous probability of an upward (downward) jump depends on the arrival times of the past upward and downward jumps. Furthermore, by construction, the price process lives on a discrete grid, which is obviously a crucial feature of high frequency prices in practice.

This simple tick-by-tick price model makes it very easy to encode the following important stylized facts of modern electronic markets in the context of high frequency trading:

  1. Markets are highly endogenous, meaning that most of the orders have no real economic motivation but are rather sent by algorithms in reaction to other orders.
  2. Mechanisms preventing statistical arbitrages take place on high frequency markets. Indeed, at the high frequency scale, building strategies which are on average profitable is hardly possible.
  3. There is some asymmetry in the liquidity on the bid and ask sides of the order book. This simply means that buying and selling are not symmetric actions. Indeed, consider for example a market maker with an inventory that is typically positive. She is likely to raise the price less following a buy order than she is to lower the price following a sell order of the same size. This is because her inventory becomes smaller after a buy order, which is a good thing for her, whereas it increases after a sell order.
  4. A significant proportion of transactions is due to large orders, called metaorders, which are not executed at once but split in time by trading algorithms.

    In a Hawkes process framework, the first of these properties corresponds to the case of so-called nearly unstable Hawkes processes, that is Hawkes processes for which the stability condition is almost saturated. This means the spectral radius of the kernel matrix integral is smaller than but close to unity. The second and third ones impose a specific structure on the kernel matrix and the fourth one leads to functions φi with heavy tails.
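A minimal simulation of such a bivariate Hawkes price model can be written with Ogata's thinning algorithm. The sketch below uses exponential kernels φ_ij(t) = α_ij·e^(−βt), a common parametric choice not mandated by the text, with a kernel matrix of spectral radius 0.4, i.e. safely inside the stability region; all parameter values are hypothetical:

```python
import math
import random

def simulate_hawkes2(mu, alpha, beta, t_max, seed=42):
    """Ogata thinning for a bivariate Hawkes process with kernels
    phi_ij(t) = alpha[i][j]*exp(-beta*t).  events[0] collects upward jumps
    (N+), events[1] downward jumps (N-).  O(N^2); fine for short horizons."""
    rng = random.Random(seed)
    events = ([], [])

    def intensity(i, t):
        lam = mu[i]
        for j in (0, 1):
            lam += sum(alpha[i][j] * math.exp(-beta * (t - s))
                       for s in events[j])
        return lam

    t = 0.0
    while True:
        # Intensities only decay between events, so the current total
        # intensity is a valid upper bound until the next candidate point.
        lam_bar = intensity(0, t) + intensity(1, t)
        t += rng.expovariate(lam_bar)
        if t > t_max:
            return events
        lam0, lam1 = intensity(0, t), intensity(1, t)
        u = rng.random() * lam_bar
        if u < lam0:
            events[0].append(t)       # upward jump: price moves up one tick
        elif u < lam0 + lam1:
            events[1].append(t)       # downward jump: price moves down one tick
        # otherwise the candidate point is thinned (rejected)

# Hypothetical parameters: kernel-matrix integrals alpha/beta = [[0.3, 0.1],
# [0.1, 0.3]], spectral radius 0.4.  The tick price path is then
# P_t = (number of up-jumps up to t) - (number of down-jumps up to t).
up, down = simulate_hawkes2((0.5, 0.5), [[0.3, 0.1], [0.1, 0.3]], 1.0, 200.0)
```

Pushing the spectral radius of the matrix of kernel integrals toward 1 produces the “nearly unstable” regime mentioned above, with long bursts of self-excited activity, while replacing the exponentials by heavy-tailed kernels encodes the metaorder splitting of point 4.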