Stephen Wolfram and Stochasticity of Financial Markets. Note Quote.

The most obvious feature of essentially all financial markets is the apparent randomness with which prices tend to fluctuate. Nevertheless, the very idea of chance in financial markets clashes with our intuitive sense of the processes regulating the market. All processes involved seem deterministic. Traders do not only follow hunches but act in accordance with specific rules, and even when they do appear to act on intuition, their decisions are not random but instead follow from the best of their knowledge of the internal and external state of the market. For example, traders copy other traders, or take the same decisions that have previously worked, sometimes reacting against information and sometimes acting in accordance with it. Furthermore, nowadays a greater percentage of the trading volume is handled algorithmically rather than by humans. Computing systems are used for entering trading orders, for deciding on aspects of an order such as the timing, price and quantity, all of which cannot but be algorithmic by definition.

Algorithmic however, does not necessarily mean predictable. Several types of irreducibility, from non-computability to intractability to unpredictability, are entailed in most non-trivial questions about financial markets.

Wolfram asks

whether the market generates its own randomness, starting from deterministic and purely algorithmic rules. Wolfram points out that the fact that apparent randomness seems to emerge even in very short timescales suggests that the randomness (or a source of it) that one sees in the market is likely to be the consequence of internal dynamics rather than of external factors. In economists’ jargon, prices are determined by endogenous effects peculiar to the inner workings of the markets themselves, rather than (solely) by the exogenous effects of outside events.

Wolfram points out that pure speculation, where trading occurs without the possibility of any significant external input, often leads to situations in which prices tend to show more, rather than less, random-looking fluctuations. He also suggests that there is no better way to find the causes of this apparent randomness than by performing an almost step-by-step simulation, with little chance of besting the time it takes for the phenomenon to unfold – the time scales of real world markets being simply too fast to beat. It is important to note that the intrinsic generation of complexity proves the stochastic notion to be a convenient assumption about the market, but not an inherent or essential one.

Economists may argue that the question is irrelevant for practical purposes. They are interested in decomposing time-series into a non-predictable signal and a presumably predictable one in which they have an interest, traditionally called a trend. Whether one, both or none of the two signals is deterministic may be considered irrelevant as long as there is a part that is random-looking, hence most likely unpredictable and consequently worth leaving out.

What Wolfram’s simplified models show, based on simple rules, is that despite being so simple and completely deterministic, they are capable of generating great complexity and of exhibiting (the lack of) patterns similar to the apparent randomness found in price movements in financial markets. Whether one can get the kind of crashes into which financial markets seem to cyclically fall depends on whether the generating rule is capable of producing them from time to time. Economists dispute whether crashes reflect the intrinsic instability of the market, or whether they are triggered by external events. Wolfram’s proposal for modeling market prices would have a simple program generate the randomness that occurs intrinsically: a plausible, if simple and idealized, behavior of traders is shown in the aggregate to produce intrinsically random behavior similar to that seen in price changes.


In the figure above, one can see that even in some of the simplest possible rule-based systems, structures emerge from a random-looking initial configuration with low information content. Trends and cycles are to be found amidst apparent randomness.

An example of a simple model of the market is one where each cell of a cellular automaton corresponds to an entity buying or selling at each step. The behaviour of a given cell is determined by the behaviour of its two neighbors on the previous step according to a rule. A rule like rule 90 is additive, hence reversible, which means that it does not destroy any information and has ‘memory’, unlike the random walk model. Yet, due to its random-looking behaviour, it is not trivial to shortcut the computation or foresee any successive step. There is some randomness in the initial condition of the cellular automaton rule that comes from outside the model, but the subsequent evolution of the system is fully deterministic.
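
To make the setup concrete, here is a minimal sketch in Python, assuming rule 90 for the update and an illustrative price proxy built from the buy/sell imbalance; the coding of cells as decisions and the imbalance-to-price mapping are assumptions for demonstration, not a prescription from the text:

```python
import numpy as np

def step_rule90(cells):
    """One update of elementary CA rule 90: each cell becomes the XOR of its two neighbours."""
    return np.roll(cells, 1) ^ np.roll(cells, -1)

def simulate_market(n_agents=201, n_steps=500, seed=0):
    """Each cell is an agent: 0 = buy, 1 = sell. A crude log-price proxy follows the imbalance."""
    rng = np.random.default_rng(seed)
    cells = rng.integers(0, 2, n_agents)            # random initial condition (external input)
    log_price, prices = 0.0, []
    for _ in range(n_steps):
        cells = step_rule90(cells)                  # fully deterministic internal dynamics
        buyers = np.count_nonzero(cells == 0)
        sellers = n_agents - buyers
        log_price += (buyers - sellers) / n_agents  # imbalance moves the (log) price
        prices.append(log_price)
    return np.array(prices)

if __name__ == "__main__":
    print(simulate_market()[:10])
```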

If sudden large changes are internally generated, this suggests that large changes are more predictable, both in magnitude and in direction, as the result of various interactions between agents. If Wolfram’s intrinsic randomness is what drives the market, one might think one could then easily predict its behaviour, but as suggested by Wolfram’s Principle of Computational Equivalence it is reasonable to expect that the overall collective behaviour of the market would look complicated to us, as if it were random, and hence quite difficult to predict despite being deterministic, or at least having a large deterministic component.

Wolfram’s Principle of Computational Irreducibility says that the only way to determine the answer to a computationally irreducible question is to perform the computation. According to Wolfram, it follows from his Principle of Computational Equivalence (PCE) that

almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication: when a system reaches a threshold of computational sophistication often reached by non-trivial systems, the system will be computationally irreducible.


Complexity Wrapped Uncertainty in the Bazaar


One could conceive of a financial market as a set of N agents, each taking a binary decision at every time step. This is an extremely crude representation, but it captures the essential feature that decisions can be coded by binary symbols (buy = 0, sell = 1, for example). Despite the extreme simplification, the above setup allows a “stylized” definition of price.

Let N_t^0, N_t^1 be the number of agents taking the decision 0, 1 respectively at time t. Obviously, N = N_t^0 + N_t^1 for every t. Then, with the above definition of the binary code, the price can be defined as:

p_t = f(N_t^0 / N_t^1)

where f is an increasing and concave function (its marginal effect diminishes for large imbalances) which also satisfies:

a) f(0) = 0

b) lim_{x→∞} f(x) = ∞

c) lim_{x→∞} f′(x) = 0

The above definition agrees perfectly with the common belief about how supply and demand work. If N_t^0 is small and N_t^1 is large, then there are few agents willing to buy and many agents willing to sell, hence the price should be low. If, on the contrary, N_t^0 is large and N_t^1 is small, then there are many agents willing to buy and just a few agents willing to sell, hence the price should be high. Notice that the winning choice is related to the minority choice. We exploit the above analogy to construct a binary time-series associated to each real time-series of financial markets. Let {p_t}_{t∈N} be the original real time-series. Then we construct a binary time-series {a_t}_{t∈N} by the rule:

a_t = 1 if p_t > p_{t−1}

a_t = 0 if p_t < p_{t−1}
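
A small sketch of the stylized price and the binary coding above, with the illustrative choice f(x) = log(1 + x), which is increasing and concave with f(0) = 0, f(x) → ∞ and f′(x) → 0; the particular f and the toy data are assumptions for demonstration only:

```python
import numpy as np

def stylized_price(n_buy, n_sell):
    """p_t = f(N_t^0 / N_t^1) with the illustrative choice f(x) = log(1 + x)."""
    return np.log1p(n_buy / n_sell)

def binarize(prices):
    """a_t = 1 if p_t > p_{t-1}, a_t = 0 if p_t < p_{t-1} (ties are dropped here for simplicity)."""
    diff = np.diff(prices)
    return (diff[diff != 0] > 0).astype(int)

# toy usage: 100 time steps with random buyer/seller counts among N = 50 agents
rng = np.random.default_rng(1)
n_buy = rng.integers(1, 50, size=100)
prices = stylized_price(n_buy, 50 - n_buy)
print(binarize(prices)[:20])
```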

Physical complexity is defined as the number of binary digits that are explainable (or meaningful) with respect to the environment in a string η. With reference to our problem, the only physical record one gets is the binary string built up from the original real time series, and we consider it as the environment ε. We study the physical complexity of substrings of ε. The comprehension of their complex features has high practical importance. The amount of data agents take into account in order to elaborate their choice is finite and of short range. For every time step t, the binary digits a_{t−l}, a_{t−l+1}, …, a_{t−1} carry some information about the behavior of agents. Hence, the complexity of these finite strings is a measure of how complex the information facing agents is. The Kolmogorov-Chaitin complexity is defined as the length of the shortest program π producing the sequence η when run on a universal Turing machine T:

K(η) = min {|π|: η = T(π)}

where |π| represents the length of π in bits, T(π) the result of running π on the Turing machine T, and K(η) the Kolmogorov-Chaitin complexity of the sequence η. In the framework of this theory, a string is said to be regular if K(η) < |η|. It means that η can be described by a program π whose length is smaller than the length of η. The interpretation of a string should be done in the framework of an environment. Hence, let us imagine a Turing machine that takes the string ε as input. We can define the conditional complexity K(η / ε) as the length of the smallest program that computes η on a Turing machine having ε as input:

K(η / ε) = min {|π|: η = CT(π, ε)}

We want to stress that K(η / ε) represents those bits in η that are random with respect to ε. Finally, the physical complexity can be defined as the number of bits that are meaningful in η with respect to ε :

K(η : ε) = |η| – K(η / ε)

Here |η| also represents the unconditional complexity of the string η, i.e., the value of the complexity if the input were ε = ∅. Of course, the measure K(η : ε) as defined in the above equation has few practical applications, mainly because it is impossible to know the way in which information about ε is encoded in η. However, if a statistical ensemble of strings is available to us, then the determination of complexity becomes an exercise in information theory. It can be proved that the average value C(|η|) of the physical complexity K(η : ε) taken over an ensemble Σ of strings of length |η| can be approximated by:

C(|η|) = ⟨K(η : ε)⟩ ≅ |η| − K(η / ε), where

K(η / ε) ≅ −∑_{η∈Σ} p(η / ε) log₂ p(η / ε)

and the sum is taken over all the strings η in the ensemble Σ. In a population of N strings in environment ε, the quantity n(η)/N, where n(η) denotes the number of strings equal to η in Σ, approximates p(η / ε) as N → ∞.

Let ε = {a_t}_{t∈N} and let l be a positive integer, l ≥ 2. Let Σ_l be the ensemble of sequences of length l built up by a moving window of length l, i.e., if η ∈ Σ_l then η = a_i a_{i+1} … a_{i+l−1} for some value of i. The selection of strings ε is related to periods before crashes and, in contrast, to periods with low uncertainty in the market…..
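
A sketch of how the entropy approximation above can be estimated from the binary series, assuming length-l windows and the empirical window frequencies as stand-ins for p(η / ε); the crash versus quiet-period comparison alluded to would simply apply this to different slices of ε:

```python
import numpy as np
from collections import Counter

def physical_complexity(bits, l):
    """Approximate C(l) = l - H(l), where H(l) is the Shannon entropy (in bits)
    of the empirical distribution of length-l windows of the binary series."""
    windows = ["".join(map(str, bits[i:i + l])) for i in range(len(bits) - l + 1)]
    counts = Counter(windows)
    n = len(windows)
    probs = np.array([c / n for c in counts.values()])
    entropy = -np.sum(probs * np.log2(probs))
    return l - entropy

# toy usage: a purely random series gives complexity close to 0 for small l,
# while a strictly periodic one is highly complex in this sense
rng = np.random.default_rng(2)
print(physical_complexity(rng.integers(0, 2, 5000), l=4))       # ~0 (all 16 patterns about equally likely)
print(physical_complexity(np.tile([0, 1], 2500), l=4))          # ~3 (only 2 of 16 patterns ever occur)
```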

Not Just Any Lair of Filth….Investment Environment = Ratio of Ordinary Profits to Total Capital – Long-Term Interest Rate


In the stock market, price changes are subject to the law of demand and supply: the price rises when there is excess demand, and the price falls when there is excess supply. It seems natural to assume that the price rises if the number of buyers exceeds the number of sellers because there may be excess demand, and that the price falls if the number of sellers exceeds the number of buyers because there may be excess supply. Thus a trader, who expects a certain exchange profit through trading, will predict every other trader’s behaviour, and will choose the same behaviour as the other traders as far as possible. The decision-making of traders will also be influenced by changes of the firm’s fundamental value, which can be derived from analysis of present conditions and future prospects of the firm, and by the return on the alternative asset (e.g. bonds). For the sake of simplicity in an empirical analysis, let us use the ratio of ordinary profits to total capital, which is a typical measure of investment, as a proxy for changes of the fundamental value, and the long-term interest rate as a proxy for changes of the return on the alternative asset.

An investment environment is defined as

investment environment = ratio of ordinary profits to total capital – long-term interest rate

When the investment environment increases (decreases), a trader may think that now is the time for him to buy (sell) the stock. Formally, let us assume that the investment attitude of trader i is determined by minimisation of the following disagreement function e_i(x),

e_i(x) = −(1/2) ∑_{j=1}^N a_{ij} x_i x_j − b_i s x_i —– (1)

where a_{ij} denotes the strength of trader j’s influence on trader i, b_i denotes the strength of the reaction of trader i upon the change of the investment environment s, which may be interpreted as an external field, and x denotes the vector of investment attitudes x = (x_1, x_2, …, x_N). The optimisation problem that should be solved for every trader to achieve minimisation of their disagreement functions e_i(x) at the same time is formalised by

min E(x) = −(1/2) ∑_{i=1}^N ∑_{j=1}^N a_{ij} x_i x_j − ∑_{i=1}^N b_i s x_i —– (2)

Now let us assume that each trader’s decision making is subject to a probabilistic rule. The summation over all possible configurations of agents’ investment attitudes x = (x_1, …, x_N) is computationally explosive in the number of traders N. Therefore, under the circumstance that a large number of traders participates in trading, a probabilistic setting may be one of the best means to analyse the collective behaviour of the many interacting traders. Let us introduce a random variable x^k = (x_1^k, x_2^k, …, x_N^k), k = 1, 2, …, K. The state of the agents’ investment attitude x^k occurs with probability P(x^k) = Prob(x^k), with the requirement 0 < P(x^k) < 1 and ∑_{k=1}^K P(x^k) = 1. Define the amount of uncertainty before the occurrence of the state x^k with probability P(x^k) as the logarithmic function I(x^k) = −log P(x^k). Under these assumptions the above optimisation problem is formalised by

min ⟨E(x)⟩ = ∑_{k=1}^K P(x^k) E(x^k) —– (3)

subject to H = −∑_{k=1}^K P(x^k) log P(x^k), ∑_{k=1}^K P(x^k) = 1

where E(x^k) = (1/2) ∑_{i=1}^N e_i(x^k)

x^k is a state, and H is the information entropy. P(x^k) is the relative frequency of the occurrence of the state x^k. The well-known solution of the above optimisation problem is

P(x^k) = (1/Z) e^{−μE(x^k)}, Z = ∑_{k=1}^K e^{−μE(x^k)}, k = 1, 2, …, K —– (4)

where the parameter μ may be interpreted as a market temperature describing the degree of randomness in the behaviour of traders. The probability distribution P(x^k) is called the Boltzmann distribution, where P(x^k) is the probability that the traders’ investment attitude is in the state k with the function E(x^k), and Z is the partition function. We call the optimising behaviour of the traders with interaction among the other traders a relative expectation formation.
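
A minimal sketch of this relative expectation formation, assuming Ising-like attitudes x_i = ±1 and sampling from the Boltzmann distribution (4) by Metropolis updates; the coupling matrix a_{ij}, the field strengths b_i, the environment s and the parameter μ below are illustrative placeholders:

```python
import numpy as np

def energy(x, a, b, s):
    """E(x) = -1/2 sum_ij a_ij x_i x_j - sum_i b_i s x_i, as in equation (2)."""
    return -0.5 * x @ a @ x - s * (b @ x)

def metropolis_sample(a, b, s, mu, n_steps=5000, seed=0):
    """Metropolis sampling from P(x) ~ exp(-mu * E(x)); mu is the source's 'market temperature' parameter."""
    rng = np.random.default_rng(seed)
    n = len(b)
    x = rng.choice([-1, 1], size=n)
    for _ in range(n_steps):
        i = rng.integers(n)
        x_new = x.copy()
        x_new[i] = -x_new[i]                      # propose flipping one trader's attitude
        d_e = energy(x_new, a, b, s) - energy(x, a, b, s)
        if d_e <= 0 or rng.random() < np.exp(-mu * d_e):
            x = x_new
    return x

# toy usage: 50 traders, weak random symmetric influences, positive investment environment s
rng = np.random.default_rng(3)
n = 50
a = rng.normal(0.0, 0.05, (n, n))
a = (a + a.T) / 2
np.fill_diagonal(a, 0.0)
b = np.full(n, 0.5)
x = metropolis_sample(a, b, s=1.0, mu=2.0)
print("average investment attitude:", x.mean())
```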


Phenomenological Model for Stock Portfolios. Note Quote.


The data analysis and modeling of financial markets have been hot research subjects for physicists as well as economists and mathematicians in recent years. The non-Gaussian property of the probability distributions of price changes, in stock markets and foreign exchange markets, has been one of the main problems in this field. From the analysis of high-frequency time series of market indices, a universal property was found in the probability distributions. The central part of the distribution agrees well with a Levy stable distribution, while the tails deviate from it and show another power-law asymptotic behavior. In probability theory, a distribution or a random variable is said to be stable if a linear combination of two independent copies of a random sample has the same distribution up to location and scale parameters. The stable distribution family is also sometimes referred to as the Lévy alpha-stable distribution, after Paul Lévy, the first mathematician to have studied it. The scaling property with respect to the sampling time interval of the data is also well described by the crossover of the two distributions. Several stochastic models of the fluctuation dynamics of stock prices have been proposed which reproduce the power-law behavior of the probability density. The auto-correlation of financial time series is also an important problem for markets. There is no time correlation of price changes on the daily scale, while more detailed data analysis found an exponential decay with a characteristic time τ = 4 minutes. The fact that there is no auto-correlation on the daily scale does not imply the independence of the time series at that scale. In fact there is auto-correlation of volatility (the absolute value of price change) with a power-law tail.
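
As a quick aside, the Lévy alpha-stable family mentioned above is available in scipy, and a short comparison against a Gaussian makes the heavy tails visible; the parameter values are arbitrary illustrations, not fitted to any market data:

```python
import numpy as np
from scipy.stats import levy_stable, norm

# alpha < 2 gives power-law tails; alpha = 2 recovers the Gaussian
alpha, beta = 1.7, 0.0
samples = levy_stable.rvs(alpha, beta, size=100_000, random_state=0)

# Gaussian with the same interquartile range, for a rough tail comparison
iqr = np.subtract(*np.percentile(samples, [75, 25]))
gauss = norm.rvs(scale=iqr / 1.349, size=100_000, random_state=1)

print("P(|x| > 5):  stable =", np.mean(np.abs(samples) > 5),
      "  gaussian =", np.mean(np.abs(gauss) > 5))
```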

A portfolio is a set of stock issues. The Hamiltonian of the system is introduced and is expressed by spin-spin interactions as in spin glass models of disordered magnetic systems. The interaction coefficients between two stocks are phenomenologically determined from empirical data. They are derived from the covariance of sequences of up and down spins using the fluctuation-response theorem. We start with the Hamiltonian expression of our system, which contains N stock issues. It is a function of the configuration S consisting of N coded price changes S_i (i = 1, 2, …, N) at equal trading times. The interaction coefficients are also dynamical variables, because the interactions between stocks are thought to change from time to time. We divide a coefficient into two parts, the constant part J_{ij}, which will be phenomenologically determined later, and the dynamical part δJ_{ij}. The Hamiltonian including the interaction with external fields h_i (i = 1, 2, …, N) is defined as

H[S, δJ, h] = ∑_{⟨i,j⟩} [ δJ_{ij}² / (2Δ_{ij}) − (J_{ij} + δJ_{ij}) S_i S_j ] − ∑_i h_i S_i —– (1)

The summation is taken over all pairs of stock issues. This form of Hamiltonian is that of an annealed spin glass. The fluctuations δJ_{ij} are assumed to be distributed according to a Gaussian function. The main task of statistical physics is the evaluation of the partition function, which in this case is given by the following functional

Z[h] = ∑_{{S_i}} ∫ ∏_{⟨i,j⟩} (dδJ_{ij} / √(2πΔ_{ij})) e^{−H[S, δJ, h]} —– (2)

The integration over the variables δJ_{ij} is easily performed and gives

Z[h] = A ∑_{{S_i}} e^{−H_eff[S, h]} —– (3)

Here the effective Hamiltonian H_eff[S, h] is defined as

H_eff[S, h] = −∑_{⟨i,j⟩} J_{ij} S_i S_j − ∑_i h_i S_i —– (4)

and A = e^{(1/2) ∑_{⟨i,j⟩} Δ_{ij}} is just a normalization factor which is irrelevant to the following steps. This form of Hamiltonian with constant J_{ij} is that of a quenched spin glass.

The constant interaction coefficients J_{ij} are still undetermined. We use the fluctuation-response theorem, which relates the susceptibility χ_{ij} to the covariance C_{ij} between dynamical variables, in order to determine these constants; it is given by the equation,

χ_{ij} = ∂m_i/∂h_j |_{h=0} = C_{ij} —– (5)

The Thouless-Anderson-Palmer (TAP) equation for the quenched spin glass is

m_i = tanh(∑_j J_{ij} m_j + h_i − ∑_j J_{ij}² (1 − m_j²) m_i) —– (6)

Equation (5) and the linear approximation of the equation (6) yield the equation

∑_k (δ_{ik} − J_{ik}) C_{kj} = δ_{ij} —– (7)

Interpreting C_{ij} as the time average of empirical data over an observation time rather than an ensemble average, the constant interaction coefficients J_{ij} are phenomenologically determined by equation (7).
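
A sketch of this phenomenological determination, assuming coded price changes S_i = ±1 per trading period: the empirical covariance C is inverted according to equation (7), giving J = I − C⁻¹, and the resulting couplings can then be fed into the effective Hamiltonian (4) with zero field; the random spins below only stand in for real return data:

```python
import numpy as np

def coupling_from_spins(spins):
    """Estimate J_ij from coded price changes S_i = +/-1 via (I - J) C = I, i.e. J = I - C^{-1}.
    `spins` has shape (T, N): T trading periods, N stock issues."""
    c = np.cov(spins, rowvar=False)          # empirical covariance C_ij (time average)
    j = np.eye(c.shape[0]) - np.linalg.inv(c)
    np.fill_diagonal(j, 0.0)                 # drop self-couplings
    return j

def portfolio_energy(spins, j):
    """Portfolio energy E = -sum_{i<j} J_ij S_i S_j for each period (zero external field)."""
    return -0.5 * np.einsum("ti,ij,tj->t", spins, j, spins)

# toy usage with surrogate data: 10 stocks, 1000 periods of random up/down moves
rng = np.random.default_rng(4)
spins = rng.choice([-1, 1], size=(1000, 10))
j = coupling_from_spins(spins)
print(j[:3, :3])
print(portfolio_energy(spins, j)[:5])
```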

The energy spectrum of the system, simply the portfolio energy, is defined as the eigenvalues of the Hamiltonian H_eff[S, 0]. The probability density of the portfolio energy can be obtained in two ways. We can calculate the probability density from data by the equation

p(E) ΔE = p(E – ΔE/2 ≤ E ≤ E + ΔE/2) —– (8)

This is a fully consistent phenomenological model for stock portfolios, which is expressed by the effective Hamiltonian (4). This model will also be applicable to other financial markets that show collective time evolutions, e.g., the foreign exchange market, options markets, and inter-market interactions.




Any system that uses only a single asset price (and possibly the prices of multiple assets, but this case is not completely clear) as input cannot make money. The price is actually secondary and typically fluctuates by a few percent a day, in contrast with liquidity flow, which fluctuates by orders of magnitude. This also allows one to estimate the maximal workable time scale: the scale on which execution flow fluctuates by at least an order of magnitude (10 times).

Any system that has a built-in fixed time scale (e.g. a moving-average type of system) cannot make money either: the market has no specific time scale.

Any “symmetric” system with just two signals, “buy” and “sell”, cannot make money. The minimal number of signals is four: “buy”, “sell position”, “sell short”, “cover short”. A system where e.g. “buy” and “cover short” are the same signal will eventually catastrophically lose money on an event when the market goes against the position held. Short covering is buying back borrowed securities in order to close an open short position. Short covering refers to the purchase of the exact same security that was initially sold short, since the short-sale process involved borrowing the security and selling it in the market. For example, assume you sold short 100 shares of XYZ at $20 per share, based on your view that the shares were headed lower. When XYZ declines to $15, you buy back 100 shares of XYZ in the market to cover your short position (and pocket a gross profit of $500 from your short trade).

Any system entering a position (long or short, it does not matter) during liquidity excess (e.g. I > I_IH) cannot make money. During liquidity excess the price movement is typically large, and “revert to the moving average” types of system often use such an event as a position-entering signal. The market after a liquidity-excess event bounces a little, then typically continues in the same direction. This creates the risk of not knowing what to bet on: the “little bounce” or “follow the market”. What one should do during a liquidity-excess event is to CLOSE the existing position. This is very fundamental: if you have a position during market uncertainty, eventually you will lose money; you must have ZERO position during liquidity excess. This is a very important element of the P&L trading strategy.

Any system not entering a position during a liquidity-deficit event (e.g. I < I_IL) typically loses money. Liquidity-deficit periods are characterized by small price movements and are difficult to identify by price-based trading systems. A liquidity deficit actually means that at the current price buyers and sellers do not match well, and a substantial price movement is expected. This is very well known to most traders: before a large market movement, volatility (and e.g. standard deviation as its crude measure) becomes very low. The direction (whether one should go long or short) during a liquidity-deficit event can, to some extent, be determined by a generalization of the supply-demand balance.

An important issue to discuss is: what would happen to the markets if this strategy (enter on liquidity deficit, exit on liquidity excess) were applied on a mass scale by market participants? In contrast with other trading strategies, which reduce liquidity at the current price when applied (when the price is moved into uncharted territory, the liquidity drains out because supply or demand drains), this strategy actually increases market liquidity at the current price. This insensitivity to price value is expected to lead not to the strategy ceasing to work when applied on a mass scale by market participants, but to it working better and better, and to the markets’ destabilization in the end.
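
A schematic sketch of the rule set described above, assuming the execution flow I_t and the band levels I_IL, I_IH are supplied by some external estimation procedure (not covered in this excerpt), and that a separate supply-demand estimate provides the direction during liquidity deficit:

```python
def target_position(i_t, i_low, i_high, direction, current_position):
    """Liquidity-based position rule sketched above:
    - liquidity excess  (I > I_IH): close any open position (zero exposure during uncertainty)
    - liquidity deficit (I < I_IL): enter a position in the direction given by the supply-demand estimate
    - otherwise: keep the current position.
    `direction` is +1 (long) or -1 (short)."""
    if i_t > i_high:
        return 0
    if i_t < i_low:
        return direction
    return current_position

# toy walk-through with placeholder band levels
position = 0
for i_t, direction in [(0.2, +1), (0.5, +1), (3.0, -1)]:
    position = target_position(i_t, i_low=0.3, i_high=2.0,
                               direction=direction, current_position=position)
    print(i_t, "->", position)
```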



Delta Neutral Volatility Trading


Price prediction is extremely difficult because price fluctuations are small and are secondary to liquidity fluctuations. A question arises whether liquidity deficit can be traded directly. If we accept that liquidity deficit is an entity of the same nature as volatility, then the answer is yes, and liquidity deficit can be traded through some kind of derivative instrument. Let us illustrate the approach on a simple case: options trading. Whatever option model is used, the key element of it is implied volatility. An implied-volatility trading strategy can be implemented by trading some delta-neutral “synthetic asset”, built e.g. as long-short pairs of a call on an asset and the asset itself, call-put pairs, or similar “delta-neutral vehicles”. Delta neutral is a portfolio strategy consisting of multiple positions with offsetting positive and negative deltas so that the overall delta of the assets in question totals zero. A delta-neutral portfolio balances the response to market movements over a certain range so as to bring the net change of the position to zero. Delta measures how much an option’s price changes when the underlying security’s price changes. The optimal implementation of such a “synthetic asset” depends on commissions, available liquidity, exchange access, etc., and varies from fund to fund. Assume we have built such a delta-neutral instrument, the price of which depends on volatility only. How to trade it? We have the same two requirements: 1) avoid catastrophic P&L drain and 2) predict the future value of volatility (forward volatility). When trading a delta-neutral strategy, this matches our theory exactly and the trading algorithm becomes the following.

  1. If for the underlying asset we have (execution flow at time t = 0) I_0 < I_IL (liquidity deficit), then enter a “long volatility” position in the “delta-neutral” synthetic asset. This entry condition means that if the current execution flow is low, its future value will be high. If at the current price the value of I_0 is low, the price would change so as to increase future I.
  2. If for the underlying asset we have (execution flow at time t = 0) I_0 > I_IH, then close the existing “long volatility” position in the “delta-neutral” synthetic asset. At high I_0 the future value of I cannot be determined: it can either go down (typically) or increase even more (much more seldom, but just a few such events are sufficient to incur a catastrophic P&L drain). According to the main concept of the P&L trading strategy, one should have zero position during market uncertainty. (A schematic sketch of these two rules follows the list.)
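
As announced, a schematic sketch of the two rules, together with the delta-neutral bookkeeping for one simple synthetic asset (a long call hedged with short stock); the thresholds, delta values and contract sizes are placeholders:

```python
def call_delta_hedge_units(call_delta, contracts, multiplier=100):
    """Units of the underlying to short so that (long call + short underlying) is delta-neutral.
    call_delta is the option's delta, taken from whatever option model the desk uses."""
    return call_delta * contracts * multiplier

def volatility_position(i_0, i_low, i_high, holding_long_vol):
    """Liquidity-based long-volatility rule from the list above:
    - I_0 < I_IL (liquidity deficit): enter or keep the long-volatility position
    - I_0 > I_IH (liquidity excess): close it (direction of future I is undetermined)
    - otherwise: leave the position unchanged. The strategy never goes short volatility."""
    if i_0 < i_low:
        return True
    if i_0 > i_high:
        return False
    return holding_long_vol

# toy usage
print(call_delta_hedge_units(call_delta=0.55, contracts=10))                      # short 550 shares against 10 long calls
print(volatility_position(i_0=0.1, i_low=0.3, i_high=2.0, holding_long_vol=False))  # True: enter long volatility
```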

The reason why this strategy is expected to be profitable is that experiments show that implied volatility is very much price-fluctuation dependent, and execution-flow spikes I_0 > I_IH in the underlying asset typically lead to a substantial price move of it and then to an implied-volatility increase for the “synthetic asset”. This is a typical “buy low volatility, sell high volatility” strategy. The key difference from the regular case is that, instead of price volatility, liquidity deficit is used as a proxy for forward volatility. The described strategy never goes short volatility, so catastrophic P&L drain is unlikely. In addition, the actual trading implementation requires the use of the “delta-neutral” synthetic asset, which incurs substantial costs in commissions and execution, and thus the actual P&L is difficult to estimate without an existing setup for high-frequency option trading.


Fiscal Responsibility and Budget Management (FRBM) Act

The Government appointed a five-member Committee in May 2016 to review the Fiscal Responsibility and Budget Management (FRBM) Act and to examine a changed format including flexible FRBM targets. The Committee’s formation was announced during the 2016-17 budget by FM Arun Jaitley. The panel was headed by the former MP and former Revenue and Expenditure Secretary NK Singh and included four other members: CEA Arvind Subramanian, former Finance Secretary Sumit Bose, the then Deputy Governor and present Governor of the RBI Urjit Patel, and Rathin Roy. There was a difference of opinion about the need for adopting a fixed FRBM target like the fiscal deficit: one side argued against such fixity at times when the government has to spend heavily to fight recession and support economic growth, while the other side argued that fixed targets are necessary to inculcate a sense of fiscal discipline. During the Budget speech in 2016, Mr Jaitley summarised this debate:

There is now a school of thought which believes that instead of fixed numbers as fiscal deficit targets, it may be better to have a fiscal deficit range as the target, which would give necessary policy space to the government to deal with dynamic situations. There is also a suggestion that fiscal expansion or contraction should be aligned with credit contraction or expansion, respectively, in the economy.

A flexible FRBM target that allows a higher fiscal deficit during difficult/recessionary years and lower targets during comfortable years gives the government breathing space to borrow more during tight years. In its report submitted in late January this year, the committee did advocate a range rather than a fixed fiscal deficit target. Fiscal management becomes all the more important post-demonetisation and the resultant slump in consumption expenditure. The view is that the government could be tempted to increase public spending to boost consumption. But here is the catch: while ratings agencies do look at the fiscal discipline of a country when considering it for a ratings upgrade, they also look at the context and the growth rate of the economy, so the decision will not be a myopic one based only on the fiscal and revenue deficits.

Fiscal responsibility is an economic concept that has various definitions, depending on the economic theory held by the person or organization offering the definition. Some say being fiscally responsible is just a matter of cutting debt, while others say it’s about completely eliminating debt. Still others might argue that it’s a matter of controlling the level of debt without completely reducing it. Perhaps the most basic definition of fiscal responsibility is the act of creating, optimizing and maintaining a balanced budget.

“Fiscal” refers to money and can include personal finances, though it most often is used in reference to public money or government spending. This can involve income from taxes, revenue, investments or treasuries. In a governmental context, a pledge of fiscal responsibility is a government’s assurance that it will judiciously spend, earn and generate funds without placing undue hardship on its citizens. Fiscal responsibility includes a moral contract to maintain a financially sound government for future generations, because a First World society is difficult to maintain without a financially secure government.

But what exactly are fiscal responsibility, fiscal management and the FRBM Act? Here is an attempt to demystify these.

Fiscal responsibility often starts with a balanced budget, which is one with no deficits and no surpluses. The expectations of what might be spent and what is actually spent are equal. Many forms of government have different views and expectations for maintaining a balanced budget, with some preferring to have a budget deficit during certain economic times and a budget surplus during others. Other types of government view a budget deficit as being fiscally irresponsible at any time. Fiscal irresponsibility refers to a lack of effective financial planning by a person, business or government. This can include decreasing taxes in one crucial area while drastically increasing spending in another. This type of situation can cause a budget deficit in which the outgoing expenditures exceed the cash coming in. A government is a business in its own right, and no business — or private citizen — can thrive eternally while operating with a deficit.

When a government is fiscally irresponsible, its ability to function effectively is severely limited. Emergent situations arise unexpectedly, and a government needs to have quick access to reserve funds. A fiscally irresponsible government isn’t able to sustain programs designed to provide fast relief to its citizens.

A government, business or person can take steps to become more fiscally responsible. One useful method for government is to provide some financial transparency, which can reduce waste, expose fraud and highlight areas of financial inefficiency. Not all aspects of government budgets and spending can be brought into full public view because of various risks to security, but offering an inside look at government spending can offer a nation’s citizens a sense of well-being and keep leaders honest. Similarly, a private citizen who is honest with himself about where he is spending his money is better able to determine where he might be able to make cuts that would allow him to live within his means.

Fiscal Responsibility and Budget Management (FRBM) became an Act in 2003. The objective of the Act is to ensure inter-generational equity in fiscal management, long run macroeconomic stability, better coordination between fiscal and monetary policy, and transparency in fiscal operation of the Government.

The Government notified FRBM rules in July 2004 to specify the annual reduction targets for fiscal indicators. The FRBM rule specifies reduction of fiscal deficit to 3% of the GDP by 2008-09 with annual reduction target of 0.3% of GDP per year by the Central government. Similarly, revenue deficit has to be reduced by 0.5% of the GDP per year with complete elimination to be achieved by 2008-09. It is the responsibility of the government to adhere to these targets. The Finance Minister has to explain the reasons and suggest corrective actions to be taken, in case of breach.

FRBM Act provides a legal institutional framework for fiscal consolidation. It is now mandatory for the Central government to take measures to reduce fiscal deficit, to eliminate revenue deficit and to generate revenue surplus in the subsequent years. The Act binds not only the present government but also the future Government to adhere to the path of fiscal consolidation. The Government can move away from the path of fiscal consolidation only in case of natural calamity, national security and other exceptional grounds which Central Government may specify.

Further, the Act prohibits borrowing by the government from the Reserve Bank of India, thereby, making monetary policy independent of fiscal policy. The Act bans the purchase of primary issues of the Central Government securities by the RBI after 2006, preventing monetization of government deficit. The Act also requires the government to lay before the parliament three policy statements in each financial year namely Medium Term Fiscal Policy Statement; Fiscal Policy Strategy Statement and Macroeconomic Framework Policy Statement.

To impart fiscal discipline at the state level, the Twelfth Finance Commission gave incentives to states through conditional debt restructuring and interest rate relief for introducing Fiscal Responsibility Legislations (FRLs). All the states have implemented their own FRLs.

The Indian economy faced the problem of a large fiscal deficit, and its monetization spilled over to the external sector in the late 1980s and early 1990s. The large borrowings of the government led to such a precarious situation that the government was unable to pay even for two weeks of imports, resulting in the economic crisis of 1991. Consequently, economic reforms were introduced in 1991 and fiscal consolidation emerged as one of the key areas of reform. After a good start in the early nineties, fiscal consolidation faltered after 1997-98. The fiscal deficit started rising after 1997-98. The Government introduced the FRBM Act, 2003 to check the deteriorating fiscal situation.

The implementation of FRBM Act/FRLs improved the fiscal performance of both centre and states.

The States achieved the targets much ahead of the prescribed timeline. The Government of India was on the path of achieving this objective right on time. However, due to the global financial crisis, this was suspended and the fiscal consolidation as mandated in the FRBM Act was put on hold in 2007-08. The crisis period called for an increase in expenditure by the government to boost demand in the economy. As a result of the fiscal stimulus, the government moved away from the path of fiscal consolidation. However, it should be noted that strict adherence to the path of fiscal consolidation during the pre-crisis period created enough fiscal space for pursuing counter-cyclical fiscal policy. The main provisions of the Act are:

  1. The government has to take appropriate measures to reduce the fiscal deficit and revenue deficit so as to eliminate revenue deficit by 2008-09 and thereafter, sizable revenue surplus has to be created.
  2. Setting annual targets for reduction of fiscal deficit and revenue deficit, contingent liabilities and total liabilities.
  3. The government shall end its borrowing from the RBI except for temporary advances.
  4. The RBI not to subscribe to the primary issues of the central government securities after 2006.
  5. The revenue deficit and fiscal deficit may exceed the targets specified in the rules only on grounds of national security, calamity etc.

Though the Act aims to achieve deficit reductions prima facie, an important objective is to achieve inter-generational equity in fiscal management. This is because high borrowings today have to be repaid by the future generation, whereas the benefit from high expenditure and debt today goes to the present generation. Achieving the FRBM targets thus ensures inter-generational equity by reducing the debt burden of the future generation. Other objectives include: long-run macroeconomic stability, better coordination between fiscal and monetary policy, and transparency in the fiscal operations of the Government.

The Act had said that the fiscal deficit should be brought down to 3% of the gross domestic product (GDP) and revenue deficit should drop down to nil, both by March 2009. Fiscal deficit is the excess of government’s total expenditure over its total income. The government incurs revenue and capital expenses and receives income on the revenue and capital account. Further, the excess of revenue expenses over revenue income leads to a revenue deficit. The FRBM Act wants the revenue deficit to be nil as the revenue expenditure is day-to-day expenses and does not create a capital asset. Usually, the liabilities should not be carried forward, else the government ends up borrowing to repay its current liabilities.

However, these targets were not achieved because the global credit crisis hit the markets in 2008. The government had to roll out a fiscal stimulus to revive the economy and this increased the deficits.

In the 2011 budget, the finance minister said that the FRBM Act would be modified and new targets would be fixed and flexibility will be built in to have a cushion for unforeseen circumstances. According to the 13th Finance Commission, fiscal deficit will be brought down to 3.5% in 2013-14. Likewise, revenue deficit is expected to be cut to 2.1% in 2013-14.

In the 2012 Budget speech, the finance minister announced an amendment to the FRBM Act. He also announced that instead of the FRBM targeting the revenue deficit, the government will now target the effective revenue deficit. His budget speech defines effective revenue deficit as the difference between revenue deficit and grants for creation of capital assets. In other words, capital expenditure will now be removed from the revenue deficit and whatever remains (effective revenue deficit) will now be the new goalpost of the fiscal consolidation. Here’s what effective revenue deficit means.

Every year the government incurs expenditure and simultaneously earns income. Some expenses are planned (those it includes in its five-year plans) and others are non-planned. However, both planned and non-planned expenditure consist of capital and revenue expenditure. For instance, if the government sets up a power plant as part of its non-planned expenditure, then costs incurred towards maintaining it will now not be counted in the revenue deficit because they go towards maintaining a “capital asset”. Experts say that the revenue deficit could become a little distorted because, by reclassifying it, the government is simplifying its target.
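
To illustrate with purely hypothetical numbers: if the revenue deficit stands at 3.5% of GDP and grants for the creation of capital assets amount to 1.2% of GDP, then the effective revenue deficit is 3.5% − 1.2% = 2.3% of GDP, and it is this smaller figure that the amended fiscal consolidation target applies to.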

