Econophysics: Financial White Noise Switch. Thought of the Day 115.0


What causes large market fluctuations? Some economists blame irrationality for the fat-tailed distribution of returns. Others have observed that social psychology can create market fads and panics, which can be modeled as collective behavior in statistical mechanics. For example, a bi-modular distribution was discovered in empirical data on option prices. One possible mechanism behind polarized behavior is collective action, studied in both physics and social psychology. A sudden regime switch, or phase transition, may occur between the uni-modular and bi-modular distributions when a field parameter crosses some threshold. The Ising model of equilibrium statistical mechanics was borrowed to study social psychology: its phase transition from a uni-modular to a bi-modular distribution describes the statistical signature of a stable society turning into a divided one. The problem with the Ising model is that its key parameter, the social temperature, has no operational definition in a social system. A better alternative parameter is the intensity of social interaction in collective action.
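As a minimal numerical sketch of this switch (not a calibrated social model: the system size N and the coupling beta_J, which plays the role of interaction intensity relative to social temperature, are illustrative assumptions), a Metropolis simulation of the fully connected Ising model shows the distribution of the magnetization m turning from uni-modular to bi-modular as the coupling crosses its mean-field threshold βJ = 1:

```python
import math
import random

def magnetization_samples(N=200, beta_J=0.5, sweeps=3000, seed=0):
    """Metropolis sampling of the mean-field (fully connected) Ising model
    with energy E = -(J / 2N) * M^2, where M is the total spin.
    Returns samples of m = M / N collected after a burn-in period."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(N)]
    M = sum(spins)
    samples = []
    for sweep in range(sweeps):
        for _ in range(N):
            i = rng.randrange(N)
            M_new = M - 2 * spins[i]            # flipping spin i changes M
            d_betaE = -(beta_J / (2 * N)) * (M_new * M_new - M * M)
            if d_betaE <= 0 or rng.random() < math.exp(-d_betaE):
                spins[i] = -spins[i]
                M = M_new
        if sweep >= sweeps // 2:                # keep post-burn-in samples only
            samples.append(M / N)
    return samples
```

Below the threshold (weak interaction) the magnetization fluctuates around zero, a single peak; above it (strong interaction) the samples concentrate near ±m*, the two peaks of a divided society.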

A difficult issue in business cycle theory is how to explain the recurrent feature of business cycles that is widely observed in macro and financial indexes. The problem is that business cycles are neither strictly periodic nor truly random: their correlations are not short, as in a random walk, and they contain multiple frequencies that change over time. Therefore, all kinds of mathematical models have been tried in business cycle theory, including deterministic, stochastic, linear and nonlinear models. We outline economic models in terms of their base function: white noise with short correlations, persistent cycles with long correlations, and color chaos with erratic amplitude and a narrow frequency band, like a biological clock.



Figure: The steady state of the probability distribution function in the Ising model of collective behavior with h = 0 (no central propaganda field). a. Uni-modular distribution at low social stress (k = 0): moderate, stable behavior under weak interaction and high social temperature. b. Marginal distribution at the phase transition, at medium social stress (k = 2): a behavioral phase transition between a stable and an unstable society, induced by collective behavior. c. Bi-modular distribution at high social stress (k = 2.5): the society splits into two opposing groups under low social temperature and strong social interaction in an unstable society.

Deterministic models are used by Keynesian economists to describe endogenous mechanisms of business cycles, as in the accelerator-multiplier model. Stochastic models are exemplified by the Frisch model of noise-driven cycles, which attributes business fluctuations to external shocks. Since the 1980s, the discovery of economic chaos and the application of statistical mechanics have provided more advanced models for describing business cycles. Graphically,


Figure: The steady state of the probability distribution function in the socio-psychological model of collective choice. Here, “a” is the independence parameter and “b” is the interaction parameter. a. Centered distribution with b < a (short-dashed curve): independent decisions rooted in individualistic orientation overcome social pressure exerted through mutual communication. b. Horizontal flat distribution with b = a (long-dashed line): the marginal case, in which individualistic orientation balances the social pressure. c. Polarized distribution with b > a (solid line): social pressure through mutual communication is stronger than independent judgment.


Figure: Numerical autocorrelations from time series generated by random noise and by a harmonic wave. The solid line is white noise; the broken line is a sine wave with period P = 1.

Linear harmonic cycles with a unique frequency have been introduced in business cycle theory. The autocorrelations of a harmonic cycle and of white noise are shown in the figure above. The autocorrelation function of a harmonic cycle is a cosine wave; its amplitude decays slightly because of the limited number of data points in the numerical experiment. The autocorrelations of a random series form an erratic sequence that decays rapidly from one to residual fluctuations. The auto-regressive (AR) model in discrete time combines white-noise terms to simulate the short-term autocorrelations found in empirical data.
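The contrast in the figure is easy to reproduce. A short numerical check (series length, period, and seed are arbitrary choices) computes the sample autocorrelation of a white-noise series and of a sine wave:

```python
import math
import random

def autocorr(x, max_lag):
    """Sample autocorrelation function, normalized so that acf[0] = 1."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    return [sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k))
            / ((n - k) * var) for k in range(max_lag + 1)]

rng = random.Random(42)
P, n = 50, 1000
noise = [rng.gauss(0.0, 1.0) for _ in range(n)]            # white noise
wave = [math.sin(2.0 * math.pi * t / P) for t in range(n)]  # harmonic cycle
acf_noise = autocorr(noise, 60)   # drops from 1 to residual fluctuations
acf_wave = autocorr(wave, 60)     # a cosine: +1 at lag P, -1 at lag P/2
```

The noise autocorrelations collapse to small residuals after lag zero, while the harmonic autocorrelations trace out a cosine of the same period.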

Deterministic models of chaos can be classified into white chaos and color chaos. White chaos is generated by nonlinear difference equations in discrete time, such as the one-dimensional logistic map and the two-dimensional Hénon map. Its autocorrelations and power spectra look like those of white noise, yet its correlation dimension can be less than one. The white chaos model is simple to analyze mathematically but rarely used in empirical analysis, since it needs an intrinsic time unit.

Color chaos is generated by nonlinear differential equations in continuous time, such as the three-dimensional Lorenz model and one-dimensional delay-differential models in biology and economics. Its autocorrelations look like a decaying cosine wave, and its power spectrum resembles a combination of harmonic cycles and white noise. The correlation dimension is between one and two for 3D differential equations, and varies for delay-differential equations.
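The qualitative behavior is easy to reproduce numerically. The sketch below integrates the Lorenz system with a fourth-order Runge-Kutta step (the standard textbook parameters σ = 10, ρ = 28, β = 8/3, and the step size and initial conditions, are illustrative choices) and exhibits the two defining features of deterministic chaos: trajectories stay bounded on the attractor, yet nearby initial conditions separate rapidly.

```python
import math

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One fourth-order Runge-Kutta step of the Lorenz equations."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    def shift(s, k, c):
        return tuple(si + c * ki for si, ki in zip(s, k))
    k1 = f(state)
    k2 = f(shift(state, k1, dt / 2))
    k3 = f(shift(state, k2, dt / 2))
    k4 = f(shift(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Two trajectories starting 1e-6 apart: bounded, but rapidly separating.
a, b = (1.0, 1.0, 1.0), (1.0 + 1e-6, 1.0, 1.0)
max_sep = 0.0
for step in range(3000):                  # integrate up to t = 30
    a, b = lorenz_step(a), lorenz_step(b)
    if step >= 2000:                      # measure after the transient
        max_sep = max(max_sep, math.dist(a, b))
```

Despite the tiny initial perturbation, the two trajectories end up far apart on the attractor, while both remain within a bounded region, the combination that gives color chaos its erratic amplitude within a narrow frequency band.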


History shows the remarkable resilience of markets that have experienced a series of wars and crises. The related issue is why an economy can recover from severe damage far from equilibrium. Mathematically speaking, we may examine regime stability under parameter change. One major weakness of the linear oscillator model is that its regime of periodic cycles is fragile, or only marginally stable, under changing parameters; the typical example of such a linear model is the Samuelson multiplier-accelerator model. Only a nonlinear oscillator model is capable of generating resilient cycles within a finite region of parameter space. Linear stochastic models have a similar problem to linear deterministic models. For example, the so-called unit-root solution occurs only on the borderline of the unit circle: if a small parameter change crosses the unit circle, the stochastic solution falls into a damped (inside the unit circle) or explosive (outside the unit circle) regime.
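The knife-edge nature of the unit root is visible in a two-line experiment (the coefficients 0.95 and 1.1 are arbitrary choices on either side of the unit circle):

```python
import random

def ar1_path(phi, n=300, x0=1.0, seed=7):
    """AR(1) recursion x_{t+1} = phi * x_t + eps_t with standard Gaussian noise.
    |phi| < 1 gives a damped (stationary) solution, |phi| > 1 an explosive one."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        path.append(x)
    return path

damped = ar1_path(0.95)      # inside the unit circle: bounded fluctuations
explosive = ar1_path(1.1)    # outside the unit circle: geometric divergence
```

An arbitrarily small move of the coefficient across 1.0 switches the solution between these two regimes, which is exactly the fragility discussed above.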

Stock Hedging Loss and Risk


A stock is bought at time zero at price S0 and is to be sold at time T at an uncertain price ST. In order to hedge the market risk of the stock, the company decides to choose one of the available put options written on the same stock with maturity at time τ, where τ is prior and close to T, and the n available put options are specified by their strike prices Ki (i = 1, 2, ···, n). As the prices of the different put options also differ, the company needs to determine an optimal hedge ratio h (0 ≤ h ≤ 1) with respect to the chosen strike price. The cost of hedging must be less than or equal to the predetermined hedging budget C. In other words, the company needs to determine the optimal strike price and hedge ratio under the constraint of the hedging budget. The chosen put option is assumed to finish in-the-money at maturity, and the hedging-expenditure constraint is assumed to be binding.

Suppose the market price of the stock is S0 at time zero, the hedge ratio is h, the price of the put option is P0, and the riskless interest rate is r. At time T, the time value of the hedging portfolio is

S0e^(rT) + hP0e^(rT) —– (1)

and the market price of the portfolio is

ST + h(K − Sτ)+e^(r(T−τ)) —– (2)

therefore the loss of the portfolio is

L = S0e^(rT) + hP0e^(rT) − (ST + h(K − Sτ)+e^(r(T−τ))) —– (3)

where x+ = max(x, 0), so that (K − Sτ)+ is the payoff of the put option at its maturity τ. For a given threshold v, the probability that the amount of loss exceeds v is denoted by

α = Prob{L ≥ v} —– (4)

in other words, v is the Value-at-Risk (VaR) at α percentage level. There are several alternative measures of risk, such as CVaR (Conditional Value-at-Risk), ESF (Expected Shortfall), CTE (Conditional Tail Expectation), and other coherent risk measures.

The mathematical model of stock price is chosen to be a geometric Brownian motion

dSt/St = μdt + σdBt —– (5)

where St is the stock price at time t (0 < t ≤ T), μ and σ are the drift and the volatility of stock price, and Bt is a standard Brownian motion. The solution of the stochastic differential equation is

St = S0e^(σBt + (μ − σ²/2)t) —– (6)

where B0 = 0, and St is lognormally distributed.
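A quick Monte Carlo sanity check of (6) (the parameter values are arbitrary): sampling B_T ∼ N(0, T) and applying the closed-form solution should produce log-returns ln(S_T/S0) that are normal with mean (μ − σ²/2)T and standard deviation σ√T, which is the lognormality of S_T.

```python
import math
import random

S0, mu, sigma, T = 100.0, 0.05, 0.2, 1.0   # illustrative parameters
rng = random.Random(11)
logs = []
for _ in range(20000):
    BT = rng.gauss(0.0, math.sqrt(T))       # B_T ~ N(0, T)
    ST = S0 * math.exp(sigma * BT + (mu - 0.5 * sigma ** 2) * T)  # eq. (6)
    logs.append(math.log(ST / S0))

mean_log = sum(logs) / len(logs)            # expect (mu - sigma^2/2) T = 0.03
sd_log = (sum((x - mean_log) ** 2 for x in logs) / len(logs)) ** 0.5  # expect 0.2
```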

For a given threshold of loss v, the probability that the loss exceeds v is

Prob {L ≥ v} = E [I{X≤c1}FY(g(X) − X)] + E [I{X≥c1}FY (c2 − X)] —– (7)

where E[·] denotes expectation. I{A} is the indicator function of the event A, so that I{A} = 1 when A is true and I{A} = 0 otherwise. FY(y) is the cumulative distribution function of the random variable Y, and

c1 = (1/σ)[ln(K/S0) − (μ − σ²/2)τ]

g(X) = (1/σ)[ln(((S0 + hP0)e^(rT) − h(K − f(X))e^(r(T−τ)) − v)/S0) − (μ − σ²/2)T]

f(X) = S0e^(σX + (μ − σ²/2)τ)

c2 = (1/σ)[ln(((S0 + hP0)e^(rT) − v)/S0) − (μ − σ²/2)T]

X and Y are independent and normally distributed, with X ∼ N(0, √τ) and Y ∼ N(0, √(T−τ)), i.e. centered normal variables with standard deviations √τ and √(T−τ).

For a specified hedging strategy, Q(v) = Prob {L ≥ v} is a decreasing function of v. The VaR at level α can be obtained from the equation

Q(v) = α —– (8)

The expectations can be calculated by Monte Carlo simulation, and the optimal hedging strategy, namely the one with the smallest VaR, can be obtained from (8) by numerical search.
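The computation can be sketched as follows (all parameter values below are illustrative assumptions, not taken from the text). One estimator implements (7): it samples only X and integrates over Y analytically through the normal CDF F_Y. A brute-force estimator simulates the loss (3) directly. The two should agree, and Q(v) should decrease in v.

```python
import math
import random

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Illustrative parameters (assumptions, not from the text).
S0, K, h, P0 = 100.0, 95.0, 0.5, 2.0
r, mu, sigma = 0.03, 0.08, 0.25
tau, T = 0.9, 1.0

def f(X):
    """S_tau as a function of X = B_tau, as in the definitions above."""
    return S0 * math.exp(sigma * X + (mu - 0.5 * sigma ** 2) * tau)

def q_formula(v, n=40000, seed=1):
    """Prob{L >= v} via eq. (7): sample X and integrate over Y through F_Y."""
    rng = random.Random(seed)
    c1 = (math.log(K / S0) - (mu - 0.5 * sigma ** 2) * tau) / sigma
    c2 = (math.log(((S0 + h * P0) * math.exp(r * T) - v) / S0)
          - (mu - 0.5 * sigma ** 2) * T) / sigma
    sY = math.sqrt(T - tau)
    total = 0.0
    for _ in range(n):
        X = rng.gauss(0.0, math.sqrt(tau))
        if X <= c1:          # put finishes in the money (S_tau <= K)
            arg = ((S0 + h * P0) * math.exp(r * T)
                   - h * (K - f(X)) * math.exp(r * (T - tau)) - v)
            gX = (math.log(arg / S0) - (mu - 0.5 * sigma ** 2) * T) / sigma
            total += norm_cdf((gX - X) / sY)
        else:                # put expires worthless
            total += norm_cdf((c2 - X) / sY)
    return total / n

def q_direct(v, n=40000, seed=2):
    """The same probability by brute-force simulation of the loss L of eq. (3)."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n):
        X = rng.gauss(0.0, math.sqrt(tau))
        Y = rng.gauss(0.0, math.sqrt(T - tau))
        S_T = S0 * math.exp(sigma * (X + Y) + (mu - 0.5 * sigma ** 2) * T)
        L = ((S0 + h * P0) * math.exp(r * T)
             - S_T - h * max(K - f(X), 0.0) * math.exp(r * (T - tau)))
        count += L >= v
    return count / n
```

Solving Q(v) = α for v on this estimator, e.g. by bisection, gives the VaR of (8); repeating the search over the admissible pairs (Ki, h) would yield the strategy with the smallest VaR.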

Appropriation of (Ir)reversibility of Noise Fluctuations to (Un)Facilitate Complexity



Logical depth is a suitable measure of subjective complexity for physical as well as mathematical objects. This becomes apparent upon considering the effect of irreversibility, noise, and the spatial symmetries of the equations of motion and initial conditions on the asymptotic depth-generating abilities of model systems.

“Self-organization” suggests a spontaneous increase of complexity occurring in a system with simple, generic (e.g. spatially homogeneous) initial conditions. The increase of complexity attending a computation, by contrast, is less remarkable because it occurs in response to special initial conditions. An important question, which would have interested Turing, is whether self-organization is an asymptotically qualitative phenomenon like phase transitions. In other words, are there physically reasonable models in which complexity, appropriately defined, not only increases, but increases without bound in the limit of infinite space and time? A positive answer to this question would not explain the natural history of our particular finite world, but would suggest that its quantitative complexity can legitimately be viewed as an approximation to a well-defined qualitative property of infinite systems. On the other hand, a negative answer would suggest that our world should be compared to chemical reaction-diffusion systems (e.g. Belousov-Zhabotinsky), which self-organize on a macroscopic, but still finite scale, or to hydrodynamic systems which self-organize on a scale determined by their boundary conditions.

The suitability of logical depth as a measure of physical complexity depends on the assumed ability (“physical Church’s thesis”) of Turing machines to simulate physical processes, and to do so with reasonable efficiency. Digital machines cannot of course integrate a continuous system’s equations of motion exactly, and even the notion of computability is not very robust in continuous systems, but for realistic physical systems, subject throughout their time development to finite perturbations (e.g. electromagnetic and gravitational) from an uncontrolled environment, it is plausible that a finite-precision digital calculation can approximate the motion to within the errors induced by these perturbations. Empirically, many systems have been found amenable to “master equation” treatments in which the dynamics is approximated as a sequence of stochastic transitions among coarse-grained microstates.

We concentrate, somewhat arbitrarily, on cellular automata, in the broad sense of discrete lattice models with finitely many states per site, which evolve according to a spatially homogeneous local transition rule that may be deterministic or stochastic, reversible or irreversible, and synchronous (discrete time) or asynchronous (continuous time, master equation). Such models cover the range from the evidently computer-like (e.g. deterministic cellular automata) to the evidently material-like (e.g. Ising models), with many gradations in between.
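For concreteness, here is the simplest instance of such a model: a deterministic, synchronous elementary cellular automaton (rule 90, chosen purely for illustration), in which every site updates by the same local rule applied homogeneously across the lattice.

```python
def step_rule90(cells):
    """One synchronous update of elementary cellular automaton rule 90:
    each site becomes the XOR of its two neighbors (boundaries held at 0)."""
    n = len(cells)
    return [(cells[i - 1] if i > 0 else 0) ^ (cells[i + 1] if i < n - 1 else 0)
            for i in range(n)]

state = [0, 0, 0, 1, 0, 0, 0]   # a single occupied site
history = [state]
for _ in range(3):
    state = step_rule90(state)
    history.append(state)
# The rows reproduce Pascal's triangle mod 2 (a Sierpinski pattern),
# a simple example of structure emerging from a homogeneous local rule.
```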

More of the favorable properties need to be invoked to obtain “self-organization,” i.e. nontrivial computation from a spatially homogeneous initial condition. A rather artificial system (a cellular automaton which is stochastic but noiseless, in the sense that it has the power to make purely deterministic as well as random decisions) undergoes this sort of self-organization. It does so by allowing the nucleation and growth of domains, within each of which a depth-producing computation begins. When two domains collide, one conquers the other, and uses the conquered territory to continue its own depth-producing computation (a computation constrained to finite space, of course, cannot continue for more than exponential time without repeating itself). To achieve the same sort of self-organization in a truly noisy system appears more difficult, partly because of the conflict between the need to encourage fluctuations that break the system’s translational symmetry, while suppressing fluctuations that introduce errors in the computation.

Irreversibility seems to facilitate complex behavior by giving noisy systems the generic ability to correct errors. Only a limited sort of error-correction is possible in microscopically reversible systems such as the canonical kinetic Ising model. Minority fluctuations in a low-temperature ferromagnetic Ising phase in zero field may be viewed as errors, and they are corrected spontaneously because of their potential energy cost. This error correcting ability would be lost in nonzero field, which breaks the symmetry between the two ferromagnetic phases, and even in zero field it gives the Ising system the ability to remember only one bit of information. This limitation of reversible systems is recognized in the Gibbs phase rule, which implies that under generic conditions of the external fields, a thermodynamic system will have a unique stable phase, all others being metastable. Even in reversible systems, it is not clear why the Gibbs phase rule enforces as much simplicity as it does, since one can design discrete Ising-type systems whose stable phase (ground state) at zero temperature simulates an aperiodic tiling of the plane, and can even get the aperiodic ground state to incorporate (at low density) the space-time history of a Turing machine computation. Even more remarkably, one can get the structure of the ground state to diagonalize away from all recursive sequences.

Potential Synapses. Thought of the Day 52.0

For a neuron to recognize a pattern of activity, it requires a set of co-located synapses (typically fifteen to twenty) connecting to a subset of the cells that are active in the pattern to be recognized. Learning to recognize a new pattern is accomplished by the formation of a set of new synapses co-located on a dendritic segment.


Figure: Learning by growing new synapses. Learning in an HTM neuron is modeled by the growth of new synapses from a set of potential synapses. A “permanence” value is assigned to each potential synapse and represents the growth of the synapse. Learning occurs by incrementing or decrementing permanence values. The synapse weight is a binary value set to 1 if the permanence is above a threshold.

Figure shows how we model the formation of new synapses in a simulated Hierarchical Temporal Memory (HTM) neuron. For each dendritic segment we maintain a set of “potential” synapses between the dendritic segment and other cells in the network that could potentially form a synapse with the segment. The number of potential synapses is larger than the number of actual synapses. We assign each potential synapse a scalar value called “permanence” which represents stages of growth of the synapse. A permanence value close to zero represents an axon and dendrite with the potential to form a synapse but that have not commenced growing one. A 1.0 permanence value represents an axon and dendrite with a large fully formed synapse.

The permanence value is incremented and decremented using a Hebbian-like rule. If the permanence value exceeds a threshold, such as 0.3, the weight of the synapse is 1; if the permanence value is at or below the threshold, the weight is 0. The threshold represents the establishment of a synapse, albeit one that could easily disappear. A synapse with a permanence value of 1.0 has the same effect as a synapse with a permanence value at threshold, but is not as easily forgotten. Using a scalar permanence value enables on-line learning in the presence of noise. A previously unseen input pattern could be noise, or it could be the start of a new trend that will repeat in the future. By growing new synapses, the network can start to learn a new pattern when it is first encountered, yet only act differently after several presentations of it. Increasing permanence beyond the threshold means that patterns experienced more often than others will take longer to forget.
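The update rule just described can be sketched in a few lines (the increment, decrement, and threshold constants are illustrative choices, not values prescribed by HTM):

```python
def hebbian_update(perms, active, inc=0.05, dec=0.02):
    """Hebbian-like update of potential-synapse permanences: potential synapses
    from cells that were active grow; all others decay. Values stay in [0, 1]."""
    return {cell: min(1.0, p + inc) if cell in active else max(0.0, p - dec)
            for cell, p in perms.items()}

def weights(perms, threshold=0.3):
    """Binary synapse weights: 1 iff the permanence exceeds the threshold."""
    return {cell: int(p > threshold) for cell, p in perms.items()}

perms = {0: 0.28, 1: 0.31, 2: 0.10}        # three potential synapses
w0 = weights(perms)                         # initially only cell 1 is connected
perms = hebbian_update(perms, active={0})   # cell 0 fires with the segment
w1 = weights(perms)                         # cell 0 crosses threshold, cell 1 decays below
```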

HTM neurons and HTM networks rely on distributed patterns of cell activity, thus the activation strength of any one neuron or synapse is not very important. Therefore, in HTM simulations we model neuron activations and synapse weights with binary states. Additionally, it is well known that biological synapses are stochastic, so a neocortical theory cannot require precision of synaptic efficacy. Although scalar states and weights might improve performance, they are not required from a theoretical point of view.

High Frequency Markets and Leverage


Leverage effect is a well-known stylized fact of financial data. It refers to the negative correlation between price returns and volatility increments: when the price of an asset increases, its volatility drops, while when it decreases, the volatility tends to become larger. The name “leverage” comes from the following interpretation of this phenomenon: when an asset price declines, the associated company automatically becomes more leveraged, since the ratio of its debt to its equity value becomes larger; hence the risk of the asset, namely its volatility, should increase. Another economic interpretation, inverting the causality, is that a forecast increase in volatility must be compensated by a higher rate of return, which can only be obtained through a decrease in the asset value.

Statistical methods exploiting high frequency data have been developed to measure volatility. In financial engineering, it became clear in the late eighties that leverage effect must be introduced into derivatives pricing frameworks in order to reproduce the behavior of the implied volatility surface accurately. This led to the rise of the famous stochastic volatility models, in which the Brownian motion driving the volatility is negatively correlated with the one driving the price.

Traditional explanations of leverage effect are based on “macroscopic” arguments from financial economics. Could microscopic interactions between agents naturally lead to leverage effect at larger time scales? We would like to know whether part of the foundations of leverage effect could be microstructural. To do so, our idea is to consider a very simple agent-based model encoding well-documented and well-understood behaviors of market participants at the microscopic scale, and then to show that in the long run this model leads to a price dynamic exhibiting leverage effect. This would demonstrate that typical strategies of market participants at the high frequency level naturally induce leverage effect.

One could argue that since transactions take place at the finest frequencies and prices are revealed through order-book mechanisms, it is obvious that leverage effect arises from high frequency properties. We do not claim, however, that leverage effect is fully explained by high frequency features. The point is rather that, under certain market conditions, typical high frequency behaviors, probably having no connection with the concepts of financial economics, may give rise to some leverage effect at lower frequency scales.

Another important stylized fact of financial data is the rough nature of the volatility process. Indeed, for a very wide range of assets, historical volatility time series exhibit behavior that is much rougher than that of a Brownian motion. More precisely, the dynamics of the log-volatility are typically very well modeled by a fractional Brownian motion with Hurst parameter around 0.1, that is, a process with Hölder regularity of order 0.1. Furthermore, using a fractional Brownian motion with a small Hurst index also makes it possible to reproduce the features of the volatility surface very accurately.
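This notion of roughness can be checked numerically. The sketch below generates exact fractional Brownian motion samples by Cholesky factorization of the fBm covariance (an O(n³) method, fine for short paths; path length, number of paths, and seed are arbitrary) and estimates H from the scaling of mean squared increments, comparing a rough path (H = 0.1) with ordinary Brownian motion (H = 0.5):

```python
import math
import random

def fbm_cholesky(H, n, dt=1.0):
    """Lower-triangular Cholesky factor of the fBm covariance
    Cov(B_s, B_t) = 0.5 * (s^2H + t^2H - |s - t|^2H)."""
    t = [(i + 1) * dt for i in range(n)]
    cov = [[0.5 * (t[i] ** (2 * H) + t[j] ** (2 * H) - abs(t[i] - t[j]) ** (2 * H))
            for j in range(n)] for i in range(n)]
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(max(cov[i][i] - s, 1e-12))
            else:
                L[i][j] = (cov[i][j] - s) / L[j][j]
    return L

def sample_path(L, rng):
    """One exact fBm sample path from the Cholesky factor."""
    z = [rng.gauss(0.0, 1.0) for _ in range(len(L))]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(len(L))]

def hurst_estimate(path, lag=8):
    """Crude estimate of H from the scaling E|B_{t+k} - B_t|^2 ~ k^(2H)."""
    def msq(k):
        d = [path[i + k] - path[i] for i in range(len(path) - k)]
        return sum(x * x for x in d) / len(d)
    return 0.5 * math.log(msq(lag) / msq(1)) / math.log(lag)

rng = random.Random(0)
n, n_paths = 150, 5
L_rough, L_bm = fbm_cholesky(0.1, n), fbm_cholesky(0.5, n)
h_rough = sum(hurst_estimate(sample_path(L_rough, rng))
              for _ in range(n_paths)) / n_paths   # should be near 0.1
h_bm = sum(hurst_estimate(sample_path(L_bm, rng))
           for _ in range(n_paths)) / n_paths      # should be near 0.5
```

The estimated exponent of the rough paths comes out well below that of Brownian motion, which is the sense in which log-volatility with H ≈ 0.1 is “rougher” than a Brownian motion.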


The fact that, for basically all reasonably liquid assets, volatility is rough, with the same order of magnitude for the roughness parameter, is of course very intriguing. The tick-by-tick price model is based on a bi-dimensional Hawkes process, a bivariate point process (Nt+, Nt−)t≥0 taking values in (R+)2 and with intensity (λt+, λt−) of the form

λt+ = μ+ + ∫0t φ1(t − s)dNs+ + ∫0t φ2(t − s)dNs−
λt− = μ− + ∫0t φ3(t − s)dNs+ + ∫0t φ4(t − s)dNs−
Here μ+ and μ− are positive constants and the functions (φi)i=1,…,4 are non-negative; their associated matrix is called the kernel matrix. Hawkes processes are said to be self-exciting, in the sense that the instantaneous jump probability depends on the locations of the past events. Hawkes processes are nowadays in standard use in finance, not only in the field of microstructure but also in risk management and contagion modeling. The Hawkes process generates behavior that mimics financial data in a pretty impressive way, and back-fitting yields correspondingly good results. Yet some key problems remain the same whether one uses a simple Brownian motion model or this marvelous technical apparatus.

In short, back-fitting only goes so far.

  • The essentially random nature of living systems means that entirely different outcomes could follow if the randomness had occurred at some other point in time or with some other magnitude. Because of randomness, entirely different groups would likely succeed and fail every time the “clock” was turned back to time zero and the system allowed to unfold all over again. Goldman Sachs would not be the “vampire squid”; the London whale would never have been. This will boggle the mind if you let it.

  • Extracting unvarying physical laws governing a living system from data is in many cases NP-hard. There are far too many varieties of actors and interactions for the exercise to be tractable.

  • Even granting the possibility of their extraction, the components of a living system are not fixed and are not subject to unvarying physical laws, not even probability laws.

  • The conscious behavior of some actors in a financial market can change the rules of the game, some of those rules some of the time, or completely rewire the system from the bottom up. This is really just an extension of the previous point.

  • Natural mutations lead markets to rework their laws over time through an evolutionary process, with never a thought of doing so.


Thus, in this approach, Nt+ counts the upward jumps of the asset price in the time interval [0, t] and Nt− the downward jumps. Hence the instantaneous probability of an upward (downward) jump depends on the arrival times of the past upward and downward jumps. Furthermore, by construction, the price process lives on a discrete grid, which is obviously a crucial feature of high frequency prices in practice.

This simple tick-by-tick price model makes it easy to encode the following important stylized facts of modern electronic markets in the context of high frequency trading:

  1. Markets are highly endogenous, meaning that most of the orders have no real economic motivation but are rather sent by algorithms in reaction to other orders.
  2. Mechanisms preventing statistical arbitrages take place on high frequency markets. Indeed, at the high frequency scale, building strategies which are on average profitable is hardly possible.
  3. There is some asymmetry in the liquidity on the bid and ask sides of the order book; buying and selling are not symmetric actions. Consider for example a market maker with an inventory that is typically positive. She is likely to raise the price less after a buy order than she lowers it after a sell order of the same size, because her inventory shrinks after a buy order, which is good for her, whereas it grows after a sell order.
  4. A significant proportion of transactions is due to large orders, called metaorders, which are not executed at once but split in time by trading algorithms.

In the Hawkes process framework, the first of these properties corresponds to the case of so-called nearly unstable Hawkes processes, that is, Hawkes processes for which the stability condition is almost saturated: the spectral radius of the integral of the kernel matrix is smaller than, but close to, unity. The second and third properties impose a specific structure on the kernel matrix, and the fourth leads to functions φi with heavy tails.
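The near-instability condition is easy to state concretely. For exponential kernels φi(t) = ai e^(−βt) (an assumption made purely for illustration; the heavy-tailed kernels of property 4 would decay polynomially instead), the integral of the kernel matrix is [[a1, a2], [a3, a4]]/β, and stability requires its spectral radius to be below one:

```python
import math

def kernel_matrix_integral(alphas, beta):
    """Integrated kernel matrix for exponential kernels phi_i(t) = a_i e^{-beta t},
    with (a1, a2, a3, a4) arranged row-wise as in the intensity equations."""
    a1, a2, a3, a4 = alphas
    return [[a1 / beta, a2 / beta], [a3 / beta, a4 / beta]]

def spectral_radius(M):
    """Largest |eigenvalue| of a 2x2 matrix, via the characteristic polynomial."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = tr * tr - 4.0 * det
    if disc >= 0.0:
        root = math.sqrt(disc)
        return max(abs((tr + root) / 2.0), abs((tr - root) / 2.0))
    return math.sqrt(det)   # complex conjugate pair: |lambda| = sqrt(det)

# Symmetric kernels with a_i = 0.9, beta = 2: spectral radius 0.9,
# i.e. stable but close to saturation, a "nearly unstable" regime.
M = kernel_matrix_integral((0.9, 0.9, 0.9, 0.9), 2.0)
radius = spectral_radius(M)
```

Pushing the radius toward one strengthens the endogenous feedback of orders on orders, which is the sense in which highly endogenous markets correspond to nearly unstable Hawkes processes.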