Appropriation of (Ir)reversibility of Noise Fluctuations to (Un)Facilitate Complexity


Logical depth is a suitable measure of subjective complexity for physical as well as mathematical objects. This becomes evident upon considering the effect of irreversibility, noise, and spatial symmetries of the equations of motion and initial conditions on the asymptotic depth-generating abilities of model systems.

“Self-organization” suggests a spontaneous increase of complexity occurring in a system with simple, generic (e.g. spatially homogeneous) initial conditions. The increase of complexity attending a computation, by contrast, is less remarkable because it occurs in response to special initial conditions. An important question, which would have interested Turing, is whether self-organization is an asymptotically qualitative phenomenon like phase transitions. In other words, are there physically reasonable models in which complexity, appropriately defined, not only increases, but increases without bound in the limit of infinite space and time? A positive answer to this question would not explain the natural history of our particular finite world, but would suggest that its quantitative complexity can legitimately be viewed as an approximation to a well-defined qualitative property of infinite systems. On the other hand, a negative answer would suggest that our world should be compared to chemical reaction-diffusion systems (e.g. Belousov-Zhabotinsky), which self-organize on a macroscopic, but still finite scale, or to hydrodynamic systems which self-organize on a scale determined by their boundary conditions.

The suitability of logical depth as a measure of physical complexity depends on the assumed ability (“physical Church’s thesis”) of Turing machines to simulate physical processes, and to do so with reasonable efficiency. Digital machines cannot of course integrate a continuous system’s equations of motion exactly, and even the notion of computability is not very robust in continuous systems, but for realistic physical systems, subject throughout their time development to finite perturbations (e.g. electromagnetic and gravitational) from an uncontrolled environment, it is plausible that a finite-precision digital calculation can approximate the motion to within the errors induced by these perturbations. Empirically, many systems have been found amenable to “master equation” treatments in which the dynamics is approximated as a sequence of stochastic transitions among coarse-grained microstates.

We concentrate arbitrarily on cellular automata, in the broad sense of discrete lattice models with finitely many states per site, which evolve according to a spatially homogeneous local transition rule that may be deterministic or stochastic, reversible or irreversible, and synchronous (discrete time) or asynchronous (continuous time, master equation). Such models cover the range from evidently computer-like (e.g. deterministic cellular automata) to evidently material-like (e.g. Ising models) with many gradations in between.

More of the favorable properties need to be invoked to obtain “self-organization,” i.e. nontrivial computation from a spatially homogeneous initial condition. A rather artificial system (a cellular automaton which is stochastic but noiseless, in the sense that it has the power to make purely deterministic as well as random decisions) undergoes this sort of self-organization. It does so by allowing the nucleation and growth of domains, within each of which a depth-producing computation begins. When two domains collide, one conquers the other, and uses the conquered territory to continue its own depth-producing computation (a computation constrained to finite space, of course, cannot continue for more than exponential time without repeating itself). To achieve the same sort of self-organization in a truly noisy system appears more difficult, partly because of the conflict between the need to encourage fluctuations that break the system’s translational symmetry, while suppressing fluctuations that introduce errors in the computation.

Irreversibility seems to facilitate complex behavior by giving noisy systems the generic ability to correct errors. Only a limited sort of error-correction is possible in microscopically reversible systems such as the canonical kinetic Ising model. Minority fluctuations in a low-temperature ferromagnetic Ising phase in zero field may be viewed as errors, and they are corrected spontaneously because of their potential energy cost. This error correcting ability would be lost in nonzero field, which breaks the symmetry between the two ferromagnetic phases, and even in zero field it gives the Ising system the ability to remember only one bit of information. This limitation of reversible systems is recognized in the Gibbs phase rule, which implies that under generic conditions of the external fields, a thermodynamic system will have a unique stable phase, all others being metastable. Even in reversible systems, it is not clear why the Gibbs phase rule enforces as much simplicity as it does, since one can design discrete Ising-type systems whose stable phase (ground state) at zero temperature simulates an aperiodic tiling of the plane, and can even get the aperiodic ground state to incorporate (at low density) the space-time history of a Turing machine computation. Even more remarkably, one can get the structure of the ground state to diagonalize away from all recursive sequences.

Potential Synapses. Thought of the Day 52.0

For a neuron to recognize a pattern of activity it requires a set of co-located synapses (typically fifteen to twenty) that connect to a subset of the cells that are active in the pattern to be recognized. Learning to recognize a new pattern is accomplished by the formation of a set of new synapses co-located on a dendritic segment.


Figure: Learning by growing new synapses. Learning in an HTM neuron is modeled by the growth of new synapses from a set of potential synapses. A “permanence” value is assigned to each potential synapse and represents the growth of the synapse. Learning occurs by incrementing or decrementing permanence values. The synapse weight is a binary value set to 1 if the permanence is above a threshold.

Figure shows how we model the formation of new synapses in a simulated Hierarchical Temporal Memory (HTM) neuron. For each dendritic segment we maintain a set of “potential” synapses between the dendritic segment and other cells in the network that could potentially form a synapse with the segment. The number of potential synapses is larger than the number of actual synapses. We assign each potential synapse a scalar value called “permanence” which represents stages of growth of the synapse. A permanence value close to zero represents an axon and dendrite with the potential to form a synapse but that have not commenced growing one. A 1.0 permanence value represents an axon and dendrite with a large fully formed synapse.

The permanence value is incremented and decremented using a Hebbian-like rule. If the permanence value exceeds a threshold, such as 0.3, then the weight of the synapse is 1; if the permanence value is at or below the threshold, then the weight of the synapse is 0. The threshold represents the establishment of a synapse, albeit one that could easily disappear. A synapse with a permanence value of 1.0 has the same effect as a synapse with a permanence value at threshold but is not as easily forgotten. Using a scalar permanence value enables on-line learning in the presence of noise. A previously unseen input pattern could be noise or it could be the start of a new trend that will repeat in the future. By growing new synapses, the network can start to learn a new pattern when it is first encountered, but only act differently after several presentations of the new pattern. Increasing permanence beyond the threshold means that patterns experienced more often than others will take longer to forget.
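The update rule described above can be sketched in a few lines; the increment, decrement and threshold values here are illustrative assumptions, not HTM reference parameters:

```python
# Minimal sketch of HTM-style synapse learning. Each potential synapse
# carries a scalar "permanence"; the binary weight is 1 iff permanence
# exceeds the threshold. All numeric values are illustrative.

THRESHOLD = 0.3   # establishment threshold mentioned in the text
INCREMENT = 0.1   # assumed Hebbian increment
DECREMENT = 0.02  # assumed decrement

def update_permanences(permanences, presynaptic_active):
    """Hebbian-like rule: reinforce synapses whose presynaptic cell was
    active, weaken the rest. Permanence stays clipped to [0, 1]."""
    return [
        min(1.0, p + INCREMENT) if active else max(0.0, p - DECREMENT)
        for p, active in zip(permanences, presynaptic_active)
    ]

def weights(permanences):
    """Binary synapse weights derived from permanences."""
    return [1 if p > THRESHOLD else 0 for p in permanences]

# Potential synapses well below threshold become connected only after
# repeated presentations of the same pattern, not after a single one.
perm = [0.05, 0.05, 0.10]
active = [True, True, False]
for _ in range(3):
    perm = update_permanences(perm, active)
print(weights(perm))  # the two repeatedly reinforced synapses connect
```

Because the weight is a thresholded function of a slowly moving scalar, a single noisy presentation nudges permanences but does not immediately change behavior, which is exactly the noise tolerance argued for above.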

HTM neurons and HTM networks rely on distributed patterns of cell activity, thus the activation strength of any one neuron or synapse is not very important. Therefore, in HTM simulations we model neuron activations and synapse weights with binary states. Additionally, it is well known that biological synapses are stochastic, so a neocortical theory cannot require precision of synaptic efficacy. Although scalar states and weights might improve performance, they are not required from a theoretical point of view.

Conjuncted: Speculatively Accelerated Capital – Trading Outside the Pit.


High Frequency Traders (HFTs hereafter) may anticipate the trades of a mutual fund, for instance, if the mutual fund splits large orders into a series of smaller ones and the initial trades reveal information about the mutual fund’s future trading intentions. HFTs might also forecast order flow if traditional asset managers with similar trading demands do not all trade at the same time, allowing the possibility that the initiation of a trade by one mutual fund could forecast similar future trades by other mutual funds. If an HFT were able to forecast a traditional asset manager’s order flow by either these or some other means, then the HFT could potentially trade ahead of them and profit from the traditional asset manager’s subsequent price impact.

There are two main empirical implications of HFTs engaging in such a trading strategy. The first implication is that HFT trading should lead non-HFT trading – if an HFT buys a stock, non-HFTs should subsequently come into the market and buy those same stocks. Second, since the HFT’s objective would be to profit from non-HFTs’ subsequent price impact, it should be the case that the prices of the stocks they buy rise and those of the stocks they sell fall. These two patterns, together, are consistent with HFTs trading stocks in order to profit from non-HFTs’ future buying and selling pressure. 

While HFTs may in aggregate anticipate non-HFT order flow, it is also possible that among HFTs, some firms’ trades are strongly correlated with future non-HFT order flow, while other firms’ trades have little or no correlation with non-HFT order flow. This may be the case if certain HFTs focus more on strategies that anticipate order flow or if some HFTs are more skilled than other firms. If certain HFTs are better at forecasting order flow or if they focus more on such a strategy, then these HFTs’ trades should be consistently more strongly correlated with future non-HFT trades than are trades from other HFTs. Additionally, if these HFTs are more skilled, then one might expect these HFTs’ trades to be more strongly correlated with future returns. 

Another implication of the anticipatory trading hypothesis is that the correlation between HFT trades and future non-HFT trades should be stronger at times when non-HFTs are impatient. The reason is that anticipating buying and selling pressure requires forecasting future trades based on patterns in past trades and orders. To make anticipating their order flow difficult, non-HFTs typically use execution algorithms to disguise their trading intentions. But there is a trade-off between disguising order flow and trading a large position quickly. When non-HFTs are impatient and focused on trading a position quickly, they may not hide their order flow as well, making it easier for HFTs to anticipate their trades. At such times, the correlation between HFT trades and future non-HFT trades should be stronger.
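As a toy illustration of the first implication, one could measure whether HFT order flow at time t correlates with non-HFT order flow at t + 1. The series below are synthetic (the follower coefficient 0.5 is an arbitrary assumption), so this is a sketch of the measurement, not an empirical claim:

```python
import random
import statistics

# Synthetic check of the lead-lag implication: HFT flow at time t should
# correlate with non-HFT flow at t+1 more than contemporaneously. In the
# empirical setting, "flow" would be signed trade volume per interval.

random.seed(0)
T = 5000
hft = [random.gauss(0, 1) for _ in range(T)]
# Simulated non-HFT flow that partially follows HFT flow one step later.
nonhft = [0.5 * hft[t - 1] + random.gauss(0, 1) for t in range(1, T)]

def corr(x, y):
    """Pearson correlation of two equal-length series."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

lead = corr(hft[:-1], nonhft)     # HFT at t vs non-HFT at t+1
contemp = corr(hft[1:], nonhft)   # same-period correlation, for contrast
print(lead > contemp)
```

With the assumed data-generating process, the lead-lag correlation is close to 0.5/√1.25 while the contemporaneous one is near zero, which is the signature the hypothesis predicts.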

Gauge Theory of Arbitrage, or Financial Markets Resembling Screening in Electrodynamics


When a mispricing appears in a market, market speculators and arbitrageurs rectify the mistake by obtaining a profit from it. In the case of profitable fluctuations they move into profitable assets, leaving comparably less profitable ones. This affects prices in such a way that all assets of similar risk become equally attractive, i.e. the speculators restore the equilibrium. If this process occurs infinitely rapidly, then the market corrects the mispricing instantly and current prices fully reflect all relevant information. In this case one says that the market is efficient. However, clearly it is an idealization and does not hold for small enough times.

The general picture, sketched above, of the restoration of equilibrium in financial markets resembles screening in electrodynamics. Indeed, in the case of electrodynamics, negative charges move into the region of the positive electric field, positive charges move out of the region, and thus screen the field. Comparing this with the financial market, we can say that a local virtual arbitrage opportunity with a positive excess return plays the role of the positive electric field, speculators in the long position behave as negative charges, whilst the speculators in the short position behave as positive ones. Movements of positive and negative charges screen out a profitable fluctuation and restore the equilibrium so that there is no arbitrage opportunity any more, i.e. the speculators have eliminated the arbitrage opportunity.

The analogy may appear superficial, but it is not. It emerges naturally in the framework of the Gauge Theory of Arbitrage (GTA). The theory treats the calculation of net present values and the buying and selling of assets as a parallel transport of money in some curved space, and interprets interest rates, exchange rates and asset prices as the corresponding connection components. This structure is exactly equivalent to the geometrical structure underlying electrodynamics, where the components of the vector potential are connection components responsible for the parallel transport of the charges. The components of the corresponding curvature tensors are the electromagnetic field in the case of electrodynamics and the excess rate of return in the case of GTA. The presence of uncertainty is equivalent to the introduction of noise in the electrodynamics, i.e. quantization of the theory. It allows one to map the theory of the capital market onto the theory of a quantized gauge field interacting with matter (money flow) fields. The gauge transformations of the matter field correspond to a change of the par value of the asset units, whose effect is eliminated by a gauge tuning of the prices and rates. Free quantum gauge field dynamics (in the absence of money flows) is described by a geometrical random walk for asset prices with the log-normal probability distribution. In the general case the construction maps the capital market onto Quantum Electrodynamics, where the price walks are affected by money flows.
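A toy numeric version of the curvature picture: treating exchange rates as connection components, the product of rates around a closed loop of conversions is a holonomy factor, and a factor different from 1 signals curvature, i.e. an arbitrage opportunity that speculators' flows would screen out. The rates below are made up purely for illustration:

```python
# Illustrative parallel-transport sketch: money moved around a closed
# loop of conversions picks up a multiplicative "holonomy" factor.
# A factor != 1 is the discrete analogue of nonzero curvature (arbitrage).
# These exchange rates are invented for the example.

rates = {("USD", "EUR"): 0.90, ("EUR", "GBP"): 0.85, ("GBP", "USD"): 1.32}

def loop_factor(path, rates):
    """Product of connection factors along a closed path of conversions."""
    factor = 1.0
    for a, b in zip(path, path[1:]):
        factor *= rates[(a, b)]
    return factor

h = loop_factor(["USD", "EUR", "GBP", "USD"], rates)
print(abs(h - 1.0) > 1e-6)  # nonzero "curvature" around this loop
```

In an efficient market the screening process described above would drive every such loop factor back toward 1, just as charge flows neutralize an electric field.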

Electrodynamical model of quasi-efficient financial market

Forward, Futures Contracts and Options: Top Down or Bottom Up Modeling?


Financial markets can be modeled, from a theoretical viewpoint, according to two separate approaches: a bottom up approach and/or a top down approach. For instance, modeling financial markets starting from diffusion equations and adding a noise term to the evolution of a function of a stochastic variable is a top down approach. This type of description is, effectively, a statistical one.

A bottom up approach, instead, is the modeling of artificial markets using complex data structures (agent based simulations) with general updating rules to describe the collective state of the market. The number of procedures implemented in the simulations can be quite large, although the computational cost of the simulation becomes forbidding as the size of each agent increases. Readers familiar with Sugarscape models and the computational strategies of Growing Artificial Societies probably have an idea of the enormous potential of the field. All Sugarscape models include the agents (inhabitants), the environment (a two-dimensional grid) and the rules governing the interaction of the agents with each other and the environment. The original model presented by J. Epstein & R. Axtell (considered the first large scale agent model) is based on a 51 x 51 cell grid, where every cell can contain different amounts of sugar (or spice). At every step agents look around, find the closest cell filled with sugar, move and metabolize. They can leave pollution, die, reproduce, inherit sources, transfer information, trade or borrow sugar, generate immunity or transmit diseases – depending on the specific scenario and variables defined at the set-up of the model. Sugar in the simulation could be seen as a metaphor for resources in an artificial world, through which the examiner can study the effects of social dynamics such as evolution, marital status and inheritance on populations. Exact simulation of the original rules provided by J. Epstein & R. Axtell in their book can be problematic, and it is not always possible to recreate the same results as those presented in Growing Artificial Societies. However, one would expect the bottom up description to become comparable to the top down description for a very large number of simulated agents.
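A minimal sketch of one Sugarscape-style update step (look around, move to the best visible sugar cell, harvest, metabolize) might look as follows; the grid size matches the 51 x 51 example, but the vision and metabolism values are illustrative, not Epstein & Axtell's:

```python
import random

# Minimal Sugarscape-style step, assuming the simplest movement rule:
# each agent scans the four lattice directions up to its vision range,
# moves to the visible cell with the most sugar (ties broken by scan
# order, nearer cells first), harvests it and pays its metabolism.

SIZE = 51
random.seed(0)
sugar = [[random.randint(0, 4) for _ in range(SIZE)] for _ in range(SIZE)]

class Agent:
    def __init__(self, x, y, vision=3, metabolism=1, wealth=5):
        self.x, self.y = x, y
        self.vision, self.metabolism, self.wealth = vision, metabolism, wealth

    def step(self):
        candidates = [(self.x, self.y)]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            for v in range(1, self.vision + 1):
                candidates.append(((self.x + dx * v) % SIZE,
                                   (self.y + dy * v) % SIZE))
        self.x, self.y = max(candidates, key=lambda c: sugar[c[0]][c[1]])
        self.wealth += sugar[self.x][self.y] - self.metabolism
        sugar[self.x][self.y] = 0
        return self.wealth >= 0          # agent survives this step?

agents = [Agent(random.randrange(SIZE), random.randrange(SIZE))
          for _ in range(10)]
for _ in range(20):
    agents = [a for a in agents if a.step()]
print(len(agents))
```

Even this stripped-down rule set already produces the kind of collective resource dynamics (depletion, starvation, survivors clustering near rich cells) that the full model elaborates with pollution, trade and inheritance.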

The bottom up approach should also provide a better description of extreme events, such as crashes, collectively conditioned behaviour and market incompleteness, this approach being of purely algorithmic nature. A top down approach is, therefore, a model of reduced complexity and follows a statistical description of the dynamics of complex systems.

Forward, Futures Contracts and Options: Let the price at time t of a security be S(t). A specific good can be traded at time t at the price S(t) between a buyer and a seller. The seller (short position) agrees to sell the goods to the buyer (long position) at some time T in the future at a price F(t,T) (the contract price). Notice that contract prices have a two-time dependence (actual time t and maturity time T). Their difference τ = T − t is usually called time to maturity. Equivalently, the actual price of the contract is determined by the prevailing actual prices and interest rates and by the time to maturity. Entering into a forward contract requires no money, and the value of the contract for long position holders and short position holders at maturity T will be

(−1)^p (S(T) − F(t,T)) —– (1)

where p = 0 for long positions and p = 1 for short positions. Futures Contracts are similar, except that after the contract is entered, any changes in the market value of the contract are settled by the parties. Hence, the cashflows occur all the way to expiry, unlike in the case of the forward where only one cashflow occurs. They are also highly regulated and involve a third party (a clearing house). Forward, futures contracts and options go under the name of derivative products, since their contract price F(t, T) depends on the value of the underlying security S(T). Options are derivatives that can be written on any security and have a more complicated payoff function than the futures or forwards. For example, a call option gives the buyer (long position) the right (but not the obligation) to buy the security at some predetermined strike price at maturity. A payoff function is the precise form of the price. Path dependent options are derivative products whose value depends on the actual path followed by the underlying security up to maturity. In the case of path-dependent options, since the payoff may not be directly linked to an explicit right, they must be settled by cash. This is sometimes true for futures and plain options as well, as this is more efficient.
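The payoff of eq. (1) is trivial to encode, which makes the sign convention explicit:

```python
# Payoff at maturity of a forward contract, eq. (1): (-1)^p (S(T) - F(t,T)),
# with p = 0 for the long position and p = 1 for the short position.

def forward_payoff(spot_at_maturity, contract_price, p):
    """Value of the contract at maturity T for position p (0=long, 1=short)."""
    return (-1) ** p * (spot_at_maturity - contract_price)

# If the good trades at 105 at maturity but was contracted at 100,
# the long side gains 5 and the short side loses 5: a zero-sum pair.
assert forward_payoff(105.0, 100.0, p=0) == 5.0
assert forward_payoff(105.0, 100.0, p=1) == -5.0
```

The zero-sum structure of the two positions is exactly why entering the contract requires no money up front.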

Conjuncted: Mispricings Happened in the Past do not Influence the Derivative Price: Black-Scholes (BS) Analysis and Arbitrage-Free Financial Economics. Note Quote.


It can be shown that the probability (up to a normalization constant) of the trajectory R(·,·) has the form:

P[R(·,·)] ∼ exp[−1/2 ∫ dt dt′ dS dS′ R(t, S) K^−1(t, S|t′, S′) R(t′, S′)] —– (1)

where the kernel of the operator K is defined as:

K(t, S|t′, S′) = θ(T − t) θ(T − t′) ∫0 dτ ds f(τ) θ(t − τ) θ(t′ − τ) e^−λ(t + t′ − 2τ) × P(t, S|τ, s) P(t′, S′|τ, s) —– (2)

It is easy to see that the kernel is of order 1/λ and vanishes as λ → ∞. Equation 2, in particular, results in the equality for the correlation function:

⟨R(t, S) R(t′, S′)⟩ = Σ² · K(t, S|t′, S′) —– (3)

Comment on Purely Random Correlations of the Matrix, or Studying Noise in Neural Networks


In the presence of two-body interactions, the many-body Hamiltonian matrix elements v^J_{α,α′} of good total angular momentum J in the shell-model basis |α⟩ generated by the mean field can be expressed as follows:

v^J_{α,α′} = ∑_{J′ii′} c^{Jαα′}_{J′ii′} g_{J′ii′} —– (4)

The summation runs over all combinations of the two-particle states |i⟩ coupled to the angular momentum J′ and connected by the two-body interaction g. The analogy of this structure to the one schematically captured by eq. (2) is evident. The g_{J′ii′} denote here the radial parts of the corresponding two-body matrix elements, while the c^{Jαα′}_{J′ii′} globally represent elements of the angular momentum recoupling geometry. The g_{J′ii′} are drawn from a Gaussian distribution, while the geometry expressed by c^{Jαα′}_{J′ii′} enters explicitly. This originates from the fact that a quasi-random coupling of individual spins results in the so-called geometric chaoticity, and thus the c^{Jαα′}_{J′ii′} coefficients are also Gaussian distributed. In this case, these two essentially random ingredients (g_{J′ii′} and c^{Jαα′}_{J′ii′}) nevertheless lead to an order of magnitude larger separation of the ground state from the remaining states as compared to a pure Random Matrix Theory (RMT) limit. Due to more severe selection rules, the effect of geometric chaoticity does not apply for J = 0. Consistently, the ground state energy gap measured relative to the average level spacing characteristic for a given J is larger for J > 0 than for J = 0, and J > 0 ground states are also more orderly than those for J = 0, as can be quantified in terms of the information entropy.

Interestingly, such reductions of dimensionality of the Hamiltonian matrix can also be seen locally in explicit calculations with realistic (non-random) nuclear interactions. A collective state, one which turns out to be coherent with some operator representing a physical external field, is always surrounded by a reduced density of states, i.e., it repels the other states. In all those cases, the global fluctuation characteristics remain, however, largely consistent with the corresponding version of the random matrix ensemble.

Recently, a broad arena of applicability of random matrix theory has opened in connection with the most complex systems known to exist in the universe. Without doubt, the most complex is the human brain and the phenomena that result from its activity. From the physics point of view, the financial world, reflecting such activity, is of particular interest because its characteristics are quantified directly in terms of numbers, and a huge amount of electronically stored financial data is readily available. Access to the activity of a single brain is also possible by detecting the electric or magnetic fields generated by the neuronal currents. With present-day techniques of electro- or magnetoencephalography, it is possible in this way to generate time series which resolve neuronal activity down to the scale of 1 ms.

One may debate over what is more complex, the human brain or the financial world, and there is no unique answer. It seems to us, however, that it is the financial world that is even more complex. After all, it involves the activity of many human brains, and it seems even less predictable due to more frequent changes between different modes of action. Noise is of course overwhelming in either of these systems, as can be inferred from the structure of eigen-spectra of the correlation matrices taken across different space areas at the same time, or across different time intervals. There always exist, however, several well identifiable deviations, which, with reference to the universal characteristics of random matrix theory, and with the methodology briefly reviewed above, can be classified as real correlations or collectivity. An easily identifiable gap between the corresponding eigenvalues of the correlation matrix and the bulk of its eigenspectrum plays the central role in this connection. The brain, when responding to sensory stimulation, develops larger gaps than the brain at rest. The correlation matrix formalism in its most general asymmetric form allows one to study also the time-delayed correlations, like the ones between the opposite hemispheres. The time-delay reflecting the maximum of correlation (the time needed for information to be transmitted between the different sensory areas in the brain) is also associated with the appearance of one significantly larger eigenvalue. Similar effects appear to govern the formation of heteropolymeric biomolecules. The ones that nature makes use of are separated by an energy gap from the purely random sequences.


Purely Random Correlations of the Matrix, or Studying Noise in Neural Networks


Expressed in the most general form, in essentially all the cases of practical interest, the n × n matrices W used to describe the complex system are by construction designed as

W = XY^T —– (1)

where X and Y denote rectangular n × m matrices. Such, for instance, are the correlation matrices, whose standard form corresponds to Y = X. In this case one thinks of n observations or cases, each represented by an m-dimensional row vector xi (yi), (i = 1, …, n), and typically m is larger than n. In the limit of purely random correlations the matrix W is then said to be a Wishart matrix. The resulting density ρW(λ) of eigenvalues is here known analytically, with the limits (λmin ≤ λ ≤ λmax) prescribed by

λmax/min = 1 + 1/Q ± 2√(1/Q), where Q = m/n ≥ 1.

The variance of the elements of xi is here assumed unity.
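These bounds are easy to verify numerically by diagonalizing a finite Wishart matrix built from i.i.d. unit-variance data (here with Q = 4; the tolerance absorbs finite-size fluctuations at the spectrum edges):

```python
import numpy as np

# Numerical check of the Wishart eigenvalue bounds quoted above: for
# W = X X^T / m with i.i.d. unit-variance entries and Q = m/n >= 1, the
# eigenvalue density is supported on 1 + 1/Q -/+ 2*sqrt(1/Q).

rng = np.random.default_rng(0)
n, m = 200, 800                      # Q = 4
Q = m / n
X = rng.standard_normal((n, m))
W = X @ X.T / m                      # normalized Wishart matrix
eigs = np.linalg.eigvalsh(W)

lam_min = 1 + 1/Q - 2 * np.sqrt(1/Q)   # = 0.25 for Q = 4
lam_max = 1 + 1/Q + 2 * np.sqrt(1/Q)   # = 2.25 for Q = 4
print(eigs.min().round(3), eigs.max().round(3))
```

At finite n the extreme eigenvalues fluctuate slightly around these edges, so an empirical eigenvalue marginally outside the band is not yet evidence of real correlations; a clearly separated outlier is.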

The more general case, of X and Y different, results in asymmetric correlation matrices with complex eigenvalues λ. In this more general case a limiting distribution corresponding to purely random correlations seems not to be yet known analytically as a function of m/n. It indicates however that in the case of no correlations, quite generically, one may expect a largely uniform distribution of λ bound in an ellipse on the complex plane.

Further examples of matrices of similar structure, of great interest from the point of view of complexity, include the Hamiltonian matrices of strongly interacting quantum many body systems such as atomic nuclei. This holds true on the level of bound states, where the problem is described by Hermitian matrices, as well as for excitations embedded in the continuum. This latter case can be formulated in terms of an open quantum system, which is represented by a complex non-Hermitian Hamiltonian matrix. Several neural network models also belong to this category of matrix structure. In this domain the reference is provided by the Gaussian (orthogonal, unitary, symplectic) ensembles of random matrices with the semi-circle law for the eigenvalue distribution. For the irreversible processes there exists their complex version, with a special case, the so-called scattering ensemble, which accounts for S-matrix unitarity.

As it has already been expressed above, several variants of ensembles of the random matrices provide an appropriate and natural reference for quantifying various characteristics of complexity. The bulk of such characteristics is expected to be consistent with Random Matrix Theory (RMT), and in fact there exists strong evidence that it is. Once this is established, even more interesting are however deviations, especially those signaling emergence of synchronous or coherent patterns, i.e., the effects connected with the reduction of dimensionality. In the matrix terminology such patterns can thus be associated with a significantly reduced rank k (thus k ≪ n) of a leading component of W. A satisfactory structure of the matrix that would allow some coexistence of chaos or noise and of collectivity thus reads:

W = Wr + Wc —– (2)

Of course, in the absence of Wr, the second term (Wc) of W generates k nonzero eigenvalues, and all the remaining ones (n − k) constitute the zero modes. When Wr enters as a noise (random like matrix) correction, a trace of the above effect is expected to remain, i.e., k large eigenvalues and the bulk composed of n − k small eigenvalues whose distribution and fluctuations are consistent with an appropriate version of random matrix ensemble. One likely mechanism that may lead to such a segregation of eigenspectra is that m in eq. (1) is significantly smaller than n, or that the number of large components makes it effectively small on the level of large entries w of W. Such an effective reduction of m (M = meff) is then expressed by the following distribution P(w) of the large off-diagonal matrix elements in the case they are still generated by the random like processes

P(w) = (|w|^((M−1)/2) K_((M−1)/2)(|w|)) / (2^((M−1)/2) Γ(M/2) √π) —– (3)

where K stands for the modified Bessel function. Asymptotically, for large w, this leads to P(w) ∼ e^(−|w|) |w|^(M/2−1), and thus reflects an enhanced probability of appearance of a few large off-diagonal matrix elements as compared to a Gaussian distribution. Consistent with the central limit theorem, the distribution quickly converges to a Gaussian with increasing M.
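The segregation of eigenspectra described around eq. (2) can be sketched numerically: adding a rank-k collective component to a symmetric random matrix leaves k large eigenvalues separated by a gap from the noise bulk. The strength 5.0 and the sizes below are arbitrary illustrative choices:

```python
import numpy as np

# Sketch of eq. (2), W = Wr + Wc: a rank-k "collective" component Wc plus
# a symmetric random component Wr. The k large eigenvalues of Wc survive
# the noise, separated by a gap from the bulk of n - k small eigenvalues.

rng = np.random.default_rng(1)
n, k = 300, 3
# Rank-k collective part: k outer products of random direction vectors.
vecs = rng.standard_normal((k, n))
Wc = sum(5.0 * np.outer(v, v) / n for v in vecs)
# GOE-like symmetric random part (semicircle bulk of radius ~ sqrt(2)).
A = rng.standard_normal((n, n)) / np.sqrt(n)
Wr = (A + A.T) / 2

eigs = np.sort(np.linalg.eigvalsh(Wr + Wc))
outliers, bulk_edge = eigs[-k:], eigs[-(k + 1)]
print(bulk_edge < outliers.min())   # expect a clear spectral gap
```

This is the matrix-level picture behind the "one significantly larger eigenvalue" observed for stimulated brains and correlated markets: collectivity shows up as outliers, while the bulk stays consistent with the random matrix reference.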

Based on several examples of natural complex dynamical systems, like the strongly interacting Fermi systems, the human brain and the financial markets, one could systematize evidence that such effects are indeed common to all the phenomena that intuitively can be qualified as complex.

Random Uniform Deviate, or Correlation Dimension


A widely used dimension algorithm in data analysis is the correlation dimension. Fix m, a positive integer, and r, a positive real number. Given a time series of data u(1), u(2), …, u(N), from measurements equally spaced in time, form a sequence of vectors x(1), x(2), …, x(N − m + 1) in R^m, defined by x(i) = [u(i), u(i + 1), …, u(i + m − 1)]. Next, define for each i, 1 ≤ i ≤ N − m + 1,

C_i^m(r) = (number of j such that d[x(i), x(j)] ≤ r)/(N − m + 1) ———- [1]

We must define d[x(i), x(j)] for vectors x(i) and x(j). We define

d[x(i), x(j)] = max_(k = 1, 2, …, m) |u(i + k − 1) − u(j + k − 1)| ———- [2]

From the C_i^m(r), define

C^m(r) = (N − m + 1)^(−1) ∑_(i = 1)^(N − m + 1) C_i^m(r) ———- [3]

and define

β_m = lim_(r → 0) lim_(N → ∞) log C^m(r)/log r ———- [4]

The assertion is that for m sufficiently large, β_m is the correlation dimension. Such a limiting slope has been shown to exist for the commonly studied chaotic attractors. This procedure has frequently been applied to experimental data; investigators seek a “scaling range” of r values for which log C^m(r)/log r is nearly constant for large m, and they infer that this ratio is the correlation dimension. In some instances, investigators have concluded that this procedure establishes deterministic chaos.
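A direct, unoptimized implementation of eqs. [1]-[3] (O(N²) pair counting, so only for short series):

```python
import random

# Direct implementation of eqs. [1]-[3]: the correlation sum C^m(r) for a
# time series u, using the max-norm distance of eq. [2].

def correlation_sum(u, m, r):
    N = len(u)
    vecs = [u[i:i + m] for i in range(N - m + 1)]   # the vectors x(i)
    def d(x, y):                                    # eq. [2]: max-norm
        return max(abs(a - b) for a, b in zip(x, y))
    n_vec = N - m + 1
    # eqs. [1] and [3]: average over i of the fraction of j within r of i
    return sum(
        sum(1 for xj in vecs if d(xi, xj) <= r) / n_vec for xi in vecs
    ) / n_vec

# In practice one estimates the slope of log C^m(r) vs log r over a
# scaling range of r values; here we just evaluate the sum itself
# for i.i.d. uniform noise.
random.seed(0)
u = [random.random() for _ in range(400)]
c = correlation_sum(u, m=2, r=0.1)
print(0.0 < c < 1.0)
```

For i.i.d. noise the slope keeps growing with m (no finite correlation dimension), which is exactly the behavior the counterexample below is designed to subvert.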

The latter conclusion is not necessarily correct: a converged, finite correlation dimension value does not guarantee that the defining process is deterministic. Consider the following stochastic process. Fix 0 ≤ p ≤ 1. Define X_j = α^(−1/2) sin(2πj/12) ∀ j, where α is specified below. Define Y_j as a family of independent identically distributed (i.i.d.) real random variables, with uniform density on the interval [−√3, √3]. Define Z_j = 1 with probability p, Z_j = 0 with probability 1 − p.

α = (∑_(j = 1)^12 sin²(2πj/12))/12 ———- [5]

and define MIX_j = (1 − Z_j) X_j + Z_j Y_j. Intuitively, MIX(p) is generated by first ascertaining, for each j, whether the jth sample will be from the deterministic sine wave or from the random uniform deviate, with likelihood 1 − p of the former choice, then calculating either X_j or Y_j. Increasing p marks a tendency towards greater system randomness. We now show that almost surely β_m in [4] equals 0 ∀ m for the MIX(p) process, p ≠ 1. Fix m, define K_j = (12m)j − 12m, and define N_j = 1 if (MIX_(K_j + 1), …, MIX_(K_j + m)) = (X_1, …, X_m), N_j = 0 otherwise. The N_j are i.i.d. random variables, with the expected value of N_j, E(N_j) ≥ (1 − p)^m. By the Strong Law of Large Numbers,

lim_{N → ∞} ∑_{j=1}^{N} N_j/N = E(N_1) ≥ (1 − p)^m

Observe that (∑_{j=1}^{N} N_j/12mN)² is a lower bound to C^m(r), since x(k(i)+1) = x(k(j)+1) whenever N_i = N_j = 1, so each such pair of vectors lies within distance r of each other. Thus for r < 1

lim sup_{N → ∞} log C^m(r)/log r ≤ (1/log r) log lim_{N → ∞} (∑_{j=1}^{N} N_j/12mN)² ≤ log[(1 − p)^{2m}/(12m)²]/log r

Since (1 − p)^{2m}/(12m)² is independent of r, β_m = lim_{r → 0} lim_{N → ∞} log C^m(r)/log r = 0 almost surely. Since β_m ≠ 0 with probability 0 for each m, by countable additivity, almost surely β_m = 0 ∀ m.
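To make the construction concrete, here is a small numerical sketch (function names, parameter choices, and the seeded NumPy generator are ours). It samples MIX(p), and checks that the fraction of length-m blocks drawn entirely from the sine branch — the event N_j = 1, up to probability-zero coincidences — is close to (1 − p)^m, the quantity driving the proof above. Note that both branches have unit variance: α in [5] equals 1/2, and a uniform deviate on [−√3, √3] also has variance 1.

```python
import numpy as np

def mix(p, n, rng):
    """Sample MIX(p)_1, ..., MIX(p)_n: with probability 1 - p the j-th output
    is the normalized sine sample X_j, otherwise an independent uniform
    deviate Y_j on [-sqrt(3), sqrt(3)]."""
    j = np.arange(1, n + 1)
    alpha = np.mean(np.sin(2 * np.pi * np.arange(1, 13) / 12) ** 2)  # Eq. [5]; equals 1/2
    X = alpha ** -0.5 * np.sin(2 * np.pi * j / 12)
    Y = rng.uniform(-np.sqrt(3), np.sqrt(3), size=n)
    Z = rng.random(n) < p            # Z_j = 1 with probability p
    return np.where(Z, Y, X)

# Frequency of all-deterministic blocks of length m: N_j = 1 exactly when the
# m coin flips in the j-th block all select the sine branch, probability (1-p)^m.
rng = np.random.default_rng(42)
p, m, n_blocks = 0.1, 3, 20000
Z = rng.random((n_blocks, m)) < p
freq = np.mean(np.all(~Z, axis=1))
print(freq, (1 - p) ** m)            # freq ≈ 0.729 for this seed
```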

The MIX(p) process can be motivated by considering an autonomous unit that produces sinusoidal output, surrounded by a world of interacting processes that in ensemble produces output that resembles noise relative to the timing of the unit. The extent to which the surrounding world interacts with the unit could be controlled by a gateway between the two, with a larger gateway admitting greater apparent noise to compete with the sinusoidal signal. It is easy to show that, given a sequence X_j, a sequence of i.i.d. Y_j defined by a density function and independent of the X_j, and Z_j = X_j + Y_j, then Z_j has an infinite correlation dimension. It appears that correlation dimension distinguishes between correlated and uncorrelated successive iterates, with larger estimates of dimension corresponding to uncorrelated data. For a more complete interpretation of correlation dimension results, stochastic processes with correlated increments should be analyzed. Error estimates in dimension calculations are commonly seen. In statistics, one presumes a specified underlying stochastic distribution to estimate misclassification probabilities. Without knowing the form of a distribution, or whether the system is deterministic or stochastic, one must be suspicious of error estimates. There often appears to be a desire to establish a noninteger dimension value, to give a fractal and chaotic interpretation to the result, but again, prior to a thorough study of the relationship between the geometric Hausdorff dimension and the time series formula labeled correlation dimension, it is speculation to draw conclusions from a noninteger correlation dimension value.

Forward Pricing in Commodity Markets. Note Quote.


We use the Hilbert space

H_α := {f ∈ AC(R_+, C) : ∫_0^∞ |f′(x)|² e^{αx} dx < ∞}

where AC(R_+, C) denotes the space of complex-valued absolutely continuous functions on R_+. We endow H_α with the scalar product ⟨f, g⟩_α := f(0) g(0)* + ∫_0^∞ f′(x) g′(x)* e^{αx} dx, where * denotes complex conjugation, and denote the associated norm by ∥ · ∥_α. Filipović shows that (H_α, ∥ · ∥_α) is a separable Hilbert space. This space has been used by Filipović for term structure modelling of bonds, and many mathematical properties have been derived therein. We will frequently refer to H_α as the Filipović space.
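As a quick numerical illustration of this scalar product, the sketch below restricts to real-valued curves with explicitly supplied derivatives and truncates the integral at a finite horizon — simplifications of ours, valid only when the integrand decays:

```python
import numpy as np

def filipovic_inner(f, fp, g, gp, alpha, x_max=40.0, n=200001):
    """Approximate <f, g>_alpha = f(0) g(0) + int_0^inf f'(x) g'(x) e^{alpha x} dx
    for real-valued f, g with derivatives fp, gp, truncating at x_max."""
    x = np.linspace(0.0, x_max, n)
    h = x[1] - x[0]
    y = fp(x) * gp(x) * np.exp(alpha * x)
    integral = (0.5 * (y[0] + y[-1]) + y[1:-1].sum()) * h   # trapezoidal rule
    return f(0.0) * g(0.0) + integral

# Example: f = g = e^{-x}, alpha = 1/2 gives 1 + int_0^inf e^{-3x/2} dx = 5/3.
e, ep = (lambda x: np.exp(-x)), (lambda x: -np.exp(-x))
val = filipovic_inner(e, ep, e, ep, 0.5)   # val ≈ 5/3
```

Note the exponential weight e^{αx}: membership in H_α forces f′ to decay, which is what makes forward curves flatten out at long maturities.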

We next introduce our dynamics for the term structure of forward prices in a commodity market. Denote by f(t, x) the price at time t of a forward contract where time to delivery of the underlying commodity is x ≥ 0. We treat f as a stochastic process in time with values in the Filipović space H_α. More specifically, we assume that the process {f(t)}_{t≥0} follows the HJM-Musiela model, which we formalize next. The Heath–Jarrow–Morton (HJM) framework is a general framework for modelling the evolution of the interest rate curve – the instantaneous forward rate curve in particular (as opposed to simple forward rates). When the volatility and drift of the instantaneous forward rate are assumed to be deterministic, this is known as the Gaussian HJM model of forward rates. For direct modelling of simple forward rates, the Brace–Gatarek–Musiela model is an example.

On a complete filtered probability space (Ω, F, {F_t}_{t≥0}, P), where the filtration is assumed to be complete and right-continuous, we work with an H_α-valued Lévy process {L(t)}_{t≥0} (we refer to Peszat and Zabczyk for the construction of H_α-valued Lévy processes). In mathematical finance, Lévy processes are becoming extremely fashionable because they can describe the observed reality of financial markets in a more accurate way than models based on Brownian motion. In the ‘real’ world, we observe that asset price processes have jumps or spikes, and risk managers have to take them into consideration. Moreover, the empirical distribution of asset returns exhibits fat tails and skewness, behavior that deviates from normality. Hence, models that accurately fit return distributions are essential for the estimation of profit and loss (P&L) distributions. Similarly, in the ‘risk-neutral’ world, we observe that implied volatilities are constant neither across strike nor across maturities, as stipulated by the Black–Scholes model. Therefore, traders need models that can capture the behavior of the implied volatility smiles more accurately, in order to handle the risk of trades. Lévy processes provide us with the appropriate tools to adequately and consistently describe all these observations, both in the ‘real’ and in the ‘risk-neutral’ world. We assume that L has finite variance and mean equal to zero, and denote its covariance operator by Q. Let f_0 ∈ H_α and f be the solution of the stochastic partial differential equation (SPDE)

df(t) = ∂_x f(t) dt + β(t) dt + Ψ(t) dL(t), t ≥ 0, f(0) = f_0

where β ∈ L((Ω × R_+, P, P ⊗ λ), H_α), P being the predictable σ-field, and

Ψ ∈ L²_L(H_α) := ∪_{T>0} L²_{L,T}(H_α)

where the latter space is defined as in Peszat and Zabczyk. For t ≥ 0, denote by U_t the shift semigroup on H_α defined by U_t f = f(t + ·) for f ∈ H_α. It is shown in Filipović that {U_t}_{t≥0} is a C_0-semigroup on H_α, with generator ∂_x. Recall that any C_0-semigroup admits the bound ∥U_t∥_op ≤ M e^{wt} for some w, M > 0 and any t ≥ 0. Here, ∥ · ∥_op denotes the operator norm. Thus s → U_{t−s} β(s) is Bochner-integrable (the Bochner integral, named for Salomon Bochner, extends the Lebesgue integral to functions taking values in a Banach space, as the limit of integrals of simple functions), and s → U_{t−s} Ψ(s) is integrable with respect to L. The unique mild solution of the SPDE is

f(t) = U_t f_0 + ∫_0^t U_{t−s} β(s) ds + ∫_0^t U_{t−s} Ψ(s) dL(s)
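A finite-dimensional sketch may help fix ideas about this mild solution. The shift semigroup acts simply by U_t f_0 = f_0(t + ·), which the code evaluates directly; we replace the H_α-valued Lévy driver by a scalar Brownian motion and take Ψ(s) = σψ for a fixed curve ψ with zero drift — strong simplifications of the Hilbert-space setting, and all names below are ours:

```python
import numpy as np

def mild_solution(f0, psi, sigma, t, x_grid, n_steps, rng):
    """Simulate f(t, x) = f0(t + x) + sigma * int_0^t psi(t - s + x) dW(s)
    on x_grid: the mild solution with beta = 0, Psi(s) = sigma * psi, and
    the Levy process replaced by scalar Brownian motion (a simplifying
    assumption; the general case needs genuinely Hilbert-space-valued noise)."""
    dt = t / n_steps
    s = np.arange(n_steps) * dt                  # left endpoints of time steps
    dW = rng.normal(0.0, np.sqrt(dt), n_steps)   # Brownian increments
    f = f0(t + x_grid)                           # deterministic part U_t f0
    for k in range(n_steps):                     # Euler sum for the stochastic
        f = f + sigma * psi(t - s[k] + x_grid) * dW[k]   # convolution
    return f

x_grid = np.linspace(0.0, 10.0, 101)
rng = np.random.default_rng(7)
curve = mild_solution(lambda y: np.exp(-y), lambda y: np.exp(-y),
                      0.2, 1.0, x_grid, 200, rng)
```

With σ = 0 the output is exactly the shifted initial curve f_0(t + x), the pure semigroup part of the formula above.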

If we model the forward price dynamics f in a risk-neutral setting, the drift coefficient β(t) will naturally be zero in order to ensure the (local) martingale property (in probability theory, a martingale is a model of a fair game where knowledge of past events never helps predict the mean of the future winnings and only the current event matters; in particular, a martingale is a sequence of random variables, i.e. a stochastic process, for which, at a particular time in the realized sequence, the expectation of the next value in the sequence is equal to the present observed value even given knowledge of all prior observed values) of the process t → f(t, τ − t), where τ ≥ t is the time of delivery of the forward. In this case, the probability P is to be interpreted as the equivalent martingale measure (also called the pricing measure). However, with a non-zero drift, the forward model is stated under the market probability, and β can be related to the risk premium in the market. In energy markets like power and gas, the forward contracts deliver over a period, and forward prices can be expressed by integral operators on the Filipović space applied to f. The dynamics of f can also be considered as a model for the forward rate in fixed-income theory. This is indeed the traditional application area and point of analysis of the SPDE. Note, however, that the original no-arbitrage condition in the HJM approach for interest rate markets is different from the no-arbitrage condition in this commodity market setting. If f is understood as the forward rate modelled in the risk-neutral setting, there is a no-arbitrage relationship between the drift β, the volatility Ψ and the covariance of the driving noise L.