Conjuncted: Noise Traders, Chartists and Fundamentalists

Let us leave traders’ decision-making processes and turn to the adjustment of the stock-market price. We assume the existence of a market maker, such as a specialist on the New York Stock Exchange. The role of the market maker is to give an execution price to incoming orders and to execute transactions. The market maker announces a price at the beginning of each trading period. Traders then determine their excess demand based on the announced price and on their expected prices. When the market maker observes either excess demand or excess supply, he applies the so-called short-side rule to the demands and supplies, taking aggregate transactions for the stock to be equal to the minimum of total supply and demand. Thus traders on the short side of the market will realize their desired transactions. At the beginning of the next trading period, he announces a new price. If the excess demand in period t is positive (negative), the market maker raises (reduces) the price for the following period t + 1. The process is then repeated. Let κ and ξ be the fractions of chartists and of noise traders in the total number of traders, respectively. Then the process of price adjustment can be written as

p_{t+1} − p_t = θn[(1 − κ − ξ)x_t^f + κx_t^c + ξx_t^n]

where θ denotes the speed of the adjustment of the price, and n the total number of traders.
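
As a minimal sketch of this price-adjustment rule (the function and variable names below are our own, not from the text), one step of the market maker’s update can be written as:

```python
def price_update(p_t, x_f, x_c, x_n, kappa, xi, theta, n):
    """One step of the market maker's price adjustment:
    p_{t+1} = p_t + theta * n * [(1 - kappa - xi) x^f + kappa x^c + xi x^n].

    x_f, x_c, x_n : excess demands of a fundamentalist, a chartist and a noise trader
    kappa, xi     : fractions of chartists and of noise traders in the population
    theta, n      : speed of price adjustment and total number of traders
    """
    excess_demand = (1 - kappa - xi) * x_f + kappa * x_c + xi * x_n
    return p_t + theta * n * excess_demand
```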

Noise Traders

A noise trader is an investor who makes buy and sell decisions without the use of fundamental data. Such investors generally have poor timing, follow trends, and over-react to good and bad news. Let us consider the noise traders’ decision making. They are assumed to base decisions on noise, in the sense of a large number of small events. The behavior of a noise trader can be formalized as maximizing the quadratic utility function

W(x_t^n, y_t^n) = g(y_t^n + (p_t + ε_t)x_t^n) − k(x_t^n)² —– (1)

subject to the budget constraint

y_t^n + p_t x_t^n = 0 —– (2)

where x_t^n and y_t^n represent the noise trader’s excess demand for stock and for money at time t, respectively. The noise ε_t is assumed to be an i.i.d. random variable: in probability theory and statistics, a collection of random variables is independent and identically distributed (i.i.d.) if each random variable has the same probability distribution as the others and all are mutually independent. The excess demand function for stock is given as

x_t^n = γε_t,   γ = g/(2k) > 0 —– (3)

where γ denotes the strength of the reaction to noisy information. In short, noise traders try to buy stock if they believe the noise to be good news (ε_t > 0). Conversely, if they believe the noise to be bad news (ε_t < 0), they try to sell it.
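
To see where (3) comes from, substitute the budget constraint (2) into (1) and take the first-order condition; this short derivation uses only the definitions above:

```latex
% Substitute y_t^n = -p_t x_t^n from (2) into (1):
W = g\bigl(-p_t x_t^n + (p_t + \varepsilon_t)\,x_t^n\bigr) - k\,(x_t^n)^2
  = g\,\varepsilon_t\,x_t^n - k\,(x_t^n)^2
% First-order condition:
\frac{\partial W}{\partial x_t^n} = g\,\varepsilon_t - 2k\,x_t^n = 0
\quad\Longrightarrow\quad
x_t^n = \frac{g}{2k}\,\varepsilon_t = \gamma\,\varepsilon_t
```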

Chartists

Chartists are assumed to have the same utility function as the fundamentalists. Their behavior is formalized as maximizing the utility function

v = α(y_t^c + p_{t+1}^c x_t^c) + βx_t^c − (1 + βx_t^c) log(1 + βx_t^c) —– (1)

subject to the budget constraint

y_t^c + p_t x_t^c = 0 —– (2)

where x_t^c and y_t^c represent the chartist’s excess demand for stock and for money at period t, and p_{t+1}^c denotes the price expected by him. The chartist’s excess demand function for the stock is given by

x_t^c = (1/β)(exp(α(p_{t+1}^c − p_t)/β) − 1) —– (3)

His expectation formation is as follows: he is assumed to forecast the future price p_{t+1}^c using adaptive expectations,

p_{t+1}^c = p_t^c + μ(p_t − p_t^c) —– (4)

where the parameter μ (0 < μ < 1) is a so-called error-correction coefficient. Chartists’ decisions are based on observation of the past price data. This type of trader, who simply extrapolates patterns of past prices, is a common stylized example, currently in popular use in heterogeneous-agent models. It follows that chartists try to buy stock when they anticipate a rising price for the next period, and, in contrast, try to sell stock when they expect a falling price.
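
As a small, hedged sketch (the function names are ours, and equation (4) is used in the adaptive-expectations form reconstructed above), the chartist’s two rules can be coded as:

```python
import math

def chartist_excess_demand(p_expected, p_t, alpha, beta):
    """Equation (3): x_t^c = (1/beta) * (exp(alpha * (p_expected - p_t) / beta) - 1)."""
    return (math.exp(alpha * (p_expected - p_t) / beta) - 1.0) / beta

def chartist_expectation(p_expected_prev, p_t, mu):
    """Equation (4): adaptive expectations with error-correction coefficient mu."""
    return p_expected_prev + mu * (p_t - p_expected_prev)
```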

Stocks and Fundamentalists’ Behavior

Let us consider a simple stock market with the following characteristics. A large amount of stock is traded. In the market, there are three typical groups of traders with different strategies: fundamentalists, chartists, and noise traders. Traders can invest either in money or in stock. Since the model is designed to describe stock price movements over short periods, such as one day, the dividend from stock and the interest rate for the risk-free asset will be omitted for simplicity. Traders are myopic and bent on maximizing utility. Their utility depends on the price change they expect, and on their excess demand for stock rather than simply their demand. Their excess demand is derived from utility maximization.

Let Y_t^f be the amount of money that a fundamentalist holds at time t and X_t^f be the number of shares purchased by a fundamentalist at time t. Let p_t be the price per share at time t. The fundamentalist’s budget constraint is given by

Y_t^f + p_t X_t^f = Y_{t-1}^f + p_t X_{t-1}^f —– (1)

or equivalently

y_t^f + p_t x_t^f = 0 —– (2)

where

y_t^f = Y_t^f − Y_{t-1}^f

denotes the fundamentalist’s excess demand for money, and

x_t^f = X_t^f − X_{t-1}^f

his excess demand for stock. Suppose that the fundamentalist’s preferences are represented by the utility function,

u = α(y_t^f + p_{t+1}^f x_t^f) + βx_t^f − (1 + βx_t^f) log(1 + βx_t^f) —– (3)

where p_{t+1}^f denotes the fundamentalist’s expectation in period t about the price in the following period t + 1. The parameters α and β are assumed to be positive. Inserting (2) into (3), the fundamentalist’s utility maximization problem becomes:

max_{x_t^f} u = α(p_{t+1}^f − p_t)x_t^f + βx_t^f − (1 + βx_t^f) log(1 + βx_t^f) —– (4)

The utility function u satisfies the standard properties: u′(|x_t^f|) > 0 and u′′(|x_t^f|) < 0 for all |x_t^f| ≤ |x^{f*}|, where |x^{f*}| denotes the absolute value of the excess demand that produces maximum utility. Thus, the utility function is strictly concave. It depends on the price change expected by the fundamentalist (p_{t+1}^f − p_t) as well as on his excess demand for stock x_t^f. The first part, α(p_{t+1}^f − p_t)x_t^f, implies that a rise in the expected price change increases his utility. The remaining part expresses his attitude toward risk: even if the expected price change is positive, he does not want to invest his total wealth in the stock, and vice versa. In this sense, fundamentalists are risk averse. β is the parameter that sets the lower limit on excess demand: every excess demand for stock derived from the utility maximization is bounded below by −1/β. When the expected price change (p_{t+1}^f − p_t) is positive, the maximum value of the utility function is also positive, which means that fundamentalists try to buy stock. By analogy, when the expected price change (p_{t+1}^f − p_t) is negative, the maximum value of the utility function is negative, which means that they try to sell. The utility maximization problem (4) is solved for the fundamentalist’s excess demand,

x_t^f = (1/β)(exp(α(p_{t+1}^f − p_t)/β) − 1) —– (5)
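
For completeness, here is the first-order condition behind (5); the same computation also yields the chartist demand function above:

```latex
\frac{\partial u}{\partial x_t^f}
  = \alpha\,(p_{t+1}^f - p_t) + \beta
    - \beta\log\!\bigl(1 + \beta x_t^f\bigr) - \beta
  = \alpha\,(p_{t+1}^f - p_t) - \beta\log\!\bigl(1 + \beta x_t^f\bigr) = 0
\quad\Longrightarrow\quad
x_t^f = \frac{1}{\beta}\Bigl(\exp\!\bigl(\alpha\,(p_{t+1}^f - p_t)/\beta\bigr) - 1\Bigr)
```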

Excess demand increases as the expected price change (p_{t+1}^f − p_t) increases. It should be noticed that the optimal value of excess supply is limited to −1/β, while the optimal value of excess demand is not restricted. Since there is little loss of generality in fixing the parameter β at unity, below we will assume β to be constant and equal to 1. Let us now turn to the fundamentalist’s expectation formation. We assume that he forms his price expectation according to a simple adaptive scheme:

p_{t+1}^f = p_t + ν(p* − p_t) —– (6)

We see from Equation (6) that fundamentalists believe that the price moves towards the fundamental price p* by a factor ν. To sum up fundamentalists’ behavior: if the price p_t is below their expected price, they will try to buy stock, because they consider the stock to be undervalued. On the contrary, if the price is above the expected value, they will try to sell, because they consider the stock to be overvalued.
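
Pulling the pieces together, the following is a minimal simulation sketch of the whole model (price adjustment plus the three excess-demand functions). All parameter values, the Gaussian choice for the i.i.d. noise, and the use of one representative agent per group are illustrative assumptions, not taken from the text.

```python
import math
import random

# Illustrative parameters (assumptions, not from the text)
alpha, beta = 0.5, 1.0        # utility parameters (beta fixed at unity, as in the text)
mu, nu = 0.3, 0.2             # error-correction coefficients (chartists, fundamentalists)
gamma = 0.5                   # noise traders' reaction strength
kappa, xi = 0.3, 0.2          # fractions of chartists and of noise traders
theta, n = 0.01, 100          # speed of adjustment, total number of traders
p_star = 10.0                 # fundamental price
T = 200                       # number of trading periods

p = 10.5                      # price announced by the market maker
p_exp_c = p                   # chartist's expected price

for t in range(T):
    # Expectation formation
    p_exp_f = p + nu * (p_star - p)           # eq. (6): fundamentalists
    p_exp_c = p_exp_c + mu * (p - p_exp_c)    # eq. (4): chartists, adaptive expectations

    # Excess demands
    x_f = (math.exp(alpha * (p_exp_f - p) / beta) - 1.0) / beta   # eq. (5): fundamentalists
    x_c = (math.exp(alpha * (p_exp_c - p) / beta) - 1.0) / beta   # chartists' demand
    x_n = gamma * random.gauss(0.0, 1.0)                          # noise traders' demand

    # Market maker's price adjustment
    p = p + theta * n * ((1 - kappa - xi) * x_f + kappa * x_c + xi * x_n)
    print(t, round(p, 4))
```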

Typicality. Cosmological Constant and Boltzmann Brains. Note Quote.

In a multiverse we would expect there to be relatively many universe domains with large values of the cosmological constant, but none of these allow gravitationally bound structures (such as our galaxy) to occur, so the likelihood of observing ourselves to be in one is essentially zero.

The cosmological constant has negative pressure but positive energy. The negative pressure ensures that as the volume expands, matter loses energy (photons get redshifted, particles slow down); this loss of energy by matter causes the expansion to slow down – but the increase in energy of the increased volume is more important. The increase of energy associated with the extra space the cosmological constant fills has to be balanced by a decrease in the gravitational energy of the expansion – and this expansion energy is negative, allowing the universe to carry on expanding. If you put all the terms on one side of the Friedmann equation – which is just an energy-balancing equation – (with the other side equal to zero) you will see that the expansion energy is negative, whereas the cosmological constant and matter (including dark matter) all have positive energy.
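
One way to make the “energy balancing” reading explicit is to start from the standard form of the Friedmann equation and bring every term to one side; the sign conventions below are a textbook choice offered as an illustration, not a reproduction of the quoted argument:

```latex
% Friedmann equation (standard form):
\left(\frac{\dot a}{a}\right)^{2}
  = \frac{8\pi G}{3}\,\rho + \frac{\Lambda c^{2}}{3} - \frac{k c^{2}}{a^{2}}
% Multiply by a^2/2 and move every term to one side (k = 0 for a spatially flat universe):
\underbrace{\frac{4\pi G}{3}\,\rho\,a^{2}}_{\text{matter: positive}}
  + \underbrace{\frac{\Lambda c^{2}}{6}\,a^{2}}_{\text{cosmological constant: positive}}
  - \underbrace{\frac{\dot a^{2}}{2}}_{\text{expansion energy: negative}}
  = \frac{k c^{2}}{2} = 0
```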

However, as the cosmological constant is decreased, we eventually reach a transition point where it becomes just small enough for gravitational structures to occur. Reduce it a bit further still, and you now get universes resembling ours. Given the increased likelihood of observing such a universe, the chances of our universe being one of these will be near its peak. Theoretical physicist Steven Weinberg used this reasoning to correctly predict the order of magnitude of the cosmological constant well before the acceleration of our universe was even measured.

Unfortunately this argument runs into conceptually murky water. The multiverse is infinite and it is not clear whether we can calculate the odds for anything to happen in an infinite volume of space-time. All we have is the single case of our apparently small but positive value of the cosmological constant, so it’s hard to see how we could ever test whether or not Weinberg’s prediction was a lucky coincidence. Such questions concerning infinity, and what one can reasonably infer from a single data point, are just the tip of the philosophical iceberg that cosmologists face.

Another conundrum is where the laws of physics come from. Even if these laws vary across the multiverse, there must be, so it seems, meta-laws that dictate the manner in which they are distributed. How can we, inhabitants on a planet in a solar system in a galaxy, meaningfully debate the origin of the laws of physics as well as the origins of something, the very universe, that we are part of? What about the parts of space-time we can never see? These regions could infinitely outnumber our visible patch. The laws of physics could differ there, for all we know.

We cannot settle any of these questions by experiment, and this is where philosophers enter the debate. Central to this is the so-called observational-selection effect, whereby an observation is influenced by the observer’s “telescope”, whatever form that may take. But what exactly is it to be an observer, or more specifically a “typical” observer, in a system where every possible sort of observer will come about infinitely many times? The same basic question, centred on the role of observers, is as fundamental to the science of the indefinitely large (cosmology) as it is to that of the infinitesimally small (quantum theory).

This key issue of typicality also confronted Austrian physicist and philosopher Ludwig Boltzmann. In 1897 he posited an infinite space-time as a means to explain how extraordinarily well-ordered the universe is compared with the state of high entropy (or disorder) predicted by thermodynamics. Given such an arena, where every conceivable combination of particle position and momenta would exist somewhere, he suggested that the orderliness around us might be that of an incredibly rare fluctuation within an infinite space-time.

But Boltzmann’s reasoning was undermined by another, more absurd, conclusion. Rare fluctuations could also give rise to single momentary brains – self-aware entities that spontaneously arise through random collisions of particles. Such “Boltzmann brains”, the argument goes, are far more likely to arise than the entire visible universe or even the solar system. Ludwig Boltzmann reasoned that brains and other complex, orderly objects on Earth were the result of random fluctuations. But why, then, do we see billions of other complex, orderly objects all around us? Why aren’t we like the lone being in the sea of nonsense? Boltzmann theorized that if random fluctuations create brains like ours, there should be Boltzmann brains floating around in space or sitting alone on uninhabited planets untold light-years away. And in fact, those Boltzmann brains should be incredibly more common than the herds of complex, orderly objects we see here on Earth.

So we have another paradox. If the only requirement of consciousness is a brain like the one in your head, why aren’t you a Boltzmann brain? If you were assigned to experience a random consciousness, you should almost certainly find yourself alone in the depths of space rather than surrounded by similar consciousnesses. The easy answers all seem to require a touch of magic. Perhaps consciousness doesn’t arise naturally from a brain like yours but requires some metaphysical endowment. Or maybe we’re not random fluctuations in a thermodynamic soup, and we were put here by an intelligent being. An infinity of space would therefore contain an infinitude of such disembodied brains, which would then be the “typical observer”, not us.

Or, starting at the very beginning: entropy must always stay the same or increase over time, according to the second law of thermodynamics. However, Boltzmann (the Ludwig one, not the brain one) formulated a version of the law of entropy that was statistical. What this means is that while entropy almost always increases or stays the same, over billions of billions of billions of billions of billions… you get the idea… years, entropy might go down a bit. This is called a fluctuation. Backing up a tad: if entropy always increases or stays the same, what is surprising for cosmologists is that the universe started in such a low-entropy state. So to (try to) explain this, Boltzmann said, hey, what if there’s a bigger universe that our universe is in, and it is in a state of the most possible entropy, or thermal equilibrium. Then, let’s say it exists for a long, long time – those billions we talked about earlier. There’ll be statistical fluctuations, right? And those statistical fluctuations might be represented by the birth of universes. Ahem, our universe is one of them.

So now we get to the brains. Our universe must be a HUGE statistical fluctuation compared with other fluctuations. I mean, think about it: if it is so nuts for entropy to decrease by just a little tiny bit, how nuts would it be for it to decrease enough for the birth of a universe to happen!? So the question is, why aren’t we just brains? That is, why aren’t we a statistical fluctuation just big enough for intelligent life to develop, look around, see it exists, and melt back into goop? And it is this goopy, not-long-existing intelligent life that is a Boltzmann brain. This is a huge challenge to the Boltzmann (Ludwig) theory.

Can this bizarre vision possibly be real, or does it indicate something fundamentally wrong with our notion of “typicality”? Or is our notion of “the observer” flawed – can thermodynamic fluctuations that give rise to Boltzmann brains really suffice? Or could a futuristic supercomputer even play the Matrix-like role of a multitude of observers?

Time-Evolution in Quantum Mechanics is a “Flow” in the (Abstract) Space of Automorphisms of the Algebra of Observables

In quantum mechanics, time is not a geometrical flow. Time-evolution is characterized as a transformation that preserves the algebraic relations between physical observables. If at a time t = 0 an observable – say the angular momentum L(0) – is defined as a certain combination (product and sum) of some other observables – for instance positions X(0), Y(0) and momenta P_X(0), P_Y(0) – that is to say

L(0) = X(0)P_Y(0) − Y(0)P_X(0) —– (1)

then one asks that the same relation be satisfied at any other instant t (preceding or following t = 0),

L(t) = X(t)P_Y(t) − Y(t)P_X(t) —– (2)

The quantum time-evolution is thus a map from an observable at time 0 to an observable at time t that preserves the algebraic form of the relation between observables. Technically speaking, one talks of an automorphism of the algebra of observables.

At first sight, this time-evolution has nothing to do with a flow. However there is still “something flowing”, although in an abstract mathematical space. Indeed, to any value of t (here time is an absolute parameter, as in Newtonian mechanics) is associated an automorphism α_t that allows one to deduce the observables at time t from the knowledge of the observables at time 0; the family of maps {α_t}, indexed by time, forms a one-parameter group of automorphisms. Mathematically, one writes

L(t) = α_t(L(0)), X(t) = α_t(X(0)) —– (3)

and so on for the other observables. The term “group” is important, for it precisely explains why it still makes sense to talk about a flow. Group refers to the property of additivity of the evolution: going from t to t′ is equivalent to going from t to t1, then from t1 to t′. Considering small variations of time (t′ − t)/n, where n is an integer, in the limit of large n one finds that going from t to t′ consists in flowing through n small variations, exactly as the geometric flow consists in going from a point x to a point y through a great number of infinitesimal variations (y − x)/n. That is why the time-evolution in quantum mechanics can be seen as a “flow” in the (abstract) space of automorphisms of the algebra of observables. To summarize, in quantum mechanics time is still “something that flows”, although in a less intuitive manner than in relativity. The idea of a “flow of time” makes sense, as a flow in an abstract space rather than a geometrical flow.
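
As a finite-dimensional toy illustration (our own construction, not part of the text), one can take generic Hermitian matrices standing in for X, Y, P_X, P_Y, define the automorphism α_t(A) = U(t)† A U(t) with U(t) = exp(−iHt), and check numerically that the algebraic relation (1) is preserved and that α_t ∘ α_s = α_{t+s}:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
dim = 6

def random_hermitian(d):
    """A generic Hermitian matrix standing in for an observable."""
    a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return (a + a.conj().T) / 2

X, Y, PX, PY = (random_hermitian(dim) for _ in range(4))
H = random_hermitian(dim)                 # generator of the time evolution

def alpha(t, A):
    """Heisenberg-picture automorphism: alpha_t(A) = U(t)^dagger A U(t)."""
    U = expm(-1j * H * t)
    return U.conj().T @ A @ U

L0 = X @ PY - Y @ PX                      # relation (1) at t = 0

t, s = 0.7, 0.4
# Relation (2): the same algebraic combination of the evolved observables
lhs = alpha(t, X) @ alpha(t, PY) - alpha(t, Y) @ alpha(t, PX)
print(np.allclose(lhs, alpha(t, L0)))                          # True: the relation is preserved

# Group ("flow") property: alpha_t composed with alpha_s equals alpha_{t+s}
print(np.allclose(alpha(t, alpha(s, X)), alpha(t + s, X)))     # True
```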

The “approximandum” will not be the General Theory of Relativity, but only its vacuum sector of spacetimes of topology Σ × R; or, quantum gravity as a fecund ground for the metaphysician. Note Quote.

In string theory as well as in Loop Quantum Gravity, and in other approaches to quantum gravity, indications are coalescing that not only time, but also space is no longer a fundamental entity, but merely an “emergent” phenomenon that arises from the basic physics. In the language of physics, spacetime theories such as GTR are “effective” theories and spacetime itself is “emergent”. However, unlike the notion that temperature is emergent, the idea that the universe is not in space and time arguably shocks our very idea of physical existence as profoundly as any scientific revolution ever did. It is not even clear whether we can coherently formulate a physical theory in the absence of space and time. Space disappears in LQG insofar as the physical structures it describes bear little, if any, resemblance to the spatial geometries found in GTR. These structures are discrete and not continuous as classical spacetimes are. They represent the fundamental constitution of our universe and correspond, somehow, to chunks of physical space, thus giving rise – in a way yet to be elucidated – to the spatial geometries we find in GTR. The conceptual problem of coming to grasp how to do physics in the absence of an underlying spatio-temporal stage on which the physics can play out is closely tied to the technical difficulty of mathematically relating LQG back to GTR. Physicists have yet to fully understand how classical spacetimes emerge from the fundamental non-spatio-temporal structure of LQG, and philosophers are only just starting to study its conceptual foundations and the implications of quantum gravity in general and of the disappearance of space-time in particular. Even though the mathematical heavy-lifting will fall to the physicists, there is a role for philosophers here in exploring and mapping the landscape of conceptual possibilities, bringing to bear the immense philosophical literature on emergence and reduction, which offers a variegated conceptual toolbox.

To understand how classical spacetime re-emerges from the fundamental quantum structure involves what the physicists call “taking the classical limit.” In a sense, relating the spin network states of LQG back to the spacetimes of GTR is a reversal of the quantization procedure employed to formulate the quantum theory in the first place. Thus, while the quantization can be thought of as the “context of discovery,” finding the classical limit that relates the quantum theory of gravity to GTR should be considered the “context of (partial) justification.” It should be emphasized that understanding how (classical) spacetime re-emerges by retrieving GTR as a low-energy limit of a more fundamental theory is not only important to “save the appearances” and to accommodate common sense – although it matters in these respects as well – but must also be considered a methodologically central part of the enterprise of quantum gravity. If it cannot be shown that GTR is indeed related to LQG in some mathematically well-understood way as the approximately correct theory when energies are sufficiently low or, equivalently, when scales are sufficiently large, then LQG cannot explain why GTR has been empirically as successful as it has been. But a successful theory can only be legitimately supplanted if the successor theory not only makes novel predictions or offers deeper explanations, but is also able to replicate the empirical success of the theory it seeks to replace.

Ultimately, of course, the full analysis will depend on the full articulation of the theory. But focusing on the kinematical level, and thus avoiding having to fully deal with the problem of time, let us apply the concepts to the problem of the emergence of full spacetime, rather than just time. Chris Isham and Butterfield identify three types of reductive relations between theories: definitional extension, supervenience, and emergence, of which only the last has any chance of working in the case at hand. For Butterfield and Isham, a theory T1 emerges from another theory T2 just in case there exists either a limiting or an approximating procedure to relate the two theories (or a combination of the two). A limiting procedure consists in taking the mathematical limit of some physically relevant parameters of the underlying theory, in general in a particular order, in order to arrive at the emergent theory. A limiting procedure won’t work, at least not by itself, due to technical problems concerning the maximal loop density as well as to what essentially amounts to the measurement problem familiar from non-relativistic quantum physics.

An approximating procedure designates the process of either neglecting some physical magnitudes, and justifying such neglect, or selecting a proper subset of states in the state space of the approximating theory, and justifying such selection, or both, in order to arrive at a theory whose values of physical quantities remain sufficiently close to those of the theory to be approximated. Note that the “approximandum,” the theory to be approximated, in our case will not be GTR, but only its vacuum sector of spacetimes of topology Σ × R. One of the central questions will be how the selection of states will be justified. Such a justification would be had if we could identify a mechanism that “drives the system” to the right kind of states. Any attempt to find such a mechanism will foist a host of issues known from the traditional problem of relating quantum to classical mechanics upon us. A candidate mechanism, here and there, is some form of “decoherence,” even though that standardly involves an “environment” with which the system at stake can interact. But the system of interest in our case is, of course, the universe, which makes it hard to see how there could be any outside environment with which the system could interact. The challenge then is to conceptualize decoherence in a way that circumvents this problem.

Once it is understood how classical space and time disappear in canonical quantum gravity and how they might be seen to re-emerge from the fundamental, non-spatiotemporal structure, the way in which classicality emerges from the quantum theory of gravity does not radically differ from the way it is believed to arise in ordinary quantum mechanics. The project of pursuing such an understanding is of relevance and interest for at least two reasons. First, important foundational questions concerning the interpretation of, and the relation between, theories are addressed, which can lead to conceptual clarification of the foundations of physics. Such conceptual progress may well prove to be the decisive stepping stone to a full quantum theory of gravity. Second, quantum gravity is a fertile ground for any metaphysician as it will inevitably yield implications for specifically philosophical, and particularly metaphysical, issues concerning the nature of space and time.