Stocks and Fundamentalists’ Behavior


Let us consider a simple stock market with the following characteristics. A large amount of stock is traded. In the market there are three typical groups of traders with different strategies: fundamentalists, chartists, and noise traders. Traders can invest either in money or in stock. Since the model is designed to describe stock price movements over short periods, such as one day, the dividend from stock and the interest rate on the risk-free asset are omitted for simplicity. Traders are myopic and aim to maximize utility. Their utility depends on the price change they expect and on their excess demand for stock rather than simply their demand; this excess demand is derived from utility maximization.

Let Y_t^f be the amount of money that a fundamentalist holds at time t and X_t^f the number of shares he holds at time t. Let p_t be the price per share at time t. The fundamentalist’s budget constraint is given by

Y_t^f + p_t X_t^f = Y_{t-1}^f + p_t X_{t-1}^f —– (1)

or equivalently

y_t^f + p_t x_t^f = 0 —– (2)

where

y_t^f = Y_t^f − Y_{t-1}^f

denotes the fundamentalist’s excess demand for money, and

x_t^f = X_t^f − X_{t-1}^f

his excess demand for stock. Suppose that the fundamentalist’s preferences are represented by the utility function,

u = α(y_t^f + p_{t+1}^f x_t^f) + βx_t^f − (1 + βx_t^f) log(1 + βx_t^f) —– (3)

where p_{t+1}^f denotes the fundamentalist’s expectation in period t about the price in the following period t + 1. The parameters α and β are assumed to be positive. Inserting (2) into (3), the fundamentalist’s utility maximization problem becomes:

max_{x_t^f} u = α(p_{t+1}^f − p_t)x_t^f + βx_t^f − (1 + βx_t^f) log(1 + βx_t^f) —– (4)

The utility function u satisfies the standard properties: u′(|x_t^f|) > 0 and u″(|x_t^f|) < 0 for all |x_t^f| ≤ |x^{f*}|, where |x^{f*}| denotes the absolute value of x^f producing the maximum utility. Thus the utility function is strictly concave. It depends on the price change expected by the fundamentalist, (p_{t+1}^f − p_t), as well as on his excess demand for stock x_t^f. The first term, α(p_{t+1}^f − p_t)x_t^f, implies that a rise in the expected price change increases his utility. The remaining part expresses his attitude toward risk: even if the expected price change is positive, he does not want to invest his total wealth in the stock, and vice versa. In this sense, fundamentalists are risk averse. β is the parameter that sets the lower bound on excess demand: every excess demand for stock derived from the utility maximization is bounded below by −1/β. When the expected price change (p_{t+1}^f − p_t) is positive, the maximum value of the utility function is also positive, which means that fundamentalists try to buy stock. By analogy, when the expected price change is negative, the maximum value of the utility function is negative, which means that they try to sell. The utility maximization problem (4) is solved for the fundamentalist’s excess demand,

x_t^f = (1/β)(exp(α(p_{t+1}^f − p_t)/β) − 1) —– (5)
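
To see where (5) comes from, differentiate (4) with respect to x_t^f and set the derivative to zero (a routine first-order condition; no new symbols are introduced):

```latex
\frac{\partial u}{\partial x_t^f}
  = \alpha\,(p_{t+1}^f - p_t) - \beta \log\bigl(1 + \beta x_t^f\bigr) = 0
\quad\Longrightarrow\quad
x_t^f = \frac{1}{\beta}\Bigl(\exp\Bigl(\tfrac{\alpha}{\beta}\,(p_{t+1}^f - p_t)\Bigr) - 1\Bigr)
```

The second derivative, −β²/(1 + βx_t^f), is negative on the admissible range x_t^f > −1/β, so the stationary point is indeed the maximum and the utility is strictly concave there.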

Excess demand increases as the expected price change (p_{t+1}^f − p_t) increases. It should be noticed that the optimal excess supply is bounded (x_t^f can never fall below −1/β), while the optimal excess demand is not restricted. Since fixing the parameter β at unity entails little loss of generality, we will assume below that β is constant and equal to 1. Let us now turn to the fundamentalist’s expectation formation. We assume that he forms his price expectation according to a simple adaptive scheme:

p_{t+1}^f = p_t + ν(p* − p_t) —– (6)

We see from Equation (6) that fundamentalists believe the price moves towards the fundamental price p* by a factor ν. To sum up fundamentalists’ behavior: if the price p_t is below their expected price, they will try to buy stock, because they consider the stock to be undervalued; conversely, if the price is above the expected value, they will try to sell, because they consider the stock to be overvalued.
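
A minimal numerical sketch of equations (5) and (6) follows; the parameter values and prices are illustrative assumptions, not taken from the text, and β is kept explicit even though it is set to 1 above.

```python
import numpy as np

def fundamentalist_excess_demand(p_t, p_star, alpha=1.0, nu=0.5, beta=1.0):
    """Excess demand of a fundamentalist, equations (5) and (6).

    p_t    : current price
    p_star : fundamental price p*
    alpha, nu, beta : model parameters (the text fixes beta = 1)
    """
    # Adaptive expectation (6): expected price moves toward p* by factor nu
    p_expected = p_t + nu * (p_star - p_t)
    # Optimal excess demand (5): bounded below by -1/beta, unbounded above
    return (1.0 / beta) * (np.exp(alpha * (p_expected - p_t) / beta) - 1.0)

# Price below the fundamental value -> positive excess demand (buy)
print(fundamentalist_excess_demand(p_t=90.0, p_star=100.0))   # > 0
# Price above the fundamental value -> negative excess demand (sell), never below -1/beta
print(fundamentalist_excess_demand(p_t=110.0, p_star=100.0))  # in (-1, 0)
```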


Typicality. Cosmological Constant and Boltzmann Brains. Note Quote.


In a multiverse we would expect there to be relatively many universe domains with large values of the cosmological constant, but none of these allow gravitationally bound structures (such as our galaxy) to occur, so the likelihood of observing ourselves to be in one is essentially zero.

The cosmological constant has negative pressure but positive energy. The negative pressure ensures that as the volume expands, matter loses energy (photons get redshifted, particles slow down); this loss of energy by matter causes the expansion to slow down, but the increase in energy of the increased volume is more important. The increase of energy associated with the extra space the cosmological constant fills has to be balanced by a decrease in the gravitational energy of the expansion, and this expansion energy is negative, allowing the universe to carry on expanding. If you put all the terms on one side of the Friedmann equation, which is just an energy-balancing equation (with the other side equal to zero), you will see that the expansion energy is negative, whereas the cosmological constant and matter (including dark matter) all have positive energy.
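
Written out, this reading looks roughly as follows; this is a sketch of the standard Friedmann equation in the spatially flat case, with symbols not defined in the text (a the scale factor, ρ the matter density, Λ the cosmological constant, G Newton’s constant, c the speed of light):

```latex
% Friedmann equation with all terms moved to one side (flat case, k = 0):
\underbrace{\frac{8\pi G}{3}\,\rho_{\text{matter}}}_{>\,0}
\;+\;
\underbrace{\frac{\Lambda c^{2}}{3}}_{>\,0}
\;-\;
\underbrace{\left(\frac{\dot a}{a}\right)^{2}}_{\text{expansion term, enters with a minus sign}}
\;=\; 0
```

The matter and cosmological-constant terms enter with positive sign, while the expansion term enters negatively, which is the balancing described above.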


However, as the cosmological constant is decreased, we eventually reach a transition point where it becomes just small enough for gravitational structures to occur. Reduce it a bit further still, and you now get universes resembling ours. Given the increased likelihood of observing such a universe, the chances of our universe being one of these will be near its peak. Theoretical physicist Steven Weinberg used this reasoning to correctly predict the order of magnitude of the cosmological constant well before the acceleration of our universe was even measured.

Unfortunately this argument runs into conceptually murky water. The multiverse is infinite and it is not clear whether we can calculate the odds for anything to happen in an infinite volume of space-time. All we have is the single case of our apparently small but positive value of the cosmological constant, so it’s hard to see how we could ever test whether or not Weinberg’s prediction was a lucky coincidence. Such questions concerning infinity, and what one can reasonably infer from a single data point, are just the tip of the philosophical iceberg that cosmologists face.

Another conundrum is where the laws of physics come from. Even if these laws vary across the multiverse, there must be, so it seems, meta-laws that dictate the manner in which they are distributed. How can we, inhabitants on a planet in a solar system in a galaxy, meaningfully debate the origin of the laws of physics as well as the origins of something, the very universe, that we are part of? What about the parts of space-time we can never see? These regions could infinitely outnumber our visible patch. The laws of physics could differ there, for all we know.

We cannot settle any of these questions by experiment, and this is where philosophers enter the debate. Central to this is the so-called observational-selection effect, whereby an observation is influenced by the observer’s “telescope”, whatever form that may take. But what exactly is it to be an observer, or more specifically a “typical” observer, in a system where every possible sort of observer will come about infinitely many times? The same basic question, centred on the role of observers, is as fundamental to the science of the indefinitely large (cosmology) as it is to that of the infinitesimally small (quantum theory).

This key issue of typicality also confronted Austrian physicist and philosopher Ludwig Boltzmann. In 1897 he posited an infinite space-time as a means to explain how extraordinarily well-ordered the universe is compared with the state of high entropy (or disorder) predicted by thermodynamics. Given such an arena, where every conceivable combination of particle position and momenta would exist somewhere, he suggested that the orderliness around us might be that of an incredibly rare fluctuation within an infinite space-time.

But Boltzmann’s reasoning was undermined by another, more absurd, conclusion. Rare fluctuations could also give rise to single momentary brains: self-aware entities that spontaneously arise through random collisions of particles. Such “Boltzmann brains”, the argument goes, are far more likely to arise than the entire visible universe or even the solar system. Boltzmann reasoned that brains and other complex, orderly objects on Earth were the result of random fluctuations. But why, then, do we see billions of other complex, orderly objects all around us? Why aren’t we like the lone being in the sea of nonsense? Boltzmann theorized that if random fluctuations create brains like ours, there should be Boltzmann brains floating around in space or sitting alone on uninhabited planets untold light-years away. And in fact, those Boltzmann brains should be vastly more common than the herds of complex, orderly objects we see here on Earth.

So we have another paradox. If the only requirement of consciousness is a brain like the one in your head, why aren’t you a Boltzmann brain? If you were assigned to experience a random consciousness, you should almost certainly find yourself alone in the depths of space rather than surrounded by similar consciousnesses. The easy answers all seem to require a touch of magic: perhaps consciousness doesn’t arise naturally from a brain like yours but requires some metaphysical endowment, or maybe we’re not random fluctuations in a thermodynamic soup and were put here by an intelligent being. An infinity of space would therefore contain an infinitude of such disembodied brains, which would then be the “typical observer”, not us.

Or, starting at the very beginning: entropy must always stay the same or increase over time, according to the second law of thermodynamics. However, Boltzmann (the Ludwig one, not the brain one) formulated a statistical version of the law of entropy. What this means is that while entropy almost always increases or stays the same, over billions of billions of billions of billions of billions (you get the idea) of years, entropy might occasionally go down a bit. This is called a fluctuation. So, backing up a tad: if entropy always increases or stays the same, what is surprising for cosmologists is that the universe started in such a low-entropy state. To try to explain this, Boltzmann proposed, in effect: what if there is a bigger universe that our universe sits in, and it is in the state of the most possible entropy, or thermal equilibrium? Let it exist for a long, long time, those billions of years we talked about earlier. There will be statistical fluctuations, and those statistical fluctuations might show up as the birth of universes; our universe would be one of them.

So now we get to the brains. Our universe must be a huge statistical fluctuation compared with other fluctuations. Think about it: if it is already so improbable for entropy to decrease by just a tiny bit, how improbable must it be for it to decrease enough for the birth of a whole universe? So the question is, why aren’t we just brains? That is, why aren’t we a statistical fluctuation just big enough for intelligent life to develop, look around, see that it exists, and melt back into goop? It is this goopy, not-long-existing intelligent life that is a Boltzmann brain. This is a huge challenge to Boltzmann’s (Ludwig’s) theory.
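
The quantitative core of this challenge is Boltzmann’s own fluctuation estimate, added here as a sketch rather than quoted from the text: the probability of a spontaneous fluctuation that lowers the entropy of an equilibrium system by ΔS is exponentially suppressed,

```latex
P(\Delta S) \;\sim\; e^{-\Delta S / k_{B}}
```

Because the entropy deficit of a lone brain is incomparably smaller than that of an entire low-entropy universe, isolated Boltzmann brains should be exponentially more common than observers embedded in a world like ours, which is exactly the paradox posed above.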

Can this bizarre vision possibly be real, or does it indicate something fundamentally wrong with our notion of “typicality”? Or is our notion of “the observer” flawed: can thermodynamic fluctuations that give rise to Boltzmann brains really suffice? Or could a futuristic supercomputer even play the Matrix-like role of a multitude of observers?

Time-Evolution in Quantum Mechanics is a “Flow” in the (Abstract) Space of Automorphisms of the Algebra of Observables


In quantum mechanics, time is not a geometrical flow. Time-evolution is characterized as a transformation that preserves the algebraic relations between physical observables. If at time t = 0 an observable – say the angular momentum L(0) – is defined as a certain combination (product and sum) of some other observables – for instance positions X(0), Y(0) and momenta P_X(0), P_Y(0) – that is to say

L(0) = X(0)P_Y(0) − Y(0)P_X(0) —– (1)

then one asks that the same relation be satisfied at any other instant t (preceding or following t = 0),

L(t) = X(t)P_Y(t) − Y(t)P_X(t) —– (2)

The quantum time-evolution is thus a map from an observable at time 0 to an observable at time t that preserves the algebraic form of the relation between observables. Technically speaking, one talks of an automorphism of the algebra of observables.
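
This preservation of algebraic relations can be illustrated numerically. The sketch below is only an illustration under assumed ingredients: finite-dimensional Hermitian matrices stand in for the observables, a randomly chosen Hamiltonian H generates the evolution, and the automorphism is taken in the usual Heisenberg-picture form α_t(A) = U(t)† A U(t). None of these specific choices come from the text.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
dim = 4

def random_hermitian(n):
    """A random Hermitian matrix, used as a stand-in for an observable."""
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / 2

# Stand-in observables at t = 0 and a Hamiltonian generating the evolution
X0, Y0, PX0, PY0 = (random_hermitian(dim) for _ in range(4))
H = random_hermitian(dim)

def alpha(t, A):
    """Heisenberg-picture automorphism: alpha_t(A) = U(t)^dagger A U(t)."""
    U = expm(-1j * H * t)
    return U.conj().T @ A @ U

# Angular momentum defined by the algebraic relation (1) at t = 0
L0 = X0 @ PY0 - Y0 @ PX0

t = 1.3
# Evolving the combination gives the same result as combining the evolved pieces,
# i.e. alpha_t preserves the algebraic relation (2)
lhs = alpha(t, L0)
rhs = alpha(t, X0) @ alpha(t, PY0) - alpha(t, Y0) @ alpha(t, PX0)
print(np.allclose(lhs, rhs))  # True
```

Because conjugation by a unitary respects both products and sums, the evolved angular momentum equals the same combination of the evolved positions and momenta.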

At first sight, this time-evolution has nothing to do with a flow. However, there is still “something flowing”, although in an abstract mathematical space. Indeed, to any value of t (here time is an absolute parameter, as in Newtonian mechanics) is associated an automorphism α_t that allows one to deduce the observables at time t from the knowledge of the observables at time 0; the family {α_t} of these automorphisms forms a one-parameter group. Mathematically, one writes

L(t) = α_t(L(0)), X(t) = α_t(X(0)) —– (3)

and so on for the other observables. The term “group” is important because it explains precisely why it still makes sense to talk about a flow. “Group” refers to the additivity of the evolution: going from t to t′ is equivalent to going first from t to t_1 and then from t_1 to t′. Considering small variations of time (t′ − t)/n, where n is an integer, one finds in the limit of large n that going from t to t′ consists in flowing through n small variations, exactly as the geometric flow consists in going from a point x to a point y through a great number of infinitesimal variations (y − x)/n. That is why the time-evolution in quantum mechanics can be seen as a “flow” in the (abstract) space of automorphisms of the algebra of observables. To summarize, in quantum mechanics time is still “something that flows”, although in a less intuitive manner than in relativity. The idea of a “flow of time” still makes sense, as a flow in an abstract space rather than a geometrical flow.
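
Continuing in the same spirit, the additivity property and the “flow through many small steps” picture can also be checked numerically; again this is a sketch with an assumed Hamiltonian and observable, not part of the text’s formalism.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
dim = 4

def random_hermitian(n):
    """A random Hermitian matrix, used as a stand-in for an observable."""
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / 2

H = random_hermitian(dim)    # Hamiltonian generating the evolution
A0 = random_hermitian(dim)   # an arbitrary observable at t = 0

def alpha(t, A):
    """Heisenberg-picture automorphism alpha_t(A) = U(t)^dagger A U(t)."""
    U = expm(-1j * H * t)
    return U.conj().T @ A @ U

t, s = 1.3, 0.7

# Additivity (one-parameter group): alpha_s composed with alpha_t equals alpha_{t+s}
print(np.allclose(alpha(s, alpha(t, A0)), alpha(t + s, A0)))  # True

# "Flow" picture: n small steps of size t/n compose to the single evolution over t
n, A = 1000, A0
for _ in range(n):
    A = alpha(t / n, A)
print(np.allclose(A, alpha(t, A0)))  # True
```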