Initial Writing Systems. Thought of the Day 84.0


The discovery of the Sumerian civilization marks the culmination of the systematic exploration of the subsoil of the Near East, which began in the nineteenth century. By the middle of that century it had become possible to spell out and read the clay documents covered with strange cuneiform, or wedge-shaped, signs that had long been found in the territory of Iraq. This breakthrough brought about a proliferation of excavations in ancient Mesopotamia, just as the decipherment of the hieroglyphics had done in the Valley of the Kings. Since these excavations went deep, they brought to light vestiges arranged in parallel layers.

After digging through layers containing Arab, Greek and Persian remains, the excavations reached testimonies dating from the middle of the first millennium B.C. The exploration thus arrived at the layer that stored the vast majority of the cuneiform documents. There were discovered the palaces, statues, treasures and weapons of the great Assyrian kings, who are mentioned in the Old Testament because of their conquests. In this way Assyriology was born as a scientific discipline, out of the cuneiform texts and the archeology of Mesopotamia.

Under that layer further layers were discovered, which led to the conclusion that the apogee of the bellicose Assyrians, who came from the north, had been preceded by about a millennium by a people possessing a higher culture. This people of southern Mesopotamia were the Babylonians, whose code of laws (that of Hammurabi) symbolized their great cultural development and political equilibrium.

It was found that the aforesaid code, along with other documents of that time, was written in essentially the same script and language as the Assyrian annals and tablets, but with differences which showed that the Assyrian and Babylonian dialects derived from a single language, known as Akkadian. Akkadian is related to Arabic, Aramaic and Hebrew, and is classified as a Semitic language. The conclusion, then, was that the empires of Babylon (in the early second millennium B.C.) and Nineveh (in the early first millennium B.C.) were of Semitic origin.

At the time those archeological excavations were being made, the cuneiform writing was still an enigma. This writing is composed of a large number of signs or characters (some 300 at its height), consisting of wedge-like strokes engraved on raw clay.

Initially, these linear drawings stood for concrete, specific objects. At a later stage, each sign of this writing could be read in a text in two different ways:

  1. As the name of the object originally represented by that character.
  2. As the mark of a sound (a syllable), though never an elementary, irreducible sound like those of the Latin alphabet.

Therefore, cuneiform writing is ambivalent (both ideographic and phonetic). Thus the drawing of a spike (e.g. a spike of wheat) within a cuneiform text can be read, according to context, either as the name “grain” or as the syllable “she”. In the same way, the engraving of a bird was interpreted ideographically as “winged creature”, or else phonetically as the syllable “hu”.

The cuneiform signs were initially just reproductions of objects. With time, their users noticed that by means of so rudimentary a procedure only a limited part of what articulate language can express could be written down: concrete, typical objects could be depicted, but not actions or abstractions. The solution was therefore to dissociate, within each character, its reference to the object it reproduced from its pronunciation (its phonetic value). In this way the creators of the writing could set down everything the spoken language expressed.

For example, the abstract word for “vision” in the Akkadian language is “shehu”, which could be represented by the drawing of a spike of grain followed by that of a bird (she + hu), although in this case neither character has anything to do with grain or birds. Nevertheless, in a different part of the text those two characters might be translated directly as “cereal” and “bird”. This makes the decipherment of the cuneiform signs extremely difficult.

Because the Akkadian (Semitic) names of the objects indicated by the cuneiform signs never corresponded to the phonetic values of those characters, it was inferred that the people who invented cuneiform writing could not have been Semites. The existence of another, more ancient civilization prior to the Semitic Akkadians was then presumed.

The archeological excavations yielded new cuneiform inscriptions which, unlike the Babylonian and Assyrian texts, were written with ideograms used only for their objective value, admitting no direct phonetic reading in Akkadian or any other Semitic language. Finally, the people who had lived in southern Mesopotamia, whose monuments and cities underlay the Babylonian remains (2000 B.C.), were identified as the inventors of the cuneiform script.

As the ancient texts designated the zone of Mesopotamia adjacent to the Persian Gulf by the name “Country of Sumer” (from the Akkadian term “shumerum”), it was agreed to call the predecessors of the Semitic Babylonians “Sumerians”. In the course of time the investigations advanced until it became possible to reconstruct the Sumerian language, which had been lost for thousands of years. This language, moreover, has never been classified within any of the well-known linguistic families.

The Sumerian language is truly strange in its vocabulary (mostly monosyllabic) and even more so in its grammar (reconstructed for the most part). Many of the linguistic categories that are indispensable to our own way of viewing and expressing things are absent from it. As mentioned above, the Sumerian world is a discovery of the nineteenth century. It is the first civilization of the world, with all the complexities this implies: social and political organization, the foundation of cities and states, the creation of institutions and laws, the organized production of goods, the regulation of commerce, monumental artistic manifestations, and the invention of a writing system that allowed knowledge to be fixed and propagated. This civilization appeared in the fourth millennium B.C., in lower Mesopotamia, between the rivers Tigris and Euphrates, to the south of present-day Baghdad.

Even such ancient civilizations as the Egyptian and the Proto-Indian civilization of the Indus valley are several centuries later than that of Sumer. Unlike Egypt, whose pyramids remind us of the glories of its civilization, or Israel and Greece, which built monuments that recall their golden ages, Sumer left no visible testimonies of its past splendor. All that we know about Sumer today comes from the archeological excavations. All knowledge of this civilization has been extracted from clay tablets covered with tiny cuneiform characters. These texts, so difficult to decipher and understand, have been recovered by the hundreds of thousands, and they cover every aspect of their writers’ lives: government, the administration of justice, the economy, everyday life, science, history, literature and religion.


Cultural Alchemy: Berlin Sin City of the 1920s

Berlin was a cesspit of degeneracy and vice, powered primarily by the demand of rich and middle-class patrons whose services were supplied by the poor, who chose this option out of desperate poverty. Anyone wanting a measure of the moral squalor of Germany during those years should look at the magazine Simplicissimus, which commented on the social problems of the Germany of that time through the medium of art. Long-term sufferers of this blog will know that I am a huge fan of the work of Otto Dix and George Grosz, men with whom I wouldn’t have shared political affinities but who were nonetheless disgusted at the moral abyss into which Germany had fallen after the First World War. Their work is profoundly disturbing, disgusting and degenerate, until you realise that this is precisely what they were deliberately trying to get across. Berlin especially was a morally destitute city. It’s easy to see how the moral revulsion generated by the antics of Berlin would engender a lot of sympathy for a man like Hitler: purging the filth, regardless of the details, becomes very appealing. Just saying. Social Pathologist.


The Chernobyl Herbarium


Some images in Anaïs Tondeur’s Chernobyl Herbarium are explosions of light. Others glow softly, breathing with fragility and precariousness. The explosive imprints are, in effect, reminiscent of volcanic eruptions at night, hot lava spewing from the depths of the earth. Even assuming it is not an actual trace of radiation (which the specimens in the herbarium have received from the isotopes of cesium-137 and strontium-90 mixed into the soil of the exclusion zone) that comes through and shines forth from the plants’ contact with photosensitive paper, the resulting works of art cannot help but send us back to a space and time outside the frame, wherein this Linum usitatissimum germinated, grew, and blossomed.

The images are the visible records of an invisible calamity, tracked across the threshold of sight by the power of art. The literal translation from Greek of the technique used here, photogram, is a line of light. Not a photograph, the writing of light, but a photogram, its line captured on photosensitive paper, upon which the object is placed. In writing, a line is already too idealized, too heavy with meaning, overburdened with sense, nearly immaterial. In a photograph, light’s imprint is further removed from the being that emitted or reflected it than in a photogram, where, absent the camera, the line can be itself, can trace itself outside the system of coded significations and machinic mediations. The grammé of a photogram imposes itself from up close. Touching… It endures: etched, engraved, engrained, the energy it transported both reflected (or refracted) and absorbed. Much like radiation, indifferently imbibed by whatever and whoever is on its path – the soil, buildings, plants, animals, humans – yet uncontainable in any single entity whose time-frame it invariably overflows. Through her aesthetic practice, Tondeur detonates, releases the explosions of light trapped in plants, its lines dispersed, crisscrossing photograms every which way. She liberates luminescent traces without violence, avoiding the repetition of the first, invisible event of Chernobyl and, at the same time, capturing something of it. Release and preservation; preservation and release: by the grace of art.



Quantum Music

Human neurophysiology suggests that artistic beauty cannot easily be disentangled from sexual attraction. It is, for instance, very difficult to appreciate Sandro Botticelli’s Primavera, arguably the “most beautiful painting ever painted,” while a beautiful woman or man is standing in front of the picture. Indeed, so strong may be the distraction, and so deep the emotional impact, that it might not be unreasonable to speculate whether aesthetics, and in particular beauty and harmony in art, could best be understood in terms of surrogates for natural beauty, achieved through the process of artistic creation, idealization and “condensation.”


In this line of thought, in Hegelian terms, artistic beauty is the sublimation, idealization, completion, condensation and augmentation of natural beauty. Very different from Hegel, who asserts that artistic beauty is “born of the spirit and born again, and the higher the spirit and its productions are above nature and its phenomena, the higher, too, is artistic beauty above the beauty of nature,” the belief here is that human neurophysiology can hardly be disregarded in the human creation and perception of art, and in particular of beauty in art. Stated differently, we are inclined to believe that humans are so invariably determined by (or at least intertwined with) their natural basis that any neglect of it results in a humbling experience of irritation or even outright ugliness, no matter what social pressure groups or secret services may want to promote.

Thus, when it comes to the intensity of the experience, the human perception of artistic beauty, as sublime and refined as it may be, can hardly transcend natural beauty in its full exposure. In this way, art represents both the capacity and the humbling ineptitude of its creators and audiences.

Leaving these idealistic realms, let us come back to the quantization of musical systems. The universe of music consists of an infinity – indeed a continuum – of tones and of ways to compose, correlate and arrange them. It is not evident how to quantize sounds, and in particular music, in general. One way to proceed would be a microphysical one: to start with the frequencies of sound waves in air and quantize the spectral modes of these (longitudinal) vibrations, very much as is done for phonons in solid state physics.

For the sake of relating to music, however, we take a different approach, not dissimilar to the Deutsch-Turing approach to universal (quantum) computability, or to Moore’s automata analogues of complementarity: a musical instrument is quantized, restricted to one octave realized by the eight white keyboard keys, typically written c, d, e, f, g, a, b, c′ (in the C major scale).

In analogy to quantum information, a quantization of tones is considered, with a nomenclature analogous to the classical musical representation, to be followed up by introducing typical quantum mechanical features such as the coherent superposition of classically distinct tones, as well as entanglement and complementarity in music… quantum music.
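As a loose sketch of what a “coherent superposition of classically distinct tones” could mean computationally (my own toy illustration, not a construction from the text), one can model the eight white keys as orthonormal basis states of an 8-dimensional state space:

```python
import numpy as np

# Toy model: the eight white keys of one octave as orthonormal basis
# states |c>, |d>, ..., |c'> of an 8-dimensional complex vector space.
keys = ["c", "d", "e", "f", "g", "a", "b", "c'"]

# A "quantum tone": equal-weight superposition of |c> and |g> with a
# relative phase (the phase is invisible in the key probabilities).
psi = np.zeros(8, dtype=complex)
psi[keys.index("c")] = 1 / np.sqrt(2)
psi[keys.index("g")] = 1j / np.sqrt(2)

# Born rule: "sounding" the tone yields c or g, each with probability 1/2.
probs = np.abs(psi) ** 2
print({k: round(p, 3) for k, p in zip(keys, probs) if p > 0})  # {'c': 0.5, 'g': 0.5}
```

Entanglement would enter analogously, as non-factorizable states of two such tone systems.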

Beginning of Matter, Start to Existence Itself


When the inequality

μ + 3p/c² > 0 ⇔ w > −1/3

is satisfied, one obtains directly from the Raychaudhuri equation

3S̈/S = −(1/2)κ(μ + 3p/c²) + Λ

the Friedmann-Lemaître (FL) Universe Singularity Theorem, which states that:

In an FL universe with Λ ≤ 0 and μ + 3p/c² > 0 at all times, at any instant t0 when H0 ≡ (Ṡ/S)0 > 0 there is a finite time t∗, with t0 − 1/H0 < t∗ < t0, such that S(t) → 0 as t → t∗; the universe starts at a space-time singularity there, with μ → ∞ and T → ∞ if μ + p/c² > 0.

This is not merely a start to matter – it is a start to space, to time, to physics itself. It is the most dramatic event in the history of the universe: the start of the existence of everything. The underlying physical feature is the non-linear nature of the Einstein Field Equations (EFE): going back into the past, the more the universe contracts, the higher the active gravitational density, causing it to contract even more. The pressure p that one might have hoped would help stave off the collapse makes it even worse, because (consequent on the form of the EFE) p enters algebraically into the Raychaudhuri equation with the same sign as the energy density μ. Note that the Hubble constant gives an estimate of the age of the universe: the time τ0 = t0 − t∗ since the start of the universe is less than 1/H0.

This conclusion can in principle be avoided by a cosmological constant, but in practice this cannot work, because we know the universe has expanded by at least a ratio of 11 (we have seen objects at a redshift of 10). The cosmological constant would have to have an effective magnitude at least 11³ = 1331 times the present matter density to dominate and cause a turn-around then or at any earlier time, and so would be much bigger than its observed present upper limit (of the same order as the present matter density). Accordingly, no turnaround is possible while classical physics holds. However, energy-condition-violating matter components such as a scalar field can avoid this conclusion if they dominate at early enough times; but this can only be when quantum fields are significant, when the universe was at least 10¹² times smaller than at present.
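A back-of-envelope check of the two numerical claims above. The value of H0 is my own illustrative input (roughly 70 km/s/Mpc, which the text does not fix); the argument itself only uses the bound τ0 < 1/H0 and the factor (1 + z)³:

```python
# Age bound from the Hubble constant: tau0 = t0 - t* < 1/H0.
KM_PER_MPC = 3.0857e19   # kilometres in one megaparsec
SEC_PER_GYR = 3.156e16   # seconds in one gigayear

H0 = 70.0 / KM_PER_MPC               # ~70 km/s/Mpc, expressed in 1/s
age_bound = 1.0 / H0 / SEC_PER_GYR   # upper bound on the age, in Gyr
print(round(age_bound, 1))           # ~14 Gyr

# Matter density scales as (1+z)^3. Objects seen at redshift z = 10 imply
# expansion by a factor 1 + z = 11 since then, so a cosmological constant
# able to cause a turn-around at that epoch must exceed the present
# matter density by at least:
z = 10
print((1 + z) ** 3)                  # 1331
```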

Because Trad ∝ S⁻¹, a major conclusion is that a Hot Big Bang must have occurred: densities and temperatures must have risen at least to energies high enough that quantum fields were significant, something like the GUT energy. The universe must have reached those extreme temperatures and energies at which classical theory breaks down.

Single Asset Optimal Investment Fraction


We first consider a situation in which an investor can spend a fraction of his capital to buy shares of just one risky asset. The rest of his money he keeps in cash.

Generalizing Kelly, we consider the following simple strategy: the investor regularly checks the asset’s current price p(t) and sells or buys some shares in order to keep the current market value of his holdings at a pre-selected fraction r of his total capital. These readjustments are made periodically at a fixed interval, which we refer to as the readjustment interval and which we select as the discrete unit of time. In this work the readjustment interval is fixed once and for all, and we do not attempt to optimize its length.

We also assume that on the time-scale of this readjustment interval the asset price p(t) undergoes a geometric Brownian motion:

p(t + 1) = e^η(t) p(t) —– (1)

i.e. at each time step the random number η(t) is drawn from some probability distribution π(η), independently of its values at previous time steps. This exponential notation is particularly convenient for working with multiplicative noise, keeping the necessary algebra to a minimum. Under these rules the logarithm of the asset’s price, ln p(t), performs a random walk with an average drift v = ⟨η⟩ and a dispersion D = ⟨η²⟩ − ⟨η⟩².
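A minimal numerical sketch of the process (1), with an illustrative Gaussian choice of π(η) (the text allows any distribution; the parameter values here are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
v, D = 0.01, 0.04                  # drift <eta> and dispersion <eta^2> - <eta>^2
T, n_paths = 1000, 2000            # time steps and independent realizations

# p(t+1) = exp(eta(t)) p(t) with p(0) = 1, so ln p(t) is a sum of eta's.
eta = rng.normal(v, np.sqrt(D), size=(n_paths, T))
log_p = eta.cumsum(axis=1)

v_hat = log_p[:, -1].mean() / T    # estimated drift of the random walk
D_hat = log_p[:, -1].var() / T     # estimated dispersion
print(v_hat, D_hat)                # close to v = 0.01 and D = 0.04
```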

It is easy to derive the time evolution of the total capital W(t) of an investor, following the above strategy:

W(t + 1) = (1 − r)W(t) + rW(t)e^η(t) —– (2)

Let us assume that the value of the investor’s capital at t = 0 is W(0) = 1. The evolution of the expectation value of the total capital ⟨W(t)⟩ after t time steps is obviously given by the recursion ⟨W(t + 1)⟩ = (1 − r + r⟨e^η⟩)⟨W(t)⟩. When ⟨e^η⟩ > 1, at first thought the investor should invest all his money in the risky asset: the expectation value of his capital would then enjoy exponential growth at the fastest possible rate. However, it would be totally unreasonable to expect that in a typical realization of price fluctuations the investor would actually attain the average growth rate determined as v_avg = d⟨W(t)⟩/dt. This is because the main contribution to the expectation value ⟨W(t)⟩ comes from exponentially unlikely outcomes, in which the price of the asset, after a long series of favorable events with η > ⟨η⟩, becomes exponentially big. Such outcomes lie well beyond the reasonable fluctuations of W(t), determined by the standard deviation √(Dt) of ln W(t) around its average value ⟨ln W(t)⟩ = ⟨η⟩t.

For the investor who deals with just one realization of the multiplicative process, it is better not to rely on such unlikely events, and instead to maximize the gain in a typical outcome. To quantify the intuitively clear concept of the typical value of a random variable x, we define x_typ as the median of its distribution, i.e. x_typ has the property that Prob(x > x_typ) = Prob(x < x_typ) = 1/2. In the multiplicative process (2) with r = 1, W(t + 1) = e^η(t) W(t), one can show that W_typ(t) – the typical value of W(t) – grows exponentially in time, W_typ(t) = e^(⟨η⟩t), at a rate v_typ = ⟨η⟩, while the expectation value ⟨W(t)⟩ also grows exponentially, as ⟨W(t)⟩ = ⟨e^η⟩^t, but at the faster rate v_avg = ln⟨e^η⟩. Notice that ⟨ln W(t)⟩ always grows at the typical growth rate, since the very rare outcomes in which W(t) is exponentially big do not make a significant contribution to this average.
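The gap between typical and average growth is easy to see numerically. A sketch with r = 1 and a Gaussian π(η) of my own choosing (for a Gaussian, v_avg = ln⟨e^η⟩ = v + D/2):

```python
import numpy as np

rng = np.random.default_rng(1)
v, D, T, n_paths = 0.02, 0.01, 100, 20000

eta = rng.normal(v, np.sqrt(D), size=(n_paths, T))
W = np.exp(eta.sum(axis=1))     # W(t) for r = 1, starting from W(0) = 1

W_typ = np.median(W)            # typical value: ~ exp(<eta> T) = e^2
W_avg = W.mean()                # average value: ~ <e^eta>^T = e^2.5
print(W_typ, np.exp(v * T))             # both ~7.4
print(W_avg, np.exp((v + D / 2) * T))   # both ~12.2
```

The mean is dominated by the rare paths in the upper tail, which is why it exceeds the median even though half of all realizations end below W_typ.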

The question we are going to address is: which investment fraction r provides the investor with the best typical growth rate v_typ of his capital? Kelly answered this question for a particular realization of the multiplicative stochastic process, in which the capital is multiplied by 2 with probability q > 1/2, and by 0 with probability p = 1 − q. This case is realized in a gambling game where betting on the right outcome pays 2:1, while you know the right outcome with probability q > 1/2. In our notation this case corresponds to η being equal to ln 2 with probability q and to −∞ otherwise. The player’s capital in Kelly’s model with r = 1 enjoys growth of the expectation value ⟨W(t)⟩ at a rate v_avg = ln(2q) > 0. In this case it is, however, particularly clear that one should not use maximization of the expectation value of the capital as the optimality criterion. If the player indeed bets all of his capital at every time step, sooner or later he will lose everything and will not be able to continue to play. In other words, r = 1 corresponds to the worst typical growth of the capital: asymptotically the player goes bankrupt with probability 1. In this example it is also very transparent where the positive average growth rate comes from: after T rounds of the game, in the very unlikely event (of probability q^T) that the capital was multiplied by 2 at every step (the gambler guessed right every time!), the capital equals 2^T. This exponentially large value outweighs the exponentially small probability of the event and gives rise to an exponentially growing average. That would be scant condolence to a gambler who has lost everything.

We now generalize Kelly’s arguments to an arbitrary distribution π(η). As we will see, this generalization reveals some hidden results not realized in Kelly’s “betting” game. As we learned above, the growth of the typical value of W(t) is given by the drift of ⟨ln W(t)⟩ = v_typ t, which in our case can be written as

v_typ(r) = ∫ dη π(η) ln(1 + r(e^η − 1)) —– (3)

One can check that v_typ(0) = 0, since in this case the whole capital is in the form of cash and does not change in time. In the other limit one has v_typ(1) = ⟨η⟩, since in this case the whole capital is invested in the asset and enjoys its typical growth rate (⟨η⟩ = −∞ for Kelly’s case). Can one do better by selecting 0 < r < 1? To find the maximum of v_typ(r) one differentiates (3) with respect to r and looks for a solution of the resulting equation 0 = v′_typ(r) = ∫ dη π(η) (e^η − 1)/(1 + r(e^η − 1)) in the interval 0 ≤ r ≤ 1. If such a solution exists, it is unique, since v′′_typ(r) = − ∫ dη π(η) (e^η − 1)²/(1 + r(e^η − 1))² < 0 everywhere. The values of v′_typ(r) at 0 and 1 are given by v′_typ(0) = ⟨e^η⟩ − 1 and v′_typ(1) = 1 − ⟨e^−η⟩. One has to consider three possibilities:

(1) ⟨e^η⟩ < 1. Then v′_typ(0) < 0, and the maximum of v_typ(r) is realized at r = 0 and is equal to 0. In other words, one should never invest in an asset with negative average return per capital ⟨e^η⟩ − 1 < 0.

(2) ⟨e^η⟩ > 1 and ⟨e^−η⟩ > 1. In this case v′_typ(0) > 0 but v′_typ(1) < 0, and the maximum of v_typ(r) is realized at some 0 < r < 1, which is the unique solution of v′_typ(r) = 0. The typical growth rate in this case is always positive (one could always have selected r = 0 to make it zero), but not as big as the average rate ln⟨e^η⟩, which serves as an unattainable ideal limit. An intuitive understanding of why one should select r < 1 in this case comes from the following observation: the condition ⟨e^−η⟩ > 1 makes ⟨1/p(t)⟩ grow exponentially in time. Such exponential growth indicates that outcomes with very small p(t) are feasible and give the dominant contribution to ⟨1/p(t)⟩. This is an indicator that the asset price is unstable, and one should not trust one’s whole capital to such a risky investment.

(3) ⟨e^η⟩ > 1 and ⟨e^−η⟩ < 1. This is a safe asset, and one can invest one’s whole capital in it. The maximum of v_typ(r) is achieved at r = 1 and is equal to v_typ(1) = ⟨η⟩. A simple example of this type of asset is one in which the price p(t) is, with equal probabilities, multiplied by 2 or by a = 2/3; as one can see, this is the marginal case, in which ⟨1/p(t)⟩ = const. For a < 2/3 one should invest only a fraction r < 1 of the capital in the asset, while for a ≥ 2/3 the whole sum can be trusted to it. The specialness of the case a = 2/3 cannot be guessed by just looking at the typical and average growth rates of the asset! One has to go and calculate ⟨e^−η⟩ to check whether ⟨1/p(t)⟩ diverges. This “reliable” type of asset is a new feature of the model with general π(η): it is never realized in Kelly’s original model, which always has ⟨η⟩ = −∞, so that it never makes sense to gamble the whole capital every time.

An interesting and somewhat counterintuitive consequence of the above results is that under certain conditions one can make one’s capital grow by investing in an asset with a negative typical growth rate ⟨η⟩ < 0. Such an asset certainly loses value, and its typical price decays exponentially. Any investor bold enough to trust his whole capital to such an asset loses money at the same rate. But as long as the fluctuations are strong enough to maintain a positive average return per capital (⟨e^η⟩ − 1 > 0), one can keep a certain fraction of one’s total capital invested in this asset and almost certainly make money! A simple example of such a mind-boggling situation is given by a random multiplicative process in which the price of the asset is, with equal probabilities, doubled (goes up by 100%) or divided by 3 (goes down by 66.7%). The typical price of this asset drifts down by 18% per time step: after T time steps one could reasonably expect the price to be p_typ(T) = 2^(T/2) 3^(−T/2) = (√(2/3))^T ≃ 0.82^T. On the other hand, the average ⟨p(t)⟩ enjoys 17% growth: ⟨p(t + 1)⟩ = (7/6)⟨p(t)⟩ ≃ 1.17⟨p(t)⟩. As one can easily see, the optimum of the typical growth rate is achieved by keeping a fraction r = 1/4 of the capital invested in this asset. The typical growth factor per step is then a meager √(25/24) ≃ 1.02, meaning that in the long run one almost certainly gets a 2% return per time step; but that is certainly better than losing 18% per step by investing the whole capital.
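A direct numerical check of this example, evaluating v_typ from eq. (3) on a grid of r values (a sketch, not an analytic derivation):

```python
import numpy as np

# eta = ln 2 or ln(1/3), each with probability 1/2: doubled or divided by 3.
etas = np.array([np.log(2.0), np.log(1.0 / 3.0)])
probs = np.array([0.5, 0.5])

def v_typ(r):
    """Typical growth rate, eq. (3), for investment fraction r."""
    return probs @ np.log(1.0 + r * (np.exp(etas) - 1.0))

rs = np.arange(0.0, 0.99, 1e-4)
vals = np.array([v_typ(r) for r in rs])
r_opt = rs[vals.argmax()]

print(round(r_opt, 3))        # 0.25: keep a quarter of the capital invested
print(np.exp(v_typ(0.25)))    # ~1.0206 = sqrt(25/24), i.e. ~2% per step
print(np.exp(probs @ etas))   # ~0.8165: the typical price falls ~18% per step
```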

Of course, the properties of a typical realization of a random multiplicative process are not fully characterized by the drift v_typ(r)t of the center of mass of P(h, t), where h(t) = ln W(t) is the logarithm of the wealth of the investor. Indeed, asymptotically P(h, t) has the Gaussian shape P(h, t) = 1/√(2πD(r)t) exp(−(h − v_typ(r)t)²/(2D(r)t)), where v_typ(r) is given by eq. (3). One needs to know the dispersion D(r) to estimate √(D(r)t), the magnitude of the characteristic deviations of h(t) away from its typical value h_typ(t) = v_typ t. At an infinite time horizon t → ∞, the process with the biggest v_typ(r) is certainly preferable over any other, because the separation between the typical values of h(t) for two different investment fractions r grows linearly in time, while the span of typical fluctuations grows only as √t. However, at a finite time horizon the investor should take into account both v_typ(r) and D(r) and decide what he prefers: moderate growth with small fluctuations, or faster growth with still bigger fluctuations. To quantify this decision one needs to introduce an investor’s “utility function”, which we will not attempt in this work. The most conservative players are advised to keep their capital in cash, since with any other arrangement the fluctuations will certainly be bigger. As a rule, one can show that the dispersion D(r) = ∫ π(η) ln²[1 + r(e^η − 1)] dη − v_typ²(r) monotonically increases with r. Therefore, of two solutions with equal v_typ(r) one should always select the one with the smaller r, since it guarantees smaller fluctuations. Here it is more convenient to switch to standard notation. It is customary to use the random variable

Λ(t) = (p(t + 1) − p(t))/p(t) = e^η(t) − 1 —– (4)

which is referred to as the return per unit capital of the asset. The properties of a random multiplicative process are expressed in terms of the average return per capital α = ⟨Λ⟩ = ⟨e^η⟩ − 1 and the volatility (standard deviation) of the return per capital σ = √(⟨Λ²⟩ − ⟨Λ⟩²). In our notation α is determined by the average, not the typical, growth rate of the process. For η ≪ 1, α ≃ v + D/2 + v²/2, while the volatility σ is related to D (the dispersion of η) through σ ≃ √D.
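A quick numerical check of these small-η relations, with a two-point π(η) of my own choosing:

```python
import numpy as np

etas = np.array([0.03, -0.01])      # small eta values
p = np.array([0.5, 0.5])

v = p @ etas                        # drift <eta>
D = p @ etas**2 - v**2              # dispersion of eta
Lam = np.exp(etas) - 1              # return per unit capital, eq. (4)

alpha = p @ Lam                     # exact average return per capital
sigma = np.sqrt(p @ Lam**2 - alpha**2)

print(alpha, v + D / 2 + v**2 / 2)  # exact alpha vs the expansion
print(sigma, np.sqrt(D))            # sigma vs sqrt(D)
```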


Price Dynamics for Fundamentalists – Risky Asset – Chartists via Modeling

Substituting (1), (2) and (3) into (4), the dynamical system can be obtained as

pt+1 − pt = θN[(1 − κ)(exp(α(p − pt)) − 1) + κ(exp(β(1 − µ)(pet − pt)) − 1)]
pet+1 − pet = µ(pt − pet) —– (5)

In the following discussion we highlight the impact of increases in the total number of traders n on the price fluctuation that is defined as the price increment,

rt = pt+1 − pt

We first restrict ourselves to investigating the following set of parameters:

α = 3, β = 1, µ = 0.5, κ = 0.5, θ = 0.001 —– (6)

It is clear that, under the above conditions, the two-dimensional map (5) has a unique equilibrium with pet = pt = p, namely (p̄e, p̄) = (p, p). Elementary computations show that for our map (5) a sufficient condition for the local stability of the fixed point p is

N < (2(2 − µ))/(θ[α(2 − µ)(1 − κ) + 2β(1 − µ)κ]) —– (7)

From (7) it follows that, starting from a small number of traders N inside the stability region, when the number of traders N increases, a loss of stability may occur via a flip bifurcation. We shall now look more globally into the effect of increases in the number of traders on the price dynamics. Figure 1 shows a bifurcation diagram of the price increments rt with N as the bifurcation parameter under the set of parameters (6). For the convenience of illustration, Figure 1 is drawn using θN as the bifurcation parameter. This figure suggests the following bifurcation scenario.


The price increments rt converge to 0 when the number of traders N is small; in other words, the price converges to the fundamental price p when the active traders are few. However, the price dynamics become unstable when the number of traders N exceeds about 1000, and chaotic behavior of the price increments sets in after infinitely many period-doubling bifurcations. If N is increased further, the price increments rt become more regular again after infinitely many period-halving bifurcations, and a stable period-2 orbit occurs for an interval of N-values. However, as N is increased still further, the behavior of the price increments becomes chaotic once again, and the prices diverge. Let us investigate closely the characteristics of the chaos observed in the parameter interval 2000 < N < 4000. Figure 2 shows a series of price increments rt for N = 4000 and the set of parameters (6). The figure clearly shows the characteristics of intermittent chaos: a long laminar phase, in which the price fluctuations behave regularly, is interrupted from time to time by chaotic bursts.
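The onset of instability can be reproduced by iterating map (5) directly. A sketch with the parameter set (6); the fundamental price p is taken as 1.0 and the initial conditions are my own illustrative choices, since the text fixes neither:

```python
import numpy as np

ALPHA, BETA, MU, KAPPA, THETA = 3.0, 1.0, 0.5, 0.5, 0.001  # parameter set (6)
P_STAR = 1.0   # fundamental price p (its numerical value is an assumption here)

def step(p, pe, n):
    """One iteration of map (5); returns (p_{t+1}, pe_{t+1})."""
    r = THETA * n * ((1 - KAPPA) * (np.exp(ALPHA * (P_STAR - p)) - 1.0)
                     + KAPPA * (np.exp(BETA * (1 - MU) * (pe - p)) - 1.0))
    return p + r, pe + MU * (p - pe)

def increments(n, t_max=3000):
    """Price increments r_t = p_{t+1} - p_t, starting near the fixed point."""
    p, pe = 1.01, 1.0
    out = []
    for _ in range(t_max):
        p_next, pe_next = step(p, pe, n)
        out.append(p_next - p)
        p, pe = p_next, pe_next
    return np.array(out)

# Stability bound (7) on the number of traders:
n_crit = 2 * (2 - MU) / (THETA * (ALPHA * (2 - MU) * (1 - KAPPA)
                                  + 2 * BETA * (1 - MU) * KAPPA))
print(round(n_crit))                       # ~1091, i.e. "about 1000"

print(abs(increments(500)[-200:]).max())   # below the bound: r_t dies out
print(abs(increments(1200)[-200:]).max())  # above it: r_t keeps oscillating
```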