Black Hole Analogue: Extreme Blue Shift Disturbance. Thought of the Day 141.0

One major contribution of the theoretical study of black hole analogues has been to help clarify the derivation of the Hawking effect, which leads to a study of Hawking radiation in a more general context, one that involves, among other features, two horizons. There is an apparent contradiction in Hawking’s semiclassical derivation of black hole evaporation, in that the radiated fields undergo arbitrarily large blue-shifting in the calculation, thus acquiring arbitrarily large masses, which contravenes the underlying assumption that the gravitational effects of the quantum fields may be ignored. This is known as the trans-Planckian problem. A similar issue arises in condensed matter analogues such as the sonic black hole.


Figure: Sonic horizons in a moving fluid in which the speed of sound is 1. The velocity profile of the fluid, v(z), attains the value −1 at two values of z; these are horizons for sound waves that are right-moving with respect to the fluid. At the right-hand horizon right-moving waves are trapped, with waves just to the left of the horizon being swept into the supersonic flow region v < −1; no sound can emerge from this region through the horizon, so it is reminiscent of a black hole. At the left-hand horizon right-moving waves become frozen and cannot enter the supersonic flow region; this is reminiscent of a white hole.

Consider the sonic horizons in a one-dimensional fluid flow with the velocity profile depicted in the figure above: two horizons are formed for sound waves that propagate to the right with respect to the fluid. The horizon on the right of the supersonic flow region v < −1 behaves like a black-hole horizon for right-moving waves, while the horizon on the left of the supersonic flow region behaves like a white-hole horizon for these waves. In such a system, the equation for a small perturbation φ of the velocity potential is

(∂t + ∂z v)(∂t + v ∂z)φ − ∂z²φ = 0 —– (1)

In terms of a new coordinate τ defined by

dτ := dt + v dz/(1 − v²)

(1) is the equation □φ = 0 of a scalar field in the black-hole-type metric

ds² = (1 − v²)dτ² − dz²/(1 − v²)

Each horizon will produce a thermal spectrum of phonons with a temperature determined by the quantity that corresponds to the surface gravity at the horizon, namely the absolute value of the slope of the velocity profile:

kBT = ħα/2π, α := |dv/dz| at v = −1 —– (2)
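For orientation, (2) can be evaluated numerically; a minimal sketch in which the slope α is an arbitrary illustrative value, not one taken from the text:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
kB = 1.380649e-23        # Boltzmann constant, J / K

alpha = 1.0e10           # |dv/dz| at v = -1, in s^-1 (illustrative value only)

# kB T = hbar * alpha / (2 pi)  =>  T = hbar * alpha / (2 pi kB)
T = hbar * alpha / (2 * math.pi * kB)
```

For this α the phonon temperature comes out at roughly a hundredth of a kelvin, which is why analogue Hawking radiation is so hard to detect in practice.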


Hawking phonons in the fluid flow: real phonons have positive frequency in the fluid-element frame, and for right-moving phonons this frequency is ω − vk = ω/(1 + v) = k. Thus in the subsonic-flow regions, where 1 + v > 0, ω (conserved for each ray) is positive, whereas in the supersonic-flow region it is negative; k is positive for all real phonons. The frequency in the fluid-element frame diverges at the horizons – the trans-Planckian problem.

The trajectories of the created phonons are formally deduced from the dispersion relation of the sound equation (1). Geometrical acoustics applied to (1) gives the dispersion relation

ω − vk = ±k —– (3)

and Hamilton's equations for the rays

dz/dt = ∂ω/∂k = v ± 1 —– (4)

dk/dt = −∂ω/∂z = −v′k —– (5)

The left-hand side of (3) is the frequency in the frame co-moving with a fluid element, whereas ω is the frequency in the laboratory frame; the latter is constant for a time-independent fluid flow ("time-independent Hamiltonian": dω/dt = ∂ω/∂t = 0). Since the Hawking radiation is right-moving with respect to the fluid, we must clearly choose the positive sign in (3), and hence in (4) also. By approximating v(z) as a linear function near the horizons we obtain the ray trajectories from (4) and (5).

The disturbing feature of the rays is the behavior of the wave vector k: at the horizons the radiation is exponentially blue-shifted, leading to a diverging frequency in the fluid-element frame. These runaway frequencies are unphysical, since (1) asserts that sound in a fluid element obeys the ordinary wave equation at all wavelengths, in contradiction with the atomic nature of fluids. Moreover, the conclusion that this Hawking radiation is actually present in the fluid also assumes that (1) holds at all wavelengths, since exponential blue-shifting of wave packets at the horizon is a feature of the derivation. Similarly, in the black-hole case the field equation does not hold at arbitrarily high frequencies because it ignores the gravity of the fields. For the black hole, a complete resolution of this difficulty will require input from the gravitational physics of quantum fields, i.e. quantum gravity, but for the dumb hole the physics is available for a more realistic treatment.
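The runaway blue-shift can be made concrete with a small numerical sketch. Assuming a linearized profile v(z) ≈ −1 + αz near the black-hole horizon at z = 0 (α, the step size, and the initial data are illustrative choices, not values from the text), (4) and (5) reduce to dz/dt = αz and dk/dt = −αk, and tracing a ray backwards in time shows k growing exponentially:

```python
import math

alpha = 1.0      # |dv/dz| at the horizon (illustrative)
dt = 1e-3        # time step for the Euler integration
t = 5.0          # how far backwards in time we trace the ray
z, k = 0.5, 1.0  # a ray starting just outside the horizon with unit wave number

# trace the ray backwards in time: z shrinks towards the horizon
# while k grows exponentially -- the trans-Planckian blue-shift
for _ in range(int(t / dt)):
    z -= dt * (alpha * z)    # dz/dt = v + 1 = alpha * z
    k -= dt * (-alpha * k)   # dk/dt = -v' k = -alpha * k

# analytically z(-t) = z0 * exp(-alpha t), k(-t) = k0 * exp(+alpha t)
```

Backwards in time the ray hugs the horizon (z → 0) while k blows up as e^(αt), which is exactly the exponential blue-shift described above.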


Evolutionary Game Theory. Note Quote


In classical evolutionary biology the fitness landscape for possible strategies is considered static. Therefore optimization theory is the usual tool for analyzing the evolution of strategies, which consequently tend to climb the peaks of the static landscape. However, in more realistic scenarios the evolution of populations modifies the environment, so that the fitness landscape becomes dynamic. In other words, the maxima of the fitness landscape depend on the number of specimens that adopt each strategy (a frequency-dependent landscape). In this case, when the evolution depends on agents' actions, game theory is the adequate mathematical tool to describe the process. But this is precisely the scheme in which the evolving physical laws (i.e. algorithms or strategies) are generated from agent-agent interactions (a bottom-up process) subject to natural selection.

The concept of an evolutionarily stable strategy (ESS) is central to evolutionary game theory. An ESS is defined as a strategy that cannot be displaced by any alternative strategy once it is followed by the great majority – almost all systems in a population. In general, an ESS is not necessarily optimal; however, it might be assumed that in the last stages of evolution – before the quantum equilibrium is achieved – the fitness landscape of possible strategies could be considered static, or at least slowly varying. In this simplified case an ESS would be the strategy with the highest payoff, therefore satisfying an optimizing criterion. Different ESSs could exist in other regions of the fitness landscape.
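As a toy illustration of an ESS on a frequency-dependent landscape (a standard textbook example, not one from the text), consider the Hawk-Dove game under replicator dynamics; with resource value V = 2 and fight cost C = 3, the unique ESS is the mixed population with a hawk fraction of V/C = 2/3:

```python
# payoffs: hawk vs hawk = (V - C)/2, hawk vs dove = V,
#          dove vs hawk = 0,         dove vs dove = V/2
V, C = 2.0, 3.0
p = 0.1       # initial fraction of hawks in the population
dt = 0.01     # Euler step for the replicator equation

for _ in range(20000):
    f_hawk = p * (V - C) / 2 + (1 - p) * V     # mean fitness of hawks
    f_dove = (1 - p) * V / 2                   # mean fitness of doves
    p += dt * p * (1 - p) * (f_hawk - f_dove)  # replicator dynamics

# p converges to the ESS hawk fraction V/C = 2/3
```

The fitness of each strategy depends on the current population mix, so the "landscape" moves as the population evolves; the dynamics nonetheless settle on the ESS, which here is not the payoff-maximizing pure strategy for any individual.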

In the information-theoretic Darwinian approach it seems plausible to assume as optimization criterion the optimization of information flows for the system. A set of three regulating principles could be:

Structure: The complexity of the system is optimized (maximized). The definition adopted for complexity is Bennett's logical depth, which for a binary string is the time needed to execute the minimal program that generates that string. There is no general acceptance of a definition of complexity, nor is there consensus on the relation between the increase of complexity – for a given definition – and Darwinian evolution. However, there seems to be some agreement that, in the long term, Darwinian evolution should drive an increase in complexity in the biological realm for an adequate natural definition of this concept. The complexity of a system at time t in this theory would then be the Bennett logical depth of the program stored at time t in its Turing machine. The increase of complexity is a characteristic of Lamarckian evolution, and it is also admitted that the trend of evolution in the Darwinian theory is in the direction in which complexity grows, although whether this tendency depends on the timescale – or some other factors – is still not very clear.

Dynamics: The information outflow of the system is optimized (minimized). The information is the Fisher information measure for the probability density function of the position of the system. According to S. A. Frank, natural selection acts to maximize the Fisher information within a Darwinian system. As a consequence, assuming that the flow of information between a system and its surroundings can be modeled as a zero-sum game, Darwinian systems would follow these dynamics.

Interaction: The interaction between two subsystems optimizes (maximizes) the complexity of the total system. The complexity is again equated to Bennett's logical depth. The role of Interaction is central in the generation of composite systems, and therefore in the structure of the information processor of composite systems, which results from the logical interconnections among the processors of the constituents. There is an enticing option of defining the complexity of a system in contextual terms, as the capacity of a system for anticipating the behavior at t + ∆t of the surrounding systems included in the sphere of radius r centered at the position X(t) occupied by the system. This definition would directly lead to the maximization of the predictive power of the systems that maximized their complexity. However, this magnitude would be very difficult even to estimate, much more so than the usual definitions of complexity.
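For reference, the Fisher information invoked in Dynamics can be computed directly in a simple case; the sketch below (a generic Gaussian example, not the model of the theory) numerically recovers I = 1/σ² for a Gaussian location family by integrating I = ∫ (∂μ ln p)² p dx:

```python
import math

mu, sigma = 0.0, 2.0
h, dx = 1e-5, 0.001           # finite-difference step and integration step
lo = mu - 10 * sigma          # integrate over [mu - 10 sigma, mu + 10 sigma]

def p(x, m):
    # Gaussian density with location parameter m and fixed scale sigma
    return math.exp(-((x - m) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# score d/dm ln p taken by central finite differences, then I = E[score^2]
I = 0.0
for i in range(int(20 * sigma / dx)):
    x = lo + (i + 0.5) * dx
    score = (math.log(p(x, mu + h)) - math.log(p(x, mu - h))) / (2 * h)
    I += score ** 2 * p(x, mu) * dx

# for a Gaussian location family, Fisher information I = 1 / sigma^2
```

The smaller the Fisher information of the position distribution, the less sharply the system's position pins down the parameter, which is the sense in which "minimizing information outflow" is an optimization over probability densities.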

Quantum behavior of microscopic systems should now emerge from the ESS. In other terms, the postulates of quantum mechanics should be deduced from the application of the three regulating principles to physical systems endowed with an information processor.

Let us apply Structure. It is reasonable to consider that the maximization of the complexity of a system would in turn maximize its predictive power. And this optimal statistical-inference capacity would plausibly induce the complex Hilbert-space structure for the system's space of states. Let us now consider Dynamics. This is basically the application of the principle of minimum Fisher information, or maximum Cramér-Rao bound, to the probability distribution function for the position of the system. The concept of entanglement seems to be determinant for studying the generation of composite systems, in particular in this theory through applying Interaction. The theory admits a simple model that characterizes the entanglement between two subsystems as the mutual exchange of randomizers (R1, R2), programs (P1, P2) – with their respective anticipation modules (A1, A2) – and wave functions (Ψ1, Ψ2). In this way, both subsystems can anticipate not only the behavior of their corresponding surrounding systems, but also that of the environment of their partner entangled subsystem. In addition, entanglement can be considered a natural phenomenon in this theory, a consequence of the tendency to increase complexity, and therefore, in a certain sense, experimental support for the theory.

In addition, the information-theoretic Darwinian approach is a minimalist realist theory: every system follows a continuous trajectory in time, as in Bohmian mechanics, and it is a local theory in physical space. Apparent nonlocality, as in Bell-inequality violations, would be an artifact of the anticipation module in the information space, although randomness would necessarily be intrinsic to nature through the random number generator methodologically associated with every fundamental system at t = 0, an essential ingredient to start and fuel – through variation – Darwinian evolution. As time increases, random events determined by the random number generators would progressively be replaced by causal events determined by the evolving programs that gradually take control of the elementary systems. Randomness would be displaced by causality as physical Darwinian evolution gave rise to the quantum equilibrium regime – but not completely, since randomness would play a crucial role in the optimization of strategies, and thus of information flows, as game theory states.

Conjuncted: Ergodicity. Thought of the Day 51.1


When we scientifically investigate a system, we cannot normally observe all possible histories in Ω, or directly access the conditional probability structure {PrE}E⊆Ω. Instead, we can only observe specific events. Conducting many “runs” of the same experiment is an attempt to observe as many histories of a system as possible, but even the best experimental design rarely allows us to observe all histories or to read off the full conditional probability structure. Furthermore, this strategy works only for smaller systems that we can isolate in laboratory conditions. When the system is the economy, the global ecosystem, or the universe in its entirety, we are stuck in a single history. We cannot step outside that history and look at alternative histories. Nonetheless, we would like to infer something about the laws of the system in general, and especially about the true probability distribution over histories.

Can we discern the system’s laws and true probabilities from observations of specific events? And what kinds of regularities must the system display in order to make this possible? In other words, are there certain “metaphysical prerequisites” that must be in place for scientific inference to work?

To answer these questions, we first consider a very simple example. Here T = {1,2,3,…}, and the system’s state at any time is the outcome of an independent coin toss. So the state space is X = {Heads, Tails}, and each possible history in Ω is one possible Heads/Tails sequence.

Suppose the true conditional probability structure on Ω is induced by the single parameter p, the probability of Heads. In this example, the Law of Large Numbers guarantees that, with probability 1, the limiting frequency of Heads in a given history (as time goes to infinity) will match p. This means that the subset of Ω consisting of “well-behaved” histories has probability 1, where a history is well-behaved if (i) there exists a limiting frequency of Heads for it (i.e., the proportion of Heads converges to a well-defined limit as time goes to infinity) and (ii) that limiting frequency is p. For this reason, we will almost certainly (with probability 1) arrive at the true conditional probability structure on Ω on the basis of observing just a single history and counting the number of Heads and Tails in it.
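This convergence is easy to reproduce numerically; a minimal sketch of a single long history of the biased coin (the values of p and n are arbitrary choices):

```python
import random

random.seed(0)
p = 0.3        # true probability of Heads
n = 100_000    # length of the single observed history

# one history: n independent tosses; count the Heads
heads = sum(1 for _ in range(n) if random.random() < p)
freq = heads / n   # limiting-frequency estimate of p from this one history
```

With probability 1 the empirical frequency converges to p as n grows, so a single sufficiently long history suffices to recover the true parameter, exactly as the Law of Large Numbers promises.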

Does this result generalize? The short answer is “yes”, provided the system’s symmetries are of the right kind. Without suitable symmetries, generalizing from local observations to global laws is not possible. In a slogan, for scientific inference to work, there must be sufficient regularities in the system. In our toy system of the coin tosses, there are. Wigner (1967) recognized this point, taking symmetries to be “a prerequisite for the very possibility of discovering the laws of nature”.

Generally, symmetries allow us to infer general laws from specific observations. For example, let T = {1,2,3,…}, and let Y and Z be two subsets of the state space X. Suppose we have made the observation O: “whenever the state is in the set Y at time 5, there is a 50% probability that it will be in Z at time 6”. Suppose we know, or are justified in hypothesizing, that the system has the set of time symmetries {ψr : r = 1,2,3,….}, with ψr(t) = t + r, as defined in the previous section. Then, from observation O, we can deduce the following general law: “for any t in T, if the state of the system is in the set Y at time t, there is a 50% probability that it will be in Z at time t + 1”.

However, this example still has a problem. It only shows that if we could make observation O, then our generalization would be warranted, provided the system has the relevant symmetries. But the “if” is a big “if”. Recall what observation O says: “whenever the system’s state is in the set Y at time 5, there is a 50% probability that it will be in the set Z at time 6”. Clearly, this statement is only empirically well supported – and thus a real observation rather than a mere hypothesis – if we can make many observations of possible histories at times 5 and 6. We can do this if the system is an experimental apparatus in a lab or a virtual system in a computer, which we are manipulating and observing “from the outside”, and on which we can perform many “runs” of an experiment. But, as noted above, if we are participants in the system, as in the case of the economy, an ecosystem, or the universe at large, we only get to experience times 5 and 6 once, and we only get to experience one possible history. How, then, can we ever assemble a body of evidence that allows us to make statements such as O?

The solution to this problem lies in the property of ergodicity. This is a property that a system may or may not have and that, if present, serves as the desired metaphysical prerequisite for scientific inference. To explain this property, let us give an example. Suppose T = {1,2,3,…}, and the system has all the time symmetries in the set Ψ = {ψr : r = 1,2,3,….}. Heuristically, the symmetries in Ψ can be interpreted as describing the evolution of the system over time. Suppose each time-step corresponds to a day. Then the history h = (a,b,c,d,e,….) describes a situation where today’s state is a, tomorrow’s is b, the next day’s is c, and so on. The transformed history ψ1(h) = (b,c,d,e,f,….) describes a situation where today’s state is b, tomorrow’s is c, the following day’s is d, and so on. Thus, ψ1(h) describes the same “world” as h, but as seen from the perspective of tomorrow. Likewise, ψ2(h) = (c,d,e,f,g,….) describes the same “world” as h, but as seen from the perspective of the day after tomorrow, and so on.

Given the set Ψ of symmetries, an event E (a subset of Ω) is Ψ-invariant if the inverse image of E under ψ is E itself, for all ψ in Ψ. This implies that if a history h is in E, then ψ(h) will also be in E, for all ψ. In effect, if the world is in the set E today, it will remain in E tomorrow, and the day after tomorrow, and so on. Thus, E is a “persistent” event: an event one cannot escape from by moving forward in time. In a coin-tossing system, where Ψ is still the set of time translations, examples of Ψ-invariant events are “all Heads”, where E contains only the history (Heads, Heads, Heads, …), and “all Tails”, where E contains only the history (Tails, Tails, Tails, …).

The system is ergodic (with respect to Ψ) if, for any Ψ-invariant event E, the unconditional probability of E, i.e., PrΩ(E), is either 0 or 1. In other words, the only persistent events are those which occur in almost no history (i.e., PrΩ(E) = 0) and those which occur in almost every history (i.e., PrΩ(E) = 1). Our coin-tossing system is ergodic, as exemplified by the fact that the Ψ-invariant events “all Heads” and “all Tails” occur with probability 0.

In an ergodic system, it is possible to estimate the probability of any event “empirically”, by simply counting the frequency with which that event occurs. Frequencies are thus evidence for probabilities. The formal statement of this is the following important result from the theory of dynamical systems and stochastic processes.

Ergodic Theorem: Suppose the system is ergodic. Let E be any event and let h be any history. For all times t in T, let Nt be the number of elements r in the set {1, 2, …, t} such that ψr(h) is in E. Then, with probability 1, the ratio Nt/t will converge to PrΩ(E) as t increases towards infinity.

Intuitively, Nt is the number of times the event E has “occurred” in history h from time 1 up to time t. The ratio Nt/t is therefore the frequency of occurrence of event E (up to time t) in history h. This frequency might be measured, for example, by performing a sequence of experiments or observations at times 1, 2, …, t. The Ergodic Theorem says that, almost certainly (i.e., with probability 1), the empirical frequency will converge to the true probability of E, PrΩ(E), as the number of observations becomes large. The estimation of the true conditional probability structure from the frequencies of Heads and Tails in our illustrative coin-tossing system is possible precisely because the system is ergodic.

To understand the significance of this result, let Y and Z be two subsets of X, and suppose E is the event “h(1) is in Y”, while D is the event “h(2) is in Z”. Then the intersection E ∩ D is the event “h(1) is in Y, and h(2) is in Z”. The Ergodic Theorem says that, by performing a sequence of observations over time, we can empirically estimate PrΩ(E) and PrΩ(E ∩ D) with arbitrarily high precision. Thus, we can compute the ratio PrΩ(E ∩ D)/PrΩ(E). But this ratio is simply the conditional probability PrE(D). And so, we are able to estimate the conditional probability that the state at time 2 will be in Z, given that at time 1 it was in Y. This illustrates that, by allowing us to estimate unconditional probabilities empirically, the Ergodic Theorem also allows us to estimate conditional probabilities, and in this way to learn the properties of the conditional probability structure {PrE}E⊆Ω.
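The same counting recipe can be sketched for a genuinely dynamical ergodic system; here a two-state Markov chain (the transition probabilities are an illustrative choice, not from the text) whose conditional probability Pr(state in Z at t + 1 | state in Y at t) is recovered from transition counts along one long history:

```python
import random

random.seed(1)
# two states 'Y' and 'Z'; true transition probabilities of the chain
trans = {'Y': {'Y': 0.5, 'Z': 0.5}, 'Z': {'Y': 0.8, 'Z': 0.2}}

state = 'Y'
n_Y = 0    # number of times the chain visited Y
n_YZ = 0   # number of times a visit to Y was followed by Z
for _ in range(200_000):
    nxt = 'Y' if random.random() < trans[state]['Y'] else 'Z'
    if state == 'Y':
        n_Y += 1
        if nxt == 'Z':
            n_YZ += 1
    state = nxt

# empirical estimate of Pr(Z at t+1 | Y at t), true value 0.5
est = n_YZ / n_Y
```

No experimental "runs" are repeated here: a single history, read through the time symmetries, supplies all the frequencies needed, which is precisely what ergodicity licenses.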

We may thus conclude that ergodicity is what allows us to generalize from local observations to global laws. In effect, when we engage in scientific inference about some system, or even about the world at large, we rely on the hypothesis that this system, or the world, is ergodic. If our system, or the world, were “dappled”, then presumably we would not be able to presuppose ergodicity, and hence our ability to make scientific generalizations would be compromised.

Dance of the Shiva, q’i (chee) and Tibetan Sunyata. Manifestation of Mysticism.

अनेजदेकं मनसो जवीयो नैनद्देवाप्नुवन्पूर्वमर्षत् ।
तद्धावतोऽन्यान्नत्येति तिष्ठत् तस्मिन्नापो मातरिश्वा दधाति ॥

anejadekaṃ manaso javīyo nainaddevāpnuvanpūrvamarṣat |
taddhāvato’nyānnatyeti tiṣṭhat tasminnāpo mātariśvā dadhāti ||

The self is one. It is unmoving: yet faster than the mind. Thus moving faster, It is beyond the reach of the senses. Ever steady, It outstrips all that run. By its mere presence, the cosmic energy is enabled to sustain the activities of living beings.

तस्मिन् मनसि ब्रह्मलोकादीन्द्रुतं गच्छति सति प्रथमप्राप्त इवात्मचैतन्याभासो गृह्यते अतः मनसो जवीयः इत्याह ।

tasmin manasi brahmalokādīndrutaṃ gacchati sati prathamaprāpta ivātmacaitanyābhāso gṛhyate ataḥ manaso javīyaḥ ityāha |

When the mind moves fast towards the farthest worlds such as the brahmaloka, it finds the Atman, of the nature of pure awareness, already there; hence the statement that It is faster than the mind.

नित्योऽनित्यानां चेतनश्चेतनानाम्
एको बहूनां यो विदधाति कामान् ।
तमात्मस्थं योऽनुपश्यन्ति धीराः
तेषां शान्तिः शाश्वतं नेतरेषाम् ॥

nityo’nityānāṃ cetanaścetanānām
eko bahūnāṃ yo vidadhāti kāmān |
tamātmasthaṃ yo’nupaśyanti dhīrāḥ
teṣāṃ śāntiḥ śāśvataṃ netareṣām ||

He is the eternal in the midst of non-eternals, the principle of intelligence in all that are intelligent. He is One, yet fulfils the desires of many. Those wise men who perceive Him as existing within their own self, to them belongs eternal peace, and to none else.


Eastern mysticism approaches the manifestation of life in the cosmos and all that compose it from a position diametrically opposed to the view that prevailed until recently among the majority of Western scientists, philosophers, and religionists. Orientals see the universe as a whole, as an organism. For them all things are interconnected, links in a chain of beings permeated by consciousness which threads them together. This consciousness is the one life-force, originator of all the phenomena we know under the heading of nature, and it dwells within its emanations, urging them as a powerful inner drive to grow and evolve into ever more refined expressions of divinity. The One manifests, not only in all its emanations, but also through those emanations as channels: it is within them and yet remains transcendent as well.

The emphasis is on the Real as subject whereas in the West it is seen as object. If consciousness is the noumenal or subjective aspect of life in contrast to the phenomenal or objective — everything seen as separate objects — then only this consciousness can be experienced, and no amount of analysis can reveal the soul of Reality. To illustrate: for the ancient Egyptians, their numerous “gods” were aspects of the primal energy of the Divine Mind (Thoth) which, before the creation of our universe, rested, a potential in a subjective state within the “waters of Space.” It was through these gods that the qualities of divinity manifested.

A question still being debated runs: “How does the One become the many?” meaning: if there is a “God,” how do the universe and the many entities composing it come into being? This question does not arise among those who perceive the One to dwell in the many, and the many to live in the One from whom life and sustenance derive. Despite our Western separation of Creator and creation, and the corresponding distancing of “God” from human beings, Western mystics have held views similar to those of the East, e.g. Meister Eckhart, the Dominican theologian and preacher, who was accused of blasphemy for daring to say that he had once experienced nearness to the “Godhead.” His friends and followers were living testimony to the charisma (using the word in its original connotation of spiritual magnetism) of those who live the life of love for fellow beings: men like Johannes Tauler, Heinrich Suso, and the “admirable Ruysbroeck,” who expressed views similar to those of Eastern exponents of the spiritual way or path.

In old China, the universe was described as appearing first as q’i (chee), an emanation of Light, not the physical light that we know, but its divine essence sometimes called Tien, Heaven, in contrast to Earth. The q’i energy polarized as Yang and Yin, positive and negative electromagnetism. From the action and interaction of these two sprang the “10,000 things”: the universe, our world, the myriads of beings and things as we perceive them to be. In other words, the ancient Chinese viewed our universe as one of process, the One energy, q’i, proliferating into the many.

In their paintings Chinese artists depict man as a small but necessary element in gigantic natural scenes. And since we are parts of the cosmos, we are embodiments of all its potentials and our relationship depends upon how we focus ourselves: (1) harmoniously, i.e., in accord with nature; or (2) disharmoniously, interfering with the course of nature. We therefore affect the rest: our environment, all other lives, and bear full responsibility for the outcome of our thoughts and acts, our motivations, our impacts. Their art students were taught to identify with what they were painting, because there is life in every thing, and it is this life with which they must identify, with boulders and rocks no less than with birds flying overhead. Matter, energy, space, are all manifestations of q’i and we, as parts thereof, are intimately connected with all the universe.

In India, the oneness of life was seen through the prism of successive manifestations of Brahman, a neuter or impersonal term in Sanskrit for divinity, the equivalent of what Eckhart called the Godhead. Brahman is the source of the creative power, Brahma, Eckhart’s Creator; and also the origin of the sustaining and supporting energy or Vishnu, and of the destructive/regenerative force or Siva. As these three operate through the cosmos, the “world” as we know it, so do they also through ourselves on a smaller scale according to our capacity. Matter is perceived to be condensed energy, Chit or consciousness itself. To quote from the Mundaka Upanishad:

By the energism of Consciousness Brahman is massed; from that: Matter is born and from Matter Life and Mind and the worlds . . .

In another Hindu scripture, it is stated that when Brahma awakened from his period of rest between manifestations, he desired to contemplate himself as he is. By gazing into the awakening matter particles as into a mirror, he stirred them to exhibit their latent divine qualities. Since this process involves a continuous unfoldment from the center within, an ever-becoming, there can never be an end to the creativity — universal “days” comprising trillions of our human years, followed by a like number of resting “nights.”

We feel within ourselves the same driving urge to grow that runs through the entire, widespread universe, to express more and more of what is locked up in the formless or subjective realm of Be-ness, awaiting the magic moment to come awake in our phase of life.

Tibetan metaphysics embraces all of this in discussing Sunyata, which can be viewed as Emptiness if we use only our outer senses, or as Fullness if we inwardly perceive it to be full of energies of limitless ranges of wave-lengths/frequencies. This latter aspect of Space is the great mother of all, ever fecund, from whose “heart” emerge endless varieties of beings, endless forces, ever-changing variations — like the pulsing energies the new physicists perceive nuclear subparticles to be.

In the preface to his The Tao of Physics, Fritjof Capra tells how one summer afternoon he had a transforming experience by the seashore as he watched the waves rolling in and felt the rhythm of his own breathing. He saw dancing motes revealed in a beam of sunlight; particles of energy vibrating as molecules and atoms; cascades of energy pouring down upon us from outer space. All of this coming and going, appearing and disappearing, he equated with the Indian concept of the dance of Siva . . . he felt its rhythm, “heard” its sound, and knew himself to be a part of it. Through this highly personal, indeed mystical, experience Capra became aware of his “whole environment as being engaged in a gigantic cosmic dance.”

This is the gist of the old Chinese approach to physics: students were taught gravitation by observing the petals of a flower as they fall gracefully to the ground. As Gary Zukav expresses it in The Dancing Wu Li Masters: An Overview of the New Physics:

The world of particle physics is a world of sparkling energy forever dancing with itself in the form of its particles as they twinkle in and out of existence, collide, transmute, and disappear again.

That is: the dance of Siva is the dance of attraction and repulsion between charged particles of the electromagnetic force. This is a kind of “transcendental” physics, going beyond the “world of opposites” and approaching a mystical view of the larger Reality that is to our perceptions an invisible foundation of what we call “physical reality.” It is so far beyond the capacity or vocabulary of the mechanically rational part of our mind to define, that the profound Hindu scripture Isa Upanishad prefers to suggest the thought by a paradox:

तदेजति तन्नैजति तद्दूरे तद्वन्तिके ।
तदन्तरस्य सर्वस्य तदु सर्वस्यास्य बाह्यतः ॥

tadejati tannaijati taddūre tadvantike |
tadantarasya sarvasya tadu sarvasyāsya bāhyataḥ ||

It moves. It moves not. It is far, and it is near. It is within all this, and It is verily outside of all this.

Indeed, there is a growing recognition, mostly among younger physicists, that consciousness is more than another word for awareness, more than a by-product of cellular activity (or of atomic or subatomic vibrations). For instance, Jack Sarfatti, a quantum physicist, says that signals pulsating through space provide instant communication between all parts of the cosmos. “These signals can be likened to pulses of nerve cells of a great cosmic brain that permeates all parts of space” (Michael Talbot, Mysticism and the New Physics). Talbot quotes Sir James Jeans’ remark, “the universe is more like a giant thought than a giant machine,” commenting that the “substance of the great thought is consciousness” which pervades all space. Or as Schrödinger would have it:

Consciousness is never experienced in the plural, only in the singular…. Consciousness is a singular of which the plural is unknown; that there is only one thing and that what seems to be a plurality is merely a series of different aspects of this one thing, produced by a deception (the Indian Maya).

Other phenomena reported as occurring in the cosmos at great distances from each other, yet simultaneously, appear to be connected in some way so far unexplained, but to which the term consciousness has been applied.

In short, the mystic deals with direct experience; the intuitive scientist is open-minded, and indeed the great discoveries such as Einstein’s were made by amateurs in their field untrammeled by prior definitions and the limitations inherited from past speculations. This freedom enabled them to strike out on new paths that they cleared and paved. The rationalist tries to grapple with the problems of a living universe using only analysis and whatever the computer functions of the mind can put together.

The theosophic perspective upon universal phenomena is based on the concept of the ensoulment of the cosmos. That is: from the smallest subparticle we know anything about to the largest star-system that has been observed, each and all possess at their core vitality, energy, an active something propelling towards growth, evolution of faculties from within.

The only “permanent” in the whole universe is motion: unceasing movement, and the ideal perception is a blend of the mystical with the scientific, the intuitive with the rational.

Black-Scholes (BS) Analysis and Arbitrage-Free Financial Economics


The Black-Scholes (BS) analysis of derivative pricing is one of the most beautiful results in financial economics. Several assumptions underlie the BS analysis, such as the quasi-Brownian character of the underlying price process, constant volatility, and the absence of arbitrage.

Let us denote by V(t, S) the price of a derivative at time t, conditional on the underlying asset price being equal to S. We assume that the underlying asset price follows a geometric Brownian motion,

dS/S = μdt + σdW —– (1)

with some average return μ and the volatility σ. They can be kept constant or be arbitrary functions of S and t. The symbol dW stands for the standard Wiener process. To price the derivative one forms a portfolio which consists of the derivative and ∆ units of the underlying asset so that the price of the portfolio is equal to Π:

Π = V − ∆S —– (2)
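The price process (1) can be discretised with a simple Euler–Maruyama step. The sketch below is illustrative only; all parameter values (initial price, drift, volatility, step size) are assumptions, not values from the text:

```python
import numpy as np

def simulate_gbm(s0, mu, sigma, dt, n_steps, rng):
    """Euler-Maruyama discretisation of dS/S = mu dt + sigma dW."""
    s = np.empty(n_steps + 1)
    s[0] = s0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))   # Wiener increment
        s[i + 1] = s[i] * (1.0 + mu * dt + sigma * dw)
    return s

rng = np.random.default_rng(0)
path = simulate_gbm(s0=100.0, mu=0.05, sigma=0.2, dt=1e-3, n_steps=1000, rng=rng)
```

Here μ and σ are held constant for simplicity; as noted above, they could just as well be functions of S and t.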

The change in the portfolio price during a time step dt can be written as

dΠ = dV − ∆dS = (∂V/∂t + (σ2S2/2) ∂2V/∂S2) dt + (∂V/∂S − ∆) dS —– (3)

by Itô’s lemma. We can now choose the number of underlying asset units ∆ to be equal to ∂V/∂S, cancelling the second term on the right-hand side of the last equation. Since, after cancellation, there are no risky contributions (i.e. no term proportional to dS), the portfolio is risk-free and hence, in the absence of arbitrage, its price grows at the risk-free interest rate r:

dΠ = rΠdt —– (4)

or, in other words, the price of the derivative V(t,S) shall obey the Black-Scholes equation:

∂V/∂t + (σ2S2/2) ∂2V/∂S2 + rS ∂V/∂S − rV = 0 —– (5)

In what follows we use this equation in the following operator form:

LBSV = 0, LBS = ∂/∂t + (σ2S2/2) ∂2/∂S2 + rS ∂/∂S − r —– (6)
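As a consistency check, the operator LBS can be applied numerically to the closed-form Black-Scholes call price, which should lie in its kernel. The call formula and the central-difference stencil below are standard; the parameter values (strike, rate, volatility, expiry) are illustrative assumptions:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(t, s, k=100.0, r=0.05, sigma=0.2, T=1.0):
    """Closed-form Black-Scholes price of a European call."""
    tau = T - t
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return s * norm_cdf(d1) - k * math.exp(-r * tau) * norm_cdf(d2)

def l_bs(v, t, s, r=0.05, sigma=0.2, h=1e-3):
    """Apply LBS = d/dt + (sigma^2 s^2 / 2) d^2/ds^2 + r s d/ds - r
    to a price function v(t, s) by central finite differences."""
    dv_dt = (v(t + h, s) - v(t - h, s)) / (2 * h)
    dv_ds = (v(t, s + h) - v(t, s - h)) / (2 * h)
    d2v_ds2 = (v(t, s + h) - 2 * v(t, s) + v(t, s - h)) / h**2
    return dv_dt + 0.5 * sigma**2 * s**2 * d2v_ds2 + r * s * dv_ds - r * v(t, s)

residual = l_bs(bs_call, 0.5, 100.0)   # LBS V = 0, so this should be ~0
```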

To formulate the model we return to equation (1). Let us imagine that at some moment of time τ < t a fluctuation of the return (an arbitrage opportunity) appeared in the market, at which point the price of the underlying stock was S′ ≡ S(τ). We denote this instantaneous arbitrage return by ν(τ, S′). Arbitragers react to this circumstance and act in such a way that the arbitrage gradually disappears and the market returns to its equilibrium state, i.e. the absence of arbitrage. For small enough fluctuations it is natural to assume that the arbitrage return R (in the absence of other fluctuations) evolves according to the following equation:

dR/dt = −λR,   R(τ) = ν(τ,S′) —– (7)

with some parameter λ which is characteristic of the market. This parameter can either be estimated from a microscopic theory or be found from market data using an analogue of the fluctuation-dissipation theorem. The fluctuation-dissipation theorem states that the linear response of a given system to an external perturbation is expressed in terms of the fluctuation properties of the system in thermal equilibrium. This theorem may be represented by a stochastic equation describing the fluctuation, which is a generalization of the familiar Langevin equation in the classical theory of Brownian motion. In the latter case the parameter λ can be estimated from market data as

λ = −1/(t − t′) · log [⟨(LBSV/(V − S∂V/∂S))(t) · (LBSV/(V − S∂V/∂S))(t′)⟩market / ⟨(LBSV/(V − S∂V/∂S))2(t)⟩market] —– (8)

and may well be a function of time and the price of the underlying asset. We consider λ as a constant to get simple analytical formulas for derivative prices. The generalization to the case of time-dependent parameters is straightforward.
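The relaxation equation (7) can be checked numerically: an explicit Euler integration of dR/dt = −λR reproduces the exponential decay ν e−λ(t−τ). The λ and ν values below are illustrative assumptions:

```python
import math

# Explicit Euler integration of Eq. (7), dR/dt = -lambda * R, compared
# with the exact decay nu * exp(-lambda * (t - tau)); values illustrative.
lam, nu = 2.0, 0.1
dt, n_steps = 1e-4, 10_000      # integrate over t - tau = 1
R = nu                          # initial condition R(tau) = nu
for _ in range(n_steps):
    R -= lam * R * dt           # decay towards the no-arbitrage state
exact = nu * math.exp(-lam * 1.0)
```

For this step size the Euler result agrees with the exact exponential to a few parts in ten thousand.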

The solution of equation (7) gives R(t) = ν(τ, S′)e−λ(t−τ), which, after summing over all possible fluctuations with the corresponding frequencies, leads to the following expression for the arbitrage return at time t:

R(t, S) = ∫0t dτ ∫0∞ dS′ P(t, S|τ, S′) e−λ(t−τ) ν(τ, S′), t < T —– (9)

where T is the expiration date for the derivative contract started at time t = 0 and the function P (t, S|τ, S′) is the conditional probability for the underlying price. To specify the stochastic process ν(t,S) we assume that the fluctuations at different times and underlying prices are independent and form the white noise with a variance Σ2 · f (t):

⟨ν(t, S)⟩ = 0 , ⟨ν(t, S) ν (t′, S′)⟩ = Σ2 · θ(T − t) f(t) δ(t − t′) δ(S − S′) —– (10)

The function f(t) is introduced here to smooth out the transition to the zero virtual arbitrage at the expiration date. The quantity Σ2 · f (t) can be estimated from the market data as:

Σ2/(2λ) · f(t) = ⟨(LBSV/(V − S∂V/∂S))2(t)⟩market —– (11)

and has to vanish as time tends to the expiration date. Since we introduced the stochastic arbitrage return R(t, S), equation (4) has to be replaced by the following equation:

dΠ = [r + R(t, S)]Πdt, which can be rewritten as

LBSV = R(t, S) (V − S∂V/∂S) —– (12)

using the operator LBS. 
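The modified growth condition dΠ = [r + R(t, S)]Π dt can be explored with a small Monte Carlo sketch, in which R is generated by the relaxation dynamics (7) driven by white-noise shocks as in (10). Everything here (parameter values, the discretisation, the neglect of f(t)) is an illustrative assumption, not part of the model's analytics:

```python
import numpy as np

# Monte Carlo sketch of dPi = [r + R] Pi dt, with R relaxing at rate
# lambda (Eq. (7)) and fed by white-noise arbitrage shocks (Eq. (10)).
rng = np.random.default_rng(2)
r, lam, sigma_nu = 0.05, 2.0, 0.3          # illustrative parameters
dt, n_steps, n_paths = 1e-2, 100, 5000     # horizon t = 1
pi = np.ones(n_paths)                      # portfolio values
R = np.zeros(n_paths)                      # arbitrage return per path
for _ in range(n_steps):
    pi *= 1.0 + (r + R) * dt               # growth at rate r + R
    R += -lam * R * dt + sigma_nu * np.sqrt(dt) * rng.normal(size=n_paths)
mean_pi = pi.mean()                        # compare with the BS benchmark
risk_free = np.exp(r * n_steps * dt)       # pure BS growth, exp(r t)
```

Since the shocks have zero mean, the average portfolio value stays close to the risk-free benchmark, while individual paths fluctuate around it; only averages and higher moments of the price are meaningful, as noted below.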

It is worth noting that the model reduces to the pure BS analysis in the limit of infinitely fast market reaction, i.e. λ → ∞. It also reduces to the BS model when there are no arbitrage opportunities at all, i.e. when Σ = 0. In the presence of the random arbitrage fluctuations R(t, S), the only objects that can be calculated are the average value and the higher moments of the derivative price.