Superfluid He-3. Thought of the Day 130.0


At higher temperatures 3He is a gas, while below a temperature of 3 K – due to van der Waals forces – 3He is a normal liquid with all the symmetries a condensed matter system can have: translational symmetry, the gauge symmetry U(1), and two SO(3) symmetries, for spin (SOS(3)) and orbital (SOL(3)) rotations. At temperatures below 100 mK, 3He behaves as a strongly interacting Fermi liquid whose physical properties are well described by Landau’s theory. The quasiparticles of 3He (i.e. 3He atoms “dressed” in their mutual interactions) have spin 1/2 and, like electrons, they can form Cooper pairs. However, unlike electrons in a metal, 3He is a liquid without a lattice, so the electron-phonon interaction responsible for superconductivity cannot be invoked here. Since the 3He quasiparticles have spin, the magnetic interaction between the spins grows as the temperature falls until, at a certain temperature, Cooper pairs – coupled pairs of 3He quasiparticles – are created and the normal 3He liquid becomes a superfluid. The Cooper pairs produce the superfluid component, while the remaining unpaired 3He quasiparticles constitute the normal component (N – phase).

The physical picture of superfluid 3He is more complicated than that of superconducting electrons. First, the 3He quasiparticles are bare atoms, and to create a Cooper pair they have to rotate around their common center of mass, generating an orbital angular momentum of the pair (L = 1). Secondly, the spin of the Cooper pair is equal to one (S = 1), so superfluid 3He has magnetic properties. Thirdly, the orbital and spin angular momenta of the pair are coupled via a dipole-dipole interaction.

It is evident that the phase transition of 3He into the superfluid state is accompanied by the spontaneous breaking of the orbital, spin and gauge symmetries SOL(3) × SOS(3) × U(1) – though not of the translational symmetry, as superfluid 3He is still a liquid. Finally, an energy gap ∆ appears in the energy spectrum, separating the Cooper pairs (the ground state) from the unpaired quasiparticles – the Fermi excitations.

In superfluid 3He the density of Fermi excitations decreases upon further cooling. For temperatures below around 0.25Tc (where Tc is the superfluid transition temperature), the density of Fermi excitations is so low that they can be regarded as a non-interacting gas, because almost all the quasiparticles are paired and occupy the ground state. At these very low temperatures, therefore, the superfluid phases of helium-3 represent well-defined models of quantum vacua, which allows us to study the influence of various external forces on the ground state, and on the excitations from this state as well.

The ground state of superfluid 3He is formed by Cooper pairs carrying both spin (S = 1) and orbital momentum (L = 1). As a consequence of this spin-triplet, orbital p-wave pairing, the order parameter (or wave function) is far more complicated than that of conventional superconductors or of superfluid 4He. The order parameter of superfluid 3He joins two spaces – the orbital (or k) space and the spin space – and can be expressed as:

Ψ(k) = Ψ↑↑(k̂)|↑↑⟩ + Ψ↓↓(k̂)|↓↓⟩ + √2 Ψ↑↓(k̂)(|↑↓⟩ + |↓↑⟩) —– (1)

where k̂ is a unit vector in k space defining a position on the Fermi surface, and Ψ↑↑(k̂), Ψ↓↓(k̂) and Ψ↑↓(k̂) are the amplitudes of the spin substates |↑↑⟩, |↓↓⟩ and (|↑↓⟩ + |↓↑⟩), determined by their projections on a quantization axis z.

The order parameter is more often written in a vector representation, as a vector d(k) in spin space. For any orientation of k on the Fermi surface, d(k) points in the direction for which the Cooper pairs have zero spin projection. Moreover, the amplitude of the superfluid condensate at the same point is given by |d(k)|² = (1/2) tr(ΨΨ†). The vector form of the order parameter d(k) can be written in components as:

dν(k) = ∑μ Aνμkμ —– (2)

where ν (1, 2, 3) labels orthogonal directions in spin space and μ (x, y, z) those in orbital space. The matrix components Aνμ are complex, and each of them theoretically represents a possible superfluid phase of 3He. Experimentally, however, only three are stable.

Looking at the phase diagram of 3He we can see the presence of two main superfluid phases: the A – phase and the B – phase. While the B – phase consists of all three spin components, the A – phase lacks the (|↑↓⟩ + |↓↑⟩) component. There is also a narrow region of the A1 superfluid phase, which exists only at higher pressures and temperatures and in a nonzero magnetic field. The A1 – phase has only one spin component, |↑↑⟩. The phase transition from the N – phase to the A or B – phase is a second-order transition, while the phase transition between the superfluid A and B phases is of first order.

The B – phase occupies the low-field region and is stable down to the lowest temperatures. In zero field, the B – phase is a pure manifestation of p-wave superfluidity. Because it has equal numbers of all possible spin and angular momentum projections, its energy gap, separating the ground state from the excitations, is isotropic in k space.

The A – phase is preferred at higher pressures and temperatures in zero field. In the limit T → 0 K, the A – phase can exist at higher magnetic fields (above 340 mT) at zero pressure, and the critical field needed to create the A – phase rises as the pressure increases. In this phase, all Cooper pairs have orbital momenta oriented along a common direction defined by the vector l̂ – the direction in which the energy gap is reduced to zero. This results in a remarkable difference between the superfluid phases: the B – phase has an isotropic gap, while the energy spectrum of the A – phase has two Fermi points, i.e. points with zero energy gap. The difference in gap structure leads to different thermodynamic properties of the quasiparticle excitations in the limit T → 0 K. The density of excitations in the B – phase falls exponentially with temperature, as exp(−∆/kBT), where kB is the Boltzmann constant. At the lowest temperatures their density is so low that the excitations can be regarded as a non-interacting gas with a mean free path of the order of kilometers. In the A – phase, on the other hand, the Fermi points (or nodes) are far more populated with quasiparticle excitations. The orientation of the nodes along the l̂ direction makes the A – phase excitations almost perfectly one-dimensional. The presence of the nodes in the energy spectrum leads to a T³ temperature dependence of the density of excitations and of the entropy. As a result, as T → 0 K, the specific heat of the A – phase is far greater than that of the B – phase. In this limit, the A – phase represents a model system for the vacuum of the Standard Model, and the B – phase a model system for the Dirac vacuum.
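To make the contrast concrete, here is a minimal numerical sketch (my own illustration, assuming the standard BW form of Aνμ for the B – phase and the ABM form for the A – phase, which the text does not spell out) of the gap |d(k̂)| computed from equation (2):

```python
# Sketch: gap anisotropy |d(k)| from d_nu(k) = sum_mu A_numu k_mu (eq. 2).
# The BW (B-phase) and ABM (A-phase) matrices below are assumed standard
# forms, up to rotations and phases; Delta is the overall gap amplitude.
import numpy as np

Delta = 1.0

# B-phase (BW state): A_numu = Delta * delta_numu, so |d(k)| = Delta
# for every direction on the Fermi surface -- an isotropic gap.
A_B = Delta * np.eye(3)

# A-phase (ABM state): A_numu = Delta * d_nu (m + i n)_mu, with l = m x n
# (here the z axis) the common orbital direction of the pairs.
d_hat = np.array([0.0, 0.0, 1.0])
m_hat = np.array([1.0, 0.0, 0.0])
n_hat = np.array([0.0, 1.0, 0.0])
A_A = Delta * np.outer(d_hat, m_hat + 1j * n_hat)

def gap(A, theta, phi=0.3):
    """|d(k)| for the unit vector k(theta, phi) on the Fermi surface."""
    k = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    d = A @ k
    return np.sqrt(np.abs(np.vdot(d, d)))

for theta in np.linspace(0.0, np.pi, 5):
    print(f"theta/pi = {theta/np.pi:4.2f}: "
          f"B gap = {gap(A_B, theta):.3f}, A gap = {gap(A_A, theta):.3f}")
# The B gap equals Delta everywhere; the A gap equals Delta*sin(theta),
# vanishing at theta = 0 and pi -- the two Fermi points (nodes) along l.
```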

In experiments with the superfluid 3He phases, the application of various external forces can excite collective modes of the order parameter, representing so-called Bose excitations, while the Fermi excitations are responsible for the energy dissipation. The coexistence and mutual interactions of these excitations in the limit T → 0 K (i.e. in the limit of low energies) can be described by quantum field theory, where the Bose and Fermi excitations represent Bose and Fermi quantum fields. Thus 3He has a much broader impact, offering the possibility of experimentally investigating quantum-field and cosmological theories via their analogies with the superfluid phases of 3He.

Matter Fields

In classical relativity theory, one generally takes for granted that all there is, and all that happens, can be described in terms of various “matter fields,” each of which is represented by one or more smooth tensor (or spinor) fields on the spacetime manifold M. The latter are assumed to satisfy particular “field equations” involving the spacetime metric gab.

Associated with each matter field F is a symmetric smooth tensor field Tab characterized by the property that, for all points p in M, and all future-directed, unit timelike vectors ξa at p, Tabξb is the four-momentum density of F at p as determined relative to ξa.

Tab is called the energy-momentum field associated with F. The four-momentum density vector Tabξb at a point can be further decomposed into its temporal and spatial components relative to ξa,

Tabξb = (Tmnξmξn)ξa + Tmbhmaξb

where hma = gma − ξmξa is the projection tensor orthogonal to ξa; the first term on the RHS is the energy density, while the second term is the three-momentum density. A number of assumptions about matter fields can be captured as constraints on the energy-momentum tensor fields with which they are associated.

Weak Energy Condition (WEC): Given any timelike vector ξa at any point in M, Tabξaξb ≥ 0.

Dominant Energy Condition (DEC): Given any timelike vector ξa at any point in M, Tabξaξb ≥ 0 and Tabξb is timelike or null.

Strengthened Dominant Energy Condition (SDEC): Given any timelike vector ξa at any point in M, Tabξaξb ≥ 0 and, if Tab ≠ 0 there, then Tabξb is timelike.

Conservation Condition (CC): ∇aTab = 0 at all points in M.

The WEC asserts that the energy density of F, as determined by any observer at any point, is non-negative. The DEC adds the requirement that the four-momentum density of F, as determined by any observer at any point, is a future-directed causal (i.e., timelike or null) vector. We can understand this second clause to assert that the energy of F does not propagate with superluminal velocity. The strengthened version of the DEC just changes “causal” to “timelike” in the second clause. This field-theoretic formulation avoids any reference to “point particles.” Each of the listed energy conditions is strictly stronger than the ones that precede it.
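A hedged numerical illustration (my own, not from the text): in Minkowski spacetime one can check the WEC and DEC directly for a perfect-fluid energy-momentum tensor Tab = (ρ + p)uaub − pgab with ρ ≥ |p| – an assumed concrete matter field, not the general field F – by sampling random unit timelike vectors ξa:

```python
# Sketch: checking WEC and DEC numerically in Minkowski spacetime,
# signature (+,-,-,-), for an assumed perfect fluid with rho >= |p|.
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # metric g_{ab}; here g^{ab} = g_{ab}

def perfect_fluid_T(rho, p):
    u = np.array([1.0, 0.0, 0.0, 0.0])          # fluid 4-velocity, u^a u_a = 1
    return (rho + p) * np.outer(u, u) - p * g   # T^{ab}

def random_unit_timelike(rng):
    v = rng.normal(size=3)
    v *= rng.uniform(0.0, 0.99) / np.linalg.norm(v)   # spatial speed < 1
    xi = np.array([1.0, *v])
    return xi / np.sqrt(xi @ g @ xi)                  # xi^a xi_a = 1

rng = np.random.default_rng(0)
T = perfect_fluid_T(rho=1.0, p=0.3)
for _ in range(10_000):
    xi = random_unit_timelike(rng)
    xi_dn = g @ xi                      # lower the index: xi_b
    assert xi_dn @ T @ xi_dn >= 0.0     # WEC: T^{ab} xi_a xi_b >= 0
    J = T @ xi_dn                       # four-momentum density T^{ab} xi_b
    assert J @ g @ J >= -1e-12          # DEC: J^a is timelike or null
print("WEC and DEC hold for all sampled observers.")
```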

The CC, finally, asserts that the energy-momentum carried by F is locally conserved. If two or more matter fields are present in the same region of space-time, it need not be the case that each one individually satisfies the condition. Interaction may occur. But it is a fundamental assumption that the composite energy-momentum field formed by taking the sum of the individual ones satisfies it. Energy-momentum can be transferred from one matter field to another, but it cannot be created or destroyed. The stated conditions have a number of consequences that support the interpretations.

A subset S of M is said to be achronal if there do not exist points p and q in S such that p ≪ q. Let γ : I → M be a smooth curve. We say that a point p in M is a future-endpoint of γ if, for all open sets O containing p, there exists an s0 in I such that, ∀ s ∈ I, if s ≥ s0, then γ(s) ∈ O; i.e., γ eventually enters and remains in O. Now let S be an achronal subset of M. The domain of dependence D(S) of S is the set of all points p in M with this property: given any smooth causal curve without (past- or future-) endpoint, if its image contains p, then it intersects S. So, in particular, S ⊆ D(S).


Let S be an achronal subset of M. Further, let Tab be a smooth, symmetric field on M that satisfies both the dominant energy and conservation conditions. Finally, assume Tab = 0 on S. Then Tab = 0 on all of D(S).

The intended interpretation of the proposition is clear. If energy-momentum cannot propagate (locally) outside the null-cone, and if it is conserved, and if it vanishes on S, then it must vanish throughout D(S). After all, how could it “get to” any point in D(S)? According to the standard interpretive principle, free massive point particles traverse (images of) timelike geodesics. This principle can be captured in the language of matter fields by considering a sequence of smaller and smaller bodies converging to a point particle: it turns out that if the energy-momentum content of each body in the sequence satisfies appropriate conditions, then the convergence point will necessarily traverse (the image of) a timelike geodesic.

Let γ : I → M be a smooth curve. Suppose that, given any open subset O of M containing γ[I], ∃ a smooth symmetric field Tab on M such that the following conditions hold.

(1) Tab satisfies the SDEC.
(2) Tab satisfies the CC.
(3) Tab = 0 outside of O.
(4) Tab ≠ 0 at some point in O.

Then γ is timelike and can be reparametrized so as to be a geodesic. This might be paraphrased another way: suppose that, for some smooth curve γ, arbitrarily small bodies with energy-momentum satisfying conditions (1) and (2) can contain the image of γ in their worldtubes; then γ must be a timelike geodesic (up to reparametrization). Bodies here are understood to be “free” if their internal energy-momentum is conserved (by itself). If a body is acted on by a field, it is only the composite energy-momentum of the body and field together that is conserved.


But this formulation takes for granted that we can keep the background spacetime metric gab fixed while altering the fields Tab that live on M. This is justifiable only to the extent that we are dealing with test bodies whose effect on the background spacetime structure is negligible.

We have here a precise proposition in the language of matter fields that, at least to some degree, captures the interpretive principle. Similarly, it is possible to capture the behavior of light by considering the behavior of solutions to Maxwell’s equations in a limiting regime (“the optical limit”) where wavelengths are small. The resulting proposition asserts, in effect, that when one passes to this limit, packets of electromagnetic waves are constrained to move along (images of) null geodesics.

Something Out of Almost Nothing. Drunken Risibility.

Kant’s first antinomy makes the error of the excluded third option, i.e. it is not impossible that the universe could have both a beginning and an eternal past. If some kind of metaphysical realism is true, including an observer-independent and relational time, then a solution of the antinomy is conceivable. It is based on the distinction between a microscopic and a macroscopic time scale. Only the latter is characterized by an asymmetry of nature under a reversal of time, i.e. the property of having a global (coarse-grained) evolution – an arrow of time – or many arrows, if they are independent from each other. Thus, the macroscopic scale is by definition temporally directed – otherwise it would not exist.

On the microscopic scale, however, only local, statistically distributed events without dynamical trends, i.e. a global time-evolution or an increase of entropy density, exist. This is the case if one or both of the following conditions are satisfied: First, if the system is in thermodynamic equilibrium (e.g. there is degeneracy). And/or second, if the system is in an extremely simple ground state or meta-stable state. (Meta-stable states have a local, but not a global minimum in their potential landscape and, hence, they can decay; ground states might also change due to quantum uncertainty, i.e. due to local tunneling events.) Some still speculative theories of quantum gravity permit the assumption of such a global, macroscopically time-less ground state (e.g. quantum or string vacuum, spin networks, twistors). Due to accidental fluctuations, which exceed a certain threshold value, universes can emerge out of that state. Due to some also speculative physical mechanism (like cosmic inflation) they acquire – and, thus, are characterized by – directed non-equilibrium dynamics, specific initial conditions, and, hence, an arrow of time.

It is a matter of debate whether such an arrow of time is

1) irreducible, i.e. an essential property of time,

2) governed by some unknown fundamental and not only phenomenological law,

3) the effect of specific initial conditions or

4) of consciousness (if time is in some sense subjective), or

5) even an illusion.

Many physicists favour special initial conditions, though there is no consensus about their nature and form. But in the context at issue it is sufficient to note that such a macroscopic global time-direction is the main ingredient of Kant’s first antinomy, for the question is whether this arrow has a beginning or not.

If time’s arrow were inevitably subjective, ontologically irreducible and fundamental, and not only a kind of illusion – that is, if some form of metaphysical idealism were true – then physical cosmology about a time before time would be mistaken or quite irrelevant. However, if we do not want to neglect an observer-independent physical reality and adopt solipsism or other forms of idealism – and there are strong arguments in favor of some form of metaphysical realism – then Kant’s rejection seems hasty. Furthermore, if a Kantian is not willing to give up some kind of metaphysical realism, namely the belief in a “Ding an sich“, a thing in itself – though some philosophers, the German idealists for instance, actually insisted that this is superfluous – he has to admit either that time is a subjective illusion or that there is a dualism between an objective timeless world and a subjective arrow of time. Contrary to Kant’s thoughts, there are reasons to believe that it is possible, at least conceptually, that time both has a beginning – in the macroscopic sense, with an arrow – and is eternal – in the microscopic notion of a steady state with statistical fluctuations.

Is there also some physical support for this proposal?

Surprisingly, quantum cosmology offers a possibility that the arrow has a beginning and that it nevertheless emerged out of an eternal state without any macroscopic time-direction. (Note that there are some parallels to a theistic conception of the creation of the world here, e.g. in the Augustinian tradition which claims that time together with the universe emerged out of a time-less God; but such a cosmological argument is quite controversial, especially in a modern form.) So this possible overcoming of the first antinomy is not only a philosophical conceivability but is already motivated by modern physics. At least some scenarios of quantum cosmology, quantum geometry/loop quantum gravity, and string cosmology can be interpreted as examples for such a local beginning of our macroscopic time out of a state with microscopic time, but with an eternal, global macroscopic timelessness.

To put it in a more general but abstract framework and to get a sketchy illustration, consider the following picture.


Physical dynamics can be described using “potential landscapes” of fields. For simplicity, here only the variable potential (or energy density) of a single field is shown. To illustrate the dynamics, one can imagine a ball moving along the potential landscape. Depressions stand for states which are stable, at least temporarily. Due to quantum effects, the ball can “jump over” or “tunnel through” the hills. The deepest depression represents the ground state.

In the common theories the state of the universe – the product of all its matter and energy fields, roughly speaking – evolves out of a metastable “false vacuum” into a “true vacuum” which has a state of lower energy (potential). There might exist many (perhaps even infinitely many) true vacua, which would correspond to universes with different constants or laws of nature. It is more plausible to start with a ground state which is the minimum of what can physically exist. According to this view an absolute nothingness is impossible. There is something rather than nothing because something cannot come out of absolutely nothing, and something does obviously exist. Thus, something can only change, and this change might be described by physical laws. Hence, the ground state is almost “nothing”, but can become thoroughly “something”. Possibly, our universe – and, independently of it, many others, probably most of them having different physical properties – arose from such a phase transition out of a quasi-atemporal quantum vacuum (and, perhaps, got disconnected completely). Tunneling back might be prevented by the exponential expansion of this brand new space. Because of this cosmic inflation the universe not only became gigantic; at the same time the potential hill broadened enormously and became (almost) impassable. This prevents the universe from relapsing into its non-existence. On the other hand, if there is no physical mechanism that prevents tunneling back, or at least makes it very improbable, there is still another option: if infinitely many universes originated, some of them could be long-lived for statistical reasons alone. But this possibility is less predictive and therefore an inferior kind of explanation for the absence of tunneling back.
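A toy sketch of such a landscape (all coefficients invented purely for illustration): a quartic potential V(φ) with a shallow false vacuum, a barrier, and a deeper true vacuum, whose critical points can be located numerically:

```python
# Toy "potential landscape": a quartic V(phi) with a metastable false
# vacuum, a barrier ("hill"), and a deeper true vacuum. All numbers
# are invented for illustration.
import numpy as np

def V(phi):
    return phi**4 - 4.2 * phi**3 + 4.4 * phi**2 - 0.3 * phi

phi = np.linspace(-0.5, 3.0, 4001)
v = V(phi)
dv = np.diff(v)
minima = phi[1:-1][(dv[:-1] < 0) & (dv[1:] > 0)]   # depressions
maxima = phi[1:-1][(dv[:-1] > 0) & (dv[1:] < 0)]   # hills

false_vac, true_vac, barrier = minima[0], minima[-1], maxima[0]
print(f"false vacuum: phi = {false_vac:.2f}, V = {V(false_vac):+.3f}")
print(f"barrier:      phi = {barrier:.2f}, V = {V(barrier):+.3f}")
print(f"true vacuum:  phi = {true_vac:.2f}, V = {V(true_vac):+.3f}")
# The "ball" can sit for a long time in the shallow false vacuum and
# tunnel through the barrier; inflation then widens the barrier,
# making a relapse (tunneling back) exponentially improbable.
```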

Another crucial question remains even if universes could come into being out of fluctuations of (or in) a primitive substrate, i.e. some patterns of superposition of fields with local overdensities of energy: Is spacetime part of this primordial stuff, or is it also a product of it? Or, more specifically: Does such a primordial quantum vacuum have a semi-classical spacetime structure, or is it made up of more fundamental entities? Unique-universe accounts, especially the modified Eddington models – the soft bang/emergent universe – presuppose some kind of semi-classical spacetime. The same is true for some multiverse accounts describing our universe, where Minkowski space, a tiny closed finite space, or the infinite de Sitter space is assumed, and for string-theory-inspired models like the pre-big bang account, because string and M-theory are still formulated in a background-dependent way, i.e. they require the existence of a semi-classical spacetime. A different approach assumes “building blocks” of spacetime, a kind of pregeometry – for instance the twistor approach of Roger Penrose, or the cellular automata approach of Stephen Wolfram. The most elaborated account in this line of reasoning is quantum geometry (loop quantum gravity), in which “atoms of space and time” underlie everything.

Though the question whether semiclassical spacetime is fundamental or not is crucial, an answer might nevertheless be neutral with respect to the micro-/macrotime distinction. In both kinds of quantum vacuum accounts the macroscopic time scale is not present. And the microscopic time scale has to be there in some respect, because fluctuations represent change (or are manifestations of change). This change, reversible and relationally conceived, does not occur “within” microtime but constitutes it. Out of a total stasis nothing new and different can emerge, because an uncertainty principle – fundamental for all quantum fluctuations – would not be realized. In an almost, but not completely, static quantum vacuum, however, macroscopically nothing changes either, but there are microscopic fluctuations.

The pseudo-beginning of our universe (and probably infinitely many others) is a viable alternative both to initial and past-eternal cosmologies and philosophically very significant. Note that this kind of solution bears some resemblance to a possibility of avoiding the spatial part of Kant’s first antinomy, i.e. his claimed proof of both an infinite space without limits and a finite, limited space: The theory of general relativity describes what was considered logically inconceivable before, namely that there could be universes with finite, but unlimited space, i.e. this part of the antinomy also makes the error of the excluded third option. This offers a middle course between the Scylla of a mysterious, secularized creatio ex nihilo, and the Charybdis of an equally inexplicable eternity of the world.

In this context it is also possible to defuse some explanatory problems of the origin of “something” (or “everything”) out of “nothing” as well as a – merely assumable, but never provable – eternal cosmos or even an infinitely often recurring universe. But that does not offer a final explanation or a sufficient reason, and it cannot eliminate the ultimate contingency of the world.

Conjuncted: Avarice

Greed followed by avarice…. We consider the variation in which events occur at a rate equal to the difference in capital of the two traders. That is, an individual is more likely to take capital from a much poorer person than from someone of only slightly less wealth. For this “avaricious” exchange, the corresponding rate equations are

dck/dt = ck-1j=1k-1(k – 1 – j)cj + ck+1j=k+1(j – k – 1)cj – ckj=1|k – j|cj —– (1)

while the total density obeys,

dN/dt = −c_1(1 − N) —– (2)

under the assumption that the total wealth density is set equal to one, ∑_k k c_k = 1.

These equations can be solved by again applying scaling. For this purpose, it is first expedient to rewrite the rate equation as,

dc_k/dt = (c_{k−1} − c_k) ∑_{j=1}^{k−1} (k − j) c_j − c_{k−1} ∑_{j=1}^{k−1} c_j + (c_{k+1} − c_k) ∑_{j=k+1}^{∞} (j − k) c_j − c_{k+1} ∑_{j=k+1}^{∞} c_j —– (3)

Taking the continuum limit then gives

∂c/∂t = ∂c/∂k − N ∂(kc)/∂k

We now substitute the scaling ansatz,

c_k(t) ≅ N²C(x), with x = kN, to yield

C(0)[2C + xC′] = (x − 1)C′ + C —– (4)

and

dN/dt = −C(0)N² —– (5)

Solving the above equations gives N ≅ [C(0)t]^{−1} and

C(x) = (1 + μ)(1 + μx)^{−2−1/μ} —– (6)

with μ = C(0) − 1. The scaling approach has thus found a family of solutions parameterized by μ, and additional information is needed to determine which of these solutions is appropriate for our system. For this purpose, note that equation (6) exhibits different behaviors depending on the sign of μ. When μ > 0, there is an extended non-universal power-law distribution, while for μ = 0 the solution is the pure exponential C(x) = e^{−x}. These solutions may be rejected because the wealth distribution cannot extend over an unbounded domain if the initial wealth extends over a finite range.

The accessible solutions therefore correspond to −1 < μ < 0, where the distribution is compact and finite, with C(x) ≡ 0 for x ≥ x_f = −μ^{−1}. To determine the true solution, let us re-examine the continuum form of the rate equation, equation (3). From naive power counting, the first two terms are asymptotically dominant, and they give a propagating front with k_f exactly equal to t. Consequently, the scaled location of the front is x_f = Nk_f. Now the result N ≅ [C(0)t]^{−1} gives x_f = 1/C(0). Comparing this expression with the corresponding value from the scaling approach, x_f = [1 − C(0)]^{−1}, selects the value C(0) = 1/2. Remarkably, this scaling solution coincides with the Fermi distribution found for the case of a constant interaction rate. Finally, in terms of the unscaled variables k and t, the wealth distribution is

c_k(t) = 2/t², k < t

= 0, k ≥ t —– (7)
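As a sanity check on this scaling solution (my own numerical sketch, with an obvious truncation assumption k ≤ K), one can integrate the rate equation (1) directly and compare with N ≅ 2/t and the flat profile c_k ≅ 2/t² behind the sharp front at k_f = t – the discontinuity discussed next:

```python
# Sanity check (my own): integrate rate equation (1) for k = 1..K
# (truncation K, valid while the front k_f ~ t stays below K) and
# compare with the scaling predictions N ~ 2/t, c_k ~ 2/t^2 for k < t.
# Note the all-equal state c_k(0) = delta_{k,1} is frozen here (all
# rates |j - k| vanish), so we start from a spread-out distribution.
import numpy as np
from scipy.integrate import solve_ivp

K = 150
k = np.arange(1, K + 1)

def rhs(t, c):
    dc = np.empty_like(c)
    for i in range(K):
        lo = c[i - 1] * np.sum((k[i] - 1 - k[:i]) * c[:i]) if i > 0 else 0.0
        hi = c[i + 1] * np.sum((k[i + 1:] - k[i] - 1) * c[i + 1:]) if i < K - 1 else 0.0
        dc[i] = lo + hi - c[i] * np.sum(np.abs(k[i] - k) * c)
    return dc

c0 = 0.5 ** (k + 1)                  # wealth density sum(k c0) = 1, N(0) = 1/2
sol = solve_ivp(rhs, (0.0, 40.0), c0, t_eval=[20.0, 40.0], rtol=1e-6)

for j, t in enumerate(sol.t):
    c = sol.y[:, j]
    print(f"t = {t:4.0f}: N = {c.sum():.4f} (2/t = {2/t:.4f}), "
          f"c_5 = {c[4]:.5f} (2/t^2 = {2/t**2:.5f})")
```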

The discontinuity at the front k = t is smoothed out by diffusive spreading. Another interesting feature is that if the interaction rate is sufficiently greedy, “gelation” occurs, whereby a finite fraction of the total capital comes to be possessed by a single individual. For interaction rates, or kernels K(j, k), between individuals of capital j and k which do not give rise to gelation, the total density typically varies as a power law in time, while for gelling kernels N(t) goes to zero at some finite time. At the border between these regimes N(t) typically decays exponentially in time. We seek a similar transition in behavior for the capital exchange model by considering the rate equation for the density

dN/dt = -c1k=1k(1, k)ck —– (8)

For the family of kernels with K(1, k) ∼ k^ν as k → ∞, substitution of the scaling ansatz gives Ṅ ∼ −N^{3−ν}. Thus N(t) exhibits a power-law behavior N ∼ t^{−1/(2−ν)} for ν < 2 and an exponential behavior for ν = 2, so gelation should arise for ν > 2.
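The three regimes can be illustrated with a toy integration (prefactor set to one, my own illustration) of Ṅ = −N^{3−ν}:

```python
# Toy integration (coefficient 1 assumed) of dN/dt = -N^(3-nu),
# contrasting the three regimes quoted above.
def decay(nu, dt=1e-4, t_max=20.0):
    N, t = 1.0, 0.0
    while t < t_max and N > 1e-12:
        N = max(N - dt * N ** (3 - nu), 0.0)
        t += dt
    return t, N

for nu in (1.0, 2.0, 2.5):
    t, N = decay(nu)
    print(f"nu = {nu}: t_stop = {t:6.2f}, N = {N:.3e}")
# nu = 1.0: power law N ~ 1/t, still ~0.05 at t_max;
# nu = 2.0: exponential, N ~ e^(-20) ~ 2e-9 at t_max;
# nu = 2.5: N = (1 - t/2)^2 hits zero at the finite time t* = 2.
```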

Greed

In greedy exchange, when two individuals meet, the richer person takes one unit of capital from the poorer person, as represented by the reaction scheme (j, k) → (j + 1, k − 1) for j ≥ k. In the rate equation approximation, the densities ck(t) now evolve according to

dck/dt = ck-1j=1k-1cj + ck+1j=k+1cj – ckN – c2k —– (1)

The first two terms account for the gain in c_k(t) due to interactions between pairs of individuals of capitals (j, k − 1) with j ≤ k − 1, and (j, k + 1) with j ≥ k + 1, respectively. The last two terms correspondingly account for the loss of c_k(t). One can check that the wealth density M_1 ≡ ∑_{k=1}^{∞} k c_k(t) is conserved, and that the population density obeys

dN/dt = -c1N —– (2)

Equation (1) is conceptually similar to the Smoluchowski equations for aggregation with a constant reaction rate. Mathematically, however, it appears to be more complex, and we have been unable to solve it analytically. Fortunately, equation (1) is amenable to a scaling solution. For this purpose, we first rewrite equation (1) as

dc_k/dt = −c_k(c_k + c_{k+1}) + N(c_{k−1} − c_k) + (c_{k+1} − c_{k−1}) ∑_{j=k}^{∞} c_j —– (3)

Taking the continuum limit and substituting the scaling ansatz,

c_k(t) ≅ N²C(x), with x = kN —– (4)

transforms equations (2) and (3) to

dN/dt = −C(0)N³ —– (5)

and

C(0)[2C + xC′] = 2C² + C′[1 − 2∫_x^∞ C(y) dy] —– (6)

where C′ = dC/dx. Note also that the scaling function must obey the integral relations

xdxC(x) = 1 and ∫xdxxC(x) = 1 —– (7)

The former follows from the definition of the density, N = ∑_k c_k(t) ≅ N ∫ dx C(x), while the latter follows if we set, without loss of generality, the conserved wealth density equal to unity, ∑_k k c_k(t) = 1.

Introducing B(x) = ∫_0^x dy C(y) recasts equation (6) into C(0)[2B′ + xB″] = 2B′² + B″[2B − 1]. Integrating twice gives [C(0)x − B][B − 1] = 0, with solution B(x) = C(0)x for x < x_f and B(x) = 1 for x ≥ x_f, from which we conclude that the scaled wealth distribution C(x) = B′(x) coincides with the zero-temperature Fermi distribution:

C(x) = C(0), for x < x_f

= 0, for x ≥ x_f —– (8)

Hence the scaled profile has a sharp front at x = x_f, with x_f = 1/C(0) found by matching the two branches of the solution for B(x). Making use of the second integral relation in equation (7) gives C(0) = 1/2 and thereby closes the solution. Thus the unscaled wealth distribution c_k(t) reads

c_k(t) = 1/(2t), for k < 2√t

= 0, for k ≥ 2√t —– (9)

and the total density is N(t) = t^{−1/2}.


Figure: Simulation results for the wealth distribution in greedy additive exchange, based on 2500 configurations with 10⁶ traders. Shown are the scaled distributions C(x) versus x = kN for t = 1.5^n, with n = 18, 24, 30, and 36; these steepen with increasing time. Each data set has been averaged over a range of ≈ 3% of the data points to reduce fluctuations.

These predictions are corroborated by the numerical simulations shown in the figure. In the simulation, two individuals are randomly chosen to undergo greedy exchange, and this process is repeated. When an individual reaches zero capital he is eliminated from the system, and the number of active traders is reduced by one. After each reaction, the time is incremented by the inverse of the number of active traders. While the mean-field predictions are substantially corroborated, the scaled wealth distribution at finite time actually resembles a finite-temperature Fermi distribution. As time increases, the wealth distribution becomes sharper and approaches equation (9). In analogy with the Fermi distribution, the relative width of the front may be viewed as an effective temperature. Thus the wealth distribution is characterized by two scales: one, of order √t, characterizes the typical wealth of active traders, and a second, smaller scale characterizes the width of the front.
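The simulation procedure just described is easy to reproduce. Below is a compact sketch (my own, with sizes scaled down from the figure’s 10⁶ traders); note that the shape of the scaled profile C(x) = c_k/N² versus x = kN, unlike the raw clock, is insensitive to the time-increment convention, so we check the predictions C(0) = 1/2 and a front at x_f = 2 from equations (8)-(9):

```python
# Monte Carlo sketch of greedy exchange: random pairs, richer (or equal)
# trader takes one unit, bankrupt traders are removed.
import random
from collections import Counter

n0 = 100_000
capital = [1] * n0
active = list(range(n0))            # indices of traders with capital > 0
random.seed(7)

for _ in range(2_000_000):
    if len(active) < 2:
        break
    i, j = random.sample(range(len(active)), 2)
    if capital[active[i]] < capital[active[j]]:
        i, j = j, i                 # let i point at the richer (or equal) trader
    capital[active[i]] += 1
    capital[active[j]] -= 1
    if capital[active[j]] == 0:     # bankrupt: economic death
        active[j] = active[-1]
        active.pop()

N = len(active) / n0                # population density
counts = Counter(capital[a] for a in active)
print(f"N = {N:.4f}")
for x in (0.2, 0.5, 1.0, 1.5, 1.9, 2.5):
    kk = max(1, round(x / N))
    C = counts.get(kk, 0) / n0 / N**2
    print(f"x = kN = {kk*N:4.2f}: C(x) = {C:.3f}  (prediction 0.5 for x < 2, 0 beyond)")
```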

To quantify the spreading of the front, let us include the next corrections in the continuum limit of the rate equations, equation (3). This gives,

∂c/∂t = 2 ∂/∂k [c ∫_k^∞ c(j) dj] − c ∂c/∂k − N ∂c/∂k + (N/2) ∂²c/∂k² —– (10)

Here, the second and fourth terms on the RHS are the second-order corrections. Since the convective third term determines the location of the front to be at k_f = 2√t, it is natural to expect that the diffusive fourth term describes the spreading of the front. The term c ∂c/∂k turns out to be negligible in comparison to the diffusive spreading term and is henceforth neglected. The dominant convective term can be removed by transforming to a frame of reference which moves with the front, namely k → K = k − 2√t. Among the remaining terms in the transformed rate equation, the width of the front region W can now be determined by demanding that the diffusion term has the same order of magnitude as the reactive terms, i.e. N ∂²c/∂K² ∼ c². This implies W ∼ √(N/c). Combining this with N = t^{−1/2} and c ∼ t^{−1} gives W ∼ t^{1/4}, or a relative width w = W/k_f ∼ t^{−1/4}. This suggests the appropriate scaling ansatz for the front region is

c_k(t) = (1/t) X(ξ), ξ = (k − 2√t)/t^{1/4} —– (11)

Substituting this ansatz into equation (10) gives a non-linear single-variable integro-differential equation for the scaling function X(ξ). Together with the appropriate boundary conditions, this represents, in principle, a more complete solution to the wealth distribution. However, the essential scaling behavior of the finite-time spreading of the front is already described by equation (11), so that solving for X(ξ) itself does not provide additional scaling information. Numerical analysis gives w ∼ t^{−α} with α ≅ 1/5, rather than the predicted 1/4. We attribute this discrepancy to the fact that w is obtained by differentiating C(x), an operation which generally leads to an increase in numerical errors.

Fortune of the Individuals Restricted to Integers: Random Economic Exchange Between Populations of Traders.


Consider a population of traders, each of whom possesses a certain amount of capital, assumed to be quantized in units of minimal capital. Taking this latter quantity as the basic unit, the fortune of an individual is restricted to the integers. The wealth of the population evolves by the repeated interaction of random pairs of traders. In each interaction, one unit of capital is transferred between the trading partners. To complete the description, we specify that if a poorest individual (with one unit of capital) loses this remaining capital by virtue of a “loss”, the bankrupt individual is considered to be economically dead and no longer participates in economic activity.

In the following, we consider a specific realization of additive capital exchange, the “random” exchange, where the direction of the capital exchange is independent of the relative capital of the traders. While this rule has little economic basis, the model is completely soluble and thus provides a helpful pedagogical point.

In a random exchange, one unit of capital is exchanged between trading partners, as represented by the reaction scheme (j, k) → (j ± 1, k ∓ 1). Let c_k(t) be the density of individuals with capital k. Within a mean-field description, c_k(t) evolves according to

dc_k(t)/dt = N(t) [c_{k+1}(t) + c_{k−1}(t) − 2c_k(t)] —– (1)

with N(t) ≡ M_0(t) = ∑_{k=1}^{∞} c_k(t) the population density. The first two terms account for the gain in c_k(t) due to the interactions (j, k + 1) → (j + 1, k) and (j, k − 1) → (j − 1, k), respectively, while the last term accounts for the loss in c_k(t) due to the interactions (j, k) → (j ± 1, k ∓ 1).

By defining a modified time variable,

T = ∫_0^t dt′ N(t′) —– (2)

equation (1) is reduced to the discrete diffusion equation

dc_k(T)/dT = c_{k+1}(T) + c_{k−1}(T) − 2c_k(T) —– (3)

The rate equation for the density of poorest individuals has the slightly different form dc_1/dT = c_2 − 2c_1, but it may be written in the same form as equation (3) if we impose the boundary condition c_0(T) = 0.

For illustrative purposes, let us assume that initially all individuals have one unit of capital, c_k(0) = δ_{k,1}. The solution to equation (3) subject to these initial and boundary conditions is

c_k(T) = e^{−2T} [I_{k−1}(2T) − I_{k+1}(2T)] —– (4)

where I_n denotes the modified Bessel function of order n. Consequently, the total density N(T) is

N(T) = e^{−2T} [I_0(2T) + I_1(2T)] —– (5)

To re-express this exact solution in terms of the physical time t, we first invert equation (2) to obtain t(T) = ∫_0^T dT′/N(T′), and then eliminate T in favor of t in the solution for c_k(T). For simplicity and concreteness, let us consider the long-time limit. From equation (4),

c_k(T) ≅ k/√(4πT³) exp(−k²/4T) —– (6)

and from equation (5),

N(T) ≅ (πT)^{−1/2} —– (7)

Equation (7) also implies t ≅ (2/3)√(πT³), which gives

N(t) ≅ (2/(3πt))^{1/3} —– (8)

and

c_k(t) ≅ (k/(3t)) exp[−(π/144)^{1/3} k²/t^{2/3}] —– (9)
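A quick numerical check (my own sketch, assuming numpy/scipy; ive(n, z) = e^{−z}I_n(z) is the exponentially scaled modified Bessel function) of the exact solution (4)-(5) against the asymptotics (6)-(8):

```python
# Check: exact solution (4)-(5) vs long-time asymptotics (6)-(8).
# ive(n, z) = exp(-z) * I_n(z), so e^(-2T) I_n(2T) = ive(n, 2T)
# without overflow for large T.
import numpy as np
from scipy.special import ive

T = 50.0
N_exact = ive(0, 2 * T) + ive(1, 2 * T)        # eq. (5)
N_asym = (np.pi * T) ** -0.5                   # eq. (7)
print(f"N(T={T:.0f}): exact = {N_exact:.5f}, asymptotic = {N_asym:.5f}")

for k in (1, 5, 10, 20):
    ck_exact = ive(k - 1, 2 * T) - ive(k + 1, 2 * T)                   # eq. (4)
    ck_asym = k / np.sqrt(4 * np.pi * T**3) * np.exp(-k**2 / (4 * T))  # eq. (6)
    print(f"c_{k}(T): exact = {ck_exact:.4e}, asymptotic = {ck_asym:.4e}")

t = (2.0 / 3.0) * np.sqrt(np.pi * T**3)        # physical time from eq. (2)
print(f"t(T) = {t:.1f}, N from eq. (8): {(2 / (3 * np.pi * t)) ** (1/3):.5f}")
```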

Note that the long-time expression (9) may be written in the scaling form c_k(t) ∝ N²x e^{−x²}, with the scaling variable x ∝ kN. One can also confirm that the scaling solution represents the basin of attraction for almost all exact solutions. Indeed, for any initial condition with c_k(0) decaying faster than k^{−2}, the system reaches the scaling limit c_k(t) ∝ N²x e^{−x²}. On the other hand, if c_k(0) ∼ k^{−1−α}, with 0 < α < 1, such an initial state converges to an alternative scaling limit which depends on α. These solutions exhibit a slower decay of the total density, N ∼ t^{−α/(1+α)}, while the scaling form of the wealth distribution is

c_k(t) ∼ N^{2/α} C_α(x), x ∝ k N^{1/α} —– (10)

with the scaling function

C_α(x) = e^{−x²} ∫_0^∞ du e^{−u²} sinh(2ux)/u^{1+α} —– (11)

Evaluating the integral by the Laplace method gives an asymptotic distribution which exhibits the same x^{−1−α} tail as the initial distribution. This anomalous scaling in the solution to the diffusion equation is a direct consequence of the extended initial condition. This latter case is not physically relevant, however, since the extended initial distribution leads to a divergent initial wealth density.
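The x^{−1−α} tail can also be verified numerically (my own sketch, assuming scipy); rewriting e^{−x²}e^{−u²} sinh(2ux) = [e^{−(u−x)²} − e^{−(u+x)²}]/2 keeps the integrand of equation (11) well-behaved:

```python
# Check the tail of C_alpha(x), eq. (11). The rewritten integrand is
# bounded, and the u^(-alpha) endpoint singularity (alpha < 1) is
# integrable, so quad can handle it.
import numpy as np
from scipy.integrate import quad

def C_alpha(x, alpha):
    f = lambda u: (np.exp(-(u - x)**2) - np.exp(-(u + x)**2)) / (2 * u**(1 + alpha))
    return quad(f, 0.0, x)[0] + quad(f, x, np.inf)[0]

alpha = 0.5
for x in (2.0, 4.0, 8.0):
    ratio = C_alpha(2 * x, alpha) / C_alpha(x, alpha)
    print(f"x = {x:3.0f}: C(2x)/C(x) = {ratio:.4f}, "
          f"a x^(-1-alpha) tail predicts {2 ** -(1 + alpha):.4f}")
```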

Whitehead’s Anti-Substantivalism, or Process & Reality as a Cosmology-to-be. Thought of the Day 39.0


Treating “stuff” as some kind of metaphysical primitive is mere substantivalism – and fundamentally question-begging. One has replaced an extra-theoretic referent of the wave-function (unless one defers to some quasi-literalist reading of the nature of the stochastic amplitude function ζ[X(t)] as somehow characterizing something akin to a “density of stuff”; moreover, the logic and probability (Born rules) must ultimately be obtained from experimentally obtained scattering amplitudes) with something at least as mystifying, as the argument against decoherence goes on to show:

In other words, you have a state vector which gives rise to an outcome of a measurement and you cannot understand why this is so according to your theory.

As a response to Platonism, one can likewise read Process and Reality as essentially anti-substantivalist.

Consider, for instance:

Those elements of our experience which stand out clearly and distinctly [giving rise to our substantial intuitions] in our consciousness are not its basic facts, [but] they are . . . late derivatives in the concrescence of an experiencing subject. . . .Neglect of this law [implies that] . . . [e]xperience has been explained in a thoroughly topsy-turvy fashion, the wrong end first (161).

To function as an object is to be a determinant of the definiteness of an actual occurrence [occasion] (243).

The phenomenological ontology offered in Process and Reality is richly nuanced (including metaphysical primitives such as prehensions, occasions, and their respectively derivative notions such as causal efficacy, presentational immediacy, nexus, etc.). None of these suggest metaphysical notions of substance (i.e., independently existing subjects) as a primitive. The case can perhaps be made concerning the discussion of eternal objects, but such notions as discussed vis-à-vis the process of concrescence are obviously not metaphysically primitive notions. Certainly these metaphysical primitives conform in a more nuanced and articulated manner to aspects of process ontology. “Embedding” – like the notion of emergence – is a crucial constituent in the information-theoretic, quantum-topological, and geometric accounts. Moreover, concerning the issue of relativistic covariance, it should be borne in mind that Process and Reality is really a sketch of a cosmology-to-be . . . [in the spirit of ] Kant [who] built on the obsolete ideas of space, time, and matter of Euclid and Newton. Whitehead set out to suggest what a philosophical cosmology might be that builds on Newton.

Without Explosions, WE Would NOT Exist!


The matter and radiation in the universe gets hotter and hotter as we go back in time towards the initial quantum state, because it was compressed into a smaller volume. In this Hot Big Bang epoch in the early universe, we can use standard physical laws to examine the processes going on in the expanding mixture of matter and radiation. A key feature is that about 300,000 years after the start of the Hot Big Bang epoch, nuclei and electrons combined to form atoms. At earlier times, when the temperature was higher, atoms could not exist, as the radiation then had so much energy it disrupted any atoms that tried to form into their constituent parts (nuclei and electrons). Thus at earlier times matter was ionized, consisting of negatively charged electrons moving independently of positively charged atomic nuclei. Under these conditions, the free electrons interact strongly with radiation by Thomson scattering. Consequently matter and radiation were tightly coupled in equilibrium at those times, and the Universe was opaque to radiation. When the temperature dropped through the ionization temperature of about 4000 K, atoms formed from the nuclei and electrons, and this scattering ceased: the Universe became very transparent. The time when this transition took place is known as the time of decoupling – it was the time when matter and radiation ceased to be tightly coupled to each other, at a redshift z_dec ≃ 1100 (Scott Dodelson, Modern Cosmology, Academic Press). The matter and radiation densities and the radiation temperature scale with the scale factor S as

μ_bar ∝ S^{−3}, μ_rad ∝ S^{−4}, T_rad ∝ S^{−1} —– (1)

The scale factor S(t) obeys the Raychaudhuri equation

3S̈/S = −(1/2)κ(μ + 3p/c²) + Λ —– (2)

where κ is the gravitational constant and Λ the cosmological constant.

By (1), the universe was radiation dominated (μ_rad ≫ μ_mat) at early times and matter dominated (μ_rad ≪ μ_mat) at late times; matter-radiation density equality occurred significantly before decoupling (the temperature T_eq at which this equality occurred was T_eq ≃ 10⁴ K; at that time the scale factor was S_eq ≃ 10^{−4} S_0, where S_0 is its present-day value). The dynamics both of the background model and of perturbations about that model differ significantly before and after S_eq.

Radiation was emitted by matter at the time of decoupling, thereafter travelling freely to us through the intervening space. When it was emitted, it had the form of blackbody radiation, because this is a consequence of matter and radiation being in thermodynamic equilibrium at earlier times. Thus the matter at z = z_dec forms the Last Scattering Surface (LSS) in the early universe, emitting Cosmic Blackbody Background Radiation (‘CBR’) at 4000 K, which has since travelled freely with its temperature T scaling inversely with the scale function of the universe. As the radiation travelled towards us, the universe expanded by a factor of about 1100; consequently, by the time it reaches us it has cooled to 2.75 K (that is, about 3 degrees above absolute zero, with a spectrum peaking in the microwave region), and so is extremely hard to observe. It was however detected in 1965, and its spectrum has since been intensively investigated, its blackbody nature being confirmed to high accuracy (R. B. Partridge, 3K: The Cosmic Microwave Background Radiation). Its existence is now taken as solid proof both that the Universe has indeed expanded from a hot early phase, and that standard physics applied unchanged at that era in the early universe.

The thermal capacity of the radiation is hugely greater than that of the matter. At very early times before decoupling, the temperatures of the matter and radiation were the same (because they were in equilibrium with each other), scaling as 1/S(t) (Equation 1 above). The early universe exceeded any temperature that can ever be attained on Earth or even in the centre of the Sun; as it dropped towards its present value of 3 K, successive physical reactions took place that determined the nature of the matter we see around us today. At very early times and high temperatures, only elementary particles can survive and even neutrinos had a very small mean free path; as the universe cooled down, neutrinos decoupled from the matter and streamed freely through space. At these times the expansion of the universe was radiation dominated, and we can approximate the universe then by models with {k = 0, w = 1/3, Λ = 0}, the resulting simple solution of

3Ṡ²/S² = A/S³ + B/S⁴ + Λ/3 − 3k/S² —– (3)

uniquely relating time to temperature:

S(t) = S_0 t^{1/2}, t = 1.92 sec [T/10^{10} K]^{−2} —– (4)

(There are no free constants in the latter equation).

At very early times, even neutrinos were tightly coupled and in equilibrium with the radiation; they decoupled at about 10^{10} K, resulting in a relic neutrino background density in the universe today of about Ω_ν0 ≃ 10^{−5} if they are massless (but it could be higher depending on their masses). Key events in the early universe are associated with out-of-equilibrium phenomena. An important event was the era of nucleosynthesis, the time when the light elements were formed. Above about 10⁹ K, nuclei could not exist because the radiation was so energetic that as fast as they formed, they were disrupted into their constituent parts (protons and neutrons). However, below this temperature, if particles collided with each other with sufficient energy for nuclear reactions to take place, the resultant nuclei remained intact (the radiation being less energetic than their binding energy and hence unable to disrupt them). Thus the nuclei of the light elements – deuterium, tritium, helium, and lithium – were created by neutron capture. This process ceased when the temperature dropped below about 10⁸ K (the nuclear reaction threshold). In this way, the proportions of these light elements at the end of nucleosynthesis were determined; they have remained virtually unchanged since. The rate of reaction was extremely high; all this took place within the first three minutes of the expansion of the Universe. One of the major triumphs of Big Bang theory is that theory and observation are in excellent agreement provided the density of baryons is low: Ω_bar0 ≃ 0.044. Then the predicted abundances of these elements (25% helium by weight, 75% hydrogen, the others being less than 1%) agree very closely with the observed abundances. Thus the standard model explains the origin of the light elements in terms of known nuclear reactions taking place in the early Universe. However, heavier elements cannot form in the time available (about 3 minutes).
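Equation (4) makes the “first three minutes” quantitative; a short calculation (using only the numbers quoted above) translates the temperature thresholds into times:

```python
# Equation (4): t = 1.92 s (T / 10^10 K)^(-2) on the radiation-dominated
# extrapolation; the quoted temperature thresholds become times.
for T in (1e10, 1e9, 1e8):
    t = 1.92 * (T / 1e10) ** -2
    print(f"T = {T:.0e} K  ->  t = {t:10.0f} s ({t / 60:8.1f} min)")
# 10^9 K (nuclei survive): t ~ 192 s -- the "first three minutes";
# 10^8 K (reactions cease): t ~ 5 hours on this simple extrapolation.
```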

In a similar way, physical processes in the very early Universe (before nucleosynthesis) can be invoked to explain the ratio of matter to anti-matter in the present-day Universe: a small excess of matter over anti-matter must be created then in the process of baryosynthesis, without which we could not exist today (if there were no such excess, matter and antimatter would have all annihilated to give just radiation). However other quantities (such as electric charge) are believed to have been conserved even in the extreme conditions of the early Universe, so their present values result from given initial conditions at the origin of the Universe, rather than from physical processes taking place as it evolved. In the case of electric charge, the total conserved quantity appears to be zero: after quarks form protons and neutrons at the time of baryosynthesis, there are equal numbers of positively charged protons and negatively charged electrons, so that at the time of decoupling there were just enough electrons to combine with the nuclei and form uncharged atoms (it seems there is no net electrical charge on astronomical bodies such as our galaxy; were this not true, electromagnetic forces would dominate cosmology, rather than gravity).

After decoupling, matter formed large scale structures through gravitational instability which eventually led to the formation of the first generation of stars and is probably associated with the re-ionization of matter. However at that time planets could not form for a very important reason: there were no heavy elements present in the Universe. The first stars aggregated matter together by gravitational attraction, the matter heating up as it became more and more concentrated, until its temperature exceeded the thermonuclear ignition point and nuclear reactions started burning hydrogen to form helium. Eventually more complex nuclear reactions started in concentric spheres around the centre, leading to a build-up of heavy elements (carbon, nitrogen, oxygen for example), up to iron. These elements can form in stars because there is a long time available (millions of years) for the reactions to take place. Massive stars burn relatively rapidly, and eventually run out of nuclear fuel. The star becomes unstable, and its core rapidly collapses because of gravitational attraction. The consequent rise in temperature blows it apart in a giant explosion, during which time new reactions take place that generate elements heavier than iron; this explosion is seen by us as a Supernova (“New Star”) suddenly blazing in the sky, where previously there was just an ordinary star. Such explosions blow into space the heavy elements that had been accumulating in the star’s interior, forming vast filaments of dust around the remnant of the star. It is this material that can later be accumulated, during the formation of second generation stars, to form planetary systems around those stars. Thus the elements of which we are made (the carbon, nitrogen, oxygen and iron nuclei for example) were created in the extreme heat of stellar interiors, and made available for our use by supernova explosions. Without these explosions, we could not exist.