QFT, being/non-being & Birth of Properties


What is the birth of the properties of physical objects? As we have seen, we have to enlarge the category of entities from which properties can originate by including the quantum vacuum. To make the difference clearer, suppose that we have a region of space emptied of matter and fields. Classically, the only way to create a property inside that region is to bring in from outside an object carrying that specific property. In this sense, Newtonian physics appears as a strongly constrained theory, while relativity and quantum physics introduce different relaxations. Firstly, Newtonian physics needs the concept of space as existing independently of objects and with all points easily accessible. Space and time are distinct from and exist independently of the objects (carrying properties) one chooses to populate them with. Space-time is the immense theater stage where physical processes unfold, the canvas where each dot is an event. One has, in principle, access to any of these points. General relativity shows that this is no longer so if the object carrying the desired property is too massive or if we insist on making it as point-like as possible – squeezing too much energy into too little space could result in the formation of a black hole. Secondly, if the conditions that properties are tied to physical objects (particles or non-zero fields) and that there is no true stochasticity are not satisfied, then properties could appear spontaneously in the vacuum, since they require neither a real object to be attached to nor a causal chain of events that would produce them.

The dynamical Casimir effect shows that there exists another way of generating properties. Note that these experiments still use the classical concept of a spacetime background, but to explain them one needs to alter dramatically the conditions that properties are tied to physical objects and that stochasticity does not exist, in order to accommodate the quantum-mechanical account of randomness (there exists pure randomness) and of properties (properties are not intrinsically attached to objects, but are created contextually, as shown by the Kochen-Specker theorem). Let H be a Hilbert space of QM state vectors of dimension x ≥ 3. There is a set M of observables on H, containing y elements, such that the following two assumptions are contradictory:

(KS1) All y members of M simultaneously have values, i.e. are unambiguously mapped onto real numbers (designated, for observables A, B, C, …, by v(A), v(B), v(C), …).

(KS2) Values of observables conform to the following constraints:

(a) If A, B, C are all compatible and C = A+B, then v(C) = v(A)+v(B); (b) if A, B, C are all compatible and C = A·B, then v(C) = v(A)·v(B).
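As a minimal sketch (an illustration only, not a proof of the theorem), the constraints (KS2) can be checked for a family of compatible observables, modeled here as diagonal matrices sharing an eigenbasis; the content of Kochen-Specker is precisely that such a value assignment cannot be extended consistently to all observables when the dimension is at least 3. The specific eigenvalues below are arbitrary choices for the example.

```python
# Toy sketch: compatible (commuting) observables share an eigenbasis, so we
# represent each by its list of eigenvalues in that basis. A hidden-variable
# assignment that picks one joint eigenvector (index k) satisfies (KS2) for
# this compatible family; the theorem says no assignment does so for ALL
# observables on a Hilbert space of dimension >= 3.
A = [1.0, 2.0, 3.0]          # eigenvalues of observable A (arbitrary example)
B = [4.0, 5.0, 6.0]          # eigenvalues of observable B, same eigenbasis

C_sum  = [a + b for a, b in zip(A, B)]   # C = A + B
C_prod = [a * b for a, b in zip(A, B)]   # C = A . B

k = 1                        # the "hidden variable": which joint eigenvector
v = lambda M: M[k]           # value of an observable = its k-th eigenvalue

assert v(C_sum) == v(A) + v(B)     # (KS2a)
assert v(C_prod) == v(A) * v(B)    # (KS2b)
print(v(A), v(B), v(C_sum), v(C_prod))  # 2.0 5.0 7.0 10.0
```

The sum and product rules hold automatically here because compatibility lets all four observables be diagonalized at once; the contradiction of the theorem only appears when incompatible families are interlocked.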

The theorem demonstrates the impossibility of a certain type of interpretation of QM in terms of hidden variables (HV) that naturally suggests itself when one begins to consider the project of interpreting QM. Because in quantum field theory the vacuum has a structure, properties can be generated at a certain point by changes of this structure, and not just by bringing them in from somewhere else. As mentioned above, one cannot do this classically: if a property were to appear at some point in space, then classical physics would tell us that there must be a real object that carries this property, and that there must be a causal story, unfolding in the region of space-time under consideration, which one must discover in order to have a complete description of the phenomenon. In a quantum vacuum, the structure exists as such, ready to acquire real properties, without being constructed beforehand by energy or mass previously brought in from elsewhere. By definition, the vacuum is the ground state; therefore (unless the system is metastable) there is no other lower-energy state into which the system would go if one attempted to extract energy from it. The quantum vacuum behaves, from this point of view, almost as a real material. Clearly, the ontological status of an entity that is not made of real particles but reacts to external actions does not fall straight into any of the standard philosophical categories of being/non-being.


Matter Defined as Just Another Quantum State: Whatever Ontologies.


In quantum physics, vacuum is defined as the ground state of a quantum field. It is a state of minimum energy, corresponding to zero particles. Note that this definition of vacuum already uses the conceptual and formal machinery of quantum field theory. It is justifiable to ask whether it is possible to give a more theory-independent definition with a lighter theoretical load. In this situation vacuum would be an entity which is explained – not just defined within and then explored – by quantum field theory. For example, one could attempt an operational definition of vacuum as the state in which no particles are detected. But then we have to specify how to detect the particles, with what efficiency, etc., that is, we need a model for the particle detector. Such a model, known as the Unruh-DeWitt detector, is constructed however from within quantum field theory. The Unruh-DeWitt detector is a simplified model of a real particle detector. Its basic property is the fact that it is linearly coupled to the field, so that it can detect one-particle states. Indeed, as long as the detector moves inertially in Minkowski spacetime, it really does react to one-particle states and not to the 0-particle state (vacuum). However, when it moves non-inertially, it may react even in the vacuum. The energy needed for the reaction in the vacuum comes from the agency that accelerates the detector (not from the vacuum energy).
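A uniformly accelerated Unruh-DeWitt detector responds as if immersed in a thermal bath at the Unruh temperature T = ħa/(2πck_B). The following back-of-envelope sketch evaluates this formula (the chosen acceleration is an arbitrary illustrative value) and shows why the effect is so far from everyday experience:

```python
import math

# Unruh temperature T = hbar * a / (2 * pi * c * k_B) for a detector with
# proper acceleration a. Constants are CODATA values in SI units.
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m / s
k_B = 1.380649e-23       # Boltzmann constant, J / K

def unruh_temperature(a):
    """Temperature (K) of the thermal bath seen at proper acceleration a (m/s^2)."""
    return hbar * a / (2.0 * math.pi * c * k_B)

# Even an enormous acceleration of 1e20 m/s^2 yields a bath of only ~0.4 K,
# which is why direct detection of Unruh radiation is so difficult.
print(unruh_temperature(1e20))
```

Inverting the formula, reaching even 1 K requires accelerations of order 10^20 m/s², consistent with the point that the reaction of the accelerated detector is powered by the external agency, not by the vacuum itself.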


The vacuum is simply a special state of the quantum field – implying that quantum physics allows the return of the concept of ether, although in a rather weaker, modified form. This new ether – the quantum vacuum – does not contradict the special theory of relativity because the vacua of the known fields are constructed to be Lorentz-invariant. In some sense, each particle in motion carries with it its own ether, thus Lorentz transformations act in the same way on the vacuum and on the particle itself. Otherwise, the vacuum state is not that different from any other wavefunction in the Hilbert space. Attaching probability amplitudes to the ground state is allowed to the same degree as attaching probability amplitudes to any other state with a nonzero number of particles. In particular, one expects to be able to generate a real property – a value for an observable – in the same way as for any other state: by perturbation, evolution, and measurement. The picture that quantum field theory provides is that both particles and vacuum are now constructed from the same “substance”, namely the quantum states of the fields at each point (or, equivalently, those of the modes). What we used to call matter is just another quantum state, and so is the absence of matter – there is no underlying substance that makes up particles as opposed to the absence of this substance when particles are not present. One could even turn the tables and say that everything is made of vacuum – indeed, the vacuum is just one special combination of states of the quantum field, and so are the particles. In this way, the difference between the two worldviews, the one where everything is a plenum and vacuum does not exist, and the other where the world is empty space (nonbeing) filled with entities that truly have the attribute of being, is completely dissolved.
Quantum physics essentially tells us that there is a third option, in which these two pictures of the world are just two complementary aspects. In quantum physics the objects inhabit at the same time the world of the continuum and that of the discrete.

Incidentally, the discussion has implications for the concept of individuality, a pivotal one both in philosophy and in statistical physics. Two objects are distinguishable if there is at least one property which can be used to tell them apart. In the classical world, finding this property is not difficult, because any two objects have a large number of properties that can be analyzed until a differing one is found. But, because in quantum field theory objects are only combinations of modes, with no additional properties, it means that one can have objects which cannot be distinguished from each other even in principle. For example, two electrons are perfectly identical. To use a well-known Aristotelian distinction, they have no accidental properties; they are truly made of the same essence.

To see in a simple way why quantum physics requires a re-evaluation of the concept of emptiness, the following qualitative argument is useful: the Heisenberg uncertainty principle shows that, if a state has a well-defined number of particles (zero), the phase of the corresponding field cannot be well-defined. Thus, quantum fluctuations of the phase appear as an immediate consequence of the very definition of emptiness. Another argument can be put forward: the classical concept of emptiness assumes the separability of space into distinct volumes. Indeed, to be able to say that nothing exists in a region of space, we implicitly assume that it is possible to delimit that region of space from the rest of the world. We do this by surrounding it with walls of some sort. In particular, the thickness of the walls is irrelevant in the classical picture, and, as long as the particles do not have enough energy to penetrate the wall, all that matters is the volume cut out from space. Yet, quantum physics teaches us that, due to the phenomenon of tunneling, this is only possible to some extent – there is, in reality, a non-zero probability for a particle to go through the walls even if classically it is prohibited from doing so because it does not have enough energy. This already suggests that, even if we start with zero particles in that region, there is no guarantee that the number of particles is conserved if e.g. we change the shape of the enclosure by moving the walls. This is precisely what happens in the case of the dynamical Casimir effect. This demonstrates that in quantum field theory the vacuum state is not just an inert background in which fields propagate, but a dynamic entity containing the seeds of multiple possibilities, which are actualized once the vacuum is disturbed in specific ways.
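The first argument above can be made concrete with a single bosonic mode in a truncated Fock space: the vacuum has zero mean field, yet a nonzero field variance. The sketch below (a standard textbook construction, with the truncation dimension an arbitrary choice) computes ⟨0|x|0⟩ and ⟨0|x²|0⟩ for the quadrature x = (a + a†)/√2:

```python
import math

N = 6  # truncation dimension of the Fock space (illustrative choice)

def matmul(A, B):
    """Multiply two N x N matrices stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

# Annihilation operator: a|n> = sqrt(n)|n-1>, so a[i][j] = sqrt(j) for j = i+1.
a = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(N)] for i in range(N)]
adag = [[a[j][i] for j in range(N)] for i in range(N)]  # transpose (real matrix)

# Field quadrature x = (a + a^dag) / sqrt(2) and its square.
x = [[(a[i][j] + adag[i][j]) / math.sqrt(2) for j in range(N)] for i in range(N)]
x2 = matmul(x, x)

# Vacuum expectation values: the mean field vanishes, its variance does not.
print(round(x[0][0], 10), round(x2[0][0], 10))  # 0.0 0.5
```

The nonzero value of ⟨0|x²|0⟩ is the zero-point fluctuation: even the state with exactly zero particles carries field fluctuations, in line with the number-phase uncertainty argument.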

Top-down Causation in Financial Markets. Note Quote.


Regulators attempt to act on a financial market through the intelligent and reasonable formulation of rules. For example, changing the market micro-structure at the lowest level in the hierarchy can change the way that asset prices assimilate changes in information variables Zk,t or θi,m,t. Similarly, changes in accounting rules could change the meaning and behaviour of bottom-up information variables θi,m,t, and changes in economic policy and policy implementation can change the meaning of top-down information variables Zk,t and influence shared risk factors rp,t.

In hierarchical analysis, theories and plans may be embodied in a symbolic system to build effective and robust models to be used for detecting deeper dependencies and emergent phenomena. Mechanisms for the transmission of information and asymmetric information have impacts on market quality. Thus, Regulators can impact the activity and success of all the other actors, either directly or indirectly through knock-on effects. Examples include the following: Investor behaviour could change the goal selection of Traders; change in the latter could in turn impact variables coupled to Traders’ activity in such a way that Profiteers are able to benefit from changes in liquidity or use leverage as a means to achieve profit targets and overcome noise.

Idealistically, Regulators may aim for increasing productivity, managing inflation, reducing unemployment and eliminating malfeasance. However, the circumvention of rules, usually in the name of innovation or by claims of greater insight into optimality, is as much a part of a complex system in which participants can respond to rules. Tax arbitrages are examples of actions which manipulate reporting to reduce levies paid to a profit-facilitating system. In regulatory arbitrage, rules may be followed technically, but nevertheless exploit relevant new information which has not been accounted for in the system’s rules. Such activities are consistent with goals of profiteering but are not necessarily in agreement with the longer-term optimality of reliable and fair markets.

Rulers, i.e. agencies which control populations more generally, also impact markets and economies. Examples of top-down causation here include segregation of workers and differential assignment of economic rights to market participants, as in the evolution of local miners’ rights in South Africa in the late 1800s and the national Natives Land Act of 1913 in South Africa, international agreements such as the Bretton Woods system, the Marshall Plan of 1948, the lifting of the gold standard in 1973 and the regulation of capital allocations and capital flows between individual and aggregated participants. Ideas on target-based goal selection are already in circulation in the literature on applications of viability theory and stochastic control in economics. Such approaches provide alternatives to the Laplacian ideal of attaining perfect prediction by offering analysable future expectations to regulators and rulers.

Unruh Radiation, Black Holes and Partial Waves. Note Quote.


It is well known that Hawking radiation from an asymptotically flat Schwarzschild black hole is dominated by low angular momentum modes. This is a consequence of the fact that a black hole of Hawking temperature TH and Schwarzschild radius rs has TH rs ∼ 1, so that high angular momentum modes of energy TH are trapped behind a large barrier in the effective radial potential. Since a local observer is unlikely to encounter such quanta, one might then conclude that a (much-weakened) version of the postulate “A freely falling observer experiences nothing out of the ordinary when crossing the horizon” might still hold, in which the suppression is replaced by a fixed (1/area) power law. In addition, one would need to propose a mechanism through which these quanta would arise from the infalling perspective. This would appear to require that the infalling observer experience violations of local quantum field theory at this (power-law-suppressed) level.
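The relation TH rs ∼ 1 can be checked directly in geometric units (G = c = ħ = k_B = 1), where TH = 1/(8πM) and rs = 2M, so the product is 1/(4π) for any mass; a short sketch:

```python
import math

# Geometric units (G = c = hbar = k_B = 1): Schwarzschild black hole of mass M.
def hawking_temperature(M):
    return 1.0 / (8.0 * math.pi * M)   # T_H = 1 / (8 pi M)

def schwarzschild_radius(M):
    return 2.0 * M                     # r_s = 2 M

# The product T_H * r_s = 1 / (4 pi) is independent of M and of order one,
# which is why modes with angular momentum l >> 1 at energies ~ T_H sit
# behind a large barrier in the effective radial potential.
for M in (1.0, 10.0, 1e6):
    print(hawking_temperature(M) * schwarzschild_radius(M))

print(1.0 / (4.0 * math.pi))  # ~0.0796, the mass-independent product
```

A quantum of energy ∼ TH thus has wavelength comparable to rs itself, so only the lowest partial waves escape the barrier; this is the "dominated by low angular momentum modes" statement in quantitative form.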

This would already be a striking result: these quanta must appear quite close to the horizon and so violate the standard wisdom that the horizon is not a distinguished location. And they are not rare: their number is of the same order as the number of actual Hawking quanta.

As noted long ago by Unruh and Wald, it is possible to ‘mine’ energy from the modes trapped behind the effective potential. The basic procedure is to lower some object below the potential barrier, let the object absorb the trapped modes, and then raise the object back above the barrier. Unruh and Wald thought of the object as a box that could be opened to collect ambient radiation and then closed to keep the radiation from escaping. One may also visualize the object as a particle detector, though the two are equivalent at the level discussed here.

In the context of such a mining operation, one need only consider the internal state of the mining equipment to be part of the late-time Hawking radiation. In particular, postulate “outside the stretched horizon of a massive black hole, physics can be described to good approximation by a set of semi-classical field equations”, can be used to evolve the mode to be mined backward in time and to conclude for an old black hole that, even before the mining process takes place, the mode must be fully entangled with the early-time radiation. “A freely falling observer experiences nothing out of the ordinary when crossing the horizon” is then violated for these modes as well, suggesting that the infalling observer encounters a Planck density of Planck scale radiation and burns up. One might say that the black hole is protected by a Planck-scale firewall.

Note that this firewall need not be visible to any observer that remains outside the horizon. All that we have argued is that the infalling observer does not experience a pure state. There remains considerable freedom in the possible reduced density matrices that could describe a few localized degrees of freedom outside the black hole, so that this matrix might still agree perfectly with that predicted by Hawking. In this case any local signal that an external observer might hope to ascribe to the firewall at distance 1/ω cannot be disentangled from the Unruh radiation that results from probing this scale without falling into the black hole.