Network Theory of the Fermionic Quantum State – Epistemological Rumination. Thought of the Day 150.0


In quantum physics, fundamental particles are believed to be of two types, fermions or bosons, depending on the value of their spin (an intrinsic ‘angular momentum’ of the particle). Fermions have half-integer spin and cannot occupy a quantum state (a configuration with specified microscopic degrees of freedom, or quantum numbers) that is already occupied. In other words, at most one fermion at a time can occupy one quantum state. The resulting probability that a quantum state is occupied is known as Fermi-Dirac statistics.
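For later reference, the Fermi-Dirac distribution gives the expected occupation of a single-particle state of energy ε, at chemical potential μ and temperature T (a standard result, quoted here because (2) below turns out to have exactly this form, with xixj playing the role of the Boltzmann factor $e^{(\mu - \varepsilon)/k_B T}$):

$$\langle n_\varepsilon \rangle = \frac{1}{e^{(\varepsilon - \mu)/k_B T} + 1}.$$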

Now, if we want to turn this into a maximum-entropy model of a real network, we need the model to reproduce the heterogeneity that is actually observed. The natural starting point is an ensemble of networks in which each vertex i has the same degree ki as in the real network. This choice is justified by the fact that, being an entirely local topological property, the degree is expected to be directly affected by some intrinsic (non-topological) property of vertices. The caveat is that the real network should not be compared with a randomization that discards the degrees: doing so risks interpreting degree-driven features as ‘unavoidable’ topological constraints, in the sense that violating the observed values would lead to ‘impossible’, or at least very unrealistic, configurations.

The resulting model is known as the Configuration Model, and is defined as a maximum-entropy ensemble of graphs with given degree sequence. The degree sequence, which is the constraint defining the model, is nothing but the ordered vector k of degrees of all vertices (where the ith component ki is the degree of vertex i). The ordering preserves the ‘identity’ of vertices: in the resulting network ensemble, the expected degree ⟨ki⟩ of each vertex i is the same as the empirical value ki for that vertex. In the Configuration Model, the graph probability is given by
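For completeness, here is a sketch (standard in the maximum-entropy literature, added only for the reasoning step it supplies) of how the degree-sequence constraint produces the formulas below. Maximizing the Shannon entropy of the ensemble subject to ⟨ki⟩ = ki for all i yields an exponential family that factorizes over pairs of vertices:

$$P(A) = \frac{e^{-\sum_i \theta_i k_i(A)}}{Z(\theta)} = \prod_{i<j} \frac{(x_i x_j)^{a_{ij}}}{1 + x_i x_j}, \qquad x_i \equiv e^{-\theta_i},$$

from which the pairwise form in (1) and the connection probabilities in (2) follow directly.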

$P(A) = \prod_{i<j} q_{ij}(a_{ij}) = \prod_{i<j} p_{ij}^{a_{ij}} (1 - p_{ij})^{1 - a_{ij}}$ —– (1)

where $q_{ij}(a) = p_{ij}^{a}(1 - p_{ij})^{1-a}$ is the probability that the particular entry aij of the adjacency matrix A takes the value a. Each entry is thus an independent Bernoulli trial, with different pairs of vertices characterized by different connection probabilities pij. A Bernoulli trial is the simplest random event, i.e. one characterized by only two possible outcomes. One of the two outcomes is referred to as the ‘success’ and is assigned a probability p. The other outcome is referred to as the ‘failure’, and is assigned the complementary probability 1 − p. These probabilities read
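As a concrete illustration of (1), here is a minimal Python sketch (the function names `sample_graph` and `log_graph_probability` are my own, purely illustrative) that draws an adjacency matrix by independent Bernoulli trials and evaluates the log-probability of a given graph:

```python
import numpy as np

def sample_graph(P, rng=None):
    """Draw a symmetric binary adjacency matrix: each entry a_ij (i < j)
    is an independent Bernoulli trial with success probability p_ij."""
    rng = np.random.default_rng(rng)
    n = P.shape[0]
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = A[j, i] = int(rng.random() < P[i, j])
    return A

def log_graph_probability(A, P):
    """Log of Eq. (1): sum over pairs i < j of
    a_ij * log(p_ij) + (1 - a_ij) * log(1 - p_ij).
    Assumes 0 < p_ij < 1 for all pairs."""
    iu = np.triu_indices_from(A, k=1)
    a, p = A[iu], P[iu]
    return np.sum(a * np.log(p) + (1 - a) * np.log(1 - p))
```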

$\langle a_{ij} \rangle = p_{ij} = \frac{x_i x_j}{1 + x_i x_j}$ —– (2)

where xi is the Lagrange multiplier obtained by ensuring that the expected degree of the corresponding vertex i equals its observed value: ⟨ki⟩ = ki ∀ i. As always happens in maximum-entropy ensembles, the probabilistic nature of configurations implies that the constraints are valid only on average (the angular brackets indicate an average over the ensemble of realizable networks). Also note that pij is a monotonically increasing function of xi and xj. This implies that ⟨ki⟩ is a monotonically increasing function of xi. An important consequence is that two vertices i and j with the same degree ki = kj must have the same value xi = xj.
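Since the xi are only defined implicitly by ⟨ki⟩ = ki, they must be found numerically. A minimal sketch of one common route, a simple fixed-point iteration (the function names and tolerance choices are mine, not taken from any specific package):

```python
import numpy as np

def solve_multipliers(k, tol=1e-10, max_iter=10_000):
    """Find the Lagrange multipliers x_i of the Configuration Model by
    iterating the fixed-point map
        x_i <- k_i / sum_{j != i} x_j / (1 + x_i x_j),
    whose fixed point satisfies <k_i> = k_i."""
    k = np.asarray(k, dtype=float)
    x = k / np.sqrt(k.sum())                      # common starting guess
    for _ in range(max_iter):
        xx = np.outer(x, x)
        # sum over j != i: full row sum minus the j = i term
        S = (x[None, :] / (1.0 + xx)).sum(axis=1) - x / (1.0 + x * x)
        x_new = k / S
        if np.max(np.abs(x_new - x)) < tol:
            break
        x = x_new
    return x_new

def expected_degrees(x):
    """<k_i> = sum_{j != i} p_ij, with p_ij from Eq. (2)."""
    P = np.outer(x, x) / (1.0 + np.outer(x, x))
    np.fill_diagonal(P, 0.0)
    return P.sum(axis=1)

# a toy degree sequence: the expected degrees should reproduce it
k = [3, 3, 2, 2, 1, 1]
x = solve_multipliers(k)
print(expected_degrees(x))   # ~ [3. 3. 2. 2. 1. 1.]
```

Note that vertices with equal degree converge to equal xi, as anticipated above.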


(2) provides an interesting connection with quantum physics, and in particular the statistical mechanics of fermions. The ‘selection rules’ of fermions dictate that only one particle at a time can occupy a single-particle state, exactly as each pair of vertices in a binary network can be either connected or disconnected. In this analogy, every pair i, j of vertices is a ‘quantum state’ identified by the ‘quantum numbers’ i and j. So each link of a binary network is like a fermion that can be in one of the available states, provided that no two objects are in the same state. (2) indicates the expected number of particles/links in the state specified by i and j. Unsurprisingly, it has the same form as the Fermi-Dirac statistics describing the expected number of fermions in a given quantum state. The probabilistic nature of links also allows for the presence of empty states, whose occurrence is regulated by the probability coefficients (1 − pij). The Configuration Model allows the whole degree sequence of the observed network to be preserved (on average), while randomizing other (unconstrained) network properties. Now, comparing the higher-order (unconstrained) observed topological properties with their expected values calculated over the maximum-entropy ensemble reveals how informative the degree sequence is in explaining the rest of the topology, via the probabilities in (2). Collecting these comparisons into a scatter plot, the agreement between model and observations can be assessed simply: the less scattered the cloud of points around the identity function, the better the agreement between model and reality. Conversely, a broadly scattered cloud around the identity function indicates the limited effectiveness of the chosen constraints in reproducing the unconstrained properties, signaling the presence of genuine higher-order patterns of self-organization that are not explainable in terms of the degree sequence alone. Thus, the ‘fermionic’ character of the binary model is the mere result of the restriction that no two binary links can be placed between any two vertices, leading to a mathematical result formally equivalent to that of quantum statistics.
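To make the scatter-plot assessment concrete, here is a hedged Python sketch building on `sample_graph` and `solve_multipliers` above; `A_obs` stands for the observed adjacency matrix (not defined here), and the choice of the average nearest-neighbour degree as the unconstrained property to compare is mine:

```python
import numpy as np
import matplotlib.pyplot as plt

def avg_nearest_neighbor_degree(A):
    """k^nn_i: mean degree of the neighbours of vertex i,
    a higher-order property NOT constrained by the model."""
    k = A.sum(axis=1)
    with np.errstate(invalid="ignore", divide="ignore"):
        return (A @ k) / k

# fit the model to the observed degree sequence and build p_ij from Eq. (2)
x = solve_multipliers(A_obs.sum(axis=1))
P = np.outer(x, x) / (1.0 + np.outer(x, x))
np.fill_diagonal(P, 0.0)

# Monte Carlo estimate of the ensemble expectation
samples = [avg_nearest_neighbor_degree(sample_graph(P)) for _ in range(1000)]
knn_model = np.nanmean(samples, axis=0)
knn_obs = avg_nearest_neighbor_degree(A_obs)

plt.scatter(knn_model, knn_obs)
lims = [np.nanmin(knn_model), np.nanmax(knn_model)]
plt.plot(lims, lims)           # identity line: perfect agreement lies on it
plt.xlabel("expected over the maximum-entropy ensemble")
plt.ylabel("observed")
plt.show()
```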

Black Holes. Thought of the Day 23.0


The formation of black holes can be understood, at least partially, within the context of general relativity. According to general relativity, gravitational collapse leads to a spacetime singularity. But this spacetime singularity cannot be adequately described within general relativity, because the equivalence principle of general relativity is not valid for spacetime singularities; therefore, general relativity does not give a complete description of black holes. The same problem exists with regard to the postulated initial singularity of the expanding cosmos. In these cases, quantum mechanics and quantum field theory also reach their limits; they are not applicable to highly curved spacetimes. At a certain curvature scale (the famous Planck scale), gravity has the same strength as the other interactions; then it is no longer possible to ignore gravity in a quantum field theoretical description. So, there exists no theory which would be able to describe gravitational collapses, or which could explain why (although they are predicted by general relativity) they don't happen, or why there is no spacetime singularity. And the real problems start if one brings general relativity and quantum field theory together to describe black holes. Then rather strange forms of contradiction arise, and the mutual conceptual incompatibility of general relativity and quantum field theory becomes very clear:
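For orientation, the scale in question follows from combining the constants ħ, G and c (standard definitions, quoted here for reference only):

$$\ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\,\mathrm{m}, \qquad E_P = \sqrt{\frac{\hbar c^5}{G}} \approx 1.2 \times 10^{19}\,\mathrm{GeV}.$$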

According to general relativity, black holes are surrounded by an event horizon. Material objects and radiation can enter the black hole, but nothing inside its event horizon can leave this region, because the gravitational pull is strong enough to hold back even radiation; the escape velocity is greater than the speed of light. Not even photons can leave a black hole. Black holes have a mass; in the case of the Schwarzschild metric, they have exclusively a mass. In the case of the Reissner-Nordström metric, they have a mass and an electric charge; in the case of the Kerr metric, they have a mass and an angular momentum; and in the case of the Kerr-Newman metric, they have mass, electric charge and angular momentum. These are, according to the no-hair theorem, all the characteristics a black hole has at its disposal. Let's restrict the argument in the following to the Reissner-Nordström metric, in which a black hole has only mass and electric charge. In the classical picture, the electric charge of a black hole becomes noticeable in the form of a force exerted on an electrically charged probe outside its event horizon. In the quantum field theoretical picture, interactions are the result of the exchange of virtual interaction bosons – in the case of an electric charge: virtual photons. But how can photons be exchanged between an electrically charged black hole and an electrically charged probe outside its event horizon, if no photon can leave a black hole – which can be considered a definition of a black hole? One could think that virtual photons, mediating the electrical interaction, are possibly able (in contrast to real photons, representing radiation) to leave the black hole. But why? There is no good reason and no good answer for that within our present theoretical framework. The same problem exists for the gravitational interaction, i.e. for the gravitational pull of the black hole exerted on massive objects outside its event horizon, if the gravitational force is understood as an exchange of gravitons between massive objects, as the quantum field theoretical picture in its extrapolation to gravity suggests. How could (virtual) gravitons leave a black hole at all?
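The statement that the escape velocity is greater than the speed of light can be made quantitative with the Newtonian estimate, which happens to reproduce the exact Schwarzschild radius of general relativity:

$$v_{\mathrm{esc}} = \sqrt{\frac{2GM}{r}} \geq c \quad \Longrightarrow \quad r \leq r_s = \frac{2GM}{c^2} \approx 3\,\mathrm{km} \times \frac{M}{M_\odot}.$$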

There are three possible scenarios resulting from this incompatibility between our assumptions about the characteristics of a black hole, based on general relativity, and the picture quantum field theory draws of interactions:

(i) Black holes don’t exist in nature. They are a theoretical artifact, demonstrating the asymptotic inadequacy of Einstein’s general theory of relativity. Only a quantum theory of gravity will explain where the general relativistic predictions fail, and why.

(ii) Black holes exist, as predicted by general relativity, and they have a mass and, in some cases, an electric charge, both leading to physical effects outside the event horizon. Then we would have to explain how these effects are realized physically. Either the quantum field theoretical picture of interactions is fundamentally wrong, or we would have to explain why virtual photons behave, with regard to black holes, completely differently from real radiation photons. Or the features of a black hole – mass, electric charge and angular momentum – would be features imprinted during its formation onto the spacetime surrounding the black hole or onto its event horizon. Then, interactions between a black hole and its environment would rather be interactions between the environment and the event horizon, or even interactions within the environmental spacetime.

(iii) Black holes exist as the product of gravitational collapses, but they do not exert any effects on their environment. This is the craziest of all scenarios. For this scenario, general relativity would have to be fundamentally wrong. In contrast to the picture given by general relativity, black holes would have no physically effective features at all: no mass, no electric charge, no angular momentum, nothing. And after the formation of a black hole, there would be no spacetime curvature, because no mass remains. (Or the spacetime curvature would have to result from other effects.) The mass and the electric charge of objects falling (casually) into a black hole would be irretrievably lost. They would simply disappear from the universe when they pass the event horizon. Black holes would not exert any forces on massive or electrically charged objects in their environment. They would not pull any massive objects into their event horizon and thereby increase their mass. Moreover, their event horizon would mark a region causally disconnected from our universe: a region outside of our universe. Everything falling casually into the black hole, or thrown intentionally into this region, would disappear from the universe.