Network-Theoretic View of the Fermionic Quantum State – Epistemological Rumination. Thought of the Day 150.0


In quantum physics, fundamental particles are believed to be of two types: fermions or bosons, depending on the value of their spin (an intrinsic ‘angular momentum’ of the particle). Fermions have half-integer spin and cannot occupy a quantum state (a configuration with specified microscopic degrees of freedom, or quantum numbers) that is already occupied. In other words, at most one fermion at a time can occupy one quantum state. The resulting probability that a quantum state is occupied is given by the Fermi-Dirac statistics.

Now, if we want to convert this into a maximum-entropy model in which the real structure is defined topologically, we need to reproduce the heterogeneity that is observed. The natural starting point is network theory, with an ensemble of networks in which each vertex i has the same degree ki as in the real network. This choice is justified by the fact that, being an entirely local topological property, the degree is expected to be directly affected by some intrinsic (non-topological) property of vertices. The caveat is that the real network shouldn’t be naively compared with the randomized one, since that could lead to interpreting the observed values as ‘unavoidable’ topological constraints, in the sense that violating them would produce ‘impossible’, or at least very unrealistic, configurations.

The resulting model is known as the Configuration Model, and is defined as a maximum-entropy ensemble of graphs with given degree sequence. The degree sequence, which is the constraint defining the model, is nothing but the ordered vector k of degrees of all vertices (where the ith component ki is the degree of vertex i). The ordering preserves the ‘identity’ of vertices: in the resulting network ensemble, the expected degree ⟨ki⟩ of each vertex i is the same as the empirical value ki for that vertex. In the Configuration Model, the graph probability is given by

P(A) = ∏i<j qij(aij) = ∏i<j pij^aij (1 − pij)^(1−aij) —– (1)

where qij(a) = pij^a (1 − pij)^(1−a) is the probability that the particular entry of the adjacency matrix A takes the value aij = a, which is a Bernoulli process with different pairs of vertices characterized by different connection probabilities pij. A Bernoulli trial (or Bernoulli process) is the simplest random event, i.e. one characterized by only two possible outcomes. One of the two outcomes is referred to as the ‘success’ and is assigned a probability p. The other outcome is referred to as the ‘failure’, and is assigned the complementary probability 1 − p. These probabilities read

⟨aij⟩ = pij = (xixj)/(1 + xixj) —– (2)

where xi is the Lagrange multiplier obtained by ensuring that the expected degree of the corresponding vertex i equals its observed value: ⟨ki⟩ = ki ∀ i. As always happens in maximum-entropy ensembles, the probabilistic nature of configurations implies that the constraints are valid only on average (the angular brackets indicate an average over the ensemble of realizable networks). Also note that pij is a monotonically increasing function of xi and xj. This implies that ⟨ki⟩ is a monotonically increasing function of xi. An important consequence is that two vertices i and j with the same degree ki = kj must have the same value xi = xj.
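To make the fitting step concrete, here is a minimal Python sketch of the Configuration Model of equations (1)–(2): it numerically solves for the multipliers xi so that the expected degree of every vertex matches a given degree sequence. The toy degree sequence, the function name and the use of scipy's fsolve are illustrative assumptions, not details taken from the text.

```python
# Minimal sketch: fit the Configuration Model (eq. 2) to an observed degree sequence.
import numpy as np
from scipy.optimize import fsolve

def fit_configuration_model(degrees):
    """Solve for Lagrange multipliers x_i such that <k_i> = k_i for every vertex i."""
    degrees = np.asarray(degrees, dtype=float)

    def residual(x):
        x = np.abs(x)                      # multipliers must be non-negative
        xx = np.outer(x, x)
        p = xx / (1.0 + xx)                # p_ij = x_i x_j / (1 + x_i x_j), eq. (2)
        np.fill_diagonal(p, 0.0)           # no self-loops
        return p.sum(axis=1) - degrees     # <k_i> - k_i, should vanish at the solution

    x0 = degrees / np.sqrt(degrees.sum())  # heuristic (Chung-Lu-like) starting point
    x = np.abs(fsolve(residual, x0))
    xx = np.outer(x, x)
    p = xx / (1.0 + xx)
    np.fill_diagonal(p, 0.0)
    return x, p                            # multipliers and connection probabilities

# Toy degree sequence on four vertices
x, p = fit_configuration_model([2, 2, 1, 1])
print(np.round(p.sum(axis=1), 3))          # expected degrees, approximately [2, 2, 1, 1]
```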


(2) provides an interesting connection with quantum physics, and in particular with the statistical mechanics of fermions. The ‘selection rules’ of fermions dictate that only one particle at a time can occupy a single-particle state, exactly as each pair of vertices in binary networks can be either connected or disconnected. In this analogy, every pair i, j of vertices is a ‘quantum state’ identified by the ‘quantum numbers’ i and j. So each link of a binary network is like a fermion that can be in one of the available states, provided that no two objects are in the same state. (2) indicates the expected number of particles/links in the state specified by i and j. Unsurprisingly, it has the same form as the so-called Fermi-Dirac statistics describing the expected number of fermions in a given quantum state. The probabilistic nature of links also allows for the presence of empty states, whose occurrence is now regulated by the probability coefficients (1 − pij). The Configuration Model allows the whole degree sequence of the observed network to be preserved (on average), while randomizing other (unconstrained) network properties. Now, when one compares the higher-order (unconstrained) observed topological properties with their expected values calculated over the maximum-entropy ensemble, the comparison indicates how far the degree sequence is informative in explaining the rest of the topology, as encoded in the probabilities in (2). Collecting these comparisons into a scatter plot, the agreement between model and observations can be assessed simply: the less scattered the cloud of points around the identity function, the better the agreement between model and reality. In principle, a broadly scattered cloud around the identity function would indicate that the chosen constraints have little effectiveness in reproducing the unconstrained properties, signaling the presence of genuine higher-order patterns of self-organization, not simply explainable in terms of the degree sequence alone. Thus, the ‘fermionic’ character of the binary model is the mere result of the restriction that no two binary links can be placed between any two vertices, leading to a mathematical result which is formally equivalent to that of quantum statistics.
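To make the formal correspondence explicit (a standard reparametrisation introduced here for illustration, not notation used in the passage above): writing xi = e^(−θi), equation (2) becomes

pij = (xi xj)/(1 + xi xj) = 1/(e^(θi + θj) + 1),

which is exactly the Fermi-Dirac occupation of a ‘state’ (i, j) with effective energy θi + θj, occupied by at most one link.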


Knowledge Limited for Dummies….Didactics.


Bertrand Russell, with Alfred North Whitehead, aimed in the Principia Mathematica to demonstrate that “all pure mathematics follows from purely logical premises and uses only concepts defined in logical terms.” Its goal was to provide a formalized logic for all mathematics, to develop the full structure of mathematics where every premise could be proved from a clear set of initial axioms.

Russell observed of the dense and demanding work, “I used to know of only six people who had read the later parts of the book. Three of those were Poles, subsequently (I believe) liquidated by Hitler. The other three were Texans, subsequently successfully assimilated.” The complex mathematical symbols of the manuscript required it to be written by hand, and its sheer size – when it was finally ready for the publisher, Russell had to hire a panel truck to send it off – made it impossible to copy. Russell recounted that “every time that I went out for a walk I used to be afraid that the house would catch fire and the manuscript get burnt up.”

Momentous though it was, the greatest achievement of Principia Mathematica was realized two decades after its completion when it provided the fodder for the metamathematical enterprises of an Austrian, Kurt Gödel. Although Gödel did face the risk of being liquidated by Hitler (therefore fleeing to the Institute for Advanced Study at Princeton), he was neither a Pole nor a Texan. In 1931, he wrote a treatise entitled On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which demonstrated that the goal Russell and Whitehead had so single-mindedly pursued was unattainable.

The flavor of Gödel’s basic argument can be captured in the contradictions contained in a schoolboy’s brainteaser. A sheet of paper has the words “The statement on the other side of this paper is true” written on one side and “The statement on the other side of this paper is false” on the reverse. The conflict isn’t resolvable. Or, even more trivially, a statement like: “This statement is unprovable.” You cannot prove the statement is true, because doing so would contradict it. If you prove the statement is false, then that means its converse is true – it is provable – which again is a contradiction.

The key point of contradiction for these two examples is that they are self-referential. This same sort of self-referentiality is the keystone of Gödel’s proof, where he uses statements that imbed other statements within them. This problem did not totally escape Russell and Whitehead. By the end of 1901, Russell had completed the first round of writing Principia Mathematica and thought he was in the homestretch, but was increasingly beset by these sorts of apparently simple-minded contradictions falling in the path of his goal. He wrote that “it seemed unworthy of a grown man to spend his time on such trivialities, but . . . trivial or not, the matter was a challenge.” Attempts to address the challenge extended the development of Principia Mathematica by nearly a decade.

Yet Russell and Whitehead had, after all that effort, missed the central point. Like granite outcroppings piercing through a bed of moss, these apparently trivial contradictions were rooted in the core of mathematics and logic, and were only the most readily manifest examples of a limit to our ability to structure formal mathematical systems. Just four years before Gödel had defined the limits of our ability to conquer the intellectual world of mathematics and logic with the publication of his Undecidability Theorem, the German physicist Werner Heisenberg’s celebrated Uncertainty Principle had delineated the limits of inquiry into the physical world, thereby undoing the efforts of another celebrated intellect, the great mathematician Pierre-Simon Laplace. In the early 1800s Laplace had worked extensively to demonstrate the purely mechanical and predictable nature of planetary motion. He later extended this theory to the interaction of molecules. In the Laplacean view, molecules are just as subject to the laws of physical mechanics as the planets are. In theory, if we knew the position and velocity of each molecule, we could trace its path as it interacted with other molecules, and trace the course of the physical universe at the most fundamental level. Laplace envisioned a world of ever more precise prediction, where the laws of physical mechanics would be able to forecast nature in increasing detail and ever further into the future, a world where “the phenomena of nature can be reduced in the last analysis to actions at a distance between molecule and molecule.”

What Gödel did to the work of Russell and Whitehead, Heisenberg did to Laplace’s concept of causality. The Uncertainty Principle, though broadly applied and draped in metaphysical context, is a well-defined and elegantly simple statement of physical reality – namely, the combined accuracy of a measurement of an electron’s location and its momentum is bounded: the product of the two uncertainties can never fall below a fixed value. The reason for this, viewed from the standpoint of classical physics, is that accurately measuring the position of an electron requires illuminating the electron with light of a very short wavelength. But the shorter the wavelength the greater the amount of energy that hits the electron, and the greater the energy hitting the electron the greater the impact on its velocity.

What is true in the subatomic sphere ends up being true – though with rapidly diminishing significance – for the macroscopic. Nothing can be measured with complete precision as to both location and velocity because the act of measuring alters the physical properties. The idea that if we know the present we can calculate the future was proven invalid – not because of a shortcoming in our knowledge of mechanics, but because the premise that we can perfectly know the present was proven wrong. These limits to measurement imply limits to prediction. After all, if we cannot know even the present with complete certainty, we cannot unfailingly predict the future. It was with this in mind that Heisenberg, ecstatic about his yet-to-be-published paper, exclaimed, “I think I have refuted the law of causality.”

The epistemological extrapolation of Heisenberg’s work was that the root of the problem was man – or, more precisely, man’s examination of nature, which inevitably impacts the natural phenomena under examination so that the phenomena cannot be objectively understood. Heisenberg’s principle was not something that was inherent in nature; it came from man’s examination of nature, from man becoming part of the experiment. (So in a way the Uncertainty Principle, like Gödel’s Undecidability Proposition, rested on self-referentiality.) While it did not directly refute Einstein’s assertion against the statistical nature of the predictions of quantum mechanics that “God does not play dice with the universe,” it did show that if there were a law of causality in nature, no one but God would ever be able to apply it. The implications of Heisenberg’s Uncertainty Principle were recognized immediately, and it became a simple metaphor reaching beyond quantum mechanics to the broader world.

This metaphor extends neatly into the world of financial markets. In the purely mechanistic universe of classical physics, we could apply Newtonian laws to project the future course of nature, if only we knew the location and velocity of every particle. In the world of finance, the elementary particles are the financial assets. In a purely mechanistic financial world, if we knew the position each investor has in each asset and the ability and willingness of liquidity providers to take on those assets in the event of a forced liquidation, we would be able to understand the market’s vulnerability. We would have an early-warning system for crises. We would know which firms are subject to a liquidity cycle, and which events might trigger that cycle. We would know which markets are being overrun by speculative traders, and thereby anticipate tactical correlations and shifts in the financial habitat. The randomness of nature and economic cycles might remain beyond our grasp, but the primary cause of market crisis, and the part of market crisis that is of our own making, would be firmly in hand.

The first step toward the Laplacean goal of complete knowledge is the advocacy by certain financial market regulators to increase the transparency of positions. Politically, that would be a difficult sell – as would any kind of increase in regulatory control. Practically, it wouldn’t work. Just as the atomic world turned out to be more complex than Laplace conceived, the financial world may be similarly complex and not reducible to a simple causality. The problems with position disclosure are many. Some financial instruments are complex and difficult to price, so it is impossible to measure precisely the risk exposure. Similarly, in hedge positions a slight error in the transmission of one part, or asynchronous pricing of the various legs of the strategy, will grossly misstate the total exposure. Indeed, the problems and inaccuracies in using position information to assess risk are exemplified by the fact that major investment banking firms choose to use summary statistics rather than position-by-position analysis for their firmwide risk management despite having enormous resources and computational power at their disposal.

Perhaps more importantly, position transparency also has implications for the efficient functioning of the financial markets beyond the practical problems involved in its implementation. The problems in the examination of elementary particles in the financial world are the same as in the physical world: Beyond the inherent randomness and complexity of the systems, there are simply limits to what we can know. To say that we do not know something is as much a challenge as it is a statement of the state of our knowledge. If we do not know something, that presumes that either it is not worth knowing or it is something that will be studied and eventually revealed. It is the hubris of man that all things are discoverable. But for all the progress that has been made, perhaps even more exciting than the rolling back of the boundaries of our knowledge is the identification of realms that can never be explored. A sign in Einstein’s Princeton office read, “Not everything that counts can be counted, and not everything that can be counted counts.”

The behavioral analogue to the Uncertainty Principle is obvious. There are many psychological inhibitions that lead people to behave differently when they are observed than when they are not. For traders it is a simple matter of dollars and cents that will lead them to behave differently when their trades are open to scrutiny. Beneficial though it may be for the liquidity demander and the investor, for the liquidity supplier transparency is bad. The liquidity supplier does not intend to hold the position for a long time, like the typical liquidity demander might. Like a market maker, the liquidity supplier will come back to the market to sell off the position – ideally when there is another investor who needs liquidity on the other side of the market. If other traders know the liquidity supplier’s positions, they will logically infer that there is a good likelihood these positions shortly will be put into the market. The other traders will be loath to be the first ones on the other side of these trades, or will demand more of a price concession if they do trade, knowing the overhang that remains in the market.

This means that increased transparency will reduce the amount of liquidity provided for any given change in prices. This is by no means a hypothetical argument. Frequently, even in the most liquid markets, broker-dealer market makers (liquidity providers) use brokers to enter their market bids rather than entering the market directly in order to preserve their anonymity.

The more information we extract to divine the behavior of traders and the resulting implications for the markets, the more the traders will alter their behavior. The paradox is that to understand and anticipate market crises, we must know positions, but knowing and acting on positions will itself generate a feedback into the market. This feedback often will reduce liquidity, making our observations less valuable and possibly contributing to a market crisis. Or, in rare instances, the observer/feedback loop could be manipulated to amass fortunes.

One might argue that the physical limits of knowledge asserted by Heisenberg’s Uncertainty Principle are critical for subatomic physics, but perhaps they are really just a curiosity for those dwelling in the macroscopic realm of the financial markets. We cannot measure an electron precisely, but certainly we still can “kind of know” the present, and if so, then we should be able to “pretty much” predict the future. Causality might be approximate, but if we can get it right to within a few wavelengths of light, that still ought to do the trick. The mathematical system may be demonstrably incomplete, and the world might not be pinned down on the fringes, but for all practical purposes the world can be known.

Unfortunately, while “almost” might work for horseshoes and hand grenades, 30 years after Gödel and Heisenberg yet a third limitation of our knowledge was in the wings, a limitation that would close the door on any attempt to block out the implications of microscopic uncertainty on predictability in our macroscopic world. Based on observations made by Edward Lorenz in the early 1960s and popularized by the so-called butterfly effect – the fanciful notion that the beating wings of a butterfly could change the predictions of an otherwise perfect weather forecasting system – this limitation arises because in some important cases immeasurably small errors can compound over time to limit prediction in the larger scale. Some three decades after the limits of measurement and thus of physical knowledge were demonstrated by Heisenberg in the world of quantum mechanics, Lorenz piled on a result that showed how microscopic errors could propagate to have a stultifying impact in nonlinear dynamic systems. This limitation could come into the forefront only with the dawning of the computer age, because it is manifested in the subtle errors of computational accuracy.

The essence of the butterfly effect is that small perturbations can have large repercussions in massive, random forces such as weather. Edward Lorenz was testing and tweaking a model of weather dynamics on a rudimentary vacuum-tube computer. The program was based on a small system of simultaneous equations, but seemed to provide an inkling into the variability of weather patterns. At one point in his work, Lorenz decided to examine in more detail one of the solutions he had generated. To save time, rather than starting the run over from the beginning, he picked some intermediate conditions that had been printed out by the computer and used those as the new starting point. The values he typed in were the same as the values held in the original simulation at that point, so the results the simulation generated from that point forward should have been the same as in the original; after all, the computer was doing exactly the same operations. What he found was that as the simulated weather pattern progressed, the results of the new run diverged, first very slightly and then more and more markedly, from those of the first run. After a point, the new path followed a course that appeared totally unrelated to the original one, even though they had started at the same place.

Lorenz at first thought there was a computer glitch, but as he investigated further, he discovered the basis of a limit to knowledge that rivaled that of Heisenberg and Gödel. The problem was that the numbers he had used to restart the simulation had been reentered based on his printout from the earlier run, and the printout rounded the values to three decimal places while the computer carried the values to six decimal places. This rounding, clearly insignificant at first, propagated a slight error in the next-round results, and this error grew with each new iteration of the program as it moved the simulation of the weather forward in time. The error doubled every four simulated days, so that after a few months the solutions were going their own separate ways. The slightest of changes in the initial conditions had traced out a wholly different pattern of weather.
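A minimal numerical sketch of Lorenz's accident can be run today in a few lines, using the now-standard Lorenz equations with textbook parameters, a simple Runge-Kutta integrator and made-up initial values (none of which are from the account above): integrate the same system twice, once from a full-precision state and once from that state rounded to three decimals, and watch the trajectories separate.

```python
# Sensitive dependence on initial conditions: integrate the Lorenz system twice,
# once from a full-precision state and once from the same state rounded to 3 decimals.
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state, dt=0.01, steps=5000):
    traj = [state]
    for _ in range(steps):
        # classical 4th-order Runge-Kutta step
        k1 = lorenz(state)
        k2 = lorenz(state + 0.5 * dt * k1)
        k3 = lorenz(state + 0.5 * dt * k2)
        k4 = lorenz(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(state)
    return np.array(traj)

exact = np.array([1.234567, 2.345678, 20.456789])
rounded = np.round(exact, 3)                     # the 'printout' initial condition

a, b = integrate(exact), integrate(rounded)
gap = np.linalg.norm(a - b, axis=1)
for t in range(0, 5001, 1000):
    print(f"t = {t * 0.01:5.1f}   separation = {gap[t]:.6f}")
# The separation starts around 1e-3 and grows roughly exponentially until the two
# trajectories are effectively unrelated, mirroring Lorenz's rounded restart.
```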

Intrigued by his chance observation, Lorenz wrote an article entitled “Deterministic Nonperiodic Flow,” which stated that “nonperiodic solutions are ordinarily unstable with respect to small modifications, so that slightly differing initial states can evolve into considerably different states.” Translation: Long-range weather forecasting is worthless. For his application in the narrow scientific discipline of weather prediction, this meant that no matter how precise the starting measurements of weather conditions, there was a limit after which the residual imprecision would lead to unpredictable results, so that “long-range forecasting of specific weather conditions would be impossible.” And since this occurred in a very simple laboratory model of weather dynamics, it could only be worse in the more complex equations that would be needed to properly reflect the weather. Lorenz discovered the principle that would emerge over time into the field of chaos theory, where a deterministic system generated with simple nonlinear dynamics unravels into an unrepeated and apparently random path.

The simplicity of the dynamic system Lorenz had used suggests a far-reaching result: Because we cannot measure without some error (harking back to Heisenberg), for many dynamic systems our forecast errors will grow to the point that even an approximation will be out of our hands. We can run a purely mechanistic system that is designed with well-defined and apparently well-behaved equations, and it will move over time in ways that cannot be predicted and, indeed, that appear to be random. This gets us to Santa Fe.

The principal conceptual thread running through the Santa Fe research asks how apparently simple systems, like that discovered by Lorenz, can produce rich and complex results. Its method of analysis in some respects runs in the opposite direction of the usual path of scientific inquiry. Rather than taking the complexity of the world and distilling simplifying truths from it, the Santa Fe Institute builds a virtual world governed by simple equations that when unleashed explode into results that generate unexpected levels of complexity.

In economics and finance, the institute’s agenda was to create artificial markets with traders and investors who followed simple and reasonable rules of behavior and to see what would happen. Some of the traders built into the model were trend followers, others bought or sold based on the difference between the market price and perceived value, and yet others traded at random times in response to liquidity needs. The simulations then printed out the paths of prices for the various market instruments. Qualitatively, these paths displayed all the richness and variation we observe in actual markets, replete with occasional bubbles and crashes. The exercises did not produce positive results for predicting or explaining market behavior, but they did illustrate that it is not hard to create a market that looks on the surface an awful lot like a real one, and to do so with actors who are following very simple rules. The mantra is that simple systems can give rise to complex, even unpredictable dynamics, an interesting converse to the point that much of the complexity of our world can – with suitable assumptions – be made to appear simple, summarized with concise physical laws and equations.
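A toy sketch in the spirit of the artificial markets described above puts trend followers, value traders and liquidity-driven noise traders on a single price; the rules and parameters are drastic simplifications invented here for illustration, not the Santa Fe Institute's actual models.

```python
# Toy artificial market: three simple trading rules push one price around.
import numpy as np

rng = np.random.default_rng(0)
T, value = 2000, 100.0
prices = [value]

for t in range(1, T):
    p = prices[-1]
    trend = (p - prices[-10]) if t >= 10 else 0.0
    trend_demand = 0.4 * np.sign(trend)          # trend followers chase recent moves
    value_demand = 0.05 * (value - p)            # value traders push toward fair value
    noise_demand = rng.normal(0.0, 0.5)          # random liquidity-driven trades
    excess = trend_demand + value_demand + noise_demand
    prices.append(p * np.exp(0.01 * excess))     # price impact of excess demand

prices = np.array(prices)
returns = np.diff(np.log(prices))
kurtosis = ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2
print("price range :", round(float(prices.max() - prices.min()), 2))
print("kurtosis    :", round(float(kurtosis), 2))   # fat tails from three crude rules
# Even these simple rules generate drifts, bursts and occasional crash-like moves.
```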

The systems explored by Lorenz were deterministic. They were governed definitively and exclusively by a set of equations where the value in every period could be unambiguously and precisely determined based on the values of the previous period. And the systems were not very complex. By contrast, whatever the set of equations are that might be divined to govern the financial world, they are not simple and, furthermore, they are not deterministic. There are random shocks from political and economic events and from the shifting preferences and attitudes of the actors. If we cannot hope to know the course even of deterministic systems like those of fluid mechanics, then no level of detail will allow us to forecast the long-term course of the financial world, buffeted as it is by the vagaries of the economy and the whims of psychology.

Superstrings as Grand Unifier. Thought of the Day 86.0


The first step of deriving General Relativity and particle physics from a common fundamental source may lie within the quantization of the classical string action. At a given momentum, quantized strings exist only at discrete energy levels, each level containing a finite number of string states, or particle types. There are huge energy gaps between each level, which means that the directly observable particles belong to a small subset of string vibrations. In principle, a string has harmonic frequency modes ad infinitum. However, the masses of the corresponding particles get larger, and the heavier the particle, the more quickly it decays into lighter ones.

Most importantly, the ground energy state of the string contains a massless, spin-two particle. There are no higher spin particles, which is fortunate since their presence would ruin the consistency of the theory. The presence of a massless spin-two particle is undesirable if string theory has the limited goal of explaining hadronic interactions. This had been the initial intention. However, attempts at a quantum field theoretic description of gravity had shown that the force-carrier of gravity, known as the graviton, had to be a massless spin-two particle. Thus, in string theory’s comeback as a potential “theory of everything,” a curse turns into a blessing.

Once again, as with the case of supersymmetry and supergravity, we have the astonishing result that quantum considerations require the existence of gravity! From this vantage point, right from the start the quantum divergences of gravity are swept away by the extended string. Rather than being mutually exclusive, as it seems at first sight, quantum physics and gravitation have a symbiotic relationship. This reinforces the idea that quantum gravity may be a mandatory step towards the unification of all forces.

Unfortunately, the ground state energy level also includes particles of negative mass-squared, known as tachyons. Such particles have light speed as their limiting minimum speed, thus violating causality. Tachyonic particles generally suggest an instability, or possibly even an inconsistency, in a theory. Since tachyons have negative mass-squared, an interaction involving finite input energy could result in particles of arbitrarily high energies together with arbitrarily many tachyons. There is no limit to the number of such processes, thus preventing a perturbative understanding of the theory.

An additional problem is that the string states only include bosonic particles. However, it is known that nature certainly contains fermions, such as electrons and quarks. Since supersymmetry is the invariance of a theory under the interchange of bosons and fermions, it may come as no surprise, a posteriori, that this is the key to resolving the second issue. As it turns out, the bosonic sector of the theory corresponds to the spacetime coordinates of a string, from the point of view of the conformal field theory living on the string worldsheet. This means that the additional fields are fermionic, so that the particle spectrum can potentially include all observable particles. In addition, the lowest energy level of a supersymmetric string is naturally massless, which eliminates the unwanted tachyons from the theory.

The inclusion of supersymmetry has some additional bonuses. Firstly, supersymmetry enforces the cancellation of zero-point energies between the bosonic and fermionic sectors. Since gravity couples to all energy, if these zero-point energies were not canceled, as in the case of non-supersymmetric particle physics, then they would have an enormous contribution to the cosmological constant. This would disagree with the observed cosmological constant being very close to zero, on the positive side, relative to the energy scales of particle physics.

Also, the weak, strong and electromagnetic couplings of the Standard Model differ by several orders of magnitude at low energies. However, at high energies, the couplings take on almost the same value, almost but not quite. It turns out that a supersymmetric extension of the Standard Model appears to render the values of the couplings identical at approximately 10^16 GeV. This may be the manifestation of the fundamental unity of forces. It would appear that the “bottom-up” approach to unification is winning. That is, gravitation arises from the quantization of strings. To put it another way, supergravity is the low-energy limit of string theory, and has General Relativity as its own low-energy limit.

Fundamental Theorem of Asset Pricing: Tautological Meeting of Mathematical Martingale and Financial Arbitrage by the Measure of Probability.


The Fundamental Theorem of Asset Pricing (FTAP hereafter) has two broad tenets, viz.

1. A market admits no arbitrage, if and only if, the market has a martingale measure.

2. Every contingent claim can be hedged, if and only if, the martingale measure is unique.

The FTAP is a theorem of mathematics, and the use of the term ‘measure’ in its statement places the FTAP within the theory of probability formulated by Andrei Kolmogorov (Foundations of the Theory of Probability) in 1933. Kolmogorov’s work took place in a context captured by Bertrand Russell, who observed that

It is important to realise the fundamental position of probability in science. . . . As to what is meant by probability, opinions differ.

In the 1920s the idea of randomness, as distinct from a lack of information, was becoming substantive in the physical sciences because of the emergence of the Copenhagen Interpretation of quantum mechanics. In the social sciences, Frank Knight argued that uncertainty was the only source of profit and the concept was pervading John Maynard Keynes’ economics (Robert Skidelsky Keynes the return of the master).

Two mathematical theories of probability had become ascendant by the late 1920s. Richard von Mises (brother of the Austrian economist Ludwig) attempted to lay down the axioms of classical probability within a framework of Empiricism, the ‘frequentist’ or ‘objective’ approach. To counterbalance von Mises, the Italian actuary Bruno de Finetti presented a more Pragmatic approach, characterised by his claim that “Probability does not exist” because it was only an expression of the observer’s view of the world. This ‘subjectivist’ approach was closely related to the less well-known position taken by the Pragmatist Frank Ramsey who developed an argument against Keynes’ Realist interpretation of probability presented in the Treatise on Probability.

Kolmogorov addressed the trichotomy of mathematical probability by generalising so that Realist, Empiricist and Pragmatist probabilities were all examples of ‘measures’ satisfying certain axioms. In doing this, a random variable became a function while an expectation was an integral: probability became a branch of Analysis, not Statistics. Von Mises criticised Kolmogorov’s generalised framework as unnecessarily complex. About a decade and a half back, the physicist Edwin Jaynes (Probability Theory: The Logic of Science) championed Leonard Savage’s subjectivist Bayesianism as having a “deeper conceptual foundation which allows it to be extended to a wider class of applications, required by current problems of science”.

The objections to measure theoretic probability for empirical scientists can be accounted for as a lack of physicality. Frequentist probability is based on the act of counting; subjectivist probability is based on a flow of information, which, following Claude Shannon, is now an observable entity in Empirical science. Measure theoretic probability is based on abstract mathematical objects unrelated to sensible phenomena. However, the generality of Kolmogorov’s approach made it flexible enough to handle problems that emerged in physics and engineering during the Second World War and his approach became widely accepted after 1950 because it was practically more useful.

In the context of the first statement of the FTAP, a ‘martingale measure’ is a probability measure, usually labelled Q, such that the (real, rather than nominal) price of an asset today, X0, is the expectation, using the martingale measure, of its (real) price in the future, XT. Formally,

X0 = EQ[XT]

The abstract probability distribution Q is defined so that this equality exists, not on any empirical information of historical prices or subjective judgement of future prices. The only condition placed on the relationship that the martingale measure has with the ‘natural’, or ‘physical’, probability measures usually assigned the label P, is that they agree on what is possible.

The term ‘martingale’ in this context derives from doubling strategies in gambling and it was introduced into mathematics by Jean Ville in a development of von Mises’ work. The idea that asset prices have the martingale property was first proposed by Benoit Mandelbrot in response to an early formulation of Eugene Fama’s Efficient Market Hypothesis (EMH), the two concepts being combined by Fama. For Mandelbrot and Fama the key consequence of prices being martingales was that the current price was independent of the future price and technical analysis would not prove profitable in the long run. In developing the EMH there was no discussion on the nature of the probability under which assets are martingales, and it is often assumed that the expectation is calculated under the natural measure. While the FTAP employs modern terminology in the context of value-neutrality, the idea of equating a current price with a future, uncertain, price has ethical ramifications.

The other technical term in the first statement of the FTAP, arbitrage, has long been used in financial mathematics. Fibonacci’s Liber Abaci (Laurence Sigler, Fibonacci’s Liber Abaci) discusses ‘Barter of Merchandise and Similar Things’: 20 arms of cloth are worth 3 Pisan pounds and 42 rolls of cotton are similarly worth 5 Pisan pounds; it is sought how many rolls of cotton will be had for 50 arms of cloth. In this case there are three commodities, arms of cloth, rolls of cotton and Pisan pounds, and Fibonacci solves the problem by having Pisan pounds ‘arbitrate’, or ‘mediate’ as Aristotle might say, between the other two commodities.

Within neo-classical economics, the Law of One Price was developed in a series of papers between 1954 and 1964 by Kenneth Arrow, Gérard Debreu and Lionel McKenzie in the context of general equilibrium, in particular the introduction of the Arrow Security, which, employing the Law of One Price, could be used to price any asset. It was on this principle that Black and Scholes believed the value of the warrants could be deduced by employing a hedging portfolio; in introducing their work with the statement that “it should not be possible to make sure profits”, they were invoking the arbitrage argument, which had an eight-hundred-year history. In the context of the FTAP, ‘an arbitrage’ has developed into the ability to formulate a trading strategy such that the probability, under a natural or martingale measure, of a loss is zero, but the probability of a positive profit is not.

To understand the connection between the financial concept of arbitrage and the mathematical idea of a martingale measure, consider the most basic case of a single asset with current price X0, whose (real) price at a future time T > 0 can take on one of two values, XTD < XTU. In this case an arbitrage would exist if X0 ≤ XTD < XTU: buying the asset now, at a price that is less than or equal to the future pay-offs, would lead to a possible profit at the end of the period, with the guarantee of no loss. Similarly, if XTD < XTU ≤ X0, short selling the asset now, and buying it back would also lead to an arbitrage. So, for there to be no arbitrage opportunities we require that

XTD < X0 < XTU

This implies that there is a number, 0 < q < 1, such that

X0 = XTD + q(XTU − XTD)

= qXTU + (1−q)XTD

The price now, X0, lies between the future prices, XTU and XTD, in the ratio q : (1 − q) and represents some sort of ‘average’. The first statement of the FTAP can be interpreted simply as “the price of an asset must lie between its maximum and minimum possible (real) future price”.

If X0 < XTD ≤ XTU we have that q < 0 whereas if XTD ≤ XTU < X0 then q > 1, and in both cases q does not represent a probability measure, which, by Kolmogorov’s axioms, must lie between 0 and 1. In either of these cases an arbitrage exists and a trader can make a riskless profit: the market involves ‘turpe lucrum’. This account gives an insight as to why James Bernoulli, in his moral approach to probability, considered situations where probabilities did not sum to 1: he was considering problems that were pathological not because they failed the rules of arithmetic but because they were unfair. It follows that if there are no arbitrage opportunities then the quantity q can be seen as representing the ‘probability’ that the XTU price will materialise in the future. Formally

X0 = qXTU + (1−q)XTD ≡ EQ[XT]
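A small numerical sketch of this two-state argument (the prices below are made up for illustration): back out q from the current and future prices and flag an arbitrage whenever q falls outside the unit interval.

```python
# One-period, two-state asset: recover the 'martingale probability' q from prices
# and flag arbitrage when q lies outside (0, 1), as in the discussion above.
def martingale_probability(x0, x_down, x_up):
    """Return q such that x0 = q*x_up + (1-q)*x_down (real prices, no discounting)."""
    if not x_down < x_up:
        raise ValueError("need x_down < x_up")
    return (x0 - x_down) / (x_up - x_down)

for x0 in (95.0, 105.0, 125.0):                  # current price vs future prices 100 / 120
    q = martingale_probability(x0, 100.0, 120.0)
    status = "no arbitrage" if 0.0 < q < 1.0 else "arbitrage (q outside (0,1))"
    print(f"X0 = {x0:6.1f}  ->  q = {q:+.2f}   {status}")
# X0 = 95 gives q < 0 (buy now for a sure profit); X0 = 125 gives q > 1 (short now);
# X0 = 105 gives q = 0.25, the measure under which X0 = EQ[XT].
```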

The connection between the financial concept of arbitrage and the mathematical object of a martingale is essentially a tautology: both statements mean that the price today of an asset must lie between its future minimum and maximum possible value. This first statement of the FTAP was anticipated by Frank Ramsey when he defined ‘probability’ in the Pragmatic sense of ‘a degree of belief’ and argued that ‘degrees of belief’ can be measured through betting odds. On this basis he formulated some axioms of probability, including that a probability must lie between 0 and 1. He then goes on to say that

These are the laws of probability, …If anyone’s mental condition violated these laws, his choice would depend on the precise form in which the options were offered him, which would be absurd. He could have a book made against him by a cunning better and would then stand to lose in any event.

This is a Pragmatic argument that identifies the absence of the martingale measure with the existence of arbitrage and today this forms the basis of the standard argument as to why arbitrages do not exist: if they did, then other market participants would bankrupt the agent who was mis-pricing the asset. This has become known in philosophy as the ‘Dutch Book’ argument and as a consequence of the fact/value dichotomy this is often presented as a ‘matter of fact’. However, ignoring the fact/value dichotomy, the Dutch book argument is a variant of the ‘Golden Rule’ – “Do to others as you would have them do to you.” – it is infused with the moral concepts of fairness and reciprocity (Jeffrey Wattles The Golden Rule).

Underlying the FTAP, then, is the ethical concept of Justice, capturing the social norms of reciprocity and fairness. This is significant in the context of Granovetter’s discussion of embeddedness in economics. It is conventional to assume that mainstream economic theory is ‘undersocialised’: agents are rational calculators seeking to maximise an objective function. The argument presented here is that a central theorem in contemporary economics, the FTAP, is deeply embedded in social norms, despite being presented as an undersocialised mathematical object. This embeddedness is a consequence of the origins of mathematical probability being in the ethical analysis of commercial contracts: the feudal shackles are still binding this most modern of economic theories.

Ramsey goes on to make an important point

Having any definite degree of belief implies a certain measure of consistency, namely willingness to bet on a given proposition at the same odds for any stake, the stakes being measured in terms of ultimate values. Having degrees of belief obeying the laws of probability implies a further measure of consistency, namely such a consistency between the odds acceptable on different propositions as shall prevent a book being made against you.

Ramsey is arguing that an agent needs to employ the same measure in pricing all assets in a market, and this is the key result in contemporary derivative pricing. Having identified the martingale measure on the basis of a ‘primal’ asset, it is then applied across the market, in particular to derivatives on the primal asset; the well-known result is that if two assets offer different ‘market prices of risk’, an arbitrage exists. This explains why the market-price of risk appears in the Radon-Nikodym derivative and the Capital Market Line: it enforces Ramsey’s consistency in pricing. The second statement of the FTAP is concerned with incomplete markets, which appear in relation to Arrow-Debreu prices. In mathematics, in the special case that there are as many, or more, assets in a market as there are possible future, uncertain, states, a unique pricing vector can be deduced for the market because of Cramer’s Rule. If the elements of the pricing vector satisfy the axioms of probability, specifically each element is positive and they all sum to one, then the market precludes arbitrage opportunities. This is the case covered by the first statement of the FTAP. In the more realistic situation that there are more possible future states than assets, the market can still be arbitrage free but the pricing vector, the martingale measure, might not be unique. The agent can still be consistent in selecting which particular martingale measure they choose to use, but another agent might choose a different measure, such that the two do not agree on a price. In the context of the Law of One Price, this means that we cannot hedge, replicate or cover a position in the market, such that the portfolio is riskless. The significance of the second statement of the FTAP is that it tells us that in the sensible world of imperfect knowledge and transaction costs, a model within the framework of the FTAP cannot give a precise price. When faced with incompleteness in markets, agents need alternative ways to price assets and behavioural techniques have come to dominate financial theory. This feature was realised in The Port Royal Logic when it recognised the role of transaction costs in lotteries.
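The completeness point can be made concrete with a small numerical sketch (the payoffs and prices below are invented for illustration): with as many traded assets as future states the state-price vector, and hence the martingale measure, is pinned down uniquely; with fewer assets than states it is not.

```python
# State-price vectors q solving  prices = payoffs @ q  for a one-period market.
import numpy as np

# Complete case: 2 assets (bond, stock), 2 future states; rows of the matrix are assets.
payoffs = np.array([[1.0, 1.0],        # bond pays 1 in both states
                    [120.0, 100.0]])   # stock pays 120 (up) or 100 (down)
prices = np.array([1.0, 105.0])        # today's prices
q = np.linalg.solve(payoffs, prices)
print("unique state prices:", q)       # [0.25, 0.75] -> a unique martingale measure

# Incomplete case: the same 2 assets but 3 future states -> an underdetermined system.
payoffs3 = np.array([[1.0, 1.0, 1.0],
                     [120.0, 105.0, 100.0]])
q_any, *_ = np.linalg.lstsq(payoffs3, prices, rcond=None)
print("one of many state-price vectors:", q_any)
# Any non-negative q with payoffs3 @ q = prices is admissible, so different agents can
# each be internally consistent yet disagree on the price of a derivative.
```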

Dialectics of God: Lautman’s Mathematical Ascent to the Absolute. Paper.


For the figure and translation, visit Fractal Ontology.

The first of Lautman’s two theses (On the unity of the mathematical sciences) takes as its starting point a distinction that Hermann Weyl made in his work on group theory and quantum mechanics. Weyl distinguished between ‘classical’ mathematics, which found its highest flowering in the theory of functions of complex variables, and the ‘new’ mathematics represented by (for example) the theory of groups and abstract algebras, set theory and topology. For Lautman, the ‘classical’ mathematics of Weyl’s distinction is essentially analysis, that is, the mathematics that depends on some variable tending towards zero: convergent series, limits, continuity, differentiation and integration. It is the mathematics of arbitrarily small neighbourhoods, and it reached maturity in the nineteenth century. On the other hand, the ‘new’ mathematics of Weyl’s distinction is ‘global’; it studies the structures of ‘wholes’. Algebraic topology, for example, considers the properties of an entire surface rather than aggregations of neighbourhoods. Lautman re-draws the distinction:

In contrast to the analysis of the continuous and the infinite, algebraic structures clearly have a finite and discontinuous aspect. Though the elements of a group, field or algebra (in the restricted sense of the word) may be infinite, the methods of modern algebra usually consist in dividing these elements into equivalence classes, the number of which is, in most applications, finite.

In his other major thesis (Essay on the notions of structure and existence in mathematics), Lautman gives his dialectical thought a more philosophical and polemical expression. His thesis is composed of ‘structural schemas’ and ‘origination schemas’. The three structural schemas are: local/global, intrinsic properties/induced properties and the ‘ascent to the absolute’. The first two of these three schemas are close to Lautman’s ‘unity’ thesis. The ‘ascent to the absolute’ is a different sort of pattern; it involves a progress from mathematical objects that are in some sense ‘imperfect’, towards an object that is ‘perfect’ or ‘absolute’. His two mathematical examples of this ‘ascent’ are: class field theory, which ‘ascends’ towards the absolute class field, and the covering surfaces of a given surface, which ‘ascend’ towards a simply-connected universal covering surface. In each case, there is a corresponding sequence of nested subgroups, which induces a ‘stepladder’ structure on the ‘ascent’. This dialectical pattern is rather different to the others. The earlier examples were of pairs of notions (finite/infinite, local/global, etc.) and neither member of any pair was inferior to the other. Lautman argues that on some occasions, finite mathematics offers insight into infinite mathematics. In mathematics, the finite is not a somehow imperfect version of the infinite. Similarly, the ‘local’ mathematics of analysis may depend for its foundations on ‘global’ topology, but the former is not a botched or somehow inadequate version of the latter. Lautman introduces the section on the ‘ascent to the absolute’ by rehearsing Descartes’s argument that his own imperfections lead him to recognise the existence of a perfect being (God). Man (for Descartes) is not the dialectical opposite of or alternative to God; rather, man is an imperfect image of his creator. In a similar movement of thought, according to Lautman, reflection on ‘imperfect’ class fields and covering surfaces leads mathematicians up to ‘perfect’, ‘absolute’ class fields and covering surfaces respectively.

Albert Lautman, Dialectics in Mathematics

Derivability of the Essential Laws of Quantum Mechanics from the Relational Logic of Charles Sanders Peirce


Charles Sanders Peirce made important contributions in logic, where he invented and elaborated a novel system of logical syntax and fundamental logical concepts. The starting point is the binary relation SiRSj between the two ‘individual terms’ (subjects) Sj and Si. In shorthand notation we represent this relation by Rij. Relations may be composed: whenever we have relations of the form Rij, Rjl, a third transitive relation Ril emerges following the rule

RijRkl = δjkRil —– (1)

In ordinary logic the individual subject is the starting point and it is defined as a member of a set. Peirce considered the individual as the aggregate of all its relations

Si = ∑j Rij —– (2)

The individual Si thus defined is an eigenstate of the Rii relation

RiiSi = Si —– (3)

The relations Rii are idempotent

Rii² = Rii —– (4)

and they span the identity

∑i Rii = 1 —– (5)

The Peircean logical structure bears resemblance to category theory. In categories the concept of transformation (transition, map, morphism or arrow) enjoys an autonomous, primary and irreducible role. A category consists of objects A, B, C,… and arrows (morphisms) f, g, h,… . Each arrow f is assigned an object A as domain and an object B as codomain, indicated by writing f : A → B. If g is an arrow g : B → C with domain B, the codomain of f, then f and g can be “composed” to give an arrow g∘f : A → C. The composition obeys the associative law h∘(g∘f) = (h∘g)∘f. For each object A there is an arrow 1A : A → A called the identity arrow of A. The analogy with the relational logic of Peirce is evident: Rij stands as an arrow, the composition rule is manifested in equation (1) and the identity arrow for A ≡ Si is Rii.
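As a toy illustration of this categorical vocabulary (finite sets as objects and dictionaries as arrows, a deliberately naive encoding invented here rather than a formal construction), composition, identities and associativity look as follows.

```python
# Objects are finite sets, arrows are dicts mapping a domain into a codomain.
def compose(g, f):
    """Return g∘f; assumes the codomain of f lies inside the domain of g."""
    return {a: g[f[a]] for a in f}

A, B, C = {1, 2}, {"x", "y"}, {True, False}
f = {1: "x", 2: "y"}                # f : A -> B
g = {"x": True, "y": True}          # g : B -> C
h = {True: 0, False: 1}             # h : C -> {0, 1}
id_A = {a: a for a in A}            # identity arrow 1_A

print(compose(g, f))                                            # g∘f : A -> C
print(compose(f, id_A) == f)                                    # identity law, True
print(compose(h, compose(g, f)) == compose(compose(h, g), f))   # associativity, True
```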

Rij may receive multiple interpretations: as a transition from the j state to the i state, as a measurement process that rejects all impinging systems except those in the state j and permits only systems in the state i to emerge from the apparatus, as a transformation replacing the j state by the i state. We proceed to a representation of Rij

Rij = |ri⟩⟨rj| —– (6)

where state ⟨ri | is the dual of the state |ri⟩ and they obey the orthonormal condition

⟨ri |rj⟩ = δij —– (7)

It is immediately seen that our representation satisfies the composition rule equation (1). The completeness, equation (5), takes the form

∑n |rn⟩⟨rn| = 1 —– (8)

All relations remain satisfied if we replace the state |ri⟩ by |ξi⟩ where

|ξi⟩ = 1/√N ∑n |ri⟩⟨rn| —– (9)

with N the number of states. Thus we verify Peirce’s suggestion, equation (2), and the state |ri⟩ is derived as the sum of all its interactions with the other states. Rij acts as a projection, transferring from one r state to another r state

Rij |rk⟩ = δjk |ri⟩ —– (10)

We may think also of another property characterizing our states and define a corresponding operator

Qij = |qi⟩⟨qj | —– (11)

with

Qij |qk⟩ = δjk |qi⟩ —– (12)

and

∑n |qn⟩⟨qn| = 1 —– (13)

Successive measurements of the q-ness and r-ness of the states is provided by the operator

RijQkl = |ri⟩⟨rj |qk⟩⟨ql | = ⟨rj |qk⟩ Sil —– (14)

with

Sil = |ri⟩⟨ql | —– (15)

Considering the matrix elements of an operator A as Anm = ⟨rn |A |rm⟩ we find for the trace

Tr(Sil) = ∑n ⟨rn |Sil |rn⟩ = ⟨ql |ri⟩ —– (16)

From the above relation we deduce

Tr(Rij) = δij —– (17)

Any operator can be expressed as a linear superposition of the Rij

A = ∑i,j AijRij —– (18)

with

Aij =Tr(ARji) —– (19)

The individual states could be redefined

|ri⟩ → e^(iφi) |ri⟩ —– (20)

|qi⟩ → e^(iθi) |qi⟩ —– (21)

without affecting the corresponding composition laws. However the overlap number ⟨ri |qj⟩ changes and therefore we need an invariant formulation for the transition |ri⟩ → |qj⟩. This is provided by the trace of the closed operation RiiQjjRii

Tr(RiiQjjRii) ≡ p(qj, ri) = |⟨ri|qj⟩|² —– (22)

The completeness relation, equation (13), guarantees that p(qj, ri) may assume the role of a probability since

∑j p(qj, ri) = 1 —– (23)
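The algebra above can be checked numerically with explicit vectors; the two-dimensional example and the particular choice of r- and q-bases below are arbitrary illustrations, not part of the derivation.

```python
# Numerical check of the Peircean relations R_ij = |r_i><r_j|: composition (1),
# completeness (8), projection (10) and the probabilities of (22)-(23).
import numpy as np

N = 2
r = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]       # |r_i> basis
q = [np.array([1.0, 1.0]) / np.sqrt(2),                 # |q_i> basis (Hadamard-like)
     np.array([1.0, -1.0]) / np.sqrt(2)]

R = [[np.outer(r[i], r[j]) for j in range(N)] for i in range(N)]
Q = [[np.outer(q[i], q[j]) for j in range(N)] for i in range(N)]

# Composition rule R_ij R_kl = delta_jk R_il
assert np.allclose(R[0][1] @ R[1][0], R[0][0])
assert np.allclose(R[0][1] @ R[0][1], np.zeros((N, N)))

# Completeness: sum_n |r_n><r_n| = 1
assert np.allclose(sum(R[n][n] for n in range(N)), np.eye(N))

# Projection: R_ij |r_k> = delta_jk |r_i>
assert np.allclose(R[0][1] @ r[1], r[0]) and np.allclose(R[0][1] @ r[0], 0.0)

# Transition probabilities p(q_j, r_i) = Tr(R_ii Q_jj R_ii) = |<r_i|q_j>|^2, rows sum to 1
p = np.array([[np.trace(R[i][i] @ Q[j][j] @ R[i][i]) for j in range(N)] for i in range(N)])
print(p)                                   # here [[0.5, 0.5], [0.5, 0.5]]
assert np.allclose(p.sum(axis=1), 1.0)
```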

We discover that starting from the relational logic of Peirce we obtain all the essential laws of Quantum Mechanics. Our derivation underlines the thoroughly relational nature of Quantum Mechanics and goes in parallel with the analysis of the quantum algebra of microscopic measurement.

Duality’s Anti-Realism or Poisoning Ontological Realism: The Case of Vanishing Ontology. Note Quote.


If the intuitive quality of the external ontological object is diminished piece by piece during the evolutionary progress of physical theory (which must be acknowledged also in a hidden parameter framework), is there any core of the notion of an ontological object at all that can be trusted to be immune against scientific decomposition?

Quantum mechanics cannot answer this question. Contemporary physics is in a quite different position. The full dissolution of ontology is a characteristic process of particle physics whose unfolding starts with quantum mechanics and gains momentum in gauge field theory until, in string theory, the ontological object has simply vanished.

The concept to be considered is string duality, with the remarkable phenomenon of T-duality according to which a string wrapped around a small compact dimension can as well be understood as a string that is not wrapped but moves freely along a large compact dimension. The phenomenon is rooted in the quantum principles but clearly transcends what one is used to in the quantum world. It is not a mere case of quantum indeterminacy concerning two states of the system. We rather face two theoretical formulations which are indistinguishable in principle so that they cannot be interpreted as referring to two different states at all. Nevertheless the two formulations differ in characteristics which lie at the core of any meaningful ontology of an external world. They differ in the shape of space-time and they differ in the form and topological position of the elementary objects. The fact that those characteristics are reduced to technical parameters whose values depend on the choice of the theoretical formulation contradicts ontological scientific realism in the most straightforward way. If a situation can be described by two different sets of elementary objects depending on the choice of the theoretical framework, how can it make sense to assert that these ontological objects actually exist in an external world?
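For reference, the quantitative statement behind T-duality is the standard closed-string mass spectrum on a circle of radius R (a textbook formula quoted here for illustration, not derived in the passage above):

M² = (n/R)² + (wR/α′)² + (2/α′)(N + Ñ − 2),

where n is the momentum number, w the winding number and N, Ñ the left- and right-moving oscillator levels. The spectrum is manifestly invariant under R → α′/R together with n ↔ w, which is the precise sense in which the wrapped and unwrapped descriptions are physically indistinguishable.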

The question gets even more virulent as T-duality by no means remains the only duality relation that surfaces in string theory. It turns out that the existence of dualities is one of string theory’s most characteristic features. They seem to pop up wherever one looks for them. Probably the most important role played by duality relations today is to connect all different superstring theories. Before 1995 physicists knew 5 different types of superstring theory. Then it turned out that these 5 theories and a sixth, until then unknown, theory named ‘M-theory’ are interconnected by duality relations. Two types of duality are involved. Some theories can be transformed into each other through inversion of a compactification radius, which is the phenomenon we know already under the name of T-duality. Others can be transformed into each other by inversion of the string coupling constant. This duality is called S-duality. Then there is M-theory, where the string coupling constant is transformed into an additional 11th dimension whose size is proportional to the coupling strength of the dual theory. The described web of dualities connects theories whose elementary objects have different symmetry structure and different dimensionality. M-theory even has a different number of spatial dimensions than its co-theories. Duality nevertheless implies that M-theory and the 5 possible superstring theories only represent different formulations of one single actual theory. This statement constitutes the basis for string theory’s uniqueness claims and shows the pivotal role played by the duality principle.

An evaluation of the philosophical implications of duality in modern string theory must first acknowledge that the problems of uniquely identifying the ontological basis of a scientific theory are as old as the concept of invisible scientific objects itself. Complex theories tend to allow the insertion of ontology at more than one level of their structure. It is not a priori clear in classical electromagnetism whether the field or the potential should be understood as the fundamental physical object and one may wonder similarly in quantum field theory whether that theory’s basic object is the particle or the field. Questions of this type clearly pose a serious philosophical problem. Some philosophers, like Quine, have concluded that there is no objective basis for the imputation of ontologies. Philosophers with a stronger affinity for realism, however, often stress that there do exist arguments which are able to select a preferable ontological set after all. It might also be suggested that ontological alternatives at different levels of the theoretical structure do not pose a threat to realism but should be interpreted merely as different parameterisations of ontological reality. The problem is created at a philosophical level by imputing an ontology to a physical theory whose structure neither depends on nor predetermines uniquely that imputation. The physicist puts one compact theoretical structure into space-time and the philosopher struggles with the question at which level ontological claims should be inserted.

The implications of string duality have an entirely different quality. String duality really posits different ‘parallel’, empirically indistinguishable versions of structure in spacetime which are based on different sets of elementary objects. This statement is placed at the physical level, independently of any philosophical interpretation. It thus transfers the problem of the lack of ontological uniqueness from a philosophical to a physical level and makes it much more difficult to cure. If theories with different sets of elementary objects give the same physical world (i.e. show the same pattern of observables), the elementary object can no longer be seen as the unique foundation of the physical world. There seems to be no way to avoid this conclusion. There exists an additional aspect of duality that underlines its anti-ontological character: duality does not just spell destruction for the notion of the ontological scientific object but in a sense offers a replacement as well.

Do there remain any loop-holes in duality’s anti-realist implications which could be used by the die-hard realist? A natural objection to the asserted crucial philosophical importance of duality can be based on the fact that duality was not invented in the context of string theory. It has been known since the time of P. A. M. Dirac that quantum electrodynamics with magnetic monopoles would be dual to a theory with inverted coupling constant and exchanged electric and magnetic charges. The question arises: if duality is poison to ontological realism, why did it not already have its effect at the level of quantum electrodynamics? Answering this question gives a nice survey of the possible measures to save ontological realism. As it will turn out, they all fail in string theory.

In the case of quantum-electrodynamics the realist has several arguments to counter the duality threat. First, duality looks more like an accidental oddity that appears in an unrealistic scenario than like a characteristic feature of the world. No one has observed magnetic monopoles, which renders the problem hypothetical. And even if there were magnetic monopoles, an embedding of electromagnetism into a fuller description of the natural forces would destroy the dual structure anyway.

In string theory the situation is very different. Duality is no longer a ‘lucky strike’ that just happens to arise in a scenario which is not the real one anyway. As we have seen, it rather represents a core feature of the emerging theoretical structure and cannot be ignored. A second option open to the realist at the level of quantum electrodynamics is to shift the ontological posit. Some philosophers of quantum physics argue that the natural elementary object of quantum field theory is the quantum field, which represents something like the potentiality to produce elementary particles. One quantum field covers the full sum over all variations of particle exchange which have to be accounted for in a quantum process. The philosopher who posits the quantum field to be the fundamental real object discovered by quantum field theory understands the single elementary particles as mere mathematical entities introduced to calculate the behaviour of the quantum field. Dual theories, from this perspective, can be taken as different technical procedures for calculating the behaviour of the univocal ontological object, the electromagnetic quantum field. The phenomenon of duality then does not appear as a threat to the ontological concept per se but merely as an indication in favour of an ontologisation of the field instead of the particle.

The field-theoretical approach of interpreting the quantum field as the ontological object has no counterpart in string theory. String theory only exists as a perturbative theory; there seems to be no way to introduce anything like a quantum field that would cover the full expansion of string exchanges. In the light of duality this lack of a unique ontological object arguably appears rather natural. The reason is related to another point that makes string dualities more dramatic than their field-theoretical predecessor: string theory includes gravitation. Therefore the object (the string geometry) and space-time are not independent. Actually it turns out that the string geometry in a way carries all information about space-time as well. This dependence of space-time on string geometry already makes it difficult to imagine how it should be possible to put into this very spacetime some kind of overall field whose coverage of all string realisations actually implies coverage of variations of spacetime itself. The duality context makes the paradoxical quality of such an attempt more transparent. If two dual theories with different radii of a compactified dimension are to be covered by the same ontological object, in analogy to the quantum field in field theory, this object obviously cannot live in space and time. If it did, it would have to choose one of the two spacetime versions endorsed by the dual theories, thereby discriminating against the other. Such a theory, however, should not be expected to be a theory of objects in spacetime, and therefore does not raise any hopes of redeeming the external ontological perspective.

A third strategy to save ontological realism is based on the following argument: in quantum electrodynamics the difference between the dual theories boils down to a mere replacement of a weak coupling constant, which allows perturbative calculation, by a strong one, which does not. Therefore the choice is open between a natural formulation and a clumsy, intractable one which should perhaps simply be discarded as an artificial construction.

Today string theory cannot tell whether its final solution will put its parameters comfortably into the low-coupling-constant-and-large-compact-dimension regime of one of the five superstring theories or of M-theory. This might be the case, but it might equally happen that the solution lies in a region of parameter space where no theory clearly stands out in this sense. However, even if there were one preferred theory, simply discarding the others could not save realism as in the case of field theory. First, the argument of natural choice is not really applicable to T-duality. A small compactification radius does not render a theory intractable the way a large coupling constant does; the choice of the dual version with a large radius thus looks more like a convention than anything else. Second, the choice of both compactification radii and string coupling constants in string theory is the consequence of a dynamical process that has to be calculated itself. Calculation thus precedes the selection of a certain point in parameter space, and consequently also a possible selection of the ontological objects. The ontological objects, therefore, even if one wanted to hang on to their meaningfulness in the final scenario, would appear as mere products of prior dynamics and not as a priori actors in the game.

Summing up, the phenomenon of duality is admittedly a bit irritating for the ontological realist in field theory, but he can live with it. In string theory, however, the field-theoretical strategies to save realism all fail. The position assumed by the duality principle in string theory clearly renders obsolete the traditional realist understanding of scientific objects as smaller cousins of visible ones. The theoretical posits of string theory get their meaning only relative to their theoretical framework and must be understood as mathematical concepts without any claim to ‘corporal’ existence in an external world. The world of string theory has cut all ties with classical theories about physical bodies. To stick to ontological realism in this altered context would be inadequate to the elementary changes which characterize the new situation. The demise of ontology in string theory opens new perspectives on positions that stress the discontinuity of ontological claims throughout the history of scientific theories.

Vector Representations and Why Would They Deviate From Projective Geometry? Note Quote.


There is, of course, a definite reason why von Neumann used the mathematical structure of a complex Hilbert space for the formalization of quantum mechanics, but this reason is much less profound than it is for Riemannian geometry and general relativity. The reason is that Heisenberg’s matrix mechanics and Schrödinger’s wave mechanics turned out to be equivalent, the first being a formalization of the new mechanics making use of l2, the set of all square-summable complex sequences, and the second making use of L2(R3), the set of all square-integrable complex functions of three real variables. The two spaces l2 and L2(R3) are canonical examples of a complex Hilbert space. This means that Heisenberg and Schrödinger were already working in a complex Hilbert space when they formulated matrix mechanics and wave mechanics, without being aware of it. This made it a straightforward choice for von Neumann to propose a formulation of quantum mechanics in an abstract complex Hilbert space, reducing matrix mechanics and wave mechanics to two possible specific representations.

One problem with the Hilbert space representation was known from the start. A (pure) state of a quantum entity is represented by a unit vector or ray of the complex Hilbert space, and not by an arbitrary vector. Indeed, vectors contained in the same ray represent the same state, and one has to renormalize the vector that represents the state after it has been changed in one way or another. It is well known that if the rays of a vector space are called points and the two-dimensional subspaces of this vector space are called lines, the set of points and lines corresponding in this way to a vector space forms a projective geometry. What we have just remarked about the unit vector or ray representing the state of the quantum entity means that, in some way, the projective geometry corresponding to the complex Hilbert space represents the physics of the quantum world more intrinsically than does the Hilbert space itself. This state of affairs is revealed explicitly in the dynamics of quantum entities, which is built by using group representations: one has to consider projective representations, i.e. representations in the corresponding projective geometry, and not vector representations.
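
To make the point about rays concrete, the following minimal sketch (with an arbitrary Hermitian matrix standing in for an observable) checks that rescaling a state vector by any nonzero complex number leaves all normalised expectation values unchanged, so the physically relevant object is the ray, i.e. a point of the projective space, rather than the vector itself.

```python
# Vectors in the same ray give identical (normalised) expectation values.
import numpy as np

def expectation(psi, A):
    # <psi|A|psi> / <psi|psi>
    return (np.vdot(psi, A @ psi) / np.vdot(psi, psi)).real

psi = np.array([1.0 + 0.5j, 0.2 - 1.0j])
A = np.array([[1.0, 0.3 - 0.2j],
              [0.3 + 0.2j, -0.5]])            # an arbitrary Hermitian observable

scaled = (2.7 * np.exp(1j * 0.8)) * psi       # same ray, different vector
assert np.isclose(expectation(psi, A), expectation(scaled, A))
print(expectation(psi, A))
```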

Priest’s Razor: Metaphysics. Note Quote.


The very idea that some mathematical piece employed to develop an empirical theory may furnish us with information about unobservable reality requires some care and philosophical reflection. The greatest difficulty for the scientifically minded metaphysician consists in furnishing the means for a “reading off” of ontology from science. What can come in, and what can be left out? Different strategies may provide for different results, and, as we know, science does not wear its metaphysics on its sleeve. The first worry may be making the metaphysical piece compatible with the evidence furnished by the theory.

The strategy adopted by da Costa and de Ronde may be called top-down: one investigates the higher science and, judging from the features of the objects described by the theory, looks for the appropriate logic to endow it with just those features. In this case (quantum mechanics), there is the theory, apparently attributing contradictory properties to entities, so that a logic that can cope with such a feature of objects is called for. Now, even though we believe that this is in great measure the right methodology for pursuing metaphysics within scientific theories, there are some further methodological principles that also play an important role in this kind of investigation, principles that seem to lessen the preferability of the paraconsistent approach over its alternatives.

To begin with, let us focus on the paraconsistent property attribution principle. According to this principle, the properties corresponding to the vectors in a superposition are all attributable to the system; they are all real. The first problem with this rendering of properties (whether they are taken to be actual or just potential) is that such a superabundance of properties may not be justified: not every bit of the mathematical formulation of a theory needs to be reified. Some parts of the theory are just that: mathematics required to make things work; others may correspond to genuine features of reality. The greatest difficulty is to distinguish them, but we should not assume that every bit of the formalism corresponds to an entity in reality. So, in the absence of any justified reason to take superpositions as further entities in the realm of properties of quantum systems, we may keep them as not representing actual properties (even if merely possible or potential ones).

That is, when one takes into account other virtues of a metaphysical theory, such as economy and simplicity, the paraconsistent approach seems to inflate the population of our world too much. In the presence of more economical candidates doing the same job, and in the absence of other grounds on which to choose between the competing proposals, the more economical approaches have the advantage. Furthermore, considering economy and the existence of theories that do not postulate contradictions in quantum mechanics, it seems reasonable to employ Priest’s razor – the principle according to which one should not assume contradictions beyond necessity – and stick with the consistent approaches. Once again, a useful methodological principle seems to deem the interpretation of superposition as contradiction unnecessary.

The paraconsistent approach could take advantage over its competitors, even given its disadvantage in accommodating such theoretical virtues, if it could endow quantum mechanics with a better understanding of quantum phenomena, or even if it could add some explanatory power to the theory. In the face of such a gain we could allow for some ontological extravagances: in most cases explanatory power rules over matters of economy. However, it does not seem that the approach is indeed going to achieve any such result.

Besides that lack of additional explanatory power or enlightenment about the theory, there are some further difficulties here. There is a complete lack of symmetry with the standard case of property attribution in quantum mechanics. As it is usually understood, by adopting the minimal property attribution principle, it is not contentious that when a system is in an eigenstate of an observable, we may reasonably infer that the system has the property represented by the associated observable, so that the probability of obtaining the associated eigenvalue is 1. In the case of superpositions, if they represented properties of their own, there would be a complete disanalogy with that situation: probabilities play a different role, and a system has the contradictory property attributed by a superposition irrespective of the probability attributions and of the role of probabilities in determining measurement outcomes. In a superposition, according to the proposal we are analyzing, probabilities play no role: the system simply has a given contradictory property by the mere fact of being in a (certain) superposition.

For another disanalogy with the usual case, one does not expect to observe a system in such a contradictory state: every measurement gives us a system in a particular state, never in a superposition. If that is a property on the same footing as any other, why can’t we measure it? Obviously, this does not mean that we take measurement as a sign of the real, but when doubt strikes, it may be good advice not to assume too much on the unobservable side. As we have observed before, a new problem is created by this interpretation, because besides explaining what it is that makes a measurement give a specific result when the system measured is in a superposition (a problem usually addressed by the collapse postulate, which seems to be out of fashion now), one must also explain why and how the contradictory properties that do not get actualized vanish. That is, besides explaining how one particular property becomes actual, one must explain how the properties possessed by the system that did not become actual vanish.

Furthermore, even if states like 1/√2 (| ↑x ⟩ + | ↓x ⟩) may provide an example of a candidate for a contradictory property, because the system seems to have both spin up and spin down in a given direction, there are some doubts when the distribution of probabilities is different, in cases such as 2/√7 | ↑x ⟩ + √(3/7) | ↓x ⟩. What are we to think about that? Perhaps there is still a contradiction, but one a little more inclined to | ↑x⟩ than to | ↓x⟩? That is, it is difficult to see how a contradiction arises in such cases. Or should we just ignore the probabilities and take the states composing the superposition as somehow opposed, forming a contradiction anyway? That would put metaphysics far too much ahead of science, leaving the role of probabilities in quantum mechanics unexplained in order to let a metaphysical view of properties in.
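
For concreteness, here is a small sketch (assuming the standard Born rule and representing |↑x⟩ and |↓x⟩ by the two standard basis vectors) of how differently the two superpositions above weight their components; this is precisely the asymmetry that a reading of superpositions as flat contradictions leaves unexplained.

```python
# Born-rule weights for the two superpositions discussed above.
import numpy as np

up = np.array([1.0, 0.0])        # stands in for |up_x>
down = np.array([0.0, 1.0])      # stands in for |down_x>

equal = (up + down) / np.sqrt(2)                        # 1/sqrt(2)(|up> + |down>)
tilted = (2 / np.sqrt(7)) * up + np.sqrt(3 / 7) * down  # 2/sqrt(7)|up> + sqrt(3/7)|down>

for state in (equal, tilted):
    probs = np.abs(state) ** 2                          # Born rule
    assert np.isclose(probs.sum(), 1.0)                 # both states are normalised
    print(probs)   # [0.5, 0.5] versus approximately [0.571, 0.429]
```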

Ontological Objects in Category-Theoretic Physics. A Case of the Dagger Functor. Note Quote.


Jonathan Bain’s examples in support of the second strategy are:

(i) the category Hilb of complex Hilbert spaces and linear maps; and

(ii) the category nCob, which has (n−1)-dimensional oriented closed manifolds as objects, and n-dimensional oriented cobordisms between them (manifolds whose boundary is the disjoint union of the source and target objects) as morphisms.

These examples purportedly represent ‘purely’ category-theoretic physics. This means that formal statements about the physical theory, e.g. quantum mechanics using Hilb, are derived using the category-theoretic rules of morphisms in Hilb.

Now, prima facie, both of these examples look like good candidates for doing purely category-theoretic physics. First, each category is potentially useful for studying the properties of quantum theory and general relativity respectively. Second, each possesses categorical properties which are promising for describing physical properties. More ambitiously, they suggest that one could use categorical tools to develop an approach for integrating parts of quantum theory and general relativity.

Let us pause to explain this second point, which rests on the fact that, qua categories, Hilb and nCob share some important properties. For example, both of these categories are monoidal, meaning that both categories carry a generalisation of the tensor product V ⊗ W of vector spaces V and W. In nCob the monoidal structure is given by the disjoint union of manifolds; whereas in Hilb, the monoidal structure is given by the usual linear-algebraic tensor product of Hilbert spaces.
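
A toy illustration of the two monoidal structures (the ‘objects’ below are deliberately simplistic stand-ins chosen for this sketch): in Hilb the monoidal product is the tensor product, so dimensions multiply, while in nCob it is disjoint union, so numbers of boundary circles simply add.

```python
# Hilb: the tensor (Kronecker) product of a 2- and a 3-dimensional vector.
import numpy as np

v = np.array([1.0, 2.0])
w = np.array([1.0, 0.0, -1.0])
print(np.kron(v, w).shape)        # (6,): dimensions multiply under the tensor product

# nCob (toy bookkeeping): model an object by its number of circles;
# the disjoint union of objects adds those numbers.
circles_a, circles_b = 2, 3
print(circles_a + circles_b)      # 5
```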

A second formal property shared by both categories is that each possesses a contravariant involutive endofunctor (·)†, called the dagger functor. Recall that a contravariant functor is a functor F : C → D that reverses the direction of arrows, i.e. a morphism f : A → B is mapped to a morphism F(f) : F(B) → F(A). Also recall that an endofunctor on a category C is a functor F : C → C, i.e. the domain and codomain of F are equal. This means that, given a cobordism f : A → B in nCob or a linear map L : A → B in Hilb, there exists a cobordism f† : B → A and an adjoint linear map L† : B → A respectively, satisfying the involution law (f†)† = f, and likewise (L†)† = L.
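
In Hilb the dagger is the familiar adjoint, i.e. the conjugate transpose; the following sketch, with randomly generated complex matrices as linear maps, checks the contravariance and involution laws just stated.

```python
# The adjoint as a dagger functor on (finite-dimensional) Hilb.
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))   # f : A -> B
g = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))   # g : B -> C

dagger = lambda m: m.conj().T

assert np.allclose(dagger(g @ f), dagger(f) @ dagger(g))   # reverses composition
assert np.allclose(dagger(dagger(f)), f)                   # involution: (f†)† = f
assert np.allclose(dagger(np.eye(2)), np.eye(2))           # identities are preserved
```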

The formal analogy between Hilb and nCob has led to the definition of a type of quantum field theory, known as a topological quantum field theory (TQFT), first introduced by Atiyah and Witten. A TQFT is a (symmetric monoidal) functor:

T : nCob → Hilb,

and the conditions placed on this functor, e.g. that it preserve monoidal structure, reflect that its domain and target categories share formal categorical properties. To further flesh out the physical interpretation of TQFTs, we note that the justification for the term ‘quantum field theory’ arises from the fact that a TQFT assigns a state space (i.e. a Hilbert space) to each closed manifold in nCob, and it assigns a linear map representing evolution to each cobordism. This can be thought of as assigning an amplitude to each cobordism, and hence we obtain something like a quantum field theory.
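
The functorial bookkeeping can be caricatured as follows. This is a deliberately toy sketch, not a genuine TQFT: an object of nCob is modelled only by its number of circles, the assigned state space is V^⊗n for an arbitrary two-dimensional V, and the assignment of linear maps to cobordisms is omitted; the only point being illustrated is that disjoint union of objects goes to the tensor product of state spaces.

```python
# Toy dimension bookkeeping for a would-be TQFT Z: n circles |-> V^(tensor n).
dim_V = 2                               # dimension of the illustrative space V

def state_space_dim(n_circles):
    return dim_V ** n_circles           # dim of V^(tensor n)

m, n = 2, 3
# Disjoint union of objects  ->  tensor product of state spaces:
assert state_space_dim(m + n) == state_space_dim(m) * state_space_dim(n)
print(state_space_dim(m + n))           # 32
```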

Recall that the significance of these examples for Bain is their apparent status as purely category-theoretic formulations of physics which, in virtue of their generality, do not make any reference to O-objects (represented in the standard way, i.e. as elements of sets). We now turn to a criticism of this claim.

Bain’s key idea seems to be that this ‘generality’ consists of the fact that nCob and Hilb (and thus TQFTs) have very different properties (qua categories) from Set. In fact, he claims that three such differences count in favor of (Objectless):

(i) nCob and Hilb are non-concrete categories, but Set (and other categories based on it) are concrete.

(ii) nCob and Hilb are monoidal categories, but Set is not.

(iii) nCob and Hilb have a dagger functor, but Set does not.

We address these points and their implications for (Objectless) in turn. First, (i). Bain wants to argue that since nCob and Hilb ‘cannot be considered categories of structured sets’, these categories cannot be interpreted as having O-objects. If one is talking about categorical properties, this claim is best couched in the standard terminology as the claim that these are not concrete categories. But this inference is faulty for two reasons. First, the point about non-concreteness is not altogether accurate, i.e. point (i) is false as stated. On the one hand, it is true that nCob is not a concrete category: in particular, while the objects of nCob are structured sets, its morphisms are not functions but cobordisms, i.e. sets equipped with the structure of a manifold. But on the other hand, Hilb is certainly a concrete category, since its objects are Hilbert spaces, which are sets with extra conditions, and its morphisms are just functions with linearity conditions; in other words, the morphisms are structure-preserving functions. Thus, Bain’s examples of category-theoretic physics are based in part on concrete categories. Second, and more importantly, it is doubtful that the standard mathematical notion of concreteness will aid Bain in defending (Objectless). Bain wants to hold that the non-concreteness of a category is a sufficient condition for its not referring to O-objects. But nCob is an example of a non-concrete category that apparently contains O-objects – indeed the same O-objects (viz. space-time points) that Bain takes to be present in geometric models of GTR. We thus see that, by Bain’s own lights, non-concreteness cannot be a sufficient condition for evading O-objects.

So the example of nCob still has C-objects that are based on sets, albeit with morphisms that are more general than functions. However, one can go further than this: the notion of a category is in fact defined in a schematic way, which leaves open the question of whether C-objects are sets and whether morphisms are functions. One might thus rhetorically ask whether this could be the full version of ‘categorical generality’ that Bain needs in order to defend (Objectless). In fact, this is implausible, because of the way in which such schematic generality ends up being deployed in physics.

On to (ii): unfortunately, this claim is straightforwardly false: the category Set is certainly monoidal, with the monoidal product being given by the cartesian product.
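
A trivial check of this claim, with two arbitrary finite sets as examples: the cartesian product of sets is again a set, and its size is the product of the sizes, which is exactly the behaviour required of a monoidal product on Set.

```python
# Cartesian product as the monoidal product on (finite) sets.
from itertools import product

A = {'a', 'b'}
B = {0, 1, 2}
AxB = set(product(A, B))          # the set of ordered pairs
print(len(AxB), len(A) * len(B))  # 6 6
```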

Finally, (iii). While it is true that Set does not have a dagger functor, and nCob and Hilb do, it is easy to construct an example of a category with a dagger functor which Bain would presumably agree has O-objects. Consider the category C with one object, namely a manifold M representing a relativistic spacetime; the morphisms of C are taken to be the automorphisms of M. As with nCob, this category has natural candidates for O-objects (as Bain assumes), viz. the points of the manifold. But the category C also has a dagger functor: given an automorphism f : M → M, the morphism f† : M → M is given by the inverse automorphism f−1. In contrast, the category Set does not have a dagger functor: this follows from the observation that for any set A with more than one element, there is a unique morphism f : A → {∗}, but the number of morphisms g : {∗} → A is the cardinality |A| > 1. Hence there does not exist a bijection between the set of morphisms {f : A → {∗}} and the set of morphisms {g : {∗} → A}, which implies that there does not exist a dagger functor on Set. Thus, by Bain’s own criterion, it is reasonable to consider C to be structurally dissimilar to Set, despite the fact that it has O-objects.
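
The counting argument can be made completely explicit by brute-force enumeration over a small finite set (a three-element set chosen purely for illustration): there is exactly one function into the singleton but three functions out of it, so the two hom-sets cannot be put in bijection, and no dagger functor on Set can exist.

```python
# Enumerate hom-sets to and from the singleton for a three-element set.
from itertools import product

A = ['x', 'y', 'z']
point = ['*']

def functions(domain, codomain):
    # a function is a choice of image for each element of the domain
    return list(product(codomain, repeat=len(domain)))

to_point = functions(A, point)          # morphisms A -> {*}
from_point = functions(point, A)        # morphisms {*} -> A
print(len(to_point), len(from_point))   # 1 3
assert len(to_point) != len(from_point)
```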

More generally, i.e. putting aside the issue of (Objectless), it is quite unclear how one should interpret the physical significance of the fact that nCob/Hilb, but not Set, has a dagger functor. For instance, it turns out that by an easy extension of Set one can construct a category that does have a dagger functor. This easy extension is the category Rel, whose objects are sets and whose morphisms are relations between objects (i.e. subsets of the Cartesian product of a pair of objects). Note first that Set is a subcategory of Rel, because Set and Rel have the same objects and every morphism in Set is a morphism in Rel. This can be seen by noting that every function f : A → B can be written as a relation f ⊆ A × B, consisting of the pairs (a, b) defined by f(a) = b. Second, note that – unlike Set – Rel does have a non-trivial involution endofunctor, i.e. a dagger functor, since given a relation R : A → B, the relation R† : B → A is simply the converse relation, i.e. the set of pairs (b, a) such that (a, b) ∈ R.
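
A minimal sketch of this construction, with arbitrary small sets and relations: representing a relation as a set of pairs, the converse operation is involutive and reverses relational composition, which is exactly what a dagger functor requires.

```python
# The converse relation as a dagger on Rel.
A, B, C = {1, 2}, {'a', 'b'}, {10}

R = {(1, 'a'), (2, 'a'), (2, 'b')}      # R : A -> B
S = {('a', 10)}                         # S : B -> C

def dagger(rel):
    return {(y, x) for (x, y) in rel}   # the converse relation

def compose(rel1, rel2):
    # relational composition (rel1 : X -> Y followed by rel2 : Y -> Z)
    return {(x, z) for (x, y1) in rel1 for (y2, z) in rel2 if y1 == y2}

assert dagger(dagger(R)) == R                                   # involution
assert dagger(compose(R, S)) == compose(dagger(S), dagger(R))   # contravariance
print(dagger(R))
```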