The Biological Kant. Note Quote.


The biological treatise takes as its object the realm of physics left out of Kant’s critical demarcations of scientific, that is, mathematical and mechanistic, physics. Here, the main idea was that scientifically understandable Nature was defined by lawfulness. In his Metaphysical Foundations of Natural Science, this idea was taken further in the following claim:

I claim, however, that there is only as much proper science to be found in any special doctrine on nature as there is mathematics therein, and further ‘a pure doctrine on nature about certain things in nature (doctrine on bodies and doctrine on minds) is only possible by means of mathematics’.

The basic idea is thus to identify Nature’s lawfulness with its ability to be studied by means of mathematical schemata uniting understanding and intuition. The central schema, for Kant, was number, so apt for the understanding of mechanically caused movement. But already here, Kant is very well aware that a whole series of aspects of spontaneously experienced Nature is left out of sight by the concentration on matter in movement, and he calls for these further realms of Nature to be studied by a continuation of the Copernican turn, by the mind’s further study of the utmost limits of itself. Why do we spontaneously see natural purposes in Nature? Purposiveness is wholly different from necessity, which is crucial to Kant’s definition of Nature. There is no reason in the general concept of Nature (as lawful) to assume that nature’s objects may serve each other as purposes. Nevertheless, we do not stop assuming just that. But what we do when we ascribe purposes to Nature is to use the faculties of the mind in a way other than in science, much closer to the way we use them in the appreciation of beauty and art, the object of the first part of the book, immediately before the treatment of teleological judgment. This judgment is characterized by a central distinction, already argued at length in that first part: the difference between determinative and reflective judgments. While the determinative judgment is used scientifically to decide whether a specific case falls under a certain rule, explaining it by derivation from a principle and thus constituting the objectivity of the object in question – the reflective judgment lacks all these features. It does not proceed by means of explanation, but by mere analogy; it is not constitutive, but merely regulative; it does not prove anything but merely judges, and it has no principle of reason to rest its head upon but the very act of judging itself. These ideas are elaborated throughout the critique of teleological judgment.


In the section Analytik der teleologischen Urteilskraft, Kant gradually approaches the question: he first treats merely formal purposiveness. We may ascribe purposes to geometry in so far as it is useful to us, just as rivers carrying fertile soil for trees to grow in may be ascribed purposes; these are, however, merely contingent purposes, dependent on an external telos. The crucial point is the existence of objects which are only possible as such in so far as they are defined by purposes:

That its form is not possible after mere natural laws, that is, such things which may not be known by us through understanding applied to objects of the senses; on the contrary that even the empirical knowledge about them, regarding their cause and effect, presupposes concepts of reason.

The idea here is that in order to conceive of objects which may not be explained with reference to understanding and its (in this case, mechanical) concepts only, these must be grasped by the non-empirical ideas of reason itself. If causes are perceived as being interlinked in chains, then such contingencies are to be thought of only as small causal circles on the chain, that is, as things being their own cause. Hence Kant’s definition of the Idea of a natural purpose:

an object exists as natural purpose, when it is cause and effect of itself.

This can be thought, as an Idea, without contradiction, Kant maintains, but it cannot be conceived. This circularity (the small causal circles) is a very important feature in Kant’s tentative schematization of purposiveness. Another way of putting this Idea is that things as natural purposes are organized beings. This entails that naturally purposeful objects must possess a certain spatio-temporal construction: the parts of such a thing must be possible only through their relation to the whole – and, conversely, the parts must actively connect themselves to this whole. The corresponding idea can thus be summed up as the Idea of the Whole which is necessary to pass judgment on any empirical organism, and it is very interesting to note that Kant sums up the determination of any part of a Whole by all the other parts in the phrase that a natural purpose is possible only as an organized and self-organizing being. This is probably the very birth certificate of the metaphysics of self-organization. It is important to keep in mind that Kant feels no vitalist temptation to suppose any organizing power or any autonomy on the part of the whole, which may come into being only through this process of self-organization among its parts. When Kant talks about the forming power in the formation of the Whole, it is thus nothing outside this self-organization of its parts.

This leads to Kant’s final definition: an organized being is that in which everything is alternately ends and means. This idea is extremely important as a formalization of the idea of teleology: natural purposes do not imply that there exist given, stable ends for nature to pursue; on the contrary, they are locally defined by causal cycles, in which every part interchangeably assumes the role of end and means. Thus, there is no absolute end in this construal of nature’s teleology; it analyzes teleology formally at the same time as it relativizes it with respect to substance. Kant takes care to note that this maxim need not be restricted to the beings – animals – which we spontaneously tend to judge as purposeful. The idea of natural purposes thus entails that there might exist a plan in nature rendering even processes which we have every reason to find repugnant purposeful for us. In this vision, teleology might embrace causality – and even aesthetics:

Also natural beauty, that is, its harmony with the free play of our cognitive faculties in the experience and judgment of its appearance, can be seen as a form of objective purposiveness of nature in its totality as a system, in which man is a member.

An important consequence of Kant’s doctrine is that teleology is, so to speak, secularized in two ways: (1) it is formal, and (2) it is local. It is formal because self-organization does not ascribe any special, substantial goal for organisms to pursue – other than the sustainment of self-organization itself. Teleology is thus merely a formal property of certain types of systems. This is also why teleology is local: it is to be found in those systems where the causal chains form loops, as Kant metaphorically describes the cycles involved in self-organization; it is not an overarching goal governing organisms from the outside. Teleology is a local, bottom-up process only.

Kant does not in any way doubt the existence of organized beings; what is at stake is the possibility of dealing with them scientifically in terms of mechanics. Even if they exist as given things in experience, natural purposes cannot receive any concept. This implies that biology is evident in so far as the existence of organisms cannot be doubted, yet biology will never rise to the heights of science: its attempts at doing so are delimited in advance, all scientific explanations of organisms being bound to be mechanical. Followed to its end, this line of argument corresponds very well to present-day reductionism in biology, which tries to trace all problems of phenotypic characters, organization, morphogenesis, behavior, ecology, etc. back to the biochemistry of genetics. But the other side of the argument is that no matter how successful this reduction may prove, it will never be able to reduce or replace the teleological point of view necessary in order to understand the organism as such in the first place.

Evidently, there is something deeply unsatisfactory in this conclusion, which is why most biologists have hesitated to adopt it and cling either to full-blown reductionism or to some brand of vitalism, thereby exposing themselves to the dangers of ‘transcendental illusion’ and allowing for some Goethe-like intuitive idea without any schematization. Kant tries to soften the question by philosophical means, by establishing a crossing over from metaphysics to physics, that is, from the metaphysical constraints on mechanical physics to physics in its empirical totality, including the organized beings of biology. Pure mechanics leaves physics as a whole unorganized, and this organization is sought to be established by means of ‘mediating concepts’. Among them is the formative power, which is not conceived of in a vitalist, substantialist manner, but is rather a notion referring to the means by which matter manages to self-organize. It thus comprehends not only biological organization but macroscopic solid-state physics as well. Here, Kant adds an important argument to the critique of judgment:

Because man is conscious of himself as a self-moving machine, without being able to further understand such a possibility, he can, and is entitled to, introduce a priori organic-moving forces of bodies into the classification of bodies in general and thus to distinguish mere mechanical bodies from self-propelled organic bodies.

Something Out of Almost Nothing. Drunken Risibility.

Kant’s first antinomy makes the error of the excluded third option, i.e. it is not impossible that the universe could have both a beginning and an eternal past. If some kind of metaphysical realism is true, including an observer-independent and relational time, then a solution of the antinomy is conceivable. It is based on the distinction between a microscopic and a macroscopic time scale. Only the latter is characterized by an asymmetry of nature under a reversal of time, i.e. the property of having a global (coarse-grained) evolution – an arrow of time – or many arrows, if they are independent from each other. Thus, the macroscopic scale is by definition temporally directed – otherwise it would not exist.

On the microscopic scale, however, only local, statistically distributed events without dynamical trends, i.e. a global time-evolution or an increase of entropy density, exist. This is the case if one or both of the following conditions are satisfied: First, if the system is in thermodynamic equilibrium (e.g. there is degeneracy). And/or second, if the system is in an extremely simple ground state or meta-stable state. (Meta-stable states have a local, but not a global minimum in their potential landscape and, hence, they can decay; ground states might also change due to quantum uncertainty, i.e. due to local tunneling events.) Some still speculative theories of quantum gravity permit the assumption of such a global, macroscopically time-less ground state (e.g. quantum or string vacuum, spin networks, twistors). Due to accidental fluctuations, which exceed a certain threshold value, universes can emerge out of that state. Due to some also speculative physical mechanism (like cosmic inflation) they acquire – and, thus, are characterized by – directed non-equilibrium dynamics, specific initial conditions, and, hence, an arrow of time.

It is a matter of debate whether such an arrow of time is

1) irreducible, i.e. an essential property of time,

2) governed by some unknown fundamental and not only phenomenological law,

3) the effect of specific initial conditions or

4) of consciousness (if time is in some sense subjective), or

5) even an illusion.

Many physicists favour special initial conditions, though there is no consensus about their nature and form. But in the context at issue it is sufficient to note that such a macroscopic global time-direction is the main ingredient of Kant’s first antinomy, for the question is whether this arrow has a beginning or not.

If time’s arrow were inevitably subjective, ontologically irreducible, fundamental and not merely a kind of illusion – if, for instance, some form of metaphysical idealism were true – then physical cosmology about a time before time would be mistaken or quite irrelevant. However, if we do not want to neglect an observer-independent physical reality and adopt solipsism or other forms of idealism – and there are strong arguments in favor of some form of metaphysical realism – Kant’s rejection seems hasty. Furthermore, if a Kantian is not willing to give up some kind of metaphysical realism, namely the belief in a “Ding an sich“, a thing in itself – and some philosophers, the German idealists for instance, actually insisted that this is superfluous – he has to admit either that time is a subjective illusion or that there is a dualism between an objective timeless world and a subjective arrow of time. Contrary to Kant’s thought, there are reasons to believe that it is possible, at least conceptually, that time both has a beginning – in the macroscopic sense of an arrow – and is eternal – in the microscopic sense of a steady state with statistical fluctuations.

Is there also some physical support for this proposal?

Surprisingly, quantum cosmology offers a possibility that the arrow has a beginning and that it nevertheless emerged out of an eternal state without any macroscopic time-direction. (Note that there are some parallels to a theistic conception of the creation of the world here, e.g. in the Augustinian tradition which claims that time together with the universe emerged out of a time-less God; but such a cosmological argument is quite controversial, especially in a modern form.) So this possible overcoming of the first antinomy is not only a philosophical conceivability but is already motivated by modern physics. At least some scenarios of quantum cosmology, quantum geometry/loop quantum gravity, and string cosmology can be interpreted as examples of such a local beginning of our macroscopic time out of a state with microscopic time, but with an eternal, global macroscopic timelessness.

To put this in a more general, though abstract, framework and to get a rough illustration, consider the figure below.

[Figure: schematic potential landscape of a field, with metastable depressions (false vacua) and a deeper ground state (true vacuum)]

Physical dynamics can be described using “potential landscapes” of fields. For simplicity, here only the variable potential (or energy density) of a single field is shown. To illustrate the dynamics, one can imagine a ball moving along the potential landscape. Depressions stand for states which are stable, at least temporarily. Due to quantum effects, the ball can “jump over” or “tunnel through” the hills. The deepest depression represents the ground state.

In the common theories the state of the universe – the product of all its matter and energy fields, roughly speaking – evolves out of a metastable “false vacuum” into a “true vacuum” which has a state of lower energy (potential). There might exist many (perhaps even infinitely many) true vacua which would correspond to universes with different constants or laws of nature. It is more plausible to start with a ground state which is the minimum of what physically can exist. According to this view an absolute nothingness is impossible. There is something rather than nothing because something cannot come out of absolutely nothing, and something does obviously exist. Thus, something can only change, and this change might be described with physical laws. Hence, the ground state is almost “nothing”, but can become thoroughly “something”. Possibly, our universe – and, independently of this, many others, probably most of them having different physical properties – arose from such a phase transition out of a quasi-atemporal quantum vacuum (and, perhaps, got disconnected completely). Tunneling back might be prevented by the exponential expansion of this brand-new space. Because of this cosmic inflation the universe not only became gigantic, but simultaneously the potential hill broadened enormously and became (almost) impassable. This preserves the universe from relapsing into its non-existence. On the other hand, if there is no physical mechanism to prevent the tunneling back, or at least to make it very improbable, there is still another option: if infinitely many universes originated, some of them could be long-lived for merely statistical reasons. But this possibility is less predictive and therefore an inferior kind of explanation for not tunneling back.
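To make the “ball in a potential landscape” picture concrete, here is a minimal numerical sketch, assuming a toy one-dimensional potential with a shallow local minimum (a false vacuum) and a deeper global minimum (a true vacuum). It only locates the two wells and computes a textbook WKB (Gamow) suppression factor for tunneling through the barrier; it is a caricature for illustration, not a field-theoretic vacuum-decay calculation, and the particular potential V(x) = x^4 - 4x^2 + 0.5x is an arbitrary choice.

```python
import numpy as np

# Toy 1-D "potential landscape": a tilted double well with a shallow local
# minimum (false vacuum) and a deeper global minimum (true vacuum).
def V(x):
    return x**4 - 4.0 * x**2 + 0.5 * x

x = np.linspace(-3.0, 3.0, 20001)
dx = x[1] - x[0]
v = V(x)

# Locate the two minima on the grid (interior points lower than both neighbours).
is_min = (v[1:-1] < v[:-2]) & (v[1:-1] < v[2:])
minima = x[1:-1][is_min]
false_vac = minima[np.argmax(V(minima))]   # shallower well
true_vac = minima[np.argmin(V(minima))]    # deeper well
E = V(false_vac)                           # energy of the metastable state

# Classically forbidden region between the wells, where V(x) > E.
lo, hi = sorted([false_vac, true_vac])
mask = (x > lo) & (x < hi) & (v > E)

# WKB (Gamow) exponent: 2 * integral sqrt(2 (V - E)) dx, for unit mass and hbar = 1.
integrand = np.sqrt(2.0 * np.clip(v - E, 0.0, None))
action = 2.0 * np.sum(integrand[mask]) * dx

print(f"false vacuum at x = {false_vac:.2f}, true vacuum at x = {true_vac:.2f}")
print(f"barrier height above the false vacuum: {v[mask].max() - E:.2f}")
print(f"WKB tunnelling suppression ~ exp(-{action:.1f})")
```

The point of the exercise is purely qualitative: the broader and higher the barrier, the larger the exponent and the more strongly a relapse into the other well is suppressed, which is the analogue of the “broadened, almost impassable hill” in the text.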

Another crucial question remains even if universes could come into being out of fluctuations of (or in) a primitive substrate, i.e. some patterns of superposition of fields with local overdensities of energy: Is spacetime part of this primordial stuff or is it also a product of it? Or, more specifically: Does such a primordial quantum vacuum have a semi-classical spacetime structure or is it made up of more fundamental entities? Unique-universe accounts, especially the modified Eddington models – the soft bang/emergent universe – presuppose some kind of semi-classical spacetime. The same is true for some multiverse accounts describing our universe, where Minkowski space, a tiny closed, finite space or the infinite de Sitter space is assumed. The same goes for string-theory-inspired models like the pre-big-bang account, because string and M-theory are still formulated in a background-dependent way, i.e. they require the existence of a semi-classical spacetime. A different approach is the assumption of “building blocks” of spacetime, a kind of pregeometry, such as the twistor approach of Roger Penrose and the cellular automata approach of Stephen Wolfram. The most elaborated account in this line of reasoning is quantum geometry (loop quantum gravity). Here, “atoms of space and time” underlie everything.

Though the question whether semiclassical spacetime is fundamental or not is crucial, an answer might nevertheless be neutral with respect to the micro-/macrotime distinction. In both kinds of quantum vacuum accounts the macroscopic time scale is not present. And the microscopic time scale in some respect has to be there, because fluctuations represent change (or are manifestations of change). This change, reversible and relationally conceived, does not occur “within” microtime but constitutes it. Out of a total stasis nothing new and different could emerge, because an uncertainty principle – fundamental for all quantum fluctuations – would not be realized. In an almost, but not completely, static quantum vacuum, however, macroscopically nothing changes either, but there are microscopic fluctuations.

The pseudo-beginning of our universe (and probably infinitely many others) is a viable alternative both to initial and past-eternal cosmologies and philosophically very significant. Note that this kind of solution bears some resemblance to a possibility of avoiding the spatial part of Kant’s first antinomy, i.e. his claimed proof of both an infinite space without limits and a finite, limited space: The theory of general relativity describes what was considered logically inconceivable before, namely that there could be universes with finite, but unlimited space, i.e. this part of the antinomy also makes the error of the excluded third option. This offers a middle course between the Scylla of a mysterious, secularized creatio ex nihilo, and the Charybdis of an equally inexplicable eternity of the world.

In this context it is also possible to defuse some explanatory problems of the origin of “something” (or “everything”) out of “nothing” as well as a – merely assumable, but never provable – eternal cosmos or even an infinitely often recurring universe. But that does not offer a final explanation or a sufficient reason, and it cannot eliminate the ultimate contingency of the world.

Quantum Informational Biochemistry. Thought of the Day 71.0


A natural extension of the information-theoretic Darwinian approach for biological systems is obtained by taking into account that biological systems are constituted, at their fundamental level, by physical systems. It is therefore through the interaction among elementary physical systems that the biological level is reached, after the size of the system has increased by several orders of magnitude and only for certain associations of molecules – biochemistry.

In particular, this viewpoint lies at the foundation of the “quantum brain” project established by Hameroff and Penrose (Shadows of the Mind). They tried to lift quantum physical processes associated with microsystems composing the brain to the level of consciousness. Microtubules were considered as the basic quantum information processors. This project, as well as the general project of reducing biology to quantum physics, has its strong and weak sides. One of the main problems is that decoherence should quickly wash out quantum features such as superposition and entanglement. (Hameroff and Penrose would disagree with this statement. They try to develop models of the hot and macroscopic brain preserving quantum features of its elementary micro-components.)

However, even if we assume that microscopic quantum physical behavior disappears with increasing size and number of atoms due to decoherence, it seems that the basic quantum features of information processing can survive in macroscopic biological systems (operating on temporal and spatial scales which are essentially different from the scales of the quantum micro-world). The associated information processor for the mesoscopic or macroscopic biological system would be a network of increasing complexity formed by the elementary probabilistic classical Turing machines of the constituents. Such a composed network of processors can exhibit special behavioral signatures which are similar to quantum ones. We call such biological systems quantum-like. In a series of works, Asano and others (Quantum Adaptivity in Biology: From Genetics to Cognition) developed an advanced formalism for modeling the behavior of quantum-like systems, based on the theory of open quantum systems and the more general theory of adaptive quantum systems. This formalism is known as quantum bioinformatics.

The present quantum-like model of biological behavior is of the operational type (as is the standard quantum mechanical model endowed with the Copenhagen interpretation). It cannot explain the physical and biological processes behind the quantum-like information processing. Clarification of the origin of quantum-like biological behavior is related, in particular, to understanding the nature of entanglement and its role in the process of interaction and cooperation in physical and biological systems. Qualitatively, the information-theoretic Darwinian approach supplies an interesting possibility of explaining the generation of quantum-like information processors in biological systems. Hence, it can serve as the bio-physical background for quantum bioinformatics. There is an intriguing point in the fact that if the information-theoretic Darwinian approach is right, then it would be possible to produce quantum information from optimal flows of past, present and anticipated classical information in any classical information processor endowed with a complex enough program. Thus the unified evolutionary theory would supply a physical basis for Quantum Information Biology.

High Frequency Markets and Leverage


Leverage effect is a well-known stylized fact of financial data. It refers to the negative correlation between price returns and volatility increments: when the price of an asset is increasing, its volatility drops, while when it decreases, the volatility tends to become larger. The name “leverage” comes from the following interpretation of this phenomenon: when an asset price declines, the associated company automatically becomes more leveraged, since the ratio of its debt to its equity value becomes larger. Hence the risk of the asset, namely its volatility, should increase. Another economic interpretation of the leverage effect, inverting the causality, is that a forecast of increased volatility should be compensated by a higher rate of return, which can only be obtained through a decrease in the asset value.
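As a concrete illustration of the stylized fact itself, the following Python sketch simulates a toy discretized stochastic-volatility model with negatively correlated price and variance shocks (all parameter values are arbitrary, chosen only for illustration) and then estimates the correlation between a day’s return and the subsequent change in variance; negative values are the signature of the leverage effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discretised stochastic-volatility model with negatively correlated
# shocks to price and variance (illustrative parameters only).
n, dt, rho = 100_000, 1.0 / 252, -0.7
kappa, theta, xi = 5.0, 0.04, 0.5          # mean reversion, long-run variance, vol-of-vol
v = np.empty(n); v[0] = theta              # variance path
r = np.zeros(n)                            # returns
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
for t in range(1, n):
    v[t] = max(v[t-1] + kappa * (theta - v[t-1]) * dt
               + xi * np.sqrt(v[t-1] * dt) * z2[t], 1e-8)
    r[t] = np.sqrt(v[t-1] * dt) * z1[t]

def leverage(r, v, horizon):
    """Correlation between the return over (t-1, t] and the variance change over (t-1, t-1+horizon]."""
    dv = v[horizon:] - v[:-horizon]
    return np.corrcoef(r[1:1 + len(dv)], dv)[0, 1]

print([round(leverage(r, v, h), 3) for h in (1, 5, 20)])   # negative values: leverage effect
```

On real data one would of course replace the simulated variance path by a realized-volatility proxy, but the estimator itself is unchanged.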

Statistical methods enabling the use of high frequency data to measure volatility have been developed. In financial engineering, it became clear in the late eighties that it is necessary to introduce the leverage effect in derivatives pricing frameworks in order to accurately reproduce the behavior of the implied volatility surface. This led to the rise of the famous stochastic volatility models, in which the Brownian motion driving the volatility is (negatively) correlated with the one driving the price.

Traditional explanations for leverage effect are based on “macroscopic” arguments from financial economics. Could microscopic interactions between agents naturally lead to leverage effect at larger time scales? We would like to know whether part of the foundations for leverage effect could be microstructural. To do so, our idea is to consider a very simple agent-based model, encoding well-documented and understood behaviors of market participants at the microscopic scale. Then we aim at showing that in the long run, this model leads to a price dynamic exhibiting leverage effect. This would demonstrate that typical strategies of market participants at the high frequency level naturally induce leverage effect.

One could argue that since transactions take place at the finest frequencies and prices are revealed through order book type mechanisms, it is obvious that the leverage effect arises from high frequency properties. The claim here is more modest: under certain market conditions, typical high frequency behaviors, probably having no connection with the concepts of financial economics, may give rise to some leverage effect at low frequency scales. It is not claimed that the leverage effect should be fully explained by high frequency features.

Another important stylized fact of financial data is the rough nature of the volatility process. Indeed, for a very wide range of assets, historical volatility time series exhibit a behavior which is much rougher than that of a Brownian motion. More precisely, the dynamics of the log-volatility are typically very well modeled by a fractional Brownian motion with Hurst parameter around 0.1, that is a process with Hölder regularity of order 0.1. Furthermore, using a fractional Brownian motion with small Hurst index also makes it possible to reproduce very accurately the features of the volatility surface.
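For readers who want to see what “Hurst parameter around 0.1” means in practice, here is a small Python sketch (illustrative only) that samples fractional Gaussian noise exactly via a Cholesky factorization of its covariance, cumulates it into a fractional Brownian motion path standing in for log-volatility, and recovers the Hurst exponent from the scaling of mean squared increments.

```python
import numpy as np

rng = np.random.default_rng(1)

def fbm(n, H):
    """Exact simulation of fractional Brownian motion on an integer grid:
    Cholesky factor of the fractional Gaussian noise covariance, then cumulative sum."""
    k = np.arange(n)
    gamma = 0.5 * ((k + 1.0)**(2*H) + np.abs(k - 1.0)**(2*H) - 2.0 * k**(2*H))  # fGn autocovariance
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    noise = np.linalg.cholesky(cov) @ rng.standard_normal(n)
    return np.cumsum(noise)

def hurst(path, lags=range(1, 21)):
    """Estimate H from E|X_{t+l} - X_t|^2 ~ l^{2H} via a log-log regression."""
    msd = [np.mean((path[l:] - path[:-l])**2) for l in lags]
    slope, _ = np.polyfit(np.log(list(lags)), np.log(msd), 1)
    return 0.5 * slope

log_vol = fbm(2000, H=0.1)          # stand-in for a log-volatility trajectory
print(round(hurst(log_vol), 2))     # close to 0.1: a very rough path
```

Applied to realized-volatility proxies of liquid assets, essentially this two-step recipe is how the “volatility is rough” observation is obtained, although careful estimations also correct for measurement error in the volatility proxy.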


The fact that, for basically all reasonably liquid assets, volatility is rough, with the same order of magnitude for the roughness parameter, is of course very intriguing. The tick-by-tick price model is based on a bi-dimensional Hawkes process, that is a bivariate point process (N_t^+, N_t^-)_{t≥0} taking values in (R^+)^2, with intensity (λ_t^+, λ_t^-) of the form

\[
\lambda_t^{+} = \mu^{+} + \int_0^t \varphi_1(t-s)\,\mathrm{d}N_s^{+} + \int_0^t \varphi_2(t-s)\,\mathrm{d}N_s^{-},
\qquad
\lambda_t^{-} = \mu^{-} + \int_0^t \varphi_3(t-s)\,\mathrm{d}N_s^{+} + \int_0^t \varphi_4(t-s)\,\mathrm{d}N_s^{-}.
\]

Here μ^+ and μ^- are positive constants and the functions (φ_i)_{i=1,…,4} are non-negative; the associated matrix of kernels is called the kernel matrix. Hawkes processes are said to be self-exciting, in the sense that the instantaneous jump probability depends on the locations of the past events. Hawkes processes are nowadays in standard use in finance, not only in the field of microstructure but also in risk management and contagion modeling. The Hawkes process generates behavior that mimics financial data in a pretty impressive way, and back-fitting yields correspondingly good results. Yet some key problems remain the same whether you use a simple Brownian motion model or this marvelous technical apparatus.

In short, back-fitting only goes so far.

  • The essentially random nature of living systems can lead to entirely different outcomes if said randomness had occurred at some other point in time or magnitude. Due to randomness, entirely different groups would likely succeed and fail every time the “clock” was turned back to time zero, and the system allowed to unfold all over again. Goldman Sachs would not be the “vampire squid”. The London whale would never have been. This will boggle the mind if you let it.

  • Extraction of unvarying physical laws governing a living system from data is in many cases NP-hard. There are far too many varieties of actors and of interactions for the exercise to be tractable.

  • Even granting the possibility of their extraction, the nature of the components of a living system is not fixed and not subject to unvarying physical laws – not even probability laws.

  • The conscious behavior of some actors in a financial market can change the rules of the game, or at least some of those rules some of the time, or completely rewire the system from the bottom up. This is really just an extension of the previous point.

  • Natural mutations over time lead to markets reworking their laws over time through an evolutionary process, with never a thought of doing so.


Thus, in this approach, N_t^+ corresponds to the number of upward jumps of the asset in the time interval [0,t] and N_t^- to the number of downward jumps. Hence, the instantaneous probability to get an upward (downward) jump depends on the arrival times of the past upward and downward jumps. Furthermore, by construction, the price process lives on a discrete grid, which is obviously a crucial feature of high frequency prices in practice.
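To make the model concrete, here is a minimal Python sketch (not the calibrated model of the text) that simulates such a bi-dimensional Hawkes process by Ogata’s thinning algorithm, assuming for simplicity exponential kernels φ_ij(t) = a_ij · β · exp(-β t) with hypothetical, purely cross-exciting coefficients, and builds the tick price as N_t^+ - N_t^-.

```python
import numpy as np

rng = np.random.default_rng(2)

mu = np.array([0.5, 0.5])            # baseline intensities mu^+, mu^- (events per unit time)
A = np.array([[0.0, 0.6],            # L1 norms a_ij of the kernels: an up-tick mainly
              [0.6, 0.0]])           # excites down-ticks and vice versa
beta = 50.0                          # kernel decay rate
T = 600.0                            # simulation horizon

def simulate_hawkes(mu, A, beta, T):
    """Ogata thinning for a bivariate Hawkes process with kernels a_ij * beta * exp(-beta * t)."""
    times, marks = [], []             # event times and their type (0 = up-tick, 1 = down-tick)
    S = np.zeros((2, 2))              # S[i, j]: current excitation of lambda_i caused by past type-j events
    t = 0.0
    while True:
        lam_bar = (mu + S.sum(axis=1)).sum()     # upper bound: intensities only decay until the next event
        w = rng.exponential(1.0 / lam_bar)
        t += w
        if t > T:
            break
        S *= np.exp(-beta * w)                   # decay the excitation to the candidate time
        lam = mu + S.sum(axis=1)
        if rng.random() <= lam.sum() / lam_bar:  # accept the candidate with the exact probability
            i = rng.choice(2, p=lam / lam.sum())
            times.append(t); marks.append(i)
            S[:, i] += A[:, i] * beta            # excitation jump triggered by a type-i event
    return np.array(times), np.array(marks)

times, marks = simulate_hawkes(mu, A, beta, T)
price_ticks = np.cumsum(np.where(marks == 0, 1, -1))   # N_t^+ - N_t^-, i.e. the price on the tick grid
print(len(times), "events; final price:", price_ticks[-1], "ticks")
```

The symmetric, purely cross-exciting choice above just makes up-moves tend to trigger down-moves; the text’s model imposes its own, more specific structure on the kernel matrix to capture the stylized facts listed below.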

This simple tick-by-tick price model makes it easy to encode the following important stylized facts of modern electronic markets in the context of high frequency trading:

  1. Markets are highly endogenous, meaning that most of the orders have no real economic motivation but are rather sent by algorithms in reaction to other orders.
  2. Mechanisms preventing statistical arbitrages take place on high frequency markets. Indeed, at the high frequency scale, building strategies which are on average profitable is hardly possible.
  3. There is some asymmetry in the liquidity on the bid and ask sides of the order book. This simply means that buying and selling are not symmetric actions. Indeed, consider for example a market maker with an inventory that is typically positive. She is likely to raise the price by less following a buy order than to lower it following a sell order of the same size. This is because her inventory becomes smaller after a buy order, which is a good thing for her, whereas it increases after a sell order.
  4. A significant proportion of transactions is due to large orders, called metaorders, which are not executed at once but split in time by trading algorithms.

    In a Hawkes process framework, the first of these properties corresponds to the case of so-called nearly unstable Hawkes processes, that is Hawkes processes for which the stability condition is almost saturated. This means the spectral radius of the kernel matrix integral is smaller than but close to unity. The second and third ones impose a specific structure on the kernel matrix and the fourth one leads to functions φi with heavy tails.
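As a quick sanity check on the “nearly unstable” condition in point 1, the following short sketch (with hypothetical numbers) computes the spectral radius of the matrix of kernel L1 norms; stability requires it to be below one, and the high endogeneity of modern markets corresponds to values just below one.

```python
import numpy as np

# Matrix of the kernel L1 norms (the integral of each phi_ij); the entries below
# are hypothetical, chosen so that the spectral radius is close to, but below, one.
K = np.array([[0.49, 0.49],
              [0.49, 0.49]])
rho = max(abs(np.linalg.eigvals(K)))
print(f"spectral radius = {rho:.2f}, nearly unstable: {0.9 < rho < 1.0}")
```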

Financial Forward Rate “Strings” (Didactic 1)


Imagine that Julie wants to invest $1 for two years. She can devise two possible strategies. The first one is to put the money in a one-year bond at an interest rate r1. At the end of the year, she must take her money and find another one-year bond, with interest rate r1/2, which is the interest rate in one year on a loan maturing in two years. The final payoff of this strategy is simply (1 + r1)(1 + r1/2). The problem is that Julie cannot know for sure what the one-period interest rate r1/2 of next year will be. Thus, she can only estimate a return by guessing the expectation of r1/2.

Instead of making two separate investments of one year each, Julie could invest her money today in a bond that pays off in two years with interest rate r2. The final payoff is then (1 + r2)^2. This second strategy is riskless as she knows her return for sure. Now, this strategy can be reinterpreted along the lines of the first strategy as follows. It consists in investing for one year at the rate r1 and for the second year at a forward rate f2. The forward rate is like the r1/2 rate, with the essential difference that it is guaranteed: by buying the two-year bond, Julie can “lock in” an interest rate f2 for the second year.
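Making the implicit arithmetic explicit (this worked line is an addition, using the same notation as the text): equating the certain payoff of the two-year bond with the one-year-plus-forward reinvestment gives the forward rate in terms of the two quoted rates,

\[
(1 + r_2)^2 = (1 + r_1)(1 + f_2)
\qquad\Longrightarrow\qquad
f_2 = \frac{(1 + r_2)^2}{1 + r_1} - 1 .
\]

For instance, with r1 = 3% and r2 = 4%, the locked-in forward rate is f2 = 1.04^2 / 1.03 - 1 ≈ 5.0%.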

This simple example illustrates that the set of all possible bonds traded on the market is equivalent to the so-called forward rate curve. The forward rate f(t,x) is thus the interest rate that can be contracted at time t for instantaneously riskless borrowing or lending at time t + x. It is thus a function or curve of the time-to-maturity x, where x plays the role of a “length” variable, that deforms with time t. Its knowledge is completely equivalent to the set of bond prices P(t,x) at time t that expire at time t + x. The shape of the forward rate curve f(t,x) incessantly fluctuates as a function of time t. These fluctuations are due to a combination of factors, including future expectations of the short-term interest rates, liquidity preferences, market segmentation and trading. It is obvious that the forward rate f(t, x+δx) for small δx cannot be very different from f(t,x). It is thus tempting to see f(t,x) as a “string” characterized by a kind of tension which prevents too large local deformations that would not be financially acceptable. This superficial analogy lies in the continuation of the repeated intersections between finance and physics, starting with Bachelier, who solved the diffusion equation of Brownian motion as a model of stock market price fluctuations five years before Einstein, and continuing with Mandelbrot’s discovery of the relevance of Lévy laws for cotton price fluctuations, which can be compared with the present interest in such power laws for the description of physical and natural phenomena. The present investigation delves into how to formalize mathematically this analogy between the forward rate curve and a string. We formulate the term structure of interest rates as the solution of a stochastic partial differential equation (SPDE), following the physical analogy of a continuous curve (string) whose shape moves stochastically through time.
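As a reminder of the equivalence invoked here (a standard relation under continuous compounding, added for completeness rather than taken from the original), the bond prices and the forward curve determine each other through

\[
P(t,x) = \exp\!\Big(-\!\int_0^x f(t,u)\,\mathrm{d}u\Big),
\qquad
f(t,x) = -\,\frac{\partial}{\partial x}\,\ln P(t,x).
\]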

The equation of motion of macroscopic physical strings is derived from conservation laws. The fundamental equations of motion of microscopic strings formulated to describe the fundamental particles derive from global symmetry principles and dualities between long-range and short-range descriptions. Are there similar principles that can guide the determination of the equations of motion of the more down-to-earth financial forward rate “strings”?

Suppose that in the middle ages, before Copernicus and Galileo, the Earth really was stationary at the centre of the universe, and only began moving later on. Imagine that during the nineteenth century, when everyone believed classical physics to be true, it really was true, and quantum phenomena were non-existent. These are not philosophical musings, but an attempt to portray how physics might look if it actually behaved like the financial markets. Indeed, the financial world is such that any insight is almost immediately used to trade for a profit. As the insight spreads among traders, the “universe” changes accordingly. As G. Soros has pointed out, market players are “actors observing their own deeds”. As E. Derman, head of quantitative strategies at Goldman Sachs, puts it, in physics you are playing against God, who does not change his mind very often. In finance, you are playing against God’s creatures, whose feelings are ephemeral, at best unstable, and the news on which they are based keeps streaming in. Value clearly derives from human beings, while mass, charge and electromagnetism apparently do not. This has led to suggestions that a fruitful framework in which to study finance and economy is to use evolutionary models inspired by biology and genetics.

This does not however guide us much for the determination of “fundamental” equations, if any. Here, we propose to use the condition of absence of arbitrage opportunity and show that this leads to strong constraints on the structure of the governing equations. The basic idea is that, if there are arbitrage opportunities (free lunches), they cannot live long or must be quite subtle, otherwise traders would act on them and arbitrage them away. The no-arbitrage condition is an idealization of a self-consistent dynamical state of the market resulting from the incessant actions of the traders (arbitragers). It is not the out-of-fashion equilibrium approximation sometimes described but rather embodies a very subtle cooperative organization of the market.

We consider this condition as the fundamental backbone for the theory. The idea of imposing this requirement is not new and is in fact the prerequisite of most models developed in the academic finance community. Modigliani and Miller have indeed emphasized the critical role played by arbitrage in determining the value of securities. It is sometimes suggested that transaction costs and other market imperfections make the no-arbitrage condition irrelevant. Let us briefly address this question.

Transaction costs in option replication and other hedging activities have been extensively investigated since they (or other market “imperfections”) clearly disturb the risk-neutral argument and set option theory back a few decades. Transaction costs induce, for obvious reasons, dynamic incompleteness, thus preventing valuation as we know it since Black and Scholes. However, the most efficient dynamic hedgers (market makers) incur essentially no transaction costs when owning options. These specialized market makers compete with each other to provide liquidity in option instruments, and maintain inventories in them. They rationally limit their dynamic replication to their residual exposure, not their global exposure. In addition, the fact that they do not hold options until maturity greatly reduces their costs of dynamic hedging. They have an incentive in the acceleration of financial intermediation. Furthermore, as options are rarely replicated until maturity, the expected transaction costs of the short options depend mostly on the dynamics of the order flow in the option markets – not on the direct costs of transacting. For the efficient operators (and those operators only), markets are more dynamically complete than anticipated. This is not true for a second category of traders, those who merely purchase or sell financial instruments that are subjected to dynamic hedging. They, accordingly, neither are equipped for dynamic hedging, nor have the need for it, thanks to the existence of specialized and more efficient market makers. The examination of their transaction costs in the event of their decision to dynamically replicate their options is of no true theoretical contribution. A second important point is that the existence of transaction costs should not be invoked as an excuse for disregarding the no-arbitrage condition, but, rather should be constructively invoked to study its impacts on the models…..