The Biological Kant. Note Quote.


The biological treatise takes as its object the realm of nature left out by Kant’s critical demarcation of scientific – that is, mathematical and mechanistic – physics. The main idea there was that scientifically understandable Nature is defined by lawfulness. In his Metaphysical Foundations of Natural Science, this idea was taken further in the following claim:

I claim, however, that there is only as much proper science to be found in any special doctrine of nature as there is mathematics therein, and further that ‘a pure doctrine of nature about certain things in nature (doctrine of bodies and doctrine of minds) is only possible by means of mathematics’.

The basic idea is thus to identify Nature’s lawfulness with its ability to be studied by means of mathematical schemata uniting understanding and intuition. The central schema, to Kant, was numbers, so apt to be used in the understanding of mechanically caused movement. But already here, Kant is well aware that a whole series of aspects of spontaneously experienced Nature is left out of sight by the concentration on matter in motion, and he calls for these further realms of Nature to be studied by a continuation of the Copernican turn, by the mind’s further study of the utmost limits of itself. Why do we spontaneously see natural purposes in Nature? Purposiveness is wholly different from the necessity that is crucial to Kant’s definition of Nature. There is no reason in the general concept of Nature (as lawful) to assume that nature’s objects may serve each other as purposes. Nevertheless, we do not stop assuming just that. But what we do when we ascribe purposes to Nature is to use the faculties of the mind in another way than in science, much closer to the way we use them in the appreciation of beauty and art, the object of the first part of the book immediately before the treatment of teleological judgment. This judgment is characterized by a central distinction, already widely argued in that first part of the book: the difference between determinative and reflective judgments. While the determinative judgment is used scientifically to decide whether a specific case follows a certain rule, explaining it by derivation from a principle and thus constituting the objectivity of the object in question, the reflective judgment lacks all these features. It does not proceed by means of explanation, but by mere analogy; it is not constitutive, but merely regulative; it does not prove anything but merely judges, and it has no principle of reason to rest its head upon but the very act of judging itself. These ideas are elaborated throughout the critique of teleological judgment.


In the section Analytik der teleologischen Urteilskraft, Kant gradually approaches the question. First, merely formal purposiveness is treated: we may ascribe purposes to geometry in so far as it is useful to us, just as rivers carrying fertile soil for trees to grow in may be ascribed purposes; these are, however, merely contingent purposes, dependent on an external telos. The crucial point is the existence of objects which are only possible as such in so far as they are defined by purposes:

That its form is not possible according to mere natural laws, that is, laws which can be known by us through understanding applied to objects of the senses; on the contrary, even the empirical knowledge of them, regarding their cause and effect, presupposes concepts of reason.

The idea here is that in order to conceive of objects which cannot be explained with reference to understanding and its (in this case, mechanical) concepts alone, these must be grasped by the non-empirical ideas of reason itself. If causes are perceived as being interlinked in chains, then such contingencies are to be thought of only as small causal circles on the chain, that is, as things being their own cause. Hence Kant’s definition of the Idea of a natural purpose:

an object exists as a natural purpose when it is cause and effect of itself.

This can be thought of as an idea without contradiction, Kant maintains, but not conceived. This circularity (the small causal circles) is a very important feature in Kant’s tentative schematization of purposiveness. Another way of putting this Idea is that things as natural purposes are organized beings. This entails that naturally purposeful objects must possess a certain spatio-temporal construction: the parts of such a thing must be possible only through their relation to the whole – and, conversely, the parts must actively connect themselves to this whole. Thus, the corresponding idea can be summed up as the Idea of the Whole which is necessary to pass judgment on any empirical organism, and it is very interesting to note that Kant sums up the determination of any part of a Whole by all other parts in the phrase that a natural purpose is possible only as an organized and self-organizing being. This is probably the very birth certificate of the metaphysics of self-organization. It is important to keep in mind that Kant does not feel any vitalist temptation toward supposing any organizing power or any autonomy on the part of the whole, which may come into being only by this process of self-organization between its parts. When Kant talks about the forming power in the formation of the Whole, it is thus nothing outside of this self-organization of its parts.

This leads to Kant’s final definition: an organized being is one in which everything is reciprocally both end and means. This idea is extremely important as a formalization of the idea of teleology: natural purposes do not imply that there exist given, stable ends for nature to pursue; on the contrary, they are locally defined by causal cycles in which every part interchangeably assumes the role of end and means. Thus, there is no absolute end in this construal of nature’s teleology; it analyzes teleology formally at the same time as it relativizes it with respect to substance. Kant takes care to note that this maxim need not be restricted to the beings – animals – which we spontaneously tend to judge as purposeful. The idea of natural purposes thus entails that there might exist a plan in nature rendering purposeful for us even processes which we have every reason to find repugnant. In this vision, teleology might embrace causality – and even aesthetics:

Also natural beauty, that is, its harmony with the free play of our epistemological faculties in the experience and judgment of its appearance, can be seen as an objective purposiveness of nature in its totality as a system, in which man is a member.

An important consequence of Kant’s doctrine is that teleology is, so to speak, secularized in two ways: (1) it is formal, and (2) it is local. It is formal because self-organization does not ascribe any special, substantial goal for organisms to pursue – other than the sustainment of self-organization. Thus teleology is merely a formal property of certain types of systems. This is also why teleology is local – it is to be found in certain systems when the causal chain forms loops, as Kant metaphorically describes the cycles involved in self-organization – it is not an overarching goal governing organisms from the outside. Teleology is a local, bottom-up process only.

Kant does not in any way doubt the existence of organized beings; what is at stake is the possibility of dealing with them scientifically in terms of mechanics. Even if they exist as given things in experience, natural purposes cannot receive any concept. This implies that biology is evident in so far as the existence of organisms cannot be doubted, yet biology will never rise to the heights of science; its attempts at doing so are delimited in advance, all scientific explanations of organisms being bound to be mechanical. Following this line of argument, it corresponds very well to present-day reductionism in biology, which tries to trace all problems of phenotypical characters, organization, morphogenesis, behavior, ecology, etc. back to the biochemistry of genetics. But the other side of the argument is that no matter how successful this reduction may prove, it will never be able to reduce or replace the teleological point of view necessary in order to understand the organism as such in the first place.

Evidently, there is something deeply unsatisfactory in this conclusion, which is why most biologists have hesitated to adopt it and cling either to full-blown reductionism or to some brand of vitalism, subjecting themselves to the dangers of ‘transcendental illusion’ and allowing for some Goethe-like intuitive idea without any schematization. Kant tries to soften up the question by philosophical means, by establishing a crossing over from metaphysics to physics, or from the metaphysical constraints on mechanical physics to physics in its empirical totality, including the organized beings of biology. Pure mechanics leaves physics as a whole unorganized, and this organization is sought to be established by means of mediating concepts. Among them is the formative power, which is not conceived of in a vitalist, substantialist manner, but is rather a notion referring to the means by which matter manages to self-organize. It thus comprehends not only biological organization, but the macrophysics of solid matter as well. Here, he adds an important argument to the critique of judgment:

Because man is conscious of himself as a self-moving machine, without being able to further understand such a possibility, he can, and is entitled to, introduce a priori organic-moving forces of bodies into the classification of bodies in general and thus to distinguish mere mechanical bodies from self-propelled organic bodies.


Knowledge Limited for Dummies….Didactics.


Bertrand Russell, together with Alfred North Whitehead, aimed in the Principia Mathematica to demonstrate that “all pure mathematics follows from purely logical premises and uses only concepts defined in logical terms.” Its goal was to provide a formalized logic for all mathematics, to develop the full structure of mathematics where every premise could be proved from a clear set of initial axioms.

Russell observed of the dense and demanding work, “I used to know of only six people who had read the later parts of the book. Three of those were Poles, subsequently (I believe) liquidated by Hitler. The other three were Texans, subsequently successfully assimilated.” The complex mathematical symbols of the manuscript required it to be written by hand, and its sheer size – when it was finally ready for the publisher, Russell had to hire a panel truck to send it off – made it impossible to copy. Russell recounted that “every time that I went out for a walk I used to be afraid that the house would catch fire and the manuscript get burnt up.”

Momentous though it was, the greatest achievement of Principia Mathematica was realized two decades after its completion when it provided the fodder for the metamathematical enterprises of an Austrian, Kurt Gödel. Although Gödel did face the risk of being liquidated by Hitler (hence his eventual flight to the Institute for Advanced Study at Princeton), he was neither a Pole nor a Texan. In 1931, he wrote a treatise entitled On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which demonstrated that the goal Russell and Whitehead had so single-mindedly pursued was unattainable.

The flavor of Gödel’s basic argument can be captured in the contradictions contained in a schoolboy’s brainteaser. A sheet of paper has the words “The statement on the other side of this paper is true” written on one side and “The statement on the other side of this paper is false” on the reverse. The conflict isn’t resolvable. Or, even more trivially, a statement like: “This statement is unprovable.” You cannot prove the statement is true, because doing so would contradict it. If you prove the statement is false, then that means its converse is true – it is provable – which again is a contradiction.

The key point of contradiction for these two examples is that they are self-referential. This same sort of self-referentiality is the keystone of Gödel’s proof, where he uses statements that embed other statements within them. This problem did not totally escape Russell and Whitehead. By the end of 1901, Russell had completed the first round of writing Principia Mathematica and thought he was in the homestretch, but was increasingly beset by these sorts of apparently simple-minded contradictions falling in the path of his goal. He wrote that “it seemed unworthy of a grown man to spend his time on such trivialities, but . . . trivial or not, the matter was a challenge.” Attempts to address the challenge extended the development of Principia Mathematica by nearly a decade.

Yet Russell and Whitehead had, after all that effort, missed the central point. Like granite outcroppings piercing through a bed of moss, these apparently trivial contradictions were rooted in the core of mathematics and logic, and were only the most readily manifest examples of a limit to our ability to structure formal mathematical systems. Just four years before Gödel had defined the limits of our ability to conquer the intellectual world of mathematics and logic with the publication of his Undecidability Theorem, the German physicist Werner Heisenberg’s celebrated Uncertainty Principle had delineated the limits of inquiry into the physical world, thereby undoing the efforts of another celebrated intellect, the great mathematician Pierre-Simon Laplace. In the early 1800s Laplace had worked extensively to demonstrate the purely mechanical and predictable nature of planetary motion. He later extended this theory to the interaction of molecules. In the Laplacean view, molecules are just as subject to the laws of physical mechanics as the planets are. In theory, if we knew the position and velocity of each molecule, we could trace its path as it interacted with other molecules, and trace the course of the physical universe at the most fundamental level. Laplace envisioned a world of ever more precise prediction, where the laws of physical mechanics would be able to forecast nature in increasing detail and ever further into the future, a world where “the phenomena of nature can be reduced in the last analysis to actions at a distance between molecule and molecule.”

What Gödel did to the work of Russell and Whitehead, Heisenberg did to Laplace’s concept of causality. The Uncertainty Principle, though broadly applied and draped in metaphysical context, is a well-defined and elegantly simple statement of physical reality – namely, that the product of the uncertainties in a measurement of an electron’s position and of its momentum cannot fall below a fixed value. The reason for this, viewed from the standpoint of classical physics, is that accurately measuring the position of an electron requires illuminating the electron with light of a very short wavelength. But the shorter the wavelength the greater the amount of energy that hits the electron, and the greater the energy hitting the electron the greater the impact on its velocity.
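
Stated in the standard textbook form (the notation below is introduced for convenience and is not part of the passage above), the principle bounds the product of the two uncertainties:

$$\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2},$$

where $\Delta x$ and $\Delta p$ are the standard deviations of position and momentum and $\hbar$ is the reduced Planck constant. No refinement of the apparatus can push the product below this floor: sharpening one measurement necessarily blurs the other.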

What is true in the subatomic sphere ends up being true – though with rapidly diminishing significance – for the macroscopic. Nothing can be measured with complete precision as to both location and velocity because the act of measuring alters the physical properties. The idea that if we know the present we can calculate the future was proven invalid – not because of a shortcoming in our knowledge of mechanics, but because the premise that we can perfectly know the present was proven wrong. These limits to measurement imply limits to prediction. After all, if we cannot know even the present with complete certainty, we cannot unfailingly predict the future. It was with this in mind that Heisenberg, ecstatic about his yet-to-be-published paper, exclaimed, “I think I have refuted the law of causality.”

The epistemological extrapolation of Heisenberg’s work was that the root of the problem was man – or, more precisely, man’s examination of nature, which inevitably impacts the natural phenomena under examination so that the phenomena cannot be objectively understood. Heisenberg’s principle was not something that was inherent in nature; it came from man’s examination of nature, from man becoming part of the experiment. (So in a way the Uncertainty Principle, like Gödel’s Undecidability Proposition, rested on self-referentiality.) While it did not directly refute Einstein’s assertion against the statistical nature of the predictions of quantum mechanics that “God does not play dice with the universe,” it did show that if there were a law of causality in nature, no one but God would ever be able to apply it. The implications of Heisenberg’s Uncertainty Principle were recognized immediately, and it became a simple metaphor reaching beyond quantum mechanics to the broader world.

This metaphor extends neatly into the world of financial markets. In the purely mechanistic universe of classical physics, we could apply Newtonian laws to project the future course of nature, if only we knew the location and velocity of every particle. In the world of finance, the elementary particles are the financial assets. In a purely mechanistic financial world, if we knew the position each investor has in each asset and the ability and willingness of liquidity providers to take on those assets in the event of a forced liquidation, we would be able to understand the market’s vulnerability. We would have an early-warning system for crises. We would know which firms are subject to a liquidity cycle, and which events might trigger that cycle. We would know which markets are being overrun by speculative traders, and thereby anticipate tactical correlations and shifts in the financial habitat. The randomness of nature and economic cycles might remain beyond our grasp, but the primary cause of market crisis, and the part of market crisis that is of our own making, would be firmly in hand.

The first step toward the Laplacean goal of complete knowledge is the advocacy by certain financial market regulators to increase the transparency of positions. Politically, that would be a difficult sell – as would any kind of increase in regulatory control. Practically, it wouldn’t work. Just as the atomic world turned out to be more complex than Laplace conceived, the financial world may be similarly complex and not reducible to a simple causality. The problems with position disclosure are many. Some financial instruments are complex and difficult to price, so it is impossible to measure precisely the risk exposure. Similarly, in hedge positions a slight error in the transmission of one part, or asynchronous pricing of the various legs of the strategy, will grossly misstate the total exposure. Indeed, the problems and inaccuracies in using position information to assess risk are exemplified by the fact that major investment banking firms choose to use summary statistics rather than position-by-position analysis for their firmwide risk management despite having enormous resources and computational power at their disposal.

Perhaps more importantly, position transparency also has implications for the efficient functioning of the financial markets beyond the practical problems involved in its implementation. The problems in the examination of elementary particles in the financial world are the same as in the physical world: Beyond the inherent randomness and complexity of the systems, there are simply limits to what we can know. To say that we do not know something is as much a challenge as it is a statement of the state of our knowledge. If we do not know something, that presumes that either it is not worth knowing or it is something that will be studied and eventually revealed. It is the hubris of man that all things are discoverable. But for all the progress that has been made, perhaps even more exciting than the rolling back of the boundaries of our knowledge is the identification of realms that can never be explored. A sign in Einstein’s Princeton office read, “Not everything that counts can be counted, and not everything that can be counted counts.”

The behavioral analogue to the Uncertainty Principle is obvious. There are many psychological inhibitions that lead people to behave differently when they are observed than when they are not. For traders it is a simple matter of dollars and cents that will lead them to behave differently when their trades are open to scrutiny. Beneficial though it may be for the liquidity demander and the investor, for the liquidity supplier transparency is bad. The liquidity supplier does not intend to hold the position for a long time, like the typical liquidity demander might. Like a market maker, the liquidity supplier will come back to the market to sell off the position – ideally when there is another investor who needs liquidity on the other side of the market. If other traders know the liquidity supplier’s positions, they will logically infer that there is a good likelihood these positions shortly will be put into the market. The other traders will be loath to be the first ones on the other side of these trades, or will demand more of a price concession if they do trade, knowing the overhang that remains in the market.

This means that increased transparency will reduce the amount of liquidity provided for any given change in prices. This is by no means a hypothetical argument. Frequently, even in the most liquid markets, broker-dealer market makers (liquidity providers) use brokers to enter their market bids rather than entering the market directly in order to preserve their anonymity.

The more information we extract to divine the behavior of traders and the resulting implications for the markets, the more the traders will alter their behavior. The paradox is that to understand and anticipate market crises, we must know positions, but knowing and acting on positions will itself generate a feedback into the market. This feedback often will reduce liquidity, making our observations less valuable and possibly contributing to a market crisis. Or, in rare instances, the observer/feedback loop could be manipulated to amass fortunes.

One might argue that the physical limits of knowledge asserted by Heisenberg’s Uncertainty Principle are critical for subatomic physics, but perhaps they are really just a curiosity for those dwelling in the macroscopic realm of the financial markets. We cannot measure an electron precisely, but certainly we still can “kind of know” the present, and if so, then we should be able to “pretty much” predict the future. Causality might be approximate, but if we can get it right to within a few wavelengths of light, that still ought to do the trick. The mathematical system may be demonstrably incomplete, and the world might not be pinned down on the fringes, but for all practical purposes the world can be known.

Unfortunately, while “almost” might work for horseshoes and hand grenades, 30 years after Gödel and Heisenberg yet a third limitation of our knowledge was in the wings, a limitation that would close the door on any attempt to block out the implications of microscopic uncertainty on predictability in our macroscopic world. Based on observations made by Edward Lorenz in the early 1960s and popularized by the so-called butterfly effect – the fanciful notion that the beating wings of a butterfly could change the predictions of an otherwise perfect weather forecasting system – this limitation arises because in some important cases immeasurably small errors can compound over time to limit prediction in the larger scale. Half a century after the limits of measurement and thus of physical knowledge were demonstrated by Heisenberg in the world of quantum mechanics, Lorenz piled on a result that showed how microscopic errors could propagate to have a stultifying impact in nonlinear dynamic systems. This limitation could come into the forefront only with the dawning of the computer age, because it is manifested in the subtle errors of computational accuracy.

The essence of the butterfly effect is that small perturbations can have large repercussions in massive, random forces such as weather. Edward Lorenz was testing and tweaking a model of weather dynamics on a rudimentary vacuum-tube computer. The program was based on a small system of simultaneous equations, but seemed to provide an inkling into the variability of weather patterns. At one point in his work, Lorenz decided to examine in more detail one of the solutions he had generated. To save time, rather than starting the run over from the beginning, he picked some intermediate conditions that had been printed out by the computer and used those as the new starting point. The values he typed in were the same as the values held in the original simulation at that point, so the results the simulation generated from that point forward should have been the same as in the original; after all, the computer was doing exactly the same operations. What he found was that as the simulated weather pattern progressed, the results of the new run diverged, first very slightly and then more and more markedly, from those of the first run. After a point, the new path followed a course that appeared totally unrelated to the original one, even though they had started at the same place.

Lorenz at first thought there was a computer glitch, but as he investigated further, he discovered the basis of a limit to knowledge that rivaled that of Heisenberg and Gödel. The problem was that the numbers he had used to restart the simulation had been reentered based on his printout from the earlier run, and the printout rounded the values to three decimal places while the computer carried the values to six decimal places. This rounding, clearly insignificant at first, propagated a slight error in the next-round results, and this error grew with each new iteration of the program as it moved the simulation of the weather forward in time. The error doubled every four simulated days, so that after a few months the solutions were going their own separate ways. The slightest of changes in the initial conditions had traced out a wholly different pattern of weather.
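
A minimal numerical sketch of the same effect is easy to reproduce. The equations and parameters below are the standard Lorenz-63 system with a crude forward-Euler integrator, chosen purely for illustration rather than taken from Lorenz’s original program; the three-decimal truncation plays the role of his printout.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the standard Lorenz-63 system."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return np.array([x + dt * dx, y + dt * dy, z + dt * dz])

# Run forward to an intermediate point, then restart a second copy from the
# same state rounded to three decimals, mimicking re-entry from a printout.
full = np.array([1.0, 1.0, 1.0])
for _ in range(1000):
    full = lorenz_step(full)

restart = np.round(full, 3)   # the "printout" keeps only three decimal places

for step in range(1, 3001):
    full = lorenz_step(full)
    restart = lorenz_step(restart)
    if step % 500 == 0:
        print(f"step {step:4d}   separation {np.linalg.norm(full - restart):.6f}")
```

The printed separation starts at roughly the size of the rounding error and grows by orders of magnitude until the two trajectories bear no recognizable relation to each other.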

Intrigued by his chance observation, Lorenz wrote an article entitled “Deterministic Nonperiodic Flow,” which stated that “nonperiodic solutions are ordinarily unstable with respect to small modifications, so that slightly differing initial states can evolve into considerably different states.” Translation: Long-range weather forecasting is worthless. For his application in the narrow scientific discipline of weather prediction, this meant that no matter how precise the starting measurements of weather conditions, there was a limit after which the residual imprecision would lead to unpredictable results, so that “long-range forecasting of specific weather conditions would be impossible.” And since this occurred in a very simple laboratory model of weather dynamics, it could only be worse in the more complex equations that would be needed to properly reflect the weather. Lorenz discovered the principle that would emerge over time into the field of chaos theory, where a deterministic system generated with simple nonlinear dynamics unravels into an unrepeated and apparently random path.

The simplicity of the dynamic system Lorenz had used suggests a far-reaching result: Because we cannot measure without some error (harking back to Heisenberg), for many dynamic systems our forecast errors will grow to the point that even an approximation will be out of our hands. We can run a purely mechanistic system that is designed with well-defined and apparently well-behaved equations, and it will move over time in ways that cannot be predicted and, indeed, that appear to be random. This gets us to Santa Fe.

The principal conceptual thread running through the Santa Fe research asks how apparently simple systems, like that discovered by Lorenz, can produce rich and complex results. Its method of analysis in some respects runs in the opposite direction of the usual path of scientific inquiry. Rather than taking the complexity of the world and distilling simplifying truths from it, the Santa Fe Institute builds a virtual world governed by simple equations that when unleashed explode into results that generate unexpected levels of complexity.

In economics and finance, the institute’s agenda was to create artificial markets with traders and investors who followed simple and reasonable rules of behavior and to see what would happen. Some of the traders built into the model were trend followers, others bought or sold based on the difference between the market price and perceived value, and yet others traded at random times in response to liquidity needs. The simulations then printed out the paths of prices for the various market instruments. Qualitatively, these paths displayed all the richness and variation we observe in actual markets, replete with occasional bubbles and crashes. The exercises did not produce positive results for predicting or explaining market behavior, but they did illustrate that it is not hard to create a market that looks on the surface an awful lot like a real one, and to do so with actors who are following very simple rules. The mantra is that simple systems can give rise to complex, even unpredictable dynamics, an interesting converse to the point that much of the complexity of our world can – with suitable assumptions – be made to appear simple, summarized with concise physical laws and equations.
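
A toy version of such an artificial market can be put together in a few lines. The three agent types below mirror the ones just described, but every rule and parameter is invented for this sketch and is not drawn from the Santa Fe models themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 2000                  # trading periods
value = 100.0             # perceived fundamental value, held fixed for simplicity
prices = [100.0, 100.0]

for _ in range(2, T):
    p = prices[-1]
    recent_return = np.log(prices[-1] / prices[-2])

    trend_demand = 18.0 * recent_return           # trend followers chase momentum
    value_demand = 0.5 * (value - p) / value      # value traders fade deviations
    noise_demand = rng.normal(0.0, 0.02)          # liquidity traders act at random

    net_demand = trend_demand + value_demand + noise_demand
    prices.append(p * np.exp(0.05 * net_demand))  # crude price-impact rule

prices = np.array(prices)
print(f"price range: {prices.min():.1f} to {prices.max():.1f} around a value of {value}")
print(f"per-period return volatility: {np.diff(np.log(prices)).std():.4f}")
```

With these numbers the tug-of-war between trend-chasing and value-reversion produces waves of over- and under-shooting around the perceived value; pushing the momentum coefficient a little higher drives the feedback loop toward instability and the path develops pronounced boom-and-bust episodes, even though every agent follows a trivially simple rule.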

The systems explored by Lorenz were deterministic. They were governed definitively and exclusively by a set of equations where the value in every period could be unambiguously and precisely determined based on the values of the previous period. And the systems were not very complex. By contrast, whatever set of equations might be divined to govern the financial world, they are not simple and, furthermore, they are not deterministic. There are random shocks from political and economic events and from the shifting preferences and attitudes of the actors. If we cannot hope to know the course of even deterministic systems like fluid mechanics, then no level of detail will allow us to forecast the long-term course of the financial world, buffeted as it is by the vagaries of the economy and the whims of psychology.

Banking and Lending/Investment. How Monetary Policy Becomes Decisive? Some Branching Rumination.


Among the most notoriously pernicious effects of asset price inflation is that it offers speculators the prospect of gain in excess of the costs of borrowing the money to buy the asset whose price is being inflated. This is how many unstable Ponzi financing structures begin. There are usually strict regulations to prevent or limit banks’ direct investment in financial instruments without any assured residual liquidity, such as equity or common stocks. However, it is less easy to prevent banks from lending to speculative investors, who then use the proceeds of their loans to buy securities, or to limit lending secured on financial assets. As long as asset markets are being inflated, such credit expansions also conceal from banks, their shareholders and their regulators the disintermediation that occurs when the banks’ best borrowers, governments and large companies, use bills and company paper instead of bank loans for their short-term financing. As long as the boom proceeds, banks can enjoy the delusion that they can replace the business of governments and large companies with good lending secured on stocks.

In addition to undermining the solvency of the banking system, and distracting commerce and industry with the possibilities of lucrative corporate restructuring, capital market inflation also tends to make monetary policy ineffective. Monetary policy is principally the fixing of reserve requirements, buying and selling short-term paper or bills in the money or inter-bank markets, buying and selling government bonds and fixing short-term interest rates. As noted in the previous section, with capital market inflation there has been a proliferation of short-term financial assets traded in the money markets, as large companies and banks find it cheaper to issue their own paper than to borrow from banks. This disintermediation has extended the range of short-term liquid assets which banks may hold. As a result of this, it is no longer possible for central banks, in countries experiencing capital market inflation, to control the overall amount of credit available in the economy: attempts to squeeze the liquidity of banks in order to limit their credit advances by, say, open market operations (selling government bonds) are frustrated by the ease with which banks may restore their liquidity by selling bonds or their holdings of short-term paper or bills. In this situation central banks have been forced to reduce the scope of their monetary policy to the setting of short-term interest rates.

Economists have long believed that monetary policy is effective in controlling price inflation in the economy at large, as opposed to inflation of securities prices. Various rationalizations have been advanced for this efficacy of monetary policy. For the most part they suppose some automatic causal connection between changes in the quantity of money in circulation and changes in prices, although the Austrian School of Economists tended on occasion to see the connection as being between changes in the rate of interest and changes in prices.

Whatever effect changes in the rate of interest may have on the aggregate of money circulating in the economy, the effect of such changes on prices has to come through the way in which an increase or decrease in the rate of interest causes alterations in expenditure in the economy. Businesses and households are usually hard-headed enough to decide their expenditure and financial commitments in the light of their nominal revenues and cash outflows, which may inform their expectations, rather than in accordance with expectations or optimizing calculations alone. If the same amount of money continues to be spent in the economy, then there is no effective reason for the business-people setting prices to vary those prices. Only if expenditure in markets is rising or falling would retailers and industrialists consider increasing or decreasing prices. Because price expectations are difficult to observe directly, they may explain everything in general and therefore lack precision in explaining anything in particular. Notwithstanding their effects on all sorts of expectations, interest rate changes affect inflation directly through their effects on expenditure.

The principal expenditure effects of changes in interest rates occur among net debtors in the economy, i.e., economic units whose financial liabilities exceed their financial assets. This is in contrast to net creditors, whose financial assets exceed their liabilities, and who are usually wealthy enough not to have their spending influenced by changes in interest rates. If they do not have sufficient liquid savings out of which to pay the increase in their debt service payments, then net debtors have their expenditure squeezed by having to devote more of their income to debt service payments. The principal net debtors are governments, households with mortgages and companies with large bank loans.

With or without capital market inflation, higher interest rates have never constrained government spending because of the ease with which governments may issue debt. In the case of indebted companies, the degree to which their expenditure is constrained by higher interest rates depends on their degree of indebtedness, the available facilities for additional financing and the liquidity of their assets. As a consequence of capital market inflation, larger companies reduce their borrowing from banks because it becomes cheaper and more convenient to raise even short-term finance in the booming securities markets. This then makes the expenditure of even indebted companies less immediately affected by changes in bank interest rates, because general changes in interest rates cannot affect the rate of discount or interest paid on securities already issued. Increases in short-term interest rates to reduce general price inflation can then be easily evaded by companies financing themselves by issuing longer-term securities, whose interest rates tend to be more stable. Furthermore, with capital market inflation, companies are more likely to be over-capitalized and have excessive financial liabilities, against which companies tend to hold a larger stock of more liquid assets. As inflated financial markets have become more unstable, this has further increased the liquidity preference of large companies. This excess liquidity enables the companies enjoying it to gain higher interest income to offset the higher cost of their borrowing and to maintain their planned spending. Larger companies, with access to capital markets, can afford to issue securities to replenish their liquid reserves.

While capital market inflation reduces the effectiveness of monetary policy against product price inflation, because of the reduced borrowing of companies and the ability of booming asset markets to absorb large quantities of bank credit, interest rate increases have nonetheless appeared effective in puncturing asset market bubbles in general and capital market inflations in particular. Whether interest rate rises actually can effect an end to capital market inflation depends on how such rises actually affect the capital market. In asset markets, as with anti-inflationary policy in the rest of the economy, such increases are effective when they squeeze the liquidity of indebted economic units by increasing the outflow of cash needed to service debt payments and by discouraging further speculative borrowing. However, they can only be effective in this way if the credit being used to inflate the capital market is short term or is at variable rates of interest determined by the short-term rate.

Keynes’s speculative demand for money is the liquidity preference or demand for short-term securities of rentiers in relation to the yield on long-term securities. Keynes’s speculative motive is ‘a continuous response to gradual changes in the rate of interest’ in which, as interest rates along the whole maturity spectrum decline, there is a shift in rentiers’ portfolio preference toward more liquid assets. Keynes clearly equated a rise in equity (common stock) prices with just such a fall in interest rates. With falling interest rates, the increasing preference of rentiers for short-term financial assets could keep the capital market from excessive inflation.

But the relationship between rates of interest, capital market inflation and liquidity preference is somewhat more complicated. In reality, investors hold liquid assets not only for liquidity, which gives them the option to buy higher-yielding longer-term stocks when their prices fall, but also for yield. This marginalizes Keynes’s speculative motive for liquidity. The motive was based on Keynes’s distinction between what he called ‘speculation’ (investment for capital gain) and ‘enterprise’ (investment long term for income). In our times, the modern rentier is the fund manager investing long term on behalf of pension and insurance funds and competing for returns against other funds managers. An inflow into the capital markets in excess of the financing requirements of firms and governments results in rising prices and turnover of stock. This higher turnover means greater liquidity so that, as long as the capital market is being inflated, the speculative motive for liquidity is more easily satisfied in the market for long-term securities.

Furthermore, capital market inflation adds a premium of expected inflation, or prospective capital gain, to the yield on long-term financial instruments. Hence when the yield decreases, due to an increase in the securities’ market or actual price, the prospective capital gain will not fall in the face of this capital appreciation, but may even increase if it is large or abrupt. Rising short-term interest rates will therefore fail to induce a shift in the liquidity preference of rentiers towards short-term instruments until the central bank pushes these rates of interest above the sum of the prospective capital gain and the market yield on long-term stocks. Only at this point will there be a shift in investors’ preferences, causing capital market inflation to cease, or bursting an asset bubble.

This suggests a new financial instability hypothesis, albeit one that is more modest and more limited in scope and consequence than Minsky’s Financial Instability Hypothesis. During an economic boom, capital market inflation adds a premium of expected capital gain to the market yield on long-term stocks. As long as this yield plus the expected capital gain exceeds the rate of interest on short-term securities set by the central bank’s monetary policy, rising short-term interest rates will have no effect on the inflow of funds into the capital market nor, if this inflow is greater than the financing requirements of firms and governments, on the resulting capital market inflation. Only when the short-term rate of interest exceeds the threshold set by the sum of the prospective capital gain and the yield on long-term stocks will there be a shift in rentiers’ preferences. The increase in liquidity preference will reduce the inflow of funds into the capital market. As the rise in stock prices moderates, the prospective capital gain gets smaller, and may even become negative. The rentiers’ liquidity preference increases further and eventually the stock market crashes, or ceases to be active in stocks of longer maturities.
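
The threshold at stake can be written compactly (the notation is introduced here for convenience and is not taken from the text): with $i_s$ the short-term rate set by the central bank, $y_L$ the market yield on long-term stocks and $g^{e}$ the prospective capital gain, funds keep flowing into the capital market, and the inflation continues, as long as

$$y_L + g^{e} \;>\; i_s ,$$

and the shift in rentiers’ liquidity preference toward short-term instruments, with the ensuing deflation of the market, sets in only once $i_s > y_L + g^{e}$.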

At this point, the minimal or negative prospective capital gain makes equity or common stocks unattractive to rentiers at any positive yield, until the rate of interest on short-term securities falls below the sum of the prospective capital gain and the market yield on those stocks. When the short-term rate of interest does fall below this threshold, the resulting reduction in rentiers’ liquidity preference revives the capital market. Thus, in between the bursting of speculative bubbles and the resurrection of a dormant capital market, monetary policy has little effect on capital market inflation. Hence it is a poor regulator for ‘squeezing out inflationary expectations’ in the capital market.

Evolutionary Game Theory. Note Quote.


In classical evolutionary biology the fitness landscape for possible strategies is considered static. Therefore optimization theory is the usual tool used to analyze the evolution of strategies, which consequently tend to climb the peaks of the static landscape. However, in more realistic scenarios the evolution of populations modifies the environment, so that the fitness landscape becomes dynamic. In other words, the maxima of the fitness landscape depend on the number of specimens that adopt each strategy (a frequency-dependent landscape). In this case, when the evolution depends on agents’ actions, game theory is the adequate mathematical tool to describe the process. But this is precisely the scheme in which the evolving physical laws (i.e. algorithms or strategies) are generated from agent-agent interactions (a bottom-up process) subjected to natural selection.

The concept of an evolutionarily stable strategy (ESS) is central to evolutionary game theory. An ESS is defined as a strategy that cannot be displaced by any alternative strategy when it is followed by the great majority – almost all – of the systems in a population. In general, an ESS is not necessarily optimal; however, it might be assumed that in the last stages of evolution – before achieving the quantum equilibrium – the fitness landscape of possible strategies could be considered static, or at least slowly varying. In this simplified case an ESS would be the strategy with the highest payoff, therefore satisfying an optimizing criterion. Different ESSs could exist in other regions of the fitness landscape.
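
The notion can be made concrete with the textbook Hawk-Dove game, whose payoffs are frequency-dependent in exactly the sense sketched above. The short program below is purely illustrative – standard payoff values, nothing taken from the text – and iterates the replicator dynamics until the population settles on the mixed ESS.

```python
import numpy as np

# Hawk-Dove payoffs with resource value V = 2 and fight cost C = 3 (textbook values).
V, C = 2.0, 3.0
payoff = np.array([[(V - C) / 2.0, V],         # Hawk meeting (Hawk, Dove)
                   [0.0,           V / 2.0]])  # Dove meeting (Hawk, Dove)

x = np.array([0.1, 0.9])   # initial population shares of Hawks and Doves
dt = 0.01

for _ in range(20000):
    fitness = payoff @ x                       # frequency-dependent fitness of each strategy
    mean_fitness = x @ fitness
    x = x + dt * x * (fitness - mean_fitness)  # Euler step of the replicator equation
    x = np.clip(x, 0.0, None)
    x /= x.sum()

print(f"long-run share of Hawks: {x[0]:.3f}  (mixed-ESS prediction V/C = {V / C:.3f})")
```

With the fight cost exceeding the resource value, the evolutionarily stable mixture contains a fraction V/C of Hawks, and the iteration converges to it from almost any starting composition: no mutant strategy followed by a small minority can improve on it.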

In the information-theoretic Darwinian approach it seems plausible to assume as optimization criterion the optimization of information flows for the system. A set of three regulating principles could be:

Structure: The complexity of the system is optimized (maximized). The definition adopted for complexity is Bennett’s logical depth, which for a binary string is the time needed to execute the minimal program that generates that string (schematic formulas for logical depth and for the Fisher information used below are collected after the three principles). There is no general acceptance of the definition of complexity, nor is there a consensus on the relation between the increase of complexity – for a given definition – and Darwinian evolution. However, there seems to be some agreement on the fact that, in the long term, Darwinian evolution should drive an increase in complexity in the biological realm for an adequate natural definition of this concept. The complexity of a system at time t in this theory would then be the Bennett logical depth of the program stored at that time in its Turing machine. The increase of complexity is a characteristic of Lamarckian evolution, and it is also admitted that the trend of evolution in the Darwinian theory is in the direction in which complexity grows, although whether this tendency depends on the timescale – or on other factors – is still not very clear.

Dynamics: The information outflow of the system is optimized (minimized). The information is the Fisher information measure for the probability density function of the position of the system. According to S. A. Frank, natural selection acts by maximizing the Fisher information within a Darwinian system. As a consequence, assuming that the flow of information between a system and its surroundings can be modeled as a zero-sum game, Darwinian systems would follow the corresponding dynamics.

Interaction: The interaction between two subsystems optimizes (maximizes) the complexity of the total system. The complexity is again equated with Bennett’s logical depth. The role of interaction is central to the generation of composite systems, and therefore to the structure of the information processor of a composite system, which results from the logical interconnections among the processors of its constituents. There is an enticing option of defining the complexity of a system in contextual terms, as the capacity of a system for anticipating the behavior at t + ∆t of the surrounding systems included in the sphere of radius r centered on the position X(t) occupied by the system. This definition would lead directly to the maximization of predictive power for the systems that maximize their complexity. However, this magnitude would be very difficult even to estimate, in principle much more so than the usual definitions of complexity.
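
For reference, the two quantities invoked by these principles can be written schematically. These are the standard textbook forms, introduced here for convenience rather than taken from the theory itself: Bennett’s logical depth of a binary string x produced by a universal machine U, and the Fisher information of the position density p(x) together with the Cramér-Rao bound that the Dynamics principle exploits,

$$\mathrm{depth}(x) \;=\; t(\pi^{*}), \qquad \pi^{*} \;=\; \arg\min_{\pi\,:\,U(\pi)=x} |\pi| ,$$

$$I[p] \;=\; \int \frac{\bigl(\partial_x p(x)\bigr)^{2}}{p(x)}\,dx , \qquad \operatorname{Var}(\hat{x}) \;\ge\; \frac{1}{I[p]} ,$$

where t(π) is the running time of program π and the variance bound holds for any unbiased estimator of position.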

Quantum behavior of microscopic systems should now emerge from the ESS. In other terms, the postulates of quantum mechanics should be deduced from the application of the three regulating principles to physical systems endowed with an information processor.

Let us apply Structure. It is reasonable to consider that the maximization of the complexity of a system would in turn maximize the predictive power of such a system. And this optimal statistical inference capacity would plausibly induce the complex Hilbert space structure for the system’s space of states. Let us now consider Dynamics. This is basically the application of the principle of minimum Fisher information, or maximum Cramér-Rao bound, to the probability distribution function for the position of the system. The concept of entanglement seems to be determinant for studying the generation of composite systems, in particular in this theory through the application of Interaction. The theory admits a simple model that characterizes the entanglement between two subsystems as the mutual exchange of randomizers (R1, R2), programs (P1, P2) – with their respective anticipation modules (A1, A2) – and wave functions (Ψ1, Ψ2). In this way, both subsystems can anticipate not only the behavior of their corresponding surrounding systems, but also that of the environment of their partner entangled subsystem. In addition, entanglement can be considered a natural phenomenon in this theory, a consequence of the tendency to increase complexity, and therefore, in a certain sense, experimental support for the theory.

In addition, the information-theoretic Darwinian approach is a minimalist realist theory – every system follows a continuous trajectory in time, as in Bohmian mechanics – and a local theory in physical space: apparent nonlocality, as in Bell-inequality violations, would be an artifact of the anticipation module in the information space. Randomness, however, would necessarily be intrinsic to nature through the random number generator methodologically associated with every fundamental system at t = 0, an essential ingredient to start and fuel – through variation – Darwinian evolution. As time increases, random events determined by the random number generators would progressively be replaced by causal events determined by the evolving programs that gradually take control of the elementary systems. Randomness would be displaced by causality as physical Darwinian evolution gave rise to the quantum equilibrium regime, but not completely, since randomness would play a crucial role in the optimization of strategies – and thus of information flows – as game theory states.

Whitehead’s Non-Anthropocentric Quantum Field Ontology. Note Quote.


Whitehead builds also upon James’s claim that “The thought is itself the thinker”.

Either your experience is of no content, of no change, or it is of a perceptible amount of content or change. Your acquaintance with reality grows literally by buds or drops of perception. Intellectually and on reflection you can divide them into components, but as immediately given they come totally or not at all. — William James.

If the quantum vacuum displays features that make it resemble a material, albeit a really special one, we can immediately ask: what is this material made of? Is it a continuum, or are there “atoms” of vacuum? Is vacuum the primordial substance of which everything is made? Let us start by decoupling the concept of vacuum from that of spacetime. The concept of vacuum as accepted and used in standard quantum field theory is tied to that of spacetime. This is important for the theory of quantum fields, because it leads to observable effects. It is the variation of geometry, either as a change in boundary conditions or as a change in the speed of light (and therefore the metric), which is responsible for the creation of particles. Now, one can legitimately go further and ask: which one is the fundamental “substance”, the space-time or the vacuum? Is the geometry fundamental in any way, or is it just a property of empty space emerging from a deeper structure? That geometry and substance can be separated is of course not anything new for philosophers. Aristotle’s distinction between form and matter is one example. For Aristotle the “essence” becomes a true reality only when embodied in a form. Otherwise it is just a substratum of potentialities, somewhat similar to what quantum physics suggests. Immanuel Kant was even more radical: the forms, or in general the structures that we think of as either existing in or as being abstracted from the realm of noumena, are actually innate categories of the mind, preconditions that make possible our experience of reality as phenomena. Structures such as space and time, causality, etc. are a priori forms of intuition – thus by nature very different from anything in outside reality – and they are used to formulate synthetic a priori judgments. But almost everything that was discovered in modern physics is at odds with Kant’s view. In modern philosophy perhaps Whitehead’s process metaphysics provides the closest framework for formulating these problems. For Whitehead, potentialities are continuous, while the actualizations are discrete, much as in quantum theory the unitary evolution is continuous, while the measurement is non-unitary and in some sense “discrete”. An important concept is the “extensive continuum”, defined as a “relational complex” containing all the possibilities of objectification. This continuum also contains the potentiality for division; this potentiality is effected in what Whitehead calls “actual entities (occasions)” – the basic building blocks of his cosmology. The core issue for both Whiteheadian Process and Quantum Process is the emergence of the discrete from the continuous. But what fixes, or determines, the partitioning of the continuous whole into the discrete set of subsets? The orthodox answer is this: it is an intentional action of an experimenter that determines the partitioning! But in Whiteheadian process the world of fixed and settled facts grows via a sequence of actual occasions. The past actualities are the causal and structural inputs for the next actual occasion, which specifies a new space-time standpoint (region) from which the potentialities created by the past actualities will be prehended (grasped) by the current occasion. This basic autogenetic process creates the new actual entity, which, upon becoming actual, contributes to the potentialities for the succeeding actual occasions.
For the pragmatic physicist, since the extensive continuum provides the space of possibilities from which the actual entities arise, it is tempting to identify it with the quantum vacuum. The actual entities are then assimilated with events in spacetime, as resulting from a quantum measurement, or simply with particles. The following caveat is however due: Whitehead’s extensive continuum is also devoid of geometrical content, while the quantum vacuum normally carries information about the geometry, be it flat or curved. Objective/absolute actuality consists of a sequence of psycho-physical quantum reduction events, identified as Whiteheadian actual entities/occasions. These happenings combine to create a growing “past” of fixed and settled “facts”. Each “fact” is specified by an actual occasion/entity that has a physical aspect (pole), and a region in space-time from which it views reality. The physical input is precisely the aspect of the physical state of the universe that is localized along the part of the contemporary space-like surface σ that constitutes the front of the standpoint region associated with the actual occasion. The physical output is the reduced state ψ(σ) on this space-like surface σ. The mental pole consists of an input and an output. The mental inputs and outputs have the ontological character of thoughts, ideas, or feelings, and they play an essential dynamical role in unifying, evaluating, and selecting discrete classically conceivable activities from among the continuous range of potentialities offered by the operation of the physically describable laws. The paradigmatic example of an actual occasion is an event whose mental pole is experienced by a human being as an addition to his or her stream of conscious events, and whose output physical pole is the neural correlate of that experiential event. Such events are “high-grade” actual occasions. But the Whitehead/Quantum ontology postulates that simpler organisms will have fundamentally similar but lower-grade actual occasions, and that there can be actual occasions associated with any physical systems that possess a physical structure that will support physically effective mental interventions of the kind described above. Thus the Whitehead/Quantum ontology is essentially an ontologization of the structure of orthodox relativistic quantum field theory, stripped of its anthropocentric trappings. It identifies the essential physical and psychological aspects of contemporary orthodox relativistic quantum field theory, and lets them be essential features of a general non-anthropocentric ontology.


It is reasonable to expect that the continuous differentiable manifold that we use as spacetime in physics (and experience in our daily life) is a coarse-grained manifestation of a deeper reality, perhaps also of quantum (probabilistic) nature. This search for the underlying structure of spacetime is part of the wider effort of bringing together quantum physics and the theory of gravitation under the same conceptual umbrella. From various theoretical considerations, it is inferred that this unification should account for physics at the incredibly small scale set by the Planck length, 10⁻³⁵ m, where the effects of gravitation and quantum physics would be comparable. What happens below this scale, which concepts will survive in the new description of the world, is not known. An important point is that, in order to incorporate the main conceptual innovation of general relativity, the theory should be background-independent. This contrasts with the case of the other fields (electromagnetic, Dirac, etc.) that live in the classical background provided by gravitation. The problem with quantizing gravitation is – if we believe that the general theory of relativity holds in the regime where quantum effects of gravitation would appear, that is, beyond the Planck scale – that there is no underlying background on which the gravitational field lives. There are several suggestions and models for a “pre-geometry” (a term introduced by Wheeler) that are currently actively investigated. This is a question of ongoing investigation and debate, and several research programs in quantum gravity (loops, spinfoams, noncommutative geometry, dynamical triangulations, etc.) have proposed different lines of attack. Spacetime would then be an emergent entity, an approximation valid only at scales much larger than the Planck length. Incidentally, nothing guarantees that background-independence itself is a fundamental concept that will survive in the new theory. For example, string theory is an approach to unifying the Standard Model of particle physics with gravitation which uses quantization in a fixed (non-dynamic) background. In string theory, gravitation is just another force, with the graviton (zero mass and spin 2) obtained as one of the string modes in the perturbative expansion. A background-independent formulation of string theory would be a great achievement, but so far it is not known if it can be achieved.

Whitehead’s Ontologization of Quantum Field Theory (QFT)

The art of progress is to preserve order amid change, and to preserve change amid order.

— Alfred North Whitehead, Process and Reality, 1929.


After his attempt to complete the set-theoretic foundations of mathematics in collaboration with Russell, Whitehead’s venture into the natural sciences made him realise that the traditional timeless ontology of substances, not least its static set-theoretic underpinning, does not suit natural phenomena. Instead, he claims, it is processes and their relationships which should underpin our understanding of those phenomena. Whiteheadian quantum ontology is essentially an ontologization of the structure of orthodox relativistic quantum field theory, stripped of any anthropocentric formulations. This means that mentality is no longer reserved for human beings and higher creatures. Does Whitehead’s ontology harbour an inconsistency, given that the principle of the separateness of all realized regions will generally not be satisfied in his causally local and separable ontology? It would only if his metaphysics were traced back solely to the theory of relativity, and if one did not take into account that his ideas originate from a psycho-philosophical discussion, that his theory of prehension connects all occasions of the contemporary world, and that the concrescence process selects positive prehensions. If one concluded that, then either the causal independence of simultaneous occasions or the distinctness of their concrescence processes would have to be abandoned in order to secure the separateness of all realized regions, and one would have to answer the question: what does causality mean?

“Causality is merely the way in which each instance of freedom takes into account the previous instances, as each of our experiences refers back through memory to our own past and through perception to the world’s past.” According to quantum thinking and process philosophy there is no backward-in-time causation. “The basic properties of relativistic quantum theory emerge […] from a logically simple model of reality. In this model there is a fundamental creative process by discrete steps. Each step is a creative act or event. Each event is associated with a definite spacetime location. The fundamental process is not local in character, but it generates local spacetime patterns that have mathematical forms amenable to scientific studies.” According to Charles Hartshorne,

The mutual independence of contemporaries constitutes their freedom. Without this independence, what happens anywhere would immediately condition what happens anywhere else. However, this would be fatal to freedom only if the sole alternative to mutual independence were mutual dependence. And this is not a necessary, nor even a possible, interpretation of Bell’s result. What happens here and now may condition what happens somewhere else without measurable temporal lapse, although what happens somewhere else does not condition what happens here; what is so conditioned still retains its freedom since […] no set of conditions can be fully determinative of the resulting actuality.

Quantum Entanglement, Post-Selection and Time Travel

If the Copenhagen interpretation of quantum mechanics is to be believed, nothing exists as a definite reality until a measurement is carried out. In Wheeler’s delayed-choice version of the double-slit experiment, the decision about how to observe the photon is made only after it has purportedly passed through the slits, so that a kind of post-selection operates after the experiment is, in effect, already finished. Now, if post-selection is to work, there must, it seems, be a change in the properties ascribed to the past. Delayed-choice experiments of this kind have been realised by physicists such as Jean-François Roch at the École Normale Supérieure in Cachan, France. This is weird, and invoking quantum entanglement and throwing it up for grabs against the philosophical principle of causality is surprising. If the experimental set-up impacts the future course of outcomes, quantum particles in a most whimsical manner are susceptible to negate it: the mathematics governing these particles disables them, as it were, from differentiating between the courses they are supposed to undertake. In short, what happens in the future could determine the past….

….If particles are caught up in quantum entanglement, the measurement of one immediately affects the other, some kind of Einsteinian spooky action at a distance.
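To make the “spooky action at a distance” above slightly more concrete, here is a toy numerical sketch, my own illustration rather than anything drawn from the post, of two spin-1/2 particles in the singlet state; the function names and angle parameters are hypothetical.

```python
import numpy as np

# Two spin-1/2 particles in the singlet state (|01> - |10>)/sqrt(2).
ket01 = np.kron([1, 0], [0, 1])
ket10 = np.kron([0, 1], [1, 0])
singlet = (ket01 - ket10) / np.sqrt(2)

def spin_along(theta):
    """Spin observable along an axis at angle theta in the x-z plane."""
    sz = np.array([[1, 0], [0, -1]])
    sx = np.array([[0, 1], [1, 0]])
    return np.cos(theta) * sz + np.sin(theta) * sx

def correlation(theta_a, theta_b):
    """Quantum expectation value E(a, b) = <singlet| A (x) B |singlet>."""
    AB = np.kron(spin_along(theta_a), spin_along(theta_b))
    return singlet @ AB @ singlet

# Perfect anticorrelation for equal settings; E(a, b) = -cos(a - b) in general.
print(correlation(0.0, 0.0))         # ≈ -1.0
print(correlation(0.0, np.pi / 3))   # ≈ -0.5
```

The correlations follow the −cos(a − b) pattern that underlies Bell’s theorem, yet each side’s local statistics stay 50/50 whatever the other side measures, so the entanglement carries no controllable signal; this is worth bearing in mind when weighing the causality worries in the paragraphs above.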

A weird connection was what sprang up in my mind this morning, and the vestibule comes from French theoretical considerations. Without any kind of specificity, the knower and the known are crafted together by a mediation that rides on instability, populated by discursive and linguistic norms and forms that are derided as secondary in the analytical tradition. The autonomy of the knower as against the known is questionable, and derives significance only when its trajectory is mapped by a simultaneity put forth by the known.
Does this not imply that French theory is getting close to interpreting quantum mechanics? Just shockingly weird….
Anyway, adieu to this; I remain firmed up in this post from last week.