Knowledge Limited for Dummies….Didactics.


Bertrand Russell, together with Alfred North Whitehead, aimed in Principia Mathematica to demonstrate that “all pure mathematics follows from purely logical premises and uses only concepts defined in logical terms.” Its goal was to provide a formalized logic for all of mathematics, to develop the full structure of mathematics so that every proposition could be proved from a clear set of initial axioms.

Russell observed of the dense and demanding work, “I used to know of only six people who had read the later parts of the book. Three of those were Poles, subsequently (I believe) liquidated by Hitler. The other three were Texans, subsequently successfully assimilated.” The complex mathematical symbols of the manuscript required it to be written by hand, and its sheer size – when it was finally ready for the publisher, Russell had to hire a panel truck to send it off – made it impossible to copy. Russell recounted that “every time that I went out for a walk I used to be afraid that the house would catch fire and the manuscript get burnt up.”

Momentous though it was, the greatest achievement of Principia Mathematica was realized two decades after its completion, when it provided the fodder for the metamathematical enterprises of an Austrian, Kurt Gödel. Although Gödel did face the risk of being liquidated by Hitler (and therefore fled to the Institute for Advanced Study in Princeton), he was neither a Pole nor a Texan. In 1931, he wrote a treatise entitled On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which demonstrated that the goal Russell and Whitehead had so single-mindedly pursued was unattainable.

The flavor of Gödel’s basic argument can be captured in the contradictions contained in a schoolboy’s brainteaser. A sheet of paper has the words “The statement on the other side of this paper is true” written on one side and “The statement on the other side of this paper is false” on the reverse. The conflict isn’t resolvable. Or, even more trivially, a statement like: “This statement is unprovable.” You cannot prove the statement is true, because doing so would contradict it. If you prove the statement is false, then that means its negation is true – it is provable – which again is a contradiction.

The key point of contradiction for these two examples is that they are self-referential. This same sort of self-referentiality is the keystone of Gödel’s proof, where he uses statements that embed other statements within them. This problem did not totally escape Russell and Whitehead. By the end of 1901, Russell had completed the first round of writing Principia Mathematica and thought he was in the homestretch, but was increasingly beset by these sorts of apparently simple-minded contradictions falling in the path of his goal. He wrote that “it seemed unworthy of a grown man to spend his time on such trivialities, but . . . trivial or not, the matter was a challenge.” Attempts to address the challenge extended the development of Principia Mathematica by nearly a decade.

Yet Russell and Whitehead had, after all that effort, missed the central point. Like granite outcroppings piercing through a bed of moss, these apparently trivial contradictions were rooted in the core of mathematics and logic, and were only the most readily manifest examples of a limit to our ability to structure formal mathematical systems. Just four years before Gödel had defined the limits of our ability to conquer the intellectual world of mathematics and logic with the publication of his Undecidability Theorem, the German physicist Werner Heisenberg’s celebrated Uncertainty Principle had delineated the limits of inquiry into the physical world, thereby undoing the efforts of another celebrated intellect, the great mathematician Pierre-Simon Laplace. In the early 1800s Laplace had worked extensively to demonstrate the purely mechanical and predictable nature of planetary motion. He later extended this theory to the interaction of molecules. In the Laplacean view, molecules are just as subject to the laws of physical mechanics as the planets are. In theory, if we knew the position and velocity of each molecule, we could trace its path as it interacted with other molecules, and trace the course of the physical universe at the most fundamental level. Laplace envisioned a world of ever more precise prediction, where the laws of physical mechanics would be able to forecast nature in increasing detail and ever further into the future, a world where “the phenomena of nature can be reduced in the last analysis to actions at a distance between molecule and molecule.”

What Gödel did to the work of Russell and Whitehead, Heisenberg did to Laplace’s concept of causality. The Uncertainty Principle, though broadly applied and draped in metaphysical context, is a well-defined and elegantly simple statement of physical reality – namely, that the precision with which an electron’s position and its momentum can be jointly measured is bounded: the product of the two uncertainties can never fall below a fixed value. The reason for this, viewed from the standpoint of classical physics, is that accurately measuring the position of an electron requires illuminating the electron with light of a very short wavelength. But the shorter the wavelength, the greater the amount of energy that hits the electron, and the greater the energy hitting the electron, the greater the impact on its velocity.
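
In its canonical modern form (supplied here for concreteness; it is not quoted in the passage above) the principle is an inequality on the product of the two uncertainties:

\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}

where \Delta x is the uncertainty in the electron’s position, \Delta p the uncertainty in its momentum, and \hbar the reduced Planck constant.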

What is true in the subatomic sphere ends up being true – though with rapidly diminishing significance – for the macroscopic. Nothing can be measured with complete precision as to both location and velocity because the act of measuring alters the physical properties. The idea that if we know the present we can calculate the future was proven invalid – not because of a shortcoming in our knowledge of mechanics, but because the premise that we can perfectly know the present was proven wrong. These limits to measurement imply limits to prediction. After all, if we cannot know even the present with complete certainty, we cannot unfailingly predict the future. It was with this in mind that Heisenberg, ecstatic about his yet-to-be-published paper, exclaimed, “I think I have refuted the law of causality.”

The epistemological extrapolation of Heisenberg’s work was that the root of the problem was man – or, more precisely, man’s examination of nature, which inevitably impacts the natural phenomena under examination so that the phenomena cannot be objectively understood. Heisenberg’s principle was not something that was inherent in nature; it came from man’s examination of nature, from man becoming part of the experiment. (So in a way the Uncertainty Principle, like Gödel’s Undecidability Proposition, rested on self-referentiality.) While it did not directly refute Einstein’s assertion against the statistical nature of the predictions of quantum mechanics that “God does not play dice with the universe,” it did show that if there were a law of causality in nature, no one but God would ever be able to apply it. The implications of Heisenberg’s Uncertainty Principle were recognized immediately, and it became a simple metaphor reaching beyond quantum mechanics to the broader world.

This metaphor extends neatly into the world of financial markets. In the purely mechanistic universe of classical physics, we could apply Newtonian laws to project the future course of nature, if only we knew the location and velocity of every particle. In the world of finance, the elementary particles are the financial assets. In a purely mechanistic financial world, if we knew the position each investor has in each asset and the ability and willingness of liquidity providers to take on those assets in the event of a forced liquidation, we would be able to understand the market’s vulnerability. We would have an early-warning system for crises. We would know which firms are subject to a liquidity cycle, and which events might trigger that cycle. We would know which markets are being overrun by speculative traders, and thereby anticipate tactical correlations and shifts in the financial habitat. The randomness of nature and economic cycles might remain beyond our grasp, but the primary cause of market crisis, and the part of market crisis that is of our own making, would be firmly in hand.

The first step toward the Laplacean goal of complete knowledge is the advocacy by certain financial market regulators to increase the transparency of positions. Politically, that would be a difficult sell – as would any kind of increase in regulatory control. Practically, it wouldn’t work. Just as the atomic world turned out to be more complex than Laplace conceived, the financial world may be similarly complex and not reducible to a simple causality. The problems with position disclosure are many. Some financial instruments are complex and difficult to price, so it is impossible to measure precisely the risk exposure. Similarly, in hedge positions a slight error in the transmission of one part, or asynchronous pricing of the various legs of the strategy, will grossly misstate the total exposure. Indeed, the problems and inaccuracies in using position information to assess risk are exemplified by the fact that major investment banking firms choose to use summary statistics rather than position-by-position analysis for their firmwide risk management despite having enormous resources and computational power at their disposal.

Perhaps more importantly, position transparency also has implications for the efficient functioning of the financial markets beyond the practical problems involved in its implementation. The problems in the examination of elementary particles in the financial world are the same as in the physical world: Beyond the inherent randomness and complexity of the systems, there are simply limits to what we can know. To say that we do not know something is as much a challenge as it is a statement of the state of our knowledge. If we do not know something, that presumes that either it is not worth knowing or it is something that will be studied and eventually revealed. It is the hubris of man that all things are discoverable. But for all the progress that has been made, perhaps even more exciting than the rolling back of the boundaries of our knowledge is the identification of realms that can never be explored. A sign in Einstein’s Princeton office read, “Not everything that counts can be counted, and not everything that can be counted counts.”

The behavioral analogue to the Uncertainty Principle is obvious. There are many psychological inhibitions that lead people to behave differently when they are observed than when they are not. For traders it is a simple matter of dollars and cents that will lead them to behave differently when their trades are open to scrutiny. Beneficial though it may be for the liquidity demander and the investor, for the liquidity supplier transparency is bad. The liquidity supplier does not intend to hold the position for a long time, like the typical liquidity demander might. Like a market maker, the liquidity supplier will come back to the market to sell off the position – ideally when there is another investor who needs liquidity on the other side of the market. If other traders know the liquidity supplier’s positions, they will logically infer that there is a good likelihood these positions shortly will be put into the market. The other traders will be loath to be the first ones on the other side of these trades, or will demand more of a price concession if they do trade, knowing the overhang that remains in the market.

This means that increased transparency will reduce the amount of liquidity provided for any given change in prices. This is by no means a hypothetical argument. Frequently, even in the most liquid markets, broker-dealer market makers (liquidity providers) use brokers to enter their market bids rather than entering the market directly in order to preserve their anonymity.

The more information we extract to divine the behavior of traders and the resulting implications for the markets, the more the traders will alter their behavior. The paradox is that to understand and anticipate market crises, we must know positions, but knowing and acting on positions will itself generate a feedback into the market. This feedback often will reduce liquidity, making our observations less valuable and possibly contributing to a market crisis. Or, in rare instances, the observer/feedback loop could be manipulated to amass fortunes.

One might argue that the physical limits of knowledge asserted by Heisenberg’s Uncertainty Principle are critical for subatomic physics, but perhaps they are really just a curiosity for those dwelling in the macroscopic realm of the financial markets. We cannot measure an electron precisely, but certainly we still can “kind of know” the present, and if so, then we should be able to “pretty much” predict the future. Causality might be approximate, but if we can get it right to within a few wavelengths of light, that still ought to do the trick. The mathematical system may be demonstrably incomplete, and the world might not be pinned down on the fringes, but for all practical purposes the world can be known.

Unfortunately, while “almost” might work for horseshoes and hand grenades, 30 years after Gödel and Heisenberg yet a third limitation of our knowledge was in the wings, a limitation that would close the door on any attempt to block out the implications of microscopic uncertainty on predictability in our macroscopic world. Based on observations made by Edward Lorenz in the early 1960s and popularized by the so-called butterfly effect – the fanciful notion that the beating wings of a butterfly could change the predictions of an otherwise perfect weather forecasting system – this limitation arises because in some important cases immeasurably small errors can compound over time to limit prediction in the larger scale. Half a century after the limits of measurement and thus of physical knowledge were demonstrated by Heisenberg in the world of quantum mechanics, Lorenz piled on a result that showed how microscopic errors could propagate to have a stultifying impact in nonlinear dynamic systems. This limitation could come into the forefront only with the dawning of the computer age, because it is manifested in the subtle errors of computational accuracy.

The essence of the butterfly effect is that small perturbations can have large repercussions in massive, random forces such as weather. Edward Lorenz was testing and tweaking a model of weather dynamics on a rudimentary vacuum-tube computer. The program was based on a small system of simultaneous equations, but seemed to provide an inkling into the variability of weather patterns. At one point in his work, Lorenz decided to examine in more detail one of the solutions he had generated. To save time, rather than starting the run over from the beginning, he picked some intermediate conditions that had been printed out by the computer and used those as the new starting point. The values he typed in were the same as the values held in the original simulation at that point, so the results the simulation generated from that point forward should have been the same as in the original; after all, the computer was doing exactly the same operations. What he found was that as the simulated weather pattern progressed, the results of the new run diverged, first very slightly and then more and more markedly, from those of the first run. After a point, the new path followed a course that appeared totally unrelated to the original one, even though they had started at the same place.

Lorenz at first thought there was a computer glitch, but as he investigated further, he discovered the basis of a limit to knowledge that rivaled that of Heisenberg and Gödel. The problem was that the numbers he had used to restart the simulation had been reentered based on his printout from the earlier run, and the printout rounded the values to three decimal places while the computer carried the values to six decimal places. This rounding, clearly insignificant at first, propagated a slight error into the next-round results, and this error grew with each new iteration of the program as it moved the simulation of the weather forward in time. The error doubled every four simulated days, so that after a few months the solutions were going their own separate ways. The slightest of changes in the initial conditions had traced out a wholly different pattern of weather.
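
The effect Lorenz stumbled on is easy to reproduce. Below is a minimal sketch using the standard three-variable Lorenz system with its conventional parameters (not his original twelve-equation weather model; the starting values are invented): truncate the initial condition to three decimal places, as his printout did, and the two runs part company.

# A minimal sketch, assuming the standard three-variable Lorenz system
# (sigma=10, rho=28, beta=8/3) rather than Lorenz's original weather model.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    # One explicit Euler step of the Lorenz equations (crude but sufficient here).
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return np.array([x + dt * dx, y + dt * dy, z + dt * dz])

def run(state, n_steps=5000):
    path = [state]
    for _ in range(n_steps):
        state = lorenz_step(state)
        path.append(state)
    return np.array(path)

full = np.array([1.234567, 2.345678, 20.456789])   # a hypothetical "six decimal" start
rounded = np.round(full, 3)                         # the "printout" kept only three decimals

path_a = run(full)
path_b = run(rounded)

# The separation starts out tiny and grows roughly exponentially,
# until the two trajectories are effectively unrelated.
for step in (0, 1000, 2000, 3000, 4000, 5000):
    print(step, np.linalg.norm(path_a[step] - path_b[step]))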

Intrigued by his chance observation, Lorenz wrote an article entitled “Deterministic Nonperiodic Flow,” which stated that “nonperiodic solutions are ordinarily unstable with respect to small modifications, so that slightly differing initial states can evolve into considerably different states.” Translation: Long-range weather forecasting is worthless. For his application in the narrow scientific discipline of weather prediction, this meant that no matter how precise the starting measurements of weather conditions, there was a limit after which the residual imprecision would lead to unpredictable results, so that “long-range forecasting of specific weather conditions would be impossible.” And since this occurred in a very simple laboratory model of weather dynamics, it could only be worse in the more complex equations that would be needed to properly reflect the weather. Lorenz discovered the principle that would emerge over time into the field of chaos theory, where a deterministic system generated with simple nonlinear dynamics unravels into an unrepeated and apparently random path.

The simplicity of the dynamic system Lorenz had used suggests a far-reaching result: Because we cannot measure without some error (harking back to Heisenberg), for many dynamic systems our forecast errors will grow to the point that even an approximation will be out of our hands. We can run a purely mechanistic system that is designed with well-defined and apparently well-behaved equations, and it will move over time in ways that cannot be predicted and, indeed, that appear to be random. This gets us to Santa Fe.

The principal conceptual thread running through the Santa Fe research asks how apparently simple systems, like that discovered by Lorenz, can produce rich and complex results. Its method of analysis in some respects runs in the opposite direction of the usual path of scientific inquiry. Rather than taking the complexity of the world and distilling simplifying truths from it, the Santa Fe Institute builds a virtual world governed by simple equations that when unleashed explode into results that generate unexpected levels of complexity.

In economics and finance, the institute’s agenda was to create artificial markets with traders and investors who followed simple and reasonable rules of behavior and to see what would happen. Some of the traders built into the model were trend followers, others bought or sold based on the difference between the market price and perceived value, and yet others traded at random times in response to liquidity needs. The simulations then printed out the paths of prices for the various market instruments. Qualitatively, these paths displayed all the richness and variation we observe in actual markets, replete with occasional bubbles and crashes. The exercises did not produce positive results for predicting or explaining market behavior, but they did illustrate that it is not hard to create a market that looks on the surface an awful lot like a real one, and to do so with actors who are following very simple rules. The mantra is that simple systems can give rise to complex, even unpredictable dynamics, an interesting converse to the point that much of the complexity of our world can – with suitable assumptions – be made to appear simple, summarized with concise physical laws and equations.
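
The flavor of those exercises can be conveyed with a toy model in the same spirit (a hypothetical sketch with invented parameters, not the institute’s actual artificial-market code): trend followers, value traders and liquidity traders submit demands each period, and the price moves with net demand.

# A hypothetical toy market in the Santa Fe spirit: simple agents, price moves
# with excess demand. Not the institute's actual model; parameters are invented.
import random

random.seed(1)
price, value = 100.0, 100.0
history = [price]

for t in range(1, 1000):
    trend = history[-1] - history[-2] if t > 1 else 0.0
    demand = 0.0
    demand += 5 * (1 if trend > 0 else -1 if trend < 0 else 0)   # trend followers
    demand += 0.8 * (value - price)                               # value traders
    demand += random.gauss(0, 3)                                  # liquidity / noise traders
    price = max(1.0, price + 0.1 * demand)                        # price impact of net demand
    value += random.gauss(0, 0.5)                                 # perceived value drifts
    history.append(price)

print("final:", round(price, 2), "max:", round(max(history), 2), "min:", round(min(history), 2))

Even this crude rule set produces runs, reversals and the occasional bubble-like excursion, which is the qualitative point of the exercise.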

The systems explored by Lorenz were deterministic. They were governed definitively and exclusively by a set of equations where the value in every period could be unambiguously and precisely determined from the values of the previous period. And the systems were not very complex. By contrast, whatever set of equations might be divined to govern the financial world, they are not simple and, furthermore, they are not deterministic. There are random shocks from political and economic events and from the shifting preferences and attitudes of the actors. If we cannot hope to know the course of deterministic systems like fluid mechanics, then no level of detail will allow us to forecast the long-term course of the financial world, buffeted as it is by the vagaries of the economy and the whims of psychology.

Appropriation of (Ir)reversibility of Noise Fluctuations to (Un)Facilitate Complexity

Logical depth is a suitable measure of subjective complexity for physical as well as mathematical objects. This can be investigated by considering the effect of irreversibility, noise, and the spatial symmetries of the equations of motion and initial conditions on the asymptotic depth-generating abilities of model systems.

“Self-organization” suggests a spontaneous increase of complexity occurring in a system with simple, generic (e.g. spatially homogeneous) initial conditions. The increase of complexity attending a computation, by contrast, is less remarkable because it occurs in response to special initial conditions. An important question, which would have interested Turing, is whether self-organization is an asymptotically qualitative phenomenon like phase transitions. In other words, are there physically reasonable models in which complexity, appropriately defined, not only increases, but increases without bound in the limit of infinite space and time? A positive answer to this question would not explain the natural history of our particular finite world, but would suggest that its quantitative complexity can legitimately be viewed as an approximation to a well-defined qualitative property of infinite systems. On the other hand, a negative answer would suggest that our world should be compared to chemical reaction-diffusion systems (e.g. Belousov-Zhabotinsky), which self-organize on a macroscopic, but still finite scale, or to hydrodynamic systems which self-organize on a scale determined by their boundary conditions.

The suitability of logical depth as a measure of physical complexity depends on the assumed ability (“physical Church’s thesis”) of Turing machines to simulate physical processes, and to do so with reasonable efficiency. Digital machines cannot of course integrate a continuous system’s equations of motion exactly, and even the notion of computability is not very robust in continuous systems, but for realistic physical systems, subject throughout their time development to finite perturbations (e.g. electromagnetic and gravitational) from an uncontrolled environment, it is plausible that a finite-precision digital calculation can approximate the motion to within the errors induced by these perturbations. Empirically, many systems have been found amenable to “master equation” treatments in which the dynamics is approximated as a sequence of stochastic transitions among coarse-grained microstates.

We concentrate arbitrarily on cellular automata, in the broad sense of discrete lattice models with finitely many states per site, which evolve according to a spatially homogeneous local transition rule that may be deterministic or stochastic, reversible or irreversible, and synchronous (discrete time) or asynchronous (continuous time, master equation). Such models cover the range from evidently computer-like (e.g. deterministic cellular automata) to evidently material-like (e.g. Ising models) with many gradations in between.

More of the favorable properties need to be invoked to obtain “self-organization,” i.e. nontrivial computation from a spatially homogeneous initial condition. A rather artificial system (a cellular automaton which is stochastic but noiseless, in the sense that it has the power to make purely deterministic as well as random decisions) undergoes this sort of self-organization. It does so by allowing the nucleation and growth of domains, within each of which a depth-producing computation begins. When two domains collide, one conquers the other, and uses the conquered territory to continue its own depth-producing computation (a computation constrained to finite space, of course, cannot continue for more than exponential time without repeating itself). To achieve the same sort of self-organization in a truly noisy system appears more difficult, partly because of the conflict between the need to encourage fluctuations that break the system’s translational symmetry, while suppressing fluctuations that introduce errors in the computation.

Irreversibility seems to facilitate complex behavior by giving noisy systems the generic ability to correct errors. Only a limited sort of error-correction is possible in microscopically reversible systems such as the canonical kinetic Ising model. Minority fluctuations in a low-temperature ferromagnetic Ising phase in zero field may be viewed as errors, and they are corrected spontaneously because of their potential energy cost. This error correcting ability would be lost in nonzero field, which breaks the symmetry between the two ferromagnetic phases, and even in zero field it gives the Ising system the ability to remember only one bit of information. This limitation of reversible systems is recognized in the Gibbs phase rule, which implies that under generic conditions of the external fields, a thermodynamic system will have a unique stable phase, all others being metastable. Even in reversible systems, it is not clear why the Gibbs phase rule enforces as much simplicity as it does, since one can design discrete Ising-type systems whose stable phase (ground state) at zero temperature simulates an aperiodic tiling of the plane, and can even get the aperiodic ground state to incorporate (at low density) the space-time history of a Turing machine computation. Even more remarkably, one can get the structure of the ground state to diagonalize away from all recursive sequences.
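
The error correction described above for the low-temperature Ising phase can be seen directly in a minimal simulation sketch (standard single-spin-flip Metropolis dynamics, used here as a generic stand-in for a kinetic Ising model; the lattice size, temperature and number of seeded errors are illustrative choices): seed an all-up lattice with a few minority spins and watch them disappear because of their energy cost.

# A minimal sketch: 2D Ising model with Metropolis single-spin-flip dynamics
# at low temperature and zero field. Minority "error" spins are corrected over time.
import math, random

random.seed(0)
N, T, sweeps = 32, 1.0, 200          # lattice size, temperature (J = 1), Monte Carlo sweeps
spins = [[1] * N for _ in range(N)]

# Seed a few minority-spin "errors" in the all-up phase.
for _ in range(40):
    spins[random.randrange(N)][random.randrange(N)] = -1

def local_field(i, j):
    # Sum of the four nearest neighbours (periodic boundaries), zero external field.
    return (spins[(i + 1) % N][j] + spins[(i - 1) % N][j]
            + spins[i][(j + 1) % N] + spins[i][(j - 1) % N])

for _ in range(sweeps * N * N):
    i, j = random.randrange(N), random.randrange(N)
    dE = 2 * spins[i][j] * local_field(i, j)          # energy cost of flipping spin (i, j)
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i][j] *= -1

minority = sum(row.count(-1) for row in spins)
print("remaining minority spins:", minority)          # typically near zero at T = 1.0

In a nonzero field favoring the minority phase, or at high temperature, this spontaneous correction is lost, which is the one-bit limitation the paragraph above points to.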

Gnostic Semiotics. Thought of the Day 63.0


The question here is: what is being composed? For the deferment and difference that are always already of the Sign suggest that perhaps the composition is one that lies not within but without, a creation that lies on the outside but which then determines – perhaps through the reader more than anything else, for after all the meaning of a particular sign, be it a word or anything else, requires a form of impregnation by the receiver – a particular meaning.

Is there any choice but to assume a meaning in a sign? Only through the simulation, or ‘belief’ if you prefer (there is really no difference in the two concepts), of an inherent meaning in the sign can any transference continue. For even if we acknowledge that all communication is merely the circulation of empty signifiers, the impregnation of the signified (no matter how unconnected it may be to the other person’s signified) still ensures that the sign carries with it a meaning. Only through this simulation of a meaning is circulation possible – even if one posits that the sign circulates itself, this would not be possible if it were completely empty.

Since it is from without (even if meaning is from the reader, (s)he is external to the signification), this suggests that the meaning is a result, a consequence of forces – its signification is a result of the significance of various forces (convention, context, etc) which then means that inherently, the sign remains empty; a pure signifier leading to yet another signifier.

The interesting element though lies in the fact that the empty signifier then sucks the Other (in the form of the signified, which takes the form of the Absolute Other here) into it, in order to define an existence, but essentially remains an empty signifier, awaiting impregnation with meaning from the reader. A void: always full and empty or perhaps (n)either full (n)or empty. For true potentiality must always already contain the possibility of non-potentiality. Otherwise there would be absolutely no difference between potentiality and actualization – they would merely be different ends of the same spectrum.

Astrobiological Traces Within the Secret Doctrine.

पूर्णस्य पूर्णमादाय पूर्णमेवाशिष्यते

pūrṇasya pūrṇamādāya pūrṇamevāśiṣyate

‘From the Fullness of Brahman has come the fullness of the universe, leaving alone Fullness as the remainder.’

पूर्णमदः पूर्णमादाय पूर्णात् पूर्णमुदच्यते
पूर्णस्य पूर्णमादाय पूर्णमेवाशिष्यते
ॐ शान्तिः शान्तिः शान्तिः ।

pūrṇamadaḥ pūrṇamādāya pūrṇāt pūrṇamudacyate
pūrṇasya pūrṇamādāya pūrṇamevāśiṣyate
oṃ śāntiḥ śāntiḥ śāntiḥ |

‘The invisible (Brahman) is the Full; the visible (the world) too is the Full. From the Full (Brahman), the Full (the visible) universe has come. The Full (Brahman) remains the same, even after the Full (the visible universe) has come out of the Full (Brahman).’

नित्योऽनित्यानां चेतनश्चेतनानाम्
एको बहूनां यो विदधाति कामान् ।
तमात्मस्थं योऽनुपश्यन्ति धीराः
तेषां शान्तिः शाश्वतं नेतरेषाम् ॥

nityo’nityānāṃ cetanaścetanānām
eko bahūnāṃ yo vidadhāti kāmān |
tamātmasthaṃ yo’nupaśyanti dhīrāḥ
teṣāṃ śāntiḥ śāśvataṃ netareṣām ||

‘He is the eternal in the midst of non-eternals, the principle of intelligence in all that are intelligent. He is One, yet fulfils the desires of many. Those wise men who perceive Him as existing within their own self, to them belongs eternal peace, and to none else.’


The Secret Doctrine of the Ages teaches that the universe came into existence through creative and evolutionary processes; and it demonstrates why both are necessary to explain our origins. It harmonizes the truths of science and religion, while showing that major assumptions of both Darwinism and Fundamentalist Creationism do not bear up to careful examination. By drawing our attention to the questions of why we live and die, of what is mind and substance, the Secret Doctrine helps us realize that wisdom begins with understanding how very little we really know. Yet it also affirms that the most perplexing problems can be solved; that of the progeny of one cosmos.

Evolution means unfolding and progressive development, derived from the Latin evolutio: “unrolling,” specifically of a scroll or volume — suggestively connoting the serial expression of previously hidden ideas. A climb from the bottom of the Grand Canyon reveals an unmistakable evolutionary story: the appearance of progressively more complex species over a lengthy period of time. But how actually did this happen? The compelling evidence of nature contradicts the week-long special creation postulated by Bible literalists. Darwinian theory is also proving unsatisfactory, as a growing number of scientists relegate its major claims to the category of “mythology.” Though not assenting to any metaphysical implications, Harvard paleontologist Stephen Jay Gould declared in 1980 that the modern synthetic theory of evolution, “as a general proposition, is effectively dead, despite its persistence as textbook orthodoxy.” Pierre-P. Grassé, former president of the French Academy of Sciences and editor of the 35-volume Traité de Zoologie, was more forceful:

Their success among certain biologists, philosophers, and sociologists notwithstanding, the explanatory doctrines of biological evolution do not stand up to an objective, in-depth criticism. They prove to be either in conflict with reality or else incapable of solving the major problems involved. Through the use and abuse of hidden postulates, of bold, often ill-founded extrapolations, a pseudoscience has been created. It is taking root in the very heart of biology and is leading astray many biochemists and biologists, who sincerely believe that the accuracy of fundamental concepts has been demonstrated, which is not the case.

While most critics readily acknowledge that natural selection and gene changes partially explain variation in species or microevolution, they point out that Darwinism has failed spectacularly to describe the origin of life and the mechanism of macroevolution: the manner in which higher types emerge.

Textbook theory asserts that life on earth began with the formation of DNA and RNA, the first self-replicating molecules, in a prebiotic soup rich in organic compounds, amino acids, and nucleotides. Robert Shapiro, professor of chemistry at New York University, wrote:

many scientists now believe that neither the atmosphere described nor the soup had ever existed. Laboratory efforts had also been made to prepare the magic molecule from a simulation of the soup, and thus far had failed.

Even if the purported soup existed elsewhere in the universe, and DNA were brought to earth by meteorite, comet, or some other means, there remains the enigma of how it was originally synthesized. Astrobiology.

In the first place, several mathematicians have shown the astronomical improbability of chance mutations “evolving” any organized system — whether complex DNA molecules or higher organisms. The 10-20 billion year time frame presently assigned to our universe is far too short a period, given known mutation rates. Moreover, nothing in empirical experience suggests that unguided trial and error — i.e., random mutation — will produce anything but the most trivial ends. Research biologist Michael Denton writes that to “get a cell by chance would require at least one hundred functional proteins to appear simultaneously in one place” — the probability of which has been calculated at one chance in 1 followed by 2,000 zeros — a staggeringly remote possibility, to say nothing of the lipids, polysaccharides, and nucleic acids also needed to form a viable, reproducing cell.

The same reasoning applies to the extraordinary number of coordinated, immediately useful mutations required to produce “organs of extreme perfection,” such as the mammalian brain, the human eye, and the sophisticated survival mechanisms (including inter-species symbiotic systems) of the plant and animal kingdoms. There is simply no justification, according to Denton, for assuming that blind physical forces will self-organize “in the finite time available the sorts of complex systems which are so ubiquitous in nature.” In observing the sheer elegance and ingenuity of nature’s purposeful designs, scientists like Denton can hardly resist the logic of analogy. The conclusion may have religious implications, he says, but the inference is clear: nature’s systems are the result of intelligent activity.

Another enigmatic problem is the absence in fossil strata of finely-graded transitional forms between major groups of species, i.e., between reptiles and birds, land mammals and whales, and so forth. Darwin himself recognized this as one of the “gravest” impediments to his theory and tried to defend it by asserting “imperfection of the geologic record.” Yet over a century of intensive search has failed to disclose the hypothetical missing links. Thus far only conjecture, or imagination, has been able to fill in how gills became lungs, scales became feathers, and legs became wings — for the record of nature on this point is still a secret.

Darwin also worried over one of nature’s most formidable barriers to macroevolutionary change: hybrid limits. Artificial breeding shows that extreme variations are usually sterile or weak. Left to themselves these hybrid varieties — if they are able to reproduce at all — revert to ancestral norms or eventually die out. In this sense, natural selection, environmental pressures, and genetic coding tend as much to weed out unusual novelties as to ensure the survival of the fittest of each type, a fact which is confirmed by the fossil record. Unquestionably, species adapt and change within natural limits; refinement occurs, too, as in flowering plants. But no one has yet artificially bred, genetically engineered, or observed in nature a series of chromosomal changes, micro or macro, resulting in a species of a higher genus. There are no “hopeful monsters,” except, perhaps, in a poetic sense. Trees remain trees, birds birds, and the problem of how higher types originate has not been solved by Darwin or his successors.

We do not give up our dogmas easily, scientific or religious. Obviously, ideas should be examined for their intrinsic value, not blindly accepted because somebody tells us “Science has proven” or the “Bible says so,” or again, because the Secret Doctrine teaches it. But with science’s recognized ignorance of first causes and macroevolutionary mechanisms, as well as the failure of scriptural literalism to provide satisfactory explanations, there remain the questions about our origins, purpose, and destiny. The answers to these questions are, in a sense, nature’s secret doctrines. Her evolutionary pattern suggests, however, that they are not hopelessly beyond knowing. Just as from the conception of a human embryo to a fully-developed adult, so from the first burst outward of the primordial cosmic atom, the progressive unfolding of intelligence is a natural and observable process. The whole universe seems bent on discovering itself and its reason for being.

The concept of the universe evolving for purposes of self-discovery and creative expression is found not only in modern European philosophy, such as Hegel’s, but also in ancient myths the world over, some of which sound surprisingly up-to-date. The Hindu Puranas, for example, speak of our universe as Brahma, and of alternating periods of cosmic activity and rest as the Days and Nights of Brahma, each of which spans over four billion years — an oscillating universe reminiscent of modern cosmological theory. In each “creation” Brahma attempts to fashion an ever-more perfected mankind, in the process of which he serially evolves from his own consciousness and root substance all of nature’s kingdoms: atoms, minerals, plants, animals, and so forth. Conversely, the stories allude also to the striving of mankind and, for that matter, of all sentient beings, to become Brahma-like in quality — i.e., to express more and more of the hidden mind pattern of the cosmos.

We often look down on ancient traditions as moldy superstitions. While this judgment may well apply to the rind of literalism and later accretions, concealed within and giving life to every religion are core ideas which bear the hallmark of insight. Biblical Genesis also, when read allegorically as is done in gnostic and kabbalistic schools, yields a picture of evolutionary growth and perfectibility, both testaments clearly implying that we are sibling gods of wondrous potential. But are the secret doctrines spoken of in these older traditions expressions of truth or simply romantic wish-fulfilling fantasy? Can they teach us anything relevant about our heritage and our future? It is to such questions that the modern book entitled The Secret Doctrine addresses itself. Impulsed by divinity and guided by karma (cause and effect), each of us has been periodically manifesting since eternity through all the kingdoms, from sub-mineral through human, earning our way to the next realm and beyond. Although seeded with godlike potential, we are not irrevocably fated to an unsought destiny. Karma is a philosophy of merit, and within our power is the capacity to choose — to evolve and create — our own future. We give life and active existence to our thoughts and, to a very large extent, we become what we think we are, or would like to be. This affects ourselves for good or evil, and it affects all others — profoundly so.

Forward, Futures Contracts and Options: Top Down or Bottom Up Modeling?


The modeling of financial markets can be approached, from a theoretical viewpoint, in two separate ways: a bottom up approach and (or) a top down approach. For instance, modeling financial markets by starting from diffusion equations and adding a noise term to the evolution of a function of a stochastic variable is a top down approach. This type of description is, effectively, a statistical one.
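
A canonical example of such a top down, diffusion-plus-noise description (a standard textbook choice, offered only as an illustration rather than the specific model intended above) is geometric Brownian motion for the security price S(t):

dS(t) = \mu \, S(t)\, dt + \sigma \, S(t)\, dW(t)

where \mu is the drift, \sigma the volatility, and dW(t) the increment of a Wiener process supplying the noise term.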

A bottom up approach, instead, models artificial markets using complex data structures (agent based simulations) with general updating rules to describe the collective state of the market. The number of procedures implemented in the simulations can be quite large, although the computational cost of the simulation becomes prohibitive as the size of each agent increases. Readers familiar with Sugarscape models and the computational strategies of Growing Artificial Societies probably have an idea of the enormous potential of the field. All Sugarscape models include the agents (inhabitants), the environment (a two-dimensional grid) and the rules governing the interaction of the agents with each other and with the environment. The original model presented by J. Epstein & R. Axtell (considered the first large scale agent model) is based on a 51 x 51 cell grid, where every cell can contain different amounts of sugar (or spice). In every step agents look around, find the closest cell filled with sugar, move and metabolize. They can leave pollution, die, reproduce, inherit resources, transfer information, trade or borrow sugar, generate immunity or transmit diseases – depending on the specific scenario and variables defined at the set-up of the model. Sugar in the simulation can be seen as a metaphor for resources in an artificial world through which the examiner can study the effects of social dynamics such as evolution, marital status and inheritance on populations. Exact simulation of the original rules provided by J. Epstein & R. Axtell in their book can be problematic and it is not always possible to recreate the same results as those presented in Growing Artificial Societies. However, one would expect that the bottom up description should become comparable to the top down description for a very large number of simulated agents.
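
A stripped-down sketch of the bottom up idea (far simpler than Epstein and Axtell’s actual rule set, with invented parameters, and offered only to fix ideas): agents on a sugar grid look around, move to the richest nearby cell, harvest, metabolize, and die when their reserves run out.

# A stripped-down, hypothetical Sugarscape-style loop: not Epstein & Axtell's
# actual rules, just the look-move-harvest-metabolize skeleton.
import random

random.seed(2)
N = 51                                            # grid size, as in the original 51 x 51 model
sugar = [[random.randint(0, 4) for _ in range(N)] for _ in range(N)]
agents = [{"x": random.randrange(N), "y": random.randrange(N),
           "wealth": 5, "metabolism": random.randint(1, 3)} for _ in range(100)]

for step in range(200):
    for a in agents:
        # Look at the four neighbouring cells plus the current one, move to the richest.
        options = [(a["x"], a["y"]), ((a["x"] + 1) % N, a["y"]), ((a["x"] - 1) % N, a["y"]),
                   (a["x"], (a["y"] + 1) % N), (a["x"], (a["y"] - 1) % N)]
        a["x"], a["y"] = max(options, key=lambda c: sugar[c[0]][c[1]])
        # Harvest the cell, then pay the metabolic cost.
        a["wealth"] += sugar[a["x"]][a["y"]]
        sugar[a["x"]][a["y"]] = 0
        a["wealth"] -= a["metabolism"]
    agents = [a for a in agents if a["wealth"] > 0]          # starved agents die
    for i in range(N):                                       # sugar slowly grows back
        for j in range(N):
            sugar[i][j] = min(4, sugar[i][j] + 1)

print("surviving agents:", len(agents))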

The bottom up approach should also provide a better description of extreme events, such as crashes, collectively conditioned behaviour and market incompleteness, this approach being of a purely algorithmic nature. A top down approach is, therefore, a model of reduced complexity and follows a statistical description of the dynamics of complex systems.

Forward, Futures Contracts and Options: Let the price at time t of a security be S(t). A specific good can be traded at time t at the price S(t) between a buyer and a seller. The seller (short position) agrees to sell the goods to the buyer (long position) at some time T in the future at a price F(t,T) (the contract price). Notice that contract prices have a two-time dependence (the actual time t and the maturity time T). Their difference τ = T − t is usually called the time to maturity. Equivalently, the actual price of the contract is determined by the prevailing actual prices and interest rates and by the time to maturity. Entering into a forward contract requires no money, and the value of the contract for long position holders and short position holders at maturity T will be

(−1)^p (S(T) − F(t,T))     (1)

where p = 0 for long positions and p = 1 for short positions. Futures contracts are similar, except that after the contract is entered, any changes in the market value of the contract are settled by the parties. Hence, cashflows occur all the way to expiry, unlike in the case of the forward, where only one cashflow occurs. They are also highly regulated and involve a third party (a clearing house). Forward and futures contracts and options go under the name of derivative products, since their contract price F(t,T) depends on the value of the underlying security S(T). Options are derivatives that can be written on any security and have a more complicated payoff function than futures or forwards. For example, a call option gives the buyer (long position) the right (but not the obligation) to buy the security at some predetermined strike price at maturity, while a put option gives the right to sell it. The payoff function is the precise form of the price. Path dependent options are derivative products whose value depends on the actual path followed by the underlying security up to maturity. In the case of path-dependent options, since the payoff may not be directly linked to an explicit right, they must be settled by cash. This is sometimes true for futures and plain options as well, since this is more efficient.
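
For concreteness, the payoffs just described can be written down directly (a minimal sketch following equation (1) for the forward and the standard call payoff max(S(T) − K, 0); the numbers are made up for illustration):

# Payoffs at maturity T, following equation (1) for the forward-style contract
# and the standard call-option payoff. Numbers are illustrative only.
def forward_payoff(S_T, F, p):
    # p = 0 for the long position, p = 1 for the short position.
    return (-1) ** p * (S_T - F)

def call_payoff(S_T, K):
    # Long call: the right, but not the obligation, to buy at strike K.
    return max(S_T - K, 0.0)

S_T, F, K = 105.0, 100.0, 100.0
print("long forward: ", forward_payoff(S_T, F, 0))    #  5.0
print("short forward:", forward_payoff(S_T, F, 1))    # -5.0
print("long call:    ", call_payoff(S_T, K))          #  5.0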

The Differentiated Hyperreality of Baudrillard


A sense of meaning for Baudrillard connotes a totality that is called knowledge, and it is here that he differs significantly from someone like Foucault. For the latter, knowledge is a product of relations employing power, whereas for the former, any attempt to reach a finality or totality, as he calls it, is always a flirtation with delusion. A delusion, since the human subject would always aim at understanding the human or non-human object, and, in the process, the object would always be elusive since, being based on signifiers, it would be vulnerable to a shift in significations. The two key ideas of Baudrillard are simulation and hyperreality. Simulation refers to the representation of things such that they become the things represented; in other words, representations gain priority over the “real” things. There are certain orders that define simulations, viz. signs that represent objective reality, signs that veil reality, signs that mask the absence of reality, and signs that turn into simulacra, bearing no relation to reality and thus ending up simulating a simulation. In Hegarty‘s reading of Baudrillard, there happen to be three types of simulacra, each with a distinct historical epoch. The first is the pre-modern period, where the image marks the place for an item and hence the uniqueness of objects and situations marks them as irreproducibly real. The second is the modern period, characterized by the industrial revolution, signifying the breaking down of distinctions between images and reality because of the mass reproduction of copies or proliferation of commodities, thus risking the essential existence of the original. The third is the post-modern period, where the simulacrum precedes the original and the distinction between reality and representation vanishes, implying only the existence of simulacra and relegating reality to a vacuous concept. Hyperreality defines a condition wherein “reality” as known gets substituted by simulacra. This notion of Baudrillard is influenced by the Canadian communication theorist and rhetorician Marshall McLuhan. Hyperreality, with its insistence on signs and simulations, fits perfectly in the post-modern era and therefore highlights the inability or shortcomings of consciousness to demarcate between reality and the phantasmatic space. In a quite remarkable analysis of Disneyland, Baudrillard (166-184) clarifies the notion of hyperreality, when he says,

The Disneyland imaginary is neither true nor false: it is a deterrence machine set in order to rejuvenate in reverse the fiction of the real. Whence the debility, the infantile degeneration of this imaginary. It’s meant to be an infantile world, in order to make us believe that adults are everywhere, in the “real” world and to conceal the fact that real childishness is everywhere, particularly among those adults who go there to act the child in order to foster illusion of their real childishness.

Although his initial ideas were affiliated with those of Marxism, he differed from Marx in epitomizing consumption, rather than the latter’s production, as the driving force of capitalism. Another issue that was worked out remarkably in Baudrillard was historicity. Agreeing largely with Fukuyama’s notion of the end of history after the collapse of the communist bloc, Baudrillard differed only in holding that it was historical progress that had ended, not necessarily history itself. He forcefully makes the point that the end of history is also the end of the dustbins of history. His post-modern stand differed significantly from Lyotard’s in one major respect, despite finding common ground elsewhere. Despite showing a growing aversion to the theory of meta-narratives, Baudrillard, unlike Lyotard, reached a point of pragmatic reality within the confines of an excuse-laden notion of universality that happened to be in vogue.

Baudrillard has been at the receiving end of some very extreme, acerbic criticisms. His writings are not just obscure, but also fail in many respects, such as leaving undefined certain concepts he employs, offering totalizing insights with no substantial claim beyond conjecture, and often hinting strongly at apodicticity without paying due attention to rival positions. This extremity reaches a culmination point when he is cited as a purveyor of reality-denying irrationalism. But not everything is to be looked at critically in his case, and he does enjoy an established status as a transdisciplinary theorist who, with his provocations, has put traditional issues regarding modernity and philosophy in general at stake, providing insights towards a better comprehension of cultural studies, sociology and philosophy. Most importantly, Baudrillard provides for autonomous and differentiated spaces in the cultural, socio-economic and political domains through an implosive theory that cuts across the boundaries of various disciplines, paving the way for a new era in philosophical and social theory at large.

Reality as Contingently Generating the Actual


If reality could be copied, the digital mapping or simulation of a neural network would be possible. This scheme has some nagging problems, chief among them the simulated nature of the neural network: reality vis-à-vis the natural neural network is susceptible to mismatch, leading to what could be termed non-reductionist essentialism. The other option that could aid a better apprehension of hyperrealism and simulation in terms of neural networks comes from Roy Bhaskar’s idea of critical realism. This differs significantly from Baudrillard’s position in that Baudrillard takes reality as potentially open to copying, whereas Bhaskar treats reality as a generative mechanism. On Bhaskar’s account, reality is not something that can be copied, but something that contingently generates the actual.