Knowledge Limited for Dummies….Didactics.


Bertrand Russell, with Alfred North Whitehead, aimed in the Principia Mathematica to demonstrate that “all pure mathematics follows from purely logical premises and uses only concepts defined in logical terms.” Its goal was to provide a formalized logic for all mathematics, to develop the full structure of mathematics in which every proposition could be proved from a clear set of initial axioms.

Russell observed of the dense and demanding work, “I used to know of only six people who had read the later parts of the book. Three of those were Poles, subsequently (I believe) liquidated by Hitler. The other three were Texans, subsequently successfully assimilated.” The complex mathematical symbols of the manuscript required it to be written by hand, and its sheer size – when it was finally ready for the publisher, Russell had to hire a panel truck to send it off – made it impossible to copy. Russell recounted that “every time that I went out for a walk I used to be afraid that the house would catch fire and the manuscript get burnt up.”

Momentous though it was, the greatest achievement of Principia Mathematica was realized two decades after its completion, when it provided the fodder for the metamathematical enterprises of an Austrian, Kurt Gödel. Although Gödel did face the risk of being liquidated by Hitler (and therefore fled to the Institute for Advanced Study in Princeton), he was neither a Pole nor a Texan. In 1931, he wrote a treatise entitled On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which demonstrated that the goal Russell and Whitehead had so single-mindedly pursued was unattainable.

The flavor of Gödel’s basic argument can be captured in the contradictions contained in a schoolboy’s brainteaser. A sheet of paper has the words “The statement on the other side of this paper is true” written on one side and “The statement on the other side of this paper is false” on the reverse. The conflict isn’t resolvable. Or, even more trivially, take a statement like: “This statement is unprovable.” You cannot prove the statement is true, because doing so would contradict it. If you prove the statement is false, then that means its negation is true – it is provable – which again is a contradiction.

The key point of contradiction for these two examples is that they are self-referential. This same sort of self-referentiality is the keystone of Gödel’s proof, where he uses statements that embed other statements within them. This problem did not totally escape Russell and Whitehead. By the end of 1901, Russell had completed the first round of writing Principia Mathematica and thought he was in the homestretch, but was increasingly beset by these sorts of apparently simple-minded contradictions falling in the path of his goal. He wrote that “it seemed unworthy of a grown man to spend his time on such trivialities, but . . . trivial or not, the matter was a challenge.” Attempts to address the challenge extended the development of Principia Mathematica by nearly a decade.

Yet Russell and Whitehead had, after all that effort, missed the central point. Like granite outcroppings piercing through a bed of moss, these apparently trivial contradictions were rooted in the core of mathematics and logic, and were only the most readily manifest examples of a limit to our ability to structure formal mathematical systems. Just four years before Gödel had defined the limits of our ability to conquer the intellectual world of mathematics and logic with the publication of his Undecidability Theorem, the German physicist Werner Heisenberg’s celebrated Uncertainty Principle had delineated the limits of inquiry into the physical world, thereby undoing the efforts of another celebrated intellect, the great mathematician Pierre-Simon Laplace. In the early 1800s Laplace had worked extensively to demonstrate the purely mechanical and predictable nature of planetary motion. He later extended this theory to the interaction of molecules. In the Laplacean view, molecules are just as subject to the laws of physical mechanics as the planets are. In theory, if we knew the position and velocity of each molecule, we could trace its path as it interacted with other molecules, and trace the course of the physical universe at the most fundamental level. Laplace envisioned a world of ever more precise prediction, where the laws of physical mechanics would be able to forecast nature in increasing detail and ever further into the future, a world where “the phenomena of nature can be reduced in the last analysis to actions at a distance between molecule and molecule.”

What Gödel did to the work of Russell and Whitehead, Heisenberg did to Laplace’s concept of causality. The Uncertainty Principle, though broadly applied and draped in metaphysical context, is a well-defined and elegantly simple statement of physical reality – namely, the product of the uncertainties in a measurement of an electron’s position and of its momentum cannot fall below a fixed value. The reason for this, viewed from the standpoint of classical physics, is that accurately measuring the position of an electron requires illuminating the electron with light of a very short wavelength. But the shorter the wavelength, the greater the amount of energy that hits the electron, and the greater the energy hitting the electron, the greater the impact on its velocity.
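In symbols, the standard textbook form of the relation (added here for reference rather than taken from the passage) bounds the product of the two uncertainties from below:

```latex
\Delta x \,\Delta p \ \ge\ \frac{\hbar}{2}
```

Shrinking Δx by probing with shorter-wavelength light necessarily inflates Δp, which is exactly the trade-off described above.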

What is true in the subatomic sphere ends up being true – though with rapidly diminishing significance – for the macroscopic. Nothing can be measured with complete precision as to both location and velocity because the act of measuring alters the physical properties. The idea that if we know the present we can calculate the future was proven invalid – not because of a shortcoming in our knowledge of mechanics, but because the premise that we can perfectly know the present was proven wrong. These limits to measurement imply limits to prediction. After all, if we cannot know even the present with complete certainty, we cannot unfailingly predict the future. It was with this in mind that Heisenberg, ecstatic about his yet-to-be-published paper, exclaimed, “I think I have refuted the law of causality.”

The epistemological extrapolation of Heisenberg’s work was that the root of the problem was man – or, more precisely, man’s examination of nature, which inevitably impacts the natural phenomena under examination so that the phenomena cannot be objectively understood. Heisenberg’s principle was not something that was inherent in nature; it came from man’s examination of nature, from man becoming part of the experiment. (So in a way the Uncertainty Principle, like Gödel’s Undecidability Proposition, rested on self-referentiality.) While it did not directly refute Einstein’s assertion against the statistical nature of the predictions of quantum mechanics that “God does not play dice with the universe,” it did show that if there were a law of causality in nature, no one but God would ever be able to apply it. The implications of Heisenberg’s Uncertainty Principle were recognized immediately, and it became a simple metaphor reaching beyond quantum mechanics to the broader world.

This metaphor extends neatly into the world of financial markets. In the purely mechanistic universe of classical physics, we could apply Newtonian laws to project the future course of nature, if only we knew the location and velocity of every particle. In the world of finance, the elementary particles are the financial assets. In a purely mechanistic financial world, if we knew the position each investor has in each asset and the ability and willingness of liquidity providers to take on those assets in the event of a forced liquidation, we would be able to understand the market’s vulnerability. We would have an early-warning system for crises. We would know which firms are subject to a liquidity cycle, and which events might trigger that cycle. We would know which markets are being overrun by speculative traders, and thereby anticipate tactical correlations and shifts in the financial habitat. The randomness of nature and economic cycles might remain beyond our grasp, but the primary cause of market crisis, and the part of market crisis that is of our own making, would be firmly in hand.

The first step toward the Laplacean goal of complete knowledge would be the increase in the transparency of positions that certain financial market regulators advocate. Politically, that would be a difficult sell – as would any kind of increase in regulatory control. Practically, it wouldn’t work. Just as the atomic world turned out to be more complex than Laplace conceived, the financial world may be similarly complex and not reducible to a simple causality. The problems with position disclosure are many. Some financial instruments are complex and difficult to price, so it is impossible to measure their risk exposure precisely. Similarly, in hedged positions a slight error in the transmission of one part, or asynchronous pricing of the various legs of the strategy, will grossly misstate the total exposure. Indeed, the problems and inaccuracies in using position information to assess risk are exemplified by the fact that major investment banking firms choose to use summary statistics rather than position-by-position analysis for their firmwide risk management, despite having enormous resources and computational power at their disposal.
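To make the asynchronous-pricing point concrete, here is a toy sketch with purely hypothetical numbers (none of them drawn from the text): a long/short bond hedge whose two legs are marked twenty minutes apart appears several times riskier than the same book marked simultaneously.

```python
# Toy sketch with purely hypothetical numbers: a hedged book whose two legs
# are marked at slightly different times looks far riskier than it is.

long_leg_qty = 1_000_000      # long $1mm notional of bond A
short_leg_qty = -1_000_000    # short $1mm notional of the near-identical bond B

# Synchronous marks: both legs priced at the same instant.
price_a_sync, price_b_sync = 100.00, 100.05
exposure_sync = long_leg_qty * price_a_sync / 100 + short_leg_qty * price_b_sync / 100

# Asynchronous marks: leg B is priced twenty minutes later, after a 40 bp move
# that affected both bonds but was captured in only one of the two marks.
price_a_async, price_b_async = 100.00, 100.45
exposure_async = long_leg_qty * price_a_async / 100 + short_leg_qty * price_b_async / 100

print(f"net exposure, synchronous marks:  {exposure_sync:>10,.0f}")   # about   -500
print(f"net exposure, asynchronous marks: {exposure_async:>10,.0f}")  # about -4,500
```

Nothing about the book’s true economics has changed between the two calculations; only the timestamps of the marks differ, which is why naive position-level snapshots can grossly misstate exposure.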

Perhaps more importantly, position transparency also has implications for the efficient functioning of the financial markets beyond the practical problems involved in its implementation. The problems in the examination of elementary particles in the financial world are the same as in the physical world: Beyond the inherent randomness and complexity of the systems, there are simply limits to what we can know. To say that we do not know something is as much a challenge as it is a statement of the state of our knowledge. If we do not know something, that presumes that either it is not worth knowing or it is something that will be studied and eventually revealed. It is the hubris of man that all things are discoverable. But for all the progress that has been made, perhaps even more exciting than the rolling back of the boundaries of our knowledge is the identification of realms that can never be explored. A sign in Einstein’s Princeton office read, “Not everything that counts can be counted, and not everything that can be counted counts.”

The behavioral analogue to the Uncertainty Principle is obvious. There are many psychological inhibitions that lead people to behave differently when they are observed than when they are not. For traders it is a simple matter of dollars and cents that will lead them to behave differently when their trades are open to scrutiny. Beneficial though it may be for the liquidity demander and the investor, for the liquidity supplier transparency is bad. The liquidity supplier does not intend to hold the position for a long time, like the typical liquidity demander might. Like a market maker, the liquidity supplier will come back to the market to sell off the position – ideally when there is another investor who needs liquidity on the other side of the market. If other traders know the liquidity supplier’s positions, they will logically infer that there is a good likelihood these positions shortly will be put into the market. The other traders will be loath to be the first ones on the other side of these trades, or will demand more of a price concession if they do trade, knowing the overhang that remains in the market.

This means that increased transparency will reduce the amount of liquidity provided for any given change in prices. This is by no means a hypothetical argument. Frequently, even in the most liquid markets, broker-dealer market makers (liquidity providers) use brokers to enter their market bids rather than entering the market directly in order to preserve their anonymity.

The more information we extract to divine the behavior of traders and the resulting implications for the markets, the more the traders will alter their behavior. The paradox is that to understand and anticipate market crises, we must know positions, but knowing and acting on positions will itself generate a feedback into the market. This feedback often will reduce liquidity, making our observations less valuable and possibly contributing to a market crisis. Or, in rare instances, the observer/feedback loop could be manipulated to amass fortunes.

One might argue that the physical limits of knowledge asserted by Heisenberg’s Uncertainty Principle are critical for subatomic physics, but perhaps they are really just a curiosity for those dwelling in the macroscopic realm of the financial markets. We cannot measure an electron precisely, but certainly we still can “kind of know” the present, and if so, then we should be able to “pretty much” predict the future. Causality might be approximate, but if we can get it right to within a few wavelengths of light, that still ought to do the trick. The mathematical system may be demonstrably incomplete, and the world might not be pinned down on the fringes, but for all practical purposes the world can be known.

Unfortunately, while “almost” might work for horseshoes and hand grenades, 30 years after Gödel and Heisenberg yet a third limitation of our knowledge was waiting in the wings, a limitation that would close the door on any attempt to block out the implications of microscopic uncertainty on predictability in our macroscopic world. Based on observations made by Edward Lorenz in the early 1960s and popularized by the so-called butterfly effect – the fanciful notion that the beating wings of a butterfly could change the predictions of an otherwise perfect weather forecasting system – this limitation arises because in some important cases immeasurably small errors can compound over time to limit prediction in the larger scale. Three decades after the limits of measurement and thus of physical knowledge were demonstrated by Heisenberg in the world of quantum mechanics, Lorenz piled on a result that showed how microscopic errors could propagate to have a stultifying impact in nonlinear dynamic systems. This limitation could come into the forefront only with the dawning of the computer age, because it is manifested in the subtle errors of computational accuracy.

The essence of the butterfly effect is that small perturbations can have large repercussions in massive, random forces such as weather. Edward Lorenz was testing and tweaking a model of weather dynamics on a rudimentary vacuum-tube computer. The program was based on a small system of simultaneous equations, but seemed to provide an inkling into the variability of weather patterns. At one point in his work, Lorenz decided to examine in more detail one of the solutions he had generated. To save time, rather than starting the run over from the beginning, he picked some intermediate conditions that had been printed out by the computer and used those as the new starting point. The values he typed in were the same as the values held in the original simulation at that point, so the results the simulation generated from that point forward should have been the same as in the original; after all, the computer was doing exactly the same operations. What he found was that as the simulated weather pattern progressed, the results of the new run diverged, first very slightly and then more and more markedly, from those of the first run. After a point, the new path followed a course that appeared totally unrelated to the original one, even though they had started at the same place.

Lorenz at first thought there was a computer glitch, but as he investigated further, he discovered the basis of a limit to knowledge that rivaled that of Heisenberg and Gödel. The problem was that the numbers he had used to restart the simulation had been reentered based on his printout from the earlier run, and the printout rounded the values to three decimal places while the computer carried the values to six decimal places. This rounding, clearly insignificant at first, propagated a slight error in the next-round results, and this error grew with each new iteration of the program as it moved the simulation of the weather forward in time. The error doubled every four simulated days, so that after a few months the solutions were going their own separate ways. The slightest of changes in the initial conditions had traced out a wholly different pattern of weather.
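A minimal sketch of the effect, assuming the familiar three-variable system from Lorenz’s 1963 paper rather than the larger model he was actually running on the vacuum-tube machine; the second run differs only in having its initial state rounded to three decimal places, standing in for the truncated printout.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the three-variable Lorenz system by one forward-Euler step."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

# One run from the full-precision state, one from the same state rounded to
# three decimal places (standing in for Lorenz's truncated printout).
full = np.array([1.000001, 1.000001, 1.000001])
rounded = np.round(full, 3)

for step in range(1, 5001):
    full = lorenz_step(full)
    rounded = lorenz_step(rounded)
    if step % 1000 == 0:
        gap = np.linalg.norm(full - rounded)
        print(f"step {step:5d}  separation = {gap:.6f}")
```

The separation starts near 10⁻⁶ and grows by orders of magnitude until the two trajectories are effectively unrelated, saturating at roughly the size of the attractor, which is the behavior Lorenz observed.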

Intrigued by his chance observation, Lorenz wrote an article entitled “Deterministic Nonperiodic Flow,” which stated that “nonperiodic solutions are ordinarily unstable with respect to small modifications, so that slightly differing initial states can evolve into considerably different states.” Translation: Long-range weather forecasting is worthless. For his application in the narrow scientific discipline of weather prediction, this meant that no matter how precise the starting measurements of weather conditions, there was a limit after which the residual imprecision would lead to unpredictable results, so that “long-range forecasting of specific weather conditions would be impossible.” And since this occurred in a very simple laboratory model of weather dynamics, it could only be worse in the more complex equations that would be needed to properly reflect the weather. Lorenz discovered the principle that would develop over time into the field of chaos theory, where a deterministic system generated with simple nonlinear dynamics unravels into an unrepeated and apparently random path.

The simplicity of the dynamic system Lorenz had used suggests a far-reaching result: Because we cannot measure without some error (harking back to Heisenberg), for many dynamic systems our forecast errors will grow to the point that even an approximation will be out of our hands. We can run a purely mechanistic system that is designed with well-defined and apparently well-behaved equations, and it will move over time in ways that cannot be predicted and, indeed, that appear to be random. This gets us to Santa Fe.

The principal conceptual thread running through the Santa Fe research asks how apparently simple systems, like that discovered by Lorenz, can produce rich and complex results. Its method of analysis in some respects runs in the opposite direction of the usual path of scientific inquiry. Rather than taking the complexity of the world and distilling simplifying truths from it, the Santa Fe Institute builds a virtual world governed by simple equations that when unleashed explode into results that generate unexpected levels of complexity.

In economics and finance, the institute’s agenda was to create artificial markets with traders and investors who followed simple and reasonable rules of behavior and to see what would happen. Some of the traders built into the model were trend followers, others bought or sold based on the difference between the market price and perceived value, and yet others traded at random times in response to liquidity needs. The simulations then printed out the paths of prices for the various market instruments. Qualitatively, these paths displayed all the richness and variation we observe in actual markets, replete with occasional bubbles and crashes. The exercises did not produce positive results for predicting or explaining market behavior, but they did illustrate that it is not hard to create a market that looks on the surface an awful lot like a real one, and to do so with actors who are following very simple rules. The mantra is that simple systems can give rise to complex, even unpredictable dynamics, an interesting converse to the point that much of the complexity of our world can – with suitable assumptions – be made to appear simple, summarized with concise physical laws and equations.
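A minimal sketch in that spirit, and emphatically not the Santa Fe Institute’s actual model: three one-line trading rules (trend following, value trading, and randomly timed liquidity trades) feed a simple linear price-impact rule, and the resulting path already fluctuates in a loosely market-like way. All the coefficients are arbitrary choices for illustration.

```python
import random

random.seed(7)

fundamental_value = 100.0
prices = [100.0, 100.5]          # two past prices are needed for the trend rule

for t in range(2, 2000):
    p_now, p_prev = prices[-1], prices[-2]

    # Trend followers buy recent strength and sell recent weakness.
    trend_demand = 2.0 * (p_now - p_prev)

    # Value traders lean against deviations from perceived value.
    value_demand = 0.5 * (fundamental_value - p_now)

    # Liquidity/noise traders arrive at random times with random size.
    noise_demand = random.gauss(0.0, 1.0)

    # Net demand moves the price through a simple linear price-impact rule.
    new_price = p_now + 0.05 * (trend_demand + value_demand + noise_demand)
    prices.append(new_price)

print(f"min {min(prices):.2f}  max {max(prices):.2f}  last {prices[-1]:.2f}")
```

Raising the trend-following weight enough relative to the value-trading weight destabilizes the path entirely, which is the qualitative point of the exercise: very simple rules are enough to generate rich, even violent, dynamics.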

The systems explored by Lorenz were deterministic. They were governed definitively and exclusively by a set of equations where the value in every period could be unambiguously and precisely determined based on the values of the previous period. And the systems were not very complex. By contrast, whatever set of equations might be divined to govern the financial world, they are not simple and, furthermore, they are not deterministic. There are random shocks from political and economic events and from the shifting preferences and attitudes of the actors. If we cannot hope to know the course of even deterministic systems like fluid mechanics, then no level of detail will allow us to forecast the long-term course of the financial world, buffeted as it is by the vagaries of the economy and the whims of psychology.

Whitehead and Peirce’s Synchronicity with Hegel’s Capital Error. Thought of the Day 97.0


The focus on experience ensures that Whitehead’s metaphysics is grounded. Otherwise the narrowness of approach would only culminate in sterile measurement. This becomes especially evident with regard to the science of history. Whitehead gives a lucid example of such ‘sterile measurement’ lacking the immediacy of experience.

Consider, for example, the scientific notion of measurement. Can we elucidate the turmoil of Europe by weighing its dictators, its prime ministers, and its editors of newspapers? The idea is absurd, although some relevant information might be obtained. (Alfred North Whitehead – Modes of Thought)

The wealth of experience leaves us with the problem of how to cope with it. Selection of data is required. This selection is done by a value judgment – the judgment of importance. Although Whitehead opposes the dichotomy of the two notions ‘importance’ and ‘matter of fact’, it is still necessary to distinguish grades and types of importance, which enables us to structure our experience, to focus it. This is very similar to hermeneutical theories in Schleiermacher, Gadamer and Habermas: the horizon of understanding structures the data. Therefore, not only do we need judgment; the process of concrescence also implicitly requires an aim. Whitehead explains that

By this term ‘aim’ is meant the exclusion of the boundless wealth of alternative potentiality and the inclusion of that definite factor of novelty which constitutes the selected way of entertaining those data in that process of unification.

The other idea that underlies experience is “matter of fact.”

There are two contrasted ideas which seem inevitably to underlie all width of experience, one of them is the notion of importance, the sense of importance, the presupposition of importance. The other is the notion of matter of fact. There is no escape from sheer matter of fact. It is the basis of importance; and importance is important because of the inescapable character of matter of fact.

By stressing the “alien character” of feeling that enters into the privately felt feeling of an occasion, Whitehead is able to distinguish the responsive and the supplemental stages of concrescence. The responsive stage is a purely receptive phase; the supplemental stage integrates those ‘alien elements’ into a unity of feeling. The alien factor in the experiencing subjects saves Whitehead’s concept from being pure Spirit (Geist) in a Hegelian sense. There are more similarities between Hegelian thinking and Whitehead’s thought than his own comments on Hegel may suggest. But his major criticism could probably be stated with Peirce, who wrote that

The capital error of Hegel which permeates his whole system in every part of it is that he almost altogether ignores the Outward clash. (The Essential Peirce 1)

Whitehead refers to that clash as matter of fact – although, even there, one has to keep in mind that matter of fact is an abstraction.

Matter of fact is an abstraction, arrived at by confining thought to purely formal relations which then masquerade as the final reality. This is why science, in its perfection, relapses into the study of differential equations. The concrete world has slipped through the meshes of the scientific net.

Whitehead clearly keeps the notion of prehension in his late writings as developed in Process and Reality. Just to give one example, 

I have, in my recent writings, used the word ‘prehension’ to express this process of appropriation. Also I have termed each individual act of immediate self-enjoyment an ‘occasion of experience’. I hold that these unities of existence, these occasions of experience, are the really real things which in their collective unity compose the evolving universe, ever plunging into the creative advance. 

Process needs an aim in Process and Reality as much as in Modes of Thought:

We must add yet another character to our description of life. This missing characteristic is ‘aim’. By this term ‘aim’ is meant the exclusion of the boundless wealth of alternative potentiality, and the inclusion of that definite factor of novelty which constitutes the selected way of entertaining those data in that process of unification. The aim is at that complex of feeling which is the enjoyment of those data in that way. ‘That way of enjoyment’ is selected from the boundless wealth of alternatives. It has been aimed at for actualization in that process.

Tarski, Wittgenstein and Undecidable Sentences in Affine Relation to Gödel’s. Thought of the Day 65.0


I imagine someone asking my advice; he says: “I have constructed a proposition (I will use ‘P’ to designate it) in Russell’s symbolism, and by means of certain definitions and transformations it can be so interpreted that it says: ‘P is not provable in Russell’s system.’ Must I not say that this proposition on the one hand is true, and on the other hand is unprovable? For suppose it were false; then it is true that it is provable. And that surely cannot be! And if it is proved, then it is proved that it is not provable. Thus it can only be true, but unprovable.” — Wittgenstein

Any such formal language – say Peano Arithmetic (PA), or Russell and Whitehead’s Principia Mathematica, or ZFC – expresses, in a finite, unambiguous, and communicable manner, relations between concepts that are external to the language PA (or to Principia, or to ZFC). Each such language is, thus, essentially two-valued, since a relation either holds or does not hold externally (relative to the language).

Further, a selected, finite number of primitive formal assertions about a finite set of selected primitive relations of, say, PA are defined as axiomatically PA-provable; all other assertions about relations that can be effectively defined in terms of the primitive relations are termed PA-provable if, and only if, there is a finite sequence of assertions of PA, each of which is either a primitive assertion or can effectively be determined, in a finite number of steps, to be an immediate consequence of two assertions preceding it in the sequence by a finite set of rules of consequence.
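The definition can be made concrete with a toy proof checker for a hypothetical Hilbert-style system (a sketch for illustration, not PA itself): a sequence counts as a proof exactly when every line is either an axiom or follows from two earlier lines by the single rule of modus ponens.

```python
def is_proof(sequence, axioms):
    """Return True if every line of `sequence` is either an axiom or follows
    from two earlier lines by modus ponens: from A and ('->', A, B), infer B.
    Formulas are strings (atoms) or ('->', antecedent, consequent) tuples."""
    for i, line in enumerate(sequence):
        earlier = sequence[:i]
        derived = any(('->', a, line) in earlier for a in earlier)
        if line not in axioms and not derived:
            return False
    return True

# A hypothetical three-axiom system, chosen only to exercise the checker.
axioms = {'p', ('->', 'p', 'q'), ('->', 'q', 'r')}

proof_of_r = ['p', ('->', 'p', 'q'), 'q', ('->', 'q', 'r'), 'r']
bogus      = ['p', 'r']

print(is_proof(proof_of_r, axioms))  # True: every line is an axiom or a modus-ponens step
print(is_proof(bogus, axioms))       # False: 'r' is neither an axiom nor derivable here
```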

The philosophical dimensions of this emerge if we take M as the standard, arithmetical, interpretation of PA (a toy evaluation of this interpretation is sketched after the list below), where:

(a)  the set of non-negative integers is the domain,

(b)  the integer 0 is the interpretation of the symbol “0” of PA,

(c)  the successor operation (addition of 1) is the interpretation of the “ ‘ ” function,

(d)  ordinary addition and multiplication are the interpretations of “+” and “.”,

(e) the interpretation of the predicate letter “=” is the equality relation.
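A toy evaluator for closed terms under this standard interpretation, sketched here for illustration; the tuple-based term syntax is an assumption of the sketch, not anything in the text.

```python
def eval_term(term):
    """Evaluate a closed PA term in the standard model M (non-negative integers).
    Terms are '0', ('S', t) for the successor t', ('+', t1, t2), or ('*', t1, t2)."""
    if term == '0':
        return 0
    op = term[0]
    if op == 'S':
        return eval_term(term[1]) + 1
    if op == '+':
        return eval_term(term[1]) + eval_term(term[2])
    if op == '*':
        return eval_term(term[1]) * eval_term(term[2])
    raise ValueError(f"unknown term: {term!r}")

def holds_equality(lhs, rhs):
    """Interpret '=' as the equality relation on the domain."""
    return eval_term(lhs) == eval_term(rhs)

two   = ('S', ('S', '0'))                                      # the numeral 0''
three = ('S', two)                                             # the numeral 0'''
print(eval_term(('+', two, three)))                            # 5
print(holds_equality(('*', two, three), ('+', three, three)))  # True: 2*3 == 3+3
```

Evaluating closed terms this way is mechanical; it is the Tarskian definitions of satisfiability and truth for quantified assertions over the infinite domain, taken up next, that resist this kind of finite, effective verification, which is what (h) above records.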

Now, post-Gödel, the standard interpretation of classical theory seems to be that:

(f) PA can, indeed, be interpreted in M;

(g) assertions in M are decidable by Tarski’s definitions of satisfiability and truth;

(h) Tarskian truth and satisfiability are, however, not effectively verifiable in M.

Tarski made clear his indebtedness to Gödel’s methods,

We owe the method used here to Gödel, who employed it for other purposes in his recently published work (Gödel). This exceedingly important and interesting article is not directly connected with the theme of our work – it deals with strictly methodological problems, the consistency and completeness of deductive systems – nevertheless we shall be able to use the methods and in part also the results of Gödel’s investigations for our purpose.

On the other hand Tarski strongly emphasized the fact that his results were obtained independently, even though Tarski’s theorem on the undefinability of truth implies the existence of undecidable sentences, and hence Gödel’s first incompleteness theorem. Shifting gears here, how close was the Wittgensteinian quote really to Gödel’s reasoning? However, the question, implicit in Wittgenstein’s argument regarding the possibility of a semantic contradiction in Gödel’s reasoning, then arises: How can we assert that a PA-assertion (whether such an assertion is PA-provable or not) is true under interpretation in M, so long as such truth remains effectively unverifiable in M? Since the issue is not resolved unambiguously by Gödel in his paper (nor, apparently, by subsequent standard interpretations of his formal reasoning and conclusions), Wittgenstein’s quote can be taken to argue that, although we may validly draw various conclusions from Gödel’s formal reasoning and conclusions, the existence of a true or false assertion of M cannot be amongst them.

Whitehead’s Anti-Substantivalism, or Process & Reality as a Cosmology to-be. Thought of the Day 39.0


Treating “stuff” as some kind of metaphysical primitive is mere substantivalism – and fundamentally question-begging. One has replaced an extra-theoretic referent of the wave-function (unless one defers to some quasi-literalist reading of the nature of the stochastic amplitude function ζ[X(t)] as somehow characterizing something akin to being a “density of stuff”, and moreover the logic and probability (Born Rules) must ultimately be obtained from experimentally obtained scattering amplitudes) with something at least as mystifying, as the argument against decoherence goes on to show:

In other words, you have a state vector which gives rise to an outcome of a measurement and you cannot understand why this is so according to your theory.

As a response to Platonism, one can likewise read Process and Reality as essentially anti-substantivalist.

Consider, for instance:

Those elements of our experience which stand out clearly and distinctly [giving rise to our substantial intuitions] in our consciousness are not its basic facts, [but] they are . . . late derivatives in the concrescence of an experiencing subject. . . .Neglect of this law [implies that] . . . [e]xperience has been explained in a thoroughly topsy-turvy fashion, the wrong end first (161).

To function as an object is to be a determinant of the definiteness of an actual occurrence [occasion] (243).

The phenomenological ontology offered in Process and Reality is richly nuanced (including metaphysical primitives such as prehensions, occasions, and their respectively derivative notions such as causal efficacy, presentational immediacy, nexus, etc.). None of these suggest metaphysical notions of substance (i.e., independently existing subjects) as a primitive. The case can perhaps be made concerning the discussion of eternal objects, but such notions as discussed vis-à-vis the process of concrescence are obviously not metaphysically primitive notions. Certainly these metaphysical primitives conform in a more nuanced and articulated manner to aspects of process ontology. “Embedding”, like the notion of emergence, is a crucial constituent in the information-theoretic, quantum-topological, and geometric accounts. Moreover, concerning the issue of relativistic covariance, it is to be regarded that Process and Reality is really a sketch of a cosmology-to-be . . . [in the spirit of ] Kant [who] built on the obsolete ideas of space, time, and matter of Euclid and Newton. Whitehead set out to suggest what a philosophical cosmology might be that builds on Newton.

Non-self Self

Philosophy is the survey of all the sciences with the special object of their harmony and of their completion. It brings to this task not only the evidence of the separate sciences but also its special appeal to the concrete experience – Whitehead


Vidya and Avidya, the Self and the not-Self, as well as sambhūti and asambhūti, Brahman and the world, are basically one, not two. Avidya affirms the world as a self-sufficient reality. Vidya affirms God as the Other, as a faraway reality. When true knowledge arises, say the Upanishads, this opposition is overcome.

The true knowledge involves comprehension of the total Reality, of the truth of both Being and Becoming. Philosophic knowledge or vision cannot be complete if it ignores or neglects any aspect of knowledge or experience. Philosophy is the synthesis of all knowledge and experience, according to the Upanishads and according also to modern thought. Brahmavidya, philosophy, is sarvavidyapratishthā, the basis and support of all knowledge, says the Mundaka Upanishad. All knowledge, according to that Upanishad, can be divided into two distinct categories – the apara, the lower, and the para, the higher. It boldly relegates all sciences, arts, theologies, and holy scriptures of religions, including the Vedas, to the apara category. And that is para, it says, yayā tadaksharam adhigamyate, ‘by which the imperishable Reality is realized.’

The vision of the Totality therefore must include the vision of the para and the apara aspects of Reality. If brahmavidya, philosophy, is the pratisthā, support, of sarvavidyā, totality of knowledge, it must be a synthesis of both the aparā and the parā forms of knowledge.

This is endorsed by the Gita in its statement that the jnana, philosophy, is the synthesis of the knowledge of the not-Self and the Self:

क्षेत्रक्षेत्रज्ञयोर्ज्ञानं यत्तज्ज्ञानं मतं मम ।

kṣetrakṣetrajñayorjñānaṃ yattajjñānaṃ mataṃ mama |

The synthesis of the knowledge of the not-Self, avidya, which is positive science, with that of the Self, vidya, which is the science of religion, will give us true philosophy, which is the knowledge flowering into vision and maturing into wisdom.

This is purnajñāna, fullness of knowledge, as termed by Ramakrishna. The Gita speaks of this as jñānam vijñāna sahitam – jñāna coupled with vijñāna – and proclaims this as the summit of spiritual achievement:

बहूनां जन्मनामन्ते ज्ञानवान्मां प्रपद्यते ।
वासुदेवः सर्वमिति स महात्मा सुदुर्लभः ॥

bahūnāṃ janmanāmante jñānavānmāṃ prapadyate |
vāsudevaḥ sarvamiti sa mahātmā sudurlabhaḥ ||

‘At the end of many births, the wise man attains Me with the realization that all this (universe) is Vasudeva (the indwelling Self); such a great-souled one is rare to come across.’

Whitehead’s Non-Anthropocentric Quantum Field Ontology. Note Quote.


Whitehead builds also upon James’s claim that “The thought is itself the thinker”.

Either your experience is of no content, of no change, or it is of a perceptible amount of content or change. Your acquaintance with reality grows literally by buds or drops of perception. Intellectually and on reflection you can divide them into components, but as immediately given they come totally or not at all. — William James.

If the quantum vacuum displays features that make it resemble a material, albeit a really special one, we can immediately ask: then what is this material made of? Is it a continuum, or are there “atoms” of vacuum? Is vacuum the primordial substance of which everything is made? Let us start by decoupling the concept of vacuum from that of spacetime. The concept of vacuum as accepted and used in standard quantum field theory is tied with that of spacetime. This is important for the theory of quantum fields, because it leads to observable effects. It is the variation of geometry, either as a change in boundary conditions or as a change in the speed of light (and therefore the metric), which is responsible for the creation of particles. Now, one can legitimately go further and ask: which one is the fundamental “substance”, the space-time or the vacuum? Is the geometry fundamental in any way, or is it just a property of the empty space emerging from a deeper structure?

That geometry and substance can be separated is of course not anything new for philosophers. Aristotle’s distinction between form and matter is one example. For Aristotle the “essence” becomes a true reality only when embodied in a form. Otherwise it is just a substratum of potentialities, somewhat similar to what quantum physics suggests. Immanuel Kant was even more radical: the forms, or in general the structures that we think of as either existing in or as being abstracted from the realm of noumena are actually innate categories of the mind, preconditions that make possible our experience of reality as phenomena. Structures such as space and time, causality, etc. are a priori forms of intuition – thus by nature very different from anything from the outside reality, and they are used to formulate synthetic a priori judgments. But almost everything that was discovered in modern physics is at odds with Kant’s view.

In modern philosophy perhaps Whitehead’s process metaphysics provides the closest framework for formulating these problems. For Whitehead, potentialities are continuous, while the actualizations are discrete, much like in the quantum theory the unitary evolution is continuous, while the measurement is non-unitary and in some sense “discrete”. An important concept is the “extensive continuum”, defined as a “relational complex” containing all the possibilities of objectification. This continuum also contains the potentiality for division; this potentiality is effected in what Whitehead calls “actual entities (occasions)” – the basic blocks of his cosmology. The core issue for both Whiteheadian Process and Quantum Process is the emergence of the discrete from the continuous. But what fixes, or determines, the partitioning of the continuous whole into the discrete set of subsets? The orthodox answer is this: it is an intentional action of an experimenter that determines the partitioning! But in the Whiteheadian process, the world of fixed and settled facts grows via a sequence of actual occasions. The past actualities are the causal and structural inputs for the next actual occasion, which specifies a new space-time standpoint (region) from which the potentialities created by the past actualities will be prehended (grasped) by the current occasion. This basic autogenetic process creates the new actual entity, which, upon becoming actual, contributes to the potentialities for the succeeding actual occasions.
For the pragmatic physicist, since the extensive continuum provides the space of possibilities from which the actual entities arise, it is tempting to identify it with the quantum vacuum. The actual entities are then assimilated with events in spacetime, as resulting from a quantum measurement, or simply with particles. The following caveat is however due: Whitehead’s extensive continuum is also devoid of geometrical content, while the quantum vacuum normally carries information about the geometry, be it flat or curved.

Objective/absolute actuality consists of a sequence of psycho-physical quantum reduction events, identified as Whiteheadian actual entities/occasions. These happenings combine to create a growing “past” of fixed and settled “facts”. Each “fact” is specified by an actual occasion/entity that has a physical aspect (pole), and a region in space-time from which it views reality. The physical input is precisely the aspect of the physical state of the universe that is localized along the part of the contemporary space-like surface σ that constitutes the front of the standpoint region associated with the actual occasion. The physical output is the reduced state ψ(σ) on this space-like surface σ. The mental pole consists of an input and an output. The mental inputs and outputs have the ontological character of thoughts, ideas, or feelings, and they play an essential dynamical role in unifying, evaluating, and selecting discrete classically conceivable activities from among the continuous range of potentialities offered by the operation of the physically describable laws.

The paradigmatic example of an actual occasion is an event whose mental pole is experienced by a human being as an addition to his or her stream of conscious events, and whose output physical pole is the neural correlate of that experiential event. Such events are “high-grade” actual occasions. But the Whitehead/Quantum ontology postulates that simpler organisms will have fundamentally similar but lower-grade actual occasions, and that there can be actual occasions associated with any physical systems that possess a physical structure that will support physically effective mental interventions of the kind described above. Thus the Whitehead/Quantum ontology is essentially an ontologicalization of the structure of orthodox relativistic quantum field theory, stripped of its anthropocentric trappings. It identifies the essential physical and psychological aspects of contemporary orthodox relativistic quantum field theory, and lets them be essential features of a general non-anthropocentric ontology.


It is reasonable to expect that the continuous differentiable manifold that we use as spacetime in physics (and experience in our daily life) is a coarse-grained manifestation of a deeper reality, perhaps also of quantum (probabilistic) nature. This search for the underlying structure of spacetime is part of the wider effort of bringing together quantum physics and the theory of gravitation under the same conceptual umbrella. From various theoretical considerations, it is inferred that this unification should account for physics at the incredibly small scale set by the Planck length, 10⁻³⁵ m, where the effects of gravitation and quantum physics would be comparable. What happens below this scale, which concepts will survive in the new description of the world, is not known. An important point is that, in order to incorporate the main conceptual innovation of general relativity, the theory should be background-independent. This contrasts with the case of the other fields (electromagnetic, Dirac, etc.) that live in the classical background provided by gravitation. The problem with quantizing gravitation is – if we believe that the general theory of relativity holds in the regime where quantum effects of gravitation would appear, that is, beyond the Planck scale – that there is no underlying background on which the gravitational field lives. There are several suggestions and models for a “pre-geometry” (a term introduced by Wheeler) that are currently actively investigated. This is a question of ongoing investigation and debate, and several research programs in quantum gravity (loops, spinfoams, noncommutative geometry, dynamical triangulations, etc.) have proposed different lines of attack. Spacetime would then be an emergent entity, an approximation valid only at scales much larger than the Planck length. Incidentally, nothing guarantees that background-independence itself is a fundamental concept that will survive in the new theory. For example, string theory is an approach to unifying the Standard Model of particle physics with gravitation which uses quantization in a fixed (non-dynamic) background. In string theory, gravitation is just another force, with the graviton (zero mass and spin 2) obtained as one of the string modes in the perturbative expansion. A background-independent formulation of string theory would be a great achievement, but so far it is not known if it can be achieved.

Whitehead’s Ontologization of the Quantum Field Theory (QFT)

The art of progress is to preserve order amid change, and to preserve change amid order.

— Alfred North Whitehead, Process and Reality, 1929.


After his attempt to complete the set-theoretic foundations of mathematics in collaboration with Russell, Whitehead’s venture into the natural sciences made him realise that the traditional timeless ontology of substances, not least their static set-theoretic underpinning, does not suit natural phenomena. Instead, he claims, it is processes and their relationships which should underpin our understanding of those phenomena. Whiteheadian quantum ontology is essentially an ontologization of the structure of orthodox relativistic quantum field theory, stripped of any anthropocentric formulations. This means that mentality is no longer reserved for human beings and higher creatures. Does Whitehead’s ontology contain an inconsistency due to the fact that the principle of separateness of all realized regions will generally not be satisfied in his causally local and separable ontology? This would be true if his metaphysics were traced back only to the theory of relativity, if one did not take into account that his ideas originate from a psycho-philosophical discussion, that his theory of prehension connects all occasions of the contemporary world, and that the concrescence process selects positive prehensions. If one concluded that, then either the causal independence of simultaneous occasions or the distinctness of their concrescence processes would have to be abandoned in order to secure the separateness of all realized regions, and one would have to answer the question: what does causality mean?

“Causality is merely the way in which each instance of freedom takes into account the previous instances, as each of our experiences refers back through memory to our own past and through perception to the world’s past.” According to quantum thinking and process philosophy there is no backward-in-time causation. “The basic properties of relativistic quantum theory emerge […] from a logically simple model of reality. In this model there is a fundamental creative process by discrete steps. Each step is a creative act or event. Each event is associated with a definitive spacetime location. The fundamental process is not local in character, but it generates local spacetime patterns that have mathematical forms amenable to scientific studies.” According to Charles Hartshorne,

The mutual independence of contemporaries constitutes their freedom. Without this independence, what happens anywhere would immediately condition what happens anywhere else. However, this would be fatal to freedom only if the sole alternative to mutual independence were mutual dependence. And this is not a necessary, it is even a possible, interpretation of Bell’s result. What happens here now may condition what happens somewhere else without measurable temporal lapse, although what happens somewhere else does not condition what happens here, which still retains its freedom since […] no set of conditions can be fully determinative of the resulting actuality.

Indian Thought and Language: a raw recipe imported in the east

In his Philosophy of History, Hegel mistakenly believed that

“Hindoo principles” are polar in character. Because of their polarity which vacillates between “pure self-renouncing ideality, and that (phenomenal) variety which goes to the opposite extreme of sensuousness, it is evident that nothing but abstract thought and imagination can be developed”. However, from these mistaken beliefs, he rightly concluded that grammar in Indian thought “has advanced to a high degree of consistent regularity”. He was so impressed by the developments that he concluded that the development in grammar “has been so far cultivated that no language can be regarded as more fully developed than the Sanscrit”.

This is enlightening to the extent that even Fred Dallmayr, in his aptly titled opus on Hegel, “G. W. F. Hegel: Modernity and Political Thought”, would be most happy to corroborate it. This is precisely what I would call ‘Philosophy in the times of errors’ (pun intended for Hegel and his arrogance).

About the nature of language, I quote in full the paragraph:

“Language is intimately related with our life like the warp and weft in a cloth. Our concepts determine the way we look at our world. Any aberration in our understanding of language affects our cognition. Despite the cardinal importance of language, the questions like “What is the nature of language?”, “What is the role of semantics and syntax of language?”, “What is the relationship between language, thought and reality?”, “How do we understand language—do we understand it by understanding each of the words in a sentence, or is the sentence a carrier of meaning?”, “How does the listener understand the speaker?” are the questions which have been an enigma.”

Philosopher Christopher Gauker created quite a ruckus with his influential yet critically attacked book called “Words without Meaning”, and I quote a small review of it from the MIT Press (which published the book):

“According to the received view of linguistic communication, the primary function of language is to enable speakers to reveal the propositional contents of their thoughts to hearers. Speakers are able to do this because they share with their hearers an understanding of the meanings of words. Christopher Gauker rejects this conception of language, arguing that it rests on an untenable conception of mental representation and yields a wrong account of the norms of discourse.

Gauker’s alternative starts with the observation that conversations have goals and that the best way to achieve the goal of a conversation depends on the circumstances under which the conversation takes place. These goals and circumstances determine a context of utterance quite apart from the attitudes of the interlocutors. The fundamental norms of discourse are formulated in terms of the conditions under which sentences are assertible in such contexts.

Words without Meaning contains original solutions to a wide array of outstanding problems in the philosophy of language, including the logic of quantification, the logic of conditionals, the semantic paradoxes, the nature of presupposition and implicature, and the nature and attribution of beliefs.”


This is indeed a new way of looking at the nature of language, and the real question is whether anyone in the Indian tradition comes really close to doing this, i.e. a conflation of what Gauker says with that of the tradition. Another thing that I discovered, thanks to a friend of mine, is a book by Richard King on Indian Philosophy. He writes about Bhartṛhari/भर्तृहरि thus:

Bhartṛhari/भर्तृहरि, like his Lacanian and Derridean counterparts, rejects the view that one can know anything outside of language. There is an eternal connection between knowledge and language which cannot be broken.

If this identity between knowledge and the word were to disappear, knowledge would cease to be knowledge. (Bhartṛhari/भर्तृहरि himself)

Thus he equates Śabda and Jñāna, as the two become identical in nature.

Language could indeed be looked at as a function taking whatever arguments are passed to it, a function that need not specifically base itself upon communication as an end but could still serve communication as a means. I would call this the syndrome of language (or maybe even a deficit of language, to take the cue from the ‘phenomenological deficit’): in whatsoever way it is looked upon, i.e. as transcendental realization or as an immanent force (‘play’ would be better suited here) ‘in-itself’ for the sake of its own establishment, the possibility of keeping communication out cannot be ruled out. Language would still be communico-centric for all that.

But another way of looking at it is to realize (rather than establish or introduce) relations between two relata: if we could somehow attribute language to ‘Objects’, we might even regard the otherwise untenable interactions between any entities as based on a relation that is linguistic in ways we might not comprehend.

No wonder I am getting drawn into the seriousness of objects as a way of realizing their interactions, their language and all this, away from the mandates we (anthropocentrism) have hitherto set upon them.

Why I insist on objects having a language of their own, and that too divorced from the realm of humans, is maybe the impact of Whitehead on my reading of the tool-analysis of Heidegger. It must be noted that Whitehead never shied away from embracing inanimate reality, from using words like ‘thought’ and ‘emotions/feelings’ for the inner life of inanimate entities. If these things, in their hermeneutical exegesis, get attributed to inanimate entities, there can be no doubt of these ‘Objects’ possessing language, and, as I said, one far away from human interference. This could indeed be a way of looking at language in the sense of transcendentalizing possibility, this time, maybe, through the immanent look……