Knowledge Limited for Dummies….Didactics.


Bertrand Russell, together with Alfred North Whitehead, aimed in Principia Mathematica to demonstrate that “all pure mathematics follows from purely logical premises and uses only concepts defined in logical terms.” Its goal was to provide a formalized logic for all mathematics, to develop the full structure of mathematics where every premise could be proved from a clear set of initial axioms.

Russell observed of the dense and demanding work, “I used to know of only six people who had read the later parts of the book. Three of those were Poles, subsequently (I believe) liquidated by Hitler. The other three were Texans, subsequently successfully assimilated.” The complex mathematical symbols of the manuscript required it to be written by hand, and its sheer size – when it was finally ready for the publisher, Russell had to hire a panel truck to send it off – made it impossible to copy. Russell recounted that “every time that I went out for a walk I used to be afraid that the house would catch fire and the manuscript get burnt up.”

Momentous though it was, the greatest achievement of Principia Mathematica was realized two decades after its completion, when it provided the fodder for the metamathematical enterprises of an Austrian, Kurt Gödel. Although Gödel did face the risk of being liquidated by Hitler (and therefore fled to the Institute for Advanced Study at Princeton), he was neither a Pole nor a Texan. In 1931, he wrote a treatise entitled On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which demonstrated that the goal Russell and Whitehead had so single-mindedly pursued was unattainable.

The flavor of Gödel’s basic argument can be captured in the contradictions contained in a schoolboy’s brainteaser. A sheet of paper has the words “The statement on the other side of this paper is true” written on one side and “The statement on the other side of this paper is false” on the reverse. The conflict isn’t resolvable. Or, even more trivially, a statement like: “This statement is unprovable.” You cannot prove the statement is true, because doing so would contradict it. If you prove the statement is false, then that means its negation is true – it is provable – which again is a contradiction.

The key point of contradiction for these two examples is that they are self-referential. This same sort of self-referentiality is the keystone of Gödel’s proof, where he uses statements that imbed other statements within them. This problem did not totally escape Russell and Whitehead. By the end of 1901, Russell had completed the first round of writing Principia Mathematica and thought he was in the homestretch, but was increasingly beset by these sorts of apparently simple-minded contradictions falling in the path of his goal. He wrote that “it seemed unworthy of a grown man to spend his time on such trivialities, but . . . trivial or not, the matter was a challenge.” Attempts to address the challenge extended the development of Principia Mathematica by nearly a decade.

Yet Russell and Whitehead had, after all that effort, missed the central point. Like granite outcroppings piercing through a bed of moss, these apparently trivial contradictions were rooted in the core of mathematics and logic, and were only the most readily manifest examples of a limit to our ability to structure formal mathematical systems. Just four years before Gödel had defined the limits of our ability to conquer the intellectual world of mathematics and logic with the publication of his Undecidability Theorem, the German physicist Werner Heisenberg’s celebrated Uncertainty Principle had delineated the limits of inquiry into the physical world, thereby undoing the efforts of another celebrated intellect, the great mathematician Pierre-Simon Laplace. In the early 1800s Laplace had worked extensively to demonstrate the purely mechanical and predictable nature of planetary motion. He later extended this theory to the interaction of molecules. In the Laplacean view, molecules are just as subject to the laws of physical mechanics as the planets are. In theory, if we knew the position and velocity of each molecule, we could trace its path as it interacted with other molecules, and trace the course of the physical universe at the most fundamental level. Laplace envisioned a world of ever more precise prediction, where the laws of physical mechanics would be able to forecast nature in increasing detail and ever further into the future, a world where “the phenomena of nature can be reduced in the last analysis to actions at a distance between molecule and molecule.”

What Gödel did to the work of Russell and Whitehead, Heisenberg did to Laplace’s concept of causality. The Uncertainty Principle, though broadly applied and draped in metaphysical context, is a well-defined and elegantly simple statement of physical reality – namely, the product of the uncertainties in a measurement of an electron’s location and of its momentum can never fall below a fixed value. The reason for this, viewed from the standpoint of classical physics, is that accurately measuring the position of an electron requires illuminating the electron with light of a very short wavelength. But the shorter the wavelength, the greater the amount of energy that hits the electron, and the greater the energy hitting the electron, the greater the impact on its velocity.
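
In its standard quantitative form (added here for reference; the passage above gives only the qualitative version), the principle reads

Δx · Δp ≥ ħ/2

where Δx and Δp are the uncertainties in position and momentum and ħ is the reduced Planck constant.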

What is true in the subatomic sphere ends up being true – though with rapidly diminishing significance – for the macroscopic. Nothing can be measured with complete precision as to both location and velocity because the act of measuring alters the physical properties. The idea that if we know the present we can calculate the future was proven invalid – not because of a shortcoming in our knowledge of mechanics, but because the premise that we can perfectly know the present was proven wrong. These limits to measurement imply limits to prediction. After all, if we cannot know even the present with complete certainty, we cannot unfailingly predict the future. It was with this in mind that Heisenberg, ecstatic about his yet-to-be-published paper, exclaimed, “I think I have refuted the law of causality.”

The epistemological extrapolation of Heisenberg’s work was that the root of the problem was man – or, more precisely, man’s examination of nature, which inevitably impacts the natural phenomena under examination so that the phenomena cannot be objectively understood. Heisenberg’s principle was not something that was inherent in nature; it came from man’s examination of nature, from man becoming part of the experiment. (So in a way the Uncertainty Principle, like Gödel’s Undecidability Proposition, rested on self-referentiality.) While it did not directly refute Einstein’s assertion against the statistical nature of the predictions of quantum mechanics that “God does not play dice with the universe,” it did show that if there were a law of causality in nature, no one but God would ever be able to apply it. The implications of Heisenberg’s Uncertainty Principle were recognized immediately, and it became a simple metaphor reaching beyond quantum mechanics to the broader world.

This metaphor extends neatly into the world of financial markets. In the purely mechanistic universe of classical physics, we could apply Newtonian laws to project the future course of nature, if only we knew the location and velocity of every particle. In the world of finance, the elementary particles are the financial assets. In a purely mechanistic financial world, if we knew the position each investor has in each asset and the ability and willingness of liquidity providers to take on those assets in the event of a forced liquidation, we would be able to understand the market’s vulnerability. We would have an early-warning system for crises. We would know which firms are subject to a liquidity cycle, and which events might trigger that cycle. We would know which markets are being overrun by speculative traders, and thereby anticipate tactical correlations and shifts in the financial habitat. The randomness of nature and economic cycles might remain beyond our grasp, but the primary cause of market crisis, and the part of market crisis that is of our own making, would be firmly in hand.

The first step toward the Laplacean goal of complete knowledge is the advocacy by certain financial market regulators to increase the transparency of positions. Politically, that would be a difficult sell – as would any kind of increase in regulatory control. Practically, it wouldn’t work. Just as the atomic world turned out to be more complex than Laplace conceived, the financial world may be similarly complex and not reducible to a simple causality. The problems with position disclosure are many. Some financial instruments are complex and difficult to price, so it is impossible to measure precisely the risk exposure. Similarly, in hedge positions a slight error in the transmission of one part, or asynchronous pricing of the various legs of the strategy, will grossly misstate the total exposure. Indeed, the problems and inaccuracies in using position information to assess risk are exemplified by the fact that major investment banking firms choose to use summary statistics rather than position-by-position analysis for their firmwide risk management despite having enormous resources and computational power at their disposal.

Perhaps more importantly, position transparency also has implications for the efficient functioning of the financial markets beyond the practical problems involved in its implementation. The problems in the examination of elementary particles in the financial world are the same as in the physical world: Beyond the inherent randomness and complexity of the systems, there are simply limits to what we can know. To say that we do not know something is as much a challenge as it is a statement of the state of our knowledge. If we do not know something, that presumes that either it is not worth knowing or it is something that will be studied and eventually revealed. It is the hubris of man that all things are discoverable. But for all the progress that has been made, perhaps even more exciting than the rolling back of the boundaries of our knowledge is the identification of realms that can never be explored. A sign in Einstein’s Princeton office read, “Not everything that counts can be counted, and not everything that can be counted counts.”

The behavioral analogue to the Uncertainty Principle is obvious. There are many psychological inhibitions that lead people to behave differently when they are observed than when they are not. For traders it is a simple matter of dollars and cents that will lead them to behave differently when their trades are open to scrutiny. Beneficial though it may be for the liquidity demander and the investor, for the liquidity supplier transparency is bad. The liquidity supplier does not intend to hold the position for a long time, like the typical liquidity demander might. Like a market maker, the liquidity supplier will come back to the market to sell off the position – ideally when there is another investor who needs liquidity on the other side of the market. If other traders know the liquidity supplier’s positions, they will logically infer that there is a good likelihood these positions shortly will be put into the market. The other traders will be loath to be the first ones on the other side of these trades, or will demand more of a price concession if they do trade, knowing the overhang that remains in the market.

This means that increased transparency will reduce the amount of liquidity provided for any given change in prices. This is by no means a hypothetical argument. Frequently, even in the most liquid markets, broker-dealer market makers (liquidity providers) use brokers to enter their market bids rather than entering the market directly in order to preserve their anonymity.

The more information we extract to divine the behavior of traders and the resulting implications for the markets, the more the traders will alter their behavior. The paradox is that to understand and anticipate market crises, we must know positions, but knowing and acting on positions will itself generate a feedback into the market. This feedback often will reduce liquidity, making our observations less valuable and possibly contributing to a market crisis. Or, in rare instances, the observer/feedback loop could be manipulated to amass fortunes.

One might argue that the physical limits of knowledge asserted by Heisenberg’s Uncertainty Principle are critical for subatomic physics, but perhaps they are really just a curiosity for those dwelling in the macroscopic realm of the financial markets. We cannot measure an electron precisely, but certainly we still can “kind of know” the present, and if so, then we should be able to “pretty much” predict the future. Causality might be approximate, but if we can get it right to within a few wavelengths of light, that still ought to do the trick. The mathematical system may be demonstrably incomplete, and the world might not be pinned down on the fringes, but for all practical purposes the world can be known.

Unfortunately, while “almost” might work for horseshoes and hand grenades, 30 years after Gödel and Heisenberg yet a third limitation of our knowledge was in the wings, a limitation that would close the door on any attempt to block out the implications of microscopic uncertainty on predictability in our macroscopic world. Based on observations made by Edward Lorenz in the early 1960s and popularized by the so-called butterfly effect – the fanciful notion that the beating wings of a butterfly could change the predictions of an otherwise perfect weather forecasting system – this limitation arises because in some important cases immeasurably small errors can compound over time to limit prediction in the larger scale. Half a century after the limits of measurement and thus of physical knowledge were demonstrated by Heisenberg in the world of quantum mechanics, Lorenz piled on a result that showed how microscopic errors could propagate to have a stultifying impact in nonlinear dynamic systems. This limitation could come into the forefront only with the dawning of the computer age, because it is manifested in the subtle errors of computational accuracy.

The essence of the butterfly effect is that small perturbations can have large repercussions in massive, random forces such as weather. Edward Lorenz was testing and tweaking a model of weather dynamics on a rudimentary vacuum-tube computer. The program was based on a small system of simultaneous equations, but seemed to provide an inkling into the variability of weather patterns. At one point in his work, Lorenz decided to examine in more detail one of the solutions he had generated. To save time, rather than starting the run over from the beginning, he picked some intermediate conditions that had been printed out by the computer and used those as the new starting point. The values he typed in were the same as the values held in the original simulation at that point, so the results the simulation generated from that point forward should have been the same as in the original; after all, the computer was doing exactly the same operations. What he found was that as the simulated weather pattern progressed, the results of the new run diverged, first very slightly and then more and more markedly, from those of the first run. After a point, the new path followed a course that appeared totally unrelated to the original one, even though they had started at the same place.

Lorenz at first thought there was a computer glitch, but as he investigated further, he discovered the basis of a limit to knowledge that rivaled that of Heisenberg and Gödel. The problem was that the numbers he had used to restart the simulation had been reentered based on his printout from the earlier run, and the printout rounded the values to three decimal places while the computer carried the values to six decimal places. This rounding, clearly insignificant at first, propagated a slight error in the next-round results, and this error grew with each new iteration of the program as it moved the simulation of the weather forward in time. The error doubled every four simulated days, so that after a few months the solutions were going their own separate ways. The slightest of changes in the initial conditions had traced out a wholly different pattern of weather.
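
A minimal sketch of this rounding experiment, in Python, using the familiar three-variable Lorenz system rather than Lorenz’s original weather model (the parameter values, step size and run lengths are illustrative assumptions): an “original” run is interrupted at an intermediate step, the state is rounded to three decimal places as if re-entered from a printout, and the restarted run is compared against the original as the two drift apart.

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the three-variable Lorenz equations one step with simple Euler integration."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def run(state, steps):
    trajectory = [state]
    for _ in range(steps):
        state = lorenz_step(state)
        trajectory.append(state)
    return trajectory

original = run((1.0, 1.0, 1.0), 4000)              # the "original" run
printout = original[1000]                          # an intermediate state, as printed out
restart = tuple(round(v, 3) for v in printout)     # re-entered with only three decimals
rerun = run(restart, 3000)                         # the restarted run

for i in (0, 500, 1000, 2000, 3000):
    err = max(abs(p - q) for p, q in zip(original[1000 + i], rerun[i]))
    print(f"step {i:5d}: largest coordinate difference = {err:.6f}")
```

The initial discrepancy is at most 0.0005 in each coordinate, yet as the runs proceed it should grow by orders of magnitude before saturating at the size of the attractor, which is the behaviour Lorenz observed.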

Intrigued by his chance observation, Lorenz wrote an article entitled “Deterministic Nonperiodic Flow,” which stated that “nonperiodic solutions are ordinarily unstable with respect to small modifications, so that slightly differing initial states can evolve into considerably different states.” Translation: Long-range weather forecasting is worthless. For his application in the narrow scientific discipline of weather prediction, this meant that no matter how precise the starting measurements of weather conditions, there was a limit after which the residual imprecision would lead to unpredictable results, so that “long-range forecasting of specific weather conditions would be impossible.” And since this occurred in a very simple laboratory model of weather dynamics, it could only be worse in the more complex equations that would be needed to properly reflect the weather. Lorenz discovered the principle that would emerge over time into the field of chaos theory, where a deterministic system generated with simple nonlinear dynamics unravels into an unrepeated and apparently random path.

The simplicity of the dynamic system Lorenz had used suggests a far-reaching result: Because we cannot measure without some error (harking back to Heisenberg), for many dynamic systems our forecast errors will grow to the point that even an approximation will be out of our hands. We can run a purely mechanistic system that is designed with well-defined and apparently well-behaved equations, and it will move over time in ways that cannot be predicted and, indeed, that appear to be random. This gets us to Santa Fe.

The principal conceptual thread running through the Santa Fe research asks how apparently simple systems, like that discovered by Lorenz, can produce rich and complex results. Its method of analysis in some respects runs in the opposite direction of the usual path of scientific inquiry. Rather than taking the complexity of the world and distilling simplifying truths from it, the Santa Fe Institute builds a virtual world governed by simple equations that when unleashed explode into results that generate unexpected levels of complexity.

In economics and finance, the institute’s agenda was to create artificial markets with traders and investors who followed simple and reasonable rules of behavior and to see what would happen. Some of the traders built into the model were trend followers, others bought or sold based on the difference between the market price and perceived value, and yet others traded at random times in response to liquidity needs. The simulations then printed out the paths of prices for the various market instruments. Qualitatively, these paths displayed all the richness and variation we observe in actual markets, replete with occasional bubbles and crashes. The exercises did not produce positive results for predicting or explaining market behavior, but they did illustrate that it is not hard to create a market that looks on the surface an awful lot like a real one, and to do so with actors who are following very simple rules. The mantra is that simple systems can give rise to complex, even unpredictable dynamics, an interesting converse to the point that much of the complexity of our world can – with suitable assumptions – be made to appear simple, summarized with concise physical laws and equations.
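
A minimal sketch, in Python, of the kind of artificial market described above – not the Santa Fe Institute’s actual model, and with invented parameter values: trend followers, value traders and liquidity (noise) traders submit demands each period, and the price moves with the net demand.

```python
import random

def simulate_market(steps=2000, perceived_value=100.0, seed=1):
    """Toy artificial market: the price is pushed around by three stylized trader types."""
    random.seed(seed)
    prev_price = price = perceived_value
    prices = [price]
    for _ in range(steps):
        trend_demand = 0.8 * (price - prev_price)         # trend follower: chases recent moves
        value_demand = 0.05 * (perceived_value - price)   # value trader: buys cheap, sells dear
        liquidity_demand = random.gauss(0.0, 0.5)         # liquidity trader: trades at random times
        net_demand = trend_demand + value_demand + liquidity_demand
        prev_price, price = price, max(0.01, price + 0.5 * net_demand)
        prices.append(price)
    return prices

path = simulate_market()
print(f"low {min(path):.2f}  high {max(path):.2f}  final {path[-1]:.2f}")
```

Even this crude set-up produces wandering price paths that can overshoot and snap back; it illustrates the point in the text rather than predicting anything about real markets.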

The systems explored by Lorenz were deterministic. They were governed definitively and exclusively by a set of equations where the value in every period could be unambiguously and precisely determined based on the values of the previous period. And the systems were not very complex. By contrast, whatever the set of equations that might be divined to govern the financial world, they are not simple and, furthermore, they are not deterministic. There are random shocks from political and economic events and from the shifting preferences and attitudes of the actors. If we cannot hope to know the course of even simple deterministic systems like fluid mechanics, then no level of detail will allow us to forecast the long-term course of the financial world, buffeted as it is by the vagaries of the economy and the whims of psychology.

Individuation. Thought of the Day 91.0

Figure: concepts of extensionality

The first distinction is between two senses of the word “individuation” – one semantic, the other metaphysical. In the semantic sense of the word, to individuate an object is to single it out for reference in language or in thought. By contrast, in the metaphysical sense of the word, the individuation of objects has to do with “what grounds their identity and distinctness.” Sets are often used to illustrate the intended notion of “grounding.” The identity or distinctness of sets is said to be “grounded” in accordance with the principle of extensionality, which says that two sets are identical iff they have precisely the same elements:

SET(x) ∧ SET(y) → [x = y ↔ ∀u(u ∈ x ↔ u ∈ y)]

The metaphysical and semantic senses of individuation are quite different notions, neither of which appears to be reducible to or fully explicable in terms of the other. Since sufficient sense cannot be made of the notion of “grounding of identity” on which the metaphysical notion of individuation is based, focusing on the semantic notion of individuation is an easy way out. This choice of focus means that our investigation is a broadly empirical one drawn on empirical linguistics and psychology.

What is the relation between the semantic notion of individuation and the notion of a criterion of identity? It is by means of criteria of identity that semantic individuation is effected. Singling out an object for reference involves being able to distinguish this object from other possible referents with which one is directly presented. The final distinction is between two types of criteria of identity. A one-level criterion of identity says that two objects of some sort F are identical iff they stand in some relation RF:

Fx ∧ Fy → [x = y ↔ RF(x,y)]

Criteria of this form operate at just one level in the sense that the condition for two objects to be identical is given by a relation on these objects themselves. An example is the set-theoretic principle of extensionality.

A two-level criterion of identity relates the identity of objects of one sort to some condition on entities of another sort. The former sort of objects are typically given as functions of items of the latter sort, in which case the criterion takes the following form:

f(α) = f(β) ↔ α ≈ β

where the variables α and β range over the latter sort of item and ≈ is an equivalence relation on such items. An example is Frege’s famous criterion of identity for directions:

d(l1) = d(l2) ↔ l1 || l2

where the variables l1 and l2 range over lines or other directed items. An analogous two-level criterion relates the identity of geometrical shapes to the congruence of things or figures having the shapes in question. The decision to focus on the semantic notion of individuation makes it natural to focus on two-level criteria. For two-level criteria of identity are much more useful than one-level criteria when we are studying how objects are singled out for reference. A one-level criterion provides little assistance in the task of singling out objects for reference. In order to apply a one-level criterion, one must already be capable of referring to objects of the sort in question. By contrast, a two-level criterion promises a way of singling out an object of one sort in terms of an item of another and less problematic sort. For instance, when Frege investigated how directions and other abstract objects “are given to us”, although “we cannot have any ideas or intuitions of them”, he proposed that we relate the identity of two directions to the parallelism of the two lines in terms of which these directions are presented. This would be explanatory progress since reference to lines is less puzzling than reference to directions.
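
A hypothetical sketch in Python of the two-level criterion just described, with parallelism of lines as the equivalence relation and a canonical slope standing in for the direction function d(·); the class names and representation are invented for the illustration.

```python
from dataclasses import dataclass
from fractions import Fraction

@dataclass(frozen=True)
class Line:
    """A line in the plane, given by a point (px, py) and a nonzero direction vector (dx, dy)."""
    px: Fraction
    py: Fraction
    dx: Fraction
    dy: Fraction

def parallel(l1: Line, l2: Line) -> bool:
    """The equivalence relation on lines: parallelism (zero cross product of direction vectors)."""
    return l1.dx * l2.dy == l1.dy * l2.dx

def direction(l: Line):
    """f(l): a canonical representative of l's parallelism class (its slope, or 'vertical')."""
    return ("slope", l.dy / l.dx) if l.dx != 0 else ("vertical",)

l1 = Line(Fraction(0), Fraction(0), Fraction(2), Fraction(4))
l2 = Line(Fraction(5), Fraction(1), Fraction(1), Fraction(2))

# Frege's two-level criterion: d(l1) = d(l2) holds exactly when l1 is parallel to l2.
assert (direction(l1) == direction(l2)) == parallel(l1, l2)
print(direction(l1), parallel(l1, l2))
```

The two-level form shows up directly in the code: identity of directions is decided entirely by a relation (parallel) on the less problematic items, the lines.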

“The Conceptual Penis as a Social Construct”: Sokal-Like Hoax Returns to Test Academic Left’s Moral (Architecture + Orthodox Gender Studies) and Cripples It.


Destructive, unsustainable hegemonically male approaches to pressing environmental policy and action are the predictable results of a raping of nature by a male-dominated mindset. This mindset is best captured by recognizing the role of [sic] the conceptual penis holds over masculine psychology. When it is applied to our natural environment, especially virgin environments that can be cheaply despoiled for their material resources and left dilapidated and diminished when our patriarchal approaches to economic gain have stolen their inherent worth, the extrapolation of the rape culture inherent in the conceptual penis becomes clear…….Toxic hypermasculinity derives its significance directly from the conceptual penis and applies itself to supporting neocapitalist materialism, which is a fundamental driver of climate change, especially in the rampant use of carbon-emitting fossil fuel technologies and careless domination of virgin natural environments. We need not delve deeply into criticisms of dialectic objectivism, or their relationships with masculine tropes like the conceptual penis to make effective criticism of (exclusionary) dialectic objectivism. All perspectives matter.

The androcentric scientific and meta-scientific evidence that the penis is the male reproductive organ is considered overwhelming and largely uncontroversial.

That’s how we began. We used this preposterous sentence to open a “paper” consisting of 3,000 words of utter nonsense posing as academic scholarship. Then a peer-reviewed academic journal in the social sciences accepted and published it.

This paper should never have been published. Titled “The Conceptual Penis as a Social Construct,” our paper “argues” that “The penis vis-à-vis maleness is an incoherent construct. We argue that the conceptual penis is better understood not as an anatomical organ but as a gender-performative, highly fluid social construct.” As if to prove philosopher David Hume’s claim that there is a deep gap between what is and what ought to be, our should-never-have-been-published paper was published in the open-access (meaning that articles are freely accessible and not behind a paywall), peer-reviewed journal Cogent Social Sciences.

Assuming the pen names “Jamie Lindsay” and “Peter Boyle,” and writing for the fictitious “Southeast Independent Social Research Group,” we wrote an absurd paper loosely composed in the style of post-structuralist discursive gender theory. The paper was ridiculous by intention, essentially arguing that penises shouldn’t be thought of as male genital organs but as damaging social constructions. We made no attempt to find out what “post-structuralist discursive gender theory” actually means. We assumed that if we were merely clear in our moral implications that maleness is intrinsically bad and that the penis is somehow at the root of it, we could get the paper published in a respectable journal.

This already damning characterization of our hoax understates our paper’s lack of fitness for academic publication by orders of magnitude. We didn’t try to make the paper coherent; instead, we stuffed it full of jargon (like “discursive” and “isomorphism”), nonsense (like arguing that hypermasculine men are both inside and outside of certain discourses at the same time), red-flag phrases (like “pre-post-patriarchal society”), lewd references to slang terms for the penis, insulting phrasing regarding men (including referring to some men who choose not to have children as being “unable to coerce a mate”), and allusions to rape (we stated that “manspreading,” a complaint levied against men for sitting with their legs spread wide, is “akin to raping the empty space around him”). After completing the paper, we read it carefully to ensure it didn’t say anything meaningful, and as neither one of us could determine what it is actually about, we deemed it a success.

Why did Boghossian and Lindsay do this?

Sokal exposed an infatuation with academic puffery that characterizes the entire project of academic postmodernism. Our aim was smaller yet more pointed. We intended to test the hypothesis that flattery of the academic Left’s moral architecture in general, and of the moral orthodoxy in gender studies in particular, is the overwhelming determiner of publication in an academic journal in the field. That is, we sought to demonstrate that a desire for a certain moral view of the world to be validated could overcome the critical assessment required for legitimate scholarship. Particularly, we suspected that gender studies is crippled academically by an overriding almost-religious belief that maleness is the root of all evil. On the evidence, our suspicion was justified.

In the words of Graham Harman,

We kind of deserve it. There is still far too much empty jargon of this sort in the humanities and social sciences fields. Quite aside from whether or not you find the jargon off-putting, it leads to very bad writing, and when writing sounds bad it’s a much more serious sign of bad thinking than most people realize. (Nietzsche was on to this a long time ago, when he said that the only way to improve your writing is to improve your thoughts. Methodologically, I find the converse to be true as well. It is through trying to make your thoughts more readable that you make them better thoughts.) And again, I was one of the few people in the environs of continental philosophy who deeply enjoyed the original Sokal hoax. Until we stop writing (and thinking) like this, we will be repeatedly targeted by such hoaxes, and they will continue to sneak through. We ought to be embarrassed by this, and ought to ask ourselves some tough questions about our disciplinary norms, rather than pretending to be outraged at the “unethical behavior” of the hoax authors.

Endless turf war….

The authors worry that gender studies folk will believe that, “…men do often suffer from machismo braggadocio, and that there is an isomorphism between these concepts via some personal toxic hypermasculine conception of their penises.” But I don’t really see why a gender studies academic wouldn’t believe this… This is NOT a case of cognitive dissonance.

As much as the authors like to pretend like they have “no idea” what they are talking about, they clearly do. They are taking existing gender study ideas and just turning up the volume and adding more jargon. As if this proves a point against the field.

The authors’ biases are on their sleeves. Their arguments are about as effective as a Men’s Rights Activist on Reddit. By using a backhanded approach in an attempt to give a coup de grâce to gender studies academaniacs, all they’ve done is blow $625 and “exposed” the already well known issue of pay-to-play. If they wanted to make an actual case against the “feminazis” writ large, I suggest they “man” up and actually make a real argument rather than show a bunch of fancy words can fool some people. Ah!, but far from being a meta-analytical multiplier of defense, quantum homeomorphism slithers through the conceptual penis!

Phantom Originary Intentionality: Thought of the Day 16.0


Phantom limbs and anosognosias – cases of abnormal impressions of the presence or absence of parts of our body – seem like handy illustrations of an irreducible, first-person dimension of experience, of the sort that will delight the phenomenologist, who will say: aha! there is an empirical case of self-reference which externalist, third-person explanations of the type favoured by deflationary materialists, cannot explain away, cannot do away with. As Merleau-Ponty would say, and Varela after him, there is something about my body which makes it irreducibly my own (le corps propre). Whether illusory or not, such images (phantoms) have something about them such that we perceive them as our own, not someone else’s (well, some agnosias are different: thinking our paralyzed limb is precisely someone else’s, often a relative’s). One might then want to insist that phantom limbs testify to the transcendence of mental life! Indeed, in one of the more celebrated historical cases of phantom limb syndrome, Lord Horatio Nelson, having lost his right arm in a sea battle off of Tenerife, suffered from pains in his phantom hand. Most importantly, he apparently declared that this phantom experience was a “direct proof of the existence of the soul”. Although the materialist might agree with the (reformed) phenomenologist to reject dualism and accept that we are not in our bodies like a sailor in a ship, she might not want to go and declare, as Merleau-Ponty does, that “the mind does not use the body, but fulfills itself through it while at the same time transferring the body outside of physical space.” This way of talking goes back to the Husserlian distinction between Körper, ‘body’ in the sense of one body among others in a vast mechanistic universe of bodies, and Leib, ‘flesh’ in the sense of a subjectivity which is the locus of experience. Now, granted, in cognitivist terms one would want to say that a representation is always my representation; it is not ‘transferable’ like a neutral piece of information, since the way an object appears to me is always a function of my needs and interests. What my senses tell me at any given time relies on my interests as an agent and is determined by them, as described by Andy Clark, who appeals to the combined research traditions of the psychology of perception, new robotics, and Artificial Life. But the phenomenologist will take off from there and build a full-blown defense of intentionality, now recast as ‘motor intentionality’, a notion which goes back to Husserl’s claim in Ideas II that the way the body relates to the external world is crucially through “kinestheses”: all external motions which we perceive are first of all related to kinesthetic sensations, out of which we constitute a sense of space. On this view, our body thus already displays ‘originary intentionality’ in how it relates to the world.

Philosophizing Twistors via Fibration

The basic issue is the question of the so-called arrow of time. This issue is an important subject of examination in mathematical physics as well as in the ontology of spacetime and philosophical anthropology. It reveals a crucial contradiction between the knowledge about time provided by the mathematical models of spacetime in physics and the psychology of time and its ontology. The essence of the contradiction lies in the invariance of the majority of fundamental equations in physics under reversal of the direction of the time arrow (i.e. the change of the variable t to −t in the equations). Neither the metric continuum constituted by the spaces of simultaneity in the spacetime of classical mechanics before the formulation of the Special Theory of Relativity (a spacetime having no metric but only an affine structure), nor Minkowski’s spacetime, nor the GTR spacetime (pseudo-Riemannian), both of the latter having metric structure, distinguishes the categories of past, present and future as ones that are meaningful in physics. Every event may be located with the use of four coordinates with regard to any curvilinear coordinate system. That clashes remarkably with the human perception of time and space. Penrose recognized the necessity of formulating a theory of spacetime that would remove this discrepancy. He remarked that although we feel the passage of time, we do not perceive the “passage” of any of the space dimensions. Theories of spacetime in mathematical physics, while considering continua and metric manifolds, cannot explain the difference between the time dimension and the space dimensions; nor can they explain, by means of geometry, the unidirectionality of the passage of time, which can be comprehended only by means of thermodynamics. The theory of twistor spaces aims at a better understanding – one crucial for the ontology of nature – of the problem of the uniqueness of the time dimension and of the question of the time arrow. There are hypotheses that the question of the time arrow would be easier to solve thanks to the examination of so-called spacetime singularities and the formulation of a time-asymmetric quantum theory of gravitation, that is, a theory of spacetime on the microscale.


Although Lorentzian geometry is the mathematical framework of classical general relativity and can be seen as a good model of the world we live in, the theoretical-physics community has developed instead many models based on a complex space-time picture.

(1) When one tries to make sense of quantum field theory in flat space-time, one finds it very convenient to study the Wick-rotated version of Green functions, since this leads to well defined mathematical calculations and elliptic boundary-value problems. At the end, quantities of physical interest are evaluated by analytic continuation back to real time in Minkowski space-time.

(2) The singularity at r = 0 of the Lorentzian Schwarzschild solution disappears on the real Riemannian section of the corresponding complexified space-time, since r = 0 no longer belongs to this manifold. Hence there are real Riemannian four-manifolds which are singularity-free, and it remains to be seen whether they are the most fundamental in modern theoretical physics.

(3) Gravitational instantons shed some light on possible boundary conditions relevant for path-integral quantum gravity and quantum cosmology.

(4) Unprimed and primed spin-spaces are not (anti-)isomorphic if Lorentzian space-time is replaced by a complex or real Riemannian manifold. Thus, for example, the Maxwell field strength is represented by two independent symmetric spinor fields, and the Weyl curvature is also represented by two independent symmetric spinor fields. Since such spinor fields are no longer related by complex conjugation (i.e. the (anti-)isomorphism between the two spin-spaces), one of them may vanish without the other having to vanish as well. This property gives rise to the so-called self-dual or anti-self-dual gauge fields, as well as to self-dual or anti-self-dual space-times.

(5) The geometric study of this special class of space-time models has made substantial progress by using twistor-theory techniques. The underlying idea is that conformally invariant concepts such as null lines and null surfaces are the basic building blocks of the world we live in, whereas space-time points should only appear as a derived concept. By using complex-manifold theory, twistor theory provides an appropriate mathematical description of this key idea.

A possible mathematical motivation for twistors can be described as follows.  In two real dimensions, many interesting problems are best tackled by using complex-variable methods. In four real dimensions, however, the introduction of two complex coordinates is not, by itself, sufficient, since no preferred choice exists. In other words, if we define the complex variables

z1 ≡ x1 + ix2 —– (1)

z2 ≡ x3 + ix4 —– (2)

we rely too much on this particular coordinate system, and a permutation of the four real coordinates x1, x2, x3, x4 would lead to new complex variables not well related to the first choice. One is thus led to introduce three complex variables u, z1u, z2u: the first variable u tells us which complex structure to use, and the next two are the complex coordinates themselves. In geometric language, we start with the complex projective three-space P3(C) with complex homogeneous coordinates (x, y, u, v), and we remove the complex projective line given by u = v = 0. Any line in P3(C) − P1(C) is thus given by a pair of equations

x = au + bv —– (3)

y = cu + dv —– (4)

In particular, we are interested in those lines for which c = −b̄, d = ā. The determinant ∆ of (3) and (4) is thus given by

∆ = aā + bb̄ = |a|² + |b|² —– (5)

which implies that the line given above never intersects the line x = y = 0, with the obvious exception of the case when they coincide. Moreover, no two lines intersect, and they fill out the whole of P3(C) − P1(C). This leads to the fibration P3(C) − P1(C) → R4 by assigning to each point of P3(C) − P1(C) the four coordinates Re(a), Im(a), Re(b), Im(b). Restriction of this fibration to a plane of the form

αu + βv = 0 —— (6)

yields an isomorphism C2 ≅ R4, which depends on the ratio (α,β) ∈ P1(C). This is why the picture embodies the idea of introducing complex coordinates.
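
A small numerical check, in Python, of the construction just described, under the convention c = −b̄, d = ā used above: the pair (a, b) recovered from a point (x, y, u, v) with (u, v) ≠ (0, 0) is unchanged when the homogeneous coordinates are rescaled, so the assignment of Re(a), Im(a), Re(b), Im(b) to points of P3(C) − P1(C) is well defined. This is an illustrative sketch, not code from any twistor library.

```python
def fibre_coordinates(x, y, u, v):
    """Solve x = a*u + b*v and y = -conj(b)*u + conj(a)*v for (a, b), assuming (u, v) != (0, 0)."""
    norm = abs(u) ** 2 + abs(v) ** 2
    a = (x * u.conjugate() + v * y.conjugate()) / norm
    b = (v.conjugate() * x - u * y.conjugate()) / norm
    return a, b

# A sample point of P3(C) away from the removed line u = v = 0.
x, y, u, v = 1 + 2j, 3 - 1j, 2 + 0.5j, -1 + 1j
a, b = fibre_coordinates(x, y, u, v)

# The same projective point, with its homogeneous coordinates rescaled by a nonzero lambda.
lam = 0.3 - 1.7j
a2, b2 = fibre_coordinates(lam * x, lam * y, lam * u, lam * v)

print(abs(a - a2) < 1e-9, abs(b - b2) < 1e-9)   # both True: (a, b) depends only on the line
# The image of the point in R4 is then (Re(a), Im(a), Re(b), Im(b)).
```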


Such a fibration depends on the conformal structure of R4. Hence, it can be extended to the one-point compactification S4 of R4, so that we get a fibration P3(C) → S4 where the line u = v = 0, previously excluded, sits over the point at ∞ of S4 = R4 ∪ {∞}. This fibration is naturally obtained if we use the quaternions H to identify C4 with H2 and the four-sphere S4 with P1(H), the quaternion projective line. We should now recall that the quaternions H are obtained from the vector space R of real numbers by adjoining three symbols i, j, k such that

i² = j² = k² = −1 —– (7)

ij = −ji = k,  jk = −kj = i,  ki = −ik = j —– (8)

Thus, a general quaternion x ∈ H is defined by

x ≡ x1 + x2i + x3j + x4k —– (9)

where x1, x2, x3, x4 ∈ R, whereas the conjugate quaternion x̄ is given by

x̄ ≡ x1 − x2i − x3j − x4k —– (10)

Note that conjugation obeys the identities

(xy)¯ = ȳ x̄ —– (11)

x x̄ = x̄ x = ∑μ=1,…,4 xμ² ≡ |x|² —– (12)

If a quaternion does not vanish, it has a unique inverse given by

x⁻¹ ≡ x̄/|x|² —– (13)

Interestingly, if we identify i with √−1, we may view the complex numbers C as contained in H by taking x3 = x4 = 0. Moreover, every quaternion x as in (9) has a unique decomposition

x = z1 + z2j —– (14)

where z1 ≡ x1 + x2i, z2 ≡ x3 + x4i, by virtue of (8). This property enables one to identify H with C2, and finally H2 with C4, as we said following (6).
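
A minimal Python sketch (a hand-rolled quaternion type, not any particular library’s API) that checks identities (11)–(13) and the decomposition (14) numerically for a sample quaternion.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quaternion:
    a: float  # x1, the real part
    b: float  # coefficient of i
    c: float  # coefficient of j
    d: float  # coefficient of k

    def __add__(self, o):
        return Quaternion(self.a + o.a, self.b + o.b, self.c + o.c, self.d + o.d)

    def __mul__(self, o):
        # Hamilton product, following i^2 = j^2 = k^2 = -1, ij = k, jk = i, ki = j.
        return Quaternion(
            self.a * o.a - self.b * o.b - self.c * o.c - self.d * o.d,
            self.a * o.b + self.b * o.a + self.c * o.d - self.d * o.c,
            self.a * o.c - self.b * o.d + self.c * o.a + self.d * o.b,
            self.a * o.d + self.b * o.c - self.c * o.b + self.d * o.a,
        )

    def conj(self):
        return Quaternion(self.a, -self.b, -self.c, -self.d)

    def norm_sq(self):
        return self.a ** 2 + self.b ** 2 + self.c ** 2 + self.d ** 2

def close(p, q, tol=1e-12):
    return all(abs(s - t) < tol for s, t in zip((p.a, p.b, p.c, p.d), (q.a, q.b, q.c, q.d)))

x = Quaternion(1.0, 2.0, -0.5, 3.0)
y = Quaternion(-2.0, 1.0, 4.0, 0.5)
j = Quaternion(0.0, 0.0, 1.0, 0.0)

print(close((x * y).conj(), y.conj() * x.conj()))                    # (11): conjugation reverses products
print(close(x * x.conj(), Quaternion(x.norm_sq(), 0.0, 0.0, 0.0)))   # (12): x times its conjugate is |x|^2
n = x.norm_sq()
x_inv = Quaternion(x.a / n, -x.b / n, -x.c / n, -x.d / n)            # (13): inverse = conjugate / |x|^2
print(close(x * x_inv, Quaternion(1.0, 0.0, 0.0, 0.0)))
z1 = Quaternion(x.a, x.b, 0.0, 0.0)                                  # z1 = x1 + x2*i
z2 = Quaternion(x.c, x.d, 0.0, 0.0)                                  # z2 = x3 + x4*i
print(close(x, z1 + z2 * j))                                         # (14): x = z1 + z2*j
```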

The map σ : P3(C) → P3(C) defined by

σ(x, y, u, v) = (−ȳ, x̄, −v̄, ū) —– (15)

preserves the fibration because c = −b̄, d = ā, and induces the antipodal map on each fibre.


Third Space Theory of Postcoloniality. Note Quote.


Writers such as Homi Bhabha and Salman Rushdie, who proceed from a consideration of the nature of postcolonial societies and the types of hybridization these various cultures have produced, have proposed a radical rethinking – an appropriation of European thinking by a different discourse. Whereas in European thinking history and the past are the reference point for epistemology, in postcolonial thought space annihilates time. History is rewritten and realigned from the standpoint of the victims of the destructive progress. Hybridity replaces a temporal linearity with a spatial plurality. Salman Rushdie makes this obvious when commenting on the message of his controversial novel, The Satanic Verses, in an essay called “In Good Faith” as follows:

The Satanic Verses celebrates hybridity, impurity, intermingling, the transformation that comes of new and unexpected combinations of human beings, cultures, ideas, politics, movies, songs. It rejoices in mongrelization and fears the absolutism of the Pure. Melange, hotchpotch, a bit of this and a bit of that is how newness enters the world. It is the great possibility that mass migration gives the world, and I have tried to embrace it. The Satanic Verses is for change-by-fusion, change-by-conjoining. It is a love-song to our mongrel selves.

Even though on the surface postcolonial texts may contain race divisions and cultural differences, they all contain germs of community which, as they grow in the mind of the reader, detach from the apparently inescapable dialectic of history. Thus, postcolonial literatures have begun to deal with problems of transmuting time into space and of attempting to construct a future. They highlight the acceptance of difference on equal terms. Now both literary critics and historians are recognizing cross-culturality as the possible ending point of an apparently endless human history of conquest and occupation. They recognize that the myth of purity or essence, the Eurocentric viewpoint, must be challenged. The recent approaches show that the power of postcolonial theory lies in its comparative methodology and the hybridized and syncretic view of the modern world which it implies.

Of the various points in which postcolonial texts intersect, place has a paramount importance. In his dialogism thesis, Mikhail Bakhtin emphasizes a space of enunciation where negotiation of discursive doubleness gives birth to a new speech act:

The hybrid is not only double-voiced and double-accented . . . but is also double-languaged; for in it there are not only (and not even so much) two individual consciousnesses, two voices, two accents, as there are [doublings of] socio-linguistic consciousnesses, two epochs . . . that come together and consciously fight it out on the territory of the utterance.

Also, Homi Bhabha talks about a third space of enunciation, a hybrid space or a new position in which communication is possible. Third Space theory emerges from the sociocultural tradition in psychology identified with Lev Vygotsky. Sociocultural approaches are concerned with the “… constitutive role of culture in mind, i.e., on how mind develops by incorporating the community’s shared artifacts accumulated over generations”. Bhabha applies socioculturalism directly to the postcolonial condition, where there are “… unequal and uneven forces of cultural representation”. For Bhabha, such negotiation is neither assimilation nor collaboration as it makes possible the emergence of an “interstitial” agency that refuses the binary representation of social antagonism. The “interstitial perspective”, as Bhabha calls it, replaces the “polarity of a prefigurative self-generating nation ‘in-itself’ and extrinsic other nations” with the notion of cultural liminality within the nation. The liminal figure of the nation-space would ensure that no political ideologies could claim transcendent or metaphysical authority for themselves. This is because the subject of cultural discourse – the agency of a people – is split in the discursive ambivalence that emerges in the contest of narrative authority between the pedagogical and the performative, which is to say, between the people’s status as historical objects of a nationalist pedagogy and their ability to perform themselves as subjects of a process of signification that must erase any prior or originary national presence. Hybrid agencies find their voice in a dialectic that does not seek cultural supremacy or sovereignty. They deploy the partial culture from which they emerge to construct visions of community, and versions of historic memory, that give narrative form to the minority positions they occupy: “the outside of the inside; the part in the whole”.

This “new position” Bhabha proposes is closely related to the “homeless” existence of post-colonial persons. It certainly cannot be assumed to be an independent third space already there, a “no-man’s-land” between the nations. Instead, a way of cultural syncretization, i.e. a medium of negotiating cultural antagonisms, has to be created. Cultural difference has to be acknowledged: “Culture does imply difference, but the differences now are no longer, if you wish, taxonomical; they are interactive and refractive”. This position emphasizes, contrary to the too facile assumption of world literature and world culture as the stages of a multicultural cosmopolitanism already in existence, that the “intellectual trade” takes place mostly on the borders and in the border crossings between cultures where meanings and values are not codified but misunderstood, misrepresented, even falsely adopted. Bhabha explains how beyond fixed cultural (ethnic, gender- and class-related) identities, so-called “hybrid” identities are formed by discontinuous translation and negotiation. Hybridity, liminality, “interrogatory, interstitial space” – these are the positive values Bhabha opposes to a retrograde historicism that continues to dominate Western critical thinking, a “linear narrative of the nation,” with its claims for the “holism of culture and community” and a “fixed horizontal nation-space”. We must, he argues eloquently, undo such thinking with its facile binary oppositions. Rather than emphasizing the opposition between First World and Third World nations, between colonizer and colonized, men and women, black and white, straight and gay, Bhabha would have it, we might more profitably focus on the faultlines themselves, on border situations and thresholds as the sites where identities are performed and contested. Bhabha says, “hybridity to me is the ‘third space’ which enables other positions to emerge”.

Lyotard and Disruption at the Limits of Reason


Delegitimation shook the centric stronghold of authority and legitimacy, while dedifferentiation sought to shake the foundations of hitherto known differences between centers and margins by erosive action within these differences themselves.

Initially, Lyotard attempted to fuse the Freudian libido – a fictional/theoretical energy – with philosophy; it was through this, and through the transformations wrought in the social-political realm, that he managed to free himself of the totalizing aspect of Marxism. His was a commitment to an ontology of events that mingles with the multiplicities of forces and desires at work in any social, political and economic scenario.

Lyotard’s main thesis revolves around the fact that representations always lag behind events, and this is where he tries to establish the relationship between reason and representation. He has always doubted reason’s efficacy, for it operates within the confines of structures, wherein sensual perceptions and psychological factors like emotions and sentiments are always ostracized. The fact of the matter is that one can never work with reason while such factors are stringently kept aside. Reason and representation are what is discursive; the figural is what escapes rational representation. The figural encompasses sensual perceptions and psychological factors like emotions and sentiments. Furthermore, he gets metaphorical, with flatness and depth mapping onto discourse and the figural respectively. Subsequently, what is aimed at is the deconstruction of the two categories of discourse and figural that appear to be opposites, since doing this would break the shackles of the logic of discourse and strip discourse of its prerogative status. With difference corresponding to the figural, the relation between discourse and the figural is measured in difference rather than in opposition. What distinguishes difference from opposition is that in opposition the binary is characterized by strict opposites, whereas in difference the two terms are mutually implicated yet ultimately irreconcilable. Disruption at the limits of reason is what characterizes difference, implying that no rational system of representation can ever enjoy the status of being closed or complete, and that none can escape the impacts of the figural that it tries so hard to keep out.

Psychological Approaches to Cognition and Rationality in Political Science


The theoretical basis of information processing in politics comes largely from psychologists studying other issues and from fields outside the realm of psychology. We assume that the task of translating available information into sets of focused, logically consistent beliefs, judgements and commitments is a subtle process, not at all a straightforward or obvious one; furthermore, although political reasoning may take place largely outside a person’s awareness, political cognition is a very active mental process. Cognitive theories in politics are largely bent on understanding how people selectively attend to, interpret and organise information in ways that allow them to reach coherent understandings. For various reasons, known or unknown to us, such understandings may deviate substantially from the true state of affairs and from whatever mix of information or disinformation is available to be considered.

The two terms ‘belief’ and ‘system’ have been a familiar part of the language of attitude psychology for decades. Let us define a ‘belief system’ by a three-point structure:

  • a set of related beliefs and attitudes taken together
  • rules for how these contents of mind are linked to one another
  • linkages between the related beliefs and ideologies.

Now, to model a belief system is to attempt to create an abstract representation of how someone’s set of related beliefs is formed, organised, maintained and modified.

In the psychology of cognition, two terms are in common use nowadays: schemas and scripts, the former being the building blocks of people’s beliefs about the world, and the latter being working assemblages of those building blocks, with an emphasis on sequences of actions involving various schematised objects. Scripts are our images of how events routinely take place; events involve objects and activities for which schematic representations are available. For example, we might have a script of leadership in mixed age groups. Men routinely expect to exercise leadership over women and bristle at having to be subordinate, while women expect the same status differentiation and fear the wrath of others if women acquire leadership responsibilities. This hypothesised script incorporates schemas about the nature of leadership, about gender differences and about emotions such as resentment and fear. Just as schemas organise our understanding of concepts and objects, scripts organise our understanding of activities and events that involve and link those objects and concepts.

Much of modern social psychology is concerned with attribution processes. These refer to the subjective connections people make between cause and effect. Attribution processes, by their nature, involve going beyond the ‘information given’ in the direct observation of events. They are inferential processes that allow us to understand what we think are the meaningful causes and motivations underlying the behaviour we observe directly. They are central elements of the broader constructive processes through which people find meaning in ongoing events. Regardless of how well our attributive reasoning corresponds with objective reality, attribution processes provide us with an enhanced sense of confidence that we understand what is going on around us. Two kinds of attributive processes are heuristics and biases; the former can be considered mental short-cuts by which one is able to circumvent the tediousness of logically exhaustive reasoning, or to fill in lacunae in our knowledge base and reach conclusions that make sense given our already formed assumptions.

Biases can be thought of as tendencies to come to some kinds of conclusions more often than others. We often have to take the short cut of relying on the representativeness of some bit of information, while ignoring other factors that also should be taken into account. We have to attach probabilities. Suppose a foreign service analyst wanted to know whether a move by a foreign government to increase security at a border was part of a larger plan to prepare for a surprise military attack across the border. The cue for the analyst is the border clampdown; one possible meaning is that a military invasion is about to begin. The analyst must decide how likely that is. If the analyst uses the representativeness heuristic, she would decide how typical a border crackdown is as an early sign of a coming invasion. The more typical she feels the border clampdown is as a sign of a coming invasion, the more credibility she will attach to that interpretation of the change. In fact, representativeness – the degree to which some cue resembles or fits as part of the typical form of an interpretation – is an important and legitimate aspect of assessing probabilities. The representativeness heuristic, however, is the tendency to ignore other relevant information and thereby overemphasise the role of representativeness. Representativeness is one of the most prominently and actively investigated cognitive heuristics. Of course, in most real-life settings it cannot be proven that we credit or blame an actor too much for her behaviour or its consequences. However, in carefully designed experiments in which hapless actors obviously have very little control over what happens to them, observers nonetheless hold the actors responsible for their actions and circumstances.
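
A toy numerical illustration of the point above, with invented probabilities: an analyst who judges only by representativeness in effect reads off P(clampdown | invasion), while Bayes’ rule also weighs the low base rate of invasions, and the two answers can differ dramatically.

```python
p_invasion = 0.02                 # base rate: surprise invasions are rare (assumed value)
p_clamp_given_invasion = 0.90     # a clampdown is very "typical" of invasion preparations
p_clamp_given_no_invasion = 0.20  # but clampdowns also happen for routine reasons

p_clamp = (p_clamp_given_invasion * p_invasion
           + p_clamp_given_no_invasion * (1 - p_invasion))
p_invasion_given_clamp = p_clamp_given_invasion * p_invasion / p_clamp

print(f"judging by representativeness alone: ~{p_clamp_given_invasion:.0%}")
print(f"Bayes' rule with the base rate included: {p_invasion_given_clamp:.1%}")
# With these invented numbers the posterior probability of invasion is only about 8%.
```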

Turning now to integrative complexity: it is a combination of two distinct mental activities, differentiation and integration. Differentiation refers to a person’s recognition of multiple issues or facets in thinking about a political problem. Undifferentiated thinking occurs when an individual sees a problem as involving very few distinct issues, or sees all of the issues as leading to nearly the same conclusion. Differentiating one’s understanding of a political situation gives one a better grasp on that situation, but it can cause difficulties too. Different aspects of a political problem may contradict each other or may point to contradictory actions. Differentiating a problem can also lead a decision maker to discover that she does not really have a full grasp of the relevant information, which can be an unpleasant awareness, especially when decisions must be made immediately.

Integration, on the other hand, refers to the careful consideration of the relationships among parts of the problem. As a political actor formulates opinions and possible choices, integrated thinking allows her to see how various courses of action may lead to contradictory outcomes, and how goals might be well served by actions that violate her presuppositions. Integration moves the thinker away from all-or-nothing oversimplification of issues; it thus improves the chances for political compromise, the heart of successful diplomacy. Furthermore, by opening the decision maker’s eyes to the complex interconnections among political problems, it enables her to anticipate the complicated consequences that may follow from her choices. Obviously, high levels of integration can occur only when an individual or a group has successfully differentiated the various issues involved in a problem; without identifying the issues, there is nothing to integrate. However, simple awareness of all the potentially conflicting aspects of a problem does not guarantee that a decision maker will pull those elements together meaningfully. One can recognise any number of ambiguous qualifications, contradictions and non sequiturs, yet ignore most of them in deciding what to believe and what to do. Thus integration requires differentiation, but the reverse does not generally follow.

Integrative complexity may affect the careers of political leaders. It may also help shape the outcome of entire political and military conflicts, not just the future careers of leaders. For example, intense diplomatic activity between the US and the USSR averted a potential third world war during the crisis of 1962, when the US objected to the Soviet missile deployment in Cuba. Taking that case, it has been hypothesised that in very complex political situations, highly integrated thinking is necessary for leaders to discover the availability and superiority of non-military solutions.

Everyone knows that attitudes about a political problem influence our political actions. There are exceptions, but people usually act in ways that further their beliefs and avoid acting in ways that contradict them. We no longer claim that the causal link from beliefs to behaviour is simple; instead, attention is now directed towards understanding the complex and subtle ways in which beliefs influence decision-making. General beliefs are considered less useful in predicting specific actions such as voting behaviour. Some also maintain that general beliefs are important influences on specific actions, though the influence is not a direct cause-and-effect link. Instead, general beliefs produce subtle tendencies to favour some interpretations of events over other plausible interpretations, and to favour some general styles of political action over others when choosing a specific political action. In terms of a political actor’s operational code, there are diagnostic propensities, which are tendencies to interpret ambiguous events in some ways rather than in others, to search for certain kinds of information rather than others, and to exaggerate or ignore the probable role of chance and uncontrollable events. For example, one national leader may immediately look for hostile intentions behind any important diplomatic move on the part of a rival nation, and would search for other evidence confirming his or her initial presumption. By contrast, another leader might be aware that the rival nation has severe internal problems, and presume that any important foreign-policy initiative from that nation is an attempt to distract its citizens from those problems. Choice propensities are tendencies to prefer certain modes of political action to others; they are the expressions in political reasoning of a leader’s general views about how to act effectively in the political arena.

Politics is in its very essence an interpersonal activity. The vast bulk of political planning, commitment and action takes place among groups of people, whether those people come together to pool resources, squabble, or negotiate compromises among their conflicting group interests. What, then, is the psychology of rationality in political groups? Groups do not negate the picture of political cognition sketched above; they complicate it. Groups themselves do not think; it is still individual people who share or hide their personal beliefs and goals.

What is a camel?
It’s a horse designed by a committee.

This old joke is a cynical comment on the creativity of committees. It is easy to point to mediocre decisions made by groups, but there is a more serious problem than middling decisions: groups are capable of profoundly bad decisions. Some of the worst decisions in world history were made by groups that would seem to have been assembled precisely to produce rational, creative policies and judgments.

What characteristics make groups particularly susceptible to poor decisions? First and foremost, the group is highly cohesive. Group members know, trust and like each other; they often share common or similar histories; they enjoy being part of the group and value working in it. Second, the group isolates itself from the possible influence of outsiders. A strong sense of identification with the group leads to lost ties with others who might have valuable information to share. Third, the group lacks any systematic way of doing things. Without formal guidelines for procedure, agenda decisions are made casually and are subject to influences that cut short full deliberation. Fourth, the leaders of such groups tend to be directive. Fifth, the group is experiencing stress, with a sense of urgency about some crisis in which acting quickly seems critical. The choice may be among unpleasant alternatives, the available information may be confusing and incomplete, and the group members may be fatigued. Thus solidarity, isolation, sloppy procedures and directive leadership in a stressful situation make some groups vulnerable to groupthink. Two sets of features describe groupthink. The first set contains working assumptions and styles of interacting that group members carry with them into the work setting. The second set describes faulty deliberations as the group sets about its task: the group members lack adequate contingency plans and so are unprepared to respond quickly if the preferred course of action does not work as the group hopes and believes it will.

To avoid groupthink, first, the leader of the group should actively encourage dissent; she should make it known that dissenting opinions are valued, and valued not just for variety’s sake but because they may be right. Second, the leader should avoid letting her own initial preferences be known. Third, parallel subgroups can be set up early on to work separately on the same tasks. These subgroups will probably develop different assessments and plans, which can be brought to the whole group for consideration. This neatly disrupts the tendency of groups to focus on just one option for the upcoming decision.

A choice is rational if it follows certain careful procedures that lead to the selection of the course of action offering the greatest expected value, benefit or utility for the chooser. Group members making a rational decision first identify the opportunity and need for a choice. They then identify every conceivable course of action available to them and determine all possible consequences of each. They evaluate each possible consequence in terms of,

1) its likelihood of occurrence,
2) its value if it does occur.

Now the decision-making group has a problem and a set of possible solutions. This information is then distilled into a single choice by working backwards. The probability of each consequence is multiplied by its value; the products of all consequences for each course of action are then added up. The resulting sums are the expected values of each possible course of action. The group then simply selects the option with the largest expected value (or the smallest negative value if the choice is a no-win situation).
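As a worked illustration of the calculation just described, here is a minimal sketch with two invented courses of action; the option names, probabilities and values are purely hypothetical and exist only to show the probability-times-value bookkeeping.

```python
# Minimal sketch of the expected-value calculation described above.
# Options, consequences, probabilities and values are all hypothetical.

options = {
    "negotiate": [          # (probability of consequence, value if it occurs)
        (0.6,  50),         # agreement reached
        (0.4, -10),         # talks stall
    ],
    "blockade": [
        (0.5,  30),         # rival backs down
        (0.3, -40),         # prolonged standoff
        (0.2, -80),         # escalation to open conflict
    ],
}

# Expected value of an option = sum over its consequences of probability * value.
expected = {name: sum(p * v for p, v in outcomes)
            for name, outcomes in options.items()}

best = max(expected, key=expected.get)
print(expected)   # {'negotiate': 26.0, 'blockade': -13.0}
print(best)       # 'negotiate'
```

The group, in this toy case, would choose the option with the larger sum; the model says nothing about where the probabilities and values come from, which is exactly where the cognitive limits discussed earlier re-enter.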

There is also something called posterior rationality, in which the goals and rationale behind a choice are discovered after the choice is made. The world may be too unpredictable and complicated for most well-intended plans to have much chance of success. If so, traditional rationality may be irrelevant as a model for complex organisations. However, goals and intentions can still be inferred in reverse, by reinterpreting earlier choices and redefining one’s original goals.

In conclusion, political actors, groups and institutions such as governments do not simply observe and understand political circumstances in some automatic fashion that accurately captures true political realities. Political realities are for the most part social constructions, and the construing process is built on the philosophy and psychology of human cognition. Political cognition, like any other cognition, is extremely complex. It is easy enough to find examples of poor political judgment; the wonder may be that politics often seems rational at all, given all the challenges and limitations. To the extent that we can find a sense of coherence in politics and government, we should acknowledge the importance of the social construction process in shaping that coherence. Although political activists devote much more time to the political agenda than does the average citizen, they rely on the same cognitive resources and procedures and hence are subject to the same biases and distortions as any thinking person.