# Probability Space Intertwines Random Walks – Thought of the Day 144.0

Many deliberations of stochasticity start with “let (Ω, F, P) be a probability space”. One can actually follow such discussions without having the slightest idea what Ω is and who lives inside. So, what is “Ω, F, P” and why do we need it? Indeed, for many users of probability and statistics, a random variable X is synonymous with its probability distribution μX and all computations such as sums, expectations, etc., done on random variables amount to analytical operations such as integrations, Fourier transforms, convolutions, etc., done on their distributions. For defining such operations, you do not need a probability space. Isn’t this all there is to it?

One can in fact compute quite a lot of things without using probability spaces in an essential way. However, the notions of probability space and random variable are central to modern probability theory, so it is important to understand why and when these concepts are relevant.

From a modelling perspective, the starting point is a set of observations taking values in some set E (think for instance of numerical measurement, E = R) for which we would like to build a stochastic model. We would like to represent such observations x1, . . . , xn as samples drawn from a random variable X defined on some probability space (Ω, F, P). It is important to see that the only natural ingredient here is the set E where the random variables will take their values: the set of events Ω is not given a priori and there are many different ways to construct a probability space (Ω, F, P) for modelling the same set of observations.

Sometimes it is natural to identify Ω with E, i.e., to identify the randomness ω with its observed effect. For example, if we consider the outcome of a die-rolling experiment as an integer-valued random variable X, we can define the set of events to be precisely the set of possible outcomes: Ω = {1, 2, 3, 4, 5, 6}. In this case, X(ω) = ω: the outcome of the randomness is identified with the randomness itself. This choice of Ω is called the canonical space for the random variable X. Here the random variable X is simply the identity map X(ω) = ω, and the probability measure P is formally the same as the distribution of X. Note that X is then a one-to-one map: given the outcome of X, one knows which scenario has happened, so any other random variable Y is completely determined by the observation of X. Therefore, using the canonical construction for the random variable X, we cannot define, on the same probability space, another random variable which is independent of X: X will be the sole source of randomness for all other variables in the model. This also shows that, although the canonical construction is the simplest way to build a probability space for representing a given random variable, it forces us to identify this particular random variable with the “source of randomness” in the model. Therefore, when we want to deal with models with a sufficiently rich structure, we need to distinguish Ω – the set of scenarios of randomness – from E, the set of values of our random variables.
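The canonical construction for the die can be sketched in a few lines of Python (the variable names are of course hypothetical illustrations):

```python
# Canonical space for a die roll: identify Ω with the outcome set E.
omega = [1, 2, 3, 4, 5, 6]           # Ω = E = {1, ..., 6}
P = {w: 1 / 6 for w in omega}        # uniform probability measure

def X(w):
    return w                         # X is the identity map X(ω) = ω

def Y(w):
    return w % 2                     # parity: fully determined by X(ω)

# On the canonical space, every other variable is a deterministic
# function of X, so expectations reduce to sums over Ω weighted by P.
E_X = sum(X(w) * P[w] for w in omega)
print(round(E_X, 2))                 # 3.5
```

Note that no variable defined on this Ω can be independent of X, which is exactly the limitation discussed above.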

Let us give an example where it is natural to distinguish the source of randomness from the random variable itself. For instance, if one is modelling the market value of a stock at some date T in the future as a random variable S1, one may consider that the stock value is affected by many factors such as external news, market supply and demand, economic indicators, etc., summed up in some abstract variable ω, which may not even have a numerical representation: it corresponds to a scenario for the future evolution of the market. S1(ω) is then the stock value if the market scenario which occurs is given by ω. If the only interesting quantity in the model is the stock price then one can always label the scenario ω by the value of the stock price S1(ω), which amounts to identifying all scenarios where the stock S1 takes the same value and using the canonical construction. However if one considers a richer model where there are now other stocks S2, S3, . . . involved, it is more natural to distinguish the scenario ω from the random variables S1(ω), S2(ω),… whose values are observed in these scenarios but may not completely pin them down: knowing S1(ω), S2(ω),… one does not necessarily know which scenario has happened. In this way one reserves the possibility of adding more random variables later on without changing the probability space.

These observations have the following important consequence: the probabilistic description of a random variable X can be reduced to the knowledge of its distribution μX only in the case where X is the only source of randomness. In that case, a stochastic model can be built using a canonical construction for X. In all other cases – as soon as we are concerned with a second random variable which is not a deterministic function of X – the underlying probability measure P contains more information about X than just its distribution. In particular, it contains all the information about the dependence of the random variable X on all the other random variables in the model: specifying P means specifying the joint distributions of all random variables constructed on Ω. For instance, knowing the distributions μX, μY of two variables X, Y does not allow one to compute their covariance or joint moments. Only in the case where all random variables involved are mutually independent can one reduce all computations to operations on their distributions. This is the case covered in most introductory texts on probability, which explains why one can go quite far, for example in the study of random walks, without formalizing the notion of probability space.
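The point that marginals underdetermine dependence can be made concrete with a small sketch: two joint laws with identical marginals on {0, 1} but different covariances.

```python
# Two joint distributions for (X, Y), both with uniform marginals on {0, 1},
# but with different dependence structure and hence different covariance.
joint_indep = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}   # X independent of Y
joint_equal = {(0, 0): 0.5, (1, 1): 0.5}                        # Y = X almost surely

def marginal_x(joint):
    # Sum the joint probabilities over y to recover the law of X.
    return {x: sum(p for (a, _), p in joint.items() if a == x) for x in (0, 1)}

def cov(joint):
    # Cov(X, Y) = E[XY] - E[X]E[Y], computed directly from the joint law.
    ex = sum(x * p for (x, _), p in joint.items())
    ey = sum(y * p for (_, y), p in joint.items())
    exy = sum(x * y * p for (x, y), p in joint.items())
    return exy - ex * ey

print(marginal_x(joint_indep) == marginal_x(joint_equal))  # True: same marginals
print(cov(joint_indep), cov(joint_equal))                  # 0.0 vs 0.25
```

Knowing μX and μY alone, one cannot tell which of the two joint laws is in force; that information lives in P.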

# Utopia as Emergence Initiating a Truth. Thought of the Day 104.0

It is true that, in our contemporary world, traditional utopian models have withered, but today a new utopia of canonical majority has taken over the space of any action transformative of current social relations. Instead of radicalness, conformity has become the main expression of solidarity for the subject abandoned to her consecrated individuality. Where past utopias inscribed a collective vision to be fulfilled for future generations, the present utopia confiscates the future of the individual, unless she registers in a collective, popularized expression of the norm that reaps culture, politics, morality, and the like. The ideological outcome of the canonical utopia is the belief that the majority constitutes a safety net for individuality. If the future of the individual is bleak, at least there is some hope in saving his/her present.

This condition reiterates Ernst Bloch’s distinction between anticipatory and compensatory utopia, with the latter gaining ground today (Ruth Levitas). By discarding the myth of a better future for all, the subject succumbs to the immobilizing myth of a safe present for herself (the ultimate transmutation of individuality to individualism). The world can surmount Difference simply by taking away its painful radicalness, replacing it with a non-violent, pluralistic, and multi-cultural present – a present Žižek harshly criticizes for its anti-rational status. In line with Badiou and Jameson, Žižek discerns behind the multitude of identities and lifestyles in our world the dominance of the One and the eradication of Difference (the void of antagonism). It would have been ideal, if pluralism were not translated to populism and the non-violent to a sanctimonious respect of Otherness.

Badiou also points to the nihilism that permeates the modern ethical ideology that puts forward the “recognition of the other”, the respect of “differences”, and “multi-culturalism”. Such an ethics is supposed to protect the subject from discriminatory behaviours on the basis of sex, race, culture, religion, and so on, since one must display “tolerance” towards others who maintain different patterns of thinking and behaviour. For Badiou, this ethical discourse is far from effective and truthful, as is revealed by the competing axes it forges (e.g., the oppositions between “tolerance” and “fanaticism”, “recognition of the other” and “identitarian fixity”).

Badiou denounces the decomposed religiosity of current ethical discourse, in the face of the pharisaic advocates of the right to difference who are “clearly horrified by any vigorously sustained difference”. The pharisaism of this respect for difference lies in the fact that it suggests the acceptance of the other in so far as s/he is a “good other”; in other words, in so far as s/he is the same as everyone else. Such an ethical attitude ironically affirms the hegemonic identity of those who opt for integration of the different other, which is to say, the other is requested to suppress his/her difference, so that s/he partakes in the “Western identity”.

Rather than equating being with the One, the law of being is the multiple “without one”, that is, every multiple being is a multiple of multiples, stretching alterity into infinity; alterity is simply “what there is” and our experience is “the infinite deployment of infinite differences”. Only the void can discontinue this multiplicity of being, through the event that “breaks” with the existing order and calls for a “new way of being”. Thus, a radical utopian gesture needs to emerge from the perspective of the event, initiating a truth process.

# Dialectics of God: Lautman’s Mathematical Ascent to the Absolute. Paper.


The first of Lautman’s two theses (On the unity of the mathematical sciences) takes as its starting point a distinction that Hermann Weyl made in his work on group theory and quantum mechanics. Weyl distinguished between ‘classical’ mathematics, which found its highest flowering in the theory of functions of complex variables, and the ‘new’ mathematics represented by (for example) the theory of groups and abstract algebras, set theory and topology. For Lautman, the ‘classical’ mathematics of Weyl’s distinction is essentially analysis, that is, the mathematics that depends on some variable tending towards zero: convergent series, limits, continuity, differentiation and integration. It is the mathematics of arbitrarily small neighbourhoods, and it reached maturity in the nineteenth century. The ‘new’ mathematics of Weyl’s distinction, on the other hand, is ‘global’; it studies the structures of ‘wholes’. Algebraic topology, for example, considers the properties of an entire surface rather than aggregations of neighbourhoods. Lautman re-draws the distinction:

In contrast to the analysis of the continuous and the infinite, algebraic structures clearly have a finite and discontinuous aspect. Though the elements of a group, field or algebra (in the restricted sense of the word) may be infinite, the methods of modern algebra usually consist in dividing these elements into equivalence classes, the number of which is, in most applications, finite.
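Lautman’s observation can be illustrated in miniature with the familiar example of congruence classes of integers (a standard example, not one drawn from Lautman’s own text): the infinite set Z falls into finitely many equivalence classes.

```python
# The infinite set of integers, divided into finitely many equivalence
# classes under congruence mod 5: the "finite and discontinuous" aspect
# of algebraic method, applied to an infinite collection of elements.
def residue_class(n, modulus=5):
    return n % modulus

sample = range(-20, 21)                      # a finite window into the infinite set Z
classes = {residue_class(n) for n in sample}
print(sorted(classes))                       # [0, 1, 2, 3, 4]: exactly five classes
```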

In his other major thesis (Essay on the notions of structure and existence in mathematics), Lautman gives his dialectical thought a more philosophical and polemical expression. The thesis is composed of ‘structural schemas’ and ‘origination schemas’. The three structural schemas are: local/global, intrinsic properties/induced properties, and the ‘ascent to the absolute’. The first two of these three schemas are close to Lautman’s ‘unity’ thesis. The ‘ascent to the absolute’ is a different sort of pattern; it involves a progress from mathematical objects that are in some sense ‘imperfect’ towards an object that is ‘perfect’ or ‘absolute’. His two mathematical examples of this ‘ascent’ are: class field theory, which ‘ascends’ towards the absolute class field, and the covering surfaces of a given surface, which ‘ascend’ towards a simply-connected universal covering surface. In each case, there is a corresponding sequence of nested subgroups, which induces a ‘stepladder’ structure on the ‘ascent’. This dialectical pattern is rather different from the others. The earlier examples were pairs of notions (finite/infinite, local/global, etc.) in which neither member of the pair was inferior to the other. Lautman argues that on some occasions finite mathematics offers insight into infinite mathematics. In mathematics, the finite is not a somehow imperfect version of the infinite. Similarly, the ‘local’ mathematics of analysis may depend for its foundations on ‘global’ topology, but the former is not a botched or somehow inadequate version of the latter. Lautman introduces the section on the ‘ascent to the absolute’ by rehearsing Descartes’s argument that his own imperfections lead him to recognise the existence of a perfect being (God). Man (for Descartes) is not the dialectical opposite of or alternative to God; rather, man is an imperfect image of his creator.
In a similar movement of thought, according to Lautman, reflection on ‘imperfect’ class fields and covering surfaces leads mathematicians up to ‘perfect’, ‘absolute’ class fields and covering surfaces respectively.

Albert Lautman, Dialectics in Mathematics

# Infinitesimal and Differential Philosophy. Note Quote.

If difference is the ground of being qua becoming, it is not difference as contradiction (Hegel), but as infinitesimal difference (Leibniz). Accordingly, the world is an ideal continuum or transfinite totality (Fold: Leibniz and the Baroque) of compossibilities and incompossibilities analyzable into an infinity of differential relations (Desert Islands and Other Texts). As the physical world is merely composed of contiguous parts that actually divide until infinity, it finds its sufficient reason in the reciprocal determination of evanescent differences (dy/dx, i.e. the perfectly determinable ratio or intensive magnitude between indeterminate and unassignable differences that relate virtually but never actually). But what is an evanescent difference if not a speculation or fiction? Leibniz refuses to make a distinction between the ontological nature and the practical effectiveness of infinitesimals. For even if they have no actuality of their own, they are nonetheless the genetic requisites of actual things.

Moreover, infinitesimals are precisely those paradoxical means through which the finite understanding is capable of probing into the infinite. They are the elements of a logic of sense, that great logical dream of a combinatory or calculus of problems (Difference and Repetition). On the one hand, intensive magnitudes are entities that cannot be determined logically, i.e. in extension, even if they appear or are determined in sensation only in connection with already extended physical bodies. This is because in themselves they are determined at infinite speed. Is not the differential precisely this problematic entity at the limit of sensibility that exists only virtually, formally, in the realm of thought? Isn’t the differential precisely a minimum of time, which refers only to the swiftness of its fictional apprehension in thought, since it is synthesized in Aion, i.e. in a time smaller than the minimum of continuous time and hence in the interstitial realm where time takes thought instead of thought taking time?

Contrary to the Kantian critique that seeks to eliminate the duality between finite understanding and infinite understanding in order to avoid the contradictions of reason, Deleuze thus agrees with Maïmon that we shouldn’t speak of differentials as mere fictions unless they require the status of a fully actual reality in that infinite understanding. The alternative between mere fictions and actual reality is a false problem that hides the paradoxical reality of the virtual as such: real but not actual, ideal but not abstract. If Deleuze is interested in the esoteric history of differential philosophy, this is as a speculative alternative to the exoteric history of the extensional science of actual differences and to Kantian critical philosophy. It is precisely through conceptualizing intensive, differential relations that finite thought is capable of acquiring consistency without losing the infinite in which it plunges. This brings us back to Leibniz and Spinoza. As Deleuze writes about the former: no one has gone further than Leibniz in the exploration of sufficient reason [and] the element of difference and therefore [o]nly Leibniz approached the conditions of a logic of thought. Or as he argues of the latter, fictional abstractions are only a preliminary stage for thought to become more real, i.e. to produce an expressive or progressive synthesis: The introduction of a fiction may indeed help us to reach the idea of God as quickly as possible without falling into the traps of infinite regression. In Maïmon’s reinvention of the Kantian schematism as well as in the Deleuzian system of nature, the differentials are the immanent noumena that are dramatized by reciprocal determination in the complete determination of the phenomenal. Even the Kantian concept of the straight line, Deleuze emphasizes, is a dramatic synthesis or integration of an infinity of differential relations. 
In this way, infinitesimals constitute the distinct but obscure grounds enveloped by clear but confused effects. They are not empirical objects but objects of thought. Even if they are only known as already developed within the extensional becomings of the sensible and covered over by representational qualities, as differences they are problems that do not resemble their solutions and as such continue to insist in an enveloped, quasi-causal state.


# Computer Algebra Systems (CAS): Mathematica. Note Quote.

If we are generous, there is one clear analogue to Leibniz’s vision in the contemporary world, and that is a computer algebra system (CAS), such as Mathematica, Maple, or Maxima. A computer algebra system is a piece of software which allows one to perform specific mathematical computations, such as differentiation or integration, as well as basic programming tasks, such as list manipulation. As the development of CASs has progressed hand in hand with the growth of the software industry and scientific computing, they have come to incorporate a large amount of functionality, spanning many different scientific and technical domains.
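As a toy illustration of the kind of purely symbolic rule a CAS applies, here is a minimal polynomial differentiator and integrator in Python; a real CAS such as Mathematica operates on general expression trees, so this is only a sketch of the idea:

```python
# Polynomials in one variable, represented as coefficient lists
# [c0, c1, c2, ...] standing for c0 + c1*x + c2*x^2 + ...
def differentiate(coeffs):
    # d/dx sum(c_k x^k) = sum(k * c_k x^(k-1)): shift down, scale by the index.
    return [k * c for k, c in enumerate(coeffs)][1:]

def integrate(coeffs):
    # Antiderivative with constant of integration 0: shift up, divide by k+1.
    return [0] + [c / (k + 1) for k, c in enumerate(coeffs)]

p = [1, 0, 3]                 # 1 + 3x^2
print(differentiate(p))       # [0, 6]  i.e. 6x
print(integrate(p))           # [0, 1.0, 0.0, 1.0]  i.e. x + x^3
```

The computation is entirely a manipulation of symbols by formal rules, with no numerical evaluation of the function anywhere.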

In this sense, CASs bear some resemblance to Leibniz’s vision. Mathematica, for example, has all of the special functions of mathematical physics, as well as data from many different sources which can be systematically and uniformly manipulated through the symbolic representation of the Wolfram Language. One could reasonably claim that it incorporates many of the basic desiderata of Leibniz’s universal calculus – it has both the structured data of Leibniz’s hypothetical encyclopedia and symbolic means for manipulating that data.

However, Leibniz’s vision was significantly more ambitious than anything a contemporary CAS can claim to have realized. For instance, while a CAS incorporates mathematical knowledge from different domains, these domains are effectively different modules within the software that can be used in a standalone fashion. Consider, for example, that one can use a CAS to perform calculations relevant to both quantum mechanics and general relativity. The existence of both of these capabilities in a single piece of software says nothing about the long-standing theoretical obstacles to creating a unified theory of quantum gravity. Indeed, as has long been bemoaned in the formal verification and theorem proving communities, CASs are effectively a large number of small pieces of software neatly packaged into a single bundle that the user interacts with in a monolithic way. This fact has consequences for those interested in the robustness of the underlying computations, but in the present context it simply serves to highlight a fundamental problem in Leibniz’s agenda.

So, in effect, one way to describe Leibniz’s universal calculus is as an attempt to create something like a modern computer algebra system, but one extended across all areas of human knowledge. This goal alone would be quite ambitious, but Leibniz also wanted the symbolic representation to have a transparent relationship to the corresponding encyclopedia, as well as the mnemonic capacity to be memorized with ease. To quote Leibniz himself,

My invention contains all the functions of reason: it is a judge for controversies; an interpreter of notions; a scale for weighing probabilities; a compass which guides us through the ocean of experience; an inventory of things; a table of thoughts; a microscope for scrutinizing things close at hand; an innocent magic; a non-chimerical cabala; a writing which everyone can read in his own language; and finally a language which can be learnt in a few weeks, traveling swiftly across the world, carrying the true religion with it, wherever it goes.

It is difficult not to be swept away by the beauty of Leibniz’s imagery. And yet, from our modern vantage point, there is little doubt that this agenda could not possibly have worked.

# Psychological Approaches to Cognition and Rationality in Political Science

The theoretical basis of information processing in politics comes largely from psychologists studying other issues and from fields outside psychology. We assume that the task of translating available information into sets of focused, logically consistent beliefs, judgements and commitments is a subtle process, not at all straightforward or obvious, and furthermore that, although political reasoning may take place largely outside a person’s awareness, political cognition is a very active mental process. Cognitive theories in politics are largely bent on understanding how people selectively attend to, interpret and organise information in ways that allow them to reach coherent understandings. For various reasons, known or unknown to us, such understandings may deviate substantially from the true state of affairs and from whatever mix of information or disinformation is available to be considered.

The two terms ‘belief’ and ‘system’ have been a familiar part of the language of attitude psychology for decades. Let us define ‘belief system’ in a three point structure:

• a set of related beliefs and attitudes
• rules of how these contents of mind are linked to one another
• linkages between the related beliefs and ideologies.

Now, to model a belief system is to attempt to create an abstract representation of how someone’s set of related beliefs is formed, organised, maintained and modified.
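As a hypothetical sketch of the three-point structure above (the belief and ideology names below are invented for illustration, not an established model), one might represent contents, linking rules, and ideology links as simple data:

```python
# Contents of mind: a set of related beliefs/attitudes.
beliefs = {"taxes_too_high", "markets_self_correct"}

# Rules for how contents are linked to one another.
rules = {("taxes_too_high", "supports", "markets_self_correct")}

# Linkages between beliefs and broader ideologies.
ideology_links = {"markets_self_correct": "economic_liberalism"}

def supported(belief):
    # Which other beliefs does this belief support, under the rules?
    return {c for a, rel, c in rules if a == belief and rel == "supports"}

def linked_ideologies(belief_set):
    # Which ideologies are implicated by a set of beliefs?
    return {ideology_links[b] for b in belief_set if b in ideology_links}

print(supported("taxes_too_high"))   # {'markets_self_correct'}
print(linked_ideologies(beliefs))    # {'economic_liberalism'}
```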

Much of modern social psychology is concerned with attribution processes. These refer to the subjective connections people make between cause and effect. Attribution processes, by their nature, involve going beyond the ‘information given’ in the direct observation of events. They are inferential processes that allow us to understand what we take to be the meaningful causes and motivations underlying directly observable behaviour. They are the central elements of the broader constructive processes through which people find meaning in ongoing events. Regardless of how well our attributive reasoning corresponds with objective reality, attribution processes provide us with an enhanced sense of confidence that we understand what is going on around us. Two kinds of attributive processes are heuristics and biases; the former can be considered mental short-cuts by which one is able to circumvent the tediousness of logically exhaustive reasoning, or to fill in lacunae in our knowledge base and reach conclusions that make sense of our existing assumptions.

Biases can be thought of as tendencies to come to some kinds of conclusions more often than others. We often take the short cut of relying on the representativeness of some bit of information while ignoring other factors that should also be taken into account. We have to attach probabilities. Suppose a foreign-service analyst wanted to know whether a move by a foreign government to increase security at its border was part of a larger plan to prepare for a surprise military attack across that border. The cue for the analyst is the border clampdown; one possible meaning is that a military invasion is about to begin. The analyst must decide how likely that is. If the analyst uses the representativeness heuristic, she will decide how typical a border crackdown is as an early sign of a coming invasion. The more typical she feels the border clampdown is as a sign of a coming invasion, the more credibility she will attach to that interpretation of the change. In fact, representativeness – the degree to which some cue resembles or fits as part of the typical form of an interpretation – is an important and legitimate aspect of assessing probabilities. The representativeness heuristic, however, is the tendency to ignore other relevant information and thereby overemphasise the role of representativeness. Representativeness is one of the most prominently and actively investigated cognitive heuristics. Of course, in most real-life settings it cannot be proven that we credit or blame an actor too much for her behaviour or its consequences. However, in carefully designed experiments in which hapless actors obviously have very little control over what happens to them, observers nonetheless hold the actors responsible for their actions and circumstances.
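The base-rate point behind the representativeness heuristic can be made explicit with Bayes’ rule; the numbers below are invented for illustration only. A clampdown may be a highly “typical” sign of invasion, yet the posterior probability of invasion remains small if invasions themselves are rare.

```python
def posterior(p_cue_given_h, p_cue_given_not_h, prior_h):
    # Bayes' rule: P(H | cue) = P(cue | H) P(H) / P(cue)
    num = p_cue_given_h * prior_h
    den = num + p_cue_given_not_h * (1 - prior_h)
    return num / den

p = posterior(
    p_cue_given_h=0.9,        # clampdowns almost always precede invasions
    p_cue_given_not_h=0.1,    # but clampdowns also occur without invasions
    prior_h=0.01,             # and invasions are rare (the base rate)
)
print(round(p, 3))            # 0.083: far below the 0.9 "typicality"
```

Judging only by typicality (0.9) while ignoring the base rate (0.01) is exactly the overemphasis the heuristic describes.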

Now moving on to integrative complexity: it is a combination of two distinct mental activities, differentiation and integration. Differentiation refers to a person’s recognition of multiple issues or facets in thinking about a political problem. Undifferentiated thinking occurs when an individual sees a problem as involving very few distinct issues, or as one in which all the issues lead to nearly the same conclusion. Differentiating one’s understanding of a political situation gives one a better grasp on that situation, but it can cause difficulties too. Different aspects of a political problem may contradict each other or may lead to contradictory actions. Differentiating a problem can also lead a decision maker to the discovery that she does not really have a full grasp of the relevant information, which can be an unpleasant awareness, especially when decisions are to be made immediately.

Integration, on the other hand, refers to the careful consideration of the relationships among parts of the problem. As a political actor formulates opinions and possible choices, integrated thinking allows the person to see how various courses of action may lead to contradictory outcomes, and how goals might be served by actions that violate one’s presuppositions. Integration moves the thinker away from all-or-nothing oversimplification of issues. Thus it improves the chances for political compromise, the heart of successful diplomacy. Furthermore, by opening the decision maker’s eyes to the complex interconnections among political problems, it enables her to anticipate the complicated consequences that may follow from her choices. Obviously, high levels of integration can occur only when an individual or a group has successfully differentiated the various issues involved in a problem. Without the identification of the issues, there is nothing to integrate. However, simple awareness of all the potentially conflicting aspects of a problem does not guarantee that a decision maker will pull these elements together meaningfully. One can recognise any number of ambiguous qualifications, contradictions and non sequiturs, yet ignore most of them in deciding what to believe and what to do. Thus integration requires differentiation, but the converse does not generally hold.

Integrative complexity may affect the careers of political leaders. It may also help shape the outcome of entire political and military conflicts, not just the future careers of leaders. For example, intense diplomatic activity between the US and the USSR averted a potential third world war in 1962, when the US objected to the Soviet missile deployment in Cuba. On the basis of this case, it was hypothesised that in very complex political situations, highly integrated thinking is necessary in order for leaders to discover the availability and superiority of non-military solutions.

Everyone knows that attitudes about a political problem influence our political actions. There are exceptions, but people usually act in ways that further their beliefs and avoid acting in ways that contradict their beliefs. We no longer claim that the causal link from beliefs to behaviour is simple; instead, attention is now directed towards understanding the complex and subtle ways in which beliefs influence decision-making. General beliefs are considered less useful for predicting specific actions such as voting behaviour. Some also maintain that general beliefs are important influences on specific actions, though the influence is not a direct cause-effect link. Instead, general beliefs produce subtle tendencies to favour some interpretations of events over other plausible interpretations, and to favour some general styles of political action over others when choosing a specific political action. In terms of a political actor’s operational code, there are diagnostic propensities, which are tendencies to interpret ambiguous events in some ways rather than others, to search for certain kinds of information rather than others, and to exaggerate or ignore the probable role of chance and uncontrollable events. For example, one national leader may immediately look for the hostile intentions behind any important diplomatic move on the part of a rival nation. Such a person would search for other evidence confirming his or her initial presumption. By contrast, another leader might be aware that the rival nation has severe internal problems, and presume that any important foreign policy initiatives from that nation are attempts to distract its citizens from those problems. Choice propensities, in turn, are tendencies to prefer certain modes of political action to others: they are the expressions in political reasoning of a leader’s general views about how to act effectively in the political arena.

Politics in its very essence is an interpersonal activity. The vast bulk of political planning, commitment and action takes place among groups of people, whether these people come together to pool resources, squabble, or negotiate compromises among their conflicting group interests. What, then, is the psychology of rationality in political groups? Groups do not negate the picture drawn above of the nature of political cognition; they complicate it. Groups themselves do not think: it is still individual people who share or hide their personal beliefs and goals.

What is a camel?
It’s a horse designed by a committee.

This old joke is a cynical comment on the creativity of committees. It is easy to point to mediocre decisions made by groups, but there is a more serious problem than middling decisions. Groups are capable of profoundly bad decisions. Some of the worst decisions in world history were made by groups that would seem to have been ideally assembled for producing rational, creative policies and judgements.

What characteristics make groups particularly susceptible to poor decisions? First and foremost, the group is highly cohesive. Group members know, trust and like each other; they often share common or similar histories; they enjoy being part of the group and value working in it. Second, the group isolates itself from outside influence. A strong sense of identification with the group leads to lost ties with others who might have valuable information to share. Third, the group lacks any systematic way of doing things. Without formal guidelines for procedure, agenda decisions are made casually and are subject to influences that cut short full deliberation. Fourth, the leaders of such groups tend to be directive. Fifth, the group is experiencing stress, with a sense of urgency about some crisis in which acting quickly seems critical. The choice may be among unpleasant alternatives, the available information may be confusing and incomplete, and the group members may be fatigued. Thus solidarity, isolation, sloppy procedures and directive leadership in a stressful situation make some groups vulnerable to groupthink. Two sets of features describe groupthink. The first set contains the working assumptions and styles of interacting that group members carry with them into the work setting. The second set describes the faulty deliberations of the group as it sets about its task. The group members lack adequate contingency plans for a quick response if the preferred course of action does not work as the group hopes and believes it will.

To avoid groupthink, first, the leader of the group should actively encourage dissent; she should make it known that dissenting opinions are valued, not just for variety’s sake but because they may be right. Second, the leader should avoid letting her own initial preferences be known. Third, parallel subgroups can be set up early on to work separately on the same tasks. These subgroups will probably develop different assessments and plans, which can be brought to the whole group for consideration. This neatly disrupts the tendency of groups to focus on just one option for the upcoming decision.

A choice is rational if it follows certain careful procedures that lead to the selection of the course of action offering the greatest expected value, benefit or utility for the chooser. Group members making a rational decision first identify the opportunity and need for a choice. They then identify every conceivable course of action available to them, and determine all possible consequences of each course of action. They evaluate each possible consequence in terms of,

1) its likelihood of occurrence,
2) its value if it does occur.

Now the decision-making group has a problem and a set of possible solutions. This information is distilled into a single choice by working backwards. The probability of each consequence is multiplied by its value; the products over all consequences of a given course of action are then added up. The resulting sums are the expected values of the courses of action. The group then simply selects the option with the largest expected value (or the least negative value if the choice is a no-win situation).
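The working-backwards procedure above is simple arithmetic, and can be sketched in a few lines of Python. The options and their (probability, value) pairs here are hypothetical illustrations, not taken from the text:

```python
# A minimal sketch of the expected-value decision procedure described above.
# The options and their consequences are hypothetical examples.

def expected_value(consequences):
    """Sum of probability * value over all consequences of one course of action."""
    return sum(p * v for p, v in consequences)

# Each course of action maps to a list of (probability, value) pairs,
# one pair per possible consequence.
options = {
    "option_a": [(0.7, 100), (0.3, -50)],   # EV = 70 - 15 = 55
    "option_b": [(0.5, 120), (0.5, -20)],   # EV = 60 - 10 = 50
}

# Working backwards: compute the expected value of each course of action,
# then select the one with the largest expected value.
best = max(options, key=lambda name: expected_value(options[name]))
print(best)  # → option_a
```

Note that with these made-up numbers, option_a wins despite its worse downside, because its probability-weighted sum is higher; the procedure trades off likelihood against value exactly as steps 1) and 2) prescribe.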

There is something called posterior rationality, where the choice process is discovered after the choice is made. The world may be too unpredictable and complicated for most well-intentioned plans to have much chance of success. If so, traditional rationality may be irrelevant as a model for complex organisations. However, goals and intentions can still be inferred in reverse, by reinterpreting earlier choices and redefining one’s original goals.

In conclusion, political actors, groups and institutions such as governments do not simply observe and understand political circumstances in some automatic fashion that accurately captures true political realities. Political realities are for the most part social constructions, and the construing process is built on the philosophy and psychology of human cognition. Political cognition, like any other cognition, is extremely complex. It is easy enough to find examples of poor political judgments; the wonder may be that politics often seems to be rational, given all the challenges and limitations. To the extent that we can find a sense of coherence in politics and government, we should acknowledge the importance of the social construction process in shaping that coherence. Although political activists devote much more time to the political agenda than does the average citizen, they still rely on the same cognitive resources and procedures and hence are subject to the same biases and distortions as any thinking person.