Complete Manifolds’ Pure Logical Necessity as the Totality of Possible Formations. Thought of the Day 124.0


In Logical Investigations, Husserl called his theory of complete manifolds the key to the only possible solution to the question of how, in the realm of numbers, impossible, non-existent, meaningless concepts might be dealt with as real ones. In Ideas, he wrote that his chief purpose in developing his theory of manifolds had been to find a theoretical solution to the problem of imaginary quantities (Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy).

Husserl saw how questions regarding imaginary numbers come up in mathematical contexts in which formalization yields constructions which, arithmetically speaking, are nonsense, but which can nonetheless be used in calculations. When formal reasoning is carried out mechanically as if these symbols had meaning, then, provided the ordinary rules are observed and the results do not contain any imaginary components, these symbols may legitimately be used. And this could be empirically verified (Philosophy of Arithmetic: Psychological and Logical Investigations with Supplementary Texts).

In a letter to Carl Stumpf in the early 1890s, Husserl explained how, in trying to understand how operating with contradictory concepts could lead to correct theorems, he had found that for imaginary numbers (in his broad use of the term, covering numbers like √2 and √-1) it was not a matter of the possibility or impossibility of concepts. Through the calculation itself and its rules, as defined for those fictive numbers, the impossible fell away, and a genuine equation remained. One could calculate again with the same signs, but referring to valid concepts, and the result was again correct. Even if one mistakenly imagined that what was contradictory existed, or held the most absurd theories about the content of the corresponding concepts of number, the calculation remained correct if it followed the rules. He concluded that this must be a result of the signs and their rules (Early Writings in the Philosophy of Logic and Mathematics). The fact that one can generalize, and produce variations of formal arithmetic that lead outside the quantitative domain without essentially altering formal arithmetic’s theoretical nature and calculational methods, brought Husserl to realize that there was more to the mathematical or formal sciences, and to the mathematical method of calculation, than could be captured in purely quantitative analyses.

Understanding the nature of theory forms shows how reference to impossible objects can be justified. According to his theory of manifolds, one could operate freely within a manifold with imaginary concepts and be sure that what one deduced was correct when the axiomatic system completely and unequivocally determined the body of all the configurations possible in a domain by a purely analytical procedure. It was the completeness of the axiomatic system that gave one the right to operate in that free way. A domain was complete when each grammatically constructed proposition exclusively using the language of the domain was determined from the outset to be true or false in virtue of the axioms, i.e., necessarily followed from the axioms or did not. In that case, calculating with expressions without reference could never lead to contradictions. Complete manifolds have the distinctive feature that a finite number of concepts and propositions – to be drawn as occasion requires from the essential nature of the domain under consideration – determines completely and unambiguously on the lines of pure logical necessity the totality of all possible formations in the domain, so that in principle, therefore, nothing further remains open within it.

In such complete manifolds, he stressed, “the concepts true and formal implication of the axioms are equivalent” (Ideas).
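
Read in modern terms, the completeness Husserl appeals to is what would now be called the syntactic (deductive) completeness of the axiom system. A minimal rendering, in present-day notation rather than Husserl’s own, is:

```latex
% Deductive completeness of an axiom system Ax for the language L(D) of a domain D:
% every sentence formulable in the language of the domain is decided by the axioms.
\forall \varphi \in \mathrm{Sent}(L(D)):\quad Ax \vdash \varphi \ \ \text{or}\ \ Ax \vdash \neg\varphi .
% Under this condition ``true in the domain'' and ``formally implied by the axioms''
% coincide in extension, which is the equivalence Husserl asserts for complete manifolds.
```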

Husserl pointed out that there may be two valid discipline forms that stand in relation to one another in such a way that the axiom system of one may be a formal limitation of that of the other. It is then clear that everything deducible in the narrower axiom system is included in what is deducible in the expanded system, he explained. In the arithmetic of cardinal numbers, Husserl explained, there are no negative numbers, for the meaning of the axioms is so restrictive as to make subtracting 4 from 3 nonsense. Fractions are meaningless there. So are irrational numbers, √–1, and so on. Yet in practice, all the calculations of the arithmetic of cardinal numbers can be carried out as if the rules governing the operations are unrestrictedly valid and meaningful. One can disregard the limitations imposed in a narrower domain of deduction and act as if the axiom system were a more extended one. We cannot arbitrarily expand the concept of cardinal number, Husserl reasoned. But we can abandon it and define a new, pure formal concept of positive whole number with the formal system of definitions and operations valid for cardinal numbers. And, as set out in our definition, this formal concept of positive numbers can be expanded by new definitions while remaining free of contradiction. Fractions do not acquire any genuine meaning through our holding onto the concept of cardinal number and assuming that units are divisible, he theorized, but rather through our abandonment of the concept of cardinal number and our reliance on a new concept, that of divisible quantities. That leads to a system that partially coincides with that of cardinal numbers, but part of which is larger, meaning that it includes additional basic elements and axioms. And so in this way, with each new quantity, one also changes arithmetics. The different arithmetics do not have parts in common. They have totally different domains, but an analogous structure. They have forms of operation that are in part alike, but different concepts of operation.
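
A schematic arithmetical illustration of this detour through the “imaginary” (not Husserl’s own example, only a sketch of the structure of the argument):

```latex
% Within cardinal (natural-number) arithmetic the intermediate step is meaningless,
% since 3 - 4 has no cardinal value; within the wider system of signed integers,
% every step is defined:
3 - 4 + 2 \;=\; (-1) + 2 \;=\; 1 .
% The end result, 1, lies back inside the narrower domain and agrees with any
% derivation carried out entirely there. Husserl's claim is that the completeness of
% the governing axiom systems is what guarantees that such detours through undefined
% (``imaginary'') intermediate values can never yield a result about the narrower
% domain that is false in it.
```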

For Husserl, formal constraints banning meaningless expressions, meaningless imaginary concepts, and reference to non-existent and impossible objects restrict us in our theoretical, deductive work; but resorting to the infinity of pure forms and transformations of forms frees us from such conditions and explains why having used imaginaries, i.e., what is meaningless, must lead not to meaningless results but to true ones.

Carnap, c-notions. Thought of the Day 87.0


A central distinction for Carnap is that between definite and indefinite notions. A definite notion is one that is recursive, such as “is a formula” and “is a proof of φ”. An indefinite notion is one that is non-recursive, such as “is an ω-consequence of PA” and “is true in Vω+ω”. This leads to a distinction between (i) the method of derivation (or d-method), which investigates the semi-definite (recursively enumerable) metamathematical notions, such as demonstrable, derivable, refutable, resoluble, and irresoluble, and (ii) the method of consequence (or c-method), which investigates the (typically) non-recursively enumerable metamathematical notions such as consequence, analytic, contradictory, determinate, and synthetic.

A language for Carnap is what we would today call a formal axiom system. The rules of the formal system are definite (recursive) and Carnap is fully aware that a given language cannot include its own c-notions. The logical syntax of a language is what we would today call metatheory. It is here that one formalizes the c-notions for the (object) language. From among the various c-notions Carnap singles out one as central, namely, the notion of (direct) consequence; from this c-notion all of the other c-notions can be defined in routine fashion.

We now turn to Carnap’s account of his fundamental notions, most notably, the analytic/synthetic distinction and the division of primitive terms into ‘logico-mathematical’ and ‘descriptive’. Carnap actually has two approaches. The first approach occurs in his discussion of specific languages – Languages I and II. Here he starts with a division of primitive terms into ‘logico-mathematical’ and ‘descriptive’ and upon this basis defines the c-notions, in particular the notions of being analytic and synthetic. The second approach occurs in the discussion of general syntax. Here Carnap reverses procedure: he starts with a specific c-notion – namely, the notion of direct consequence – and he uses it to define the other c-notions and draw the division of primitive terms into ‘logico-mathematical’ and ‘descriptive’.

In the first approach Carnap introduces two languages – Language I and Language II. The background languages (in the modern sense) of Language I and Language II are quite general – they include expressions that we would call ‘descriptive’. Carnap starts with a demarcation of primitive terms into ‘logico-mathematical’ and ‘descriptive’. The expressions he classifies as ‘logico-mathematical’ are exactly those included in the modern versions of these systems; the remaining expressions are classified as ‘descriptive’. Language I is a version of Primitive Recursive Arithmetic and Language II is a version of finite type theory built over Peano Arithmetic. The d-notions for these languages are the standard proof-theoretic ones.

For Language I Carnap starts with a consequence relation based on two rules – (i) the rule that allows one to infer φ if T ⊢ φ (where T is some fixed Σ⁰₁-complete formal system) and (ii) the ω-rule. It is then easily seen that one has a complete theory for the logico-mathematical fragment, that is, for any logico-mathematical sentence φ, either φ or ¬φ is a consequence of the null set. The other c-notions are then defined in the standard fashion. For example, a sentence is analytic if it is a consequence of the null set; contradictory if its negation is analytic; and so on.
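
Stated schematically (a standard formulation of the rule, not Carnap’s own notation), the ω-rule admits infinitely many premises:

```latex
% The omega-rule: from phi(n) for every numeral n, infer the universal statement.
\frac{\varphi(\bar{0}),\quad \varphi(\bar{1}),\quad \varphi(\bar{2}),\quad \ldots}{\forall x\,\varphi(x)}
% Because the rule has infinitely many premises, the consequence relation it generates
% is not recursively enumerable: it is a c-notion rather than a d-notion, and it is
% what secures the completeness of the logico-mathematical fragment noted above.
```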

For Language II Carnap starts by defining analyticity. His definition is a notational variant of the Tarskian truth definition with one important difference – namely, it involves an asymmetric treatment of the logico-mathematical and descriptive expressions. For the logico-mathematical expressions his definition really just is a notational variant of the Tarskian truth definition. But descriptive expressions must pass a more stringent test to count as analytic – they must be such that if one replaces all descriptive expressions in them by variables of the appropriate type, then the resulting logico-mathematical expression is analytic, that is, true. In other words, to count as analytic a descriptive expression must be a substitution-instance of a general logico-mathematical truth. With this definition in place the other c-notions are defined in the standard fashion.
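
A toy illustration of the substitution test, with a hypothetical descriptive predicate (not an example from Carnap’s text):

```latex
% Let R be a descriptive predicate (say ``is a raven'') and a a descriptive constant.
% Replace every descriptive expression by a variable of the appropriate type and ask
% whether the resulting logico-mathematical sentence is analytic (true):
R(a) \lor \neg R(a) \quad\leadsto\quad F(x) \lor \neg F(x) \qquad \text{(a logical truth, so the original counts as analytic)}
R(a) \quad\leadsto\quad F(x) \qquad \text{(not a logical truth, so the original is synthetic)}
```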

The content of a sentence is defined to be the set of its non-analytic consequences. It then follows immediately from the definitions that logico-mathematical sentences (of both Language I and Language II) are analytic or contradictory and (assuming consistency) that analytic sentences are without content.

In the second approach, for a given language, Carnap starts with an arbitrary notion of direct consequence and from this notion he defines the other c-notions in the standard fashion. More importantly, in addition to defining the other c-notions, Carnap also uses the primitive notion of direct consequence (along with the derived c-notions) to effect the classification of terms into ‘logico-mathematical’ and ‘descriptive’. The guiding idea is that “the formally expressible distinguishing peculiarity of logical symbols and expressions [consists] in the fact that each sentence constructed solely from them is determinate”. Howsoever the guiding idea is implemented, the actual division between “logico-mathematical” and “descriptive” expressions that one obtains as output is sensitive to the scope of the direct consequence relation with which one starts.

With this basic division in place, Carnap can now draw various derivative divisions, most notably, the division between analytic and synthetic statements: Suppose φ is a consequence of Γ. Then φ is said to be an L-consequence of Γ if either (i) φ and the sentences in Γ are logico-mathematical, or (ii) letting φ’ and Γ’ be the result of unpacking all descriptive symbols, then for every result φ” and Γ” of replacing every (primitive) descriptive symbol by an expression of the same genus, maintaining equal expressions for equal symbols, we have that φ” is a consequence of Γ”. Otherwise φ is a P-consequence of Γ. This division of the notion of consequence into L-consequence and P-consequence induces a division of the notion of demonstrable into L-demonstrable and P-demonstrable and the notion of valid into L-valid and P-valid and likewise for all of the other d-notions and c-notions. The terms ‘analytic’, ‘contradictory’, and ‘synthetic’ are used for ‘L-valid’, ‘L-contravalid’, and ‘L-indeterminate’.
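
A toy illustration of the resulting division, with hypothetical sentences not drawn from Carnap’s text (and assuming, for the second case, that the relevant physical law is among the axioms of the system):

```latex
% L-consequence: the consequence survives every uniform replacement of (primitive)
% descriptive symbols by expressions of the same genus.
\{\forall x\,(F(x)\to G(x)),\ F(a)\}\ \Rightarrow\ G(a)
%   Whatever predicates and constant are put for F, G, a, the consequence still holds,
%   so G(a) is an L-consequence of the premises.
%
% P-consequence: the consequence holds only via the physical axioms of the system
% (here, the assumed law ``all metals conduct electricity''), so the replacement test fails:
\{\text{``this rod is metal''}\}\ \Rightarrow\ \text{``this rod conducts electricity''}
%   Substituting other descriptive predicates of the same genus breaks the consequence,
%   so it counts only as a P-consequence.
```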

It follows immediately from the definitions that logico-mathematical sentences are analytic or contradictory and that analytic sentences are without content. The trouble with the first approach is that the definitions of analyticity that Carnap gives for Languages I and II are highly sensitive to the original classification of terms into ‘logico-mathematical’ and ‘descriptive’. And the trouble with the second approach is that the division between ‘logico-mathematical’ and ‘descriptive’ expressions (and hence division between ‘analytic’ and ‘synthetic’ truths) is sensitive to the scope of the direct consequence relation with which one starts. This threatens to undermine Carnap’s thesis that logico-mathematical truths are analytic and hence without content. 

In the first approach, the original division of terms into ‘logico-mathematical’ and ‘descriptive’ is made by stipulation and if one alters this division one thereby alters the derivative division between analytic and synthetic sentences. For example, consider the case of Language II. If one calls only the primitive terms of first-order logic ‘logico-mathematical’ and then extends the language by adding the machinery of arithmetic and set theory, then, upon running the definition of ‘analytic’, one will have the result that true statements of first-order logic are without content while (the distinctive) statements of arithmetic and set theory have content. For another example, if one takes the language of arithmetic, calls the primitive terms ‘logico-mathematical’ and then extends the language by adding the machinery of finite type theory, calling the basic terms ‘descriptive’, then, upon running the definition of ‘analytic’, the result will be that statements of first-order arithmetic are analytic or contradictory while (the distinctive) statements of second- and higher-order arithmetic are synthetic and hence have content. In general, by altering the input, one alters the output, and Carnap adjusts the input to achieve his desired output.

In the second approach, there are no constraints on the scope of the direct consequence relation with which one starts and if one alters it one thereby alters the derivative division between ‘logico-mathematical’ and ‘descriptive’ expressions. Logical symbols and expressions have the feature that sentences composed solely of them are determinate. The trouble is that the resulting division of terms into ‘logico-mathematical’ and ‘descriptive’ will be highly sensitive to the scope of the direct consequence relation with which one starts. For example, let S be first-order PA and for the direct consequence relation take “provable in PA”. Under this assignment Fermat’s Last Theorem will be deemed descriptive, synthetic, and to have non-trivial content. For an example at the other extreme, let S be an extension of PA that contains a physical theory and let the notion of direct consequence be given by a Tarskian truth definition for the language. Since in the metalanguage one can prove that every sentence is true or false, every sentence will be either analytic (and so have null content) or contradictory (and so have total content). To overcome such counter-examples and get the classification that Carnap desires one must ensure that the consequence relation is (i) complete for the sublanguage consisting of expressions that one wants to come out as ‘logico-mathematical’ and (ii) not complete for the sublanguage consisting of expressions that one wants to come out as ‘descriptive’. Once again, by altering the input, one alters the output.
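
The mechanism behind the first example can be made explicit with a standard incompleteness observation, framed in Carnap’s terms (the choice of Con(PA) as witness is ours, not Carnap’s):

```latex
% With direct consequence taken as ``provable in PA'', pick a sentence PA neither proves
% nor refutes; by Goedel's second incompleteness theorem (assuming PA is consistent),
% Con(PA) is such a sentence:
PA \nvdash \mathrm{Con}(PA) \qquad\text{and}\qquad PA \nvdash \neg\mathrm{Con}(PA).
% Con(PA) is built solely from arithmetical vocabulary yet is indeterminate under this
% consequence relation, so by the determinacy criterion the arithmetical primitives fail
% to count as logico-mathematical; sentences such as Fermat's Last Theorem then come out
% descriptive, synthetic, and contentful.
```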

Carnap merely provides us with a flexible piece of technical machinery involving free parameters that can be adjusted to yield a variety of outcomes concerning the classifications of analytic/synthetic, contentful/non-contentful, and logico-mathematical/descriptive. In his own case, he has adjusted the parameters in such a way that the output is a formal articulation of his logicist view of mathematics that the truths of mathematics are analytic and without content. And one can adjust them differently to articulate a number of other views, for example, the view that the truths of first-order logic are without content while the truths of arithmetic and set theory have content. The point, however, is that we have been given no reason for fixing the parameters one way rather than another. The distinctions are thus not principled distinctions. It is trivial to prove that mathematics is trivial if one trivializes the claim.

Carnap is perfectly aware that to define c-notions like analyticity one must ascend to a stronger metalanguage. However, there is a distinction that he appears to overlook, namely, the distinction between (i) having a stronger system S that can define ‘analytic in S’ and (ii) having a stronger system S that can, in addition, evaluate a given statement of the form ‘φ is analytic in S’. It is an elementary fact that two systems S1 and S2 can employ the same definition (from an intensional point of view) of ‘analytic in S’ (using either the definition given for Language I or Language II) but differ on their evaluation of ‘φ is analytic in S’ (that is, differ on the extension of ‘analytic in S’). Thus, to determine whether ‘φ is analytic in S’ holds one needs to access much more than the “syntactic design” of φ – in addition to ascending to an essentially richer metalanguage one must move to a sufficiently strong system to evaluate ‘φ is analytic in S’.

In fact, to answer ‘Is φ analytic in Language I?’ is just to answer the question of φ itself; and, in the more general setting, to answer all questions of the form ‘Is φ analytic in S?’ (for various mathematical φ and S), where ‘analytic’ is defined as Carnap defines it for Language II, is just to answer all questions of mathematics. The same, of course, applies to the c-notion of consequence. So, when in first stating the Principle of Tolerance Carnap tells us that we can choose our system S arbitrarily and that ‘no question of justification arises at all, but only the question of the syntactical consequences to which one or other of the choices leads’, it is the c-notion of consequence that he has in mind, and determining those consequences is no lighter a task than answering the mathematical questions themselves.

Distributed Representation Revisited

Figure-132-The-distributed-representation-of-language-meaning-in-neural-networks

Where the conventional symbolic model demands the construction of an explicit theory to address the issues pertaining to a problem, this mandatory theory construction is bypassed in distributed representational systems, since the latter are characterized by a large number of interactions occurring in a nonlinear fashion. No such attempt at theoretical construction is to be made in distributed representational systems, for fear that high-level abstraction would drain away the richness of interaction that is the hallmark of the model. Distributed representation is likely to encounter onerous issues as the size of the network inflates, but the issue is addressed through what is commonly known as redundancy, whereby information generated by numerous interactions is encoded simultaneously across many units, improving how adequately that information can be presented to the network (a minimal sketch of this redundancy idea follows the passages from Cilliers below). In the words of Paul Cilliers, this is an important point, for,

the network used for the model of a complex system will have to have the same level of complexity as the system itself….However, if the system is truly complex, a network of equal complexity may be the simplest adequate model of such a system, which means that it would be just as difficult to analyze as the system itself.

Following, he also presents a caveat,

This has serious methodological implications for the scientists working with complex systems. A model which reduces the complexity may be easier to implement, and may even provide a number of economical descriptions of the system, but the price paid for this should be considered carefully.
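
A minimal sketch of the redundancy idea referred to above, using an arbitrary toy encoding rather than any particular connectionist architecture (the concept names, vector size, and damage fraction are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "distributed representation": each concept is a random dense pattern over
# many units, so information about the concept is spread redundantly across the
# whole vector rather than held by any single unit.
N_UNITS = 512
concepts = ["tiger", "cat", "carnivore", "stripe"]
codebook = {name: rng.standard_normal(N_UNITS) for name in concepts}

def recover(pattern, codebook):
    """Identify a (possibly damaged) pattern by its nearest stored concept (cosine similarity)."""
    sims = {name: float(np.dot(pattern, code)) /
                  (np.linalg.norm(pattern) * np.linalg.norm(code))
            for name, code in codebook.items()}
    return max(sims, key=sims.get)

# Damage the representation: silence a random 40% of the units.
pattern = codebook["tiger"].copy()
dead = rng.choice(N_UNITS, size=int(0.4 * N_UNITS), replace=False)
pattern[dead] = 0.0

# Still recovered as "tiger" with high probability: no single unit carries the
# concept, so partial loss of units degrades the code gracefully.
print(recover(pattern, codebook))
```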

One of the outstanding qualities of distributed representational systems is their adaptability: the same network can be reused to offer solutions to other problems. What this connotes is that the learning the network has undergone for a problem ‘A’ can be shared for a problem ‘B’, since many of the neurons are bound up with information learned through ‘A’ that remains applicable to ‘B’. In other words, the weights are what do the work of solving or resolving issues, no matter when and for which problem the learning took place. There is a slight hitch here: this capacity to generalize solutions can suffer if the level of abstraction starts to shoot up. This can itself be arrested if the right kind of framework is decided upon at the initial stages, reducing the hitch to an almost negligible factor. The very notion of weights is treated by Sterelny as problematic, and he uses it to attack distributed representation in particular and connectionism as a whole. In an analogically witty paragraph, Sterelny says,

There is no distinction drawable, even in principle, between functional and non-functional connections. A positive linkage between two nodes in a distributed network might mean a constitutive link (e.g. catlike, in a network for tiger); a nomic one (carnivore, in the same network), or a merely associative one (in my case, a particular football team that play in black and orange).

It should be noted that this criticism of weights arises because, for Sterelny, the relationship between distributed representations and the micro-features that compose them is deeply problematic. If such is the criticism, then, no doubt, Sterelny still seems to be ensconced within the conventional semantic/symbolic model. And since all weights can take part in information processing, a sort of democratic liberty is accorded to the weights within a distributed representation, so that any talk of constitutive, nomic, or, for that matter, associative links is mere humbug. Even if one disagrees, on the grounds that a large pattern of weights is not convincing enough as an explanation because such patterns tend to complicate matters, distributed representational systems work consistently enough compared with alternative systems that offer explanations through reasoning, and it would be quite foolhardy to jettison distributed representation by the sheer force of such criticism. If a neural network can be adapted to produce the correct answer for a number of training cases that is large compared with the size of the network, it can be trusted to respond correctly to previously unseen cases, provided they are drawn from the same population using the same distribution as the training cases; this undermines the commonly held idea that explanations are a necessary feature of trustworthy systems (Baum and Haussler).

Another objection that distributed representation faces is that, if representations are distributed, then the possibility of two representations of the same thing differing from one another cannot be ruled out, so that one of them would be the true representation while the other is only an approximation of it.(1) This is a criticism of merit and is attributed to Fodor, in his influential book Psychosemantics.(2) For, if there were only one true representation, Fodor would not shy away from saying that this is the solution that the folk-psychology project believes in. But since connectionism accepts the plausibility of indeterminate representations, the question of flexibility scores well and high over the conventional semantic/symbolic models; and is it not common sense to encounter such flexibility in daily life? The other response to this objection comes from post-structuralist theories (Baudrillard is quite important here; see the first footnote below). The objection that there is a true representation of which the other is merely a copy meets its pharmacy in post-structuralism, where meaning is constituted by synchronic as well as diachronic contextualities, so that concept and context need not be supplied to a distributed representation from outside: they are inherent in the idea of such a representation itself. Sterelny still seems to ride on his obstinacy, and in a vitriolic tone demands to know why a distributed representation should be regarded as a state of the system at all. Moreover, he says,

It is not clear that a distributed representation is a representation for the connectionist system at all…given that the influence of node on node is local, given that there is no processor that looks at groups of nodes as a whole, it seems that seeing a distributed representation in a network is just an outsider’s perspective on the system.

This is moving around in circles, if nothing more. Or maybe he was anticipating what G. F. Marcus would later write, and to some extent echo, in his book The Algebraic Mind. In the words of Marcus,

…I agree with Stemberger(3) that connectionism can make a valuable contribution to cognitive science. The only place we differ is that, first, he thinks that the contribution will be made by providing a way of eliminating symbols, whereas I think that connectionism will make its greatest contribution by accepting the importance of symbols, seeking ways of supplementing symbolic theories and seeking ways of explaining how symbols could be implemented in the brain. Second, Stemberger feels that symbols may play no role in cognition; I think that they do.

Whatever Sterelny claims, after most of the claims and counter-claims have been taken into account, the only conclusion for the time being is that distributed representation has not been undermined, his adamant position notwithstanding.

(1) This notion finds its parallel in Baudrillard’s Simulation. And subsequently, the notion would be invoked in studying the parallel nature. Of special interest is the order of simulacra in the period of post-modernity, where the simulacrum precedes the original, and the distinction between reality and representation vanishes. There is only the simulacrum and the originality becomes a totally meaningless concept.

(2) This book is known for putting folk psychology firmly on the theoretical ground by rejecting any external, holist and existential threat to its position.

(3) Joseph Paul Stemberger is a professor in the Department of Linguistics at The University of British Columbia in Vancouver, British Columbia, Canada, with primary interests in phonology, morphology, and their interactions. His theoretical orientations are towards Optimality Theory, employing his own version of the theory, and towards connectionist models.

 

Representation as a Meaningful Philosophical Quandary


The deliberation on representation indeed becomes a meaningful quandary if most of its shortcomings are to be overcome without actually accepting the way they permeate scientific and philosophical discourse. The problem is more ideological than one could have imagined, since it is only within the space of this quandary that one can hope to overthrow the quandary. Unless the classical theory of representation that guides expert systems is accepted as a given, there is no way to dislodge the relationship of symbols and meanings that builds up such systems, lest we fall prey to the Scylla of a metaphysically strong notion of meaningful representation as natural, or to the Charybdis of an external designer. And even if one somehow escapes these maliciously aporetic entities, representation as a metaphysical monster stands to block our progress. Is it really viable, then, to think of machines that can survive this representational foe, a foe that gets no aid from the clusters of internal mechanisms? The answer is very much in the affirmative, provided the notion of such a non-representational system as continuous and homogeneous is done away with, and in its place are put functional units that are no longer representational ones, deriving their efficiency and legitimacy through autopoiesis.

What is required is to consider the bearing of this representational critique of distributed systems on the objectivity of science, since objectivity as a property of science carries an intrinsic claim of independence from the subject who studies the discipline. Kuhn had philosophical problems with precisely this way of treating science as an objective discipline. For Kuhn, scientists operate under, or within, paradigms, which obligate hierarchical structures. Such hierarchical structures secure the position of scientists to voice their authority on matters of dispute; and when there is a crisis within, or for, the paradigm, scientists do not at first outrightly reject the paradigm, but try their level best at resolving it. In cases where resolution becomes too difficult a task, an outright rejection of the paradigm follows, effecting what is commonly called a paradigm shift. If such is the case, the objective tag for science obviously takes a hit, and Kuhn argues in favour of a shift in the social order that science undergoes, signifying the subjective element. Importantly, these paradigm shifts occur to the benefit of scientific progress and, in almost all cases, occur non-linearly. Such a view no doubt slides Kuhn into a position of relativism, and it has been the main point of attack on shifting paradigms. At the forefront of the attacks has been Michael Polanyi and his supporters, whose work on the epistemology of science had much the same ingredients but was eventually deprived of fame; Kuhn was even charged with plagiarism. The commonality of their arguments can be measured by a shared dissent from objectivity in science. Polanyi thought of it as a false ideal, since for him the epistemological claims that define science are based more on personal judgments, and therefore susceptible to fallibilism. The objective nature of science that obligates scientists to see things as they really are is thus dislodged by the above principle of subjectivity.

But if science were to be seen as objective, then human subjectivity would indeed create a rupture in any purified version of scientific objectivity. The subject, or observer, is implicated in what is termed the “observer effect”: the change that the act of observation makes upon the phenomenon being observed. This effect is all but ubiquitous across the domains of science and technology, ranging from the Heisenbug(1) in computing, through particle physics and thermodynamics, to quantum mechanics. The quantum-mechanical observer effect is quite perplexing, and is bound up with the phenomenon called “superposition”, which signifies existence in all possible states at once. Superposition owes its popular fame to Schrödinger’s cat experiment, which entails a cat that is neither dead nor alive until observed. This has led physicists to take the acts of “observation” and “measurement” into account in order to comprehend the paradox in question, and thereby to resolve it. But there is still a minority of quantum physicists who vouch for the supremacy of the observer, despite the quantum entanglement effects that go some way toward explaining the impact of “observation” and “measurement”.(2) Such a standpoint is indeed reflected in Derrida (9-10) as well, when he says (I quote him in full),

The modern dominance of the principle of reason had to go hand in hand with the interpretation of the essence of beings as objects, an object present as representation (Vorstellung), an object placed and positioned before a subject. This latter, a man who says ‘I’, an ego certain of itself, thus ensures his own technical mastery over the totality of what is. The ‘re-’ of repraesentation also expresses the movement that accounts for – ‘renders reason to’ – a thing whose presence is encountered by rendering it present, by bringing it to the subject of representation, to the knowing self.

If Derridean deconstruction is to work on science and theory, the only way out is to relinquish the boundaries that define or divide the two disciplines. Moreover, if any looseness is encountered in objectivity, the ramifications are felt immediately at the level of scientific activity; even theory does not remain immune to these consequences. Importantly, as scientific objectivity starts to wane, the corresponding philosophical luxury of avoiding the contingent wanes with it. Such a loss of representation, congruent with a certain theory of meaning we live by, has serious ethical-political implications.

(1) Heisenbug is a pun on Heisenberg’s uncertainty principle and is a bug in computing that is characterized by the disappearance of the bug itself when an attempt is made to study it. One common example is a bug that occurs in a program that was compiled with an optimizing compiler, but not in the same program when compiled without optimization (e.g., for generating a debug-mode version). Another example is a bug caused by a race condition. A heisenbug may also appear in a system that does not conform to the command-query separation design guideline, since a routine called more than once could return different values each time, generating hard-to-reproduce bugs in a race condition scenario. One common reason for heisenbug-like behaviour is that executing a program in debug mode often cleans memory before the program starts, and forces variables onto stack locations, instead of keeping them in registers. These differences in execution can alter the effect of bugs involving out-of-bounds member access, incorrect assumptions about the initial contents of memory, or floating-point comparisons (for instance, when a floating-point variable in a 32-bit stack location is compared to one in an 80-bit register). Another reason is that debuggers commonly provide watches or other user interfaces that cause additional code (such as property accessors) to be executed, which can, in turn, change the state of the program. Yet another reason is a fandango on core, the effect of a pointer running out of bounds. In C++, many heisenbugs are caused by uninitialized variables. Another similar pun-intended bug encountered in computing is the Schrödinbug. A schrödinbug is a bug that manifests only after someone reading source code or using the program in an unusual way notices that it never should have worked in the first place, at which point the program promptly stops working for everybody until fixed. The Jargon File adds: “Though… this sounds impossible, it happens; some programs have harbored latent schrödinbugs for years.”
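
As a minimal, hypothetical illustration of the race-condition variety of heisenbug described above (whether, and how often, updates are actually lost depends on the interpreter version and on timing, which is precisely what makes such bugs hard to study):

```python
import sys
import threading

sys.setswitchinterval(1e-6)  # encourage frequent thread switches to expose the race

counter = 0

def worker(iterations):
    global counter
    for _ in range(iterations):
        # Not atomic: read, increment, write. Another thread can interleave between
        # the read and the write, so some increments are silently lost.
        counter += 1
        # "Observing" the value here in order to debug the problem, e.g. by
        # uncommenting the print below, changes the scheduling and can make the
        # lost updates vanish -- the bug retreats when studied:
        # print(counter)

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 800000; the observed value is often smaller and varies from run to run.
print("counter =", counter, "(expected 800000)")
```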

(2) There is a related issue in quantum mechanics relating to whether systems have pre-existing properties – prior to measurement, that is – corresponding to all measurements that could possibly be made on them. The assumption that they do is often referred to as “realism” in the literature, although it has been argued that the word “realism” is here being used in a more restricted sense than philosophical realism. A recent experiment in the realm of quantum physics has been quoted as meaning that we have to “say goodbye” to realism, although the author of the paper states only that “we would [..] have to give up certain intuitive features of realism”. These experiments demonstrate a puzzling relationship between the act of measurement and the system being measured, although it is clear from experiment that an “observer” consisting of a single electron is sufficient – the observer need not be a conscious observer. Also, note that Bell’s Theorem suggests strongly that the idea that the state of a system exists independently of its observer may be false. Note that the special role given to observation (the claim that it affects the system being observed, regardless of the specific method used for observation) is a defining feature of the Copenhagen Interpretation of quantum mechanics. Other interpretations resolve the apparent paradoxes from experimental results in other ways. For instance, the Many-Worlds Interpretation posits the existence of multiple universes in which an observed system displays all possible states to all possible observers. In this model, observation of a system does not change the behavior of the system – it simply answers the question of which universe(s) the observer(s) is (are) located in: in some universes the observer would observe one result from one state of the system, and in others the observer would observe a different result from a different state of the system.