The Second Trichotomy. Thought of the Day 120.0

[Figure 2: Peirce's triple trichotomy]

The second trichotomy (here is the first) is probably the best-known piece of Peirce’s semiotics: it distinguishes three possible relations between the sign and its (dynamical) object. This relation may be motivated by similarity, by actual connection, or by general habit – giving rise to the sign classes icon, index, and symbol, respectively.

According to the second trichotomy, a Sign may be termed an Icon, an Index, or a Symbol.

An Icon is a sign which refers to the Object that it denotes merely by virtue of characters of its own, and which it possesses, just the same, whether any such Object actually exists or not. It is true that unless there really is such an Object, the Icon does not act as a sign; but this has nothing to do with its character as a sign. Anything whatever, be it quality, existent individual, or law, is an Icon of anything, in so far as it is like that thing and used as a sign of it.

An Index is a sign which refers to the Object that it denotes by virtue of being really affected by that Object. It cannot, therefore, be a Qualisign, because qualities are whatever they are independently of anything else. In so far as the Index is affected by the Object, it necessarily has some Quality in common with the Object, and it is in respect to these that it refers to the Object. It does, therefore, involve a sort of Icon, although an Icon of a peculiar kind; and it is not the mere resemblance of its Object, even in these respects which makes it a sign, but it is the actual modification of it by the Object. 

A Symbol is a sign which refers to the Object that it denotes by virtue of a law, usually an association of general ideas, which operates to cause the Symbol to be interpreted as referring to that Object. It is thus itself a general type or law, that is, a Legisign. As such it acts through a Replica. Not only is it general in itself, but the Object to which it refers is of general nature. Now that which is general has its being in the instances it will determine. There must, therefore, be existent instances of what the Symbol denotes, although we must here understand by ‘existent’, existent in the possibly imaginary universe to which the Symbol refers. The Symbol will indirectly, through the association or other law, be affected by those instances; and thus the Symbol will involve a sort of Index, although an Index of a peculiar kind. It will not, however, be by any means true that the slight effect upon the Symbol of those instances accounts for the significant character of the Symbol.

The icon refers to its object solely by means of its own properties. This implies that an icon potentially refers to an indefinite class of objects, namely all those objects which have, in some respect, a relation of similarity to it. In recent semiotics it has often been remarked, notably by Nelson Goodman, that any phenomenon can be said to be like any other phenomenon in some respect, if the criterion of similarity is chosen sufficiently generally, just as the establishment of any convention immediately implies a similarity relation. If Nelson Goodman picks out two otherwise very different objects, then they are immediately similar to the extent that they now have the same relation to Nelson Goodman. Goodman and others have for this reason deemed the similarity relation insignificant – and consequently put the whole burden of semiotics on the shoulders of conventional signs only. But the counterargument against this rejection of the relevance of the icon lies close at hand. Given a tertium comparationis, a measuring stick, it is no longer possible to make anything be like anything else. This lies in Peirce’s observation that ‘It is true that unless there really is such an Object, the Icon does not act as a sign.’ The icon only functions as a sign to the extent that it is, in fact, used to refer to some object – and when it does so, some criterion of similarity, a measuring stick (or, at least, a delimited bundle of possible measuring sticks), is given in and with the comparison. In the quote just given it is of course the immediate object Peirce refers to – there is no claim that such an object as the icon refers to must in fact exist.
Goodman and others are of course right in claiming that, as ‘Anything whatever (…) is an Icon of anything’, the universe is pervaded by a continuum of possible similarity relations back and forth; but as soon as some phenomenon is in fact used as an icon for an object, a specific bundle of similarity relations is picked out: ‘…in so far as it is like that thing.’

Just like the qualisign, the icon is a limit category. ‘A possibility alone is an Icon purely by virtue of its quality; and its object can only be a Firstness.’ (Charles S. Peirce, The Essential Peirce: Selected Philosophical Writings). Strictly speaking, a pure icon may only refer one possible Firstness to another. The pure icon would be an identity relation between possibilities. Consequently, the icon must, as soon as it functions as a sign, be more than iconic. The icon is typically an aspect of a more complicated sign, even if very often a most important aspect, because it provides the predicative aspect of that sign. This Peirce records in his notion of the ‘hypoicon’: ‘But a sign may be iconic, that is, may represent its object mainly by its similarity, no matter what its mode of being. If a substantive is wanted, an iconic representamen may be termed a hypoicon’. Hypoicons are signs which to a large extent make use of iconic means as meaning-givers: images, paintings, photos, diagrams, etc. But the iconic meaning realized in hypoicons has an immensely fundamental role in Peirce’s semiotics. As icons are the only signs that ‘look like’ their objects, they are at the same time the only signs realizing meaning. Thus any higher sign, index and symbol alike, must contain, or, by association or inference, terminate in, an icon. If a symbol cannot give an iconic interpretant as a result, it is empty. In that respect, Peirce’s doctrine parallels Husserl’s, where merely signitive acts require fulfilment by intuitive (‘anschauliche’) acts. This is actually Peirce’s continuation of Kant’s famous claim that intuitions without concepts are blind, while concepts without intuitions are empty.
When Peirce observes that ‘With the exception of knowledge, in the present instant, of the contents of consciousness in that instant (the existence of which knowledge is open to doubt) all our thought and knowledge is by signs’ (Letters to Lady Welby), these signs necessarily involve iconic components. Peirce has often been attacked for his tendency towards a pan-semiotism which lets all mental and physical processes take place via signs – yet in the quote just given he, analogously to Husserl, claims there must be a basic evidence anterior to the sign; just as in Husserl, this evidence before the sign must be based on a ‘metaphysics of presence’ – the ‘present instant’ provides what is not yet mediated by signs. But icons provide the connection of signs, logic and science to this foundation of Peirce’s phenomenology: the icon is the only sign providing evidence (Charles S. Peirce, The New Elements of Mathematics, Vol. 4). The icon is, through its timeless similarity, apt to communicate aspects of an experience ‘in the present instant’. Thus the typical index contains an icon (more or less elaborated, it is true), and any symbol intends an iconic interpretant. Continuity is at stake in relation to the icon to the extent that the icon, while not in itself general, is the bearer of a potential generality. This infinitesimal generality is decisive for the higher sign types’ possibility of giving rise to thought: the symbol thus contains a bundle of general icons defining its meaning. A special icon providing the condition of possibility for general and rigorous thought is, of course, the diagram.

The index connects the sign directly with its object via connection in space and time; as an actual sign connected to its object, the index is turned towards the past: the action which has left the index as a mark must be located in time earlier than the sign, so that the index presupposes, at least, the continuity of time and space, without which an index might occur spontaneously and without any connection to a preceding action. Maybe surprisingly, in the Peircean doctrine the index falls into two subtypes: designators vs. reagents. Reagents are the simpler – here the sign is caused by its object in one way or another. Designators, on the other hand, are more complex: the index finger pointing to an object or the demonstrative pronoun as the subject of a proposition are prototypical examples. Here the index presupposes an intention – the will to point out the object for some receiver. Designators, it must be argued, presuppose reagents: it is only possible to designate an object if you have already been in reagent contact (simulated or not) with it (this forms the rational kernel of causal reference theories of meaning). The closer determination of the object of an index, however, invariably involves selection against the background of continuities.

On the level of the symbol, continuity and generality play a main role – as always when approaching issues defined by Thirdness. The symbol is, in itself, a legisign, that is, a general object which exists only in its actual instantiations. The symbol itself is a real and general recipe for the production of similar instantiations in the future. But apart from thus being a legisign, it is connected to its object thanks to a habit, or regularity. Sometimes this is taken to mean ‘due to a convention’ – in an attempt to distinguish conventional as opposed to motivated sign types. This, however, rests on a misunderstanding of Peirce’s doctrine, in which the trichotomies record aspects of signs, not mutually exclusive, independent classes of signs: symbols and icons do not form opposed, autonomous sign classes; rather, the content of the symbol is constructed from indices and general icons. The habit realized by a symbol connects it, as a legisign, to an object which is also general – an object which, just like the symbol itself, exists in instantiations, be they real or imagined. The symbol is thus a connection between two general objects, each of them actualized through replicas, tokens – a connection between two continua, that is:

Definition 1. Any Blank is a symbol which could not be vaguer than it is (although it may be so connected with a definite symbol as to form with it, a part of another partially definite symbol), yet which has a purpose.

Axiom 1. It is the nature of every symbol to blank in part. […]

Definition 2. Any Sheet would be that element of an entire symbol which is the subject of whatever definiteness it may have, and any such element of an entire symbol would be a Sheet. (‘Sketch of Dichotomic Mathematics’, The New Elements of Mathematics Vol. 4: Mathematical Philosophy)

The symbol’s generality can be described as its always having blanks, indefinite parts of its continuous sheet. Thus the continuity of its blank parts is what grants its generality. The symbol determines its object according to some rule, granting that the object satisfies that rule – but leaving the object indeterminate in all other respects. It is tempting to take the typical symbol to be a word, but it should rather be taken to be the argument – the predicate and the proposition being degenerate versions of arguments with further continuous blanks inserted by erasure, so to speak, forming the third trichotomy of term, proposition, argument.


Triadomania. Thought of the Day 117.0


Peirce’s famous ‘triadomania’ lets most of his decisive distinctions appear in threes, following the tripartition of his list of categories, the famous triad of First, Second, and Third, or Quality, Reaction, Representation, or Possibility, Actuality, Reality.

Firstness is the mode of being of that which is such as it is, positively and without reference to anything else.

Secondness is the mode of being of that which is such as it is, with respect to a second but regardless of any third.

Thirdness is the mode of being of that which is such as it is, in bringing a second and third into relation to each other.

Firstness constitutes the quality of experience: in order for something to appear at all, it must do so due to a certain constellation of qualitative properties. Peirce often uses sensory qualities as examples, but it is important for the understanding of his thought that the examples may refer to phenomena very far from our standard conception of ‘sensory data’, e.g. forms or the ‘feeling’ of a whole melody or of a whole mathematical proof – not to be taken in a subjective sense but as a concept for the continuity of melody or proof as a whole, apart from the analytical steps and sequences into which it may subsequently be subdivided. In short, all sorts of simple and complex Gestalt qualities also qualify as Firstnesses. Firstnesses tend to form continua of possibilities such as the continua of shape, color, tone, etc. These qualities, however, are, taken in themselves, pure possibilities and must necessarily be incarnated in phenomena in order to appear. Secondness is the phenomenological category of ‘incarnation’ which makes this possible: it is the insistency, then, with which the individuated, actualized, existent phenomenon appears. Thus Secondness necessarily forms discontinuous breaks in Firstness, allowing particular qualities to enter into existence. The mind may imagine anything whatever in all sorts of quality combinations, but something appears with an irrefutable insisting power – reacting, actively, yielding resistance. Peirce’s favorite example is the resistance of the closed door – which might be imagined reduced to the mere feeling-quality of resistance and thus degenerate into pure Firstness, whereby his theory would implode into a Hume-like solipsism – but to Peirce this resistance, surprise, event, this thisness, ‘haecceity’ as he calls it with a Scotist term, remains irreducible in the description of the phenomenon (a Kantian idea, at bottom: existence is no predicate).
About Thirdness, Peirce may directly state that continuity represents it perfectly: ‘continuity and generality are two names of the same absence of distinction of individuals’. As against Secondness, Thirdness is general; it mediates between First and Second. The events of Secondness are never completely unique – such an event would be unexperienceable – but relate (3) to other events (2) due to certain features (1) in them; Thirdness is thus what facilitates understanding as well as pragmatic action, due to its continuous generality. With a famous example: if you dream about an apple pie, then the very qualities of that dream (taste, smell, warmth, crustiness, etc.) are pure Firstnesses, while the act of baking is composed of a series of actual Secondnesses. But their coordination is governed by a Thirdness: the recipe. Being general, the recipe can never specify all properties of the individual apple pie; it has a schematic frame-character and subsumes an indefinite series – a whole continuum – of possible apple pies. Thirdness is thus necessarily general and vague. Of course, the recipe may be more or less precise, but no recipe exists which is able to determine each and every property of the cake, including date, hour, place, which tree the apples stem from, etc. – any recipe is necessarily general. In this case the recipe (3) mediates between dream (1) and fulfilment (2) – its generality, symbolicity, relationality and future orientation are all characteristic of Thirdness. An important aspect of Peirce’s realism is that continuous generality may be experienced directly in perceptual judgments: ‘Generality, Thirdness, pours in upon us in our very perceptual judgments’.

All these determinations remain purely phenomenological, even if the later semiotic and metaphysical interpretations clearly shine through. In a more general, non-Peircean terminology, his phenomenology can be seen as the description of minimum aspects inherent in any imaginable possible world – for this reason imaginability is the main argument, and this might suggest that Peirce is open to the critique of subjectivism so often aimed at Husserl’s in some respects analogous project. The concept of consciousness is invoked as the basis of imaginability: phenomenology is the study of invariant properties in any phenomenon appearing for a mind. Peirce’s answer would here be, on the one hand, the research community, which according to him defines reality – an argument which structurally corresponds to Husserl’s reference to intersubjectivity as a necessary ingredient in objectivity (an object is a phenomenon which is intersubjectively accessible). Peirce, however, has a further argument here, namely his consistent refusal to limit his concept of mind exclusively to human subjects (a category the use of which he obviously tries to minimize): mind-like processes may take place in nature without any subject being responsible. Peirce will, for continuity reasons, never accept any hard distinction between subject and object and remains extremely parsimonious in the employment of such terms.

From Peirce’s The New Elements of Mathematics (Vol. 4):

But just as the qualities, which, as they are for themselves, are equally unrelated to one another, each being mere nothing for any other, yet form a continuum in which and because of their situation in which they acquire more or less resemblance and contrast with one another; and then this continuum is amplified in the continuum of possible feelings of quality, so the accidents of reaction, which are waking consciousnesses of pairs of qualities, may be expected to join themselves into a continuum.

Since, then, an accidental reaction is a combination or bringing into special connection of two qualities, and since further it is accidental and antigeneral or discontinuous, such an accidental reaction ought to be regarded as an adventitious singularity of the continuum of possible quality, just as two points of a sheet of paper might come into contact.

But although singularities are discontinuous, they may be continuous to a certain extent. Thus the sheet instead of touching itself in the union of two points may cut itself all along a line. Here there is a continuous line of singularity. In like manner, accidental reactions though they are breaches of generality may come to be generalized to a certain extent.

Secondness is now taken to actualize these quality possibilities, based on the idea that any actual event involves a clash of qualities – in the ensuing argumentation Peirce underlines that the qualities involved in actualization need not be restricted to two but may be many, provided they can be ‘dissolved’ into pairs and hence do not break into the domain of Thirdness. This appearance of actuality thus has the character of singularities spontaneously popping up in the space of possibilities and actualizing pairs of points in it. This transition from First to Second is conceived of along Aristotelian lines: as an actualization of a possibility – and this is expressed in the picture of a discontinuous singularity in the quality continuum. The topological fact that singularities must in general be defined with respect to the neighborhood of the manifold in which they appear now becomes the argument for the claim that Secondness can never be completely discontinuous but still ‘inherits’ a certain small measure of continuity from the continuum of Firstness. Singularities, being discontinuous along certain dimensions, may be continuous in others, which provides the condition of possibility for Thirdness to exist as a tendency for Secondness to conform to a general law or regularity. As is evident, a completely pure Secondness is impossible in this continuous metaphysics – it remains a conceivable but unrealizable limit case, because a completely discontinuous event would amount to nothing. Thirdness already lies as a germ in the non-discontinuous aspects of the singularity. The occurrences of Secondness seem to be infinitesimal, then, rather than completely extensionless points.

Discontinuous Reality. Thought of the Day 61.0


Convention is an invention that plays a distinctive role in Poincaré’s philosophy of science. In terms of how they contribute to the framework of science, conventions are not empirical. They are presupposed in certain empirical tests, so they are (relatively) isolated from doubt. Yet they are not pure stipulations, or analytic, since conventional choices are guided by, and modified in the light of, experience. Finally they have a different character from genuine mathematical intuitions, which provide a fixed, a priori synthetic foundation for mathematics. Conventions are thus distinct from the synthetic a posteriori (empirical), the synthetic a priori and the analytic a priori.

The importance of Poincaré’s invention lies in the recognition of a new category of proposition and its centrality in scientific judgment. This is more important than the special place Poincaré gives Euclidean geometry. Nevertheless, it’s possible to accommodate some of what he says about the priority of Euclidean geometry with the use of non-Euclidean geometry in science, including the inapplicability of any geometry of constant curvature in physical theories of global space. Poincaré’s insistence on Euclidean geometry is based on criteria of simplicity and convenience. But these criteria surely entail that if giving up Euclidean geometry somehow results in an overall gain in simplicity then that would be condoned by conventionalism.

The a priori conditions on geometry – in particular the group concept, and the hypothesis of rigid body motion it encourages – might seem a lingering obstacle to a more flexible attitude towards applied geometry, or an empirical approach to physical space. However, just as the apriority of the intuitive continuum does not restrict physical theories to the continuous, so the apriority of the group concept does not mean that all possible theories of space must allow free mobility. This, too, can be “corrected”, or overruled, by new theories and new data, just as, Poincaré comes to admit, the new quantum theory might overrule our intuitive assumption that nature is continuous. That is, he acknowledges that reality might actually be discontinuous – despite the apriority of the intuitive continuum.

Bernard Cache’s Earth Moves: The Furnishing of Territories (Writing Architecture)


Take the concept of singularity. In mathematics, what is said to be singular is not a given point, but rather a set of points on a given curve. A point is not singular; it becomes singularized on a continuum. And several types of singularity exist, starting with fractures in curves and other bumps in the road. We will discount them at the outset, for singularities that are marked by discontinuity signal events that are exterior to the curvature and are themselves easily identifiable. In the same way, we will eliminate singularities such as backup points [points de rebroussement]. For though they are indeed discontinuous, they refer to a vector that is tangential to the curve and thus trace a symmetrical axis that is constitutive of the backup point. Whether it be a reflection of the tangential plane or a rebound with respect to the orthogonal plane, the backup point is thus not a basic singularity. It is rather the result of an operation effectuated on any part of the curve. Here again, the singular would be the sign of too noisy, too memorable an event, while what we want to do is to deal with what is most smooth: ordinary continua, sleek and polished.

On one hand there are the extrema, the maximum and minimum on a given curve. And on the other there are those singular points that, in relation to the extrema, figure as in-betweens. These are known as points of inflection. They are different from the extrema in that they are defined only in relation to themselves, whereas the definition of the extrema presupposes the prior choice of an axis or an orientation, that is to say of a vector.

Indeed, a maximum or a minimum is a point where the tangent to the curve is directed perpendicularly to the axis of the ordinates [y-axis]. Any new orientation of the coordinate axes repositions the maxima and the minima; they are thus extrinsic singularities. The point of inflection, however, designates a pure event of curvature where the tangent crosses the curve; yet this event does not depend in any way on the orientation of the axes, which is why it can be said that inflection is an intrinsic singularity. On either side of the inflection, we know that there will be a highest point and a lowest point, but we cannot designate them as long as the curve has not been related to the orientation of a vector. Points of inflection are singularities in and of themselves, while they confer an indeterminacy on the rest of the curve. Preceding the vector, inflection makes of each of the points a possible extremum in relation to its inverse: virtual maxima and minima. In this way, inflection represents a totality of possibilities, as well as an openness, a receptiveness, or an anticipation…
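Cache’s distinction between extrinsic and intrinsic singularities can be checked symbolically: rotating the coordinate axes relocates the extrema, while the inflection stays put. A minimal sketch with SymPy – the cubic y = x³ − 3x is an arbitrary illustrative curve, not one of Cache’s examples:

```python
import sympy as sp

x, th = sp.symbols('x theta', real=True)
y = x**3 - 3*x  # an arbitrary curve with two extrema and one inflection

# Rotate the coordinate axes by theta: in the new frame the curve
# becomes the parametrized path (X(x), Y(x)).
X = sp.cos(th)*x - sp.sin(th)*y
Y = sp.sin(th)*x + sp.cos(th)*y

# Extrema: tangent parallel to the new x-axis, i.e. dY/dX = 0.
# The condition still contains theta -- the extrema are extrinsic,
# repositioned by every reorientation of the axes.
extrema_cond = sp.numer(sp.together(sp.diff(Y, x) / sp.diff(X, x)))
assert th in extrema_cond.free_symbols

# Inflection: the curvature numerator X'Y'' - Y'X'' of the rotated curve.
kappa_num = sp.simplify(sp.diff(X, x) * sp.diff(Y, x, 2)
                        - sp.diff(Y, x) * sp.diff(X, x, 2))
# It collapses to y'' for every theta: the inflection at x = 0 is an
# intrinsic singularity, indifferent to the orientation of the axes.
assert sp.simplify(kappa_num - sp.diff(y, x, 2)) == 0
```

Under rotation the tangency condition dY/dX = 0 becomes y′ = −tan θ, visibly axis-dependent, whereas the curvature numerator reduces to y′′ = 6x regardless of θ, vanishing and changing sign only at x = 0.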

Bernard Cache, Earth Moves: The Furnishing of Territories

Cartographies of Disjunction’s Relational Dust. Thought of the Day 27.0


The biogrammatic interface generates a political aesthetic in which action is felt through the affective modulations and/or tonalities it incites. A doubling occurs in the moving towards realization, the rearticulation, of the becoming-thing/gesture. This doubling divides into a central differentiation, referencing a voluminous vocabulary of the interstitial – fissure, gap, disjunction, in-between, crack, interface, fold, non-place – descriptors of a bifurcating rift between content and expression, necessary for realization. Deleuze sums up the crux of this Foucauldian argument:

‘Things can be realized only through doubling or dissociation, creating divergent forms among which they can be distributed. It is here that we see the great dualities: between different classes, or the governing and the governed, or the public and the private. But more than this, it is here that the two forms of realization diverge or become differentiated: a form of expression and a form of content, a discursive and a non-discursive form, the form of the visible and the form of the articulable. It is precisely because the immanent cause, in both its matter and its functions, disregards form, that it is realized on the basis of a central differentiation which, on the one hand will form visible matter, and on the other will formalize articulable functions.’ (Gilles Deleuze, Foucault, trans. Sean Hand)

It can be argued that this central differentiation or interface distinguishes between the movements of two diagrammatic registers: outside from inside and the forms of realization. Transductive processes between these registers mark portals of entry through which all points of the diagram are in superposition, in passage as intensities of non-localizable relations from one point to another. The diagram distributes affective intensities within the context it maps.

Deleuze elasticizes Foucault’s reach by translating his oeuvre into the folding/unfolding of a knowledge–power–subjectivity continuum, mapping Foucault’s relays between the bifurcating polarities of content/expression and visibilities/statements as they differentiate and integrate through the folding ‘zone of subjectification’: the biogrammatic interface. The ‘event’ of rearticulation, of knowledge-capture and distribution, takes place through the perceptual filter of differential relations becoming-actual as a perception or thought. This is a topological dynamic mapped by the diagram, effected through the central differentiation (biogram) or the ‘non-place’, as Foucault puts it, ‘where the informal diagram is swallowed up and becomes embodied instead in two different directions that are necessarily divergent and irreducible. The concrete assemblages are therefore opened up by a crack that determines how the abstract machine performs’. It is the process of swallowing up the relational intensities of a milieu and spitting back out certain selected somethings, to be swallowed again, that is of particular interest to a political aesthetics of the performative event. Foucault imagined a cartographic container of forces, affects, attractions and repulsions that modulate the diagram and excite the disjunction that separates forms of realization. The abstract machine begins to actualize its virtual potential as it distributes its relational dust.

…….

Kenneth Knoespel notes that diagramma in the original Greek does ‘not simply mean something that is marked out by lines, a figure, a form or a plan, but also carries a second connotation of marking or crossing out’, suggesting not only ephemerality but also an incompleteness that carries an expectation of potential. ‘What is interesting is that the diagram participates in a genealogy of figures that moves from the wax tablet to the computer screen […] the Greek setting of diagram suggests that any figure that is drawn is accompanied by an expectancy that it will be redrawn […] Here a diagram may be thought of as a relay. While a diagram may have been used visually to reinforce an idea one moment, the next it may provide a means of seeing something never seen before’ (Kenneth Knoespel, ‘Diagrams As Piloting Devices…’).

 

Quantum Music

Human neurophysiology suggests that artistic beauty cannot easily be disentangled from sexual attraction. It is, for instance, very difficult to appreciate Sandro Botticelli’s Primavera, the arguably “most beautiful painting ever painted,” when a beautiful woman or man is standing in front of that picture. Indeed so strong may be the distraction, and so deep the emotional impact, that it might not be unreasonable to speculate whether aesthetics, in particular beauty and harmony in art, could be best understood in terms of surrogates for natural beauty. This might be achieved through the process of artistic creation, idealization and “condensation.”

[Figure: Sandro Botticelli, Primavera]

In this line of thought, in Hegelian terms, artistic beauty is the sublimation, idealization, completion, condensation and augmentation of natural beauty. Very different from Hegel – who asserts that artistic beauty is “born of the spirit and born again, and the higher the spirit and its productions are above nature and its phenomena, the higher, too, is artistic beauty above the beauty of nature” – the belief here is that human neurophysiology can hardly be disregarded in the human creation and perception of art, and in particular of beauty in art. Stated differently, we are inclined to believe that humans are so invariably determined by (or at least intertwined with) their natural basis that any neglect of it results in a humbling experience of irritation or even outright ugliness, no matter what social pressure groups or secret services may want to promote.

Thus, when it comes to the intensity of the experience, the human perception of artistic beauty, as sublime and refined as it may be, can hardly transcend natural beauty in its full exposure. In that way, art represents both the capacity as well as the humbling ineptitude of its creators and audiences.

Leaving these idealistic realms, let us come back to the quantization of musical systems. The universe of music consists of an infinity – indeed a continuum – of tones and of ways to compose, correlate and arrange them. It is not evident how to quantize sounds, and in particular music, in general. One way to proceed would be microphysical: start with the frequencies of sound waves in air and quantize the spectral modes of these (longitudinal) vibrations, much as is done for phonons in solid state physics.

For the sake of relating to music, however, a different approach is taken here, one not dissimilar to the Deutsch-Turing approach to universal (quantum) computability, or to Moore’s automata analogues of complementarity: a musical instrument is quantized directly. We restrict attention to a single octave, realized by the eight white keyboard keys typically written c, d, e, f, g, a, b, c′ (in the C major scale).

In analogy to quantum information, a quantization of tones is considered, with a nomenclature that parallels classical musical notation. This is then followed up by introducing typical quantum mechanical features such as the coherent superposition of classically distinct tones, as well as entanglement and complementarity in music: quantum music.
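As a minimal illustrative sketch, not taken from the source, of what a coherent superposition of classically distinct tones could look like: the eight white keys may be encoded as orthonormal basis states of an eight-dimensional state space, and two of them superposed qubit-style. The encoding and the chosen tones are assumptions made purely for illustration.

```python
import numpy as np

# Eight white keys of the C major octave as orthonormal basis states
# (an illustrative encoding; the text only fixes the nomenclature c..c').
TONES = ["c", "d", "e", "f", "g", "a", "b", "c'"]

def tone(name):
    """Return the basis vector |name> in the 8-dimensional tone space."""
    v = np.zeros(len(TONES), dtype=complex)
    v[TONES.index(name)] = 1.0
    return v

# A coherent superposition of two classically distinct tones,
# analogous to a qubit state (|c> + |g>)/sqrt(2).
psi = (tone("c") + tone("g")) / np.sqrt(2)

# Born-rule probabilities of hearing each classical tone on "measurement".
probs = np.abs(psi) ** 2
print({t: round(float(p), 3) for t, p in zip(TONES, probs) if p > 0})
# → {'c': 0.5, 'g': 0.5}
```

Entanglement would then enter by taking tensor products of such tone spaces for several instruments, exactly as for multi-qubit registers.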

Physical Congruences of Nāgārjuna’s Mūlamadhyamakakārikā, Yukti-sastikâ, śūnyatā and Pratītyasamutpāda. Note Quote


The Middle Way of Mādhyamaka refers to the teachings of Nāgārjuna, and the implications running between quantum physics and Mādhyamaka are very interesting. The basic concept of reality in the philosophy of Nāgārjuna is that the fundamental reality has no firm core but consists of systems of interacting objects. According to the middle way perspective, based on the notion of emptiness, phenomena exist in a relative way; that is, they are empty of any kind of inherent and independent existence. Phenomena are regarded as dependent events existing relationally rather than as permanent things which have their own entity. Nāgārjuna’s middle way perspective emerges as a relational approach, based on the insight of emptiness. Śūnyatā (emptiness) is the foundation of all things, and it is the basic principle of all phenomena. Emptiness implies the negation of unchanged, fixed substance and thereby the possibility of relational existence and change. This suggests that both the ontological constitution of things and our epistemological schemes are just as relational as everything else. We are fundamentally relational, internally and externally. In other words, Nāgārjuna does not fix any ontological nature of the things:

  1. they do not arise
  2. they do not exist
  3. they are not to be found
  4. they are not
  5. and they are unreal

In short, this is an invitation not to decide on either existence or non-existence (nondualism). According to the theory of śūnyatā, phenomena exist in a relative state only, a kind of ‘ontological relativity’. Phenomena are regarded as dependent events (existing only in relation to something else) rather than as things which have their own inherent nature; thus the extreme of permanence is avoided.

In the Mūlamadhyamakakārikā, a tetralemma is pointed out: “Neither from itself nor from another, nor from both, nor without a cause, does anything whatever anywhere arise”. In the Yukti-sastikâ, Nāgārjuna says, “That which has arisen dependently on this and that that has not arisen substantially (svabhavatah, स्वभावतः). What has not arisen substantially, how can it literally (nama) be called ‘arisen’? […] That which originates due to a cause and does not abide without (certain) conditions but disappears when the conditions are absent, how can it be understood as ‘to exist’?”

By the notions of ‘to arise’ and ‘to exist’, Nāgārjuna does not mean the empirical existence but the substantial existence. When in many passages of Mūlamadhyamakakārikā Nāgārjuna states that things do not arise (7.29), that they do not exist (3.7, 5.8, 14.6), that they are not to be found (2.25, 9.11), that they are not (15.10), that they are unreal (13.1), then clearly this has the meaning: things do not arise substantially. They do not exist out of themselves; their independence cannot be found. They are dependent and in this sense they are substantially unreal. Nāgārjuna only rejects the idea of a substantial arising of things which bear an absolute and independent existence. He does not refute the empirical existence of things as explained in the following: “It exists implies grasping after eternity. It does not exist implies the philosophy of annihilation. Therefore, a discerning person should not decide on either existence or non-existence”. (15.10)

For Nāgārjuna, the expression ‘to exist’ has the meaning of ‘to exist substantially’. His issue is not the empirical existence of things but the conception of a permanent thing i.e. the idea of an own being, without dependence on something else. Nāgārjuna refutes the concept of independent existence which is unchangeable, eternal and existing by itself. Things do not arise out of themselves, they do not exist absolutely and are dependent. Their permanent being or existence cannot be found. The many interpretations of Nāgārjuna which claim that he is also refuting the empirical existence of objects, are making an inadmissible generalization which moves Nāgārjuna near to subjectivism, nihilism and instrumentalism. Such interpretations originate in metaphysical approaches which themselves have a difficulty in recognizing the empirical existence of the data presented. This is not at all the case with Nāgārjuna. Nāgārjuna presents the dependence of phenomena mainly in images.

Pratītyasamutpāda (Sanskrit: प्रतीत्यसमुत्पाद; Pali: पटिच्चसमुप्पाद paṭiccasamuppāda) is an indication of dependence. Dependent bodies are in an intermediate state, they are not properly separated and they are not one entity. Secondly, they rely on each other and are influenced or determined by something else. Thirdly, their behaviour is influenced by something in-between, for example a mover is attracted by gravitational force, a viewer is dependent on rays of light between his eyes and the object, a piano player’s action is determined by the fine motor skills of his fingers, an agent is dependent on his act. Pratītyasamutpāda is an indication of dependence and of something that happens between the objects. One object is bound to the other without being identical to it. The implicit interpretations of Pratītyasamutpāda, are in terms of time, structure and space.

The following citations and references illustrate the term Pratītyasamutpāda. Pratītyasamutpāda is used:

1. as Dependence in Nāgārjuna’s Hymn to the Buddha: “Dialecticians maintain that suffering is created by itself, created by (someone) else, created by both (or) without a cause, but You have stated that it is dependently born”.

2. as an intermediate state by Nāgārjuna: Objects are neither together nor separated

3. as bondage in the Hevajra Tantra: “Men are bound by the bondage of existence and are liberated by understanding the nature of existence”.

4. as an intermediate state by Roger Penrose: “Quantum entanglement is a very strange type of thing. It is somewhere between objects being separate and being in communication with each other”.

5. as something between bodies by Albert Einstein: “A courageous scientific imagination was needed to realize fully that not the behaviour of bodies, but the behaviour of something between them, that is, the field, may be essential for ordering and understanding events”.

6. as the mean between things in modern mathematics: to quote Gioberti: “The mean between two or more things, their juncture, union, transit, passage, crossing, interval, distance, bond and contact – all these are mysterious, for they are rooted in the continuum, in the infinite. The interval that runs between one idea and another, one thing and another, is infinite, and can only be surpassed by the creative act. This is why the dynamic moment and dialectic concept of the mean are no less mysterious than those of the beginning and the end. The mean is a union of two diverse and opposite things in a unity. It is an essentially dialectic concept, and involves an apparent contradiction, namely, the identity of the one and the many, of the same and the diverse. This unity is simple and composite; it is unity and synthesis and harmony. It shares in two extremes without being one or the other. It is the continuum, and therefore the infinite. Now, the infinite identically uniting contraries, clarifies the nature of the interval. In motion, in time, in space, in concepts, the discrete is easy to grasp, because it is finite. The continuum and the interval are mysterious, because they are infinite.”

Catastrophe

Since natural phenomena are continuously battered by perturbations, the classification of critical points of smooth functions becomes the defining concern of catastrophe theory. If a natural system is defined by a function of state variables, then the perturbations are represented by control parameters on which the function depends. An unfolding of a function is such a parametrized family: a smooth function of the state variables and the parameters which reduces to the original function at a distinguished parameter value. Catastrophe theory’s aim is then to detect properties of a function by studying its unfoldings.


Thom studied the continuous crossing from one variety (space) to another, the connections through common boundaries and points between spaces even endowed with different dimensions (research on so-called “cobordism” (1), which earned him the Fields Medal in 1958), until he singled out a few universal forms, that is, mathematical objects representing catastrophes: abrupt, although continuous, transitions of forms. These are specific singularities appearing when an object is submitted to bonds, such as restrictions with regard to its ordinary dimensions, which it accepts except at particular points where it offers resistance by concentrating, so to say, its structure there. The theory is used to classify how stable equilibria change when parameters are varied, the points in parameter space at which qualitative changes of behavior occur being termed catastrophe points. Catastrophe theory applies to any gradient system, where the force can be written as the negative gradient of a potential; the points where that gradient vanishes, and in particular the degenerate ones among them, are the theory’s objects of study. There are seven elementary types of catastrophes, or generic singularities of a mapping, and Thom decided to study their applications in caustics: surfaces lit according to different angles, reflections and refractions. Initially catastrophe theory was used just to explain caustic formation, and only afterwards many other phenomena; it did not yield quantitative solutions and exact predictions, but rather qualitatively framed situations that were uncontrollable by purely reductionistic quantitative methods summing up elementary units. The study of forms in irregular, accidental and even chaotic situations had indeed already led scientists like Poincaré and Hadamard to single out structurally invariable catastrophic evolutions in the most disparate phenomena, in terms of divergences due to sensitive dependence on small variations of the initial conditions.
In such cases there were no exact laws, rather evolutionary asymptotic tendencies, which allowed no exact predictions, at best only statistical ones. When exact predictions are possible, in terms of strict laws and explicit equations, the catastrophe ceases.
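The gradient-system picture can be made concrete with the simplest nontrivial example, the cusp, one of the seven elementary catastrophes. This is a sketch, not from the source, assuming the standard cusp potential V(x) = x⁴/4 + ax²/2 + bx, whose equilibria are the roots of V′(x) = x³ + ax + b; the number of equilibria jumps as the control parameters (a, b) cross the catastrophe set 4a³ + 27b² = 0.

```python
import numpy as np

def equilibria(a, b):
    """Real critical points of the cusp potential V(x) = x**4/4 + a*x**2/2 + b*x,
    i.e. real roots of the gradient condition V'(x) = x**3 + a*x + b = 0."""
    roots = np.roots([1.0, 0.0, a, b])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-8)

# Crossing the catastrophe set 4*a**3 + 27*b**2 = 0 changes the number of
# equilibria: a qualitative, discontinuous change on a continuous substrate.
print(len(equilibria(1.0, 0.0)))   # outside the cusp region: 1 equilibrium
print(len(equilibria(-3.0, 0.0)))  # inside the cusp region: 3 equilibria
```

Varying (a, b) smoothly across that set makes a stable state appear or vanish abruptly, which is exactly the kind of catastrophe point the text describes.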

For Thom, catastrophe was a methodology. He says,

Mathematicians should see catastrophe theory as just a part of the theory of local singularities of smooth morphisms, or, if they are interested in the wider ambitions of this theory, as a dubious methodology concerning the stability or instability of natural systems….the whole of qualitative dynamics, all the ‘chaos’ theories talked about so much today, depend more or less on it.

Thom gets more philosophical when it comes to the question of morphogenesis. Stability for Thom is a natural condition to place upon mathematical models for processes in nature, because the conditions under which such processes take place can never be duplicated; the models must therefore be invariant under small perturbations and hence stable. What makes morphogenesis interesting for Thom is the fact that locally, as the transition proceeds, the parameter varies from a stable state of a vector field to an unstable state and back to a stable state, by means of a process which locally models the system’s morphogenesis. Furthermore, what is observed in a process undergoing morphogenesis is precisely the shock wave and the resulting configuration of chreods (2) separated by strata of the shock wave, at each instant of time and over intervals of observation time. It then follows that to classify an observed phenomenon, or to support a hypothesis about the local underlying dynamic, we need in principle only observe the process, study the observed catastrophe or discontinuity set, and try to relate it to one of the finitely many universal catastrophe sets, which would then become our main object of interest. Even if a process depends on a large number of physical parameters, as long as it is described by the gradient model its description would involve one of the seven elementary catastrophes; in particular, one can give a relatively simple mathematical description of such apparently complicated processes even if one does not know what the relevant physical parameters are or what the physical mechanism of the process is. According to Thom, if we consider an unfolding, we can obtain a qualitative intelligence about the behavior of a system in the neighborhood of an unstable equilibrium point. This idea was not widely accepted and was criticized by applied mathematicians, for whom only numerical exactness allowed prediction and therefore efficient action.
After the work of Grothendieck, it is known that the theory of singularity unfolding is a particular case of a general category, the theory of flat deformations of an analytic set; and among flat local deformations of an analytic set, only the hypersurface case has a smooth unfolding of finite dimension. For Thom, this meant that if we wanted to continue the scientific domain of calculable exact laws, we would be justified in considering the instance where an analytic process leads to a singularity of codimension one in internal variables. Might we then not expect that the process be diffused and subsequently propagated in the unfolding according to a mode that is to be defined? Such an argument allows one to think that the Wignerian domain of exact laws can be extended into a region where physical processes are no longer calculable but where analytic continuation remains qualitatively valid.


Anyway, catastrophe theory studies forms as qualitative discontinuities on a continuous substrate. In any case, forms as mental facts are immersed in a matter which is itself still a thought object. The more you try to analyze it, the more it appears as a fog, revealing an ever more complex and inexhaustible weaving the more it refines itself through the forms it assumes. In fact, complexity is ascertained more and more, until a true enigma is reached once one wants, once and for all, to define reality as a universe endowed with a high number of dimensions and thus an object of mental experiences to which even objective phenomena are in the end concretely reduced. Concrete reality is yet more evident than any scientific explanation, and naïve ontology appears more concrete than the scientific one: it is steady and universal, while the latter is always problematic and revisable. Besides, according to Bachelard, while naïve explanation is immediately reflected in ordinary language, which is accessible to everybody, the claimed scientific explanation goes with its jargon beyond immediate experience, away from the life-world which only we can know immediately.

For example, the continuous character of reality, which Thom entrusts to a world intuition as a frame for the phenomenological discontinuities themselves, is instead contradicted by the present tendency of modern computing to reduce everything to discrete units of information (bits). Of course this has a practical value: an animal individuating a prey perceives it as an entity absolutely distinct from its environment, just as we discretize linguistic phonemes to learn to speak without confounding them. Yet a continuous background remains, notwithstanding the tendency of our brains to discretize. Such a background is, for example, constituted by space and time. The continuum is said to be an illusion, as exemplified by a film which appears continuous to us while it is made of discrete frames. It really is an illusion, but one with a true mental basis, otherwise it would not arise at all; and that basis is just the existence of the continuum. We really perceive the continuum, but we need discreteness and finiteness in order to keep things under control. Quantum mechanics, admittedly, seems to introduce discreteness in absolute terms, something we do not understand but which is operatively valid, as is shown by the possibility of localizing or delocalizing a wave packet simply by varying the value distributions of complementary variables such as position and momentum or time and energy, according to Heisenberg’s indeterminacy principle. Yet even the apparent quantum discontinuity hides a continuity which, again according to Heisenberg’s principle, may only be obscured, not cancelled, in several phenomena. It is difficult to conceive, but not monstrous. The hypothesis according to which we are finite and discrete in our internal structure is therefore false, for we are more than that.
We have hundreds of billions of neurons in continuous movement, constituted by molecules continuously vibrating in space and thus giving rise to infinitely many possible variations in a considerable number of dimensions, even though we reduce ourselves to the smallest possible number of states and dimensions in order to deal with the system under study, according to a technical and algorithmic mode of thought which is operatively effective and certainly practically motivated, but not identifiable with reality.

(1) Two manifolds M and N are said to be cobordant if their disjoint union is the boundary of some other manifold. Given the extreme difficulty of the classification of manifolds, it would seem very unlikely that much progress could be made in classifying manifolds up to cobordism. However, René Thom, in his remarkable, if unreadable, 1954 paper (in French), gave the full solution to this problem for unoriented manifolds, as well as many powerful insights into the methods for solving it in the cases of manifolds with additional structure. The key step was the reduction of the cobordism problem to a homotopy problem, although the homotopy problem is still far from trivial. This generalized earlier work of Lev Pontrjagin on framed cobordism, and the construction is now known as the Pontrjagin-Thom theorem.

(2) Every natural process decomposes into structurally stable islands, the chreods. The set of chreods and the multidimensional syntax controlling their positions constitute the semantic model. When the chreod is considered as a word of this multidimensional language, the meaning (signification) of this word is precisely that of the global topology of the associated attractor (or attractors) and of the catastrophes that it (or they) undergo. In particular, the signification of a given attractor is defined by the geometry of its domain of existence on the space of external variables and the topology of the regulation catastrophes bounding that domain. One result of this is that the signification of a form (chreod) manifests itself only by the catastrophes that create or destroy it. This gives the axiom dear to the formal linguists: that the meaning of a word is nothing more than the use of the word; this is also the axiom of the “bootstrap” physicists, according to whom a particle is completely defined by the set of interactions in which it participates.

Gauge Geometry and Philosophical Dynamism


Weyl was dissatisfied with his own theory of the predicative construction of the arithmetically definable subset of the classical real continuum by the time he had finished his Das Kontinuum, when he compared it with Husserl’s continuum of time, which possessed a “non-atomistic” character in contradistinction with his own theory. No determined point of time could be exhibited, only approximate fixing is possible, just as in the case of “continuum of spatial intuition”. He himself accepted the necessity that the mathematical concept of continuum, the continuous manifold, should not be characterized in terms of modern set theory enriched by topological axioms, because this would contradict the essence of continuum. Weyl says,

It seems that set theory violates against the essence of continuum, which, by its very nature, cannot at all be battered into a single set of elements. Not the relationship of an element to a set, but that of a part to the whole ought to be taken as a basis for the analysis of the continuum.

For Weyl, single points of continuum were empty abstractions, and made him enter a difficult terrain, as no mathematical conceptual frame was in sight, which could satisfy his methodological postulate in a sufficiently elaborative manner. For some years, he sympathized with Brouwer’s idea to characterize points in the intuitionistic one-dimensional continuum by “free choice sequences” of nested intervals, and even tried to extend the idea to higher dimensions and explored the possibility of a purely combinatorial approach to the concept of manifold, in which point-like localizations were given only by infinite sequences of nested star neighborhoods in barycentric subdivisions of a combinatorially defined “manifold”. There arose, however, the problem of how to characterize the local manifold property in purely combinatorial terms.

Weyl was much more successful on another level to rebuild differential geometry in manifolds from a “purely infinitesimal” point of view. He generalized Riemann’s proposal for a differential geometric metric

ds²(x) = ∑_{i,j=1}^{n} g_{ij}(x) dx^i dx^j

From his purely infinitesimal point of view, it seemed a strange effect that the length of two vectors ξ(x) and η(x’) given at different points x and x’ can be immediately and objectively compared in this framework after the calculation of

|ξ(x)|² = ∑_{i,j=1}^{n} g_{ij}(x) ξ^i ξ^j,

|η(x′)|² = ∑_{i,j=1}^{n} g_{ij}(x′) η^i η^j

In this context it was comparatively easy for Weyl to give a perfectly infinitesimal characterization of metrical concepts. He started from the well-known structure of a conformal metric, i.e. an equivalence class [g] of semi-Riemannian metrics g = g_{ij}(x) and g′ = g′_{ij}(x) which are equal up to a point-dependent positive factor λ(x) > 0, g′ = λg. Comparison of length then made immediate sense only for vectors attached to the same point x, independently of the gauge of the metric, i.e. the choice of the representative in the conformal class. To achieve comparability of lengths of vectors inside each infinitesimal neighborhood, Weyl introduced the conception of a length connection, formed in analogy to the affine connection Γ just distilled by Levi-Civita from the classical Christoffel symbols Γ^k_{ij} of Riemannian geometry. The localization inside such an infinitesimal neighborhood was given, as the mathematicians of the past would already have done, by coordinate parameters x and x′ = x + dx for some infinitesimal displacement dx. Weyl’s length connection then consisted in an equivalence class of differential 1-forms [Ψ], Ψ ≡ ∑_{i=1}^{n} Ψ_i dx^i, where an equivalent representative of the form is given by Ψ′ ≡ Ψ − d log λ, corresponding to a change of gauge of the conformal metric by the factor λ. Weyl called this transformation, which he recognized as necessary for the consistency of his extended symbol system, the gauge transformation of the length connection.

Weyl established a purely infinitesimal gauge geometry, where lengths of vectors (or derived metrical concepts in tensor fields) were immediately comparable only in the infinitesimal neighborhood of one point, and for points of finite distance only after an integration procedure. This integration turned out to be, in general, path dependent. Independence of the choice of path between two points x and x′ holds if and only if the length curvature vanishes. The concept of curvature was built in direct analogy to the curvature of the affine connection and turned out to be, in this case, just the exterior derivative of the length connection, f ≡ dΨ. This led Weyl to a coherent and conceptually pleasing realization of a metrical differential geometry built upon purely infinitesimal principles. Moreover, Weyl was convinced of important consequences of his new gauge geometry for physics. The infinitesimal neighborhoods, understood as spheres of activity as Fichte might have said, suggested looking for interpretations of the length connection as a field representing physically active quantities. In fact, building on the mathematically obvious observation df ≡ 0, which was formally identical with the second system of the generally covariant Maxwell equations, Weyl immediately drew the conclusion that the length curvature f ought to be identified with the electromagnetic field.
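The gauge invariance of the length curvature f ≡ dΨ under the transformation Ψ′ ≡ Ψ − d log λ can be checked symbolically, since d(d log λ) = 0. The following sketch does this in two dimensions; the particular connection components and gauge factor are arbitrary illustrative choices, not Weyl’s.

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")

# An arbitrary length connection Ψ = Ψ1 dx1 + Ψ2 dx2 (illustrative choice).
P1, P2 = x1 * x2, sp.sin(x1)

# An arbitrary positive gauge factor λ(x) > 0 (illustrative choice).
lam = sp.exp(x1 + x2**2)

def curvature(Q1, Q2):
    """The single component f12 = ∂1 Q2 − ∂2 Q1 of f = dΨ in two dimensions."""
    return sp.simplify(sp.diff(Q2, x1) - sp.diff(Q1, x2))

# Gauge transformation Ψ' = Ψ − d log λ, component by component.
Q1 = P1 - sp.diff(sp.log(lam), x1)
Q2 = P2 - sp.diff(sp.log(lam), x2)

# Since d(d log λ) = 0, the exact term drops out of the exterior derivative.
assert sp.simplify(curvature(P1, P2) - curvature(Q1, Q2)) == 0
print(curvature(P1, P2))  # the common, gauge-invariant curvature component
```

This is the same mechanism by which df ≡ 0 holds identically: f is an exterior derivative, so adding an exact form to Ψ leaves it untouched.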

He, however, gave up the belief in the ontological correctness of the purely field-theoretic approach to matter, in which the Mie-Hilbert theory of a combined Lagrange function L(g,Ψ) for the action of the gravitational field (g) and electromagnetism (Ψ) was further geometrized and technically enriched by the principle of gauge invariance of L, substituting in its place a philosophically motivated a priori argumentation for the conceptual superiority of his gauge geometry. The goal of a unified description of gravitation and electromagnetism, and the derivation of matter structures from it, was nothing specific to Weyl. In his theory, however, the purely infinitesimal approach to manifolds and the ensuing possibility of geometrically unifying the two known interaction fields, gravitation and electromagnetism, took on a dense and conceptually sophisticated form.

von Neumann & Dis/belief in Hilbert Spaces

I would like to make a confession which may seem immoral: I do not believe absolutely in Hilbert space any more.

— John von Neumann, letter to Garrett Birkhoff, 1935.


The mathematics: let us consider the raison d’être for the Hilbert space formalism. Why would one need all this ‘Hilbert space stuff’, i.e. the continuum structure, the field structure of the complex numbers, a vector space over it, the inner-product structure, etc.? According to von Neumann, he simply used it because it happened to be ‘available’. The use of linear algebra and complex numbers in so many different scientific areas, as well as results in model theory, clearly shows that quite a bit of modeling can be done using Hilbert spaces. On the other hand, we can also model any movie by means of the data stream that runs through your cables when watching it. But does this mean that these data streams make up the stuff that makes a movie? Clearly not; we should rather turn our attention to the stuff that is being taught at drama schools and directing schools. Similarly, von Neumann turned his attention to the actual physical concepts behind quantum theory, more specifically, the notion of a physical property and the structure imposed on these by the peculiar nature of quantum observation. His quantum logic gave the resulting ‘algebra of physical properties’ a privileged role. All of this leads us to … the physics of it. Birkhoff and von Neumann crafted quantum logic in order to emphasize the notion of quantum superposition. In terms of states of a physical system and properties of that system, superposition means that the strongest property which is true for two distinct states is also true for states other than the two given ones. In order-theoretic terms this means, representing states by the atoms of a lattice of properties, that the join p ∨ q of two atoms p and q is also above other atoms. From this it easily follows that the distributive law breaks down: given an atom r ≠ p, q with r < p ∨ q, we have r ∧ (p ∨ q) = r while (r ∧ p) ∨ (r ∧ q) = 0 ∨ 0 = 0.
Birkhoff and von Neumann as well as many others believed that understanding the deep structure of superposition is the key to obtaining a better understanding of quantum theory as a whole.
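The breakdown of distributivity above can be verified in the smallest concrete model: the lattice of subspaces of the real plane, with projectors standing in for properties. The three atoms chosen below (the x-axis, the y-axis and the diagonal) are an illustrative choice; any third line inside p ∨ q would do.

```python
import numpy as np

def proj(v):
    """Orthogonal projector onto the line spanned by v in R^2."""
    v = np.array(v, dtype=float)
    return np.outer(v, v) / (v @ v)

def join(P, Q):
    """p ∨ q: projector onto the span of both ranges (via SVD rank)."""
    U, s, _ = np.linalg.svd(np.hstack([P, Q]))
    B = U[:, s > 1e-9]
    return B @ B.T

def meet(P, Q):
    """p ∧ q: by De Morgan, the complement of the join of complements."""
    I = np.eye(2)
    return I - join(I - P, I - Q)

p = proj([1, 0])  # atom: the x-axis
q = proj([0, 1])  # atom: the y-axis
r = proj([1, 1])  # a third atom lying below p ∨ q (the whole plane)

lhs = meet(r, join(p, q))           # r ∧ (p ∨ q) = r
rhs = join(meet(r, p), meet(r, q))  # (r ∧ p) ∨ (r ∧ q) = 0
print(np.allclose(lhs, r), np.allclose(rhs, np.zeros((2, 2))))
# → True True
```

The join p ∨ q is the whole plane, so it lies above the atom r even though r is neither p nor q: this is exactly the order-theoretic fingerprint of superposition, and it is what kills distributivity.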

For Schrödinger, by contrast, the key is the behavior of compound quantum systems, described by the tensor product. While the quantum information endeavor is to a great extent the result of exploiting this important insight, the language of the field is still very much that of strings of complex numbers, akin to the strings of 0’s and 1’s in the early days of computer programming. If the manner in which we describe compound quantum systems captures so much of the essence of quantum theory, then it should be at the forefront of the presentation of the theory, not preceded by continuum structure, the field of complex numbers, a vector space over the latter, etc., only to then pop up as some secondary construct. How much of quantum phenomena can be derived from ‘compoundness + epsilon’? It turned out that epsilon can be taken to be very little, surely not involving anything like continua, fields or vector spaces, but merely a ‘2D space’ of temporal composition and compoundness, together with some very natural, purely operational assertions, including one which in a constructive manner asserts entanglement; among many other things, trace structure then follows.