The Second Trichotomy. Thought of the Day 120.0

[Figure 2: Peirce’s triple trichotomy]

The second trichotomy (here is the first) is probably the most well-known piece of Peirce’s semiotics: it distinguishes three possible relations between the sign and its (dynamical) object. This relation may be motivated by similarity, by actual connection, or by general habit – giving rise to the sign classes icon, index, and symbol, respectively.

According to the second trichotomy, a Sign may be termed an Icon, an Index, or a Symbol.

An Icon is a sign which refers to the Object that it denotes merely by virtue of characters of its own, and which it possesses, just the same, whether any such Object actually exists or not. It is true that unless there really is such an Object, the Icon does not act as a sign; but this has nothing to do with its character as a sign. Anything whatever, be it quality, existent individual, or law, is an Icon of anything, in so far as it is like that thing and used as a sign of it.

An Index is a sign which refers to the Object that it denotes by virtue of being really affected by that Object. It cannot, therefore, be a Qualisign, because qualities are whatever they are independently of anything else. In so far as the Index is affected by the Object, it necessarily has some Quality in common with the Object, and it is in respect to these that it refers to the Object. It does, therefore, involve a sort of Icon, although an Icon of a peculiar kind; and it is not the mere resemblance of its Object, even in these respects which makes it a sign, but it is the actual modification of it by the Object. 

A Symbol is a sign which refers to the Object that it denotes by virtue of a law, usually an association of general ideas, which operates to cause the Symbol to be interpreted as referring to that Object. It is thus itself a general type or law, that is, a Legisign. As such it acts through a Replica. Not only is it general in itself, but the Object to which it refers is of general nature. Now that which is general has its being in the instances it will determine. There must, therefore, be existent instances of what the Symbol denotes, although we must here understand by ‘existent’, existent in the possibly imaginary universe to which the Symbol refers. The Symbol will indirectly, through the association or other law, be affected by those instances; and thus the Symbol will involve a sort of Index, although an Index of a peculiar kind. It will not, however, be by any means true that the slight effect upon the Symbol of those instances accounts for the significant character of the Symbol.

The icon refers to its object solely by means of its own properties. This implies that an icon potentially refers to an indefinite class of objects, namely all those objects which have, in some respect, a relation of similarity to it. In recent semiotics it has often been remarked, by Nelson Goodman among others, that any phenomenon can be said to be like any other phenomenon in some respect, if the criterion of similarity is chosen sufficiently generally, just as the establishment of any convention immediately implies a similarity relation. If Nelson Goodman picks out two otherwise very different objects, then they are immediately similar to the extent that they now have the same relation to Nelson Goodman. Goodman and others have for this reason deemed the similarity relation insignificant – and consequently put the whole burden of semiotics on the shoulders of conventional signs only. But the counterargument against this rejection of the relevance of the icon lies close at hand. Given a tertium comparationis, a measuring stick, it is no longer possible to make anything be like anything else. This lies in Peirce’s observation that ‘It is true that unless there really is such an Object, the Icon does not act as a sign.’ The icon only functions as a sign to the extent that it is, in fact, used to refer to some object – and when it does so, some criterion for similarity, a measuring stick (or, at least, a delimited bundle of possible measuring sticks), is given in and with the comparison. In the quote just given, it is of course the immediate object Peirce refers to – there is no claim that there should in fact exist such an object as the icon refers to. Goodman and others are of course right in claiming that, since ‘Anything whatever (…) is an Icon of anything’, the universe is pervaded by a continuum of possible similarity relations back and forth; but as soon as some phenomenon is in fact used as an icon for an object, a specific bundle of similarity relations is picked out: ‘… in so far as it is like that thing.’

Just like the qualisign, the icon is a limit category. ‘A possibility alone is an Icon purely by virtue of its quality; and its object can only be a Firstness.’ (Charles S. Peirce, The Essential Peirce: Selected Philosophical Writings). Strictly speaking, a pure icon may only refer one possible Firstness to another. The pure icon would be an identity relation between possibilities. Consequently, the icon must, as soon as it functions as a sign, be more than iconic. The icon is typically an aspect of a more complicated sign, even if very often a most important aspect, because it provides the predicative aspect of that sign. This Peirce records by his notion of the ‘hypoicon’: ‘But a sign may be iconic, that is, may represent its object mainly by its similarity, no matter what its mode of being. If a substantive is wanted, an iconic representamen may be termed a hypoicon’. Hypoicons are signs which to a large extent make use of iconic means as meaning-givers: images, paintings, photos, diagrams, etc. But the iconic meaning realized in hypoicons has an immensely fundamental role in Peirce’s semiotics. As icons are the only signs that look like what they stand for, they are at the same time the only signs realizing meaning. Thus any higher sign, index and symbol alike, must contain, or, by association or inference, terminate in, an icon. If a symbol cannot give an iconic interpretant as a result, it is empty. In that respect, Peirce’s doctrine parallels that of Husserl, where merely signitive acts require fulfillment by intuitive (‘anschauliche’) acts. This is actually Peirce’s continuation of Kant’s famous claim that intuitions without concepts are blind, while concepts without intuitions are empty. When Peirce observes that ‘With the exception of knowledge, in the present instant, of the contents of consciousness in that instant (the existence of which knowledge is open to doubt) all our thought and knowledge is by signs’ (Letters to Lady Welby), these signs necessarily involve iconic components. Peirce has often been attacked for his tendency towards a pan-semiotism which lets all mental and physical processes take place via signs – but in the quote just given he, analogously to Husserl, claims there must be a basic evidence anterior to the sign; just as for Husserl, this evidence before the sign must be based on a ‘metaphysics of presence’ – the ‘present instant’ provides what is not yet mediated by signs. But icons provide the connection of signs, logic and science to this foundation for Peirce’s phenomenology: the icon is the only sign providing evidence (Charles S. Peirce, The New Elements of Mathematics Vol. 4). The icon is, through its timeless similarity, apt to communicate aspects of an experience ‘in the present instant’. Thus, the typical index contains an icon (more or less elaborated, it is true), and any symbol intends an iconic interpretant. Continuity is at stake in relation to the icon to the extent that the icon, while not in itself general, is the bearer of a potential generality. This infinitesimal generality is decisive for the higher sign types’ possibility of giving rise to thought: the symbol thus contains a bundle of general icons defining its meaning. A special icon providing the condition of possibility for general and rigorous thought is, of course, the diagram.

The index connects the sign directly with its object via connection in space and time; as an actual sign connected to its object, the index is turned towards the past: the action which has left the index as a mark must be located in time earlier than the sign, so that the index presupposes, at least, the continuity of time and space – without which an index might occur spontaneously, with no connection to a preceding action. Maybe surprisingly, in the Peircean doctrine, the index falls into two subtypes: designators vs. reagents. Reagents are the simplest – here the sign is caused by its object in one way or another. Designators, on the other hand, are more complex: the index finger pointing to an object or the demonstrative pronoun as the subject of a proposition are prototypical examples. Here, the index presupposes an intention – the will to point out the object for some receiver. Designators, it must be argued, presuppose reagents: it is only possible to designate an object if you have already been in reagent contact (simulated or not) with it (this forming the rational kernel of causal reference theories of meaning). The closer determination of the object of an index, however, invariably involves selection on the background of continuities.

On the level of the symbol, continuity and generality play a main role – as always when approaching issues defined by Thirdness. The symbol is, in itself, a legisign, that is, it is a general object which exists only due to its actual instantiations. The symbol itself is a real and general recipe for the production of similar instantiations in the future. But apart from thus being a legisign, it is connected to its object thanks to a habit, or regularity. Sometimes, this is taken to mean ‘due to a convention’ – in an attempt to distinguish conventional as opposed to motivated sign types. This, however, rests on a misunderstanding of Peirce’s doctrine, in which the trichotomies record aspects of the sign, not mutually exclusive, independent classes of signs: symbols and icons do not form opposed, autonomous sign classes; rather, the content of the symbol is constructed from indices and general icons. The habit realized by a symbol connects it, as a legisign, to an object which is also general – an object which just like the symbol itself exists in instantiations, be they real or imagined. The symbol is thus a connection between two general objects, each of them being actualized through replicas, tokens – a connection between two continua, that is:

Definition 1. Any Blank is a symbol which could not be vaguer than it is (although it may be so connected with a definite symbol as to form with it, a part of another partially definite symbol), yet which has a purpose.

Axiom 1. It is the nature of every symbol to blank in part. […]

Definition 2. Any Sheet would be that element of an entire symbol which is the subject of whatever definiteness it may have, and any such element of an entire symbol would be a Sheet. (‘Sketch of Dichotomic Mathematics’, The New Elements of Mathematics Vol. 4: Mathematical Philosophy)

The symbol’s generality can be described as its always having blanks that are indefinite parts of its continuous sheet. Thus, the continuity of its blank parts is what grants its generality. The symbol determines its object according to some rule, granted that the object satisfies that rule – but it leaves the object indeterminate in all other respects. It is tempting to take the typical symbol to be a word, but it should rather be taken to be the argument – the predicate and the proposition being degenerate versions of arguments with further continuous blanks inserted by erasure, so to speak, forming the third trichotomy of term, proposition, argument.

Triadomania. Thought of the Day 117.0


Peirce’s famous ‘triadomania’ lets most of his decisive distinctions appear in threes, following the tripartition of his list of categories, the famous triad of First, Second, and Third, or Quality, Reaction, Representation, or Possibility, Actuality, Reality.

Firstness is the mode of being of that which is such as it is, positively and without reference to anything else.

Secondness is the mode of being of that which is such as it is, with respect to a second but regardless of any third.

Thirdness is the mode of being of that which is such as it is, in bringing a second and third into relation to each other.

Firstness constitutes the quality of experience: in order for something to appear at all, it must do so due to a certain constellation of qualitative properties. Peirce often uses sensory qualities as examples, but it is important for the understanding of his thought that the examples may refer to phenomena very far from our standard conception of ‘sensory data’, e.g. forms or the ‘feeling’ of a whole melody or of a whole mathematical proof, not to be taken in a subjective sense but as a concept for the continuity of melody or proof as a whole, apart from the analytical steps and sequences in which it may be, subsequently, subdivided. In short, all sorts of simple and complex Gestalt qualities also qualify as Firstnesses. Firstnesses tend to form continua of possibilities such as the continua of shape, color, tone, etc. These qualities, however, are, taken in themselves, pure possibilities and must necessarily be incarnated in phenomena in order to appear. Secondness is the phenomenological category of ‘incarnation’ which makes this possible: it is the insistency, then, with which the individuated, actualized, existent phenomenon appears. Thus, Secondness necessarily forms discontinuous breaks in Firstness, allowing particular qualities to enter into existence. The mind may imagine anything whatever in all sorts of quality combinations, but something appears with an irrefutable insisting power, reacting, actively, yielding resistance. Peirce’s favorite example is the resistance of the closed door – which might be imagined reduced to the mere quality of a feeling of resistance and thus degenerated into pure Firstness, so that his theory would implode into a Hume-like solipsism – but to Peirce this resistance, surprise, event, this thisness, ‘haecceity’ as he calls it with a Scotist term, remains irreducible in the description of the phenomenon (a Kantian idea, at bottom: existence is no predicate). About Thirdness, Peirce may directly state that continuity represents it perfectly: ‘continuity and generality are two names of the same absence of distinction of individuals’. As against Secondness, Thirdness is general; it mediates between First and Second. The events of Secondness are never completely unique – such an event would be inexperienceable – but relate (3) to other events (2) due to certain features (1) in them; Thirdness is thus what facilitates understanding as well as pragmatic action, due to its continuous generality. With a famous example: if you dream about an apple pie, then the very qualities of that dream (taste, smell, warmth, crustiness, etc.) are pure Firstnesses, while the act of baking is composed of a series of actual Secondnesses. But their coordination is governed by a Thirdness: the recipe, being general, can never specify all properties of the individual apple pie; it has a schematic frame-character and subsumes an indefinite series – a whole continuum – of possible apple pies. Thirdness is thus necessarily general and vague. Of course, the recipe may be more or less precise, but no recipe exists which is able to determine each and every property of the cake, including date, hour, place, which tree the apples stem from, etc. – any recipe is necessarily general. In this case, the recipe (3) mediates between dream (1) and fulfilment (2) – its generality, symbolicity, relationality and future orientation are all characteristic of Thirdness.
An important aspect of Peirce’s realism is that continuous generality may be experienced directly in perceptual judgments: ‘Generality, Thirdness, pours in upon us in our very perceptual judgments’.

All these determinations remain purely phenomenological, even if the later semiotic and metaphysical interpretations clearly shine through. In a more general, non-Peircean terminology, his phenomenology can be seen as the description of minimal aspects inherent in any imaginable possible world – for this reason it is imaginability which carries the main argument, and this might suggest that Peirce could be open to the critique of subjectivism so often aimed at Husserl’s in some respects analogous project. The concept of consciousness is invoked as the basis of imaginability: phenomenology is the study of invariant properties in any phenomenon appearing for a mind. Peirce’s answer would here be, on the one hand, the research community which according to him defines reality – an argument which structurally corresponds to Husserl’s reference to intersubjectivity as a necessary ingredient in objectivity (an object is a phenomenon which is intersubjectively accessible). Peirce, however, has a further argument here, namely his consequent refusal to delimit his concept of mind exclusively to human subjects (a category the use of which he obviously tries to minimize): mind-like processes may take place in nature without any subject being responsible for them. Peirce will, for continuity reasons, never accept any hard distinction between subject and object and remains extremely parsimonious in the employment of such terms.

From Peirce’s New Elements of Mathematics (The New Elements of Mathematics Vol. 4),

But just as the qualities, which as they are for themselves, are equally unrelated to one another, each being mere nothing for any other, yet form a continuum in which and because of their situation in which they acquire more or less resemblance and contrast with one another; and then this continuum is amplified in the continuum of possible feelings of quality, so the accidents of reaction, which are waking consciousnesses of pairs of qualities, may be expected to join themselves into a continuum.

Since, then an accidental reaction is a combination or bringing into special connection of two qualities, and since further it is accidental and antigeneral or discontinuous, such an accidental reaction ought to be regarded as an adventitious singularity of the continuum of possible quality, just as two points of a sheet of paper might come into contact.

But although singularities are discontinuous, they may be continuous to a certain extent. Thus the sheet instead of touching itself in the union of two points may cut itself all along a line. Here there is a continuous line of singularity. In like manner, accidental reactions though they are breaches of generality may come to be generalized to a certain extent.

Secondness is now taken to actualize these quality possibilities based on an idea that any actual event involves a clash of qualities – in the ensuing argumentation Peirce underlines that the qualities involved in actualization need not be restrained to two but may be many, if they may only be ‘dissolved’ into pairs and hence do not break into the domain of Thirdness. This appearance of actuality, hence, has the property of singularities, spontaneously popping up in the space of possibilities and actualizing pairs of points in it. This transition from First to Second is conceived of along Aristotelian lines: as an actualization of a possibility – and this is expressed in the picture of a discontinuous singularity in the quality continuum. The topological fact that singularities must in general be defined with respect to the neighborhood of the manifold in which they appear, now becomes the argument for the fact that Secondness can never be completely discontinuous but still ‘inherits’ a certain small measure of continuity from the continuum of Firstness. Singularities, being discontinuous along certain dimensions, may be continuous in others, which provides the condition of possibility for Thirdness to exist as a tendency for Secondness to conform to a general law or regularity. As is evident, a completely pure Secondness is impossible in this continuous metaphysics – it remains a conceivable but unrealizable limit case, because a completely discontinuous event would amount to nothing. Thirdness already lies as a germ in the non-discontinuous aspects of the singularity. The occurrences of Secondness seem to be infinitesimal, then, rather than completely extensionless points.

Dynamics of Point Particles: Orthogonality and Proportionality


Let γ be a smooth, future-directed, timelike curve with unit tangent field ξa in our background spacetime (M, gab). We suppose that some massive point particle O has (the image of) this curve as its worldline. Further, let p be a point on the image of γ and let λa be a vector at p. Then there is a natural decomposition of λa into components proportional to, and orthogonal to, ξa:

λa = (λbξb)ξa + (λa − (λbξb)ξa) —– (1)

Here, the first part of the sum is proportional to ξa, whereas the second one is orthogonal to ξa.

These are standardly interpreted, respectively, as the “temporal” and “spatial” components of λa relative to ξa (or relative to O). In particular, the three-dimensional vector space of vectors at p orthogonal to ξa is interpreted as the “infinitesimal” simultaneity slice of O at p. If we introduce the tangent and orthogonal projection operators

kab = ξa ξb —– (2)

hab = gab − ξa ξb —– (3)

then the decomposition can be expressed in the form

λa = kab λb + hab λb —– (4)

We can think of kab and hab as the relative temporal and spatial metrics determined by ξa. They are symmetric and satisfy

kabkbc = kac —– (5)

habhbc = hac —– (6)

Many standard textbook assertions concerning the kinematics and dynamics of point particles can be recovered using these decomposition formulas. For example, suppose that the worldline of a second particle O′ also passes through p and that its four-velocity at p is ξ′a. (Since ξa and ξ′a are both future-directed, they are co-oriented; i.e., ξa ξ′a > 0.) We compute the speed of O′ as determined by O. To do so, we take the spatial magnitude of ξ′a relative to O and divide by its temporal magnitude relative to O:

v = speed of O′ relative to O = ∥hab ξ′b∥ / ∥kab ξ′b∥ —– (7)

For any vector μa, ∥μa∥ is (μaμa)1/2 if μ is causal, and it is (−μaμa)1/2 otherwise.

We have from equations 2, 3, 5 and 6

∥kab ξ′b∥ = (kab ξ′b kac ξ′c)1/2 = (kbc ξ′bξ′c)1/2 = (ξ′bξb)

and

∥hab ξ′b∥ = (−hab ξ′b hac ξ′c)1/2 = (−hbc ξ′bξ′c)1/2 = ((ξ′bξb)2 − 1)1/2

so

v = ((ξ’bξb)2 − 1)1/2 / (ξ′bξb) < 1 —– (8)

Thus, as measured by O, no massive particle can ever attain the maximal speed 1. We note that equation (8) implies that

(ξ′bξb) = 1/√(1 – v2) —– (9)
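
The computation above is easy to check numerically. The following sketch works in flat Minkowski spacetime with signature (+, −, −, −) and uses two arbitrarily chosen unit timelike four-velocities; numpy and the specific numbers are illustrative conveniences, not anything fixed by the discussion itself.

```python
# Numerical sketch of the decomposition and speed formulas (1)-(9) in flat
# Minkowski spacetime, signature (+,-,-,-); the four-velocities are arbitrary.
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])             # g_ab

def lower(v):                                     # v^a -> v_a
    return g @ v

def norm(v):                                      # ||v|| as defined above
    return np.sqrt(abs(v @ lower(v)))

xi  = np.array([np.cosh(0.3), np.sinh(0.3), 0.0, 0.0])   # four-velocity of O
xi2 = np.array([np.cosh(0.9), 0.0, np.sinh(0.9), 0.0])   # four-velocity of O'
assert np.isclose(xi @ lower(xi), 1.0) and np.isclose(xi2 @ lower(xi2), 1.0)

k = np.outer(xi, lower(xi))                       # k^a_b = xi^a xi_b, eq (2) with an index raised
h = np.eye(4) - k                                 # h^a_b, eq (3) with an index raised

lam = np.array([2.0, -1.0, 0.5, 3.0])             # an arbitrary vector lambda^a
assert np.allclose(k @ lam + h @ lam, lam)        # decomposition (1)/(4)
assert np.isclose(lower(xi) @ (h @ lam), 0.0)     # spatial part orthogonal to xi^a
assert np.allclose(k @ k, k) and np.allclose(h @ h, h)    # eqs (5)-(6)

v = norm(h @ xi2) / norm(k @ xi2)                 # speed of O' relative to O, eq (7)
gamma = xi2 @ lower(xi)                           # xi'^b xi_b
assert v < 1.0                                    # eq (8)
assert np.isclose(gamma, 1.0 / np.sqrt(1.0 - v**2))       # eq (9)
print("v =", v, " gamma =", gamma)
```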

It is a basic fact of relativistic life that there is associated with every point particle, at every event on its worldline, a four-momentum (or energy-momentum) vector Pa that is tangent to its worldline there. The length ∥Pa∥ of this vector is what we would otherwise call the mass (or inertial mass or rest mass) of the particle. So, in particular, if Pa is timelike, we can write it in the form Pa =mξa, where m = ∥Pa∥ > 0 and ξa is the four-velocity of the particle. No such decomposition is possible when Pa is null and m = ∥Pa∥ = 0.

Suppose a particle O with positive mass has four-velocity ξa at a point, and another particle O′ has four-momentum Pa there. The latter can either be a particle with positive mass or mass 0. We can recover the usual expressions for the energy and three-momentum of the second particle relative to O if we decompose Pa in terms of ξa. By equations (4) and (2), we have

Pa = (Pbξb) ξa + habPb —– (10)

the first part of the sum is the energy component, while the second is the three-momentum. The energy relative to O is the coefficient in the first term: E = Pbξb. If O′ has positive mass and Pa = mξ′a, this yields, by equation (9),

E = m (ξ′bξb) = m/√(1 − v2) —– (11)

(If we had not chosen units in which c = 1, the numerator in the final expression would have been mc2 and the denominator √(1 − (v2/c2)).) The three-momentum relative to O is the second term habPb in the decomposition of Pa, i.e., the component of Pa orthogonal to ξa. It follows from equations (8) and (9) that it has magnitude

p = ∥hab mξ′b∥ = m((ξ′bξb)2 − 1)1/2 = mv/√(1 − v2) —– (12)
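
Equations (10)-(12) can be checked in the same way. The self-contained snippet below repeats the minimal setup of the previous sketch, with an arbitrarily chosen mass and rapidity.

```python
# A short check of the energy and momentum formulas (10)-(12), flat Minkowski
# spacetime (+,-,-,-); mass and rapidity are arbitrary illustrative choices.
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])
lower = lambda u: g @ u
norm  = lambda u: np.sqrt(abs(u @ lower(u)))

xi  = np.array([1.0, 0.0, 0.0, 0.0])              # observer O at rest
chi, m = 0.7, 2.5                                  # rapidity and mass of O'
xi2 = np.array([np.cosh(chi), np.sinh(chi), 0.0, 0.0])
P   = m * xi2                                      # four-momentum P^a = m xi'^a

h = np.eye(4) - np.outer(xi, lower(xi))            # spatial projector h^a_b
E = P @ lower(xi)                                  # energy relative to O, eq (11)
p = norm(h @ P)                                    # three-momentum magnitude, eq (12)

v = np.tanh(chi)                                   # speed of O' relative to O
assert np.isclose(E, m / np.sqrt(1 - v**2))
assert np.isclose(p, m * v / np.sqrt(1 - v**2))
print("E =", E, " p =", p)
```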

An interpretive principle asserts that the worldlines of free particles with positive mass are the images of timelike geodesics. It can be thought of as a relativistic version of Newton’s first law of motion. Now we consider acceleration and a relativistic version of the second law. Once again, let γ : I → M be a smooth, future-directed, timelike curve with unit tangent field ξa. Just as we understand ξa to be the four-velocity field of a massive point particle (that has the image of γ as its worldline), so we understand ξn∇nξa – the directional derivative of ξa in the direction ξa – to be its four-acceleration (or just acceleration) field. The four-acceleration vector at any point is orthogonal to ξa there. (This is because ξa(ξn∇nξa) = 1/2 ξn∇n(ξaξa) = 1/2 ξn∇n(1) = 0.) The magnitude ∥ξn∇nξa∥ of the four-acceleration vector at a point is just what we would otherwise describe as the curvature of γ there. It is a measure of the rate at which γ “changes direction.” (And γ is a geodesic precisely if its curvature vanishes everywhere.)

The notion of spacetime acceleration requires attention. Consider an example. Suppose you decide to end it all and jump off the tower. What would your acceleration history be like during your final moments? One is accustomed in such cases to think in terms of acceleration relative to the earth. So one would say that you undergo acceleration between the time of your jump and your calamitous arrival. But on the present account, that description has things backwards. Between jump and arrival, you are not accelerating. You are in a state of free fall and moving (approximately) along a spacetime geodesic. But before the jump, and after the arrival, you are accelerating. The floor of the observation deck, and then later the sidewalk, push you away from a geodesic path. The all-important idea here is that we are incorporating the “gravitational field” into the geometric structure of spacetime, and particles traverse geodesics iff they are acted on by no forces “except gravity.”

The acceleration of our massive point particle – i.e., its deviation from a geodesic trajectory – is determined by the forces acting on it (other than “gravity”). If it has mass m, and if the vector field Fa on I represents the vector sum of the various (non-gravitational) forces acting on it, then the particle’s four-acceleration ξn∇nξa satisfies

Fa = mξn∇nξa —– (13)

This is Newton’s second law of motion. Consider an example. Electromagnetic fields are represented by smooth, anti-symmetric fields Fab. If a particle with mass m > 0, charge q, and four-velocity field ξa is present, the force exerted by the field on the particle at a point is given by qFabξb. If we use this expression for the left side of equation (13), we arrive at the Lorentz law of motion for charged particles in the presence of an electromagnetic field:

qFabξb = mξb∇bξa —– (14)

This equation makes geometric sense. The acceleration field on the right is orthogonal to ξa. But so is the force field on the left, since ξa(Fabξb) = ξaξbFab = ξaξbF(ab), and F(ab) = 0 by the anti-symmetry of Fab.
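
The same orthogonality can be exhibited numerically with a randomly generated antisymmetric field; the field, the four-velocity and the choice q = 1 below are all arbitrary illustrative inputs in flat Minkowski spacetime.

```python
# Check that the Lorentz force q F^a_b xi^b is orthogonal to xi^a, since F_ab
# is antisymmetric; everything here is randomly generated for illustration.
import numpy as np

rng = np.random.default_rng(0)
g = np.diag([1.0, -1.0, -1.0, -1.0])

A = rng.normal(size=(4, 4))
F_lower = A - A.T                                  # an antisymmetric F_ab
F_mixed = np.linalg.inv(g) @ F_lower               # F^a_b = g^ac F_cb

chi = rng.normal(size=3)
xi = np.concatenate(([np.sqrt(1 + chi @ chi)], chi))    # a unit timelike xi^a
force = F_mixed @ xi                               # q F^a_b xi^b with q = 1
print(np.isclose((g @ xi) @ force, 0.0))           # xi_a F^a_b xi^b = 0
```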

Infinitesimal and Differential Philosophy. Note Quote.


If difference is the ground of being qua becoming, it is not difference as contradiction (Hegel), but as infinitesimal difference (Leibniz). Accordingly, the world is an ideal continuum or transfinite totality (Fold: Leibniz and the Baroque) of compossibilities and incompossibilities analyzable into an infinity of differential relations (Desert Islands and Other Texts). As the physical world is merely composed of contiguous parts that actually divide until infinity, it finds its sufficient reason in the reciprocal determination of evanescent differences (dy/dx, i.e. the perfectly determinable ratio or intensive magnitude between indeterminate and unassignable differences that relate virtually but never actually). But what is an evanescent difference if not a speculation or fiction? Leibniz refuses to make a distinction between the ontological nature and the practical effectiveness of infinitesimals. For even if they have no actuality of their own, they are nonetheless the genetic requisites of actual things.

Moreover, infinitesimals are precisely those paradoxical means through which the finite understanding is capable of probing into the infinite. They are the elements of a logic of sense, that great logical dream of a combinatory or calculus of problems (Difference and Repetition). On the one hand, intensive magnitudes are entities that cannot be determined logically, i.e. in extension, even if they appear or are determined in sensation only in connection with already extended physical bodies. This is because in themselves they are determined at infinite speed. Is not the differential precisely this problematic entity at the limit of sensibility that exists only virtually, formally, in the realm of thought? Isn’t the differential precisely a minimum of time, which refers only to the swiftness of its fictional apprehension in thought, since it is synthesized in Aion, i.e. in a time smaller than the minimum of continuous time and hence in the interstitial realm where time takes thought instead of thought taking time?

Contrary to the Kantian critique that seeks to eliminate the duality between finite understanding and infinite understanding in order to avoid the contradictions of reason, Deleuze thus agrees with Maïmon that we shouldn’t speak of differentials as mere fictions unless they require the status of a fully actual reality in that infinite understanding. The alternative between mere fictions and actual reality is a false problem that hides the paradoxical reality of the virtual as such: real but not actual, ideal but not abstract. If Deleuze is interested in the esoteric history of differential philosophy, this is as a speculative alternative to the exoteric history of the extensional science of actual differences and to Kantian critical philosophy. It is precisely through conceptualizing intensive, differential relations that finite thought is capable of acquiring consistency without losing the infinite in which it plunges. This brings us back to Leibniz and Spinoza. As Deleuze writes about the former: no one has gone further than Leibniz in the exploration of sufficient reason [and] the element of difference and therefore [o]nly Leibniz approached the conditions of a logic of thought. Or as he argues of the latter, fictional abstractions are only a preliminary stage for thought to become more real, i.e. to produce an expressive or progressive synthesis: The introduction of a fiction may indeed help us to reach the idea of God as quickly as possible without falling into the traps of infinite regression. In Maïmon’s reinvention of the Kantian schematism as well as in the Deleuzian system of nature, the differentials are the immanent noumena that are dramatized by reciprocal determination in the complete determination of the phenomenal. Even the Kantian concept of the straight line, Deleuze emphasizes, is a dramatic synthesis or integration of an infinity of differential relations. In this way, infinitesimals constitute the distinct but obscure grounds enveloped by clear but confused effects. They are not empirical objects but objects of thought. Even if they are only known as already developed within the extensional becomings of the sensible and covered over by representational qualities, as differences they are problems that do not resemble their solutions and as such continue to insist in an enveloped, quasi-causal state.

Holonomies: Philosophies of Conjugacy. Part 1.

[Figure 6: Holonomy along a leafwise path]

Suppose that N is an irreducible 2n-dimensional Riemannian symmetric space. We may realise N as a coset space N = G/K with (Gτ)0 ⊂ K ⊂ Gτ for some involution τ of G. Now K is (a covering of) the holonomy group of N and similarly the coset fibration G → G/K covers the holonomy bundle P → N. In this setting, J(N) is associated to G:

J(N) ≅ G ×K J (R2n)

and if K/H is a K-orbit in J(R2n) then the corresponding subbundle is G ×K K/H = G/H and the projection is just the coset fibration. Thus, the subbundles of J(N) are just the orbits of G in J(N).

Let j ∈ J (N). Then G · j is an almost complex submanifold of J (N) on which J is integrable iff j lies in the zero-set of the Nijenhuis tensor NJ.

This focusses our attention on the zero-set of NJ which we denote by Z. In favourable circumstances, the structure of this set can be completely described. We begin by assuming that N is of compact type so that G is compact and semi-simple. We also assume that N is inner, i.e. that τ is an inner involution of G or, equivalently, that rank G = rank K. The class of inner symmetric spaces includes the even-dimensional spheres, the Hermitian symmetric spaces, the quaternionic Kähler symmetric spaces and indeed all symmetric G-spaces for G = SO(2n+1), Sp(n), E7, E8, F4 and G2. Moreover, all inner symmetric spaces are necessarily even-dimensional and so fit into our framework.

Let N = G/K be a simply-connected inner Riemannian symmetric space of compact type. Then Z consists of a finite number of connected components on each of which G acts transitively. Moreover, any G-flag manifold is realised as such an orbit for some N.

The proof for the above requires a detour into the geometry of flag manifolds and reveals an interesting interaction between the complex geometry of flag manifolds and the real geometry of inner symmetric spaces. For this, we begin by noting that a coset space of the form G/C(T) admits, in general, several invariant Kählerian complex structures. These arise from a complex realisation of G/C(T), as follows: having fixed a complex structure, the complexified group GC acts transitively on G/C(T) by biholomorphisms with parabolic subgroups as stabilisers. Conversely, if P ⊂ GC is a parabolic subgroup then the action of G on GC/P is transitive and G ∩ P is the centraliser of a torus in G. For the infinitesimal situation: let F = G/C(T) be a flag manifold and let o ∈ F. We have a splitting of the Lie algebra of G

g = h ⊕ m

with m ≅ ToF and h the Lie algebra of the stabiliser of o in G. An invariant complex structure on F induces an ad h-invariant splitting of mC into (1, 0) and (0, 1) spaces mC = m+ ⊕ m− with [m+, m+] ⊂ m+ by integrability. One can show that m+ and m− are nilpotent subalgebras of gC and in fact hC ⊕ m− is a parabolic subalgebra of gC with nilradical m−. If P is the corresponding parabolic subgroup of GC then P is the stabiliser of o and we obtain a biholomorphism between the complex coset space GC/P and the flag manifold F.

Conversely, let P ⊂ GC be a parabolic subgroup with Lie algebra p and let n be the conjugate of the nilradical of p (with respect to the real form g). Then H = G ∩ P is the centraliser of a torus and we have orthogonal decompositions (with respect to the Killing inner product)

p = hC ⊕ n̄, gC = hC ⊕ n ⊕ n̄

which define an invariant complex structure on G/H realising the biholomorphism with GC/P.

The relationship between a flag manifold F = GC/P and an inner symmetric space comes from an examination of the central descending series of n. This is a filtration 0 = nk+1 ⊂ nk ⊂…⊂ n1 = n of n defined by ni = [n, ni−1].
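
As a concrete low-dimensional illustration (not part of the general argument above), one can compute this series when GC = SL(3, C) and P is the Borel subgroup of upper-triangular matrices, so that n can be taken to be the strictly lower-triangular matrices.

```python
# The central descending series n_i = [n, n_{i-1}] for the strictly
# lower-triangular matrices in sl(3, C): it terminates after two steps.
import numpy as np
from itertools import product

def E(i, j):
    M = np.zeros((3, 3))
    M[i, j] = 1.0
    return M

n1 = [E(1, 0), E(2, 0), E(2, 1)]          # basis of n (strictly lower triangular)

def bracket_dim(X, Y):
    """Commutators spanning [span X, span Y], and the dimension of that span."""
    comms = [a @ b - b @ a for a, b in product(X, Y)]
    M = np.array([c.flatten() for c in comms])
    return comms, int(np.linalg.matrix_rank(M))

n2, d2 = bracket_dim(n1, n1)              # n_2 = [n, n_1]
n3, d3 = bracket_dim(n1, n2)              # n_3 = [n, n_2]
print("dim n_1 =", 3, " dim n_2 =", d2, " dim n_3 =", d3)
# expected output: 3, 1, 0 -- i.e. the filtration here is 0 = n_3 in n_2 in n_1 = n
```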

We orthogonalise this filtration using the Killing inner product by setting

gi = ni+1⊥ ∩ ni

for i ≥ 1 and extend this to a decomposition of gC by setting g0 = hC = (g ∩ p)C and g−i = ḡi (the conjugate of gi) for i ≥ 1. Then

gC = ∑gi

is an orthogonal decomposition with

p = ∑i≤0 gi, n = ∑i>0 gi

The crucial property of this decomposition is that

[gi, gj] ⊂ gi+j

which can be proved by demonstrating the existence of an element ξ ∈ h with the property that, for each i, ad ξ has eigenvalue √−1 i on gi. This element ξ (necessarily unique since g is semi-simple) is the canonical element of p. Since ad ξ has eigenvalues in √−1Z, exp(π ad ξ) is an involution of g, which exponentiates to an inner involution τξ of G and thus gives an inner symmetric space G/K where K = (Gτξ)0. Clearly, K has Lie algebra given by

k = g ∩ ∑i g2i

Odd symplectic + Odd Poisson Geometry as a Generalization of Symplectic (Poisson) Geometry to the Supercase

A symplectic structure on a manifold M is defined by a non-degenerate closed two-form ω. In a vicinity of an arbitrary point one can consider coordinates (x1, . . . , x2n) such that ω = ∑i=1n dxi dxi+n. Such coordinates are called Darboux coordinates. To a symplectic structure corresponds a non-degenerate Poisson structure { , }. In Darboux coordinates {xi,xj} = 0 if |i−j| ≠ n and {xi,xi+n} = −{xi+n,xi} = 1. The condition of closedness of the two-form ω corresponds to the Jacobi identity {f,{g,h}} + {g,{h,f}} + {h,{f,g}} = 0

for the Poisson bracket. If a symplectic or Poisson structure is given, then every function f defines a vector field (the Hamiltonian vector field) Df such that Dfg = {f,g} = −ω(Df,Dg).

A Poisson structure can be defined independently of a symplectic structure. In general it can be degenerate, i.e., there exist non-constant functions f such that Df = 0. In the case when a Poisson structure is non-degenerate (corresponds to a symplectic structure), the map from T∗M to TM defined by the relation df → Df is an isomorphism.
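
These classical statements are easy to verify symbolically. The sympy sketch below checks antisymmetry and the Jacobi identity for the canonical bracket in Darboux coordinates, with n = 1 and a few arbitrarily chosen functions.

```python
# Canonical Poisson bracket in Darboux coordinates for n = 1: {x1, x2} = 1.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def pb(f, g):
    return sp.diff(f, x1) * sp.diff(g, x2) - sp.diff(f, x2) * sp.diff(g, x1)

f, g, h = x1**2 * x2, sp.sin(x1) + x2**3, x1 * sp.exp(x2)
print(sp.simplify(pb(f, g) + pb(g, f)))                                   # 0 (antisymmetry)
print(sp.simplify(pb(f, pb(g, h)) + pb(g, pb(h, f)) + pb(h, pb(f, g))))   # 0 (Jacobi identity)
```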

One can straightforwardly generalize these constructions to the supercase and consider symplectic and Poisson structures (even or odd) on supermanifolds. An even (odd) symplectic structure on a supermanifold is defined by an even (odd) non-degenerate closed two-form. In the same way as the existence of a symplectic structure on an ordinary manifold implies that the manifold is even-dimensional (by the non-degeneracy condition for the form ω), the existence of an even or odd symplectic structure on a supermanifold implies that the dimension of the supermanifold is equal either to (2p.q) for an even structure or to (m.m) for an odd structure. Darboux coordinates exist in both cases. For an even structure, the two-form in Darboux coordinates

zA = (x1,…, x2p, θ1,…, θq) has the form ∑i=1p dxi dxp+i + ∑a=1q εa dθa dθa,

where εa = ±1. For an odd structure, the two-form in Darboux coordinates zA = (x1,…,xm, θ1,…,θm) has the form ∑i=1m dxi dθi.

The non-degenerate odd Poisson bracket corresponding to an odd symplectic structure has the following appearance in Darboux coordinates: {xi, xj} = 0, {θi, θj} = 0 for all i, j and {xi, θj} = −{θj, xi} = δji. Thus for arbitrary two functions f, g

{f, g} = ∂f/∂xi ∂g/∂θi + (−1)p(f) ∂f/∂θi ∂g/∂xi,

where we denote by p(f) the parity of a function f (p(xi) = 0, p(θj) = 1). Similarly one can write down the formulae for the non-degenerate even Poisson structure corresponding to an even symplectic structure.

A Poisson structure (odd or even) can be defined on a supermanifold independently of a symplectic structure as a bilinear operation on functions (bracket) satisfying the following relations taken as axioms:

[the graded symmetry, Leibniz and Jacobi identities for the bracket, with sign factors depending on the parities p(f), p(g), p(h) and on ε]

where ε is the parity of the bracket (ε = 0 for an even Poisson structure and ε = 1 for an odd one). The correspondence between functions and Hamiltonian vector fields is defined in the same way as on ordinary manifolds: Dfg = {f, g}. Notice a possible parity shift: p(Df) = p(f) + ε. Every Hamiltonian vector field Df defines an infinitesimal transformation preserving the Poisson structure (and the corresponding symplectic structure in the case of a non-degenerate Poisson bracket).

Notice that even or odd Poisson structures on an arbitrary supermanifold can be obtained as “derived” brackets from the canonical symplectic structure on the cotangent bundle, in the following way.

Let M be a supermanifold and T∗M be its cotangent bundle. By changing parity of coordinates in the fibres of T∗M we arrive at the supermanifold ΠT∗M. If zA are arbitrary coordinates on the supermanifold M, then we denote by (zA, pB) the corresponding coordinates on the supermanifold T∗M and by (zA, z∗B) the corresponding coordinates on ΠT∗M: p(pA) = p(zA), p(z∗A) = p(zA) + 1. If z′A is another coordinate system on M, with zA = zA(z′), then the coordinates z∗A transform in the same way as the coordinates pA (and as the partial derivatives ∂/∂zA):

p′A = ∂zB(z′)/∂z′A pB and z∗′A = ∂zB(z′)/∂z′A z∗B

One can consider the canonical non-degenerate even Poisson structure { , }0 (the canonical even symplectic structure) on T∗M defined by the relations {zA,zB}0 = {pC,pD}0 = 0, {zA,pB}0 = δBA, and, respectively, the canonical non-degenerate odd Poisson structure { , }1 (the canonical odd symplectic structure) on ΠT∗M defined by the relations {zA,zB}1 = {z∗C,z∗D}1 = 0, {zA,z∗B}1 = δAB.

Now consider Hamiltonians on T∗M or on ΠT∗M that are quadratic in coordinates of the fibres. An arbitrary odd quadratic Hamiltonian on T∗M (an arbitrary even quadratic Hamiltonian on ΠT∗M):

S(z,p) = SABpApB (p(S) = 1) or S(z,z∗) = SABz∗Az∗B (p(S) = 0) —– (1)

satisfying the condition that the canonical Poisson bracket of this Hamiltonian with itself vanishes:

{S,S}0 = 0 or {S,S}1 = 0 —– (2)

defines an odd Poisson structure (an even Poisson structure) on M by the formula

{f,g}Sε+1 = {f,{S,g}ε}ε —–(3)

The Hamiltonian S which generates an odd (even) Poisson structure on M via the canonical even (odd) Poisson structure on T∗M (ΠT∗M) can be called the master Hamiltonian. The bracket is a “derived bracket”. The Jacobi identity for it is equivalent to the vanishing of the canonical Poisson bracket for the master Hamiltonian. One can see that an arbitrary Poisson structure on a supermanifold can be obtained as a derived bracket.

What happens if we change the parity of the master Hamiltonian in (3)? The answer is the following. If S is an even quadratic Hamiltonian on T∗M (an odd quadratic Hamiltonian on ΠT∗M), then the condition of vanishing of the canonical even Poisson bracket { , }0 (the canonical odd Poisson bracket { , }1) becomes empty (it is obeyed automatically) and the relation (3) defines an even Riemannian metric (an odd Riemannian metric) on M.
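
The purely even half of this observation can be made concrete without any Grassmann machinery: on an ordinary two-dimensional M (so that all parity signs are trivial), applying formula (3) to a quadratic even S on T∗M produces a symmetric, metric-like bracket rather than a Poisson one. The coefficients S^AB chosen below are arbitrary, and the overall sign and factor of 2 in the result depend on the bracket conventions adopted here.

```python
# Derived bracket (3) in the simplest purely even case: M ordinary and
# 2-dimensional, S = S^{AB}(z) p_A p_B on T*M, {S,S}_0 = 0 automatically.
import sympy as sp

z1, z2, p1, p2 = sp.symbols('z1 z2 p1 p2')
Z, P = [z1, z2], [p1, p2]

def pb0(F, G):
    """Canonical even Poisson bracket on T*M, with {z^A, p_B}_0 = delta^A_B."""
    return sum(sp.diff(F, Z[A]) * sp.diff(G, P[A])
               - sp.diff(F, P[A]) * sp.diff(G, Z[A]) for A in range(2))

S11, S12, S22 = sp.exp(z1), z1 * z2, 1 + z2**2          # an arbitrary symmetric S^{AB}(z)
S = S11 * p1**2 + 2 * S12 * p1 * p2 + S22 * p2**2       # the master Hamiltonian

def derived(f, g):
    return pb0(f, pb0(S, g))                             # formula (3) with eps = 0

# on coordinate functions the derived bracket returns -2 S^{AB} ...
print(sp.simplify(derived(z1, z1)), sp.simplify(derived(z1, z2)))
# ... and it is symmetric rather than antisymmetric: a metric, not a Poisson bracket
f, g = sp.sin(z1) + z2, z1 * z2**2
print(sp.simplify(derived(f, g) - derived(g, f)))        # 0
```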

Formally, odd symplectic (and odd Poisson) geometry is a generalization of symplectic (Poisson) geometry to the supercase. However, there are unexpected analogies between the constructions in odd symplectic geometry and in Riemannian geometry. The construction of derived brackets could explain close relations between odd Poisson structures in supermathematics and the Riemannian geometry.

Homogeneity: Leibniz Contra Euler. Note Quote.


Euler insists that the relation of equality holds between any infinitesimal and zero. Similarly, Leibniz worked with a generalized relation of “equality” which was an equality up to a negligible term. Leibniz codified this relation in terms of his transcendental law of homogeneity (TLH), or lex homogeneorum transcendentalis in the original Latin. Leibniz had already referred to the law of homogeneity in his first work on the calculus: “the only remaining differential quantities, namely dx, dy, are found always outside the numerators and roots, and each member is acted on by either dx, or by dy, always with the law of homogeneity maintained with regard to these two quantities, in whatever manner the calculation may turn out.”

The TLH governs equations involving differentials. Bos interprets it as follows:

A quantity which is infinitely small with respect to another quantity can be neglected if compared with that quantity. Thus all terms in an equation except those of the highest order of infinity, or the lowest order of infinite smallness, can be discarded. For instance,

a + dx = a —– (1)

dx + ddx = dx

etc. The resulting equations satisfy this . . . requirement of homogeneity.

(here the expression ddx denotes a second-order differential obtained as a second difference). Thus, formulas like Euler’s

a + dx = a —– (2)

(where a “is any finite quantity” (Euler)) belong in the Leibnizian tradition of drawing inferences in accordance with the TLH, as reported by Bos in formula (1) above. The principle of cancellation of infinitesimals was, of course, the very basis of the technique. However, it was also the target of Berkeley’s charge of a logical inconsistency (Berkeley). This can be expressed in modern notation by the conjunction (dx ≠ 0) ∧ (dx = 0). But the Leibnizian framework does not suffer from an inconsistency of type (dx ≠ 0) ∧ (dx = 0) given the more general relation of “equality up to”; in other words, the dx is not identical to zero but is merely discarded at the end of the calculation in accordance with the TLH.

Relations of equality: What Euler and Leibniz appear to have realized more clearly than their contemporaries is that there is more than one relation falling under the general heading of “equality”. Thus, to explain formulas like (2), Euler elaborated two distinct ways, arithmetic and geometric, of comparing quantities. He described the two modalities of comparison in the following terms:

Since we are going to show that an infinitely small quantity is really zero (cyphra), we must meet the objection of why we do not always use the same symbol 0 for infinitely small quantities, rather than some special ones…

[S]ince we have two ways to compare them [a more precise translation would be “there are two modalities of comparison”], either arithmetic or geometric, let us look at the quotients of quantities to be compared in order to see the difference. (Euler)

Furthermore,

If we accept the notation used in the analysis of the infinite, then dx indicates a quantity that is infinitely small, so that both dx = 0 and a dx = 0, where a is any finite quantity. Despite this, the geometric ratio a dx : dx is finite, namely a : 1. For this reason, these two infinitely small quantities, dx and adx, both being equal to 0, cannot be confused when we consider their ratio. In a similar way, we will deal with infinitely small quantities dx and dy.

Having defined the two modalities of comparison of quantities, arithmetic and geometric, Euler proceeds to clarify the difference between them as follows:

Let a be a finite quantity and let dx be infinitely small. The arithmetic ratio of equals is clear:

Since ndx = 0, we have

a ± ndx − a = 0 —– (3)

On the other hand, the geometric ratio is clearly of equals, since

(a ± ndx)/a =1 —– (4)

While Euler speaks of distinct modalities of comparison, he writes them down symbolically in terms of two distinct relations, both denoted by the equality sign “=”; namely, (3) and (4). Euler concludes as follows:

From this we obtain the well-known rule that the infinitely small vanishes in comparison with the finite and hence can be neglected [with respect to it].

Note that in the Latin original, the italicized phrase reads infinite parva prae finitis evanescant, atque adeo horum respectu reiici queant. The term evanescant can mean either vanish or lapse, but the term prae makes it read literally as “the infinitely small vanishes before (or by the side of) the finite,” implying that the infinitesimal disappears because of the finite, and only once it is compared to the finite.

A possible interpretation is that any motion or activity involved in the term evanescant does not indicate that the infinitesimal quantity is a dynamic entity that is (in and of itself) in a state of disappearing, but rather is a static entity that changes, or disappears, only “with respect to” (horum respectu) a finite entity. To Euler, the infinitesimal has a different status depending on what it is being compared to. The passage suggests that Euler’s usage accords more closely with reasoning exploiting static infinitesimals than with dynamic limit-type reasoning.
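
Euler’s two modalities of comparison are easy to mimic symbolically if one is willing to model dx as an ordinary variable that is sent to zero at the end of the computation, a modern stand-in rather than Euler’s own procedure.

```python
# The arithmetic comparison (3), the geometric comparison (4), and Euler's
# finite ratio a dx : dx = a : 1, with dx modelled as a variable sent to zero.
import sympy as sp

a, n, dx = sp.symbols('a n dx', positive=True)

arithmetic = sp.limit((a + n * dx) - a, dx, 0)     # eq (3): the difference vanishes
geometric  = sp.limit((a + n * dx) / a, dx, 0)     # eq (4): the ratio is 1
finite_ratio = sp.simplify((a * dx) / dx)          # a dx : dx = a : 1

print(arithmetic, geometric, finite_ratio)         # 0, 1, a
```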

Single Asset Optimal Investment Fraction


We first consider a situation in which an investor can spend a fraction of his capital to buy shares of just one risky asset. The rest of his money he keeps in cash.

Generalizing Kelly, we consider the following simple strategy of the investor: he regularly checks the asset’s current price p(t), and sells or buys some asset shares in order to keep the current market value of his asset holdings a pre-selected fraction r of his total capital. These readjustments are made periodically at a fixed interval, which we refer to as readjustment interval, and select it as the discrete unit of time. In this work the readjustment time interval is selected once and for all, and we do not attempt optimization of its length.

We also assume that on the time-scale of this readjustment interval the asset price p(t) undergoes a geometric Brownian motion:

p(t + 1) = eη(t)p(t) —– (1)

i.e. at each time step the random number η(t) is drawn from some probability distribution π(η), and is independent of its value at previous time steps. This exponential notation is particularly convenient for working with multiplicative noise, keeping the necessary algebra at minimum. Under these rules of dynamics the logarithm of the asset’s price, ln p(t), performs a random walk with an average drift v = ⟨η⟩ and a dispersion D = ⟨η2⟩ − ⟨η⟩2.

It is easy to derive the time evolution of the total capital W(t) of an investor, following the above strategy:

W(t + 1) = (1 − r)W(t) + rW(t)eη(t) —– (2)

Let us assume that the value of the investor’s capital at t = 0 is W(0) = 1. The evolution of the expectation value of the total capital ⟨W(t)⟩ after t time steps is obviously given by the recursion ⟨W(t + 1)⟩ = (1 − r + r⟨eη⟩)⟨W(t)⟩. When ⟨eη⟩ > 1, it seems at first thought that the investor should invest all his money in the risky asset. Then the expectation value of his capital would enjoy an exponential growth with the fastest growth rate. However, it would be totally unreasonable to expect that in a typical realization of price fluctuations the investor would be able to attain the average growth rate determined as vavg = d⟨W(t)⟩/dt. This is because the main contribution to the expectation value ⟨W(t)⟩ comes from exponentially unlikely outcomes, when the price of the asset after a long series of favorable events with η > ⟨η⟩ becomes exponentially big. Such outcomes lie well beyond reasonable fluctuations of W(t), determined by the standard deviation √Dt of ln W(t) around its average value ⟨ln W(t)⟩ = ⟨η⟩t. For the investor who deals with just one realization of the multiplicative process it is better not to rely on such unlikely events, and to maximize his gain in a typical outcome of the process. To quantify the intuitively clear concept of a typical value of a random variable x, we define xtyp as a median of its distribution, i.e. xtyp has the property that Prob(x > xtyp) = Prob(x < xtyp) = 1/2. In a multiplicative process (2) with r = 1, W(t + 1) = eη(t)W(t), one can show that Wtyp(t) – the typical value of W(t) – grows exponentially in time, Wtyp(t) = e⟨η⟩t, at a rate vtyp = ⟨η⟩, while the expectation value ⟨W(t)⟩ also grows exponentially, as ⟨W(t)⟩ = ⟨eη⟩t, but at a faster rate given by vavg = ln⟨eη⟩. Notice that ⟨ln W(t)⟩ always grows with the typical growth rate, since those very rare outcomes when W(t) is exponentially big do not make a significant contribution to this average.
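
The gap between the typical and the average growth rate is easy to exhibit in a simulation. The sketch below takes η to be normally distributed with mean v and dispersion D (an arbitrary illustrative choice, for which ln⟨eη⟩ = v + D/2 exactly) and compares the median and the mean of W(t) at r = 1.

```python
# Monte Carlo comparison of the typical (median) and average growth of W(t)
# for r = 1, with eta ~ Normal(v, D); v, D, T, N are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
v, D, T, N = 0.01, 0.04, 100, 100_000

eta = rng.normal(v, np.sqrt(D), size=(N, T))
logW = eta.sum(axis=1)                         # ln W(T) for each realization

v_typ = np.median(logW) / T                    # typical (median) growth rate
v_avg = np.log(np.mean(np.exp(logW))) / T      # growth rate of the average <W(T)>

print(f"v_typ = {v_typ:.4f}  (theory <eta> = {v})")
print(f"v_avg = {v_avg:.4f}  (theory ln<e^eta> = {v + D/2})")
```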

The question we are going to address is: which investment fraction r provides the investor with the best typical growth rate vtyp of his capital. Kelly has answered this question for a particular realization of multiplicative stochastic process, where the capital is multiplied by 2 with probability q > 1/2, and by 0 with probability p = 1 − q. This case is realized in a gambling game, where betting on the right outcome pays 2:1, while you know the right outcome with probability q > 1/2. In our notation this case corresponds to η being equal to ln 2 with probability q and −∞ otherwise. The player’s capital in Kelly’s model with r = 1 enjoys the growth of expectation value ⟨W(t)⟩ at a rate vavg = ln(2q) > 0. In this case it is however particularly clear that one should not use maximization of the expectation value of the capital as the optimum criterion. If the player indeed bets all of his capital at every time step, sooner or later he will lose everything and would not be able to continue to play. In other words, r = 1 corresponds to the worst typical growth of the capital: asymptotically the player will be bankrupt with probability 1. In this example it is also very transparent where the positive average growth rate comes from: after T rounds of the game, in a very unlikely (Prob = qT) event that the capital was multiplied by 2 at all times (the gambler guessed right all the time!), the capital is equal to 2T. This exponentially large value of the capital outweighs the exponentially small probability of this event, and gives rise to an exponentially growing average. This would offer little condolence to a gambler who lost everything.

We generalize Kelly’s arguments for arbitrary distribution π(η). As we will see this generalization reveals some hidden results, not realized in Kelly’s “betting” game. As we learned above, the growth of the typical value of W(t), is given by the drift of ⟨lnW(t)⟩ = vtypt, which in our case can be written as

vtyp(r) = ∫ dη π(η) ln(1 + r(eη − 1)) —– (3)

One can check that vtyp(0) = 0, since in this case the whole capital is in the form of cash and does not change in time. In another limit one has vtyp(1) = ⟨η⟩, since in this case the whole capital is invested in the asset and enjoys its typical growth rate (⟨η⟩ = −∞ for Kelly’s case). Can one do better by selecting 0 < r < 1? To find the maximum of vtyp(r) one differentiates (3) with respect to r and looks for a solution of the resulting equation: 0 = v’typ(r) = ∫ dη π(η) (eη −1)/(1+r(eη −1)) in the interval 0 ≤ r ≤ 1. If such a solution exists, it is unique since v′′typ(r) = − ∫ dη π(η) (eη − 1)2 / (1 + r(eη − 1))2 < 0 everywhere. The values of v’typ(r) at 0 and 1 are given by v’typ(0) = ⟨eη⟩ − 1, and v’typ(1) = 1−⟨e−η⟩. One has to consider three possibilities:

(1) ⟨eη⟩ < 1. In this case the maximum of vtyp(r) is realized at r = 0 and is equal to 0. In other words, one should never invest in an asset with negative average return per capital ⟨eη⟩ − 1 < 0.

(2) ⟨eη⟩ > 1, and ⟨e−η⟩ > 1. In this case v’typ(0) > 0, but v’typ(1) < 0 and the maximum of vtyp(r) is realized at some 0 < r < 1, which is a unique solution to v’typ(r) = 0. The typical growth rate in this case is always positive (because you could have always selected r = 0 to make it zero), but not as big as the average rate ln⟨eη⟩, which serves as an unattainable ideal limit. An intuitive understanding of why one should select r < 1 in this case comes from the following observation: the condition ⟨e−η⟩ > 1 makes ⟨1/p(t)⟩ grow exponentially in time. Such an exponential growth indicates that the outcomes with very small p(t) are feasible and give a dominant contribution to ⟨1/p(t)⟩. This is an indicator that the asset price is unstable and one should not trust his whole capital to such a risky investment.

(3) ⟨eη⟩ > 1, and ⟨e−η⟩ < 1. This is a safe asset and one can invest his whole capital in it. The maximum vtyp(r) is achieved at r = 1 and is equal to vtyp(1) = ⟨η⟩. A simple example of this type of asset is one in which the price p(t) with equal probabilities is multiplied by 2 or by a = 2/3. As one can see this is a marginal case in which ⟨1/p(t)⟩ = const. For a < 2/3 one should invest only a fraction r < 1 of his capital in the asset, while for a ≥ 2/3 the whole sum could be trusted to it. The specialty of the case with a = 2/3 cannot be guessed by just looking at the typical and average growth rates of the asset! One has to go and calculate ⟨e−η⟩ to check if ⟨1/p(t)⟩ diverges. This “reliable” type of asset is a new feature of the model with a general π(η). It is never realized in Kelly’s original model, which always has ⟨η⟩ = −∞, so that it never makes sense to gamble the whole capital every time.

An interesting and somewhat counterintuitive consequence of the above results is that under certain conditions one can make his capital grow by investing in an asset with a negative typical growth rate ⟨η⟩ < 0. Such an asset certainly loses value, and its typical price experiences an exponential decay. Any investor bold enough to trust his whole capital in such an asset is losing money at the same rate. But as long as the fluctuations are strong enough to maintain a positive average return per capital (⟨eη⟩ − 1 > 0) one can maintain a certain fraction of his total capital invested in this asset and almost certainly make money! A simple example of such a mind-boggling situation is given by a random multiplicative process in which the price of the asset with equal probabilities is doubled (goes up by 100%) or divided by 3 (goes down by 66.7%). The typical price of this asset drifts down by 18% each time step. Indeed, after T time steps one could reasonably expect the price of this asset to be ptyp(T) = 2T/2 3−T/2 = (√(2/3))T ≃ 0.82T. On the other hand, the average ⟨p(t)⟩ enjoys a 17% growth ⟨p(t + 1)⟩ = 7/6 ⟨p(t)⟩ ≃ 1.17⟨p(t)⟩. As one can easily see, the optimum of the typical growth rate is achieved by maintaining a fraction r = 1/4 of the capital invested in this asset. The typical growth factor in this case is a meager √(25/24) ≃ 1.02, meaning that in the long run one almost certainly gets a 2% return per time step, but it is certainly better than losing 18% by investing the whole capital in this asset.
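
Both optima discussed in this section can be recovered by brute-force maximization of the typical growth rate (3) on a grid of r: Kelly’s original game (for which the classical optimum is r = 2q − 1; q = 0.6 below is an arbitrary choice) and the asset just described, which is doubled or divided by 3 with equal probabilities.

```python
# Grid search for the optimal investment fraction r maximizing v_typ(r), eq (3).
import numpy as np

r = np.linspace(0.0, 0.999, 100_000)

q = 0.6
v_kelly = q * np.log(1 + r) + (1 - q) * np.log(1 - r)        # Kelly's betting game
print(r[np.argmax(v_kelly)], "~", 2 * q - 1)

v_asset = 0.5 * np.log(1 + r) + 0.5 * np.log(1 - 2 * r / 3)  # doubled or divided by 3
r_star = r[np.argmax(v_asset)]
print(r_star, "~ 0.25")
print(np.exp(v_asset.max()), "~", np.sqrt(25 / 24))          # typical growth factor per step
```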

Of course the properties of a typical realization of a random multiplicative process are not fully characterized by the drift vtyp(r)t in the position of the center of mass of P(h,t), where h(t) = ln W(t) is the logarithm of the wealth of the investor. Indeed, asymptotically P(h,t) has a Gaussian shape P(h,t) = 1/√(2π D(r)t) exp(−(h − vtyp(r)t)²/(2D(r)t)), where vtyp(r) is given by eq. (3). One needs to know the dispersion D(r) to estimate √(D(r)t), which is the magnitude of characteristic deviations of h(t) away from its typical value htyp(t) = vtyp t. At the infinite time horizon t → ∞, the process with the biggest vtyp(r) will certainly be preferable over any other process. This is because the separation between typical values of h(t) for two different investment fractions r grows linearly in time, while the span of typical fluctuations grows only as √t. However, at a finite time horizon the investor should take into account both vtyp(r) and D(r) and decide what he prefers: moderate growth with small fluctuations or faster growth with bigger fluctuations. To quantify this decision one needs to introduce an investor’s “utility function”, which we will not attempt in this work. The most conservative players are advised to always keep their capital in cash, since with any other arrangement the fluctuations will certainly be bigger. As a rule one can show that the dispersion D(r) = ∫ π(η) ln²[1 + r(eη − 1)] dη − v²typ(r) monotonically increases with r. Therefore, among two solutions with equal vtyp(r) one should always select the one with the smaller r, since it would guarantee smaller fluctuations. Here it is more convenient to switch to the standard notation. It is customary to use the random variable

Λ(t)= (p(t+1)−p(t))/p(t) = eη(t) −1 —– (4)

which is referred to as the return per unit capital of the asset. The properties of a random multiplicative process are expressed in terms of the average return per capital α = ⟨Λ⟩ = ⟨eη⟩ − 1, and the volatility (standard deviation) of the return per capital σ = √(⟨Λ²⟩ − ⟨Λ⟩²). In our notation α = ⟨eη⟩ − 1 is determined by the average, not the typical, growth rate of the process. For η ≪ 1, α ≃ v + D/2 + v²/2, while the volatility σ is related to D (the dispersion of η) through σ ≃ √D.
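
A quick numerical illustration of these last relations (my own sketch; the narrow two-point distribution for η is an arbitrary choice with |η| ≪ 1): compare α and σ computed exactly from Λ = eη − 1 with the approximations v + D/2 + v²/2 and √D.

```python
import math

# A narrow two-point distribution for eta, |eta| << 1 (arbitrary illustrative values).
etas, probs = [0.012, -0.008], [0.5, 0.5]

v = sum(p * e for p, e in zip(probs, etas))             # typical growth rate <eta>
D = sum(p * (e - v) ** 2 for p, e in zip(probs, etas))  # dispersion of eta

lams = [math.exp(e) - 1 for e in etas]                  # returns per unit capital Lambda
alpha = sum(p * l for p, l in zip(probs, lams))         # average return per capital
sigma = math.sqrt(sum(p * l * l for p, l in zip(probs, lams)) - alpha ** 2)  # volatility

print(alpha, v + D / 2 + v ** 2 / 2)   # the two numbers nearly coincide
print(sigma, math.sqrt(D))             # likewise
```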

 

Automorphisms. Note Quote.

[Figure: graph automorphism group examples]

A group automorphism is an isomorphism from a group to itself. If G is a finite multiplicative group, an automorphism of G can be described as a way of rewriting its multiplication table without altering its pattern of repeated elements. For example, the multiplication table of the group of 4th roots of unity G={1,-1,i,-i} can be written as shown above, which means that the map defined by

1 ↦ 1,    −1 ↦ −1,    i ↦ −i,    −i ↦ i

is an automorphism of G.
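
A tiny check of this example (my own sketch): the map above is complex conjugation restricted to G, and both the bijectivity and the homomorphism property φ(ab) = φ(a)φ(b) can be verified directly.

```python
# Verify that the map 1 -> 1, -1 -> -1, i -> -i, -i -> i (complex conjugation)
# is an automorphism of G = {1, -1, i, -i} under multiplication.
G = [1, -1, 1j, -1j]

def phi(z):
    return complex(z).conjugate()

assert {phi(g) for g in G} == set(G)                             # maps G bijectively onto itself
assert all(phi(a * b) == phi(a) * phi(b) for a in G for b in G)  # preserves the multiplication table
print("phi is an automorphism of G")
```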

Looking at classical geometry and mechanics, Weyl followed Newton and Helmholtz in considering congruence as the basic relation which lay at the heart of the “art of measuring” by the handling of that “sort of bodies we call rigid”. He explained how the local congruence relations established by the comparison of rigid bodies can be generalized and abstracted to congruences of the whole space. In this respect Weyl followed an empiricist approach to classical physical geometry, based on a theoretical extension of the material practice with rigid bodies and their motions. Even the mathematical abstraction to mappings of the whole space carried the mark of their empirical origin and was restricted to the group of proper congruences (orientation preserving isometries of Euclidean space, generated by the translations and rotations) denoted by him as ∆+. This group seems to express “an intrinsic structure of space itself; a structure stamped by space upon all the inhabitants of space”.

But already on the earlier level of physical knowledge, so Weyl argued, the mathematical automorphisms of space were larger than ∆. Even if one sees “with Newton, in congruence the one and only basic concept of geometry from which all others derive”, the group Γ of automorphisms in the mathematical sense turns out to be constituted by the similarities.

The structural condition for an automorphism C ∈ Γ of classical congruence geometry is that any pair (v1,v2) of congruent geometric configurations is transformed into another pair (v1*,v2*) of congruent configurations (vj* = C(vj), j = 1,2). For evaluating this property Weyl introduced the following diagram:

[Diagram not reproduced]

Because of the condition for automorphisms just mentioned, the maps C T C⁻¹ and C⁻¹ T C belong to ∆+ whenever T does. By this argument he showed that the mathematical automorphism group Γ is the normalizer of the congruences ∆+ in the group of bijective mappings of Euclidean space.
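
The normalizer argument can be illustrated numerically (a sketch of my own, not Weyl’s, in two dimensions for brevity): take a proper congruence T (a rotation followed by a translation) and a similarity C (uniform scaling composed with a rotation and a translation), and check that the conjugate C T C⁻¹ still preserves distances, i.e. lies again in the congruence group.

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# T: a proper congruence, x -> R x + b
R, b = rot(0.7), np.array([1.0, -2.0])
T = lambda x: R @ x + b

# C: a similarity, x -> s Q x + d, together with its inverse
s, Q, d = 3.0, rot(-0.3), np.array([0.5, 4.0])
C = lambda x: s * (Q @ x) + d
C_inv = lambda y: (Q.T @ (y - d)) / s

conj = lambda x: C(T(C_inv(x)))   # the map C T C^-1

# Distances between arbitrary points are preserved by C T C^-1 (up to rounding),
# so the conjugate is again a congruence, as the normalizer property asserts.
rng = np.random.default_rng(0)
p, q = rng.normal(size=2), rng.normal(size=2)
print(np.linalg.norm(p - q), np.linalg.norm(conj(p) - conj(q)))
```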

This normalizer construction also explains his characterization of generalized similarities in his analysis of the problem of space in the early 1920s. In 1918 he translated the relationship between physical equivalences (congruences) and mathematical automorphisms (similarities, the normalizer of the congruences) from classical geometry to special relativity (Minkowski space) and “localized” it (in the sense of physics), i.e., he transferred the structural relationship to the infinitesimal neighbourhoods of the differentiable manifold characterizing spacetime (in more recent language, to the tangent spaces) and developed what later would be called Weylian manifolds, a generalization of Riemannian geometry. In his discussion of the problem of space he generalized the same relationship even further by allowing any (closed) subgroup of the general linear group as a candidate for characterizing generalized congruences at every point.

Moreover, Weyl argued that the enlargement of the physico-geometrical automorphisms of classical geometry (proper congruences) by the mathematical automorphisms (similarities) sheds light on Kant’s riddle of the “incongruous counterparts”. Weyl presented it as the question: Why are “incongruous counterparts” like the left and right hands intrinsically indiscernible, although they cannot be transformed into one another by a proper motion? From his point of view the intrinsic indiscernibility could be characterized by the mathematical automorphisms Γ. Of course, the congruences ∆ including the reflections are part of the latter, ∆ ⊂ Γ; this implies indiscernibility between “left and right” as a special case. In this way Kant’s riddle was solved by a Leibnizian type of argument. Weyl very cautiously indicated a philosophical implication of this observation:

And he (Kant) is inclined to think that only transcendental idealism is able to solve this riddle. No doubt, the meaning of congruence and similarity is founded in spatial intuition. Kant seems to aim at some subtler point. But just this point is one which can be completely clarified by general concepts, namely by subsuming it under the general and typical group-theoretic situation explained before . . . .

Weyl stopped here without discussing the relationship between group theoretical methods and the “subtler point” Kant aimed at more explicitly. But we may read this remark as an indication that he considered his reflections on automorphism groups as a contribution to the transcendental analysis of the conceptual constitution of modern science. In his book on Symmetry, he went a tiny step further. Still with the Weylian restraint regarding the discussion of philosophical principles he stated: “As far as I see all a priori statements in physics have their origin in symmetry” (126).

To prepare for the following, Weyl specified the subgroup ∆o ⊂ ∆ with all those transformations that fix one point (∆o = O(3, R), the orthogonal group in 3 dimensions, R the field of real numbers). In passing he remarked:

In the four-dimensional world the Lorentz group takes the place of the orthogonal group. But here I shall restrict myself to the three-dimensional space, only occasionally pointing to the modifications, the inclusion of time into the four-dimensional world brings about.

Keeping this caveat in mind (restriction to three-dimensional space) Weyl characterized the “group of automorphisms of the physical world”, in the sense of classical physics (including quantum mechanics), by the combination (more technically, the semidirect product) of translations and rotations, while the mathematical automorphisms arise from a normal extension:

– physical automorphisms ∆ ≅ R³ ⋊ ∆o with ∆o ≅ O(3), respectively ∆ ≅ R⁴ ⋊ ∆o for the Lorentz group ∆o ≅ O(1, 3),

– mathematical automorphisms Γ = R⁺ × ∆ (R⁺ the positive real numbers with multiplication).
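
A small sketch of the semidirect-product structure just listed (my own toy example, in two dimensions rather than three): an element of ∆ can be coded as a pair (b, A) acting as x ↦ Ax + b, and composing two such maps follows the semidirect-product rule (b1, A1)(b2, A2) = (b1 + A1b2, A1A2).

```python
import numpy as np

def compose(g1, g2):
    """Semidirect-product composition: (b1, A1)(b2, A2) = (b1 + A1 b2, A1 A2)."""
    b1, A1 = g1
    b2, A2 = g2
    return (b1 + A1 @ b2, A1 @ A2)

def act(g, x):
    """Apply the affine map x -> A x + b."""
    b, A = g
    return A @ x + b

theta = 0.4
A = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
g1 = (np.array([1.0, 0.0]), A)     # rotation by theta, then a translation
g2 = (np.array([0.0, 2.0]), A.T)   # rotation by -theta, then a translation

x = np.array([0.3, -1.2])
# Composing the pairs and composing the maps give the same result:
print(act(compose(g1, g2), x), act(g1, act(g2, x)))
```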

In Weyl’s view the difference between mathematical and physical automorphisms established a fundamental distinction between mathematical geometry and physics.

Congruence, or physical equivalence, is a geometric concept, the meaning of which refers to the laws of physical phenomena; the congruence group ∆ is essentially the group of physical automorphisms. If we interpret geometry as an abstract science dealing with such relations and such relations only as can be logically defined in terms of the one concept of congruence, then the group of geometric automorphisms is the normalizer of ∆ and hence wider than ∆.

He considered this a striking argument against what he took to be the Cartesian program of a reductionist geometrization of physics (physics as the science of res extensa):

According to this conception, Descartes’s program of reducing physics to geometry would involve a vicious circle, and the fact that the group of geometric automorphisms is wider than that of physical automorphisms would show that such a reduction is actually impossible.

In this Weyl alluded to an illusion he himself had shared for a short time as a young scientist. After the creation of his gauge geometry in 1918 and the proposal of a geometrically unified field theory of electromagnetism and gravity, he believed, for a short while, that he had achieved a complete geometrization of physics.

He gave up this illusion in the middle of the 1920s under the impression of the rising quantum mechanics. In his own contribution to the new quantum mechanics, groups and their linear representations played a crucial role. In this respect the mathematical automorphisms of geometry and the physical automorphisms “of Nature”, or more precisely the automorphisms of physical systems, moved even further apart, because now the physical automorphisms started to take non-geometrical material degrees of freedom into account (phase symmetry of wave functions and, already earlier, the permutation symmetries of n-particle systems).

But already during the 19th century the physical automorphism group had acquired a far deeper aspect than that of the mobility of rigid bodies:

In physics we have to consider not only points but many types of physical quantities such as velocity, force, electromagnetic field strength, etc. . . .

All these quantities can be represented, relative to a Cartesian frame, by sets of numbers such that any orthogonal transformation T performed on the coordinates keeps the basic physical relations, the physical laws, invariant. Weyl accordingly stated:

All the laws of nature are invariant under the transformations thus induced by the group ∆. Thus physical relativity can be completely described by means of a group of transformations of space-points.

By this argumentation Weyl described a deep shift which occurred in the late 19th century in the understanding of physics. He described it as an extension of the group of physical automorphisms. The laws of physics (“basic relations” in his more abstract terminology above) could no longer be directly characterized by the motion of rigid bodies because the physics of fields, in particular of electric and magnetic fields, had become central. In this context, the motions of material bodies lost their primary epistemological status and the physical automorphisms acquired a more abstract character, although they were still completely characterizable in geometric terms, by the full group of Euclidean isometries. The indistinguishability of left and right, observed already in clear terms by Kant, acquired the status of a physical symmetry in electromagnetism and in crystallography.

Weyl thus insisted that in classical physics the physical automorphisms could be characterized by the group ∆ of Euclidean isometries, larger than the physical congruences (proper motions) ∆+ but smaller than the mathematical automorphisms (similarities) Γ.

This view fitted well with insights which Weyl drew from recent developments in quantum physics. He insisted – differently from what he had thought in 1918 – on the consequence that “length is not relative but absolute” (Hs, p. 15). He argued that physical length measurements were no longer dependent on an arbitrarily chosen unit, as in Euclidean geometry. An “absolute standard of length” could be fixed by the quantum mechanical laws of the atomic shell:

The atomic constants of charge and mass of the electron and Planck’s quantum of action h, which enter the universal field laws of nature, fix an absolute standard of length, that through the wave lengths of spectral lines is made available for practical measurements.