Canonical Actions on Bundles – Philosophizing Identity Over Gauge Transformations.


In physical applications, fiber bundles often come with a preferred group of transformations (usually the symmetry group of the system). The modern attitude of physicists is to regard this group as a fundamental structure which should be implemented from the very beginning, enriching bundles with a further structure and thereby defining a new category.

A similar feature appears on manifolds as well: for example, on ℝ² one can restrict to Cartesian coordinates when regarding it just as a vector space endowed with a differentiable structure, but one can also allow translations if the “bigger” affine structure is considered. Moreover, coordinates can be chosen in much bigger sets: for instance, one can fix the symplectic form ω = dx ∧ dy on ℝ², so that ℝ² is covered by an atlas of canonical coordinates (which include all Cartesian ones). But ℝ² also happens to be identifiable with the cotangent bundle T*ℝ, so that we can restrict the previous symplectic atlas to allow only natural fibered coordinates. Finally, ℝ² can be considered as a bare manifold, so that general curvilinear coordinates should be allowed accordingly; only if the full (i.e., unrestricted) manifold structure is considered can one use a full maximal atlas. Other choices instead define maximal atlases in suitably restricted sub-classes of allowed charts. Just as any manifold structure is associated with a maximal atlas, geometric bundles are associated to “maximal trivializations”. However, it may happen that one can restrict (or enlarge) the allowed local trivializations, so that the same geometrical bundle can be trivialized using just the appropriate smaller class of local trivializations. In geometrical terms this corresponds, of course, to imposing a further structure on the bare bundle. This newly structured bundle is defined by the same basic ingredients, i.e. the same base manifold M, the same total space B, the same projection π and the same standard fiber F, but it is characterized by a new maximal trivialization where, however, maximal now refers to a smaller set of local trivializations.

Examples are: vector bundles are characterized by linear local trivializations, affine bundles are characterized by affine local trivializations, principal bundles are characterized by left translations on the fiber group. Further examples come from Physics: gauge transformations are used as transition functions for the configuration bundles of any gauge theory. For these reasons we give the following definition of a fiber bundle with structure group.

A fiber bundle with structure group G is given by a sextuple B = (E, M, π; F; λ, G) such that:

  • (E, M, π; F) is a fiber bundle. The structure group G is a Lie group (possibly a discrete one) and λ : G → Diff(F) defines a left action of G on the standard fiber F.
  • There is a family of preferred trivializations {(Uα, t(α))}α∈I of B such that the following holds: let the transition functions be ĝ(αβ) : Uαβ → Diff(F) and let eG be the neutral element of G. There exists a family of maps g(αβ) : Uαβ → G such

    that, for each x ∈ Uαβγ = Uα ∩ Uβ ∩ Uγ,

    g(αα)(x) = eG

    g(αβ)(x) = [g(βα)(x)]⁻¹

    g(αβ)(x) · g(βγ)(x) · g(γα)(x) = eG

    and

    ĝ(αβ)(x) = λ(g(αβ)(x)) ∈ Diff(F)

The maps g(αβ) : Uαβ → G, which depend on the trivialization, are said to form a cocycle with values in G. They are called the transition functions with values in G (or also, shortly, the transition functions). The preferred trivializations will be said to be compatible with the structure. Whenever dealing with fiber bundles with structure group, the choice of a compatible trivialization will be implicitly assumed.
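The three cocycle conditions above are easy to check numerically for a toy structure group. The sketch below (chart labels and angles are hypothetical choices for illustration) takes G = SO(2) and builds transition functions of the form g(αβ) = R(θα − θβ), for which the conditions hold by construction:

```python
import numpy as np

# Toy cocycle with structure group G = SO(2): assign each chart U_alpha an
# angle theta_alpha (an arbitrary choice) and set g_(alpha beta) as the
# rotation by theta_alpha - theta_beta.
def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

theta = {"a": 0.3, "b": 1.1, "c": -0.7}                 # one angle per chart
g = {(p, q): rot(theta[p] - theta[q]) for p in theta for q in theta}

I = np.eye(2)
# g_(alpha alpha)(x) = e_G
assert np.allclose(g[("a", "a")], I)
# g_(alpha beta)(x) = [g_(beta alpha)(x)]^{-1}
assert np.allclose(g[("a", "b")], np.linalg.inv(g[("b", "a")]))
# cocycle identity: g_(ab) g_(bc) g_(ca) = e_G
assert np.allclose(g[("a", "b")] @ g[("b", "c")] @ g[("c", "a")], I)
```

Any family of transition functions of this "coboundary" form satisfies the identities automatically; the interesting bundles are those whose cocycle cannot be trivialized this way globally.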

Fiber bundles with structure group provide the suitable framework to deal with bundles with a preferred group of transformations. To see this, let us begin by introducing the notion of the structure bundle of a fiber bundle with structure group B = (B, M, π; F; λ, G).

Let B = (B, M, π; F; λ, G) be a bundle with a structure group; let us fix a trivialization {(Uα, t(α))}α∈I and denote by g(αβ) : Uαβ → G its transition functions. By using the canonical left action L : G → Diff(G) of G on itself, let us define ĝ(αβ) : Uαβ → Diff(G) given by ĝ(αβ)(x) = L(g(αβ)(x)); they obviously satisfy the cocycle properties. One can then construct a principal bundle PB = P(B), unique modulo isomorphisms, having G as both structure group and standard fiber, and g(αβ) as transition functions acting on G by left translation Lg : G → G.

The principal bundle P(B) = (P, M, p; G) constructed above is called the structure bundle of B = (B, M, π; F; λ, G).

Notice that there is no similar canonical way of associating a structure bundle to a geometric bundle B = (B, M, π; F), since in that case the structure group G is at least partially undetermined.

Each automorphism of P(B) naturally acts over B.

Let, in fact, {σ(α)}α∈I be a trivialization of PB together with its transition functions g(αβ) : Uαβ → G defined by σ(β) = σ(α) · g(αβ). Then any principal morphism Φ = (Φ, φ) over PB is locally represented by local maps ψ(α) : Uα → G such that

Φ : [x, h]α ↦ [φ(α)(x), ψ(α)(x).h](α)

Since Φ is a global automorphism of PB, the above local expression implies that the following properties hold true in Uαβ:

φ(α)(x) = φ(β)(x) ≡ x’

ψ(α)(x) = g(αβ)(x’) . ψ(β)(x) . g(βα)(x)

By using the family of maps {(φ(α), ψ(α))} one can then define a family of global automorphisms of B. In fact, using the trivialization {(Uα, t(α))}α∈I, one can define local automorphisms of B given by

Φ(α)B : (x, y) ↦ (φ(α)(x), [λ(ψ(α)(x))](y))

These local maps glue together to give a global automorphism ΦB of the bundle B, due to the fact that the g(αβ) are also transition functions of B with respect to its trivialization {(Uα, t(α))}α∈I.
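The compatibility rule ψ(α)(x) = g(αβ)(x′) · ψ(β)(x) · g(βα)(x) is exactly what makes the local expressions agree on overlaps. A minimal numerical sketch, with G = SO(2) acting on ℝ² by its defining representation and all matrices chosen arbitrarily for illustration:

```python
import numpy as np

def rot(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

# Hypothetical data on an overlap U_ab: transition function at x and at
# x' = phi(x), plus a local representative psi_b of a gauge transformation.
g_ab_x  = rot(0.4)        # g_(ab)(x)
g_ab_xp = rot(0.9)        # g_(ab)(x')
psi_b   = rot(1.3)        # psi_(b)(x)

# compatibility rule: psi_(a)(x) = g_(ab)(x') . psi_(b)(x) . g_(ba)(x)
psi_a = g_ab_xp @ psi_b @ np.linalg.inv(g_ab_x)

# Gluing check: acting on a fiber point, the two local automorphisms agree
# once the change of trivialization y_b = g_(ba)(x) y_a is accounted for.
y_a = np.array([1.0, 2.0])
y_b = np.linalg.inv(g_ab_x) @ y_a
lhs = psi_a @ y_a                 # image computed in the alpha-trivialization
rhs = g_ab_xp @ (psi_b @ y_b)     # image computed in beta, mapped back to alpha
assert np.allclose(lhs, rhs)
```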

In this way B is endowed with a preferred group of transformations, namely the group Aut(PB) of automorphisms of the structure bundle PB, represented on B by means of the canonical action. These transformations are called (generalized) gauge transformations. Vertical gauge transformations, i.e. gauge transformations projecting over the identity, are also called pure gauge transformations.

Individuation. Thought of the Day 91.0


The first distinction is between two senses of the word “individuation” – one semantic, the other metaphysical. In the semantic sense of the word, to individuate an object is to single it out for reference in language or in thought. By contrast, in the metaphysical sense of the word, the individuation of objects has to do with “what grounds their identity and distinctness.” Sets are often used to illustrate the intended notion of “grounding.” The identity or distinctness of sets is said to be “grounded” in accordance with the principle of extensionality, which says that two sets are identical iff they have precisely the same elements:

SET(x) ∧ SET(y) → [x = y ↔ ∀u(u ∈ x ↔ u ∈ y)]

The metaphysical and semantic senses of individuation are quite different notions, neither of which appears to be reducible to or fully explicable in terms of the other. Since sufficient sense cannot be made of the notion of “grounding of identity” on which the metaphysical notion of individuation is based, focusing on the semantic notion of individuation is an easy way out. This choice of focus means that our investigation is a broadly empirical one, drawing on empirical linguistics and psychology.

What is the relation between the semantic notion of individuation and the notion of a criterion of identity? It is by means of criteria of identity that semantic individuation is effected. Singling out an object for reference involves being able to distinguish this object from other possible referents with which one is directly presented. The final distinction is between two types of criteria of identity. A one-level criterion of identity says that two objects of some sort F are identical iff they stand in some relation RF:

Fx ∧ Fy → [x = y ↔ RF(x,y)]

Criteria of this form operate at just one level in the sense that the condition for two objects to be identical is given by a relation on these objects themselves. An example is the set-theoretic principle of extensionality.

A two-level criterion of identity relates the identity of objects of one sort to some condition on entities of another sort. The former sort of objects are typically given as functions of items of the latter sort, in which case the criterion takes the following form:

f(α) = f(β) ↔ α ≈ β

where the variables α and β range over the latter sort of item and ≈ is an equivalence relation on such items. An example is Frege’s famous criterion of identity for directions:

d(l1) = d(l2) ↔ l1 || l2

where the variables l1 and l2 range over lines or other directed items. An analogous two-level criterion relates the identity of geometrical shapes to the congruence of things or figures having the shapes in question. The decision to focus on the semantic notion of individuation makes it natural to focus on two-level criteria. For two-level criteria of identity are much more useful than one-level criteria when we are studying how objects are singled out for reference. A one-level criterion provides little assistance in the task of singling out objects for reference. In order to apply a one-level criterion, one must already be capable of referring to objects of the sort in question. By contrast, a two-level criterion promises a way of singling out an object of one sort in terms of an item of another and less problematic sort. For instance, when Frege investigated how directions and other abstract objects “are given to us”, although “we cannot have any ideas or intuitions of them”, he proposed that we relate the identity of two directions to the parallelism of the two lines in terms of which these directions are presented. This would be explanatory progress since reference to lines is less puzzling than reference to directions.
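Frege’s two-level criterion can be mimicked in code: a “direction” is singled out as a canonical representative of a line’s direction vector under the equivalence relation of parallelism, so that d(l1) = d(l2) holds exactly when l1 ∥ l2. A sketch; the helper names are of course hypothetical:

```python
import numpy as np

def parallel(v1, v2):
    # two lines are parallel iff their direction vectors are linearly
    # dependent (2-D determinant vanishes)
    return abs(v1[0] * v2[1] - v1[1] * v2[0]) < 1e-12

def direction(v):
    # d(l): a canonical representative of the parallelism class, here the
    # unit vector with a fixed sign convention
    u = np.asarray(v, dtype=float)
    u = u / np.linalg.norm(u)
    if u[0] < 0 or (u[0] == 0 and u[1] < 0):
        u = -u
    return tuple(np.round(u, 12))

l1, l2, l3 = [2.0, 2.0], [-1.0, -1.0], [1.0, 0.0]
# d(l1) = d(l2)  <->  l1 || l2
assert parallel(l1, l2) and direction(l1) == direction(l2)
assert not parallel(l1, l3) and direction(l1) != direction(l3)
```

The point of the two-level criterion survives translation: one refers to a direction only via a less problematic item, a line, quotiented by an equivalence relation on lines.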

Weyl and Automorphism of Nature. Drunken Risibility.


In classical geometry and physics, physical automorphisms could be based on the material operations used for defining the elementary equivalence concept of congruence (“equality and similitude”). But Weyl started even more generally, with Leibniz’ explanation of the similarity of two objects: two things are similar if they are indiscernible when each is considered by itself. Here, as at other places, Weyl endorsed this Leibnizian argument from the point of view of “modern physics”, while adding that for Leibniz this spoke in favour of the unsubstantiality and phenomenality of space and time. On the other hand, for “real substances”, the Leibnizian monads, indiscernibility implied identity. In this way Weyl indicated, prior to any more technical consideration, that similarity in the Leibnizian sense was the same as objective equality. He did not enter deeper into the metaphysical discussion but insisted that the issue “is of philosophical significance far beyond its purely geometric aspect”.

Weyl did not claim that this idea solves the epistemological problem of objectivity once and for all, but at least it offers an adequate mathematical instrument for its formulation. He illustrated the idea in a first step by explaining the automorphisms of Euclidean geometry as the structure-preserving bijective mappings of the point set underlying a structure satisfying the axioms of Hilbert’s classical book on the Foundations of Geometry. He concluded that for Euclidean geometry these are the similarities, not the congruences as one might expect at first glance. In the mathematical sense, we then “come to interpret objectivity as the invariance under the group of automorphisms”. But Weyl warned against identifying mathematical objectivity with that of natural science, because once we deal with real space “neither the axioms nor the basic relations are given”. As the latter are extremely difficult to discern, Weyl proposed to turn the tables and take the group Γ of automorphisms, rather than the ‘basic relations’ and the corresponding relata, as the epistemic starting point.

Hence we come much nearer to the actual state of affairs if we start with the group Γ of automorphisms and refrain from making the artificial logical distinction between basic and derived relations. Once the group is known, we know what it means to say of a relation that it is objective, namely invariant with respect to Γ.

By such a well chosen constitutive stipulation it becomes clear what objective statements are, although this can be achieved only at the price that “…we start, as Dante starts in his Divina Comedia, in mezzo del camin”. A phrase characteristic for Weyl’s later view follows:

It is the common fate of man and his science that we do not begin at the beginning; we find ourselves somewhere on a road the origin and end of which are shrouded in fog.

Weyl’s juxtaposition of the mathematical and the physical concept of objectivity is worth reflecting upon. The mathematical objectivity considered by him is relatively easy to obtain by combining the axiomatic characterization of a mathematical theory with the epistemic postulate of invariance under a group of automorphisms. Both are constituted in a series of acts characterized by Weyl as symbolic construction, which is free in several regards. For example, the group of automorphisms of Euclidean geometry may be expanded by “the mathematician” in rather wide ways (affine, projective, or even “any group of transformations”). In each case a specific realm of mathematical objectivity is constituted. With the example of the automorphism group Γ of (plane) Euclidean geometry in mind, Weyl explained how, through the use of Cartesian coordinates, the automorphisms of Euclidean geometry can be represented by linear transformations “in terms of reproducible numerical symbols”.
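Weyl’s slogan, objectivity as invariance under the group Γ, can be made concrete with the similarity group of the plane: angles between configurations of points are objective relative to that group, while distances are not. A small numerical sketch (the particular points and transformation parameters are arbitrary):

```python
import numpy as np

# A similarity of the plane: rotation by theta, uniform scaling, translation.
def similarity(scale, theta, shift):
    c, s = np.cos(theta), np.sin(theta)
    A = scale * np.array([[c, -s], [s, c]])
    return lambda p: A @ p + shift

def angle(p, q, r):
    # angle at vertex p of the triangle p, q, r
    u, v = q - p, r - p
    return np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

p, q, r = np.array([0., 0.]), np.array([1., 0.]), np.array([0., 2.])
f = similarity(3.0, 0.8, np.array([5., -1.]))

# the angle relation is invariant under the automorphism, hence "objective"
assert np.isclose(angle(p, q, r), angle(f(p), f(q), f(r)))
# distance is not invariant under similarities, hence not objective for this G
assert not np.isclose(np.linalg.norm(q - p), np.linalg.norm(f(q) - f(p)))
```

Enlarging Γ (to affine or projective maps) would strip the angle relation of its objectivity too, which is exactly Weyl’s point that each choice of group constitutes its own realm of objective statements.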

For natural science the situation is quite different; here the freedom of the constitutive act is severely restricted. Weyl described the constraint for the choice of Γ at the outset in very general terms: the physicist will question Nature to reveal him her true group of automorphisms. Contrary to what a philosopher might expect, Weyl did not mention the subtle influences induced by theoretical evaluations of empirical insights on the constitutive choice of the group of automorphisms for a physical theory. He did not even restrict the consideration to the range of a physical theory but aimed at Nature as a whole. Still drawing on his own views of the radical changes in the fundamental outlook of theoretical physics, Weyl hoped for an insight into the true group of automorphisms of Nature without any further specifications.

Black Hole Complementarity: The Case of the Infalling Observer

The four postulates of black hole complementarity are:

Postulate 1: The process of formation and evaporation of a black hole, as viewed by a distant observer, can be described entirely within the context of standard quantum theory. In particular, there exists a unitary S-matrix which describes the evolution from infalling matter to outgoing Hawking-like radiation.

Postulate 2: Outside the stretched horizon of a massive black hole, physics can be described to good approximation by a set of semi-classical field equations.

Postulate 3: To a distant observer, a black hole appears to be a quantum system with discrete energy levels. The dimension of the subspace of states describing a black hole of mass M is the exponential of the Bekenstein entropy S(M).

We take as implicit in postulate 2 that the semi-classical field equations are those of a low-energy effective field theory with local Lorentz invariance. These postulates do not refer to the experience of an infalling observer, but state a ‘certainty,’ which for uniformity we label as a further postulate:

Postulate 4: A freely falling observer experiences nothing out of the ordinary when crossing the horizon.

To be more specific, we will assume that postulate 4 means both that any low-energy dynamics this observer can probe near his worldline is well-described by familiar Lorentz-invariant effective field theory and also that the probability for an infalling observer to encounter a quantum with energy E ≫ 1/rs (measured in the infalling frame) is suppressed by an exponentially decreasing adiabatic factor as predicted by quantum field theory in curved spacetime. We will argue that postulates 1, 2, and 4 are not consistent with one another for a sufficiently old black hole.

Consider a black hole that forms from collapse of some pure state and subsequently decays. Dividing the Hawking radiation into an early part and a late part, postulate 1 implies that the state of the Hawking radiation is pure,

|Ψ⟩ = ∑i |ψi⟩E ⊗ |i⟩L —– (1)

Here we have taken an arbitrary complete basis |i⟩L for the late radiation. We use postulates 1, 2, and 3 to make the division after the Page time when the black hole has emitted half of its initial Bekenstein-Hawking entropy; we will refer to this as an ‘old’ black hole. The number of states in the early subspace will then be much larger than that in the late subspace and, as a result, for typical states |Ψ⟩ the reduced density matrix describing the late-time radiation is close to the identity. We can therefore construct operators acting on the early radiation, whose action on |Ψ⟩ is equal to that of a projection operator onto any given subspace of the late radiation.
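The claim that the late-time reduced density matrix is close to the identity is easy to see numerically: for a random pure state on a bipartite space whose “early” factor is much larger, the reduced state on the small factor is nearly maximally mixed (Page’s observation). A sketch, with arbitrary choices of dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# random pure state on H_E (x) H_L with dim H_E >> dim H_L
dE, dL = 2**10, 2**2
psi = rng.normal(size=(dE, dL)) + 1j * rng.normal(size=(dE, dL))
psi /= np.linalg.norm(psi)

# partial trace over the early factor: rho_L = psi^dagger psi
rho_L = psi.conj().T @ psi
# Frobenius distance from the maximally mixed state I/dL
dist = np.linalg.norm(rho_L - np.eye(dL) / dL)
assert dist < 0.1
```

The typical deviation scales like 1/√dE, so for a macroscopically old black hole the late radiation looks almost exactly maximally mixed, which is what lets one build early-radiation operators acting as projectors on the late modes.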

To simplify the discussion, we treat gray-body factors by taking the transmission coefficients T to have unit magnitude for a few low partial waves and to vanish for higher partial waves. Since the total radiated energy is finite, this allows us to think of the Hawking radiation as defining a finite-dimensional Hilbert space.

Now, consider an outgoing Hawking mode in the later part of the radiation. We take this mode to be a localized packet with width of order rs, corresponding to a superposition of frequencies O(rs⁻¹). Note that postulate 2 allows us to assign a unique observer-independent lowering operator b to this mode. We can project onto eigenspaces of the number operator b†b. In other words, an observer making measurements on the early radiation can know the number of photons that will be present in a given mode of the late radiation.

Following postulate 2, we can now relate this Hawking mode to one at earlier times, as long as we stay outside the stretched horizon. The earlier mode is blue-shifted, and so may have frequency ω* much larger than O(rs⁻¹), though still sub-Planckian.

Next consider an infalling observer and the associated set of infalling modes with lowering operators a. Hawking radiation arises precisely because

b = ∫0∞ dω (B(ω) aω + C(ω) a†ω) —– (2)

so that the full state cannot be both an a-vacuum (aω|Ψ⟩ = 0) and a b†b eigenstate. Here again we have used our simplified gray-body factors.
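A single-mode caricature of equation (2), in a truncated Fock space, shows why the a-vacuum cannot be a b†b eigenstate whenever C ≠ 0: acting with b†b on the vacuum produces a component along the two-particle state. The truncation dimension and Bogoliubov parameter below are arbitrary choices:

```python
import numpy as np

D = 12                                        # Fock-space truncation (assumption)
a = np.diag(np.sqrt(np.arange(1, D)), 1)      # annihilation operator, a|n> = sqrt(n)|n-1>
B, C = np.cosh(0.5), np.sinh(0.5)             # |B|^2 - |C|^2 = 1
b = B * a + C * a.conj().T                    # single-mode analogue of eq. (2)

vac = np.zeros(D)
vac[0] = 1.0                                  # a|vac> = 0

n_b = b.conj().T @ b
out = n_b @ vac
# b^dagger b |vac> = C^2 |0> + sqrt(2) B C |2>: not proportional to |vac>,
# so the a-vacuum is not an eigenstate of the b-number operator
assert np.isclose(out[0], C**2)
assert not np.isclose(out[2], 0.0)
```

The expectation value ⟨vac|b†b|vac⟩ = C² is the familiar statement that the in-vacuum contains out-quanta; the off-diagonal |2⟩ component is what rules out an eigenstate.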

The application of postulates 1 and 2 has thus led to the conclusion that the infalling observer will encounter high-energy modes. Note that the infalling observer need not have actually made the measurement on the early radiation: to guarantee the presence of the high-energy quanta it is enough that it is possible, just as shining light on a two-slit experiment destroys the fringes even if we do not observe the scattered light. Here we make the implicit assumption that the measurements of the infalling observer can be described in terms of an effective quantum field theory. Instead we could simply suppose that if he chooses to measure b†b he finds the expected eigenvalue, while if he measures the noncommuting operator a†a instead he finds the expected vanishing value. But this would be an extreme modification of the quantum mechanics of the observer, and does not seem plausible.

The figure below gives a pictorial summary of our argument, using ingoing Eddington-Finkelstein coordinates. The support of the mode b is shaded. At large distance it is a well-defined Hawking photon, in a predicted eigenstate of b†b by postulate 1. The observer encounters it when its wavelength is much shorter: the field must be in the ground state, a†ωaω = 0, by postulate 4, and so cannot be in an eigenstate of b†b. But by postulate 2, the evolution of the mode outside the horizon is essentially free, so this is a contradiction.


Figure: Eddington-Finkelstein coordinates, showing the infalling observer encountering the outgoing Hawking mode (shaded) at a time when its size is ω*⁻¹ ≪ rs. If the observer’s measurements are given by an eigenstate of a†a, postulate 1 is violated; if they are given by an eigenstate of b†b, postulate 4 is violated; if the result depends on when the observer falls in, postulate 2 is violated.

To restate our paradox in brief, the purity of the Hawking radiation implies that the late radiation is fully entangled with the early radiation, and the absence of drama for the infalling observer implies that it is fully entangled with the modes behind the horizon. This is tantamount to cloning. For example, it violates strong subadditivity of the entropy,

SAB + SBC ≥ SB + SABC —– (3)

Let A be the early Hawking modes, B be the outgoing Hawking mode, and C be its interior partner mode. For an old black hole, the entropy is decreasing and so SAB < SA. The absence of infalling drama means that SBC = 0 and so SABC = SA. Strong subadditivity then gives SA > SAB ≥ SB + SA, which fails substantially since the density matrix for system B by itself is thermal.

Actually, assuming the Page argument, the inequality is violated even more strongly: for an old black hole the entropy decrease is maximal, SAB = SA − SB, so that we get from subadditivity that SA ≥ 2SB + SA.
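The arithmetic of the violation can be spelled out with illustrative numbers (SB set to the thermal entropy of a single outgoing mode, SA an arbitrary large value for the early radiation):

```python
# Entropy bookkeeping for the firewall argument, in units where the thermal
# entropy of one outgoing mode is S_B. The numbers are purely illustrative.
S_B = 1.0
S_A = 100.0                  # entropy of the early radiation (assumption)

# old black hole, maximal entropy decrease (Page): S_AB = S_A - S_B
S_AB = S_A - S_B
# no infalling drama: the outgoing mode B is purified by its interior partner C
S_BC = 0.0
S_ABC = S_A

# strong subadditivity demands S_AB + S_BC >= S_B + S_ABC,
# i.e. (S_A - S_B) + 0 >= S_B + S_A, i.e. 0 >= 2 S_B: false for thermal B
assert not (S_AB + S_BC >= S_B + S_ABC)
```

Since S_B > 0 for a thermal mode, the inequality fails by the finite amount 2 S_B, independent of how large S_A is; this is the quantitative content of the "violated even more strongly" remark above.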

Note that the measurement of Nb takes place entirely outside the horizon, while the measurement of Na (real excitations above the infalling vacuum) must involve a region that extends over both sides of the horizon. These are noncommuting measurements, but by measuring Nb the observer can infer something about what would have happened if Na had been measured instead. For an analogy, consider a set of identically prepared spins. If each is measured along the x-axis and found to be +1/2, we can infer that a measurement along the z-axis would have had equal probability to return +1/2 and −1/2. The multiple spins are needed to reduce statistical variance; similarly in our case the observer would need to measure several modes Nb to have confidence that he was actually entangled with the early radiation. One might ask if there could be a possible loophole in the argument: a physical observer will have a nonzero mass, and so the mass and entropy of the black hole will increase after he falls in. However, we may choose to consider a particular Hawking wavepacket which is already separated from the stretched horizon by a finite amount when it is encountered by the infalling observer. Thus by postulate 2 the further evolution of this mode is semiclassical and not affected by the subsequent merging of the observer with the black hole. In making this argument we are also assuming that the dynamics of the stretched horizon is causal.

Thus far the discussion applies to a black hole that is older than the Page time; we needed this in order to frame a sharp paradox using the entanglement with the Hawking radiation. However, we are discussing what should be intrinsic properties of the black hole, not dependent on its entanglement with some external system. After the black hole scrambling time, almost every small subsystem of the black hole is in an almost maximally mixed state. So if the degrees of freedom sampled by the infalling observer can be considered typical, then they are ‘old’ in an intrinsic sense. Our conclusions should then hold. If the black hole is a fast scrambler the scrambling time is rs ln(rs/lP), after which we have to expect either drama for the infalling observer or novel physics outside the black hole.

We note that the three postulates that are in conflict (purity of the Hawking radiation, absence of infalling drama, and semiclassical behavior outside the horizon) are widely held even by those who do not explicitly label them as ‘black hole complementarity.’ For example, one might imagine that if some tunneling process were to cause a shell of branes to appear at the horizon, an infalling observer would just go ‘splat,’ and of course postulate 4 would not hold.

Derivability from Relational Logic of Charles Sanders Peirce to Essential Laws of Quantum Mechanics


Charles Sanders Peirce made important contributions in logic, where he invented and elaborated a novel system of logical syntax and fundamental logical concepts. The starting point is the binary relation SiRSj between two ‘individual terms’ (subjects) Si and Sj. In a shorthand notation we represent this relation by Rij. Relations may be composed: whenever we have relations of the form Rij, Rjl, a third, transitive relation Ril emerges following the rule

RijRkl = δjkRil —– (1)

In ordinary logic the individual subject is the starting point and it is defined as a member of a set. Peirce considered the individual as the aggregate of all its relations

Si = ∑j Rij —– (2)

The individual Si thus defined is an eigenstate of the Rii relation

RiiSi = Si —– (3)

The relations Rii are idempotent

Rii² = Rii —– (4)

and they span the identity

∑i Rii = 1 —– (5)

The Peircean logical structure bears resemblance to category theory. In categories the concept of transformation (transition, map, morphism or arrow) enjoys an autonomous, primary and irreducible role. A category consists of objects A, B, C, … and arrows (morphisms) f, g, h, … . Each arrow f is assigned an object A as domain and an object B as codomain, indicated by writing f : A → B. If g is an arrow g : B → C with domain B, the codomain of f, then f and g can be “composed” to give an arrow g∘f : A → C. The composition obeys the associative law h∘(g∘f) = (h∘g)∘f. For each object A there is an arrow 1A : A → A called the identity arrow of A. The analogy with the relational logic of Peirce is evident: Rij stands as an arrow, the composition rule is manifested in equation (1), and the identity arrow for A ≡ Si is Rii.
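The composition rule (1), the idempotence (4) and the completeness (5) are exactly the algebra of matrix units Eij, which gives a concrete finite-dimensional check:

```python
import numpy as np

# Matrix units realize Peirce's relations: R_ij R_kl = delta_jk R_il,
# R_ii R_ii = R_ii, and sum_i R_ii = 1.
N = 4

def R(i, j):
    m = np.zeros((N, N))
    m[i, j] = 1.0
    return m

assert np.allclose(R(0, 1) @ R(1, 2), R(0, 2))            # delta_jk = 1 case
assert np.allclose(R(0, 1) @ R(2, 3), np.zeros((N, N)))   # delta_jk = 0 case
assert np.allclose(R(2, 2) @ R(2, 2), R(2, 2))            # idempotence, eq. (4)
assert np.allclose(sum(R(i, i) for i in range(N)), np.eye(N))  # completeness, eq. (5)
```

In categorical terms, R(i, j) composes like an arrow from the j-th object to the i-th, and R(i, i) plays the role of the identity arrow.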

Rij may receive multiple interpretations: as a transition from the j state to the i state, as a measurement process that rejects all impinging systems except those in the state j and permits only systems in the state i to emerge from the apparatus, as a transformation replacing the j state by the i state. We proceed to a representation of Rij

Rij = |ri⟩⟨rj| —– (6)

where state ⟨ri | is the dual of the state |ri⟩ and they obey the orthonormal condition

⟨ri |rj⟩ = δij —– (7)

It is immediately seen that our representation satisfies the composition rule equation (1). The completeness, equation (5), takes the form

∑n |rn⟩⟨rn| = 1 —– (8)

All relations remain satisfied if we replace the state |ri⟩ by |ξi⟩ where

|ξi⟩ = 1/√N ∑n |ri⟩⟨rn| —– (9)

with N the number of states. Thus we verify Peirce’s suggestion, equation (2), and the state |ri⟩ is derived as the sum of all its interactions with the other states. Rij acts as a projection, transferring from one r state to another r state

Rij |rk⟩ = δjk |ri⟩ —– (10)

We may think also of another property characterizing our states and define a corresponding operator

Qij = |qi⟩⟨qj | —– (11)

with

Qij |qk⟩ = δjk |qi⟩ —– (12)

and

n |qi⟩⟨qi| = 1 —– (13)

Successive measurements of the q-ness and r-ness of the states is provided by the operator

RijQkl = |ri⟩⟨rj |qk⟩⟨ql | = ⟨rj |qk⟩ Sil —– (14)

with

Sil = |ri⟩⟨ql | —– (15)

Considering the matrix elements of an operator A as Anm = ⟨rn |A |rm⟩ we find for the trace

Tr(Sil) = ∑n ⟨rn |Sil |rn⟩ = ⟨ql |ri⟩ —– (16)

From the above relation we deduce

Tr(Rij) = δij —– (17)

Any operator can be expressed as a linear superposition of the Rij

A = ∑i,j AijRij —– (18)

with

Aij =Tr(ARji) —– (19)
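Equations (18) and (19) can be verified directly: with Rij realized as |ri⟩⟨rj| for an orthonormal basis, the coefficients Aij = Tr(A Rji) resum to the original operator. A sketch with a random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
r = np.eye(N)                                    # orthonormal basis |r_i> as columns

def Rop(i, j):
    # R_ij = |r_i><r_j|
    return np.outer(r[:, i], r[:, j])

A = rng.normal(size=(N, N))
# coefficients A_ij = Tr(A R_ji), eq. (19)
coeff = np.array([[np.trace(A @ Rop(j, i)) for j in range(N)] for i in range(N)])
# resummation A = sum_ij A_ij R_ij, eq. (18)
resum = sum(coeff[i, j] * Rop(i, j) for i in range(N) for j in range(N))
assert np.allclose(resum, A)
```

Of course here Aij = Tr(A Rji) is nothing but the matrix element ⟨ri|A|rj⟩, so the expansion is the familiar resolution of an operator in a basis of matrix units.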

The individual states could be redefined

|ri⟩ → e^{iφi} |ri⟩ —– (20)

|qi⟩ → e^{iθi} |qi⟩ —– (21)

without affecting the corresponding composition laws. However the overlap number ⟨ri |qj⟩ changes and therefore we need an invariant formulation for the transition |ri⟩ → |qj⟩. This is provided by the trace of the closed operation RiiQjjRii

Tr(RiiQjjRii) ≡ p(qj, ri) = |⟨ri |qj⟩|² —– (22)

The completeness relation, equation (13), guarantees that p(qj, ri) may assume the role of a probability since

∑j p(qj, ri) = 1 —– (23)
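Equations (22) and (23) can be checked with two concrete orthonormal bases: the trace of the closed operation RiiQjjRii reproduces |⟨ri|qj⟩|² and sums to one over j. The particular q-basis below is a random orthogonal matrix, an arbitrary choice for illustration:

```python
import numpy as np

N = 3
r = np.eye(N)                                               # r-basis
# a second orthonormal basis from the QR decomposition of a random matrix
q = np.linalg.qr(np.random.default_rng(2).normal(size=(N, N)))[0]

def Rp(i):
    return np.outer(r[:, i], r[:, i])                       # R_ii

def Qp(j):
    return np.outer(q[:, j], q[:, j])                       # Q_jj

i = 0
# p(q_j, r_i) = Tr(R_ii Q_jj R_ii), eq. (22)
p = [np.trace(Rp(i) @ Qp(j) @ Rp(i)) for j in range(N)]
assert all(np.isclose(p[j], (r[:, i] @ q[:, j])**2) for j in range(N))
# completeness makes the weights a probability distribution, eq. (23)
assert np.isclose(sum(p), 1.0)
```

The invariance claimed in the text is also visible here: multiplying any basis column by a phase leaves every p(qj, ri) unchanged, while the bare overlap ⟨ri|qj⟩ is not invariant.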

We discover that starting from the relational logic of Peirce we obtain all the essential laws of Quantum Mechanics. Our derivation underlines the utmost relational nature of Quantum Mechanics and goes in parallel with the analysis of the quantum algebra of microscopic measurement.

Conjuncted: Internal Logic. Thought of the Day 46.1


So, what exactly is an internal logic? The concept of topos is a generalization of the concept of set. In the categorial language of topoi, the universe of sets is just a topos. The consequence of this generalization is that the universe, or better the conglomerate, of topoi is of overwhelming amplitude. In set theory, the logic employed in the derivation of its theorems is classical. For this reason, the propositions about the different properties of sets are two-valued. There can only be true or false propositions. The traditional fundamental principles: identity, contradiction and excluded third, are absolutely valid.

But if the concept of a topos is a generalization of the concept of set, it is obvious that the logic needed to study, by means of deduction, the properties of all non-set-theoretical topoi cannot be classical. If it were, all topoi would coincide with the universe of sets. This fact suggests that to deductively study the properties of a topos, a non-classical logic must be used. And this logic can be none other than the internal logic of the topos. We know, presently, that the internal logic of all topoi is intuitionistic logic as formalized by Heyting (a disciple of Brouwer). It is very interesting to compare the formal system of classical logic with the intuitionistic one. If both systems are axiomatized, the axioms of classical logic encompass the axioms of intuitionistic logic. The latter has all the axioms of the former, except one: the axiom that formally corresponds to the principle of the excluded middle. This difference can be shown in all kinds of equivalent versions of both logics. But, as Mac Lane says, “in the long run, mathematics is essentially axiomatic.” (Mac Lane). And it is remarkable that, just by suppressing an axiom of classical logic, the soundness of the resulting theory (i.e., intuitionistic logic) can be demonstrated only through the existence of a potentially infinite set of truth-values.
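The point about truth-values can be made concrete with the smallest non-Boolean Heyting algebra, the three-element chain {0, 1/2, 1}: all intuitionistic connectives are definable, yet the excluded middle fails. A minimal sketch:

```python
# The three-element Heyting algebra {0, 1/2, 1}, ordered by <=:
# meet = min, join = max, and implication is the relative pseudo-complement.
vals = [0.0, 0.5, 1.0]

def impl(a, b):
    # a -> b: the largest c with min(a, c) <= b
    return max(c for c in vals if min(a, c) <= b)

def neg(a):
    # intuitionistic negation: a -> 0
    return impl(a, 0.0)

p = 0.5
# excluded middle fails: p or not-p is not the top element
assert max(p, neg(p)) == 0.5 != 1.0
# non-contradiction still holds: p and not-p is the bottom element
assert min(p, neg(p)) == 0.0
```

Larger chains, and ultimately the open-set lattices of topological spaces, give ever more truth-values on which intuitionistic logic is sound, which is the semantic face of the remark above.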

We see, then, that the appellation “internal” is due to the fact that the logic by means of which we study the properties of a topos is a logic that functions within the topos, just as classical logic functions within set theory. As a matter of fact, classical logic is the internal logic of the universe of sets.

Another consequence of the fact that the general internal logic of every topos is the intuitionistic one is that many different axioms can be added to the axioms of intuitionistic logic. This possibility enriches the internal logic of topoi. Through its application it reveals many new and quite unexpected properties of topoi. This enrichment cannot be carried out in classical logic because, if we add one or more axioms to it, the new system becomes redundant or inconsistent. This does not happen with intuitionistic logic. So, topos theory shows that classical logic, although very powerful concerning the amount of resulting theorems, is limited in its mathematical applications. It cannot be applied to study the properties of a mathematical system that cannot be reduced to the system of sets. Of course, if we want, we can utilize classical logic to study the properties of a topos. But then there are important properties of the topos that cannot be known; they remain occult in the interior of the topos. Classical logic remains external to the topos.

Meillassoux’s Principle of Unreason: Towards an Intuition of the Absolute In-itself. Note Quote.


The principle of reason as it appears in philosophy is a principle of contingent reason: it concerns not only how philosophical reason deals with difference instead of identity, but also why the Principle of Sufficient Reason can no longer be understood in terms of absolute necessity. In other words, Deleuze disconnects the Principle of Sufficient Reason from the ontotheological tradition no less than from its Heideggerian deconstruction. What remains, then, of Meillassoux’s criticism in After Finitude: An Essay on the Necessity of Contingency that Deleuze, no less than Hegel, hypostatizes or absolutizes the correlation between thinking and being, and thus brings back a vitalist version of speculative idealism through the back door?

At stake in Meillassoux’s criticism of the Principle of Sufficient Reason is a double problem: the conditions of possibility of thinking and knowing an absolute, and subsequently the conditions of possibility of rational ideology critique. The first problem is primarily epistemological: how can philosophy justify scientific knowledge claims about a reality that is anterior to our relation to it and that is hence not given in the transcendental object of possible experience (the arche-fossil)? This is a problem for all post-Kantian epistemologies that hold that we can only ever know the correlate of being and thought. Instead of confronting this weak correlationist position head on, however, Meillassoux seeks a solution in the even stronger correlationist position that denies not only the knowability of the in itself, but also its very thinkability or imaginability. Simplified: if strong correlationists such as Heidegger or Wittgenstein insist on the historicity or facticity (non-necessity) of the correlation of reason and ground in order to demonstrate the impossibility of thought’s self-absolutization, then the very force of their argument, if it is not to contradict itself, implies more than they are willing to accept: the necessity of the contingency of the transcendental structure of the for itself. As a consequence, correlationism is incapable of demonstrating itself to be necessary. This is what Meillassoux calls the principle of factiality or the principle of unreason. It says that it is possible to think two things that exist independently of thought’s relation to them: contingency as such and the principle of non-contradiction. The principle of unreason thus enables the intellectual intuition of something that is absolutely in itself, namely the absolute impossibility of a necessary being. And this in turn implies the real possibility of the completely random and unpredictable transformation of all things from one moment to the next. Logically speaking, the absolute is thus a hyperchaos, or something akin to Time, in which nothing is impossible, except necessary beings or necessary temporal experiences such as the laws of physics.

There is, moreover, nothing mysterious about this chaos. Contingency, and Meillassoux consistently refers to this as Hume’s discovery, is a purely logical and rational necessity, since without the principle of non-contradiction not even the principle of factiality would be absolute. It is thus a rational necessity that puts the Principle of Sufficient Reason out of action, since it would be irrational to claim that it is a real necessity, as everything that is, is devoid of any reason to be as it is. This leads Meillassoux to the surprising conclusion that “[t]he Principle of Sufficient Reason is thus another name for the irrational… The refusal of the Principle of Sufficient Reason is not the refusal of reason, but the discovery of the power of chaos harboured by its fundamental principle (non-contradiction)” (Meillassoux 2007: 61). The principle of factiality thus legitimates or founds the rationalist requirement that reality be perfectly amenable to conceptual comprehension, at the same time that it opens up “[a] world emancipated from the Principle of Sufficient Reason” (Meillassoux) but founded only on that of non-contradiction.

This emancipation brings us to the practical problem Meillassoux tries to solve, namely the possibility of ideology critique. Correlationism is essentially a discourse on the limits of thought, for which the deabsolutization of the Principle of Sufficient Reason marks reason’s discovery of its own essential inability to uncover an absolute. Thus, if the Galilean-Copernican revolution of modern science meant the paradoxical unveiling of thought’s capacity to think what there is regardless of whether thought exists or not, then Kant’s correlationist version of the Copernican revolution was in fact a Ptolemaic counterrevolution. Since Kant, and even more since Heidegger, philosophy has been averse precisely to the speculative import of modern science as a formal, mathematical knowledge of nature. Its unintended consequence is therefore that questions of ultimate reasons have been dislocated from the domain of metaphysics into that of non-rational, fideist discourse. Philosophy has thus made the contemporary end of metaphysics complicit with the religious belief in the Principle of Sufficient Reason beyond its very thinkability. Whence Meillassoux’s counter-intuitive conclusion that the refusal of the Principle of Sufficient Reason furnishes the minimal condition for every critique of ideology, insofar as ideology cannot be identified with just any variety of deceptive representation, but is rather any form of pseudo-rationality whose aim is to establish that what exists as a matter of fact exists necessarily. In this way a speculative critique pushes skeptical rationalism’s relinquishment of the Principle of Sufficient Reason to the point where it affirms that there is nothing beneath or beyond the manifest gratuitousness of the given, nothing but the limitless and lawless power of its destruction, emergence, or persistence. Such an absolutizing, though no longer absolutist, approach would be the minimal condition for every critique of ideology: to reject dogmatic metaphysics means to reject all real necessity, and a fortiori to reject the Principle of Sufficient Reason, as well as the ontological argument.

On the one hand, Deleuze’s criticism of Heidegger bears many similarities to that of Meillassoux when he redefines the Principle of Sufficient Reason in terms of contingent reason, or, with Nietzsche and Mallarmé, as nothing rather than something, such that whatever exists is a fiat in itself. His Principle of Sufficient Reason is the plastic, anarchic and nomadic principle of a superior or transcendental empiricism that teaches us a strange reason: that of the multiple, chaos and difference. On the other hand, however, the fact that Deleuze still speaks of reason should make us wary. For whereas Deleuze seeks to reunite chaotic being with systematic thought, Meillassoux revives the classical opposition between empiricism and rationalism precisely in order to attack the pre-Kantian, absolute validity of the Principle of Sufficient Reason. His argument implies a return to a non-correlationist version of Kantianism insofar as it relies on the gap between being and thought, and thus upon a logic of representation that renders Deleuze’s Principle of Sufficient Reason unrecognizable, whether through a concept of time or through materialism.

Deleuzian Grounds. Thought of the Day 42.0


With difference or intensity instead of identity as the ultimate philosophical principle, one can arrive at the crux of Deleuze’s use of the Principle of Sufficient Reason in Difference and Repetition. At the beginning of the first chapter, he defines the quadruple yoke of conceptual representation (identity, analogy, opposition, resemblance) in correspondence with the four principal aspects of the Principle of Sufficient Reason: the form of the undetermined concept, the relation between ultimate determinable concepts, the relation between determinations within concepts, and the determined object of the concept itself. In other words, sufficient reason according to Deleuze is the very medium of representation, the element in which identity is conceptually determined. In itself, however, this medium or element remains different or unformed (albeit not formless): difference is the state in which one can speak of determination as such, i.e. determination in its occurrent quality of a difference being made, or rather making itself, in the sense of a unilateral distinction. It is with the event of difference that what appears to be a breakdown of representational reason is also a breakthrough of the rumbling ground as the differential element of determination (or individuation). Deleuze illustrates this with an example borrowed from Nietzsche:

Instead of something distinguished from something else, imagine something which distinguishes itself, and yet that from which it distinguishes itself does not distinguish itself from it. Lightning, for example, distinguishes itself from the black sky but must also trail behind it. It is as if the ground rose to the surface without ceasing to be the ground.

Between the abyss of the indeterminate and the superficiality of the determined, there thus appears an intermediate element, a field of potential or intensive depth, which perhaps in a way exceeds sufficient reason itself. This is a depth which Deleuze finds prefigured in Schelling’s and Schopenhauer’s different conceptualizations of the ground (Grund) as both ground (fond) and grounding (fondement). The ground attains an autonomous power that exceeds classical sufficient reason by including the grounding moment of sufficient reason for itself. Because this self-grounding ground remains groundless (sans-fond) in itself, however, Hegel famously ridiculed Schelling’s ground as the indeterminate night in which all cows are black, and opposed it to the surface of determined identities that are only negatively correlated to each other. By contrast, Deleuze interprets the self-grounding ground through Nietzsche’s eternal return of the same. Whereas the passive syntheses of habit (connective series) and memory (conjunctions of connective series) are the processes by which representational reason grounds itself in time, the eternal return (the disjunctive synthesis of series) ungrounds (effonde) this ground by introducing the necessity of future becomings, i.e. of difference as ongoing differentiation. Far from being a denial of the Principle of Sufficient Reason, this threefold process of self-(un)grounding constitutes the positive, relational system that brings difference out of the night of the Identical, with finer, more varied and more terrifying flashes of lightning than those of contradiction: progressivity.

The breakthrough of the ground in the process of ungrounding itself, in the sheer distinction-production of the multiple against the indistinguishable, is what Deleuze calls violence or cruelty, as it determines being or nature in a necessary system of asymmetric relations of intensity by the acausal action of chance, like an ontological game in which the throw of the dice is the only rule or principle. But it is also the vigil, the insomnia of thought, since it is here that reason or thought achieves its highest power of determination. It becomes a pure creativity or virtuality in which no well-founded identity (God, World, Self) remains: “[T]hought is that moment in which determination makes itself one, by virtue of maintaining a unilateral and precise relation to the indeterminate.” Since it produces differential events without subjective or objective remainder, however, Deleuze argues that thought belongs to the pure and empty form of time, a time that is no longer subordinate to (cosmological, psychological, eternal) movement in space. Time qua form of transcendental synthesis is the ultimate ground of everything that is, reasons and acts. It is the formal element of multiple becoming, no longer in the sense of finite a priori conditioning, but in the sense of a transfinite a posteriori synthesizer: an empty interiority in ongoing formation and materialization. As Deleuze and Guattari define the synthesizer in A Thousand Plateaus: “The synthesizer, with its operation of consistency, has taken the place of the ground in a priori synthetic judgment: its synthesis is of the molecular and the cosmic, material and force, not form and matter, Grund and territory.”

Some content on this page was disabled on May 4, 2020 as a result of a DMCA takedown notice from Columbia University Press. You can learn more about the DMCA here:

https://en.support.wordpress.com/copyright-and-the-dmca/

Einstein Algebras and the General Theory of Relativity Preserve the Empirical Structure of the Theories


In general relativity, we represent possible universes using relativistic spacetimes, which are Lorentzian manifolds (M, g), where M is a smooth four-dimensional manifold and g is a smooth Lorentzian metric. An isometry between spacetimes (M, g) and (M′, g′) is a smooth map φ : M → M′ such that φ*(g′) = g, where φ* is the pullback along φ. We do not require isometries to be diffeomorphisms, so they are not necessarily isomorphisms, i.e., they may not be invertible. Two spacetimes (M, g) and (M′, g′) are isomorphic if there is an invertible isometry between them, i.e., if there exists a diffeomorphism φ : M → M′ that is also an isometry. We then say the spacetimes are isometric.
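The pullback condition φ*(g′) = g can be checked numerically in coordinates, where it reads Jᵀ g′ J = g, with J the Jacobian of φ. The following toy sketch is my own illustration, not from the source:

```python
# A toy numeric sketch (illustration, not from the source): in coordinates,
# the pullback metric is (φ*g′)_ab = Σ_cd (∂φ^c/∂x^a) g′_cd (∂φ^d/∂x^b),
# i.e. Jᵀ · g′ · J with J the Jacobian of φ. The map φ is an isometry of
# (M, g) into (M′, g′) exactly when Jᵀ g′ J reproduces g at every point.

def pullback(J, g_prime):
    """Compute Jᵀ · g′ · J for small matrices given as lists of rows."""
    n = len(J[0])   # dimension of the source
    m = len(J)      # dimension of the target
    return [[sum(J[c][a] * g_prime[c][d] * J[d][b]
                 for c in range(m) for d in range(m))
             for b in range(n)] for a in range(n)]

# Minkowski metric g′ = diag(-1, 1) on a 1+1-dimensional target, and the
# map φ(t, x) = (t, 2x), whose constant Jacobian stretches the x direction.
g_prime = [[-1, 0], [0, 1]]
J = [[1, 0], [0, 2]]

# Jᵀ g′ J = diag(-1, 4) ≠ g′, so φ is NOT an isometry of Minkowski space,
# whereas the identity map (J = identity matrix) trivially is.
print(pullback(J, g_prime))   # [[-1, 0], [0, 4]]
```

Since this J is constant, one evaluation suffices; for a general φ the same check would have to hold at every point of M.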

The use of category theoretic tools to examine relationships between theories is motivated by a simple observation: The class of models of a physical theory often has the structure of a category. In what follows, we will represent general relativity with the category GR, whose objects are relativistic spacetimes (M,g) and whose arrows are isometries between spacetimes.

According to the criterion for theoretical equivalence that we will consider, two theories are equivalent if their categories of models are “isomorphic” in an appropriate sense. In order to describe this sense, we need some basic notions from category theory. Two (covariant) functors F : C → D and G : C → D are naturally isomorphic if there is a family ηc : Fc → Gc of isomorphisms of D, indexed by the objects c of C, that satisfies ηc′ ◦ Ff = Gf ◦ ηc for every arrow f : c → c′ in C. The family of maps η is called a natural isomorphism and denoted η : F ⇒ G. The existence of a natural isomorphism between two functors captures a sense in which the functors are themselves “isomorphic” to one another as maps between categories. Categories C and D are dual if there are contravariant functors F : C → D and G : D → C such that GF is naturally isomorphic to the identity functor 1C and FG is naturally isomorphic to the identity functor 1D. Roughly speaking, F and G give a duality, or contravariant equivalence, between two categories if they are contravariant isomorphisms in the category of categories, up to isomorphism in the category of functors. One can think of dual categories as “mirror images” of one another, in the sense that the two categories differ only in that the directions of their arrows are systematically reversed.
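The naturality condition ηc′ ◦ Ff = Gf ◦ ηc can be made concrete with a toy example of my own (not from the source): in the category of sets, take F = G = the list functor and η = list reversal; reversal commutes with mapping any function over a list, and since it is invertible it is even a natural isomorphism.

```python
# A concrete sketch (illustration, not from the source) of the naturality
# condition η_c′ ◦ Ff = Gf ◦ η_c in the category of sets: F = G = the list
# functor (Ff = "map f over a list") and η = list reversal.

def fmap(f):
    """The list functor on arrows: lift f : c → c′ to Ff : F(c) → F(c′)."""
    return lambda xs: [f(x) for x in xs]

def eta(xs):
    """The component of the natural transformation at any object: reversal."""
    return list(reversed(xs))

f = lambda n: n * n            # an arrow f : int → int
xs = [1, 2, 3, 4]              # an element of F(int)

# The naturality square commutes: across-then-down equals down-then-across.
print(eta(fmap(f)(xs)) == fmap(f)(eta(xs)))   # True
```

Because every component eta has an inverse (reversal is its own inverse), this family of maps is a natural isomorphism in the sense just defined.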

For the purposes of capturing the relationship between general relativity and the theory of Einstein algebras, we will appeal to the following standard of equivalence.

Theories T1 and T2 are equivalent if the category of models of T1 is dual to the category of models of T2.

Equivalence differs from duality only in that the two functors realizing an equivalence are covariant, rather than contravariant. When T1 and T2 are equivalent in either sense, there is a way to “translate” (or perhaps better, “transform”) models of T1 into models of T2, and vice versa. These transformations take objects of one category (models of one theory) to objects of the other in a way that preserves all of the structure of the arrows between objects, including, for instance, the group structure of the automorphisms of each object, the inclusion relations of “sub-objects”, and so on. These transformations are guaranteed to be inverses to one another “up to isomorphism,” in the sense that if one begins with an object of one category, maps it by a functor realizing (half of) an equivalence or duality to the corresponding object of the other category, and then maps back with the corresponding functor, the object one ends up with is isomorphic to the object with which one began. In the case of the theory of Einstein algebras and general relativity, there is also a precise sense in which these functors preserve the empirical structure of the theories.

Diagrammatic Political via the Exaptive Processes


The principle of individuation is the operation that, in the matter of taking form, by means of topological conditions […] carries out an energy exchange between the matter and the form, until the unity leads to a state in which the energy conditions express the whole system. Internal resonance is a state of equilibrium. One could say that the principle of individuation is the common allagmatic system which requires this realization of the energy conditions and the topological conditions […] it can produce the effects in all the points of the system in an enclosure […]

This operation rests on the singularity or starting from a singularity of average magnitude, topologically definite.

If we throw in a pinch of Gilbert Simondon’s concept of transduction, there’s a basic recipe, or toolkit, for exploring the relational intensities between the three informal (theoretical) dimensions of knowledge, power and subjectification pursued by Foucault with respect to formal practice. Supplanting Foucault’s process of subjectification with Simondon’s more eloquent process of individuation marks an entry for imagining the continuous, always partial, phase-shifting resolutions of the individual. This is not identity as fixed and positionable; it’s a preindividual dynamic that affects an always becoming-individual. It’s the pre-formative as performative. Transduction is a process of individuation: it leads to individuated beings, such as things, gadgets, organisms, machines, self and society, which could be the object of knowledge. It is an ontogenetic operation which provisionally resolves incompatibilities between different orders or different zones of a domain.

What is at stake in the bigger picture, in a diagrammatic politics, is double-sided. Just as there is matter in expression and expression in matter, there is event-value in an exchange-value paradigm, which in fact amplifies the force of its power relations. The economic engine of our time feeds on event potential becoming-commodity. It grows and flourishes on the mass production of affective intensities. Reciprocally, there are degrees of exchange-value in eventness. It’s the recursive loopiness of our current Creative Industries diagram, in which the social networking praxis of Web 2.0 is emblematic and has much to learn.