Whitehead and Peirce’s Synchronicity with Hegel’s Capital Error. Thought of the Day 97.0


The focus on experience ensures that Whitehead’s metaphysics is grounded. Otherwise the narrowness of approach would only culminate in sterile measurement. This becomes especially evident with regard to the science of history. Whitehead gives a lucid example of such ‘sterile measurement’ lacking the immediacy of experience.

Consider, for example, the scientific notion of measurement. Can we elucidate the turmoil of Europe by weighing its dictators, its prime ministers, and its editors of newspapers? The idea is absurd, although some relevant information might be obtained. (Alfred North Whitehead – Modes of Thought)

The wealth of experience leaves us with the problem of how to cope with it. Selection of data is required. This selection is made by a value judgment – the judgment of importance. Although Whitehead opposes the dichotomy of the two notions ‘importance’ and ‘matter of fact’, it is still necessary to distinguish grades and types of importance, which enables us to structure our experience, to focus it. This is very similar to hermeneutical theories in Schleiermacher, Gadamer and Habermas: the horizon of understanding structures the data. Therefore, not only do we need judgment; the process of concrescence also implicitly requires an aim. Whitehead explains that

By this term ‘aim’ is meant the exclusion of the boundless wealth of alternative potentiality and the inclusion of that definite factor of novelty which constitutes the selected way of entertaining those data in that process of unification.

The other idea that underlies experience is “matter of fact.”

There are two contrasted ideas which seem inevitably to underlie all width of experience, one of them is the notion of importance, the sense of importance, the presupposition of importance. The other is the notion of matter of fact. There is no escape from sheer matter of fact. It is the basis of importance; and importance is important because of the inescapable character of matter of fact.

By stressing the “alien character” of the feeling that enters into the privately felt feeling of an occasion, Whitehead is able to distinguish the responsive and the supplemental stages of concrescence. The responsive stage is a purely receptive phase; the supplemental stage integrates those ‘alien elements’ into a unity of feeling. The alien factor in the experiencing subjects saves Whitehead’s concept from being pure Spirit (Geist) in a Hegelian sense. There are more similarities between Hegelian thinking and Whitehead’s thought than his own comments on Hegel may suggest. But his major criticism could probably be stated with Peirce, who wrote that

The capital error of Hegel which permeates his whole system in every part of it is that he almost altogether ignores the Outward clash. (The Essential Peirce 1)

Whitehead refers to that clash as matter of fact, although even there one has to keep in mind that matter of fact is an abstraction.

Matter of fact is an abstraction, arrived at by confining thought to purely formal relations which then masquerade as the final reality. This is why science, in its perfection, relapses into the study of differential equations. The concrete world has slipped through the meshes of the scientific net.

Whitehead clearly keeps the notion of prehension in his late writings as developed in Process and Reality. Just to give one example, 

I have, in my recent writings, used the word ‘prehension’ to express this process of appropriation. Also I have termed each individual act of immediate self-enjoyment an ‘occasion of experience’. I hold that these unities of existence, these occasions of experience, are the really real things which in their collective unity compose the evolving universe, ever plunging into the creative advance. 

Process needs an aim in Process and Reality as much as in Modes of Thought:

We must add yet another character to our description of life. This missing characteristic is ‘aim’. By this term ‘aim’ is meant the exclusion of the boundless wealth of alternative potentiality, and the inclusion of that definite factor of novelty which constitutes the selected way of entertaining those data in that process of unification. The aim is at that complex of feeling which is the enjoyment of those data in that way. ‘That way of enjoyment’ is selected from the boundless wealth of alternatives. It has been aimed at for actualization in that process.


10 or 11 Dimensions? Phenomenological Conundrum. Drunken Risibility.


It is not the fact that we are living in a ten-dimensional world which forces string theory into a ten-dimensional description. It is that perturbative string theories are only anomaly-free in ten dimensions, and they contain gravitons only in a ten-dimensional formulation. The resulting question, how the four-dimensional spacetime of phenomenology emerges from ten-dimensional perturbative string theories (or from their eleven-dimensional non-perturbative extension: the mysterious M theory), led to the compactification idea and to the braneworld scenarios.

Nor is it the fact that empirical indications of supersymmetry have been found that forces consistent string theories to include supersymmetry. Without supersymmetry, string theory has no fermions and no chirality, but it has tachyons which make the vacuum unstable; and supersymmetry has certain conceptual advantages: it very probably leads to the finiteness of the perturbation series, thereby avoiding the problem of non-renormalizability which haunted all former attempts at a quantization of gravity; and there is a close relation between supersymmetry and Poincaré invariance which seems reasonable for quantum gravity. But it is clear that not all conceptual advantages are necessarily part of nature – as the example of the elegant, but unsuccessful, Grand Unified Theories demonstrates.

Apart from its ten (or eleven) dimensions and the inclusion of supersymmetry – both have more or less the character of conceptually, but not empirically, motivated ad hoc assumptions – string theory consists of a rather careful adaptation of the mathematical and model-theoretical apparatus of perturbative quantum field theory to the quantized, one-dimensionally extended, oscillating string (and, finally, of a minimal extension of its methods into the non-perturbative regime, for which the declarations of intent exceed by far the conceptual successes). Without any empirical data transcending the context of our established theories, there remains for string theory only the minimal conceptual integration of basic parts of the phenomenology already reproduced by these established theories. And a significant component of this phenomenology, namely the phenomenology of gravitation, was already used up in the selection of string theory as an interesting approach to quantum gravity. It is taken seriously only because string theory – containing gravitons as string states – reproduces in a certain way the phenomenology of gravitation.

But consistency requirements, the minimal inclusion of basic phenomenological constraints, and the careful extension of the model-theoretical basis of quantum field theory are not sufficient to establish an adequate theory of quantum gravity. Shouldn’t the landscape scenario of string theory be understood as a clear indication, not only of fundamental problems with the reproduction of the gauge invariances of the standard model of quantum field theory (and the corresponding phenomenology), but of much more severe conceptual problems? Almost all attempts at a solution of the immanent and transcendental problems of string theory seem to end in the ambiguity and contingency of the multitude of scenarios of the string landscape. That no physically motivated basic principle is known for string theory and its model-theoretical procedures might be seen as a problem which possibly could be overcome in future developments. But, what about the use of a static background spacetime in string theory which falls short of the fundamental insights of general relativity and which therefore seems to be completely unacceptable for a theory of quantum gravity?

At least since the change of context (and strategy) from hadron physics to quantum gravity, the development of string theory has been dominated by immanent problems which, together with their attempted solutions, led ever deeper into self-referentiality. The result of this successively increasing self-referentiality is a more and more pronounced decoupling from phenomenological boundary conditions and necessities. The contact with the empirical does not increase, but gets weaker and weaker. The result of this process is a labyrinthine mathematical structure with a completely unclear physical relevance.

Geometry and Localization: An Unholy Alliance? Thought of the Day 95.0


There are many misleading metaphors obtained from naively identifying geometry with localization. One which is very close to String Theory is the idea that one can embed a lower-dimensional Quantum Field Theory (QFT) into a higher-dimensional one. This is not possible; what one can do is restrict a QFT on a spacetime manifold to a submanifold. However, if the submanifold contains the time axis (a “brane”), the restricted theory has too many degrees of freedom to merit the name “physical”: it contains as many as the unrestricted theory. The naive idea that by passing to a subspace one only gets a fraction of the phase-space degrees of freedom is a delusion; this can only happen if the subspace does not contain a timelike line, as for a null-surface (holographic projection onto a horizon).

The geometric picture of a string in terms of a multi-component conformal field theory is that of an embedding of an n-component chiral theory into its n-dimensional component space (referred to as a target space), which is certainly a string. But this is not what modular localization reveals; rather, those oscillatory degrees of freedom of the multi-component chiral current go into an infinite-dimensional Hilbert space over one localization point and do not arrange themselves according to the geometric source-target idea. A theory of this kind is of course consistent, but String Theory is certainly a very misleading terminology for this state of affairs. Any attempt to imitate Feynman rules by replacing world lines by world sheets (of strings) may produce prescriptions for cooking up some mathematically interesting functions, but those results cannot be brought into the only form which counts in a quantum theory, namely a perturbative approach in terms of operators and states.

String Theory is by no means the only area in particle theory where geometry and modular localization are at loggerheads. Closely related is the interpretation of the Riemann surfaces, which result from the analytic continuation of chiral theories on the lightray/circle, as the “living space” in the sense of localization. The mathematical theory of Riemann surfaces does not specify how they should be realized; whether they refer to surfaces in an ambient space, to a distinguished subgroup of a Fuchsian group, or to any other of the many possible realizations is of no concern to a mathematician. But in the context of chiral models it is important not to confuse the living space of a QFT with its analytic continuation.

Whereas geometry as a mathematical discipline does not care about how it is concretely realized, the geometrical aspect of modular localization in spacetime has a very specific geometric content, namely that which can be encoded in subspaces (Reeh-Schlieder spaces) generated by operator subalgebras acting on the vacuum reference state. In other words, the physically relevant spacetime geometry and the symmetry group of the vacuum are contained in the abstract positioning of certain subalgebras in a common Hilbert space, and not in that which comes with classical theories.

Kant and Non-Euclidean Geometries. Thought of the Day 94.0


The argument that non-Euclidean geometries contradict Kant’s doctrine on the nature of space apparently goes back to Hermann Helmholtz and was taken up by several philosophers of science such as Hans Reichenbach (The Philosophy of Space and Time), who devoted much work to this subject. In an essay written in 1870, Helmholtz argued that the axioms of geometry are not a priori synthetic judgments (in the sense given by Kant), since they can be subjected to experiments. Given that Euclidean geometry is not the only possible geometry, as was believed in Kant’s time, it should be possible to determine by means of measurements whether, for instance, the sum of the three angles of a triangle is 180 degrees, or whether two straight parallel lines always keep the same distance from each other. If this were not the case, then it would have been demonstrated experimentally that space is not Euclidean. Thus the possibility of verifying the axioms of geometry would prove that they are empirical and not given a priori.
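As a worked illustration of the measurement idea (added here, not part of Helmholtz’s text): for a geodesic triangle on a sphere of radius R, the angle sum exceeds 180 degrees by an amount proportional to the enclosed area A,

α + β + γ = π + A/R²,

so a sufficiently precise survey of a large triangle would, in principle, reveal the curvature 1/R², whereas in Euclidean space the excess vanishes.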

Helmholtz developed his own version of a non-Euclidean geometry on the basis of what he believed to be the fundamental condition for all geometries: “the possibility of figures moving without change of form or size”; without this possibility, it would be impossible to define what a measurement is. According to Helmholtz:

the axioms of geometry are not concerned with space-relations only but also at the same time with the mechanical deportment of solidest bodies in motion.

Nevertheless, he was aware that a strict Kantian might argue that the rigidity of bodies is an a priori property, but

then we should have to maintain that the axioms of geometry are not synthetic propositions… they would merely define what qualities and deportment a body must have to be recognized as rigid.

At this point, it is worth noticing that Helmholtz’s formulation of geometry is a rudimentary version of what was later developed as the theory of Lie groups. As for the transport of rigid bodies, it is well known that rigid motion cannot be defined in the framework of the theory of relativity: since there is no absolute simultaneity of events, it is impossible to move all parts of a material body in a coordinated and simultaneous way. What is defined as the length of a body depends on the reference frame from which it is observed. Thus, it is meaningless to invoke the rigidity of bodies as the basis of a geometry that pretends to describe the real world; it is only in the mathematical realm that the rigid displacement of a figure can be defined in terms of what mathematicians call a congruence.

Arguments similar to those of Helmholtz were given by Reichenbach in his attempt to refute Kant’s doctrine on the nature of space and time. Essentially, the argument boils down to the following: Kant assumed that the axioms of geometry are given a priori, and he only had classical geometry in mind; Einstein demonstrated that space is not Euclidean and that this could be verified empirically; ergo Kant was wrong. However, Kant did not state that space must be Euclidean; instead, he argued that it is a pure form of intuition. As such, space has no physical reality of its own, and therefore it is meaningless to ascribe physical properties to it. Actually, Kant never mentioned Euclid directly in his work, but he did refer many times to the physics of Newton, which is based on classical geometry; it was the axioms of this geometry, a most powerful tool of Newtonian mechanics, that he had in mind. He did not even exclude the possibility of other geometries, as can be seen in his early speculations on the dimensionality of space.

The important point missed by Reichenbach is that Riemannian geometry is necessarily based on Euclidean geometry. More precisely, a Riemannian space must be considered as locally Euclidean in order to be able to define basic concepts such as distance and parallel transport; this is achieved by defining a flat tangent space at every point, and then extending all properties of this flat space to the globally curved space (Luther Pfahler Eisenhart, Riemannian Geometry). To begin with, the structure of a Riemannian space is given by its metric tensor gμν, from which the (differential) length is defined as ds² = gμν dxμ dxν; but this is nothing less than a generalization of the usual Pythagorean theorem in Euclidean space. As for the fundamental concept of parallel transport, it is taken directly from its analogue in Euclidean space: it refers to the transport of abstract (not material, as Helmholtz believed) figures in such a space. Thus Riemann’s geometry cannot be free of synthetic a priori propositions, because it is entirely based upon concepts such as length and congruence taken from Euclid. We may conclude that Euclid’s geometry is the condition of possibility for a more general geometry, such as Riemann’s, simply because it is the natural geometry adapted to our understanding; Kant would say that it is our form of grasping space intuitively. The possibility of constructing abstract spaces does not refute Kant’s thesis; on the contrary, it reinforces it.
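A minimal worked example of this point (my own illustration): in Cartesian coordinates on the Euclidean plane the metric tensor is gμν = diag(1, 1), and the line element reduces to the Pythagorean formula

ds² = dx² + dy²,

whereas on a sphere of radius R the same construction gives

ds² = R² dθ² + R² sin²θ dφ²,

a curved metric which nevertheless looks Euclidean to first order in a small neighbourhood of any point.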

Mailvox: The Origins of the Alt-Retard


The reaction to degeneracy can sometimes happen within the spirit of degeneracy. Genocide is not the morally wholesome solution to whoredom. The Marxist-Leninists regard Fascism as a form of bourgeois reaction. That is their frame; it is how they like to position their argument, as it emphasises the difference between the two. But I think it is far better to think of Socialism as Left Modernism and Fascism as Right Modernism, with Left and Right being dispositional/temperamental distinctions. They might be different teams, but they’re both playing the same game.

A Generation X reader sent me this analysis of the Fake Right Clown Posse, which somehow manages to be both sympathetic to the plight faced by the young men of today and contemptuous of what some of them have become in response. I think he is largely correct, and his analysis explains why their attempts to defend their race and their nations so often go awry.

We have no choice but to help them. The challenge is that the only answer to ignorance is information, and as we know, as we have witnessed, there are some who cannot be instructed by information.


Individuation. Thought of the Day 91.0


The first distinction is between two senses of the word “individuation” – one semantic, the other metaphysical. In the semantic sense of the word, to individuate an object is to single it out for reference in language or in thought. By contrast, in the metaphysical sense of the word, the individuation of objects has to do with “what grounds their identity and distinctness.” Sets are often used to illustrate the intended notion of “grounding.” The identity or distinctness of sets is said to be “grounded” in accordance with the principle of extensionality, which says that two sets are identical iff they have precisely the same elements:

SET(x) ∧ SET(y) → [x = y ↔ ∀u(u ∈ x ↔ u ∈ y)]
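As a small, purely illustrative aside (mine, not the author’s): extensionality is exactly how equality behaves in any concrete implementation of finite sets. In Python, for instance, two frozensets built from different descriptions are identical iff they have the same elements:

# A minimal sketch of extensionality for finite sets: identity is fixed
# entirely by membership, not by how the set happened to be described.
evens_below_10 = frozenset(n for n in range(10) if n % 2 == 0)
doubles_below_5 = frozenset(2 * n for n in range(5))

assert evens_below_10 == doubles_below_5              # same elements, "same" set
assert hash(evens_below_10) == hash(doubles_below_5)
assert evens_below_10 != frozenset({1, 3, 5})         # different elements, distinct sets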

The metaphysical and semantic senses of individuation are quite different notions, neither of which appears to be reducible to or fully explicable in terms of the other. Since sufficient sense cannot be made of the notion of “grounding of identity” on which the metaphysical notion of individuation is based, focusing on the semantic notion of individuation is an easy way out. This choice of focus means that our investigation is a broadly empirical one, drawing on empirical linguistics and psychology.

What is the relation between the semantic notion of individuation and the notion of a criterion of identity? It is by means of criteria of identity that semantic individuation is effected. Singling out an object for reference involves being able to distinguish this object from other possible referents with which one is directly presented. The final distinction is between two types of criteria of identity. A one-level criterion of identity says that two objects of some sort F are identical iff they stand in some relation RF:

Fx ∧ Fy → [x = y ↔ RF(x,y)]

Criteria of this form operate at just one level in the sense that the condition for two objects to be identical is given by a relation on these objects themselves. An example is the set-theoretic principle of extensionality.

A two-level criterion of identity relates the identity of objects of one sort to some condition on entities of another sort. The former sort of objects are typically given as functions of items of the latter sort, in which case the criterion takes the following form:

f(α) = f(β) ↔ α ≈ β

where the variables α and β range over the latter sort of item and ≈ is an equivalence relation on such items. An example is Frege’s famous criterion of identity for directions:

d(l1) = d(l2) ↔ l1 || l2

where the variables l1 and l2 range over lines or other directed items. An analogous two-level criterion relates the identity of geometrical shapes to the congruence of things or figures having the shapes in question. The decision to focus on the semantic notion of individuation makes it natural to focus on two-level criteria. For two-level criteria of identity are much more useful than one-level criteria when we are studying how objects are singled out for reference. A one-level criterion provides little assistance in the task of singling out objects for reference. In order to apply a one-level criterion, one must already be capable of referring to objects of the sort in question. By contrast, a two-level criterion promises a way of singling out an object of one sort in terms of an item of another and less problematic sort. For instance, when Frege investigated how directions and other abstract objects “are given to us”, although “we cannot have any ideas or intuitions of them”, he proposed that we relate the identity of two directions to the parallelism of the two lines in terms of which these directions are presented. This would be explanatory progress since reference to lines is less puzzling than reference to directions.
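A hedged computational sketch of a two-level criterion at work (the class and function names below are my own, chosen only for illustration): lines play the role of the less problematic items, parallelism is the equivalence relation ≈, and a direction is singled out via an invariant of the parallelism class, so that d(l1) = d(l2) holds exactly when l1 || l2.

from dataclasses import dataclass
from fractions import Fraction
from typing import Optional

@dataclass(frozen=True)
class Line:
    # y = slope * x + intercept; slope = None encodes a vertical line x = intercept
    slope: Optional[Fraction]
    intercept: Fraction

def parallel(l1: Line, l2: Line) -> bool:
    # the equivalence relation ≈ on the lower-level items (lines)
    return l1.slope == l2.slope

def direction(l: Line):
    # the function f: a direction is presented only via a line,
    # and identified by an invariant of its parallelism class
    return l.slope

l1 = Line(Fraction(1, 2), Fraction(0))
l2 = Line(Fraction(1, 2), Fraction(7))   # parallel to l1, but a different line
l3 = Line(Fraction(3), Fraction(0))

# Frege's criterion d(l1) = d(l2) iff l1 || l2, checked in both directions:
assert (direction(l1) == direction(l2)) == parallel(l1, l2)   # both True
assert (direction(l1) == direction(l3)) == parallel(l1, l3)   # both False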

Contact Geometry and Manifolds


Let M be a manifold of dimension 2n + 1. A contact structure on M is a distribution ξ ⊂ TM of dimension 2n, such that the defining 1-form α satisfies

α ∧ (dα)n ≠ 0 —– (1)

A 1-form α satisfying (1) is said to be a contact form on M. Let α be a contact form on M; then there exists a unique vector field Rα on M such that

α(Rα) = 1, ιRα dα = 0,

where ιRα dα denotes the contraction of dα along Rα. By definition Rα is called the Reeb vector field of the contact form α. A contact manifold is a pair (M, ξ) where M is a 2n + 1-dimensional manifold and ξ is a contact structure. Let (M, ξ) be a contact manifold and fix a defining (contact) form α. Then the 2-form κ = 1/2 dα defines a symplectic form on the contact structure ξ; therefore the pair (ξ, κ) is a symplectic vector bundle over M. A complex structure on ξ is the datum of J ∈ End(ξ) such that J2 = −Iξ.
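A small computational sketch of these definitions for the simplest case n = 1 (my own illustration, assuming sympy is available): for the standard contact form α = dz − y dx on R³ it checks the contact condition α ∧ dα ≠ 0 and verifies that ∂/∂z behaves as the Reeb vector field Rα.

# Standard contact form alpha = dz - y*dx on R^3 (so n = 1).
# Forms are stored by their coefficients in the coordinate bases.
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

alpha = sp.Matrix([-y, 0, 1])   # coefficients in the basis (dx, dy, dz)

# exterior derivative: (d alpha)(e_i, e_j) = d_i alpha_j - d_j alpha_i
dalpha = sp.Matrix(3, 3, lambda i, j: sp.diff(alpha[j], coords[i]) - sp.diff(alpha[i], coords[j]))

# 3-form alpha ∧ dalpha evaluated on the coordinate frame (e_x, e_y, e_z):
# (alpha ∧ beta)(X, Y, Z) = alpha(X) beta(Y, Z) - alpha(Y) beta(X, Z) + alpha(Z) beta(X, Y)
top = alpha[0]*dalpha[1, 2] - alpha[1]*dalpha[0, 2] + alpha[2]*dalpha[0, 1]
print(sp.simplify(top))          # 1, so alpha ∧ dalpha ≠ 0 everywhere

# Candidate Reeb field R = d/dz, written in the frame (e_x, e_y, e_z):
R = sp.Matrix([0, 0, 1])
print(alpha.dot(R))              # alpha(R) = 1
print(sp.simplify(dalpha.T * R)) # the contraction of dalpha along R vanishes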

Let α be a contact form on M, with ξ = ker α, and let κ = 1/2 dα. A complex structure J on ξ is said to be κ-calibrated if gJ[x](·, ·) := κ[x](·, Jx ·) is a Jx-Hermitian inner product on ξx for any x ∈ M.

The set of κ-calibrated complex structures on ξ will be denoted by Cα(M). If J is a complex structure on ξ = ker α, then we extend it to an endomorphism of TM by setting

J(Rα) = 0.

Note that such a J satisfies

J2 =−I + α ⊗ Rα

If J is κ-calibrated, then it induces a Riemannian metric g on M given by

g := gJ + α ⊗ α —– (2)

Furthermore the Nijenhuis tensor of J is defined by

NJ(X, Y) = [JX, JY] − J[JX, Y] − J[X, JY] + J2[X, Y] for any X, Y ∈ TM

A Sasakian structure on a 2n + 1-dimensional manifold M is a pair (α, J), where

• α is a contact form;

• J ∈ Cα(M) satisfies NJ = −dα ⊗ Rα

The triple (M, α, J) is said to be a Sasakian manifold. Let (M, ξ) be a contact manifold. A differential r-form γ on M is said to be basic if

ιRα γ = 0, LRα γ = 0,

where L denotes the Lie derivative and Rα is the Reeb vector field of an arbitrary contact form defining ξ. We will denote by ΛrB(M) the set of basic r-forms on (M, ξ). Note that

dΛrB(M) ⊂ Λr+1B(M)

The cohomology HB(M) of this complex is called the basic cohomology of (M, ξ). If (M, α, J) is a Sasakian manifold, then

J(ΛrB(M)) = ΛrB(M), where, as usual, the action of J on r-forms is defined by

Jφ(X1,…, Xr) = φ(JX1,…, JXr)

Consequently ΛrB(M) ⊗ C splits as

ΛrB(M) ⊗ C = ⊕p+q=r Λp,qJ(ξ)

and, according to this gradation, it is possible to define the cohomology groups Hp,qB(M). The r-forms belonging to Λp,qJ(ξ) are said to be of type (p, q) with respect to J. Note that κ = 1/2 dα ∈ Λ1,1J(ξ) and it determines a non-vanishing cohomology class in H1,1B(M). The Sasakian structure (α, J) also induces a natural connection ∇ξ on ξ, given by

∇ξX Y = (∇X Y)ξ if X ∈ ξ,

∇ξX Y = [Rα, Y] if X = Rα,

where ∇ is the Levi-Civita connection of the metric g and the subscript ξ denotes the projection onto ξ. One easily gets

∇ξX J = 0, ∇ξX gJ = 0, ∇ξX dα = 0, ∇ξX Y − ∇ξY X = [X, Y]ξ,

for any X, Y ∈ TM. Consequently we have Hol(∇ξ) ⊆ U(n).

The basic cohomology class

cB1(M) = 1/2π [ρT] ∈ H1,1B(M)

is called the first basic Chern class of (M, α, J) (here ρT denotes the transverse Ricci form) and, if it vanishes, then (M, α, J) is said to be null-Sasakian.

Furthermore, a Sasakian manifold is called α-Einstein if there exist λ, ν ∈ C∞(M, R) such that

Ric = λg + να ⊗ α, where Ric is the Ricci tensor of g.

A submanifold p: L ↪ M of a (2n + 1)-dimensional contact manifold (M, ξ) is said to be Legendrian if:

1) dimRL = n,

2) p∗(TL) ⊂ ξ

Observe that, if α is a defining form of the contact structure ξ, then condition 2) is equivalent to saying that p∗α = 0. Hence Legendrian submanifolds are the analogue of Lagrangian submanifolds in contact geometry.
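Continuing the earlier R³ sketch (again my own illustration, assuming sympy): with α = dz − y dx and n = 1, a Legendrian submanifold is a curve t ↦ (x(t), y(t), z(t)) whose tangent lies in ξ, i.e. whose pullback of α vanishes, z′(t) − y(t) x′(t) = 0.

# Legendrian test for curves in (R^3, alpha = dz - y*dx): the pullback of alpha
# is (z'(t) - y(t) x'(t)) dt, which must vanish identically.
import sympy as sp

t = sp.symbols('t')

x, y, z = t, 3*t**2, t**3                                 # an illustrative candidate curve
print(sp.simplify(sp.diff(z, t) - y*sp.diff(x, t)))       # 0  -> Legendrian

x2, y2, z2 = sp.Integer(0), t, t                          # a non-example
print(sp.simplify(sp.diff(z2, t) - y2*sp.diff(x2, t)))    # 1  -> not Legendrian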