Spinorial Algebra


Superspace is to supersymmetry as Minkowski space is to the Lorentz group. Superspace provides the most natural geometrical setting in which to describe supersymmetric theories. Almost no physicist would work directly with the components of Lorentz four-vectors or higher-rank tensors to describe relativistic physics.

In a field theory, bosons and fermions are to be regarded as diffeomorphisms generating two different vector spaces; the supersymmetry generators are nothing but sets of linear maps between these spaces. We can thus place a supersymmetric theory in a more general geometrical framework by defining the collection of diffeomorphisms,

φi : R → RdL, i = 1,…, dL —– (1)

ψαˆ : R → RdR, αˆ = 1,…, dR —– (2)

where the one-dimensional dependence reminds us that we restrict our attention to mechanics. The free vector spaces generated by {φi}, i = 1,…, dL, and {ψαˆ}, αˆ = 1,…, dR, are respectively VL and VR, isomorphic to RdL and RdR. For matrix representations in the following, the two integers are restricted to the case dL = dR = d. Four different linear mappings can act on VL and VR:

ML : VL → VR, MR : VR → VL

UL : VL → VL, UR : VR → VR —– (3)

with linear map space dimensions

dim ML = dim MR = dR dL = d²,

dim UL = dL² = d², dim UR = dR² = d² —– (4)

as a consequence of linearity. To relate this construction to a general real (≡ GR) algebraic structure of dimension d and rank N, denoted GR(d,N), two more requirements need to be added.

Defining the generators of GR(d,N) as the family of N + N linear maps

LI ∈ {ML}, I = 1,…, N

RK ∈ {MR}, K = 1,…, N —– (5)

such that ∀ I, K = 1,…, N, we have

LI ◦ RK + LK ◦ RI = −2δIKIVR

RI ◦ LK + RK ◦ LI = −2δIKIVL —– (6)

where IVL and IVR are the identity maps on VL and VR. Equations (6) will later be embedded into a Clifford algebra, but one point has to be emphasized: we are working with real objects.

After equipping VL and VR with Euclidean inner products ⟨·,·⟩VL and ⟨·,·⟩VR, respectively, the generators satisfy the property

⟨φ, RI(ψ)⟩VL = −⟨LI(φ), ψ⟩VR, ∀ (φ, ψ) ∈ VL ⊕ VR —– (7)

This condition relates LI to the hermitian conjugate of RI, namely RI†, defined as usual by

⟨φ, RI(ψ)⟩VL = ⟨RI†(φ), ψ⟩VR —– (8)

such that

RI† = RIᵗ = −LI —– (9)
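Relations (6) together with the reality property (9) can be checked on a small explicit realization. A minimal plain-Python sketch, assuming the illustrative choice d = N = 2 with L1 the identity and L2 the real antisymmetric generator (this particular pair is our own assumption for the example, not a canonical choice):

```python
# A minimal sketch of the "garden algebra" relations (6) and the reality
# property (9), for the illustrative choice d = N = 2: L_1 = identity,
# L_2 = the real antisymmetric generator, R_I = -L_I^t.

def matmul(X, Y):
    """2x2 real matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def scale(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

I2 = [[1.0, 0.0], [0.0, 1.0]]
L = [I2, [[0.0, 1.0], [-1.0, 0.0]]]               # L_I, I = 1, 2
R = [scale(-1.0, transpose(LI)) for LI in L]      # R_I = -L_I^t, as in (9)

# L_I R_K + L_K R_I = -2 d_IK 1  and  R_I L_K + R_K L_I = -2 d_IK 1
for i in range(2):
    for k in range(2):
        target = scale(-2.0 if i == k else 0.0, I2)
        assert add(matmul(L[i], R[k]), matmul(L[k], R[i])) == target
        assert add(matmul(R[i], L[k]), matmul(R[k], L[i])) == target
```

Any other generator pair satisfying (6) is related to a choice of this kind by the conjugations in {UL} and {UR}.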

The role of the {UL} and {UR} maps is to connect different representations once a set of generators defined by conditions (6) and (7) has been chosen. Notice that (RILJ)ij ∈ {UL} and (LIRJ)αˆβˆ ∈ {UR}. Let us consider A ∈ {UL} and B ∈ {UR} such that

A : φ → φ′ = Aφ

B : ψ → ψ′ = Bψ —– (10)

Requiring, as an example, the invariance of the inner product (7),

⟨φ′, R′I(ψ′)⟩VL = ⟨Aφ, R′I B(ψ)⟩VL

= ⟨φ, Aᵗ R′I B(ψ)⟩VL

= ⟨φ, RI(ψ)⟩VL —– (11)

so a change of representation transforms the generators in the following manner:

LI → L′I = B LI A⁻¹

RI → R′I = A RI B⁻¹ —– (12)

In general, (6) and (7) do not identify a unique set of generators. Thus, an equivalence relation has to be defined on the space of possible sets of generators, say {LI, RI} ∼ {L′I, R′I} iff ∃ A ∈ {UL} and B ∈ {UR} such that L′I = B LI A⁻¹ and R′I = A RI B⁻¹.
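The equivalence (12) can be checked directly: conjugating a generator set by inner-product-preserving (hence orthogonal) maps A ∈ {UL}, B ∈ {UR}, for which A⁻¹ = Aᵗ, leaves the relations (6) intact. A plain-Python sketch with an illustrative GR(2,2) realization and arbitrary rotation angles:

```python
# Sketch of the equivalence (12): conjugating a GR(2,2) generator set by
# orthogonal maps A in {U_L}, B in {U_R} (so A^{-1} = A^t) preserves (6).
# Rotation angles 0.3 and 1.1 are arbitrary illustrative choices.
import math

def mm(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tp(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def rot(t):
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

I2 = [[1.0, 0.0], [0.0, 1.0]]
L = [I2, [[0.0, 1.0], [-1.0, 0.0]]]                        # generators as in (5)
R = [[[-x for x in row] for row in tp(Li)] for Li in L]    # R_I = -L_I^t

A, B = rot(0.3), rot(1.1)                  # orthogonal changes of basis
Lt = [mm(B, mm(Li, tp(A))) for Li in L]    # L_I -> L'_I = B L_I A^{-1}
Rt = [mm(A, mm(Ri, tp(B))) for Ri in R]    # R_I -> R'_I = A R_I B^{-1}

# L'_I R'_K + L'_K R'_I = B (L_I R_K + L_K R_I) B^t = -2 d_IK 1 still holds
for i in range(2):
    for k in range(2):
        S, T = mm(Lt[i], Rt[k]), mm(Lt[k], Rt[i])
        want = -2.0 if i == k else 0.0
        for a in range(2):
            for b in range(2):
                assert abs(S[a][b] + T[a][b] - (want if a == b else 0.0)) < 1e-12
```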

Moving on to how supersymmetry arises, we consider the algebraic derivations defined by

δεφi = iεI(RI)iαˆψαˆ

δεψαˆ = −εI(LI)αˆi ∂τφi —– (13)

where the real-valued fields {φi}, i = 1,…, dL, and {ψαˆ}, αˆ = 1,…, dR, can be interpreted as bosonic and fermionic, respectively. The fermionic nature attributed to the VR elements implies that the ML and MR generators, together with the supersymmetry transformation parameters εI, anticommute among themselves. Introducing the (dL + dR)-dimensional space VL ⊕ VR with vectors

Ψ = (ψ, φ)ᵗ —– (14)

(13) reads

δεΨ = (−εI LI ∂τφ, iεI RI ψ)ᵗ —– (15)

such that

[δε1, δε2]Ψ = iε1Iε2J (LIRJ ∂τψ, RILJ ∂τφ)ᵗ − iε2Jε1I (LJRI ∂τψ, RJLI ∂τφ)ᵗ = −2iε1Iε2I ∂τΨ —– (16)

utilizing that we have classical anticommuting parameters and that (6) holds. From (16) it is clear that δε acts as a supersymmetry generator, so that we can set

δQΨ := δεΨ = iεIQIΨ —– (17)

which is equivalent to writing

δQφi = i(εIQIψ)i

δQψαˆ = i(εIQIφ)αˆ —– (18)

with

QI = (0, LIH; RI, 0) —– (19)

where H = i∂τ. As a consequence of (16) a familiar anticommutation relation appears

{QI, QJ} = − 2iδIJH —– (20)

confirming that we have indeed recovered supersymmetry; once this is achieved, we can associate to the algebraic derivations (13) the variations defining the scalar supermultiplets. However, the choice (13) is not unique: we could equally well have taken a spinorial one,

δQξαˆ = εI(LI)αˆiFi

δQFi = −iεI(RI)iαˆ ∂τξαˆ —– (21)


Valencies of Predicates. Thought of the Day 125.0

Naturalizing semiotics - The triadic sign of Charles Sanders Peirce

Since icons are the means of representing qualities, they generally constitute the predicative side of more complicated signs:

The only way of directly communicating an idea is by means of an icon; and every indirect method of communicating an idea must depend for its establishment upon the use of an icon. Hence, every assertion must contain an icon or set of icons, or else must contain signs whose meaning is only explicable by icons. The idea which the set of icons (or the equivalent of a set of icons) contained in an assertion signifies may be termed the predicate of the assertion. (Collected Papers of Charles Sanders Peirce)

Thus, the predicate in logic as well as in ordinary language is essentially iconic. It is important to remember here Peirce’s generalization of the predicate from the traditional subject-copula-predicate structure. Predicates exist with more than one subject slot; this is the basis for Peirce’s logic of relatives and permits at the same time enlarging the scope of logic considerably and bringing it closer to ordinary language, where several-slot predicates prevail, for instance in all verbs with a valency larger than one. In his definition of these predicates by means of valency, that is, the number of empty slots in which subjects or, more generally, indices may be inserted, Peirce is actually the founder of valency grammar in the tradition of Tesnière. So, for instance, the structure ‘_ gives _ to _’, where the underlinings refer to slots, is a trivalent predicate. Thus, the word classes associated with predicates are not only adjectives, but verbs and common nouns; in short, all descriptive features in language are predicative.

This entails the fact that the similarity charted in icons covers more complicated cases than does the ordinary use of the word. Thus,

where ordinary logic considers only a single, special kind of relation, that of similarity, – a relation, too, of a particularly featureless and insignificant kind, the logic of relatives imagines a relation in general to be placed. Consequently, in place of the class, which is composed of a number of individual objects or facts brought together by means of their relation of similarity, the logic of relatives considers the system, which is composed of objects brought together by any kind of relations whatsoever. (The New Elements of Mathematics)

This allows for abstract similarity because one phenomenon may be similar to another in so far as both of them partake in the same relation, or more generally, in the same system – relations and systems being complicated predicates.

But not only more abstract features may thus act as the qualities invoked in an icon; these qualities may be of widely varying generality:

But instead of a single icon, or sign by resemblance of a familiar image or ‘dream’, evocable at will, there may be a complexus of such icons, forming a composite image of which the whole is not familiar. But though the whole is not familiar, yet not only are the parts familiar images, but there will also be a familiar image in its mode of composition. ( ) The sort of idea which an icon embodies, if it be such that it can convey any positive information, being applicable to some things but not to others, is called a first intention. The idea embodied by an icon, which cannot of itself convey any information, being applicable to everything or nothing, but which may, nevertheless, be useful in modifying other icons, is called a second intention. 

What Peirce distinguishes in these scholastic standard notions borrowed from Aquinas via Scotus, is, in fact, the difference between Husserlian formal and material ontology. Formal qualities like genus, species, dependencies, quantities, spatial and temporal extension, and so on are of course attributable to any phenomenon and do not as such, in themselves, convey any information in so far as they are always instantiated in and thus, like other Second Intentions, in the Husserlian manner dependent upon First Intentions, but they are nevertheless indispensable in the composition of first intentional descriptions. The fact that a certain phenomenon is composed of parts, has a form, belongs to a species, has an extension, has been mentioned in a sentence etc. does not convey the slightest information of it until it by means of first intentional icons is specified which parts in which composition, which species, which form, etc. Thus, here Peirce makes a hierarchy of icons which we could call material and formal, respectively, in which the latter are dependent on the former. One may note in passing that the distinctions in Peirce’s semiotics are themselves built upon such Second Intentions; thus it is no wonder that every sign must possess some Iconic element. Furthermore, the very anatomy of the proposition becomes just like in Husserlian rational grammar a question of formal, synthetic a priori regularities.

Among Peirce’s forms of inference, similarity plays a certain role within abduction, his notion for a ‘qualified guess’ in which a particular fact gives rise to the formation of a hypothesis which would have the fact in question as a consequence. Many such different hypotheses are of course possible for a given fact, and this inference is not necessary, but merely possible, suggestive. Precisely for this reason, similarity plays a seminal role here: an

originary Argument, or Abduction, is an argument which presents facts in its Premiss which presents a similarity to the fact stated in the conclusion but which could perfectly be true without the latter being so.

The hypothesis proposed is abducted by some sort of iconic relation to the fact to be explained. Thus, similarity is the very source of new ideas – which must subsequently be controlled deductively and inductively, to be sure. But iconicity does not only play this role in the contents of abductive inference, it plays an even more important role in the very form of logical inference in general:

Given a conventional or other general sign of an object, to deduce any other truth than that which it explicitly signifies, it is necessary, in all cases, to replace that sign by an icon. This capacity of revealing unexpected truth is precisely that wherein the utility of algebraic formulae consists, so that the iconic character is the prevailing one.

The very form of inferences depends on it being an icon; thus for Peirce the syllogistic schema inherent in reasoning has an iconic character:

‘Whenever one thing suggests another, both are together in the mind for an instant. [ ] every proposition like the premiss, that is having an icon like it, would involve [ ] a proposition related to it as the conclusion [ ]’. Thus, first and foremost deduction is an icon: ‘I suppose it would be the general opinion of logicians, as it certainly was long mine, that the Syllogism is a Symbol, because of its Generality.’ …. The truth, however, appears to be that all deductive reasoning, even simple syllogism, involves an element of observation; namely deduction consists in constructing an icon or diagram the relation of whose parts shall present a complete analogy with those of the parts of the objects of reasoning, of experimenting upon this image in the imagination, and of observing the result so as to discover unnoticed and hidden relations among the parts. 

It then is no wonder that synthetic a priori truths exist – even if Peirce prefers notions like ‘observable, universal truths’ – the result of a deduction may contain more than what is immediately present in the premises, due to the iconic quality of the inference.

Tranche Declension.


With the CDO (collateralized debt obligation) market picking up, it is important to build a stronger understanding of pricing and risk management models. The Gaussian copula model has well-known deficiencies and has been criticized, but it continues to be fundamental as a starting point. Here, we draw attention to the applicability of Gaussian inequalities in analyzing tranche loss sensitivity to correlation parameters in the Gaussian copula model.

We work with an RN-valued Gaussian random variable X = (X1, … , XN), where each Xj is normalized to mean 0 and variance 1, and study the equity tranche loss

L[0,a] = ∑m=1N lm1[Xm≤cm] – {∑m=1N lm1[Xm≤cm] – a}+

where l1,…, lN > 0, a > 0, and c1,…, cN ∈ R are parameters. We thus establish an identity between the sensitivity of E[L[0,a]] to the correlation rjk = E[XjXk] and the parameters cj and ck, from which we subsequently obtain the inequality

∂E[L[0,a]]/∂rjk ≤ 0

Applying this inequality to a CDO containing N names whose default behavior is governed by the Gaussian variables Xj shows that an increase in name-to-name correlation decreases expected loss in an equity tranche. This is a generalization of the well-known result for Gaussian copulas with uniform correlation.

Consider a CDO consisting of N names, with τj denoting the (random) default time of the jth name. Let

Xj = φj⁻¹(Fj(τj))

where Fj is the distribution function of τj (relative to the market pricing measure), assumed to be continuous and strictly increasing, and φj is the standard Gaussian distribution function. Then for any x ∈ R we have

P[Xj ≤ x] = P[τj ≤ Fj⁻¹(φj(x))] = Fj(Fj⁻¹(φj(x))) = φj(x)

which means that Xj has standard Gaussian distribution. The Gaussian copula model posits that the joint distribution of the Xj is Gaussian; thus,

X = (X1, …, XN)

is an RN-valued Gaussian variable whose marginals are all standard Gaussian. The correlation

rjk = E[XjXk]

reflects the default correlation between the names j and k. Now let

pj = P[τj ≤ T] = P[Xj ≤ cj]

be the probability that the jth name defaults within a time horizon T, which is held constant, where

cj = φj⁻¹(Fj(T))

is the default threshold of the jth name.

Schematically, the essential phenomenon is the default of name j, which happens if the default time τj falls within the time horizon T and results in a loss of amount lj > 0 in the CDO portfolio. Thus, the total loss during the time period [0, T] is

L = ∑m=1N lm1[Xm≤cm]

We are thus essentially working with a one-period CDO, ignoring discounting from the random time of actual default. A tranche is simply a range of loss for the portfolio; it is specified by a closed interval [a, b] with 0 ≤ a ≤ b. If the loss x is less than a, then the tranche is unaffected, whereas if x ≥ b then the entire tranche value b − a is eaten up by loss; in between, if a ≤ x ≤ b, the loss to the tranche is x − a. Thus, the tranche loss function t[a, b] is given by

t[a, b](x) = 0, if x < a; = x – a, if x ∈ [a, b]; = b – a, if x > b

or compactly,

t[a, b](x) = (x – a)+ – (x – b)+

From this, it is clear that t[a, b](x) is continuous in (a, b, x), and we see that it is a non-decreasing function of x. Thus, the loss in an equity tranche [0, a] is given by

t[0,a](L) = L − (L − a)+

with a > 0.
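The monotonicity claim ∂E[L[0,a]]/∂rjk ≤ 0 can be illustrated by Monte Carlo in a one-factor Gaussian copula. All numerical choices below (10 names, default probability 0.3, unit notionals, attachment a = 2, the two correlation levels) are illustrative assumptions, not values from the text:

```python
# Monte Carlo sketch: in a one-factor Gaussian copula, raising the uniform
# name-to-name correlation rho lowers the expected equity tranche loss
# E[L[0,a]]. All parameter values here are illustrative assumptions.
import random
from statistics import NormalDist

def tranche_loss(x, a, b):
    """t[a,b](x) = (x - a)^+ - (x - b)^+."""
    return max(x - a, 0.0) - max(x - b, 0.0)

def expected_equity_loss(rho, n_names=10, p=0.3, a=2.0, n_paths=20000, seed=7):
    """E[t[0,a](L)] under X_j = sqrt(rho) Z + sqrt(1 - rho) eps_j."""
    rng = random.Random(seed)
    c = NormalDist().inv_cdf(p)        # common default threshold c_j
    s, t = rho ** 0.5, (1.0 - rho) ** 0.5
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)        # common market factor
        loss = sum(1.0 for _ in range(n_names)
                   if s * z + t * rng.gauss(0.0, 1.0) <= c)  # l_m = 1
        total += tranche_loss(loss, 0.0, a)
    return total / n_paths

low_corr = expected_equity_loss(rho=0.05)
high_corr = expected_equity_loss(rho=0.8)
# dE[L[0,a]]/d r_jk <= 0: the equity tranche is "short correlation"
assert high_corr < low_corr
```

At high correlation the loss distribution becomes bimodal, and the growing mass at zero defaults is what pulls the capped equity loss down, in line with the inequality above.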

CUSUM Deceleration. Drunken Risibility.


CUSUM, or cumulative sum, is used for detecting and monitoring abrupt changes in distribution. Let us introduce a measurable space (Ω, F), where Ω = R∞, F = ∪nFn and Fn = σ{Yi, i ∈ {0, 1, …, n}}. The law of the sequence Yi, i = 1, …, is defined by the family of probability measures {Pv}, v ∈ N*. In other words, the probability measure Pv for a given v > 0, playing the role of the change point, is the measure generated on Ω by the sequence Yi, i = 1, …, when the distribution of the Yi’s changes at time v. The probability measures P0 and P∞ are the measures generated on Ω by the random variables Yi when they have an identical distribution throughout, the change occurring at time 0 or never, respectively. In other words, the system defined by the sequence Yi undergoes a “regime change” at the change point time v, passing from the regime that generates P∞ to the one that generates P0.

The CUSUM (cumulative sum control chart) statistic is defined as the maximum of the log-likelihood ratio of the measure Pv to the measure P∞ on the σ-algebra Fn. That is,

Cn := max0≤v≤n log dPv/dP∞|Fn —– (1)

is the CUSUM statistic on the σ-algebra Fn. The CUSUM statistic process is then the collection of the CUSUM statistics {Cn} of (1) for n = 1, ….

The CUSUM stopping rule is then

T(h) := inf {n ≥ 0: max0≤v≤n log dPv/dP∞|Fn ≥ h} —– (2)

for some threshold h > 0. In the CUSUM stopping rule (2), the CUSUM statistic process of (1) is initialized at

C0 = 0 —– (3)

The CUSUM statistic process was first introduced by E. S. Page in the form that it takes when the sequence of random variables Yi is independent and Gaussian; that is, Yi ∼ N(μ, 1), i = 1, 2,…, with μ = μ0 for i < v and μ = μ1 for i ≥ v. Since its introduction by Page, the CUSUM statistic process of (1) and its associated CUSUM stopping time of (2) have been used in a plethora of applications where it is of interest to detect abrupt changes in the statistical behavior of observations in real time. Examples of such applications are signal processing, monitoring the outbreak of an epidemic, financial surveillance, and computer vision. The popularity of the CUSUM stopping time (2) is mainly due to its low complexity and optimality properties in both discrete and continuous time models.

Let Yi ∼ N(μ0, σ²), changing to Yi ∼ N(μ1, σ²) at the change point time v. We now proceed to derive the form of the CUSUM statistic process (1) and its associated CUSUM stopping time (2). Let us denote by φ(x) = (1/√(2π)) e−x²/2 the Gaussian kernel. For the sequence of random variables Yi given earlier,

Cn := max0≤v≤n log dPv/dP∞|Fn

= max0≤v≤n log [∏i=1v−1 (1/σ)φ((Yi − μ0)/σ) ∏i=vn (1/σ)φ((Yi − μ1)/σ)] / [∏i=1n (1/σ)φ((Yi − μ0)/σ)]

= (1/σ²) max0≤v≤n (μ1 − μ0) ∑i=vn [Yi − (μ1 + μ0)/2] —– (4)

In view of (3), let us initialize the sequence (4) at Y0 = (μ1 + μ0)/2 and distinguish two cases.

a) μ1 > μ0: divide out (μ1 − μ0), multiply by the constant σ² in (4) and use (2) to obtain the CUSUM stopping time T+:

T+(h+) = inf {n ≥ 0: max0≤v≤n ∑i=vn [Yi − (μ1 + μ0)/2] ≥ h+} —– (5)

for an appropriately scaled threshold h+ > 0.

b) μ1 < μ0: divide out (μ1 − μ0), multiply by the constant σ² in (4) and use (2) to obtain the CUSUM stopping time T−:

T−(h−) = inf {n ≥ 0: max0≤v≤n ∑i=vn [(μ1 + μ0)/2 − Yi] ≥ h−} —– (6)

for an appropriately scaled threshold h− > 0.

The sequences (5) and (6) form a CUSUM according to the deviation of the monitored sequential observations from the average of their pre- and post-change means. Although the stopping times (5) and (6) can be derived by formal CUSUM regime-change considerations, they may also be used as general nonparametric stopping rules directly applied to sequential observations.
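The stopping rule (5) admits the familiar recursive implementation Sn = max(Sn−1, 0) + (Yn − k) with reference value k = (μ1 + μ0)/2, which reproduces the inner maximum over v. A minimal sketch on a noise-free series (an illustrative choice, not from the text):

```python
# Minimal sketch of the one-sided CUSUM stopping rule (5) for mu1 > mu0.
# The recursion S_n = max(S_{n-1}, 0) + (Y_n - k), k = (mu1 + mu0)/2,
# reproduces max_{v<=n} sum_{i=v}^n (Y_i - k). Series values illustrative.

def cusum_stop(ys, mu0, mu1, h):
    """First 1-based index n at which the CUSUM statistic reaches h, else None."""
    k = 0.5 * (mu0 + mu1)                 # reference value (mu1 + mu0)/2
    s = 0.0
    for n, y in enumerate(ys, start=1):
        s = max(s, 0.0) + (y - k)         # recursive CUSUM statistic
        if s >= h:
            return n
    return None

def brute_stat(ys, k):
    """Direct definition: max over v of sum_{i=v}^{n} (Y_i - k)."""
    xs = [y - k for y in ys]
    return max(sum(xs[v:]) for v in range(len(xs)))

# Noise-free illustration: pre-change mean 0, post-change mean 1, change at 50.
ys = [0.0] * 50 + [1.0] * 150
assert cusum_stop(ys, mu0=0.0, mu1=1.0, h=5.0) == 60   # 10 post-change steps of +0.5
assert abs(brute_stat(ys[:60], 0.5) - 5.0) < 1e-12     # recursion matches definition
```

With noisy Gaussian observations the same function applies unchanged; only the detection delay and false-alarm behavior become random, governed by the threshold h.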

Philosophizing Loops – Why Spin Foam Constraints to 3D Dynamics Evolution?


The philosophy of loops is canonical, i.e., an analysis of the evolution of variables defined classically through a foliation of spacetime by a family of space-like three-surfaces Σt. The standard choice is the three-dimensional metric gij and its canonical conjugate, related to the extrinsic curvature. If the system is reparametrization invariant, the total Hamiltonian vanishes; this Hamiltonian constraint is usually called the Wheeler-DeWitt equation. Choosing the canonical variables is fundamental, to say the least.

Abhay Ashtekar’s insight stems from the definition of an original set of variables based on the Einstein-Hilbert Lagrangian written in the form,

S = ∫ea ∧ eb ∧ Rcdεabcd —– (1)

where ea are the one-forms associated to the tetrad,

ea ≡ eaμdxμ —– (2)

The associated SO(1, 3) connection one-form ϖab is called the spin connection. Its field strength is the curvature expressed as a two form:

Rab ≡ dϖab + ϖac ∧ ϖcb —– (3)

Ashtekar’s variables are actually based on the SU(2) self-dual connection

A = ϖ − i ∗ ϖ —– (4)

Its field strength is

F ≡ dA + A ∧ A —– (5)

The dynamical variables are then (Ai, Ei ≡ F0i). The main virtue of these variables is that constraints are then linearized. One of them is exactly analogous to Gauss’ law:

DiEi = 0 —– (6)

There is another one related to three-dimensional diffeomorphisms invariance,

trFijEi = 0 —– (7)

and, finally, there is the Hamiltonian constraint,

trFijEiEj = 0 —– (8)

On a purely mathematical basis, there is no doubt that Ashtekar’s variables are of great ingenuity. As a physical tool to describe the metric of space, however, they are not real in general. This forces a reality condition to be imposed, which is awkward. For this reason it is usually preferred to use the Barbero-Immirzi formalism, in which the connection depends on a free parameter γ,

Aia = ϖia + γKia —– (9)

ϖ being the spin connection, and K the extrinsic curvature. When γ = i, Ashtekar’s formalism is recovered; for other values of γ, the explicit form of the constraints is more complicated. Even if there is a Hamiltonian constraint that seems promising, what isn’t particularly clear is whether the quantum constraint algebra is isomorphic to the classical algebra.

Some states which satisfy the Ashtekar constraints are given by the loop representation, which can be introduced from the construct (depending both on the gauge field A and on a parametrized loop γ)

W(γ, A) ≡ tr P e∮γA —– (10)

and a functional transform mapping functionals of the gauge field ψ(A) into functionals of loops, ψ(γ):

ψ(γ) ≡ ∫DAW(γ, A) ψ(A) —– (11)

When one divides by diffeomorphisms, it is found that functions of knot classes (diffeomorphism classes of smooth, non-self-intersecting loops) satisfy all the constraints. Some particular states sought to reproduce smooth spaces at coarse graining are the weaves. It is not clear to what extent they also approximate the conjugate variables (that is, the extrinsic curvature).
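The trace in the construct (10) is what makes the loop variable gauge invariant. A plain-Python sketch checks this on a discretized loop; the su(2)-valued connection samples and the gauge rotation below are arbitrary illustrative values:

```python
# Sketch: gauge invariance of the Wilson-loop construct (10),
# W(γ, A) = tr P exp(∮γ A). The loop is discretized into segments carrying
# su(2) samples; the holonomy is the path-ordered product of segment
# exponentials, and its trace is unchanged by a constant gauge rotation
# A -> g A g^{-1}. All numerical values are illustrative.
import math

def mm(X, Y):
    """2x2 complex matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def su2_exp(v):
    """exp(i(v1 s1 + v2 s2 + v3 s3)) = cos|v| 1 + i sin|v| n.s (s_i Pauli)."""
    t = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    c = math.cos(t)
    s = math.sin(t) / t if t else 1.0
    n1, n2, n3 = s * v[0], s * v[1], s * v[2]
    return [[c + 1j * n3, 1j * n1 + n2],
            [1j * n1 - n2, c - 1j * n3]]

segments = [(0.3, -0.1, 0.4), (0.0, 0.5, -0.2), (-0.4, 0.2, 0.1)]

U = [[1, 0], [0, 1]]
for v in segments:                       # path-ordered product along the loop
    U = mm(U, su2_exp(v))
trace = U[0][0] + U[1][1]                # W(γ, A)

g = su2_exp((0.7, 0.1, -0.3))            # constant gauge transformation
ginv = su2_exp((-0.7, -0.1, 0.3))        # its inverse
Ug = [[1, 0], [0, 1]]
for v in segments:                       # each segment holonomy conjugated by g
    Ug = mm(Ug, mm(g, mm(su2_exp(v), ginv)))
trace_g = Ug[0][0] + Ug[1][1]            # W(γ, g A g^{-1})

assert abs(trace - trace_g) < 1e-9       # the trace is gauge invariant
```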

In the presence of a cosmological constant the Hamiltonian constraint reads:

εijkEaiEbj(Fkab + λ/3εabcEck) = 0 —– (12)

A particular class of solutions of the constraint, expounded by Lee Smolin, are the self-dual solutions of the form

Fiab = -λ/3εabcEci —– (13)

Loop states in general (suitably symmetrized) can be represented as spin network states: colored lines (carrying some SU(2) representation) meeting at nodes where intertwining SU(2) operators act. There is also a path-integral representation, known as spin foam, a topological theory of colored surfaces representing the evolution of a spin network. Spin foams can also be considered as an independent approach to the quantization of the gravitational field. In addition to its specific problems, the Hamiltonian constraint does not say in what sense (with respect to what) the three-dimensional dynamics evolves.

Complete Manifolds’ Pure Logical Necessity as the Totality of Possible Formations. Thought of the Day 124.0


In Logical Investigations, Husserl called his theory of complete manifolds the key to the only possible solution to how in the realm of numbers impossible, non-existent, meaningless concepts might be dealt with as real ones. In Ideas, he wrote that his chief purpose in developing his theory of manifolds had been to find a theoretical solution to the problem of imaginary quantities (Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy).

Husserl saw how questions regarding imaginary numbers come up in mathematical contexts in which formalization yields constructions which arithmetically speaking are nonsense, but can be used in calculations. When formal reasoning is carried out mechanically as if these symbols have meaning, if the ordinary rules are observed, and the results do not contain any imaginary components, these symbols might be legitimately used. And this could be empirically verified (Philosophy of Arithmetic_ Psychological and Logical Investigations with Supplementary Texts).

In a letter to Carl Stumpf in the early 1890s, Husserl explained how, in trying to understand how operating with contradictory concepts could lead to correct theorems, he had found that for imaginary numbers like √2 and √-1, it was not a matter of the possibility or impossibility of concepts. Through the calculation itself and its rules, as defined for those fictive numbers, the impossible fell away, and a genuine equation remained. One could calculate again with the same signs, but referring to valid concepts, and the result was again correct. Even if one mistakenly imagined that what was contradictory existed, or held the most absurd theories about the content of the corresponding concepts of number, the calculation remained correct if it followed the rules. He concluded that this must be a result of the signs and their rules (Early Writings in the Philosophy of Logic and Mathematics). The fact that one can generalize, produce variations of formal arithmetic that lead outside the quantitative domain without essentially altering formal arithmetic’s theoretical nature and calculational methods brought Husserl to realize that there was more to the mathematical or formal sciences, or the mathematical method of calculation than could be captured in purely quantitative analyses.

Understanding the nature of theory forms shows how reference to impossible objects can be justified. According to his theory of manifolds, one could operate freely within a manifold with imaginary concepts and be sure that what one deduced was correct when the axiomatic system completely and unequivocally determined the body of all the configurations possible in a domain by a purely analytical procedure. It was the completeness of the axiomatic system that gave one the right to operate in that free way. A domain was complete when each grammatically constructed proposition exclusively using the language of the domain was determined from the outset to be true or false in virtue of the axioms, i.e., necessarily followed from the axioms or did not. In that case, calculating with expressions without reference could never lead to contradictions. Complete manifolds have the

distinctive feature that a finite number of concepts and propositions – to be drawn as occasion requires from the essential nature of the domain under consideration –  determines completely and unambiguously on the lines of pure logical necessity the totality of all possible formations in the domain, so that in principle, therefore, nothing further remains open within it.

In such complete manifolds, he stressed, “the concepts true and formal implication of the axioms are equivalent” (Ideas).

Husserl pointed out that there may be two valid discipline forms that stand in relation to one another in such a way that the axiom system of one may be a formal limitation of that of the other. It is then clear that everything deducible in the narrower axiom system is included in what is deducible in the expanded system, he explained. In the arithmetic of cardinal numbers, Husserl explained, there are no negative numbers, for the meaning of the axioms is so restrictive as to make subtracting 4 from 3 nonsense. Fractions are meaningless there. So are irrational numbers, √–1, and so on. Yet in practice, all the calculations of the arithmetic of cardinal numbers can be carried out as if the rules governing the operations are unrestrictedly valid and meaningful. One can disregard the limitations imposed in a narrower domain of deduction and act as if the axiom system were a more extended one. We cannot arbitrarily expand the concept of cardinal number, Husserl reasoned. But we can abandon it and define a new, pure formal concept of positive whole number with the formal system of definitions and operations valid for cardinal numbers. And, as set out in our definition, this formal concept of positive numbers can be expanded by new definitions while remaining free of contradiction. Fractions do not acquire any genuine meaning through our holding onto the concept of cardinal number and assuming that units are divisible, he theorized, but rather through our abandonment of the concept of cardinal number and our reliance on a new concept, that of divisible quantities. That leads to a system that partially coincides with that of cardinal numbers, but part of which is larger, meaning that it includes additional basic elements and axioms. And so in this way, with each new quantity, one also changes arithmetics. The different arithmetics do not have parts in common. They have totally different domains, but an analogous structure. 
They have forms of operation that are in part alike, but different concepts of operation.

For Husserl, formal constraints banning meaningless expressions, meaningless imaginary concepts, and reference to non-existent and impossible objects restrict us in our theoretical, deductive work; but resorting to the infinity of pure forms and transformations of forms frees us from such conditions and explains why the use of imaginaries, of what is meaningless, must lead not to meaningless, but to true results.

Metaphysical Would-Be(s). Drunken Risibility.


If one were to look at Quine’s commitment to similarity, natural kinds, dispositions, causal statements, etc., it is evident that it takes him close to Peirce’s conception of Thirdness – even if Quine, in a utopian vision, imagines that all such concepts will in a remote future dissolve and vanish in favor of purely microstructural descriptions.

A crucial difference remains, however, which becomes evident when one looks at Quine’s brief formula for ontological commitment, the famous idea that ‘to be is to be the value of a bound variable’. For even if this motto is stated exactly to avoid commitment to several different types of being, it immediately prompts the question: the equation, in which the variable is presumably bound, which status does it have? Governing the behavior of existing variable values, is that not in some sense being real?

This will be Peirce’s realist idea – that regularities, tendencies, dispositions, patterns, may possess real existence, independent of any observer. In Peirce, this description of Thirdness is concentrated in the expression ‘real possibility’, and even if it may sound exceedingly metaphysical at first glance, it amounts, at closer look, to the claim that regularities charted by science are not mere shorthands for collections of single events but do possess reality status. In Peirce, the idea of real possibilities thus springs from his philosophy of science – he observes that science, contrary to philosophy, is spontaneously realist, and is right in being so. Real possibilities are thus counterposed to mere subjective possibilities due to lack of knowledge on the part of the subject speaking: the possibility of ‘not known not to be true’.

In a famous piece of self-critique from his late, realist period, Peirce attacks his earlier arguments (from ‘How to Make Our Ideas Clear’, in the late 1890s considered by himself the birth certificate of pragmatism after James’s reference to Peirce as pragmatism’s inventor). Then, he wrote

let us ask what we mean by calling a thing hard. Evidently that it will not be scratched by many other substances. The whole conception of this quality, as of every other, lies in its conceived effects. There is absolutely no difference between a hard thing and a soft thing so long as they are not brought to the test. Suppose, then, that a diamond could be crystallized in the midst of a cushion of soft cotton, and should remain there until it was finally burned up. Would it be false to say that that diamond was soft? […] Reflection will show that the reply is this: there would be no falsity in such modes of speech.

More than twenty-five years later, however, he attacks this argument as bearing witness to the nominalism of his youth. Now instead he supports the

scholastic doctrine of realism. This is usually defined as the opinion that there are real objects that are general, among the number being the modes of determination of existent singulars, if, indeed, these be not the only such objects. But the belief in this can hardly escape being accompanied by the acknowledgment that there are, besides, real vagues, and especially real possibilities. For possibility being the denial of a necessity, which is a kind of generality, is vague like any other contradiction of a general. Indeed, it is the reality of some possibilities that pragmaticism is most concerned to insist upon. The article of January 1878 endeavored to gloze over this point as unsuited to the exoteric public addressed; or perhaps the writer wavered in his own mind. He said that if a diamond were to be formed in a bed of cotton-wool, and were to be consumed there without ever having been pressed upon by any hard edge or point, it would be merely a question of nomenclature whether that diamond should be said to have been hard or not. No doubt this is true, except for the abominable falsehood in the word MERELY, implying that symbols are unreal. Nomenclature involves classification; and classification is true or false, and the generals to which it refers are either reals in the one case, or figments in the other. 
For if the reader will turn to the original maxim of pragmaticism at the beginning of this article, he will see that the question is, not what did happen, but whether it would have been well to engage in any line of conduct whose successful issue depended upon whether that diamond would resist an attempt to scratch it, or whether all other logical means of determining how it ought to be classed would lead to the conclusion which, to quote the very words of that article, would be ‘the belief which alone could be the result of investigation carried sufficiently far.’ Pragmaticism makes the ultimate intellectual purport of what you please to consist in conceived conditional resolutions, or their substance; and therefore, the conditional propositions, with their hypothetical antecedents, in which such resolutions consist, being of the ultimate nature of meaning, must be capable of being true, that is, of expressing whatever there be which is such as the proposition expresses, independently of being thought to be so in any judgment, or being represented to be so in any other symbol of any man or men. But that amounts to saying that possibility is sometimes of a real kind. (The Essential Peirce Selected Philosophical Writings, Volume 2)

In the same year, he states, in a letter to the Italian pragmatist Signor Calderoni:

I myself went too far in the direction of nominalism when I said that it was a mere question of the convenience of speech whether we say that a diamond is hard when it is not pressed upon, or whether we say that it is soft until it is pressed upon. I now say that experiment will prove that the diamond is hard, as a positive fact. That is, it is a real fact that it would resist pressure, which amounts to extreme scholastic realism. I deny that pragmaticism as originally defined by me made the intellectual purport of symbols to consist in our conduct. On the contrary, I was most careful to say that it consists in our concept of what our conduct would be upon conceivable occasions. For I had long before declared that absolute individuals were entia rationis, and not realities. A concept determinate in all respects is as fictitious as a concept definite in all respects. I do not think we can ever have a logical right to infer, even as probable, the existence of anything entirely contrary in its nature to all that we can experience or imagine. 

Here lies the core of Peirce’s metaphysical insistence on the reality of ‘would-be’s. Real possibilities, or would-bes, are vague to the extent that they describe certain tendential, conditional behaviors only, while they do not prescribe any other aspect of the single objects they subsume. They are, furthermore, represented in rationally interrelated clusters of concepts: the fact that the diamond is in fact hard, whether or not it ever scratches anything, lies in the fact that the diamond’s carbon structure displays a certain spatial arrangement – so it is an aspect of the very concept of diamond. And this is why the old pragmatic maxim may not work without real possibilities: it is they that the very maxim rests upon, because it is they that provide us with the ‘conceived consequences’ of accepting a concept. The maxim remains a test to weed out empty concepts with no conceived consequences – that is, empty a priori reasoning and superfluous metaphysical assumptions. But what remains after the maxim has been put to use is real possibilities. Real possibilities thus connect epistemology, expressed in the pragmatic maxim, to ontology: real possibilities are what science may grasp in conditional hypotheses.

The question is whether Peirce’s revision of his old ‘nominalist’ beliefs forms part of a more general development in Peirce from nominalism to realism. The locus classicus of this idea is Max Fisch (Peirce, Semeiotic and Pragmatism), where Fisch outlines a development from an initial nominalism (albeit of a strange kind, refusing, as always in Peirce, the existence of individuals determinate in all respects) via a series of steps towards realism, culminating after the turn of the century. Fisch’s first step is then Peirce’s theory of the real as that which reasoning would finally have as its result; the second step his Berkeley review with its anti-nominalism and the idea that the real is what is unaffected by what we may think of it; the third step his pragmatist idea that beliefs are conceived habits of action, even if he here clings to the idea that the conditionals in which habits are expressed are material implications only – like the definition of ‘hard’; the fourth step his reading of Abbott’s realist Scientific Theism (which later influenced his conception of scientific universals) and his introduction of the index in his theory of signs; the fifth step his acceptance of the reality of continuity; the sixth the introduction of real possibilities, accompanied by the development of existential graphs, topology and Peirce’s changing view of Hegelianism; the seventh the identification of pragmatism with realism; the eighth ‘his last stronghold, that of Philonian or material implication’. A further realist development, exchanging Peirce’s early frequentist idea of probability for a dispositional theory of probability, was, according to Fisch, never finished.

The issue of implication concerns the old discussion, quoted by Cicero, between the Hellenistic logicians Philo and Diodorus. The former formulated what we know today as material implication, while the latter objected on common-sense grounds that material implication does not capture implication in everyday language and thought, and that another implication type should be sought. As is well known, material implication says that p ⇒ q is equivalent to the claim that either p is false or q is true – so that p ⇒ q is false only when p is true and q is false. The problems arise when p is false, for any false p makes the implication true, and this leads to strange cases of true implications: the two parts of the implication need have no connection with each other at all, contrary to the spontaneous idea of implication in everyday thought. It is true that Peirce as a logician generally supports material (‘Philonian’) implication – but it is also true that he does express some second thoughts at around the same time as the afterthoughts on the diamond example.
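The truth-functional character of material implication, and the ‘paradox’ Diodorus objected to, can be made concrete in a few lines. The following is a minimal illustrative sketch (the function name `implies` is of course our own, not Peirce’s notation):

```python
# Material ("Philonian") implication: p => q is defined as (not p) or q.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# Full truth table: the implication is false only when p is true and q is false.
for p in (True, False):
    for q in (True, False):
        print(f"{p!s:5} => {q!s:5} : {implies(p, q)}")

# A false antecedent makes the implication vacuously true, regardless of q --
# the source of the counterintuitive cases Diodorus complained about:
assert implies(False, True) and implies(False, False)
```

The last two assertions exhibit exactly the feature discussed above: a false p makes p ⇒ q true whether or not p and q have any connection with each other.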

Peirce is a forerunner of the attempts to construct alternatives such as strict implication, and the reason why is, of course, that real possibilities are not adequately depicted by material implication. Peirce is in need of an implication which may somehow picture the causal dependency of q on p. The basic reason for the mature Peirce’s problems with the representation of real possibilities is not primarily logical, however. It is scientific. Peirce realizes that the scientific charting of anything but singular, actual events necessitates the real existence of tendencies and relations connecting singular events. Now, what kinds are those tendencies and relations? The hard diamond example seems to emphasize causality, but this probably depends on the point of view chosen. The ‘conceived consequences’ of the pragmatic maxim may be causal indeed: if we accept gravity as a real concept, then masses will attract one another – but they may all the same be structural: if we accept horse riders as a real concept, then we should expect horses, persons, the taming of horses, etc. to exist, or they may be teleological. In any case, the interpretation of the pragmatic maxim in terms of real possibilities paves the way for a distinction between empty a priori suppositions and real a priori structures.