# Define Operators Corresponding to Cobordisms Only If Each Connected Component of the Cobordism has Non-empty Outgoing Boundary. Drunken Risibility.

Define a category B whose objects are the oriented submanifolds of X, and whose vector space of morphisms from Y to Z is OYZ = ExtH(X)(H(Y), H(Z)) – the cohomology, as usual, has complex coefficients, and H(Y) and H(Z) are regarded as H(X)-modules by restriction. The composition of morphisms is given by the Yoneda composition of Ext groups. With this definition, however, it will not be true that OYZ is dual to OZY. (To see this it is enough to consider the case when Y = Z is a point of X, and X is a product of odd-dimensional spheres; then OYZ is a symmetric algebra, and is not self-dual as a vector space.)

We can do better by defining a cochain complex O’YZ of morphisms by

O’YZ = BΩ(X)(Ω(Y), Ω(Z)) —– (1)

where Ω(X) denotes the usual de Rham complex of a manifold X, and BA(B,C), for a differential graded algebra A and differential graded A- modules B and C, is the usual cobar resolution

Hom(B, C) → Hom(A ⊗ B, C) → Hom(A ⊗ A ⊗ B, C) → · · ·  —– (2)

in which the differential is given by

dƒ(a1 ⊗ · · · ⊗ ak ⊗ b) = a1ƒ(a2 ⊗ · · · ⊗ ak ⊗ b) + ∑i=1k−1 (−1)i ƒ(a1 ⊗ · · · ⊗ aiai+1 ⊗ · · · ⊗ ak ⊗ b) + (−1)k ƒ(a1 ⊗ · · · ⊗ ak−1 ⊗ akb) —– (3)

whose cohomology is ExtA(B,C). This is different from OYZ = ExtH(X)(H(Y), H(Z)), but related to it by a spectral sequence whose E2-term is OYZ and which converges to H(O’YZ) = ExtΩ(X)(Ω(Y), Ω(Z)). More important, however, is that H(O’YZ) is the homology of the space PYZ of paths in X which begin in Y and end in Z. To be precise, Hp(O’YZ) ≅ Hp+dZ(PYZ), where dZ is the dimension of Z. On the cochain complexes the Yoneda composition is associative up to cochain homotopy, and defines the structure of an A∞ category B’. The corresponding composition of homology groups
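For orientation, the two lowest cases of the differential (3) can be written out (a sketch, suppressing the Koszul signs coming from the internal grading of Ω(X)):

```latex
% k = 1: for f in Hom(B, C), df lands in Hom(A \otimes B, C):
(df)(a \otimes b) = a\,f(b) - f(ab),
\qquad \text{so } df = 0 \iff f \text{ is } A\text{-linear}.

% k = 2: for f in Hom(A \otimes B, C):
(df)(a_1 \otimes a_2 \otimes b)
  = a_1 f(a_2 \otimes b) - f(a_1 a_2 \otimes b) + f(a_1 \otimes a_2 b).
```

In particular the degree-zero cohomology of the complex (2) is HomA(B, C) = Ext0A(B, C), as the identification of its cohomology with ExtA(B, C) requires.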

Hi(PYZ) × Hj(PZW) → Hi+j−dZ(PYW) —— (4)

is the composition of the Gysin map associated to the inclusion of the codimension dZ submanifold M of pairs of composable paths in the product PYZ × PZW with the concatenation map M → PYW.
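Schematically, the composition (4) is the following three-step factorization (a sketch; ev1 and ev0, evaluation of a path at its endpoint and starting point, are names chosen here for illustration):

```latex
M = (\mathrm{ev}_1 \times \mathrm{ev}_0)^{-1}(\Delta_Z)
  = \{(\gamma, \delta) : \gamma(1) = \delta(0)\}
  \subset P_{YZ} \times P_{ZW},
\qquad \operatorname{codim} M = d_Z,

H_i(P_{YZ}) \otimes H_j(P_{ZW})
  \xrightarrow{\ \times\ } H_{i+j}(P_{YZ} \times P_{ZW})
  \xrightarrow{\ \text{Gysin}\ } H_{i+j-d_Z}(M)
  \xrightarrow{\ \text{concatenate}\ } H_{i+j-d_Z}(P_{YW}).
```

The codimension of M is dZ because M is the preimage of the diagonal ΔZ ⊂ Z × Z under the endpoint evaluations.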

Now let’s attempt to fit the closed string cochain algebra C to this A∞ category. C is equivalent to the usual Hochschild complex of the differential graded algebra Ω(X), whose cohomology is the homology of the free loop space LX with its degrees shifted downwards by the dimension dX of X, so that the cohomology Hi(C) is potentially non-zero for −dX ≤ i < ∞. There is a map Hi(X) → H−i(C) which embeds the ordinary cohomology ring of X into the Pontrjagin ring of the based loop space L0X, based at any chosen point in X.

The structure is, however, not a cochain-level open and closed theory, as we have no trace maps inducing inner products on H(O’YZ). When one tries to define operators corresponding to cobordisms it turns out to be possible only when each connected component of the cobordism has non-empty outgoing boundary.

# Intuitive Algebra (Groupoid/Categorical Structure) of Open Strings As Morphisms

A geometric Dirichlet brane is a triple (L, E, ∇E) – a submanifold L ⊂ M, carrying a vector bundle E, with connection ∇E.

The real dimension of L is also often brought into the nomenclature, so that one speaks of a Dirichlet p-brane if p = dimRL.

An open string which stretches from a Dirichlet brane (L, E, ∇E) to a Dirichlet brane (K, F, ∇F) is a map X from an interval I ≅ [0,1] to M, such that X(0) ∈ L and X(1) ∈ K. An “open string history” is a map from R into open strings, or equivalently a map from a two-dimensional surface with boundary, say Σ ≡ I × R, to M, such that the two boundaries embed into L and K.

The quantum theory of these open strings is defined by a functional integral over these histories, with a weight which depends on the connections ∇E and ∇F. It describes the time evolution of an open string state which is a wave function in a Hilbert space HB,B′ labelled by the two choices of brane B = (L, E, ∇E) and B′ = (K, F, ∇F).

Distinct Dirichlet branes can embed into the same submanifold L. One way to represent this would be to specify the configurations of Dirichlet branes as a set of submanifolds with multiplicity. However, we can also represent this choice by using the choice of bundle E. Thus, a set of N identical branes will be represented by tensoring the bundle E with CN. The connection is also obtained by tensor product. An N-fold copy of the Dirichlet brane (L, E, ∇E) is thus a triple (L, E ⊗ CN, ∇E ⊗ idN).

In physics, one visualizes this choice by labelling each open string boundary with a basis vector of CN, which specifies a choice among the N identical branes. These labels are called Chan-Paton factors. One then uses them to constrain the interactions between open strings. If we picture such an interaction as the joining of two open strings to one, the end of the first to the beginning of the second, we require not only the positions of the two ends to agree, but also the Chan-Paton factors. This operation is the intuitive algebra of open strings.

Mathematically, an algebra of open strings can always be tensored with a matrix algebra, in general producing a noncommutative algebra. More generally, if there is more than one possible boundary condition, then, rather than an algebra, it is better to think of this as a groupoid or categorical structure on the boundary conditions and the corresponding open strings. In the language of groupoids, particular open strings are elements of the groupoid, and the composition law is defined only for pairs of open strings with a common boundary. In the categorical language, boundary conditions are objects, and open strings are morphisms. The simplest intuitive argument that a non-trivial choice can be made here is to call upon the general principle that any local deformation of the world-sheet action should be a physically valid choice. In particular, particles in physics can be charged under a gauge field, for example the Maxwell field for an electron, the color Yang-Mills field for a quark, and so on. The wave function for a charged particle is then not complex-valued, but takes values in a bundle E.

Now, the effect of a general connection ∇E is to modify the functional integral by modifying the weight associated to a given history of the particle. Suppose the trajectory of a particle is defined by a map φ : R → M; then a natural functional on trajectories associated with a connection ∇ on M is simply its holonomy along the trajectory, a linear map from E|φ(t1) to E|φ(t2). The functional integral is now defined physically as a sum over trajectories with this holonomy included in the weight.

The simplest way to generalize this to a string is to consider the ls → 0 limit. Now the constraint of finiteness of energy is satisfied only by a string of vanishingly small length, effectively a particle. In this limit, both ends of the string map to the same point, which must therefore lie on L ∩ K.

The upshot is that, in this limit, the wave function of an open string between Dirichlet branes (L, E, ∇E) and (K, F, ∇F) transforms as a section of E ⊠ F over L ∩ K, with the natural connection on the direct product. In the special case of (L, E, ∇E) ≅ (K, F, ∇F), this reduces to the statement that an open string state is a section of End E. Open string states are sections of a graded vector bundle End E ⊗ Λ•T∗L, the degree-1 part of which corresponds to infinitesimal deformations of ∇E. In fact, these open string states are the infinitesimal deformations of ∇E, in the standard sense of quantum field theory, i.e., a single open string is a localized excitation of the field obtained by quantizing the connection ∇E. Similarly, other open string states are sections of the normal bundle of L within M, and are related in the same way to infinitesimal deformations of the submanifold. These relations, and their generalizations to open strings stretched between Dirichlet branes, define the physical sense in which the particular set of Dirichlet branes associated to a specified background M can be deduced from string theory.

# The Biological Kant. Note Quote.

The biological treatise takes as its object the realm of physics left out of Kant’s critical demarcations of scientific, that is, mathematical and mechanistic, physics. Here, the main idea was that scientifically understandable Nature was defined by lawfulness. In his Metaphysical Foundations of Natural Science, this idea was taken further in the following claim:

I claim, however, that there is only as much proper science to be found in any special doctrine on nature as there is mathematics therein, and further ‘a pure doctrine on nature about certain things in nature (doctrine on bodies and doctrine on minds) is only possible by means of mathematics’.

The basic idea is thus to identify Nature’s lawfulness with its ability to be studied by means of mathematical schemata uniting understanding and intuition. The central schema, to Kant, was numbers, so apt to be used in the understanding of mechanically caused movement. But already here, Kant is very well aware that a whole series of aspects of spontaneously experienced Nature is left out of sight by the concentration on matter in movement, and he calls for these further realms of Nature to be studied by a continuation of the Copernican turn, by the mind’s further study of the utmost limits of itself. Why do we spontaneously see natural purposes in Nature? Purposiveness is wholly different from necessity, which is crucial to Kant’s definition of Nature. There is no reason in the general concept of Nature (as lawful) to assume that nature’s objects may serve each other as purposes. Nevertheless, we do not stop assuming just that. But what we do when we ascribe purposes to Nature is use the faculties of mind in another way than in science, much closer to the way we use them in the appreciation of beauty and art, the object of the first part of the book immediately before the treatment of teleological judgment. This judgment is characterized by a central distinction, already widely argued in this first part of the book: the difference between determinative and reflective judgments. While the determinative judgment, used scientifically, decides whether a specific case follows a certain rule, explains it by means of a derivation from a principle, and thus constitutes the objectivity of the object in question, the reflective judgment lacks all these features. It does not proceed by means of explanation, but by mere analogy; it is not constitutive, but merely regulative; it does not prove anything but merely judges, and it has no principle of reason to rest its head upon but the very act of judging itself. These ideas are then elaborated throughout the critique of teleological judgment.

In the section Analytik der teleologischen Urteilskraft, Kant gradually approaches the question: he first treats merely formal purposiveness. We may ascribe purposes to geometry in so far as it is useful to us, just as rivers carrying fertile soils with them for trees to grow in may be ascribed purposes; these are, however, merely contingent purposes, dependent on an external telos. The crucial point is the existence of objects which are only possible as such in so far as they are defined by purposes:

That its form is not possible according to mere natural laws, that is, such laws as may be known by us through understanding applied to objects of the senses; on the contrary, even the empirical knowledge about them, regarding their cause and effect, presupposes concepts of reason.

The idea here is that in order to conceive of objects which may not be explained with reference to understanding and its (in this case, mechanical) concepts only, these must be grasped by the non-empirical ideas of reason itself. If causes are perceived as being interlinked in chains, then such contingencies are to be thought of only as small causal circles on the chain, that is, as things being their own cause. Hence Kant’s definition of the Idea of a natural purpose:

an object exists as natural purpose, when it is cause and effect of itself.

This can be thought of as an Idea without contradiction, Kant maintains, but not conceived. This circularity (the small causal circles) is a very important feature in Kant’s tentative schematization of purposiveness. Another way of coining this Idea is: things as natural purposes are organized beings. This entails that naturally purposeful objects must possess a certain spatio-temporal construction: the parts of such a thing must be possible only through their relation to the whole – and, conversely, the parts must actively connect themselves to this whole. Thus, the corresponding idea can be summed up as the Idea of the Whole which is necessary to pass judgment on any empirical organism, and it is very interesting to note that Kant sums up the determination of any part of a Whole by all other parts in the phrase that a natural purpose is possible only as an organized and self-organizing being. This is probably the very birth certificate of the metaphysics of self-organization. It is important to keep in mind that Kant does not feel any vitalist temptation to suppose any organizing power or any autonomy on the part of the whole which may come into being only by this process of self-organization between its parts. When Kant talks about the forming power in the formation of the Whole, it is thus nothing outside of this self-organization of its parts.

This leads to Kant’s final definition: an organized being is that in which everything is alternately end and means. This idea is extremely important as a formalization of the idea of teleology: natural purposes do not imply that there exist given, stable ends for nature to pursue; on the contrary, they are locally defined by causal cycles, in which every part interchangeably assumes the role of end and means. Thus, there is no absolute end in this construal of nature’s teleology; it analyzes teleology formally at the same time as it relativizes it with respect to substance. Kant takes care to note that this maxim need not be restricted to the beings – animals – which we spontaneously tend to judge as purposeful. The idea of natural purposes thus entails that there might exist a plan in nature rendering purposeful for us even processes which we have every reason to find repugnant. In this vision, teleology might embrace causality – and even aesthetics:

Also natural beauty, that is, its harmony with the free play of our epistemological faculties in the experience and judgment of its appearance, can be seen as a form of objective purposiveness of nature in its totality as a system, in which man is a member.

An important consequence of Kant’s doctrine is that teleology is, so to speak, secularized in two ways: (1) it is formal, and (2) it is local. It is formal because self-organization does not ascribe any special, substantial goal for organisms to pursue – other than the sustainment of self-organization. Thus teleology is merely a formal property of certain types of systems. This is why teleology is also local – it is to be found in certain systems when the causal chains form loops, as Kant metaphorically describes the cycles involved in self-organization – it is no overarching goal governing organisms from the outside. Teleology is a local, bottom-up process only.

Kant does not in any way doubt the existence of organized beings; what is at stake is the possibility of dealing with them scientifically, in terms of mechanics. Even if they exist as given things in experience, natural purposes cannot receive any concept. This implies that biology is evident in so far as the existence of organisms cannot be doubted, but biology will never rise to the heights of science: its attempts at doing so are delimited beforehand, all scientific explanations of organisms being bound to be mechanical. Following this line of argument, it corresponds very well to present-day reductionism in biology, which tries to reduce all problems of phenotypical characters, organization, morphogenesis, behavior, ecology, etc. to the biochemistry of genetics. But the other side of the argument is that no matter how successful this reduction may prove, it will never be able to reduce or replace the teleological point of view necessary in order to understand the organism as such in the first place.

Evidently, there is something deeply unsatisfactory in this conclusion, which is why most biologists have hesitated to adopt it and cling either to full-blown reductionism or to some brand of vitalism, subjecting themselves to the dangers of ‘transcendental illusion’ and allowing for some Goethe-like intuitive idea without any schematization. Kant tries to soften the question by philosophical means, by establishing a crossing over from metaphysics to physics – from the metaphysical constraints on mechanical physics to physics in its empirical totality, including the organized beings of biology. Pure mechanics leaves physics as a whole unorganized, and this organization is sought to be established by means of ‘mediating concepts’. Among them is the formative power, which is not conceived of in a vitalist, substantialist manner, but rather as a notion referring to the means by which matter manages to self-organize. It thus comprehends not only biological organization, but macroscopic solid-state physics as well. Here, he adds an important argument to the critique of judgment:

Because man is conscious of himself as a self-moving machine, without being able to further understand such a possibility, he can, and is entitled to, introduce a priori organic-moving forces of bodies into the classification of bodies in general and thus to distinguish mere mechanical bodies from self-propelled organic bodies.

# Interleaves

Many important spaces in topology and algebraic geometry have no odd-dimensional homology. For such spaces, functorial spatial homology truncation simplifies considerably. On the theory side, the simplification arises as follows: to define general spatial homology truncation, we used intermediate auxiliary structures, the n-truncation structures. For spaces that lack odd-dimensional homology, these can be replaced by a much simpler structure; again, every such space can be embedded in such a structure, which is the analogue of the general theory. On the application side, the crucial simplification is that the truncation functor t<n no longer requires that, in truncating a given continuous map, the map preserve additional structure on its domain and codomain. In general, t<n is defined on the category CWn⊃∂, meaning that a map must preserve chosen subgroups “Y”. Such a condition is generally necessary on maps, for otherwise no truncation exists. So arbitrary continuous maps between spaces with trivial odd-dimensional homology can be functorially truncated. In particular, the compression rigidity obstructions arising in the general theory do not arise for maps between such spaces.

Let ICW be the full subcategory of CW whose objects are simply connected CW-complexes K with finitely generated even-dimensional homology and vanishing odd-dimensional homology for any coefficient group. We call ICW the interleaf category.

For example, the space K = S2 ∪2 e3 (the 2-sphere with a 3-cell attached by a map of degree 2) is simply connected and has vanishing integral homology in odd dimensions. However, H3(K;Z/2) = Z/2 ≠ 0, so K is not an object of ICW.
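Both claims about K can be read off from its cellular chain complex, in which the only nontrivial boundary map is multiplication by the degree of the attaching map:

```latex
C_3(K) = \mathbb{Z} \xrightarrow{\ \cdot 2\ } C_2(K) = \mathbb{Z}:
\qquad H_3(K;\mathbb{Z}) = \ker(\cdot 2) = 0,
\quad  H_2(K;\mathbb{Z}) = \mathbb{Z}/2;

\text{mod } 2:\quad
\mathbb{Z}/2 \xrightarrow{\ 0\ } \mathbb{Z}/2:
\qquad H_3(K;\mathbb{Z}/2) = \mathbb{Z}/2 \neq 0.
```

Note that H2(K;Z) = Z/2 has torsion, consistent with the statement below that vanishing odd-dimensional homology for all coefficient groups forces torsion-free even homology.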

Let X be a space whose odd-dimensional homology vanishes for any coefficient group. Then the even-dimensional integral homology of X is torsion-free.

Taking the coefficient group Q/Z, we have

H2k+1(X) ⊗ Q/Z ⊕ Tor(H2k(X),Q/Z) ≅ H2k+1(X;Q/Z) = 0.

Thus H2k(X) is torsion-free, since the group Tor(H2k(X),Q/Z) is isomorphic to the torsion subgroup of H2k(X).
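Concretely, if H2k(X) contained a torsion element of order n (a hypothetical case, purely for illustration), the Tor term above would be nonzero:

```latex
\operatorname{Tor}(\mathbb{Z}/n,\ \mathbb{Q}/\mathbb{Z})
  \cong \ker\!\bigl(\mathbb{Q}/\mathbb{Z}
      \xrightarrow{\ \cdot n\ } \mathbb{Q}/\mathbb{Z}\bigr)
  = \tfrac{1}{n}\mathbb{Z}/\mathbb{Z}
  \cong \mathbb{Z}/n \neq 0,
```

which would make H2k+1(X;Q/Z) nonzero, contradicting the hypothesis on X.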

Any simply connected closed 4-manifold is in ICW. Indeed, such a manifold is homotopy equivalent to a CW-complex of the form

(⋁i=1k Si2) ∪ƒ e4

where the homotopy class of the attaching map ƒ : S3 → ⋁i=1k Si2 may be viewed as a symmetric k × k matrix with integer entries, as π3(⋁i=1k Si2) ≅ M(k), with M(k) the additive group of such matrices.
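Two standard 4-manifolds illustrate the correspondence between attaching maps and symmetric matrices (the matrices are their intersection forms):

```latex
\mathbb{CP}^2 \simeq S^2 \cup_{\eta} e^4
  \ (\eta = \text{Hopf map},\ k = 1):
  \quad (1);
\qquad
S^2 \times S^2 \simeq (S^2 \vee S^2) \cup_{[\iota_1, \iota_2]} e^4
  \ (k = 2):
  \quad \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
```

where [ι1, ι2] denotes the Whitehead product of the two inclusions of S2.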

Any simply connected closed 6-manifold with vanishing integral middle homology group is in ICW. If G is any coefficient group, then H1(M;G) ≅ H1(M) ⊗ G ⊕ Tor(H0(M),G) = 0, since H1(M) = 0 (M is simply connected) and H0(M) = Z is free. By Poincaré duality,

0 = H3(M) ≅ H³(M) ≅ Hom(H3(M),Z) ⊕ Ext(H2(M),Z),

so that H2(M) is free. This implies that Tor(H2M,G) = 0 and hence H3(M;G) ≅ H3(M) ⊗ G ⊕ Tor(H2M,G) = 0. Finally, by G-coefficient Poincaré duality,

H5(M;G) ≅ H¹(M;G) ≅ Hom(H1(M),G) ⊕ Ext(H0(M),G) = Hom(0,G) ⊕ Ext(Z,G) = 0.

Any smooth, compact toric variety X is in ICW: Danilov’s Theorem implies that H(X;Z) is torsion-free and the map A(X) → H(X;Z) given by composing the canonical map from Chow groups to homology, Ak(X) = An−k(X) → H2n−2k(X;Z), where n is the complex dimension of X, with Poincaré duality H2n−2k(X;Z) ≅ H2k(X;Z), is an isomorphism. Since the odd-dimensional cohomology of X is not in the image of this map, this asserts in particular that Hodd(X;Z) = 0. By Poincaré duality, Heven(X;Z) is free and Hodd(X;Z) = 0. These two statements allow us to deduce from the universal coefficient theorem that Hodd(X;G) = 0 for any coefficient group G. If we only wanted to establish Hodd(X;Z) = 0, then it would of course have been enough to know that the canonical, degree-doubling map A(X) → H(X;Z) is onto. One may then immediately reduce to the case of projective toric varieties, because every complete fan Δ has a projective subdivision Δ′, the corresponding proper birational morphism X(Δ′) → X(Δ) induces a surjection H(X(Δ′);Z) → H(X(Δ);Z), and the diagram comparing the Chow and homology groups of X(Δ′) and X(Δ) commutes.

Let G be a complex, simply connected, semisimple Lie group and P ⊂ G a connected parabolic subgroup. Then the homogeneous space G/P is in ICW. It is simply connected, since the fibration P → G → G/P induces an exact sequence

1 = π1(G) → π1(G/P) → π0(P) → π0(G) = 0,

which shows that π1(G/P) → π0(P) is a bijection; since P is connected, π1(G/P) is therefore trivial. Accordingly, there exist elements sw(P) ∈ H2l(w)(G/P;Z) (“Schubert classes,” given geometrically by Schubert cells), indexed by w ranging over a certain subset of the Weyl group of G, that form a basis for H(G/P;Z). (For w in the Weyl group, l(w) denotes the length of w when written as a reduced word in certain specified generators of the Weyl group.) In particular Heven(G/P;Z) is free and Hodd(G/P;Z) = 0. Thus Hodd(G/P;G) = 0 for any coefficient group G.

The linear groups SL(n, C), n ≥ 2, and the subgroups Sp(2n, C) ⊂ SL(2n, C) of transformations preserving the alternating bilinear form

x1yn+1 +···+ xny2n −xn+1y1 −···−x2nyn

on C2n × C2n are examples of complex, simply connected, semisimple Lie groups. A parabolic subgroup is a closed subgroup that contains a Borel group B. For G = SL(n,C), B is the group of all upper-triangular matrices in SL(n,C). In this case, G/B is the complete flag manifold

G/B = {0 ⊂ V1 ⊂···⊂ Vn−1 ⊂ Cn}

of flags of subspaces Vi with dimVi = i. For G = Sp(2n,C), the Borel subgroups B are the subgroups preserving a half-flag of isotropic subspaces and the quotient G/B is the variety of all such flags. Any parabolic subgroup P may be described as the subgroup that preserves some partial flag. Thus (partial) flag manifolds are in ICW. A special case is that of a maximal parabolic subgroup, preserving a single subspace V. The corresponding quotient SL(n, C)/P is a Grassmannian G(k, n) of k-dimensional subspaces of Cn. For G = Sp(2n,C), one obtains Lagrangian Grassmannians of isotropic k-dimensional subspaces, 1 ≤ k ≤ n. So Grassmannians are objects in ICW. The interleaf category is closed under forming fibrations.
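The smallest case makes the Schubert basis explicit: for G = SL(2, C) and B its upper-triangular Borel subgroup, the Weyl group is S2 = {e, s} with l(e) = 0 and l(s) = 1, and

```latex
G/B = \mathbb{CP}^1,
\qquad
H_*(\mathbb{CP}^1;\mathbb{Z})
  = \mathbb{Z}\,\sigma_e \oplus \mathbb{Z}\,\sigma_s,
\qquad
\sigma_e = [\mathrm{pt}] \in H_0,
\quad
\sigma_s = [\mathbb{CP}^1] \in H_2,
```

so Hodd(CP1;Z) = 0 and CP1 is indeed an object of ICW.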

# Quantifier – Ontological Commitment: The Case for an Agnostic. Note Quote.

What about the mathematical objects that, according to the platonist, exist independently of any description one may offer of them in terms of comprehension principles? Do these objects exist on the fictionalist view? Now, the fictionalist is not committed to the existence of such mathematical objects, although this doesn’t mean that the fictionalist is committed to the non-existence of these objects. The fictionalist is ultimately agnostic about the issue. Here is why.

There are two types of commitment: quantifier commitment and ontological commitment. We incur quantifier commitment to the objects that are in the range of our quantifiers. We incur ontological commitment when we are committed to the existence of certain objects. However, despite Quine’s view, quantifier commitment doesn’t entail ontological commitment. Fictional discourse (e.g. in literature) and mathematical discourse illustrate that. Suppose that there’s no way of making sense of our practice with fiction but to quantify over fictional objects. Still, people would strongly resist the claim that they are therefore committed to the existence of these objects. The same point applies to mathematical objects.

This move can also be made by invoking a distinction between partial quantifiers and the existence predicate. The idea here is to resist reading the existential quantifier as carrying any ontological commitment. Rather, the existential quantifier only indicates that the objects that fall under a concept (or have certain properties) are less than the whole domain of discourse. To indicate that the whole domain is invoked (e.g. that every object in the domain has a certain property), we use a universal quantifier. So, two different functions are clumped together in the traditional, Quinean reading of the existential quantifier: (i) to assert the existence of something, on the one hand, and (ii) to indicate that not the whole domain of quantification is considered, on the other. These functions are best kept apart. We should use a partial quantifier (that is, an existential quantifier free of ontological commitment) to convey that only some of the objects in the domain are referred to, and introduce an existence predicate in the language in order to express existence claims.

By distinguishing these two roles of the quantifier, we also gain expressive resources. Consider, for instance, the sentence:

(∗) Some fictional detectives don’t exist.

Can this expression be translated into the usual formalism of classical first-order logic with the Quinean interpretation of the existential quantifier? Prima facie, that doesn’t seem to be possible. The sentence would be contradictory: it would state that there exist fictional detectives who don’t exist. The obvious consistent translation here would be: ¬∃x Fx, where F is the predicate “is a fictional detective”. But this states that fictional detectives don’t exist. Clearly, this is a different claim from the one expressed in (∗). By declaring that some fictional detectives don’t exist, (∗) is still compatible with the existence of some fictional detectives. The regimented sentence denies this possibility.

However, it’s perfectly straightforward to express (∗) using the resources of partial quantification and the existence predicate. Suppose that “∃” stands for the partial quantifier and “E” stands for the existence predicate. In this case, we have: ∃x (Fx ∧¬Ex), which expresses precisely what we need to state.
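The contrast between the two regimentations can be checked mechanically in a toy finite model (a sketch; the domain and the extensions of the predicates are invented purely for illustration):

```python
# Toy finite model for the two regimentations of (*).
# The domain and the extensions of F ("is a fictional detective")
# and E (the existence predicate) are illustrative assumptions.

domain = ["holmes", "poirot", "paris"]
F = lambda x: x in {"holmes", "poirot"}   # fictional detectives
E = lambda x: x in {"paris"}              # objects satisfying the existence predicate

# Quinean regimentation of (*): "no x is a fictional detective" -- too strong.
quinean = not any(F(x) for x in domain)

# Partial-quantifier regimentation: some x is a fictional detective and fails E.
partial = any(F(x) and not E(x) for x in domain)

print(quinean)  # False: the domain does contain fictional detectives
print(partial)  # True: (*) comes out true, as intended
```

The partial quantifier is just the ordinary existential over the domain; what changes is that existence is now expressed by the predicate E rather than by quantification itself.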

Now, under what conditions is the fictionalist entitled to conclude that certain objects exist? In order to avoid begging the question against the platonist, the fictionalist cannot insist that only objects we can causally interact with exist. So, the fictionalist only offers sufficient conditions for us to be entitled to conclude that certain objects exist. Conditions such as the following seem uncontroversial. Suppose we have access to certain objects such that: (i) the access is robust (e.g. we blink, we move away, and the objects are still there); (ii) the access can be refined (e.g. we can get closer for a better look); (iii) the access allows us to track the objects in space and time; and (iv) the access is such that if the objects weren’t there, we wouldn’t believe that they were. In this case, having this form of access to these objects gives us good grounds to claim that these objects exist. In fact, it’s in virtue of conditions of this sort that we believe that tables, chairs, and so many other observable entities exist.

But recall that these are only sufficient, and not necessary, conditions. Thus, the resulting view turns out to be agnostic about the existence of the mathematical entities the platonist takes to exist – independently of any description. The fact that mathematical objects fail to satisfy some of these conditions doesn’t entail that these objects don’t exist. Perhaps these entities do exist after all; perhaps they don’t. What matters for the fictionalist is that it’s possible to make sense of significant features of mathematics without settling this issue.

Now what would happen if the agnostic fictionalist used the partial quantifier in the context of comprehension principles? Suppose that a vector space is introduced via suitable principles, and that we establish that there are vectors satisfying certain conditions. Would this entail that we are now committed to the existence of these vectors? It would if the vectors in question satisfied the existence predicate. Otherwise, the issue would remain open, given that the existence predicate only provides sufficient, but not necessary, conditions for us to believe that the vectors in question exist. As a result, the fictionalist would then remain agnostic about the existence of even the objects introduced via comprehension principles!

# Fictionalism. Drunken Risibility.

Applied mathematics is often used as a source of support for platonism. How else but by becoming platonists can we make sense of the success of applied mathematics in science? As an answer to this question, the fictionalist empiricist will note that it’s not the case that applied mathematics always works. In several cases, it doesn’t work as initially intended, and it works only when accompanied by suitable empirical interpretations of the mathematical formalism. For example, when Dirac found negative energy solutions to the equation that now bears his name, he tried to devise physically meaningful interpretations of these solutions. His first inclination was to ignore these negative energy solutions as not being physically significant, and he took the solutions to be just an artifact of the mathematics – as is commonly done in similar cases in classical mechanics. Later, however, he identified a physically meaningful interpretation of these negative energy solutions in terms of “holes” in a sea of electrons. But the resulting interpretation was empirically inadequate, since it entailed that protons and electrons had the same mass. Given this difficulty, Dirac rejected that interpretation and formulated another. He interpreted the negative energy solutions in terms of a new particle that had the same mass as the electron but opposite charge. A couple of years after Dirac’s final interpretation was published, Carl Anderson detected something that could be interpreted as the particle that Dirac posited. Asked whether he had been aware of Dirac’s papers, Anderson replied that he knew of the work, but he was so busy with his instruments that, as far as he was concerned, the discovery of the positron was entirely accidental.

The application of mathematics is ultimately a matter of using the vocabulary of mathematical theories to express relations among physical entities. Given that, for the fictionalist empiricist, the truth of the various theories involved – mathematical, physical, biological, and whatnot – is never asserted, no commitment to the existence of the entities that are posited by such theories is forthcoming. But if the theories in question – and, in particular, the mathematical theories – are not taken to be true, how can they be successfully applied? There is no mystery here. First, even in science, false theories can have true consequences. The situation here is analogous to what happens in fiction. Novels can, and often do, provide insightful, illuminating descriptions of phenomena of various kinds – for example, psychological or historical events – that help us understand the events in question in new, unexpected ways, despite the fact that the novels in question are not true. Second, given that mathematical entities are not subject to spatio-temporal constraints, it’s not surprising that they have no active role in applied contexts. Mathematical theories need only provide a framework that, suitably interpreted, can be used to describe the behavior of various types of phenomena – whether the latter are physical, chemical, biological, or whatnot. Having such a descriptive function is clearly compatible with the (interpreted) mathematical framework not being true, as Dirac’s case illustrates so powerfully. After all, as was just noted, one of the interpretations of the mathematical formalism was empirically inadequate.

On the fictionalist empiricist account, mathematical discourse is clearly taken on a par with scientific discourse. There is no change in the semantics. Mathematical and scientific statements are treated in exactly the same way. Both sorts of statements are truth-apt, and are taken as describing (correctly or not) the objects and relations they are about. The only shift is in the aim of the research. After all, on the fictionalist empiricist proposal, the goal is not truth, but something weaker: empirical adequacy – or truth only with respect to the observable phenomena. However, once again, this goal matters to both science and (applied) mathematics, and the semantic uniformity between the two fields is still preserved. According to the fictionalist empiricist, mathematical discourse is also taken literally. If a mathematical theory states that “There are differentiable functions such that…”, the theory is not going to be reformulated in any way to avoid reference to these functions. The truth of the theory, however, is never asserted. There’s no need for that, given that only the empirical adequacy of the overall theoretical package is required.

# Philosophy of Dimensions: M-Theory. Thought of the Day 85.0

Superstrings provided a perturbatively finite theory of gravity which, after compactification down to 3+1 dimensions, seemed potentially capable of explaining the strong, weak and electromagnetic forces of the Standard Model, including the required chiral representations of quarks and leptons. However, there appeared to be not one but five seemingly different but mathematically consistent superstring theories: the E8 × E8 heterotic string, the SO(32) heterotic string, the SO(32) Type I string, and Types IIA and IIB strings. Each of these theories corresponded to a different way in which fermionic degrees of freedom could be added to the string worldsheet.

Supersymmetry places an upper limit of eleven on the number of spacetime dimensions. Why, then, do superstring theories stop at ten? In fact, before the “first string revolution” of the mid-1980s, many physicists sought superunification in eleven-dimensional supergravity. Solutions to this most primitive supergravity theory include the elementary supermembrane and its dual partner, the solitonic superfivebrane. These are supersymmetric objects extended over two and five spatial dimensions, respectively. This brings to mind another question: why do superstring theories generalize zero-dimensional point particles only to one-dimensional strings, rather than to p-dimensional objects?

During the “second superstring revolution” of the mid-nineties it was found that, in addition to the (1 + 1)-dimensional string solutions, string theory contains soliton-like Dirichlet branes. These Dp-branes have (p + 1)-dimensional worldvolumes, which are hyperplanes in (9 + 1)-dimensional spacetime on which strings are allowed to end. If a closed string collides with a D-brane, it can turn into an open string whose ends move along the D-brane. The end points of such an open string satisfy conventional free boundary conditions along the worldvolume of the D-brane, and fixed (Dirichlet) boundary conditions in the 9 − p dimensions transverse to the D-brane.
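In conformal-gauge worldsheet terms, these boundary conditions take the standard textbook form (stated here only for concreteness, not taken from this passage):

```latex
% Open string with endpoints at \sigma = 0, \pi ending on a Dp-brane:
\partial_\sigma X^\mu \big|_{\sigma = 0,\pi} = 0,
  \qquad \mu = 0, \dots, p
  \quad \text{(Neumann: free along the worldvolume)}
X^i \big|_{\sigma = 0,\pi} = x_0^i,
  \qquad i = p+1, \dots, 9
  \quad \text{(Dirichlet: fixed in the transverse directions)}
```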

D-branes make it possible to probe string theories non-perturbatively, i.e., when the interactions are no longer assumed to be weak. This more complete picture makes it evident that the different string theories are actually related via a network of “dualities.” T-dualities relate two different string theories by interchanging winding modes and Kaluza-Klein states, via R → α′/R. For example, Type IIA string theory compactified on a circle of radius R is equivalent to Type IIB string theory compactified on a circle of radius α′/R. A similar relation holds between the E8 × E8 and SO(32) heterotic string theories. While T-dualities remain manifest at weak coupling, S-dualities are less well-established strong/weak-coupling relationships. For example, the SO(32) heterotic string is believed to be S-dual to the SO(32) Type I string, while the Type IIB string is self-S-dual. There is a duality of dualities, in which the T-dual of one theory is the S-dual of another. Compactification on various manifolds often leads to dualities. The heterotic string compactified on a six-dimensional torus T6 is believed to be self-S-dual. Also, the heterotic string on T4 is dual to the Type II string on the four-dimensional K3 surface. The heterotic string on T6 is dual to the Type II string on a Calabi-Yau manifold. The Type IIA string on a Calabi-Yau manifold is dual to the Type IIB string on the mirror Calabi-Yau manifold.
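The interchange of winding modes and Kaluza-Klein states can be made concrete with the closed-string mass formula on a circle of radius R, quoted here in its bosonic-string normalization (a standard textbook result, not something specific to this passage):

```latex
% n = Kaluza-Klein momentum number, w = winding number,
% N, \tilde{N} = left- and right-moving oscillator levels:
M^2 = \frac{n^2}{R^2} + \frac{w^2 R^2}{\alpha'^2}
      + \frac{2}{\alpha'}\bigl(N + \tilde{N} - 2\bigr)
```

The spectrum is invariant under R → α′/R combined with n ↔ w, which is precisely the exchange of Kaluza-Klein states and winding modes just described.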

This led to the discovery that all five string theories are actually different sectors of an eleven-dimensional non-perturbative theory, known as M-theory. When M-theory is compactified on a circle S1 of radius R11, it leads to the Type IIA string, with string coupling constant gs = R11^{3/2} (in eleven-dimensional Planck units). Thus, the illusion that this string theory is ten-dimensional is a remnant of weak-coupling perturbative methods. Similarly, if M-theory is compactified on a line segment S1/Z2, then the E8 × E8 heterotic string is recovered.
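The unit dependence of this coupling relation is worth making explicit. The standard relations between the circle radius, the string coupling, and the two natural length scales (textbook facts, stated here for orientation) are:

```latex
% In eleven-dimensional Planck units:
R_{11} = g_s^{2/3}\,\ell_p
  \quad\Longleftrightarrow\quad
  g_s = \left( R_{11} / \ell_p \right)^{3/2},
% while in string units:
R_{11} = g_s\,\ell_s .
```

Either way, strong coupling gs → ∞ corresponds to a large eleventh dimension, which is why the ten-dimensional appearance of the Type IIA string is an artifact of weak coupling.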

Just as a given string theory has a corresponding supergravity in its low-energy limit, eleven-dimensional supergravity is the low-energy limit of M-theory. Since we do not yet know what the full M-theory actually is, many different names have been attributed to the “M,” including Magical, Mystery, Matrix, and Membrane! Whenever we refer to “M-theory,” we mean the theory which subsumes all five string theories and whose low-energy limit is eleven-dimensional supergravity. We now have an adequate framework with which to understand a wealth of non-perturbative phenomena. For example, electric-magnetic duality in D = 4 is a consequence of string-string duality in D = 6, which in turn is the result of membrane-fivebrane duality in D = 11. Furthermore, the exact electric-magnetic duality has been extended to an effective duality of non-conformal N = 2 Seiberg-Witten theory, which can be derived from M-theory. In fact, it seems that all supersymmetric quantum field theories with any gauge group could have a geometrical interpretation through M-theory, as worldvolume fields propagating on a common intersection of stacks of p-branes wrapped around various cycles of compactified manifolds.

In addition, while perturbative string theory has vacuum degeneracy problems due to the billions of Calabi-Yau vacua, the non-perturbative effects of M-theory lead to smooth transitions from one Calabi-Yau manifold to another. The question to ask is then not why we live in one topology but rather why we live in a particular corner of the unique topology. M-theory might offer a dynamical explanation of this. While supersymmetry ensures that the high-energy values of the Standard Model coupling constants meet at a common value, which is consistent with the idea of grand unification, the gravitational coupling constant just misses this meeting point. In fact, M-theory may resolve long-standing cosmological and quantum gravitational problems. For example, M-theory accounts for a microscopic description of black holes by supplying the necessary non-perturbative components, namely p-branes. This solves the problem of counting black hole entropy by internal degrees of freedom.

# Reductionism of Numerical Complexity: A Wittgensteinian Excursion

Wittgenstein’s criticism of Russell’s logicist foundation of mathematics, contained in his Remarks on the Foundations of Mathematics, consists in saying that it is not the formalized version of mathematical deduction which vouches for the validity of the intuitive version, but conversely.

If someone tries to shew that mathematics is not logic, what is he trying to shew? He is surely trying to say something like: If tables, chairs, cupboards, etc. are swathed in enough paper, certainly they will look spherical in the end.

He is not trying to shew that it is impossible that, for every mathematical proof, a Russellian proof can be constructed which (somehow) ‘corresponds’ to it, but rather that the acceptance of such a correspondence does not lean on logic.

Taking up Wittgenstein’s criticism, Hao Wang (Computation, Logic, Philosophy) discusses the view that mathematics “is” axiomatic set theory as one of several possible answers to the question “What is mathematics?”. Wang points out that this view is epistemologically worthless, at least as far as the task of understanding the features guiding cognition is concerned:

Mathematics is axiomatic set theory. In a definite sense, all mathematics can be derived from axiomatic set theory. [ . . . ] There are several objections to this identification. [ . . . ] This view leaves unexplained why, of all the possible consequences of set theory, we select only those which happen to be our mathematics today, and why certain mathematical concepts are more interesting than others. It does not help to give us an intuitive grasp of mathematics such as that possessed by a powerful mathematician. By burying, e.g., the individuality of natural numbers, it seeks to explain the more basic and the clearer by the more obscure. It is a little analogous to asserting that all physical objects, such as tables, chairs, etc., are spherical if we swathe them with enough stuff.

Reductionism is an age-old project; a close forerunner of its incarnation in set theory was the arithmetization program of the 19th century. It is interesting that one of its prominent representatives, Richard Dedekind (Essays on the Theory of Numbers), displayed a rather distanced attitude towards a thoroughgoing execution of the program:

It appears as something self-evident and not new that every theorem of algebra and higher analysis, no matter how remote, can be expressed as a theorem about natural numbers [ . . . ] But I see nothing meritorious [ . . . ] in actually performing this wearisome circumlocution and insisting on the use and recognition of no other than rational numbers.

Perec wrote a detective novel without using the letter ‘e’ (La Disparition, in English A Void), thus proving not only that such an enormous enterprise is indeed possible but also that formal constraints sometimes have great aesthetic appeal. The translation of mathematical propositions into a poorer linguistic framework can easily be compared with such painful lipogrammatical exercises. In principle all logical connectives can be simulated in a framework exclusively using Sheffer’s stroke, and all cuts (in Gentzen’s sense) can be eliminated; one can do without common language in mathematics altogether and formalize everything, and so on: in principle, one could leave out a whole lot of things. However, in doing so one would depart from the true way of thinking employed by the mathematician (who really uses “and” and “not” and cuts, and who does not reduce many things to formal systems). Obviously, it is the proof theorist as a working mathematician who is interested in things like the reduction to Sheffer’s stroke, since they allow for more concise proofs by induction in the analysis of a logical calculus. Hence this proof theorist has much the same motives as a mathematician working on other problems who avoids a completely formalized treatment of these problems because he is not interested in their proof-theoretical aspect.
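The in-principle claim about Sheffer’s stroke can be checked mechanically. The sketch below (illustrative code; the helper names `nand`, `not_`, `and_`, `or_` are mine, not anything from the text) defines the usual connectives in terms of the stroke alone and verifies them over every truth assignment:

```python
from itertools import product

def nand(p, q):
    # Sheffer's stroke: "not both p and q"
    return not (p and q)

# The usual connectives, simulated using nothing but the stroke
def not_(p):    return nand(p, p)
def and_(p, q): return nand(nand(p, q), nand(p, q))
def or_(p, q):  return nand(nand(p, p), nand(q, q))

# Exhaustive check over all truth assignments
for p, q in product([False, True], repeat=2):
    assert not_(p) == (not p)
    assert and_(p, q) == (p and q)
    assert or_(p, q) == (p or q)
print("all connectives recovered from the stroke alone")
```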

There might be quite similar reasons for the interest of some set theorists in expressing usual mathematical constructions exclusively with the expressive means of ZF (i.e., in terms of ∈). But beyond this, is there any philosophical interpretation of such a reduction? In the last analysis, mathematicians always transform (and that means: change) their objects of study in order to make them accessible to certain mathematical treatments. If one considers a mathematical concept as a tool, one does not only use it in a way different from the one in which it would be used if it were considered as an object; moreover, in semiotical representation of it, it is given a form which is different in both cases. In this sense, the proof theorist has to “change” the mathematical proof (which is his or her object of study to be treated with mathematical tools). When stating that something is used as object or as tool, we have always to ask: in which situation, or: by whom.

A second observation is that the translation of propositional formulæ in terms of Sheffer’s stroke in general yields quite complicated new formulæ. What is “simple” here is the particularly small number of symbols needed; but the semantics does not become clearer (p|q means “not both p and q”; cognitively, this looks more complex than “p and q”, and so on), nor are the resulting formulæ “short”. What is sought in this case, hence, is a reduction of numerical complexity, while the primitive basis attained by the reduction cognitively looks less “natural” than the original situation (or, as Peirce expressed it, “the consciousness in the determined cognition is more lively than in the cognition which determines it”); similarly in the case of cut elimination. In contrast to this, many philosophers are convinced that the primitive basis of operating with sets constitutes a really “natural” basis of mathematical thinking, i.e., such operations are seen as the “standard bricks” of which this thinking is actually made – while no one will reasonably claim that expressions of the type p|q play a similar role for propositional logic. And yet: reduction to set theory does not really have the task of “explanation”. It is true, one thus reduces propositions about “complex” objects to propositions about “simple” objects; the propositions themselves, however, thereby become in general more complex. Couched in Fregean terms, one can perhaps more easily grasp their denotation (since the denotation of a proposition is its truth value) but not their meaning. A more involved conceptual framework, however, might lead to simpler propositions (and in most cases has actually been introduced precisely in order to do so). A parallel argument concerns deductions: in its totality, a deduction becomes more complex (and less intelligible) through a decomposition into elementary steps.
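The trade-off described here, fewer primitive symbols at the price of longer formulæ, can be made quantitative. In the following sketch (the tuple representation and the names `to_sheffer` and `size` are hypothetical, chosen only for illustration), formulas are translated into the stroke by the standard clauses and their lengths compared:

```python
# Formulas as nested tuples: ('var', name), ('not', f), ('and', f, g), ('or', f, g).
# Standard translation clauses into Sheffer's stroke '|':
#   not f    ->  f | f
#   f and g  ->  (f | g) | (f | g)
#   f or g   ->  (f | f) | (g | g)
def to_sheffer(f):
    if f[0] == 'var':
        return f
    if f[0] == 'not':
        s = to_sheffer(f[1])
        return ('|', s, s)
    if f[0] == 'and':
        s = ('|', to_sheffer(f[1]), to_sheffer(f[2]))
        return ('|', s, s)
    if f[0] == 'or':
        a, b = to_sheffer(f[1]), to_sheffer(f[2])
        return ('|', ('|', a, a), ('|', b, b))

def size(f):
    # Number of symbols (variables and operators) in the formula tree
    return 1 if f[0] == 'var' else 1 + sum(size(g) for g in f[1:])

p, q = ('var', 'p'), ('var', 'q')
print(size(('and', p, q)), size(to_sheffer(('and', p, q))))  # prints: 3 7
```

Even at this toy scale the translated conjunction is more than twice as long, and the growth compounds under nesting: the reduction is of the symbol set, not of the formulæ.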

Now, it will be subject to discussion whether in the case of some set operations it is admissible at all to claim that they are basic for thinking (which is certainly true in the case of the connectives of propositional logic). It is perfectly possible that the common sense which organizes the acceptance of certain operations as a natural basis relies on something different, not having the character of some eternal laws of thought: it relies on training.

Is it possible to observe that a surface is coloured red and blue; and not to observe that it is red? Imagine a kind of colour adjective were used for things that are half red and half blue: they are said to be ‘bu’. Now might not someone be trained to observe whether something is bu; and not to observe whether it is also red? Such a man would then only know how to report: “bu” or “not bu”. And from the first report we could draw the conclusion that the thing was partly red.

# Mathematical Reductionism: A Case Via C. S. Peirce’s Hypothetical Realism.

During the 20th century, the following epistemology of mathematics was predominant: a sufficient condition for the possibility of the cognition of objects is that these objects can be reduced to set theory. The conditions for the possibility of the cognition of the objects of set theory (the sets), in turn, can be given in various manners; in any event, the objects reduced to sets do not need an additional epistemological discussion – they “are” sets. Hence, such an epistemology relies ultimately on ontology. Frege conceived the axioms as descriptions of how we actually manipulate extensions of concepts in our thinking (and in this sense as inevitable and intuitive “laws of thought”). Hilbert admitted the use of intuition exclusively in metamathematics where the consistency proof is to be done (by which the appropriateness of the axioms would be established); Bourbaki takes the axioms as mere hypotheses. Hence, Bourbaki’s concept of justification is the weakest of the three: “it works as long as we encounter no contradiction”; nevertheless, it is still epistemology, because from this hypothetical-deductive point of view, one insists that at least a proof of relative consistency (i.e., a proof that the hypotheses are consistent with the frequently tested and approved framework of set theory) should be available.

Doing mathematics, one tries to give proofs for propositions, i.e., to deduce the propositions logically from other propositions (premisses). Now, in the reductionist perspective, a proof of a mathematical proposition yields an insight into the truth of the proposition, if the premisses are already established (if one already has an insight into their truth); this can be done by giving in turn proofs for them (in which new premisses will occur which ask again for an insight into their truth), or by agreeing to put them at the beginning (to consider them as axioms or postulates). The philosopher tries to understand how the decision about what propositions to take as axioms is arrived at, because he or she is dissatisfied with the reductionist claim that it is on these axioms that the insight into the truth of the deduced propositions rests. Actually, this epistemology might contain a shortcoming, since Poincaré (and Wittgenstein) stressed that to have a proof of a proposition is by no means the same as to have an insight into its truth.

Attempts to disclose the ontology of mathematical objects reveal the following tendency in the epistemology of mathematics: mathematics is seen as suffering from a lack of ontological “determinateness”, namely that this science (contrary to many others) does not concern material data, so that the concept of material truth is not available (especially in the case of the infinite). This tendency is embarrassing, since, on the other hand, mathematical cognition is very often presented as cognition of the “greatest possible certainty” precisely because it seems not to be bound to material evidence, let alone experimental check.

The technical apparatus developed by the reductionist and set-theoretical approach nowadays serves other purposes, partly for the reason that tacit beliefs about sets were challenged; the explanations of the science which it provides are considered irrelevant by the practitioners of this science. There is doubt that the above-mentioned sufficient condition is also necessary; it is not even accepted throughout as a sufficient one. But what happens if some objects, as in the case of category theory, do not fulfill the condition? It seems that the reductionist approach has, so to say, been undocked from the historical development of the discipline in several respects; an alternative is required.

Anterior to Peirce, epistemology was dominated by the idea of a grasp of objects; since Descartes, intuition was considered throughout as a particular, innate capacity of cognition (even if idealists thought that it concerns the general, and empiricists that it concerns the particular). The task of this particular capacity was the foundation of epistemology; already with Aristotle’s first premisses of the syllogism, the aim was to go back to something first. In this traditional approach, it is by the ontology of the objects that one hopes to answer the fundamental question concerning the conditions for the possibility of the cognition of these objects. One hopes that there are simple “basic objects” to which the more complex objects can be reduced and whose cognition is possible by common sense – be this an innate or otherwise distinguished capacity of cognition common to all human beings. Here, epistemology is “wrapped up” in (or rests on) ontology; to do epistemology one has to do ontology first.

Peirce shares Kant’s opinion according to which the object depends on the subject; however, he does not agree that reason is the crucial means of cognition to be criticised. In his paper “Questions concerning certain faculties claimed for man”, he points out the basic assumption of pragmatist philosophy: every cognition is semiotically mediated. He says that there is no immediate cognition (a cognition which “refers immediately to its object”), but that every cognition “has been determined by a previous cognition” of the same object. Correspondingly, Peirce replaces the critique of reason by a critique of signs. He thinks that Kant’s distinction between the world of things per se (Dinge an sich) and the world of apparition (Erscheinungswelt) is not fruitful; he rather distinguishes the world of the subject and the world of the object, connected by signs; his position consequently is a “hypothetical realism” in which all cognitions are valid only with reservations. This position does not negate (nor assert) that the object per se (with the semiotical mediation stripped off) exists, since such assertions of “pure” existence are seen as necessarily hypothetical (that is, they do not withstand philosophical criticism).

By his basic assumption, Peirce was led to reveal a problem concerning the subject matter of epistemology, since this assumption means in particular that there is no intuitive cognition in the classical sense (which is synonymous to “immediate”). Hence, one could no longer consider cognitions as objects; there is no intuitive cognition of an intuitive cognition. Intuition can be no more than a relation. “All the cognitive faculties we know of are relative, and consequently their products are relations”. According to this new point of view, intuition cannot any longer serve to found epistemology, in departure from the former reductionist attitude. A central argument of Peirce against reductionism or, as he puts it,

the reply to the argument that there must be a first is as follows: In retracing our way from our conclusions to premisses, or from determined cognitions to those which determine them, we finally reach, in all cases, a point beyond which the consciousness in the determined cognition is more lively than in the cognition which determines it.

Peirce gives some examples derived from physiological observations about perception, such as the fact that the third dimension of space is inferred, or the existence of the blind spot of the retina. In this situation, the process of reduction loses its legitimacy since it no longer fulfills the function of justifying cognition. At such a place, something happens which I would like to call an “exchange of levels”: the process of reduction is interrupted in that the things exchange the roles performed in the determination of a cognition: what was originally considered as determining is now determined by what was originally considered as asking for determination.

The idea that contents of cognition are necessarily provisional has an effect on the very concept of conditions for the possibility of cognitions. It seems that one can infer from Peirce’s words that what vouches for a cognition is not necessarily the cognition which determines it but the liveliness of our consciousness in the cognition. Here, “to vouch for a cognition” no longer means what it meant before (which was much the same as “to determine a cognition”), but it still means that the cognition is (provisionally) reliable. This conception of the liveliness of our consciousness might roughly be seen as a substitute for the capacity of intuition in Peirce’s epistemology – but only roughly, since it has a different coverage.

# Metaphysics of the Semantics of HoTT. Thought of the Day 73.0

Types and tokens are interpreted as concepts (rather than spaces, as in the homotopy interpretation). In particular, a type is interpreted as a general mathematical concept, while a token of a given type is interpreted as a more specific mathematical concept qua instance of the general concept. This accords with the fact that each token belongs to exactly one type. Since ‘concept’ is a pre-mathematical notion, this interpretation is admissible as part of an autonomous foundation for mathematics.

Expressions in the language are the names of types and tokens. Those naming types correspond to propositions. A proposition is ‘true’ just if the corresponding type is inhabited (i.e. there is a token of that type, which we call a ‘certificate’ to the proposition). There is no way in the language of HoTT to express the absence or non-existence of a token. The negation of a proposition P is represented by the type P → 0, where P is the type corresponding to proposition P and 0 is a type that by definition has no token constructors (corresponding to a contradiction). The logic of HoTT is not bivalent, since the inability to construct a token of P does not guarantee that a token of P → 0 can be constructed, and vice versa.
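The representation of negation as a function type into an empty type can be written down directly. A minimal sketch in Lean 4 notation (Lean serves here only as a stand-in for the formal language of HoTT; the names `Zero`, `Neg`, and `absurd'` are illustrative, not primitives of either system):

```lean
-- An empty type: by definition it has no token constructors,
-- playing the role of 0 (contradiction).
inductive Zero : Type

-- Negation of a proposition-as-type P is the function type P → Zero.
def Neg (P : Type) : Type := P → Zero

-- A certificate of P together with a certificate of Neg P would
-- yield a token of Zero, i.e. a contradiction.
def absurd' {P : Type} (p : P) (np : Neg P) : Zero := np p
```

Non-bivalence shows up in the fact that failing to construct a token of P gives no method for constructing a function P → Zero, and conversely.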

The rules governing the formation of types are understood as ways of composing concepts to form more complex concepts, or as ways of combining propositions to form more complex propositions. They follow from the Curry-Howard correspondence between logical operations and operations on types. However, we depart slightly from the standard presentation of the Curry-Howard correspondence, in that the tokens of types are not to be thought of as ‘proofs’ of the corresponding propositions but rather as certificates to their truth. A proof of a proposition is the construction of a certificate to that proposition by a sequence of applications of the token construction rules. Two different such processes can result in the construction of the same token, and so proofs and tokens are not in one-to-one correspondence.
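The distinction between proofs (construction processes) and tokens (certificates) can be illustrated in the same Lean 4 notation (again only a sketch; `certify1` and `certify2` are illustrative names): under Curry-Howard a conjunction corresponds to a product type, and two different construction routes can end in the same token.

```lean
-- A token of A × B certifies the conjunction of A and B.
def certify1 {A B : Type} (a : A) (b : B) : A × B := (a, b)

-- A different construction process: build a function first, then apply it.
def certify2 {A B : Type} (a : A) (b : B) : A × B := (fun x => (x, b)) a

-- The two derivations differ, but the resulting tokens are
-- definitionally equal, so rfl closes the goal.
example {A B : Type} (a : A) (b : B) : certify1 a b = certify2 a b := rfl
```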

When we work formally in HoTT we construct expressions in the language according to the formal rules. These expressions are taken to be the names of tokens and types of the theory. The rules are chosen such that if a construction process begins with non-contradictory expressions that all name tokens (i.e. none of the expressions are ‘empty names’) then the result will also name a token (i.e. the rules preserve non-emptiness of names).

Since we interpret tokens and types as concepts, the only metaphysical commitment required is to the existence of concepts. That human thought involves concepts is an uncontroversial position, and our interpretation does not require that concepts have any greater metaphysical status than is commonly attributed to them. Just as the existence of a concept such as ‘unicorn’ does not require the existence of actual unicorns, likewise our interpretation of tokens and types as mathematical concepts does not require the existence of mathematical objects. However, it is compatible with such beliefs. Thus a Platonist can take the concept, say, ‘equilateral triangle’ to be the concept corresponding to the abstract equilateral triangle (after filling in some account of how we come to know about these abstract objects in a way that lets us form the corresponding concepts). Even without invoking mathematical objects to be the ‘targets’ of mathematical concepts, one could still maintain that concepts have a mind-independent status, i.e. that the concept ‘triangle’ continues to exist even while no-one is thinking about triangles, and that the concept ‘elliptic curve’ did not come into existence at the moment someone first gave the definition. However, this is not a necessary part of the interpretation, and we could instead take concepts to be mind-dependent, with corresponding implications for the status of mathematics itself.