Badiou Contra Grothendieck Functorially. Note Quote.

What makes categories historically remarkable and, in particular, what demonstrates that the categorical change is genuine? On the one hand, Badiou fails to show that category theory is not genuine. But, on the other, it is another thing to say that mathematics itself does change, and that the ‘Platonic’ a priori in Badiou’s endeavour is insufficient, something that could be demonstrated empirically.

Yet the empirical does not need to stand only in opposition to mathematics. Rather, it relates to results that stemmed from, and would have been impossible to comprehend without, the use of categories. It is only through experience that we are taught the meaning and use of categories, an experience conspicuously absent from Badiou’s own habituation in mathematics.

By contrast, Grothendieck opened up a new regime of algebraic geometry by generalising the notion of a space, first scheme-theoretically (with sheaves) and then in terms of groupoids and higher categories. Topos theory became synonymous with the study of categories satisfying the so-called Giraud axioms, based on Grothendieck’s geometric machinery. By utilising such tools, Pierre Deligne was able to prove the Weil conjectures, mod-p analogues of the famous Riemann hypothesis.

These conjectures – anticipated already by Gauss – concern the so-called local ζ-functions, which derive from counting the number of points of an algebraic variety over a finite field, an algebraic structure similar to, for example, the rational numbers Q or the real numbers R but with only finitely many elements. By representing algebraic varieties in polynomial terms, it is possible to analyse geometric structures analogous to the Riemann hypothesis but over finite fields Z/pZ (the whole numbers modulo p). Such ‘discrete’ varieties had previously been excluded from topological and geometric inquiry; now it turned out that geometry was no longer overshadowed by a need to decide between the ‘discrete’ and ‘continuous’ modalities of the subject (which Badiou still separates).

Along with the continuous ones, discrete varieties could then be studied via Betti numbers, and, much as Cohen’s argument made manifest in set theory, there appeared to be ‘deeper’, topological precursors that had remained invisible under the classical formalism. In particular, so-called étale cohomology allowed topological concepts (e.g., neighbourhood) to be studied in the context of algebraic geometry, whose classical Zariski description was too rigid to allow a meaningful interpretation. By introducing such concepts on the basis of Jean-Pierre Serre’s suggestion, Alexander Grothendieck revolutionised the field of geometry, paving the way for Pierre Deligne’s proof of the Weil conjectures, not to mention Wiles’ work on Fermat’s last theorem that subsequently followed.

Grothendieck’s crucial insight drew on his observation that if morphisms of varieties were considered by their ‘adjoint’ fields of functions, geometric morphisms could be treated as equivalent to algebraic ones. The algebraic category was restrictive, however, because field morphisms are always monomorphisms, which makes the corresponding geometric morphisms too rigid: to generalise the notion of a neighbourhood to the algebraic category he needed to embed algebraic fields into the larger category of rings. While a traditional Kuratowski covering space is locally ‘split’ – as mathematicians call it – the same was not true for the dual category of fields. In other words, the category of fields did not have an operator analogous to pull-backs (fibre products) unless considered as embedded within rings, where pull-backs have a co-dual expressed by the tensor operator ⊗. Grothendieck thus realised he could replace ‘incorporeal’ or contained neighbourhoods U ↪ X by a more relational description: maps U → X that are not necessarily monic, but which correspond to ring morphisms instead.

Topos theory applies a similar insight, not in the context of specific varieties only but for the entire theory of sets. Ultimately, Lawvere and Tierney realised the importance of these ideas for the concept of classification and truth in general. Classification of elements between two sets comes down to a question: does this element belong to a given set or not? In the category of Sets this question calls for a binary answer: true or false. But not in a general topos, in which the composition of the subobject classifier is more geometric.

Indeed, Lawvere and Tierney considered this characteristic map ‘either/or’ as a categorical relationship, without referring to its ‘contents’. It was the structural form of this morphism (which they called ‘true’), as contrasted with other relationships, that marked the beginning of geometric logic. They thus replaced the two-valued, complete Heyting algebra of classical truth with the categorical version Ω, defined as an object satisfying a specific pull-back condition. The crux of topos theory was then the so-called Freyd–Mitchell embedding theorem, which effectively guaranteed an explicit set of elementary axioms with which to formalise topos theory. The Freyd–Mitchell embedding theorem says that every abelian category is a full subcategory of a category of modules over some ring R, and that the embedding is an exact functor. It is easy to see that not every abelian category is equivalent to R-Mod for some ring R: R-Mod has all small limits and colimits, whereas, for instance, the category of finitely generated R-modules is an abelian category that lacks these properties.

But to understand its significance as a link between geometry and language, it is useful to see how the characteristic map (either/or) behaves in set theory. In particular, by expressing truth in this way, it became possible to reduce the Axiom of Comprehension, which states that any suitable formal condition λ gives rise to a set {x | λ(x)}, to a rather elementary statement regarding adjoint functors.
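In the category of Sets this machinery is entirely concrete: Ω is the two-element set {False, True}, a subset is classified by its characteristic map into Ω, and the subset is recovered as the pullback of ‘true’. A minimal Python sketch of this picture (the function names are illustrative, not any standard API):

```python
# In Sets, the subobject classifier is Omega = {False, True}: a subset
# A of X is classified by its characteristic map chi_A : X -> Omega,
# and A is recovered as the pullback of "true" along chi_A.
# (Function names here are illustrative, not a standard library API.)

def chi(A, X):
    """Characteristic map of the subset A of X."""
    return {x: (x in A) for x in X}

def pullback_of_true(chi_map):
    """The subset classified by chi_map: the preimage of True."""
    return {x for x, v in chi_map.items() if v}

X = set(range(10))
A = {n for n in X if n % 2 == 0}    # the "formal condition": evenness

assert pullback_of_true(chi(A, X)) == A
```

Classification and comprehension are thus two directions of the same correspondence: conditions on X match maps X → Ω, and each map determines the subset it classifies.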

At the same time, many mathematical structures became expressible not only as general topoi but in terms of the more specific class of Grothendieck topoi. There, too, the ‘way of doing mathematics’ is different in the sense that the subobject classifier is categorically defined and there is no empty set (initial object); mathematics starts from the terminal object 1 instead. However, there is a material way to express the ‘difference’ such topoi make in terms of set theory: every such topos has a sheaf form enabling it to be expressed as a category of sheaves on some category C equipped with a specific Grothendieck topology.

1 + 2 + 3 + … = -1/12. ✓✓✓

The Bernoulli numbers B_n are a sequence of signed rational numbers that can be defined by the exponential generating function

\frac{x}{e^x - 1} = \sum_{n=0}^\infty \frac{B_n x^n}{n!}

These numbers arise in the series expansions of trigonometric functions, and are extremely important in number theory and analysis.

The Bernoulli number B_n can be defined by the contour integral

B_n = \frac{n!}{2\pi i} \oint \frac{z}{e^z - 1} \frac{dz}{z^{n+1}}

where the contour encloses the origin, has radius less than 2π (to avoid the poles at ±2πi), and is traversed in a counterclockwise direction.
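The contour integral above can be checked numerically: parametrise a circle of radius 1 around the origin and apply the trapezoidal rule, which is extremely accurate for smooth periodic integrands. A sketch, not a production implementation:

```python
import math
import cmath

def bernoulli_contour(n, radius=1.0, steps=4096):
    """Approximate B_n = n!/(2*pi*i) * contour integral of
    z/(e^z - 1) * z^(-n-1) dz over a circle of radius < 2*pi
    around the origin, via the trapezoidal rule."""
    total = 0j
    for k in range(steps):
        z = radius * cmath.exp(2j * math.pi * k / steps)
        dz = 2j * math.pi * z / steps          # dz along the circle
        total += z / (cmath.exp(z) - 1) * z ** (-n - 1) * dz
    return math.factorial(n) * total / (2j * math.pi)

# B_2 = 1/6, B_4 = -1/30
print(bernoulli_contour(2).real, bernoulli_contour(4).real)
```

The radius must stay below 2π so that only the pole at the origin is enclosed, exactly as the text requires.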

[Plot: numbers of digits in the numerators and denominators of B_n for n = 2, 4, ….]

The numbers of digits in the numerator of B_n for n = 2, 4, … are 1, 1, 1, 1, 1, 3, 1, 4, 5, 6, 6, 9, 7, 11, …, while the numbers of digits in the corresponding denominators are 1, 2, 2, 2, 2, 4, 1, 3, 3, 3, 3, 4, 1, 3, 5, 3, …. Both of these are plotted above.

The denominator of B_(2n) is given by

\mathrm{denom}(B_{2n}) = \prod_{(p-1) \mid 2n} p

where the product is taken over the primes p with p − 1 dividing 2n, a result which is related to the von Staudt–Clausen theorem.
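Both the generating-function definition and the von Staudt–Clausen denominator can be verified exactly with rational arithmetic. The recurrence below follows from multiplying the generating function by e^x − 1 and comparing coefficients; a sketch using only the standard library:

```python
from fractions import Fraction
from math import comb

def bernoulli(m):
    """Exact Bernoulli numbers B_0..B_m from the recurrence implied by
    the generating function x/(e^x - 1) = sum_n B_n x^n / n!."""
    B = [Fraction(1)]
    for k in range(1, m + 1):
        B.append(-sum(comb(k + 1, j) * B[j] for j in range(k)) / (k + 1))
    return B

def staudt_clausen_denominator(two_n):
    """Product of the primes p with (p - 1) dividing 2n."""
    d = 1
    for p in range(2, two_n + 2):
        if all(p % q for q in range(2, p)) and two_n % (p - 1) == 0:
            d *= p
    return d

B = bernoulli(12)
print(B[12])                           # -691/2730
print(staudt_clausen_denominator(12))  # 2730
```

The agreement of the two functions for every even index checked is exactly the von Staudt–Clausen statement about denominators.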

In 1859 Riemann published a paper giving an explicit formula for the number of primes up to any preassigned limit, a decided improvement over the approximate value given by the prime number theorem. However, Riemann’s formula depended on knowing the values at which a generalized version of the zeta function equals zero. (The Riemann zeta function is defined for all complex numbers s = x + iy, where i = √−1, except for the point s = 1.) Riemann knew that the function equals zero for all negative even integers −2, −4, −6, … (so-called trivial zeros), and that it has an infinite number of zeros in the critical strip of complex numbers between the lines x = 0 and x = 1, and he also knew that all nontrivial zeros are symmetric with respect to the critical line x = 1/2. Riemann conjectured that all of the nontrivial zeros are on the critical line, a conjecture that subsequently became known as the Riemann hypothesis. In 1900 the German mathematician David Hilbert called the Riemann hypothesis one of the most important questions in all of mathematics, as indicated by its inclusion in his influential list of 23 unsolved problems with which he challenged 20th-century mathematicians. In 1915 the English mathematician Godfrey Hardy proved that an infinite number of zeros occur on the critical line, and by 1986 the first 1,500,000,001 nontrivial zeros were all shown to be on the critical line. Although the hypothesis may yet turn out to be false, investigations of this difficult problem have enriched the understanding of complex numbers.

Suppose you want to put a probability distribution on the natural numbers for the purpose of doing number theory. What properties might you want such a distribution to have? Well, if you’re doing number theory then you want to think of the prime numbers as acting “independently”: knowing that a number is divisible by p should give you no information about whether it’s divisible by q.

That quickly leads you to the following realization: you should choose the exponent of each prime in the prime factorization independently. So how should you choose these exponents? It turns out that the probability distribution on the non-negative integers with maximum entropy and a given mean is a geometric distribution. So let’s take the probability that the exponent of p is k to be equal to (1 − r_p) r_p^k for some constant r_p.

This gives the probability that a positive integer n = p_1^{e_1} \cdots p_k^{e_k} occurs as

C \prod_{i=1}^k r_{p_i}^{e_i}, \qquad \text{where} \qquad C = \prod_p (1 - r_p)

So we need to choose r_p such that this product converges. Now, we’d like the probability that n occurs to be monotonically decreasing as a function of n. It turns out that this is true iff r_p = p^{−s} for some s > 1 (since C has to converge), which gives the probability that n occurs as

\frac{1}{n^s \, \zeta(s)}

where ζ(s) is the Riemann zeta function.
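The defining property, independence of the prime exponents, can be checked numerically with a truncated version of this distribution. A sketch assuming s = 2 (the truncation bound N and the tolerance are arbitrary choices):

```python
# Check numerically that P(n) = n^(-s) / zeta(s) makes divisibility by
# distinct primes independent: P(p | n) = p^(-s), and the events factor.
s = 2.0
N = 200000
weights = [n ** -s for n in range(1, N + 1)]
Z = sum(weights)                       # partial sum approximating zeta(2)

def prob(divisor):
    """P(divisor divides n) under the (truncated) zeta distribution."""
    return sum(w for n, w in enumerate(weights, start=1) if n % divisor == 0) / Z

p2, p3, p6 = prob(2), prob(3), prob(6)
print(p2, 2 ** -s)          # both close to 0.25
print(p6, p2 * p3)          # independence: close to equal
```

P(divisible by p) comes out as p^(−s) exactly because the multiples of p are a rescaled copy of the whole distribution, which is the factorization property the derivation above demanded.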

The Riemann zeta function is a complex function that tells us many things about the theory of numbers. Its mystery is increased by the fact that it has no closed form – i.e. it can’t be expressed as a single formula containing other standard (elementary) functions.

[Plot: the “ridges” of |ζ(x + iy)| for 0 < x < 1 and 0 < y < 100.]

The plot above shows the “ridges” of |ζ(x + iy)| for 0 < x < 1 and 0 < y < 100. The fact that the ridges appear to decrease monotonically for 0 ≤ x ≤ 1/2 is not a coincidence, since it turns out that monotonic decrease implies the Riemann hypothesis.

On the real line with x > 1, the Riemann zeta function can be defined by the integral

\zeta(x) = \frac{1}{\Gamma(x)} \int_0^\infty \frac{u^{x-1}}{e^u - 1}\, du

where Γ(x) is the gamma function. If x is an integer n, then we have the identity

\zeta(n) = \frac{1}{\Gamma(n)} \int_0^\infty \frac{u^{n-1}}{e^u - 1}\, du

= \frac{1}{\Gamma(n)} \int_0^\infty \sum_{k=1}^\infty u^{n-1} e^{-ku}\, du

= \frac{1}{\Gamma(n)} \sum_{k=1}^\infty \frac{\Gamma(n)}{k^n}

so,

\zeta(n) = \sum_{k=1}^\infty \frac{1}{k^n}

The Riemann zeta function can also be defined in the complex plane by the contour integral

\zeta(s) = \frac{\Gamma(1-s)}{2\pi i} \oint \frac{(-z)^{s-1}}{e^z - 1}\, dz

valid for all s ≠ 1, where the contour (a Hankel contour) comes in from +∞ along the real axis, encircles the origin once in the counterclockwise direction, and returns to +∞.

Zeros of ζ(s) come in (at least) two different types. So-called “trivial zeros” occur at all negative even integers s = −2, −4, −6, …, and “nontrivial zeros” occur at certain values

s = \sigma + it

in the “critical strip” 0 < σ < 1. The Riemann hypothesis asserts that the nontrivial zeros of ζ(s) all have real part σ = ℜ[s] = 1/2, a line called the “critical line.” This is now known to be true for the first 250 × 10^9 roots.

[Plot: real and imaginary parts of ζ(1/2 + iy) along the critical line.]

The plot above shows the real and imaginary parts of ζ(1/2 + iy) (i.e., values of ζ(z) along the critical line) as y is varied from 0 to 35.
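Values of ζ on the critical line can be computed without any special library via Euler–Maclaurin summation, which extends the defining series well past Re(s) > 1. A sketch (the cutoff N = 100 and the number of correction terms are arbitrary choices, adequate for small |Im(s)|):

```python
import math

def zeta_em(s, N=100):
    """Riemann zeta via Euler-Maclaurin summation; a sketch that is
    accurate on the critical line for |Im(s)| up to roughly N."""
    tail = N ** (1 - s) / (s - 1) + 0.5 * N ** -s + s * N ** (-s - 1) / 12
    tail -= s * (s + 1) * (s + 2) * N ** (-s - 3) / 720   # B_4 term
    return sum(n ** -s for n in range(1, N)) + tail

print(abs(zeta_em(2) - math.pi ** 2 / 6))        # tiny
print(abs(zeta_em(0.5 + 14.134725141734693j)))   # ~0: first nontrivial zero
```

Evaluating at s = 1/2 + 14.1347…i, the location of the first nontrivial zero, returns a value indistinguishable from zero, matching the dip visible in the plot described above.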

Now consider John Cook’s take…

S_p(n) = \sum_{k=1}^n k^p

where p is a positive integer. Here we look at what happens when p becomes a negative integer and we let n go to infinity.

If p < −1, then the limit as n goes to infinity of S_p(n) is ζ(−p). That is, for s > 1, the Riemann zeta function ζ(s) is defined by

\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}

We don’t have to limit ourselves to real numbers s > 1; the definition holds for complex numbers s with real part greater than 1. That’ll be important below.

When s is a positive even number, there’s a formula for ζ(s) in terms of the Bernoulli numbers:

\zeta(2n) = (-1)^{n-1} 2^{2n-1} \pi^{2n} \frac{B_{2n}}{(2n)!}

The best-known special case of this formula is that

1 + 1/4 + 1/9 + 1/16 + … = π²/6.

It’s a famous open problem to find a closed-form expression for ζ(3) or any other odd argument.

The formula relating the zeta function and the Bernoulli numbers tells us a couple of things about the Bernoulli numbers. First, for n ≥ 1 the Bernoulli numbers with index 2n alternate in sign. Second, by looking at the sum defining ζ(2n) we can see that it is approximately 1 for large n. This tells us that for large n, |B_{2n}| is approximately (2n)! / (2^{2n−1} π^{2n}).
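The formula is easy to verify numerically against the direct series. A sketch using a few hardcoded exact Bernoulli numbers (the truncation bound is an arbitrary choice):

```python
import math

# Verify zeta(2n) = (-1)^(n-1) * 2^(2n-1) * pi^(2n) * B_2n / (2n)!
# against the direct series, using hardcoded exact Bernoulli numbers.
BERNOULLI = {2: 1 / 6, 4: -1 / 30, 6: 1 / 42}

def zeta_closed(two_n):
    n = two_n // 2
    return ((-1) ** (n - 1) * 2 ** (two_n - 1) * math.pi ** two_n
            * BERNOULLI[two_n] / math.factorial(two_n))

for two_n in BERNOULLI:
    series = sum(k ** -float(two_n) for k in range(1, 200000))
    print(two_n, zeta_closed(two_n), series)
```

At 2n = 2 the closed form collapses to π²/6, the special case quoted above, and the rapid approach of ζ(2n) to 1 as n grows is exactly what drives the factorial growth of |B_{2n}|.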

We said above that the sum defining the Riemann zeta function is valid for complex numbers s with real part greater than 1. There is a unique analytic extension of the zeta function to the rest of the complex plane, except at s = 1. The zeta function is defined, for example, at negative integers, but the sum defining zeta in the half-plane Re(s) > 1 is not valid.

You may well have seen the equation

1 + 2 + 3 + … = -1/12.

This is an abuse of notation. The sum on the left clearly diverges to infinity. But if the sum defining ζ(s) for Re(s) > 1 were valid for s = -1 (which it is not) then the left side would equal ζ(-1). The analytic continuation of ζ is valid at -1, and in fact ζ(-1) = -1/12. So the equation above is true if you interpret the left side, not as an ordinary sum, but as a way of writing ζ(-1). The same approach could be used to make sense of similar equations such as

1² + 2² + 3² + … = 0

and

1³ + 2³ + 3³ + … = 1/120.
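All three values follow from the standard identity ζ(−n) = −B_{n+1}/(n+1) for the analytic continuation at negative integers, which a few exact Bernoulli numbers confirm (hardcoded here; note B_3 = 0):

```python
from fractions import Fraction

# zeta at negative integers via zeta(-n) = -B_{n+1}/(n+1); the needed
# Bernoulli numbers are hardcoded exact values (B_3 = 0).
B = {2: Fraction(1, 6), 3: Fraction(0), 4: Fraction(-1, 30)}

def zeta_neg(n):
    return -B[n + 1] / (n + 1)

print(zeta_neg(1))   # -1/12
print(zeta_neg(2))   # 0
print(zeta_neg(3))   # 1/120
```

The odd Bernoulli numbers beyond B_1 all vanish, which is why every sum of even powers, like 1² + 2² + 3² + …, gets the regularized value 0.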