# Schematic Grothendieck Representation

A spectral Grothendieck representation Rep is said to be schematic if, for every triple γ ≤ τ ≤ δ in Top(A) and every A in R^(Ring), we have a commutative diagram in R^:

If Rep is schematic, then P : Top(A) → R^ is, for every A in R, a presheaf with values in R^ over the lattice Top(A)^o.

The device is to restrict attention to Tors(Rep(A)), which is a lattice in the usual sense, and hence should be viewed as the commutative shadow of a suitable noncommutative theory.

To obtain the complete lattice Q(A), a duality is expressed by an order-reversing bijection (−)^{-1} : Q(A) → Q((Rep(A))^o). Note that (Rep(A))^o is not a Grothendieck category. It is additive and has a projective generator; moreover, it is known to be a varietal category (also called triplable), in the sense that it has a projective regular generator P, it is co-complete and has kernel pairs with respect to the functor Hom(P, −), and every equivalence relation in the category is a kernel pair. If a comparison functor is constructed via Hom(P, −) as a functor to the category of sets, it works well for the category of set-valued sheaves over a Grothendieck topology.

Now (−)^{-1} is defined as an order-reversing bijection between idempotent radicals on Rep(A) and on (Rep(A))^o, and we write (Top(A))^{-1} for the image of Top(A) in Q((Rep(A))^o). This is encoded in the exact sequence in Rep(A):

0 → ρ(M) → M → ρ^{-1}(M) → 0

(reversed in (Rep(A))^o). By restricting attention to hereditary torsion theories (kernel functors) when defining Tors(−), we introduce an asymmetry that breaks the duality, because (Top(A))^{-1} is not contained in Tors((Rep(A))^op). If, notationally, TT(G) denotes the complete lattice of torsion theories (not necessarily hereditary) of a category G, then (TT(G))^{-1} ≅ TT(G^op). Hence we may view (Tors(G))^{-1} as a complete sublattice of TT(G^op).
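The classical example behind this formalism is the torsion theory on abelian groups, where ρ(M) is the torsion subgroup and ρ^{-1}(M) = M/ρ(M) is the torsion-free quotient. A minimal Python sketch, with an encoding chosen here purely for illustration (a finitely generated abelian group given by its free rank and invariant factors), checks that ρ behaves as an idempotent radical:

```python
# Toy model of the hereditary torsion theory on abelian groups: a finitely
# generated abelian group is encoded as (free_rank, invariant_factors),
# so Z^2 + Z/4 + Z/6 is (2, [4, 6]).
def rho(M):
    """Torsion radical: the largest torsion subgroup of M."""
    rank, torsion = M
    return (0, list(torsion))

def rho_inv(M):
    """The quotient M / rho(M), written rho^{-1}(M) above."""
    rank, torsion = M
    return (rank, [])

M = (2, [4, 6])
print(rho(M))        # (0, [4, 6])  -- the torsion part Z/4 + Z/6
print(rho_inv(M))    # (2, [])      -- the torsion-free quotient Z^2
assert rho(rho(M)) == rho(M)          # rho is idempotent
assert rho(rho_inv(M)) == (0, [])     # M / rho(M) is torsion-free
```

The two assertions are exactly the defining properties of an idempotent radical, and together they express the exactness of the sequence above in this toy case.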

# Grothendieckian Construction of K-Theory with a Topologically Trivial Bundle and a Torsion Class

All relativistic quantum theories contain “antiparticles,” and allow the process of particle-antiparticle annihilation. This inspires a physical version of the Grothendieck construction of K-theory. Physics uses topological K-theory of manifolds, whose motivation is to organize vector bundles over a space into an algebraic invariant that turns out to be useful. Algebraic K-theory started from the groups K_i defined for small i, with relations to classical constructions in algebra and number theory, followed by Quillen’s homotopy-theoretic definition for all i. The connections to algebra and number theory often persist for larger values of i, but in ways that are subtle and conjectural, such as special values of zeta- and L-functions.

One could also use the conserved charges of a configuration which can be measured at asymptotic infinity. By definition, these are left invariant by any physical process. Furthermore, they satisfy quantization conditions, of which the prototype is the Dirac condition on allowed electric and magnetic charges in Maxwell theory.

There is an elementary construction which, given a physical theory T, produces an abelian group of conserved charges K(T). Rather than considering the microscopic dynamics of the theory, all that needs to be known is a set S of “particles” described by T, and a set of “bound state formation/decay processes” by which the particles combine or split to form other particles. These are called “binding processes.” Two sets of particles are “physically equivalent” if some sequence of binding processes converts the one to the other. We then define the group K(T) as the abelian group Z^S of formal linear combinations of particles, quotiented by this equivalence relation.

Suppose T contains the particles S = {A,B,C}.

If these are completely stable, we could clearly define three integral conserved charges, their individual numbers, so K(T) ≅ Z^3.

Now introduce a binding process

A + B ↔ C —– (1)

where the bidirectional arrow reminds us that the process can go in either direction. Clearly K(T) ≅ Z^2 in this case.
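This computation can be mechanized: K(T) is the cokernel of the integer matrix whose rows are the binding relations, and diagonalizing that matrix over Z (a Smith-normal-form style reduction) reads off the group. The sketch below is illustrative only; the function names are our own. Diagonal entries d with |d| = 1 kill a generator, entries with |d| > 1 contribute a torsion factor Z/d, and the remaining generators stay free.

```python
# Diagonalize an integer matrix by row/column operations (Smith-normal-form
# style); the quotient Z^n / (row span) is then read off the diagonal.
def diagonalize(M):
    A = [row[:] for row in M]
    m, n = len(A), len(A[0])
    t = 0
    while t < min(m, n):
        # pick a nonzero entry of minimal absolute value as pivot
        pivot = min(((i, j) for i in range(t, m) for j in range(t, n)
                     if A[i][j] != 0),
                    key=lambda p: abs(A[p[0]][p[1]]), default=None)
        if pivot is None:
            break
        i0, j0 = pivot
        A[t], A[i0] = A[i0], A[t]
        for row in A:
            row[t], row[j0] = row[j0], row[t]
        changed = True
        while changed:
            changed = False
            for i in range(t + 1, m):          # clear column t by Euclid
                while A[i][t] != 0:
                    q = A[i][t] // A[t][t]
                    A[i] = [a - q * b for a, b in zip(A[i], A[t])]
                    if A[i][t] != 0:
                        A[t], A[i] = A[i], A[t]
                        changed = True
            for j in range(t + 1, n):          # clear row t by Euclid
                while A[t][j] != 0:
                    q = A[t][j] // A[t][t]
                    for row in A:
                        row[j] -= q * row[t]
                    if A[t][j] != 0:
                        for row in A:
                            row[t], row[j] = row[j], row[t]
                        changed = True
        t += 1
    return [A[k][k] for k in range(min(m, n))]

# S = {A, B, C}; the single binding process A + B <-> C is the relation
# A + B - C = 0, giving a one-row relation matrix.
diag = diagonalize([[1, 1, -1]])
free_rank = 3 - sum(1 for d in diag if d != 0)
print(diag, free_rank)  # [1] 2 -- so K(T) = Z^2, as claimed
```

Adding the relation B + B̄ = 0 for an antiparticle would simply add another row without changing the method.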

One might criticize this proposal on the grounds that we have assumed that configurations with a negative number of particles can exist. However, in all physical theories which satisfy the constraints of special relativity, charged particles come with “antiparticles,” with the same mass but opposite charge. A particle and antiparticle can annihilate (combine) into a set of zero-charge particles. While first discovered as a prediction of the Dirac equation, this follows from general axioms of quantum field theory, which also hold in string theory.

Thus, there are binding processes

B + B̄ ↔ Z_1 + Z_2 + · · ·

where B̄ is the antiparticle to the particle B, and the Z_i are zero-charge particles, which must appear by energy conservation. To define the K-theory, we identify any such set of zero-charge particles with the identity, so that

B + B̄ ↔ 0

Thus the antiparticles provide the negative elements of K(T).

Granting the existence of antiparticles, this construction of K-theory can be more simply rephrased as the Grothendieck construction. We can define K(T) as the group of pairs (E, F) ∈ Z^S × Z^S, subject to the relations (E, F) ≅ (E + B, F + B) ≅ (E + L, F + R) ≅ (E + R, F + L), where (L, R) are the left- and right-hand sides of a binding process such as (1).

Thinking of these as particles, each brane B must have an antibrane, which we denote by B̄. If B wraps a submanifold L, one expects that B̄ is a brane which wraps the submanifold L with the opposite orientation. A potential problem is that it is not a priori obvious that the orientation of L actually matters physically, especially in degenerate cases such as L being a point.

Now, let us take X to be a Calabi-Yau threefold for definiteness. A physical A-brane, a brane of the A-model topological string and thereby a TQFT shadow of the D-branes of the superstring, is specified by a pair (L, E) of a special Lagrangian submanifold L with a flat bundle E. The obvious question is: when are (L_1, E_1) and (L_2, E_2) related by a binding process? A simple heuristic answer is given by the Feynman path integral. Two configurations are connected if they are connected by a continuous path through the configuration space; any such path (or a small deformation of it) will appear in the functional integral with some non-zero weight. Thus, the question is essentially topological. Ignoring the flat bundles for a moment, this tells us that the K-theory group for A-branes is H^3(X, Z), and the class of a brane is simply (rank E)·[L] ∈ H^3(X, Z). This is also clear if the moduli space of flat connections on L is connected.

But suppose it is not connected, say because π_1(L) has torsion. In this case, we need deeper physical arguments to decide whether the K-theory of these D-branes is H^3(X, Z) or some larger group. But a natural conjecture is that it will be K^1(X), which classifies bundles on odd-dimensional submanifolds. Two branes which differ only in the choice of flat connection are in fact connected in string theory, consistent with the K-group being H^3(X, Z). For X a simply connected Calabi-Yau threefold, K^1(X) ≅ H^3(X, Z), so the general conjecture is borne out in this case.

There is a natural bilinear form on H^3(X, Z) given by the oriented intersection number

I(L_1, L_2) = #([L_1] ∩ [L_2]) —– (2)

It has symmetry (−1)^n. In particular, it is symplectic for n = 3. Furthermore, by Poincaré duality, it is unimodular, at least in our topological definition of K-theory.
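As a quick sanity check of these two properties, one can verify antisymmetry and unimodularity for a standard symplectic pairing, used here as an illustrative stand-in for the intersection form (2) on a rank-4 lattice:

```python
# Check that a pairing matrix is antisymmetric and unimodular (det = +/-1).
def is_antisymmetric(M):
    n = len(M)
    return all(M[i][j] == -M[j][i] for i in range(n) for j in range(n))

def det(M):
    """Integer determinant by cofactor expansion (fine for small matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

# standard symplectic pairing on Z^4, a stand-in for I on H^3
J = [[0, 0, 1, 0],
     [0, 0, 0, 1],
     [-1, 0, 0, 0],
     [0, -1, 0, 0]]

print(is_antisymmetric(J), det(J))  # True 1 -- symplectic and unimodular
```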

D-branes, which are extended objects defined by mixed Dirichlet-Neumann boundary conditions in string theory, break half of the supersymmetries of the type II superstring and carry a complete set of electric and magnetic Ramond-Ramond charges. The product of the electric and magnetic charges is a single Dirac unit, and the quantum of charge takes the value required by string duality. Saying that a D-brane has RR-charge means that it is a source for an “RR potential,” a generalized (p + 1)-form gauge potential in ten-dimensional space-time. This can be verified from its world-volume action, which contains the minimal coupling term

∫ C^(p+1) —– (3)

where C^(p+1) denotes the gauge potential, and the integral is taken over the (p+1)-dimensional world-volume of the brane. For p = 0, C^(1) is a one-form or “vector” potential (as in Maxwell theory), and thus the D0-brane is an electrically charged particle with respect to this 10d Maxwell theory. Now compactify further, so that the ten dimensions are R^4 × X, and consider a Dp-brane which wraps a p-dimensional cycle L; in other words, its world-volume is R × L, where R is a time-like world-line in R^4. Using the Poincaré dual class ω_L ∈ H^{2n−p}(X, R) to L in X, we can rewrite (3) as the integral

∫_{R × X} C^(p+1) ∧ ω_L —– (4)

We can then do the integral over X to turn this into the integral of a one-form over a world-line in R4, which is the right form for the minimal electric coupling of a particle in four dimensions. Thus, such a wrapped brane carries a particular electric charge which can be detected at asymptotic infinity. Summarizing the RR-charge more formally,

∫_L C = ∫_X C ∧ ω_L —– (5)

where C ∈ H^∗(X, R). In other words, it is a class in H^p(X, R).

In particular, an A-brane (for n = 3) carries a conserved charge in H^3(X, R). Of course, this is weaker than [L] ∈ H^3(X, Z). To see this physically, we would need to see that some of these “electric” charges are actually “magnetic” charges, and study the Dirac-Schwinger-Zwanziger quantization condition between these charges. This amounts to showing that the angular momentum J of the electromagnetic field satisfies the quantization condition J = ħn/2 for n ∈ Z. Using the standard expression from electromagnetism for the field momentum density, E⃗ × B⃗, this is precisely the condition that (2) must take an integer value. Thus the physical and mathematical consistency conditions agree. Similar considerations apply to coisotropic A-branes. If X is a genuine Calabi-Yau 3-fold (i.e., with strict SU(3) holonomy), then a coisotropic A-brane which is not a special Lagrangian must be five-dimensional, and the corresponding submanifold L is rationally homologically trivial, since H_5(X, Q) = 0. Thus, if the bundle E is topologically trivial, the homology class of L, and thus its K-theory class, is torsion.

If X is a torus or a K3 surface, the situation is more complicated. In that case, even rationally the charge of a coisotropic A-brane need not lie in the middle-dimensional cohomology of X. Instead, it takes its value in a certain subspace of ⊕_p H^p(X, Q), where the summation is over even or odd p depending on whether the complex dimension of X is even or odd. At the semiclassical level, the subspace is determined by the condition

(L − Λ)α = 0, α ∈ ⊕_p H^p(X, Q)

where L and Λ are generators of the Lefschetz SL(2, C) action, i.e., L is the cup product with the cohomology class of the Kähler form, and Λ is its dual.

# Conjuncted: Affine Schemes: How Would Functors Carry the Same Information?

If we pass to the generality of schemes, the extra structure overshadows the topological points and crucial details are lost, so that without full knowledge of the structure sheaf we have little information. For example, the evaluation of odd functions on topological points is always zero. This implies that the structure sheaf of a supermanifold cannot be reconstructed from its underlying topological space.

The functor of points is a categorical device to bring our attention back to the points of a scheme; however, the notion of point needs to be suitably generalized to go beyond the points of the topological space underlying the scheme.

Grothendieck’s idea behind the definition of the functor of points associated to a scheme is the following. If X is a scheme, for each commutative ring A, we can define the set of the A-points of X in analogy to the way the classical geometers used to define the rational or integral points on a variety. The crucial difference is that we do not focus on just one commutative ring A, but we consider the A-points for all commutative rings A. In fact, the scheme we start from is completely recaptured only by the collection of the A-points for every commutative ring A, together with the admissible morphisms.

Let (rings) denote the category of commutative rings and (schemes) the category of schemes.

Let (|X|, OX) be a scheme and let T ∈ (schemes). We call the T-points of X the set of all scheme morphisms T → X, which we denote by Hom(T, X). We then define the functor of points hX of the scheme X as the representable functor defined on objects as

hX: (schemes)^op → (sets), hX(T) = Hom(T, X) = the T-points of X

Notice that when X is affine, X ≅ Spec O(X) and we have

haX(A) = Hom(Spec A, Spec O(X)) = Hom(O(X), A)

In this case the functor haX is again representable.

Consider the affine schemes X = Spec O(X) and Y = Spec O(Y). There is a one-to-one correspondence between the scheme morphisms X → Y and the ring morphisms O(Y) → O(X). Both hX and haX are defined on morphisms in the natural way. If φ: T → S is a morphism and ƒ ∈ Hom(S, X), we define hX(φ)(ƒ) = ƒ ○ φ. Similarly, if ψ: A → B is a ring morphism and g ∈ Hom(O(X), A), we define haX(ψ)(g) = ψ ○ g. The functors hX and haX are, for a given scheme X, not really different but carry the same information. The functor of points hX of a scheme X is completely determined by its restriction to the category of affine schemes, or equivalently by the functor

haX: (rings) → (sets), haX(A) = Hom(Spec A, X)
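For an affine scheme, this functor is very concrete: a ring homomorphism O(X) → A is determined by where the coordinates go, subject to the defining equations. The Python sketch below (the circle scheme and the function name are chosen purely for illustration) computes haX(Z/nZ) for X = Spec Z[x,y]/(x^2 + y^2 − 1):

```python
# A-points of the affine scheme X = Spec Z[x,y]/(x^2 + y^2 - 1).
# A ring homomorphism O(X) -> A is determined by the images (a, b) of (x, y)
# subject to a^2 + b^2 = 1, so haX(A) is the "circle" sitting inside A^2.
def circle_points(n):
    """The (Z/nZ)-points of the circle scheme."""
    return [(a, b) for a in range(n) for b in range(n)
            if (a * a + b * b) % n == 1 % n]

print(circle_points(5))  # [(0, 1), (0, 4), (1, 0), (4, 0)]
```

Varying the ring A varies the set of points functorially, which is exactly the content of haX.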

Let M = (|M|, OM) be a locally ringed space and let (rspaces) denote the category of locally ringed spaces. We define the functor of points of the locally ringed space M as the representable functor

hM: (rspaces)^op → (sets), hM(T) = Hom(T, M)

hM is defined on morphisms as

hM(φ)(g) = g ○ φ

If the locally ringed space M is a differentiable manifold, then

Hom(M, N) ≅ Hom(C^∞(N), C^∞(M))

This takes us to Yoneda’s Lemma.

Let C be a category, let X, Y be objects in C, and let hX: C^op → (sets) be the representable functor defined on objects as hX(T) = Hom(T, X) and on arrows as hX(φ)(ƒ) = ƒ ○ φ, for φ: T → S and ƒ ∈ Hom(S, X).

If F: C^op → (sets) is any functor, then we have a one-to-one correspondence between the sets:

{hX → F} ⇔ F(X)

The functor

h: C → Fun(C^op, (sets)), X ↦ hX,

is an equivalence of C with a full subcategory of the functor category. In particular, hX ≅ hY iff X ≅ Y, and the natural transformations hX → hY are in one-to-one correspondence with the morphisms X → Y.

Two schemes (manifolds) are isomorphic iff their functors of points are isomorphic.
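The correspondence {hX → F} ⇔ F(X) can be checked by brute force for a tiny category. The sketch below is an illustration only: it takes the two-object poset category {a ≤ b}, an arbitrarily chosen presheaf F, and enumerates all natural transformations hX → F, confirming there are exactly |F(X)| of them:

```python
from itertools import product

# Tiny test category C: the poset {a <= b}; the only non-identity arrow
# is f: a -> b.  For X = b, the representable presheaf hX(T) = Hom(T, b):
hX = {"a": ["f"], "b": ["id_b"]}
hX_f = {"id_b": "f"}          # hX(f): hX(b) -> hX(a), precomposition with f

# An arbitrarily chosen presheaf F: C^op -> (sets) for the test:
F = {"a": [0, 1], "b": [0, 1, 2]}
Ff = {0: 0, 1: 1, 2: 0}       # F(f): F(b) -> F(a)

def all_maps(dom, cod):
    """All functions dom -> cod, each recorded as a dictionary."""
    return [dict(zip(dom, v)) for v in product(cod, repeat=len(dom))]

# Brute-force every natural transformation eta: hX -> F.
nats = []
for eta_a in all_maps(hX["a"], F["a"]):
    for eta_b in all_maps(hX["b"], F["b"]):
        # naturality square for f:  F(f) . eta_b  ==  eta_a . hX(f)
        if all(Ff[eta_b[g]] == eta_a[hX_f[g]] for g in hX["b"]):
            nats.append((eta_a, eta_b))

print(len(nats), len(F["b"]))   # 3 3 -- natural transformations <-> F(X)
```

Each natural transformation is forced by its component at the identity, which is precisely the content of the Yoneda bijection.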

The advantages of using the functorial language are many. Morphisms of schemes are just maps between the sets of their A-points, respecting functorial properties. This often simplifies matters, allowing us to leave the sheaf machinery in the background. The problem with such an approach, however, is that not all functors from (schemes) to (sets) are the functors of points of a scheme, i.e., representable.

A functor F: (rings) → (sets) is of the form F(A) = Hom(Spec A, X) for a scheme X iff:

F is local, i.e. a sheaf in the Zariski topology. This means that for each ring R and for every collection αi ∈ F(Rƒi), with (ƒi, i ∈ I) = R, such that αi and αj map to the same element in F(Rƒiƒj) ∀ i and j, ∃ a unique element α ∈ F(R) mapping to each αi; and

F admits a cover by open affine subfunctors, which means that ∃ a family Ui of subfunctors of F, i.e. Ui(R) ⊂ F(R) ∀ R ∈ (rings), with each Ui representable, i.e. Ui = hSpec Bi for some ring Bi, and with the property that ∀ natural transformations ƒ: hSpec A → F, the functors ƒ-1(Ui), defined as ƒ-1(Ui)(R) = ƒ-1(Ui(R)), are all representable, i.e. ƒ-1(Ui) = hVi, and the Vi form an open covering of Spec A.

This states the conditions we expect for F to be the functor of points of a scheme: locally, F must look like the functor of points of a scheme, and moreover F must be a sheaf, i.e. F must have a gluing property that allows us to patch together the open affine cover.

# Grothendieck’s Universes and Wiles Proof (Fermat’s Last Theorem). Thought of the Day 77.0

In formulating the general theory of cohomology Grothendieck developed the concept of a universe: a collection of sets large enough to be closed under any operation that arose. Grothendieck proved that the existence of a single universe is equivalent over ZFC to the existence of a strongly inaccessible cardinal. More precisely, U is the set Vα of all sets with rank below α, for some uncountable strongly inaccessible cardinal α.

Colin McLarty summarised the general situation:

Large cardinals as such were neither interesting nor problematic to Grothendieck and this paper shares his view. For him they were merely legitimate means to something else. He wanted to organize explicit calculational arithmetic into a geometric conceptual order. He found ways to do this in cohomology and used them to produce calculations which had eluded a decade of top mathematicians pursuing the Weil conjectures. He thereby produced the basis of most current algebraic geometry and not only the parts bearing on arithmetic. His cohomology rests on universes but weaker foundations also suffice at the loss of some of the desired conceptual order.

The applications of cohomology theory implicitly rely on universes. Most number theorists regard the applications as requiring much less than their ‘on their face’ strength and in particular believe the large cardinal appeals are ‘easily eliminable’. There are in fact two issues. McLarty writes:

Wiles’s proof uses hard arithmetic some of which is on its face one or two orders above PA, and it uses functorial organizing tools some of which are on their face stronger than ZFC.

There are two current programs for verifying in detail the intuition that the formal requirements for Wiles’s proof of Fermat’s Last Theorem can be substantially reduced. On the one hand, McLarty’s current work aims to reduce the ‘on their face’ strength of the results in cohomology from large cardinal hypotheses to finite-order arithmetic. On the other hand, Macintyre aims to reduce the ‘on their face’ strength of results in hard arithmetic to Peano arithmetic (PA). These programs may be complementary, or a full implementation of Macintyre’s might avoid the first.

McLarty reduces

1. ‘all of SGA (Revêtements Étales et Groupe Fondamental)’ to Bounded Zermelo plus a Universe.
2. ‘the currently existing applications’ to Bounded Zermelo itself, and thus to the consistency strength of simple type theory. The Grothendieck duality theorem and others like it become theorem schema.

The essential insight of McLarty’s papers on cohomology is the role of replacement in giving strength to the universe hypothesis. A ZC-universe is defined to be a transitive set U modeling ZC such that every subset of an element of U is itself an element of U. He remarks that any Vα for α a limit ordinal is provable in ZFC to be a ZC-universe. McLarty then asserts that the essential use of replacement in the original Grothendieck formulation is to prove: for an arbitrary ring R, every module over R embeds in an injective R-module, and thus injective resolutions exist for all R-modules. But he gives a proof, in a system with the proof-theoretic strength of finite order arithmetic, that every sheaf of modules on any small site has an injective resolution.

Angus Macintyre dismisses with little comment the worries about the use of ‘large-structure’ tools in Wiles’s proof. He begins his appendix,

At present, all roads to a proof of Fermat’s Last Theorem pass through some version of a Modularity Theorem (generically MT) about elliptic curves defined over Q . . . A casual look at the literature may suggest that in the formulation of MT (or in some of the arguments proving whatever version of MT is required) there is essential appeal to higher-order quantification, over one of the following.

He then lists such objects as C, modular forms, Galois representations …and summarises that a superficial formulation of MT would be Π^1_m for some small m. But he continues,

I hope nevertheless that the present account will convince all except professional sceptics that MT is really Π^0_1.

There then follows a 13-page highly technical sketch of an argument for the proposition that MT can be expressed by a sentence in Π^0_1, along with a less-detailed strategy for proving MT in PA.

Macintyre’s complexity analysis is in traditional proof-theoretic terms. But his remark that ‘genus’ is a more useful geometric classification of curves than the syntactic notion of degree suggests that other criteria may be relevant. McLarty’s approach is not really a meta-theorem, but a statement that there was only one essential use of replacement and it can be eliminated. In contrast, Macintyre argues that ‘apparent second order quantification’ can be replaced by first order quantification. But the argument requires deep understanding of the number theory for each replacement in a large number of situations. Again, there is no general theorem that this type of result is provable in PA.

# Hypercoverings, or Fibrant Homotopies

Given that a Grothendieck topology is essentially about abstracting a notion of ‘covering’, it is not surprising that modified Čech methods can be applied. Artin and Mazur used Verdier’s idea of a hypercovering to get, for each Grothendieck topos, E, a pro-object in Ho(S) (i.e. an inverse system of simplicial sets), which they call the étale homotopy type of the topos E (which for them is ‘sheaves for the étale topology on a variety’). Applying homotopy group functors gives pro-groups πi(E) such that π1(E) is essentially the same as Grothendieck’s π1(E).

Grothendieck’s nice π1 thus has an interpretation as a limit of a Čech type, or shape theoretic, system of π1s of ‘hypercoverings’. Can shape theory be useful for studying étale homotopy type? Not without extra work, since the Artin-Mazur-Verdier approach leads one to look at inverse systems in proHo(S), i.e. inverse systems in a homotopy category, not a homotopy category of inverse systems as in Strong Shape Theory.

One of the difficulties with this hypercovering approach is that ‘hypercovering’ is a difficult concept and to the ‘non-expert’ seems non-geometric and lacking in intuition. As the Grothendieck topos E ‘pretends to be’ the category of Sets, but with a strange logic, we can ‘do’ simplicial set theory in Simp(E) as long as we take care of the arguments we use. To see a bit of this in action we can note that the object [0] in Simp(E) will be the constant simplicial sheaf with value the ordinary [0], “constant” here taking on two meanings at the same time: (a) constant sheaf, i.e. not varying ‘over X’ if E is thought of as Sh(X), and (b) constant simplicial object, i.e. each Kn is the same and all face and degeneracy maps are identities. Thus [0] interpreted as an étale space is the identity map X → X as a space over X. Of course not all simplicial objects are constant, and so Simp(E) can store a lot of information about the space (or site) X. One can look at the homotopy structure of Simp(E). Ken Brown showed it has a fibration category structure (axioms more or less dual to those of a cofibration category), and if we look at those fibrant objects K in which the natural map

p : K → [0]

is a weak equivalence, we find that these K are exactly the hypercoverings. Global sections of p give a simplicial set Γ(K), and varying K amongst the hypercoverings gives a pro-simplicial set (still in proHo(S), not in Ho(proS), unfortunately) which determines the Artin-Mazur pro-homotopy type of E.

This makes the link between shape theoretic methods and derived category theory more explicit. In the first, the ‘space’ is resolved using ‘coverings’ and these, in a sheaf theoretic setting, lead to simplicial objects in Sh(X) that are weakly equivalent to [0]; in the second, to evaluate the derived functor of some functor F : C → A, say, on an object C, one takes the ‘average’ of the values of F on objects weakly equivalent to C, i.e. one works with the functor

F′ : W(C) → A

(where W(C) has as objects the weak equivalences α : C → C′, and as maps the commuting ‘triangles’, and this has a ‘codomain’ functor δ : W(C) → C, δ(α) = C′, with F′ the composite Fδ). This is in many cases a pro-object in A – unfortunately standard derived functor theory interprets ‘commuting triangles’ in too weak a sense and thus corresponds to shape rather than strong shape theory – one thus, in some sense, arrives in proHo(A) instead of in Ho(proA).

# Abelian Categories, or Injective Resolutions are Diagrammatic. Note Quote.

Jean-Pierre Serre gave a more thoroughly cohomological turn to the conjectures than Weil had. Grothendieck says

Anyway Serre explained the Weil conjectures to me in cohomological terms around 1955 – and it was only in these terms that they could possibly ‘hook’ me …I am not sure anyone but Serre and I, not even Weil if that is possible, was deeply convinced such [a cohomology] must exist.

Specifically Serre approached the problem through sheaves, a new method in topology that he and others were exploring. Grothendieck would later describe each sheaf on a space T as a “meter stick” measuring T. The cohomology of a given sheaf gives a very coarse summary of the information in it – and in the best case it highlights just the information you want. Certain sheaves on T produced the Betti numbers. If you could put such “meter sticks” on Weil’s arithmetic spaces, and prove standard topological theorems in this form, the conjectures would follow.

By the nuts and bolts definition, a sheaf F on a topological space T is an assignment of Abelian groups to open subsets of T, plus group homomorphisms among them, all meeting a certain covering condition. Precisely these nuts and bolts were unavailable for the Weil conjectures because the arithmetic spaces had no useful topology in the then-existing sense.

At the École Normale Supérieure, Henri Cartan’s seminar spent 1948-49 and 1950-51 focussing on sheaf cohomology. As one motive, there was already de Rham cohomology on differentiable manifolds, which not only described their topology but also described differential analysis on manifolds. And during the time of the seminar Cartan saw how to modify sheaf cohomology as a tool in complex analysis. Given a complex analytic variety V, Cartan could define sheaves that reflected not only the topology of V but also complex analysis on V.

These were promising for the Weil conjectures, since Weil cohomology would need sheaves reflecting algebra on those spaces. But note that this differential analysis and complex analysis used sheaves and cohomology in the usual topological sense. Their innovation was to find particular new sheaves which capture analytic or algebraic information that a pure topologist might not focus on.

The greater challenge to the Séminaire Cartan was that, along with the cohomology of topological spaces, the seminar looked at the cohomology of groups. Here sheaves are replaced by G-modules. This was formally quite different from topology yet it had grown from topology and was tightly tied to it. Indeed Eilenberg and Mac Lane created category theory in large part to explain both kinds of cohomology by clarifying the links between them. The seminar aimed to find what was common to the two kinds of cohomology, and they found it in a pattern of functors.

The cohomology of a topological space X assigns to each sheaf F on X a series of Abelian groups H^nF, and to each sheaf map f : F → F′ a series of group homomorphisms H^nf : H^nF → H^nF′. The definition requires that each H^n is a functor, from sheaves on X to Abelian groups. A crucial property of these functors is:

H^nF = 0 for n > 0

for any fine sheaf F, where a sheaf is fine if it meets a certain condition borrowed from differential geometry by way of Cartan’s complex analytic geometry.

The cohomology of a group G assigns to each G-module M a series of Abelian groups H^nM, and to each homomorphism f : M → M′ a series of homomorphisms H^nf : H^nM → H^nM′. Each H^n is a functor, from G-modules to Abelian groups. These functors have the same properties as topological cohomology except that:

H^nM = 0 for n > 0

for any injective module M. A G-module I is injective if: for every G-module inclusion N ⊆ M and homomorphism f : N → I there is at least one g : M → I making the evident diagram commute.
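Failure of this lifting property is easy to witness concretely for Z-modules (G trivial). The Python sketch below, whose function names and encoding are our own, brute-forces the lifting property for finite cyclic groups and shows that Z/2 is not injective: the map 2Z/4Z → Z/2 sending 2 to 1 admits no extension to Z/4Z.

```python
# Brute-force the diagrammatic lifting property for Z-modules (G trivial),
# using finite cyclic groups Z/n with elements 0..n-1 and addition mod n.
def homs(m, n):
    """All Z-linear maps Z/m -> Z/n, each recorded as the image of 1."""
    return [a for a in range(n) if (m * a) % n == 0]

def lifts_exist(m, d, i):
    """Test I = Z/i against the inclusion N < M = Z/m, where N is the
    subgroup generated by m // d (so N is cyclic of order d): every
    f: N -> I must extend to some g: M -> I."""
    gen = m // d
    for f_gen in homs(d, i):                       # f is determined by f(gen)
        if not any((g1 * gen) % i == f_gen for g1 in homs(m, i)):
            return False
    return True

# Z/2 is not injective: f: 2Z/4Z -> Z/2 with f(2) = 1 has no extension,
# since any g: Z/4 -> Z/2 sends 2 = 1 + 1 to g(1) + g(1) = 0.
print(lifts_exist(4, 2, 2))   # False
```

The comment in the last lines is the diagram chase made explicit: the obstruction is purely equational, which is why the notion transfers verbatim to the categorical definition below.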

Cartan could treat the cohomology of several different algebraic structures: groups, Lie groups, associative algebras. These all rest on injective resolutions. But he could not include topological spaces, the source of the whole subject and still one of the main motives for pursuing the other cohomologies. Topological cohomology rested on the completely different apparatus of fine resolutions. As to the search for a Weil cohomology, this left two questions: What would Weil cohomology use in place of topological sheaves or G-modules? And what resolutions would give their cohomology? Specifically, Cartan and Eilenberg define group cohomology (like several other constructions) as a derived functor, which in turn is defined using injective resolutions. So the cohomology of a topological space was not a derived functor in their technical sense. But a looser sense was apparently current.

I have realized that by formulating the theory of derived functors for categories more general than modules, one gets the cohomology of spaces at the same time at small cost. The existence follows from a general criterion, and fine sheaves will play the role of injective modules. One gets the fundamental spectral sequences as special cases of delectable and useful general spectral sequences. But I am not yet sure if it all works as well for non-separated spaces and I recall your doubts on the existence of an exact sequence in cohomology for dimensions ≥ 2. Besides this is probably all more or less explicit in Cartan-Eilenberg’s book which I have not yet had the pleasure to see.

Here he lays out the whole paper, commonly cited as Tôhoku for the journal that published it. There are several issues. For one thing, fine resolutions do not work for all topological spaces but only for the paracompact – that is, Hausdorff spaces where every open cover has a locally finite refinement. The Séminaire Cartan called these separated spaces. The limitation was no problem for differential geometry. All differential manifolds are paracompact. Nor was it a problem for most of analysis. But it was discouraging from the viewpoint of the Weil conjectures since non-trivial algebraic varieties are never Hausdorff.

The fact that sheaf cohomology is a special case of derived functors (at least for the paracompact case) is not in Cartan-Sammy. Cartan was aware of it and told [David] Buchsbaum to work on it, but he seems not to have done it. The interest of it would be to show just which properties of fine sheaves we need to use; and so one might be able to figure out whether or not there are enough fine sheaves in the non-separated case (I think the answer is no but I am not at all sure!).

So Grothendieck began rewriting Cartan-Eilenberg before he had seen it. Among other things he preempted the question of resolutions for Weil cohomology. Before anyone knew what “sheaves” it would use, Grothendieck knew it would use injective resolutions. He did this by asking not what sheaves “are” but how they relate to one another. As he later put it, he set out to:

consider the set of all sheaves on a given topological space or, if you like, the prodigious arsenal of all the “meter sticks” that measure it. We consider this “set” or “arsenal” as equipped with its most evident structure, the way it appears so to speak “right in front of your nose”; that is what we call the structure of a “category”…From here on, this kind of “measuring superstructure” called the “category of sheaves” will be taken as “incarnating” what is most essential to that space.

The Séminaire Cartan had shown this structure in front of your nose suffices for much of cohomology. Definitions and proofs can be given in terms of commutative diagrams and exact sequences without asking, most of the time, what these are diagrams of. Grothendieck went farther than anyone else, insisting that the “formal analogy” between sheaf cohomology and group cohomology should become “a common framework including these theories and others”. To start with, injectives have a nice categorical sense: an object I in any category is injective if, for every monic N → M and arrow f : N → I, there is at least one g : M → I such that

Fine sheaves are not so diagrammatic.

Grothendieck saw that Reinhold Baer’s original proof that modules have injective resolutions was largely diagrammatic itself. So Grothendieck gave diagrammatic axioms for the basic properties used in cohomology, and called any category that satisfies them an Abelian category. He gave further diagrammatic axioms tailored to Baer’s proof: every category satisfying these axioms (AB5, plus the existence of a generator) has injective resolutions. Such a category has, since around the 1960s, sometimes been called a Grothendieck category, though that term has been used in several senses.

So sheaves on any topological space have injective resolutions and thus have derived functor cohomology in the strict sense. For paracompact spaces this agrees with cohomology from fine, flabby, or soft resolutions. So you can still use those, if you want them, and you will. But Grothendieck treats paracompactness as a “restrictive condition”, well removed from the basic theory, and he specifically mentions the Weil conjectures.

Beyond that, Grothendieck’s approach works for topology the same way it does for all cohomology. And, much further, the axioms apply to many categories other than categories of sheaves on topological spaces or categories of modules. They go far beyond topological and group cohomology, in principle, though in fact there were few if any known examples outside that framework when they were given.

# Badiou Contra Grothendieck Functorially. Note Quote.

What makes categories historically remarkable and, in particular, what demonstrates that the categorical change is genuine? On the one hand, Badiou fails to show that category theory is not genuine. But, on the other, it is another thing to say that mathematics itself does change, and that the ‘Platonic’ a priori in Badiou’s endeavour is insufficient, which could be demonstrated empirically.

Yet the empirical does not need to stand only in a way opposed to mathematics. Rather, it relates to results that stemmed from, and would have been impossible to comprehend without, the use of categories. It is only through experience that we are taught the meaning and use of categories, an experience obviously absent from Badiou’s habituation in mathematics.

To contrast, Grothendieck opened up a new regime of algebraic geometry by generalising the notion of a space first scheme-theoretically (with sheaves) and then in terms of groupoids and higher categories. Topos theory became synonymous with the study of categories satisfying the so-called Giraud axioms, based on Grothendieck’s geometric machinery. By utilising such tools, Pierre Deligne was able to prove the so-called Weil conjectures, mod-p analogues of the famous Riemann hypothesis.

These conjectures – anticipated already by Gauss – concern the so-called local ζ-functions that derive from counting the number of points of an algebraic variety over a finite field, an algebraic structure similar to, for example, the rational numbers Q or the real numbers R, but with only a finite number of elements. By representing algebraic varieties in polynomial terms, it is possible to analyse geometric structures analogous to the Riemann hypothesis but over finite fields Z/pZ (the whole numbers modulo p). Such ‘discrete’ varieties had previously been excluded from topological and geometric inquiry, whereas it now became clear that geometry no longer needed to be overshadowed by a decision between the ‘discrete’ and ‘continuous’ modalities of the subject (which Badiou still separates).
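The point-counting at stake here can be made concrete. A minimal sketch in Python: brute-force count the points of one hypothetical curve, y² = x³ + x + 1, over several prime fields Z/pZ, and check the Hasse bound |N − (p + 1)| ≤ 2√p, the curve-level analogue of the Riemann hypothesis that the Weil conjectures generalize (the choice of curve is purely illustrative):

```python
# Count projective points of the (illustrative) elliptic curve
# y^2 = x^3 + x + 1 over F_p and verify the Hasse bound, the
# "Riemann hypothesis for curves over finite fields".
import math

def count_points(p):
    """Number of projective points of y^2 = x^3 + x + 1 over Z/pZ."""
    n = 1  # the single point at infinity
    for x in range(p):
        rhs = (x * x * x + x + 1) % p
        for y in range(p):
            if (y * y) % p == rhs:
                n += 1
    return n

for p in [5, 7, 11, 13]:
    n = count_points(p)
    # Hasse: the point count deviates from p + 1 by at most 2*sqrt(p).
    assert abs(n - (p + 1)) <= 2 * math.sqrt(p)
    print(p, n)
```

The local ζ-function of the curve is then assembled from exactly these counts, taken over all field extensions F_{p^m}.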

Along with the continuous ones, discrete varieties could then also be studied by means of Betti numbers, and, similarly to what Cohen’s argument made manifest in set theory, there seemed to be ‘deeper’, topological precursors that had remained invisible under the classical formalism. In particular, the so-called étale cohomology allowed topological concepts (e.g., neighbourhood) to be studied in the context of algebraic geometry, whose classical Zariski description was too rigid to allow a meaningful interpretation. Introducing such concepts on the basis of Jean-Pierre Serre’s suggestion, Alexander Grothendieck revolutionized the field of geometry, making possible Pierre Deligne’s proof of the Weil conjectures, not to mention Wiles’ work on Fermat’s last theorem that subsequently followed.

Grothendieck’s crucial insight drew on his observation that if morphisms of varieties were considered by their ‘adjoint’ field of functions, it was possible to consider geometric morphisms as equivalent to algebraic ones. The algebraic category was restrictive, however, because field morphisms are always monomorphisms, which makes the corresponding geometric morphisms too rigid: to generalize the notion of a neighbourhood to the algebraic category he needed to embed the algebraic fields into a larger category of rings. While a traditional Kuratowski covering space is locally ‘split’, as mathematicians call it, the same was not true for the dual category of fields. In other words, the category of fields did not have an operator analogous to pull-backs (fibre products) unless considered as being embedded within rings, for which pull-backs have a co-dual expressed by the tensor operator ⊗. Grothendieck thus realized he could replace ‘incorporeal’ or contained neighborhoods U ↪ X by a more relational description: maps U → X that are not necessarily monic, but which correspond to ring morphisms instead.

Topos theory applies a similar insight, not only in the context of specific varieties but for the entire theory of sets. Ultimately, Lawvere and Tierney realized the importance of these ideas for the concept of classification and truth in general. Classification of elements between two sets comes down to a question: does this element belong to a given set or not? In the category of Sets this question calls for a binary answer: true or false. But not in a general topos, in which the composition of the subobject-classifier is more geometric.

Indeed, Lawvere and Tierney then considered this characteristic map ‘either/or’ as a categorical relationship, without referring to its ‘contents’. It was the structural form of this morphism (which they called ‘true’), as contrasted with other relationships, that marked the beginning of geometric logic. They thus replaced the binary complete Heyting algebra of classical truth with the categorical version Ω, defined as an object satisfying a specific pull-back condition. The crux of topos theory was then the so-called Freyd–Mitchell embedding theorem, which effectively guaranteed an explicit set of elementary axioms with which to formalize topos theory. The Freyd–Mitchell embedding theorem says that every abelian category is a full subcategory of a category of modules over some ring R and that the embedding is an exact functor. It is easy to see that not every abelian category is equivalent to R-Mod for some ring R: R-Mod has all small limits and colimits, whereas, for instance, the category of finitely generated R-modules is abelian but lacks these properties.
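In the special case of Sets, the pull-back condition on Ω can be exhibited directly: Ω = {True, False}, the arrow ‘true’ picks out True, and every subset is recovered as the pullback of ‘true’ along its characteristic map. A minimal sketch (names are illustrative only):

```python
# The subobject classifier in Sets: Omega = {True, False}.
# A subset S of X determines a characteristic map chi_S : X -> Omega,
# and S is recovered as the pullback of true : 1 -> Omega along chi_S.

def chi(S, X):
    """Characteristic map X -> Omega of the subset S of X."""
    return {x: (x in S) for x in X}

X = {1, 2, 3, 4}
S = {2, 4}
c = chi(S, X)
# Pulling 'true' back along chi_S yields exactly S again.
pullback = {x for x in X if c[x] is True}
assert pullback == S
```

In a general topos Ω need not be this two-element object, which is the sense in which classification there is "more geometric" than a binary true/false.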

But to understand its significance as a link between geometry and language, it is useful to see how the characteristic map (either/or) behaves in set theory. In particular, by expressing truth in this way, it became possible to reduce the Axiom of Comprehension, which states that any suitable formal condition λ gives rise to a set {x | λ(x)}, to a rather elementary statement regarding adjoint functors.

At the same time, many mathematical structures became expressible not only as general topoi but in terms of the more specific class of Grothendieck topoi. There, too, the ‘way of doing mathematics’ is different, in the sense that the subobject-classifier is categorically defined and there is no empty set (initial object): mathematics starts from the terminal object 1 instead. However, there is a material way to express the ‘difference’ such topoi make in terms of set theory: every such topos has a sheaf-form enabling it to be expressed as a category of sheaves Sets^C for a category C with a specific Grothendieck topology.

# Hyperstructures

In many areas of mathematics there is a need to have methods taking local information and properties to global ones. This is mostly done by gluing techniques using open sets in a topology and associated presheaves. The presheaves form sheaves when local pieces fit together to global ones. This has been generalized to categorical settings based on Grothendieck topologies and sites.

The general problem of going from local to global situations is important also outside of mathematics. Consider collections of objects where we may have information or properties of objects or subcollections, and we want to extract global information.

This is where hyperstructures are very useful. If we are given a collection of objects that we want to investigate, we put a suitable hyperstructure on it. Then we may assign “local” properties at each level and by the generalized Grothendieck topology for hyperstructures we can now glue both within levels and across the levels in order to get global properties. Such an assignment of global properties or states we call a globalizer.

To illustrate our intuition let us think of a society organized into a hyperstructure. Through levelwise democratic elections leaders are elected and the democratic process will eventually give a “global” leader. In this sense democracy may be thought of as a sociological (or political) globalizer. This applies to decision making as well.

In “frustrated” spin systems in physics one may possibly think of the “frustration” being resolved by creating new levels and a suitable globalizer assigning a global state to the system, corresponding to various exotic physical conditions like, for example, a kind of hyperstructured spin glass or magnet. Acting on both classical and quantum fields in physics may be facilitated by putting a hyperstructure on them.

There are also situations where we are given an object or a collection of objects with assignments of properties or states. To achieve a certain goal we need to change, let us say, the state. This may be very difficult and require a lot of resources. The idea is then to put a hyperstructure on the object or collection. By this we create levels of locality that we can glue together by a generalized Grothendieck topology.

It may often be much easier and require less resources to change the state at the lowest level and then use a globalizer to achieve the desired global change. Often it may be important to find a minimal hyperstructure needed to change a global state with minimal resources.

Again, to support our intuition let us think of the democratic society example. To change the global leader directly may be hard, but starting a “political” process at the lower individual levels may not require heavy resources and may propagate through the democratic hyperstructure leading to a change of leader.

Hence, hyperstructures facilitate local to global processes, but also global to local processes. Often these are called bottom-up and top-down processes. In the global to local or top-down process we put a hyperstructure on an object or system in such a way that it is represented by a top level bond in the hyperstructure. This means that to an object or system X we assign a hyperstructure

H = {B_0, B_1, …, B_n} in such a way that X = b_n for some b_n ∈ B_n binding a family {b^{n−1}_{i_1}} of B_{n−1}-bonds, each b^{n−1}_{i_1} binding a family {b^{n−2}_{i_2}} of B_{n−2}-bonds, etc., down to B_0-bonds in H. Similarly for a local to global process. To a system, set or collection of objects X, we assign a hyperstructure H such that X = B_0. A hyperstructure on a set (space) will create “global” objects, properties and states like what we see in organized societies, organizations, organisms, etc. The hyperstructure is the “glue” or the “law” of the objects. In a way, the globalizer creates a kind of higher order “condensate”. Hyperstructures represent a conceptual tool for translating organizational ideas like, for example, democracy, political parties, etc. into a mathematical framework where new types of arguments may be carried through.
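The levelwise-election globalizer described above can be sketched in a few lines. This is a toy illustration only, not the formal definition of a hyperstructure: a bond is a nested list of lower-level bonds, individuals sit at level B_0, and each level elects a representative by majority among the winners of the level below:

```python
# Toy "democratic globalizer" on a nested hyperstructure of voters.
# Individuals (B_0) are strings; a bond at level k is a list of
# level-(k-1) bonds. elect() propagates levelwise majority votes
# up to a single "global leader".
from collections import Counter

def elect(bond):
    """Recursively elect a representative for a (possibly nested) bond."""
    if isinstance(bond, str):          # a B_0-level individual
        return bond
    winners = [elect(member) for member in bond]
    return Counter(winners).most_common(1)[0][0]   # majority at this level

# A B_2 bond binding three B_1 bonds of B_0 individuals.
society = [["ann", "bob", "ann"], ["ann", "cat", "cat"], ["ann"]]
assert elect(society) == "ann"
```

Changing a few votes at the lowest level can flip intermediate winners and hence the global leader, which is the bottom-up process the text describes.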

# Symmetry: Mirror of a Manifold is the Opposite of its Fundamental Poincaré ∞-groupoid

Given a set X = {a, b, c, …} such as the natural numbers N = {0, 1, …, p, …}, there is a standard procedure that amounts to regarding X as a category with only identity morphisms. This is the discrete functor that takes X to the category denoted Disc(X), where the hom-sets are given by Hom(a, b) = ∅ if a ≠ b, and Hom(a, a) = {Id_a} = 1. Disc(X) is in fact a groupoid.

But in category theory, there is also a procedure called opposite or dual, that takes a general category C to its opposite C^op. Let us call C^op the reflection of C by the mirror functor (−)^op.

Now the problem is that if we restrict this procedure to categories such as Disc(X), there is no way to distinguish Disc(X) from Disc(X)^op. And this is what we mean by saying that sets do not show symmetries. In the program of Voevodsky, we can interpret this by saying that:
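The indistinguishability of Disc(X) from its opposite can be checked mechanically; a minimal sketch (function names are illustrative):

```python
# Discrete functor Disc: a set X becomes a category whose only
# morphisms are identities, so Hom(a, b) is empty unless a == b.

def disc_hom(a, b):
    """Hom-set of Disc(X): {Id_a} if a == b, else the empty set."""
    return {f"Id_{a}"} if a == b else set()

def disc_hom_op(a, b):
    """Hom-set of Disc(X)^op: the arrows of Disc(X), reversed."""
    return disc_hom(b, a)

X = {0, 1, 2}
# Reversing all arrows changes nothing: the mirror functor cannot
# distinguish Disc(X) from Disc(X)^op, i.e. sets don't show symmetries.
assert all(disc_hom(a, b) == disc_hom_op(a, b) for a in X for b in X)
```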

The identity type is not good for sets; instead we should use the Equivalence type. But to get this, we need to move from sets to Kan complexes, i.e., ∞-groupoids.

The notion of a Kan complex is an abstraction of the combinatorial structure found in the singular simplicial complex of a topological space. There the existence of retractions of any geometric simplex to any of its horns – simplices missing one face and their interior – means that all horns in the singular complex can be filled with genuine simplices, the Kan filler condition.

At the same time, the notion of a Kan complex is an abstraction of the structure found in the nerve of a groupoid, the Duskin nerve of a 2-groupoid and generally the nerves of n-groupoids for all n ≤ ∞. In other words, Kan complexes constitute a geometric model for ∞-groupoids/homotopy types which is based on the shape given by the simplex category. Thus Kan complexes serve to support homotopy theory.

So far we’ve used set theory, with this lack of symmetries, as foundations for mathematics. Grothendieck saw this when he moved from sheaves of sets to sheaves of groupoids (stacks), because he wanted to allow objects to have symmetries (automorphisms). If we look at the Giraud-Grothendieck picture on nonabelian cohomology, then what happens is an extension of coefficients U : Set ↪ Cat. We should consider first the comma category Cat ↓ U, whose objects are functors C → Disc(X). And then we should consider the full subcategory consisting of functors C → Disc(X) that are equivalences of categories. This forces C to be a groupoid that looks like a set. And we call such a C → Disc(X) a Quillen-Segal U-object.

This category of Quillen-Segal objects should be called the category of sets with symmetries. Following Grothendieck’s point of view, we’ve denoted by CatU[Set] the comma category, and think of it as categories with coefficients or coordinates in sets. This terminology is justified by the fact that the functor U : Set ↪ Cat is a morphism of (higher) toposes that defines a geometric point in Cat. The category of sets with symmetries is like the homotopy neighborhood of this point, similar to a one-point space thickening to a disc or any other contractible object. The advantage of the Quillen-Segal formalism is the presence of a Quillen model structure on CatU[Set] such that the fibrant objects are the Quillen-Segal objects.

In standard terminology this means that if we embed a set X in Cat as Disc(X), and take a ‘projective resolution’ of it, then we get an equivalence of groupoids P → Disc(X), and P has symmetries. Concretely what happens is just a factorization of the identity (type) Id : Disc(X) → Disc(X) as a cofibration followed by a trivial fibration:

Disc(X) ↪ P → Disc(X)

This process of embedding Set ↪ QS{CatU[Set]} is a minimal homotopy enhancement. The idea is that there is no good notion of homotopy (weak equivalence) in Set, but there are at least two notions in Cat: equivalences of categories and equivalences of classifying spaces. This last class of weak equivalences is what happens with mirror phenomena. The mirror of a manifold should be the opposite of its fundamental Poincaré ∞-groupoid.