Principal Bundles Preserve Structures…


A bundle P = (P, M, π; G) is a principal bundle if the standard fiber is a Lie group G and ∃ (at least) one trivialization whose transition functions act on G by left translations Lg : G → G : h ↦ g · h (where · here denotes the group multiplication).

Principal bundles are slightly different from affine bundles and vector bundles. In fact, while in affine bundles the fibers π-1(x) have a canonical structure of affine spaces and in vector bundles the fibers π-1(x) have a canonical structure of vector spaces, in principal bundles the fibers have no canonical Lie group structure. This is due to the fact that, while in affine bundles transition functions act by means of affine transformations and in vector bundles transition functions act by means of linear transformations, in principal bundles transition functions act by means of left translations, which are not group automorphisms. Thus the fibers of a principal bundle do not carry a canonical group structure, but rather many non-canonical (trivialization-dependent) group structures. In the fibers of a vector bundle there exists a preferred element (the “zero”) the definition of which does not depend on the local trivialization. On the contrary, in the fibers of a principal bundle there is no preferred point, fixed by the transition functions, that could be selected as an identity. Thus, while in affine bundles affine morphisms are those which preserve the affine structure of the fibers and in vector bundles linear morphisms are the ones which preserve the linear structure of the fibers, in a principal bundle P = (P, M, π; G) principal morphisms preserve a different structure: the canonical right action of G on P.

Let P = (P, M, π; G) be a principal bundle and {(Uα, t(α))}α∈I a trivialization. We can locally consider the maps

R(α)g : π-1(Uα) → π-1(Uα) : [x, h](α) ↦ [x, h . g](α) —– (1)

∃ a (global) right action Rg of G on P which is free, vertical and transitive on fibers; its local expression in the given trivialization is R(α)g.

Using the local trivializations, write p = [x, h](α) = [x, g(βα)(x) · h](β); then the following diagram commutes:

[diagram: the local expressions R(α)g and R(β)g agree on π−1(Uαβ)]

which clearly shows that the local expressions agree on the overlaps Uαβ, so that they glue together to define a global right action. This action is obviously vertical; it is free because of the following:

Rgp = p ⇒ [x, h · g](α) = [x, h](α) ⇒ h · g = h ⇒ g = e —– (2)

Finally, if p = [x, h1](α) and q = [x, h2](α) are two points in the same fiber of P, one can choose g = h2−1 · h1 (where · denotes the group multiplication) so that p = Rgq. This shows that the right action is also transitive on the fibers.

On the contrary, a global left action cannot be defined by using the local maps

L(α)g : π-1(Uα) → π-1(Uα) : [x, h](α) ↦ [x, g . h](α) —– (3)

since these local maps do not satisfy a compatibility condition analogous to the condition of the commuting diagram.
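To see concretely why the right action glues across trivializations while a would-be left action does not, here is a minimal computational sketch (illustrative only, not part of the construction above): the standard fiber is modeled by the non-abelian permutation group S3, g_ba plays the role of a fixed transition value g(βα)(x), and the two local expressions of each action are compared. All names and the particular group elements are arbitrary choices.

```python
# Sketch: the standard fiber is modeled by the non-abelian group S3 (permutations
# of {0,1,2}); "g_ba" stands in for a transition value g_(beta alpha)(x) and "h"
# for the fiber coordinate of a point p in the alpha-trivialization, so that its
# beta-coordinate is g_ba . h.  All concrete values are arbitrary.
def mul(a, b):
    """Group law on S3: (a.b)(i) = a(b(i))."""
    return tuple(a[b[i]] for i in range(3))

g_ba = (1, 0, 2)   # transition value g_(beta alpha)(x)
h    = (1, 2, 0)   # alpha-coordinate of the point p
g    = (2, 0, 1)   # the group element acting

# Right action, computed in the two trivializations:
right_in_beta_via_alpha = mul(g_ba, mul(h, g))   # push [x, h.g]_alpha to beta-coordinates
right_in_beta_directly  = mul(mul(g_ba, h), g)   # act on the beta-coordinate g_ba.h
print(right_in_beta_via_alpha == right_in_beta_directly)   # True: associativity makes it glue

# Would-be left action, computed in the two trivializations:
left_in_beta_via_alpha = mul(g_ba, mul(g, h))    # push [x, g.h]_alpha to beta-coordinates
left_in_beta_directly  = mul(g, mul(g_ba, h))    # act on the left of the beta-coordinate
print(left_in_beta_via_alpha == left_in_beta_directly)     # False: left translations do not glue
```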

Let P = (P, M, π; G) and P’ = (P’, M’, π’; G’) be two principal bundles and θ : G → G’ a homomorphism of Lie groups. A bundle morphism Φ = (Φ, φ) : P → P’ is a principal morphism with respect to θ if the following diagram is commutative:

[diagram: Φ ○ Rg = R’θ(g) ○ Φ, i.e. Φ(p · g) = Φ(p) · θ(g)]

When G = G’ and θ = idG we just say that Φ is a principal morphism.

A trivial principal bundle (M × G, M, π; G) naturally admits the global unit section I ∈ Γ(M × G), defined with respect to a global trivialization by I : x ↦ (x, e), e being the unit element of G. Moreover, principal bundles admit global sections iff they are trivial. In fact, on principal bundles there is a canonical correspondence between local sections and local trivializations, due to the presence of the global right action.

Grothendieck’s Universes and Wiles Proof (Fermat’s Last Theorem). Thought of the Day 77.0


In formulating the general theory of cohomology Grothendieck developed the concept of a universe – a collection of sets large enough to be closed under any operation that arose. Grothendieck proved that the existence of a single universe is equivalent over ZFC to the existence of a strongly inaccessible cardinal. More precisely, 𝑈 is the set 𝑉𝛼 of all sets with rank below 𝛼 for some uncountable strongly inaccessible cardinal 𝛼.

Colin McLarty summarised the general situation:

Large cardinals as such were neither interesting nor problematic to Grothendieck and this paper shares his view. For him they were merely legitimate means to something else. He wanted to organize explicit calculational arithmetic into a geometric conceptual order. He found ways to do this in cohomology and used them to produce calculations which had eluded a decade of top mathematicians pursuing the Weil conjectures. He thereby produced the basis of most current algebraic geometry and not only the parts bearing on arithmetic. His cohomology rests on universes but weaker foundations also suffice at the loss of some of the desired conceptual order.

The applications of cohomology theory implicitly rely on universes. Most number theorists regard the applications as requiring much less than their ‘on their face’ strength and in particular believe the large cardinal appeals are ‘easily eliminable’. There are in fact two issues. McLarty writes:

Wiles’s proof uses hard arithmetic some of which is on its face one or two orders above PA, and it uses functorial organizing tools some of which are on their face stronger than ZFC.

There are two current programs for verifying in detail the intuition that the formal requirements for Wiles’s proof of Fermat’s Last Theorem can be substantially reduced. On the one hand, McLarty’s current work aims to reduce the ‘on their face’ strength of the results in cohomology from large cardinal hypotheses to finite order Peano arithmetic. On the other hand, Macintyre aims to reduce the ‘on their face’ strength of results in hard arithmetic to Peano arithmetic (PA). These programs may be complementary, or a full implementation of Macintyre’s might avoid the first.

McLarty reduces

  1. ‘all of SGA (Revêtements Étales et Groupe Fondamental)’ to Bounded Zermelo plus a Universe.
  2. ‘the currently existing applications’ to Bounded Zermelo itself, thus the consistency strength of simple type theory. The Grothendieck duality theorem and others like it become theorem schema.

The essential insight of McLarty’s papers on cohomology is the role of replacement in giving strength to the universe hypothesis. A 𝑍𝐶-universe is defined to be a transitive set U modeling 𝑍𝐶 such that every subset of an element of 𝑈 is itself an element of 𝑈. He remarks that any 𝑉𝛼 for 𝛼 a limit ordinal is provable in 𝑍𝐹𝐶 to be a 𝑍𝐶-universe. McLarty then asserts the essential use of replacement in the original Grothendieck formulation is to prove: for an arbitrary ring 𝑅 every module over 𝑅 embeds in an injective 𝑅-module and thus injective resolutions exist for all 𝑅-modules. But he gives a proof in a system with the proof-theoretic strength of finite order arithmetic that every sheaf of modules on any small site has an injective resolution.

Angus Macintyre dismisses with little comment the worries about the use of ‘large-structure’ tools in Wiles’s proof. He begins his appendix,

At present, all roads to a proof of Fermat’s Last Theorem pass through some version of a Modularity Theorem (generically MT) about elliptic curves defined over Q . . . A casual look at the literature may suggest that in the formulation of MT (or in some of the arguments proving whatever version of MT is required) there is essential appeal to higher-order quantification, over one of the following.

He then lists such objects as C, modular forms, Galois representations …and summarises that a superficial formulation of MT would be 𝛱1m for some small 𝑚. But he continues,

I hope nevertheless that the present account will convince all except professional sceptics that MT is really 𝛱01.

There then follows a 13 page highly technical sketch of an argument for the proposition that MT can be expressed by a sentence in 𝛱01 along with a less-detailed strategy for proving MT in PA.

Macintyre’s complexity analysis is in traditional proof theoretic terms. But his remark that ‘genus’ is a more useful geometric classification of curves than the syntactic notion of degree suggests that other criteria may be relevant. McLarty’s approach is not really a meta-theorem, but a statement that there was only one essential use of replacement and it can be eliminated. In contrast, Macintyre argues that ‘apparent second order quantification’ can be replaced by first order quantification. But the argument requires deep understanding of the number theory for each replacement in a large number of situations. Again, there is no general theorem that this type of result is provable in PA.

Mappings, Manifolds and Kantian Abstract Properties of Synthesis


An inverse system is a collection of sets which are connected by mappings. We start off with the definitions before relating these to abstract properties of synthesis.

Definition: A directed set is a set T together with an ordering relation ≤ such that

(1) ≤ is a partial order, i.e. transitive, reflexive, anti-symmetric

(2) ≤ is directed, i.e. for any s, t ∈ T there is r ∈ T with s, t ≤ r

Definition: An inverse system indexed by T is a set D = {Ds|s ∈ T} together with a family of mappings F = {hst|s ≥ t, hst : Ds → Dt}. The mappings in F must satisfy the coherence requirement that if s ≥ t ≥ r, htr ◦ hst = hsr.
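As a toy illustration of this definition (a sketch with made-up data: the index set, the sets Ds and the truncation maps are all arbitrary choices), one can check the directedness of the index set and the coherence requirement mechanically:

```python
# Sketch: T = {0,1,2,3} with the usual order is a directed set; D_s is the set of
# binary strings of length s; h_st (for s >= t) truncates a string to length t.
from itertools import product

T = [0, 1, 2, 3]
D = {s: [''.join(bits) for bits in product('01', repeat=s)] for s in T}

def h(s, t, d):
    """Restriction map h_st : D_s -> D_t, defined only for s >= t."""
    assert s >= t and len(d) == s
    return d[:t]

# Directedness: any two indices have a common upper bound in T.
print(all(any(r >= s and r >= t for r in T) for s in T for t in T))      # True

# Coherence: for all s >= t >= r and every d in D_s, h_tr(h_st(d)) == h_sr(d).
print(all(h(t, r, h(s, t, d)) == h(s, r, d)
          for s in T for t in T for r in T if s >= t >= r
          for d in D[s]))                                                # True
```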

Interpretation of the index set: The index set represents some abstract properties of synthesis. The ‘synthesis of apprehension in intuition’ proceeds by a ’running through and holding together of the manifold’ and is thus a process that takes place in time. We may now think of an index s ∈ T as an interval of time available for the process of ’running through and holding together’. More formally, s can be taken to be a set of instants or events, ordered by a ‘precedes’ relation; the relation t ≤ s then stands for: t is a substructure of s. It is immediate that on this interpretation ≤ is a partial order. The directedness is related to what Kant called ‘the formal unity of the consciousness in the synthesis of the manifold of representations’ or ‘the necessary unity of self-consciousness, thus also of the synthesis of the manifold, through a common function of the mind for combining it in one representation’ – the requirement that ‘for any s, t ∈ T there is r ∈ T with s, t ≤ r’ creates the formal conditions for combining the syntheses executed during s and t in one representation, coded by r.

Interpretation of the Ds and the mappings hst : Ds → Dt. An object in Ds can be thought of as a possible ‘indeterminate object of empirical intuition’ synthesised in the interval s. If s ≥ t, the mapping hst : Ds → Dt expresses a consistency requirement: if d ∈ Ds represents an indeterminate object of empirical intuition synthesised in interval s, so that a particular manifold of features can be ‘run through and held together’ during s, some indeterminate object of empirical intuition must already be synthesisable by ‘running through and holding together’ in interval t, e.g. by combining a subset of the features characterising d. This interpretation justifies the coherence condition (for s ≥ t ≥ r, htr ◦ hst = hsr): the synthesis obtained from first restricting the interval available for ‘running through and holding together’ to interval t, and then to interval r, should not differ from the synthesis obtained by restricting to r directly.

We do not put any further requirements on the mappings hst : Ds → Dt, such as surjectivity or injectivity. Some indeterminate object of experience in Dt may have disappeared in Ds: more time for ‘running through and holding together’ may actually yield fewer features that can be combined. Thus we do not require the mappings to be surjective. It may also happen that an indeterminate object of experience in Dt corresponds to two or more of such objects in Ds, as when a building viewed from afar upon closer inspection turns out to be composed of two spatially separated buildings; thus the mappings need not be injective.

The interaction of the directedness of the index set and the mappings hst is of some interest. If r ≥ s, t there are mappings hrs : Dr → Ds and hrt : Dr → Dt. Each ‘indeterminate object of empirical intuition’ d ∈ Dr can be seen as a synthesis of such objects hrs(d) ∈ Ds and hrt(d) ∈ Dt. For example, the ‘manifold of a house’ can be viewed as synthesised from a ‘manifold of the front’ and a ‘manifold of the back’. The operation just described has some of the characteristics of the synthesis of reproduction in imagination: the fact that the front of the house can be unified with the back to produce a coherent object presupposes that the front can be reproduced as it is while we are staring at the back. The mappings hrs : Dr → Ds and hrt : Dr → Dt capture the idea that d ∈ Dr arises from reproductions of hrs(d) and hrt(d) in r.

Nomological Possibility and Necessity


An event E is nomologically possible in history h at time t if the initial segment of that history up to t admits at least one continuation in Ω that lies in E; and E is nomologically necessary in h at t if every continuation of the history’s initial segment up to t lies in E.

More formally, we say that one history, h’, is accessible from another, h, at time t if the initial segments of h and h’ up to time t coincide, i.e., ht = h’t. We then write h’Rth. The binary relation Rt on possible histories is in fact an equivalence relation (reflexive, symmetric, and transitive). Now, an event E ⊆ Ω is nomologically possible in history h at time t if some history h’ in Ω that is accessible from h at t is contained in E. Similarly, an event E ⊆ Ω is nomologically necessary in history h at time t if every history h’ in Ω that is accessible from h at t is contained in E.

In this way, we can define two modal operators, ♦t and ¤t, to express possibility and necessity at time t. We define each of them as a mapping from events to events. For any event E ⊆ Ω,

♦t E = {h ∈ Ω : for some h’ ∈ Ω with h’Rth, we have h’ ∈ E},

¤t E = {h ∈ Ω : for all h’ ∈ Ω with h’Rth, we have h’ ∈ E}.

So, ♦t E is the set of all histories in which E is possible at time t, and ¤t E is the set of all histories in which E is necessary at time t. Accordingly, we say that “ ♦t E” holds in history h if h is an element of ♦t E, and “ ¤t E” holds in h if h is an element of ¤t E. As one would expect, the two modal operators are duals of each other: for any event E ⊆ Ω, we have ¤t E = ~ ♦t ~E and ♦t E = ~ ¤t ~E.

Although we have here defined nomological possibility and necessity, we can analogously define logical possibility and necessity. To do this, we simply replace every occurrence of the set Ω of nomologically possible histories in our definitions with the set H of logically possible histories. Note also that, by defining the operators ♦t and ¤t as functions from events to events, we have adopted a semantic definition of these modal notions. However, we could also define them syntactically, by introducing an explicit modal logic. For each point in time t, the logic corresponding to the operators ♦t and ¤t would then be an instance of a standard S5 modal logic.
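A minimal finite sketch of these definitions (everything here – the state space, the toy “law” carving Ω out of the logically possible histories, and the event E – is made up for illustration):

```python
# Sketch: histories are length-3 sequences of states in {0,1}; Omega (the
# nomologically possible histories) is cut out by a made-up "law" forbidding the
# state 1 at both of the first two times; E is the event "the state at time 2 is 1".
from itertools import product

T = [0, 1, 2]
Omega = {h for h in product((0, 1), repeat=3) if not (h[0] == 1 and h[1] == 1)}
E = {h for h in Omega if h[2] == 1}

def accessible(h1, h2, t):
    """h1 R_t h2: the initial segments of h1 and h2 up to (and including) t coincide."""
    return h1[:t + 1] == h2[:t + 1]

def poss(E, t):
    """Diamond_t E: the histories in which E is possible at time t."""
    return {h for h in Omega if any(accessible(h2, h, t) for h2 in E)}

def nec(E, t):
    """Box_t E: the histories in which E is necessary at time t."""
    return {h for h in Omega if all(h2 in E for h2 in Omega if accessible(h2, h, t))}

# Duality: Box_t E = ~ Diamond_t ~E (complements taken inside Omega).
print(all(nec(E, t) == Omega - poss(Omega - E, t) for t in T))      # True

# Possibility becomes more demanding over time, necessity less demanding.
print(poss(E, 2) <= poss(E, 1) <= poss(E, 0))                       # True
print(nec(E, 0) <= nec(E, 1) <= nec(E, 2))                          # True
```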

The analysis shows how nomological possibility and necessity depend on the dynamics of the system. In particular, as time progresses, the notion of possibility becomes more demanding: fewer events remain possible at each time. And the notion of necessity becomes less demanding: more events become necessary at each time, for instance due to having been “settled” in the past. Formally, for any t and t’ in T with t < t’ and any event E ⊆ Ω,

if ♦t’ E then ♦t E,

if ¤t E then ¤t’ E.

Furthermore, in a deterministic system, for every event E and any time t, we have ♦t E = ¤t E. In other words, an event is possible in any history h at time t if and only if it is necessary in h at t. In an indeterministic system, by contrast, necessity and possibility come apart.

Let us say that one history, h’, is accessible from another, h, relative to a set T’ of time points, if the restrictions of h and h’ to T’ coincide, i.e., h’T’ = hT’. We then write h’RT’h. Accessibility at time t is the special case where T’ is the set of points in time up to time t. We can define nomological possibility and necessity relative to T’ as follows. For any event E ⊆ Ω,

♦T’ E = {h ∈ Ω : for some h’ ∈ Ω with h’RT’h, we have h’ ∈ E},

¤T’ E = {h ∈ Ω : for all h’ ∈ Ω with h’RT’h, we have h’ ∈ E}.

Although these modal notions are much less familiar than the standard ones (possibility and necessity at time t), they are useful for some purposes. In particular, they allow us to express the fact that the states of a system during a particular period of time, T’ ⊆ T, render some events E possible or necessary.

Finally, our definitions of possibility and necessity relative to some general subset T’ of T also allow us to define completely “atemporal” notions of possibility and necessity. If we take T’ to be the empty set, then the accessibility relation RT’ becomes the universal relation, under which every history is related to every other. An event E is possible in this atemporal sense (i.e., ♦E) iff E is a non-empty subset of Ω, and it is necessary in this atemporal sense (i.e., ¤E) iff E coincides with all of Ω. These notions might be viewed as possibility and necessity from the perspective of some observer who has no temporal or historical location within the system and looks at it from the outside.

Conjuncted: Operations of Truth. Thought of the Day 47.1


Conjuncted here.

Let us consider only the power set of the set of all natural numbers, which is the smallest infinite set – the countable infinity. By a model of set theory we understand a set in which – if we restrict ourselves to its elements only – all axioms of set theory are satisfied. It follows from Gödel’s completeness theorem that as long as set theory is consistent, no statement which is true in some model of set theory can contradict logical consequences of its axioms. If the cardinality of p(N) were such a consequence, there would exist a cardinal number κ such that the sentence “the cardinality of p(N) is κ” would be true in all the models. However, for every cardinal κ the technique of forcing allows for finding a model M where the cardinality of p(N) is not equal to κ. Thus, for no κ is the sentence “the cardinality of p(N) is κ” a consequence of the axioms of set theory, i.e. they do not decide the cardinality of p(N).

The starting point of forcing is a model M of set theory – called the ground model – which is countably infinite and transitive. As a matter of fact, the existence of such a model cannot be proved but it is known that there exists a countable and transitive model for every finite subset of axioms.

A characteristic subtlety can be observed here. From the perspective of an inhabitant of the universe, that is, if all the sets are considered, the model M is only a small part of this universe. It is deficient in almost every respect; for example all of its elements are countable, even though the existence of uncountable sets is a consequence of the axioms of set theory. However, from the point of view of an inhabitant of M, that is, if elements outside of M are disregarded, everything is in order. Some sets of M are, from this internal point of view, uncountable, because in this model there are no functions establishing a one-to-one correspondence between them and ω0. One could say that M simulates the properties of the whole universe.

The main objective of forcing is to build a new model M[G] based on M, which contains M and satisfies certain additional properties. The model M[G] is called the generic extension of M. In order to accomplish this goal, a particular set is distinguished in M and its elements are referred to as conditions, which will be used to determine basic properties of the generic extension. In the case of the forcing that proves the undecidability of the cardinality of p(N), the set of conditions codes finite fragments of a function witnessing the correspondence between p(N) and a fixed cardinal κ.

In the next step, an appropriately chosen set G is added to M, as well as other sets that are indispensable in order for M[G] to satisfy the axioms of set theory. This set – called generic – is a subset of the set of conditions that always lies outside of M. The construction of M[G] is exceptional in the sense that its key properties can be described and proved using M only, or just the conditions, thus without referring to the generic set. This is possible for three reasons. First of all, every element x of M[G] has a name existing already in M (that is, an element in M that codes x in some particular way). Secondly, based on these names, one can design a language called the forcing language or – as Badiou terms it – the subject language, which is powerful enough to express every sentence of set theory referring to the generic extension. Finally, it turns out that the validity of sentences of the forcing language in the extension M[G] depends on the set of conditions: the conditions force validity of sentences of the forcing language in a precisely specified sense. As has already been said, the generic set G consists of some of the conditions, so even though G is outside of M, its elements are in M. Recognizing which of them will end up in G is not possible for an inhabitant of M; however, in some cases the following can be proved: provided that the condition p is an element of G, the sentence S is true in the generic extension constructed using this generic set G. We say then that p forces S.

In this way, with the aid of the forcing language, one can prove that every generic set of the Cohen forcing codes a total function defining a one-to-one correspondence between elements of p(N) and a fixed (uncountable) cardinal number – it turns out that all the conditions force the sentence stating this property of G, so regardless of which conditions end up in the generic set, it is always true in the generic extension. On the other hand, the existence of a generic set in the model M cannot follow from the axioms of set theory, otherwise they would decide the cardinality of p(N).

The method of forcing is of fundamental importance for Badiou’s philosophy. The event escapes ontology; it is “that-which-is-not-being-qua-being”, so it has no place in set theory or the forcing construction. However, the post-evental truth that enters and modifies the situation is presented by forcing in the form of a generic set leading to an extension of the ground model. In other words, the situation, understood as the ground model M, is transformed by a post-evental truth identified with a generic set G, and becomes the generic model M[G]. Moreover, the knowledge of the situation is interpreted as the language of set theory, serving to discern elements of the situation; and as axioms of set theory, deciding validity of statements about the situation. Knowledge, understood in this way, does not decide the existence of a generic set in the situation nor can it point to its elements. A generic set is always undecidable and indiscernible.

Therefore, from the perspective of knowledge, it is not possible to establish, whether a situation is still the ground-model or it has undergone a generic extension resulting from the occurrence of an event; only the subject can interventionally decide this. And it is only the subject who decides about the belonging of particular elements to the generic set (i.e. the truth). A procedure of truth or procedure of fidelity (Alain Badiou – Being and Event) supported in this way gives rise to the subject language. It consists of sentences of set theory, so in this respect it is a part of knowledge, although the veridicity of the subject language originates from decisions of the faithful subject. Consequently, a procedure of fidelity forces statements about the situation as it is after being extended, and modified by the operation of truth.

Impasse to the Measure of Being. Thought of the Day 47.0


The power set p(x) of x – the state of situation x or its metastructure (Alain Badiou – Being and Event) – is defined as the set of all subsets of x. Now, basic relations between sets can be expressed as the following relations between sets and their power sets. If for some x, every element of x is also a subset of x, then x is a subset of p(x), and x can be reduced to its power set. Conversely, if every subset of x is an element of x, then p(x) is a subset of x, and the power set p(x) can be reduced to x. Sets that satisfy the first condition are called transitive. For obvious reasons the empty set is transitive. However, the second relation never holds. The mathematician Georg Cantor proved that not only p(x) can never be a subset of x, but in some fundamental sense it is strictly larger than x. On the other hand, axioms of set theory do not determine the extent of this difference. Badiou says that it is an “excess of being”, an excess that at the same time is its impasse.

In order to explain the mathematical sense of this statement, recall the notion of cardinality, which clarifies and generalizes the common understanding of quantity. We say that two sets x and y have the same cardinality if there exists a function defining a one-to-one correspondence between elements of x and elements of y. For finite sets, this definition agrees with common intuitions: if a finite set y has more elements than a finite set x, then regardless of how elements of x are assigned to elements of y, something will be left over in y precisely because it is larger. In particular, if y contains x and some other elements, then y does not have the same cardinality as x. This seemingly trivial fact is not always true outside of the domain of finite sets. To give a simple example, the set of all natural numbers contains the square numbers, that is, numbers of the form n², as well as some other numbers, but the set of all natural numbers and the set of square numbers have the same cardinality. The correspondence witnessing this fact assigns to every number n a unique square number, namely n².

Counting finite sets has always been done via natural numbers 0, 1, 2, . . . In set theory, the concept of such a canonical measure can be extended to infinite sets, using the notion of cardinal numbers. Without getting into details of their definition, let us say that the series of cardinal numbers begins with the natural numbers, which are directly followed by the number ω0, that is, the size of the set of all natural numbers, then by ω1, the first uncountable cardinal number, etc. The hierarchy of cardinal numbers has the property that every set x, finite or infinite, has cardinality (i.e. size) equal to exactly one cardinal number κ. We say then that κ is the cardinality of x.

The cardinality of the power set p(x) is 2ⁿ for every finite set x of cardinality n. However, something quite paradoxical happens when infinite sets are considered. Even though Cantor’s theorem does state that the cardinality of p(x) is always larger than that of x – just as in the case of finite sets – the axioms of set theory do not determine the exact cardinality of p(x). Moreover, one can formally prove that there exists no proof determining the cardinality of the power set of any given infinite set. There is a general method of building models of set theory, discovered by the mathematician Paul Cohen and called forcing, that yields models where – depending on construction details – cardinalities of infinite power sets can take different values. Consequently, quantity – “a fetish of objectivity” as Badiou calls it – does not define a measure of being but leads to its impasse instead. It reveals an undetermined gap, where an event can occur – “that-which-is-not being-qua-being”.

Simultaneity


Let us introduce the concept of space using the notion of reflexive action (or reflex action) between two things. Intuitively, a thing x acts on another thing y if the presence of x disturbs the history of y. Events in the real world seem to happen in such a way that it takes some time for the action of x to propagate up to y. This fact can be used to construct a relational theory of space à la Leibniz, that is, by taking space as a set of equitemporal things. It is necessary then to define the relation of simultaneity between states of things.

Let x and y be two things with histories h(xτ) and h(yτ), respectively, and let us suppose that the action of x on y starts at τx0. The history of y will be modified starting from τy0. The proper times are still not related but we can introduce the reflex action to define the notion of simultaneity. The action of y on x, started at τy0, will modify x from τx1 on. The relation “the action of x on y is reflected to x” is the reflex action. Historically, Galileo introduced the reflection of a light pulse on a mirror to measure the speed of light. With this relation we will define the concept of simultaneity of events that happen on different basic things.

[figure: the reflex action – x acts on y starting at τx0, y is modified from τy0 on, and the reflex action modifies x from τx1 on]

Besides, we have a second important fact: observation and experiment suggest that gravitation, whose source is energy, is a universal interaction, carried by the gravitational field.

Let us now state the above hypothesis axiomatically.

Axiom 1 (Universal interaction): Any pair of basic things interact. This extremely strong axiom states not only that there exist no completely isolated things but that all things are interconnected.

This universal interconnection of things should not be confused with “universal interconnection” claimed by several mystical schools. The present interconnection is possible only through physical agents, with no mystical content. It is possible to model two noninteracting things in Minkowski space assuming they are accelerated during an infinite proper time. It is easy to see that an infinite energy is necessary to keep a constant acceleration, so the model does not represent real things, with limited energy supply.

Now consider the time interval (τx1 − τx0). Special Relativity suggests that it is nonzero, since any action propagates with a finite speed. We then state

Axiom 2 (Finite speed axiom): Given two different and separated basic things x and y, such as in the above figure, there exists a minimum positive bound for the interval (τx1 − τx0) defined by the reflex action.

Now we can define simultaneity: τy0 is simultaneous with τx1/2, where τx1/2 =Df (1/2)(τx1 + τx0).
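As a small numerical illustration of this convention (the proper-time values are invented):

```python
# Sketch: invented proper-time values for the reflex action on x.
tau_x0 = 2.0                          # the action of x on y starts (proper time of x)
tau_x1 = 6.0                          # the reflex action modifies x (proper time of x)

tau_x_half = 0.5 * (tau_x1 + tau_x0)  # the instant on x simultaneous with tau_y0 on y
print(tau_x_half)                     # 4.0
```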

The local times on x and y can be synchronized by the simultaneity relation. However, as we know from General Relativity, the simultaneity relation is transitive only in special reference frames called synchronous, thus prompting us to include the following axiom:

Axiom 3 (Synchronizability): Given a set of separated basic things {xi} there is an assignment of proper times τi such that the relation of simultaneity is transitive.

With this axiom, the simultaneity relation is an equivalence relation. Now we can define a first approximation to physical space: the ontic space EO is the set of equivalence classes of states defined by the relation of simultaneity on the set of things.

The notion of simultaneity allows the analysis of the notion of clock. A thing y ∈ Θ is a clock for the thing x if there exists an injective function ψ : SL(y) → SL(x), such that τ < τ′ ⇒ ψ(τ) < ψ(τ′). i.e.: the proper time of the clock grows in the same way as the time of things. The name Universal time applies to the proper time of a reference thing that is also a clock. From this we see that “universal time” is frame dependent in agreement with the results of Special Relativity.

Galois Connections. Part 3.

Let (P,≤P) and (Q,≤Q) be posets, and consider two set functions ∗ ∶ P ⇄ Q ∶ ∗. We will denote these by p ↦ p ∗ and q ↦ q ∗ for all p ∈ P and q ∈ Q. This pair of functions is called a Galois connection if, for all p ∈ P and q ∈ Q, we have

p ≤P q∗ ⇐⇒ q ≤Q p∗

Let ∗ ∶ P ⇄ Q ∶ ∗ be a Galois connection. For all elements x of P or Q we will use the notations x ∗ ∗ ∶= (x ∗)∗ and x ∗ ∗ ∗ ∶= (x ∗ ∗)∗.

(1) For all p ∈ P and q ∈ Q we have

p ≤P p∗∗ and q ≤Q q∗∗.

(2) For all elements p1, p2 ∈ P and q1, q2 ∈ Q we have

p1 ≤P p2 ⇒ p2∗ ≤Q p1∗ and q1 ≤Q q2 ⇒ q2∗ ≤P q1∗.

(3) For all elements p ∈ P and q ∈ Q we have

p∗∗∗ = p∗ and q∗∗∗ = q∗

Proof:

Since the definition of a Galois connection is symmetric in P and Q, we will simplify the proof by using the notation

x ≤ y∗ ⇐⇒ y ≤ x∗

for all elements x,y such that the inequalities make sense. To prove (1) note that for any element x we have x ∗ ≤ x ∗ by the reflexivity of partial order. Then from the definition of Galois connection we obtain,

(x∗) ≤ (x)∗ ⇒ (x) ≤ (x∗)∗ ⇒ x ≤ x∗∗

To prove (2) consider elements x, y such that x ≤ y. From (1) and the transitivity of the partial order we get x ≤ y ≤ y∗∗, hence x ≤ y∗∗. Then from the definition of Galois connection we obtain

(x) ≤ (y∗)∗ ⇒ (y∗) ≤ (x)∗ ⇒ y∗ ≤ x∗.

To prove (3) consider any element x. On the one hand, part (1) tells us that

(x∗) ≤ (x∗)∗∗ ⇒ x∗ ≤ x∗∗∗.

On the other hand, part (1) tells us that x ≤ x ∗ ∗ and then part (2) says that

(x) ≤ (x∗∗) ⇒ (x∗∗)∗ ≤ (x)∗ ⇒ x∗∗∗ ≤ x∗

Finally, the antisymmetry of partial order says that x∗∗∗ = x∗, which we interpret as isomorphism of objects in the poset category. The following definition captures the essence of these three basic properties.

Definition of Closure in a Poset. Given a poset (P,≤), we say that a function cl ∶ P → P is a closure operator if it satisfies the following three properties:

(i) Extensive: ∀p ∈ P, p ≤ cl(p)

(ii) Monotone: ∀ p,q ∈ P, p ≤ q ⇒ cl(p) ≤ cl(q)

(iii) Idempotent: ∀ p ∈ P, cl(cl(p)) = cl(p).

[Remark: If P = 2U is a Boolean lattice, and if the closure cl ∶ 2U → 2U also preserves finite unions, then we call it a Kuratowski closure. Kuratowski proved that such a closure is equivalent to a topology on the set U.]

If ∗ ∶ P ⇄ Q ∶ ∗ is a Galois connection, then the basic properties above immediately imply that the compositions ∗∗ ∶ P → P and ∗∗ ∶ Q → Q are closure operators.

Proof: Property (i) is just property (1), property (ii) follows from applying property (2) twice, and property (iii) follows from property (3).

Fundamental Theorem of Galois Connections: Any Galois connection ∗ ∶ P ⇄ Q ∶ ∗ determines two closure operators ∗ ∗ ∶ P → P and ∗ ∗ ∶ Q → Q. We will say that the element p ∈  P (resp. q ∈  Q) is ∗ ∗-closed if p∗ ∗ = p (resp. q∗ ∗ = q). Then the Galois connection restricts to an order-reversing bijection between the subposets of ∗ ∗-closed elements.

Proof: Let Q ∗ ⊆ P and P ∗ ⊆ Q denote the images of the functions ∗ ∶ Q → P and ∗ ∶ P  → Q, respectively. The restriction of the connection to these subsets defines an order-reversing bijection:

[diagram: the restriction ∗ ∶ Q∗ ⇄ P∗ ∶ ∗ is an order-reversing bijection]

Indeed, consider any p ∈ Q ∗, so that p = q ∗ for some q ∈ Q. Then by properties (1) and (3) of Galois connections we have

(p)∗∗ = (q∗)∗∗ ⇒ p∗∗ = q∗∗∗ ⇒ p∗∗ = q∗ ⇒ p∗∗ = p

Similarly, for all q ∈ P ∗ we have q ∗ ∗ = q. The bijections reverse order because of property (2).

Finally, note that Q ∗ and P ∗ are exactly the subsets of ∗ ∗-closed elements in P and Q, respectively. Indeed, we have seen above that every element of Q ∗ is ∗ ∗-closed. Conversely, if p ∈ P is ∗ ∗-closed then we have

p = p ∗ ∗ ⇒ p = (p ∗) ∗,

and it follows that p ∈ Q ∗. Similarly, every element of P ∗ is ∗ ∗-closed.

Thus, a Galois connection is something like a “loose bijection”. It’s not necessarily a bijection but it becomes one after we “tighten it up”. Sort of like tightening your shoelaces.

[figure: two posets with their ∗∗-closed elements shaded]

The shaded subposets here consist of the ∗∗-closed elements. They are supposed to look (anti-)isomorphic. The unshaded parts of the posets get “tightened up” into the shaded subposets. Note that the top elements are ∗∗-closed. Indeed, property (1) tells us that 1P ≤ 1P∗∗, and the universal property of the top element gives 1P∗∗ ≤ 1P, hence 1P∗∗ = 1P. Note also that the Galois condition gives 0P ≤ q∗ ⇐⇒ q ≤ 0P∗; since the left hand side is always true, so is the right hand side, and then from the universal property of the top element in Q we conclude that 0P∗ = 1Q. As a consequence of this, the arbitrary meet of ∗∗-closed elements (if it exists) is still ∗∗-closed. We will see, however, that the join of ∗∗-closed elements is not necessarily ∗∗-closed. And hence not all Galois connections induce topologies.

Galois connections between Boolean lattices have a particularly nice form, which is closely related to the universal quantifier ∀. Galois Connections of Boolean Lattices. Let U, V be sets and let ∼ ⊆ U × V be any subset (called a relation) between U and V. As usual, we will write “u ∼ v” in place of the statement “(u,v) ∈ ∼“, and we read this as “u is related to v“. Then for all S ∈ 2U and T ∈ 2V we define,

S∼ ∶= {v ∈ V ∶ ∀ s ∈ S, s ∼ v} ∈ 2V,

T∼ ∶= {u ∈ U ∶ ∀ t ∈ T, u ∼ t} ∈ 2U

The pair of functions S ↦ S∼ and T ↦ T∼ is a Galois connection, ∼ ∶ 2U ⇄ 2V ∶ ∼.

To see this, note that ∀ subsets S ∈ 2U and T ∈ 2V we have

S ⊆ T∼ ⇐⇒ ∀ s ∈ S, s ∈ T∼

⇐⇒ ∀ s ∈ S,∀ t ∈ T, s ∼ t

⇐⇒ ∀ t ∈ T, ∀ s ∈ S, s ∼ t

⇐⇒ ∀ t ∈ T, t ∈ S∼

⇐⇒ T ⊆ S∼.

Moreover, one can prove that any Galois connection between 2U and 2V arises in this way from a unique relation.
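Here is a small computational sketch of this construction for finite sets (U, V and the relation ∼ are arbitrary choices); it checks the Galois condition and that the ∼∼-closed subsets on the two sides correspond to one another, as the Fundamental Theorem asserts:

```python
# Sketch: an arbitrary relation ~ between two small finite sets, inducing the
# Galois connection S -> S~ , T -> T~ between their power sets.
from itertools import combinations

U = {1, 2, 3}
V = {'a', 'b', 'c'}
R = {(1, 'a'), (1, 'b'), (2, 'b'), (3, 'c')}      # the relation ~  (arbitrary choice)

def star_U(S):
    """S~ : the elements of V related to every element of S."""
    return frozenset(v for v in V if all((s, v) in R for s in S))

def star_V(T):
    """T~ : the elements of U related to every element of T."""
    return frozenset(u for u in U if all((u, t) in R for t in T))

def subsets(X):
    X = list(X)
    return [frozenset(c) for k in range(len(X) + 1) for c in combinations(X, k)]

# Galois condition: S <= T~  iff  T <= S~.
print(all((S <= star_V(T)) == (T <= star_U(S))
          for S in subsets(U) for T in subsets(V)))          # True

# The ~~-closed subsets on the two sides are exchanged by the connection.
closed_U = {S for S in subsets(U) if star_V(star_U(S)) == S}
closed_V = {T for T in subsets(V) if star_U(star_V(T)) == T}
print({star_U(S) for S in closed_U} == closed_V)             # True
print({star_V(T) for T in closed_V} == closed_U)             # True
```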

Orthogonal Complement: Let V be a vector space over field K and let V ∗ be the dual space, consisting of linear functions α ∶ V → K. We define the relation ⊥ ⊆ V ∗ × V by

α ⊥ v ⇐⇒ α(v) = 0.

The resulting ⊥⊥-closed subsets are precisely the linear subspaces on both sides. Thus the Fundamental Theorem of Galois Connections gives us an order-reversing bijection between the subspaces of V ∗ and the subspaces of V.
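As a numerical sanity check of this example (a sketch only: V and V∗ are both identified with ℝ⁴ via the standard inner product, the perp operator is computed with an SVD, and the sample vectors are arbitrary):

```python
# Sketch: V and V* are both identified with R^4 via the dot product; perp(S) is
# computed from an SVD and returns a basis (as columns) of the annihilator.
import numpy as np

def perp(vectors):
    """Orthogonal complement in R^4 of the span of the given row vectors."""
    A = np.atleast_2d(vectors)
    _, s, vt = np.linalg.svd(A)
    rank = int((s > 1e-10).sum())
    return vt[rank:].T                       # columns: basis of {v : a.v = 0 for all rows a}

S = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 1.0, 0.0]])         # two arbitrary vectors in R^4

S_perp      = perp(S)                        # the annihilator of S
S_perp_perp = perp(S_perp.T)                 # the double complement: should be span(S)

proj = S_perp_perp @ S_perp_perp.T           # orthogonal projector onto the double complement
print(np.allclose(proj @ S.T, S.T))                        # True: S lies inside its double perp
print(S_perp_perp.shape[1] == np.linalg.matrix_rank(S))    # True: the dimensions agree
```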

Convex Complement: Let V be a Euclidean space, i.e., a real vector space with an inner product ⟨-,-⟩ ∶ V × V → ℝ. We define the relation ∼ ⊆ V × V by

u ∼ v ⇐⇒ ⟨u,v⟩ ≤ 0.

∀ S ⊆ V the operation S ↦ S∼∼ gives the cone generated by S; thus the ∼∼-closed sets are precisely the cones. Here is a picture:

[figure: a set S ⊆ V and the cone S∼∼ it generates]

Original Galois Connection: Let L be a field and let G be a finite group of automorphisms of L, i.e., each g ∈ G is a function g ∶ L → L preserving addition and multiplication. We define a relation ∼ ⊆ G × L by

g ∼ l ⇐⇒ g(l) = l.

Define K ∶= G∼ (often written LG) to be the “subfield fixed by G“. The original Fundamental Theorem of Galois Theory says that the ∼ ∼-closed subsets of G are precisely the subgroups and the ∼ ∼-closed subsets of L are precisely the subfields containing K.

Hilbert’s Nullstellensatz: Let K be a field and consider the ring of polynomials K[x] ∶= K[x1,…,xn] in n commuting variables. For each polynomial f(x) ∶= f(x1,…,xn) ∈ K[x] and for each n-tuple of field elements α ∶= (α1,…,αn) ∈ Kn, we denote the evaluation by f(α) ∶= f(α1,…,αn) ∈ K. Now we define a relation ∼ ⊆ K[x] × Kn by

f(x) ∼ α ⇐⇒ f(α) = 0

By definition, the closure operator ∼ ∼ on subsets of Kn is called the Zariski closure. It is not difficult to prove that it satisfies the additional property of a Kuratowski closure (i.e., finite unions of closed sets are closed) and hence it defines a topology on Kn, called the Zariski topology. Hilbert’s Nullstellensatz says that if K is algebraically closed, then the ∼ ∼-closed subsets of K[x] are precisely the radical ideals (i.e., ideals closed under taking arbitrary roots).

Marching Along Categories, Groups and Rings. Part 2

A category C consists of the following data:

A collection Obj(C) of objects. We will write “x ∈ C” to mean that “x ∈ Obj(C)”.

For each ordered pair x, y ∈ C there is a collection HomC (x, y) of arrows. We will write α∶x→y to mean that α ∈ HomC(x,y). Each collection HomC(x,x) has a special element called the identity arrow idx ∶ x → x. We let Arr(C) denote the collection of all arrows in C.

For each ordered triple of objects x, y, z ∈ C there is a function

○ ∶ HomC (x, y) × HomC(y, z) → HomC (x, z), which is called composition of  arrows. If  α ∶ x → y and β ∶ y → z then we denote the composite arrow by β ○ α ∶ x → z.

If each collection of arrows HomC(x,y) is a set then we say that the category C is locally small. If in addition the collection Obj(C) is a set then we say that C is small.

Identity: For each arrow α ∶ x → y the following diagram commutes:

[diagram: idy ○ α = α = α ○ idx]

Associative: For all arrows α ∶ x → y, β ∶ y → z, γ ∶ z → w, the following diagram commutes:

[diagram: γ ○ (β ○ α) = (γ ○ β) ○ α]

We say that C′ ⊆ C is a subcategory if Obj(C′) ⊆ Obj(C) and if ∀ x,y ∈ Obj(C′) we have HomC′(x,y) ⊆ HomC(x,y). We say that the subcategory is full if each inclusion of hom sets is an equality.

Let C be a category. A diagram D ⊆ C is a collection of objects in C with some arrows between them. Repetition of objects and arrows is allowed. OR. Let I be any small category, which we think of as an “index category”. Then any functor D ∶ I → C is called a diagram of shape I in C. In either case, we say that the diagram D commutes if for all pairs of objects x,y in D, any two directed paths in D from x to y yield the same arrow under composition.

Identity arrows generalize the reflexive property of posets, and composition of arrows generalizes the transitive property of posets. But whatever happened to the antisymmetric property? Well, it’s the same issue we had before: we should really define equivalence of objects in terms of antisymmetry.

Isomorphism: Let C be a category. We say that two objects x,y ∈ C are isomorphic in C if there exist arrows α ∶ x → y and β ∶ y → x such that the following diagram commutes:

[diagram: β ○ α = idx and α ○ β = idy]

In this case we write x ≅C y, or just x ≅ y if the category is understood.

If γ ∶ y → x is any other arrow satisfying the same diagram as β, then by the axioms of identity and associativity we must have

γ = γ ○ idy = γ ○ (α ○ β) = (γ ○ α) ○ β = idx ○ β = β

This allows us to refer to β as the inverse of the arrow α. We use the notations β = α−1 and α = β−1.

A category with one object is called a monoid. A monoid in which each arrow is invertible is called a group. A small category in which each arrow is invertible is called a groupoid.

Subcategories of Set are called concrete categories. Given a concrete category C ⊆ Set we can think of its objects as special kinds of sets and its arrows as special kinds of functions. Some famous examples of concrete categories are:

• Grp = groups & homomorphisms
• Ab = abelian groups & homomorphisms
• Rng = rings & homomorphisms
• CRng = commutative rings & homomorphisms

Note that Ab ⊆ Grp and CRng ⊆ Rng are both full subcategories. In general, the arrows of a concrete category are called morphisms or homomorphisms. This explains our notation of HomC.

Homotopy: The most famous example of a non-concrete category is the fundamental groupoid π1(X) of a topological space X. Here the objects are points and the arrows are homotopy classes of continuous directed paths. The skeleton is the set π0(X) of path components (really a discrete category, i.e., in which the only arrows are the identities). Categories like this are the reason we prefer the name “arrow” instead of “morphism”.

Limit/Colimit: Let D ∶ I → C be a diagram in a category C (thus D is a functor and I is a small “index” category). A cone under D consists of

• an object c ∈ C,

• a collection of arrows αi ∶ c → D(i), one for each index i ∈ I,

such that for each arrow δ ∶ i → j in I we have αj = D(δ) ○ αi.

Visually:

[diagram: a cone (c, (αi)i∈I) under D]

The cone (c,(αi)i∈I) is called a limit of the diagram D if, for any cone (z,(βi)i∈I) under D, the following picture holds:

[diagram: every cone (z, (βi)i∈I) under D factors through (c, (αi)i∈I) via a unique arrow υ ∶ z → c]

This picture means that there exists a unique arrow υ ∶ z → c such that, for each arrow δ ∶ i → j in I (including the identity arrows), the following diagram commutes:

[diagram: βi = αi ○ υ, βj = αj ○ υ and αj = D(δ) ○ αi]

When δ = idi this diagram just says that βi = αi ○ υ. We do not assume that D itself is commutative. Dually, a cone over D consists of an object c ∈ C and a set of arrows αi ∶ D(i) → c satisfying αi = αj ○ D(δ) for each arrow δ ∶ i → j in I. This cone is called a colimit of the diagram D if, for any cone (z,(βi)i∈I) over D, the following picture holds:

[diagram: the dual picture – every cone (z, (βi)i∈I) over D factors through the colimit via a unique arrow υ ∶ c → z]

When the (unique) limit or colimit of the diagram D ∶ I → C exists, we denote it by (limI D, (φi)i∈I) or (colimI D, (φi)i∈I), respectively. Sometimes we omit the canonical arrows φi from the notation and refer to the object limID ∈ C as “the limit of D”. However, we should not forget that the arrows are part of the structure, i.e., the limit is really a cone.

Posets: Let P be a poset. We have already seen that the product/coproduct in P (if they exist) are the meet/join, respectively, and that the final/initial objects in P (if they exist) are the top/bottom elements, respectively. The only poset with a zero object is the one element poset.

Sets: The empty set ∅ ∈ Set is an initial object and the one point set ∗ ∈ Set is a final object. Note that two sets are isomorphic in Set precisely when there is a bijection between them, i.e., when they have the same cardinality. Since initial/final objects are unique up to isomorphism, we can identify the initial object with the cardinal number 0 and the final object with the cardinal number 1. There is no zero object in Set.

Products and coproducts exist in Set. The product of S,T ∈ Set consists of the Cartesian product S × T together with the canonical projections πS ∶ S × T → S and πT ∶ S × T → T. The coproduct of S, T ∈ Set consists of the disjoint union S ∐ T together with the canonical injections ιS ∶ S → S ∐ T and ιT ∶ T → S ∐ T. After passing to the skeleton, the product and coproduct of sets become the product and sum of cardinal numbers.

[Note: The “external disjoint union” S ∐ T is a formal concept. The familiar “internal disjoint union” S ⊔ T is only defined when there exists a set U containing both S and T as subsets. Then the union S ∪ T is the join operation in the Boolean lattice 2U ; we call the union “disjoint” when S ∩ T = ∅.]
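A small sketch of these two constructions for concrete finite sets (the sets S, T and the tagging convention used to realise the external disjoint union are illustrative choices):

```python
# Sketch: the product and coproduct of two small sets, with the canonical
# projections and (tagged) injections; the tags 'S' and 'T' are an arbitrary way
# of realising the external disjoint union.
from itertools import product

S = {'x', 'y'}
T = {0, 1, 2}

prod = set(product(S, T))                 # Cartesian product
pi_S = lambda pair: pair[0]               # canonical projections
pi_T = lambda pair: pair[1]

coprod = {('S', s) for s in S} | {('T', t) for t in T}    # tagged disjoint union
iota_S = lambda s: ('S', s)               # canonical injections
iota_T = lambda t: ('T', t)

print(len(prod) == len(S) * len(T))       # product of cardinalities
print(len(coprod) == len(S) + len(T))     # sum of cardinalities
print({pi_S(p) for p in prod} == S and {pi_T(p) for p in prod} == T)   # projections are onto
print(all(iota_S(s) in coprod for s in S) and all(iota_T(t) in coprod for t in T))
```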

Groups: The trivial group 1 ∈ Grp is a zero object, and for any groups G, H ∈ Grp the zero homomorphism 1 ∶ G → H sends all elements of G to the identity element 1H ∈ H. The product of groups G, H ∈ Grp is their direct product G × H and the coproduct is their free product G ∗ H, along with the usual canonical morphisms.

Let Ab ⊆ Grp be the full subcategory of abelian groups. The zero object and product are inherited from Grp, but we give them new names: we denote the zero object by 0 ∈ Ab and for any A, B ∈ Ab we denote the zero arrow by 0 ∶ A → B. We denote the Cartesian product by A ⊕ B and we rename it the direct sum. The big difference between Grp and Ab appears when we consider coproducts: it turns out that the product group A ⊕ B is also the coproduct group. We emphasize this fact by calling A ⊕ B the biproduct in Ab. It comes equipped with four canonical homomorphisms πA, πB, ιA, ιB satisfying the usual properties, as well as the following commutative diagram:

[diagram: the biproduct A ⊕ B with its canonical arrows πA, πB, ιA, ιB]

This diagram is the ultimate reason for matrix notation. The universal properties of product and coproduct tell us that each endomorphism φ ∶ A ⊕ B → A ⊕ B is uniquely determined by its four components φij ∶= πi ○ φ ○ ιj for i, j ∈ {A,B},so we can represent it as a matrix:

[matrix: φ = (φAA φAB ; φBA φBB)]

Then the composition of endomorphisms becomes matrix multiplication.
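As a quick numerical sketch of this remark (taking A = B = ℤ, so that A ⊕ B = ℤ² and an endomorphism is given by its four integer components; the matrices and the test vector are invented):

```python
# Sketch: with A = B = Z, an endomorphism of A + B = Z^2 is the 2 x 2 integer
# matrix of its components phi_ij = pi_i o phi o iota_j; composing the maps
# corresponds to multiplying the matrices.
import numpy as np

phi = np.array([[1, 2],
                [3, 4]])                  # (phi_AA  phi_AB ; phi_BA  phi_BB), arbitrary
psi = np.array([[0, 1],
                [1, 1]])

as_map = lambda M: (lambda v: M @ v)      # the endomorphism as an actual function on Z^2

v = np.array([5, -7])                     # an arbitrary test vector
print(np.array_equal(as_map(psi)(as_map(phi)(v)), (psi @ phi) @ v))   # True
```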

Rings: We let Rng denote the category of rings with unity, together with their homomorphisms. The initial object is the ring of integers Z ∈ Rng and the final object is the zero ring 0 ∈ Rng, i.e., the unique ring in which 0R = 1R. There is no zero object. The product of two rings R, S ∈ Rng is the direct product R × S ∈ Rng with componentwise addition and multiplication. Let CRng ⊆ Rng be the full subcategory of commutative rings. The initial/final objects and product in CRng are inherited from Rng. The difference between Rng and CRng again appears when considering coproducts. The coproduct of R, S ∈ CRng is denoted by R ⊗Z S and is called the tensor product over Z…

From Posets to Categories. Part 1

A poset (partially ordered set) is a pair (P, ≤), where

P is a set,

≤ is a binary relation on P satisfying the three axioms of partial order:

(i) Reflexive: ∀x ∈ P, x ≤ x

(ii) Antisymmetric: ∀x ,y ∈ P, x ≤ y & y ≤ x ⇒ x = y

(iii) Transitive: ∀x, y, z ∈ P, x ≤ y & y ≤ z ⇒ x ≤ z.

And what does this have to do with category theory?

“x ≤ y” ⇐⇒ “x → y”
“x = y” ⇐⇒ “x ↔ y”

Given x, y ∈ P, we say that u ∈ P is a least upper bound of x, y if we have x → u & y → u, and for all z ∈ P satisfying x → z & y → z we must have u → z. It is more convenient to express this definition with a picture. We say that u ∈ P is a least upper bound of x, y if for all z ∈ P the following picture holds:

[diagram: u is a least upper bound of x, y]

Dually, we say that l ∈ P is a greatest lower bound of x, y if for all z ∈ P the following picture holds:

[diagram: l is a greatest lower bound of x, y]

Now suppose that u1, u2 ∈ P are two least upper bounds for x, y. Applying the definition in both directions gives

u1 → u2 and u2 → u1,

and then from antisymmetry it follows that u1 = u2, which just means that u1 and u2 are indistinguishable within the structure of P. For this reason we can speak of the least upper bound (or “join”) of x, y. If it exists, we denote it by

x ∨ y

Dually, if it exists, we denote the greatest lower bound (or “meet”) by

x ∧ y

The definitions of meet and join are called “universal properties”. Whenever an object defined by a universal property exists, it is automatically unique in a certain canonical sense. However, since the object might not exist, maybe it is better to refer to a universal property as a “characterization,” or a “prescription,” rather than a “definition.”

Let P be a poset. We say that t ∈ P is a top element

if for all z ∈ P the following picture holds:

z —> t

Dually, we say that b ∈ P is a bottom element if for all z ∈ P the following picture holds:

b —> z

For any subset of elements of a poset S ⊆ P we say that the element ⋁ S ∈ P is its join if for all z ∈ P the following diagram is satisfied:

[diagram: ⋁S is the least upper bound of S]

Dually, we say that ⋀ S ∈ P is the meet of S if for all z ∈ P the following diagram is satisfied:

[diagram: ⋀S is the greatest lower bound of S]

If the objects ⋁ S and ⋀ S exist then they are uniquely characterized by their universal properties.

The universal properties in these diagrams will be called the “limit” and “colimit” properties when we move from posets to categories. Note that a limit/colimit diagram looks like a “cone over S”. This is one example of the link between category theory and topology.

Note that all definitions so far are included in this single (pair of) definition(s):

⋁ {x, y} = x∨ y & ⋀ {x, y} = x ∧ y

⋁∅ = 0 & ⋀ ∅ = 1.
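To close, here is a small computational sketch of these definitions (the choice of poset, the divisors of 60 ordered by divisibility, is illustrative); the joins and meets are found directly from their universal properties, and the empty join and meet come out as the bottom and top elements, as stated above:

```python
# Sketch: the divisors of 60 ordered by divisibility; join and meet are computed
# straight from their universal properties (they turn out to be lcm and gcd).
P = [d for d in range(1, 61) if 60 % d == 0]
leq = lambda a, b: b % a == 0            # a <= b  means  "a divides b"

def join(S):
    """Least upper bound of S, if it exists."""
    ubs = [u for u in P if all(leq(s, u) for s in S)]
    least = [u for u in ubs if all(leq(u, z) for z in ubs)]
    return least[0] if least else None

def meet(S):
    """Greatest lower bound of S, if it exists."""
    lbs = [l for l in P if all(leq(l, s) for s in S)]
    greatest = [l for l in lbs if all(leq(z, l) for z in lbs)]
    return greatest[0] if greatest else None

print(join({4, 6}), meet({4, 6}))    # 12 2  (lcm and gcd)
print(join(set()), meet(set()))      # 1 60  (the bottom and top elements of this poset)
```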