Marching From Galois Connections to Adjunctions. Part 4.

To make the transition from Galois connections to adjoint functors we make a slight change of notation. The change is only cosmetic but it is very important for our intuition.

Definition of Poset Adjunction. Let (P, ≤P) and (Q, ≤Q) be posets. A pair of functions L ∶ P ⇄ Q ∶ R is called an adjunction if ∀ p ∈ P and q ∈ Q we have

p ≤P R(q) ⇐⇒ L(p) ≤Q q

In this case we write L ⊣ R and call this an adjoint pair of functions. The function L is the left adjoint and R is the right adjoint.

The only difference between Galois connections and poset adjunctions is that we have reversed the partial order on Q. To be precise, we define the opposite poset Qop with the same underlying set Q, such that for all q1 , q2 ∈ Q we have

q1 ≤Qop q2 ⇐⇒ q2 ≤Q q1

Then an adjunction P ⇄ Q is just the same thing as a Galois connection P ⇄ Qop.

However, this difference is important because it breaks the symmetry. It also prepares us for the notation of an adjunction between categories, where it is more common to use an “asymmetric pair of covariant functors” as opposed to a “symmetric pair of contravariant functors”.

Uniqueness of Adjoints for Posets: Let P and Q be posets and let L ∶ P ⇄ Q ∶ R be an adjunction. Then each of the two adjoint functions L and R uniquely determines the other.

Proof: To prove that R determines L, suppose that L′ ∶ P ⇄ Q ∶ R is another adjunction with the same right adjoint. Then by the definition of adjunction we have, for all p ∈ P and q ∈ Q,

L(p) ≤Q q ⇐⇒ p ≤P R(q) ⇐⇒ L′(p) ≤Q q

In particular, setting q = L(p) gives

L(p) ≤Q L(p) ⇒ L′(p) ≤Q L(p)

and setting q = L′(p) gives

L′(p) ≤Q L′(p) ⇒ L(p) ≤Q L′(p)

Since the left hand sides hold by reflexivity, we obtain L′(p) ≤Q L(p) and L(p) ≤Q L′(p). Then by the antisymmetry of ≤Q we have L(p) = L′(p). Since this holds for all p ∈ P we conclude that L = L′, as desired. The proof that L determines R is analogous.

RAPL Theorem for Posets. Let L ∶ P ⇄ Q ∶ R be an adjunction of posets. Then for all subsets S ⊆ P and T ⊆ Q (for which the indicated joins and meets exist) we have

L (∨P S) = ∨Q L(S) and R (∧Q T) = ∧P R(T).

In words, this could be said as “left adjoints preserve join” and “right adjoints preserve meet”.

Proof: We just have to observe that sending Q to its opposite Qop switches the definitions of join and meet: ∨Qop = ∧Q and ∧Qop = ∨Q. The result then follows from the corresponding fact for Galois connections, which send joins to meets.

It seems worthwhile to emphasize the new terminology with a picture. Suppose that the posets P and Q have top and bottom elements: 1P , 0P ∈ P and 1Q, 0Q ∈ Q. Then a poset adjunction L ∶ P ⇄ Q ∶ R looks like this:

[Figure: a poset adjunction L ∶ P ⇄ Q ∶ R between posets P and Q, each with top and bottom elements]

In this case RL ∶ P → P is a closure operator as before, but now LR ∶ Q → Q is called an interior operator. From the case of Galois connections we also know that LRL = L and RLR = R. Since bottom elements are colimits and top elements are limits, the identities L(0P) = 0Q and R(1Q) = 1P are special cases of the RAPL Theorem.

Just as with Galois connections, adjunctions between the Boolean lattices 2U and 2V are in bijection with relations ∼ ⊆ U × V, but this time we will view the relation as a function f∼ ∶ U → 2V that sends each u ∈ U to the set f∼(u) ∶= {v ∈ V ∶ u ∼ v}. We can also think of f∼ as a “multi-valued function” from U to V.

Adjunctions of Boolean Lattices: Let U, V be sets and consider an arbitrary function f ∶ U → 2V. Then for all subsets S ∈ 2U and T ∈ 2V we define

Lf(S) ∶= ∪s∈S f(s) ∈ 2V,

Rf(T) ∶= {u ∈ U ∶ f(u) ⊆ T} ∈ 2U.

The pair of functions Lf ∶ 2U ⇄ 2V ∶ Rf is an adjunction of Boolean lattices. To see this, note that for all S ∈ 2U and T ∈ 2V we have

S ⊆ Rf(T) ⇐⇒ ∀ s ∈ S, s ∈ Rf(T)

⇐⇒ ∀ s ∈ S, f(s) ⊆ T

⇐⇒ ∪s∈S f(s) ⊆ T

⇐⇒ Lf(S) ⊆ T
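
To make this concrete, here is a small computational sketch (my own illustration, not part of the original post). It builds Lf and Rf from a specific function f ∶ U → 2V on tiny finite sets, then checks the adjunction condition and the RAPL identities; the sets U, V and the function f are invented for the example.

```python
from itertools import chain, combinations

# Assumed toy data: U = {0,1,2}, V = {'a','b','c'}, and a "multi-valued function" f.
U = {0, 1, 2}
V = {'a', 'b', 'c'}
f = {0: {'a'}, 1: {'a', 'b'}, 2: set()}

def L(S):                      # Lf(S) = union of f(s) over s in S
    return set().union(*(f[s] for s in S))

def R(T):                      # Rf(T) = {u in U : f(u) ⊆ T}
    return {u for u in U if f[u] <= T}

def subsets(X):
    X = list(X)
    return [set(c) for c in chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))]

# Adjunction: S ⊆ Rf(T)  ⇐⇒  Lf(S) ⊆ T, for all S ⊆ U and T ⊆ V.
assert all((S <= R(T)) == (L(S) <= T) for S in subsets(U) for T in subsets(V))

# RAPL: the left adjoint preserves joins (unions), the right adjoint preserves meets (intersections).
assert all(L(S1 | S2) == L(S1) | L(S2) for S1 in subsets(U) for S2 in subsets(U))
assert all(R(T1 & T2) == R(T1) & R(T2) for T1 in subsets(V) for T2 in subsets(V))
print("adjunction and RAPL verified on this example")
```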

Functions: Let f ∶ U → V be any function. We can extend this to a function f∼ ∶ U → 2V by defining f∼(u) ∶= {f(u)} for all u ∈ U. In this case we denote the corresponding left and right adjoint functions by f ∶= Lf∼ ∶ 2U → 2V and f−1 ∶= Rf∼ ∶ 2V → 2U, so that for all S ∈ 2U and T ∈ 2V we have

f(S) = {f(s) ∶ s ∈ S}, f−1(T) = {u ∈ U ∶ f(u) ∈ T}

The resulting adjunction f ∶ 2U ⇄ 2V ∶ f−1 is called the image and preimage of the function. It follows from RAPL that image preserves unions and preimage preserves intersections.

But now something surprising happens. We can restrict the preimage f−1 ∶ 2V → 2U to a function f−1 ∶ V → 2U by defining f−1(v) ∶= f−1({v}) for each v ∈ V. Then since f−1 = Lf−1 we obtain another adjunction

f−1 ∶ 2V ⇄ 2U ∶ Rf−1,
where this time f−1 is the left adjoint. The new right adjoint is defined for each S ∈ 2U by

R f−1(S) = {v∈V ∶ f−1(v) ⊆ S}

There seems to be no standard notation for this function, but people call it f! ∶= Rf−1 (the “!” is pronounced “shriek”). In summary, each function f ∶ U → V determines a triple of adjoints f ⊣ f−1 ⊣ f!, where f preserves unions, f! preserves intersections, and f−1 preserves both unions and intersections. Logicians will tell you that the functions f and f! are closely related to the existential (∃) and universal (∀) quantifiers, in the sense that for all S ∈ 2U we have

f(S) = {v ∈ V ∶ ∃ u ∈ f−1(v), u ∈ S}, f!(S) = {v ∈ V ∶ ∀ u ∈ f−1(v), u ∈ S}
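
Here is a short sketch of the triple f ⊣ f−1 ⊣ f! (again an invented example, not taken from the post), for the function f(u) = u mod 3, checking the ∃/∀ descriptions above.

```python
# Assumed toy data: U = {0,...,5}, V = {0,1,2}, f(u) = u mod 3.
U = set(range(6))
V = {0, 1, 2}
f = lambda u: u % 3

def image(S):        # left adjoint: f(S)
    return {f(u) for u in S}

def preimage(T):     # middle functor: f^{-1}(T)
    return {u for u in U if f(u) in T}

def shriek(S):       # right adjoint of preimage: f_!(S) = {v : f^{-1}(v) ⊆ S}
    return {v for v in V if preimage({v}) <= S}

S = {0, 1, 3}
# ∃ / ∀ descriptions of the left and right adjoints:
assert image(S)  == {v for v in V if any(u in S for u in preimage({v}))}
assert shriek(S) == {v for v in V if all(u in S for u in preimage({v}))}
print(image(S), shriek(S))   # {0, 1} and {0} for this choice of S
```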

Group Homomorphisms: Given a group G we let (L (G), ⊆) denote its poset of subgroups. Since the intersection of subgroups is again a subgroup, we have ∧ = ∩. Then since L (G) has arbitrary meets it also has arbitrary joins. In particular, the join of two subgroups A, B ∈ L (G) is given by

A ∨ B = ⋂ {C ∈ L(G) ∶ A ⊆ C and B ⊆ C},

which is the smallest subgroup containing the union A ∪ B. Thus L (G) is a lattice, but since A ∨ B ≠ A ∪ B (in general) it is not a sublattice of 2G.

Now let φ ∶ G → H be an arbitrary group homomorphism. One can check that the image and preimage φ ∶ 2G ⇄ 2H ∶ φ−1 send subgroups to subgroups, hence they restrict to an adjunction between subgroup lattices:

φ ∶L(G) ⇄ L(H)∶ φ−1.

The function φ! ∶ 2G → 2H does not send subgroups to subgroups, and in general the function φ−1 ∶ L(H) → L(G) does not have a right adjoint. For all subgroups A ∈ L (G) and B ∈ L (H) one can check that

φ−1φ(A)=A ∨ ker φ and φφ−1(B) = B ∧ im φ

Thus the φ−1φ-fixed subgroups of G are precisely those that contain the kernel and the φφ−1-fixed subgroups of H are precisely those contained in the image. Finally, the Fundamental Theorem gives us an order-preserving bijection as in the following picture:

[Figure: the order-preserving bijection between subgroups of G containing ker φ and subgroups of H contained in im φ]
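
As a quick sanity check (a hypothetical example of my own), one can verify the identity φ−1φ(A) = A ∨ ker φ for the reduction homomorphism φ ∶ Z/12 → Z/4:

```python
# φ : Z/12 → Z/4, φ(x) = x mod 4, so ker φ = {0, 4, 8}.
G   = set(range(12))
phi = lambda x: x % 4
ker = {x for x in G if phi(x) == 0}

def image(S):    return {phi(x) for x in S}
def preimage(T): return {x for x in G if phi(x) in T}

def join(A, B):
    """Smallest subgroup of Z/12 containing A ∪ B (close under addition mod 12)."""
    S = set(A) | set(B)
    while True:
        bigger = S | {(a + b) % 12 for a in S for b in S}
        if bigger == S:
            return S
        S = bigger

# The subgroups of Z/12 are dZ/12 for each divisor d of 12.
subgroups = [{x for x in G if x % d == 0} for d in (1, 2, 3, 4, 6, 12)]

for A in subgroups:
    assert preimage(image(A)) == join(A, ker)   # φ⁻¹φ(A) = A ∨ ker φ
print("verified for all subgroups of Z/12")
```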

…..

Galois Connections. Part 3.

Let (P,≤P) and (Q,≤Q) be posets, and consider two set functions ∗ ∶ P ⇄ Q ∶ ∗. We will denote these by p ↦ p∗ and q ↦ q∗ for all p ∈ P and q ∈ Q. This pair of functions is called a Galois connection if, for all p ∈ P and q ∈ Q, we have

p ≤P q∗ ⇐⇒ q ≤Q p∗

Let ∗ ∶ P ⇄ Q ∶ ∗ be a Galois connection. For all elements x of P or Q we will use the notations x∗∗ ∶= (x∗)∗ and x∗∗∗ ∶= (x∗∗)∗.

(1) For all p ∈ P and q ∈ Q we have

p ≤P p∗∗ and q ≤Q q∗∗.

(2) For all elements p1, p2 ∈ P and q1, q2 ∈ Q we have

p1 ≤P p2 ⇒ p2∗ ≤Q p1∗ and q1 ≤Q q2 ⇒ q2∗ ≤P q1∗.

(3) For all elements p ∈ P and q ∈ Q we have

p∗∗∗ = p∗ and q∗∗∗ = q∗.

Proof:

Since the definition of a Galois connection is symmetric in P and Q, we will simplify the proof by using the notation

x ≤ y∗ ⇐⇒ y ≤ x∗

for all elements x, y such that the inequalities make sense. To prove (1) note that for any element x we have x∗ ≤ x∗ by the reflexivity of the partial order. Then from the definition of Galois connection we obtain

(x∗) ≤ (x)∗ ⇒ (x) ≤ (x∗)∗ ⇒ x ≤ x∗∗.

To prove (2) consider elements x, y such that x ≤ y. From (1) and the transitivity of the partial order we have x ≤ y ≤ y∗∗, hence x ≤ y∗∗. Then from the definition of Galois connection we obtain

(x) ≤ (y∗)∗ ⇒ (y∗) ≤ (x)∗ ⇒ y∗ ≤ x∗.

To prove (3) consider any element x. On the one hand, part (1) tells us that

(x∗) ≤ (x∗)∗∗ ⇒ x∗ ≤ x∗∗∗.

On the other hand, part (1) tells us that x ≤ x∗∗ and then part (2) says that

(x) ≤ (x∗∗) ⇒ (x∗∗)∗ ≤ (x)∗ ⇒ x∗∗∗ ≤ x∗.

Finally, the antisymmetry of partial order says that x∗∗∗ = x∗, which we interpret as isomorphism of objects in the poset category. The following definition captures the essence of these three basic properties.

Definition of Closure in a Poset. Given a poset (P,≤), we say that a function cl ∶ P → P is a closure operator if it satisfies the following three properties:

(i) Extensive: ∀p ∈ P, p ≤ cl(p)

(ii) Monotone: ∀ p,q ∈ P, p ≤ q ⇒ cl(p) ≤ cl(q)

(iii) Idempotent: ∀ p ∈ P, cl(cl(p)) = cl(p).

[Remark: If P = 2U is a Boolean lattice, and if the closure cl ∶ 2U → 2U also preserves finite unions, then we call it a Kuratowski closure. Kuratowski proved that such a closure is equivalent to a topology on the set U.]
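
Here is a tiny executable sketch (my own illustration, not from the post) of these three axioms, taking P to be the chain {0, 1, …, 20} under the usual order and cl to be “round up to the nearest multiple of 5”:

```python
# P = {0,...,20} with the usual order; cl(p) = smallest multiple of 5 that is ≥ p.
P  = range(21)
cl = lambda p: -(-p // 5) * 5          # integer ceiling to a multiple of 5

assert all(p <= cl(p) for p in P)                                # (i)   extensive
assert all(cl(p) <= cl(q) for p in P for q in P if p <= q)       # (ii)  monotone
assert all(cl(cl(p)) == cl(p) for p in P)                        # (iii) idempotent
```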

If ∗ ∶ P ⇄ Q ∶ ∗ is a Galois connection, then the basic properties above immediately imply that the compositions ∗∗ ∶ P → P and ∗∗ ∶ Q → Q are closure operators.

Proof: Property (i) is just property (1), property (ii) follows from applying property (2) twice, and property (iii) follows from property (3): x∗∗∗∗ = (x∗∗∗)∗ = (x∗)∗ = x∗∗.

Fundamental Theorem of Galois Connections: Any Galois connection ∗ ∶ P ⇄ Q ∶ ∗ determines two closure operators ∗∗ ∶ P → P and ∗∗ ∶ Q → Q. We will say that the element p ∈ P (resp. q ∈ Q) is ∗∗-closed if p∗∗ = p (resp. q∗∗ = q). Then the Galois connection restricts to an order-reversing bijection between the subposets of ∗∗-closed elements.

Proof: Let Q∗ ⊆ P and P∗ ⊆ Q denote the images of the functions ∗ ∶ Q → P and ∗ ∶ P → Q, respectively. The restriction of the connection to these subsets defines an order-reversing bijection:

[Figure: the Galois connection restricted to an order-reversing bijection between Q∗ ⊆ P and P∗ ⊆ Q]

Indeed, consider any p ∈ Q∗, so that p = q∗ for some q ∈ Q. Then by properties (1) and (3) of Galois connections we have

(p)∗∗ = (q∗)∗∗ ⇒ p∗∗ = q∗∗∗ ⇒ p∗∗ = q∗ ⇒ p∗∗ = p

Similarly, for all q ∈ P∗ we have q∗∗ = q. The bijections reverse order because of property (2).

Finally, note that Q∗ and P∗ are exactly the subsets of ∗∗-closed elements in P and Q, respectively. Indeed, we have seen above that every element of Q∗ is ∗∗-closed. Conversely, if p ∈ P is ∗∗-closed then we have

p = p∗∗ ⇒ p = (p∗)∗,

and it follows that p ∈ Q∗. Similarly, every element of P∗ is ∗∗-closed.

Thus, a Galois connection is something like a “loose bijection”. It’s not necessarily a bijection but it becomes one after we “tighten it up”. Sort of like tightening your shoelaces.

[Figure: two posets joined by a Galois connection, with the shaded subposets of ∗∗-closed elements in order-reversing bijection]

The shaded subposets here consist of the ∗∗-closed elements. They are supposed to look (anti-)isomorphic. The unshaded parts of the posets get “tightened up” into the shaded subposets. Note that the top elements are ∗∗-closed. Indeed, property (1) tells us that 1P ≤P 1P∗∗, and the universal property of the top element gives 1P∗∗ ≤P 1P, so that 1P∗∗ = 1P. Note also that the definition of a Galois connection gives 0P ≤P q∗ ⇐⇒ q ≤Q 0P∗ for all q ∈ Q. Since the left hand side is always true, so is the right hand side, and then from the universal property of the top element in Q we conclude that 0P∗ = 1Q. As a consequence of this, the arbitrary meet of ∗∗-closed elements (if it exists) is still ∗∗-closed. We will see, however, that the join of ∗∗-closed elements is not necessarily ∗∗-closed, and hence not all Galois connections induce topologies.

Galois connections between Boolean lattices have a particularly nice form, which is closely related to the universal quantifier “∀”.

Galois Connections of Boolean Lattices: Let U, V be sets and let ∼ ⊆ U × V be any subset (called a relation) between U and V. As usual, we will write “u ∼ v” in place of the statement “(u,v) ∈ ∼”, and we read this as “u is related to v”. Then for all S ∈ 2U and T ∈ 2V we define

S∼ ∶= {v ∈ V ∶ ∀ s ∈ S, s ∼ v} ∈ 2V,

T∼ ∶= {u ∈ U ∶ ∀ t ∈ T, u ∼ t} ∈ 2U.

The pair of functions S ↦ S∼ and T ↦ T∼ is a Galois connection ∼ ∶ 2U ⇄ 2V ∶ ∼.

To see this, note that ∀ subsets S ∈ 2U and T ∈ 2V we have

S ⊆ T∼ ⇐⇒ ∀ s ∈ S, s ∈ T∼

⇐⇒ ∀ s ∈ S, ∀ t ∈ T, s ∼ t

⇐⇒ ∀ t ∈ T, ∀ s ∈ S, s ∼ t

⇐⇒ ∀ t ∈ T, t ∈ S∼

⇐⇒ T ⊆ S∼.

Moreover, one can prove that any Galois connection between 2U and 2V arises in this way from a unique relation.
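
The following sketch (an invented example, not from the post) implements the ∼-operators for the divisibility relation between U = {2,3,4} and V = {6,…,12}, and checks the Galois condition together with the closure properties proved above.

```python
from itertools import chain, combinations

U = {2, 3, 4}
V = {6, 7, 8, 9, 10, 11, 12}
rel = lambda u, v: v % u == 0          # u ∼ v  ⇐⇒  u divides v

def star_U(S):   # S∼ = {v : ∀ s ∈ S, s ∼ v}
    return {v for v in V if all(rel(s, v) for s in S)}

def star_V(T):   # T∼ = {u : ∀ t ∈ T, u ∼ t}
    return {u for u in U if all(rel(u, t) for t in T)}

def subsets(X):
    X = list(X)
    return [set(c) for c in chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))]

# Galois condition: S ⊆ T∼ ⇐⇒ T ⊆ S∼
assert all((S <= star_V(T)) == (T <= star_U(S)) for S in subsets(U) for T in subsets(V))

# S ↦ S∼∼ behaves like a closure operator on 2^U (extensive, and ∗∗∗ = ∗).
assert all(S <= star_V(star_U(S)) for S in subsets(U))
assert all(star_U(star_V(star_U(S))) == star_U(S) for S in subsets(U))
print("Galois connection verified")
```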

Orthogonal Complement: Let V be a vector space over a field K and let V∗ be the dual space, consisting of linear functions α ∶ V → K. We define the relation ⊥ ⊆ V∗ × V by

α ⊥ v ⇐⇒ α(v) = 0.

When V is finite-dimensional, the ⊥⊥-closed subsets on both sides are precisely the linear subspaces. Thus the Fundamental Theorem of Galois Connections gives us an order-reversing bijection between the subspaces of V∗ and the subspaces of V.
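
For a finite-dimensional case this can be checked numerically. The sketch below (my own, using numpy) identifies V and V∗ with R4 via the standard basis, computes S⊥ as a null space, and verifies that S⊥⊥ recovers the span of S.

```python
import numpy as np

def perp(rows, tol=1e-10):
    """Orthonormal basis (as rows) of {v : a · v = 0 for every row a}."""
    A = np.atleast_2d(np.array(rows, dtype=float))
    _, s, vt = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return vt[rank:]                 # rows spanning the null space of A

# S spans a plane inside V* ≅ R^4 (an arbitrary choice for illustration).
S = np.array([[1.0, 0.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.0]])

S_perp      = perp(S)                # a plane inside V
S_perp_perp = perp(S_perp)           # should be the plane spanned by S again

# Compare the two subspaces via their orthogonal projectors.
P1 = S_perp_perp.T @ S_perp_perp
Q, _ = np.linalg.qr(S.T)
P2 = Q @ Q.T
assert np.allclose(P1, P2)
print("S⊥⊥ equals the span of S")
```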

Convex Complement: Let V be a Euclidean space, i.e., a real vector space with an inner product ⟨−,−⟩ ∶ V × V → ℝ. We define the relation ∼ ⊆ V × V by

u ∼ v ⇐⇒ ⟨u,v⟩ ≤ 0.

For all S ⊆ V the operation S ↦ S∼∼ gives the closed convex cone generated by S, thus the ∼∼-closed sets are precisely the closed convex cones. Here is a picture:

[Figure: a subset S of the plane together with the cone S∼∼ that it generates]

Original Galois Connection: Let L be a field and let G be a finite group of automorphisms of L, i.e., each g ∈ G is a function g ∶ L → L preserving addition and multiplication. We define a relation ∼ ⊆ G × L by

g ∼ l ⇐⇒ g(l) = l.

Define K ∶= G∼ ⊆ L to be the “subfield fixed by G”. The original Fundamental Theorem of Galois Theory says that the ∼∼-closed subsets of G are precisely the subgroups and the ∼∼-closed subsets of L are precisely the subfields containing K.

Hilbert’s Nullstellensatz: Let K be a field and consider the ring of polynomials K[x] ∶= K[x1,…,xn] in n commuting variables. For each polynomial f(x) ∶= f(x1,…,xn) ∈ K[x] and for each n-tuple of field elements α ∶= (α1,…,αn) ∈ Kn, we denote the evaluation by f(α) ∶= f(α1,…,αn) ∈ K. Now we define a relation ∼ ⊆ K[x] × Kn by

f(x) ∼ α ⇐⇒ f(α) = 0

By definition, the closure operator ∼∼ on subsets of Kn is called the Zariski closure. It is not difficult to prove that it satisfies the additional property of a Kuratowski closure (i.e., finite unions of closed sets are closed) and hence it defines a topology on Kn, called the Zariski topology. Hilbert’s Nullstellensatz says that if K is algebraically closed, then the ∼∼-closed subsets of K[x] are precisely the radical ideals (i.e., ideals closed under taking arbitrary roots).

Marching Along Categories, Groups and Rings. Part 2

A category C consists of the following data:

• A collection Obj(C) of objects. We will write “x ∈ C” to mean that “x ∈ Obj(C)”.

• For each ordered pair x, y ∈ C there is a collection HomC(x, y) of arrows. We will write α ∶ x → y to mean that α ∈ HomC(x,y). Each collection HomC(x,x) has a special element called the identity arrow idx ∶ x → x. We let Arr(C) denote the collection of all arrows in C.

• For each ordered triple of objects x, y, z ∈ C there is a function

○ ∶ HomC(x, y) × HomC(y, z) → HomC(x, z), which is called composition of arrows. If α ∶ x → y and β ∶ y → z then we denote the composite arrow by β ○ α ∶ x → z.

If each collection of arrows HomC(x,y) is a set then we say that the category C is locally small. If in addition the collection Obj(C) is a set then we say that C is small.

Identity: For each arrow α ∶ x → y the following diagram commutes:

[Diagram: the identity axiom, α ○ idx = α = idy ○ α]

Associative: For all arrows α ∶ x → y, β ∶ y → z, γ ∶ z → w, the following diagram commutes:

[Diagram: the associativity axiom, γ ○ (β ○ α) = (γ ○ β) ○ α]

We say that C′ ⊆ C is a subcategory if Obj(C′) ⊆ Obj(C) and if ∀ x,y ∈ Obj(C′) we have HomC′(x,y) ⊆ HomC(x,y). We say that the subcategory is full if each inclusion of hom sets is an equality.

Let C be a category. A diagram D ⊆ C is a collection of objects in C with some arrows between them; repetition of objects and arrows is allowed. Alternatively: let I be any small category, which we think of as an “index category”; then any functor D ∶ I → C is called a diagram of shape I in C. In either case, we say that the diagram D commutes if for all pairs of objects x, y in D, any two directed paths in D from x to y yield the same arrow under composition.

Identity arrows generalize the reflexive property of posets, and composition of arrows generalizes the transitive property of posets. But whatever happened to the antisymmetric property? Well, it’s the same issue we had before: we should really define equivalence of objects in terms of antisymmetry.

Isomorphism: Let C be a category. We say that two objects x,y ∈ C are isomorphic in C if there exist arrows α ∶ x → y and β ∶ y → x such that the following diagram commutes:

[Diagram: α ○ β = idy and β ○ α = idx]

In this case we write x ≅C y, or just x ≅ y if the category is understood.

If γ ∶ y → x is any other arrow satisfying the same diagram as β, then by the axioms of identity and associativity we must have

γ = γ ○ idy = γ ○ (α ○ β) = (γ ○ α) ○ β = idx ○ β = β

This allows us to refer to β as the inverse of the arrow α. We use the notations β = α−1 and α = β−1.

A category with one object is called a monoid. A monoid in which each arrow is invertible is called a group. A small category in which each arrow is invertible is called a groupoid.

Subcategories of Set are called concrete categories. Given a concrete category C ⊆ Set we can think of its objects as special kinds of sets and its arrows as special kinds of functions. Some famous examples of concrete categories are:

• Grp = groups & homomorphisms
• Ab = abelian groups & homomorphisms
• Rng = rings & homomorphisms
• CRng = commutative rings & homomorphisms

Note that Ab ⊆ Grp and CRng ⊆ Rng are both full subcategories. In general, the arrows of a concrete category are called morphisms or homomorphisms. This explains our notation of HomC.

Homotopy: The most famous example of a non-concrete category is the fundamental groupoid π1(X) of a topological space X. Here the objects are points and the arrows are homotopy classes of continuous directed paths. The skeleton is the set π0(X) of path components (really a discrete category, i.e., in which the only arrows are the identities). Categories like this are the reason we prefer the name “arrow” instead of “morphism”.

Limit/Colimit: Let D ∶ I → C be a diagram in a category C (thus D is a functor and I is a small “index” category). A cone under D consists of

• an object c ∈ C,

• a collection of arrows αi ∶ c → D(i), one for each index i ∈ I,

such that for each arrow δ ∶ i → j in I we have αj = D(δ) ○ αi.

We can visualize this as follows:

[Diagram: a cone (c, (αi)i∈I) under the diagram D]

The cone (c,(αi)i∈I) is called a limit of the diagram D if, for any cone (z,(βi)i∈I) under D, the following picture holds:

[Diagram: the universal property defining the limit cone]

[This picture means that there exists a unique arrow υ ∶ z → c such that, for each arrow δ ∶ i → j in I (including the identity arrows), the following diagram commutes:

[Diagram: the unique arrow υ ∶ z → c with βi = αi ○ υ and βj = αj ○ υ]

When δ = idi this diagram just says that βi = αi ○ υ. We do not assume that D itself is commutative.] Dually, a cone over D consists of an object c ∈ C and a set of arrows αi ∶ D(i) → c satisfying αi = αj ○ D(δ) for each arrow δ ∶ i → j in I. This cone is called a colimit of the diagram D if, for any cone (z,(βi)i∈I) over D, the following picture holds:

[Diagram: the universal property defining the colimit cone]

When the (unique) limit or colimit of the diagram D ∶ I → C exists, we denote it by (limI D, (φi)i∈I) or (colimI D, (φi)i∈I), respectively. Sometimes we omit the canonical arrows φi from the notation and refer to the object limID ∈ C as “the limit of D”. However, we should not forget that the arrows are part of the structure, i.e., the limit is really a cone.

Posets: Let P be a poset. We have already seen that the product/coproduct in P (if they exist) are the meet/join, respectively, and that the final/initial objects in P (if they exist) are the top/bottom elements, respectively. The only poset with a zero object is the one element poset.

Sets: The empty set ∅ ∈ Set is an initial object and the one point set ∗ ∈ Set is a final object. Note that two sets are isomorphic in Set precisely when there is a bijection between them, i.e., when they have the same cardinality. Since initial/final objects are unique up to isomorphism, we can identify the initial object with the cardinal number 0 and the final object with the cardinal number 1. There is no zero object in Set.

Products and coproducts exist in Set. The product of S,T ∈ Set consists of the Cartesian product S × T together with the canonical projections πS ∶ S × T → S and πT ∶ S × T → T. The coproduct of S, T ∈ Set consists of the disjoint union S ∐ T together with the canonical injections ιS ∶ S → S ∐ T and ιT ∶ T → S ∐ T. After passing to the skeleton, the product and coproduct of sets become the product and sum of cardinal numbers.

[Note: The “external disjoint union” S ∐ T is a formal concept. The familiar “internal disjoint union” S ⊔ T is only defined when there exists a set U containing both S and T as subsets. Then the union S ∪ T is the join operation in the Boolean lattice 2U ; we call the union “disjoint” when S ∩ T = ∅.]

Groups: The trivial group 1 ∈ Grp is a zero object, and for any groups G, H ∈ Grp the zero homomorphism 1 ∶ G → H sends all elements of G to the identity element 1H ∈ H. The product of groups G, H ∈ Grp is their direct product G × H and the coproduct is their free product G ∗ H, along with the usual canonical morphisms.

Let Ab ⊆ Grp be the full subcategory of abelian groups. The zero object and product are inherited from Grp, but we give them new names: we denote the zero object by 0 ∈ Ab and for any A, B ∈ Ab we denote the zero arrow by 0 ∶ A → B. We denote the Cartesian product by A ⊕ B and we rename it the direct sum. The big difference between Grp and Ab appears when we consider coproducts: it turns out that the product group A ⊕ B is also the coproduct group. We emphasize this fact by calling A ⊕ B the biproduct in Ab. It comes equipped with four canonical homomorphisms πA, πB, ιA, ιB satisfying the usual properties, as well as the following commutative diagram:

[Diagram: the biproduct A ⊕ B with its canonical maps πA, πB, ιA, ιB]

This diagram is the ultimate reason for matrix notation. The universal properties of product and coproduct tell us that each endomorphism φ ∶ A ⊕ B → A ⊕ B is uniquely determined by its four components φij ∶= πi ○ φ ○ ιj for i, j ∈ {A,B}, so we can represent it as a matrix:

[Figure: φ written as the 2 × 2 matrix with entries φij = πi ○ φ ○ ιj]

Then the composition of endomorphisms becomes matrix multiplication.
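
Concretely (a standard identity, added here for illustration rather than taken from the original text): if φ, ψ ∶ A ⊕ B → A ⊕ B have components φij and ψij, then

(ψ ○ φ)ij = ψiA ○ φAj + ψiB ○ φBj,

where the sum is addition of homomorphisms of abelian groups; this is exactly the rule for multiplying 2 × 2 matrices.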

Rings. We let Rng denote the category of rings with unity, together with their homomorphisms. The initial object is the ring of integers Z ∈ Rng and the final object is the zero ring 0 ∈ Rng, i.e., the unique ring in which 0R = 1R. There is no zero object. The product of two rings R, S ∈ Rng is the direct product R × S ∈ Rng with componentwise addition and multiplication. Let CRng ⊆ Rng be the full subcategory of commutative rings. The initial/final objects and product in CRng are inherited from Rng. The difference between Rng and CRng again appears when considering coproducts. The coproduct of R, S ∈ CRng is denoted by R ⊗Z S and is called the tensor product over Z…..