# Homological Algebra – Does an A∞ Algebra Compensate for Any Loss of Information in the Study of Chain Complexes?

In an abelian category, homological algebra is the homotopy theory of chain complexes up to quasi-isomorphism. When considering nonnegatively graded chain complexes, homological algebra may be viewed as a linearized version of the homotopy theory of homotopy types, or ∞-groupoids. When considering unbounded chain complexes, it may be viewed as a linearized and stabilized version. Conversely, we may view homotopical algebra as a nonabelian generalization of homological algebra.

Suppose we have a topological space X and a “multiplication map” m2 : X × X → X. This map may or may not be associative; imposing associativity is an extra condition. An A∞ space carries a weaker structure, which requires m2 to be associative only up to homotopy, along with “higher order” versions of this condition. Indeed, there are very standard situations where one has natural multiplication maps which are not associative, but obey certain weaker conditions.

The standard example is when X is the loop space of another space M, i.e., if m0 ∈ M is a chosen base point,

X = {x : [0,1] → M |x continuous, x(0) = x(1) = m0}.

Composition of loops is then defined, with

(x2x1)(t) = x2(2t), when 0 ≤ t ≤ 1/2

(x2x1)(t) = x1(2t − 1), when 1/2 ≤ t ≤ 1

However, this composition is not associative; rather, x3(x2x1) and (x3x2)x1 are homotopic loops.

On the left, we first traverse x3 from time 0 to time 1/2, then traverse x2 from time 1/2 to time 3/4, and then x1 from time 3/4 to time 1. On the right, we first traverse x3 from time 0 to time 1/4, x2 from time 1/4 to time 1/2, and then x1 from time 1/2 to time 1. By continuously deforming these times, we can homotope one of the loops to the other. This homotopy can be represented by a map

m3 : [0, 1] × X × X × X → X such that

{0} × X × X × X → X is given by (x3, x2, x1) ↦ m2(x3, m2(x2, x1)) and

{1} × X × X × X → X is given by (x3, x2, x1) ↦ m2(m2(x3, x2), x1)
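These formulas can be checked concretely. Below is a minimal Python sketch (the helper names `m2` and `m3` mirror the maps above; modeling a loop as an arbitrary function on [0, 1] is our simplification): the two break points of the triple concatenation move linearly in s, so that m3(0, ·) and m3(1, ·) recover the two bracketings.

```python
# A loop is modeled as a function [0, 1] -> (points of M).
def m2(x2, x1):
    # Concatenation of loops: traverse x2 on [0, 1/2], then x1 on [1/2, 1].
    def y(t):
        return x2(2 * t) if t <= 0.5 else x1(2 * t - 1)
    return y

def m3(s, x3, x2, x1):
    # Homotopy from m2(x3, m2(x2, x1)) at s = 0 to m2(m2(x3, x2), x1) at s = 1:
    # the break points move linearly from (1/2, 3/4) to (1/4, 1/2).
    a = 0.5 - s / 4    # time at which x3 ends
    b = 0.75 - s / 4   # time at which x2 ends
    def y(t):
        if t <= a:
            return x3(t / a)
        elif t <= b:
            return x2((t - a) / (b - a))
        else:
            return x1((t - b) / (1 - b))
    return y
```

Restricting m3 to s = 0 and s = 1 reproduces the two boundary formulas displayed above.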

What if we have four elements x1, . . . , x4 of X? Then there are a number of different ways of putting brackets in their product, and these are related by the homotopies defined by m3. Indeed, we can relate

((x4x3)x2)x1 and x4(x3(x2x1))

in two different ways:

((x4x3)x2)x1 ∼ (x4x3)(x2x1) ∼ x4(x3(x2x1))

and

((x4x3)x2)x1 ∼ (x4(x3x2))x1 ∼ x4((x3x2)x1) ∼ x4(x3(x2x1)).

Here each ∼ represents a homotopy given by m3.
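The possible bracketings can be enumerated mechanically; their number is the Catalan number (five for a product of four elements). A small Python sketch (the function name `bracketings` is ours):

```python
def bracketings(xs):
    """All full parenthesizations of the sequence xs.
    The count is the Catalan number C(len(xs) - 1)."""
    if len(xs) == 1:
        return [xs[0]]
    out = []
    for k in range(1, len(xs)):            # split point of the outermost product
        for left in bracketings(xs[:k]):
            for right in bracketings(xs[k:]):
                out.append("(" + left + right + ")")
    return out

# The five bracketings of x4 x3 x2 x1 -- the vertices of the picture below:
for b in bracketings(["x4", "x3", "x2", "x1"]):
    print(b)
```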

Schematically, this is represented by a pentagon, S4, with each vertex labelled by one of the five ways of associating x4x3x2x1; the edges represent homotopies between them.

The homotopies m3 yield a map ∂S4 × X4 → X which is defined using appropriate combinations of m2 and m3 on each edge of the boundary of S4. For example, restricting to the edge with vertices ((x4x3)x2)x1 and (x4(x3x2))x1, this map is given by (s, x4, . . . , x1) ↦ m2(m3(s, x4, x3, x2), x1).

Thus the condition for the next level of structure becomes: this map extends across S4, giving a map

m4 : S4 × X4 → X.

As homological algebra seeks to study complexes by taking quotient modules to obtain the homology, the question arises as to whether any information is lost in this process. This is equivalent to asking whether it is possible to reconstruct the original complex (up to quasi-isomorphism) given its homology, or whether some additional structure is needed in order to be able to do this. The additional structure that is needed is an A∞-structure constructed on the homology of the complex…

# Category of Super Vector Spaces Becomes a Tensor Category

The theories of manifolds and algebraic geometry are ultimately based on linear algebra. Similarly, the theory of supermanifolds needs super linear algebra, which is linear algebra in which vector spaces are replaced by vector spaces with a Z/2Z-grading, namely, super vector spaces.

A super vector space is a Z/2Z-graded vector space

V = V0 ⊕ V1

where the elements of V0 are called even and those of V1 odd.

The parity of v ∈ V , denoted by p(v) or |v|, is defined only on non-zero homogeneous elements, that is elements of either V0 or V1:

p(v) = |v| = 0 if v ∈ V0

p(v) = |v| = 1 if v ∈ V1

The superdimension of a super vector space V is the pair (p, q) where dim(V0) = p and dim(V1) = q as ordinary vector spaces. We simply write dim(V) = p|q.

If dim(V) = p|q, then we can find a basis {e1, …, ep} of V0 and a basis {ε1, …, εq} of V1 so that V is canonically isomorphic to the free k-module generated by {e1, …, ep, ε1, …, εq}. We denote this k-module by kp|q and we will call {e1, …, ep, ε1, …, εq} the canonical basis of kp|q. The (ei) form a basis of kp = (kp|q)0 and the (εj) form a basis for kq = (kp|q)1.

A morphism from a super vector space V to a super vector space W is a linear map from V to W preserving the Z/2Z-grading. Let Hom(V, W) denote the vector space of morphisms V → W. Thus we have formed the category of super vector spaces, which we denote by (smod). It is important to note that the category of super vector spaces also admits an “inner Hom”, which we denote by Hom(V, W); for super vector spaces V, W, Hom(V, W) consists of all linear maps from V to W; it is made into a super vector space itself by:

Hom(V, W)0 = {T : V → W|T preserves parity}  (= Hom(V, W))

Hom(V, W)1 = {T : V → W|T reverses parity}

If V = km|n, W = kp|q we have in the canonical basis (ei, εj):

Hom(V, W)0 = (A 0; 0 D) and Hom(V, W)1 = (0 B; C 0), written in 2 × 2 block form,

where A, B, C, D are respectively (p × m), (p × n), (q × m), (q × n) matrices with entries in k.
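One consequence of this block description, easy to check by hand, is that parities add under composition: the composite of two odd maps is even, and the composite of an odd map with an even one is odd. A minimal Python sketch for endomorphisms of k1|1 (the helper names `matmul` and `parity` are ours):

```python
def matmul(X, Y):
    # Plain nested-list matrix product over the base field.
    return [[sum(X[i][r] * Y[r][j] for r in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def parity(T, p):
    # Parity of an endomorphism of k^{p|q}, given as a (p+q) x (p+q) matrix:
    # 0 if block-diagonal (preserves parity), 1 if block-anti-diagonal,
    # None if not homogeneous.
    n = len(T)
    if all(T[i][j] == 0 for i in range(n) for j in range(n) if (i < p) != (j < p)):
        return 0
    if all(T[i][j] == 0 for i in range(n) for j in range(n) if (i < p) == (j < p)):
        return 1
    return None

even = [[2, 0], [0, 5]]   # shape (A 0; 0 D), with p = q = 1
odd = [[0, 3], [7, 0]]    # shape (0 B; C 0)
```

Then `parity(matmul(odd, odd), 1)` is 0 while `parity(matmul(odd, even), 1)` is 1, matching addition of parities mod 2.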

In the category of super vector spaces we have the parity reversing functor Π : V ↦ ΠV defined by

(ΠV)0 = V1, (ΠV)1 = V0

The category of super vector spaces admits tensor products: for super vector spaces V, W, V ⊗ W is given the Z/2Z-grading as

(V ⊗ W)0 = (V0 ⊗ W0) ⊕ (V1 ⊗ W1),

(V ⊗ W)1 = (V0 ⊗ W1) ⊕ (V1 ⊗ W0)

The assignment V, W ↦ V ⊗ W is additive and exact in each variable, as in the ordinary vector space category. The object k functions as a unit element with respect to tensor multiplication ⊗, and tensor multiplication is associative, i.e., the two products U ⊗ (V ⊗ W) and (U ⊗ V) ⊗ W are naturally isomorphic. Moreover, V ⊗ W ≅ W ⊗ V via the commutativity map

cV,W : V ⊗ W → W ⊗ V

where

v ⊗ w ↦ (−1)^{|v||w|} w ⊗ v

If we are working with the category of ordinary vector spaces, the commutativity isomorphism takes v ⊗ w to w ⊗ v. In super linear algebra we have to add the sign factor in front. This is a special case of the general principle called the “sign rule”: in making definitions and proving theorems, the transition from the usual theory to the super theory is often made simply by following this principle, which introduces a sign factor whenever one reverses the order of two odd elements. The functoriality underlying the constructions ensures that the definitions are all consistent.
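On homogeneous basis elements the sign rule is a one-liner. A minimal Python sketch (representing a homogeneous basis vector as a (name, parity) pair is our convention):

```python
def c(v, w):
    # c_{V,W} on homogeneous elements: v ⊗ w ↦ (-1)^{|v||w|} w ⊗ v.
    # v and w are (name, parity) pairs; the sign is returned separately.
    sign = (-1) ** (v[1] * w[1])
    return sign, (w, v)
```

Applying c twice multiplies the two signs to (−1)^{2|v||w|} = +1, so c_{W,V} ∘ c_{V,W} = id, as required of the commutativity isomorphism.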

The commutativity isomorphism satisfies the so-called hexagon diagram: if we had not suppressed the arrows of the associativity morphisms, the diagram would have the shape of a hexagon.

The definition of the commutativity isomorphism, also informally referred to as the sign rule, has the following very important consequence. If V1, …, Vn are super vector spaces and σ and τ are two permutations of n elements, then no matter how we compose associativity and commutativity morphisms, we always obtain the same isomorphism from Vσ(1) ⊗ … ⊗ Vσ(n) to Vτ(1) ⊗ … ⊗ Vτ(n), namely:

Vσ(1) ⊗ … ⊗ Vσ(n) → Vτ(1) ⊗ … ⊗ Vτ(n)

vσ(1) ⊗ … ⊗ vσ(n) ↦ (−1)^N vτ(1) ⊗ … ⊗ vτ(n)

where N is the number of pairs of indices i, j such that vi and vj are odd and σ⁻¹(i) < σ⁻¹(j) while τ⁻¹(i) > τ⁻¹(j).
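The count N can be computed directly. A small Python sketch (the function name `koszul_sign` is ours; permutations are written as 0-indexed tuples):

```python
def koszul_sign(parities, sigma, tau):
    # Sign (-1)^N relating v_{sigma(1)} ⊗ ... ⊗ v_{sigma(n)} to
    # v_{tau(1)} ⊗ ... ⊗ v_{tau(n)}: N counts pairs i, j with v_i, v_j both
    # odd whose relative order differs between the two orderings.
    sigma_inv = {sigma[k]: k for k in range(len(sigma))}
    tau_inv = {tau[k]: k for k in range(len(tau))}
    n = len(parities)
    N = sum(1
            for i in range(n) for j in range(n)
            if parities[i] == 1 and parities[j] == 1
            and sigma_inv[i] < sigma_inv[j] and tau_inv[i] > tau_inv[j])
    return (-1) ** N
```

For example, transposing two odd factors gives −1, while moving an even factor past anything gives +1.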

The dual V* of V is defined as

V* := Hom(V, k)

If V is even, V = V0, V* is the ordinary dual of V consisting of all even morphisms V → k. If V is odd, V = V1, then V* is also an odd vector space and consists of all odd morphisms V1 → k. This is because any morphism from V1 to k = k1|0 is necessarily odd and sends odd vectors into even ones. The category of super vector spaces thus becomes what is known as a tensor category with inner Hom and dual.

# Marching Along Categories, Groups and Rings. Part 2

A category C consists of the following data:

A collection Obj(C) of objects. We will write “x ∈ C” to mean that “x ∈ Obj(C)”.

For each ordered pair x, y ∈ C there is a collection HomC (x, y) of arrows. We will write α∶x→y to mean that α ∈ HomC(x,y). Each collection HomC(x,x) has a special element called the identity arrow idx ∶ x → x. We let Arr(C) denote the collection of all arrows in C.

For each ordered triple of objects x, y, z ∈ C there is a function

○ ∶ HomC (x, y) × HomC(y, z) → HomC (x, z), which is called composition of  arrows. If  α ∶ x → y and β ∶ y → z then we denote the composite arrow by β ○ α ∶ x → z.

If each collection of arrows HomC(x,y) is a set then we say that the category C is locally small. If in addition the collection Obj(C) is a set then we say that C is small.

Identity: For each arrow α ∶ x → y the following diagram commutes:

Associative: For all arrows α ∶ x → y, β ∶ y → z, γ ∶ z → w, the following diagram commutes:
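In the concrete category Set, composition is ordinary composition of functions, and both axioms can be verified pointwise; a minimal Python sketch (the helper name `compose` is ours):

```python
def compose(beta, alpha):
    # Composition of arrows in Set: (beta ∘ alpha)(x) = beta(alpha(x)).
    return lambda x: beta(alpha(x))

identity = lambda x: x   # the identity arrow id_x

f = lambda n: n + 1      # f : x -> y
g = lambda n: 2 * n      # g : y -> z
h = lambda n: n * n      # h : z -> w
```

Then `compose(f, identity)` and `compose(identity, f)` agree with `f`, and `compose(h, compose(g, f))` agrees with `compose(compose(h, g), f)` on every input.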

We say that C′ ⊆ C is a subcategory if Obj(C′) ⊆ Obj(C) and if ∀ x,y ∈ Obj(C′) we have HomC′(x,y) ⊆ HomC(x,y). We say that the subcategory is full if each inclusion of hom sets is an equality.

Let C be a category. A diagram D ⊆ C is a collection of objects in C with some arrows between them; repetition of objects and arrows is allowed. Alternatively, let I be any small category, which we think of as an “index category”; then any functor D ∶ I → C is called a diagram of shape I in C. In either case, we say that the diagram D commutes if for all pairs of objects x, y in D, any two directed paths in D from x to y yield the same arrow under composition.

Identity arrows generalize the reflexive property of posets, and composition of arrows generalizes the transitive property of posets. But whatever happened to the antisymmetric property? Well, it’s the same issue we had before: we should really define equivalence of objects in terms of antisymmetry.

Isomorphism: Let C be a category. We say that two objects x,y ∈ C are isomorphic in C if there exist arrows α ∶ x → y and β ∶ y → x such that the following diagram commutes:

In this case we write x ≅C y, or just x ≅ y if the category is understood.

If γ ∶ y → x is any other arrow satisfying the same diagram as β, then by the axioms of identity and associativity we must have

γ = γ ○ idy = γ ○ (α ○ β) = (γ ○ α) ○ β = idx ○ β = β

This allows us to refer to β as the inverse of the arrow α. We use the notations β = α−1 and β−1 = α.

A category with one object is called a monoid. A monoid in which each arrow is invertible is called a group. A small category in which each arrow is invertible is called a groupoid.

Subcategories of Set are called concrete categories. Given a concrete category C ⊆ Set we can think of its objects as special kinds of sets and its arrows as special kinds of functions. Some famous examples of concrete categories are:

• Grp = groups & homomorphisms
• Ab = abelian groups & homomorphisms
• Rng = rings & homomorphisms
• CRng = commutative rings & homomorphisms

Note that Ab ⊆ Grp and CRng ⊆ Rng are both full subcategories. In general, the arrows of a concrete category are called morphisms or homomorphisms. This explains our notation of HomC.

Homotopy: The most famous example of a non-concrete category is the fundamental groupoid π1(X) of a topological space X. Here the objects are points and the arrows are homotopy classes of continuous directed paths. Identifying isomorphic objects yields the set π0(X) of path components (viewed as a discrete category, i.e., one in which the only arrows are the identities). Categories like this are the reason we prefer the name “arrow” instead of “morphism”.

Limit/Colimit: Let D ∶ I → C be a diagram in a category C (thus D is a functor and I is a small “index” category). A cone under D consists of

• an object c ∈ C,

• a collection of arrows αi ∶ c → D(i), one for each index i ∈ I,

such that for each arrow δ ∶ i → j in I we have αj = D(δ) ○ αi.

In visualizing this:

The cone (c,(αi)i∈I) is called a limit of the diagram D if, for any cone (z,(βi)i∈I) under D, the following picture holds:

This picture means that there exists a unique arrow υ ∶ z → c such that, for each arrow δ ∶ i → j in I (including the identity arrows), the following diagram commutes:

When δ = idi this diagram just says that βi = αi ○ υ. We do not assume that D itself is commutative. Dually, a cone over D consists of an object c ∈ C and a set of arrows αi ∶ D(i) → c satisfying αi = αj ○ D(δ) for each arrow δ ∶ i → j in I. This cone is called a colimit of the diagram D if, for any cone (z,(βi)i∈I) over D, the following picture holds:

When the (unique) limit or colimit of the diagram D ∶ I → C exists, we denote it by (limI D, (φi)i∈I) or (colimI D, (φi)i∈I), respectively. Sometimes we omit the canonical arrows φi from the notation and refer to the object limI D ∈ C as “the limit of D”. However, we should not forget that the arrows are part of the structure, i.e., the limit is really a cone.

Posets: Let P be a poset. We have already seen that the product/coproduct in P (if they exist) are the meet/join, respectively, and that the final/initial objects in P (if they exist) are the top/bottom elements, respectively. The only poset with a zero object is the one element poset.
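For instance, in the divisibility poset the meet of two numbers is their gcd. A minimal Python sketch of the meet as a greatest lower bound (the helper name `meet` is ours):

```python
def meet(P, leq, S):
    # In a poset viewed as a category, the product of the objects in S is
    # their meet: a greatest lower bound, if one exists.
    lower = [x for x in P if all(leq(x, s) for s in S)]
    for m in lower:
        if all(leq(x, m) for x in lower):
            return m
    return None  # no meet in this poset

divides = lambda a, b: b % a == 0
divisors_of_12 = [1, 2, 3, 4, 6, 12]
```

Here `meet(divisors_of_12, divides, [4, 6])` returns 2, the gcd of 4 and 6.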

Sets: The empty set ∅ ∈ Set is an initial object and the one point set ∗ ∈ Set is a final object. Note that two sets are isomorphic in Set precisely when there is a bijection between them, i.e., when they have the same cardinality. Since initial/final objects are unique up to isomorphism, we can identify the initial object with the cardinal number 0 and the final object with the cardinal number 1. There is no zero object in Set.

Products and coproducts exist in Set. The product of S,T ∈ Set consists of the Cartesian product S × T together with the canonical projections πS ∶ S × T → S and πT ∶ S × T → T. The coproduct of S, T ∈ Set consists of the disjoint union S ∐ T together with the canonical injections ιS ∶ S → S ∐ T and ιT ∶ T → S ∐ T. After passing to the skeleton, the product and coproduct of sets become the product and sum of cardinal numbers.

[Note: The “external disjoint union” S ∐ T is a formal concept. The familiar “internal disjoint union” S ⊔ T is only defined when there exists a set U containing both S and T as subsets. Then the union S ∪ T is the join operation in the Boolean lattice 2U ; we call the union “disjoint” when S ∩ T = ∅.]
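The universal properties say that maps into S × T correspond to pairs of maps, and maps out of S ∐ T to pairs of maps; a small Python sketch (the names `pair` and `copair`, and the tagging of the disjoint union by 0 and 1, are ours):

```python
def pair(f, g):
    # Universal property of the product: given f : Z -> S and g : Z -> T,
    # the unique <f, g> : Z -> S x T with pi_S ∘ <f, g> = f, pi_T ∘ <f, g> = g.
    return lambda z: (f(z), g(z))

def copair(f, g):
    # Dual property of the coproduct: given f : S -> Z and g : T -> Z,
    # the unique [f, g] : S ∐ T -> Z, where elements of the disjoint union
    # are tagged (0, s) or (1, t).
    return lambda x: f(x[1]) if x[0] == 0 else g(x[1])
```

Composing `pair(f, g)` with either projection recovers `f` or `g`, and precomposing `copair(f, g)` with either injection recovers `f` or `g`.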

Groups: The trivial group 1 ∈ Grp is a zero object, and for any groups G, H ∈ Grp the zero homomorphism 1 ∶ G → H sends all elements of G to the identity element 1H ∈ H. The product of groups G, H ∈ Grp is their direct product G × H and the coproduct is their free product G ∗ H, along with the usual canonical morphisms.

Let Ab ⊆ Grp be the full subcategory of abelian groups. The zero object and product are inherited from Grp, but we give them new names: we denote the zero object by 0 ∈ Ab and for any A, B ∈ Ab we denote the zero arrow by 0 ∶ A → B. We denote the Cartesian product by A ⊕ B and we rename it the direct sum. The big difference between Grp and Ab appears when we consider coproducts: it turns out that the product group A ⊕ B is also the coproduct group. We emphasize this fact by calling A ⊕ B the biproduct in Ab. It comes equipped with four canonical homomorphisms πA, πB, ιA, ιB satisfying the usual properties, as well as the following commutative diagram:

This diagram is the ultimate reason for matrix notation. The universal properties of product and coproduct tell us that each endomorphism φ ∶ A ⊕ B → A ⊕ B is uniquely determined by its four components φij ∶= πi ○ φ ○ ιj for i, j ∈ {A, B}, so we can represent it as a matrix:

φ = (φAA φAB; φBA φBB)

Then the composition of endomorphisms becomes matrix multiplication.
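We can check this with A = B = Z, where an endomorphism of Z is multiplication by an integer and composition of endomorphisms is integer multiplication. A minimal Python sketch (representing φ as a dict of its four components is our convention):

```python
LABELS = ("A", "B")

def compose_components(psi, phi):
    # Matrix multiplication of components:
    # (psi ∘ phi)_{ij} = sum over k of psi_{ik} ∘ phi_{kj}.
    return {(i, j): sum(psi[(i, k)] * phi[(k, j)] for k in LABELS)
            for i in LABELS for j in LABELS}

def apply(phi, v):
    # Action of the endomorphism on a pair (a, b) in Z ⊕ Z.
    return (phi[("A", "A")] * v[0] + phi[("A", "B")] * v[1],
            phi[("B", "A")] * v[0] + phi[("B", "B")] * v[1])
```

Composing via `compose_components` agrees with composing the actions on Z ⊕ Z, which is exactly the statement that composition becomes matrix multiplication.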

Rings: We let Rng denote the category of rings with unity, together with their homomorphisms. The initial object is the ring of integers Z ∈ Rng and the final object is the zero ring 0 ∈ Rng, i.e., the unique ring in which 0R = 1R. There is no zero object. The product of two rings R, S ∈ Rng is the direct product R × S ∈ Rng with componentwise addition and multiplication. Let CRng ⊆ Rng be the full subcategory of commutative rings. The initial/final objects and product in CRng are inherited from Rng. The difference between Rng and CRng again appears when considering coproducts. The coproduct of R, S ∈ CRng is denoted by R ⊗Z S and is called the tensor product over Z…