Fascism. Drunken Risibility


You must create your life, as you’d create a work of art. It’s necessary that the life of an intellectual be artwork with him as the subject. True superiority is all here. At all costs, you must preserve liberty, to the point of intoxication. — Gabriele d’Annunzio

The complex relationship between fascism and modernity cannot be resolved all at once, and with a simple yes or no. It has to be developed in the unfolding story of fascism’s acquisition and exercise of power. The most satisfactory work on this matter shows how antimodernizing resentments were channeled and neutralized, step by step, in specific legislation, by more powerful pragmatic and intellectual forces working in the service of an alternate modernity.

The word fascism has its root in the Italian fascio, literally a bundle or sheaf. More remotely, the word recalled the Latin fasces, an axe encased in a bundle of rods that was carried before the magistrates in Roman public processions to signify the authority and unity of the state. Before 1914, the symbolism of the Roman fasces was usually appropriated by the Left. Marianne, symbol of the French Republic, was often portrayed in the nineteenth century carrying the fasces to represent the force of Republican solidarity against her aristocratic and clerical enemies. Italian revolutionaries used the term fascio in the late nineteenth century to evoke the solidarity of committed militants. The peasants who rose against their landlords in Sicily in 1893–94 called themselves the Fasci Siciliani. When in late 1914 a group of left-wing nationalists, soon joined by the socialist outcast Benito Mussolini, sought to bring Italy into World War I on the Allied side, they chose a name designed to communicate both the fervor and the solidarity of their campaign: the Fascio Rivoluzionario d’Azione Interventista (Revolutionary League for Interventionist Action). At the end of World War I, Mussolini coined the term fascismo to describe the mood of the little band of nationalist ex-soldiers and pro-war syndicalist revolutionaries that he was gathering around himself. Even then, he had no monopoly on the word fascio, which remained in general use for activist groups of various political hues. Officially, Fascism was born in Milan on Sunday, March 23, 1919. That morning, somewhat more than a hundred persons, including war veterans, syndicalists who had supported the war, and Futurist intellectuals, plus some reporters and the merely curious, gathered in the meeting room of the Milan Industrial and Commercial Alliance, overlooking the Piazza San Sepolcro, to “declare war against socialism . . . 
because it has opposed nationalism.” Now Mussolini called his movement the Fasci di Combattimento, which means, very approximately, “fraternities of combat.”

Definitions are inherently limiting. They frame a static picture of something that is better perceived in movement, and they portray as “frozen ‘statuary’” something that is better understood as a process. They succumb all too often to the intellectual’s temptation to take programmatic statements as constitutive, and to identify fascism more with what it said than with what it did. The quest for the perfect definition, by reducing fascism to one ever more finely honed phrase, seems to shut off questions about the origins and course of fascist development rather than open them up. Fascism, by contrast, was a new invention created afresh for the era of mass politics. It sought to appeal mainly to the emotions by the use of ritual, carefully stage-managed ceremonies, and intensely charged rhetoric. The role programs and doctrine play in it is, on closer inspection, fundamentally unlike the role they play in conservatism, liberalism, and socialism. Fascism does not rest explicitly upon an elaborated philosophical system, but rather upon popular feelings about master races, their unjust lot, and their rightful predominance over inferior peoples. It has not been given intellectual underpinnings by any system builder, like Marx, or by any major critical intelligence, like Mill, Burke, or Tocqueville. In a way utterly unlike the classical “isms,” the rightness of fascism does not depend on the truth of any of the propositions advanced in its name. Fascism is “true” insofar as it helps fulfill the destiny of a chosen race or people or blood, locked with other peoples in a Darwinian struggle, and not in the light of some abstract and universal reason.

Automorphisms. Note Quote.

×   |  1   −1    i   −i
1   |  1   −1    i   −i
−1  | −1    1   −i    i
i   |  i   −i   −1    1
−i  | −i    i    1   −1

A group automorphism is an isomorphism from a group to itself. If G is a finite multiplicative group, an automorphism of G can be described as a way of rewriting its multiplication table without altering its pattern of repeated elements. For example, the multiplication table of the group of 4th roots of unity G={1,-1,i,-i} can be written as shown above, which means that the map defined by

1 ↦ 1,    −1 ↦ −1,    i ↦ −i,    −i ↦ i

is an automorphism of G.
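This check is mechanical enough to verify by machine. Below is a minimal Python sketch (using the built-in complex numbers, a convenience of ours rather than anything in the original): the map above is just complex conjugation, and the two assertions state exactly the bijection and product-preservation conditions.

```python
# The 4th roots of unity under multiplication.
G = {1, -1, 1j, -1j}

# The map 1 |-> 1, -1 |-> -1, i |-> -i, -i |-> i is complex conjugation.
def phi(z):
    return z.conjugate()

# An automorphism of G is a bijection of G onto itself preserving products.
assert {phi(g) for g in G} == G
assert all(phi(a * b) == phi(a) * phi(b) for a in G for b in G)
print("conjugation is an automorphism of G")
```

All products of these four units are exact in complex arithmetic, so no floating-point tolerance is needed.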

Looking at classical geometry and mechanics, Weyl followed Newton and Helmholtz in considering congruence as the basic relation which lay at the heart of the “art of measuring” by the handling of that “sort of bodies we call rigid”. He explained how the local congruence relations established by the comparison of rigid bodies can be generalized and abstracted to congruences of the whole space. In this respect Weyl followed an empiricist approach to classical physical geometry, based on a theoretical extension of the material practice with rigid bodies and their motions. Even the mathematical abstraction to mappings of the whole space carried the mark of their empirical origin and was restricted to the group of proper congruences (orientation preserving isometries of Euclidean space, generated by the translations and rotations) denoted by him as ∆+. This group seems to express “an intrinsic structure of space itself; a structure stamped by space upon all the inhabitants of space”.

But already on the earlier level of physical knowledge, so Weyl argued, the mathematical automorphisms of space were larger than ∆. Even if one sees “with Newton, in congruence the one and only basic concept of geometry from which all others derive”, the group Γ of automorphisms in the mathematical sense turns out to be constituted by the similarities.

The structural condition for an automorphism C ∈ Γ of classical congruence geometry is that any pair (v1,v2) of congruent geometric configurations is transformed into another pair (v1*,v2*) of congruent configurations (vj* = C(vj), j = 1,2). For evaluating this property Weyl introduced the following diagram:

[diagram omitted]

Because of the condition for automorphisms just mentioned, the maps C T C⁻¹ and C⁻¹T C belong to ∆+ whenever T does. By this argument he showed that the mathematical automorphism group Γ is the normalizer of the congruences ∆+ in the group of bijective mappings of Euclidean space.
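Weyl's conjugation argument can be illustrated concretely. In two dimensions an orientation-preserving similarity can be written as a complex affine map z ↦ a·z + b, the proper congruences ∆+ being exactly the case |a| = 1; this planar encoding is an illustrative assumption of ours, not Weyl's own notation. A sketch:

```python
import cmath
from math import isclose

# A map z |-> a*z + b is stored as the pair (a, b).
def compose(f, g):                       # f o g: apply g, then f
    a1, b1 = f
    a2, b2 = g
    return (a1 * a2, a1 * b2 + b1)

def inverse(f):                          # solve a*z + b = w for z
    a, b = f
    return (1 / a, -b / a)

def is_proper_congruence(f):             # member of Delta+ iff |a| = 1
    return isclose(abs(f[0]), 1.0)

C = (2.5 * cmath.exp(0.7j), 3 - 4j)      # a similarity with scale factor 2.5
T = (cmath.exp(1.1j), 1 + 2j)            # a proper congruence

# C T C^-1 lands back inside Delta+: the scale factors cancel.
conj = compose(compose(C, T), inverse(C))
assert is_proper_congruence(conj)
assert not is_proper_congruence(C)
```

This is the normalizer property in miniature: conjugating a proper congruence by any similarity yields another proper congruence, even though the similarity itself lies outside ∆+.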

More generally, it also explains his characterization of generalized similarities in his analysis of the problem of space in the early 1920s. In 1918 he carried the relationship between physical equivalences (congruences) and mathematical automorphisms (similarities, the normalizer of the congruences) over from classical geometry to special relativity (Minkowski space) and “localized” it (in the physicists’ sense), i.e., he transferred the structural relationship to the infinitesimal neighbourhoods of the differentiable manifold characterizing spacetime (in more recent language, to the tangent spaces) and developed what later would be called Weylian manifolds, a generalization of Riemannian geometry. In his discussion of the problem of space he generalized the same relationship even further by allowing any (closed) subgroup of the general linear group as a candidate for characterizing generalized congruences at every point.

Moreover, Weyl argued that the enlargement of the physico-geometrical automorphisms of classical geometry (proper congruences) by the mathematical automorphisms (similarities) sheds light on Kant’s riddle of the “incongruous counterparts”. Weyl presented it as the question: Why are “incongruous counterparts” like the left and right hands intrinsically indiscernible, although they cannot be transformed into one another by a proper motion? From his point of view the intrinsic indiscernibility could be characterized by the mathematical automorphisms Γ. Of course, the congruences ∆ including the reflections are part of the latter, ∆ ⊂ Γ; this implies indiscernibility between “left and right” as a special case. In this way Kant’s riddle was solved by a Leibnizian type of argument. Weyl very cautiously indicated a philosophical implication of this observation:

And he (Kant) is inclined to think that only transcendental idealism is able to solve this riddle. No doubt, the meaning of congruence and similarity is founded in spatial intuition. Kant seems to aim at some subtler point. But just this point is one which can be completely clarified by general concepts, namely by subsuming it under the general and typical group-theoretic situation explained before . . . .

Weyl stopped here without discussing the relationship between group theoretical methods and the “subtler point” Kant aimed at more explicitly. But we may read this remark as an indication that he considered his reflections on automorphism groups as a contribution to the transcendental analysis of the conceptual constitution of modern science. In his book on Symmetry, he went a tiny step further. Still with the Weylian restraint regarding the discussion of philosophical principles he stated: “As far as I see all a priori statements in physics have their origin in symmetry” (126).

To prepare for the following, Weyl specified the subgroup ∆o ⊂ ∆ consisting of all those transformations that fix one point (∆o = O(3, R), the orthogonal group in 3 dimensions, R the field of real numbers). In passing he remarked:

In the four-dimensional world the Lorentz group takes the place of the orthogonal group. But here I shall restrict myself to the three-dimensional space, only occasionally pointing to the modifications that the inclusion of time into the four-dimensional world brings about.

Keeping this caveat in mind (restriction to three-dimensional space), Weyl characterized the “group of automorphisms of the physical world”, in the sense of classical physics (including quantum mechanics), by the combination (more technically, the semidirect product) of translations and rotations, while the mathematical automorphisms arise from a normal extension:

– physical automorphisms ∆ ≅ R3 ⋊ ∆o with ∆o ≅ O(3), respectively ∆ ≅ R4 ⋊ ∆o for the Lorentz group ∆o ≅ O(1, 3),

– mathematical automorphisms Γ = R+ × ∆ (R+ the positive real numbers with multiplication).
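The semidirect product here is just the familiar composition rule for affine motions x ↦ A·x + t: composing (A1, t1) with (A2, t2) gives (A1·A2, t1 + A1·t2). A small sketch of this rule in R3, using plain Python lists rather than any matrix library (the choice of example motions is ours):

```python
# A motion (A, t) acts by x |-> A*x + t.
def matvec(A, x):
    return [sum(A[i][k] * x[k] for k in range(3)) for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def compose(m1, m2):
    # Semidirect-product rule: (A1, t1) * (A2, t2) = (A1*A2, t1 + A1*t2)
    (A1, t1), (A2, t2) = m1, m2
    return (matmul(A1, A2), [t1[i] + matvec(A1, t2)[i] for i in range(3)])

def apply(m, x):
    A, t = m
    return [matvec(A, x)[i] + t[i] for i in range(3)]

Rz = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]   # rotation by 90 degrees about z
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
g = (Rz, [0, 0, 0])                       # a pure rotation
h = (I3, [1, 0, 0])                       # a pure translation

x = [2, 3, 5]
# Composing-then-applying agrees with applying one motion after the other.
assert apply(compose(g, h), x) == apply(g, apply(h, x))
```

Note that the translation part of g ∘ h is A1·t2, not t2: the rotation part acts on the translations, which is precisely why the product is semidirect rather than direct.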

In Weyl’s view the difference between mathematical and physical automorphisms established a fundamental distinction between mathematical geometry and physics.

Congruence, or physical equivalence, is a geometric concept, the meaning of which refers to the laws of physical phenomena; the congruence group ∆ is essentially the group of physical automorphisms. If we interpret geometry as an abstract science dealing with such relations and such relations only as can be logically defined in terms of the one concept of congruence, then the group of geometric automorphisms is the normalizer of ∆ and hence wider than ∆.

He considered this as a striking argument against what he considered to be the Cartesian program of a reductionist geometrization of physics (physics as the science of res extensa):

According to this conception, Descartes’s program of reducing physics to geometry would involve a vicious circle, and the fact that the group of geometric automorphisms is wider than that of physical automorphisms would show that such a reduction is actually impossible.

In this Weyl alluded to an illusion he himself had shared for a short time as a young scientist. After the creation of his gauge geometry in 1918 and the proposal of a geometrically unified field theory of electromagnetism and gravity, he believed for a short while that he had achieved a complete geometrization of physics.

He gave up this illusion in the middle of the 1920s under the impression of the rising quantum mechanics. In his own contribution to the new quantum mechanics, groups and their linear representations played a crucial role. In this respect the mathematical automorphisms of geometry and the physical automorphisms “of Nature”, or more precisely the automorphisms of physical systems, moved even further apart, because now the physical automorphisms started to take non-geometrical material degrees of freedom into account (phase symmetry of wave functions and, already earlier, the permutation symmetries of n-particle systems).

But already during the 19th century the physical automorphism group had acquired a far deeper aspect than that of the mobility of rigid bodies:

In physics we have to consider not only points but many types of physical quantities such as velocity, force, electromagnetic field strength, etc. . . .

All these quantities can be represented, relative to a Cartesian frame, by sets of numbers such that any orthogonal transformation T performed on the coordinates keeps the basic physical relations, the physical laws, invariant. Weyl accordingly stated:

All the laws of nature are invariant under the transformations thus induced by the group ∆. Thus physical relativity can be completely described by means of a group of transformations of space-points.

By this argumentation Weyl described a deep shift which occurred in the late 19th century in the understanding of physics. He described it as an extension of the group of physical automorphisms. The laws of physics (“basic relations” in his more abstract terminology above) could no longer be directly characterized by the motion of rigid bodies, because the physics of fields, in particular of electric and magnetic fields, had become central. In this context the motions of material bodies lost their epistemologically primary status, and the physical automorphisms acquired a more abstract character, although they were still completely characterizable in geometric terms, by the full group of Euclidean isometries. The indistinguishability of left and right, observed already in clear terms by Kant, acquired the status of a physical symmetry in electromagnetism and in crystallography.

Weyl thus insisted that in classical physics the physical automorphisms could be characterized by the group ∆ of Euclidean isometries, larger than the physical congruences (proper motions) ∆+ but smaller than the mathematical automorphisms (similarities) Γ.

This view fitted well with insights which Weyl drew from recent developments in quantum physics. He insisted – contrary to what he had thought in 1918 – on the consequence that “length is not relative but absolute” (Hs, p. 15). He argued that physical length measurements were no longer dependent on an arbitrarily chosen unit, as in Euclidean geometry. An “absolute standard of length” could be fixed by the quantum mechanical laws of the atomic shell:

The atomic constants of charge and mass of the electron and Planck’s quantum of action h, which enter the universal field laws of nature, fix an absolute standard of length, that through the wave lengths of spectral lines is made available for practical measurements.

Kant, Poincaré, Sklar and Philosophico-Geometrical Problem of Under-Determination. Note Quote.


What did Kant really mean in viewing Euclidean geometry as the correct geometrical structure of the world? It is widely known that one of the main goals that Kant pursued in the First Critique was that of unearthing the a priori foundations of Newtonian physics, which describes the structure of the world in terms of Euclidean geometry. How did he achieve that? Kant maintained that our understanding of the physical world had its foundations not merely in experience, but in both experience and a priori concepts. He argues that the possibility of sensory experience depends on certain necessary conditions which he calls a priori forms and that these conditions structure and hold true of the world of experience. As he maintains in the “Transcendental Aesthetic”, Space and Time are not derived from experience but rather are its preconditions. Experience provides those things which we sense. It is our mind, though, that processes this information about the world and gives it order, allowing us to experience it. Our mind supplies the conditions of space and time to experience objects. Thus “space” for Kant is not something existing – as it was for Newton. Space is an a priori form that structures our perception of objects in conformity to the principles of the Euclidean geometry. In this sense, then, the latter is the correct geometrical structure of the world. It is necessarily correct because it is part of the a priori principles of organization of our experience. This claim is exactly what Poincaré criticized about Kant’s view of geometry. Poincaré did not agree with Kant’s view of space as precondition of experience. He thought that our knowledge of the physical space is the result of inferences made out of our direct perceptions.

This knowledge is a theoretical construct, i.e., we infer the existence and nature of physical space as an explanatory hypothesis which accounts for the regularity we experience in our direct perceptions. But this hypothesis does not possess the necessity of an a priori principle that structures what we directly perceive. Although Poincaré does not endorse an empiricist account, he seems to think that an empiricist view of geometry is more adequate than the Kantian conception. In fact, the idea that only a large number of observations inquiring into the geometry of the physical world can establish which geometrical structure is the correct one is considered by him more plausible. But this empiricist approach is not going to work either. In fact Poincaré does not endorse an empiricist view of geometry. The outcome of his considerations about a comparison between the empiricist and Kantian accounts of geometry is well described by Sklar:

Nevertheless the empiricist account is wrong. For, given any collections of empirical observations a multitude of geometries, all incompatible with one another, will be equally compatible with the experimental results.

This is the problem of under-determination of hypotheses about the geometrical structure of physical space by experimental evidence. The under-determination is not due to a limit on our ability to collect experimental facts. No matter how rich and sophisticated our experimental procedures for accumulating empirical results may be, these results will never be compelling enough to support just one of the hypotheses about the geometry of physical space, ruling out the competitors once and for all. Actually, it is even worse than that: empirical results seem not to give us any reason at all to think one or the other hypothesis correct. Poincaré thought that this problem was grist to the mill of the conventionalist approach to geometry. The adoption of a geometry for physical space is a matter of making a conventional choice. A brief description of Poincaré’s disk model might unravel a bit more the issue that is coming up here. The short story about this imaginary world shows that an empiricist account of geometry fails to be adequate. In fact, Poincaré describes a scenario in which Euclidean and hyperbolic geometrical descriptions of that physical space end up being equally consistent with the same collection of empirical data. However, what this story tells us can be generalized to any other scenario, including ours, in which a scientific inquiry concerning the intrinsic geometry of the world is performed.

The imaginary world described in Poincaré’s example is a Euclidean two-dimensional disk of radius R, heated so that the temperature at a point at distance r from the center is proportional to R² − r². The center is thus the hottest point, and the edge of the disk is uniformly cooled to 0°.

A group of scientists living on the disk are interested in knowing what the intrinsic geometry of their world is. As Sklar says, the equipment available to them consists of rods that dilate uniformly with increasing temperature, i.e. at each point of the space they all change their lengths in direct proportion to the temperature at that point. However, the scientists are not aware of this peculiar temperature distortion of their rods. So, without anybody knowing, every time a measurement is performed the rods shrink or dilate, depending on whether they are close to the edge or to the center. After repeated measurements all over the disk, they have a list of empirical data that seems to strongly support the idea that their world is a Lobachevskian plane. So this view becomes the official one. However, a different interpretation of the data is presented by a member of the community who, striking a discordant note, claims that those empirical data can be taken to indicate that the world is in fact a Euclidean disk, but equipped with fields shrinking or dilating lengths.

Although the two geometrical theories about the structure of the physical space are competitors, the empirical results collected by the scientists support both of them. From our external three-dimensional Euclidean perspective we know their two-dimensional world is Euclidean, and so we know that only the innovator’s interpretation is the correct one. From our standpoint the problem of under-determination would seem to be merely a problem of epistemic access due to the particular experimental repertoire of the inhabitants; after all, it would seem that expanding this repertoire and increasing the amount of empirical data could overcome the problem. But according to Poincaré that would completely miss the point. Moving from our “superior” perspective to theirs would place us in exactly the same situation as they are in, i.e., in the impossibility of deciding which geometry is the correct one. More importantly, Poincaré seems to say that no arbitrarily large amount of empirical data can refute a geometric hypothesis. In fact, a scientific theory about space is divided into two branches, a geometric one and a physical one. These two parts are deeply related: it would be possible to save any geometric hypothesis about space from experimental refutation by suitably changing some features of the physical branch of the theory. According to Sklar, this fact forces Poincaré to the conclusion that the choice of one hypothesis among several competitors is purely conventional.
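The disk story admits a quick numerical sketch. Assuming, as in the story, that rod lengths scale with R² − r² (and taking R = 1 for convenience, an assumption of ours), the distance the inhabitants measure from the center out to radius r is the integral of dr′/(1 − r′²); this grows without bound as r → 1, so by their own measurements the edge is infinitely far away, just as in a Lobachevskian plane:

```python
import math

# Measured distance from the center to radius r: each true length element dr'
# is divided by the local rod length, proportional to 1 - r'**2 (R = 1).
def measured_distance(r, n=100_000):
    h = r / n  # midpoint-rule numerical integration
    return sum(h / (1 - ((i + 0.5) * h) ** 2) for i in range(n))

for r in (0.5, 0.9, 0.99, 0.999):
    print(r, measured_distance(r))

# The closed form of the integral is artanh(r), which diverges as r -> 1.
assert math.isclose(measured_distance(0.9), math.atanh(0.9), rel_tol=1e-6)
```

The printed distances keep growing as r approaches the rim, which is the quantitative content of the claim that the data "strongly support" a Lobachevskian interpretation.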

The problem of under-determination also comes up in the analysis of dual string theories: two string theories postulating geometrically inequivalent backgrounds can, if dual, produce the same experimental results: same expectation values, same scattering amplitudes, and so on. Therefore, similarly to Poincaré’s short story, empirical data relative to the physical properties and dynamics of strings are not sufficient to determine which of the two different geometries postulated for the background is the right one, or whether there is any more fundamental geometry at all influencing physical dynamics.

General Philosophy Of Category Theory, i.e., We Should Only Care About Objects Up To Isomorphism. Part 7.

In this section we will prove that adjoint functors determine each other up to isomorphism. The key tool is the concept of an “embedding of categories”. In particular, the hom bifunctor Cop × C → Set induces two “Yoneda embeddings”

H(−) ∶ Cop → SetC and H(−) ∶ C → SetCop

These are analogous to the two embeddings of a vector space V into its dual space that are induced by a non-degenerate bilinear function ⟨−, −⟩ ∶ V × V → K.

Embedding of Categories: Recall that a functor F ∶ C → D consists of:

• An object function F ∶ Obj(C) → Obj(D),

• For each pair of objects c1, c2 ∈ C, a hom set function:

F ∶ HomC(c1,c2) → HomD(F(c1),F(c2))

We say that F is a full functor when the hom set functions are surjective, and we say that F is a faithful functor when the hom set functions are injective. If the hom set functions are bijective then we say that F is a fully faithful functor, or an embedding of categories.
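For one-object categories these notions become very concrete: a monoid homomorphism is a functor between one-object categories, and it is faithful iff it is injective and full iff it is surjective on the single hom set. A toy illustration (the choice of (Z/4, +), (Z/2, +) and reduction mod 2 is ours):

```python
# Arrows of the one-object category for (Z/4, +) are 0, 1, 2, 3;
# likewise 0, 1 for (Z/2, +). The functor F is reduction mod 2.
Z4 = range(4)
Z2 = range(2)

def F(a):
    return a % 2

# Functoriality: F respects composition (addition) and identities.
assert all(F((a + b) % 4) == (F(a) + F(b)) % 2 for a in Z4 for b in Z4)

images = [F(a) for a in Z4]
full = set(images) == set(Z2)            # hom-set map surjective
faithful = len(set(images)) == len(Z4)   # hom-set map injective
print(full, faithful)                    # full but not faithful
```

The example also shows the two conditions are independent: reduction mod 2 is full without being faithful, while an inclusion of monoids would be faithful without being full.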

An embedding is in some sense the correct notion of an “injective functor”. If F ∶ C → D is an embedding, then the object function F ∶ Obj(C) → Obj(D) is not necessarily injective, but it is “injective up to isomorphism”. This agrees with the general philosophy of category theory, i.e., that we should only care about objects up to isomorphism.

Embedding Lemma: Let F ∶ C → D be an embedding of categories. Then F is essentially injective in the sense that for all objects c1, c2 ∈ C we have

c1 ≅ c2 in C ⇐⇒ F(c1) ≅ F(c2) in D

Furthermore, F is essentially monic in the sense that for all functors G1, G2 ∶ B → C we have

G1 ≅ G2 in CB ⇐⇒ F ○ G1 ≅ F ○ G2 in DB

Proof: Let F ∶ C → D be full and faithful, i.e., bijective on hom sets.

To prove that F is essentially injective, suppose that α ∶ c1 ↔ c2 ∶ β is an isomorphism in C and apply F to obtain arrows F (α) ∶ F (c1) ⇄ F (c2) ∶ F (β) in D. Then by the functoriality of F we have

F(α) ○ F(β) = F(α ○ β) = F(idc2) = idF(c2),   F(β) ○ F(α) = F(β ○ α) = F(idc1) = idF(c1)

which implies that F (α) ∶ F (c1) ↔ F (c2) ∶ F (β) is an isomorphism in D. Conversely, suppose that α′ ∶ F (c1) ↔ F (c2) ∶ β′ is an isomorphism in D. By the fullness of F there exist arrows α ∶ c1 ⇄ c2 ∶ β such that F(α)=α′ and F(β)=β′, and by the functoriality of F we have

F(α ○ β) = F(α) ○ F(β) = α′ ○ β′ = idF(c2) = F(idc2),   F(β ○ α) = F(β) ○ F(α) = β′ ○ α′ = idF(c1) = F(idc1)

Then by the faithfulness of F we have α ○ β = idc2 and β ○ α = idc1, which implies that α ∶ c1 ↔ c2 ∶ β is an isomorphism in C.

To prove that F is essentially monic, let G1, G2 ∶ B → C be any functors and suppose that we have a natural isomorphism Φ ∶ G1 ⇒~ G2. This means that for each object b ∈ B we have an isomorphism Φb ∶ G1(b) → G2(b) in C and for each arrow β ∶ b1 → b2 in B we have a commutative square:

[commutative square]

Recall from the previous argument that any functor sends isomorphisms to isomorphisms, thus by the functoriality of F we obtain another commutative square:

[commutative square]

in which the horizontal arrows are isomorphisms in D. In other words, the assignment F (Φ)b ∶= F(Φb) defines a natural isomorphism F(Φ) ∶ F ○ G1 ⇒ F ○ G2

Conversely, suppose that we have a natural isomorphism Φ′ ∶ F ○ G1 ⇒~ F ○ G2, meaning that for each object b ∈ B we have an isomorphism Φ′b ∶ F(G1(b)) → F(G2(b)) in D, and for each arrow β ∶ b1 → b2 in B we have a commutative square:

[commutative square]

Since F is fully faithful, we know from the previous result that for each b ∈ B there exists an isomorphism Φb ∶ G1(b) →~ G2(b) in C with the property F(Φb) = Φ′b. Then by the functoriality of F and the commutativity of the above square we have

F(Φb2 ○ G1(β)) = F(Φb2) ○ F(G1(β))
= Φ′b2 ○ F(G1(β))
= F(G2(β)) ○ Φ′b1
= F(G2(β)) ○ F(Φb1)
= F(G2(β) ○ Φb1),

and by the faithfulness of F it follows that Φb2 ○ G1(β) = G2(β) ○ Φb1. We conclude that the following square commutes:

[commutative square]

In other words, the arrows Φb assemble into a natural isomorphism Φ ∶ G1 ⇒~ G2.

Lemma (The Yoneda Embeddings): Let C be a category and recall that for each object c ∈ C we have two hom functors

Hc = HomC(c,−) ∶ C → Set and Hc = HomC(−,c) ∶ Cop → Set

The mappings c ↦ Hc and c ↦ Hc define two embeddings of categories:

H(−) ∶ Cop → SetC and H(−) ∶ C → SetCop

We will prove that the first of these, H(−) ∶ Cop → SetC, is an embedding. The fact that the other is an embedding then follows by substituting Cop in place of C.

Proof:

Step 1: H(−) is a Functor. For each arrow γ ∶ c1 → c2 in Cop (i.e., for each arrow γ ∶ c2 → c1 in C) we must define a natural transformation H(−)(γ) ∶ H(−)(c1) ⇒ H(−)(c2), i.e., a natural transformation Hγ ∶ Hc1 ⇒ Hc2. And this means that for each object d ∈ C we must define an arrow (Hγ)d ∶ Hc1(d) → Hc2(d), i.e., a function (Hγ)d ∶ HomC(c1,d) → HomC(c2,d). Note that the only possible choice is to send each arrow α ∶ c1 → d to the arrow α ○ γ ∶ c2 → d. In other words, ∀ d ∈ C we define,

(Hγ)d ∶= (−) ○ γ

To check that this is indeed a natural transformation Hγ ∶ Hc1 ⇒ Hc2, consider any arrow δ ∶ d1 → d2 in C and observe that the following diagram commutes:

[commutative square]

Indeed, the commutativity of this square is just the associativity axiom for composition. Thus we have defined the action of H(−) on arrows in Cop. To see that this defines a functor Cop → SetC, we need to show that for any composable arrows γ1, γ2 ∈ Arr(C) we have Hγ1 ○ γ2 = Hγ2 ○ Hγ1. So consider any arrows γ1 ∶ c2 → c1 and γ2 ∶ c3 → c2. Then ∀ objects d ∈ C and for all arrows δ ∶ c1 → d we have

[Hγ2 ○ Hγ1]d(δ) = [(Hγ2)d ○ (Hγ1)d] (δ)

= (Hγ2)d [(Hγ1)d(δ)]

= (Hγ2)d (δ ○ γ1)

= (δ ○ γ1) ○ γ2

= δ ○ (γ1 ○ γ2)

= (Hγ1 ○ γ2)d(δ)

Since this holds ∀ δ ∈ Hc1(d) we have [Hγ2 ○ Hγ1]d = (Hγ1 ○ γ2)d, and then since this holds ∀ d ∈ C we conclude that Hγ1 ○ γ2 = Hγ2 ○ Hγ1 as desired.

Step 2:

H(−) is Faithful. For each pair of objects c1,c2 ∈ C we want to show that the function H(−) ∶ HomCop (c1, c2) → HomSetC (Hc1 , Hc2)

defined in part (1) is injective. So consider any two arrows α, β ∶ c2 → c1 in C and suppose that we have Hα = Hβ as natural transformations. In this case we want to show that α = β.

Recall that ∀ objects d ∈ C and all arrows δ ∈ Hc1(d) we have defined (Hα)d(δ) = δ ○ α. Since Hα = Hβ, this means that

δ ○ α = (Hα)d(δ) = (Hβ)d(δ) = δ ○ β. Now we just take d = c1 and δ = idc1 to obtain

α = (idc1 ○ α) = (idc1 ○ β) = β

as desired.
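Step 2 can be watched in action in a one-object category. Taking, for illustration, the symmetric group S3 as a category with a single object whose arrows are the permutations, distinct arrows do induce distinct natural transformations, and the separating instance is exactly δ = id, as in the proof:

```python
from itertools import permutations

# S3 viewed as a one-object category: the arrows are the six permutations
# of {0, 1, 2} and composition is composition of functions.
S3 = list(permutations(range(3)))

def comp(f, g):
    # (f o g)(x) = f(g(x)); permutations are stored as tuples
    return tuple(f[g[x]] for x in range(3))

def H(gamma):
    # The transformation H^gamma at the unique object: precomposition,
    # delta |-> delta o gamma, matching the definition (Hgamma)d := (-) o gamma
    return {delta: comp(delta, gamma) for delta in S3}

# Faithfulness: distinct arrows give distinct natural transformations ...
assert all(H(a) != H(b) for a in S3 for b in S3 if a != b)

# ... and the witness is evaluation at the identity arrow, as in Step 2.
identity = (0, 1, 2)
assert all(H(g)[identity] == g for g in S3)
```

Evaluating at the identity recovers γ itself, which is exactly the computation α = idc1 ○ α performed above.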

Step 3:

H(−) is Full. For each pair of objects c1, c2 ∈ C we want to show that the function

H(−) ∶ HomCop (c1, c2) → HomSetC (Hc1 , Hc2 )
is surjective. So consider any natural transformation Φ ∶ Hc1 ⇒ Hc2. In this case we want to find an arrow φ ∶ c2 → c1 with the property Hφ = Φ. Where can we find such an arrow? By definition of “natural transformation” we have a function Φd ∶ Hc1(d) → Hc2(d) for each object d ∈ C, and for each arrow δ ∶ d1 → d2 we know that the following square commutes:

[commutative square]

Note that the category C might have very few arrows. (Indeed, C might be a discrete category, i.e., one with only the identity arrows.) This suggests that our only possible choice is to evaluate the function Φc1 ∶ Hc1(c1) → Hc2(c1) at the identity arrow to obtain an arrow φ ∶= Φc1(idc1) ∈ Hc2(c1). Now hopefully we have Hφ = Φ (otherwise the theorem is not true). To check this, consider any object d ∈ C and any arrow δ ∶ c1 → d. Substituting this δ into the above diagram gives a commutative square:

[commutative square]

Then by following the arrow idc1 ∈ Hc1(c1) around the square in two different ways, and by using the definition (Hφ)d(δ) ∶= δ ○ φ from part (1), we obtain

Φd(δ ○ idc1) = δ ○ Φc1(idc1)
Φd(δ) = δ ○ φ
Φd(δ) = (Hφ)d(δ)

Since this holds for all arrows δ ∈ Hc1(d) we have Φd = (Hφ)d, and then since this holds for all objects d ∈ C we conclude that Φ = Hφ as desired.

Let’s pause to apply the Embedding Lemma to the Yoneda embedding H(−) ∶ Cop → SetC. The fact that H(−) is “essentially injective” means that for all objects c1, c2 ∈ C we have c1 ≅ c2 in C ⇐⇒ Hc1 ≅ Hc2 in SetC.

[Note that c1 ≅ c2 in C if and only if c1 ≅ c2 in Cop.] This useful fact is the starting point for many areas of modern mathematics. It tells us that if we know all the information about arrows pointing to (or from) an object c ∈ C, then we know the object up to isomorphism. In some sense this is a justification for the philosophy of category theory. The Embedding Lemma also implies that the Yoneda embedding is “essentially monic,” i.e., “left-cancellable up to natural isomorphism”. We will use this fact to prove the uniqueness of adjoints.

Uniqueness of Adjoints: Let L ∶ C ⇄ D ∶ R be an adjunction of categories. Then each of L and R determines the other up to natural isomorphism.

Proof: We will prove that R determines L. The other direction is similar. So suppose that L′ ∶ C ⇄ D ∶ R is another adjunction. Then we have two bijections

HomD(L(c),d) ≅ HomC(c,R(d)) ≅ HomD(L′(c),d)

that are natural in (c, d) ∈ Cop × D, and by composing them we obtain a bijection

HomD(L(c),d) ≅ HomD(L′(c),d)

that is natural in (c, d) ∈ Cop × D.

Naturality in d ∈ D means that for each c ∈ Cop we have a natural isomorphism of functors HomD(L(c),−) ≅ HomD(L′(c),−) in the category SetD.

Now let us compose the functor L ∶ Cop → Dop  with the Yoneda embedding H(−) ∶ Dop → SetD to obtain a functor (H(−) ○ L) ∶ Cop → SetD. Observe that if we apply the functor H(−) ○ L to an object c ∈ Cop then we obtain the functor

(H(−) ○ L)(c) = HomD(L(c),−) ∈ SetD

Thus, naturality in c ∈ Cop means exactly that we have a natural isomorphism of functors (H(−) ○ L) ≅ (H(−) ○ L′) in the category (SetD)Cop. Finally, since the “Yoneda embedding” H(−) is an embedding of categories, the Embedding Lemma tells us that we can cancel H(−) on the left to obtain a natural isomorphism:

(H(−) ○ L) ≅ (H(−) ○ L′) in (SetD)Cop ⇒ L ≅ L′ in (Dop)Cop

In other words, we have L ≅ L′ in DC.

Representation as a Meaningful Philosophical Quandary


The deliberation on representation indeed becomes a meaningful quandary if most of its shortcomings are to be overcome without actually accepting the way they permeate scientific and philosophical discourse. The problem is more ideological than one could have imagined, since it is only within the space of this quandary that one can assume success in overthrowing it. Unless the classical theory of representation that guides expert systems is accepted as existing, there is no way to dislodge the relationship of symbols and meanings that builds up such systems, lest the Scylla of a metaphysically strong notion of meaningful representation as natural, or the Charybdis of an external designer, should gobble us up. If one somehow escapes these maliciously aporetic entities, representation as a metaphysical monster still stands to block our progress. Is it really viable, then, to think of machines that can survive this representational foe, a foe that gets no aid from the clusters of internal mechanisms? The answer is very much in the affirmative, provided the notion of such a non-representational system as continuous and homogeneous is done away with, and in its place are put functional units that are no longer representational, for they derive their efficiency and legitimacy through autopoiesis. What is required is to consider this notional representational critique of distributed systems against the objectivity of science, since objectivity as a property of science carries an intrinsic claim of independence from the subject who studies the discipline. Kuhn had philosophical problems with this precise way of treating science as an objective discipline. For Kuhn, scientists operate under or within paradigms, thus obligating hierarchical structures.
Such hierarchical structures ensure the position of scientists to voice their authority on matters of dispute, and when there is a crisis within, or for, the paradigm, scientists do not outrightly reject the paradigm to begin with, but try their best at resolving it. Where resolution becomes a difficult task, an outright rejection of the paradigm follows, effecting what is commonly called a paradigm shift. If such were the case, the objective tag for science obviously takes a hit, and Kuhn argues in favor of a shift in the social order that science undergoes, signifying the subjective element. Importantly, these paradigm shifts occur to benefit scientific progress and, in almost all cases, occur non-linearly. Such a view no doubt slides Kuhn into a position of relativism, and this has been the main point of attack on shifting paradigms. At the forefront of the attacks has been Michael Polanyi and his supporters, whose work on the epistemology of science has much the same ingredients but was eventually deprived of fame; Kuhn was even charged with plagiarism. The commonality of their arguments can be measured by a dissenting voice against objectivity in science. Polanyi thought of it as a false ideal, since for him the epistemological claims that defined science were based more on personal judgments, and therefore susceptible to fallibilism. The objective nature of science, which obligates scientists to see things as they really are, is thus dislodged by this principle of subjectivity. But if science were to be seen as objective, then human subjectivity would indeed create a rupture insofar as a purified version of scientific objectivity is sought. The subject, or observer, undergoes what is termed the “observer effect”: the change that an act of observation makes upon the phenomenon being observed.
This effect is as good as ubiquitous across the domains of science and technology, ranging from the Heisenbug(1) in computing, via particle physics and thermodynamics, to quantum mechanics. The quantum-mechanical observer effect is quite perplexing, and results from a phenomenon called “superposition”, which signifies existence in all possible states at once. Superposition owes its popular credit to Schrödinger’s cat thought experiment, which entails a cat that is neither dead nor alive until observed. This has led physicists to take the acts of “observation” and “measurement” into account in order to comprehend the paradox in question, and thereby resolve it. But there is still a minority of quantum physicists who vouch for the supremacy of an observer, despite the quantum entanglement effects that go some way toward explaining the impacts of “observation” and “measurement”.(2) Such a standpoint is reflected in Derrida (9-10) as well, when he says (I quote him in full),

The modern dominance of the principle of reason had to go hand in hand with the interpretation of the essence of beings as objects, and object present as representation (Vorstellung), an object placed and positioned before a subject. This latter, a man who says ‘I’, an ego certain of itself, thus ensures his own technical mastery over the totality of what is. The ‘re-‘ of repraesentation also expresses the movement that accounts for – ‘renders reason to’ – a thing whose presence is encountered by rendering it present, by bringing it to the subject of representation, to the knowing self.

If Derridean deconstruction is to work on science and theory, the only way out is to relinquish the boundaries that define or divide the two disciplines. Moreover, if any looseness is encountered in objectivity, the ramifications are felt straight at the level of scientific activity; even theory does not remain immune to these consequences. Importantly, as scientific objectivity starts to wane, the corresponding philosophical luxury of avoiding the contingent wanes with it. Such a loss of representation, congruent with a certain theory of meaning we live by, has serious ethical-political implications.

(1) Heisenbug is a pun on Heisenberg’s uncertainty principle, and names a bug in computing that is characterized by disappearing when an attempt is made to study it. One common example is a bug that occurs in a program compiled with an optimizing compiler, but not in the same program compiled without optimization (e.g., for generating a debug-mode version). Another example is a bug caused by a race condition. A heisenbug may also appear in a system that does not conform to the command-query separation design guideline, since a routine called more than once could return different values each time, generating hard-to-reproduce bugs in a race-condition scenario. One common reason for heisenbug-like behaviour is that executing a program in debug mode often cleans memory before the program starts, and forces variables onto stack locations instead of keeping them in registers. These differences in execution can alter the effect of bugs involving out-of-bounds member access, incorrect assumptions about the initial contents of memory, or floating-point comparisons (for instance, when a floating-point variable in a 32-bit stack location is compared to one in an 80-bit register). Another reason is that debuggers commonly provide watches or other user interfaces that cause additional code (such as property accessors) to be executed, which can, in turn, change the state of the program. Yet another reason is a fandango on core, the effect of a pointer running out of bounds. In C++, many heisenbugs are caused by uninitialized variables. A similar pun-intended bug encountered in computing is the schrödinbug: a bug that manifests only after someone reading the source code or using the program in an unusual way notices that it never should have worked in the first place, at which point the program promptly stops working for everybody until fixed.
The Jargon File adds: “Though… this sounds impossible, it happens; some programs have harbored latent schrödinbugs for years.”
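As a toy illustration of observation perturbing the observed, here is a Python fragment (the Sensor class and its behavior are invented for this sketch, not drawn from any real system) in which the mere act of inspecting an object for debugging consumes the very state under investigation:

```python
class Sensor:
    """A buffer of readings; read() pops them one at a time."""

    def __init__(self, readings):
        self._buf = list(readings)

    def read(self):
        return self._buf.pop(0) if self._buf else None

    def __repr__(self):
        # A careless debug repr: it "observes" the sensor by draining its
        # buffer, so merely printing the object changes what read() returns.
        drained = [self.read() for _ in range(len(self._buf))]
        return f"Sensor(pending={drained})"


s1 = Sensor([1, 2, 3])
first = s1.read()            # behaves normally: returns 1

s2 = Sensor([1, 2, 3])
repr(s2)                     # "observe" it, e.g. via a debugger watch window
first_observed = s2.read()   # the observation has emptied the buffer: None
```

This is exactly the watch-window failure mode described above: the debugging interface executes extra code with side effects, and the bug's behavior depends on whether anyone is looking.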

(2) There is a related issue in quantum mechanics relating to whether systems have pre-existing – prior to measurement, that is – properties corresponding to all measurements that could possibly be made on them. The assumption that they do is often referred to as “realism” in the literature, although it has been argued that the word “realism” is being used in a more restricted sense than philosophical realism. A recent experiment in the realm of quantum physics has been quoted as meaning that we have to “say goodbye” to realism, although the author of the paper states only that “we would [..] have to give up certain intuitive features of realism”. These experiments demonstrate a puzzling relationship between the act of measurement and the system being measured, although it is clear from experiment that an “observer” consisting of a single electron is sufficient – the observer need not be a conscious observer. Also, note that Bell’s Theorem suggests strongly that the idea that the state of a system exists independently of its observer may be false. Note that the special role given to observation (the claim that it affects the system being observed, regardless of the specific method used for observation) is a defining feature of the Copenhagen Interpretation of quantum mechanics. Other interpretations resolve the apparent paradoxes from experimental results in other ways. For instance, the Many-Worlds Interpretation posits the existence of multiple universes in which an observed system displays all possible states to all possible observers. In this model, observation of a system does not change the behavior of the system – it simply answers the question of which universe(s) the observer(s) is (are) located in: in some universes the observer would observe one result from one state of the system, and in others the observer would observe a different result from a different state of the system.
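The superposition-and-collapse story admits a minimal numerical sketch (a standard textbook toy model, assumed here for illustration and not tied to any experiment cited above): a qubit is a pair of amplitudes, and measurement picks a basis state with Born-rule probability, collapsing the state onto it.

```python
import math
import random


def qubit(a0, a1):
    """Normalize amplitudes (a0, a1) into a valid single-qubit state."""
    n = math.hypot(abs(a0), abs(a1))
    return (a0 / n, a1 / n)


def measure(state, rng):
    """Born rule: outcome k with probability |a_k|^2; the state collapses."""
    a0, a1 = state
    outcome = 0 if rng.random() < abs(a0) ** 2 else 1
    collapsed = (1.0, 0.0) if outcome == 0 else (0.0, 1.0)
    return outcome, collapsed


rng = random.Random(0)
state = qubit(1, 1)                    # equal superposition of |0> and |1>
outcome, state = measure(state, rng)   # observation forces a definite value
# Once collapsed, further measurements agree: the cat stays dead or alive.
again, state = measure(state, rng)
assert again == outcome
```

The point of the sketch is the asymmetry: before the first measurement the outcome is genuinely probabilistic, while every subsequent measurement merely confirms the collapsed state.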

Permeability of Autopoietic Principles (revisited) During Cognitive Development of the Brain

Distinctions and binaries have their problematics, and neural networks are no different when one such attempt is made regarding the information that flows from the outside to the inside, where interactions occur. The inside of the system has to cope with the outside through mechanisms that are either predefined for the system under consideration, or that have no independent internal structure at all to begin with. The former results in loss of adaptability, since all possible eventualities would have to be catered for in the fixed internal structure of the system. The latter is guided entirely by conditions prevailing in the environment. In either case, learning to cope with environmental conditions is the key to the system’s reaching any kind of stability. But how would a system respond to its environment? According to the ideas propounded by Changeux et al., this is possible in two ways, viz.,

  1. An instructive mechanism, directly imposed by the environment on the system’s structure; and
  2. A selective mechanism, Darwinian in its import, that helps maintain order as a result of interactions between the system and the environment. The environment facilitates reinforcement, stabilization and development of the structure, without in any way determining it.
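Read as weight-update rules for a single linear unit, the two mechanisms can be sketched as follows (the delta rule and Hebb’s rule are standard stand-ins assumed here for illustration, not drawn from Changeux):

```python
# Instructive (supervised): the environment imposes a target via an error term.
def delta_rule(w, x, target, lr=0.1):
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + lr * (target - y) * xi for wi, xi in zip(w, x)]


# Selective (unsupervised): only co-activity is reinforced; no target ever
# determines the structure, the environment merely stabilizes it.
def hebb_rule(w, x, lr=0.1):
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + lr * y * xi for wi, xi in zip(w, x)]


w = [0.0, 0.5]
for _ in range(50):
    w = delta_rule(w, [1.0, 1.0], target=1.0)
# w now produces an output close to the imposed target of 1.0
```

The instructive rule drives the weights toward a state dictated from outside; the selective rule only amplifies whatever correlations the input happens to contain.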

These two distinct ways, when exported to neural networks, take on the connotations of supervised and unsupervised learning. The position of Changeux et al. is rooted in rule-based, formal and representational formats, and is thus criticized by the likes of Edelman. According to him, in an information-processing model of the nervous system (his analyses are based upon nervous systems), neural signals are taken in from the periphery and thereafter encoded in various ways, to be subsequently transformed and retransformed during processing into an output. This not only puts extreme emphasis on formal rules, but also makes a claim about the nature of memory: that it occurs through the representation of events, through the recording or replication of their informational details. Although Edelman’s analysis takes the nervous system as its centerpiece, the informational modeling approach he takes to task is blanketed over the ontological basis that forms the fabric of the universe. Connectionists have no truck with this approach, as can easily be discerned from a long quote Edelman provides:

The notion of information processing tends to put a strong emphasis on the ability of the central nervous system to calculate the relevant invariance of a physical world. This view culminates in discussions of algorithms and computations, on the assumption that brain computes in an algorithmic manner…Categories of natural objects in the physical world are implicitly assumed to fall into defined classes or typologies that are accessible to a program. Pushing the notion even further, proponents of certain versions of this model are disposed to consider that the rules and representation (Chomsky) that appear to emerge in the realization of syntactical structures and higher semantic functions of language arise from corresponding structures at the neural level.

Edelman is aware of the shortcomings of information-processing models, and therefore takes a leap into the connectionist fold with his proposal of a brain consisting of a large number of undifferentiated but connected neurons. At the same time, he gives a lot of credence to the organization occurring during the developmental phases of the brain. He lays out the following principles of this population thinking in his Neural Darwinism: The Theory of Neuronal Group Selection:

  1. The homogeneous, undifferentiated population of neurons is epigenetically diversified into structurally variant groups through a number of selective processes, forming the “primary repertoire”.
  2. Connections among the groups are modified due to signals received during interactions between the system and the environment housing it. Such modifications, occurring during the post-natal period, become functionally active for future use and form the “secondary repertoire”.
  3. With the “primary” and “secondary” repertoires in place, groups engage in interactions by means of feedback loops arising from various sensory/motor responses, enabling the brain to interpret conditions in its environment and act upon them.
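The three principles can be rendered as a toy simulation (every number and name below is invented for illustration): diversify a population of groups, let environmental interaction modify strengths rather than wiring, and eliminate the weakly reinforced.

```python
import random

rng = random.Random(1)
signal = [1.0, -1.0, 1.0, -1.0]        # a recurring environmental pattern

# 1. Primary repertoire: structurally variant groups (random response profiles).
groups = [{"w": [rng.uniform(-1, 1) for _ in signal], "strength": 1.0}
          for _ in range(20)]


def response(g):
    """How strongly a group's activity correlates with the signal."""
    return sum(wi * si for wi, si in zip(g["w"], signal))


# 2. Secondary repertoire: interaction modifies connection strengths only.
for _ in range(30):
    for g in groups:
        if response(g) > 0:
            g["strength"] *= 1.1       # reinforced by correlated activity
        else:
            g["strength"] *= 0.9       # rarely reinforced: the trace fades

# 3. Selection: weakly interacting groups are eliminated; the rest persist.
survivors = [g for g in groups if g["strength"] > 1.0]
```

Nothing instructs any group what to become; the environment only biases which of the pre-existing variants persist, which is the selective (rather than instructive) mechanism at work.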

“Degenerate” is what Edelman calls the neural groups in the primary repertoire to begin with. This entails the possibility of a significant number of non-identical variant groups. There is another dimension to this as well, in that the non-identical variant groups are distributed uniformly across the system. Within Edelman’s nervous-system case study, degeneracy and distributedness are crucial features for denying the localization of cortical functions on the one hand, and the existence of hierarchical processing structures in a narrow sense on the other. Edelman’s cortical map formation incorporates the generic principles of autopoiesis. Cortical maps are collections (areas) of minicolumns in the brain cortex that have been identified as performing a specific information-processing function. Schematically, it looks like this:

[figure: schematic of a cortical map as an array of minicolumns]

In Edelman’s theory, neural groups have an optimum size that is not known a priori, but develops spontaneously and dynamically. Within the cortex, this is achieved by means of inhibitory connections spread over a horizontal plane, while excitatory ones are laid out vertically, thus enabling neuronal activity to be concentrated on the vertical plane rather than the horizontal one. Hebb’s rule governs the utility of a group: impulses carried to neural groups activate them and subsequently alter synaptic strengths. In the ensuing process, a correlation forms between neural groups, with possible overlapping of messages as a result of the synaptic activity generated within each group. This correlational activity can be selected for through frequent exposure to such overlaps, and once selected, a group might start to exhibit its activity even in the absence of inputs or impulses. This selection is nothing but memory, and is always used in learning procedures. A lot depends upon the frequency of exposure: if it is on the lower scale, the memory, or selection, can simply fade away and be made available for a different procedure. No wonder forgetting is always referred to as a precondition for memory. Fading away might be a useful criterion for reusing freed memory storage during the developmental process, but at the stage when groups of the right size are in place and ready for selection, weakly interacting groups meet the fate of elimination. Elimination and retention of groups depend upon what Edelman refers to as the vitality principle, wherein sensitivity to historical process finds more legitimacy, and extant groups find takers in influencing the formation of new groups.
The reason for including Edelman’s case was specifically to highlight the permeability of self-organizing principles during the cognitive development of the brain, and also to pit neural-network/connectionist models, in their comprehension of brain development, against the traditional rule-based expert and formal modeling techniques.

In order to understand the nexus between brain development and environment, it is safe to carry Edelman’s analysis further. It is a commonsense belief to link structural changes in the brain with environmental effects. Even if one takes recourse to Darwinian evolution, these changes are either delayed due to systemic resistance to letting the effects take over, or, in a not-so-Darwinian fashion, the effects are a compounded resultant of the embedded groups within the network. On the other hand, Edelman’s cortical map formation is not confined to processes occurring within the brain’s structure alone, but is also realized by how the brain explores its environment. This aspect is nothing but motor behavior, in its nexus between brain and environment, and is strongly voiced by Cilliers, when he calls attention to,

The role of active motor behavior forms the first half of the argument against abstract, solipsistic intelligence. The second half concerns the role of communication. The importance of communication, especially the use of symbol systems (language), does not return us to the paradigm of objective information-processing. Structures for communication remain embedded in a neural structure, and therefore will always be subjected to the complexities of network interaction. Our existence is both embodied and contingent.

Edelman is criticized for showing no respect for replication in his theory, even though replication is a strong pillar of natural selection and learning. Recently, attempts to incorporate replication in the brain have been undertaken, and strong indicators of neuronal replicators, with Hebb’s learning mechanism showing more promise than natural selection, are in the limelight (Fernando, Goldstein and Szathmáry). These autopoietic systems, when given a mathematical description and treatment, could be modeled on a computer or digital system, thus helping give insights into a world pregnant with complexity.

Autopoiesis goes directly to the heart of anti-foundationalism, because the epistemological basis of basic beliefs is not paid any due respect or justificatory support in the autopoietic system’s insistence on internal interactions and external contingent factors obligating the system to undergo continuous transformations. If autopoiesis can survive wonderfully well without any transcendental intervention or a priori definition, it has parallels running within French theory. If anti-foundationalism is the hallmark of autopoiesis, so is anti-reductionism, since it is well-nigh impossible to have meaning explicated in terms of atomistic units, especially when the systems are already anti-foundationalist. Even in biologically contextual terms, a mereology, according to Garfinkel, is emergent as a result of the complex interactions that go on within the autopoietic system. Garfinkel says,

We have seen that modeling aggregation requires us to transcend the level of the individual cells to describe the system by holistic variables. But in classical reductionism, the behavior of holistic entities must ultimately be explained by reference to the nature of their constituents, because those entities ‘are just’ collections of the lower-level objects with their interactions. Although, it may be true in some sense that systems are just collections of their elements, it does not follow that we can explain the system’s behavior by reference to its parts, together with a theory of their connections. In particular, in dealing with systems of large numbers of similar components, we must make recourse to holistic concepts that refer to the behavior of the system as a whole. We have seen here, for example, concepts such as entrainment, global attractors, waves of aggregation, and so on. Although these system properties must ultimately be definable in terms of the states of individuals, this fact does not make them ‘fictions’; they are causally efficacious (hence, real) and have definite causal relationships with other system variables and even to the states of the individuals.
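Garfinkel’s holistic variables, “entrainment” among them, can be watched emerging in a minimal simulation (a Kuramoto-style sketch with invented parameters, offered only as an illustration and not drawn from Garfinkel): coupled oscillators with different natural frequencies pull one another into step, and the degree of synchrony is a property of the whole population, not of any single oscillator.

```python
import math
import random

rng = random.Random(2)
N, K, dt = 30, 2.0, 0.05
phases = [rng.uniform(0, 2 * math.pi) for _ in range(N)]
freqs = [rng.gauss(1.0, 0.1) for _ in range(N)]  # varied natural frequencies


def order_parameter(ps):
    """A holistic variable: 1.0 = perfect entrainment, near 0 = disorder."""
    re = sum(math.cos(p) for p in ps) / len(ps)
    im = sum(math.sin(p) for p in ps) / len(ps)
    return math.hypot(re, im)


r_start = order_parameter(phases)
for _ in range(400):
    # Mean-field Kuramoto update: each oscillator is pulled by all the others.
    pull = [sum(math.sin(q - p) for q in phases) / N for p in phases]
    phases = [p + dt * (w + K * m) for p, w, m in zip(phases, freqs, pull)]
r_end = order_parameter(phases)
```

The order parameter is causally efficacious in exactly Garfinkel’s sense: it is defined over the states of the individuals, yet no individual oscillator “contains” the synchrony it measures.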

Autopoiesis gains vitality when systems thinking opens up avenues for accepting contradictions and opposites, rather than merely trying to get rid of them. Vitality is centered around a conflict, and ideally comes into balanced existence when such conflict, or strife, helps facilitate consensus-building or cooperation. If such goals are achieved, the analysis of complexity theory gets a boost; moreover, by being sensitive to autopoiesis, an appreciation of the real Lebenswelt gets underlined. Memory† and history are essential for complex autopoietic systems, whether biological and/or social, and this can be fully comprehended in quite routine situations: systems that are identical in most respects, but differ in their histories, will have different trajectories in responding to the situations they face. Memory does not determine the final description of the system, since it is itself susceptible to transformations; what really gets passed on are traces, and the same susceptibility to transformations applies to the traces as well. But memory is not stored in the brain as discrete units; rather, it is stored in a distributed pattern, and this is the pivotal characteristic of self-organizing complex systems over any other form of iconic representation. This property of transformation associated with autopoietic systems is enough to suspend the process between activity and passivity, in that the former is determination by the environment and the latter is impact on the environment. This is really important in autopoiesis, since the distinction between inside and outside, active and passive, is difficult to discern; moreover, this disappearance of distinction is a sufficient case to vouch against any authoritative control residing within the system and/or emanating from any single source.
Autopoiesis scores over other representational modeling techniques by its ability to self-reflect, that is, by the system’s ability to act upon itself. For Lawson, reflexivity disallows any static description of the system, since it is not possible to intercept the reflexive moment, and it also disallows a complete description of the system at a meta-level. Even though a meta-level description can be construed, it consists only of frozen frames or snapshots of the system at given instances, and hence ignores the temporal dimension the system undergoes. For that to be taken into account, and for the complexity within the system to be measured, the roles of activity and passivity cannot be ignored at any cost, despite the great difficulties they present for modeling. But is this not really a blessing in disguise, for should the model of a complex system not be retentive of the complexity in the real world? The answer is yes, it is.
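The claim that memory is kept as a distributed pattern rather than in discrete units can be illustrated with a Hopfield-style toy network (a standard textbook construction, assumed here purely for illustration; sizes and patterns are invented): a stored pattern resides in the whole weight matrix, and a corrupted cue is restored by collective dynamics, with no single “memory cell” to point at.

```python
# Store one pattern in a Hopfield-style network via the Hebbian outer product.
pattern = [1, -1, 1, 1, -1, -1, 1, -1]
N = len(pattern)

# The trace is smeared over all N*N weights; no single unit holds "the" memory.
W = [[(pattern[i] * pattern[j]) / N if i != j else 0.0 for j in range(N)]
     for i in range(N)]


def recall(state, steps=5):
    """Synchronous sign updates: each unit follows the field of all the others."""
    for _ in range(steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(N)) >= 0 else -1
                 for i in range(N)]
    return state


corrupted = list(pattern)
corrupted[0], corrupted[3] = -corrupted[0], -corrupted[3]   # a degraded trace
restored = recall(corrupted)
```

Deleting any single weight degrades recall only gracefully, which is the distributed-storage point: the trace, not a location, carries the memory.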

Somehow, the discussion till now still smells of anarchy within autopoiesis, and if there is no satisfactory account of predictability and stability within the self-organizing system, the fears only get aggravated. A system which undergoes huge effects when small changes or alterations are made in the causes is definitely not a candidate for stability, and autopoietic systems are precisely such. Does this mean that they are unstable, or does it call for a reworking of the notion of stability? This is philosophically contentious, and there is no doubt about that. Instability could be a result of probabilities, but complex systems have to fall outside the realm of such probabilities: what happens in complex systems is a result of complex interactions among a large number of factors that need not be logically compatible. At the same time, stochasticity has no room here, for it serves as an escape route from the annals of classical determinism, and a theory based on such escape routes could never be a theory of self-organization (Pattee). Stability is closely related to the ability to predict, and if stability is something very different from what classical determinism tells us it is, the case for predictability should be no different. The problems with prediction are gross, as echoed in the words of Krohn and Küppers,

In the case of these ‘complex systems’ (Nicolis and Prigogine), or ‘non-trivial’ machines, a functional analysis of input-output correlations must be supplemented by the study of ‘mechanisms’, i.e. by causal analysis. Due to the operational conditions of complex systems it is almost impossible to make sense of the output (in terms of the functions or expected effects) without taking into account the mechanisms by which it is produced. The output of the system follows the ‘history’ of the system, which itself depends on its previous output taken as input (operational closure). The system’s development is determined by its mechanisms, but cannot be predicted, because no reliable rule can be found in the output itself. Even more complicated are systems in which the working mechanisms themselves can develop according to recursive operations (learning of learning; invention of inventions, etc.).

The quote above is clearly indicative of the predicaments faced while attempting to provide explanations of predictability. Although it is quite difficult to get rid of these predicaments, attempts to mitigate them, so as to keep noise from distorting or disturbing the stability and predictability of the systems, are always in the pipeline. One such attempt lies in collating or mapping constraints onto a real epistemological fold of history and environment, and thereafter applying this to studies of the social and the political. This is voiced very strongly, as a parallel metaphoric, in Luhmann, when he draws attention to,

Autopoietic systems, then, are not only self organizing systems. Not only do they produce and eventually change their own structures but their self-reference applies to the production of other components as well. This is the decisive conceptual innovation. It adds a turbo charger to the already powerful engine of self-referential machines. Even elements, that is, last components (individuals), which are, at least for the system itself, undecomposable, are produced by the system itself. Thus, everything which is used as a unit by the system is produced as a unit by the system itself. This applies to elements, processes, boundaries and other structures, and last but not least to the unity of the system itself. Autopoietic systems, of course, exist within an environment. They cannot exist on their own. But there is no input and no output of unity.

What this entails for social systems is that they are autopoietically closed: while they rely on resources from their environment, those resources do not become part of the systematic operation. So the system never tries its luck at adjusting to changes brought about superficially, thereby frittering away its available resources, instead of attending to trends that are not superficial. Were a system ever to attempt such a fall from grace, making acclimatizations to these fluctuations, a choice that is at once ethical and contextual is resorted to. Within distributed systems as such, a central authority is paid no heed, since such a scenario could result in general degeneracy of the system as a whole. Instead, what gets highlighted is the ethical choice of decentralization, to ensure the system’s survivability and dynamism. Such an ethical treatment is no less altruistic.