Conjuncted: Indiscernibles – Philosophical Constructibility. Thought of the Day 48.1


Conjuncted here.

“Thought is nothing other than the desire to finish with the exorbitant excess of the state” (Being and Event). Since Cantor’s theorem implies that this excess cannot be removed or reduced to the situation itself, the only way left is to take control of it. A basic, paradigmatic strategy for achieving this goal is to subject the excess to the power of language. Its essence has been expressed by Leibniz in the form of the principle of indiscernibles: there cannot exist two things whose difference cannot be marked by a describable property. In this manner, language assumes the role of a “law of being”, postulating identity where it cannot find a difference. Meanwhile – according to Badiou – the generic truth is indiscernible: there is no property expressible in the language of set theory that characterizes elements of the generic set. Truth is beyond the power of knowledge; only the subject can support a procedure of fidelity by deciding what belongs to a truth. This key thesis is established using purely formal means, so it should be regarded as one of the peak moments of the mathematical method employed by Badiou.

Badiou composes the indiscernible out of as many as three different mathematical notions. First of all, he decides that it corresponds to the concept of the inconstructible. Later, however, he writes that “a set δ is discernible (…) if there exists (…) an explicit formula λ(x) (…) such that ‘belong to δ’ and ‘have the property expressed by λ(x)’ coincide”. Finally, at the outset of the argument designed to demonstrate the indiscernibility of truth he brings in yet another definition: “let us suppose the contrary: the discernibility of G. A formula thus exists λ(x, a1,…, an) with parameters a1,…, an belonging to M[G] such that for an inhabitant of M[G] it defines the multiple G”. In short, discernibility is understood as:

  1. constructibility
  2. definability by a formula F(y) with one free variable and no parameters. In this approach, a set a is definable if there exists a formula F(y) such that b is an element of a if and only if F(b) holds.
  3. definability by a formula F(y, z1,…, zn) with parameters. This time, a set a is definable if there exists a formula F(y, z1,…, zn) and sets a1,…, an such that after substituting z1 = a1,…, zn = an, an element b belongs to a iff F(b, a1,…, an) holds.

Even though in “Being and Event” Badiou does not explain the reasons for this variation, it clearly follows from his other writings (Alain Badiou Conditions) that he is convinced that these notions are equivalent. It should be emphasized, then, that this is not true: a set may be discernible in one sense, but indiscernible in another. First of all, the last definition has probably been included by mistake, because it is trivial. Every set in M[G] is discernible in this sense, because for every set a the formula F(y, x), defined as ‘y belongs to x’, defines a after substituting x = a. Accepting this version of indiscernibility would lead to the conclusion that truth is always discernible, while Badiou claims that it is not so.
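Restated in standard notation (a sketch of the observation above, not Badiou’s own formulation): take the formula F(y, z) to be simply ‘y ∈ z’; then for any set a in M[G], substituting the parameter z = a gives

```latex
\forall b \;\bigl(\, b \in a \;\Longleftrightarrow\; F(b, a) \,\bigr),
\qquad \text{where } F(y, z) \;\equiv\; y \in z ,
```

so every set of M[G] – the generic set G included – is discernible in sense (3), which is why that reading would make truth trivially discernible.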

Is it not possible to choose the second option and identify discernibility with definability by a formula with no parameters? After all, this notion is the most similar to Leibniz’s original idea: intuitively, the formula F(y) expresses a property characterizing elements of the set defined by it. Unfortunately, this solution does not warrant indiscernibility of the generic set either. As a matter of fact, assuming that in ontology, that is, in set theory, discernibility corresponds to constructibility, Badiou is right that the generic set is necessarily indiscernible. However, constructibility is a highly technical notion, and its philosophical interpretation seems very problematic. Let us take a closer look at it.

The class of constructible sets – usually denoted by the letter L – forms a hierarchy indexed or numbered by ordinal numbers. The lowest level L0 is simply the empty set. Assuming that some level – let us denote it by Lα – has already been constructed, the next level Lα+1 is constructed by choosing all subsets of Lα that can be defined by a formula (possibly with parameters) bounded to the lower level Lα.

Bounding a formula to Lα means that its parameters must belong to Lα and that its quantifiers are restricted to elements of Lα. For instance, the formula ‘there exists z such that z is in y’ simply says that y is not empty. After bounding it to Lα this formula takes the form ‘there exists z in Lα such that z is in y’, so it says that y is not empty, and some element from Lα witnesses it. Accordingly, the set defined by it consists of precisely those sets in Lα that contain an element from Lα.

After constructing an infinite sequence of levels, the level directly above them all is simply the set of all elements constructed so far. For example, the first infinite level Lω consists of all elements constructed on levels L0, L1, L2,….
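In compressed form, the inductive definition just described reads (a standard presentation consistent with the text):

```latex
L_0 = \emptyset, \qquad
L_{\alpha+1} = \operatorname{Def}(L_\alpha), \qquad
L_\lambda = \bigcup_{\alpha < \lambda} L_\alpha \ \ (\lambda \text{ a limit ordinal}), \qquad
L = \bigcup_{\alpha} L_\alpha ,
```

where Def(Lα) collects exactly those subsets of Lα that are definable by a formula bounded to Lα, possibly with parameters from Lα.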

As a result of applying this inductive definition, on each level of the hierarchy all the formulas are used, so that two distinct sets may be defined by the same formula. On the other hand, only bounded formulas take part in the construction. The definition of constructibility offers too little and too much at the same time. This technical notion resembles the Leibnizian discernibility only in so far as it refers to formulas. In set theory there are more notions of this type though.

To realize the difficulties involved in attempts to philosophically interpret constructibility, one may consider a slight, purely technical, extension of it. Let us also accept sets that can be defined by a formula F(y, z1,…, zn) with constructible parameters, that is, parameters coming from L. Such a step does not lead further away from the common understanding of Leibniz’s principle than constructibility itself: if parameters coming from lower levels of the hierarchy are admissible when constructing a new set, why not admit others as well, especially since this condition has no philosophical justification?

Actually, one can accept parameters coming from an even more restricted class, e.g., the class of ordinal numbers. Then we will obtain the notion of definability from ordinal numbers. This minor modification of the concept of constructibility – a relaxation of the requirement that the procedure of construction has to be restricted to lower levels of the hierarchy – results in drastic consequences.

Conjuncted: Gross Domestic Product. Part 2.

Conjuncted here.

The topology of the World Trade network, which is encapsulated in its adjacency matrix aij(t) defined by

aij(t) ≡ 1 if fij(t) > 0,   aij(t) ≡ 0 if fij(t) = 0,

strongly depends on the GDP values wi. Indeed, the problem can be mapped onto the so-called fitness model, where it is assumed that the probability pij for a link from i to j is a function p(xi, xj) of the values of a fitness variable x assigned to each vertex and drawn from a given distribution. The importance of this model lies in the possibility of writing all the expected topological properties of the network (whose specification in principle requires the knowledge of the N² entries of its adjacency matrix) in terms of only N fitness values. Several topological properties, including the degree distribution, the degree correlations and the clustering hierarchy, are determined by the GDP distribution. Moreover, an additional understanding of the World Trade as a directed network comes from the study of its reciprocity, which represents the strong tendency of the network to form pairs of mutual links pointing in opposite directions between two vertices. In this case too, the observed reciprocity structure can be traced back to the GDP values.

The probability that at time t a link exists from i to j (aij = 1) is empirically found to be

pt[xi(t), xj(t)] = [α(t) xi(t) xj(t)]/[1 + β(t) xi(t) xj(t)]

where xi is the rescaled GDP and the parameters α(t) and β(t) can be fixed by imposing that the expected number of links

Lexp(t) = ∑i≠j pt [xi(t), xj(t)]

equals its empirical value

L(t) = ∑i≠j aij(t)

and that the expected number of reciprocated links

L↔exp(t) = ∑i≠j pt[xi(t), xj(t)] pt[xj(t), xi(t)]

equals its observed value

L↔(t) = ∑i≠j aij(t) aji(t)

This particular structure of the World Trade topology can be tested by comparing various expected topological properties with the empirical ones. For instance, we can compare the empirical and the theoretical plots of vertex degrees (at time t) versus their rescaled GDP xi(t). Note that since pt[xi(t), xj(t)] is symmetric under the exchange of i and j, at any given time the expected in-degree and the expected out-degree of a vertex i are equal. We denote both by kiexp, which can be expressed as

kiexp(t) = ∑j≠i pt[xi(t), xj(t)]
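As a rough numerical sketch of these formulas (not the original analysis; the GDP figures, function names and parameter values below are invented for illustration, and in practice α(t) and β(t) would be fixed by matching the observed L(t) and the reciprocated-link count as described above):

```python
import numpy as np

def link_probability(x, alpha, beta):
    """p_ij = alpha * x_i * x_j / (1 + beta * x_i * x_j); diagonal set to 0 (no self-loops).
    Keeping alpha <= beta guarantees p_ij <= 1."""
    outer = np.outer(x, x)
    p = alpha * outer / (1.0 + beta * outer)
    np.fill_diagonal(p, 0.0)
    return p

def expected_quantities(x, alpha, beta):
    """Expected number of links, of reciprocated links, and rescaled degrees."""
    p = link_probability(x, alpha, beta)
    L_exp = p.sum()                    # sum over i != j of p_ij
    L_recip_exp = (p * p.T).sum()      # sum over i != j of p_ij * p_ji
    k_exp = p.sum(axis=1)              # expected (in- = out-) degree of vertex i
    return L_exp, L_recip_exp, k_exp / (len(x) - 1.0)

# Toy usage with invented GDP figures, rescaled so that they sum to one:
gdp = np.array([2.1, 0.7, 0.4, 0.2, 0.1])
x = gdp / gdp.sum()
L_exp, L_recip_exp, k_tilde = expected_quantities(x, alpha=40.0, beta=50.0)
print(L_exp, L_recip_exp, k_tilde)
```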

Since the number of countries N(t) increases in time, we define the rescaled degrees

k̃i(t) ≡ ki(t)/[N(t) − 1]

that always represent the fraction of vertices which are connected to i (the term −1 comes from the fact that there are no self-loops in the network, hence the maximum degree is always N − 1). In this way, we can easily compare the data corresponding to different years and network sizes. The results are shown in the figure below for various snapshots of the system.


Figure: Plot of the rescaled degrees versus the rescaled GDP at four different years, and comparison with the expected trend. 

The empirical trends are in accordance with the expected ones. Then we can also compare the cumulative distribution Pexp>(k̃exp) of the expected degrees with the empirical degree distributions Pin>(k̃in) and Pout>(k̃out). The results are shown in the following figure and display a good agreement between the theoretical prediction and the observed behaviour.


Figure: Cumulative degree distributions of the World Trade topology for four different years and comparison with the expected trend. 

Note that the accordance with the predicted behaviour is extremely important since the expected quantities are computed by using only the N GDP values of all countries, with no information regarding the N² trade values. On the other hand, the empirical properties of the World Trade topology are extracted from trade data, with no knowledge of the GDP values. The agreement between the properties obtained by using these two independent sources of information is therefore surprising. This also shows that the World Trade topology crucially depends on the GDP distribution ρ(x).

Impasse to the Measure of Being. Thought of the Day 47.0


The power set p(x) of x – the state of the situation x or its metastructure (Alain Badiou – Being and Event) – is defined as the set of all subsets of x. Now, basic relations between sets can be expressed as the following relations between sets and their power sets. If for some x, every element of x is also a subset of x, then x is a subset of p(x), and x can be reduced to its power set. Conversely, if every subset of x is an element of x, then p(x) is a subset of x, and the power set p(x) can be reduced to x. Sets that satisfy the first condition are called transitive. For obvious reasons the empty set is transitive. However, the second relation never holds. The mathematician Georg Cantor proved that not only can p(x) never be a subset of x, but in some fundamental sense it is strictly larger than x. On the other hand, the axioms of set theory do not determine the extent of this difference. Badiou says that it is an “excess of being”, an excess that at the same time is its impasse.
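The core of Cantor’s argument can be stated in a few lines (a standard sketch, not Badiou’s exposition). For any function f from x to p(x), consider the “diagonal” subset

```latex
D \;=\; \{\, z \in x \;:\; z \notin f(z) \,\} \;\in\; p(x).
```

For every z in x we have z ∈ D exactly when z ∉ f(z), so D ≠ f(z) for every z; hence no function maps x onto p(x), and the cardinality of p(x) strictly exceeds that of x.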

In order to explain the mathematical sense of this statement, recall the notion of cardinality, which clarifies and generalizes the common understanding of quantity. We say that two sets x and y have the same cardinality if there exists a function defining a one-to-one correspondence between elements of x and elements of y. For finite sets, this definition agrees with common intuitions: if a finite set y has more elements than a finite set x, then regardless of how elements of x are assigned to elements of y, something will be left over in y precisely because it is larger. In particular, if y contains x and some other elements, then y does not have the same cardinality as x. This seemingly trivial fact is not always true outside of the domain of finite sets. To give a simple example, the set of all natural numbers contains the square numbers, that is, numbers of the form n², as well as some other numbers; yet the set of all natural numbers and the set of square numbers have the same cardinality. The correspondence witnessing this fact assigns to every number n a unique square number, namely n².

Counting finite sets has always been done via the natural numbers 0, 1, 2, . . . In set theory, the concept of such a canonical measure can be extended to infinite sets, using the notion of cardinal numbers. Without getting into the details of their definition, let us say that the series of cardinal numbers begins with the natural numbers, which are directly followed by the number ω0, that is, the size of the set of all natural numbers, then by ω1, the first uncountable cardinal number, etc. The hierarchy of cardinal numbers has the property that every set x, finite or infinite, has cardinality (i.e. size) equal to exactly one cardinal number κ. We say then that κ is the cardinality of x.

The cardinality of the power set p(x) is 2ⁿ for every finite set x of cardinality n. However, something quite paradoxical happens when infinite sets are considered. Even though Cantor’s theorem does state that the cardinality of p(x) is always larger than that of x – just as in the case of finite sets – the axioms of set theory never determine the exact cardinality of p(x). Moreover, one can formally prove that there exists no proof determining the cardinality of the power set of any given infinite set. There is a general method of building models of set theory, discovered by the mathematician Paul Cohen and called forcing, which yields models where – depending on the details of the construction – the cardinalities of infinite power sets take different values. Consequently, quantity – “a fetish of objectivity” as Badiou calls it – does not define a measure of being but leads to its impasse instead. It reveals an undetermined gap, where an event can occur – “that-which-is-not being-qua-being”.
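In symbols, and keeping the text’s notation ω0, ω1 for the first infinite cardinals (a standard statement of the independence phenomenon, with the axioms of set theory taken to be ZFC):

```latex
\omega_0 < 2^{\omega_0}, \qquad
\mathrm{ZFC} \nvdash \; 2^{\omega_0} = \omega_1
\quad\text{and}\quad
\mathrm{ZFC} \nvdash \; 2^{\omega_0} \neq \omega_1 .
```

Gödel’s constructible universe yields a model in which 2^ω0 = ω1, while Cohen’s forcing produces models in which 2^ω0 equals ω2, ω17, or a great many other values.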

Political Ideology Chart


It displays anarchism (lower end) and authoritarianism (higher end) as the extremes of a second, vertical axis serving as a social measure, while left–right is the horizontal axis, which is an economic measure.

Anarchism is about self-governance, having as little hierarchy as possible. As you go to the left, the means of production are distributed more equally; as you go to the right, individuals and corporations own more of the means of production and accumulate capital.

On the upper left you have an authoritarian state, distributing the means of production to the people as equally as possible; on the lower left you have the collectives, coming together voluntarily, utilizing their local means of production and sharing the products; on the lower right you have anarcho-capitalists, with no state, tax or public service, everything operated by private companies in a completely free and global market; and finally on the top right you have both a powerful state and powerful corporations (pretty much all the countries).

But after all, these terms change meanings through history and across cultures. Under unrestrained capitalism the accumulation of wealth creates monopolies and, more importantly, political influence, which in turn shapes state interference and civil liberties. It also aspires to infinite growth, which leads to the depletion of natural resources – another factor diminishing people’s quality of living. At that point it favors conservatism rather than progressive scientific thinking. Under collective anarchism, since it is localized, it is quite difficult to create global catastrophes, and this is why, in today’s world, the terms anarchism and capitalism seem to be opposites.

Carnap’s Topological Properties and Choice of Metric. Note Quote.

Husserl’s system is, ontologically, a traditional double hierarchy. There are regions or spheres of being, and perfectly traditional ones, except that (due to Kant’s “Copernican revolution”) the traditional order is reversed: after the new Urregion of pure consciousness come the region of nature, the psychological region, and finally a region (or perhaps many regions) of Geist. Each such region is based upon a single highest genus of concrete objects (“individua”), corresponding to the traditional highest genera of substances: in pure consciousness, for example, Erlebnisse; in nature, “things” (Dinge). But each region also contains a hierarchy of abstract genera – genera of singular abstracta and of what Husserl calls “categorial” or “syntactic” objects (classes and relations). This structure of “logical modifications,” found analogously in each region, is the concern of logic. In addition, however, to the “formal essence” which each object has by virtue of its position in the logical hierarchy, there are also truths of “material” (sachliche) essence, which apply to objects as members of some species or genus – ultimately, some region of being. Thus the special sciences, which are individuated (as in Aristotle) by the regions they study, are each broadly divided into two parts: a science of essence and a science of “matters of fact.” Finally, there are what might be called matters of metaphysical essence: necessary truths about objects which apply in virtue of their dependence on objects in prior regions, and ultimately within the Urregion of pure consciousness.

This ontological structure translates directly into an epistemological one, because all being in the posterior regions rests on positing Erlebnisse in the realm of pure consciousness, and in particular on originary (immediate) rational theoretical positings, i.e. “intuitions.” The various sciences are therefore based on various types of intuition. Sciences of matters of fact, on the one hand, correspond to the kinds of ordinary intuition, analogous to perception. Sciences of essence, on the other hand, and formal logic, correspond to (formal or material) “essential insight” (Wesensschau). Husserl equates formal- and material-essential insight, respectively, as sources of knowledge, to Kant’s analytic and synthetic a priori, whereas ordinary perceptual intuition, the source of knowledge about matters of fact, corresponds to the Kantian synthetic a posteriori. Phenomenology, finally, as the science of essence in the region of pure consciousness, has knowledge of the way beings in one region are dependent on those in another.

In Carnap’s doctoral thesis, Der Raum, he applies the above Husserlian apparatus to the problem of determining our sources of knowledge about space. Is our knowledge of space analytic, synthetic a priori, or empirical? Carnap answers, in effect: it depends on what you mean by “space.” His answer foreshadows much of his future thought, but is also based directly on Husserl’s remark about this question in Ideen I: that, whereas Euclidean manifold is a formal category (logical modification), our knowledge of geometry as it applies to physical objects is a knowledge of material essence within the region of nature. Der Raum is largely an expansion and explication of that one remark. Our knowledge of “formal space,” Carnap says, is analytic, i.e. derives from “formal ontology in Husserl’s sense,” but our knowledge of the “intuitive space” in which sensible objects are necessarily found is synthetic a priori, i.e. material-essential. There is one important innovation: Carnap claims that essential (a priori) knowledge of intuitive space extends only to its topological properties, whereas the full structure of physical space requires also a choice of metric. This latter choice is informed by the actual behavior of objects (e.g. measuring rods), and knowledge of physical space is thus in part a posteriori – as Carnap also says, a knowledge of “matters of fact.” But such considerations never force the choice of one metric or another: our knowledge of physical space also depends on “free positing”. This last point, which has no equivalent in Husserl, is important. Still more telling is that Carnap compares the choice involved here to a choice of language, although at this stage he sees this as a mere analogy. On the whole, however, the treatment of Der Raum is more or less orthodoxly Husserlian.

The Occultic Brotherhood


Millions upon millions of years ago in the darkness of prehistory, humanity was an infant, a child of Mother Nature, unawakened, dreamlike, wrapped in the cloak of mental somnolence. Recognition of egoity slept; instinctual consciousness alone was active. Like a stream of brilliance across the horizon of time, divine beings, manasaputras, sons of mind, descended among the sleeping humans, and with the flame of intellectual solar fire lighted the wick of latent mind, and lo! the thinker stirred. Self-consciousness wakened, and man became a dynamo of intellectual and emotional power: capable of love, of hate, of glory, of defeat. Having knowledge, he acquired power; acquiring power, he chose; choosing, he fashioned the fabric of his future; and the perception of this ran like wine through his veins.

Knowledge, more knowledge, and still greater knowledge was required by the maturing humans who looked with gratitude to the godlike beings who had come to awaken them. For many millennia they followed their guidance, as children lovingly follow the footsteps of their mother.

As the ages rolled by, a circulation of divine instructors succeeded these primeval manasaputras and personally supervised the progress of child-humanity: they initiated them in the arts and sciences, taught them to sow their fields with corn and wheat, instructed them in the ways of clean and moral living — in short, established primeval schools of training and instruction open and free to all to learn of things material, intellectual, and spiritual. At this early period there were no Mystery colleges: the ancient wisdom was the common heirloom of all mankind, for as yet there had been no abuse of knowledge, and hence no need for schools kept hid and sacred from the world. Truth was freely given and as freely accepted in that golden age. (H. P. Blavatsky Collected Writings)

The race was young; not all were adept in learning. Some through past experience in former world periods learned quickly and with ease, choosing intuitively the path of spiritual intellection; others, less awake, were good though wayward in progress; while a third class of humans, drugged with inertia, found learning and aspiring a burden and became laggards in the evolutionary procession. To them, spiritual apathy was preferable to spiritual exertion.

Mankind as a whole progressed rapidly in the acquisition of knowledge and its subsequent use. Some obviously wrought evil — others good. What had been latent spirituality now became active good and active evil. Suffering and pain became nature’s most merciful method of restoring the heart to its primeval instinct, that of spiritual choice. As mind developed keener potentialities and the struggle for mental supremacy overcame the spiritual, the gift of intellect became a double-edged weapon: on the one hand, the bringer of spiritual awareness and undreamed of intellectual ecstasy; and on the other, the wielder of a weapon of destruction, of horror and, in the worst cases, of deliberate spiritual wickedness — diabolism. As H. P. Blavatsky wrote:

The mysteries of Heaven and Earth, revealed to the Third Race by their celestial teachers in the days of their purity, became a great focus of light, the rays from which became necessarily weakened as they were diffused and shed upon an uncongenial, because too material soil. With the masses they degenerated into Sorcery, taking later on the shape of exoteric religions, of idolatry full of superstitions, . . . — The Secret Doctrine  

Nature is cyclical throughout: at one time fertile in spiritual things, at another barren. At this long-ago period of the third root-race, on the great continent of Lemuria, now submerged, the cycle was against spiritual progress. A great downward sweep was in force, when expansion of physical and material energies were accelerated with the consequent retardation and contraction of spiritual power. The humanities of that period were part of the general evolutionary current, and individuals reacted to the coarsening atmosphere according to their nature. Some resisted its downward influence through awakened spirituality; others, weaker in understanding, vacillated between spirit and matter, between good and evil: sometimes listening to the promptings of intuition, at other times submerged by the rushing waves of the downward current. Still others, in whom the spark of intellectual splendor burned low, plunged headlong downstream, unmindful of the turbulent and muddy waters.

As the downward cycle proceeded, knowledge of spiritual verities and living of the life in accordance with them became a dull and useless tool in human hearts and minds. Such folly was inevitable in the course of cosmic events, and all things were provided for. Just as there are many types of people — some spiritual, others material, some highly intelligent, others slow of thought — so are there various grades of beings throughout the universe, ranging from the mineral, through the vegetable, animal, and human kingdom, and beyond to the head and hierarch of our earth.

During these first millennia the spiritual head and guardian of the earth had been stimulating wherever possible the individual fires of active spirituality. Gradually as knowledge of divine things became abused by those strong in will but weak in morality, truth was increasingly veiled. The planetary watcher now felt the need of selecting a band of co-workers to act as bodyguard and protector of the ancient wisdom. Alone a handful of spiritually illumined human beings, in whom the divine fervor burned bright, acknowledged wholehearted allegiance to their planetary mentor — the spiritual hierarch of humanity. Through long ages certain individuals had been watched over and guided, strengthened and tested in innumerable ways, and those who passed the test of self-knowledge and self-sacrifice were gathered together to form the first association of spiritual-divine human beings — the Great Brotherhood. As G. de Purucker elaborates:

Then was formed or established or set in operation the gathering together of the very highest representatives, spiritually and intellectually speaking, that the human race as yet had given manifestation to; . . .

. . . the Silent Watcher of the Globe, through the spiritual-magnetic attraction of like to like, was enabled to attract to the Path of Light, even from the earliest times of the Third Root-Race, certain unusual human individuals, early forerunners of the general Manasaputric “descent,” and thus to form with these individuals a Focus of Spiritual and Intellectual Light on Earth, this fact signifying not so much an association or society or brotherhood as a unity of human spiritual and intellectual Flames, so to speak, which then represented on Earth the heart of the Hierarchy of Compassion. . . .

Now it was just this original focus of Living Flames, which never degenerated nor lost its high status of the mystic center on Earth through which poured the supernal glory of the Hierarchy of Compassion, today represented by the Great Brotherhood of the Mahatmans, . . . Thus it is that the Great Brotherhood traces an unbroken and uninterrupted ancestry back to the original focus of Light of the Third Root-Race. — The Esoteric Tradition  

Hence the elder brothers of the race remain

“the elect custodians of the Mysteries revealed to mankind by the divine Teachers . . . and tradition whispers, what the secret teachings affirm, namely, that these Elect were the germ of a Hierarchy which never died since that period” (Secret Doctrine)

— since the foundation and establishment of the Great Brotherhood some 12 million years ago. From this center for millions of years have been streaming in continuous procession rays of light and strength into the world at large and, more specifically, into the hearts of those whose lives are dedicated to the service of truth. From this Fraternity have gone forth messengers, masters of wisdom, to inspire the grand religions of the past, and they will continue to send forth their envoys as long as mankind requires their care.

The Semiotic Theory of Autopoiesis, OR, New Level Emergentism


The dynamics of all the life-cycle meaning processes can be described in terms of basic semiotic components, algebraic constructions of the following forms:

Pn(мn : fn[Ξn] → Ξn+1)

where Ξn is a sign system corresponding to a representation of a (design) problem at time t1, Ξn+1 is a sign system corresponding to a representation of the problem at time t2, t2 > t1, fn is a composition of semiotic morphisms that specifies the interaction of variation and selection under the condition of information closure, which requires no external elements be added to the current sign system; мn is a semiotic morphism, and Pn is the probability associated with мn, ΣPn = 1, n=1,…,M, where M is the number of the meaningful transformations of the resultant sign system after fn. There is a partial ranking – importance ordering – on the constraints of A in every Ξn, such that lower ranked constraints can be violated in order for higher ranked constraints to be satisfied. The morphisms of fn preserve the ranking.
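As a purely illustrative sketch (one possible reading of the component Pn(мn : fn[Ξn] → Ξn+1); the class and attribute names are invented, and the interpretation of м being applied after f is an assumption, not the author’s own formalism):

```python
import random
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class SemioticComponent:
    """One basic semiotic component: an information-closed step f applied to the
    current sign system, followed by one of M candidate morphisms м_n, each
    carrying a probability P_n with sum(P_n) = 1."""
    f: Callable[[Any], Any]                  # variation/selection under information closure
    morphisms: List[Callable[[Any], Any]]    # the candidate м_n
    probabilities: List[float]               # the P_n

    def __post_init__(self):
        assert abs(sum(self.probabilities) - 1.0) < 1e-9, "sum of P_n must equal 1"

    def step(self, sign_system: Any) -> Any:
        """Produce Ξ_{n+1} from Ξ_n by drawing one м_n according to the P_n."""
        m = random.choices(self.morphisms, weights=self.probabilities, k=1)[0]
        return m(self.f(sign_system))

# Toy usage: sign systems modelled as sets of constraint labels.
component = SemioticComponent(
    f=lambda xs: xs | {"selected"},
    morphisms=[lambda xs: xs | {"reading-1"}, lambda xs: xs | {"reading-2"}],
    probabilities=[0.7, 0.3],
)
print(component.step({"initial-constraint"}))
```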

The Semiotic Theory of Self-Organizing Systems postulates that in the scale hierarchy of dynamical organization, a new level emerges if and only if a new level in the hierarchy of semiotic interpretance emerges. As the development of a new product always and naturally causes the emergence of a new meaning, the above-cited Principle of Emergence directly leads us to the formulation of the first law of life-cycle semiosis as follows:

I. The semiosis of a product life cycle is represented by a sequence of basic semiotic components, such that at least one of the components is well defined in the sense that not all of its morphisms of м and f are isomorphisms, and at least one м in the sequence is not level-preserving in the sense that it does not preserve the original partial ordering on levels.

For the present (i.e. for an on-going process), there exists a probability distribution over the possible мn for every component in the sequence. For the past (i.e. retrospectively), each of the distributions collapses to a single mapping with Pn = 1, while the sequence of basic semiotic components is degenerated to a sequence of functions. For the future, the life-cycle meaning-making

Catastrophe Revisited. Note Quote.

Transversality and structural stability are the topics of Thom’s important transversality and isotopy theorems; the first one says that transversality is a stable property, the second one that transverse crossings are themselves stable. These theorems can be extended to families of functions: if f: Rⁿ × Rʳ → R is equivalent to any family f + p: Rⁿ × Rʳ → R, where p is a sufficiently small family Rⁿ × Rʳ → R, then f is structurally stable. There may be individual functions with degenerate critical points in such a family, but these exceptions from the rule are in a sense “checked” by the other family members. Such families can be obtained e.g. by parametrizing the original function with one or several extra variables. Thom’s classification theorem comes in at this level.

So, in a given state function, catastrophe theory distinguishes between two kinds of functions: one “Morse” piece, containing the nondegenerate critical points, and one piece where the (parametrized) family contains at least one degenerate critical point. The second piece has two sets of variables: the state variables (denoted x, y…) responsible for the critical points, and the control variables or parameters (denoted a, b, c…), capable of stabilizing a degenerate critical point or steering away from it to nondegenerate members of the same function family. Each control parameter can control the degenerate point only in one direction; the more degenerate a singular point is (the number of independent directions being equal to the corank), the more control parameters will be needed. The number of control parameters needed to stabilize a degenerate point (“the universal unfolding of the singularity”, with the same dimension as the number of control parameters) is called the codimension of the system. With these considerations in mind, keeping close to surveyable, four-dimensional spacetime, Thom defined an “elementary catastrophe theory” with seven elementary catastrophes, where the number of state variables is one or two: x, y, and the number of control parameters, equal to the codimension, is at most four: a, b, c, d. (With five parameters there will be eleven catastrophes.) The tool used here is the above-mentioned classification theorem, which lists all possible organizing centers (quadratic, cubic forms etc.) in which there are stable unfoldings (by means of control parameters acting on state variables).

Two elementary catastrophes: fold and cusp

1. In the first place the classification theorem points out the simple potential function y = x³ as a candidate for study. It has a degenerate critical point at (0, 0) and is always declining (with minus sign), needing an addition from the outside in order to grow locally. All possible perturbations of this function are essentially of type x³ + x or type x³ − x (more generally x³ + ax), which means that the critical point (y, x = 0) is of codimension one. Fig. below shows the universal unfolding of the organizing centre y = x³, the fold:

Figure: The universal unfolding of the organizing centre y = x³ (the fold).
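A minimal numerical sketch of this unfolding (assuming the family V(x) = x³ + ax; the function name is ours, not Thom’s):

```python
import numpy as np

def fold_critical_points(a):
    """Critical points of the fold unfolding V(x) = x**3 + a*x, i.e. roots of V'(x) = 3x**2 + a."""
    if a > 0:
        return []            # no real critical points: the perturbation x**3 + x
    if a == 0:
        return [0.0]         # the degenerate critical point of the organizing centre y = x**3
    r = float(np.sqrt(-a / 3.0))
    return [-r, r]           # a maximum and a minimum: the perturbation x**3 - x

for a in (1.0, 0.0, -1.0):
    print(a, fold_critical_points(a))
```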

This catastrophe, says Thom, can be interpreted as “the start of something” or “the end of something”, in other words as a “limit”, temporal or spatial. In this particular case (and only in this case) the complete graph in internal (x) and external space (y) with the control parameter a running from positive to negative values can be shown in a three-dimensional graph (Fig. below); it is evident why this catastrophe is called “fold”:

Figure: The fold shown in three dimensions – internal variable x, external variable y, control parameter a.

One point should be stressed already at this stage; it will be repeated again later on. In “Topological models…”, Thom remarks on the “informational content” of the degenerate critical point:

This notion of universal unfolding plays a central role in our biological models. To some extent, it replaces the vague and misused term of ‘information’, so frequently found in the writings of geneticists and molecular biologists. The ‘information’ symbolized by the degenerate singularity V(x) is ‘transcribed’, ‘decoded’ or ‘unfolded’ into the morphology appearing in the space of external variables which span the universal unfolding family of the singularity V(x). 

2. Next, let us pick as organizing centre the second potential function pointed out by the classification theorem: y = x⁴. It has a unique minimum at (0, 0), but it is not generic, since nearby potentials can be of a different qualitative type, e.g. they can have two minima. But the two-parameter function x⁴ + ax² + bx is generic and contains all possible unfoldings of y = x⁴. The graph of this function, with four variables y, x, a, b, cannot be shown; the display must be restricted to three dimensions. The obvious way out is to study the derivative f′(x) = 4x³ + 2ax + b for y = 0 and in the proximity of x = 0. It turns out that this derivative has the qualities of the fold, shown in the Fig. below; the catastrophes are like Chinese boxes, one contained within the next in the hierarchy.

Figure: The derivative f′(x) = 4x³ + 2ax + b, exhibiting the qualities of the fold.

Finally we look for the position of the degenerate critical points projected onto (a, b)-space; this projection has given the catastrophe its name: the “cusp” (Fig. below). (An arrowhead or a spearhead is a cusp.) The edges of the cusp, the bifurcation set, mark out the catastrophe zone: above the area between these limits the potential has two Morse minima and one maximum, while outside the cusp limits there is one single Morse minimum. With the given configuration (the parameter a perpendicular to the axis of the cusp), a is called the normal factor – since x will increase continuously with a if b < 0 – while b is called the splitting factor, because the fold surface is split into two sheets if b > 0. If the control axes are instead located on either side of the cusp (A = b + a and B = b − a), A and B are called conflicting factors; A tries to push the result to the upper sheet (attractor), B to the lower sheet of the fold. (Here is an “inlet” for truly external factors; it is well known how e.g. shadow or excessive light affects the morphogenetic process of plants.)

Figure: The degenerate critical points projected onto (a, b)-space – the cusp and its bifurcation set.
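A small numerical sketch of the cusp geometry, following the unfolding V(x) = x⁴ + ax² + bx introduced above (which parameter acts as “normal” and which as “splitting” depends on the orientation chosen in the figure; the function names below are ours):

```python
import numpy as np

def cusp_critical_points(a, b):
    """Real critical points of V(x) = x**4 + a*x**2 + b*x, i.e. real roots of 4x**3 + 2a*x + b."""
    roots = np.roots([4.0, 0.0, 2.0 * a, b])
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

def inside_cusp(a, b):
    """True inside the cusp region (two Morse minima and one maximum).
    The bifurcation set, where V' acquires a double root, is 8*a**3 + 27*b**2 = 0."""
    return 8.0 * a**3 + 27.0 * b**2 < 0.0

print(cusp_critical_points(-3.0, 0.5), inside_cusp(-3.0, 0.5))   # three critical points
print(cusp_critical_points( 1.0, 0.5), inside_cusp( 1.0, 0.5))   # a single minimum
```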

Thom states: the cusp is a pleat, a fault; its temporal interpretation is “to separate, to unite, to capture, to generate, to change”. Countless attempts to model bimodal distributions are connected with the cusp; it is the most used (and maybe the most misused) of the elementary catastrophes.

Zeeman has treated stock exchange and currency behaviour with one and the same model, namely what he terms the cusp catastrophe with a slow feedback. Here the rate of change of indexes (or currencies) is considered the dependent variable, while different buying patterns (“fundamental”, N in the figure below, and “chartist”, S in the figure below) serve as normal and splitting parameters. Zeeman argues: the response time of X to changes in N and S is much faster than the feedback of X on N and S, so the flow lines will be almost vertical everywhere. If we fix N and S, X will seek a stable equilibrium position, an attractor surface (or: two attractor surfaces, separated by a repellor sheet and “connected” by catastrophes; one sheet is inflation/bull market, one sheet deflation/bear market, one catastrophe the collapse of the market or currency. Note that the second catastrophe is absent with the given flow direction. This is important: it tells us that the whole pattern can be manipulated, “adapted” by means of feedbacks/flow directions). Close to the attractor surface, N and S become increasingly important; there will be two horizontal components, representing the (slow) feedback effects of N and S on X. The whole sheet (the fold) is given by the equation X³ − (S − S₀)X − N = 0, the edge of the cusp by 3X² + S₀ = S, which gives the equation 4(S − S₀)³ = 27N² for the bifurcation curve.


Figure: “Cusp with a slow feedback”, according to Zeeman (1977). X, the state variable, measures the rate of change of an index, N = normal parameter, S = splitting parameter, the catastrophic behaviour begins at S₀. On the back part of the upper sheet, N is assumed constant and dS/dt positive, on the fore part dN/dt is assumed to be negative and dS/dt positive; this gives the flow direction of the feedback. On the fore part of the lower sheet both dN/dt and dS/dt are assumed to be negative, on the back part dN/dt is assumed to be positive and dS/dt still negative; this gives the flow direction of feedback on this sheet. The cusp projection on the {N,S}-plane is shaded grey, the visible part of the repellor sheet black. (The reductionist character of these models must always be kept in mind; here two obvious key parameters are considered, while others of a weaker or more ephemeral kind – e.g. interest levels – are ignored.)
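A brief numerical sketch of Zeeman’s equations as quoted above (the values of S₀, N and S below are arbitrary illustrations, and the function names are ours):

```python
import numpy as np

S0 = 1.0   # onset of catastrophic behaviour (S_0 in the caption), chosen arbitrarily

def equilibrium_rates(N, S):
    """Real equilibria X of the slow-feedback cusp sheet X**3 - (S - S0)*X - N = 0."""
    roots = np.roots([1.0, 0.0, -(S - S0), -N])
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

def bifurcation_gap(N, S):
    """4*(S - S0)**3 - 27*N**2: positive inside the cusp (bull and bear sheets coexist),
    zero on the bifurcation curve, negative outside (a single equilibrium)."""
    return 4.0 * (S - S0) ** 3 - 27.0 * N ** 2

print(equilibrium_rates(N=0.1, S=3.0), bifurcation_gap(N=0.1, S=3.0))   # inside the cusp
print(equilibrium_rates(N=0.1, S=1.2), bifurcation_gap(N=0.1, S=1.2))   # outside the cusp
```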

Is the Hierarchical Society Cardinal?

Even the question posed has a stink of the inhuman, or the un-human. Though it is evident that in theory we might try to flatten such hierarchies, the same never holds true in practice. Although such hierarchies might be held on to surreptitiously, their tendency to be resilient is never really ruled out in matters as sensitive as these, which make us prone to getting branded as fundamentalists or fanatics, or as anything which has a semblance to right-wing ideology. So, in a nutshell, hierarchies in the Social are indeed emanating from the right wing, or are at least given to sway in their descriptions and prescriptions.

So, are these hierarchies important? Well, the answer at first go is a strict ‘no’. But let us deliberate upon it. One way is to look upon hierarchy as dominance, and the other is identity when there is an absence of hierarchies. Now, those that belong to the first camp would invoke reciprocity as enabling social order. Reciprocity is a relationship that exists one-to-one, one-to-many and many-to-one as regards the first camp, while one and the many merge into one another as regards the second camp. In the first camp, reciprocity is built upon adherence, while in the second, it is more and more symbiotic. The existence of social strata could advocate the existence of micro-cultural forms characterised by the features that lead to the formation of such strata in the first place, and could include notions like religiosity, power (both muscle and economic), and the cultural and the intellectual/ideological. On the flip side, such notions are tolerant towards multiculturalism or pluralism. Dominance becomes a nested approach, while becoming-identity is a web-like structure with nodes of individuals or clusters of societies that interact on a horizontal level, or to put it more politically, act on a democratic level, in theory at least. Disturbance in this net is knotted into a nest, where dominance takes over the democratic structure, subsequently forcing the second camp to be evacuated onto the first one. Now here is where the catch lies. Moving from a netted to a nested structure would mean classification, and it is classification that gets conceptual authority, thus in a hugely ironical manner ameliorating the potential for conflicts due to a centralised authoritarian structure. This is its use value. Hierarchies try to make sense out of the apparent relationships between things, with the caveat that the orientations that determine those relations are always just looming round the corner.

In hierarchical societies there are domains of individuals, clusters, micro-cultures or societies that are instances of isolated-ness from each other, whereas in non-hierarchical societies these domains tendentially overlap into one another, or even across one another, making the very study of the latter kind of society difficult in intent. Another use value lies in the fact that mapping domains becomes easier in dominance-based or stratified societies than in non-hierarchical societies.


Representation as a Meaningful Philosophical Quandary


The deliberation on representation indeed becomes a meaningful quandary if most of the shortcomings are to be overcome without actually accepting the way they permeate scientific and philosophical discourse. The problem is more ideological than one could have imagined, since it is only within the space of this quandary that one can assume success in overthrowing the quandary. Unless the classical theory of representation that guides the expert systems has been accepted as existing, there is no way to dislodge the relationship of symbols and meanings that builds up such systems, lest the predicament of falling prey to the Scylla of a metaphysically strong notion of meaningful representation as natural, or to the Charybdis of an external designer, should gobble us up. If one somehow escapes these maliciously aporetic entities, representation as a metaphysical monster stands to block our progress. Is it really viable, then, to think of machines that can survive this representational foe, a foe that gets no aid from the clusters of internal mechanisms? The answer is very much in the affirmative, provided the consideration of such a non-representational system as continuous and homogeneous is done away with. And in its place are functional units that are no longer representational, for they derive their efficiency and legitimacy through autopoiesis. What is required is to consider this notional representational critique of distributed systems in relation to the objectivity of science, since objectivity as a property of science has an intrinsic value of independence from the subject who studies the discipline. Kuhn had some philosophical problems with this precise way of treating science as an objective discipline. For Kuhn, scientists operate under or within paradigms, thus obligating hierarchical structures. Such hierarchical structures ensure the position of scientists to voice their authority on matters of dispute, and when there is a crisis within, or for, the paradigm, scientists, to begin with, do not outright reject the paradigm, but try their level best at resolving it. In cases where resolution becomes a difficult task, an outright rejection of the paradigm would follow suit, thus effecting what is commonly called the paradigm shift. If such were the case, obviously, the objective tag for science takes a hit, and Kuhn argues in favor of a shift in the social order that science undergoes, signifying the subjective element. Importantly, these paradigm shifts occur to benefit scientific progress and in almost all of the cases occur non-linearly. Such a view no doubt slides Kuhn into a position of relativism, and has been the main point of attack on paradigm shifts. At the forefront of the attacks has been Michael Polanyi and his supporters, whose work on the epistemology of science has much the same ingredients but was eventually deprived of fame. Kuhn was charged with plagiarism. The commonality of their arguments could be measured by a dissenting voice with respect to objectivity in science. Polanyi thought of it as a false ideal, since for him the epistemological claims that defined science were based more on personal judgments, and therefore susceptible to fallibilism. The objective nature of science that obligates the scientists to see things as they really are is somewhat dislodged by the above principle of subjectivity.
But, if science were to be seen as objective, then human subjectivity would indeed create a rupture as far as the purified version of scientific objectivity is sought. The subject or the observer undergoes what is termed the “observer effect”, which refers to the changes that the act of observation makes upon the phenomenon being observed. This effect is as good as ubiquitous in most domains of science and technology, ranging from the Heisenbug(1) in computing, via particle physics and thermodynamics, to quantum mechanics. The quantum-mechanical observer effect is quite perplexing, and is a result of a phenomenon called “superposition”, which signifies existence in all possible states at once. Superposition gets its popular credit from Schrödinger’s cat experiment. The experiment entails a cat that is neither dead nor alive until observed. This has led physicists to take into account the acts of “observation” and “measurement” in order to comprehend the paradox in question, and thereby resolve it. But there is still a minority of quantum physicists out there who vouch for the supremacy of an observer, despite the quantum entanglement effect that goes on to explain the impacts of “observation” and “measurement”.(2) Such a standpoint is indeed reflected in Derrida (9-10) as well, when he says (I quote him in full),

The modern dominance of the principle of reason had to go hand in hand with the interpretation of the essence of beings as objects, and object present as representation (Vorstellung), an object placed and positioned before a subject. This latter, a man who says ‘I’, an ego certain of itself, thus ensures his own technical mastery over the totality of what is. The ‘re-‘ of repraesentation also expresses the movement that accounts for – ‘renders reason to’ – a thing whose presence is encountered by rendering it present, by bringing it to the subject of representation, to the knowing self.

If Derridean deconstruction needs to work on science and theory, the only way out is to relinquish the boundaries that define or divide the two disciplines. Moreover, if there is any looseness encountered in objectivity, the ramifications are felt straight at the levels of scientific activities. Even theory does not remain immune to these consequences. Importantly, as scientific objectivity starts to wane, a corresponding philosophical luxury of avoiding the contingent wanes. Such a loss of representation congruent with a certain theory of meaning we live by has serious ethical-political affectations.

(1) Heisenbug is a pun on Heisenberg’s uncertainty principle and is a bug in computing that is characterized by a disappearance of the bug itself when an attempt is made to study it. One common example is a bug that occurs in a program that was compiled with an optimizing compiler, but not in the same program when compiled without optimization (e.g., for generating a debug-mode version). Another example is a bug caused by a race condition. A heisenbug may also appear in a system that does not conform to the command-query separation design guideline, since a routine called more than once could return different values each time, generating hard-to-reproduce bugs in a race condition scenario. One common reason for heisenbug-like behaviour is that executing a program in debug mode often cleans memory before the program starts, and forces variables onto stack locations, instead of keeping them in registers. These differences in execution can alter the effect of bugs involving out-of-bounds member access, incorrect assumptions about the initial contents of memory, or floating-point comparisons (for instance, when a floating-point variable in a 32-bit stack location is compared to one in an 80-bit register). Another reason is that debuggers commonly provide watches or other user interfaces that cause additional code (such as property accessors) to be executed, which can, in turn, change the state of the program. Yet another reason is a fandango on core, the effect of a pointer running out of bounds. In C++, many heisenbugs are caused by uninitialized variables. Another similar pun-intended bug encountered in computing is the Schrödinbug. A schrödinbug is a bug that manifests only after someone reading source code or using the program in an unusual way notices that it never should have worked in the first place, at which point the program promptly stops working for everybody until fixed. The Jargon File adds: “Though… this sounds impossible, it happens; some programs have harbored latent schrödinbugs for years.”

(2) There is a related issue in quantum mechanics relating to whether systems have pre-existing – prior to measurement, that is – properties corresponding to all measurements that could possibly be made on them. The assumption that they do is often referred to as “realism” in the literature, although it has been argued that the word “realism” is being used in a more restricted sense than philosophical realism. A recent experiment in the realm of quantum physics has been quoted as meaning that we have to “say goodbye” to realism, although the author of the paper states only that “we would [..] have to give up certain intuitive features of realism”. These experiments demonstrate a puzzling relationship between the act of measurement and the system being measured, although it is clear from experiment that an “observer” consisting of a single electron is sufficient – the observer need not be a conscious observer. Also, note that Bell’s Theorem suggests strongly that the idea that the state of a system exists independently of its observer may be false. Note that the special role given to observation (the claim that it affects the system being observed, regardless of the specific method used for observation) is a defining feature of the Copenhagen Interpretation of quantum mechanics. Other interpretations resolve the apparent paradoxes from experimental results in other ways. For instance, the Many-Worlds Interpretation posits the existence of multiple universes in which an observed system displays all possible states to all possible observers. In this model, observation of a system does not change the behavior of the system – it simply answers the question of which universe(s) the observer(s) is(are) located in: In some universes the observer would observe one result from one state of the system, and in others the observer would observe a different result from a different state of the system.