The Locus of Renormalization. Note Quote.


Since symmetries and the related conservation properties play a major role in physics, it is interesting to consider the paradigmatic case where symmetry changes are at the core of the analysis: critical transitions. In these state transitions, “something is not preserved”. In general, this is expressed by the fact that some symmetries are broken or new ones are obtained after the transition (symmetry changes, corresponding to state changes). At the transition, typically, there is the passage to a new “coherence structure” (a non-trivial scale symmetry); mathematically, this is described by the non-analyticity of the pertinent formal development. Consider the classical para-ferromagnetic transition: the system goes from a disordered state to a sudden common orientation of spins, up to the completely ordered state of a unique orientation. Or percolation, often based on the formation of fractal structures, that is, the iteration of a statistically invariant motif. Similarly for the formation of a snowflake . . . . In all these circumstances, a “new physical object of observation” is formed. Most of the current analyses deal with transitions at equilibrium; the less studied and more challenging case of far-from-equilibrium critical transitions may require new mathematical tools, or variants of the powerful renormalization methods. These methods change the pertinent object, yet they are based on symmetries and conservation properties such as energy or other invariants. That is, one obtains a new object, yet not necessarily new observables for the theoretical analysis. Another key mathematical aspect of renormalization is that it analyzes point-wise transitions; that is, mathematically, the physical transition is seen as happening in an isolated mathematical point (isolated with respect to the interval topology, or the topology induced by the usual measurement and the associated metrics).
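The percolation example can be made concrete with a toy simulation. The sketch below is illustrative only; the grid size, trial count, and the `percolates`/`spanning_fraction` helpers are my own choices, not from the text. It samples random site-percolation lattices and estimates how often an open cluster spans the grid from top to bottom.

```python
import random

def percolates(grid):
    """Depth-first search: does a cluster of open sites span top to bottom?"""
    n = len(grid)
    stack = [(0, c) for c in range(n) if grid[0][c]]
    seen = set(stack)
    while stack:
        r, c = stack.pop()
        if r == n - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < n and 0 <= cc < n and grid[rr][cc] and (rr, cc) not in seen:
                seen.add((rr, cc))
                stack.append((rr, cc))
    return False

def spanning_fraction(p, n=32, trials=100, seed=0):
    """Estimate the probability that a random lattice with site density p
    contains a spanning cluster, the 'new object of observation'."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        hits += percolates(grid)
    return hits / trials
```

Below the critical density (roughly 0.593 for the square lattice) spanning clusters are essentially absent; above it they are almost certain. The abrupt appearance of the spanning cluster, with its statistically self-similar structure, is the kind of symmetry change the passage describes.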

One can say in full generality that a mathematical frame completely handles the determination of the object it describes as long as no strong enough singularity (i.e. a relevant infinity or divergence) shows up to break this very mathematical determination. In classical statistical fields (at criticality) and in quantum field theories this leads to the necessity of using renormalization methods. The point of these methods is that when it is impossible to handle mathematically all the interactions of the system in a direct manner (because they lead to infinite quantities and therefore to no relevant account of the situation), one can still analyze parts of the interactions in a systematic manner, typically within arbitrary scale intervals. This allows us to exhibit a symmetry between partial sets of “interactions”, when the arbitrary scales are taken as a parameter.
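A minimal sketch of analyzing interactions "within arbitrary scale intervals" is Kadanoff's block-spin construction: coarse-grain a spin lattice block by block and ask how the description transforms with scale. The majority-rule implementation below is a standard pedagogical toy, not the full renormalization-group machinery; the function name and block size are my own choices.

```python
def coarse_grain(spins, b=3):
    """One block-spin renormalization step: replace each b-by-b block of
    +1/-1 spins by the sign of its majority (Kadanoff majority rule).
    With odd b*b the block sum is never zero, so the rule is well defined."""
    n = len(spins)
    assert n % b == 0, "lattice size must be a multiple of the block size"
    m = n // b
    return [[1 if sum(spins[b * i + di][b * j + dj]
                      for di in range(b) for dj in range(b)) > 0 else -1
             for j in range(m)]
            for i in range(m)]
```

Iterating `coarse_grain` flows a nearly ordered lattice toward the fully ordered fixed point: small-scale details are washed out and only the large-scale structure survives, which is the sense in which the scale is "taken as a parameter".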

In this situation, the intelligibility still has an “upward” flavor, since renormalization is based on the stability of the equational determination when one considers a part of the interactions occurring in the system. Now, the “locus of the objectivity” is not in the description of the parts but in the stability of the equational determination when taking more and more interactions into account. This is true for critical phenomena, where the parts, atoms for example, can be objectivized outside the system and have a characteristic scale. In general, though, only scale invariance matters, and the contingent choice of a fundamental (atomic) scale is irrelevant. Even worse, in quantum field theories the parts are not really separable from the whole (this would mean separating an electron from the field it generates), and there is no relevant elementary scale which would allow one to get rid of the infinities (and again this would be quite arbitrary, since the objectivity needs the inter-scale relationship).

In short, even in physics there are situations where the whole is not the sum of the parts because the parts cannot be summed over (this is not specific to quantum fields and is also relevant for classical fields, in principle). In these situations, the intelligibility is obtained through the scale symmetry, which is why fundamental scale choices are arbitrary with respect to these phenomena. This choice of the object of quantitative and objective analysis is at the core of the scientific enterprise: looking at molecules as the only pertinent observable of life is worse than reductionist; it goes against the history of physics and its audacious unifications and inventions of new observables, scale invariances and even conceptual frames.

As for criticality in biology, there exists substantial empirical evidence that living organisms undergo critical transitions. These are mostly analyzed as limit situations, either never really reached by an organism or as occasional point-wise transitions. Or also, as researchers nicely claim in specific analyses: a biological system, a cell's genetic regulatory network, the brain and brain slices … are “poised at criticality”. In other words, critical state transitions happen continually.

Thus, as for the pertinent observables, the phenotypes, we propose to understand evolutionary trajectories as cascades of critical transitions, thus of symmetry changes. In this perspective, one cannot pre-give, nor formally pre-define, the phase space for the biological dynamics, in contrast to what has been done for the profound mathematical frame for physics. This does not forbid a scientific analysis of life. This may just be given in different terms.

As for evolution, there is no possible equational entailment nor a causal structure of determination derived from such entailment, as in physics. The point is that, since the work of Noether and Weyl in the last century, these are better understood and correlated as symmetries in the intended equations, where they express the underlying invariants and invariant-preserving transformations. No theoretical symmetries, no equations, thus no laws and no entailed causes allow the mathematical deduction of biological trajectories in pre-given phase spaces – at least not in the deep and strong sense established by the physico-mathematical theories. Observe that the robust, clear, powerful physico-mathematical sense of entailing law has been permeating all sciences, including societal ones, economics among others. If we are correct, this permeating physico-mathematical sense of entailing law must be given up for unentailed diachronic evolution in biology, in economic evolution, and in cultural evolution.

As a fundamental example of symmetry change, observe that mitosis yields different proteome distributions, differences in DNA or DNA expression, in membranes or organelles: the symmetries are not preserved. In a multi-cellular organism, each mitosis asymmetrically reconstructs a new coherent “Kantian whole”, in the sense of the physics of critical transitions: a new tissue matrix, new collagen structure, new cell-to-cell connections . . . . And we undergo millions of mitoses each minute. Moreover, this is not “noise”: this is variability, which yields diversity, which is at the core of evolution and even of the stability of an organism or an ecosystem. Organisms and ecosystems are structurally stable also because they are Kantian wholes that permanently and non-identically reconstruct themselves: they do it in an always different, thus adaptive, way. They change the coherence structure, thus its symmetries. This reconstruction is thus random, but also not random, as it heavily depends on constraints, such as the protein types imposed by the DNA, the relative geometric distribution of cells in embryogenesis, interactions in an organism, in a niche, but also on the opposite of constraints, the autonomy of Kantian wholes.

In the interaction with the ecosystem, the evolutionary trajectory of an organism is characterized by the co-constitution of new interfaces, i.e. new functions and organs that are the proper observables for the Darwinian analysis. And the change of a (major) function induces a change in the global Kantian whole as a coherence structure, that is it changes the internal symmetries: the fish with the new bladder will swim differently, its heart-vascular system will relevantly change ….

Organisms transform the ecosystem while transforming themselves, and they can withstand and do this because they have an internal preserved universe. Its stability is maintained also by slightly, yet constantly, changing internal symmetries. The notion of extended criticality in biology focuses on the dynamics of symmetry changes and provides an insight into permanent, ontogenetic and evolutionary adaptability, as long as these changes are compatible with the co-constituted Kantian whole and the ecosystem. As we said, autonomy is integrated in and regulated by constraints, within an organism itself and of an organism within an ecosystem. Autonomy makes no sense without constraints, and constraints apply to an autonomous Kantian whole. So constraints shape autonomy, which in turn modifies constraints, within the margin of viability, i.e. within the limits of the interval of extended criticality. The extended critical transition proper to the biological dynamics does not allow one to prestate the symmetries and the correlated phase space.

Consider, say, a microbial ecosystem in a human. It has some 150 different microbial species in the intestinal tract. Each person’s ecosystem is unique, and tends largely to be restored following antibiotic treatment. Each of these microbes is a Kantian whole, and in ways we do not yet understand, the “community” in the intestines co-creates their worlds together, co-creating the niches by which each and all achieve, with the surrounding human tissue, a task closure that is “always” sustained, even if it may change through the immigration of new microbial species into the community and the extinction of old ones. With such community membership turnover, or community assembly, the phase space of the system undergoes continual and open-ended changes. Moreover, given the rate of mutation in microbial populations, it is very likely that these microbial communities are also co-evolving with one another on a rapid time scale. Again, the phase space is continually changing, as are the symmetries.

Can one have a complete description of actual and potential biological niches? If so, the description seems to be incompressible, in the sense that any linguistic description may require new names and meanings for the new unprestatable functions, where functions and their names make sense only in the newly co-constructed biological and historical (linguistic) environment. Even for existing niches, short descriptions are given from a specific perspective (they are very epistemic), looking at a purpose, say. One finds a feature of a niche because one observes that, if it goes away, the intended organism dies. In other terms, niches are compared by differences: one may not be able to prove that two niches are identical or equivalent (in supporting life), but one may show that two niches are different. Once more, there are no symmetries organizing these spaces and their internal relations over time. Mathematically, neither symmetries (groups) nor (partial) orders (semigroups) organize the phase spaces of phenotypes, in contrast to physical phase spaces.

Finally, here is one of the many logical challenges posed by evolution: the circularity of the definition of niches is more than the circularity in the definitions. The “in the definitions” circularity concerns the quantities (or quantitative distributions) of given observables. Typically, a numerical function defined by recursion or by impredicative tools yields a circularity in the definition and poses no mathematical or logical problem in contemporary logic (this is so also for recursive definitions of metabolic cycles in biology). Similarly, a river flow, which shapes its own border, presents technical difficulties for a careful equational description of its dynamics, but no mathematical or logical impossibility: one has to optimize a highly nonlinear and large action/reaction system, yielding a dynamically constructed geodesic, the river path, in perfectly known phase spaces (momentum and space, or energy and time, say, as pertinent observables and variables).
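The claim that circularity *in* a definition poses no mathematical or logical problem can be illustrated with a simple fixed-point computation: the golden ratio is defined circularly as phi = 1 + 1/phi, and the circular definition is resolved by iteration. The `fixed_point` helper below is my own illustrative construction, not from the text.

```python
def fixed_point(f, x0, tol=1e-12, max_iter=10_000):
    """Resolve a circular definition x = f(x) by iterating f until the
    value stabilizes; the circularity is benign whenever this converges."""
    x = x0
    for _ in range(max_iter):
        nxt = f(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise RuntimeError("iteration did not converge")

# The golden ratio is defined circularly: phi = 1 + 1/phi.
phi = fixed_point(lambda x: 1 + 1 / x, 1.0)
```

The iteration converges because the map contracts near the fixed point; the circular definition determines a perfectly well-defined number, just as the recursive definitions mentioned above determine well-defined functions.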

The circularity “of the definitions” applies, instead, when it is impossible to prestate the phase space, so that the very novel interaction (including the “boundary conditions” in the niche and the biological dynamics) co-defines new observables. The circularity then radically differs from the one in the definition, since it is at the meta-theoretical (meta-linguistic) level: which observables and variables are to be put in the equations? It is not just a matter of prestatable yet circular equations within the theory (ordinary recursion and extended non-linear dynamics), but of the ever-changing observables, the phenotypes and the biological functions in a circularly co-specified niche. Mathematically and logically, no law entails the evolution of the biosphere.


Industrial Semiosis. Note Quote.


The concept of Industrial Semiosis categorizes the product life-cycle processes along three semiotic levels of meaning emergence: 1) the ontogenic level, which deals with the life-history data and future expectations about a single occurrence of a product; 2) the typogenic level, which holds the processes related to a product type or generation; and 3) the phylogenic level, which embraces the meaning-affecting processes common to all of the past and current types and occurrences of a product. The three levels naturally differ by the characteristic durational times of the grouped semiosis processes: as one moves from the lowest, ontogenic level to the higher levels, the objects become larger and more complicated and have slower dynamics in both original interpretation and meaning change. The semantics of industrial semiosis investigates the relationships that hold between the syntactical elements — the signs in language, models, data — and the objects that matter in industry, such as customers, suppliers, work-pieces, products, processes, resources, tools, time, space, investments, costs, etc. The pragmatics of industrial semiosis deals with the expression and appeal functions of all kinds of languages, data and models, and their interpretations in the setting of any possible enterprise context, as part of the enterprise realising its mission by enterprising, engineering, manufacturing, servicing, re-engineering, competing, etc. The relevance of the presented definitions for information systems engineering is still limited and vague: the definitions are very general and hardly reflect any knowledge about the industrial domain and its objects, nor do they reflect knowledge about the ubiquitous information infrastructure and the sign systems it accommodates.

A product (as concept) starts its development with initially coinciding onto-, typo-, and phylogenesis processes but distinct and pre-existing semiotic levels of interpretation. As the concept evolves, typogenesis works to reorganize the relationships between the onto- and phylogenesis processes, since the variety of objects involved in product development increases. Product types and their interactions mediate – filter and buffer – between the levels above and below: not all the variety of distinctions remains available for re-organization as phylos, nor does every lowest-level object have a material relevance there. The phylogenic level is buffered against variations at the ontogenic level by the stabilizing mediations at the typogenic level.

The dynamics of the interactions between the semiotic levels can well be described in terms of the basic processes of variation and selection. In complex system evolution, variation stands for the generation of a variety of simultaneously present, distinct entities (synchronic variety), or of subsequent, distinct states of the same entity (diachronic variety). Variation makes variety increase and produces more distinctions. Selection means, in essence, the elimination of certain distinct entities and/or states, and it reduces the number of remaining entities and/or states.
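The two basic processes can be sketched as a toy evolutionary loop. This is an illustrative sketch only: the one-dimensional "entities", the fitness function, and all parameters are my own assumptions, not from the text. Variation doubles the synchronic variety of the population; selection eliminates entities and reduces it again.

```python
import random

def variation(population, rng, sigma=0.5):
    """Variation: every entity spawns a perturbed variant, so the
    synchronic variety of the population doubles."""
    return population + [x + rng.gauss(0, sigma) for x in population]

def selection(population, fitness, k):
    """Selection: eliminate all but the k fittest entities, reducing
    the variety that variation produced."""
    return sorted(population, key=fitness, reverse=True)[:k]

def evolve(generations=30, k=8, target=3.0, seed=0):
    """Alternate variation and selection; the population drifts toward
    the states favoured by selection."""
    rng = random.Random(seed)
    pop = [rng.uniform(-10.0, 10.0) for _ in range(k)]
    for _ in range(generations):
        pop = selection(variation(pop, rng), lambda x: -abs(x - target), k)
    return pop
```

Each cycle first produces more distinctions and then eliminates some of them, which is exactly the variety-increasing/variety-reducing alternation described above.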

From a semiotic point of view, the variety of a product intended to operate in an environment is determined by the devised product structure (i.e. the relations established between product parts – its synchronic variety) and the possible relations between the product and the anticipated environment (i.e. the product’s feasible states – its potential diachronic variety), which together aggregate the product’s possible configurations. The variety is defined on the ontogenic level, which includes elements for the description of both the structure and the environment. The ontogenesis is driven by variation that goes through different configurations of the product and eventually discovers (by distinction selection at every stage of the product life cycle) configurations which are stable on one or another time-scale. A constraint on the configurations is then imposed, resulting in selective retention – the emergence of a new meaning for a (not necessarily new) sign – at the typogenic level. The latter decreases the variety but specializes the ontogenic level, so that ultimately only those distinctions remain which fit the environment (i.e. only dynamically stable relation patterns are preserved). Analogously, but at a slower time-scale, the typogenesis results in the emergence of a new meaning on the phylogenic level that consecutively specializes the lower levels. Thus, the main semiotic principle of product development is that the dynamics of the meaning-making processes always seeks to decrease the number of possible relations between the product and its environment; hence, the semiosis of the product life cycle is naturally simplified.
At the same time, however, the ‘natural’ dynamics is such that it augments the evolutive potential of the product concept for increasing its organizational richness: the emergence of new signs (which may lead to the emergence of new levels of interpretation) requires a new kind of information, and new descriptive categories must be given to deal with the still same product.

Distributed Representation Revisited

[Figure: the distributed representation of language meaning in neural networks]

If the conventional symbolic model mandates the creation of a theory that seeks to address the issues pertaining to the problem, this mandatory theory construction is bypassed in the case of distributed representational systems, since the latter are characterized by a large number of interactions occurring in a nonlinear fashion. No such attempts at theoretical construction are to be made in distributed representational systems for fear of high-end abstraction, which would suck out the nutrient that is the hallmark of the model. Distributed representation is likely to encounter onerous issues if the size of the network inflates, but the issue is addressed through what is commonly known as the redundancy technique, whereby a simultaneous encoding of information generated by numerous interactions takes place, thus improving the adequacy with which the information is presented to the network. In the words of Paul Cilliers, this is an important point, for,

the network used for the model of a complex system will have to have the same level of complexity as the system itself….However, if the system is truly complex, a network of equal complexity may be the simplest adequate model of such a system, which means that it would be just as difficult to analyze as the system itself.

Following, he also presents a caveat,

This has serious methodological implications for the scientists working with complex systems. A model which reduces the complexity may be easier to implement, and may even provide a number of economical descriptions of the system, but the price paid for this should be considered carefully.
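The redundancy technique mentioned above can be illustrated with a toy distributed code. This is a sketch under strong simplifying assumptions: random ±1 patterns stand in for learned distributed representations, and the `make_codes`/`lesion`/`recall` helpers are hypothetical, not from any source discussed here. Because information about each item is spread across many units, silencing a sizeable fraction of them still permits correct retrieval.

```python
import random

def make_codes(items, dim=256, seed=0):
    """Give each item a random +1/-1 pattern over many units, so that
    information about the item is distributed across all dimensions."""
    rng = random.Random(seed)
    return {w: [rng.choice((-1, 1)) for _ in range(dim)] for w in items}

def lesion(vec, frac, seed=1):
    """Silence a fraction of the units (set them to zero)."""
    rng = random.Random(seed)
    dead = set(rng.sample(range(len(vec)), int(frac * len(vec))))
    return [0 if i in dead else v for i, v in enumerate(vec)]

def recall(query, codes):
    """Retrieve the stored item whose pattern best matches the query."""
    return max(codes, key=lambda w: sum(q * v for q, v in zip(query, codes[w])))
```

Even after silencing 30% of the units, each item is still recalled correctly, because no single unit carries the representation on its own; this graceful degradation is the point of the redundancy technique.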

One of the outstanding qualities of distributed representational systems is their adaptability: in the sense of reusing the network so as to be applicable to other problems and offer solutions. What this connotes, exactly, is that the learning process the network has undergone for a problem ‘A’ could be shared with a problem ‘B’, since many of the input neurons are bounded by information learned through ‘A’ that could be applicable to ‘B’. In other words, the weights are the dictators for solving or resolving issues, no matter when and for which problem the learning took place. There is a slight hitch here, namely that this quality of generalizing solutions could suffer if the level of abstraction starts to shoot up. This itself could be arrested if, in the initial stages, the right kind of framework is decided upon, thus reducing the hitch to an almost negligible, non-impacting factor. The very notion of weights is considered a problematic here by Sterelny, and he takes it to attack distributed representation in general and connectionism as a whole in particular. In an analogically witty paragraph, Sterelny says,

There is no distinction drawable, even in principle, between functional and non-functional connections. A positive linkage between two nodes in a distributed network might mean a constitutive link (e.g. catlike, in a network for tiger); a nomic one (carnivore, in the same network), or a merely associative one (in my case, a particular football team that play in black and orange).

It should be noted that this criticism of weights derives from the fact that, for Sterelny, the relationship between distributed representations and the micro-features that compose them is deeply problematic. If such is the criticism, then no doubt Sterelny still seems to be ensconced within the conventional semantic/symbolic model. And since all weights can take part in information processing, there is some sort of democratic liberty accorded to the weights within a distributed representation, and hence any talk of constitutive, nomic, or even associative links is mere humbug. Even if there is a prevailing disagreement that a large pattern of weights is not convincing enough as an explanation, since it tends to complicate matters, distributed representational systems work consistently enough compared with an alternative system that offers explanation through reasoning, and it is therefore quite foolhardy to jettison distributed representation by the sheer force of criticism. If a neural network can be adapted to produce the correct answer for a number of training cases that is large compared with the size of the network, it can be trusted to respond correctly to previously unseen cases, provided they are drawn from the same population using the same distribution as the training cases, thus undermining the commonly held idea that explanations are a necessary feature of trustworthy systems (Baum and Haussler). Another objection that distributed representation faces is that, if representations are distributed, then the possibility that two representations of the same thing differ from one another cannot be ruled out.
So one of them is the true representation, while the other is only an approximation of it.(1) This is a criticism of merit and is attributed to Fodor, in his influential book titled Psychosemantics.(2) For, if there is only one representation, Fodor would not shy from saying that this is the yucky solution that the folk-psychology project believes in. But since connectionism believes in the plausibility of indeterminate representations, on the question of flexibility it scores well and high over the conventional semantic/symbolic models; and is it not common sense to encounter flexibility in daily lives? The other response to this objection comes from post-structuralist theories (Baudrillard is quite important here; see the first footnote below). The objection of a true representation, and of a copy of the true representation, meets its pharmacy in post-structuralism, where meaning is constituted by synchronic as well as diachronic contextualities, thereby leaving the distributed representation with no need for concept and context, as they are inherent in the idea of such a representation itself. Sterelny still seems to ride on his obstinacy, and in a vitriolic tone poses his demand to know why a distributed representation should be regarded as a state of the system at all. Moreover, he says,

It is not clear that a distributed representation is a representation for the connectionist system at all…given that the influence of node on node is local, given that there is no processor that looks at groups of nodes as a whole, it seems that seeing a distributed representation in a network is just an outsider’s perspective on the system.

This is moving around in circles, if nothing more. Or maybe, he was anticipating what G. F. Marcus would write and echo to some extent in his book The Algebraic Mind. In the words of Marcus,

…I agree with Stemberger(3) that connectionism can make a valuable contribution to cognitive science. The only place we differ is that, first, he thinks that the contribution will be made by providing a way of eliminating symbols, whereas I think that connectionism will make its greatest contribution by accepting the importance of symbols, seeking ways of supplementing symbolic theories and seeking ways of explaining how symbols could be implemented in the brain. Second, Stemberger feels that symbols may play no role in cognition; I think that they do.

Whatever Sterelny claims, after most of the claims and counter-claims have been taken into account, the only conclusion for the time being is that distributed representation has not been undermined, his adamant position notwithstanding.

(1) This notion finds its parallel in Baudrillard’s notion of simulation, and subsequently the notion would be invoked in studying this parallel nature. Of special interest is the order of simulacra in the period of post-modernity, where the simulacrum precedes the original and the distinction between reality and representation vanishes. There is only the simulacrum, and originality becomes a totally meaningless concept.

(2) This book is known for putting folk psychology firmly on the theoretical ground by rejecting any external, holist and existential threat to its position.

(3) Joseph Paul Stemberger is a professor in the Department of Linguistics at The University of British Columbia in Vancouver, British Columbia, Canada, with primary interests in phonology, morphology, and their interactions. His theoretical orientations are towards Optimality Theory, employing his own version of the theory, and towards connectionist models.

 

Autopoiesis Revisited


Autopoiesis principally dealt with determining the essence of living beings, thus calling to attention a clarification between organization and structure. This distinction was highlighted with organization subtending the set of all possible relations of the autopoietic processes of an organism, and structure as a synchronic snapshot from the organizational set that is active at any given instant. The distinction was tension-ridden, for the possibility of producing a novel functional structure was inhibited, especially when the system had perturbations vis-à-vis the environment that housed it. Thus, within the realm of autopoiesis, a diachronic emergence was conceivable only as a natural drift. John Protevi throws light on this perspective with his insistence on synchronic emergence as autonomous, and since autonomy is interest-directed, the question of autopoiesis in the social realm is ruled out. The rejection of extending autopoiesis to the social realm, especially Varela’s rejection, is to be understood as a move conceived more to go beyond autopoiesis than beyond neocybernetics as concerned with the organizational closure of informational systems, lest a risk of slipping into polarization should loom large. The aggrandizing threat of fascistic and authoritarian tendencies in Varela was indeed ill-conceived. This polarity, which Varela later in his intellectual trajectory considered as comprising fragments that constituted the whole and were collectively constructed, was a launch pad for Luhmann to enter the fray and apply autopoiesis to social systems. Autopoiesis forms the central notion for his self-referential systems, where the latter are characterized by acknowledging their reference to themselves in every operation. An autopoietic system, while organizationally closed, nevertheless references an environment, background or context.
This is an indication that pure auto-referentiality is generally lacking, replaced instead by a broader process of self-referentiality which comprises hetero-referentiality, with a reference to an environment. This process is watchful of the distinction between itself and the environment, lest it should fail to take off. As Luhmann says, if an autopoietic system did not have an environment, it would be forced to invent one as the horizon of its auto-referentiality.

A system distinguishes itself from the environment by boundaries, the environment being a zone of high-degree complexity and the system one of reduced complexity. Even Luhmann’s system believes in being interest-driven, where communication is selective with the available information to the best of its efficiency. Luhmann likens the operation of autopoiesis to a program, making a series of logical distinctions. Here, Luhmann refers to the British mathematician G. Spencer Brown’s logic of distinctions, which Maturana and Varela had identified as a model for the functioning of any cognitive process. The supreme criterion guiding the “self-creation” of any given system is a defining binary code. This binary code is taken by Luhmann to problematize the auto-referential system’s continuous confrontation with the dilemma of disintegration/continuation. Importantly, Luhmann treats systems on an ontological level, that is, systems exist, and he attempts to change this paradigm through the differential relations between the system and the environment.

Philosophically, complexity and self-organizational principles shift trends towards interdisciplinarity. To take the case of holism, emergentism within complexity abhors study through reductionism. Scientifically, this notion of holism failed to stamp its authority due to a lack of any solid scientificity, and the hubristic Newtonian paradigm of reductionism as the panacea for all ills came to stay. A rapprochement was not possible until the German biologist Ludwig von Bertalanffy shocked the prevalent world view with his thesis on the openness of living systems, which interact with the surrounding systems for their continual survival. This idea deliberated on a system embedded within an environment, separated by a boundary that lent the system its own identity. Input from the environment and output from the system could be conceived in terms of a plurality of systems interacting with one another to form a network, which, if functionally coherent, is a system in its own right, or a supersystem, with the initial systems as its subsystems. This strips the subsystems of any independence, leaving them determinable only within the network via relations and/or mappings. This in general is termed constraint, which abhors independence from the relations between the coupled systems (supersystem/subsystems). If the coupling between the systems is tight enough, an organization with its own identity and autonomy results. Cybernetics deals precisely with such a formulation, where the autonomy in question is maintained through goal-directed, seemingly intelligent action, in line with the thoughts of Varela and Luhmann. This is significant because the perturbations originating in the environment are actively compensated for by the system in order to maintain its preferred state of affairs, with a greater amount of perturbation implying greater compensatory action on the part of the system.
One consequence of such a systemic perspective is that it gets rid of the Cartesian mind-matter split by thinking of it as nothing more than a special kind of relation. Such is the efficacy of autopoiesis in negotiating the dilemma surrounding the metaphysical question concerning the origin of order.
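The cybernetic compensation of perturbations described above can be sketched as a simple proportional controller. This is illustrative only; the gain, setpoint, and disturbance sequence are arbitrary choices of mine, not drawn from the text.

```python
def regulate(setpoint, disturbances, gain=0.8):
    """A proportional controller: each environmental perturbation is
    followed by a compensatory action proportional to the deviation
    from the preferred state (the setpoint)."""
    state = setpoint
    trajectory = []
    for d in disturbances:
        state += d                           # perturbation from the environment
        state += gain * (setpoint - state)   # compensatory action by the system
        trajectory.append(state)
    return trajectory
```

With the compensation switched on, deviations from the preferred state stay small; with the gain set to zero, the very same perturbations push the state much further away, which is the sense in which greater perturbation calls forth greater compensatory action.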

Embedding Complexity within Accelerationism: drunken risibility

I am yet to use Deleuzean ideas in activism, but I do use him in building up discursive notes that find difficulty in manifesting and/or realizing themselves within the vocabulary of/on the field of activism. That is the reason for my ‘yet to use’ of his ideas. There is no doubting the alignment to be effected between theory and practice, but working towards such a coming together is what my intention has always been. To be honest, I have always had this difficulty of retaining the alignment, since practice takes precedence, sublating theory in the process. Deleuzean ontology is emancipatory for thought from commonsense thinking, but the loss is incurred many a time when practice is seemingly cohesive with commonsense thinking. This aporia should be the launch pad for the project, for the intention is to smash whatever traces of postmodern thinking/thought are still tangible, and replace them with the trajectory of being within ‘capitalism’ as a structure continuously in flux. The complexity is therefore heightened when one stops to take a synchronic snapshot, only to find oneself totally overwhelmed by the diachronicity in which embeddedness takes place. And here, if one makes the ontological situatedness of being coplanar with ‘capitalism’, a take on accelerationism should have critical/constructive applications in ‘nouveau’ capitalism. Enough drinking for now!!!
