Imagination insinuates the excess, a passing over into silence amidst the noise of the already traversed.
Month: December 2016
Mool-mantras, Doctrine of Accommodation and Phenomenology?
In the sacred writings of the Guru Granth Sahib, Guru Nanak (the first Guru of the Sikhs) utters the Mool-mantra, which occupies a central role, somehow depicting the entirety of Sikhism’s complex theology. It is often claimed that, for the common benefit of all, these experiences (utterances) are articulated without interpretation or distortion. They are yathapurvam akalpyat – expressed as they are seen. According to Vinoba Bhave, the conception of God in Guru Nanak is based on the concepts of Brahman and Aum in Vedanta. Brahman ‘is that from which the world originates’. It is the material, efficient and formal cause of the world. It is responsible for ‘the origin, sustenance and cessation of the world’ (Taittiriya Upanishad).
God is Ananda, i.e. joy or bliss. “From (God’s) joy does spring all this Creation, by joy is it maintained, towards joy does it progress, and into joy does it enter.” (Sadhana by Rabindranath Tagore, 45)
“This world is Whole. That World is Whole. From Whole comes the Whole. If you take away Whole from Whole, what remains is Whole.”
This reminds me so much of the Christian theological “Doctrine of Accommodation”. Theologians stressed a form of truth (e.g., angels, the good ones or the fallen ones, or even what they somehow linguistically represented) that was neither allegorical nor metaphorical; in a sense they really did look like how they appeared, but they were also not to be understood in an entirely literal way. This came to be known as the doctrine of accommodation and has occupied a central place in theology.
Are these Mool-mantras to be taken at their literal value? Somewhere, phenomenology seems to be at work. The comparison is not then really far-fetched. For instance, take Calvin‘s Commentary on Genesis. I quote in full:
If any one should inquire whether this vacuity did not previously exist, I answer, however true it may be that all parts of the earth were not overflowed by the waters; yet now, for the first time, a separation was ordained, whereas a confused admixture had previously existed. Moses describes the special use of this expanse, to divide the waters from the waters from which word arises a great difficulty. For it appears opposed to common sense, and quite incredible, that there should be waters above the heaven. Hence some resort to allegory, and philosophize concerning angels; but quite beside the purpose. For, to my mind, this is a certain principle, that nothing is here treated of but the visible form of the world. He who would learn astronomy, and other recondite arts, let him go elsewhere. Here the Spirit of God would teach all men without exception; and therefore what Gregory declares falsely and in vain respecting statues and pictures is truly applicable to the history of the creation, namely, that it is the book of the unlearned. The things, therefore, which he relates, serve as the garniture of that theater which he places before our eyes. Whence I conclude, that the waters here meant are such as the rude and unlearned may perceive. The assertion of some, that they embrace by faith what they have read concerning the waters above the heavens, notwithstanding their ignorance respecting them, is not in accordance with the design of Moses. And truly a longer inquiry into a matter open and manifest is superfluous. We see that the clouds suspended in the air, which threaten to fall upon our heads, yet leave us space to breathe. (Commentary on Genesis 1:6)
Here Calvin is talking about the firmament, and he denies allegory or other sophisticated forms of interpretation, instead affirming a general phenomenological reading. Moses is not making a sort of “cosmological” or “astronomical” claim but is instead describing things according to the way that they look from the ordinary human perspective. Moses does not intend any strict claim about physical science, and thus there is no need to “embrace by faith” something which is not being asserted or taught. He is merely speaking of “the visible form of the world.”
Sadie Plant: Note Quote

“Humanity tends towards the organized body, the body with organ, the male member. The modern human is dressed in blue, as far from the red-blooded feminine as it is possible to be, gendered and sexed in a world still solidified in the mold of brotherhood and patrilineal inheritance. The female body is already diseased, on the way to the limits of life, while the phallus functions as the badge of membership, or belonging – to one’s self, society, species.”
NeoCameralism? Shunting it Mainline….Exitocracy or Otherwise?
You cannot own it, if you cannot control it.
Corollary
You own something, if you alone control it.
This control is assured either by one’s own powers of overt or covert violence, or by similar violence delegated from a higher form of authority. The twist is that in the latter case it is secondary property, whereas in the former it is primary, a.k.a. sovereign, property.
As you’ve probably guessed by now, I am hinting at NeoCameralism. The sovereign power, or sovereign corporation (there is hardly any harm in arriving at this complicit identity), is alone able to ensure its own property rights. Another complicit identity would lie in the sovereign’s might and rights. This is absolute, in so far as it is primary, and subordinate rights or secondary properties cascade down the social hierarchies. NC is nothing but a systemic and systematic realization of this reality. Or, as someone, somewhere, might have it: The most compelling idea in the sprawling Moldbuggian corpus is “NeoCameralism”. NeoCameralism is a close relative of Patri’s theory of Dynamic Geography, in that both are forms of practical market anarchism. Its reasoning is straightforward: If you believe that government should be given incentive to govern well, then modern democracy must be thrown out. Simply trying harder to elect better candidates will not fix the familiar structural problems of democracy, such as plundering special interest groups, ever-expanding bureaucracy, and election contests with the intellectual content of an American Idol finale. However, if you think that security service providers (AKA “governments”) form geographic monopolies (500,000 years of human history provides good evidence for this), then the Rothbard/Hoppe/Friedman vision of anarcho-capitalism with a competitive market in security must also be set aside as a pipe dream.
NeoCameralism is the idea that a sovereign state or primary corporation is not organizationally distinct from a secondary or private corporation. Thus we can achieve good management, and thus libertarian government, by converting sovereign corporations to the same management design that works well in today’s private sector – the joint-stock corporation.
One way to approach NeoCameralism is to see it as a refinement of royalism, an ancient system in which the sovereign corporation is a sort of family business. Under NeoCameralism, the biological quirks of royalism are eliminated and the State “goes public,” hiring the best executives regardless of their bloodline or even nationality.
Or you can just see NeoCameralism as part of the usual capitalist pattern in which services are optimized by aligning the interests of the service provider and the service consumer. If this works for groceries, why shouldn’t it work for government? Who in their right mind doesn’t have a hard time accepting the possibility that democratic constitutionalism would generate either lower prices or better produce at Safeway …
I am fully aware of the nuances mushrooming at the tiniest crack in using the words control, might, and rights. And why would I mind? I wouldn’t, since to parenthesize these words into isolation would beg the question of why NeoCameralism, and eventually, why this exercise at all. I shouldn’t be held culpable of insouciance. And I am not; I am acquitted, since, moving on, the plausible way to alienate ownership, which is no doubt a legal contract, is by entering into negotiations, by trading it away. A possibility of non-alienable political responsibility just has no scope here, nothing to offer substantially in terms of rights over property, whether primary or secondary. If I cannot legislate, I cannot take a free exit, and if I cannot take a free exit, I can in no way escape the despotism of NeoCameralism. I only commercialize sovereignty, and in turn my very belongingness in this relationship with the despot.
Free markets are better than communism, but owned markets are better than free markets. Free markets are only good compared to communism, which is the dichotomy that’s been set up by our elites in order to guide us slowly towards communism. I mean socialism. It all comes back to sovereignty. Capitalism is only good insofar as it makes people responsible for their own property and profits, i.e. insofar as it makes them responsible and provides an incentive to virtue. But then it is not the only way to do so, and the reason it is good is incidental, not central. NeoCameralism is a thought experiment that is useful for explaining NRx ideas, especially useful as a crutch between techno-libertarian Alzheimer’s disease and normal, sane reactionary thinking. Moldbug today would not endorse it, nor would the Moldbug that was reading Carlyle studiously a few years back. There are certainly difficulties with NeoCameralism. Transitioning to a neocameralist world is the first hurdle that springs to mind. Moldbug never clearly spells out a plausible strategy for getting from here to there. Then there is the minor matter of how shareholders in the government will keep the management under control when management presumably has all the guns. After all, in a democracy corporate shareholders can ask the government to enforce contractual obligations when management shirks its duties. Hopefully you see the problem that occurs with this model when management runs the government. Moldbug offers some technological solutions to this problem that are interesting but unsatisfying… but, but, accelerate liberty via technology.
Democracy as Enemy, Democratic as Fanatic
Responses to The Coming Insurrection are ambivalent, with some claiming it to be an important volume of left-wing theory, while others see in it elements of anti-modern thought and extreme right-wing fundamentalism. The work takes its inspiration from the ideas of Carl Schmitt, and is often accused of perpetuating political violence against the rule of law and democracy. The accusation is not wholly unjustified, as the pamphlet contains an explicit call for violence to liberate territory from police occupation. Adherents of democracy are declared fanatics, with the very form of democracy cast as an enemy. The pamphlet, written anonymously, draws on Schmitt’s ideas of the state of emergency and the concept of the political. Heidegger is invoked for his resentment of technology and modernity. The authors take their impressions to the extreme, when even pretty innocuous-looking matters in the present-day scenario are equated with the relativism of imperialism, in turn dictated by the fundamentalism associated with the right wing (ala Arundhati Roy!!!). In his article in the FAZ, Nils Minkmar celebrates the antidemocratic manifesto as a “brilliantly penned diagnosis of our time” and speculates that it will become “the most important left-wing theory book of the age”. Of course the most questionable part of this statement is whether this is a leftist book at all. But the FAZ author was particularly impressed by the anti-modern sentiments it contained. Towards the end, though, he does admit that the “black SUVs” that will follow on the heels of the state’s destruction will no doubt be worse than what we have now. The present intellectual zeitgeist is permeated with the branding of any western ideal as authoritarian or totalitarian (ala Roy again, Patkar, and the holy comrades of the left parties in India, to cite a few), tutored by a rhetoric that gate-crashes into logic. The ideas somehow mirror Agamben’s The Coming Community. Agamben wrote in his magnum opus “Homo sacer”: “In modern democracies it is possible to state in public what Nazi biopoliticians did not dare to say.” With the help of Carl Schmitt’s theories on the “state of emergency” and Foucault’s concept of biopolitics, he places human rights and race laws, intensive care units and concentration camps on a par. It appears that the book is a naive translation of Agamben’s theories. The way to combat the so-called “normalisation of life” in modern societies is to seek out invigorating salvation in a “state of emergency”, a far cry from democracy, the rule of law and the market economy – this idea of a better age minus all coordinates of the present day comes from Schmitt and Heidegger, as does the search for hidden totalitarianism within democracy.
Quantum Entanglement, Post-Selection and Time Travel
If the Copenhagen interpretation of quantum mechanics is to be believed, nothing exists in reality until a measurement is carried out. In the delayed-choice version of the double-slit experiment proposed by John Wheeler, post-selection can be made to work after the experiment is finished, by delaying the observation until after the photon has purportedly passed through the slits. Now, if post-selection is to work, there must be a change in the properties in the past. This has been experimentally demonstrated by physicists like Jean-François Roch at the Ecole Normale Supérieure in Cachan, France. This is weird, but invoking quantum entanglement and throwing it up for grabs against the philosophical principle of causality is what surprises. If the experimental set-up impacts the future course of the outcome, quantum particles, in a most whimsical manner, are susceptible to negate it. This happens due to the mathematics governing these particles, which enables, or rather disables, them to differentiate between the courses they are supposed to undertake. In short, what happens in the future could determine the past….
….If particles are caught up in quantum entanglement, the measurement of one immediately affects the other, some kind of Einsteinian spooky action at a distance.
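As a toy illustration of what that correlation amounts to formally, here is a small numpy sketch, my own and not tied to Wheeler’s or Roch’s setups, which computes the joint measurement statistics of a maximally entangled pair. The expectation value depends only on the relative angle between the two measurement settings; by itself this licenses correlation, not signalling or retrocausation.

```python
import numpy as np

# Pauli operators and the Bell state (|00> + |11>) / sqrt(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

def spin_op(theta):
    # Spin measurement along an axis at angle theta in the x-z plane
    return np.cos(theta) * sz + np.sin(theta) * sx

def correlation(a, b):
    # Joint expectation value <A(a) (x) B(b)> in the Bell state
    op = np.kron(spin_op(a), spin_op(b))
    return float(np.real(bell.conj() @ op @ bell))

for a, b in [(0.0, 0.0), (0.0, np.pi / 4), (0.0, np.pi / 2)]:
    print(f"E(a={a:.2f}, b={b:.2f}) = {correlation(a, b):+.3f}")  # equals cos(a - b)
```

Running it gives +1.000, +0.707 and +0.000 for the three angle pairs: whatever is measured on one side, the statistics on the other are fixed by the relative setting alone.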
Cyclonopedia, Note Quote

Ganesha’s Trunk as a Misplaced Phallus
Psychoanalysis immunizes itself behind the impregnable wall of the symbolic[ism], despite creating a level playing field of multiple symbolic connotations. In a nutshell, one cannot afford to discount the plethora of analyses. This is at once a boon and a bane, and the two could even co-exist, be compossible.
The trunk of Ganesha as a weak phallus, in the wrong place and therefore devoid of all potency, is on the one hand accorded a co-planar ontology with his being a transgender figure thriving on oral sex. I do understand the repulsion caused in the insider by such a treatment of a deity, but it makes good sense if one goes by the Lacanian dictum of the ‘Symbolic’ as the radical alterity, the radical ‘Other’. Therein, the dislodging of the ‘insider’ by the insinuating alterity, or ‘other’, through the repulsion it causes could meet its counterpoint only in the mirror (as in lateral inversion), where the roles would undergo a switching of responsibilities, ‘insider’ versus ‘other’. Such a reaction from the ‘insider’ to the ‘other’ would succeed only in fighting a tenuous proof with something equally tenuous, or would raise the stakes. Personally, I find such a psychoanalytical reading fecund for thought, even if on the surface most of it appears preposterous.
Rancière and Synaptic Subjectivity
Another important type of subjectivity is ‘synaptic subjectivity’, which finds its genesis in the political. Synaptic subjects are the mediators in the relationships that bind together groups with varying, and at times conflicting, views and interests. These subjects help keep the group cohesive by complexifying their engagement as mediators, thus enabling the appearance of a multiple and differential subjectivity. This subjectivity is attributed to Rancière, for whom these heterogeneous and porous subjectivities, specific to interstitial environments, allow each person to have multiple transits and successive, temporary adherences within different cultural, professional and social contexts, opening up the possibility of a new emergence from this ecliptic subject. The ecliptic subject, further on, generates a subjectivity that is continually organizing itself through multiple transversalities, constituting a ‘synaptic subject’ that can function like a synapse: a body that receives and transmits flows.
Permeability of Autopoietic Principles (revisited) During Cognitive Development of the Brain
Distinctions and binaries have their problematics, and neural networks are no different when one such attempt is made regarding the information that flows from the outside into the inside, where interactions occur. The inside of the system has to cope with the outside through mechanisms that are either predefined for the system under consideration, or through having no independent internal structure at all to begin with. The former mechanism results in a loss of adaptability, since all possible eventualities would have to be catered for in the fixed, internal structure of the system. The latter is guided by conditions prevailing in the environment. In either case, learning to cope with environmental conditions is the key to the system’s reaching any kind of stability. But how would a system respond to its environment? According to the ideas propounded by Changeux et al., this is possible in two ways, viz.,
- an instructive mechanism, directly imposed by the environment on the system’s structure, and
- a selective mechanism, Darwinian in its import, which helps maintain order as a result of interactions between the system and the environment. The environment facilitates the reinforcement, stabilization and development of the structure, without in any way determining it.
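A minimal sketch in code of how these two mechanisms differ, using made-up toy data: the delta rule, which requires an explicit teacher signal from the environment, stands in for the instructive mechanism, while Oja’s Hebbian rule, which only reinforces and stabilizes structure already latent in the input, stands in for the selective one. Neither rule is drawn from Changeux; both are generic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 3)) * np.array([3.0, 1.0, 0.5])  # anisotropic toy inputs
teacher = np.array([0.5, -1.0, 2.0])
target = x @ teacher               # explicit signal, seen only by the instructive rule

w_instructive = np.zeros(3)
w_selective = rng.normal(scale=0.01, size=3)
eta = 0.02

for xi, ti in zip(x, target):
    # Instructive: the environment imposes structure via an error signal (delta rule).
    w_instructive += eta * (ti - w_instructive @ xi) * xi
    # Selective: co-activity is merely reinforced; Oja's decay term plays the role of
    # stabilizing selection, so the weights settle onto a direction favoured by the
    # input statistics alone (here, the dominant axis of variation).
    y = w_selective @ xi
    w_selective += eta * y * (xi - y * w_selective)

print("instructive weights:", np.round(w_instructive, 2))  # approaches the teacher vector
print("selective weights:  ", np.round(w_selective, 2))    # approaches a unit principal direction
```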
Exported to neural networks, these two distinct ways take on the connotations of supervised and unsupervised learning respectively. The position of Changeux et al. is rooted in rule-based, formal and representational formats, and is thus criticized by the likes of Edelman. According to him, in an information-processing model of the nervous system (his analyses are based upon nervous systems), neural signals are taken in from the periphery, and thereafter encoded in various ways to be subsequently transformed and retransformed during processing, generating an output. This not only puts extreme emphasis on formal rules, but also makes a claim about the nature of memory, which is taken to occur through the representation of events via the recording or replication of their informational details. Although Edelman’s analysis takes the nervous system as its centre, the informational modeling approach he takes to task is blanketed over the ontological basis that forms the fabric of the universe. Connectionists have no truck with this approach, as can easily be discerned from a long quote Edelman provides:
The notion of information processing tends to put a strong emphasis on the ability of the central nervous system to calculate the relevant invariance of a physical world. This view culminates in discussions of algorithms and computations, on the assumption that brain computes in an algorithmic manner…Categories of natural objects in the physical world are implicitly assumed to fall into defined classes or typologies that are accessible to a program. Pushing the notion even further, proponents of certain versions of this model are disposed to consider that the rules and representation (Chomsky) that appear to emerge in the realization of syntactical structures and higher semantic functions of language arise from corresponding structures at the neural level.
Edelman is aware of the shortcomings of information-processing models, and therefore takes a leap into the connectionist fold with his proposal of a brain consisting of a large number of undifferentiated but connected neurons. At the same time, he gives a lot of credence to the organization occurring during the brain’s developmental phases. He lays out the following principles of this population thinking in his Neural Darwinism: The Theory of Neuronal Group Selection:
- The homogeneous, undifferentiated population of neurons is epigenetically diversified into structurally variant groups through a number of selective processes; these variant groups form the “primary repertoire”.
- Connections among the groups are modified by signals received during interactions between the system and the environment housing it. Such modifications, occurring during the post-natal period, become functionally active for future use and form the “secondary repertoire”.
- With the setting up of “primary” and “secondary” repertoires, groups engage in interactions by means of feedback loops as a result of various sensory/motor responses, enabling the brain to interpret conditions in its environment and thus act upon them.
“Degenerate” is what Edelman calls the neural groups in the primary repertoire to begin with. This entails the possibility of a significant number of non-identical variant groups. There is another dimension to this as well, in that the non-identical variant groups are distributed uniformly across the system. Within Edelman’s nervous-system case study, degeneracy and distributedness are crucial features for denying the localization of cortical functions on the one hand, and the existence of hierarchical processing structures in a narrow sense on the other. Edelman’s account of cortical map formation incorporates the generic principles of autopoiesis. Cortical maps are collections (areas) of minicolumns in the brain cortex that have been identified as performing a specific information-processing function.
In Edelman’s theory, neural groups have an optimum size that is not known a priori, but develops spontaneously and dynamically. Within the cortex, this is achieved by means of inhibitory connections spread over the horizontal plane, while excitatory ones are laid out vertically, thus enabling neuronal activity to be concentrated in the vertical plane rather than the horizontal one. Hebb’s rule facilitates the utility function of such a group. Impulses are carried to neural groups, thereby activating them and subsequently altering synaptic strengths. During the ensuing process, a correlation gets formed between neural groups, with possible overlapping of messages as a result of the synaptic activity generated within each group. This correlational activity could be selected through frequent exposure to such overlaps, and once selected, the group might start to exhibit its activity even in the absence of inputs or impulses. This selection is nothing but memory, and is always used in learning procedures. A lot depends upon the frequency of exposure: if it is on the lower scale, the memory, or selection, could simply fade away and be made available for a different procedure. No wonder forgetting is so often referred to as a precondition for memory. Fading away might be a useful criterion for reusing the freed storage during the developmental process, but at the stage when groups of the right size are in place and ready for selection, weakly interacting groups meet the fate of elimination. The elimination and retention of groups depends upon what Edelman refers to as the vitality principle, wherein sensitivity to the historical process gains legitimacy, and extant groups influence the formation of new ones. The reason for including Edelman’s case was specifically to highlight the permeability of self-organizing principles during the cognitive development of the brain, and also to pit the superiority of neural-network/connectionist models in comprehending brain development against traditional rule-based, expert and formal modeling techniques.
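The selection-as-memory dynamic described above can be caricatured in a few lines of code. This is a deliberately crude toy of my own, not Edelman’s model: each “group” is reduced to a single strength value, frequently re-activated groups are reinforced towards saturation, unused ones fade, and groups left weakly interacting are eliminated; the exposure frequencies, learning rates and elimination threshold are all arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups = 20
strength = np.full(n_groups, 0.5)   # stand-in for each group's synaptic strength
exposure = rng.random(n_groups)     # how frequently each group's input pattern recurs

for _ in range(500):
    active = rng.random(n_groups) < exposure
    strength[active] += 0.02 * (1.0 - strength[active])  # Hebbian-style reinforcement
    strength[~active] -= 0.01 * strength[~active]        # fading of unused selections

strength[strength < 0.3] = 0.0      # weakly interacting groups are eliminated
survivors = np.flatnonzero(strength)
print(f"retained {survivors.size} of {n_groups} groups; "
      f"mean exposure of retained groups = {exposure[survivors].mean():.2f}")
```

The groups that persist are, on average, those whose input patterns recurred most often: frequency of exposure, fading and elimination together do the work that the prose above attributes to selection.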
In order to understand the nexus between brain development and environment, it is worth carrying Edelman’s analysis further. It is a commonsense belief that structural changes in the brain are linked with environmental effects. Even if one takes recourse to Darwinian evolution, these changes are either delayed due to systemic resistance to letting such effects take over, or, in not so Darwinian a fashion, the effects are a compounded resultant of the groups embedded within the network. On the other hand, Edelman’s cortical map formation is not confined to processes occurring within the brain’s structure alone, but is also realized by how the brain explores its environment. This aspect is nothing but motor behavior in its nexus between brain and environment, and is strongly voiced by Cilliers when he calls attention to it:
The role of active motor behavior forms the first half of the argument against abstract, solipsistic intelligence. The second half concerns the role of communication. The importance of communication, especially the use of symbol systems (language), does not return us to the paradigm of objective information-processing. Structures for communication remain embedded in a neural structure, and therefore will always be subjected to the complexities of network interaction. Our existence is both embodied and contingent.
Edelman is criticized for paying no regard to replication in his theory, even though replication is a strong pillar of natural selection and learning. Recently, attempts to incorporate replication in the brain have been undertaken, and strong indications that neuronal replicators using Hebb’s learning mechanism show more promise than natural selection alone are in the limelight (Fernando, Goldstein and Szathmáry). Such autopoietic systems, when given a mathematical description and treatment, could be modeled on a computer or a digital system, thus helping to give insights into a world pregnant with complexity.
Autopoiesis goes directly to the heart of anti-foundationalism. This is because the epistemological basis of basic beliefs is not paid any due respect or justificatory support in the autopoietic system’s insistence on internal interactions and external contingent factors obligating the system to undergo continuous transformations. If autopoiesis can survive wonderfully well without any transcendental intervention or a priori definition, it has parallels running within French theory. If anti-foundationalism is the hallmark of autopoiesis, so is anti-reductionism, since it is well-nigh impossible to have meaning explicated in terms of atomistic units, especially when the systems are already anti-foundationalist. Even in biologically contextual terms, a mereology, according to Garfinkel, is emergent as a result of the complex interactions that go on within the autopoietic system. Garfinkel says,
We have seen that modeling aggregation requires us to transcend the level of the individual cells to describe the system by holistic variables. But in classical reductionism, the behavior of holistic entities must ultimately be explained by reference to the nature of their constituents, because those entities ‘are just’ collections of the lower-level objects with their interactions. Although, it may be true in some sense that systems are just collections of their elements, it does not follow that we can explain the system’s behavior by reference to its parts, together with a theory of their connections. In particular, in dealing with systems of large numbers of similar components, we must make recourse to holistic concepts that refer to the behavior of the system as a whole. We have seen here, for example, concepts such as entrainment, global attractors, waves of aggregation, and so on. Although these system properties must ultimately be definable in terms of the states of individuals, this fact does not make them ‘fictions’; they are causally efficacious (hence, real) and have definite causal relationships with other system variables and even to the states of the individuals.
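Garfinkel’s “entrainment” can be made concrete with a stock example that is not his own: the Kuramoto model of coupled oscillators, in which each unit responds only to a holistic mean-field order parameter and yet the population locks into synchrony. A minimal sketch with arbitrary parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
n, K, dt = 100, 2.0, 0.05
omega = rng.normal(0.0, 0.5, n)          # intrinsic frequencies: the "parts"
theta = rng.uniform(0.0, 2 * np.pi, n)   # individual phases

for step in range(401):
    # Each oscillator couples only to the holistic order parameter (r, psi),
    # not to an enumerated list of the other parts.
    order = np.mean(np.exp(1j * theta))
    r, psi = np.abs(order), np.angle(order)
    if step % 100 == 0:
        print(f"t = {step * dt:5.1f}   entrainment r = {r:.3f}")
    theta += dt * (omega + K * r * np.sin(psi - theta))
```

The order parameter r is definable in terms of the individual phases, yet it is the causally efficacious quantity each oscillator actually responds to, which is exactly the kind of holistic variable Garfinkel has in mind.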
Autopoiesis gains vitality when systems thinking opens up avenues for accepting contradictions and opposites rather than merely trying to get rid of them. Vitality is centered around a conflict, and ideally comes into a balanced existence when such a conflict, or strife, helps facilitate consensus building, or cooperation. If such goals are achieved, the analysis of complexity theory gets a boost, and moreover, by being sensitive to autopoiesis, an appreciation of the real lebenswelt gets underlined. Memory and history are essentials for complex autopoietic systems, whether biological and/or social, and this can be fully comprehended in some quite routine situations, where systems that are identical in most respects but differ in their histories would have different trajectories in responding to the situations they face. Memory does not determine the final description of the system, since it is itself susceptible to transformations, and what really gets passed on are the traces. The same susceptibility to transformations applies to the traces as well. Memory is not stored in the brain as discrete units, but rather in a distributed pattern, and this is the pivotal characteristic of self-organizing complex systems over any other form of iconic representation. This property of transformation associated with autopoietic systems is enough to suspend the process between activity and passivity, in that the former is determination by the environment and the latter is impact upon the environment. This is really important in autopoiesis, since the distinction between inside and outside, and active and passive, is difficult to discern, and moreover this disappearance of distinction is a sufficient case to vouch against any authoritative control residing within the system and/or emanating from any single source. Autopoiesis scores over other representational modeling techniques by its ability to self-reflect, or by the system’s ability to act upon itself. For Lawson, reflexivity disallows any static description of the system, since it is not possible to intercept the reflexive moment, and it also disallows a complete description of the system at a meta-level. Even though a meta-level description can be construed, it yields only frozen frames or snapshots of the system at given instants, and hence ignores the temporal dimensions the system undergoes. For those to be taken into account, and for the complexity within the system to be measured, the roles of activity and passivity cannot be ignored at any cost, despite the great difficulties they present in modeling. But is this not really a blessing in disguise, for should the model of a complex system not be retentive of the complexity of the real world? Well, the answer is yes, it is.
Somehow, the discussion till now still smells of anarchy within autopoiesis, and if there is no satisfactory account of predictability and stability within the self-organizing system, the fears only get aggravated. A system which undergoes huge effects when small changes or alterations are made in the causes is definitely not a candidate for stability, and autopoietic systems are precisely such. Does this mean that they are unstable, or does it call for a reworking of the notion of stability? This is philosophically contentious, no doubt. Instability could be a result of probabilities, but complex systems have to fall outside the realm of such probabilities. What happens in complex systems is a result of complex interactions due to a large number of factors that need not be logically compatible. At the same time, stochasticity has no room here, for it serves as an escape route from the annals of classical determinism, and hence a theory based on such escape routes could never be a theory of self-organization (Pattee). Stability is closely related to the ability to predict, and if stability is something very different from what classical determinism says it is, the case for predictability should be no different. The problems with prediction are gross, as is echoed in the words of Krohn and Küppers:
In the case of these ‘complex systems’ (Nicolis and Prigogine), or ‘non-trivial’ machines, a functional analysis of input-output correlations must be supplemented by the study of ‘mechanisms’, i.e. by causal analysis. Due to the operational conditions of complex systems it is almost impossible to make sense of the output (in terms of the functions or expected effects) without taking into account the mechanisms by which it is produced. The output of the system follows the ‘history’ of the system, which itself depends on its previous output taken as input (operational closure). The system’s development is determined by its mechanisms, but cannot be predicted, because no reliable rule can be found in the output itself. Even more complicated are systems in which the working mechanisms themselves can develop according to recursive operations (learning of learning; invention of inventions, etc.).
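The point that “no reliable rule can be found in the output itself” can be illustrated, far more crudely than Krohn and Küppers intend, with the most hackneyed of toy systems: two runs of the logistic map whose initial conditions differ by one part in a billion, and whose outputs nonetheless decorrelate within a few dozen iterations. A sketch:

```python
# Two trajectories of the logistic map x -> r*x*(1 - x), identical but for a 1e-9 offset.
r = 3.9                      # a parameter value in the chaotic regime
x, y = 0.4, 0.4 + 1e-9
for t in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if t % 10 == 0:
        print(f"step {t:2d}:  x = {x:.6f}   y = {y:.6f}   |x - y| = {abs(x - y):.1e}")
```

Watching only the output stream of either run, no rule presents itself by which the other run, or even the same run restarted with an imperceptibly different past, could be predicted.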
This passage from Krohn and Küppers is clearly indicative of the predicaments encountered in attempting to provide explanations of predictability. Although it is quite difficult to get rid of these predicaments, attempts to mitigate them, so as to keep noise from distorting or disturbing the stability and predictability of such systems, are always in the pipeline. One such attempt lies in collating or mapping constraints onto a real epistemological fold of history and environment, and thereafter applying it to studies of the social and the political. This is voiced very strongly, as a parallel metaphoric, in Luhmann, when he draws attention to the following:
Autopoietic systems, then, are not only self organizing systems. Not only do they produce and eventually change their own structures but their self-reference applies to the production of other components as well. This is the decisive conceptual innovation. It adds a turbo charger to the already powerful engine of self-referential machines. Even elements, that is, last components (individuals), which are, at least for the system itself, undecomposable, are produced by the system itself. Thus, everything which is used as a unit by the system is produced as a unit by the system itself. This applies to elements, processes, boundaries and other structures, and last but not least to the unity of the system itself. Autopoietic systems, of course, exist within an environment. They cannot exist on their own. But there is no input and no output of unity.
What this entails for social systems is that they are autopoietically closed, in that, while they rely on resources from their environment, the resources in question do not become part of the system’s operation. So the system never tries its luck at adjusting to superficial changes, frittering away its available resources in the process, instead of attending to trends that are not superficial. Were a system ever to attempt such a fall from grace, acclimatizing itself to these fluctuations, a choice that is at once ethical and contextual would have to be resorted to. Within such distributed systems, a central authority is paid no heed, since such a scenario could result in a general degeneracy of the system as a whole. Instead, what gets highlighted is the ethical choice of decentralization, to ensure the system’s survivability and dynamism. Such an ethical treatment is no less altruistic.