State-Space Trajectory and Basin of Attraction

A system that undergoes unexpected and/or violent upheaval is usually said to be facing a rupture, of a kind presumed comprehensible by analyzing the subsystems that make up the systemic whole. Although such analysis could prove an important tool, it seldom copes with the aporetic situations that arise from the unthought behavior the system exhibits. This behavior emanates from localized zones and sends out lines of rupture that keep the whole system from being comprehended successfully. To overcome this predicament, one must legitimize the existence of what Bak and Chen refer to as autopoietic or self-organized criticality.

…composite systems naturally evolve to a critical state in which a minor event starts a chain reaction that can affect any number of elements in the system. Although composite systems produce more minor events than catastrophes, chain reactions of all sizes are an integral part of the dynamics. According to the theory, the mechanism that leads to minor events is the same one that leads to major events. Furthermore, composite systems never reach equilibrium but evolve from one meta-stable state to the next…self-organized criticality is a holistic theory: the global features, such as the relative number of large and small events, do not depend on the microscopic mechanisms. Consequently global features of the system cannot be understood by analyzing the parts separately. To our knowledge, self-organized criticality is the only model or mathematical description that has led to a holistic theory for dynamic systems.
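A minimal way to see this holism at work is the sandpile model that Bak and his collaborators used to introduce self-organized criticality. The sketch below is only an illustration under assumed parameters (the grid size, number of grain drops and toppling threshold are arbitrary choices, not anything prescribed by the quoted passage): grains are added one at a time, and the very same toppling rule produces both the many minor avalanches and the rare system-wide ones.

```python
import random
from collections import Counter

def sandpile_avalanches(size=20, drops=5000, threshold=4):
    """Bak-Tang-Wiesenfeld sandpile: drop single grains and record the size
    (number of topplings) of the avalanche each drop triggers."""
    grid = [[0] * size for _ in range(size)]
    sizes = []
    for _ in range(drops):
        # drive: add one grain at a random site
        x, y = random.randrange(size), random.randrange(size)
        grid[x][y] += 1
        # relax: topple every site that reaches the threshold
        avalanche = 0
        stack = [(x, y)]
        while stack:
            i, j = stack.pop()
            while grid[i][j] >= threshold:
                grid[i][j] -= threshold
                avalanche += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < size and 0 <= nj < size:  # grains at the edge fall off
                        grid[ni][nj] += 1
                        if grid[ni][nj] >= threshold:
                            stack.append((ni, nj))
        sizes.append(avalanche)
    return Counter(sizes)

if __name__ == "__main__":
    histogram = sandpile_avalanches()
    for s in sorted(histogram)[:10]:
        print(f"avalanche size {s}: {histogram[s]} occurrences")
```

Running it shows the signature Bak and Chen describe: small avalanches vastly outnumber catastrophic ones, yet both are produced by the identical microscopic rule.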

Accepting that this criticality exists has an affirmative impact on the autopoietic system as it moves towards the aptly named critical point, at which a single event is loaded with a plethora of effects. This multiplicity is best captured through state-space descriptions or diagrams, with their uncanny characteristic of devoting a separate dimension to each independent variable. Diagrammatically, a state-space or Wuensche diagram looks like this:

[Figures: Wuensche-style state-space / basin-of-attraction diagrams for random Boolean networks]

In all complex systems simulations, at each moment the state of the system is described by a set of variables. As the system is updated over time, these variables undergo changes that are influenced by the previous state of the entire system. System dynamics can be viewed as tabular data depicting the changes in variables over time. However, it is hard to analyze system dynamics just by looking at the changes in these variables, as causal relationships between variables are not readily apparent. By removing all the details about the actual state and the actual temporal information, we can view the dynamics as a graph with nodes describing states and links describing transitions. For instance, software applications can have a large number of states, and problems occur when they reach uncommon or unanticipated states. Being able to visualize the entire state space, and quickly comprehend the paths leading to any particular state, allows more targeted analysis: common states can be thoroughly tested, and uncommon states can be identified and artificially induced. State-space diagrams allow numerous insights into system behaviour; in particular, some states of the system can be shown to be unreachable, while others are unavoidable. Their applicability extends to any situation in which you have a model or system that changes state over time and you want to examine the abstract dynamical qualities of these changes, for example social network theory, gene regulatory networks, urban and agricultural water usage, and concept maps in cognition and language modeling.

In such a scenario, every state of the system is represented by a unique point in the state-space, and the dynamics of the system are mapped by trajectories through the state-space. When these trajectories converge on a point, they are said to fall into a basin of attraction, the set of states that flow towards an attractor, and it is at the attractor that the system reaches stability.

[Figure: trajectories in state-space converging on a basin of attraction]
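As a concrete, if toy, illustration of the last two paragraphs, the sketch below treats a system as a deterministic update rule over a small finite state set, builds the state-transition graph, and reads off each attractor together with its basin. The three-bit update rule here is an arbitrary invention for the example, not a model of any particular system.

```python
def transition_graph(states, update):
    """Map every state to its successor under the (deterministic) update rule."""
    return {s: update(s) for s in states}

def attractors_and_basins(graph):
    """Follow each trajectory until it revisits a state; the repeating cycle is the
    attractor, and every starting state belongs to that attractor's basin."""
    basins = {}
    for start in graph:
        seen = []
        s = start
        while s not in seen:
            seen.append(s)
            s = graph[s]
        cycle = tuple(seen[seen.index(s):])
        # normalise the cycle so the same attractor always gets the same key
        key = min(cycle[i:] + cycle[:i] for i in range(len(cycle)))
        basins.setdefault(key, set()).add(start)
    return basins

if __name__ == "__main__":
    # toy system: 3-bit states, each bit computed from the other two
    states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    update = lambda s: (s[1] & s[2], s[0] | s[2], s[0] & s[1])
    graph = transition_graph(states, update)
    for attractor, basin in attractors_and_basins(graph).items():
        print(f"attractor {attractor} <- basin of {len(basin)} states")
```

Each attractor printed is a point (or cycle) on which trajectories converge, and the size of its basin is the count of states that inevitably flow into it.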

But would this attractor phenomenon work for neural networks, where there are myriad nodes, each with its own corresponding state-space? The answer is in the affirmative, and that's because in stable systems only a few attractor points are present, pulling the system towards stability. On the contrary, if the system is not stable, utter chaos would reign supreme, and this is where autopoiesis as a phenomenon comes to the rescue by balancing perfectly between chaotic and ordered states. This balancing act is extremely crucial and sensitive, for on the one hand a chaotic system is too disordered to be beneficial, and on the other, a highly stable system suffers the handicap of dedicating a lot of resources towards reaching and maintaining its attractor point(s). Not only that, even a transition from one stable state to another would bring in sluggish responses in adapting to the environment, and that too at a heavy cost in perturbations. But self-organizing criticality takes care of such high costs by optimally utilizing available resources. And as the nodes possess unequal weights to begin with, the fight for superiority takes precedence, and this gets reflected in the state-space as well. If inputs are marked by variations, optimization through autopoietic criticality takes over; otherwise, the system settles down to a strong attractor or attractors.
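One standard way of making the claim about attractors in neural networks tangible is a Hopfield-style network, in which stored patterns become attractor points and noisy inputs relax into the nearest basin. The sketch below is a minimal illustration under assumed values; the patterns, network size and the single flipped bit are arbitrary choices, not anything fixed by the text.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule: each stored pattern becomes an attractor."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)                      # no self-connections
    return w / patterns.shape[0]

def recall(w, state, sweeps=20):
    """Asynchronous updates pull a noisy state into the nearest basin of attraction."""
    state = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if w[i] @ state >= 0 else -1
    return state

if __name__ == "__main__":
    patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                         [1, 1, 1, 1, -1, -1, -1, -1]])
    w = train_hopfield(patterns)
    noisy = patterns[0].copy()
    noisy[3] *= -1                              # perturb one node
    print("recovered stored pattern:", np.array_equal(recall(w, noisy), patterns[0]))
```

The perturbed input still lies inside the first pattern's basin, so the dynamics pull it back to the stored attractor, which is exactly the stability-through-few-attractors point made above.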

The nodes that interact in local zones are responsible for the macro-level effects, and according to Kauffman this is possible in simple networks by switching input values between “on” and “off”. In such cases, order is ensured with the formation of cores of stability that thereafter permeate the network and see to it that the system reaches stability by drawing other nodes into stability. In complex networks, nonlinearity is incorporated to adjust signals approaching the critical point. The adjustment sees to it that if the signals take on higher values the system slides into stability, and otherwise into chaos. Adjustment as a mechanism is therefore an important factor in how complex systems self-organize by hovering around criticality, which is what Lewin refers to as being “on the edge of chaos”.
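Kauffman's point can be rehearsed with a random Boolean network, where each node reads a few “on/off” inputs through a random lookup table. A common probe of order versus chaos is damage spreading: flip one node and see whether the perturbation dies out or percolates. The sketch below is a rough illustration under assumed settings (network size, connectivity values and seeds are my own choices); low connectivity tends to freeze into order, high connectivity tends towards chaos, with the critical regime in between.

```python
import random

def random_boolean_network(n, k, seed=None):
    """Kauffman-style NK network: each node reads k random inputs
    through a randomly chosen Boolean function (its lookup table)."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    def step(state):
        return tuple(
            tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
            for i in range(n)
        )
    return step

def damage_spread(n, k, steps=50, seed=1):
    """Flip one node and watch whether the perturbation dies out (order)
    or spreads through the network (chaos)."""
    step = random_boolean_network(n, k, seed)
    rng = random.Random(seed)
    a = tuple(rng.randint(0, 1) for _ in range(n))
    b = list(a)
    b[0] ^= 1                                   # single-node perturbation
    b = tuple(b)
    for _ in range(steps):
        a, b = step(a), step(b)
    return sum(x != y for x, y in zip(a, b)) / n  # fraction of nodes that differ

if __name__ == "__main__":
    for k in (1, 2, 4):
        print(f"K={k}: damaged fraction after 50 steps = {damage_spread(100, k):.2f}")
```

Exact numbers vary with the random seed, but the qualitative picture is Kauffman's: sparsely connected networks form frozen cores of stability, densely connected ones amplify the perturbation, and the interesting self-organizing behaviour sits near the boundary between the two.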

Autopoiesis & Pre-Determined Design: An Abhorrent Alliance

Self-organization has also been conflated with the idea of emergence, and indeed one can occur without the other, thus nullifying the thesis of a strong reliance between the two. Moreover, Western philosophical traditions have been quite vocal in their skepticism about emergence and order within a structure if there is no external agency present, either in the form of God or of some a priori principle. But these traditions are in for a rude shock, since there is nothing mystical about emergence, or even about self-organization (in cases where the two are used interchangeably). Not only is mysticism absent from self-organization; even stochasticity seems to be a missing link in the said principle. Although the examples supporting the case vary according to the diverse environmental factors and the complexity inherent in the system, the ease of working through becomes apparent if self-organization or autopoiesis is viewed as the capacity exhibited by complex systems to change or develop their internal structure spontaneously, while adapting to and manipulating their environment in the ongoing process. This could very well be the starting point for a working definition of autopoiesis. A clear example of this kind would be the human brain (although animal brains would serve equally well), which shows a great proclivity to learn and to remember in the midst of its development. Language is another instance, since its development mandates a recognition of its structure, and this very structure, in its attempt to survive and develop further under variegated circumstances, must show allegiance to adaptability. Even if language is guided by social interactions between humans, the cultural space conducive to its development would have a strong affinity to a generalized aspect of the linguistic system. Now, let us build up the criteria for determining what makes a system autopoietic, and thereby see which features are generally held in common by autopoietic systems.


Self-organization abhors predetermined design, enabling the system to adapt dynamically and nonlinearly to regular and irregular changes in the environment. Even if emergence can occur without the underlying principle of self-organization, and vice versa, self-organization is in itself an emergent property of the system as a whole, with the individual components acting on local information and general principles. This is crucial, since the macroscopic behavior emerges out of microscopic interactions that are themselves carriers of scant information, yet have a direct component of complexity associated with them when viewed microscopically. This complexity is also reflected in their learning procedures, since for self-organizing systems it is only the experiential aspects of previous encounters, compared with recent ones, that help. And what would this increase in complexity entail? As a matter of fact, complexity is a reversal of entropy at the local level, putting the system at the mercy of a point of saturation. Moreover, since these systems are experiential, they are historical and hence based on memory. If that is the case, then it is safe to point out that these systems are diachronic in nature, and hence memory forms a vital component of emergence. Memory as anamnesis is unthinkable without selective amnesia, for piling up information trades off simultaneously against the relevance of information. Information that goes under the name of irrelevant is simply jettisoned, and the space created in this manner is utilized for cases pertaining to representation. Not only does representation make a back-door entry here; it is also convenient for this space to undergo the systematic patterning that is the hallmark of these systems. Although patterns are the hallmark of self-organization, this should in no way be taken to mean that these systems are stringently teleological, because the nonlinear functions that guide them at the same time shun any central authority, anthropomorphic centrality, or external designer. Linear functions could partake in localized situations, but at the macroscopic level they lose their vitality, and if complex systems stick to their loyalty towards linear functions, they fade away in the process of trying hard to avoid renegotiating this allegiance. Just as allegiance to nonlinearity is important for self-organization, so is an allegiance to anti-reductionism. That is because micro-level units have no knowledge of the macro-level effects, while at the same time these macro-level effects manifest themselves in clusters of micro-level units, thus ruling out any sort of independent level-based descriptions. The levels are stacked and intertwined, and most importantly, resistance to reductionist discourse in trying to explicate the emergence within the system in no way connotes resistance to materialist properties themselves emerging.


Clusters of information flow into the system from the external world, influence its internal makeup, and in turn trigger interactions in tune with Hebb’s law to alter weights. With the process in full swing, two possibilities take shape: the formation of a stable system of weights based on the regularity of a stable cluster, and the association between sets of these stable clusters as and when they are identified. This self-organizing principle is not only based on learning, but is at the same time cautious in sidelining inputs that are potentially futile for the system. Now, when such information flows into the system, sensors and/or transducers are set up that assign varying intensities of activity to some neurons and nodes over others. This is of course to be expected, and arriving at a regulated pattern of activity is the onus of adjusting the weights associated with those neurons/nodes. A very important factor is that the inflow of information from the external world must occur regularly, or at least occasionally, lest the self-organizing or autopoietic system fail to record such occurrences in memory and eventually fade out. Strangely, the patterns are arrived at without any reliance upon differentiated micro-level units to begin with. In parallel with neural networks, the nodes and neurons possess random values for their weights. The levels housing these micro-level nodes or neurons are intertwined to increase their strength, and in the absence of self-persisting positive feedback, the autopoietic system can in no way move away from the dictates of the undifferentiated state it began with. As the nodes are associated with random values of weights, there is a race to show superiority, arresting the contingent limitless growth under the influence of limitless resources and thereby giving the emerging structure some meaningful essence and existence. Intertwining of levels also results in consensus building, and therefore effectuates the meaning accorded to these emergent structures of autopoietic systems. But this consensus building could lead the system astray from complexity, and hence, to maintain the status quo, it is imperative for these autopoietic systems to have a correctional apparatus. The correctional apparatus spontaneously breaks the symmetry that leads the system away from complexity, either by introducing haphazard fault lines in connections, or through chaotic behaviors resulting from the sensitivity to minor fluctuations that banking on nonlinearity brings. Does this correctional apparatus in any way impact memory gained through the process of historicality? Apparently not. This is because of the distributed nature of memory storage, which is largely due to weights that are non-correspondingly symbolic. The weights that show their activity at the local scale are associated with memory storage through traces, and it is only due to this fact that information gets distributed over the system, generating robustness. With these characteristic features, autopoietic systems tend only towards organizing their structures to the optimum, while safely securing the complexity expected within the system.
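A bare-bones sketch of this weight dynamics, assuming a single unit and Oja's variant of Hebb's law (one standard way of arresting the otherwise limitless growth of Hebbian weights, not something the paragraph itself specifies), looks as follows: weights start random and undifferentiated, are strengthened by co-activity with a recurring input cluster, and settle into a stable, bounded configuration.

```python
import numpy as np

def hebbian_unit(inputs, epochs=100, lr=0.05, seed=0):
    """Single unit trained by Hebb's law with Oja-style decay: weights grow with
    co-activity between input and output, yet stay bounded instead of exploding."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=inputs.shape[1])   # undifferentiated random start
    for _ in range(epochs):
        for x in inputs:
            y = w @ x                                  # post-synaptic activity
            w += lr * y * (x - y * w)                  # Hebbian growth + normalising decay
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # a recurring input "cluster": noisy copies of one underlying pattern
    pattern = np.array([1.0, 1.0, 0.0, 0.0])
    inputs = pattern + rng.normal(scale=0.1, size=(200, 4))
    w = hebbian_unit(inputs)
    # the weights align with the recurring cluster (up to sign) and the norm settles near 1
    print(np.round(w, 2), "norm:", round(float(np.linalg.norm(w)), 2))
```

The decay term plays the role of the “correctional apparatus” gestured at above only in the narrow sense of keeping growth bounded; the competitive, multi-unit and symmetry-breaking aspects of the paragraph are not modelled here.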

Indian Thought and Language: a raw recipe imported in the east

In his Philosophy of History, Hegel mistakenly believed that

“Hindoo principles” are polar in character. Because of their polarity, which vacillates between “pure self-renouncing ideality, and that (phenomenal) variety which goes to the opposite extreme of sensuousness, it is evident that nothing but abstract thought and imagination can be developed”. However, from these mistaken beliefs, he rightly concluded that grammar in Indian thought “has advanced to a high degree of consistent regularity”. He was so impressed by these developments that he concluded that the development in grammar “has been so far cultivated that no language can be regarded as more fully developed than the Sanscrit”.

This is enlightening to an extent that even Fred Dallmayr, in his aptly titled opus on Hegel, “G. W. F. Hegel: Modernity and Political Thought”, would be most happy to corroborate. This is precisely what I would call ‘Philosophy in the times of errors’ (pun intended for Hegel and his arrogance).

About the nature of language, I quote in full the paragraph:

“Language is intimately related with our life like the warp and weft in a cloth. Our concepts determine the way we look at our world. Any aberration in our understanding of language affects our cognition. Despite the cardinal importance of language, the questions like “What is the nature of language?”, “What is the role of semantics and syntax of language?”, “What is the relationship between language, thought and reality?”, “How do we understand language—do we understand it by understanding each of the words in a sentence, or is the sentence a carrier of meaning?” and “How does the listener understand the speaker?” are the questions which have been an enigma.”

Philosopher Christopher Gauker created quite a ruckus with his influential yet critically attacked book “Words without Meaning”, and I quote a small review of it from the MIT Press (which published the book):

“According to the received view of linguistic communication, the primary function of language is to enable speakers to reveal the propositional contents of their thoughts to hearers. Speakers are able to do this because they share with their hearers an understanding of the meanings of words. Christopher Gauker rejects this conception of language, arguing that it rests on an untenable conception of mental representation and yields a wrong account of the norms of discourse.

Gauker’s alternative starts with the observation that conversations have goals and that the best way to achieve the goal of a conversation depends on the circumstances under which the conversation takes place. These goals and circumstances determine a context of utterance quite apart from the attitudes of the interlocutors. The fundamental norms of discourse are formulated in terms of the conditions under which sentences are assertible in such contexts.

Words without Meaning contains original solutions to a wide array of outstanding problems in the philosophy of language, including the logic of quantification, the logic of conditionals, the semantic paradoxes, the nature of presupposition and implicature, and the nature and attribution of beliefs.”


This is indeed a new way of looking at the nature of language, and the real question is whether anyone in the Indian tradition comes really close to doing this, i.e. a conflation of what Gauker says with that of the tradition. Another thing that I discovered, thanks to a friend of mine, is a book by Richard King on Indian philosophy. He writes of Bhartṛhari/भर्तृहरि thus:

“Bhartṛhari/भर्तृहरि, like his Lacanian and Derridean counterparts, rejects the view that one can know anything outside of language. There is an eternal connection between knowledge and language which cannot be broken”

If this identity between knowledge and the word were to disappear, knowledge would cease to be knowledge. (Bhartṛhari/भर्तृहरि himself)

Thus he equates Śabda and Jñāna, since the two become identical in nature.

Language could indeed be looked at as a function that takes whatever arguments are passed to it, a function that need not base itself upon communication as an end but could somehow serve communication as a means. I would call this the syndrome of language (or maybe even a deficit of language, to take the cue from the ‘phenomenological deficit’), since in whatever way language is looked upon, i.e. as a transcendental realization or as an immanent force (‘play’ would be better suited here) ‘in-itself’ for the sake of establishment, the possibilization of keeping communication out cannot be ruled out. Language would still be communico-centric for all that.

But there is another way of realizing (rather than establishing or introducing) relations between two relata, if it could indeed be thought of: we attribute language to ‘Objects’ themselves, and thereby treat even the untenability of interactions between any entities as resting on a relation that is linguistic in ways we might not comprehend.

No wonder I am getting drawn into the seriousness of objects as a way of realizing their interactions, their language, and all of this away from the mandates we (anthropocentrism) have hitherto set upon them.

Why I insist on objects having a language of their own, and one divorced from the realm of humans at that, is maybe the impact of Whitehead on my reading of Heidegger’s tool-analysis. It must be noted that Whitehead never shied away from embracing inanimate reality, or from using words like ‘thought’ and ‘emotions/feelings’ for the inner life of inanimate entities. If these things, in their hermeneutical exegesis, get attributed to inanimate entities, there can be no doubt of these ‘Objects’ possessing language which, as I said, is far away from human interference. This could indeed be a way of looking at language in the sense of transcendentalizing possibility, this time, maybe, through the immanent look…

Would the Large Exposure Framework (LEF) effectively nip NPAs/NPLs in the bud?

The narrative that non-state actors contribute to stressed assets (NPAs/NPLs) via their infrastructural forays, and are as complicit in debilitating the country’s banking health as their public counterparts (excuse me for the liberty of giving budgetary allocations the same stature), is slightly misplaced on one important count: accessibility to market mechanisms. The former are adventurous with their instruments (yes, more often than not resulting in ignominy, but ingenious in escaping towards a better invention of monetary and fiscal instruments), while the latter are hand-held, suffocated by Governmental interference and constrictions. The key point lies in the fact that the former and not the latter enjoy, as well as enjoin, connectography, the key pillar of globalised financial in- and out-flows. Turning to the LEF: I personally feel it has enough teeth to sink into narrowing risk exposure, but the sharpness of those teeth is still speculatively housed in the orbit.


Although regulatory mechanisms are on an uptick, these efforts are not yielding results to be optimistic about, and even where they are, the results are peripheral at best. Deterrents to large exposure of banks’ bad accounts are marred by a lenient approach towards: inadequate tangible collateral during credit exposure enhancements; promoter-equity contributions financed out of debt borrowed from another bank, leading to significant stress in debt servicing; and short-term borrowings made by corporations to meet working capital and current debt servicing obligations, exerting severe liquidity pressures as stress builds up in their portfolios. These are cursory introductions to the necessity of the Large Exposure Framework (LEF) of the Reserve Bank of India (RBI). This framework confines the banking sector’s exposure to highly leveraged corporates by prescribing an overarching ceiling on total bank borrowing by a corporate. The idea is to secure external sources of funding for corporates other than banks by introducing a cap on bank borrowings; with this cap in place, corporates would have to fend for their working capital by tapping market sources. How well this augurs for mitigating NPAs is yet to be scrutinised, as the framework takes effect from the next financial year. But the framework has scope for recognising risks, whereby banks would be able to draft additional standard-asset provisioning and higher risk weights for a specific borrower, no matter how leveraged the borrower is. The issue of concentrated sectoral risk would also get highlighted, even if the single- and group-borrower exposures of each bank remain within prescribed limits. The framework thus limits relentless lending to a borrower, reducing the risk of snowballing NPAs by throwing open avenues of market capitalisation on the one hand and encouraging more discernment about sectors vulnerable to fluctuating performance on the other. Its efficacy will have to stand the test of time.
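Purely as an illustration of how such a ceiling operates mechanically (the capital base, exposure figures and the 20 per cent limit below are placeholder numbers of my own, not the ones prescribed in the RBI framework), a bank could screen its borrowers against the cap as follows:

```python
def exposure_report(exposures, capital_base, ceiling_pct):
    """Flag borrowers whose aggregate exposure breaches the large-exposure ceiling,
    expressed as a percentage of the bank's eligible capital base."""
    ceiling = capital_base * ceiling_pct / 100.0
    report = {}
    for borrower, amounts in exposures.items():
        total = sum(amounts)                  # aggregate across all facilities
        report[borrower] = {
            "total_exposure": total,
            "ceiling": ceiling,
            "breach": total > ceiling,
        }
    return report

if __name__ == "__main__":
    # illustrative figures only (say, in crore); actual limits are set by the RBI
    exposures = {"Corporate A": [900, 450], "Corporate B": [300, 200, 100]}
    for borrower, row in exposure_report(exposures, capital_base=5000, ceiling_pct=20).items():
        print(borrower, row)
```

With these made-up numbers the first borrower breaches the cap and the second does not; the point of the framework is that the breaching borrower must then look to market sources rather than additional bank credit.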