Welfare Economics, or Social Psychic Wellbeing. Note Quote.


The economic system is a social system in which commodities are exchanged. Sets of these commodities can be represented by vectors x within a metric space X contained within the non-negative orthant R^N_+ of a Euclidean space of dimensionality N equal to the number of such commodities.

An allocation {xi}i∈N ⊂ X ⊂ R^N_+ of commodities in society is a set of vectors xi representing the commodities allocated within the economic system to each individual i ∈ N.

In questions of welfare economics, at least in all practical policy matters, the state of society is equated with this allocation, that is, s = {xi}i∈N, and the set of all possible information concerning the economic state of society is S = X. It is typically taken to be the case that an individual’s preference-information is simply their allocation xi, that is, si = xi. The concept of Pareto efficiency is thus narrowed to “neoclassical Pareto efficiency”, named for the school of economic thought in which it originates and to distinguish it from the weaker criterion.

An allocation {xi}i∈N is said to be neoclassical Pareto efficient iff ∄{x′i}i∈N ⊂ X & i ∈ N : x′i ≻ xi & x′j ≽ xj ∀ j ≠ i ∈ N.

A movement between two allocations, {xi}i∈N → {x′i}i∈N is called a neoclassical Pareto improvement iff ∃i∈N : x′i ≻ xi & x′j ≽ xj ∀ j ≠ i ∈ N.

For technical reasons it is almost always in practice assumed for simplicity that individual preference relations are monotonically increasing across the space of commodities.

If individual preferences are monotonically increasing then x′i ≽ xi ⇐⇒ x′i ≥ xi, and x′i ≻ xi ⇐⇒ x′i > xi.
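The two definitions above become mechanically checkable under this monotonicity assumption: weak preference reduces to a componentwise ≥ comparison of bundles, and strict preference to ≥ with at least one strict increase. A minimal sketch (the allocations are illustrative):

```python
import numpy as np

def weakly_preferred(x_new, x_old):
    """x_new ≽ x_old under monotone preferences: no commodity decreases."""
    return np.all(x_new >= x_old)

def strictly_preferred(x_new, x_old):
    """x_new ≻ x_old: no commodity decreases and at least one increases."""
    return np.all(x_new >= x_old) and np.any(x_new > x_old)

def pareto_improvement(alloc, alloc_new):
    """{xi} -> {x'i} is a neoclassical Pareto improvement iff someone is
    strictly better off and nobody is worse off."""
    better = [strictly_preferred(alloc_new[i], alloc[i]) for i in range(len(alloc))]
    not_worse = [weakly_preferred(alloc_new[i], alloc[i]) for i in range(len(alloc))]
    return any(better) and all(not_worse)

alloc     = np.array([[1.0, 2.0], [3.0, 1.0]])   # two individuals, two commodities
alloc_new = np.array([[1.0, 2.0], [4.0, 1.0]])   # individual 1 gains, 0 unchanged
print(pareto_improvement(alloc, alloc_new))       # True
```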

This is problematic, because a normative economics guided by the principle of implementing a decision if it yields a neoclassical Pareto improvement where individuals have such preference relations above leads to the following situation.

Suppose that individual’s preference-information is their own allocation of commodities, and that their preferences are monotonically increasing. Take one individual j ∈ N and an initial allocation {xi}i∈N.

– A series of movements between allocations {{xi}ti∈N → {x′i}ti∈N}, t = 1, …, T, such that x′i = xi ∀ i ≠ j and ∀ t, and x′j > xj ∀ t, so that xj − xi → ∞ ∀ i ≠ j ∈ N, are neoclassical Pareto improvements. Furthermore, if these movements are made possible only by the discovery of new commodities, each individual state in the movement is neoclassical Pareto efficient prior to the next discovery, provided the first allocation was neoclassical Pareto efficient.
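This sequence is easy to exhibit numerically: each movement leaves every other bundle fixed while strictly increasing j’s, so each step passes the Pareto test, yet j’s wealth diverges from everyone else’s without bound. A sketch (sizes and step count are arbitrary):

```python
import numpy as np

def is_pareto_improvement(old, new):
    # nobody loses any commodity, and someone strictly gains
    no_loss = all(np.all(n >= o) for o, n in zip(old, new))
    gain = any(np.any(n > o) for o, n in zip(old, new))
    return no_loss and gain

alloc = [np.array([1.0, 1.0]) for _ in range(4)]  # four identical individuals
j = 0
for t in range(1000):                             # T = 1000 movements
    new = [x.copy() for x in alloc]
    new[j] = new[j] + 1.0                         # only j gains at each step
    assert is_pareto_improvement(alloc, new)      # every step passes the test
    alloc = new

gap = alloc[j] - alloc[1]
print(gap)  # j is now 1000 units richer in every commodity
```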

Admittedly perhaps not to the economic theorist, but to most this seems a rather dubious outcome. It means that if we are guided by neoclassical Pareto efficiency it is acceptable, indeed desirable, that one individual within society be made increasingly “richer” without end and without increasing the wealth of others, provided only that the wealth of others does not decrease. The same result would hold if instead of an individual we made a whole group, or indeed the whole of society, “better off”, without making anyone else “worse off”.

Even the most devoted disciple of Ayn Rand would find this situation dubious, for there is no requirement that the individual in question be in some sense “deserving” of their riches. But it is perfectly logically consistent with Pareto optimality if individual preferences concern only their own allocation and are monotonically increasing. So what is it that is strange here? What generates this odd condonation? It is the narrowing of that which the polity cares about to each individual allocation alone, independent of others: neoclassical Pareto improvements are distribution-invariant because each individual is supposed to care only about their own allocation xi ∈ {xi}i∈N rather than about broader states of society si ⊂ s as they see it.

To avoid such awkward results, the economist may move from the preference-axiomatic concept of Pareto efficiency to embrace utilitarianism, the policy criterion (not, in fact, immediately representative of Bentham’s surprisingly subtle statement) being the maximisation of some combination W(x) = W({ui(xi)}i∈N) of individual utilities ui(xi) over allocations. This “social psychic wellbeing” metric is known as the Social Welfare Function.

In theory, the maximisation of W(x) would, given the “right” assumptions on the combination method W(·) (sum, multiplication, maximin etc.) and on the utilities (concavity, monotonicity, independence etc.), fail to condone a distribution of commodities x as extreme as that discussed above, by dint of its failure to maximise social welfare W(x). But to obtain this egalitarian sensitivity to the distribution of income, three properties of Social Welfare Functions are introduced which prove fatal to the a-politicality of the economist’s policy advice, and which introduce presuppositions that must lie naked upon the political passions of the economist, so much more indecently for their hazy concealment under the technicalistic canopy of functional mathematics.
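The dependence on the combination method W(·) can be made concrete. With concave utilities, a utilitarian sum already prefers an equal split of a fixed total, while a maximin (Rawlsian) W is entirely indifferent to gains above the worst-off individual and so penalizes the skewed allocation even harder. A sketch, assuming log utilities (all numbers illustrative):

```python
import math

def u(x):                       # an assumed concave (log) utility
    return math.log(x)

def W_sum(alloc):               # utilitarian combination: sum of utilities
    return sum(u(x) for x in alloc)

def W_maximin(alloc):           # Rawlsian combination: utility of the worst-off
    return min(u(x) for x in alloc)

equal   = [10.0, 10.0, 10.0]
extreme = [28.0, 1.0, 1.0]      # same total, one individual far "richer"

print(W_sum(equal), W_sum(extreme))          # the sum prefers the equal split
print(W_maximin(equal), W_maximin(extreme))  # maximin penalizes inequality harder
```

Note how the choice between sum and maximin, a choice made by the economist, is precisely the political weighting the text describes.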

Firstly, it is so famous a result as to be called the “third theorem of welfare economics” that any such function W(·) as has certain “uncontroversially” desirable technical properties will impose upon the polity N the preferences of a dictator i ∈ N within it. The preference of one individual i ∈ N will serve to determine the preference indicated by W(x) between different states of society. In practice it is the preferences of the economist, who decides upon the form of W(·) and thus imposes their particular political passions (be they egalitarian or otherwise) upon policy, deeming what is “socially optimal” by the different weightings assigned to individual utilities ui(·) within the polity. But the political presuppositions imported by the economist go deeper in fact than this. Utilitarianism which allows for inter-personal comparisons of utility in the construction of W(x) requires utility functions to be “cardinal” – representing “how much” utility one derives from commodities over and above the bare preference between different sets thereof. Utility is an extremely vague concept, because it was constructed to represent a common hedonistic experiential metric where the very existence of such a metric is uncertain in the first place. In practice, the economist decides upon, extrapolates, and assigns to each i ∈ N a particular utility function, which imports yet further assumptions about how any one individual values their commodity allocation and thus contributes to social psychic wellbeing.

And finally, utilitarianism not only makes political statements about who in the polity is to be assigned a disimproved situation. It makes statements so outlandish and outrageous to the common sensibility as to have provided the impetus for two of the great systems of philosophy of justice in modernity – those of John Rawls and Amartya Sen. Under almost any combination method W(·), the maximisation of W(·) demands allocation to those most able to realize utility from their allocation. It would demand, for instance, redistribution of commodities from sick children to the hedonistic libertine, for the latter can obtain greater “utility” therefrom. A Theory of Justice is, of course, a direct response to the problematic political content of utilitarianism.

So Pareto optimality stands as the best hope for the economist to make a-political statements about policy, refraining from statements concerning the assignation of disimprovements in the situation of any individual. Yet if applied to preferences over individual allocations alone it condones some extreme situations of dubious political desirability across the spectrum of political theory and philosophy. But how robust a guide is it when we allow the polity to be concerned with states of society in general, not only their own individual allocation of commodities, as they must be in the process of public reasoning in every political philosophy from Plato to Popper and beyond?

Rhizomatic Topology and Global Politics. A Flirtatious Relationship.



Deleuze and Guattari see concepts as rhizomes, biological entities endowed with unique properties. They see concepts as spatially representable, where the representation contains principles of connection and heterogeneity: any point of a rhizome must be connected to any other. Deleuze and Guattari list the possible benefits of the spatial representation of concepts, including the ability to represent complex multiplicity, the potential to free a concept from foundationalism, and the ability to show both breadth and depth. In this view, geometric interpretations move away from the insidious understanding of the world in terms of dualisms, dichotomies, and lines, to understand conceptual relations in terms of space and shapes. The ontology of concepts is thus, in their view, appropriately geometric: a multiplicity defined not by its elements, nor by a center of unification and comprehension, but instead measured by its dimensionality and its heterogeneity. The conceptual multiplicity is already composed of heterogeneous terms in symbiosis, and is continually transforming itself, such that it is possible to follow, and map, not only the relationships between ideas but how they change over time. In fact, the authors claim that there are further benefits to geometric interpretations of understanding concepts which are unavailable in other frames of reference. They outline the unique contribution of geometric models to the understanding of contingent structure:

Principle of cartography and decalcomania: a rhizome is not amenable to any structural or generative model. It is a stranger to any idea of genetic axis or deep structure. A genetic axis is like an objective pivotal unity upon which successive stages are organized; deep structure is more like a base sequence that can be broken down into immediate constituents, while the unity of the product passes into another, transformational and subjective, dimension. (Deleuze and Guattari)

The word that Deleuze and Guattari use for ‘multiplicities’ can also be translated to the topological term ‘manifold.’ If we thought about their multiplicities as manifolds, there are a virtually unlimited number of things one could come to know, in geometric terms, about (and with) our object of study, abstractly speaking. Among those unlimited things we could learn are properties of groups (homological, cohomological, and homeomorphic), complex directionality (maps, morphisms, isomorphisms, and orientability), dimensionality (codimensionality, structure, embeddedness), partiality (differentiation, commutativity, simultaneity), and shifting representation (factorization, ideal classes, reciprocity). Each of these functions allows for a different, creative, and potentially critical representation of global political concepts, events, groupings, and relationships. This is how concepts are to be looked at: as manifolds. With such a dimensional understanding of concept-formation, it is possible to deal with complex interactions of like entities, and interactions of unlike entities. Critical theorists have emphasized the importance of such complexity in representation a number of times, speaking about it in terms compatible with mathematical methods if not mathematically. For example, Foucault’s declaration that practicing criticism is a matter of “making facile gestures difficult” both reflects and is reflected in many critical theorists’ projects of revealing the complexity in (apparently simple) concepts deployed in global politics. This leads to a shift in the concept of danger as well, where danger is not an objective condition but “an effect of interpretation”. Critical thinking about how-possible questions reveals a complexity to the concept of the state which is often overlooked in traditional analyses, sending a wave of added complexity through other concepts as well.
This work seeking complexity serves one of the major underlying functions of critical theorizing: finding invisible injustices in (modernist, linear, structuralist) givens in the operation and analysis of global politics.

In a geometric sense, this complexity could be thought about as multidimensional mapping. In theoretical geometry, the process of mapping conceptual spaces is not primarily empirical, but for the purpose of representing and reading the relationships between information, including identification, similarity, differentiation, and distance. The reason for defining topological spaces in math, the essence of the definition, is that there is no absolute scale for describing the distance or relation between certain points, yet it makes sense to say that an (infinite) sequence of points approaches some other (but again, no way to describe how quickly or from what direction one might be approaching). This seemingly weak relationship, which is defined purely ‘locally’, i.e., in a small locale around each point, is often surprisingly powerful: using only the relationship of approaching parts, one can distinguish between, say, a balloon, a sheet of paper, a circle, and a dot.
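The claim that purely local relations suffice to tell such shapes apart can be illustrated with the simplest combinatorial invariant of a triangulated shape, the Euler characteristic χ = V − E + F. A sketch using minimal toy triangulations of the four shapes named above:

```python
def euler_characteristic(vertices, edges, faces):
    """Euler characteristic chi = V - E + F of a triangulated shape."""
    return vertices - edges + faces

# minimal triangulations of the shapes mentioned in the text
dot     = euler_characteristic(1, 0, 0)   # a single point:       chi = 1
circle  = euler_characteristic(3, 3, 0)   # triangle boundary:    chi = 0
sheet   = euler_characteristic(3, 3, 1)   # filled triangle/disk: chi = 1
balloon = euler_characteristic(4, 6, 4)   # tetrahedron boundary: chi = 2

print(dot, circle, sheet, balloon)  # 1 0 1 2
```

Note that χ alone already separates the circle and the balloon from the rest but cannot distinguish the dot from the sheet; finer invariants such as the homological ones listed earlier are needed for that.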

To each delineated concept, one should distinguish and associate a topological space, in a (necessarily) non-explicit yet definite manner. Whenever one has a relationship between concepts (here we think of the primary relationship as being that of constitution, but not restrictively), we ‘specify’ a function (or inclusion, or relation) between the topological spaces associated to the concepts. In these terms, a conceptual space is in essence a multidimensional space in which the dimensions represent qualities or features of that which is being represented. Such an approach can be leveraged for thinking about conceptual components, dimensionality, and structure. In these terms, dimensions can be thought of as properties or qualities, each with their own (often multidimensional) properties or qualities. Since a key goal of the modeling of conceptual spaces is representation, a key (mathematical and theoretical) goal of concept space mapping is

associationism, where associations between different kinds of information elements carry the main burden of representation. (Conceptual_Spaces_as_a_Framework_for_Knowledge_Representation)

To this end,

objects in conceptual space are represented by points, in each domain, that characterize their dimensional values. (A concept geometry for conceptual spaces)

These dimensional values can be arranged in relation to each other, as Gardenfors explains that

distances represent degrees of similarity between objects represented in space and therefore conceptual spaces are “suitable for representing different kinds of similarity relation”. (Concept)

These similarity relationships can be explored across ideas of a concept and across contexts, but also over time, since “with the aid of a topological structure, we can speak about continuity, e.g., a continuous change”, a possibility which can be found only in treating concepts as topological structures and not in linguistic descriptions or set-theoretic representations.
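Gardenfors’ proposal reduces, in its simplest form, to representing concepts as points in a metric space of quality dimensions and reading similarity off the distance between them. A minimal sketch, with entirely made-up concepts and dimensions:

```python
import numpy as np

# assumed quality dimensions: (size, ferocity, sociability) -- purely illustrative
concepts = {
    "dog":  np.array([0.4, 0.3, 0.9]),
    "wolf": np.array([0.5, 0.8, 0.4]),
    "cat":  np.array([0.2, 0.4, 0.5]),
}

def similarity(a, b):
    """Distance-based similarity: closer points in conceptual space
    correspond to more similar concepts (1.0 = identical)."""
    return 1.0 / (1.0 + np.linalg.norm(concepts[a] - concepts[b]))

for a, b in [("dog", "wolf"), ("dog", "cat"), ("wolf", "cat")]:
    print(a, b, round(similarity(a, b), 3))
```

Because the representation is geometric, a continuous change of a concept over time is just a continuous path of its point through the space, which is the continuity claim quoted above.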

Osteo Myological Quantization. Note Quote.

The site of the parameters in a higher-order space can also be quantized into segments, the limits of which can no longer be decomposed. Such a limit may be nearly a rigid piece. In the animal body such quanta cannot but be bone pieces forming parts of the skeleton, whether lying internally as an endoskeleton or as an almost rigid shell covering the body as an external skeleton.

Note the partition of the body into three main segments: head (cephalic), pectoral (breast), and caudal (tail), materializing the KH order limit M ≥ 3 or the KHK dimensional limit N ≥ 3. Notice also the quantization into more macroscopic segments, such as of the abdominal part into several smaller segments beyond the KHK lower bound N = 3. Lateral symmetry with a symmetry axis is remarkable. This is of course an indispensable consequence of the modified Zermelo conditions, which also entail locomotive appendages differentiating into legs for walking and wings for flying in the case of insects.


Two paragraphs of Kondo address the simple issues of what bones are, mammalian bilateral symmetry, the numbers of major body parts and their segmentation, the notion of the mathematical origins of wings, legs and arms, the dimensionality of eggs being zero, hence their need of warmth for progression to locomotion, and the dimensionality of snakes being one, hence their mode of locomotion. A feature of the biological sections is their attention to detail, their use of line art to depict the various forms of living being – from birds to starfish to dinosaurs – the use of the full Latin terminology, and at all times the relationship of the various forms of living being to the underlying higher-order geometry and the mathematical notion of principal ideals. The human skeleton is treated as a hierarchical Kawaguchi tree with its characteristic three-pronged form. The Riemannian arc length of the curve k(t) is given by the integral of the square root of a quadratic form in x′ with coefficients dependent on x. This integrand is homogeneous of the first order in x′. If we drop the quadratic property and retain the homogeneity, then we obtain Finsler geometry. Kawaguchi geometry supposes that the integrand depends upon the higher derivatives x′′ up to the k-th derivative x(k). The notation that Kondo uses is:



L – parameters, N – dimensions, M – derivatives
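The chain Riemann → Finsler described above can be checked numerically: arc length is the integral of an integrand F(x, x′), where the Riemannian case takes the square root of a quadratic form in x′ and the Finsler case only requires first-order homogeneity in x′. A sketch under an illustrative (Euclidean) metric and an illustrative non-quadratic Finsler integrand:

```python
import numpy as np

def arc_length(F, curve, t0=0.0, t1=1.0, n=100000):
    """Numerically integrate the integrand F(x, x') along a parametrized curve."""
    t = np.linspace(t0, t1, n)
    dt = t[1] - t[0]
    x = curve(t)                          # shape (dim, n)
    xdot = np.gradient(x, dt, axis=1)     # finite-difference velocities
    return np.sum(F(x, xdot)) * dt

# Riemannian integrand: sqrt of a quadratic form in x' (Euclidean metric here)
def F_riemann(x, xdot):
    return np.sqrt(np.sum(xdot**2, axis=0))

# A Finsler-type integrand: homogeneous of degree 1 in x', but not quadratic
def F_finsler(x, xdot):
    return np.sum(np.abs(xdot), axis=0)   # taxicab "length"

circle = lambda t: np.array([np.cos(2*np.pi*t), np.sin(2*np.pi*t)])
print(arc_length(F_riemann, circle))   # ~ 2*pi, the Euclidean circumference
print(arc_length(F_finsler, circle))   # ~ 8, the taxicab circumference
```

The Kawaguchi case would additionally feed x′′, …, x(k) into F, which the same numerical scheme accommodates by taking further gradients.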

The lower part of the skeleton can be divided into three prongs, each starting from the centre as a single parametric Kawaguchi tree.

…the skeletal, muscular, gastrointestinal, circulatory systems etc. combine into a holo-parametric whole that can be more generally quantized, each quantum involving some osteological, neural, circulatory functions etc.

…thus globally the human body from head through trunk to limbs is quantized into a finite number of quanta.

Extreme Value Theory


Standard estimators of the dependence between assets are the correlation coefficient or Spearman’s rank correlation, for instance. However, as stressed by [Embrechts et al.], these kinds of dependence measures suffer from many deficiencies. Moreover, their values are mostly controlled by relatively small moves of the asset prices around their mean. To cure this problem, it has been proposed to use correlation coefficients conditioned on large movements of the assets. But [Boyer et al.] have emphasized that this approach also suffers from a severe systematic bias leading to spurious strategies: the conditional correlation in general evolves with time even when the true non-conditional correlation remains constant. In fact, [Malevergne and Sornette] have shown that any approach based on conditional dependence measures implies a spurious change of the intrinsic value of the dependence, measured for instance by copulas. Recall that the copula of several random variables is the (unique) function which completely embodies the dependence between these variables, irrespective of their marginal behavior (see [Nelsen] for a mathematical description of the notion of copula).

In view of these limitations of the standard statistical tools, it is natural to turn to extreme value theory. In the univariate case, extreme value theory is very useful and provides many tools for investigating the extreme tails of distributions of asset returns. These new developments rest on the existence of a few fundamental results on extremes, such as the Gnedenko-Pickands-Balkema-de Haan theorem, which gives a general expression for the distribution of exceedances over a large threshold. In this framework, the study of large and extreme co-movements requires multivariate extreme value theory, which unfortunately does not provide strong results. Indeed, in contrast with the univariate case, the class of limiting extreme-value distributions is too broad and cannot be used to constrain accurately the distribution of large co-movements.

In the spirit of the mean-variance portfolio or of utility theory, which establish an investment decision on a unique risk measure, we use the coefficient of tail dependence, which, to our knowledge, was first introduced in the financial context by [Embrechts et al.]. The coefficient of tail dependence between assets Xi and Xj is a very natural and easy-to-understand measure of extreme co-movements. It is defined as the probability that the asset Xi incurs a large loss (or gain) assuming that the asset Xj also undergoes a large loss (or gain) at the same probability level, in the limit where this probability level explores the extreme tails of the distribution of returns of the two assets. Mathematically speaking, the coefficient of lower tail dependence between the two assets Xi and Xj, denoted by λ−ij, is defined by

λ−ij = limu→0 Pr{Xi<Fi−1(u)|Xj < Fj−1(u)} —– (1)

where Fi−1(u) and Fj−1(u) represent the quantiles of assets Xi and Xj at level u. Similarly, the coefficient of upper tail dependence is

λ+ij = limu→1 Pr{Xi > Fi−1(u)|Xj > Fj−1(u)} —– (2)

λ−ij and λ+ij are of concern to investors with long (respectively short) positions. We refer to [Coles et al.] and references therein for a survey of the properties of the coefficient of tail dependence. Let us stress that the use of quantiles in the definition of λ−ij and λ+ij makes them independent of the marginal distribution of the asset returns: as a consequence, the tail dependence parameters are intrinsic dependence measures. The obvious gain is an “orthogonal” decomposition of the risks into (1) individual risks carried by the marginal distribution of each asset and (2) their collective risk described by their dependence structure or copula.

Being a probability, the coefficient of tail dependence varies between 0 and 1. A large value of λ−ij means that large losses almost surely occur together. Then, large risks cannot be diversified away and the assets crash together. This investor and portfolio manager nightmare is further amplified in real-life situations by the limited liquidity of markets. When λ−ij vanishes, the assets are said to be asymptotically independent, but this term hides the subtlety that the assets can still present a non-zero dependence in their tails. For instance, two normally distributed assets can be shown to have a vanishing coefficient of tail dependence. Nevertheless, unless their correlation coefficient is identically zero, these assets are never independent. Thus, asymptotic independence must be understood as the weakest dependence which can be quantified by the coefficient of tail dependence.
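The vanishing tail dependence of Gaussian assets is easy to see empirically from definition (1): estimate the conditional probability at progressively more extreme quantile levels u and watch it decay. A sketch with a purely illustrative simulation (sample size and correlation are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 2_000_000, 0.7

# bivariate Gaussian returns with correlation rho
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)

def lower_tail_dependence(x, y, u):
    """Empirical estimate of Pr{X < F_X^-1(u) | Y < F_Y^-1(u)}."""
    qx, qy = np.quantile(x, u), np.quantile(y, u)
    return np.mean(x[y < qy] < qx)

for u in (0.05, 0.01, 0.001):
    print(u, lower_tail_dependence(z1, z2, u))
# the estimate decays as u -> 0: Gaussian assets are asymptotically independent
```

A heavy-tailed joint distribution (e.g. bivariate Student-t) would instead show the estimate levelling off at a strictly positive λ−.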

For practical implementations, a direct application of the definitions (1) and (2) fails to provide reasonable estimations due to the double curse of dimensionality and undersampling of extreme values, so that a fully non-parametric approach is not reliable. It turns out to be possible to circumvent this fundamental difficulty by considering the general class of factor models, which are among the most widespread and versatile models in finance. They come in two classes: multiplicative and additive factor models respectively. The multiplicative factor models are generally used to model asset fluctuations due to an underlying stochastic volatility. The additive factor models are made to relate asset fluctuations to market fluctuations, as in the Capital Asset Pricing Model (CAPM) and its generalizations, or to any set of common factors as in Arbitrage Pricing Theory. The coefficient of tail dependence is known in closed form for both classes of factor models, which allows for an efficient empirical estimation.

Yield Curve Dynamics or Fluctuating Multi-Factor Rate Curves


The actual dynamics (as opposed to the risk-neutral dynamics) of the forward rate curve cannot be reduced to that of the short rate: the statistical evidence points to the necessity of taking into account more degrees of freedom in order to represent in an adequate fashion the complicated deformations of the term structure. In particular, the imperfect correlation between maturities and the rich variety of term structure deformations show that a one-factor model is too rigid to describe yield curve dynamics.

Furthermore, in practice the value of the short rate is either fixed or at least strongly influenced by an authority exterior to the market (the central banks), through a mechanism different in nature from that which determines rates of higher maturities which are negotiated on the market. The short rate can therefore be viewed as an exogenous stochastic input which then gives rise to a deformation of the term structure as the market adjusts to its variations.

Traditional term structure models define – implicitly or explicitly – the random motion of an infinite number of forward rates as diffusions driven by a finite number of independent Brownian motions. This choice may appear surprising, since it introduces a lot of constraints on the type of evolution one can ascribe to each point of the forward rate curve and greatly reduces the dimensionality, i.e. the number of degrees of freedom, of the model, such that the resulting model is no longer able to reproduce the complex dynamics of the term structure. Multifactor models are usually justified by referring to the results of principal component analysis of term structure fluctuations. However, one should note that the quantities of interest when dealing with the term structure of interest rates are not the first two moments of the forward rates but typically involve expectations of non-linear functions of the forward rate curve: caps and floors are typical examples from this point of view. Hence, although a multifactor model might explain the variance of the forward rate itself, the same model may not be able to explain correctly the variability of portfolio positions involving non-linear combinations of the same forward rates. In other words, a principal component whose associated eigenvalue is small may have a non-negligible effect on the fluctuations of a non-linear function of forward rates. This question is especially relevant when calculating quantiles and Value-at-Risk measures.
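The principal component argument can be illustrated on simulated curve increments: with stylized level, slope and curvature factors plus small maturity-local noise, a handful of eigenvalues of the covariance matrix captures essentially all the variance, which is the usual justification for multifactor models the text questions. A sketch (the factor loadings and volatilities are stylized assumptions, not estimates):

```python
import numpy as np

rng = np.random.default_rng(1)
maturities = np.linspace(0.5, 10.0, 20)
T = 5000

# stylized daily curve moves: level + slope + curvature factors plus local noise
level = np.ones_like(maturities)
slope = (maturities - maturities.mean()) / (maturities.max() - maturities.min())
curvature = slope**2 - (slope**2).mean()
moves = (rng.standard_normal((T, 1)) * 0.60 * level
         + rng.standard_normal((T, 1)) * 0.20 * slope
         + rng.standard_normal((T, 1)) * 0.10 * curvature
         + rng.standard_normal((T, len(maturities))) * 0.02)

eigvals = np.linalg.eigvalsh(np.cov(moves.T))[::-1]   # descending eigenvalues
explained = eigvals / eigvals.sum()
print(np.round(explained[:3], 3))  # the first three components dominate
```

The caveat in the text survives this picture: a tiny trailing eigenvalue can still matter greatly for a non-linear payoff such as a cap, even though it is invisible in the variance decomposition.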

In a multifactor model with k sources of randomness, one can use any k + 1 instruments to hedge a given risky payoff. However, this is not what traders do in real markets: a given interest-rate contingent payoff is hedged with bonds of the same maturity. These practices reflect the existence of a risk specific to instruments of a given maturity. The representation of a maturity-specific risk means that, in a continuous-maturity limit, one must also allow the number of sources of randomness to grow with the number of maturities; otherwise one loses the localization in maturity of the source of randomness in the model.

An important ingredient for the tractability of a model is its Markovian character. Non-Markov processes are difficult to simulate and even harder to manipulate analytically. Of course, any process can be transformed into a Markov process if it is embedded into a space of sufficiently high dimension; this amounts to injecting a sufficient number of “state variables” into the model. These state variables may or may not be observable quantities; for example, one such state variable may be the short rate itself, but another one could be an economic variable whose value is not deducible from knowledge of the forward rate curve. If the state variables are not directly observed, they are obtainable in principle from the observed interest rates by a filtering process. Nevertheless, the presence of unobserved state variables makes the model more difficult to handle both in terms of interpretation and statistical estimation. This drawback has motivated the development of so-called affine curve models, where one imposes that the state variables be affine functions of the observed yield curve. While the affine hypothesis is not necessarily realistic from an empirical point of view, it has the property of directly relating state variables to the observed term structure.

Another feature of term structure movements is that, as a curve, the forward rate curve displays a continuous deformation: configurations of the forward rate curve at dates not too far from each other tend to be similar. Most applications require the yield curve to have some degree of smoothness e.g. differentiability with respect to the maturity. This is not only a purely mathematical requirement but is reflected in market practices of hedging and arbitrage on fixed income instruments. Market practitioners tend to hedge an interest rate risk of a given maturity with instruments of the same maturity or close to it. This important observation means that the maturity is not simply a way of indexing the family of forward rates: market operators expect forward rates whose maturities are close to behave similarly. Moreover, the model should account for the observation that the volatility term structure displays a hump but that multiple humps are never observed.

Comment on Purely Random Correlations of the Matrix, or Studying Noise in Neural Networks


In the presence of two-body interactions the many-body Hamiltonian matrix elements vJα,α′ of good total angular momentum J in the shell-model basis |α⟩ generated by the mean field, can be expressed as follows:

vJα,α′ = ∑J’ii’ cJαα’J’ii’ gJ’ii’ —– (4)

The summation runs over all combinations of the two-particle states |i⟩ coupled to the angular momentum J′ and connected by the two-body interaction g. The analogy of this structure to the one schematically captured by eq. (2) is evident. gJ’ii’ denote here the radial parts of the corresponding two-body matrix elements, while cJαα’J’ii’ globally represent elements of the angular momentum recoupling geometry. gJ’ii’ are drawn from a Gaussian distribution, while the geometry expressed by cJαα’J’ii’ enters explicitly. This originates from the fact that a quasi-random coupling of individual spins results in the so-called geometric chaoticity, and thus the cJαα’ coefficients are also Gaussian distributed. In this case, these two essentially random ingredients (gJ’ii’ and c) lead, however, to an order of magnitude larger separation of the ground state from the remaining states as compared to a pure Random Matrix Theory (RMT) limit. Due to more severe selection rules, the effect of geometric chaoticity does not apply for J = 0. Consistently, the ground state energy gaps measured relative to the average level spacing characteristic for a given J are larger for J > 0 than for J = 0, and J > 0 ground states are also more orderly than those for J = 0, as can be quantified in terms of the information entropy.

Interestingly, such reductions of the dimensionality of the Hamiltonian matrix can also be seen locally in explicit calculations with realistic (non-random) nuclear interactions. A collective state, one which turns out to be coherent with some operator representing a physical external field, is always surrounded by a reduced density of states, i.e., it repels the other states. In all those cases, the global fluctuation characteristics remain, however, largely consistent with the corresponding version of the random matrix ensemble.

Recently, a broad arena of applicability of random matrix theory has opened in connection with the most complex systems known to exist in the universe. Without doubt, the most complex is the human brain and those phenomena that result from its activity. From the physics point of view, the financial world, reflecting such activity, is of particular interest because its characteristics are quantified directly in terms of numbers and a huge amount of electronically stored financial data is readily available. Access to a single brain’s activity is also possible by detecting the electric or magnetic fields generated by the neuronal currents. With present-day techniques of electro- or magnetoencephalography, it is possible in this way to generate time series which resolve neuronal activity down to the scale of 1 ms.

One may debate over what is more complex, the human brain or the financial world, and there is no unique answer. It seems to us, however, that the financial world is even more complex: after all, it involves the activity of many human brains, and it seems even less predictable due to more frequent changes between different modes of action. Noise is of course overwhelming in either of these systems, as can be inferred from the structure of the eigenspectra of correlation matrices taken across different spatial areas at the same time, or across different time intervals. There always exist, however, several well-identifiable deviations which, with reference to the universal characteristics of random matrix theory and with the methodology briefly reviewed above, can be classified as real correlations or collectivity. An easily identifiable gap between the corresponding eigenvalues of the correlation matrix and the bulk of its eigenspectrum plays the central role in this connection. The brain, when responding to sensory stimulation, develops larger gaps than the brain at rest. The correlation matrix formalism in its most general, asymmetric form also allows one to study time-delayed correlations, such as those between the opposite hemispheres. The time delay reflecting the maximum of correlation (the time needed for information to be transmitted between the different sensory areas of the brain) is also associated with the appearance of one significantly larger eigenvalue. Similar effects appear to govern the formation of heteropolymeric biomolecules: the ones that nature makes use of are separated by an energy gap from the purely random sequences.
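A toy version of such a time-delayed correlation analysis can be sketched as follows. All sizes, the delay, and the noise level are invented for illustration; this is not a claim about real EEG/MEG pipelines, only a demonstration that the lag maximizing the leading singular value of the lagged cross-correlation matrix recovers the transmission delay:

```python
import numpy as np

# Toy sketch of time-delayed correlations between two sets of channels.
# Channel set A carries the underlying signal `true_delay` samples after
# channel set B; the leading singular value of the lagged
# cross-correlation matrix C(tau) peaks at that delay, mimicking the
# single large eigenvalue discussed above. All parameters are invented.
rng = np.random.default_rng(5)
T, n_ch, true_delay = 5000, 8, 7

source = rng.standard_normal(T + true_delay)
A = np.outer(np.ones(n_ch), source[true_delay:T + true_delay]) \
    + 0.5 * rng.standard_normal((n_ch, T))
B = np.outer(np.ones(n_ch), source[:T]) \
    + 0.5 * rng.standard_normal((n_ch, T))

def leading_sv(tau):
    # lagged cross-correlation matrix between the two channel sets
    C = A[:, :T - tau] @ B[:, tau:].T / (T - tau)
    return np.linalg.svd(C, compute_uv=False)[0]

best = max(range(15), key=leading_sv)
print(best)
```

The peak is sharp because at the correct lag every matrix element acquires an O(1) mean, so the leading singular value scales with the number of channels rather than with the noise floor.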


Purely Random Correlations of the Matrix, or Studying Noise in Neural Networks


Expressed in the most general form, in essentially all cases of practical interest, the n × n matrices W used to describe the complex system are by construction of the form

W = XY^T —– (1)

where X and Y denote rectangular n × m matrices. Such, for instance, are the correlation matrices, whose standard form corresponds to Y = X. In this case one thinks of n observations or cases, each represented by an m-dimensional row vector x_i (y_i), i = 1, …, n, and typically m is larger than n. In the limit of purely random correlations the matrix W is then said to be a Wishart matrix. The resulting density ρ_W(λ) of eigenvalues is here known analytically, with the limits (λ_min ≤ λ ≤ λ_max) prescribed by

λ_{max/min} = 1 + 1/Q ± 2√(1/Q) and Q = m/n ≥ 1.

The variance of the elements of x_i is here assumed to be unity.
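As a minimal numerical sketch of the above (with unit-variance Gaussian data and arbitrarily chosen n and m), one can verify that the eigenvalues of such a purely random correlation matrix indeed fill the interval prescribed by these bounds:

```python
import numpy as np

# Sketch: eigenvalues of a purely random (Wishart-type) correlation matrix
# versus the analytic bounds lambda_{max/min} = 1 + 1/Q +/- 2*sqrt(1/Q).
# n and m are arbitrary illustrative choices with Q = m/n >= 1.
rng = np.random.default_rng(0)
n, m = 200, 800
Q = m / n

X = rng.standard_normal((n, m))          # unit-variance entries, as assumed
W = X @ X.T / m                          # standard form, Y = X
eigvals = np.linalg.eigvalsh(W)

lam_min = 1 + 1 / Q - 2 * np.sqrt(1 / Q)
lam_max = 1 + 1 / Q + 2 * np.sqrt(1 / Q)

# fraction of eigenvalues inside the predicted interval
# (small tolerance for finite-size spillover at the edges)
inside = np.mean((eigvals > lam_min - 0.05) & (eigvals < lam_max + 0.05))
print(round(float(inside), 2))
```
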

The more general case, with X and Y different, results in asymmetric correlation matrices with complex eigenvalues λ. In this more general case a limiting distribution corresponding to purely random correlations does not yet seem to be known analytically as a function of m/n. It appears, however, that in the case of no correlations, quite generically, one may expect a largely uniform distribution of λ bounded by an ellipse in the complex plane.
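A small sketch of this asymmetric case, under the same unit-variance Gaussian assumptions as before (the sizes are again illustrative), shows that the spectrum indeed becomes genuinely complex:

```python
import numpy as np

# Sketch of the asymmetric case W = X Y^T / m with X and Y independent.
# For purely random data the eigenvalues are complex and spread over a
# bounded region of the complex plane; sizes here are illustrative, and
# the exact limiting law as a function of m/n is, as noted, not known.
rng = np.random.default_rng(1)
n, m = 200, 800

X = rng.standard_normal((n, m))
Y = rng.standard_normal((n, m))
W = X @ Y.T / m

eigvals = np.linalg.eigvals(W)           # complex in general
print(np.abs(eigvals.imag).max() > 0.0)  # spectrum leaves the real axis
```
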

Further examples of matrices of similar structure, of great interest from the point of view of complexity, include the Hamiltonian matrices of strongly interacting quantum many-body systems such as atomic nuclei. This holds true on the level of bound states, where the problem is described by Hermitian matrices, as well as for excitations embedded in the continuum. This latter case can be formulated in terms of an open quantum system, which is represented by a complex non-Hermitian Hamiltonian matrix. Several neural-network models also belong to this category of matrix structure. In this domain the reference is provided by the Gaussian (orthogonal, unitary, symplectic) ensembles of random matrices, with the semicircle law for the eigenvalue distribution. For irreversible processes there exists a complex version of these ensembles, with a special case, the so-called scattering ensemble, which accounts for S-matrix unitarity.
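For the Hermitian reference case just mentioned, a minimal sketch (with an arbitrary matrix size) of the Gaussian Orthogonal Ensemble and its semicircle law reads:

```python
import numpy as np

# Sketch: a GOE-like symmetric random matrix. After scaling the spectrum
# by sqrt(n), the eigenvalue density approaches Wigner's semicircle
# supported on [-2, 2]. The size n is an arbitrary illustrative choice.
rng = np.random.default_rng(2)
n = 400

A = rng.standard_normal((n, n))
H = (A + A.T) / np.sqrt(2)               # symmetric; off-diagonal variance 1
eigvals = np.linalg.eigvalsh(H) / np.sqrt(n)
print(round(float(eigvals.max()), 1))    # close to the semicircle edge
```
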

As has already been expressed above, several variants of ensembles of random matrices provide an appropriate and natural reference for quantifying various characteristics of complexity. The bulk of such characteristics is expected to be consistent with Random Matrix Theory (RMT), and in fact there exists strong evidence that it is. Once this is established, the deviations become even more interesting, especially those signaling the emergence of synchronous or coherent patterns, i.e., effects connected with a reduction of dimensionality. In the matrix terminology such patterns can thus be associated with a significantly reduced rank k (thus k ≪ n) of a leading component of W. A satisfactory structure of the matrix that would allow chaos or noise to coexist with collectivity thus reads:

W = Wr + Wc —– (2)

Of course, in the absence of Wr, the second term (Wc) of W generates k nonzero eigenvalues, and all the remaining ones (n − k) constitute the zero modes. When Wr enters as a noise (random-like matrix) correction, a trace of the above effect is expected to remain, i.e., k large eigenvalues and a bulk composed of n − k small eigenvalues whose distribution and fluctuations are consistent with an appropriate version of the random matrix ensemble. One likely mechanism that may lead to such a segregation of eigenspectra is that m in eq. (1) is significantly smaller than n, or that the number of large components makes it effectively small on the level of the large entries w of W. Such an effective reduction of m (M = m_eff) is then expressed by the following distribution P(w) of the large off-diagonal matrix elements, in the case that they are still generated by random-like processes:
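The segregation of the eigenspectrum described here can be sketched numerically. In the toy example below (sizes, rank, and the collective strength 5.0 are all arbitrary illustrative choices), a rank-k component added to symmetric noise produces k large eigenvalues cleanly separated from the bulk:

```python
import numpy as np

# Sketch of eq. (2): W = Wr + Wc, a random-like part plus a rank-k
# collective part. The k "collective" eigenvalues detach from the
# noise bulk. All sizes and scales are invented for illustration.
rng = np.random.default_rng(3)
n, k = 300, 3

Wr = rng.standard_normal((n, n)) / np.sqrt(n)
Wr = (Wr + Wr.T) / 2                     # symmetric noise; bulk edge ~ sqrt(2)

U = rng.standard_normal((n, k))
Wc = 5.0 * (U @ U.T) / n                 # rank-k collective component

eigvals = np.sort(np.linalg.eigvalsh(Wr + Wc))
gap = eigvals[-k] - eigvals[-(k + 1)]    # k large eigenvalues vs the bulk
print(round(float(gap), 2))
```

Without Wc the spectrum would be a pure noise bulk; without Wr the spectrum would consist of k nonzero eigenvalues and n − k exact zero modes, just as stated above.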

P(w) = |w|^{(M−1)/2} K_{(M−1)/2}(|w|) / (2^{(M−1)/2} Γ(M/2) √π) —– (3)

where K stands for the modified Bessel function. Asymptotically, for large w, this leads to P(w) ∼ e^{−|w|} |w|^{M/2−1}, and thus reflects an enhanced probability of the appearance of a few large off-diagonal matrix elements as compared to a Gaussian distribution. Consistent with the central limit theorem, the distribution quickly converges to a Gaussian with increasing M.
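The law (3) can be realized by sampling: it is the distribution of a sum w = Σ_{j=1}^{M} x_j y_j of M products of independent standard Gaussians (a standard identity; for M = 1 it reduces to K_0(|w|)/π). A quick sketch then illustrates both the heavy tails at small M and the convergence to a Gaussian at large M, for which the excess kurtosis decays roughly as 6/M:

```python
import numpy as np

# Sketch: sample w = sum_{j=1}^{M} x_j y_j, whose density is the
# Bessel-type law (3) with M effective components. Tails are heavier
# than Gaussian for small M; the excess kurtosis decays roughly as 6/M,
# illustrating the central-limit convergence stated above.
rng = np.random.default_rng(4)

def excess_kurtosis(M, samples=100_000):
    x = rng.standard_normal((samples, M))
    y = rng.standard_normal((samples, M))
    w = (x * y).sum(axis=1)              # matrix-element-like variable
    z = (w - w.mean()) / w.std()
    return float((z ** 4).mean() - 3.0)  # zero for a Gaussian

print(excess_kurtosis(1) > excess_kurtosis(50))
```
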

Based on several examples of natural complex dynamical systems, such as strongly interacting Fermi systems, the human brain, and the financial markets, one can systematize evidence that such effects are indeed common to all phenomena that can intuitively be qualified as complex.