Network-Theoretic View of the Fermionic Quantum State – Epistemological Rumination. Thought of the Day 150.0


In quantum physics, fundamental particles are believed to be of two types: fermions or bosons, depending on the value of their spin (an intrinsic ‘angular momentum’ of the particle). Fermions have half-integer spin and cannot occupy a quantum state (a configuration with specified microscopic degrees of freedom, or quantum numbers) that is already occupied. In other words, at most one fermion at a time can occupy one quantum state. The resulting probability that a quantum state is occupied is known as Fermi-Dirac statistics.

Now, if we want to convert this into a maximum-entropy model, in which the real network is described purely topologically, then we require a reproduction of the observed heterogeneity. The natural starting point is network theory, with an ensemble of networks in which each vertex i has the same degree ki as in the real network. This choice is justified by the fact that, being an entirely local topological property, the degree is expected to be directly affected by some intrinsic (non-topological) property of vertices. The caveat is that the real network shouldn’t be naively compared with the randomized one, which could otherwise lead to interpreting the observed degrees as ‘unavoidable’ topological constraints, in the sense that violating the observed values would lead to ‘impossible’, or at least very unrealistic, configurations.

The resulting model is known as the Configuration Model, and is defined as a maximum-entropy ensemble of graphs with given degree sequence. The degree sequence, which is the constraint defining the model, is nothing but the ordered vector k of degrees of all vertices (where the ith component ki is the degree of vertex i). The ordering preserves the ‘identity’ of vertices: in the resulting network ensemble, the expected degree ⟨ki⟩ of each vertex i is the same as the empirical value ki for that vertex. In the Configuration Model, the graph probability is given by

P(A) = ∏_{i<j} q_{ij}(a_{ij}) = ∏_{i<j} p_{ij}^{a_{ij}} (1 − p_{ij})^{1−a_{ij}} —– (1)

where q_{ij}(a) = p_{ij}^{a} (1 − p_{ij})^{1−a} is the probability that the particular entry a_{ij} of the adjacency matrix A takes the value a, a Bernoulli process in which different pairs of vertices are characterized by different connection probabilities pij. A Bernoulli trial (or Bernoulli process) is the simplest random event, i.e. one characterized by only two possible outcomes. One of the two outcomes is referred to as the ‘success’ and is assigned a probability p. The other outcome is referred to as the ‘failure’, and is assigned the complementary probability 1 − p. These probabilities read

⟨a_{ij}⟩ = p_{ij} = x_i x_j / (1 + x_i x_j) —– (2)

where xi is the Lagrange multiplier obtained by ensuring that the expected degree of the corresponding vertex i equals its observed value: ⟨ki⟩ = ki ∀ i. As always happens in maximum-entropy ensembles, the probabilistic nature of configurations implies that the constraints are valid only on average (the angular brackets indicate an average over the ensemble of realizable networks). Also note that pij is a monotonically increasing function of xi and xj. This implies that ⟨ki⟩ is a monotonically increasing function of xi. An important consequence is that two vertices i and j with the same degree ki = kj must have the same value xi = xj.
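To make the fitting procedure concrete, here is a minimal Python sketch (my own construction, not part of the source text) that solves the constraint equations ⟨ki⟩ = Σ_{j≠i} xixj/(1 + xixj) = ki for the hidden variables xi by a simple fixed-point iteration; the function names, the initial guess and the convergence tolerance are assumptions.

```python
import numpy as np

def fit_configuration_model(degrees, tol=1e-10, max_iter=10000):
    """Solve <k_i> = sum_{j != i} x_i x_j / (1 + x_i x_j) = k_i for the x_i
    by fixed-point iteration (a standard approach; the details here are assumptions)."""
    k = np.asarray(degrees, dtype=float)
    x = k / np.sqrt(k.sum() + 1.0)              # rough initial guess
    for _ in range(max_iter):
        # element (i, j) of 'denom' is x_j / (1 + x_i x_j); diagonal excluded
        outer = np.outer(x, x)
        denom = x[None, :] / (1.0 + outer)
        np.fill_diagonal(denom, 0.0)
        x_new = k / denom.sum(axis=1)           # fixed-point map x_i = k_i / sum_j x_j/(1+x_i x_j)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

def connection_probabilities(x):
    """Fermi-Dirac-like probabilities p_ij = x_i x_j / (1 + x_i x_j), as in eq. (2)."""
    outer = np.outer(x, x)
    p = outer / (1.0 + outer)
    np.fill_diagonal(p, 0.0)                    # no self-loops
    return p

# usage: degrees of a small observed network
k_obs = [3, 2, 2, 1, 2]
x = fit_configuration_model(k_obs)
p = connection_probabilities(x)
print(p.sum(axis=1))                            # expected degrees, ≈ k_obs by construction
```

The printed row sums of p are the expected degrees, which should reproduce the observed degree sequence (on average), exactly the constraint the Lagrange multipliers enforce.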


(2) provides an interesting connection with quantum physics, and in particular the statistical mechanics of fermions. The ‘selection rules’ of fermions dictate that only one particle at a time can occupy a single-particle state, exactly as each pair of vertices in binary networks can be either connected or disconnected. In this analogy, every pair i, j of vertices is a ‘quantum state’ identified by the ‘quantum numbers’ i and j. So each link of a binary network is like a fermion that can be in one of the available states, provided that no two objects are in the same state. (2) indicates the expected number of particles/links in the state specified by i and j. Unsurprisingly, it has the same form as the Fermi-Dirac statistics describing the expected number of fermions in a given quantum state. The probabilistic nature of links also allows for the presence of empty states, whose occurrence is regulated by the probability coefficients (1 − pij). The Configuration Model allows the whole degree sequence of the observed network to be preserved (on average), while randomizing other (unconstrained) network properties. Now, when one compares the higher-order (unconstrained) observed topological properties with their expected values calculated over the maximum-entropy ensemble, the comparison indicates how informative the degree sequence is in explaining the rest of the topology, which follows via the probabilities in (2). Collecting these comparisons into a scatter plot, the agreement between model and observations can be simply assessed as follows: the less scattered the cloud of points around the identity function, the better the agreement between model and reality. In principle, a broadly scattered cloud around the identity function would indicate that the chosen constraints have little effectiveness in reproducing the unconstrained properties, signaling the presence of genuine higher-order patterns of self-organization not simply explainable in terms of the degree sequence alone. Thus, the ‘fermionic’ character of the binary model is the mere result of the restriction that no two binary links can be placed between any two vertices, leading to a mathematical result formally equivalent to the one of quantum statistics.
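To illustrate the comparison just described, the following sketch (again my own, reusing the hypothetical fit_configuration_model and connection_probabilities helpers from the previous snippet) samples networks from the ensemble via independent Bernoulli trials, as in (1), and compares an unconstrained higher-order property, the local clustering coefficient, with its ensemble average; plotting observed against expected values around the identity line gives the scatter-plot diagnostic described above. The toy adjacency matrix and the number of samples are arbitrary choices.

```python
import numpy as np

def sample_network(p, rng):
    """Draw one network from the ensemble: an independent Bernoulli trial per vertex pair."""
    n = p.shape[0]
    upper = np.triu((rng.random((n, n)) < p).astype(int), 1)
    return upper + upper.T                      # symmetric adjacency matrix, no self-loops

def clustering(a):
    """Local clustering coefficient of each vertex (an unconstrained, higher-order property)."""
    k = a.sum(axis=1)
    triangles = np.diag(a @ a @ a) / 2.0
    pairs = k * (k - 1) / 2.0
    return np.divide(triangles, pairs, out=np.zeros_like(triangles), where=pairs > 0)

rng = np.random.default_rng(0)
a_obs = np.array([[0, 1, 1, 1, 0],              # a toy 'observed' network
                  [1, 0, 1, 0, 0],
                  [1, 1, 0, 0, 0],
                  [1, 0, 0, 0, 1],
                  [0, 0, 0, 1, 0]])

x = fit_configuration_model(a_obs.sum(axis=1))  # hypothetical helper from the previous sketch
p = connection_probabilities(x)

observed = clustering(a_obs)
expected = np.mean([clustering(sample_network(p, rng)) for _ in range(2000)], axis=0)
print(np.c_[observed, expected])                # columns to scatter against the identity line
```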

Expressivity of Bodies: The Synesthetic Affinity Between Deleuze and Merleau-Ponty. Thought of the Day 54.0


It is in the description of the synesthetic experience that Deleuze finds resources for his own theory of sensation. And it is in this context that Deleuze and Merleau-Ponty are closest. For Deleuze sees each sensation as a dynamic evolution: sensation is that which passes from one ‘order’ to another, from one ‘level’ to another. This means that each sensation is at diverse levels, of different orders, or in several domains… it is characteristic of sensation to encompass a constitutive difference of level and a plurality of constituting domains. What this means for Deleuze is that sensations cannot be isolated in a particular field of sense; these fields interpenetrate, so that sensation jumps from one domain to another, becoming-color in the visual field or becoming-music on the auditory level. For Deleuze (and this goes beyond what Merleau-Ponty explicitly says), sensation can flow from one field to another because it belongs to a vital rhythm which subtends these fields, or more precisely, which gives rise to the different fields of sense as it contracts and expands, as it moves between different levels of tension and dilation.

If, as Merleau-Ponty says (and Deleuze concurs), synesthetic perception is the rule, then the act of recognition that identifies each sensation with a determinate quality or sense and operates their synthesis within the unity of an object hides from us the complexity of perception, and the heterogeneity of the perceiving body. Synesthesia shows that the unity of the body is constituted in the transversal communication of the senses. But these senses are not pre-given in the body; they correspond to sensations that move between levels of bodily energy, finding different expression in each other. To each of these levels corresponds a particular way of living space and time; hence the simultaneity in depth that is experienced in vision is not the lateral coexistence of touch, and the continuous, sensuous and overlapping extension of touch is lost in the expansion of vision. This heterogeneous multiplicity of levels, or senses, is open to communication; each expresses its embodiment in its own way, and each expresses differently the contents of the other senses.

Thus sensation is not a causal process, but the communication and synchronization of senses within my body, and of my body with the sensible world; it is, as Merleau-Ponty says, a communion. And despite frequent appeal in the Phenomenology of Perception to the sameness of the body and to the common world to ground the diversity of experience, the appeal here goes in a different direction. It is the differences of rhythm and of becoming, which characterize the sensible world, that open it up to my experience. For the expressive body is itself such a rhythm, capable of synchronizing and coexisting with the others. And Merleau-Ponty refers to this relationship between the body and the world as one of sympathy. He is close here to identifying the lived body with the temporalization of existence, with a particular rhythm of duration; and he is close to perceiving the world as the coexistence of such temporalizations, such rhythms. The expressivity of the lived body implies a singular relation to others, and a different kind of intercorporeity than would be the case for two merely physical bodies. This intercorporeity should be understood as inter-temporality. Merleau-Ponty proposes this at the end of the chapter on perception in his Phenomenology of Perception, when he says,

But two temporalities are not mutually exclusive as are two consciousnesses, because each one knows itself only by projecting itself into the present where they can interweave.

Thus our bodies, as different rhythms of duration, can coexist and communicate, can synchronize with each other, in the same way that my body vibrated to the colors of the sensible world. But, in the case of two lived bodies, the synchronization occurs on both sides, with the result that I can experience an internal resonance with the other when the experiences harmonize, or the shattering disappointment of a miscommunication when the attempt fails. The experience of coexistence is hence not a guarantee of communication or understanding, for this communication must ultimately be based on our differences as expressive bodies and singular durations. Our coexistence calls forth an attempt, which is the intuition.

Rhizomatic Topology and Global Politics. A Flirtatious Relationship.

 


Deleuze and Guattari see concepts as rhizomes, biological entities endowed with unique properties. They see concepts as spatially representable, where the representation contains principles of connection and heterogeneity: any point of a rhizome must be connected to any other. Deleuze and Guattari list the possible benefits of spatial representation of concepts, including the ability to represent complex multiplicity, the potential to free a concept from foundationalism, and the ability to show both breadth and depth. In this view, geometric interpretations move away from the insidious understanding of the world in terms of dualisms, dichotomies, and lines, to understand conceptual relations in terms of space and shapes. The ontology of concepts is thus, in their view, appropriately geometric: a multiplicity defined not by its elements, nor by a center of unification and comprehension, but instead measured by its dimensionality and its heterogeneity. The conceptual multiplicity is already composed of heterogeneous terms in symbiosis, and is continually transforming itself such that it is possible to follow, and map, not only the relationships between ideas but how they change over time. In fact, the authors claim that there are further benefits to geometric interpretations of understanding concepts which are unavailable in other frames of reference. They outline the unique contribution of geometric models to the understanding of contingent structure:

Principle of cartography and decalcomania: a rhizome is not amenable to any structural or generative model. It is a stranger to any idea of genetic axis or deep structure. A genetic axis is like an objective pivotal unity upon which successive stages are organized; deep structure is more like a base sequence that can be broken down into immediate constituents, while the unity of the product passes into another, transformational and subjective, dimension. (Deleuze and Guattari)

The word that Deleuze and Guattari use for ‘multiplicities’ can also be translated to the topological term ‘manifold.’ If we thought about their multiplicities as manifolds, there would be a virtually unlimited number of things one could come to know, in geometric terms, about (and with) our object of study, abstractly speaking. Among those unlimited things we could learn are properties of groups (homological, cohomological, and homeomorphic), complex directionality (maps, morphisms, isomorphisms, and orientability), dimensionality (codimensionality, structure, embeddedness), partiality (differentiation, commutativity, simultaneity), and shifting representation (factorization, ideal classes, reciprocity). Each of these notions allows for a different, creative, and potentially critical representation of global political concepts, events, groupings, and relationships. This is how concepts are to be looked at: as manifolds. With such a dimensional understanding of concept-formation, it is possible to deal with complex interactions of like entities, and interactions of unlike entities. Critical theorists have emphasized the importance of such complexity in representation a number of times, speaking about it in terms compatible with mathematical methods if not mathematically. For example, Foucault’s declaration that practicing criticism is a matter of making facile gestures difficult both reflects and is reflected in many critical theorists’ projects of revealing the complexity in (apparently simple) concepts deployed in global politics. This leads to a shift in the concept of danger as well, where danger is not an objective condition but “an effect of interpretation”. Critical thinking about how-possible questions reveals a complexity to the concept of the state which is often overlooked in traditional analyses, sending a wave of added complexity through other concepts as well. This work seeking complexity serves one of the major underlying functions of critical theorizing: finding invisible injustices in (modernist, linear, structuralist) givens in the operation and analysis of global politics.

In a geometric sense, this complexity could be thought about as multidimensional mapping. In theoretical geometry, the process of mapping conceptual spaces is not primarily empirical, but for the purpose of representing and reading the relationships between information, including identification, similarity, differentiation, and distance. The reason for defining topological spaces in math, the essence of the definition, is that there is no absolute scale for describing the distance or relation between certain points, yet it makes sense to say that an (infinite) sequence of points approaches some other point (but again, with no way to describe how quickly or from what direction one might be approaching). This seemingly weak relationship, which is defined purely ‘locally’, i.e., in a small locale around each point, is often surprisingly powerful: using only the relationship of approaching points, one can distinguish between, say, a balloon, a sheet of paper, a circle, and a dot.

To each delineated concept, one should distinguish and associate a topological space, in a (necessarily) non-explicit yet definite manner. Whenever one has a relationship between concepts (here we think of the primary relationship as being that of constitution, but not restrictively), we ‘specify’ a function (or inclusion, or relation) between the topological spaces associated to the concepts. In these terms, a conceptual space is in essence a multidimensional space in which the dimensions represent qualities or features of that which is being represented. Such an approach can be leveraged for thinking about conceptual components, dimensionality, and structure. In these terms, dimensions can be thought of as properties or qualities, each with their own (often multidimensional) properties or qualities. Since a key goal of the modeling of conceptual space is representation, a key (mathematical and theoretical) goal of concept-space mapping is

associationism, where associations between different kinds of information elements carry the main burden of representation. (Conceptual_Spaces_as_a_Framework_for_Knowledge_Representation)

To this end,

objects in conceptual space are represented by points, in each domain, that characterize their dimensional values. A concept geometry for conceptual spaces

These dimensional values can be arranged in relation to each other, as Gardenfors explains that

distances represent degrees of similarity between objects represented in space and therefore conceptual spaces are “suitable for representing different kinds of similarity relation. Concept

These similarity relationships can be explored across ideas of a concept and across contexts, but also over time, since “with the aid of a topological structure, we can speak about continuity, e.g., a continuous change”, a possibility which can be found only in treating concepts as topological structures and not in linguistic descriptions or set-theoretic representations.
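As a toy illustration of this geometric picture, here is a minimal Python sketch (my own construction, not drawn from Gardenfors): concepts are points in a small quality space and similarity is modelled as a decreasing function of distance. The particular dimensions, coordinates, and the exponential similarity function are illustrative assumptions only.

```python
import numpy as np

# A toy conceptual space with three hypothetical quality dimensions: (hue, size, sweetness).
concepts = {
    "cherry":     np.array([0.95, 0.10, 0.80]),
    "strawberry": np.array([0.90, 0.20, 0.75]),
    "lemon":      np.array([0.15, 0.30, 0.10]),
    "watermelon": np.array([0.80, 0.95, 0.70]),
}

def similarity(a, b, sensitivity=2.0):
    """Similarity as a decreasing function of distance in the quality space (an assumed form)."""
    return float(np.exp(-sensitivity * np.linalg.norm(a - b)))

for name, point in concepts.items():
    print(f"similarity(cherry, {name}) = {similarity(concepts['cherry'], point):.3f}")
# nearby points ('strawberry') come out more similar to 'cherry' than distant ones ('lemon'),
# so distance in the space plays the role of (inverse) similarity
```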

Biogrammatic Vir(Ac)tuality. Note Quote.

In Foucault’s most famous example, the prison acts as the confluence of content (prisoners) and expression (law, penal code) (Gilles Deleuze, Sean Hand-Foucault). Informal diagrams proliferate. As abstract machines they contain the transversal vectors that cut across a panoply of features (such as institutions, classes, persons, economic formations, etc.), mapping from point to relational point the generalized features of power economies. The disciplinary diagram explored by Foucault imposes “a particular conduct upon a particular human multiplicity”. The imposition of force upon force affects and effectuates the felt experience of a life, a living. Deleuze has called the abstract machine “pure matter/function” in which relations between forces are nonetheless very real.

[…] the diagram acts as a non-unifying immanent cause that is co-extensive with the whole social field: the abstract machine is like the cause of the concrete assemblages that execute its relations; and these relations between forces take place ‘not above’ but within the very tissue of the assemblages they produce.

The processual conjunction of content and expression; the cutting edge of deterritorialization:

The relations of power and resistance between theory and practice resonate, becoming-form; diagrammatics as praxis integrates and differentiates the immanent cause and quasi-cause of the actualized occasions of research/creation. What do we mean by immanent cause? It is a cause which is realized, integrated and distinguished in its effect. Or rather, the immanent cause is realized, integrated and distinguished by its effect. In this way there is a correlation or mutual presupposition between cause and effect, between abstract machine and concrete assemblages.

Memory is the real name of the relation to oneself, or the affect of self by self […] Time becomes a subject because it is the folding of the outside… forces every present into forgetting but preserves the whole of the past within memory: forgetting is the impossibility of return and memory is the necessity of renewal.

[Figure: Bergson’s cone of memory (left) and the Biogrammatic ontological vector of the Diagram (right)]

The figure on the left is Henri Bergson’s diagram of an infinitely contracted past that directly intersects with the body at point S – a mobile, sensorimotor present where memory is closest to action. Plane P represents the actual present, the plane of contact with objects. The AB segments represent repetitive compressions of memory. As memory contracts it gets closer to action. In its more expanded forms it is closer to dreams. The figure on the right extrapolates from Bergson’s memory model to describe the Biogrammatic ontological vector of the Diagram as it moves from the abstract (informal) machine in its most expanded form “A”, through the cone “tissue”, to the phase-shifting (formal), arriving at the Strata of the P plane to become artefact. The ontological vector passes through the stratified, through the interval of difference created in the phase shift (the same phase shift that separates and folds content and expression to move vertically, transversally, back through to the abstract diagram).

A spatio-temporal-material contracting-expanding of the abstract machine is the processual thinking-feeling-articulating of the diagram becoming-cartographic; synaesthetic conceptual mapping. A play of forces, a series of relays, affecting a tendency toward an inflection of the informal diagram becoming-form. The inflected diagram/biogram folds and unfolds perception, appearances; rides in the gap of becoming between content and expression; intuitively transduces the actualizing (thinking, drawing, marking, erasing) of matter-movement, of expressivity-movement. “To follow the flow of matter… is intuition in action.” A processual stage that prehends the process of the virtual actualizing;

the creative construction of a new reality. The biogrammatic stage of the diagrammatic is paradoxically double in that it is both the actualizing of the abstract machine (contraction) and the recursive counter-actualization of the formal diagram (détournement); virtual and actual.

It is the event-dimension of potential – that is the effective dimension of the interrelating of elements, of their belonging to each other. That belonging is a dynamic corporeal “abstraction” – the “drawing off” (transductive conversion) of the corporeal into its dynamism (yielding the event) […] In direct channeling. That is, in a directional channeling: ontological vector. The transductive conversion is an ontological vector that in-gathers a heterogeneity of substantial elements along with the already-constituted abstractions of language (“meaning”) and delivers them together to change. (Brian Massumi, Parables for the Virtual: Movement, Affect, Sensation)

Skin is the space of the body, the BwO, that is interior and exterior. Interstitial matter of the space of the body.


The material markings and traces of a diagrammatic process, a ‘capturing’ becoming-form. A diagrammatic capturing involves a transductive process between a biogrammatic form of content and a form of expression. The formal diagram is thus an individuating phase-shift as Simondon would have it, always out-of-phase with itself. A becoming-form that inhabits the gap, the difference, between the wave phase of the biogrammatic that synaesthetically draws off the intermix of substance and language in the event-dimension and the drawing of wave phase in which partial capture is formalized. The phase shift difference never acquires a vectorial intention. A pre-decisive, pre-emptive drawing of phase-shifting with a “drawing off” the biogram.


If effects realize something this is because the relations between forces, or power relations, are merely virtual, potential, unstable, vanishing and molecular, and define only possibilities of interaction so long as they do not enter a macroscopic whole capable of giving form to their fluid matter and diffuse function. But realization is equally an integration, a collection of progressive integrations that are initially local and then become or tend to become global, aligning, homogenizing and summarizing relations between forces: here law is the integration of illegalisms.

 

Austrian Economics. Some More Further Ruminations. Part 3.

The dominant British tradition received its first serious challenge in many years when Carl Menger’s Principles of Economics was published in 1871. Menger, the founder of the Austrian School proper, resurrected the Scholastic-French approach to economics, and put it on firmer ground.

Menger spelled out the subjective basis of economic value, and fully explained, for the first time, the theory of marginal utility (the greater the number of units of a good that an individual possesses, the less he will value any given unit). In addition, Menger showed how money originates in a free market when the most marketable commodity is desired, not for consumption, but for use in trading for other goods. Menger restored economics as the science of human action based on deductive logic, and prepared the way for later theorists to counter the influence of socialist thought. Indeed, his student Friedrich von Wieser strongly influenced Friedrich von Hayek’s later writings.

Menger’s admirer and follower at the University of Innsbruck, Eugen Böhm-Bawerk, took Menger’s exposition, reformulated it, and applied it to a host of new problems involving value, price, capital, and interest. His History and Critique of Interest Theories, appearing in 1884, is a sweeping account of fallacies in the history of thought and a firm defense of the idea that the interest rate is not an artificial construct but an inherent part of the market. It reflects the universal fact of “time preference,” the tendency of people to prefer satisfaction of wants sooner rather than later.

Böhm-Bawerk’s Positive Theory of Capital demonstrated that the normal rate of business profit is the interest rate. Capitalists save money, pay laborers, and wait until the final product is sold to receive profit. In addition, he demonstrated that capital is not homogeneous but an intricate and diverse structure that has a time dimension. A growing economy is not just a consequence of increased capital investment, but also of longer and longer processes of production.

Böhm-Bawerk favored policies that deferred to the ever-present reality of economic law. He regarded interventionism as an attack on market economic forces that cannot succeed in the long run. But one area where Böhm-Bawerk had not elaborated on the analysis of Menger was money, the institutional intersection of the “micro” and “macro” approach. A young Ludwig von Mises, economic advisor to the Austrian Chamber of Commerce, took on the challenge.

The result of Mises’s research was The Theory of Money and Credit, published in 1912. He spelled out how the theory of marginal utility applies to money, and laid out his “regression theorem,” showing that money not only originates in the market, but must always do so. Drawing on the British Currency School, Knut Wicksell’s theory of interest rates, and Böhm-Bawerk’s theory of the structure of production, Mises presented the broad outline of the Austrian theory of the business cycle. To note once again, his was not a theory of physical capital, but a theory of interest. So, even if some of the economists of the school had covered the complexities of the structure of production in their writings, that wasn’t really their research object; their concentration was rather on the interest phenomenon, the trade cycle, or entrepreneurship.

Ludwig Lachmann, in his Capital and its Structure, is most serious about the complexities of the structure of production, especially the heterogeneity of physical capital, not only in relation to successive stages of production but in denying any possibility of systematically categorizing, measuring or aggregating capital goods. But does that mean he is from a different camp? Evidently not, since much of his discussion contains an important contribution to the historical specificity of capital, in that the heterogeneous is not itself the research object, but only a problem statement for the theory of the entrepreneur. Says he,

For most purposes capital goods have to be used jointly. Complementarity is of the essence of capital use. But the heterogeneous capital resources do not lend themselves to combination in any arbitrary fashion. For any given number of them only certain modes of complementarity are technically possible, and only a few of these are economically significant. It is among the latter that the entrepreneur has to find the ‘optimum combination’.

For him, the true function of the entrepreneur must remain hidden as long as we disregard the heterogeneity of capital. But Peter Lewin’s Capital in Disequilibrium reads Lachmann revealingly. What makes it possible for entrepreneurs to make production plans comprising numerous heterogeneous capital goods is a combination of the market process and the institution of money and financial accounting. There, you can see Lachmann slipping into the historical territory. Says Lewin,

Planning within firms proceeds against the necessary backdrop of the market. Planning within firms can occur precisely because “the market” furnishes it with the necessary prices for the factor inputs that would be absent in a fullblown state ownership situation.

Based on these prices, the institution of monetary calculation allows entrepreneurs to calculate retrospective and prospective profits. The calculation of profits, Lewin states, is “indispensable in that it provides the basis for discrimination between viable and non-viable production projects.” The approach is not concerned with the heterogeneity of capital goods as such but, to the contrary, with the way these goods are made homogeneous so that entrepreneurs can make the calculations their production plans are based on. Without this homogeneity of capital goods in relation to the goal of the entrepreneur – making monetary profit – it would be difficult, if not impossible, to combine them in a meaningful way.
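As a toy illustration of the role of monetary calculation (my own sketch, not an example from Lewin or Lachmann), heterogeneous capital goods become commensurable only through their money prices, which is what lets an entrepreneur compute a prospective profit and discriminate viable from non-viable production plans; all names and figures below are invented.

```python
from dataclasses import dataclass

@dataclass
class CapitalGood:
    name: str
    quantity: float
    unit_price: float          # the market price is what renders heterogeneous goods commensurable

    def money_cost(self) -> float:
        return self.quantity * self.unit_price

def prospective_profit(plan, expected_revenue):
    """Expected revenue minus the money cost of the (physically heterogeneous) inputs."""
    return expected_revenue - sum(good.money_cost() for good in plan)

# an invented production plan combining physically incommensurable inputs
plan = [
    CapitalGood("lathe", 1, 12000.0),
    CapitalGood("steel (tonnes)", 5, 800.0),
    CapitalGood("labour (hours)", 300, 25.0),
]

profit = prospective_profit(plan, expected_revenue=26000.0)
print(f"prospective profit: {profit:.2f}")
print("viable" if profit > 0 else "non-viable")
```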