Hegelian Marxism of Lukács: Philosophy as Systematization of Ideology and Politics as Manipulation of Ideology. Thought of the Day 80.0


In the Hegelian Marxism of Lukács, for instance, the historicist problematic begins from the relativisation of theory, whereby it is claimed that historical materialism is the “perspective” and “worldview” of the revolutionary class and that, in general, theory (philosophy) is only the coherent systematisation of the ideological worldview of a social group. No distinction of kind exists between theory and ideology, opening the path for the foundational character of ideology, expressed through the Lukácsian claim that the ideological consciousness of a historical subject is the expression of objective relations, and that, correlatively, this historical subject (the proletariat) alienates-expresses a free society by means of a transparent grasp of social processes. Society, as an expression of a single structure of social relations (where the commodity form and reified consciousness are theoretical equivalents), is an expressive totality, so that politics and ideology can be directly deduced from philosophical relations. According to Lukács’ directly Hegelian conception, the historical subject is the unified proletariat, which, as the “creator of the totality of [social] contents”, makes history according to its conception of the world, and thus functions as an identical subject-object of history. The identical subject-object and the transparency of praxis therefore form the telos of the historical process. Lukács reduces the multiplicity of social practices operative within the social formation to the model of an individual “making history,” through the externalisation of an intellectual conception of the world. Lukács therefore arrives at the final element of the historicist problematic, namely, a theorisation of social practice on the model of individual praxis, presented as the historical action of a “collective individual”. This structure of claims is vulnerable to philosophical deconstruction (Gasché) and leads to individualist political conclusions (Althusser).

In the light of the Gramscian provenance of postmarxism, it is important to note that while the explicit target of Althusser’s critique was the Hegelian totality, Althusser is equally critical of the aleatory posture of Gramsci’s “absolute historicism,” regarding it as exemplary of the impasse of radicalised historicism (Reading Capital). Althusser argues that Gramsci preserves the philosophical structure of historicism exemplified by Lukács and so the criticism of “expressive totality,” or spiritual holism, also applies to Gramsci. According to Gramsci, “the philosophy of praxis is absolute ‘historicism,’ the absolute secularisation and earthiness of thought, an absolute humanism of history”. Gramsci’s is an “absolute” historicism because it subjects the “absolute knowledge” supposed to be possible at the Hegelian “end of history” to historicisation-relativisation: instead of absolute knowledge, every truly universal worldview becomes merely the epochal totalisation of the present. Consequently, Gramsci rejects the conception that a social agent might aspire to “absolute knowledge” by adopting the “perspective of totality”. If anything, this exacerbates the problems of historicism by bringing the inherent relativism of the position to the surface. Ideology, conceptualised as the worldview of a historical subject (revolutionary proletariat, hegemonic alliance), forms the foundation of the social field, because in the historicist lens a social system is cemented by the ideology of the dominant group. Philosophy (and by extension, theory) represents only the systematisation of ideology into a coherent doctrine, while politics is based on ideological manipulation as its necessary precondition. Thus, for historicism, every “theoretical” intervention is immediately a political act, and correlatively, theory becomes the direct servant of ideology.

Accelerated Capital as an Anathema to the Principles of Communicative Action. A Note Quote on the Reciprocity of Capital and Ethicality of Financial Economics


Markowitz portfolio theory explicitly observes that portfolio managers are not (expected) utility maximisers, as they diversify, and offers the hypothesis that a desire for reward is tempered by a fear of uncertainty. The model concludes that all investors should hold the same portfolio; their individual risk-reward objectives are satisfied by the weighting of this ‘index portfolio’ relative to riskless cash in the bank, i.e., by choosing a point on the capital market line. The slope of the Capital Market Line is the market price of risk, which is an important parameter in arbitrage arguments.
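
As a rough numerical sketch of these claims (a toy example of ours, with made-up parameters, not drawn from the text): the tangency or ‘index’ portfolio has weights proportional to Σ⁻¹(μ − r), and the slope of the Capital Market Line is its Sharpe ratio, the market price of risk.

```python
import numpy as np

# Hypothetical two-asset market: none of these numbers come from the text.
mu = np.array([0.08, 0.12])            # expected returns of the risky assets
cov = np.array([[0.040, 0.006],
                [0.006, 0.090]])       # covariance matrix of returns
rf = 0.02                              # riskless rate (cash in the bank)

# Tangency ('index') portfolio: weights proportional to cov^{-1} (mu - rf)
w = np.linalg.solve(cov, mu - rf)
w /= w.sum()

port_mu = float(w @ mu)                   # portfolio expected return
port_sigma = float(np.sqrt(w @ cov @ w))  # portfolio volatility

# Slope of the Capital Market Line: the market price of risk (Sharpe ratio)
market_price_of_risk = (port_mu - rf) / port_sigma
print(w, market_price_of_risk)
```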

Merton had initially attempted to provide an alternative to Markowitz based on utility maximisation employing stochastic calculus. He was only able to resolve the problem by employing the hedging arguments of Black and Scholes, and in doing so built a model that was based on the absence of arbitrage, free of turpe-lucrum. The prescriptive statement “it should not be possible to make sure profits” is explicit in the Efficient Markets Hypothesis and in the use of an Arrow security in the context of the Law of One Price. Based on these observations, we conjecture that the whole paradigm of financial economics is built on the principle of balanced reciprocity. In order to explore this conjecture we shall examine the relationship between commerce and themes in Pragmatic philosophy. Specifically, we highlight Robert Brandom’s (Making It Explicit: Reasoning, Representing, and Discursive Commitment) position that there is a pragmatist conception of norms – a notion of primitive correctnesses of performance implicit in practice that precede and are presupposed by their explicit formulation in rules and principles.

The ‘primitive correctnesses’ of commercial practices were recognised by Aristotle when he investigated the nature of Justice in the context of commerce, and then by Olivi when he looked favourably on merchants. It is exhibited in the doux-commerce thesis; compare Fourcade and Healy’s contemporary description of the thesis – “Commerce teaches ethics mainly through its communicative dimension, that is, by promoting conversations among equals and exchange between strangers” – with Putnam’s description of Habermas’ communicative action as “based on the norm of sincerity, the norm of truth-telling, and the norm of asserting only what is rationally warranted … [and] is contrasted with manipulation” (Hilary Putnam, The Collapse of the Fact/Value Dichotomy and Other Essays).

There are practices (that should be) implicit in commerce that make it an exemplar of communicative action. A further expression of markets as centres of communication is manifested in the Asian description of a market, which brings to mind Donald Davidson’s (Subjective, Intersubjective, Objective) argument that knowledge is not the product of a bipartite conversation but of a tripartite relationship between two speakers and their shared environment. Replacing the negotiation between market agents with an algorithm that delivers a theoretical price replaces ‘knowledge’, generated through communication, with dogma. The problem with the performativity that Donald MacKenzie (An Engine, Not a Camera: How Financial Models Shape Markets) is concerned with is one of monism. In employing pricing algorithms, the markets cannot perform to something that comes close to ‘true belief’, which can only be identified through communication between sapient humans. This is an almost trivial observation to (successful) market participants, but difficult to appreciate by spectators who seek to attain ‘objective’ knowledge of markets from a distance. The relevance to financial crises lies in the position that ‘true belief’ is about establishing coherence through myriad triangulations centred on an asset, rather than relying on a theoretical model.

Shifting gears now: unless the martingale measure is a by-product of a hedging approach, the price given by such martingale measures is not related to the cost of a hedging strategy, and therefore the meaning of such ‘prices’ is not clear. If the hedging argument cannot be employed, as in the markets studied by Cont and Tankov (Financial Modelling with Jump Processes), there is no conceptual framework supporting the prices obtained from the Fundamental Theorem of Asset Pricing. This lack of meaning can be interpreted as a consequence of the strict fact/value dichotomy in contemporary mathematics that came with the eclipse of Poincaré’s Intuitionism by Hilbert’s Formalism and Bourbaki’s Rationalism. The practical problem of supporting the social norms of market exchange has been replaced by a theoretical problem of developing formal models of markets. These models then legitimate the actions of agents in the market without having to make reference to explicitly normative values.
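
As an illustration of the contrast (a minimal sketch of ours, with hypothetical parameters): in a complete one-period binomial market the martingale-measure price coincides with the cost of the replicating hedge, which is exactly the link that breaks down in the incomplete jump markets studied by Cont and Tankov.

```python
# One-period binomial market (hypothetical parameters): in this complete
# market the unique martingale measure reproduces the cost of the
# replicating hedge, so the 'price' inherits its meaning from hedging.
S0, u, d, r = 100.0, 1.2, 0.8, 0.05
K = 100.0                                      # strike of a call option

q = ((1 + r) - d) / (u - d)                    # risk-neutral probability
payoff_u = max(S0 * u - K, 0.0)
payoff_d = max(S0 * d - K, 0.0)

price_martingale = (q * payoff_u + (1 - q) * payoff_d) / (1 + r)

# Replicating hedge: delta shares plus b units of cash
delta = (payoff_u - payoff_d) / (S0 * u - S0 * d)
b = (payoff_u - delta * S0 * u) / (1 + r)
price_hedge = delta * S0 + b

assert abs(price_martingale - price_hedge) < 1e-12
print(price_martingale)                        # ≈ 11.90
```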

The Efficient Market Hypothesis is based on the axiom that the market price is determined by the balance between supply and demand, so that an increase in trading facilitates the convergence to equilibrium. If this axiom is replaced by the axiom of reciprocity, the justification for speculative activity in support of efficient markets disappears. In fact, the axiom of reciprocity would de-legitimise ‘true’ arbitrage opportunities as being unfair. This would not necessarily make the activities of actual market arbitrageurs illicit, since there are rarely strategies that are without the risk of a loss; however, it would place more emphasis on the risks of speculation and inhibit the hubris that has been associated with the prelude to the recent Crisis. These points raise the question of the legitimacy of speculation in the markets. In an attempt to understand this issue, Gabrielle and Reuven Brenner identify three types of market participant. ‘Investors’ are preoccupied with future scarcity and so defer income. Because uncertainty exposes the investor to the risk of loss, investors wish to minimise uncertainty at the cost of potential profits; this is the basis of classical investment theory. ‘Gamblers’ will bet on an outcome taking odds that have been agreed on by society, such as with a sporting bet or in a casino; this relates to de Moivre’s and Montmort’s ‘taming of chance’. ‘Speculators’ bet on a mis-calculation of the odds quoted by society, and the reason why speculators are regarded as socially questionable is that they have opinions that are explicitly at odds with the consensus: they are practitioners who rebel against a theoretical ‘Truth’. This is captured in Arjun Appadurai’s argument that the leading agents in modern finance “believe in their capacity to channel the workings of chance to win in the games dominated by cultures of control . . . [they] are not those who wish to ‘tame chance’ but those who wish to use chance to animate the otherwise deterministic play of risk [quantifiable uncertainty]”.

In the context of Pragmatism, financial speculators embody pluralism, a concept essential to Pragmatic thinking and an antidote to the problem of radical uncertainty. Appadurai was motivated to study finance by Marcel Mauss’ essay Le Don (The Gift), which explores the moral force behind reciprocity in primitive and archaic societies, and goes on to say that the contemporary financial speculator is “betting on the obligation of return”, and that this is the fundamental axiom of contemporary finance. David Graeber (Debt: The First 5,000 Years) also recognises the fundamental position reciprocity has in finance, but whereas Appadurai recognises the importance of reciprocity in the presence of uncertainty, Graeber essentially ignores uncertainty in his analysis, which ends with the conclusion that “we don’t ‘all’ have to pay our debts”. In advocating that reciprocity need not be honoured, Graeber is not just challenging contemporary capitalism but also the foundations of the civitas, based on equality and reciprocity. The origins of Graeber’s argument are in the first half of the nineteenth century. In 1836 John Stuart Mill defined political economy as being concerned with “[man] solely as a being who desires to possess wealth, and who is capable of judging of the comparative efficacy of means for obtaining that end”.

In Principles of Political Economy with Some of Their Applications to Social Philosophy, Mill defended Thomas Malthus’ An Essay on the Principle of Population, which focused on scarcity. Mill was writing at a time when Europe was struck by the cholera pandemic of 1829–1851 and the famines of 1845–1851, and while Lord Tennyson was describing nature as “red in tooth and claw”. At this time, society’s fear of uncertainty seems to have been replaced by a fear of scarcity, and these standards of objectivity dominated economic thought through the twentieth century. Almost a hundred years after Mill, Lionel Robbins defined economics as “the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses”. Dichotomies emerge in the aftermath of the Cartesian revolution that aims to remove doubt from philosophy. Theory and practice, subject and object, facts and values, means and ends are all separated. In this environment ex cathedra norms, in particular utility (profit) maximisation, encroach on commercial practice.

In order to set boundaries on commercial behaviour motivated by profit maximisation, particularly when market uncertainty returned after the Nixon shock of 1971, society imposes regulations on practice. As a consequence, two competing ethics, a functional Consequentialist ethic guiding market practices and a regulatory Deontological ethic attempting to stabilise the system, vie for supremacy. It is in this debilitating competition between two essentially theoretical ethical frameworks that we offer an explanation for the Financial Crisis of 2007–2009: profit maximisation, not speculation, is destabilising in the presence of radical uncertainty, and regulation cannot keep up with motivated profit maximisers who can justify their actions through abstract mathematical models that bear little resemblance to actual markets. An implication of reorienting financial economics to focus on markets as centres of ‘communicative action’ is that markets could become self-regulating, in the same way that the legal or medical spheres are self-regulated through professions. This is not a ‘libertarian’ argument based on freeing the Consequentialist ethic from a Deontological brake. Rather, it argues that being a market participant entails restrictive norms on the agent, such as sincerity and truth-telling, that support the creation of knowledge of asset prices within a broader objective of social cohesion. This immediately calls into question the legitimacy of algorithmic/high-frequency trading, which seems anathema to the principles of communicative action.

Conjuncted: Occam’s Razor and Nomological Hypothesis. Thought of the Day 51.1.1



A temporally evolving system must possess a sufficiently rich set of symmetries to allow us to infer general laws from a finite set of empirical observations. But what justifies this hypothesis?

This question is central to the entire scientific enterprise. Why are we justified in assuming that scientific laws are the same in different spatial locations, or that they will be the same from one day to the next? Why should replicability of other scientists’ experimental results be considered the norm, rather than a miraculous exception? Why is it normally safe to assume that the outcomes of experiments will be insensitive to irrelevant details? Why, for that matter, are we justified in the inductive generalizations that are ubiquitous in everyday reasoning?

In effect, we are assuming that the scientific phenomena under investigation are invariant under certain symmetries – both temporal and spatial, including translations, rotations, and so on. But where do we get this assumption from? The answer lies in the principle of Occam’s Razor.

Roughly speaking, this principle says that, if two theories are equally consistent with the empirical data, we should prefer the simpler theory:

Occam’s Razor: Given any body of empirical evidence about a temporally evolving system, always assume that the system has the largest possible set of symmetries consistent with that evidence.

Making it more precise, we begin by explaining what it means for a particular symmetry to be “consistent” with a body of empirical evidence. Formally, our total body of evidence can be represented as a subset E of H, namely the set of all logically possible histories that are not ruled out by that evidence. Note that we cannot assume that our evidence is a subset of Ω; when we scientifically investigate a system, we do not normally know what Ω is. Hence we can only assume that E is a subset of the larger set H of logically possible histories.

Now let ψ be a transformation of H, and suppose that we are testing the hypothesis that ψ is a symmetry of the system. For any positive integer n, let ψⁿ be the transformation obtained by applying ψ repeatedly, n times in a row. For example, if ψ is a rotation about some axis by angle θ, then ψⁿ is the rotation by the angle nθ. For any such transformation ψⁿ, we write ψ⁻ⁿ(E) to denote the inverse image in H of E under ψⁿ. We say that the transformation ψ is consistent with the evidence E if the intersection

E ∩ ψ⁻¹(E) ∩ ψ⁻²(E) ∩ ψ⁻³(E) ∩ …

is non-empty. This means that the available evidence (i.e., E) does not falsify the hypothesis that ψ is a symmetry of the system.

For example, suppose we are interested in whether cosmic microwave background radiation is isotropic, i.e., the same in every direction. Suppose we measure a background radiation level of x₁ when we point the telescope in direction d₁, and a radiation level of x₂ when we point it in direction d₂. Call these events E₁ and E₂. Thus, our experimental evidence is summarized by the event E = E₁ ∩ E₂. Let ψ be a spatial rotation that rotates d₁ to d₂. Then, focusing for simplicity just on the first two terms of the infinite intersection above,

E ∩ ψ⁻¹(E) = E₁ ∩ E₂ ∩ ψ⁻¹(E₁) ∩ ψ⁻¹(E₂).

If x₁ = x₂, we have E₁ = ψ⁻¹(E₂), and the expression for E ∩ ψ⁻¹(E) simplifies to E₁ ∩ E₂ ∩ ψ⁻¹(E₁), which has at least a chance of being non-empty, meaning that the evidence has not (yet) falsified isotropy. But if x₁ ≠ x₂, then E₁ and ψ⁻¹(E₂) are disjoint. In that case, the intersection E ∩ ψ⁻¹(E) is empty, and the evidence is inconsistent with isotropy. As it happens, we know from recent astronomy that x₁ ≠ x₂ in some cases, so cosmic microwave background radiation is not isotropic, and ψ is not a symmetry.
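
To make the consistency test concrete, here is a toy computational sketch (our own construction, not from the text): histories assign a value to each of four sites on a cycle, the candidate symmetry is a cyclic shift, and the truncated intersection E ∩ ψ⁻¹(E) ∩ ψ⁻²(E) ∩ … is checked for non-emptiness.

```python
from itertools import product

# Toy model: a history assigns a value in {0, 1} to each of four sites on a
# cycle; H is the set of all logically possible histories.
H = set(product([0, 1], repeat=4))

def shift(h):
    """Candidate symmetry psi: rotate the cycle by one site."""
    return h[1:] + h[:1]

def preimage(E, psi):
    """Inverse image of E under psi, taken inside H."""
    return {h for h in H if psi(h) in E}

def consistent(E, psi, depth=8):
    """Truncate the infinite intersection E ∩ ψ⁻¹(E) ∩ ψ⁻²(E) ∩ … at
    `depth` steps (enough here, since the shift has order four)."""
    inter, current = set(E), set(E)
    for _ in range(depth):
        current = preimage(current, psi)
        inter &= current
        if not inter:
            return False
    return True

# Evidence: we observed the value 1 at site 0; other sites are unconstrained.
E = {h for h in H if h[0] == 1}
print(consistent(E, shift))   # True: (1, 1, 1, 1) survives every shift
```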

Our version of Occam’s Razor now says that we should postulate as symmetries of our system a maximal monoid of transformations consistent with our evidence. Formally, a monoid Ψ of transformations (where each ψ in Ψ is a function from H into itself) is consistent with evidence E if the intersection

⋂ψ∈Ψ ψ⁻¹(E)

is non-empty. This is the generalization of the infinite intersection that appeared in our definition of an individual transformation’s consistency with the evidence. Further, a monoid Ψ that is consistent with E is maximal if no proper superset of Ψ forms a monoid that is also consistent with E.

Occam’s Razor (formal): Given any body E of empirical evidence about a temporally evolving system, always assume that the set of symmetries of the system is a maximal monoid Ψ consistent with E.

What is the significance of this principle? We define Γ to be the set of all symmetries of our temporally evolving system. In practice, we do not know Γ. A monoid Ψ that passes the test of Occam’s Razor, however, can be viewed as our best guess as to what Γ is.

Furthermore, if Ψ is this monoid, and E is our body of evidence, the intersection

⋂ψ∈Ψ ψ⁻¹(E)

can be viewed as our best guess as to what the set of nomologically possible histories is. It consists of all those histories among the logically possible ones that are not ruled out by the postulated symmetry monoid Ψ and the observed evidence E. We thus call this intersection our nomological hypothesis and label it Ω(Ψ,E).
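
Continuing the toy sketch above (with `H`, `E`, `shift` and `preimage` as defined there), the nomological hypothesis Ω(Ψ,E) can be computed as the intersection of the pre-images of E under the monoid generated by the shift; since the shift has order four, the intersection stabilises after four steps.

```python
# Take Ψ to be the monoid generated by `shift` (here in fact a group of
# order four) and compute Ω(Ψ, E) as the intersection of pre-images of E.
def omega(E, psi, order=4):
    result, current = set(E), set(E)
    for _ in range(order):
        current = preimage(current, psi)
        result &= current
    return result

print(omega(E, shift))   # {(1, 1, 1, 1)}: the only history not ruled out
```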

To see that this construction is not completely far-fetched, note that, under certain conditions, our nomological hypothesis does indeed reflect the truth about nomological possibility. If the hypothesized symmetry monoid Ψ is a subset of the true symmetry monoid Γ of our temporally evolving system – i.e., we have postulated some of the right symmetries – then the true set Ω of all nomologically possible histories will be a subset of Ω(Ψ,E). So, our nomological hypothesis will be consistent with the truth and will, at most, be logically weaker than the truth.

Given the hypothesized symmetry monoid Ψ, we can then assume provisionally (i) that any empirical observation we make, corresponding to some event D, can be generalized to a Ψ-invariant law and (ii) that unconditional and conditional probabilities can be estimated from empirical frequency data using a suitable version of the Ergodic Theorem.

Concepts – Intensional and Extensional.


Let us start in this fashion: objects to which concepts apply (or not). The first step in arriving at a theory for this situation is to assume that the objects in question are completely arbitrary (as urelements in set theory). This assumption is evidently wrong in empirical experience, as well as in mathematics itself, e.g., in function theory. Admitting this assumption therefore forces us to build our own theory of sets, so as to take care of the case of complex objects later on.

Concepts are normally given to us by linguistic expressions, disregarding by abstraction the origin of languages or signals or what have you. Now we can develop a theory of concepts as follows. We idealize our language by fixing a vocabulary together with logical operators, and formulate expressions for classes, functions, and relations in the manner of the λ-calculus. Here we actually have a theory of concepts, understood intensionally. Note that the extensional point of view is by no means lost, since we read, e.g., λx,y.R(x,y) as the relation R over a domain of urelements; but either R is in the vocabulary or given by a composed expression in our logical language; equality does not refer to equal extensions but to logical equivalence and reduction processes. Moreover, nothing prevents us from applying λ-expressions again to λ-expressions, so that hierarchies of concepts can be included.
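
A small illustration of the intensional/extensional contrast (ours, in Python rather than in the λ-calculus proper): two expressions can agree on every argument of a domain, i.e., have equal extensions, while remaining distinct as expressions, their identification requiring a reduction or equivalence argument.

```python
# Two 'concepts' with the same extension over a finite domain but distinct
# intensional identity as expressions.
domain = range(-3, 4)

f = lambda x: x * 0   # the concept 'x times zero'
g = lambda x: x - x   # the concept 'x minus x'

# Extensionally equal over the domain:
print(all(f(x) == g(x) for x in domain))          # True

# Intensionally distinct: as programs (expressions) they differ, and are
# identified only via an equivalence proof or reduction, not by fiat.
print(f.__code__.co_code == g.__code__.co_code)   # False here

# Concepts can be applied to concepts: hierarchies are unproblematic.
twice = lambda phi: lambda x: phi(phi(x))
print(twice(g)(5))                                # 0
```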

Another approach to the question of obtaining a theory of concepts is the algebraic one. Here, introducing variables for extensions over a domain of urelements, and calling them classes, yields the axiomatic class calculus. Adding (two-place) relations, again with axioms, we obtain the relation calculus. One could go a step further to polyadic algebra. These theories do not have a prominent role nowadays, if one compares them with the λ-calculus or set theory. This is probably due to the circumstance that it seems difficult, not to say actually against the proper idea behind these theories, to allow iteration in the sense of classes of classes, etc.

For mathematical purposes and for the uses of logic, the appropriate way is to restrict a theory of concepts to a theory of their extensions. There is a good reason for this, since in an abstract theory we are interested in being as neutral as possible with respect to any description or factual theory given beforehand. There is a philosophical principle behind this, namely that logical (and in this case set-theoretical) assumptions should be as far as possible distinguishable from any factual or descriptive assumption.

Whitehead’s Anti-Substantivalism, or Process & Reality as a Cosmology to-be. Thought of the Day 39.0


Treating “stuff” as some kind of metaphysical primitive is mere substantivalism – and fundamentally question-begging. One has replaced an extra-theoretic referent of the wave-function (unless one defers to some quasi-literalist reading of the nature of the stochastic amplitude function ζ[X(t)] as somehow characterizing something akin to a “density of stuff”, and moreover the logic and probability (Born Rules) must ultimately be obtained from experimentally obtained scattering amplitudes) with something at least equally mystifying, as the argument against decoherence goes on to show:

In other words, you have a state vector which gives rise to an outcome of a measurement and you cannot understand why this is so according to your theory.

As a response to Platonism, one can likewise read Process and Reality as essentially anti-substantivalist.

Consider, for instance:

Those elements of our experience which stand out clearly and distinctly [giving rise to our substantial intuitions] in our consciousness are not its basic facts, [but] they are . . . late derivatives in the concrescence of an experiencing subject. . . .Neglect of this law [implies that] . . . [e]xperience has been explained in a thoroughly topsy-turvy fashion, the wrong end first (161).

To function as an object is to be a determinant of the definiteness of an actual occurrence [occasion] (243).

The phenomenological ontology offered in Process and Reality is richly nuanced (including metaphysical primitives such as prehensions, occasions, and their respectively derivative notions such as causal efficacy, presentational immediacy, nexus, etc.). None of these suggest metaphysical notions of substance (i.e., independently existing subjects) as a primitive. The case can perhaps be made concerning the discussion of eternal objects, but such notions as discussed vis-à-vis the process of concrescence are obviously not metaphysically primitive notions. Certainly these metaphysical primitives conform in a more nuanced and articulated manner to aspects of process ontology. “Embedding” – like the notion of emergence – is a crucial constituent in the information-theoretic, quantum-topological, and geometric accounts. Moreover, concerning the issue of relativistic covariance, it is to be regarded that Process and Reality is really a sketch of a cosmology-to-be . . . [in the spirit of] Kant [who] built on the obsolete ideas of space, time, and matter of Euclid and Newton. Whitehead set out to suggest what a philosophical cosmology might be that builds on Newton.

Biogrammatic Vir(Ac)tuality. Note Quote.

In Foucault’s most famous example, the prison acts as the confluence of content (prisoners) and expression (law, penal code) (Gilles Deleuze, Foucault, trans. Sean Hand). Informal diagrams proliferate. As abstract machines they contain the transversal vectors that cut across a panoply of features (such as institutions, classes, persons, economic formations, etc.), mapping from point to relational point the generalized features of power economies. The disciplinary diagram explored by Foucault imposes “a particular conduct upon a particular human multiplicity”. The imposition of force upon force affects and effectuates the felt experience of a life, a living. Deleuze has called the abstract machine “pure matter/function” in which relations between forces are nonetheless very real.

[…] the diagram acts as a non-unifying immanent cause that is co-extensive with the whole social field: the abstract machine is like the cause of the concrete assemblages that execute its relations; and these relations between forces take place ‘not above’ but within the very tissue of the assemblages they produce.

The processual conjunction of content and expression; the cutting edge of deterritorialization:

The relations of power and resistance between theory and practice resonate – becoming-form; diagrammatics as praxis integrates and differentiates the immanent cause and quasi-cause of the actualized occasions of research/creation. What do we mean by immanent cause? It is a cause which is realized, integrated and distinguished in its effect. Or rather, the immanent cause is realized, integrated and distinguished by its effect. In this way there is a correlation or mutual presupposition between cause and effect, between abstract machine and concrete assemblages.

Memory is the real name of the relation to oneself, or the affect of self by self […] Time becomes a subject because it is the folding of the outside…forces every present into forgetting but preserves the whole of the past within memory: forgetting is the impossibility of return and memory is the necessity of renewal.


The figure on the left is Henri Bergson’s diagram of an infinitely contracted past that directly intersects with the body at point S – a mobile, sensorimotor present where memory is closest to action. Plane P represents the actual present, the plane of contact with objects. The AB segments represent repetitive compressions of memory. As memory contracts it gets closer to action; in its more expanded forms it is closer to dreams. The figure on the right extrapolates from Bergson’s memory model to describe the Biogrammatic ontological vector of the Diagram as it moves from abstract (informal) machine in the most expanded form “A”, through the cone “tissue”, to the phase-shifting (formal), arriving at the Strata of the P plane to become artefact. The ontological vector passes through the stratified, through the interval of difference created in the phase shift (the same phase shift that separates and folds content and expression) to move vertically, transversally, back through to the abstract diagram.

A spatio-temporal-material contracting-expanding of the abstract machine is the processual thinking-feeling-articulating of the diagram becoming-cartographic; synaesthetic conceptual mapping. A play of forces, a series of relays, affecting a tendency toward an inflection of the informal diagram becoming-form. The inflected diagram/biogram folds and unfolds perception, appearances; rides in the gap of becoming between content and expression; intuitively transduces the actualizing (thinking, drawing, marking, erasing) of matter-movement, of expressivity-movement. “To follow the flow of matter… is intuition in action.” A processual stage that prehends the process of the virtual actualizing;

the creative construction of a new reality. The biogrammatic stage of the diagrammatic is paradoxically double in that it is both the actualizing of the abstract machine (contraction) and the recursive counter-actualization of the formal diagram (détournement); virtual and actual.

It is the event-dimension of potential – that is the effective dimension of the interrelating of elements, of their belonging to each other. That belonging is a dynamic corporeal “abstraction” – the “drawing off” (transductive conversion) of the corporeal into its dynamism (yielding the event) […] In direct channeling. That is, in a directional channeling: ontological vector. The transductive conversion is an ontological vector that in-gathers a heterogeneity of substantial elements along with the already-constituted abstractions of language (“meaning”) and delivers them together to change. (Brian Massumi, Parables for the Virtual: Movement, Affect, Sensation)

Skin is the space of the body, the BwO, that is interior and exterior; interstitial matter of the space of the body.


The material markings and traces of a diagrammatic process, a ‘capturing’ becoming-form. A diagrammatic capturing involves a transductive process between a biogrammatic form of content and a form of expression. The formal diagram is thus an individuating phase-shift as Simondon would have it, always out-of-phase with itself. A becoming-form that inhabits the gap, the difference, between the wave phase of the biogrammatic that synaesthetically draws off the intermix of substance and language in the event-dimension and the drawing of wave phase in which partial capture is formalized. The phase shift difference never acquires a vectorial intention. A pre-decisive, pre-emptive drawing of phase-shifting with a “drawing off” the biogram.


If effects realize something this is because the relations between forces, or power relations, are merely virtual, potential, unstable, vanishing and molecular, and define only possibilities of interaction so long as they do not enter a macroscopic whole capable of giving form to their fluid matter and diffuse function. But realization is equally an integration, a collection of progressive integrations that are initially local and then become or tend to become global, aligning, homogenizing and summarizing relations between forces: here law is the integration of illegalisms.

 

Object as Category-Theoretic or Object as Ontological: The Inadequacy of Implicitly Quantifying Over Elements. (2)


It will be convenient to use the term ‘object’ in two senses. First, as an object of a category, i.e. in a purely mathematical sense. We shall call this a C-object (‘C’ for ‘category-theoretic’). Second, in the sense commonly used in structural realist debates, and which was already introduced above, viz. an object is a physical entity which is a relatum in physical relations. We shall call this an O-object (‘O’ for ‘ontological’).

We will also need to clarify our use of the term ‘element’. We use ‘element’ to mean an element of a set, or as it is also often called, a ‘point’ of a set (indeed it will be natural for us to switch to the language of points when discussing manifolds, i.e. spacetimes). This familiar use of ‘element’ should be distinguished from the category-theoretic concepts of ‘global element’ and ‘generalized element’, which are introduced below.

Jonathan Bain’s first strategy for defending (Objectless) draws on the following idea: the usual set-theoretic representations of O-objects and relations can be translated into category-theoretic terms, whence these objects can be eliminated. In fact, the argument can be seen as consisting of two different parts.

In the first part, Bain attempts to give a highly general argument, in the sense that it turns only on the notion of universal properties and the translatability of statements about certain mathematical representations (i.e. elements of sets) of O-objects into statements about morphisms between C-objects. As Bain himself notes, the general argument fails, and he thus introduces a more specific argument, which is what he wishes to endorse. The specific argument turns on the idea of obtaining a translation scheme from a ‘categorical equivalence’ between a geometric category and an algebraic category, which in turn allows one to generalize the original C-objects. The argument is ‘specific’ because such equivalences only hold between rather special sorts of categories.

The details of Bain’s general argument can be reconstructed as follows:

G1: Physical objects and the structures they bear are typically identified with the elements of a set X and relations on X respectively.

G2: The set-theoretic entities of G1 are to be represented in category-theoretic language by considering the category whose objects are the relevant structured sets, and whose morphisms are functions that preserve ‘structure’.

G3: Set-theoretic statements about an object of a category (of the type in G2) can often be expressed without making reference to the elements of that object. For instance:

1. In any category with a terminal object any element of an object X can be expressed as a morphism from the terminal object to X. (So for instance, since the singleton {∗} is the terminal object in the category Set, an element of a set X can be described by a morphism {∗} → X.)

2. In a category with some universal property, this property can be described purely in terms of morphisms, i.e. without making any reference to elements of an object.

To sum up, G1 links O-objects with a standard mathematical representation, viz. elements of a set. And G2 and G3 are meant to establish the possibility that, in certain cases, category theory allows us to translate statements about elements of sets into statements about the structure of morphisms between C-objects.

Thus, Bain takes G1–G3 to suggest that: 

C: Category theory allows for the possibility of coherently describing physical structures without making any reference to physical objects.

Indeed, Bain thinks the argument suggests that the mathematical representatives of O-objects, i.e. the elements of sets, are surplus, and that category theory succeeds in removing this surplus structure. Note that even if there is surplus structure here, it is not of the same kind as, e.g. gauge-equivalent descriptions of fields in Yang-Mills theory. The latter has to do with various equivalent ways in which one can describe the dynamical objects of a theory, viz. fields. By contrast, Bain’s strategy involves various equivalent descriptions of the entire theory.

Bain himself thinks that the inference from G1–G3 to C fails, but he does give it serious consideration, and it is easy to see why: its premises are based on the most natural and general translation scheme in category theory, viz. redescribing the properties of C-objects in terms of morphisms, and indeed – if one is lucky – in terms of universal properties.

First, the premise G1. Structural realist doctrines are typically formalized by modeling O-objects as elements of a set and structures as relations on that set. However, this is seldom the result of reasoned deliberation about whether standard set theory is the best expressive resource from some class of such resources, but rather the product of a deeply entrenched set-theoretic viewpoint within philosophy. Were philosophers familiar with an alternative to set theory that was at least as powerful, e.g. category theory, then O-objects and structures might well have been modeled directly in the alternative formalism. Of course, it is also a reasonable viewpoint to say that it is most ‘natural’ to do the philosophy/foundations of physics in terms of set theory – what is ‘natural’ depends on how one conceives of such foundational investigations.

So we maintain that there is no reason for the defender of O-objects to accept G1. For instance, he might try to construct a category such that O-objects are modeled by C-objects and structures are modeled by morphisms. There are indeed categories whose C-objects coincide with the mathematical representatives of O-objects: in a path homotopy category, the C-objects are just points of the relevant space, and one might in turn take the points of a space to be O-objects, as Bain does in his example of general relativity and Einstein algebras. Or he might take as his starting point a non-concrete category, whose objects have no underlying set and thus cannot be expressed in the terms of G1.

The premise G2, on the other hand, is ambiguous—it is unclear exactly how Bain wants us to understand ‘structure’ and thus ‘structure-preserving maps’. First, note that when mathematicians talk about ‘structure-preserving maps’ they usually have in mind morphisms that do not preserve all the features of a C-object, but rather the characteristic (albeit partial) features of that C-object. For instance, with respect to a group, a structure-preserving map is a homomorphism and not an isomorphism. Bain’s example of the category Set is of this type, because its morphisms are arbitrary functions (and not bijective functions).

However, Bain wants to introduce a different notion of ‘structure’ that contrasts with this standard usage, for he says:

(Structure) …the intuitions of the ontic structural realist may be preserved by defining “structure” in this context to be “object in a category”.

If we take this claim seriously, then a structure-preserving map will turn out to be an isomorphism in the relevant category – for only isomorphisms preserve the complete ‘structural essence’ of a structured set. For instance, Bain’s example of the category whose objects are smooth manifolds and whose morphisms are diffeomorphisms is of this type. If this is really what Bain has in mind, then one inevitably ends up with a very limited and dull class of categories. But even if one relaxes this notion of ‘structure’ to mean ‘the structure that is preserved by the morphisms of the category, whatever they happen to be’, one still runs into trouble with G3.

We now turn to the premise G3. First, note that G3 (i) is false, as we now explain. It will be convenient to introduce a piece of standard terminology: a morphism from a terminal object to some object X is called a global element of X. And the question of whether an element of X can be expressed as a global element in the relevant category turns on the structure of the category in question. For instance, in the category Man with smooth manifolds as objects and smooth maps as morphisms, this question receives a positive answer: global elements are indeed in bijective correspondence with elements of a manifold. This is because the terminal object is the 0-dimensional manifold {0}, and so an element of a manifold M is a morphism {0} → M. But in many other categories, e.g. the category Grp, the answer is negative. As an example, consider that Grp has the trivial group 1 as its terminal object and so a morphism from 1 to a group G only picks out its identity and not its other elements. In order to obtain the other elements, one has to introduce the notion of a generalized element of X, viz. a morphism from some ‘standard object’ U into X. For instance, in Grp, one takes Z as the standard object U, and the generalized elements Z → G allow us to recover the ordinary elements of a group G.
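
The contrast can be made concrete with a toy sketch (our own construction): modelling G = Z4, homomorphisms out of the trivial group reach only the identity, while homomorphisms out of Z, each freely determined by the image of 1, recover every element.

```python
# Toy sketch: global vs generalized elements in Grp, with G = Z4
# (integers mod 4 under addition).
n = 4
G = list(range(n))

# A homomorphism f: 1 -> G must send the unique element of the trivial
# group to the identity of G, so global elements see only the identity.
global_elements = [0]

# A homomorphism h: Z -> G is freely determined by g = h(1) via h(k) = k*g,
# so generalized elements Z -> G recover every element of G.
def hom_from_Z(g):
    return lambda k: (k * g) % n

generalized_elements = [hom_from_Z(g)(1) for g in G]
print(global_elements)        # [0]
print(generalized_elements)   # [0, 1, 2, 3]
```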

Second, while G3 (ii) is certainly true, i.e. universal properties can be expressed purely in terms of morphisms, it is a further – and significant – question for the scope and applicability of this premise whether all (or even most) physical properties can be articulated as universal properties.

Hence we have seen that the categorically-informed opponent of (Objectless) need not accept these premises – there is a lot of room for debate about how exactly one should use category theory to conceptualize the notion of physical structure. But supposing that one does: is there a valid inference from G1–G3 to C? Bain himself notes that the plausibility of this inference trades on an ambiguity in what one means by ‘reference’ in C. If one merely means that such constructions eliminate explicit but not implicit reference to objects, then the argument is indeed valid. On the other hand, a defense of OSR requires the elimination of implicit reference to objects, and this is what the general argument fails to offer – it merely provides a translation scheme from statements involving elements (of sets) to statements involving morphisms between C-objects. So, the defender of objects can maintain that one is still implicitly quantifying over elements. 

Theories of Fields: Gravitational Field as “the More Equal Among Equals”


Descartes, in Le Monde, gave a fully relational definition of localization (space) and motion. According to Descartes, there is no “empty space”. There are only objects, and it makes sense to say that an object A is contiguous to an object B. The “location” of an object A is the set of the objects to which A is contiguous. “Motion” is change in location. That is, when we say that A moves we mean that A goes from the contiguity of an object B to the contiguity of an object C. A consequence of this relationalism is that there is no meaning in saying “A moves”, except if we specify with respect to which other objects (B, C, . . .) it is moving. Thus, there is no “absolute” motion. This is the same definition of space, location, and motion that we find in Aristotle. Aristotle insists on this point, using the example of the river that moves with respect to the ground, in which there is a boat that moves with respect to the water, on which there is a man that walks with respect to the boat . . . . Aristotle’s relationalism is tempered by the fact that there is, after all, a preferred set of objects that we can use as universal reference: the Earth at the center of the universe, the celestial spheres, the fixed stars. Thus, we can say, if we desire so, that something is moving “in absolute terms”, if it moves with respect to the Earth. Of course, there are two preferred frames in ancient cosmology: that of the Earth and that of the fixed stars; the two rotate with respect to each other. It is interesting to notice that the thinkers of the middle ages did not miss this point, and discussed whether we can say that the stars rotate around the Earth, rather than it being the Earth that rotates under the fixed stars. Buridan concluded that, on grounds of reason, neither view is more defensible than the other. For Descartes, who writes, of course, after the great Copernican divide, the Earth is no longer the center of the Universe and cannot offer a naturally preferred definition of stillness. According to the malignants, Descartes, fearing the Church and scared by what happened to Galileo’s stubborn defense of the idea that “the Earth moves”, resorted to relationalism, in Le Monde, precisely to be able to hold Copernicanism without having to commit himself to the absolute motion of the Earth!

Relationalism, namely the idea that motion can be defined only in relation to other objects, should not be confused with Galilean relativity. Galilean relativity is the statement that “rectilinear uniform motion” is a priori indistinguishable from stasis. Namely that velocity (but just velocity!), is relative to other bodies. Relationalism holds that any motion (however zigzagging) is a priori indistinguishable from stasis. The very formulation of Galilean relativity requires a nonrelational definition of motion (“rectilinear and uniform” with respect to what?).

Newton took a fully different course. He devoted much energy to criticising Descartes’ relationalism and to introducing a different view. According to him, space exists. It exists even if there are no bodies in it. The location of an object is the part of space that the object occupies. Motion is change of location. Thus, we can say whether an object moves or not, irrespective of surrounding objects. Newton argues that the notion of absolute motion is necessary for constructing mechanics. His famous discussion of the experiment of the rotating bucket in the Principia is one of the arguments to prove that motion is absolute.

This point has often raised confusion, because one of the corollaries of Newtonian mechanics is that there is no detectable preferred referential frame. Therefore the notion of absolute velocity is, actually, meaningless in Newtonian mechanics. The important point, however, is that in Newtonian mechanics velocity is relative, but any other feature of motion is not relative: it is absolute. In particular, acceleration is absolute. It is acceleration that Newton needs to construct his mechanics; it is acceleration that the bucket experiment is supposed to prove to be absolute, against Descartes. In a sense, Newton overdid it a bit, introducing the notion of absolute position and velocity (perhaps even just for explanatory purposes?). Many people have later criticised Newton for his unnecessary use of absolute position. But this is irrelevant for the present discussion. The important point here is that Newtonian mechanics requires absolute acceleration, against Aristotle and against Descartes. Precisely the same holds for special relativistic mechanics.

Similarly, Newton introduced absolute time. Newtonian space and time or, in modern terms, spacetime, are like a stage over which the action of physics takes place, the various dynamical entities being the actors. The key feature of this stage, Newtonian spacetime, is its metrical structure. Curves have length, surfaces have area, regions of spacetime have volume. Spacetime points are at a fixed distance from one another. Revealing, or measuring, this distance is very simple. It is sufficient to take a rod and put it between two points. Any two points which are one rod apart are at the same distance. Using modern terminology, physical space is a linear three-dimensional (3d) space, with a preferred metric. On this space there exist preferred coordinates xⁱ, i = 1, 2, 3, in terms of which the metric is just δᵢⱼ. Time is described by a single variable t. The metric δᵢⱼ determines lengths, areas and volumes and defines what we mean by straight lines in space. If a particle deviates with respect to this straight line, it is, according to Newton, accelerating. It is not accelerating with respect to this or that dynamical object: it is accelerating in absolute terms.
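
A small numerical sketch (ours, with made-up trajectories) of what this absoluteness amounts to in the preferred coordinates: lengths come from the Euclidean metric δᵢⱼ, and acceleration is read off a single worldline as a nonzero second derivative, with no other body mentioned.

```python
import numpy as np

# In Newton's preferred coordinates the metric is just delta_ij, so path
# length is the sum of Euclidean segment lengths, and absolute acceleration
# is a nonzero second derivative: deviation from a straight line.
t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]
straight = np.stack([t, 2 * t, np.zeros_like(t)], axis=1)   # uniform motion
curved = np.stack([t, t**2, np.zeros_like(t)], axis=1)      # accelerating

def length(path):
    return np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))

def max_acceleration(path):
    return np.abs(np.diff(path, n=2, axis=0) / dt**2).max()

print(length(straight), max_acceleration(straight))   # ≈ 2.236, ≈ 0
print(length(curved), max_acceleration(curved))       # ≈ 1.479, ≈ 2
```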

Special relativity changes this picture only marginally, loosening the strict distinction between the “space” and the “time” components of spacetime. In Newtonian spacetime, space is given by fixed 3d planes. In special relativistic spacetime, which 3d plane you call space depends on your state of motion. Spacetime is now a 4d manifold M with a flat Lorentzian metric ημν. Again, there are preferred coordinates xμ, μ = 0, 1, 2, 3, in terms of which ημν = diag[1, −1, −1, −1]. This tensor, ημν, enters all physical equations, representing the determinant influence of the stage and of its metrical properties on the motion of anything. Absolute acceleration is deviation of the world line of a particle from the straight lines defined by ημν. The only essential novelty with special relativity is that the “dynamical objects”, or “bodies” moving over spacetime, now include the fields as well. Example: a violent burst of electromagnetic waves coming from a distant supernova has traveled across space and has reached our instruments. For the rest, the Newtonian construct of a fixed background stage over which physics happens is not altered by special relativity.

The profound change comes with general relativity (GTR). The central discovery of GTR can be enunciated in three points. One of these is conceptually simple, the other two are tremendous. First, the gravitational force is mediated by a field, very much like the electromagnetic field: the gravitational field. Second, Newton’s spacetime, the background stage that Newton introduced against most of the earlier European tradition, and the gravitational field are the same thing. Third, the dynamics of the gravitational field, of the other fields such as the electromagnetic field, and of any other dynamical object, is fully relational, in the Aristotelian-Cartesian sense. Let me illustrate these three points.

First, the gravitational field is represented by a field on spacetime, gμν(x), just like the electromagnetic field Aμ(x). They are both very concrete entities: a strong electromagnetic wave can hit you and knock you down; and so can a strong gravitational wave. The gravitational field has independent degrees of freedom, and is governed by dynamical equations, the Einstein equations.

Second, the spacetime metric ημν disappears from all equations of physics (recall it was ubiquitous). At its place – we are instructed by GTR – we must insert the gravitational field gμν(x). This is a spectacular step: Newton’s background spacetime was nothing but the gravitational field! The stage is promoted to be one of the actors. Thus, in all physical equations one now sees the direct influence of the gravitational field. How can the gravitational field determine the metrical properties of things, which are revealed, say, by rods and clocks? Simply, the inter-atomic separation of the rods’ atoms, and the frequency of the clock’s pendulum are determined by explicit couplings of the rod’s and clock’s variables with the gravitational field gμν(x), which enters the equations of motion of these variables. Thus, any measurement of length, area or volume is, in reality, a measurement of features of the gravitational field.

But what is really formidable in GTR, the truly momentous novelty, is the third point: the Einstein equations, as well as all other equations of physics appropriately modified according to GTR instructions, are fully relational in the Aristotelian-Cartesian sense. This point is independent from the previous one. Let me give first a conceptual, then a technical account of it.

The point is that the only physically meaningful definition of location that makes physical sense within GTR is relational. GTR describes the world as a set of interacting fields and, possibly, other objects. One of these interacting fields is gμν(x). Motion can be defined only as positioning and displacements of these dynamical objects relative to each other.

To describe the motion of a dynamical object, Newton had to assume that acceleration is absolute, namely it is not relative to this or that other dynamical object. Rather, it is relative to a background space. Faraday, Maxwell and Einstein extended the notion of “dynamical object”: the stuff of the world is fields, not just bodies. Finally, GTR tells us that the background space is itself one of these fields. Thus, the circle is closed, and we are back to relationalism: Newton’s motion with respect to space is indeed motion with respect to a dynamical object: the gravitational field.

All this is coded in the active diffeomorphism invariance (diff invariance) of GTR. Active diff invariance should not be confused with passive diff invariance, or invariance under change of coordinates. GTR can be formulated in a coordinate-free manner, where there are no coordinates and no changes of coordinates. In this formulation, the field equations are still invariant under active diffs. Passive diff invariance is a property of a formulation of a dynamical theory, while active diff invariance is a property of the dynamical theory itself. A field theory is formulated in a manner invariant under passive diffs (or changes of coordinates) if we can change the coordinates of the manifold, re-express all the geometric quantities (dynamical and non-dynamical) in the new coordinates, and the form of the equations of motion does not change. A theory is invariant under active diffs when a smooth displacement of the dynamical fields (the dynamical fields alone) over the manifold sends solutions of the equations of motion into solutions of the equations of motion. Distinguishing a truly dynamical field, namely a field with independent degrees of freedom, from a nondynamical field disguised as dynamical (such as a metric field g with the equations of motion Riemann[g] = 0) might require a detailed analysis (for instance, Hamiltonian) of the theory. Because active diff invariance is a gauge, the physical content of GTR is expressed only by those quantities, derived from the basic dynamical variables, which are fully independent from the points of the manifold.

In introducing the background stage, Newton introduced two structures: a spacetime manifold, and its non-dynamical metric structure. GTR gets rid of the non-dynamical metric, by replacing it with the gravitational field. More importantly, it gets rid of the manifold, by means of active diff invariance. In GTR, the objects of which the world is made do not live over a stage and do not live on spacetime: they live, so to say, over each other’s shoulders.

Of course, nothing prevents us, if we wish to do so, from singling out the gravitational field as “the more equal among equals”, and declaring that location is absolute in GTR, because it can be defined with respect to it. But this can be done within any relationalism: we can always single out a set of objects and declare them as not-moving by definition. The problem with this attitude is that it fully misses the great Einsteinian insight: that Newtonian spacetime is just one field among the others. More seriously, this attitude sends us into a nightmare when we have to deal with the motion of the gravitational field itself (which certainly “moves”: we are spending millions constructing gravitational wave detectors to detect its tiny vibrations). There is no absolute referent of motion in GTR: the dynamical fields “move” with respect to each other.

Notice that the third step was not easy for Einstein, and came later than the previous two. Having well understood the first two, but still missing the third, Einstein actively searched for non-generally covariant equations of motion for the gravitational field between 1912 and 1915. With his famous “hole argument” he had convinced himself that generally covariant equations of motion (and therefore, in this context, active diffeomorphism invariance) would imply a truly dramatic revolution with respect to the Newtonian notions of space and time. In 1912 he was not able to take this profoundly revolutionary step, but in 1915 he took this step, and found what Landau calls “the most beautiful of the physical theories”.

Loop Quantum Gravity and Nature of Reality. Briefer.


To some, “loop quantum gravity is an attempt to define a quantization of gravity paying special attention to the conceptual lessons of general relativity”, while to others it does not have to be about the quantization of gravity: it is “at least conceivable that such a theory marries a classical understanding of gravity with a quantum understanding of matter”.

The term ‘loop’ comes from the fact that a solution can be written for every line closed on itself in the proposed structure of quantum interactions. John Archibald Wheeler was one of the pioneers in constructing a representation of space with a granular structure on a very small scale. Together with Bryce DeWitt, he produced a mathematical formula known as the Wheeler-DeWitt equation, “an equation which should determine the probability of one or another curved space”. The starting point was the spacetime of general relativity having “loop-like states”. A quantum approach to gravity built on closed loops, the threads of the Faraday lines of the quantum field, yields a gravitational field which looks like a spiderweb. Every line determining a solution of the Wheeler-DeWitt equation describes one of the threads of the spiderweb created by the Faraday force lines of the quantum field, the threads with which space is woven. The physical Hilbert space, as the space of all quantum states of the theory that solve all the constraints, ought to be considered the space of physical states; the physical Hilbert space of Loop Quantum Gravity, however, is not yet known. The larger space of states which satisfy the first two families of constraints is often termed the kinematical Hilbert space. The one constraint that has so far resisted resolution is the Hamiltonian constraint equation with the seemingly simple form Ĥ|ψ⟩ = 0, the Wheeler-DeWitt equation, where Ĥ is the Hamiltonian operator usually interpreted to generate the dynamical evolution and |ψ⟩ is a quantum state in the kinematical Hilbert space. Of course, the Hamiltonian operator Ĥ is a complicated function(al) of the basic operators corresponding to the basic canonical variables. In fact, the very functional form of Ĥ is debated, as several inequivalent candidates are on the table. Insofar as the physical Hilbert space has thus not yet been constructed, Loop Quantum Gravity remains incomplete.
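
A toy finite-dimensional sketch (ours; the real LQG constraints act on an infinite-dimensional kinematical space): imposing a constraint Ĥ|ψ⟩ = 0 amounts to finding the kernel of the constraint operator, here a made-up 3×3 matrix.

```python
import numpy as np

# Toy analogue of the Hamiltonian constraint H|psi> = 0: 'physical' states
# form the kernel of a constraint operator acting on a small 'kinematical'
# space. The matrix is invented purely for illustration.
H = np.array([[ 1.0, -1.0, 0.0],
              [-1.0,  1.0, 0.0],
              [ 0.0,  0.0, 2.0]])

U, s, Vt = np.linalg.svd(H)         # singular values come out descending
physical_states = Vt[s < 1e-12]     # kernel basis: states with H psi = 0
print(physical_states)              # ≈ [[0.707, 0.707, 0.0]] up to sign
```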

Space, then, is defined by the nodes of this spiderweb, which is called a spin network, and time, which had already lost its fundamental status with special and general relativity, vanishes from the picture of the universe altogether.

Loop quantum gravity combines the dynamical spacetime of general relativity with the quantum nature of gravitational fields. Accordingly, space, which bends and stretches, is made up of very small particles called quanta of space. If one had eyes capable of zooming into space and seeing magnetic fields and quanta, then, by observing space, one would first witness the quantum field, and would end up seeing quanta which are extremely small and granular.

The escape velocity of theory competency: of no consequence

this is a bolt of lightning from the darkest abyss of unreclaimed comprehensibility, a nondescript translation of the experiential excess of reading, and the vanity of seemingly gripping binaries and the subsequent refusal to admit to a camp.

all along the trajectory over the last few months, a decidedly harsh allegiance to the dilution of self and/or subject affined me towards the Buddhist creed, which was gaining in exacerbative rigors. this had to be curtailed and the way out was to go back to Kant and Hegel. from the latter, the notion of subject being formed over history, as against the fashionable Kantian timeless entitlement given to spirit as a transcendental pre-given set of cognitive faculties, releases one from the formal take on history and throws one into the speculative core. not only does spirit become a historical auto-production, thus delineating the real self from the reflective self through a logically conducted orchestration of thinking through time, but it also points to the becoming of subject as not merely expressed, but as how it gets ‘thought’ through a series of predicates.

this auto-production/differentiation is precisely what Marx contorted (albeit in a materialist sense) into the process of alienation, since through these mechanisms in the proper Hegelian world, spirit would gain freedom/autonomy to escape the strictures of phenomenality.

certain consequences are to be drawn here. first and foremost, a blast from the past to the positions of many of the modern day thinkers associated with lending legitimacy to the dissolution of self, which incidentally maps to the Buddhist creed. this blast is nothing more than gaining the momentum of critique as delimitation. secondly, the efficacy of this position depends upon the unintelligibility of Hegel to be converted into an intelligibility, which is the hardest nut to crack in philosophy, if one were to go by popular and ironically strong academic sentiments. third and perhaps the most important one here is to gauge the velocity of escape competency of theory, which when measured would draw us inside to the real possibility of realism, if attainable at all.

just maybe, i am beginning to see through the fog, or this might be a curl back into the Parmenidean thought ‘+’ being.