Being Mediatized: How 3 Realms and 8 Dimensions Explain ‘Being’ by Peter Blank.


Experience of Reflection: ‘Self itself is an empty word’
Leary – The neuroatomic winner: “In the province of the mind, what is believed true is true, or becomes true within limits to be learned by experience and experiment.” (Dr. John Lilly)

Media theory had noted the shoring up or even annihilation of the subject due to technologies that were used to reconfigure oneself and to see oneself as what one was: pictures, screens. Depersonalization was an often observed, reflective state of being that stood for the experience of anxiety due to watching a ‘movie of one’s own life’ or experiencing a malfunction or anomaly in one’s self-awareness.

To look at one’s scaffolded media identity meant in some ways to look at the redactionary product of an extreme introspective process. Questioning what one interpreted oneself to be doing in shaping one’s media identities enhanced endogenous viewpoints and experience, similar to focusing on what made a car move instead of deciding whether it should stay on the paved road or drive across a field. This enabled the individual to see the formation of identity from the ‘engine perspective’.

Experience of the Hyperreal: ‘I am (my own) God’
Leary – The metaprogramming winner: “I make my own coincidences, synchronicities, luck, and Destiny.”

Meta-analysis of distinctions – seeing a bird fly by, then seeing oneself seeing a bird fly by, then thinking the self that thought that – becomes routine in hyperreality. Media represent the opposite, a humongous distraction from Heidegger’s goal of the search for ‘Thinking’: capturing at present the most alarming of what occupies the mind. Hyperreal experiences could not be traced back to a person’s ‘real’ identities behind their aliases. The most questionable aspect therefore related to dismantled privacy: a privacy that only existed because all aliases together constituted a false privacy realm. There was nothing personal about the conversations, no facts that led back to any person, no real change achieved, no political influence asserted.

From there it led to the difference between networked relations and other relations, call these other relations ‘single’ relations, or relations that remained solemnly silent. They were relations that could not be disclosed against their will because they were either too vague, absent, depressing, shifty, or dangerous to make the effort worthwhile to outsiders.

The privacy of hyperreal being became the ability to hide itself from being sensed by others through channels of information (sight, touch, hearing), but also to hide more private other selves, stored away in different, more private networks from others in more open social networks.

Choosing ‘true’ privacy, then, was throwing away the distinctions one experienced between several identities. As identities were space, the meaning of time became the capacity for introspection. The hyperreal being’s overall identity to the inside as lived history attained an extra meaning – indeed: as alter- or hyper-ego. With Nietzsche, the physical body within its materiality occasioned a performance that subjected its own subjectivity. Then and only then could it become its own freedom.

With Foucault one could say that the body was not so much subjected as still there, functioning on its own premises. Therefore the sensory systems lived the body’s life in connection with (not separated from) a language based in a mediated ‘faraway’ from the body. If language and our sensory systems were inseparable, beings and God may as well be.

Being Mediatized


Ricci-flow as an “intrinsic-Ricci-flat” Space-time.

A Ricci flow solution {(Mm, g(t)), t ∈ I ⊂ R} is a smooth family of metrics satisfying the evolution equation

∂/∂t g = −2Rc —– (1)

where Mm is a complete manifold of dimension m. We assume that supM |Rm|g(t) < ∞ for each time t ∈ I. This condition holds automatically if M is a closed manifold. It is common to add an extra term on the right-hand side of (1) to obtain the following rescaled Ricci flow

∂/∂t g = −2 {Rc + λ(t)g} —– (2)

where λ(t) is a function depending only on time. Typically, λ(t) is chosen as the average of the scalar curvature, i.e., (1/m) ∱ R dv, or some fixed constant independent of time. In the case that M is closed and λ(t) = (1/m) ∱ R dv, the flow is called the normalized Ricci flow. Starting from a positive Ricci curvature metric on a 3-manifold, Richard Hamilton showed that the normalized Ricci flow exists forever and converges to a space form metric. Hamilton developed the maximum principle for tensors to study the Ricci flow initiated from some metric with positive curvature conditions. For metrics without a positive curvature condition, the study of Ricci flow was profoundly affected by the celebrated work of Grisha Perelman. He introduced new tools, i.e., the entropy functionals μ and ν, the reduced distance and the reduced volume, to investigate the behavior of the Ricci flow. Perelman’s new input enabled him to revive Hamilton’s program of Ricci flow with surgery, leading to solutions of the Poincaré conjecture and Thurston’s geometrization conjecture.
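As a toy illustration of equation (1) – not from the text, and with the round-sphere ansatz as an assumption – on the round m-sphere of radius r one has Rc = ((m − 1)/r²) g, so the unnormalized flow reduces to the linear ODE d(r²)/dt = −2(m − 1), which can be integrated and checked against its closed form in a few lines:

```python
# Toy sketch (assumption: round m-sphere ansatz, g(t) proportional to the round metric).
# On the round sphere Rc = ((m-1)/r^2) g, so dg/dt = -2 Rc collapses to the
# scalar ODE d(r^2)/dt = -2(m-1): the sphere shrinks and goes extinct in finite time.

def sphere_radius_sq(r0_sq, m, t):
    """Closed-form r(t)^2 for the round m-sphere under Ricci flow."""
    return r0_sq - 2.0 * (m - 1) * t

def euler_radius_sq(r0_sq, m, t, steps=10000):
    """Forward-Euler integration of d(r^2)/dt = -2(m-1), for comparison."""
    h = t / steps
    r_sq = r0_sq
    for _ in range(steps):
        r_sq += h * (-2.0 * (m - 1))
    return r_sq

def extinction_time(r0_sq, m):
    """Time at which the sphere shrinks to a point: T = r0^2 / (2(m-1))."""
    return r0_sq / (2.0 * (m - 1))
```

The normalized flow (2) would instead rescale away this shrinking and keep the round metric fixed, which is the behaviour Hamilton's convergence theorem generalizes.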

In the general theory of the Ricci flow developed by Perelman, the entropy functionals μ and ν are of essential importance. Perelman discovered the monotonicity of such functionals and applied them to prove the no-local-collapsing theorem, which removes the stumbling block for Hamilton’s program of Ricci flow with surgery. By delicately using such monotonicity, he further proved the pseudo-locality theorem, which claims that the Ricci flow cannot quickly turn an almost Euclidean region into a very curved one, no matter what happens far away. Besides the functionals, Perelman also introduced the reduced distance and the reduced volume. In terms of them, the Ricci flow space-time admits a remarkable comparison geometry picture, which is the foundation of his “local” version of the no-local-collapsing theorem. Each of the tools has its own advantages and shortcomings. The functionals μ and ν have the advantage that their definitions only require the information of each time slice (M, g(t)) of the flow. However, they are global invariants of the underlying manifold (M, g(t)), so it is not convenient to apply them to study the local behavior around a given point x. Correspondingly, the reduced volume and the reduced distance reflect the natural comparison geometry picture of the space-time: around a base point (x, t), they are closely related to the “local” geometry of (x, t). Unfortunately, it is the space-time “local”, rather than the Riemannian-geometry “local”, that the reduced volume and the reduced geodesic concern. In order to apply them, some extra conditions on the space-time neighborhood of (x, t) are usually required, and such strong space-time requirements are hard to fulfill. Therefore, it is desirable to have some new tools that balance the advantages of the reduced volume, the reduced distance and the entropy functionals.

Let (Mm, g) be a complete Ricci-flat manifold and x0 a point on M. Suppose the ball B(x0, r0) is A−1-non-collapsed, i.e., r0−m |B(x0, r0)| ≥ A−1. Can we obtain uniform non-collapsing for the ball B(x, r) whenever 0 < r < r0 and d(x, x0) < Ar0? This question can be answered easily by applying the triangle inequality and the Bishop-Gromov volume comparison theorem. In particular, there exists a κ = κ(m, A) ≥ 3−m A−m−1 such that B(x, r) is κ-non-collapsed, i.e., r−m |B(x, r)| ≥ κ. Consequently, there is an estimate of the propagation speed of the non-collapsing constant on the manifold M. This is illustrated by the figure.
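The comparison argument behind this κ can be checked numerically. The inclusion B(x0, r0) ⊆ B(x, (A + 1)r0) (triangle inequality) together with Bishop-Gromov monotonicity for Ric ≥ 0 gives r−m|B(x, r)| ≥ A−1(A + 1)−m, and since A + 1 ≤ 3A for A ≥ 1/2 this dominates the quoted 3−mA−m−1. A minimal sketch (the grid of test values is an arbitrary assumption):

```python
# Numeric check of the non-collapsing propagation bound (a sketch, not from the text).
# Chain: B(x0, r0) is contained in B(x, (A+1) r0) because d(x, x0) < A r0, and
# Bishop-Gromov (Ric >= 0, here Ricci-flat) gives
#   r^{-m} |B(x, r)| >= ((A+1) r0)^{-m} |B(x0, r0)| >= A^{-1} (A+1)^{-m}.
# Since A + 1 <= 3A for A >= 1/2, this dominates kappa(m, A) = 3^{-m} A^{-m-1}.

def kappa_from_chain(m, A):
    """Non-collapsing constant produced by the comparison argument."""
    return (1.0 / A) * (A + 1.0) ** (-m)

def kappa_quoted(m, A):
    """The lower bound 3^{-m} A^{-m-1} stated in the text."""
    return 3.0 ** (-m) * A ** (-(m + 1))

def chain_dominates(m, A):
    """The argument's constant is at least as good as the quoted one."""
    return kappa_from_chain(m, A) >= kappa_quoted(m, A)
```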


We now regard (M, g) as a trivial space-time {(M, g(t)), −∞ < t < ∞} such that g(t) ≡ g. Clearly, g(t) is a static Ricci flow solution by the Ricci-flatness of g. Then the above estimate can be explained as the propagation of volume non-collapsing constant on the space-time.


However, in a more intrinsic way, it can also be interpreted as the propagation of the non-collapsing constant of Perelman’s reduced volume. On a Ricci-flat space-time, Perelman’s reduced volume has the special formula

V((x, t), r2) = (4π)−m/2 r−m ∫M e−d2(y, x)/4r2 dvy —– (3)

which is almost the volume ratio of Bg(t)(x, r). On a general Ricci flow solution, the reduced volume is also well-defined and has monotonicity with respect to the parameter r2, if one replaces d2(y, x)/4r2 in the above formula by the reduced distance l((x, t), (y, t − r2)). Therefore, via comparison geometry of Bishop-Gromov type, one can regard a Ricci flow as an “intrinsic-Ricci-flat” space-time. However, the disadvantage of the reduced-volume explanation is also clear: it requires a curvature estimate in a whole space-time neighborhood around the point (x, t), rather than a scalar curvature estimate on the single time slice t.
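On flat R^m (taking d(y, x) = |y − x|), formula (3) can be evaluated exactly: the integral factorizes into m one-dimensional Gaussian integrals and V ≡ 1, matching the Euclidean volume ratio, which is uncollapsed at every scale. A small quadrature sketch (the integration window and step count below are arbitrary assumptions):

```python
import math

# Sketch: evaluate Perelman's reduced volume (3) on flat R^m, where the integral
# factorizes into m one-dimensional Gaussian integrals.  Analytically,
# (4*pi*r^2)^(-1/2) * Integral e^{-y^2/(4 r^2)} dy = 1 per dimension, so V = 1.

def gaussian_1d(r, half_width=50.0, n=200000):
    """Trapezoid quadrature of the 1-d Gaussian integral over [-L, L]."""
    h = 2.0 * half_width / n
    total = 0.0
    for i in range(n + 1):
        y = -half_width + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-y * y / (4.0 * r * r))
    return total * h

def reduced_volume_flat(m, r):
    """V((x, t), r^2) on flat R^m via the factorized integral."""
    return (4.0 * math.pi * r * r) ** (-m / 2.0) * gaussian_1d(r) ** m
```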

Conjuncted: Gross Domestic Product. Part 2.

Conjuncted here.

The topology of the World Trade, which is encapsulated in its adjacency matrix aij defined by

aij(t) ≡ 1 if fij(t) > 0,

aij(t) ≡ 0 if fij(t) = 0,

strongly depends on the GDP values wi. Indeed, the problem can be mapped onto the so-called fitness model, where it is assumed that the probability pij of a link from i to j is a function p(xi, xj) of the values of a fitness variable x assigned to each vertex and drawn from a given distribution. The importance of this model lies in the possibility of writing all the expected topological properties of the network (whose specification requires in principle the knowledge of the N2 entries of its adjacency matrix) in terms of only N fitness values. Several topological properties including the degree distribution, the degree correlations and the clustering hierarchy are determined by the GDP distribution. Moreover, an additional understanding of the World Trade as a directed network comes from the study of its reciprocity, which represents the strong tendency of the network to form pairs of mutual links pointing in opposite directions between two vertices. In this case too, the observed reciprocity structure can be traced back to the GDP values.

The probability that at time t a link exists from i to j (aij = 1) is empirically found to be

pt [xi(t), xj(t)] = [α(t) xi(t) xj(t)]/[1 + β(t) xi(t) xj(t)]

where xi is the rescaled GDP and the parameters α(t) and β(t) can be fixed by imposing that the expected number of links

Lexp(t) = ∑i≠j pt [xi(t), xj(t)]

equals its empirical value

L(t) = ∑i≠j aij(t)

and that the expected number of reciprocated links

Lrexp(t) = ∑i≠j pt[xi(t), xj(t)] pt[xj(t), xi(t)]

equals its observed value

Lr(t) = ∑i≠j aij(t) aji(t)

This particular structure of the World Trade topology can be tested by comparing various expected topological properties with the empirical ones. For instance, we can compare the empirical and the theoretical plots of vertex degrees (at time t) versus their rescaled GDP xi(t). Note that since pt [xi(t), xj(t)] is symmetric under the exchange of i and j, at any given time the expected in-degree and the expected out-degree of a vertex i are equal. We denote both by kexpi, which can be expressed as

kexpi(t) = ∑j≠i pt[xi(t), xj(t)]

Since the number of countries N(t) increases in time, we define the rescaled degrees

k̃i(t) ≡ ki(t)/[N(t) − 1]

that always represent the fraction of vertices which are connected to i (the term −1 comes from the fact that there are no self-loops in the network, hence the maximum degree is always N − 1). In this way, we can easily compare the data corresponding to different years and network sizes. The results are shown in the figure below for various snapshots of the system.


Figure: Plot of the rescaled degrees versus the rescaled GDP at four different years, and comparison with the expected trend. 

The empirical trends are in accordance with the expected ones. We can then also compare the cumulative distribution Pexp>(k̃exp) of the expected degrees with the empirical degree distributions Pin>(k̃in) and Pout>(k̃out). The results are shown in the following figure and display good agreement between the theoretical prediction and the observed behavior.


Figure: Cumulative degree distributions of the World Trade topology for four different years and comparison with the expected trend. 

Note that the accordance with the predicted behaviour is extremely important since the expected quantities are computed by using only the N GDP values of all countries, with no information regarding the N2 trade values. On the other hand, the empirical properties of the World Trade topology are extracted from trade data, with no knowledge of the GDP values. The agreement between the properties obtained by using these two independent sources of information is therefore surprising. This also shows that the World Trade topology crucially depends on the GDP distribution ρ(x).
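The model above can be sketched in a few lines. The fitness distribution, the network size and the parameter choice a = b below are illustrative assumptions, not values fitted to trade data; the point is only that N fitness values determine the whole expected topology:

```python
import random

# Sketch of the GDP-fitness model for the World Trade topology.  The Pareto
# fitness draw, the size n = 100 and the parameters a = b = 0.05 are assumptions.
# Link from i to j exists with probability p(x_i, x_j) = a x_i x_j / (1 + b x_i x_j).

def link_prob(xi, xj, a, b):
    return a * xi * xj / (1.0 + b * xi * xj)

def expected_degree(i, x, a, b):
    """Expected (in- or out-) degree of vertex i: sum over j != i of p(x_i, x_j)."""
    return sum(link_prob(x[i], x[j], a, b) for j in range(len(x)) if j != i)

def sample_network(x, a, b, rng):
    """Draw one directed network from the model; adjacency as a dict on ordered pairs."""
    n = len(x)
    return {(i, j): int(rng.random() < link_prob(x[i], x[j], a, b))
            for i in range(n) for j in range(n) if i != j}

rng = random.Random(0)
x = [rng.paretovariate(2.0) for _ in range(100)]   # heavy-tailed fitnesses, GDP-like
a = b = 0.05                                       # illustrative parameter choice
net = sample_network(x, a, b, rng)
```

Summing the sampled adjacency entries and comparing against the expected degrees reproduces, in miniature, the comparison made in the figures above.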

Rhizomatic Topology and Global Politics. A Flirtatious Relationship.



Deleuze and Guattari see concepts as rhizomes, biological entities endowed with unique properties. They see concepts as spatially representable, where the representation contains principles of connection and heterogeneity: any point of a rhizome must be connected to any other. Deleuze and Guattari list the possible benefits of spatial representation of concepts, including the ability to represent complex multiplicity, the potential to free a concept from foundationalism, and the ability to show both breadth and depth. In this view, geometric interpretations move away from the insidious understanding of the world in terms of dualisms, dichotomies, and lines, to understand conceptual relations in terms of space and shapes. The ontology of concepts is thus, in their view, appropriately geometric, a multiplicity defined neither by its elements nor by a center of unification and comprehension, but measured instead by its dimensionality and its heterogeneity. The conceptual multiplicity is already composed of heterogeneous terms in symbiosis, and is continually transforming itself such that it is possible to follow, and map, not only the relationships between ideas but how they change over time. In fact, the authors claim that there are further benefits to geometric interpretations of understanding concepts which are unavailable in other frames of reference. They outline the unique contribution of geometric models to the understanding of contingent structure:

Principle of cartography and decalcomania: a rhizome is not amenable to any structural or generative model. It is a stranger to any idea of genetic axis or deep structure. A genetic axis is like an objective pivotal unity upon which successive stages are organized; deep structure is more like a base sequence that can be broken down into immediate constituents, while the unity of the product passes into another, transformational and subjective, dimension. (Deleuze and Guattari)

The word that Deleuze and Guattari use for ‘multiplicities’ can also be translated to the topological term ‘manifold.’ If we thought about their multiplicities as manifolds, there are a virtually unlimited number of things one could come to know, in geometric terms, about (and with) our object of study, abstractly speaking. Among those unlimited things we could learn are properties of groups (homological, cohomological, and homeomorphic), complex directionality (maps, morphisms, isomorphisms, and orientability), dimensionality (codimensionality, structure, embeddedness), partiality (differentiation, commutativity, simultaneity), and shifting representation (factorization, ideal classes, reciprocity). Each of these functions allows for a different, creative, and potentially critical representation of global political concepts, events, groupings, and relationships. This is how concepts are to be looked at: as manifolds. With such a dimensional understanding of concept-formation, it is possible to deal with complex interactions of like entities, and interactions of unlike entities. Critical theorists have emphasized the importance of such complexity in representation a number of times, speaking about it in terms compatible with mathematical methods if not mathematically. For example, Foucault’s declaration that “practicing criticism is a matter of making facile gestures difficult” both reflects and is reflected in many critical theorists’ projects of revealing the complexity in (apparently simple) concepts deployed in global politics. This leads to a shift in the concept of danger as well, where danger is not an objective condition but “an effect of interpretation”. Critical thinking about how-possible questions reveals a complexity to the concept of the state which is often overlooked in traditional analyses, sending a wave of added complexity through other concepts as well.
This work seeking complexity serves one of the major underlying functions of critical theorizing: finding invisible injustices in (modernist, linear, structuralist) givens in the operation and analysis of global politics.

In a geometric sense, this complexity could be thought about as multidimensional mapping. In theoretical geometry, the process of mapping conceptual spaces is not primarily empirical, but serves the purpose of representing and reading the relationships between information, including identification, similarity, differentiation, and distance. The reason for defining topological spaces in math, the essence of the definition, is that there is no absolute scale for describing the distance or relation between certain points, yet it makes sense to say that an (infinite) sequence of points approaches some other point (but again, there is no way to describe how quickly or from what direction one might be approaching). This seemingly weak relationship, which is defined purely ‘locally’, i.e., in a small locale around each point, is often surprisingly powerful: using only the relationship of approaching parts, one can distinguish between, say, a balloon, a sheet of paper, a circle, and a dot.
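The ‘purely local’ character of a topology can be made concrete on a finite toy example (an illustration, not from the text): open sets are just a family closed under union and intersection, and continuity is defined entirely through preimages of open sets, with no distances anywhere.

```python
# Toy sketch of a finite topological space and distance-free continuity.
# The two-point space below (Sierpinski-style) is an assumed minimal example.

def is_topology(points, opens):
    """Check the axioms: contains the empty set and the whole set, closed under union and intersection."""
    fam = {frozenset(s) for s in opens}
    if frozenset() not in fam or frozenset(points) not in fam:
        return False
    return all(a | b in fam and a & b in fam for a in fam for b in fam)

def is_continuous(f, dom_opens, cod_opens, dom_points):
    """A map f (a dict) is continuous iff the preimage of every open set is open."""
    fam = {frozenset(s) for s in dom_opens}
    return all(frozenset(p for p in dom_points if f[p] in u) in fam
               for u in cod_opens)

# Two-point space where {1} is open but {0} is not:
X = {0, 1}
T = [set(), {1}, {0, 1}]
identity = {0: 0, 1: 1}
swap = {0: 1, 1: 0}
```

Here the identity is continuous but the swap is not, because the preimage of the open set {1} under the swap is the non-open {0} — a distinction made using neighborhood structure alone.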

To each delineated concept, one should distinguish and associate a topological space, in a (necessarily) non-explicit yet definite manner. Whenever one has a relationship between concepts (here we think of the primary relationship as being that of constitution, but not restrictively), we ‘specify’ a function (or inclusion, or relation) between the topological spaces associated to the concepts. In these terms, a conceptual space is in essence a multidimensional space in which the dimensions represent qualities or features of that which is being represented. Such an approach can be leveraged for thinking about conceptual components, dimensionality, and structure. In these terms, dimensions can be thought of as properties or qualities, each with their own (often multidimensional) properties or qualities. Since a key goal of modeling a conceptual space is representation, a key (mathematical and theoretical) goal of concept-space mapping is

associationism, where associations between different kinds of information elements carry the main burden of representation. (Conceptual Spaces as a Framework for Knowledge Representation)

To this end,

objects in conceptual space are represented by points, in each domain, that characterize their dimensional values. (A Concept Geometry for Conceptual Spaces)

These dimensional values can be arranged in relation to each other, as Gardenfors explains that

distances represent degrees of similarity between objects represented in space and therefore conceptual spaces are “suitable for representing different kinds of similarity relation”. (Concept)

These similarity relationships can be explored across ideas of a concept and across contexts, but also over time, since “with the aid of a topological structure, we can speak about continuity, e.g., a continuous change”, a possibility which can be found only in treating concepts as topological structures and not in linguistic descriptions or set-theoretic representations.
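A minimal sketch of such a conceptual space, in the spirit of Gardenfors' prototype picture (the dimensions, prototype coordinates and exponential-decay similarity below are illustrative assumptions):

```python
import math

# Sketch of a conceptual space: concepts are regions around prototype points,
# objects are points, and similarity decays with distance.  The fruit domain,
# its two dimensions and all coordinate values are hypothetical choices.

PROTOTYPES = {            # dimensions: (sweetness, acidity) -- assumed
    "lemon": (0.1, 0.9),
    "orange": (0.6, 0.4),
    "banana": (0.8, 0.1),
}

def distance(p, q):
    """Euclidean distance between two points of the conceptual space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def similarity(p, q):
    """One common modeling choice: similarity as exponentially decaying distance."""
    return math.exp(-distance(p, q))

def classify(point):
    """Nearest-prototype classification (the Voronoi tessellation of the space)."""
    return min(PROTOTYPES, key=lambda c: distance(point, PROTOTYPES[c]))
```

The topological remark above then corresponds to the fact that a small continuous movement of a point produces a small continuous change in its similarities, something a purely set-theoretic list of category members cannot express.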

Autonomous Capitalist Algorithmic Virus and its Human Co-conspirators. #AltWoke


Although #AltWoke is a vertical political school of thought, it doesn’t disregard protests or horizontal action. However, we are opposed to kitsch recklessness. Effective protests come from frustrated entities with specific goals in mind, which is why Occupy, despite its global audience, fizzled out. The Civil Rights Movement was strategic and had specific goals in mind. Standing Rock is another example. Yes, riots are protests as well and can also be effective insofar as the anger of the oppressed, expressed as violence against private property, highlights a failure or injustice on the state’s end. As an example, the violence in Ferguson led to the development of the Black Lives Matter movement, and this movement examined a specific problem.

We are not opposed to identity politics, per se. We’re opposed to identity politics in its current form. We think a better answer to the current mode of virtue signaling would be to add terms to the modern lexicon that explain intersectionality and, specifically, terms that talk about internalized racism, patriarchy, etc. It’s important to discuss identity in a way that is deserving of the complexity the issue presents. Take lived experience and match it up against statistics; don’t present it as an absolute fact that everyone should automatically agree with.

Many of these terms already exist and deal with identity in a systematic way, as opposed to pointing to lived experience as if it’s an infinite truth. If you think notions like Othering by way of Fanon or cultural hegemony by way of Gramsci are too academic, then make these terms part of the general lexicon until they no longer seem obscure. Teaching Gen Z to understand hegemony and media should be the next big emancipatory project.

We’re a technologically based society and #AltWoke believes that our political decisions should be framed around this premise. Having access to endless streams of information will cause profound changes: culturally, sociologically, psychologically and perhaps even neurologically. Thus, you should think more critically about the way you engage with technology. This can only happen by educating yourself outside of our writing and manifesto.

The Left thinks we’re the Alt-Right while the Alt-Right thinks we’re AntiFa. In an age where nuance is meaningless, this proves to us that we’re doing something useful.

Conjuncted: Speculatively Accelerated Capital – Trading Outside the Pit.


High Frequency Traders (HFTs hereafter) may anticipate the trades of a mutual fund, for instance, if the mutual fund splits large orders into a series of smaller ones and the initial trades reveal information about the mutual fund’s future trading intentions. HFTs might also forecast order flow if traditional asset managers with similar trading demands do not all trade at the same time, allowing the possibility that the initiation of a trade by one mutual fund could forecast similar future trades by other mutual funds. If an HFT were able to forecast a traditional asset manager’s order flow by either of these or some other means, then the HFT could potentially trade ahead of it and profit from the traditional asset manager’s subsequent price impact.

There are two main empirical implications of HFTs engaging in such a trading strategy. The first implication is that HFT trading should lead non-HFT trading – if an HFT buys a stock, non-HFTs should subsequently come into the market and buy those same stocks. Second, since the HFT’s objective would be to profit from non-HFTs’ subsequent price impact, it should be the case that the prices of the stocks they buy rise and those of the stocks they sell fall. These two patterns, together, are consistent with HFTs trading stocks in order to profit from non-HFTs’ future buying and selling pressure. 

While HFTs may in aggregate anticipate non-HFT order flow, it is also possible that among HFTs, some firms’ trades are strongly correlated with future non-HFT order flow, while other firms’ trades have little or no correlation with non-HFT order flow. This may be the case if certain HFTs focus more on strategies that anticipate order flow or if some HFTs are more skilled than other firms. If certain HFTs are better at forecasting order flow or if they focus more on such a strategy, then these HFTs’ trades should be consistently more strongly correlated with future non-HFT trades than are trades from other HFTs. Additionally, if these HFTs are more skilled, then one might expect these HFTs’ trades to be more strongly correlated with future returns. 

Another implication of the anticipatory trading hypothesis is that the correlation between HFT trades and future non-HFT trades should be stronger at times when non-HFTs are impatient. The reason is that anticipating buying and selling pressure requires forecasting future trades based on patterns in past trades and orders. To make anticipating their order flow difficult, non-HFTs typically use execution algorithms to disguise their trading intentions. But there is a trade-off between disguising order flow and trading a large position quickly. When non-HFTs are impatient and focused on trading a position quickly, they may not hide their order flow as well, making it easier for HFTs to anticipate their trades. At such times, the correlation between HFT trades and future non-HFT trades should be stronger.
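The lead-lag implication can be made concrete on synthetic data. The series below, in which non-HFT flow partially follows HFT flow one period later, is a constructed assumption, not actual trade data; the test is simply whether HFT flow correlates more strongly with future non-HFT flow than with contemporaneous flow:

```python
import math
import random

# Sketch of the lead-lag test: does HFT order flow today correlate with
# non-HFT order flow tomorrow?  Both series and the 0.6 loading are assumptions.

def corr(x, y):
    """Pearson correlation of two equal-length series."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def lagged_corr(x, y, lag):
    """Correlation of x[t] with y[t + lag]."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    return corr(x, y)

rng = random.Random(42)
n = 5000
hft = [rng.gauss(0.0, 1.0) for _ in range(n)]
# non-HFT flow = 0.6 * yesterday's HFT flow + noise (the anticipatory pattern)
non_hft = [0.6 * hft[t - 1] + rng.gauss(0.0, 1.0) if t > 0 else rng.gauss(0.0, 1.0)
           for t in range(n)]
```

In this construction the lag-1 correlation is large while the contemporaneous one is near zero, the signature the hypothesis predicts for anticipatory HFTs.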

Bayesianism in Game Theory. Thought of the Day 24.0


Bayesianism in game theory can be characterised as the view that it is always possible to define probabilities for anything that is relevant for the players’ decision-making. In addition, it is usually taken to imply that the players use Bayes’ rule for updating their beliefs. If the probabilities are to be always definable, one also has to specify what players’ beliefs are before the play is supposed to begin. The standard assumption is that such prior beliefs are the same for all players. This common prior assumption (CPA) means that the players have the same prior probabilities for all those aspects of the game for which the description of the game itself does not specify different probabilities. Common priors are usually justified with the so-called Harsanyi doctrine, according to which all differences in probabilities are to be attributed solely to differences in the experiences that the players have had. Different priors for different players would imply that there are some factors that affect the players’ beliefs even though they have not been explicitly modelled. The CPA is sometimes considered to be equivalent to the Harsanyi doctrine, but there seems to be a difference between them: the Harsanyi doctrine is best viewed as a metaphysical doctrine about the determination of beliefs, and it is hard to see why anybody would be willing to argue against it: if everything that might affect the determination of beliefs is included in the notion of ‘experience’, then it alone does determine the beliefs. The Harsanyi doctrine has some affinity to some convergence theorems in Bayesian statistics: if individuals are fed with similar information indefinitely, their probabilities will ultimately be the same, irrespective of the original priors.
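The convergence result mentioned at the end can be illustrated with a conjugate Beta-Bernoulli sketch (the coin bias of 0.7 and both priors are arbitrary assumptions): two Bayesians fed the same long stream of observations end with nearly identical posteriors, whatever their priors.

```python
import random

# Sketch of prior wash-out: an "optimist" (prior mean 0.9) and a "skeptic"
# (prior mean 0.1) observe the same 20,000 flips of a coin with assumed bias 0.7.

def posterior_mean(alpha, beta, flips):
    """Beta(alpha, beta) prior + Bernoulli data -> posterior mean, by conjugacy."""
    heads = sum(flips)
    return (alpha + heads) / (alpha + beta + len(flips))

rng = random.Random(7)
flips = [1 if rng.random() < 0.7 else 0 for _ in range(20000)]

optimist = posterior_mean(9.0, 1.0, flips)   # prior mean 0.9
skeptic = posterior_mean(1.0, 9.0, flips)    # prior mean 0.1
```

After the data, both posterior means sit near the true bias and differ only by the vanishing weight of the priors — the affinity, but not the equivalence, that the Harsanyi doctrine has with such convergence theorems.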

The CPA however is a methodological injunction to include everything that may affect the players’ behaviour in the game: not just everything that motivates the players, but also everything that affects the players’ beliefs should be explicitly modelled by the game: if players had different priors, this would mean that the game structure would not be completely specified because there would be differences in players’ behaviour that are not explained by the model. In a dispute over the status of the CPA, Faruk Gul essentially argues that the CPA does not follow from the Harsanyi doctrine. He does this by distinguishing between two different interpretations of the common prior, the ‘prior view’ and the ‘infinite hierarchy view’. The former is a genuinely dynamic story in which it is assumed that there really is a prior stage in time. The latter framework refers to Mertens and Zamir’s construction in which prior beliefs can be consistently formulated. This framework, however, is static in the sense that the players do not have any information on a prior stage; indeed, the ‘priors’ in this framework do not even pin down a player’s priors for his own types. Thus, the existence of a common prior in the latter framework does not have anything to do with the view that differences in beliefs reflect differences in information only.

It is agreed by everyone that for most (real-world) problems there is no prior stage in which the players know each other’s beliefs, let alone that they would be the same. The CPA, if understood as a modelling assumption, is clearly false. Robert Aumann, however, defends the CPA by arguing that whenever there are differences in beliefs, there must have been a prior stage in which the priors were the same, and from which the current beliefs can be derived by conditioning on the differentiating events. If players differ in their present beliefs, they must have received different information at some previous point in time, and they must have processed this information correctly. Based on this assumption, he further argues that players cannot ‘agree to disagree’: if a player knows that his opponents’ beliefs are different from his own, he should revise his beliefs to take the opponents’ information into account. The only case where the CPA would be violated, then, is when players have different beliefs, and have common knowledge about each other’s different beliefs and about each other’s epistemic rationality. Aumann’s argument seems perfectly legitimate if it is taken as a metaphysical one, but we do not see how it could be used as a justification for using the CPA as a modelling assumption in this or that application of game theory, and Aumann does not argue that it should.


Abelian Categories, or Injective Resolutions are Diagrammatic. Note Quote.


Jean-Pierre Serre gave a more thoroughly cohomological turn to the conjectures than Weil had. Grothendieck says

Anyway Serre explained the Weil conjectures to me in cohomological terms around 1955 – and it was only in these terms that they could possibly ‘hook’ me …I am not sure anyone but Serre and I, not even Weil if that is possible, was deeply convinced such [a cohomology] must exist.

Specifically Serre approached the problem through sheaves, a new method in topology that he and others were exploring. Grothendieck would later describe each sheaf on a space T as a “meter stick” measuring T. The cohomology of a given sheaf gives a very coarse summary of the information in it – and in the best case it highlights just the information you want. Certain sheaves on T produced the Betti numbers. If you could put such “meter sticks” on Weil’s arithmetic spaces, and prove standard topological theorems in this form, the conjectures would follow.

By the nuts and bolts definition, a sheaf F on a topological space T is an assignment of Abelian groups to open subsets of T, plus group homomorphisms among them, all meeting a certain covering condition. Precisely these nuts and bolts were unavailable for the Weil conjectures because the arithmetic spaces had no useful topology in the then-existing sense.

At the École Normale Supérieure, Henri Cartan’s seminar spent 1948-49 and 1950-51 focussing on sheaf cohomology. As one motive, there was already de Rham cohomology on differentiable manifolds, which not only described their topology but also described differential analysis on manifolds. And during the time of the seminar Cartan saw how to modify sheaf cohomology as a tool in complex analysis. Given a complex analytic variety V, Cartan could define sheaves that reflected not only the topology of V but also complex analysis on V.

These were promising for the Weil conjectures since Weil cohomology would need sheaves reflecting algebra on those spaces. But note that this differential analysis and complex analysis used sheaves and cohomology in the usual topological sense. The innovation was to find particular new sheaves which capture analytic or algebraic information that a pure topologist might not focus on.

The greater challenge to the Séminaire Cartan was that, along with the cohomology of topological spaces, the seminar looked at the cohomology of groups. Here sheaves are replaced by G-modules. This was formally quite different from topology yet it had grown from topology and was tightly tied to it. Indeed Eilenberg and Mac Lane created category theory in large part to explain both kinds of cohomology by clarifying the links between them. The seminar aimed to find what was common to the two kinds of cohomology and they found it in a pattern of functors.

The cohomology of a topological space X assigns to each sheaf F on X a series of Abelian groups H^nF and to each sheaf map f : F → F′ a series of group homomorphisms H^nf : H^nF → H^nF′. The definition requires that each H^n is a functor, from sheaves on X to Abelian groups. A crucial property of these functors is:

H^nF = 0 for n > 0

for any fine sheaf F, where a sheaf is fine if it meets a certain condition borrowed from differential geometry by way of Cartan’s complex analytic geometry.

The cohomology of a group G assigns to each G-module M a series of Abelian groups H^nM and to each homomorphism f : M → M′ a series of homomorphisms H^nf : H^nM → H^nM′. Each H^n is a functor, from G-modules to Abelian groups. These functors have the same properties as topological cohomology except that:

H^nM = 0 for n > 0

for any injective module M. A G-module I is injective if: for every G-module inclusion N ⊆ M and homomorphism f : N → I there is at least one g : M → I making the triangle commute.
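The diagram itself dropped out of the source; restored in LaTeX (using the tikz-cd package — my rendering choice, since the original figure is lost), the defining triangle for an injective G-module I is:

```latex
% Requires \usepackage{tikz-cd}.
% I is injective: every f : N -> I extends along the inclusion N ⊆ M.
\begin{tikzcd}
N \arrow[r, hook] \arrow[d, "f"'] & M \arrow[dl, dashed, "g"] \\
I &
\end{tikzcd}
```

The dashed arrow g is the extension whose existence the definition demands; it need not be unique.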


Cartan could treat the cohomology of several different algebraic structures: groups, Lie groups, associative algebras. These all rest on injective resolutions. But he could not include topological spaces, the source of the whole, and still one of the main motives for pursuing the other cohomologies. Topological cohomology rested on the completely different apparatus of fine resolutions. As to the search for a Weil cohomology, this left two questions: What would Weil cohomology use in place of topological sheaves or G-modules? And what resolutions would give their cohomology? Specifically, Cartan & Eilenberg define group cohomology (like several other constructions) as a derived functor, which in turn is defined using injective resolutions. So the cohomology of a topological space was not a derived functor in their technical sense. But a looser sense was apparently current.

Grothendieck wrote to Serre:

I have realized that by formulating the theory of derived functors for categories more general than modules, one gets the cohomology of spaces at the same time at small cost. The existence follows from a general criterion, and fine sheaves will play the role of injective modules. One gets the fundamental spectral sequences as special cases of delectable and useful general spectral sequences. But I am not yet sure if it all works as well for non-separated spaces and I recall your doubts on the existence of an exact sequence in cohomology for dimensions ≥ 2. Besides this is probably all more or less explicit in Cartan-Eilenberg’s book which I have not yet had the pleasure to see.

Here he lays out the whole paper, commonly cited as Tôhoku for the journal that published it. There are several issues. For one thing, fine resolutions do not work for all topological spaces but only for the paracompact – that is, Hausdorff spaces where every open cover has a locally finite refinement. The Séminaire Cartan called these separated spaces. The limitation was no problem for differential geometry. All differential manifolds are paracompact. Nor was it a problem for most of analysis. But it was discouraging from the viewpoint of the Weil conjectures since non-trivial algebraic varieties are never Hausdorff.

Serre replied using the same loose sense of derived functor:

The fact that sheaf cohomology is a special case of derived functors (at least for the paracompact case) is not in Cartan-Sammy. Cartan was aware of it and told [David] Buchsbaum to work on it, but he seems not to have done it. The interest of it would be to show just which properties of fine sheaves we need to use; and so one might be able to figure out whether or not there are enough fine sheaves in the non-separated case (I think the answer is no but I am not at all sure!).

So Grothendieck began rewriting Cartan-Eilenberg before he had seen it. Among other things he preempted the question of resolutions for Weil cohomology. Before anyone knew what “sheaves” it would use, Grothendieck knew it would use injective resolutions. He did this by asking not what sheaves “are” but how they relate to one another. As he later put it, he set out to:

consider the set of all sheaves on a given topological space or, if you like, the prodigious arsenal of all the “meter sticks” that measure it. We consider this “set” or “arsenal” as equipped with its most evident structure, the way it appears so to speak “right in front of your nose”; that is what we call the structure of a “category”…From here on, this kind of “measuring superstructure” called the “category of sheaves” will be taken as “incarnating” what is most essential to that space.

The Séminaire Cartan had shown this structure in front of your nose suffices for much of cohomology. Definitions and proofs can be given in terms of commutative diagrams and exact sequences without asking, most of the time, what these are diagrams of. Grothendieck went further than anyone else, insisting that the “formal analogy” between sheaf cohomology and group cohomology should become “a common framework including these theories and others”. To start with, injectives have a nice categorical sense: an object I in any category is injective if, for every monic N → M and arrow f : N → I, there is at least one g : M → I making the evident triangle commute.
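The purely diagrammatic version of the missing figure, in LaTeX (tikz-cd — my rendering choice), replaces the module inclusion by an arbitrary monic:

```latex
% Requires \usepackage{tikz-cd}.
% Categorical injectivity: for every monic m : N -> M and arrow f : N -> I
% there is at least one g : M -> I with g ∘ m = f.
\begin{tikzcd}
N \arrow[r, tail, "m"] \arrow[d, "f"'] & M \arrow[dl, dashed, "g"] \\
I &
\end{tikzcd}
```

Nothing in this diagram mentions elements, open covers, or partitions of unity; that is exactly why it transports to any Abelian category.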


Fine sheaves are not so diagrammatic.

Grothendieck saw that Reinhold Baer’s original proof that modules have injective resolutions was largely diagrammatic itself. So Grothendieck gave diagrammatic axioms for the basic properties used in cohomology, and called any category that satisfies them an Abelian category. He gave further diagrammatic axioms tailored to Baer’s proof: every Abelian category satisfying these further axioms (and possessing a generator) has injective resolutions. Such a category is called an AB5 category, and sometimes, around the 1960s, a Grothendieck category, though that term has been used in several senses.

So sheaves on any topological space have injective resolutions and thus have derived functor cohomology in the strict sense. For paracompact spaces this agrees with cohomology from fine, flabby, or soft resolutions. So you can still use those, if you want them, and you will. But Grothendieck treats paracompactness as a “restrictive condition”, well removed from the basic theory, and he specifically mentions the Weil conjectures.

Beyond that, Grothendieck’s approach works for topology the same way it does for all cohomology. And, much further, the axioms apply to many categories other than categories of sheaves on topological spaces or categories of modules. They go far beyond topological and group cohomology, in principle, though in fact there were few if any known examples outside that framework when they were given.

Stationarity or Homogeneity of Random Fields


Let (Ω, F, P) be a probability space on which all random objects will be defined. A filtration {F_t : t ≥ 0} of σ-algebras is fixed and defines the information available at each time t.

Random field: A real-valued random field is a family of random variables Z(x) indexed by x ∈ Rd, together with a collection of distribution functions of the form F_{x1,…,xn} which satisfy

F_{x1,…,xn}(b1,…,bn) = P[Z(x1) ≤ b1, …, Z(xn) ≤ bn], b1, …, bn ∈ R

The mean function of Z is m(x) = E[Z(x)] whereas the covariance function and the correlation function are respectively defined as

R(x, y) = E[Z(x)Z(y)] − m(x)m(y)

c(x, y) = R(x, y)/√(R(x, x)R(y, y))

Notice that the covariance function of a random field Z is a non-negative definite function on Rd × Rd; that is, if x1, . . . , xk is any collection of points in Rd, and ξ1, . . . , ξk are arbitrary real constants, then

∑_{l=1}^{k} ∑_{j=1}^{k} ξ_l ξ_j R(x_l, x_j) = ∑_{l=1}^{k} ∑_{j=1}^{k} ξ_l ξ_j E[Z(x_l) Z(x_j)] = E[(∑_{j=1}^{k} ξ_j Z(x_j))^2] ≥ 0

Here we assumed, without loss of generality, that m = 0. The property of non-negative definiteness characterizes covariance functions: given any function m : Rd → R and a non-negative definite function R : Rd × Rd → R, it is always possible to construct a random field for which m and R are the mean and covariance function, respectively.
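The inequality above can be checked numerically: form the matrix R(x_l, x_j) at arbitrary points and confirm it is positive semi-definite. This is a sketch with the stationary exponential kernel exp(−|x − y|), a standard non-negative definite function of my own choosing, not one named in the text.

```python
import numpy as np

# Exponential covariance R(x, y) = exp(-|x - y|), non-negative definite on R x R.
def R(x, y):
    return np.exp(-abs(x - y))

rng = np.random.default_rng(0)
points = rng.uniform(-5, 5, size=12)             # arbitrary x_1, ..., x_k
K = np.array([[R(a, b) for b in points] for a in points])

# Non-negative definiteness <=> all eigenvalues of the symmetric matrix K >= 0.
eigvals = np.linalg.eigvalsh(K)
print(eigvals.min())

# Equivalently, the quadratic form xi' K xi is >= 0 for any real xi.
xi = rng.normal(size=len(points))
print(xi @ K @ xi)
```

The eigenvalue test and the quadratic-form test are the two faces of the same definition; up to floating-point rounding both stay non-negative for any choice of points.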

Bochner’s Theorem: A continuous function R from Rd to the complex plane is non-negative definite if and only if it is the Fourier-Stieltjes transform of a measure F on Rd, that is, the representation

R(x) = ∫_{Rd} e^{i x·λ} dF(λ)

holds for x ∈ Rd. Here, x·λ denotes the scalar product ∑_{k=1}^{d} x_k λ_k and F is a bounded, real-valued function satisfying ∫_A dF(λ) ≥ 0 ∀ measurable A ⊂ Rd
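A numerical sanity check of Bochner’s representation in d = 1, using the Gaussian pair as an illustration (my choice of example, not the text’s): R(x) = exp(−x²/2) is the Fourier transform of the Gaussian spectral measure with density (2π)^{−1/2} exp(−λ²/2).

```python
import numpy as np

# Spectral density of the Gaussian covariance R(x) = exp(-x^2 / 2).
lam = np.linspace(-40, 40, 200001)
dlam = lam[1] - lam[0]
dF = np.exp(-lam**2 / 2) / np.sqrt(2 * np.pi)

def R_numeric(x):
    # integral over R of e^{i x.lam} dF(lam); the imaginary part cancels
    # by symmetry, leaving the cosine transform.
    return np.sum(np.cos(x * lam) * dF) * dlam

for x in (0.0, 0.7, 1.5):
    print(x, R_numeric(x), np.exp(-x**2 / 2))
```

The fine Riemann sum reproduces exp(−x²/2) to many digits; in particular R_numeric(0) recovers the total mass F(R) = 1, as a bounded spectral measure requires.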

The cross covariance function is defined as R12(x, y) = E[Z1(x)Z2(y)] − m1(x)m2(y), where m1 and m2 are the respective mean functions. Obviously, R12(x, y) = R21(y, x). A family of processes Zι with ι belonging to some index set I can be considered as a process in the product space (Rd, I).

A central concept in the study of random fields is that of homogeneity or stationarity. A random field is homogeneous or (second-order) stationary if E[Z(x)^2] is finite ∀ x and

• m(x) ≡ m is independent of x ∈ Rd

• R(x, y) solely depends on the difference x − y

Thus we may consider R(h) = Cov(Z(x), Z(x + h)) = E[Z(x) Z(x + h)] − m^2, h ∈ Rd,

and call R the covariance function of Z. In this case, the following correspondence exists between the covariance and correlation function, respectively:

c(h) = R(h)/R(0)

i.e. c(h) ∝ R(h). For this reason, attention is confined to either c or R. Two stationary random fields Z1, Z2 are stationarily correlated if their cross covariance function R12(x, y) depends on the difference x − y only. The two random fields are uncorrelated if R12 vanishes identically.
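A concrete sketch of second-order stationarity, using a discrete-time AR(1) sequence as a stand-in for Z (my choice; the text works with continuous parameter x ∈ Rd): the covariance depends only on the lag h, and c(h) = R(h)/R(0).

```python
import numpy as np

# AR(1): Z_{t+1} = phi * Z_t + eps_t with eps ~ N(0, 1).  Started in its
# stationary law, this process has R(h) = phi**|h| / (1 - phi**2),
# independent of t, and correlation c(h) = phi**|h|.
rng = np.random.default_rng(42)
phi, n = 0.6, 400_000
eps = rng.normal(size=n)
Z = np.empty(n)
Z[0] = eps[0] / np.sqrt(1 - phi**2)       # stationary initial condition
for t in range(1, n):
    Z[t] = phi * Z[t - 1] + eps[t]

def R_hat(h):
    # empirical covariance at lag h (the mean is 0 by construction)
    return np.mean(Z[:n - h] * Z[h:])

print(R_hat(0), 1 / (1 - phi**2))         # theory: 1.5625
print(R_hat(1) / R_hat(0), phi)           # c(1) = phi
```

The empirical R_hat(h) matches the theoretical values up to Monte Carlo error, and the ratio R_hat(1)/R_hat(0) recovers the correlation c(1), illustrating why one may work with either c or R.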

An interesting special class of homogeneous random fields that often arise in practice is the class of isotropic fields. These are characterized by the property that the covariance function R depends only on the length ∥h∥ of the vector h:

R(h) = R(∥h∥) .

In many applications, random fields are considered as functions of “time” and “space”. In this case, the parameter set is most conveniently written as (t,x) with t ∈ R+ and x ∈ Rd. Such processes are often homogeneous in (t, x) and isotropic in x in the sense that

E[Z(t, x)Z(t + h, x + y)] = R(h, ∥y∥) ,

where R is a function from R2 into R. In such a situation, the covariance function can be written as

R(t, ∥x∥) = ∫_R ∫_{λ=0}^{∞} e^{itu} H_d(λ∥x∥) dG(u, λ), where


H_d(r) = (2/r)^{(d−2)/2} Γ(d/2) J_{(d−2)/2}(r)

and J_m is the Bessel function of the first kind of order m, and G is a multiple of a distribution function on the half plane {(λ, u) | λ ≥ 0, u ∈ R}.
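The kernel H_d has well-known closed forms in low dimensions — H_1(r) = cos r and H_3(r) = sin(r)/r, via the half-integer Bessel identities. A short numerical check (assuming SciPy is available; not part of the source):

```python
import numpy as np
from scipy.special import gamma, jv

# H_d(r) = (2/r)^{(d-2)/2} * Gamma(d/2) * J_{(d-2)/2}(r).
# Closed forms: J_{-1/2}(r) = sqrt(2/(pi r)) cos r  =>  H_1(r) = cos r
#               J_{ 1/2}(r) = sqrt(2/(pi r)) sin r  =>  H_3(r) = sin(r)/r
def H(d, r):
    nu = (d - 2) / 2
    return (2 / r) ** nu * gamma(d / 2) * jv(nu, r)

r = np.linspace(0.1, 10, 50)
print(np.max(np.abs(H(1, r) - np.cos(r))))        # ~ 0
print(np.max(np.abs(H(3, r) - np.sin(r) / r)))    # ~ 0
```

For general d, H_d(λ∥x∥) plays the role that e^{itu} plays in the time variable: it is the isotropic average of plane waves over directions in Rd.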

Abstract Expressions of Time’s Modalities. Thought of the Day 21.0


According to Gregory Bateson,

What we mean by information — the elementary unit of information — is a difference which makes a difference, and it is able to make a difference because the neural pathways along which it travels and is continually transformed are themselves provided with energy. The pathways are ready to be triggered. We may even say that the question is already implicit in them.

In other words, we always need to know some second order logic, and presuppose a second order of “order” (cybernetics) usually shared within a distinct community, to realize what a certain claim, hypothesis or theory means. In Koichiro Matsuno’s opinion Bateson’s phrase

must be a prototypical example of second-order logic in that the difference appearing both in the subject and predicate can accept quantification. Most statements framed in second-order logic are not decidable. In order to make them decidable or meaningful, some qualifier needs to be used. A popular example of such a qualifier is a subjective observer. However, the point is that the subjective observer is not limited to Alice or Bob in the QBist parlance.

This is what is necessitated in order to understand the different viewpoints in logic of mathematicians, physicists and philosophers in the dispute about the existence of time. An essential aspect of David Bohm’s “implicate order” can be seen in the grammatical formulation of theses such as the law of motion:

While it is legitimate in its own light, the physical law of motion alone framed in eternal time referable in the present tense, whether in classical or quantum mechanics, is not competent enough to address how the now could be experienced. … Measurement differs from the physical law of motion as much as the now in experience differs from the present tense in description. The watershed separating between measurement and the law of motion is in the distinction between the now and the present tense. Measurement is thus subjective and agential in making a punctuation at the moment of now. (Matsuno)

The distinction between experiencing and capturing experience of time in terms of language is made explicit in Heidegger’s Being and Time

… by passing away constantly, time remains as time. To remain means: not to disappear, thus, to presence. Thus time is determined by a kind of Being. How, then, is Being supposed to be determined by time?

Koichiro Matsuno’s comment on this is:

Time passing away is an abstraction from accepting the distinction of the grammatical tenses, while time remaining as time refers to the temporality of the durable now prior to the abstraction of the tenses.

Therefore, when trying to understand the “local logics/phenomenologies” of the individual disciplines (mathematics, physics, philosophy, etc., including their fields), one should be aware of the fact that the capabilities of our scientific language are not limitless:

…the now of the present moment is movable and dynamic in updating the present perfect tense in the present progressive tense. That is to say, the now is prior and all of the grammatical tenses including the ubiquitous present tense are the abstract derivatives from the durable now. (Matsuno)

This presupposes the adequacy of mathematical abstractions specifically invented or adopted and elaborated for the expression of more sophisticated modalities of time’s now than those currently used in such formalisms as temporal logic.