The Natural Theoretic of Electromagnetism. Thought of the Day 147.0


In Maxwell’s theory, the field strength F = 1/2 Fμν dxμ ∧ dxν is a real 2-form on spacetime, and hence a natural object. The homogeneous Maxwell equation dF = 0 is an equation involving forms, and it has the well-known local solution F = dA’, i.e. there exists a local spacetime 1-form A’ which is a potential for the field strength F. Of course, if spacetime is contractible, as e.g. for Minkowski space, the solution is also a global one. As is well known, in the non-commutative Yang-Mills case the field strength F = 1/2 FAμν TA ⊗ dxμ ∧ dxν is no longer a spacetime form. This is a somewhat trivial remark, since the transformation laws of such a field strength are obtained as the transformation laws of the curvature of a principal connection with values in the Lie algebra of some (semisimple) non-Abelian Lie group G (e.g. G = SU(n), n ≥ 2). However, the common belief that electromagnetism is to be intended as the particular case (for G = U(1)) of a non-commutative theory is not really physically evident. Even if we subscribe to this common belief, which is motivated also by the tremendous success of the quantized theory, let us for a while discuss electromagnetism as a standalone theory.
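In adapted coordinates these standard facts can be spelled out explicitly; a minimal rendering (nothing here beyond the definitions above and the Poincaré lemma):

```latex
% Field strength and the homogeneous Maxwell equation as statements about forms:
F = \tfrac{1}{2}\, F_{\mu\nu}\, dx^{\mu} \wedge dx^{\nu},
\qquad
dF = \tfrac{1}{2}\, \partial_{\lambda} F_{\mu\nu}\, dx^{\lambda} \wedge dx^{\mu} \wedge dx^{\nu} = 0
\;\Longleftrightarrow\;
\partial_{[\lambda} F_{\mu\nu]} = 0 .
% By the Poincaré lemma, dF = 0 has the local solution F = dA',
% i.e. F_{\mu\nu} = \partial_{\mu} A'_{\nu} - \partial_{\nu} A'_{\mu};
% and since d^2 = 0, the shifted potential A'' = A' + d\alpha yields the same F.
```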

From a mathematical viewpoint this is a (different) approach to electromagnetism, and the choice between the two can be dealt with on physical grounds only. Of course the 1-form A’ is defined modulo a closed form, i.e. locally A” = A’ + dα is another solution.

How can one decide whether the potential of electromagnetism should be considered as a 1-form or rather as a principal connection on a U(1)-bundle? First of all we notice that, by a standard hole argument (one can easily define compactly supported closed 1-forms, e.g. by choosing the differentials of compactly supported functions, which always exist on a paracompact manifold), the potentials A’ and A” represent the same physical situation. On the other hand, from a mathematical viewpoint we would like the dynamical field, i.e. the potential A’, to be a global section of some suitable configuration bundle. This requirement is a mathematical one, motivated by the wish for a well-defined geometrical perspective based on global Variational Calculus.

The first mathematical way out is to restrict attention to contractible spacetimes, where A’ may always be chosen to be global. Then one can require the gauge transformations A” = A’ + dα to be Lagrangian symmetries. In this way, field equations select a whole equivalence class of gauge-equivalent potentials, a procedure which solves the hole argument problem. In this picture the potential A’ is really a 1-form, which can be dragged along spacetime diffeomorphisms and which admits the ordinary Lie derivatives of 1-forms. Unfortunately, the restriction to contractible spacetimes is physically unmotivated and probably wrong.

Alternatively, one can restrict the electromagnetic fields F, deciding that only exact 2-forms F are allowed. That actually restricts the observable physical situations, by strengthening the homogeneous Maxwell equations (i.e. the Bianchi identities): F is required to be not only closed but exact. One should in principle be able to empirically reject this option.

On non-contractible spacetimes, one is necessarily forced to resort to a more “democratic” attitude. The spacetime is covered by a number of patches Uα. On each patch Uα one defines a potential A(α). In the intersection of two patches the two potentials A(α) and A(β) may not agree. In each patch, in fact, the observer chooses his own conventions and finds a different representative of the electromagnetic potential, which is related by a gauge transformation to the representatives chosen in the neighbouring patch(es). Thence we have a family of gauge transformations, one in each intersection Uαβ, which obey cocycle identities. If one recognizes in them the action of U(1), then one can build a principal bundle P = (P, M, π; U(1)) and interpret the ensuing potential as a connection on P. This leads the way to the gauge natural formalism.

Anyway, this does not close the matter. One can investigate if and when the principal bundle P, in addition to the obvious principal structure, can also be endowed with a natural structure. If that were possible, then the bundle of connections CP (which is associated to P) would also be natural. The problem of deciding whether a given gauge natural bundle can be endowed with a natural structure is quite difficult in general, and no full theory is yet completely developed in mathematical terms. That is to say, there is no complete classification of the topological and differential geometric conditions which a principal bundle P has to satisfy in order to ensure that, among the principal trivializations which determine its gauge natural structure, one can choose a sub-class of trivializations which induce a purely natural bundle structure. Nor is it clear how many inequivalent natural structures a good principal bundle may support. Still, there are important examples of bundles which support at the same time a natural and a gauge natural structure. Actually any natural bundle is associated to some frame bundle L(M), which is principal; thence each natural bundle is also gauge natural in a trivial way. Since on any paracompact manifold one can choose a global Riemannian metric g, the corresponding tangent bundle T(M) can be associated to the orthonormal frame bundle O(M, g) besides being obviously associated to L(M). Thence the natural bundle T(M) may also be endowed with a gauge natural bundle structure with structure group O(m). And if M is orientable, the structure can be further reduced to a gauge natural bundle with structure group SO(m).

Roughly speaking, the task is achieved by imposing restrictions on the cocycles which generate T(M), i.e. by selecting a privileged class of changes of local laboratories and sets of measures: one requires the cocycle ψ(αβ) to take its values in O(m) rather than in the larger group GL(m). Inequivalent gauge natural structures are in one-to-one correspondence with (non-isometric) Riemannian metrics on M. Actually, whenever there is a Lie group homomorphism ρ : GLs(m) → G, for some s, onto some given Lie group G, we can build a natural G-principal bundle on M. In fact, let (Uα, ψ(α)) be an atlas of the given manifold M, ψ(αβ) its transition functions and jψ(αβ) the induced transition functions of L(M). Then we can define a G-valued cocycle on M by setting ρ(jψ(αβ)), and thence a (unique up to fibered isomorphisms) G-principal bundle P(M) = (P(M), M, π; G). The bundle P(M), as well as any gauge natural bundle associated to it, is natural by construction. We can now define a whole family of natural U(1)-bundles Pq(M) by using the group homomorphisms

ρq: GL(m) → U(1): J ↦ exp(iq ln det|J|) —– (1)

where q is any real number and ln denotes the natural logarithm. In the case q = 0 the image of ρ0 is the trivial group {I}, and the induced bundle is canonically trivial, i.e. P0(M) = M × U(1).
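Since ln det|J1J2| = ln det|J1| + ln det|J2|, each ρq really is a group homomorphism, which is exactly what makes ρq(jψ(αβ)) a U(1)-valued cocycle. A quick numerical sanity check (the random matrices below are illustrative only, not part of the construction):

```python
import numpy as np

def rho_q(J, q):
    """The homomorphism rho_q: GL(m) -> U(1), J |-> exp(i q ln det|J|)."""
    return np.exp(1j * q * np.log(abs(np.linalg.det(J))))

rng = np.random.default_rng(0)
m, q = 3, 0.7
J1 = rng.normal(size=(m, m))   # generic, hence invertible with probability 1
J2 = rng.normal(size=(m, m))

# Homomorphism property: rho_q(J1 @ J2) == rho_q(J1) * rho_q(J2),
# since |det(J1 J2)| = |det J1| |det J2| and ln turns products into sums.
assert np.isclose(rho_q(J1 @ J2, q), rho_q(J1, q) * rho_q(J2, q))

# q = 0 collapses everything to the identity of U(1): the trivial bundle case.
assert np.isclose(rho_q(J1, 0.0), 1.0)
```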

The natural lift φ’ of a diffeomorphism φ: M → M is given by

φ’[x, e]α = [φ(x), exp(iq ln det|J|) · e]α —– (2)

where J is the Jacobian of the morphism φ. The bundles Pq(M) are all trivial since they admit global sections. In fact, on any manifold M one can define a global Riemannian metric g, out of which local sections can be defined that glue together into a global one.

Since the bundles Pq(M) are all trivial, they are all isomorphic to M × U(1) as principal U(1)-bundles, though in a non-canonical way unless q = 0. Any two of the bundles Pq1(M) and Pq2(M), for two different values of q, are isomorphic as principal bundles, but the isomorphism thus obtained is not the lift of a spacetime diffeomorphism, because of the two different values of q. Thence they are not isomorphic as natural bundles. We are thence facing a very interesting situation: a gauge natural bundle C associated to the trivial principal bundle P can be endowed with an infinite family of natural structures, one for each q ∈ R; each of these natural structures can be used to regard principal connections on P as natural objects on M, and thence one can regard electromagnetism as a natural theory.

Now that the mathematical situation has been clarified a little, it is again a matter of physical interpretation. One can in fact restrict to electromagnetic potentials which are a priori connections on a trivial structure bundle P ≅ M × U(1), or accept that more complicated situations may occur in Nature. But non-trivial situations are still empirically unsupported, at least at a fundamental level.


Canonical Actions on Bundles – Philosophizing Identity Over Gauge Transformations.


In physical applications, fiber bundles often come with a preferred group of transformations (usually the symmetry group of the system). The modern attitude of physicists is to regard this group as a fundamental structure which should be implemented from the very beginning, enriching bundles with a further structure and defining a new category.

A similar feature appears on manifolds as well: for example, on ℝ2 one can restrict to Cartesian coordinates when regarding it just as a vector space endowed with a differentiable structure, but one can also allow translations if the “bigger” affine structure is considered. Moreover, coordinates can be chosen in much bigger sets: for instance one can fix the symplectic form ω = dx ∧ dy on ℝ2, so that ℝ2 is covered by an atlas of canonical coordinates (which include all Cartesian ones). But ℝ2 also happens to be identifiable with the cotangent bundle T*ℝ, so that we can restrict the previous symplectic atlas to allow only natural fibered coordinates. Finally, ℝ2 can be considered as a bare manifold, so that general curvilinear coordinates should be allowed accordingly; only if the full (i.e., unrestricted) manifold structure is considered can one use a full maximal atlas. Other choices define instead maximal atlases in suitably restricted sub-classes of allowed charts. As any manifold structure is associated with a maximal atlas, geometric bundles are associated to “maximal trivializations”. However, it may happen that one can restrict (or enlarge) the allowed local trivializations, so that the same geometrical bundle can be trivialized just using the appropriate smaller class of local trivializations. In geometrical terms this corresponds, of course, to imposing a further structure on the bare bundle. Of course, this newly structured bundle is defined by the same basic ingredients, i.e. the same base manifold M, the same total space B, the same projection π and the same standard fiber F, but it is characterized by a new maximal trivialization where, however, maximal refers now to a smaller set of local trivializations.

Examples are: vector bundles, characterized by linear local trivializations; affine bundles, characterized by affine local trivializations; and principal bundles, characterized by local trivializations acting by left translations on the fiber group. Further examples come from Physics: gauge transformations are used as transition functions for the configuration bundles of any gauge theory. For these reasons we give the following definition of a fiber bundle with structure group.

A fiber bundle with structure group G is given by a sextuple B = (E, M, π; F; λ, G) such that:

  • (E, M, π; F) is a fiber bundle. The structure group G is a Lie group (possibly a discrete one) and λ : G —–> Diff(F) defines a left action of G on the standard fiber F .
  • There is a family of preferred trivializations {(Uα, t(α))}α∈I of B such that the following holds: let the transition functions be gˆ(αβ) : Uαβ —–> Diff(F) and let eG be the neutral element of G. ∃ a family of maps g(αβ) : Uαβ —–> G such

    that, for each x ∈ Uαβγ = Uα ∩ Uβ ∩ Uγ

    g(αα)(x) = eG

    g(αβ)(x) = [g(βα)(x)]-1

    g(αβ)(x) . g(βγ)(x) . g(γα)(x) = eG

    and

    gˆ(αβ)(x) = λ(g(αβ)(x)) ∈ Diff(F)

The maps g(αβ) : Uαβ —–> G, which depend on the trivialization, are said to form a cocycle with values in G. They are called the transition functions with values in G (or also shortly the transition functions). The preferred trivializations will be said to be compatible with the structure. Whenever dealing with fiber bundles with structure group the choice of a compatible trivialization will be implicitly assumed.
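To see the cocycle identities at work, here is a toy G = U(1) example on three overlapping patches; the particular transition functions are invented for illustration and carry no geometric significance:

```python
import numpy as np

# Toy U(1)-valued transition functions on a manifold coordinatized by x.
# Writing g_ab(x) = exp(i f_ab(x)) with f_ab antisymmetric in (a, b) gives
# g_aa = e_G and g_ab = [g_ba]^{-1}; the cocycle identity on a triple overlap
# then needs f_ab + f_bc + f_ca = 0.
f = {('a', 'b'): lambda x: np.sin(x),
     ('b', 'c'): lambda x: x**2}
f[('a', 'c')] = lambda x: f[('a', 'b')](x) + f[('b', 'c')](x)  # forced by the cocycle

def g(a, b, x):
    if a == b:
        return 1.0 + 0j                      # g_aa(x) = e_G
    if (a, b) in f:
        return np.exp(1j * f[(a, b)](x))
    return 1.0 / g(b, a, x)                  # g_ab = [g_ba]^{-1}

x = 0.37  # a point in the triple overlap U_abc
assert np.isclose(g('a', 'b', x) * g('b', 'c', x) * g('c', 'a', x), 1.0)
```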

Fiber bundles with structure group provide the suitable framework to deal with bundles with a preferred group of transformations. To see this, let us begin by introducing the notion of the structure bundle of a fiber bundle with structure group B = (B, M, π; F; λ, G).

Let B = (B, M, π; F; λ, G) be a bundle with a structure group; let us fix a trivialization {(Uα, t(α))}α∈I and denote by g(αβ) : Uαβ —–> G its transition functions. By using the canonical left action L : G —–> Diff(G) of G onto itself, let us define gˆ(αβ) : Uαβ —–> Diff(G) given by gˆ(αβ)(x) = L(g(αβ)(x)); they obviously satisfy the cocycle properties. We can now construct a (unique modulo isomorphisms) principal bundle PB = P(B) having G as structure group and the g(αβ) as transition functions, acting on G by left translations Lg : G —–> G.

The principal bundle P(B) = (P, M, p; G) constructed above is called the structure bundle of B = (B, M, π; F; λ, G).

Notice that there is no similar canonical way of associating a structure bundle to a geometric bundle B = (B, M, π; F), since in that case the structure group G is at least partially undetermined.

Each automorphism of P(B) naturally acts over B.

Let, in fact, {σ(α)}α∈I be a trivialization of PB together with its transition functions g(αβ) : Uαβ —–> G defined by σ(β) = σ(α) · g(αβ). Then any principal morphism Φ = (Φ, φ) over PB is locally represented by local maps ψ(α) : Uα —–> G such that

Φ : [x, h]α ↦ [φ(α)(x), ψ(α)(x) · h]α

Since Φ is a global automorphism of PB, the above local expressions must agree on overlaps, i.e. the following properties hold true in Uαβ:

φ(α)(x) = φ(β)(x) ≡ x’

ψ(α)(x) = g(αβ)(x’) . ψ(β)(x) . g(βα)(x)

By using the family of maps {(φ(α), ψ(α))} one can thence define a family of global automorphisms of B. In fact, using the trivialization {(Uα, t(α))}α∈I, one can define local automorphisms of B given by

Φ(α)B : (x, y) ↦ (φ(α)(x), [λ(ψ(α)(x))](y))

These local maps glue together to give a global automorphism ΦB of the bundle B, due to the fact that the g(αβ) are also transition functions of B with respect to its trivialization {(Uα, t(α))}α∈I.

In this way B is endowed with a preferred group of transformations, namely the group Aut(PB) of automorphisms of the structure bundle PB, represented on B by means of the canonical action. These transformations are called (generalized) gauge transformations. Vertical gauge transformations, i.e. gauge transformations projecting over the identity, are also called pure gauge transformations.
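A toy numerical check of the gluing condition ψ(α)(x) = g(αβ)(x’) · ψ(β)(x) · g(βα)(x) may make the construction concrete. All maps below (transition function, base diffeomorphism, local representative) are invented for illustration, with G realized as invertible 2×2 matrices:

```python
import numpy as np

rng = np.random.default_rng(1)

def g_ab(x):   # invented transition function of P_B on the overlap
    t = np.sin(x)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]]) * (1 + x**2)

def g_ba(x):
    return np.linalg.inv(g_ab(x))

phi = lambda x: x + 1.0                             # base diffeomorphism
psi_b = lambda x: np.array([[1.0, x], [0.0, 1.0]])  # arbitrary local map U_b -> G

def psi_a(x):  # forced by the compatibility condition on the overlap
    return g_ab(phi(x)) @ psi_b(x) @ g_ba(x)

# A point of P_B over x, written in both trivializations: h_b = g_ba(x) h_a.
x = 0.4
h_a = rng.normal(size=(2, 2))
h_b = g_ba(x) @ h_a

# The two local expressions of Phi agree after changing trivialization at phi(x):
image_in_a = psi_a(x) @ h_a
image_in_b = psi_b(x) @ h_b
assert np.allclose(image_in_a, g_ab(phi(x)) @ image_in_b)
```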

Momentum of Accelerated Capital. Note Quote.


Distinct types of high frequency trading firms include: independent proprietary firms, which use private funds and specific strategies that remain secretive, and which may act as market makers generating automatic buy and sell orders continuously throughout the day; broker-dealer proprietary desks, which are part of traditional broker-dealer firms but are not related to their client business, and which are operated by the largest investment banks; and thirdly hedge funds, which focus on complex statistical arbitrage, taking advantage of pricing inefficiencies between asset classes and securities.

Today strategies using algorithmic trading and High Frequency Trading play a central role on financial exchanges, alternative markets, and banks’ internalized (over-the-counter) dealings:

High frequency traders typically act in a proprietary capacity, making use of a number of strategies and generating a very large number of trades every single day. They leverage technology and algorithms from end-to-end of the investment chain – from market data analysis and the operation of a specific trading strategy to the generation, routing, and execution of orders and trades. What differentiates HFT from algorithmic trading is the high frequency turnover of positions as well as its implicit reliance on ultra-low latency connection and speed of the system.

The use of algorithms in computerised exchange trading has experienced a long evolution with the increasing digitalisation of exchanges:

Over time, algorithms have continuously evolved: while initial first-generation algorithms – fairly simple in their goals and logic – were pure trade execution algos, second-generation algorithms – strategy implementation algos – have become much more sophisticated and are typically used to produce own trading signals which are then executed by trade execution algos. Third-generation algorithms include intelligent logic that learns from market activity and adjusts the trading strategy of the order based on what the algorithm perceives is happening in the market. HFT is not a strategy per se, but rather a technologically more advanced method of implementing particular trading strategies. The objective of HFT strategies is to seek to benefit from market liquidity imbalances or other short-term pricing inefficiencies.

While algorithms are employed by most traders in contemporary markets, the intense focus on speed and the momentary holding periods are practices unique to high frequency traders. The defence of high frequency trading is built around the principles that it increases liquidity, narrows spreads, and improves market efficiency: the high number of trades made by HFT traders results in greater liquidity in the market; algorithmic trading has resulted in the prices of securities being updated more quickly, with more competitive bid-ask prices and narrowing spreads; and HFT enables prices to reflect information more quickly and accurately, ensuring accurate pricing at smaller time intervals. But there are critical differences between high frequency traders and traditional market makers:

  1. HFTs do not have an affirmative market making obligation; that is, they are not obliged to provide liquidity by constantly displaying two-sided quotes, which may translate into a lack of liquidity during volatile conditions.
  2. HFTs contribute little market depth due to the marginal size of their quotes, which may result in larger orders having to transact with many small orders, and this may impact overall transaction costs.
  3. HFT quotes are barely accessible due to the extremely short duration for which the liquidity is available, with orders cancelled within milliseconds.

Besides the shallowness of the HFT contribution to liquidity, there are real fears of how HFT can compound and magnify risk through the rapidity of its actions:

There is evidence that high-frequency algorithmic trading also has some positive benefits for investors by narrowing spreads – the difference between the price at which a buyer is willing to purchase a financial instrument and the price at which a seller is willing to sell it – and by increasing liquidity at each decimal point. However, a major issue for regulators and policymakers is the extent to which high-frequency trading, unfiltered sponsored access, and co-location amplify risks, including systemic risk, by increasing the speed at which trading errors or fraudulent trades can occur.

Although there have always been occasional trading errors and episodic volatility spikes in markets, the speed, automation and interconnectedness of today’s markets create a different scale of risk. These risks demand that exchanges and market participants employ effective quality management systems and sophisticated risk mitigation controls adapted to these new dynamics, in order to protect against potential threats to market stability arising from technology malfunctions or episodic illiquidity. However, there are more deliberate aspects of HFT strategies which may present serious problems for market structure and functioning, and where conduct may be illegal. Order anticipation, for example, seeks to ascertain the existence of large buyers or sellers in the marketplace and then to trade ahead of those buyers and sellers, in anticipation that their large orders will move market prices. A momentum strategy involves initiating a series of orders and trades in an attempt to ignite a rapid price move. HFT strategies can resemble traditional forms of market manipulation that violate the Exchange Act:

  1. Spoofing and layering occur when traders create a false appearance of market activity by entering multiple non-bona fide orders on one side of the market, at increasing or decreasing prices, in order to induce others to buy or sell the stock at a price altered by the bogus orders.
  2. Painting the tape involves placing successive small buy orders at increasing prices in order to stimulate increased demand.
  3. Quote stuffing and price fade are additional dubious HFT practices: quote stuffing floods the market with huge numbers of orders and cancellations in rapid succession, which may generate buying or selling interest or compromise the trading position of other market participants; order or price fade involves the rapid cancellation of orders in response to other trades.

The World Federation of Exchanges insists: “Exchanges are committed to protecting market stability and promoting orderly markets, and understand that a robust and resilient risk control framework adapted to today’s high speed markets is a cornerstone of enhancing investor confidence.” However this robust and resilient risk control framework seems lacking, including in the dark pools now established for trading, which were initially proposed as safer than the open market.

Evolutionary Game Theory. Note Quote


In classical evolutionary biology the fitness landscape for possible strategies is considered static. Therefore optimization theory is the usual tool for analyzing the evolution of strategies, which consequently tend to climb the peaks of the static landscape. However, in more realistic scenarios the evolution of populations modifies the environment, so that the fitness landscape becomes dynamic. In other words, the maxima of the fitness landscape depend on the number of specimens that adopt every strategy (a frequency-dependent landscape). In this case, when the evolution depends on agents’ actions, game theory is the adequate mathematical tool to describe the process. But this is precisely the scheme in which evolving physical laws (i.e. algorithms or strategies) are generated from agent-agent interactions (a bottom-up process) submitted to natural selection.

The concept of evolutionarily stable strategy (ESS) is central to evolutionary game theory. An ESS is defined as a strategy that cannot be displaced by any alternative strategy when it is followed by the great majority – almost all – of the systems in a population. In general, an ESS is not necessarily optimal; however it might be assumed that in the last stages of evolution – before achieving the quantum equilibrium – the fitness landscape of possible strategies could be considered static, or at least slowly varying. In this simplified case an ESS would be one with the highest payoff, therefore satisfying an optimizing criterion. Different ESSs could exist in other regions of the fitness landscape.
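As a worked illustration of the ESS conditions (the standard Hawk-Dove example, not drawn from the text above): for resource value V smaller than fight cost C, the mixed strategy playing Hawk with probability p* = V/C is an ESS. A sketch checking Maynard Smith's two conditions numerically:

```python
import numpy as np

V, C = 2.0, 4.0                 # resource value and fight cost, with V < C
p_star = V / C                  # candidate ESS: play Hawk with probability V/C

# Payoff A[i, j] to a player using i against j; 0 = Hawk, 1 = Dove.
A = np.array([[(V - C) / 2, V],
              [0.0,         V / 2]])

def payoff(p, q):
    """Expected payoff of mixed strategy p (prob. of Hawk) against q."""
    s, t = np.array([p, 1 - p]), np.array([q, 1 - q])
    return s @ A @ t

# ESS conditions against every mutant strategy q != p*:
# either E(p*, p*) > E(q, p*), or equality holds and E(p*, q) > E(q, q).
for q in np.linspace(0, 1, 101):
    if abs(q - p_star) < 1e-12:
        continue
    first = payoff(p_star, p_star) - payoff(q, p_star)  # = 0 at the mixed ESS
    second = payoff(p_star, q) - payoff(q, q)           # > 0 for all mutants
    assert first > 1e-9 or (abs(first) < 1e-9 and second > 1e-9)
```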

In the information-theoretic Darwinian approach it seems plausible to assume as optimization criterion the optimization of information flows for the system. A set of three regulating principles could be:

Structure: The complexity of the system is optimized (maximized). The definition adopted for complexity is Bennett’s logical depth, which for a binary string is the time needed to execute the minimal program that generates that string. There is no general acceptance of the definition of complexity, nor is there a consensus on the relation between the increase of complexity – for a given definition – and Darwinian evolution. However, there seems to be some agreement on the fact that, in the long term, Darwinian evolution should drive an increase in complexity in the biological realm, for an adequate natural definition of this concept. The complexity of a system at time t in this theory would then be the Bennett logical depth of the program stored at time t in its Turing machine. The increase of complexity is a characteristic of Lamarckian evolution, and it is also admitted that the trend of evolution in Darwinian theory is in the direction in which complexity grows, although whether this tendency depends on the timescale – or some other factors – is still not very clear.

Dynamics: The information outflow of the system is optimized (minimized). The information here is the Fisher information measure for the probability density function of the position of the system. According to S. A. Frank, natural selection acts by maximizing the Fisher information within a Darwinian system. As a consequence, assuming that the flow of information between a system and its surroundings can be modeled as a zero-sum game, Darwinian systems would follow dynamics that minimize this outflow.
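For reference, the standard definitions invoked here (not specific to this theory): the Fisher information of the position density, and the Cramér-Rao bound it controls.

```latex
% Fisher information of a probability density p(x) for the position x:
I[p] \;=\; \int \left(\frac{\partial \ln p(x)}{\partial x}\right)^{\!2} p(x)\, dx
\;=\; \int \frac{\bigl(\partial_x p(x)\bigr)^{2}}{p(x)}\, dx ,
% and the Cramér–Rao inequality for any unbiased estimator \hat{x}:
\operatorname{Var}(\hat{x}) \;\ge\; \frac{1}{I[p]} .
% Optimizing the "Dynamics" principle is thus a variational condition on I[p].
```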

Interaction: The interaction between two subsystems optimizes (maximizes) the complexity of the total system. The complexity is again equated to the Bennett logical depth. The role of Interaction is central in the generation of composite systems, and therefore in the structure of the information processor of composite systems, which results from the logical interconnections among the processors of the constituents. There is an enticing option of defining the complexity of a system in contextual terms, as the capacity of a system for anticipating the behavior at t + ∆t of the surrounding systems included in the sphere of radius r centered at the position X(t) occupied by the system. This definition would directly lead to the maximization of the predictive power for the systems that maximized their complexity. However, this magnitude would definitely be very difficult even to estimate, in principle much more so than the usual definitions of complexity.

Quantum behavior of microscopic systems should now emerge from the ESS. In other terms, the postulates of quantum mechanics should be deduced from the application of the three regulating principles to our physical systems endowed with an information processor.

Let us apply Structure. It is reasonable to consider that the maximization of the complexity of a system would in turn maximize its predictive power. And this optimal statistical inference capacity would plausibly induce the complex Hilbert space structure for the system’s space of states. Let us now consider Dynamics. This is basically the application of the principle of minimum Fisher information, or maximum Cramér-Rao bound, to the probability distribution function for the position of the system. The concept of entanglement seems to be decisive for studying the generation of composite systems, in particular in this theory through applying Interaction. The theory admits a simple model that characterizes the entanglement between two subsystems as the mutual exchange of randomizers (R1, R2), programs (P1, P2) – with their respective anticipation modules (A1, A2) – and wave functions (Ψ1, Ψ2). In this way, both subsystems can anticipate not only the behavior of their corresponding surrounding systems, but also that of the environment of their partner entangled subsystem. In addition, entanglement can be considered a natural phenomenon in this theory, a consequence of the tendency to increase complexity, and therefore, in a certain sense, an experimental support for the theory.

In addition, the information-theoretic Darwinian approach is a minimalist realist theory – every system follows a continuous trajectory in time, as in Bohmian mechanics – and a local theory in physical space. In this theory apparent nonlocality, as in Bell’s inequality violations, would be an artifact of the anticipation module in the information space, although randomness would necessarily be intrinsic to nature through the random number generator methodologically associated with every fundamental system at t = 0, an essential ingredient to start and fuel – through variation – Darwinian evolution. As time increases, random events determined by the random number generators would progressively be replaced by causal events determined by the evolving programs that gradually take control of the elementary systems. Randomness would be displaced by causality as physical Darwinian evolution gave rise to the quantum equilibrium regime – but not completely, since randomness would play a crucial role in the optimization of strategies, and thus of information flows, as game theory states.

Category of a Quantum Groupoid


For a quantum groupoid H let Rep(H) be the category of representations of H, whose objects are finite-dimensional left H-modules and whose morphisms are H-linear homomorphisms. We shall show that Rep(H) has a natural structure of a monoidal category with duality.

For objects V, W of Rep(H) set

V ⊗ W = {x ∈ V ⊗k W | x = ∆(1) · x} ⊂ V ⊗k W —– (1)

with the obvious action of H via the comultiplication ∆ (here ⊗k denotes the usual tensor product of vector spaces). Note that ∆(1) is an idempotent and therefore V ⊗ W = ∆(1) · (V ⊗k W). The tensor product of morphisms is the restriction of the usual tensor product of homomorphisms. The standard associativity isomorphisms (U ⊗ V) ⊗ W → U ⊗ (V ⊗ W) are functorial and satisfy the pentagon condition, since ∆ is coassociative. We will suppress these isomorphisms and write simply U ⊗ V ⊗ W.
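The role of the idempotent ∆(1) can be seen in linear-algebra miniature: the truncated tensor product is just the image of an idempotent operator acting on V ⊗k W. A toy numpy sketch, with an invented idempotent standing in for the action of ∆(1):

```python
import numpy as np

# V ⊗_k W for dim V = dim W = 2, so the ambient space is k^4.
# Delta(1) is represented by some idempotent matrix P; here an invented
# rank-1 orthogonal projector stands in for the comultiplication of 1.
u = np.array([1.0, 0.0, 1.0, 0.0]) / np.sqrt(2)
P = np.outer(u, u)
assert np.allclose(P @ P, P)             # idempotent: P @ P == P

# The truncated tensor product V ⊗ W = P (V ⊗_k W) is the image of P:
# exactly the vectors fixed by P, i.e. those x with x = Delta(1) · x.
x = P @ np.array([3.0, -1.0, 2.0, 5.0])  # a generic element pushed into V ⊗ W
assert np.allclose(P @ x, x)
```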

The target counital subalgebra Ht ⊂ H has an H-module structure given by h · z = εt(hz), where h ∈ H, z ∈ Ht.

Ht is the unit object of Rep(H).

Define a k-linear homomorphism lV : Ht ⊗ V → V by lV(1(1) · z ⊗ 1(2) · v) = z · v, z ∈ Ht, v ∈ V.

This map is H-linear, since

lV(h · (1(1) · z ⊗ 1(2) · v)) = lV(h(1) · z ⊗ h(2) · v) = εt(h(1)z)h(2) · v = hz · v = h · lV(1(1) · z ⊗ 1(2) · v),

∀ h ∈ H. The inverse map lV−1 : V → Ht ⊗ V is given by

lV−1(v) = S(1(1)) ⊗ (1(2) · v) = (1(1) · 1) ⊗ (1(2) · v)

The collection {lV}V gives a natural equivalence between the functor Ht ⊗ (·) and the identity functor. Indeed, for any H-linear homomorphism f : V → U we have:

lU ◦ (id ⊗ f)(1(1) · z ⊗ 1(2) · v) = lU(1(1) · z ⊗ 1(2) · f(v)) = z · f(v) = f(z · v) = f ◦ lV(1(1) · z ⊗ 1(2) · v)

Similarly, the k-linear homomorphism rV : V ⊗ Ht → V defined by rV(1(1) · v ⊗ 1(2) · z) = S(z) · v, z ∈ Ht, v ∈ V, has the inverse rV−1(v) = 1(1) · v ⊗ 1(2) and satisfies the necessary properties.

Finally, we can check the triangle axiom idV ⊗ lW = rV ⊗ idW : V ⊗ Ht ⊗ W → V ⊗ W ∀ objects V, W of Rep(H). For v ∈ V, w ∈ W we have

(idV ⊗ lW)(1(1) · v ⊗ 1′(1)1(2) · z ⊗ 1′(2) · w)

= 1(1) · v ⊗ 1′(2)z · w = 1(1)S(z) · v ⊗ 1(2) · w

= (rV ⊗ idW)(1′(1) · v ⊗ 1′(2)1(1) · z ⊗ 1(2) · w),

therefore, idV ⊗ lW = rV ⊗ idW

Using the antipode S of H, we can provide Rep(H) with a duality. For any object V of Rep(H), define the action of H on V∗ = Homk(V, k) by

(h · φ)(v) = φ(S(h) · v) —– (2)

where h ∈ H, v ∈ V, φ ∈ V∗. For any morphism f : V → W, let f∗ : W∗ → V∗ be the morphism dual to f. For any V in Rep(H), we define the duality morphisms dV : V∗ ⊗ V → Ht, bV : Ht → V ⊗ V∗ as follows. For ∑j φj ⊗ vj ∈ V∗ ⊗ V, set

dV(∑j φj ⊗ vj) = ∑j φj(1(1) · vj) 1(2) —– (3)

Let {fi}i and {ξi}i be bases of V and V∗, respectively, dual to each other. The element ∑i fi ⊗ ξi does not depend on the choice of these bases; moreover, ∀ v ∈ V, φ ∈ V∗ one has φ = ∑i φ(fi) ξi and v = ∑i fi ξi(v). Set

bV(z) = z · (∑i fi ⊗ ξi) —– (4)

The category Rep(H) is a monoidal category with duality. We know already that Rep(H) is monoidal; it remains to prove that dV and bV are H-linear and satisfy the identities

(idV ⊗ dV)(bV ⊗ idV) = idV, (dV ⊗ idV∗)(idV∗ ⊗ bV) = idV∗.

Take ∑j φj ⊗ vj ∈ V∗ ⊗ V, z ∈ Ht, h ∈ H. Using the axioms of a quantum groupoid, we have

h · dV(∑j φj ⊗ vj) = ∑j φj(1(1) · vj) εt(h1(2))

= ∑j φj(εs(1(1)h) · vj) 1(2) = ∑j φj(S(h(1))1(1)h(2) · vj) 1(2)

= ∑j (h(1) · φj)(1(1) · (h(2) · vj)) 1(2)

= ∑j dV(h(1) · φj ⊗ h(2) · vj) = dV(h · ∑j φj ⊗ vj)

therefore, dV is H-linear. To check the H-linearity of bV we have to show that h · bV(z) = bV(h · z), i.e., that

∑i h(1)z · fi ⊗ h(2) · ξi = ∑i 1(1)εt(hz) · fi ⊗ 1(2) · ξi

Since both sides of the above equality are elements of V ⊗k V∗, evaluating the second factor on v ∈ V, we get the equivalent condition

h(1)zS(h(2)) · v = 1(1)εt(hz)S(1(2)) · v, which is easy to check. Thus, bV is H-linear.

Using the isomorphisms lV and rV identifying Ht ⊗ V, V ⊗ Ht, and V, ∀ v ∈ V and φ ∈ V∗ we have:

(idV ⊗ dV)(bV ⊗ idV)(v)

= (idV ⊗ dV)(bV(1(1) · 1) ⊗ 1(2) · v)

= (idV ⊗ dV)(bV(1(2)) ⊗ S−1(1(1)) · v)

= ∑i (idV ⊗ dV)(1(2) · fi ⊗ 1(3) · ξi ⊗ S−1(1(1)) · v)

= ∑i 1(2) · fi ⊗ (1(3) · ξi)(1′(1)S−1(1(1)) · v) 1′(2)

= 1(2)S(1(3))1′(1)S−1(1(1)) · v ⊗ 1′(2) = v

(dV ⊗ idV∗)(idV∗ ⊗ bV)(φ)

= (dV ⊗ idV∗)(1(1) · φ ⊗ bV(1(2)))

= ∑i (dV ⊗ idV∗)(1(1) · φ ⊗ 1(2) · fi ⊗ 1(3) · ξi)

= ∑i (1(1) · φ)(1′(1)1(2) · fi) 1′(2) ⊗ 1(3) · ξi

= 1′(2) ⊗ 1(3)1(1)S(1′(1)1(2)) · φ = φ,

QED.

 

Conjuncted: Ergodicity. Thought of the Day 51.1


When we scientifically investigate a system, we cannot normally observe all possible histories in Ω, or directly access the conditional probability structure {PrE}E⊆Ω. Instead, we can only observe specific events. Conducting many “runs” of the same experiment is an attempt to observe as many histories of a system as possible, but even the best experimental design rarely allows us to observe all histories or to read off the full conditional probability structure. Furthermore, this strategy works only for smaller systems that we can isolate in laboratory conditions. When the system is the economy, the global ecosystem, or the universe in its entirety, we are stuck in a single history. We cannot step outside that history and look at alternative histories. Nonetheless, we would like to infer something about the laws of the system in general, and especially about the true probability distribution over histories.

Can we discern the system’s laws and true probabilities from observations of specific events? And what kinds of regularities must the system display in order to make this possible? In other words, are there certain “metaphysical prerequisites” that must be in place for scientific inference to work?

To answer these questions, we first consider a very simple example. Here T = {1,2,3,…}, and the system’s state at any time is the outcome of an independent coin toss. So the state space is X = {Heads, Tails}, and each possible history in Ω is one possible Heads/Tails sequence.

Suppose the true conditional probability structure on Ω is induced by the single parameter p, the probability of Heads. In this example, the Law of Large Numbers guarantees that, with probability 1, the limiting frequency of Heads in a given history (as time goes to infinity) will match p. This means that the subset of Ω consisting of “well-behaved” histories has probability 1, where a history is well-behaved if (i) there exists a limiting frequency of Heads for it (i.e., the proportion of Heads converges to a well-defined limit as time goes to infinity) and (ii) that limiting frequency is p. For this reason, we will almost certainly (with probability 1) arrive at the true conditional probability structure on Ω on the basis of observing just a single history and counting the number of Heads and Tails in it.
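This convergence is easy to watch numerically; a minimal simulation of a single history (the value of p is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
p = 0.3                                   # true probability of Heads
history = rng.random(1_000_000) < p       # one sampled history, True = Heads

# Running frequency of Heads along this single history approaches p
# (Law of Large Numbers): with probability 1 the history is "well-behaved".
freqs = np.cumsum(history) / np.arange(1, len(history) + 1)
print(freqs[999], freqs[99_999], freqs[-1])   # -> tends to 0.3
```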

Does this result generalize? The short answer is “yes”, provided the system’s symmetries are of the right kind. Without suitable symmetries, generalizing from local observations to global laws is not possible. In a slogan, for scientific inference to work, there must be sufficient regularities in the system. In our toy system of the coin tosses, there are. Wigner (1967) recognized this point, taking symmetries to be “a prerequisite for the very possibility of discovering the laws of nature”.

Generally, symmetries allow us to infer general laws from specific observations. For example, let T = {1,2,3,…}, and let Y and Z be two subsets of the state space X. Suppose we have made the observation O: “whenever the state is in the set Y at time 5, there is a 50% probability that it will be in Z at time 6”. Suppose we know, or are justified in hypothesizing, that the system has the set of time symmetries {ψr : r = 1,2,3,….}, with ψr(t) = t + r, as defined in the previous section. Then, from observation O, we can deduce the following general law: “for any t in T, if the state of the system is in the set Y at time t, there is a 50% probability that it will be in Z at time t + 1”.

However, this example still has a problem. It only shows that if we could make observation O, then our generalization would be warranted, provided the system has the relevant symmetries. But the “if” is a big “if”. Recall what observation O says: “whenever the system’s state is in the set Y at time 5, there is a 50% probability that it will be in the set Z at time 6”. Clearly, this statement is only empirically well supported – and thus a real observation rather than a mere hypothesis – if we can make many observations of possible histories at times 5 and 6. We can do this if the system is an experimental apparatus in a lab or a virtual system in a computer, which we are manipulating and observing “from the outside”, and on which we can perform many “runs” of an experiment. But, as noted above, if we are participants in the system, as in the case of the economy, an ecosystem, or the universe at large, we only get to experience times 5 and 6 once, and we only get to experience one possible history. How, then, can we ever assemble a body of evidence that allows us to make statements such as O?

The solution to this problem lies in the property of ergodicity. This is a property that a system may or may not have and that, if present, serves as the desired metaphysical prerequisite for scientific inference. To explain this property, let us give an example. Suppose T = {1,2,3,…}, and the system has all the time symmetries in the set Ψ = {ψr : r = 1,2,3,….}. Heuristically, the symmetries in Ψ can be interpreted as describing the evolution of the system over time. Suppose each time-step corresponds to a day. Then the history h = (a,b,c,d,e,….) describes a situation where today’s state is a, tomorrow’s is b, the next day’s is c, and so on. The transformed history ψ1(h) = (b,c,d,e,f,….) describes a situation where today’s state is b, tomorrow’s is c, the following day’s is d, and so on. Thus, ψ1(h) describes the same “world” as h, but as seen from the perspective of tomorrow. Likewise, ψ2(h) = (c,d,e,f,g,….) describes the same “world” as h, but as seen from the perspective of the day after tomorrow, and so on.

Given the set Ψ of symmetries, an event E (a subset of Ω) is Ψ-invariant if the inverse image of E under ψ is E itself, for all ψ in Ψ. This implies that if a history h is in E, then ψ(h) will also be in E, for all ψ. In effect, if the world is in the set E today, it will remain in E tomorrow, and the day after tomorrow, and so on. Thus, E is a “persistent” event: an event one cannot escape from by moving forward in time. In a coin-tossing system, where Ψ is still the set of time translations, examples of Ψ-invariant events are “all Heads”, where E contains only the history (Heads, Heads, Heads, …), and “all Tails”, where E contains only the history (Tails, Tails, Tails, …).

The system is ergodic (with respect to Ψ) if, for any Ψ-invariant event E, the unconditional probability of E, i.e., PrΩ(E), is either 0 or 1. In other words, the only persistent events are those which occur in almost no history (i.e., PrΩ(E) = 0) and those which occur in almost every history (i.e., PrΩ(E) = 1). Our coin-tossing system is ergodic, as exemplified by the fact that the Ψ-invariant events “all Heads” and “all Tails” occur with probability 0.

In an ergodic system, it is possible to estimate the probability of any event “empirically”, by simply counting the frequency with which that event occurs. Frequencies are thus evidence for probabilities. The formal statement of this is the following important result from the theory of dynamical systems and stochastic processes.

Ergodic Theorem: Suppose the system is ergodic. Let E be any event and let h be any history. For all times t in T, let Nt be the number of elements r in the set {1, 2, …, t} such that ψr(h) is in E. Then, with probability 1, the ratio Nt/t will converge to PrΩ(E) as t increases towards infinity.

Intuitively, Nt is the number of times the event E has “occurred” in history h from time 1 up to time t. The ratio Nt/t is therefore the frequency of occurrence of event E (up to time t) in history h. This frequency might be measured, for example, by performing a sequence of experiments or observations at times 1, 2, …, t. The Ergodic Theorem says that, almost certainly (i.e., with probability 1), the empirical frequency will converge to the true probability of E, PrΩ(E), as the number of observations becomes large. The estimation of the true conditional probability structure from the frequencies of Heads and Tails in our illustrative coin-tossing system is possible precisely because the system is ergodic.

To understand the significance of this result, let Y and Z be two subsets of X, and suppose E is the event “h(1) is in Y”, while D is the event “h(2) is in Z”. Then the intersection E ∩ D is the event “h(1) is in Y, and h(2) is in Z”. The Ergodic Theorem says that, by performing a sequence of observations over time, we can empirically estimate PrΩ(E) and PrΩ(E ∩ D) with arbitrarily high precision. Thus, we can compute the ratio PrΩ(E ∩ D)/PrΩ(E). But this ratio is simply the conditional probability PrΕ(D). And so, we are able to estimate the conditional probability that the state at time 2 will be in Z, given that at time 1 it was in Y. This illustrates that, by allowing us to estimate unconditional probabilities empirically, the Ergodic Theorem also allows us to estimate conditional probabilities, and in this way to learn the properties of the conditional probability structure {PrE}E⊆Ω.
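The estimation procedure licensed by the theorem can be sketched for the coin-tossing system, taking Y = Z = {Heads} as illustrative choices: the time shifts ψr turn one history into a sequence of "observations" from which PrΩ(E), PrΩ(E ∩ D), and hence PrE(D) are estimated.

```python
import numpy as np

rng = np.random.default_rng(7)
p = 0.3
h = rng.random(1_000_000) < p      # one history; h[t] is the state at time t+1

# E = "h(1) in Y" with Y = {Heads}; D = "h(2) in Z" with Z = {Heads}.
# The r-th shifted history psi_r(h) lies in E iff h[r] is Heads, and in
# E ∩ D iff h[r] and h[r+1] are both Heads.
in_E       = h[:-1]
in_E_and_D = h[:-1] & h[1:]

freq_E  = in_E.mean()              # estimates Pr_Omega(E)      -> p
freq_ED = in_E_and_D.mean()        # estimates Pr_Omega(E ∩ D)  -> p * p
print(freq_ED / freq_E)            # estimates Pr_E(D) -> p (independent tosses)
```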

We may thus conclude that ergodicity is what allows us to generalize from local observations to global laws. In effect, when we engage in scientific inference about some system, or even about the world at large, we rely on the hypothesis that this system, or the world, is ergodic. If our system, or the world, were “dappled”, then presumably we would not be able to presuppose ergodicity, and hence our ability to make scientific generalizations would be compromised.

Ergodic Theory. Thought of the Day 51.0


Classical dynamical systems have a particularly rich set of time symmetries. Let (X, φ) be a dynamical system. A classical dynamical system consists of a set X (the state space) and a function φ from X into itself that determines how the state changes over time (the dynamics). Let T = {0, 1, 2, 3, …}. Given any state x in X (the initial conditions), the orbit of x is the history h defined by h(0) = x, h(1) = φ(x), h(2) = φ(φ(x)), and so on. Let Ω be the set of all orbits determined by (X, φ) in this way. Let {Pr’E}E⊆X be any conditional probability structure on X. For any events E and D in Ω, we define PrE(D) = Pr’E’(D’), where E’ is the set of all states x in X whose orbits lie in E, and D’ is the set of all states x in X whose orbits lie in D. Then {PrE}E⊆Ω is a conditional probability structure on Ω. Thus, Ω and {PrE}E⊆Ω together form a temporally evolving system. However, not every temporally evolving system arises in this way. Suppose the function φ (which maps from X into itself) is surjective, i.e., for all x in X, there is some y in X such that φ(y) = x. Then the set Ω of orbits is invariant under all time-shifts. Let {Pr’E}E⊆X be a conditional probability structure on X, and let {PrE}E⊆Ω be the conditional probability structure it induces on Ω. Suppose that {Pr’E}E⊆X is φ-invariant, i.e., for any subsets E and D of X, if E’ = φ−1(E) and D’ = φ−1(D), then Pr’E’(D’) = Pr’E(D). Then every time shift is a temporal symmetry of the resulting temporally evolving system. The study of dynamical systems equipped with invariant probability measures is the purview of ergodic theory.
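A concrete instance of this construction is the doubling map φ(x) = 2x mod 1 on X = [0, 1), which is surjective and preserves Lebesgue measure (a standard fact); the sketch below checks the φ-invariance condition empirically and illustrates a Birkhoff time average:

```python
import numpy as np

phi = lambda x: (2 * x) % 1.0            # the doubling map on X = [0, 1)

rng = np.random.default_rng(3)
x = rng.random(1_000_000)                # points sampled from Lebesgue measure

# phi-invariance: Pr'(phi^{-1}(E)) = Pr'(E). With E = [a, b), a point lies in
# phi^{-1}(E) iff phi(x) lies in E, so both frequencies should approach b - a.
a, b = 0.2, 0.5
print(np.mean((x >= a) & (x < b)))            # Pr'(E)            -> 0.3
print(np.mean((phi(x) >= a) & (phi(x) < b)))  # Pr'(phi^{-1}(E))  -> 0.3

# Ergodicity in action (Birkhoff average): time average of an observable along
# one orbit roughly matches its space average. Note: floating point collapses
# a doubling-map orbit after ~53 steps, so we keep the orbit short.
f = lambda t: np.sin(2 * np.pi * t) ** 2
x0, T = rng.random(), 50
orbit = [x0]
for _ in range(T - 1):
    orbit.append(phi(orbit[-1]))
print(np.mean(f(np.array(orbit))), f(x).mean())   # both near 0.5
```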

Meillassoux’s Principle of Unreason Towards an Intuition of the Absolute In-itself. Note Quote.


The principle of reason such as it appears in philosophy is a principle of contingent reason: not only does philosophical reason concern difference instead of identity, but the Principle of Sufficient Reason can also no longer be understood in terms of absolute necessity. In other words, Deleuze disconnects the Principle of Sufficient Reason from the ontotheological tradition no less than from its Heideggerian deconstruction. What remains, then, of Meillassoux’s criticism in After Finitude: An Essay on the Necessity of Contingency that Deleuze, no less than Hegel, hypostatizes or absolutizes the correlation between thinking and being, and thus brings back a vitalist version of speculative idealism through the back door?

At stake in Meillassoux’s criticism of the Principle of Sufficient Reason is a double problem: the conditions of possibility of thinking and knowing an absolute, and subsequently the conditions of possibility of rational ideology critique. The first problem is primarily epistemological: how can philosophy justify scientific knowledge claims about a reality that is anterior to our relation to it and that is hence not given in the transcendental object of possible experience (the arche-fossil)? This is a problem for all post-Kantian epistemologies that hold that we can only ever know the correlate of being and thought. Instead of confronting this weak correlationist position head on, however, Meillassoux seeks a solution in the even stronger correlationist position that denies not only the knowability of the in itself, but also its very thinkability or imaginability. Simplified: if strong correlationists such as Heidegger or Wittgenstein insist on the historicity or facticity (non-necessity) of the correlation of reason and ground in order to demonstrate the impossibility of thought’s self-absolutization, then the very force of their argument, if it is not to contradict itself, implies more than they are willing to accept: the necessity of the contingency of the transcendental structure of the for itself. As a consequence, correlationism is incapable of demonstrating itself to be necessary. This is what Meillassoux calls the principle of factiality or the principle of unreason. It says that it is possible to think of two things that exist independently of thought’s relation to them: contingency as such and the principle of non-contradiction. The principle of unreason thus enables the intellectual intuition of something that is absolutely in itself, namely the absolute impossibility of a necessary being. And this in turn implies the real possibility of the completely random and unpredictable transformation of all things from one moment to the next. Logically speaking, the absolute is thus a hyperchaos, or something akin to Time, in which nothing is impossible, unless it be necessary beings or necessary temporal experiences such as the laws of physics.

There is, moreover, nothing mysterious about this chaos. Contingency, and Meillassoux consistently refers to this as Hume’s discovery, is a purely logical and rational necessity, since without the principle of non-contradiction not even the principle of factiality would be absolute. It is thus a rational necessity that puts the Principle of Sufficient Reason out of action, since it would be irrational to claim that it is a real necessity, as everything that is is devoid of any reason to be as it is. This leads Meillassoux to the surprising conclusion that “[t]he Principle of Sufficient Reason is thus another name for the irrational… The refusal of the Principle of Sufficient Reason is not the refusal of reason, but the discovery of the power of chaos harboured by its fundamental principle (non-contradiction)” (Meillassoux 2007: 61). The principle of factiality thus legitimates or founds the rationalist requirement that reality be perfectly amenable to conceptual comprehension, at the same time that it opens up “[a] world emancipated from the Principle of Sufficient Reason” (Meillassoux) but founded only on that of non-contradiction.

This emancipation brings us to the practical problem Meillassoux tries to solve, namely the possibility of ideology critique. Correlationism is essentially a discourse on the limits of thought for which the deabsolutization of the Principle of Sufficient Reason marks reason’s discovery of its own essential inability to uncover an absolute. Thus if the Galilean-Copernican revolution of modern science meant the paradoxical unveiling of thought’s capacity to think what there is regardless of whether thought exists or not, then Kant’s correlationist version of the Copernican revolution was in fact a Ptolemaic counterrevolution. Since Kant and even more since Heidegger, philosophy has been adverse precisely to the speculative import of modern science as a formal, mathematical knowledge of nature. Its unintended consequence is therefore that questions of ultimate reasons have been dislocated from the domain of metaphysics into that of non-rational, fideist discourse. Philosophy has thus made the contemporary end of metaphysics complicit with the religious belief in the Principle of Sufficient Reason beyond its very thinkability. Whence Meillassoux’s counter-intuitive conclusion that the refusal of the Principle of Sufficient Reason furnishes the minimal condition for every critique of ideology, insofar as ideology cannot be identified with just any variety of deceptive representation, but is rather any form of pseudo-rationality whose aim is to establish that what exists as a matter of fact exists necessarily. In this way a speculative critique pushes skeptical rationalism’s relinquishment of the Principle of Sufficient Reason to the point where it affirms that there is nothing beneath or beyond the manifest gratuitousness of the given nothing, but the limitless and lawless power of its destruction, emergence, or persistence. Such an absolutizing even though no longer absolutist approach would be the minimal condition for every critique of ideology: to reject dogmatic metaphysics means to reject all real necessity, and a fortiori to reject the Principle of Sufficient Reason, as well as the ontological argument.

On the one hand, Deleuze’s criticism of Heidegger bears many similarities to that of Meillassoux when he redefines the Principle of Sufficient Reason in terms of contingent reason or with Nietzsche and Mallarmé: nothing rather than something such that whatever exists is a fiat in itself. His Principle of Sufficient Reason is the plastic, anarchic and nomadic principle of a superior or transcendental empiricism that teaches us a strange reason, that of the multiple, chaos and difference. On the other hand, however, the fact that Deleuze still speaks of reason should make us wary. For whereas Deleuze seeks to reunite chaotic being with systematic thought, Meillassoux revives the classical opposition between empiricism and rationalism precisely in order to attack the pre-Kantian, absolute validity of the Principle of Sufficient Reason. His argument implies a return to a non-correlationist version of Kantianism insofar as it relies on the gap between being and thought and thus upon a logic of representation that renders Deleuze’s Principle of Sufficient Reason unrecognizable, either through a concept of time, or through materialism.

Meillassoux, Deleuze, and the Ordinal Relation Un-Grounding Hyper-Chaos. Thought of the Day 41.0


As Heidegger demonstrates in Kant and the Problem of Metaphysics, Kant limits the metaphysical hypostatization of the logical possibility of the absolute by subordinating the latter to a domain of real possibility circumscribed by reason’s relation to sensibility. In this way he turns the necessary temporal becoming of sensible intuition into the sufficient reason of the possible. Instead, the anti-Heideggerian thrust of Meillassoux’s intellectual intuition is that it absolutizes the a priori realm of pure logical possibility and disconnects the domain of mathematical intelligibility from sensibility (Ray Brassier, “The Enigma of Realism”, in Robin Mackay (ed.), Collapse: Philosophical Research and Development – Speculative Realism). Hence the chaotic structure of his absolute time: anything is possible. Whereas real possibility is bound to correlation and temporal becoming, logical possibility is bound only by non-contradiction. It is a pure or absolute possibility that points to a radical diachronicity of thinking and being: we can think of being without thought, but not of thought without being.

Deleuze clearly situates himself in the former camp when he argues, with Kant and Heidegger, that time as pure auto-affection (folding) is the transcendental structure of thought. Whatever exists, in all its contingency, is grounded by the first two syntheses of time and ungrounded by the third, disjunctive synthesis in the implacable difference between past and future. For Deleuze, it is precisely the eternal return of the ordinal relation between what exists and what may exist that destroys necessity and guarantees contingency. As a transcendental empiricist, he thus agrees with the limitation of logical possibility to real possibility. On the one hand, he also agrees with Hume and Meillassoux that “[r]eality is not the result of the laws which govern it”. The law of entropy or degradation in thermodynamics, for example, is unveiled as nihilistic by Nietzsche’s eternal return, since it is based on a transcendental illusion in which difference [of temperature] is the sufficient reason of change only to the extent that the change tends to negate difference. On the other hand, Meillassoux’s absolute capacity-to-be-other relative to the given (Quentin Meillassoux, After Finitude: An Essay on the Necessity of Contingency, trans. Ray Brassier) falls away in the face of what is actual here and now. This is because although Meillassoux’s hyper-chaos may be like time, it also contains a tendency to undermine or even reject the significance of time. Thus one may wonder with Jon Roffe (Time and Ground: A Critique of Meillassoux) how time, as the sheer possibility of any future or different state of affairs, can provide the (non-)ground for the realization of this state of affairs in actuality. The problem is less that Meillassoux’s contingency is highly improbable than that his ontology includes no account of actual processes of transformation or development. As Peter Hallward has noted (in Levi Bryant, Nick Srnicek and Graham Harman (eds.), The Speculative Turn: Continental Materialism and Realism), the abstract logical possibility of change is an empty and indeterminate postulate, completely abstracted from all experience and worldly or material affairs. For this reason, the difference between Deleuze and Meillassoux seems to come down to what is more important (rather than what is more originary): the ordinal sequences of sensible intuition or the logical lack of reason.

But for Deleuze time as the creatio ex nihilo of pure possibility is not just irrelevant in relation to real processes of chaosmosis, which are both chaotic and probabilistic, molecular and molar. Rather, because it puts the Principle of Sufficient Reason as principle of difference out of real action, it is either meaningless with respect to the real or it can only have a negative or limitative function. This is why Deleuze replaces the possible/real opposition with that of virtual/actual. Whereas conditions of possibility always relate asymmetrically and hierarchically to any real situation, the virtual as sufficient reason is no less real than the actual, since it is first of all its unconditioned or unformed potential of becoming-other.

Universal Inclusion of the Void. Thought of the Day 38.0


The universal inclusion of the void means that the intersection between two sets whatsoever is comparable with the void set. That is to say, there is no multiple that does not include within it some part of the “inconsistency” that it structures. The diversity of multiplicity can exhibit multiple modes of articulation, but as multiples, they have nothing to do with one another, they are two absolutely heterogeneous presentations, and this is why this relation – of non-relation – can only be thought under the signifier of being (of the void), which indicates that the multiples in question have nothing in common apart from being multiples. The universal inclusion of the void thus guarantees the consistency of the infinite multiplicities immanent to its presentation. That is to say, it underlines the universal distribution of the ontological structure seized at the point of the axiom of the void set. The void does not merely constitute a consistency at a local point but also organises, from this point of difference, a universal structure that legislates on the structure of all sets, the universe of consistent multiplicity.
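The elementary set-theoretic fact underneath this is that the empty set is included in every set, vacuously:

```latex
% Universal inclusion of the void: for every set A,
\forall A \;:\; \varnothing \subseteq A,
\qquad\text{since}\qquad
\varnothing \subseteq A \;\equiv\; \forall x\,\bigl(x \in \varnothing \rightarrow x \in A\bigr)
% and the antecedent x ∈ ∅ is never true, so the implication holds vacuously.
% In particular ∅ ⊆ A ∩ B for any two sets A, B: every intersection is
% "comparable with" the void set.
```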

This final step, the carrying over of the void seized as a local point of the presentation of the unpresentable, to a global field of sets provides us with the universal point of difference, applicable equally to any number of sets, that guarantees the universal consistency of ontological presentation. In one sense, the universal inclusion of the void demonstrates that, as a unit of presentation, the void anchors the set theoretical universe by its universal inclusion. As such, every presentation in ontological thought is situated in this elementary seizure of ontological difference. The void is that which “fills” ontological or set theoretical presentation. It is what makes common the universe of sets. It is in this sense that the “substance” or constitution of ontology is the void. At the same stroke, however, the universal inclusion of the void also concerns the consistency of set theory in a logical sense.

The universal inclusion of the void provides an important synthesis of the consistency of presentation. What is presented is necessarily consistent but its consistency gives way to two distinct senses. Consistency can refer to its own “substance,” its immanent presentation. Distinct presentations constitute different presentations principally because “what” they present are different. Ontology’s particularity is its presentation of the void. On the other hand, a political site might present certain elements just as a scientific procedure might present yet others. The other sense of consistency is tied to presentation as such, the consistency of presentation in its generality. When one speaks loosely about the “world” being consistent, where natural laws are verifiable against a background of regularity, it is this consistency that is invoked and not the elements that constitute the particularity of their presentation. This sense of consistency, occurring across presentations would certainly take us beyond the particularity of ontology. That is to say, ontological presentation presents a species of this consistency. However, the possibility of multiple approaches does not exclude an ontological treatment of this consistency.