Time and World-Lines

Let γ: [s1, s2] → M be a smooth, future-directed timelike curve in M with tangent field ξa. We associate with it an elapsed proper time (relative to gab) given by

∥γ∥= ∫s1s2 (gabξaξb)1/2 ds

This elapsed proper time is invariant under reparametrization of γ and is just what we would otherwise describe as the length of (the image of) γ . The following is another basic principle of relativity theory:
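The integral can be checked numerically. The following is a minimal sketch in two-dimensional Minkowski spacetime with metric diag(+1, −1) and units c = 1 (the particular curve is my own illustrative choice, not from the text): it approximates ∥γ∥ by summing √(Δt² − Δx²) over a fine polygonal approximation, and confirms that the result is unchanged under a reparametrization of the curve.

```python
import numpy as np

def proper_time(t, x):
    """Elapsed proper time along a sampled timelike curve (t(s), x(s)) in
    2-D Minkowski spacetime with metric diag(+1, -1) and c = 1: the
    polygonal approximation to the integral of sqrt(dt^2 - dx^2)."""
    dt, dx = np.diff(t), np.diff(x)
    return float(np.sum(np.sqrt(dt**2 - dx**2)))

s = np.linspace(0.0, 1.0, 20001)        # curve parameter
t1, x1 = s, 0.5 * np.sin(s)             # timelike: |dx/dt| <= 0.5 < 1
u = s**2                                # a reparametrization s -> s^2
t2, x2 = u, 0.5 * np.sin(u)             # same image curve, new parameter

tau1, tau2 = proper_time(t1, x1), proper_time(t2, x2)
print(tau1, tau2)   # the two values agree to discretization error
```

The two sums approximate the same integral, illustrating the reparametrization invariance asserted above.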

Clocks record the passage of elapsed proper time along their world-lines.

Again, a number of qualifications and comments are called for. We have taken for granted that we know what “clocks” are. We have assumed that they have worldlines (rather than worldtubes). And we have overlooked the fact that ordinary clocks (e.g., the alarm clock on the nightstand) do not do well at all when subjected to extreme acceleration, tidal forces, and so forth. (Try smashing the alarm clock against the wall.) Again, these concerns are important and raise interesting questions about the role of idealization in the formulation of physical theory. (One might construe an “ideal clock” as a point-size test object that perfectly records the passage of proper time along its worldline, and then take the above principle to assert that real clocks are, under appropriate conditions and to varying degrees of accuracy, approximately ideal.) But they do not have much to do with relativity theory as such. Similar concerns arise when one attempts to formulate corresponding principles about clock behavior within the framework of Newtonian theory.

Now suppose that one has determined the conformal structure of spacetime, say, by using light rays. Then one can use clocks, rather than free particles, to determine the conformal factor.

Let g′ab be a second smooth metric on M, with g′ab = Ω2gab. Further suppose that the two metrics assign the same lengths to timelike curves – i.e., ∥γ∥g′ab = ∥γ∥gab ∀ smooth, timelike curves γ: I → M. Then Ω = 1 everywhere. (Here ∥γ∥gab is the length of γ relative to gab.)

Let ξoa be an arbitrary timelike vector at an arbitrary point p in M. We can certainly find a smooth, timelike curve γ: [s1, s2] → M through p whose tangent at p is ξoa. By our hypothesis, ∥γ∥g′ab = ∥γ∥gab. So, if ξa is the tangent field to γ,

∫s1s (g′ab ξaξb)1/2 ds = ∫s1s (gabξaξb)1/2 ds


∀ s in [s1, s2]. It follows that g′abξaξb = gabξaξb at every point on the image of γ. In particular, (g′ab − gab) ξoa ξob = 0 at p. But ξoa was an arbitrary timelike vector at p, and the timelike vectors at p form an open set on which the values of the quadratic form associated with (g′ab − gab) determine that symmetric tensor. So, g′ab = gab at our arbitrary point p.

The principle gives the whole story of relativistic clock behavior. In particular, it implies the path dependence of clock readings. If two clocks start at an event p and travel along different trajectories to an event q, then, in general, they will record different elapsed times for the trip. This is true no matter how similar the clocks are. (We may stipulate that they came off the same assembly line.) This is the case because, as the principle asserts, the elapsed time recorded by each of the clocks is just the length of the timelike curve it traverses from p to q and, in general, those lengths will be different.

Suppose we consider all future-directed timelike curves from p to q. It is natural to ask whether any of them minimizes or maximizes the recorded elapsed time between the events. The answer to the first question is “no.” Indeed, one has the following proposition:

Let p and q be events in M such that p ≪ q. Then, for all ε > 0, there exists a smooth, future-directed timelike curve γ from p to q with ∥γ∥ < ε. (But there is no such curve with length 0, since all timelike curves have non-zero length.)


If there is a smooth, timelike curve connecting p and q, there is also a jointed, zig-zag null curve connecting them. It has length 0. But we can approximate the jointed null curve arbitrarily closely with smooth timelike curves that swing back and forth. So (by the continuity of the length function), we should expect that, for all ε > 0, there is an approximating timelike curve that has length less than ε.
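This limiting behavior is easy to make concrete in coordinates. As a minimal numerical sketch (2-D Minkowski spacetime with c = 1; the specific sawtooth construction is my own illustration), the proper time of a zig-zag curve of constant speed v from (0, 0) to (T, 0) is T√(1 − v²) whatever the number of legs, and it can be made smaller than any ε by taking v close enough to 1:

```python
import numpy as np

def zigzag_proper_time(T, v, n_legs):
    """Proper time of a sawtooth timelike curve from (0, 0) to (T, 0) in
    2-D Minkowski spacetime (c = 1): n_legs segments of coordinate-time
    duration T/n_legs, alternating spatial velocity +v and -v (v < 1).
    For even n_legs the curve returns to x = 0."""
    dt = T / n_legs
    # Each leg contributes sqrt(dt^2 - (v*dt)^2) of proper time.
    return sum(np.sqrt(dt**2 - (v * dt)**2) for _ in range(n_legs))

for v in (0.9, 0.99, 0.9999):
    print(v, zigzag_proper_time(1.0, v, 100))   # shrinks toward 0 as v -> 1
```

Smoothing the corners changes the length only slightly, which is the content of the approximation argument above.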

The answer to the second question (“Can one maximize recorded elapsed time between p and q?”) is “yes” if one restricts attention to local regions of spacetime. In the case of positive definite metrics – i.e., ones with signature of form (n, 0) – geodesics are locally shortest curves. The corresponding result for Lorentzian metrics is that timelike geodesics are locally longest curves.

Let γ: I → M be a smooth, future-directed, timelike curve. Then γ can be reparametrized so as to be a geodesic iff ∀ s ∈ I there exists an open set O containing γ(s) such that, ∀ s1, s2 ∈ I with s1 ≤ s ≤ s2, if the image of γ′ = γ|[s1, s2] is contained in O, then γ′ (and its reparametrizations) are longer than all other timelike curves in O from γ(s1) to γ(s2). (Here γ|[s1, s2] is the restriction of γ to the interval [s1, s2].)

Of all clocks passing locally from p to q, the one that will record the greatest elapsed time is the one that “falls freely” from p to q. To get a clock to read a smaller elapsed time than the maximal value, one will have to accelerate the clock. Now, acceleration requires fuel, and fuel is not free. So the above proposition has the consequence that (locally) “saving time costs money.” And the proposition before it may be taken to imply that “with enough money one can save as much time as one wants.”

The restriction here to local regions of spacetime is essential. The connection described between clock behavior and acceleration does not, in general, hold on a global scale. In some relativistic spacetimes, one can find future-directed timelike geodesics connecting two events that have different lengths, and so clocks following the curves will record different elapsed times between the events even though both are in a state of free fall. Furthermore – this follows from the preceding claim by continuity considerations alone – it can be the case that of two clocks passing between the events, the one that undergoes acceleration during the trip records a greater elapsed time than the one that remains in a state of free fall. (A rolled-up version of two-dimensional Minkowski spacetime provides a simple example.)


Two-dimensional Minkowski spacetime rolled up into a cylindrical spacetime. Three timelike curves are displayed: γ1 and γ3 are geodesics; γ2 is not; γ1 is longer than γ2; and γ2 is longer than γ3.
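For the cylinder the geodesic lengths can be computed explicitly by unrolling into the covering Minkowski plane: a timelike geodesic from (0, 0) to (T, 0) that winds w times around a cylinder of circumference C corresponds to a straight chord to (T, wC). A minimal sketch (the numerical values are my own illustration):

```python
import numpy as np

def geodesic_length(T, C, windings):
    """Elapsed proper time along a timelike geodesic of the cylinder
    spacetime (2-D Minkowski rolled up, circumference C) joining (0, 0)
    to (T, 0) while winding `windings` times around the cylinder.
    Unrolled into the covering plane, the geodesic is the chord to
    (T, windings * C), of Minkowski length sqrt(T^2 - (windings*C)^2)."""
    disc = T**2 - (windings * C)**2
    if disc <= 0:
        raise ValueError("no timelike geodesic with that winding number")
    return float(np.sqrt(disc))

T, C = 5.0, 3.0
l0 = geodesic_length(T, C, 0)   # straight up the cylinder (like gamma_1)
l1 = geodesic_length(T, C, 1)   # winds once (like gamma_3)
print(l0, l1)                   # 5.0 and 4.0: two free-fall clocks disagree
```

With T = 5 and C = 3, the non-winding geodesic has length 5 while the once-winding geodesic has length 4, so two clocks in free fall record different elapsed times; an accelerated curve of intermediate length (like γ2) then out-lives the winding geodesic.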

The connection we have been considering between clock behavior and acceleration was once thought to be paradoxical. Recall the so-called “clock paradox.” Suppose two clocks, A and B, pass from one event to another in a suitably small region of spacetime. Further suppose A does so in a state of free fall but B undergoes acceleration at some point along the way. Then, we know, A will record a greater elapsed time for the trip than B. This was thought paradoxical because it was believed that relativity theory denies the possibility of distinguishing “absolutely” between free-fall motion and accelerated motion. (If we are equally well entitled to think that it is clock B that is in a state of free fall and A that undergoes acceleration, then, by parity of reasoning, it should be B that records the greater elapsed time.) The resolution of the paradox, if one can call it that, is that relativity theory makes no such denial. The situations of A and B here are not symmetric. The distinction between accelerated motion and free fall makes every bit as much sense in relativity theory as it does in Newtonian physics.

A “timelike curve” should be understood to be a smooth, future-directed, timelike curve parametrized by elapsed proper time – i.e., by arc length. In that case, the tangent field ξa of the curve has unit length (ξaξa = 1). And if a particle happens to have the image of the curve as its worldline, then, at any point, ξa is called the particle’s four-velocity there.


Metric. Part 1.


A (semi-Riemannian) metric on a manifold M is a smooth field gab on M that is symmetric and invertible; i.e., there exists an (inverse) field gbc on M such that gabgbc = δac.

The inverse field gbc of a metric gab is symmetric and unique. It is symmetric since

gcb = gnb δnc = gnb(gnm gmc) = (gmn gnb)gmc = δmb gmc = gbc

(Here we use the symmetry of gnm for the third equality.) It is unique because if g′bc is also an inverse field, then

g′bc = g′nc δnb = g′nc(gnm gmb) = (gmn g′nc) gmb = δmc gmb = gcb = gbc

(Here again we use the symmetry of gnm for the third equality; and we use the symmetry of gcb for the final equality.) The inverse field gbc of a metric gab is smooth. This follows, essentially, because given any invertible square matrix A (over R), the components of the inverse matrix A−1 depend smoothly on the components of A.
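In components this is elementary linear algebra, and easy to check numerically. A minimal sketch (the particular symmetric, invertible matrix is an arbitrary example of mine):

```python
import numpy as np

# An arbitrary symmetric, invertible metric in some coordinate basis.
g = np.array([[ 1.0, 0.2, 0.0],
              [ 0.2,-1.0, 0.1],
              [ 0.0, 0.1,-1.0]])

g_inv = np.linalg.inv(g)

# The inverse field is symmetric ...
assert np.allclose(g_inv, g_inv.T)
# ... and satisfies g_ab g^bc = delta_a^c.
assert np.allclose(g @ g_inv, np.eye(3))
```

The smooth dependence of g_inv on g mirrors the remark that the components of A⁻¹ depend smoothly on the components of A.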

The requirement that a metric be invertible can be given a second formulation. Indeed, given any field gab on the manifold M (not necessarily symmetric and not necessarily smooth), the following conditions are equivalent.

(1) There is a tensor field gbc on M such that gabgbc = δac.

(2) ∀ p in M, and all vectors ξa at p, if gabξa = 0, then ξa =0.

(When the conditions obtain, we say that gab is non-degenerate.) To see this, assume first that (1) holds. Then given any vector ξa at any point p, if gab ξa = 0, it follows that ξc = δac ξa = gbc gab ξa = 0. Conversely, suppose that (2) holds. Then at any point p, the map from (Mp)a to (Mp)b defined by ξa → gab ξa is an injective linear map. Since (Mp)a and (Mp)b have the same dimension, it must be surjective as well. So the map must have an inverse gbc defined by gbc(gab ξa) = ξc or gbc gab = δac.


In the presence of a metric gab, it is customary to adopt a notation convention for “lowering and raising indices.” Consider first the case of vectors. Given a contravariant vector ξa at some point, we write gab ξa as ξb; and given a covariant vector ηb, we write gbc ηb as ηc. The notation is evidently consistent in the sense that first lowering and then raising the index of a vector (or vice versa) leaves the vector intact.

One would like to extend this notational convention to tensors with more complex index structure. But now one confronts a problem. Given a tensor αcab at a point, for example, how should we write gmc αcab? As αmab? Or as αamb? Or as αabm? In general, these three tensors will not be equal. To get around the problem, we introduce a new convention. In any context where we may want to lower or raise indices, we shall write indices, whether contravariant or covariant, in a particular sequence. So, for example, we shall write αabc or αacb or αcab. (These tensors may be equal – they belong to the same vector space – but they need not be.) Clearly this convention solves our problem. We write gmc αabc as αabm; gmc αacb as αamb; and so forth. No ambiguity arises. (And it is still the case that if we first lower an index on a tensor and then raise it (or vice versa), the result is to leave the tensor intact.)

We claimed in the preceding paragraph that the tensors αabc and αacb (at some point) need not be equal. Here is an example. Suppose ξ1a, ξ2a, … , ξna is a basis for the tangent space at a point p. Further suppose αabc = ξia ξjb ξkc at the point. Then αacb = ξia ξjc ξkb. Hence, lowering indices, we have αabc = ξia ξjb ξkc but αacb = ξia ξjc ξkb at p. These two will not be equal unless j = k.
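The point is easy to verify in components with an array library. A minimal sketch using numpy's einsum (the random tensor and the orthonormal-frame metric are my illustrative choices); index positions are tracked by slot order, exactly as the convention above prescribes:

```python
import numpy as np

rng = np.random.default_rng(0)
g = np.diag([1.0, -1.0, -1.0])          # Lorentzian metric, orthonormal frame
g_inv = np.linalg.inv(g)
alpha = rng.normal(size=(3, 3, 3))      # alpha[a, b, c] = alpha^a_{bc}, no symmetry

# Lower the contravariant index into the first slot, then compare slot orders:
alpha_abc = np.einsum('ma,mbc->abc', g, alpha)   # alpha_{abc}
alpha_acb = np.einsum('abc->acb', alpha_abc)     # alpha_{acb}

# Generically the two lowered tensors differ.
assert not np.allclose(alpha_abc, alpha_acb)

# Raising the lowered index recovers the original tensor.
alpha_back = np.einsum('ma,abc->mbc', g_inv, alpha_abc)
assert np.allclose(alpha_back, alpha)
```

The round trip at the end mirrors the remark that first lowering and then raising an index leaves the tensor intact.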

We have reserved special notation for two tensor fields: the index substitution field δba and the Riemann curvature field Rabcd (associated with some derivative operator). Our convention will be to write these as δab and Rabcd – i.e., with contravariant indices before covariant ones. As it turns out, the order does not matter in the case of the first, since δab = δba. (It does matter with the second.) To verify the equality, it suffices to observe that the two fields have the same action on an arbitrary field αb:

δbaαb = (gbngamδnm)αb = gbnganαb = gbngnaαb = δabαb

Now suppose gab is a metric on the n-dimensional manifold M and p is a point in M. Then there exists an m, with 0 ≤ m ≤ n, and a basis ξ1a, ξ2a,…, ξna for the tangent space at p such that

gabξia ξib = +1 if 1≤i≤m

gabξiaξib = −1 if m<i≤n

gabξiaξjb = 0 if i ≠ j

Such a basis is called orthonormal. Orthonormal bases at p are not unique, but all have the same associated number m. We call the pair (m, n − m) the signature of gab at p. (The existence of orthonormal bases and the invariance of the associated number m are basic facts of linear algebraic life.) A simple continuity argument shows that the signature must be the same at every point of a connected manifold. We shall henceforth restrict attention to connected manifolds and refer simply to the “signature of gab”.

A metric with signature (n, 0) is said to be positive definite. With signature (0, n), it is said to be negative definite. With any other signature it is said to be indefinite. A Lorentzian metric is a metric with signature (1, n − 1). The mathematics of relativity theory is, to some degree, just a chapter in the theory of four-dimensional manifolds with Lorentzian metrics.
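Computationally, the signature can be read off from eigenvalue signs: by Sylvester's law of inertia, the counts of positive and negative eigenvalues of the component matrix agree with the counts obtained from any orthonormal basis. A minimal numpy sketch (the example matrices are mine):

```python
import numpy as np

def signature(g):
    """Signature (m, n - m) of a symmetric non-degenerate matrix g, read
    off from the signs of its eigenvalues (Sylvester's law of inertia)."""
    eig = np.linalg.eigvalsh(g)
    m = int(np.sum(eig > 0))
    return (m, len(eig) - m)

minkowski = np.diag([1.0, -1.0, -1.0, -1.0])
assert signature(minkowski) == (1, 3)        # Lorentzian
assert signature(np.eye(4)) == (4, 0)        # positive definite

# The signature is invariant under a change of basis g -> A^T g A:
A = np.array([[2.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 3.0],
              [0.0, 0.0, 0.0, 1.0]])
assert signature(A.T @ minkowski @ A) == (1, 3)
```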

Suppose gab has signature (m, n − m), and ξ1a, ξ2a, . . . , ξna is an orthonormal basis at a point. Further, suppose μa and νa are vectors there. If

μa = ∑i=1n μi ξia and νa = ∑i=1n νi ξia, then it follows from the linearity of gab that

gabμa νb = μ1ν1 +…+ μmνm − μ(m+1)ν(m+1) −…−μnνn.

In the special case where the metric is positive definite, this comes to

gabμaνb = μ1ν1 +…+ μnνn

And where it is Lorentzian,

gab μaνb = μ1ν1 − μ2ν2 −…− μnνn

Metrics and derivative operators are not just independent objects; in a quite natural sense, a metric determines a unique derivative operator.

Suppose gab and ∇ are both defined on the manifold M. Further suppose

γ: I → M is a smooth curve on M with tangent field ξa and λa is a smooth field on γ. Both ∇ and gab determine a criterion of “constancy” for λa: λa is constant with respect to ∇ if ξn∇nλa = 0, and it is constant with respect to gab if gabλaλb is constant along γ – i.e., if ξn∇n(gabλaλb) = 0. It seems natural to consider pairs gab and ∇ for which the first condition of constancy implies the second. Let us say that ∇ is compatible with gab if, for all γ and λa as above, λa is constant with respect to gab whenever it is constant with respect to ∇.

Dialectics: Mathematico-Philosophical Sequential Quantification. Drunken Risibility.


Figure: Graphical representation of the quantification of dialectics.

A sequence S of P philosophers along a given period of time would incorporate the P most prominent and visible philosophers in that interval. Using such a criterion to build the time-sequence implies that the time-intervals between subsequent entries are not necessarily uniform.

The set of C measurements used to characterize the philosophers defines a C-dimensional feature space which will be henceforth referred to as the philosophical space. The characteristic vector v⃗i of each philosopher i defines a respective philosophical state in the philosophical space. Given a set of P philosophers, the average state at time i, i ≤ P, is defined as

a⃗i = 1/i ∑k=1i v⃗k

The opposite state of a given philosophical state v⃗i is defined as:

r⃗i = v⃗i +2(a⃗i −v⃗i) = 2a⃗i − v⃗i

The opposition vector of philosophical state v⃗i is given by D⃗i = r⃗i − v⃗i. The opposition amplitude of that same state is defined as ||D⃗i||.

An emphasis move taking place from the philosophical state v⃗i is any displacement from v⃗i along the direction −r⃗i. A contrary move from the philosophical state v⃗i is any displacement from v⃗i along the direction r⃗i.
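These definitions translate directly into array operations. A minimal sketch with toy data (the three two-dimensional "philosophers" are invented purely for illustration):

```python
import numpy as np

def average_state(V, i):
    """Average philosophical state a_i over the first i characteristic vectors."""
    return V[:i].mean(axis=0)

def opposite_state(V, i):
    """Opposite state r_i = 2 a_i - v_i of the i-th philosopher (1-based i)."""
    return 2.0 * average_state(V, i) - V[i - 1]

# Three philosophers in a 2-D philosophical space (toy data).
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

r2 = opposite_state(V, 2)          # reflection of v_2 through a_2
D2 = r2 - V[1]                     # opposition vector of state v_2
print(r2, np.linalg.norm(D2))      # opposition amplitude ||D_2||
```

Here a⃗2 = (0.5, 0.5), so the opposite state of v⃗2 = (0, 1) is r⃗2 = (1, 0) and the opposition amplitude is √2.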

Given a time-sequence S of P philosophers, the philosophical move implied by two successive philosophers i and j corresponds to the vector M⃗i,j extending from v⃗i to v⃗j, i.e.

M⃗i,j = v⃗j – v⃗i

In principle, an innovative or differentiated philosophical move would be one that departs substantially from the current philosophical state v⃗i. Innovation moves can be decomposed into two main subtypes: opposition and skewness.

The opposition index Wi,j of a given philosophical move M⃗i,j is defined as

Wi,j = 〈M⃗i,j, D⃗i〉 / ||D⃗i||2

This index quantifies the intensity of opposition of the respective philosophical move, in the sense of having a large projection along the vector D⃗i. It should also be noticed that the repetition of opposition moves leads to little innovation, as it would imply an oscillation around the average state. The skewness index si,j of that same philosophical move is the distance between v⃗j and the line Li defined by the vector D⃗i, and therefore quantifies how much the new philosophical state departs from the respective opposition move. Indeed, a sequence of moves with zero skewness would represent rather trivial oscillations along the opposition line Li.

We also suggest an index to quantify the dialectics between a triple of successive philosophers i, j and k. More specifically, the philosophical state v⃗i is understood as the thesis, the state v⃗j is taken as the antithesis, and the synthesis is associated with the state v⃗k. The hypothesis that k is the consequence, among other forces, of a dialectics between the views v⃗i and v⃗j can be expressed by the requirement that the philosophical state v⃗k be located near the middle line MLi,j defined by the thesis and the antithesis (i.e. the points which are at an equal distance to both v⃗i and v⃗j), relative to the opposition amplitude ||D⃗i||.

Therefore, the counter-dialectic index is defined as

ρi→k = di→k /||M⃗i,j||

where di→k is the distance between the philosophical state v⃗k and the middle line MLi,j between v⃗i and v⃗j. Note that di→k ≥ 0. The choice of counter-dialectics instead of dialectics is justified to maintain compatibility with the use of a distance from point to line as adopted for the definition of skewness…
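A minimal implementation of the three indices follows (in dimensions above two I read the "middle line" MLi,j as the hyperplane of points equidistant from v⃗i and v⃗j, which is an assumption on my part; the toy vectors are invented):

```python
import numpy as np

def opposition_index(M, D):
    """W = <M, D> / ||D||^2 : projection of the move M onto the opposition vector D."""
    return float(np.dot(M, D) / np.dot(D, D))

def skewness_index(v_j, v_i, D):
    """Distance from v_j to the line through v_i with direction D."""
    M = v_j - v_i
    proj = np.dot(M, D) / np.dot(D, D) * D
    return float(np.linalg.norm(M - proj))

def counter_dialectic_index(v_i, v_j, v_k):
    """rho = d / ||M_ij||, with d the distance from v_k to the set of
    points equidistant from v_i and v_j (the hyperplane through their
    midpoint with normal M_ij)."""
    M = v_j - v_i
    midpoint = 0.5 * (v_i + v_j)
    d = abs(np.dot(v_k - midpoint, M)) / np.linalg.norm(M)
    return d / np.linalg.norm(M)

v_i = np.array([0.0, 0.0])
v_j = np.array([2.0, 0.0])
v_k = np.array([1.0, 5.0])          # equidistant from v_i and v_j
print(counter_dialectic_index(v_i, v_j, v_k))   # 0.0: perfect "dialectics"
```

A synthesis lying exactly on the middle line yields a counter-dialectic index of zero, as in the example.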

Production Function as a Growth Model


Any science is tempted by the naive attitude of describing its object of enquiry by means of input-output representations, regardless of state. Typically, microeconomics describes the behavior of firms by means of a production function:

y = f(x) —– (1)

where x ∈ Rp is a p × 1 vector of production factors (the input) and y ∈ Rq is a q × 1 vector of products (the output).

Both y and x are flows expressed in terms of physical magnitudes per unit time. Thus, they may refer to both goods and services.

Clearly, (1) is independent of state. Economics knows state variables as capital, which may take the form of financial capital (the financial assets owned by a firm), physical capital (the machinery owned by a firm) and human capital (the skills of its employees). These variables should appear as arguments in (1).

This is done in the Georgescu-Roegen production function, which may be expressed as follows:

y= f(k,x) —– (2)

where k ∈ Rm is an m × 1 vector of capital endowments, measured in physical magnitudes. Without loss of generality, we may assume that the first mp elements represent physical capital, the subsequent mh elements represent human capital and the last mf elements represent financial capital, with mp + mh + mf = m.

Contrary to input and output flows, capital is a stock. Physical capital is measured by physical magnitudes such as the number of machines of a given type. Human capital is generally proxied by educational degrees. Financial capital is measured in monetary terms.

Georgescu-Roegen called the stocks of capital funds, to be contrasted to the flows of products and production factors. Thus, Georgescu-Roegen’s production function is also known as the flows-funds model.

Georgescu-Roegen’s production function is little known and seldom used, but macroeconomics often employs aggregate production functions of the following form:

Y = f(K,L) —– (3)

where Y ∈ R is aggregate income, K ∈ R is aggregate capital and L ∈ R is aggregate labor. Though this connection is never made, (3) is a special case of (2).
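The standard concrete instance of (3) is the Cobb-Douglas function Y = A Kᵅ L^(1−α), which is not named in the text above but is the usual workhorse. A minimal sketch (parameter values mine) computing the partial derivatives to which, on the general equilibrium view, the interest rate and the wage are proportional:

```python
import numpy as np

def cobb_douglas(K, L, A=1.0, alpha=0.3):
    """Aggregate production function Y = A * K**alpha * L**(1 - alpha)."""
    return A * K**alpha * L**(1.0 - alpha)

def marginal_products(K, L, A=1.0, alpha=0.3, h=1e-6):
    """Numerical partial derivatives dY/dK and dY/dL (central differences)."""
    dK = (cobb_douglas(K + h, L, A, alpha) - cobb_douglas(K - h, L, A, alpha)) / (2 * h)
    dL = (cobb_douglas(K, L + h, A, alpha) - cobb_douglas(K, L - h, A, alpha)) / (2 * h)
    return dK, dL

K, L = 4.0, 9.0
Y = cobb_douglas(K, L)
dK, dL = marginal_products(K, L)
# Closed forms: dY/dK = alpha * Y / K and dY/dL = (1 - alpha) * Y / L.
assert abs(dK - 0.3 * Y / K) < 1e-6
assert abs(dL - 0.7 * Y / L) < 1e-6
```

For a smooth function like this, the mixed second derivatives are automatically symmetric; the difficulty discussed below only arises once heterogeneous capital stocks enter as separate arguments.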

The examination of (3) highlighted a fundamental difficulty. In fact, general equilibrium theory requires that the remunerations of production factors are proportional to the corresponding partial derivatives of the production function. In particular, the wage must be proportional to ∂f/∂L and the interest rate must be proportional to ∂f/∂K. These partial derivatives are uniquely determined if df is an exact differential.

If the production function is (1), this translates into requiring that:

∂2f/∂xi∂xj = ∂2f/∂xj∂xi ∀ i, j —– (4)

which are surely satisfied because all xi are flows, so they can easily be reverted. If the production function is expressed by (2) with m = 1, the following conditions must be added to (4):

∂2f/∂k∂xi = ∂2f/∂xi∂k ∀ i —– (5)

Conditions (5) are still surely satisfied because there is only one capital good. However, if m > 1, the following conditions must be added to (4):

∂2f/∂ki∂xj = ∂2f/∂xj∂ki ∀ i, j —– (6)

∂2f/∂ki∂kj = ∂2f/∂kj∂ki ∀ i, j —– (7)

Conditions (6) and (7) are not necessarily satisfied because each derivative depends on all stocks of capital ki. In particular, conditions (6) and (7) do not hold if, after capital ki has been accumulated in order to use technique i, capital kj is accumulated in order to use technique j but, subsequently, production reverts to technique i. This possibility, known as the reswitching of techniques, undermines the validity of general equilibrium theory.

For many years, the reswitching of techniques has been regarded as a theoretical curiosum. However, the recent resurgence of coal as a source of energy may be regarded as an instance of reswitching.

Finally, it should be noted that, as in any input-state-output representation, (2) must be complemented by the dynamics of the state variables:

k̇ = g(k, x, y) —– (8)

which updates the vector k in (2), making it dependent on time. In the case of the aggregate production function (3), (8) combines with (3) to constitute a growth model.
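As a minimal sketch of such a growth model (the Solow-style specialization k̇ = s kᵅ − δ k, with my own parameter choices; the text itself does not fix a form for g), Euler integration of (8) drives capital toward the steady state where s kᵅ = δ k:

```python
import numpy as np

def solow_step(k, s=0.2, delta=0.05, alpha=0.3, dt=0.1):
    """One Euler step of k_dot = s * k**alpha - delta * k: capital
    accumulation from saved output, Cobb-Douglas technology, depreciation."""
    return k + dt * (s * k**alpha - delta * k)

k = 1.0
for _ in range(20000):
    k = solow_step(k)

# Steady state: s * k**alpha = delta * k  =>  k* = (s/delta)**(1/(1-alpha)).
k_star = (0.2 / 0.05) ** (1.0 / (1.0 - 0.3))
print(k, k_star)   # the iteration converges to k_star
```

Because the Euler map has the same fixed point as the continuous dynamics, the iterate settles on k* exactly, up to floating-point error.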

Bernard Cache’s Earth Moves: The Furnishing of Territories (Writing Architecture)


Take the concept of singularity. In mathematics, what is said to be singular is not a given point, but rather a set of points on a given curve. A point is not singular; it becomes singularized on a continuum. And several types of singularity exist, starting with fractures in curves and other bumps in the road. We will discount them at the outset, for singularities that are marked by discontinuity signal events that are exterior to the curvature and are themselves easily identifiable. In the same way, we will eliminate singularities such as backup points [points de rebroussement]. For though they are indeed discontinuous, they refer to a vector that is tangential to the curve and thus trace a symmetrical axis that is constitutive of the backup point. Whether it be a reflection of the tangential plane or a rebound with respect to the orthogonal plane, the backup point is thus not a basic singularity. It is rather the result of an operation effectuated on any part of the curve. Here again, the singular would be the sign of too noisy, too memorable an event, while what we want to do is to deal with what is most smooth: ordinary continua, sleek and polished.

On one hand there are the extrema, the maximum and minimum on a given curve. And on the other there are those singular points that, in relation to the extrema, figure as in-betweens. These are known as points of inflection. They are different from the extrema in that they are defined only in relation to themselves, whereas the definition of the extrema presupposes the prior choice of an axis or an orientation, that is to say of a vector.

Indeed, a maximum or a minimum is a point where the tangent to the curve is directed perpendicularly to the axis of the ordinates [y-axis]. Any new orientation of the coordinate axes repositions the maxima and the minima; they are thus extrinsic singularities. The point of inflection, however, designates a pure event of curvature where the tangent crosses the curve; yet this event does not depend in any way on the orientation of the axes, which is why it can be said that inflection is an intrinsic singularity. On either side of the inflection, we know that there will be a highest point and a lowest point, but we cannot designate them as long as the curve has not been related to the orientation of a vector. Points of inflection are singularities in and of themselves, while they confer an indeterminacy to the rest of the curve. Preceding the vector, inflection makes of each of the points a possible extremum in relation to its inverse: virtual maxima and minima. In this way, inflection represents a totality of possibilities, as well as an openness, a receptiveness, or an anticipation…
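The contrast between extrinsic extrema and the intrinsic point of inflection can be made concrete numerically. A minimal sketch (the curve t ↦ (t, t³ − 3t) and the rotation angle are my own choices): extrema are detected as zero crossings of dy/dt, inflections as zero crossings of the curvature numerator x′y″ − y′x″, which is invariant under rotation of the axes.

```python
import numpy as np

t = np.linspace(-2.0, 2.0, 4000)            # grid chosen to avoid exact zeros
x, y = t.copy(), t**3 - 3.0 * t             # inflection at t = 0, extrema at t = +/-1

def rotate(x, y, theta):
    c, s = np.cos(theta), np.sin(theta)
    return c * x - s * y, s * x + c * y

def sign_changes(f, t):
    """Parameter values where the sampled function f changes sign."""
    idx = np.where(f[:-1] * f[1:] < 0)[0]
    return t[idx]

def inflections(x, y, t):
    # Zero crossings of x' y'' - y' x'' (vanishing curvature): intrinsic.
    xp, yp = np.gradient(x, t), np.gradient(y, t)
    xpp, ypp = np.gradient(xp, t), np.gradient(yp, t)
    return sign_changes(xp * ypp - yp * xpp, t)

def extrema(x, y, t):
    # Zero crossings of dy/dt (horizontal tangent): relative to the y-axis.
    return sign_changes(np.gradient(y, t), t)

xr, yr = rotate(x, y, 0.5)
print(inflections(x, y, t), inflections(xr, yr, t))   # both near t = 0
print(extrema(x, y, t), extrema(xr, yr, t))           # near +/-1, then shifted
```

Rotating the axes moves the extrema to new parameter values but leaves the single inflection fixed: the numerical counterpart of "extrinsic" versus "intrinsic" singularity.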


Biogrammatic Vir(Ac)tuality. Note Quote.

In Foucault’s most famous example, the prison acts as the confluence of content (prisoners) and expression (law, penal code) (Gilles Deleuze, Seán Hand – Foucault). Informal diagrams proliferate. As abstract machines they contain the transversal vectors that cut across a panoply of features (such as institutions, classes, persons, economic formations, etc.), mapping from point to relational point the generalized features of power economies. The disciplinary diagram explored by Foucault imposes “a particular conduct upon a particular human multiplicity”. The imposition of force upon force affects and effectuates the felt experience of a life, a living. Deleuze has called the abstract machine “pure matter/function” in which relations between forces are nonetheless very real.

[…] the diagram acts as a non-unifying immanent cause that is co-extensive with the whole social field: the abstract machine is like the cause of the concrete assemblages that execute its relations; and these relations between forces take place ‘not above’ but within the very tissue of the assemblages they produce.

The processual conjunction of content and expression; the cutting edge of deterritorialization:

The relations of power and resistance between theory and practice resonate – becoming-form; diagrammatics as praxis, integrates and differentiates the immanent cause and quasi-cause of the actualized occasions of research/creation. What do we mean by immanent cause? It is a cause which is realized, integrated and distinguished in its effect. Or rather, the immanent cause is realized, integrated and distinguished by its effect. In this way there is a correlation or mutual presupposition between cause and effect, between abstract machine and concrete assemblages

Memory is the real name of the relation to oneself, or the affect of self by self […] Time becomes a subject because it is the folding of the outside…forces every present into forgetting but preserves the whole of the past within memory: forgetting is the impossibility of return and memory is the necessity of renewal.


The figure on the left is Henri Bergson’s diagram of an infinitely contracted past that directly intersects with the body at point S – a mobile, sensorimotor present where memory is closest to action. Plane P represents the actual present; the plane of contact with objects. The AB segments represent repetitive compressions of memory. As memory contracts it gets closer to action. In its more expanded forms it is closer to dreams. The figure on the right extrapolates from Bergson’s memory model to describe the Biogrammatic ontological vector of the Diagram as it moves from abstract (informal) machine in the most expanded form “A” through the cone “tissue” to the phase-shifting (formal), arriving at the Strata of the P plane to become artefact. The ontological vector passes through the stratified, through the interval of difference created in the phase shift (the same phase shift that separates and folds content and expression to move vertically, transversally, back through to the abstract diagram.)

A spatio-temporal-material contracting-expanding of the abstract machine is the processual thinking-feeling-articulating of the diagram becoming-cartographic; synaesthetic conceptual mapping. A play of forces, a series of relays, affecting a tendency toward an inflection of the informal diagram becoming-form. The inflected diagram/biogram folds and unfolds perception, appearances; rides in the gap of becoming between content and expression; intuitively transduces the actualizing (thinking, drawing, marking, erasing) of matter-movement, of expressivity-movement. “To follow the flow of matter… is intuition in action.” A processual stage that prehends the process of the virtual actualizing;

the creative construction of a new reality. The biogrammatic stage of the diagrammatic is paradoxically double in that it is both the actualizing of the abstract machine (contraction) and the recursive counter-actualization of the formal diagram (détournement); virtual and actual.

It is the event-dimension of potential – that is the effective dimension of the interrelating of elements, of their belonging to each other. That belonging is a dynamic corporeal “abstraction” – the “drawing off” (transductive conversion) of the corporeal into its dynamism (yielding the event) […] In direct channeling. That is, in a directional channeling: ontological vector. The transductive conversion is an ontological vector that in-gathers a heterogeneity of substantial elements along with the already-constituted abstractions of language (“meaning”) and delivers them together to change. (Brian Massumi Parables for the Virtual Movement, Affect, Sensation)

Skin is the space of the body, the BwO, that is interior and exterior. Interstitial matter of the space of the body.


The material markings and traces of a diagrammatic process, a ‘capturing’ becoming-form. A diagrammatic capturing involves a transductive process between a biogrammatic form of content and a form of expression. The formal diagram is thus an individuating phase-shift as Simondon would have it, always out-of-phase with itself. A becoming-form that inhabits the gap, the difference, between the wave phase of the biogrammatic that synaesthetically draws off the intermix of substance and language in the event-dimension and the drawing of wave phase in which partial capture is formalized. The phase shift difference never acquires a vectorial intention. A pre-decisive, pre-emptive drawing of phase-shifting with a “drawing off” the biogram.


If effects realize something this is because the relations between forces, or power relations, are merely virtual, potential, unstable, vanishing and molecular, and define only possibilities of interaction so long as they do not enter a macroscopic whole capable of giving form to their fluid matter and diffuse function. But realization is equally an integration, a collection of progressive integrations that are initially local and then become or tend to become global, aligning, homogenizing and summarizing relations between forces: here law is the integration of illegalisms.

 

Stationarity or Homogeneity of Random Fields


Let (Ω, F, P) be a probability space on which all random objects will be defined. A filtration {Ft : t ≥ 0} of σ-algebras, is fixed and defines the information available at each time t.

Random field: A real-valued random field is a family of random variables Z(x) indexed by x ∈ Rd together with a collection of distribution functions of the form Fx1,…,xn which satisfy

Fx1,…,xn(b1,…,bn) = P[Z(x1) ≤ b1,…,Z(xn) ≤ bn], b1,…,bn ∈ R

The mean function of Z is m(x) = E[Z(x)] whereas the covariance function and the correlation function are respectively defined as

R(x, y) = E[Z(x)Z(y)] − m(x)m(y)

c(x, y) = R(x, y)/√(R(x, x)R(y, y))

Notice that the covariance function of a random field Z is a non-negative definite function on Rd × Rd, that is if x1, . . . , xk is any collection of points in Rd, and ξ1, . . . , ξk are arbitrary real constants, then

∑l=1k ∑j=1k ξlξj R(xl, xj) = ∑l=1k ∑j=1k ξlξj E[Z(xl) Z(xj)] = E[(∑j=1k ξj Z(xj))2] ≥ 0

Here, without loss of generality, we have assumed m = 0. The property of non-negative definiteness characterizes covariance functions. Hence, given any function m : Rd → R and a non-negative definite function R : Rd × Rd → R, it is always possible to construct a random field for which m and R are the mean and covariance function, respectively.
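As a numerical sketch of this (my construction, with numpy and, as an assumed example, the squared-exponential covariance R(x, y) = exp(−|x − y|²/2)), one can verify non-negative definiteness on a finite set of points and sample a zero-mean Gaussian field with this covariance:

```python
import numpy as np

# Illustration of non-negative definiteness and of constructing a zero-mean
# Gaussian field with a prescribed covariance.  The squared-exponential
# covariance R(x, y) = exp(-|x - y|^2 / 2) is an assumed example.
def R(x, y):
    return np.exp(-0.5 * (x - y) ** 2)

rng = np.random.default_rng(0)
points = rng.uniform(-3.0, 3.0, size=10)        # arbitrary x_1, ..., x_k in R
K = R(points[:, None], points[None, :])         # Gram matrix K_lj = R(x_l, x_j)

# sum_{l,j} xi_l xi_j R(x_l, x_j) >= 0 for arbitrary real xi_1, ..., xi_k
for _ in range(100):
    xi = rng.normal(size=10)
    assert xi @ K @ xi >= -1e-10

# With m = 0 and covariance R, a Gaussian field with these moments is obtained
# by sampling the vector (Z(x_1), ..., Z(x_k)) from N(0, K).
Z = rng.multivariate_normal(np.zeros(10), K)
print(Z.shape)  # one sample of the field at the 10 chosen points
```

A Gaussian field is of course only one field with these first two moments; the construction in the text is more general.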

Bochner’s Theorem: A continuous function R from Rd to the complex plane is non-negative definite if and only if it is the Fourier–Stieltjes transform of a measure F on Rd, that is, the representation

R(x) = ∫Rd eix·λ dF(λ)

holds for x ∈ Rd. Here, x·λ denotes the scalar product ∑k=1d xkλk, and F is a bounded, real-valued set function satisfying ∫A dF(λ) ≥ 0 ∀ measurable A ⊂ Rd.
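This can be illustrated numerically in d = 1 (my example, not the text's): the Gaussian covariance R(x) = exp(−x²/2) has the non-negative spectral density f(λ) = exp(−λ²/2)/√(2π), and R is recovered as the Fourier–Stieltjes transform of that measure:

```python
import numpy as np

# Bochner's theorem illustrated for d = 1 with the assumed Gaussian covariance
# R(x) = exp(-x^2 / 2).  Its spectral measure dF has the non-negative density
# f(lam) = exp(-lam^2 / 2) / sqrt(2 pi).
lam = np.linspace(-10.0, 10.0, 4001)
dlam = lam[1] - lam[0]
f = np.exp(-lam ** 2 / 2) / np.sqrt(2 * np.pi)   # density of F, f >= 0

def R_from_spectrum(x):
    # R(x) = int e^{i x lam} dF(lam); the sine part cancels since f is even
    return np.sum(np.cos(x * lam) * f) * dlam

for x in [0.0, 0.5, 1.0, 2.0]:
    assert abs(R_from_spectrum(x) - np.exp(-x ** 2 / 2)) < 1e-6
```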

The cross covariance function is defined as R12(x, y) = E[Z1(x)Z2(y)] − m1(x)m2(y), where m1 and m2 are the respective mean functions. Obviously, R12(x, y) = R21(y, x). A family of processes Zι with ι belonging to some index set I can be considered as a process in the product space (Rd, I).

A central concept in the study of random fields is that of homogeneity or stationarity. A random field is homogeneous or (second-order) stationary if E[Z(x)2] is finite ∀ x and

• m(x) ≡ m is independent of x ∈ Rd

• R(x, y) solely depends on the difference x − y

Thus we may consider R(h) = Cov(Z(x), Z(x+h)) = E[Z(x) Z(x+h)] − m2, h ∈ Rd,

and call R the covariance function of Z. In this case, the covariance and correlation functions are related by

c(h) = R(h)/R(0)

i.e. c(h) ∝ R(h). For this reason, attention is usually confined to either c or R. Two stationary random fields Z1, Z2 are stationarily correlated if their cross covariance function R12(x, y) depends on the difference x − y only. The two random fields are uncorrelated if R12 vanishes identically.
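As a numerical sketch of stationarity (my construction, sampling with numpy and using the exponential covariance R(h) = exp(−|h|) as an assumed example), one can sample such a field on a grid and confirm that the empirical covariance depends only on the lag h, not on the location x:

```python
import numpy as np

# Sketch: sample a second-order stationary Gaussian field on a 1-d grid,
# with the exponential covariance R(h) = exp(-|h|) as an assumed example,
# and check that the empirical covariance depends on the lag h only.
rng = np.random.default_rng(1)
x = 0.5 * np.arange(20)                            # grid x_0, ..., x_19
K = np.exp(-np.abs(x[:, None] - x[None, :]))       # K_ij = R(x_i - x_j)
Z = rng.multivariate_normal(np.zeros(len(x)), K, size=100_000)  # m = 0

lag = 4                                            # h = 4 * 0.5 = 2.0
true = np.exp(-2.0)                                # R(h) at h = 2.0
est_at_0 = np.mean(Z[:, 0] * Z[:, 0 + lag])        # Cov(Z(x_0), Z(x_0 + h))
est_at_5 = np.mean(Z[:, 5] * Z[:, 5 + lag])        # Cov(Z(x_5), Z(x_5 + h))
print(est_at_0, est_at_5, true)  # all approximately equal
```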

An interesting special class of homogeneous random fields that often arise in practice is the class of isotropic fields. These are characterized by the property that the covariance function R depends only on the length ∥h∥ of the vector h:

R(h) = R(∥h∥) .

In many applications, random fields are considered as functions of “time” and “space”. In this case, the parameter set is most conveniently written as (t,x) with t ∈ R+ and x ∈ Rd. Such processes are often homogeneous in (t, x) and isotropic in x in the sense that

E[Z(t, x)Z(t + h, x + y)] = R(h, ∥y∥) ,

where R is a function from R2 into R. In such a situation, the covariance function can be written as

R(t, ∥x∥) = ∫R ∫0∞ eitu Hd(λ∥x∥) dG(u, λ),

where

Hd(r) = (2/r)^{(d−2)/2} Γ(d/2) J(d−2)/2(r)

and Jm is the Bessel function of the first kind of order m and G is a multiple of a distribution function on the half plane {(λ,u)|λ ≥ 0,u ∈ R}.
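As a quick consistency check on this kernel (using scipy, my choice of tool), Hd reduces to classical closed forms in low dimension: H1(r) = cos r and H3(r) = sin(r)/r, since J−1/2(r) = √(2/(πr)) cos r and J1/2(r) = √(2/(πr)) sin r:

```python
import numpy as np
from scipy.special import gamma, jv

# The kernel H_d(r) = (2/r)^{(d-2)/2} Gamma(d/2) J_{(d-2)/2}(r) from the
# spectral representation, checked against its classical closed forms.
def H(d, r):
    nu = (d - 2) / 2
    return (2.0 / r) ** nu * gamma(d / 2) * jv(nu, r)

r = np.linspace(0.1, 10.0, 100)
assert np.allclose(H(1, r), np.cos(r))         # d = 1: H_1(r) = cos r
assert np.allclose(H(3, r), np.sin(r) / r)     # d = 3: H_3(r) = sin(r)/r
```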

Fock Space

Fock space is just another separable infinite-dimensional Hilbert space (and so isomorphic to all its separable infinite-dimensional brothers). But the key is writing it down in a fashion that suggests a particle interpretation. In particular, suppose that H is the one-particle Hilbert space, i.e. the state space for a single particle. Now depending on whether our particle is a Boson or a Fermion, the state space of a pair of these particles is either Es(H ⊗ H) or Ea(H ⊗ H), where Es is the projection onto the vectors invariant under the permutation ΣH,H on H ⊗ H, and Ea is the projection onto vectors that change sign under ΣH,H. For present purposes, we ignore these differences, and simply use H ⊗ H to denote one possibility or the other. Now, proceeding down the line, for n particles, we have the Hilbert space Hn ≡ H ⊗ · · · ⊗ H (n factors), etc.

A state in Hn is definitely a state of n particles. To get disjunctive states, we make use of the direct sum operation “⊕” on Hilbert spaces. So we define the Fock space F(H) over H as the infinite direct sum:

F(H) = C ⊕ H ⊕ (H ⊗ H) ⊕ (H ⊗ H ⊗ H) ⊕ · · · .

So, the state vectors in Fock space include a state where there are no particles (the vector lies in the first summand), a state where there is one particle, a state where there are two particles, etc. Furthermore, there are states that are superpositions of different numbers of particles.

One can spend time worrying about what it means to say that particle numbers can be superposed. But that is the “half empty cup” point of view. From the “half full cup” point of view, it makes sense to count particles. Indeed, the positive (unbounded) operator

N = 0 ⊕ 1 ⊕ 2 ⊕ 3 ⊕ 4 ⊕ · · · ,

is the formal element of our model that permits us to talk about the number of particles.
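A toy illustration (my construction: truncate at three particles with a one-dimensional one-particle space H = C, so that each summand Hn is just C and F(H) is finite-dimensional):

```python
import numpy as np

# Toy model: truncate the Fock space F(H) at three particles, with a
# one-dimensional one-particle space H = C, so that F(H) = C^4 with
# basis |0>, |1>, |2>, |3> (zero, one, two or three particles).
N = np.diag([0.0, 1.0, 2.0, 3.0])      # number operator 0 (+) 1 (+) 2 (+) 3

vacuum = np.array([1.0, 0.0, 0.0, 0.0])
assert np.allclose(N @ vacuum, 0.0)    # the vacuum contains no particles

# An equal superposition of 0 and 2 particles is not an eigenvector of N,
# but it still has a well-defined expected particle number.
psi = np.array([1.0, 0.0, 1.0, 0.0]) / np.sqrt(2)
expected_n = psi @ N @ psi
print(expected_n)  # approximately 1.0
```

On the full, untruncated Fock space the same operator is unbounded, which is why the text flags it as such.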

In the category of Hilbert spaces, all separable Hilbert spaces are isomorphic – there is no difference between Fock space and the single particle space. If we are not careful, we could become confused about the bearer of the name “Fock space.”

The confusion goes away when we move to the appropriate category. According to Wigner’s analysis, a particle corresponds to an irreducible unitary representation of the identity component P of the Poincaré group. Then the single particle space and Fock space are distinct objects in the category of representations of P. The underlying Hilbert spaces of the two representations are both separable (and hence isomorphic as Hilbert spaces); but the two representations are most certainly not equivalent (one is irreducible, the other reducible).

C∗-algebras and their Representations

Definition. A C∗-algebra is a pair consisting of a ∗-algebra A and a norm

∥ · ∥ : A → R such that
∥AB∥ ≤ ∥A∥ · ∥B∥, ∥A∗A∥ = ∥A∥2,

∀ A, B ∈ A, with A complete in this norm. We usually use A to denote both the algebra and the pair (A, ∥ · ∥).

Definition. A state ω on A is a linear functional such that ω(A∗A) ≥ 0 ∀ A ∈ A, and ω(I) = 1.

Definition. A state ω of A is said to be mixed if ω = (1/2)(ω1 + ω2) with ω1 ≠ ω2. Otherwise ω is said to be pure.

Definition. Let A be a C∗-algebra. A representation of A is a pair (H, π), where H is a Hilbert space and π is a ∗-homomorphism of A into B(H), the algebra of bounded operators on H. A representation (H, π) is said to be irreducible if π(A) is weakly dense in B(H). A representation (H, π) is said to be faithful if π is an isomorphism.

Definition. Let (H, π) and (K, φ) be representations of a C∗-algebra A. Then (H, π) and (K, φ) are said to be:

  1. unitarily equivalent if there is a unitary U : H → K such that Uπ(A) = φ(A)U for all A ∈ A.
  2. quasiequivalent if the von Neumann algebras π(A)′′ and φ(A)′′ are ∗-isomorphic.
  3. disjoint if they are not quasiequivalent.

Definition. A representation (K, φ) is said to be a subrepresentation of (H, π) just in case there is an isometry V : K → H such that π(A)V = Vφ(A) ∀ A ∈ A.

Two representations are quasiequivalent iff they are unitarily equivalent up to multiplicity.

The Gelfand-Naimark-Segal (GNS) theorem shows that every C∗-algebraic state can be represented by a vector in a Hilbert space.

Theorem:

(GNS). Let ω be a state of A. Then there is a representation (H, π) of A, and a unit vector Ω ∈ H such that:

1. ω(A) = ⟨Ω, π(A)Ω⟩, ∀ A ∈ A;

2. π(A)Ω is dense in H.

Furthermore, the representation (H, π) is the unique one (up to unitary equivalence) satisfying the two conditions.

Proof:

We construct the Hilbert space H from equivalence classes of elements of A, and the representation π is given by the action of left multiplication. In particular, define a bounded sesquilinear form on A by setting

⟨A, B⟩ω = ω(A∗B), A, B ∈ A.
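A finite-dimensional sketch of this form (my example: A = M₂(C) with a vector state, not anything from the text) shows that it is positive semidefinite but in general degenerate, which is why the construction quotients by the null vectors before completing:

```python
import numpy as np

# Finite-dimensional sketch of the GNS form: take A = M_2(C) and the vector
# state w(A) = <psi, A psi> (an assumed example).  The form <A, B>_w = w(A* B)
# is positive semidefinite on A but degenerate, so one quotients by its
# null space before completing to the GNS Hilbert space.
psi = np.array([1.0, 0.0], dtype=complex)

def w(A):
    return np.vdot(psi, A @ psi)          # the state omega

# Basis of M_2(C): the matrix units E_00, E_01, E_10, E_11.
basis = []
for i, j in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    E = np.zeros((2, 2), dtype=complex)
    E[i, j] = 1.0
    basis.append(E)

# Gram matrix of the sesquilinear form <A, B>_w = w(A* B).
G = np.array([[w(A.conj().T @ B) for B in basis] for A in basis])

eigs = np.linalg.eigvalsh(G)
assert np.all(eigs > -1e-12)              # positive semidefinite, as required
print(np.round(eigs.real, 12))            # two zero eigenvalues: null vectors to quotient out
```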

When is a Spacetime Temporally Orientable?

In both general relativity and Newtonian gravitation, forces are represented by vectors at a point. We assume that the total force acting on a particle at a point (computed by taking the vector sum of all of the individual forces acting at that point) must be proportional to the acceleration of the particle at that point, as in F = ma, which holds in both theories. We understand forces to give rise to acceleration, and so we expect the total force at a point to vanish just in case the acceleration vanishes. Since the acceleration of a curve at a point, as determined relative to some derivative operator, must satisfy certain properties, it follows that the vector representing total force must also satisfy certain properties. In particular, in relativity theory, the acceleration of a curve at a point is always orthogonal to the tangent vector of the curve at that point, and thus the total force on a particle at a point must always be orthogonal to the tangent vector of the particle’s worldline at that point.
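A minimal numerical sketch of this orthogonality claim, assuming the Minkowski metric with signature (+, −, −, −) and the standard constant-proper-acceleration worldline (both choices mine, for illustration):

```python
import numpy as np

# Check, in Minkowski spacetime with the assumed signature (+, -, -, -), that
# the acceleration of a timelike curve is orthogonal to its unit tangent.
# Worldline of constant proper acceleration: x(tau) = (sinh tau, cosh tau, 0, 0).
g = np.diag([1.0, -1.0, -1.0, -1.0])

for tau in np.linspace(-2.0, 2.0, 9):
    u = np.array([np.cosh(tau), np.sinh(tau), 0.0, 0.0])   # tangent dx/dtau
    a = np.array([np.sinh(tau), np.cosh(tau), 0.0, 0.0])   # acceleration du/dtau
    assert abs(u @ g @ u - 1.0) < 1e-12    # unit timelike tangent: g_ab u^a u^b = 1
    assert abs(u @ g @ a) < 1e-12          # orthogonality: g_ab u^a a^b = 0
```

The orthogonality follows in general from differentiating g_ab u^a u^b = 1 along the curve; the worldline above is just one concrete instance.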

More precisely, we take a model of relativity theory to be a relativistic spacetime, which is an ordered pair (M, gab), where M is a smooth, connected, paracompact, Hausdorff 4-manifold and gab is a smooth Lorentzian metric. A model of Newtonian gravitation, meanwhile, is a classical spacetime, which is an ordered quadruple (M, tab, hab, ∇), where M is again a smooth, connected, paracompact, Hausdorff 4-manifold, tab and hab are smooth fields with signatures (1, 0, 0, 0) and (0, 1, 1, 1), respectively, which together satisfy tabhbc = 0, and ∇ is a smooth derivative operator satisfying the compatibility conditions ∇atbc = 0 and ∇ahab = 0. The fields tab and hab may be interpreted as a (degenerate) “temporal metric” and a (degenerate) “spatial metric”, respectively. Note that the signature of tab guarantees that locally, we can always find a field ta such that tab = tatb. In the special case where this field can be smoothly extended to a global field with the stated property, we call the spacetime temporally orientable.
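The compatibility condition tabhbc = 0 and the decomposition tab = tatb can be illustrated in an adapted coordinate basis (an assumed choice of frame, for illustration only):

```python
import numpy as np

# The degenerate metrics of a classical spacetime, written in an adapted
# coordinate basis (an assumed choice of frame, for illustration only):
# t_ab has signature (1, 0, 0, 0) and h^ab has signature (0, 1, 1, 1).
t_lower = np.diag([1.0, 0.0, 0.0, 0.0])   # temporal metric t_ab
h_upper = np.diag([0.0, 1.0, 1.0, 1.0])   # spatial metric h^ab

# Compatibility condition t_ab h^bc = 0: the two metrics are orthogonal.
assert np.allclose(t_lower @ h_upper, 0.0)

# The signature of t_ab allows the local decomposition t_ab = t_a t_b.
t_a = np.array([1.0, 0.0, 0.0, 0.0])
assert np.allclose(np.outer(t_a, t_a), t_lower)
```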