Canonical Fibrations on Geodesics


There is a realisation of the canonical fibrations of flag manifolds that serves to introduce a twistor space. For this, assume that G is of adjoint type (i.e. has trivial centre) and let ΩG denote the infinite-dimensional manifold of based loops in G: the loop group. In fact ΩG is a Kähler manifold and may be viewed as a flag manifold ΛGC/P, where ΛGC is the group of loops in GC and P is the subgroup of those loops that extend holomorphically to the disc. We have various fibrations ρλ: ΩG → G given by evaluation at λ ∈ S1, and in some ways ρ−1 behaves like a canonical fibration, making ΩG into a universal twistor space for G. It is a theorem of Uhlenbeck that any harmonic map of S2 into G is of the form ρ−1 ◦ Φ for some “super-horizontal” holomorphic map Φ : S2 → ΩG.

The flag manifolds of G embed in ΩG as conjugacy classes of geodesics and we find a particular embedding of this kind using the canonical element. Indeed, our assumption that G be centre-free means that exp 2πξ = e for any canonical element ξ. Thus if F = G/H = GC/P is a flag manifold with ξ the canonical element of p, we may define a map Γ: F → ΩG by setting

Γ(eH) = (e√−1t ↦ exp tξ)

and extending by equivariance. Moreover, if N is the inner symmetric space associated to F, we have a totally geodesic immersion γ : N → G defined by setting γ(x) equal to the element of G that generates the involution at x. We now have:

Γ: F → ΩG is a totally geodesic, holomorphic, isometric immersion and the following diagram commutes

[Commutative diagram; a reconstruction is given below.]

where π1 is a canonical fibration. Thus we have a realisation of the canonical fibrations as the trace of ρ−1 on certain conjugacy classes of geodesics.
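The diagram itself did not survive; the following is a plausible reconstruction in LaTeX, with the arrangement of the arrows inferred from the statements above (Γ and π₁ out of F, ρ₋₁ and γ into G):

\[
\begin{array}{ccc}
F & \xrightarrow{\;\Gamma\;} & \Omega G \\
{\scriptstyle \pi_1}\,\big\downarrow & & \big\downarrow\,{\scriptstyle \rho_{-1}} \\
N & \xrightarrow{\;\gamma\;} & G
\end{array}
\qquad\text{i.e.}\qquad
\rho_{-1}\circ\Gamma \;=\; \gamma\circ\pi_1 .
\]

At the identity coset this is consistent with the definitions above: ρ−1(Γ(eH)) = exp πξ, which is the element of G generating the involution at the point π1(eH) of N.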


Reclaim Modernity: Beyond Markets, Beyond Machines (Mark Fisher & Jeremy Gilbert)


It is understandable that the mainstream left has traditionally been suspicious of anti-bureaucratic politics. The Fabian tradition has always believed – has been defined by its belief – in the development and extension of an enlightened bureaucracy as the main vehicle of social progress. Attacking ‘bureaucracy’ has been – since at least the 1940s – a means by which the Right has attacked the very idea of public service and collective action. Since the early days of Thatcherism, there has been very good reason to become nervous whenever someone attacks bureaucracy, because such attacks are almost invariably followed by plans not for democratisation, but for privatisation.

Nonetheless, it is precisely this situation that has produced a certain paralysis of the Left in the face of one of its greatest political opportunities, an opportunity which it can only take if it can learn to speak an anti-bureaucratic language with confidence and conviction. On the one hand, this is a simple populist opportunity to unite constituencies within both the public and private sectors: simple, but potentially strategically crucial. As workers in both sectors and as users of public services, the public dislike bureaucracy and apparent over-regulation. The Left misses an enormous opportunity if it fails to capitalise on this dislike and transform it into a set of democratic demands.

On the other hand, anti-bureaucratism marks one of the critical points of failure and contradiction in the entire neoliberal project. For the truth is that neoliberalism has not kept its promise in this regard. It has not reduced the interference of managerial mechanisms and apparently pointless rules and regulations in the working life of public-sector professionals, or of public-service users, or of the vast majority of workers in the private sector. In fact it has led in many cases to an enormous proliferation and intensification of just these processes. Targets, performance indicators, quantitative surveys and managerial algorithms dominate more of life today than ever before, not less. The only people who really suffer less regulation than they did in the past are the agents of finance capital: banks, traders, speculators and fund managers.

Where de-regulation is a reality for most workers is not in their working lives as such, but in the removal of those regulations which once protected their rights to secure work, and to a decent life outside of work (pensions, holidays, leave entitlements, etc.). The precarious labour market is not a zone of freedom for such workers, but a space in which the fact of precarity itself becomes a mechanism of discipline and regulation. It only becomes a zone of freedom for those who already have enough capital to be able to choose when and where to work, or to benefit from the hyper-mobility and enforced flexibility of contemporary capitalism.


The Concern for Historical Materialism. Thought of the Day 53.0


The concern for historical materialism, in spite of Marx’s differentiation between history and pre-history, is that totalisation might not be historically groundable after all, and must instead be constituted in other ways: whether logically, transcendentally or naturally. The ‘Consciousness’ chapter of the Phenomenology, a blend of all three, becomes a transcendent(al) logic of phenomena – individual, universal, particular – and ceases to provide any genuine phenomenology of ‘the experience of consciousness’. Natural consciousness is not strictly speaking a standpoint (no real opposition), so it can offer no critical grounds of itself to confer synthetic unity upon the universal, that which is taken to a higher level in ‘Self-Consciousness’ (only to be retrospectively confirmed). Yet Hegel does just this from the outset. In ‘Perception’, we read that, ‘[o]n account of the universality [Allgemeinheit] of the property, I must … take the objective essence to be on the whole a community [Gemeinschaft]’. Universality always sides with community, the Allgemeine with the Gemeinschaft, as if the synthetic operation had taken place prior to its very operability. Unfortunately for Hegel, the ‘free matters’ of all possible properties pave the way for the ‘interchange of forces’ in ‘Force and the Understanding’, and hence infinity, life and – spirit. In the midst of the master-slave dialectic, Hegel admits that, ‘[i]n this movement we see repeated the process which represented itself as the play of forces, but repeated now in consciousness [sic]’.

Iain Hamilton Grant’s Schelling in Opposition to Fichte. Note Quote.


The stated villain of Philosophies of Nature is not Hegelianism but rather ‘neo-Fichteanism’. Grant’s ‘Philosophies of Nature After Schelling’ takes up the task of graduating Schelling beyond the accoutrements of Kantian and Fichtean narrow transcendentalism, freeing him from the inertness and mechanicality that Kant and Fichte had attributed to nature. This move shows the Deleuzean influence on Grant. Manuel De Landa makes a vociferous case in this regard: according to De Landa, Deleuze rubbished the inertness of matter by seeking a morphogenesis of form, thereby launching a new kind of materialism. This is Deleuze’s anti-essentialist position. Essentialism says that matter and energy are inert, that they have no morphogenetic capabilities and cannot give rise to new forms on their own; disciplines like complexity theory and non-linear dynamics, by contrast, do give matter its autonomy over inertness, its capabilities in terms of charge. But the book’s account of the relationship between Fichte and Schelling actually obscures the rich meaning of speculation in Hegel and after. Grant quite accurately recalls that Schelling confronted Fichte’s identification of the ‘not I’ with passive nature – the consequence of identifying all free activity with the ‘I’ alone. For Grant, that which Jacobi termed ‘speculative egotism’ becomes the nightmare of modern philosophy and of technological modernity at large. The ecological concern is never quite made explicit in Philosophies of Nature. Yet Grant’s introduction to Schelling’s On the World Soul helps to contextualise the meaning of his ‘geology of morals’.

What we miss from Grant’s critique of Fichte is the manner by which the corrective, positive characterisation of nature proceeds from Schelling’s confirmation of Fichte’s rendering of the fact of consciousness (Tatsache) into the act of consciousness (Tathandlung). Schelling, as a consequence, becomes singularly critical of contemplative speculation, since activity now implies working on nature and thereby changing it – along with it, we might say – rather than either simply observing it or even experimenting upon it.

In fact, Grant reads Schelling only in opposition to Fichte, with drastic consequences for his speculative realism: it is the post-Fichtean element of Schelling’s naturephilosophy that allows for the new sense of speculation he will share with Hegel – even though both will indeed turn this against Kant and Fichte. Without this account, we are left with the older, contemplative understanding of metaphysical speculation, which leads to a certain methodologism in Grant’s study. Hence, ‘the principle method of naturephilosophy consists in “unconditioning” the phenomena’. Relatedly, Meillassoux defines the ‘speculative’ as ‘every type of thinking’ – not acting – ‘that claims to be able to access some form of absolute’.

In direct contrast to this approach, the collective ‘system programme’ of Hegel, Schelling and Hölderlin was not a programme for thinking alone. Their revolutionised sense of speculation, from contemplation of the stars to reform of the worldly, is overlooked by today’s speculative realism – a philosophy that ‘refuses to interrogate reality through human (linguistic, cultural or political) mediations of it’. We recall that Kant similarly could not extend his Critique to speculative reason precisely on account of his contemplative determination of pure reason (in terms of the hierarchical gap between reason and the understanding). Grant’s ‘geology of morals’ does not oppose ‘Kanto-Fichtean philosophy’, as he has it, but rather remains structurally within the sphere of Kant’s pre-political metaphysics.

Quantum Energy Teleportation. Drunken Risibility.


Time is one of the most difficult concepts in physics. It enters the equations in a rather artificial way – as an external parameter. Although strictly speaking time is a quantity that we measure, it is not possible in quantum physics to define a time-observable in the same way as for the other quantities that we measure (position, momentum, etc.). The intuition that we have about time is that of a uniform flow, as suggested by the regular ticks of clocks. Time flows undisturbed by the variety of events that may occur in an irregular pattern in the world. Similarly, the quantum vacuum is the most regular state one can think of. For example, a persistent superconducting current flows at a constant speed – essentially forever. Can one then use the quantum vacuum as a clock? This is a fascinating dispute in condensed-matter physics, formulated as the problem of the existence of time crystals. A time crystal, by analogy with a crystal in space, is a system that displays a time-regularity under measurement, while being in the ground (vacuum) state.

Then, if there is an energy (the zero-point energy) associated with empty space, it follows via the special theory of relativity that this energy should correspond to an inertial mass. By the principle of equivalence of the general theory of relativity, inertial mass is identical with the gravitational mass. Thus, empty space must gravitate. So, how much does empty space weigh? This question brings us to the frontiers of our knowledge of vacuum – the famous problem of the cosmological constant, a problem that Einstein was wrestling with, and which is still an open issue in modern cosmology.

Finally, although we cannot locally extract the zero-point energy of the vacuum fluctuations, the vacuum state of a field can be used to transfer energy from one place to another by using only information. This protocol has been called quantum energy teleportation and uses the fact that different spatial regions of a quantum field in the ground state are entangled. It then becomes possible to extract energy locally from the vacuum by making a measurement in one place, then communicating the result to an experimentalist in a spatially remote region, who would then be able to extract energy by making an appropriate (depending on the result communicated) measurement on her or his local vacuum. This suggests that the vacuum is the primordial essence, the ousia from which everything came into existence.

Potential Synapses. Thought of the Day 52.0

For a neuron to recognize a pattern of activity it requires a set of co-located synapses (typically fifteen to twenty) that connect to a subset of the cells that are active in the pattern to be recognized. Learning to recognize a new pattern is accomplished by the formation of a set of new synapses co-located on a dendritic segment.


Figure: Learning by growing new synapses. Learning in an HTM neuron is modeled by the growth of new synapses from a set of potential synapses. A “permanence” value is assigned to each potential synapse and represents the growth of the synapse. Learning occurs by incrementing or decrementing permanence values. The synapse weight is a binary value set to 1 if the permanence is above a threshold.

Figure shows how we model the formation of new synapses in a simulated Hierarchical Temporal Memory (HTM) neuron. For each dendritic segment we maintain a set of “potential” synapses between the dendritic segment and other cells in the network that could potentially form a synapse with the segment. The number of potential synapses is larger than the number of actual synapses. We assign each potential synapse a scalar value called “permanence” which represents stages of growth of the synapse. A permanence value close to zero represents an axon and dendrite with the potential to form a synapse but that have not commenced growing one. A 1.0 permanence value represents an axon and dendrite with a large fully formed synapse.

The permanence value is incremented and decremented using a Hebbian-like rule. If the permanence value exceeds a threshold, such as 0.3, then the weight of the synapse is 1; if the permanence value is at or below the threshold, then the weight of the synapse is 0. The threshold represents the establishment of a synapse, albeit one that could easily disappear. A synapse with a permanence value of 1.0 has the same effect as a synapse with a permanence value at threshold but is not as easily forgotten. Using a scalar permanence value enables on-line learning in the presence of noise. A previously unseen input pattern could be noise or it could be the start of a new trend that will repeat in the future. By growing new synapses, the network can start to learn a new pattern when it is first encountered, but only act differently after several presentations of the new pattern. Increasing permanence beyond the threshold means that patterns experienced more than others will take longer to forget.
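As a minimal sketch (in Python; not the reference HTM implementation, and all names and constants other than the 0.3 threshold mentioned above are illustrative), the permanence update and the binary synapse weight could look like this:

# Sketch of the permanence-based synapse model described above (illustrative only).

PERMANENCE_THRESHOLD = 0.3   # a potential synapse counts as connected above this
PERMANENCE_INCREMENT = 0.05  # Hebbian reinforcement for active presynaptic cells
PERMANENCE_DECREMENT = 0.02  # slow decay for inactive potential synapses

def synapse_weight(permanence):
    """Binary weight: 1 if the potential synapse has formed, 0 otherwise."""
    return 1 if permanence > PERMANENCE_THRESHOLD else 0

def update_permanences(permanences, active_cells):
    """Hebbian-like update of one dendritic segment's potential synapses.

    permanences  -- dict mapping presynaptic cell id -> permanence in [0.0, 1.0]
    active_cells -- set of presynaptic cell ids active at this time step
    """
    updated = {}
    for cell, p in permanences.items():
        if cell in active_cells:
            p = min(1.0, p + PERMANENCE_INCREMENT)   # grow towards a full synapse
        else:
            p = max(0.0, p - PERMANENCE_DECREMENT)   # gradually forget
        updated[cell] = p
    return updated

# A new pattern (cells 1 and 2) only yields connected synapses after several
# presentations, while the unused potential synapse on cell 3 slowly decays.
permanences = {1: 0.20, 2: 0.20, 3: 0.10}
for _ in range(3):
    permanences = update_permanences(permanences, active_cells={1, 2})
print({cell: synapse_weight(p) for cell, p in permanences.items()})  # {1: 1, 2: 1, 3: 0}

Running the example shows how a pattern must be presented a few times before permanences cross the threshold and the corresponding weights switch to 1, while unused potential synapses decay.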

HTM neurons and HTM networks rely on distributed patterns of cell activity, thus the activation strength of any one neuron or synapse is not very important. Therefore, in HTM simulations we model neuron activations and synapse weights with binary states. Additionally, it is well known that biological synapses are stochastic, so a neocortical theory cannot require precision of synaptic efficacy. Although scalar states and weights might improve performance, they are not required from a theoretical point of view.

Creation of a Bacterial Cell Controlled by a Chemically Synthesized Genome


The design, synthesis and assembly of the 1.08-Mbp Mycoplasma mycoides JCVI-syn1.0 genome, starting from digitized genome sequence information, and its transplantation into a Mycoplasma capricolum recipient cell created new Mycoplasma mycoides cells that are controlled only by the synthetic chromosome. The only DNA in the cells is the designed synthetic DNA sequence, including “watermark” sequences and other designed gene deletions and polymorphisms, and mutations acquired during the building process. The new cells have expected phenotypic properties and are capable of continuous self-replication.

Categorial Functorial Monads


Algebraic constructs (A,U), such as Vec, Grp, Mon, and Lat, can be fully described by the following data, called the monad associated with (A,U):

1. the functor T : Set → Set, where T = U ◦ F and F : Set → A is the associated free functor,

2. the natural transformation η : idSet → T formed by universal arrows, and

3. the natural transformation μ : T ◦ T → T given by the unique homomorphism μX : T(TX) → TX that extends idTX.

In these cases, there is a canonical concrete isomorphism K between (A,U) and the full concrete subcategory of Alg(T) consisting of those T-algebras x : TX → X that satisfy the equations x ◦ ηX = idX and x ◦ Tx = x ◦ μX. The latter subcategory is called the Eilenberg-Moore category of the monad (T, η, μ). The above observation makes it possible, in the following four steps, to express the “degree of algebraic character” of arbitrary concrete categories that have free objects:

Step 1: With every concrete category (A,U) over X that has free objects (or, more generally, with every adjoint functor U : A → X) one can associate, in an essentially unique way, an adjoint situation (η, ε) : F ⊣ U : A → X.

Step 2: With every adjoint situation (η, ε) : F ⊣ U : A → X one can associate a monad T = (T, η, μ) on X, where T = U ◦ F : X → X.

Step 3: With every monad T = (T, η, μ) on X one can associate a concrete subcategory of Alg(T) denoted by (XT, UT) and called the category of T-algebras.

Step 4: With every concrete category (A,U) over X that has free objects one can associate a distinguished concrete functor K : (A,U) → (XT, UT) into the associated category of T-algebras, called the comparison functor for (A,U).
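To make the data (T, η, μ) and the two T-algebra equations concrete, here is a toy sketch in Python (illustrative only, not part of the source text) for the construct Mon: the free functor sends a set to the free monoid on it, so TX is modelled as lists over X, η wraps an element in a one-element list, μ concatenates, and summation of natural numbers gives an Eilenberg-Moore algebra:

# Toy model of the monad associated with Mon: T X = lists over X (free monoid).

def T(f):
    """Action of the functor T on a morphism f: map f over a list."""
    return lambda xs: [f(x) for x in xs]

def eta(x):
    """Unit eta_X : X -> TX, given by the universal arrow x |-> [x]."""
    return [x]

def mu(xss):
    """Multiplication mu_X : T(TX) -> TX, concatenation of a list of lists."""
    return [x for xs in xss for x in xs]

def alg(xs):
    """An Eilenberg-Moore algebra x : TX -> X for X = natural numbers: sum."""
    return sum(xs)

# The two T-algebra equations from the text, checked on sample data:
assert alg(eta(7)) == 7                      # x ∘ eta_X = id_X
xss = [[1, 2], [], [3, 4, 5]]
assert alg(T(alg)(xss)) == alg(mu(xss))      # x ∘ Tx = x ∘ mu_X

Here alg realises the monoid (ℕ, +, 0), in line with the fact that the Eilenberg-Moore category of this monad recovers Mon.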

Concrete categories that are concretely isomorphic to a category of T-algebras for some monad T have a distinct “algebraic flavor”. Such categories (A,U) and their forgetful functors U are called monadic. It turns out that a concrete category (A,U) is monadic iff it has free objects and its associated comparison functor K : (A,U) → (XT, UT) is an isomorphism. Thus, for concrete categories (A,U) that have free objects, the associated comparison functor can be considered as a means of measuring the “algebraic character” of (A,U); and the associated category of T-algebras can be considered to be the “algebraic part” of (A,U). In particular,

(a) every finitary variety is monadic,

(b) the category TopGrp, considered as a concrete category

  1. over Top, is monadic,
  2. over Set, is not monadic; the associated comparison functor is the forgetful functor TopGrp → Grp, so that the construct Grp may be considered as the “algebraic part” of the construct TopGrp,

(c) the construct Top is not monadic; the associated comparison functor is the forgetful functor Top → Set itself, so that the construct Set may be considered as the “algebraic part” of the construct Top; hence the construct Top may be considered as having a trivial “algebraic part”.

Among constructs, monadicity captures the idea of “algebraicness” rather well. Unfortunately, however, the behavior of monadic categories in general is far from satisfactory. Monadic functors can fail badly to reflect properties of the base category (e.g., the existence of colimits or of suitable factorization structures), and they are not closed under composition.

Of Magnitudes, Metrization and Materiality of Abstracto-Concrete Objects.


The possibility of introducing magnitudes in a certain domain of concrete material objects is by no means immediate, granted or elementary. First of all, it is necessary to find a property of such objects that makes it possible to compare them, so that a quasi-serial ordering can be introduced on their set, that is, a total linear ordering not excluding that more than one object may occupy the same position in the series. Such an ordering must then undergo a metrization, which depends on finding a fundamental measuring procedure permitting the determination of a standard sample to which the unit of measure can be bound. This also depends on the existence of an operation of physical composition, which behaves additively with respect to the quantity we intend to measure. Only if all these conditions are satisfied will it be possible to introduce a magnitude in a proper sense, that is, a function which assigns to each object of the material domain a real number. This real number represents the measure of the object with respect to the intended magnitude. This condition, by introducing a homomorphism between the domain of the material objects and that of the positive real numbers, transforms the language of analysis (that is, of the concrete theory of real numbers) into a language capable of speaking faithfully and truly about those physical objects to which such a magnitude is said to belong.
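Schematically (a sketch; the symbols D, ⪯, ∘ and m are introduced here only for illustration), the conditions amount to requiring a map

\[
m : D \to \mathbb{R}^{+}, \qquad
a \preceq b \;\iff\; m(a) \le m(b), \qquad
m(a \circ b) = m(a) + m(b),
\]

where ⪯ is the comparative (quasi-serial) ordering on the domain D of material objects and ∘ is the operation of physical composition; the last condition is the required additivity, and together they make m a homomorphism from the material domain into the positive reals.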

Does the success of applying mathematics in the study of the physical world mean that this world has a mathematical structure in an ontological sense, or does it simply mean that we find in mathematics nothing but a convenient practical tool for putting order in our representations of the world? Neither of the answers to this question is right, and this is because the question itself is not correctly raised. Indeed it tacitly presupposes that the endeavour of our scientific investigations consists in facing the reality of “things” as it is, so to speak, in itself. But we know that any science is uniquely concerned with a limited “cut” operated in reality by adopting a particular point of view, one that is concretely manifested by adopting a restricted number of predicates in the discourse on reality. Several skilful operational manipulations are needed in order to bring about a homomorphism with the structure of the positive real numbers. It is therefore clear that the objects studied by an empirical theory are by no means the rough things of everyday experience, but bundles of “attributes” (that is, of properties, relations and functions), introduced through suitable operational procedures, often with the explicit and declared goal of determining a concrete structure as isomorphic, or at least homomorphic, to the structure of real numbers or to some other mathematical structure. But now, if the objects of an empirical theory are entities of this kind, we are fully entitled to maintain that they are actually endowed with a mathematical structure: this is simply that structure which we have introduced through our operational procedures. However, this structure is objective and real and, with respect to it, the mathematized discourse is far from having a purely conventional and pragmatic function, with the goal of keeping our ideas in order: it is a faithful description of this structure. Of course, we could never pretend that such a discourse determines the structure of reality in a full and exhaustive way, and this for two distinct reasons. In the first place, reality (both in the sense of the totality of existing things, and of the “whole” of any single thing) is much richer than the particular “slice” that it is possible to cut out by means of our operational manipulations. In the second place, we must be aware that a scientific object, defined as a structured set of attributes, is an abstract object, a conceptual construction that is perfectly defined just because it is totally determined by a finite list of predicates. But concrete objects are by no means so: they are endowed with a great many attributes of an indefinite variety, so that they can at best exemplify, with an acceptable approximation, certain abstract objects that totally encode a given set of attributes through their corresponding predicates. The reason why such an exemplification can only be partial is that the different attributes simultaneously present in a concrete object are, in a way, mutually limiting, so that the object never fully exemplifies any one of them. This explains the correct sense of such common and obvious remarks as: “a rigid body, a perfect gas, an adiabatic transformation, a perfect elastic recoil, etc., do not exist in reality (or in Nature)”.
Sometimes this remark is intended to convey the thesis that these are nothing but intellectual fictions devoid of any correspondence with reality, but instrumentally used by scientists in order to organize their ideas. This interpretation is totally wrong, and is simply due to a confusion between encoding and exemplifying: no concrete thing encodes any finite and explicit number of characteristics that, on the contrary, can be appropriately encoded in a concept. Things can exemplify several concepts, while concepts (or abstract objects) do not exemplify the attributes they encode. Going back to the distinction between sense on the one hand, and reference or denotation on the other, we could also say that abstract objects belong to the level of sense, while their exemplifications belong to the level of reference, and constitute what is denoted by them. It is obvious that in the case of empirical sciences we try to construct conceptual structures (abstract objects) having empirical denotations (exemplified by concrete objects). Once one has understood this elementary but important distinction, one is in a position to see correctly how mathematics can concern physical objects. These objects are abstract objects, structured sets of predicates, and there is absolutely nothing surprising in the fact that they could receive a mathematical structure (for example, a structure isomorphic to that of the positive real numbers, or to that of a given group, or of an abstract mathematical space, etc.). If it happens that these abstract objects are exemplified by concrete objects within a certain degree of approximation, we are entitled to say that the corresponding mathematical structure also holds true (with the same degree of approximation) for this domain of concrete objects. Now, in the case of physics, the abstract objects are constructed by isolating certain ontological attributes of things by means of concrete operations, so that they actually refer to things, and are exemplified by the concrete objects singled out by means of such operations up to a given degree of approximation or accuracy. In conclusion, one can maintain that mathematics constitutes at the same time the most exact language for speaking of the objects of the domain under consideration, and faithfully mirrors the concrete structure (in an ontological sense) of this domain of objects. Of course, it is very reasonable to recognize that other aspects of these things (or other attributes of them) might not be treatable by means of the particular mathematical language adopted, and this may imply either that these attributes could perhaps be handled through a different available mathematical language, or even that no mathematical language found as yet could handle them.

Conjuncted: Occam’s Razor and Nomological Hypothesis. Thought of the Day 51.1.1



A temporally evolving system must possess a sufficiently rich set of symmetries to allow us to infer general laws from a finite set of empirical observations. But what justifies this hypothesis?

This question is central to the entire scientific enterprise. Why are we justified in assuming that scientific laws are the same in different spatial locations, or that they will be the same from one day to the next? Why should replicability of other scientists’ experimental results be considered the norm, rather than a miraculous exception? Why is it normally safe to assume that the outcomes of experiments will be insensitive to irrelevant details? Why, for that matter, are we justified in the inductive generalizations that are ubiquitous in everyday reasoning?

In effect, we are assuming that the scientific phenomena under investigation are invariant under certain symmetries – both temporal and spatial, including translations, rotations, and so on. But where do we get this assumption from? The answer lies in the principle of Occam’s Razor.

Roughly speaking, this principle says that, if two theories are equally consistent with the empirical data, we should prefer the simpler theory:

Occam’s Razor: Given any body of empirical evidence about a temporally evolving system, always assume that the system has the largest possible set of symmetries consistent with that evidence.

To make this more precise, we begin by explaining what it means for a particular symmetry to be “consistent” with a body of empirical evidence. Formally, our total body of evidence can be represented as a subset E of H, namely the set of all logically possible histories that are not ruled out by that evidence. Note that we cannot assume that our evidence is a subset of Ω; when we scientifically investigate a system, we do not normally know what Ω is. Hence we can only assume that E is a subset of the larger set H of logically possible histories.

Now let ψ be a transformation of H, and suppose that we are testing the hypothesis that ψ is a symmetry of the system. For any positive integer n, let ψn be the transformation obtained by applying ψ repeatedly, n times in a row. For example, if ψ is a rotation about some axis by angle θ, then ψn is the rotation by the angle nθ. For any such transformation ψn, we write ψ–n(E) to denote the inverse image in H of E under ψn. We say that the transformation ψ is consistent with the evidence E if the intersection

E ∩ ψ–1(E) ∩ ψ–2(E) ∩ ψ–3(E) ∩ …

is non-empty. This means that the available evidence (i.e., E) does not falsify the hypothesis that ψ is a symmetry of the system.

For example, suppose we are interested in whether cosmic microwave background radiation is isotropic, i.e., the same in every direction. Suppose we measure a background radiation level of x1 when we point the telescope in direction d1, and a radiation level of x2 when we point it in direction d2. Call these events E1 and E2. Thus, our experimental evidence is summarized by the event E = E1 ∩ E2. Let ψ be a spatial rotation that rotates d1 to d2. Then, focusing for simplicity just on the first two terms of the infinite intersection above,

E ∩ ψ–1(E) = E1 ∩ E2 ∩ ψ–1(E1) ∩ ψ–1(E2).

If x1 = x2, we have E1 = ψ–1(E2), and the expression for E ∩ ψ–1(E) simplifies to E1 ∩ E2 ∩ ψ–1(E1), which has at least a chance of being non-empty, meaning that the evidence has not (yet) falsified isotropy. But if x1 ≠ x2, then E1 and ψ–1(E2) are disjoint. In that case, the intersection E ∩ ψ–1(E) is empty, and the evidence is inconsistent with isotropy. As it happens, we know from recent astronomy that x1 ≠ x2 in some cases, so cosmic microwave background radiation is not isotropic, and ψ is not a symmetry.
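A toy, finite sketch of this consistency test (in Python; the history space, the transformation and the evidence sets below are purely illustrative, and the real H is of course not finite):

# Toy consistency test: H is a small set of "histories", psi a transformation
# of H, and E the evidence; we check that E ∩ psi^{-1}(E) ∩ psi^{-2}(E) ∩ ...
# is non-empty.

def inverse_image(psi, H, S):
    """psi^{-1}(S) = { h in H : psi(h) in S }."""
    return {h for h in H if psi(h) in S}

def consistent(psi, H, E):
    """For finite H the decreasing intersection stabilises within |H| steps."""
    current, running = set(E), set(E)
    for _ in range(len(H)):
        current = inverse_image(psi, H, current)   # psi^{-(n+1)}(E) from psi^{-n}(E)
        running &= current
        if not running:
            return False                           # evidence falsifies the symmetry
    return True

# "Isotropy" toy model: a history records the radiation level in two directions,
# and psi swaps them (a rotation taking d1 to d2).
H = {(x1, x2) for x1 in (0, 1) for x2 in (0, 1)}
psi = lambda h: (h[1], h[0])
E_equal = {h for h in H if h[0] == 0 and h[1] == 0}    # measured x1 = x2
E_unequal = {h for h in H if h[0] == 0 and h[1] == 1}  # measured x1 ≠ x2
print(consistent(psi, H, E_equal))    # True  -- isotropy not (yet) falsified
print(consistent(psi, H, E_unequal))  # False -- the evidence rules psi out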

Our version of Occam’s Razor now says that we should postulate as symmetries of our system a maximal monoid of transformations consistent with our evidence. Formally, a monoid Ψ of transformations (where each ψ in Ψ is a function from H into itself) is consistent with evidence E if the intersection

⋂ψ∈Ψ ψ–1(E)

is non-empty. This is the generalization of the infinite intersection that appeared in our definition of an individual transformation’s consistency with the evidence. Further, a monoid Ψ that is consistent with E is maximal if no proper superset of Ψ forms a monoid that is also consistent with E.

Occam’s Razor (formal): Given any body E of empirical evidence about a temporally evolving system, always assume that the set of symmetries of the system is a maximal monoid Ψ consistent with E.

What is the significance of this principle? We define Γ to be the set of all symmetries of our temporally evolving system. In practice, we do not know Γ. A monoid Ψ that passes the test of Occam’s Razor, however, can be viewed as our best guess as to what Γ is.

Furthermore, if Ψ is this monoid, and E is our body of evidence, the intersection

⋂ψ∈Ψ ψ–1(E)

can be viewed as our best guess as to what the set of nomologically possible histories is. It consists of all those histories among the logically possible ones that are not ruled out by the postulated symmetry monoid Ψ and the observed evidence E. We thus call this intersection our nomological hypothesis and label it Ω(Ψ,E).
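Continuing the toy example above (again purely illustrative), the nomological hypothesis for a hypothesised symmetry monoid Ψ is just the intersection of the inverse images of the evidence under every element of Ψ:

# Omega(Psi, E): the histories not ruled out by the evidence E together with the
# postulated symmetry monoid Psi (here a finite monoid of maps on a finite H).

def nomological_hypothesis(Psi, H, E):
    """Intersection over psi in Psi of psi^{-1}(E)."""
    omega = set(H)
    for psi in Psi:
        omega &= {h for h in H if psi(h) in E}
    return omega

identity = lambda h: h
swap = lambda h: (h[1], h[0])
Psi = [identity, swap]            # the monoid generated by swap (swap ∘ swap = id)

H = {(x1, x2) for x1 in (0, 1) for x2 in (0, 1)}
E = {h for h in H if h[0] == 0}   # evidence: radiation level 0 measured at d1
print(nomological_hypothesis(Psi, H, E))   # {(0, 0)}

Postulating the swap symmetry sharpens the hypothesis: the evidence alone leaves both (0, 0) and (0, 1) open, but Ω(Ψ,E) retains only the history compatible with the postulated symmetry.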

To see that this construction is not completely far-fetched, note that, under certain conditions, our nomological hypothesis does indeed reflect the truth about nomological possibility. If the hypothesized symmetry monoid Ψ is a subset of the true symmetry monoid Γ of our temporally evolving system – i.e., we have postulated some of the right symmetries – then the true set Ω of all nomologically possible histories will be a subset of Ω(Ψ,E). So, our nomological hypothesis will be consistent with the truth and will, at most, be logically weaker than the truth.

Given the hypothesized symmetry monoid Ψ, we can then assume provisionally (i) that any empirical observation we make, corresponding to some event D, can be generalized to a Ψ-invariant law and (ii) that unconditional and conditional probabilities can be estimated from empirical frequency data using a suitable version of the Ergodic Theorem.