Quantifier – Ontological Commitment: The Case for an Agnostic. Note Quote.


What about the mathematical objects that, according to the platonist, exist independently of any description one may offer of them in terms of comprehension principles? Do these objects exist on the fictionalist view? Now, the fictionalist is not committed to the existence of such mathematical objects, although this doesn’t mean that the fictionalist is committed to the non-existence of these objects. The fictionalist is ultimately agnostic about the issue. Here is why.

There are two types of commitment: quantifier commitment and ontological commitment. We incur quantifier commitment to the objects that are in the range of our quantifiers. We incur ontological commitment when we are committed to the existence of certain objects. However, despite Quine’s view, quantifier commitment doesn’t entail ontological commitment. Fictional discourse (e.g. in literature) and mathematical discourse illustrate that. Suppose that there’s no way of making sense of our practice with fiction but to quantify over fictional objects. Still, people would strongly resist the claim that they are therefore committed to the existence of these objects. The same point applies to mathematical objects.

This move can also be made by invoking a distinction between partial quantifiers and the existence predicate. The idea here is to resist reading the existential quantifier as carrying any ontological commitment. Rather, the existential quantifier only indicates that the objects that fall under a concept (or have certain properties) do not exhaust the whole domain of discourse. To indicate that the whole domain is invoked (e.g. that every object in the domain has a certain property), we use a universal quantifier. So, two different functions are clumped together in the traditional, Quinean reading of the existential quantifier: (i) to assert the existence of something, on the one hand, and (ii) to indicate that not the whole domain of quantification is considered, on the other. These functions are best kept apart. We should use a partial quantifier (that is, an existential quantifier free of ontological commitment) to convey that only some of the objects in the domain are referred to, and introduce an existence predicate in the language in order to express existence claims.

By distinguishing these two roles of the quantifier, we also gain expressive resources. Consider, for instance, the sentence:

(∗) Some fictional detectives don’t exist.

Can this expression be translated into the usual formalism of classical first-order logic under the Quinean interpretation of the existential quantifier? Prima facie, that doesn't seem to be possible. The sentence would be contradictory: it would state that there exist fictional detectives who don't exist. The obvious consistent translation here would be: ¬∃x Fx, where F is the predicate "is a fictional detective". But this states that fictional detectives don't exist. Clearly, this is a different claim from the one expressed in (∗). By declaring that some fictional detectives don't exist, (∗) is still compatible with the existence of some fictional detectives. The regimented sentence denies this possibility.

However, it's perfectly straightforward to express (∗) using the resources of partial quantification and the existence predicate. Suppose that "∃" stands for the partial quantifier and "E" stands for the existence predicate. In this case, we have: ∃x (Fx ∧ ¬Ex), which expresses precisely what we need to state.
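To make the division of labour concrete, here is a minimal sketch in Haskell of a partial quantifier paired with a separate existence predicate, evaluated over a toy domain. The domain, the predicate assignments, and all names are illustrative assumptions, not part of the fictionalist proposal itself.

-- A partial quantifier with a separate existence predicate, over a toy
-- domain of discourse. Domain and predicate assignments are hypothetical.
data Object = Holmes | Poirot | Einstein
  deriving (Eq, Show, Enum, Bounded)

domain :: [Object]
domain = [minBound .. maxBound]

-- F: "is a fictional detective"
fictionalDetective :: Object -> Bool
fictionalDetective o = o `elem` [Holmes, Poirot]

-- E: the existence predicate, kept apart from quantification
existsE :: Object -> Bool
existsE o = o == Einstein

-- The partial quantifier: it ranges over the domain without, by itself,
-- carrying any ontological commitment.
partialSome :: (Object -> Bool) -> Bool
partialSome p = any p domain

-- (∗) "Some fictional detectives don't exist": ∃x (Fx ∧ ¬Ex)
main :: IO ()
main = print (partialSome (\x -> fictionalDetective x && not (existsE x)))
-- prints True: (∗) comes out consistent and true in this model

Note that the regimented ¬∃x Fx would come out false in this model while (∗) comes out true, which is exactly the difference the partial reading is meant to capture.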

Now, under what conditions is the fictionalist entitled to conclude that certain objects exist? In order to avoid begging the question against the platonist, the fictionalist cannot insist that only objects we can causally interact with exist. So, the fictionalist only offers sufficient conditions for us to be entitled to conclude that certain objects exist. Conditions such as the following seem uncontroversial. Suppose our access to certain objects is such that (i) it's robust (e.g. we blink, we move away, and the objects are still there); (ii) it can be refined (e.g. we can get closer for a better look); (iii) it allows us to track the objects in space and time; and (iv) if the objects weren't there, we wouldn't believe that they were. In this case, having this form of access to these objects gives us good grounds to claim that these objects exist. In fact, it's in virtue of conditions of this sort that we believe that tables, chairs, and so many observable entities exist.

But recall that these are only sufficient, and not necessary, conditions. Thus, the resulting view turns out to be agnostic about the existence of the mathematical entities the platonist takes to exist – independently of any description. The fact that mathematical objects fail to satisfy some of these conditions doesn’t entail that these objects don’t exist. Perhaps these entities do exist after all; perhaps they don’t. What matters for the fictionalist is that it’s possible to make sense of significant features of mathematics without settling this issue.

Now what would happen if the agnostic fictionalist used the partial quantifier in the context of comprehension principles? Suppose that a vector space is introduced via suitable principles, and that we establish that there are vectors satisfying certain conditions. Would this entail that we are now committed to the existence of these vectors? It would if the vectors in question satisfied the existence predicate. Otherwise, the issue would remain open, given that the existence predicate only provides sufficient, but not necessary, conditions for us to believe that the vectors in question exist. As a result, the fictionalist would then remain agnostic about the existence of even the objects introduced via comprehension principles!

Category of a Quantum Groupoid


For a quantum groupoid H let Rep(H) be the category of representations of H, whose objects are finite-dimensional left H-modules and whose morphisms are H-linear homomorphisms. We shall show that Rep(H) has a natural structure of a monoidal category with duality.

For objects V, W of Rep(H) set

V ⊗ W = {x ∈ V ⊗k W | x = ∆(1) · x} ⊂ V ⊗k W —– (1)

with the obvious action of H via the comultiplication ∆ (here ⊗k denotes the usual tensor product of vector spaces). Note that ∆(1) is an idempotent and therefore V ⊗ W = ∆(1) · (V ⊗k W). The tensor product of morphisms is the restriction of the usual tensor product of homomorphisms. The standard associativity isomorphisms (U ⊗ V) ⊗ W → U ⊗ (V ⊗ W) are functorial and satisfy the pentagon condition, since ∆ is coassociative. We will suppress these isomorphisms and write simply U ⊗ V ⊗ W.
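As a sanity check on definition (1), it may help to note what it reduces to in the familiar special case (a standard fact, added here only for orientation): if H is an ordinary Hopf algebra, then

∆(1) = 1 ⊗ 1, so V ⊗ W = {x ∈ V ⊗k W | x = ∆(1) · x} = V ⊗k W, and εt(h) = ε(h)1, so Ht = k1,

recovering the usual monoidal structure on Rep(H) with unit object k.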

The target counital subalgebra Ht ⊂ H has an H-module structure given by h · z = εt(hz), where h ∈ H, z ∈ Ht.

Ht is the unit object of Rep(H).

Define a k-linear homomorphism lV : Ht ⊗ V → V by lV(1(1) · z ⊗ 1(2) · v) = z · v, z ∈ Ht, v ∈ V.

This map is H-linear, since

lV(h · (1(1) · z ⊗ 1(2) · v)) = lV(h(1) · z ⊗ h(2) · v) = εt(h(1)z)h(2) · v = hz · v = h · lV(1(1) · z ⊗ 1(2) · v),

∀ h ∈ H. The inverse map lV−1 : V → Ht ⊗ V is given by

lV−1(v) = S(1(1)) ⊗ (1(2) · v) = (1(1) · 1) ⊗ (1(2) · v)

The collection {lV}V gives a natural equivalence between the functor Ht ⊗ (·) and the identity functor. Indeed, for any H-linear homomorphism f : V → U we have:

lU ◦ (id ⊗ f)(1(1) · z ⊗ 1(2) · v) = lU(1(1) · z ⊗ 1(2) · f(v)) = z · f(v) = f(z · v) = f ◦ lV(1(1) · z ⊗ 1(2) · v)

Similarly, the k-linear homomorphism rV : V ⊗ Ht → V defined by rV(1(1) · v ⊗ 1(2) · z) = S(z) · v, z ∈ Ht, v ∈ V, has the inverse rV−1(v) = 1(1) · v ⊗ 1(2) and satisfies the necessary properties.

Finally, we can check the triangle axiom idV ⊗ lW = rV ⊗ idW : V ⊗ Ht ⊗ W → V ⊗ W ∀ objects V, W of Rep(H). For v ∈ V, w ∈ W we have

(idV ⊗ lW)(1(1) · v ⊗ 1′(1)1(2) · z ⊗ 1′(2) · w)

= 1(1) · v ⊗ 1(2)z · w = 1(1)S(z) · v ⊗ 1(2) · w

= (rV ⊗ idW)(1′(1) · v ⊗ 1′(2)1(1) · z ⊗ 1(2) · w),

therefore, idV ⊗ lW = rV ⊗ idW.

Using the antipode S of H, we can provide Rep(H) with a duality. For any object V of Rep(H), define the action of H on V∗ = Homk(V, k) by

(h · φ)(v) = φ(S(h) · v) —– (2)

where h ∈ H, v ∈ V, φ ∈ V∗. For any morphism f : V → W, let f∗ : W∗ → V∗ be the morphism dual to f. For any V in Rep(H), we define the duality morphisms dV : V∗ ⊗ V → Ht, bV : Ht → V ⊗ V∗ as follows. For ∑j φj ⊗ vj ∈ V∗ ⊗ V, set

dV(∑j φj ⊗ vj) = ∑j φj(1(1) · vj) 1(2) —– (3)

Let {fi}i and {ξi}i be bases of V and V∗, respectively, dual to each other. The element ∑i fi ⊗ ξi does not depend on the choice of these bases; moreover, ∀ v ∈ V, φ ∈ V∗ one has φ = ∑i φ(fi) ξi and v = ∑i fi ξi(v). Set

bV(z) = z · (∑i fi ⊗ ξi) —– (4)

The category Rep(H) is a monoidal category with duality. We already know that Rep(H) is monoidal; it remains to prove that dV and bV are H-linear and satisfy the identities

(idV ⊗ dV)(bV ⊗ idV) = idV, (dV ⊗ idV∗)(idV∗ ⊗ bV) = idV∗.

Take ∑j φj ⊗ vj ∈ V∗ ⊗ V, z ∈ Ht, h ∈ H. Using the axioms of a quantum groupoid, we have

h · dV(∑j φj ⊗ vj) = ∑j φj(1(1) · vj) εt(h1(2))

= ∑j φj(εs(1(1)h) · vj) 1(2) = ∑j φj(S(h(1))1(1)h(2) · vj) 1(2)

= ∑j (h(1) · φj)(1(1) · (h(2) · vj)) 1(2)

= ∑j dV(h(1) · φj ⊗ h(2) · vj) = dV(h · ∑j φj ⊗ vj),

therefore, dV is H-linear. To check the H-linearity of bV we have to show that h · bV(z) = bV(h · z), i.e., that

∑i h(1)z · fi ⊗ h(2) · ξi = ∑i 1(1)εt(hz) · fi ⊗ 1(2) · ξi

Since both sides of the above equality are elements of V ⊗k V∗, evaluating the second factor on v ∈ V, we get the equivalent condition

h(1)zS(h(2)) · v = 1(1)εt(hz)S(1(2)) · v, which is easy to check. Thus, bV is H-linear.

Using the isomorphisms lV and rV identifying Ht ⊗ V, V ⊗ Ht, and V, ∀ v ∈ V and φ ∈ V∗ we have:

(idV ⊗ dV)(bV ⊗ idV)(v)

= (idV ⊗ dV)(bV(1(1) · 1) ⊗ 1(2) · v)

= (idV ⊗ dV)(bV(1(2)) ⊗ S−1(1(1)) · v)

= ∑i (idV ⊗ dV)(1(2) · fi ⊗ 1(3) · ξi ⊗ S−1(1(1)) · v)

= ∑i 1(2) · fi ⊗ (1(3) · ξi)(1′(1)S−1(1(1)) · v) 1′(2)

= 1(2)S(1(3))1′(1)S−1(1(1)) · v ⊗ 1′(2) = v

(dV ⊗ idV∗)(idV∗ ⊗ bV)(φ)

= (dV ⊗ idV∗)(1(1) · φ ⊗ bV(1(2)))

= ∑i (dV ⊗ idV∗)(1(1) · φ ⊗ 1(2) · fi ⊗ 1(3) · ξi)

= ∑i (1(1) · φ)(1′(1)1(2) · fi) 1′(2) ⊗ 1(3) · ξi

= 1′(2) ⊗ 1(3)1(1)S(1′(1)1(2)) · φ = φ,

QED.

 

Dialectics of God: Lautman’s Mathematical Ascent to the Absolute. Paper.


For the figure and translation, visit Fractal Ontology.

The first of Lautman's two theses (On the unity of the mathematical sciences) takes as its starting point a distinction that Hermann Weyl made in his work on group theory and quantum mechanics. Weyl distinguished between 'classical' mathematics, which found its highest flowering in the theory of functions of complex variables, and the 'new' mathematics represented by (for example) the theory of groups and abstract algebras, set theory and topology. For Lautman, the 'classical' mathematics of Weyl's distinction is essentially analysis, that is, the mathematics that depends on some variable tending towards zero: convergent series, limits, continuity, differentiation and integration. It is the mathematics of arbitrarily small neighbourhoods, and it reached maturity in the nineteenth century. On the other hand, the 'new' mathematics of Weyl's distinction is 'global'; it studies the structures of 'wholes'. Algebraic topology, for example, considers the properties of an entire surface rather than aggregations of neighbourhoods. Lautman re-draws the distinction:

In contrast to the analysis of the continuous and the infinite, algebraic structures clearly have a finite and discontinuous aspect. Though the elements of a group, field or algebra (in the restricted sense of the word) may be infinite, the methods of modern algebra usually consist in dividing these elements into equivalence classes, the number of which is, in most applications, finite.

In his other major thesis (Essay on the notions of structure and existence in mathematics), Lautman gives his dialectical thought a more philosophical and polemical expression. His thesis is composed of 'structural schemas' and 'origination schemas'. The three structural schemas are: local/global, intrinsic properties/induced properties and the 'ascent to the absolute'. The first two of these three schemas are close to Lautman's 'unity' thesis. The 'ascent to the absolute' is a different sort of pattern; it involves a progress from mathematical objects that are in some sense 'imperfect', towards an object that is 'perfect' or 'absolute'. His two mathematical examples of this 'ascent' are: class field theory, which 'ascends' towards the absolute class field, and the covering surfaces of a given surface, which 'ascend' towards a simply-connected universal covering surface. In each case, there is a corresponding sequence of nested subgroups, which induces a 'stepladder' structure on the 'ascent'. This dialectical pattern is rather different to the others. The earlier examples were of pairs of notions (finite/infinite, local/global, etc.) and neither member of any pair was inferior to the other. Lautman argues that on some occasions, finite mathematics offers insight into infinite mathematics. In mathematics, the finite is not a somehow imperfect version of the infinite. Similarly, the 'local' mathematics of analysis may depend for its foundations on 'global' topology, but the former is not a botched or somehow inadequate version of the latter. Lautman introduces the section on the 'ascent to the absolute' by rehearsing Descartes's argument that his own imperfections lead him to recognise the existence of a perfect being (God). Man (for Descartes) is not the dialectical opposite of or alternative to God; rather, man is an imperfect image of his creator. In a similar movement of thought, according to Lautman, reflection on 'imperfect' class fields and covering surfaces leads mathematicians up to 'perfect', 'absolute' class fields and covering surfaces respectively.
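A concrete instance of this 'stepladder' (not spelled out in Lautman's text, but the simplest case of his second example) is the tower of coverings of the circle. Connected covers of S1 correspond to subgroups of π1(S1) ≅ Z: the subgroup nZ corresponds to the n-fold cover z ↦ zⁿ, and the trivial subgroup to the simply-connected universal cover R → S1. The nested chain of subgroups

Z ⊃ 2Z ⊃ 4Z ⊃ ⋯ ⊃ {0}

thus mirrors, rung by rung, an ascent through finite covers towards the 'absolute' universal cover.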

Albert Lautman, Dialectics in Mathematics

Categorial Functorial Monads


Algebraic constructs (A,U), such as Vec, Grp, Mon, and Lat, can be fully described by the following data, called the monad associated with (A,U):

1. the functor T : Set → Set, where T = U ◦ F and F : Set → A is the associated free functor,

2. the natural transformation η : idSet → T formed by universal arrows, and

3. the natural transformation μ : T ◦ T → T given by the unique homomorphism μX : T(TX) → TX that extends idTX.
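For readers who know Haskell, here is a minimal sketch of this data for the construct Mon: the free monoid on a set is the list construction, so T = U ◦ F is the list functor, η the singleton map, and μ concatenation. The encoding is illustrative only; Haskell types merely approximate Set.

-- T = U ◦ F : the underlying set of the free monoid on x is the list type
type T x = [x]

-- η : idSet → T, the universal arrow inserting generators
eta :: x -> T x
eta x = [x]

-- μ : T ◦ T → T, the unique homomorphism T(TX) → TX extending idTX
mu :: T (T x) -> T x
mu = concat

-- The monad laws hold: mu . eta == id, mu . map eta == id,
-- and mu . mu == mu . map mu (associativity of μ).
main :: IO ()
main = do
  print (mu [eta 'a', "bc"])       -- "abc"
  print (mu (eta "abc") == "abc")  -- True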

In these cases, there is a canonical concrete isomorphism K between (A,U) and the full concrete subcategory of Alg(T) consisting of those T-algebras x : TX → X that satisfy the equations x ◦ ηX = idX and x ◦ Tx = x ◦ μX. The latter subcategory is called the Eilenberg-Moore category of the monad (T, η, μ). The above observation makes it possible, in the following four steps, to express the "degree of algebraic character" of arbitrary concrete categories that have free objects:

Step 1: With every concrete category (A,U) over X that has free objects (or, more generally, with every adjoint functor U : A → X) one can associate, in an essentially unique way, an adjoint situation (η, ε) : F ⊣ U : A → X.

Step 2: With every adjoint situation (η, ε) : F ⊣ U : A → X one can associate a monad T = (T, η, μ) on X, where T = U ◦ F : X → X.

Step 3: With every monad T = (T, η, μ) on X one can associate a concrete subcategory of Alg(T) denoted by (XT, UT) and called the category of T-algebras.

Step 4: With every concrete category (A,U) over X that has free objects one can associate a distinguished concrete functor K : (A,U) → (XT, UT) into the associated category of T-algebras, called the comparison functor for (A,U).
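Continuing the Haskell sketch above: a T-algebra for the list monad is precisely a monoid, its structure map x : TX → X being a fold, and the two defining equations can be checked directly on sample data (the samples are hypothetical, for illustration only).

-- A T-algebra for the list monad: the additive monoid of Int,
-- with structure map alg = sum : T(Int) -> Int.
alg :: [Int] -> Int
alg = sum

main :: IO ()
main = do
  print (alg [7] == 7)                        -- x ◦ ηX = idX, since ηX 7 = [7]
  let tt = [[1, 2], [3]]                      -- a sample element of T(TX)
  print (alg (map alg tt) == alg (concat tt)) -- x ◦ Tx = x ◦ μX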

Concrete categories that are concretely isomorphic to a category of T-algebras for some monad T have a distinct "algebraic flavor". Such categories (A,U) and their forgetful functors U are called monadic. It turns out that a concrete category (A,U) is monadic iff it has free objects and its associated comparison functor K : (A,U) → (XT, UT) is an isomorphism. Thus, for concrete categories (A,U) that have free objects, the associated comparison functor can be considered as a means of measuring the "algebraic character" of (A,U); and the associated category of T-algebras can be considered to be the "algebraic part" of (A,U). In particular,

(a) every finitary variety is monadic,

(b) the category TopGrp, considered as a concrete category

  1. over Top, is monadic,
  2. over Set, is not monadic; the associated comparison functor is the forgetful functor TopGrp → Grp, so that the construct Grp may be considered as the “algebraic part” of the construct TopGrp,

(c) the construct Top is not monadic; the associated comparison functor is the forgetful functor Top → Set itself, so that the construct Set may be considered as the “algebraic part” of the construct Top; hence the construct Top may be considered as having a trivial “algebraic part”.

Among constructs, monadicity captures the idea of “algebraicness” rather well. Unfortunately, however, the behavior of monadic categories in general is far from satisfactory. Monadic functors can fail badly to reflect properties of the base category (e.g., the existence of colimits or of suitable factorization structures), and they are not closed under composition.

Conjuncted: Operations of Truth. Thought of the Day 47.1


Conjuncted here.

Let us consider only the power set of the set of all natural numbers – the latter being the smallest infinite set, the countable infinity. By a model of set theory we understand a set in which – if we restrict ourselves to its elements only – all axioms of set theory are satisfied. It follows from Gödel's completeness theorem that as long as set theory is consistent, no statement which is true in some model of set theory can contradict logical consequences of its axioms. If the cardinality of p(N) were such a consequence, there would exist a cardinal number κ such that the sentence "the cardinality of p(N) is κ" would be true in all the models. However, for every cardinal κ the technique of forcing allows for finding a model M where the cardinality of p(N) is not equal to κ. Thus, for no κ is the sentence "the cardinality of p(N) is κ" a consequence of the axioms of set theory, i.e. they do not decide the cardinality of p(N).

The starting point of forcing is a model M of set theory – called the ground model – which is countably infinite and transitive. As a matter of fact, the existence of such a model cannot be proved but it is known that there exists a countable and transitive model for every finite subset of axioms.

A characteristic subtlety can be observed here. From the perspective of an inhabitant of the universe, that is, if all the sets are considered, the model M is only a small part of this universe. It is deficient in almost every respect; for example all of its elements are countable, even though the existence of uncountable sets is a consequence of the axioms of set theory. However, from the point of view of an inhabitant of M, that is, if elements outside of M are disregarded, everything is in order. Some elements of M pass for uncountable there, because in this model there are no functions establishing a one-to-one correspondence between them and ω0. One could say that M simulates the properties of the whole universe.

The main objective of forcing is to build a new model M[G] based on M, which contains M, and satisfies certain additional properties. The model M[G] is called the generic extension of M. In order to accomplish this goal, a particular set is distinguished in M and its elements are referred to as conditions, which will be used to determine basic properties of the generic extension. In the case of the forcing that proves the undecidability of the cardinality of p(N), the set of conditions codes finite fragments of a function witnessing the correspondence between p(N) and a fixed cardinal κ.
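In the standard presentation of this Cohen-style forcing, the conditions form the poset of finite partial 0–1 functions on κ × ω, ordered by reverse inclusion (a textbook formulation, supplied here for concreteness):

P = {p | p : dom(p) → 2, dom(p) ⊆ κ × ω finite}, with p ≤ q ⇔ p ⊇ q.

A generic filter G then yields a total function F = ∪G : κ × ω → 2, and density arguments show that the rows F(α, ·), for α < κ, are pairwise distinct subsets of ω, so that the cardinality of p(N) is at least κ in the extension.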

In the next step, an appropriately chosen set G is added to M as well as other sets that are indispensable in order for M[G] to satisfy the axioms of set theory. This set – called generic – is a subset of the set of conditions that always lies outside of M. The construction of M[G] is exceptional in the sense that its key properties can be described and proved using M only, or just the conditions, thus, without referring to the generic set. This is possible for three reasons. First of all, every element x of M[G] has a name existing already in M (that is, an element in M that codes x in some particular way). Secondly, based on these names, one can design a language called the forcing language or – as Badiou terms it – the subject language that is powerful enough to express every sentence of set theory referring to the generic extension. Finally, it turns out that the validity of sentences of the forcing language in the extension M[G] depends on the set of conditions: the conditions force validity of sentences of the forcing language in a precisely specified sense. As it has already been said, the generic set G consists of some of the conditions, so even though G is outside of M, its elements are in M. Recognizing which of them will end up in G is not possible for an inhabitant of M; however, in some cases the following can be proved: provided that the condition p is an element of G, the sentence S is true in the generic extension constructed using this generic set G. We say then that p forces S.

In this way, with the aid of the forcing language, one can prove that every generic set of the Cohen forcing codes a total function defining a one-to-one correspondence between elements of p(N) and a fixed (uncountable) cardinal number – it turns out that all the conditions force the sentence stating this property of G, so regardless of which conditions end up in the generic set, it is always true in the generic extension. On the other hand, the existence of a generic set in the model M cannot follow from axioms of set theory, otherwise they would decide the cardinality of p(N).

The method of forcing is of fundamental importance for Badiou's philosophy. The event escapes ontology; it is "that-which-is-not-being-qua-being", so it has no place in set theory or the forcing construction. However, the post-evental truth that enters, and modifies, the situation is presented by forcing in the form of a generic set leading to an extension of the ground model. In other words, the situation, understood as the ground model M, is transformed by a post-evental truth identified with a generic set G, and becomes the generic model M[G]. Moreover, the knowledge of the situation is interpreted as the language of set theory, serving to discern elements of the situation; and as axioms of set theory, deciding validity of statements about the situation. Knowledge, understood in this way, does not decide the existence of a generic set in the situation nor can it point to its elements. A generic set is always undecidable and indiscernible.

Therefore, from the perspective of knowledge, it is not possible to establish whether a situation is still the ground model or whether it has undergone a generic extension resulting from the occurrence of an event; only the subject can interventionally decide this. And it is only the subject who decides about the belonging of particular elements to the generic set (i.e. the truth). A procedure of truth or procedure of fidelity (Alain Badiou – Being and Event) supported in this way gives rise to the subject language. It consists of sentences of set theory, so in this respect it is a part of knowledge, although the veridicity of the subject language originates from decisions of the faithful subject. Consequently, a procedure of fidelity forces statements about the situation as it is after being extended and modified by the operation of truth.

Conjuncted: Internal Logic. Thought of the Day 46.1


So, what exactly is an internal logic? The concept of topos is a generalization of the concept of set. In the categorial language of topoi, the universe of sets is just a topos. The consequence of this generalization is that the universe, or better the conglomerate, of topoi is of overwhelming amplitude. In set theory, the logic employed in the derivation of its theorems is classical. For this reason, the propositions about the different properties of sets are two-valued. There can only be true or false propositions. The traditional fundamental principles: identity, contradiction and excluded third, are absolutely valid.

But if the concept of a topos is a generalization of the concept of set, it is obvious that the logic needed to study, by means of deduction, the properties of all non-set-theoretical topoi cannot be classical. If it were so, all topoi would coincide with the universe of sets. This fact suggests that to deductively study the properties of a topos, a non-classical logic must be used. And this logic cannot be other than the internal logic of the topos. We know, presently, that the internal logic of all topoi is intuitionistic logic as formalized by Heyting (a disciple of Brouwer). It is very interesting to compare the formal system of classical logic with the intuitionistic one. If both systems are axiomatized, the axioms of classical logic encompass the axioms of intuitionistic logic. The latter has all the axioms of the former, except one: the axiom that formally corresponds to the principle of the excluded middle. This difference can be shown in all kinds of equivalent versions of both logics. But, as Mac Lane says, "in the long run, mathematics is essentially axiomatic." And it is remarkable that, just by suppressing one axiom of classical logic, we obtain a theory (intuitionistic logic) whose soundness can be demonstrated only through the existence of a potentially infinite set of truth-values.
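A worked example (standard, added for concreteness): in the Heyting algebra of open subsets of R, with ¬U defined as the interior of the complement of U, take U = R ∖ {0}. Then

¬U = int({0}) = ∅, so U ∨ ¬U = U ≠ R, while ¬¬U = R ≠ U,

so the excluded middle fails and double negation is not eliminable. This is precisely the kind of truth-value structure that the internal logic of a topos generalizes.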

We see, then, that the appellation “internal” is due to the fact that the logic by means of which we study the properties of a topos is a logic that functions within the topos, just as classical logic functions within set theory. As a matter of fact, classical logic is the internal logic of the universe of sets.

Another consequence of the fact that the general internal logic of every topos is the intuitionistic one, is that many different axioms can be added to the axioms of intuitionistic logic. This possibility enriches the internal logic of topoi. Through its application it reveals many new and quite unexpected properties of topoi. This enrichment of logic cannot be made in classical logic because, if we add one or more axioms to it, the new system becomes redundant or inconsistent. This does not happen with intuitionistic logic. So, topos theory shows that classical logic, although very powerful concerning the amount of the resulting theorems, is limited in its mathematical applications. It cannot be applied to study the properties of a mathematical system that cannot be reduced to the system of sets. Of course, if we want, we can utilize classical logic to study the properties of a topos. But, then, there are important properties of the topos that cannot be known; they remain hidden in the interior of the topos. Classical logic remains external to the topos.

Noneism. Part 2.


Noneism is a very rigorous and original philosophical doctrine, by and large superior to the classical mathematical philosophies. But there are some problems concerning the different ways of characterizing a universe of objects. It is very easy to understand the way a writer characterizes the protagonists of the novels he writes. But what about the characterization of the universe of natural numbers? Since in most kinds of civilizations the natural numbers are characterized the same way, we have the impression that the subject does not intervene in the forging of the characteristics of natural numbers. These numbers appear to be what they are, with total independence of the creative activity of the cognitive subject. There is, of course, the creation of theorems, but the potentially infinite sequence of natural numbers resists any effort to subjectivize its characteristics. It cannot be changed. A noneist might reply that natural numbers are non-existent, that they have no being, and that, in this respect, they are identical with mythological Objects. Moreover, the formal system of natural numbers can be interpreted in many ways: for instance, with respect to a universe of Skolem numbers. This is correct, but it does not explain why the properties of some universes are independent of subjective creation. It is an undeniable fact that there are two kinds of objectual characteristics. On the one hand, we have the characteristics created by subjective imagination or speculative thought; on the other hand, we find some characteristics that are not created by anybody; their corresponding Objects are, in most cases, non-existent but, at the same time, they are not invented. They are just found. The origin of the former characteristics is very easy to understand; the origin of the last ones is a mystery.

Now, the subject-independence of a universe suggests that it belongs to a Platonic realm. And as far as transfinite set theory is concerned, the subject-independence of its characteristics is much less evident than the characteristic subject-independence of the natural numbers. In the realm of the finite, both characteristics are subject-independent and can be reduced to combinatorics. The only difference between the two is that, according to the classical Platonistic interpretation of mathematics, there can only be a single mathematical universe and that, to deductively study its properties, one can only employ classical logic. But this position is not at all unobjectionable. Once the subject-independence of the natural numbers system's characteristics is posited, it becomes easy to overstep the classical phobia concerning the possibility of characterizing non-classical objective worlds. Euclidean geometry is incompatible with elliptical and hyperbolic geometries and, nevertheless, the validity of the first one does not invalidate the other ones. And vice versa, the fact that hyperbolic and other kinds of geometry are consistently characterized does not invalidate the good old Euclidean geometry. And the fact that we have now several kinds of non-Cantorian set theories does not invalidate the classical Cantorian set theory.

Of course, a universally non-Platonic point of view that includes classical set theory can also be assumed. But concerning natural numbers it would be quite artificial. It is very difficult not to surrender to Kronecker's famous dictum: God created the natural numbers, men created all the rest. Anyhow, it is not at all absurd to adopt a wholly platonistic conception of mathematics. And it is quite licit to adopt a noneist position. But if we do this, the origin of the natural numbers' characteristics becomes misty. However, forgetting this cloudiness, the leap from noneist universes to the platonistic ones, and vice versa, becomes like a flip-flop connecting objectological with ontological (ideal) universes, like a kind of rabbit-duck Gestalt or a Sherrington staircase. So, the fundamental question with respect to the subject-dependent or subject-independent mathematical theories is: are they created, or are they found? Regarding some theories, subject-dependency is far more understandable; and concerning other ones, subject-independency is very difficult, if not impossible, to negate.

From an epistemological point of view, the fact of non-subject-dependent characteristic traits of a universe would mean that there is something like intellectual intuition. The properties of natural numbers, the finite properties of sets (or combinatorics), some geometric axioms, for instance, in Euclidean geometry, the axioms of betweenness, etc., would be apprehended in a manner that pretty well coincides with the (nowadays rather discredited) concept of synthetic a priori knowledge. This aspect of mathematical knowledge shows that the old problem concerning analytic and synthetic a priori knowledge, in spite of the prevailing Quinean pragmatic conception, must be radically reset.

Noneism. Part 1.


Noneism was created by Richard Routley. Its point of departure is the rejection of what Routley calls "The Ontological Assumption". This assumption consists in the explicit or, more frequently, implicit belief that denoting always refers to existing objects. If the object or objects that a proposition is about do not exist, then these objects can only be one: the null entity. It is incredible that Frege believed that denoting descriptions without a real (empirical, theoretical, or ideal) referent denoted only the null set. And it is also difficult to believe that Russell maintained the thesis that non-existing objects cannot have properties and that propositions about these objects are false.

This means that we can have a very clear apprehension of imaginary objects, and quite clear intellection of abstract objects that are not real. This is possible because to determine an object we only need to describe it through its distinctive traits. This description is possible because an object is always characterized through some definite notes. The number of traits necessary to identify an object greatly varies. In some cases we need only a few, for instance, the golden mountain, or the blue bird; in other cases we need more, for instance, the goddess Venus or the centaur Chiron. In other instances the traits can be very numerous, even infinite. For instance the chiliahedron, and the decimal number 0.0000…009, in which 9 comes after the first million zeros, have many traits. And the ordinal omega or any Hilbert space have infinitely many traits (although these traits can be reckoned through finite definitions). These examples show, in a convincing manner, that the Ontological Assumption is untenable. We must reject it and replace it with what Routley dubs the Characterization Postulate. The Characterization Postulate says that, to be an object means to be characterized by determined traits. The set of the characterizing traits of an object can be called its "characteristic". When the characteristic of an object is set up, the object is perfectly recognizable.

Once this postulate is adopted, its consequences are far reaching. Since we can characterize objects through any traits whatsoever, an object can not only be inexistent, it can even be absurd or inconsistent. For instance, the “squond” (the circle that is square and round). And we can make perfectly valid logical inferences from the premiss: x is the sqound:

(1) if x is the squond, then x is square
(2) if x is the squond, then x is round

So, the theory of objects has the widest realm of application. It is clear that the Ontological Assumption imposes unacceptable limits on logic. As a matter of fact, the existential quantifier of classical logic could not have been conceived without the Ontological Assumption. The expression "(∃x)Fx" means that there exists at least one object that has the property F (or, in extensional language, that there exists an x that is a member of the extension of F). For this reason, "∃x" is inapplicable to non-existing objects. Of course, in classical logic we can deny the existence of an Object, but we cannot say anything about Objects that have never existed and shall never exist (we are strictly speaking about classical logic). We cannot quantify over individual variables of a first-order predicate that do not refer to a real, actual, past or future entity. For instance, we cannot say "(∃x) (x is the eye of Polyphemus)". This would be false, of course, because Polyphemus does not exist. But if the Ontological Assumption is set aside, it is true, within a mythological frame, that Polyphemus has a single eye and many other properties. And now we can understand why noneism leads to logical content-dependence.

As we have anticipated, there must be some limitations concerning the selection of the contradictory properties; otherwise the whole theory becomes inconsistent and is trivialized. To avoid trivialization, neutral (noneist) logic distinguishes between two sorts of negation: the classical propositional negation: "8 is not P", and the narrower negation: "8 is non-P". In this way, and by applying some other technicalities (for instance, in case a universe is inconsistent, some kind of paraconsistent logic must be used), trivialization is avoided. With the foregoing provisions, the Characterization Postulate can be applied to create inconsistent universes in which classical logic is not valid. For instance, a world in which there is a mysterious personage who, under determinate but very subtle circumstances, is and is not at the same time in two different places. In this case the logic to be applied is, obviously, some kind of paraconsistent logic (the type to be selected depends on the characteristic of the personage). And in another universe there could be a jewel which has two false properties: it is false that it is transparent and it is false that it is opaque. In this kind of world we must use, clearly, some kind of paracomplete logic. To develop naive set theory (in Halmos' sense), we must use some type of paraconsistent logic to cope with the paradoxes that are produced through a natural way of mathematical reasoning; this logic can be of several orders, just like the classical. In other cases, we can use some kind of relevant and, a fortiori, paraconsistent logic; and so on, ad infinitum.
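Routley's neutral logic is not reproduced here, but the flavour of tolerating both truth-value gluts and gaps can be illustrated with the Belnap–Dunn four-valued logic FDE, a standard system that is at once paraconsistent and paracomplete. The following Haskell sketch is an illustration of that flavour, not Routley's own system.

-- Belnap–Dunn four-valued logic (FDE): values T (true), B (both: glut),
-- N (neither: gap), F (false). Designated values are T and B.
data Four = T | B | N | F deriving (Eq, Show)

notF :: Four -> Four
notF T = F
notF F = T
notF v = v  -- gluts stay gluts, gaps stay gaps

andF :: Four -> Four -> Four
andF T y = y
andF x T = x
andF F _ = F
andF _ F = F
andF B B = B
andF N N = N
andF _ _ = F  -- B ∧ N = N ∧ B = F in the logical lattice

orF :: Four -> Four -> Four
orF x y = notF (andF (notF x) (notF y))  -- De Morgan dual

main :: IO ()
main = do
  print (andF B (notF B))  -- B: a contradiction is tolerated (paraconsistency)
  print (orF N (notF N))   -- N: the excluded middle fails (paracompleteness)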

But if logic is content-dependent, and this dependence is a consequence of the Ontological Assumption's rejection, what about ontology? Because the universes determined through the application of the Characterization Postulate may have no being (in fact, most of them do not), we cannot say that the objects that populate such universes are entities, because entities exist in the empirical world, or in the real world that underpins the phenomena, or (in a somewhat different way), in an ideal Platonic world. Instead of speaking about ontology, we should speak about objectology. In essence objectology is the discipline founded by Meinong (Theory of Objects), but enriched and made more precise by Routley and other noneist logicians. Its main divisions would be Ontology (the study of real physical and Platonic objects) and Medenology (the study of objects that have no existence).

The Locus of Renormalization. Note Quote.


Since symmetries and the related conservation properties have a major role in physics, it is interesting to consider the paradigmatic case where symmetry changes are at the core of the analysis: critical transitions. In these state transitions, "something is not preserved". In general, this is expressed by the fact that some symmetries are broken or new ones are obtained after the transition (symmetry changes, corresponding to state changes). At the transition, typically, there is the passage to a new "coherence structure" (a non-trivial scale symmetry); mathematically, this is described by the non-analyticity of the pertinent formal development. Consider the classical para-ferromagnetic transition: the system goes from a disordered state to sudden common orientation of spins, up to the complete ordered state of a unique orientation. Or percolation, often based on the formation of fractal structures, that is the iteration of a statistically invariant motif. Similarly for the formation of a snowflake . . . . In all these circumstances, a "new physical object of observation" is formed. Most of the current analyses deal with transitions at equilibrium; the less studied and more challenging case of far from equilibrium critical transitions may require new mathematical tools, or variants of the powerful renormalization methods. These methods change the pertinent object, yet they are based on symmetries and conservation properties such as energy or other invariants. That is, one obtains a new object, yet not necessarily new observables for the theoretical analysis. Another key mathematical aspect of renormalization is that it analyzes point-wise transitions, that is, mathematically, the physical transition is seen as happening in an isolated mathematical point (isolated with respect to the interval topology, or the topology induced by the usual measurement and the associated metrics).

One can say in full generality that a mathematical frame completely handles the determination of the object it describes as long as no strong enough singularity (i.e. relevant infinity or divergences) shows up to break this very mathematical determination. In classical statistical fields (at criticality) and in quantum field theories this leads to the necessity of using renormalization methods. The point of these methods is that when it is impossible to handle mathematically all the interaction of the system in a direct manner (because they lead to infinite quantities and therefore to no relevant account of the situation), one can still analyze parts of the interactions in a systematic manner, typically within arbitrary scale intervals. This allows us to exhibit a symmetry between partial sets of “interactions”, when the arbitrary scales are taken as a parameter.
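A textbook illustration of this scale-by-scale bookkeeping (added for concreteness; it is not part of the original argument) is real-space decimation for the one-dimensional Ising model. Summing over every second spin replaces the coupling K by an effective coupling K′ at the coarser scale:

e^(K′σ1σ3 + c) = Σ over σ2 = ±1 of e^(K(σ1σ2 + σ2σ3)), which gives tanh K′ = tanh² K.

Iterating the recursion realizes the "symmetry between partial sets of interactions" with the scale as parameter; here it has only the trivial fixed points K = 0 and K = ∞, i.e. no finite-temperature transition in one dimension.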

In this situation, the intelligibility still has an "upward" flavor since renormalization is based on the stability of the equational determination when one considers a part of the interactions occurring in the system. Now, the "locus of the objectivity" is not in the description of the parts but in the stability of the equational determination when taking more and more interactions into account. This is true for critical phenomena, where the parts, atoms for example, can be objectivized outside the system and have a characteristic scale. In general, though, only scale invariance matters and the contingent choice of a fundamental (atomic) scale is irrelevant. Even worse, in quantum field theories, the parts are not really separable from the whole (this would mean to separate an electron from the field it generates) and there is no relevant elementary scale which would allow one to get rid of the infinities (and again this would be quite arbitrary, since the objectivity needs the inter-scale relationship).

In short, even in physics there are situations where the whole is not the sum of the parts because the parts cannot be summed over (this is not specific to quantum fields and is also relevant for classical fields, in principle). In these situations, the intelligibility is obtained by the scale symmetry, which is why fundamental scale choices are arbitrary with respect to these phenomena. This choice of the object of quantitative and objective analysis is at the core of the scientific enterprise: looking only at molecules as the only pertinent observable of life is worse than reductionist, it is against the history of physics and its audacious unifications and invention of new observables, scale invariances and even conceptual frames.

As for criticality in biology, there exists substantial empirical evidence that living organisms undergo critical transitions. These are mostly analyzed as limit situations, either never really reached by an organism or as occasional point-wise transitions. Or also, as researchers nicely claim in specific analyses: a biological system, a cell's genetic regulatory networks, brain and brain slices … are "poised at criticality". In other words, critical state transitions happen continually.

Thus, as for the pertinent observables, the phenotypes, we propose to understand evolutionary trajectories as cascades of critical transitions, thus of symmetry changes. In this perspective, one cannot pre-give, nor formally pre-define, the phase space for the biological dynamics, in contrast to what has been done for the profound mathematical frame for physics. This does not forbid a scientific analysis of life. This may just be given in different terms.

As for evolution, there is no possible equational entailment nor a causal structure of determination derived from such entailment, as in physics. The point is that these are better understood and correlated, since the work of Noether and Weyl in the last century, as symmetries in the intended equations, where they express the underlying invariants and invariant preserving transformations. No theoretical symmetries, no equations, thus no laws and no entailed causes allow the mathematical deduction of biological trajectories in pre-given phase spaces – at least not in the deep and strong sense established by the physico-mathematical theories. Observe that the robust, clear, powerful physico-mathematical sense of entailing law has been permeating all sciences, including societal ones, economics among others. If we are correct, this permeating physico-mathematical sense of entailing law must be given up for unentailed diachronic evolution in biology, in economic evolution, and cultural evolution.

As a fundamental example of symmetry change, observe that mitosis yields different proteome distributions, differences in DNA or DNA expressions, in membranes or organelles: the symmetries are not preserved. In a multi-cellular organism, each mitosis asymmetrically reconstructs a new coherent "Kantian whole", in the sense of the physics of critical transitions: a new tissue matrix, new collagen structure, new cell-to-cell connections . . . . And we undergo millions of mitoses each minute. Moreover, this is not "noise": this is variability, which yields diversity, which is at the core of evolution and even of stability of an organism or an ecosystem. Organisms and ecosystems are structurally stable, also because they are Kantian wholes that permanently and non-identically reconstruct themselves: they do it in an always different, thus adaptive, way. They change the coherence structure, thus its symmetries. This reconstruction is thus random, but also not random, as it heavily depends on constraints, such as the protein types imposed by the DNA, the relative geometric distribution of cells in embryogenesis, interactions in an organism, in a niche, but also on the opposite of constraints, the autonomy of Kantian wholes.

In the interaction with the ecosystem, the evolutionary trajectory of an organism is characterized by the co-constitution of new interfaces, i.e. new functions and organs that are the proper observables for the Darwinian analysis. And the change of a (major) function induces a change in the global Kantian whole as a coherence structure, that is, it changes the internal symmetries: the fish with the new swim bladder will swim differently, its cardiovascular system will relevantly change ….

Organisms transform the ecosystem while transforming themselves and they can stand/do it because they have an internal preserved universe. Its stability is maintained also by slightly, yet constantly changing internal symmetries. The notion of extended criticality in biology focuses on the dynamics of symmetry changes and provides an insight into the permanent, ontogenetic and evolutionary adaptability, as long as these changes are compatible with the co-constituted Kantian whole and the ecosystem. As we said, autonomy is integrated in and regulated by constraints, within an organism itself and of an organism within an ecosystem. Autonomy makes no sense without constraints, and constraints apply to an autonomous Kantian whole. So constraints shape autonomy, which in turn modifies constraints, within the margin of viability, i.e. within the limits of the interval of extended criticality. The extended critical transition proper to the biological dynamics does not allow one to prestate the symmetries and the correlated phase space.

Consider, say, a microbial ecosystem in a human. It has some 150 different microbial species in the intestinal tract. Each person’s ecosystem is unique, and tends largely to be restored following antibiotic treatment. Each of these microbes is a Kantian whole, and in ways we do not understand yet, the “community” in the intestines co-creates their worlds together, co-creating the niches by which each and all achieve, with the surrounding human tissue, a task closure that is “always” sustained even if it may change by immigration of new microbial species into the community and extinction of old species in the community. With such community membership turnover, or community assembly, the phase space of the system is undergoing continual and open ended changes. Moreover, given the rate of mutation in microbial populations, it is very likely that these microbial communities are also co-evolving with one another on a rapid time scale. Again, the phase space is continually changing as are the symmetries.

Can one have a complete description of actual and potential biological niches? If so, the description seems to be incompressible, in the sense that any linguistic description may require new names and meanings for the new unprestatable functions, where functions and their names make sense only in the newly co-constructed biological and historical (linguistic) environment. Even for existing niches, short descriptions are given from a specific perspective (they are very epistemic), looking at a purpose, say. One finds out a feature in a niche because one observes that if it goes away the intended organism dies. In other terms, niches are compared by differences: one may not be able to prove that two niches are identical or equivalent (in supporting life), but one may show that two niches are different. Once more, there are no symmetries organizing over time these spaces and their internal relations. Mathematically, no symmetry (groups) nor (partial-) order (semigroups) organize the phase spaces of phenotypes, in contrast to physical phase spaces.

Finally, here is one of the many logical challenges posed by evolution: the circularity of the definition of niches is more than the circularity in the definitions. The "in the definitions" circularity concerns the quantities (or quantitative distributions) of given observables. Typically, a numerical function defined by recursion or by impredicative tools yields a circularity in the definition and poses no mathematical or logical problems, in contemporary logic (this is so also for recursive definitions of metabolic cycles in biology). Similarly, a river flow, which shapes its own border, presents technical difficulties for a careful equational description of its dynamics, but no mathematical or logical impossibility: one has to optimize a highly non-linear and large action/reaction system, yielding a dynamically constructed geodesic, the river path, in perfectly known phase spaces (momentum and space or energy and time, say, as pertinent observables and variables).

The circularity "of the definitions" applies, instead, when it is impossible to prestate the phase space, so the very novel interaction (including the "boundary conditions" in the niche and the biological dynamics) co-defines new observables. The circularity then radically differs from the one in the definition, since it is at the meta-theoretical (meta-linguistic) level: which are the observables and variables to put in the equations? It is not just within prestatable yet circular equations within the theory (ordinary recursion and extended non-linear dynamics), but in the ever-changing observables, the phenotypes and the biological functions in a circularly co-specified niche. Mathematically and logically, no law entails the evolution of the biosphere.

Causality


Causation is a form of event generation. To present an explicit definition of causation requires introducing some ontological concepts to formally characterize what is understood by ‘event’.

The concept of individual is the basic primitive concept of any ontological theory. Individuals associate themselves with other individuals to yield new individuals. It follows that they satisfy a calculus, and that they are rigorously characterized only through the laws of such a calculus. These laws are set with the aim of reproducing the way real things associate. Specifically, it is postulated that every individual is an element of a set s in such a way that the structure S = ⟨s, ◦, ◻⟩ is a commutative monoid of idempotents. This is a simple additive semigroup with a neutral element.

In the structure S, s is the set of all individuals, the element ◻ ∈ s is a fiction called the null individual, and the binary operation ◦ is the association of individuals. Although S is a mathematical entity, the elements of s are not, with the only exception of ◻, which is a fiction introduced to form a calculus. The association of any element of s with ◻ yields the same element. The following definitions characterize the composition of individuals.

1. x ∈ s is composed ⇔ (∃ y, z ∈ s) (x = y ◦ z)
2. x ∈ s is simple ⇔ ∼(∃ y, z ∈ s) (x = y ◦ z)
3. x ⊂ y ⇔ x ◦ y = y (x is part of y)
4. Comp(x) ≡ {y ∈ s | y ⊂ x} is the composition of x.
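A minimal Haskell sketch of this calculus, encoding individuals as finite sets of named simples, with ◦ as union and ◻ as the empty set. The encoding is an illustrative assumption, chosen because finite sets under union form exactly a commutative monoid of idempotents.

import Data.List (subsequences)
import qualified Data.Set as Set

type Individual = Set.Set String  -- finite sets of named simples; ◻ = ∅

nullInd :: Individual             -- the null individual ◻
nullInd = Set.empty

assoc :: Individual -> Individual -> Individual  -- the association ◦
assoc = Set.union

partOf :: Individual -> Individual -> Bool       -- x ⊂ y ⇔ x ◦ y = y
partOf x y = assoc x y == y

comp :: Individual -> [Individual]               -- Comp(x): the parts of x
comp x = map Set.fromList (subsequences (Set.toList x))

main :: IO ()
main = do
  let a  = Set.fromList ["a"]
      ab = Set.fromList ["a", "b"]
  print (assoc ab ab == ab)       -- idempotency: x ◦ x = x
  print (assoc ab nullInd == ab)  -- ◻ is the neutral element
  print (a `partOf` ab)           -- True: a is part of ab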

Real things are distinguished from abstract individuals because they have a number of properties in addition to their capability of association. These properties can be intrinsic (Pi) or relational (Pr). The intrinsic properties are inherent and they are represented by predicates or unary applications, whereas relational properties depend upon more than a single thing and are represented by n-ary predicates, with n ≥ 2. Examples of intrinsic properties are electric charge and rest mass, whereas velocity of macroscopic bodies and volume are relational properties.

An individual together with its properties makes up a thing X: X = ⟨x, P(x)⟩

Here P(x) is the collection of properties of the individual x. A material thing is an individual with concrete properties, i.e. properties that can change in some respect.

The state of a thing X is a set of functions S(X) from a domain of reference M (a set that can be enumerable or nondenumerable) to the set of properties PX. Every function in S(X) represents a property in PX. The set of the physically accessible states of a thing X is the lawful state space of X : SL(X). The state of a thing is represented by a point in SL(X). A change of a thing is an ordered pair of states. Only changing things can be material. Abstract things cannot change since they have only one state (their properties are fixed by definition).

A legal statement is a restriction upon the state functions of a given class of things. A natural law is a property of a class of material things represented by an empirically corroborated legal statement.

The ontological history h(X) of a thing X is a subset of SL(X) defined by h(X) = {⟨t, F(t)⟩|t ∈ M}

where t is an element of some auxiliary set M, and F are the functions that represent the properties of X.

If a thing is affected by other things we can introduce the following definition:

h(Y/X): "history of the thing Y in presence of the thing X".

Let h(X) and h(Y) be the histories of the things X and Y, respectively. Then

h(Y/X) = {⟨t,H(t)⟩|t ∈ M},

where H ≠ F is the total state function of Y as affected by the existence of X, and F is the total state function of Y in the absence of X. The history of Y in presence of X is different from the history of Y without X.

We can now introduce the notion of action:

X ▷ Y : “X acts on Y”

X ▷ Y =def h(Y/X) ≠ h(Y)
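A toy rendering of these definitions in Haskell, with histories as finite samplings of a state function over the reference set M and the action relation defined exactly as above. The state type and the sample histories are hypothetical placeholders.

type TimePoint = Int     -- an element of the reference set M
type StateVal  = Double
type History   = [(TimePoint, StateVal)]

-- h(Y): history of Y alone; h(Y/X): history of Y in the presence of X.
-- The numbers are made-up sample data.
hY, hYgivenX :: History
hY       = [(t, 1.0) | t <- [0 .. 3]]
hYgivenX = [(t, 1.0 + 0.1 * fromIntegral t) | t <- [0 .. 3]]

-- X ▷ Y  =def  h(Y/X) ≠ h(Y)
actsOn :: History -> History -> Bool
actsOn hyx hy = hyx /= hy

main :: IO ()
main = print (hYgivenX `actsOn` hY)  -- True: X acts on Y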

An event is a change of a thing X, i.e. an ordered pair of states:

(s1, s2) ∈ EL(X) = SL(X) × SL(X)

The space EL(X) is called the event space of X.

Causality is a relation between events, i.e. a relation between changes of states of concrete things. It is not a relation between things. Only the related concept of ‘action’ is a relation between things. Specifically,

C'(x): "an event in a thing x is caused by some unspecified event exxi".

C'(x) =def (∃ exxi) [exxi ∈ EL(x) ⇔ xi ▷ x].

C(x, y): "an event in a thing x is caused by an event in a thing y".

C(x, y) =def (∃ exy) [exy ∈ EL(x) ⇔ y ▷ x].

In the above definitions, the notation exy indicates in the superscript the thing x to whose event space the event e belongs, whereas the subscript denotes the thing whose action triggered the event. The implicit arguments of both C' and C are events, not things. Causation is a form of event generation. The crucial point is that a given event in the lawful event space EL(x) is caused by an action of a thing y iff the event happens only conditionally on the action, i.e., it would not be the case of exy without an action of y upon x. Time does not appear in this definition, allowing causal relations in space-time without a global time orientability, and even instantaneous and non-local causation. If causation is non-local under some circumstances, e.g. when a quantum system is prepared in a specific state of polarization or spin, quantum entanglement poses no problem to realism and determinism. The quantum theory describes an aspect of a reality that is ontologically determined and with non-local relations. Under no circumstances are the postulates of Special Relativity violated, since no physical system ever crosses the barrier of the speed of light.