Wittgenstein’s Form is the Possibility of Structure

Given two arbitrary objects x and y, they can be understood as arguments of a basic ontological connection which, in turn, is either positive or negative. A priori there are just four cases: positive connection – MP, negative connection – MI, connection that is both positive and negative, hence incoherent – MPI, and, the case most popular in combinatorial ontology, mutual neutrality – N( , ). The first case is taken here to be fundamental.

Explication for σ

Now we can offer the following, rather natural explication of a powerful, nearly omnipotent synthesizer: y is synthesizable from x iff it can be made possible from x:

σ(x) = {y : MP(x,y)}

Notice that the above explication connects the second (operator) approach with the third (internal) approach to a general theory of analysis and synthesis.
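
As a toy illustration, the making-possible relation MP can be modelled extensionally as a set of ordered pairs, with the synthesizer σ read off from it exactly as in the explication above. This is a minimal Python sketch; the universe and the MP pairs are invented purely for illustration:

```python
# A minimal extensional model of the making-possible relation MP
# and of the synthesizer sigma(x) = {y : MP(x, y)}.
# The universe and the MP pairs below are purely illustrative.

MP = {
    ("substance", "form"),
    ("form", "structure"),
    ("substance", "structure"),
}

def sigma(x):
    """Everything synthesizable from x, i.e. {y : MP(x, y)}."""
    return {y for (a, y) in MP if a == x}

print(sigma("substance"))  # {'form', 'structure'}
print(sigma("form"))       # {'structure'}
```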

Quoting one of the most mysterious theses of Wittgenstein’s Tractatus:

(2.033) Form is the possibility of structure.

Ask now: what does the possibility mean? Frank Ramsey pointed out in his famous review of the Tractatus that it cannot be read as a logical modality (i.e., form cannot be treated as an alternative structure), for this reading would immediately make the Tractatus inconsistent.

Rather: ‘the form of x is what makes the structure of y possible’.

Formalization: MP(Form(x), Str(y)), hence – through suitable generalization – MP(x, y).

Wittgensteinian and Leibnizian clues make the nature of MP clearer: the form of x is determined by its substance, whereas the structurality of y means that y is a complex built up in such and such a way. Using the syntactic categorization of Leśniewski and Ajdukiewicz, we therefore obtain that MP has the category of a quantifier: s/n, s – which, as is easy to see, is of higher order and deeply modal.

Therefore MP is a modal quantifier, characterized following Wittgenstein’s clue by

MP(x, y) ↔ MP(S(x), y)

Conjuncted: Occam’s Razor and Nomological Hypothesis. Thought of the Day 51.1.1

A temporally evolving system must possess a sufficiently rich set of symmetries to allow us to infer general laws from a finite set of empirical observations. But what justifies this hypothesis?

This question is central to the entire scientific enterprise. Why are we justified in assuming that scientific laws are the same in different spatial locations, or that they will be the same from one day to the next? Why should replicability of other scientists’ experimental results be considered the norm, rather than a miraculous exception? Why is it normally safe to assume that the outcomes of experiments will be insensitive to irrelevant details? Why, for that matter, are we justified in the inductive generalizations that are ubiquitous in everyday reasoning?

In effect, we are assuming that the scientific phenomena under investigation are invariant under certain symmetries – both temporal and spatial, including translations, rotations, and so on. But where do we get this assumption from? The answer lies in the principle of Occam’s Razor.

Roughly speaking, this principle says that, if two theories are equally consistent with the empirical data, we should prefer the simpler theory – and, in the present setting, the theory that postulates more symmetries counts as the simpler one:

Occam’s Razor: Given any body of empirical evidence about a temporally evolving system, always assume that the system has the largest possible set of symmetries consistent with that evidence.

Making it more precise, we begin by explaining what it means for a particular symmetry to be “consistent” with a body of empirical evidence. Formally, our total body of evidence can be represented as a subset E of H: the set of all logically possible histories that are not ruled out by that evidence. Note that we cannot assume that our evidence is a subset of Ω, the set of nomologically possible histories; when we scientifically investigate a system, we do not normally know what Ω is. Hence we can only assume that E is a subset of the larger set H of logically possible histories.

Now let ψ be a transformation of H, and suppose that we are testing the hypothesis that ψ is a symmetry of the system. For any positive integer n, let ψⁿ be the transformation obtained by applying ψ repeatedly, n times in a row. For example, if ψ is a rotation about some axis by angle θ, then ψⁿ is the rotation by the angle nθ. For any such transformation ψⁿ, we write ψ⁻ⁿ(E) to denote the inverse image in H of E under ψⁿ. We say that the transformation ψ is consistent with the evidence E if the intersection

E ∩ ψ⁻¹(E) ∩ ψ⁻²(E) ∩ ψ⁻³(E) ∩ …

is non-empty. This means that the available evidence (i.e., E) does not falsify the hypothesis that ψ is a symmetry of the system.

For example, suppose we are interested in whether cosmic microwave background radiation is isotropic, i.e., the same in every direction. Suppose we measure a background radiation level of x₁ when we point the telescope in direction d₁, and a radiation level of x₂ when we point it in direction d₂. Call these events E₁ and E₂. Thus, our experimental evidence is summarized by the event E = E₁ ∩ E₂. Let ψ be a spatial rotation that rotates d₁ to d₂. Then, focusing for simplicity just on the first two terms of the infinite intersection above,

E ∩ ψ⁻¹(E) = E₁ ∩ E₂ ∩ ψ⁻¹(E₁) ∩ ψ⁻¹(E₂).

If x₁ = x₂, we have E₁ = ψ⁻¹(E₂), and the expression for E ∩ ψ⁻¹(E) simplifies to E₁ ∩ E₂ ∩ ψ⁻¹(E₁), which has at least a chance of being non-empty, meaning that the evidence has not (yet) falsified isotropy. But if x₁ ≠ x₂, then E₁ and ψ⁻¹(E₂) are disjoint. In that case, the intersection E ∩ ψ⁻¹(E) is empty, and the evidence is inconsistent with isotropy. As it happens, we know from recent astronomy that x₁ ≠ x₂ in some cases, so cosmic microwave background radiation is not isotropic, and ψ is not a symmetry.
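
To make the consistency test concrete, here is a minimal Python sketch in which H is a finite toy set of histories, a transformation ψ is given as a function (a dict) from H to itself, and consistency is decided by intersecting E with the inverse images ψ⁻ⁿ(E). Over a finite H the decreasing partial intersections stabilize, so a fixpoint loop suffices. The toy universe and all names are assumptions made for illustration, not part of the original construction:

```python
# Minimal sketch: testing whether a transformation psi is consistent
# with evidence E, i.e. whether
#     E ∩ ψ⁻¹(E) ∩ ψ⁻²(E) ∩ ψ⁻³(E) ∩ …
# is non-empty. H is a toy finite set of histories; psi is a dict H -> H.

def preimage(psi, S):
    """Inverse image of S under psi."""
    return {h for h in psi if psi[h] in S}

def consistent(psi, E):
    """Iterate E ∩ ψ⁻¹(E) ∩ … until the partial intersections stop
    shrinking; over a finite H this fixpoint is the full intersection."""
    current, level = set(E), set(E)
    while True:
        level = preimage(psi, level)   # ψ⁻⁽ⁿ⁺¹⁾(E) from ψ⁻ⁿ(E)
        new = current & level
        if new == current:
            return bool(current)
        current = new

# Toy universe: psi cyclically permutes four histories (a "rotation").
psi = {"h1": "h2", "h2": "h3", "h3": "h4", "h4": "h1"}

print(consistent(psi, {"h1", "h2", "h3", "h4"}))  # True: nothing ruled out
print(consistent(psi, {"h1", "h2"}))              # False: symmetry falsified
```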

Our version of Occam’s Razor now says that we should postulate as symmetries of our system a maximal monoid of transformations consistent with our evidence. Formally, a monoid Ψ of transformations (where each ψ in Ψ is a function from H into itself) is consistent with evidence E if the intersection

⋂ψ∈Ψ ψ⁻¹(E)

is non-empty. This is the generalization of the infinite intersection that appeared in our definition of an individual transformation’s consistency with the evidence. Further, a monoid Ψ that is consistent with E is maximal if no proper superset of Ψ forms a monoid that is also consistent with E.

Occam’s Razor (formal): Given any body E of empirical evidence about a temporally evolving system, always assume that the set of symmetries of the system is a maximal monoid Ψ consistent with E.
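
A brute-force Python sketch of the formal principle on the same kind of finite toy universe: generate the monoids spanned by a small pool of candidate transformations, keep those consistent with E, and return the maximal ones. The candidate pool, the universe, and every name here are illustrative assumptions; the core set computed along the way is exactly the intersection ⋂ψ∈Ψ ψ⁻¹(E), i.e., the nomological hypothesis Ω(Ψ,E) discussed below:

```python
# Brute-force sketch of the formal Occam's Razor on a finite toy universe.
# Histories are 0..3; a transformation is a tuple t with t[h] = image of h;
# a monoid is a composition-closed set of transformations containing id.
from itertools import combinations

H = range(4)
identity = (0, 1, 2, 3)
rot = (1, 2, 3, 0)                       # 4-cycle on histories
swap = (1, 0, 3, 2)                      # pairwise swap
candidates = [rot, swap]

def compose(f, g):
    return tuple(f[g[h]] for h in H)

def closure(gens):
    """Monoid generated by gens: identity plus closure under composition."""
    elems = {identity, *gens}
    while True:
        new = {compose(f, g) for f in elems for g in elems} - elems
        if not new:
            return frozenset(elems)
        elems |= new

def preimage(t, S):
    return {h for h in H if t[h] in S}

def consistent(monoid, E):
    """Intersection of psi^{-1}(E) over psi in the monoid: Omega(Psi, E)."""
    core = set(E)
    for t in monoid:
        core &= preimage(t, E)
    return core

def maximal_consistent_monoids(E):
    monoids = {closure(g) for r in range(len(candidates) + 1)
               for g in combinations(candidates, r)}
    good = [m for m in monoids if consistent(m, E)]
    return [m for m in good if not any(m < n for n in good)]

# Evidence ruling nothing out: the full dihedral monoid (8 maps) survives.
print(len(max(maximal_consistent_monoids({0, 1, 2, 3}), key=len)))  # 8
# Evidence {0, 1} falsifies the rotations but not the swap.
print([sorted(m) for m in maximal_consistent_monoids({0, 1})])
```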

What is the significance of this principle? We define Γ to be the set of all symmetries of our temporally evolving system. In practice, we do not know Γ. A monoid Ψ that passes the test of Occam’s Razor, however, can be viewed as our best guess as to what Γ is.

Furthermore, if Ψ is this monoid, and E is our body of evidence, the intersection

⋂ψ∈Ψ ψ⁻¹(E)

can be viewed as our best guess as to what the set of nomologically possible histories is. It consists of all those histories among the logically possible ones that are not ruled out by the postulated symmetry monoid Ψ and the observed evidence E. We thus call this intersection our nomological hypothesis and label it Ω(Ψ,E).

To see that this construction is not completely far-fetched, note that, under certain conditions, our nomological hypothesis does indeed reflect the truth about nomological possibility. If the hypothesized symmetry monoid Ψ is a subset of the true symmetry monoid Γ of our temporally evolving system – i.e., we have postulated some of the right symmetries – then the true set Ω of all nomologically possible histories will be a subset of Ω(Ψ,E). So, our nomological hypothesis will be consistent with the truth and will, at most, be logically weaker than the truth.

Given the hypothesized symmetry monoid Ψ, we can then assume provisionally (i) that any empirical observation we make, corresponding to some event D, can be generalized to a Ψ-invariant law and (ii) that unconditional and conditional probabilities can be estimated from empirical frequency data using a suitable version of the Ergodic Theorem.

Conjuncted: Internal Logic. Thought of the Day 46.1

adler-3DFiltration1

So, what exactly is an internal logic? The concept of a topos is a generalization of the concept of a set. In the categorial language of topoi, the universe of sets is just a topos. The consequence of this generalization is that the universe, or better the conglomerate, of topoi is of overwhelming amplitude. In set theory, the logic employed in the derivation of its theorems is classical. For this reason, propositions about the different properties of sets are two-valued: there can only be true or false propositions. The traditional fundamental principles – identity, non-contradiction and excluded middle – are absolutely valid.

But if the concept of a topos is a generalization of the concept of a set, it is obvious that the logic needed to study, by means of deduction, the properties of all non-set-theoretical topoi cannot be classical. If it were, all topoi would coincide with the universe of sets. This fact suggests that to study the properties of a topos deductively, a non-classical logic must be used, and this logic can be none other than the internal logic of the topos. We now know that the internal logic of all topoi is intuitionistic logic as formalized by Heyting (a disciple of Brouwer). It is very interesting to compare the formal system of classical logic with the intuitionistic one. If both systems are axiomatized, the axioms of classical logic encompass the axioms of intuitionistic logic: the latter has all the axioms of the former except one, the axiom that formally corresponds to the principle of the excluded middle. This difference shows up in all the equivalent formulations of both logics. But, as Mac Lane says, “in the long run, mathematics is essentially axiomatic” (Mac Lane). And it is remarkable that, just by suppressing a single axiom of classical logic, we obtain a logic (intuitionistic logic) that cannot be characterized by any finite set of truth-values, but only by a potentially infinite one.
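
To make the contrast tangible, here is a small Python sketch of the three-element Heyting chain 0 ≤ ½ ≤ 1, one of the simplest truth-value structures for an intuitionistic internal logic: meet and join are min and max, implication is the relative pseudo-complement, and negation is implication into 0. The example is an illustrative choice of mine, not a construction from the text:

```python
# The 3-element Heyting chain {0, 0.5, 1}: a toy model of an
# intuitionistic truth-value structure.
#   a ∧ b = min(a, b);  a ∨ b = max(a, b)
#   (a -> b) = 1 if a <= b else b      (relative pseudo-complement)
#   ¬a = (a -> 0)

VALUES = (0, 0.5, 1)

def impl(a, b):
    return 1 if a <= b else b

def neg(a):
    return impl(a, 0)

for a in VALUES:
    print(f"a={a}: ¬a={neg(a)}, a∨¬a={max(a, neg(a))}, ¬¬a={neg(neg(a))}")

# At a = 0.5 the output shows a∨¬a = 0.5 (not 1) and ¬¬a = 1 ≠ a:
# the excluded middle fails, exactly the axiom intuitionistic logic drops.
```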

We see, then, that the appellation “internal” is due to the fact that the logic by means of which we study the properties of a topos is a logic that functions within the topos, just as classical logic functions within set theory. As a matter of fact, classical logic is the internal logic of the universe of sets.

Another consequence of the fact that the general internal logic of every topos is intuitionistic is that many different axioms can be added to the axioms of intuitionistic logic. This possibility enriches the internal logic of topoi, and its application reveals many new and quite unexpected properties of topoi. Such an enrichment is impossible in classical logic because, if we add one or more axioms to it, the new system becomes redundant or inconsistent. This does not happen with intuitionistic logic. So, topos theory shows that classical logic, although very powerful as regards the number of resulting theorems, is limited in its mathematical applications: it cannot be applied to study the properties of a mathematical system that cannot be reduced to the system of sets. Of course, if we want, we can utilize classical logic to study the properties of a topos. But then there are important properties of the topos that cannot be known; they remain hidden in the interior of the topos. Classical logic remains external to the topos.

Noneism. Part 1.

Noneism was created by Richard Routley. Its point of departure is the rejection of what Routley calls “the Ontological Assumption”. This assumption consists in the explicit or, more frequently, implicit belief that denoting always refers to existing objects. If the object or objects that a proposition is about do not exist, then those objects can only be one thing: the null entity. It is incredible that Frege believed that denoting descriptions without a real (empirical, theoretical, or ideal) referent denote only the null set. And it is also difficult to believe that Russell maintained the thesis that non-existing objects cannot have properties and that propositions about such objects are false.

This means that we can have a very clear apprehension of imaginary objects, and a quite clear intellection of abstract objects that are not real. This is possible because, to determine an object, we only need to describe it through its distinctive traits. Such a description is possible because an object is always characterized through some definite notes. The number of traits necessary to identify an object varies greatly. In some cases we need only a few, for instance, the golden mountain or the blue bird; in other cases we need more, for instance, the goddess Venus or the centaur Chiron. In other instances the traits can be very numerous, even infinite. For instance, the chiliahedron, and the decimal number 0.0000…009, in which 9 comes after the first million zeros, have many traits; and the ordinal omega, or any Hilbert space, has infinitely many traits (although these traits can be reckoned through finite definitions). These examples show, in a convincing manner, that the Ontological Assumption is untenable. We must reject it and replace it with what Routley dubs the Characterization Postulate. The Characterization Postulate says that to be an object means to be characterized by determinate traits. The set of the characterizing traits of an object can be called its “characteristic”. When the characteristic of an object is set up, the object is perfectly recognizable.

Once this postulate is adopted, its consequences are far-reaching. Since we can characterize objects through any traits whatsoever, an object can not only be nonexistent, it can even be absurd or inconsistent, for instance, the “squond” (the circle that is square and round). And we can make perfectly valid logical inferences from the premiss “x is the squond”:

(1) if x is the squond, then x is square
(2) if x is the squond, then x is round
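
A toy Python rendering of the Characterization Postulate: an object is identified with its characteristic, the set of traits that determine it, and inferences like (1) and (2) just read properties off that set. The object names and traits are illustrative assumptions:

```python
# Toy model of the Characterization Postulate: an object simply *is*
# its characteristic, i.e. the set of traits that determine it.
# Nonexistent and even inconsistent objects are representable alike.

objects = {
    "golden mountain": {"golden", "mountain"},
    "squond": {"square", "round"},          # inconsistent, yet determinate
}

def has_trait(name, trait):
    """Valid inference from 'x is the <name>' to 'x is <trait>'."""
    return trait in objects[name]

print(has_trait("squond", "square"))   # True  -> inference (1)
print(has_trait("squond", "round"))    # True  -> inference (2)
print(has_trait("squond", "golden"))   # False: not in its characteristic
```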

So, the theory of objects has the widest realm of application. It is clear that the Ontological Assumption imposes unacceptable limits on logic. As a matter of fact, the existential quantifier of classical logic could not have been conceived without the Ontological Assumption. The expression “(∃x)Fx” means that there exists at least one object that has the property F (or, in extensional language, that there exists an x that is a member of the extension of F). For this reason, “∃x” is inapplicable to non-existing objects. Of course, in classical logic we can deny the existence of an object, but we cannot say anything about objects that have never existed and shall never exist (we are speaking strictly about classical logic). We cannot quantify over individual variables that do not refer to a real, actual, past or future entity. For instance, we cannot say “(∃x) (x is the eye of Polyphemus)”. This would be false, of course, because Polyphemus does not exist. But if the Ontological Assumption is set aside, it is true, within a mythological frame, that Polyphemus has a single eye and many other properties. And now we can understand why noneism leads to the content-dependence of logic.

As we have anticipated, there must be some limitations concerning the selection of contradictory properties; otherwise the whole theory becomes inconsistent and is trivialized. To avoid trivialization, neutral (noneist) logic distinguishes between two sorts of negation: the classical propositional negation, “it is not the case that x is P”, and the narrower predicate negation, “x is non-P”. In this way, and by applying some other technicalities (for instance, in case a universe is inconsistent, some kind of paraconsistent logic must be used), trivialization is avoided. With these provisions, the Characterization Postulate can be applied to create inconsistent universes in which classical logic is not valid, for instance, a world in which there is a mysterious personage who, within determined but very subtle circumstances, is and is not at the same time in two different places. In this case the logic to be applied is, obviously, some kind of paraconsistent logic (the type to be selected depends on the characteristic of the personage). And in another universe there could be a jewel which has two false properties: it is false that it is transparent and it is false that it is opaque. In this kind of world we must clearly use some kind of paracomplete logic. To develop naive set theory (in Halmos’ sense), we must use some type of paraconsistent logic to cope with the paradoxes that are produced through a natural way of mathematical reasoning; this logic can be of several orders, just like the classical one. In other cases, we can use some kind of relevant and, a fortiori, paraconsistent logic; and so on, ad infinitum.
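
One standard way to make the paraconsistent/paracomplete pair concrete is the Belnap-Dunn four-valued logic FDE, in which a proposition may be told true, told false, both, or neither. The sketch below is my illustrative choice, not necessarily the logic Routley had in mind:

```python
# Sketch: Belnap-Dunn four-valued logic (FDE). A truth value is the set
# of classical values a proposition has been "told": {}, {T}, {F}, {T,F}.
# "Both" models inconsistent universes (paraconsistency); "neither"
# models incomplete ones (paracompleteness).

T, F = "T", "F"

def neg(v):
    return frozenset(({F} if T in v else set()) | ({T} if F in v else set()))

def conj(a, b):
    out = set()
    if T in a and T in b: out.add(T)
    if F in a or F in b: out.add(F)
    return frozenset(out)

def disj(a, b):
    out = set()
    if T in a or T in b: out.add(T)
    if F in a and F in b: out.add(F)
    return frozenset(out)

both = frozenset({T, F})      # the two-places personage: true and false at once
print(conj(both, neg(both)))  # {'T','F'}: contradiction tolerated, no explosion
gap = frozenset()             # loosely modelling the jewel: a truth-value gap
print(disj(gap, neg(gap)))    # frozenset(): excluded middle fails (paracomplete)
```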

But if logic is content-dependent, and this dependence is a consequence of the rejection of the Ontological Assumption, what about ontology? Because the universes determined through the application of the Characterization Postulate may have no being (in fact, most of them do not), we cannot say that the objects that populate such universes are entities, because entities exist in the empirical world, or in the real world that underpins the phenomena, or (in a somewhat different way) in an ideal Platonic world. Instead of speaking about ontology, we should speak about objectology. In essence, objectology is the discipline founded by Meinong (the Theory of Objects), but enriched and made more precise by Routley and other noneist logicians. Its main divisions would be Ontology (the study of real physical and Platonic objects) and Medenology (the study of objects that have no existence).