Quantifier – Ontological Commitment: The Case for an Agnostic. Note Quote.


What about the mathematical objects that, according to the platonist, exist independently of any description one may offer of them in terms of comprehension principles? Do these objects exist on the fictionalist view? Now, the fictionalist is not committed to the existence of such mathematical objects, although this doesn’t mean that the fictionalist is committed to the non-existence of these objects. The fictionalist is ultimately agnostic about the issue. Here is why.

There are two types of commitment: quantifier commitment and ontological commitment. We incur quantifier commitment to the objects that are in the range of our quantifiers. We incur ontological commitment when we are committed to the existence of certain objects. However, despite Quine’s view, quantifier commitment doesn’t entail ontological commitment. Fictional discourse (e.g. in literature) and mathematical discourse illustrate that. Suppose that there’s no way of making sense of our practice with fiction but to quantify over fictional objects. Still, people would strongly resist the claim that they are therefore committed to the existence of these objects. The same point applies to mathematical objects.

This move can also be made by invoking a distinction between partial quantifiers and the existence predicate. The idea here is to resist reading the existential quantifier as carrying any ontological commitment. Rather, the existential quantifier only indicates that the objects that fall under a concept (or have certain properties) are only part of the whole domain of discourse. To indicate that the whole domain is invoked (e.g. that every object in the domain has a certain property), we use a universal quantifier. So two different functions are clumped together in the traditional, Quinean reading of the existential quantifier: (i) to assert the existence of something, and (ii) to indicate that not the whole domain of quantification is being considered. These functions are best kept apart. We should use a partial quantifier (that is, an existential quantifier free of ontological commitment) to convey that only some of the objects in the domain are referred to, and introduce an existence predicate into the language in order to express existence claims.

By distinguishing these two roles of the quantifier, we also gain expressive resources. Consider, for instance, the sentence:

(∗) Some fictional detectives don’t exist.

Can this sentence be translated into the usual formalism of classical first-order logic with the Quinean interpretation of the existential quantifier? Prima facie, that doesn’t seem to be possible. The sentence would be contradictory: it would state that there exist fictional detectives who don’t exist. The obvious consistent translation here would be ¬∃x Fx, where F is the predicate “is a fictional detective”. But this states that fictional detectives don’t exist, which is clearly a different claim from the one expressed in (∗). By declaring that some fictional detectives don’t exist, (∗) remains compatible with the existence of other fictional detectives; the regimented sentence denies this possibility.

However, it’s perfectly straightforward to express (∗) using the resources of partial quantification and the existence predicate. Suppose that “∃” stands for the partial quantifier and “E” stands for the existence predicate. In this case, we have: ∃x (Fx ∧ ¬Ex), which expresses precisely what we need to state.
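
By way of illustration, here is a minimal sketch in Lean 4 of the point just made. The two-element domain D, the predicate F (“is a fictional detective”) and the existence predicate E are purely illustrative assumptions, a toy model in which the two regimentations visibly come apart.

```lean
-- A toy domain with one fictional detective and one ordinary object.
inductive D where
  | holmes     -- a fictional detective
  | thisTable  -- an ordinary, existing object

def F : D → Prop := fun d => d = D.holmes      -- "is a fictional detective"
def E : D → Prop := fun d => d = D.thisTable   -- existence predicate

-- (∗) regimented with the partial quantifier and the existence
-- predicate: some fictional detective does not exist.
example : ∃ d, F d ∧ ¬ E d := by
  refine ⟨D.holmes, ?_, ?_⟩ <;> simp [F, E]

-- The Quinean regimentation ¬∃x Fx, by contrast, is false in this
-- model: the (partial) quantifier does range over a fictional detective.
example : ∃ d, F d := ⟨D.holmes, by simp [F]⟩
```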

Now, under what conditions is the fictionalist entitled to conclude that certain objects exist? In order to avoid begging the question against the platonist, the fictionalist cannot insist that only objects that we can causally interact with exist. So the fictionalist only offers sufficient conditions for us to be entitled to conclude that certain objects exist. Conditions such as the following seem to be uncontroversial. Suppose our access to certain objects is such that (i) it’s robust (e.g. we blink, we move away, and the objects are still there); (ii) the access to these objects can be refined (e.g. we can get closer for a better look); (iii) the access allows us to track the objects in space and time; and (iv) the access is such that if the objects weren’t there, we wouldn’t believe that they were. In this case, having this form of access to these objects gives us good grounds to claim that they exist. In fact, it’s in virtue of conditions of this sort that we believe that tables, chairs, and so many other observable entities exist.

But recall that these are only sufficient, and not necessary, conditions. Thus, the resulting view turns out to be agnostic about the existence of the mathematical entities the platonist takes to exist – independently of any description. The fact that mathematical objects fail to satisfy some of these conditions doesn’t entail that these objects don’t exist. Perhaps these entities do exist after all; perhaps they don’t. What matters for the fictionalist is that it’s possible to make sense of significant features of mathematics without settling this issue.

Now what would happen if the agnostic fictionalist used the partial quantifier in the context of comprehension principles? Suppose that a vector space is introduced via suitable principles, and that we establish that there are vectors satisfying certain conditions. Would this entail that we are now committed to the existence of these vectors? It would if the vectors in question satisfied the existence predicate. Otherwise, the issue would remain open, given that the existence predicate only provides sufficient, but not necessary, conditions for us to believe that the vectors in question exist. As a result, the fictionalist would then remain agnostic about the existence of even the objects introduced via comprehension principles!
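
The purely logical point behind this agnosticism can also be recorded in Lean 4. The sketch below is ours, with an assumed toy domain, a property φ standing in for whatever the comprehension principles establish, and an unconstrained existence predicate E; it merely shows that a partial-quantifier claim does not by itself entail the corresponding existence claim.

```lean
-- "Some v satisfies φ" does not entail "some v satisfies φ and exists":
-- a countermodel with a one-element domain, φ trivially true and the
-- existence predicate everywhere false.
example :
    ¬ (∀ (V : Type) (φ E : V → Prop), (∃ v, φ v) → ∃ v, φ v ∧ E v) := by
  intro h
  have hx := h Unit (fun _ => True) (fun _ => False) ⟨(), trivial⟩
  exact hx.elim (fun _ hv => hv.2)
```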

Metaphysics of the Semantics of HoTT. Thought of the Day 73.0


Types and tokens are interpreted as concepts (rather than spaces, as in the homotopy interpretation). In particular, a type is interpreted as a general mathematical concept, while a token of a given type is interpreted as a more specific mathematical concept qua instance of the general concept. This accords with the fact that each token belongs to exactly one type. Since ‘concept’ is a pre-mathematical notion, this interpretation is admissible as part of an autonomous foundation for mathematics.

Expressions in the language are the names of types and tokens. Those naming types correspond to propositions. A proposition is ‘true’ just if the corresponding type is inhabited (i.e. there is a token of that type, which we call a ‘certificate’ to the proposition). There is no way in the language of HoTT to express the absence or non-existence of a token. The negation of a proposition P is represented by the type P → 0, where P is the type corresponding to proposition P and 0 is a type that by definition has no token constructors (corresponding to a contradiction). The logic of HoTT is not bivalent, since the inability to construct a token of P does not guarantee that a token of P → 0 can be constructed, and vice versa.
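
A small Lean 4 sketch may make this reading concrete, with the caveat that Lean is not itself a homotopy type theory; the sketch only mirrors the logical surface described above (types as propositions, tokens as certificates, the empty type playing the role of 0, negation as a map into it), and the names conjCertificate, Neg and contradiction are ours.

```lean
-- Empty plays the role of the type 0: it has no token constructors.

-- A proposition is "true" when the corresponding type is inhabited;
-- here a token of the product type A × B certifies a conjunction.
def conjCertificate {A B : Type} (a : A) (b : B) : A × B := (a, b)

-- The negation of P is represented by the function type P → Empty.
def Neg (P : Type) : Type := P → Empty

-- From a token of P and a token of Neg P we construct a token of 0,
-- i.e. we reach a contradiction.
def contradiction {P : Type} (p : P) (np : Neg P) : Empty := np p

-- Non-bivalence: nothing in the construction rules yields, for an
-- arbitrary P, a token of Sum P (Neg P); failing to build a token of P
-- does not by itself hand us a token of P → Empty, and vice versa.
```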

The rules governing the formation of types are understood as ways of composing concepts to form more complex concepts, or as ways of combining propositions to form more complex propositions. They follow from the Curry-Howard correspondence between logical operations and operations on types. However, we depart slightly from the standard presentation of the Curry-Howard correspondence, in that the tokens of types are not to be thought of as ‘proofs’ of the corresponding propositions but rather as certificates to their truth. A proof of a proposition is the construction of a certificate to that proposition by a sequence of applications of the token construction rules. Two different such processes can result in the construction of the same token, and so proofs and tokens are not in one-to-one correspondence.
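
The distinction between proofs (construction processes) and tokens can be illustrated with a deliberately trivial Lean 4 example of our own: two different construction processes that name one and the same token of the type Nat × Bool.

```lean
-- One construction process...
def tokenA : Nat × Bool := (0, true)

-- ...and a different sequence of construction steps...
def tokenB : Nat × Bool :=
  let n := 0
  let b := true
  (n, b)

-- ...which nevertheless result in the very same token, so proofs and
-- tokens are not in one-to-one correspondence.
example : tokenA = tokenB := rfl
```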

When we work formally in HoTT we construct expressions in the language according to the formal rules. These expressions are taken to be the names of tokens and types of the theory. The rules are chosen such that if a construction process begins with non-contradictory expressions that all name tokens (i.e. none of the expressions are ‘empty names’) then the result will also name a token (i.e. the rules preserve non-emptiness of names).

Since we interpret tokens and types as concepts, the only metaphysical commitment required is to the existence of concepts. That human thought involves concepts is an uncontroversial position, and our interpretation does not require that concepts have any greater metaphysical status than is commonly attributed to them. Just as the existence of a concept such as ‘unicorn’ does not require the existence of actual unicorns, likewise our interpretation of tokens and types as mathematical concepts does not require the existence of mathematical objects. However, it is compatible with such beliefs. Thus a Platonist can take the concept, say, ‘equilateral triangle’ to be the concept corresponding to the abstract equilateral triangle (after filling in some account of how we come to know about these abstract objects in a way that lets us form the corresponding concepts). Even without invoking mathematical objects to be the ‘targets’ of mathematical concepts, one could still maintain that concepts have a mind-independent status, i.e. that the concept ‘triangle’ continues to exist even while no-one is thinking about triangles, and that the concept ‘elliptic curve’ did not come into existence at the moment someone first gave the definition. However, this is not a necessary part of the interpretation, and we could instead take concepts to be mind-dependent, with corresponding implications for the status of mathematics itself.

Categorial Logic – Paracompleteness versus Paraconsistency. Thought of the Day 46.2


The fact that logic is content-dependent opens a new horizon concerning the relationship of logic to ontology (or objectology). Although the classical concepts of a priori and a posteriori propositions (or judgments) have lately become rather blurred, one fact is undeniable: the distant origin of mathematics certainly lies in empirical, practical knowledge, yet nobody can claim that higher mathematics is empirical.

Thanks to category theory, it is an established fact that some very important logical systems, namely the classical and the intuitionistic (with all its axiomatically enriched subsystems), can be interpreted through topoi. And this possibility permits us to consider topoi, be it in a Noneist or in a Platonist way, as universes, that is, as ontologies or objectologies. Now, the association of a topos with its corresponding ontology (or objectology) is quite different from the association of theoretical terms with empirical concepts. Within the frame of a physical theory, if a new fact is discovered in the laboratory, it must be explained through logical deduction (with the due initial conditions and some other details). If a logical conclusion is inferred from the fundamental hypotheses, it must be corroborated through empirical observation. And if the corroboration fails, the theory must be readjusted or even rejected.

In the case of categorial logic, the situation has some similarity with the former case; but we must be careful not to be influenced by apparent coincidences. If we add the tertium non datur as an axiom to formalized intuitionistic logic, we obtain classical logic. That is, we can formally pass from the one to the other just by adding or suppressing the tertium. This fact could induce us to think that, just as in physics, if a logical theory, say intuitionistic logic, cannot include a true proposition, then its axioms must be readjusted to make it possible to include it among the theorems. But there is a radical difference: in the semantics of intuitionistic logic, and of any logic, the point of departure is not a set of hypothetical propositions that must be corroborated through experiment; it is a set of propositions that are true under some interpretation. This set can be axiomatic or it can consist of rules of inference, but the theorems of the system are not submitted to verification. The derived propositions are just true, and nothing more. The logician surely tries to find new true propositions but, when they are found (through some effective method, which can be intuitive, as it is in Gödel’s theorem), there are only three possible cases: they can be formally derivable; their negations can be formally derivable (that is, they are refutable); or they can be formally neither derivable nor refutable, that is, undecidable. But undecidability does not induce the logician to readjust or to reject the theory. Nobody tries to add axioms or to remove them. In physics, when we are handling a theory T and a new describable phenomenon is found that cannot be deduced from the axioms (plus initial or some other conditions), T must be readjusted or even rejected. A classical logician will never think of changing the axioms or rules of inference of classical logic because some proposition turns out to be undecidable in it. And an intuitionist logician would not care at all to add the tertium to the axioms of Heyting’s system just because it cannot be derived within it.
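
The formal passage from the one system to the other can be glimpsed in a short Lean 4 sketch: taking the tertium non datur as a hypothesis (rather than as an axiom of the ambient logic), one derives double-negation elimination, a principle not derivable in Heyting’s system on its own. This is only a sketch of the familiar derivation, not a claim about any particular axiomatization.

```lean
-- From the tertium non datur, assumed as a hypothesis, we derive
-- double-negation elimination.
theorem dne_of_em (em : ∀ P : Prop, P ∨ ¬ P) : ∀ P : Prop, ¬ ¬ P → P := by
  intro P hnn
  cases em P with
  | inl hp  => exact hp               -- P holds outright
  | inr hnp => exact absurd hnp hnn   -- ¬P contradicts ¬¬P
```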

The foregoing considerations sufficiently show that in logic and mathematics there is something that, with full right, can be called “a priori”. And although, as we have said, we must acknowledge that the concepts of a priori and a posteriori are not clear-cut, in some cases we can rightly speak of synthetical a priori knowledge. For instance, the Gödel proposition that affirms its own underivability is synthetical and a priori. But there are other propositions, for instance mathematical induction, that can also be considered synthetical and a priori. And a great many mathematical definitions that are not mere abbreviations are synthetical. For instance, the definition of a monoid action is synthetical (and, of course, a priori), because the concept of a monoid does not have among its characterizing traits the concept of an action, and vice versa.
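
To make the last example concrete, here is the definition of a monoid action written out as a self-contained Lean 4 sketch (bare-bones structures of our own, not those of any library): nothing in the characterization of a monoid mentions acting on anything, and nothing in the bare notion of a map M → X → X mentions a monoid; the definition of an action is what joins the two.

```lean
-- A monoid: a carrier with a unit and an associative operation.
structure Monoid (M : Type) where
  one : M
  mul : M → M → M
  mul_assoc : ∀ a b c, mul (mul a b) c = mul a (mul b c)
  one_mul : ∀ a, mul one a = a
  mul_one : ∀ a, mul a one = a

-- A monoid action on X: the genuinely new ingredient is the map
-- act : M → X → X, required to respect the monoid structure.
structure MonoidAction (M X : Type) (mon : Monoid M) where
  act : M → X → X
  one_act : ∀ x, act mon.one x = x
  mul_act : ∀ m n x, act (mon.mul m n) x = act m (act n x)
```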

Categorial logic is the deepest knowledge of logic that has ever been achieved. But its scope does not encompass the whole field of logic. There are other kinds of logic that are also important and, if we intend to know, as much as possible, what logic is and how it is related to mathematics and ontology (or objectology), we must pay attention to them. From a mathematical and a philosophical point of view, the most important non-paracomplete logical systems are the paraconsistent ones. These systems are something like a dual to paracomplete logics. They are employed in inconsistent theories without producing triviality (in this sense, relevant logics are also paraconsistent). In intuitionistic logic there are interpretations that, with respect to some topoi, include two contradictory propositions that are both false; whereas in paraconsistent systems we can find interpretations in which there are two contradictory propositions that are both true.

There is, though, a difference between paracompleteness and paraconsistency. Insofar as mathematics is concerned, paracomplete systems had to be devised to cope with very deep problems. The paraconsistent ones, on the other hand, although they have been applied with success to mathematical theories, were conceived for purely philosophical and, in some cases, even political and ideological motivations. The common point of them all was the need to construct a logical system able to cope with contradictions. That means: to have at one’s disposal a deductive method which offered the possibility of deducing consistent conclusions from inconsistent premisses. Of course, the inconsistency of the premisses had to comply with some (although very wide) conditions to avoid triviality. But these conditions made it possible to cope with paradoxes or antinomies with precision and mathematical sense.

But, philosophically, paraconsistent logic has another very important property: it is used in a spontaneous way to formalize naive set theory, that is, the kind of theory that pre-Zermelian mathematicians had always employed. And it is, no doubt, important to try to develop mathematics within the frame of naive, spontaneous mathematical thought, without falling into the artificiality of modern set theory. The formalization of this naive way of mathematical thinking, although every formalization is unavoidably artificial, has opened the possibility of coping with dialectical thought.