Fallibilist a priori. Thought of the Day 127.0


Kant’s ‘transcendental subject’ is pragmatized in this notion in Peirce, transcending any delimitation of reason to the human mind: the ‘anybody’ is operational and refers to anything which is able to undertake reasoning’s formal procedures. In the same way, Kant’s synthetic a priori notion is pragmatized in Peirce’s account:

Kant declares that the question of his great work is ‘How are synthetical judgments a priori possible?’ By a priori he means universal; by synthetical, experiential (i.e., relating to experience, not necessarily derived wholly from experience). The true question for him should have been, ‘How are universal propositions relating to experience to be justified?’ But let me not be understood to speak with anything less than profound and almost unparalleled admiration for that wonderful achievement, that indispensable stepping-stone of philosophy. (The Essential Peirce Selected Philosophical Writings)

Synthetic a priori is interpreted as experiential and universal, or, to put it another way, observational and general – thus Peirce’s rationalism in demanding rational relations is connected to his scholastic realism positing the existence of real universals.

But we do not make a diagram simply to represent the relation of killer to killed, though it would not be impossible to represent this relation in a Graph-Instance; and the reason why we do not is that there is little or nothing in that relation that is rationally comprehensible. It is known as a fact, and that is all. I believe I may venture to affirm that an intelligible relation, that is, a relation of thought, is created only by the act of representing it. I do not mean to say that if we should some day find out the metaphysical nature of the relation of killing, that intelligible relation would thereby be created. […] No, for the intelligible relation has been signified, though not read by man, since the first killing was done, if not long before. (The New Elements of Mathematics)

Peirce’s pragmatizing of Kant enables him to escape the threatening subjectivism: rational relations are inherent in the universe and are not our inventions, but we must know (some of) them in order to think. The relation of killer to killed is not, however, given our present knowledge, one of those rational relations, even if we might later become able to produce a rational diagram of aspects of it. Such a relation is, as Peirce says, a mere fact. Rational relations, on the other hand, are – even if inherent in the universe – not only facts. Their extension is rather that of mathematics as such, which can be seen from the fact that the rational relations are what make necessary reasoning possible – at the same time as Peirce subscribes to his father’s definition of mathematics: mathematics is the science that draws necessary conclusions – with Peirce’s addendum that these conclusions are always hypothetical. This conforms to Kant’s idea that the result of synthetic a priori judgments comprised mathematics as well as the sciences built on applied mathematics. Thus, in constructing diagrams, we have all the possible relations in mathematics (which is inexhaustible, following Gödel’s 1931 incompleteness theorem) at our disposal. Moreover, the idea that we might later learn about the rational relations involved in killing entails a historical, fallibilist rendering of the a priori notion. Unlike the case in Kant, the a priori is thus removed from a privileged connection to the knowing subject and its transcendental faculties. Peirce thus anticipates a fallibilist notion of the a priori.

Categorial Logic – Paracompleteness versus Paraconsistency. Thought of the Day 46.2


The fact that logic is content-dependent opens a new horizon concerning the relationship of logic to ontology (or objectology). Although the classical concepts of a priori and a posteriori propositions (or judgments) have lately become rather blurred, one fact is undeniable: the distant origin of mathematics lies in empirical, practical knowledge, but nobody can claim that higher mathematics is empirical.

Thanks to category theory, it is an established fact that certain very important logical systems, namely the classical and the intuitionistic (with all its axiomatically enriched subsystems), can be interpreted through topoi. This possibility permits us to consider topoi, be it in a Noneist or in a Platonist way, as universes, that is, as ontologies or objectologies. Now, the association of a topos with its corresponding ontology (or objectology) is quite different from the association of theoretical terms with empirical concepts. Within the frame of a physical theory, if a new fact is discovered in the laboratory, it must be explained through logical deduction (with the due initial conditions and some other details). If a logical conclusion is inferred from the fundamental hypotheses, it must be corroborated through empirical observation. And if the corroboration fails, the theory must be readjusted or even rejected.

In the case of categorial logic, the situation has some similarity with the former case; but we must be careful not to be misled by apparent coincidences. If we add the tertium non datur as an axiom to formalized intuitionistic logic, we obtain classical logic. That is, we can formally pass from the one to the other just by adding or suppressing the tertium. This fact could induce us to think that, just as in physics, if a logical theory, say intuitionistic logic, cannot include a true proposition, then its axioms must be readjusted to make it possible to include it among its theorems. But there is a radical difference: in the semantics of intuitionistic logic, and of any logic, the point of departure is not a set of hypothetical propositions that must be corroborated through experiment; it is a set of propositions that are true under some interpretation. This set can be axiomatic or it can consist in rules of inference, but the theorems of the system are not submitted to verification. The derived propositions are just true, and nothing more. The logician surely tries to find new true propositions but, when they are found (through some effective method, which can be intuitive, as it is in Gödel’s theorem), there are only three possible cases: they can be formally derivable, they can be formally refutable, or they can be neither, that is, undecidable. But undecidability does not induce the logician to readjust or to reject the theory. Nobody tries to add axioms or to diminish them. In physics, when we are handling a theory T and a new describable phenomenon is found that cannot be deduced from the axioms (plus initial or some other conditions), T must be readjusted or even rejected. A classical logician will never think of changing the axioms or rules of inference of classical logic because some proposition is undecidable in it. And an intuitionistic logician would never think of adding the tertium to the axioms of Heyting’s system just because it cannot be derived within it.
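The independence of the tertium from the intuitionistic axioms can be made concrete with a many-valued semantics. The following is a minimal sketch (not anything from the text itself) using the three-valued Gödel logic G3, an intermediate logic that validates all intuitionistic theorems but in which p ∨ ¬p fails at the intermediate value:

```python
# Three-valued Goedel logic G3: truth values 0, 1/2, 1, with 1 designated.
# All intuitionistic theorems take value 1 under every assignment here,
# but the tertium non datur (p or not-p) does not, which illustrates
# that it is independent of Heyting's axioms.

def neg(a):
    # Goedel negation: ~a = 1 if a = 0, else 0
    return 1.0 if a == 0.0 else 0.0

def disj(a, b):
    # disjunction is the maximum of the two values
    return max(a, b)

def valid(formula):
    # valid = takes the designated value 1 under every assignment
    return all(formula(v) == 1.0 for v in (0.0, 0.5, 1.0))

tertium = lambda p: disj(p, neg(p))
print(valid(tertium))   # False
print(tertium(0.5))     # 0.5, not the designated value 1
```

The failure happens exactly at p = 1/2, where neither p nor ¬p reaches the designated value; adding the tertium as an axiom collapses the three values back to two, i.e. to classical logic.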

The foregoing considerations sufficiently show that in logic and mathematics there is something that, with full right, can be called “a priori”. And although, as we have said, we must acknowledge that the concepts of a priori and a posteriori are not clear-cut, in some cases we can rightly speak of synthetical a priori knowledge. For instance, Gödel’s proposition that affirms its own underivability is synthetical and a priori. But there are other propositions, for instance mathematical induction, that can also be considered as synthetical and a priori. And a great many mathematical definitions that are not mere abbreviations are synthetical. For instance, the definition of a monoid action is synthetical (and, of course, a priori), because the concept of a monoid does not have among its characterizing traits the concept of an action, and vice versa.

Categorial logic is the deepest knowledge of logic that has ever been achieved. But its scope does not encompass the whole field of logic. There are other kinds of logic that are also important and, if we intend to know as much as possible about what logic is and how it is related to mathematics and ontology (or objectology), we must pay attention to them. From a mathematical and a philosophical point of view, the most important non-paracomplete logical systems are the paraconsistent ones. These systems are something like a dual to paracomplete logics. They can be employed in inconsistent theories without producing triviality (in this sense relevant logics are also paraconsistent). In intuitionistic logic there are interpretations that, with respect to some topoi, include two contradictory false propositions; whereas in paraconsistent systems we can find interpretations in which there are two contradictory true propositions.

There is, though, a difference between paracompleteness and paraconsistency. Insofar as mathematics is concerned, paracomplete systems had to be coined to cope with very deep problems. The paraconsistent ones, on the other hand, although they have been applied with success to mathematical theories, were conceived for purely philosophical and, in some cases, even for political and ideological motivations. The common point of them all was the need to construct a logical system able to cope with contradictions. That means: to have at one’s disposal a deductive method which offered the possibility of deducing consistent conclusions from inconsistent premisses. Of course, the inconsistency of the premisses had to comply with some (although very wide) conditions to avoid triviality. But these conditions made it possible to cope with paradoxes or antinomies with precision and mathematical sense.
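How a contradiction can hold without everything following can be shown with a small semantic sketch. The system below is Priest's three-valued "Logic of Paradox" (LP), a standard paraconsistent logic; it is offered here only as one illustration of the mechanism, not as the particular system the text has in mind:

```python
# Priest's Logic of Paradox (LP): three values F < B < T, where both
# T and B are "designated" (counted as true).  A contradiction A & ~A
# can be designated (when A takes the glut value B) without an
# arbitrary proposition Q following, so explosion fails.
F, B, T = 0, 1, 2
DESIGNATED = {B, T}

def neg(a):
    # negation swaps T and F and fixes B
    return 2 - a

def conj(a, b):
    # conjunction is the minimum in the order F < B < T
    return min(a, b)

def entails(premise_vals, conclusion_vals):
    # entailment: in every valuation where the premise is designated,
    # the conclusion must be designated too
    return all(c in DESIGNATED
               for p, c in zip(premise_vals, conclusion_vals)
               if p in DESIGNATED)

# one valuation: A is a dialetheia (value B), Q is plainly false
A, Q = B, F
contradiction = conj(A, neg(A))
print(contradiction in DESIGNATED)      # True: A & ~A counts as true
print(entails([contradiction], [Q]))    # False: Q does not follow
```

This is precisely the "very wide condition" at work: the glut value B isolates the contradiction so that it cannot propagate to every sentence of the language.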

But, philosophically, paraconsistent logic has another very important property: it is used in a spontaneous way to formalize naive set theory, that is, the kind of theory that pre-Zermelian mathematicians had always employed. And it is, no doubt, important to try to develop mathematics within the frame of naive, spontaneous mathematical thought, without falling into the artificiality of modern set theory. The formalization of the naive way of mathematical thinking, although every formalization is unavoidably artificial, has opened the possibility of coping with dialectical thought.

Whitehead’s Non-Anthropocentric Quantum Field Ontology. Note Quote.


Whitehead builds also upon James’s claim that “The thought is itself the thinker”.

Either your experience is of no content, of no change, or it is of a perceptible amount of content or change. Your acquaintance with reality grows literally by buds or drops of perception. Intellectually and on reflection you can divide them into components, but as immediately given they come totally or not at all. — William James.

If the quantum vacuum displays features that make it resemble a material, albeit a really special one, we can immediately ask: then what is this material made of? Is it a continuum, or are there “atoms” of vacuum? Is vacuum the primordial substance of which everything is made? Let us start by decoupling the concept of vacuum from that of spacetime. The concept of vacuum as accepted and used in standard quantum field theory is tied to that of spacetime. This is important for the theory of quantum fields, because it leads to observable effects. It is the variation of geometry, either as a change in boundary conditions or as a change in the speed of light (and therefore the metric), which is responsible for the creation of particles. Now, one can legitimately go further and ask: which one is the fundamental “substance”, the spacetime or the vacuum? Is the geometry fundamental in any way, or is it just a property of the empty space emerging from a deeper structure? That geometry and substance can be separated is of course not anything new for philosophers. Aristotle’s distinction between form and matter is one example. For Aristotle the “essence” becomes a true reality only when embodied in a form. Otherwise it is just a substratum of potentialities, somewhat similar to what quantum physics suggests. Immanuel Kant was even more radical: the forms, or in general the structures that we think of as either existing in or as being abstracted from the realm of noumena, are actually innate categories of the mind, preconditions that make possible our experience of reality as phenomena. Structures such as space and time, causality, etc. are a priori forms of intuition – thus by nature very different from anything in outside reality – and they are used to formulate synthetic a priori judgments. But almost everything that was discovered in modern physics is at odds with Kant’s view.
In modern philosophy perhaps Whitehead’s process metaphysics provides the closest framework for formulating these problems. For Whitehead, potentialities are continuous, while the actualizations are discrete, much as in quantum theory the unitary evolution is continuous, while the measurement is non-unitary and in some sense “discrete”. An important concept is the “extensive continuum”, defined as a “relational complex” containing all the possibilities of objectification. This continuum also contains the potentiality for division; this potentiality is effected in what Whitehead calls “actual entities (occasions)” – the basic blocks of his cosmology. The core issue for both Whiteheadian Process and Quantum Process is the emergence of the discrete from the continuous. But what fixes, or determines, the partitioning of the continuous whole into the discrete set of subsets? The orthodox answer is this: it is an intentional action of an experimenter that determines the partitioning! In Whiteheadian process, by contrast, the world of fixed and settled facts grows via a sequence of actual occasions. The past actualities are the causal and structural inputs for the next actual occasion, which specifies a new space-time standpoint (region) from which the potentialities created by the past actualities will be prehended (grasped) by the current occasion. This basic autogenetic process creates the new actual entity, which, upon becoming actual, contributes to the potentialities for the succeeding actual occasions. For the pragmatic physicist, since the extensive continuum provides the space of possibilities from which the actual entities arise, it is tempting to identify it with the quantum vacuum. The actual entities are then assimilated to events in spacetime, as resulting from a quantum measurement, or simply to particles.
The following caveat is however due: Whitehead’s extensive continuum is devoid of geometrical content, while the quantum vacuum normally carries information about the geometry, be it flat or curved. Objective/absolute actuality consists of a sequence of psycho-physical quantum reduction events, identified as Whiteheadian actual entities/occasions. These happenings combine to create a growing “past” of fixed and settled “facts”. Each “fact” is specified by an actual occasion/entity that has a physical aspect (pole), and a region in space-time from which it views reality. The physical input is precisely the aspect of the physical state of the universe that is localized along the part of the contemporary space-like surface σ that constitutes the front of the standpoint region associated with the actual occasion. The physical output is the reduced state ψ(σ) on this space-like surface σ. The mental pole consists of an input and an output. The mental inputs and outputs have the ontological character of thoughts, ideas, or feelings, and they play an essential dynamical role in unifying, evaluating, and selecting discrete classically conceivable activities from among the continuous range of potentialities offered by the operation of the physically describable laws. The paradigmatic example of an actual occasion is an event whose mental pole is experienced by a human being as an addition to his or her stream of conscious events, and whose output physical pole is the neural correlate of that experiential event. Such events are “high-grade” actual occasions. But the Whitehead/Quantum ontology postulates that simpler organisms will have fundamentally similar but lower-grade actual occasions, and that there can be actual occasions associated with any physical systems that possess a physical structure that will support physically effective mental interventions of the kind described above.
Thus the Whitehead/Quantum ontology is essentially an ontologicalization of the structure of orthodox relativistic quantum field theory, stripped of its anthropocentric trappings. It identifies the essential physical and psychological aspects of contemporary orthodox relativistic quantum field theory, and lets them be essential features of a general non-anthropocentric ontology.


It is reasonable to expect that the continuous differentiable manifold that we use as spacetime in physics (and experience in our daily life) is a coarse-grained manifestation of a deeper reality, perhaps also of quantum (probabilistic) nature. This search for the underlying structure of spacetime is part of the wider effort of bringing together quantum physics and the theory of gravitation under the same conceptual umbrella. From various theoretical considerations, it is inferred that this unification should account for physics at the incredibly small scale set by the Planck length, 10⁻³⁵ m, where the effects of gravitation and quantum physics would be comparable. What happens below this scale, which concepts will survive in the new description of the world, is not known. An important point is that, in order to incorporate the main conceptual innovation of general relativity, the theory should be background-independent. This contrasts with the case of the other fields (electromagnetic, Dirac, etc.) that live in the classical background provided by gravitation. The problem with quantizing gravitation is – if we believe that the general theory of relativity holds in the regime where quantum effects of gravitation would appear, that is, beyond the Planck scale – that there is no underlying background on which the gravitational field lives. There are several suggestions and models for a “pre-geometry” (a term introduced by Wheeler) that are currently actively investigated. This is a question of ongoing investigation and debate, and several research programs in quantum gravity (loops, spinfoams, noncommutative geometry, dynamical triangulations, etc.) have proposed different lines of attack. Spacetime would then be an emergent entity, an approximation valid only at scales much larger than the Planck length. Incidentally, nothing guarantees that background-independence itself is a fundamental concept that will survive in the new theory.
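The Planck length quoted above is the scale at which the gravitational constant, the reduced Planck constant, and the speed of light combine into a length, l_P = √(ħG/c³). A quick check with standard CODATA values:

```python
# Planck length l_P = sqrt(hbar * G / c^3), using CODATA SI values.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # Newtonian gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light in vacuum, m/s

l_planck = math.sqrt(hbar * G / c**3)
print(f"{l_planck:.3e} m")   # ~1.616e-35 m
```

This recovers the order of magnitude 10⁻³⁵ m cited in the text as the scale where quantum and gravitational effects become comparable.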
For example, string theory is an approach to unifying the Standard Model of particle physics with gravitation which uses quantization in a fixed (non-dynamic) background. In string theory, gravitation is just another force, with the graviton (zero mass and spin 2) obtained as one of the string modes in the perturbative expansion. A background-independent formulation of string theory would be a great achievement, but so far it is not known if it can be achieved.