# Carnap, c-notions. Thought of the Day 87.0

A central distinction for Carnap is that between definite and indefinite notions. A definite notion is one that is recursive, such as “is a formula” and “is a proof of φ”. An indefinite notion is one that is non-recursive, such as “is an ω-consequence of PA” and “is true in Vω+ω”. This leads to a distinction between (i) the method of derivation (or d-method), which investigates the semi-definite (recursively enumerable) metamathematical notions, such as demonstrable, derivable, refutable, resoluble, and irresoluble, and (ii) the method of consequence (or c-method), which investigates the (typically) non-recursively enumerable metamathematical notions such as consequence, analytic, contradictory, determinate, and synthetic.

A language for Carnap is what we would today call a formal axiom system. The rules of the formal system are definite (recursive) and Carnap is fully aware that a given language cannot include its own c-notions. The logical syntax of a language is what we would today call metatheory. It is here that one formalizes the c-notions for the (object) language. From among the various c-notions Carnap singles out one as central, namely, the notion of (direct) consequence; from this c-notion all of the other c-notions can be defined in routine fashion.

We now turn to Carnap’s account of his fundamental notions, most notably, the analytic/synthetic distinction and the division of primitive terms into ‘logico-mathematical’ and ‘descriptive’. Carnap actually has two approaches. The first approach occurs in his discussion of specific languages – Languages I and II. Here he starts with a division of primitive terms into ‘logico-mathematical’ and ‘descriptive’ and upon this basis defines the c-notions, in particular the notions of being analytic and synthetic. The second approach occurs in the discussion of general syntax. Here Carnap reverses procedure: he starts with a specific c-notion – namely, the notion of direct consequence – and he uses it to define the other c-notions and draw the division of primitive terms into ‘logico-mathematical’ and ‘descriptive’.

In the first approach Carnap introduces two languages – Language I and Language II. The background languages (in the modern sense) of Language I and Language II are quite general – they include expressions that we would call ‘descriptive’. Carnap starts with a demarcation of primitive terms into ‘logico-mathematical’ and ‘descriptive’. The expressions he classifies as ‘logico-mathematical’ are exactly those included in the modern versions of these systems; the remaining expressions are classified as ‘descriptive’. Language I is a version of Primitive Recursive Arithmetic and Language II is a version of finite type theory built over Peano Arithmetic. The d-notions for these languages are the standard proof-theoretic ones.

For Language I Carnap starts with a consequence relation based on two rules – (i) the rule that allows one to infer φ if T ⊢ φ (where T is some fixed Σ⁰₁-complete formal system) and (ii) the ω-rule. It is then easily seen that one has a complete theory for the logico-mathematical fragment, that is, for any logico-mathematical sentence φ, either φ or ¬φ is a consequence of the null set. The other c-notions are then defined in the standard fashion. For example, a sentence is analytic if it is a consequence of the null set; contradictory if its negation is analytic; and so on.
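Schematically, and in modern notation that is ours rather than Carnap's (⊢ for derivability in T, ⊩ for the consequence relation), the setup for Language I can be rendered as:

```latex
% Direct consequence for Language I (modern reconstruction):
% (i) infer \varphi whenever T \vdash \varphi, for the fixed
%     \Sigma^0_1-complete system T; (ii) the \omega-rule:
\[
\frac{\varphi(0),\ \varphi(1),\ \varphi(2),\ \dots}{\forall n\,\varphi(n)}
\qquad (\omega\text{-rule})
\]
% Derived c-notions:
\[
\varphi \text{ is analytic} \iff \varnothing \Vdash \varphi,
\qquad
\varphi \text{ is contradictory} \iff \varnothing \Vdash \neg\varphi.
\]
```

The ω-rule, having infinitely many premises, is what pushes the consequence relation beyond the recursively enumerable d-notions and yields completeness for the logico-mathematical fragment.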

For Language II Carnap starts by defining analyticity. His definition is a notational variant of the Tarskian truth definition with one important difference – namely, it involves an asymmetric treatment of the logico-mathematical and descriptive expressions. For the logico-mathematical expressions his definition really just is a notational variant of the Tarskian truth definition. But sentences containing descriptive expressions must pass a more stringent test to count as analytic – they must be such that if one replaces all descriptive expressions in them by variables of the appropriate type, then the resulting logico-mathematical sentence is analytic, that is, true. In other words, to count as analytic, a sentence containing descriptive expressions must be a substitution-instance of a general logico-mathematical truth. With this definition in place the other c-notions are defined in the standard fashion.
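The asymmetric clause for descriptive vocabulary can be stated schematically as follows (the substitution notation is an assumption of this reconstruction, not Carnap's own):

```latex
% Analyticity for Language II (schematic): for a sentence \varphi whose
% primitive descriptive symbols are d_1,\dots,d_k,
\[
\varphi \text{ is analytic} \iff
\forall x_1 \cdots \forall x_k\;
\varphi[d_1/x_1,\dots,d_k/x_k] \text{ is true,}
\]
% where \varphi[d_i/x_i] replaces each d_i by a variable x_i of the
% appropriate type, so that the right-hand side is a purely
% logico-mathematical sentence evaluated by the Tarskian clauses.
```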

The content of a sentence is defined to be the set of its non-analytic consequences. It then follows immediately from the definitions that logico-mathematical sentences (of both Language I and Language II) are analytic or contradictory and (assuming consistency) that analytic sentences are without content.

In the second approach, for a given language, Carnap starts with an arbitrary notion of direct consequence and from this notion he defines the other c-notions in the standard fashion. More importantly, in addition to defining the other c-notions, Carnap also uses the primitive notion of direct consequence (along with the derived c-notions) to effect the classification of terms into ‘logico-mathematical’ and ‘descriptive’. The guiding idea is that “the formally expressible distinguishing peculiarity of logical symbols and expressions [consists] in the fact that each sentence constructed solely from them is determinate”. Howsoever the guiding idea is implemented, the actual division between ‘logico-mathematical’ and ‘descriptive’ expressions that one obtains as output is sensitive to the scope of the direct consequence relation with which one starts.

With this basic division in place, Carnap can now draw various derivative divisions, most notably, the division between analytic and synthetic statements: Suppose φ is a consequence of Γ. Then φ is said to be an L-consequence of Γ if either (i) φ and the sentences in Γ are logico-mathematical, or (ii) letting φ’ and Γ’ be the result of unpacking all descriptive symbols, then for every result φ” and Γ” of replacing every (primitive) descriptive symbol by an expression of the same genus, maintaining equal expressions for equal symbols, we have that φ” is a consequence of Γ”. Otherwise φ is a P-consequence of Γ. This division of the notion of consequence into L-consequence and P-consequence induces a division of the notion of demonstrable into L-demonstrable and P-demonstrable and the notion of valid into L-valid and P-valid and likewise for all of the other d-notions and c-notions. The terms ‘analytic’, ‘contradictory’, and ‘synthetic’ are used for ‘L-valid’, ‘L-contravalid’, and ‘L-indeterminate’.
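In schematic form (⊩ for the given consequence relation, σ for a genus-respecting substitution; both pieces of notation are ours, not Carnap's):

```latex
% L-consequence (reconstruction): given \Gamma \Vdash \varphi,
\[
\Gamma \Vdash_L \varphi \iff
\begin{cases}
\text{(i) } \varphi \text{ and all of } \Gamma \text{ are logico-mathematical, or}\\[2pt]
\text{(ii) } \Gamma'^{\,\sigma} \Vdash \varphi'^{\,\sigma}
  \text{ for every genus-respecting substitution } \sigma,
\end{cases}
\]
% where \varphi' and \Gamma' unpack all defined descriptive symbols, and
% \sigma uniformly replaces each primitive descriptive symbol by an
% expression of the same genus. Otherwise \varphi is a P-consequence of
% \Gamma; analytic = L-valid, contradictory = L-contravalid,
% synthetic = L-indeterminate.
```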

It follows immediately from the definitions that logico-mathematical sentences are analytic or contradictory and that analytic sentences are without content. The trouble with the first approach is that the definitions of analyticity that Carnap gives for Languages I and II are highly sensitive to the original classification of terms into ‘logico-mathematical’ and ‘descriptive’. And the trouble with the second approach is that the division between ‘logico-mathematical’ and ‘descriptive’ expressions (and hence division between ‘analytic’ and ‘synthetic’ truths) is sensitive to the scope of the direct consequence relation with which one starts. This threatens to undermine Carnap’s thesis that logico-mathematical truths are analytic and hence without content.

In the first approach, the original division of terms into ‘logico-mathematical’ and ‘descriptive’ is made by stipulation and if one alters this division one thereby alters the derivative division between analytic and synthetic sentences. For example, consider the case of Language II. If one calls only the primitive terms of first-order logic ‘logico-mathematical’ and then extends the language by adding the machinery of arithmetic and set theory, then, upon running the definition of ‘analytic’, one will have the result that true statements of first-order logic are without content while (the distinctive) statements of arithmetic and set theory have content. For another example, if one takes the language of arithmetic, calls the primitive terms ‘logico-mathematical’ and then extends the language by adding the machinery of finite type theory, calling the basic terms ‘descriptive’, then, upon running the definition of ‘analytic’, the result will be that statements of first-order arithmetic are analytic or contradictory while (the distinctive) statements of second- and higher-order arithmetic are synthetic and hence have content. In general, by altering the input, one alters the output, and Carnap adjusts the input to achieve his desired output.

In the second approach, there are no constraints on the scope of the direct consequence relation with which one starts and if one alters it one thereby alters the derivative division between ‘logico-mathematical’ and ‘descriptive’ expressions. Logical symbols and expressions have the feature that sentences composed solely of them are determinate. The trouble is that the resulting division of terms into ‘logico-mathematical’ and ‘descriptive’ will be highly sensitive to the scope of the direct consequence relation with which one starts. For example, let S be first-order PA and for the direct consequence relation take “provable in PA”. Under this assignment Fermat’s Last Theorem will be deemed descriptive, synthetic, and to have non-trivial content. For an example at the other extreme, let S be an extension of PA that contains a physical theory and let the notion of direct consequence be given by a Tarskian truth definition for the language. Since in the metalanguage one can prove that every sentence is true or false, every sentence will be either analytic (and so have null content) or contradictory (and so have total content). To overcome such counter-examples and get the classification that Carnap desires one must ensure that the consequence relation is (i) complete for the sublanguage consisting of expressions that one wants to come out as ‘logico-mathematical’ and (ii) not complete for the sublanguage consisting of expressions that one wants to come out as ‘descriptive’. Once again, by altering the input, one alters the output.
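The sensitivity to the chosen direct-consequence relation can be illustrated with a toy sketch. Everything below – the mini-language, both "consequence" relations, and the name `FLT` standing in for an arithmetic truth unprovable in the weak system – is an illustrative assumption, not Carnap's formalism:

```python
# Toy illustration: Carnap's classification of a sentence depends on the
# chosen direct-consequence relation. All names here are hypothetical.

def neg(s):
    """Syntactic negation in the toy language."""
    return s[4:] if s.startswith("not-") else "not-" + s

# Relation 1: a weak "provability-style" analytic set (decides only
# trivial identities; "FLT" stands in for an unprovable arithmetic truth).
provable = {"0=0", "not-0=1"}

# Relation 2: a "truth-definition-style" analytic set (decides everything).
true = {"0=0", "not-0=1", "FLT"}

def classify(s, analytic_set):
    """Determinate = analytic or contradictory; otherwise synthetic."""
    if s in analytic_set:
        return "analytic"
    if neg(s) in analytic_set:
        return "contradictory"
    return "synthetic"

# Same sentence, opposite verdicts under the two relations:
assert classify("FLT", provable) == "synthetic"  # contentful
assert classify("FLT", true) == "analytic"       # without content
```

Under the weak relation the theorem comes out synthetic and contentful; under the truth-style relation it is analytic and without content, mirroring the two counter-examples above.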

Carnap merely provides us with a flexible piece of technical machinery involving free parameters that can be adjusted to yield a variety of outcomes concerning the classifications of analytic/synthetic, contentful/non-contentful, and logico-mathematical/descriptive. In his own case, he has adjusted the parameters in such a way that the output is a formal articulation of his logicist view of mathematics that the truths of mathematics are analytic and without content. And one can adjust them differently to articulate a number of other views, for example, the view that the truths of first-order logic are without content while the truths of arithmetic and set theory have content. The point, however, is that we have been given no reason for fixing the parameters one way rather than another. The distinctions are thus not principled distinctions. It is trivial to prove that mathematics is trivial if one trivializes the claim.

Carnap is perfectly aware that to define c-notions like analyticity one must ascend to a stronger metalanguage. However, there is a distinction that he appears to overlook, namely, the distinction between (i) having a stronger system S that can define ‘analytic in S’ and (ii) having a stronger system S that can, in addition, evaluate a given statement of the form ‘φ is analytic in S’. It is an elementary fact that two systems S1 and S2 can employ the same definition (from an intensional point of view) of ‘analytic in S’ (using either the definition given for Language I or Language II) but differ on their evaluation of ‘φ is analytic in S’ (that is, differ on the extension of ‘analytic in S’). Thus, to determine whether ‘φ is analytic in S’ holds one needs to access much more than the “syntactic design” of φ – in addition to ascending to an essentially richer metalanguage one must move to a sufficiently strong system to evaluate ‘φ is analytic in S’.

In fact, to answer the question ‘Is φ analytic in Language I?’ is just to answer φ; and, in the more general setting, to answer all questions of the form ‘Is φ analytic in S?’ (for various mathematical φ and S), where ‘analytic’ is defined as Carnap defines it for Language II, is just to answer all questions of mathematics. The same, of course, applies to the c-notion of consequence. So, when, in first stating the Principle of Tolerance, Carnap tells us that we can choose our system S arbitrarily and that ‘no question of justification arises at all, but only the question of the syntactical consequences to which one or other of the choices leads’, it is the c-notion of consequence that he means.

# Transcendentally Realist Modality. Thought of the Day 78.1

Let us start at the beginning. Though the fact is not mentioned in Genesis, the first thing God said on the first day of creation was ‘Let there be necessity’. And there was necessity. And God saw necessity, that it was good. And God divided necessity from contingency. And only then did He say ‘Let there be light’. Several days later, Adam and Eve were introducing names for the animals into their language, and during a break between the fish and the birds, introduced also into their language modal auxiliary verbs, or devices that would be translated into English using modal auxiliary verbs, and rules for their use, rules according to which it can be said of some things that they ‘could’ have been otherwise, and of other things that they ‘could not’. In so doing they were merely putting labels on a distinction that was no more their creation than were the fishes of the sea or the beasts of the field or the birds of the air.

And here is the rival view. The failure of Genesis to mention any command ‘Let there be necessity’ is to be explained simply by the fact that no such command was issued. We have no reason to suppose that the language in which God speaks to the angels contains modal auxiliary verbs or any equivalent device. Sometime after the Tower of Babel some tribes found that their purposes would be better served by introducing into their language certain modal auxiliary verbs, and fixing certain rules for their use. When we say that this is necessary while that is contingent, we are applying such rules, rules that are products of human, not divine intelligence.

This theological language would have been the natural way for seventeenth- or eighteenth-century philosophers, who nearly all were or professed to be theists or deists, to discuss the matter. For many today, such language cannot be accepted literally, and taking it only metaphorically does little better than other figurative framings of the question, such as asking whether the ‘origin’ of necessity lies outside us or within us. So let us drop the theological language, and try again.

Well, here is the first view: Ultimately, reality as it is in itself, independently of our attempts to conceptualize and comprehend it, contains both facts about what is, and superfacts about what not only is but had to have been. Our modal usages, for instance, the distinction between the simple indicative ‘is’ and the construction ‘had to have been’, simply reflect this fundamental distinction in the world, a distinction that is and from the beginning always was there, independently of us and our concerns.

And here is the second view: We have reasons, connected with our various purposes in life, to use certain words, including ‘would’ and ‘might’, in certain ways, and thereby to make certain distinctions. The distinction between those things in the world that would have been no matter what and those that might have failed to be, had things been otherwise, is a projection of the distinctions made in our language. Our saying there were necessities there before us is a retroactive application to the pre-human world of a way of speaking invented and created by human beings in order to solve human problems.

Well, that’s the second try. With it, even if one has gotten rid of theology, one has unfortunately not gotten rid of all metaphor. The key remaining metaphor is the optical one: reflection vs projection. Perhaps the attempt should be to get rid of all metaphors, and to admit that the two views are not so much philosophical theses or doctrines as ‘metaphilosophical’ attitudes or orientations: the stance that finds the ‘reflection’ metaphor congenial, and the stance that finds the ‘projection’ metaphor congenial. So, let’s try a third time to describe the distinction between the two outlooks in literal terms, avoiding optics as well as theology.

To begin with, both sides grant that there is a correspondence or parallelism between two items. On the one hand, there are facts about the contrast between what is necessary and what is contingent. On the other hand, there are facts about our usage of modal auxiliary verbs such as ‘would’ and ‘might’, and these include, for instance, the fact that we have no use for questions of the form ‘Would 29 still have been a prime number if such-and-such?’ but may have use for questions of the form ‘Would 29 still have been the number of years it takes for Saturn to orbit the sun if such-and-such?’ The difference between the two sides concerns the order of explanation of the relation between the two parallel ranges of facts.

And what is meant by that? Well, both sides grant that ‘29 is necessarily prime’, for instance, is a proper thing to say, but they differ in the explanation why it is a proper thing to say. Asked why, the first side will say that ultimately it is simply because 29 is necessarily prime. That makes the proposition that 29 is necessarily prime true, and since the sentence ‘29 is necessarily prime’ expresses that proposition, it is true also, and a proper thing to say. The second side will say instead that ‘29 is necessarily prime’ is a proper thing to say because there is a rule of our language according to which it is a proper thing to say. This formulation of the difference between the two sides gets rid of metaphor, though it does put an awful lot of weight on the perhaps fragile ‘why’ and ‘because’.

Note that the adherents of the second view need not deny that 29 is necessarily prime. On the contrary, having said that the sentence ‘29 is necessarily prime’ is, per rules of our language, a proper thing to say, they will go on to say it. Nor need the adherents of the first view deny that recognition of the propriety of saying ‘29 is necessarily prime’ is enshrined in a rule of our language. The adherents of the first view need not even deny that proximately, as individuals, we learn that ‘29 is necessarily prime’ is a proper thing to say by picking up the pertinent rule in the course of learning our language. But the adherents of the first view will maintain that the rule itself is only proper because collectively, as the creators of the language, we or our remote ancestors have, in setting up the rule, managed to achieve correspondence with a pre-existing fact, or rather, a pre-existing superfact, the superfact that 29 is necessarily prime. The difference between the two views lies in the order of explanation.

As for labels for the two sides, or ‘metaphilosophical’ stances, rather than inventing new ones we may simply take two of the most overworked terms in the philosophical lexicon and give them one more job to do, calling the reflection view ‘realism’ about modality, and the projection view ‘pragmatism’. That at least will be easy to remember, since ‘realism’ and ‘reflection’ begin with the same first two letters, as do ‘pragmatism’ and ‘projection’. The realist/pragmatist distinction has bearing across a range of issues and problems, and above all it has bearing on the meta-issue of which issues are significant. For the two sides will, or ought to, recognize quite different questions as the central unsolved problems in the theory of modality.

For those on the realist side, the old problem of the ultimate source of our knowledge of modality remains, even if it is granted that the proximate source lies in knowledge of linguistic conventions. For knowledge of linguistic conventions constitutes knowledge of a reality independent of us only insofar as our linguistic conventions reflect, at least to some degree, such an ultimate reality. So for the realist the problem remains of explaining how such degree of correspondence as there is between distinctions in language and distinctions in the world comes about. If the distinction in the world is something primary and independent, and not a mere projection of the distinction in language, then how the distinction in language comes to be even imperfectly aligned with the distinction in the world remains to be explained. For it cannot be said that we have faculties responsive to modal facts independent of us – not in any sense of ‘responsive’ implying that if the facts had been different, then our language would have been different, since modal facts couldn’t have been different. What then is the explanation? This is the problem of the epistemology of modality as it confronts the realist, and addressing it is or ought to be at the top of the realist agenda.

As for the pragmatist side, a chief argument of thinkers from Kant to Ayer and Strawson and beyond for their anti-realist stance has been precisely that if the distinction we perceive in reality is taken to be merely a projection of a distinction created by ourselves, then the epistemological problem dissolves. That seems more like a reason for hoping the Kantian or Ayerite or Strawsonian view is the right one than for believing that it is; but in any case, even supposing the pragmatist view is the right one, and the problems of the epistemology of modality are dissolved, still the pragmatist side has an important unanswered question of its own to address. The pragmatist account begins by saying that we have certain reasons, connected with our various purposes in life, to use certain words, including ‘would’ and ‘might’, in certain ways, and thereby to make certain distinctions. What the pragmatist owes us is an account of what these purposes are, and how the rules of our language help us to achieve them. Addressing that issue is or ought to be at the top of the pragmatists’ to-do list.

While the positivist Ayer dismisses all metaphysics, the ordinary-language philosopher Strawson distinguishes good metaphysics, which he calls ‘descriptive’, from bad metaphysics, which he calls ‘revisionary’, but which might better be called ‘transcendental’ (without intending any specifically Kantian connotations). Descriptive metaphysics aims to provide an explicit account of our ‘conceptual scheme’, of the most general categories of commonsense thought, as embodied in ordinary language. Transcendental metaphysics aims to get beyond or behind all merely human conceptual schemes and representations to ultimate reality as it is in itself, an aim that Ayer and Strawson agree is infeasible and probably unintelligible. The descriptive/transcendental divide in metaphysics is a paradigmatically ‘metaphilosophical’ issue, one about what philosophy is about. Realists about modality are paradigmatic transcendental metaphysicians. Pragmatists must in the first instance be descriptive metaphysicians, since we must first understand much better than we currently do how our modal distinctions work and what work they do for us, before proposing any revisions or reforms. And so the difference between realists and pragmatists goes beyond the question of what issue should come first on the philosopher’s agenda, being as it is an issue about what philosophical agendas are about.

# Concepts – Intensional and Extensional.

Let us start in this fashion: there are objects to which concepts apply (or not). The first step in arriving at a theory for this situation is to assume that the objects in question are completely arbitrary (like urelements in set theory). This assumption is evidently wrong in empirical experience, as also in mathematics itself, e.g., in function theory. So admitting this assumption forces us, in building our own theory of sets, to take care of the case of complex objects later on.

Concepts are normally given to us by linguistic expressions, disregarding by abstraction the origin of languages or signals or what have you. Now we can develop a theory of concepts as follows. We idealize our language by fixing a vocabulary together with logical operators and formulate expressions for classes, functions, and relations in the manner of the λ-calculus. Here we have actually a theory of concepts, understood intensionally. Note that the extensional point of view is by no means lost, since we read, e.g., λx,y.R(x,y) as the relation R over a domain of urelements; but either R is in the vocabulary or given by a composed expression in our logical language; equality does not refer to equal extensions but to logical equivalence and reduction processes. By the way, there is no hindrance to applying λ-expressions again to λ-expressions, so that hierarchies of concepts can be included.
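A minimal sketch of this intensional/extensional contrast, using Python functions as stand-ins for λ-expressions (the toy domain and the particular concepts are illustrative assumptions):

```python
# Concepts as λ-expressions: intensionally distinct, extensionally equal.

DOMAIN = range(10)  # toy domain of urelements

# Two different λ-expressions (intensions) for "even":
even_a = lambda x: x % 2 == 0
even_b = lambda x: (x + 2) % 2 == 0

# Extensionally they determine the same class over the domain...
ext_a = {x for x in DOMAIN if even_a(x)}
ext_b = {x for x in DOMAIN if even_b(x)}
assert ext_a == ext_b

# ...but as expressions they are distinct: identifying them would require
# a proof of logical equivalence (a reduction process), not a mere
# comparison of extensions.
assert even_a is not even_b

# λ-expressions applied to λ-expressions: a hierarchy of concepts.
nonempty = lambda concept: any(concept(x) for x in DOMAIN)
assert nonempty(even_a)
```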

Another approach to obtaining a theory of concepts is the algebraic one. Here one introduces variables for extensions over a domain of urelements, calls them classes, and develops an axiomatic class calculus. Adding (two-place) relations, again governed by axioms, yields the relation calculus. One could go a step further to polyadic algebra. These theories do not have a prominent role nowadays, if one compares them with the λ-calculus or set theory. This is probably due to the circumstance that it seems difficult, not to say actually against the proper idea behind these theories, to allow iteration in the sense of classes of classes, etc.

For mathematical purposes and for the use of logic, the appropriate way is to restrict a theory of concepts to a theory of their extensions. This has a good reason, since in an abstract theory we are interested in being as neutral as possible with respect to a description or factual theory given beforehand. There is a philosophical principle behind this, namely that logical (and in this case set-theoretical) assumptions should be as far as possible distinguishable from any factual or descriptive assumption.

# Of Phenomenology, Noumenology and Appearances. Note Quote.

Heidegger’s project in Being and Time does not itself escape completely the problematic of transcendental reflection. The idea of fundamental ontology, its foundation in Dasein as that being which is concerned “with being”, and the analysis of Dasein at first seemed simply to mark a new dimension within transcendental phenomenology. But under the title of a hermeneutics of facticity, Heidegger objected to Husserl’s eidetic phenomenology that a hermeneutic phenomenology must contain also a theory of facticity, which is not in itself an eidos. Husserl’s phenomenology, which consistently holds to the central idea of the proto-I, cannot be accepted without reservation in interpretation theory, in particular because this eidos belongs only to the eidetic sphere of universal essences. Phenomenology should instead be based ontologically on the facticity of Dasein, and this existence cannot be derived from anything else.

Nevertheless, Heidegger’s complete reversal of reflection and its redirection toward “Being”, i.e., the turn or Kehre, is still not so much an alteration of his point of view as the indirect result of his critique of Husserl’s concept of transcendental reflection, which had not yet become fully effective in Being and Time. Gadamer, however, would incorporate Husserl’s ideal of an eidetic ontology somewhat “alongside” transcendental constitutional research. Here, the philosophical justification lies ultimately in the completion of the transcendental reduction, which can come only at a higher level of direct access of the individual to the object. Thus there is a question of how our awareness of essences remains subordinated to transcendental phenomenology, but this does not rule out the possibility of turning transcendental phenomenology into an essence-oriented mundane science.

Heidegger does not follow Husserl from eidetic to transcendental phenomenology, but stays with the interpretation of phenomena in relation to their essences. As ‘hermeneutic’, his phenomenology still proceeds from a given Dasein in order to determine the meaning of existence, but now it takes the form of a fundamental ontology. By his careful discussion of the etymology of the words “phenomenon” and “Logos” he shows that “phenomenology” must be taken as letting that which shows itself be seen from itself, and in the very way in which it shows itself from itself. The more genuinely a methodological concept is worked out and the more comprehensively it determines the principles on which a science is to be conducted, the more deeply and primordially it is rooted in terms of the things themselves; whereas if understanding is restricted to the things themselves only so far as they correspond to those judgments considered “first in themselves”, then the things themselves cannot be addressed beyond particular judgements regarding events.

The doctrine of the thing-in-itself entails the possibility of a continuous transition from one aspect of a thing to another, which alone makes possible a unified matrix of experience. Husserl’s idea of the thing-in-itself, as Gadamer introduces it, must be understood in terms of the hermeneutic progress of our knowledge. In other words, in the hermeneutical context the maxim to the thing itself signifies to the text itself. Phenomenology here means grasping the text in such a way that every interpretation about the text must be considered first as directly exhibiting the text and then as demonstrating it with regard to other texts.

Heidegger called this “descriptive phenomenology”, which is fundamentally tautological. He explains that phenomenon in Greek signifies first that which looks like something, and secondly that which is semblant or a semblance (das Scheinbare, der Schein). He sees both these expressions as structurally interconnected, and as having nothing to do with what is called an “appearance” or mere “appearance”. Based on the ordinary conception of phenomenon, the definition of “appearance” as a relation of referring can also be regarded as characterizing the phenomenological concern for the text in itself and for itself. Only through referring to the text in itself can we have a real phenomenology based on appearance. This theory, however, requires a broad meaning of appearance, including what does the referring as well as the noumenon.

Heidegger explains that what does the referring must show itself in itself. Further, the appearance “of something” does not mean showing-itself, but that the thing itself announces itself through something which does show itself. Thus, Heidegger urges that what appears does not show itself and anything which fails to show itself can never seem. On the other hand, while appearing is never a showing-itself in the sense of phenomenon, it is preconditioned by something showing-itself (through which the thing announces itself). This showing itself is not appearing itself, but makes the appearing possible. Appearing then is an announcing-itself (das sich-melden) through something that shows itself.

Also, a phenomenon cannot be represented by the word “appearance” if it alludes to that wherein something appears without itself being an appearance. That wherein something appears means that wherein something announces itself without showing itself, in other words without being itself an “appearance” (appearance signifying the showing itself which belongs essentially to that “wherein” something announces itself). Based upon this argument, phenomena are never appearances. This, however, does not deny the fact that every appearance is dependent on phenomena.