# Mathematical Reductionism: A Case Via C. S. Peirce’s Hypothetical Realism.

During the 20th century, the following epistemology of mathematics was predominant: a sufficient condition for the possibility of the cognition of objects is that these objects can be reduced to set theory. The conditions for the possibility of the cognition of the objects of set theory (the sets), in turn, can be given in various manners; in any event, the objects reduced to sets do not need an additional epistemological discussion – they “are” sets. Hence, such an epistemology relies ultimately on ontology. Frege conceived the axioms as descriptions of how we actually manipulate extensions of concepts in our thinking (and in this sense as inevitable and intuitive “laws of thought”). Hilbert admitted the use of intuition exclusively in metamathematics where the consistency proof is to be done (by which the appropriateness of the axioms would be established); Bourbaki takes the axioms as mere hypotheses. Hence, Bourbaki’s concept of justification is the weakest of the three: “it works as long as we encounter no contradiction”; nevertheless, it is still epistemology, because from this hypothetical-deductive point of view, one insists that at least a proof of relative consistency (i.e., a proof that the hypotheses are consistent with the frequently tested and approved framework of set theory) should be available.

Doing mathematics, one tries to give proofs for propositions, i.e., to deduce the propositions logically from other propositions (premisses). Now, in the reductionist perspective, a proof of a mathematical proposition yields an insight into the truth of the proposition, if the premisses are already established (if one already has an insight into their truth); this can be done by giving in turn proofs for them (in which new premisses will occur which ask again for an insight into their truth), or by agreeing to put them at the beginning (to consider them as axioms or postulates). The philosopher tries to understand how the decision about what propositions to take as axioms is arrived at, because he or she is dissatisfied with the reductionist claim that it is on these axioms that the insight into the truth of the deduced propositions rests. Actually, this epistemology might contain a shortcoming, since Poincaré (and Wittgenstein) stressed that to have a proof of a proposition is by no means the same as to have an insight into its truth.

Attempts to disclose the ontology of mathematical objects reveal the following tendency in the epistemology of mathematics: mathematics is seen as suffering from a lack of ontological “determinateness”, namely that this science (contrary to many others) does not concern material data, so that the concept of material truth is not available (especially in the case of the infinite). This tendency is embarrassing since, on the other hand, mathematical cognition is very often presented as cognition of the “greatest possible certainty”, precisely because it seems not to be bound to material evidence, let alone experimental check.

The technical apparatus developed by the reductionist and set-theoretical approach nowadays serves other purposes, partly because tacit beliefs about sets were challenged; the explanations of the science which it provides are considered irrelevant by the practitioners of this science. There is doubt that the above-mentioned sufficient condition is also necessary; it is not even accepted throughout as a sufficient one. But what happens if some objects, as in the case of category theory, do not fulfill the condition? It seems that the reductionist approach has, so to say, been undocked from the historical development of the discipline in several respects; an alternative is required.

Anterior to Peirce, epistemology was dominated by the idea of a grasp of objects; since Descartes, intuition was considered throughout as a particular, innate capacity of cognition (even if idealists thought that it concerns the general, and empiricists that it concerns the particular). The task of this particular capacity was the foundation of epistemology; already with Aristotle’s first premisses of the syllogism, the aim was to go back to something first. In this traditional approach, it is by the ontology of the objects that one hopes to answer the fundamental question concerning the conditions for the possibility of the cognition of these objects. One hopes that there are simple “basic objects” to which the more complex objects can be reduced and whose cognition is possible by common sense – be this an innate or otherwise distinguished capacity of cognition common to all human beings. Here, epistemology is “wrapped up” in (or rests on) ontology; to do epistemology, one has to do ontology first.

Peirce shares Kant’s opinion according to which the object depends on the subject; however, he does not agree that reason is the crucial means of cognition to be criticised. In his paper “Questions concerning certain faculties claimed for man”, he points out the basic assumption of pragmatist philosophy: every cognition is semiotically mediated. He says that there is no immediate cognition (a cognition which “refers immediately to its object”), but that every cognition “has been determined by a previous cognition” of the same object. Correspondingly, Peirce replaces critique of reason by critique of signs. He thinks that Kant’s distinction between the world of things per se (Dinge an sich) and the world of appearances (Erscheinungswelt) is not fruitful; he rather distinguishes the world of the subject and the world of the object, connected by signs; his position consequently is a “hypothetical realism” in which all cognitions are only valid with reservations. This position does not negate (nor assert) that the object per se (with the semiotical mediation stripped off) exists, since such assertions of “pure” existence are seen as necessarily hypothetical (that is, not able to withstand philosophical criticism).

By his basic assumption, Peirce was led to reveal a problem concerning the subject matter of epistemology, since this assumption means in particular that there is no intuitive cognition in the classical sense (synonymous with “immediate”). Hence, one could no longer consider cognitions as objects; there is no intuitive cognition of an intuitive cognition. Intuition can be no more than a relation. “All the cognitive faculties we know of are relative, and consequently their products are relations”. According to this new point of view, intuition can no longer serve to found epistemology – a departure from the former reductionist attitude. A central argument of Peirce against reductionism, or, as he puts it,

the reply to the argument that there must be a first is as follows: In retracing our way from our conclusions to premisses, or from determined cognitions to those which determine them, we finally reach, in all cases, a point beyond which the consciousness in the determined cognition is more lively than in the cognition which determines it.

Peirce gives some examples derived from physiological observations about perception, like the fact that the third dimension of space is inferred, and the blind spot of the retina. In this situation, the process of reduction loses its legitimacy since it no longer fulfills the function of justifying cognition. At such a place, something happens which I would like to call an “exchange of levels”: the process of reduction is interrupted in that the things exchange the roles performed in the determination of a cognition: what was originally considered as determining is now determined by what was originally considered as asking for determination.

The idea that contents of cognition are necessarily provisional has an effect on the very concept of conditions for the possibility of cognitions. It seems that one can infer from Peirce’s words that what vouches for a cognition is not necessarily the cognition which determines it but the liveliness of our consciousness in the cognition. Here, “to vouch for a cognition” no longer means what it meant before (which was much the same as “to determine a cognition”), but it still means that the cognition is (provisionally) reliable. This conception of the liveliness of our consciousness might roughly be seen as a substitute for the capacity of intuition in Peirce’s epistemology – but only roughly, since it has a different coverage.

# Task of the Philosopher. Thought of the Day 75.0

Poincaré in Science and Method discusses how “reasonable” axioms (theories) are chosen. In a section which is intended to cool down the expectations put in the “logistic” project, he points out the problem as follows:

Even admitting that it has been established that all theorems can be deduced by purely analytical processes, by simple logical combinations of a finite number of axioms, and that these axioms are nothing but conventions, the philosopher would still retain the right to seek the origin of these conventions, and to ask why they were judged preferable to the contrary conventions.

[…] A selection must be made out of all the constructions that can be combined with the materials furnished by logic. The true geometrician makes this decision judiciously, because he is guided by a sure instinct, or by some vague consciousness of I know not what profounder and more hidden geometry, which alone gives a value to the constructed edifice.

Hence, Poincaré sees the task of the philosopher as the explanation of how conventions come to be. At the end of the quotation, Poincaré tries to give such an explanation, namely by referring to an “instinct” (in the sequel, he mentions briefly that one can obviously ask where such an instinct comes from, but he gives no answer to this question). The pragmatist position to be developed will lead to an essentially similar, but more complete and clear point of view.

According to Poincaré’s definition, the task of the philosopher starts where that of the mathematician ends: for a mathematician, a result is right if he or she has a proof, that means, if the result can be logically deduced from the axioms; that one has to adopt some axioms is seen as a necessary evil, and one perhaps puts some energy into the project of minimizing the number of axioms (this might be how set theory came to be thought of as a foundation of mathematics). A philosopher, however, wants to understand why exactly these axioms and no others were chosen. In particular, the philosopher is concerned with the question whether the chosen axioms actually grasp the intended model. This question is justified since formal definitions are not automatically sufficient to grasp the intention of a concept; at the same time, the question is methodologically very hard, since ultimately a concept is available in mathematical proof only through a formal explication. At any rate, it becomes clear that the task of the philosopher is related to a criterion problem.

Georg Kreisel thinks that we do indeed have the capacity to decide whether a given model was intended or not:

many formal independence proofs consist in the construction of models which we recognize to be different from the intended notion. It is a fact of experience that one can be honest about such matters! When we are shown a ‘non-standard’ model we can honestly say that it was not intended. [ . . . ] If it so happens that the intended notion is not formally definable this may be a useful thing to know about the notion, but it does not cast doubt on its objectivity.

Poincaré could not yet know (but he was experienced enough a mathematician to “feel”) that axiom systems quite often fail to grasp the intended model. It was seldom the work of professional philosophers and often the byproduct of the actual mathematical work to point out such discrepancies.

Following Kant, one defines the task of epistemology thus: to determine the conditions of the possibility of the cognition of objects. Now, what is meant by “cognition of objects”? It is meant that we have an insight into (the truth of) propositions about the objects (we can then speak about the propositions as facts); and epistemology asks what are the conditions for the possibility of such an insight. Hence, epistemology is not concerned with what objects are (ontology), but with what (and how) we can know about them (ways of access). This notwithstanding, both things are intimately related, especially, in the Peircean stream of pragmatist philosophy. The 19th century (in particular Helmholtz) stressed against Kant the importance of physiological conditions for this access to objects. Nevertheless, epistemology is concerned with logic and not with the brain. Pragmatism puts the accent on the means of cognition – to which also the brain belongs.

Kant in his epistemology stressed that the object depends on the subject, or, more precisely, that the cognition of an object depends on the means of cognition used by the subject. For him, the decisive means of cognition was reason; thus, his epistemology was to a large degree critique of reason. Other philosophers disagreed about this special role of reason but shared the view that the task of philosophy is to criticise the means of cognition. For all of them, philosophy has to point out about what we can speak “legitimately”. Such a critical approach is implicitly contained in Poincaré’s description of the task of the philosopher.

Reichenbach decomposes the task of epistemology into different parts: guiding, justification and limitation of cognition. While justification is usually considered the most important of the three aspects, the “task of the philosopher” as specified above following Poincaré is not limited to it. Indeed, the question why just certain axioms and no others were chosen is obviously a question concerning the guiding principles of cognition: which criteria are at work? Mathematics presents itself at its various historical stages as the result of a series of decisions on questions of the kind “Which objects should we consider? Which definitions should we make? Which theorems should we try to prove?” etc. – for short: instances of the “criterion problem”. Epistemology has, among other things, the task of making these criteria explicit – criteria used but not made explicit by the researchers themselves. For after all, these criteria cannot be without effect on the conditions for the possibility of cognition of the objects which one has decided to consider. (In turn, the conditions for this possibility in general determine the range of objects from which one has to choose.) However, such an epistemology does not have the task of resolving the criterion problem normatively (that is, of prescribing to the scientist which choices he or she has to make).

# Liberalism.

In a humanistic society, boundary conditions (‘laws’) are set which are designed to make the lives of human beings optimal. The laws are made by government. Yet, the skimming of surplus labor by the capital is only overshadowed by the skimming by politicians. Politicians are often ‘auto-invited’ (by colleagues) onto the boards of directors of companies (the capital), further enabling them to amass buying power. This shows that, in most countries, the differences between the capital and the political class are flimsy if not non-existent. As an example, all communist countries, in fact, were pure capitalist implementations, with the distinction that a greater share of the skimming was done by politicians than in more conventional capitalist societies.

One form of a humanistic (!!!!!????) government is socialism, which has set as its goal the welfare of humans. One can argue whether socialism is a good form through which to achieve a humanistic society. Maybe it is not an efficient way to reach this goal, whatever ‘efficient’ may mean and however difficult that concept is to define.

Another form of government is liberalism. Before we continue, it is remarkable to observe that in practical ‘liberal’ societies everything is free and allowed, except the creation of banks and the practice of banking. By definition, a ‘liberal government’ is a contradiction in terms. A truly liberal government would be called ‘anarchy’. ‘Liberal’ is a name given by politicians to make people think they are free, while in fact it is the most binding and oppressive form of government.

Liberalism, by definition, has set no boundary conditions. A liberal society has at its core the absence of goals. Everything is left free; “Let a Darwinistic survival-of-the-fittest mechanism decide which things are ‘best'”. Best are, by definition, those things that survive. That means that it might be the case that humans are a nuisance. Inefficient monsters. Does this idea look far-fetched? May it be so that in a liberal society, humans will disappear and only capital (the money and the means of production) will survive in a Darwinistic way? Mathematically it is possible. Let me show you.

Trade unions are organizations that represent the humans in this cycle, and they are the way to break the cycle and guarantee minimization of the skimming of laborers. If you are human, you should like trade unions. (If you are a bank manager, you can – and should – organize yourself in a bank-managers trade union). If you are capital, you do not like them. (And there are many spokesmen of the capital in the world, paid to propagate this dislike). Capital, however, in itself cannot ‘think’; it is not human, nor has it a brain, or a way to communicate. It is just a ‘concept’, an ‘idea’ of a ‘system’. It does not ‘like’ or ‘dislike’ anything. You are not capital, even if you are paid by it. Even if you are paid handsomely by it. Even if you are paid astronomically by it. (In the latter case you are probably just an asocial asshole!!!!). We can thus morally confiscate as much from the capital as we wish, without feeling any remorse whatsoever – as long as it does not destroy the game; destroying the game would put human happiness at risk by undermining the incentives for production and reducing access to consumption.

On the other hand, the spokesmen of the capital will always talk about labor cost containment, because that will increase the marginal profit M’-M. Remember this, next time somebody talks in the media. Who is paying their salary? To give an idea of how much you are being fleeced, compare your salary to that of difficult-to-skim, strike-prone, trade-union-bastion professions, like train drivers. The companies still hire them, implying that they still bring a net profit to the companies, in spite of their astronomical salaries. You deserve the same salary.

Continuing: for the capital, there is no ‘special place’ for human labor power LP. If the Marxian equation can be replaced by

M – C{MoP} – P – C’ – M’

i.e., without LP, capital would do just that, if that is what optimizes M’-M. Mathematically, there is no difference whatsoever between MoP and LP. The only thing a liberal system seeks is optimization. It does not care at all, in any way whatsoever, how this is achieved. The more liberal the better: fewer restrictions, more possibilities for optimizing the marginal profit M’-M. If it means the destruction of the human race, who cares? Collateral damage.

To make my point: would you care if you had to pay (feed) monkeys one-cent peanuts to find you kilo-sized gold nuggets? Do you care if no human LP is involved in your business scheme? I guess you just care about maximizing your skimming of the labor power involved, be it human, animal or mechanical. Who cares?

There is only one problem. Somebody should consume the products made (no monkey cares about your gold nuggets). That is why the French economist Jean-Baptiste Say said “Every product creates its own demand”. If nobody can pay for the products made (because no LP is paid for the work done), the products cannot be sold, and the cycle stops at the step C’-M’, the M’ becoming zero (nothing sold), the profit M’-M reduced to a loss of M, and the company goes bankrupt.
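This collapse argument can be caricatured in a few lines of code. The sketch below is a toy under invented assumptions (the numbers, the 50% markup, and the rule that wages are the only source of demand are all illustrative, not from the text): one pass through the M – C{MoP, LP} – P – C’ – M’ circuit, showing that a wage share of zero drives sales, and hence M’, to zero.

```python
# Toy one-period caricature of the M - C{MoP, LP} - P - C' - M' circuit.
# All numbers and the linear demand rule are illustrative assumptions.

def cycle(capital, wage_share):
    """Run one M -> M' circuit. wage_share is the fraction of the advanced
    capital M spent on labour power LP; the rest buys means of production MoP."""
    wages = capital * wage_share          # M -> C{LP}
    mop = capital - wages                 # M -> C{MoP}
    output_value = 1.5 * (wages + mop)    # P -> C': assume a 50% markup
    demand = wages                        # assumption: wages are the only income
    sales = min(output_value, demand)     # C' -> M': unsold goods realise nothing
    profit = sales - capital              # M' - M
    return sales, profit

# With labour in the circuit, some output is realised:
print(cycle(100, 0.5))
# Fully robotised: no wages, no demand, M' = 0, the circuit breaks:
print(cycle(100, 0.0))
```

The point of the toy is only the comparison between the two calls: removing LP from the circuit removes the demand that realises C’ as M’.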

However, individual companies can sell products, as long as there are other companies in the world still paying LP somewhere. Companies everywhere in the world thus still have a tendency to robotize their production. Companies exist in the world that are nearly fully robotized. The profit, now effectively skimming of the surplus of MoP-power instead of labor power, goes fully to the capital, since MoP has no way of organizing itself in trade unions and demanding more ‘payment’. Or – and be careful with this step here, a step Marx could never have imagined – what if the MoP start consuming as well? Imagine that a factory robot needs parts. New robot arms, electricity, water, cleaning, etc. Factories will start making these products. There is a market for them. Hail the market! Now we come to the conclusion that the ‘system’, when liberalized, will optimize production (its only intrinsic goal).

Preindustrial (without tools):

M – C{LP} – P – C’ – M’

Marxian: M – C{MoP, LP} – P – C’ – M’

Post-modern: M – C{MoP} – P – C’ – M’

If the latter is most efficient, in a completely liberalized system, it will be implemented.

This means

1) No (human) LP will be used in production

2) No humans will be paid for work of producing

3) No human consumption is possible

4) Humans will die from lack of consumption

In a Darwinistic way humanity will die to be substituted by something else; we are too inefficient to survive. We are not fit for this planet. We will be substituted by the exact things we created. There is nowhere a rule written “liberalism, with the condition that it favors humans”. No, liberalism is liberalism. It favors the fittest.

It has gone well so far. As long as we had exponential growth, even if the growth rate for MoP was far larger than the growth rate of rewards for LP, LP too was rewarded increasingly. When the exponential growth stops, when the system reaches saturation as it seems to do now, only the strongest survive. That is not necessarily mankind. Mathematically it can be either one or the other, without preference; the Marxian equation is symmetrical. The future will tell. Maybe the MoP (they will probably also acquire intelligence and reason at some point) will later discuss how they won the race, the same way we, Homo sapiens, currently talk about “those backward unfit Neanderthals”.

Your ideal dream job would be to manage the peanut bank, monopolizing the peanut supply, while the peanut eaters build for you palaces in the Italian Riviera and feed you grapes while you enjoy the scenery. Even if you were one of the few remaining humans. A world in which humans are extinct is not a far-fetched world. It might be the result of a Darwinian selection of the fittest.

# Category Theory of a Sketch. Thought of the Day 50.0

If a sketch can be thought of as an abstract concept, a model of a sketch is not so much an interpretation of a sketch, but a concrete or particular instantiation or realization of it. It is tempting to adopt a Kantian terminology here and say that a sketch is an abstract concept, a functor between a sketch and a category C a schema and the models of a sketch the constructions in the “intuition” of the concept.

The schema is not unique since a sketch can be realized in many different categories by many different functors. What varies from one category to the other is not the basic structure of the realizations, but the types of morphisms of the underlying category, e.g., arbitrary functions, continuous maps, etc. Thus, even though a sketch captures essential structural ingredients, others are given by the “environment” in which this structure will be realized, which can be thought of as being itself another structure. Hence, the “meaning” of some concepts cannot be uniquely given by a sketch, which is not to say that it cannot be given in a structuralist fashion.

We now distinguish the group as a structure, given by the sketch for the theory of groups, from the structure of groups, given by a category of groups, that is the category of models of the sketch for groups in a given category, be it Set or another category, e.g., the category of topological spaces with continuous maps. In the latter case, the structure is given by the exactness properties of the category, e.g., Cartesian closed, etc. This is an important improvement over the traditional framework in which one was unable to say whether we should talk about the structure common to all groups, usually taken to be given by the group axioms, or the structure generated by “all” groups. Indeed, one can now ask in a precise manner whether a category C of structures, e.g., the category of (small) groups, is sketchable, that is, whether there exists a sketch S such that Mod(S, Set) is equivalent as a category to C.

There is another category associated to a sketch, namely the theory of that sketch. The theory of a sketch S, denoted by Th(S), is in a sense “freely” constructed from S : the arrows of the underlying graph are freely composed and the diagrams are imposed as equations, and so are the cones and the cocones. Th(S) is in fact a model of S in the previous sense with the following universal property: for any other model M of S in a category C there is a unique functor F: Th(S) → C such that FU = M, where U: S → Th(S). Thus, for instance, the theory of groups is a category with a group object, the generic group, “freely” constructed from the sketch for groups. It is in a way the “universal” group in the sense that any other group in any category can be constructed from it. This is possible since it contains all possible arrows, i.e., all definable operations, obtained in a purely internal or abstract manner. It is debatable whether this category should be called the theory of the sketch. But that may be more a matter of terminology than anything else, since it is clear that the “free” category called the theory is there to stay in one way or another.
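The notion of a model of a sketch in Set can be illustrated with a finite toy check (an illustration of the idea only; no sketch machinery is implemented, and the choice of example is mine): we verify by brute force that a candidate realization in Set, the integers mod 3 under addition, satisfies the equations which the sketch for groups imposes as commuting diagrams.

```python
# Brute-force check that (Z/3, +, 0) realizes in Set the equations the
# sketch for groups imposes (associativity, unit laws, existence of inverses).
carrier = range(3)
op = lambda a, b: (a + b) % 3    # interpretation of the multiplication arrow
e = 0                            # interpretation of the unit arrow

assoc = all(op(op(a, b), c) == op(a, op(b, c))
            for a in carrier for b in carrier for c in carrier)
unit_laws = all(op(e, a) == a and op(a, e) == a for a in carrier)
inverses = all(any(op(a, b) == e and op(b, a) == e for b in carrier)
               for a in carrier)
print(assoc and unit_laws and inverses)  # True: this object is a model in Set
```

Replacing `carrier` and `op` with a topological group and a continuous operation would give a model of the same sketch in the category of topological spaces, which is the sense in which one sketch has many schemata.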

# Dissipations – Bifurcations – Synchronicities. Thought of the Day 29.0

Deleuze’s thinking expounds on Bergson’s adaptation of multiplicities in step with the catastrophe theory, chaos theory, dissipative systems theory, and quantum theory of his era. For Bergson, hybrid scientific/philosophical methodologies were not viable. He advocated tandem explorations, the two “halves” of the Absolute “to which science and metaphysics correspond” as a way to conceive the relations of parallel domains. The distinctive creative processes of these disciplines remain irreconcilable differences-in-kind, commonly manifesting in lived experience. Bergson: Science is abstract, philosophy is concrete. Deleuze and Guattari: Science thinks the function, philosophy the concept. Bergson’s Intuition is a method of division. It differentiates tendencies, forces. Division bifurcates. Bifurcations are integral to contingency and difference in systems logic.

A bifurcation is the branching of a solution into multiple solutions as a system is varied. This bifurcating principle is also known as contingency. Bifurcations mark a point or an event at which a system divides into two alternative behaviours. Each trajectory is possible. The line of flight actually followed is often indeterminate. This is the site of a contingency, were it a positionable “thing.” It is at once a unity, a dualism and a multiplicity:

Bifurcations are the manifestation of an intrinsic differentiation between parts of the system itself and the system and its environment. […] The temporal description of such systems involves both deterministic processes (between bifurcations) and probabilistic processes (in the choice of branches). There is also a historical dimension involved […] Once we have dissipative structures we can speak of self-organisation.

Figure: In a dynamical system, a bifurcation is a period doubling, quadrupling, etc., that accompanies the onset of chaos. It represents the sudden appearance of a qualitatively different solution for a nonlinear system as some parameter is varied. The illustration above shows bifurcations (occurring at the location of the blue lines) of the logistic map as the parameter r is varied. Bifurcations come in four basic varieties: flip bifurcation, fold bifurcation, pitchfork bifurcation, and transcritical bifurcation.
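The figure’s description can be checked numerically. The sketch below (parameter values are illustrative) iterates the logistic map x ↦ r·x·(1−x), discards a transient, and counts the distinct values on the attractor: one value before the first period doubling, two after it.

```python
# Attractor of the logistic map x -> r*x*(1-x) for a given parameter r.
def attractor(r, x0=0.5, transient=1000, keep=64, tol=1e-6):
    x = x0
    for _ in range(transient):           # discard the transient
        x = r * x * (1 - x)
    seen = []
    for _ in range(keep):                # collect the long-run orbit
        x = r * x * (1 - x)
        if not any(abs(x - s) < tol for s in seen):
            seen.append(x)
    return sorted(seen)

print(len(attractor(2.5)))  # 1: a single fixed point, before the first bifurcation
print(len(attractor(3.2)))  # 2: a period-2 orbit, after the first period doubling
```

Sweeping `r` over a grid and plotting `attractor(r)` against `r` reproduces the familiar bifurcation diagram of the figure.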

A bifurcation, according to Prigogine and Stengers, exhibits determinacy and choice. It pertains to critical points, to singular intensities and their division into multiplicities. The scientific term, bifurcation, can be substituted for differentiation when exploring processes of thought or, as Massumi explains, affect:

Affect and intensity […] is akin to what is called a critical point, or bifurcation point, or singular point, in chaos theory and the theory of dissipative structures. This is the turning point at which a physical system paradoxically embodies multiple and normally mutually exclusive potentials…

The endless bifurcating division of progressive iterations, the making of multiplicities by continually differentiating binaries, by multiplying divisions of dualities – this is the ontological method of Bergson and Deleuze after him. Bifurcations diagram multiplicities, from monisms to dualisms, from differentiation to differenciation, creatively progressing. Manuel Delanda offers this account, which describes the additional technicality of control parameters, analogous to higher-level computer technologies that enable dynamic interaction. These protocols and variable control parameters are later discussed in detail in terms of media objects in the metaphorical state space of an in situ technology:

[…] for the purpose of defining an entity to replace essences, the aspect of state space that mattered was its singularities. One singularity (or set of singularities) may undergo a symmetry-breaking transition and be converted into another one. These transitions are called bifurcations and may be studied by adding to a particular state space one or more ‘control knobs’ (technically control parameters) which determine the strength of external shocks or perturbations to which the system being modeled may be subject.
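DeLanda’s “control knob” picture has a textbook minimal instance, used here purely as an illustration (the pitchfork system is my choice, not the text’s): for dx/dt = r·x − x³, the set of singularities changes as the control parameter r crosses zero, a symmetry-breaking transition from one fixed point to three.

```python
import math

def singularities(r, eps=1e-12):
    """Real fixed points of dx/dt = r*x - x**3 for control parameter r.
    Fixed points solve r*x - x**3 = 0, i.e. x = 0 and, for r > 0, x = ±sqrt(r)."""
    pts = [0.0]
    if r > eps:                    # symmetry breaking: two new branches appear
        pts += [-math.sqrt(r), math.sqrt(r)]
    return sorted(pts)

print(singularities(-1.0))  # [0.0]            : one symmetric state
print(singularities(1.0))   # [-1.0, 0.0, 1.0] : symmetry broken into three
```

Turning the “knob” r through zero converts one singularity into another set of singularities, which is exactly the kind of transition the quotation describes.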

Another useful example of bifurcation with respect to research in the neurological and cognitive sciences is Francisco Varela’s theory of the emergence of microidentities and microworlds. The ready-for-action neuronal clusters that produce microidentities, from moment to moment, are what he calls bifurcating “breakdowns”. These critical events in which a path or microidentity is chosen are, by implication, creative:

# Representation in the Philosophy of Science.

The concept of representation has gained momentum in the philosophy of science. The simplest concept of representation conceivable is expressed by the following dyadic predicate: structure S(HeB) represents HeB. Steven French defended the view that to represent something in science is the same as to have a model for it, where models are set-structures; then ‘representation’ and ‘model’ become synonyms, and so do ‘to represent’ and ‘to model’. Nevertheless, this simplest conception was quickly thrown overboard as too simple by, amongst others, Ronald Giere, who replaced the dyadic predicate with a tetradic one to express a more involved concept of representation:

Scientist S uses model M to represent being B for purpose P,

where ‘model’ can here be identified with ‘structure’. Another step was set by Bas van Fraassen. As early as 1994, in his contribution to J. Hilgevoord’s Physics and our View of the World, Van Fraassen brought Nelson Goodman’s distinction between representation-of and representation-as — drawn in his seminal Languages of Art – to bear on science; he went on to argue that all representation in science is representation-as. We represent a Helium atom in a uniform magnetic field as a set-theoretical wave-mechanical structure S(HeB). In his new tome Scientific Representation, Van Fraassen has moved essentially to a hexadic predicate to express the most fundamental and most involved concept of representation to date:

Repr(Z, V, S, B, F, P),

which reads: subject or scientist Z is V-ing artefact S to represent B as an F for purpose P. Example: in the 1920s, Heisenberg (Z) constructed (V) a mathematical object (S) to represent a Helium atom (B) as a wave-mechanical structure (F) in order to calculate its electromagnetic spectrum (P). We concentrate on the following triadic predicate, which is derived from the fundamental hexadic one:

ReprAs(S, B, F) iff ∃Z, ∃V, ∃P : Repr(Z, V, S, B, F, P)

which reads: abstract object S represents being B as an F, so that F(S).
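The derivation of the triadic from the hexadic predicate can be made fully explicit. The following is a minimal sketch in Lean; all type names (Subject, Activity, Artefact, Being, Kind, Purpose) are illustrative placeholders introduced here, not Van Fraassen’s own formalisation:

```lean
-- Placeholder types for the six argument places of Repr
-- (illustrative only; not Van Fraassen's own notation).
variable (Subject Activity Artefact Being Kind Purpose : Type)

-- Repr Z V S B F P: subject Z is V-ing artefact S
-- to represent being B as an F for purpose P.
variable (Repr : Subject → Activity → Artefact → Being → Kind → Purpose → Prop)

-- The triadic predicate: S represents B as an F iff some subject,
-- some activity and some purpose witness the hexadic predicate.
def ReprAs (S : Artefact) (B : Being) (F : Kind) : Prop :=
  ∃ (Z : Subject) (V : Activity) (P : Purpose), Repr Z V S B F P
```

The existential quantification makes precise the sense in which subject, activity and purpose are ‘forgotten’ when one passes from the hexadic to the triadic predicate.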

Giere, Van Fraassen and contemporaries are not the first to include manifestations of human agency in their analysis of models and representation in science. A little more than half a century ago, Peter Achinstein expounded the following as a characteristic of models in science:

A theoretical model is treated as an approximation useful for certain purposes. (…) The value of a given model, therefore, can be judged from different though related viewpoints: how well it serves the purposes for which it is employed, and the completeness and accuracy of the representation it proposes. (…) To propose something as a model of X is to suggest it as a way of representing X which provides at least some approximation of the actual situation; moreover, it is to admit the possibility of alternative representations useful for different purposes.

One year later, M.W. Wartofsky explicitly proposed, during the Annual Meeting of the American Philosophical Association, Western Division, Philadelphia, 1966, to consider a model as a genus of representation, to take it that representation involves “relevant respects for relevant purposes”, and to consider “the modelling relation triadically in this way: M(S,x,y), where S takes x as a model of y”.20 Two years later, in 1968, Wartofsky wrote in his essay ‘Telos and Technique: Models as Modes of Action’ the following:

In this sense, models are embodiments of purpose and, at the same time, instruments for carrying out such purposes. Let me attempt to clarify this idea. No entity is a model of anything simply by virtue of looking like, or being like, that thing. Anything is like anything else in an infinite number of respects and certainly in some specifiable respect; thus, if I like, I may take anything as a model of anything else, as long as I can specify the respect in which I take it. There is no restriction on this. Thus an array of teacups, for example, may be taken as a model for the employment of infantry battalions, and matchsticks as models of mu-mesons, there being some properties that any of these things share with the others. But when we choose something to be a model, we choose it with some end in view, even when that end in view is simply to aid the imagination or the understanding. In the most trivial cases, then, the model is already normative and telic. It is normative in that it is chosen to represent abstractly only certain features of the thing we model, not everything all at once, but those features we take to be important or significant or valuable. The model is telic in that significance and value can exist only with respect to some end in view or purpose that the model serves.

Further, during the 1950s and 1960s the role of analogies, besides that of models, was much discussed among philosophers of science (Hesse, Achinstein, Girill, Nagel, Braithwaite, Wartofsky).

On the basis of the general concept of representation, we can echo Wartofsky by asserting that almost anything can represent anything else for someone for some purpose. In scientific representations, representans and representandum will share some features, but not all features, because to represent is neither to mirror nor to copy. Realists, a-realists and anti-realists will all agree that ReprAs(S, B, F) is true only if on the basis of F(S) one can save all phenomena that being B gives rise to, i.e. one can calculate or accommodate all measurement results obtained from observing B or experimenting with B. Whilst for structural empiricists like Van Fraassen this is also sufficient, for StrR it is not. StrR will want to add that structure S of type F ‘is realised’, that S of type F truly is the structure of being B or refers to B, so that also F(B). StrR will want to order the representations of being B that scientists have constructed during the course of history as approaching the one and only true structure of B, its structure an sich, the Kantian regulative ideal of StrR. But this talk of truth and reference, of beings and structures an sich, is in dissonance with the concept of representation-as.
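The division of labour just described can be stated schematically. Here is a minimal Lean sketch, with all predicate names (SavesPhenomena, Realised) as illustrative placeholders: the empirical-adequacy condition that all parties accept is a necessary condition on ReprAs, while StrR adds the further claim that the kind F is realised in B itself:

```lean
variable (Artefact Being Kind : Type)
variable (ReprAs : Artefact → Being → Kind → Prop)
-- SavesPhenomena S B: on the basis of F(S) one can accommodate all
-- measurement results obtained from B (placeholder name).
variable (SavesPhenomena : Artefact → Being → Prop)
-- Realised F B: being B itself is of kind F, i.e. F(B) (placeholder name).
variable (Realised : Kind → Being → Prop)

-- Accepted by realists, a-realists and anti-realists alike:
-- representation-as requires saving the phenomena.
def AdequacyNecessary : Prop :=
  ∀ S B F, ReprAs S B F → SavesPhenomena S B

-- The additional claim of StrR: the represented structure
-- is realised in the being represented, so that F(B).
def StrRCentralClaim : Prop :=
  ∀ S B F, ReprAs S B F → Realised F B
```

For the structural empiricist the first condition is also sufficient; StrR is distinguished precisely by asserting the second as well.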

Some being B can be represented as many other things, and the ensuing representations are all hunky-dory if each one serves some purpose of some subject. When the concept of representation-as is taken as pivotal to make sense of science, then the sort of ‘perspectivalism’ that Giere advocates is more in consonance with the ensuing view of science than realism is. Giere attempts to hammer a weak variety of realism into his ‘perspectivalism’: all perspectives are perspectives on one and the same reality, and from every perspective something is said that can be interpreted realistically: in certain respects the representans resembles its representandum to certain degrees. A single unified picture of the world is however not to be had. Nancy Cartwright’s dappled world seems nearer to Giere’s residence of patchwork realism. A unified picture of the physical world that realists dream of is completely out of the picture here. With friends like that, realism needs no enemies.

There is prima facie a way, however, for realists to express themselves in terms of representation, as follows. First, fix the purpose P to be: to describe the world as it is. When this fixed purpose leaves a variety of representations on the table, then choose the representation that is empirically superior, that is, that performs best in terms of describing the phenomena, because the phenomena are part of the world. This can be established objectively. When this still leaves more than one representation on the table, which thus save the phenomena equally well, choose the one that best explains the phenomena. In this context, Van Fraassen mentions the many interpretations of QM: each one constitutes a different representation of the same beings, or of only the same observable beings (phenomena), their similarities notwithstanding. Do all these interpretations provide equally good explanations? This can be established objectively too, but every judgment here will depend on which view of explanation is employed. Suppose we are left with a single structure A, of type G. Then we assert that ‘G(B)’ is true. When this ‘G’ predicates structure to B, we still need to know what ‘structure’ literally means in order to know what it is that we attribute to B, of what A is that B instantiates, and, even more importantly, we need to know this for our descriptivist account of reference, which realists need in order to be realists. Yes, we have now arrived where we were at the end of the previous two Sections. We conclude that this way for realists, to express themselves in terms of representation, is a dead end. The concept of representation is not going to help them.

The need for substantive accounts of truth and reference fades away as soon as one adopts a view of science that takes the concept of representation-as as its pivotal concept. Fundamentally different kinds of mathematical structure, set-theoretical and category-theoretical, can then easily be accommodated. They are ‘only representations’. That is moving away from realism, StrR included, dissolving rather than solving the problem for StrR of clarifying its Central Claim of what it means to say that being B is or has structure S: ‘dissolved’, because ‘is or has’ is replaced with ‘is represented-as’. Realism wants to know what B is, not only how it can be represented for someone who wants to do something for some purpose. When we take it for granted that StrR needs substantive accounts of truth and reference, more specifically a descriptivist account of reference and then an account of truth by means of reference, then a characterisation of structure as directly as possible, without committing one to a profusion of abstract objects, is mandatory.

# The Characterisation of Structure.