The Statistical Physics of Stock Markets. Thought of the Day 143.0


The externalist view argues that we can make sense of, and profit from, stock markets’ behavior, or at least a few crucial properties of it, by crunching numbers and looking for patterns and regularities in certain sets of data. The notion of data, hence, is a key element in such an understanding, and the quantitative side of the problem is prominent, even if this does not mean that qualitative analysis is ignored. The point here is that the outside view maintains that it provides a better understanding than the internalist view. To this end, it endorses a functional perspective on finance, and on stock markets in particular.

The basic idea of the externalist view is that there are general properties and behaviors of stock markets that can be detected and studied through a mathematical lens, and that these do not depend so much on contextual or domain-specific factors. The point at stake here is that financial systems can be studied and approached at different scales, and it is virtually impossible to produce all the equations describing, at a micro level, all the objects of the system and their relations. So, in response, this view focuses on those properties that allow us to understand the behavior of the system at a global level without having to produce a detailed conceptual and mathematical account of its inner ‘machinery’. Hence the two roads: the first is to embrace an emergentist view of stock markets, that is, a specific metaphysical, ontological, and methodological thesis; the second is to embrace a heuristic view, that is, the idea that the choice to focus on those properties that are tractable by mathematical models is a pure problem-solving option.

A typical example of the externalist approach is the one provided, for instance, by statistical physics. In describing collective behavior, this discipline neglects all the conceptual and mathematical intricacies deriving from a detailed account of the inner, individual, micro-level functioning of a system. Concepts such as stochastic dynamics, self-similarity, correlations (both short- and long-range), and scaling are the tools used to this end. Econophysics is a stock example in this sense: it employs methods taken from mathematics and mathematical physics in order to detect and forecast the driving forces of stock markets and their critical events, such as bubbles, crashes, and their tipping points. In this respect, markets are not ‘dark boxes’: you can see their characteristics from the outside, or better, you can see specific dynamics that shape the trends of stock markets deeply and for a long time. Moreover, these dynamics are complex in the technical sense: this class of behavior encompasses timescales, ontology, types of agents, ecologies, regulations, laws, etc., and can be detected, even if not strictly predicted. We can focus on the stock markets as a whole, or on a few of their critical events, looking at the data of prices (or other indexes) and ignoring all the other details and factors, since these will be absorbed into the global dynamics. This view thus presents stock markets not as an unintelligible casino where wild gamblers face each other, but as a system whose reasons and properties can be exhibited: a system that serves mostly as a means of fluid transactions that enable and ease the functioning of free markets.
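The scaling and long-range correlations that econophysics looks for can be probed with very simple tools. The sketch below is an illustration only, not a trading model: it works on a synthetic random-walk “price” series invented for the example, and estimates a Hurst-type scaling exponent from how the spread of price differences grows with the lag.

```python
import numpy as np

def hurst_exponent(series, max_lag=100):
    """Estimate a Hurst-type scaling exponent from the growth of
    std(x[t+lag] - x[t]) with the lag: for a self-similar process this
    spread scales like lag**H, with H ~ 0.5 for a pure random walk and
    H > 0.5 for persistent (trending) series."""
    lags = np.arange(2, max_lag)
    spread = [np.std(series[lag:] - series[:-lag]) for lag in lags]
    # The slope of the log-log regression is the scaling exponent H.
    H, _ = np.polyfit(np.log(lags), np.log(spread), 1)
    return H

rng = np.random.default_rng(0)
prices = np.cumsum(rng.standard_normal(5000))  # synthetic random-walk "prices"
print(hurst_exponent(prices))  # close to 0.5: no long-range memory
```

Deviations of the estimated exponent from 0.5 on real price series are one of the “outside” signatures, of the kind mentioned above, that can be read off the data without any model of the market’s inner machinery.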

Moreover, the study of complex systems theory and that of stock markets seem to offer mutual benefits. On one side, complex systems theory seems to offer a key to understand and break through some of the most salient properties of stock markets. On the other side, stock markets seem to provide a ‘stress test’ for complexity theory. Didier Sornette expresses how the analogies between stock markets and phase transitions, statistical mechanics, nonlinear dynamics, and disordered systems mold the view from outside:

Take our personal life. We are not really interested in knowing in advance at what time we will go to a given store or drive to a highway. We are much more interested in forecasting the major bifurcations ahead of us, involving the few important things, like health, love, and work, that count for our happiness. Similarly, predicting the detailed evolution of complex systems has no real value, and the fact that we are taught that it is out of reach from a fundamental point of view does not exclude the more interesting possibility of predicting phases of evolutions of complex systems that really count, like the extreme events. It turns out that most complex systems in natural and social sciences do exhibit rare and sudden transitions that occur over time intervals that are short compared to the characteristic time scales of their posterior evolution. Such extreme events express more than anything else the underlying “forces” usually hidden by almost perfect balance and thus provide the potential for a better scientific understanding of complex systems.

Phase transitions, critical points, and extreme events seem to be so pervasive in stock markets that they are the crucial concepts to explain and, where possible, to foresee. And complexity theory provides us with a fruitful reading key to understand their dynamics, namely their generation, growth, and occurrence. Such a reading key proposes a clear-cut interpretation of them, which can again be explained by means of an analogy with physics, precisely with the unstable position of an object. Complexity theory suggests that critical or extreme events occurring at large scale are the outcome of interactions occurring at smaller scales. In the case of stock markets, this means that, unlike many approaches that attempt to account for crashes by searching for ‘mechanisms’ that work at very short time scales, complexity theory indicates that crashes have causes dating back months or years before the event. This reading suggests that it is the increasing inner interaction between the agents inside the markets that builds up the unstable dynamics (typically the financial bubbles) that eventually end in a critical event, the crash. But here the specific, final step that triggers the critical event – the collapse of prices – is not the key to understanding it: a crash occurs because the market is in an unstable phase, and any small interference or event may trigger it. The bottom line: the trigger can be virtually any event external to the markets. The real cause of the crash is the market’s overall unstable position; the proximate ‘cause’ is secondary and accidental. Or, in other words, a crash can be fundamentally endogenous in nature, while an exogenous, external shock is simply its occasional triggering factor. The instability is built up by cooperative behavior among traders, who imitate each other (in this sense it is an endogenous process) and thereby form and reinforce trends that converge up to a critical point.
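The endogenous build-up of instability through imitation can be caricatured in a few lines of code. The toy model below is a sketch of the general idea only, not Sornette’s actual model, and all parameters are invented: traders repeatedly re-decide between buying and selling, copying the current majority with some probability. Once the imitation strength is high enough, a consensus – the polarized, bubble-like phase in which any small shock can trigger the collapse – emerges on its own.

```python
import numpy as np

def herding_sim(n_traders=500, imitation=0.0, sweeps=100, seed=0):
    """Toy herding model: each trader holds an opinion +1 (buy) or -1 (sell).
    At every update a random trader re-decides: with probability `imitation`
    it copies the current majority, otherwise it chooses at random.
    Returns the final |net demand| in [0, 1]; values near 1 signal the
    polarized, unstable phase built up purely by mutual imitation."""
    rng = np.random.default_rng(seed)
    opinions = rng.choice([-1, 1], size=n_traders)
    total = int(opinions.sum())
    for _ in range(sweeps * n_traders):
        i = rng.integers(n_traders)
        old = opinions[i]
        if rng.random() < imitation:
            new = 1 if total >= 0 else -1    # imitate the majority
        else:
            new = rng.choice([-1, 1])        # decide independently
        opinions[i] = new
        total += new - old
    return abs(total) / n_traders

# Weak imitation leaves demand balanced; strong imitation builds a consensus.
print(herding_sim(imitation=0.1), herding_sim(imitation=0.95))
```

Note that nothing external is ever applied to the system: the transition from the balanced to the polarized regime is driven entirely by the imitation parameter, mirroring the claim that the crash’s real cause is the endogenously built instability, not the proximate trigger.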

The main advantage of this approach is that the system (the market) would anticipate the crash by releasing precursory fingerprints observable in stock market prices: market prices contain information on impending crashes, and this implies that:

if the traders were to learn how to decipher and use this information, they would act on it and on the knowledge that others act on it; nevertheless, the crashes would still probably happen. Our results suggest a weaker form of the “weak efficient market hypothesis”, according to which the market prices contain, in addition to the information generally available to all, subtle information formed by the global market that most or all individual traders have not yet learned to decipher and use. Instead of the usual interpretation of the efficient market hypothesis in which traders extract and consciously incorporate (by their action) all information contained in the market prices, we propose that the market as a whole can exhibit “emergent” behavior not shared by any of its constituents.

In a nutshell, the critical events emerge in a self-organized and cooperative fashion as the macro result of the internal and micro interactions of the traders, their imitation and mirroring.

 

Austrian School of Economics: The Praxeological Synthetic. Thought of the Day 135.0


Within Austrian economics, the a priori stance has dominated a tradition running from Carl Menger to Murray Rothbard. The idea here is that the basic structures of the economy are entrenched in the more basic structures of human action as such. Nowhere is this more evident than in the work of Ludwig von Mises – his so-called ‘praxeology’, which rests on the fundamental axiom that individual human beings act, that is, on the primordial fact that individuals engage in conscious actions toward chosen goals, and is built from the idea that all basic laws of economics can be derived a priori from one premiss: the concept of human action. Of course, this concept is no simple concept, containing within itself purpose, product, time, scarcity of resources, etc. – so it would be fairer to say that economics lies as the implication of the basic schema of human action as such.

Even if the Austrian economists’ conception of the a priori is decidedly objectivist and anti-subjectivist, it is important to note their insistence on subjectivity within their ontological domain. The Austrian tradition is famous precisely for its emphasis on the role of subjectivity in the economy. From Carl Menger onwards, the Austrians protested against the mainstream economic assumption that the economic agent in the market is fully rational, knows his own preferences in detail, has constant preferences over time, has access to all prices for a given commodity at a given moment, etc. Thus, von Mises’ famous criticism of socialist planned economy is built on this idea: the system of ever-changing prices in the market constitutes a dispersed knowledge about the conditions of resource allocation which it is a priori impossible for any single agent – let alone any central planner’s office – to possess. Thus, their conception of the objective a priori laws of the economic domain had the perhaps surprising implication that they warned against a too objectivist conception of the economy, one not taking into account the limits of economic rationality stemming from the general limitations of the capacities of real subjects. Their ensuing liberalism is thus built on a priori conclusions about the relative unpredictability of economics, founded on the role played by subjective intentionality. For the same reason, Hayek ended up with a distinction between simple and complex processes, cutting across all empirical disciplines, where only the former permit precise, predictive, quantitative calculi based on mathematical modeling, while the latter permit only the recognition of patterns (which may also be mathematically modeled, to be sure, but without quantitative predictability).
It is of paramount importance, though, to distinguish this emphasis on the ineradicable role of subjectivity in certain regional domains from Kantian-like ideas about the foundational role of subjectivity in the construction of knowledge as such. The Austrians are as much subjectivists in the former respect as they are objectivists in the latter. In the history of economics, the Austrians occupy a middle position, being against historicism on the one hand and against positivism on the other. Against the former, they insist that the a priori structures of economy transcend history, which does not possess the power to form institutions at random but only as constrained by a priori structures. And against the latter, they insist that the mere accumulation of empirical data subject to induction will never in itself give rise to the formation of theoretical insights. Structures of intelligible concepts are in all cases necessary for any understanding of empirical regularities – in this respect, the Austrian a priori approach amounts to a non-skepticist version of the doctrine of the ‘theory-ladenness’ of observations.

A late descendant of the Austrian tradition after its emigration to the Anglo-Saxon world (von Mises, Hayek, and Schumpeter were such emigrés) was the anarcho-liberal economist Murray Rothbard, and it is the inspiration from him which allows Barry Smith to articulate the principles underlying the Austrians as ‘fallibilistic apriorism’. Rothbard characterizes in a brief paper what he calls ‘Extreme Apriorism’ as follows:

there are two basic differences between the positivists’ model science of physics on the one hand, and sciences dealing with human actions on the other: the former permits experimental verification of consequences of hypotheses, which the latter do not (or, only to a limited degree, we may add); the former admits of no possibility of testing the premisses of hypotheses (like: what is gravity?), while the latter permits a rational investigation of the premisses of hypotheses (like: what is human action?). This state of affairs makes it possible for economics to derive its basic laws with absolute – a priori – certainty: in addition to the fundamental axiom – the existence of human action – only two empirical postulates are needed: ‘(1) the most fundamental variety of resources, both natural and human. From this follows directly the division of labor, the market, etc.; (2) less important, that leisure is a consumer good’. On this basis, it may e.g. be inferred, ‘that every firm aims always at maximizing its psychic profit’.

Rothbard brings up this example in order to counter traditional economists who would claim that the following proposition could be added as a corollary: ‘that every firm aims always at maximizing its money profit’. This cannot be inferred and is, according to Rothbard, an economic prejudice – the manager may, e.g., prefer for nepotistic reasons to employ his stupid brother even if that decreases the firm’s financial profit possibilities. This is an example of how the Austrians refute the basic premiss of absolute rationality in terms of maximal profit-seeking. Given this basis, other immediate implications are:

the means-ends relationship, the time-structure of production, time-preference, the law of diminishing marginal utility, the law of optimum returns, etc.

Rothbard quotes Mises as seeing the fundamental axiom as a ‘Law of Thought’ – while he himself regards this as a much too Kantian way of expressing it, preferring instead the simple Aristotelian/Thomist idea of a ‘Law of Reality’. Rothbard furthermore insists that this doctrine is not inherently political – in order to arrive at the Austrians’ typical liberalist political orientation, a preference for certain types of ends must be added to the a priori theory (such as the preference for life over death, abundance over poverty, etc.). This also displays the radicality of the Austrian approach: nothing is assumed about the content of human ends – this is why they will never subscribe to theories of Man as an economically rational agent or of Man as a necessary economic egotist. All different ends meet and compete on the market – including both the desire for profit at one end and idealist, utopian, or altruist goals at the other. The principal interest of these features of economic theory lies in the high degree of awareness of the difference between the – extreme – synthetic a priori theory developed, on the one hand, and its incarnation in concrete empirical cases and their limiting conditions, on the other.

 

Quantifier – Ontological Commitment: The Case for an Agnostic. Note Quote.


What about the mathematical objects that, according to the platonist, exist independently of any description one may offer of them in terms of comprehension principles? Do these objects exist on the fictionalist view? Now, the fictionalist is not committed to the existence of such mathematical objects, although this doesn’t mean that the fictionalist is committed to the non-existence of these objects. The fictionalist is ultimately agnostic about the issue. Here is why.

There are two types of commitment: quantifier commitment and ontological commitment. We incur quantifier commitment to the objects that are in the range of our quantifiers. We incur ontological commitment when we are committed to the existence of certain objects. However, despite Quine’s view, quantifier commitment doesn’t entail ontological commitment. Fictional discourse (e.g. in literature) and mathematical discourse illustrate that. Suppose that there’s no way of making sense of our practice with fiction but to quantify over fictional objects. Still, people would strongly resist the claim that they are therefore committed to the existence of these objects. The same point applies to mathematical objects.

This move can also be made by invoking a distinction between partial quantifiers and the existence predicate. The idea here is to resist reading the existential quantifier as carrying any ontological commitment. Rather, the existential quantifier only indicates that the objects that fall under a concept (or have certain properties) do not exhaust the whole domain of discourse. To indicate that the whole domain is invoked (e.g. that every object in the domain has a certain property), we use a universal quantifier. So, two different functions are clumped together in the traditional, Quinean reading of the existential quantifier: (i) to assert the existence of something, on the one hand, and (ii) to indicate that not the whole domain of quantification is considered, on the other. These functions are best kept apart. We should use a partial quantifier (that is, an existential quantifier free of ontological commitment) to convey that only some of the objects in the domain are referred to, and introduce an existence predicate into the language in order to express existence claims.

By distinguishing these two roles of the quantifier, we also gain expressive resources. Consider, for instance, the sentence:

(∗) Some fictional detectives don’t exist.

Can this expression be translated into the usual formalism of classical first-order logic, with the Quinean interpretation of the existential quantifier? Prima facie, that doesn’t seem to be possible. The sentence would be contradictory: it would state that there exist fictional detectives who don’t exist. The obvious consistent translation here would be: ¬∃x Fx, where F is the predicate ‘is a fictional detective’. But this states that fictional detectives don’t exist – clearly a different claim from the one expressed in (∗). By declaring that some fictional detectives don’t exist, (∗) is still compatible with the existence of some fictional detectives. The regimented sentence denies this possibility.

However, it’s perfectly straightforward to express (∗) using the resources of partial quantification and the existence predicate. Suppose that “∃” stands for the partial quantifier and “E” stands for the existence predicate. In this case, we have: ∃x (Fx ∧ ¬Ex), which expresses precisely what we need to state.
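The difference between the two readings can be checked mechanically on a finite model. In the sketch below the domain and the valuations of F and E are invented purely for illustration:

```python
# A finite model: F = "is a fictional detective", E = the existence predicate.
# The domain and the valuations are hypothetical, chosen for illustration.
domain = ["holmes", "gregson", "attenborough"]
F = {"holmes": True, "gregson": True, "attenborough": False}
E = {"holmes": False, "gregson": False, "attenborough": True}

# (*) read with the partial quantifier: some x satisfies F but not E.
star = any(F[x] and not E[x] for x in domain)

# The Quinean regimentation, "not: there is an x such that Fx":
# no fictional detectives at all.
quinean = not any(F[x] for x in domain)

print(star, quinean)  # True False -- the two translations come apart
```

In this model (∗) comes out true while the regimented sentence ¬∃x Fx comes out false, confirming that the two are not equivalent translations.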

Now, under what conditions is the fictionalist entitled to conclude that certain objects exist? In order to avoid begging the question against the platonist, the fictionalist cannot insist that only objects we can causally interact with exist. So, the fictionalist only offers sufficient conditions for us to be entitled to conclude that certain objects exist. Conditions such as the following seem uncontroversial. Suppose we have a form of access to certain objects such that (i) it is robust (e.g. we blink, we move away, and the objects are still there); (ii) the access to these objects can be refined (e.g. we can get closer for a better look); (iii) the access allows us to track the objects in space and time; and (iv) the access is such that if the objects weren’t there, we wouldn’t believe that they were. In this case, having this form of access to these objects gives us good grounds to claim that they exist. In fact, it’s in virtue of conditions of this sort that we believe that tables, chairs, and so many other observable entities exist.

But recall that these are only sufficient, and not necessary, conditions. Thus, the resulting view turns out to be agnostic about the existence of the mathematical entities the platonist takes to exist – independently of any description. The fact that mathematical objects fail to satisfy some of these conditions doesn’t entail that these objects don’t exist. Perhaps these entities do exist after all; perhaps they don’t. What matters for the fictionalist is that it’s possible to make sense of significant features of mathematics without settling this issue.

Now what would happen if the agnostic fictionalist used the partial quantifier in the context of comprehension principles? Suppose that a vector space is introduced via suitable principles, and that we establish that there are vectors satisfying certain conditions. Would this entail that we are now committed to the existence of these vectors? It would if the vectors in question satisfied the existence predicate. Otherwise, the issue would remain open, given that the existence predicate only provides sufficient, but not necessary, conditions for us to believe that the vectors in question exist. As a result, the fictionalist would then remain agnostic about the existence of even the objects introduced via comprehension principles!

Ideological Morphology. Thought of the Day 105.1


When applied to generic fascism, the combined concepts of ideal type and ideological morphology have profound implications for both the traditional liberal and Marxist definitions of fascism. For one thing, it means that fascism is no longer defined in terms of style – e.g. spectacular politics, uniformed paramilitary forces, the pervasive use of symbols like the fasces and the swastika – or organizational structure, but in terms of ideology. Moreover, the ideology is not seen as essentially nihilistic or negative (anti-liberalism, anti-Marxism, resistance to transcendence, etc.), or as the mystification and aestheticization of capitalist power. Instead, it is constructed in the positive, but not apologetic or revisionist, terms of the fascists’ own diagnosis of society’s structural crisis and the remedies they propose to solve it, paying particular attention to the need to separate out the ineliminable, definitional conceptions from time- or place-specific adjacent or peripheral ones. However, for decades the state of fascist studies would have made Michael Freeden’s analysis well-nigh impossible to apply to generic fascism, because precisely what was lacking was any conventional wisdom, embedded in common-sense usage of the term, about what constituted the ineliminable cluster of concepts at its non-essentialist core. Despite a handful of attempts to establish its definitional constituents that combined deep comparative historiographical knowledge of the subject with a high degree of conceptual sophistication, there was a conspicuous lack of scholarly consensus over what constituted the fascist minimum. Even whether there was such an entity as generic fascism was a question to think through.
Whether Nazism’s eugenic racism, the euthanasia campaign it led to, and its policy of physically eliminating racial enemies, which led to systematic persecution and mass murder, made it simply unique – too exceptional to be located within the generic category – was another question to think through. Both these positions suggest a naivety about the epistemological and ontological status of generic concepts most regrettable among professional intellectuals, since every generic entity is a utopian heuristic construct, not a real thing, and every historical singularity is by definition unique, no matter how many generic terms can be applied to it. Other common positions that implied considerable naivety were the ones that dismissed fascism’s ideology as too irrational or nihilistic to be part of the fascist minimum, or that generalized about its generic traits by blending fascism and Nazism.

Post-Foundationalism Versus Anti-Foundationalism. Thought of the Day 58.0


In the words of Judith Butler,

the point is not to do away with foundations, or even to champion a position which goes under the name of antifoundationalism: Both of these positions belong together as different versions of foundationalism and the sceptical problematic it engenders. Rather, the task is to interrogate what the theoretical move that establishes foundations authorizes, and what precisely it excludes or forecloses.

The notion of contingent foundations, proposed by Butler as an alternative framing, could best be described as an ontological weakening of the status of foundations without doing away with foundations entirely. It is on this account that what came to be called post-foundationalism should not be confused with anti-foundationalism. What distinguishes the former from the latter is that it does not assume the absence of any ground; what it assumes is the absence of an ultimate ground, since it is only on the basis of such absence that grounds, in the plural, are possible. The problem is therefore posed not in terms of no foundations (the logic of all-or-nothing), but in terms of contingent foundations. Hence, post-foundationalism does not stop after having assumed the absence of a final ground, and so it does not turn into anti-foundationalist nihilism, existentialism, or pluralism, all of which would assume the absence of any ground and would result in complete meaninglessness, absolute freedom, or total autonomy. Nor does it turn into a sort of postmodern pluralism for which all meta-narratives have equally melted into air, for what is still accepted by post-foundationalism is the necessity of some grounds.

What becomes problematic as a result is not the existence of foundations (in the plural) but their ontological status – which is seen now as necessarily contingent. This shift in the analysis from the ‘actually existing’ foundations to their status – that is to say, to their conditions of possibility – can be described as a quasi-transcendental move. Although implicitly present in Spivak’s notion of a ‘perpetually rehearsed critique’ as well as in Butler’s notion of ‘interrogation’, this quasi-transcendental turn is made explicit by Ernesto Laclau who, starting from the post-foundational premise that ‘the crisis of essentialist universalism as a self-asserted ground has led our attention to the contingent grounds (in the plural) of its emergence and to the complex process of construction’, comes to the conclusion that ‘[t]his operation is, sensu stricto, transcendental: it involves a retreat from an object to its conditions of possibility’.

Suspicion on Consciousness as an Immanent Derivative


The category of the subject (like that of the object) has no place in an immanent world. There can be no transcendent, subjective essence. What, then, is the ontological status of a body and its attendant instance of consciousness? In what would it exist? Sanford Kwinter offers:

It would exist precisely in the ever-shifting pattern of mixtures or composites: both internal ones – the body as a site marked and traversed by forces that converge upon it in continuous variation; and external ones – the capacity of any individuated substance to combine and recombine with other bodies or elements (ensembles), both influencing their actions and undergoing influence by them. The ‘subject’ … is but a synthetic unit falling at the midpoint or interface of two more fundamental systems of articulation: the first composed of the fluctuating microscopic relations and mixtures of which the subject is made up, the second of the macro-blocs of relations or ensembles into which it enters. The image produced at the interface of these two systems – that which replaces, yet is too often mistaken for, subjective essence – may in turn have its own individuality characterized with a certain rigor. For each mixture at this level introduces into the bloc a certain number of defining capacities that determine both what the ‘subject’ is capable of bringing to pass outside of itself and what it is capable of receiving (undergoing) in terms of effects.

This description is sufficient to explain the immanent nature of the subjective bloc as something entirely embedded in and conditioned by its surroundings. What it does not offer – and what is not offered in any detail in the entirety of the work – is an in-depth account of what, exactly, these “defining capacities” are. To be sure, it would be unfair to demand a complete description of these capacities. Kwinter himself has elsewhere referred to the states of the nervous system as “magically complex”. Regardless of the specificity with which these capacities can presently be defined, we must nonetheless agree that it is at this interface, as he calls it, at this location where so many systems are densely overlaid, that consciousness is produced. We may be convinced that this consciousness, this apparent internal space of thought, is derived entirely from immanent conditions and can only be granted the ontological status of an effect, but this effect still manages to produce certain difficulties when attempting to define modes of behavior appropriate to an immanent world.

There is a palpable suspicion of the role of consciousness throughout Kwinter’s work, at least insofar as it is equated with some kind of internal, subjective space. (In one text he optimistically awaits the day when this space will “be left utterly in shreds.”) The basis of this suspicion is multiple and obvious. Among the capacities of consciousness is the ability to attribute to itself the (false) image of a stable and transcendent essence. The workings of consciousness are precisely what allow the subjective bloc to orient itself in a sequence of time, separating itself from an absolute experience of the moment. It is within consciousness that limiting and arbitrary moral categories seem to most stubbornly lodge themselves. (To be sure this is the location of all critical thought.) And, above all, consciousness may serve as the repository for conditioned behaviors which believe themselves to be free of external determination. Consciousness, in short, contains within itself an enormous number of limiting factors which would retard the production of novelty. Insofar as it appears to possess the capacity for self-determination, this capacity would seem most productively applied by turning on itself – that is, precisely by making the choice not to make conscious decisions and instead to permit oneself to be seized by extra-subjective forces.

Two Conceptions of Morphogenesis – World as a Dense Evolutionary Plasma of Perpetual Differentiation and Innovation. Thought of the Day 57.0


Sanford Kwinter distinguishes two conceptions of morphogenesis: one appropriate to a world capable of sustaining transcendental ontological categories, the other inherent in a world of perfect immanence. According to the classical, hylomorphic model, a necessarily limited number of possibilities (forms or images) are reproduced (mirrored in reality) over a substratum, in a linear time-line. The insufficiency of such a model, however, is evident in its inability to find a place for novelty. Something either is or is not possible. This model cannot account for new possibilities, and it fails to confront the inevitable imperfections and degradations evident in all of its realizations. It is indeed the inevitability of corruption and imperfection inherent in classical creation that points to the second mode of morphogenesis. This mode depends on an understanding of the world as a ceaseless pullulation and unfolding, a dense evolutionary plasma of perpetual differentiation and innovation. In this world, forms are not carried over from some transcendent realm; instead, singularities and events emerge from within a rich plasma through the continual and dynamic interaction of forces. The morphogenetic process at work in such a world is not one whereby an active subject realizes forms from a set of transcendent possibilities, but rather one in which virtualities are actualized through the constant movement inherent in the very forces that compose the world. Virtuality is understood as free difference or singularity, not yet combined with other differences into a complex ensemble or salient form. It is of course this immanentist description of the world and its attendant mode of morphogenesis that are viable. There is no threshold beneath which classical objects, states, or relations cease to have meaning, yet beyond which they are endowed with a full pedigree and privileged status.
Indeed, it is the nature of real time to ensure a constant production of innovation and change in all conditions. This is evidenced precisely by the imperfections introduced in an act of realizing a form. The classical mode of morphogenesis, then, has to be understood as a false model imposed on what is actually a rich, perpetually transforming universe. But the sort of novelty which the enactment of the classical model produces, a novelty which from its own perspective must be construed as a defect, is not a primary concern if the novelty is registered as having emerged from a complex collision of forces. Above all, it is a novelty uncontaminated by procrustean notions of subjectivity and creation.

Wittgenstein’s Form is the Possibility of Structure


Given two arbitrary objects x and y, they can be understood as arguments of a basic ontological connection which, in turn, is either positive or negative. A priori there exist just four cases: the case of positive connection – MP; the case of negative connection – MI; the case that the connection is both positive and negative, hence incoherent – MPI; and, the most popular in combinatorial ontology, the case of mutual neutrality – N( , ). The first case is taken here to be fundamental.
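The fourfold classification can be written down directly in code; the sketch below is only a bookkeeping device (the enum and the two boolean features are my own encoding, not part of the text):

```python
from enum import Enum

class Connection(Enum):
    """The four a priori cases for the basic ontological connection
    between two objects x and y."""
    MP = "positive connection (making possible)"
    MI = "negative connection (making impossible)"
    MPI = "both positive and negative, hence incoherent"
    N = "mutual neutrality"

def classify(positive: bool, negative: bool) -> Connection:
    """Map the two binary features of a connection to one of the four cases."""
    if positive and negative:
        return Connection.MPI
    if positive:
        return Connection.MP
    if negative:
        return Connection.MI
    return Connection.N

# The four combinations exhaust the a priori possibilities:
assert classify(True, False) is Connection.MP
assert classify(False, False) is Connection.N
```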

Explication for σ

Now we can offer the following, rather natural explication of a powerful, nearly omnipotent synthesizer: y is synthesizable from x iff it is made possible from x:

σ(x) = {y : MP(x,y)}

Notice that the above explication connects the second approach (operator one) with the third (internal) approach to a general theory of analysis and synthesis.
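Extensionally, the synthesizer σ is just the MP-image of x. A minimal Python sketch, in which the relation mp and the element names are illustrative assumptions:

```python
# Toy model of the synthesizer sigma(x) = {y : MP(x, y)}.
# The making-possible relation is given extensionally as a set of pairs;
# the concrete pairs below are invented for illustration.
mp = {("a", "b"), ("a", "c"), ("b", "c")}

def sigma(x, relation=mp):
    """Everything synthesizable from x: the MP-image of x."""
    return {y for (u, y) in relation if u == x}

print(sigma("a"))  # {'b', 'c'}
print(sigma("c"))  # set(): nothing is made possible from c in this model
```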

Quoting one of the most mysterious theses of Wittgenstein’s Tractatus:

(2.033) Form is the possibility of structure.

Ask now what this possibility means. Frank Ramsey pointed out in his famous review of the Tractatus that it cannot be read as a logical modality (i.e., form cannot be treated as an alternative structure), for this reading would immediately make the Tractatus inconsistent.

It should rather be read: ‘the form of x is what makes the structure of y possible’.

Formalization: MP(Form(x), Str(y)), hence – through suitable generalization – MP(x, y).

Wittgensteinian and Leibnizian clues make the nature of MP clearer: the form of x is determined by its substance, whereas the structurality of y means that y is a complex built up in such and such a way. Using the syntactical categorization of Leśniewski and Ajdukiewicz we obtain therefore that MP has the category of a quantifier: s/n, s – which, as is easy to see, is of higher order and deeply modal.

Therefore MP is a modal quantifier, characterized following Wittgenstein’s clue by

MP(x, y) ↔ MP(S(x), y)

Leibniz’s Compossibility and Compatibility


Leibniz believed in discovering a suitable logical calculus of concepts enabling its user to solve any rational question. Assuming this done, he was in a position to sketch the full ontological system – from monads and qualities to the real world.

Thus let some logical calculus of concepts (names?, predicates?) be given. Cn is its connected consequence operator, whereas – for any x – Th(x) is the Cn-theory generated by x.

Leibniz defined modal concepts by the following metalogical conditions:

M(x) :↔ ⊥ ∉ Th(x)

x is possible (its theory is consistent)

L(x) :↔ ⊥ ∈ Th(¬x)

x is necessary (its negation is impossible)

C(x,y) :↔ ⊥ ∉ Cn(Th(x) ∪ Th(y))

x and y are compossible (their common theory is consistent).

Immediately we obtain the Leibnizian “soundness” conditions:

C(x, y) ↔ C(y, x) Compossibility relation is symmetric.

M(x) ↔ C(x, x) Possibility means self-compossibility.

C(x, y) → M(x)∧M(y) Compossibility implies possibility.

When can the above implication be reversed?
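The metalogical definitions above can be exercised on a toy consequence operator. The sketch below is a crude stand-in, not Leibniz's envisaged calculus: concepts are sets of literals, Cn only detects a direct contradiction p, ~p, and the literal-wise "negation" is merely illustrative (it is not classical negation of a conjunction). It serves only to show the shape of M, L and C and to check the soundness conditions:

```python
# Toy metalogical model of Leibniz's modal definitions.
BOT = "⊥"

def cn(literals):
    """Toy consequence operator: add the falsum iff p and ~p co-occur."""
    out = set(literals)
    if any(("~" + p) in out for p in out if not p.startswith("~")):
        out.add(BOT)
    return out

def th(x):
    """The Cn-theory generated by concept x (a set of literals)."""
    return cn(x)

def neg(x):
    """Illustrative negation: flip each literal (an assumption of the sketch)."""
    return {p[1:] if p.startswith("~") else "~" + p for p in x}

def M(x):
    """x is possible: its theory is consistent."""
    return BOT not in th(x)

def L(x):
    """x is necessary: its negation is impossible."""
    return not M(neg(x))

def C(x, y):
    """x and y are compossible: their common theory is consistent."""
    return BOT not in cn(th(x) | th(y))

# Checking the "soundness" conditions on sample concepts:
p, not_p, q = {"p"}, {"~p"}, {"q"}
assert C(p, q) == C(q, p)                 # symmetry
assert M(p) == C(p, p)                    # possibility = self-compossibility
assert not C(p, not_p)                    # contradictories are not compossible
assert (not C(p, q)) or (M(p) and M(q))   # compossibility implies possibility
```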

Onto\logical construction

Observe that in the framework of combination ontology we have already defined M(x) in a way respecting M(x) ↔ C(x, x).

On the other hand, between MP( , ) and C( , ) there is another relation, more fundamental than compossibility. It is so-called compatibility relation. Indeed, putting

CP(x, y) :↔ MP(x, y) ∧ MP(y, x) – for compatibility, and

C(x, y) :↔ M(x) ∧ M(y) ∧ CP(x, y) – for compossibility

we obtain a manageable compossibility relation obeying Leibniz’s “soundness” conditions above.

Wholes are combinations of compossible collections, whereas possible worlds are obtained by maximalization of wholes.

Observe that we start with one basic ontological making: MP(x, y) – a modality more fundamental than Leibnizian compossibility, for the latter is definable from it in two steps. Observe also that the above construction can be carried out for making impossible, and for both basic ontological modalities together as well (producing a quite Hegelian output in that case!).
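The two-step construction can be run on a small extensional model of MP; the domain and which pairs belong to the relation are illustrative assumptions, not drawn from the text:

```python
from itertools import product

# Making-possible as an extensional relation on a small invented domain.
domain = {"a", "b", "c"}
MP = {("a", "a"), ("a", "b"), ("b", "a"), ("b", "b"), ("c", "b")}

def cp(x, y):
    """Compatibility: mutual making-possible."""
    return (x, y) in MP and (y, x) in MP

def m(x):
    """Possibility, here defined so that M(x) <-> C(x, x) holds."""
    return (x, x) in MP

def c(x, y):
    """Compossibility, built in two steps from possibility and compatibility."""
    return m(x) and m(y) and cp(x, y)

# Leibniz's "soundness" conditions hold by construction:
for x, y in product(domain, repeat=2):
    assert c(x, y) == c(y, x)                # symmetry
    assert (not c(x, y)) or (m(x) and m(y))  # compossibility implies possibility
for x in domain:
    assert m(x) == c(x, x)                   # possibility = self-compossibility
```

In this model c is not possible at all (no (c, c) pair), so nothing is compossible with it, even though it makes b possible; this is where the implication of the soundness conditions fails to reverse.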

Of Magnitudes, Metrization and Materiality of Abstracto-Concrete Objects.


The possibility of introducing magnitudes in a certain domain of concrete material objects is by no means immediate, granted or elementary. First of all, it is necessary to find a property of such objects that permits comparing them, so that a quasi-serial ordering can be introduced in their set, that is, a total linear ordering not excluding that more than one object may occupy the same position in the series. Such an ordering must then undergo a metrization, which depends on finding a fundamental measuring procedure permitting the determination of a standard sample to which the unit of measure can be bound. This also depends on the existence of an operation of physical composition, which behaves additively with respect to the quantity we intend to measure. Only if all these conditions are satisfied will it be possible to introduce a magnitude in a proper sense, that is, a function which assigns to each object of the material domain a real number. This real number represents the measure of the object with respect to the intended magnitude. This condition, by introducing a homomorphism between the domain of the material objects and that of the positive real numbers, transforms the language of analysis (that is, of the concrete theory of real numbers) into a language capable of speaking faithfully and truly about those physical objects to which such a magnitude is said to belong.
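The additivity condition on physical composition is just the homomorphism requirement m(a ∘ b) = m(a) + m(b). A toy sketch, assuming rods composed end to end; the lengths and the choice of unit are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rod:
    """A concrete object whose length has already been fixed against a
    standard sample (the unit); the numeric values are assumptions."""
    length_in_units: float

def compose(a: Rod, b: Rod) -> Rod:
    """Physical composition: laying rods end to end, which is additive
    with respect to length."""
    return Rod(a.length_in_units + b.length_in_units)

def measure(r: Rod) -> float:
    """The magnitude: a homomorphism from rods to the positive reals."""
    return r.length_in_units

a, b = Rod(2.0), Rod(3.5)
# Homomorphism condition: measure(a ∘ b) = measure(a) + measure(b)
assert measure(compose(a, b)) == measure(a) + measure(b)
```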

Does the success of applying mathematics in the study of the physical world mean that this world has a mathematical structure in an ontological sense, or does it simply mean that we find in mathematics nothing but a convenient practical tool for putting order in our representations of the world? Neither of the answers to this question is right, and this is because the question itself is not correctly raised. Indeed it tacitly presupposes that the endeavour of our scientific investigations consists in facing the reality of “things” as it is, so to speak, in itself. But we know that any science is uniquely concerned with a limited “cut” operated in reality by adopting a particular point of view, that is concretely manifested by adopting a restricted number of predicates in the discourse on reality. Several skilful operational manipulations are needed in order to bring about a homomorphism with the structure of the positive real numbers. It is therefore clear that the objects that are studied by an empirical theory are by no means the rough things of everyday experience, but bundles of “attributes” (that is of properties, relations and functions), introduced through suitable operational procedures having often the explicit and declared goal of determining a concrete structure as isomorphic, or at least homomorphic, to the structure of real numbers or to some other mathematical structure. But now, if the objects of an empirical theory are entities of this kind, we are fully entitled to maintain that they are actually endowed with a mathematical structure: this is simply that structure which we have introduced through our operational procedures. However, this structure is objective and real and, with respect to it, the mathematized discourse is far from having a purely conventional and pragmatic function, with the goal of keeping our ideas in order: it is a faithful description of this structure. 
Of course, we could never pretend that such a discourse determines the structure of reality in a full and exhaustive way, and this for two distinct reasons. In the first place, reality (both in the sense of the totality of existing things, and of the “whole” of any single thing) is much richer than the particular “slice” that it is possible to cut out by means of our operational manipulations. In the second place, we must be aware that a scientific object, defined as a structured set of attributes, is an abstract object, a conceptual construction that is perfectly defined just because it is totally determined by a finite list of predicates. But concrete objects are by no means so: they are endowed with a great many attributes of an indefinite variety, so that they can at best exemplify, with an acceptable approximation, certain abstract objects that totally encode a given set of attributes through their corresponding predicates. The reason why such an exemplification can only be partial is that the different attributes simultaneously present in a concrete object are, in a way, mutually limiting, so that this object never fully exemplifies any one of them. This explains the correct sense of such common and obvious remarks as: “a rigid body, a perfect gas, an adiabatic transformation, a perfect elastic recoil, etc., do not exist in reality (or in Nature)”. Sometimes this remark is intended to convey the thesis that these are nothing but intellectual fictions devoid of any correspondence with reality, but instrumentally used by scientists in order to organize their ideas. This interpretation is totally wrong, and is simply due to a confusion between encoding and exemplifying: no concrete thing encodes any finite and explicit number of characteristics that, on the contrary, can be appropriately encoded in a concept.
Things can exemplify several concepts, while concepts (or abstract objects) do not exemplify the attributes they encode. Going back to the distinction between sense on the one hand, and reference or denotation on the other, we could also say that abstract objects belong to the level of sense, while their exemplifications belong to the level of reference, and constitute what is denoted by them. It is obvious that in the case of the empirical sciences we try to construct conceptual structures (abstract objects) having empirical denotations (exemplified by concrete objects). If one has well understood this elementary but important distinction, one is in a position to see correctly how mathematics can concern physical objects. These objects are abstract objects, structured sets of predicates, and there is absolutely nothing surprising in the fact that they could receive a mathematical structure (for example, a structure isomorphic to that of the positive real numbers, or to that of a given group, or of an abstract mathematical space, etc.). If it happens that these abstract objects are exemplified by concrete objects within a certain degree of approximation, we are entitled to say that the corresponding mathematical structure also holds true (with the same degree of approximation) for this domain of concrete objects. Now, in the case of physics, the abstract objects are constructed by isolating certain ontological attributes of things by means of concrete operations, so that they actually refer to things, and are exemplified by the concrete objects singled out by means of such operations up to a given degree of approximation or accuracy. In conclusion, one can maintain that mathematics constitutes at the same time the most exact language for speaking of the objects of the domain under consideration, and faithfully mirrors the concrete structure (in an ontological sense) of this domain of objects.
Of course, it is very reasonable to recognize that other aspects of these things (or other attributes of them) might not be treatable by means of the particular mathematical language adopted, and this may imply either that these attributes could perhaps be handled through a different available mathematical language, or even that no mathematical language found as yet could be used for handling them.