Noneism. Part 1.

Meinong

Noneism was created by Richard Routley. Its point of departure is the rejection of what Routley calls "the Ontological Assumption". This assumption consists in the explicit or, more frequently, implicit belief that denoting always refers to existing objects. If the object or objects that a proposition is about do not exist, then these objects can only be one thing: the null entity. It is incredible that Frege believed that denoting descriptions without a real (empirical, theoretical, or ideal) referent denote only the null set. And it is also difficult to believe that Russell maintained the thesis that non-existent objects cannot have properties, and that propositions about such objects are false.

Rejecting the Ontological Assumption means that we can have a very clear apprehension of imaginary objects, and a quite clear intellection of abstract objects that are not real. This is possible because, to determine an object, we only need to describe it through its distinctive traits; an object is always characterized through some definite notes. The number of traits necessary to identify an object varies greatly. In some cases we need only a few: for instance, the golden mountain, or the blue bird. In other cases we need more: for instance, the goddess Venus or the centaur Chiron. In still other instances the traits can be very numerous, even infinite. For instance, the chiliagon, and the decimal number 0.0000…009, in which the 9 comes after the first million zeros, have a great many traits; and the ordinal omega, or any Hilbert space, has infinitely many traits (although these traits can be reckoned through finite definitions). These examples show, in a convincing manner, that the Ontological Assumption is untenable. We must reject it and replace it with what Routley dubs the Characterization Postulate: to be an object is to be characterized by determined traits. The set of the characterizing traits of an object can be called its "characteristic". Once the characteristic of an object is set up, the object is perfectly recognizable.

Once this postulate is adopted, its consequences are far-reaching. Since we can characterize objects through any traits whatsoever, an object can not only be non-existent; it can even be absurd or inconsistent. For instance, the "squond" (the circle that is square and round). And we can make perfectly valid logical inferences from the premiss "x is the squond":

(1) if x is the squond, then x is square
(2) if x is the squond, then x is round
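These two inferences can be mimicked quite directly if we model an object as nothing more than its characteristic. The following Python fragment is only an illustrative toy (the trait names and the has_trait helper are invented here, not Routley's formalism): an object simply has whatever traits its characteristic lists, whether or not anything existing answers to them.

```python
# Toy model of the Characterization Postulate: an "object" is identified with
# its characteristic, i.e. the set of traits that describe it. Existence is
# not required, and nothing stops the characteristic from being inconsistent.
golden_mountain = frozenset({"golden", "mountain"})
squond = frozenset({"square", "round", "circle"})   # the square, round circle

def has_trait(obj: frozenset, trait: str) -> bool:
    """An object has exactly the traits its characteristic lists."""
    return trait in obj

# The two inferences from the premiss "x is the squond":
print(has_trait(squond, "square"))  # True -> (1) the squond is square
print(has_trait(squond, "round"))   # True -> (2) the squond is round
```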

So the theory of objects has the widest realm of application. It is clear that the Ontological Assumption imposes unacceptable limits on logic. As a matter of fact, the existential quantifier of classical logic could not have been conceived without the Ontological Assumption. The expression "(∃x)Fx" means that there exists at least one object that has the property F (or, in extensional language, that there exists an x that is a member of the extension of F). For this reason, "∃x" is inapplicable to non-existent objects. Of course, in classical logic we can deny the existence of an object, but we cannot say anything about objects that have never existed and shall never exist (we are speaking strictly of classical logic). We cannot quantify over individual variables of a first-order predicate unless they refer to a real entity, actual, past, or future. For instance, we cannot say "(∃x)(x is the eye of Polyphemus)". This would be false, of course, because Polyphemus does not exist. But if the Ontological Assumption is set aside, it is true, within a mythological frame, that Polyphemus has a single eye and many other properties. And now we can understand why noneism makes logic content-dependent.
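The contrast between the two readings of the quantifier can be put in a small sketch. Everything here (the miniature domains, the predicate) is invented for illustration; the point is only that the classical "∃" ranges over what exists, while the noneist particular quantifier ranges over all objects, existent or not.

```python
# Hypothetical domains: what exists vs. what can be talked about.
existents = {"Mount Etna", "the Parthenon"}
objects = existents | {"Polyphemus", "the eye of Polyphemus", "Pegasus"}

def is_eye_of_polyphemus(x: str) -> bool:
    return x == "the eye of Polyphemus"

# Classical existential quantifier: ranges over existents only -> False.
print(any(is_eye_of_polyphemus(x) for x in existents))

# Noneist particular quantifier: ranges over all objects -> True,
# within the mythological frame that characterizes Polyphemus.
print(any(is_eye_of_polyphemus(x) for x in objects))
```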

As we have anticipated, there must be some limitations concerning the selection of contradictory properties; otherwise the whole theory becomes inconsistent and is trivialized. To avoid trivialization, neutral (noneist) logic distinguishes two sorts of negation: the classical propositional negation, "x is not P", and the narrower negation, "x is non-P". In this way, and by applying some other technicalities (for instance, in case a universe is inconsistent, some kind of paraconsistent logic must be used), trivialization is avoided. With these provisions, the Characterization Postulate can be applied to create inconsistent universes in which classical logic is not valid: for instance, a world containing a mysterious personage who, under determined but very subtle circumstances, is and is not in two different places at the same time. The logic to be applied in this case is, obviously, some kind of paraconsistent logic (the type to be selected depends on the characteristic of the personage). In another universe there could be a jewel with two false properties: it is false that it is transparent, and it is false that it is opaque. In this kind of world we must clearly use some kind of paracomplete logic. To develop naive set theory (in Halmos's sense), we must use some type of paraconsistent logic to cope with the paradoxes that are produced through a natural way of mathematical reasoning; this logic can be of several orders, just like classical logic. In other cases we can use some kind of relevant and, a fortiori, paraconsistent logic; and so on, ad infinitum.
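One standard way to make "some kind of paraconsistent logic" concrete is the Belnap–Dunn four-valued semantics (first-degree entailment), in which a proposition may be told true, told false, both, or neither. The sketch below is illustrative only and is not Routley's own system; it just exhibits the key point, that a contradiction does not entail everything, so an inconsistent universe need not be trivial.

```python
from itertools import product

# FDE truth values as pairs (told_true, told_false):
# T = (1, 0), B = (1, 1) "both", N = (0, 0) "neither", F = (0, 1)
T, B, N, F = (1, 0), (1, 1), (0, 0), (0, 1)

def neg(a):
    return (a[1], a[0])          # negation swaps "told true" and "told false"

def conj(a, b):
    return (a[0] & b[0], a[1] | b[1])

def designated(a):
    return a[0] == 1             # a formula holds iff it is at least told true

def entails(premise, conclusion):
    """Entailment: every two-atom valuation making the premise hold
    must also make the conclusion hold."""
    return all(
        designated(conclusion(p, q))
        for p, q in product([T, B, N, F], repeat=2)
        if designated(premise(p, q))
    )

# Explosion fails: p ∧ ¬p does not entail an arbitrary q
# (countermodel: p = B, q = N), so trivialization is blocked.
print(entails(lambda p, q: conj(p, neg(p)), lambda p, q: q))  # False
```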

But if logic is content-dependent, and this dependence is a consequence of the rejection of the Ontological Assumption, what about ontology? Because the universes determined through the application of the Characterization Postulate may have no being (in fact, most of them do not), we cannot say that the objects that populate such universes are entities, for entities exist in the empirical world, or in the real world that underpins the phenomena, or (in a somewhat different way) in an ideal Platonic world. Instead of speaking about ontology, we should speak about objectology. In essence, objectology is the discipline founded by Meinong (the Theory of Objects), enriched and made more precise by Routley and other noneist logicians. Its main divisions would be ontology (the study of real physical and Platonic objects) and medenology (the study of objects that have no existence).


Rhizomatic Topology and Global Politics. A Flirtatious Relationship.


Deleuze and Guattari see concepts as rhizomes, biological entities endowed with unique properties. They see concepts as spatially representable, where the representation contains principles of connection and heterogeneity: any point of a rhizome must be connected to any other. Deleuze and Guattari list the possible benefits of spatial representation of concepts, including the ability to represent complex multiplicity, the potential to free a concept from foundationalism, and the ability to show both breadth and depth. In this view, geometric interpretations move away from the insidious understanding of the world in terms of dualisms, dichotomies, and lines, to understand conceptual relations in terms of space and shapes. The ontology of concepts is thus, in their view, appropriately geometric: a multiplicity defined not by its elements, nor by a center of unification and comprehension, but instead measured by its dimensionality and its heterogeneity. The conceptual multiplicity is already composed of heterogeneous terms in symbiosis, and is continually transforming itself, such that it is possible to follow, and map, not only the relationships between ideas but how they change over time. In fact, the authors claim that there are further benefits to geometric interpretations of concepts which are unavailable in other frames of reference. They outline the unique contribution of geometric models to the understanding of contingent structure:

Principle of cartography and decalcomania: a rhizome is not amenable to any structural or generative model. It is a stranger to any idea of genetic axis or deep structure. A genetic axis is like an objective pivotal unity upon which successive stages are organized; deep structure is more like a base sequence that can be broken down into immediate constituents, while the unity of the product passes into another, transformational and subjective, dimension. (Deleuze and Guattari)

The word that Deleuze and Guattari use for 'multiplicities' can also be translated as the topological term 'manifold.' If we thought about their multiplicities as manifolds, there are a virtually unlimited number of things one could come to know, in geometric terms, about (and with) our object of study, abstractly speaking. Among those unlimited things we could learn are properties of groups (homological, cohomological, and homeomorphic), complex directionality (maps, morphisms, isomorphisms, and orientability), dimensionality (codimensionality, structure, embeddedness), partiality (differentiation, commutativity, simultaneity), and shifting representation (factorization, ideal classes, reciprocity). Each of these functions allows for a different, creative, and potentially critical representation of global political concepts, events, groupings, and relationships. This is how concepts are to be looked at: as manifolds. With such a dimensional understanding of concept-formation, it is possible to deal with complex interactions of like entities, and interactions of unlike entities. Critical theorists have emphasized the importance of such complexity in representation a number of times, speaking about it in terms compatible with mathematical methods if not mathematically. For example, Foucault's declaration that 'practicing criticism is a matter of making facile gestures difficult' both reflects and is reflected in many critical theorists' projects of revealing the complexity in (apparently simple) concepts deployed in global politics. This leads to a shift in the concept of danger as well, where danger is not an objective condition but "an effect of interpretation". Critical thinking about how-possible questions reveals a complexity to the concept of the state which is often overlooked in traditional analyses, sending a wave of added complexity through other concepts as well. This work of seeking complexity serves one of the major underlying functions of critical theorizing: finding invisible injustices in (modernist, linear, structuralist) givens in the operation and analysis of global politics.

In a geometric sense, this complexity could be thought about as multidimensional mapping. In theoretical geometry, the process of mapping conceptual spaces is not primarily empirical; its purpose is representing and reading the relationships between information, including identification, similarity, differentiation, and distance. The reason for defining topological spaces in math, the essence of the definition, is that there is no absolute scale for describing the distance or relation between certain points, yet it makes sense to say that an (infinite) sequence of points approaches some other point (but, again, there is no way to describe how quickly or from what direction one might be approaching). This seemingly weak relationship, which is defined purely 'locally', i.e., in a small locale around each point, is often surprisingly powerful: using only the relationship of approaching parts, one can distinguish between, say, a balloon, a sheet of paper, a circle, and a dot.
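The balloon/sheet/circle/dot claim can be made concrete with a toy computation. In the sketch below (an illustration added here, not part of the original argument), each shape is modelled as a simplicial complex, and the Euler characteristic, an invariant assembled entirely from local incidence data, already separates most of the four; the dot and the sheet share χ = 1 and are then told apart by dimension.

```python
from itertools import combinations

def euler_characteristic(top_simplices):
    """Alternating sum (-1)^dim over all faces of the given top simplices."""
    faces = set()
    for s in top_simplices:
        for k in range(1, len(s) + 1):
            faces.update(combinations(sorted(s), k))
    return sum((-1) ** (len(f) - 1) for f in faces)

dot = [(0,)]                                             # a single point
circle = [(0, 1), (1, 2), (0, 2)]                        # boundary of a triangle
sheet = [(0, 1, 2)]                                      # filled triangle (disk)
balloon = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]   # tetrahedron boundary (sphere)

for name, cx in [("dot", dot), ("circle", circle), ("sheet", sheet), ("balloon", balloon)]:
    print(name, euler_characteristic(cx))
# dot 1, circle 0, sheet 1, balloon 2
```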

To each delineated concept one should distinguish and associate a topological space, in a (necessarily) non-explicit yet definite manner. Whenever one has a relationship between concepts (here we think of the primary relationship as being that of constitution, but not restrictively), we 'specify' a function (or inclusion, or relation) between the topological spaces associated to the concepts. In these terms, a conceptual space is in essence a multidimensional space in which the dimensions represent qualities or features of that which is being represented. Such an approach can be leveraged for thinking about conceptual components, dimensionality, and structure. In these terms, dimensions can be thought of as properties or qualities, each with their own (often multidimensional) properties or qualities. Since a key goal of the modeling of conceptual space is representation, a key (mathematical and theoretical) goal of concept-space mapping is

associationism, where associations between different kinds of information elements carry the main burden of representation. (Conceptual Spaces as a Framework for Knowledge Representation)

To this end,

objects in conceptual space are represented by points, in each domain, that characterize their dimensional values. (A Concept Geometry for Conceptual Spaces)

These dimensional values can be arranged in relation to each other; as Gardenfors explains,

distances represent degrees of similarity between objects represented in space and therefore conceptual spaces are "suitable for representing different kinds of similarity relation". (Concept)

These similarity relationships can be explored across ideas of a concept and across contexts, but also over time, since "with the aid of a topological structure, we can speak about continuity, e.g., a continuous change", a possibility which can be found only in treating concepts as topological structures, and not in linguistic descriptions or set-theoretic representations.
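A minimal numerical sketch of this picture, with the dimensions, coordinates, and decay constant all invented for illustration: objects are points in quality dimensions, and similarity is a decreasing function of distance (Gardenfors, following Shepard, suggests an exponentially decaying one).

```python
import math

# Toy conceptual space with three quality dimensions: (hue, sweetness, size).
# The objects and their coordinates are made up for illustration.
objects = {
    "lemon":  (0.15, 0.2, 0.3),
    "orange": (0.08, 0.6, 0.4),
    "cherry": (0.01, 0.7, 0.1),
}

def similarity(a, b, c=1.0):
    """Similarity decays exponentially with Euclidean distance in the space."""
    return math.exp(-c * math.dist(a, b))

for x, y in [("lemon", "orange"), ("lemon", "cherry"), ("orange", "cherry")]:
    print(f"{x}~{y}: {similarity(objects[x], objects[y]):.3f}")
```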

Anti-Haecceitism. Thought of the Day 45.0

[Displayed formula missing: the definitions of the worlds w and w′ compared below. Conc is the property of being concurrent, Red is the property of definiteness, and Heavy is the property of vividness.]

In the language of modern metaphysics, w and w′ above are qualitatively indiscernible. And anti-haecceitism is the doctrine which says that qualitatively indiscernible worlds are identical. So, we immediately see a problem looming.

But why accept anti-haecceitism? The best reasons focus on physics. Just as the debate between Leibniz and Newton's followers focused on physics, the strongest arguments against haecceitism still come from physics. Anti-haecceitism as understood here concerns the identity of indiscernible ("isomorphic") worlds or "situations" or "states". In many areas of physics, including statistical physics, spacetime physics, and quantum theory, the physics tells us that certain "indiscernible situations" are in fact literally identical.

A simple example comes from the statistical physics of indiscernible particles. Consider a box, partitioned into a left side and a right side (L and R), and containing two indiscernible particles. One naively thinks this permits four distinct states or situations: both in L; both in R; the first in L and the second in R; and the first in R and the second in L. However, physics tells us that there are only three states, not four, which we might denote S_{2,0}, S_{1,1}, S_{0,2}. The state S_{1,1}, i.e., where "one is in L and one is in R", is a single state; there are not two distinct possibilities. The correct description of S_{1,1} uses existential quantifiers:

∃x ∃y (x ≠ y ∧ Lx ∧ Ry)

One can (syntactically) introduce labels for the particles, say a, b. One can do this in two ways, to obtain:

a ≠ b ∧ La ∧ Rb

b ≠ a ∧ Lb ∧ Ra

But this labelling is purely representational, and is not in any way fixed by the physical state S_{1,1} itself. So there are distinct indiscernible objects in "situations" or states, yet no distinct possibilities corresponding to the two labellings.
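The counting itself is easy to mechanize. This small sketch (illustrative only; the labels are exactly the representational devices the text warns about) enumerates labelled assignments and then collapses them to occupation numbers, recovering four labelled configurations but only three physical states.

```python
from itertools import product

sides = ["L", "R"]
n_particles = 2

# Labelled (distinguishable) counting: each assignment of sides to the
# labelled particles is its own configuration -> 4 of them.
labelled = list(product(sides, repeat=n_particles))

# Indiscernible counting: a state is just the occupation numbers (n_L, n_R),
# so the two labellings of "one in L, one in R" collapse -> 3 states.
states = {(cfg.count("L"), cfg.count("R")) for cfg in labelled}

print(len(labelled), labelled)      # 4 [('L','L'), ('L','R'), ('R','L'), ('R','R')]
print(len(states), sorted(states))  # 3 [(0, 2), (1, 1), (2, 0)]
```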

From spacetime physics, consider the principle sometimes called “Leibniz equivalence” (Norton). A formulation (but under a different name) is given in Wald’s monograph General Relativity. Wald’s formulation of Leibniz equivalence is, essentially, this:

isomorphic spacetime models represent the same physical world.

For example, let

S = (M, g, …)

be a spacetime model with carrier set |M| of points. (i.e., M is the underlying manifold.) Then Leibniz Equivalence implies:

If π : |M| → |M| is any bijection, then πS and S represent the same world.

There are many other examples, including examples from quantum theory. Consequently, independently of our pre-theoretic considerations concerning modality, it seems to me that our best physics – statistical physics, relativity and quantum theory – is telling us that anti-haecceitism is true: given a structure A which represents a world w, any permuted copy πA should somehow represent the same world, w.
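A finite stand-in for Leibniz equivalence can be checked mechanically. In this toy sketch (a tiny relational structure invented here, nothing like a full spacetime model), applying a bijection π to the carrier set produces an isomorphic copy, and any label-independent, "qualitative" invariant, such as the multiset of vertex degrees, agrees on the two copies.

```python
# A miniature model: a carrier set of "points" with one binary relation,
# standing in for (M, g, ...). The permutation pi plays the role of a
# bijection |M| -> |M|.
points = {0, 1, 2, 3}
edges = {(0, 1), (1, 2), (2, 3)}
pi = {0: 2, 1: 0, 2: 3, 3: 1}

def permute(pi, edges):
    """The permuted model pi(S): relabel every point by the bijection."""
    return {(pi[u], pi[v]) for (u, v) in edges}

def degree_profile(edges):
    """A label-independent invariant: the sorted multiset of degrees."""
    deg = {p: 0 for p in points}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sorted(deg.values())

# The labelled edge sets differ, but the qualitative invariant coincides.
print(permute(pi, edges) != edges)                                  # True
print(degree_profile(edges) == degree_profile(permute(pi, edges)))  # True
```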

Abrupt rise of new machine ecology beyond human response time


Figure: Empirical transition in the size distribution of ultrafast extreme events (UEEs) with duration above a threshold t, as a function of t. (A) Scale of times; 650 ms is the time it takes a chess grandmaster to discern that the king is in checkmate. Plots show the best-fit power-law exponent (black) and goodness-of-fit (blue) for the size distributions of (B) crashes and (C) spikes, as shown in the inset schematic.

Society’s techno-social systems are becoming ever faster and more computer-orientated. However, far from simply generating faster versions of existing behaviour, we show that this speed-up can generate a new behavioural regime as humans lose the ability to intervene in real time. Analyzing millisecond-scale data for the world’s largest and most powerful techno-social system, the global financial market, we uncover an abrupt transition to a new all-machine phase characterized by large numbers of subsecond extreme events. The proliferation of these subsecond events shows an intriguing correlation with the onset of the system-wide financial collapse in 2008. Findings are consistent with an emerging ecology of competitive machines featuring ‘crowds’ of predatory algorithms, and highlight the need for a new scientific theory of subsecond financial phenomena.
