Two Conceptions of Morphogenesis – World as a Dense Evolutionary Plasma of Perpetual Differentiation and Innovation. Thought of the Day 57.0


Sanford Kwinter offers two conceptions of morphogenesis: one is appropriate to a world capable of sustaining transcendental ontological categories, while the other is inherent in a world of perfect immanence. According to the classical, hylomorphic model, a necessarily limited number of possibilities (forms or images) are reproduced (mirrored in reality) over a substratum, in a linear timeline. The insufficiency of such a model, however, is evident in its inability to find a place for novelty: something either is or is not possible. The model cannot account for new possibilities, and it fails to confront the inevitable imperfections and degradations evident in all of its realizations. It is indeed the inevitability of corruption and imperfection inherent in classical creation that points to the second mode of morphogenesis. This mode depends on an understanding of the world as a ceaseless pullulation and unfolding, a dense evolutionary plasma of perpetual differentiation and innovation. In this world forms are not carried over from some transcendent realm; instead, singularities and events emerge from within a rich plasma through the continual and dynamic interaction of forces. The morphogenetic process at work in such a world is not one whereby an active subject realizes forms from a set of transcendent possibilities, but rather one in which virtualities are actualized through the constant movement inherent in the very forces that compose the world. Virtuality is understood as the free difference or singularity, not yet combined with other differences into a complex ensemble or salient form. It is of course this immanentist description of the world, and its attendant mode of morphogenesis, that is viable. There is no threshold beneath which classical objects, states, or relations cease to have meaning yet beyond which they are endowed with a full pedigree and privileged status. Indeed, it is the nature of real time to ensure a constant production of innovation and change in all conditions. This is evidenced precisely by the imperfections introduced in the act of realizing a form. The classical mode of morphogenesis, then, has to be understood as a false model imposed on what is actually a rich, perpetually transforming universe. But the sort of novelty which the enactment of the classical model produces, a novelty which from its own perspective must be construed as a defect, is not a primary concern if the novelty is registered as having emerged from a complex collision of forces. Above all, it is a novelty uncontaminated by procrustean notions of subjectivity and creation.

Categorial Logic – Paracompleteness versus Paraconsistency. Thought of the Day 46.2


The fact that logic is content-dependent opens a new horizon concerning the relationship of logic to ontology (or objectology). Although the classical concepts of a priori and a posteriori propositions (or judgments) have lately become rather blurred, one fact is undeniable: the distant origin of mathematics lies in empirical, practical knowledge, yet nobody can claim that higher mathematics is empirical.

Thanks to category theory, it is an established fact that certain very important logical systems, namely the classical and the intuitionistic (with all its axiomatically enriched subsystems), can be interpreted through topoi. This possibility permits us to consider topoi, be it in a Noneist or in a Platonist way, as universes, that is, as ontologies or as objectologies. Now, the association of a topos with its corresponding ontology (or objectology) is quite different from the association of theoretical terms with empirical concepts. Within the frame of a physical theory, if a new fact is discovered in the laboratory, it must be explained through logical deduction (with the due initial conditions and some other details). If a logical conclusion is inferred from the fundamental hypotheses, it must be corroborated through empirical observation. And if the corroboration fails, the theory must be readjusted or even rejected.
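To see what such a topos-interpretation looks like in the simplest non-classical case, consider the topos of sheaves on the Sierpiński space, whose truth values form the three-element Heyting algebra ⊥ < ½ < ⊤. The Lean 4 sketch below is our illustration, not from the text (the names Omega, neg, and join are ours); it checks that tertium non datur fails at the intermediate value:

```lean
-- Truth values of sheaves on the Sierpiński space: the three-element
-- Heyting algebra ⊥ < ½ < ⊤ (here bot < mid < top).
inductive Omega
  | bot
  | mid
  | top
  deriving DecidableEq
open Omega

-- Heyting negation: ¬a is the largest element whose meet with a is ⊥;
-- in particular ¬mid = bot, since only bot meets mid in bot.
def neg : Omega → Omega
  | bot => top
  | mid => bot
  | top => bot

-- join (disjunction) in the order bot < mid < top
def join : Omega → Omega → Omega
  | bot, y => y
  | x, bot => x
  | mid, mid => mid
  | _, _ => top

-- excluded middle fails at the intermediate truth value:
-- mid ∨ ¬mid = mid ∨ bot = mid ≠ top
example : join mid (neg mid) ≠ top := by decide
```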

In the case of categorial logic, the situation has some similarity with the former case; but we must be careful not to be influenced by apparent coincidences. If we add the tertium non datur as an axiom to formalized intuitionistic logic, we obtain classical logic. That is, we can formally pass from the one to the other just by adding or suppressing the tertium. This fact could induce us to think that, just as in physics, if a logical theory, say intuitionistic logic, cannot include a true proposition, then its axioms must be readjusted to make it possible to include it among its theorems. But there is a radical difference: in the semantics of intuitionistic logic, and of any logic, the point of departure is not a set of hypothetical propositions that must be corroborated through experiment; it is a set of propositions that are true under some interpretation. This set can be axiomatic or it can consist of rules of inference, but the theorems of the system are not submitted to verification. The derived propositions are just true, and nothing more. The logician surely tries to find new true propositions but, when they are found (through some effective method, which can be intuitive, as it is in Gödel’s theorem), there are only three possible cases: they can be formally derivable, they can be formally refutable (i.e., their negation is derivable), or they can be neither derivable nor refutable, that is, undecidable. But undecidability does not induce the logician to readjust or to reject the theory. Nobody tries to add axioms or to diminish them. In physics, when we are handling a theory T and a new describable phenomenon is found that cannot be deduced from the axioms (plus initial or some other conditions), T must be readjusted or even rejected. A classical logician will never think of changing the axioms or rules of inference of classical logic because some proposition turns out to be undecidable in it. And an intuitionist logician would not care at all to add the tertium to the axioms of Heyting’s system just because it cannot be derived within it.
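As a concrete instance of this formal passage (a minimal sketch of ours, not the author’s), the following Lean 4 fragment derives double negation elimination, a principle underivable in the intuitionistic core, from the tertium non datur taken as a hypothesis:

```lean
-- With excluded middle for p available as a hypothesis, double negation
-- elimination for p becomes derivable over the intuitionistic core.
theorem dne_of_em (p : Prop) (em : p ∨ ¬p) : ¬¬p → p := fun hnn =>
  match em with
  | Or.inl hp  => hp                 -- p holds outright
  | Or.inr hnp => absurd hnp hnn     -- ¬p contradicts ¬¬p
```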

The foregoing considerations sufficiently show that in logic and mathematics there is something that, with full right, can be called “a priori”. And although, as we have said, we must acknowledge that the concepts of a priori and a posteriori are not clear-cut, in some cases we can rightly speak of synthetical a priori knowledge. For instance, the Gödel proposition that affirms its own underivability is synthetical and a priori. But there are other propositions, for instance mathematical induction, that can also be considered synthetical and a priori. And a great many mathematical definitions that are not abbreviations are synthetical. For instance, the definition of a monoid action is synthetical (and, of course, a priori) because the concept of a monoid does not have among its characterizing traits the concept of an action, and vice versa.
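The monoid-action example can be made explicit. In the self-contained Lean 4 sketch below (the names MonoidStr and ActionOn are ours), the action adjoins data and laws that are nowhere contained in the characterizing traits of a monoid:

```lean
-- A monoid structure: carrier M with multiplication, unit, and laws.
structure MonoidStr (M : Type) where
  mul : M → M → M
  one : M
  mul_assoc : ∀ a b c, mul (mul a b) c = mul a (mul b c)
  one_mul : ∀ a, mul one a = a
  mul_one : ∀ a, mul a one = a

-- An action of the monoid on a type X: genuinely new data (act) and
-- new axioms, so in the author's terms the definition is synthetical,
-- not an abbreviation.
structure ActionOn (M X : Type) (S : MonoidStr M) where
  act : M → X → X
  one_act : ∀ x, act S.one x = x
  mul_act : ∀ a b x, act (S.mul a b) x = act a (act b x)
```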

Categorial logic is the deepest knowledge of logic that has ever been achieved, but its scope does not encompass the whole field of logic. There are other kinds of logic that are also important and, if we intend to know, as much as possible, what logic is and how it is related to mathematics and ontology (or objectology), we must pay attention to them. From a mathematical and a philosophical point of view, the most important non-paracomplete logical systems are the paraconsistent ones. These systems are something like a dual to paracomplete logics. They are employed in inconsistent theories without producing triviality (in this sense relevant logics are also paraconsistent). In intuitionistic logic there are interpretations that, with respect to some topoi, include two contradictory propositions that are both false; whereas in paraconsistent systems we can find interpretations in which there are two contradictory propositions that are both true.
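A standard concrete instance (our choice of system; the text names none) is Priest’s three-valued logic LP, in which the middle value is designated, i.e. counts as true. The sketch below exhibits an interpretation where a proposition and its negation are both true:

```lean
-- Priest's LP: three values f < both < t, with both and t designated.
inductive LP
  | f
  | both
  | t
  deriving DecidableEq
open LP

-- negation fixes the middle value
def lnot : LP → LP
  | f => t
  | both => both
  | t => f

-- designated values are the "true" ones
def designated : LP → Bool
  | f => false
  | both => true
  | t => true

-- a proposition valued `both` is true together with its negation
example : designated both = true ∧ designated (lnot both) = true := by
  decide
```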

There is, though, a difference between paracompleteness and paraconsistency. Insofar as mathematics is concerned, paracomplete systems had to be coined to cope with very deep problems. The paraconsistent ones, on the other hand, although they have been applied with success to mathematical theories, were conceived for purely philosophical and, in some cases, even political and ideological motivations. The common point of them all was the need to construct a logical system able to cope with contradictions; that means: to have at one’s disposal a deductive method which offered the possibility of deducing consistent conclusions from inconsistent premisses. Of course, the inconsistency of the premisses had to comply with some (although very wide) conditions to avoid triviality. But these conditions made it possible to cope with paradoxes or antinomies with precision and mathematical sense.
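Continuing the LP sketch (definitions repeated so the snippet stands alone), the failure of explosion can be checked directly: a designated contradiction does not drag an arbitrary proposition along with it, which is exactly the non-triviality just described.

```lean
-- Priest's LP again, now with conjunction as the meet of f < both < t.
inductive LP
  | f
  | both
  | t
  deriving DecidableEq
open LP

def lnot : LP → LP
  | f => t
  | both => both
  | t => f

def land : LP → LP → LP
  | f, _ => f
  | _, f => f
  | both, _ => both
  | _, both => both
  | t, t => t

def designated : LP → Bool
  | f => false
  | _ => true

-- countermodel to explosion: value p as `both` and q as `f`;
-- the premiss p ∧ ¬p is designated (true) while q is not
example : designated (land both (lnot both)) = true ∧
          designated f = false := by
  decide
```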

But, philosophically, paraconsistent logic has another very important property: it can be used in a spontaneous way to formalize naive set theory, that is, the kind of theory that pre-Zermelian mathematicians had always employed. And it is, no doubt, important to try to develop mathematics within the frame of naive, spontaneous mathematical thought, without falling into the artificiality of modern set theory. The formalization of this naive way of mathematical thinking, although every formalization is unavoidably artificial, has opened the possibility of coping with dialectical thought.

Hyperstructures


In many areas of mathematics there is a need for methods that take local information and properties to global ones. This is mostly done by gluing techniques using open sets in a topology and associated presheaves. The presheaves form sheaves when the local pieces fit together into global ones. This has been generalized to categorical settings based on Grothendieck topologies and sites.
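For orientation, the gluing condition being generalized can be stated in the standard way: given a presheaf $F$ on a space and an open cover $U = \bigcup_i U_i$, the sheaf condition reads

```latex
\[
  s_i\big|_{U_i \cap U_j} = s_j\big|_{U_i \cap U_j}
  \ \text{for all } i, j
  \quad\Longrightarrow\quad
  \exists!\, s \in F(U) \ \text{such that}\ s\big|_{U_i} = s_i
  \ \text{for all } i,
\]
```

where the $s_i \in F(U_i)$ are the given local sections; Grothendieck topologies replace open covers by covering families in a category.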

The general problem of going from local to global situations is important also outside of mathematics. Consider collections of objects where we may have information or properties of objects or subcollections, and we want to extract global information.

This is where hyperstructures are very useful. If we are given a collection of objects that we want to investigate, we put a suitable hyperstructure on it. Then we may assign “local” properties at each level, and by the generalized Grothendieck topology for hyperstructures we can glue both within levels and across levels in order to get global properties. Such an assignment of global properties or states we call a globalizer.

To illustrate our intuition, let us think of a society organized into a hyperstructure. Through levelwise democratic elections, leaders are elected, and the democratic process will eventually yield a “global” leader. In this sense democracy may be thought of as a sociological (or political) globalizer. This applies to decision making as well.
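To make this intuition concrete, here is a minimal toy sketch (our illustration: the nested-list encoding of bonds, the globalize function, and the majority rule are all hypothetical choices, not definitions from the theory):

```python
# A toy "globalizer" (hypothetical illustration): a hyperstructure is
# encoded as nested lists, where a level-n bond binds a family of
# level-(n-1) bonds and the level-0 entries are raw objects with states.
from typing import Callable, Union

Bond = Union[int, list]  # a leaf state, or a bond binding lower bonds

def globalize(bond: Bond, glue: Callable[[list[int]], int]) -> int:
    """Assign a state to a bond by gluing the states of what it binds."""
    if isinstance(bond, list):          # higher-level bond: glue below
        return glue([globalize(b, glue) for b in bond])
    return bond                         # level-0 object: its own state

def majority(xs: list[int]) -> int:
    """Levelwise 'democratic' glue: 1 iff a strict majority voted 1."""
    return int(sum(xs) > len(xs) / 2)

# two districts of three voters each: a level-2 bond of level-1 bonds
society = [[1, 0, 1], [0, 0, 1]]
print(globalize(society, majority))     # 0: district results are 1 and 0,
                                        # and a 1-1 tie is no majority
```

In the same toy model, flipping a single level-0 entry and re-running globalize can change the global outcome; this is the bottom-up, low-resource route to changing a global state discussed below.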

In “frustrated” spin systems in physics one may possibly think of the “frustration” being resolved by creating new levels and a suitable globalizer assigning a global state to the system, corresponding to various exotic physical conditions like, for example, a kind of hyperstructured spin glass or magnet. Acting on both classical and quantum fields in physics may be facilitated by putting a hyperstructure on them.

There are also situations where we are given an object or a collection of objects with assignments of properties or states. To achieve a certain goal we need to change, let us say, the state. This may be very difficult and require a lot of resources. The idea is then to put a hyperstructure on the object or collection. By this we create levels of locality that we can glue together by a generalized Grothendieck topology.

It may often be much easier and require less resources to change the state at the lowest level and then use a globalizer to achieve the desired global change. Often it may be important to find a minimal hyperstructure needed to change a global state with minimal resources.

Again, to support our intuition let us think of the democratic society example. To change the global leader directly may be hard, but starting a “political” process at the lower individual levels may not require heavy resources and may propagate through the democratic hyperstructure leading to a change of leader.

Hence, hyperstructures facilitate local-to-global processes, but also global-to-local processes; often these are called bottom-up and top-down processes. In the global-to-local or top-down process we put a hyperstructure on an object or system in such a way that it is represented by a top-level bond in the hyperstructure. This means that to an object or system X we assign a hyperstructure

$H = \{B_0, B_1, \ldots, B_n\}$ in such a way that $X = b_n$ for some $b_n \in B_n$ binding a family $\{b^{n-1}_{i_1}\}$ of $B_{n-1}$-bonds, each $b^{n-1}_{i_1}$ binding a family $\{b^{n-2}_{i_2}\}$ of $B_{n-2}$-bonds, etc. down to $B_0$-bonds in $H$. Similarly for a local-to-global process: to a system, set or collection of objects $X$, we assign a hyperstructure $H$ such that $X = B_0$. A hyperstructure on a set (space) will create “global” objects, properties and states like what we see in organized societies, organizations, organisms, etc. The hyperstructure is the “glue” or the “law” of the objects. In a way, the globalizer creates a kind of higher-order “condensate”. Hyperstructures represent a conceptual tool for translating organizational ideas like, for example, democracy, political parties, etc. into a mathematical framework where new types of arguments may be carried through.