Sanford Kwinter distinguishes two conceptions of morphogenesis: one appropriate to a world capable of sustaining transcendental ontological categories, the other inherent in a world of perfect immanence. According to the classical, hylomorphic model, a necessarily limited number of possibilities (forms or images) are reproduced (mirrored in reality) over a substratum, in a linear timeline. The insufficiency of such a model, however, is evident in its inability to find a place for novelty. Something either is or is not possible. This model cannot account for new possibilities, and it fails to confront the inevitable imperfections and degradations evident in all of its realizations. It is indeed the inevitability of corruption and imperfection inherent in classical creation that points to the second mode of morphogenesis. This mode depends on an understanding of the world as a ceaseless pullulation and unfolding, a dense evolutionary plasma of perpetual differentiation and innovation. In this world forms are not carried over from some transcendent realm; instead, singularities and events emerge from within a rich plasma through the continual and dynamic interaction of forces. The morphogenetic process at work in such a world is not one whereby an active subject realizes forms from a set of transcendent possibilities, but rather one in which virtualities are actualized through the constant movement inherent in the very forces that compose the world. Virtuality is understood as the free difference or singularity, not yet combined with other differences into a complex ensemble or salient form. It is of course this immanentist description of the world and its attendant mode of morphogenesis that are viable. There is no threshold beneath which classical objects, states, or relations cease to have meaning yet beyond which they are endowed with a full pedigree and privileged status. Indeed, it is the nature of real time to ensure a constant production of innovation and change in all conditions. This is evidenced precisely by the imperfections introduced in any act of realizing a form. The classical mode of morphogenesis, then, has to be understood as a false model imposed on what is actually a rich, perpetually transforming universe. But the sort of novelty which the enactment of the classical model produces, a novelty which from its own perspective must be construed as a defect, is not a primary concern if the novelty is registered as having emerged from a complex collision of forces. Above all, it is a novelty uncontaminated by procrustean notions of subjectivity and creation.
Categorial Logic – Paracompleteness versus Paraconsistency. Thought of the Day 46.2
The fact that logic is content-dependent opens a new horizon concerning the relationship of logic to ontology (or objectology). Although the classical concepts of a priori and a posteriori propositions (or judgments) have lately become rather blurred, there is an undeniable fact: the distant origin of mathematics certainly lies in empirical, practical knowledge, but nobody can claim that higher mathematics is empirical.
Thanks to category theory, it is an established fact that some very important logical systems, namely the classical and the intuitionistic (with all their axiomatically enriched subsystems), can be interpreted through topoi. And this possibility permits us to consider topoi, be it in a Noneist or in a Platonist way, as universes, that is, as ontologies or as objectologies. Now, the association of a topos with its corresponding ontology (or objectology) is quite different from the association of theoretical terms with empirical concepts. Within the frame of a physical theory, if a new fact is discovered in the laboratory, it must be explained through logical deduction (with the due initial conditions and some other details). If a logical conclusion is inferred from the fundamental hypotheses, it must be corroborated through empirical observation. And if the corroboration fails, the theory must be readjusted or even rejected.
In the case of categorial logic, the situation has some similarity with the former case; but we must be careful not to be influenced by apparent coincidences. If we add, as an axiom, the tertium non datur to formalized intuitionistic logic, we obtain classical logic. That is, we can formally pass from the one to the other just by adding or suppressing the tertium. This fact could induce us to think that, just as in physics, if a logical theory, say intuitionistic logic, cannot include a true proposition, then its axioms must be readjusted to make it possible to include it among its theorems. But there is a radical difference: in the semantics of intuitionistic logic, and of any logic, the point of departure is not a set of hypothetical propositions that must be corroborated through experiment; it is a set of propositions that are true under some interpretation. This set can be axiomatic or it can consist of rules of inference, but the theorems of the system are not submitted to verification. The derived propositions are just true, and nothing more. The logician surely tries to find new true propositions but, when they are found (through some effective method, which can be intuitive, as it is in Gödel's theorem), there are only three possible cases: they can be formally derivable, their negations can be formally derivable (they are refutable), or they can be neither formally derivable nor formally refutable, that is, undecidable. But undecidability does not induce the logician to readjust or to reject the theory. Nobody tries to add axioms or to diminish them. In physics, when we are handling a theory T and a new describable phenomenon is found that cannot be deduced from the axioms (plus initial or some other conditions), T must be readjusted or even rejected. A classical logician will never think of changing the axioms or rules of inference of classical logic because some proposition is undecidable in it. And an intuitionist logician would not care at all to add the tertium to the axioms of Heyting's system because it cannot be derived within it.
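A standard illustration of this formal passage, not drawn from the text above but well known, is double negation elimination:

```latex
\[
\mathrm{Int} \;\nvdash\; \neg\neg p \rightarrow p,
\qquad\text{whereas}\qquad
\mathrm{Int} + \{\varphi \vee \neg\varphi\} \;\vdash\; \neg\neg p \rightarrow p .
\]
```

Here Int stands for Heyting's intuitionistic propositional calculus; adding the excluded-middle schema (the tertium) makes the formula derivable by cases on p ∨ ¬p, and the resulting system is exactly classical propositional logic.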
The foregoing considerations sufficiently show that in logic and mathematics there is something that, with full right, can be called "a priori". And although, as we have said, we must acknowledge that the concepts of a priori and a posteriori are not clear-cut, in some cases we can rightly speak of synthetic a priori knowledge. For instance, Gödel's proposition that affirms its own underivability is synthetic and a priori. But there are other propositions, for instance mathematical induction, that can also be considered as synthetic and a priori. And a great many mathematical definitions that are not mere abbreviations are synthetic. For instance, the definition of a monoid action is synthetic (and, of course, a priori) because the concept of a monoid does not have among its characterizing traits the concept of an action, and vice versa.
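For reference, the definition just mentioned can be spelled out in its standard form (the formulation below is the usual one, not quoted from the text): a left action of a monoid (M, ·, e) on a set X is a map satisfying two equations, neither of which is contained in the concept of a monoid alone.

```latex
\[
\alpha : M \times X \longrightarrow X,
\qquad
\alpha(e, x) = x,
\qquad
\alpha(m \cdot n, x) = \alpha\bigl(m, \alpha(n, x)\bigr)
\quad \text{for all } m, n \in M,\ x \in X .
\]
```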
Categorial logic is the deepest knowledge of logic that has ever been achieved. But its scope does not encompass the whole field of logic. There are other kinds of logic that are also important and, if we intend to know, as much as possible, what logic is and how it is related to mathematics and ontology (or objectology), we must pay attention to them. From a mathematical and a philosophical point of view, the most important non-paracomplete logical systems are the paraconsistent ones. These systems are something like a dual to paracomplete logics. They are employed in inconsistent theories without producing triviality (in this sense relevant logics are also paraconsistent). In intuitionist logic there are interpretations that, with respect to some topoi, include two contradictory propositions that are both false; whereas in paraconsistent systems we can find interpretations in which there are two contradictory propositions that are both true.
There is, though, a difference between paracompleteness and paraconsistency. Insofar as mathematics is concerned, paracomplete systems had to be devised to cope with very deep problems. The paraconsistent ones, on the other hand, although they have been applied with success to mathematical theories, were conceived for purely philosophical and, in some cases, even for political and ideological motivations. The common point of them all was the need to construct a logical system able to cope with contradictions. That means: to have at one's disposal a deductive method which offered the possibility of deducing consistent conclusions from inconsistent premisses. Of course, the inconsistency of the premisses had to comply with some (although very wide) conditions to avoid triviality. But these conditions made it possible to cope with paradoxes or antinomies with precision and mathematical sense.
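As a toy illustration of non-explosive consequence, here is a minimal sketch using Priest's three-valued Logic of Paradox (LP) as a stand-in; the text does not name a specific paraconsistent system, and the encoding below is my own.

```python
# Minimal sketch of a paraconsistent consequence relation, using Priest's
# three-valued Logic of Paradox (LP) purely for illustration.
# Truth values: 0 (false), 0.5 ("both true and false"), 1 (true);
# designated (i.e. "true enough") values: {0.5, 1}.

DESIGNATED = {0.5, 1.0}

def neg(a):
    # LP negation flips 0 and 1 and fixes the middle value
    return 1.0 - a

def entails(premises, conclusion, valuation):
    """Single-valuation check: if every premise is designated,
    the conclusion must be designated too."""
    if all(valuation(p) in DESIGNATED for p in premises):
        return valuation(conclusion) in DESIGNATED
    return True

atoms = {"A": 0.5, "B": 0.0}   # A is a "true contradiction", B is plainly false

def val(formula):
    if isinstance(formula, tuple) and formula[0] == "not":
        return neg(val(formula[1]))
    return atoms[formula]

# A and not-A are both designated, yet B does not follow:
# explosion fails, so the inconsistent premiss set does not trivialize the theory.
print(entails(["A", ("not", "A")], "B", val))   # False
```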
But, philosophically, paraconsistent logic has another very important property: it is used in a spontaneous way to formalize naive set theory, that is, the kind of theory that pre-Zermelian mathematicians had always employed. And it is, no doubt, important to try to develop mathematics within the frame of naive, spontaneous mathematical thought, without falling into the artificiality of modern set theory. The formalization of the naive way of mathematical thinking, although every formalization is unavoidably artificial, has opened the possibility of coping with dialectical thought.
Statistical Mechanics. Note Quote.
Central to statistical mechanics is the notion of a state space. A state space is a space of possible states of the world at a time. All of the possibilities in this space are alike with regard to certain static properties, such as the spatiotemporal dimensions of the system, the number of particles, and the masses of these particles. The individual elements of this space are picked out by certain dynamic properties: the locations and momenta of the particles. In versions of classical statistical mechanics like that proposed by David Albert, one of the statistical mechanical laws is a constraint on the initial entropy of the universe. On such theories the space of classical statistical mechanical worlds (and the state spaces that partition it) will only contain worlds whose initial macroconditions are of a suitably low entropy.
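A toy rendering of this division of labour between static and dynamic properties (the class names below are invented for the illustration, not taken from the text):

```python
# Static properties fix which state space we are in; dynamic properties
# pick out an individual point (microstate) of that space.
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass(frozen=True)
class StaticProperties:
    n_particles: int
    masses: Tuple[float, ...]        # one mass per particle
    dimensions: Vec3                 # spatial extent of the system

@dataclass(frozen=True)
class Microstate:
    positions: Tuple[Vec3, ...]      # q_i for each particle
    momenta: Tuple[Vec3, ...]        # p_i for each particle

# All microstates sharing one StaticProperties value make up one state space;
# different static properties correspond to different cells of the partition.
```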
In classical statistical mechanics these static and dynamic properties determine the state of the world at a time. Classical mechanics is standardly taken to be deterministic: the state of the world at a time determines the history of the system. Earman, Xia, Norton and others have offered counterexamples to the claim that classical mechanics is deterministic. Two comments are in order, however. First, these cases spell trouble for the standard distinction between dynamic and static properties employed by classical statistical mechanics. For example, in some of these cases new particles will unpredictably zoom in from infinity. Since this leads to a change in the number of particles in the system, it would seem that the number of particles cannot properly be understood as a static property. Second, although no proof of this exists, the prevailing opinion is that these indeterministic cases form a set of Lebesgue measure zero. So in classical statistical mechanics each point in the state space corresponds to a unique history, i.e., to a possible world. We can therefore take a state space to be a set of possible worlds, and the state space and its subsets to be propositions. The state spaces form a partition of the classical statistical mechanical worlds, dividing them into groups of worlds that share the relevant static properties.
Given a state space, we can provide the classical statistical mechanical probabilities. Let m be the Liouville measure, the Lebesgue measure over the canonical representation of the state space, and let K be a subset of the state space. The classical statistical mechanical probability of A relative to K is m(A∩K)/m(K). Note that statistical mechanical probabilities aren’t defined for all object propositions A and relative propositions K. Given the above formula, two conditions must be satisfied for the chance of A relative to K to be defined. Both m(A ∩ K) and m(K) must be defined, and the ratio of m(A ∩ K) to m(K) must be defined.
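As a purely numerical illustration of the ratio m(A ∩ K)/m(K), here is a sketch that estimates such a ratio for two made-up regions of a two-dimensional toy state space; the regions and the sampling box are invented for the example.

```python
# A minimal numerical sketch of the ratio m(A ∩ K) / m(K) for a toy
# two-dimensional "state space"; it illustrates the formula only, not the
# construction used in the text.
import random

random.seed(0)

def in_K(q, p):            # relative proposition K: a region of the state space
    return abs(q) < 1.0 and abs(p) < 1.0

def in_A(q, p):            # object proposition A: another region
    return q + p > 0.5

# Estimate Lebesgue measures of K and A ∩ K by uniform sampling over a box
# that contains both regions.
N = 200_000
hits_K, hits_AK = 0, 0
for _ in range(N):
    q, p = random.uniform(-2, 2), random.uniform(-2, 2)
    if in_K(q, p):
        hits_K += 1
        if in_A(q, p):
            hits_AK += 1

# The box measure cancels when we take the ratio, exactly as in m(A∩K)/m(K).
print("probability of A relative to K ≈", hits_AK / hits_K)
```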
Despite the superficial similarity, the statistical mechanical probability of A relative to K is not a conditional probability. If it were, we could define the probability of A ‘simpliciter’ as m(A), and retrieve the formula for the probability of A relative to K using the definition of conditional probability. The reason we can’t do this is that the Liouville measure m is not a probability measure; unlike probability measures, there is no upper bound on the value a Liouville measure can take. We only obtain a probability distribution after we take the ratio of m(A ∩ K) and m(K); since m(A ∩ K) ≤ m(K), the ratio of the two terms will always fall in the range of acceptable values, [0,1].
Now, how should we understand statistical mechanical probabilities? A satisfactory account must preserve their explanatory power and normative force. For example, classical mechanics has solutions where ice cubes grow larger when placed in hot water, as well as solutions where ice cubes melt when placed in hot water. Why is it that we only see ice cubes melt when placed in hot water? Statistical mechanics provides the standard explanation. When we look at systems of cups of hot water with ice cubes in them, we find that according to the Liouville measure the vast majority of them quickly develop into cups of lukewarm water, and only a few develop into cups of even hotter water with larger ice cubes. The explanation for why we always see ice cubes melt, then, is that it’s overwhelmingly likely that they’ll melt instead of grow, given the statistical mechanical probabilities. In addition to explanatory power, we take statistical mechanical probabilities to have normative force: it seems irrational to believe that ice cubes are likely to grow when placed in hot water.
The natural account of statistical mechanical probabilities is to take them to be chances. On this account, statistical mechanical probabilities have the explanatory power they do because they’re chances; they represent lawful, empirical and contingent features of the world. Likewise, statistical mechanical probabilities have normative force because they’re chances, and chances normatively constrain our credences via something like the Principal Principle.
But statistical mechanical probabilities cannot be chances on the Lewisian accounts. First, classical statistical mechanical chances are compatible with classical mechanics, a deterministic theory; on the Lewisian accounts determinism and chance are incompatible. Second, classical statistical mechanics is time symmetric, much like the Aharonov, Bergmann and Lebowitz (ABL) theory of quantum mechanics (generally, the ABL theory assigns chances given pre-measurements, given post-measurements, and given pre- and post-measurements), and it is incompatible with the Lewisian accounts for similar reasons. Consider two propositions, A and K, where A is the proposition that the temperature of the world at t1 is T1, and K is the proposition that the temperature of the world at t0 and t2 is T0 and T2. Consider the chance of A relative to K. On the Lewisian accounts the arguments of the relevant chance distribution will be the classical statistical mechanical laws and a history up to a time. But a history up to what time? The statistical mechanical laws and this history entail the chance distribution on the Lewisian accounts. The distribution depends on the relative state K, and a history must run up to t2 to entail K, so we need a history up to t2 to obtain the desired distribution. Since the past is no longer chancy, the chance of any proposition entailed by the history up to t2, including A, must be trivial. But the statistical mechanical chance of A is generally not trivial, so the Lewisian accounts cannot accommodate such chances. Third, the Lewisian restriction of the second argument of chance distributions to histories is too narrow to accommodate statistical mechanical chances. Consider the case just given, where A is a proposition about the temperature of the world at t1 and K a proposition about the temperature of the world at t0 and t2. Consider also a third proposition K′: that the temperature of the world at t0, t1.5 and t2 is T0, T1.5 and T2, respectively. On the Lewisian accounts it looks like the chance of A relative to K and the chance of A relative to K′ will have the same arguments: the statistical mechanical laws and a history up to t2. But for many values of T1.5, statistical mechanics will assign different chances to A relative to K and to A relative to K′.
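To see why a chance relative to constraints at both an earlier and a later time is generally non-trivial, here is a toy calculation with a two-state Markov chain; the state names and transition probabilities are made up, and the chain is of course not classical statistical mechanics, but the structural point carries over.

```python
# Toy illustration of a time-symmetric, non-trivial "relative chance":
# the probability of a state at t1 given constraints at both t0 and t2.
import itertools

states = ("hot", "cold")
# transition probabilities P(next | current) for one time step (invented values)
T = {("hot", "hot"): 0.7, ("hot", "cold"): 0.3,
     ("cold", "hot"): 0.4, ("cold", "cold"): 0.6}

def path_prob(path):
    p = 1.0
    for a, b in zip(path, path[1:]):
        p *= T[(a, b)]
    return p

# K: the state is "hot" at t0 and "cold" at t2.  A: the state is "hot" at t1.
paths_K  = [p for p in itertools.product(states, repeat=3)
            if p[0] == "hot" and p[2] == "cold"]
paths_AK = [p for p in paths_K if p[1] == "hot"]

chance_A_given_K = sum(map(path_prob, paths_AK)) / sum(map(path_prob, paths_K))
print(chance_A_given_K)   # strictly between 0 and 1, i.e. non-trivial
```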
It's not surprising that the Lewisian account of the arguments of chance distributions is at odds with statistical mechanical chances. It's natural to take the classical statistical mechanical theory T and the relative state K to be the arguments of statistical mechanical distributions, since T and K alone entail these distributions. But taking T and K to be the arguments conflicts with the Lewisian accounts, since while K can be a history up to a time, often it is not.
So the Lewisian accounts are committed to denying that statistical mechanical probabilities are chances. Instead, they take them to be subjective values of some kind. There’s a long tradition of treating statistical mechanical probabilities this way, taking them to represent the degrees of belief a rational agent should have in a particular state of ignorance. Focusing on classical statistical mechanics, it proceeds along the following lines.
Start with the intuition that some version of the Indifference Principle – the principle that you should have equal credences in possibilities you're epistemically 'indifferent' between – should be a constraint on the beliefs of rational beings. There are generally too many possibilities in statistical mechanical cases – an uncountably infinite number – to apply the standard Indifference Principle to. But given the intuition behind indifference, it seems we can adopt a modified version of the Indifference Principle: when faced with a continuum number of possibilities that you're epistemically indifferent between, your degrees of belief in these possibilities should match the values assigned to them by an appropriately uniform measure. The properties of the Lebesgue measure make it a natural candidate for this measure. Granting this, it seems the statistical mechanical probabilities fall out of principles of rationality: if you only know K about the world, then your credence that the world is in some set of states A should be equal to the proportion (according to the Lebesgue measure) of K states that are A states. Thus it seems we recover the normative force of statistical mechanical probabilities without having to posit chances.
However, this account of statistical mechanical probabilities is untenable. First, the account suffers from a technical problem. The representation of the state space determines the Lebesgue measure of a set of states, and there are an infinite number of ways to represent the state space. So there are an infinite number of ways to ‘uniformly’ assign credences to the space of possibilities. Classical statistical mechanics uses the Lebesgue measure over the canonical representation of the state space, the Liouville measure, but no compelling argument has been given for why this is the right way to represent the space of possibilities when we’re trying to quantify our ignorance. So it doesn’t seem that we can recover statistical mechanical probabilities from intuitions regarding indifference after all.
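A quick numerical way to see the representation problem (the variables and ranges below are invented for the illustration): the same set of possibilities, parameterized two different ways, receives two different "uniform" credence assignments.

```python
# "Uniform" credence over one parameterization of the same possibilities
# is not uniform over another.
import random

random.seed(1)
N = 200_000

# Possibilities: a particle's speed v lies somewhere in (0, 1].
# Proposition A: v > 0.5.

# Representation 1: indifference over v itself (uniform measure in v).
cred_1 = sum(random.uniform(0.0, 1.0) > 0.5 for _ in range(N)) / N

# Representation 2: indifference over the kinetic-energy-like variable e = v**2
# (uniform measure in e).  The same proposition A corresponds to e > 0.25.
cred_2 = sum(random.uniform(0.0, 1.0) > 0.25 for _ in range(N)) / N

print(cred_1, cred_2)   # ≈ 0.5 versus ≈ 0.75: the two "indifferent" credences disagree
```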
Second, the kinds of values this account provides can’t play the explanatory role we take statistical mechanical probabilities to play. On this account statistical mechanical probabilities don’t come from the laws. Rather, they’re a priori necessary facts about what it’s rational to believe when in a certain state of ignorance. But if these facts are a priori and necessary, they’re incapable of explaining a posteriori and contingent facts about our world, like why ice cubes usually melt when placed in hot water. Furthermore, as a purely normative principle, the Indifference Principle isn’t the kind of thing that could explain the success of statistical mechanics. Grant that a priori it’s rational to believe that ice cubes will usually melt when placed in hot water: that does nothing to explain why in fact ice cubes do usually melt when placed in hot water.
The indifference account of statistical mechanical probabilities is untenable. The only viable account of statistical mechanical probabilities on offer is that they are chances, yet the Lewisian theories of chance are incompatible with statistical mechanical chances. The proposal for correcting these views is to allow the second argument of chance distributions to be propositions other than histories, and to reject the two additional claims about chance that the Lewisian theories make: that the past is no longer chancy, and that determinism and chance are incompatible. These two additional claims stipulate properties of chance distributions that are incompatible with time symmetric and deterministic chances; by rejecting them, we eliminate the stipulated incompatibilities. By allowing the second argument to be propositions other than histories, we can incorporate the time symmetric arguments needed for theories like the ABL theory and the more varied arguments needed for statistical mechanical theories.
Hyperstructures
In many areas of mathematics there is a need to have methods taking local information and properties to global ones. This is mostly done by gluing techniques using open sets in a topology and associated presheaves. The presheaves form sheaves when local pieces fit together to global ones. This has been generalized to categorical settings based on Grothendieck topologies and sites.
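Concretely, for a presheaf F on a topological space the gluing requirement is the familiar sheaf condition (stated here in its standard form, not quoted from the text):

```latex
\[
\text{for every open cover } U = \bigcup_i U_i \text{ and sections } s_i \in F(U_i)
\text{ with } s_i|_{U_i \cap U_j} = s_j|_{U_i \cap U_j} \text{ for all } i, j,
\]
\[
\text{there exists a unique } s \in F(U) \text{ such that } s|_{U_i} = s_i \text{ for all } i .
\]
```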
The general problem of going from local to global situations is important also outside of mathematics. Consider collections of objects where we may have information or properties of objects or subcollections, and we want to extract global information.
To illustrate our intuition let us think of a society organized into a hyperstructure. Through levelwise democratic elections leaders are elected and the democratic process will eventually give a “global” leader. In this sense democracy may be thought of as a sociological (or political) globalizer. This applies to decision making as well.
In "frustrated" spin systems in physics one may possibly think of the "frustration" being resolved by creating new levels and a suitable globalizer assigning a global state to the system, corresponding to various exotic physical conditions like, for example, a kind of hyperstructured spin glass or magnet. Acting on both classical and quantum fields in physics may be facilitated by putting a hyperstructure on them.
There are also situations where we are given an object or a collection of objects with assignments of properties or states. To achieve a certain goal we need to change, let us say, the state. This may be very difficult and require a lot of resources. The idea is then to put a hyperstructure on the object or collection. By this we create levels of locality that we can glue together by a generalized Grothendieck topology.
It may often be much easier and require less resources to change the state at the lowest level and then use a globalizer to achieve the desired global change. Often it may be important to find a minimal hyperstructure needed to change a global state with minimal resources.
Again, to support our intuition let us think of the democratic society example. To change the global leader directly may be hard, but starting a “political” process at the lower individual levels may not require heavy resources and may propagate through the democratic hyperstructure leading to a change of leader.
Hence, hyperstructures facilitate local to global processes, but also global to local processes. Often these are called bottom-up and top-down processes. In the global to local or top-down process we put a hyperstructure on an object or system in such a way that it is represented by a top-level bond in the hyperstructure. This means that to an object or system X we assign a hyperstructure
H = {B0, B1, …, Bn} in such a way that X = bn for some bn ∈ Bn, where bn binds a family of Bn−1 bonds, each of which binds a family of Bn−2 bonds, and so on down to the B0 bonds in H. Similarly for a local to global process. To a system, set or collection of objects X, we assign a hyperstructure H such that X = B0. A hyperstructure on a set (space) will create "global" objects, properties and states like what we see in organized societies, organizations, organisms, etc. The hyperstructure is the "glue" or the "law" of the objects. In a way, the globalizer creates a kind of higher order "condensate". Hyperstructures represent a conceptual tool for translating organizational ideas like, for example, democracy, political parties, etc. into a mathematical framework where new types of arguments may be carried through.
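As a bare-bones illustration of the levelled bond structure, here is a small data-structure sketch; the class and function names are my own, invented for the example, and the particular grouping into pairs is arbitrary.

```python
# Bonds at level k bind families of bonds at level k-1, down to level-0 bonds
# on the underlying objects.
from dataclasses import dataclass
from typing import List, Any

@dataclass
class Bond:
    level: int
    members: List[Any]   # level 0: raw objects; level k > 0: Bond instances of level k-1

    def __post_init__(self):
        if self.level > 0:
            assert all(isinstance(m, Bond) and m.level == self.level - 1
                       for m in self.members)

def top_down(objects):
    """Assign a small three-level hyperstructure H = {B0, B1, B2} to a collection,
    so that the whole collection is represented by a single top-level bond."""
    b0 = [Bond(0, [x]) for x in objects]                        # B0 bonds
    b1 = [Bond(1, b0[i:i + 2]) for i in range(0, len(b0), 2)]   # B1 bonds bind pairs of B0 bonds
    return Bond(2, b1)                                          # the top-level bond representing X

H_top = top_down(["a", "b", "c", "d"])
print(H_top.level, len(H_top.members))   # 2 2
```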