# The Locus of Renormalization. Note Quote.

Since symmetries and the related conservation properties play a major role in physics, it is interesting to consider the paradigmatic case where symmetry changes are at the core of the analysis: critical transitions. In these state transitions, “something is not preserved”. In general, this is expressed by the fact that some symmetries are broken, or new ones are obtained, after the transition (symmetry changes, corresponding to state changes). At the transition, typically, there is a passage to a new “coherence structure” (a non-trivial scale symmetry); mathematically, this is described by the non-analyticity of the pertinent formal development. Consider the classical para-ferromagnetic transition: the system goes from a disordered state to a sudden common orientation of spins, up to the completely ordered state of a unique orientation. Or percolation, often based on the formation of fractal structures, that is, the iteration of a statistically invariant motif. Similarly for the formation of a snowflake. In all these circumstances, a “new physical object of observation” is formed. Most current analyses deal with transitions at equilibrium; the less studied and more challenging case of far-from-equilibrium critical transitions may require new mathematical tools, or variants of the powerful renormalization methods. These methods change the pertinent object, yet they are based on symmetries and conservation properties, such as energy or other invariants. That is, one obtains a new object, yet not necessarily new observables for the theoretical analysis. Another key mathematical aspect of renormalization is that it analyzes point-wise transitions: mathematically, the physical transition is seen as happening at an isolated mathematical point (isolated with respect to the interval topology, or the topology induced by the usual measurement and the associated metrics).
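The para-ferromagnetic transition just mentioned can be made concrete with a minimal numerical sketch. This is an illustration added here, not part of the original argument, and it uses the standard mean-field approximation, in which the magnetization m satisfies m = tanh(m/t), with t the temperature in units of the critical temperature; below t = 1 the spin-flip symmetry m → −m is spontaneously broken.

```python
# Mean-field Ising magnetization: solve the self-consistency equation
# m = tanh(m / t) by fixed-point iteration (t = T / T_c, reduced temperature).
import math

def magnetization(t, m0=0.9, steps=500):
    """Iterate m -> tanh(m / t) from a small initial bias m0."""
    m = m0
    for _ in range(steps):
        m = math.tanh(m / t)
    return m

m_ordered = magnetization(0.5)     # below t = 1: symmetry-broken branch, m > 0
m_disordered = magnetization(1.5)  # above t = 1: the only solution is m = 0
```

Below the critical temperature the iteration settles on a non-zero magnetization (a new “coherence structure”); above it, any initial bias decays to zero, recovering the symmetric disordered state.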

One can say in full generality that a mathematical frame completely handles the determination of the object it describes as long as no strong enough singularity (i.e. a relevant infinity or divergence) shows up to break this very mathematical determination. In classical statistical fields (at criticality) and in quantum field theories this leads to the necessity of using renormalization methods. The point of these methods is that when it is impossible to handle mathematically all the interactions of the system in a direct manner (because they lead to infinite quantities and therefore to no relevant account of the situation), one can still analyze parts of the interactions in a systematic manner, typically within arbitrary scale intervals. This allows us to exhibit a symmetry between partial sets of “interactions”, when the arbitrary scales are taken as a parameter.
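A hedged illustration of analyzing “parts of the interactions” systematically, scale by scale: the following is a standard textbook example, not the authors' own. In real-space renormalization of the one-dimensional Ising chain, summing out every second spin maps the coupling K exactly to K′ = artanh(tanh² K); iterating this map traces the renormalization flow toward the trivial (disordered) fixed point K = 0.

```python
# Real-space renormalization (decimation) of the 1D Ising chain:
# summing out alternate spins gives the exact recursion
#   K' = artanh( tanh(K)^2 ).
import math

def decimate(K):
    """One decimation step: the renormalized coupling at twice the scale."""
    return math.atanh(math.tanh(K) ** 2)

flow = [1.0]          # initial coupling K = J / (k_B T)
for _ in range(6):
    flow.append(decimate(flow[-1]))
```

Each step changes the pertinent object (a chain with half the spins and a new coupling), yet the form of the determination is preserved; the flow toward K = 0 expresses the absence of a finite-temperature transition in one dimension.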

In this situation, the intelligibility still has an “upward” flavor, since renormalization is based on the stability of the equational determination when one considers a part of the interactions occurring in the system. Now, the “locus of the objectivity” is not in the description of the parts but in the stability of the equational determination when taking more and more interactions into account. This is true for critical phenomena, where the parts, atoms for example, can be objectivized outside the system and have a characteristic scale. In general, though, only scale invariance matters and the contingent choice of a fundamental (atomic) scale is irrelevant. Even worse, in quantum field theories the parts are not really separable from the whole (this would mean separating an electron from the field it generates) and there is no relevant elementary scale which would allow one to get rid of the infinities (and again, such a scale would be quite arbitrary, since the objectivity needs the inter-scale relationship).

In short, even in physics there are situations where the whole is not the sum of the parts because the parts cannot be summed over (this is not specific to quantum fields and is, in principle, also relevant for classical fields). In these situations, the intelligibility is obtained through the scale symmetry, which is why fundamental scale choices are arbitrary with respect to these phenomena. This choice of the object of quantitative and objective analysis is at the core of the scientific enterprise: looking at molecules as the only pertinent observable of life is worse than reductionist; it is against the history of physics and its audacious unifications and inventions of new observables, scale invariances, and even conceptual frames.

As for criticality in biology, there exists substantial empirical evidence that living organisms undergo critical transitions. These are mostly analyzed as limit situations, either never really reached by an organism or as occasional point-wise transitions. Or also, as researchers nicely claim in specific analyses: a biological system, a cell's genetic regulatory networks, brains and brain slices … are “poised at criticality”. In other words, critical state transitions happen continually.

Thus, as for the pertinent observables, the phenotypes, we propose to understand evolutionary trajectories as cascades of critical transitions, thus of symmetry changes. In this perspective, one cannot pre-give, nor formally pre-define, the phase space for the biological dynamics, in contrast to what has been done for the profound mathematical frames of physics. This does not forbid a scientific analysis of life; it may just have to be given in different terms.

As for evolution, there is no possible equational entailment, nor a causal structure of determination derived from such entailment, as in physics. The point is that equational entailment and causal structure are better understood and correlated, since the work of Noether and Weyl in the last century, through the symmetries of the intended equations, where they express the underlying invariants and invariant-preserving transformations. No theoretical symmetries, no equations; thus no laws and no entailed causes allow the mathematical deduction of biological trajectories in pre-given phase spaces – at least not in the deep and strong sense established by the physico-mathematical theories. Observe that the robust, clear, powerful physico-mathematical sense of entailing law has been permeating all sciences, including societal ones, economics among others. If we are correct, this permeating physico-mathematical sense of entailing law must be given up for unentailed diachronic evolution in biology, in economic evolution, and in cultural evolution.

As a fundamental example of symmetry change, observe that mitosis yields different proteome distributions, differences in DNA or DNA expression, in membranes or organelles: the symmetries are not preserved. In a multi-cellular organism, each mitosis asymmetrically reconstructs a new coherent “Kantian whole”, in the sense of the physics of critical transitions: a new tissue matrix, new collagen structure, new cell-to-cell connections. And we undergo millions of mitoses each minute. Moreover, this is not “noise”: this is variability, which yields diversity, which is at the core of evolution and even of the stability of an organism or an ecosystem. Organisms and ecosystems are structurally stable also because they are Kantian wholes that permanently and non-identically reconstruct themselves: they do it in an always different, thus adaptive, way. They change the coherence structure, thus its symmetries. This reconstruction is thus random, but also not random, as it heavily depends on constraints, such as the protein types imposed by the DNA, the relative geometric distribution of cells in embryogenesis, and interactions in an organism and in a niche, but also on the opposite of constraints, the autonomy of Kantian wholes.

In the interaction with the ecosystem, the evolutionary trajectory of an organism is characterized by the co-constitution of new interfaces, i.e. new functions and organs that are the proper observables for the Darwinian analysis. And the change of a (major) function induces a change in the global Kantian whole as a coherence structure; that is, it changes the internal symmetries: the fish with the new bladder will swim differently, its heart-vascular system will relevantly change, and so on.

Organisms transform the ecosystem while transforming themselves, and they can do so because they have a preserved internal universe, whose stability is maintained also by slightly, yet constantly, changing internal symmetries. The notion of extended criticality in biology focuses on the dynamics of symmetry changes and provides an insight into permanent, ontogenetic and evolutionary adaptability, as long as these changes are compatible with the co-constituted Kantian whole and the ecosystem. As we said, autonomy is integrated in and regulated by constraints, both within an organism itself and of an organism within an ecosystem. Autonomy makes no sense without constraints, and constraints apply to an autonomous Kantian whole. So constraints shape autonomy, which in turn modifies constraints, within the margin of viability, i.e. within the limits of the interval of extended criticality. The extended critical transition proper to the biological dynamics does not allow one to prestate the symmetries and the correlated phase space.

Consider, say, a microbial ecosystem in a human: some 150 different microbial species inhabit the intestinal tract. Each person’s ecosystem is unique, and tends largely to be restored following antibiotic treatment. Each of these microbes is a Kantian whole, and in ways we do not yet understand, the “community” in the intestines co-creates its world, co-creating the niches by which each and all achieve, with the surrounding human tissue, a task closure that is “always” sustained even as it changes through the immigration of new microbial species into the community and the extinction of old ones. With such community membership turnover, or community assembly, the phase space of the system undergoes continual and open-ended changes. Moreover, given the rate of mutation in microbial populations, it is very likely that these microbial communities are also co-evolving with one another on a rapid time scale. Again, the phase space is continually changing, as are the symmetries.

Can one have a complete description of actual and potential biological niches? If so, the description seems to be incompressible, in the sense that any linguistic description may require new names and meanings for the new, unprestatable functions, where functions and their names make sense only in the newly co-constructed biological and historical (linguistic) environment. Even for existing niches, short descriptions are given from a specific perspective (they are very epistemic), looking at a purpose, say. One finds out a feature of a niche because one observes that, if it goes away, the intended organism dies. In other terms, niches are compared by differences: one may not be able to prove that two niches are identical or equivalent (in supporting life), but one may show that two niches are different. Once more, there are no symmetries organizing these spaces and their internal relations over time. Mathematically, no symmetries (groups) nor (partial) orders (semigroups) organize the phase spaces of phenotypes, in contrast to physical phase spaces.

Finally, here is one of the many logical challenges posed by evolution: the circularity of the definition of niches is more than the circularity in the definitions. The “in the definitions” circularity concerns the quantities (or quantitative distributions) of given observables. Typically, a numerical function defined by recursion or by impredicative tools yields a circularity in the definition and poses no mathematical nor logical problem in contemporary logic (this holds also for recursive definitions of metabolic cycles in biology). Similarly, a river flow, which shapes its own border, presents technical difficulties for a careful equational description of its dynamics, but no mathematical nor logical impossibility: one has to optimize a highly non-linear and large action/reaction system, yielding a dynamically constructed geodesic, the river path, in perfectly known phase spaces (momentum and space, or energy and time, say, as pertinent observables and variables).
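The point that a circularity “in the definition” is mathematically unproblematic can be made concrete with a small illustration (added here, not part of the original text): a function defined in terms of itself determines a unique total function, because a well-founded base case makes the circularity bottom out.

```python
# A recursively ("circularly") defined numerical function: the definition
# of factorial mentions factorial itself, yet the base case at n == 0 makes
# the recursion well-founded, so a unique value is determined for every n.
def factorial(n: int) -> int:
    if n == 0:          # well-founded base case: the circularity terminates
        return 1
    return n * factorial(n - 1)
```

The circularity “of the definitions” discussed next is of a different order: it concerns which observables enter the definitions at all, not the value of an already well-posed function.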

The circularity “of the definitions” applies, instead, when it is impossible to prestate the phase space, so that the very novel interaction (including the “boundary conditions” in the niche and the biological dynamics) co-defines new observables. The circularity then radically differs from the one in the definition, since it is at the meta-theoretical (meta-linguistic) level: which are the observables and variables to put in the equations? It is not just a matter of prestatable yet circular equations within the theory (ordinary recursion and extended non-linear dynamics), but of the ever-changing observables, the phenotypes and the biological functions, in a circularly co-specified niche. Mathematically and logically, no law entails the evolution of the biosphere.

# Leibnizian Mnemonics

By any standard, Leibniz’s effort to create a “universal calculus” should be considered one of the most ambitious intellectual agendas ever conceived. Building on his previous successes in developing the infinitesimal calculus, Leibniz aimed to extend the notion of a symbolic calculus to all domains of human thought, from law, to medicine, to biology, to theology. The ultimate vision was a pictorial language which could be learned by anyone in a matter of weeks and which would transparently represent the factual content of all human knowledge. This would be the starting point for developing a logical means for manipulating the associated symbolic representation, thus giving rise to the ability to model nature and society, to derive new logical truths, and to eliminate logical contradictions from the foundations of Christian thought.

Astonishingly, many elements of this agenda are quite familiar when examined from the perspective of modern computer science. The starting point for this agenda would be an encyclopedia of structured knowledge, not unlike our own contemporary efforts related to the Semantic Web, Web 2.0, or LinkedData. Rather than consisting of prose descriptions, this encyclopedia would consist of taxonomies of basic concepts extending across all subjects.

Leibniz then wanted to create a symbolic representation of each of the fundamental concepts in this repository of structured information. It is the choice of the symbolic representation that is particularly striking. Unlike the usual mathematical symbols that comprise the differential calculus, Leibniz’s effort would rely on mnemonic images which were useful for memorizing facts.

Whereas modern thinkers usually imagine memorization to be a task accomplished through pure repetition, 16th- and 17th-century Europe saw fundamental innovation in the theory and practice of memory. During this period, practitioners of the memory arts relied on a diverse array of visualization techniques that allowed them to recall massive amounts of information with extraordinary precision. These memory techniques were part of a broader intellectual culture which viewed memorization as a foundational methodology for structuring knowledge.

The basic elements of this methodology were mnemonic techniques. Not the simple catch phrases that we typically associate with mnemonics, but rather, elaborate visualized scenes or images that represented what was to be remembered. It is these same memory techniques that are used in modern memory competitions and which allow competitors to perform such superhuman feats as memorizing the order of a deck of cards in under 25 seconds, or thousands of random numbers in an hour. The basic principle behind these techniques is the same, namely, that a striking and inventive visual image can dramatically aid the memory.

Leibniz and many of his contemporaries had a much more ambitious vision for mnemonics than our modern day competitive memorizers. They believed that the process of memorization went hand in hand with structuring knowledge, and furthermore, that there were better and worse mnemonics and that the different types of pictorial representations could have different philosophical and scientific implications.

For instance, if the purpose was merely to memorize, one might create the most lewd and absurd possible images in order to remember some list of facts. Indeed, this was recommended by enterprising memory theorists of the day trying to make money by selling pamphlets on how to improve one’s memory. Joshua Foer’s memoir Moonwalking with Einstein is an engaging and insightful first-person account of the “competitive memory circuit,” where techniques such as this one are the bread and butter of how elite competitors are able to perform feats of memory that boggle the mind.

But whereas in the modern world mnemonic techniques have been relegated to learning vocabulary words and the competitive memory circuit, elite intellectuals several centuries ago had a much more ambitious vision of the ultimate implications of this methodology. In particular, Leibniz hoped that through a rigorous process of notation engineering one might be able to preserve the memory-aiding properties of mnemonics while eliminating the inevitable conceptual interference that arises in creating absurdly comical, lewd, or provocative mnemonics. By drawing inspiration from Chinese characters and Egyptian hieroglyphics, he hoped to create a language that could be learned by anyone in a short period of time and which would transparently – through the pictorial dimension – represent the factual content of a curated encyclopedia. Furthermore, by building upon his successes in developing the infinitesimal calculus, Leibniz hoped that a logical structure would emerge which would allow novel insights to be derived by manipulating the associated symbolic calculus.

Leibniz’s motivations extended far beyond the realm of the natural sciences. Using mnemonics as the core alphabet to engineer a symbolic system with complete notational transparency would mean that all people would be able to learn this language, regardless of their level of education or cultural background. It would be a truly universal language, one that would unite the world, end religious conflict, and bring about widespread peace and prosperity. It was a beautiful and humane vision, although it goes without saying that it did not materialize.

# Aristotelian Influence on Hobbes

Let us begin by surveying the forces which exercised a decisive influence on Hobbes before he turned to mathematics and natural science. From 1603 to 1608 he studied at Oxford. During this time, dissatisfied with academic teaching, he turned back to the classical texts which he had already read, now reading them with the interpretations of the grammarians. His purpose in this study was to develop a clear Latin style. The continuation and conclusion of this study was the English translation of Thucydides, eventually published in 1628.

At Oxford Hobbes was introduced to scholastic philosophy. He himself recounts that he studied Aristotle’s logic and physics; he makes no mention of studying Aristotle’s morals and politics. According to the traditional curriculum, the formal disciplines, viz. grammar, rhetoric, and logic, were in the foreground. We may therefore assume that scholastic studies were for Hobbes in the main a formal training, and that he acquired only later the more detailed knowledge of scholasticism which he afterwards needed for the polemical defence of his own theories. He did not take up scholastic studies again; instead, he turned to humanist studies.

There were four major influences on Hobbes, viz. humanism, scholasticism, Puritanism, and aristocracy; but in Hobbes’ youth humanism was the most prominent of them all. After the end of his university studies Hobbes read not only classical poets and historians but also classical philosophers. Which philosophers? In a foreword to his translation of Thucydides he says:

It hath been noted by divers, that Homer in poesy, Aristotle in philosophy, Demosthenes in eloquence, and others of the ancients in other knowledge, do still maintain their primacy: none of them exceeded, some not approached, by any in these later ages. And in the number of these is justly ranked also our Thucydides; a workman no less perfect in his work, than any of the former.

Hobbes later considered Plato to be the best philosopher, not of all time, but of antiquity. At the end of his humanist period, however, he repeats, without raising any objection, the ruling opinion according to which Aristotle is the highest authority in philosophy. The break with Aristotle was completed only when Hobbes took up the study of mathematics and natural science. In the Elements of Law, the polemic against Aristotle is definitely not as violent as it is in Hobbes’ Leviathan and De Cive. There, in his definition of the State, Hobbes asserts the aim of the State to be, along with peace and defence, the common benefit; with this he tacitly admits Aristotle’s distinction between the reason of the genesis of the State and the reason of its being. In the later stages, Hobbes rejects the common benefit and thus departs from this Aristotelian distinction. The linkage of Aristotle with Homer, Demosthenes, and Thucydides provides the answer: Aristotle seen from the humanist point of view. Fundamentally, it means the shifting of interest from Aristotle’s physics and metaphysics to his morals and politics; it also means the replacement of the primacy of theory by the primacy of practice. Only if one assumes a fundamental change of this kind does Hobbes’ turning away from scholasticism to poetry and history cease to be a biographical and historical peculiarity. Even after natural science had become Hobbes’ favourite subject of investigation, he still acknowledged the precedence of practice over theory and of political philosophy over natural science. The joy of knowledge was for him not the justification of philosophy; philosophy was justified only in relation to its benefit to man, i.e. the safeguarding of man’s life and the increase of human power. Where Hobbes develops his own view connectedly, he manifestly subordinates theory to practice. He did not, like Aristotle, attribute prudence to practice and wisdom to theory.
He says: ‘Prudence is to wisdom what experience is to knowledge; wisdom is the knowledge of what is right and wrong and what is good and hurtful to the being and well-being of mankind… For generally, not he that hath skill in geometry, or any other science speculative, but only he that understandeth what conduceth to the good and Government of the people, is called a wise man’. The contrast with Aristotle has its ultimate reason in Hobbes’ conception of the place of man in the universe, which is diametrically opposed to that of Aristotle. Aristotle justified his placing of the theoretical sciences above moral and political philosophy by the argument that man is not the highest being in the universe. This ultimate assumption of the primacy of theory is rejected by Hobbes; in his contention man is ‘the most excellent work of nature’. In this strict sense Hobbes always remained a humanist, and only with the essential limitation which this brings could he recognize Aristotle’s authority in his humanist period.

Even when Hobbes had come to the conclusion that Aristotle was ‘the worst teacher that ever was’, he excepted two works from his condemnation: ‘but his rhetorique and discourse of animals were rare’. It would be difficult to find another classical work whose importance for Hobbes’ political philosophy can be compared with that of the Rhetoric. The central chapters of Hobbes’ anthropology, those chapters on which, more than on anything else he wrote, his fame as a stylist and as one who knows men rests for all time, betray in style and contents that their author was a zealous reader of the Rhetoric. In the 10th chapter of Leviathan, Hobbes treats under the heading ‘Honourable’ what Aristotle discusses in the Rhetoric. Aristotle says: ‘And honourable are the works of virtue. And the sign of virtue. And the reward whereof is rather honour. And those things are honourable which, good of themselves, are not so to the owner…And bestowing of benefits…And honourable are…victory…And things that excel. And what none can do but we. And possessions we reap no profit by. And those things which are had in honour…And the signs of praise’. Correspondingly, Hobbes writes: ‘…victory is honourable…Magnanimity, Liberality, Hope, Courage, Confidence, are Honourable…Actions proceeding from Equity, joyned with losse, are Honourable’.

Let us try to chart the dependence of Hobbes’ theory of the passions on the Rhetoric. In the Rhetoric, anger is desire of revenge, joined with grief, for that he, or some of his, is, or seems to be, neglected; while in Hobbes’ Elements, anger hath been commonly defined to be grief proceeding from an opinion of contempt, and to kill is the aim of them that hate, while revenge aimeth at triumph. In the Rhetoric, pity is a perturbation of the mind, arising from the apprehension of hurt or trouble to another that doth not deserve it, and which he thinks may happen to himself or his; and because it appertains to pity to think that he, or his, may fall into the misery he pities in others, it follows that those may be most compassionate who have passed through misery, and such as think there be honest men, while less compassionate are they that think no man honest and who are in great prosperity. In Hobbes’ Elements, pity is imagination or fiction of future calamity to ourselves, proceeding from the sense of another man’s present calamity; but when it lighteth on such as we think do not deserve the same, the compassion is the greater, because then there appeareth the more probability that the same may happen to us. The contrary of pity is hardness of heart, proceeding from extreme great opinion of their own exemption from the like calamity, or from hatred of all, or most, men.

In the Rhetoric, indignation is grief for the prosperity of a man unworthy; envy is grief for the prosperity of such as ourselves, arising not from any hurt that we receive, but from the good that they receive. Emulation is grief arising from that our equals possess such goods as are had in honour, and whereof we are capable, but have them not; not because they have them, but because not also we. No man therefore emulates another in things whereof himself is not capable. In the Elements, emulation is grief arising from seeing one’s self exceeded or excelled by his concurrent, together with hope to equal or exceed him in time to come.

Since Hobbes in his later writings uses passages from the Rhetoric of which he had made no use in his earlier writings, it follows that when composing each of his systematic expositions of anthropology he studied Aristotle’s Rhetoric afresh. Hobbes’ pre-occupation with the Rhetoric can be traced back as far as about 1635: in that year, Hobbes had considered writing a personal exposition of the theory of the passions, and, as just seen, his earliest treatment of the theory of the passions was clearly influenced by Aristotle’s Rhetoric. In addition, he himself recounts that he instructed the third Earl of Devonshire in rhetoric.

Hobbes’ closer study of Aristotle’s Rhetoric may be proved with certainty only for the 1630s, i.e. for the time in which he had overtly completed the break with Aristotelianism. Moreover, one gathers from his introduction to the translation of Thucydides that the phenomena of eloquence on the one hand, and of the passions on the other, occupied his mind even in his humanist period. On the whole, it seems more correct to assume that the use and appreciation of Aristotle’s Rhetoric which may be traced in Hobbes’ mature writings are the last remnants of the Aristotelianism of his youth.

# Gauge Geometry and Philosophical Dynamism

By the time he had finished Das Kontinuum, Weyl was dissatisfied with his own theory of the predicative construction of the arithmetically definable subset of the classical real continuum, comparing it with Husserl’s continuum of time, which possessed a “non-atomistic” character in contradistinction to his own theory. No determined point of time can be exhibited; only approximate fixing is possible, just as in the case of the “continuum of spatial intuition”. He accepted the necessity that the mathematical concept of the continuum, the continuous manifold, should not be characterized in terms of modern set theory enriched by topological axioms, because this would contradict the essence of the continuum. Weyl says,

It seems that set theory violates against the essence of continuum, which, by its very nature, cannot at all be battered into a single set of elements. Not the relationship of an element to a set, but of a part to the whole ought to be taken as a basis for the analysis of the continuum.

For Weyl, single points of the continuum were empty abstractions, and this made him enter difficult terrain, as no mathematical conceptual frame was in sight which could satisfy his methodological postulate in a sufficiently elaborated manner. For some years he sympathized with Brouwer’s idea of characterizing points in the intuitionistic one-dimensional continuum by “free choice sequences” of nested intervals, and he even tried to extend the idea to higher dimensions, exploring the possibility of a purely combinatorial approach to the concept of manifold, in which point-like localizations were given only by infinite sequences of nested star neighborhoods in barycentric subdivisions of a combinatorially defined “manifold”. There arose, however, the problem of how to characterize the local manifold property in purely combinatorial terms.

Weyl was much more successful on another level, that of rebuilding differential geometry in manifolds from a “purely infinitesimal” point of view. He generalized Riemann’s proposal for a differential-geometric metric,

$$
ds^2(x) = \sum_{i,j=1}^{n} g_{ij}(x)\, dx^i\, dx^j .
$$

From his purely infinitesimal point of view, it seemed a strange effect that the lengths of two vectors ξ(x) and η(x′), given at different points x and x′, could be immediately and objectively compared in this framework after the calculation of

$$
|\xi(x)|^2 = \sum_{i,j=1}^{n} g_{ij}(x)\, \xi^i \xi^j ,
\qquad
|\eta(x')|^2 = \sum_{i,j=1}^{n} g_{ij}(x')\, \eta^i \eta^j .
$$

In this context it was comparatively easy for Weyl to give a purely infinitesimal characterization of metrical concepts. He started from the well-known structure of a conformal metric, i.e. an equivalence class $[g]$ of semi-Riemannian metrics $g = g_{ij}(x)$ and $g' = g'_{ij}(x)$ which are equal up to a point-dependent positive factor $\lambda(x) > 0$, $g' = \lambda g$. Then comparison of lengths makes immediate sense only for vectors attached to the same point $x$, independently of the gauge of the metric, i.e. of the choice of the representative in the conformal class. To achieve comparability of lengths of vectors inside each infinitesimal neighborhood, Weyl introduced the conception of a length connection, formed in analogy to the affine connection $\Gamma$ just distilled by Levi-Civita from the classical Christoffel symbols $\Gamma^k_{ij}$ of Riemannian geometry. The localization inside such an infinitesimal neighborhood was given, as the mathematicians of the past would already have done, by coordinate parameters $x$ and $x' = x + dx$ for some infinitesimal displacement $dx$. Weyl’s length connection consisted, then, in an equivalence class of differential 1-forms $[\Psi]$, $\Psi = \sum_{i=1}^{n} \Psi_i\, dx^i$, where an equivalent representative of the form is given by $\Psi' = \Psi - d \log \lambda$, corresponding to a change of gauge of the conformal metric by the factor $\lambda$. Weyl called this transformation, which he recognized as necessary for the consistency of his extended symbol system, the gauge transformation of the length connection.

Weyl established a purely infinitesimal gauge geometry, where lengths of vectors (or derived metrical concepts in tensor fields) were immediately comparable only in the infinitesimal neighborhood of one point, and for points of finite distance only after an integration procedure. This integration turned out to be, in general, path-dependent. Independence of the choice of path between two points x and x′ holds if and only if the length curvature vanishes. The concept of curvature was built in direct analogy to the curvature of the affine connection and turned out to be, in this case, just the exterior derivative of the length connection, f ≡ dΨ. This led Weyl to a coherent and conceptually pleasing realization of a metrical differential geometry built upon purely infinitesimal principles. Moreover, Weyl was convinced of important consequences of his new gauge geometry for physics. The infinitesimal neighborhoods, understood as spheres of activity as Fichte might have said, suggested looking for interpretations of the length connection as a field representing physically active quantities. In fact, building on the mathematically obvious observation df ≡ 0, which was formally identical with the second system of the generally covariant Maxwell equations, Weyl immediately drew the conclusion that the length curvature f ought to be identified with the electromagnetic field.
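In modern notation, the gauge structure just described, and the gauge invariance of the length curvature, can be written compactly (a sketch consistent with the definitions above, not Weyl’s original notation):

```latex
g \;\mapsto\; g' = \lambda\, g, \qquad
\Psi \;\mapsto\; \Psi' = \Psi - d\log\lambda, \qquad \lambda(x) > 0,
\\[6pt]
f' \equiv d\Psi' = d\Psi - d\,(d\log\lambda) = d\Psi = f, \qquad
df = d\,(d\Psi) \equiv 0 .
```

Since the exterior derivative satisfies d∘d = 0, the curvature f is unchanged by a gauge transformation, and df ≡ 0 is exactly the identity Weyl read as the second (source-free) set of Maxwell’s equations.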

He, however, gave up the belief in the ontological correctness of the purely field-theoretic approach to matter, in which the Mie-Hilbert theory of a combined Lagrange function L(g, Ψ) for the action of the gravitational field (g) and electromagnetism (Ψ) was further geometrized and technically enriched by the principle of gauge invariance of L, substituting in its place a philosophically motivated a priori argument for the conceptual superiority of his gauge geometry. The goal of a unified description of gravitation and electromagnetism, and of the derivation of matter structures from it, was nothing specific to Weyl. In his theory, however, the purely infinitesimal approach to manifolds, and the ensuing possibility of geometrically unifying the two known interaction fields, gravitation and electromagnetism, took on a dense and conceptually sophisticated form.

# Honey-Trap Catalysis or Why Chemistry Mechanizes Complexity? Note Quote.

Was browsing through Yuri Tarnopolsky’s Pattern Chemistry and its effect on/from the humanities. Tarnopolsky states his “chemistry” + “humanities” connectivity ideas thus:

Practically all comments to the folk tales in my collection contained references to a book by the Russian ethnographer Vladimir Propp, who systematized Russian folk tales as ‘molecules’ consisting of the same ‘atoms’ of plot arranged in different ways, and even wrote their formulas. His book was published in the 30’s, when Claude Lévi-Strauss, the founder of what became known as structuralism, was studying another kind of “molecules”: the structures of kinship in tribes of Brazil. Remarkably, this time a promise of a generalized and unifying vision of the world was coming from a source in humanities. What later happened to structuralism, however, is a different story, but the opportunity to build a bridge between sciences and humanities was missed. The competitive and pugnacious humanities could be a rough terrain.

I believed that chemistry carried a universal message about changes in systems that could be described in terms of elements and bonds between them. Chemistry was a particular branch of a much more general science about breaking and establishing bonds. It was not just about molecules: a small minority of hothead human ‘molecules’ drove a society toward change. A nation could be hot or cold. A child playing with Lego and a poet looking for a word to combine with others were in the company of a chemist synthesizing a drug.

Further on, following his chemistry-then-thermodynamics leads, Tarnopolsky found the pattern theory work of the Swedish mathematician Ulf Grenander, which he describes as follows:

In 1979 I heard about a mathematician who tried to list everything in the world. I easily found in a bookstore the first volume of Pattern Theory (1976) by Ulf Grenander, translated into Russian. As soon as I had opened the book, I saw that it was exactly what I was looking for and what I called ‘meta-chemistry’, i.e., something more general than chemistry, which included chemistry as an application, together with many other applications. I can never forget the physical sensation of a great intellectual power that gushed into my face from the pages of that book.

Although the mathematics in the book was well above my level, Grenander’s basic idea was clear. He described the world in terms of structures built of abstract ‘atoms’ possessing bonds to be selectively linked with each other. Body movements, society, pattern of a fabric, chemical compounds, and scientific hypothesis—everything could be described in the atomistic way that had always been considered indigenous for chemistry. Grenander called his ‘atoms of everything’ generators, which tells something to those who are familiar with group theory, but for the rest of us could be a good little metaphor for generating complexity from simplicity. Generators had affinities to each other and could form bonds of various strength. Atomism is a millennia old idea. In the next striking step so much appealing to a chemist, Ulf Grenander outlined the foundation of a universal physical chemistry able to approach not only fixed structures but also “reactions” they could undergo.

The two major means of control in chemistry and organic life are thermodynamic control (shift of equilibrium) and kinetic control (selective change of speed). People might not be aware that the same mechanisms are employed in social and political control, as well as in large historical events out of control, for example, the great global migration of people and jobs in our time, or just the one-way flow of people across the US-Mexican border. Thus, with an awful degree of simplification, the intensification of a hunt for illegal immigrants looks like thermodynamic control by a honey trap, while the punishment of illegal employers is typical negative catalysis, although both may lead to a less stable and more stressed state. In both cases a new equilibrium will be established, different equilibria resting upon different sets of conditions.

Should I treat people as molecules? Not unless I am from the Andromeda Galaxy. Complex systems never come to global equilibrium, although local equilibrium can exist for some time. They can be in a state of homeostasis, which, again, is not the same as a steady state in physics and chemistry. Homeostasis is the global complement of the classical local Darwinism of mutation and selection.

Taking other examples, immigration discrimination in favor of educated or wealthy professionals is also catalysis of the affirmative-action type. It speeds up the drive to equilibrium. An attractive salary for rare specialists is an equilibrium shift (honey trap) because it does not discriminate between competitors. Ideally, neither does the exploitation of foreign labor. Bureaucracy is a global thermodynamic freeze that can be selectively overcome by 100% catalytic connections and bribes. Severe punishment for bribery is thermodynamic control. The use of undercover agents looks like a local catalyst: you can wait for the crook to make a mistake, or you can speed it up. A tax incentive or burden is a shift of equilibrium. Preferred (or discouraging) treatment of competitors is catalysis (or inhibition).

There is no catalysis without selectivity and no selectivity without competition. Equilibrium, however, is not selective: it applies globally to any sufficiently fluid system. Organic life, society, and economy operate by both equilibrium shift and catalysis. More examples: by manipulating the interest rate, the RBI employs thermodynamic control; by tax cuts for efficient use of energy, the government employs kinetic control, until saturation comes. Thermodynamic and kinetic factors are necessary for understanding complex systems (although only professionals can talk about them reasonably), but they are not sufficient. History is not chemistry, because organic life and human society develop by design patterns, so to speak, or archetypal abstract devices, which do not follow from any physical laws. They all, together with René Thom’s morphologies, have roots not in thermodynamics but in topology. Anything that cannot be presented in terms of points, lines, and interactions between the points is far from chemistry. Topology is blind to metrics, but if Pattern Theory were not metrical, it would be just a version of graph theory.
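Grenander’s picture of ‘atoms of everything’ — generators with selective bonds — can be illustrated with a toy sketch. All the names and the bond-compatibility rule below are invented for illustration only; Pattern Theory’s actual formalism is far richer (bond values, connector graphs, acceptor functions, probability measures):

```python
# Toy sketch of Grenander-style "generators" with selective bonds.
# Illustrative only: names and the compatibility rule are invented here.
from dataclasses import dataclass, field

@dataclass
class Generator:
    name: str
    bonds: dict = field(default_factory=dict)  # bond site -> bond value

def can_bond(g1, site1, g2, site2):
    """Two bond sites may link only if both exist and their values match."""
    return (site1 in g1.bonds and site2 in g2.bonds
            and g1.bonds[site1] == g2.bonds[site2])

# Chemistry as a special case: atoms as generators, valences as bond sites.
h = Generator("H", {"s": "sigma"})
o = Generator("O", {"s1": "sigma", "s2": "sigma"})

assert can_bond(h, "s", o, "s1")     # an H-O bond is allowed
assert not can_bond(h, "s", h, "x")  # no such bond site, so no bond
```

The same skeleton could carry Propp’s plot ‘atoms’ or kinship structures just as well, which is the point of calling it ‘meta-chemistry’.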

# Mereology

The theory of parts and wholes concerns both formal and material ontologies. It pertains to the former as a pure theory of independence and non-independence, and to the latter as regards the particular laws of non-independence that apply in the various ontological regions.

Mereologies are generally classified into extensional and intensional, where the former are ontologically monistic: every thing that exists is an object; the parts of objects are objects; and compositions of objects are objects as well. This is the position put forward by Stanisław Leśniewski, who starts from his definition of ontology, in which he gives a formal definition of the concept of ‘object’, extends it in his concept of mereology based on the relation of being a ‘proper part of’, and then develops it further in his theory of space and time. Intensional mereologies, by contrast, distinguish the parts of an entity into independent and non-independent parts. The former are termed ‘pieces’, and they are the ones that effectively deserve the denomination ‘part’, while the latter are called ‘moments’. From an ontological point of view, parts are objects in the same sense that the entities of which they are parts are objects. Moments have a different ontological valence: they are secondary objectualities, and objects solely in a translated and subjective sense, which, however, does not mean an arbitrary one. Note the formulation employed by Husserl, who speaks of the independence of parts and the non-independence of moments, not of independence and dependence. The difference is a subtle one, but it is deliberately introduced. The reason resides in Husserl’s mathematical training, where such uses of negation signify that equality is possible: where a is said to be not greater than b, this means that a is less than b or equal to b. Translated into the present scenario, when one says that a is non-independent of b, the intention is to say that a is dependent on b or that a is equal to b. Moments may therefore be equal to the whole of which they are moments, where, however, the concept of equality should be understood in the sense of indiscernibility.
The possible indiscernibility of the moments from the whole should not be confused with the possible identity of the part with the whole. A part may even be the whole itself, whereas the moment can at most coincide with the whole; or it may be indiscernible from the whole, but is nonetheless distinct from it.

# Roger Penrose and Artificial Intelligence: Revenance from the Archives and the Archaic.

Let us have a look at Penrose and his criticisms of strong AI, and ask whether he comes out a winner. His The Emperor’s New Mind: Concerning Computers, Minds, and The Laws of Physics sets out to deal a death blow to the project of strong AI. Even while showing humility, as in saying,

My point of view is an unconventional one among physicists and is consequently one which is unlikely to be adopted, at present, by computer scientists or physiologists,

he is merely stressing his speculative musings. Penrosian arguments, à la Searle, are definitely opinionated, making assertions like: a conscious mind cannot work like a computer. He grants the possibility of artificial machines coming into existence, and even superseding humans (1), but at every moment remains convinced that algorithmic machines are doomed to subservience. Penrose’s arguments proceed by showing that human intelligence cannot be implemented by any Turing-machine-equivalent computer, and that the human mind is not algorithmically based, and so cannot be captured by a Turing-machine equivalent. He is even sympathetic to Searle’s Chinese Room argument, despite showing some reservations against its conclusions.

The speculative nature of his argument positions people as devices which compute what a Turing machine cannot, even though physical laws make the construction of such a device a difficult venture. This is where his quantum theory sets in, with U and R (Unitary process and Reduction process, respectively) acting on the quantum states that describe a quantum system. His U and R processes, and the states they act upon, are not only independent of observers but at the same time real, thus branding him a realist. What happens is an interpolation between the Unitary Process and the Reduction Process: a new procedure takes shape that essentially contains a non-algorithmic element, which effectuates a future that is not computable from the present, even though it may be determined by it. This radically new concept, applied to space-time, is mapped onto the depths of the brain’s structure, and for Penrose the speculative possibility occurs in what he terms the phenomenon of brain plasticity. As he says,

Somewhere within the depths of the brain, as yet unknown cells are to be found of single quantum sensitivity, such that synapses becoming activated or deactivated through the growth or contraction of dendritic spines…could be governed by something like the processes involved in quasi-crystal growth. Thus, not just one of the possible alternative arrangements is tried out, but vast numbers, all superposed in complex linear superposition.

From the above it is deduced that the impact is only on the conscious mind, whereas the unconscious mind is left to algorithmic computationality. Why this is important for Penrose: as a mathematician, he believes that mathematical ideas populate an ideal Platonic world, which in turn is accessible only via the intellect. And, harking back to the non-locality principle within quantum theory, it is clear that true intellect requires consciousness, and the mathematician’s conscious mind has a direct route to truth. Meanwhile, there is a position, the “many-worlds” (2) view, that rivals Penrose’s quantum-realist one. This position rejects the Reduction Process in favor of the Unitary Process, terming the former a mere illusion. Penrose shows his reservations against this view since, for him, a theory of consciousness needs to be in place prior to the “many-worlds” view, before the latter could be squared with what one actually observes. Penrose is quite amazed at how many AI reviewers and researchers embrace the “many-worlds” hypothesis, and mocks them, their reason being that it better supports the validation of the AI project. In short, Penrose’s criticism of strong AI is aimed at the project’s assertion that consciousness can emerge from a complex system of algorithms, whereas for the thinker a great many things humans engage in are intrinsically non-algorithmic in nature. For Penrose, a system can be deterministic without being algorithmic. He even uses Turing’s halting theorem (3) to argue against the possibility of replicating consciousness algorithmically. In a public lecture in Kolkata on the 4th of January 2011 (4), Penrose had this to say,

There are many things in Physics which are yet unknown. Unless we unravel them, we cannot think of creating real artificial intelligence. It cannot be achieved through the present system of computing which is based on mathematical algorithm. We need to be able to replicate human consciousness, which, I believe, is possible through physics and quantum mechanics. The good news is that recent experiments indicate that it is possible.

There is an apparent shift in Penrosean ideas via what he calls “correct quantum gravity”, which would argue for the rational processes of the mind to be completely algorithmic, probably standing a bright chance of being duplicated by a sufficiently complex computing system. As he said in the same lecture in Kolkata,

A few years back, scientists at NASA had contemplated sending intelligent robots to space and sought my inputs. Even though we are still unable to create some device with genuine feelings and understanding, the question remains a disturbing one. Is it ethical to leave a machine which has consciousness in some faraway planet or in space? Honestly, we haven’t reached that stage yet. Having said that, I must add it may not be too far away, either. It is certainly a possibility.

Penrose does meet some sympathizers for his view, but his fellow-travelers do not tread along with him for long. For example, in an interview with Sander Olson, Vernor Vinge, despite showing some reluctance toward Penrose’s position, accepts that the physical aspects of mind, and especially the quantum effects, have not been studied in great detail, but holds that these quantum effects would simply be another thing to be learned with artifacts. Vinge does speculate on other paradigms that could equally be utilized for AI research to pick up speed, rather than confining oneself to computer departments and banking on their progress. His speculations (5) have some parallels, albeit occasional, to what Penrose and Searle hint at. Much of the work in AI could benefit if AI and neural nets were closely connected to biological life. Rather than banking upon the modeling and understanding of biological life with computers, if composite systems could be fathomed and made a reality that rely on biological life for guidance, or for providing features we do not yet understand well enough to implement in hardware, the program of AI would undoubtedly push the pedal and accelerate. There would probably be no disagreeing with what Aaron Saenz, Senior Editor of singularityhub.com, said (6),

Artificial General Intelligence is one of the Holy Grails of science because it is almost mythical in its promise: not a system that simply learns, but one that reaches and exceeds our own kind of intelligence. A truly new form of advanced life. There are many brilliant people trying to find it. Each of these AI researchers has their own approach, their own expectations and their own history of failures and a precious few successes. The products you see on the market today are narrow AI: machines that have very limited ability to learn. As Scott Brown said, “today’s AI technology is so primitive that much of the cleverness goes towards inventing business models that do not require good algorithms to succeed.” We’re in the infantile stages of AI. If that. Maybe the fetal stages.

(1) This sounds quite apocalyptic, like Ray Kurzweil’s notion of the singularity: an extremely disruptive, world-altering event with the potential to forever change the course of human history. The extermination of humanity by violent machines is not impossible, since there would be no sharp distinction between men and machines, owing to the existence of cybernetically enhanced humans and uploaded humans.

(2) The “many-worlds” view was first put forward by Hugh Everett in 1957. According to this view, the evolution of the state vector, regarded realistically, is always governed by the deterministic Unitary Process, while the Reduction Process remains totally absent from such an evolution. An interesting ramification of this view is that it puts conscious observers at the center of the considerations, resting on the basic assumption that quantum states corresponding to distinct conscious experiences have to be orthogonal (Simon 2009). On the technical side, ‘orthogonal’ in quantum mechanics means: two eigenstates of a Hermitian operator, ψm and ψn, are orthogonal if they correspond to different eigenvalues. This means, in Dirac notation, that ⟨ψm | ψn⟩ = 0 unless ψm and ψn correspond to the same eigenvalue. This follows from the fact that Schrödinger’s equation is a Sturm–Liouville equation (in Schrödinger’s formulation) or that observables are given by Hermitian operators (in Heisenberg’s formulation).
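The orthogonality fact in this footnote is easy to check concretely. The sketch below takes a toy real symmetric (hence Hermitian) 2×2 matrix whose eigenpairs are known in closed form, and verifies that eigenvectors belonging to distinct eigenvalues have vanishing inner product (the matrix and all names are chosen here purely for illustration):

```python
import math

# Toy observable: a real symmetric (hence Hermitian) 2x2 matrix.
H = [[2.0, 1.0],
     [1.0, 2.0]]

# Its eigenpairs, known in closed form for this particular matrix:
# eigenvalue 3 with eigenvector (1,1)/sqrt(2); eigenvalue 1 with (1,-1)/sqrt(2).
s = 1.0 / math.sqrt(2.0)
psi_m, lam_m = [s, s], 3.0
psi_n, lam_n = [s, -s], 1.0

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def inner(u, v):
    # <u|v> for real vectors; for complex ones, conjugate the first argument.
    return sum(a * b for a, b in zip(u, v))

# Check they really are eigenvectors of H...
assert all(abs(x - lam_m * y) < 1e-12 for x, y in zip(matvec(H, psi_m), psi_m))
assert all(abs(x - lam_n * y) < 1e-12 for x, y in zip(matvec(H, psi_n), psi_n))

# ...and that distinct eigenvalues force orthogonality: <psi_m|psi_n> = 0.
assert abs(inner(psi_m, psi_n)) < 1e-12
```

The same check scales to any Hermitian operator; in the complex case the inner product conjugates its first argument, exactly as in the Dirac bracket ⟨ψm | ψn⟩.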

(3) The halting problem is a decision problem in computability theory, stated as: given a description of a program and an input, decide whether the program finishes running or continues to run forever. Turing proved that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist; in this sense, the halting problem is undecidable over Turing machines.
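Turing’s proof is a diagonal argument, and its shape can be sketched in a few lines of Python. The `halts` function below is a hypothetical placeholder, not real code that works; the point is precisely that no total implementation of it can exist:

```python
# Sketch of Turing's diagonal argument. Assume a total decider `halts`
# existed; the program `paradox` below would then contradict it.
# `halts` is a hypothetical placeholder -- the theorem is exactly that
# no real implementation of it can exist.

def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) terminates."""
    raise NotImplementedError("no such total decider can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about program(program):
    if halts(program, program):
        while True:      # predicted to halt -> loop forever
            pass
    return "halted"      # predicted to loop -> halt immediately

# Feeding paradox to itself yields a contradiction either way:
# if paradox(paradox) halts, then halts() said it doesn't -- and vice versa.
```

Running `paradox(paradox)` as written simply raises `NotImplementedError`, which is the honest outcome: the decider it presupposes cannot be built.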

(4) Penrose, R. AI may soon become reality. Public lecture delivered in Kolkata on the 4th of Jan 2011. <http://timesofindia.indiatimes.com/city/kolkata-/AI-may-soon-become-reality-Penrose/articleshow/7219700.cms>

(5) Olson, S. Interview with Vernor Vinge in Nanotech.Biz <http://www.nanotech.biz/i.php?id=01_16_09>

(6) Saenz, A. Will Vicarious Systems’ Silicon Valley Pedigree Help it Build AI? in singularityhub.com <http://singularityhub.com/2011/02/03/will-vicarious-systems-silicon-valley-pedigree-help-it-build-agi/>

…about meta-narrativizing (sorry for this nonlinear/bottom-up approach to the mail), I could only quip that post-modernism is a highly ineffectual, escapism-laden movement in its reactionary gesture toward modernism. Take, for instance, Lyotard and his turn from libidinal economy to post-modernism through paganism, before he culminates his journey in The Differend. He surely reached a roadblock in Libidinal Economy itself, when faced with his unflinching commitment to an ontology of events, since that raised dire issues for his epistemological affiliations. The result: the Freudian-Marxian marriage was filed for divorce. The way out he imagined was to sort out matters by evening out differences with the incommensurable issues of justice, and that is why he took up paganism. Even here, to begin with, he was in a quandary, since he took recourse to admitting irreducible differences plaguing the prevalent order of things (sorry for this Foucauldian noise!!), and paved the escape route by adhering to the principle of never trying one’s hand/mind, or whatever one could use, at reductionism. So far, so good. But was this turn towards micro-narrativizing proving a difficult ordeal? My reading of the thinker in question undoubtedly says YES. If one reads The Postmodern Condition or The Differend carefully, one notices his liberal borrowings from Wittgenstein’s language games, or what he prefers to call “phrase regimens”. These are used to negate his earlier commitments to an ontology of events by stressing his epistemological ones, and are therefore invoked only with the idea of political motivators.
The crux of the matter is: to drive his point home forcefully, he negates critical theory and the unitary Being of society (both pillars of modernism, meta-narratives in themselves), and substitutes for them a post-modern society built of compositions of fragmented “phrase regimens”, open to alteration in their attempts to pass the test of legitimate narratives. This debt to Wittgenstein is what I call a movement riddled with escapism, an exegesis that begins but has a real eschatological problem. I do not know if I have been able to show clearly, with this example, the fault-lines within micro-narratives.

[addendum]: if Wittgenstein is said to have some resemblances with postmodernism or, more importantly, poststructuralism, human imagination has transcended its sleep state.