Abstraction as Dissection of a Flat “Ontology”: The Illusiveness of Levels (Paper)

DeLanda:

…while an ontology based on relations between general types and particular instances is hierarchical, each level representing a different ontological category (organism, species, genera) [or strings, quarks, baryons], an approach in terms of interacting parts and emergent wholes leads to a flat ontology, one made exclusively of unique, singular individuals, differing in spatio-temporal scale but not in ontological status.

The following discussion, however, seeks to go further than DeLanda’s account of hierarchy, extending it to all spatio-temporal entities, hence the interjection of “strings, quarks, baryons” into the quote. That this extension is natural should become clear once van Fraassen’s role in this level-denying consilience is laid out. Furthermore, van Fraassen’s account will be employed to illustrate why any level-like organization attributed to the components of an explanation has no bearing on the explanation itself, and arises from two things:

1) erroneously clumping together all types of belief statements into a single branch of philosophy that deals with knowledge, and

2) attempting to stratify the causal thicket that is the world, so as to conform the product of the scientific enterprise to monist and fundamentalist predilections.

Finally, there is one other way in which the account differs from DeLanda’s flat ontology, arising when touching upon Kant’s antinomy of teleology: the idea that levels of mechanisms telescope into a flat ontology, as every part and whole enjoys the same status in a scientific explanation, differing only in size.


Figure: The larger arrows point toward the conclusion of consilience. The smaller arrows suggest relationships that create cooperation toward the thesis. The blue bubbles represent the positive web of notions that cohere, and the red bubbles are those notions that are excluded from the web. The red arrows indicate where the ideas not included in the web arise, and the totality of this paper works toward a final explication of why these are to be excluded. There are a few thin blue lines that are not included, because they would make a mess of the image, such as a line connecting abstraction and James’ pragmatism. However, the paper (Abstraction as Dissection of a Flat Ontology) endeavors to make these connections clear, for example, quoting James to show that he presents an idea that seems a precursor to Cartwright’s notion of abstraction. (Note: Yellow lines are really blue lines, but are yellow to avoid confusion that might ensue from blue lines passing through blue bubbles. Green lines indicate additivity. The red lines denote notions not connected to the web, yet bearing some relation to ideas in the web.)

Glue Code + Pipeline Jungles. Thought of the Day 25.0


Machine learning researchers tend to develop general purpose solutions as self-contained packages. A wide variety of these are available as open-source packages at places like mloss.org, or from in-house code, proprietary packages, and cloud-based platforms. Using self-contained solutions often results in a glue code system design pattern, in which a massive amount of supporting code is written to get data into and out of general-purpose packages.

This glue code design pattern can be costly in the long term, as it tends to freeze a system to the peculiarities of a specific package. General purpose solutions often have different design goals: they seek to provide one learning system to solve many problems, but many practical software systems are highly engineered to apply to one large-scale problem, for which many experimental solutions are sought. While generic systems might make it possible to interchange optimization algorithms, it is quite often refactoring of the construction of the problem space which yields the most benefit to mature systems. The glue code pattern implicitly embeds this construction in supporting code instead of in principally designed components. As a result, the glue code pattern often makes experimentation with other machine learning approaches prohibitively expensive, resulting in an ongoing tax on innovation.

Glue code can be reduced by choosing to re-implement specific algorithms within the broader system architecture. At first, this may seem like a high cost to pay – reimplementing a machine learning package in C++ or Java that is already available in R or matlab, for example, may appear to be a waste of effort. But the resulting system may require dramatically less glue code to integrate in the overall system, be easier to test, be easier to maintain, and be better designed to allow alternate approaches to be plugged in and empirically tested. Problem-specific machine learning code can also be tweaked with problem-specific knowledge that is hard to support in general packages.
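As a minimal, hypothetical sketch of this design choice (every name below, including the imagined ExternalLearner package, is invented for illustration), compare the glue-code style of integration with a problem-specific implementation living behind the system’s own narrow interface:

```python
# Hypothetical sketch: the names (Learner, MeanBaseline, ExternalLearner)
# are invented for illustration; nothing here refers to a real package.
from abc import ABC, abstractmethod
from typing import Sequence


class Learner(ABC):
    """Narrow, system-owned interface: alternate approaches can be
    plugged in behind it and empirically compared."""

    @abstractmethod
    def fit(self, rows: Sequence[dict], label_key: str) -> None: ...

    @abstractmethod
    def predict(self, row: dict) -> float: ...


class MeanBaseline(Learner):
    """A problem-specific implementation inside the system architecture:
    no format shuffling, easy to test, easy to maintain."""

    def fit(self, rows, label_key):
        labels = [r[label_key] for r in rows]
        self._mean = sum(labels) / len(labels)

    def predict(self, row):
        return self._mean


# The glue-code alternative (sketched in comments, not runnable): marshal
# the system's rows into whatever matrix layout ExternalLearner expects,
# call it, then marshal results back out.  Most of the code is translation,
# and the system is frozen to that package's peculiarities.
#
#   matrix, labels = rows_to_package_matrix(rows)      # glue
#   model = ExternalLearner(**package_specific_flags)  # glue
#   model.train(matrix, labels)                        # the actual ML
#   return package_scores_to_rows(model.scores())      # glue

if __name__ == "__main__":
    rows = [{"clicks": 3.0}, {"clicks": 5.0}]
    learner: Learner = MeanBaseline()
    learner.fit(rows, label_key="clicks")
    print(learner.predict({"clicks": None}))  # 4.0
```

The point is not the trivial learner, but that alternatives can now be swapped in behind Learner and tested empirically, with no marshalling layer to maintain.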

As a special case of glue code, pipeline jungles often appear in data preparation. These can evolve organically, as new signals are identified and new information sources added. Without care, the resulting system for preparing data in an ML-friendly format may become a jungle of scrapes, joins, and sampling steps, often with intermediate files output. Managing these pipelines, detecting errors and recovering from failures are all difficult and costly. Testing such pipelines often requires expensive end-to-end integration tests. All of this adds to technical debt of a system and makes further innovation more costly. It’s worth noting that glue code and pipeline jungles are symptomatic of integration issues that may have a root cause in overly separated “research” and “engineering” roles. When machine learning packages are developed in an ivory-tower setting, the resulting packages may appear to be more like black boxes to the teams that actually employ them in practice.
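A caricature may help here (all step names and file paths are made up): the jungle is a chain of ad hoc scrapes, joins, and intermediate files whose ordering lives nowhere in particular, whereas keeping the preparation steps as small composable functions makes them individually testable and the pipeline itself inspectable as data.

```python
# Hypothetical caricature of a pipeline jungle vs. a declared pipeline.
# All step names and paths are invented for illustration.

# Jungle style: implicit ordering, intermediate files, only testable end to end.
#
#   scrape_logs("/data/raw/*.gz", out="/tmp/step1.csv")
#   join_with_user_table("/tmp/step1.csv", out="/tmp/step2.csv")
#   sample_negatives("/tmp/step2.csv", out="/tmp/step3.csv")
#   ...  # each new signal adds another scrape/join/sample somewhere in the middle

# Declared style: the pipeline is data; each step is a pure function on
# in-memory records, so steps can be unit-tested and reordered deliberately.
from typing import Callable, Iterable, List

Record = dict
Step = Callable[[List[Record]], List[Record]]


def run_pipeline(records: List[Record], steps: Iterable[Step]) -> List[Record]:
    for step in steps:
        records = step(records)
    return records


def drop_missing_label(records: List[Record]) -> List[Record]:
    return [r for r in records if r.get("label") is not None]


def add_length_feature(records: List[Record]) -> List[Record]:
    return [{**r, "query_len": len(r.get("query", ""))} for r in records]


if __name__ == "__main__":
    raw = [{"query": "cats", "label": 1}, {"query": "dogs", "label": None}]
    print(run_pipeline(raw, [drop_missing_label, add_length_feature]))
    # -> [{'query': 'cats', 'label': 1, 'query_len': 4}]
```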

Bayesianism in Game Theory. Thought of the Day 24.0


Bayesianism in game theory can be characterised as the view that it is always possible to define probabilities for anything that is relevant for the players’ decision-making. In addition, it is usually taken to imply that the players use Bayes’ rule for updating their beliefs. If the probabilities are to be always definable, one also has to specify what players’ beliefs are before the play is supposed to begin. The standard assumption is that such prior beliefs are the same for all players. This common prior assumption (CPA) means that the players have the same prior probabilities for all those aspects of the game for which the description of the game itself does not specify different probabilities. Common priors are usually justified with the so-called Harsanyi doctrine, according to which all differences in probabilities are to be attributed solely to differences in the experiences that the players have had. Different priors for different players would imply that there are some factors that affect the players’ beliefs even though they have not been explicitly modelled.

The CPA is sometimes considered to be equivalent to the Harsanyi doctrine, but there seems to be a difference between them: the Harsanyi doctrine is best viewed as a metaphysical doctrine about the determination of beliefs, and it is hard to see why anybody would be willing to argue against it: if everything that might affect the determination of beliefs is included in the notion of ‘experience’, then it alone does determine the beliefs. The Harsanyi doctrine has some affinity to some convergence theorems in Bayesian statistics: if individuals are fed with similar information indefinitely, their probabilities will ultimately be the same, irrespective of the original priors.
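A small numerical sketch may make both claims concrete (the coin-bias example and all numbers are invented for illustration; this is a toy model, not anything drawn from the game-theoretic literature): under a common prior, identical evidence yields identical posteriors, and, echoing the convergence theorems just mentioned, sharply different priors are washed out by a long shared stream of observations.

```python
# Toy Beta-Bernoulli belief updating about a coin's bias (illustrative only).
import random

def update(alpha: float, beta: float, observations) -> tuple:
    """Bayes' rule for a Beta(alpha, beta) prior over P(heads),
    given a sequence of 1 (heads) / 0 (tails) observations."""
    heads = sum(observations)
    tails = len(observations) - heads
    return alpha + heads, beta + tails

def mean(alpha: float, beta: float) -> float:
    return alpha / (alpha + beta)

random.seed(0)
data = [1 if random.random() < 0.7 else 0 for _ in range(5000)]

# Common prior assumption: both players start from Beta(1, 1), so the same
# evidence produces exactly the same posterior belief.
p1 = update(1, 1, data)
p2 = update(1, 1, data)
assert p1 == p2

# Harsanyi-style convergence: very different priors, same long experience,
# nearly identical posterior means.
a = update(20, 2, data)   # initially convinced the coin favours heads
b = update(2, 20, data)   # initially convinced it favours tails
print(round(mean(*a), 3), round(mean(*b), 3))  # both close to 0.7
```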

The CPA however is a methodological injunction to include everything that may affect the players’ behaviour in the game: not just everything that motivates the players, but also everything that affects the players’ beliefs should be explicitly modelled by the game: if players had different priors, this would mean that the game structure would not be completely specified because there would be differences in players’ behaviour that are not explained by the model. In a dispute over the status of the CPA, Faruk Gul essentially argues that the CPA does not follow from the Harsanyi doctrine. He does this by distinguishing between two different interpretations of the common prior, the ‘prior view’ and the ‘infinite hierarchy view’. The former is a genuinely dynamic story in which it is assumed that there really is a prior stage in time. The latter framework refers to Mertens and Zamir’s construction in which prior beliefs can be consistently formulated. This framework, however, is static in the sense that the players do not have any information on a prior stage; indeed, the ‘priors’ in this framework do not even pin down a player’s priors for his own types. Thus, the existence of a common prior in the latter framework does not have anything to do with the view that differences in beliefs reflect differences in information only.

It is agreed by everyone that for most (real-world) problems there is no prior stage in which the players know each other’s beliefs, let alone that they would be the same. The CPA, if understood as a modelling assumption, is clearly false. Robert Aumann, however, defends the CPA by arguing that whenever there are differences in beliefs, there must have been a prior stage in which the priors were the same, and from which the current beliefs can be derived by conditioning on the differentiating events. If players differ in their present beliefs, they must have received different information at some previous point in time, and they must have processed this information correctly. Based on this assumption, he further argues that players cannot ‘agree to disagree’: if a player knows that his opponents’ beliefs are different from his own, he should revise his beliefs to take the opponents’ information into account. The only case where the CPA would be violated, then, is when players have different beliefs, and have common knowledge about each other’s different beliefs and about each other’s epistemic rationality. Aumann’s argument seems perfectly legitimate if it is taken as a metaphysical one, but we do not see how it could be used as a justification for using the CPA as a modelling assumption in this or that application of game theory, and Aumann does not argue that it should.


Beginning of Matter, Start to Existence Itself


When the inequality

μ + 3p/c² > 0  ⇔  w > −1/3

is satisfied, one obtains directly from the Raychaudhuri equation

3S̈/S = −½κ(μ + 3p/c²) + Λ

the Friedmann-Lemaître (FL) Universe Singularity Theorem, which states that:

In a FL universe with Λ ≤ 0 and μ + 3p/c² > 0 at all times, at any instant t₀ when H₀ ≡ (Ṡ/S)₀ > 0 there is a finite time t∗, with t₀ − (1/H₀) < t∗ < t₀, such that S(t) → 0 as t → t∗; the universe starts at a space-time singularity there, with μ → ∞ and T → ∞ if μ + p/c² > 0.
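The bound on t∗ follows from a short concavity argument; a minimal sketch of the standard reasoning (not specific to any one presentation) runs as follows:

```latex
% With \Lambda \le 0 and \mu + 3p/c^2 > 0, the Raychaudhuri equation gives
%   3\ddot{S}/S = -\tfrac{1}{2}\kappa\,(\mu + 3p/c^2) + \Lambda < 0,
% so S(t) is strictly concave and lies below its tangent line at t_0:
\[
  S(t) \;\le\; S(t_0) + \dot{S}(t_0)\,(t - t_0)
       \;=\; S(t_0)\,\bigl[\,1 + H_0\,(t - t_0)\,\bigr].
\]
% The right-hand side vanishes at t = t_0 - 1/H_0; since S > 0 while the
% universe exists and \dot{S}(t_0) > 0, the scale factor must reach zero at
% some t_* with
\[
  t_0 - \frac{1}{H_0} \;<\; t_* \;<\; t_0 ,
\]
% which is the singularity asserted by the theorem.
```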

This is not merely a start to matter – it is a start to space, to time, to physics itself. It is the most dramatic event in the history of the universe: it is the start of existence of everything. The underlying physical feature is the non-linear nature of the Einstein Field Equations (EFE): going back into the past, the more the universe contracts, the higher the active gravitational density, causing it to contract even more. The pressure p that one might have hoped would help stave off the collapse makes it even worse because (consequent on the form of the EFE) p enters algebraically into the Raychaudhuri equation with the same sign as the energy density μ. Note that the Hubble constant gives an estimate of the age of the universe: the time τ₀ = t₀ − t∗ since the start of the universe is less than 1/H₀.
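For a sense of scale, a back-of-the-envelope conversion (assuming a round H₀ of 70 km s⁻¹ Mpc⁻¹ purely for illustration) puts 1/H₀ at roughly fourteen billion years:

```python
# Back-of-the-envelope: the Hubble time 1/H0 as an upper bound on the age
# of the universe.  H0 = 70 km/s/Mpc is an assumed round value.
KM_PER_MPC = 3.0857e19           # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16       # seconds in a billion years

H0 = 70.0                        # km / s / Mpc (assumed)
hubble_time_s = KM_PER_MPC / H0  # 1/H0 in seconds
print(hubble_time_s / SECONDS_PER_GYR)  # ~13.97, so tau_0 < ~14 Gyr
```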

This conclusion can in principle be avoided by a cosmological constant, but in practice this cannot work, because we know the universe has expanded by at least a ratio of 11, as we have seen objects at a redshift of 10; the cosmological constant would have to have an effective magnitude at least 11³ = 1331 times the present matter density to dominate and cause a turn-around then or at any earlier time, and so would be much bigger than its observed present upper limit (of the same order as the present matter density). Accordingly, no turnaround is possible while classical physics holds. However, energy-violating matter components such as a scalar field can avoid this conclusion, if they dominate at early enough times; but this can only be when quantum fields are significant, when the universe was at least 10¹² times smaller than at present.
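The arithmetic behind these figures is worth spelling out (only the numbers already quoted above):

```python
# The numbers quoted above, spelled out.
z_observed = 10                    # redshift of the most distant objects cited
expansion_ratio = 1 + z_observed   # the universe has grown by at least this factor
print(expansion_ratio)             # 11

# Matter density scales as (1 + z)^3, so a cosmological constant able to
# dominate and cause a turn-around at that epoch would have to exceed the
# present matter density by at least:
print(expansion_ratio ** 3)        # 1331
```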

Because the radiation temperature T_rad ∝ S⁻¹, a major conclusion is that a Hot Big Bang must have occurred; densities and temperatures must have risen at least to high enough energies that quantum fields were significant, at something like the GUT energy. The universe must have reached those extreme temperatures and energies at which classical theory breaks down.

Unconditional Accelerationists: Park Chung-Hee and Napoleon

Land’s Teleoplexy,

Some instance of intermediate individuation—most obviously the state—could be strategically invested by a Left Accelerationism, precisely in order to submit the virtual-teleoplexic lineage of Terrestrial Capitalism (or Techonomic Singularity) to effacement and disruption.

For the unconditional accelerationist as much as for the social historian, of course, the voluntarist quality of this image is a lie. Napoleon’s supposed flight from history can amply be recuperated within the process of history itself, if only we revise our image of what this is: not a flat space or a series of smooth curves, but rather a tangled, homeorhetic, deep-subversive spiral-complex. Far from shaping history like putty, Napoleon, like all catastrophic agents of time-anomaly, unleashed forces that ran far ahead of his very intentions: pushing Europe’s engagement with Africa and the Middle East onto a new plane, promulgating the Code Napoleon that would shape and selectively boost the economic development of continental Europe. In this respect, the image of him offered later by Marinetti is altogether more interesting. In his 1941 ‘Qualitative Imaginative Futurist Mathematics’, Marinetti claimed that Futurist military ‘calculations are as precise as those of Napoleon who in some battles had all of his couriers killed and hence his generals autonomous’. Far from the prideful image of a singular genius strutting as he pleases across the stage of world history, here Napoleon becomes something altogether more monstrous. Foreshadowing Bataille’s argument a few years later that the apex of sovereignty is precisely an absolute moment of unknowing, he becomes a head that has severed itself from its limbs, falling from its body as it gives way to the sharp and militant positive feedback it has unleashed.

To understand its significance, we must begin by recognising that far from being a story of the triumph of a free capitalism over communism, the reality of Park Chung-hee’s rule and the overtaking of the North by the South is more than a little uncomfortable for a right-libertarian (though not, perhaps, for someone like Peter Thiel). Park was not just a sovereign dictator but an inveterate interventionist, who constructed an entire sequence of bureaucracies to oversee the expansion of the economy according to determinate Five-Year Plans. In private notes, he emphasised the ideology of the February 26 incident in Japan, the militarised attempt to effect a ‘Shōwa Restoration’ that would have united the Japanese race politically and economically behind a totalitarian emperor. In Japan this had failed: in Korea, Park himself could be the president-emperor, declaiming on his ‘sacred military revolution’ of 1961 that had brought together the ‘Korean race’. At the same time, he explicitly imitated the communist North, proclaiming the need for spiritual mobilisation and a ‘path of the leader’ 지도자의길 around which the nation would cohere. The carefully-coordinated mass histrionics after his death in 1979 echoed closely the spectacle with which we are still familiar in North Korea.

Park Chung-hee and Napoleon demonstrate at its extreme the tangled structure of the history of capital. Capitalism’s intensities are geographically and temporally uneven; they spread through loops and spectacular digressions. Human agencies and the mechanisms of the state have their important place within this capitalist megamachine. But things never quite work out the way they plan.

Holism. Note Quote.


It is a basic tenet of systems theory/holism as well as of theosophy that the whole is greater than the sum of its parts. If, then, our individual minds are subsystems of larger manifestations of mind, how is it that our own minds are self-conscious while the universal mind (on the physical plane) is not? How can a part possess a quality that the whole does not? A logical solution is to regard the material universe as but the outer garment of universal mind. According to theosophy the laws of nature are the wills and energies of higher beings or spiritual intelligences which in their aggregate make up universal mind. It is mind and intelligence which give rise to the order and harmony of the physical universe, and not the patterns of chance, or the decisions of self-organizing matter.

Like Capra, the theosophical philosophy rejects the traditional theological idea of a supernatural, extracosmic divine Creator. It would also question Capra’s notion that such an extracosmic God is the self-organizing dynamics of the physical universe. Theosophy, on the other hand, firmly believes in the existence of innumerable superhuman, intracosmic intelligences (or gods), which have already passed through the human stage in past evolutionary cycles, and to which status we shall ourselves one day attain.

There are two opposing views of consciousness: the Western scientific view which considers matter as primary and consciousness as a by-product of complex material patterns associated with a certain stage of biological evolution; and the mystical view which sees consciousness as the primary reality and ground of all being. Systems theory accepts the conventional materialist view that consciousness is a manifestation of living systems of a certain complexity, although the biological structures themselves are expressions of “underlying processes that represent the system’s self-organization, and hence its mind. In this sense material structures are no longer considered the primary reality” (Turning Point). This stance reaffirms the dualistic view of mind and matter. Capra clearly believes that matter is primary in the sense that the physical world comes first and life, mind, and consciousness emerge at a later stage. That he chooses to call the self-organizing dynamics of the universe by the name “mind” is beside the point. If consciousness is regarded as the underlying reality, it is impossible to regard it also as a property of matter which emerges at a certain stage of evolution.

Systems theory accepts neither the traditional scientific view of evolution as a game of dice, nor the Western religious view of an ordered universe designed by a divine creator. Evolution is presented as basically open and indeterminate, without goal or purpose, yet with a recognizable pattern of development. Chance fluctuations take place, causing a system at a certain moment to become unstable. As it “approaches the critical point, it ‘decides’ itself which way to go, and this decision will determine its evolution”. Capra sees the systems view of the evolutionary process not as a product of blind chance but as an unfolding of order and complexity analogous to a learning process, including both independence from the environment and freedom of choice. However, he fails to explain how supposedly inert matter is able to “decide,” “choose,” and “learn.” This belief that evolution is purposeless and haphazard and yet shows a recognizable pattern is similar to biologist Lyall Watson’s belief that evolution is governed by chance but that chance has “a pattern and a reason of its own”.

While the materialistic and mystical views of mind seem incompatible and irreconcilable, mind/matter dualism may be resolved by seeing spirit and matter as fundamentally one, as different grades of consciousness-life-substance. Science already holds that physical matter and energy are interconvertible, that matter is concentrated energy; and theosophy adds that consciousness is the highest and subtlest form. From this view there is no absolutely dead and unconscious matter in the universe. Everything is a living, evolving, conscious entity, and every entity is composite, consisting of bundles of forces and substances pertaining to different planes, from the astral-physical through the psychomental to divine-spiritual. Obviously the degree of manifested life and consciousness varies widely from one entity to another; but at the heart of every entity is an indwelling spiritual atom or consciousness-center at a particular stage of its evolutionary unfoldment. More complex material forms do not create consciousness, but merely provide a more developed vehicle through which this spiritual monad can express its powers and faculties. Evolution is far from being purposeless and indeterminate: our human monads issued from the divine Source aeons ago as unself-conscious god-sparks and, by taking embodiment and garnering experience in all the kingdoms of nature, we will eventually raise ourselves to the status of self-conscious gods.

The Semiotic Theory of Autopoiesis, OR, New Level Emergentism


The dynamics of all the life-cycle meaning processes can be described in terms of basic semiotic components, algebraic constructions of the following form:

Pn(мn : fn(Ξn) → Ξn+1)

where Ξn is a sign system corresponding to a representation of a (design) problem at time t1, Ξn+1 is a sign system corresponding to a representation of the problem at time t2, t2 > t1, fn is a composition of semiotic morphisms that specifies the interaction of variation and selection under the condition of information closure, which requires that no external elements be added to the current sign system; мn is a semiotic morphism, and Pn is the probability associated with мn, ΣPn = 1, n=1,…,M, where M is the number of meaningful transformations of the resultant sign system after fn. There is a partial ranking – importance ordering – on the constraints of A in every Ξn, such that lower ranked constraints can be violated in order for higher ranked constraints to be satisfied. The morphisms of fn preserve the ranking.
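A toy rendering of the construction may help fix the moving parts (this is a hypothetical sketch only: the class and field names are invented, and sign systems are reduced to plain labels): a component applies the information-closed transformation fn to Ξn and pairs each candidate morphism мn with its probability Pn, the Pn summing to one over the M admissible transformations; the retrospective collapse to a single мn with Pn = 1, described at the end of the section, corresponds to the collapse() method.

```python
# Hypothetical toy model of a "basic semiotic component".  Names are
# invented for illustration; sign systems are reduced to plain labels.
from dataclasses import dataclass
from typing import Callable, List, Tuple

SignSystem = str                              # stand-in for a full sign system Xi_n


@dataclass
class SemioticComponent:
    source: SignSystem                        # Xi_n at time t1
    f: Callable[[SignSystem], SignSystem]     # information-closed transformation f_n
    # candidate morphisms m_n with their probabilities P_n (must sum to 1)
    morphisms: List[Tuple[Callable[[SignSystem], SignSystem], float]]

    def __post_init__(self):
        total = sum(p for _, p in self.morphisms)
        assert abs(total - 1.0) < 1e-9, "probabilities P_n must sum to 1"

    def outcomes(self) -> List[Tuple[SignSystem, float]]:
        """Possible Xi_{n+1} with their probabilities (the on-going process)."""
        closed = self.f(self.source)
        return [(m(closed), p) for m, p in self.morphisms]

    def collapse(self, chosen: int) -> SignSystem:
        """Retrospective view: one morphism is realised with P_n = 1."""
        m, _ = self.morphisms[chosen]
        return m(self.f(self.source))


# Tiny usage example with invented transformations.
component = SemioticComponent(
    source="problem-v1",
    f=lambda s: s + "+varied",
    morphisms=[(lambda s: s + "->meaning-A", 0.7),
               (lambda s: s + "->meaning-B", 0.3)],
)
print(component.outcomes())
print(component.collapse(0))
```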

The Semiotic Theory of Self-Organizing Systems postulates that in the scale hierarchy of dynamical organization, a new level emerges if and only if a new level in the hierarchy of semiotic interpretance emerges. As the development of a new product always and naturally causes the emergence of a new meaning, the above-cited Principle of Emergence directly leads us to the formulation of the first law of life-cycle semiosis as follows:

I. The semiosis of a product life cycle is represented by a sequence of basic semiotic components, such that at least one of the components is well defined in the sense that not all of its morphisms of м and f are isomorphisms, and at least one м in the sequence is not level-preserving in the sense that it does not preserve the original partial ordering on levels.

For the present (i.e. for an on-going process), there exists a probability distribution over the possible мn for every component in the sequence. For the past (i.e. retrospectively), each of the distributions collapses to a single mapping with Pn = 1, while the sequence of basic semiotic components is degenerated to a sequence of functions. For the future, the life-cycle meaning-making