The Second Trichotomy. Thought of the Day 120.0

[Figure: Peirce's triple trichotomy]

The second trichotomy (here is the first) is probably the most well-known piece of Peirce’s semiotics: it distinguishes three possible relations between the sign and its (dynamical) object. This relation may be motivated by similarity, by actual connection, or by general habit – giving rise to the sign classes icon, index, and symbol, respectively.

According to the second trichotomy, a Sign may be termed an Icon, an Index, or a Symbol.

An Icon is a sign which refers to the Object that it denotes merely by virtue of characters of its own, and which it possesses, just the same, whether any such Object actually exists or not. It is true that unless there really is such an Object, the Icon does not act as a sign; but this has nothing to do with its character as a sign. Anything whatever, be it quality, existent individual, or law, is an Icon of anything, in so far as it is like that thing and used as a sign of it.

An Index is a sign which refers to the Object that it denotes by virtue of being really affected by that Object. It cannot, therefore, be a Qualisign, because qualities are whatever they are independently of anything else. In so far as the Index is affected by the Object, it necessarily has some Quality in common with the Object, and it is in respect to these that it refers to the Object. It does, therefore, involve a sort of Icon, although an Icon of a peculiar kind; and it is not the mere resemblance of its Object, even in these respects which makes it a sign, but it is the actual modification of it by the Object. 

A Symbol is a sign which refers to the Object that it denotes by virtue of a law, usually an association of general ideas, which operates to cause the Symbol to be interpreted as referring to that Object. It is thus itself a general type or law, that is, a Legisign. As such it acts through a Replica. Not only is it general in itself, but the Object to which it refers is of general nature. Now that which is general has its being in the instances it will determine. There must, therefore, be existent instances of what the Symbol denotes, although we must here understand by ‘existent’, existent in the possibly imaginary universe to which the Symbol refers. The Symbol will indirectly, through the association or other law, be affected by those instances; and thus the Symbol will involve a sort of Index, although an Index of a peculiar kind. It will not, however, be by any means true that the slight effect upon the Symbol of those instances accounts for the significant character of the Symbol.

The icon refers to its object solely by means of its own properties. This implies that an icon potentially refers to an indefinite class of objects, namely all those objects which have, in some respect, a relation of similarity to it. In recent semiotics it has often been remarked, notably by Nelson Goodman, that any phenomenon can be said to be like any other phenomenon in some respect, if the criterion of similarity is chosen sufficiently generally, just as the establishment of any convention immediately implies a similarity relation. If Nelson Goodman picks out two otherwise very different objects, then they are immediately similar to the extent that they now have the same relation to Nelson Goodman. Goodman and others have for this reason deemed the similarity relation insignificant – and consequently put the whole burden of semiotics on the shoulders of conventional signs only. But the counterargument against this rejection of the relevance of the icon lies close at hand. Given a tertium comparationis, a measuring stick, it is no longer possible to make anything be like anything else. This lies in Peirce’s observation that ‘It is true that unless there really is such an Object, the Icon does not act as a sign.’ The icon only functions as a sign to the extent that it is, in fact, used to refer to some object – and when it does that, some criterion for similarity, a measuring stick (or, at least, a delimited bundle of possible measuring sticks), is given in and with the comparison. In the quote just given, it is of course the immediate object Peirce refers to – there is no claim that such an object as the icon refers to must actually exist. Goodman and others are of course right in claiming that, as ‘Anything whatever (…) is an Icon of anything’, the universe is pervaded by a continuum of possible similarity relations back and forth; but as soon as some phenomenon is in fact used as an icon for an object, a specific bundle of similarity relations is picked out: ‘… in so far as it is like that thing.’

Just like the qualisign, the icon is a limit category. ‘A possibility alone is an Icon purely by virtue of its quality; and its object can only be a Firstness.’ (Charles S. Peirce, The Essential Peirce: Selected Philosophical Writings). Strictly speaking, a pure icon may only refer one possible Firstness to another. The pure icon would be an identity relation between possibilities. Consequently, the icon must, as soon as it functions as a sign, be more than iconic. The icon is typically an aspect of a more complicated sign, even if very often a most important aspect, because it provides the predicative aspect of that sign. This Peirce records by his notion of ‘hypoicon’: ‘But a sign may be iconic, that is, may represent its object mainly by its similarity, no matter what its mode of being. If a substantive is wanted, an iconic representamen may be termed a hypoicon’. Hypoicons are signs which to a large extent make use of iconic means as meaning-givers: images, paintings, photos, diagrams, etc. But the iconic meaning realized in hypoicons has an immensely fundamental role in Peirce’s semiotics. As icons are the only signs that look like what they signify, they are at the same time the only signs realizing meaning. Thus any higher sign, index and symbol alike, must contain, or, by association or inference, terminate in, an icon. If a symbol cannot give an iconic interpretant as a result, it is empty. In that respect, Peirce’s doctrine parallels that of Husserl, where merely signitive acts require fulfillment by intuitive (‘anschauliche’) acts. This is actually Peirce’s continuation of Kant’s famous claim that intuitions without concepts are blind, while concepts without intuitions are empty. When Peirce observes that ‘With the exception of knowledge, in the present instant, of the contents of consciousness in that instant (the existence of which knowledge is open to doubt) all our thought and knowledge is by signs’ (Letters to Lady Welby), these signs necessarily involve iconic components. Peirce has often been attacked for a tendency towards pan-semiotism which lets all mental and physical processes take place via signs – yet in the quote just given he, analogously to Husserl, claims there must be a basic evidence anterior to the sign, and just as in Husserl this evidence before the sign must be based on a ‘metaphysics of presence’: the ‘present instant’ provides what is not yet mediated by signs. But icons provide the connection of signs, logic and science to this foundation for Peirce’s phenomenology: the icon is the only sign providing evidence (Charles S. Peirce, The New Elements of Mathematics Vol. 4). The icon is, through its timeless similarity, apt to communicate aspects of an experience ‘in the present instant’. Thus the typical index contains an icon (more or less elaborated, it is true), and any symbol intends an iconic interpretant. Continuity is at stake in relation to the icon to the extent that the icon, while not in itself general, is the bearer of a potential generality. This infinitesimal generality is decisive for the higher sign types’ possibility of giving rise to thought: the symbol thus contains a bundle of general icons defining its meaning. A special icon providing the condition of possibility for general and rigorous thought is, of course, the diagram.

The index connects the sign directly with its object via connection in space and time; as an actual sign connected to its object, the index is turned towards the past: the action which has left the index as a mark must be located in time earlier than the sign, so that the index presupposes, at least, the continuity of time and space without which an index might occur spontaneously and without any connection to a preceding action. Maybe surprisingly, in the Peircean doctrine, the index falls in two subtypes: designators vs. reagents. Reagents are the simplest – here the sign is caused by its object in one way or another. Designators, on the other hand, are more complex: the index finger as pointing to an object or the demonstrative pronoun as the subject of a proposition are prototypical examples. Here, the index presupposes an intention – the will to point out the object for some receiver. Designators, it must be argued, presuppose reagents: it is only possible to designate an object if you have already been in reagent contact (simulated or not) with it (this forming the rational kernel of causal reference theories of meaning). The closer determination of the object of an index, however, invariably involves selection on the background of continuities.

On the level of the symbol, continuity and generality play a main role – as always when approaching issues defined by Thirdness. The symbol is, in itself, a legisign, that is, it is a general object which exists only due to its actual instantiations. The symbol itself is a real and general recipe for the production of similar instantiations in the future. But apart from thus being a legisign, it is connected to its object thanks to a habit, or regularity. Sometimes this is taken to mean ‘due to a convention’ – in an attempt to distinguish conventional as opposed to motivated sign types. This, however, rests on a misunderstanding of Peirce’s doctrine, in which the trichotomies record aspects of signs, not mutually exclusive, independent classes of signs: symbols and icons do not form opposed, autonomous sign classes; rather, the content of the symbol is constructed from indices and general icons. The habit realized by a symbol connects it, as a legisign, to an object which is also general – an object which, just like the symbol itself, exists in instantiations, be they real or imagined. The symbol is thus a connection between two general objects, each of them being actualized through replicas, tokens – a connection between two continua, that is:

Definition 1. Any Blank is a symbol which could not be vaguer than it is (although it may be so connected with a definite symbol as to form with it, a part of another partially definite symbol), yet which has a purpose.

Axiom 1. It is the nature of every symbol to blank in part. […]

Definition 2. Any Sheet would be that element of an entire symbol which is the subject of whatever definiteness it may have, and any such element of an entire symbol would be a Sheet. (‘Sketch of Dichotomic Mathematics’, The New Elements of Mathematics Vol. 4: Mathematical Philosophy)

The symbol’s generality can be described as its always having blanks which are indefinite parts of its continuous sheet. Thus, the continuity of its blank parts is what grants its generality. The symbol determines its object according to some rule, provided the object satisfies that rule – but it leaves the object indeterminate in all other respects. It is tempting to take the typical symbol to be a word, but it should rather be taken to be the argument – the predicate and the proposition being degenerate versions of arguments with further continuous blanks inserted by erasure, so to speak, forming the third trichotomy of term, proposition, argument.

Infinite Sequences and Halting Problem. Thought of the Day 76.0


In attempting to extend the notion of depth from finite strings to infinite sequences, one encounters a familiar phenomenon: the definitions become sharper (e.g. recursively invariant), but their intuitive meaning is less clear, because of distinctions (e.g. between infinitely-often and almost-everywhere properties) that do not exist in the finite case.

An infinite sequence X is called strongly deep if at every significance level s, and for every recursive function f, all but finitely many initial segments Xn have depth exceeding f(n).

It is necessary to require the initial segments to be deep almost everywhere rather than infinitely often, because even the most trivial sequence has infinitely many deep initial segments Xn (viz. the segments whose lengths n are deep numbers).

It is not difficult to show that the property of strong depth is invariant under truth-table equivalence (this is the same as Turing equivalence in recursively bounded time, or via a total recursive operator), and that the same notion would result if the initial segments were required to be deep in the sense of receiving less than 2^{−s} of their algorithmic probability from f(n)-fast programs. The characteristic sequence of the halting set K is an example of a strongly deep sequence.

A weaker definition of depth, also invariant under truth-table equivalence, is perhaps more analogous to that adopted for finite strings:

An infinite sequence X is weakly deep if it is not computable in recursively bounded time from any algorithmically random infinite sequence.

Computability in recursively bounded time is equivalent to two other properties, viz. truth-table reducibility and reducibility via a total recursive operator.

By contrast to the situation with truth-table reducibility, Péter Gács has shown that every sequence is computable from (i.e. Turing reducible to) an algorithmically random sequence if no bound is imposed on the time. This is the infinite analog of the far more obvious fact that every finite string is computable from an algorithmically random string (e.g. its minimal program).

Every strongly deep sequence is weakly deep, but by intermittently padding K with large blocks of zeros, one can construct a weakly deep sequence with infinitely many shallow initial segments.
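The padding construction is easy to state as a transducer. Here is a minimal runnable sketch of it; since K itself is uncomputable, a computable stand-in generator is used in the demo, and the block lengths are an arbitrary illustrative choice:

```python
from itertools import count, islice

def pad_with_zero_blocks(base, block_len=lambda k: 4 ** k):
    """After the k-th digit of `base`, insert block_len(k) zeros, so that
    infinitely many initial segments of the output consist almost entirely
    of trivial padding (and are therefore shallow), while every digit of
    `base` is still embedded in the output."""
    for k, bit in enumerate(base):
        yield bit
        yield from (0 for _ in range(block_len(k)))

# Stand-in for the (uncomputable) characteristic sequence of K:
stand_in = (n % 2 for n in count())
print(list(islice(pad_with_zero_blocks(stand_in), 30)))
```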

Truth-table reducibility to an algorithmically random sequence is equivalent to the property studied by Levin et al. of being random with respect to some recursive measure. Levin calls sequences with this property “proper” or “complete” sequences, and views them as more realistic and interesting than other sequences because they are the typical outcomes of probabilistic or deterministic effective processes operating in recursively bounded time.

Weakly deep sequences arise with finite probability when a universal Turing machine (with one-way input and output tapes, so that it can act as a transducer of infinite sequences) is given an infinite coin toss sequence for input. These sequences are necessarily produced very slowly: the time to output the n’th digit being bounded by no recursive function, and the output sequence contains evidence of this slowness. Because they are produced with finite probability, such sequences can contain only finite information about the halting problem.

Reductionism of Numerical Complexity: A Wittgensteinian Excursion


Wittgenstein’s criticism of Russell’s logicist foundation of mathematics, contained in his Remarks on the Foundations of Mathematics, consists in saying that it is not the formalized version of mathematical deduction which vouches for the validity of the intuitive version, but conversely.

If someone tries to shew that mathematics is not logic, what is he trying to shew? He is surely trying to say something like: If tables, chairs, cupboards, etc. are swathed in enough paper, certainly they will look spherical in the end.

He is not trying to shew that it is impossible that, for every mathematical proof, a Russellian proof can be constructed which (somehow) ‘corresponds’ to it, but rather that the acceptance of such a correspondence does not lean on logic.

Taking up Wittgenstein’s criticism, Hao Wang (Computation, Logic, Philosophy) discusses the view that mathematics “is” axiomatic set theory as one of several possible answers to the question “What is mathematics?”. Wang points out that this view is epistemologically worthless, at least as far as the task of understanding the features that guide mathematical cognition is concerned:

Mathematics is axiomatic set theory. In a definite sense, all mathematics can be derived from axiomatic set theory. [ . . . ] There are several objections to this identification. [ . . . ] This view leaves unexplained why, of all the possible consequences of set theory, we select only those which happen to be our mathematics today, and why certain mathematical concepts are more interesting than others. It does not help to give us an intuitive grasp of mathematics such as that possessed by a powerful mathematician. By burying, e.g., the individuality of natural numbers, it seeks to explain the more basic and the clearer by the more obscure. It is a little analogous to asserting that all physical objects, such as tables, chairs, etc., are spherical if we swathe them with enough stuff.

Reductionism is an age-old project; a close forerunner of its incarnation in set theory was the arithmetization program of the 19th century. It is interesting that one of its prominent representatives, Richard Dedekind (Essays on the Theory of Numbers), exhibited a quite distanced attitude towards a thoroughgoing execution of the program:

It appears as something self-evident and not new that every theorem of algebra and higher analysis, no matter how remote, can be expressed as a theorem about natural numbers [ . . . ] But I see nothing meritorious [ . . . ] in actually performing this wearisome circumlocution and insisting on the use and recognition of no other than rational numbers.

Perec wrote a detective novel without using the letter ‘e’ (La disparition; in English, A Void), thus proving not only that such an enormous enterprise is indeed possible but also that formal constraints sometimes have great aesthetic appeal. The translation of mathematical propositions into a poorer linguistic framework can easily be compared with such painful lipogrammatical exercises. In principle, all logical connectives can be simulated in a framework exclusively using Sheffer’s stroke, and all cuts (in Gentzen’s sense) can be eliminated; one can do entirely without common language in mathematics, formalize everything, and so on: in principle, one could leave out a whole lot of things. However, in doing so one would depart from the true way of thinking employed by the mathematician (who really uses “and” and “not” and cuts, and who does not reduce many things to formal systems). Obviously, it is the proof theorist as a working mathematician who is interested in things like the reduction to Sheffer’s stroke, since they allow for more concise proofs by induction in the analysis of a logical calculus. Hence this proof theorist has much the same motives as a mathematician working on other problems who avoids a completely formalized treatment of these problems, since he is not interested in the proof-theoretical aspect.
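To make the in-principle reducibility concrete, here is a small sketch (the translations are the standard ones; the function names are ours) of every basic connective simulated by Sheffer’s stroke alone:

```python
def stroke(p, q):          # p | q : "not both p and q" (NAND)
    return not (p and q)

def NOT(p):                # ¬p      =  p | p
    return stroke(p, p)

def AND(p, q):             # p ∧ q   =  (p | q) | (p | q)
    return stroke(stroke(p, q), stroke(p, q))

def OR(p, q):              # p ∨ q   =  (p | p) | (q | q)
    return stroke(stroke(p, p), stroke(q, q))

def IMPLIES(p, q):         # p → q   =  p | (q | q)
    return stroke(p, stroke(q, q))

# Exhaustive check against the usual connectives:
for p in (False, True):
    for q in (False, True):
        assert NOT(p) == (not p)
        assert AND(p, q) == (p and q)
        assert OR(p, q) == (p or q)
        assert IMPLIES(p, q) == ((not p) or q)
```

Note how each translated formula is longer and less perspicuous than the connective it replaces – exactly the trade-off taken up below.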

There might be quite similar reasons for the interest of some set theorists in expressing usual mathematical constructions exclusively with the expressive means of ZF (i.e., in terms of ∈). But beyond this, is there any philosophical interpretation of such a reduction? In the last analysis, mathematicians always transform (and that means: change) their objects of study in order to make them accessible to certain mathematical treatments. If one considers a mathematical concept as a tool, one not only uses it in a way different from the one in which it would be used if it were considered as an object; moreover, its semiotical representation is given a different form in the two cases. In this sense, the proof theorist has to “change” the mathematical proof (which is his or her object of study, to be treated with mathematical tools). When stating that something is used as object or as tool, we always have to ask: in which situation, and by whom?

A second observation is that the translation of propositional formulæ in terms of Sheffer’s stroke in general yields quite complicated new formulæ. What is “simple” here is the particularly small number of symbols needed; but neither does the semantics become clearer (p|q means “not both p and q”; cognitively, this looks more complex than “p and q” and so on), nor are the formulæ one gets “short”. What is sought in this case, then, is a reduction of numerical complexity, while the primitive basis attained by the reduction cognitively looks less “natural” than the original situation (or, as Peirce expressed it, “the consciousness in the determined cognition is more lively than in the cognition which determines it”); similarly in the case of cut elimination. In contrast to this, many philosophers are convinced that the primitive basis of operating with sets really constitutes a “natural” basis of mathematical thinking, i.e., such operations are seen as the “standard bricks” of which this thinking is actually made – while no one will reasonably claim that expressions of the type p|q play a similar role for propositional logic. And yet reduction to set theory does not really have the task of “explanation”. True, one thus reduces propositions about “complex” objects to propositions about “simple” objects; the propositions themselves, however, thereby become in general more complex. Couched in Fregean terms, one can perhaps more easily grasp their denotation (since the denotation of a proposition is its truth value) but not their meaning. A more involved conceptual framework, however, might lead to simpler propositions (and in most cases has actually been introduced just in order to do so). A parallel argument concerns deductions: in its totality, a deduction becomes more complex (and less intelligible) through a decomposition into elementary steps.

Now, it will be subject to discussion whether in the case of some set operations it is admissible at all to claim that they are basic for thinking (which is certainly true in the case of the connectives of propositional logic). It is perfectly possible that the common sense which organizes the acceptance of certain operations as a natural basis relies on something different, not having the character of some eternal laws of thought: it relies on training.

Is it possible to observe that a surface is coloured red and blue; and not to observe that it is red? Imagine a kind of colour adjective were used for things that are half red and half blue: they are said to be ‘bu’. Now might not someone be trained to observe whether something is bu; and not to observe whether it is also red? Such a man would then only know how to report: “bu” or “not bu”. And from the first report we could draw the conclusion that the thing was partly red.

The Political: NRx, Neoreactionism Archived.

This one is eclectic and for the record.


The techno-commercialists appear to have largely arrived at neoreaction via right-wing libertarianism. They are defiant free marketeers, sharing with other ultra-capitalists such as Randian Objectivists a preoccupation with “efficiency,” a blind trust in the power of the free market, private property, globalism and the onward march of technology. However, they are also believers in the ideal of small states, free movement and absolute or feudal monarchies with no form of democracy. The idea of “exit,” predominantly a techno-commercialist viewpoint but found among other neoreactionaries too, essentially comes down to the idea that people should be able to freely exit their native country if they are unsatisfied with its governance – essentially an application of market economics and consumer action to statehood. Indeed, countries are often described in corporate terms, with the King being the CEO and the aristocracy the shareholders.

The “theonomists” place more emphasis on the religious dimension of neoreaction. They emphasise tradition, divine law, religion (rather than race) as the defining characteristic of “tribes” of peoples, and traditional, patriarchal families. They are the closest group in terms of ideology to “classical” or, if you will, “palaeo-reactionaries” such as the High Tories, the Carlists and French Ultra-royalists; often Catholic and often ultramontanist. Finally, there’s the “ethnicist” lot, who believe in racial segregation and have developed a new form of racial ideology called “Human Biodiversity” (HBD) which says people of African heritage are naturally less intelligent than people of Caucasian and east Asian heritage. Of course, the scientific community considers the idea that there are any genetic differences between human races beyond melanin levels in the skin and other cosmetic factors to be utterly false, but presumably this is because they are controlled by “The Cathedral.” They like “tribal solidarity,” tribes being defined by shared ethnicity, and distrust outsiders.


Overlap between these groups is considerable, but there are also vast differences not just between them but within them. What binds them together is common opposition to “The Cathedral” and to “progressive” ideology. Some of their criticisms of democracy and modern society are well-founded, and some of them make good points in defence of the monarchical system. However, I don’t much like them, and I doubt they’d much like me.

Whereas neoreactionaries are keen on the free market and praise capitalism, unregulated capitalism is something I am wary of. Capitalism saw the collapse of traditional monarchies in Europe in the 19th century, and the first revolutions were by capitalists seeking to establish democratic, capitalist republics where the bourgeoisie replaced the aristocratic elite as the ruling class – setting an example revolutionary socialists would later follow. Capitalism, when unregulated, leads to monopolies, exploitation of the working class, unsustainable practices in pursuit of increased short-term profits, globalisation and materialism. Personally, I prefer distributist economics, which embrace private property rights but emphasise widespread ownership of wealth, with small partnerships and cooperatives replacing private corporations as the basic units of the nation’s economy. And although I am critical of democracy, the idea that any form of elected representation for the lower classes is anathema is not consistent with my viewpoint; my ideal government would be not absolute or feudal monarchy, but executive constitutional monarchy with a strong monarch exercising executive powers and the legislative role being at least partially controlled by an elected parliament – more like the Bourbon Restoration than the Ancien Régime, though I occasionally say “Vive l’Ancien Régime!” on forums or in comments to annoy progressive types. Finally, I don’t believe in racialism in any form. I tend to attribute preoccupations with racial superiority to deep insecurity which people find the need to suppress by convincing themselves that they are “racially superior” to others, in the absence of any actual talent or especial ability to take pride in. The 20th century has shown us where dividing people up based on their genetics leads, and it is not somewhere I care to return to.

I do think it is important to go into why Reactionaries think Cthulhu always swims left, because without that they’re vulnerable to the charge that they have no a priori reason to expect our society to have the biases it does, and then the whole meta-suspicion of the modern Inquisition doesn’t work or at least doesn’t work in that particular direction. Unfortunately (for this theory) I don’t think their explanation is all that great (though this deserves substantive treatment) and we should revert to a strong materialist prior, but of course I would say that, wouldn’t I.

And of course you could get locked up for wanting fifty Stalins! Just try saying how great Enver Hoxha was at certain places and times. Of course saying you want fifty Stalins is not actually advocating that Stalinism become more like itself – as Leibniz pointed out, a neat way of telling whether something is something is checking whether it is exactly like that thing, and nothing could possibly be more like Stalinism than Stalinism. Of course fifty Stalins is further in the direction that one Stalin is from our implied default of zero Stalins. But then from an implied default of 1.3 kSt it’s a plea for moderation among hypostalinist extremists. As Mayberry Mobmuck himself says, “sovereign is he who determines the null hypothesis.”

Speaking of Stalinism, I think it does provide plenty of evidence that policy can do wonderful things for life expectancy and so on, and I mean that in a totally unironic “hail glorious comrade Stalin!” way, not in a “ha ha Stalin sure did kill a lot of people” way. But this is a super-unintuitive claim to most people today, so I’ll try to get around to summarizing the evidence at some point.

‘Neath an eyeless sky, the inkblack sea
Moves softly, utters not save a quiet sound
A lapping-sound, not saying what may be
The reach of its voice a furthest bound;
And beyond it, nothing, nothing known
Though the wind the boat has gently blown
Unsteady on shifting and traceless ground
And quickly away from it has flown.

Allow us a map, and a lamp electric
That by instrument we may probe the dark
Unheard sounds and an unseen metric
Keep alive in us that unknown spark
To burn bright and not consume or mar
Has the unbounded one come yet so far
For night over night the days to mark
His journey — adrift, without a star?

Chaos is the substrate, and the unseen action (or non-action) against disorder, the interloper. Disorder is a mere ‘messing up of order’. Chaos is substantial where disorder is insubstantial. Chaos is the ‘quintessence’ of things, chaotic itself and yet always-begetting order. It breaks down disorder, since disorder is maladaptive. Exit is a way to induce bifurcation, to quickly reduce entropy through separation from the highly entropic system. If no immediate exit is available, Chaos will create one.

The Occultic Brotherhood


Millions upon millions of years ago in the darkness of prehistory, humanity was an infant, a child of Mother Nature, unawakened, dreamlike, wrapped in the cloak of mental somnolence. Recognition of egoity slept; instinctual consciousness alone was active. Like a stream of brilliance across the horizon of time, divine beings, manasaputras, sons of mind, descended among the sleeping humans, and with the flame of intellectual solar fire lighted the wick of latent mind, and lo! the thinker stirred. Self-consciousness wakened, and man became a dynamo of intellectual and emotional power: capable of love, of hate, of glory, of defeat. Having knowledge, he acquired power; acquiring power, he chose; choosing, he fashioned the fabric of his future; and the perception of this ran like wine through his veins.

Knowledge, more knowledge, and still greater knowledge was required by the maturing humans who looked with gratitude to the godlike beings who had come to awaken them. For many millennia they followed their guidance, as children lovingly follow the footsteps of their mother.

As the ages rolled by, a circulation of divine instructors succeeded these primeval manasaputras and personally supervised the progress of child-humanity: they initiated them in the arts and sciences, taught them to sow their fields with corn and wheat, instructed them in the ways of clean and moral living — in short, established primeval schools of training and instruction open and free to all to learn of things material, intellectual, and spiritual. At this early period there were no Mystery colleges: the ancient wisdom was the common heirloom of all mankind, for as yet there had been no abuse of knowledge, and hence no need for schools kept hid and sacred from the world. Truth was freely given and as freely accepted in that golden age. (H. P. Blavatsky Collected Writings)

The race was young; not all were adept in learning. Some through past experience in former world periods learned quickly and with ease, choosing intuitively the path of spiritual intellection; others, less awake, were good though wayward in progress; while a third class of humans, drugged with inertia, found learning and aspiring a burden and became laggards in the evolutionary procession. To them, spiritual apathy was preferable to spiritual exertion.

Mankind as a whole progressed rapidly in the acquisition of knowledge and its subsequent use. Some obviously wrought evil — others good. What had been latent spirituality now became active good and active evil. Suffering and pain became nature’s most merciful method of restoring the heart to its primeval instinct, that of spiritual choice. As mind developed keener potentialities and the struggle for mental supremacy overcame the spiritual, the gift of intellect became a double-edged weapon: on the one hand, the bringer of spiritual awareness and undreamed of intellectual ecstasy; and on the other, the wielder of a weapon of destruction, of horror and, in the worst cases, of deliberate spiritual wickedness — diabolism. As H. P. Blavatsky wrote:

The mysteries of Heaven and Earth, revealed to the Third Race by their celestial teachers in the days of their purity, became a great focus of light, the rays from which became necessarily weakened as they were diffused and shed upon an uncongenial, because too material soil. With the masses they degenerated into Sorcery, taking later on the shape of exoteric religions, of idolatry full of superstitions, . . . — The Secret Doctrine  

Nature is cyclical throughout: at one time fertile in spiritual things, at another barren. At this long-ago period of the third root-race, on the great continent of Lemuria, now submerged, the cycle was against spiritual progress. A great downward sweep was in force, when expansion of physical and material energies was accelerated with the consequent retardation and contraction of spiritual power. The humanities of that period were part of the general evolutionary current, and individuals reacted to the coarsening atmosphere according to their nature. Some resisted its downward influence through awakened spirituality; others, weaker in understanding, vacillated between spirit and matter, between good and evil: sometimes listening to the promptings of intuition, at other times submerged by the rushing waves of the downward current. Still others, in whom the spark of intellectual splendor burned low, plunged headlong downstream, unmindful of the turbulent and muddy waters.

As the downward cycle proceeded, knowledge of spiritual verities and living of the life in accordance with them became a dull and useless tool in human hearts and minds. Such folly was inevitable in the course of cosmic events, and all things were provided for. Just as there are many types of people — some spiritual, others material, some highly intelligent, others slow of thought — so are there various grades of beings throughout the universe, ranging from the mineral, through the vegetable, animal, and human kingdom, and beyond to the head and hierarch of our earth.

During these first millennia the spiritual head and guardian of the earth had been stimulating wherever possible the individual fires of active spirituality. Gradually, as knowledge of divine things became abused by those strong in will but weak in morality, truth was increasingly veiled. The planetary watcher now felt the need of selecting a band of co-workers to act as bodyguard and protector of the ancient wisdom. Only a handful of spiritually illumined human beings, in whom the divine fervor burned bright, acknowledged wholehearted allegiance to their planetary mentor — the spiritual hierarch of humanity. Through long ages certain individuals had been watched over and guided, strengthened and tested in innumerable ways, and those who passed the test of self-knowledge and self-sacrifice were gathered together to form the first association of spiritual-divine human beings — the Great Brotherhood. As G. de Purucker elaborates:

Then was formed or established or set in operation the gathering together of the very highest representatives, spiritually and intellectually speaking, that the human race as yet had given manifestation to; . . .

. . . the Silent Watcher of the Globe, through the spiritual-magnetic attraction of like to like, was enabled to attract to the Path of Light, even from the earliest times of the Third Root-Race, certain unusual human individuals, early forerunners of the general Manasaputric “descent,” and thus to form with these individuals a Focus of Spiritual and Intellectual Light on Earth, this fact signifying not so much an association or society or brotherhood as a unity of human spiritual and intellectual Flames, so to speak, which then represented on Earth the heart of the Hierarchy of Compassion. . . .

Now it was just this original focus of Living Flames, which never degenerated nor lost its high status of the mystic center on Earth through which poured the supernal glory of the Hierarchy of Compassion, today represented by the Great Brotherhood of the Mahatmans, . . . Thus it is that the Great Brotherhood traces an unbroken and uninterrupted ancestry back to the original focus of Light of the Third Root-Race. — The Esoteric Tradition  

Hence the elder brothers of the race remain

“the elect custodians of the Mysteries revealed to mankind by the divine Teachers . . . and tradition whispers, what the secret teachings affirm, namely, that these Elect were the germ of a Hierarchy which never died since that period” (Secret Doctrine)

— since the foundation and establishment of the Great Brotherhood some 12 million years ago. From this center for millions of years have been streaming in continuous procession rays of light and strength into the world at large and, more specifically, into the hearts of those whose lives are dedicated to the service of truth. From this Fraternity have gone forth messengers, masters of wisdom, to inspire the grand religions of the past, and they will continue to send forth their envoys as long as mankind requires their care.

Single Asset Optimal Investment Fraction


We first consider a situation where an investor can spend a fraction of his capital to buy shares of just one risky asset. The rest of his money he keeps in cash.

Generalizing Kelly, we consider the following simple strategy of the investor: he regularly checks the asset’s current price p(t), and sells or buys some asset shares in order to keep the current market value of his asset holdings at a pre-selected fraction r of his total capital. These readjustments are made periodically at a fixed interval, which we refer to as the readjustment interval, and which we select as the discrete unit of time. In this work the readjustment time interval is selected once and for all, and we do not attempt optimization of its length.

We also assume that on the time-scale of this readjustment interval the asset price p(t) undergoes a geometric Brownian motion:

p(t + 1) = e^{η(t)} p(t) —– (1)

i.e. at each time step the random number η(t) is drawn from some probability distribution π(η), independently of its values at previous time steps. This exponential notation is particularly convenient for working with multiplicative noise, keeping the necessary algebra to a minimum. Under these rules of dynamics the logarithm of the asset’s price, ln p(t), performs a random walk with an average drift v = ⟨η⟩ and a dispersion D = ⟨η²⟩ − ⟨η⟩².

It is easy to derive the time evolution of the total capital W(t) of an investor, following the above strategy:

W(t + 1) = (1 − r)W(t) + rW(t)e^{η(t)} —– (2)

Let us assume that the value of the investor’s capital at t = 0 is W(0) = 1. The evolution of the expectation value of the total capital ⟨W(t)⟩ after t time steps is obviously given by the recursion ⟨W(t + 1)⟩ = (1 − r + r⟨e^η⟩)⟨W(t)⟩. When ⟨e^η⟩ > 1, it seems at first thought that the investor should invest all his money in the risky asset. Then the expectation value of his capital would enjoy exponential growth at the fastest possible rate. However, it would be totally unreasonable to expect that in a typical realization of price fluctuations the investor would be able to attain the average growth rate determined as v_avg = d ln⟨W(t)⟩/dt. This is because the main contribution to the expectation value ⟨W(t)⟩ comes from exponentially unlikely outcomes, when the price of the asset after a long series of favorable events with η > ⟨η⟩ becomes exponentially big. Such outcomes lie well beyond reasonable fluctuations of W(t), determined by the standard deviation √(Dt) of ln W(t) around its average value ⟨ln W(t)⟩ = ⟨η⟩t. For the investor who deals with just one realization of the multiplicative process it is better not to rely on such unlikely events, and to maximize his gain in a typical outcome of the process. To quantify the intuitively clear concept of a typical value of a random variable x, we define x_typ as the median of its distribution, i.e. x_typ has the property that Prob(x > x_typ) = Prob(x < x_typ) = 1/2. In the multiplicative process (2) with r = 1, W(t + 1) = e^{η(t)}W(t), one can show that W_typ(t) – the typical value of W(t) – grows exponentially in time, W_typ(t) = e^{⟨η⟩t}, at a rate v_typ = ⟨η⟩, while the expectation value ⟨W(t)⟩ also grows exponentially, as ⟨W(t)⟩ = ⟨e^η⟩^t, but at a faster rate given by v_avg = ln⟨e^η⟩. Notice that ⟨ln W(t)⟩ always grows at the typical growth rate, since those very rare outcomes when W(t) is exponentially big do not make a significant contribution to this average.
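A quick Monte Carlo run makes the gap between the average and the typical growth rate tangible. This is only an illustrative sketch: the Gaussian choice of π(η) and all parameter values are ours, not part of the original treatment:

```python
import numpy as np

# Simulate eq. (2) with r = 1: W(t+1) = W(t) * exp(eta(t)), W(0) = 1.
rng = np.random.default_rng(0)
v, sqrtD, T, N = 0.05, 0.2, 50, 100_000     # drift, sqrt(dispersion), steps, runs
eta = rng.normal(loc=v, scale=sqrtD, size=(N, T))
W_final = np.exp(eta.sum(axis=1))           # W(T) for each realization

# The mean is dominated by rare lucky runs; the median tracks the typical outcome.
print("average rate:", np.log(W_final.mean()) / T)      # -> ln<e^eta> = v + D/2 = 0.07
print("typical rate:", np.log(np.median(W_final)) / T)  # -> <eta> = v = 0.05
```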

The question we are going to address is: which investment fraction r provides the investor with the best typical growth rate v_typ of his capital? Kelly answered this question for a particular realization of the multiplicative stochastic process, in which the capital is multiplied by 2 with probability q > 1/2, and by 0 with probability p = 1 − q. This case is realized in a gambling game where betting on the right outcome pays 2:1, while you know the right outcome with probability q > 1/2. In our notation this case corresponds to η being equal to ln 2 with probability q and to −∞ otherwise. The player’s capital in Kelly’s model with r = 1 enjoys growth of its expectation value ⟨W(t)⟩ at a rate v_avg = ln(2q) > 0. In this case it is, however, particularly clear that one should not use maximization of the expectation value of the capital as the optimum criterion. If the player indeed bets all of his capital at every time step, sooner or later he will lose everything and will not be able to continue to play. In other words, r = 1 corresponds to the worst typical growth of the capital: asymptotically the player will be bankrupt with probability 1. In this example it is also very transparent where the positive average growth rate comes from: after T rounds of the game, in the very unlikely event (Prob = q^T) that the capital was multiplied by 2 at all times (the gambler guessed right every time!), the capital is equal to 2^T. This exponentially large value of the capital outweighs the exponentially small probability of this event, and gives rise to an exponentially growing average – which would offer little consolation to a gambler who has lost everything.

We generalize Kelly’s arguments to an arbitrary distribution π(η). As we will see, this generalization reveals some hidden results not realized in Kelly’s “betting” game. As we learned above, the growth of the typical value of W(t) is given by the drift of ⟨ln W(t)⟩ = v_typ t, which in our case can be written as

v_typ(r) = ∫ dη π(η) ln(1 + r(e^η − 1)) —– (3)

One can check that v_typ(0) = 0, since in this case the whole capital is in the form of cash and does not change in time. In the other limit one has v_typ(1) = ⟨η⟩, since in this case the whole capital is invested in the asset and enjoys its typical growth rate (⟨η⟩ = −∞ for Kelly’s case). Can one do better by selecting 0 < r < 1? To find the maximum of v_typ(r) one differentiates (3) with respect to r and looks for a solution of the resulting equation 0 = v′_typ(r) = ∫ dη π(η) (e^η − 1)/(1 + r(e^η − 1)) in the interval 0 ≤ r ≤ 1. If such a solution exists, it is unique, since v′′_typ(r) = − ∫ dη π(η) (e^η − 1)²/(1 + r(e^η − 1))² < 0 everywhere. The values of v′_typ(r) at 0 and 1 are given by v′_typ(0) = ⟨e^η⟩ − 1 and v′_typ(1) = 1 − ⟨e^{−η}⟩. One has to consider three possibilities:

(1) ⟨e^η⟩ < 1. In this case v′_typ(r) < 0 on the whole interval, so the maximum of v_typ(r) is realized at r = 0 and is equal to 0. In other words, one should never invest in an asset with negative average return per capital ⟨e^η⟩ − 1 < 0.

(2) ⟨e^η⟩ > 1, and ⟨e^{−η}⟩ > 1. In this case v′_typ(0) > 0, but v′_typ(1) < 0, and the maximum of v_typ(r) is realized at some 0 < r < 1, which is the unique solution of v′_typ(r) = 0. The typical growth rate in this case is always positive (because one could always have selected r = 0 to make it zero), but not as big as the average rate ln⟨e^η⟩, which serves as an unattainable ideal limit. An intuitive understanding of why one should select r < 1 in this case comes from the following observation: the condition ⟨e^{−η}⟩ > 1 makes ⟨1/p(t)⟩ grow exponentially in time. Such exponential growth indicates that outcomes with very small p(t) are feasible and give the dominant contribution to ⟨1/p(t)⟩. This is an indicator that the asset price is unstable, and one should not trust one’s whole capital to such a risky investment.

(3) ⟨e^η⟩ > 1, and ⟨e^{−η}⟩ < 1. This is a safe asset and one can invest one’s whole capital in it. The maximum of v_typ(r) is achieved at r = 1 and is equal to v_typ(1) = ⟨η⟩. A simple example of this type of asset is one in which the price p(t) with equal probabilities is multiplied by 2 or by a = 2/3. As one can see, this is a marginal case, in which ⟨1/p(t)⟩ = const. For a < 2/3 one should invest only a fraction r < 1 of one’s capital in the asset, while for a ≥ 2/3 the whole sum can be trusted to it. The specialty of the case with a = 2/3 cannot be guessed by just looking at the typical and average growth rates of the asset! One has to go and calculate ⟨e^{−η}⟩ to check whether ⟨1/p(t)⟩ diverges. This “reliable” type of asset is a new feature of the model with a general π(η). It is never realized in Kelly’s original model, which always has ⟨η⟩ = −∞, so that it never makes sense to gamble the whole capital every time.

An interesting and somewhat counterintuitive consequence of the above results is that under certain conditions one can make one’s capital grow by investing in an asset with a negative typical growth rate ⟨η⟩ < 0. Such an asset certainly loses value, and its typical price experiences an exponential decay. Any investor bold enough to trust his whole capital to such an asset loses money at the same rate. But as long as the fluctuations are strong enough to maintain a positive average return per capital (⟨e^η⟩ − 1 > 0), one can keep a certain fraction of one’s total capital invested in this asset and almost certainly make money! A simple example of such a mind-boggling situation is given by a random multiplicative process in which the price of the asset with equal probabilities is doubled (goes up by 100%) or divided by 3 (goes down by 66.7%). The typical price of this asset drifts down by 18% each time step: indeed, after T time steps one could reasonably expect the price of this asset to be p_typ(T) = 2^{T/2} 3^{−T/2} = (√(2/3))^T ≃ 0.82^T. On the other hand, the average ⟨p(t)⟩ enjoys 17% growth: ⟨p(t + 1)⟩ = (7/6)⟨p(t)⟩ ≃ 1.17⟨p(t)⟩. As one can easily see, the optimum of the typical growth rate is achieved by maintaining a fraction r = 1/4 of the capital invested in this asset. The typical growth factor per time step is then a meager √(25/24) ≃ 1.02, meaning that in the long run one almost certainly gets a 2% return per time step – but that is certainly better than losing 18% by investing the whole capital in this asset.
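Both the general optimum condition and this specific example are easy to check numerically. A sketch under our own naming conventions (eq. (3) specialized to a discrete π(η); the scipy routine is standard):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def v_typ(r, etas, probs):
    """Typical growth rate, eq. (3), for a discrete distribution pi(eta)."""
    return float(np.sum(probs * np.log(1.0 + r * (np.exp(etas) - 1.0))))

def optimal_fraction(etas, probs):
    """Maximize v_typ over the investment fraction r in [0, 1]."""
    res = minimize_scalar(lambda r: -v_typ(r, etas, probs),
                          bounds=(0.0, 1.0), method="bounded")
    return res.x, -res.fun

# The double-or-divide-by-3 example: eta = ln 2 or ln(1/3), equal weights.
etas = np.log([2.0, 1.0 / 3.0])
probs = np.array([0.5, 0.5])
r_star, v_star = optimal_fraction(etas, probs)
print(r_star, np.exp(v_star))   # ≈ 0.25 and ≈ 1.0206, i.e. sqrt(25/24)
```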

Of course, the properties of a typical realization of a random multiplicative process are not fully characterized by the drift v_typ(r)t of the center of mass of P(h, t), where h(t) = ln W(t) is the logarithm of the wealth of the investor. Indeed, asymptotically P(h, t) has a Gaussian shape, P(h, t) = 1/√(2πD(r)t) · exp(−(h − v_typ(r)t)²/(2D(r)t)), where v_typ(r) is given by eq. (3). One needs to know the dispersion D(r) to estimate √(D(r)t), which is the magnitude of characteristic deviations of h(t) away from its typical value h_typ(t) = v_typ t. At the infinite time horizon t → ∞, the process with the biggest v_typ(r) will certainly be preferable over any other process. This is because the separation between typical values of h(t) for two different investment fractions r grows linearly in time, while the span of typical fluctuations grows only as √t. However, at a finite time horizon the investor should take into account both v_typ(r) and D(r) and decide what he prefers: moderate growth with small fluctuations or faster growth with still bigger fluctuations. To quantify this decision one needs to introduce an investor’s “utility function”, which we will not attempt in this work. The most conservative players are advised to always keep their capital in cash, since with any other arrangement the fluctuations will certainly be bigger. As a rule, one can show that the dispersion D(r) = ∫ dη π(η) ln²[1 + r(e^η − 1)] − v_typ²(r) monotonically increases with r. Therefore, among two solutions with equal v_typ(r) one should always select the one with the smaller r, since it would guarantee smaller fluctuations. Here it is more convenient to switch to the standard notation. It is customary to use the random variable

Λ(t) = (p(t+1) − p(t))/p(t) = e^{η(t)} − 1 —– (4)

which is referred to as the return per unit capital of the asset. The properties of a random multiplicative process are expressed in terms of the average return per capital α = ⟨Λ⟩ = ⟨e^η⟩ − 1, and the volatility (standard deviation) of the return per capital σ = √(⟨Λ²⟩ − ⟨Λ⟩²). In our notation, α = ⟨e^η⟩ − 1 is determined by the average, not the typical, growth rate of the process. For η ≪ 1, α ≃ v + D/2 + v²/2, while the volatility σ is related to D (the dispersion of η) through σ ≃ √D.
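For small η, the stated relation between α, v and D is just the second-order expansion of the exponential inside the average; a quick check:

```latex
\alpha = \langle e^{\eta}\rangle - 1
       = \big\langle 1 + \eta + \tfrac{\eta^{2}}{2} + O(\eta^{3}) \big\rangle - 1
       \simeq \langle\eta\rangle + \tfrac{\langle\eta^{2}\rangle}{2}
       = v + \tfrac{D + v^{2}}{2},
```

using ⟨η²⟩ = D + ⟨η⟩², which reproduces α ≃ v + D/2 + v²/2.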


Diffeomorphism Invariance: General Relativity Spacetime Points Cannot Possess Haecceity.


Eliminative or radical ontic structural realism (ROSR) offers a radical cure—appropriate given its name—to what it perceives to be the ailing of traditional, object-based realist interpretations of fundamental theories in physics: rid their ontologies entirely of objects. The world does not, according to this view, consist of fundamental objects, which may or may not be individuals with a well-defined intrinsic identity, but instead of physical structures that are purely relational in the sense of networks of ‘free-standing’ physical relations without relata.

Advocates of ROSR have taken at least three distinct issues in fundamental physics to support their case. First, the quantum statistical features of an ensemble of elementary quantum particles of the same kind, and second, the features of entangled elementary quantum (field) systems as illustrated in the violation of Bell-type inequalities, challenge the standard understanding of the identity and individuality of fundamental physical objects: considered on its own, an elementary quantum particle belonging to the above-mentioned ensemble, or an entangled elementary quantum system (that is, an elementary quantum system standing in a quantum entanglement relation), cannot be said to satisfy genuine and empirically meaningful identity conditions. Third, it has been argued that one of the consequences of the diffeomorphism invariance and background independence found in general relativity (GTR) is that spacetime points should not be considered as traditional objects possessing some haecceity, i.e. some identity of their own.

The trouble with ROSR is that its main assertion appears squarely incoherent: insofar as relations can be exemplified, they can only be exemplified by some relata. Given this conceptual dependence of relations upon relata, any contention that relations can exist floating freely from some objects that stand in those relations seems incoherent. If we accept an ontological commitment e.g. to universals, we may well be able to affirm that relations exist independently of relata – as abstracta in a Platonic heaven. The trouble is that ROSR is supposed to be a form of scientific realism, and as such committed to asserting that at least certain elements of the relevant theories of fundamental physics faithfully capture elements of physical reality. Thus, a defender of ROSR must claim that, fundamentally, relations-sans-relata are exemplified in the physical world, and that contravenes both the intuitive and the usual technical conceptualization of relations.

The usual extensional understanding of n-ary relations just equates them with subsets of the n-fold Cartesian product of the set of elementary objects assumed to figure in the relevant ontology over which the relation is defined. This extensional, ultimately set-theoretic, conceptualization of relations pervades philosophy and operates in the background of fundamental physical theories as they are usually formulated, as well as their philosophical appraisal in the structuralist literature. The charge then is that the fundamental physical structures that are represented in the fundamental physical theories are just not of the ‘object-free’ type suggested by ROSR.
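To see how thoroughly this extensional conception presupposes relata, consider a trivial sketch (the domain and relation are ours, purely for illustration): the relation simply is a set of ordered tuples, and cannot even be written down before a domain of objects is specified.

```python
from itertools import product

domain = {"a", "b", "c"}                      # the elementary objects (relata)
R = {("a", "b"), ("a", "c"), ("b", "c")}      # a binary relation, extensionally

# By definition, R is a subset of the 2-fold Cartesian product of the domain:
assert R <= set(product(domain, repeat=2))
```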

While ROSR should not be held to the conceptual standards dictated by the metaphysical prejudices it denies, giving up the set-theoretical framework and the ineliminable reference to objects and relata attending its characterizations of relations and structure requires an alternative conceptualization of these notions so central to the position. This alternative conceptualization remains necessary even in the light of ‘metaphysics first’ complaints, which insist that ROSR’s problem must be confronted, first and foremost, at the metaphysical level, and that the question of how to represent structure in our language and in our theories only arises in the wake of a coherent metaphysical solution. But the radical may do as much metaphysics as she likes, articulate her theory and her realist commitments she must, and in order to do that, a coherent conceptualization of what it is to have free-floating relations exemplified in the physical world is necessary.

ROSR thus confronts a dilemma: either soften to a more moderate structural realist position or else develop the requisite alternative conceptualizations of relations and of structures and apply them to fundamental physical theories. A number of structural realists have grasped the first horn and proposed less radical and non-eliminative versions of ontic structural realism (OSR). These moderate cousins of ROSR aim to take seriously the difficulties of the traditional metaphysics of objects for understanding fundamental physics while avoiding the major objections against ROSR by keeping some thin notion of object. The picture typically offered is that of a balance between relations and their relata, coupled with an insistence that these relata do not possess their identity intrinsically, but only by virtue of occupying a relational position in a structural complex. Because it strikes this ontological balance, we term this moderate version of OSR ‘balanced ontic structural realism’ (BOSR).

But holding their ground may reward the ROSRer with certain advantages over its moderate competitors. First, were the complete elimination of relata to succeed, then structural realism would not confront any of the known headaches concerning the identity of these objects or, relatedly, the status of the Principle of the Identity of Indiscernibles. To be sure, this embarrassment can arguably be avoided by other moves; but eliminating objects altogether simply obliterates any concerns whether two objects are one and the same. Secondly, and speculatively, alternative formulations of our fundamental physical theories may shed light on a path toward a quantum theory of gravity.

For these presumed advantages to come to bear, however, a precise formulation of the notion of ‘free-standing’ (or ‘object-free’) structure, in the sense of a network of relations without relata (without objects), must be achieved. Jonathan Bain has argued that category theory provides the appropriate mathematical framework for ROSR, allowing for an ‘object-free’ notion of relation, and hence of structure. This argument can only succeed, however, if the category-theoretical formulation of (some of) the fundamental physical theories has some physical salience that the set-theoretical formulation lacks, or proves to be preferable qua formulation of a physical theory in some other way.

F. A. Muller has argued that neither set theory nor category theory provides the tools necessary to clarify the “Central Claim” of structural realism that the world, or parts of the world, have or are some structure. The main reason for this arises from the failure of reference in the contexts of both set theory and category theory, at least if some minimal realist constraints are imposed on how reference can function. Consequently, Muller argues that an appropriately realist structuralist is better served by fixing the concept of structure by axiomatization rather than by (set-theoretical or category-theoretical) definition.