Weakness of Gravity and Transverse Spreading of Gravitational Flux.


Dirichlet branes – or their dual heterotic fivebranes and Horava-Witten walls – can trap non-abelian gauge interactions in their worldvolumes. This has placed on a firmer basis an old idea, according to which we might be living on a brane embedded in a higher-dimensional world. The idea arises naturally in compactifications of type I theory, which typically involve collections of orientifold planes and D-branes. The ‘brane-world’ scenario thus admits a fully perturbative string description.

In type I string theory the graviton (a closed-string state) lives in the ten-dimensional bulk, while open-string vector bosons are in general localized on lower-dimensional D-branes. Furthermore, while closed strings interact to leading order via the sphere diagram, open strings interact via the disk diagram, which is of higher order in the genus expansion. The four-dimensional Planck mass and Yang-Mills couplings therefore take the form

\[ \alpha_U \sim \frac{g_I}{(\tilde{r}\, M_I)^{6-n}} \]

\[ M_{\mathrm{Planck}}^2 \sim \frac{r^n\, \tilde{r}^{\,6-n}\, M_I^{8}}{g_I^{2}} \]

where r is the typical radius of the n compact dimensions transverse to the brane, r̃ the typical radius of the remaining (6−n) compact longitudinal dimensions, MI the type-I string scale and gI the string coupling constant. By appropriate T-dualities we can again ensure that both r and r̃ are greater than or equal to the fundamental string length. T-dualities change n and may take us either to the Ia or to the Ib theory (also called I′ or I, respectively).

It follows from these formulae that (a) there is no longer any universal relation between MPlanck, αU and MI, and (b) tree-level gauge couplings corresponding to different sets of D-branes have radius-dependent ratios and need not unify at all. Thus type-I string theory is much more flexible (and less predictive) than its heterotic counterpart. The fundamental string scale, MI, in particular is a free parameter, even if one insists that αU be kept fixed and of order one, and that the string theory be weakly coupled. This added flexibility can be used to ‘remove’ the order-of-magnitude discrepancy between the apparent unification and string scales of the heterotic theory, to lower MI to an intermediate scale, or even all the way down to its experimentally-allowed limit of order a TeV. Keeping for instance gI, αU and r̃MI fixed and of order one leads to the condition

\[ r^n \sim \frac{M_{\mathrm{Planck}}^2}{M_I^{2+n}} \]

A TeV string scale would then require transverse dimensions ranging from millimetric for n = 2 down to fermi-size for n = 6. The relative weakness of gravity is in this picture attributed to the large transverse spreading of the gravitational flux.
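As a quick numerical check of this scaling, here is a minimal Python sketch (my own illustration, not part of the original text) evaluating r ∼ (MPlanck² / MI^(2+n))^(1/n) for a TeV string scale, with all order-one factors set to one:

```python
# Hedged illustration: sizes of the n transverse dimensions required for a
# TeV string scale, from r^n ~ M_Planck^2 / M_I^(2+n). Order-one factors
# (g_I, alpha_U, r~*M_I) are suppressed, so only orders of magnitude matter.

M_PLANCK = 1.22e19       # Planck mass in GeV
M_I = 1.0e3              # a TeV string scale, in GeV
GEV_INV_TO_M = 1.97e-16  # hbar*c: one inverse GeV expressed in metres

for n in range(2, 7):
    r_inv_gev = (M_PLANCK**2 / M_I**(2 + n)) ** (1.0 / n)  # radius in GeV^-1
    print(f"n = {n}: r ~ {r_inv_gev * GEV_INV_TO_M:.1e} m")
```

Running it gives r of a few millimetres for n = 2, shrinking to some tens of fermi for n = 6, consistent with the range quoted above.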


The First Trichotomy. Thought of the Day 119.0


As the sign consists of three components, it hardly comes as a surprise that it may be analyzed in nine aspects – every one of the sign’s three components may be viewed under each of the three fundamental phenomenological categories. The least discussed of these so-called trichotomies is probably the first, which concerns the property in the sign that makes it function as a sign in the first place. It gives rise to the trichotomy qualisign, sinsign, legisign – or, in a little more sexy terminology, tone, token, type.

The most often quoted definition is from ‘Syllabus’ (Charles S. Peirce, The Essential Peirce: Selected Philosophical Writings, Volume 2):

According to the first division, a Sign may be termed a Qualisign, a Sinsign, or a Legisign.

A Qualisign is a quality which is a Sign. It cannot actually act as a sign until it is embodied; but the embodiment has nothing to do with its character as a sign.

A Sinsign (where the syllable sin is taken as meaning ‘being only once’, as in single, simple, Latin semel, etc.) is an actual existent thing or event which is a sign. It can only be so through its qualities; so that it involves a qualisign, or rather, several qualisigns. But these qualisigns are of a peculiar kind and only form a sign through being actually embodied.

A Legisign is a law that is a Sign. This law is usually [sic] established by men. Every conventional sign is a legisign. It is not a single object, but a general type which, it has been agreed, shall be significant. Every legisign signifies through an instance of its application, which may be termed a Replica of it. Thus, the word ‘the’ will usually occur from fifteen to twenty-five times on a page. It is in all these occurrences one and the same word, the same legisign. Each single instance of it is a Replica. The Replica is a Sinsign. Thus, every Legisign requires Sinsigns. But these are not ordinary Sinsigns, such as are peculiar occurrences that are regarded as significant. Nor would the Replica be significant if it were not for the law which renders it so.

In some sense it is a strange fact that this first and basic trichotomy has not been widely discussed in relation to the continuity concept in Peirce, because it is crucial. This is evident from the second notable locus where the trichotomy is discussed, the letters to Lady Welby, where Peirce continues (after an introduction which brings little news):

The difference between a legisign and a qualisign, neither of which is an individual thing, is that a legisign has a definite identity, though usually admitting a great variety of appearances. Thus, &, and, and the sound are all one word. The qualisign, on the other hand, has no identity. It is the mere quality of an appearance and is not exactly the same throughout a second. Instead of identity, it has great similarity, and cannot differ much without being called quite another qualisign.

The legisign or type is distinguished by being general, which is, in turn, defined by continuity: the type has a ‘great variety of appearances’ – as a matter of fact, a continuous variation of appearances, in many cases even several continua of appearances (as &, and, and the spoken sound of ‘and’). Each continuum of appearances is gathered into one identity thanks to the type, making possible the repetition of identical signs. Reference is not yet discussed (it concerns the sign’s relation to its object), nor is meaning (referring to its relation to its interpretant) – what is at stake is merely the possibility for a type to incarnate a continuum of possible actualizations, however this be possible, and so repeatedly appear as one and the same sign despite other differences.

Thus the reality of the type is the very foundation for Peirce’s ‘extreme realism’, and this for two reasons. First, seen from the side of the sign, the type provides the possibility of stable, repeatable signs: the type may – as opposed to qualisigns and those sinsigns that are not replicas of a type – be repeated as a self-identical occurrence, and this is what in the first place provides the stability which renders repeated sign use possible. Second, seen from the side of reality: because types, legisigns, are realized without reference to human subjectivity, the existence of types is the condition of possibility for a sign, in turn, to refer stably to stably occurring entities and objects. Here the importance of the irreducible continuity in the philosophy of mathematics appears for semiotics: it is what grants the possibility of collecting a continuum in one identity, the special characteristic of the type concept.

The opposite of the type is the qualisign or tone, which lacks the stability of the type – qualisigns are not self-identical even through a second, as Peirce says – and they have, of course, the character of infinitesimal entities, about which the principle of contradiction does not hold. The transformation from tone to type is thus the transformation from unstable pre-logic to stable logic – it covers, to phrase it in a Husserlian way, the phenomenology of logic. The legisign thus exerts its law over specific qualisigns and sinsigns – as in all Peirce’s trichotomies, the higher sign types contain and govern specific instances of the lower types. The legisign is incarnated in singular, actual sinsigns representing the type – they are tokens of the type – and what they have in common are certain sets of qualities or qualisigns – tones – selected from continua delimited by the legisign. The set of possible sinsigns, tokens, is summed up by a type, a stable and self-identical sign.

Peirce’s despised nominalists would to some degree agree here: the universal as a type is indeed a ‘mere word’ – but the strong counterargument which Peirce’s position makes possible says that if ‘mere words’ may possess universality, then the world must contain it as well, because words are worldly phenomena like everything else. Here nominalists will typically exclude words from the world and make them privileges of the subject, but for Peirce’s welding of idealism and naturalism nothing can be truly separated from the world – whatever is basically in the mind must also exist in the world. Thus the synthetical continuum, which may in some respects be treated as one entity, becomes the very condition of possibility for the existence of types.

Whether some types or legisigns refer to existing general objects or not is not a matter for the first trichotomy to decide; legisigns may be part of any number of false or nonsensical propositions, and not all legisigns are symbols – just as arguments, in turn, are only a subset of symbols – but all of them are legisigns because they must in themselves be general in order to provide the condition of possibility of identical repetition, of reference to general objects, and of signifying general interpretants.

Husserl’s Flip-Flop on Arithmetic Axiomatics. Thought of the Day 118.0


Husserl’s position in his Philosophy of Arithmetic (Psychological and Logical Investigations with Supplementary Texts) was resolutely anti-axiomatic. He attacked those who fell into remote, artificial constructions which, with the intent of building the elementary arithmetic concepts out of their ultimate definitional properties, interpret and change their meaning so much that totally strange, practically and scientifically useless conceptual formations finally result. Especially targeted was Frege’s ideal of the

founding of arithmetic on a sequence of formal definitions, out of which all the theorems of that science could be deduced purely syllogistically.

As soon as one comes to the ultimate, elemental concepts, Husserl reasoned, all defining has to come to an end. All one can then do is to point to the concrete phenomena from or through which the concepts are abstracted and show the nature of the abstractive process. A verbal explanation should place us in the proper state of mind for picking out, in inner or outer intuition, the abstract moments intended and for reproducing in ourselves the mental processes required for the formation of the concept. He said that his analyses had shown with incontestable clarity that the concepts of multiplicity and unity rest directly upon ultimate, elemental psychical data, and so belong among the indefinable concepts. Since the concept of number was so closely joined to them, one could scarcely speak of defining it either. All these points are made on the only pages of Philosophy of Arithmetic that Husserl ever explicitly retracted.

In On the Concept of Number, Husserl had set out to anchor arithmetical concepts in direct experience by analyzing the actual psychological processes to which he thought the concept of number owed its genesis. To obtain the concept of number of a concrete set of objects, say A, A, and A, he explained, one abstracts from the particular characteristics of the individual contents collected, only considering and retaining each one insofar as it is a something or a one. Regarding their collective combination, one thus obtains the general form of the set belonging to the set in question: one and one, etc., and … and one, to which a number name is assigned.

The enthusiastic espousal of psychologism of On the Concept of Number is not found in Philosophy of Arithmetic. Husserl later confessed that doubts about basic differences between the concept of number and the concept of collecting, which was all that could be obtained from reflection on acts, had troubled and tormented him from the very beginning and had eventually extended to all categorial concepts and to concepts of objectivities of any sort whatsoever, ultimately to include modern analysis and the theory of manifolds, and simultaneously to mathematical logic and the entire field of logic in general. He did not see how one could reconcile the objectivity of mathematics with psychological foundations for logic.

In sharp contrast to Brouwer, who denounced logic as a source of truth, from the mid-1890s on Husserl defended the view, which he attributed to Frege’s teacher Hermann Lotze, that pure arithmetic was basically no more than a branch of logic that had undergone independent development. He bade students not to be “scared” by that thought and to grow used to Lotze’s initially strange idea that arithmetic was only a particularly highly developed piece of logic.

Years later, Husserl would explain in Formal and Transcendental Logic that his

war against logical psychologism was meant to serve no other end than the supremely important one of making the specific province of analytic logic visible in its purity and ideal particularity, freeing it from the psychologizing confusions and misinterpretations in which it had remained enmeshed from the beginning.

He had come to see arithmetic truths as being analytic, as grounded in meanings independently of matters of fact. He had come to believe that the entire overthrowing of psychologism through phenomenology showed that his analyses in On the Concept of Number and Philosophy of Arithmetic had to be considered a pure a priori analysis of essence. For him, pure arithmetic, pure mathematics, and pure logic were a priori disciplines entirely grounded in conceptual essentialities, where truth was nothing other than the analysis of essences or concepts. Pure mathematics as pure arithmetic investigated what is grounded in the essence of number. Pure mathematical laws were laws of essence.

He is said to have told his students that it was to be stressed repeatedly and emphatically that the ideal entities so unpleasant for empiricistic logic, and so consistently disregarded by it, had not been artificially devised either by himself, or by Bolzano, but were given beforehand by the meaning of the universal talk of propositions and truths indispensable in all the sciences. This, he said, was an indubitable fact that had to be the starting point of all logic. All purely mathematical propositions, he taught, express something about the essence of what is mathematical. Their denial is consequently an absurdity. Denying a proposition of the natural sciences, a proposition about real matters of fact, never means an absurdity, a contradiction in terms. In denying the law of gravity, I cast experience to the wind. I violate the evident, extremely valuable probability that experience has established for the laws. But, I do not say anything “unthinkable,” absurd, something that nullifies the meaning of the word as I do when I say that 2 × 2 is not 4, but 5.

Husserl taught that every judgment either is a truth or cannot be a truth, that every presentation either accorded with a possible experience adequately redeeming it, or was in conflict with the experience, and that grounded in the essence of agreement was the fact that it was incompatible with the conflict, and grounded in the essence of conflict that it was incompatible with agreement. For him, that meant that truth ruled out falsehood and falsehood ruled out truth. And, likewise, existence and non-existence, correctness and incorrectness cancelled one another out in every sense. He believed that that became immediately apparent as soon as one had clarified the essence of existence and truth, of correctness and incorrectness, of Evidenz as consciousness of givenness, of being and not-being in fully redeeming intuition.

At the same time, Husserl contended, one grasps the “ultimate meaning” of the basic logical law of contradiction and of the excluded middle. When we state the law of validity that of any two contradictory propositions one holds and the other does not hold, when we say that for every proposition there is a contradictory one, Husserl explained, then we are continually speaking of the proposition in its ideal unity and not at all about mental experiences of individuals, not even in the most general way. With talk of truth it is always a matter of propositions in their ideal unity, of the meaning of statements, a matter of something identical and atemporal. What lies in the identically-ideal meaning of one’s words, what one cannot deny without invalidating the fixed meaning of one’s words has nothing at all to do with experience and induction. It has only to do with concepts. In sharp contrast to this, Brouwer saw intuitionistic mathematics as deviating from classical mathematics because the latter uses logic to generate theorems and in particular applies the principle of the excluded middle. He believed that Intuitionism had proven that no mathematical reality corresponds to the affirmation of the principle of the excluded middle and to conclusions derived by means of it. He reasoned that “since logic is based on mathematics – and not vice versa – the use of the Principle of the Excluded Middle is not permissible as part of a mathematical proof.”
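The logical asymmetry dividing the two positions can be made precise in a proof assistant. Here is a minimal Lean 4 sketch (my own illustration; the theorem names are arbitrary): the double negation of the excluded middle is provable by purely intuitionistic means, while the principle itself only enters through an explicitly classical axiom.

```lean
-- Intuitionistically provable: one cannot consistently deny excluded middle,
-- even though the constructivist refuses to assert it.
theorem not_not_em (p : Prop) : ¬¬(p ∨ ¬p) :=
  fun h => h (Or.inr (fun hp => h (Or.inl hp)))

-- Excluded middle itself is not derivable constructively; in Lean it is
-- obtained from the classical axioms via Classical.em.
theorem em' (p : Prop) : p ∨ ¬p := Classical.em p
```

This is exactly the asymmetry Brouwer exploits: rejecting the law does not mean affirming its negation, only refusing the classical warrant for asserting it.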

Triadomania. Thought of the Day 117.0


Peirce’s famous ‘triadomania’ lets most of his decisive distinctions appear in threes, following the tripartition of his list of categories, the famous triad of First, Second, and Third, or Quality, Reaction, Representation, or Possibility, Actuality, Reality.

Firstness is the mode of being of that which is such as it is, positively and without reference to anything else.

Secondness is the mode of being of that which is such as it is, with respect to a second but regardless of any third.

Thirdness is the mode of being of that which is such as it is, in bringing a second and third into relation to each other.

Firstness constitutes the quality of experience: in order for something to appear at all, it must do so due to a certain constellation of qualitative properties. Peirce often uses sensory qualities as examples, but it is important for the understanding of his thought that the examples may refer to phenomena very far from our standard conception of ‘sensory data’, e.g. forms or the ‘feeling’ of a whole melody or of a whole mathematical proof – not to be taken in a subjective sense but as a concept for the continuity of melody or proof as a whole, apart from the analytical steps and sequences into which it may subsequently be subdivided. In short, all sorts of simple and complex Gestalt qualities also qualify as Firstnesses. Firstnesses tend to form continua of possibilities, such as the continua of shape, color, tone, etc. These qualities, however, are, taken in themselves, pure possibilities and must necessarily be incarnated in phenomena in order to appear.

Secondness is the phenomenological category of ‘incarnation’ which makes this possible: it is the insistency with which the individuated, actualized, existent phenomenon appears. Thus Secondness necessarily forms discontinuous breaks in Firstness, allowing particular qualities to enter into existence. The mind may imagine anything whatever in all sorts of quality combinations, but something appears with an irrefutable insisting power – reacting, actively, yielding resistance. Peirce’s favorite example is the resistance of the closed door – which might be imagined reduced to the quality of the feeling of resistance and thus degenerate into pure Firstness, so that his theory would implode into a Hume-like solipsism – but to Peirce this resistance, surprise, event, this thisness, ‘haecceity’ as he calls it with a Scotist term, remains irreducible in the description of the phenomenon (a Kantian idea, at bottom: existence is no predicate).

About Thirdness, Peirce may directly state that continuity represents it perfectly: ‘continuity and generality are two names of the same absence of distinction of individuals’. As against Secondness, Thirdness is general; it mediates between First and Second. The events of Secondness are never completely unique – such an event would be inexperiencable – but relate (3) to other events (2) due to certain features (1) in them; Thirdness is thus what facilitates understanding as well as pragmatic action, due to its continuous generality. With a famous example: if you dream about an apple pie, then the very qualities of that dream (taste, smell, warmth, crustiness, etc.) are pure Firstnesses, while the act of baking is composed of a series of actual Secondnesses. But their coordination is governed by a Thirdness: the recipe. Being general, the recipe can never specify all properties of the individual apple pie; it has a schematic frame-character and subsumes an indefinite series – a whole continuum – of possible apple pies. Thirdness is thus necessarily general and vague. Of course, the recipe may be more or less precise, but no recipe exists which is able to determine each and every property of the cake, including date, hour, place, which tree the apples stem from, etc. – any recipe is necessarily general. In this case the recipe (3) mediates between dream (1) and fulfilment (2) – its generality, symbolicity, relationality and future orientation are all characteristic of Thirdness.
An important aspect of Peirce’s realism is that continuous generality may be experienced directly in perceptual judgments: ‘Generality, Thirdness, pours in upon us in our very perceptual judgments’.

All these determinations remain purely phenomenological, even if the later semiotic and metaphysical interpretations clearly shine through. In a more general, non-Peircean terminology, his phenomenology can be seen as the description of minimum aspects inherent in any imaginable possible world – for this reason it is imaginability which is the main argument, and this might suggest that Peirce could be open to the critique of subjectivism so often aimed at Husserl’s in some respects analogous project. The concept of consciousness is invoked as the basis of imaginability: phenomenology is the study of invariant properties in any phenomenon appearing for a mind. Peirce’s answer would here be, on the one hand, the research community which according to him defines reality – an argument which structurally corresponds to Husserl’s reference to intersubjectivity as a necessary ingredient in objectivity (an object is a phenomenon which is intersubjectively accessible). Peirce, however, has a further argument here, namely his consistent refusal to limit his concept of mind to human subjects (a category the use of which he obviously tries to minimize): mind-like processes may take place in nature without any subject being responsible. Peirce will, for continuity reasons, never accept any hard distinction between subject and object and remains extremely parsimonious in the employment of such terms.

From Peirce’s New Elements of Mathematics (The New Elements of Mathematics Vol. 4),

But just as the qualities, which as they are for themselves, are equally unrelated to one other, each being mere nothing for any other, yet form a continuum in which and because of their situation in which they acquire more or less resemblance and contrast with one another; and then this continuum is amplified in the continuum of possible feelings of quality, so the accidents of reaction, which are waking consciousnesses of pairs of qualities, may be expected to join themselves into a continuum. 

Since, then an accidental reaction is a combination or bringing into special connection of two qualities, and since further it is accidental and antigeneral or discontinuous, such an accidental reaction ought to be regarded as an adventitious singularity of the continuum of possible quality, just as two points of a sheet of paper might come into contact.

But although singularities are discontinuous, they may be continuous to a certain extent. Thus the sheet instead of touching itself in the union of two points may cut itself all along a line. Here there is a continuous line of singularity. In like manner, accidental reactions though they are breaches of generality may come to be generalized to a certain extent.

Secondness is now taken to actualize these quality possibilities, based on the idea that any actual event involves a clash of qualities – in the ensuing argumentation Peirce underlines that the qualities involved in actualization need not be restricted to two but may be many, provided they can be ‘dissolved’ into pairs and hence do not break into the domain of Thirdness. This appearance of actuality hence has the character of singularities spontaneously popping up in the space of possibilities and actualizing pairs of points in it. This transition from First to Second is conceived of along Aristotelian lines: as an actualization of a possibility – and this is expressed in the picture of a discontinuous singularity in the quality continuum. The topological fact that singularities must in general be defined with respect to the neighborhood of the manifold in which they appear now becomes the argument for the fact that Secondness can never be completely discontinuous but still ‘inherits’ a certain small measure of continuity from the continuum of Firstness. Singularities, being discontinuous along certain dimensions, may be continuous in others, which provides the condition of possibility for Thirdness to exist as a tendency for Secondness to conform to a general law or regularity. As is evident, a completely pure Secondness is impossible in this continuous metaphysics – it remains a conceivable but unrealizable limit case, because a completely discontinuous event would amount to nothing. Thirdness already lies as a germ in the non-discontinuous aspects of the singularity. The occurrences of Secondness seem to be infinitesimal, then, rather than completely extensionless points.

Fragmentation – Lit and Dark Electronic Exchanges. Thought of the Day 116.0


Exchanges also control the amount and degree of granularity of the information you receive (e.g., you can use the consolidated/public feed at a low cost or pay a relatively much larger cost for direct/proprietary feeds from the exchanges). They also monetise the need for speed by renting out computer/server space next to their matching engines, a process called colocation. Through colocation, exchanges can provide uniform service to trading clients at competitive rates. Having the traders’ trading engines at a common location owned by the exchange simplifies the exchange’s ability to provide uniform service, as it can control the hardware connecting each client to the trading engine, the cable (so all have the same cable of the same length), and the network. This ensures that all traders in colocation have the same fast access and are not disadvantaged (at least in terms of exchange-provided hardware). Naturally, this imposes a clear distinction between traders who are colocated and those who are not. Those not colocated will always have a speed disadvantage. It then becomes an issue for regulators, who have to ensure that exchanges keep access to colocation sufficiently competitive.

The issue of distance from the trading engine brings us to another key dimension of trading nowadays, especially in US equity markets, namely fragmentation. A trader in US equities markets has to be aware that there are up to 13 lit electronic exchanges and more than 40 dark ones. Together with this wide range of trading options, there is also specific regulation (the so-called ‘trade-through’ rules) which affects what happens to market orders sent to one exchange if there are better execution prices at other exchanges. The interaction of multiple trading venues, latency when moving between these venues, and regulation introduces additional dimensions to keep in mind when designing successful trading strategies.

The role of time is fundamental in the usual price-time priority electronic exchange, and in a fragmented market the issue becomes even more important. Traders need to be able to adjust their trading positions fast, in response to or in anticipation of changes in market circumstances, not just at the local exchange but at other markets as well. The race to be the first in or out of a certain position is one of the focal points of the debate on the benefits and costs of ‘high-frequency trading’.

The importance of speed permeates the whole process of designing trading algorithms, from the actual code, to the choice of programming language, to the hardware it is implemented on, to the characteristics of the connection to the matching engine, and the way orders are routed within an exchange and between exchanges. Exchanges, being aware of the importance of speed, have adapted and, amongst other things, moved well beyond the basic two types of orders (Market Orders and Limit Orders). Any trader should be very well-informed regarding all the different order types available at the exchanges, what they are and how they may be used.

When coding an algorithm, one should be very aware of all the possible order types allowed, not just in one exchange, but in all competing exchanges where one’s asset of interest is traded. Being uninformed about the variety of order types can lead to significant losses. Since some of these order types allow changes and adjustments at the trading-engine level, they cannot be beaten in terms of latency by the trader’s own engine, regardless of how efficiently one’s algorithms are coded and hardwired.


Another important issue to be aware of is that trading on an exchange is not free, and the cost is not the same for all traders. For example, many exchanges run what is referred to as a maker-taker system of fees, whereby a trader sending an MO (and hence taking liquidity away from the market) pays a trading fee, while a trader whose posted LO is filled by the MO (that is, the LO with which the MO is matched) will pay a much lower trading fee, or even receive a payment (a rebate) from the exchange for providing liquidity (making the market). On the other hand, there are markets with an inverted fee schedule, a taker-maker system, where the fee structure is the reverse: those providing liquidity pay a higher fee than those taking liquidity (who may even get a rebate). The issue of exchange fees is quite important, as fees distort observed market prices (when you make a transaction, the relevant price for you is the net price you pay/receive, which is the published price net of fees).
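A stylized sketch of this net-price point, in Python (the fee levels are hypothetical, not those of any particular exchange):

```python
# Hedged sketch: under maker-taker fees, the published price is the relevant
# price for neither counterparty. Fee levels below are hypothetical.

def net_prices(published: float, taker_fee: float, maker_rebate: float):
    """All-in price per share for the MO sender (taker) and the LO owner (maker)."""
    taker_buys_at = published + taker_fee      # taker's effective cost
    maker_sells_at = published + maker_rebate  # maker's effective receipt
    return taker_buys_at, maker_sells_at

# A buy MO lifting a posted sell LO at a published price of $10.00, with a
# 0.30-cent taker fee and a 0.20-cent maker rebate:
taker, maker = net_prices(10.00, 0.0030, 0.0020)
print(f"taker pays {taker:.4f}, maker receives {maker:.4f}")
# taker pays 10.0030, maker receives 10.0020 -- the quoted 10.0000 is neither.
```

In a taker-maker venue the signs simply flip, which is why the same published quote can imply different effective spreads across venues.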

Stuxnet


Stuxnet is a threat targeting a specific industrial control system likely in Iran, such as a gas pipeline or power plant. The ultimate goal of Stuxnet is to sabotage that facility by reprogramming programmable logic controllers (PLCs) to operate as the attackers intend them to, most likely out of their specified boundaries.

Stuxnet was discovered in July 2010, but is confirmed to have existed at least one year prior and likely even before. The majority of infections were found in Iran. Stuxnet contains many features such as:

  • Self-replicates through removable drives, exploiting a vulnerability allowing auto-execution: the Microsoft Windows Shortcut ‘LNK/PIF’ Files Automatic File Execution Vulnerability (BID 41732).
  • Spreads in a LAN through a vulnerability in the Windows Print Spooler: the Microsoft Windows Print Spooler Service Remote Code Execution Vulnerability (BID 43073).
  • Spreads through SMB by exploiting the Microsoft Windows Server Service RPC Handling Remote Code Execution Vulnerability (BID 31874).
  • Copies and executes itself on remote computers through network shares.
  • Copies and executes itself on remote computers running a WinCC database server.
  • Copies itself into Step 7 projects in such a way that it automatically executes when the Step 7 project is loaded.
  • Updates itself through a peer-to-peer mechanism within a LAN.
  • Exploits a total of four unpatched Microsoft vulnerabilities, two of which are previously mentioned vulnerabilities for self-replication and the other two are escalation of privilege vulnerabilities that have yet to be disclosed.
  • Contacts a command and control server that allows the hacker to download and execute code, including updated versions.
  • Contains a Windows rootkit that hides its binaries.
  • Attempts to bypass security products.
  • Fingerprints a specific industrial control system and modifies code on the Siemens PLCs to potentially sabotage the system.
  • Hides modified code on PLCs, essentially a rootkit for PLCs.

The following is a possible attack scenario. It is only speculation driven by the technical features of Stuxnet.

Industrial control systems (ICS) are operated by specialized assembly-like code on programmable logic controllers (PLCs). The PLCs are often programmed from Windows computers not connected to the Internet or even to the internal network. In addition, the industrial control systems themselves are also unlikely to be connected to the Internet.

First, the attackers needed to conduct reconnaissance. As each PLC is configured in a unique manner, the attackers would first need the ICS’s schematics. These design documents may have been stolen by an insider or even retrieved by an early version of Stuxnet or another malicious binary. Once the attackers had the design documents and potential knowledge of the computing environment in the facility, they would develop the latest version of Stuxnet. Each feature of Stuxnet was implemented for a specific reason and for the final goal of potentially sabotaging the ICS.

Attackers would need to set up a mirrored environment that would include the necessary ICS hardware, such as PLCs, modules, and peripherals, in order to test their code. The full cycle may have taken six months and five to ten core developers, not counting numerous other individuals such as quality assurance and management.

In addition, their malicious binaries contained driver files that needed to be digitally signed to avoid suspicion. The attackers compromised two digital certificates to achieve this task. They would have needed to obtain the digital certificates from someone who may have physically entered the premises of the two companies and stolen them, as the two companies are in close physical proximity.

To infect their target, Stuxnet would need to be introduced into the target environment. This may have occurred by infecting a willing or unknowing third party, such as a contractor who perhaps had access to the facility, or an insider. The original infection may have been introduced by removable drive.

Once Stuxnet had infected a computer within the organization, it began to spread in search of Field PGs, which are typical Windows computers but used to program PLCs. Since most of these computers are non-networked, Stuxnet would first try to spread to other computers on the LAN through a zero-day vulnerability, a two-year-old vulnerability, by infecting Step 7 projects, and through removable drives. Propagation through a LAN likely served as the first step, and propagation through removable drives as a means to cover the last and final hop to a Field PG that is never connected to an untrusted network.

While attackers could control Stuxnet with a command and control server, as mentioned previously the key computer was unlikely to have outbound Internet access. Thus, all the functionality required to sabotage a system was embedded directly in the Stuxnet executable. Updates to this executable would be propagated throughout the facility through a peer-to-peer method established by Stuxnet.

When Stuxnet finally found a suitable computer, one that ran Step 7, it would then modify the code on the PLC. These modifications likely sabotaged the system, which was likely considered a high value target due to the large resources invested in the creation of Stuxnet.

Victims attempting to verify the issue would not see any rogue PLC code as Stuxnet hides its modifications.

While the attackers’ choice of self-replication methods may have been necessary to ensure they would find a suitable Field PG, it also caused noticeable collateral damage by infecting machines outside the target organization. The attackers may have considered the collateral damage a necessity in order to effectively reach the intended target. Also, the attackers had likely completed their initial attack by the time they were discovered.

Stuxnet dossier

Conjuncted: Bank Recapitalization – Some Scattered Thoughts on Efficacies.


In response to this article by Joe.

Some scattered thoughts could be found here.

With demonetization, banks got surplus liquidity to the tune of Rs. 4 trillion, which was largely responsible for call rates becoming tepid. However, there was no commensurate demand for credit, as most corporates with a good credit rating managed to raise funds in the bond market at much lower yields. The result was that banks ended up investing most of this liquidity in government securities, with the Statutory Liquidity Ratio (SLR) bond holdings of banks exceeding the minimum requirement by up to 700 basis points. This combination of a surfeit of liquidity and weak credit demand can be used to design a recapitalization bond to address the capital problem.

Since the banks are anyway sitting on surplus liquidity and investing in G-Secs, recapitalization bonds can be used to convert that liquidity into actual bank capital. First, the government of India, through the RBI, issues recapitalization bonds. Banks, sitting on surplus liquidity, use their resources to invest in these bonds. With the funds raised through the issue of recapitalization bonds, the government infuses capital into the stressed banks. This way the surplus liquidity of the banks is used more effectively, and in the process the banks are better capitalized and become capable of expanding their asset books as well as negotiating haircuts with stressed clients.

Recapitalization bonds are nothing new and have been used by the RBI in the past. In fact, the former RBI governor, Dr. Y. V. Reddy, continues to be one of the major proponents of recapitalization bonds at the current juncture. More so considering that the capital adequacy ratio of Indian banks could dip as low as 11% by March 2018 if macroeconomic conditions worsen, the motivation for going in for recap bonds has no logical counters.

As I have often said in many a forum, when banks talk numbers, transparency and accountability, the way these are perceived from outside isn’t the way the banks themselves perceive them, and this argument gets diluted a bit in the wake of demonetization, which is still haunted by the lack of credit demand. As far as the NPAs are concerned, these were lying dormant, and thanks to the RBI’s AQR they would not even have surfaced had the decisions been left to the banks’ free hands. So the RBI’s intervention was a must for recognizing NPAs, rather than the political will of merely treating them as stressed assets. The real problem with recap bonds lies in the fact that bonds from the earlier such exercise in the 90s are still maturing, and unless the new bonds are made tradable, they would be confined to further immaturities.
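To make the mechanics concrete, here is a back-of-the-envelope sketch in Python (all figures are hypothetical; only the ~11% capital adequacy level is taken from the text above):

```python
# Hedged illustration of a recapitalization-bond round. Figures are made up,
# except the ~11% capital adequacy ratio (CAR) floor mentioned above.

capital = 1.1                # bank capital, Rs. trillion (hypothetical)
risk_weighted_assets = 10.0  # Rs. trillion (hypothetical)
car_before = capital / risk_weighted_assets  # = 11%

# Banks deploy part of their surplus liquidity into recap bonds; the
# government re-injects the proceeds as equity. If the recap bonds carry
# the same zero risk weight as the G-Secs they displace, risk-weighted
# assets are unchanged while capital rises by the injected amount.
bond_issue = 0.5             # Rs. trillion of recap bonds (hypothetical)
car_after = (capital + bond_issue) / risk_weighted_assets

print(f"CAR before: {car_before:.1%}, after: {car_after:.1%}")
# CAR before: 11.0%, after: 16.0%
```

The liquidity that was earning G-Sec yields is thus recycled into equity, which is the whole attraction of the scheme; the open question flagged above is what happens when the bonds themselves cannot be traded.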