Momentum of Accelerated Capital. Note Quote.


Distinct types of high frequency trading firms include independent proprietary firms, which trade with private funds and closely guarded strategies, and which may act as market makers, generating automatic buy and sell orders continuously throughout the day. Broker-dealer proprietary desks are part of traditional broker-dealer firms but are separate from their client business; they are operated by the largest investment banks. Thirdly, hedge funds focus on complex statistical arbitrage, taking advantage of pricing inefficiencies between asset classes and securities.

Today, strategies using algorithmic trading and High Frequency Trading play a central role on financial exchanges, alternative markets, and banks’ internalized (over-the-counter) dealings:

High frequency traders typically act in a proprietary capacity, making use of a number of strategies and generating a very large number of trades every single day. They leverage technology and algorithms from end to end of the investment chain – from market data analysis and the operation of a specific trading strategy to the generation, routing, and execution of orders and trades. What differentiates HFT from algorithmic trading is the high-frequency turnover of positions, as well as its implicit reliance on ultra-low-latency connections and system speed.

The use of algorithms in computerised exchange trading has experienced a long evolution with the increasing digitalisation of exchanges:

Over time, algorithms have continuously evolved: while the initial, first-generation algorithms – fairly simple in their goals and logic – were pure trade execution algos, second-generation algorithms – strategy implementation algos – have become much more sophisticated and are typically used to produce their own trading signals, which are then executed by trade execution algos. Third-generation algorithms include intelligent logic that learns from market activity and adjusts the trading strategy of the order based on what the algorithm perceives is happening in the market. HFT is not a strategy per se, but rather a technologically more advanced method of implementing particular trading strategies. The objective of HFT strategies is to benefit from market liquidity imbalances or other short-term pricing inefficiencies.

While algorithms are employed by most traders in contemporary markets, the intense focus on speed and the momentary holding periods are the unique practices of high frequency traders. The defence of high frequency trading is built around the principles that it increases liquidity, narrows spreads, and improves market efficiency: the high number of trades made by HFT traders results in greater liquidity in the market; algorithmic trading has resulted in the prices of securities being updated more quickly, with more competitive bid-ask prices and narrowing spreads; and HFT enables prices to reflect information more quickly and accurately, ensuring accurate pricing at smaller time intervals. But there are critical differences between high frequency traders and traditional market makers:

  1. HFTs do not have an affirmative market-making obligation; that is, they are not obliged to provide liquidity by constantly displaying two-sided quotes, which may translate into a lack of liquidity during volatile conditions.
  2. HFTs contribute little market depth due to the marginal size of their quotes, which may force larger orders to transact against many small orders and may thereby impact overall transaction costs.
  3. HFT quotes are barely accessible because the liquidity is available for only an extremely short duration, with orders cancelled within milliseconds.

Beyond the shallowness of the HFT contribution to liquidity, there are real fears about how HFT can compound and magnify risk through the rapidity of its actions:

There is evidence that high-frequency algorithmic trading also has benefits for investors, by narrowing spreads – the difference between the price at which a buyer is willing to purchase a financial instrument and the price at which a seller is willing to sell it – and by increasing liquidity at each decimal point. However, a major issue for regulators and policymakers is the extent to which high-frequency trading, unfiltered sponsored access, and co-location amplify risks, including systemic risk, by increasing the speed at which trading errors or fraudulent trades can occur.

Although there have always been occasional trading errors and episodic volatility spikes in markets, the speed, automation, and interconnectedness of today’s markets create a different scale of risk. These risks demand that exchanges and market participants employ effective quality management systems and sophisticated risk mitigation controls, adapted to these new dynamics, to protect against potential threats to market stability arising from technology malfunctions or episodic illiquidity. However, there are more deliberate aspects of HFT strategies that may present serious problems for market structure and functioning, and where conduct may be illegal. For example, order anticipation seeks to ascertain the existence of large buyers or sellers in the marketplace and then to trade ahead of those buyers and sellers, in anticipation that their large orders will move market prices. A momentum strategy involves initiating a series of orders and trades in an attempt to ignite a rapid price move. HFT strategies can resemble traditional forms of market manipulation that violate the Exchange Act:

  1. Spoofing and layering occur when traders create a false appearance of market activity by entering multiple non-bona fide orders on one side of the market, at increasing or decreasing prices, in order to induce others to buy or sell the stock at a price altered by the bogus orders.
  2. Painting the tape involves placing successive small buy orders at increasing prices in order to stimulate increased demand.
  3. Quote stuffing and price fade are additional dubious HFT practices: quote stuffing floods the market with huge numbers of orders and cancellations in rapid succession, which may generate spurious buying or selling interest or compromise the trading position of other market participants; order or price fade involves the rapid cancellation of orders in response to other trades.

The World Federation of Exchanges insists: “Exchanges are committed to protecting market stability and promoting orderly markets, and understand that a robust and resilient risk control framework adapted to today’s high speed markets is a cornerstone of enhancing investor confidence.” However, this “robust and resilient risk control framework” seems lacking, including in the dark pools now established for trading, which were initially proposed as safer than the open market.


Banking? There isn’t much more to it than this anyway.

Don’t go by the innocuous-sounding title, for this is, at its core, alternative. The modus operandi (oh, how much I feel like saying “Modis operandi”, in honor of the one you Indians know who) for the accumulation of wealth by parts of ‘the system’ (which, for historic reasons, we call ‘capitalists’) is banking. The ‘capitalists’ (defined as those who skim the surplus labor of others) accumulate it through the banking system. That is nearly an empty statement, since wealth = money. That is, money is the means of increasing wealth, and thus one represents the other. If capitalists skim surplus labor, it means that they skim surplus money. Money is linked to (only!) banks, and thus accumulation is in the banks.

If interest is charged, borrowers will go bankrupt. This idea can be extended: if interest is charged, all money winds up accumulated in banks. Or, better said, a larger and larger fraction of money is accumulated in the banks and kept in financial institutions. The accumulation of wealth is the accumulation of money in and by banks. It remains only to ask to whom the money belongs.
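
A toy simulation makes the feedback concrete (all numbers here are invented for illustration): a fixed money supply, a stock of outstanding debt, and interest flowing one way. The bank’s share of all money can only grow.

```python
# Toy model of interest-driven accumulation: a fixed money supply
# circulates, and each period borrowers pay interest on outstanding
# debt. The bank's share of all money grows monotonically.
# All numbers are invented for illustration.

money_supply = 1000.0
bank = 0.0                  # money held by the banking system
borrowers = money_supply    # money held by everyone else
debt = 800.0                # outstanding principal
interest_rate = 0.05

for period in range(1, 51):
    interest = min(interest_rate * debt, borrowers)  # can't pay more than they hold
    borrowers -= interest
    bank += interest
    if period % 10 == 0:
        print(f"period {period:2d}: bank holds {bank / money_supply:.1%} of all money")
```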

By the way, these institutions are often in fiscal paradises, the capitalists naturally wanting to part with as little of this money as possible. Famous are the Cayman Islands, the Bahamas, the Seychelles, etc. With the accumulated money, physical property is bought. Once again, this is an empty statement: money represents buying power (to buy more wealth), for instance the means of production (the Marxian equation raises its head again: MoP), such as land, factories, and people’s houses (which will then be rented back to them; more money). Etc.

Also, a tiny fraction of the money is squandered. It is what normally draws most attention: oil sheiks who drive golden cars, bunga-bunga parties, etc. That way of re-injecting money into the system, however, is rather insignificant. Mostly, money is used to increase capital. That is why it is an obvious truth that “when you are rich, you must be extremely stupid to become poor; when you are poor, you must be extremely talented to become rich”. When you are rich, just let the capital work for you; it will have the tendency to increase, even if it increases more slowly than that of your more talented neighbor.

To accelerate the effect of skimming, means of production (MoP, ‘capital’) are confiscated from everything – countries and individual people – that cannot pay the loan + interest (which is unavoidable). Or they are bought for a price much below market value, in the way of: “Take it or leave it; either give me my money back, which I know there is no way you can, or give me all your possessions, plus options for the confiscation of the possessions of future generations as well, i.e., I’ll give you new loans (which you will also not be able to pay back, I know, but that way I’ll manage to forever take everything you will ever produce in your life, and all generations after you. Slaves, obey your masters!).”

Although not essential (Marx did not analyze it like this), the banking system accelerates the condensation of wealth. It is the modus operandi. Money is accumulated; with that money, capital is bought; and then money is re-confiscated with that newly bought capital, or by means of new loans, etc. It is a feedback system in which all money and capital condense on one big pile. Money and capital are synonyms. Note that this pile is not necessarily a set of people; it is just ‘the system’. There is no ‘class struggle’ between rich and poor, where the latter are trying to steal/take back the money (depending on which side of the alleged theft the person analyzing it is on). It is a class struggle of people against ‘the system’.

There is only one stable final distribution: all money/capital belonging to one person or institute, one ‘entity’. That is what is called a ‘singularity’, and the only distribution that is stable in this case is the delta function (the Dirac delta, whose discrete analogue is the Kronecker delta): zero everywhere except at one point, where it is infinite, with the total integral (total money) equal to unity. In this case: all money on one big pile. All other distributions are unstable.

Imagine that two brothers wound up with all the money, and the rest of the people are destitute, left without anything. These two brothers will then start lending things to each other. Since they are doing this in the commercial way (having to give back more than was borrowed), one of the brothers will eventually confiscate everything from the other.
Note: There is only one way out of it, namely that the brother ‘feels sorry’ for his sibling and gives him things without anything in return, to compensate for the steady unidirectional flow of wealth…
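
A minimal sketch of the two-brothers dynamic (the starting wealth and interest rate are invented): repeated commercial lending drives the distribution to the delta function described above, with one agent holding everything.

```python
# Two agents repeatedly lend at interest; whoever is (even marginally)
# behind pays a steady interest stream and is eventually stripped of
# everything. Starting wealth and rate are invented for illustration.

wealth = [500.0, 500.0]
rate = 0.10

step = 0
while min(wealth) > 0.0:
    step += 1
    lender = 0 if wealth[0] >= wealth[1] else 1
    borrower = 1 - lender
    loan = 0.1 * wealth[lender]
    # Borrower owes loan + interest but can pay at most what they hold.
    repayment = min(loan * (1 + rate), wealth[borrower] + loan)
    wealth[borrower] += loan - repayment
    wealth[lender] += repayment - loan

print(f"after {step} rounds: wealth = {wealth}")  # all money on one pile
```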

Conjuncted: Ergodicity. Thought of the Day 51.1


When we scientifically investigate a system, we cannot normally observe all possible histories in Ω, or directly access the conditional probability structure {PrE}E⊆Ω. Instead, we can only observe specific events. Conducting many “runs” of the same experiment is an attempt to observe as many histories of a system as possible, but even the best experimental design rarely allows us to observe all histories or to read off the full conditional probability structure. Furthermore, this strategy works only for smaller systems that we can isolate in laboratory conditions. When the system is the economy, the global ecosystem, or the universe in its entirety, we are stuck in a single history. We cannot step outside that history and look at alternative histories. Nonetheless, we would like to infer something about the laws of the system in general, and especially about the true probability distribution over histories.

Can we discern the system’s laws and true probabilities from observations of specific events? And what kinds of regularities must the system display in order to make this possible? In other words, are there certain “metaphysical prerequisites” that must be in place for scientific inference to work?

To answer these questions, we first consider a very simple example. Here T = {1,2,3,…}, and the system’s state at any time is the outcome of an independent coin toss. So the state space is X = {Heads, Tails}, and each possible history in Ω is one possible Heads/Tails sequence.

Suppose the true conditional probability structure on Ω is induced by the single parameter p, the probability of Heads. In this example, the Law of Large Numbers guarantees that, with probability 1, the limiting frequency of Heads in a given history (as time goes to infinity) will match p. This means that the subset of Ω consisting of “well-behaved” histories has probability 1, where a history is well-behaved if (i) there exists a limiting frequency of Heads for it (i.e., the proportion of Heads converges to a well-defined limit as time goes to infinity) and (ii) that limiting frequency is p. For this reason, we will almost certainly (with probability 1) arrive at the true conditional probability structure on Ω on the basis of observing just a single history and counting the number of Heads and Tails in it.
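
A minimal sketch of this single-history inference (the true p and the horizon are arbitrary):

```python
import random

# Infer the coin parameter p from one long history, as licensed by the
# Law of Large Numbers. The true p and the sample size are arbitrary.
random.seed(42)
p_true = 0.3
T = 100_000

history = ["Heads" if random.random() < p_true else "Tails" for _ in range(T)]
p_hat = history.count("Heads") / T
print(f"true p = {p_true}, estimated p from one history = {p_hat:.4f}")
```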

Does this result generalize? The short answer is “yes”, provided the system’s symmetries are of the right kind. Without suitable symmetries, generalizing from local observations to global laws is not possible. In a slogan, for scientific inference to work, there must be sufficient regularities in the system. In our toy system of the coin tosses, there are. Wigner (1967) recognized this point, taking symmetries to be “a prerequisite for the very possibility of discovering the laws of nature”.

Generally, symmetries allow us to infer general laws from specific observations. For example, let T = {1,2,3,…}, and let Y and Z be two subsets of the state space X. Suppose we have made the observation O: “whenever the state is in the set Y at time 5, there is a 50% probability that it will be in Z at time 6”. Suppose we know, or are justified in hypothesizing, that the system has the set of time symmetries {ψr : r = 1,2,3,….}, with ψr(t) = t + r, as defined in the previous section. Then, from observation O, we can deduce the following general law: “for any t in T, if the state of the system is in the set Y at time t, there is a 50% probability that it will be in Z at time t + 1”.
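
A sketch of the deduction in code (the dynamics and the sets Y and Z are invented): for a time-homogeneous system, the estimated probability of moving from Y into Z is the same at time 5 as at any other time, which is exactly what the symmetries ψr license.

```python
import random

# Time-translation symmetry in action: in a time-homogeneous chain,
# Pr(state in Z at t+1 | state in Y at t) does not depend on t.
# The transition rule and the sets Y, Z are invented for illustration.
random.seed(0)
Y, Z = {0}, {1}

def run(length=12):
    state, history = 0, []
    for _ in range(length):
        history.append(state)
        # from state 0, jump to 1 with probability 0.5; from 1, return to 0
        state = (1 if random.random() < 0.5 else 0) if state == 0 else 0
    return history

runs = [run() for _ in range(100_000)]
for t in (5, 9):  # times are 1-indexed, as in the text
    conditioned = [h for h in runs if h[t - 1] in Y]
    freq = sum(h[t] in Z for h in conditioned) / len(conditioned)
    print(f"Pr(Z at time {t + 1} | Y at time {t}) ≈ {freq:.3f}")  # ≈ 0.5 for both
```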

However, this example still has a problem. It only shows that if we could make observation O, then our generalization would be warranted, provided the system has the relevant symmetries. But the “if” is a big “if”. Recall what observation O says: “whenever the system’s state is in the set Y at time 5, there is a 50% probability that it will be in the set Z at time 6”. Clearly, this statement is only empirically well supported – and thus a real observation rather than a mere hypothesis – if we can make many observations of possible histories at times 5 and 6. We can do this if the system is an experimental apparatus in a lab or a virtual system in a computer, which we are manipulating and observing “from the outside”, and on which we can perform many “runs” of an experiment. But, as noted above, if we are participants in the system, as in the case of the economy, an ecosystem, or the universe at large, we only get to experience times 5 and 6 once, and we only get to experience one possible history. How, then, can we ever assemble a body of evidence that allows us to make statements such as O?

The solution to this problem lies in the property of ergodicity. This is a property that a system may or may not have and that, if present, serves as the desired metaphysical prerequisite for scientific inference. To explain this property, let us give an example. Suppose T = {1,2,3,…}, and the system has all the time symmetries in the set Ψ = {ψr : r = 1,2,3,….}. Heuristically, the symmetries in Ψ can be interpreted as describing the evolution of the system over time. Suppose each time-step corresponds to a day. Then the history h = (a,b,c,d,e,….) describes a situation where today’s state is a, tomorrow’s is b, the next day’s is c, and so on. The transformed history ψ1(h) = (b,c,d,e,f,….) describes a situation where today’s state is b, tomorrow’s is c, the following day’s is d, and so on. Thus, ψ1(h) describes the same “world” as h, but as seen from the perspective of tomorrow. Likewise, ψ2(h) = (c,d,e,f,g,….) describes the same “world” as h, but as seen from the perspective of the day after tomorrow, and so on.

Given the set Ψ of symmetries, an event E (a subset of Ω) is Ψ-invariant if the inverse image of E under ψ is E itself, for all ψ in Ψ. This implies that if a history h is in E, then ψ(h) will also be in E, for all ψ. In effect, if the world is in the set E today, it will remain in E tomorrow, and the day after tomorrow, and so on. Thus, E is a “persistent” event: an event one cannot escape from by moving forward in time. In a coin-tossing system, where Ψ is still the set of time translations, examples of Ψ-invariant events are “all Heads”, where E contains only the history (Heads, Heads, Heads, …), and “all Tails”, where E contains only the history (Tails, Tails, Tails, …).

The system is ergodic (with respect to Ψ) if, for any Ψ-invariant event E, the unconditional probability of E, i.e., PrΩ(E), is either 0 or 1. In other words, the only persistent events are those which occur in almost no history (i.e., PrΩ(E) = 0) and those which occur in almost every history (i.e., PrΩ(E) = 1). Our coin-tossing system is ergodic, as exemplified by the fact that the Ψ-invariant events “all Heads” and “all Tails” occur with probability 0.

In an ergodic system, it is possible to estimate the probability of any event “empirically”, by simply counting the frequency with which that event occurs. Frequencies are thus evidence for probabilities. The formal statement of this is the following important result from the theory of dynamical systems and stochastic processes.

Ergodic Theorem: Suppose the system is ergodic. Let E be any event and let h be any history. For all times t in T, let Nt be the number of elements r in the set {1, 2, …, t} such that ψr(h) is in E. Then, with probability 1, the ratio Nt/t will converge to PrΩ(E) as t increases towards infinity.

Intuitively, Nt is the number of times the event E has “occurred” in history h from time 1 up to time t. The ratio Nt/t is therefore the frequency of occurrence of event E (up to time t) in history h. This frequency might be measured, for example, by performing a sequence of experiments or observations at times 1, 2, …, t. The Ergodic Theorem says that, almost certainly (i.e., with probability 1), the empirical frequency will converge to the true probability of E, PrΩ(E), as the number of observations becomes large. The estimation of the true conditional probability structure from the frequencies of Heads and Tails in our illustrative coin-tossing system is possible precisely because the system is ergodic.
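
A numerical sketch of the theorem for the coin-tossing system (the parameter and horizon are arbitrary): with E the event “h(1) is Heads”, ψr(h) lies in E exactly when h(1 + r) is Heads, so Nt/t is just the running frequency of Heads along a single history.

```python
import random

# Ergodic Theorem sketch: for E = "h(1) is Heads", psi_r(h) is in E
# iff h(1 + r) is Heads, so N_t / t is the running frequency of Heads.
# The parameter p and the horizon are arbitrary.
random.seed(7)
p = 0.3
T = 50_000
h = [random.random() < p for _ in range(T + 1)]  # h[r] is h(1 + r); True = Heads

N = 0
for t in range(1, T + 1):
    N += h[t]  # psi_t(h) in E  iff  h(1 + t) is Heads
    if t % 10_000 == 0:
        print(f"t = {t:6d}: N_t / t = {N / t:.4f}")  # converges to Pr(E) = p
```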

To understand the significance of this result, let Y and Z be two subsets of X, and suppose E is the event “h(1) is in Y”, while D is the event “h(2) is in Z”. Then the intersection E ∩ D is the event “h(1) is in Y, and h(2) is in Z”. The Ergodic Theorem says that, by performing a sequence of observations over time, we can empirically estimate PrΩ(E) and PrΩ(E ∩ D) with arbitrarily high precision. Thus, we can compute the ratio PrΩ(E ∩ D)/PrΩ(E). But this ratio is simply the conditional probability PrE(D). And so, we are able to estimate the conditional probability that the state at time 2 will be in Z, given that at time 1 it was in Y. This illustrates that, by allowing us to estimate unconditional probabilities empirically, the Ergodic Theorem also allows us to estimate conditional probabilities, and in this way to learn the properties of the conditional probability structure {PrE}E⊆Ω.
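
The same estimation, sketched on an ergodic two-state Markov chain (the chain and the sets Y, Z are invented): the time averages of E and E ∩ D along a single run give the conditional probability as their ratio.

```python
import random

# Estimate Pr(E), Pr(E ∩ D) and hence Pr(D | E) from one history of an
# ergodic two-state Markov chain, where E = "current state in Y" and
# D = "next state in Z". The chain and all parameters are invented.
random.seed(3)
T = 200_000
Y, Z = {0}, {1}
p01, p10 = 0.3, 0.6   # transition probabilities 0 -> 1 and 1 -> 0

state, h = 0, []
for _ in range(T + 1):
    h.append(state)
    u = random.random()
    state = (1 if u < p01 else 0) if state == 0 else (0 if u < p10 else 1)

count_E = sum(h[t] in Y for t in range(T))
count_ED = sum(h[t] in Y and h[t + 1] in Z for t in range(T))
print(f"Pr(E) ≈ {count_E / T:.3f}")             # stationary mass of Y (≈ 2/3)
print(f"Pr(D | E) ≈ {count_ED / count_E:.3f}")  # ≈ p01 = 0.3
```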

We may thus conclude that ergodicity is what allows us to generalize from local observations to global laws. In effect, when we engage in scientific inference about some system, or even about the world at large, we rely on the hypothesis that this system, or the world, is ergodic. If our system, or the world, were “dappled”, then presumably we would not be able to presuppose ergodicity, and hence our ability to make scientific generalizations would be compromised.

Nomological Possibility and Necessity


An event E is nomologically possible in history h at time t if the initial segment of that history up to t admits at least one continuation in Ω that lies in E; and E is nomologically necessary in h at t if every continuation of the history’s initial segment up to t lies in E.

More formally, we say that one history, h’, is accessible from another, h, at time t if the initial segments of h and h’ up to time t coincide, i.e., h’t = ht. We then write h’Rth. The binary relation Rt on possible histories is in fact an equivalence relation (reflexive, symmetric, and transitive). Now, an event E ⊆ Ω is nomologically possible in history h at time t if some history h’ in Ω that is accessible from h at t is contained in E. Similarly, an event E ⊆ Ω is nomologically necessary in history h at time t if every history h’ in Ω that is accessible from h at t is contained in E.

In this way, we can define two modal operators, ♦t and ¤t, to express possibility and necessity at time t. We define each of them as a mapping from events to events. For any event E ⊆ Ω,

♦t E = {h ∈ Ω : for some h’ ∈ Ω with h’Rth, we have h’ ∈ E},

¤t E = {h ∈ Ω : for all h’ ∈ Ω with h’Rth, we have h’ ∈ E}.

So, ♦t E is the set of all histories in which E is possible at time t, and ¤t E is the set of all histories in which E is necessary at time t. Accordingly, we say that “♦t E” holds in history h if h is an element of ♦t E, and “¤t E” holds in h if h is an element of ¤t E. As one would expect, the two modal operators are duals of each other: for any event E ⊆ Ω, we have ¤t E = ~♦t ~E and ♦t E = ~¤t ~E.
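
On a finite toy model, both operators are directly computable; a minimal sketch (the set Ω of histories, the horizon, and the event E are all invented):

```python
# Modal operators ♦_t and ¤_t over a finite toy set Omega of histories,
# represented as tuples of states. Everything here is invented.

Omega = {
    ("a", "b", "c"),
    ("a", "b", "d"),
    ("a", "c", "c"),
    ("b", "b", "c"),
}

def accessible(h1, h2, t):
    """h1 R_t h2: the initial segments up to time t (1-indexed) coincide."""
    return h1[:t] == h2[:t]

def poss(E, t):
    """♦_t E: histories in which E is possible at time t."""
    return {h for h in Omega if any(accessible(h2, h, t) for h2 in E)}

def nec(E, t):
    """¤_t E: histories in which E is necessary at time t."""
    return {h for h in Omega
            if all(h2 in E for h2 in Omega if accessible(h2, h, t))}

E = {h for h in Omega if h[2] == "c"}  # event: the state at time 3 is "c"
print(poss(E, 1))  # here every history can still end up in E after time 1
print(nec(E, 2))   # histories whose first two states already settle E
# Duality check: ¤_t E = Omega - ♦_t (Omega - E)
assert nec(E, 2) == Omega - poss(Omega - E, 2)
```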

Although we have here defined nomological possibility and necessity, we can analogously define logical possibility and necessity: to do this, we must simply replace every occurrence of the set Ω of nomologically possible histories in our definitions with the set H of logically possible histories. Note also that, by defining the operators ♦t and ¤t as functions from events to events, we have adopted a semantic definition of these modal notions. However, we could also define them syntactically, by introducing an explicit modal logic. For each point in time t, the logic corresponding to the operators ♦t and ¤t would then be an instance of a standard S5 modal logic.

The analysis shows how nomological possibility and necessity depend on the dynamics of the system. In particular, as time progresses, the notion of possibility becomes more demanding: fewer events remain possible at each time. And the notion of necessity becomes less demanding: more events become necessary at each time, for instance due to having been “settled” in the past. Formally, for any t and t’ in T with t < t’ and any event E ⊆ Ω,

if ♦t’ E then ♦t E,

if ¤t E then ¤t’ E.

Furthermore, in a deterministic system, for every event E and any time t, we have ♦t E = ¤t E. In other words, an event is possible in any history h at time t if and only if it is necessary in h at t. In an indeterministic system, by contrast, necessity and possibility come apart.

Let us say that one history, h’, is accessible from another, h, relative to a set T’ of time points, if the restrictions of h and h’ to T’ coincide, i.e., h’T’ = hT’. We then write h’RT’h. Accessibility at time t is the special case where T’ is the set of points in time up to time t. We can define nomological possibility and necessity relative to T’ as follows. For any event E ⊆ Ω,

♦T’ E = {h ∈ Ω : for some h’ ∈ Ω with h’RT’h, we have h’ ∈ E},

¤T’ E = {h ∈ Ω : for all h’ ∈ Ω with h’RT’h, we have h’ ∈ E}.

Although these modal notions are much less familiar than the standard ones (possibility and necessity at time t), they are useful for some purposes. In particular, they allow us to express the fact that the states of a system during a particular period of time, T’ ⊆ T, render some events E possible or necessary.

Finally, our definitions of possibility and necessity relative to some general subset T’ of T also allow us to define completely “atemporal” notions of possibility and necessity. If we take T’ to be the empty set, then the accessibility relation RT’ becomes the universal relation, under which every history is related to every other. An event E is possible in this atemporal sense (i.e., ♦E) iff E is a non-empty subset of Ω, and it is necessary in this atemporal sense (i.e., ¤E) iff E coincides with all of Ω. These notions might be viewed as possibility and necessity from the perspective of some observer who has no temporal or historical location within the system and looks at it from the outside.
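
These relative and atemporal notions carry over directly in code; a continuation of the toy sketch above (Ω and E are the same invented examples, redefined here so the snippet stands alone):

```python
# Possibility and necessity relative to a set Tprime of (1-indexed)
# time points; Tprime = set() gives the atemporal case. Omega and E
# are the same invented toy examples as above.

Omega = {("a", "b", "c"), ("a", "b", "d"), ("a", "c", "c"), ("b", "b", "c")}
E = {h for h in Omega if h[2] == "c"}

def accessible_T(h1, h2, Tprime):
    """h1 R_T' h2: h1 and h2 agree at every time point in Tprime."""
    return all(h1[t - 1] == h2[t - 1] for t in Tprime)

def poss_T(E, Tprime):
    return {h for h in Omega if any(accessible_T(h2, h, Tprime) for h2 in E)}

def nec_T(E, Tprime):
    return {h for h in Omega
            if all(h2 in E for h2 in Omega if accessible_T(h2, h, Tprime))}

# Accessibility at time t is the special case Tprime = {1, ..., t}.
print(nec_T(E, {1, 2}))  # matches ¤_2 E from the previous sketch

# Atemporal case: with Tprime = set(), every history is accessible from
# every other, so ♦E = Omega iff E is non-empty, ¤E = Omega iff E = Omega.
print(poss_T(E, set()) == Omega)  # True: E is non-empty
print(nec_T(E, set()) == Omega)   # False: E is a proper subset of Omega
```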

Diagrammatic Political Via The Exaptive Processes


The principle of individuation is the operation that in the matter of taking form, by means of topological conditions […] carries out an energy exchange between the matter and the form until the unity leads to a state – the energy conditions express the whole system. Internal resonance is a state of the equilibrium. One could say that the principle of individuation is the common allagmatic system which requires this realization of the energy conditions the topological conditions […] it can produce the effects in all the points of the system in an enclosure […]

This operation rests on the singularity or starting from a singularity of average magnitude, topologically definite.

If we throw in a pinch of Gilbert Simondon’s concept of transduction, there’s a basic recipe, or toolkit, for exploring the relational intensities between the three informal (theoretical) dimensions of knowledge, power and subjectification pursued by Foucault with respect to formal practice. Supplanting Foucault’s process of subjectification with Simondon’s more eloquent process of individuation marks an entry for imagining the continuous, always partial, phase-shifting resolutions of the individual. This is not identity as fixed and positionable; it’s a preindividual dynamic that affects an always becoming-individual. It’s the pre-formative as performative. Transduction is a process of individuation. It leads to individuated beings, such as things, gadgets, organisms, machines, self and society, which could be the object of knowledge. It is an ontogenetic operation which provisionally resolves incompatibilities between different orders or different zones of a domain.

What is at stake in the bigger picture, in a diagrammatic politics, is double-sided. Just as there is matter in expression and expression in matter, there is event-value in an exchange-value paradigm, which in fact amplifies the force of its power relations. The economic engine of our time feeds on event potential becoming-commodity. It grows and flourishes on the mass production of affective intensities. Reciprocally, there are degrees of exchange-value in eventness. It’s the recursive loopiness of our current Creative Industries diagram, in which the social networking praxis of Web 2.0 is emblematic and has much to learn.