Malignant Acceleration in Tech-Finance. Some Further Rumination on Regulations. Thought of the Day 72.1


Regardless of the positive effects HFT offers, such as reduced spreads, higher liquidity, and faster price discovery, it is mostly its negative side that has caught people’s attention. Several notorious market failures and accidents in recent years all seem to be related to HFT practices, and they have shown how much risk HFT can involve and how large the damage can be.

HFT depends heavily on the reliability of the trading algorithms that generate, route, and execute orders. High-frequency traders must therefore ensure that these algorithms have been tested completely and thoroughly before they are deployed into the live systems of the financial markets. Any improperly tested or prematurely released algorithm may cause losses to both investors and the exchanges. Several examples demonstrate the extent of the ever-present vulnerabilities.

In August 2012, the Knight Capital Group implemented a new liquidity-testing software routine into its trading system, which was running live on the NYSE. The system started making bizarre trading decisions, quadrupling the price of one company, Wizzard Software, as well as bidding up the price of much larger entities, such as General Electric. Within 45 minutes, the company lost USD 440 million. After this event and the weakening of its capital base, Knight Capital agreed to merge with another algorithmic trading firm, Getco, which is the biggest HFT firm in the U.S. today. This example emphasizes the importance of firms implementing precautions to ensure that their algorithms are not mistakenly deployed.

Another example is Everbright Securities in China. In 2013, the state-owned brokerage firm Everbright Securities Co. sent more than 26,000 mistaken buy orders, worth RMB 23.4 billion (USD 3.82 billion), to the Shanghai Stock Exchange (SSE), pushing its benchmark index up 6% in two minutes. This resulted in a trading loss of approximately RMB 194 million (USD 31.7 million). In a follow-up evaluative study, the China Securities Regulatory Commission (CSRC) found significant flaws in Everbright’s information and risk management systems.

The damage caused by HFT errors is not limited to the trading firms themselves; it may also involve the stock exchanges and the stability of the related financial market. On Friday, May 18, 2012, the stock of the social network giant Facebook was issued on the NASDAQ exchange, in the most anticipated initial public offering (IPO) in the exchange’s history. The offering attracted HFT traders, very large order flows were expected, and before the IPO, NASDAQ was confident in its ability to deal with the high volume of orders. However, technology problems with the opening made a mess of the IPO.

But when the deluge of orders to buy, sell, and cancel trades came, NASDAQ’s trading software began to fail under the strain. This resulted in a 30-minute delay on NASDAQ’s side and a 17-second blackout for all stock trading at the exchange, causing further panic. Scrutiny of the problems immediately led to fines for the exchange and accusations that HFT traders bore some responsibility too. Problems persisted after the opening, with many customer orders from institutional and retail buyers unfilled for hours or never filled at all, while others ended up buying more shares than they had intended. This incredible gaffe, estimated to have cost traders around USD 100 million, eclipsed NASDAQ’s achievement in winning Facebook’s IPO, the third largest in U.S. history.

Another instance occurred on May 6, 2010, when U.S. financial markets were surprised by what has been referred to ever since as the “Flash Crash”. Within less than 30 minutes, the main U.S. stock markets experienced their largest single-day price declines, with a drop of more than 5% for many U.S.-based equity products. In addition, the Dow Jones Industrial Average (DJIA), at its lowest point that day, fell by nearly 1,000 points, although this was followed by a rapid rebound. This brief period of extreme intraday volatility demonstrated the weakness of the structure and stability of U.S. financial markets, as well as the opportunities it presents for volatility-focused HFT traders. Although a subsequent investigation by the SEC cleared high-frequency traders of having directly caused the Flash Crash, they were still blamed for exaggerating market volatility and withdrawing liquidity for many U.S.-based equities (Flash Boys).

Since the mid-2000s, the average trade size in the U.S. stock market had plummeted, the markets had fragmented, and the gap in time between the public view of the markets and the view of high-frequency traders had widened. The rise of high-frequency trading had also been accompanied by a rise in stock market volatility – over and above the turmoil caused by the 2008 financial crisis. The price volatility within each trading day in the U.S. stock market between 2010 and 2013 was nearly 40 percent higher than the volatility between 2004 and 2006, for instance. There were days in 2011 in which volatility was higher than in the most volatile days of the dot-com bubble.

Although these different incidents have different causes, the effects were similar and some common conclusions can be drawn. The presence of algorithmic trading and HFT in the financial markets exacerbates the adverse impacts of trading-related mistakes. It may lead to extreme market volatility and to surprises about suddenly diminished liquidity. This raises concerns for regulators about the stability and health of the financial markets. With the continuous and fast development of HFT, larger and larger shares of equity trades in the U.S. financial markets came from high-frequency traders, and there was mounting evidence that HFT-related errors disturbed market stability and caused significant financial losses. This led the regulators to increase their attention and effort to provide the exchanges and traders with guidance on HFT practices. They also expressed concerns about high-frequency traders extracting profit at the cost of traditional investors and even manipulating the market. For instance, high-frequency traders can generate a large number of orders within microseconds to exacerbate a trend. Other types of misconduct include ping orders, which use some orders to detect other hidden orders, and quote stuffing, which issues a large number of orders to create uncertainty in the market. HFT creates room for these kinds of market abuses, and its blazing speed and huge trade volumes make their detection difficult for regulators.

Regulators have taken steps to increase their regulatory authority over HFT activities. Some of the problems that arose in the mid-2000s led to regulatory hearings in the United States Senate on dark pools, flash orders, and HFT practices. Another example occurred after the Facebook IPO problem, which led the SEC to call for a limit up-limit down mechanism at the exchanges to prevent trades in individual securities from occurring outside a specified price range, so that market volatility would be under better control (a toy version of such a band check is sketched below). These regulatory actions put stricter requirements on HFT practices, aiming to minimize market disturbance when many fast trading orders occur within a day.
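As an illustration of the limit up-limit down idea, here is a minimal sketch in Python. It is a deliberate simplification, not the actual LULD specification: the reference price is assumed to be the mean of the last few trades and the band a flat ±5%, whereas the real mechanism uses tiered percentage bands and formal trading pauses.

```python
from collections import deque
from statistics import mean

class LimitUpLimitDown:
    """Hypothetical, simplified price-band check (not the real LULD spec)."""

    def __init__(self, band_pct=0.05, window=20):
        self.band_pct = band_pct            # assumed flat band; real bands are tiered
        self.trades = deque(maxlen=window)  # rolling window standing in for the reference period

    def check(self, price):
        if not self.trades:                 # first trade seeds the reference price
            self.trades.append(price)
            return "accepted"
        ref = mean(self.trades)
        lower, upper = ref * (1 - self.band_pct), ref * (1 + self.band_pct)
        if lower <= price <= upper:
            self.trades.append(price)
            return "accepted"
        return f"rejected: {price:.2f} outside band [{lower:.2f}, {upper:.2f}]"

luld = LimitUpLimitDown()
print(luld.check(100.0))  # accepted: seeds the reference
print(luld.check(101.0))  # accepted: within 5% of the reference
print(luld.check(80.0))   # rejected: a flash-crash-sized move is stopped
```

The point of the sketch is only that such a band turns a runaway cascade of erroneous orders into a stream of rejections (or, in the real mechanism, a trading pause) instead of executed trades.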

Regulating the Velocities of Dark Pools. Thought of the Day 72.0


On 22 September 2010, the SEC chair Mary Schapiro signaled that US authorities were considering the introduction of regulations targeted at HFT:

…High frequency trading firms have a tremendous capacity to affect the stability and integrity of the equity markets. Currently, however, high frequency trading firms are subject to very little in the way of obligations either to protect that stability by promoting reasonable price continuity in tough times, or to refrain from exacerbating price volatility.

However, regulating an industry working towards moving as fast as the speed of light is no ordinary administrative task. Modern finance is undergoing a fundamental transformation: artificial intelligence, mathematical models, and supercomputers have replaced human intelligence, human deliberation, and human execution. Modern finance is becoming cyborg finance – an industry that is faster, larger, more complex, more global, more interconnected, and less human. C W Lin proposes a number of principles for regulating this cyborg finance industry:

  1. Update antiquated paradigms of reasonable investors and compartmentalised institutions, confront the emerging institutional realities, and realise that the old paradigms of governance of markets may be ill-suited for the new finance industry;
  2. Enhance disclosure which recognises the complexity and technological capacities of the new finance industry;
  3. Adopt regulations to moderate the velocities of finance, realising that as these approach the speed of light they may contain more risks than rewards for the new financial industry;
  4. Introduce smarter coordination harmonising financial regulation beyond traditional spaces of jurisdiction.

Electronic markets will require international coordination, surveillance, and regulation. The high-frequency trading environment has the potential to generate errors and losses at a speed and magnitude far greater than that in a floor or screen-based trading environment. Moreover, issues related to risk management of these technology-dependent trading systems are numerous and complex and cannot be addressed in isolation within domestic financial markets. For example, placing limits on high-frequency algorithmic trading or restricting unfiltered sponsored access and co-location within one jurisdiction might only drive trading firms to another jurisdiction where controls are less stringent.

In these regulatory endeavours it will be vital to remember that not all innovation is intrinsically good, that some of it might be inherently dangerous, and that the objective is to make a more efficient and equitable financial system, not simply a faster one: despite its fast computers and credit derivatives, the current financial system does not seem better at transferring funds from savers to borrowers than the financial system of 1910. Furthermore, as Thomas Piketty‘s Capital in the Twenty-First Century amply demonstrates, any thought of the democratisation of finance induced by the huge expansion of superannuation funds, together with the increased access to finance afforded by credit cards and ATMs, is something of a fantasy, since levels of structural inequality have endured through these technological transformations. The tragedy is that under the guise of technological advance and sophistication we could be destroying the capacity of financial markets to fulfil their essential purpose, as Haldane eloquently states:

An efficient capital market transfers savings today into investment tomorrow and growth the day after. In that way, it boosts welfare. Short-termism in capital markets could interrupt this transfer. If promised returns the day after tomorrow fail to induce saving today, there will be no investment tomorrow. If so, long-term growth and welfare would be the casualty.

Accelerated Capital as an Anathema to the Principles of Communicative Action. A Note Quote on the Reciprocity of Capital and Ethicality of Financial Economics


Markowitz portfolio theory explicitly observes that portfolio managers are not (expected) utility maximisers, as they diversify, and offers the hypothesis that a desire for reward is tempered by a fear of uncertainty. The model concludes that all investors should hold the same portfolio: their individual risk-reward objectives are satisfied by weighting this ‘index portfolio’ against riskless cash in the bank, placing each investor at a point on the Capital Market Line. The slope of the Capital Market Line is the market price of risk, an important parameter in arbitrage arguments.
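For reference, the Capital Market Line alluded to here can be written out explicitly; this is the standard textbook formula, with $r_f$ the riskless rate, $E[R_m]$ and $\sigma_m$ the expected return and volatility of the index portfolio, and $\sigma_p$ the volatility of the investor’s chosen mix:

$$E[R_p] = r_f + \frac{E[R_m] - r_f}{\sigma_m}\,\sigma_p,$$

so the slope $(E[R_m] - r_f)/\sigma_m$ – the market’s Sharpe ratio – is exactly the ‘market price of risk’ referred to above.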

Merton had initially attempted to provide an alternative to Markowitz based on utility maximisation employing stochastic calculus. He was only able to resolve the problem by employing the hedging arguments of Black and Scholes, and in doing so built a model based on the absence of arbitrage, free of turpe-lucrum. The prescriptive statement that “it should not be possible to make sure profits” is explicit in the Efficient Markets Hypothesis and in the use of an Arrow security in the context of the Law of One Price. Based on these observations, we conjecture that the whole paradigm of financial economics is built on the principle of balanced reciprocity. In order to explore this conjecture we shall examine the relationship between commerce and themes in Pragmatic philosophy. Specifically, we highlight Robert Brandom’s position (Making It Explicit: Reasoning, Representing, and Discursive Commitment) that there is a pragmatist conception of norms – a notion of primitive correctnesses of performance implicit in practice that precede, and are presupposed by, their explicit formulation in rules and principles.

The ‘primitive correctnesses’ of commercial practices were recognised by Aristotle when he investigated the nature of Justice in the context of commerce, and then by Olivi when he looked favourably on merchants. They are exhibited in the doux-commerce thesis: compare Fourcade and Healy’s contemporary description of the thesis – commerce teaches ethics mainly through its communicative dimension, that is, by promoting conversations among equals and exchange between strangers – with Putnam’s description of Habermas’ communicative action as based on “the norm of sincerity, the norm of truth-telling, and the norm of asserting only what is rationally warranted …[and] is contrasted with manipulation” (Hilary Putnam, The Collapse of the Fact/Value Dichotomy and Other Essays).

There are practices (that should be) implicit in commerce that make it an exemplar of communicative action. A further expression of markets as centres of communication is manifested in the Asian description of a market, which brings to mind Donald Davidson’s argument (Subjective, Intersubjective, Objective) that knowledge is not the product of bipartite conversations but of a tripartite relationship between two speakers and their shared environment. Replacing the negotiation between market agents with an algorithm that delivers a theoretical price replaces ‘knowledge’, generated through communication, with dogma. The problem with the performativity that Donald MacKenzie is concerned with (An Engine, Not a Camera: How Financial Models Shape Markets) is one of monism. In employing pricing algorithms, the markets cannot perform to something that comes close to ‘true belief’, which can only be identified through communication between sapient humans. This is an almost trivial observation to (successful) market participants, but difficult to appreciate for spectators who seek to attain ‘objective’ knowledge of markets from a distance. The relevance to financial crises is the position that ‘true belief’ is about establishing coherence through myriad triangulations centred on an asset, rather than relying on a theoretical model.

Shifting gears now: unless the martingale measure is a by-product of a hedging approach, the price given by such a martingale measure is not related to the cost of a hedging strategy, and therefore the meaning of such ‘prices’ is not clear. If the hedging argument cannot be employed, as in the markets studied by Cont and Tankov (Financial Modelling with Jump Processes), there is no conceptual framework supporting the prices obtained from the Fundamental Theorem of Asset Pricing. This lack of meaning can be interpreted as a consequence of the strict fact/value dichotomy in contemporary mathematics that came with the eclipse of Poincaré’s Intuitionism by Hilbert’s Formalism and Bourbaki’s Rationalism. The practical problem of supporting the social norms of market exchange has been replaced by a theoretical problem of developing formal models of markets. These models then legitimate the actions of agents in the market without having to make reference to explicitly normative values.
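To make the objection concrete: the Fundamental Theorem of Asset Pricing delivers prices of the standard form

$$V_0 = \mathbb{E}^{\mathbb{Q}}\!\left[e^{-rT} X\right],$$

where $X$ is the payoff at time $T$, $r$ the riskless rate, and $\mathbb{Q}$ an equivalent martingale measure. In a complete market $\mathbb{Q}$ is unique and $V_0$ coincides with the cost of the replicating hedge; in the incomplete markets Cont and Tankov study there are infinitely many admissible $\mathbb{Q}$, each giving a different $V_0$, which is precisely why such ‘prices’ lose their meaning as hedging costs.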

The Efficient Market Hypothesis is based on the axiom that the market price is determined by the balance between supply and demand, and so an increase in trading facilitates the convergence to equilibrium. If this axiom is replaced by the axiom of reciprocity, the justification for speculative activity in support of efficient markets disappears. In fact, the axiom of reciprocity would de-legitimise ‘true’ arbitrage opportunities as being unfair. This would not necessarily make the activities of actual market arbitrageurs illicit, since there are rarely strategies that are without the risk of a loss; however, it would place more emphasis on the risks of speculation and inhibit the hubris that has been associated with the prelude to the recent Crisis. These points raise the question of the legitimacy of speculation in the markets. In an attempt to understand this issue, Gabrielle and Reuven Brenner identify three types of market participants. ‘Investors’ are preoccupied with future scarcity and so defer income. Because uncertainty exposes the investor to the risk of loss, investors wish to minimise uncertainty at the cost of potential profits; this is the basis of classical investment theory. ‘Gamblers’ will bet on an outcome taking odds that have been agreed on by society, such as with a sporting bet or in a casino; this relates to de Moivre’s and Montmort’s ‘taming of chance’. ‘Speculators’ bet on a miscalculation of the odds quoted by society, and the reason why speculators are regarded as socially questionable is that they have opinions that are explicitly at odds with the consensus: they are practitioners who rebel against a theoretical ‘Truth’. This is captured in Arjun Appadurai’s argument that the leading agents in modern finance “believe in their capacity to channel the workings of chance to win in the games dominated by cultures of control . . . [they] are not those who wish to ‘tame chance’ but those who wish to use chance to animate the otherwise deterministic play of risk [quantifiable uncertainty]”.

In the context of Pragmatism, financial speculators embody pluralism, a concept essential to Pragmatic thinking and an antidote to the problem of radical uncertainty. Appadurai was motivated to study finance by Marcel Mauss’ essay Le Don (The Gift), which explores the moral force behind reciprocity in primitive and archaic societies, and goes on to say that the contemporary financial speculator is “betting on the obligation of return”, and that this is the fundamental axiom of contemporary finance. David Graeber (Debt: The First 5,000 Years) also recognises the fundamental position reciprocity has in finance, but whereas Appadurai recognises the importance of reciprocity in the presence of uncertainty, Graeber essentially ignores uncertainty in an analysis that ends with the conclusion that “we don’t ‘all’ have to pay our debts”. In advocating that reciprocity need not be honoured, Graeber is not just challenging contemporary capitalism but also the foundations of the civitas, based on equality and reciprocity. The origins of Graeber’s argument lie in the first half of the nineteenth century. In 1836 John Stuart Mill defined political economy as being concerned with “[man] solely as a being who desires to possess wealth, and who is capable of judging of the comparative efficacy of means for obtaining that end”.

In Principles of Political Economy with Some of Their Applications to Social Philosophy, Mill defended Thomas Malthus’ An Essay on the Principle of Population, which focused on scarcity. Mill was writing at a time when Europe was struck by the cholera pandemic of 1829–1851 and the famines of 1845–1851, and while Lord Tennyson was describing nature as “red in tooth and claw”. At this time, society’s fear of uncertainty seems to have been replaced by a fear of scarcity, and this preoccupation with scarcity came to dominate economic thought through the twentieth century. Almost a hundred years after Mill, Lionel Robbins defined economics as “the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses”. Dichotomies emerged in the aftermath of the Cartesian revolution that aimed to remove doubt from philosophy: theory and practice, subject and object, facts and values, means and ends are all separated. In this environment ex cathedra norms, in particular utility (profit) maximisation, encroach on commercial practice.

In order to set boundaries on commercial behaviour motivated by profit maximisation, particularly when market uncertainty returned after the Nixon shock of 1971, society imposes regulations on practice. As a consequence, two competing ethics – a functional Consequentialist ethic guiding market practices and a regulatory Deontological ethic attempting to stabilise the system – vie for supremacy. It is in this debilitating competition between two essentially theoretical ethical frameworks that we offer an explanation for the Financial Crisis of 2007-2009: profit maximisation, not speculation, is destabilising in the presence of radical uncertainty, and regulation cannot keep up with motivated profit maximisers who can justify their actions through abstract mathematical models that bear little resemblance to actual markets. An implication of reorienting financial economics to focus on markets as centres of ‘communicative action’ is that markets could become self-regulating, in the same way that the legal or medical spheres are self-regulated through professions. This is not a ‘libertarian’ argument based on freeing the Consequentialist ethic from a Deontological brake. Rather, it argues that being a market participant entails restricting norms on the agent, such as sincerity and truth-telling, that support the creation of knowledge, of asset prices, within a broader objective of social cohesion. This immediately calls into question the legitimacy of algorithmic/high-frequency trading, which seems anathema to the principles of communicative action.

Crisis. Thought of the Day 66.0


Why do we have a crisis? The system, by being liberal, allowed for the condensation of wealth. This went well as long as there was exponential growth and humans also saw their share of the wealth growing. Now, with saturation, growth of wealth for humans is no longer possible; in fact, their wealth declines, since the growth of capital has to continue (by definition). Austerity will accelerate this reduction of wealth, and is thus the most stupid thing one could do. If debt is paid back, money disappears and the economy shrinks. The end point will be zero economy, zero money, and a remaining debt. It is not possible to pay back the money borrowed. The money simply does not exist and cannot be printed by the borrowers in a multi-region single-currency economy.

What will be the outcome? If countries are allowed to go bankrupt, there might be a way that the economy recovers. If countries continue to be bailed out, the crisis will continue. It will end in a situation where all countries have to be bailed out by each other, even the strong ones. It is not possible for all countries to pay back all the debt, even if it were advisable, without the borrowing countries printing money. If countries are not allowed to go bankrupt, the ‘heritage’, the capital of the citizens of countries, now belonging to the people, will be confiscated and will belong to capital, with its seat in tax havens. The people will then pay for using this heritage, which belonged to them not so long ago, and will actually pay for it with money that will be borrowed. This is a modern form of slavery, where people possess nothing, effectively not even their own labor power, which is pawned for generations to come. We will be back to a feudal system.

In the long term, if we insist on pure liberalism without boundaries, it is possible that human production and consumption disappear from this planet, to be substituted by something that is fitter in a Darwinian sense. What we need is something that defends the rights and interests of humans and not of capital, whereas all the measures – all politicians and political lobbies – currently defend the rights of capital. It is obvious that the political structures have no remorse in putting humans under more fiscal stress, since people are inflexible and cannot flee the tax burden. Capital, on the other hand, is completely flexible, and any attempt to increase the fiscal pressure makes it flee the country. Again, the Prisoner’s Dilemma drives all countries to increase taxes on people and labor while reducing the taxes on capital and money. We could summarize this by saying that capital has joined forces – has globalized – while labor and the people are still not united in the eternal class struggle. This imbalance means that the people draw the short straw every time. And every time the straw gets shorter.

Conjuncted: Banking – The Collu(i)sion of Housing and Stock Markets


There are two main aspects to look at here as regards banking. The first is the link between banking and houses. In most countries, lending of money is done on the basis of property, especially houses; houses are often used as collateral for mortgages. If the value of a house increases, more money can be borrowed from the banks and more money can be injected into society. More investment is generally good for a country. It is therefore of prime importance for a country to keep house prices high.

The way this is done is by facilitating the borrowing of money, for instance through fiscal stimulation. Most countries have a tax break on mortgages. Yet the effect of these tax breaks for the house buyers themselves is absolutely zero. That is because the price of a house is determined on the market by supply and demand. If neither the supply nor the demand changes, the price will be fixed by ‘what people can afford’. Imagine there are 100 houses for sale and 100 buyers. Imagine the price on the market winds up being 100 thousand Rupees, with a mortgage payment (at a 3% interest rate) of 3 thousand Rupees per year, exactly what people can afford. Now imagine that the government creates a tax break for buyers, stipulating that they get 50% of the mortgage payment back from the state as a fiscal refund. Suddenly, the buyers can afford 6 thousand Rupees per year, and the market price of the house will rise to 200 thousand Rupees. The net effect for the buyer is zero. Yet the price of the house has doubled, and this is a very good incentive for the economy. This is the reason why nearly all governments have tax breaks for home owners.
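The arithmetic of the example can be checked in a few lines of Python. This is a minimal sketch under the text’s own simplifying assumption of an interest-only mortgage, where the affordable price is just the effective annual budget divided by the interest rate:

```python
def market_price(annual_budget, rate, subsidy_pct=0.0):
    # A tax refund of subsidy_pct of the mortgage payment stretches the
    # buyers' budget; competition then bids the price up to match it.
    effective_budget = annual_budget / (1.0 - subsidy_pct)
    return effective_budget / rate

# 100 identical buyers, 100 identical houses, 3% interest, 3,000 Rupees/year budget
print(market_price(3_000, 0.03))                   # 100000.0 without the tax break
print(market_price(3_000, 0.03, subsidy_pct=0.5))  # 200000.0 with a 50% refund
```

The subsidy is fully capitalised into the price, which is the sense in which its net effect for the buyer is zero.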

Yet another way of driving the price of houses up is by reducing the supply. Socialist countries made it a strong point on their agenda that having a home is a human right, and they tried to build houses for everybody. This causes the destruction of the economy: if the supply of houses is so high that their value drops too much, the possibility of investment based on borrowing money with the house as collateral is severely reduced, and a collapse of the economy is unavoidable. Technically speaking, it is of extreme simplicity to build a house for everybody, even a villa or a palace. Yet implementing this idea would imply a recession, since modern economies are based on house prices. It is better to cut off the supply (destroy houses) to help the economy.

The next item of banking is the stockholders. It is often said that the stock market is the axis-of-evil of a capitalist society. Indeed, the stock owners get the profit of capital, and money eventually piles up with them. However, it is not the case that the stock owners are evil people who care only about money. It is principally the managers who are the culprits. Mostly bank managers.

To give an example: imagine I own 2% of each of three banks, State Bank, Best Bank, and Credit Bank. Now imagine that the other 98% of the stock of each bank is placed at the other two banks. State Bank is thus a 49% owner of Best Bank and a 49% owner of Credit Bank; in turn, State Bank is owned for 49% by Best Bank and for 49% by Credit Bank. The thing is that I am the full 100% owner of all three banks. For instance, I own 2% of State Bank directly. But I also own 2% of two banks that each own 49% of State Bank, and 2% of banks that own 49% of banks that own 49% of State Bank, and so on. This series adds up to 100%: I am the full owner of State Bank, and the same applies to Best Bank and Credit Bank. This is easy to see, since there exist no other stock owners of the three banks. These banks are fully mine. However, if I go to a stockholders meeting, I will be outvoted on all subjects, especially on the subject of financial reward for the manager. If today the 10-million-Rupee salary of Arundhati Bhatti of State Bank is discussed, it will get 98% of the votes, namely those of Gautum Ambani representing Best Bank and Mukesh Adani representing Credit Bank. They vote in favor, because next week come the stockholders meetings of their own banks. This game only ends when Mukesh Adani gets angry with Arundhati Bhatti.
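The geometric series in this example can be made explicit. My look-through stake x in any one bank satisfies x = 0.02 + 2 × 0.49 × x, since besides my direct 2% I own a fraction x of each of the two corporate shareholders. A sketch using exact rational arithmetic (the bank structure is, of course, the text’s invented one):

```python
from fractions import Fraction

def effective_ownership(direct=Fraction(2, 100), cross=Fraction(49, 100)):
    # Solve x = direct + 2*cross*x, i.e. sum the geometric series
    # direct * (1 + 2*cross + (2*cross)**2 + ...) = direct / (1 - 2*cross).
    return direct / (1 - 2 * cross)

print(effective_ownership())  # 1 -- the full 100% owner of each bank
```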

This structure, placing stock at each other’s companies, is a form of bypassing the stockholders – the owners – and allows for the plundering of a company.

There is a side effect which is as ‘beneficial’ as the one above. Often, the general manager’s salary is based on a bonus system: the better a bank performs, the higher the salary of the manager. This high performance can easily be bogus. Imagine the three banks above. The profit is distributed over the shareholders in the form of dividend. Imagine now that each bank makes 2 million in profit on normal business operations. Each bank can then easily pay out 100 million in dividend without loss! For example, State Bank distributes 100 million: 2 million to me, 49 million to Best Bank, and 49 million to Credit Bank. From those two banks it also receives 49 million Rupees each. Thus, the net flux of money out of each bank is only 2 million Rupees.
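A few lines suffice to verify the flux claim; the figures are the text’s, in millions of Rupees:

```python
payout, to_me, to_peer = 100, 2, 49  # each bank: 100 out, split 2 + 49 + 49

net_outflow_per_bank = payout - 2 * to_peer  # pays 100, gets 49 back from each peer
print(net_outflow_per_bank)      # 2 -- only the real 2 million of profit leaves each bank
print(3 * payout)                # 300 million of 'profit' presented to shareholders
print(3 * net_outflow_per_bank)  # 6 million actually generated by the three banks
```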

Shareholders often use as a rule of thumb a target share price of 20 times the dividend, because that implies a 5% return on investment, slightly better than putting the money in a bank (which anyway invests it in that company, gets 5%, and gives you 3%). However, the dividend can be highly misleading: 2 million in profit is made, yet 100 million in dividend is paid. Each bank uses this trick. The general managers can present beautiful data and collect a fat bonus.

The only thing stopping this game is taxation. What if the government decides to put a 25% tax on dividends? Suddenly each bank has to pay 25 million in tax where it made only 2 million in real profit. The three banks claimed to have made 300 million profit in total, while they factually only made 6 million; the rest came from passing money around to each other. They would have to pay 75 million in dividend tax. How would they manage? That is why governments normally give banks a tax break on dividends (except for small stockholders like me). Governments like to see high profits, since they also inflate GDP and thus help guarantee low interest rates on state loans.
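The tax arithmetic, again in millions of Rupees and per the text’s figures:

```python
dividend_paid, real_profit, tax_rate = 100, 2, 0.25

tax_per_bank = tax_rate * dividend_paid
print(tax_per_bank)      # 25.0 owed per bank, against 2 of real profit
print(3 * tax_per_bank)  # 75.0 owed by the trio, which only earned 6 in total
```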

Actually, even without taxation, how will they manage to keep presenting nice data in a year when no profit is made on banking activity?

Reclaim Modernity: Beyond Markets, Beyond Machines (Mark Fisher & Jeremy Gilbert)


It is understandable that the mainstream left has traditionally been suspicious of anti-bureaucratic politics. The Fabian tradition has always believed – has been defined by its belief – in the development and extension of an enlightened bureaucracy as the main vehicle of social progress. Attacking ‘bureaucracy’ has been – since at least the 1940s – a means by which the Right has attacked the very idea of public service and collective action. Since the early days of Thatcherism, there has been very good reason to become nervous whenever someone attacks bureaucracy, because such attacks are almost invariably followed by plans not for democratisation, but for privatisation.

Nonetheless, it is precisely this situation that has produced a certain paralysis of the Left in the face of one of its greatest political opportunities, an opportunity which it can only take if it can learn to speak an anti-bureaucratic language with confidence and conviction. On the one hand, this is a simple populist opportunity to unite constituencies within both the public and private sectors: simple, but potentially strategically crucial. As workers in both sectors and as users of public services, the public dislike bureaucracy and apparent over-regulation. The Left misses an enormous opportunity if it fails to capitalise on this dislike and transform it into a set of democratic demands.

On the other hand, anti-bureaucratism marks one of the critical points of failure and contradiction in the entire neoliberal project. For the truth is that neoliberalism has not kept its promise in this regard. It has not reduced the interference of managerial mechanisms and apparently pointless rules and regulations in the working life of public-sector professionals, or of public-service users, or of the vast majority of workers in the private sector. In fact it has led in many cases to an enormous proliferation and intensification of just these processes. Targets, performance indicators, quantitative surveys and managerial algorithms dominate more of life today than ever before, not less. The only people who really suffer less regulation than they did in the past are the agents of finance capital: banks, traders, speculators and fund managers.

Where de-regulation is a reality for most workers is not in their working lives as such, but in the removal of those regulations which once protected their rights to secure work, and to a decent life outside of work (pensions, holidays, leave entitlements, etc.). The precarious labour market is not a zone of freedom for such workers, but a space in which the fact of precarity itself becomes a mechanism of discipline and regulation. It only becomes a zone of freedom for those who already have enough capital to be able to choose when and where to work, or to benefit from the hyper-mobility and enforced flexibility of contemporary capitalism.

Reclaiming Modernity: Beyond Markets, Beyond Machines

Rhizomatic Topology and Global Politics. A Flirtatious Relationship.

 


Deleuze and Guattari see concepts as rhizomes, biological entities endowed with unique properties. They see concepts as spatially representable, where the representation contains principles of connection and heterogeneity: any point of a rhizome must be connected to any other. Deleuze and Guattari list the possible benefits of spatial representation of concepts, including the ability to represent complex multiplicity, the potential to free a concept from foundationalism, and the ability to show both breadth and depth. In this view, geometric interpretations move away from the insidious understanding of the world in terms of dualisms, dichotomies, and lines, to understand conceptual relations in terms of space and shapes. The ontology of concepts is thus, in their view, appropriately geometric: a multiplicity defined not by its elements, nor by a center of unification and comprehension, but instead measured by its dimensionality and its heterogeneity. The conceptual multiplicity is already composed of heterogeneous terms in symbiosis, and is continually transforming itself, such that it is possible to follow, and map, not only the relationships between ideas but how they change over time. In fact, the authors claim that there are further benefits to geometric interpretations of concepts which are unavailable in other frames of reference. They outline the unique contribution of geometric models to the understanding of contingent structure:

Principle of cartography and decalcomania: a rhizome is not amenable to any structural or generative model. It is a stranger to any idea of genetic axis or deep structure. A genetic axis is like an objective pivotal unity upon which successive stages are organized; deep structure is more like a base sequence that can be broken down into immediate constituents, while the unity of the product passes into another, transformational and subjective, dimension. (Deleuze and Guattari)

The word that Deleuze and Guattari use for ‘multiplicities’ can also be translated to the topological term ‘manifold.’ If we thought about their multiplicities as manifolds, there are a virtually unlimited number of things one could come to know, in geometric terms, about (and with) our object of study, abstractly speaking. Among those unlimited things we could learn are properties of groups (homological, cohomological, and homeomorphic), complex directionality (maps, morphisms, isomorphisms, and orientability), dimensionality (codimensionality, structure, embeddedness), partiality (differentiation, commutativity, simultaneity), and shifting representation (factorization, ideal classes, reciprocity). Each of these functions allows for a different, creative, and potentially critical representation of global political concepts, events, groupings, and relationships. This is how concepts are to be looked at: as manifolds. With such a dimensional understanding of concept-formation, it is possible to deal with complex interactions of like entities, and interactions of unlike entities. Critical theorists have emphasized the importance of such complexity in representation a number of times, speaking about it in terms compatible with mathematical methods if not mathematically. For example, Foucault’s declaration that “practicing criticism is a matter of making facile gestures difficult” both reflects and is reflected in many critical theorists’ projects of revealing the complexity in (apparently simple) concepts deployed in global politics. This leads to a shift in the concept of danger as well, where danger is not an objective condition but “an effect of interpretation”. Critical thinking about how-possible questions reveals a complexity to the concept of the state which is often overlooked in traditional analyses, sending a wave of added complexity through other concepts as well. This work seeking complexity serves one of the major underlying functions of critical theorizing: finding invisible injustices in (modernist, linear, structuralist) givens in the operation and analysis of global politics.

In a geometric sense, this complexity could be thought about as multidimensional mapping. In theoretical geometry, the process of mapping conceptual spaces is not primarily empirical, but for the purpose of representing and reading the relationships between information, including identification, similarity, differentiation, and distance. The reason for defining topological spaces in math, the essence of the definition, is that there is no absolute scale for describing the distance or relation between certain points, yet it makes sense to say that an (infinite) sequence of points approaches some other point (but again, with no way to describe how quickly or from what direction it might be approaching). This seemingly weak relationship, which is defined purely ‘locally’, i.e., in a small locale around each point, is often surprisingly powerful: using only the relationship of approaching parts, one can distinguish between, say, a balloon, a sheet of paper, a circle, and a dot.
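For readers who want the formal backbone of this ‘purely local’ talk, the standard definition runs as follows (textbook material, not specific to the authors discussed): a topology on a set $X$ is a collection $\tau$ of subsets of $X$ (the ‘open sets’) such that

$$\emptyset \in \tau, \quad X \in \tau, \quad \bigcup_{i \in I} U_i \in \tau \ \text{for any } \{U_i\}_{i \in I} \subseteq \tau, \quad U \cap V \in \tau \ \text{for any } U, V \in \tau.$$

A sequence $(x_n)$ converges to $x$ when every open set containing $x$ contains all but finitely many of the $x_n$ – a notion that mentions no distances, which is why it can say that points approach one another without saying how fast or from where.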

To each delineated concept one should associate a topological space, in a (necessarily) non-explicit yet definite manner. Whenever one has a relationship between concepts (here we think of the primary relationship as being that of constitution, but not restrictively), we ‘specify’ a function (or inclusion, or relation) between the topological spaces associated to the concepts. In these terms, a conceptual space is in essence a multidimensional space in which the dimensions represent qualities or features of that which is being represented. Such an approach can be leveraged for thinking about conceptual components, dimensionality, and structure. In these terms, dimensions can be thought of as properties or qualities, each with their own (often multidimensional) properties or qualities. Since a key goal of the modeling of conceptual spaces is representation, a key (mathematical and theoretical) goal of concept space mapping is

associationism, where associations between different kinds of information elements carry the main burden of representation. (Conceptual Spaces as a Framework for Knowledge Representation)

To this end,

objects in conceptual space are represented by points, in each domain, that characterize their dimensional values. (A Concept Geometry for Conceptual Spaces)

These dimensional values can be arranged in relation to each other, as Gardenfors explains:

distances represent degrees of similarity between objects represented in space, and therefore conceptual spaces are “suitable for representing different kinds of similarity relation”.

These similarity relationships can be explored across ideas of a concept and across contexts, but also over time, since “with the aid of a topological structure, we can speak about continuity, e.g., a continuous change” – a possibility which can be found only in treating concepts as topological structures, and not in linguistic descriptions or set-theoretic representations.
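As a closing illustration, here is a minimal sketch of such a conceptual space in Python. The domains, the objects, and the choice of similarity as an exponentially decaying function of Euclidean distance are all assumptions made for the example, not anything specified by the quoted authors:

```python
import math

# Quality dimensions with their ranges; invented for illustration.
space = {"hue": (0.0, 1.0), "size": (0.0, 10.0)}

def normalise(point):
    # Rescale each quality dimension to [0, 1] so no dimension dominates.
    return {d: (point[d] - lo) / (hi - lo) for d, (lo, hi) in space.items()}

def similarity(a, b):
    # Similarity decays with Euclidean distance in the normalised space.
    pa, pb = normalise(a), normalise(b)
    dist = math.sqrt(sum((pa[d] - pb[d]) ** 2 for d in space))
    return math.exp(-dist)

cherry = {"hue": 0.95, "size": 1.0}
apple = {"hue": 0.90, "size": 3.0}
melon = {"hue": 0.30, "size": 9.0}
print(similarity(cherry, apple) > similarity(cherry, melon))  # True: nearer means more similar
```

Continuity over time, in the quoted sense, would then amount to an object tracing a continuous path through this space.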