Albert Camus Reads Richard K. Morgan: Unsaid Existential Absurdism

Humanity has spread to the stars. We set out like ancient seafarers to explore the limitless ocean of space. But no matter how far we venture into the unknown, the worst monsters are those we bring with us. – Takeshi Kovacs

What I purport to do in this paper is take up two sci-fi works of Richard Morgan. The first is Altered Carbon (the source of the Netflix series), the opening novel of the Takeshi Kovacs trilogy and at times a grisly tale of switching bodies to gain immortality, shading into transhumanism, whether by means of enhanced biology, technology, or biotechnology, and into posthumanism. The second is Market Forces, a brutal journey into the heart of corporatized conflict investment by way of the elimination of conscience. Thereafter, a conflation with Camus' absurdity unravels the very paradoxical ambiguity underlying absurdism as a human condition. The paradoxical ambiguity results from Camus' ambivalence towards the neo-Platonist conception of the ultimate unifying principle: he accepts Plotinus' principled pattern, but rejects its culmination.

Richard Morgan's work is a parody, a commentary, or even an epic fantasy overcharged almost to the point of absurdity and bordering on extropianism. If there is any semblance of optimism about the future as a result of Moore's Law, of ever-denser hardware realizable through computational extravagance, it is spectacularly offset by the complexities of software, resulting in a disconnect that Morgan brilliantly transposes onto a society governed by a dystopian ethic. This offsetting disconnect between the physical and the mental, between the tangible and the intangible, is existential angst writ large on a society maneuvered by the powers that be.

Morgan's Altered Carbon is not a great deflection from William Gibson's cyberpunk, or in places even from Philip K. Dick's Do Androids Dream of Electric Sheep?, which inspired Ridley Scott's cult classic Blade Runner, wherein the interface between man and machine is coalescing (via "sleeves", as the novel calls them), while the singularity pundits make hay. But what if the very technological exponent is turned against its progenitors, a point that defines much of Artificial Intelligence ethics today? What if the human mind is now digitized, uploaded and downloaded as a mere file, and transferred across platforms (by way of needlecast, transmitting DHF, individual digital human freight), rendering the hardware disposable and, at the same time, leaving the software as data vulnerable to the vagaries of the networked age? These are not questions that put ethics at stake alone, but rather a reformatting of humanity off the leash. This forever changes the concept of morality and of death as we know it, for now anyone with adequate resources (note the excess of capitalism here) can technically extend their life for as long as they desire, by re-sleeving themselves into cloned organics or by taking a leaf out of Orwell's Government and archiving citizen records in perpetual storage. Between the publication in 2002 and now, the fiction in science fiction as a genre has indeed become blurred, and what had been the Cartesian devil of mind-body duality now leverages the technological metempsychosis of consciousness in bringing forth a new perception of morality.

Imagine the needle of the moral compass behaving most erratically, ranging from extreme apathy to moderate conscience in consideration of the economics of collateral damage, with the narrative wrenching through senses, thoughts and emotions before settling down into a dystopian plot dense with politics, societal disparity, corruption, abuse of wealth and power, and repressively conservative justice. If the extreme violence in Altered Carbon is distasteful, the spectacle is countered by the fact that human bodies and memories are informational commodities, digitized freight and cortical stacks, busted and mangled physical shells already having access to a sleeve onto which to reincarnate and rehabilitate, opening up new vistas of philosophical disposition and artificially intelligent deliberation on the ethics of the fast-disappearing human-machine interface.

If the personal is political, Altered Carbon results in a concussion of overloaded cyberpunk tropes and is indicative of Morgan's political stances, a conclusion only confirmed upon reading his later works. This detective melange slithers heavily through the human condition, both light and dark, without succumbing to the derivatives of high-tech and low-life, and keeps the potential of speculative fiction open to exploration. The suffusive metaphysics of longevity, the multiplicity of souls, and spiritual tentacles meeting their adversary in Catholicism paint a believable future on the canvas of the science-fiction spectrum.

Market Forces, on the other hand, is where cyberpunk-style sci-fi is suddenly replaced by a corporatized economy of profit lines via the cogency of conflict investment. The world is in a state of dysphoria, with diplomatic lines having given way to negotiation by violence, and contracts won in Ronin-esque car duels, shifting the battlefield from the cyberspace of Altered Carbon to more terrestrial grounds. Importing directly from Gordon Gekko's "Greed is Good", corporations enhance their share of GDP via the legal funding of foreign wars. The limits of the philosophy of liberal politics are stretched in analogizing the widening gap between the rich and the marginalized against the backdrop of a crime-ravaged, not-so-futuristic London. Security is rarefied according to economic stratification, and surveillance by the rich reaches absurd levels of sophistication in the absence of sousveillance by the marginalized.

Enter Chris Faulkner, the protagonist, defined by a conscience that starts to wither away when he is confronted with taking hard and decisive actions for his firm, Shorn Associates, in the face of the brutality of power dynamics. The intent is real-life testosterone absolutism maximizing the tenets of western capitalism in an ostentatious exhibition of masculinity and competition. The obvious collateral damage is the fissuring of familial and societal values born of conscience. Market Forces has certain parallels in the past, in the writings of Robert Sheckley, the American sci-fi author, who would take an element of society and extrapolate on its inherent violence to the point where the absurd slides into satire. It is in this sliding that the question of the beyond lies, the inevitability of an endowment of aggression defining, or rather questioning, the purpose of the hitherto given legacy of the societal ethic.

With no dearth of violence, the dystopian future stagnates into a dysphoria characterized by law and apparatus at the mercy of corporations, which constitutionally transcend the Government alongside rapacious capitalism. A capitalism so rampant that it transforms the hero into an anti-hero in the unfolding tension between interest and sympathy, disgust and repulsion. The perfectly achievable world of Market Forces is a realization around the corner, seeking birth between the hallucinogenic madness of speculation and a hyperreality hinging on the philosophy of free markets taken to its logical end in the direction of an unpleasant future. The reductio ad absurdum of neoliberalism is an environment of feral brutality masked by the thinnest veneer of corporate civilization, and it is this speculation that portrays a world where all power equates to violence. The violence is manifested in aggression, in a road-rage death match against competitors every time there is a bid for a tender. What goes slightly overboard, in a rather colloquial sense of absurdity, is why any firm would commit its best staff to such idiotic suicide missions.

Camus' absurdity is born in The Myth of Sisyphus and continues well into The Rebel, but is barely able to free itself from the clutches of triviality. This might appear a bold claim, but its efficacy is to be tested through Camus' intellectual indebtedness to Plotinus, the Neo-Platonist thinker. Plotinus supplemented the One and Many idea of Plato with gradations of explanatory orders, for only then was a coalescing of explanations with reality conceivable. This coalescing converges into the absolute unity, the One, the necessarily metaphysical ground. Now, Camus accepts Plotinus in the steganographic, but strips the Absolute of its metaphysics. A major strand of absurdity for Camus stems from his dictum, "to understand is, above all, to unify", and the absence of such a unifying principle vindicates absurdity. Herein one is confronted with the first of the paradoxes: if the Absolute is rejected, why then is there in Camus a nostalgia for unity? The reason is peculiarly caught between his version of empiricism and monism. His empiricism grants comprehensibility to ordinary experiences by way of language and meaning, while anything transcending them is meaninglessness and hinges on Plotinus' Absolute for comprehensibility, thus making him sound like a monist. Added to this contradiction is the face of the Christian God that would appear were the Absolute not rejected, which would then have warranted a clash between good and evil, given the paradox of the existence of the latter when God is invested with the qualities of the former. Invoking modernism's core dictum, Camus then questions spontaneity in the presence of the Absolute by calling attention to scholastic perplexity.

Having rejected the Absolute, Camus takes the absurd condition as a fact. If one were to tread carefully through The Myth of Sisyphus, the argument works thus: if a man removes himself, he destroys the situation and hence the absurd condition. Since the absurd condition is taken as a fact, one who destroys himself denies this fact. But he who denies this fact puts himself in opposition to what is, to Truth. To oppose the Truth, recognizing it to be true, is to contradict oneself. Recognizing a truth, one ought to preserve it rather than deny it. Therefore, it follows that one ought not to commit metaphysical suicide in the face of a meaningless universe. This is a major paradox in his thought, where the evaluative absurdity is deemed worth preserving, starting from the premise that man and the universe juxtaposed together are absurdity itself. So what we have here is a logical cul-de-sac. But what is of cardinal import is the retention of life in mediating between man and universe as absurdity in polarities. If this is one confrontation with the absurd in life, eschatology is another: an absolute that needs to be opposed, a doctrine that becomes a further sense of the absurd, an ethic of the creation of the absolute rule in a drama of man as a struggle against death.

It is this conjecture that builds up in The Rebel, with death as an antagonist subjected to rebellion. The absurdity of death lies in its cutting across our desire for immortality, in its inexplicability, and in its negating and denying the only meaningful existence we know. Contrariwise, death would not be absurd if immortality were possible and if existence as we know it were not the only meaningful existence there is. Camus is prone to a meshwork logic here, for his thought fluctuates between viewing death as an absolute evil and viewing it as a liberator, on account of which it lends legitimacy to freedom. It is not that Camus is unaware of the double bind of his logic; admittedly, he ejects himself from this quandary by deliberating on death not as a transcendental phenomenon but as an ordinary lived experience. If The Myth of Sisyphus holds murder and suicide in an absurdist position by denying any transcendent source of value, The Rebel revels in antagonism with Nihilism, be it in the sense of "nothing is prohibited", or the absolutist nihilism of "permit all" with its fulcrum on the Absolute. The Rebel epitomizes the intellectual impotency of nihilism. But due credit must be given to Camus' logical progression here, for any utopia contains the seed of nihilism, in that any acceptance of an Absolute other than life ultimately leads to tyranny. If this is one strand of the essay, the other is exposited in terms of an unrelenting and absolute opposition to death. Consequently, the Rebel, who is the embodiment of Camus' ethic, cannot kill. To avoid any danger of absolutism in the name of some positive good or value, the absolute value becomes opposition to death, and hence the Rebel's ethic is one of ceaseless rebellion, opposition and conflict.

Now, having given a not very exhaustive treatment of Camus' notion of absurdity, for there is more than meets the eye in his corpus, let us turn to the conflation with Richard Morgan and justify the thesis we set out with. We shall bring this about through a series of observations.

If antagonism to death is the hallmark of rebellion, then Altered Carbon, with its special hard drives called "stacks" installed in the brainstem, immortalizes consciousness, to be ported from human to human across spacetimes. Needlecasting, the process by which human consciousness in the format of data gets teleported, suffers disorientation across human hardware, if it can even be called that. Interestingly, this disorientation renders the receiver conflict-ready, a theme that runs continuously through Market Forces as well as Altered Carbon. This state of conflict- and combat-readiness erects armies to quash rebellions. To prevent immortality from being exploited in the hands of the privileged, these armies are trained to withstand torture and drudgery, while at the same time heightening their perception via steganography. But where the plot goes haywire for Camus' rebel is that Richard Morgan's can neutralize and eliminate. That is the first observation.

On to the second, which deals with transhumanism. A particular character, Kovacs' partner Kristin Ortega, has a neo-Catholic family that is split over God's view of resurrecting a loved one. The split results from choosing religious coding, the neo-Catholic position being that the dead cannot be brought back to life. In these cases Altered Carbon pushes past its Blade Runner fetish and reflexive cynicism to find something human. But when the larger world is so thin, it is hard to place something like neo-Catholicism in a larger context. Characters have had centuries to get used to the idea of stacks, which begs the larger questions: why are many still blindsided by their existence, and why do so few people, including the sour Meths, seem to be doing anything interesting with the technology? Now, Camus' man is confronted with his absurd and meaningless existence, which will be extinguished by death. There are two choices to consider here. Either he can live inauthentically, which implies hiding from the truth that life is meaningless and accepting the standards and values of the crowd, in the process escaping the inner misery and despair that result from an honest appraisal of the facts. Or he can make the authentic choice and live heroically, which implies facing the truth, life's futility, and temporarily submitting to despair, a necessary consequence which, if it does not lead to suicide, will eventually purify him. Despair will drive him out of himself and away from trivialities, and by it he will be impelled to commit himself to a life of dramatic choices. This is ingrained in the intellectual idea of neo-Catholicism, with Camus' allusion that only the use of the Will can cause a man truly to be. Both Takeshi Kovacs in Altered Carbon and Chris Faulkner in Market Forces amply epitomize this neo-Catholicism, albeit not directly, but rather as an existential angst in the form of an intrusion.

Now for the third observation. The truth in Altered Carbon is an excavation of the self, more than a searching of data and a tweaking of it into information. It admonishes one to keep going, no matter in which direction, a scalar over a vector territorialization, in order to decrypt that which seems hidden, an exercise in futility. Allow me to quote Morgan in full:

You are still young and stupid. Human life has no value. Haven't you learned that yet, Takeshi, with all you've seen? It has no value, intrinsic to itself. Machines cost money to build. Raw materials cost money to extract. But people? You can always get some more people. They reproduce like cancer cells, whether you want them or not. They are abundant, Takeshi. Why should they be valuable? Do you know that it costs us less to recruit and use up a real snuff whore than it does to set up and run a virtual equivalent format? Real human flesh is cheaper than a machine. It's the axiomatic truth of our times.

In full consciousness, and setting aside the impropriety above, Morgan's privileging of the machine over human flesh extricates essentialism, mirroring the Camusian take on the meaning of life as inessential, but for the burning problem of suicide. This is a direct import from Nietzsche, for whom illusion (the arts; remember Wagner!) lends credibility to life and resolves despair to some extent, whereas for Camus despair is only a coming to terms with this absurd condition, by way of machination in full knowledge of the condition's futility and pointlessness. This is most brilliantly corroborated in Morgan's dictum about how constant repetition can make even the most obvious truths irritating enough to disagree with (Woken Furies).

To conclude: imagine the real world extending into the fictive milieu, or its mirror image, the fictive world territorializing the real, leaving it to portend such an intercourse consequent upon an existential angst. Such an imagination now moves along the coordinates of hyperreality, where it collaterally damages meaning in a violent burst of EX/IM-plosion. This violent burst disturbs the idealized truth, overridden by a hallucinogenic madness prompting iniquities calibrated for an unpleasant future. This invading dissonant realism slithers through the science fiction before culminating in the human characteristics of expediency. Such expediencies abhor fixation to a being-in-the-world built on deluded principles, where absurdity is not only a human condition but an affliction of the transhuman and posthuman condition as well. Only, the latter is not necessarily a peep into the future, though it might very well be, but rather a disturbing look into present-day topographies, which for Camus meant acquiescing to the predicament, and for Richard Morgan a search for the culpable.

Politics of Teleonomies of Blockchain…Thought of the Day 155.0

All of this starts with the dictum, “There are no men at work”.

The notion of blockchain is that of a decentralized polity. A blockchain is immutable, for once data is written onto a block it is practically un-erasable. And most importantly, it is collateralized, in that even in the absence of physical assets, digital ownership can be traded as collateral. So, once you have a blockchain, you create a stack that can be database-controlled using a Virtual Machine; think of it as some sort of digital twin. What exactly are the benefits of this decentralized digital polity? One crucial benefit is getting rid of intermediaries (unless one considers escrow accounts an invisible intermediary, which seldom fulfills the definitional criteria). In short, digital twinning furthers social scalability by pushing intermediaries into an invisible mode. Now, when blockchains are juxtaposed with algorithmically run machines (AI is just one branch of these), one gets the benefits of social scalability with analytics, the ever-increasing ocean of raw data hermeneutically sealed into information for utilitarian purposes. The advantages of a decentralized polity and social scalability combine for a true democratic experience in open-sourced modeling, where netizens (since we are still mired in the controversy of net neutrality) experience participatory democracy.
How would these combine with the exigencies of scarce nature and resources? It is here that such hackathons combine the ingenuity of blockchain with AI in a process generally referred to as "mining". This launch away from nature as we know it is Nature 2.0. To repeat, a decentralized polity and social scalability create a self-sustaining ecosystem in the sense of Anti-Fragility (yes, Taleb's anti-fragile feeds back into this), with autonomously created machine learning systems that are largely correctional in nature on the one hand and improve their learning capacities from the environment on the other. These two hands coordinate, giving rise to resource manipulation and lending a synthetic definition to materialities taken straight from physics textbooks, and to scared-to-apprehend materialities as thermodynamic quotients. And this is where AI steams up into a grand globalized alliance of machines embodying agencies, always looking for cognitive enhancements to fulfill a teleonomic life derived from the above-stated thermodynamic quotient of randomness and disorder into gratifying sensibilities of self-sustenance. Synthetic biologists (of the Craig Venter and CRISPR-like lines) call this genetic programming, whereas singularitarians term it evolution, a break from the simulated evolution that defined the initial days of AI. Synthetic life becomes ever more capable of decision-making the more it is subjected to the whims and fancies of the surrounding environment via machine learning, leading to autonomous materialities with cognitive capabilities. These are parthenogenetic machines with unencumbered networking capacities. Such is the advent of self-ownership, and taking it to refer to nature as we have hitherto known it is a cathectic fallacy in ethics. Taking it to mean, instead, the establishment of a symbiotic relationship between biology and machines, yielding biomachines with the characteristics of biomachination, replication (reproduction, with CC and CV thrown open for editing via genetic programming) and self-actualization, is what blockchain in composite with AI and Synthetic Biology, Nature 2.0, amounts to.
Yes, there are downsides to traditional manners of thought, man playing god with nature and so on and so on…these are ethical constraints and thus political in undertone, but they rest on conservative theoretics and are thus unable to come to terms with the politics of resource abundance that the machinic promulgates…
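
To make the immutability claim above concrete, here is a minimal sketch of a hash-chained ledger in Python (illustrative only, not modelled on any particular blockchain implementation): each block commits to the hash of its predecessor, so tampering with an earlier record invalidates every block that follows it.

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass
class Block:
    index: int
    data: str        # payload, e.g. a record of digital ownership
    prev_hash: str   # hash of the previous block

    def hash(self) -> str:
        # Hash the block's full contents, including the link to its predecessor
        payload = json.dumps({"index": self.index, "data": self.data,
                              "prev_hash": self.prev_hash}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, data: str) -> None:
    prev_hash = chain[-1].hash() if chain else "0" * 64
    chain.append(Block(len(chain), data, prev_hash))

def is_valid(chain: list) -> bool:
    # Every block must point at the actual hash of the block before it
    return all(chain[i].prev_hash == chain[i - 1].hash() for i in range(1, len(chain)))

chain: list = []
append(chain, "asset A -> owner X")
append(chain, "asset A -> owner Y")
print(is_valid(chain))                  # True
chain[0].data = "asset A -> owner Z"    # tamper with history
print(is_valid(chain))                  # False: immutability in the practical sense
```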

Capital As Power.

One has the Erich Fromm angle of consciousness as linear and directly proportional to exploitation as one strand of Marxian thinking; the non-linearity creeps in from epistemology on the technological side, with something like, say, Moore's Law, where the ascension of conscious thought is, or could be, likened to an exponential. Now, these exponentials are potent in ridding us of the pronouns, as in the "I" having a compossibility with the "We", for if these are not gotten rid of, there is asphyxiation in continuing with them, an effort, an energy expended into the vestiges of waste, before Capitalism comes sweeping in over such deliberately pronounced islands of pronouns. This is where the sweep is of the "IT". And this is emancipation of the highest order, where teleology would be replaced by eschatology, and alienation replaced with emancipation. Teleology is alienating, whereas eschatology is emancipating. Agency would become un-agency. An emancipation from alienation, from being, into the arms of becoming, for the former is a mere snapshot of the illusory order, whereas the latter is a continuum of fluidity, the fluid dynamics of the deracinated from the illusory order. The "IT" is pure and brute materialism, the cosmic unfoldings beyond our understanding and, importantly, mirrored in the terrestrial. The "IT" is not to be realized. The "IT" is what engulfs us, kills us, and in the process emancipates us from alienation. The "IT" is "Realism", a philosophy without "we", Capitalism's excessive power. The "IT" enslaves "us" to the point of our losing any identification. In a nutshell, the theory of capital is a catalogue of heresies to be welcomed in order to set free, from the vantage of an intention to emancipate economic thought from the etherealized spheres of choice and behaviours, or from the paradigm of disembodied minds.

Jonathan Nitzan and Shimshon Bichler's Capital as Power: A Study of Order and Creorder

Fragmentation – Lit and Dark Electronic Exchanges. Thought of the Day 116.0

Exchanges also control the amount and degree of granularity of the information you receive (e.g., you can use the consolidated/public feed at a low cost or pay a relatively much larger cost for direct/proprietary feeds from the exchanges). They also monetise the need for speed by renting out computer/server space next to their matching engines, a process called colocation. Through colocation, exchanges can provide uniform service to trading clients at competitive rates. Having the traders' trading engines at a common location owned by the exchange simplifies the exchange's ability to provide uniform service as it can control the hardware connecting each client to the trading engine, the cable (so all have the same cable of the same length), and the network. This ensures that all traders in colocation have the same fast access, and are not disadvantaged (at least in terms of exchange-provided hardware). Naturally, this imposes a clear distinction between traders who are colocated and those who are not. Those not colocated will always have a speed disadvantage. It then becomes an issue for regulators who have to ensure that exchanges keep access to colocation sufficiently competitive.

The issue of distance from the trading engine brings us to another key dimension of trading nowadays, especially in US equity markets, namely fragmentation. A trader in US equities markets has to be aware that there are up to 13 lit electronic exchanges and more than 40 dark ones. Together with this wide range of trading options, there is also specific regulation (the so-called 'trade-through' rules) which affects what happens to market orders sent to one exchange if there are better execution prices at other exchanges. The interaction of multiple trading venues, latency when moving between these venues, and regulation introduces additional dimensions to keep in mind when designing successful trading strategies.

The role of time is fundamental in the usual price-time priority electronic exchange, and in a fragmented market, the issue becomes even more important. Traders need to be able to adjust their trading positions fast in response to or in anticipation of changes in market circumstances, not just at the local exchange but at other markets as well. The race to be the first in or out of a certain position is one of the focal points of the debate on the benefits and costs of 'high-frequency trading'.
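
As an illustration of the price-time priority rule mentioned above, here is a toy matching sketch in Python (a simplified book, not any exchange's engine): resting limit orders are ranked first by price and then by arrival time, so two orders at the same price are filled in the order they were posted.

```python
import heapq
import itertools

arrival = itertools.count()  # monotonically increasing arrival stamp

class Book:
    """Toy limit order book for one side (resting sell orders)."""
    def __init__(self):
        self.asks = []  # min-heap keyed by (price, arrival time)

    def add_ask(self, price: float, qty: int, trader: str) -> None:
        heapq.heappush(self.asks, (price, next(arrival), qty, trader))

    def market_buy(self, qty: int):
        """Fill a market buy against the best-priced, earliest-posted asks."""
        fills = []
        while qty > 0 and self.asks:
            price, ts, rest_qty, trader = heapq.heappop(self.asks)
            take = min(qty, rest_qty)
            fills.append((trader, price, take))
            qty -= take
            if rest_qty > take:  # put back the unfilled remainder, keeping its time priority
                heapq.heappush(self.asks, (price, ts, rest_qty - take, trader))
        return fills

book = Book()
book.add_ask(10.01, 100, "A")   # posted first at 10.01
book.add_ask(10.01, 100, "B")   # same price, posted later
book.add_ask(10.00, 50, "C")    # better price, posted last
print(book.market_buy(200))     # C fills first (price), then A before B (time)
```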

The importance of speed permeates the whole process of designing trading algorithms, from the actual code, to the choice of programming language, to the hardware it is implemented on, to the characteristics of the connection to the matching engine, and the way orders are routed within an exchange and between exchanges. Exchanges, being aware of the importance of speed, have adapted and, amongst other things, moved well beyond the basic two types of orders (Market Orders and Limit Orders). Any trader should be very well-informed regarding all the different order types available at the exchanges, what they are and how they may be used.

When coding an algorithm one should be very aware of all the possible types of orders allowed, not just in one exchange, but in all competing exchanges where one's asset of interest is traded. Being uninformed about the variety of order types can lead to significant losses. Since some of these order types allow changes and adjustments at the trading engine level, they cannot be beaten in terms of latency by the trader's own engine, regardless of how efficiently one's algorithms are coded and hardwired.

Another important issue to be aware of is that trading on an exchange is not free, and the cost is not the same for all traders. For example, many exchanges run what is referred to as a maker-taker system of fees whereby a trader sending an MO (and hence taking liquidity away from the market) pays a trading fee, while a trader whose posted LO is filled by the MO (that is, the LO with which the MO is matched) will pay a much lower trading fee, or even receive a payment (a rebate) from the exchange for providing liquidity (making the market). On the other hand, there are markets with an inverted fee schedule, a taker-maker system where the fee structure is the reverse: those providing liquidity pay a higher fee than those taking liquidity (who may even get a rebate). The issue of exchange fees is quite important as fees distort observed market prices (when you make a transaction the relevant price for you is the net price you pay/receive, which is the published price net of fees).
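
As a worked illustration of how fees distort the effective price (the fee and rebate levels below are assumptions for the sketch, not any real exchange's schedule):

```python
# Hypothetical maker-taker schedule (illustrative numbers only)
price        = 100.00    # published execution price per share
taker_fee    = 0.0030    # paid per share by the aggressive (market) order
maker_rebate = 0.0020    # received per share by the resting (limit) order

shares = 1_000

# The buyer crossing the spread with a market order takes liquidity:
buyer_net_per_share = price + taker_fee
# The seller whose resting limit order was hit makes liquidity:
seller_net_per_share = price + maker_rebate

print(f"buyer pays  {buyer_net_per_share * shares:,.2f}")   # 100,003.00
print(f"seller gets {seller_net_per_share * shares:,.2f}")  # 100,002.00
# In an inverted (taker-maker) venue the fee and rebate swap sides.
```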

Accelerating the Synthetic Credit. Thought of the Day 96.0

The structural change in the structured credit universe continues to accelerate. While the market for synthetic structures is already pretty well established, many real money accounts remain outsiders owing to regulatory hurdles and technical limitations, e.g., in participating in the correlation market. Therefore, banks are continuously establishing new products to provide real money accounts with access to the structured market, with constant proportion debt obligations (CPDOs) recently having been popular. Against this background, three vehicles which offer easy access to structured products for these investors have gained in importance: CDPCs (Credit Derivatives Product Companies), PCVs (permanent capital vehicles), and SIVs (structured investment vehicles).

A CDPC is a rated company which buys credit risk via all types of credit derivative instruments, primarily super senior tranches, and sells this risk to investors via preferred shares (equity) or subordinated notes (debt). Hence, the vehicle uses super senior risk to create equity risk. The investment strategy is a buy-and-hold approach, while the aim is to offer high returns to investors and keep default risk limited. Investors are primarily exposed to rating migration risk, to mark-to-market risk, and, finally, to the capability of the external manager. The rating agencies assign, in general, an AAA rating to the business model of the CDPC, which is a bankruptcy-remote vehicle (a special purpose vehicle, SPV). The business models of specific CDPCs differ from each other in terms of the investments and thresholds given to the manager. The preferred asset classes CDPCs invest in are predominantly single-name CDS (credit default swaps), bespoke synthetic tranches, ABS (asset-backed securities), and all kinds of CDOs (collateralized debt obligations). So far, CDPCs' main investments are allocated to corporate credits, but CDPCs are extending their universe to ABS and CDO products, which provide further opportunities in an overall tight spread environment. The implemented leverage is given through the vehicle and can be in the range of 15-60x. On average, the return target was typically around a 15% return on equity, paid in the form of dividends to the shareholders.
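
As a rough back-of-the-envelope (the spread, cost and leverage figures below are assumptions chosen for the sketch, not taken from any specific CDPC), the leverage-to-return arithmetic behind such a vehicle looks like this: net carry earned on the insured notional, scaled by leverage, drives the return on the equity cushion.

```python
# Illustrative numbers only; real CDPC economics depend on tranche spreads,
# funding, fees and expected losses, none of which are modelled here.
equity        = 100e6      # capital raised from preferred shares / sub notes
leverage      = 55         # insured notional / equity, within the 15-60x range
spread        = 0.0028     # 28 bp net premium earned on super senior risk
running_costs = 0.0050     # manager fees etc., as a fraction of equity

notional = equity * leverage
gross_carry = notional * spread
roe = (gross_carry - running_costs * equity) / equity
print(f"return on equity = {roe:.1%}")   # 14.9%, close to the ~15% dividend target mentioned above
```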

In contrast to CDPCs, PCVs do not invest in the top of the capital structure, but in equity pieces (mostly CDO equity pieces). The leverage is not implemented in the vehicle itself as it is directly related to the underlying instruments. PCVs are also set up as SPVs (special purpose vehicles) and listed on a stock exchange. They use the equity they receive from investors to purchase the assets, while the return on their investment is allocated to the shareholders via dividends. The target return amounts, in general, to around 10%. The portfolio is managed by an external manager and is marked-to-market. The share price of the company depends on the NAV (net asset value) of the portfolio and on the expected dividend payments.

In general, an SIV invests in the top of the capital structure of structured credits and ABS, in line with CDPCs. In addition, SIVs also buy subordinated debt of financial institutions, and the portfolio is marked-to-market. SIVs are leveraged credit investment companies and are bankruptcy remote. The vehicle typically issues investment-grade rated commercial paper, MTNs (medium term notes), and capital notes to its investors. The leverage depends on the character of the issued note and the underlying assets, ranging from 3 to 5 (bank loans) up to 14 (structured credits).

Malignant Acceleration in Tech-Finance. Some Further Rumination on Regulations. Thought of the Day 72.1

Regardless of the positive effects that HFT offers, such as reduced spreads, higher liquidity, and faster price discovery, it is mostly its negative side that has caught people's attention. Several notorious market failures and accidents in recent years all seem to be related to HFT practices. They showed how much risk HFT can involve and how huge the damage can be.

HFT heavily depends on the reliability of the trading algorithms that generate, route, and execute orders. High-frequency traders thus must ensure that these algorithms have been tested completely and thoroughly before they are deployed into the live systems of the financial markets. Any improperly tested or prematurely released algorithm may cause losses to both investors and the exchanges. Several examples demonstrate the extent of the ever-present vulnerabilities.

In August 2012, the Knight Capital Group implemented a new liquidity-testing software routine into its trading system, which was running live on the NYSE. The system started making bizarre trading decisions, quadrupling the price of one company, Wizzard Software, as well as bidding up the price of much larger entities, such as General Electric. Within 45 minutes, the company lost USD 440 million. After this event and the weakening of Knight Capital's capital base, it agreed to merge with another algorithmic trading firm, Getco, which is the biggest HFT firm in the U.S. today. This example emphasizes the importance of implementing precautions to ensure that algorithms are not mistakenly deployed.

Another example is Everbright Securities in China. In 2013, the state-owned brokerage firm Everbright Securities Co. sent more than 26,000 mistaken buy orders, amounting to RMB 23.4 billion (USD 3.82 billion), to the Shanghai Stock Exchange (SSE), pushing its benchmark index up 6% in two minutes. This resulted in a trading loss of approximately RMB 194 million (USD 31.7 million). In a follow-up evaluative study, the China Securities Regulatory Commission (CSRC) found that there were significant flaws in Everbright's information and risk management systems.

The damage caused by HFT errors is not limited to the specific trading firms themselves, but may also involve stock exchanges and the stability of the related financial market. On Friday, May 18, 2012, the social network giant Facebook's stock was issued on the NASDAQ exchange, the most anticipated initial public offering (IPO) in its history. The offering attracted HFT traders and very large order flows were expected, and before the IPO, NASDAQ was confident in its ability to deal with the high volume of orders. However, technology problems with the opening made a mess of the IPO.

But when the deluge of orders to buy, sell and cancel trades came, NASDAQ's trading software began to fail under the strain. This resulted in a 30-minute delay on NASDAQ's side, and a 17-second blackout for all stock trading at the exchange, causing further panic. Scrutiny of the problems immediately led to fines for the exchange and accusations that HFT traders bore some responsibility too. Problems persisted after the opening, with many customer orders from institutional and retail buyers unfilled for hours or never filled at all, while others ended up buying more shares than they had intended. This incredible gaffe, estimated to have cost traders around USD 100 million, eclipsed NASDAQ's achievement in winning Facebook's IPO, the third largest in U.S. history.

Another instance occurred on May 6, 2010, when U.S. financial markets were surprised by what has been referred to ever since as the "Flash Crash". Within less than 30 minutes, the main U.S. stock markets experienced the single largest price declines within a day, with a decline of more than 5% for many U.S.-based equity products. In addition, the Dow Jones Industrial Average (DJIA), at its lowest point that day, fell by nearly 1,000 points, although it was followed by a rapid rebound. This brief period of extreme intraday volatility demonstrated the weakness of the structure and stability of U.S. financial markets, as well as the opportunities for volatility-focused HFT traders. Although a subsequent investigation by the SEC cleared high-frequency traders of directly having caused the Flash Crash, they were still blamed for exaggerating market volatility and withdrawing liquidity for many U.S.-based equities (Flash Boys).

Since the mid-2000s, the average trade size in the U.S. stock market had plummeted, the markets had fragmented, and the gap in time between the public view of the markets and the view of high-frequency traders had widened. The rise of high-frequency trading had also been accompanied by a rise in stock market volatility, over and above the turmoil caused by the 2008 financial crisis. The price volatility within each trading day in the U.S. stock market between 2010 and 2013 was nearly 40 percent higher than the volatility between 2004 and 2006, for instance. There were days in 2011 in which volatility was higher than in the most volatile days of the dot-com bubble.

Although these different incidents have different causes, the effects were similar and some common conclusions can be drawn. The presence of algorithmic trading and HFT in the financial markets exacerbates the adverse impacts of trading-related mistakes. It may lead to extremely high market volatility and surprises about suddenly diminished liquidity. This raises concerns about the stability and health of the financial markets for regulators. With the continuous and fast development of HFT, larger and larger shares of equity trades were created in the U.S. financial markets. There was also mounting evidence that HFT-related errors disturbed market stability and caused significant financial losses. This led the regulators to increase their attention and effort to provide the exchanges and traders with guidance on HFT practices. They also expressed concerns about high-frequency traders extracting profit at the cost of traditional investors and even manipulating the market. For instance, high-frequency traders can generate a large number of orders within microseconds to exacerbate a trend. Other types of misconduct include ping orders, which use some orders to detect other hidden orders, and quote stuffing, which issues a large number of orders to create uncertainty in the market. HFT creates room for these kinds of market abuses, and its blazing speed and huge trade volumes make their detection difficult for regulators.

Regulators have taken steps to increase their regulatory authority over HFT activities. Some of the problems that arose in the mid-2000s led to regulatory hearings in the United States Senate on dark pools, flash orders and HFT practices. Another example occurred after the Facebook IPO problem, which led the SEC to call for a limit up-limit down mechanism at the exchanges to prevent trades in individual securities from occurring outside of a specified price range, so that market volatility will be under better control. These regulatory actions put stricter requirements on HFT practices, aiming to minimize market disturbance when many fast trading orders occur within a day.
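
The limit up-limit down mechanism mentioned above can be sketched roughly as follows (a simplified illustration; actual band widths and reference-price rules are set by the regulators and exchanges, not by this toy check):

```python
def within_band(trade_price: float, reference_price: float, band_pct: float = 0.05) -> bool:
    """Return True if the trade price lies inside the allowed price band.

    reference_price: e.g. an average of recent trades (simplified here).
    band_pct: half-width of the band, assumed 5% for this sketch.
    """
    lower = reference_price * (1 - band_pct)
    upper = reference_price * (1 + band_pct)
    return lower <= trade_price <= upper

ref = 50.00
print(within_band(51.80, ref))  # True: inside the +/-5% band
print(within_band(46.90, ref))  # False: would be rejected or trigger a trading pause
```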

Regulating the Velocities of Dark Pools. Thought of the Day 72.0

On 22 September 2010 the SEC chair Mary Schapiro signaled US authorities were considering the introduction of regulations targeted at HFT:

…High frequency trading firms have a tremendous capacity to affect the stability and integrity of the equity markets. Currently, however, high frequency trading firms are subject to very little in the way of obligations either to protect that stability by promoting reasonable price continuity in tough times, or to refrain from exacerbating price volatility.

However, regulating an industry working towards moving as fast as the speed of light is no ordinary administrative task: modern finance is undergoing a fundamental transformation. Artificial intelligence, mathematical models, and supercomputers have replaced human intelligence, human deliberation, and human execution…. Modern finance is becoming cyborg finance, an industry that is faster, larger, more complex, more global, more interconnected, and less human. C. W. Lin proposes a number of principles for regulating this cyborg finance industry:

  1. Update antiquated paradigms of reasonable investors and compartmentalised institutions, and confront the emerging institutional realities, and realise the old paradigms of governance of markets may be ill-suited for the new finance industry;
  2. Enhance disclosure which recognises the complexity and technological capacities of the new finance industry;
  3. Adopt regulations to moderate the velocities of finance realising that as these approach the speed of light they may contain more risks than rewards for the new financial industry;
  4. Introduce smarter coordination harmonising financial regulation beyond traditional spaces of jurisdiction.

Electronic markets will require international coordination, surveillance and regulation. The high-frequency trading environment has the potential to generate errors and losses at a speed and magnitude far greater than in a floor or screen-based trading environment… Moreover, issues related to the risk management of these technology-dependent trading systems are numerous and complex and cannot be addressed in isolation within domestic financial markets. For example, placing limits on high-frequency algorithmic trading or restricting unfiltered sponsored access and co-location within one jurisdiction might only drive trading firms to another jurisdiction where controls are less stringent.

In these regulatory endeavours it will be vital to remember that not all innovation is intrinsically good, and some of it may be inherently dangerous; the objective is to make a more efficient and equitable financial system, not simply a faster system. Despite its fast computers and credit derivatives, the current financial system does not seem better at transferring funds from savers to borrowers than the financial system of 1910. Furthermore, as Thomas Piketty's Capital in the Twenty-First Century amply demonstrates, any thought that finance has been democratised by the huge expansion of superannuation funds, together with the increased access to finance afforded by credit cards and ATMs, is something of a fantasy, since levels of structural inequality have endured through these technological transformations. The tragedy is that under the guise of technological advance and sophistication we could be destroying the capacity of financial markets to fulfil their essential purpose, as Haldane eloquently states:

An efficient capital market transfers savings today into investment tomorrow and growth the day after. In that way, it boosts welfare. Short-termism in capital markets could interrupt this transfer. If promised returns the day after tomorrow fail to induce saving today, there will be no investment tomorrow. If so, long-term growth and welfare would be the casualty.

Momentum of Accelerated Capital. Note Quote.

Distinct types of high frequency trading firms include independent proprietary firms, which use private funds and specific strategies that remain secretive, and which may act as market makers generating automatic buy and sell orders continuously throughout the day. Broker-dealer proprietary desks are part of traditional broker-dealer firms but are not related to their client business, and are operated by the largest investment banks. Thirdly, hedge funds focus on complex statistical arbitrage, taking advantage of pricing inefficiencies between asset classes and securities.

Today strategies using algorithmic trading and High Frequency Trading play a central role on financial exchanges, alternative markets, and banks' internalized (over-the-counter) dealings:

High frequency traders typically act in a proprietary capacity, making use of a number of strategies and generating a very large number of trades every single day. They leverage technology and algorithms from end-to-end of the investment chain – from market data analysis and the operation of a specific trading strategy to the generation, routing, and execution of orders and trades. What differentiates HFT from algorithmic trading is the high frequency turnover of positions as well as its implicit reliance on ultra-low latency connection and speed of the system.

The use of algorithms in computerised exchange trading has experienced a long evolution with the increasing digitalisation of exchanges:

Over time, algorithms have continuously evolved: while initial first-generation algorithms – fairly simple in their goals and logic – were pure trade execution algos, second-generation algorithms – strategy implementation algos – have become much more sophisticated and are typically used to produce own trading signals which are then executed by trade execution algos. Third-generation algorithms include intelligent logic that learns from market activity and adjusts the trading strategy of the order based on what the algorithm perceives is happening in the market. HFT is not a strategy per se, but rather a technologically more advanced method of implementing particular trading strategies. The objective of HFT strategies is to seek to benefit from market liquidity imbalances or other short-term pricing inefficiencies.

While algorithms are employed by most traders in contemporary markets, the intense focus on speed and the momentary holding periods are the unique practices of the high frequency traders. The defence of high frequency trading is built around the principles that it increases liquidity, narrows spreads, and improves market efficiency: the high number of trades made by HFT traders results in greater liquidity in the market; algorithmic trading has resulted in the prices of securities being updated more quickly, with more competitive bid-ask prices and narrowing spreads; and, finally, HFT enables prices to reflect information more quickly and accurately, ensuring accurate pricing at smaller time intervals. But there are critical differences between high frequency traders and traditional market makers:

  1. HFT firms do not have an affirmative market making obligation; that is, they are not obliged to provide liquidity by constantly displaying two-sided quotes, which may translate into a lack of liquidity during volatile conditions.
  2. HFT firms contribute little market depth due to the marginal size of their quotes, which may result in larger orders having to transact with many small orders, and this may impact overall transaction costs.
  3. HFT quotes are barely accessible due to the extremely short duration for which the liquidity is available, with orders cancelled within milliseconds.

Besides the shallowness of the HFT contribution to liquidity, there are real fears of how HFT can compound and magnify risk through the rapidity of its actions:

There is evidence that high-frequency algorithmic trading also has some positive benefits for investors by narrowing spreads – the difference between the price at which a buyer is willing to purchase a financial instrument and the price at which a seller is willing to sell it – and by increasing liquidity at each decimal point. However, a major issue for regulators and policymakers is the extent to which high-frequency trading, unfiltered sponsored access, and co-location amplify risks, including systemic risk, by increasing the speed at which trading errors or fraudulent trades can occur.

Although there have always been occasional trading errors and episodic volatility spikes in markets, the speed, automation and interconnectedness of today's markets create a different scale of risk. These risks demand that exchanges and market participants employ effective quality management systems and sophisticated risk mitigation controls adapted to these new dynamics in order to protect against potential threats to market stability arising from technology malfunctions or episodic illiquidity. However, there are more deliberate aspects of HFT strategies which may present serious problems for market structure and functioning, and where conduct may be illegal. For example, order anticipation seeks to ascertain the existence of large buyers or sellers in the marketplace and then to trade ahead of those buyers and sellers in anticipation that their large orders will move market prices. A momentum strategy involves initiating a series of orders and trades in an attempt to ignite a rapid price move. HFT strategies can resemble traditional forms of market manipulation that violate the Exchange Act:

  1. Spoofing and layering occur when traders create a false appearance of market activity by entering multiple non-bona fide orders on one side of the market at increasing or decreasing prices, in order to induce others to buy or sell the stock at a price altered by the bogus orders.
  2. Painting the tape involves placing a succession of small buy orders at increasing prices in order to stimulate increased demand.

  3. Quote stuffing and price fade are additional dubious HFT practices: quote stuffing is a practice that floods the market with huge numbers of orders and cancellations in rapid succession, which may generate buying or selling interest or compromise the trading position of other market participants; order or price fade involves the rapid cancellation of orders in response to other trades.
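
A crude illustration of why quote stuffing is hard to police in real time: surveillance often looks at order-to-trade ratios over short windows, of which the following is only a toy sketch (the threshold is an assumption for illustration, not a regulatory standard).

```python
from collections import Counter

# Toy message log: (trader, action) events within one short window
messages = [
    ("T1", "new"), ("T1", "cancel"), ("T1", "new"), ("T1", "cancel"),
    ("T1", "new"), ("T1", "cancel"), ("T1", "new"), ("T1", "trade"),
    ("T2", "new"), ("T2", "trade"), ("T2", "new"), ("T2", "trade"),
]

def order_to_trade_ratios(msgs):
    orders, trades = Counter(), Counter()
    for trader, action in msgs:
        if action == "new":
            orders[trader] += 1
        elif action == "trade":
            trades[trader] += 1
    return {t: orders[t] / max(trades[t], 1) for t in orders}

SUSPICION_THRESHOLD = 3.0   # assumed, illustrative cutoff
for trader, ratio in order_to_trade_ratios(messages).items():
    flag = "flag for review" if ratio > SUSPICION_THRESHOLD else "ok"
    print(trader, round(ratio, 1), flag)   # T1: 4.0 flagged, T2: 1.0 ok
```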

The World Federation of Exchanges insists: "Exchanges are committed to protecting market stability and promoting orderly markets, and understand that a robust and resilient risk control framework adapted to today's high speed markets is a cornerstone of enhancing investor confidence." However, this robust and resilient risk control framework seems lacking, including in the dark pools now established for trading, which were initially proposed as safer than the open market.

Accelerated Capital as an Anathema to the Principles of Communicative Action. A Note Quote on the Reciprocity of Capital and Ethicality of Financial Economics

Markowitz portfolio theory explicitly observes that portfolio managers are not (expected) utility maximisers, as they diversify, and offers the hypothesis that a desire for reward is tempered by a fear of uncertainty. The model concludes that all investors should hold the same portfolio; their individual risk-reward objectives are satisfied by the weighting of this 'index portfolio' against riskless cash in the bank, a point on the capital market line. The slope of the Capital Market Line is the market price of risk, which is an important parameter in arbitrage arguments.
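
For reference, the capital market line alluded to here can be written out numerically; the slope is the market price of risk. A quick sketch with assumed, illustrative inputs (none of these figures come from the text):

```python
# Capital Market Line: E[R_p] = r_f + sigma_p * (E[R_m] - r_f) / sigma_m
# The slope (E[R_m] - r_f) / sigma_m is the market price of risk.
r_f, r_m, sigma_m = 0.02, 0.08, 0.15   # assumed risk-free rate, market return, market volatility

market_price_of_risk = (r_m - r_f) / sigma_m
print(f"market price of risk = {market_price_of_risk:.2f}")   # 0.40

# An investor choosing portfolio volatility sigma_p = 0.09 (60% index / 40% cash)
sigma_p = 0.09
expected_return = r_f + sigma_p * market_price_of_risk
print(f"expected return at sigma_p = 0.09: {expected_return:.3f}")  # 0.056
```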

Merton had initially attempted to provide an alternative to Markowitz based on utility maximisation employing stochastic calculus. He was only able to resolve the problem by employing the hedging arguments of Black and Scholes, and in doing so built a model that was based on the absence of arbitrage, free of turpe-lucrum. The prescriptive statement "it should not be possible to make sure profits" is explicit in the Efficient Markets Hypothesis and in the use of an Arrow security in the context of the Law of One Price. Based on these observations, we conjecture that the whole paradigm of financial economics is built on the principle of balanced reciprocity. In order to explore this conjecture we shall examine the relationship between commerce and themes in Pragmatic philosophy. Specifically, we highlight Robert Brandom's (Making It Explicit: Reasoning, Representing, and Discursive Commitment) position that there is a pragmatist conception of norms, a notion of primitive correctnesses of performance implicit in practice that precede and are presupposed by their explicit formulation in rules and principles.

The 'primitive correctnesses' of commercial practices were recognised by Aristotle when he investigated the nature of Justice in the context of commerce, and then by Olivi when he looked favourably on merchants. It is exhibited in the doux-commerce thesis; compare Fourcade and Healy's contemporary description of the thesis, that commerce teaches ethics mainly through its communicative dimension, that is, by promoting conversations among equals and exchange between strangers, with Putnam's description of Habermas' communicative action as based on the norm of sincerity, the norm of truth-telling, and the norm of asserting only what is rationally warranted, and as contrasted with manipulation (Hilary Putnam, The Collapse of the Fact/Value Dichotomy and Other Essays).

There are practices (that should be) implicit in commerce that make it an exemplar of communicative action. A further expression of markets as centres of communication, manifested in the Asian description of a market, brings to mind Donald Davidson's (Subjective, Intersubjective, Objective) argument that knowledge is not the product of bipartite conversations but of a tripartite relationship between two speakers and their shared environment. Replacing the negotiation between market agents with an algorithm that delivers a theoretical price replaces 'knowledge', generated through communication, with dogma. The problem with the performativity that Donald MacKenzie (An Engine, Not a Camera: How Financial Models Shape Markets) is concerned with is one of monism. In employing pricing algorithms, the markets cannot perform towards something that comes close to 'true belief', which can only be identified through communication between sapient humans. This is an almost trivial observation to (successful) market participants, but difficult to appreciate for spectators who seek to attain 'objective' knowledge of markets from a distance. The relevance to financial crises lies in the position that 'true belief' is about establishing coherence through myriad triangulations centred on an asset, rather than relying on a theoretical model.

Shifting gears now: unless the martingale measure is a by-product of a hedging approach, the price given by such martingale measures is not related to the cost of a hedging strategy, and therefore the meaning of such 'prices' is not clear. If the hedging argument cannot be employed, as in the markets studied by Cont and Tankov (Financial Modelling with Jump Processes), there is no conceptual framework supporting the prices obtained from the Fundamental Theorem of Asset Pricing. This lack of meaning can be interpreted as a consequence of the strict fact/value dichotomy in contemporary mathematics that came with the eclipse of Poincaré's Intuitionism by Hilbert's Formalism and Bourbaki's Rationalism. The practical problem of supporting the social norms of market exchange has been replaced by a theoretical problem of developing formal models of markets. These models then legitimate the actions of agents in the market without having to make reference to explicitly normative values.
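
To make the martingale-measure point concrete, here is the standard one-period binomial illustration (textbook-style numbers chosen for this sketch): the risk-neutral probability is whatever makes the discounted asset price a martingale, and the derivative 'price' it yields only acquires meaning as the cost of the replicating hedge.

```python
# One-period binomial model: S0 -> u*S0 or d*S0, riskless rate r, strike K.
S0, u, d, r, K = 100.0, 1.2, 0.8, 0.05, 100.0   # illustrative parameters

# Risk-neutral (martingale) probability: E_q[S1] / (1 + r) = S0
q = ((1 + r) - d) / (u - d)

call_up, call_down = max(u * S0 - K, 0), max(d * S0 - K, 0)
price = (q * call_up + (1 - q) * call_down) / (1 + r)

# The same number arises as the cost of the replicating hedge (delta shares + bond):
delta = (call_up - call_down) / (S0 * (u - d))
bond = (call_up - delta * u * S0) / (1 + r)
hedge_cost = delta * S0 + bond

print(round(price, 4), round(hedge_cost, 4))   # both 11.9048: price = hedging cost
```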

The Efficient Market Hypothesis is based on the axiom that the market price is determined by the balance between supply and demand, and so an increase in trading facilitates convergence to equilibrium. If this axiom is replaced by the axiom of reciprocity, the justification for speculative activity in support of efficient markets disappears. In fact, the axiom of reciprocity would de-legitimise 'true' arbitrage opportunities as being unfair. This would not necessarily make the activities of actual market arbitrageurs illicit, since there are rarely strategies that are without the risk of a loss; however, it would place more emphasis on the risks of speculation and inhibit the hubris that has been associated with the prelude to the recent Crisis. These points raise the question of the legitimacy of speculation in the markets. In an attempt to understand this issue, Gabrielle and Reuven Brenner identify three types of market participant. 'Investors' are preoccupied with future scarcity and so defer income. Because uncertainty exposes the investor to the risk of loss, investors wish to minimise uncertainty at the cost of potential profits; this is the basis of classical investment theory. 'Gamblers' will bet on an outcome taking odds that have been agreed on by society, such as with a sporting bet or in a casino, and this relates to de Moivre's and Montmort's 'taming of chance'. 'Speculators' bet on a mis-calculation of the odds quoted by society, and the reason speculators are regarded as socially questionable is that they have opinions that are explicitly at odds with the consensus: they are practitioners who rebel against a theoretical 'Truth'. This is captured in Arjun Appadurai's argument that the leading agents in modern finance "believe in their capacity to channel the workings of chance to win in the games dominated by cultures of control . . . [they] are not those who wish to 'tame chance' but those who wish to use chance to animate the otherwise deterministic play of risk [quantifiable uncertainty]".

In the context of Pragmatism, financial speculators embody pluralism, a concept essential to Pragmatic thinking and an antidote to the problem of radical uncertainty. Appadurai was motivated to study finance by Marcel Mauss' essay Le Don (The Gift), which explores the moral force behind reciprocity in primitive and archaic societies, and he goes on to say that the contemporary financial speculator is "betting on the obligation of return", and that this is the fundamental axiom of contemporary finance. David Graeber (Debt: The First 5,000 Years) also recognises the fundamental position reciprocity has in finance, but whereas Appadurai recognises the importance of reciprocity in the presence of uncertainty, Graeber essentially ignores uncertainty in an analysis that ends with the conclusion that "we don't 'all' have to pay our debts". In advocating that reciprocity need not be honoured, Graeber is not just challenging contemporary capitalism but also the foundations of the civitas, based on equality and reciprocity. The origins of Graeber's argument lie in the first half of the nineteenth century. In 1836 John Stuart Mill defined political economy as being concerned with "[man] solely as a being who desires to possess wealth, and who is capable of judging of the comparative efficacy of means for obtaining that end".

In Principles of Political Economy With Some of Their Applications to Social Philosophy, Mill defended Thomas Malthus' An Essay on the Principle of Population, which focused on scarcity. Mill was writing at a time when Europe was struck by the cholera pandemic of 1829–1851 and the famines of 1845–1851, and while Lord Tennyson was describing nature as "red in tooth and claw". At this time, society's fear of uncertainty seems to have been replaced by a fear of scarcity, and this preoccupation with scarcity dominated economic thought through the twentieth century. Almost a hundred years after Mill, Lionel Robbins defined economics as "the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses". Dichotomies emerge in the aftermath of the Cartesian revolution that aims to remove doubt from philosophy. Theory and practice, subject and object, facts and values, means and ends are all separated. In this environment, ex cathedra norms, in particular utility (profit) maximisation, encroach on commercial practice.

In order to set boundaries on commercial behaviour motivated by profit maximisation, particularly when market uncertainty returned after the Nixon shock of 1971, society imposes regulations on practice. As a consequence, two competing ethics, a functional Consequentialist ethic guiding market practices and a regulatory Deontological ethic attempting to stabilise the system, vie for supremacy. It is in this debilitating competition between two essentially theoretical ethical frameworks that we offer an explanation for the Financial Crisis of 2007–2009: profit maximisation, not speculation, is destabilising in the presence of radical uncertainty, and regulation cannot keep up with motivated profit maximisers who can justify their actions through abstract mathematical models that bear little resemblance to actual markets. An implication of reorienting financial economics to focus on markets as centres of 'communicative action' is that markets could become self-regulating, in the same way that the legal or medical spheres are self-regulated through professions. This is not a 'libertarian' argument based on freeing the Consequentialist ethic from a Deontological brake. Rather, it argues that being a market participant entails norms that restrict the agent, such as sincerity and truth-telling, which support the creation of knowledge of asset prices within a broader objective of social cohesion. This immediately calls into question the legitimacy of algorithmic/high-frequency trading, which seems anathema to the principles of communicative action.

Nihilism Now! Monsters of Energy by Keith Ansell Pearson and Diane Morgan


Blurb: Have we had enough? But enough of what exactly? Of our mourning and melancholia? Of postmodern narcissism? Of our depressive illness and anxieties of not ‘being there’ any longer? Enough of enough! We now ask: what of the future of the human and of the future of the future? Is it now possible to produce revitalised ways of thinking and modes of existing that have digested the demand for transhuman overcomings and so are able to navigate new horizons of virtual becoming? Is it possible to save thought from its current degenerative and vegetative state at the hands of a smug and cosy postmodern academicism? Can we still invent new concepts?

If one follows certain influential contemporary accounts, it would appear as if the experience and question of nihilism have become passé. Is not the urgency informing the question of the ‘now’ of nihilism redundant and otiose? For Jean Baudrillard, for example, there is now only the simulation of a realised nihilism and little remains of a possible nihilism (a nihilism of the possible) in theory. In relation to previous forms of nihilism (romanticism, surrealism and dadaism) we find ourselves in an ‘insoluble position’. Our nihilism today is neither aesthetic nor political. The apocalypse is over, its time has gone and lies behind us:

The apocalypse is finished, today it is the precession of the neutral, of the forms of the neutral and of indifference. (Baudrillard)

Baudrillard goes on to make the claim, terrifying in its full import, that all that remains is a ‘fascination’ for these indifferent forms and for the operation of the system that annihilates us.

Surely, Baudrillard is being ironic when he claims that this mode of nihilism is our current ‘passion’? How can one be passionate about indifference and one’s own annihilation? As Baudrillard acknowledges, this is the nihilism of the observer and accepter. It is the nihilism of the passive nihilist who no longer aspires towards a transcendence or overcoming of the human (condition), but who simply announces and enjoys its disappearance, the spectator watching the spectacle of his own demise. History, politics, metaphysics, have all reached their terminal point, and willing nothingness appears to be the only desire of the will available to the post-modern mind:

The dialectic stage, the critical stage is empty. There is no more stage . . . The masses themselves are caught up in a gigantic process of inertia through acceleration. They are this excrescent, devouring, process that annihilates all growth and all surplus meaning. They are this circuit short-circuited by a monstrous finality. (Baudrillard)

(Keith Ansell Pearson and Diane Morgan, Nihilism Now! Monsters of Energy)