Game’s Degeneracy Running Proportional to Polytope’s Redundancy.

For a given set of vertices V ⊆ R^K, a polytope P can be defined as the following set of points:

P = {∑_{i=1}^{|V|} λ_i v_i ∈ R^K | ∑_{i=1}^{|V|} λ_i = 1; λ_i ≥ 0; v_i ∈ V}


A polytope can equivalently be viewed as an intersection of half-spaces, each bounded by a hyperplane that separates the space into two regions. If a polytope is defined as an intersection of half-spaces, then for a matrix M ∈ R^{m×n} and a vector b ∈ R^m, the polytope P is defined as the set of points

P = {x ∈ R^n | Mx ≤ b}
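The two descriptions pick out the same sets. As a minimal sketch (not from the original text), the unit square can be tested for membership under both representations; the vertex test is phrased as a feasibility linear program, and the square, the matrix M and the vector b below are illustrative choices:

import numpy as np
from scipy.optimize import linprog

# Unit square in R^2 (an illustrative example).
V = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)      # V-representation: vertices
M = np.array([[-1, 0], [1, 0], [0, -1], [0, 1]], dtype=float)    # H-representation: Mx <= b
b = np.array([0, 1, 0, 1], dtype=float)                          # encodes 0 <= x_1, x_2 <= 1

def in_P_vertices(x):
    # Is x a convex combination of the rows of V?  Feasibility LP over lambda >= 0, sum(lambda) = 1.
    k = len(V)
    A_eq = np.vstack([V.T, np.ones((1, k))])
    b_eq = np.append(np.asarray(x, dtype=float), 1.0)
    res = linprog(c=np.zeros(k), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * k)
    return res.success

def in_P_halfspaces(x):
    # Is Mx <= b satisfied?
    return bool(np.all(M @ np.asarray(x, dtype=float) <= b + 1e-9))

for x in [(0.5, 0.5), (1.2, 0.3)]:
    print(x, in_P_vertices(x), in_P_halfspaces(x))   # both tests agree: True, then False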

Switching over to a two-player game (A, B), with A, B ∈ R^{m×n}_{>0}, the row/column best response polytopes P and Q are defined by:

P = {x ∈ R^m | x ≥ 0; xB ≤ 1}

Q = {y ∈ R^n | Ay ≤ 1; y ≥ 0}

The polytope P corresponds to the set of (scaled) row strategies together with an upper bound on the payoff the column player can obtain when playing against them.

An affine combination of points z_1, …, z_k in some Euclidean space is of the form ∑_{i=1}^{k} λ_i z_i, where λ_1, …, λ_k are reals with ∑_{i=1}^{k} λ_i = 1. It is called a convex combination if λ_i ≥ 0 for all i. A set of points is convex if it is closed under forming convex combinations. Given points are affinely independent if none of them is an affine combination of the others. A convex set has dimension d iff it has d + 1, but no more, affinely independent points.

A polyhedron P in R^d is a set {z ∈ R^d | Cz ≤ q} for some matrix C and vector q. It is called full-dimensional if it has dimension d. It is called a polytope if it is bounded. A face of P is a set {z ∈ P | cz = q_0} for some c ∈ R^d, q_0 ∈ R, such that the inequality cz ≤ q_0 holds for all z in P. A vertex of P is the unique element of a zero-dimensional face of P. An edge of P is a one-dimensional face of P. A facet of a d-dimensional polyhedron P is a face of dimension d − 1. It can be shown that any nonempty face F of P can be obtained by turning some of the inequalities defining P into equalities, which are then called binding inequalities. That is, F = {z ∈ P | c_i z = q_i, i ∈ I}, where c_i z ≤ q_i for i ∈ I are some of the rows in Cz ≤ q. A facet is characterized by a single binding inequality which is irredundant; i.e., the inequality cannot be omitted without changing the polyhedron. A d-dimensional polyhedron P is called simple if no point belongs to more than d facets of P, which is true if there are no special dependencies between the facet-defining inequalities. The “best response polyhedron” of a player is the set of that player’s mixed strategies together with the “upper envelope” of expected payoffs (and any larger payoffs) to the other player.

Nondegeneracy of a bimatrix game (A, B) can be stated in terms of the polytopes P and Q: no point in P has more than m labels, and no point in Q has more than n labels. (If x ∈ P has support of size k and L is the set of labels of x, then |L ∩ M| = m − k, so |L| > m implies x has more than k best responses in L ∩ N.) Then P and Q are simple polytopes, because a point of P, say, that is on more than m facets would have more than m labels. Even if P and Q are simple polytopes, the game can be degenerate if the description of a polytope is redundant in the sense that some inequality can be omitted, but nevertheless is sometimes binding. This occurs if a player has a pure strategy that is weakly dominated by or payoff equivalent to some other mixed strategy. Non-simple polytopes or redundant inequalities of this kind do not occur for “generic” payoffs; this illustrates the assumption of nondegeneracy from a geometric viewpoint. (A strictly dominated strategy may occur generically, but it defines a redundant inequality that is never binding, so this does not lead to a degenerate game.) Because the game is nondegenerate, only vertices of P can have m labels, and only vertices of Q can have n labels. Otherwise, a point of P with m labels that is not a vertex would be on a higher-dimensional face, and a vertex of that face, which is a vertex of P, would have additional labels. Consequently, only vertices of P and Q have to be inspected as possible equilibrium strategies. Algorithmically, if the input is a nondegenerate bimatrix game and the output is to be all Nash equilibria of the game, then the method is: for each vertex x of P − {0} and each vertex y of Q − {0}, if (x, y) is completely labeled, output the Nash equilibrium (x · 1/(1ᵀx), y · 1/(1ᵀy)).
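A brute-force rendering of this vertex-enumeration method, as a minimal sketch only: the 2×2 game below is a made-up example, vertices are found by intersecting pairs of facet-defining inequalities, and completely labeled pairs are normalized as in the formula above.

import itertools
import numpy as np

# Made-up 2x2 bimatrix game with positive payoffs (illustrative only).
A = np.array([[3.0, 1.0], [1.0, 3.0]])   # row player's payoffs
B = np.array([[2.0, 1.0], [1.0, 2.0]])   # column player's payoffs
m, n = A.shape

def vertices(M_ineq, b_ineq):
    # Brute-force vertices of {z : M_ineq z <= b_ineq} by intersecting d facet-defining inequalities.
    d = M_ineq.shape[1]
    found = []
    for rows in itertools.combinations(range(M_ineq.shape[0]), d):
        sub = M_ineq[list(rows)]
        if abs(np.linalg.det(sub)) < 1e-12:
            continue
        z = np.linalg.solve(sub, b_ineq[list(rows)])
        if np.all(M_ineq @ z <= b_ineq + 1e-9):
            found.append(z)
    return np.unique(np.round(found, 9), axis=0)

# P = {x >= 0, B^T x <= 1} and Q = {y >= 0, A y <= 1}, both written as M z <= b.
MP, bP = np.vstack([-np.eye(m), B.T]), np.concatenate([np.zeros(m), np.ones(n)])
MQ, bQ = np.vstack([-np.eye(n), A]), np.concatenate([np.zeros(n), np.ones(m)])

def labels_P(x):
    # label i in M if x_i = 0; label m + j in N if column j's inequality is binding (best response)
    return ({i for i in range(m) if abs(x[i]) < 1e-9} |
            {m + j for j in range(n) if abs((B.T @ x)[j] - 1) < 1e-9})

def labels_Q(y):
    # label i in M if row i's inequality is binding (best response); label m + j in N if y_j = 0
    return ({i for i in range(m) if abs((A @ y)[i] - 1) < 1e-9} |
            {m + j for j in range(n) if abs(y[j]) < 1e-9})

for x in vertices(MP, bP):
    for y in vertices(MQ, bQ):
        if x.sum() > 0 and y.sum() > 0 and labels_P(x) | labels_Q(y) == set(range(m + n)):
            print("equilibrium:", x / x.sum(), y / y.sum())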


Coarse Philosophies of Coarse Embeddabilities: Metric Space Conjectures Act Algorithmically On Manifolds – Thought of the Day 145.0


A coarse structure on a set X is defined to be a collection of subsets of X × X, called the controlled sets or entourages for the coarse structure, which satisfy some simple axioms. The most important of these states that if E and F are controlled then so is

E ◦ F := {(x, z) : ∃y, (x, y) ∈ E, (y, z) ∈ F}

The other axioms require that the diagonal should be a controlled set, and that subsets, transposes, and (finite) unions of controlled sets should be controlled. It is more accurate to say that a coarse structure is the large-scale counterpart of a uniformity than of a topology. Consider the metric spaces Z^n and R^n: their small-scale structure, their topology, is entirely different, but on the large scale they resemble each other closely, in that any geometric configuration in R^n can be approximated by one in Z^n, to within a uniformly bounded error. We think of such spaces as “coarsely equivalent”.

Coarse structures and coarse spaces enjoy a philosophical advantage over metric spaces, in that all left-invariant bounded-geometry metrics on a countable group induce the same coarse structure, which is therefore transparently and uniquely determined by the group. On the other hand, the absence of a natural gauge complicates the notion of a coarse family: while it is natural to speak of sets of uniform size in different metric spaces, it is not possible to do so in different coarse spaces without imposing additional structure.

Mikhail Leonidovich Gromov introduced the notion of coarse embedding for metric spaces. Let X and Y be metric spaces.

A map f : X → Y is said to be a coarse embedding if there exist nondecreasing functions ρ_1 and ρ_2 from R_+ = [0, ∞) to R such that the following hold (a worked example follows the list):

  • ρ_1(d(x, y)) ≤ d(f(x), f(y)) ≤ ρ_2(d(x, y)) for all x, y ∈ X.
  • lim_{r→∞} ρ_i(r) = +∞ (i = 1, 2).
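A worked example, not from the original text, just to make the definition concrete:

% Example: the inclusion of the integers into the reals is a coarse embedding.
\[
  f \colon \mathbb{Z} \to \mathbb{R}, \qquad f(n) = n,
  \qquad d(f(x), f(y)) = |x - y| = d(x, y),
\]
% so both inequalities hold with rho_1(r) = rho_2(r) = r, and rho_i(r) -> +infinity as r -> infinity.
% Non-example: g(n) = sgn(n) * log(1 + |n|) is not a coarse embedding of Z into R,
% since pairs x = N, y = N + r satisfy d(g(x), g(y)) = log((1 + N + r)/(1 + N)) -> 0 as N -> infinity,
% forcing rho_1(r) <= 0, which contradicts rho_1(r) -> +infinity.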

Intuitively, coarse embeddability of a metric space X into Y means that we can draw a picture of X in Y which reflects the large-scale geometry of X. In the early 1990s, Gromov suggested that coarse embeddability of a discrete group into Hilbert space or some Banach spaces should be relevant to solving the Novikov conjecture. The connection between large-scale geometry and differential topology and differential geometry, such as the Novikov conjecture, is built by index theory. Recall that an elliptic differential operator D on a compact manifold M is Fredholm in the sense that the kernel and cokernel of D are finite dimensional. The Fredholm index of D, which is defined by

index(D) = dim(kerD) − dim(cokerD),

has the following fundamental properties:

(1) it is an obstruction to invertibility of D;

(2) it is invariant under homotopy equivalence.

The celebrated Atiyah-Singer index theorem computes the Fredholm index of elliptic differential operators on compact manifolds and has important applications. However, an elliptic differential operator on a noncompact manifold is in general not Fredholm in the usual sense, but Fredholm in a generalized sense. The generalized Fredholm index for such an operator is called the higher index. In particular, on a general noncompact complete Riemannian manifold M, John Roe (Coarse Cohomology and Index Theory on Complete Riemannian Manifolds) introduced a higher index theory for elliptic differential operators on M.

The coarse Baum-Connes conjecture is an algorithm to compute the higher index of an elliptic differential operator on a noncompact complete Riemannian manifold. By the descent principle, the coarse Baum-Connes conjecture implies the Novikov higher signature conjecture. Guoliang Yu has proved the coarse Baum-Connes conjecture for bounded geometry metric spaces which are coarsely embeddable into Hilbert space. The metric spaces which admit coarse embeddings into Hilbert space form a large class, including e.g. all amenable groups and hyperbolic groups. In general, however, there are counterexamples to the coarse Baum-Connes conjecture; a notorious class is given by expander graphs. On the other hand, the coarse Novikov conjecture (i.e. the injectivity part of the coarse Baum-Connes conjecture) is an algorithm for determining non-vanishing of the higher index. Kasparov-Yu have proved the coarse Novikov conjecture for spaces which admit coarse embeddings into a uniformly convex Banach space.

Why Should Modinomics Be Bestowed With An Ignoble Prize In Economics? Demonetization’s Spectacular Failure.

This lesson from history is quite well known:

Muhammad bin Tughlaq thought that maybe, if he could find an alternative currency, he could save some money. So he replaced the gold and silver coins with copper currency. Local goldsmiths started manufacturing these coins, which led to a loss of a huge sum of money to the court. He had to take his orders back and reissue gold/silver coins against those copper coins. This counter-decision was far more devastating, as people exchanged all their fake currency and emptied the royal treasury.

And nothing seems to have changed ideatically even after close to 800 years since, when the Prime Minister of India, Narendra Modi, launched his own version of the lunacy with another bold and bald, or rather balderdash, move. Throw in demonetization and flush out black money. Well, that was the reason promulgated, along with a host of other nationalistic-sounding derivatives like curbing terror funding, expanding the tax net, embracing the digital economy and making the banking system more foolproof by introducing banking accounts for the millions hitherto devoid of any. But financial analysts and economists on the left of the political spectrum saw this as a brazen proto-fascistic move, and they almost unanimously faulted the government for not really understanding the essence of black money. These voices of sanity were chased off the net, and chided in person and at fora by paid trolls of the ruling dispensation, who incidentally were as clueless about it as about their own existence. Some other motives of demonetization were smuggled in, in feeble voices, but weren't really paid any heed, for they would have sounded the economic disaster even back then. And these are the contraband that could give some credibility to the whole exercise, even though it has turned the world's fastest-growing emerging economy (God knows how it even reached that pinnacle, but, so be it!) into a laughing stock of a democratically-elected dictatorial regime. What is the credibility talked about here? It was all about smashing the informal economy (which until the announcement of November 8 contributed 40% of the GDP and had a workforce bordering on 90% of the entire economy) to smithereens and sucking it into the formal channel by getting banking accounts formalized. Yes, this is a positive in the most negative sense, and even today the government and whatever voices emanate from Delhi refuse to consider it the numero uno aim.

Fast forward by 3 (the period of trauma) + 8 (the period of post-trauma) months, and the cat is out of the bag, slapping the government for its hubris. A spectacular failure it has turned out to be. The government has refused to reveal the details of how much money in banned notes was deposited back with the RBI, although 8 months have passed since the window of exchange closed in January this year. Despite repeated questioning in Parliament, in the Supreme Court and through RTIs, the government and the RBI have doggedly maintained that the old banned notes were still being counted. In June this year, Finance Minister Arun Jaitley claimed that each note was being checked for whether it was counterfeit and that the process would take “a long time”. The whole country saw through these lies: how can it take 8 months to count the notes? Obviously there was some hanky-panky going on. Despite its statutory responsibility to release data related to currency in circulation and its accounts, the RBI too was not doing so for this period. They were under instructions to fiddle around and not reveal the truth. Consider the statistics next:

As on November 8, 2016, there were 1,716.5 crore pieces of Rs. 500 and 685.8 crore pieces of Rs. 1,000 notes circulating in the economy, totalling Rs. 15.44 lakh crore. The Reserve Bank of India (Reserve Bank of India Annual Report 2016-17), which for as long as Urjit Patel has run the show has been criticized for surrendering the autonomy of the central bank to the whims and fancies of a PM-run circus, finally revealed that 99% of the junked notes (Rs. 500 + Rs. 1,000) have returned to the banking system. This revelation has begun to ricochet through the corridors of power, with severe criticism of the government's move to flush out black money and arrest corruption. When the RBI finally gave the figures through its annual report for 2016-17, it disclosed that Rs. 15.28 lakh crore of junked currency had formally entered the banking system through deposits, leaving a difference of a mere (yes, a 'mere' in this case) Rs. 16,050 crore of unaccounted-for money. Following through with more statistics: post-demonetization, the RBI spent Rs. 7,965 crore in 2016-17 on printing new Rs. 500 and Rs. 2,000 notes in addition to other denominations, more than double the Rs. 3,421 crore spent on printing new notes in the previous year. Demonetization, which was hailed as a bold step, has proved to be a complete damp squib, as the RBI said that just 7.1 pieces per million of Rs. 500 notes and 19.1 pieces per million of Rs. 1,000 notes in circulation were discovered to be fake, further implying that if demonetization was also meant to flush out counterfeit currency from the system, this hypothesis too failed miserably.
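For the record, the arithmetic behind those totals is easy to check (a back-of-the-envelope sketch, not part of the original post; 1 lakh = 10^5, 1 crore = 10^7):

crore, lakh_crore = 1e7, 1e12          # 1 lakh crore = 1e5 * 1e7

value_500 = 1716.5 * crore * 500       # value of the Rs. 500 notes in circulation
value_1000 = 685.8 * crore * 1000      # value of the Rs. 1000 notes in circulation
print((value_500 + value_1000) / lakh_crore)      # ~15.44 lakh crore, as quoted

returned = 15.28                                  # lakh crore deposited back, per the RBI annual report
print((15.44 - returned) * lakh_crore / crore)    # ~16,000 crore left unaccounted for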

The Opposition was quick to seize on the data, with former Finance Minister P. Chidambaram taking to Twitter.


He further lamented that, with 99% of the currency exchanged, demonetization looked like a scheme designed to convert black money into white. Naresh Agarwal of the Samajwadi Party said his party would move a privilege motion against Urjit Patel for misleading a Parliamentary panel on the issue.

But what of the immense collateral damage that the exercise caused? And why is the government still so shameless in defending a lunacy? Finance Minister Arun Jaitley asserted that any attempt to measure the success of the government's demonetization exercise on the basis of the amount of money that stayed out of the system was flawed, since the confiscation of money had not been the objective. He maintained that the government had met its principal objectives of reducing the reliance on cash in the economy, expanding the tax base and pushing digitisation. Holy shit! And he, along with his comrades, is selling and marketing this crap, and sadly the majority would even buy into it. Let us hear him out on the official position:

Denying that demonetisation failed to achieve its objectives, Finance Minister Arun Jaitley said the measure had succeeded in reducing cash in the economy, increasing digitisation, expanding the tax base, checking black money and in moving towards integrating the informal economy with the formal one. “The objective of demonetisation was that India is a high-cash economy and that scenario needs to be altered,” Jaitley said following the release of the Reserve Bank of India's (RBI) annual report for the last fiscal, which gave the figures, for the first time, of demonetised notes returned to the system. The RBI said that of the Rs 15.44 lakh crore of notes taken out of circulation by the demonetisation of Rs 500 and Rs 1,000 notes last November, Rs 15.28 lakh crore, or almost 99 per cent, had returned to the system by way of deposits by the public.

“The other objectives of demonetisation were to combat black money and expand the tax base. Post demonetisation, tariff tax base has increased substantially. Personal IT returns have increased by 25 per cent,” the Finance Minister said. “Those dealing in cash currency have now been forced to deposit these in banks, the money has got identified with a particular owner,” he said. “Expanding of the indirect tax base is evident from the results of the GST collections, which shows more and more transactions taking place within the system,” he added. Jaitley said the government has collected Rs 92,283 crore as Goods and Services Tax (GST) revenue for the first month of its roll-out, exceeding the target, while 21.19 lakh taxpayers are yet to file returns. Thus, the July collections target has been met with only 64 per cent compliance.

“The next object of demonetisation is that digitisation must expand, which climaxed during demonetisation, and we are trying to sustain that momentum even after remonetisation is completed. Our aim was that the quantum of cash must come down,” Jaitley said. He noted in this regard that the RBI reports that the volume of cash transactions had reduced by 17 per cent post-demonetisation. A Finance Ministry reaction to the RBI report said a significant portion of the scrapped notes deposited “could possibly be representing unexplained/black money”. “Accordingly, ‘Operation Clean Money’ was launched on 31st January 2017. Scrutiny of about 18 lakh accounts, prima facie, did not appear to be in line with their tax profile. These were identified and have been approached through email/SMS.”

Jaitley slammed his predecessor P. Chidambaram for his criticism of the note ban, saying those who had not taken a single step against black money were trying to confuse the objectives of the exercise with the amount of currency that came back into the system. The Finance Ministry said transactions of more than three lakh registered companies are being scrutinised, while one lakh companies have been “struck off the list”. “The government has already identified more than 37,000 shell companies which were engaged in hiding black money and hawala transactions. The Income-tax Directorates of Investigation have identified more than 400 benami transactions up to May 23, 2017, and the market value of properties under attachment is more than Rs 600 crore,” it said. “The integration of the informal with the formal economy was one of the principal objectives of demonetisation,” Jaitley said. He also said that demonetisation had dealt a body blow to terrorist and Maoist financing, as was evident from the situation on the ground in Chhattisgarh and Jammu and Kashmir.

One thing is for sure: more and more gobbledygook is to follow.

One of the major offshoots of the demonetisation drive was a push towards a cashless, digital economy. The chart below presents the volume of cashless transactions in some of the major economies of the world, and one can only see India's dismal position: just about 2% of the volume of economic transactions in India are cashless.

[Chart: volume of cashless transactions across major economies (source: Equitymaster)]

Less cash would mean less black money…less corruption…and more transparency. Is it? Assuming it is, how far would the drive go? And was India really ready to go digital? There were 5.3 bank branches per one lakh Indians in rural India 15 years ago. On the eve of demonetization, the figure stood at 7.8 bank branches per one lakh Indians. This shows that a majority of rural India has very little access to banks and the organized financial sector. They rely heavily on cash and the informal credit system. Then, we have just 2.2 lakh ATMs in the country. For a population of over 1.2 billion people, that's a very small number. And guess what? A majority of ATMs are concentrated in metros and cities. For instance, Delhi has more ATMs than the entire state of Rajasthan. Given the poor penetration of banks and formal-sector financial services in rural India, Modi's cashless-economy ambitions were always a distant dream. Then there are issues related to security. Were the banks and other financial institutions technologically competent to tackle the security issues associated with the swift shift towards a digital economy? Can the common man fully trust that his hard-earned money in the financial system will be safe from hackers and fraudsters? The answer does not seem to be a comforting one!

“Those dealing in cash currency have now been forced to deposit these in banks, the money has got identified with a particular owner.” So surveillance was the reason. It makes sense why they are so desperate to link Aadhaar to bank accounts. Some researchers have considered a couple of factors which actually caused demonetization in India. The first is the refinancing of public sector banks in India. 80% of banks in India are run by the government; during the last two decades these banks have been used to lend out loans to corporations in ways that stink of cronyism. These politically-affiliated businesses did not pay back their money, which has resulted in the accumulation of a huge amount of non-performing assets (NPAs) within these banks. For the last three years, warning signals had been continuously coming about their collapse. Through demonetization, millions of poor people have deposited their meagre sums within these banks, which has resulted in their refinancing, so that they can now lend the money to the same guys who earlier did not pay back their loans. Sounds pretty simplistic, right? Sad, but true: it is this simple. The second factor is the influence of technology and communications companies on the government, as these companies are among the fastest growing ones of the last two decades. Making payments through digital gateways will be very beneficial for their growth; they can expand their influence over the whole human race. The statements from technology giants like Apple, Microsoft, MasterCard, Facebook, Google etc. clearly show their intentions behind a cashless society. Tim Cook, the chief executive of Apple, said that the “next generation of children will not know what money is” as he promotes Apple Pay as an alternative. MasterCard executives consider Apple Pay another step towards a cashless society. MasterCard is mining Facebook users' data to get consumer-behaviour information which it can sell to banks. Bill Gates said India will shift to digital payments, as the digital world lets you track things quickly. The acquisition of artificial intelligence companies by Google, Facebook and Microsoft is also at its peak. Over 200 private companies using AI algorithms across different verticals have been acquired since 2012, with over 30 acquisitions taking place in Q1 of 2017 alone. Apple acquired the voice recognition firm VocalIQ and the facial recognition firm RealFace, Google has acquired deep learning and neural network companies, and Facebook acquired Masquerade Technologies and Zurich Eye. So this is what is actually going on: private corporations and governments are desperate to introduce a cashless economy through biometric payment systems.

No black money was unearthed by Modi's historic folly. Terrorism has not gone down after demonetization, and neither has the circulation of counterfeit currency. So it was a failure on all counts, an outcome that had been predicted by economists worldwide. What the note ban did was cause untold suffering and misery to common people, destroy the livelihoods of millions of wage workers, bankrupt farmers because the prices of their produce crashed, and disrupt the economic life of the whole country. The only people who benefited from the note bandi were the companies that own digital payment systems (like PayTM, MobiKwik etc.) and credit card companies. It also seems now that, ultimately, the black money owners have benefited, because they managed to convert all their black wealth into white using proxies.

Momentum of Accelerated Capital. Note Quote.


Distinct types of high frequency trading firms include independent proprietary firms, which use private funds and specific strategies which remain secretive, and may act as market makers generating automatic buy and sell orders continuously throughout the day. Broker-dealer proprietary desks are part of traditional broker-dealer firms but are not related to their client business, and are operated by the largest investment banks. Thirdly, hedge funds focus on complex statistical arbitrage, taking advantage of pricing inefficiencies between asset classes and securities.

Today strategies using algorithmic trading and High Frequency Trading play a central role on financial exchanges, alternative markets, and banks’ internalized (over-the-counter) dealings:

High frequency traders typically act in a proprietary capacity, making use of a number of strategies and generating a very large number of trades every single day. They leverage technology and algorithms from end-to-end of the investment chain – from market data analysis and the operation of a specific trading strategy to the generation, routing, and execution of orders and trades. What differentiates HFT from algorithmic trading is the high frequency turnover of positions as well as its implicit reliance on ultra-low latency connection and speed of the system.

The use of algorithms in computerised exchange trading has experienced a long evolution with the increasing digitalisation of exchanges:

Over time, algorithms have continuously evolved: while initial first-generation algorithms – fairly simple in their goals and logic – were pure trade execution algos, second-generation algorithms – strategy implementation algos – have become much more sophisticated and are typically used to produce own trading signals which are then executed by trade execution algos. Third-generation algorithms include intelligent logic that learns from market activity and adjusts the trading strategy of the order based on what the algorithm perceives is happening in the market. HFT is not a strategy per se, but rather a technologically more advanced method of implementing particular trading strategies. The objective of HFT strategies is to seek to benefit from market liquidity imbalances or other short-term pricing inefficiencies.

While algorithms are employed by most traders in contemporary markets, the intense focus on speed and the momentary holding periods are the unique practices of the high frequency traders. As the defence of high frequency trading is built around the principles that it increases liquidity, narrows spreads, and improves market efficiency, the high number of trades made by HFT traders results in greater liquidity in the market. Algorithmic trading has resulted in the prices of securities being updated more quickly with more competitive bid-ask prices, and narrowing spreads. Finally HFT enables prices to reflect information more quickly and accurately, ensuring accurate pricing at smaller time intervals. But there are critical differences between high frequency traders and traditional market makers:

  1. HFT firms do not have an affirmative market-making obligation; that is, they are not obliged to provide liquidity by constantly displaying two-sided quotes, which may translate into a lack of liquidity during volatile conditions.
  2. HFT firms contribute little market depth due to the marginal size of their quotes, which may result in larger orders having to transact with many small orders, and this may impact overall transaction costs.
  3. HFT quotes are barely accessible due to the extremely short duration for which the liquidity is available, with orders cancelled within milliseconds.

Besides the shallowness of the HFT contribution to liquidity, there are real fears of how HFT can compound and magnify risk through the rapidity of its actions:

There is evidence that high-frequency algorithmic trading also has some positive benefits for investors by narrowing spreads – the difference between the price at which a buyer is willing to purchase a financial instrument and the price at which a seller is willing to sell it – and by increasing liquidity at each decimal point. However, a major issue for regulators and policymakers is the extent to which high-frequency trading, unfiltered sponsored access, and co-location amplify risks, including systemic risk, by increasing the speed at which trading errors or fraudulent trades can occur.

Although there have always been occasional trading errors and episodic volatility spikes in markets, the speed, automation and interconnectedness of today's markets create a different scale of risk. These risks demand that exchanges and market participants employ effective quality management systems and sophisticated risk mitigation controls adapted to these new dynamics in order to protect against potential threats to market stability arising from technology malfunctions or episodic illiquidity. However, there are more deliberate aspects of HFT strategies which may present serious problems for market structure and functioning, and where conduct may be illegal: for example, order anticipation seeks to ascertain the existence of large buyers or sellers in the marketplace and then to trade ahead of those buyers and sellers in anticipation that their large orders will move market prices. A momentum strategy involves initiating a series of orders and trades in an attempt to ignite a rapid price move. HFT strategies can resemble traditional forms of market manipulation that violate the Exchange Act:

  1. Spoofing and layering occur when traders create a false appearance of market activity by entering multiple non-bona fide orders on one side of the market at increasing or decreasing prices, in order to induce others to buy or sell the stock at a price altered by the bogus orders.
  2. Painting the tape involves placing successive small buy orders at increasing prices in order to stimulate increased demand.
  3. Quote stuffing and price fade are additional dubious HFT practices: quote stuffing floods the market with huge numbers of orders and cancellations in rapid succession, which may generate buying or selling interest, or compromise the trading position of other market participants. Order or price fade involves the rapid cancellation of orders in response to other trades. (A crude detection sketch follows this list.)
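As a minimal sketch of how such behaviour is screened for in practice (not from the original text, and with a made-up event log): a disproportionate cancel-to-trade ratio is a crude first flag for quote stuffing or layering.

from collections import Counter

# Hypothetical order-event log: (trader_id, event), where event is "order", "cancel" or "trade".
events = [
    ("T1", "order"), ("T1", "cancel"), ("T1", "order"), ("T1", "cancel"),
    ("T1", "order"), ("T1", "cancel"), ("T1", "order"), ("T1", "trade"),
    ("T2", "order"), ("T2", "trade"), ("T2", "order"), ("T2", "trade"),
]

def cancel_to_trade_ratio(events):
    # Traders who cancel far more often than they trade warrant a closer look.
    counts = {}
    for trader, event in events:
        counts.setdefault(trader, Counter())[event] += 1
    return {t: c["cancel"] / max(c["trade"], 1) for t, c in counts.items()}

print(cancel_to_trade_ratio(events))   # e.g. {'T1': 3.0, 'T2': 0.0}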

The World Federation of Exchanges insists: “Exchanges are committed to protecting market stability and promoting orderly markets, and understand that a robust and resilient risk control framework adapted to today's high speed markets is a cornerstone of enhancing investor confidence.” However, this ‘robust and resilient risk control framework’ seems lacking, including in the dark pools now established for trading that were initially proposed as safer than the open market.

Reclaim Modernity: Beyond Markets, Beyond Machines (Mark Fisher & Jeremy Gilbert)


It is understandable that the mainstream left has traditionally been suspicious of anti-bureaucratic politics. The Fabian tradition has always believed – has been defined by its belief – in the development and extension of an enlightened bureaucracy as the main vehicle of social progress. Attacking ‘bureaucracy’ has been – since at least the 1940s – a means by which the Right has attacked the very idea of public service and collective action. Since the early days of Thatcherism, there has been very good reason to become nervous whenever someone attacks bureaucracy, because such attacks are almost invariably followed by plans not for democratisation, but for privatisation.

Nonetheless, it is precisely this situation that has produced a certain paralysis of the Left in the face of one of its greatest political opportunities, an opportunity which it can only take if it can learn to speak an anti-bureaucratic language with confidence and conviction. On the one hand, this is a simple populist opportunity to unite constituencies within both the public and private sectors: simple, but potentially strategically crucial. As workers in both sectors and as users of public services, the public dislike bureaucracy and apparent over-regulation. The Left misses an enormous opportunity if it fails to capitalise on this dislike and transform it into a set of democratic demands.

On the other hand, anti-bureaucratism marks one of the critical points of failure and contradiction in the entire neoliberal project. For the truth is that neoliberalism has not kept its promise in this regard. It has not reduced the interference of managerial mechanisms and apparently pointless rules and regulations in the working life of public-sector professionals, or of public-service users, or of the vast majority of workers in the private sector. In fact it has led in many cases to an enormous proliferation and intensification of just these processes. Targets, performance indicators, quantitative surveys and managerial algorithms dominate more of life today than ever before, not less. The only people who really suffer less regulation than they did in the past are the agents of finance capital: banks, traders, speculators and fund managers.

Where de-regulation is a reality for most workers is not in their working lives as such, but in the removal of those regulations which once protected their rights to secure work, and to a decent life outside of work (pensions, holidays, leave entitlements, etc.). The precarious labour market is not a zone of freedom for such workers, but a space in which the fact of precarity itself becomes a mechanism of discipline and regulation. It only becomes a zone of freedom for those who already have enough capital to be able to choose when and where to work, or to benefit from the hyper-mobility and enforced flexibility of contemporary capitalism.

Reclaiming Modernity: Beyond Markets, Beyond Machines

What’s a Market Password Anyway? Towards Defining a Financial Market Random Sequence. Note Quote.

From the point of view of cryptanalysis, the algorithmic view based on frequency analysis may be taken as a hacker approach to the financial market. While the goal is clearly to find a sort of password unveiling the rules governing the price changes, what we claim is that the password may not be immune to a frequency analysis attack, because it is not the result of a true random process but rather the consequence of the application of a set of (mostly simple) rules. Yet that doesn’t mean one can crack the market once and for all, since for our system to find the said password it would have to outperform the unfolding processes affecting the market – which, as Wolfram’s PCE suggests, would require at least the same computational sophistication as the market itself, with at least one variable modelling the information being assimilated into prices by the market at any given moment. In other words, the market password is partially safe not because of the complexity of the password itself but because it reacts to the cracking method.
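By way of illustration only (not from the original, and with synthetic data standing in for market prices), a frequency-analysis “attack” on the sign sequence of price changes would look roughly like this:

import random
from collections import Counter

random.seed(0)
# Synthetic "price" series: a Gaussian random walk stands in for market data here.
prices = [100.0]
for _ in range(5000):
    prices.append(prices[-1] + random.gauss(0, 1))

# Frequency analysis on the up/down symbol sequence of price changes:
# compare n-gram frequencies with the uniform expectation for a fair coin.
symbols = "".join("U" if b > a else "D" for a, b in zip(prices, prices[1:]))

def ngram_bias(symbols, n=3):
    counts = Counter(symbols[i:i + n] for i in range(len(symbols) - n + 1))
    expected = sum(counts.values()) / 2 ** n
    return {g: round(c / expected, 2) for g, c in counts.most_common(5)}

print(ngram_bias(symbols))   # ratios near 1.0 look random; persistent deviations would be the "crack"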

[Figure 6: by extracting a normal distribution from the market distribution, the long tail is exposed]

Whichever kind of financial instrument one looks at, the sequences of prices at successive times show some overall trends and varying amounts of apparent randomness. However, despite the fact that there is no contingent necessity of true randomness behind the market, it can certainly look that way to anyone ignoring the generative processes, anyone unable to see what other, non-random signals may be driving market movements.

Von Mises’ approach to the definition of a random sequence, which seemed at the time of its formulation to be quite problematic, contained some of the basics of the modern approach adopted by Per Martin-Löf. It is during this time that the Keynesian kind of induction may have been resorted to as a starting point for Solomonoff’s seminal work (1 and 2) on algorithmic probability.

Per Martin-Löf gave the first suitable definition of a random sequence. Intuitively, an algorithmically random sequence (or random sequence) is an infinite sequence of binary digits that appears random to any algorithm. This contrasts with the idea of randomness in probability. In that theory, no particular element of a sample space can be said to be random. Martin-Löf randomness has since been shown to admit several equivalent characterisations in terms of compression, statistical tests, and gambling strategies.
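A crude finite-sequence analogue of the compression characterisation (Martin-Löf randomness itself concerns infinite sequences, so this is only an illustration, not the definition):

import os
import zlib

def compression_ratio(data: bytes) -> float:
    # Compressed size / original size: a value near 1.0 means the sequence resists compression.
    return len(zlib.compress(data, 9)) / len(data)

random_bytes = os.urandom(100_000)      # stands in for an algorithmically random-looking sequence
regular_bytes = b"01" * 50_000          # a highly regular sequence of the same length

print(compression_ratio(random_bytes))  # close to 1.0
print(compression_ratio(regular_bytes)) # far below 1.0: it fails this crude "statistical test"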

The predictive aim of economics is actually profoundly related to the concepts of prediction and betting. Imagine a random walk that goes up, down, left or right by one, with each step having the same probability. If the expected time at which the walk ends is finite, then the expected stop position is equal to the initial position; such a process is called a martingale. This is because the chances of going up, down, left or right are the same, so that one ends up close to one's starting position, if not exactly at that position. In economics, this can be translated into a trader's experience: the conditional expected assets of a trader are equal to his present assets if a sequence of events is truly random.
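A quick simulation of that walk (a toy check, not from the original) shows the martingale property numerically:

import random

random.seed(1)
MOVES = [(0, 1), (0, -1), (-1, 0), (1, 0)]   # up, down, left, right with equal probability

def walk(steps=200):
    # A fixed number of steps is a stopping time with finite expectation.
    x = y = 0
    for _ in range(steps):
        dx, dy = random.choice(MOVES)
        x, y = x + dx, y + dy
    return x, y

trials = [walk() for _ in range(20000)]
mean_x = sum(x for x, _ in trials) / len(trials)
mean_y = sum(y for _, y in trials) / len(trials)
print(mean_x, mean_y)   # both hover near 0: the expected stop position equals the starting position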

If market price differences accumulated in a normal distribution, a rounding would produce sequences of 0 differences only. The mean and the standard deviation of the market distribution are used to create a normal distribution, which is then subtracted from the market distribution.
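As a sketch of that subtraction (not from the original; a heavy-tailed Student-t sample stands in for real market price differences):

import numpy as np

rng = np.random.default_rng(0)
diffs = rng.standard_t(df=3, size=100_000)   # synthetic "market" price differences with fat tails

# Fit a normal distribution using the market distribution's own mean and standard deviation...
mu, sigma = diffs.mean(), diffs.std()
edges = np.linspace(-10, 10, 201)
centers = 0.5 * (edges[:-1] + edges[1:])
empirical, _ = np.histogram(diffs, bins=edges, density=True)
normal = np.exp(-0.5 * ((centers - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# ...and subtract it from the market distribution: the residual mass sits in the long tails.
residual = empirical - normal
tail = np.abs(centers) > 4
print(residual[tail].sum() * np.diff(edges)[0])   # positive: tail probability the normal fit cannot account for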

Schnorr provided another equivalent definition in terms of martingales. The martingale characterisation of randomness says that no betting strategy implementable by any computer (even in the weak sense of constructive strategies, which are not necessarily computable) can make money betting on a random sequence. In a true random memoryless market, no betting strategy can improve the expected winnings, nor can any option cover the risks in the long term.
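A toy demonstration of the martingale characterisation (illustrative only, with a pseudo-random sequence standing in for a truly random one): a simple computable betting strategy gains nothing on average.

import random

random.seed(2)
bits = [random.randint(0, 1) for _ in range(100_000)]   # stand-in for a random market sequence

capital, stake = 0.0, 1.0
last = bits[0]
for b in bits[1:]:
    guess = last                       # computable "strategy": bet that the next bit repeats the last one
    capital += stake if guess == b else -stake
    last = b

print(capital / (len(bits) - 1))       # average winnings per bet ~ 0: no edge against a random sequence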

Over the last few decades, several systems have shifted towards ever greater levels of complexity and information density. The result has been a shift towards Paretian outcomes, particularly within any event that contains a high percentage of informational content: when one plots the frequency rank of words contained in a large corpus of text data versus the number of occurrences or actual frequencies, Zipf showed that one obtains a power-law distribution.
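The rank-frequency computation behind Zipf's observation is short (a sketch with a toy corpus; on a large real corpus the fitted log-log slope comes out near -1):

import math
import re
from collections import Counter

# A toy corpus stands in for "a large corpus of text data"; any plain-text file would do.
text = ("the quick brown fox jumps over the lazy dog " * 200 +
        "the market of prices and the trader of trades and to " * 400)
words = re.findall(r"[a-z]+", text.lower())

freqs = [count for _, count in Counter(words).most_common()]
pairs = [(math.log(rank), math.log(f)) for rank, f in enumerate(freqs, start=1)]

# Least-squares slope of log(frequency) against log(rank): a straight line signals a power law.
mx = sum(x for x, _ in pairs) / len(pairs)
my = sum(y for _, y in pairs) / len(pairs)
slope = sum((x - mx) * (y - my) for x, y in pairs) / sum((x - mx) ** 2 for x, _ in pairs)
print(slope)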

Departures from normality could be accounted for by the algorithmic component acting in the market, as is consonant with some empirical observations and common assumptions in economics, such as rule-based markets and agents. The paper.

Causation in Financial Markets. Note Quote.


The algorithmic top-down view of causation in financial markets is essentially a deterministic, dynamical systems view. This can serve as an interpretation of financial markets whereby markets are understood through assets prices, representing information in the market, which can be described by a dynamical system model. This is the ideal encapsulated in the Laplacian vision:

We ought to regard the present state of the universe as the effect of its antecedent state and as the cause of the state that is to follow. An intelligence knowing all the forces acting in nature at a given instant, as well as the momentary positions of all things in the universe, would be able to comprehend in one single formula the motions of the largest bodies as well as the lightest atoms in the world, provided that its intellect were sufficiently powerful to subject all data to analysis; to it nothing would be uncertain, the future as well as the past would be present to its eyes. The perfection that the human mind has been able to give to astronomy affords but a feeble outline of such an intelligence.

Here boundary and initial conditions of variables uniquely determine the outcome for the effective dynamics at the level in the hierarchy where the model is applied. This implies that higher levels in the hierarchy can drive broad macro-economic behavior; for example, at the highest level there could exist some set of differential equations that describe the behavior of adjustable quantities, such as interest rates, and how they impact measurable quantities such as gross domestic product and aggregate consumption.

The literature on the Lucas critique addresses limitations of this approach. Nevertheless, from a completely ad hoc perspective, a dynamical systems model may offer a best approximation to relationships at a particular level in a complex hierarchy.

Predictors: This system actor views causation in terms of uniquely determined outcomes, based on known boundary and initial conditions. Predictors may be successful when mechanistic dependencies in economic realities become pervasive or dominant. An example of a prediction-based argument since the Global Financial Crisis (2007-2009) is the bipolar Risk-On/Risk-Off description of preferences, whereby investors shift to higher-risk portfolios when the global assessment of riskiness is established to be low and shift to low-risk portfolios when global riskiness is considered to be high. Mathematically, a simple approximation of the dynamics can be described by a Lotka-Volterra (or predator-prey) model, which, in economics, has been proposed as a way to model the dynamics of various industries by introducing trophic functions between various sectors, and ignoring smaller sectors by considering the interactions of only two industrial sectors. The excess liquidity due to quantitative easing and the prevalence and ease of trading in exchange traded funds and currencies, combined with low interest rates and the increased use of automation, provided a basis for the risk-on/risk-off analogy for analysing large capital flows in the global arena. In an Ising-Potts hierarchy, top-down causation is filtered down to the rest of the market through all the shared risk factors, and the top-down information variables, which dominate bottom-up information variables. At higher levels, bottom-up variables are effectively noise terms. Nevertheless, the behaviour of traders at lower levels can still become driven by correlations across assets, based on perceived global riskiness. Thus, risk-on/risk-off transitions can have amplified effects.
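A minimal numerical sketch of that predator-prey analogy, with made-up coefficients and simple Euler integration, purely to show the cycling behaviour (not a calibrated model):

# Toy risk-on/risk-off dynamics: x = allocation to risky assets, y = allocation to safe assets,
# coupled through Lotka-Volterra equations with illustrative coefficients.
alpha, beta, delta, gamma = 1.0, 0.4, 0.1, 0.6

def step(x, y, dt=0.01):
    dx = (alpha * x - beta * x * y) * dt
    dy = (delta * x * y - gamma * y) * dt
    return x + dx, y + dy

x, y = 2.0, 1.0
trajectory = [(x, y)]
for _ in range(5000):
    x, y = step(x, y)
    trajectory.append((x, y))

print(trajectory[0], trajectory[-1])   # the two "sectors" keep cycling around each other instead of settling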

Quantum K-NN Algo


k-nearest neighbours (k-NN) is a supervised algorithm where test vectors are compared against labelled training vectors. Classification of a test vector is performed by taking a majority vote of the class for the k nearest training vectors. In the case of k=1, this algorithm reduces to an equivalent of nearest-centroid. k-NN algorithms lend themselves to applications such as handwriting recognition and useful approximations to the traveling salesman problem.
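A plain classical k-NN classifier fits in a few lines (a toy sketch with made-up training data, included for contrast with the quantum variant discussed below):

import math
from collections import Counter

def knn_classify(test, training, k=3):
    # Majority vote among the k labelled training vectors nearest to the test vector.
    neighbours = sorted(training, key=lambda tv: math.dist(test, tv[0]))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

training = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"), ((0.2, 0.1), "A"),
            ((1.0, 1.0), "B"), ((0.9, 1.1), "B"), ((1.1, 0.9), "B")]

print(knn_classify((0.15, 0.15), training, k=3))   # "A"
print(knn_classify((0.95, 1.05), training, k=1))   # k=1: the single nearest neighbour decides, "B"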


[Figure: k-nearest-neighbour classifiers applied to simulated data; the broken purple curve in the background is the Bayes decision boundary.]

k-NN has two subtleties. Firstly, for datasets where a particular class has the majority of the training data points, there is a bias towards classifying into this class. One solution is to weight each classification calculation by the distance of the test vector from the training vector; however, this may still yield poor classifications for particularly under-represented classes. Secondly, the distance between each test vector and all training data must be calculated for each classification, which is resource intensive. The goal is to seek an algorithm with a favourable scaling in the number of training data vectors.

An extension of the nearest-centroid algorithm has been developed by Wiebe. First, the algorithm prepares a superposition of qubit states with the distance between each training vector and the input vector, using a suitable quantum sub-routine that encodes the distances in the qubit amplitudes. Rather than measuring the state, the amplitudes are transferred onto an ancilla register using coherent amplitude estimation. Grover's search is then used to find the smallest valued register, corresponding to the training vector closest to the test vector. Therefore, the entire classification occurs within the quantum computer, and we can categorize the quantum k-NN as an L2 algorithm. The advantage over Lloyd's algorithm is that the power of Grover's search has been used to provide a speedup, and it provides a full and clear recipe for implementation. The time scaling of the quantum k-NN algorithm is complex; however, it scales as Õ(√n log(n)) to first order. The dependence on m no longer appears except at higher orders.

The quantum k-NN algorithm is not a panacea. There are clearly laid out conditions on the application of quantum k-NN, such as the dependence on the sparsity of the data. The classification is decided by majority rule with no weighting and, as such, it is unsuitable for biased datasets. k-NN works well with a small number of input variables (p), but struggles when the number of inputs is very large. Each input variable can be considered a dimension of a p-dimensional input space. For example, if you had two input variables x1 and x2, the input space would be 2-dimensional. As the number of dimensions increases, the volume of the input space increases at an exponential rate. In high dimensions, points that may be similar may have very large distances. All points will be far away from each other and our intuition for distances in simple 2- and 3-dimensional spaces breaks down. This might feel unintuitive at first, but this general problem is called the “Curse of Dimensionality”.
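The distance-concentration effect behind the curse of dimensionality is easy to see numerically (a toy experiment on random points in the unit cube, not from the original text):

import numpy as np

rng = np.random.default_rng(0)

def relative_spread(dim, n_points=2000):
    # (max - min) / mean of distances from a random reference point to random points in [0, 1]^dim.
    points = rng.random((n_points, dim))
    ref = rng.random(dim)
    d = np.linalg.norm(points - ref, axis=1)
    return (d.max() - d.min()) / d.mean()

for dim in (2, 10, 100, 1000):
    print(dim, round(relative_spread(dim), 3))
# The relative spread shrinks as the dimension grows: "near" and "far" neighbours become
# nearly indistinguishable, which is what undermines k-NN in high dimensions.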

Bayesian Networks and Machine Learning

A Bayesian network (BN) is a probabilistic directed acyclic graph representing a set of random variables and their dependence on one another. BNs play an important role in machine learning as they can be used to calculate the probability of a new piece of data being sorted into an existing class by comparison with training data.


Each variable requires a finite set of mutually exclusive (independent) states. A node with a dependent is called a parent node and each connected pair has a set of conditional probabilities defined by their mutual dependence. Each node depends only on its parents and has conditional independence from any node it is not descended from. Using this definition, and taking n to be the number of nodes in the set of training data, the joint probability of the set of all nodes, {X_1, X_2, ⋯, X_n}, is defined for any graph as

P(X_1, X_2, ⋯, X_n) = ∏_{i=1}^{n} P(X_i | π_i)

where π_i refers to the set of parents of X_i. Any conditional probability between two nodes can then be calculated.
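A minimal sketch of that factorisation on a made-up three-node network (Rain → Sprinkler, and Rain, Sprinkler → WetGrass; all table values are illustrative only):

# Conditional probability tables for a hypothetical network.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: {True: 0.01, False: 0.99},      # P(Sprinkler | Rain)
               False: {True: 0.40, False: 0.60}}
P_wet = {(True, True): {True: 0.99, False: 0.01},    # P(WetGrass | Rain, Sprinkler)
         (True, False): {True: 0.80, False: 0.20},
         (False, True): {True: 0.90, False: 0.10},
         (False, False): {True: 0.00, False: 1.00}}

def joint(rain, sprinkler, wet):
    # Joint probability as the product over nodes of P(X_i | parents(X_i)).
    return P_rain[rain] * P_sprinkler[rain][sprinkler] * P_wet[(rain, sprinkler)][wet]

# Any conditional probability follows by summing joints, e.g. P(Rain = True | WetGrass = True):
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(num / den)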

An argument for the use of BNs over other methods is that they are able to “smooth” data models, making all pieces of data usable for training. However, for a BN with n nodes, the number of possible graphs is exponential in n; a problem which has been addressed with varying levels of success. The bulk of the literature on learning with BNs utilises model selection. This is concerned with using a criterion to measure the fit of the network structure to the original data, before applying a heuristic search algorithm to find an equivalence class that does well under these conditions. This is repeated over the space of BN structures. A special case of BNs is the dynamic (time-dependent) hidden Markov model (HMM), in which only outputs are visible and states are hidden. Such models are often used for speech and handwriting recognition, as they can successfully evaluate which sequences of words are the most common.


Quantum Bayesian networks (QBNs) and hidden quantum Markov models (HQMMs) have been demonstrated theoretically, but there is currently no experimental research. The format of a HMM lends itself to a smooth transition into the language of open quantum systems. Clark et al. claim that open quantum systems with instantaneous feedback are examples of HQMMs, with the open quantum system providing the internal states and the surrounding bath acting as the ancilla, or external state. This allows feedback to guide the internal dynamics of the system, thus conforming to the description of an HQMM.

Permeability of Autopoietic Principles (revisited) During Cognitive Development of the Brain

Distinctions and binaries have their problematics, and neural networks are no different when one such attempt is made regarding the information that flows from the outside into the inside, where interactions occur. The inside of the system has to cope with the outside of the system through mechanisms that are either predefined for the system under consideration, or having no independent internal structure at all to begin with. The former mechanism results in a loss of adaptability, since all possible eventualities would have to be catered for in the fixed, internal structure of the system. The latter is guided by conditions prevailing in the environment. In either case, learning to cope with the environmental conditions is the key to the system's reaching any kind of stability. But how would a system respond to its environment? According to the ideas propounded by Changeaux et al., this is possible in two ways, viz.,

  1. an instructive mechanism, directly imposed by the environment on the system's structure, and
  2. a selective mechanism, Darwinian in its import, which helps maintain order as a result of interactions between the system and the environment. The environment facilitates reinforcement, stabilization and development of the structure, without in any way determining it.

These two distinct ways, when exported to neural networks, take on the connotations of supervised and unsupervised learning. The position of Changeaux et al. is rooted in rule-based, formal and representational formats, and is thus criticized by the likes of Edelman. According to him, in a nervous system (his analysis is based upon nervous systems), neural signals in an information processing model are taken in from the periphery, and thereafter encoded in various ways to be subsequently transformed and retransformed during processing, generating an output. This not only puts extreme emphasis on formal rules, but also makes a claim about the nature of memory, which is considered to occur through the representation of events via the recording or replication of their informational details. Although Edelman's analysis takes the nervous system as its centrality, the informational modeling approach that he undertakes is blanketed over the ontological basis that forms the fabric of the universe. Connectionists have no truck with this approach, as can be easily discerned from a long quote Edelman provides:

The notion of information processing tends to put a strong emphasis on the ability of the central nervous system to calculate the relevant invariance of a physical world. This view culminates in discussions of algorithms and computations, on the assumption that brain computes in an algorithmic manner…Categories of natural objects in the physical world are implicitly assumed to fall into defined classes or typologies that are accessible to a program. Pushing the notion even further, proponents of certain versions of this model are disposed to consider that the rules and representation (Chomsky) that appear to emerge in the realization of syntactical structures and higher semantic functions of language arise from corresponding structures at the neural level.

Edelman is aware of the shortcomings in informational processing models, and therefore takes a leap into the connectionist fold with his proposal of a brain consisting of a large number of undifferentiated, but connected, neurons. At the same time, he gives a lot of credence to the organization occurring during the developmental phases of the brain. He lays out the following principles of this population thinking in his Neural Darwinism: The Theory of Neuronal Group Selection:

  1. The homogeneous, undifferentiated population of neurons is epigenetically diversified into structurally variant groups through a number of selective processes, called the “primary repertoire”.
  2. Connections among the groups are modified due to signals received during the interactions between the system and the environment housing the system. Such modifications, which occur during the post-natal period, become functionally active for future use, and form the “secondary repertoire”.
  3. With the setting up of “primary” and “secondary” repertoires, groups engage in interactions by means of feedback loops as a result of various sensory/motor responses, enabling the brain to interpret conditions in its environment and thus act upon them.

“Degenerate” is what Edelman calls the neural groups in the primary repertoire to begin with. This entails the possibility of a significant number of non-identical variant groups. This has another dimension to it as well, in that non-identical variant groups are distributed uniformly across the system. Within Edelman's nervous system case study, degeneracy and distributedness are crucial features to deny the localization of cortical functions on the one hand, and the existence of hierarchical processing structures in a narrow sense on the other. Edelman's cortical map formation incorporates the generic principles of autopoiesis. Cortical maps are collections (areas) of minicolumns in the brain cortex that have been identified as performing a specific information processing function. Schematically, it is like:

[Schematic: cortical map formation from minicolumns in the cortex]

In Edelman's theory, neural groups have an optimum size that is not known a priori, but develops spontaneously and dynamically. Within the cortex, this is achieved by means of inhibitory connections spread over a horizontal plane, while excitatory ones are vertically laid out, thus enabling the neuronal activity to be concentrated on the vertical plane rather than the horizontal one. Hebb's rule facilitates the utility function of this group. Impulses are carried on to neural groups, thereby activating them, and subsequently altering synaptic strengths. During the ensuing process, a correlation gets formed between neural groups, with possible overlapping of messages as a result of synaptic activity generated within each neural group. This correlational activity could be selected for by frequent exposure to such overlaps, and once selected, the group might start to exhibit its activity even in the absence of inputs or impulses. The selection is nothing but memory, and is always used in learning procedures.

A lot depends upon the frequency of exposure: if this is on the lower scale, memory, or selection, could simply fade away, and be made available for a different procedure. No wonder forgetting is always referred to as a precondition for memory. Fading away might be a useful criterion for freeing allotted memory storage space during the developmental process, but at the stage when groups of the right size are in place and ready for selection, weakly interacting groups would meet the fate of elimination. Elimination and retention of groups depend upon what Edelman refers to as the vitality principle, wherein sensitivity to the historical process finds more legitimacy, and extant groups influence the formation of new groups. The reason for including Edelman's case was specifically to highlight the permeability of self-organizing principles during the cognitive development of the brain, and also to pit the superiority of neural network/connectionist models in comprehending brain development against the traditional rule-based expert and formal systems of modeling techniques.
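As a toy illustration of the Hebbian selection described above (an illustrative sketch only, not Edelman's model): groups that repeatedly fire together end up with much stronger mutual connections than groups that fire independently.

import numpy as np

rng = np.random.default_rng(0)

n_groups, eta = 4, 0.01
W = np.zeros((n_groups, n_groups))     # connection strengths between "neural groups"

for _ in range(2000):
    a = np.zeros(n_groups)
    shared = rng.random() < 0.5        # a common input drives groups 0 and 1 together
    a[0] = a[1] = 1.0 if shared else 0.0
    a[2] = float(rng.random() < 0.2)   # groups 2 and 3 fire independently and rarely
    a[3] = float(rng.random() < 0.2)
    W += eta * np.outer(a, a)          # Hebb's rule: strengthen connections between co-active groups
    np.fill_diagonal(W, 0.0)

print(np.round(W, 1))   # the 0-1 weight dominates: the correlated pair is "selected"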

In order to understand the nexus between brain development and environment, it would be safe to carry Edelman's analysis further. It is a commonsense belief to link structural changes in the brain with environmental effects. Even if one takes recourse to Darwinian evolution, these changes are either delayed due to systemic resistance to letting these effects take over, or, in a not so Darwinian fashion, the effects are a compounded resultant of the groups embedded within the network. On the other hand, Edelman's cortical map formation is not just confined to the processes occurring within the brain's structure alone, but is also realized by how the brain explores its environment. This aspect is nothing but motor behavior in its nexus between the brain and environment, and is strongly voiced by Cilliers when he calls attention to it:

The role of active motor behavior forms the first half of the argument against abstract, solipsistic intelligence. The second half concerns the role of communication. The importance of communication, especially the use of symbol systems (language), does not return us to the paradigm of objective information- processing. Structures for communication remain embedded in a neural structure, and therefore will always be subjected to the complexities of network interaction. Our existence is both embodied and contingent.

Edelman has been criticized for giving no role to replication in his theory, even though replication is a strong pillar of natural selection and learning. Recently, attempts to incorporate replication in the brain have been undertaken, and strong indications that neuronal replicators using Hebb’s learning mechanism hold more promise than natural selection alone have come into the limelight (Fernando, Goldstein and Szathmáry). Once given a mathematical description and treatment, such autopoietic systems can be modeled on a computer or a digital system, thus helping to give insights into a world pregnant with complexity.

Autopoiesis goes directly to the heart of anti-foundationalism. This is because the epistemological basis of basic beliefs is paid no due respect or justificatory support in the autopoietic system’s insistence on internal interactions and external contingent factors obligating the system to undergo continuous transformations. If autopoiesis can survive wonderfully well without any transcendental intervention or a priori definition, it has parallels running within French theory. If anti-foundationalism is the hallmark of autopoiesis, so is anti-reductionism, since it is well-nigh impossible to have meaning explicated in terms of atomistic units, especially when the systems are already anti-foundationalist. Even in biologically contextual terms, a mereology, according to Garfinkel, is emergent as a result of the complex interactions that go on within the autopoietic system. Garfinkel says,

We have seen that modeling aggregation requires us to transcend the level of the individual cells to describe the system by holistic variables. But in classical reductionism, the behavior of holistic entities must ultimately be explained by reference to the nature of their constituents, because those entities ‘are just’ collections of the lower-level objects with their interactions. Although, it may be true in some sense that systems are just collections of their elements, it does not follow that we can explain the system’s behavior by reference to its parts, together with a theory of their connections. In particular, in dealing with systems of large numbers of similar components, we must make recourse to holistic concepts that refer to the behavior of the system as a whole. We have seen here, for example, concepts such as entrainment, global attractors, waves of aggregation, and so on. Although these system properties must ultimately be definable in terms of the states of individuals, this fact does not make them ‘fictions’; they are causally efficacious (hence, real) and have definite causal relationships with other system variables and even to the states of the individuals.

Autopoiesis gains vitality when systems thinking opens up avenues for accepting contradictions and opposites rather than merely trying to get rid of them. Vitality is centered around a conflict, and ideally comes into balanced existence when such a conflict, or strife, helps facilitate consensus building or cooperation. If such goals are achieved, the analysis of complexity theory gets a boost, and moreover, by being sensitive to autopoiesis, an appreciation of the real Lebenswelt gets underlined. Memory and history are essential for complex autopoietic systems, whether biological and/or social, and this can be fully comprehended in quite routine situations: systems that are identical in most respects but differ in their histories will have different trajectories in responding to the situations they face. Memory does not determine the final description of the system, since it is itself susceptible to transformations, and what really gets passed on are traces; the same susceptibility to transformations applies to the traces as well. Memory, moreover, is not stored in the brain as discrete units but rather as a distributed pattern, and this is the pivotal characteristic of self-organizing complex systems over any other form of iconic representation. This property of transformation associated with autopoietic systems is enough to suspend the process between activity and passivity, in that the one is determination by the environment and the other is impact on the environment. This is really important in autopoiesis, since the distinction between inside and outside, and between active and passive, is difficult to discern, and this disappearance of distinction is a sufficient case against any authoritative control residing within the system and/or emanating from any single source. Autopoiesis scores over other representational modeling techniques by its ability to self-reflect, that is, by the system’s ability to act upon itself. For Lawson, reflexivity disallows any static description of the system, since it is not possible to intercept the reflexive moment, and it also disallows a complete description of the system at a meta-level. Even if a meta-level description can be construed, it yields only frozen frames or snapshots of the system at a given instant, and hence ignores the temporal dimension the system undergoes. For that to be taken into account, and for the complexity within the system to be measured, the roles of activity and passivity cannot be ignored at any cost, despite the great difficulties they present for modeling. But is this not really a blessing in disguise, since the model of a complex system should be retentive of the complexity in the real world? Well, the answer is yes, it is.
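The claim that memory is stored not as discrete units but as a distributed pattern can be illustrated with a standard Hopfield-style autoassociative sketch. This is not Edelman’s or Lawson’s formalism, just a familiar toy in which every memory is smeared across the entire weight matrix and recall proceeds from a degraded trace rather than from a lookup of a stored icon. The pattern sizes, the number of memories, and the noise level below are arbitrary.

```python
import numpy as np

def store(patterns):
    """Hebbian outer-product rule: every memory is spread across the whole
    weight matrix, so no single unit or connection 'contains' it."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no self-connections
    return W / len(patterns)

def recall(W, trace, steps=10):
    """Recover a stored pattern from a degraded trace by repeated thresholding."""
    s = trace.astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

rng = np.random.default_rng(1)
memories = rng.choice([-1.0, 1.0], size=(3, 64))   # three hypothetical distributed memories
W = store(memories)

trace = memories[0].copy()
corrupt = rng.choice(64, size=10, replace=False)   # the 'trace' is transformed, not exact
trace[corrupt] *= -1

print(np.array_equal(recall(W, trace), memories[0]))  # typically True for mild corruption
```

Because each memory lives in all the weights at once, corrupting a few bits of the cue degrades recall gracefully rather than deleting any single “stored item”, which is the sense in which only traces, not fixed records, get passed on.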

Somehow, the discussion till now still smells of anarchy within autopoiesis, and if there is no satisfactory account of predictability and stability within the self-organizing system, the fears only get aggravated. A system in which small changes or alterations in the causes produce huge effects is definitely not a candidate for stability, and autopoietic systems are precisely such (a minimal numerical sketch of this sensitivity follows the quotation below). Does this mean that they are unstable, or does it call for a reworking of the notion of stability? That this is philosophically contentious is beyond doubt. Instability could be a result of probabilities, but complex systems have to fall outside the realm of such probabilities. What happens in complex systems is a result of complex interactions among a large number of factors that need not be logically compatible. At the same time, stochasticity has no room here, for it serves as an escape route from the annals of classical determinism, and a theory based on such escape routes could never be a theory of self-organization (Pattee). Stability is closely related to the ability to predict, and if stability is something very different from what classical determinism says it is, the case for predictability should be no different. The problems with prediction are gross, as echoed in the words of Krohn and Küppers,

In the case of these ‘complex systems’ (Nicolis and Prigogine), or ‘non-trivial’ machines, a functional analysis of input-output correlations must be supplemented by the study of ‘mechanisms’, i.e. by causal analysis. Due to the operational conditions of complex systems it is almost impossible to make sense of the output (in terms of the functions or expected effects) without taking into account the mechanisms by which it is produced. The output of the system follows the ‘history’ of the system, which itself depends on its previous output taken as input (operational closure). The system’s development is determined by its mechanisms, but cannot be predicted, because no reliable rule can be found in the output itself. Even more complicated are systems in which the working mechanisms themselves can develop according to recursive operations (learning of learning; invention of inventions, etc.).
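As promised above, here is the minimal numerical sketch of the sensitivity claim. The logistic map is a standard deterministic toy, not an autopoietic system, but it shows cleanly how two trajectories that begin one part in a billion apart lose all resemblance within a few dozen iterations, so that long-run prediction fails even when the generating rule is simple and fully known.

```python
# Logistic map x -> r*x*(1-x) at r = 4: a simple, fully deterministic rule
# whose trajectories nonetheless diverge exponentially from nearby starts.
r = 4.0
x, y = 0.2, 0.2 + 1e-9          # two initial conditions differing by one part in a billion
for step in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.9f}")
```

The gap roughly doubles each step, so by around step thirty the two runs are as far apart as the state space allows; knowing the mechanism does not, by itself, yield prediction of the output.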

The Krohn and Küppers passage above clearly indicates the predicaments encountered when attempting to provide explanations of predictability. Although it is quite difficult to get rid of these predicaments, attempts to mitigate them, so as to keep noise from distorting or disturbing the stability and predictability of the systems, are always in the pipeline. One such attempt lies in collating or mapping constraints onto a real epistemological fold of history and environment, and thereafter applying them to studies of the social and the political. This is voiced very strongly as a parallel metaphor in Luhmann, when he draws attention to the following:

Autopoietic systems, then, are not only self organizing systems. Not only do they produce and eventually change their own structures but their self-reference applies to the production of other components as well. This is the decisive conceptual innovation. It adds a turbo charger to the already powerful engine of self-referential machines. Even elements, that is, last components (individuals), which are, at least for the system itself, undecomposable, are produced by the system itself. Thus, everything which is used as a unit by the system is produced as a unit by the system itself. This applies to elements, processes, boundaries and other structures, and last but not least to the unity of the system itself. Autopoietic systems, of course, exist within an environment. They cannot exist on their own. But there is no input and no output of unity.

What this entails for social systems is that they are autopoietically closed: while they rely on resources from their environment, those resources do not become part of the systemic operation. The system therefore never tries its luck at adjusting to changes that are merely superficial, frittering away its available resources in the process, instead of attending to trends that are not superficial. Were a system ever to fall from grace by acclimatizing to such fluctuations, a choice that is at once ethical and contextual is resorted to. Within distributed systems as such, a central authority is paid no heed, since such a scenario could result in a general degeneracy of the system as a whole. What gets highlighted instead is the ethical choice of decentralization, to ensure the system’s survivability and dynamism. Such an ethical treatment is no less altruistic.