Something Out of Almost Nothing. Drunken Risibility.

Kant’s first antinomy makes the error of the excluded third option, i.e. it is not impossible that the universe could have both a beginning and an eternal past. If some kind of metaphysical realism is true, including an observer-independent and relational time, then a solution of the antinomy is conceivable. It is based on the distinction between a microscopic and a macroscopic time scale. Only the latter is characterized by an asymmetry of nature under a reversal of time, i.e. the property of having a global (coarse-grained) evolution – an arrow of time – or many arrows, if they are independent from each other. Thus, the macroscopic scale is by definition temporally directed – otherwise it would not exist.

On the microscopic scale, however, only local, statistically distributed events without dynamical trends, i.e. a global time-evolution or an increase of entropy density, exist. This is the case if one or both of the following conditions are satisfied: First, if the system is in thermodynamic equilibrium (e.g. there is degeneracy). And/or second, if the system is in an extremely simple ground state or meta-stable state. (Meta-stable states have a local, but not a global minimum in their potential landscape and, hence, they can decay; ground states might also change due to quantum uncertainty, i.e. due to local tunneling events.) Some still speculative theories of quantum gravity permit the assumption of such a global, macroscopically time-less ground state (e.g. quantum or string vacuum, spin networks, twistors). Due to accidental fluctuations, which exceed a certain threshold value, universes can emerge out of that state. Due to some also speculative physical mechanism (like cosmic inflation) they acquire – and, thus, are characterized by – directed non-equilibrium dynamics, specific initial conditions, and, hence, an arrow of time.

It is a matter of debate whether such an arrow of time is

1) irreducible, i.e. an essential property of time,

2) governed by some unknown fundamental and not only phenomenological law,

3) the effect of specific initial conditions or

4) of consciousness (if time is in some sense subjective), or

5) even an illusion.

Many physicists favour special initial conditions, though there is no consensus about their nature and form. But in the context at issue it is sufficient to note that such a macroscopic global time-direction is the main ingredient of Kant’s first antinomy, for the question is whether this arrow has a beginning or not.

If time’s arrow were inevitably subjective, ontologically irreducible, fundamental and not merely a kind of illusion – if, for instance, some form of metaphysical idealism were true – then physical cosmology about a time before time would be mistaken or quite irrelevant. However, if we do not want to neglect an observer-independent physical reality and adopt solipsism or other forms of idealism – and there are strong arguments in favor of some form of metaphysical realism – Kant’s rejection seems hasty. Furthermore, if a Kantian is not willing to give up some kind of metaphysical realism, namely the belief in a “Ding an sich“, a thing in itself – and some philosophers, the German idealists for instance, actually insisted that this is superfluous – he has to admit either that time is a subjective illusion or that there is a dualism between an objective timeless world and a subjective arrow of time. Contrary to Kant’s thoughts, there are reasons to believe that it is possible, at least conceptually, that time both has a beginning – in the macroscopic sense of an arrow – and is eternal – in the microscopic sense of a steady state with statistical fluctuations.

Is there also some physical support for this proposal?

Surprisingly, quantum cosmology offers a possibility that the arrow has a beginning and that it nevertheless emerged out of an eternal state without any macroscopic time-direction. (Note that there are some parallels to a theistic conception of the creation of the world here, e.g. in the Augustinian tradition which claims that time together with the universe emerged out of a time-less God; but such a cosmological argument is quite controversial, especially in a modern form.) So this possible overcoming of the first antinomy is not only a philosophical conceivability but is already motivated by modern physics. At least some scenarios of quantum cosmology, quantum geometry/loop quantum gravity, and string cosmology can be interpreted as examples of such a local beginning of our macroscopic time out of a state with microscopic time, but with an eternal, global macroscopic timelessness.

To put this in a more general, though abstract, framework and to get a sketchy illustration, consider the figure below.

[Figure: potential (energy density) landscape of a single field, with temporarily stable depressions and a ground state]

Physical dynamics can be described using “potential landscapes” of fields. For simplicity, here only the variable potential (or energy density) of a single field is shown. To illustrate the dynamics, one can imagine a ball moving along the potential landscape. Depressions stand for states which are stable, at least temporarily. Due to quantum effects, the ball can “jump over” or “tunnel through” the hills. The deepest depression represents the ground state.

In the common theories the state of the universe – the product of all its matter and energy fields, roughly speaking – evolves out of a metastable “false vacuum” into a “true vacuum” which has a state of lower energy (potential). There might exist many (perhaps even infinitely many) true vacua which would correspond to universes with different constants or laws of nature. It is more plausible to start with a ground state which is the minimum of what physically can exist. According to this view an absolute nothingness is impossible. There is something rather than nothing because something cannot come out of absolutely nothing, and something does obviously exist. Thus, something can only change, and this change might be described with physical laws. Hence, the ground state is almost “nothing”, but can become thoroughly “something”. Possibly, our universe – and, independently of this, many others, probably most of them with different physical properties – arose from such a phase transition out of a quasi-atemporal quantum vacuum (and, perhaps, became completely disconnected from it). Tunneling back might be prevented by the exponential expansion of this brand-new space. Because of this cosmic inflation the universe not only became gigantic; simultaneously the potential hill broadened enormously and became (almost) impassable. This protects the universe from relapsing into its non-existence. On the other hand, if there is no physical mechanism that prevents tunneling back, or at least makes it very improbable, there is still another option: if infinitely many universes originated, some of them could be long-lived for merely statistical reasons. But this possibility is less predictive and therefore an inferior kind of explanation for the absence of tunneling back.
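As a hedged, textbook-style illustration of such a landscape (the symbols φ, λ, v and ε are generic placeholders, not quantities taken from the text above), a metastable false vacuum is often modelled by a slightly tilted double-well potential:

V(φ) = (λ/4)(φ² − v²)² + ε·φ,  with 0 < ε ≪ λv³.

For ε = 0 the two minima at φ = ±v are degenerate; a small ε lifts one of them into a metastable “false vacuum” separated by a barrier from the lower-lying “true vacuum”, which the field can reach only by quantum tunneling. In this picture, inflation’s broadening of the potential hill amounts to making the tunneling-back rate negligibly small.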

Another crucial question remains even if universes could come into being out of fluctuations of (or in) a primitive substrate, i.e. some patterns of superposition of fields with local overdensities of energy: Is spacetime part of this primordial stuff or is it also a product of it? Or, more specifically: Does such a primordial quantum vacuum have a semi-classical spacetime structure or is it made up of more fundamental entities? Unique-universe accounts, especially the modified Eddington models – the soft bang/emergent universe – presuppose some kind of semi-classical spacetime. The same is true for some multiverse accounts describing our universe, where Minkowski space, a tiny closed, finite space or the infinite de Sitter space is assumed. The same goes for string-theory-inspired models like the pre-big bang account, because string and M-theory are still formulated in a background-dependent way, i.e. they require the existence of a semi-classical spacetime. A different approach is the assumption of “building blocks” of spacetime, a kind of pregeometry; examples are the twistor approach of Roger Penrose and the cellular automata approach of Stephen Wolfram. The most elaborated account in this line of reasoning is quantum geometry (loop quantum gravity), in which “atoms of space and time” underlie everything.

Though the question whether semiclassical spacetime is fundamental or not is crucial, an answer might nevertheless be neutral with respect to the micro-/macrotime distinction. In both kinds of quantum vacuum accounts the macroscopic time scale is not present. And the microscopic time scale in some respect has to be there, because fluctuations represent change (or are manifestations of change). This change, reversible and relationally conceived, does not occur “within” microtime but constitutes it. Out of a total stasis nothing new and different could emerge, because an uncertainty principle – fundamental for all quantum fluctuations – would not be realized. In an almost, but not completely, static quantum vacuum, however, macroscopically nothing changes either, but there are microscopic fluctuations.

The pseudo-beginning of our universe (and probably infinitely many others) is a viable alternative both to initial and past-eternal cosmologies and philosophically very significant. Note that this kind of solution bears some resemblance to a possibility of avoiding the spatial part of Kant’s first antinomy, i.e. his claimed proof of both an infinite space without limits and a finite, limited space: The theory of general relativity describes what was considered logically inconceivable before, namely that there could be universes with finite, but unlimited space, i.e. this part of the antinomy also makes the error of the excluded third option. This offers a middle course between the Scylla of a mysterious, secularized creatio ex nihilo, and the Charybdis of an equally inexplicable eternity of the world.

In this context it is also possible to defuse some explanatory problems of the origin of “something” (or “everything”) out of “nothing” as well as a – merely assumable, but never provable – eternal cosmos or even an infinitely often recurring universe. But that does not offer a final explanation or a sufficient reason, and it cannot eliminate the ultimate contingency of the world.

Weyl and Automorphism of Nature. Drunken Risibility.


In classical geometry and physics, physical automorphisms could be based on the material operations used for defining the elementary equivalence concept of congruence (“equality and similitude”). But Weyl started even more generally, with Leibniz’ explanation of the similarity of two objects: two things are similar if they are indiscernible when each is considered by itself. Here, as at other places, Weyl endorsed this Leibnizian argument from the point of view of “modern physics”, while adding that for Leibniz this spoke in favour of the unsubstantiality and phenomenality of space and time. On the other hand, for “real substances”, the Leibnizian monads, indiscernibility implied identity. In this way Weyl indicated, prior to any more technical consideration, that similarity in the Leibnizian sense was the same as objective equality. He did not enter deeper into the metaphysical discussion but insisted that the issue “is of philosophical significance far beyond its purely geometric aspect”.

Weyl did not claim that this idea solves the epistemological problem of objectivity once and for all, but at least it offers an adequate mathematical instrument for its formulation. He illustrated the idea in a first step by explaining the automorphisms of Euclidean geometry as the structure-preserving bijective mappings of the point set underlying a structure satisfying the axioms of “Hilbert’s classical book on the Foundations of Geometry”. He concluded that for Euclidean geometry these are the similarities, not the congruences as one might expect at first glance. In the mathematical sense, we then “come to interpret objectivity as the invariance under the group of automorphisms”. But Weyl warned against identifying mathematical objectivity with that of natural science, because once we deal with real space “neither the axioms nor the basic relations are given”. As the latter are extremely difficult to discern, Weyl proposed to turn the tables and to take the group Γ of automorphisms, rather than the ‘basic relations’ and the corresponding relata, as the epistemic starting point.

Hence we come much nearer to the actual state of affairs if we start with the group Γ of automorphisms and refrain from making the artificial logical distinction between basic and derived relations. Once the group is known, we know what it means to say of a relation that it is objective, namely invariant with respect to Γ.

By such a well-chosen constitutive stipulation it becomes clear what objective statements are, although this can be achieved only at the price that “…we start, as Dante starts in his Divina Comedia, in mezzo del camin”. A phrase characteristic of Weyl’s later view follows:

It is the common fate of man and his science that we do not begin at the beginning; we find ourselves somewhere on a road the origin and end of which are shrouded in fog.

Weyl’s juxtaposition of the mathematical and the physical concept of objectivity is worth reflecting upon. The mathematical objectivity considered by him is relatively easy to obtain by combining the axiomatic characterization of a mathematical theory with the epistemic postulate of invariance under a group of automorphisms. Both are constituted in a series of acts characterized by Weyl as symbolic construction, which is free in several regards. For example, the group of automorphisms of Euclidean geometry may be expanded by “the mathematician” in rather wide ways (affine, projective, or even “any group of transformations”). In each case a specific realm of mathematical objectivity is constituted. With the example of the automorphism group Γ of (plane) Euclidean geometry in mind, Weyl explained how, through the use of Cartesian coordinates, the automorphisms of Euclidean geometry can be represented by linear transformations “in terms of reproducible numerical symbols”.
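As a minimal numerical sketch of this point (in Python; the points and the transformation parameters are invented for illustration, and nothing here is Weyl’s own construction), a similarity of the Euclidean plane changes distances but leaves distance ratios – and hence angles – invariant, which is why such ratios count as ‘objective’ relative to the group of similarities:

```python
import numpy as np

def similarity(points, scale, angle, shift):
    """Apply a plane similarity: rotation by `angle`, scaling by `scale`, translation by `shift`."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    return points @ (scale * R).T + shift

# Three sample points forming a triangle (arbitrary illustrative data).
P = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])
Q = similarity(P, scale=2.5, angle=0.7, shift=np.array([1.0, -2.0]))

dist = lambda X, i, j: np.linalg.norm(X[i] - X[j])

# Distances change under a similarity ...
print(dist(P, 0, 1), dist(Q, 0, 1))           # 3.0 vs 7.5
# ... but ratios of distances (and hence angles) are invariant.
print(dist(P, 0, 1) / dist(P, 0, 2),          # 0.75
      dist(Q, 0, 1) / dist(Q, 0, 2))          # 0.75
```

Absolute distances, by contrast, are invariant only under the smaller group of congruences.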

For natural science the situation is quite different; here the freedom of the constitutive act is severely restricted. Weyl described the constraint on the choice of Γ at the outset in very general terms: the physicist will question Nature to reveal to him her true group of automorphisms. Contrary to what a philosopher might expect, Weyl did not mention the subtle influences that theoretical evaluations of empirical insights exert on the constitutive choice of the group of automorphisms for a physical theory. He did not even restrict the consideration to the range of a physical theory but aimed at Nature as a whole. Still, drawing on his own views and on the radical changes in the fundamental views of theoretical physics, Weyl hoped for an insight into the true group of automorphisms of Nature without any further specifications.

Fundamental Theorem of Asset Pricing: Tautological Meeting of Mathematical Martingale and Financial Arbitrage by the Measure of Probability.


The Fundamental Theorem of Asset Pricing (FTAP hereafter) has two broad tenets, viz.

1. A market admits no arbitrage, if and only if, the market has a martingale measure.

2. Every contingent claim can be hedged, if and only if, the martingale measure is unique.

The FTAP is a theorem of mathematics, and the use of the term ‘measure’ in its statement places the FTAP within the theory of probability formulated by Andrei Kolmogorov (Foundations of the Theory of Probability) in 1933. Kolmogorov’s work took place in a context captured by Bertrand Russell, who observed that

It is important to realise the fundamental position of probability in science. . . . As to what is meant by probability, opinions differ.

In the 1920s the idea of randomness, as distinct from a lack of information, was becoming substantive in the physical sciences because of the emergence of the Copenhagen Interpretation of quantum mechanics. In the social sciences, Frank Knight argued that uncertainty was the only source of profit, and the concept was pervading John Maynard Keynes’ economics (Robert Skidelsky, Keynes: The Return of the Master).

Two mathematical theories of probability had become ascendant by the late 1920s. Richard von Mises (brother of the Austrian economist Ludwig) attempted to lay down the axioms of classical probability within a framework of Empiricism, the ‘frequentist’ or ‘objective’ approach. To counter-balance von Mises, the Italian actuary Bruno de Finetti presented a more Pragmatic approach, characterised by his claim that “Probability does not exist” because it was only an expression of the observer’s view of the world. This ‘subjectivist’ approach was closely related to the less well-known position taken by the Pragmatist Frank Ramsey, who developed an argument against Keynes’ Realist interpretation of probability presented in the Treatise on Probability.

Kolmogorov addressed the trichotomy of mathematical probability by generalising so that Realist, Empiricist and Pragmatist probabilities were all examples of ‘measures’ satisfying certain axioms. In doing this, a random variable became a function while an expectation was an integral: probability became a branch of Analysis, not Statistics. Von Mises criticised Kolmogorov’s generalised framework as unnecessarily complex. About a decade and a half back, the physicist Edwin Jaynes (Probability Theory: The Logic of Science) championed Leonard Savage’s subjectivist Bayesianism as having a “deeper conceptual foundation which allows it to be extended to a wider class of applications, required by current problems of science”.

The objections of empirical scientists to measure-theoretic probability can be accounted for by its lack of physicality. Frequentist probability is based on the act of counting; subjectivist probability is based on a flow of information, which, following Claude Shannon, is now an observable entity in Empirical science. Measure-theoretic probability is based on abstract mathematical objects unrelated to sensible phenomena. However, the generality of Kolmogorov’s approach made it flexible enough to handle problems that emerged in physics and engineering during the Second World War, and his approach became widely accepted after 1950 because it was more useful in practice.

In the context of the first statement of the FTAP, a ‘martingale measure’ is a probability measure, usually labelled Q, such that the (real, rather than nominal) price of an asset today, X_0, is the expectation, using the martingale measure, of its (real) price in the future, X_T. Formally,

X_0 = E_Q[X_T]

The abstract probability distribution Q is defined so that this equality holds, not on the basis of any empirical information about historical prices or subjective judgement of future prices. The only condition placed on the relationship that the martingale measure has with the ‘natural’, or ‘physical’, probability measure, usually assigned the label P, is that they agree on what is possible.
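In standard measure-theoretic language (a gloss added here for clarity, not part of the original text), ‘agreeing on what is possible’ is the requirement that Q and P be equivalent measures:

Q ∼ P  if and only if, for every event A,  Q(A) = 0 ⇔ P(A) = 0.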

The term ‘martingale’ in this context derives from doubling strategies in gambling and it was introduced into mathematics by Jean Ville in a development of von Mises’ work. The idea that asset prices have the martingale property was first proposed by Benoit Mandelbrot in response to an early formulation of Eugene Fama’s Efficient Market Hypothesis (EMH), the two concepts being combined by Fama. For Mandelbrot and Fama the key consequence of prices being martingales was that future price changes were independent of past prices, and so technical analysis would not prove profitable in the long run. In developing the EMH there was no discussion of the nature of the probability under which assets are martingales, and it is often assumed that the expectation is calculated under the natural measure. While the FTAP employs modern terminology in the context of value-neutrality, the idea of equating a current price with a future, uncertain, price has ethical ramifications.

The other technical term in the first statement of the FTAP, arbitrage, has long been used in financial mathematics. Fibonacci’s Liber Abaci (Laurence Sigler, Fibonacci’s Liber Abaci) discusses ‘Barter of Merchandise and Similar Things’: 20 arms of cloth are worth 3 Pisan pounds and 42 rolls of cotton are similarly worth 5 Pisan pounds; it is sought how many rolls of cotton will be had for 50 arms of cloth. In this case there are three commodities – arms of cloth, rolls of cotton and Pisan pounds – and Fibonacci solves the problem by having Pisan pounds ‘arbitrate’, or ‘mediate’ as Aristotle might say, between the other two commodities.
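A minimal sketch of the arithmetic, with Pisan pounds acting as the mediating commodity (the numbers are those of the problem quoted above):

```python
# Fibonacci's barter problem: Pisan pounds 'arbitrate' between cloth and cotton.
cloth_per_pound = 20 / 3      # 20 arms of cloth are worth 3 Pisan pounds
cotton_per_pound = 42 / 5     # 42 rolls of cotton are worth 5 Pisan pounds

arms_of_cloth = 50
pounds = arms_of_cloth / cloth_per_pound        # 50 arms -> 7.5 Pisan pounds
rolls_of_cotton = pounds * cotton_per_pound     # 7.5 pounds -> 63 rolls

print(rolls_of_cotton)   # 63.0
```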

Within neo-classical economics, the Law of One Price was developed in a series of papers between 1954 and 1964 by Kenneth Arrow, Gérard Debreu and Lionel McKenzie in the context of general equilibrium, in particular with the introduction of the Arrow Security, which, employing the Law of One Price, could be used to price any asset. It was on this principle that Black and Scholes believed the value of the warrants could be deduced by employing a hedging portfolio; in introducing their work with the statement that “it should not be possible to make sure profits”, they were invoking the arbitrage argument, which had an eight-hundred-year history. In the context of the FTAP, ‘an arbitrage’ has developed into the ability to formulate a trading strategy such that the probability, under a natural or martingale measure, of a loss is zero, but the probability of a positive profit is not.

To understand the connection between the financial concept of arbitrage and the mathematical idea of a martingale measure, consider the most basic case of a single asset whose current price, X_0, can take on one of two (present) values, X_T^D < X_T^U, at time T > 0, in the future. In this case an arbitrage would exist if X_0 ≤ X_T^D < X_T^U: buying the asset now, at a price that is less than or equal to the future pay-offs, would lead to a possible profit at the end of the period, with the guarantee of no loss. Similarly, if X_T^D < X_T^U ≤ X_0, short selling the asset now, and buying it back would also lead to an arbitrage. So, for there to be no arbitrage opportunities we require that

X_T^D < X_0 < X_T^U

This implies that there is a number, 0 < q < 1, such that

X_0 = X_T^D + q(X_T^U − X_T^D)

    = qX_T^U + (1−q)X_T^D

The price now, X_0, lies between the future prices, X_T^U and X_T^D, in the ratio q : (1 − q) and represents some sort of ‘average’. The first statement of the FTAP can be interpreted simply as “the price of an asset must lie between its maximum and minimum possible (real) future price”.

If X_0 < X_T^D ≤ X_T^U we have that q < 0, whereas if X_T^D ≤ X_T^U < X_0 then q > 1; in both cases q does not represent a probability measure, which, by Kolmogorov’s axioms, must lie between 0 and 1. In either of these cases an arbitrage exists and a trader can make a riskless profit – the market involves ‘turpe lucrum’. This account gives an insight as to why James Bernoulli, in his moral approach to probability, considered situations where probabilities did not sum to 1: he was considering problems that were pathological not because they failed the rules of arithmetic but because they were unfair. It follows that if there are no arbitrage opportunities then the quantity q can be seen as representing the ‘probability’ that the X_T^U price will materialise in the future. Formally,

X_0 = qX_T^U + (1−q)X_T^D ≡ E_Q[X_T]
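A minimal sketch of this one-period argument in code (variable names follow the text; the numerical prices are illustrative only):

```python
def implied_q(X0, XTD, XTU):
    """Risk-neutral 'probability' q implied by X0 = q*XTU + (1-q)*XTD."""
    return (X0 - XTD) / (XTU - XTD)

def arbitrage_free(X0, XTD, XTU):
    """No arbitrage in the one-period model iff XTD < X0 < XTU, i.e. 0 < q < 1."""
    q = implied_q(X0, XTD, XTU)
    return 0.0 < q < 1.0

# Illustrative prices: current price 100, future prices 90 (down) or 120 (up).
print(implied_q(100.0, 90.0, 120.0))       # 0.333... : a valid martingale probability
print(arbitrage_free(100.0, 90.0, 120.0))  # True
print(arbitrage_free(85.0, 90.0, 120.0))   # False: q < 0, buy-and-hold arbitrage
```

Under these assumed prices, q = 1/3 plays the role of the martingale measure Q in the formula above.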

The connection between the financial concept of arbitrage and the mathematical object of a martingale is essentially a tautology: both statements mean that the price today of an asset must lie between its future minimum and maximum possible value. This first statement of the FTAP was anticipated by Frank Ramsey when he defined ‘probability’ in the Pragmatic sense of ‘a degree of belief’ and argued that ‘degrees of belief’ could be measured through betting odds. On this basis he formulated some axioms of probability, including that a probability must lie between 0 and 1. He then went on to say that

These are the laws of probability, …If anyone’s mental condition violated these laws, his choice would depend on the precise form in which the options were offered him, which would be absurd. He could have a book made against him by a cunning better and would then stand to lose in any event.

This is a Pragmatic argument that identifies the absence of a martingale measure with the existence of arbitrage, and today it forms the basis of the standard argument as to why arbitrages do not exist: if they did, then other market participants would bankrupt the agent who was mis-pricing the asset. This has become known in philosophy as the ‘Dutch Book’ argument and, as a consequence of the fact/value dichotomy, it is often presented as a ‘matter of fact’. However, ignoring the fact/value dichotomy, the Dutch Book argument is an alternative formulation of the ‘Golden Rule’ – “Do to others as you would have them do to you.” – it is infused with the moral concepts of fairness and reciprocity (Jeffrey Wattles, The Golden Rule).

Embedded in the FTAP, then, is the ethical concept of Justice, capturing the social norms of reciprocity and fairness. This is significant in the context of Granovetter’s discussion of embeddedness in economics. It is conventional to assume that mainstream economic theory is ‘undersocialised’: agents are rational calculators seeking to maximise an objective function. The argument presented here is that a central theorem in contemporary economics, the FTAP, is deeply embedded in social norms, despite being presented as an undersocialised mathematical object. This embeddedness is a consequence of the origins of mathematical probability lying in the ethical analysis of commercial contracts: the feudal shackles are still binding this most modern of economic theories.

Ramsey goes on to make an important point

Having any definite degree of belief implies a certain measure of consistency, namely willingness to bet on a given proposition at the same odds for any stake, the stakes being measured in terms of ultimate values. Having degrees of belief obeying the laws of probability implies a further measure of consistency, namely such a consistency between the odds acceptable on different propositions as shall prevent a book being made against you.

Ramsey is arguing that an agent needs to employ the same measure in pricing all assets in a market, and this is the key result in contemporary derivative pricing. Having identified the martingale measure on the basis of a ‘primal’ asset, it is then applied across the market, in particular to derivatives on the primal asset, on the basis of the well-known result that if two assets offer different ‘market prices of risk’, an arbitrage exists. This explains why the market price of risk appears in the Radon-Nikodym derivative and the Capital Market Line: it enforces Ramsey’s consistency in pricing.

The second statement of the FTAP is concerned with incomplete markets, which appear in relation to Arrow-Debreu prices. In mathematics, in the special case that there are as many, or more, assets in a market as there are possible future, uncertain, states, a unique pricing vector can be deduced for the market by Cramer’s Rule. If the elements of the pricing vector satisfy the axioms of probability – specifically, each element is positive and they all sum to one – then the market precludes arbitrage opportunities. This is the case covered by the first statement of the FTAP. In the more realistic situation that there are more possible future states than assets, the market can still be arbitrage-free, but the pricing vector, the martingale measure, might not be unique. The agent can still be consistent in selecting which particular martingale measure they choose to use, but another agent might choose a different measure, so that the two do not agree on a price. In the context of the Law of One Price, this means that we cannot hedge, replicate or cover a position in the market such that the portfolio is riskless.

The significance of the second statement of the FTAP is that it tells us that in the sensible world of imperfect knowledge and transaction costs, a model within the framework of the FTAP cannot give a precise price. When faced with incompleteness in markets, agents need alternative ways to price assets, and behavioural techniques have come to dominate financial theory. This feature was already recognised in The Port Royal Logic, when it noted the role of transaction costs in lotteries.
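A hedged numerical sketch of the complete versus incomplete case (the payoff matrices and prices below are invented for illustration, with a zero interest rate so that state prices and probabilities coincide):

```python
import numpy as np

# Complete market: 2 assets (a bond and a stock), 2 future states (up, down).
payoffs = np.array([[1.0, 1.0],      # bond pays 1 in both states
                    [120.0, 90.0]])  # stock pays 120 (up) or 90 (down)
prices = np.array([1.0, 100.0])      # current prices of bond and stock

# As many assets as states: the state-price vector is pinned down uniquely.
state_prices = np.linalg.solve(payoffs, prices)
print(state_prices)                  # [0.333..., 0.666...] -> the unique martingale measure

# Incomplete market: the same two assets and prices, but 3 future states.
payoffs3 = np.array([[1.0, 1.0, 1.0],
                     [120.0, 100.0, 90.0]])
# Two different strictly positive measures both reproduce the observed prices:
for q in (np.array([0.25, 0.25, 0.50]), np.array([0.30, 0.10, 0.60])):
    assert np.allclose(payoffs3 @ q, prices)   # both are valid martingale measures
```

Under the first measure a claim paying (0, 1, 0) would be priced at 0.25, under the second at 0.10: the market is arbitrage-free, yet the price is not pinned down, which is exactly the situation the second statement of the FTAP addresses.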

Capital as a Symbolic Representation of Power. Nitzan’s and Bichler’s Capital as Power: A Study of Order and Creorder.


The secret to understanding accumulation lies not in the narrow confines of production and consumption, but in the broader processes and institutions of power. Capital is neither a material object nor a social relationship embedded in material entities. It is not ‘augmented’ by power. It is, in itself, a symbolic representation of power….

Unlike the elusive liberals, Marxists try to deal with power head on – yet they too end up with a fractured picture. Unable to fit power into Marx’s value analysis, they have split their inquiry into three distinct branches: a neo-Marxian economics that substitutes monopoly for labour values; a cultural analysis whose extreme versions reject the existence of ‘economics’ altogether (and eventually also the existence of any ‘objective’ order); and a state theory that oscillates between two opposite positions – one that prioritizes state power by demoting the ‘laws’ of the economy, and another that endorses the ‘laws’ of the economy by annulling the autonomy of the state. Gradually, each of these branches has developed its own orthodoxies, academic bureaucracies and barriers. And as the fractures have deepened, the capitalist totality that Marx was so keen on uncovering has dissipated….

The commodified structure of capitalism, Marx argues, is anchored in the labour process: the accumulation of capital is denominated in prices; prices reflect labour values; and labour values are determined by the productive labour time necessary to make the commodities. This sequence is intuitively appealing and politically motivating, but it runs into logical and empirical impossibilities at every step of the way. First, it is impossible to differentiate productive from unproductive labour. Second, even if we knew what productive labour was, there would still be no way of knowing how much productive labour goes into a given commodity, and therefore no way of knowing the labour value of that commodity and the amount of surplus value it embodies. And finally, even if labour values were known, there would be no consistent way to convert them into prices and surplus value into profit. So, in the end, Marxism cannot explain the prices of commodities – not in detail and not even approximately. And without a theory of prices, there can be no theory of profit and accumulation and therefore no theory of capitalism….

Modern capitalists are removed from production: they are absentee owners. Their ownership, says Veblen, doesn’t contribute to industry; it merely controls it for profitable ends. And since the owners are absent from industry, the only way for them to exact their profit is by ‘sabotaging’ industry. From this viewpoint, the accumulation of capital is the manifestation not of productive contribution but of organized power.

To be sure, the process by which capitalists ‘translate’ qualitatively different power processes into quantitatively unified measures of earnings and capitalization isn’t very ‘objective’. Filtered through the conventional assessments of accountants and the future speculations of investors, the conversion is deeply inter-subjective. But it is also very real, extremely imposing and, as we shall see, surprisingly well-defined.

These insights can be extended into a broader metaphor of a ‘social hologram’: a framework that integrates the resonating productive interactions of industry with the dissonant power limitations of business. These hologramic spectacles allow us to theorize the power underpinnings of accumulation, explore their historical evolution and understand the ways in which various forms of power are imprinted on and instituted in the corporation…..

The argument, in short, is that business enterprise diverts and limits industry instead of boosting it; that ‘business as usual’ needs and implies strategic limitation; that most firms are not passive price takers but active price makers, and that their autonomy makes ‘pure’ economics indeterminate; that the ‘normal rate of return’, just like the ancient rate of interest, is a manifestation not of productive yield but of organized power; that the corporation emerged not to enhance productivity but to contain it; that equity and debt have little to do with material wealth and everything to do with systemic power; and, finally, that there is little point talking about the deviations and distortions of ‘financial capital’ simply because there is no ‘productive capital’ to deviate from and distort.

Jonathan Nitzan and Shimshon Bichler, Capital as Power: A Study of Order and Creorder

Kōjin Karatani versus Moishe Postone. Architectonics of Capitalism.

Kōjin Karatani’s theory of different modes of intercourse criticizes the architectonic metaphor that thinks the logic of modes of production in terms of base and superstructure, without ceding ground on the centrality of the critique of political economy. The obvious question is what remains of theory when the departure is not from the objective towards the subjective, but rather the simultaneous constitution of the subjective and the objective dimensions of the social under capitalism. One way of addressing the dilemma is to take recourse to the lesson of the commodity form, where capitalism begets a uniform mode of mediation rather than disparate ones. The language of modes of production, according to Moishe Postone, happens to be a transhistorical language allowing a transhistorical epistemology to sneak in through the backdoor, thus outlining the necessity of critical theory’s existence only in so far as the object of critique stays in existence.

Karatani’s first critique concerns a crude base-superstructure concept, in which nation and nationalism are viewed merely as phenomena of the ideological superstructure, which could be overcome by reason (enlightenment) or would disappear together with the state. But the nation functions autonomously, independent of the state, and as the imaginative return of community or reciprocal mode of exchange A, it is egalitarian in nature. As is the case with universal religions, the nation thus holds a moment of protest, of opposition, of emancipatory imagination.

The second critique concerns the conception of the proletariat, which Marxism reduced to the process of production, in which its labor force is turned into a commodity. Production (i.e., consumption of labor power) as a fundamental basis to gain and to increase surplus value remains unchanged. Nonetheless, according to Karatani, surplus value is only realised by selling commodities, in the process of circulation, which does not generate surplus value itself, but without which there cannot be any surplus value. Understanding the proletariat as producer-consumer opens up new possibilities for resistance against the system. In late capitalism, in which capital and company are often separated, workers (in the broadest sense of wage and salary earners) are usually not able to resist their dependency and inferiority in the production process. By contrast, in the site of consumption, capital is dependent on the worker as consumer. Whereas capital can thus control the proletariat in the production process and force them to work, it loses its power over them in the process of circulation. If, says Karatani, we view consumers as workers in the site of circulation, consumer movements can be seen as proletarian movements. They can, for example, resort to the legal means of boycott, which capital is unable to resist directly.

Karatani bases his critique of capitalism not on the perspectives of globalization, but rather on what he terms neo-imperialism, meaning the state-based attempt of capital to subject the entire world to its logic of exploitation; any overcoming of the modern world system of capital-nation-state by means of a world revolution and its sublation in a new system is to be made possible by a justice based on exchange. For Postone, capital generates a system characterized by the opposition of abstract universality, the value form, and particularistic specificity, the use-value dimension.
It seems to me that rather than viewing a socialist or an emancipatory movement as the heirs to the Enlightenment, as the classic working class movement did, critical movements today should be striving for a new form of universalism that encompasses the particular, rather than existing in opposition to the particular. This will not be easy, because a good part of the Left today has swung to particularity rather than trying to find a new form of universalism. I think this is a fatal mistake.