The Illicit Trade of Firearms, Explosives and Ammunition on the Dark Web


DATACRYPTO is a web crawler/scraper: a class of software that systematically archives websites and extracts information from them. Once a cryptomarket has been identified, DATACRYPTO is set up to log in to the market and download its contents, beginning at the web page fixed by the researchers (typically the homepage). After downloading that page, DATACRYPTO parses it for hyperlinks to other pages hosted on the same market, follows each of them, and repeats the process on the newly downloaded pages until no new pages are found. This process is referred to as web crawling. DATACRYPTO then switches from crawler to scraper mode, extracting information from the pages it has downloaded into a single database.

One challenge in crawling cryptomarkets arises when, despite appearances to the contrary, the crawler has indexed only a subset of a marketplace’s web pages. This problem is exacerbated by sluggish download speeds on the Tor network which, combined with marketplace downtime, may prevent DATACRYPTO from completing the crawl of a cryptomarket. DATACRYPTO was designed to prevent partial marketplace crawls through its ‘state-aware’ capability: the result of each page request is analysed and logged by the software. In the event of service disruptions on the marketplace or on the Tor network, DATACRYPTO pauses and attempts to continue its crawl a few minutes later. If a request for a page returns a different page (e.g. asking for a listing page and receiving the home page of the cryptomarket), the request is marked as failed, and each crawl tallies its failed page requests.
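The crawl loop just described reduces to a frontier of unvisited links plus retry logic. A minimal sketch in Python (purely illustrative; DATACRYPTO’s actual code is not public, and `fetch` and `parse_links` are hypothetical helpers supplied by the researcher):

```python
import time
from urllib.parse import urljoin, urlparse

def crawl(session, start_url, fetch, parse_links, max_retries=5):
    """Breadth-first crawl of one market: follow same-market links until
    the frontier is empty, pausing and retrying on failed requests."""
    market = urlparse(start_url).netloc
    seen, frontier = {start_url}, [start_url]
    pages, failed = {}, []
    while frontier:
        url = frontier.pop(0)
        for _ in range(max_retries):
            html = fetch(session, url)   # hypothetical: returns None on outage
            if html is not None:
                break
            time.sleep(180)              # pause, then resume ('state-aware')
        else:
            failed.append(url)           # tally failed page requests per crawl
            continue
        pages[url] = html
        for link in parse_links(html):   # hypothetical HTML-parsing helper
            absolute = urljoin(url, link)
            if urlparse(absolute).netloc == market and absolute not in seen:
                seen.add(absolute)       # only follow pages on the same market
                frontier.append(absolute)
    return pages, failed
```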

For each market, DATACRYPTO is programmed to extract the relevant information connected to listings and vendors, which is then collected into a single database (a minimal record structure is sketched after the list):

  • Product title;
  • Product description;
  • Listing price;
  • Number of customer feedback entries for the listing;
  • The country or region from which the vendor ships the product;
  • The countries or regions to which the vendor is willing to ship.
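A sketch of one such database record (the field and class names are hypothetical; the published description only lists the fields above):

```python
from dataclasses import dataclass

@dataclass
class Listing:
    """One scraped listing row in the combined database."""
    title: str            # product title
    description: str      # product description
    price: str            # raw price string; conventions differ per market
    feedback_count: int   # number of customer feedback entries
    ships_from: str       # country or region of origin
    ships_to: list[str]   # destination countries or regions
```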

DATACRYPTO is not the first crawler to mirror the dark web, but it is novel in its ability to pull information from a variety of cryptomarkets at once, despite differences in page structure and naming conventions across sites. For example, “$…” may denote the price of a listing on one market, while another market signals price with “VALUE…” or “PRICE…” instead.
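One common way to absorb such differences is a table of per-market extraction rules. A sketch (the market names and markers are invented for illustration):

```python
import re

# Per-market price markers, as described above: "$", "VALUE", "PRICE".
PRICE_PATTERNS = {
    "market_a": re.compile(r"\$\s*([\d.,]+)"),          # "$ 120.50"
    "market_b": re.compile(r"VALUE\s*:?\s*([\d.,]+)"),  # "VALUE: 120.50"
    "market_c": re.compile(r"PRICE\s*:?\s*([\d.,]+)"),  # "PRICE 120.50"
}

def extract_price(market: str, page_text: str) -> str | None:
    """Apply the market-specific rule; one scraper, many page conventions."""
    match = PRICE_PATTERNS[market].search(page_text)
    return match.group(1) if match else None
```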

Researchers who want to create a similar tool to gather data by crawling the web should specify exactly which information they want to extract. When building a web crawler it is, for example, very important to carefully study the structure and characteristics of the websites to be mirrored. Before setting the crawler loose, ensure that it extracts and parses correct and complete information. Because building a crawler like DATACRYPTO can be costly and time consuming, it is also important to anticipate future data needs and build in capabilities to extract that kind of data later, so that no large modifications are necessary down the line.

Building a complex tool like DATACRYPTO is no easy feat. The crawler needs to copy pages, but also to get past CAPTCHAs unobtrusively and log itself in to marketplaces hosted on the Tor network. Because they request pages in bulk, web crawlers can place a heavy burden on a website’s server, and they are easily detected by the repetitive pattern with which they move between pages. Site administrators are therefore not afraid to IP-ban badly designed crawlers from their sites.
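One simple mitigation for the detectability problem is to randomise the delay between page requests, so the crawler’s timing does not betray a mechanical cadence. A sketch (illustrative only; the delay bounds are arbitrary):

```python
import random
import time

def humanised_wait(min_s: float = 2.0, max_s: float = 15.0) -> None:
    """Sleep a random interval between requests, breaking up the
    repetitive request pattern that makes crawlers easy to spot."""
    time.sleep(random.uniform(min_s, max_s))
```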



Private Equity and Corporate Governance. Thought of the Day 109.0

The two historical models of corporate ownership are (1) dispersed public ownership across many shareholders and (2) family-owned or closely held. Private equity ownership is a hybrid between these two models.


The main advantages of public ownership include giving a company the widest possible access to capital and, for start-up companies, more credibility with suppliers and customers. The key disadvantages are that a public listing of stock brings constant scrutiny by regulators and the media, incurs significant costs (listing, legal and other regulatory compliance costs), and creates a significant focus on short-term financial results from a dispersed base of shareholders (many of whom are not well informed). Most investors in public companies have limited ability to influence a company’s decision making because ownership is so dispersed. As a result, if a company performs poorly, these investors are inclined to sell shares instead of attempting to engage with management through the infrequent opportunities to vote on important corporate decisions. This unengaged oversight opens the possibility that managers act in ways contrary to the interests of shareholders.

Family-owned or closely held companies avoid regulatory and public scrutiny. The owners also have a direct say in the governance of the company, minimizing potential conflicts of interest between owners and managers. However, the funding options for these private companies are mainly limited to bank loans and other private debt financing. Raising equity capital through the private placement market is a cumbersome process that often results in a poor outcome.

Private equity firms offer a hybrid model that is sometimes more advantageous for companies that are uncomfortable with both the family-owned/closely held and public ownership models. Changes in corporate governance are generally a key driver of success for private equity investments. Private equity firms usually bring a fresh culture into corporate boards and often incentivize executives in a way that would usually not be possible in a public company. A private equity fund has a vital self-interest to improve management quality and firm performance because its investment track record is the key to raising new funds in the future. In large public companies there is often the possibility of “cross-subsidization” of less successful parts of a corporation, but this suboptimal behavior is usually not found in companies owned by private equity firms. As a result, private equity-owned companies are more likely to expose and reconfigure or sell suboptimal business segments, compared to large public companies. Companies owned by private equity firms avoid public scrutiny and quarterly earnings pressures. Because private equity funds typically have an investment horizon that is longer than the typical mutual fund or other public investor, portfolio companies can focus on longer-term restructuring and investments.

Private equity owners are fully enfranchised in all key management decisions because they appoint their partners as nonexecutive directors to the company’s board, and sometimes bring in their own managers to run the company. As a result, they have strong financial incentives to maximize shareholder value. Since the managers of the company are also required to invest in the company’s equity alongside the private equity firm, they have similarly strong incentives to create long-term shareholder value. However, the significant leverage that is brought into a private equity portfolio company’s capital structure puts pressure on management to operate virtually error free. As a result, if major, unanticipated dislocations occur in the market, there is a higher probability of bankruptcy compared to either the family-owned/closely held or public company model, which includes less leverage. The high level of leverage that is often connected with private equity acquisition is not free from controversy. While it is generally agreed that debt has a disciplining effect on management and keeps them from “empire building,” it does not improve the competitive position of a firm and is often not sustainable. Limited partners demand more from private equity managers than merely buying companies based on the use of leverage. In particular, investors expect private equity managers to take an active role in corporate governance to create incremental value.

Private equity funds create competitive pressures on companies that want to avoid being acquired. CEOs and boards of public companies have been forced to review their performance and take steps to improve. In addition, they have focused more on antitakeover strategies. Many companies have initiated large share repurchase programs as a vehicle for increasing earnings per share (sometimes using new debt to finance repurchases). This effort is designed, in part, to make a potential takeover more expensive and therefore less likely. Companies consider adding debt to their balance sheet in order to reduce the overall cost of capital and achieve higher returns on equity. This strategy is sometimes pursued as a direct response to the potential for a private equity takeover. However, increasing leverage runs the risk of lower credit ratings on debt, which increases the cost of debt capital and reduces the margin for error. Although some managers are able to manage a more leveraged balance sheet, others are ill equipped, which can result in a reduction in shareholder value through mismanagement.

Cryptocurrency and Efficient Market Hypothesis. Drunken Risibility.

According to the traditional definition, a currency has three main properties: (i) it serves as a medium of exchange, (ii) it is used as a unit of account and (iii) it serves as a store of value. Throughout economic history, monies were related to political power. In the beginning, coins were minted in precious metals, so the value of a coin was intrinsically determined by the value of the metal itself. Later, money was printed on paper bank notes, but its value was linked to a quantity of gold guarded in the vault of a central bank. Nation states have used their political power to regulate the use of currencies and to impose one currency (usually the one issued by the same nation state) as legal tender for obligations within their territory. In the twentieth century a major change took place: the abandonment of the gold standard. The detachment of currencies (especially the US dollar) from the gold standard was a recognition that the value of a currency (especially in a world of fractional banking) is not related to its content or representation in gold, but to a broader concept: confidence in the economy in which that currency is based. Today, the value of a currency reflects the best judgment about the monetary policy and the “health” of its economy.

In recent years a new type of currency, a synthetic one, has emerged. We call this new type “synthetic” because it is not the creation of a nation state, nor does it represent any underlying asset or tangible source of wealth. It appears as a new tradable asset resulting from a private agreement and facilitated by the anonymity of the internet. Among these synthetic currencies, Bitcoin (BTC) emerges as the most important one, with a market capitalization a few hundred million short of $80 billion.


Bitcoin Price Chart from Bitstamp

There are other cryptocurrencies based on blockchain technology, such as Litecoin (LTC), Ethereum (ETH) and Ripple (XRP). The website https://coinmarketcap.com/currencies/ lists 641 such currencies. However, as we can observe in the figure below, Bitcoin represents 89% of the market capitalization of all cryptocurrencies.


Cryptocurrencies. Share of market capitalization of each currency.

One open question today is whether Bitcoin is in fact, or may be considered, a currency. So far, Bitcoin does not fulfill the main properties of a standard currency. It is barely (though increasingly!) accepted as a medium of exchange (e.g. to buy some products online), it is not used as a unit of account (there are no financial statements valued in Bitcoins), and, given the great swings in its price, we can hardly believe that anyone would consider Bitcoin a suitable option to store value. Given these characteristics, Bitcoin fits instead as an ideal asset for speculative purposes: there is no underlying asset to relate its value to, and there is an open platform on which to operate round the clock.


Bitcoin returns, sampled every 5 hours.

Speculation has a long history and seems inherent to capitalism. One common feature of speculative assets throughout history has been the difficulty of valuation. Tulipmania, the South Sea bubble and many other episodes reflect, on one side, human greed and, on the other, the difficulty of setting an objective value for an asset. All of these speculative episodes were reflected in super-exponential growth of the price time series.

Cryptocurrencies can be seen as the libertarian response to the failure of central banks to manage financial crises, such as the one that occurred in 2008. Cryptocurrencies can also bypass national restrictions on international transfers, probably at a cheaper cost. Bitcoin was created by a person or group of persons under the pseudonym Satoshi Nakamoto. The discussion of Bitcoin has several perspectives. The computer science perspective deals with the strengths and weaknesses of blockchain technology. In fact, according to R. Ali et al., the introduction of a “distributed ledger” is the key innovation. Traditional means of payment (e.g. a credit card) rely on a central clearing house that validates operations, acting as a “middleman” between buyer and seller. By contrast, the payment validation system of Bitcoin is decentralized. There is a growing army of miners who put their computing power at the disposal of the network, validating transactions by gathering them into blocks and adding these to the ledger, forming a ’block chain’. This work is remunerated by giving the miners Bitcoins, which (so far) makes validation cheaper than in a centralized system. Validation is performed by solving a computational puzzle that becomes harder over time, since the whole ledger must be validated; consequently it takes more time to solve. Contrary to traditional currencies, the total number of Bitcoins to be issued is fixed in advance: 21 million. In fact, the issuance rate of Bitcoins is expected to diminish over time. According to Laursen and Kyed, validating the public ledger was initially rewarded with 50 Bitcoins, but the protocol foresaw halving this quantity every four years. At the current pace, the maximum number of Bitcoins will be reached in 2140. Given its decentralized character, Bitcoin transactions seem secure: all transactions are recorded on several computer servers around the world, so that to commit fraud a person would have to change and validate (simultaneously) several ledgers, which is almost impossible. Additionally, the ledgers are public, with encrypted identities of the parties, making transactions “pseudonymous, not anonymous”. The legal perspective on Bitcoin is fuzzy. Bitcoin is not issued, nor endorsed, by a nation state. It is not an illegal substance. As such, its transaction is not regulated.
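The 21 million cap is a direct consequence of the halving schedule. A back-of-the-envelope check, assuming the standard protocol parameters (an initial 50 BTC reward halved every 210,000 blocks, which at roughly ten minutes per block is the “every four years” in the text):

```python
# Initial reward of 50 BTC per block, halved every 210,000 blocks (~4 years).
reward, blocks_per_era = 50.0, 210_000
total, eras = 0.0, 0
while reward >= 1e-8:            # one satoshi is the smallest unit of account
    total += reward * blocks_per_era
    reward /= 2
    eras += 1
print(f"asymptotic supply ≈ {total:,.0f} BTC "
      f"after {eras} eras, ending around {2009 + 4 * eras}")
# -> asymptotic supply ≈ 21,000,000 BTC after 33 eras, ending around 2141
```

The geometric series 210,000 × 50 × (1 + 1/2 + 1/4 + …) sums to exactly 21 million, and the final era lands in the mid-twenty-second century, consistent with the 2140 figure above.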

In particular, the nonexistence of savings accounts in Bitcoin, and consequently the absence of a Bitcoin interest rate, precludes studying price behavior in relation to cash flows generated by Bitcoins. As a consequence, the natural theoretical framework for the underlying dynamics of the price signal is the Efficient Market Hypothesis. The Efficient Market Hypothesis (EMH) is the cornerstone of financial economics. One of the seminal works on the stochastic dynamics of speculative prices is due to Louis Bachelier, who in his doctoral thesis developed the first mathematical model of the behavior of stock prices. The systematic study of informational efficiency began in the 1960s, when financial economics was born as a new area within economics. The classical definition, due to Eugene Fama (Foundations of Finance: Portfolio Decisions and Securities Prices, 1976), says that a market is informationally efficient if it “fully reflects all available information”. Therefore, the key element in assessing efficiency is to determine the appropriate set of information that impels prices. Following Efficient Capital Markets, informational efficiency can be divided into three categories: (i) weak efficiency, if prices reflect the information contained in the past series of prices, (ii) semi-strong efficiency, if prices reflect all public information, and (iii) strong efficiency, if prices reflect all public and private information. As a corollary of the EMH, one cannot accept the presence of long memory in financial time series, since its existence would allow a riskless, profitable trading strategy; if markets are informationally efficient, arbitrage prevents the possibility of such strategies. If we consider the financial market as a dynamical structure, short-term memory can exist (to some extent) without contradicting the EMH: the presence of some mispriced assets is the necessary stimulus for individuals to trade and reach an (almost) arbitrage-free situation. However, the presence of long-range memory is at odds with the EMH, because it would allow stable trading rules to beat the market.

The presence of long-range dependence in financial time series has generated a lively debate. Whereas short-term memory can stimulate investors to exploit small extra returns, making them disappear, long-range correlations pose a challenge to the established financial model. As recognized by Ciaian et al., the Bitcoin price is not driven by macro-financial indicators. Consequently, a detailed analysis of the underlying dynamics (the Hurst exponent) becomes important for understanding its emerging behavior. There are several methods (both parametric and non-parametric) to calculate the Hurst exponent, which makes it a mandatory framework for tackling BTC trading.
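As one illustration of the non-parametric methods mentioned, here is a minimal rescaled-range (R/S) estimator of the Hurst exponent, run on synthetic Gaussian returns as a stand-in for the 5-hourly BTC series (a sketch only; finite-sample corrections are omitted):

```python
import numpy as np

def hurst_rs(series, min_chunk=8):
    """Estimate H by R/S analysis: regress log(R/S) on log(window size);
    the slope of the fit is the Hurst exponent."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    sizes, rs_values = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()        # range of cumulative deviations
            s = chunk.std()
            if s > 0:
                rs.append(r / s)             # rescaled range for this window
        sizes.append(size)
        rs_values.append(np.mean(rs))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
    return slope

# For i.i.d. returns H ≈ 0.5; H > 0.5 would indicate long-range persistence.
returns = np.random.normal(size=4096)        # stand-in for 5-hourly BTC returns
print(f"H ≈ {hurst_rs(returns):.2f}")
```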

Consequentialism -X- (Pareto Efficiency) -X- Deontology


Let us check the Polity to begin with:

1. N is the set of all individuals in society.

And that which their politics concerns – the state of society.

2. S is the set of all possible information contained within society, so that a set s ∈ 2^S (2^S being the set of all possible subsets of S) contains all extant information about a particular iteration of society and will be called the state of society. S is an arbitrary topological space.

And the means by which individuals make judgements about that which their politics concerns. Their preferences over the information contained within the state of society.

3. Each individual i ∈ N has a complete and transitive preference relation ≽_i defined over a set of preference-information S_i ⊂ S such that s_i ≽_i s′_i can be read “individual i prefers preference-information s_i at least as much as preference-information s′_i”.

Any particular set of preference-information s_i ⊂ S_i can be thought of as the state of society as viewed by individual i. The set of preference-information for individual i is a subset of the information contained within a particular iteration of society, so s_i ⊂ s ⊂ S.

A particular state of society s is Pareto efficient if there is no other state of society s′ for which one individual strictly prefers their preference-information s′_i ⊂ s′ to that in the particular state, s_i ⊂ s, and the preference-information s′_j ⊂ s′ in the other state s′ is at least as preferred by every other individual j ≠ i.

4. A state s ∈ 2^S is said to be Pareto efficient iff ∄ s′ ∈ 2^S & i ∈ N : s′_i ≻_i s_i & s′_j ≽_j s_j ∀ j ≠ i ∈ N.

To put it crudely, a particular state of society is Pareto efficient if no individual can be made “better off” without making another individual “worse off”. A dynamic concept which mirrors this is the concept of a Pareto improvement – whereby a change in the state of society leaves everyone at least indifferent, and at least one individual in a preferable situation.

5. A movement between two states of society, s → s′, is called a Pareto improvement iff ∃ i ∈ N : s′_i ≻_i s_i & s′_j ≽_j s_j ∀ j ≠ i ∈ N.

Note that this does not imply that s′ is a Pareto efficient state, because the same could potentially be said of a movement s′ → s′′. The state s′ is only a Pareto efficient state if we cannot find yet another state for which the movement to that state is a Pareto improvement. The following theorem demonstrates this distinction and gives an alternative definition of Pareto efficiency.

Theorem: A state s ∈ 2^S is Pareto efficient iff there is no other state s′ for which the movement s → s′ is a Pareto improvement.
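A finite toy model makes the theorem concrete. In the sketch below (entirely hypothetical: numeric utilities stand in for the relations ≽_i, and three labelled states stand in for 2^S), a state is declared efficient precisely when no movement out of it is a Pareto improvement:

```python
# Toy polity: 2 individuals; u[i][s] is individual i's ranking of state s.
u = [{"a": 1, "b": 2, "c": 2},    # individual 0
     {"a": 1, "b": 1, "c": 2}]    # individual 1

def pareto_improvement(s, s2):
    """s -> s2 is a Pareto improvement iff someone strictly gains
    and nobody loses (definition 5)."""
    return (all(ui[s2] >= ui[s] for ui in u)
            and any(ui[s2] > ui[s] for ui in u))

def pareto_efficient(s, states):
    """Efficient iff no movement out of s is a Pareto improvement (theorem)."""
    return not any(pareto_improvement(s, s2) for s2 in states if s2 != s)

states = ["a", "b", "c"]
for s in states:
    print(s, pareto_efficient(s, states))
# a: False (a -> c strictly improves both), b: False (b -> c helps individual 1,
# hurts nobody), c: True (any move away makes someone worse off)
```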

If one adheres to a consequentialist political doctrine (such as classical utilitarianism) rather than a deontological doctrine (such as liberalism), in which action is guided by some categorical imperative other than consequentialism, the guide offered by Pareto improvement is the least controversial and least politically committal criterion for decision-making one can find. Indeed, if we restrict political statements to those which concern the assignation of losses, it is apolitical. It makes a value judgement only about who ought to gain (whosoever stands to).

Unless one holds a strict deontological doctrine in the style, say, of Robert Nozick’s Anarchy, State, and Utopia (in which the maintenance of individual freedom is the categorical imperative) or John Rawls’ A Theory of Justice (in which individual freedom is again the primary categorical imperative and the betterment of the “poorest” the second), it is more difficult to argue against implementing some decision which will cause a change of society to which all individuals will be at worst indifferent than to argue for some decision rule which will induce a change of society which some individual will find less preferable. To the rationalistic economist it seems almost petty, certainly irrational, to argue against this criterion, like those individuals who demand “fairness” in the famous “dictator” experiment rather than accept someone else becoming “better off” while they themselves are no “worse off”.

Fundamental Theorem of Asset Pricing: Tautological Meeting of Mathematical Martingale and Financial Arbitrage by the Measure of Probability.


The Fundamental Theorem of Asset Pricing (FTAP hereafter) has two broad tenets, viz.

1. A market admits no arbitrage, if and only if, the market has a martingale measure.

2. Every contingent claim can be hedged, if and only if, the martingale measure is unique.

The FTAP is a theorem of mathematics, and the use of the term ‘measure’ in its statement places the FTAP within the theory of probability formulated by Andrei Kolmogorov (Foundations of the Theory of Probability) in 1933. Kolmogorov’s work took place in a context captured by Bertrand Russell, who observed that

It is important to realise the fundamental position of probability in science. . . . As to what is meant by probability, opinions differ.

In the 1920s the idea of randomness, as distinct from a lack of information, was becoming substantive in the physical sciences because of the emergence of the Copenhagen Interpretation of quantum mechanics. In the social sciences, Frank Knight argued that uncertainty was the only source of profit, and the concept was pervading John Maynard Keynes’ economics (Robert Skidelsky, Keynes: The Return of the Master).

Two mathematical theories of probability had become ascendant by the late 1920s. Richard von Mises (brother of the Austrian economist Ludwig) attempted to lay down the axioms of classical probability within a framework of Empiricism, the ‘frequentist’ or ‘objective’ approach. To counterbalance von Mises, the Italian actuary Bruno de Finetti presented a more Pragmatic approach, characterised by his claim that “Probability does not exist” because it was only an expression of the observer’s view of the world. This ‘subjectivist’ approach was closely related to the less well-known position taken by the Pragmatist Frank Ramsey, who developed an argument against Keynes’ Realist interpretation of probability presented in the Treatise on Probability.

Kolmogorov addressed the trichotomy of mathematical probability by generalising, so that Realist, Empiricist and Pragmatist probabilities were all examples of ‘measures’ satisfying certain axioms. In doing this, a random variable became a function and an expectation an integral: probability became a branch of Analysis, not Statistics. Von Mises criticised Kolmogorov’s generalised framework as unnecessarily complex. About a decade and a half ago, the physicist Edwin Jaynes (Probability Theory: The Logic of Science) championed Leonard Savage’s subjectivist Bayesianism as having a “deeper conceptual foundation which allows it to be extended to a wider class of applications, required by current problems of science”.

For empirical scientists, the objection to measure-theoretic probability is its lack of physicality. Frequentist probability is based on the act of counting; subjectivist probability is based on a flow of information which, following Claude Shannon, is now an observable entity in empirical science. Measure-theoretic probability is based on abstract mathematical objects unrelated to sensible phenomena. However, the generality of Kolmogorov’s approach made it flexible enough to handle problems that emerged in physics and engineering during the Second World War, and his approach became widely accepted after 1950 because it was practically more useful.

In the context of the first statement of the FTAP, a ‘martingale measure’ is a probability measure, usually labelled Q, such that the (real, rather than nominal) price of an asset today, X_0, is the expectation, taken under the martingale measure, of its (real) price in the future, X_T. Formally,

X_0 = E_Q[X_T]

The abstract probability distribution Q is defined so that this equality holds, not on the basis of any empirical information about historical prices or subjective judgement of future prices. The only condition placed on the relationship that the martingale measure has with the ‘natural’, or ‘physical’, probability measure, usually labelled P, is that they agree on what is possible.

The term ‘martingale’ in this context derives from doubling strategies in gambling; it was introduced into mathematics by Jean Ville in a development of von Mises’ work. The idea that asset prices have the martingale property was first proposed by Benoit Mandelbrot in response to an early formulation of Eugene Fama’s Efficient Market Hypothesis (EMH), the two concepts being combined by Fama. For Mandelbrot and Fama the key consequence of prices being martingales was that the current price was independent of the future price and technical analysis would not prove profitable in the long run. In developing the EMH there was no discussion of the nature of the probability under which assets are martingales, and it is often assumed that the expectation is calculated under the natural measure. While the FTAP employs modern terminology in the context of value-neutrality, the idea of equating a current price with a future, uncertain price has ethical ramifications.

The other technical term in the first statement of the FTAP, arbitrage, has long been used in financial mathematics. In Liber Abaci (Laurence Sigler, Fibonacci’s Liber Abaci), Fibonacci discusses ‘Barter of Merchandise and Similar Things’: 20 arms of cloth are worth 3 Pisan pounds and 42 rolls of cotton are similarly worth 5 Pisan pounds; it is sought how many rolls of cotton will be had for 50 arms of cloth. In this case there are three commodities, arms of cloth, rolls of cotton and Pisan pounds, and Fibonacci solves the problem by having Pisan pounds ‘arbitrate’, or ‘mediate’ as Aristotle might say, between the other two commodities.
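Working Fibonacci’s numbers through, with the Pisan pound mediating between the two goods: 50 arms × (3 pounds / 20 arms) = 7.5 pounds, and 7.5 pounds × (42 rolls / 5 pounds) = 63 rolls of cotton for the 50 arms of cloth.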

Within neo-classical economics, the Law of One Price was developed in a series of papers between 1954 and 1964 by Kenneth Arrow, Gérard Debreu and Lionel McKenzie in the context of general equilibrium, in particular through the introduction of the Arrow Security, which, employing the Law of One Price, could be used to price any asset. It was on this principle that Black and Scholes believed the value of warrants could be deduced by employing a hedging portfolio; in introducing their work with the statement that “it should not be possible to make sure profits”, they were invoking an arbitrage argument with an eight-hundred-year history. In the context of the FTAP, ‘an arbitrage’ has come to mean the ability to formulate a trading strategy such that the probability, whether under a natural or a martingale measure, of a loss is zero, but the probability of a positive profit is not.

To understand the connection between the financial concept of arbitrage and the mathematical idea of a martingale measure, consider the most basic case of a single asset whose current price, X_0, can take on one of two (real) future values, X_T^D < X_T^U, at time T > 0. In this case an arbitrage would exist if X_0 ≤ X_T^D < X_T^U: buying the asset now, at a price less than or equal to both future pay-offs, would lead to a possible profit at the end of the period, with a guarantee of no loss. Similarly, if X_T^D < X_T^U ≤ X_0, short selling the asset now and buying it back at time T would also yield an arbitrage. So, for there to be no arbitrage opportunities we require that

X_T^D < X_0 < X_T^U

This implies that there is a number, 0 < q < 1, such that

X_0 = X_T^D + q(X_T^U − X_T^D)

= qX_T^U + (1 − q)X_T^D

The price now, X_0, lies between the future prices, X_T^U and X_T^D, in the ratio q : (1 − q) and represents some sort of ‘average’. The first statement of the FTAP can be interpreted simply as “the price of an asset must lie between its maximum and minimum possible (real) future price”.

If X_0 < X_T^D ≤ X_T^U we have that q < 0, whereas if X_T^D ≤ X_T^U < X_0 then q > 1; in both cases q does not represent a probability measure, which by Kolmogorov’s axioms must lie between 0 and 1. In either of these cases an arbitrage exists and a trader can make a riskless profit: the market involves ‘turpe lucrum’. This account gives an insight as to why James Bernoulli, in his moral approach to probability, considered situations where probabilities did not sum to 1: he was considering problems that were pathological not because they failed the rules of arithmetic but because they were unfair. It follows that if there are no arbitrage opportunities then the quantity q can be seen as representing the ‘probability’ that the price X_T^U will materialise in the future. Formally,

X_0 = qX_T^U + (1 − q)X_T^D ≡ E_Q[X_T]
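A numeric instantiation of this one-period argument (prices invented purely for illustration):

```python
# One-period binomial check of the no-arbitrage condition X_T^D < X_0 < X_T^U.
X0, XTD, XTU = 100.0, 80.0, 130.0          # current price between future prices

q = (X0 - XTD) / (XTU - XTD)               # solve X0 = q*XTU + (1-q)*XTD for q
assert 0 < q < 1, "a price outside (XTD, XTU) admits an arbitrage"

expectation = q * XTU + (1 - q) * XTD      # E_Q[X_T] under the martingale measure
print(f"q = {q:.2f}, E_Q[X_T] = {expectation:.2f}")   # q = 0.40, E_Q[X_T] = 100.00
```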

The connection between the financial concept of arbitrage and the mathematical object of a martingale is essentially a tautology: both statements mean that the price today of an asset must lie between its future minimum and maximum possible values. This first statement of the FTAP was anticipated by Frank Ramsey when he defined ‘probability’ in the Pragmatic sense of ‘a degree of belief’ and argued that degrees of belief are measured through betting odds. On this basis he formulated some axioms of probability, including that a probability must lie between 0 and 1. He then went on to say that

These are the laws of probability, …If anyone’s mental condition violated these laws, his choice would depend on the precise form in which the options were offered him, which would be absurd. He could have a book made against him by a cunning better and would then stand to lose in any event.

This is a Pragmatic argument that identifies the absence of the martingale measure with the existence of arbitrage, and today it forms the basis of the standard argument as to why arbitrages do not exist: if they did, other market participants would bankrupt the agent who was mis-pricing the asset. This has become known in philosophy as the ‘Dutch Book’ argument and, as a consequence of the fact/value dichotomy, is often presented as a ‘matter of fact’. However, setting the fact/value dichotomy aside, the Dutch Book argument is a variant of the ‘Golden Rule’ – “Do to others as you would have them do to you” – it is infused with the moral concepts of fairness and reciprocity (Jeffrey Wattles, The Golden Rule).

At its core, then, the FTAP embodies the ethical concept of Justice, capturing the social norms of reciprocity and fairness. This is significant in the context of Granovetter’s discussion of embeddedness in economics. It is conventional to assume that mainstream economic theory is ‘undersocialised’: agents are rational calculators seeking to maximise an objective function. The argument presented here is that a central theorem in contemporary economics, the FTAP, is deeply embedded in social norms, despite being presented as an undersocialised mathematical object. This embeddedness is a consequence of the origins of mathematical probability lying in the ethical analysis of commercial contracts: the feudal shackles are still binding this most modern of economic theories.

Ramsey goes on to make an important point

Having any definite degree of belief implies a certain measure of consistency, namely willingness to bet on a given proposition at the same odds for any stake, the stakes being measured in terms of ultimate values. Having degrees of belief obeying the laws of probability implies a further measure of consistency, namely such a consistency between the odds acceptable on different propositions as shall prevent a book being made against you.

Ramsey is arguing that an agent needs to employ the same measure in pricing all assets in a market, and this is the key result in contemporary derivative pricing. Having identified the martingale measure on the basis of a ‘primal’ asset, it is then applied across the market, in particular to derivatives on the primal asset; the well-known result is that if two assets offer different ‘market prices of risk’, an arbitrage exists. This explains why the market price of risk appears in the Radon-Nikodym derivative and the Capital Market Line: it enforces Ramsey’s consistency in pricing.

The second statement of the FTAP is concerned with incomplete markets, which appear in relation to Arrow-Debreu prices. In the special case that there are as many, or more, assets in a market as there are possible future, uncertain, states, a unique pricing vector can be deduced for the market by Cramer’s Rule. If the elements of the pricing vector satisfy the axioms of probability, specifically that each element is positive and they all sum to one, then the market precludes arbitrage opportunities. This is the case covered by the first statement of the FTAP. In the more realistic situation that there are more possible future states than assets, the market can still be arbitrage free, but the pricing vector, the martingale measure, might not be unique. An agent can still be consistent in which particular martingale measure they choose to use, but another agent might choose a different measure, such that the two do not agree on a price. In the context of the Law of One Price, this means that we cannot hedge, replicate or cover a position in the market such that the portfolio is riskless. The significance of the second statement of the FTAP is that it tells us that in the sensible world of imperfect knowledge and transaction costs, a model within the framework of the FTAP cannot give a precise price. When faced with incompleteness in markets, agents need alternative ways to price assets, and behavioural techniques have come to dominate financial theory. This feature was already recognised in The Port Royal Logic, which noted the role of transaction costs in lotteries.
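The complete-market case can be made concrete with a small linear system: with as many assets as states, the payoff matrix is square and the pricing vector is the unique solution (the Cramer’s Rule case); with more states than assets the system is underdetermined and many measures fit. A sketch reusing the binomial numbers from above (all figures illustrative):

```python
import numpy as np

# Complete-market toy: 2 assets, 2 future states.
# Row k of `payoffs` lists asset k's payoff in each state; `prices` are today's prices.
payoffs = np.array([[1.0, 1.0],      # riskless bond paying 1 in both states
                    [130.0, 80.0]])  # risky asset from the binomial example
prices = np.array([1.0, 100.0])

# Square, invertible system: the pricing vector is unique (Cramer's Rule case).
state_prices = np.linalg.solve(payoffs, prices)
print("state prices:", state_prices)          # [0.4, 0.6]

# Positive elements summing to one: the (unique) martingale measure Q.
assert (state_prices > 0).all() and np.isclose(state_prices.sum(), 1.0)

# With 3 states but the same 2 assets, payoffs @ x = prices would be
# underdetermined: arbitrage-free, yet the measure need not be unique.
```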

Conjuncted: What’s Right-Wing With Negri? Note Quote.


Already with his concept of the socialised worker, Negri had rejected the central pillar of Marx’s economics – the relationship between value and labour. As the whole of society becomes a social factory, the duration of labour becomes unquantifiable and it becomes impossible to reduce specific forms of labour to abstract socially necessary labour. As the 1980s and 1990s unfolded, Negri underpinned his new politics with reference to two fashionable right-wing theories – the idea of a ‘weightless economy’ developing out of a high-tech ‘third industrial revolution’ and, more recently, extreme versions of globalisation theory depicting the death of the nation-state. Today Negri claims that ‘immaterial labour’ has taken the place of industrial labour as the hegemonic form of production towards which other forms of labour tend. Negri’s descriptions of contemporary production will seem unfamiliar to most workers: ‘A gigantic cultural revolution is under way. Free expression and the joy of bodies, the autonomy, hybridisation and the reconstruction of languages, the creation of new singular mobile modes of production—all this emerges, everywhere and continually’.

[Global corporations are anxious to include] difference within their realm and thus aim to maximise creativity, free play and diversity in the corporate workplace. People of all different races, sexes and sexual orientations should potentially be included in the corporation; the daily routine of the workplace should be rejuvenated with unexpected changes and an atmosphere of fun. Break down the old boundaries and let 100 flowers bloom!

Exploitation, in the Marxist sense of the pumping of unpaid surplus labour out of workers, has ended. Exploitation today means capturing the creative energies of a joyous, cooperating multitude – who may be inside or outside of the workplace. The domination of dead labour, such as machinery or computers, over living labour is finished, because living (for Negri, intellectual) labour is now dominant. The tool of production is now the brain. Paul Thompson explains how Negri’s thinking parallels right-wing accounts of the economic changes since the 1970s:

This appears to be remarkably similar to knowledge economy arguments, which we might briefly summarise in the following way. In the information age, capital and labour are said to have been displaced by the centrality of knowledge; brawn by brain; and the production of goods by services and manipulation of symbols. As a commodity, knowledge is too complex, intensive and esoteric to be managed through command and control. The archetypal worker in the new economy makes his or her living from judgement, service and analysis… As none of this is calculable or easily measured, it is the inherent property of the producer… This shifts the power balance to the employee, an increasing proportion of whom fall into the category of mobile, self-reliant and demanding ‘free workers’.

Thompson goes on to provide a detailed critique of the idea of immaterial labour. Even at the most immaterial end of the labour market, intellectual property regimes allow the commodification of knowledge. And such workers are still subject to exploitation and control centred upon the workplace.

Far from the workplace ceasing to be the centre of capital accumulation for the ruling class, it plays an increasingly important role in a world of labour intensification and tightening managerial control. The workplace is still the point at which the fixed capital necessary for the production of most goods and services is centralised. And it is still the site where surplus value is extracted from workers – the central obsession of capitalists and states – and thus the point at which those opposed to the rule of capital should concentrate their efforts. Just like his vision of the weightless economy, Negri’s account of globalisation is almost entirely unsupported by empirical evidence. He writes that:

large transnational corporations have effectively surpassed the jurisdiction and authority of nation-states…the state has been defeated and corporations now rule the earth!

Austrian Economics. Some More Further Ruminations. Part 3.

The dominant British tradition received its first serious challenge in many years when Carl Menger’s Principles of Economics was published in 1871. Menger, the founder of the Austrian School proper, resurrected the Scholastic-French approach to economics, and put it on firmer ground.

Menger spelled out the subjective basis of economic value, and fully explained, for the first time, the theory of marginal utility (the greater the number of units of a good that an individual possesses, the less he will value any given unit). In addition, Menger showed how money originates in a free market when the most marketable commodity is desired, not for consumption, but for use in trading for other goods. Menger restored economics as the science of human action based on deductive logic, and prepared the way for later theorists to counter the influence of socialist thought. Indeed, his student Friedrich von Wieser strongly influenced Friedrich von Hayek’s later writings.

Menger’s admirer and follower at the University of Innsbruck, Eugen Böhm-Bawerk, took Menger’s exposition, reformulated it, and applied it to a host of new problems involving value, price, capital, and interest. His History and Critique of Interest Theories, appearing in 1884, is a sweeping account of fallacies in the history of thought and a firm defense of the idea that the interest rate is not an artificial construct but an inherent part of the market. It reflects the universal fact of “time preference,” the tendency of people to prefer satisfaction of wants sooner rather than later.

Böhm-Bawerk’s Positive Theory of Capital demonstrated that the normal rate of business profit is the interest rate. Capitalists save money, pay laborers, and wait until the final product is sold to receive profit. In addition, he demonstrated that capital is not homogeneous but an intricate and diverse structure that has a time dimension. A growing economy is not just a consequence of increased capital investment, but also of longer and longer processes of production.

Böhm-Bawerk favored policies that deferred to the ever-present reality of economic law. He regarded interventionism as an attack on market economic forces that cannot succeed in the long run. But one area where Böhm-Bawerk had not elaborated on the analysis of Menger was money, the institutional intersection of the “micro” and “macro” approach. A young Ludwig von Mises, economic advisor to the Austrian Chamber of Commerce, took on the challenge.

The result of Mises’s research was The Theory of Money and Credit, published in 1912. He spelled out how the theory of marginal utility applies to money, and laid out his “regression theorem,” showing that money not only originates in the market, but must always do so. Drawing on the British Currency School, Knut Wicksell’s theory of interest rates, and Böhm-Bawerk’s theory of the structure of production, Mises presented the broad outline of the Austrian theory of the business cycle. To note once again, his was not a theory of physical capital but a theory of interest. So, even if some of the school’s economists covered the complexities of the structure of production in their writings, that was not really their research object; their real concentration was on the phenomena of interest, the trade cycle and entrepreneurship.

Ludwig Lachmann, in his Capital and its Structure, is most serious about the complexities of the structure of production, especially the heterogeneity of physical capital – not only in relation to successive stages of production, but in denying any possibility of systematically categorizing, measuring or aggregating capital goods. But does that mean he is from a different camp? Evidently not, since much of his discussion contains an important contribution to the historical specificity of capital, in that the heterogeneous is not itself the research object, but only a problem statement for the theory of the entrepreneur. Says he,

For most purposes capital goods have to be used jointly. Complementarity is of the essence of capital use. But the heterogeneous capital resources do not lend themselves to combination in any arbitrary fashion. For any given number of them only certain modes of complementarity are technically possible, and only a few of these are economically significant. It is among the latter that the entrepreneur has to find the ‘optimum combination’.

For him, the true function of the entrepreneur must remain hidden as long as we disregard the heterogeneity of capital. But Peter Lewin’s Capital in Disequilibrium reads Lachmann revealingly: what makes it possible for entrepreneurs to make production plans comprising numerous heterogeneous capital goods is a combination of the market process and the institutions of money and financial accounting. Here one can see Lachmann slipping into historical territory. Says Lewin,

Planning within firms proceeds against the necessary backdrop of the market. Planning within firms can occur precisely because “the market” furnishes it with the necessary prices for the factor inputs that would be absent in a fullblown state ownership situation.

Based on these prices, the institution of monetary calculation allows entrepreneurs to calculate retrospective and prospective profits. The calculation of profits, Lewin states, is “indispensable in that it provides the basis for discrimination between viable and non-viable production projects.” The approach is not concerned with the heterogeneity of capital goods as such but, on the contrary, with the way these goods are made homogeneous so that entrepreneurs can make the calculations their production plans are based on. Without this homogeneity of capital goods in relation to the goal of the entrepreneur – making monetary profit – it would be difficult, if not impossible, to combine them in a meaningful way.