Long Term Capital Management. Note Quote.

Long Term Capital Management, or LTCM, was a hedge fund founded in 1994 by John Meriwether, the former head of Salomon Brothers’ domestic fixed-income arbitrage group. Meriwether had grown the arbitrage group into Salomon’s most profitable unit by 1991, when it was revealed that one of the traders under his purview had astonishingly submitted a false bid in a U.S. Treasury bond auction. Although Meriwether reported the trade immediately to CEO John Gutfreund, the outcry from the scandal forced him to resign.

Meriwether revived his career several years later with the founding of LTCM. Amidst the beginning of one of the greatest bull markets the global markets had ever seen, Meriwether assembled a team of some of the world’s most respected economic theorists to join other refugees from the arbitrage group at Salomon. The board of directors included Myron Scholes, a coauthor of the famous Black-Scholes formula used to price option contracts, and MIT Sloan professor Robert Merton, both of whom would later share the 1997 Nobel Prize for Economics. The firm’s impressive brain trust, collectively considered geniuses by most of the financial world, set out to raise a $1 billion fund by explaining to investors that their profoundly complex computer models allowed them to price securities according to risk more accurately than the rest of the market, in effect “vacuuming up nickels that others couldn’t see.”

One typical LTCM trade concerned the divergence in price between otherwise similar long-term U.S. Treasury bonds. Despite offering fundamentally the same (minimal) default risk, those issued more recently – known as “on-the-run” securities – traded more heavily than the “off-the-run” securities issued just months previously. Heavier trading meant greater liquidity, which in turn resulted in ever-so-slightly higher prices. As “on-the-run” securities become “off-the-run” upon the issuance of a new tranche of Treasury bonds, the price discrepancy generally disappears with time. LTCM sought to exploit that price convergence by shorting the more expensive “on-the-run” bond while purchasing the “off-the-run” security.

By early 1998 the intellectual firepower of its board members and the aggressive trading practices that had made the arbitrage group at Salomon so successful had allowed LTCM to flourish, growing its initial $1 billion of investor equity to $4.72 billion. However, the minuscule spreads earned on arbitrage trades could not provide the type of returns sought by hedge fund investors. In order to make transactions such as these worth their while, LTCM had to employ massive leverage to magnify its returns. Ultimately, the fund’s equity component sat atop more than $124.5 billion in borrowings for total assets of more than $129 billion. These borrowings were merely the tip of the iceberg; LTCM also held off-balance-sheet derivative positions with a notional value of more than $1.25 trillion.

The fund’s success began to pose its own problems. The market lacked sufficient capacity to absorb LTCM’s bloated size, as trades that had been profitable initially became impossible to conduct on a massive scale. Moreover, a flood of arbitrage imitators tightened the spreads on LTCM’s “bread-and-butter” trades even further. The pressure to continue delivering returns forced LTCM to find new arbitrage opportunities, and the fund diversified into areas where it could not pair its theoretical insights with trading experience. Soon LTCM had made large bets in Russia and in other emerging markets, on S&P futures, and in yield curve, junk bond, merger, and dual-listed securities arbitrage.

Combined with its style drift, the fund’s more than 26-to-1 leverage put LTCM in an increasingly precarious bubble, which was eventually burst by a combination of factors that forced the fund into a liquidity crisis. In contrast to Scholes’s comments about plucking invisible, riskless nickels from the sky, financial theorist Nassim Taleb later compared the fund’s aggressive risk taking to “picking up pennies in front of a steamroller,” a steamroller that finally arrived in the form of 1998’s market panic. The departure of frequent LTCM counterparty Salomon Brothers from the arbitrage market that summer put downward pressure on many of the fund’s positions, and Russia’s default on its government-issued bonds threw international credit markets into a downward spiral. Panicked investors around the globe demonstrated a “flight to quality,” selling the risky securities in which LTCM traded and purchasing U.S. Treasury securities, further driving up Treasury prices and preventing the price convergence upon which the fund had bet so heavily.

None of LTCM’s sophisticated theoretical models had contemplated such an internationally correlated credit market collapse, and the fund began hemorrhaging money, losing nearly 20% of its equity in May and June alone. Day after day, every market in which LTCM traded turned against it. Its powerless brain trust watched in horror as the fund’s equity shrank to $600 million in early September without any reduction in borrowing, resulting in an unfathomable 200-to-1 leverage ratio. Sensing the fund’s liquidity crunch, Bear Stearns refused to continue acting as a clearinghouse for the fund’s trades, throwing LTCM into a panic. Without the short-term credit that enabled its entire trading operation, the fund could not continue, and its longer-term securities grew more illiquid by the day.

Obstinate in their refusal to unwind what they still considered profitable trades hammered by short-term market irrationality, LTCM’s partners refused a buyout offer of $250 million by Goldman Sachs, ING Barings, and Warren Buffett’s Berkshire Hathaway. However, LTCM’s role as a counterparty in thousands of derivatives trades that touched investment firms around the world threatened to provoke a wider collapse in international securities markets if the fund went under, so the U.S. Federal Reserve stepped in to maintain order. Wishing to avoid the precedent of a government bailout of a hedge fund and the moral hazard it could subsequently encourage, the Fed invited every major investment bank on Wall Street to an emergency meeting in New York and dictated the terms of the $3.625 billion bailout that would preserve market liquidity. The Fed convinced Bankers Trust, Barclays, Chase, Credit Suisse First Boston, Deutsche Bank, Goldman Sachs, Merrill Lynch, J.P. Morgan, Morgan Stanley, Salomon Smith Barney, and UBS – many of which were investors in the fund – to contribute $300 million apiece, with $125 million coming from Société Générale and $100 million each from Lehman Brothers and Paribas. Eventually the market crisis passed, and each bank managed to liquidate its position at a slight profit. Only one bank contacted by the Fed refused to join the syndicate and share the burden in the name of preserving market integrity.

That bank was Bear Stearns.

Bear’s dominant trading position in bonds and derivatives had won it the profitable business of acting as a settlement house for nearly all of LTCM’s trading in those markets. On September 22, 1998, just days before the Fed-organized bailout, Bear put the final nail in the LTCM coffin by calling in a short-term debt in the amount of $500 million in an attempt to limit its own exposure to the failing hedge fund, rendering it insolvent in the process. Ever the maverick in investment banking circles, Bear stubbornly refused to contribute to the eventual buyout, even in the face of a potentially apocalyptic market crash and despite the millions in profits it had earned as LTCM’s prime broker. In typical Bear fashion, James Cayne ignored the howls from other banks that failure to preserve confidence in the markets through a bailout would bring them all down in flames, famously growling through a chewed cigar as the Fed solicited contributions for the emergency financing, “Don’t go alphabetically if you want this to work.”

Market analysts were nearly unanimous in describing the lessons learned from LTCM’s implosion; in effect, the fund’s profound leverage had placed it in such a precarious position that it could not wait for its positions to turn profitable. While its trades were sound in principle, LTCM’s predicted price convergence was not realized until long after its equity had been wiped out completely. A less leveraged firm, they explained, might have realized lower profits than the 40% annual return LTCM had offered investors up until the 1998 crisis, but could have weathered the storm once the market turned against it. In the words of economist John Maynard Keynes, the market had remained irrational longer than LTCM could remain solvent. The crisis further illustrated the importance not merely of liquidity but of perception in the less regulated derivatives markets. Once LTCM’s ability to meet its obligations was called into question, its demise became inevitable, as it could no longer find counterparties with whom to trade and from whom it could borrow to continue operating.

The thornier question of the Fed’s role in bailing out an overly aggressive investment fund in the name of market stability remained unresolved, despite the Fed’s insistence on private funding for the actual buyout. Though impossible to foresee at the time, the issue would be revisited anew less than ten years later, and it would haunt Bear Stearns. With the negative publicity from Bear’s $38.5 million SEC settlement – over charges that it had ignored fraudulent behavior by a client for whom it cleared trades – and from LTCM’s collapse behind it, Bear Stearns continued to grow under Cayne’s leadership, with its stock price appreciating some 600% from his assumption of control in 1993 until 2008. However, a rapid-fire sequence of negative events began to unfold in the summer of 2007 that would push Bear into a liquidity crunch eerily similar to the one that felled LTCM.

Synthetic Structured Financial Instruments. Note Quote.

An option is a common form of derivative. It’s a contract, or a provision of a contract, that gives one party (the option holder) the right, but not the obligation, to perform a specified transaction with another party (the option issuer or option writer) according to specified terms. Options can be embedded into many kinds of contracts. For example, a corporation might issue a bond with an option that will allow the company to buy the bonds back in ten years at a set price. Standalone options trade on exchanges or over the counter (OTC). They are linked to a variety of underlying assets. Most exchange-traded options have stocks as their underlying asset, but OTC-traded options have a huge variety of underlying assets (bonds, currencies, commodities, swaps, or baskets of assets). There are two main types of options: calls and puts:

  • Call options provide the holder the right (but not the obligation) to purchase an underlying asset at a specified price (the strike price) for a certain period of time. If the stock fails to rise above the strike price before the expiration date, the option expires and becomes worthless. Investors buy calls when they think the share price of the underlying security will rise, and sell a call if they think it will fall. Selling an option is also referred to as “writing” an option.
  • Put options give the holder the right to sell an underlying asset at a specified price (the strike price). The seller (or writer) of the put option is obligated to buy the stock at the strike price if the option is exercised. Put options can be exercised at any time before the option expires. Investors buy puts if they think the share price of the underlying stock will fall, and sell them if they think it will rise. Put buyers – those who hold a “long” put – are either speculative buyers looking for leverage or “insurance” buyers who want to protect their long positions in a stock for the period of time covered by the option. Put sellers hold a “short,” expecting the market to move upward (or at least stay stable). A worst-case scenario for a put seller is a downward market turn. The maximum profit is limited to the put premium received and is achieved when the price of the underlyer is at or above the option’s strike price at expiration. The maximum loss for an uncovered put writer is substantial: the strike price less the premium received, incurred if the underlying falls to zero. (A short payoff sketch follows this list.)
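As a rough illustration of these definitions, here is a minimal sketch (plain Python, with hypothetical strike, premium, and spot values) of the profit or loss at expiration for a long call and a long put.

```python
# Minimal sketch: profit/loss at expiration for plain-vanilla options.
# Strike, premium and spot values below are hypothetical, for illustration only.

def call_pnl(spot: float, strike: float, premium: float) -> float:
    """Profit/loss per share for a long call held to expiration."""
    return max(spot - strike, 0.0) - premium

def put_pnl(spot: float, strike: float, premium: float) -> float:
    """Profit/loss per share for a long put held to expiration."""
    return max(strike - spot, 0.0) - premium

strike, premium = 100.0, 3.0
for spot in (80.0, 100.0, 120.0):
    print(f"spot={spot:6.1f}  long call={call_pnl(spot, strike, premium):7.2f}  "
          f"long put={put_pnl(spot, strike, premium):7.2f}")
```

The short side of either contract is the mirror image: the writer’s profit is capped at the premium received, which is why the uncovered writer’s downside is so much larger than the buyer’s.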

Coupon is the annual interest rate paid on a bond, expressed as a percentage of the face value.

Coupon rate or nominal yield = annual payments ÷ face value of the bond

Current yield = annual payments ÷ market value of the bond
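To make the two yield definitions concrete, here is a tiny sketch with hypothetical bond figures.

```python
# Hypothetical bond: face value 1000, annual coupon payment 60, market price 950.
face_value = 1_000.0
annual_payment = 60.0
market_value = 950.0

coupon_rate = annual_payment / face_value      # nominal yield: 6.00%
current_yield = annual_payment / market_value  # yield at the current market price: ~6.32%

print(f"coupon rate   = {coupon_rate:.2%}")
print(f"current yield = {current_yield:.2%}")
```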

These terms are briefed here, with definitions drawn from Investopedia, because they happen to be pillars of synthetic financial instruments, to which we now take a detour.

According to the International Financial Reporting Standards (IFRS), a synthetic instrument is a financial product designed, acquired, and held to emulate the characteristics of another instrument. For example, such is the case of a floating-rate long-term debt combined with an interest rate swap. This involves

  • Receiving floating payments
  • Making fixed payments, thereby synthesizing a fixed-rate long-term debt

Another example of a synthetic is the output of an option strategy followed by dealers who sell synthetic futures for a commodity they hold, using a combination of put and call options. By simultaneously buying a put option in a given commodity, say, gold, and selling the corresponding call option, a trader can construct a position analogous to a short sale in the commodity’s futures market.

Because the synthetic short sale seeks to take advantage of price disparities between call and put options, it tends to be more profitable when call premiums are greater than comparable put premiums. For example, the holder of a synthetic short future will profit if gold prices decrease and incur losses if gold prices increase.
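A minimal sketch of the payoff logic just described, assuming European options with a common strike and hypothetical premiums: a long put combined with a short call behaves, at expiration, like a short position in the underlying’s futures.

```python
# Synthetic short future = long put + short call, same strike and expiry.
# The strike, premiums and terminal prices are hypothetical.

def synthetic_short_pnl(spot: float, strike: float,
                        put_premium: float, call_premium: float) -> float:
    long_put = max(strike - spot, 0.0) - put_premium
    short_call = call_premium - max(spot - strike, 0.0)
    return long_put + short_call   # collapses to (strike - spot) + (call_premium - put_premium)

strike = 1800.0
for spot in (1700.0, 1800.0, 1900.0):
    pnl = synthetic_short_pnl(spot, strike, put_premium=25.0, call_premium=32.0)
    print(f"gold at expiry {spot:7.1f}  ->  P/L {pnl:8.1f}")
```

At every terminal price the combined payoff reduces to (strike − spot) plus the premium differential, which is why the position mimics a short futures contract, and why a call premium above the comparable put premium adds to the profit.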

By analogy, a long position in a given commodity’s call option combined with a short sale of the same commodity’s futures creates price protection that is similar to that gained through purchasing put options. A synthetic put seeks to capitalize on disparities between call and put premiums.

Basically, synthetic products are covered options and certificates characterized by identical or similar profit and loss structures when compared with traditional financial instruments, such as equities or bonds. Basket certificates in equities are based on a specific number of selected stocks.

A covered option involves the purchase of an underlying asset, such as an equity, bond, currency, or other commodity, and the writing of a call option on that same asset. The writer is paid a premium, which limits his or her loss in the event of a fall in the market value of the underlying. However, his or her potential return from any increase in the asset’s market value is capped, since gains are limited by the option’s strike price.

The concept underpinning synthetic covered options is that of duplicating traditional covered options, which can be achieved by both purchase of the underlying asset and writing of the call option. The purchase price of such a product is that of the underlying, less the premium received for the sale of the call option.

Moreover, synthetic covered options do not contain a hedge against losses in market value of the underlying. A hedge might be emulated by writing a call option or by calculating the return from the sale of a call option into the product price. The option premium, however, tends to limit possible losses in the market value of the underlying.

Alternatively, a synthetic financial instrument is constructed through a certificate that accords a right based either on a number of underlyings or on a value derived from several indicators. This provides a degree of diversification over a range of risk factors. The main types are

  • Index certificates
  • Region certificates
  • Basket certificates

By being based on an official index, index certificates reflect a given market’s behavior. Region certificates are derived from a number of indexes or companies from a given region, usually involving developing countries. Basket certificates are derived from a selection of companies active in a certain industry sector.

An investment in index, region, or basket certificates fundamentally involves the same level of potential loss as a direct investment in the corresponding assets themselves. Their relative advantage is diversification within a given specified range; but risk is not eliminated. Moreover, certificates also carry credit risk associated with the issuer.

Also available in the market are compound financial instruments, a frequently encountered form being that of a debt product with an embedded conversion option. An example of a compound financial instrument is a bond that is convertible into ordinary shares of the issuer. As an accounting standard, the IFRS requires the issuer of such a financial instrument to present separately on the balance sheet the

  • Equity component
  • Liability component

On initial recognition, the fair value of the liability component is the present value of the contractually determined stream of future cash flows, discounted at the rate of interest applied at that time by the market to substantially similar cash flows. These should be characterized by practically the same terms, albeit without a conversion option. The fair value of the option comprises its

  • Time value
  • Intrinsic value (if any)

The IFRS requires that on conversion of a convertible instrument at maturity, the reporting company derecognizes the liability component and recognizes it as equity. Embedded derivatives are an interesting issue inasmuch as some contracts that themselves are not financial instruments may have financial instruments embedded in them. This is the case of a contract to purchase a commodity at a fixed price for delivery at a future date.
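A minimal sketch of this split-accounting idea, under the simplifying assumptions of annual coupons and a known market rate for comparable non-convertible debt; all figures are hypothetical.

```python
# Split a convertible bond into its liability and equity components:
# liability = present value of the contractual cash flows, discounted at the
# market rate for substantially similar debt without the conversion option;
# equity = issue proceeds minus the liability component.

def split_convertible(face: float, coupon_rate: float, years: int,
                      market_rate: float, issue_proceeds: float):
    coupon = face * coupon_rate
    liability = sum(coupon / (1 + market_rate) ** t for t in range(1, years + 1))
    liability += face / (1 + market_rate) ** years
    return liability, issue_proceeds - liability

liability, equity = split_convertible(face=1_000.0, coupon_rate=0.04, years=5,
                                      market_rate=0.07, issue_proceeds=1_000.0)
print(f"liability component = {liability:.2f}, equity component = {equity:.2f}")
```

Here the equity component is simply the residual, the value attributed to the embedded conversion option (its time value plus any intrinsic value).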

Contracts of this type have embedded in them a derivative that is indexed to the price of the commodity, which is essentially a derivative feature within a contract that is not a financial derivative. International Accounting Standard 39 (IAS 39) of the IFRS requires that under certain conditions an embedded derivative is separated from its host contract and treated as a derivative instrument. For instance, the IFRS specifies that each of the individual derivative instruments that together constitute a synthetic financial product represents a contractual right or obligation with its own terms and conditions. Under this perspective,

  • Each is exposed to risks that may differ from the risks to which other financial products are exposed.
  • Each may be transferred or settled separately.

Therefore, when one financial product in a synthetic instrument is an asset and another is a liability, the two do not offset each other. Consequently, they should not be presented on an entity’s balance sheet on a net basis, unless they meet specific offsetting criteria outlined by the aforementioned accounting standards.

Like synthetics, structured financial products are derivatives. Many are custom-designed bonds, some of which (over the years) have presented a number of problems to their buyers and holders. This is particularly true for those investors who are not so versatile in modern complex instruments and their further-out impact.

Typically, instead of receiving a fixed coupon or principal, a person or company holding a structured note will receive an amount adjusted according to a fairly sophisticated formula. Structured instruments lack transparency; the market, however, seems to like them, the proof being that the amount of money invested in structured notes continues to increase. One of many examples of structured products is the principal exchange-rate-linked security (PERLS). These derivative instruments target changes in currency rates. They are disguised to look like bonds, by structuring them as if they were debt instruments, making it feasible for investors who are not permitted to play in currencies to place bets on the direction of exchange rates.

For instance, instead of just repaying principal, a PERLS may multiply such principal by the change in the value of the dollar against the euro; or twice the change in the value of the dollar against the Swiss franc or the British pound. The fact that this repayment is linked to the foreign exchange rate of different currencies sees to it that the investor might be receiving a lot more than an interest rate on the principal alone – but also a lot less, all the way to capital attrition. (Even capital protection notes involve capital attrition since, in certain cases, no interest is paid over their, say, five-year life cycle.)
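A rough sketch of how such a redemption formula might look; the leverage multiplier, exchange rates, and principal below are hypothetical, and actual PERLS terms vary from issue to issue.

```python
# Hypothetical PERLS-style redemption: principal is scaled by the relative change
# in an exchange rate over the life of the note, possibly with leverage.
# fx quoted here as euros per dollar (an assumption for this example).

def perls_redemption(principal: float, fx_at_issue: float,
                     fx_at_maturity: float, leverage: float = 1.0) -> float:
    fx_change = (fx_at_maturity - fx_at_issue) / fx_at_issue
    return max(principal * (1.0 + leverage * fx_change), 0.0)   # cannot go below zero

# Dollar strengthens 10% against the euro, 2x leverage: holder receives 1200.
print(perls_redemption(1_000.0, fx_at_issue=0.90, fx_at_maturity=0.99, leverage=2.0))
# Dollar weakens 10%, same leverage: holder receives only 800 (capital attrition).
print(perls_redemption(1_000.0, fx_at_issue=0.90, fx_at_maturity=0.81, leverage=2.0))
```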

Structured note trading is a concept that has been subject to several interpretations, depending on the time frame within which the product has been brought to the market. Many traders tend to distinguish between three different generations of structured notes. The elder, or first generation, usually consists of structured instruments based on just one index, including

  • Bull market vehicles, such as inverse floaters and cap floaters
  • Bear market instruments, which are characteristically more leveraged, an example being the superfloaters

Bear market products became popular in 1993 and 1994. A typical superfloater might pay twice the London Interbank Offered Rate (LIBOR) minus 7 percent for two years. At currently prevailing rates, this means that the superfloater has a small coupon at the beginning that improves only if the LIBOR rises. Theoretically, a coupon that is below current market levels until the LIBOR goes higher is much harder to sell than a big coupon that gets bigger every time rates drop. Still, bear plays find customers.
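A toy sketch of the superfloater coupon quoted above (twice LIBOR minus 7 percent); the floor at zero and the rate path are assumptions for illustration.

```python
# Superfloater coupon: 2 * LIBOR - 7%, floored at zero (the floor is an assumption here).
def superfloater_coupon(libor: float) -> float:
    return max(2.0 * libor - 0.07, 0.0)

for libor in (0.03, 0.04, 0.05, 0.06):
    print(f"LIBOR = {libor:.2%}  ->  coupon = {superfloater_coupon(libor):.2%}")
# At 3% LIBOR the coupon is wiped out entirely; it only improves as LIBOR rises,
# which is exactly the bear-market bet described in the text.
```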

Second-generation structured notes are different types of exotic options; or, more precisely, they are yet more exotic than superfloaters, which are exotic enough in themselves. Serious risks are embedded in these instruments, and such risks have never been fully appreciated. Second-generation examples are

  • Range notes, with embedded binary or digital options
  • Quanto notes, which allow investors to take a bet on, say, sterling London Interbank Offered Rates, but get paid in dollars.

There are different versions of such instruments, like you-choose range notes for a bear market. Every quarter the investor has to choose the “range,” a job that requires considerable market knowledge and skill. For instance, if the range width is set to 100 basis points, the investor has to determine at the start of the period the high and low limits within that range, which is far from being a straight job.
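A sketch of the accrual logic behind such a note, under the simplifying assumption that the coupon accrues only on fixing days when the reference rate stays inside the investor-chosen 100-basis-point range; the rates, coupon, and day-count are all hypothetical.

```python
# You-choose range note, one quarter: the investor picks a 100 bp range at the
# start of the period; the coupon accrues only on days the reference rate fixes
# inside that range. All numbers are hypothetical.

def range_note_accrual(fixings, low, high, annual_coupon, notional, day_count=360.0):
    days_in_range = sum(1 for rate in fixings if low <= rate <= high)
    return notional * annual_coupon * days_in_range / day_count

fixings = [0.052, 0.054, 0.057, 0.061, 0.059, 0.055]   # daily reference-rate fixings
accrued = range_note_accrual(fixings, low=0.050, high=0.060,
                             annual_coupon=0.09, notional=1_000_000)
print(f"coupon accrued over these fixings: {accrued:,.2f}")   # 5 of 6 days in range
```

Picking the wrong 100-basis-point window, even by a little, kills the accrual for every day the rate fixes outside it, which is why the quarterly choice demands the market knowledge the text refers to.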

Surprisingly enough, there are investors who like this because sometimes they are given an option to change their mind; and they also figure their risk period is really only one quarter. In this, they are badly mistaken. In reality even for banks you-choose notes are much more difficult to hedge than regular range notes because, as very few people appreciate, the hedges are both

  • Dynamic
  • Imperfect

There are as well third-generation notes offering investors exposure to commodity or equity prices in a cross-category sense. Such notes usually appeal to a different class than fixed-income investors. For instance, third-generation notes are sometimes purchased by fund managers who are in the fixed-income market but want to diversify their exposure. Even though the increasing sophistication and lack of transparency of structured financial instruments see to it that they are too often misunderstood, and even though they are highly risky, a horde of equity-linked and commodity-linked notes are being structured and sold to investors; examples are LIBOR floaters designed so that the coupon is “LIBOR plus”.

The pros say that flexibly structured options can be useful to sophisticated investors seeking to manage particular portfolio and trading risks. However, as a result of exposure being assumed, and also because of the likelihood that there is no secondary market, transactions in flexibly structured options are not suitable for investors who are not

  • In a position to understand the behavior of their intrinsic value
  • Financially able to bear the risks embedded in them when worst comes to worst

It is the price of novelty, customization, and flexibility offered by synthetic and structured financial instruments that can be expressed in one four-letter word: risk. Risk taking is welcome when we know how to manage our exposure, but it can be a disaster when we don’t – hence the wisdom of learning, ahead of investing, the challenges posed by derivatives and how to be in charge of risk control.

Carnap, c-notions. Thought of the Day 87.0

A central distinction for Carnap is that between definite and indefinite notions. A definite notion is one that is recursive, such as “is a formula” and “is a proof of φ”. An indefinite notion is one that is non-recursive, such as “is an ω-consequence of PA” and “is true in Vω+ω”. This leads to a distinction between (i) the method of derivation (or d-method), which investigates the semi-definite (recursively enumerable) metamathematical notions, such as demonstrable, derivable, refutable, resoluble, and irresoluble, and (ii) the method of consequence (or c-method), which investigates the (typically) non-recursively enumerable metamathematical notions such as consequence, analytic, contradictory, determinate, and synthetic.

A language for Carnap is what we would today call a formal axiom system. The rules of the formal system are definite (recursive) and Carnap is fully aware that a given language cannot include its own c-notions. The logical syntax of a language is what we would today call metatheory. It is here that one formalizes the c-notions for the (object) language. From among the various c-notions Carnap singles out one as central, namely, the notion of (direct) consequence; from this c-notion all of the other c-notions can be defined in routine fashion.

We now turn to Carnap’s account of his fundamental notions, most notably, the analytic/synthetic distinction and the division of primitive terms into ‘logico-mathematical’ and ‘descriptive’. Carnap actually has two approaches. The first approach occurs in his discussion of specific languages – Languages I and II. Here he starts with a division of primitive terms into ‘logico-mathematical’ and ‘descriptive’ and upon this basis defines the c-notions, in particular the notions of being analytic and synthetic. The second approach occurs in the discussion of general syntax. Here Carnap reverses procedure: he starts with a specific c-notion – namely, the notion of direct consequence – and he uses it to define the other c-notions and draw the division of primitive terms into ‘logico-mathematical’ and ‘descriptive’.

In the first approach Carnap introduces two languages – Language I and Language II. The background languages (in the modern sense) of Language I and Language II are quite general – they include expressions that we would call ‘descriptive’. Carnap starts with a demarcation of primitive terms into ‘logico-mathematical’ and ‘descriptive’. The expressions he classifies as ‘logico-mathematical’ are exactly those included in the modern versions of these systems; the remaining expressions are classified as ‘descriptive’. Language I is a version of Primitive Recursive Arithmetic and Language II is a version of finite type theory built over Peano Arithmetic. The d-notions for these languages are the standard proof-theoretic ones.

For Language I Carnap starts with a consequence relation based on two rules – (i) the rule that allows one to infer φ if T ⊢ φ (where T is some fixed Σ⁰₁-complete formal system) and (ii) the ω-rule. It is then easily seen that one has a complete theory for the logico-mathematical fragment, that is, for any logico-mathematical sentence φ, either φ or ¬φ is a consequence of the null set. The other c-notions are then defined in the standard fashion. For example, a sentence is analytic if it is a consequence of the null set; contradictory if its negation is analytic; and so on.

For Language II Carnap starts by defining analyticity. His definition is a notational variant of the Tarskian truth definition with one important difference – namely, it involves an asymmetric treatment of the logico-mathematical and descriptive expressions. For the logico-mathematical expressions his definition really just is a notational variant of the Tarskian truth definition. But descriptive expressions must pass a more stringent test to count as analytic – they must be such that if one replaces all descriptive expressions in them by variables of the appropriate type, then the resulting logico-mathematical expression is analytic, that is, true. In other words, to count as analytic a descriptive expression must be a substitution-instance of a general logico-mathematical truth. With this definition in place the other c-notions are defined in the standard fashion.

The content of a sentence is defined to be the set of its non-analytic consequences. It then follows immediately from the definitions that logico-mathematical sentences (of both Language I and Language II) are analytic or contradictory and (assuming consistency) that analytic sentences are without content.

In the second approach, for a given language, Carnap starts with an arbitrary notion of direct consequence and from this notion he defines the other c-notions in the standard fashion. More importantly, in addition to defining the other c-notion, Carnap also uses the primitive notion of direct consequence (along with the derived c-notions) to effect the classification of terms into ‘logico-mathematical’ and ‘descriptive’. The guiding idea is that “the formally expressible distinguishing peculiarity of logical symbols and expressions [consists] in the fact that each sentence constructed solely from them is determinate”. Howsoever the guiding idea is implemented the actual division between “logico-mathematical” and “descriptive” expressions that one obtains as output is sensitive to the scope of the direct consequence relation with which one starts.

With this basic division in place, Carnap can now draw various derivative divisions, most notably, the division between analytic and synthetic statements: Suppose φ is a consequence of Γ. Then φ is said to be an L-consequence of Γ if either (i) φ and the sentences in Γ are logico-mathematical, or (ii) letting φ’ and Γ’ be the result of unpacking all descriptive symbols, then for every result φ” and Γ” of replacing every (primitive) descriptive symbol by an expression of the same genus, maintaining equal expressions for equal symbols, we have that φ” is a consequence of Γ”. Otherwise φ is a P-consequence of Γ. This division of the notion of consequence into L-consequence and P-consequence induces a division of the notion of demonstrable into L-demonstrable and P-demonstrable and the notion of valid into L-valid and P-valid and likewise for all of the other d-notions and c-notions. The terms ‘analytic’, ‘contradictory’, and ‘synthetic’ are used for ‘L-valid’, ‘L-contravalid’, and ‘L-indeterminate’.

It follows immediately from the definitions that logico-mathematical sentences are analytic or contradictory and that analytic sentences are without content. The trouble with the first approach is that the definitions of analyticity that Carnap gives for Languages I and II are highly sensitive to the original classification of terms into ‘logico-mathematical’ and ‘descriptive’. And the trouble with the second approach is that the division between ‘logico-mathematical’ and ‘descriptive’ expressions (and hence division between ‘analytic’ and ‘synthetic’ truths) is sensitive to the scope of the direct consequence relation with which one starts. This threatens to undermine Carnap’s thesis that logico-mathematical truths are analytic and hence without content. 

In the first approach, the original division of terms into ‘logico-mathematical’ and ‘descriptive’ is made by stipulation and if one alters this division one thereby alters the derivative division between analytic and synthetic sentences. For example, consider the case of Language II. If one calls only the primitive terms of first-order logic ‘logico-mathematical’ and then extends the language by adding the machinery of arithmetic and set theory, then, upon running the definition of ‘analytic’, one will have the result that true statements of first-order logic are without content while (the distinctive) statements of arithmetic and set theory have content. For another example, if one takes the language of arithmetic, calls the primitive terms ‘logico-mathematical’ and then extends the language by adding the machinery of finite type theory, calling the basic terms ‘descriptive’, then, upon running the definition of ‘analytic’, the result will be that statements of first-order arithmetic are analytic or contradictory while (the distinctive) statements of second- and higher-order arithmetic are synthetic and hence have content. In general, by altering the input, one alters the output, and Carnap adjusts the input to achieve his desired output.

In the second approach, there are no constraints on the scope of the direct consequence relation with which one starts and if one alters it one thereby alters the derivative division between ‘logico-mathematical’ and ‘descriptive’ expressions. Logical symbols and expressions have the feature that sentences composed solely of them are determinate. The trouble is that the resulting division of terms into ‘logico-mathematical’ and ‘descriptive’ will be highly sensitive to the scope of the direct consequence relation with which one starts. For example, let S be first-order PA and for the direct consequence relation take “provable in PA”. Under this assignment Fermat’s Last Theorem will be deemed descriptive, synthetic, and to have non-trivial content. For an example at the other extreme, let S be an extension of PA that contains a physical theory and let the notion of direct consequence be given by a Tarskian truth definition for the language. Since in the metalanguage one can prove that every sentence is true or false, every sentence will be either analytic (and so have null content) or contradictory (and so have total content). To overcome such counter-examples and get the classification that Carnap desires one must ensure that the consequence relation is (i) complete for the sublanguage consisting of expressions that one wants to come out as ‘logico-mathematical’ and (ii) not complete for the sublanguage consisting of expressions that one wants to come out as ‘descriptive’. Once again, by altering the input, one alters the output.

Carnap merely provides us with a flexible piece of technical machinery involving free parameters that can be adjusted to yield a variety of outcomes concerning the classifications of analytic/synthetic, contentful/non-contentful, and logico-mathematical/descriptive. In his own case, he has adjusted the parameters in such a way that the output is a formal articulation of his logicist view of mathematics that the truths of mathematics are analytic and without content. And one can adjust them differently to articulate a number of other views, for example, the view that the truths of first-order logic are without content while the truths of arithmetic and set theory have content. The point, however, is that we have been given no reason for fixing the parameters one way rather than another. The distinctions are thus not principled distinctions. It is trivial to prove that mathematics is trivial if one trivializes the claim.

Carnap is perfectly aware that to define c-notions like analyticity one must ascend to a stronger metalanguage. However, there is a distinction that he appears to overlook, namely, the distinction between (i) having a stronger system S that can define ‘analytic in S’ and (ii) having a stronger system S that can, in addition, evaluate a given statement of the form ‘φ is analytic in S’. It is an elementary fact that two systems S1 and S2 can employ the same definition (from an intensional point of view) of ‘analytic in S’ (using either the definition given for Language I or Language II) but differ on their evaluation of ‘φ is analytic in S’ (that is, differ on the extension of ‘analytic in S’). Thus, to determine whether ‘φ is analytic in S’ holds one needs to access much more than the “syntactic design” of φ – in addition to ascending to an essentially richer metalanguage one must move to a sufficiently strong system to evaluate ‘φ is analytic in S’.

In fact, to answer ‘Is φ analytic in Language I?’ is just to answer φ and, in the more general setting, to answer all questions of the form ‘Is φ analytic in S?’ (for various mathematical φ and S, where ‘analytic’ is defined as Carnap defines it for Language II) is just to answer all questions of mathematics. The same, of course, applies to the c-notion of consequence. So, when in first stating the Principle of Tolerance Carnap tells us that we can choose our system S arbitrarily and that ‘no question of justification arises at all, but only the question of the syntactical consequences to which one or other of the choices leads’, it is the c-notion of consequence that he has in mind.

Malthusian Catastrophe.

As long as wealth is growing exponentially, it does not matter that some of the surplus labor is skimmed. If the production of the laborers is growing x% and their wealth grows y% – even if y% < x%, and the wealth of the capital grows faster, z%, with z% > x% – everybody is happy. The workers minimally increased their wealth, even if their productivity has increased tremendously. Nearly all increased labor production has been confiscated by the capital, exorbitant bonuses of bank managers are an example. (Managers, by the way, by definition, do not ’produce’ anything, but only help skim the production of others; it is ‘work’, but not ‘production’. As long as the skimming [money in] is larger than the cost of their work [money out], they will be hired by the capital. For instance, if they can move the workers into producing more for equal pay. If not, out they go).

If the economy is growing at a steady pace (x%), resulting in exponential growth (1 + x/100)^n, effectively today’s life can be paid with (promises of) tomorrow’s earnings, ‘borrowing from the future’. (In a shrinking economy the opposite occurs: paying tomorrow’s life with today’s earnings, and having nothing to live on today.)

Let’s put that in an equation. The economy of today Ei is defined in terms of growth of economy itself, the difference between today’s economy and tomorrow’s economy, Ei+1 − Ei,

Ei = α(Ei+1 − Ei) —– (1)

with α related to the growth rate, GR ≡ (Ei+1 − Ei)/Ei = 1/α. In a time-differential equation:

E(t) = αdE(t)/dt —– (2)

which has as solution

E(t) = E0 e^{t/α} —– (3)

exponential growth.
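A quick numerical check, using the definitions above, that the discrete rule Ei = α(Ei+1 − Ei) compounds to the exponential solution E(t) = E0 e^{t/α} when the time step is small; the values of E0 and α are arbitrary.

```python
import math

# Discrete growth: each small step dt adds dE = (E / alpha) dt, per equation (2).
E0, alpha = 1.0, 20.0
total_time, steps = 10.0, 10_000
dt = total_time / steps

E = E0
for _ in range(steps):
    E += E * dt / alpha

print(E, E0 * math.exp(total_time / alpha))   # the two values agree closely
```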

The problem is that eternal growth of x% is not possible. Our entire society depends on continuous growth; it is the fiber of our system. When it stops, everything collapses: if the derivative dE(t)/dt becomes negative, the economy itself becomes negative and we start destroying things (E < 0) instead of producing them. If the growth gets relatively smaller, E itself gets smaller, assuming a steady borrowing-from-tomorrow factor α (second equation above). But that is a contradiction; if E gets smaller, the derivative must be negative. The only consistent observation is that if E shrinks, E immediately becomes negative! This is what is called a Malthusian Catastrophe.

Now we seem to be saturating in our production; we no longer have x% growth, but something closer to 0. The capital, however, has inertia (viz. the continuing culture in the financial world of huge bonuses, often justified as “well, that is the market. What can we do?!”). The capital continues to increase its skimming of the surplus labor with the same z%. The laborers, therefore, now see a decrease of wealth close to z%. (Note that the capital cannot have a decline, a negative z%, because it would refuse to do something if that something does not make a profit.)

Many things that we took for granted before, free health care for all, early pension, free education, cheap or free transport (no road tolls, etc.) are more and more under discussion, with an argument that they are “becoming unaffordable”. This label is utter nonsense, when you think of it, since

1) Before, apparently, they were affordable.

2) We have increased productivity of our workers.

1 + 2 = 3) Things are becoming more and more affordable. Unless, that is, they are becoming unaffordable for some (the workers) and not for others (the capitalists).

It might well be that soon we discover that living is unaffordable. The new money M’ in Marx’s equation is used as a starting point in a new cycle M → M’. The eternal cycle causes condensation of wealth to the capital, away from the labor power. M keeps growing and growing. Anything that does not accumulate capital, M’ – M < 0, goes bankrupt. Anything that does not grow fast enough, M’ – M ≈ 0, is bought by something that does, reconfigured to have M’ – M large again. Note that these reconfigurations – optimizations of skimming (the laborers never profit from the reconfigurations; they are rather being sacked as a result of them) – are presented by the media as something good, where words such as ‘increased synergy’ are used to defend mergers, etc. It alludes to the sponsors of the messages coming to us. Next time you read the word ‘synergy’ in these communications, just replace it with ‘fleecing’.

The capital actually ‘refuses’ to do something if it does not make profit. If M’ is not bigger than M in a step, the step would simply not be done, implying also no Labour Power used and no payment for Labour Power. Ignoring for the moment philanthropists, in capitalistic Utopia capital cannot but grow. If the economy is not growing, it is therefore always at the cost of labor! Humans, namely, do not have this option of not doing things, because “better to get 99 paise while living costs 1 rupee, i.e., ‘loss’, than get no paisa at all while living still costs one rupee (haha, excuse me the folly of quixotic living!)”. Death by slow starvation is chosen before rapid death.

In an exponentially growing system, everything is OK; capital grows and the reward on labor as well. When the economy stagnates, only the labor power (humans) pays the price. A point of revolution is reached when the skimming of Labour Power is so big that this Labour Power (humans) cannot keep itself alive. Famous is the situation of Marie-Antoinette (representing the capital), wife of King Louis XVI of France, who responded to the outcry of the public (Labour Power) demanding bread (sic!) by saying “They do not have bread? Let them eat cake!” A revolution of the labor power is unavoidable in a capitalist system when it reaches saturation, because the unavoidable increment of the capital is paid for by the reduction of wealth of the labor power. That is a mathematical certainty.

Conjuncted: Mispricings Happened in the Past do not Influence the Derivative Price: Black-Scholes (BS) Analysis and Arbitrage-Free Financial Economics. Note Quote.

It can be shown that the probability (up to a normalization constant) of the trajectory R(·,·) has the form:

P[R(·,·)] ∼ exp[−(1/2) ∫₀^∞ dt dt′ dS dS′ R(t, S) K^{−1}(t, S|t′, S′) R(t′, S′)] —– (1)

where the kernel of the operator K is defined as:

K(t, S|t′, S′) = θ(T − t) θ(T − t′) ∫₀^∞ dτ ds f(τ) θ(t − τ) θ(t′ − τ) e^{−λ(t + t′ − 2τ)} × P(t, S|τ, s) P(t′, S′|τ, s) —– (2)

It is easy to see that the kernel is of order 1/λ and vanishes as λ → ∞. Equation 2, in particular, results in the equality for the correlation function:

⟨R(t, S) R(t′, S′)⟩ = Σ² · K(t, S|t′, S′) —– (3)

Black-Scholes (BS) Analysis and Arbitrage-Free Financial Economics

The Black-Scholes (BS) analysis of derivative pricing is one of the most beautiful results in financial economics. Several assumptions lie at the basis of the BS analysis, such as the quasi-Brownian character of the underlying price process, constant volatility, and the absence of arbitrage.

Let us denote by V(t, S) the price of a derivative at time t conditional on the underlying asset price being equal to S. We assume that the underlying asset price follows a geometric Brownian motion,

dS/S = μdt + σdW —– (1)

with some average return μ and volatility σ. They can be kept constant or be arbitrary functions of S and t. The symbol dW stands for the standard Wiener process. To price the derivative one forms a portfolio which consists of the derivative and a short position in ∆ units of the underlying asset, so that the price of the portfolio is equal to Π:

Π = V − ∆S —– (2)

The change in the portfolio price during a time step dt can be written as

dΠ = dV − ∆dS = (∂V/∂t + (σ²S²/2) ∂²V/∂S²) dt + (∂V/∂S − ∆) dS —– (3)

from Itô’s lemma. We can now choose the number of underlying asset units ∆ to be equal to ∂V/∂S in order to cancel the second term on the right-hand side of the last equation. Since, after this cancellation, there are no risky contributions (i.e. there is no term proportional to dS), the portfolio is risk-free and hence, in the absence of arbitrage, its price will grow with the risk-free interest rate r:

dΠ = rΠdt —– (4)

or, in other words, the price of the derivative V(t,S) shall obey the Black-Scholes equation:

∂V/∂t + (σ²S²/2) ∂²V/∂S² + rS ∂V/∂S − rV = 0 —– (5)

In what follows we use this equation in the following operator form:

LBSV = 0,  LBS = ∂/∂t + (σ²S²/2) ∂²/∂S² + rS ∂/∂S − r —– (6)
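As a sanity check on the operator form (6), the sketch below applies LBS, via finite differences, to the closed-form Black-Scholes price of a European call and confirms that the residual is close to zero; the strike, rate, volatility, and grid step are arbitrary illustrative choices.

```python
import math

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(t: float, S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Closed-form Black-Scholes price of a European call (no dividends)."""
    tau = T - t
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * norm_cdf(d1) - K * math.exp(-r * tau) * norm_cdf(d2)

def L_BS(V, t, S, r, sigma, h=1e-3):
    """Finite-difference evaluation of the Black-Scholes operator applied to V."""
    dV_dt = (V(t + h, S) - V(t - h, S)) / (2 * h)
    dV_dS = (V(t, S + h) - V(t, S - h)) / (2 * h)
    d2V_dS2 = (V(t, S + h) - 2 * V(t, S) + V(t, S - h)) / h ** 2
    return dV_dt + 0.5 * sigma ** 2 * S ** 2 * d2V_dS2 + r * S * dV_dS - r * V(t, S)

K, T, r, sigma = 100.0, 1.0, 0.05, 0.2
V = lambda t, S: bs_call(t, S, K, T, r, sigma)
print(L_BS(V, t=0.3, S=105.0, r=r, sigma=sigma))   # close to zero, up to discretization error
```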

To formulate the model we return to Eqn (1). Let us imagine that at some moment of time τ < t a fluctuation of the return (an arbitrage opportunity) appeared in the market. It happened when the price of the underlying stock was S′ ≡ S(τ). We then denote this instantaneous arbitrage return as ν(τ, S′). Arbitragers would react to this circumstance and act in such a way that the arbitrage gradually disappears and the market returns to its equilibrium state, i.e. the absence of arbitrage. For small enough fluctuations it is natural to assume that the arbitrage return R (in the absence of other fluctuations) evolves according to the following equation:

dR/dt = −λR,   R(τ) = ν(τ,S′) —– (7)

with some parameter λ which is characteristic for the market. This parameter can be either estimated from a microscopic theory or can be found from the market using an analogue of the fluctuation-dissipation theorem. The fluctuation-dissipation theorem states that the linear response of a given system to an external perturbation is expressed in terms of fluctuation properties of the system in thermal equilibrium. This theorem may be represented by a stochastic equation describing the fluctuation, which is a generalization of the familiar Langevin equation in the classical theory of Brownian motion. In the last case the parameter λ can be estimated from the market data as

λ = −1/(t − t′) · log[⟨(LBSV/(V − S∂V/∂S))(t) · (LBSV/(V − S∂V/∂S))(t′)⟩market / ⟨(LBSV/(V − S∂V/∂S))²(t)⟩market] —– (8)

and may well be a function of time and the price of the underlying asset. We consider λ as a constant to get simple analytical formulas for derivative prices. The generalization to the case of time-dependent parameters is straightforward.
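A minimal sketch of equation (7) on its own: a single arbitrage fluctuation relaxing back to equilibrium, integrated numerically and compared with the closed-form decay ν e^{−λ(t−τ)}; the values of λ and ν are arbitrary illustrative choices.

```python
import math

# Relaxation of one arbitrage fluctuation: dR/dt = -lambda * R, with R(tau) = nu.
lam, nu, tau = 2.5, 0.01, 0.0     # market reaction rate and initial mispricing (illustrative)
dt, t_end = 1e-3, 2.0

R, t = nu, tau
while t < t_end:
    R += -lam * R * dt            # explicit Euler step
    t += dt

print(R, nu * math.exp(-lam * (t_end - tau)))   # numerical vs closed-form decay (Euler step error aside)
```

The faster the market reacts (the larger λ), the sooner R dies out, which is why the model collapses back to the pure Black-Scholes analysis in the λ → ∞ limit mentioned below.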

The solution of Equation 7 gives us R(t, S) = ν(τ, S) e^{−λ(t−τ)} which, after summing over all possible fluctuations with the corresponding frequencies, leads us to the following expression for the arbitrage return at time t:

R(t, S) = ∫₀^t dτ ∫₀^∞ dS′ P(t, S|τ, S′) e^{−λ(t−τ)} ν(τ, S′),  t < T —– (9)

where T is the expiration date for the derivative contract started at time t = 0 and the function P (t, S|τ, S′) is the conditional probability for the underlying price. To specify the stochastic process ν(t,S) we assume that the fluctuations at different times and underlying prices are independent and form the white noise with a variance Σ2 · f (t):

⟨ν(t, S)⟩ = 0,  ⟨ν(t, S) ν(t′, S′)⟩ = Σ² · θ(T − t) f(t) δ(t − t′) δ(S − S′) —– (10)

The function f(t) is introduced here to smooth out the transition to the zero virtual arbitrage at the expiration date. The quantity Σ2 · f (t) can be estimated from the market data as:

Σ²/(2λ) · f(t) = ⟨(LBSV/(V − S∂V/∂S))²(t)⟩market —– (11)

and has to vanish as time tends to the expiration date. Since we introduced the stochastic arbitrage return R(t, S), equation 4 has to be substituted with the following equation:

dΠ = [r + R(t, S)]Πdt, which can be rewritten as

LBSV = R(t, S) (V − S∂V/∂S) —– (12)

using the operator LBS. 

It is worth noting that the model reduces to the pure BS analysis in the case of infinitely fast market reaction, i.e. λ → ∞. It also returns to the BS model when there are no arbitrage opportunities at all, i.e. when Σ = 0. In the presence of the random arbitrage fluctuations R(t, S), the only objects which can be calculated are the average value and other higher moments of the derivative price.

Feed Forward Perceptron and Philosophical Representation

In a network that does not tie any node to a specific concept, the hidden layer is populated by neurons in an architecture that connects each neuron with every input-layer node. What happens to information passed into the network is interesting from the point of view of its distribution over all of the neurons that populate the hidden layer. This distribution over the domain strips any particular neuron within the hidden layer of any privileged status, which in turn means no privileged (ontological) status for any weights and nodes either. With the absence of any privileged status accorded to nodes, weights, and even neurons in the hidden layer, representation comes to mean something entirely different from what it normally meant in semantic networks: the representations are not representative of any coherent concept. Such a scenario is representational with sub-symbolic features, and since all the weights have a share in participation, each time the network is faced with something like pattern recognition, the representation is what is called a distributed representation. The best example of such a distributed representation is the multilayer perceptron, a feedforward artificial neural network that maps sets of input data onto a set of appropriate outputs, and finds use in image recognition, pattern recognition, and even speech recognition.

A multilayer perceptron is characterized by each neuron as using a nonlinear activation function to model firing of biological neurons in the brain. The activation functions for the current application are sigmoids, and the equations are:

yi = Ф(vi) = tanh(vi)  and  yi = Ф(vi) = (1 + e^{−vi})^{−1}

where, yi is the output of the ith neuron, and vi is the weighted sum of the input synapses, and the former function is a hyperbolic tangent in the range of -1 to +1, and the latter is equivalent in shape but ranging from 0 to +1. Learning takes place through backpropagation. The connection weights are changed, after adjustments are made in the output compared with the expected result. To be on the technical side, let us see how backpropagation is responsible for learning to take place in the multilayer perceptron.

Error in the output node j in the nth data point is represented by,

ej (n) = dj (n) – yj (n),

where d is the target value and y is the value produced by the perceptron. Corrections to the weights of the nodes are made so as to minimize the error in the entire output, given by

ξ(n) = 0.5 ∑j ej²(n)

With the help of gradient descent, the change in each weight happens to be given by,

∆ wji (n)=−η * (δξ(n)/δvj (n)) * yi (n)

where yi is the output of the previous neuron, and η is the learning rate, which is carefully selected to make sure that the weights converge to a response quickly enough, without undergoing any sort of oscillation. Gradient descent is based on the observation that if the real-valued function F(x) is defined and differentiable in a neighborhood of a point ‘a’, then F(x) decreases fastest if one goes from ‘a’ in the direction of the negative gradient of F at ‘a’.

The derivative to be calculated depends on the local induced field vj, that is susceptible to variations. The derivative is simplified for the output node,

− (δξ(n)/δvj (n)) = ej (n) Ф'(vj (n))

where Ф′ is the first-order derivative of the activation function Ф, and does not vary. The analysis is more difficult for a change in the weights to a hidden node, but it can be shown that the relevant derivative is

− (δξ(n)/δvj(n)) = Ф′(vj(n)) ∑k − (δξ(n)/δvk(n)) wkj(n)

which depends on the change of weights of the kth nodes, which represent the output layer. So to change the hidden layer weights, we must first change the output layer weights according to the derivative of the activation function, and in this sense the algorithm represents a backpropagation of the activation function. The perceptron as a distributed representation is gaining wider application in AI projects, but since biological knowledge is prone to change over time, its biological plausibility is doubtful. A major drawback, despite scoring over semantic networks or symbolic models, is the loose modeling of neurons and synapses. At the same time, backpropagation multilayer perceptrons do not too closely resemble brain-like structures, and for near-complete efficiency they require synapses to be varying. A typical multilayer perceptron would look something like the figure below,

[Figure: a multilayer feedforward network with weights]

where (x1, …, xp) are the predictor variable values as presented to the input layer. Note that the standardized values for these variables are in the range −1 to +1. Wji is the weight that multiplies each of the values coming from the input neurons, and uj is the combined value obtained by adding the resulting weighted values in the hidden layer. The weighted sum is fed into a transfer function of a sigmoidal/non-linear kind, σ, that outputs a value hj, before getting distributed to the output layer. Arriving at a neuron in the output layer, the value from each hidden layer neuron is multiplied by a weight wkj, and the resulting weighted values are added together, producing a combined value vj. This weighted sum vj is fed into a transfer function of a sigmoidal/non-linear kind, σ, that outputs a value yk; these are the outputs of the network.
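To tie the notation together, here is a compact NumPy sketch of a one-hidden-layer perceptron trained by backpropagation on a toy problem (XOR). The tanh hidden layer, sigmoid output, error ej = dj − yj, and learning rate η follow the equations above; the data, layer sizes, and initialization are illustrative choices, and the trained outputs depend on the random initialization.

```python
import numpy as np

# One-hidden-layer perceptron trained by backpropagation on XOR (illustrative only).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # predictor values
D = np.array([[0], [1], [1], [0]], dtype=float)               # target values d

W1 = rng.normal(scale=1.0, size=(2, 4)); b1 = np.zeros(4)     # input -> hidden weights w_ji
W2 = rng.normal(scale=1.0, size=(4, 1)); b2 = np.zeros(1)     # hidden -> output weights w_kj
eta = 0.5                                                      # learning rate

sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

for epoch in range(10_000):
    # Forward pass: weighted sums v, then the sigmoidal transfer functions.
    h = np.tanh(X @ W1 + b1)          # hidden outputs, phi(v) = tanh(v)
    y = sigmoid(h @ W2 + b2)          # network outputs, phi(v) = (1 + e^{-v})^{-1}

    # Backward pass: local gradients -d(xi)/d(v) at each layer.
    e = D - y                                         # e_j(n) = d_j(n) - y_j(n)
    delta_out = e * y * (1.0 - y)                     # output layer: e * phi'(v)
    delta_hid = (delta_out @ W2.T) * (1.0 - h ** 2)   # hidden layer, backpropagated

    # Weight updates: delta_w = eta * delta * (output of the previous layer).
    W2 += eta * h.T @ delta_out; b2 += eta * delta_out.sum(axis=0)
    W1 += eta * X.T @ delta_hid; b1 += eta * delta_hid.sum(axis=0)

print(np.round(sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2), 3))   # should approach [0, 1, 1, 0]
```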