Bullish or Bearish. Note Quote.


The term spread refers to the difference in premiums between the purchase and sale of options. An option spread is the simultaneous purchase of one or more options contracts and the sale of an equivalent number of options contracts in a different series of the same class of options. A spread could involve the same underlying: 

  •  Buying and selling calls, or 
  •  Buying and selling puts.

Combining puts and calls into groups of two or more makes it feasible to design derivatives with interesting payoff profiles. The profit and loss outcomes depend on the options used (puts or calls); the positions taken (long or short); whether their strike prices are identical or different; and the similarity or difference of their exercise dates. Among directional positions are bullish vertical call spreads, bullish vertical put spreads, bearish vertical call spreads, and bearish vertical put spreads. 

If the long position has a higher premium than the short position, this is known as a debit spread, and the investor will be required to deposit the difference in premiums. If the long position has a lower premium than the short position, this is a credit spread, and the investor will be allowed to withdraw the difference in premiums. The spread is even if the premiums on the two sides are the same. 

A potential loss in an option spread is determined by two factors: 

  • Strike price 
  • Expiration date 

If the strike price of the long call is greater than the strike price of the short call, or if the strike price of the long put is less than the strike price of the short put, a margin is required because adverse market moves can cause the short option to suffer a loss before the long option can show a profit.

A margin is also required if the long option expires before the short option. The reason is that once the long option expires, the trader holds an unhedged short position. A good way of looking at margin requirements is that they foretell potential loss. Here, in a nutshell, are the main option spreads.

A calendar, horizontal, or time spread is the simultaneous purchase and sale of options of the same class with the same exercise prices but with different expiration dates. A vertical, or price or money, spread is the simultaneous purchase and sale of options of the same class with the same expiration date but with different exercise prices. A bull, or call, spread is a type of vertical spread that involves the purchase of the call option with the lower exercise price while selling the call option with the higher exercise price. The result is a debit transaction because the lower exercise price will have the higher premium.

  • The maximum risk is the net debit: the long option premium minus the short option premium. 
  • The maximum profit potential is the difference in the strike prices minus the net debit. 
  • The breakeven is equal to the lower strike price plus the net debit. 

A trader will typically buy a vertical bull call spread when he is mildly bullish. Essentially, he gives up unlimited profit potential in return for reducing his risk. In a vertical bull call spread, the trader is expecting the spread premium to widen because the lower strike price call comes into the money first. 
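As a concrete check of the bullets above, here is a minimal Python sketch of the bull call spread arithmetic at expiration; the strikes and premiums are hypothetical, chosen only for illustration.

```python
# Bull (vertical) call spread: long the lower-strike call, short the higher-strike call.
# Hypothetical inputs: 100/105 strikes, premiums 4.00 and 1.50 (net debit 2.50).
long_strike, short_strike = 100.0, 105.0
long_premium, short_premium = 4.00, 1.50

net_debit = long_premium - short_premium                  # maximum risk
max_profit = (short_strike - long_strike) - net_debit     # strike difference minus net debit
breakeven = long_strike + net_debit                       # lower strike plus net debit

def pnl_at_expiry(spot):
    """Per-share P&L of the spread at expiration."""
    long_call = max(spot - long_strike, 0.0)
    short_call = -max(spot - short_strike, 0.0)
    return long_call + short_call - net_debit

print(f"max risk   = {net_debit:.2f}")
print(f"max profit = {max_profit:.2f}")
print(f"breakeven  = {breakeven:.2f}")
for spot in (95.0, breakeven, 110.0):
    print(f"P&L with the stock at {spot:.2f}: {pnl_at_expiry(spot):+.2f}")
```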

Vertical spreads are the more common of the directional strategies, and they may be bullish or bearish to reflect the holder’s view of the market’s anticipated direction. Bullish vertical put spreads are a combination of a long put with a low strike and a short put with a higher strike. Because the short position is struck closer to the money, this generates a premium credit. 

Bearish vertical call spreads are the inverse of bullish vertical call spreads. They are created by combining a short call with a low strike and a long call with a higher strike. Bearish vertical put spreads are the inverse of bullish vertical put spreads, generated by combining a short put with a low strike and a long put with a higher strike. This is a bearish position taken when a trader or investor expects the market to fall. 

The bull or sell put spread is a type of vertical spread involving the purchase of a put option with the lower exercise price and sale of a put option with the higher exercise price. Theoretically, this is the same action that a bull call spreader would take. The difference between a call spread and a put spread is that the net result will be a credit transaction because the higher exercise price will have the higher premium. 

  • The maximum risk is the difference in the strike prices minus the net credit. 
  • The maximum profit potential equals the net credit. 
  • The breakeven equals the higher strike price minus the net credit. 

The bear or sell call spread involves selling the call option with the lower exercise price and buying the call option with the higher exercise price. The net result is a credit transaction because the lower exercise price will have the higher premium.

A bear put spread (or buy spread) involves selling the put option with the lower exercise price and buying the put option with the higher exercise price. This is the same action that a bear call spreader would take. The difference between a call spread and a put spread, however, is that the net result will be a debit transaction because the higher exercise price will have the higher premium. 

  • The maximum risk is equal to the net debit. 
  • The maximum profit potential is the difference in the strike prices minus the net debit. 
  • The breakeven equals the higher strike price minus the net debit.

An investor or trader would buy a vertical bear put spread because he or she is mildly bearish, giving up an unlimited profit potential in return for a reduction in risk. In a vertical bear put spread, the trader is expecting the spread premium to widen because the higher strike price put comes into the money first. 
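The same arithmetic can be checked for any two-leg vertical spread. The sketch below is a small, generic payoff calculator (strikes and premiums again hypothetical), used here to confirm the bear put spread bullets above.

```python
# Generic two-leg vertical spread payoff at expiration, checked against the
# bear put spread bullets above. Strikes and premiums are hypothetical.
def vertical_spread_pnl(spot, legs):
    """legs: (kind, strike, premium, qty) tuples, kind in {'call', 'put'},
    qty +1 for a long leg and -1 for a short leg. Returns per-share P&L."""
    pnl = 0.0
    for kind, strike, premium, qty in legs:
        intrinsic = max(spot - strike, 0.0) if kind == "call" else max(strike - spot, 0.0)
        pnl += qty * (intrinsic - premium)
    return pnl

# Bear put spread: long the 105 put for 6.00, short the 100 put for 3.50 (net debit 2.50).
bear_put = [("put", 105.0, 6.00, +1), ("put", 100.0, 3.50, -1)]
print("P&L at 120 (both puts expire worthless, max risk)  :", vertical_spread_pnl(120.0, bear_put))
print("P&L at 90 (both puts deep in the money, max profit):", vertical_spread_pnl(90.0, bear_put))
print("P&L at the breakeven 105 - 2.50 = 102.50           :", vertical_spread_pnl(102.50, bear_put))
```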

In conclusion, investors and traders who are bullish on the market will either buy a bull call spread or sell a bull put spread. But those who are bearish on the market will either buy a bear put spread or sell a bear call spread. When the investor pays more for the long option than she receives in premium for the short option, then the spread is a debit transaction. In contrast, when she receives more than she pays, the spread is a credit transaction. Credit spreads typically require a margin deposit. 


Knowledge Limited for Dummies….Didactics.


Bertrand Russell, with Alfred North Whitehead, aimed in Principia Mathematica to demonstrate that “all pure mathematics follows from purely logical premises and uses only concepts defined in logical terms.” Its goal was to provide a formalized logic for all mathematics, to develop the full structure of mathematics where every premise could be proved from a clear set of initial axioms.

Russell observed of the dense and demanding work, “I used to know of only six people who had read the later parts of the book. Three of those were Poles, subsequently (I believe) liquidated by Hitler. The other three were Texans, subsequently successfully assimilated.” The complex mathematical symbols of the manuscript required it to be written by hand, and its sheer size – when it was finally ready for the publisher, Russell had to hire a panel truck to send it off – made it impossible to copy. Russell recounted that “every time that I went out for a walk I used to be afraid that the house would catch fire and the manuscript get burnt up.”

Momentous though it was, the greatest achievement of Principia Mathematica was realized two decades after its completion, when it provided the fodder for the metamathematical enterprises of an Austrian, Kurt Gödel. Although Gödel did face the risk of being liquidated by Hitler (he therefore fled to the Institute for Advanced Study at Princeton), he was neither a Pole nor a Texan. In 1931, he wrote a treatise entitled On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which demonstrated that the goal Russell and Whitehead had so single-mindedly pursued was unattainable.

The flavor of Gödel’s basic argument can be captured in the contradictions contained in a schoolboy’s brainteaser. A sheet of paper has the words “The statement on the other side of this paper is true” written on one side and “The statement on the other side of this paper is false” on the reverse. The conflict isn’t resolvable. Or, even more trivially, take a statement like: “This statement is unprovable.” You cannot prove the statement is true, because doing so would contradict it. If you prove the statement is false, then that means its negation is true – it is provable – which again is a contradiction.

The key point of contradiction for these two examples is that they are self-referential. This same sort of self-referentiality is the keystone of Gödel’s proof, where he uses statements that imbed other statements within them. This problem did not totally escape Russell and Whitehead. By the end of 1901, Russell had completed the first round of writing Principia Mathematica and thought he was in the homestretch, but was increasingly beset by these sorts of apparently simple-minded contradictions falling in the path of his goal. He wrote that “it seemed unworthy of a grown man to spend his time on such trivialities, but . . . trivial or not, the matter was a challenge.” Attempts to address the challenge extended the development of Principia Mathematica by nearly a decade.

Yet Russell and Whitehead had, after all that effort, missed the central point. Like granite outcroppings piercing through a bed of moss, these apparently trivial contradictions were rooted in the core of mathematics and logic, and were only the most readily manifest examples of a limit to our ability to structure formal mathematical systems. Just four years before Gödel had defined the limits of our ability to conquer the intellectual world of mathematics and logic with the publication of his Undecidability Theorem, the German physicist Werner Heisenberg’s celebrated Uncertainty Principle had delineated the limits of inquiry into the physical world, thereby undoing the efforts of another celebrated intellect, the great mathematician Pierre-Simon Laplace. In the early 1800s Laplace had worked extensively to demonstrate the purely mechanical and predictable nature of planetary motion. He later extended this theory to the interaction of molecules. In the Laplacean view, molecules are just as subject to the laws of physical mechanics as the planets are. In theory, if we knew the position and velocity of each molecule, we could trace its path as it interacted with other molecules, and trace the course of the physical universe at the most fundamental level. Laplace envisioned a world of ever more precise prediction, where the laws of physical mechanics would be able to forecast nature in increasing detail and ever further into the future, a world where “the phenomena of nature can be reduced in the last analysis to actions at a distance between molecule and molecule.”

What Gödel did to the work of Russell and Whitehead, Heisenberg did to Laplace’s concept of causality. The Uncertainty Principle, though broadly applied and draped in metaphysical context, is a well-defined and elegantly simple statement of physical reality – namely, the product of the uncertainties in a measurement of an electron’s position and of its momentum cannot fall below a fixed value. The reason for this, viewed from the standpoint of classical physics, is that accurately measuring the position of an electron requires illuminating the electron with light of a very short wavelength. But the shorter the wavelength the greater the amount of energy that hits the electron, and the greater the energy hitting the electron the greater the impact on its velocity.

What is true in the subatomic sphere ends up being true – though with rapidly diminishing significance – for the macroscopic. Nothing can be measured with complete precision as to both location and velocity because the act of measuring alters the physical properties. The idea that if we know the present we can calculate the future was proven invalid – not because of a shortcoming in our knowledge of mechanics, but because the premise that we can perfectly know the present was proven wrong. These limits to measurement imply limits to prediction. After all, if we cannot know even the present with complete certainty, we cannot unfailingly predict the future. It was with this in mind that Heisenberg, ecstatic about his yet-to-be-published paper, exclaimed, “I think I have refuted the law of causality.”

The epistemological extrapolation of Heisenberg’s work was that the root of the problem was man – or, more precisely, man’s examination of nature, which inevitably impacts the natural phenomena under examination so that the phenomena cannot be objectively understood. Heisenberg’s principle was not something that was inherent in nature; it came from man’s examination of nature, from man becoming part of the experiment. (So in a way the Uncertainty Principle, like Gödel’s Undecidability Proposition, rested on self-referentiality.) While it did not directly refute Einstein’s assertion against the statistical nature of the predictions of quantum mechanics that “God does not play dice with the universe,” it did show that if there were a law of causality in nature, no one but God would ever be able to apply it. The implications of Heisenberg’s Uncertainty Principle were recognized immediately, and it became a simple metaphor reaching beyond quantum mechanics to the broader world.

This metaphor extends neatly into the world of financial markets. In the purely mechanistic universe of classical physics, we could apply Newtonian laws to project the future course of nature, if only we knew the location and velocity of every particle. In the world of finance, the elementary particles are the financial assets. In a purely mechanistic financial world, if we knew the position each investor has in each asset and the ability and willingness of liquidity providers to take on those assets in the event of a forced liquidation, we would be able to understand the market’s vulnerability. We would have an early-warning system for crises. We would know which firms are subject to a liquidity cycle, and which events might trigger that cycle. We would know which markets are being overrun by speculative traders, and thereby anticipate tactical correlations and shifts in the financial habitat. The randomness of nature and economic cycles might remain beyond our grasp, but the primary cause of market crisis, and the part of market crisis that is of our own making, would be firmly in hand.

The first step toward the Laplacean goal of complete knowledge is the advocacy by certain financial market regulators to increase the transparency of positions. Politically, that would be a difficult sell – as would any kind of increase in regulatory control. Practically, it wouldn’t work. Just as the atomic world turned out to be more complex than Laplace conceived, the financial world may be similarly complex and not reducible to a simple causality. The problems with position disclosure are many. Some financial instruments are complex and difficult to price, so it is impossible to measure precisely the risk exposure. Similarly, in hedge positions a slight error in the transmission of one part, or asynchronous pricing of the various legs of the strategy, will grossly misstate the total exposure. Indeed, the problems and inaccuracies in using position information to assess risk are exemplified by the fact that major investment banking firms choose to use summary statistics rather than position-by-position analysis for their firmwide risk management despite having enormous resources and computational power at their disposal.

Perhaps more importantly, position transparency also has implications for the efficient functioning of the financial markets beyond the practical problems involved in its implementation. The problems in the examination of elementary particles in the financial world are the same as in the physical world: Beyond the inherent randomness and complexity of the systems, there are simply limits to what we can know. To say that we do not know something is as much a challenge as it is a statement of the state of our knowledge. If we do not know something, that presumes that either it is not worth knowing or it is something that will be studied and eventually revealed. It is the hubris of man that all things are discoverable. But for all the progress that has been made, perhaps even more exciting than the rolling back of the boundaries of our knowledge is the identification of realms that can never be explored. A sign in Einstein’s Princeton office read, “Not everything that counts can be counted, and not everything that can be counted counts.”

The behavioral analogue to the Uncertainty Principle is obvious. There are many psychological inhibitions that lead people to behave differently when they are observed than when they are not. For traders it is a simple matter of dollars and cents that will lead them to behave differently when their trades are open to scrutiny. Beneficial though it may be for the liquidity demander and the investor, for the liquidity supplier transparency is bad. The liquidity supplier does not intend to hold the position for a long time, like the typical liquidity demander might. Like a market maker, the liquidity supplier will come back to the market to sell off the position – ideally when there is another investor who needs liquidity on the other side of the market. If other traders know the liquidity supplier’s positions, they will logically infer that there is a good likelihood these positions shortly will be put into the market. The other traders will be loath to be the first ones on the other side of these trades, or will demand more of a price concession if they do trade, knowing the overhang that remains in the market.

This means that increased transparency will reduce the amount of liquidity provided for any given change in prices. This is by no means a hypothetical argument. Frequently, even in the most liquid markets, broker-dealer market makers (liquidity providers) use brokers to enter their market bids rather than entering the market directly in order to preserve their anonymity.

The more information we extract to divine the behavior of traders and the resulting implications for the markets, the more the traders will alter their behavior. The paradox is that to understand and anticipate market crises, we must know positions, but knowing and acting on positions will itself generate a feedback into the market. This feedback often will reduce liquidity, making our observations less valuable and possibly contributing to a market crisis. Or, in rare instances, the observer/feedback loop could be manipulated to amass fortunes.

One might argue that the physical limits of knowledge asserted by Heisenberg’s Uncertainty Principle are critical for subatomic physics, but perhaps they are really just a curiosity for those dwelling in the macroscopic realm of the financial markets. We cannot measure an electron precisely, but certainly we still can “kind of know” the present, and if so, then we should be able to “pretty much” predict the future. Causality might be approximate, but if we can get it right to within a few wavelengths of light, that still ought to do the trick. The mathematical system may be demonstrably incomplete, and the world might not be pinned down on the fringes, but for all practical purposes the world can be known.

Unfortunately, while “almost” might work for horseshoes and hand grenades, 30 years after Gödel and Heisenberg yet a third limitation of our knowledge was in the wings, a limitation that would close the door on any attempt to block out the implications of microscopic uncertainty on predictability in our macroscopic world. Based on observations made by Edward Lorenz in the early 1960s and popularized by the so-called butterfly effect – the fanciful notion that the beating wings of a butterfly could change the predictions of an otherwise perfect weather forecasting system – this limitation arises because in some important cases immeasurably small errors can compound over time to limit prediction in the larger scale. Half a century after the limits of measurement and thus of physical knowledge were demonstrated by Heisenberg in the world of quantum mechanics, Lorenz piled on a result that showed how microscopic errors could propagate to have a stultifying impact in nonlinear dynamic systems. This limitation could come into the forefront only with the dawning of the computer age, because it is manifested in the subtle errors of computational accuracy.

The essence of the butterfly effect is that small perturbations can have large repercussions in massive, random forces such as weather. Edward Lorenz was testing and tweaking a model of weather dynamics on a rudimentary vacuum-tube computer. The program was based on a small system of simultaneous equations, but seemed to provide an inkling into the variability of weather patterns. At one point in his work, Lorenz decided to examine in more detail one of the solutions he had generated. To save time, rather than starting the run over from the beginning, he picked some intermediate conditions that had been printed out by the computer and used those as the new starting point. The values he typed in were the same as the values held in the original simulation at that point, so the results the simulation generated from that point forward should have been the same as in the original; after all, the computer was doing exactly the same operations. What he found was that as the simulated weather pattern progressed, the results of the new run diverged, first very slightly and then more and more markedly, from those of the first run. After a point, the new path followed a course that appeared totally unrelated to the original one, even though they had started at the same place.

Lorenz at first thought there was a computer glitch, but as he investigated further, he discovered the basis of a limit to knowledge that rivaled that of Heisenberg and Gödel. The problem was that the numbers he had used to restart the simulation had been reentered based on his printout from the earlier run, and the printout rounded the values to three decimal places while the computer carried the values to six decimal places. This rounding, clearly insignificant at first, propagated a slight error into the next-round results, and this error grew with each new iteration of the program as it moved the simulation of the weather forward in time. The error doubled every four simulated days, so that after a few months the solutions were going their own separate ways. The slightest of changes in the initial conditions had traced out a wholly different pattern of weather.
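Lorenz’s accident is easy to reproduce in miniature. The sketch below uses the logistic map as a stand-in for his weather equations (an assumption made purely for illustration; Lorenz worked with a small system of differential equations) and restarts the run from a value rounded to three decimals, mimicking the truncated printout.

```python
# Sensitivity to initial conditions, in the spirit of Lorenz's rounding accident.
# The chaotic logistic map x -> 4x(1 - x) stands in for his weather model
# (an illustrative assumption; Lorenz used a small system of equations).
def logistic_path(x, steps):
    path = [x]
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        path.append(x)
    return path

original = logistic_path(0.123456, 40)             # "full precision" run
restarted = logistic_path(round(0.123456, 3), 40)  # restarted from the rounded printout

for t in (0, 5, 10, 15, 20):
    diff = abs(original[t] - restarted[t])
    print(f"t={t:2d}  original={original[t]:.6f}  restarted={restarted[t]:.6f}  diff={diff:.6f}")
```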

Intrigued by his chance observation, Lorenz wrote an article entitled “Deterministic Nonperiodic Flow,” which stated that “nonperiodic solutions are ordinarily unstable with respect to small modifications, so that slightly differing initial states can evolve into considerably different states.” Translation: Long-range weather forecasting is worthless. For his application in the narrow scientific discipline of weather prediction, this meant that no matter how precise the starting measurements of weather conditions, there was a limit after which the residual imprecision would lead to unpredictable results, so that “long-range forecasting of specific weather conditions would be impossible.” And since this occurred in a very simple laboratory model of weather dynamics, it could only be worse in the more complex equations that would be needed to properly reflect the weather. Lorenz discovered the principle that would emerge over time into the field of chaos theory, where a deterministic system generated with simple nonlinear dynamics unravels into an unrepeated and apparently random path.

The simplicity of the dynamic system Lorenz had used suggests a far-reaching result: Because we cannot measure without some error (harking back to Heisenberg), for many dynamic systems our forecast errors will grow to the point that even an approximation will be out of our hands. We can run a purely mechanistic system that is designed with well-defined and apparently well-behaved equations, and it will move over time in ways that cannot be predicted and, indeed, that appear to be random. This gets us to Santa Fe.

The principal conceptual thread running through the Santa Fe research asks how apparently simple systems, like that discovered by Lorenz, can produce rich and complex results. Its method of analysis in some respects runs in the opposite direction of the usual path of scientific inquiry. Rather than taking the complexity of the world and distilling simplifying truths from it, the Santa Fe Institute builds a virtual world governed by simple equations that when unleashed explode into results that generate unexpected levels of complexity.

In economics and finance, the institute’s agenda was to create artificial markets with traders and investors who followed simple and reasonable rules of behavior and to see what would happen. Some of the traders built into the model were trend followers, others bought or sold based on the difference between the market price and perceived value, and yet others traded at random times in response to liquidity needs. The simulations then printed out the paths of prices for the various market instruments. Qualitatively, these paths displayed all the richness and variation we observe in actual markets, replete with occasional bubbles and crashes. The exercises did not produce positive results for predicting or explaining market behavior, but they did illustrate that it is not hard to create a market that looks on the surface an awful lot like a real one, and to do so with actors who are following very simple rules. The mantra is that simple systems can give rise to complex, even unpredictable dynamics, an interesting converse to the point that much of the complexity of our world can – with suitable assumptions – be made to appear simple, summarized with concise physical laws and equations.
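A minimal sketch of the kind of artificial market described above is given below. The trading rules and every parameter are invented for illustration and are not the Santa Fe Institute’s actual models; the point is only that a handful of simple agent rules already produces price paths with trends, reversals, and occasional violent moves.

```python
import random

# A toy artificial market: trend followers chase momentum, value traders fade
# deviations from a perceived value, and liquidity traders submit random orders.
# Net demand moves the price. All rules and parameters here are invented.
random.seed(1)
perceived_value = 100.0
prices = [100.0, 100.0]

for t in range(500):
    trend = prices[-1] - prices[-2]                 # trend followers' signal
    mispricing = perceived_value - prices[-1]       # value traders' signal
    liquidity = random.gauss(0.0, 1.0)              # liquidity-driven random orders
    net_demand = 0.4 * trend + 0.05 * mispricing + liquidity
    new_price = max(prices[-1] + 0.5 * net_demand, 0.01)   # simple linear price impact
    prices.append(new_price)

print(f"final {prices[-1]:.2f}, low {min(prices):.2f}, high {max(prices):.2f}")
```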

The systems explored by Lorenz were deterministic. They were governed definitively and exclusively by a set of equations where the value in every period could be unambiguously and precisely determined based on the values of the previous period. And the systems were not very complex. By contrast, whatever set of equations might be divined to govern the financial world, they are not simple and, furthermore, they are not deterministic. There are random shocks from political and economic events and from the shifting preferences and attitudes of the actors. If we cannot hope to know the course of deterministic systems like fluid mechanics, then no level of detail will allow us to forecast the long-term course of the financial world, buffeted as it is by the vagaries of the economy and the whims of psychology.

Convertible Arbitrage. Thought of the Day 108.0

A convertible bond can be thought of as a fixed income security that has an embedded equity call option. The convertible investor has the right, but not the obligation, to convert (exchange) the bond into a predetermined number of common shares. The investor will presumably convert sometime at or before the maturity of the bond if the value of the common shares exceeds the cash redemption value of the bond. The convertible therefore has both debt and equity characteristics and, as a result, provides an asymmetrical risk and return profile. Until the investor converts the bond into common shares of the issuer, the issuer is obligated to pay a fixed coupon to the investor and repay the bond at maturity if conversion never occurs. A convertible’s price is sensitive to, among other things, changes in market interest rates, credit risk of the issuer, and the issuer’s common share price and share price volatility.


Analysis of convertible bond prices factors in three different sources of value: investment value, conversion value, and option value. The investment value is the theoretical value at which the bond would trade if it were not convertible. This represents the security’s floor value, or minimum price at which it should trade as a nonconvertible bond. The conversion value represents the value of the common stock into which the bond can be converted. If, for example, these shares are trading at $30 and the bond can convert into 100 shares, the conversion value is $3,000. The investment value and conversion value can be considered, at maturity, the low and high price boundaries for the convertible bond. The option value represents the theoretical value of having the right, but not the obligation, to convert the bond into common shares. Until maturity, a convertible trades at a price between the investment value and the option value.
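A short numeric sketch of these value bounds, using the $30 share price and 100-share conversion ratio from the example above; the coupon, maturity, and straight-bond yield used for the investment value are hypothetical.

```python
# Value bounds for a convertible bond, using the example above (shares at $30,
# conversion into 100 shares). The coupon, maturity, and straight-bond yield
# used for the investment value are hypothetical.
face, coupon_rate, years, straight_yield = 1000.0, 0.04, 5, 0.07
share_price, conversion_ratio = 30.0, 100

# Investment value: price the bond as if it were not convertible (the floor).
coupon = face * coupon_rate
investment_value = sum(coupon / (1 + straight_yield) ** t for t in range(1, years + 1))
investment_value += face / (1 + straight_yield) ** years

# Conversion value: what the bond is worth if exchanged for shares today.
conversion_value = share_price * conversion_ratio   # 30 * 100 = 3,000

print(f"investment value (floor)   : {investment_value:10,.2f}")
print(f"conversion value           : {conversion_value:10,.2f}")
print(f"lower bound on convertible : {max(investment_value, conversion_value):10,.2f}")
```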

A Black-Scholes option pricing model, in combination with a bond valuation model, can be used to price a convertible security. However, a binomial option model, with some adjustments, is the best method for determining the value of a convertible security. Convertible arbitrage is a market-neutral investment strategy that involves the simultaneous purchase of convertible securities and the short sale of common shares (selling borrowed stock) that underlie the convertible. An investor attempts to exploit inefficiencies in the pricing of the convertible in relation to the security’s embedded call option on the convertible issuer’s common stock. In addition, there are cash flows associated with the arbitrage position that combine with the security’s inefficient pricing to create favorable returns to an investor who is able to properly manage a hedge position through a dynamic hedging process. The hedge involves selling short a percentage of the shares that the convertible can convert into based on the change in the convertible’s price with respect to the change in the underlying common stock price (delta) and the change in delta with respect to the change in the underlying common stock (gamma). The short position must be adjusted frequently in an attempt to neutralize the impact of changing common share prices during the life of the convertible security. This process of managing the short position in the issuer’s stock is called “delta hedging.”
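To make the hedge sizing concrete, here is a small sketch of the share count involved in delta hedging a convertible position. The delta and gamma are hypothetical inputs; in practice they would come from a convertible pricing model such as the binomial tree mentioned above.

```python
# Sizing the short stock leg of a convertible arbitrage position (delta hedging).
# Delta and gamma are hypothetical; in practice they come from a pricing model.
bonds_held = 50
conversion_ratio = 100          # shares received per bond on conversion
delta = 0.62                    # convertible price change per $1 change in the stock
gamma = 0.015                   # change in delta per $1 change in the stock

shares_short = bonds_held * conversion_ratio * delta
print(f"initial short position: {shares_short:,.0f} shares")

# After a $2 rally, delta has risen by roughly gamma * move, so the hedge is
# no longer neutral and more stock must be sold short.
move = 2.0
new_target = bonds_held * conversion_ratio * (delta + gamma * move)
print(f"after a ${move:.0f} rally, short about {new_target - shares_short:,.0f} more shares")
```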

If hedging is done properly, whenever the convertible issuer’s common share price decreases, the gain from the short stock position should exceed the loss from the convertible holding. Equally, whenever the issuer’s common share price increases, the gain from the convertible holding should exceed the loss from the short stock position. In addition to the returns produced by delta hedging, the investor will receive returns from the convertible’s coupon payment and interest income associated with the short stock sale. However, this cash flow is reduced by paying a cash amount to stock lenders equal to the dividend the lenders would have received if the stock were not loaned to the convertible investor, and further reduced by stock borrow costs paid to a prime broker. In addition, if the investor leverages the investment by borrowing cash from a prime broker, there will be interest expense on the loan. Finally, if an investor chooses to hedge credit risk of the issuer, or interest rate risk, there will be additional costs associated with credit default swaps and a short Treasury position. This strategy attempts to create returns that exceed the returns that would be available from purchasing a nonconverting bond with the same maturity issued by the same issuer, without being exposed to common share price risk. Most convertible arbitrageurs attempt to achieve double-digit annual returns from convertible arbitrage.

Arbitrage, or Tensors thereof…


What is an arbitrage? Basically it means “to get something from nothing” – a free lunch, after all. A stricter definition states that an arbitrage is an operational opportunity to make a risk-free profit with a rate of return higher than the risk-free interest rate accrued on deposit.

The arbitrage appears in the theory when we consider the curvature of the connection. The rate of excess return for an elementary arbitrage operation (the difference between the rate of return for the operation and the risk-free interest rate) is an element of the curvature tensor calculated from the connection. This can be understood by keeping in mind that the elements of a curvature tensor are related to the difference between two results of infinitesimal parallel transports performed in different order. In financial terms this means that the curvature tensor elements measure a difference in gains accrued from two financial operations with the same initial and final points or, in other words, a gain from an arbitrage operation.
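A toy numeric illustration of this picture: transport one unit of currency around a closed loop of exchange rates (the financial analogue of comparing two parallel-transport paths with the same endpoints). The rates are invented; with mutually consistent rates the loop factor is exactly 1 and the excess return vanishes.

```python
import math

# "Curvature" as the gain around a closed loop: convert USD -> EUR -> GBP -> USD
# with invented exchange rates. If the rates were mutually consistent, the loop
# factor would be exactly 1 and the excess (log) return would be zero.
usd_to_eur, eur_to_gbp, gbp_to_usd = 0.92, 0.86, 1.28

loop_factor = usd_to_eur * eur_to_gbp * gbp_to_usd
excess_log_return = math.log(loop_factor)    # the arbitrage gain on this elementary loop

print(f"loop factor       : {loop_factor:.6f}")
print(f"excess log return : {excess_log_return:+.4%}")
```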

In a certain sense, the rate of excess return for an elementary arbitrage operation is an analogue of the electromagnetic field. In the absence of any uncertainty (or, in other words, in the absence of random walks of prices, exchange rates, and interest rates) the only state realised is the state of zero arbitrage. However, once we place uncertainty in the game, prices and rates move, and some virtual arbitrage possibilities to get something from nothing appear. Therefore we can say that uncertainty plays the same role in the developing theory as quantization did for quantum gauge theory.

What, then, of the “matter” fields, which interact through the connection? The “matter” fields are money flow fields, which have to be gauged by the connection. Dilatations of money units (which do not change real wealth) play the role of gauge transformations, which eliminate the effect of the dilatation by a proper tuning of the connection (interest rates, exchange rates, prices, and so on), exactly as the Fisher formula does for the real interest rate in the case of inflation. The symmetry of real wealth under a local dilatation of money units (security splits and the like) is the gauge symmetry of the theory.

A theory may contain several types of “matter” fields, which may differ, for example, by the sign of the connection term, as is the case for positive and negative charges in electrodynamics. In the financial setting this means different preferences of investors. An investor’s strategy is not always optimal: this is due partly to incomplete information at hand and the choice procedure, and partly to the investor’s (or manager’s) internal objectives. (Physics of Finance)

 

 

Single Asset Optimal Investment Fraction


We first consider a situation in which an investor can spend a fraction of his capital to buy shares of just one risky asset; the rest of his money he keeps in cash.

Generalizing Kelly, we consider the following simple strategy of the investor: he regularly checks the asset’s current price p(t), and sells or buys some asset shares in order to keep the current market value of his asset holdings a pre-selected fraction r of his total capital. These readjustments are made periodically at a fixed interval, which we refer to as readjustment interval, and select it as the discrete unit of time. In this work the readjustment time interval is selected once and for all, and we do not attempt optimization of its length.

We also assume that on the time-scale of this readjustment interval the asset price p(t) undergoes a geometric Brownian motion:

p(t + 1) = e^η(t) p(t) —– (1)

i.e. at each time step the random number η(t) is drawn from some probability distribution π(η), and is independent of its values at previous time steps. This exponential notation is particularly convenient for working with multiplicative noise, keeping the necessary algebra at a minimum. Under these rules of dynamics the logarithm of the asset’s price, ln p(t), performs a random walk with an average drift v = ⟨η⟩ and a dispersion D = ⟨η²⟩ − ⟨η⟩².
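A brief simulation sketch of eq. (1), assuming for illustration a Gaussian π(η), confirming that ln p(t) drifts at v = ⟨η⟩ with dispersion D = ⟨η²⟩ − ⟨η⟩².

```python
import math, random

# Simulate eq. (1), p(t+1) = exp(eta(t)) p(t), with an illustrative Gaussian pi(eta),
# and check that ln p(t) drifts at v = <eta> with dispersion D = <eta^2> - <eta>^2.
random.seed(0)
v_true, D_true, T = 0.01, 0.04, 100_000

etas = [random.gauss(v_true, math.sqrt(D_true)) for _ in range(T)]
log_p = sum(etas)                                   # ln p(T), starting from p(0) = 1

mean_eta = sum(etas) / T
disp_eta = sum(e * e for e in etas) / T - mean_eta ** 2
print(f"drift of ln p(t) per step : {log_p / T:.4f}   (v = {v_true})")
print(f"dispersion of eta         : {disp_eta:.4f}   (D = {D_true})")
```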

It is easy to derive the time evolution of the total capital W(t) of an investor, following the above strategy:

W(t + 1) = (1 − r)W(t) + rW(t)e^η(t) —– (2)

Let us assume that the value of the investor’s capital at t = 0 is W(0) = 1. The evolution of the expectation value of the total capital ⟨W(t)⟩ after t time steps is given by the recursion ⟨W(t + 1)⟩ = (1 − r + r⟨e^η⟩)⟨W(t)⟩. When ⟨e^η⟩ > 1, it would seem at first glance that the investor should invest all his money in the risky asset: the expectation value of his capital would then enjoy exponential growth at the fastest possible rate. However, it would be totally unreasonable to expect that in a typical realization of price fluctuations the investor would be able to attain the average growth rate determined as v_avg = d ln⟨W(t)⟩/dt. This is because the main contribution to the expectation value ⟨W(t)⟩ comes from exponentially unlikely outcomes, when the price of the asset after a long series of favorable events with η > ⟨η⟩ becomes exponentially big. Such outcomes lie well beyond reasonable fluctuations of W(t), determined by the standard deviation √(Dt) of ln W(t) around its average value ⟨ln W(t)⟩ = ⟨η⟩t. For the investor who deals with just one realization of the multiplicative process it is better not to rely on such unlikely events, and to maximize his gain in a typical outcome of the process. To quantify the intuitively clear concept of a typical value of a random variable x, we define x_typ as the median of its distribution, i.e. x_typ has the property that Prob(x > x_typ) = Prob(x < x_typ) = 1/2. In a multiplicative process (2) with r = 1, W(t + 1) = e^η(t) W(t), one can show that W_typ(t) – the typical value of W(t) – grows exponentially in time, W_typ(t) = e^⟨η⟩t, at a rate v_typ = ⟨η⟩, while the expectation value ⟨W(t)⟩ also grows exponentially, as ⟨W(t)⟩ = ⟨e^η⟩^t, but at a faster rate given by v_avg = ln⟨e^η⟩. Notice that ⟨ln W(t)⟩ always grows with the typical growth rate, since those very rare outcomes when W(t) is exponentially big do not make a significant contribution to this average.
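The gap between the typical and the average growth is easy to see numerically. The sketch below simulates many realizations of W(t) with r = 1 and a Gaussian η (an illustrative choice): the median of W(t) tracks e^⟨η⟩t, while the mean is pulled far above it by rare, exponentially lucky paths.

```python
import math, random, statistics

# Typical (median) versus average growth of W(t) for r = 1, W(t+1) = e^eta W(t),
# with an illustrative Gaussian eta of mean 0 and dispersion D. The typical rate
# is <eta> = 0, while the average rate is ln<e^eta> = D/2 > 0: the mean of W(t)
# is dragged upward by rare, exponentially lucky paths.
random.seed(0)
D, T, runs = 0.04, 100, 20_000

finals = []
for _ in range(runs):
    log_w = sum(random.gauss(0.0, math.sqrt(D)) for _ in range(T))
    finals.append(math.exp(log_w))

print(f"median W(T): {statistics.median(finals):8.3f}   (typical value e^(<eta>T) = {1.0:.3f})")
print(f"mean   W(T): {statistics.fmean(finals):8.3f}   (average value e^(DT/2)   = {math.exp(D*T/2):.3f})")
```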

The question we are going to address is: which investment fraction r provides the investor with the best typical growth rate v_typ of his capital? Kelly answered this question for a particular realization of the multiplicative stochastic process, where the capital is multiplied by 2 with probability q > 1/2, and by 0 with probability p = 1 − q. This case is realized in a gambling game where betting on the right outcome pays 2:1, while you know the right outcome with probability q > 1/2. In our notation this case corresponds to η being equal to ln 2 with probability q and to −∞ otherwise. The player’s capital in Kelly’s model with r = 1 enjoys growth of the expectation value ⟨W(t)⟩ at a rate v_avg = ln(2q) > 0. In this case it is, however, particularly clear that one should not use maximization of the expectation value of the capital as the optimum criterion. If the player indeed bets all of his capital at every time step, sooner or later he will lose everything and will not be able to continue to play. In other words, r = 1 corresponds to the worst typical growth of the capital: asymptotically the player will be bankrupt with probability 1. In this example it is also very transparent where the positive average growth rate comes from: after T rounds of the game, in the very unlikely event (Prob = q^T) that the capital was multiplied by 2 at all times (the gambler guessed right every time!), the capital is equal to 2^T. This exponentially large value of the capital outweighs the exponentially small probability of this event, and gives rise to an exponentially growing average. This offers little consolation to a gambler who has lost everything.

We generalize Kelly’s arguments for arbitrary distribution π(η). As we will see this generalization reveals some hidden results, not realized in Kelly’s “betting” game. As we learned above, the growth of the typical value of W(t), is given by the drift of ⟨lnW(t)⟩ = vtypt, which in our case can be written as

v_typ(r) = ∫ dη π(η) ln(1 + r(e^η − 1)) —– (3)

One can check that v_typ(0) = 0, since in this case the whole capital is in the form of cash and does not change in time. In the other limit one has v_typ(1) = ⟨η⟩, since in this case the whole capital is invested in the asset and enjoys its typical growth rate (⟨η⟩ = −∞ for Kelly’s case). Can one do better by selecting 0 < r < 1? To find the maximum of v_typ(r) one differentiates (3) with respect to r and looks for a solution of the resulting equation 0 = v′_typ(r) = ∫ dη π(η) (e^η − 1)/(1 + r(e^η − 1)) in the interval 0 ≤ r ≤ 1. If such a solution exists, it is unique, since v″_typ(r) = − ∫ dη π(η) (e^η − 1)²/(1 + r(e^η − 1))² < 0 everywhere. The values of v′_typ(r) at 0 and 1 are given by v′_typ(0) = ⟨e^η⟩ − 1 and v′_typ(1) = 1 − ⟨e^−η⟩. One has to consider three possibilities:

(1) ⟨e^η⟩ < 1. In this case v′_typ(0) = ⟨e^η⟩ − 1 < 0, so the maximum of v_typ(r) is realized at r = 0 and is equal to 0. In other words, one should never invest in an asset with negative average return per capital, ⟨e^η⟩ − 1 < 0.

(2) ⟨e^η⟩ > 1, and ⟨e^−η⟩ > 1. In this case v′_typ(0) > 0, but v′_typ(1) < 0, and the maximum of v_typ(r) is realized at some 0 < r < 1, which is the unique solution of v′_typ(r) = 0. The typical growth rate in this case is always positive (because one could always have selected r = 0 to make it zero), but not as big as the average rate ln⟨e^η⟩, which serves as an unattainable ideal limit. An intuitive understanding of why one should select r < 1 in this case comes from the following observation: the condition ⟨e^−η⟩ > 1 makes ⟨1/p(t)⟩ grow exponentially in time. Such exponential growth indicates that outcomes with very small p(t) are feasible and give the dominant contribution to ⟨1/p(t)⟩. This is an indicator that the asset price is unstable and one should not trust one’s whole capital to such a risky investment.

(3) ⟨e^η⟩ > 1, and ⟨e^−η⟩ < 1. This is a safe asset and one can invest one’s whole capital in it. The maximum of v_typ(r) is achieved at r = 1 and is equal to v_typ(1) = ⟨η⟩. A simple example of this type of asset is one in which the price p(t) with equal probabilities is multiplied by 2 or by a = 2/3. As one can see, this is a marginal case, in which ⟨1/p(t)⟩ = const. For a < 2/3 one should invest only a fraction r < 1 of one’s capital in the asset, while for a ≥ 2/3 the whole sum can be trusted to it. The special nature of the case a = 2/3 cannot be guessed just by looking at the typical and average growth rates of the asset! One has to go and calculate ⟨e^−η⟩ to check whether ⟨1/p(t)⟩ diverges. This “reliable” type of asset is a new feature of the model with a general π(η). It is never realized in Kelly’s original model, which always has ⟨η⟩ = −∞, so that it never makes sense to gamble the whole capital every time.
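As a sanity check of eq. (3), the sketch below maximizes v_typ(r) numerically for Kelly’s original game (η = ln 2 with probability q, total loss otherwise) and recovers the classic answer r* = 2q − 1; the value q = 0.6 is just an example.

```python
import math

# Maximize v_typ(r) of eq. (3) numerically for Kelly's original game:
# eta = ln 2 with probability q (a winning 2:1 bet), eta = -infinity otherwise
# (the stake is lost). The classic answer is r* = 2q - 1; q = 0.6 is an example.
q = 0.6

def v_typ(r):
    # ln(1 + r(e^eta - 1)) averaged over pi(eta): e^eta = 2 on a win, 0 on a loss
    return q * math.log(1.0 + r) + (1.0 - q) * math.log(1.0 - r)

grid = [i / 10_000 for i in range(10_000 - 1)]     # r in [0, 1)
r_star = max(grid, key=v_typ)
print(f"numerical optimum r* = {r_star:.4f}   (Kelly: 2q - 1 = {2*q - 1:.4f})")
print(f"typical growth rate at r*: {v_typ(r_star):.4f} per bet")
```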

An interesting and somewhat counterintuitive consequence of the above results is that under certain conditions one can make one’s capital grow by investing in an asset with a negative typical growth rate ⟨η⟩ < 0. Such an asset certainly loses value, and its typical price experiences an exponential decay. Any investor bold enough to trust his whole capital to such an asset is losing money at the same rate. But as long as the fluctuations are strong enough to maintain a positive average return per capital (⟨e^η⟩ − 1 > 0), one can maintain a certain fraction of one’s total capital invested in this asset and almost certainly make money! A simple example of such a mind-boggling situation is given by a random multiplicative process in which the price of the asset with equal probabilities is doubled (goes up by 100%) or divided by 3 (goes down by 66.7%). The typical price of this asset drifts down by about 18% each time step: after T time steps one could reasonably expect the price to be p_typ(T) = 2^(T/2) 3^(−T/2) = (√(2/3))^T ≃ 0.82^T. On the other hand, the average ⟨p(t)⟩ enjoys a 17% growth, ⟨p(t + 1)⟩ = (7/6)⟨p(t)⟩ ≃ 1.17⟨p(t)⟩. As one can easily see, the optimum of the typical growth rate is achieved by maintaining a fraction r = 1/4 of the capital invested in this asset. The typical growth factor per step in this case is a meager √(25/24) ≃ 1.02, meaning that in the long run one almost certainly gets about a 2% return per time step; but that is certainly better than losing 18% by investing the whole capital in this asset.
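This counterintuitive example is easy to verify by simulation; the sketch below rebalances to r = 1/4 every step and recovers the √(25/24) ≃ 1.02 typical growth factor quoted above.

```python
import math, random

# Simulation of the example above: the asset price is doubled or divided by 3
# with equal probability, and a fraction r = 1/4 of the capital stays invested
# (rebalanced every step). The predicted typical growth factor is sqrt(25/24).
random.seed(0)
r, T = 0.25, 100_000

log_w = 0.0
for _ in range(T):
    factor = 2.0 if random.random() < 0.5 else 1.0 / 3.0
    log_w += math.log(1.0 - r + r * factor)

print(f"simulated typical growth per step: {math.exp(log_w / T):.4f}")
print(f"predicted sqrt(25/24)            : {math.sqrt(25.0 / 24.0):.4f}")
```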

Of course the properties of a typical realization of a random multiplicative process are not fully characterized by the drift v_typ(r)t in the position of the center of mass of P(h, t), where h(t) = ln W(t) is the logarithm of the wealth of the investor. Indeed, asymptotically P(h, t) has the Gaussian shape P(h, t) = (1/√(2πD(r)t)) exp(−(h − v_typ(r)t)²/(2D(r)t)), where v_typ(r) is given by eq. (3). One needs to know the dispersion D(r) to estimate √(D(r)t), which is the magnitude of characteristic deviations of h(t) away from its typical value h_typ(t) = v_typ t. At an infinite time horizon t → ∞, the process with the biggest v_typ(r) will certainly be preferable over any other process. This is because the separation between typical values of h(t) for two different investment fractions r grows linearly in time, while the span of typical fluctuations grows only as √t. However, at a finite time horizon the investor should take into account both v_typ(r) and D(r) and decide what he prefers: moderate growth with small fluctuations or faster growth with still bigger fluctuations. To quantify this decision one needs to introduce an investor’s “utility function”, which we will not attempt in this work. The most conservative players are advised to always keep their capital in cash, since with any other arrangement the fluctuations will certainly be bigger. As a rule, one can show that the dispersion D(r) = ∫ π(η) ln²[1 + r(e^η − 1)] dη − v_typ² increases monotonically with r. Therefore, among two solutions with equal v_typ(r) one should always select the one with the smaller r, since it would guarantee smaller fluctuations. Here it is more convenient to switch to the standard notation. It is customary to use the random variable

Λ(t) = (p(t+1) − p(t))/p(t) = e^η(t) − 1 —– (4)

which is referred to as the return per unit capital of the asset. The properties of a random multiplicative process are expressed in terms of the average return per capital α = ⟨Λ⟩ = ⟨e^η⟩ − 1, and the volatility (standard deviation) of the return per capital σ = √(⟨Λ²⟩ − ⟨Λ⟩²). In our notation, α = ⟨e^η⟩ − 1 is determined by the average, not the typical, growth rate of the process. For η ≪ 1, α ≃ v + D/2 + v²/2, while the volatility σ is related to D (the dispersion of η) through σ ≃ √D.

 

Portfolio Optimization, When the Underlying Asset is Subject to a Multiplicative Continuous Brownian Motion With Gaussian Price Fluctuations


Imagine that you are an investor with some starting capital, which you can invest in just one risky asset. You decided to use the following simple strategy: you always maintain a given fraction 0 < r < 1 of your total current capital invested in this asset, while the rest (given by the fraction 1 − r) you wisely keep in cash. You select a unit of time (say a week, a month, a quarter, or a year, depending on how closely you follow your investment, and what transaction costs are involved) at which you check the asset’s current price, and sell or buy some shares of this asset. By this transaction you adjust the current money equivalent of your investment to the above pre-selected fraction of your total capital.

The interesting question is: which investment fraction provides the optimal typical long-term growth rate of the investor’s capital? By typical, it is meant that this growth rate occurs, at a large time horizon, in the majority of realizations of the multiplicative process. By extending the time horizon, one can make this rate occur with probability arbitrarily close to one. Contrary to the traditional economics approach, where the expectation value of an artificial “utility function” of an investor is optimized, the optimization of a typical growth rate does not contain any ambiguity.

Let us assume that during the timescale at which the investor checks and readjusts his asset’s capital to the selected investment fraction, the asset’s price changes by a random factor, drawn from some probability distribution and uncorrelated with the price dynamics at earlier intervals. In other words, the price of the asset experiences a multiplicative random walk with some known probability distribution of steps. This assumption is known to hold in real financial markets beyond a certain time scale. Contrary to continuum theories popular among economists, our approach is not limited to Gaussian distributed returns: indeed, we were able to formulate our strategy for a general probability distribution of returns per capital (elementary steps of the multiplicative random walk).

Thus the risk-free interest rate, the asset’s dividends, and transaction costs are ignored (when volatility is large they are indeed negligible). However, the task of including these effects in our formalism is rather straightforward. The quest for a strategy that optimizes the long-term growth rate of capital is by no means new: indeed, it was first discussed by Daniel Bernoulli in about 1730 in connection with the St. Petersburg game. In the early days of information science, C. E. Shannon considered the application of the concept of information entropy to designing optimal strategies in games such as gambling. Working from the foundations laid by Shannon, J. L. Kelly Jr. specifically designed an optimal gambling strategy for placing bets when a gambler has some incomplete information about the winning outcome (a “noisy information channel”). In modern-day finance, investment in very risky assets especially is no different from gambling. The point Shannon and Kelly wanted to make is that, given that the odds are slightly in your favor albeit with large uncertainty, the gambler should not bet his whole capital at every time step. On the other hand, he would achieve the biggest long-term capital growth by betting some specially optimized fraction of his whole capital in every game. This cautious approach to investment is recommended in situations when the volatility is very large. For instance, in many emergent markets the volatility is huge, but they are still swarming with investors, since the long-term return rate in some cautious investment strategy is favorable.

Kelly’s approach was later expanded and generalized in the work of Breiman; our results for multi-asset optimal investment agree with his exact but non-constructive equations. For some special cases, Merton considered the problem of portfolio optimization when the underlying asset follows a continuous multiplicative Brownian motion with Gaussian price fluctuations.

What Drives Investment? Or How Responsible is Kelly’s Optimum Investment Fraction?


A reasonable way to describe asset price variations (on a given time scale) is to model them as multiplicative random walks with log-normal steps. This reflects the assumption that the growth rates of prices, rather than their absolute variations, are the significant quantities. We therefore describe the price of a financial asset as a time-dependent multiplicative random process. We introduce a set of N Gaussian random variables x_i(t) depending on a time parameter t, and with them define N independent multiplicative Gaussian random walks, whose discrete-time evolution is given by

p_i(t+1) = e^{x_i(t)} p_i(t) —– (1)

for i = 1, …, N, where each x_i(t) is uncorrelated in time. To optimize an investment, one can choose different risk-return strategies; here, by optimization we mean maximization of the typical growth rate of the portfolio’s capital. A capital W(t), invested in different financial assets that behave as multiplicative random walks, grows almost certainly at the exponential rate ⟨ln[W(t+1)/W(t)]⟩, where the average is taken over the distribution of the single multiplicative step. We assume that the investment is diversified according to Kelly’s optimum investment fraction, so as to maximize the typical capital growth rate over N assets with identical average return α = ⟨e^{x_i(t)}⟩ − 1 and squared volatility ∆ = ⟨e^{2x_i(t)}⟩ − ⟨e^{x_i(t)}⟩². It should be noted that the Kelly capital growth criterion, which maximizes the expected log of final wealth, provides the strategy that asymptotically maximizes long-run wealth growth for repeated investments over time. One drawback, however, is its very risky behavior, due to the logarithm’s essentially zero risk aversion; consequently it tends to suggest large, concentrated investments or bets that can produce high volatility in the short term. Many investors, hedge funds, and sports bettors use the criterion, and its seminal application is to a long sequence of favorable investment situations. On each asset the investor allocates a fraction f_i of his capital, according to the return expected from that asset. The time evolution of the total capital is ruled by the following multiplicative process

W(t+1) = [1 + ∑_{i=1}^{N} f_i (e^{x_i(t)} − 1)] W(t) —– (2)
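As a sketch of this process, the following Python fragment (with arbitrary placeholder parameters) evolves the capital of equation (2) for a fixed set of investment fractions:

```python
import numpy as np

rng = np.random.default_rng(7)

def evolve_capital(f, mu=0.0, sigma=0.2, steps=1000, w0=1.0):
    """Iterate equation (2): W(t+1) = [1 + sum_i f_i (e^{x_i(t)} - 1)] W(t)."""
    w = w0
    for _ in range(steps):
        x = rng.normal(mu, sigma, size=len(f))       # one Gaussian log-step per asset
        w *= 1.0 + np.dot(f, np.exp(x) - 1.0)        # the multiplicative update
    return w

# e.g. five assets, with 10% of the capital allocated to each
print(evolve_capital(np.full(5, 0.10)))
```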

First, we consider the case of unlimited investment, i.e. we put no restriction on the value of ∑_{i=1}^{N} f_i. The typical growth rate

V_typ = ⟨ln[1 + ∑_{i=1}^{N} f_i (e^{x_i} − 1)]⟩ —– (3)

of the investor’s capital can be calculated through the following second-order expansion in e^{x_i} − 1, under the assumption, which seems quite reasonable, that price fluctuations are small and uncorrelated:

V_typ ≅ ∑_{i=1}^{N} [ f_i (⟨e^{x_i}⟩ − 1) − (f_i²/2)(⟨e^{2x_i}⟩ − 2⟨e^{x_i}⟩ + 1) ] —– (4)

By solving dV_typ/df_i = 0, it is easy to show that the optimal value of f_i is f_i^{opt}(α, Δ) = α/(α² + Δ) ∀ i. We now assume that the investor has a small degree of ignorance about the real value of α, which we represent by a Gaussian fluctuation around the real value: in the investor’s mind each asset is different, because of the fluctuation α_i = α + ε_i. The ε_i are drawn from the same distribution, with ⟨ε_i⟩ = 0, i.e. the errors are normally distributed around the real value. We suppose that the investor makes an effort E to investigate and obtain information about the statistical parameters of the N assets over which he will spread his capital. His ignorance (i.e. the width of the distribution of the ε_i) about the real value of α_i will then be a decreasing function of the effort per asset, E/N; moreover, we suppose that even an infinite effort will not make this ignorance vanish. To build these assumptions into the model, we write the width of the distribution of ε as

⟨ε_i²⟩ = D_0 + (N/E)^γ —– (5)

with γ > 0. As one can see, the greater E is, the more exact the perception and the better the investment; D_0 is the asymptotic ignorance. The invested fractions f^{opt}(α_i, Δ) will all be different, according to the investor’s perception. Assuming that the ε_i are small, we expand each f_i(α + ε_i) in equation (4) up to second order in ε_i and, after averaging over the distribution of the ε_i, obtain the mean value of the typical capital growth rate for an investor who provides a given effort E:

V_typ = N [A − (D_0 + (N/E)^γ) B] —– (6)

where

A = α(3Δ − α²)/(α² + Δ)³ ,  B = (α² − Δ)²/[2(α² + Δ)³] —– (7)

We are now able to find the optimal number of assets to include in the portfolio (i.e., the number for which the investment is most advantageous, taking into account the effort spent in gathering information). Solving dV_typ/dN = 0, it is easy to see that the optimal number of assets is given by

N_opt(E) = E [(A − D_0 B)/((1 + γ) B)]^{1/γ} —– (8)

which is an increasing function of the effort E. If there is no limit on the total capital fraction invested in the portfolio (so that it can exceed 1, i.e. the investor can invest more money than he has by borrowing from an external source), the capital can become negative if the assets included in the portfolio all take a negative step simultaneously. So, if the total investment fraction is greater than 1, the cost of refunding the loss to the bank should also be taken into account when predicting the typical growth rate of the capital.
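To tie the pieces together, the sketch below evaluates the Kelly fraction f_opt = α/(α² + Δ) and equations (6)–(8) for illustrative parameter values (α, Δ, D_0, γ, and E are all arbitrary choices here, not taken from any data), and checks that the analytic N_opt coincides with a grid search over V_typ(N):

```python
import numpy as np

# Illustrative parameters -- arbitrary choices, not from the text
alpha, delta = 0.05, 0.10        # average return and squared volatility per step
D0, gamma, E = 0.001, 1.0, 500   # asymptotic ignorance, exponent, total effort

f_opt = alpha / (alpha**2 + delta)                       # Kelly optimum fraction per asset

A = alpha * (3 * delta - alpha**2) / (alpha**2 + delta) ** 3
B = (alpha**2 - delta) ** 2 / (2 * (alpha**2 + delta) ** 3)

def v_typ(N):
    """Mean typical growth rate of equation (6) for N assets and effort E."""
    return N * (A - (D0 + (N / E) ** gamma) * B)

N_opt = E * ((A - D0 * B) / ((1 + gamma) * B)) ** (1 / gamma)   # equation (8)

N_grid = np.arange(1, 2001)
print(f"f_opt = {f_opt:.3f}, analytic N_opt ≈ {N_opt:.0f}, "
      f"grid maximum at N = {N_grid[np.argmax(v_typ(N_grid))]}")
```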

Debt versus Equity Financing: Why Does the Difference Matter?


There is a lot of confusion between debt and equity financing, though there is a clear line of demarcation between them. What is an even sorrier state of affairs is that these terms are used rather platitudinously, and this post tries to recover from such usage, which now borders on the colloquial, especially on the activist side of the camp.

What is Debt Financing?

Debt financing is a means of raising funds to generate working capital that is used to pay for projects or endeavors the issuer of the debt wishes to undertake. The issuer may choose to issue bonds, promissory notes, or other debt instruments to finance the project. In return for purchasing the notes or bonds, the investor receives some type of return above and beyond the original purchase amount.

Debt financing is very different from equity financing. With equity financing, funds are raised by issuing shares of stock in a public offering. The shares remain active from the point of issue and will continue to generate returns for investors as long as the shares are held. By contrast, debt financing involves the use of debt instruments that are expected to be repaid in full within a given time frame.

With debt financing, the investor anticipates earning a return in the form of interest for a specified period of time. At the end of the life of a bond or note, the investor receives the full face value of the bond, including any interest that may have accrued. In some cases, bonds or notes may be structured to allow for periodic interest payments to investors throughout the life of the debt instrument.

For the issuer of the bonds or notes, debt financing is a good way to raise needed capital in a short period of time. Since it does not involve issuing shares of stock, there is a clear start and end date in mind for the debt. It is possible to project the amount of interest that will be paid over the life of the bond and thus have a good idea of how to meet those obligations without causing undue hardship. Selling bonds is a common way of funding special projects, and is used by municipalities as well as many corporations.
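As a simple illustration of projecting those obligations, here is a minimal sketch for a fixed-rate bond issue; the face value, coupon rate, and maturity are made-up numbers:

```python
def total_coupon_obligation(face_value, annual_rate, years, payments_per_year=2):
    """Total interest an issuer will pay over the life of a fixed-rate bond."""
    coupon = face_value * annual_rate / payments_per_year   # each periodic payment
    return coupon * payments_per_year * years               # all payments until maturity

# Example: a $10 million issue paying 4% annually, in semi-annual coupons, over 10 years
print(total_coupon_obligation(10_000_000, 0.04, 10))        # -> 4000000.0
```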

Investors also benefit from debt financing. Since the bonds and notes are often set up with either a fixed rate of interest or a variable rate with a guarantee of a minimum interest rate, it is possible to project the return on the investment over the life of the bond. There is relatively little risk with this type of debt financing, so the investor does not have to be concerned about losing money on the deal. While the return may be somewhat modest, it is reliable. The low risk factor makes entering into a debt financing strategy very attractive for conservative investors.

What is Equity Financing?

Also known as share capital, equity financing is the strategy of generating funds for company projects by selling a limited amount of stock to investors. The financing may involve issuing shares of common stock or preferred stock. In addition, the shares may be sold to commercial or individual investors, depending on the type of shares involved and the governmental regulations that apply in the nation where the issuer is located. Both large and small business owners make use of this strategy when undertaking new company projects.

Equity financing is a means of raising the capital needed for some sort of company activity, such as the purchase of new equipment or the expansion of company locations or manufacturing facilities. The choice of financing will often depend on the purpose the business is pursuing, as well as the company’s current credit rating. With equity financing, the expectation is that the project funded by the sale of the stock will eventually begin to turn a profit. At that point, the business is not only able to pay dividends to the shareholders who purchased the stock, but also to realize profits that help increase the financial stability of the company overall. In addition, there is no outstanding debt owed to a bank or other lending institution. The end result is that the company funds the project without going into debt and without diverting existing resources to finance the project during its infancy.

While equity financing is an option that is often ideal for funding new projects, there are situations where looking into debt financing is in the best interests of the company. Should the project be anticipated to yield a return in a very short period of time, the company may find that obtaining loans at competitive interest rates is a better choice. This is especially true if this option makes it possible to launch the project sooner rather than later, and take advantage of favorable market conditions that increase the projected profits significantly. The choice between equity financing and debt financing may also involve considering different outcomes for the project. By considering how the company would be affected if the project fails, as well as considering the fortunes of the company if the project is successful, it is often easier to determine which financing alternative will serve the interests of the business over the long-term.

In summation, equity financing is the technique of raising capital by selling the organization’s stock to investors, whereas debt financing is the technique of raising capital by borrowing. Equity financing rewards its providers through dividends and capital gains, while debt financing takes the form of loans or bonds that must be repaid. For the provider of funds, equity involves higher risk than debt: debt holders have a contractual claim to repayment, and often collateral, which equity holders do not. With equity financing, entrepreneurs do not need to channel profits into loan repayment; with debt financing, they must channel profits into repaying the loans.