Capital As Power.


One strand of Marxian thinking has the Erich Fromm angle of consciousness as linear and directly proportional to exploitation; the non-linearity creeps in from epistemology on the technological side, with something like, say, Moore’s Law, where the ascension of conscious thought is, or could be, likened to exponentials. Now, these exponentials are potent in ridding us of the pronouns, as in the “I” having a compossibility with the “We”, for if these aren’t gotten rid of, there is asphyxiation in continuing with them, an effort, an energy expended into the vestiges of waste, before Capitalism comes sweeping in over such deliberately pronounced islands of pronouns. This is where the sweep of the “IT” comes in. And this is emancipation of the highest order, where teleology would be replaced by eschatology, and alienation with emancipation. Teleology is alienating, whereas eschatology is emancipating. Agency would become un-agency. This is an emancipation from alienation, from being, into the arms of becoming, for the former is a mere snapshot of the illusory order, whereas the latter is a continuum of fluidity, the fluid dynamics of the deracinated from the illusory order. The “IT” is pure and brute materialism, the cosmic unfoldings beyond our understanding and, importantly, mirrored in the terrestrial. The “IT” is not to be realized. “IT” is what engulfs us, kills us, and in the process emancipates us from alienation. “IT” is “Realism”, a philosophy without “we”, Capitalism’s excessive power. “IT” enslaves “us” to the point of our losing any identification. In a nutshell, the theory of capital is a catalogue of heresies to be welcomed, from the vantage of an intention to emancipate economic thought from the etherealized spheres of choice and behaviors, or from the paradigm of disembodied minds.

Jonathan Nitzan and Shimshon Bichler’s Capital as Power: A Study of Order and Creorder


Symmetrical – Asymmetrical Dialectics Within Catastrophical Dynamics. Thought of the Day 148.0


Catastrophe theory has been developed as a deterministic theory for systems that may respond to continuous changes in control variables by a discontinuous change from one equilibrium state to another. A key idea is that the system under study is driven towards an equilibrium state. The behavior of the dynamical systems under study is completely determined by a so-called potential function, which depends on behavioral and control variables. The behavioral, or state, variable describes the state of the system, while the control variables determine its behavior. The dynamics under catastrophe models can become extremely complex, and according to Thom’s classification there are seven different families, based on the number of control and dependent variables.

Let us suppose that the process yt evolves over t = 1,…, T as

dyt = −(dV(yt; α, β)/dyt) dt —– (1)

where V(yt; α, β) is the potential function describing the dynamics of the state variable yt, controlled by the parameters α and β determining the system. When the right-hand side of (1) equals zero, −dV(yt; α, β)/dyt = 0, the system is in equilibrium. If the system is at a non-equilibrium point, it will move back towards its equilibrium, where the potential function takes its minimum value with respect to yt. While the concept of a potential function is very general, i.e. it can be quadratic, yielding an equilibrium on a simple flat response surface, one of the most widely applied potential functions in the behavioral sciences, the cusp potential function, is defined as

−V(yt; α, β) = −1/4yt⁴ + 1/2βyt² + αyt —– (2)

with equilibria at

−dV(yt; α, β)/dyt = −yt³ + βyt + α —– (3)

being equal to zero. The two dimensions of the control space, α and β, further depend on realizations of i = 1, …, n independent variables xi,t. Thus it is convenient to think of them as functions

αx = α0 + α1x1,t +…+ αnxn,t —– (4)

βx = β0 + β1x1,t +…+ βnxn,t —– (5)

The control functions αx and βx are called normal and splitting factors, or asymmetry and bifurcation factors, respectively, and they determine the predicted values of yt given xi,t. This means that for each combination of values of the independent variables there may be up to three predicted values of the state variable, given by the roots of

−dV(yt; αx, βx)/dyt = −yt³ + βxyt + αx = 0 —– (6)

This equation has one solution if

δx = 1/4αx² − 1/27βx³ —– (7)

is greater than zero, δx > 0, and three solutions if δx < 0. This construction can serve as a statistic for bimodality, one of the catastrophe flags. The set of values for which the discriminant equals zero, δx = 0, is the bifurcation set, which determines the set of singularity points in the system. In the case of three roots, the central root is called an “anti-prediction” and is the least probable state of the system. Inside the bifurcation set, when δx < 0, the surface predicts two possible values of the state variable, which means that the state variable is bimodal in this case.
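
As a minimal numerical sketch of (6)–(7), assuming the cusp specification above and purely illustrative values of the control factors αx and βx, the equilibria and the discriminant can be computed directly; the sign of δx then signals one versus three equilibria:

```python
import numpy as np

def cusp_equilibria(alpha_x, beta_x):
    """Real roots of -y^3 + beta_x*y + alpha_x = 0 and the discriminant (7)."""
    delta_x = 0.25 * alpha_x**2 - beta_x**3 / 27.0
    # same zero set as (6): roots of y^3 - beta_x*y - alpha_x = 0
    roots = np.roots([1.0, 0.0, -beta_x, -alpha_x])
    real_roots = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
    return delta_x, real_roots

# Illustrative control values: delta_x > 0 gives one equilibrium,
# delta_x < 0 gives three (the bimodality flag).
print(cusp_equilibria(alpha_x=0.5, beta_x=-1.0))
print(cusp_equilibria(alpha_x=0.1, beta_x=1.5))
```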


Most systems in the behavioral sciences are subject to noise stemming from measurement errors or from the inherent stochastic nature of the system under study. Thus, for real-world applications, it is necessary to add non-deterministic behavior to the system. As catastrophe theory has primarily been developed to describe deterministic systems, it may not be obvious how to extend the theory to stochastic systems. An important bridge is provided by Itô stochastic differential equations, which establish a link between the potential function of a deterministic catastrophe system and the stationary probability density function of the corresponding stochastic process. Adding a stochastic Gaussian white noise term to the system gives

dyt = −(dV(yt; αx, βx)/dyt) dt + σytdWt —– (8)

where −dV(yt; αx, βx)/dyt is the deterministic term, or drift function, representing the equilibrium state of the cusp catastrophe, σyt is the diffusion function and Wt is a Wiener process. When the diffusion function is constant, σyt = σ, and the measurement scale is not nonlinearly transformed, the stochastic potential function is proportional to the deterministic potential function, and the probability distribution function corresponding to the solution of (8) converges to the probability distribution function of a limiting stationary stochastic process, since the dynamics of yt are assumed to be much faster than the changes in xi,t. The probability density that describes the distribution of the system’s states at any t is then

fs(y|x) = ψ exp[((−1/4)y⁴ + (βx/2)y² + αxy)/σ] —– (9)

The constant ψ normalizes the probability distribution function so that its integral over the entire range equals one. As the bifurcation factor βx changes from negative to positive, fs(y|x) changes its shape from unimodal to bimodal. The asymmetry factor αx, on the other hand, induces asymmetry in fs(y|x).
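
A short sketch of (9), with illustrative parameter values and the normalizing constant ψ replaced by a numerical normalization, shows the switch from a unimodal to a bimodal density as βx crosses zero:

```python
import numpy as np

def cusp_density(y, alpha_x, beta_x, sigma=1.0):
    """Stationary density (9) on a grid; psi is obtained by numerical normalization."""
    pot = -0.25 * y**4 + 0.5 * beta_x * y**2 + alpha_x * y
    f = np.exp(pot / sigma)
    return f / (f.sum() * (y[1] - y[0]))   # numerical stand-in for psi

y = np.linspace(-3.0, 3.0, 2001)
for beta_x in (-1.0, 1.5):                  # bifurcation factor below / above zero
    f = cusp_density(y, alpha_x=0.1, beta_x=beta_x)
    # count interior local maxima: 1 means unimodal, 2 means bimodal
    modes = int(np.sum((f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])))
    print(beta_x, modes)
```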

Bullish or Bearish. Note Quote.


The term spread refers to the difference in premiums between the purchase and sale of options. An option spread is the simultaneous purchase of one or more options contracts and the sale of an equivalent number of options contracts in a different series of the same class of options. A spread could involve the same underlying:

  •  Buying and selling calls, or 
  •  Buying and selling puts.

Combining puts and calls into groups of two or more makes it feasible to design derivatives with interesting payoff profiles. The profit and loss outcomes depend on the options used (puts or calls); the positions taken (long or short); whether the strike prices are identical or different; and whether the exercise dates are the same or different. Among the directional positions are bullish vertical call spreads, bullish vertical put spreads, bearish vertical call spreads, and bearish vertical put spreads.

If the long position has a higher premium than the short position, this is known as a debit spread, and the investor will be required to deposit the difference in premiums. If the long position has a lower premium than the short position, this is a credit spread, and the investor will be allowed to withdraw the difference in premiums. The spread is even if the premiums on the two sides are the same.

A potential loss in an option spread is determined by two factors: 

  • Strike price 
  • Expiration date 

If the strike price of the long call is greater than the strike price of the short call, or if the strike price of the long put is less than the strike price of the short put, a margin is required because adverse market moves can cause the short option to suffer a loss before the long option can show a profit.

A margin is also required if the long option expires before the short option. The reason is that once the long option expires, the trader holds an unhedged short position. A good way of looking at margin requirements is that they foretell potential loss. Here are, in a nutshell, the main option spreads.

A calendar, horizontal, or time spread is the simultaneous purchase and sale of options of the same class with the same exercise prices but with different expiration dates. A vertical, or price or money, spread is the simultaneous purchase and sale of options of the same class with the same expiration date but with different exercise prices. A bull call spread is a type of vertical spread that involves the purchase of the call option with the lower exercise price while selling the call option with the higher exercise price. The result is a debit transaction because the lower exercise price will have the higher premium.

  • The maximum risk is the net debit: the long option premium minus the short option premium. 
  • The maximum profit potential is the difference in the strike prices minus the net debit. 
  • The breakeven is equal to the lower strike price plus the net debit. 

A trader will typically buy a vertical bull call spread when he is mildly bullish. Essentially, he gives up unlimited profit potential in return for reducing his risk. In a vertical bull call spread, the trader is expecting the spread premium to widen because the lower strike price call comes into the money first. 
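
A minimal sketch of the arithmetic above, using hypothetical strikes and premiums rather than market data, for the debit (bull call) spread and its expiry payoff:

```python
def bull_call_spread(long_strike, long_premium, short_strike, short_premium):
    """Metrics of a debit (bull call) vertical spread; premiums per share."""
    assert long_strike < short_strike, "buy the lower strike, sell the higher strike"
    net_debit = long_premium - short_premium          # lower strike carries the higher premium
    return {
        "net_debit": net_debit,                       # maximum risk
        "max_profit": (short_strike - long_strike) - net_debit,
        "breakeven": long_strike + net_debit,
    }

def payoff_at_expiry(spot, long_strike, short_strike, net_debit):
    """Profit or loss of the spread at expiry for a given underlying price."""
    return max(spot - long_strike, 0) - max(spot - short_strike, 0) - net_debit

# Hypothetical example: buy the 100 call at 5.00, sell the 105 call at 2.00
m = bull_call_spread(100, 5.0, 105, 2.0)
print(m)   # {'net_debit': 3.0, 'max_profit': 2.0, 'breakeven': 103.0}
print([payoff_at_expiry(s, 100, 105, m["net_debit"]) for s in (95, 103, 110)])  # [-3.0, 0.0, 2.0]
```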

Vertical spreads are the more common of the directional strategies, and they may be bullish or bearish to reflect the holder’s view of the market’s anticipated direction. Bullish vertical put spreads are a combination of a long put with a low strike and a short put with a higher strike. Because the short position is struck closer to the money, this generates a premium credit.

Bearish vertical call spreads are the inverse of bullish vertical call spreads. They are created by combining a short call with a low strike and a long call with a higher strike. Bearish vertical put spreads are the inverse of bullish vertical put spreads, generated by combining a short put with a low strike and a long put with a higher strike. This is a bearish position taken when a trader or investor expects the market to fall. 

The bull put spread, or sell put spread, is a type of vertical spread involving the purchase of a put option with the lower exercise price and the sale of a put option with the higher exercise price. Directionally, this is the same stance a bull call spreader would take. The difference between the call spread and the put spread is that the net result will be a credit transaction, because the higher exercise price will have the higher premium; a numerical sketch of the resulting figures follows the list below.

  • The maximum risk is the difference in the strike prices minus the net credit. 
  • The maximum profit potential equals the net credit. 
  • The breakeven equals the higher strike price minus the net credit. 
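
A companion sketch for the credit (bull put) spread, again with hypothetical strikes and premiums, reproducing the three figures listed above:

```python
def bull_put_spread(short_strike, short_premium, long_strike, long_premium):
    """Metrics of a credit (bull put) vertical spread: sell the higher-strike put, buy the lower-strike put."""
    assert long_strike < short_strike, "buy the lower strike, sell the higher strike"
    net_credit = short_premium - long_premium         # higher strike put carries the higher premium
    return {
        "net_credit": net_credit,                     # maximum profit
        "max_risk": (short_strike - long_strike) - net_credit,
        "breakeven": short_strike - net_credit,
    }

# Hypothetical example: sell the 105 put at 6.00, buy the 100 put at 3.00
print(bull_put_spread(105, 6.0, 100, 3.0))
# {'net_credit': 3.0, 'max_risk': 2.0, 'breakeven': 102.0}
```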

The bear or sell call spread involves selling the call option with the lower exercise price and buying the call option with the higher exercise price. The net result is a credit transaction because the lower exercise price will have the higher premium.

A bear put spread (or buy put spread) involves selling the put option with the lower exercise price and buying the put option with the higher exercise price. Directionally, this is the same stance a bear call spreader would take. The difference between the call spread and the put spread, however, is that the net result will be a debit transaction because the higher exercise price will have the higher premium.

  • The maximum risk is equal to the net debit. 
  • The maximum profit potential is the difference in the strike
    prices minus the net debit. 
  • The breakeven equals the higher strike price minus the net debit.

An investor or trader would buy a vertical bear put spread because he or she is mildly bearish, giving up an unlimited profit potential in return for a reduction in risk. In a vertical bear put spread, the trader is expecting the spread premium to widen because the higher strike price put comes into the money first. 

In conclusion, investors and traders who are bullish on the market will either buy a bull call spread or sell a bull put spread. But those who are bearish on the market will either buy a bear put spread or sell a bear call spread. When the investor pays more for the long option than she receives in premium for the short option, then the spread is a debit transaction. In contrast, when she receives more than she pays, the spread is a credit transaction. Credit spreads typically require a margin deposit. 

The Plantation Labour Act, 1951. Random Musings.


The Plantation Labour Act, 1951 provides for the welfare of plantation labour and regulates the conditions of work in plantations. According to the Act, the term ‘plantation’ means “any plantation to which this Act, whether wholly or in part, applies and includes offices, hospitals, dispensaries, schools, and any other premises used for any purpose connected with such plantation, but does not include any factory on the premises to which the provisions of the Factories Act, 1948 apply.”

The Act applies to any land used as a plantation which measures 5 hectares or more and in which 15 or more persons are working. However, the State Governments are free to declare any plantation of less than 5 hectares, or with fewer than 15 persons working, to be covered by the Act.

The Act provides that no adult worker and no adolescent or child shall be employed for more than 48 hours and 27 hours a week respectively, and every worker is entitled to a day of rest in every period of 7 days. In every plantation covered under the Act, medical facilities for the workers and their families are to be made readily available. The Act also provides for the setting up of canteens, creches, recreational facilities, suitable accommodation and educational facilities for the benefit of plantation workers in and around the workplaces in the plantation estate. Its amendment in 1981 provided for compulsory registration of plantations.

The Act is administered by the Ministry of Labour through its Industrial Relations Division. The Division is concerned with improving the institutional framework for dispute settlement and amending labour laws relating to industrial relations. It works in close co-ordination with the Central Industrial Relations Machinery (CIRM) in an effort to ensure that the country gets a stable, dignified and efficient workforce, free from exploitation and capable of generating higher levels of output. The CIRM, which is an attached office of the Ministry of Labour, is also known as the Chief Labour Commissioner (Central) [CLC(C)] Organisation. The CIRM is headed by the Chief Labour Commissioner (Central).

In the case of the tea plantations, the responsibility for welfare measures has been given to their management. The Government of India imposed this responsibility on them through the Plantation Labour Act of 1951 (PLA). The Government of Assam gave it concrete shape in the Assam Plantation Labour Rules, 1956. The Act provided for certain welfare measures for the workers and imposed restrictions on working hours: 54 per week for adults and 44 per week for non-adults. The employers are also to attend to the health aspect: provide adequate drinking water, latrines and urinals separately for men and women for every 50 acres of land under cultivation, and properly maintain the drinking water and sanitation system. The employer is also to provide a garden hospital for estates with more than 500 workers, or have a lien of 15 beds for every 1,000 workers in a neighbouring hospital within a distance of five kilometres. The gardens are also to have a group hospital in a sub-area considered central for the people and provide transport for the patients. Along with the canteen facility, a well-furnished, lighted and ventilated crèche for children below 2 is to be provided in gardens with more than 50 women workers. An open playground is to be provided for children above 2. The workers are to be provided with recreational facilities such as community radio and TV sets and indoor games.

Specific to the PLA is the clause on educational facilities. If the number of children in the 6-12 age group exceeds 25 the employer should provide and maintain at least a primary school for imparting primary education to them. The school should have facilities such as a building in accordance with the guidelines and standard plans of the Education Department. If the garden does not maintain a school because a public school is situated within a mile from the garden then the employer is to pay a cess or tax for the children’s primary education. 

The tea plantation workers are still paid wages below the minimum wage of agricultural workers. The industry is highly capitalistic in character, owing to the colonial period when British private businesses, with the extended involvement of British capital, expanded it from the vantage point of international marketing and financial activities; continuing in formats no different in kind post-independence, it bifurcates wages partly into cash and partly into kind. Even if there has been a numerical increase in wages for plantation workers post-independence, qualitatively this has not brought any substantial improvement, thanks to only minute upward fluctuations in real wages. What this has amounted to is a continuation of feudal relations of production and a highly structured organization of production in its pre-marketing phases, and thus the expropriation of super-profits on the basis of semi-feudal, extra-economic coercion and exploitation.

The literacy rate among the tea garden workers and their families is a poor 20 per cent. Around one-third of the workforce is denied housing facilities. Every year, hundreds of people in the plantations die from water-borne diseases like gastro-enteritis and cholera. Most of the plantations have no potable drinking water facilities and drainage systems.

The majority of the workers are suffering from anaemia and tuberculosis. Malaria is rampant. There are tea gardens where at least one in every family is suffering from tuberculosis. And the children and women are the worst affected. The infant mortality rate is very high, far above the state and national averages. 

The ethnicity of the tea workforce is probably one reason why nobody cares. A significant percentage of the tea plantation workers of Assam and West Bengal are tribals, fourth-generation descendants of indentured migrants from the Central and South-Central Indian tribal heartland. In Assam, they do not enjoy any special status, as their brethren elsewhere do. They are merely referred to as the tea labour and ex-tea labour community. The children cannot avail of any reservation facility in educational institutions, and the youth do not enjoy any opportunity in the employment circuit. Most of the time, education begins and ends with lower primary schools housed within the gardens themselves. In other words, being coerced into plantation labour at the cost of continuing education is nothing uncommon. After getting sucked into the plantation, this young labour force, owing to a lack of skilled exposure and an almost complete absence of alternative employment opportunities, only adds credence to the epitome of modern-day bonded labour: forced and unfree in nature. Even with the institution of labour laws and the PLA in the tea plantation industry, it is the women who have been the prime target of deprivation and exploitation. They have been subjected to long working hours and heavy workloads. Even pregnant women are not spared from activities like deep hoeing. The majority of the temporary workers today are women. For them, social welfare benefits under the PLA, including maternity and medical benefits, do not exist.

The tea plantation industry is amongst the largest organized industries in India, and its workers are unionized. In West Bengal, there are upwards of 30 unions, whereas in Assam the mantle of workers’ representation over the last five decades has been vested in the Assam Cha Mazdoor Sangh (ACMS). ACMS happens to be the only registered union, even though some others have central trade union affiliations. Despite strong unionization, implementation of the PLA remains weak, with not a single plantation boasting total implementation. One major implication of this lack is reflected in the dominion of the tea industry associations, which manoeuvre wage agreements. With hardly any promotional avenues opening up for the large majority of unskilled workers, workers across ages and experience levels receive the same wages and are classified as daily wage workers. The last few decades of wage agreements show that the tea employers have not conceded any major demand of the trade unions. The tea associations have also not agreed to a CPI-linked variable Dearness Allowance. Nearly 40 per cent of the workers in the tea plantations of West Bengal and Assam are temporary and casual workers, and their growing numbers fall outside the ambit of the PLA. That the tea industry is reaping all the benefits without investing a unit of currency on a large section of its workforce is a direct consequence of this fact.

The agreements in West Bengal are tripartite, in that the union, the tea industry association and the government work out the agreement, whereas they are bipartite in Assam, where the government is not a party. The long-term understanding with the Indian National Trade Union Congress (INTUC) affiliated ACMS has given the Assam employers a clear domination and stranglehold over the industry. Officially, there is no labour unrest, industrial relations remain generally peaceful and ACMS, understandably, ‘co-operates with the industry’. In West Bengal, however, any demand by the workers and the unions that is termed unfair by the industry is either flatly rejected, or is repeatedly discussed by the tea industry in a series of consultations, mainly a delaying tactic, until the unions are fed up and ask the government to intervene. Even then, there is a lot of resentment amongst the workers, but the very threat to their survival forces them to keep quiet and accept the verdict. For a tea plantation worker whose forefathers were indentured immigrants, and who was born and brought up inside the tea gardens, dismissal means not only the loss of livelihood but a threat to their general existence. It is therefore very evident that even with complete unionization, positive interventions on behalf of the workers are confined to the micro-scale, and any extrapolation to the macro-scale doesn’t really help beat seclusion and isolation. But what is really ironic is that these unions have remained the workers’ only link to the outside world, albeit in a manner that hasn’t concretely contributed to their cause.

The trade unions in the tea industry are operating under the same hierarchical and organizational setup masterminded and practiced by the planters right from the colonial days. Beyond a point, logic says that they will never be able to confront the tea industry and struggle for the betterment and uplift of the tea workers. The trade unions thus have miles to go, starting foremost with the politics of architecture: revamping organizational structures and hierarchies in favour of the workers, so as to be able to survive and discharge their responsibilities towards the tea plantation workers.

The Natural Theoretic of Electromagnetism. Thought of the Day 147.0


In Maxwell’s theory, the field strength F = 1/2Fμν dxμ ∧ dxν is a real 2-form on spacetime, and thence a natural object at the same time. The homogeneous Maxwell equation dF = 0 is an equation involving forms and it has a well-known local solution F = dA’, i.e. there exists a local spacetime 1-form A’ which is a potential for the field strength F. Of course, if spacetime is contractible, as e.g. for Minkowski space, the solution is also a global one. As is well-known, in the non-commutative Yang-Mills case the field strength F = 1/2FAμν TA ⊗ dxμ ∧ dxν is no longer a spacetime form. This is a somewhat trivial remark since the transformation laws of such a field strength are obtained as the transformation laws of the curvature of a principal connection with values in the Lie algebra of some (semisimple) non-Abelian Lie group G (e.g. G = SU(n), n ≥ 2). However, the common belief that electromagnetism is to be intended as the particular case (for G = U(1)) of a non-commutative theory is not really physically evident. Even if we subscribe to this common belief, which is motivated also by the tremendous success of the quantized theory, let us for a while discuss electromagnetism as a standalone theory.

From a mathematical viewpoint this is a (different) approach to electromagnetism, and the choice between the two can be dealt with on physical grounds only. Of course the 1-form A’ is defined modulo a closed form, i.e. locally A” = A’ + dα is another solution.

How can one decide whether the potential of electromagnetism should be considered as a 1-form or rather as a principal connection on a U(1)-bundle? First of all we notice that by a standard hole argument (one can easily define compactly supported closed 1-forms, e.g. by choosing the differentials of compactly supported functions, which always exist on a paracompact manifold) the potentials A’ and A” represent the same physical situation. On the other hand, from a mathematical viewpoint we would like the dynamical field, i.e. the potential A’, to be a global section of some suitable configuration bundle. This requirement is a mathematical one, motivated by the wish for a well-defined geometrical perspective based on global Variational Calculus.

The first mathematical way out is to restrict attention to contractible spacetimes, where A’ may always be chosen to be global. Then one can require the gauge transformations A” = A’ + dα to be Lagrangian symmetries. In this way, field equations select a whole equivalence class of gauge-equivalent potentials, a procedure which solves the hole argument problem. In this picture the potential A’ is really a 1-form, which can be dragged along spacetime diffeomorphisms and which admits the ordinary Lie derivatives of 1-forms. Unfortunately, the restriction to contractible spacetimes is physically unmotivated and probably wrong.

Alternatively, one can restrict electromagnetic fields F, deciding that only exact 2-forms F are allowed. That actually restricts the observable physical situations, by changing the homogeneous Maxwell equations (i.e. Bianchi identities) by requiring that F is not only closed but exact. One should in principle be able to empirically reject this option.

On non-contractible spacetimes, one is necessarily forced to resort to a more “democratic” attitude. The spacetime is covered by a number of patches Uα. On each patch Uα one defines a potential A(α). In the intersection of two patches the two potentials A(α) and A(β) may not agree. In each patch, in fact, the observer chooses his own conventions and finds a different representative of the electromagnetic potential, which is related by a gauge transformation to the representatives chosen in the neighbouring patch(es). Thence we have a family of gauge transformations, one in each intersection Uαβ, which obey cocycle identities. If one recognizes in them the action of U(1) then one can build a principal bundle P = (P, M, π; U(1)) and interpret the ensuing potential as a connection on P. This leads the way to the gauge natural formalism.

Anyway this does not close the matter. One can investigate if and when the principal bundle P, in addition to the obvious principal structure, can also be endowed with a natural structure. If that were possible then the bundle of connections Cp (which is associated to P) would also be natural. The problem of deciding whether a given gauge natural bundle can be endowed with a natural structure is quite difficult in general and no full theory is yet completely developed in mathematical terms. That is to say, there is no complete classification of the topological and differential geometric conditions which a principal bundle P has to satisfy in order to ensure that, among the principal trivializations which determine its gauge natural structure, one can choose a sub-class of trivializations which induce a purely natural bundle structure. Nor is it clear how many inequivalent natural structures a good principal bundle may support. There are, though, important examples of bundles which support a natural and a gauge natural structure at the same time. Actually any natural bundle is associated to some frame bundle L(M), which is principal; thence each natural bundle is also gauge natural in a trivial way. Since on any paracompact manifold one can choose a global Riemannian metric g, the corresponding tangent bundle T(M) can be associated to the orthonormal frame bundle O(M, g) besides being obviously associated to L(M). Thence the natural bundle T(M) may also be endowed with a gauge natural bundle structure with structure group O(m). And if M is orientable the structure can be further reduced to a gauge natural bundle with structure group SO(m).

Roughly speaking, the task is achieved by imposing restrictions on the cocycles which generate T(M), i.e. by imposing a privileged class of changes of local laboratories and sets of measures: the cocycle ψ(αβ) is required to take its values in O(m) rather than in the larger group GL(m). Inequivalent gauge natural structures are in one-to-one correspondence with (non-isometric) Riemannian metrics on M. Actually, whenever there is a Lie group homomorphism ρ : GL(m) → G onto some given Lie group G, we can build a natural G-principal bundle on M. In fact, let (Uα, ψ(α)) be an atlas of the given manifold M, ψ(αβ) its transition functions and jψ(αβ) the induced transition functions of L(M). Then we can define a G-valued cocycle on M by setting ρ(jψ(αβ)) and thence a (unique up to fibered isomorphisms) G-principal bundle P(M) = (P(M), M, π; G). The bundle P(M), as well as any gauge natural bundle associated to it, is natural by construction. Now we can define a whole family of natural U(1)-bundles Pq(M) by using the bundle homomorphisms

ρq: GL(m) → U(1): J ↦ exp(iq ln det|J|) —– (1)

where q is any real number and ln denotes the natural logarithm. In the case q = 0 the image of ρ0 is the trivial group {I}, and all the induced bundles are trivial, i.e. P = M x U(1).
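
A small numerical aside (a sketch, with an arbitrarily chosen dimension m and weight q): the map ρq in (1) is a group homomorphism precisely because the determinant is multiplicative, which is what makes ρq(jψ(αβ)) a U(1)-valued cocycle:

```python
import numpy as np

def rho_q(J, q):
    """rho_q(J) = exp(i q ln|det J|), a U(1)-valued map on invertible matrices."""
    return np.exp(1j * q * np.log(abs(np.linalg.det(J))))

rng = np.random.default_rng(0)
m, q = 4, 0.7                                   # illustrative dimension and weight
J1, J2 = rng.normal(size=(m, m)), rng.normal(size=(m, m))

# Homomorphism (cocycle) property: rho_q(J1 J2) = rho_q(J1) rho_q(J2),
# a direct consequence of det(J1 J2) = det(J1) det(J2).
print(np.isclose(rho_q(J1 @ J2, q), rho_q(J1, q) * rho_q(J2, q)))   # True
```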

The natural lift φ’ of a diffeomorphism φ: M → M is given by

φ'[x, e]α = [φ(x), exp(iq ln det|J|) · e]α —– (2)

where J is the Jacobian of the morphism φ. The bundles Pq(M) are all trivial since they allow a global section. In fact, on any manifold M one can define a global Riemannian metric g, and the local sections it induces glue together into a global one.

Since the bundles Pq(M) are all trivial, they are all isomorphic to M x U(1) as principal U(1)-bundles, though in a non-canonical way unless q = 0. Any two of the bundles Pq1(M) and Pq2(M) for two different values of q are isomorphic as principal bundles but the isomorphism obtained is not the lift of a spacetime diffeomorphism because of the two different values of q. Thence they are not isomorphic as natural bundles. We are thence facing a very interesting situation: a gauge natural bundle C associated to the trivial principal bundle P can be endowed with an infinite family of natural structures, one for each q ∈ R; each of these natural structures can be used to regard principal connections on P as natural objects on M and thence one can regard electromagnetism as a natural theory.

Now that the mathematical situation has been a little bit clarified, it is again a matter of physical interpretation. One can in fact restrict to electromagnetic potentials which are a priori connections on a trivial structure bundle P ≅ M x U(1) or to accept that more complicated situations may occur in Nature. But, non-trivial situations are still empirically unsupported, at least at a fundamental level.

Gauge Fixity Towards Hyperbolicity: The Case For Equivalences. Part 2.


The Lagrangian has in fact to depend on reference backgrounds in a quite peculiar way, so that a reference background cannot interact with any other physical field, otherwise its effect would be observable in a laboratory….

Let then Γ’ be any (torsionless) reference connection. We introduce the following relative quantities, which are both tensors:

qμαβ = Γμαβ – Γ’μαβ

wμαβ = uμαβ – u’μαβ —– (1)

For any linear torsionless connection Γ’, the Hilbert-Einstein Lagrangian

LH: J2Lor(M) → Am(M)

LH: LH(gαβ, Rαβ)ds = 1/2κ (R – 2∧)√g ds

can be covariantly recast as:

LH = dα(Pβμuαβμ)ds + 1/2κ[gβμ(ΓρβσΓσρμ – ΓαασΓσβμ) – 2∧]√g ds

= dα(Pβμwαβμ)ds + 1/2κ[gβμ(R’βμ + qρβσqσρμ – qαασqσβμ)  – 2∧]√g ds —– (2)

The first expression for LH shows that Γ’ (or g’, if Γ’ are assumed a priori to be Christoffel symbols of the reference metric g’) has no dynamics, i.e. field equations for the reference connection are identically satisfied (since any dependence on it is hidden under a divergence). The second expression shows instead that the same Einstein equations for g can be obtained as the Euler-Lagrange equation for the Lagrangian:

L1 = 1/2κ[gβμ(R’βμ + qρβσqσρμ – qαασqσβμ)  – 2∧]√g ds —– (3)

which is first order in the dynamical field g and is covariant since q is a tensor. The two Lagrangians LH and L1 are thence said to be equivalent, since they provide the same field equations.
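
The equivalence rests on the standard fact that the two Lagrangians differ by a pure divergence, which contributes only a boundary term to the action and hence nothing to the Euler-Lagrange equations; schematically (a reminder, in the notation of (2), for variations vanishing on the boundary ∂D):

```latex
\delta \int_{D} \mathrm{d}_{\alpha}\!\left(P^{\beta\mu}\, w^{\alpha}{}_{\beta\mu}\right) ds
  \;=\; \int_{\partial D} \delta\!\left(P^{\beta\mu}\, w^{\alpha}{}_{\beta\mu}\right) ds_{\alpha}
  \;=\; 0 ,
```

so that LH and L1 = LH − dα(Pβμwαβμ)ds yield identical field equations for g.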

In order to define the natural theory, we will have to declare our attitude towards the reference field Γ’. One possibility is to mimic the procedure used in Yang-Mills theories, i.e. to restrict to variations which keep the reference background fixed. Alternatively, we can consider Γ’ (or g’) as a dynamical field exactly as g is, even though the reference is not endowed with a physical meaning. In other words, we consider arbitrary variations and arbitrary transformations even if we declare that g is “observable” and genuinely related to the gravitational field, while Γ’ is not observable and just sets the reference level of conserved quantities. A further important role played by Γ’ is that it allows covariance of the first order Lagrangian L1. No first order Lagrangian for Einstein equations exists, in fact, if one does not allow the existence of a reference background field (a connection or something else, e.g. a metric or a tetrad field). To obtain a good and physically sound theory out of the Lagrangian L1, we still have to improve its dependence on the reference background Γ’. For brevity’s sake, let us assume that Γ’ is the Levi-Civita connection of a metric g’, which thence becomes the reference background. Let us also assume (even if this is not at all necessary) that the reference background g’ is Lorentzian. We shall introduce a dynamics for the reference background g’ (thus transforming its Levi-Civita connection into a truly dynamical connection) by considering a new Lagrangian:

L1B = 1/2κ[√g(R – 2∧) – dα(√g gμνwαμν) – √g'(R’ – 2∧)]ds

= 1/2κ[(R’ – 2∧)(√g – √g’) + √g gβμ(qρβσqσρμ – qαασqσβμ)]ds —– (4)

which is obtained from L1 by subtracting the kinetic term (R’ – 2∧) √g’. The field g’ is no longer undetermined by field equations, but it has to be a solution of the variational equations for L1B w. r. t. g, which coincide with Einstein field equations. Why should a reference field, which we pretend not to be observable, obey some field equation? Field equations are here functional to the role that g’ plays in our framework. If g’ has to fix the zero value of conserved quantities of g which are relative to the reference configuration g’ it is thence reasonable to require that g’ is a solution of Einstein equations as well. Under this assumption, in fact, both g and g’ represent a physical situation and relative conserved quantities represent, for example, the energy “spent to go” from the configuration g’ to the configuration g. To be strictly precise, further hypotheses should be made to make the whole matter physically meaningful in concrete situations. In a suitable sense we have to ensure that g’ and g belong to the same equivalence class under some (yet undetermined equivalence relation), e.g. that g’ can be homotopically deformed onto g or that they satisfy some common set of boundary (or asymptotic) conditions.

Consider the Lagrangian L1B as a function of the two dynamical fields g and g’; it is first order in g and second order in g’. The field g is endowed with a physical meaning ultimately related to the gravitational field, while g’ is not observable and provides at once covariance and the zero level of conserved quantities. Moreover, deformations will be ordinary (unrestricted) deformations of both g’ and g, and symmetries will drag both g’ and g. Of course, a natural framework has to be absolute to make sense; any further trick or limitation eventually destroys the naturality. The Lagrangian L1B is thence a Lagrangian

L1B : J2Lor(M) xM J1Lor(M) → Am(M)

Gauge Fixity Towards Hyperbolicity: General Theory of Relativity and Superpotentials. Part 1.


The gravitational field is described by a pseudo-Riemannian metric g (with Lorentzian signature (1, m-1)) over the spacetime M of dimension dim(M) = m; in standard General Relativity, m = 4. The configuration bundle is thence the bundle of Lorentzian metrics over M, denoted by Lor(M). The Lagrangian is second order and it is usually chosen to be the so-called Hilbert Lagrangian:

LH: J2Lor(M) → Am(M)

LH: LH(gαβ, Rαβ)ds = 1/2κ (R – 2∧)√g ds —– (1)

where

R = gαβ Rαβ denotes the scalar curvature, √g the square root of the absolute value of the metric determinant and ∧ is a real constant (called the cosmological constant). The coupling constant 1/2κ, which is completely irrelevant until the gravitational field is coupled to some other field, depends on conventions; in natural units, i.e. c = 1, h = 1, G = 1, in dimension 4 and with signature (+, –, –, –) one has κ = –8π.

Field equations are the well known Einstein equations with cosmological constant

Rαβ – 1/2 Rgαβ = -∧gαβ —— (2)
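
As a concrete sanity check of (2) in the vacuum case with ∧ = 0, the following sympy sketch (the Schwarzschild metric is used here purely as an assumed test solution) computes the Christoffel symbols and the Ricci tensor and verifies that Rαβ vanishes, so that (2) is satisfied identically:

```python
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
coords = [t, r, th, ph]
f = 1 - 2 * M / r
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)   # Schwarzschild metric (test solution)
ginv = g.inv()

def christoffel(a, b, c):
    """Levi-Civita connection coefficients Gamma^a_{bc} of g."""
    return sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], coords[c])
                                         + sp.diff(g[d, c], coords[b])
                                         - sp.diff(g[b, c], coords[d]))
                           for d in range(4)) / 2)

Gamma = [[[christoffel(a, b, c) for c in range(4)] for b in range(4)] for a in range(4)]

def ricci(b, c):
    """Ricci tensor R_{bc} built from the connection coefficients."""
    return sp.simplify(sum(sp.diff(Gamma[a][b][c], coords[a]) - sp.diff(Gamma[a][b][a], coords[c])
                           + sum(Gamma[a][a][d] * Gamma[d][b][c]
                                 - Gamma[a][c][d] * Gamma[d][b][a] for d in range(4))
                           for a in range(4)))

Ric = sp.Matrix(4, 4, lambda b, c: ricci(b, c))
print(Ric)   # expected: the zero matrix, i.e. the vacuum Einstein equations hold
```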

The Lagrangian momenta are defined by:

pαβ = ∂LH/∂gαβ = 1/2κ (Rαβ – 1/2(R – 2∧)gαβ)√g

Pαβ = ∂LH/∂Rαβ = 1/2κ gαβ√g —– (3)

Thus the covariance identity is the following:

dα(LHξα) = pαβ£ξgαβ + Pαβ£ξRαβ —– (4)

or equivalently,

dα(LHξα) = pαβ£ξgαβ + Pαβ∇ε(£ξΓεαβ – δεβ£ξΓλαλ) —– (5)

where ∇ε denotes the covariant derivative with respect to the Levi-Civita connection of g. Thence we have a weak conservation law for the Hilbert Lagrangian

Div ε(LH, ξ) = W(LH, ξ) —– (6)

Conserved currents and work forms have respectively the following expressions:

ε(LH, ξ) = [Pαβ£ξΓεαβ – Pαε£ξΓλαλ – LHξε]dsε = √g/2κ(gαβgεσ – gσβgεα) ∇α£ξgβσdsε – √g/2κ ξεR dsε = √g/2κ[(3/2Rαλ – (R – 2∧)δαλ)ξλ + (gβγδαλ – gα(γδβ)λ)∇βγξλ]dsα —– (7)

W(LH, ξ) = √g/κ(Rαβ – 1/2(R – 2∧)gαβ)∇(αξβ)ds —– (8)

As any other natural theory, General Relativity allows superpotentials. In fact, the current can be recast into the form:

ε(LH, ξ) = ε'(LH, ξ) + Div U(LH, ξ) —– (9)

where we set

ε'(LH, ξ) = √g/κ(Rαβ – 1/2(R – 2∧)δαβ)ξβdsα

U(LH, ξ) = 1/2κ ∇[βξα] √gdsαβ —– (10)

The superpotential (10) generalizes, to an arbitrary vector field ξ, the well-known Komar superpotential, which is originally derived for timelike Killing vectors. Whenever spacetime is assumed to be asymptotically flat, the Komar superpotential is known to produce, upon integration at spatial infinity ∞, the correct value for angular momentum (e.g. for Kerr-Newman solutions) but just one half of the expected value of the mass. The classical prescriptions are in fact:

m = 2∫ U(LH, ∂t, g)

J = ∫ U(LH, ∂φ, g) —– (11)

For an asymptotically flat solution (e.g. the Kerr-Newman black hole solution) m coincides with the so-called ADM mass and J is the so-called (ADM) angular momentum. For the Kerr-Newman solution in polar coordinates (t, r, θ, φ) the vector fields ∂t and ∂φ are the Killing vectors which generate stationarity and axial symmetry, respectively. Thence, according to this prescription, U(LH, ∂φ) is the superpotential for J while 2U(LH, ∂t) is the superpotential for m. This is known as the anomalous factor problem for the Komar potential. To obtain the expected values for all conserved quantities from the same superpotential, one has to correct the superpotential (10) by some ad hoc additional boundary term. Equivalently and alternatively, one can deduce a corrected superpotential as the canonical superpotential for a corrected Lagrangian, which is in fact the first order Lagrangian for standard General Relativity. This can be done covariantly, provided that one introduces an extra connection Γ’αβμ. The need for a reference connection Γ’ should also be motivated by physical considerations, according to which the conserved quantities have no absolute meaning but are intrinsically relative to an arbitrarily fixed vacuum level. The simplest choice consists, in fact, in fixing a background metric g (not necessarily of the correct Lorentzian signature) and assuming Γ’ to be the Levi-Civita connection of g. This is rather similar to the gauge fixing à la Hawking which allows one to show that Einstein equations form in fact an essentially hyperbolic PDE system. Nothing prevents one, however, from taking Γ’ to be any (in principle torsionless) connection on spacetime; this also corresponds to a gauge fixing towards hyperbolicity.

Now, using the term background for a field which enters a field theory in the same way as the metric enters Yang-Mills theory, we see that a background has to be fixed once and for all and thence preserved, e.g. by symmetries and deformations. A background has no field equations since deformations fix it; it eventually destroys the naturality of a theory, since fixing the background results in allowing a smaller group of symmetries G ⊂ Diff(M). Accordingly, in truly natural field theories one should not consider background fields, whether they are endowed with a physical meaning (as the metric in Yang-Mills theory is) or not.

On the contrary we shall use the expression reference or reference background to denote an extra dynamical field which is not endowed with a direct physical meaning. As long as variational calculus is concerned, reference backgrounds behave in exactly the same way as other dynamical fields do. They obey field equations and they can be dragged along deformations and symmetries. It is important to stress that such a behavior has nothing to do with a direct physical meaning: even if a reference background obeys field equations this does not mean that it is observable, i.e. it can be measured in a laboratory. Of course, not any dynamical field can be treated as a reference background in the above sense. The Lagrangian has in fact to depend on reference backgrounds in a quite peculiar way, so that a reference background cannot interact with any other physical field, otherwise its effect would be observable in a laboratory….

The Canonical of a priori and a posteriori Variational Calculus as Phenomenologically Driven. Note Quote.


The expression variational calculus usually identifies two different but related branches in Mathematics. The first aimed to produce theorems on the existence of solutions of (partial or ordinary) differential equations generated by a variational principle and it is a branch of local analysis (usually in Rn); the second uses techniques of differential geometry to deal with the so-called variational calculus on manifolds.

The local-analytic paradigm is often aimed at dealing with particular situations, where it is necessary to pay attention to the exact definition of the functional space which needs to be considered. That functional space is very sensitive to boundary conditions. Moreover, minimal requirements on data are investigated in order to allow the existence of (weak) solutions of the equations.

On the contrary, the global-geometric paradigm investigates the minimal structures which allow one to pose variational problems on manifolds, extending what is done in Rn but usually being quite generous about regularity hypotheses (e.g. hardly ever does one consider less than C∞-objects). Since, even on manifolds, the search for solutions starts with a local problem (for which one can use local analysis), the global-geometric paradigm hardly ever deals with exact solutions, unless the global geometric structure of the manifold strongly constrains the existence of solutions.


A further a priori different approach is the one of Physics. In Physics one usually has field equations which are locally given on a portion of an unknown manifold. One thence starts to solve field equations locally in order to find a local solution and only afterwards one tries to find the maximal analytical extension (if any) of that local solution. The maximal extension can be regarded as a global solution on a suitable manifold M, in the sense that the extension defines M as well. In fact, one first proceeds to solve field equations in a coordinate neighbourhood; afterwards, one changes coordinates and tries to extend the found solution out of the patches as long as it is possible. The coordinate changes are the cocycle of transition functions with respect to the atlas and they define the base manifold M. This approach is essential to physical applications when the base manifold is a priori unknown, as in General Relativity, and it has to be determined by physical inputs.

Luckily enough, that approach does not disagree with the standard variational calculus approach in which the base manifold M is instead fixed from the very beginning. One can regard the variational problem as the search for a solution on that particular base manifold. Global solutions on other manifolds may be found using other variational principles on different base manifolds. Even for this reason, the variational principle should be universal, i.e. one defines a family of variational principles: one for each base manifold, or at least one for any base manifold in a “reasonably” wide class of manifolds. This strong requirement is physically motivated by the belief that Physics should work more or less in the same way regardless of the particular spacetime which is actually realized in Nature. Of course, a scenario would be conceivable in which everything works because of the particular (topological, differentiable, etc.) structure of the spacetime. This position, however, is not desirable from a physical viewpoint since, in this case, one has to explain why that particular spacetime is realized (a priori or a posteriori).

In spite of the aforementioned strong regularity requirements, the spectrum of situations one can encounter is unexpectedly wide, covering the whole of fundamental physics. Moreover, it is surprising how effectual the geometric formalism is when it comes to identifying the basic structures of field theories. In fact, just requiring the theory to be globally well-defined and to depend on physical data only often constrains the choice of the local theories to be globalized very strongly. These constraints are one of the strongest motivations for choosing a variational approach in physical applications. Another motivation is a well-formulated framework for conserved quantities. A global-geometric framework is a priori necessary to deal with conserved quantities, which are non-local.

In the modern perspective of Quantum Field Theory (QFT) the basic object encoding the properties of any quantum system is the action functional. From a quantum viewpoint the action functional is more fundamental than the field equations, which are obtained in the classical limit. The geometric framework provides drastic simplifications of some key issues, such as the definition of the variation operator. The variation is deeply geometric though, in practice, it coincides with the definition given in the local-analytic paradigm. In the latter case, the functional derivative is usually the directional derivative of the action functional, which is a function on the infinite-dimensional space of fields defined on a region D together with some boundary conditions on the boundary ∂D. To be able to define it one should first define the functional space, then define some notion of deformation which preserves the boundary conditions (or, equivalently, topologize the functional space), define a variation operator on the chosen space, and, finally, prove the most commonly used properties of derivatives. Once one has done this, one finds in principle the same results that would be found when using the geometric definition of variation (for which no infinite-dimensional space is needed). In fact, in any case of interest for fundamental physics, the functional derivative is simply defined by means of the derivative of a real function of one real variable. The Lagrangian formalism is a shortcut which translates the variation of (infinite-dimensional) action functionals into the variation of the (finite-dimensional) Lagrangian structure.
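
A minimal sympy sketch of this last point, with a toy Lagrangian and a deformation chosen purely for illustration: the “functional derivative” reduces to the derivative at ε = 0 of the real function ε ↦ S[y + εη], and it agrees with the Euler-Lagrange expression integrated against the deformation:

```python
import sympy as sp

t, eps = sp.symbols('t epsilon', real=True)

def L(u, udot):                        # toy Lagrangian (illustrative choice only)
    return udot**2 / 2 - u**2 / 2

y = t**3                               # a field configuration (not a solution)
eta = t**2 * (1 - t)**2                # deformation vanishing at t = 0 and t = 1

# Action of the deformed field: a real function of the single real variable eps
S = sp.integrate(L(y + eps * eta, sp.diff(y + eps * eta, t)), (t, 0, 1))
first_variation = sp.diff(S, eps).subs(eps, 0)

# Euler-Lagrange expression dL/dy - d/dt(dL/dy') integrated against the deformation
EL = -sp.diff(y, t, 2) - y
check = sp.integrate(EL * eta, (t, 0, 1))

print(sp.simplify(first_variation - check))   # 0: the two routes agree
```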

Another feature of the geometric framework is the possibility of dealing with non-local properties of field theories. There are, in fact, phenomena, such as monopoles or instantons, which are described by means of non-trivial bundles. Their properties are tightly related to the non-triviality of the configuration bundle, and they are relatively obscure when regarded from any local paradigm. In some sense, a local paradigm hides global properties in the boundary conditions and in the symmetries of the field equations, which are in turn reflected in the functional space we choose and about which, it being infinite-dimensional, we know almost nothing a priori. We could say that the existence of these phenomena is a further hint that field theories have to be stated on bundles rather than on Cartesian products. This statement, if anything, is phenomenologically driven.

When a non-trivial bundle is involved in a field theory, from a physical viewpoint it has to be regarded as an unknown object. As for the base manifold, it has then to be constructed out of physical inputs. One can do that in (at least) two ways, which are both actually used in applications. First of all, one can assume the bundle to be a natural bundle, which is thence canonically constructed out of its base manifold. Since the base manifold is identified by the (maximal) extension of the local solutions, the bundle itself is identified too. This approach is the one used in General Relativity. In other applications, bundles are gauge natural and they are therefore constructed out of a structure bundle P, which usually contains extra information which is not directly encoded into the spacetime manifold. In physical applications the structure bundle P has also to be constructed out of physical observables. This can be achieved by using the gauge invariance of field equations. In fact, two local solutions differing by a (pure) gauge transformation describe the same physical system. Then, while extending from one patch to another, we feel free both to change coordinates on M and to perform a (pure) gauge transformation before glueing two local solutions. The coordinate changes then define the base manifold M, while the (pure) gauge transformations form a cocycle (valued in the gauge group) which defines, in fact, the structure bundle P. Once again, solutions with different structure bundles can be found from different variational principles. Accordingly, the variational principle should be universal with respect to the structure bundle.

Local results are by no means less important. They are often the foundations on which the geometric framework is based. More explicitly, Variational Calculus is perhaps the branch of mathematics that enables the strongest interaction between Analysis and Geometry.

Transmission of Eventual Lending Rates: MCLRs. Note Quote.


Given that capital market instruments are not subject to MCLR/base rate regulations, the issuances of Commercial Paper/bonds reflect the current interest rates as banks are able to buy/subscribe new deposits reflecting extant interest rates, making transmission instantaneous. 

The fundamental challenge we have here is that there is no true floating rate liability structure for banks. One can argue that banks themselves will have to develop the floating rate deposit product, but customer response, given the complexity and uncertainty for the depositor, has been at best lukewarm. In an environment where the banking system is fighting multiple battles – asset quality, weak growth, challenges on transition to Ind AS accounting practice, rapid digitization leading to new competition from non-bank players, vulnerability in the legacy IT systems –  creating a mindset for floating rate deposits hardly appears to be a priority. 

In this context, it is clear that Marginal Cost of Funds based Lending Rates (MCLRs) have largely come down in line with policy rates. The MCLR is built on four components – marginal cost of funds, negative carry on account of the cash reserve ratio (CRR), operating costs and tenor premium. Marginal cost of funds is the marginal cost of borrowing and the return on net worth for banks. The operating cost includes the cost of providing the loan product, including the cost of raising funds. The tenor premium arises from loan commitments with longer tenors. Some data indicate that while the MCLR has indeed tracked policy rates (especially post-demonetization), as liquidity has been abundant, average lending rates have not yet reflected the fall in MCLRs. This is simply because MCLR resets happen over a period of time, depending on the benchmark MCLR used for sanctioning the loans.
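
A toy numerical sketch of the two points above (all figures are hypothetical and the build-up is deliberately simplified relative to the RBI methodology): the MCLR is assembled from the four components, and because loans reset only on their benchmark anniversary, the average lending rate on a book falls with a lag even after the MCLR itself has dropped:

```python
def mclr(marginal_cost_of_funds, crr_negative_carry, operating_cost, tenor_premium):
    """Illustrative MCLR build-up from its four stated components (per cent p.a.)."""
    return marginal_cost_of_funds + crr_negative_carry + operating_cost + tenor_premium

# Hypothetical: a 50 bps policy cut feeds straight into the marginal cost of funds
mclr_old = mclr(7.00, 0.15, 0.80, 0.30)   # 8.25
mclr_new = mclr(6.50, 0.15, 0.80, 0.30)   # 7.75

def average_lending_rate(months_since_cut, spread=0.50):
    """Loans priced off a one-year MCLR reset on their anniversary, so roughly
    months/12 of the book has repriced (uniform reset dates assumed)."""
    reset_share = min(months_since_cut / 12, 1.0)
    return spread + reset_share * mclr_new + (1 - reset_share) * mclr_old

print([round(average_lending_rate(m), 2) for m in (0, 3, 6, 12)])
# [8.75, 8.62, 8.5, 8.25]: the full 50 bps shows up in average rates only after a year
```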

Before jumping the gun and calling this a flaw in the structure because the benefit of lower interest rates arrives with a significant lag, note that the same lag will work in the borrower’s favour when the interest rate cycle turns. In fact, given that MCLR benchmarks vary from one month to one year, unlike the base rate, banks are in a better position to cut MCLRs, as the entire book does not reset immediately. Stakeholders must therefore wait a few more months before concluding on the effectiveness of transmission to eventual lending rates.