Equilibrium Market Prices are Unique – Convexity and Concavity Utility Functions on a Linear Case. Note Quote + Didactics.


Consider a market consisting of a set B of buyers and a set A of divisible goods. Assume |A| = n and |B| = n′. We are given for each buyer i the amount ei of money she possesses and for each good j the amount bj of this good. In addition, we are given the utility functions of the buyers. Our critical assumption is that these functions are linear. Let uij denote the utility derived by i on obtaining a unit amount of good j. Thus if the buyer i is given xij units of good j, for 1 ≤ j ≤ n, then the happiness she derives is

∑j=1n uij xij —— (1)

Prices p1, . . . , pn of the goods are said to be market clearing prices if, after each buyer is assigned an optimal basket of goods relative to these prices, there is no surplus or deficiency of any of the goods. So, is it possible to compute such prices in polynomial time?

First observe that, without loss of generality, we may assume that each bj is unity, by scaling the uij’s appropriately. The uij’s and ei’s are in general rational; by scaling appropriately, they may be assumed to be integral. We make the mild assumption that each good has a potential buyer, i.e., a buyer who derives nonzero utility from this good. Under this assumption, market clearing prices do exist.

It turns out that equilibrium allocations for Fisher’s linear case are captured as optimal solutions to a remarkable convex program, the Eisenberg–Gale convex program.

A convex program whose optimal solution is an equilibrium allocation must have as constraints the packing constraints on the xij’s. Furthermore, its objective function, which attempts to maximize utilities derived, should satisfy the following:

  1. If the utilities of any buyer are scaled by a constant, the optimal allocation remains unchanged.
  2. If the money of a buyer b is split among two new buyers whose utility functions are the same as that of b, then the sum of the optimal allocations of the new buyers should be an optimal allocation for b.

The money weighted geometric mean of buyers’ utilities satisfies both these conditions:

max (∏i∈B uiei)1/∑iei —– (2)

Since ∑i ei is a fixed constant, the following objective function is equivalent:

max ∏i∈B uiei —– (3)

Its log is used in the Eisenberg–Gale convex program:

maximize ∑i=1n′ ei log ui

subject to

ui = ∑j=1n uij xij ∀ i ∈ B

∑i=1n′ xij ≤ 1 ∀ j ∈ A

xij ≥ 0 ∀ i ∈ B, j ∈ A —– (4)

where xij is the amount of good j allocated to buyer i. Interpret the Lagrangian variables, say pj’s, corresponding to the second set of constraints as prices of goods. The optimal xij’s and pj’s must satisfy the following (KKT) conditions:

    1. ∀ j ∈ A : pj ≥ 0
    2. ∀ j ∈ A : pj > 0 ⇒ ∑i∈B xij = 1
    3. ∀ i ∈ B, j ∈ A : uij/pj ≤ (∑j∈A uij xij)/ei
    4. ∀ i ∈ B, j ∈ A : xij > 0 ⇒ uij/pj = (∑j∈A uij xij)/ei

From these conditions, one can derive that an optimal solution to convex program (4) must satisfy the market clearing conditions.

For the linear case of Fisher’s model:

  1. If each good has a potential buyer, equilibrium exists.
  2. The set of equilibrium allocations is convex.
  3. Equilibrium utilities and prices are unique.
  4. If all uij’s and ei’s are rational, then equilibrium allocations and prices are also rational. Moreover, they can be written using polynomially many bits in the length of the instance.

Corresponding to good j there is a buyer i such that uij > 0. By the third condition as stated above,

pj ≥ ei uij / (∑j∈A uij xij) > 0

By the second condition, ∑i∈B xij = 1, implying that prices of all goods are positive and all goods are fully sold. The third and fourth conditions imply that if buyer i gets good j, then j must be among the goods that give buyer i maximum utility per unit money spent at current prices. Hence each buyer gets only a bundle consisting of her most desired goods, i.e., an optimal bundle.

The fourth condition is equivalent to

∀ i ∈ B, j ∈ A : ei uij xij / (∑j∈A uij xij) = pj xij

Summing over all j:

∀ i ∈ B : ei (∑j∈A uij xij) / (∑j∈A uij xij) = ∑j∈A pj xij

⇒ ∀ i ∈ B : ei = ∑j∈A pj xij

Hence the money of each buyer is fully spent completing the proof that market equilibrium exists. Since each equilibrium allocation is an optimal solution to the Eisenberg-Gale convex program, the set of equilibrium allocations must form a convex set. Since log is a strictly concave function, if there is more than one equilibrium, the utility derived by each buyer must be the same in all equilibria. This fact, together with the fourth condition, gives that the equilibrium prices are unique.
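For small instances, such an equilibrium can be computed without a general convex solver. Below is a minimal sketch, not the Eisenberg–Gale program itself, using proportional response dynamics, which are known to converge to the Eisenberg–Gale optimum for linear Fisher markets; the two-buyer, two-good instance at the end is a hypothetical example, not taken from the text.

```python
# Proportional response dynamics for the linear Fisher market: buyer i
# repeatedly splits her budget e[i] across goods in proportion to the
# utility u[i][j] * x[i][j] she currently derives; the price of a good
# is the total money bid on it. For linear utilities these dynamics
# converge to the Eisenberg-Gale optimum.
def fisher_equilibrium(u, e, iters=500):
    """u[i][j]: utility of buyer i per unit of good j (assumed > 0 here);
    e[i]: money of buyer i; every good has unit supply."""
    nb, ng = len(u), len(u[0])
    b = [[e[i] / ng] * ng for i in range(nb)]  # uniform initial bids

    def prices_alloc(bids):
        p = [sum(bids[i][j] for i in range(nb)) for j in range(ng)]
        x = [[bids[i][j] / p[j] for j in range(ng)] for i in range(nb)]
        return p, x

    for _ in range(iters):
        p, x = prices_alloc(b)
        for i in range(nb):
            total = sum(u[i][j] * x[i][j] for j in range(ng))
            b[i] = [e[i] * u[i][j] * x[i][j] / total for j in range(ng)]
    return prices_alloc(b)

# hypothetical symmetric market: by symmetry the clearing prices are (1, 1)
u = [[2.0, 1.0], [1.0, 2.0]]
e = [1.0, 1.0]
p, x = fisher_equilibrium(u, e)
```

Note how the budget condition ei = ∑j pj xij derived above holds automatically here, since each buyer's bids always sum to her money.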


Blue Economy – Sagarmala Financial Engineering: Yet Another Dig. Skeletal Sketch of an Upcoming Talk in Somnath, Gujarat.


Authorized Share Capital (ASC) in the case of Sagarmala happens to be INR 1000 crore, and is the number of stock units Sagarmala Development Company Limited (SDCL) has issued in its articles of incorporation. This ASC is open, in that the share capital isn’t fully used, and there is ample room for future issuance of additional stock in order to raise capital quickly as and when a demand arises. SDCL can increase the authorized capital at any time with shareholders’ approval and by paying an additional fee to the RoC, the Registrar of Companies.

Capital Budgeting: the business determines and evaluates potential large expenditures/investments. Capital Budgeting is generally a long-term venture, and is a process that SDCL would use (and uses) to identify which capital projects would create the biggest returns compared with the funds invested in the project. The system of ranking helps establish a potential return in the future, such that SDCL management can choose where to invest first and most. Let us simply call it the first and most principle of budgeting. The Blue Economy that instantiates itself via Sagarmala in India has options to choose from as regards its Capital Budgeting, viz.

  1. Throughput analysis – This defines the main motives behind a project, where all the costs are operating costs, and the main emphasis is on maximizing profits passing through a bottleneck. The best example for Sagarmala, speculatively thought out, is the marking of the Western Shipping Corridor for container traffic, posing a livelihood threat to traditional fishermen. Throughput is an alternative to traditional cost accounting, but is neither accounting nor costing, since it is focused on cash flows. It does not allocate fixed costs to products and services sold or provided, and treats direct labour as a fixed expense. Decisions are based on three critical monetary variables: throughput, investment (or inventory), and operating expenses. Mathematically, throughput is defined as revenue minus totally variable expenses, the cost of raw materials or services incurred to produce the products sold or services delivered: T = R – TVE.
  2. Net Present Value (NPV) – this is the value of all future cash flows, positive or negative, over the entire life of an investment, discounted to the present. NPV forms a part of an intrinsic valuation, and is employed for valuing a business, investment security, capital project, new venture, cost reduction and almost anything involving cash flows.

NPV = z1/(1 + r) + z2/(1 + r)2 – X

, where z1 is the cash flow at time 1, z2 is the cash flow at time 2, r is the discount rate, and X is the purchase price, or initial investment. NPV takes into account the timing of each cash flow, which can result in a large impact on the present value of an investment. It is always better to have cash inflows sooner and cash outflows later. This is one aspect where SDCL might encounter a bottleneck and thereby take recourse to throughput analysis. Importantly, NPV deliberates on revolving funds.
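The two-period formula generalizes directly to n periods. A minimal sketch (the project numbers are hypothetical, chosen only for illustration):

```python
# Net present value of a cash-flow stream, generalizing
# NPV = z1/(1+r) + z2/(1+r)^2 - X to n periods.
def npv(rate, initial_outlay, cash_flows):
    """cash_flows[t-1] is the cash flow received at the end of period t."""
    return sum(cf / (1 + rate) ** t
               for t, cf in enumerate(cash_flows, start=1)) - initial_outlay

# hypothetical project: invest 100, receive 60 in each of two years, r = 10%
value = npv(0.10, 100.0, [60.0, 60.0])  # 60/1.1 + 60/1.21 - 100
```

A positive NPV (as here) indicates the discounted inflows exceed the initial outlay.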

  3. Internal Rate of Return (IRR) – this is the interest rate at which the NPV of all cash flows becomes zero. IRR quantifies the attractiveness of an investment: if the IRR of a new project exceeds the company’s required rate of return, then investment in that project is desirable; otherwise the project stands in need of rejection. IRR escapes analytical derivation, and must be found via mathematical trial and error. Interestingly, business spreadsheets are automated to perform these calculations. Mathematically, IRR satisfies:

0 = P0 + P1/(1 + IRR) + P2/(1 + IRR)2 + …. + Pn/(1 + IRR)n

, where P0, P1, …, Pn are the cash flows in periods 0, 1, …, n.

With a likelihood of venture capital and private equity expected in Sagarmala, accompanied by multiple cash investments over the life-cycle of the project, IRR could come in handy for an IPO.
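The trial-and-error search the text mentions can be made systematic with bisection, assuming the NPV changes sign on the search interval. A sketch with a hypothetical cash-flow stream:

```python
# IRR via bisection: find r with NPV(r) = 0. There is no closed form,
# which is why spreadsheets solve this numerically.
def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """cash_flows[0] = P0 (usually a negative outlay), cash_flows[t] at period t."""
    def npv(r):
        return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
    # assumes npv changes sign on [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(lo) * npv(mid) <= 0:
            hi = mid          # root lies in the lower half
        else:
            lo = mid          # root lies in the upper half
    return (lo + hi) / 2

# hypothetical: pay 100 now, receive 110 in one period -> IRR = 10%
r = irr([-100.0, 110.0])
```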

  4. Discounted Cash Flow (DCF) – this calculates the present value of an investment’s future cash flows in order to arrive at a current fair value estimate for an investment. Mathematically,

DCF =  CF1/(1 + r) + CF2/(1 + r)2 + CF3/(1 + r)3 + … + CFn/(1 + r)n

, where CF1, …, CFn are the cash flows in the respective periods, and r is the discount rate of return.

DCF accounts for the fact that money received today can be invested today, while money we have to wait for cannot. DCF thus accounts for the time value of money, and provides an estimate of what we should spend today to have an investment worth a certain amount of money at a specific point in the future.

  5. Payback period – mathematically, this is defined as:

Payback Period = Investment required/Annual Project Cash flow

The payback point occurs at the year, plus a number of months, at which cumulative cash flow turns positive. Though seemingly important, the payback period does not consider the time value of investment/money, and is quite inept at handling projects with uneven cash flows.
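The even-cash-flow ratio above, and an accumulating variant for the uneven case the ratio cannot handle, can be sketched as follows (the project figures are hypothetical):

```python
# Payback period with even annual cash flows (the ratio in the text),
# plus a variant that handles uneven cash flows by accumulating them.
def payback_even(investment, annual_cash_flow):
    return investment / annual_cash_flow

def payback_uneven(investment, cash_flows):
    """Years (fractional) until cumulative cash flow covers the investment."""
    remaining = investment
    for year, cf in enumerate(cash_flows, start=1):
        if cf >= remaining:
            return year - 1 + remaining / cf  # fraction of the final year
        remaining -= cf
    return None  # never paid back over the given horizon

p1 = payback_even(1000.0, 250.0)                    # 4.0 years
p2 = payback_uneven(1000.0, [300.0, 400.0, 600.0])  # paid back during year 3
```

Neither variant discounts the cash flows, which is exactly the weakness noted above.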

As a recap (and here, here, here)

Sagarmala is a 3-tier SPV structure


Private Players/PPPs OR EPCs/Turnkey – the latter are used for projects with high social impact or low IRR. 

Expenses incurred for project development will be treated as part of the equity contribution by SDCL; or, in case SDCL does not have any equity, or the expenses incurred are more than the stake of SDCL, the SPV will reimburse SDCL. Divestment possibilities cannot be ruled out in order to recoup capital for future projects.

Conjuncted: Integer Pivoting as a Polynomial-Time Algorithm


The Lemke-Howson algorithm follows the edges of a polyhedron, which is implemented algebraically by pivoting, as used by the simplex algorithm for solving a linear program. Let us see if there is an efficient implementation that has no numerical errors, by storing integers of arbitrary precision. The constraints defining the polyhedron are thereby represented as linear equations with nonnegative slack variables. For the polytopes P and Q in

P = {x ∈ RM| x ≥ 0, Bx ≤ 1},

Q = {y ∈ RN |Ay ≤ 1, y ≥ 0}

these slack variables are nonnegative vectors s ∈ RN and r ∈ RM so that x ∈ P and y ∈ Q iff

Bx + s = 1, r + Ay = 1 —– (1)

and

x ≥ 0, s ≥ 0, r ≥ 0, y ≥ 0 —— (2)

A binding inequality corresponds to a zero slack variable. The pair (x, y) is completely labeled iff xiri = 0 ∀ i ∈ M and yjsj = 0 ∀ j ∈ N, which by (2) can be written as the orthogonality condition

xr = 0, ys = 0

A basic solution to (1) is given by n basic (linearly independent) columns of Bx + s = 1 and m basic columns of r + Ay = 1, where the nonbasic variables that correspond to the m respectively n other (nonbasic) columns are set to zero, so that the basic variables are uniquely determined. A basic feasible solution also fulfills (2), and defines a vertex x of P and y of Q. The labels of such a vertex are given by the respective nonbasic columns.

Pivoting is a change of the basis where a nonbasic variable enters and a basic variable leaves the set of basic variables, while preserving feasibility (2).

Integer pivoting always maintains an integer matrix (or “tableau”) of coefficients of a system of linear equations that is equivalent to the original system Bx + s = 1, in the form

CBx + Cs = C1 —– (3)

In (3), C is the inverse of the basis matrix given by the basic columns of the original system, multiplied by the determinant of the basis matrix. The matrix C is given by the (integer) cofactors of the basis matrix; the cofactor of a matrix entry is the determinant of the matrix when the row and column of that element are deleted. When each entry has a bounded number of digits (by at most a factor of n log n compared to the original matrix entries), then integer pivoting is a polynomial-time algorithm. It is also superior to using fractions of integers or rational numbers because their cancelation requires greatest common divisor computations that take the bulk of computation time.
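One standard realization of integer pivoting is fraction-free elimination in the Bareiss style, sketched below as a determinant computation: the cross-product update is divided by the previous pivot, which always divides exactly, so every tableau entry stays an integer, just as described above (the sample matrices are hypothetical).

```python
# Fraction-free ("integer") pivoting in the Bareiss style: each update
# divides the 2x2 cross product by the previous pivot; the division is
# always exact, so every tableau entry remains an integer.
def bareiss_determinant(matrix):
    m = [row[:] for row in matrix]   # work on a copy
    n = len(m)
    sign, prev = 1, 1
    for k in range(n - 1):
        if m[k][k] == 0:             # swap in a row with a nonzero pivot
            for r in range(k + 1, n):
                if m[r][k] != 0:
                    m[k], m[r] = m[r], m[k]
                    sign = -sign
                    break
            else:
                return 0             # whole column is zero: singular
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # the integer pivoting step: exact division, no fractions
                m[i][j] = (m[k][k] * m[i][j] - m[i][k] * m[k][j]) // prev
            m[i][k] = 0
        prev = m[k][k]
    return sign * m[n - 1][n - 1]
```

No rational-number cancellation (and hence no gcd computation) is ever needed, which is precisely the advantage claimed in the text.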

Lemke-Howson Algorithm – Symmetric Game with Symmetric Or NonSymmetric Equilibria. Note Quote.

The Lemke-Howson algorithm (LHA) computes a sample mixed-strategy Nash equilibrium in a bimatrix game. It implements complementary pivoting for solving bimatrix games, a variant of the Lemke algorithm for linear complementarity problems (LCPs). The LHA not only provides an elementary proof for the existence of equilibrium points, but also an efficient computational method for finding at least one equilibrium point. The LHA follows a path (called an LH path) of vertex pairs (x, y) of P × Q, for the polytopes P and Q,

P = {x ∈ RM| x ≥ 0, Bx ≤ 1},

Q = {y ∈ RN |Ay ≤ 1, y ≥ 0}

that starts at (0, 0) and ends at a Nash equilibrium. An LH path alternately follows edges of P and Q, keeping the vertex in the other polytope fixed. Because the game is nondegenerate, a vertex of P is given by m labels, and a vertex of Q is given by n labels. An edge of P is defined by m−1 labels.

[figure: the polytopes P and Q, with vertices 0, a–d of P and 0, p–s of Q marked with their labels]

For example, in the above figure, the edge defined by labels 1 and 3 joins the vertices 0 and c. Dropping a label l of a vertex x of P, say, means traversing the unique edge that has all the labels of x except for l. For example, dropping label 2 of the vertex 0 of P in the figure gives the edge, defined by labels 1 and 3, that joins 0 to vertex c. The endpoint of the edge has a new label, which is said to be picked up, for example, label 5 is picked up at vertex c.

The LHA starts from (0, 0) in P × Q. This is called the artificial equilibrium, which is a completely labeled vertex pair because every pure strategy has probability zero. It does not represent a Nash equilibrium of the game because the zero vector cannot be rescaled to a mixed strategy vector. An initial free choice of the LHA is a pure strategy k of a player (any label in M ∪ N ), called the missing label. Starting with (x, y) = (0, 0), label k is dropped. At the endpoint of the corresponding edge (of P if k ∈ M, of Q if k ∈ N), the new label that is picked up is duplicate because it was present in the other polytope. That duplicate label is then dropped in the other polytope, picking up a new label. If the newly picked label is the missing label, the algorithm terminates and has found a Nash equilibrium. Otherwise, the algorithm repeats by dropping the duplicate label in the other polytope, and continues in this fashion.

Input: Nondegenerate bimatrix game.

Output: One Nash equilibrium of the game.

Method: Choose k ∈ M ∪ N , called the missing label. Let (x, y) = (0, 0) ∈ P × Q. Drop label k (from x in P if k ∈ M, from y in Q if k ∈ N).

Loop: Call the new vertex pair (x, y). Let l be the label that is picked up. If l = k, terminate with Nash equilibrium (x, y) (rescaled as mixed strategy pair). Otherwise, drop l in the other polytope and repeat.

The LHA terminates, and finds a Nash equilibrium, because P × Q has only finitely many vertex pairs. The next vertex pair on the path is always unique. Hence, a given vertex pair cannot be revisited, because that would offer a second way to proceed when it was first encountered, contradicting this uniqueness.

What we seem to have done is describe the LH path for missing label k by means of alternating edges between two polytopes. In fact, it is a path on the product polytope P × Q, given by the set of pairs (x, y) of P × Q that are k-almost completely labeled, meaning that every label in M ∪ N − {k} appears as a label of either x or y. In the above figure for k = 2, the vertex pairs on the path are (0, 0), (c, 0), (c, p), (d, p), (d, q).

For a fixed missing label k, the k-almost completely labeled vertices and edges of the product polytope P × Q form a graph of degree 1 or 2. Clearly, such a graph consists of disjoint paths and cycles. The endpoints of the paths are completely labeled. They are the Nash equilibria of the game and the artificial equilibrium (0, 0).

There is a corollary to this: a nondegenerate bimatrix game has an odd number of Nash equilibria. The LHA can start at any Nash equilibrium, not just the artificial equilibrium. In the figure with missing label 2, starting the algorithm at the Nash equilibrium (d, q) would just generate the known LH path backward to (0, 0). When started at the Nash equilibrium (a, s), the LH path for the missing label 2 gives the vertex pair (b, s), where label 5 is duplicate, and then the equilibrium (b, r). This path cannot go back to (0, 0) because the path leading to (0, 0) starts at (d, q). This gives the three Nash equilibria of the game as endpoints of the two LH paths for missing label 2. These three equilibria can also be found by the LHA by varying the missing label.
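The odd-equilibria corollary is easy to check concretely. The sketch below uses support enumeration rather than LH pivoting, restricted to 2×2 games for brevity, and the battle-of-the-sexes payoff matrices at the end are a hypothetical test instance; for a nondegenerate 2×2 game the only candidates are the four pure pairs and the fully mixed pair.

```python
from fractions import Fraction as F

def equilibria_2x2(A, B):
    """All Nash equilibria of a nondegenerate 2x2 bimatrix game (A, B):
    check the four pure pairs, then the fully mixed support pair."""
    eqs = []
    for i in range(2):          # pure equilibria: mutual best responses
        for j in range(2):
            if A[i][j] >= A[1 - i][j] and B[i][j] >= B[i][1 - j]:
                eqs.append(((F(1 - i), F(i)), (F(1 - j), F(j))))
    # fully mixed: each player mixes so as to make the *other* indifferent
    den_p = B[0][0] - B[0][1] - B[1][0] + B[1][1]
    den_q = A[0][0] - A[0][1] - A[1][0] + A[1][1]
    if den_p != 0 and den_q != 0:
        p = F(B[1][1] - B[1][0], den_p)  # row's weight on her first strategy
        q = F(A[1][1] - A[0][1], den_q)  # column's weight on her first strategy
        if 0 < p < 1 and 0 < q < 1:
            eqs.append(((p, 1 - p), (q, 1 - q)))
    return eqs

# hypothetical battle-of-the-sexes game: two pure equilibria plus one mixed
A = [[2, 0], [0, 1]]
B = [[1, 0], [0, 2]]
eqs = equilibria_2x2(A, B)
```

Three equilibria come out, an odd number, consistent with the corollary.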

However, some Nash equilibria can remain elusive to the LHA. An example is the following symmetric 3 × 3 game with

A = B = [3 × 3 payoff matrix shown in a figure, omitted]

Every Nash equilibrium (x, y) of this game is symmetric, i.e., x = y, where x is (0, 0, 1), (1/2, 1/4, 1/4), or (3/4, 1/4, 0). Only the first of these is found by the LHA, for any missing label; because the game is symmetric, it suffices to consider the missing labels 1, 2, 3. (A symmetric game remains unchanged when the players are exchanged; a symmetric game has always a symmetric equilibrium, but may also have nonsymmetric equilibria, which obviously come in pairs.)

Game’s Degeneracy Running Proportional to Polytope’s Redundancy.

For a given set of vertices V ⊆ RK a Polytope P can be defined as the following set of points:

P = {∑i=1|V|λivi ∈ RK | ∑i=1|V|λi = 1; λi ≥ 0; vi ∈ V}


A polytope is an intersection of half-spaces, each bounded by a hyperplane that separates the space into two distinct regions. If a polytope is defined as an intersection of half-spaces, then for a matrix M ∈ Rm×n and a vector b ∈ Rm, the polytope P is defined as the set of points

P = {x ∈ Rn | Mx ≤ b}

Switching over to a two-player game (A, B), with A, B ∈ Rm×n having positive entries, the row/column best response polytopes P/Q are defined by:

P = {x ∈ Rm | x ≥ 0; xB ≤ 1}

Q = {y ∈ Rn | Ay ≤ 1; y ≥ 0}

The polytope P corresponds to the set of (unnormalized) row strategies x such that the column player’s payoff against x is at most 1, whichever column she plays.

An affine combination of points z1, …, zk in some Euclidean space is of the form ∑i=1k λizi, where λ1, …, λk are reals with ∑i=1k λi = 1. It is called a convex combination if λi ≥ 0 ∀ i. A set of points is convex if it is closed under forming convex combinations. Given points are affinely independent if none of these points is an affine combination of the others. A convex set has dimension d iff it has d + 1, but no more, affinely independent points.

A polyhedron P in Rd is a set {z ∈ Rd | Cz ≤ q} for some matrix C and vector q. It is called full-dimensional if it has dimension d. It is called a polytope if it is bounded. A face of P is a set {z ∈ P | cz = q0} for some c ∈ Rd, q0 ∈ R, such that the inequality cz ≤ q0 holds for all z in P. A vertex of P is the unique element of a zero-dimensional face of P. An edge of P is a one-dimensional face of P. A facet of a d-dimensional polyhedron P is a face of dimension d − 1. It can be shown that any nonempty face F of P can be obtained by turning some of the inequalities defining P into equalities, which are then called binding inequalities. That is, F = {z ∈ P | ciz = qi, i ∈ I}, where ciz ≤ qi for i ∈ I are some of the rows in Cz ≤ q. A facet is characterized by a single binding inequality which is irredundant; i.e., the inequality cannot be omitted without changing the polyhedron. A d-dimensional polyhedron P is called simple if no point belongs to more than d facets of P, which is true if there are no special dependencies between the facet-defining inequalities. The “best response polyhedron” of a player is the set of that player’s mixed strategies together with the “upper envelope” of expected payoffs (and any larger payoffs) to the other player.

Nondegeneracy of a bimatrix game (A, B) can be stated in terms of the polytopes P and Q: no point in P has more than m labels, and no point in Q has more than n labels. (If x ∈ P has support of size k and L is the set of labels of x, then |L ∩ M| = m − k, so |L| > m implies x has more than k best responses in L ∩ N.) Then P and Q are simple polytopes, because a point of P, say, that is on more than m facets would have more than m labels.

Even if P and Q are simple polytopes, the game can be degenerate if the description of a polytope is redundant in the sense that some inequality can be omitted, but nevertheless is sometimes binding. This occurs if a player has a pure strategy that is weakly dominated by, or payoff equivalent to, some other mixed strategy. Non-simple polytopes or redundant inequalities of this kind do not occur for “generic” payoffs; this illustrates the assumption of nondegeneracy from a geometric viewpoint. (A strictly dominated strategy may occur generically, but it defines a redundant inequality that is never binding, so this does not lead to a degenerate game.)

Because the game is nondegenerate, only vertices of P can have m labels, and only vertices of Q can have n labels. Otherwise, a point of P with m labels that is not a vertex would be on a higher dimensional face, and a vertex of that face, which is a vertex of P, would have additional labels. Consequently, only vertices of P and Q have to be inspected as possible equilibrium strategies. Algorithmically: the input is a nondegenerate bimatrix game, and the output is all Nash equilibria of the game. For each vertex x of P − {0} and each vertex y of Q − {0}, if (x, y) is completely labeled, output the Nash equilibrium (x · 1/1x, y · 1/1y).
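For tiny games this inspection can be carried out exactly in rational arithmetic. The sketch below (assuming a nondegenerate game with the polytopes as defined above; the 2×2 coordination game at the end is a hypothetical test instance) enumerates vertices of P and Q as intersections of binding constraints, records each vertex’s label set, and outputs the completely labeled pairs, normalized to mixed strategies.

```python
from fractions import Fraction as F
from itertools import combinations

def solve(M, b):
    """Solve the square system M z = b over the rationals; None if singular."""
    n = len(M)
    aug = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        piv = next((r for r in range(c, n) if aug[r][c] != 0), None)
        if piv is None:
            return None
        aug[c], aug[piv] = aug[piv], aug[c]
        aug[c] = [v / aug[c][c] for v in aug[c]]
        for r in range(n):
            if r != c and aug[r][c] != 0:
                aug[r] = [vr - aug[r][c] * vc for vr, vc in zip(aug[r], aug[c])]
    return [aug[r][n] for r in range(n)]

def labeled_vertices(rows, rhs, dim):
    """Vertices of {z | rows[k].z <= rhs[k]} with their sets of binding labels."""
    out = []
    for idx in combinations(range(len(rows)), dim):
        z = solve([rows[k] for k in idx], [rhs[k] for k in idx])
        if z is None:
            continue
        vals = [sum(r * zi for r, zi in zip(rows[k], z)) for k in range(len(rows))]
        if all(v <= rhs[k] for k, v in enumerate(vals)):
            out.append((tuple(z),
                        frozenset(k for k, v in enumerate(vals) if v == rhs[k])))
    return out

def equilibria(A, B):
    """Nash equilibria of a nondegenerate bimatrix game via completely
    labeled vertex pairs of P and Q."""
    m, n = len(A), len(A[0])
    # P = {x >= 0, xB <= 1}: labels 0..m-1 are x_i = 0, m..m+n-1 are (xB)_j = 1
    rows_p = [[F(-1) if k == i else F(0) for k in range(m)] for i in range(m)]
    rows_p += [[F(B[i][j]) for i in range(m)] for j in range(n)]
    rhs_p = [F(0)] * m + [F(1)] * n
    # Q = {Ay <= 1, y >= 0}: labels 0..m-1 are (Ay)_i = 1, m..m+n-1 are y_j = 0
    rows_q = [[F(A[i][j]) for j in range(n)] for i in range(m)]
    rows_q += [[F(-1) if k == j else F(0) for k in range(n)] for j in range(n)]
    rhs_q = [F(1)] * m + [F(0)] * n
    everything = frozenset(range(m + n))
    found = []
    for x, lx in labeled_vertices(rows_p, rhs_p, m):
        for y, ly in labeled_vertices(rows_q, rhs_q, n):
            if any(x) and any(y) and (lx | ly) == everything:
                sx, sy = sum(x), sum(y)   # rescale to mixed strategies
                found.append((tuple(v / sx for v in x),
                              tuple(v / sy for v in y)))
    return found

# hypothetical symmetric coordination game: three equilibria expected
A = B = [[2, 0], [0, 1]]
eqs = equilibria(A, B)
```

The `any(x) and any(y)` test excludes the artificial equilibrium (0, 0), exactly as P − {0} and Q − {0} do above.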

Graviton Fields Under Helicity Rotations. Thought of the Day 156.0


Einstein described gravity as equivalent to curves in space and time, but physicists have long searched for a theory of gravitons, its putative quantum-scale source. Though gravitons are individually too weak to detect, most physicists believe the particles roam the quantum realm in droves, and that their behavior somehow collectively gives rise to the macroscopic force of gravity, just as light is a macroscopic effect of particles called photons. But every proposed theory of how gravity particles might behave faces the same problem: upon close inspection, it doesn’t make mathematical sense. Calculations of graviton interactions might seem to work at first, but when physicists attempt to make them more exact, they yield gibberish – an answer of “infinity.” This is the disease of quantized gravity. With regard to the exchange-particle concept in quantum electrodynamics and the existence of the graviton, let us consider a photon falling in a gravitational field, and revert to the behavior of a photon in that field. When we define the graviton relative to the photon, it is necessary to explain the properties and behavior of the photon in the gravitational field. The fields around a “ray of light” are electromagnetic waves, not static fields. The electromagnetic field generated by a photon is much stronger than the associated gravitational field. When a photon is falling in the gravitational field, it goes from a layer of low graviton density to a layer of higher graviton density. We should assume that the graviton is not a solid sphere without any considerable effect. The graviton carries the gravity force, so it is absorbable by other gravitons; in general, gravitons absorb each other and combine. This new view shows that the identity of the graviton changes; in fact it has mass with changeable spin.

When we derive various supermultiplets of states at the noninteracting level, these states can easily be described in terms of local fields. But at the interacting level, there are certain ambiguities that arise as a result of different field representations describing the same massless free states. So the proper choice of the field representation may be subtle. The supermultiplets can then be converted into supersymmetric actions, quadratic in the fields. For self-dual tensor fields, the action must be augmented by a duality constraint on the corresponding field strength. For the graviton field,

The linearized Einstein equation for gμν = ημν + κhμν implies that (for D ≥ 3)

Rμν ∝ ∂2hμν + ∂μ∂νh – ∂μ∂ρhνρ – ∂ν∂ρhρμ = 0 —– (1)

where h ≡ hμμ and Rμν is the Ricci tensor. To analyze the number of states implied by this equation, one may count the number of plane-wave solutions with given momentum qμ. It then turns out that there are D arbitrary solutions, corresponding to the linearized gauge invariance hμν → hμν + ∂μξν + ∂νξμ, which can be discarded. Many other components vanish, and the only nonvanishing ones require the momentum to be lightlike. These reside in the fields hij, where the components i, j are in the transverse (D−2)-dimensional subspace. In addition, the trace of hij must be zero. Hence, the relevant plane-wave solutions are massless and have polarizations (helicities) characterized by a symmetric traceless rank-2 tensor. This tensor comprises 1/2 D(D−3) independent components, which transform irreducibly under the SO(D−2) helicity group of transverse rotations. For the special case of D = 6 spacetime dimensions, the helicity group is SO(4), which factorizes into two SU(2) groups. The symmetric traceless representation then transforms as a triplet under each of the SU(2) factors and is thus denoted by (3,3). As for D = 3, there are obviously no dynamic degrees of freedom associated with the gravitational field. When D = 2 there are again no dynamic degrees of freedom, but here (1) should be replaced by Rμν = 1/2 gμν R.
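The counting can be checked in two lines, assuming nothing beyond the formulas above: a symmetric traceless 2-tensor of SO(D−2) has (D−2)(D−1)/2 − 1 components, which equals D(D−3)/2.

```python
# Graviton helicity states: a symmetric traceless 2-tensor of the
# transverse rotation group SO(D-2) has (D-2)(D-1)/2 - 1 components,
# which equals the D(D-3)/2 quoted in the text.
def graviton_states(D):
    transverse = D - 2
    sym_traceless = transverse * (transverse + 1) // 2 - 1
    assert sym_traceless == D * (D - 3) // 2
    return sym_traceless

counts = {D: graviton_states(D) for D in range(3, 12)}
# D = 4 gives the familiar 2 helicity states; D = 3 gives 0, matching the
# statement that 3-dimensional gravity has no dynamic degrees of freedom
```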

Ricci-Flat Metric & Diffeomorphism – Does there Exist a Possibility of a Complete Construction of a Metric if the Surface isn’t a Smooth Manifold? Note Quote.


Using twistors, the Gibbons-Hawking ansatz is generalized to investigate 4n-dimensional hyperkähler metrics admitting an action of the n-torus Tn. The hyperkähler case could further admit a tri-holomorphic free action of the n-torus Tn. It turns out that the metric may be written in coordinates adapted to the torus action, in a form similar to the Gibbons-Hawking ansatz in dimension 4, and such that the non-linear Einstein equations reduce to a set of linear equations (essentially saying that certain functions on Euclidean 3-space are harmonic). In the case of 8-manifolds (n = 2), the solutions can be described geometrically, in terms of arrangements of 3-dimensional linear subspaces in Euclidean 6-space.

There are in fact many explicit examples known of metrics on non-compact manifolds with SU(n) or Sp(2n) holonomy. The other holonomy groups automatically yielding Ricci-flat metrics are the special holonomy groups G2 in dimension 7 and Spin(7) in dimension 8. Until fairly recently only three explicit examples of complete metrics (in dimension 7) with G2-holonomy and one explicit example (in dimension 8) with Spin(7)-holonomy were known. The G2-holonomy examples are asymptotically conical and live on the bundle of self-dual two-forms over S4, the bundle of self-dual two-forms over CP2, and the spin bundle of S3 (topologically R4 × S3), respectively. The metrics are of cohomogeneity one with respect to the Lie groups SO(5), SU(3) and SU(2) × SU(2) × SU(2) respectively. A cohomogeneity-one metric has a Lie group acting via isometries, with general (principal) orbits of real codimension one. In particular, if the metric is complete, then X is the holomorphic cotangent bundle of projective n-space TCPn, and the metric is the Calabi hyperkähler metric.


The G2-holonomy examples are all examples in which a Lie group G acts with low-codimension orbits. This is a general feature of explicit examples of Einstein metrics. The simplest such situation is when there is a single orbit of a group action, in which case the metric manifold is homogeneous. For metrics on homogeneous manifolds, the Einstein condition may be expressed purely algebraically. Moreover, all homogeneous Ricci-flat manifolds are flat, and so no interesting metrics occur. Then what about cohomogeneity one with respect to G, i.e., where the orbits of G have codimension one in general? Here, the Einstein condition reduces to a system of non-linear ordinary differential equations in one variable, namely the parameter on the orbit space. In the Ricci-flat case, the Cheeger-Gromoll theorem implies that the manifold has at most one end. In the non-compact case, the orbit space is R+ and there is just one singular orbit. Geometrically, if the principal orbit is of the form G/K, the singular orbit (the bolt) is G/H for some subgroup H ⊃ K; if G is compact, a necessary and sufficient condition for the space to be a smooth manifold is that H/K is diffeomorphic to a sphere. In many cases, this is impossible because of the form of the group G, and so any metric constructed will not be complete.