Equilibrium Market Prices are Unique – Convex and Concave Utility Functions in the Linear Case. Note Quote + Didactics.


Consider a market consisting of a set B of buyers and a set A of divisible goods. Assume |A| = n and |B| = n′. We are given for each buyer i the amount ei of money she possesses and for each good j the amount bj of this good. In addition, we are given the utility functions of the buyers. Our critical assumption is that these functions are linear. Let uij denote the utility derived by i on obtaining a unit amount of good j. Thus if the buyer i is given xij units of good j, for 1 ≤ j ≤ n, then the happiness she derives is

∑j=1n uij xij —– (1)

Prices p1, . . . , pn of the goods are said to be market clearing prices if, after each buyer is assigned an optimal basket of goods relative to these prices, there is no surplus or deficiency of any of the goods. So, is it possible to compute such prices in polynomial time?

First observe that without loss of generality, we may assume that each bj is unit – by scaling the uij’s appropriately. The uij’s and ei’s are in general rational; by scaling appropriately, they may be assumed to be integral. We make the mild assumption that each good has a potential buyer, i.e., a buyer who derives nonzero utility from this good. Under this assumption, market clearing prices do exist.

It turns out that equilibrium allocations for Fisher’s linear case are captured as optimal solutions to a remarkable convex program, the Eisenberg–Gale convex program.

A convex program whose optimal solution is an equilibrium allocation must have as constraints the packing constraints on the xij’s. Furthermore, its objective function, which attempts to maximize utilities derived, should satisfy the following:

  1. If the utilities of any buyer are scaled by a constant, the optimal allocation remains unchanged.
  2. If the money of a buyer b is split among two new buyers whose utility functions are the same as that of b, then the sum of the optimal allocations of the new buyers should be an optimal allocation for b.

The money-weighted geometric mean of buyers’ utilities satisfies both these conditions:

max (∏i∈B ui^ei)^(1/∑i ei) —– (2)

Since ∑i ei is a constant, the following objective function is equivalent:

max ∏i∈B ui^ei —– (3)

Its log is used in the Eisenberg–Gale convex program:

maximize ∑i=1n′ ei log ui

subject to

ui = ∑j=1n uij xij ∀ i ∈ B

∑i=1n′ xij ≤ 1 ∀ j ∈ A

xij ≥ 0 ∀ i ∈ B, j ∈ A —– (4)

where xij is the amount of good j allocated to buyer i. Interpret Lagrangian variables, say pj’s, corresponding to the second set of conditions as prices of goods. Optimal solutions to xij’s and pj’s must satisfy the following:

    1. ∀ j ∈ A : pj ≥ 0
    2. ∀ j ∈ A : pj > 0 ⇒ ∑i∈B xij = 1
    3. ∀ i ∈ B, j ∈ A : uij/pj ≤ (∑j∈A uij xij)/ei
    4. ∀ i ∈ B, j ∈ A : xij > 0 ⇒ uij/pj = (∑j∈A uij xij)/ei

From these conditions, one can derive that an optimal solution to convex program (4) must satisfy the market clearing conditions.

For the linear case of Fisher’s model:

  1. If each good has a potential buyer, equilibrium exists.
  2. The set of equilibrium allocations is convex.
  3. Equilibrium utilities and prices are unique.
  4. If all uij’s and ei’s are rational, then equilibrium allocations and prices are also rational. Moreover, they can be written using polynomially many bits in the length of the instance.

Corresponding to good j there is a buyer i such that uij > 0. By the third condition as stated above,

pj ≥ ei uij / ∑j uij xij > 0

By the second condition, ∑i∈B xij = 1, implying that prices of all goods are positive and all goods are fully sold. The third and fourth conditions imply that if buyer i gets good j then j must be among the goods that give buyer i maximum utility per unit of money spent at current prices. Hence each buyer gets only a bundle consisting of her most desired goods, i.e., an optimal bundle.

The fourth condition is equivalent to

∀ i ∈ B, j ∈ A : ei uij xij / ∑j∈A uij xij = pj xij

Summing over all j:

∀ i ∈ B : ei ∑j∈A uij xij / ∑j∈A uij xij = ∑j∈A pj xij

⇒ ∀ i ∈ B : ei = ∑j∈A pj xij

Hence the money of each buyer is fully spent, completing the proof that a market equilibrium exists. Since each equilibrium allocation is an optimal solution to the Eisenberg–Gale convex program, the set of equilibrium allocations must form a convex set. Since log is a strictly concave function, if there were more than one equilibrium, the utility derived by each buyer would have to be the same in all equilibria. This fact, together with the fourth condition, gives that the equilibrium prices are unique.
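These conditions can be checked numerically on a toy market. The sketch below uses proportional-response dynamics, a known iterative scheme for the linear Fisher market; it is not the Eisenberg–Gale program itself, but it converges to the same equilibrium. The 2×2 market, the function name, and the iteration count are illustrative assumptions.

```python
def fisher_equilibrium(u, e, iters=2000):
    """Proportional-response dynamics for the linear Fisher market.

    u[i][j] is the utility buyer i derives from one unit of good j,
    e[i] is buyer i's money, and every good has unit supply.
    Returns (prices, allocations).
    """
    nb, ng = len(u), len(u[0])
    # start with each buyer's money spread evenly over the goods she values
    b = [[e[i] * (u[i][j] > 0) / sum(v > 0 for v in u[i]) for j in range(ng)]
         for i in range(nb)]
    for _ in range(iters):
        # price of a good = total money bid on it (goods are fully sold)
        p = [sum(b[i][j] for i in range(nb)) for j in range(ng)]
        # each buyer receives good j in proportion to her bid on it
        x = [[b[i][j] / p[j] if p[j] > 0 else 0.0 for j in range(ng)]
             for i in range(nb)]
        # buyer utilities u_i = sum_j u_ij x_ij, as in program (4)
        util = [sum(u[i][j] * x[i][j] for j in range(ng)) for i in range(nb)]
        # re-bid in proportion to the utility contributed by each good
        b = [[e[i] * u[i][j] * x[i][j] / util[i] for j in range(ng)]
             for i in range(nb)]
    p = [sum(b[i][j] for i in range(nb)) for j in range(ng)]
    x = [[b[i][j] / p[j] for j in range(ng)] for i in range(nb)]
    return p, x

# two buyers with one unit of money each; buyer 0 prefers good 0, buyer 1 good 1
prices, alloc = fisher_equilibrium([[2.0, 1.0], [1.0, 2.0]], [1.0, 1.0])
```

By symmetry the equilibrium prices here are (1, 1), buyer 0 takes all of good 0 and buyer 1 all of good 1; each buyer’s money is fully spent, matching the market clearing conditions derived above.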


Pareto Optimality

There are some solutions. (“If you don’t give a solution, you are part of the problem”). Most important: human wealth should be set as the only goal in society and economy. Liberalism is ruinous for humans, while it may be optimal for fitter entities. This is not about taking away the money of others without working for it, in a spirit of ‘revenge’ or ‘envy’ (basically justifying laziness); nobody wants that. Nor is it about thinking that yours is the only way a rational person can think, and that anybody not ‘winning’ the game is a ‘loser’. Some of us, actually, do not even want to enter the game.

Yet – the big dilemma – that money-grabbing mentality is essential for the economy. Without it we would be equally doomed. But what we will see now is that you will lose every last penny either way, even without divine intervention.

Having said that, the solution is to take away the money. Seeing that the system is not stable and accumulates the capital on a big pile, disconnected from humans, mathematically there are two solutions:

1) Put all the capital in the hands of people. If profit M′ − M is made, this profit falls to the hands of the people that caused it. This seems fair, and mathematically stable. However, how is the wealth then distributed? That would be the task of politicians, and history has shown that they are a worse pest than capital. Politicians, actually, always wind up representing the capital. No country in the world ever managed to avoid it.

2) Let the system be as it is, which is great for giving people incentives to work and develop things, but at the end of the year, redistribute the wealth to follow an ideal curve that optimizes both wealth and increments of wealth.

The latter is an interesting idea, especially since it does not need a rigorous restructuring of society, something that would only be possible after a total collapse of civilization. While such a collapse may be unavoidable in the system we have, it would be better to act pro-actively and do something before it happens. Moreover, since money is air – or worse, vacuum – there is actually nothing that is ‘taken away’. Money is just a right to consume and can thus be redistributed at will if there is a just cause to do so. In normal cases this euphemistic word ‘redistribution’ amounts to theft, undermines incentives for work and production, and thus causes poverty. Yet, if it can be shown to actually increase incentives to work, and thus increase overall wealth, it would need no further justification.

We set out to calculate this idea; it turned out to give quite remarkable results. Basically, the optimal distribution is slavery. Let us present the results here. Let’s look at the distribution of wealth. The figure below shows a curve of wealth per person, with the richest conventionally placed at the right and the poor on the left, resulting in what is in mathematics called a monotonically increasing function. This virtual country has 10 million inhabitants and a certain wealth that ranges from nearly nothing to millions, but it can easily be mapped to any country.


Figure 1: Absolute wealth distribution function

As the overall wealth increases, it condenses over time at the right side of the curve. Left unchecked, the curve would become ever more skewed, ending eventually in a straight horizontal line at zero up to the uttermost right point, where it shoots up to an astronomical value. The integral of the curve (total wealth/capital M) always increases, but it eventually goes to one person. Here it is intrinsically assumed that wealth is still connected to people and does not, as in fact it does, become independent of them – become ‘capital’ autonomous by itself. If independent of people, this wealth can anyway be confiscated and redistributed without any form of remorse whatsoever. Ergo, only the system where all the wealth is owned by people needs to be studied.

A more interesting figure is the fractional distribution of wealth, with the normalized wealth w(x) plotted as a function of normalized population x (that thus runs from 0 to 1). Once again with the richest plotted on the right. See Figure below.


Figure 2: Relative wealth distribution functions: ‘ideal communist’ (dotted line. constant distribution), ‘ideal capitalist’ (one person owns all, dashed line) and ‘ideal’ functions (work-incentive optimized, solid line).

Every person x in this figure feels an incentive to work harder, because he or she wants to overtake the right-side neighbor and move to the right on the curve. We can define an incentive i(x) for work for person x as the derivative of the curve, divided by the curve itself (a person will work harder in proportion to the relative increase in wealth):

i(x) = (dw(x)/dx)/w(x) —– (1)

A ‘communistic’ (in the negative connotation) distribution is one in which everybody earns equally; that means that w(x) is constant, with the constant being one:

‘ideal’ communist: w(x) = 1.

and nobody has an incentive to work, i(x) = 0 ∀ x. However, in a utopic capitalist world, as shown, the distribution is ‘all on a big pile’. This is what mathematicians call a delta-function

‘ideal’ capitalist: w(x) = δ(x − 1),

and once again, the incentive is zero for all people, i(x) = 0. If you work, or don’t work, you get nothing. Except one person who, working or not, gets everything.

Thus, there is somewhere an ‘ideal curve’ w(x) that optimizes the sum of incentives I defined as the integral of i(x) over x.

I = ∫01 i(x)dx = ∫01 ((dw(x)/dx)/w(x)) dx = ∫x=0x=1 dw(x)/w(x) = ln[w(x)]|x=0x=1 —– (2)
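As written, the integral (2) telescopes: it depends only on the endpoint values, I = ln w(1) − ln w(0). A quick numerical check of this, with w(x) = e^(2x) chosen purely as an illustrative positive, increasing curve (the helper name is ours):

```python
import math

def incentive_integral(w, n=20000):
    """Midpoint-rule estimate of I = integral over [0,1] of (dw/dx)/w, as in (2)."""
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        dw = (w(x + h / 2) - w(x - h / 2)) / h   # central difference
        total += h * dw / w(x)
    return total

w = lambda x: math.exp(2.0 * x)   # illustrative positive, increasing curve
I = incentive_integral(w)
# telescoping: I = ln w(1) - ln w(0) = 2
```

The numerical value agrees with ln w(1) − ln w(0) to high precision, confirming that the total incentive is fixed by the floor and the peak of the curve.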

Which function w is that? Boundary conditions are

1. The total wealth is normalized: The integral of w(x) over x from 0 to 1 is unity.

∫01 w(x)dx = 1 —– (3)

2. Everybody has at least a minimal income, defined as the survival minimum. (A concept that actually many societies implement.) We can call this w0, defined as a percentage of the total wealth, to make the calculation easy (every year this parameter can be reevaluated, for instance when the total wealth increased, but not the minimum wealth needed to survive). Thus, w(0) = w0.

The curve also has an intrinsic parameter wmax. This represents the scale of the figure, is the result of the other boundary conditions, and is therefore not really a parameter as such. The function thus has two parameters: the minimal subsistence level w0 and the skewness b.

As an example, we can try an exponentially rising function with offset, forced to pass through the points (0, w0) and (1, wmax):

w(x) = w0 + (wmax − w0)(e^(bx) − 1)/(e^b − 1) —– (4)
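Since wmax is fixed by the normalization (3), it can be solved for directly: integrating (4) gives ∫01 w dx = w0 + (wmax − w0)(1/b − 1/(e^b − 1)) = 1. A small sketch (the helper names and the example values of b and w0 are our assumptions):

```python
import math

def wmax_for(b, w0):
    """Solve boundary condition (3), total wealth = 1, for wmax."""
    # integral over [0,1] of (e^(bx) - 1)/(e^b - 1) dx  =  1/b - 1/(e^b - 1)
    C = 1.0 / b - 1.0 / (math.exp(b) - 1.0)
    return w0 + (1.0 - w0) / C

def w(x, b, w0):
    """The trial curve (4), passing through (0, w0) and (1, wmax)."""
    wmax = wmax_for(b, w0)
    return w0 + (wmax - w0) * (math.exp(b * x) - 1.0) / (math.exp(b) - 1.0)
```

For example, with b = 5 and w0 = 0.1 the curve starts at w(0) = 0.1 and the total wealth integrates to one.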

An example of such a function is given in the above Figure. To analytically determine which function is ideal is very complicated, but it can easily be simulated in a genetic-algorithm way. In this, we start with a given distribution and make random mutations to it. If the total incentive for work goes up, we keep the new distribution. If not, we go back to the previous distribution.
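The mutate-and-keep scheme just described can be sketched as follows. This is a hill-climbing simplification of the genetic algorithm; the population size, step count, random-transfer mutation, and the discrete incentive (w(k+1) − w(k))/w(k) standing in for (1) are all our assumptions:

```python
import random

def total_incentive(w):
    # discrete analogue of (1)-(2): relative gain from overtaking one's neighbour
    ws = sorted(w)
    return sum((ws[k + 1] - ws[k]) / ws[k] for k in range(len(ws) - 1))

def evolve(n=30, steps=20000, seed=1):
    rng = random.Random(seed)
    w0 = 1.0 / (10 * n)              # survival minimum: 10% of the average wealth
    w = [1.0 / n] * n                # start from the 'communist' distribution
    best = total_incentive(w)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        d = rng.uniform(0.0, w[i] - w0)   # random transfer respecting the floor
        cand = list(w)
        cand[i] -= d
        cand[j] += d
        s = total_incentive(cand)
        if s > best:                 # keep a mutation only if total incentive rises
            w, best = cand, s
    return sorted(w), best

wealth, incentive = evolve()
```

Total wealth is conserved, nobody falls below the floor w0, and the accepted mutations only ever raise the total incentive; which local optimum the run ends in depends on the starting distribution and the random seed.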

The results are shown in figure 3 below for a 30-person population, with w0 = 10% of the average (w0 = 1/300 ≈ 0.33%).


Figure 3: Genetic algorithm results for the distribution of wealth (w) and incentive to work (i) in a liberal system where everybody only has money (wealth) as incentive. 

Depending on the starting distribution, the system winds up in different optima. If we start with the communistic distribution of figure 2, we wind up in a situation in which the distribution stays homogeneous, ‘everybody equal’, with the exception of two people: a ‘slave’, who earns the minimum wage and does nearly all the work, and a ‘party official’, who does not do much but gets a large part of the wealth. Everybody else is equally poor (total incentive/production equal to 21), w = 1/30 = 10w0, with most people doing nothing, nor being encouraged to do anything. The other situation we find when we start with a random distribution or a linearly increasing distribution. The final situation is shown in situation 2 of figure 3. It is equal to everybody getting the minimum wealth, w0, except the ‘banker’, who gets 90% (270 times more than the minimum), while nobody is doing anything, except, curiously, the penultimate person, whom we can call the ‘wheedler’, for cajoling the banker into giving him money. The total wealth is higher (156), but the average person gets less, w0.

Note that this isn’t necessarily an evolution of the distribution of wealth over time. Instead, it is a final, stable, distribution calculated with an evolutionary (‘genetic’) algorithm. Moreover, this analysis can be made within a country, analyzing the distribution of wealth between people of the same country, as well as between countries.

We thus find that a liberal system, moreover one in which people are motivated by the relative wealth increase they might attain, winds up with most of the wealth accumulated by one person, who does not necessarily do any work. This is consistent with the tendency of liberal capitalist societies to indeed have the capital and wealth accumulate in a single point, and consistent with Marx’s theories that predict it as well. A singularity of the distribution of wealth is what you get in a liberal capitalist society where personal wealth is the only driving force of people. Which is ironic, in a way, because by going only for personal wealth, nobody gets any of it, except the big leader. It is a form of Prisoner’s Dilemma.

Leibniz’s Compossibility and Compatibility


Leibniz believed in discovering a suitable logical calculus of concepts enabling its user to solve any rational question. Assuming this done, he was in a position to sketch the full ontological system – from monads and qualities to the real world.

Thus let some logical calculus of concepts (names?, predicates?) be given. Cn is its associated consequence operator, whereas – for any x – Th(x) is the Cn-theory generated by x.

Leibniz defined modal concepts by the following metalogical conditions:

M(x) :↔ ⊥ ∉ Th(x)

x is possible (its theory is consistent)

L(x) :↔ ⊥ ∈ Th(¬x)

x is necessary (its negation is impossible)

C(x,y) :↔ ⊥ ∉ Cn(Th(x) ∪ Th(y))

x and y are compossible (their common theory is consistent).

Immediately we obtain Leibnizian ”soundness” conditions:

C(x, y) ↔ C(y, x) Compossibility relation is symmetric.

M(x) ↔ C(x, x) Possibility means self-compossibility.

C(x, y) → M(x)∧M(y) Compossibility implies possibility.

When can the above implication be reversed?

Onto\logical construction

Observe that in the framework of combination ontology we have already defined M(x) in a way respecting M(x) ↔ C(x, x).

On the other hand, between MP( , ) and C( , ) there is another relation, more fundamental than compossibility. It is so-called compatibility relation. Indeed, putting

CP(x, y) :↔ MP(x, y) ∧ MP(y, x) – for compatibility, and C(x,y) :↔ M(x) ∧ M(y) ∧ CP(x,y) – for compossibility

we obtain a manageable compossibility relation obeying the above Leibniz’s ”soundness” conditions.
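These definitions can be made concrete in a toy calculus where a ‘concept’ is a finite set of literals and Th(x) is consistent iff no literal occurs together with its negation. This encoding is our assumption, a minimal stand-in for Leibniz’s intended calculus, not a reconstruction of it:

```python
def consistent(theory):
    """Bottom not in Th(x): no literal occurs together with its negation ('~')."""
    return not any('~' + lit in theory for lit in theory if not lit.startswith('~'))

def M(x):
    """x is possible: its theory is consistent."""
    return consistent(x)

def C(x, y):
    """x and y are compossible: their common theory is consistent."""
    return consistent(x | y)

# toy concepts: two of them contradict each other
rain, no_rain, cold = {'rain'}, {'~rain'}, {'cold'}
```

On all pairs of such concepts the three soundness conditions can be checked mechanically: C is symmetric, M(x) coincides with C(x, x), and C(x, y) implies M(x) ∧ M(y).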

Wholes are combinations of compossible collections, whereas possible worlds are obtained by maximalization of wholes.

Observe that we start with one basic ontological making: MP(x, y) – a modality more fundamental than Leibnizian compossibility, for compossibility is definable from it in two steps. Observe also that the above construction can be done for making-impossible, and for both basic ontological modalities together as well (producing a quite Hegelian output in this case!).

Meillassoux’s Principle of Unreason Towards an Intuition of the Absolute In-itself. Note Quote.


The principle of reason such as it appears in philosophy is a principle of contingent reason: not only does philosophical reason concern difference instead of identity, but the Principle of Sufficient Reason can also no longer be understood in terms of absolute necessity. In other words, Deleuze disconnects the Principle of Sufficient Reason from the ontotheological tradition no less than from its Heideggerian deconstruction. What remains then of Meillassoux’s criticism in After Finitude: An Essay on the Necessity of Contingency that Deleuze no less than Hegel hypostatizes or absolutizes the correlation between thinking and being and thus brings back a vitalist version of speculative idealism through the back door?

At stake in Meillassoux’s criticism of the Principle of Sufficient Reason is a double problem: the conditions of possibility of thinking and knowing an absolute and subsequently the conditions of possibility of rational ideology critique. The first problem is primarily epistemological: how can philosophy justify scientific knowledge claims about a reality that is anterior to our relation to it and that is hence not given in the transcendental object of possible experience (the arche-fossil)? This is a problem for all post-Kantian epistemologies that hold that we can only ever know the correlate of being and thought. Instead of confronting this weak correlationist position head on, however, Meillassoux seeks a solution in the even stronger correlationist position that denies not only the knowability of the in itself, but also its very thinkability or imaginability. Simplified: if strong correlationists such as Heidegger or Wittgenstein insist on the historicity or facticity (non-necessity) of the correlation of reason and ground in order to demonstrate the impossibility of thought’s self-absolutization, then the very force of their argument, if it is not to contradict itself, implies more than they are willing to accept: the necessity of the contingency of the transcendental structure of the for itself. As a consequence, correlationism is incapable of demonstrating itself to be necessary. This is what Meillassoux calls the principle of factiality or the principle of unreason. It says that it is possible to think of two things that exist independently of thought’s relation to them: contingency as such and the principle of non-contradiction. The principle of unreason thus enables the intellectual intuition of something that is absolutely in itself, namely the absolute impossibility of a necessary being. And this in turn implies the real possibility of the completely random and unpredictable transformation of all things from one moment to the next. 
Logically speaking, the absolute is thus a hyperchaos or something akin to Time in which nothing is impossible, except necessary beings or necessary temporal experiences such as the laws of physics.

There is, moreover, nothing mysterious about this chaos. Contingency, and Meillassoux consistently refers to this as Hume’s discovery, is a purely logical and rational necessity, since without the principle of non-contradiction not even the principle of factiality would be absolute. It is thus a rational necessity that puts the Principle of Sufficient Reason out of action, since it would be irrational to claim that it is a real necessity as everything that is is devoid of any reason to be as it is. This leads Meillassoux to the surprising conclusion that [t]he Principle of Sufficient Reason is thus another name for the irrational… The refusal of the Principle of Sufficient Reason is not the refusal of reason, but the discovery of the power of chaos harboured by its fundamental principle (non-contradiction). (Meillassoux 2007: 61) The principle of factiality thus legitimates or founds the rationalist requirement that reality be perfectly amenable to conceptual comprehension at the same time that it opens up [a] world emancipated from the Principle of Sufficient Reason (Meillassoux) but founded only on that of non-contradiction.

This emancipation brings us to the practical problem Meillassoux tries to solve, namely the possibility of ideology critique. Correlationism is essentially a discourse on the limits of thought for which the deabsolutization of the Principle of Sufficient Reason marks reason’s discovery of its own essential inability to uncover an absolute. Thus if the Galilean-Copernican revolution of modern science meant the paradoxical unveiling of thought’s capacity to think what there is regardless of whether thought exists or not, then Kant’s correlationist version of the Copernican revolution was in fact a Ptolemaic counterrevolution. Since Kant and even more since Heidegger, philosophy has been averse precisely to the speculative import of modern science as a formal, mathematical knowledge of nature. Its unintended consequence is therefore that questions of ultimate reasons have been dislocated from the domain of metaphysics into that of non-rational, fideist discourse. Philosophy has thus made the contemporary end of metaphysics complicit with the religious belief in the Principle of Sufficient Reason beyond its very thinkability. Whence Meillassoux’s counter-intuitive conclusion that the refusal of the Principle of Sufficient Reason furnishes the minimal condition for every critique of ideology, insofar as ideology cannot be identified with just any variety of deceptive representation, but is rather any form of pseudo-rationality whose aim is to establish that what exists as a matter of fact exists necessarily. In this way a speculative critique pushes skeptical rationalism’s relinquishment of the Principle of Sufficient Reason to the point where it affirms that there is nothing beneath or beyond the manifest gratuitousness of the given – nothing but the limitless and lawless power of its destruction, emergence, or persistence. 
Such an absolutizing even though no longer absolutist approach would be the minimal condition for every critique of ideology: to reject dogmatic metaphysics means to reject all real necessity, and a fortiori to reject the Principle of Sufficient Reason, as well as the ontological argument.

On the one hand, Deleuze’s criticism of Heidegger bears many similarities to that of Meillassoux when he redefines the Principle of Sufficient Reason in terms of contingent reason or with Nietzsche and Mallarmé: nothing rather than something such that whatever exists is a fiat in itself. His Principle of Sufficient Reason is the plastic, anarchic and nomadic principle of a superior or transcendental empiricism that teaches us a strange reason, that of the multiple, chaos and difference. On the other hand, however, the fact that Deleuze still speaks of reason should make us wary. For whereas Deleuze seeks to reunite chaotic being with systematic thought, Meillassoux revives the classical opposition between empiricism and rationalism precisely in order to attack the pre-Kantian, absolute validity of the Principle of Sufficient Reason. His argument implies a return to a non-correlationist version of Kantianism insofar as it relies on the gap between being and thought and thus upon a logic of representation that renders Deleuze’s Principle of Sufficient Reason unrecognizable, either through a concept of time, or through materialism.

Topological Drifts in Deleuze. Note Quote.

Brion Gysin: How do you get in… get into these paintings?

William Burroughs: Usually I get in by a port of entry, as I call it. It is often a face through whose eyes the picture opens into a landscape and I go literally right through that eye into that landscape. Sometimes it is rather like an archway… a number of little details or a special spot of colours makes the port of entry and then the entire picture will suddenly become a three-dimensional frieze in plaster or jade or some other precious material.

The word fornix means “an archway” or “vault” (in Rome, prostitutes could be solicited there). More directly, fornicatio means “done in the archway”; thus a euphemism for prostitution.

Diagrammatic praxis proposes a contractual (push, pull) approach in which the movement between abstract machine, biogram (embodied, inflected diagram), formal diagram (drawing of, drawing off) and artaffect (realized thing) is topologically immanent. It imagines the practice of writing, of this writing, interleaved with the mapping processes with which it folds and unfolds – forming, deforming and reforming both processes. The relations of non-relations that power the diagram, the thought intensities that resonate between fragments, between content and expression, the seeable and the sayable, the discursive and the non-discursive, mark entry points; portals of entry through which all points of the diagram pass – push, pull, fold, unfold – without the designation of arrival and departure, without the input/output connotations of a black boxed confection. Ports, as focal points of passage, attract lines of resistance or lines of flight through which the diagram may become both an effectuating concrete assemblage (thing) and remain outside the stratified zone of the audiovisual. It’s as if the port itself is a bifurcating point, a figural inflected archway. The port, as a bifurcation point of resistance (contra black box), modulates and changes the unstable, turbulent interplay between pure Matter and pure Function of the abstract machine. These ports are marked out, localized, situated, by the continuous movement of power-relations:

These power-relations … simultaneously local, unstable and diffuse, do not emanate from a central point or unique locus of sovereignty, but at each moment move from one point to another in a field of forces, marking inflections, resistances, twists and turns when one changes direction or retraces one’s steps… (Gilles Deleuze, Sean Hand-Foucault)

An inflection point, marked out by the diagram, is not a symmetrical form but the difference between concavity and convexity, a pure temporality, a “true atom of form, the true object of geography.” (Bernard Cache)


Figure: Left: A bifurcating event presented figurally as an archway, a port of entry through order and chaos. Right: Event/entry with inflexion points, points of suspension, of pure temporality, that gives a form “of an absolute exteriority that is not even the exteriority of any given interiority, but which arise from that most interior place that can be perceived or even conceived […] that of which the perceiving itself is radically temporal or transitory”. The passing through of passage.

Cache’s absolute exteriority is equivalent to Deleuze’s description of the Outside “more distant than any exterior […] ‘twisted’, folded and doubled by an Inside that is deeper than any interior, and alone creates the possibility of the derived relation between the interior and the exterior”. This folded and doubled interior is diagrammed by Deleuze in the folds chapter of Foucault.

Thinking does not depend on a beautiful interiority that reunites the visible and articulable elements, but is carried under the intrusion of an outside that eats into the interval and forces or dismembers the internal […] when there are only environments and whatever lies between them, when words and things are opened up by the environment without ever coinciding, there is a liberation of forces which come from the outside and exist only in a mixed up state of agitation, modification and mutation. In truth they are dice throws, for thinking involves throwing the dice. If the outside, farther away than any external world, is also closer than any internal world, is this not a sign that thought affects itself, by revealing the outside to be its own unthought element?

“It cannot discover the unthought […] without immediately bringing the unthought nearer to itself – or even, perhaps, without pushing it farther away, and in any case without causing man’s own being to undergo a change by the very fact, since it is deployed in the distance between them” (Gilles Deleuze, Sean Hand-Foucault)


Figure: Left: a simulation of Deleuze’s central marking in his diagram of the Foucaultian diagram. This is the line of the Outside as Fold. Right: To best express the relations of diagrammatic praxis between content and expression (theory and practice) the Fold figure needs to be drawn as a double Fold (“twice twice” as Massumi might say) – a folded möbius strip. Here the superinflections between inside/outside and content/expression provide transversal vectors.

A topology or topological becoming-shapeshift retains its connectivity, its interconnectedness, to preserve its autonomy as a singularity. All the points of all its matter reshape as difference in itself. A topology does not resemble itself. The möbius strip and the infamous torus-to-coffee-cup are examples of 2d and 3d topologies. Technically, a topological surface is totalized; it cannot comprise fragments cut or glued to produce a whole. Its change is continuous. It is not cut-copy-pasted. But the cut and its interval are requisite to an emergent new.

For Deleuze, the essence of meaning, the essence of essence, is best expressed in two infinitives: “to cut” and “to die” […] Definite tenses keeping company in time. In the slash between their future and their past: “to cut” as always timeless and alone (Massumi).

Add the individuating “to shift” to the infinitives that reside in the timeless zone of indetermination of future-past. Given the paradigm of the topological-becoming, how might we address writing in the age of copy-paste and hypertext? The seamless and the stitched? As potential is it diagram? A linguistic multiplicity whose virtual immanence is the metalanguage potentiality between the phonemes that gives rise to all language?


An overview diagram of diagrammatic praxis based on Deleuze’s diagram of the Foucaultian model shown below. The main modification is to the representation of the Fold. In the top figure, the Fold or zone of subjectification becomes a double-folded möbius strip.

Four folds of subjectification:

1. material part of ourselves which is to be surrounded and folded

2. the fold of the relation between forces: always according to a particular rule, the relation between forces is bent back in order to become a relation to oneself (rule: natural, divine, rational, aesthetic, etc.)

3. fold of knowledge constitutes the relation of truth to our being and our being to truth which will serve as the formal condition for any kind of knowledge

4. the fold of the outside itself is the ultimate fold: an ‘interiority of expectation’ from which the subject, in different ways, hopes for immortality, eternity, salvation, freedom or death or detachment.

Weil Conjectures. Note Quote.


Solving Diophantine equations, that is, giving integer solutions to polynomials, is often unapproachably difficult. Weil describes one strategy in a letter to his sister, the philosopher Simone Weil: look for solutions in richer fields than the rationals, perhaps fields of rational functions over the complex numbers. But these are quite different from the integers:

We would be badly blocked if there were no bridge between the two. And voilà god carries the day against the devil: this bridge exists; it is the theory of algebraic function fields over a finite field of constants.

A solution modulo 5 to a polynomial P(X, Y, ..Z) is a list of integers X, Y, ..Z making the value P(X, Y, ..Z) divisible by 5, or in other words equal to 0 modulo 5. For example, X² + Y² − 3 has no integer solutions. That is clear since X and Y would both have to be 0 or ±1, to keep their squares below 3, and no combination of those works. But it has solutions modulo 5 since, among others, 3² + 3² − 3 = 15 is divisible by 5. Solutions modulo a given prime p are easier to find than integer solutions and they amount to the same thing as solutions in the finite field of integers modulo p.
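Finding solutions modulo p is a finite search. A minimal brute-force sketch for the example above (the function name is ours):

```python
def solutions_mod(p, f=lambda x, y: x * x + y * y - 3):
    """All (x, y) with 0 <= x, y < p and f(x, y) congruent to 0 mod p."""
    return [(x, y) for x in range(p) for y in range(p) if f(x, y) % p == 0]

# X^2 + Y^2 - 3 has no integer solutions, but it does have solutions modulo 5
sols = solutions_mod(5)   # -> [(2, 2), (2, 3), (3, 2), (3, 3)]
```

The squares modulo 5 are {0, 1, 4}, and 4 + 4 ≡ 3 (mod 5), so the solutions are exactly the four pairs with both coordinates in {2, 3}.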

To see whether a list of polynomial equations P_i(X, Y, …, Z) = 0 has a solution modulo p, we need only check p different values for each variable. Even if p is impractically large, equations are more manageable modulo p. Going farther, we might look at equations modulo p but allow some irrationals, and ask how the number of solutions grows as we allow irrationals of higher and higher degree: roots of quadratic polynomials, roots of cubic polynomials, and so on. This is looking for solutions in all finite fields, as in Weil’s letter.

The key technical points about finite fields are these. For each prime number p, the integers modulo p form a field, written F_p. For each natural number r > 0 there is (up to isomorphism) just one field with p^r elements, written F_{p^r}, or F_q with q = p^r. It comes from F_p by adjoining the roots of a degree-r polynomial. These are all the finite fields. Trivially, then, for any natural number s > 0 there is just one field with q^s elements, namely F_{p^{rs}}, which we may write F_{q^s}. The union over all r of the F_{p^r} is the algebraic closure F̄_p. By Galois theory, the elements of F_{p^r} are exactly the fixed points of the r-th iterate of the Frobenius morphism, that is, of the map taking each x ∈ F̄_p to x^{p^r}.
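The fixed-point description can be checked directly in the simplest case r = 1, where the r-th Frobenius iterate is x → x^p and the statement reduces to Fermat’s little theorem: every element of F_p is fixed. A small Python check (the prime 7 is an arbitrary illustrative choice):

```python
# Elements of F_p are exactly the fixed points of the Frobenius map x -> x^p.
# For r = 1 this is Fermat's little theorem: x^p ≡ x (mod p) for all x.

p = 7  # arbitrary prime for illustration

frobenius_fixed = [x for x in range(p) if pow(x, p, p) == x]
print(frobenius_fixed == list(range(p)))  # True: all of F_p is fixed
```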

Take any good n-dimensional algebraic space (any smooth projective variety of dimension n) defined by integer polynomials over a finite field F_q. For each s ∈ N, let N_s be the number of its points defined over the extension field F_{q^s}. Define the zeta function Z(t) as an exponential in a formal variable t:

Z(t) = exp(∑_{s=1}^∞ N_s t^s / s)

The first Weil conjecture says Z(t) is a rational function:

Z(t) = P(t)/Q(t)

for integer polynomials P(t) and Q(t). This is a strong constraint on the numbers of solutions N_s. It means there are complex algebraic numbers a_1, …, a_i and b_1, …, b_j such that

N_s = (a_1^s + … + a_i^s) − (b_1^s + … + b_j^s)

And each algebraic conjugate of an a (resp. a b) is also an a (resp. a b).
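These formulas can be made concrete on the simplest example, the projective line over F_q, which has N_s = q^s + 1 points over F_{q^s}; there Z(t) = 1/((1 − t)(1 − qt)), so the a’s are 1 and q and there are no b’s. The following sketch (plain Python with exact arithmetic; q = 5 and the truncation order are arbitrary choices) builds the exponential series from the point counts and checks it agrees with the rational function:

```python
from fractions import Fraction

# Sketch (illustrative example, not from the source): the projective line over
# F_q has N_s = q^s + 1, and its zeta function is
#   1/((1 - t)(1 - q t)) = sum_k (1 + q + ... + q^k) t^k.
# We build exp(sum_s N_s t^s / s) as a power series and compare coefficients.

q, M = 5, 10  # field size and truncation order (arbitrary choices)

# coefficients c_s = N_s / s of log Z(t); c[0] is unused
c = [Fraction(0)] + [Fraction(q**s + 1, s) for s in range(1, M)]

# exponentiate the series via Z' = (log Z)' * Z, i.e.
#   k * z_k = sum_{j=1}^{k} j * c_j * z_{k-j}
z = [Fraction(1)]
for k in range(1, M):
    z.append(sum(j * c[j] * z[k - j] for j in range(1, k + 1)) / k)

expected = [sum(q**i for i in range(k + 1)) for k in range(M)]
print(z == expected)  # True: the exponential series is the rational function
```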

The second conjecture is a functional equation:

Z(1/(q^n t)) = ± q^{nE/2} t^E Z(t)

This says the operation x → q^n/x permutes the a’s (resp. the b’s). The third is a Riemann Hypothesis:

Z(t) = (P_1(t) P_3(t) ··· P_{2n−1}(t)) / (P_0(t) P_2(t) ··· P_{2n}(t))

where each P_k is an integer polynomial with all roots of absolute value q^{−k/2}. That means each a has absolute value q^k for some 0 ≤ k ≤ n, and each b has absolute value q^{(2k−1)/2} for some 1 ≤ k ≤ n.

Over it all is the conjectured link to topology. Let B_0, B_1, …, B_{2n} be the Betti numbers of the complex manifold defined by the same polynomials. That is, each B_k gives the number of k-dimensional holes or handles on the continuous space of complex-number solutions to the equations. And recall that an algebraically n-dimensional complex manifold is topologically 2n-dimensional. Then each P_k has degree B_k. And E is the Euler number of the manifold, the alternating sum

E = ∑_{k=0}^{2n} (−1)^k B_k
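On the projective line this circle of ideas closes up neatly: the complex points form the Riemann sphere, with Betti numbers B_0 = 1, B_1 = 0, B_2 = 1, so E = 2, n = 1, and the functional equation reads Z(1/(qt)) = q t^2 Z(t) for Z(t) = 1/((1 − t)(1 − qt)). A quick exact check at a few rational sample points (q = 5 is again an arbitrary illustrative choice):

```python
from fractions import Fraction

# Sketch (illustrative): for the projective line over F_q we have n = 1 and
# E = B_0 - B_1 + B_2 = 1 - 0 + 1 = 2, so the functional equation reads
#   Z(1/(q t)) = q t^2 Z(t),  with Z(t) = 1 / ((1 - t)(1 - q t)).

q = 5  # arbitrary prime power for illustration

def Z(t):
    return 1 / ((1 - t) * (1 - q * t))

for t in (Fraction(1, 2), Fraction(2, 3), Fraction(7, 11)):
    assert Z(1 / (q * t)) == q * t**2 * Z(t)

print("functional equation verified at sample points (sign +)")
```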

On its face the topology of a continuous manifold is worlds apart from arithmetic over finite fields. But the topology of this manifold tells how many a’s and b’s there are with each absolute value. This implies useful numerical approximations to the numbers of solutions N_s. Special cases of these conjectures, with aspects of the topology, were proved before Weil, and he proved more. All dealt with curves (1-dimensional) or hypersurfaces (defined by a single polynomial).

Weil presented the topology as motivating the conjectures for higher dimensional varieties. He especially pointed out how the whole series of conjectures would follow quickly if we could treat the spaces of finite field solutions as topological manifolds. The topological strategy was powerfully seductive but seriously remote from existing tools. Weil’s arithmetic spaces were not even precisely defined. To all appearances they would be finite or (over the algebraic closures of the finite fields) countable and so everywhere discontinuous. Topological manifold methods could hardly apply.

Anthropocosmism. Thought of the Day 20.0


Russian cosmism appeared as a sort of antithesis to the classical physicalist paradigm of thinking, which was based on a strict differentiation of man and nature. It made an attempt to revive the ontology of an integral vision that organically unites man and cosmos. These problems were discussed in both the scientific and the religious form of cosmism. In the religious form, N. Fedorov’s conception was the most significant one. Like other cosmists, he was not satisfied with the split of the Universe into man and nature as opposed entities. Such an opposition, in his opinion, condemned nature to thoughtlessness and destructiveness, and man to submission to the existing “evil world”. Fedorov maintained the ideas of a unity of man and nature, a connection between “soul” and cosmos in terms of regulation and resurrection. He offered a project of resurrection that was not understood only as a resurrection of ancestors, but contained at least two aspects: raising from the dead in a narrow, direct sense, and in a wider, metaphoric sense that includes nature’s ability of self-reconstruction. Fedorov’s resurrection project was connected with the idea of the human mind’s going to outer space. For him, “the Earth is not bound”, and “human activity cannot be restricted by the limits of the terrestrial planet”, which is only the starting point of this activity. One should look critically at the Utopian and fantastic elements of N. Fedorov’s views, which contain a considerable grain of mysticism, but there are nevertheless important rational moments in his conception: the quite clearly expressed idea of interconnection, the unity of man and cosmos, the idea of the correlation of the rational and moral elements of man, and the ideal of the unity of humanity as a planetary community of people.

But while religious cosmism was more notable for the fantastic and speculative character of its discourses, the natural-scientific trend, solving the problem of the interconnection between man and cosmos, paid special attention to the comprehension of scientific achievements that confirmed that interconnection. N. G. Kholodny developed these ideas in terms of anthropocosmism, opposing it to anthropocentrism. He wrote: “Having put himself in the place of God, man destroyed his natural connections with nature and condemned himself to a long solitary existence”. In Kholodny’s opinion, anthropocentrism passed through several stages in its development: at the first stage man did not oppose himself to nature; rather, he “humanized” the natural forces. At the second stage, extracting himself from nature, man looks at it as an object for research, the basis of his well-being. At the next stage man uplifts himself over nature and, basing himself in this activity on spiritual forces, studies the Universe. And the last stage is characterized by a crisis of the anthropocentric worldview, which starts to collapse under the influence of the achievements of science and philosophy. N. G. Kholodny was right in noting that in the past anthropocentrism had played a positive role: it freed man from his fright of nature by uplifting him over the latter. But gradually, beside anthropocentrism, there appeared sprouts of the new vision, anthropocosmism. Kholodny regarded anthropocosmism as a certain line of development of the human intellect, will and feelings, which led people to their aims. An essential element in anthropocosmism was the attempt to reconsider the question of man’s place in nature and of his interrelations with cosmos on the foundation of natural-scientific knowledge.