Coding Information While Operators Fail to Commute. Drunken Risibility.

Suppose ∇ is a derivative operator on the manifold M. Then there is a (unique) smooth tensor field R^a_{bcd} on M such that for all smooth fields ξ^b,

R^a_{bcd} ξ^b = −2∇_{[c}∇_{d]} ξ^a —– (1)

Uniqueness is immediate, since any two fields satisfying this condition would agree in their action on all vectors ξ^b at all points. For existence, we introduce a field R^a_{bcd} in such a way that it is clear that it satisfies the required condition. Let p be any point in M and let ξ′^b be any vector at p. We define R^a_{bcd} ξ′^b by considering any smooth field ξ^b on M that assumes the value ξ′^b at p and setting R^a_{bcd} ξ′^b = −2∇_{[c}∇_{d]} ξ^a. It suffices to verify that the choice of the field ξ^b plays no role. For this it suffices to show that if η^b is a smooth field on M that vanishes at p, then ∇_{[c}∇_{d]} η^b necessarily vanishes at p as well. (We can then apply this result, taking η^b to be the difference between any two candidates for ξ^b.)

The usual argument works. Let λ_a be any smooth field on M. Then we have,

0 = ∇_{[c}∇_{d]}(η^a λ_a) = (∇_{[c} η^{|a|})(∇_{d]} λ_a) + η^a ∇_{[c}∇_{d]} λ_a + (∇_{[c} λ_{|a|})(∇_{d]} η^a) + λ_a ∇_{[c}∇_{d]} η^a —– (2)

It is to be noted that the vertical lines around the index a in the first and third terms of the final sum indicate that it is not to be included in the anti-symmetrization. Now the first and third terms in that sum cancel each other. And the second vanishes at p, since η^a does. So we have 0 = λ_a ∇_{[c}∇_{d]} η^a at p. But the field λ_a can be chosen so that it assumes any particular value at p. So ∇_{[c}∇_{d]} η^a = 0 at p.


R^a_{bcd} is called the Riemann curvature tensor field (associated with ∇). It codes information about the degree to which the operators ∇_c and ∇_d fail to commute.
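As a concrete check, here is a small sympy sketch (my own illustration, not from the text) that verifies equation (1) componentwise on the round 2-sphere, for an arbitrary smooth vector field:

    import sympy as sp

    # Unit 2-sphere: coordinates (theta, phi), metric diag(1, sin^2 theta)
    th, ph = sp.symbols('theta phi')
    x = [th, ph]
    g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
    ginv = g.inv()
    n = 2

    # Christoffel symbols Gamma^a_{bc} of the Levi-Civita connection
    Gamma = [[[sum(ginv[a, e]*(sp.diff(g[e, b], x[c]) + sp.diff(g[e, c], x[b])
                   - sp.diff(g[b, c], x[e]))/2 for e in range(n))
               for c in range(n)] for b in range(n)] for a in range(n)]

    # An arbitrary smooth vector field xi^a
    xi = [sp.Function('f1')(th, ph), sp.Function('f2')(th, ph)]

    # D[a][c] = nabla_c xi^a
    D = [[sp.diff(xi[a], x[c]) + sum(Gamma[a][c][b]*xi[b] for b in range(n))
          for c in range(n)] for a in range(n)]

    def cov2(a, c, d):
        # nabla_c nabla_d xi^a, treating D as a (1,1) tensor field
        return (sp.diff(D[a][d], x[c])
                + sum(Gamma[a][c][e]*D[e][d] for e in range(n))
                - sum(Gamma[e][c][d]*D[a][e] for e in range(n)))

    def riem(a, b, c, d):
        # Riemann tensor in the sign convention of equation (1)
        return (sp.diff(Gamma[a][c][b], x[d]) - sp.diff(Gamma[a][d][b], x[c])
                + sum(Gamma[a][d][e]*Gamma[e][c][b] - Gamma[a][c][e]*Gamma[e][d][b]
                      for e in range(n)))

    for a in range(n):
        for c in range(n):
            for d in range(n):
                lhs = sum(riem(a, b, c, d)*xi[b] for b in range(n))
                rhs = -(cov2(a, c, d) - cov2(a, d, c))  # = -2 nabla_[c nabla_d] xi^a
                assert sp.simplify(sp.expand(lhs - rhs)) == 0
    print("equation (1) verified on the 2-sphere")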

Transcendentally Realist Modality. Thought of the Day 78.1


Let us start at the beginning first! Though the fact is not mentioned in Genesis, the first thing God said on the first day of creation was ‘Let there be necessity’. And there was necessity. And God saw necessity, that it was good. And God divided necessity from contingency. And only then did He say ‘Let there be light’. Several days later, Adam and Eve were introducing names for the animals into their language, and during a break between the fish and the birds, introduced also into their language modal auxiliary verbs, or devices that would be translated into English using modal auxiliary verbs, and rules for their use, rules according to which it can be said of some things that they ‘could’ have been otherwise, and of other things that they ‘could not’. In so doing they were merely putting labels on a distinction that was no more their creation than were the fishes of the sea or the beasts of the field or the birds of the air.

And here is the rival view. The failure of Genesis to mention any command ‘Let there be necessity’ is to be explained simply by the fact that no such command was issued. We have no reason to suppose that the language in which God speaks to the angels contains modal auxiliary verbs or any equivalent device. Sometime after the Tower of Babel some tribes found that their purposes would be better served by introducing into their language certain modal auxiliary verbs, and fixing certain rules for their use. When we say that this is necessary while that is contingent, we are applying such rules, rules that are products of human, not divine intelligence.

This theological language would have been the natural way for seventeenth or eighteenth century philosophers, who nearly all were or professed to be theists or deists, to discuss the matter. For many today, such language cannot be literally accepted, and if it is taken only metaphorically, it is at best a figurative way of framing the question of whether the ‘origin’ of necessity lies outside us or within us. So let us drop the theological language, and try again.

Well, here is the first view: ultimately, reality as it is in itself, independently of our attempts to conceptualize and comprehend it, contains both facts about what is, and superfacts about what not only is but had to have been. Our modal usages, for instance, the distinction between the simple indicative ‘is’ and the construction ‘had to have been’, simply reflect this fundamental distinction in the world, a distinction that is and from the beginning always was there, independently of us and our concerns.

And here is the second view: we have reasons, connected with our various purposes in life, to use certain words, including ‘would’ and ‘might’, in certain ways, and thereby to make certain distinctions. The distinction between those things in the world that would have been no matter what and those that might have failed to be had circumstances been otherwise is a projection of the distinctions made in our language. Our saying there were necessities there before us is a retroactive application to the pre-human world of a way of speaking invented and created by human beings in order to solve human problems.

Well, that’s the second try. With it, even if one has gotten rid of theology, one has unfortunately not gotten rid of all metaphors. The key remaining metaphor is the optical one: reflection vs projection. Perhaps the attempt should be to get rid of all metaphors, and admit that the two views are not so much philosophical theses or doctrines as ‘metaphilosophical’ attitudes or orientations: the stance that finds the ‘reflection’ metaphor congenial, and the stance that finds the ‘projection’ metaphor congenial. So, let us try a third time to describe the distinction between the two outlooks in literal terms, avoiding optics as well as theology.

To begin with, both sides grant that there is a correspondence or parallelism between two items. On the one hand, there are facts about the contrast between what is necessary and what is contingent. On the other hand, there are facts about our usage of modal auxiliary verbs such as ‘would’ and ‘might’, and these include, for instance, the fact that we have no use for questions of the form ‘Would 29 still have been a prime number if such-and-such?’ but may have use for questions of the form ‘Would 29 still have been the number of years it takes for Saturn to orbit the sun if such-and-such?’ The difference between the two sides concerns the order of explanation of the relation between the two parallel ranges of facts.

And what is meant by that? Well, both sides grant that ‘29 is necessarily prime’, for instance, is a proper thing to say, but they differ in the explanation why it is a proper thing to say. Asked why, the first side will say that ultimately it is simply because 29 is necessarily prime. That makes the proposition that 29 is necessarily prime true, and since the sentence ‘29 is necessarily prime’ expresses that proposition, it is true also, and a proper thing to say. The second side will say instead that ‘29 is necessarily prime’ is a proper thing to say because there is a rule of our language according to which it is a proper thing to say. This formulation of the difference between the two sides gets rid of metaphor, though it does put an awful lot of weight on the perhaps fragile ‘why’ and ‘because’.

Note that the adherents of the second view need not deny that 29 is necessarily prime. On the contrary, having said that the sentence ‘29 is necessarily prime’ is, per rules of our language, a proper thing to say, they will go on to say it. Nor need the adherents of the first view deny that recognition of the propriety of saying ‘29 is necessarily prime’ is enshrined in a rule of our language. The adherents of the first view need not even deny that proximately, as individuals, we learn that ‘29 is necessarily prime’ is a proper thing to say by picking up the pertinent rule in the course of learning our language. But the adherents of the first view will maintain that the rule itself is only proper because collectively, as the creators of the language, we or our remote ancestors have, in setting up the rule, managed to achieve correspondence with a pre-existing fact, or rather, a pre-existing superfact, the superfact that 29 is necessarily prime. The difference between the two views lies, again, in the order of explanation.

As regards labels for the two sides, or ‘metaphilosophical’ stances, rather than inventing new ones, we may simply take two of the most overworked terms in the philosophical lexicon and give them one more job to do, calling the reflection view ‘realism’ about modality, and the projection view ‘pragmatism’. That at least will be easy to remember, since ‘realism’ and ‘reflection’ begin with the same first two letters, as do ‘pragmatism’ and ‘projection’. The realist/pragmatist distinction has bearing across a range of issues and problems, and above all it has bearing on the meta-issue of which issues are significant. For the two sides will, or ought to, recognize quite different questions as the central unsolved problems in the theory of modality.

For those on the realist side, the old problem of the ultimate source of our knowledge of modality remains, even if it is granted that the proximate source lies in knowledge of linguistic conventions. For knowledge of linguistic conventions constitutes knowledge of a reality independent of us only insofar as our linguistic conventions reflect, at least to some degree, such an ultimate reality. So for the realist the problem remains of explaining how such degree of correspondence as there is between distinctions in language and distinctions in the world comes about. If the distinction in the world is something primary and independent, and not a mere projection of the distinction in language, then how the distinction in language comes to be even imperfectly aligned with the distinction in the world remains to be explained. For it cannot be said that we have faculties responsive to modal facts independent of us – not in any sense of ‘responsive’ implying that if the facts had been different, then our language would have been different, since modal facts couldn’t have been different. What then is the explanation? This is the problem of the epistemology of modality as it confronts the realist, and addressing it is or ought to be at the top of the realist agenda.

As for the pragmatist side, a chief argument of thinkers from Kant to Ayer and Strawson and beyond for their anti-realist stance has been precisely that if the distinction we perceive in reality is taken to be merely a projection of a distinction created by ourselves, then the epistemological problem dissolves. That seems more like a reason for hoping the Kantian or Ayerite or Strawsonian view is the right one, than for believing that it is; but in any case, even supposing the pragmatist view is the right one, and the problems of the epistemology of modality are dissolved, still the pragmatist side has an important unanswered question of its own to address. The pragmatist account begins by saying that we have certain reasons, connected with our various purposes in life, to use certain words, including ‘would’ and ‘might’, in certain ways, and thereby to make certain distinctions. What the pragmatist owes us is an account of what these purposes are, and how the rules of our language help us to achieve them. Addressing that issue is or ought to be at the top of the pragmatists’ to-do list.

While the positivist Ayer dismisses all metaphysics, the ordinary-language philosopher Strawson distinguishes good metaphysics, which he calls ‘descriptive’, from bad metaphysics, which he calls ‘revisionary’, but which might rather be called ‘transcendental’ (without intending any specifically Kantian connotations). Descriptive metaphysics aims to provide an explicit account of our ‘conceptual scheme’, of the most general categories of commonsense thought, as embodied in ordinary language. Transcendental metaphysics aims to get beyond or behind all merely human conceptual schemes and representations to ultimate reality as it is in itself, an aim that Ayer and Strawson agree is infeasible and probably unintelligible. The descriptive/transcendental divide in metaphysics is a paradigmatically ‘metaphilosophical’ issue, one about what philosophy is about. Realists about modality are paradigmatic transcendental metaphysicians. Pragmatists must in the first instance be descriptive metaphysicians, since we must to begin with understand much better than we currently do how our modal distinctions work and what work they do for us, before proposing any revisions or reforms. And so the difference between realists and pragmatists goes beyond the question of what issue should come first on the philosopher’s agenda, being as it is an issue about what philosophical agendas are about.

The Mystery of Modality. Thought of the Day 78.0


The ‘metaphysical’ notion of what would have been no matter what (the necessary) was conflated with the epistemological notion of what independently of sense-experience can be known to be (the a priori), which in turn was identified with the semantical notion of what is true by virtue of meaning (the analytic), which in turn was reduced to a mere product of human convention. And what motivated these reductions?

The mystery of modality, for early modern philosophers, was how we can have any knowledge of it. Here is how the question arises. We think that when things are some way, in some cases they could have been otherwise (29 might not have been the number of years it takes Saturn to orbit the sun), and in other cases they couldn’t (29 could not have failed to be prime). That is the modal distinction between the contingent and the necessary.

How do we know that the examples are examples of that of which they are supposed to be examples? And why should this question be considered a difficult problem, a kind of mystery? Well, that is because, on the one hand, when we ask about most other items of purported knowledge how it is we can know them, sense-experience seems to be the source, or anyhow the chief source of our knowledge, but, on the other hand, sense-experience seems able only to provide knowledge about what is or isn’t, not what could have been or couldn’t have been. How do we bridge the gap between ‘is’ and ‘could’? The classic statement of the problem was given by Immanuel Kant, in the introduction to the second or B edition of his first critique, The Critique of Pure Reason: ‘Experience teaches us that a thing is so, but not that it cannot be otherwise.’

Note that this formulation allows that experience can teach us that a necessary truth is true; what it is not supposed to be able to teach is that it is necessary. The problem becomes more vivid if one adopts the language that was once used by Leibniz, and much later re-popularized by Saul Kripke in his famous work on model theory for formal modal systems, the usage according to which the necessary is that which is ‘true in all possible worlds’. In these terms the problem is that the senses only show us this world, the world we live in, the actual world as it is called, whereas when we claim to know about what could or couldn’t have been, we are claiming knowledge of what is going on in some or all other worlds. For that kind of knowledge, it seems, we would need a kind of sixth sense, or extrasensory perception, or nonperceptual mode of apprehension, to see beyond the world in which we live to these various other worlds.
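In the now-standard relational semantics, this usage comes down to a single clause (a textbook formulation, added here for reference, not a quotation from Kripke):

w ⊨ □φ ⟺ ∀w′ (wRw′ → w′ ⊨ φ)

that is, ‘necessarily φ’ holds at a world w just in case φ holds at every world w′ accessible from w.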

Kant concludes that our knowledge of necessity must be what he calls a priori knowledge or knowledge that is ‘prior to’ or before or independent of experience, rather than what he calls a posteriori knowledge or knowledge that is ‘posterior to’ or after or dependent on experience. And so the problem of the origin of our knowledge of necessity becomes for Kant the problem of the origin of our a priori knowledge.

Well, that is not quite the right way to describe Kant’s position, since there is one special class of cases where Kant thinks it isn’t really so hard to understand how we can have a priori knowledge. He doesn’t think all of our a priori knowledge is mysterious, but only most of it. He distinguishes what he calls analytic from what he calls synthetic judgments, and holds that a priori knowledge of the former is unproblematic, since it is not really knowledge of external objects, but only knowledge of the content of our own concepts, a form of self-knowledge.

We can generate any number of examples of analytic truths by the following three-step process. First, take a simple logical truth of the form ‘Anything that is both an A and a B is a B’, for instance, ‘Anyone who is both a man and unmarried is unmarried’. Second, find a synonym C for the phrase ‘thing that is both an A and a B’, for instance, ‘bachelor’ for ‘one who is both a man and unmarried’. Third, substitute the shorter synonym for the longer phrase in the original logical truth to get the truth ‘Any C is a B’, or in our example, the truth ‘Any bachelor is unmarried’. Our knowledge of such a truth seems unproblematic because it seems to reduce to our knowledge of the meanings of our own words.
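The recipe is mechanical enough to render as a toy program (a playful sketch of the three steps, nothing more):

    # Step 1: a simple logical truth of the form
    # 'Anything that is both an A and a B is a B'.
    A, B = "a man", "unmarried"
    logical_truth = f"Anyone who is both {A} and {B} is {B}"
    # Step 2: a synonym C for 'one who is both a man and unmarried'.
    C = "bachelor"
    # Step 3: substitute the shorter synonym into the logical truth.
    analytic_truth = f"Any {C} is {B}"
    print(logical_truth)   # Anyone who is both a man and unmarried is unmarried
    print(analytic_truth)  # Any bachelor is unmarried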

So the problem for Kant is not exactly how knowledge a priori is possible, but more precisely how synthetic knowledge a priori is possible. Kant thought we do have examples of such knowledge. Arithmetic, according to Kant, was supposed to be synthetic a priori, and geometry, too – all of pure mathematics. In his Prolegomena to Any Future Metaphysics, Kant listed ‘How is pure mathematics possible?’ as the first question for metaphysics, for the branch of philosophy concerned with space, time, substance, cause, and other grand general concepts – including modality.

Kant offered an elaborate explanation of how synthetic a priori knowledge is supposed to be possible, an explanation reducing it to a form of self-knowledge, but later philosophers questioned whether there really were any examples of the synthetic a priori. Geometry, so far as it is about the physical space in which we live and move – and that was the original conception, and the one still prevailing in Kant’s day – came to be seen as, not synthetic a priori, but rather a posteriori. The mathematician Carl Friedrich Gauß had already come to suspect that geometry is a posteriori, like the rest of physics. Since the time of Einstein in the early twentieth century the a posteriori character of physical geometry has been the received view (whence the need for border-crossing from mathematics into physics if one is to pursue the original aim of geometry).

As for arithmetic, the logician Gottlob Frege in the late nineteenth century claimed that it was not synthetic a priori, but analytic – of the same status as ‘Any bachelor is unmarried’, except that to obtain something like ‘29 is a prime number’ one needs to substitute synonyms in a logical truth of a form much more complicated than ‘Anything that is both an A and a B is a B’. This view was subsequently adopted by many philosophers in the analytic tradition of which Frege was a forerunner, whether or not they immersed themselves in the details of Frege’s program for the reduction of arithmetic to logic.

Once Kant’s synthetic a priori has been rejected, the question of how we have knowledge of necessity reduces to the question of how we have knowledge of analyticity, which in turn resolves into a pair of questions: On the one hand, how do we have knowledge of synonymy, which is to say, how do we have knowledge of meaning? On the other hand, how do we have knowledge of logical truths? As to the first question, presumably we acquire knowledge, explicit or implicit, conscious or unconscious, of meaning as we learn to speak; by the time we are able to ask the question whether this is a synonym of that, we have the answer. But what about knowledge of logic? That question didn’t loom large in Kant’s day, when only a very rudimentary logic existed, but after Frege vastly expanded the realm of logic – only by doing so could he find any prospect of reducing arithmetic to logic – the question loomed larger.

Many philosophers, however, convinced themselves that knowledge of logic also reduces to knowledge of meaning, namely, of the meanings of logical particles, words like ‘not’ and ‘and’ and ‘or’ and ‘all’ and ‘some’. To be sure, there are infinitely many logical truths, in Frege’s expanded logic. But they all follow from or are generated by a finite list of logical rules, and philosophers were tempted to identify knowledge of the meanings of logical particles with knowledge of rules for using them: Knowing the meaning of ‘or’, for instance, would be knowing that ‘A or B’ follows from A and follows from B, and that anything that follows both from A and from B follows from ‘A or B’. So in the end, knowledge of necessity reduces to conscious or unconscious knowledge of explicit or implicit semantical rules or linguistic conventions or whatever.
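These two rules are, as it happens, exactly the introduction and elimination rules for disjunction in a modern proof assistant; a minimal sketch in Lean (my own illustration, not from the text):

    -- 'A or B' follows from A, and follows from B (introduction)
    example (A B : Prop) (ha : A) : A ∨ B := Or.inl ha
    example (A B : Prop) (hb : B) : A ∨ B := Or.inr hb
    -- anything that follows both from A and from B follows from 'A or B' (elimination)
    example (A B C : Prop) (h : A ∨ B) (hac : A → C) (hbc : B → C) : C :=
      Or.elim h hac hbc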

Such is the sort of picture that had become the received wisdom in philosophy departments in the English speaking world by the middle decades of the last century. For instance, A. J. Ayer, the notorious logical positivist, and P. F. Strawson, the notorious ordinary-language philosopher, disagreed with each other across a whole range of issues, and for many mid-century analytic philosophers such disagreements were considered the main issues in philosophy (though some observers would speak of the ‘narcissism of small differences’ here). And people like Ayer and Strawson in the 1920s through 1960s would sometimes go on to speak as if linguistic convention were the source not only of our knowledge of modality, but of modality itself, and go on further to speak of the source of language lying in ourselves. Individually, as children growing up in a linguistic community, or foreigners seeking to enter one, we must consciously or unconsciously learn the explicit or implicit rules of the communal language as something with a source outside us to which we must conform. But by contrast, collectively, as a speech community, we do not so much learn as create the language with its rules. And so if the origin of modality, of necessity and its distinction from contingency, lies in language, it therefore lies in a creation of ours, and so in us. ‘We, the makers and users of language’ are the ground and source and origin of necessity. Well, this is not a literal quotation from any one philosophical writer of the last century, but a pastiche of paraphrases of several.

Grothendieck’s Universes and Wiles Proof (Fermat’s Last Theorem). Thought of the Day 77.0


In formulating the general theory of cohomology Grothendieck developed the concept of a universe – a collection of sets large enough to be closed under any operation that arose. Grothendieck proved that the existence of a single universe is equivalent over ZFC to the existence of a strongly inaccessible cardinal. More precisely, U is the set V_α of all sets with rank below α, for some uncountable strongly inaccessible cardinal α.
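For reference, the standard closure conditions on a Grothendieck universe U (the textbook definition, not spelled out in the text) are:

x ∈ u ∈ U ⟹ x ∈ U (transitivity)
u, v ∈ U ⟹ {u, v} ∈ U (pairing)
u ∈ U ⟹ 𝒫(u) ∈ U (power set)
I ∈ U and u_i ∈ U ∀ i ∈ I ⟹ ⋃_{i∈I} u_i ∈ U (unions of small families)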

Colin McLarty summarised the general situation:

Large cardinals as such were neither interesting nor problematic to Grothendieck and this paper shares his view. For him they were merely legitimate means to something else. He wanted to organize explicit calculational arithmetic into a geometric conceptual order. He found ways to do this in cohomology and used them to produce calculations which had eluded a decade of top mathematicians pursuing the Weil conjectures. He thereby produced the basis of most current algebraic geometry and not only the parts bearing on arithmetic. His cohomology rests on universes but weaker foundations also suffice at the loss of some of the desired conceptual order.

The applications of cohomology theory implicitly rely on universes. Most number theorists regard the applications as requiring much less than their ‘on their face’ strength and in particular believe the large cardinal appeals are ‘easily eliminable’. There are in fact two issues. McLarty writes:

Wiles’s proof uses hard arithmetic some of which is on its face one or two orders above PA, and it uses functorial organizing tools some of which are on their face stronger than ZFC.

There are two current programs for verifying in detail the intuition that the formal requirements for Wiles’ proof of Fermat’s last theorem can be substantially reduced. On the one hand, McLarty’s current work aims to reduce the ‘on their face’ strength of the results in cohomology from large cardinal hypotheses to finite order arithmetic. On the other hand Macintyre aims to reduce the ‘on their face’ strength of results in hard arithmetic to PA. These programs may be complementary, or a full implementation of Macintyre’s might obviate the first.

McLarty reduces

  1. ‘all of SGA (Revêtements Étales et Groupe Fondamental)’ to Bounded Zermelo plus a Universe.
  2. ‘the currently existing applications’ to Bounded Zermelo itself, thus the consistency strength of simple type theory. The Grothendieck duality theorem and others like it become theorem schema.

The essential insight of McLarty’s papers on cohomology is the role of replacement in giving strength to the universe hypothesis. A ZC-universe is defined to be a transitive set U modeling ZC such that every subset of an element of U is itself an element of U. He remarks that any V_α for α a limit ordinal is provable in ZFC to be a ZC-universe. McLarty then asserts that the essential use of replacement in the original Grothendieck formulation is to prove: for an arbitrary ring R every module over R embeds in an injective R-module, and thus injective resolutions exist for all R-modules. But he gives a proof in a system with the proof-theoretic strength of finite order arithmetic that every sheaf of modules on any small site has an injective resolution.

Angus Macintyre dismisses with little comment the worries about the use of ‘large-structure’ tools in Wiles’ proof. He begins his appendix,

At present, all roads to a proof of Fermat’s Last Theorem pass through some version of a Modularity Theorem (generically MT) about elliptic curves defined over Q . . . A casual look at the literature may suggest that in the formulation of MT (or in some of the arguments proving whatever version of MT is required) there is essential appeal to higher-order quantification, over one of the following.

He then lists such objects as C, modular forms, Galois representations … and summarises that a superficial formulation of MT would be Π^1_m for some small m. But he continues,

I hope nevertheless that the present account will convince all except professional sceptics that MT is really Π^0_1.

There then follows a 13-page highly technical sketch of an argument for the proposition that MT can be expressed by a sentence in Π^0_1, along with a less-detailed strategy for proving MT in PA.

Macintyre’s complexity analysis is in traditional proof-theoretic terms. But his remark that ‘genus’ is a more useful geometric classification of curves than the syntactic notion of degree suggests that other criteria may be relevant. McLarty’s approach is not really a meta-theorem, but a statement that there was only one essential use of replacement and it can be eliminated. In contrast, Macintyre argues that ‘apparent second order quantification’ can be replaced by first order quantification. But the argument requires deep understanding of the number theory for each replacement in a large number of situations. Again, there is no general theorem that this type of result is provable in PA.

Welfare Economics, or Social Psychic Wellbeing. Note Quote.

monopoly_welfare_1

The economic system is a social system in which commodities are exchanged. Sets of these commodities can be represented by vectors x within a metric space X contained within the non-negative orthant of a Euclidean space ℝ^N_+ of dimensionality N equal to the number of such commodities.

An allocation {x_i}_{i∈N} ⊂ X ⊂ ℝ^N_+ of commodities in society is a set of vectors x_i representing the commodities allocated within the economic system to each individual i ∈ N.

In questions of welfare economics, at least in all practical policy matters, the state of society is equated with this allocation, that is, s = {x_i}_{i∈N}, and the set of all possible information concerning the economic state of society is S = X. It is typically taken to be the case that the individual’s preference-information is simply their allocation x_i, so that s_i = x_i. The concept of Pareto efficiency is thus narrowed to “neoclassical Pareto efficiency”, after the school of economic thought in which it originates, and to distinguish it from the weaker criterion.

An allocation {x_i}_{i∈N} is said to be neoclassical Pareto efficient iff ∄ {x′_i}_{i∈N} ⊂ X & i ∈ N : x′_i ≻_i x_i & x′_j ≽_j x_j ∀ j ≠ i ∈ N.

A movement between two allocations, {x_i}_{i∈N} → {x′_i}_{i∈N}, is called a neoclassical Pareto improvement iff ∃ i ∈ N : x′_i ≻_i x_i & x′_j ≽_j x_j ∀ j ≠ i ∈ N.

For technical reasons it is almost always in practice assumed for simplicity that individual preference relations are monotonically increasing across the space of commodities.

If individual preferences are monotonically increasing then x′_i ≽_i x_i ⇐⇒ x′_i ≥ x_i, and x′_i ≻_i x_i ⇐⇒ x′_i > x_i.

This is problematic, because a normative economics guided by the principle of implementing a decision if it yields a neoclassical Pareto improvement where individuals have such preference relations above leads to the following situation.

Suppose that individuals’ preference-information is their own allocation of commodities, and that their preferences are monotonically increasing. Take one individual j ∈ N and an initial allocation {x_i}_{i∈N}.

– A series of movements between allocations {{x_i^t}_{i∈N} → {x′_i^t}_{i∈N}}_{t=1}^T such that x′^t_{i≠j} = x^t_{i≠j} ∀ t and x′^t_j > x^t_j ∀ t, and therefore such that x_j − x_i → ∞ ∀ i ≠ j ∈ N, consists entirely of neoclassical Pareto improvements. Furthermore, if these movements are made possible only by the discovery of new commodities, each individual state in the movement is neoclassical Pareto efficient prior to the next discovery if the first allocation was neoclassical Pareto efficient.
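A minimal sketch of this situation (my own illustration, with componentwise ≥ standing in for the monotonically increasing preferences assumed above):

    import numpy as np

    # Neoclassical Pareto improvement under monotone preferences: no one's
    # bundle shrinks in any commodity, and someone's strictly grows somewhere.
    def is_pareto_improvement(x, x_new):
        no_one_worse = all((x_new[i] >= x[i]).all() for i in x)
        someone_better = any((x_new[i] >= x[i]).all() and (x_new[i] > x[i]).any()
                             for i in x)
        return no_one_worse and someone_better

    # Individual j = 1 is made ever "richer" while everyone else stays fixed:
    # every step in the series passes the test.
    x = {1: np.array([1.0, 1.0]), 2: np.array([1.0, 1.0])}
    for t in range(10):
        x_new = {1: x[1] + np.array([1.0, 0.0]), 2: x[2].copy()}
        assert is_pareto_improvement(x, x_new)
        x = x_new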

Admittedly perhaps not to the economic theorist, but to most this seems a rather dubious outcome. It means that if we are guided by neoclassical Pareto efficiency it is acceptable, indeed desirable, that one individual within society be made increasingly “richer” without end and without increasing the wealth of others, provided only that the wealth of others does not decrease. The same result would hold if instead of an individual, we made a whole group, or indeed the whole of society “better off”, without making anyone else “worse off”.

Even the most devoted disciple of Ayn Rand would find this situation dubious, for there is no requirement that the individual in question be in some sense “deserving” of their riches. But it is perfectly logically consistent with Pareto optimality if individual preferences concern only their own allocation and are monotonically increasing. So what is it that is strange here? What generates this odd condonation? It is the narrowing of that which the polity cares about to each individual allocation alone, independent of others. Neoclassical Pareto improvements are distribution-invariant because each member of the polity is supposed to care only about their own individual allocation x_i ∈ {x_i}_{i∈N}, rather than broader states of society s_i ⊂ s as they see it.

To avoid such awkward results, the economist may move from the preference-axiomatic concept of Pareto efficiency to embrace utilitarianism. The policy criterion (actually not immediately representative of Bentham’s surprisingly subtle statement) is then the maximisation over allocations of some combination W(x) = W({u_i(x_i)}_{i∈N}) of individual utilities u_i(x_i): the “social psychic wellbeing” metric known as the Social Welfare Function.

In theory, the maximisation of W(x) would, given the “right” assumptions on the combination method W(·) (sum, multiplication, maximin etc.) and utilities (concavity, monotonicity, independence etc.), fail to condone a distribution of commodities x as extreme as that discussed above, by dint of its failure to maximise social welfare W(x). But to obtain this egalitarian sensitivity to the distribution of income, three properties of Social Welfare Functions are introduced which prove fatal to the a-politicality of the economist’s policy advice, and which introduce presuppositions that must lie naked upon the political passions of the economist, so much more indecently for their hazy concealment under the technicalistic canopy of functional mathematics.

Firstly, it is so famous a result as to be called the “third theorem of welfare economics” that any such function W(·) as has certain “uncontroversially” desirable technical properties will impose upon the polity N the preferences of a dictator i ∈ N within it. The preference of one individual i ∈ N will serve to determine the preference indicated by W(x) between different states of society. In practice it is the preferences of the economist, who decides upon the form of W(·) and thus imposes their particular political passions (be they egalitarian or otherwise) upon policy, deeming what is “socially optimal” by the different weightings assigned to individual utilities u_i(·) within the polity. But the political presuppositions imported by the economist go deeper in fact than this. Utilitarianism which allows for inter-personal comparisons of utility in the construction of W(x) requires utility functions to be “cardinal” – representing “how much” utility one derives from commodities over and above the bare preference between different sets thereof. Utility is an extremely vague concept, because it was constructed to represent a common hedonistic experiential metric where the very existence of such is uncertain in the first place. In practice, the economist decides upon, extrapolates, and assigns to i ∈ N a particular utility function which imports yet further assumptions about how any one individual values their commodity allocation, and thus contributes to social psychic wellbeing.

And finally, utilitarianism not only makes political statements about who in the polity is to be assigned a disimproved situation. It makes statements so outlandish and outrageous to the common sensibility as to have provided the impetus for two of the great systems of philosophy of justice in modernity – those of John Rawls and Amartya Sen. Under almost any combination method W(·), the maximisation of W(·) demands allocation to those most able to realise utility from their allocation. It would demand, for instance, redistribution of commodities from sick children to the hedonistic libertine, for the latter can obtain greater “utility” therefrom – a problem so severe in its political implications that it provided the basic impetus for Rawls’ and Sen’s systems. A Theory of Justice is, of course, a direct response to the problematic political content of utilitarianism.
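A toy numerical sketch of this last point (the utility functions are entirely hypothetical, chosen only to make the mechanism visible):

    import numpy as np
    from scipy.optimize import minimize

    # Two agents share a fixed endowment; the 'libertine' converts commodities
    # into 'utility' far more efficiently than the 'sick child'.
    u_sick = lambda c: 0.2 * np.log(1 + c)
    u_libertine = lambda c: 2.0 * np.log(1 + c)

    endowment = 10.0
    # Maximise the utilitarian W = u_1 + u_2 over the libertine's share z.
    res = minimize(lambda z: -(u_sick(endowment - z[0]) + u_libertine(z[0])),
                   x0=[5.0], bounds=[(0.0, endowment)])
    print(f"libertine's share under sum-W: {res.x[0]:.2f} of {endowment}")  # ~9.91

    # A maximin W = min(u_1, u_2) - Rawlsian in flavour - would instead push
    # the split the other way, equalising realised utilities.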

So Pareto optimality stands as the best hope for the economist to make a-political statements about policy, refraining from making statements therein concerning the assignation of dis-improvements in the situation of any individual. Yet if applied to preferences over individual allocations alone it condones some extreme situations of dubious political desirability across the spectrum of political theory and philosophy. But how robust a guide is it when we allow the polity to be concerned with states of society in general, and not only their own individual allocation of commodities, as they must be in the process of public reasoning in every political philosophy from Plato to Popper and beyond?

Consequentialism -X- (Pareto Efficiency) -X- Deontology


Let us check the Polity to begin with:

1. N is the set of all individuals in society.

And that which their politics concerns – the state of society.

2. S is the set of all possible information contained within society, so that a set s ∈ 2^S (2^S being the set of all possible subsets of S) contains all extant information about a particular iteration of society and will be called the state of society. S is an arbitrary topological space.

And the means by which individuals make judgements about that which their politics concerns. Their preferences over the information contained within the state of society.

3. Each individual i ∈ N has a complete and transitive preference relation ≽_i defined over a set of preference-information S_i ⊂ S such that s_i ≽_i s′_i can be read “individual i prefers preference-information s_i at least as much as preference-information s′_i”.

Any particular set of preference-information s_i ⊂ S_i can be thought of as the state of society as viewed by individual i. The set of preference-information for individual i is a subset of the information contained within a particular iteration of society, so s_i ⊂ s ⊂ S.

A particular state of society s is Pareto efficient if there is no other state of society s′ for which one individual strictly prefers their preference-information s′_i ⊂ s′ to that in the particular state, s_i ⊂ s, and the preference-information s′_j ⊂ s′ in the other state s′ is at least as preferred by every other individual j ≠ i.

4. A state s ∈ 2^S is said to be Pareto efficient iff ∄ s′ ∈ 2^S & i ∈ N : s′_i ≻_i s_i & s′_j ≽_j s_j ∀ j ≠ i ∈ N.

To put it crudely, a particular state of society is Pareto efficient if no individual can be made “better off” without making another individual “worse off”. A dynamic concept which mirrors this is the concept of a Pareto improvement – whereby a change in the state of society leaves everyone at least indifferent, and at least one individual in a preferable situation.

5. A movement between two states of society, s → s′, is called a Pareto improvement iff ∃ i ∈ N : s′_i ≻_i s_i & s′_j ≽_j s_j ∀ j ≠ i ∈ N.

Note that this does not imply that s′ is a Pareto efficient state, because the same could potentially be said of a movement s′ → s′′. The state s′ is only a Pareto efficient state if we cannot find yet another state for which the movement to that state is a Pareto improvement. The following Theorem, demonstrates this distinction and gives an alternative definition of Pareto efficiency.

Theorem: A state s ∈ 2^S is Pareto efficient iff there is no other state s′ for which the movement s → s′ is a Pareto improvement.
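The theorem invites a direct computational reading on a finite toy polity (a sketch with invented states and preferences; scores stand in for the complete, transitive relations ≽_i, which on a finite set can always be so represented):

    # Each individual's preference over states is represented by a score.
    states = ['s1', 's2', 's3']
    rank = {'alice': {'s1': 0, 's2': 2, 's3': 1},
            'bob':   {'s1': 0, 's2': 1, 's3': 2}}

    def pareto_improvement(s, s_new):
        at_least_as_good = all(rank[i][s_new] >= rank[i][s] for i in rank)
        strictly_better = any(rank[i][s_new] > rank[i][s] for i in rank)
        return at_least_as_good and strictly_better

    # By the theorem: s is Pareto efficient iff no move s -> s' improves.
    def pareto_efficient(s):
        return not any(pareto_improvement(s, t) for t in states if t != s)

    print({s: pareto_efficient(s) for s in states})
    # {'s1': False, 's2': True, 's3': True} -- the move s1 -> s2 betters both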

If one adheres to a consequentialist political doctrine (such as classical utilitarianism) rather than a deontological doctrine (such as liberalism) in which action is guided by some categorical imperative other than consequentialism, the guide offered by Pareto improvement is the least controversial and least politically committal criterion for decision-making one can find. Indeed if we restrict political statements to those which concern the assignation of losses, it is a-political. It makes a value judgement only about who ought to gain (whosoever stands to).

Unless one holds a strict deontological doctrine in the style, say, of Robert Nozick’s Anarchy, State, and Utopia (in which the maintenance of individual freedom is the categorical imperative), or John Rawls’ A Theory of Justice (in which again individual freedom is the primary categorical imperative and the betterment of the “poorest” the second categorical imperative), it is more difficult to argue against implementing some decision which will cause a change of society to which all individuals in society will be at worst indifferent than to argue for some decision rule which will induce a change of society which some individual will find less preferable. To the rationalistic economist it seems almost petty, certainly irrational, to argue against this criterion, like those individuals who demand “fairness” in the famous “dictator” experiment rather than accept someone else becoming “better off” and themselves no “worse off”.

Finsler Space as a Locally Minkowskian Space: Caught Between Curvature and Torsion Tensors.


The extension of Riemannian “point”-space {x^i} into a “line-space” {x^i, dx^i} makes things clearer but not easier: how do you explain to a physicist a geometry supporting at least 3 curvature tensors and five torsion tensors? Not to speak of its usefulness for physics! Fortunately, the “impenetrable forest” by now has become a real, enjoyable park: through the application of the concepts of fibre bundle and non-linear connection. The different curvatures and torsion tensors result from vertical and horizontal parts of geometric objects in the tangent bundle, or in the Finsler bundle of the underlying manifold.

In essence, Finsler geometry is analogous to Riemannian geometry: there, the tangent space at a point p is Euclidean space; here, the tangent space is just a normed space, i.e., a Minkowski space. Put differently: a Finsler metric for a differentiable manifold M is a map that assigns to each point x ∈ M a norm on the tangent space T_xM. Given the almost exclusive use of methods from Riemannian geometry, this norm is demanded to derive from the length of a smooth path γ : [a, b] → M defined by ∫_a^b ∥dγ(t)/dt∥ dt. Finsler space then becomes an example of the class of length spaces.

Starting from the length of the curve,

d_γ(p, q) := ∫_p^q L(x(t), dx(t)/dt) dt

the variational principle δd_γ(p, q) = 0 leads to the Euler-Lagrange equation

d/dt (∂L/∂ẋ^i) − ∂L/∂x^i = 0

which may be rewritten as

d²x^i/dt² + 2G^i(x^l, ẋ^m) = 0

with G^i(x^l, ẋ^m) = ¼ g^{ik} (−∂L/∂x^k + (∂²L/∂x^m∂ẋ^k) ẋ^m), where 2g_{ik} = ∂²L/∂ẋ^i∂ẋ^k and g_{il}g^{jl} = δ_i^j. The theory is then developed from the Lagrangian defined in this way. This involves an important object N^i_l := ∂G^i/∂y^l, the geometrical meaning of which is a non-linear connection.

In general, a Finsler structure L(x, y), with y := dx(t)/dt = ẋ and homogeneous of degree 1 in y, is introduced, from which the Finsler metric follows as:

f_ij = f_ji = ∂²(½L²)/∂y^i∂y^j, f_ij y^i y^j = L², y^l ∂L/∂y^l = L, f_ij y^j = L ∂L/∂y^i

A further totally symmetric tensor C_ijk ensues:

C_ijk := ∂³(½L²)/∂y^i∂y^j∂y^k

which will be interpreted as a torsion tensor. An example of a Finsler metric is the Randers metric:

L(x, y) = b_i(x) y^i + √(a_ij(x) y^i y^j)

The Finsler metric that follows is

f_ik = b_i b_k + a_ik + 2b_(i a_k)l ŷ^l + (a_ik − a_il ŷ^l a_km ŷ^m) b_n ŷ^n

with ŷ^k := y^k (a_lm(x) y^l y^m)^{−1/2}. Setting a_ij = η_ij, y^k = ẋ^k, and identifying b_i with the electromagnetic 4-potential eA_i leads back to the Lagrangian for the motion of a charged particle.
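A quick symbolic sanity check (my own, in two dimensions with flat a_ij = δ_ij): the y-Hessian of ½L² for the Randers structure reproduces the closed-form metric above and satisfies f_ij y^i y^j = L²:

    import sympy as sp

    y1, y2, b1, b2 = sp.symbols('y1 y2 b1 b2', positive=True)
    y = sp.Matrix([y1, y2]); b = sp.Matrix([b1, b2])
    alpha = sp.sqrt(y.dot(y))          # sqrt(a_ij y^i y^j) with a = identity
    L = b.dot(y) + alpha               # Randers: L = b_i y^i + alpha

    # Finsler metric as the y-Hessian of L^2/2
    f = sp.Matrix(2, 2, lambda i, k: sp.diff(L**2 / 2, y[i], y[k]))
    assert sp.simplify((y.T * f * y)[0] - L**2) == 0   # f_ij y^i y^j = L^2

    # Closed form quoted above; with a = identity, a_il yhat^l = yhat_i
    yhat = y / alpha
    closed = sp.Matrix(2, 2, lambda i, k:
        b[i]*b[k] + sp.eye(2)[i, k] + b[i]*yhat[k] + b[k]*yhat[i]
        + (sp.eye(2)[i, k] - yhat[i]*yhat[k]) * b.dot(yhat))
    assert all(sp.simplify(e) == 0 for e in (f - closed))
    print("Randers identities verified")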

In this context, a Finsler space is thus called a locally Minkowskian space if there exists a coordinate system in which the Finsler structure is a function of y^i alone. The use of the “element of support” (x^i, dx^k ≡ y^k) essentially amounts to a step towards working in the tangent bundle TM of the manifold M.