Whitehead and Peirce’s Synchronicity with Hegel’s Capital Error. Thought of the Day 97.0


The focus on experience ensures that Whitehead’s metaphysics is grounded. Otherwise its narrowness of approach would culminate only in sterile measurement. This becomes especially evident with regard to the science of history. Whitehead gives a lucid example of such ‘sterile measurement’ lacking the immediacy of experience.

Consider, for example, the scientific notion of measurement. Can we elucidate the turmoil of Europe by weighing its dictators, its prime ministers, and its editors of newspapers? The idea is absurd, although some relevant information might be obtained. (Alfred North Whitehead – Modes of Thought)

The wealth of experience leaves us with the problem of how to cope with it. A selection of data is required, and this selection is made by a value judgment – the judgment of importance. Although Whitehead opposes the dichotomy between the two notions ‘importance’ and ‘matter of fact’, it is still necessary to distinguish grades and types of importance, which enables us to structure our experience, to focus it. This is very similar to hermeneutical theories in Schleiermacher, Gadamer and Habermas: the horizon of understanding structures the data. Therefore, not only do we need judgment; the process of concrescence also implicitly requires an aim. Whitehead explains that

By this term ‘aim’ is meant the exclusion of the boundless wealth of alternative potentiality and the inclusion of that definite factor of novelty which constitutes the selected way of entertaining those data in that process of unification.

The other idea that underlies experience is “matter of fact.”

There are two contrasted ideas which seem inevitably to underlie all width of experience, one of them is the notion of importance, the sense of importance, the presupposition of importance. The other is the notion of matter of fact. There is no escape from sheer matter of fact. It is the basis of importance; and importance is important because of the inescapable character of matter of fact.

By stressing the “alien character” of the feelings that enter into the privately felt feeling of an occasion, Whitehead is able to distinguish the responsive and the supplemental stages of concrescence. The responsive stage is a purely receptive phase; the supplemental stage integrates the former’s ‘alien elements’ into a unity of feeling. This alien factor in the experiencing subjects saves Whitehead’s concept from being pure Spirit (Geist) in a Hegelian sense. There are more similarities between Hegelian thinking and Whitehead’s thought than his own comments on Hegel may suggest. But his major criticism could probably be stated with Peirce, who wrote that

The capital error of Hegel which permeates his whole system in every part of it is that he almost altogether ignores the Outward clash. (The Essential Peirce 1)

Whitehead refers to that clash as matter of fact. Even there, however, one has to keep in mind that matter of fact is itself an abstraction.

Matter of fact is an abstraction, arrived at by confining thought to purely formal relations which then masquerade as the final reality. This is why science, in its perfection, relapses into the study of differential equations. The concrete world has slipped through the meshes of the scientific net.

Whitehead clearly retains in his late writings the notion of prehension as developed in Process and Reality. To give just one example:

I have, in my recent writings, used the word ‘prehension’ to express this process of appropriation. Also I have termed each individual act of immediate self-enjoyment an ‘occasion of experience’. I hold that these unities of existence, these occasions of experience, are the really real things which in their collective unity compose the evolving universe, ever plunging into the creative advance. 

Process needs an aim in Process and Reality as much as in Modes of Thought:

We must add yet another character to our description of life. This missing characteristic is ‘aim’. By this term ‘aim’ is meant the exclusion of the boundless wealth of alternative potentiality, and the inclusion of that definite factor of novelty which constitutes the selected way of entertaining those data in that process of unification. The aim is at that complex of feeling which is the enjoyment of those data in that way. ‘That way of enjoyment’ is selected from the boundless wealth of alternatives. It has been aimed at for actualization in that process.


Accelerating the Synthetic Credit. Thought of the Day 96.0


The structural change in the structured credit universe continues to accelerate. While the market for synthetic structures is already fairly well established, many real money accounts remain outsiders owing to regulatory hurdles and technical limitations, e.g., on participating in the correlation market. Banks are therefore continuously establishing new products to give real money accounts access to the structured market, with constant proportion debt obligations (CPDOs) recently having been popular. Against this background, three vehicles which offer these investors easy access to structured products have gained in importance: CDPCs (credit derivatives product companies), PCVs (permanent capital vehicles), and SIVs (structured investment vehicles).

A CDPC is a rated company which buys credit risk via all types of credit derivative instruments, primarily super senior tranches, and sells this risk to investors via preferred shares (equity) or subordinated notes (debt). Hence, the vehicle uses super senior risk to create equity risk. The investment strategy is a buy-and-hold approach, while the aim is to offer high returns to investors and keep default risk limited. Investors are primarily exposed to rating migration risk, to mark-to-market risk, and, finally, to the capability of the external manager. The rating agencies generally assign an AAA rating to the business model of the CDPC, which is a bankruptcy-remote vehicle (special purpose vehicle [SPV]). The business models of specific CDPCs differ from each other in terms of investments and the thresholds given to the manager. The preferred asset classes CDPCs invest in are predominantly single-name CDS (credit default swaps), bespoke synthetic tranches, ABS (asset-backed securities), and all kinds of CDOs (collateralized debt obligations). So far, CDPCs’ main investments have been allocated to corporate credits, but CDPCs are extending their universe to ABS and CDO products, which provide further opportunities in an overall tight spread environment. The implemented leverage is given through the vehicle and can be in the range of 15–60x. The return target is typically around a 15% return on equity, paid in the form of dividends to the shareholders.
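
To make the leverage arithmetic concrete, here is a minimal sketch in Python – all numbers and the helper name cdpc_roe are hypothetical illustrations, not any actual CDPC’s economics – of how a thin super senior spread, written on a notional many times the equity, can reach the mid-teens return on equity mentioned above:

```python
# A minimal sketch, with purely hypothetical numbers, of the leverage
# arithmetic behind a CDPC: premium income on a large super senior
# notional is funnelled into a small equity base.

def cdpc_roe(equity, leverage, spread_bp, expense_bp=5.0):
    """Approximate annual return on equity for a stylized CDPC.

    equity     -- capital raised via preferred shares / sub notes
    leverage   -- notional written as a multiple of equity (15-60x)
    spread_bp  -- premium earned on the super senior notional, in bp
    expense_bp -- assumed running costs (manager fees etc.), in bp
    """
    notional = leverage * equity
    net_income = notional * (spread_bp - expense_bp) / 10_000
    return net_income / equity

# 60x leverage on a 30bp spread with 5bp of costs yields the ~15% ROE
# target mentioned above: 60 * (30 - 5)bp = 1500bp = 15%.
print(f"{cdpc_roe(equity=100e6, leverage=60, spread_bp=30):.1%}")
```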

In contrast to CDPCs, PCVs do not invest in the top of the capital structure but in equity pieces (mostly CDO equity pieces). The leverage is not implemented in the vehicle itself; it comes directly from the underlying instruments. PCVs are also set up as SPVs and listed on a stock exchange. They use the equity they receive from investors to purchase the assets, while the return on their investment is allocated to the shareholders via dividends. The target return is generally around 10%. The portfolio is managed by an external manager and is marked to market. The share price of the company depends on the NAV (net asset value) of the portfolio and on the expected dividend payments.
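
The text only says the share price “depends on” NAV and expected dividends without giving a pricing model, so the following sketch is an assumption for illustration: it values the dividend stream with the standard Gordon growth formula and compares the result to NAV per share (the function name and all numbers are invented):

```python
# A naive sketch (assumption: Gordon growth pricing) of how a PCV's
# share price can be read off its expected dividend stream, and how
# that compares with the marked-to-market NAV per share.

def pcv_share_price(expected_dividend, discount_rate, growth=0.0):
    """Present value of a (possibly growing) dividend stream."""
    return expected_dividend / (discount_rate - growth)

nav_per_share = 10.0
price = pcv_share_price(expected_dividend=1.0,   # ~10% target return
                        discount_rate=0.10)
premium_to_nav = price / nav_per_share - 1       # 0.0: priced at NAV
```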

In general, an SIV invests in the top of the capital structure of structured credits and ABS, in line with CDPCs. In addition, SIVs also buy subordinated debt of financial institutions, and the portfolio is marked to market. SIVs are leveraged credit investment companies and bankruptcy remote. The vehicle typically issues investment-grade rated commercial paper, MTNs (medium term notes), and capital notes to its investors. The leverage depends on the character of the issued note and the underlying assets, ranging from 3–5x (bank loans) up to 14x (structured credits).

Geometry and Localization: An Unholy Alliance? Thought of the Day 95.0


There are many misleading metaphors obtained from naively identifying geometry with localization. One which is very close to that of String Theory is the idea that one can embed a lower dimensional Quantum Field Theory (QFT) into a higher dimensional one. This is not possible, but what one can do is restrict a QFT on a spacetime manifold to a submanifold. However, if the submanifold contains the time axis (a “brane”), the restricted theory has too many degrees of freedom to merit the name “physical”: it contains as many as the unrestricted theory. The naive idea that by using a subspace one only gets a fraction of the phase space degrees of freedom is a delusion; this can only happen if the subspace does not contain a timelike line, as for a null-surface (the holographic projection onto a horizon).

The geometric picture of a string in terms of a multi-component conformal field theory is that of an embedding of an n-component chiral theory into its n-dimensional component space (referred to as a target space), which is certainly a string. But this is not what modular localization reveals: rather, the oscillatory degrees of freedom of the multi-component chiral current go into an infinite dimensional Hilbert space over one localization point and do not arrange themselves according to the geometric source-target idea. A theory of this kind is of course consistent, but String Theory is certainly a very misleading terminology for this state of affairs. Any attempt to imitate Feynman rules by replacing world lines with world sheets (of strings) may produce prescriptions for cooking up some mathematically interesting functions, but those results cannot be brought into the only form which counts in a quantum theory, namely a perturbative approach in terms of operators and states.

String Theory is by no means the only area in particle theory where geometry and modular localization are at loggerheads. Closely related is the interpretation of the Riemann surfaces which result from the analytic continuation of chiral theories on the lightray/circle as the “living space” in the sense of localization. The mathematical theory of Riemann surfaces does not specify how they should be realized; whether they refer to surfaces in an ambient space, to a distinguished subgroup of a Fuchsian group, or to any other of the many possible realizations is of no concern to the mathematician. But in the context of chiral models it is important not to confuse the living space of a QFT with its analytic continuation.

Whereas geometry as a mathematical discipline does not care how it is concretely realized, the geometrical aspect of modular localization in spacetime has a very specific content, namely that which can be encoded in subspaces (Reeh-Schlieder spaces) generated by operator subalgebras acting on the vacuum reference state. In other words, the physically relevant spacetime geometry and the symmetry group of the vacuum are contained in the abstract positioning of certain subalgebras in a common Hilbert space, and not in that which comes with classical theories.

Kant and Non-Euclidean Geometries. Thought of the Day 94.0


The argument that non-Euclidean geometries contradict Kant’s doctrine on the nature of space apparently goes back to Hermann Helmholtz and was taken up by several philosophers of science such as Hans Reichenbach (The Philosophy of Space and Time), who devoted much work to this subject. In an essay written in 1870, Helmholtz argued that the axioms of geometry are not a priori synthetic judgments (in the sense given by Kant), since they can be subjected to experiments. Given that Euclidean geometry is not the only possible geometry, as was believed in Kant’s time, it should be possible to determine by means of measurements whether, for instance, the sum of the three angles of a triangle is 180 degrees or whether two straight parallel lines always keep the same distance between them. If this were not the case, then it would have been demonstrated experimentally that space is not Euclidean. Thus the possibility of verifying the axioms of geometry would prove that they are empirical and not given a priori.
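
As a minimal sketch of what such a measurement would detect – assuming, purely for illustration, a positively curved space modeled by the unit sphere – one can compute the angle sum of a geodesic triangle and compare it with 180 degrees:

```python
import numpy as np

# Illustration of Helmholtz's point: on a curved surface the angle
# sum of a geodesic triangle deviates measurably from 180 degrees.
# Vertices lie on the unit sphere; each angle is computed between
# tangent vectors at the vertex.

def sphere_angle(vertex, p, q):
    """Angle (degrees) at `vertex` of the geodesic triangle vertex-p-q."""
    # Project p and q into the tangent plane at `vertex`.
    tp = p - np.dot(p, vertex) * vertex
    tq = q - np.dot(q, vertex) * vertex
    cos_a = np.dot(tp, tq) / (np.linalg.norm(tp) * np.linalg.norm(tq))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Triangle: the north pole plus two points 90 degrees apart on the equator.
A = np.array([0.0, 0.0, 1.0])
B = np.array([1.0, 0.0, 0.0])
C = np.array([0.0, 1.0, 0.0])

total = sum(sphere_angle(v, p, q)
            for v, p, q in [(A, B, C), (B, C, A), (C, A, B)])
print(total)  # 270.0 -- not 180: this space is not Euclidean
```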

Helmholtz developed his own version of a non-Euclidean geometry on the basis of what he believed to be the fundamental condition for all geometries: “the possibility of figures moving without change of form or size”; without this possibility, it would be impossible to define what a measurement is. According to Helmholtz:

the axioms of geometry are not concerned with space-relations only but also at the same time with the mechanical deportment of solidest bodies in motion.

Nevertheless, he was aware that a strict Kantian might argue that the rigidity of bodies is an a priori property, but

then we should have to maintain that the axioms of geometry are not synthetic propositions… they would merely define what qualities and deportment a body must have to be recognized as rigid.

At this point, it is worth noticing that Helmholtz’s formulation of geometry is a rudimentary version of what was later developed as the theory of Lie groups. As for the transport of rigid bodies, it is well known that rigid motion cannot be defined in the framework of the theory of relativity: since there is no absolute simultaneity of events, it is impossible to move all parts of a material body in a coordinated and simultaneous way. What is defined as the length of a body depends on the reference frame from which it is observed. Thus, it is meaningless to invoke the rigidity of bodies as the basis of a geometry that pretends to describe the real world; it is only in the mathematical realm that the rigid displacement of a figure can be defined in terms of what mathematicians call a congruence.

Arguments similar to those of Helmholtz were given by Reichenbach in his attempt to refute Kant’s doctrine on the nature of space and time. Essentially, the argument boils down to the following: Kant assumed that the axioms of geometry are given a priori, and he only had classical geometry in mind; Einstein demonstrated that space is not Euclidean and that this could be verified empirically; ergo, Kant was wrong. However, Kant did not state that space must be Euclidean; instead, he argued that it is a pure form of intuition. As such, space has no physical reality of its own, and therefore it is meaningless to ascribe physical properties to it. Actually, Kant never mentioned Euclid directly in his work, but he did refer many times to the physics of Newton, which is based on classical geometry. Kant had in mind the axioms of this geometry, which is a most powerful tool of Newtonian mechanics. He did not even exclude the possibility of other geometries, as can be seen in his early speculations on the dimensionality of space.

The important point missed by Reichenbach is that Riemannian geometry is necessarily based on Euclidean geometry. More precisely, a Riemannian space must be considered as locally Euclidean in order to be able to define basic concepts such as distance and parallel transport; this is achieved by defining a flat tangent space at every point, and then extending all properties of this flat space to the globally curved space (Luther Pfahler Eisenhart, Riemannian Geometry). To begin with, the structure of a Riemannian space is given by its metric tensor g_μν, from which the (differential) length is defined as ds² = g_μν dx^μ dx^ν; but this is nothing but a generalization of the usual Pythagorean theorem in Euclidean space. As for the fundamental concept of parallel transport, it is taken directly from its analogue in Euclidean space: it refers to the transport of abstract (not material, as Helmholtz believed) figures in such a space. Thus Riemann’s geometry cannot be free of synthetic a priori propositions, because it is entirely based upon concepts such as length and congruence taken from Euclid. We may conclude that Euclid’s geometry is the condition of possibility for a more general geometry, such as Riemann’s, simply because it is the natural geometry adapted to our understanding; Kant would say that it is our form of grasping space intuitively. The possibility of constructing abstract spaces does not refute Kant’s thesis; on the contrary, it reinforces it.
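
To see the Pythagorean character of the line element concretely, here is a small illustrative sketch (the helper ds2 and the sample metrics are my own choices, not from the text) that evaluates ds² = g_μν dx^μ dx^ν first with the flat metric, where it literally is Pythagoras, and then with the round sphere metric, where the same formula operates in curved coordinates:

```python
import numpy as np

# Evaluating ds^2 = g_mn dx^m dx^n. With the identity metric the
# formula is literally Pythagoras; with the round sphere metric the
# same formula measures lengths in the curved coordinates (theta, phi).

def ds2(g, dx):
    """Squared line element for metric tensor g and displacement dx."""
    return dx @ g @ dx

# Flat 2D Euclidean space in Cartesian coordinates:
g_flat = np.eye(2)
print(ds2(g_flat, np.array([3.0, 4.0])))      # 25.0 = 3^2 + 4^2

# Unit sphere, metric g = diag(1, sin^2 theta) at theta = pi/4:
theta = np.pi / 4
g_sphere = np.diag([1.0, np.sin(theta) ** 2])
print(ds2(g_sphere, np.array([1e-2, 1e-2])))  # ~1.5e-4, locally flat
```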

Open Market Operations. Thought of the Day 93.0


It can be argued that it would be much more democratic if Treasuries were allowed to borrow directly from their central bank. By electing a government on a program, we would know what deficit it intends to run and thus how much it will be willing to print, which in the long run is a debate about the possible level of inflation. Against this, it has been argued that decisions made on democratic grounds might be unstable, as they are affected by elections. However, the independence of central banks also serves the interest of commercial bankers, as we now argue.

In practice, the central bank buys and sells bonds in open market operations. At the least it always does so with short-term T-bonds as part of conventional monetary policy, and it may sometimes also do so with longer-maturity T-bonds as part of unconventional monetary policy. This blurs the line between a model where the central bank directly finances the Treasury and a model where this is done by commercial banks, since both result in the same final situation. Indeed, before an open market operation the Treasury owes central bank money to a commercial bank; in the final situation it owes it to the central bank itself, and the central bank money held by the commercial bank has been increased accordingly.
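
A minimal sketch of this balance-sheet shuffle – stylized accounts with hypothetical numbers, interest and prices ignored – might look as follows:

```python
# Before the open market operation the commercial bank holds the
# T-bond; afterwards the central bank holds it and the commercial
# bank holds reserves (central bank money) instead.

state = {
    "commercial_bank": {"t_bonds": 100, "reserves": 0},
    "central_bank":    {"t_bonds": 0, "reserves_issued": 0},
}

def open_market_purchase(state, amount):
    """Central bank buys T-bonds from the commercial bank."""
    bank, cb = state["commercial_bank"], state["central_bank"]
    bank["t_bonds"] -= amount
    bank["reserves"] += amount       # paid in central bank money
    cb["t_bonds"] += amount          # Treasury now owes the central bank
    cb["reserves_issued"] += amount  # the central bank's liability grows
    return state

open_market_purchase(state, 100)
# The Treasury's debt has migrated to the central bank, and the
# commercial bank's asset is now non-interest-bearing reserves.
```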

The commercial bank has accepted to give up an IOU which bears interest in exchange for a central bank IOU which bears no interest. However, the Treasury will never default on its debt, because the state also runs the central bank, which can buy an unlimited amount of T-bonds. Said differently, if the interest rates on short-term T-bonds start to increase as the commercial banks become more and more reluctant to buy them, the central bank buys as many short-term bonds as necessary to ensure that the short-term interest rates on T-bonds remain at the targeted level. Through these open market operations, a sovereign state running a sovereign currency has the means to ensure that the banks are always willing to buy T-bonds, whatever the deficit.

However, this system has a drawback. When the commercial bank bought the T-bond, it had to pretend to worry that the state might never repay, so as to ask for interest rates at least slightly higher than the rate at which it can borrow from the central bank, and to make a profit on the difference. Of course the banks knew they would always be repaid, because the central bank always stands ready to buy bonds. As the interest rates departed from the target chosen by the central bank, the latter bought short-term bonds to prevent the short-term rate from increasing. In order to convince a commercial bank to part with a financial instrument which is not risky and which bears interest, the only solution is to pay more than the current value of the bond, which amounts to a decrease in the interest rate on those bonds. The bank thus makes an immediate profit instead of a larger profit later. This difference goes directly into the net worth of the banker and amounts to money creation.
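
A hedged numerical sketch of that last step, using a one-year zero-coupon bond and made-up yields, shows how paying above the going price is the same thing as lowering the yield, and how the bank’s profit is realized immediately rather than at maturity:

```python
# One-year zero-coupon T-bond with hypothetical yields: paying more
# than the going price lowers the implied yield, and the selling
# bank books the difference immediately.

FACE = 100.0

def price(yield_rate):
    """Price of a one-year zero-coupon bond paying FACE at maturity."""
    return FACE / (1 + yield_rate)

bought_at = price(0.03)            # bank bought at a 3% yield: ~97.09
cb_pays = price(0.02)              # central bank bids the yield to 2%: ~98.04

profit_now = cb_pays - bought_at   # ~0.95, realized immediately
profit_if_held = FACE - bought_at  # ~2.91, but only at maturity
print(round(profit_now, 2), round(profit_if_held, 2))
```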

To conclude, we reach the same stage as if the Treasury had sold its bond directly to the central bank, except that now we have increased the net worth of the bankers by a small amount. By first selling the bonds to the commercial banks, instead of selling directly to the central bank, the bankers were able to realize a small profit – but an immediate and easy one. So they have, on one side, to pretend they do not like it when the Treasury goes into debt, so as to be able to ask for the highest possible interest rate, and secretly to enjoy it, since they make a profit either when the bond falls due or, even better, immediately, if the central bank buys the bonds to control the interest rates.

The commercial banks will always end up holding part of their assets directly in central bank money, which bears no interest, and part in T-bonds, which bear interest. If we adopt a consolidated state point of view, where we merge the Treasury and the central bank, then the commercial banks have two types of accounts: deposits, which bear no interest, and savings accounts, which generate interest, just like everyone else. In order to control the interest rate, the consolidated state shifts amounts from the interest-free account to the interest-bearing account and vice versa.

Individuation. Thought of the Day 91.0


The first distinction is between two senses of the word “individuation” – one semantic, the other metaphysical. In the semantic sense of the word, to individuate an object is to single it out for reference in language or in thought. By contrast, in the metaphysical sense of the word, the individuation of objects has to do with “what grounds their identity and distinctness.” Sets are often used to illustrate the intended notion of “grounding.” The identity or distinctness of sets is said to be “grounded” in accordance with the principle of extensionality, which says that two sets are identical iff they have precisely the same elements:

SET(x) ∧ SET(y) → [x = y ↔ ∀u(u ∈ x ↔ u ∈ y)]

The metaphysical and semantic senses of individuation are quite different notions, neither of which appears to be reducible to or fully explicable in terms of the other. Since sufficient sense cannot be made of the notion of “grounding of identity” on which the metaphysical notion of individuation is based, focusing on the semantic notion of individuation is an easy way out. This choice of focus means that our investigation is a broadly empirical one, drawing on empirical linguistics and psychology.

What is the relation between the semantic notion of individuation and the notion of a criterion of identity? It is by means of criteria of identity that semantic individuation is effected. Singling out an object for reference involves being able to distinguish this object from other possible referents with which one is directly presented. The final distinction is between two types of criteria of identity. A one-level criterion of identity says that two objects of some sort F are identical iff they stand in some relation RF:

Fx ∧ Fy → [x = y ↔ RF(x,y)]

Criteria of this form operate at just one level in the sense that the condition for two objects to be identical is given by a relation on these objects themselves. An example is the set-theoretic principle of extensionality.
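
As a minimal illustration, Python’s built-in sets happen to behave extensionally, so this one-level criterion can be checked directly in code:

```python
# Python sets are extensional: identity (equality) of two sets is
# settled entirely by a relation on the sets themselves -- sameness
# of membership. A one-level criterion in miniature.
assert {1, 2, 3} == {3, 2, 1, 1}   # same elements, hence the same set
assert {1, 2} != {1, 2, 3}         # differing membership distinguishes
```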

A two-level criterion of identity relates the identity of objects of one sort to some condition on entities of another sort. The former sort of objects are typically given as functions of items of the latter sort, in which case the criterion takes the following form:

f(α) = f(β) ↔ α ≈ β

where the variables α and β range over the latter sort of item and ≈ is an equivalence relation on such items. An example is Frege’s famous criterion of identity for directions:

d(l1) = d(l2) ↔ l1 || l2

where the variables l1 and l2 range over lines or other directed items. An analogous two-level criterion relates the identity of geometrical shapes to the congruence of things or figures having the shapes in question. The decision to focus on the semantic notion of individuation makes it natural to focus on two-level criteria. For two-level criteria of identity are much more useful than one-level criteria when we are studying how objects are singled out for reference. A one-level criterion provides little assistance in the task of singling out objects for reference. In order to apply a one-level criterion, one must already be capable of referring to objects of the sort in question. By contrast, a two-level criterion promises a way of singling out an object of one sort in terms of an item of another and less problematic sort. For instance, when Frege investigated how directions and other abstract objects “are given to us”, although “we cannot have any ideas or intuitions of them”, he proposed that we relate the identity of two directions to the parallelism of the two lines in terms of which these directions are presented. This would be explanatory progress since reference to lines is less puzzling than reference to directions.
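
As a closing sketch, the two-level criterion for directions can be made computable – with lines represented, hypothetically, as (point, direction-vector) pairs in the plane – by mapping each line to a canonical unit vector, so that identity of the computed directions coincides exactly with parallelism of the lines:

```python
from math import hypot

# Frege's d(l1) = d(l2) iff l1 || l2, made computable: d maps each
# line to a canonical (sign-fixed) unit vector, so equality of
# d-values coincides with parallelism of the underlying lines.

def d(line):
    """Direction of a line, as a canonical unit vector."""
    _, (dx, dy) = line
    n = hypot(dx, dy)
    ux, uy = dx / n, dy / n
    if ux < 0 or (ux == 0 and uy < 0):   # fix one of the two signs
        ux, uy = -ux, -uy
    return (round(ux, 12), round(uy, 12))

l1 = ((0, 0), (1, 1))     # y = x
l2 = ((0, 5), (2, 2))     # y = x + 5, parallel to l1
l3 = ((0, 0), (1, -1))    # y = -x

assert d(l1) == d(l2)     # parallel lines: identical directions
assert d(l1) != d(l3)     # non-parallel lines: distinct directions
```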