Banach Spaces

Some things in linear algebra are easier to see in infinite dimensions, for example in Banach spaces. Distinctions that seem pedantic in finite dimensions clearly matter in infinite dimensions.

The category of Banach spaces takes Banach spaces as its objects and continuous linear transformations between them as its morphisms. In a finite-dimensional Euclidean space, all linear transformations are continuous, but in infinite dimensions a linear transformation is not necessarily continuous.

The dual of a Banach space V is the space of continuous linear functionals on V. Now we can see examples where not only is V* not naturally isomorphic to V, it is not isomorphic at all.

For any real p > 1, let q be the number such that 1/p + 1/q = 1. The Banach space Lp is defined to be the set of (equivalence classes of) Lebesgue measurable functions f such that the integral of |f|^p is finite. The dual space of Lp is Lq. If p does not equal 2, then these two spaces are different. (If p does equal 2, then so does q; L2 is a Hilbert space and its dual is indeed the same space.)
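
Concretely, each element g of Lq defines a continuous linear functional on Lp by integration, and Hölder's inequality bounds it; a standard way to record this duality (the label φ_g is just ours):

```latex
% Duality pairing between L^p and L^q with 1/p + 1/q = 1, 1 < p < \infty.
\varphi_g(f) = \int f\, g \, d\mu, \qquad
|\varphi_g(f)| \le \|f\|_p \, \|g\|_q \quad (\text{H\"older}), \qquad
\|\varphi_g\|_{(L^p)^*} = \|g\|_q .
```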

In the finite dimensional case, a vector space V is isomorphic to its second dual V**. In general, V can be embedded into V**, but V** might be a larger space. The embedding of V in V** is natural, both in the intuitive sense and in the formal sense of natural transformations. We can turn an element of V into a linear functional on linear functionals on V as follows.

Let v be an element of V and let f be an element of V*. The action of v on f is simply f(v). That is, v acts on linear functionals by letting them act on it.
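
Spelled out, this is the canonical (natural) embedding; the letter J below is just a convenient label for it:

```latex
% Canonical embedding of a Banach space V into its double dual V**.
J : V \to V^{**}, \qquad (Jv)(f) = f(v) \ \text{ for all } f \in V^{*}, \qquad
\|Jv\| = \|v\| \ \text{(by Hahn--Banach), so } J \text{ is an isometric embedding.}
```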

This shows that some elements of V** come from evaluation at elements of V, but there could be more. Returning to the example of Lebesgue spaces above, the dual of L1 is L∞, the space of essentially bounded functions. But the dual of L∞ is larger than L1. That is, one way to construct a continuous linear functional on bounded functions is to multiply them by an absolutely integrable function and integrate. But there are other ways to construct linear functionals on L∞.

A Banach space V is reflexive if the natural embedding of V in V** is an isomorphism. For p > 1, the spaces Lp are reflexive.

Suppose that X is a Banach space. For simplicity, we assume that X is a real Banach space, though the results can be adapted to the complex case in a straightforward manner. In the following, B(x0, ε) stands for the closed ball of radius ε centered at x0, B◦(x0, ε) stands for the open ball, and S(x0, ε) stands for the corresponding sphere.

Let Q be a bounded operator on X. Since we will be interested in the hyperinvariant subspaces of Q, we can assume without loss of generality that Q is one-to-one and has dense range, as otherwise ker Q or the closure of Range Q would be hyperinvariant for Q. By {Q}′ we denote the commutant of Q.

Fix a point x0 ≠ 0 in X and a positive real ε < ∥x0∥. Let K = Q⁻¹B(x0, ε). Clearly, K is a closed convex set. Note that K ≠ ∅ because Q has dense range, and 0 ∉ K because ∥Q0 − x0∥ = ∥x0∥ > ε. Let d = inf{∥z∥ : z ∈ K}; then d > 0 because K is closed and 0 ∉ K. If X is reflexive, then there exists z ∈ K with ∥z∥ = d; such a vector is called a minimal vector for x0, ε and Q. Even without the reflexivity condition, however, one can always find y ∈ K with ∥y∥ ≤ 2d; such a y will be referred to as a 2-minimal vector for x0, ε and Q.

The set K ∩ B(0, d) is the set of all minimal vectors; in general this set may be empty. If z is a minimal vector, then z ∈ K = Q⁻¹B(x0, ε), so Qz ∈ B(x0, ε). Since z is an element of minimal norm in K, in fact Qz ∈ S(x0, ε): if Qz lay in the open ball, then for sufficiently small t > 0 the vector (1 − t)z would still belong to K and would have norm strictly less than d, contradicting minimality. Since Q is one-to-one, we have

QB(0, d) ∩ B(x0, ε) = Q(B(0, d) ∩ K) ⊆ S(x0, ε).

It follows that QB(0,d) and B◦(x0,ε) are two disjoint convex sets. Since one of them has non-empty interior, they can be separated by a continuous linear functional. That is, there exists a functional f with ||f|| = 1 and a positive real c such that f|QB(0,d)  ≤ c and f|B◦(x0,ε) ≥ c. By continuity, f|B(x0,ε) ≥ c. We say that f is a minimal functional for x0, ε, and Q.

We claim that f(x0) ≥ ε. Indeed, for every x with ||x|| ≤ 1 we have x0 − εx ∈ B(x0,ε). It follows that f(x0 − εx) ≥ c, so that f(x0) ≥ c + εf(x). Taking sup over all x with ||x|| ≤ 1 we get f(x0) ≥ c + ε||f|| ≥ ε.

Observe that the hyperplane (Q∗f)(z) = c separates K and B(0, d). Indeed, if z ∈ B(0, d), then (Q∗f)(z) = f(Qz) ≤ c, and if z ∈ K, then Qz ∈ B(x0, ε), so that (Q∗f)(z) = f(Qz) ≥ c. For every z with ∥z∥ ≤ 1 we have dz ∈ B(0, d), so that (Q∗f)(dz) ≤ c; it follows that ∥Q∗f∥ ≤ c/d.

On the other hand, for every δ > 0 there exists z ∈ K with ∥z∥ ≤ d + δ; then (Q∗f)(z) ≥ c ≥ (c/(d + δ))∥z∥, whence ∥Q∗f∥ ≥ c/(d + δ). It follows that

∥Q∗f∥ = c/d.

For every z ∈ K we have (Q∗f)(z) ≥ c = d∥Q∗f∥. In particular, if y is a 2-minimal vector, then ∥y∥ ≤ 2d and therefore

(Q∗f)(y) ≥ ½ ∥Q∗f∥ ∥y∥.
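
In a finite-dimensional (hence reflexive) toy version of this construction, a minimal vector is simply the least-norm point of a convex constraint set and can be computed numerically. Here is a small sketch under that assumption; the particular Q, x0 and ε are arbitrary test data, not taken from the text.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

n = 5
Q = rng.standard_normal((n, n))        # generically invertible, hence one-to-one with dense range
x_target = rng.standard_normal(n)      # plays the role of x0
eps = 0.5 * np.linalg.norm(x_target)   # eps < ||x0||, so 0 is not in K

# K = Q^{-1} B(x0, eps); a minimal vector minimizes ||z|| over K.
feasible_start = np.linalg.solve(Q, x_target)   # Qz = x0 exactly, so z lies in K
constraint = {"type": "ineq",
              "fun": lambda z: eps - np.linalg.norm(Q @ z - x_target)}
res = minimize(lambda z: z @ z, feasible_start, method="SLSQP", constraints=[constraint])

z_min = res.x
print("d =", np.linalg.norm(z_min))
print("||Q z - x0|| =", np.linalg.norm(Q @ z_min - x_target))   # ~ eps: Qz lands on the sphere
```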

Comment (Monotone Selection) on What’s a Market Password Anyway? Towards Defining a Financial Market Random Sequence. Note Quote.

Definition:

A selection rule is a partial function r : 2^{<ω} → {yes, no}, i.e. from finite binary strings to {yes, no}.

The subsequence of a sequence A selected by a selection rule r consists of those bits A(n) with r(A|n − 1) = yes; the sequence of selected places are those ni such that r(A|ni − 1) = yes. Then, for a given selection rule r and a given real A, we generate a sequence n0, n1, . . . of selected places, and we say that a real is stochastic with respect to a class of admissible selection rules iff, for any such selection rule, either the sequence of selected places is finite, or the relative frequency of 1s among the selected bits A(n0), A(n1), . . . converges to 1/2.
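
As a concrete illustration (our own sketch, not the paper's code), here is a simple computable selection rule — select every bit that follows a 1 — applied to pseudo-random bits; for a stochastic sequence the selected subsequence should still have relative frequency of 1s close to 1/2.

```python
import random

def select_after_one(prefix):
    """Selection rule: select the next bit iff the previous bit was a 1.

    `prefix` is A|n-1, i.e. all bits strictly before the current position."""
    return "yes" if prefix and prefix[-1] == 1 else "no"

def selected_frequency(bits, rule):
    """Apply a selection rule to a finite bit sequence and return the
    relative frequency of 1s among the selected bits."""
    selected = [bits[n] for n in range(len(bits)) if rule(bits[:n]) == "yes"]
    return sum(selected) / len(selected) if selected else None

random.seed(0)
A = [random.randint(0, 1) for _ in range(100_000)]   # stand-in for a "random" real
print(selected_frequency(A, select_after_one))        # close to 0.5
```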

What’s a Market Password Anyway? Towards Defining a Financial Market Random Sequence. Note Quote.

From the point of view of cryptanalysis, the algorithmic view based on frequency analysis may be taken as a hacker approach to the financial market. While the goal is clearly to find a sort of password unveiling the rules governing the price changes, what we claim is that the password may not be immune to a frequency analysis attack, because it is not the result of a true random process but rather the consequence of the application of a set of (mostly simple) rules. Yet that doesn’t mean one can crack the market once and for all, since for our system to find the said password it would have to outperform the unfolding processes affecting the market – which, as Wolfram’s PCE suggests, would require at least the same computational sophistication as the market itself, with at least one variable modelling the information being assimilated into prices by the market at any given moment. In other words, the market password is partially safe not because of the complexity of the password itself but because it reacts to the cracking method.

Figure 6: By extracting a normal distribution from the market distribution, the long tail remains.

Whichever kind of financial instrument one looks at, the sequences of prices at successive times show some overall trends and varying amounts of apparent randomness. However, despite the fact that there is no contingent necessity of true randomness behind the market, it can certainly look that way to anyone ignoring the generative processes, anyone unable to see what other, non-random signals may be driving market movements.

Von Mises’ approach to the definition of a random sequence, which seemed at the time of its formulation to be quite problematic, contained some of the basics of the modern approach later adopted by Per Martin-Löf. It was around this period that the Keynesian kind of induction may have been resorted to as a starting point for Solomonoff’s seminal work (1 and 2) on algorithmic probability.

Per Martin-Löf gave the first suitable definition of a random sequence. Intuitively, an algorithmically random sequence (or random sequence) is an infinite sequence of binary digits that appears random to any algorithm. This contrasts with the idea of randomness in probability. In that theory, no particular element of a sample space can be said to be random. Martin-Löf randomness has since been shown to admit several equivalent characterisations in terms of compression, statistical tests, and gambling strategies.

The predictive aim of economics is actually profoundly related to the concepts of prediction and betting. Imagine a random walk that goes up, down, left or right by one, with each step having the same probability. Such a walk is a martingale: its conditional expected position after the next step equals its current position, and if the expected time at which the walk ends is finite, the expected stopping position is equal to the initial position. This is because the chances of going up, down, left or right are the same, so that one ends up close to one’s starting position, if not exactly at that position. In economics, this can be translated into a trader’s experience: the conditional expected assets of a trader are equal to his present assets if a sequence of events is truly random.
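
A quick simulation makes the point (a sketch of ours, not part of the quoted text): simulate many symmetric ±1 walks, stop each one at a bounded stopping time, and check that the average stopped position stays at the starting point.

```python
import random

random.seed(1)

def stopped_position(start=0, max_steps=1000, barrier=10):
    """Symmetric +/-1 walk, stopped when it hits +/-barrier or after max_steps."""
    pos = start
    for _ in range(max_steps):
        pos += random.choice((-1, 1))
        if abs(pos - start) >= barrier:
            break
    return pos

trials = 20_000
mean_stop = sum(stopped_position() for _ in range(trials)) / trials
print(mean_stop)   # close to the starting position 0: the walk is a martingale
```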

If market price differences accumulated in a normal distribution, rounding them would produce sequences consisting almost entirely of 0 differences. The mean and the standard deviation of the market distribution are used to create a normal distribution, which is then subtracted from the market distribution, exposing the long tail (Figure 6).
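
A sketch of that subtraction on synthetic data (the heavy-tailed "price differences" below are simulated Student-t draws, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic heavy-tailed "price differences", standing in for market data.
diffs = rng.standard_t(df=3, size=100_000)

# Build a normal distribution from the empirical mean and standard deviation ...
mu, sigma = diffs.mean(), diffs.std()

# ... and subtract its density from the empirical density on a common grid.
bins = np.linspace(-10, 10, 201)
empirical, edges = np.histogram(diffs, bins=bins, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
normal = np.exp(-0.5 * ((centers - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
residual = empirical - normal

# The residual concentrates in the tails: the part the normal fails to capture.
tail = np.abs(centers) > 3 * sigma
print("residual mass in the tails:", residual[tail].sum() * np.diff(edges)[0])
```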

Schnorr provided another equivalent definition in terms of martingales. The martingale characterisation of randomness says that no betting strategy implementable by any computer (even in the weak sense of constructive strategies, which are not necessarily computable) can make money betting on a random sequence. In a true random memoryless market, no betting strategy can improve the expected winnings, nor can any option cover the risks in the long term.
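
To make the betting picture concrete, here is a toy computable betting strategy (ours, for illustration only): it stakes a fixed fraction of capital on the next bit repeating the previous one, at fair odds. On an unbiased pseudo-random sequence its capital does not grow in expectation.

```python
import random

random.seed(2)

def run_strategy(bits, fraction=0.1):
    """Bet `fraction` of current capital that the next bit equals the previous bit.

    Fair odds: a correct guess wins the stake, a wrong guess loses it."""
    capital = 1.0
    for prev, nxt in zip(bits, bits[1:]):
        stake = fraction * capital
        capital += stake if nxt == prev else -stake
    return capital

bits = [random.randint(0, 1) for _ in range(100_000)]
print(run_strategy(bits))   # hovers around (and typically below) the initial capital 1.0
```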

Over the last few decades, several systems have shifted towards ever greater levels of complexity and information density. The result has been a shift towards Paretian outcomes, particularly within any event that contains a high percentage of informational content. For example, when one plots the frequency rank of words contained in a large corpus of text against the number of occurrences (the actual frequencies), Zipf showed that one obtains a power-law distribution.
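
A minimal rank-frequency sketch (the "corpus" here is synthetic Zipfian data, since no corpus accompanies the post): on a log-log scale the frequency-versus-rank curve is close to a straight line, the signature of a power law.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)

# Synthetic "word IDs" drawn from a Zipf law, standing in for a tokenised corpus.
tokens = rng.zipf(a=1.5, size=200_000)
counts = np.array(sorted(Counter(tokens).values(), reverse=True), dtype=float)
ranks = np.arange(1, len(counts) + 1, dtype=float)

# Fit a line to log(frequency) vs. log(rank): a roughly constant negative slope.
slope, intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
print("log-log slope:", slope)
```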

Departures from normality could be accounted for by the algorithmic component acting in the market, as is consonant with some empirical observations and common assumptions in economics, such as rule-based markets and agents. The paper.

Kōjin Karatani versus Moishe Postone. Architectonics of Capitalism.

Kōjin Karatani’s theory of different modes of intercourse criticizes the architectonic metaphor that thinks the logic of modes of production in terms of base and superstructure, without ceding ground on the centrality of the critique of political economy. The obvious question is what remains of theory when there is a departure not from the objective towards the subjective, but rather the simultaneous constitution of the subjective and the objective dimensions of the social under capitalism. One way of addressing the dilemma is to take recourse to the lesson of the commodity form, where capitalism begets a uniform mode of mediation rather than disparate ones. The language of modes of production, according to Moishe Postone, happens to be a transhistorical language, allowing a transhistorical epistemology to sneak in through the backdoor, thus outlining the necessity of critical theory’s existence only in so far as the object of critique stays in existence.

Karatani’s first critique concerns a crude base-superstructure concept, in which nation and nationalism are viewed merely as phenomena of the ideological superstructure, which could be overcome by reason (enlightenment) or would disappear together with the state. But the nation functions autonomously, independent of the state, and as the imaginative return of community or the reciprocal mode of exchange A, it is egalitarian in nature. As is the case with universal religions, the nation thus holds a moment of protest, of opposition, of emancipatory imagination.

The second critique concerns the conception of the proletariat, which Marxism reduced to the process of production, in which its labor force is turned into a commodity. Production (i.e., consumption of labor power) as a fundamental basis for gaining and increasing surplus value remains unchanged. Nonetheless, according to Karatani, surplus value is only realized by selling commodities, in the process of circulation, which does not generate surplus value itself, but without which there cannot be any surplus value. Understanding the proletariat as producer-consumer opens up new possibilities for resistance against the system. In late capitalism, in which capital and company are often separated, workers (in the broadest sense of wage and salary earners) are usually not able to resist their dependency and inferiority in the production process. By contrast, however, at the site of consumption, capital is dependent on the worker as consumer. Whereas capital can thus control the proletariat in the production process and force them to work, it loses its power over them in the process of circulation. If, says Karatani, we were to view consumers as workers in the site of circulation, consumer movements could be seen as proletariat movements. They can, for example, resort to the legal means of boycott, which capital is unable to resist directly.

Karatani bases his critique of capitalism not on the perspective of globalization, but rather on what he terms neo-imperialism, meaning the state-based attempt of capital to subject the entire world to its logic of exploitation; any overcoming of the modern world system of capital-nation-state by means of a world revolution, and its sublation in a new system, is to be possible only through justice based on exchange. For Postone, by contrast, capital generates a system characterized by the opposition of abstract universality (the value form) and particularistic specificity (the use-value dimension).
It seems to me that rather than viewing a socialist or an emancipatory movement as the heirs to the Enlightenment, as the classic working class movement did, critical movements today should be striving for a new form of universalism that encompasses the particular, rather than existing in opposition to the particular. This will not be easy, because a good part of the Left today has swung to particularity rather than trying to find a new form of universalism. I think this is a fatal mistake.

Stephen Wolfram and Stochasticity of Financial Markets. Note Quote.

The most obvious feature of essentially all financial markets is the apparent randomness with which prices tend to fluctuate. Nevertheless, the very idea of chance in financial markets clashes with our intuitive sense of the processes regulating the market. All processes involved seem deterministic. Traders do not only follow hunches but act in accordance with specific rules, and even when they do appear to act on intuition, their decisions are not random but instead follow from the best of their knowledge of the internal and external state of the market. For example, traders copy other traders, or take the same decisions that have previously worked, sometimes reacting against information and sometimes acting in accordance with it. Furthermore, nowadays a greater percentage of the trading volume is handled algorithmically rather than by humans. Computing systems are used for entering trading orders, for deciding on aspects of an order such as the timing, price and quantity, all of which cannot but be algorithmic by definition.

Algorithmic, however, does not necessarily mean predictable. Several types of irreducibility, from non-computability to intractability to unpredictability, are entailed in most non-trivial questions about financial markets.

Wolfram asks

whether the market generates its own randomness, starting from deterministic and purely algorithmic rules. Wolfram points out that the fact that apparent randomness seems to emerge even in very short timescales suggests that the randomness (or a source of it) that one sees in the market is likely to be the consequence of internal dynamics rather than of external factors. In economists’ jargon, prices are determined by endogenous effects peculiar to the inner workings of the markets themselves, rather than (solely) by the exogenous effects of outside events.

Wolfram points out that pure speculation, where trading occurs without the possibility of any significant external input, often leads to situations in which prices tend to show more, rather than less, random-looking fluctuations. He also suggests that there is no better way to find the causes of this apparent randomness than by performing an almost step-by-step simulation, with little chance of besting the time it takes for the phenomenon to unfold – the time scales of real world markets being simply too fast to beat. It is important to note that the intrinsic generation of complexity proves the stochastic notion to be a convenient assumption about the market, but not an inherent or essential one.

Economists may argue that the question is irrelevant for practical purposes. They are interested in decomposing time-series into a non-predictable and a presumably predictable signal in which they have an interest, what is traditionally called a trend. Whether one, both or none of the two signals is deterministic may be considered irrelevant as long as there is a part that is random-looking, hence most likely unpredictable and consequently worth leaving out.

What Wolfram’s simplified models show, based on simple rules, is that despite being so simple and completely deterministic, these models are capable of generating great complexity, and they exhibit (the lack of) patterns similar to the apparent randomness found in price movements in financial markets. Whether one can get the kind of crashes into which financial markets seem to cyclically fall depends on whether the generating rule is capable of producing them from time to time. Economists dispute whether crashes reflect the intrinsic instability of the market, or whether they are triggered by external events. Sudden large changes are internally generated, suggesting that large changes are more predictable – both in magnitude and in direction – as the result of various interactions between agents. Wolfram’s proposal for modeling market prices would have a simple program generate the randomness that occurs intrinsically. A plausible, if simple and idealized, behavior is shown in the aggregate to produce intrinsically random behavior similar to that seen in price changes.

In the figure above, one can see that even in some of the simplest possible rule-based systems, structures emerge from a random-looking initial configuration with low information content. Trends and cycles are to be found amidst apparent randomness.

An example of a simple model of the market is one where each cell of a cellular automaton corresponds to an entity buying or selling at each step. The behaviour of a given cell is determined by the behaviour of its two neighbors on the previous step according to a rule. A rule like rule 90 is additive, hence reversible, which means that it does not destroy any information and has ‘memory’, unlike the random walk model. Yet, due to its random-looking behaviour, it is not trivial to shortcut the computation or foresee any successive step. There is some randomness in the initial condition of the cellular automaton that comes from outside the model, but the subsequent evolution of the system is fully deterministic.
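
A minimal sketch of such a model (our own illustrative code, not the post's): evolve elementary rule 90 from a random initial row, read each cell as buying (+1) or selling (−1), and accumulate the net order flow into a toy price series. The evolution is fully deterministic, yet the resulting series looks random.

```python
import random

random.seed(4)

def rule90_step(row):
    """One step of elementary CA rule 90: new cell = XOR of its two neighbors (wrap-around)."""
    n = len(row)
    return [row[(i - 1) % n] ^ row[(i + 1) % n] for i in range(n)]

width, steps = 101, 200
row = [random.randint(0, 1) for _ in range(width)]   # random initial condition (external input)

prices = [0.0]
for _ in range(steps):
    row = rule90_step(row)
    net_flow = sum(1 if cell else -1 for cell in row)  # buyers minus sellers this step
    prices.append(prices[-1] + net_flow / width)       # deterministic, yet random-looking

print(prices[:10])
```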

If Wolfram’s intrinsic randomness is what drives the market, one might think one could then easily predict its behaviour; but, as suggested by Wolfram’s Principle of Computational Equivalence (PCE), it is reasonable to expect that the overall collective behaviour of the market would look complicated to us, as if it were random, and hence be quite difficult to predict despite having a large deterministic component.

Wolfram’s Principle of Computational Irreducibility says that the only way to determine the answer to a computationally irreducible question is to perform the computation. According to Wolfram, it follows from his Principle of Computational Equivalence (PCE) that

almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication: when a system reaches a threshold of computational sophistication often reached by non-trivial systems, the system will be computationally irreducible.

Neo-Kantians and Numbers. Note Quote.

At the beginning of the twentieth century, neo-Kantianism was the dominant force in German academic philosophy. Its most important schools were Marburg and Southwestern (or Baden). The Marburg school concentrated on logical, methodological and epistemological themes. Its founder and leader was Hermann Cohen (1842-1918), a professor of philosophy at Marburg between 1876 and 1912. Cohen’s most famous disciples were Paul Natorp (1854-1924) and Ernst Cassirer (1874-1945). The Southwestern school emphasised the theory of values. Its founder and leader was Wilhelm Windelband (1848-1915). Windelband’s student Heinrich Rickert (1863-1936) was the great system-builder of the Southwestern school. Among the members of the Southwestern school were Jonas Cohn (1869-1947) and Bruno Bauch (1877-1942).

At the beginning of the twentieth century, the philosophy of mathematics in general and the nature of number in particular were subjects of lively discussion among the neo-Kantians. Natorp, Cassirer and Cohn, among others, constructed their own theories of number which also formed the basis of their critiques of Russell and Frege. The neo-Kantians, too, supported the idea that mathematics should be based on a logical foundation. However, their conception of the logical foundation differs greatly from that of Russell and Frege. The main difference is that although the neo-Kantians argue that mathematics should be based on a logical foundation, they insist that these two sciences must be strictly separated from one another. Consequently, they argued that if the logicist programme were carried out, there would not exist any line of demarcation between logic and mathematics. Cohn’s distinction between two possible ways to found the number concept on logic brings forward the main difference between Russell’s and the neo-Kantians’ viewpoint. According to Cohn, there exist two possible ways to found the number concept on logic. Either the number concept is reduced to a logical concept or it is shown that the number concept itself is a fundamental logical concept. Cohn says that while Russell’s theory of number is founded on logic in the first sense, his own theory of number is logical in the latter sense.

According to the neo-Kantians, deducing the number concept from the class concept is a petitio principii. In other words, Cassirer, Natorp and Cohn all argue that the class concept already presupposes the number concept. In Cassirer’s own words,

What it means to apprehend an object as “one” is here assumed to be known from the very beginning; for the numerical equality of two classes is known solely by the fact that we coordinate with each element of the first class one and only one of the second.

According to Cohn, the number concept is something that logically precedes the class concept. As Cohn sees it, Russell’s definitions often contain such expressions as “an object” and Russell himself admits that the sense in which every object is one is always involved when speaking of an object. Consequently, Cohn argues that Russell’s definition of number already presupposes the number concept. In Natorp’s view, Frege’s definition of number presupposes the use of such propositions as ‘X falls under the concept A’. As Natorp sees it, in this proposition an individual is presupposed in the sense of a singular number. Thus Frege’s definition of number already presupposes the singular number. According to Natorp, this mistake, consisting of a simple petitio principii, is shared by all attempts to derive the number concept from the concept of objects belonging to a class (or sets or aggregates). It is inevitable that these objects are thought of as individuals. It is noteworthy that Henri Poincaré presents a similar argument in his critique of the logicist programme. In his paper “Les mathématiques et la logique”  Poincaré, too, argues that the logicist definition of number already presupposes the number concept.

Absurdity. Drunken Risibility

I feel that absurdity is not just a human condition but ineluctable, and if i deviate from this thought I would just be exhibiting presumptuousness. The thought in here might be lacking brevity, but what is requested is a brickbat. I surely think that it ain’t a bumf. I would like to expatiate on the topic of absurdity; it might be sardonic on myself, and at times this collection of phrases might be depicting platitude; for if they fail in their motives, they remain uncanny. It’s not a motley; as a matter of fact it isn’t variegated either; due care has been taken not to smudge it; but the thought finds its essence in writing, lest it should smother my thoughts and not remain ethereal. After this prolegomenon, the absurdity is:

1) absurdity is a human condition.

2) culturally i’m an outsider, but would cease to be one and want to be balanced; would like to understand the human soul and escape from triviality, and to do this I need to know how to express myself, for it is the means by which i can know myself and the possibilities awaiting me.

3) destiny is all we want to escape but cannot, and since all our actions are directed against the inevitable they are absurd; because we sense this absurdity, we feel anguished.

4) on god, he is not merely dead, but along with him, even man is dead; standing alone under the empty heaven with no scope of remedy.

5) each man must come to his own personal vision of life, and it is not a particularly happy one: destiny is suffering and death, which he can defeat by affirming human dignity and participating in a sense of brotherhood with other men (sorry for the theosophical inputs!)

6) like the Tolstoy of “The Death of Ivan Ilyich”, the hero can preserve his honesty by continuously fighting death and trying to forget its existence.

p.s. this could be because of my flirting with the surrealist movement in the last century and a complete break with the Enlightenment and the Romanticism so often in the limelight in classical philosophy. To conclude I would like to point out three things as a prolegomenon: this will give you an insight into the twilight zone of absurdity:

the poet Cavafy who says:

you won’t find a new country, you won’t find a new shore… there is no ship for you, there is no road, as you have wasted your life here, in this small corner, you have destroyed it everywhere else in the world.

Samuel Beckett, in his often quoted phrase: nothing is more real than nothing.

The 4th-century BC Sicilian rhetorician Gorgias of Lentini:

nothing has any real existence, and that if anything real did exist it could not be known, and that if anything were to exist and be known it could not be expressed in speech.

7) man is seen more clearly as having, if he is honest, no purpose. Although he can and does get trapped in fixed ideas about himself and the world which reify him rather than let him be a Being, he finds the overabundance of things, which limits freedom, nauseating, and recognizes that just as habits conceal his attitudes, so language too has become a dead thing, limiting communication and emphasizing his solitude. He can no longer think that he has a nature proper to himself; he is simply the sum of his actions, each of which is a deliberate choice in a given situation.

8) existentialists have concluded that self is nothingness which could only ‘become’ through acts and words; scientists have suggested that all acts are meaningless, and philologists have shown that language, too, is arbitrary and meaningless as a means to knowing reality.

9) don’t we think that life has a mechanical quality and a sense of loss of mystery, the loneliness of individuals and their difficulties in communicating in a language also deadened by habit….