Swastika (स्वस्तिक). Note Quote.


H. P. Blavatsky says:

Few world-symbols are more pregnant with real occult meaning than the Swastika.  It is symbolized by the figure 6; for, like that figure, it points in its concrete imagery, as the ideograph of the number does, to the Zenith and the Nadir, to North, South, West, and East; one finds the unit everywhere, and that unit reflected in all and every unit.  It is the emblem of the activity of Fohat, of the continual revolution of the ‘wheels’, and of the Four Elements, the ‘Sacred Four’, in their mystical, and not alone in their cosmic meaning; further, its four arms, bent at right angles, are intimately related, as shown elsewhere, to the Pythagorean and Hermetic scales.  One initiated into the mysteries of the meaning of the Swastika, say the Commentaries, ‘can trace on it, with mathematical precision, the evolution of Kosmos and the whole period of Sandhya’.

The swastika is, par excellence, the symbol of cosmic evolution. It is an image represented in many temples in India, Tibet, China and other countries with Hindu and Buddhist influence (and indeed the very symbol of esoteric Buddhism). Moreover, it is present in the traditions of the Nordic peoples and in pre-colonial Americas.

Being a universal symbol, the swastika cross is also present in the symbol of the theosophical movement.

The representations of Buddha with the Swastika cross on his chest, being called the ‘Seal of the Heart’, are well-known. The swastika is also present in many ancient Christian relics. About its universality, HPB states:

[The] ansated Egyptian cross, or tau, the Jaina cross, or Swastika, and the Christian cross have all the same meaning.

Despite these facts, or maybe because of them, Christian missionaries tried to classify the swastika as “diabolical”, thus trying to destroy one of the oldest sacred symbols, which is also at the origin of “their” own Christian cross. Yet to honestly recognize the evolution of the cross as a symbol would be like accepting that Christianity illegitimately adopted religious images belonging to much earlier traditions.

There is a right-handed swastika and a left-handed one, each revolving in opposite directions. The right-handed is called “swastika” while the left-handed is sometimes called “swavastika.” If clockwise movement signifies natural evolution and life, and counter-clockwise indicates regression or death and is an inversion of nature,  the Nazi “swastika” would represent this inversion. The symbol was chosen possibly because it was thought to be of Nordic origin, and it was used as a caricature of the Christian Cross. The swastika can clearly symbolize good or evil, thus echoing its inherent double nature. But many of its uses in past ages indicate both directions in contexts that were purely spiritual in nature and which strongly suggest another interpretation. The right arm points to heaven, the left to earth, and this varies depending upon the perspective. If the symbol faces away from one, the hooks point counter-clockwise. If the symbol faces one, the hooks point clockwise. These two perspectives symbolize the microcosmic and the macrocosmic. Man, the perceiver, embodies one while reaching out toward the other.


Being a double symbol, the swastika also represents male and female combined in the hermaphrodite. Thus, it is found carved upon the figure of Ardanari in South India, denoting the pre-sexual state of the Third Root Race. In another old Hindu carving, Vishnu is shown as double-sexed, floating on the water which rises in a semicircle and pours through a swastika representing the source of generation. All subsequent evolution takes place spirally from within, like the directional unfolding of the swastika implicit in the “wheel of Dharma,” the sacred law to which the Buddha pointed. The Upanishads teach that in accordance with natural law, it is necessary to turn the wheel from within, to emulate the fohatic force which expands throughout the Cosmos vitalizing every atom and awakening every conscious center.

Conjuncted: Unitary Representation of the Poincaré Group is a Fock Representation


The Fock space story is not completely abandoned within the algebraic approach to Quantum Field Theory. In fact, when conditions are good, Fock space emerges as the GNS Hilbert space for some privileged vacuum state of the algebra of observables. We briefly describe how this emergence occurs before proceeding to raise some problems for the naive Fock space story.

The algebraic reconstruction of Fock space arises from the algebraic version of canonical quantization. Suppose that S is a real vector space (equipped with some suitable topology), and that σ is a symplectic form on S. So, S represents a classical phase space. The Weyl algebra U[S,σ] is a specific C∗-algebra generated by elements of the form W(f), with f ∈ S and satisfying the canonical commutation relations in the Weyl-Segal form:

W(f)W(g) = e−iσ(f,g)/2W(f + g)

Suppose that there is also some notion of spacetime localization for elements of S, i.e. a mapping O → S(O) from double cones in Minkowski spacetime to subspaces of S. Then, if certain constraints are satisfied, the pair of mappings

O → S(O) → U(O) ≡ C∗{W(f) : f ∈ S(O)},

can be composed to give a net of C∗-algebras over Minkowski spacetime. (Here C∗X is the C∗-algebra generated by the set X.)

Now if we are given some dynamics on S, then we can — again, if certain criteria are satisfied — define a corresponding dynamical automorphism group αt on U[S,σ]. There is then a unique dynamically stable pure state ω0 of U[S,σ], and we consider the GNS representation (H,π) of U[S,σ] induced by ω0. To our delight, we find that the infinitesimal generators Φ(f) of the one-parameter groups {π(W(tf))}t∈R behave just like the field operators in the old-fashioned Fock space approach. Furthermore, if we define operators

a(f) = 2−1/2(Φ(f) + iΦ(Jf)),
a∗(f) = 2−1/2(Φ(f)−iΦ(Jf)),

we find that they behave like creation and annihilation operators of particles. (Here J is the unique “complex structure” on S that is compatible with the dynamics.) In particular, by applying them to the vacuum state Ω, we get the entire GNS Hilbert space H. Finally, if we take an orthonormal basis {fi} of S, then the sum

∑∞i=1 a∗(fi)a(fi),

is the number operator N. Thus, the traditional Fock space formalism emerges as one special case of the GNS representation of a state of the Weyl algebra.
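As a sanity check on this dictionary, the creation and annihilation operators for a single mode can be sketched numerically; this is a minimal illustration in a truncated Fock basis (the dimension d and the matrices are illustrative, not part of the algebraic formalism):

```python
import numpy as np

d = 8  # truncation dimension: Fock states |0>, ..., |d-1> of a single mode
# annihilation operator: a|n> = sqrt(n) |n-1>
a = np.diag(np.sqrt(np.arange(1, d)), k=1)
adag = a.conj().T                      # creation operator a*

vacuum = np.zeros(d); vacuum[0] = 1.0
assert np.allclose(a @ vacuum, 0)      # a annihilates the vacuum

# the number operator N = a* a has eigenvalues 0, 1, 2, ...
N = adag @ a
assert np.allclose(np.diag(N), np.arange(d))

# the CCR [a, a*] = 1 holds exactly below the truncation cutoff
comm = a @ adag - adag @ a
assert np.allclose(comm[:d-1, :d-1], np.eye(d-1))

# repeatedly applying a* to the vacuum generates the whole (truncated) space,
# mirroring how the GNS space H is built up from Omega
states = [vacuum]
for _ in range(d - 1):
    nxt = adag @ states[-1]
    states.append(nxt / np.linalg.norm(nxt))
assert np.linalg.matrix_rank(np.column_stack(states)) == d
```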

The Minkowski vacuum representation (H, π0) of the algebra of observables A is Poincaré covariant, i.e. the action α(a,Λ) of the Poincaré group by automorphisms on A is implemented by unitary operators U(a,Λ) on H. When we say that H is isomorphic to Fock space F(H), we do not mean the trivial fact that H and F(H) have the same dimension. Rather, we mean that the unitary representation (H,U) of the Poincaré group is a Fock representation.


Fock Space


Fock space is just another separable infinite-dimensional Hilbert space (and so isomorphic to all its separable infinite-dimensional brothers). But the key is writing it down in a fashion that suggests a particle interpretation. In particular, suppose that H is the one-particle Hilbert space, i.e. the state space for a single particle. Now depending on whether our particle is a Boson or a Fermion, the state space of a pair of these particles is either Es(H ⊗ H) or Ea(H ⊗ H), where Es is the projection onto the vectors invariant under the permutation ΣH,H on H ⊗ H, and Ea is the projection onto vectors that change sign under ΣH,H. For present purposes, we ignore these differences, and simply use H ⊗ H to denote one possibility or the other. Now, proceeding down the line, for n particles, we have the Hilbert space Hn ≡ H ⊗ · · · ⊗ H, etc.

A state in Hn is definitely a state of n particles. To get disjunctive states, we make use of the direct sum operation “⊕” on Hilbert spaces. So we define the Fock space F(H) over H as the infinite direct sum:

F(H) = C ⊕ H ⊕ (H ⊗ H) ⊕ (H ⊗ H ⊗ H) ⊕ · · · .

So, the state vectors in Fock space include a state where there are no particles (the vector lies in the first summand), a state where there is one particle, a state where there are two particles, etc. Furthermore, there are states that are superpositions of different numbers of particles.

One can spend time worrying about what it means to say that particle numbers can be superposed. But that is the “half empty cup” point of view. From the “half full cup” point of view, it makes sense to count particles. Indeed, the positive (unbounded) operator

N = 0 ⊕ 1 ⊕ 2 ⊕ 3 ⊕ 4 ⊕ · · · ,

is the formal element of our model that permits us to talk about the number of particles.
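The direct-sum form of N is easy to realize concretely; a small sketch, assuming a two-dimensional one-particle space and truncating the sum after the three-particle summand:

```python
import numpy as np

dimH, n_max = 2, 3      # one-particle space H = C^2, truncated after 3 particles
sector_dims = [dimH**n for n in range(n_max + 1)]   # dims of C, H, H(x)H, ...
dimF = sum(sector_dims)

# N = 0 (+) 1 (+) 2 (+) ... : the scalar n times the identity on the
# n-particle summand of the (truncated) Fock space
N = np.zeros((dimF, dimF))
offsets, offset = [], 0
for n, dn in enumerate(sector_dims):
    offsets.append(offset)
    N[offset:offset + dn, offset:offset + dn] = n * np.eye(dn)
    offset += dn

# an equal superposition of the vacuum and a fixed two-particle vector
psi = np.zeros(dimF)
psi[offsets[0]] = 1 / np.sqrt(2)   # the vacuum summand C
psi[offsets[2]] = 1 / np.sqrt(2)   # first basis vector of H (x) H
assert np.isclose(np.linalg.norm(psi), 1.0)

# the expected particle number is 0 * 1/2 + 2 * 1/2 = 1
assert np.isclose(psi @ N @ psi, 1.0)
```

The superposed state has no definite particle number, yet the expectation value of N is perfectly well defined.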

In the category of Hilbert spaces, all separable infinite-dimensional Hilbert spaces are isomorphic – there is no difference between Fock space and the single-particle space. If we are not careful, we could become confused about the bearer of the name “Fock space.”

The confusion goes away when we move to the appropriate category. According to Wigner’s analysis, a particle corresponds to an irreducible unitary representation of the identity component P of the Poincaré group. Then the single particle space and Fock space are distinct objects in the category of representations of P. The underlying Hilbert spaces of the two representations are both separable (and hence isomorphic as Hilbert spaces); but the two representations are most certainly not equivalent (one is irreducible, the other reducible).

Single Asset Optimal Investment Fraction


We first consider a situation, when an investor can spend a fraction of his capital to buy shares of just one risky asset. The rest of his money he keeps in cash.

Generalizing Kelly, we consider the following simple strategy of the investor: he regularly checks the asset’s current price p(t), and sells or buys some asset shares in order to keep the current market value of his asset holdings a pre-selected fraction r of his total capital. These readjustments are made periodically at a fixed interval, which we refer to as the readjustment interval, and which we select as the discrete unit of time. In this work the readjustment time interval is selected once and for all, and we do not attempt optimization of its length.

We also assume that on the time-scale of this readjustment interval the asset price p(t) undergoes a geometric Brownian motion:

p(t + 1) = eη(t)p(t) —– (1)

i.e. at each time step the random number η(t) is drawn from some probability distribution π(η), and is independent of its values at previous time steps. This exponential notation is particularly convenient for working with multiplicative noise, keeping the necessary algebra at a minimum. Under these rules of dynamics the logarithm of the asset’s price, ln p(t), performs a random walk with an average drift v = ⟨η⟩ and a dispersion D = ⟨η2⟩ − ⟨η⟩2.

It is easy to derive the time evolution of the total capital W(t) of an investor, following the above strategy:

W(t + 1) = (1 − r)W(t) + rW(t)eη(t) —– (2)

Let us assume that the value of the investor’s capital at t = 0 is W(0) = 1. The evolution of the expectation value of the total capital ⟨W(t)⟩ after t time steps is obviously given by the recursion ⟨W(t + 1)⟩ = (1 − r + r⟨eη⟩)⟨W(t)⟩. When ⟨eη⟩ > 1, at first thought it seems the investor should invest all his money in the risky asset. Then the expectation value of his capital would enjoy an exponential growth with the fastest growth rate. However, it would be totally unreasonable to expect that in a typical realization of price fluctuations, the investor would be able to attain the average growth rate determined as vavg = d⟨W(t)⟩/dt. This is because the main contribution to the expectation value ⟨W(t)⟩ comes from exponentially unlikely outcomes, when the price of the asset after a long series of favorable events with η > ⟨η⟩ becomes exponentially big. Such outcomes lie well beyond reasonable fluctuations of W(t), determined by the standard deviation √Dt of ln W(t) around its average value ⟨ln W(t)⟩ = ⟨η⟩t. For the investor who deals with just one realization of the multiplicative process it is better not to rely on such unlikely events, and to maximize his gain in a typical outcome of the process. To quantify the intuitively clear concept of a typical value of a random variable x, we define xtyp as the median of its distribution, i.e. xtyp has the property that Prob(x > xtyp) = Prob(x < xtyp) = 1/2. In a multiplicative process (2) with r = 1, W(t + 1) = eη(t)W(t), one can show that Wtyp(t) – the typical value of W(t) – grows exponentially in time, Wtyp(t) = e⟨η⟩t, at a rate vtyp = ⟨η⟩, while the expectation value ⟨W(t)⟩ also grows exponentially as ⟨W(t)⟩ = ⟨eη⟩t, but at a faster rate given by vavg = ln⟨eη⟩. Notice that ⟨ln W(t)⟩ always grows with the typical growth rate, since those very rare outcomes when W(t) is exponentially big do not make a significant contribution to this average.
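The gap between vtyp = ⟨η⟩ and vavg = ln⟨eη⟩ is easy to see in simulation; a sketch with illustratively chosen Gaussian η (v = 0.05, D = 0.04), not tied to any particular asset:

```python
import numpy as np

rng = np.random.default_rng(0)
v, D, T, n_paths = 0.05, 0.04, 100, 20000   # drift and dispersion of eta

# with r = 1, ln W(T) is just the sum of T i.i.d. steps eta(t)
eta = rng.normal(v, np.sqrt(D), size=(n_paths, T))
lnW = eta.sum(axis=1)

v_typ = np.median(lnW) / T                 # typical rate ~ <eta> = v = 0.05
v_avg = np.log(np.exp(lnW).mean()) / T     # average rate ~ ln<e^eta> = v + D/2 = 0.07

assert abs(v_typ - 0.05) < 0.01
assert abs(v_avg - 0.07) < 0.01
assert v_avg > v_typ   # the average is dragged up by exponentially rare paths
```

The median path grows at 5% per step while the ensemble average grows at 7%; the difference is carried entirely by a vanishing fraction of lucky realizations.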

The question we are going to address is: which investment fraction r provides the investor with the best typical growth rate vtyp of his capital. Kelly answered this question for a particular realization of the multiplicative stochastic process, where the capital is multiplied by 2 with probability q > 1/2, and by 0 with probability p = 1 − q. This case is realized in a gambling game, where betting on the right outcome pays 2:1, while you know the right outcome with probability q > 1/2. In our notation this case corresponds to η being equal to ln 2 with probability q and −∞ otherwise. The player’s capital in Kelly’s model with r = 1 enjoys the growth of expectation value ⟨W(t)⟩ at a rate vavg = ln(2q) > 0. In this case it is however particularly clear that one should not use maximization of the expectation value of the capital as the optimum criterion. If the player indeed bets all of his capital at every time step, sooner or later he will lose everything and will not be able to continue to play. In other words, r = 1 corresponds to the worst typical growth of the capital: asymptotically the player will be bankrupt with probability 1. In this example it is also very transparent where the positive average growth rate comes from: after T rounds of the game, in the very unlikely (Prob = qT) event that the capital was multiplied by 2 at all times (the gambler guessed right every time!), the capital is equal to 2T. This exponentially large value of the capital outweighs the exponentially small probability of this event, and gives rise to an exponentially growing average. This would offer little condolence to a gambler who has lost everything.

We generalize Kelly’s arguments for an arbitrary distribution π(η). As we will see, this generalization reveals some hidden results not realized in Kelly’s “betting” game. As we learned above, the growth of the typical value of W(t) is given by the drift of ⟨ln W(t)⟩ = vtypt, which in our case can be written as

vtyp(r) = ∫ dη π(η) ln(1 + r(eη − 1)) —– (3)

One can check that vtyp(0) = 0, since in this case the whole capital is in the form of cash and does not change in time. In the other limit one has vtyp(1) = ⟨η⟩, since in this case the whole capital is invested in the asset and enjoys its typical growth rate (⟨η⟩ = −∞ for Kelly’s case). Can one do better by selecting 0 < r < 1? To find the maximum of vtyp(r) one differentiates (3) with respect to r and looks for a solution of the resulting equation 0 = v’typ(r) = ∫ dη π(η) (eη − 1)/(1 + r(eη − 1)) in the interval 0 ≤ r ≤ 1. If such a solution exists, it is unique, since v′′typ(r) = − ∫ dη π(η) (eη − 1)2/(1 + r(eη − 1))2 < 0 everywhere. The values of v’typ(r) at 0 and 1 are given by v’typ(0) = ⟨eη⟩ − 1, and v’typ(1) = 1 − ⟨e−η⟩. One has to consider three possibilities:

(1) ⟨eη⟩ < 1. In this case v’typ(0) < 0, so the maximum of vtyp(r) is realized at r = 0 and is equal to 0. In other words, one should never invest in an asset with negative average return per capital ⟨eη⟩ − 1 < 0.

(2) ⟨eη⟩ > 1, and ⟨e−η⟩ > 1. In this case v’typ(0) > 0, but v’typ(1) < 0 and the maximum of vtyp(r) is realized at some 0 < r < 1, which is the unique solution to v’typ(r) = 0. The typical growth rate in this case is always positive (because one could have always selected r = 0 to make it zero), but not as big as the average rate ln⟨eη⟩, which serves as an unattainable ideal limit. An intuitive understanding of why one should select r < 1 in this case comes from the following observation: the condition ⟨e−η⟩ > 1 makes ⟨1/p(t)⟩ grow exponentially in time. Such exponential growth indicates that outcomes with very small p(t) are feasible and give the dominant contribution to ⟨1/p(t)⟩. This is an indicator that the asset price is unstable and one should not trust his whole capital to such a risky investment.

(3) ⟨eη⟩ > 1, and ⟨e−η⟩ < 1. This is a safe asset and one can invest his whole capital in it. The maximum of vtyp(r) is achieved at r = 1 and is equal to vtyp(1) = ⟨η⟩. A simple example of this type of asset is one in which the price p(t) with equal probabilities is multiplied by 2 or by a = 2/3. As one can see, a = 2/3 is the marginal case, in which ⟨1/p(t)⟩ = const. For a < 2/3 one should invest only a fraction r < 1 of his capital in the asset, while for a ≥ 2/3 the whole sum could be trusted to it. The specialty of the case with a = 2/3 cannot be guessed by just looking at the typical and average growth rates of the asset! One has to go and calculate ⟨e−η⟩ to check if ⟨1/p(t)⟩ diverges. This “reliable” type of asset is a new feature of the model with a general π(η). It is never realized in Kelly’s original model, which always has ⟨η⟩ = −∞, so that it never makes sense to gamble the whole capital every time.
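The three cases can be folded into one numerical routine; a sketch in which `vtyp` and `optimal_fraction` are hypothetical helper names, and Kelly's η = −∞ outcome is approximated by a large negative value:

```python
import numpy as np

def vtyp(r, etas, probs):
    """Typical growth rate, eq. (3), for a discrete distribution pi(eta)."""
    return float(np.sum(probs * np.log(1.0 + r * (np.exp(etas) - 1.0))))

def optimal_fraction(etas, probs, tol=1e-12):
    """Maximize vtyp over 0 <= r <= 1 by bisecting on v'typ, which is
    strictly decreasing (v''typ < 0); implements the three cases above."""
    dv = lambda r: np.sum(probs * (np.exp(etas) - 1.0)
                          / (1.0 + r * (np.exp(etas) - 1.0)))
    if dv(0.0) <= 0:     # case (1): <e^eta> <= 1, never invest
        return 0.0
    if dv(1.0) >= 0:     # case (3): <e^-eta> <= 1, safe asset, invest all
        return 1.0
    lo, hi = 0.0, 1.0    # case (2): the root of v'typ lies in (0, 1)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if dv(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Kelly's game: eta = ln 2 with probability q, total loss otherwise; the
# loss (eta = -infinity) is modeled by eta = -30, i.e. e^eta ~ 0.
q = 0.6
etas, probs = np.array([np.log(2.0), -30.0]), np.array([q, 1.0 - q])
r_star = optimal_fraction(etas, probs)
assert abs(r_star - (2 * q - 1)) < 1e-3   # Kelly's classical answer r* = 2q - 1
assert vtyp(r_star, etas, probs) > 0.0
```

For the 2:1 binary bet the routine recovers Kelly's closed-form optimum r* = 2q − 1.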

An interesting and somewhat counterintuitive consequence of the above results is that under certain conditions one can make his capital grow by investing in an asset with a negative typical growth rate ⟨η⟩ < 0. Such an asset certainly loses value, and its typical price experiences an exponential decay. Any investor bold enough to trust his whole capital to such an asset is losing money at the same rate. But as long as the fluctuations are strong enough to maintain a positive average return per capital (⟨eη⟩ − 1 > 0) one can maintain a certain fraction of his total capital invested in this asset and almost certainly make money! A simple example of such a mind-boggling situation is given by a random multiplicative process in which the price of the asset with equal probabilities is doubled (goes up by 100%) or divided by 3 (goes down by 66.7%). The typical price of this asset drifts down by 18% each time step. Indeed, after T time steps one could reasonably expect the price of this asset to be ptyp(T) = 2T/2 3−T/2 = (√(2/3))T ≃ 0.82T. On the other hand, the average ⟨p(t)⟩ enjoys a 17% growth: ⟨p(t + 1)⟩ = 7/6 ⟨p(t)⟩ ≃ 1.17⟨p(t)⟩. As one can easily see, the optimum of the typical growth rate is achieved by maintaining a fraction r = 1/4 of the capital invested in this asset. The typical growth factor in this case is a meager √(25/24) ≃ 1.02, meaning that in the long run one almost certainly gets a 2% return per time step, but that is certainly better than losing 18% by investing the whole capital in this asset.
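The numbers in this example are simple enough to verify directly; a short check of the r = 1/4 optimum and the √(25/24) growth factor:

```python
import math

# the asset's price is doubled or divided by 3, each with probability 1/2
def vtyp(r):
    # eq. (3) specialized to this two-point distribution of eta
    return 0.5 * math.log(1 + r * (2 - 1)) + 0.5 * math.log(1 + r * (1/3 - 1))

# the typical price decays by ~18% per step, the average grows by ~17%
assert abs(math.sqrt(2/3) - 0.8165) < 1e-4
assert abs((2 + 1/3) / 2 - 7/6) < 1e-12

# a grid search confirms the optimal investment fraction r = 1/4
r_best = max((i / 10000 for i in range(10001)), key=vtyp)
assert abs(r_best - 0.25) < 1e-3

# at the optimum the typical growth factor is sqrt(25/24) ~ 1.02 per step
assert abs(math.exp(vtyp(0.25)) - math.sqrt(25 / 24)) < 1e-12
```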

Of course the properties of a typical realization of a random multiplicative process are not fully characterized by the drift vtyp(r)t in the position of the center of mass of P(h,t), where h(t) = ln W(t) is the logarithm of the wealth of the investor. Indeed, asymptotically P(h,t) has a Gaussian shape P(h,t) = 1/√(2πD(r)t) exp(−(h − vtyp(r)t)2/(2D(r)t)), where vtyp(r) is given by eq. (3). One needs to know the dispersion D(r) to estimate √(D(r)t), which is the magnitude of characteristic deviations of h(t) away from its typical value htyp(t) = vtypt. At the infinite time horizon t → ∞, the process with the biggest vtyp(r) will certainly be preferable over any other process. This is because the separation between typical values of h(t) for two different investment fractions r grows linearly in time, while the span of typical fluctuations grows only as √t. However, at a finite time horizon the investor should take into account both vtyp(r) and D(r) and decide what he prefers: moderate growth with small fluctuations or faster growth with still bigger fluctuations. To quantify this decision one needs to introduce an investor’s “utility function”, which we will not attempt in this work. The most conservative players are advised to always keep their capital in cash, since with any other arrangement the fluctuations will certainly be bigger. As a rule one can show that the dispersion D(r) = ∫ π(η) ln2[1 + r(eη − 1)]dη − vtyp2(r) monotonically increases with r. Therefore, among two solutions with equal vtyp(r) one should always select the one with the smaller r, since it would guarantee smaller fluctuations. Here it is more convenient to switch to the standard notation. It is customary to use the random variable

Λ(t)= (p(t+1)−p(t))/p(t) = eη(t) −1 —– (4)

which is referred to as the return per unit capital of the asset. The properties of a random multiplicative process are expressed in terms of the average return per capital α = ⟨Λ⟩ = ⟨eη⟩ − 1, and the volatility (standard deviation) of the return per capital σ = √(⟨Λ2⟩ − ⟨Λ⟩2). In our notation, α = ⟨eη⟩ − 1 is determined by the average and not the typical growth rate of the process. For η ≪ 1, α ≃ v + D/2 + v2/2, while the volatility σ is related to D (the dispersion of η) through σ ≃ √D.
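The quoted small-η relations can be checked against the exact lognormal moments; a sketch with illustratively small v and D:

```python
import math

# eta ~ N(v, D) with small drift and dispersion (illustrative values)
v, D = 0.01, 0.0004

# exact lognormal moments: <e^eta> = e^{v + D/2}, Var(e^eta) = e^{2v+D}(e^D - 1)
alpha_exact = math.exp(v + D / 2) - 1
sigma_exact = math.exp(v + D / 2) * math.sqrt(math.exp(D) - 1)

# the small-eta approximations quoted in the text
alpha_approx = v + D / 2 + v**2 / 2
sigma_approx = math.sqrt(D)

assert abs(alpha_exact - alpha_approx) < 1e-5
assert abs(sigma_exact - sigma_approx) < 1e-3
```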


Portfolio Optimization, When the Underlying Asset is Subject to a Multiplicative Continuous Brownian Motion With Gaussian Price Fluctuations


Imagine that you are an investor with some starting capital, which you can invest in just one risky asset. You decided to use the following simple strategy: you always maintain a given fraction 0 < r < 1 of your total current capital invested in this asset, while the rest (given by the fraction 1 − r) you wisely keep in cash. You select a unit of time (say a week, a month, a quarter, or a year, depending on how closely you follow your investment, and what transaction costs are involved) at which you check the asset’s current price, and sell or buy some shares of this asset. By this transaction you adjust the current money equivalent of your investment to the above pre-selected fraction of your total capital.

The interesting question is: which investment fraction provides the optimal typical long-term growth rate of the investor’s capital? By typical, it is meant that this growth rate occurs at a large time horizon in the majority of realizations of the multiplicative process. By extending time horizons, one can make this rate occur with probability arbitrarily close to one. Contrary to the traditional economics approach, where the expectation value of an artificial “utility function” of an investor is optimized, the optimization of a typical growth rate does not contain any ambiguity.

Let us assume that during the timescale at which the investor checks and readjusts his asset’s capital to the selected investment fraction, the asset’s price changes by a random factor, drawn from some probability distribution, and uncorrelated with price dynamics at earlier intervals. In other words, the price of an asset experiences a multiplicative random walk with some known probability distribution of steps. This assumption is known to hold in real financial markets beyond a certain time scale. Contrary to continuum theories popular among economists, our approach is not limited to Gaussian distributed returns: indeed, we were able to formulate our strategy for a general probability distribution of returns per capital (elementary steps of the multiplicative random walk).

Thus the risk-free interest rate, the asset’s dividends, and transaction costs are ignored (when volatility is large they are indeed negligible). However, the task of including these effects in our formalism is rather straightforward. The quest of finding a strategy which optimizes the long-term growth rate of the capital is by no means new: indeed, it was first discussed by Daniel Bernoulli around 1738 in connection with the St. Petersburg game. In the early days of information sciences, C. E. Shannon considered the application of the concept of information entropy in designing optimal strategies in such games as gambling. Working from the foundations of Shannon, J. L. Kelly Jr. specifically designed an optimal gambling strategy in placing bets, when a gambler has some incomplete information about the winning outcome (a “noisy information channel”). In modern day finance, investment in very risky assets especially is no different from gambling. The point Shannon and Kelly wanted to make is that, given that the odds are slightly in your favor albeit with large uncertainty, the gambler should not bet his whole capital at every time step. Rather, he would achieve the biggest long-term capital growth by betting some specially optimized fraction of his whole capital in every game. This cautious approach to investment is recommended in situations when the volatility is very large. For instance, in many emerging markets the volatility is huge, but they are still swarming with investors, since the long-term return rate of some cautious investment strategy is favorable.

Later on Kelly’s approach was expanded and generalized in the works of Breiman. Our results for multi-asset optimal investment are in agreement with his exact but non-constructive equations. In some special cases, Merton has considered the problem of portfolio optimization, when the underlying asset is subject to a multiplicative continuous Brownian motion with Gaussian price fluctuations.

C∗-algebras and their Representations


Definition. A C∗-algebra is a pair consisting of a ∗-algebra U and a norm

∥ · ∥ : U → [0, ∞) such that
∥AB∥ ≤ ∥A∥ · ∥B∥, ∥A∗A∥ = ∥A∥2,

∀ A, B ∈ U, and such that U is complete in the norm topology. We usually use U alone to denote the algebra together with its norm.

Definition. A state ω on U is a linear functional such that ω(A∗A) ≥ 0 ∀ A ∈ U, and ω(I) = 1.

Definition. A state ω of U is said to be mixed if ω = (1/2)(ω1 + ω2) with ω1 ≠ ω2. Otherwise ω is said to be pure.

Definition. Let U be a C∗-algebra. A representation of U is a pair (H,π), where H is a Hilbert space and π is a ∗-homomorphism of U into B(H). A representation (H,π) is said to be irreducible if π(U) is weakly dense in B(H). A representation (H,π) is said to be faithful if π is an isomorphism.

Definition. Let (H, π) and (K, φ) be representations of a C∗-algebra U. Then (H,π) and (K,φ) are said to be:

  1. unitarily equivalent if there is a unitary U : H → K such that Uπ(A) = φ(A)U for all A ∈ U.
  2. quasiequivalent if the von Neumann algebras π(U)′′ and φ(U)′′ are ∗-isomorphic.
  3. disjoint if they are not quasiequivalent.

Definition. A representation (K, φ) is said to be a subrepresentation of (H, π) just in case there is an isometry V : K → H such that π(A)V =Vφ(A) ∀ A ∈ U.

Two representations are quasiequivalent iff they have unitarily equivalent subrepresentations.

The Gelfand-Naimark-Segal (GNS) theorem shows that every C∗-algebraic state can be represented by a vector in a Hilbert space.


(GNS). Let ω be a state of U. Then there is a representation (H,π) of U, and a unit vector Ω ∈ H such that:

1. ω(A)=⟨Ω, π(A)Ω⟩, ∀ A ∈ U;

2. π(U)Ω is dense in H.

Furthermore, the representation (H,π) is the unique one (up to unitary equivalence) satisfying the two conditions.


We construct the Hilbert space H from equivalence classes of elements in U, and the representation π is given by the action of left multiplication. In particular, define a bounded sesquilinear form on U by setting

⟨A, B⟩ω = ω(A∗B), A, B ∈ U.

The quotient of U by the null space {A ∈ U : ω(A∗A) = 0}, completed in the induced norm, is the GNS Hilbert space H, and Ω is the equivalence class of the identity I.
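The construction is transparent in finite dimensions; a toy sketch of GNS for M2(C) with the state ω(A) = Tr(ρA) (here ρ is an illustratively chosen full-rank density matrix, so the null space is trivial and no quotient is needed):

```python
import numpy as np

# a faithful state on the C*-algebra U = M_2(C): omega(A) = Tr(rho A)
rho = np.diag([0.7, 0.3])
omega = lambda A: np.trace(rho @ A)

# basis of U as a 4-dim vector space: matrix units E_ij, ordered row-major,
# so the coordinates of a matrix X in this basis are just X.ravel()
basis = []
for i in range(2):
    for j in range(2):
        E = np.zeros((2, 2)); E[i, j] = 1.0
        basis.append(E)

# Gram matrix of the GNS inner product <A, B>_omega = omega(A* B)
G = np.array([[omega(A.conj().T @ B) for B in basis] for A in basis])

def pi(A):
    """pi(A) acts by left multiplication B -> A B, written in the E_ij basis."""
    return np.column_stack([(A @ B).ravel() for B in basis])

# the cyclic vector Omega is the class of the identity I
Omega = np.eye(2).ravel()

# property 1 of the GNS theorem: omega(A) = <Omega, pi(A) Omega>
A = np.array([[1.0, 2.0], [2.0, -0.5]])
lhs = omega(A)
rhs = Omega @ G @ (pi(A) @ Omega)
assert np.isclose(lhs, rhs)

# property 2 (cyclicity): pi(U) Omega spans the whole 4-dim GNS space
span = np.column_stack([pi(B) @ Omega for B in basis])
assert np.linalg.matrix_rank(span) == 4
```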

von Neumann Algebras


The standard definition of a von Neumann algebra involves reference to a topology, and it is then shown (by von Neumann’s double commutant theorem) that this topological condition coincides with an algebraic condition (condition 2 in the Definition 1.2). But for present purposes, it will suffice to take the algebraic condition as basic.

1.1 Definition. Let H be a Hilbert space. Let B(H) be the set of bounded linear operators on H in the sense that for each A ∈ B(H) there is a smallest nonnegative number ∥A∥ such that ⟨Ax, Ax⟩1/2 ≤ ∥A∥ for all unit vectors x ∈ H. [Subsequently we use ∥ · ∥ ambiguously for the norm on H and the norm on B(H).] We use juxtaposition AB to denote the composition of two elements A, B of B(H). For each A ∈ B(H) we let A∗ denote the unique element of B(H) such that ⟨A∗x, y⟩ = ⟨x, Ay⟩, for all x, y ∈ H.

1.2 Definition. Let R be a ∗-subalgebra of B(H), the bounded operators on the Hilbert space H. Then R is a von Neumann algebra if

1. I ∈ R,

2. (R′)′ = R,

where R′ = {B ∈ B(H) : [B, A] = 0, ∀ A ∈ R}.

1.3 Definition. We will need four standard topologies on the set B(H) of bounded linear operators on H. Each of these topologies is defined in terms of a family of seminorms.

  • The uniform topology on B(H) is defined in terms of a single norm: ∥A∥ = sup{∥Av∥ : v ∈ H, ∥v∥ ≤ 1}, where the norm on the right is the given vector norm on H. Hence, an operator A is a limit point of the sequence (Ai)i∈N iff (∥Ai − A∥)i∈N converges to 0.
  • The weak topology on B(H) is defined in terms of the family {pu,v : u, v ∈ H} of seminorms where pu,v(A) = |⟨u, Av⟩|. The resulting topology is not generally first countable, and so the closure of a subset S of B(H) is generally larger than the set of all limit points of sequences in S. Rather, the closure of S is the set of limit points of generalized sequences (nets) in S. A net (Ai)i∈I in B(H) converges weakly to A just in case (⟨u, Aiv⟩)i∈I converges to ⟨u, Av⟩ ∀ u, v ∈ H.
  • The strong topology on B(H) is defined in terms of the family {pv : v ∈ H} of seminorms where pv(A) = ∥Av∥. Thus, a net (Ai)i∈I converges strongly to A iff (∥(Ai − A)v∥)i∈I converges to 0, ∀ v ∈ H.
  • The ultraweak topology on B(H) is defined in terms of the family {pρ : ρ ∈ T(H)} where T(H) is the set of positive, trace 1 operators (“density operators”) on H and pρ(A) = |Tr(ρA)|.

Thus a net (Ai)i∈I converges ultraweakly to A just in case (Tr(ρAi))i∈I converges to Tr(ρA), ∀ ρ ∈ T (H).
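The difference between these topologies can be glimpsed even in a finite-dimensional truncation; a sketch, assuming projections Pn onto the first n coordinates as stand-ins for the classic example Pn → I (strongly but not uniformly):

```python
import numpy as np

d = 200                         # finite-dimensional stand-in for an infinite H
I = np.eye(d)
v = 1.0 / np.arange(1, d + 1)   # a fixed vector with a decaying tail
v /= np.linalg.norm(v)

strong = []
for n in (10, 50, 150):
    # P_n projects onto the first n coordinates
    Pn = np.diag((np.arange(d) < n).astype(float))
    # the seminorm p_v(P_n - I) = ||(P_n - I)v|| shrinks as n grows,
    # which is strong convergence P_n -> I at the vector v
    strong.append(np.linalg.norm((Pn - I) @ v))
    # but the operator norm ||P_n - I|| stays 1 (it sends e_{n+1} to -e_{n+1}),
    # so there is no convergence in the uniform topology
    assert np.isclose(np.linalg.norm(Pn - I, 2), 1.0)

assert strong[0] > strong[1] > strong[2]
```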

If S is a bounded, convex subset of B(H), then the weak, ultraweak, and strong closures of S are the same.

For a ∗-algebra R on H that contains I, the following are equivalent:

(i) R is weakly closed;

(ii) R′′ = R. This is von Neumann’s double commutant theorem.
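The double commutant is computable by linear algebra in small dimensions; a sketch in which `commutant_basis` is a hypothetical helper that finds the commutant as a nullspace, applied to the algebra of diagonal 2 × 2 matrices:

```python
import numpy as np

def commutant_basis(mats, d, tol=1e-10):
    """Orthonormal basis of {B in M_d : [B, A] = 0 for all A in mats},
    found as the nullspace of the stacked linear maps B -> [B, A]."""
    cols = []
    for k in range(d * d):
        B = np.zeros((d, d)); B.flat[k] = 1.0
        cols.append(np.concatenate([(B @ A - A @ B).ravel() for A in mats]))
    M = np.column_stack(cols)          # sends vec(B) to all the commutators
    _, s, Vh = np.linalg.svd(M)
    return [row.reshape(d, d) for row in Vh[s < tol]]

# R: the algebra of diagonal 2x2 matrices, generated by I and diag(1, -1)
gens = [np.eye(2), np.diag([1.0, -1.0])]
Rprime = commutant_basis(gens, 2)       # R' is again the diagonal algebra
Rdouble = commutant_basis(Rprime, 2)    # R'' = (R')'

# R'' is two-dimensional and consists of diagonal matrices: R'' = R
assert len(Rprime) == 2 and len(Rdouble) == 2
for B in Rdouble:
    assert abs(B[0, 1]) < 1e-8 and abs(B[1, 0]) < 1e-8
```

Since the diagonal algebra already contains I and equals its own double commutant, it is a (finite-dimensional) von Neumann algebra in the sense of Definition 1.2.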

1.4 Definition. Let R be a subset of B(H). A vector x ∈ H is said to be cyclic for R just in case [Rx] = H, where Rx = {Ax : A ∈ R}, and [Rx] is the closed linear span of Rx. A vector x ∈ H is said to be separating for R just in case Ax = 0 and A ∈ R entails A = 0.

Let R be a von Neumann algebra on H, and let x ∈ H. Then x is cyclic for R iff x is separating for R′.

1.5 Definition. A normal state of a von Neumann algebra R is an ultraweakly continuous state. We let R∗ denote the normal state space of R.