Metaphysics of the Semantics of HoTT. Thought of the Day 73.0


Types and tokens are interpreted as concepts (rather than spaces, as in the homotopy interpretation). In particular, a type is interpreted as a general mathematical concept, while a token of a given type is interpreted as a more specific mathematical concept qua instance of the general concept. This accords with the fact that each token belongs to exactly one type. Since ‘concept’ is a pre-mathematical notion, this interpretation is admissible as part of an autonomous foundation for mathematics.

Expressions in the language are the names of types and tokens. Those naming types correspond to propositions. A proposition is ‘true’ just if the corresponding type is inhabited (i.e. there is a token of that type, which we call a ‘certificate’ to the proposition). There is no way in the language of HoTT to express the absence or non-existence of a token. The negation of a proposition P is represented by the type P → 0, where P is the type corresponding to proposition P and 0 is a type that by definition has no token constructors (corresponding to a contradiction). The logic of HoTT is not bivalent, since the inability to construct a token of P does not guarantee that a token of P → 0 can be constructed, and vice versa.
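This negation-as-a-function-type can be written down directly in a proof assistant. A minimal sketch in Lean 4 (whose type theory is close to, though not identical with, that of HoTT; the names here are our own):

```lean
-- The type 0: defined by having no token constructors at all.
inductive Zero : Type

-- The negation of P is the function type P → 0.
def Neg (P : Type) : Type := P → Zero

-- A token of 0 → 0 exists (the identity), so ¬0 is inhabited,
-- i.e. there is a certificate that 0 is "false".
def notZero : Neg Zero := fun t => t

-- Given a token of P and a token of P → 0, we construct a token of 0:
-- this is how holding both P and ¬P exhibits a contradiction.
def contradict {P : Type} (p : P) (np : Neg P) : Zero := np p
```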

The rules governing the formation of types are understood as ways of composing concepts to form more complex concepts, or as ways of combining propositions to form more complex propositions. They follow from the Curry-Howard correspondence between logical operations and operations on types. However, we depart slightly from the standard presentation of the Curry-Howard correspondence, in that the tokens of types are not to be thought of as ‘proofs’ of the corresponding propositions but rather as certificates to their truth. A proof of a proposition is the construction of a certificate to that proposition by a sequence of applications of the token construction rules. Two different such processes can result in construction of the same token, and so proofs and tokens are not in one-to-one correspondence.

When we work formally in HoTT we construct expressions in the language according to the formal rules. These expressions are taken to be the names of tokens and types of the theory. The rules are chosen such that if a construction process begins with non-contradictory expressions that all name tokens (i.e. none of the expressions are ‘empty names’) then the result will also name a token (i.e. the rules preserve non-emptiness of names).

Since we interpret tokens and types as concepts, the only metaphysical commitment required is to the existence of concepts. That human thought involves concepts is an uncontroversial position, and our interpretation does not require that concepts have any greater metaphysical status than is commonly attributed to them. Just as the existence of a concept such as ‘unicorn’ does not require the existence of actual unicorns, likewise our interpretation of tokens and types as mathematical concepts does not require the existence of mathematical objects. However, it is compatible with such beliefs. Thus a Platonist can take the concept, say, ‘equilateral triangle’ to be the concept corresponding to the abstract equilateral triangle (after filling in some account of how we come to know about these abstract objects in a way that lets us form the corresponding concepts). Even without invoking mathematical objects to be the ‘targets’ of mathematical concepts, one could still maintain that concepts have a mind-independent status, i.e. that the concept ‘triangle’ continues to exist even while no-one is thinking about triangles, and that the concept ‘elliptic curve’ did not come into existence at the moment someone first gave the definition. However, this is not a necessary part of the interpretation, and we could instead take concepts to be mind-dependent, with corresponding implications for the status of mathematics itself.

Conjuncted: Avarice

Greed followed by avarice… We consider the variation in which events occur at a rate equal to the difference in capital of the two traders. That is, an individual is more likely to take capital from a much poorer person than from someone only slightly less wealthy. For this “avaricious” exchange, the corresponding rate equations are

dck/dt = ck-1∑j=1k-1(k − 1 − j)cj + ck+1∑j=k+1∞(j − k − 1)cj − ck∑j=1∞|k − j|cj —– (1)

while the total density obeys,

dN/dt = -c1(1 – N) —– (2)

under the assumption that the total wealth density is set equal to one, ∑k kck = 1.

These equations can be solved by again applying scaling. For this purpose, it is first expedient to rewrite the rate equation as,

dck/dt = (ck-1 − ck)∑j=1k-1(k − j)cj − ck-1∑j=1k-1cj + (ck+1 − ck)∑j=k+1∞(j − k)cj − ck+1∑j=k+1∞cj —– (3)

Taking the continuum limit,

∂c/∂t = ∂c/∂k − N∂(kc)/∂k —– (3)

We now substitute the scaling ansatz,

ck(t) ≅ N²C(x), with x = kN, to yield

C(0)[2C + xC′] = (x − 1)C′ + C —– (4)


dN/dt = −C(0)N² —– (5)

Solving the above equations gives N ≅ [C(0)t]−1 and

C(x) = (1 + μ)(1 + μx)−2−1/μ —– (6)

with μ = C(0) − 1. The scaling approach has thus found a family of solutions which are parameterized by μ, and additional information is needed to determine which of these solutions is appropriate for our system. For this purpose, note that equation (6) exhibits different behaviors depending on the sign of μ. When μ > 0, there is an extended non-universal power-law distribution, while for μ = 0 the solution is the pure exponential, C(x) = e−x. These solutions may be rejected because the wealth distribution cannot extend over an unbounded domain if the initial wealth extends over a finite range.
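A quick numerical check (a Python sketch; the grid sizes and the test values of μ are our own choices) confirms that every member of the family (6) carries unit density, that μ → 0 reproduces the pure exponential, and that μ = −1/2 gives a flat profile:

```python
import math

def C(x, mu):
    """Scaled wealth distribution of eq. (6): C(x) = (1+mu)(1+mu*x)^(-2-1/mu)."""
    base = 1.0 + mu * x
    if base <= 0.0:          # compact support: C vanishes for x >= -1/mu
        return 0.0
    return (1.0 + mu) * base ** (-2.0 - 1.0 / mu)

def integrate_C(mu, upper, n=200_000):
    """Trapezoidal integral of C over [0, upper]."""
    h = upper / n
    total = 0.5 * (C(0.0, mu) + C(upper, mu))
    for i in range(1, n):
        total += C(i * h, mu)
    return total * h

# Normalization holds across the family, for mu of either sign.
for mu, upper in [(0.5, 400.0), (-0.3, 1.0 / 0.3), (-0.5, 2.0)]:
    assert abs(integrate_C(mu, upper) - 1.0) < 1e-2

# mu -> 0 recovers the pure exponential C(x) = e^(-x) ...
assert abs(C(1.0, 1e-7) - math.exp(-1.0)) < 1e-4
# ... and mu = -1/2 gives the flat (Fermi-like) profile C = 1/2 on [0, 2).
assert abs(C(1.0, -0.5) - 0.5) < 1e-12
```

The last assertion is the case selected below by the front-matching argument, C(0) = 1/2.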

The accessible solutions therefore correspond to −1 < μ < 0, where the distribution is compact and finite, with C(x) ≡ 0 for x ≥ xf = −μ−1. To determine the true solution, let us re-examine the continuum form of the rate equation, equation (3). From naive power counting, the first two terms are asymptotically dominant and they give a propagating front with kf exactly equal to t. Consequently, the scaled location of the front is given by xf = Nkf. Now the result N ≃ [C(0)t]−1 gives xf = 1/C(0). Comparing this expression with the corresponding value from the scaling approach, xf = [1 − C(0)]−1, selects the value C(0) = 1/2. Remarkably, this scaling solution coincides with the Fermi distribution found for the case of a constant interaction rate. Finally, in terms of the unscaled variables k and t, the wealth distribution is

ck(t) = 2/t², k < t

= 0, k ≥ t —– (7)

This discontinuity is smoothed out by diffusive spreading. Another interesting feature is that if the interaction rate is sufficiently greedy, “gelation” occurs, whereby a finite fraction of the total capital is possessed by a single individual. For interaction rates, or kernels K(j, k) between individuals of capital j and k which do not give rise to gelation, the total density typically varies as a power law in time, while for gelling kernels N(t) goes to zero at some finite time. At the border between these regimes N(t) typically decays exponentially in time. We seek a similar transition in behavior for the capital exchange model by considering the rate equation for the density

dN/dt = −c1∑k=1∞K(1, k)ck —– (8)

For the family of kernels with K(1, k) ∼ kν as k → ∞, substitution of the scaling ansatz gives Ṅ ∼ −N3−ν. Thus N(t) exhibits a power-law behavior N ∼ t−1/(2−ν) for ν < 2 and an exponential decay for ν = 2. Thus gelation should arise for ν > 2.
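The crossover can be checked directly by integrating Ṅ = −N^(3−ν) (a Python sketch; the Euler step size and the initial condition N(0) = 1 are our own choices). For ν < 2 the closed form is N(t) = [1 + (2−ν)t]^(−1/(2−ν)), a power law, while ν = 2 gives pure exponential decay:

```python
import math

def decay(nu, t_end, dt=1e-4):
    """Euler integration of dN/dt = -N**(3 - nu), with N(0) = 1."""
    n, t = 1.0, 0.0
    while t < t_end:
        n -= dt * n ** (3.0 - nu)
        t += dt
    return n

# nu = 1: exact solution N(t) = (1 + t)**-1, a power law t**(-1/(2-nu)).
assert abs(decay(1.0, 5.0) - 1.0 / 6.0) < 1e-3
# nu = 2: exact solution N(t) = exp(-t), the border-line exponential decay.
assert abs(decay(2.0, 5.0) - math.exp(-5.0)) < 1e-3
```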


In greedy exchange, when two individuals meet, the richer person takes one unit of capital from the poorer person, as represented by the reaction scheme (j, k) → (j + 1, k − 1) for j ≥ k. In the rate equation approximation, the densities ck(t) now evolve according to

dck/dt = ck-1∑j=1k-1cj + ck+1∑j=k+1∞cj − ckN − ck² —– (1)

The first two terms account for the gain in ck(t) due to the interaction between pairs of individuals of capitals (j, k − 1) with j ≤ k − 1, and (j, k + 1) with j ≥ k + 1, respectively. The last two terms correspondingly account for the loss of ck(t). One can check that the wealth density M1 ≡ ∑k=1∞ kck(t) is conserved, and that the population density obeys

dN/dt = -c1N —– (2)

Equations (1) are conceptually similar to the Smoluchowski equations for aggregation with a constant reaction rate. Mathematically, however, they appear to be more complex, and we have been unable to solve them analytically. Fortunately, equation (1) is amenable to a scaling solution. For this purpose, we first re-write equation (1) as

dck/dt = −ck(ck + ck+1) + N(ck-1 − ck) + (ck+1 − ck-1)∑j=k∞cj —– (3)

Taking the continuum limit and substituting the scaling ansatz,

ck(t) ≅ N²C(x), with x = kN —– (4)

transforms equations (2) and (3) to

dN/dt = −C(0)N³ —– (5)


C(0)[2C + xC′] = 2C² + C′[1 − 2∫x∞dyC(y)] —– (6)

where C′ = dC/dx. Note also that the scaling function must obey the integral relations

∫0∞dxC(x) = 1 and ∫0∞dx xC(x) = 1 —– (7)

The former follows from the definition of density, N = ∑kck(t) ≅ N∫dxC(x), while the latter follows if we set, without loss of generality, the conserved wealth density equal to unity, ∑k kck(t) = 1.

Introducing B(x) = ∫0x dyC(y) recasts equation (6) into C(0)[2B′ + xB′′] = 2B′² + B′′[2B − 1]. Integrating twice gives [C(0)x − B][B − 1] = 0, with solution B(x) = C(0)x for x < xf and B(x) = 1 for x ≥ xf, from which we conclude that the scaled wealth distribution C(x) = B′(x) coincides with the zero-temperature Fermi distribution:

C(x) = C(0), for x < xf

= 0, for x ≥ xf —– (8)

Hence the scaled profile has a sharp front at x = xf, with xf = 1/C(0) found by matching the two branches of the solution for B(x). Making use of the second integral relation, equation (7), gives C(0) = 1/2 and thereby closes the solution. Thus, the unscaled wealth distribution ck(t) reads,

ck(t) = 1/(2t), for k < 2√t

= 0, for k ≥ 2√t —– (9)

and the total density is N(t) = t−1/2.


Figure: Simulation results for the wealth distribution in greedy additive exchange based on 2500 configurations for 10⁶ traders. Shown are the scaled distributions C(x) versus x = kN for t = 1.5ⁿ, with n = 18, 24, 30, and 36; these steepen with increasing time. Each data set has been averaged over a range of ≈ 3% of the data points to reduce fluctuations.

These predictions are compared with numerical simulations in the figure. In the simulation, two individuals are randomly chosen to undergo greedy exchange and this process is repeated. When an individual reaches zero capital he is eliminated from the system, and the number of active traders is reduced by one. After each reaction, the time is incremented by the inverse of the number of active traders. While the mean-field predictions are substantially corroborated, the scaled wealth distribution for finite time actually resembles a finite-temperature Fermi distribution. As time increases, the wealth distribution becomes sharper and approaches equation (9). In analogy with the Fermi distribution, the relative width of the front may be viewed as an effective temperature. Thus the wealth distribution is characterized by two scales: one of order √t characterizes the typical wealth of active traders, and a second, smaller scale characterizes the width of the front.
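A minimal version of this simulation can be sketched as follows (Python; the population size, stopping density, random seed and acceptance bounds are our own choices). It checks wealth conservation and two features of the Fermi-like profile: roughly half of the active traders sit below the midpoint k = 1/N, and the richest trader sits near the front kf = 2/N:

```python
import random

def greedy_simulation(m0=5_000, n_stop=0.1, seed=7):
    """Greedy exchange (j, k) -> (j + 1, k - 1) for j >= k.

    Runs until the density of active traders falls to n_stop and
    returns the list of surviving capitals.
    """
    random.seed(seed)
    cap = [1] * m0                       # everyone starts with one unit
    while len(cap) > n_stop * m0:
        i, j = random.randrange(len(cap)), random.randrange(len(cap))
        if i == j:
            continue                     # need two distinct traders
        rich, poor = (i, j) if cap[i] >= cap[j] else (j, i)
        cap[rich] += 1                   # richer takes one unit from poorer
        cap[poor] -= 1
        if cap[poor] == 0:               # economic death: swap-remove
            cap[poor] = cap[-1]
            cap.pop()
    return cap

cap = greedy_simulation()
n = len(cap) / 5_000                     # density of active traders (~0.1)
assert sum(cap) == 5_000                 # total wealth is conserved exactly
# Flat Fermi-like profile on [0, 2/N]: about half the survivors lie below 1/N,
frac_below = sum(1 for c in cap if c <= 1.0 / n) / len(cap)
assert 0.35 < frac_below < 0.65
# and the sharp (but finite-time smeared) front keeps the richest near 2/N.
assert 1.2 / n < max(cap) < 4.0 / n
```

Stopping on the density N rather than on time keeps the check independent of how the simulation clock is normalized; the generous bounds allow for the finite-temperature smearing of the front discussed above.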

To quantify the spreading of the front, let us include the next corrections in the continuum limit of the rate equations, equation (3). This gives,

∂c/∂t = 2∂/∂k[c∫k∞djc(j)] − c∂c/∂k − N∂c/∂k + (N/2)∂²c/∂k² —– (10)

Here, the second and fourth terms on the RHS are the second-order corrections. Since the convective third term determines the location of the front to be at kf = 2√t, it is natural to expect that the diffusive fourth term describes the spreading of the front. The term c∂c/∂k turns out to be negligible in comparison to the diffusive spreading term and is henceforth neglected. The dominant convective term can be removed by transforming to a frame of reference which moves with the front, namely k → K = k − 2√t. Among the remaining terms in the transformed rate equation, the width of the front region W can now be determined by demanding that the diffusion term has the same order of magnitude as the reactive terms, i.e. N∂²c/∂K² ∼ c². This implies W ∼ √(N/c). Combining this with N = t−1/2 and c ∼ t−1 gives W ∼ t1/4, or a relative width w = W/kf ∼ t−1/4. This suggests that the appropriate scaling ansatz for the front region is

ck(t) = (1/t)X(ξ), ξ = (k − 2√t)/t1/4 —– (11)

Substituting this ansatz into equation (10) gives a non-linear single-variable integro-differential equation for the scaling function X(ξ). Together with the appropriate boundary conditions, this represents, in principle, a more complete solution to the wealth distribution. However, the essential scaling behavior of the finite-time spreading of the front is already described by equation (11), so that solving for X(ξ) itself does not provide additional scaling information. Numerical analysis gives w ∼ t−α with α ≅ 1/5, slightly different from the predicted exponent 1/4. We attribute this discrepancy to the fact that w is obtained by differentiating C(x), an operation which generally leads to an increase in numerical errors.

Fortune of the Individuals Restricted to Integers: Random Economic Exchange Between Populations of Traders.


Consider a population of traders, each of which possesses a certain amount of capital which is assumed to be quantized in units of minimal capital. Taking this latter quantity as the basic unit, the fortune of an individual is restricted to the integers. The wealth of the population evolves by the repeated interaction of random pairs of traders. In each interaction, one unit of capital is transferred between the trading partners. To complete the description, we specify that if a poorest individual (one with a single unit of capital) suffers a loss, the bankrupt individual is considered to be economically dead and no longer participates in economic activity.

In the following, we consider a specific realization of additive capital exchange, the “random” exchange, where the direction of the capital exchange is independent of the relative capital of the traders. While this rule has little economic basis, the model is completely soluble and thus provides a helpful pedagogical point.

In a random exchange, one unit of capital is exchanged between trading partners, as represented by the reaction scheme (j, k) → (j ± 1, k ∓ 1). Let ck(t) be the density of individuals with capital k. Within a mean-field description, ck(t) evolves according to

dck(t)/dt = N(t) [ck+1(t) + ck-1(t) – 2ck(t)] —– (1)

with N(t) ≡ M0(t) = ∑k=1∞ck(t), the population density. The first two terms account for gain in ck(t) due to the interactions (j, k + 1) → (j + 1, k) and (j, k − 1) → (j − 1, k), respectively, while the last term accounts for the loss in ck(t) due to the interactions (j, k) → (j ± 1, k ∓ 1).

By defining a modified time variable,

T = ∫0t dt′N(t′) —– (2)

equation (1) is reduced to the discrete diffusion equation

dck(T)/dT = ck+1(T) + ck-1(T) – 2ck(T) —– (3)

The rate equation for the poorest density has the slightly different form, dc1/dT = c2 − 2c1, but may be written in the same form as equation (3) if we impose the boundary condition c0(T) = 0.

For illustrative purposes, let us assume that initially all individuals have one unit of capital, ck(0) = δk1. The solution to equation (3) subject to these initial and boundary conditions is

ck(T) = e−2T [Ik−1(2T) − Ik+1(2T)] —– (4)

where In denotes the modified Bessel function of order n. Consequently, the total density N(T) is

N(T) = e−2T [I0(2T) + I1(2T)] —– (5)
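These closed forms can be sanity-checked numerically (a Python sketch using a power-series evaluation of In; the truncation orders and the value of T are our own choices): the solution (4) keeps the wealth density ∑k kck(T) at its initial value 1, and summing ck over k reproduces the total density (5):

```python
import math

def bessel_i(n, x, terms=120):
    """Modified Bessel function I_n(x) from its power series."""
    half = x / 2.0
    term = half ** n / math.factorial(n)  # m = 0 term of the series
    total = term
    for m in range(1, terms):
        term *= half * half / (m * (m + n))
        total += term
    return total

def c_k(k, T):
    """Eq. (4): c_k(T) = e^(-2T) [I_{k-1}(2T) - I_{k+1}(2T)]."""
    return math.exp(-2 * T) * (bessel_i(k - 1, 2 * T) - bessel_i(k + 1, 2 * T))

T = 4.0
ks = range(1, 80)                        # tail beyond k ~ 80 is negligible here
# Wealth density sum_k k c_k(T) stays equal to its initial value 1 ...
assert abs(sum(k * c_k(k, T) for k in ks) - 1.0) < 1e-8
# ... and sum_k c_k(T) reproduces N(T) of eq. (5).
n_of_T = math.exp(-2 * T) * (bessel_i(0, 2 * T) + bessel_i(1, 2 * T))
assert abs(sum(c_k(k, T) for k in ks) - n_of_T) < 1e-8
```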

To re-express this exact solution in terms of the physical time t, we first invert equation (2) to obtain t(T) = ∫0T dT′/N(T′), and then eliminate T in favor of t in the solution for ck(T). For simplicity and concreteness, let us consider the long-time limit. From equation (4),

ck(T) ≅ k/√(4πT³) exp(−k²/4T) —– (6)

and from equation (5),

N(T) ≅ (πT)−1/2 —– (7)

Equation (7) also implies t ≅ (2/3)√(πT³), which gives

N(t) ≅ [2/(3πt)]1/3 —– (8)


ck(t) ≅ [k/(3t)] exp[−(π/144)1/3k²/t2/3] —– (9)

Note that this latter expression may be written in the scaling form ck(t) ∝ N²xe−x², with the scaling variable x ∝ kN. One can also confirm that the scaling solution represents the basin of attraction for almost all exact solutions. Indeed, for any initial condition with ck(0) decaying faster than k−2, the system reaches the scaling limit ck(t) ∝ N²xe−x². On the other hand, if ck(0) ∼ k−1−α, with 0 < α < 1, such an initial state converges to an alternative scaling limit which depends on α. These solutions exhibit a slower decay of the total density, N ∼ t−α/(1+α), while the scaling form of the wealth distribution is

ck(t) ∼ N2/αCα(x), x ∝ kN1/α —– (10)

with the scaling function

Cα(x) = e−x²∫0∞du e−u² sinh(2ux)/u1+α —– (11)

Evaluating the integral by the Laplace method gives an asymptotic distribution which exhibits the same x−1−α tail as the initial distribution. This anomalous scaling in the solution to the diffusion equation is a direct consequence of the extended initial condition. This latter case is not physically relevant, however, since the extended initial distribution leads to a divergent initial wealth density.
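Closing the loop on this section, a direct Monte Carlo simulation of the random exchange process (Python sketch; the population size, sampling times, seed and tolerances are our own choices) reproduces the decay N(T) ≅ (πT)−1/2 of equation (5), advancing the modified time T by the inverse of the number of active traders after each exchange:

```python
import random

def random_exchange(m0=20_000, t_targets=(25.0, 100.0), seed=3):
    """Random exchange (j, k) -> (j ± 1, k ∓ 1) with bankruptcy at zero.

    Returns the density of active traders N at each modified time in
    t_targets, advancing T by 1/(active traders) per exchange event.
    """
    random.seed(seed)
    cap = [1] * m0                       # initial condition c_k(0) = delta_k1
    T, densities = 0.0, []
    targets = list(t_targets)
    while targets and len(cap) > 1:
        i, j = random.randrange(len(cap)), random.randrange(len(cap))
        if i == j:
            continue                     # need two distinct traders
        if random.random() < 0.5:        # direction independent of wealth
            i, j = j, i
        cap[i] += 1                      # i gains the unit, j loses it
        cap[j] -= 1
        T += 1.0 / len(cap)
        if cap[j] == 0:                  # bankrupt trader is removed
            cap[j] = cap[-1]
            cap.pop()
        while targets and T >= targets[0]:
            densities.append(len(cap) / m0)
            targets.pop(0)
    return densities

n25, n100 = random_exchange()
# Equation (5) asymptotics: N(T) ~ (pi T)^(-1/2), so N(25)/N(100) ~ 2 ...
assert 1.7 < n25 / n100 < 2.3
# ... and N(100) should sit near (100 pi)^(-1/2) ~ 0.056.
assert 0.04 < n100 < 0.08
```

Working in the modified time T rather than the physical time t sidesteps the inversion of equation (2) and tests the square-root decay directly.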

Arbitrage, or Tensors thereof…


What is an arbitrage? Basically it means “to get something from nothing”: a free lunch, after all. A stricter definition states that arbitrage is an operational opportunity to make a risk-free profit at a rate of return higher than the risk-free interest rate accrued on deposit.

The arbitrage appears in the theory when we consider the curvature of the connection. The rate of excess return for an elementary arbitrage operation (the difference between the rate of return for the operation and the risk-free interest rate) is an element of the curvature tensor calculated from the connection. This can be understood by keeping in mind that the elements of the curvature tensor are related to the difference between the results of two infinitesimal parallel transports performed in different orders. In financial terms, this means that the curvature tensor elements measure the difference in the gains accrued from two financial operations with the same initial and final points or, in other words, the gain from an arbitrage operation.

In a certain sense, the rate of excess return for an elementary arbitrage operation is an analogue of the electromagnetic field. In the absence of any uncertainty (or, in other words, in the absence of walks of prices, exchange rates and interest rates), the only state realised is the state of zero arbitrage. However, once we place uncertainty in the game, prices and rates move and some virtual arbitrage opportunities to get more than less appear. Therefore we can say that uncertainty plays the same role in the developing theory as quantization did for quantum gauge theory.

What, then, of the “matter” fields, which interact through the connection? The “matter” fields are money flow fields, which have to be gauged by the connection. Dilatations of money units (which do not change real wealth) play the role of gauge transformations; the effect of a dilatation is eliminated by a proper tuning of the connection (interest rates, exchange rates, prices and so on), exactly as the Fisher formula does for the real interest rate in the case of inflation. The invariance of real wealth under a local dilatation of money units (security splits and the like) is the gauge symmetry of the theory.

A theory may contain several types of “matter” fields, which may differ, for example, in the sign of the connection term, as positive and negative charges do in electrodynamics. On the financial stage this means different preferences among investors. An investor’s strategy is not always optimal: partly because of incomplete information at hand and the choice procedure, and partly because of the investor’s (or manager’s) internal objectives. Physics of Finance



The Left Needs the Stupid to Survive…


Social pathologies, or the social pathologist undoubtedly. Orwell developed his Newspeak dictionary in order to explain the cognitive phenomenon he observed around him with regard to those committed to the left. That’s not to say that the cognitive phenomenon cannot occur on the right, since many mass-movement ideologies are logically contradictory, and to sustain themselves their adherents must engage in mental gyrations to keep up their belief. Orwell conceived Newspeak as part of the apparatus of totalitarian control, something forced onto an unwitting and unwilling public. It never occurred to Orwell that the masses would never care as long as their animal desires were being provided for. The Party, much like Juvenal before it, recognized that the public would not much care about higher concepts such as truth or freedom as long as their bread and circuses, in the form of the cynically named Prolefeed, were supplied. In fact, trying to pry them away from such materialities in the name of ‘truth’ would likely cause them to support the existing regime. This means that a capitalist totalitarianism, with its superior ability to provide material goods, would be harder to dislodge than a socialist one.

Take for example the notion of Doublethink, the idea of keeping two mutually opposing ideas in one’s head without noticing the contradiction. Orwell saw this mode as an aberration with regard to normal thought, but never realized that for the common man it is an ordinary mode of cognition. Or take the concept of Bellyfeel, of which Orwell states,

Consider, for example, a typical sentence from a Times leading article such as “Oldthinkers unbellyfeel Ingsoc”. The shortest rendering one could make of this in Oldspeak would be: “Those whose ideas were formed before the revolution cannot have a full understanding of the principles of English socialism.” But this is not an adequate translation…only a person thoroughly grounded in Ingsoc could appreciate the full force of the word bellyfeel, which implied a blind, enthusiastic acceptance difficult to imagine today.

“Gut-instinct”, more than reason, is mass man’s mechanism of political orientation. This is why Fascism and Socialism are better understood as appeals to the gut-brain rather than as logically and empirically justified modes of political thought. Totalitarian regimes cannot rely solely on oppression for their survival; they also need a degree of cooperation amongst the population, and they bring this about by exploiting the cognitive miserliness of the average man. Orwell, just like many other left-wing intellectuals, standing just outside the proletariat as he was, never really appreciated its mindset. His fundamental misunderstanding of Newspeak lay in the rationalist fallacy: the assumption that the average man is rational when it counts, when in fact for the average man cognitive miserliness is the norm. The problem is that a lot of mainstream conservative thought is based on the same premise, which in turn undermines its own survival and helps feed the leftist beast. Any conservative who believes in the right of the cognitive miser to choose is a dead man walking. This criticism of the prole-mind is not based on any snobbery; rather it has a functional basis. Competency, not class, should be the qualification for decision-making, and thus it is no wonder the left needs the stupid to survive.

Malignant Acceleration in Tech-Finance. Some Further Rumination on Regulations. Thought of the Day 72.1


Regardless of the positive effects that HFT offers, such as reduced spreads, higher liquidity, and faster price discovery, its negative side is mostly what has caught people’s attention. Several notorious market failures and accidents in recent years all seem to be related to HFT practices. They showed how much risk HFT can involve and how huge the damage can be.

HFT heavily depends on the reliability of the trading algorithms that generate, route, and execute orders. High-frequency traders thus must ensure that these algorithms have been tested completely and thoroughly before they are deployed into the live systems of the financial markets. Any improperly tested or prematurely released algorithm may cause losses to both investors and the exchanges. Several examples demonstrate the extent of the ever-present vulnerabilities.

In August 2012, the Knight Capital Group implemented a new liquidity testing software routine into its trading system, which was running live on the NYSE. The system started making bizarre trading decisions, quadrupling the price of one company, Wizzard Software, as well as bidding up the price of much larger entities, such as General Electric. Within 45 minutes, the company lost USD 440 million. After this event and the weakening of Knight Capital’s capital base, it agreed to merge with another algorithmic trading firm, Getco, which is the biggest HFT firm in the U.S. today. This example emphasizes the importance of implementing precautions to ensure that trading algorithms cannot be mistakenly deployed.

Another example is Everbright Securities in China. In 2013, the state-owned brokerage firm Everbright Securities Co. sent more than 26,000 mistaken buy orders, worth RMB 23.4 billion (USD 3.82 billion), to the Shanghai Stock Exchange (SSE), pushing its benchmark index up 6 % in two minutes. This resulted in a trading loss of approximately RMB 194 million (USD 31.7 million). In a follow-up evaluative study, the China Securities Regulatory Commission (CSRC) found that there were significant flaws in Everbright’s information and risk management systems.

The damage caused by HFT errors is not limited to the trading firms themselves, but may also involve the stock exchanges and the stability of the related financial market. On Friday, May 18, 2012, the social network giant Facebook’s stock was issued on the NASDAQ exchange, in the most anticipated initial public offering (IPO) in the exchange’s history. The offering attracted HFT traders, very large order flows were expected, and before the IPO, NASDAQ was confident in its ability to deal with the high volume of orders. However, technology problems with the opening made a mess of the IPO.

But when the deluge of orders to buy, sell and cancel trades came, NASDAQ’s trading software began to fail under the strain. This resulted in a 30-minute delay on NASDAQ’s side, and a 17-second blackout for all stock trading at the exchange, causing further panic. Scrutiny of the problems immediately led to fines for the exchange and accusations that HFT traders bore some responsibility too. Problems persisted after opening, with many customer orders from institutional and retail buyers unfilled for hours or never filled at all, while others ended up buying more shares than they had intended. This incredible gaffe, estimated to have cost traders USD 100 million, eclipsed NASDAQ’s achievement in winning Facebook’s IPO, the third largest in U.S. history.

Another instance occurred on May 6, 2010, when U.S. financial markets were surprised by what has been referred to ever since as the “Flash Crash”. Within less than 30 minutes, the main U.S. stock markets experienced their single largest intraday price declines, with a drop of more than 5 % for many U.S.-based equity products. In addition, the Dow Jones Industrial Average (DJIA), at its lowest point that day, fell by nearly 1,000 points, although it was followed by a rapid rebound. This brief period of extreme intraday volatility demonstrated the weakness of the structure and stability of U.S. financial markets, as well as the opportunities it afforded volatility-focused HFT traders. Although a subsequent investigation by the SEC cleared high-frequency traders of directly having caused the Flash Crash, they were still blamed for exaggerating market volatility and withdrawing liquidity for many U.S.-based equities (FLASH BOYS).

Since the mid-2000s, the average trade size in the U.S. stock market had plummeted, the markets had fragmented, and the gap in time between the public view of the markets and the view of high-frequency traders had widened. The rise of high-frequency trading had also been accompanied by a rise in stock market volatility – over and above the turmoil caused by the 2008 financial crisis. The price volatility within each trading day in the U.S. stock market between 2010 and 2013 was nearly 40 percent higher than the volatility between 2004 and 2006, for instance. There were days in 2011 in which volatility was higher than in the most volatile days of the dot-com bubble. Although these different incidents have different causes, the effects were similar and some common conclusions can be drawn. The presence of algorithmic trading and HFT in the financial markets exacerbates the adverse impacts of trading-related mistakes. It may lead to extremely high market volatility and surprises about suddenly diminished liquidity. This raises concerns for regulators about the stability and health of the financial markets. With the continuous and fast development of HFT, it came to account for a larger and larger share of equity trades in the U.S. financial markets, and there was mounting evidence of disturbed market stability and of significant financial losses caused by HFT-related errors. This led the regulators to increase their attention and effort to provide the exchanges and traders with guidance on HFT practices. They also expressed concerns about high-frequency traders extracting profit at the cost of traditional investors and even manipulating the market. For instance, high-frequency traders can generate a large number of orders within microseconds to exacerbate a trend. Other types of misconduct include: ping orders, using some orders to detect other hidden orders; and quote stuffing, issuing a large number of orders to create uncertainty in the market.
HFT creates room for these kinds of market abuses, and its blazing speed and huge trade volumes make their detection difficult for regulators. Regulators have taken steps to increase their regulatory authority over HFT activities. Some of the problems that arose in the mid-2000s led to regulatory hearings in the United States Senate on dark pools, flash orders and HFT practices. Another example occurred after the Facebook IPO problem. This led the SEC to call for a limit up-limit down mechanism at the exchanges to prevent trades in individual securities from occurring outside of a specified price range so that market volatility will be under better control. These regulatory actions put stricter requirements on HFT practices, aiming to minimize the market disturbance when many fast trading orders occur within a day.