Malignant Acceleration in Tech-Finance. Some Further Rumination on Regulations. Thought of the Day 72.1


Regardless of the positive effects that HFT offers, such as reduced spreads, higher liquidity, and faster price discovery, it is mostly its negative side that has caught people’s attention. Several notorious market failures and accidents in recent years all seem to be related to HFT practices, and they showed how much risk HFT can involve and how large the damage can be.

HFT depends heavily on the reliability of the trading algorithms that generate, route, and execute orders. High-frequency traders must therefore ensure that these algorithms have been tested completely and thoroughly before they are deployed into the live systems of the financial markets. Any improperly tested or prematurely released algorithm may cause losses to both investors and the exchanges. Several examples demonstrate the extent of these ever-present vulnerabilities.

In August 2012, the Knight Capital Group deployed a new liquidity-testing software routine into its trading system, which was running live on the NYSE. The system started making bizarre trading decisions, quadrupling the price of one company, Wizzard Software, and bidding up the price of much larger entities, such as General Electric. Within 45 minutes, the company lost USD 440 million. After this event and the weakening of its capital base, Knight Capital agreed to merge with another algorithmic trading firm, Getco, the biggest HFT firm in the U.S. today. This example emphasizes the importance of firms implementing precautions to ensure that their algorithms are not mistakenly deployed.

Another example is Everbright Securities in China. In 2013, the state-owned brokerage firm Everbright Securities Co. sent more than 26,000 mistaken buy orders, worth RMB 23.4 billion (USD 3.82 billion), to the Shanghai Stock Exchange (SSE), pushing its benchmark index up 6% in two minutes. This resulted in a trading loss of approximately RMB 194 million (USD 31.7 million). In a follow-up evaluative study, the China Securities Regulatory Commission (CSRC) found significant flaws in Everbright’s information and risk management systems.

The damage caused by HFT errors is not limited to the trading firms themselves; it may also involve the stock exchanges and the stability of the wider financial market. On Friday, May 18, 2012, the stock of the social network giant Facebook was issued on the NASDAQ exchange, in what was the most anticipated initial public offering (IPO) in the exchange’s history. However, technology problems at the opening made a mess of the IPO. The offering attracted HFT traders, very large order flows were expected, and before the IPO NASDAQ was confident in its ability to deal with the high volume of orders.

But when the deluge of orders to buy, sell and cancel trades came, NASDAQ’s trading software began to fail under the strain. This resulted in a 30-minute delay on NASDAQ’s side and a 17-second blackout for all stock trading at the exchange, causing further panic. Scrutiny of the problems immediately led to fines for the exchange and accusations that HFT traders bore some responsibility too. Problems persisted after opening, with many customer orders from institutional and retail buyers unfilled for hours or never filled at all, while others ended up buying more shares than they had intended. This incredible gaffe, which some estimates say cost traders USD 100 million, eclipsed NASDAQ’s achievement in winning the Facebook listing, the third largest IPO in U.S. history.

Another instance occurred on May 6, 2010, when U.S. financial markets were surprised by what has been referred to ever since as the “Flash Crash”. Within less than 30 minutes, the main U.S. stock markets experienced their largest single-day price declines, with drops of more than 5% for many U.S.-based equity products. In addition, the Dow Jones Industrial Average (DJIA), at its lowest point that day, was down by nearly 1,000 points, although this was followed by a rapid rebound. This brief period of extreme intraday volatility demonstrated the weakness of the structure and stability of U.S. financial markets, as well as the opportunities it created for volatility-focused HFT traders. Although a subsequent SEC investigation cleared high-frequency traders of directly having caused the Flash Crash, they were still blamed for exaggerating market volatility and withdrawing liquidity from many U.S.-based equities (Flash Boys).

Since the mid-2000s, the average trade size in the U.S. stock market had plummeted, the markets had fragmented, and the gap in time between the public view of the markets and the view of high-frequency traders had widened. The rise of high-frequency trading had also been accompanied by a rise in stock market volatility, over and above the turmoil caused by the 2008 financial crisis. Intraday price volatility in the U.S. stock market between 2010 and 2013 was nearly 40 percent higher than between 2004 and 2006, for instance, and there were days in 2011 on which volatility was higher than in the most volatile days of the dot-com bubble.

Although these incidents have different causes, their effects were similar and some common conclusions can be drawn. The presence of algorithmic trading and HFT in the financial markets exacerbates the adverse impact of trading-related mistakes. It can lead to sharply higher market volatility and sudden, surprising evaporations of liquidity, which raises concerns among regulators about the stability and health of the financial markets. With the continuous and rapid development of HFT, an ever larger share of equity trades in the U.S. financial markets came from high-frequency traders, and there was mounting evidence that HFT-related errors were disturbing market stability and causing significant financial losses. This led regulators to step up their efforts to provide exchanges and traders with guidance on HFT practices. They also expressed concerns about high-frequency traders extracting profit at the cost of traditional investors and even manipulating the market. For instance, high-frequency traders can generate a large number of orders within microseconds to exacerbate a trend. Other types of misconduct include ping orders, which use probing orders to detect other hidden orders, and quote stuffing, which floods the market with orders to create uncertainty. HFT creates room for these kinds of market abuse, and its blazing speed and huge trade volumes make detection difficult for regulators.

Regulators have taken steps to increase their authority over HFT activities. Some of the problems that arose in the mid-2000s led to regulatory hearings in the United States Senate on dark pools, flash orders and HFT practices. Another example came after the Facebook IPO problem, when the SEC called for a limit up-limit down mechanism at the exchanges to prevent trades in individual securities from occurring outside a specified price range, so that market volatility would be kept under better control. These regulatory actions put stricter requirements on HFT practices, aiming to minimize market disturbance when many fast trading orders occur within a day.
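To make the banding idea concrete, here is a minimal C sketch of a limit up-limit down style check: a trade is rejected when its price falls outside a band around a reference price. The 5% band and the fixed reference price are illustrative assumptions, not the actual SEC or exchange parameters.

#include <stdio.h>

/* Illustrative limit up-limit down check. The band width and the
 * reference-price handling are simplified assumptions for this sketch. */
typedef struct {
    double reference;   /* trailing reference price */
    double band_pct;    /* allowed deviation, e.g. 0.05 for 5% */
} LuldBand;

static int luld_permits(const LuldBand *b, double trade_price)
{
    double lower = b->reference * (1.0 - b->band_pct);
    double upper = b->reference * (1.0 + b->band_pct);
    return trade_price >= lower && trade_price <= upper;
}

int main(void)
{
    LuldBand band = { 100.0, 0.05 };
    double trades[] = { 101.2, 99.1, 94.0, 106.3 };
    int n = sizeof trades / sizeof trades[0];
    for (int i = 0; i < n; i++)
        printf("trade at %.2f: %s\n", trades[i],
               luld_permits(&band, trades[i]) ? "accepted" : "halted");
    return 0;
}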

Regulating the Velocities of Dark Pools. Thought of the Day 72.0


On 22 September 2010, SEC chair Mary Schapiro signaled that US authorities were considering the introduction of regulations targeted at HFT:

…High frequency trading firms have a tremendous capacity to affect the stability and integrity of the equity markets. Currently, however, high frequency trading firms are subject to very little in the way of obligations either to protect that stability by promoting reasonable price continuity in tough times, or to refrain from exacerbating price volatility.

However, regulating an industry working towards moving as fast as the speed of light is no ordinary administrative task: Modern finance is undergoing a fundamental transformation. Artificial intelligence, mathematical models, and supercomputers have replaced human intelligence, human deliberation, and human execution…. Modern finance is becoming cyborg finance – an industry that is faster, larger, more complex, more global, more interconnected, and less human. C W Lin proposes a number of principles for regulating this cyborg finance industry:

  1. Update antiquated paradigms of reasonable investors and compartmentalised institutions, confront the emerging institutional realities, and recognise that the old paradigms of market governance may be ill-suited to the new finance industry;
  2. Enhance disclosure in ways that recognise the complexity and technological capacities of the new finance industry;
  3. Adopt regulations to moderate the velocities of finance, realising that as these approach the speed of light they may contain more risks than rewards for the new financial industry;
  4. Introduce smarter coordination, harmonising financial regulation beyond traditional spaces of jurisdiction.

Electronic markets will require international coordination, surveillance and regulation. The high-frequency trading environment has the potential to generate errors and losses at a speed and magnitude far greater than in a floor or screen-based trading environment… Moreover, issues related to risk management of these technology-dependent trading systems are numerous and complex and cannot be addressed in isolation within domestic financial markets. For example, placing limits on high-frequency algorithmic trading or restricting unfiltered sponsored access and co-location within one jurisdiction might only drive trading firms to another jurisdiction where controls are less stringent.

In these regulatory endeavours it will be vital to remember that not all innovation is intrinsically good, that some of it may be inherently dangerous, and that the objective is to make a more efficient and equitable financial system, not simply a faster one: despite its fast computers and credit derivatives, the current financial system does not seem better at transferring funds from savers to borrowers than the financial system of 1910. Furthermore, as Thomas Piketty’s Capital in the Twenty-First Century amply demonstrates, any thought of a democratisation of finance induced by the huge expansion of superannuation funds, together with the increased access to finance afforded by credit cards and ATMs, is something of a fantasy, since levels of structural inequality have endured through these technological transformations. The tragedy is that under the guise of technological advance and sophistication we could be destroying the capacity of financial markets to fulfil their essential purpose, as Haldane eloquently states:

An efficient capital market transfers savings today into investment tomorrow and growth the day after. In that way, it boosts welfare. Short-termism in capital markets could interrupt this transfer. If promised returns the day after tomorrow fail to induce saving today, there will be no investment tomorrow. If so, long-term growth and welfare would be the casualty.

Momentum of Accelerated Capital. Note Quote.


Distinct types of high-frequency trading firms include independent proprietary firms, which use private funds and specific strategies that remain secretive, and which may act as market makers generating automatic buy and sell orders continuously throughout the day. Broker-dealer proprietary desks are part of traditional broker-dealer firms but are unrelated to their client business, and are operated by the largest investment banks. Thirdly, hedge funds focus on complex statistical arbitrage, taking advantage of pricing inefficiencies between asset classes and securities.

Today strategies using algorithmic trading and High Frequency Trading play a central role on financial exchanges, alternative markets, and banks’ internalized (over-the-counter) dealings:

High frequency traders typically act in a proprietary capacity, making use of a number of strategies and generating a very large number of trades every single day. They leverage technology and algorithms from end-to-end of the investment chain – from market data analysis and the operation of a specific trading strategy to the generation, routing, and execution of orders and trades. What differentiates HFT from algorithmic trading is the high frequency turnover of positions as well as its implicit reliance on ultra-low latency connection and speed of the system.

The use of algorithms in computerised exchange trading has experienced a long evolution with the increasing digitalisation of exchanges:

Over time, algorithms have continuously evolved: while initial first-generation algorithms – fairly simple in their goals and logic – were pure trade execution algos, second-generation algorithms – strategy implementation algos – have become much more sophisticated and are typically used to produce own trading signals which are then executed by trade execution algos. Third-generation algorithms include intelligent logic that learns from market activity and adjusts the trading strategy of the order based on what the algorithm perceives is happening in the market. HFT is not a strategy per se, but rather a technologically more advanced method of implementing particular trading strategies. The objective of HFT strategies is to seek to benefit from market liquidity imbalances or other short-term pricing inefficiencies.
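As a rough illustration of that layering (a hypothetical example, not drawn from any actual trading system), the C sketch below separates a signal-producing “strategy algo” from the “execution algo” that turns signals into orders; the moving-average crossover rule is a placeholder, not a real HFT strategy.

#include <stdio.h>

#define WINDOW 5

/* "strategy algo" building block: a simple trailing average over n points */
static double moving_avg(const double *px, int end, int n)
{
    double s = 0.0;
    for (int i = end - n + 1; i <= end; i++)
        s += px[i];
    return s / n;
}

/* "execution algo": in a real system this would slice and route the order */
static void execute_order(int side, double price)
{
    printf("%s 100 shares at %.2f\n", side > 0 ? "BUY" : "SELL", price);
}

int main(void)
{
    double px[] = { 10.0, 10.1, 10.2, 10.1, 10.3, 10.6, 10.5, 10.2, 9.9, 9.7 };
    int n = sizeof px / sizeof px[0];
    for (int t = WINDOW - 1; t < n; t++) {
        double fast = moving_avg(px, t, 3);       /* short window */
        double slow = moving_avg(px, t, WINDOW);  /* long window */
        if (fast > slow)                          /* strategy signal */
            execute_order(+1, px[t]);
        else if (fast < slow)
            execute_order(-1, px[t]);
    }
    return 0;
}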

While algorithms are employed by most traders in contemporary markets, the intense focus on speed and the momentary holding periods are unique to high-frequency traders. The defence of high-frequency trading is built around the principles that it increases liquidity, narrows spreads, and improves market efficiency: the high number of trades made by HFT traders results in greater liquidity in the market; algorithmic trading has resulted in the prices of securities being updated more quickly, with more competitive bid-ask prices and narrowing spreads; and HFT enables prices to reflect information more quickly and accurately, ensuring accurate pricing at smaller time intervals. But there are critical differences between high-frequency traders and traditional market makers:

  1. High-frequency traders do not have an affirmative market-making obligation; that is, they are not obliged to provide liquidity by constantly displaying two-sided quotes, which may translate into a lack of liquidity during volatile conditions.
  2. HFT contributes little market depth due to the marginal size of its quotes, which may force larger orders to transact against many small orders and may thus impact overall transaction costs.
  3. HFT quotes are barely accessible due to the extremely short duration for which the liquidity is available, as orders are cancelled within milliseconds.

Besides the shallowness of the HFT contribution to liquidity, there are real fears of how HFT can compound and magnify risk through the rapidity of its actions:

There is evidence that high-frequency algorithmic trading also has some positive benefits for investors by narrowing spreads – the difference between the price at which a buyer is willing to purchase a financial instrument and the price at which a seller is willing to sell it – and by increasing liquidity at each decimal point. However, a major issue for regulators and policymakers is the extent to which high-frequency trading, unfiltered sponsored access, and co-location amplify risks, including systemic risk, by increasing the speed at which trading errors or fraudulent trades can occur.

Although there have always been occasional trading errors and episodic volatility spikes in markets, the speed, automation and interconnectedness of today’s markets create a different scale of risk. These risks demand that exchanges and market participants employ effective quality management systems and sophisticated risk mitigation controls adapted to these new dynamics in order to protect against potential threats to market stability arising from technology malfunctions or episodic illiquidity. However, there are more deliberate aspects of HFT strategies which may present serious problems for market structure and functioning, and where conduct may be illegal: order anticipation, for example, seeks to ascertain the existence of large buyers or sellers in the marketplace and then to trade ahead of those buyers and sellers in anticipation that their large orders will move market prices. A momentum strategy involves initiating a series of orders and trades in an attempt to ignite a rapid price move. HFT strategies can resemble traditional forms of market manipulation that violate the Exchange Act:

  1. Spoofing and layering occur when traders create a false appearance of market activity by entering multiple non-bona fide orders on one side of the market at increasing or decreasing prices, in order to induce others to buy or sell the stock at a price altered by the bogus orders.
  2. Painting the tape involves placing successive small buy orders at increasing prices in order to stimulate increased demand.
  3. Quote stuffing and price fade are additional dubious HFT practices: quote stuffing floods the market with huge numbers of orders and cancellations in rapid succession, which may generate buying or selling interest or compromise the trading positions of other market participants; order or price fade involves the rapid cancellation of orders in response to other trades. A toy heuristic for flagging quote-stuffing-like activity is sketched after this list.
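The following minimal C sketch flags an order-flow window whose cancellation ratio is extreme; the window statistics, threshold, and minimum-activity cutoff are assumptions for illustration, and real surveillance systems use far richer features.

#include <stdio.h>

/* Toy quote-stuffing heuristic: flag a time window in which almost all
 * placed orders were cancelled. Thresholds here are illustrative. */
typedef struct { long placed; long cancelled; } WindowStats;

static int looks_like_stuffing(const WindowStats *w,
                               double max_cancel_ratio, long min_orders)
{
    if (w->placed < min_orders)
        return 0; /* too little activity to judge */
    return (double)w->cancelled / (double)w->placed > max_cancel_ratio;
}

int main(void)
{
    WindowStats quiet = { 120, 40 };      /* 33% cancelled */
    WindowStats burst = { 50000, 49600 }; /* 99.2% cancelled */
    printf("quiet window flagged: %d\n",
           looks_like_stuffing(&quiet, 0.95, 1000));
    printf("burst window flagged: %d\n",
           looks_like_stuffing(&burst, 0.95, 1000));
    return 0;
}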

The World Federation of Exchanges insists: “Exchanges are committed to protecting market stability and promoting orderly markets, and understand that a robust and resilient risk control framework adapted to today’s high speed markets is a cornerstone of enhancing investor confidence.” However, this robust and resilient risk control framework seems lacking, including in the dark pools now established for trading, which were initially proposed as safer than the open market.

Being Mediatized: How 3 Realms and 8 Dimensions Explain ‘Being’ by Peter Blank.


Experience of Reflection: ‘Self itself is an empty word’
Leary – The neuroatomic winner: “In the province of the mind, what is believed true is true, or becomes true within limits to be learned by experience and experiment.” (Dr. John Lilly)

Media theory had noted the shoring up or even annihilation of the subject due to technologies that were used to reconfigure oneself and to see oneself as what one was: pictures, screens. Depersonalization was an often observed, reflective state of being that stood for the experience of anxiety due to watching a ‘movie of one’s own life’ or experiencing a malfunction or anomaly in one’s self-awareness.

To look at one’s scaffolded media identity meant in some ways to look at the redactionary product of an extreme introspective process. Questioning what one interpreted oneself to be doing in shaping one’s media identities enhanced endogenous viewpoints and experience, similar to focusing on what made a car move instead of deciding whether it should stay on the paved road or drive across a field. This enabled the individual to see the formation of identity from the ‘engine perspective’.

Experience of the Hyperreal: ‘I am (my own) God’
Leary – The metaprogramming winner: “I make my own coincidences, synchronicities, luck, and Destiny.”

Meta-analysis of distinctions – seeing a bird fly by, then seeing oneself seeing a bird fly by, then thinking the self that thought that – becomes routine in hyperreality. Media represent the opposite: a humongous distraction from Heidegger’s goal of the search for ‘Thinking’: capturing at present the most alarming of what occupies the mind. Hyperreal experiences could not be traced back to a person’s ‘real’ identities behind their aliases. The most questionable experiences therefore related to dismantled privacy: a privacy that only existed because all the aliases together constituted a false privacy realm. There was nothing personal about the conversations, no facts that led back to any person, no real change achieved, no political influence asserted.

From there it led to the difference between networked relations and other relations, call these other relations ‘single’ relations, or relations that remained solemnly silent. They were relations that could not be disclosed against their will because they were either too vague, absent, depressing, shifty, or dangerous to make the effort worthwhile to outsiders.

The privacy of hyperreal being became the ability to hide itself from being sensed by others through channels of information (sight, touch, hearing), but also to hide more private other selves, stored away in different, more private networks from others in more open social networks.

Choosing ‘true’ privacy, then, was throwing away the distinctions one experienced between several identities. As identities were space, the meaning of time became the capacity for introspection. The hyperreal being’s overall identity to the inside, as lived history, attained an extra meaning – indeed: as alter- or hyper-ego. With Nietzsche, the physical body within its materiality occasioned a performance that subjected its own subjectivity. Then and only then could it become its own freedom.

With Foucault one could say that the body was not so much subjected but still there, functioning on its own premises. Therefore the sensory systems lived the body’s life in connection with (not separated from) a language based in a mediated faraway from the body. If language and our sensory systems were inseparable, beings and God may as well be.

Being Mediatized

OnionBots: Subverting Privacy Infrastructure for Cyber Attacks


Currently, bots are monitored and controlled by a botmaster, who issues commands. The transmission of these commands, which are known as C&C messages, can be centralized, peer-to-peer or hybrid. In the centralized architecture the bots contact the C&C servers to receive instructions from the botmaster. In this construction the message propagation speed and convergence are faster, compared to the other architectures, and it is easy to implement, maintain and monitor. However, it is limited by a single point of failure: such botnets can be disrupted by taking down or blocking access to the C&C server. Many centralized botnets use IRC or HTTP as their communication channel; GT-Bots, Agobot/Phatbot, and clickbot.a are examples of such botnets. To evade detection and mitigation, attackers developed more sophisticated techniques to dynamically change the C&C servers, such as the Domain Generation Algorithm (DGA) and fast-fluxing (single flux, double flux).

Single-fluxing is a special case of the fast-flux method. It maps multiple (hundreds or even thousands of) IP addresses to a domain name. These IP addresses are registered and de-registered at rapid speed, hence the name fast-flux. The IPs are mapped to particular domain names (e.g., DNS A records) with very short TTL values in a round-robin fashion. Double-fluxing is an evolution of the single-flux technique: it fluxes both the IP addresses of the associated fully qualified domain names (FQDNs) and the IP addresses of the responsible DNS servers (NS records). These DNS servers are then used to translate the FQDNs to their corresponding IP addresses. This technique provides an additional level of protection and redundancy. Domain Generation Algorithms (DGAs) are algorithms used to generate a list of domains for bots to contact their C&C. The large number of possible domain names makes it difficult for law enforcement to shut them down. Torpig and Conficker are famous examples of such botnets.
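A minimal sketch of the DGA idea follows; the generator and constants below are invented for illustration and deliberately do not reproduce any real botnet’s algorithm. The point is that bot and botmaster run the same seeded pseudo-random generator, so both can compute the same daily list of candidate domains, of which the botmaster need only register one.

#include <stdio.h>

/* Illustrative DGA: a deterministic generator, seeded with a value both
 * bot and botmaster know (here, derived from the date), emits a daily
 * list of candidate C&C domains. */
static unsigned int next_rand(unsigned int *state)
{
    *state = *state * 1103515245u + 12345u; /* simple LCG */
    return (*state >> 16) & 0x7fff;
}

static void gen_domain(unsigned int *state, char *buf, int len)
{
    for (int i = 0; i < len; i++)
        buf[i] = (char)('a' + next_rand(state) % 26);
    buf[len] = '\0';
}

int main(void)
{
    /* shared seed, e.g. derived from the date 2015-06-09 */
    unsigned int seed = 2015u * 10000u + 6u * 100u + 9u;
    char name[16];
    for (int i = 0; i < 5; i++) {
        gen_domain(&seed, name, 12);
        printf("candidate C&C domain: %s.net\n", name);
    }
    return 0;
}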

A significant amount of research focuses on the detection of malicious activities from the network perspective, since the traffic is not anonymized. BotFinder uses the high-level properties of the bot’s network traffic and employs machine learning to identify the key features of C&C communications. DISCLOSURE uses features from NetFlow data (e.g., flow sizes, client access patterns, and temporal behavior) to distinguish C&C channels.
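As a toy illustration of the kind of NetFlow-derived features such detectors consume (the feature set and records below are assumptions for illustration, not DISCLOSURE’s actual feature extraction), consider mean flow size and the regularity of inter-flow gaps: C&C beacons tend to be small and periodic.

#include <stdio.h>
#include <math.h>

/* Compute two candidate features from flow records: mean bytes per flow
 * and the standard deviation of inter-flow gaps (low = periodic). */
typedef struct { double start; double bytes; } FlowRec;

static void flow_features(const FlowRec *f, int n,
                          double *mean_bytes, double *gap_stddev)
{
    double sum = 0.0, gsum = 0.0, gsq = 0.0;
    for (int i = 0; i < n; i++)
        sum += f[i].bytes;
    *mean_bytes = sum / n;
    for (int i = 1; i < n; i++) {
        double gap = f[i].start - f[i - 1].start;
        gsum += gap;
        gsq += gap * gap;
    }
    double gmean = gsum / (n - 1);
    *gap_stddev = sqrt(gsq / (n - 1) - gmean * gmean);
}

int main(void)
{
    /* hypothetical beacon-like flows: small, evenly spaced */
    FlowRec flows[] = { {0, 420}, {60, 410}, {120, 415}, {180, 425} };
    double mean_bytes, gap_stddev;
    flow_features(flows, 4, &mean_bytes, &gap_stddev);
    printf("mean bytes %.1f, gap stddev %.3f (low = periodic)\n",
           mean_bytes, gap_stddev);
    return 0;
}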

The next step in the arms race between attackers and defenders was moving from a centralized scheme to a peer-to-peer C&C. Some of these botnets use an already existing peer-to-peer protocol, while others use customized protocols. For example, earlier versions of Storm used Overnet, and newer versions use a customized version of Overnet called Stormnet. Meanwhile, other botnets, such as Walowdac and Gameover Zeus, organize their communication channels in different layers… (OnionBots: Subverting Privacy Infrastructure for Cyber Attacks)

The Silicon Ideology


Traditional anti-fascist tactics have largely been formulated in response to 20th century fascism. Not confident that they will be sufficient to defeat neo-reactionaries. That is not to say they will not be useful; merely insufficient. Neo-reactionaries must be fought on their own ground (the internet), and with their own tactics: doxxing especially, which has been shown to be effective at threatening the alt-right. Information must be spread about neo-reactionaries, such that they lose opportunities to accumulate capital and social capital….

…Transhumanism, for many, seems to be the part of neo-reactionary ideology that “sticks out” from the rest. Indeed, some wonder how neo-reactionaries and transhumanists would ever mix, and why I am discussing LessWrong in the context of neo-reactionary beliefs. As to the last question: LessWrong served as a convenient “incubation centre”, so to speak, in which neo-reactionary ideas could develop and spread for many years, and the goal of LessWrong, a friendly super-intelligent AI ruling humanity for its own good, was fundamentally compatible with existing neo-reactionary ideology, which had already begun developing a futurist orientation in its infancy due, in part, to its historical and cultural influences. The rest of the question, however, is not just historical but theoretical: what is transhumanism, and why does it mix well with reactionary ideology?…

…In the words of Moldbug:

A startup is basically structured as a monarchy. We don’t call it that, of course. That would seem weirdly outdated, and anything that’s not democracy makes people uncomfortable. We are biased toward the democratic-republican side of the spectrum. That’s what we’re used to from civics classes. But, the truth is that startups and founders lean toward the dictatorial side because that structure works better for startups.

He doesn’t, of course, claim that this would be a good way to rule a country, but that is the clear message sent by his political projects. Balaji Srinivasan made a similar rhetorical move, using clear neo-reactionary ideas without mentioning their sources, in a speech to a “startup school” affiliated with Y Combinator:

We want to show what a society run by Silicon Valley would look like. That’s where “exit” comes in . . . . It basically means: build an opt-in society, ultimately outside the US, run by technology. And this is actually where the Valley is going. This is where we’re going over the next ten years . . . [Google co-founder] Larry Page, for example, wants to set aside a part of the world for unregulated experimentation. That’s carefully phrased. He’s not saying, “take away the laws in the U.S.” If you like your country, you can keep it. Same with Marc Andreessen: “The world is going to see an explosion of countries in the years ahead—doubled, tripled, quadrupled countries.”

Well, that’s the-silicon-ideology through.

 

Duqu 2.0


unsigned int __fastcall xor_sub_10012F6D(int encrstr, int a2)
{
  unsigned int result; // eax@2
  int v3;              // ecx@4

  if ( encrstr )
  {
    result = *(_DWORD *)encrstr ^ 0x86F186F1;
    *(_DWORD *)a2 = result;
    if ( (_WORD)result )
    {
      v3 = encrstr - a2;
      do
      {
        if ( !*(_WORD *)(a2 + 2) )
          break;
        a2 += 4;
        result = *(_DWORD *)(v3 + a2) ^ 0x86F186F1;
        *(_DWORD *)a2 = result;
      }
      while ( (_WORD)result );
    }
  }
  else
  {
    result = 0;
    *(_WORD *)a2 = 0;
  }
  return result;
}

A closer look at the above C code reveals that the string decryptor routine actually has two parameters: “encrstr” and “a2”. First, the decryptor function checks whether the input buffer (the pointer to the encrypted string) points to a valid memory area (i.e., it does not contain a NULL value). After that, the first 4 bytes of the encrypted string buffer are XORed with the key 0x86F186F1 and the result of the XOR operation is stored in the variable “result”. The first DWORD (first 4 bytes) of the output buffer a2 is then populated with this resulting value (*(_DWORD *)a2 = result;). Therefore, the first 4 bytes of the output buffer will contain the first 4 bytes of the cleartext string.

If the first two bytes (the first WORD) of the current value stored in “result” contain ‘\0’ characters, the original cleartext string was an empty string and the output buffer will be populated with a zero value stored on 2 bytes. If the first half of the decrypted block (the “result” variable) contains something else, the decryptor routine checks the second half of the block (if ( !*(_WORD *)(a2 + 2) )). If this WORD value is NULL, the decryption ends and the output buffer will contain only one Unicode character with two closing ‘\0’ bytes.

If the first decrypted block doesn’t contain a zero character (generally this is the case), the decryption cycle continues with the next 4-byte encrypted block. The pointer into the output buffer is incremented by 4 bytes to be able to store the next cleartext block (a2 += 4;). After that, the following 4-byte block of the ciphertext is decrypted with the fixed decryption key (0x86F186F1). The result is then stored within the next 4 bytes of the output buffer. At this point, the output buffer contains 2 blocks of the cleartext string.

The condition of the cycle checks whether the decryption has reached its end by examining the first half of the current decrypted block. If it has not reached the end, the cycle continues with the decryption of the next input blocks, as described above. Before the decryption of each 4-byte ciphertext block, the routine also checks the second half of the previous cleartext block to decide whether the decoded string has ended or not.
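To make the logic concrete, the following standalone C re-implementation mirrors the loop described above. It was written for this walkthrough, not extracted from the sample, and it assumes a little-endian machine and UTF-16LE strings; the helper names are ours.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Re-implementation of the Duqu 2.0 string decryption loop: each 4-byte
 * ciphertext block is XORed with the fixed key 0x86F186F1, and the loop
 * stops when either 16-bit half of a decrypted block is the UTF-16 NUL. */
static void duqu2_decrypt(const uint32_t *enc, uint32_t *out)
{
    for (;;) {
        uint32_t block = *enc++ ^ 0x86F186F1u;
        *out++ = block;
        if ((uint16_t)block == 0 || (uint16_t)(block >> 16) == 0)
            break;
    }
}

int main(void)
{
    /* "Hi!" as UTF-16LE code units plus terminator */
    const uint16_t clear[4] = { 'H', 'i', '!', 0 };
    uint32_t enc[2], dec[2];
    uint16_t w[4];

    memcpy(enc, clear, sizeof enc);  /* pack into 4-byte blocks */
    enc[0] ^= 0x86F186F1u;           /* pre-encrypt for the demo */
    enc[1] ^= 0x86F186F1u;

    duqu2_decrypt(enc, dec);

    memcpy(w, dec, sizeof w);
    printf("decrypted: %c%c%c\n", w[0], w[1], w[2]);
    return 0;
}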

The original Duqu used a very similar string decryption routine, which is shown below. We can see that this routine is essentially an exact copy of the previously discussed routine (the variable “a1” is analogous to the “encrstr” argument). The only difference between the Duqu 2.0 (duqu2) and Duqu string decryptor routines is that the XOR keys differ (in Duqu, the key is 0xB31FB31F).

We can also see that the decompiled code of Duqu contains the decryptor routine in a more compact form (within a “for” loop instead of a “while”), but the two routines are essentially the same. For example, the two boundary checks in the Duqu 2.0 routine (if ( !*(_WORD *)(a2 + 2) ) and while ( (_WORD)result );) are analogous to the boundary check at the end of the “for” loop in the Duqu routine (if ( !(_WORD)v4 || !*(_WORD *)(result + 2) )). Similarly, the increment operation in the head of the for loop in the Duqu sample (result += 4) is analogous to the increment operation a2 += 4; in the Duqu 2.0 sample.

int __cdecl b31f_decryptor_100020E7(int a1, int a2)
{
  _DWORD *v2;      // edx@1
  int result;      // eax@2
  unsigned int v4; // edi@6

  v2 = (_DWORD *)a1;
  if ( a1 )
  {
    for ( result = a2; ; result += 4 )
    {
      v4 = *v2 ^ 0xB31FB31F;
      *(_DWORD *)result = v4;
      if ( !(_WORD)v4 || !*(_WORD *)(result + 2) )
        break;
      ++v2;
    }
  }
  else
  {
    result = 0;
    *(_WORD *)a2 = 0;
  }
  return result;
}