The Silicon Ideology


Traditional anti-fascist tactics have largely been formulated in response to 20th century fascism. I am not confident that they will be sufficient to defeat neo-reactionaries. That is not to say they will not be useful; merely that they will be insufficient. Neo-reactionaries must be fought on their own ground (the internet), and with their own tactics: doxxing especially, which has been shown to be effective at threatening the alt-right. Information must be spread about neo-reactionaries, such that they lose opportunities to accumulate capital and social capital…

…Transhumanism, for many, seems to be the part of neo-reactionary ideology that “sticks out” from the rest. Indeed, some wonder how neo-reactionaries and transhumanists would ever mix, and why I am discussing LessWrong in the context of neo-reactionary beliefs. As to the last question: LessWrong served as a convenient “incubation centre”, so to speak, where neo-reactionary ideas could develop and spread for many years, and the goal of LessWrong (a friendly super-intelligent AI ruling humanity for its own good) was fundamentally compatible with existing neo-reactionary ideology, which had already begun developing a futurist orientation in its infancy due, in part, to its historical and cultural influences. The rest of the question, however, is not just historical but theoretical: what is transhumanism, and why does it mix so well with reactionary ideology?…

…In the words of Moldbug:

A startup is basically structured as a monarchy. We don’t call it that, of course. That would seem weirdly outdated, and anything that’s not democracy makes people uncomfortable. We are biased toward the democratic-republican side of the spectrum. That’s what we’re used to from civics classes. But, the truth is that startups and founders lean toward the dictatorial side because that structure works better for startups.

He doesn’t, of course, claim that this would be a good way to rule a country, but that is the clear message sent by his political projects. Balaji Srinivasan made a similar rhetorical move, using clear neo-reactionary ideas without mentioning their sources, in a speech to a “startup school” affiliated with Y Combinator:

We want to show what a society run by Silicon Valley would look like. That’s where “exit” comes in . . . . It basically means: build an opt-in society, ultimately outside the US, run by technology. And this is actually where the Valley is going. This is where we’re going over the next ten years . . . [Google co-founder] Larry Page, for example, wants to set aside a part of the world for unregulated experimentation. That’s carefully phrased. He’s not saying, “take away the laws in the U.S.” If you like your country, you can keep it. Same with Marc Andreessen: “The world is going to see an explosion of countries in the years ahead—doubled, tripled, quadrupled countries.”

Well, that’s “The Silicon Ideology” through.


Duqu 2.0


unsigned int __fastcall xor_sub_10012F6D(int encrstr, int a2)
{
  unsigned int result; // eax@2
  int v3;              // ecx@4

  if ( encrstr )
  {
    result = *(_DWORD *)encrstr ^ 0x86F186F1;
    *(_DWORD *)a2 = result;
    if ( (_WORD)result )
    {
      v3 = encrstr - a2;
      do
      {
        if ( !*(_WORD *)(a2 + 2) )
          break;
        a2 += 4;
        result = *(_DWORD *)(v3 + a2) ^ 0x86F186F1;
        *(_DWORD *)a2 = result;
      }
      while ( (_WORD)result );
    }
  }
  else
  {
    result = 0;
    *(_WORD *)a2 = 0;
  }
  return result;
}

A closer look at the above C code reveals that the string decryptor routine has two parameters: “encrstr” and “a2”. First, the decryptor function checks whether the input buffer (the pointer to the encrypted string) points to a valid memory area (i.e., it is not NULL). After that, the first 4 bytes of the encrypted string buffer are XORed with the key “0x86F186F1” and the result of the XOR operation is stored in the variable “result”. The first DWORD (first 4 bytes) of the output buffer a2 is then populated with this resulting value (*(_DWORD *)a2 = result;). Therefore, the first 4 bytes of the output buffer will contain the first 4 bytes of the cleartext string.

If the first two bytes (first WORD) of the current value stored in the variable “result” contain ‘\0’ characters, the original cleartext string was an empty string, and the resulting output buffer will be populated with a zero value stored in 2 bytes. If the first half of the decrypted block (the “result” variable) contains something else, the decryptor routine checks the second half of the block (“if ( !*(_WORD *)(a2 + 2) )”). If this WORD value is NULL, then decryption ends and the output buffer will contain only one Unicode character followed by two closing ‘\0’ bytes.

If the first decrypted block doesn’t contain a zero character (generally the case), the decryption cycle continues with the next 4-byte encrypted block. The pointer into the output buffer is incremented by 4 bytes to be able to store the next cleartext block (“a2 += 4;”). After that, the following 4-byte block of the “ciphertext” is decrypted with the fixed decryption key (“0x86F186F1”). The result is then stored within the next 4 bytes of the output buffer. At this point, the output buffer contains 2 blocks of the cleartext string.

The condition of the cycle checks whether the decryption has reached its end by examining the first half of the current decrypted block. If it has not reached the end, the cycle continues with the decryption of the next input blocks, as described above. Before the decryption of each 4-byte “ciphertext” block, the routine also checks the second half of the previous cleartext block to decide whether the decoded string has ended.
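The behaviour described above can be restated as a small, portable sketch. This is my own reconstruction, not code from the sample: the function name, the explicit length return, and the use of memcpy for unaligned reads are assumptions; only the 0x86F186F1 key and the block-wise XOR-and-check logic come from the routine above.

```c
#include <stdint.h>
#include <string.h>

/* Duqu 2.0-style string decryption: XOR each 4-byte block of the
 * ciphertext with a fixed key, and stop once a decrypted block carries
 * a UTF-16 L'\0' terminator in either of its two 16-bit halves.
 * Returns the number of bytes written to out, terminator included. */
static size_t xor_string_decrypt(const uint8_t *enc, uint8_t *out, uint32_t key)
{
    size_t off = 0;
    for (;;) {
        uint32_t block;
        memcpy(&block, enc + off, 4);      /* read one ciphertext DWORD   */
        block ^= key;                      /* decrypt it                  */
        memcpy(out + off, &block, 4);      /* store the cleartext DWORD   */
        if ((uint16_t)block == 0)          /* low WORD is the terminator  */
            return off + 4;
        if ((uint16_t)(block >> 16) == 0)  /* high WORD is the terminator */
            return off + 4;
        off += 4;                          /* advance to the next block   */
    }
}
```

Because XOR is its own inverse, running the same routine over cleartext with the same key yields the ciphertext, which is presumably how the embedded strings were produced in the first place.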

The original Duqu used a very similar string decryption routine, shown in the figure below. We can see that this routine is essentially a copy of the previously discussed routine (the variable “a1” is analogous to the “encrstr” argument). The only functional difference between the Duqu 2.0 (duqu2) and Duqu string decryptor routines is that the XOR keys differ (in Duqu, the key is “0xB31FB31F”).

We can also see that the decompiled code of Duqu contains the decryptor routine in a more compact manner (within a ”for” loop instead of a ”while”), but the two routines are essentially the same. For example, the two boundary checks in the Duqu 2.0 routine (”if ( !*(_WORD *)(a2 + 2) )” and ”while ( (_WORD)result );”) are analogous to the boundary check at the end of the ”for” loop in the Duqu routine (”if ( !(_WORD)v4 || !*(_WORD *)(result + 2) )”). Similarly, the increment operation within the head of the for loop in the Duqu sample (”result += 4”) is analogous to the increment operation ”a2 += 4;” in the Duqu 2.0 sample.

int __cdecl b31f_decryptor_100020E7(int a1, int a2)
{
  _DWORD *v2;      // edx@1
  int result;      // eax@2
  unsigned int v4; // edi@6

  v2 = (_DWORD *)a1;
  if ( a1 )
  {
    for ( result = a2; ; result += 4 )
    {
      v4 = *v2 ^ 0xB31FB31F;
      *(_DWORD *)result = v4;
      if ( !(_WORD)v4 || !*(_WORD *)(result + 2) )
        break;
      ++v2;
    }
  }
  else
  {
    result = 0;
    *(_WORD *)a2 = 0;
  }
  return result;
}

The Political: NRx, Neoreactionism Archived.

This one is eclectic and for the record.


The techno-commercialists appear to have largely arrived at neoreaction via right-wing libertarianism. They are defiant free marketeers, sharing with other ultra-capitalists such as Randian Objectivists a preoccupation with “efficiency,” a blind trust in the power of the free market, private property, globalism and the onward march of technology. However, they are also believers in the ideal of small states, free movement and absolute or feudal monarchies with no form of democracy. The idea of “exit,” predominantly a techno-commercialist viewpoint but found among other neoreactionaries too, essentially comes down to the idea that people should be able to freely exit their native country if they are unsatisfied with its governance; essentially an application of market economics and consumer action to statehood. Indeed, countries are often described in corporate terms, with the King being the CEO and the aristocracy the shareholders.

The “theonomists” place more emphasis on the religious dimension of neoreaction. They emphasise tradition, divine law, religion rather than race as the defining characteristic of “tribes” of peoples and traditional, patriarchal families. They are the closest group in terms of ideology to “classical” or, if you will, “palaeo-reactionaries” such as the High Tories, the Carlists and French Ultra-royalists. Often Catholic and often ultramontanist. Finally, there’s the “ethnicist” lot, who believe in racial segregation and have developed a new form of racial ideology called “Human Biodiversity” (HBD) which says people of African heritage are naturally less intelligent than people of Caucasian and east Asian heritage. Of course, the scientific community considers the idea that there are any genetic differences between human races beyond melanin levels in the skin and other cosmetic factors to be utterly false, but presumably this is because they are controlled by “The Cathedral.” They like “tribal solidarity,” tribes being defined by shared ethnicity, and distrust outsiders.


Overlap between these groups is considerable, but there are also vast differences not just between them but within them. What binds them together is common opposition to “The Cathedral” and to “progressive” ideology. Some of their criticisms of democracy and modern society are well-founded, and some of them make good points in defence of the monarchical system. However, I don’t much like them, and I doubt they’d much like me.

Whereas neoreactionaries are keen on the free market and praise capitalism, unregulated capitalism is something I am wary of. Capitalism saw the collapse of traditional monarchies in Europe in the 19th century, and the first revolutions were by capitalists seeking to establish democratic, capitalist republics where the bourgeoisie replaced the aristocratic elite as the ruling class; setting an example revolutionary socialists would later follow. Capitalism, when unregulated, leads to monopolies, exploitation of the working class, unsustainable practices in pursuit of increased short-term profits, globalisation and materialism. Personally, I prefer distributist economics, which embrace private property rights but emphasise widespread ownership of wealth, small partnerships and cooperatives replacing private corporations as the basic units of the nation’s economy. And although critical of democracy, the idea that any form of elected representation for the lower classes is anathema is not consistent with my viewpoint; my ideal government would not be absolute or feudal monarchy, but executive constitutional monarchy with a strong monarch exercising executive powers and the legislative role being at least partially controlled by an elected parliament, more like the Bourbon Restoration than the Ancien Régime, though I occasionally say “Vive l’Ancien Régime!” on forums or in comments to annoy progressive types. Finally, I don’t believe in racialism in any form. I tend to attribute preoccupations with racial superiority to deep insecurity which people find the need to suppress by convincing themselves that they are “racially superior” to others, in absence of any actual talent or especial ability to take pride in. The 20th century has shown us where dividing people up based on their genetics leads us, and it is not somewhere I care to return to.

I do think it is important to go into why Reactionaries think Cthulhu always swims left, because without that they’re vulnerable to the charge that they have no a priori reason to expect our society to have the biases it does, and then the whole meta-suspicion of the modern Inquisition doesn’t work or at least doesn’t work in that particular direction. Unfortunately (for this theory) I don’t think their explanation is all that great (though this deserves substantive treatment) and we should revert to a strong materialist prior, but of course I would say that, wouldn’t I.

And of course you could get locked up for wanting fifty Stalins! Just try saying how great Enver Hoxha was at certain places and times. Of course saying you want fifty Stalins is not actually advocating that Stalinism become more like itself – as Leibniz pointed out, a neat way of telling whether something is something is checking whether it is exactly like that thing, and nothing could possibly be more like Stalinism than Stalinism. Of course fifty Stalins is further in the direction that one Stalin is from our implied default of zero Stalins. But then from an implied default of 1.3 kSt it’s a plea for moderation among hypostalinist extremists. As Mayberry Mobmuck himself says, “sovereign is he who determines the null hypothesis.”

Speaking of Stalinism, I think it does provide plenty of evidence that policy can do wonderful things for life expectancy and so on, and I mean that in a totally unironic “hail glorious comrade Stalin!” way, not in a “ha ha Stalin sure did kill a lot of people” way. But this is a super-unintuitive claim to most people today, so I’ll try to get around to summarizing the evidence at some point.

‘Neath an eyeless sky, the inkblack sea
Moves softly, utters not save a quiet sound
A lapping-sound, not saying what may be
The reach of its voice a furthest bound;
And beyond it, nothing, nothing known
Though the wind the boat has gently blown
Unsteady on shifting and traceless ground
And quickly away from it has flown.

Allow us a map, and a lamp electric
That by instrument we may probe the dark
Unheard sounds and an unseen metric
Keep alive in us that unknown spark
To burn bright and not consume or mar
Has the unbounded one come yet so far
For night over night the days to mark
His journey — adrift, without a star?

Chaos is the substrate, and the unseen action (or non-action) against disorder, the interloper. Disorder is a mere ‘messing up order’.  Chaos is substantial where disorder is insubstantial. Chaos is the ‘quintessence’ of things, chaotic itself and yet always-begetting order. Breaking down disorder, since disorder is maladaptive. Exit is a way to induce bifurcation, to quickly reduce entropy through separation from the highly entropic system. If no immediate exit is available, Chaos will create one.

Games and Virtual Environments: Playing in the Dark. Could These be Havens for Criminal Networks?


Both British and American agencies have identified games and virtual environments, which they term “GVEs,” as havens for illegal activity. Released documents show that, because of fears that “criminal networks could use the games to communicate secretly, move money or plot attacks,” intelligence operatives have entered the video game terrain as virtual spies. While there, the spies create “make-believe characters to snoop,” “recruit informers,” and collect “data and contents of communications between players,” because features common to video games, such as “fake identities,” and “voice and text chats” provide an ideal place for criminal organizations to operate. A 2008 document released by the National Security Agency (NSA) warned that, although “[o]nline games might seem innocuous . . . they ha[ve] the potential to be a ‘target-rich communication network’ allowing intelligence suspects ‘a way to hide in plain sight.’”

Furthermore, according to the NSA, “Massively Multiplayer Online Games (MMOG) are ideal locations” for criminals “because of the enormous scale on which they are played,” featuring thousands of subscribers simultaneously using various servers hosted in a wide array of places, including on gamers’ own dedicated servers. Additionally, GVEs may often be accessed “via mobile devices connected wirelessly,” such as phones, handhelds, or laptops. Through connections to online gaming environments, these types of devices allow for an additional place where users can interact, connect, or share. These sites can be “advertised” in online games and password-protected so that they function essentially as private meeting places for criminal organizations.

Consequently, the online gaming landscape poses a unique challenge for law enforcement because it not only involves a new realm wherein criminal organizations thrive, but it also represents communications that more closely involve innocent parties and are more technically difficult to intercept. As a result, law enforcement around the world will need to make difficult decisions regarding surveillance and regulation of these types of communications.

The technical difficulties posed by in-game communications raise an especially difficult dilemma for law enforcement because they present issues in an area skirting the edge of law enforcement’s technological ability. Oftentimes, even if law enforcement agencies have the legal authority to conduct surveillance, they do not have the technical capability to monitor communications like those that take place in online games. The Federal Bureau of Investigation (FBI) labels this difficulty the “going dark” problem: intelligence-gathering officials lack the technological ability to carry out intelligence gathering as quickly as required. That problem manifests itself as an inability for prosecutors to effectively track and counteract criminal behavior on large scales, as was the case in 2009, when the Drug Enforcement Agency learned of an international drug and weapons smuggling ring with operations in North and South America, Europe, and Africa. Because the leader of that ring knew which communications lacked “intercept solutions,” much of the ring still functions today. The primary difficulty in prosecuting crimes like these relates to law enforcement’s desire to access data in real or near-real time, rather than to access stored information.

In the wake of these interests, how governments approach the regulation and surveillance of online games will greatly affect their citizens and a broad swath of the business world. It is therefore worth examining how law enforcement can effectively monitor and combat organized criminal activity that involves the use of online games.

PLAYING IN THE DARK by Mathew Ruskin

Main Stuxnet DLL: Installing Stuxnet into the Infected Machine


When the main DLL begins execution, it un-UPXes itself (the DLL is packed with UPX) and then checks the configuration data of this Stuxnet sample and examines the environment to decide whether to continue or exit from the beginning.

It checks whether the configuration data is correct and recent, and then checks for admin rights. If it’s not running at administrator level, it uses one of two zero-day vulnerabilities to escalate privileges and run at administrator level.

CVE-2010-2743 (MS-10-073) – Win32K.sys Keyboard Layout Vulnerability
CVE-xxxx-xxxx (MS-xx-xxx) – Windows Task Scheduler Vulnerability

These two vulnerabilities allow the worm to escalate privileges and run in a new process (“csrss.exe” in the Win32K.sys case) or as a new task in the Task Scheduler case.

It also makes some other checks, such as whether the machine is running 32-bit or 64-bit Windows, and so on.

Once everything checks out and the environment is ready to be infected by Stuxnet, it injects itself into another process in order to install itself from that process. The injection begins by searching for an antivirus application installed on the machine.

Depending on the antivirus application (e.g., AVP or McAfee), Stuxnet chooses the process to inject itself into. If there’s no antivirus program, it chooses “lsass.exe”….

Function #16 begins by checking the configuration data to make sure that everything is ready to begin the installation. It also checks whether there is a registry value named “NTVDM TRACE” in

SOFTWARE\Microsoft\Windows\CurrentVersion\MS-DOS Emulation

It then checks whether this value equals “19790509”. This special number appears to be a date, May 9, 1979, and this date has a historical meaning: on that day, “Habib Elghanian was executed by a firing squad in Tehran sending shock waves through the closely knit Iranian Jewish community”….
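The do-not-infect marker check described above is simple enough to sketch. This is my own illustration, not code from the sample: the function names are hypothetical, and the Windows registry read that would precede the comparison is omitted to keep the sketch self-contained; the magic value and its reading as a YYYYMMDD date come from the analysis above.

```c
#include <stdbool.h>
#include <stdint.h>

/* Stuxnet treats the "NTVDM TRACE" registry value as a do-not-infect
 * marker: if it equals 19790509, installation is aborted. */
#define DO_NOT_INFECT_MARKER 19790509u

static bool should_abort_infection(uint32_t ntvdm_trace_value)
{
    return ntvdm_trace_value == DO_NOT_INFECT_MARKER;
}

/* The marker reads as a decimal-coded date, YYYYMMDD: 1979-05-09. */
static unsigned marker_year(uint32_t m)  { return m / 10000u; }
static unsigned marker_month(uint32_t m) { return m / 100u % 100u; }
static unsigned marker_day(uint32_t m)   { return m % 100u; }
```

Markers like this double as a vaccination mechanism: writing the value by hand prevents the sample from installing itself on that machine.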


Viral Load


Even if viruses have been quarantined on a user’s system, the user is often not allowed to access the quarantined files. The ostensible reason for this high level of secrecy is the claim that open access to computer virus code would result in people writing more computer viruses – a difficult claim for an antivirus company to make given that once they themselves have a copy of a virus then machines running their antivirus software should already be protected from that virus. A more believable explanation for antivirus companies’ unwillingness to release past virus programs is that a large part of their business model is predicated upon their ability to exclusively control stockpiles of past computer virus specimens as closely guarded intellectual property.

This absence of archival material is not helped by the fact that the concept of a computer virus is itself an ontologically ambiguous category. The majority of so-called malicious software entities that have plagued Internet users in the past few years have technically not been viruses but worms. Additionally, despite attempts to define clear nosological and epidemiological categories for computer viruses and worms, there is still no consistent system for stabilizing the terms themselves, let alone assessing their relative populations. Elizabeth Grosz commented during an interview with the editors of Found Object journal that part of the reason for the ontological ambiguity of computer viruses is that they are an application of a biological metaphor that is largely indeterminate itself. According to Grosz, we are as mystified, if not more so, by biological viruses as we are by computer viruses. Perhaps we know even more about computer viruses than we do about biological viruses! The same obscurities are there at the biological level that exists at the computer level (…)

As Grosz suggests, it is no wonder that computer viruses are so ontologically uncertain, given that their biological namesakes threaten to undermine many of the binarisms that anchor modern Western technoscience, such as distinctions between organic and inorganic, dead and living, matter and form, and sexual and asexual reproduction.


Layer 7 DDoS Attacks


Layer 7 attacks are some of the most difficult attacks to mitigate because they mimic normal user behavior and are harder to identify. The application layer (per the Open Systems Interconnection model) consists of protocols that focus on process-to-process communication across an IP network and is the only layer that directly interacts with the end user. A sophisticated Layer 7 attack may target specific areas of a website, making it even more difficult to separate from normal traffic. For example, a Layer 7 DDoS attack might target a website element (e.g., company logo or page graphic) to consume resources every time it is downloaded with the intent to exhaust the server. Additionally, some attackers may use Layer 7 DDoS attacks as diversionary tactics to steal information.

Verisign’s recent trends show that DDoS attacks are becoming more sophisticated and complex, including an increase in application layer attacks. Verisign has observed that Layer 7 attacks are regularly mixed in with Layer 3 and Layer 4 DDoS flooding attacks. In fact, 35 percent of DDoS attacks mitigated in Q2 2016 utilized three or more attack types.

In a recent Layer 7 DDoS attack mitigated by Verisign, the attackers started out with NTP and SSDP reflection attacks that generated volumetric floods of UDP traffic peaking over 50 Gbps and over 5 Mpps designed to consume the target organization’s bandwidth. Verisign’s analysis shows that the attack was launched from a well-distributed botnet of more than 30,000 bots from across the globe with almost half of the attack traffic originating in the United States.

Once the attackers realized that the volumetric attack was mitigated, they progressed to Layer 7 HTTP/HTTPS attacks. Hoping to exhaust the server, the attackers flooded the target organization with a large number of HTTPS GET/POST requests using the following methods, amongst others:

  • Basic HTTP Floods: Requests for URLs with an old version of HTTP no longer used by the latest browsers or proxies
  • WordPress Floods: WordPress pingback attacks where the requests bypassed all caching by including a random number in the URL to make each request appear unique
  • Randomized HTTP Floods: Requests for random URLs that do not exist – for example, if a given URL is valid, the attackers were abusing it by requesting pages like id=12345, etc.
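One way defenders separate the randomized-URL floods described above from human traffic is to track per-client error ratios, since requests for nonexistent pages produce 404 responses at a rate no normal user does. A minimal sketch follows; the counter structure, the 50-request window, and the 90% threshold are illustrative assumptions of mine, not figures from the Verisign report:

```c
#include <stdbool.h>

/* Per-client request counters for the current observation window,
 * keyed elsewhere by source IP. */
struct client_stats {
    unsigned total;      /* requests seen from this client  */
    unsigned not_found;  /* of those, how many returned 404 */
};

/* Flag a client as a likely random-URL flood source once it has sent
 * enough requests to judge and the vast majority hit missing URLs. */
static bool looks_like_random_url_flood(const struct client_stats *c)
{
    if (c->total < 50)                         /* too small a sample */
        return false;
    return c->not_found * 10u > c->total * 9u; /* more than 90% 404s */
}
```

A real mitigation pipeline would combine a heuristic like this with rate limiting and challenge-response checks, since sophisticated bots can also request valid URLs.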

The challenge with a Layer 7 DDoS attack lies in the ability to distinguish human traffic from bot traffic, which can make it harder to defend against than volumetric attacks. As Layer 7 attacks continue to grow in complexity with ever-changing attack signatures and patterns, organizations and DDoS mitigation providers will need to have a dynamic mitigation strategy in place. Layer 7 visibility along with proactive monitoring and advanced alerting are critical to effectively defend against increasing Layer 7 threats.

As organizations develop their DDoS protection strategies, many may focus solely on solutions that can handle large network layer attacks. However, they should also consider whether the solution can detect and mitigate Layer 7 attacks, which require less bandwidth and fewer packets to achieve the same goal of bringing down a site.

For the Full Report, visit here.