Pseudo Right – Social Pathologies


Reality calibration is a prerequisite of successful action. If we define the Right as those who uphold the Truth, then the anti-Right will be those who uphold some form of error. Right politics, viewed through this definitional understanding, ceases to be an issue of Right vs Left; rather, it is an issue of right vs wrong. However, current definitions of Right and Left seem based more on intuitional approaches than on objective ones. This way of tackling the classification is rather woolly and leaves lots of room for error to creep in. Defining the Right as a sort of anti-Left makes no demands on reality calibration and does nothing to protect a Right-classified ideology from holding positions which are flat-out wrong. Intuition is not infallible, and as the Master teaches, one can fall from both the right AND the left side of the narrow path.

Indeed, when you think of the “Right” or “Conservatism”, several intuitive concepts come to mind: a respect for tradition and place, social order and homogeneity, a preference for orthodoxy, and an extremely limited tolerance for novelty, disorder and deviancy. Thinking about the Right intuitively tends to result in a list of features which are associated with the concept without actually defining it. What you end up with under this approach is not a definition but a laundry list of features which feel “Right,” and any ideology which expresses these features is classified accordingly. Pseudo-Right.


Homotopies. Thought of the Day 35.0


One of the major innovations of Homotopy Type Theory is the alternative interpretation of types and tokens it provides using ideas from homotopy theory. Homotopies can be thought of as continuous distortions between functions, or between the images of functions. Facts about homotopy theory are therefore only given ‘up to continuous distortions’, and only facts that are preserved by all such distortions are well-defined. Homotopy is usually presented by starting with topological spaces. Given two such spaces X and Y, we say that continuous maps f, g : X → Y are homotopic, written ‘f ∼ g’, just if there is a continuous map h : [0, 1] × X → Y with h(0, x) = f(x) and h(1, x) = g(x) ∀ x ∈ X. Such a map is a homotopy between f and g. For example, any two curves between the same pair of points in the Euclidean plane are homotopic to one another, because they can be continuously deformed into one another. However, in a space with a hole in it (such as an annulus) there can be paths between two points that are not homotopic, since a path going one way around the hole cannot be continuously deformed into a path going the other way around the hole.
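The Euclidean-plane example can be made concrete with a straight-line homotopy. Below is a minimal numeric sketch (the names f, g, h are illustrative, not from any library): f is the straight segment from (0, 0) to (1, 0), g is a semicircular arc between the same endpoints, and h(t, s) = (1 − t)·f(s) + t·g(s) interpolates continuously between them, with h(0, ·) = f and h(1, ·) = g.

```python
import math

# Two paths in the Euclidean plane, both from (0, 0) to (1, 0):
# f is the straight segment, g is a semicircular arc.
def f(s):
    return (s, 0.0)

def g(s):
    theta = math.pi * (1.0 - s)          # sweep the angle from pi down to 0
    return (0.5 + 0.5 * math.cos(theta), 0.5 * math.sin(theta))

# Straight-line homotopy h : [0, 1] x [0, 1] -> R^2,
# h(0, s) = f(s) and h(1, s) = g(s) for all s.
def h(t, s):
    (fx, fy), (gx, gy) = f(s), g(s)
    return ((1.0 - t) * fx + t * gx, (1.0 - t) * fy + t * gy)
```

Because the plane has no holes, the straight-line formula never leaves the space; in an annulus the same formula could pass through the hole, which is exactly why such a homotopy can fail to exist there.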

Two spaces X and Y are homotopy equivalent if there are maps f : X → Y and f′ : Y → X such that f′◦ f ∼ idX and f ◦ f′ ∼ idY. This is an equivalence relation between topological spaces, so we can define the equivalence class [X] of all topological spaces homotopy equivalent to X, called the homotopy type of X. Homotopy theory does not distinguish between spaces that are homotopy equivalent, and thus homotopy types, rather than the topological spaces themselves, are the basic objects of study in homotopy theory.
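A standard concrete instance: the annulus A = {x ∈ R² : 1 ≤ |x| ≤ 2} is homotopy equivalent to the unit circle S¹. In this sketch (function names are illustrative), f is the inclusion of the circle into the annulus and f′ is radial projection back onto the circle; f′ ∘ f is exactly the identity on S¹, while f ∘ f′ is homotopic to the identity on A via a straight-line homotopy along radii.

```python
import math

# f : S^1 -> A, the inclusion of the unit circle into the annulus.
def incl(p):
    return p

# f' : A -> S^1, radial projection onto the unit circle.
def retract(p):
    r = math.hypot(p[0], p[1])
    return (p[0] / r, p[1] / r)

# f' o f = id on S^1 exactly; f o f' ~ id on A via the straight-line
# homotopy H that slides each point along its radius to the circle:
def H(t, p):
    q = retract(p)
    return ((1.0 - t) * p[0] + t * q[0], (1.0 - t) * p[1] + t * q[1])
```

Each radial segment stays inside the annulus, so H never leaves the space; this is why the annulus and the circle share a homotopy type even though they are very different as sets.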

In the homotopy interpretation of the basic language of HoTT we interpret types as homotopy types or ‘spaces’. It is then natural to interpret tokens of a type as ‘points’ in a space. The points of a topological space have what we might call absolute identity, being elements of the underlying set. But a homotopy equivalence will in general map a given point x ∈ X to some other x′ ∈ X, and so when we work with homotopy types the absolute identity of the points is lost. Rather, we must say that a token belonging to a type is interpreted as a function from a one-point space into the space.

Given two points a and b in a space X, a path between them is a function γ : [0, 1] → X with γ(0) = a and γ(1) = b. However, given any such path, X can be smoothly distorted by retracting the path along its length toward a. Thus a space containing two distinct points and a path between them is homotopic to a space in which both points coincide (and the path is just a constant path at this point). We may therefore interpret a path between points as an identification of those points. Thus the identity type IdX(a,b) corresponds to the path space of paths from a to b. This also gives a straightforward justification for the principle of path induction: since any path is homotopic to a constant path (which corresponds to a trivial self-identification), any property (that respects homotopy) that holds of all trivial self-identifications must hold of all identifications.
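The principle of path induction described above is the J rule of intensional type theory. A hedged sketch of it in Lean 4 syntax (the name `pathInd` is illustrative; Lean's built-in equality eliminator plays the same role): to prove a property C of all identifications p : a = b, it suffices to prove it for the trivial self-identification rfl : a = a.

```lean
-- Path induction (the J rule): any property C that holds of all
-- trivial self-identifications (rfl) holds of all identifications.
def pathInd {A : Type} (C : (a b : A) → a = b → Prop)
    (c : ∀ a, C a a rfl) :
    ∀ (a b : A) (p : a = b), C a b p := by
  intro a b p
  cases p      -- collapse p to the constant path: b becomes a, p becomes rfl
  exact c a
```

The `cases p` step is precisely the homotopy-theoretic observation in the text: every path is homotopic to a constant path, so checking the property at rfl suffices.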

OnionBots: Subverting Privacy Infrastructure for Cyber Attacks


Currently, bots are monitored and controlled by a botmaster, who issues commands. The transmission of these commands, which are known as C&C messages, can be centralized, peer-to-peer or hybrid. In the centralized architecture the bots contact the C&C servers to receive instructions from the botmaster. In this construction the message propagation speed and convergence are faster, compared to the other architectures. It is easy to implement, maintain and monitor. However, it is limited by a single point of failure. Such botnets can be disrupted by taking down or blocking access to the C&C server. Many centralized botnets use IRC or HTTP as their communication channel. GT-Bots, Agobot/Phatbot, and clickbot.a are examples of such botnets. To evade detection and mitigation, attackers developed more sophisticated techniques to dynamically change the C&C servers, such as Domain Generation Algorithms (DGA) and fast-fluxing (single flux, double flux).

Single-fluxing is a special case of the fast-flux method. It maps multiple (hundreds or even thousands of) IP addresses to a domain name. These IP addresses are registered and de-registered at rapid speed, hence the name fast-flux. These IPs are mapped to particular domain names (e.g., DNS A records) with very short TTL values in a round-robin fashion. Double-fluxing is an evolution of the single-flux technique: it fluxes both the IP addresses of the associated fully qualified domain names (FQDNs) and the IP addresses of the responsible DNS servers (NS records). These DNS servers are then used to translate the FQDNs to their corresponding IP addresses. This technique provides an additional level of protection and redundancy. Domain Generation Algorithms (DGA) are the algorithms used to generate a list of domains for botnets to contact their C&C. The large number of possible domain names makes it difficult for law enforcement to shut them down. Torpig and Conficker are famous examples of these botnets.
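The general shape of a DGA can be sketched in a few lines. This is an illustrative toy, not the algorithm of Torpig, Conficker, or any real botnet: bots and botmaster share a seed, both derive the same date-dependent list of candidate domains, the botmaster registers just one of them, and defenders must predict and block the entire list to disrupt the channel.

```python
import hashlib
from datetime import date

# Toy DGA sketch (illustrative only, not any real botnet's algorithm):
# deterministically derive a daily list of candidate C&C domains from
# the current date and a seed shared between botmaster and bots.
def generate_domains(day: date, seed: str = "shared-seed", count: int = 5):
    domains = []
    for i in range(count):
        material = f"{seed}:{day.isoformat()}:{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        domains.append(digest[:12] + ".com")   # e.g. "3fa1b2c4d5e6.com"
    return domains
```

Determinism is the whole point: any bot that knows the seed regenerates the same list on the same day, so no domain ever needs to be hard-coded where defenders could find and sinkhole it.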

A significant amount of research focuses on the detection of malicious activities from the network perspective, since the traffic is not anonymized. BotFinder uses the high-level properties of the bot’s network traffic and employs machine learning to identify the key features of C&C communications. DISCLOSURE uses features from NetFlow data (e.g., flow sizes, client access patterns, and temporal behavior) to distinguish C&C channels.

The next step in the arms race between attackers and defenders was moving from a centralized scheme to a peer-to-peer C&C. Some of these botnets use an already existing peer-to-peer protocol, while others use customized protocols. For example, earlier versions of Storm used Overnet, and the newer versions use a customized version of Overnet, called Stormnet. Meanwhile, other botnets such as Walowdac and Gameover Zeus organize their communication channels in different layers…. (OnionBots: Subverting Privacy Infrastructure for Cyber Attacks)