Bacteria’s Perception-Action Circle: Materiality of the Ontological. Thought of the Day 136.0


The unicellular organism (the bacterium E. coli) has thin filaments protruding from its cell membrane, and in the absence of any stimuli it simply wanders around at random, alternating between two characteristic movement patterns. One is performed by rotating the flagella counterclockwise: they form a bundle which pushes the cell forward along a curved path, a ‘run’ of random duration. Runs alternate with ‘tumbles’, in which the flagella shift to clockwise rotation, making them work independently and moving the cell erratically about with little net displacement. The biased random walk now consists in the fact that, in the presence of a chemical attractant, the runs which happen to carry the cell closer to the attractant are extended, while runs in other directions are not. The sensing of the chemical attractant is performed temporally rather than spatially, because the cell moves too rapidly for concentration comparisons between its two ends to be possible. A chemical repellant in the environment gives rise to an analogous behavioral structure – now the biased random walk takes the cell away from the repellant. The bias saturates very quickly, which is what prevents the cell from continuing in a ‘false’ direction, because a higher concentration of attractant will now be needed to repeat the bias. The reception system has three parts: one detecting repellants such as leucine, another detecting sugars, and a third detecting oxygen and oxygen-like substances.
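A minimal simulation sketch of this run-and-tumble mechanism, under assumed rate constants: runs up the attractant gradient are extended by lowering the tumble probability, and a quickly adapting "memory" makes the bias saturate, so the cell reacts to temporal changes in concentration rather than to absolute levels.

```python
# Run-and-tumble chemotaxis sketch; all rate constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
source = np.array([50.0, 0.0])            # location of the attractant source

def attractant(pos):
    return np.exp(-np.linalg.norm(pos - source) / 20.0)

pos = np.zeros(2)
heading = rng.uniform(0.0, 2.0 * np.pi)
memory = attractant(pos)                  # adapted "past" concentration

for _ in range(5000):
    pos += np.array([np.cos(heading), np.sin(heading)])   # run at unit speed
    c = attractant(pos)
    bias = c - memory                     # temporal comparison, not spatial
    memory += 0.1 * (c - memory)          # fast adaptation: the bias saturates
    # Tumble less often while things improve, more often otherwise.
    p_tumble = float(np.clip(0.1 - 5.0 * bias, 0.01, 0.5))
    if rng.random() < p_tumble:
        heading = rng.uniform(0.0, 2.0 * np.pi)           # tumble
print("final distance to source:", np.linalg.norm(pos - source))
```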

Figure: Uexküll’s model of the functional cycle

The cell’s behavior forms a primitive, if full-fledged, example of von Uexküll’s functional circle connecting specific perception signs and action signs. Functional-circle behavior is thus no privilege of animals equipped with central nervous systems (CNS). Both types of signs involve categorization. First, the sensory receptors of the bacterium are evidently organized around the categorization of certain biologically significant chemicals, while most chemicals that remain insignificant for the cell’s metabolism and survival are ignored. The self-preservation of metabolism and cell structure is hence the ultimate regulator, supported by the perception-action cycles described. The categorization inherent in the very structure of the sensors is mirrored in the categorization of act types. Three act types are outlined: a null-action, composed of random running and tumbling, and two mirroring biased variants triggered by attractants and repellants, respectively. Moreover, a negative feedback loop governed by quick satiation ensures that the window of concentration shifts to which the cell can react appropriately is large – it calibrates, so to speak, the sensory system so that it is neither blinded by one perception nor keeps moving the cell forward in one selected direction. This adaptation ensures that the system works across a wide range of attractant/repellant concentrations. The simple signals at stake in the cell’s functional circle display an important property: at simple biological levels, the distinction between signs and perception vanishes – that distinction is supposedly only relevant for higher, CNS-based animals. Here, the signals are based on categorical perception – a perception which immediately categorizes the entity perceived and thus remains blind to internal differences within the category.


The mechanism by which the cell identifies sugar is partly identical to what goes on in human taste buds. Sensing a sugar gradient must, of course, differ from consuming the sugar – while the latter destroys the sugar molecule, the former merely reads an ‘active site’ on the outside of the macromolecule. E. coli – exactly like us – may be fooled by artificial sweeteners bearing the same ‘active site’ on their outer perimeter, even though they are completely different chemicals (this is, of course, the secret behind such sweeteners: they are not sugars and hence do not enter the digestive process carrying the energy of carbohydrates). Bacteria may not lie, but a simpler process than lying (which presupposes two agents and the ability to be fooled) is, in fact, being fooled (presupposing, in turn, only one agent and an ambiguous environment). E. coli has the ability to categorize a series of sugars – but, by the same token, the ability to categorize a series of irrelevant substances along with them. On the one hand, the ability to recognize and categorize an object by a surface property alone (due to the weak van der Waals bonds and hydrogen bonds to the ‘active site’, in contrast to the strong covalent bonds holding the molecule together) facilitates perceptual economy and quick action adaptability. On the other hand, the economy involved in judging objects by their surface alone has an unavoidable flip side: it involves the possibility of mistake, of being fooled by allowing impostors into your categorization. So the perception-action circle of a bacterium already contains, in embryo, the self-regulatory stability of a metabolism sustained by categorized signal and action involvement with the surroundings – a structure that develops into intercellular communication in multicellular organisms and reaches out to the complicated perception and communication of higher animals.
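A toy sketch of this surface-only categorization, under an assumed data model covering only the sugar receptor: the receptor keys on the active-site signature alone, so an impostor with the right surface is accepted even though it carries no metabolizable energy.

```python
# Categorical perception by surface properties; the data model is assumed
# for illustration and models only the sugar receptor.
from dataclasses import dataclass

@dataclass
class Molecule:
    name: str
    active_site: str       # surface signature read via weak, reversible bonds
    metabolizable: bool    # internal property, invisible to the receptor

def sugar_receptor_binds(m: Molecule) -> bool:
    # Perception economy: classify by the surface signature alone.
    return m.active_site == "sugar-like"

for m in (Molecule("glucose", "sugar-like", True),
          Molecule("saccharin", "sugar-like", False),    # the impostor
          Molecule("leucine", "repellant-like", False)):
    print(f"{m.name}: {'attract' if sugar_receptor_binds(m) else 'ignore'}")
```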


Expressivity of Bodies: The Synesthetic Affinity Between Deleuze and Merleau-Ponty. Thought of the Day 54.0


It is in the description of synesthetic experience that Deleuze finds resources for his own theory of sensation, and it is in this context that Deleuze and Merleau-Ponty are closest. For Deleuze sees each sensation as a dynamic evolution: sensation is that which passes from one ‘order’ to another, from one ‘level’ to another. This means that each sensation is at diverse levels, of different orders, or in several domains… it is characteristic of sensation to encompass a constitutive difference of level and a plurality of constituting domains. What this means for Deleuze is that sensations cannot be isolated in a particular field of sense; these fields interpenetrate, so that sensation jumps from one domain to another, becoming-color in the visual field or becoming-music on the auditory level. For Deleuze (and this goes beyond what Merleau-Ponty explicitly says), sensation can flow from one field to another because it belongs to a vital rhythm which subtends these fields – or, more precisely, which gives rise to the different fields of sense as it contracts and expands, as it moves between different levels of tension and dilation.

If, as Merleau-Ponty says (and Deleuze concurs), synesthetic perception is the rule, then the act of recognition that identifies each sensation with a determinate quality or sense, and operates their synthesis within the unity of an object, hides from us the complexity of perception and the heterogeneity of the perceiving body. Synesthesia shows that the unity of the body is constituted in the transversal communication of the senses. But these senses are not pre-given in the body; they correspond to sensations that move between levels of bodily energy, finding different expression in each other. To each of these levels corresponds a particular way of living space and time; hence the simultaneity in depth that is experienced in vision is not the lateral coexistence of touch, and the continuous, sensuous and overlapping extension of touch is lost in the expansion of vision. This heterogeneous multiplicity of levels, or senses, is open to communication; each expresses its embodiment in its own way, and each expresses differently the contents of the other senses.

Thus sensation is not a causal process but the communication and synchronization of senses within my body, and of my body with the sensible world; it is, as Merleau-Ponty says, a communion. And despite the frequent appeal in the Phenomenology of Perception to the sameness of the body and to the common world to ground the diversity of experience, the appeal here goes in a different direction. It is the differences of rhythm and of becoming, which characterize the sensible world, that open it up to my experience. For the expressive body is itself such a rhythm, capable of synchronizing and coexisting with the others. And Merleau-Ponty refers to this relationship between the body and the world as one of sympathy. He is close here to identifying the lived body with the temporalization of existence, with a particular rhythm of duration; and he is close to perceiving the world as the coexistence of such temporalizations, such rhythms. The expressivity of the lived body implies a singular relation to others, and a different kind of intercorporeity than would be the case for two merely physical bodies. This intercorporeity should be understood as inter-temporality. Merleau-Ponty proposes this at the end of the chapter on perception in his Phenomenology of Perception, when he says,

But two temporalities are not mutually exclusive as are two consciousnesses, because each one knows itself only by projecting itself into the present where they can interweave.

Thus our bodies, as different rhythms of duration, can coexist and communicate, can synchronize with each other – in the same way that my body vibrated to the colors of the sensible world. But in the case of two lived bodies the synchronization occurs on both sides, with the result that I can experience an internal resonance with the other when the experiences harmonize, or the shattering disappointment of a miscommunication when the attempt fails. The experience of coexistence is hence not a guarantee of communication or understanding, for this communication must ultimately be based on our differences as expressive bodies and singular durations. Our coexistence calls forth an attempt, which is the intuition.

Quantum Energy Teleportation. Drunken Risibility.


Time is one of the most difficult concepts in physics. It enters the equations in a rather artificial way – as an external parameter. Although strictly speaking time is a quantity that we measure, it is not possible in quantum physics to define a time observable in the same way as for the other quantities that we measure (position, momentum, etc.). The intuition that we have about time is that of a uniform flow, as suggested by the regular ticks of clocks. Time flows undisturbed by the variety of events that may occur in an irregular pattern in the world. Similarly, the quantum vacuum is the most regular state one can think of. For example, a persistent superconducting current flows at a constant speed – essentially forever. Can one then use the quantum vacuum as a clock? This is a fascinating dispute in condensed-matter physics, formulated as the problem of the existence of time crystals. A time crystal, by analogy with a crystal in space, is a system that displays a time regularity under measurement while being in the ground (vacuum) state.

Then, if there is an energy (the zero-point energy) associated with empty space, it follows via the special theory of relativity that this energy should correspond to an inertial mass. By the principle of equivalence of the general theory of relativity, inertial mass is identical with the gravitational mass. Thus, empty space must gravitate. So, how much does empty space weigh? This question brings us to the frontiers of our knowledge of vacuum – the famous problem of the cosmological constant, a problem that Einstein was wrestling with, and which is still an open issue in modern cosmology.

Finally, although we cannot locally extract the zero-point energy of the vacuum fluctuations, the vacuum state of a field can be used to transfer energy from one place to another by using only information. This protocol has been called quantum energy teleportation, and it uses the fact that different spatial regions of a quantum field in the ground state are entangled. It then becomes possible to extract energy locally from the vacuum by making a measurement in one place, then communicating the result to an experimentalist in a spatially remote region, who would then be able to extract energy by making an appropriate measurement (depending on the result communicated) on her or his local vacuum. This suggests that the vacuum is the primordial essence, the ousia from which everything came into existence.
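The structure of the protocol – measure at A, classically communicate the outcome, apply an outcome-dependent local operation at B – can be sketched in a two-qubit toy model. The Hamiltonian and couplings below are illustrative assumptions, not the field-theoretic model from the literature, and the sketch only tracks Bob’s (shifted) local energy term.

```python
# Toy two-qubit sketch of the *structure* of a QET-style protocol.
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

h, k = 1.0, 0.3                                # assumed couplings
HA = h * np.kron(sz, I2)                       # Alice's local term
HB = h * np.kron(I2, sz)                       # Bob's local term
H = HA + HB + 2 * k * np.kron(sx, sx)          # entangling interaction

evals, evecs = np.linalg.eigh(H)
g = evecs[:, 0]                                # entangled ground state

# Shift Bob's term so its ground-state expectation is zero; any later drop
# below the post-measurement value counts as locally "released" energy.
HB0 = HB - (g.conj() @ HB @ g).real * np.eye(4)

projectors = {s: np.kron((I2 + s * sx) / 2, I2) for s in (+1, -1)}
thetas = np.linspace(0.0, 2.0 * np.pi, 400)

avg_release = 0.0
for s, P in projectors.items():
    p = (g.conj() @ P @ g).real                # probability of outcome s
    psi = P @ g / np.sqrt(p)                   # post-measurement state
    e_before = (psi.conj() @ HB0 @ psi).real
    # Bob, told s, searches a local rotation exp(-i t sy) minimizing his
    # energy; t = 0 (do nothing) is included, so the release is >= 0.
    best = min(
        ((np.cos(t) * np.eye(4) - 1j * np.sin(t) * np.kron(I2, sy)) @ psi
         for t in thetas),
        key=lambda phi: (phi.conj() @ HB0 @ phi).real,
    )
    e_after = (best.conj() @ HB0 @ best).real
    avg_release += p * (e_before - e_after)

print(f"average local energy released at B: {avg_release:.4f}")
```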

OnionBots: Subverting Privacy Infrastructure for Cyber Attacks


Currently, bots are monitored and controlled by a botmaster, who issues commands. The transmission of these commands, known as C&C messages, can be centralized, peer-to-peer, or hybrid. In the centralized architecture the bots contact the C&C servers to receive instructions from the botmaster. In this construction the message propagation speed and convergence are faster compared to the other architectures, and it is easy to implement, maintain and monitor. However, it suffers from a single point of failure: such botnets can be disrupted by taking down or blocking access to the C&C server. Many centralized botnets use IRC or HTTP as their communication channel; GT-Bots, Agobot/Phatbot, and clickbot.a are examples of such botnets. To evade detection and mitigation, attackers developed more sophisticated techniques for dynamically changing the C&C servers, such as Domain Generation Algorithms (DGA) and fast-fluxing (single flux, double flux).

Single-fluxing is a special case of the fast-flux method. It maps multiple (hundreds or even thousands of) IP addresses to a domain name. These IP addresses are registered and de-registered at rapid speed – therefore the name fast-flux – and are mapped to particular domain names (e.g., DNS A records) with very short TTL values in a round-robin fashion. Double-fluxing is an evolution of the single-flux technique: it fluxes both the IP addresses of the associated fully qualified domain names (FQDNs) and the IP addresses of the responsible DNS servers (NS records). These DNS servers are then used to translate the FQDNs to their corresponding IP addresses. This technique provides an additional level of protection and redundancy. Domain Generation Algorithms (DGAs) are algorithms used to generate a list of domains for botnets to contact their C&C. The large number of possible domain names makes it difficult for law enforcement to shut them down. Torpig and Conficker are famous examples of such botnets.
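A deliberately simple, hypothetical DGA illustrates the idea: bot and botmaster derive the same daily domain list from a shared seed, so defenders face a large, changing set of names rather than one fixed rendezvous point. The seed, hashing scheme and ".example" suffix are assumptions for illustration, not any real botnet’s algorithm.

```python
# Toy domain generation algorithm (DGA); everything here is illustrative.
import hashlib
from datetime import date, datetime, timezone

def dga_domains(seed: str, day: date, count: int = 10) -> list:
    domains = []
    for i in range(count):
        material = f"{seed}:{day.isoformat()}:{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        # Map the first 12 hex digits onto letters a-p to form a label.
        label = "".join(chr(ord("a") + int(c, 16)) for c in digest[:12])
        domains.append(label + ".example")
    return domains

for domain in dga_domains("toy-botnet-seed", datetime.now(timezone.utc).date()):
    print(domain)
```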

A significant amount of research focuses on the detection of malicious activities from the network perspective, since the traffic is not anonymized. BotFinder uses the high-level properties of the bot’s network traffic and employs machine learning to identify the key features of C&C communications. DISCLOSURE uses features from NetFlow data (e.g., flow sizes, client access patterns, and temporal behavior) to distinguish C&C channels.
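A minimal sketch of flow-feature classification in the spirit of DISCLOSURE, with synthetic stand-in data: each flow is summarized by a few NetFlow-style statistics and fed to an off-the-shelf classifier. The feature set and the generated "C&C-like" beacons are illustrative assumptions, not DISCLOSURE’s actual feature engineering.

```python
# Flow-feature classification sketch; data and features are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synth_flows(n, cc):
    # Columns: mean flow size (bytes), flow-size std, inter-arrival mean (s),
    # inter-arrival std.  Assumed C&C flows: small, regular, periodic beacons.
    if cc:
        return np.column_stack([
            rng.normal(500, 50, n), rng.normal(20, 5, n),
            rng.normal(60, 5, n), rng.normal(2, 0.5, n)])
    return np.column_stack([
        rng.normal(20000, 8000, n), rng.normal(5000, 1500, n),
        rng.normal(10, 8, n), rng.normal(15, 6, n)])

X = np.vstack([synth_flows(500, True), synth_flows(500, False)])
y = np.array([1] * 500 + [0] * 500)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```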

The next step in the arms race between attackers and defenders was moving from a centralized scheme to peer-to-peer C&C. Some of these botnets use an already existing peer-to-peer protocol, while others use customized protocols. For example, earlier versions of Storm used Overnet, and the newer versions use a customized version of Overnet called Stormnet. Meanwhile, other botnets such as Walowdac and Gameover Zeus organize their communication channels in different layers. (OnionBots: Subverting Privacy Infrastructure for Cyber Attacks)

Deanonymizing Tor


My anonymity is maintained in Tor as long as no single entity can link me to my destination. If an attacker controls both the entry and the exit of my circuit, my anonymity can be compromised, as the attacker is able to perform traffic or timing analysis to link my traffic to the destination. For hidden services, this implies that the attacker needs to control the two entry guards used for the communication between the client and the hidden service. This significantly limits the attacker, as the probability that both the client and the hidden service select a malicious entry guard is much lower than the probability that only one of them makes a bad choice.
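A back-of-the-envelope illustration of that last point, assuming (purely for illustration) that malicious relays hold a fraction f of entry-guard selection weight:

```python
# One endpoint picking a bad guard happens with probability ~f, but both
# the client and the hidden service doing so happens with only ~f**2.
f = 0.01    # assumed malicious share of guard selection, for illustration
print(f"one endpoint compromised:   {f:.4f}")
print(f"both endpoints compromised: {f * f:.6f}")
```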

Our goal is to show that it is possible for a local passive adversary to deanonymize users with hidden service activities without the need to perform end-to-end traffic analysis. We assume that the attacker is able to monitor the traffic between the user and the Tor network. The attacker’s goal is to identify that a user is either operating or connected to a hidden service. In addition, the attacker then aims to identify the hidden service associated with the user.

In order for our attack to work effectively, the attacker needs to be able to extract circuit-level details such as the lifetime, number of incoming and outgoing cells, sequences of packets, and timing information. We discuss the conditions under which our assumptions are true for the case of a network admin/ISP and an entry guard.

Network administrator or ISP: A network administrator (or ISP) may be interested in finding out who is accessing a specific hidden service, or whether a hidden service is being run from the network. Under some conditions, such an attacker can extract circuit-level knowledge from the TCP traces by monitoring all the TCP connections between me and my entry guards. For example, if only a single active circuit is used in every TCP connection to the guards, the TCP segments will be easily mapped to the corresponding Tor cells. While it is hard to estimate how often this condition happens in the live network, as users have different usage models, we argue that the probability of observing this condition increases over time.

Malicious entry guard: Entry-guard status is bestowed upon relays in the Tor network that offer plenty of bandwidth and demonstrate reliable uptime for a few days or weeks. To become one, an attacker only needs to join the network as a relay, keep their head down and wait. The attacker can now focus their efforts to deanonymize users and hidden services on a much smaller amount of traffic. The next step is to observe the traffic and identify what’s going on inside it – something the researchers achieved with a technique called website fingerprinting. Because each web page is different, the network traffic it generates as it is downloaded is different too. Even if you cannot see the content inside the traffic, you can identify the page from the way it passes through the network, if you have seen it before. Controlling entry guards allows the adversary to perform the attack more realistically and effectively: entry guards are in a perfect position to perform our traffic-analysis attacks, since they have full visibility into Tor circuits. In today’s Tor network, each OP chooses 3 entry guards and uses them for 45 days on average, after which it switches to other guards. For circuit establishment, those entry guards are chosen with equal probability. Every entry guard thus relays on average 33.3% of a user’s traffic, and 50% of a user’s traffic if one entry guard is down. Note that Tor is currently considering using a single fast entry guard for each user. This would provide the attacker with even better circuit visibility, making our attack even more effective. This adversary is shown in the figure below:

Figure: Tor anonymity – the Tor path, with the adversary at the entry guard
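The fingerprinting step described above can be sketched as follows: represent each page load by coarse, direction-annotated traffic features and match new traces against previously seen ones with a nearest-neighbor classifier. The synthetic traces and feature choices are illustrative assumptions, not the researchers’ actual classifier.

```python
# Website-fingerprinting sketch on synthetic stand-ins for Tor cell traces.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

def trace_features(cells):
    """cells: sequence of +1 (outgoing) / -1 (incoming) cells."""
    cells = np.asarray(cells)
    out, inc = np.sum(cells == 1), np.sum(cells == -1)
    bursts = np.sum(np.diff(cells) != 0)       # direction changes
    return [len(cells), out, inc, bursts]

def synth_trace(page_id):
    # Each "page" gets a characteristic length and incoming/outgoing mix.
    n = 200 + 40 * page_id + rng.integers(-10, 10)
    p_in = 0.6 + 0.05 * page_id
    return rng.choice([-1, 1], size=n, p=[p_in, 1 - p_in])

pages, per_page = 5, 30
X = np.array([trace_features(synth_trace(p))
              for p in range(pages) for _ in range(per_page)])
y = np.repeat(np.arange(pages), per_page)

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
probe = trace_features(synth_trace(2))
print("predicted page:", clf.predict([probe])[0])
```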

The Tor project has responded to the coverage generated by the research with an article of its own, written by Roger Dingledine, Tor’s project leader and one of the project’s original developers. Fingerprinting home pages is all well and good, he suggests, but hidden services aren’t just home pages:

…is their website fingerprinting classifier actually accurate in practice? They consider a world of 1000 front pages, but ahmia.fi and other onion-space crawlers have found millions of pages by looking beyond front pages. Their 2.9% false positive rate becomes enormous in the face of this many pages – and the result is that the vast majority of the classification guesses will be mistakes.

Autopoiesis Revisited


Autopoiesis principally dealt with determining the essence of living beings, thereby calling attention to a distinction between organization and structure: organization subtends the set of all possible relations among the autopoietic processes of an organism, while structure is a synchronic snapshot of those relations from the organizational set that are active at any given instant. This distinction was tension-ridden, for it inhibited the possibility of producing a novel functional structure, especially when the system underwent perturbations vis-à-vis the environment that housed it. Thus, within the realm of autopoiesis, diachronic emergence was conceivable only as natural drift. John Protevi throws light on this perspective with his insistence on synchronic emergence as autonomous, and since autonomy is interest-directed, the question of autopoiesis in the social realm is ruled out. The rejection of extending autopoiesis to the social realm, especially Varela’s rejection, is a move conceived more to go beyond autopoiesis than beyond neocybernetics as concerned with the organizational closure of informational systems, lest the risk of slipping into polarization loom large. The aggrandizing threat of fascistic and authoritarian tendencies in Varela was indeed ill-conceived. This polarity, which Varela later in his intellectual trajectory considered as comprising fragments that constituted a collectively constructed whole, was a launch pad for Luhmann to enter the fray and apply autopoiesis to social systems. Autopoiesis forms the central notion of his self-referential systems, where the latter are characterized by acknowledging their reference to themselves in every operation. An autopoietic system, while organizationally closed, nevertheless references an environment, background or context. This indicates that pure auto-referentiality is generally lacking, replaced instead by a broader process of self-referentiality which comprises hetero-referentiality with a reference to an environment. The process must remain watchful of the distinction between itself and the environment, lest it fail to take off. As Luhmann says, if an autopoietic system did not have an environment, it would be forced to invent one as the horizon of its auto-referentiality.

A system distinguishes itself from the environment by boundaries, where the latter is a zone of high-degree complexity and the former one of reduced complexity. Even Luhmann’s systems are interest-driven, in that communication selects from the available information to the best of its efficiency. Luhmann likens the operation of autopoiesis to a program making a series of logical distinctions. Here, Luhmann refers to the British mathematician G. Spencer Brown’s logic of distinctions, which Maturana and Varela had identified as a model for the functioning of any cognitive process. The supreme criterion guiding the “self-creation” of any given system is a defining binary code. This binary code is taken by Luhmann to problematize the auto-referential system’s continuous confrontation with the dilemma of disintegration/continuation. Importantly, Luhmann treats systems on an ontological level – that is, systems exist – and he attempts to change this paradigm through the differential relations between the system and the environment.

Philosophically, complexity and self-organizational principles shift trends toward interdisciplinarity. To take the case of holism, emergentism within complexity abhors study through reductionism. Scientifically, this notion of holism failed to stamp its authority due to a lack of solid scientificity, and the hubristic Newtonian paradigm of reductionism as the panacea for all ills came to stay. A rapprochement was not possible until the Austrian biologist Ludwig von Bertalanffy shocked the prevalent world view with his thesis on the openness of living systems, which interact with surrounding systems for their continual survival. This idea conceived of a system embedded within an environment and separated from it by a boundary that lent the system its own identity. With input from the environment and output from the system, a plurality of systems could be conceived as interacting with one another to form a network which, if functionally coherent, is a system in its own right – a supersystem – with the initial systems as its subsystems. This strips the subsystems of any independence, but they remain determinable within the network via relations and/or mappings. This is in general termed constraint, which abhors independence from the relations between the coupled systems (supersystem/subsystems). If the coupling between the systems is tight enough, an organization with its own identity and autonomy results. Cybernetics deals precisely with such a formulation, where the autonomy in question is maintained through goal-directed, seemingly intelligent action, in line with the thoughts of Varela and Luhmann. This is significant because perturbations originating in the environment are actively compensated for by the system in order to maintain its preferred state of affairs, with a greater amount of perturbation implying greater compensatory action on the part of the system (see the sketch below). One consequence of such a systemic perspective is that it gets rid of the Cartesian mind-matter split by treating it as nothing more than a special kind of relation. Such is the efficacy of autopoiesis in negotiating the dilemma surrounding the metaphysical question concerning the origin of order.
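A minimal homeostat sketch of that cybernetic point, with assumed gains and noise levels: the system counteracts each environmental perturbation to hold its preferred state, and larger perturbations call forth larger compensations.

```python
# Toy homeostat: goal-directed compensation of environmental perturbations.
import numpy as np

rng = np.random.default_rng(2)
preferred, state, gain = 0.0, 0.0, 0.8   # assumed set-point and gain
for t in range(10):
    perturbation = rng.normal(0.0, 1.0)
    state += perturbation                      # environment pushes the system
    compensation = gain * (preferred - state)  # goal-directed counter-action
    state += compensation
    print(f"t={t} perturbation={perturbation:+.2f} "
          f"compensation={compensation:+.2f} state={state:+.2f}")
```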