Derrida Contra Searle – Intentionality – Part 2…


Part 1 is here

Another acerbic criticism by Searle concerns the meaning he attributes to Derrida as utterance meaning, where intentions are confined to mysterious entities lying behind these utterances. The only way to dismantle this criticism is by showing that Derrida does respect a distinction that mirrors Searle’s distinction between speaker’s utterance meaning and literal meaning. Once such a distinction is accepted, the seemingly apparent gulf becomes non-existent, and Derrida’s irreducible polysemy or dissemination lands on the same level as Searle’s literal ambiguity. Searle highlights the category mistake underlying the supposition that the utterance of the token and the token are identical, a mistake that only proliferates when the token acquires a meaning different from the type, as when utterance meaning differs from sentence meaning. For him, excepting diachronic changes, special codes, and the like, the token’s meaning is always the same as the meaning of the type, and the only distinction worthy of the name is the one between speaker’s utterance meaning and sentence meaning, whether type or token (Expression and Meaning: Studies in the Theory of Speech Acts). This can mean nothing more and nothing less than that one’s use of a token has no impact upon the token’s type. Even if utterances and tokens differ from one another, utterances lose their existential status without tokens, implying that nothing rules out the possibility of utterances and tokens having different meanings, provided the condition that utterance meaning does not affect the token’s literal meaning is strictly adhered to.1 In order to establish the nexus between literal meaning and the issue of intentionality, Searle has recourse to fungible intentionality, which highlights the conventionality of intentionality when connected to the notion of literal meaning.
This introduces about-ness into intentions or intentional states. The word “fungible” designates a literal meaning that can do the work of a state of mind that is about something. Searle’s vagaries are once again evident when he simply broaches this concept of fungible intentionality in his reply to Derrida, as the essay circulates around the argument that the iterability of linguistic forms facilitates a necessary condition for the particular forms of intentionality that characterize speech acts. This is so primarily because we human beings master recursive rules that help in the proliferation of speech acts, generating an infinite number of new things and meanings in their wake. And this is true even in the case of the remainder, when the sentence is alienated, weaned from its origin. As Searle points out,

There is no getting away from intentionality, because a meaningful sentence is just a standing possibility of the corresponding intentional speech act. To understand it, it is necessary to know that anyone who said it and meant it would be performing that speech act determined by the rules of the language that give the sentence its meaning in the first place.

Another complication that surfaces in Searle is his insistence on the dissimilarity between utterance meaning’s context dependency and sentence meaning’s context dependency2, which, incidentally, is even accepted by Derrida. The problem, rather, lies in Searle’s lack of clarity when he tries to distinguish between the meaning attached to the sentence and the meaning attached to the utterance in his critique of Derrida in relation to Austin. Derrida invokes a puzzling example from Nietzsche, “I forgot my umbrella”. The quote simply means ‘I forgot my umbrella’, even if one is unaware of the context underlying the remark. It also gives rise to a duality: on the one hand, one is aware of what is intended, whereas, on the other, one is not aware of the intention behind the statement. If this duality is considered, and if what Searle claims about intentionality as missing from writing holds true, then nothing goes against Searle, for the consequence a sentence would undergo would always be dictated by the fungibility of intentions. But the point is missed, for the thinker in question fails to apprehend what Derrida might have meant, that is, the writer’s intention rather than fungible intention. The Derridean argument thereafter goes on to show that intentions as such are never fully actualized. In a highly insightful passage, Derrida (Limited Inc 38) comments,

On the one hand, I am more or less in agreement with Sarl’s statement, “…there is no getting away from intentionality, because a meaningful statement is just a standing possibility of the corresponding intentional speech act”, I would, on the other hand, add, placing undue and artificial emphasis on -ful, that for reasons just stated, there cannot be a sentence that can be fully and actually meaningful and hence (or because) there can be no ‘corresponding (intentional) speech act’ that would be fulfilled, fully present, active and actual.

So, even if there are some traces of agreement between the two thinkers, Derrida rejects the thesis that intentions could be fully present within the text, thus showing that his dissemination or irreducible polysemy holds firm ground. Moreover, his affirmation is strengthened all the more because iterability keeps account of dissemination, thus preventing intentions from ever being actualized. Furthermore, if dissemination is to mark its presence, it can do so only with and within iterability. This goes on to prove the untenability of Searle’s “ideal hypothesis”, since the very structure of the mark excludes the hypothesis of idealization.

There is a nuance associated with irreducible polysemy, despite Searle’s thesis of vagueness and literal ambiguity appearing no different from Derrida’s dissemination. Searle holds ambiguity to be finite, whereas Derrida holds polysemy to be determinable, since irreducible polysemy never makes the arrogant claim that signs, words and sentences have indeterminate meanings.3 Even a cursory look at the positions of the two thinkers is enough to conclude that on the issue of the meaning of sentences they do not differ greatly, since both regard meaning as relatively contextual and meta-contextual, in addition to holding contexts as unchanging, and show hardly any nuance between themselves in considering polysemy a characteristic feature of sentences. This judgment may appear value-laden or prejudiced in its use of the word “nuance”, as if the word were being used rather strongly. But that is not the sense in which it is employed here. There is a difference, and it lies in iterability, which, for Derrida, lends a polysemic status to sentential meanings, whereas the deviation wrought by Searle lends legitimacy to the existence of univocal sentences.

Before getting into the discussion of parasitic discourses, which formed a genuinely contentious issue between Searle and Derrida over the latter’s reading of Austin, it is necessary to provide a brief recapitulation. Searle’s major criticisms of Derrida’s take on Austin’s parasitic/normal/abnormal discourse are:

  1. a misplaced conflation of iterability, citationality and parasitism that slides into a misplaced accusation of Austin as implicitly denying quotability,
  2. a misplaced conflation of non-fiction/fiction distinction with speech/writing distinction as attributed to Austin,
  3. a mistaken understanding of Austin’s exclusion of parasitic discourse and,
  4. attaching an ethical status to this exclusion.

What is confounding is Searle’s understanding of Derrida, who, according to the former, denies Austin any possible expressibility of quotations, since Austin analyses serious speech acts before undertaking studies of parasitic ones. So, if Searle thinks of parasitism as not a matter of quotability on the one hand, he also considers Derrida committed to parasitism as citationality on the other. Thus nothing differentiates citationality from quotability for Searle, whereas, for Derrida, quotation is just one aspect of citation. This Searlean argument falls flat on its face, and it is decimated further when one notes that Derridean parasitism is only an utterance, or a citation of an utterance, in contexts that happen to be extraordinary. If non-serious citations were “the determined modification” of general citationality, this could only imply that non-serious utterances are a certain type of utterance in general4. One of the themes of Signature Event Context is to show that Austin excludes this determined modification of citationality, and with this exclusion a successful performative misses its mark. So, it appears that there is a trade-off excluding one type of citationality in favor of the other, viz. serious citation. This is a clear case of Searle misinterpreting citationality as mere quotability. Now, if there is a suggestion to the effect that non-serious citations are determined modifications of citationality, this could only be deciphered on the basis of conventionality, in that, whenever these features are noticed, they should always be taken as utterances of a certain kind. If this is where Searle’s criticism aims, Derrida counters it along an augmented track that strikes straight at the former’s notions of idealization and semantic rules. This is done in order to test whether a distinction that is not sharp enough is a legitimate conceptual distinction in the first place.
Derrida has no qualms in admitting that it is not, whereas Searle insists that it is. The question concerning the legitimate conceptual distinction again sends the two thinkers down divergent paths, even though both at least agree upon the premiss that a normal speech act is only comprehensible as a fiction following an aporetic situation in which a sharp distinction between normal and parasitic speech acts is encountered, thus considering these distinctions nothing short of idealizations.

1 Kevin Halion correctly summarizes this with his reading of Searle as delineating two fundamental and separate distinctions, viz. sentence/utterance and type/token. Speaker’s utterance meaning and sentence meaning are both context dependent. Over and above the context dependence of the utterance of ‘The cat is on the mat’ (where its indexicals are only determined relative to the context of utterance, which decides which cat it is and where the mat is), there is a contextuality of its literal meaning. This dependence on contextual or background assumptions is easily shown. For instance, it would be problematic to speak of a cat’s being on a mat outside some gravitational field. However, it might still be said, and Searle gives an example to show this: looking from a space-ship window, mats float past with cats near them in such a relation that, relative to the ship, it can be said that in some cases the cat is on the mat and in others the mat is on the cat. And there are innumerable other contexts to which the statement about the cat is also relative.

2 To understand this opposition and the differing kinds of context dependency, it is worthwhile to look at the quote from Searle (Expression and Meaning 133f, linked above) below,

A … skeptical conclusion that I explicitly renounce is that the thesis of the relativity of literal meaning destroys or is in some way inconsistent with the system of distinctions … that centers around the distinction between the literal sentence meaning and the speaker’s utterance meaning, where the utterance meaning may depart in various ways from literal sentence meaning. …The modification that the thesis of relativity of meaning forces on that system of distinctions is that in the account of how context plays a role in the production and comprehension of metaphorical utterances, indirect speech acts, ironical utterances, and conversational implications, we will need to distinguish the special role of the context of utterance in these cases from the role that background assumptions play in the interpretation of literal meanings.

This clearly indicates the distinction made by Searle between utterance meaning and sentence meaning, even if they are both determined by context.

3 A couple of quotations from ‘Afterword: Toward an Ethic of Discussion’ (Limited Inc 115) lend legitimacy to Derrida’s views here.

I never proposed ‘a kind of “all or nothing” choice between pure realization of self-presence and complete freeplay or undecidability.’ I never believed in this and I never spoke of ‘complete freeplay or undecidability’.

And again on page 148,

From the point of view of semantics…’deconstruction’ should never lead either to relativism or to any sort of indeterminism.

The quotations within the above quotes are from Searle (caution: subscribers only), and were reproduced by Gerald Graff in putting across his questions.

4 So it is not true that Derrida held that ‘the phenomenon of citationality’ (with citationality understood as quotability in the sense of mention but not of use) was ‘the same as the phenomenon of parasitic discourse’.

Derrida Contra Searle – Part 1…


The three most essential features of writing for Derrida are the remainder, rupture and spacing, of which the first is mirrored in the weaning of a text from its origin and the second finds its congruence in the severance of expression from its meaning; all three are graphematic. The remainder could also be thought of as writing that absents itself from the original context, while rupture is to be seen primarily as the unlikelihood of a proper context that arrests or confines it. Even if the weaning of a text from its origin fits the bill of being graphematic, Searle rejects rupture as being one. This implies that the status of permanence is accorded to writing, as unlike speech, it remains in its archival form even in the absence of the speaker-writer. This position is non-Derridean, as it argues against all language use being characterized by the absence of the sender. Therefore, the severance of meaning from the expression is denied any special status for writing by an advantageous appeal to quotability, which, even if not a normal purpose of quotation1, could still be a possible one. Searle’s reading of the severance of meaning from an expression, or rupture, goes into a tailspin here, for rupture implies that a signifier can be grafted onto innumerable contexts in which sense could be derived, rather than constraints imposed upon graphemes and phonemes considered simply as marks and sounds, alienated from any significations they might carry when considered as mere signifiers.

Derrida, most patiently and appropriately, launches ironically into his own defense against these Searlean criticisms. Irony and/or mockery rules the roost in Limited Inc., for the style is a deliberate attempt to deal with the serious/non-serious distinction in response to Searle’s tone of high disdain. In the words of Spivak (Revolutions That As Yet Have No Model), Searle’s essay is brusque and all too brief, whereas Derrida’s is long and parodistically courteous and painstaking.

Derrida in Signature Event Context thematically points out the exclusion of writing from speech act theory, and talks about the essential predicates that minimally determine the classical notion of writing. He does this through his reading of Husserl’s Logical Investigations and The Origin of Geometry, where Husserl had cast suspicion on speech as underlain by certain of these predicates of writing, by supposing writing to imitate speech while being unable to share in the immediate link between speech and its context of production. Even if Signature Event Context considers every sign as citable, even without quotation marks, the possibility of a break with every given context, leading to illimitable new contexts, cannot be ruled out as a crisis-ridden possibility in itself. Such a crisis finds resolution in Husserl through his phenomenological reduction, and in Austin through a programmatic, initial, and initiating exclusion. For Searle, writing is nothing more than a transcription of speech, and his refutation of Derrida’s take on speech and writing is too quick a translation that bottoms out in a standard and trivial idiom. For instance, Searle clearly misinterprets Derrida by taking some marks to be iterable only in citations, as exemplified in quotations. Derrida doubtless considers quotation a form of iteration or citation, but it is only one such form, since for him the use of any such mark is equally a case of citation and iteration.2

This misinterpretation on Searle’s part arises primarily from his treating the graphematic within the classical notion of writing.

When Searle reads Signature Event Context, he reads in it the absence of intention from writing altogether, which he bases upon the mark’s separation from its origin and context of production, and which is clearly stated in his reply to Derrida. He (Reiterating the Differences {linked in the footnotes}) says,

Intentionality plays exactly the same role in written as well as spoken communication. What differs in the two cases is not the intentions of the speaker but the role of the context of the utterance in the success of the communication.

So, if intentions are present in writing, and contexts differentiate themselves with respect to speech and writing, with speech more implicit in its form compared to writing, which happens to be explicit, one can only adduce that Searle is caught up in the classical notion of writing, with writing relegated to a lower form of language vis-à-vis speech. This is despite the classical notion holding writing as dependent on speech, with Searle breaking away from it marginally by holding this dependence to be a matter of contingency in the history of human languages rather than a logical matter, and simultaneously unsubscribing from the classical notion of intentions as somehow absent from writing. Derrida sees a problem with this particular take on intentions, which has hitherto sought to actualize and totalize intentionality into self-presence and self-possession.3 One cannot miss the teleological overtones of the classical notion of intentionality, and the resolution lies in problematizing this notion. One such solution lies in leveling the privileged status bestowed upon the writer-reader’s presence, brought about by deconstruction calling back to the center the necessary possibility of the absence of sender and receiver as the positive condition of possibility of communication.4 Such a critique should not be taken to mean, in Searlean style, that intentionality should be done away with, or effaced; it would only stress its deployment as against its disappearance. Intentions could very well themselves be the effects of a desire that leads to self-identical intentions in order to produce interpretations. A limit is imposed upon such desires to prevent them from being thought in terms of a fully intending subject. These limitations, however, accentuate the very functionality of intentions, lest Derrida’s project be seen as absurdly nihilist. According to Derrida (Limited Inc),

What is valid for intention, always differing, deferring, and without plenitude, is also valid correlatively, for the object (qua signified or referent) thus aimed at. However, this limit, I repeat (“without” plenitude), is also the (“positive”) condition of possibility of what is thus limited.

In short, in Derrida the originary self-division of intention “limits what it makes possible while rendering its rigor or purity impossible” (Revolutions That As Yet Have No Model). Derrida sees intention as part of the total context5 that somehow carries the ability to intrinsically determine utterances, and this is rigorously put forward when he (Limited Inc) says,

Intention, itself marked by the context, is not foreign to the formation of the total context…to treat context as a factor from which one can abstract for the sake of refining one’s analysis, is to commit oneself to a description that cannot but miss the very contents and object it claims to isolate, for they are intrinsically determined by context.

This point about understanding intentionality is crucial here, for the writer’s intending is bracketed by the same context as the actual production of graphemes, and Searle, who at times vehemently rejects any distinction between intention and context, invokes it in his criticism of Derrida, thus exhibiting his own conflicted stance. To achieve explicitness, writing must be able to function without the presence of the writer, and this is attained when, something meaningful being said, the intention behind it exhibits its non-presence. This helps clarify the distinction between the intention to be meaningful and the intention itself, or the intended meaning. The phrase “non-presence” is misleading, however, as it is easily loaded with absence; in actuality, the two are not to be employed synonymously.5 Non-presence entails that intentions are never actualized, or made fully present in language, due to dissemination. Derrida (Limited Inc) explicitly never questions intentionality through his text, but only its teleological aspirations, since these aspirations orient the movements towards the possibility of fulfilling, realizing, and actualizing in a plenitude that would be present to and identical with itself. And this is precisely the reason why Derrida calls intention not wholly present. This position is bound to raise suspicion in Searle, when it is misinterpreted to mean that the radical absence of the receiver in general should connote the absence of any trace of a sender. The confusion builds up around “radical absence”, as it is taken to mean the absence of intention, which, however, is not the case. What is really communicated here is the absence of consciousness of what one intended, as is clear from the fact that even if a conscious act needs to be intentional, it does not follow that intention is conscious.

1 Searle talks about the normal and the possible purpose of quotation in a note that follows his remark (Reiterating the Differences),

We can always consider words as just sounds and marks and we can always construe pictures as just material objects. But…this possibility of separating the sign from the signified is a feature of any system of representation whatever: there is nothing especially graphematic about it at all.

2 If every mark is iterable, then no mark belongs to language strictly speaking. Languages could be thought of as reifications that, for someone like Donald Davidson, help us construct theories of meaning while engaging with consistent and idiomatic speech behaviors. These might seem like loose semantic conventions and habits, but they nonetheless direct us towards some sort of engagement with the likes of Joyce and Mrs. Malaprop, inculcating in us the revisionary exercise of theorizing what language our interlocutor is speaking, in line with the principle of charity.

3 This is one of the reasons why Derrida calls his critique ethico-political in nature.

4 This is reviewed by Spivak (Revolutions That As Yet Have No Model {linked above}), and she calls attention to an extensive quote attributed to Derrida on the same page, which I find very insightful and hence worth reproducing here.

To affirm…that the receiver is present at the moment when I write a shopping list for myself, and, moreover, to turn this into an argument against the essential possibility of the receiver’s absence from every mark, is to settle for the shortest, most facile analysis. If both sender and receiver were entirely present when the mark was inscribed, and if they were thus present to themselves – since, by hypothesis, being present and being-present-to-oneself are here the same – how could they even be distinguished from one another? How could the message of the shopping list circulate among them? And the same holds, a fortiori, for the other example, in which sender and receiver are hypothetically considered to be neighbors, it is true, but still as two separate persons occupying two different places, or seats…But these notes are only writable or legible to the extent that…these two possible absences construct the possibility of the message at the very instant of my writing or his reading.

5 This confusion is ameliorated when one sees non-presence as designating a less negated presence, rather than getting caught up in the principally binary presence/absence opposition through which it is usually interpreted.

Derrida Contra Austin – Irreducible Polysemy…


Austin’s position seems to relegate writing vis-à-vis speech, even as he maintains that certain aspects of speech are imperfectly captured by writing. Even Searle joins his mentor in admitting the implicit context of speech as compared with the explicit context of writing. Another thematic distinction between speech and writing in Austin concerns the utterance’s tie to its origin. When an utterance is not in the first person present indicative active, the utterer is typically referred to not by name or the personal pronoun ‘I’ but by the fact that it is he who is speaking and is thus the origin of the utterance; when he happens to be absent and does not use his name or the personal pronoun ‘I’, he will often indicate in the written document that he is the origin by signing it with his name. Derrida criticizes this position, for the speaker’s intended meaning is no more unequivocal if he is present than if he had written. This point is cogently argued by Derrida because, for him, the presence of the speaker is analogous to that of the one who signs. He says in Signature, Event, Context,

The signature also marks and retains [the writer’s] having-been present in a past now or present which will remain a future now or present…general maintenance is in some way inscribed, pinpointed in the always evident and singular present punctuality of the form of the signature…in order for the tethering to the source to occur, what must be retained is an absolute singularity of a signature-event and a signature-form: the pure reproducibility of a pure event.

One conclusion that can favorably be drawn from Derrida’s reading of Austin in the above quote is that, for the latter, a permanence is given to the signature that identifies the signer and his presence with/within the text. This implies at the same time the reproducibility of the mark of the signature, from which it can be deduced that it is recognized as his signature1, proving not only the originality of the signature but also its iterability. Derrida’s general criticism of Austin rests upon the latter’s failing to acknowledge the graphematic nature of locutions, in addition to the performative/constative and serious/parasitic distinctions not fitting when applied to locutions. This is deducible from arguments that run against the notion of proper contexts, thereby hindering the discernment between speech acts that qualify as normal or parasitic and happy or unhappy. A careful reading of Austin’s How To Do Things With Words establishes a thematic rule of classifying and/or categorizing speech acts that resist being unambiguously accounted for one way rather than another; in other words, the book’s primary aim is to root out the thesis that context is absolutely determinable, even if there is a recognition of serious and non-serious speech acts, with the cautionary treatment of leaving out the non-serious acts during the examination of the serious ones. Derrida, on the contrary, accords a lot of seriousness to “non-serious/non-literal” linguistic use, since for him it is determinative of meaning. This stand of Derrida’s runs opposite to Austin’s, for whom speech acts, even if they harbor felicities and infelicities, could only be investigated within ordinary circumstances. In an amazing reading2 of Austin, Derrida claims that non-serious citations of utterances, qua citations, are nothing but instances of the iteration of the utterances that help determine their identity.
Moreover, Derrida claims that the graphematic root of citationality is responsible for Austin’s inability to provide an exhaustive list of criteria to distinguish performatives from constatives, and also for Austin’s failure to take account of the structure of locution as already entailing predicates that blur the very oppositions he unsuccessfully attempts to establish. Also, the failure to recognize the necessity of impure performatives on Austin’s part makes Derrida’s criticism more cogent, since for the latter “impurities” are not confined to performatives having a constative dimension, or constatives having a performative dimension: even normal and parasitic acts are no longer immune to “impurities”. This criticism gains authority since, for Derrida, impurities are necessary rather than accidental facts, and in the absence of proper contexts “hosts” may be parasitic on “parasites”, implying further that “normal” utterances are only relatively normal and “parasitic” utterances only relatively parasitic, since the criterion invoked to differentiate them is a difference in contexts that is somehow missing or blurred in Austin. So, if the constative/performative distinction is an impure distinction in itself for Austin, then he is not successful in legitimizing the normal/parasitic distinction. Derrida claims that Austin’s work shows the possibility of failure, or infelicity, to be a permanent, structural and/or necessary possibility of performative utterances, yet Austin excludes the risk of such failures as accidental. In other words, Austin shows that performatives are characterized by an essential risk of failure and yet treats that risk as if it were accidental, a risk Derrida characterizes as a necessary impurity of performatives and constatives.
Furthermore, Austin’s investigations of infelicities and total speech situations point to the fact that speakers and hearers can exercise control over speech situations in order to avoid infelicity and secure uptake. This meets its counter-argument in Derridean dissemination or irreducible polysemy, which, by establishing locutions as graphematic, rules out any such possibility of the speaker or the hearer securing control over the speech act.

In a nutshell, it is safe to say that Austin’s total speech act revolves around a dual notion: a possible elucidation within total speech situations that leaves room for a generalized account comprehending parasitic deviations from the norm, and speech acts construed as an exercise in exposing the lack of distinctions, like parasitic/normal, involved therein. Therefore, even if in his speech act theory it is impossible for an utterance to take hold of normal and parasitic tones, this does not rule out the contingency of such distinctions coming into being. The impossibility of such distinctions for utterances in Austin’s case is what moves Searle away from his mentor, since for Searle utterances can be tagged normal or parasitic thanks to his literal/utterance-meaning and representation/communication distinctions (his notion of intentionality achieves prominence here, with the speaker-writer determining whether her utterance is normal or parasitic). For Searle, sentences are loaded with literal ambiguities, since the possibilities of speaking literally or non-literally exist in some sort of double bind, and this take of his has some parallels in Derrida’s citationality, iterability and dissemination. There is a difference, though, in that Derrida gives credence to irreducible polysemy, and to parasitism and unhappiness as permanent and structural, which should in no way be counted as indeterminacy or free play but rather as mired in ambiguities, whereas Searle never thinks of all utterances as polysemic…

1 It should be noted that the ideal signature is one which can only be repeated by one individual; for Derrida, it is the impossible ideal of something original that remains so even when it undergoes repetition. Effects of signature are the most common thing in the world, the conditions of its possibility being simultaneously the conditions of the impossibility of its rigorous purity. A signature functions when it takes on repeatable, iterable and imitable forms, which becomes possible when it is detached from its singular, intended production; in other words, it is sameness which, by corrupting its identity and its singularity, divides its seal.

2 This reading is evident in the quote (Derrida):

“…ultimately, isn’t it true that what Austin excludes as anomaly, exception, ‘non-serious’ citation (on stage, in a poem, or a soliloquy) is the determined modification of a general citationality – or rather, a general iterability – without which there would not even be a ‘successful’ performative? So that – a paradoxical but unavoidable conclusion – a successful performative is ‘necessarily’ an impure performative, to adopt the word advanced later on by Austin when he acknowledges that there is no pure performative.”

Catastrophe, Gestalt and Thom’s Natural Philosophy of 3-D Space as Underlying All Abstract Forms – Thought of the Day 157.0

The main result of mathematical catastrophe theory consists in the classification of unfoldings (= evolutions around the center (the germ) of a dynamic system after its destabilization). The classification depends on two sorts of variables:

(a) the set of internal variables (= variables already contained in the germ of the dynamic system); the cardinal of this set is called the corank,

(b) the set of external variables (= variables governing the evolution of the system); its cardinal is called the codimension.
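As a concrete instance of these two cardinals (a standard textbook example, not taken from the table below): the cusp A3 has the germ x4, containing one internal variable and hence corank 1, while its universal unfolding introduces two external variables and hence has codimension 2:

```latex
\underbrace{V_0(x) = x^4}_{\text{germ: one internal variable } x \;\Rightarrow\; \text{corank } 1}
\quad\longrightarrow\quad
\underbrace{V_{u,v}(x) = x^4 + u\,x^2 + v\,x}_{\text{unfolding: two external variables } (u,v) \;\Rightarrow\; \text{codimension } 2}
```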

The table below shows the elementary catastrophes for Thom:

[Table: the elementary catastrophe unfoldings (A-, D- and E-series), classified by corank and codimension]

The A-unfoldings are called cuspoids, the D-unfoldings umbilics. Applications of the E-unfoldings have only been considered in “A geometric model of anorexia and its treatment”. By loosening the condition for topological equivalence of unfoldings, we can enlarge the list, taking in the family of double cusps (X9), which has codimension 8. The unfoldings A3 (the cusp) and A5 (the butterfly) have a positive and a negative variant: A3+, A3−, A5+, A5−.

We obtain Thom’s original list of seven “catastrophes” if we consider only unfoldings up to codimension 4 and without the negative variants of A3 and A5.

[Table: Thom’s seven elementary catastrophes: the fold, cusp, swallowtail, butterfly, and the hyperbolic, elliptic and parabolic umbilics]

Thom argues that “gestalts” are locally constituted by maximally four disjoint constituents which have a common point of equilibrium, a common origin. This restriction is ultimately founded in Gibbs’s law of phases, which states that in three-dimensional space maximally four independent systems can be in equilibrium. In Thom’s natural philosophy, three-dimensional space underlies all abstract forms. He therefore presumes that the restriction to four constituents in a “gestalt” is a kind of cognitive universal. In spite of the plausibility of Thom’s arguments, a weaker assumption suffices: that the number of constituents in a gestalt be finite and small. All unfoldings with codimension (i.e. number of external variables) smaller than or equal to 5 have simple germs. The unfoldings with corank (i.e. number of internal variables) greater than two have moduli. As a matter of fact, the most prominent semantic archetypes come from those unfoldings considered by René Thom in his sketch of catastrophe theoretic semantics.
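To make the role of the external variables tangible, here is a minimal numerical sketch (an illustration of mine, not part of Thom’s table; the parameter names a and b are arbitrary) of the cusp unfolding V(x) = x4 + a·x2 + b·x. The number of equilibria jumps from one to three as the external variables (a, b) cross the standard cusp bifurcation set 8a3 + 27b2 = 0:

```python
import numpy as np

def num_equilibria(a, b):
    """Count real critical points of V(x) = x^4 + a*x^2 + b*x,
    i.e. real roots of V'(x) = 4x^3 + 2a*x + b."""
    roots = np.roots([4.0, 0.0, 2.0 * a, b])
    return int(np.sum(np.abs(roots.imag) < 1e-9))

# Inside the cusp region (8a^3 + 27b^2 < 0): two minima and one maximum.
print(num_equilibria(-3.0, 0.0))  # 3
# Outside it: a single equilibrium.
print(num_equilibria(3.0, 0.0))   # 1
```

Inside the cusp region the potential has two stable states separated by an unstable one, which is what makes the cusp the archetype of bimodal, hysteretic behaviour in applications of the theory.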

From God’s Perspective, There Are No Fields…Justified Newtonian, Unjustified Relativistic Claim. Note Quote.

Electromagnetism is a relativistic theory. Indeed, it had been relativistic, or Lorentz invariant, before Einstein and Minkowski understood that this somewhat peculiar symmetry of Maxwell’s equations was not accidental but expressive of a radically new structure of time and space. Minkowski spacetime, in contrast to Newtonian spacetime, does not come with a preferred space-like foliation; its geometric structure is not one of ordered slices representing “objective” hyperplanes of absolute simultaneity. But Minkowski spacetime does have an objective (geometric) structure of light-cones, with one double-light-cone originating in every point. The most natural way to define a particle interaction in Minkowski spacetime is to have the particles interact directly, not along equal-time hyperplanes but along light-cones.

[Figure: particle b interacts with particle a at the point x via retarded and advanced waves]

In other words, if zi(τi) and zj(τj) denote the trajectories of two charged particles, it wouldn’t make sense to say that the particles interact at “equal times” as in Newtonian theory. It would, however, make perfect sense to say that the particles interact whenever

(zμi – zμj)(ziμ – zjμ) = (zi – zj)2 = 0 —– (1)

For an observer finding himself in a universe guided by such laws, it might then seem as if the effects of particle interactions were propagating through space with the speed of light. And this observer may thus insist that there must be something in addition to the particles, something moving or evolving in spacetime and mediating interactions between charged particles. All of this would be a completely legitimate way of speaking, only that it would reflect more about how things appear from a local perspective in a particular frame of reference than about what is truly and objectively going on in the physical world. From “God’s perspective” there are no fields (or photons, or anything of that kind), only particles in spacetime interacting with each other. This might sound hypothetical, but it is not entirely fictitious, for such a formulation of electrodynamics actually exists and is known as Wheeler–Feynman electrodynamics, or Wheeler–Feynman absorber theory. There is a formal property of field equations called “gauge invariance” which makes it possible to look at things in several different, but equivalent, ways. Because of gauge invariance, this theory says that when you push on something, it creates a disturbance in the gravitational field that propagates outward into the future. Out there in the distant future the disturbance interacts chiefly with the distant matter in the universe. It wiggles. When it wiggles it sends a gravitational disturbance backward in time (a so-called “advanced” wave). The effect of all of these “advanced” disturbances propagating backward in time is to create the inertial reaction force you experience at the instant you start to push (and to cancel the advanced wave that would otherwise be created by you pushing on the object). So, in this view, fields do not have a real existence independent of the sources that emit and absorb them; the theory is instead defined by the principle of least action.
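The null-separation condition (1) can be checked numerically; the following is a small sketch of mine (not from the original text), assuming metric signature (+, −, −, −) and units in which c = 1:

```python
import numpy as np

def minkowski_interval(x, y):
    """Squared Minkowski interval (x - y)^2 with signature (+, -, -, -), c = 1."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return d[0] ** 2 - np.sum(d[1:] ** 2)

def interact(zi, zj, tol=1e-9):
    """Two events interact directly iff they are light-like separated, as in (1)."""
    return abs(minkowski_interval(zi, zj)) < tol

# Events (t, x, y, z): one light-second apart in space, one second apart in time.
print(interact([0, 0, 0, 0], [1, 1, 0, 0]))  # True: null separation
# Same spatial distance, two seconds apart: time-like separated, no interaction.
print(interact([0, 0, 0, 0], [2, 1, 0, 0]))  # False
```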

Wheeler–Feynman electrodynamics and Maxwell–Lorentz electrodynamics are for all practical purposes empirically equivalent, and it may seem that the choice between the two candidate theories is merely one of convenience and philosophical preference. But this is not really the case since the sad truth is that the field theory, despite its phenomenal success in practical applications and the crucial role it played in the development of modern physics, is inconsistent. The reason is quite simple. The Maxwell–Lorentz theory for a system of N charged particles is defined, as it should be, by a set of mathematical equations. The equation of motion for the particles is given by the Lorentz force law, which is

The electromagnetic force F on a test charge at a given point and time is a function of its charge q and velocity v, and can be parameterized by exactly two vectors E and B, in the functional form

F = q(E + v × B)

describing the acceleration of a charged particle in an electromagnetic field. The electromagnetic field, represented by the field-tensor Fμν, is described by Maxwell’s equations. The homogeneous Maxwell equations tell us that the antisymmetric tensor Fμν (a 2-form) can be written as the exterior derivative of a potential (a 1-form) Aμ(x), i.e. as

Fμν = ∂μ Aν – ∂ν Aμ —– (2)

The inhomogeneous Maxwell equations couple the field degrees of freedom to matter, that is, they tell us how the charges determine the configuration of the electromagnetic field. Fixing the gauge-freedom contained in (2) by demanding ∂μAμ(x) = 0 (Lorentz gauge), the remaining Maxwell equations take the particularly simple form:

□ Aμ = – 4π jμ —– (3)

where

□ = ∂μ∂μ

is the d’Alembert operator and jμ the 4-current density.

The light-cone structure of relativistic spacetime is reflected in the Lorentz-invariant equation (3). The Liénard–Wiechert field at spacetime point x depends on the trajectories of the particles at the points of intersection with the (past and future) light-cones originating in x. The Liénard–Wiechert field (the solution of (3)) is singular precisely at the points where it is needed, namely on the world-lines of the particles. This is the notorious problem of electron self-interaction: a charged particle generates a field, the field acts back on the particle, the field-strength becomes infinite at the point of the particle and the interaction terms blow up. Hence, the simple truth is that the field concept for managing interactions between point-particles doesn’t work, unless one relies on formal manipulations like renormalization or modifies Maxwell’s laws on small scales. However, we don’t need the fields, and by taking the idea of a relativistic interaction theory seriously, we can cut out the middle man and let the particles interact directly. The status of the Maxwell equations (3) in Wheeler–Feynman theory is now somewhat analogous to the status of Laplace’s equation in Newtonian gravity. We can get to the Galilean invariant theory by writing the force as the gradient of a potential and having that potential satisfy the simplest nontrivial Galilean invariant equation, which is the Laplace equation with point sources (Poisson’s equation):

∆V(x, t) = ∑iδ(x – xi(t)) —– (4)

Similarly, we can get the (arguably) simplest Lorentz invariant theory by writing the force as the exterior derivative of a potential and having that potential satisfy the simplest nontrivial Lorentz invariant equation, which is (3). And as concerns the equation of motion for the particles: if the trajectories are parametrized by proper time, then the Minkowski norm of the 4-velocity is a constant of motion. In Newtonian gravity, we can make sense of the gravitational potential at any point in space by conceiving its effect on a hypothetical test particle, feeling the gravitational force without gravitating itself. However, nothing in the theory suggests that we should take the potential seriously in that way and conceive of it as a physical field. Indeed, the gravitational potential is really a function on configuration space rather than a function on physical space, and it is a useful mathematical tool rather than something corresponding to physical degrees of freedom. From the point of view of a direct interaction theory, an analogous reasoning would apply in the relativistic context. It may seem (and historically it has certainly been the usual understanding) that (3), in contrast to (4), is a dynamical equation, describing the temporal evolution of something. However, from a relativistic perspective, this conclusion seems unjustified.
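The light-cone dependence of the Liénard–Wiechert solution can be made concrete by solving for the retarded time of a moving charge. The following is a hypothetical numerical sketch of mine (units with c = 1, one spatial dimension, a uniformly moving charge), not part of the original text:

```python
def retarded_time(x_obs, t_obs, trajectory, t_lo=-1.0e6):
    """Solve t_obs - t_r = |x_obs - trajectory(t_r)| for the retarded time t_r
    (c = 1, one spatial dimension) by bisection; for subluminal motion the
    mismatch f is monotone, so the root is unique."""
    f = lambda t_r: (t_obs - t_r) - abs(x_obs - trajectory(t_r))
    lo, hi = t_lo, t_obs  # f(lo) > 0, f(hi) <= 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A charge moving uniformly at half light-speed along x: x_q(t) = 0.5 * t.
t_r = retarded_time(x_obs=10.0, t_obs=0.0, trajectory=lambda t: 0.5 * t)
print(round(t_r, 6))  # -20.0
```

The answer can be checked by hand: −t_r = |10 − 0.5·t_r| gives t_r = −20, i.e. the field at the event (t = 0, x = 10) depends on the charge at the point where the event’s past light-cone intersects the world-line, twenty light-travel units earlier.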

Albert Camus Reads Richard K. Morgan: Unsaid Existential Absurdism

Humanity has spread to the stars. We set out like ancient seafarers to explore the limitless ocean of space. But no matter how far we venture into the unknown, the worst monsters are those we bring with us. – Takeshi Kovacs

What I purport to do in this paper is pick up two sci-fi works of Richard Morgan. The first, Altered Carbon (teaser to the Netflix series), opens the Takeshi Kovacs trilogy and is at times a grisly tale of switching bodies to gain immortality, of transhumanism (whether by means of enhanced biology, technology, or biotechnology) and posthumanism. The second is Market Forces, a brutal journey into the heart of corporatized conflict investment by way of conscience elimination. Thereafter a conflation with Camus’ absurdity unravels the very paradoxical ambiguity underlying absurdism as a human condition. The paradoxical ambiguity is a result of Camus’ ambivalence towards the neo-Platonist conception of the ultimate unifying principle, accepting Plotinus’ principled pattern while rejecting its culmination.

Richard Morgan’s is a parody, a commentary, or even an epic fantasy overcharged almost to the point of absurdity and bordering on extropianism. If at all there is a semblance of optimism in the future as a result of Moore’s Law of dense hardware realizable through computational extravagance, it is spectacularly offset by the complexities of software codes, resulting in a disconnect that Morgan brilliantly transposes on to a society in a dystopian ethic. This offsetting disconnect between the physical and the mental, between the tangible and the intangible, is the existential angst writ large on a society maneuvered by the powers that be.

Morgan’s Altered Carbon won’t be a deflection from William Gibson’s cyberpunk, or at places even Philip K. Dick’s Do Androids Dream of Electric Sheep?, which inspired Ridley Scott’s cult classic Blade Runner, wherein the interface between man and machine is coalescing (“sleeves”, as the novel calls its bodies), while the singularity pundits are making hay. But what if the very technological exponent is used against its progenitors, a point that defines much of Artificial Intelligence ethics today? What if the human mind is now digitized, uploaded and downloaded as a mere file, and transferred across platforms (by way of needlecast transmitting DHF, digital human freight), rendering the hardware disposable and, at the same time, the software data vulnerable to the vagaries of the networked age? These aren’t questions keeping only the ethic at stake, but rather a reformatting of humanity off the leash. This forever changes the concept of morality and of death as we know it, for now anyone with adequate resources (note the excess of capitalism here) can technically extend their life for as long as they desire, by re-sleeving themselves into cloned organics or by taking a leaf out of Orwell’s Government to archive citizen records in perpetual storage. Between the publication in 2002 and now, the fiction in science fiction as a genre has indeed gotten blurred, and what has been the Cartesian devil in mind-body duality leverages the technological metempsychosis of consciousness in bringing forth a new perception of morality.

Imagine the needle of a moral compass behaving most erratically, ranging from extreme apathy to moderate conscience in consideration of the economics of collateral damage, with the narrative wrenching through senses, thoughts and emotions before settling down into a dystopian plot dense with politics, societal disparity, corruption, abuse of wealth and power, and a repressively conservative justice. If extreme violence is distasteful in Altered Carbon, the spectacle is countered by the fact that human bodies and memories are informational commodities, digitized freight and cortical stacks, with busted and mangled physical shells already having access to a sleeve to reincarnate and rehabilitate on to, opening up new vistas of philosophical dispositions and artificially intelligent deliberation on the ethics of the fast-disappearing human-machine interface.

If the personal is political, Altered Carbon results in a concussion of overloaded cyberpunk tropes and is indicative of Morgan’s political takes, a conclusion only to be commissioned upon reading his later works. This detective melange heavily slithers through the human condition, both light and dark, without succumbing to the derivatives of high-tech and low-life, and keeps the potentials of speculative fiction open to exploration. The suffusive metaphysics of longevity, multiplicity of souls and spiritual tentacles meeting its adversary in Catholicism paints a believable futurism on the canvas of the science-fiction spectrum.

Market Forces, on the other hand, is where cyberpunk-style sci-fi is suddenly replaced with a corporatized economy of profit lines via the cogency of conflict investment. The world is in a state of dysphoria, with diplomatic lines having given way to negotiations with violence, and contracts won in Ronin-esque car duels shifting the battlefield from the cyberspace of Altered Carbon to more terrestrial grounds. Directly importing from Gordon Gekko’s “Greed is Good”, corporates enhance their share of GDP via the legal funding of foreign wars. The limits of the philosophy of liberal politics are stretched in analogizing the widening gap between the rich and the marginalized in the backdrop of a crime-ravaged, not-so-futuristic London. Security is rarefied according to economic stratifications, and surveillance by the rich reaches absurd levels of sophistication in the absence of sousveillance by the marginalized.

Enter Chris Faulkner, the protagonist defined by conscience that starts to wither away when confronted with taking hard and decisive actions for his firm, Shorn Associates, in the face of brutality of power dynamics. The intent is real-life testosterone absolutism maximizing the tenets of western capitalism in an ostentatious exhibition of masculinity and competition. The obvious collateral damage is fissuring of familial and societal values born as a result of conscience. Market Forces has certain parallels from the past, in the writings of Robert Sheckley, the American sci-fi author, who would take an element of society and extrapolate on its inherent violence to the extent of the absurd sliding into satire. It’s this sliding wherein lies the question of the beyond, the inevitability of an endowment of aggression defining, or rather questioning the purpose of the hitherto given legacy of societal ethic.

With no dearth of violence, the dystopian future stagnates into a dysphoria characterized by law and apparatus at the mercy of corporations, which transcend the Government constitutionally alongside rapacious capitalism. A capitalism so rampant that it transforms the hero into an anti-hero in the unfolding tension between interest and sympathy, disgust and repulsion. The perfectly achievable Market Forces is a realization round the corner, seeking birth between the hallucinogenic madness of speculation and a hyperreality hinging on the philosophy of free markets taken to its logical end in the direction of an unpleasant future. The reductio ad absurdum of neoliberalism is an environment of feral brutality masked with the thinnest veneer of corporate civilization, a speculation that portrays a world where all power equates to violence. This violence is manifested in aggression in a road-rage death match against competitors every time there is a bid for a tender. What goes slightly overboard, in a pretty colloquial usage of absurdity, is why any competition would stake its best staff on such idiotic suicide missions.

Camus’ absurdity is born in The Myth of Sisyphus and continues well into The Rebel, but is barely able to free itself from the clutches of triviality. This might appear to be a bold claim, but its efficacy is to be tested through Camus’ intellectual indebtedness to Plotinus, the neo-Platonist thinker. Plotinus supplemented the One and Many idea of Plato with gradations of explanatory orders, for only then was a coalescing of explanations with reality conceivable. This coalescing converges into the absolute unity, the One, the necessarily metaphysical ground. Now, Camus accepts Plotinus in the steganographic, but strips the Absolute of its metaphysics. A major strand of absurdity for Camus stems from his dictum, “to understand is, above all, to unify”, and the absence of such a unifying principle vindicates absurdity. Herein one is confronted with the first of the paradoxes: if the Absolute is rejected, why then is there in Camus a nostalgia for unity? The reason is peculiarly caught between his version of empiricism and monism. His empiricism accords comprehensibility to ordinary experiences by way of language and meaning, while anything transcending the same is meaninglessness and hinges on Plotinus’ Absolute for comprehensibility, thus making him sound a monist. Added to this contradiction is the face of the Christian God that would appear if the Absolute were not rejected, which would then have warranted a clash between good and evil in the face of the paradox of the existence of the latter when God is invested with the qualities of the former. Invoking modernism’s core dictum, Camus then questions spontaneity in the presence of the Absolute by calling to attention scholastic perplexity.

Having rejected the Absolute, Camus takes the absurd condition as a fact. If one were to carefully tread The Myth of Sisyphus, it works thusly: If a man removes himself, he destroys the situation and hence the absurd condition. Since, the absurd condition is taken as a fact, one who destroys himself denies this fact. But he who denies this fact puts himself in opposition to what is, Truth. To oppose the Truth, recognizing it to be true, is to contradict oneself. Recognizing a truth, one ought to preserve it rather than deny it. Therefore, it follows that one ought not to commit metaphysical suicide in the face of the meaningless universe. This is a major paradox in his thought, where the evaluative absurdity is deemed to be preserved starting from the premise that man and the universe juxtaposed together is absurdity itself. So, what we have here is a logical cul-de-sac. But, what is of cardinal import is the retention of life in mediating between the man and universe as absurdity in polarities. If this were confronting the absurd in life, eschatology is another confrontation with the absurd, an absolute that needs to be opposed, a doctrine that becomes a further sense of the absurd, an ethic of the creation of the absolute rule in a drama of man as a struggle against death.

It is this conjecture that builds up in The Rebel, death as an antagonist subjected to rebellion. The absurdity of death lies across our desire for immortality, the inexplicability of it, and negating and denying the only meaningful existence known. Contradictorily, death would not be absurd if immortality were possible, and existence as is known isn’t the only meaningful existence that there is. Camus is prone to a meshwork logic here, for his thought fluctuates between viewing death as an absolute evil and also as a liberator, because of which it lends legitimacy to freedom. For, it isn’t the case that Camus is unaware of the double bind of his logic, and admittedly he ejects himself out of this quandary by deliberating on death not as a transcendental phenomenon, but as an ordinary lived-experience. If the Myth of Sisyphus holds murder and suicide in an absurdist position by denying the transcendent source of value, The Rebel revels in antagonisms with Nihilism, be it either in the sense of nothing is prohibited, or the absolutist nihilism of “permit all” with a fulcrum on the Absolute. The Rebel epitomizes the intellectual impotency of nihilism. But due credit for the logical progression of Camus is mandated here, for any utopia contains the seed of nihilism, in that, any acceptance of an Absolute other than life ultimately leads to tyranny. If this were to be one strand in the essay, the other is exposited in terms of an unrelenting and absolute opposition to death. Consequently, The Rebel, which is the embodiment of Camus’ ethic cannot kill. To avoid any danger of absolutism in the name of some positive good or value, the absolute value becomes oppositional to death, and hence the Rebel’s ethic is one of ceaseless rebellion, opposition and conflict.

Now, with this not very exhaustive treatment of Camus’ notion of absurdity (there is more than meets the eye in his corpus), let us turn to the conflation with Richard Morgan and justify the thesis we set out with. We shall bring this about by a series of observations.

If antagonism to death is the hallmark of rebellion, then Altered Carbon, with its special hard-drives called “stacks” installed in the brainstem, immortalizes consciousness to be ported across humans and across spacetimes. Needlecasting, the process by which human consciousness in the format of data gets teleported, suffers disorientation across human hardwares, if they could even be called that. Interestingly, this disorientation renders the receiver conflict-ready, a theme that runs continuously through Market Forces as well as Altered Carbon. The state of being conflict- and combat-ready erects armies to quash rebellions. To prevent immortality from being exploited in the hands of the privileged, these armies are trained to withstand torture and drudgery, while at the same time heightening their perception via steganography. But where the plot goes haywire for Camus’ rebel is that Richard Morgan’s can neutralize and eliminate. That’s the first observation.

On to the second, which deals with transhumanism. A particular character, Kovacs’ partner Kristen Ortega, has a neo-Catholic family that’s split over God’s view of resurrecting a loved one. The split is a result of choosing religious coding, the neo-Catholic stance being that the dead cannot be brought back to life. In these cases, Altered Carbon pushes past its Blade Runner fetish and reflexive cynicism to find something human. But when the larger world is so thin, it’s hard to put something like neo-Catholicism in a larger context. Characters have had centuries to get used to the idea of stacks, begging the larger question: why are many still blindsided by their existence? And why do so few people, including the sour Meths, seem to be doing anything interesting with the technology? Now, Camus’ man is confronted with his absurd and meaningless existence, which will be extinguished by death. There are two choices to consider here: either he can live inauthentically, implying hiding from the truth, the fact that life is meaningless, and accepting the standards and values of the crowd, in the process escaping the inner misery and despair that result from an honest appraisal of the facts. Or, he can make the authentic choice and live heroically, implying facing the truth, life’s futility, and temporarily submitting to despair, which is a necessary consequence but which, if it does not lead to suicide, will eventually purify him. Despair will drive him out of himself and away from trivialities, and by it he will be impelled to commit himself to a life of dramatic choices. This is ingrained in the intellectual idea of neo-Catholicism, with Camus’ allusion that only the use of the Will can cause a man truly to be. Both Takeshi Kovacs in Altered Carbon and Chris Faulkner in Market Forces amply epitomize this neo-Catholicism, albeit not directly, but rather as an existential angst in the form of an intrusion.

Now for the third observation. The truth in Altered Carbon is an excavation of the self, more than a searching of data and tweaking it into information. It admonishes one to keep going in whichever direction, a scalar over the vector, a territorialization in order to decrypt that which seems hidden, an exercise in futility. Allow me to quote Morgan in full,

You are still young and stupid. Human life has no value. Haven’t you learned that yet, Takeshi, with all you’ve seen? It has no value, intrinsic to itself. Machines cost money to build. Raw materials cost money to extract. But people? You can always get some more people. They reproduce like cancer cells, whether you want them or not. They are abundant, Takeshi. Why should they be valuable? Do you know that it costs us less to recruit and use up a real snuff whore than it does to set up and run a virtual equivalent format? Real human flesh is cheaper than a machine. It’s the axiomatic truth of our times.

In full consciousness, and setting aside the impropriety above, Morgan’s privileging of the machine over human flesh extricates essentialism, mirroring the Camusian take on the meaning of life as inessential but for the burning problem of suicide. This is a direct import from Nietzsche, for whom illusion (the arts; remember Wagner!) lends credibility to life and resolves despair to some extent, whereas for Camus despair is only a coming to terms with this absurd condition, by way of machination in the full know-how of the condition’s futility and pointlessness. This fact is most brilliantly corroborated in Morgan’s dictum about how constant repetition can make even the most obvious truths irritating enough to disagree with (Woken Furies).

To conclude: Imagine the real world extending into the fictive milieu, or its mirror image, the fictive world territorializing the real leaving it to portend such an intercourse consequent to an existential angst. Such an imagination now moves along the coordinates of hyperreality, where it collaterally damages meaning in a violent burst of EX/IM-plosion. This violent burst disturbs the idealized truth overridden by a hallucinogenic madness prompting iniquities calibrated for an unpleasant future. This invading dissonant realism slithers through the science fiction before culminating in the human characteristics of expediency. Such expediencies abhor fixation to being in the world built on deluded principles, where absurdity is not only a human condition, but an affliction of the transhuman and posthuman condition as well. Only the latter is not necessarily a peep into the future, which it might very well be, but rather a disturbing look into the present-day topographies, which for Camus was acquiescing to predicament, and for Richard Morgan a search for the culpable.

How the Alt-Right Infiltrated Architecture Twitter – and turned Notre-Dame into a Political Lightning Rod.


“Buildings broadcast a message. Good and bad architecture can lift, or subdue a message… aesthetic ugliness promotes ugly behavior,” says 35-year-old Paul Joseph Watson, a commentator on Infowars, in a video titled “Why Modern Architecture SUCKS.” Watson refers to modernist architects — those who designed buildings after World War II, like Ernő Goldfinger, Owen Luder and John Bancroft — as “the social justice warriors of their time” who actively “rebelled against beauty.” By creating large concrete tower blocks — often with the intention of building social housing for the poor — Watson believes they attempted to “socially engineer society” like the Soviet Union.

He’s also far from the only critic to complain about the legacy of brutalism, a style of modern architecture that emerged in the 1950s and 1960s in the U.K., but was developed largely by French architects like Le Corbusier. Brutalist buildings were characterized by simple, block-like structures that often featured exposed concrete and were constructed in the belief that architects should design buildings with their function in mind first and foremost. As a result, brutalist architects would usually prioritize public space over monuments to gawk at. “Many Brutalist buildings expressed a progressive or even utopian vision of communal living and public ownership,” writes Felix Torkar in Jacobin magazine. (To that end, brutalist buildings were often favored by European governments as social housing for impoverished communities.) “The battle to protect them is also a fight to defend this social inheritance.”

Read on…

Albert Camus Reads Richard Morgan: Unsaid Existential Absurdism…(Abstract/Blurb)

For the upcoming conference on “The Intellectual Geography of Albert Camus” on the 3rd of May, 2019, at the Alliance Française, New Delhi. Watch this space..

Imagine the real world extending into the fictive milieu, or its mirror image, the fictive world territorializing the real leaving it to portend such an intercourse consequent to an existential angst. Such an imagination now moves along the coordinates of hyperreality, where it collaterally damages meaning in a violent burst of EX/IM-plosion. This violent burst disturbs the idealized truth overridden by a hallucinogenic madness prompting iniquities calibrated for an unpleasant future. This invading dissonant realism slithers through the science fiction of Richard Morgan before it culminates in human characteristics of expediency. Such expediencies abhor fixation to being in the world built on deluded principles, which in my reading is Camus’ recommendation of confrontation with the absurd. This paper attempts to unravel the hyperreal as congruent with the absurd in a fictitious landscape of “existentialism meets the intensity of a relatable yet cold future”.

———————–

What I purport to do in this paper is pick up two sci-fi works of Richard Morgan, the first of which also happens to be the first of the Takeshi Kovacs Trilogy, Altered Carbon, while the second is Market Forces, a brutal journey into the heart of conflict investment by way of conscience elimination. Thereafter a conflation with Camus’ absurdity unravels the very paradoxical ambiguity underlying absurdism as a human condition. The paradoxical ambiguity results from Camus’ ambivalence towards the neo-Platonist conception of the ultimate unifying principle, accepting Plotinus’ principled pattern, or steganography, while rejecting its culmination.

Richard Morgan’s is a parody, a commentary, or even an epic fantasy overcharged almost to the point of absurdity and bordering on extropianism. If at all there is a semblance of optimism in the future as a result of Moore’s Law of dense hardware realizable through computational extravagance, it is spectacularly offset by the complexities of software codes, resulting in a disconnect that Morgan brilliantly transposes onto a society in a dystopian ethic underlining his plot pattern recognitions. This offsetting disconnect between the physical and the mental, between the tangible and the intangible, is the existential angst writ large on the societal, maneuvered by the powers that be…..to be continued

Malicious Machine Learnings? Privacy Preservation and Computational Correctness Across Parties. Note Quote/Didactics.


Multi-Party Computation deals with the following problem: There are n ≥ 2 parties P1, . . ., Pn where party Pi holds input ti, 1 ≤ i ≤ n, and they wish to compute together a function s = f(t1, . . . , tn) on their inputs. The goal is that each party will learn the output of the function, s, yet with the restriction that Pi will not learn any additional information about the input of the other parties aside from what can be deduced from the pair (ti, s). Clearly it is the secrecy restriction that adds complexity to the problem, as without it each party could announce its input to all other parties, and each party would locally compute the value of the function. Thus, the goal of Multi-Party Computation is to achieve the following two properties at the same time: correctness of the computation and privacy preservation of the inputs.
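The secure-sum special case gives the flavour of the problem. The sketch below is my own illustrative Python, not from any MPC library; the names `share` and `secure_sum` and the choice of modulus are hypothetical. It uses additive secret sharing: each party splits its input into n random-looking shares that sum to the input modulo a prime, so fewer than all n shares of an input reveal nothing about it, yet the shares of all parties recombine to the correct total.

```python
import secrets

P = 2**61 - 1  # a large prime modulus (illustrative choice)

def share(t, n):
    """Split secret t into n additive shares that sum to t mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((t - sum(shares)) % P)
    return shares

def secure_sum(inputs):
    """Each party i shares its input t_i among all n parties; party j locally
    sums the j-th shares it received; the partial sums recombine to reveal
    only the total, never any individual t_i."""
    n = len(inputs)
    all_shares = [share(t, n) for t in inputs]            # t_i -> n shares
    partials = [sum(all_shares[i][j] for i in range(n)) % P
                for j in range(n)]                         # each party's local sum
    return sum(partials) % P                               # equals sum of inputs mod P

print(secure_sum([10, 20, 12]))  # -> 42
```

Note this toy collapses the network into one process; in a real protocol each `partials[j]` would be computed by a distinct party over secure channels, and the honest-but-curious view of any single party is uniformly random.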

The following two generalizations are often useful:

(i) Probabilistic functions. Here the value of the function depends on some random string r chosen according to some distribution: s = f (t1, . . . , tn; r). An example of this is the coin-flipping functionality, which takes no inputs, and outputs an unbiased random bit. It is crucial that the value r is not controlled by any of the parties, but is somehow jointly generated during the computation.

(ii) Multioutput functions. It is not mandatory that there be a single output of the function. More generally there could be a unique output for each party, i.e., (s1, . . . , sn) = f(t1,…, tn). In this case, only party Pi learns the output si, and no other party learns any information about the other parties’ input and outputs aside from what can be derived from its own input and output.
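The coin-flipping functionality of (i) can be caricatured in a few lines. This is a toy sketch under strong assumptions (an idealised hash commitment, both parties opening honestly, only two parties); `commit`, `open_commitment` and `joint_coin_flip` are my own names, not a library API. The point is only that r = b1 ⊕ b2 is unbiased as long as either party picks its bit at random, and the commitments prevent either party from choosing its bit after seeing the other’s.

```python
import secrets, hashlib

def commit(bit):
    """Commit to a bit with a fresh random nonce (toy hash commitment)."""
    nonce = secrets.token_bytes(16)
    digest = hashlib.sha256(nonce + bytes([bit])).hexdigest()
    return digest, (bit, nonce)

def open_commitment(digest, bit, nonce):
    """Check that (bit, nonce) matches the earlier commitment."""
    return hashlib.sha256(nonce + bytes([bit])).hexdigest() == digest

def joint_coin_flip():
    """Both parties commit to random bits, then open; the output r = b1 XOR b2
    is jointly generated, so neither party alone controls it."""
    b1, b2 = secrets.randbelow(2), secrets.randbelow(2)
    c1, o1 = commit(b1)
    c2, o2 = commit(b2)
    assert open_commitment(c1, *o1) and open_commitment(c2, *o2)
    return b1 ^ b2

print(joint_coin_flip() in (0, 1))  # -> True
```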

One of the most interesting aspects of Multi-Party Computation is to reach the objective of computing the function value, but under the assumption that some of the parties may deviate from the protocol. In cryptography, the parties are usually divided into two types: honest and faulty. An honest party follows the protocol without any deviation. Otherwise, the party is considered to be faulty. The faulty behavior can manifest itself in a wide range of possibilities. The most benign faulty behavior is where the parties follow the protocol, yet try to learn as much as possible about the inputs of the other parties. These parties are called honest-but-curious (or semihonest). At the other end of the spectrum, the parties may deviate from the prescribed protocol in any way that they desire, with the goal of either influencing the computed output value in some way, or of learning as much as possible about the inputs of the other parties. These parties are called malicious.

We envision an adversary A, who controls all the faulty parties and can coordinate their actions. Thus, in a sense we assume that the faulty parties are working together and can exert the most knowledge and influence over the computation out of this collusion. The adversary can corrupt any number of parties out of the n participating parties. Yet, in order to be able to achieve a solution to the problem, in many cases we would need to limit the number of corrupted parties. This limit is called the threshold k, indicating that the protocol remains secure as long as the number of corrupted parties is at most k.

Assume that there exists a trusted party who privately receives the inputs of all the participating parties, calculates the output value s, and then transmits this value to each one of the parties. This process clearly computes the correct output of f, and also does not enable the participating parties to learn any additional information about the inputs of others. We call this model the ideal model. The security of Multi-Party Computation then states that a protocol is secure if its execution satisfies the following: (1) the honest parties compute the same (correct) outputs as they would in the ideal model; and (2) the protocol does not expose more information than a comparable execution with the trusted party, in the ideal model.

Intuitively, the adversary’s interaction with the parties (on a vector of inputs) in the protocol generates a transcript. This transcript is a random variable that includes the outputs of all the honest parties, which is needed to ensure correctness, and the output of the adversary A. The latter output, without loss of generality, includes all the information that the adversary learned, including its inputs, private state, all the messages sent by the honest parties to A, and, depending on the model, may even include more information, such as public messages that the honest parties exchanged. If we show that exactly the same transcript distribution can be generated when interacting with the trusted party in the ideal model, then we are guaranteed that no information is leaked from the computation via the execution of the protocol, as we know that the ideal process does not expose any information about the inputs. More formally,

Let f be a function on n inputs and let π be a protocol that computes the function f. Given an adversary A, which controls some set of parties, we define REALA,π(t) to be the sequence of outputs of honest parties resulting from the execution of π on input vector t under the attack of A, in addition to the output of A. Similarly, given an adversary A′ which controls a set of parties, we define IDEALA′,f(t) to be the sequence of outputs of honest parties computed by the trusted party in the ideal model on input vector t, in addition to the output of A′. We say that π securely computes f if, for every adversary A as above, ∃ an adversary A′, which controls the same parties in the ideal model, such that, on any input vector t, we have that the distribution of REALA,π(t) is “indistinguishable” from the distribution of IDEALA′,f(t).
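The quantifier structure of this definition can be compacted as follows (symbols as in the text; ≈ stands for the relevant notion of indistinguishability, made precise below in the computational and information-theoretic settings):

```latex
\pi \text{ securely computes } f
\quad\Longleftrightarrow\quad
\forall \mathcal{A}\;\exists \mathcal{A}'\;\forall t:\quad
\mathrm{REAL}_{\mathcal{A},\pi}(t) \;\approx\; \mathrm{IDEAL}_{\mathcal{A}',f}(t)
```

The order of quantifiers matters: the simulator A′ may depend on the real adversary A, but must then work for every input vector t.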

Intuitively, the task of the ideal adversary A′ is to generate (almost) the same output as A generates in the real execution or the real model. Thus, the attacker A′ is often called the simulator of A. The transcript value generated in the ideal model, IDEALA′,f(t), also includes the outputs of the honest parties (even though we do not give these outputs to A′), which we know were correctly computed by the trusted party. Thus, the real transcript REALA,π(t) should also include correct outputs of the honest parties in the real model.

We assumed that every party Pi has an input ti, which it enters into the computation. However, if Pi is faulty, nothing stops Pi from changing ti into some ti′. Thus, the notion of a “correct” input is defined only for honest parties. However, the “effective” input of a faulty party Pi could be defined as the value ti′ that the simulator A′ gives to the trusted party in the ideal model. Indeed, since the outputs of honest parties look the same in both models, for all effective purposes Pi must have “contributed” the same input ti′ in the real model.

Another possible misbehavior of Pi, even in the ideal model, might be a refusal to give any input at all to the trusted party. This can be handled in a variety of ways, ranging from aborting the entire computation to simply assigning ti some “default value.” For concreteness, we assume that the domain of f includes a special symbol ⊥ indicating this refusal to give the input, so that it is well defined how f should be computed on such missing inputs. What this requires is that in any real protocol we detect when a party does not enter its input and deal with it exactly in the same manner as if the party would input ⊥ in the ideal model.

As regards security, it is implicitly assumed that all honest parties receive the output of the computation. This is achieved by stating that IDEALA′,f(t) includes the outputs of all honest parties. We therefore say that our definition guarantees output delivery. A more relaxed property than output delivery is fairness. If fairness is achieved, then this means that if at least one (even faulty) party learns its outputs, then all (honest) parties eventually do too. A bit more formally, we allow the ideal model adversary A′ to instruct the trusted party not to compute any of the outputs. In this case, in the ideal model either all the parties learn the output, or none do. Since A’s transcript is indistinguishable from A′’s, this guarantees that the same fairness guarantee must hold in the real model as well.

A further relaxation of the definition of security is to provide only correctness and privacy. This means that faulty parties can learn their outputs, and prevent the honest parties from learning theirs. Yet, at the same time the protocol will still guarantee that (1) if an honest party receives an output, then this is the correct value, and (2) the privacy of the inputs and outputs of the honest parties is preserved.

The basic security notions are universal and model-independent. However, specific implementations crucially depend on spelling out precisely the model where the computation will be carried out. In particular, the following issues must be specified:

  1. The faulty parties could be honest-but-curious or malicious, and there is usually an upper bound k on the number of parties that the adversary can corrupt.
  2. We distinguish between the computational setting and the information-theoretic setting. In the latter, the adversary is unlimited in its computing powers. Thus, the term “indistinguishable” is formalized by requiring the two transcript distributions to be either identical (so-called perfect security) or, at least, statistically close in their variation distance (so-called statistical security). In the computational setting, on the other hand, the power of the adversary (as well as that of the honest parties) is restricted. A bit more precisely, the Multi-Party Computation problem is parameterized by the security parameter λ, in which case (a) all the computation and communication shall be done in time polynomial in λ; and (b) the misbehavior strategies of the faulty parties are also restricted to run in time polynomial in λ. Furthermore, the term “indistinguishability” is formalized by computational indistinguishability: two distribution ensembles {Xλ}λ and {Yλ}λ are said to be computationally indistinguishable if, for any polynomial-time distinguisher D, the quantity ε, defined as |Pr[D(Xλ) = 1] − Pr[D(Yλ) = 1]|, is a “negligible” function of λ. This means that for any j > 0 and all sufficiently large λ, ε eventually becomes smaller than λ^(−j). This modeling helps us to build secure Multi-Party Computation protocols depending on plausible computational assumptions, such as the hardness of factoring large integers.
  3. The two common communication assumptions are the existence of a secure channel and the existence of a broadcast channel. Secure channels assume that every pair of parties Pi and Pj is connected via an authenticated, private channel. A broadcast channel is a channel with the following properties: if a party Pi (honest or faulty) broadcasts a message m, then m is correctly received by all the parties (who are also sure the message came from Pi). In particular, if an honest party receives m, then it knows that every other honest party also received m. A different communication assumption is the existence of envelopes. An envelope guarantees the following properties: a value m can be stored inside the envelope, it will be held without exposure for a given period of time, and then the value m will be revealed without modification. A ballot box is an enhancement of the envelope setting that also provides a random shuffling mechanism for the envelopes. These are, of course, idealized assumptions that allow for a clean description of a protocol, as they separate the communication issues from the computational ones. These idealized assumptions may be realized by physical mechanisms, but in some settings such mechanisms may not be available. It is then important to address the question of whether and under what circumstances we can remove a given communication assumption. For example, we know that the assumption of a secure channel can be substituted with a protocol, albeit under the introduction of a computational assumption and a public key infrastructure.
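Returning to item 2, the distinguisher’s advantage |Pr[D(Xλ) = 1] − Pr[D(Yλ) = 1]| can be estimated empirically. A minimal sketch follows (my own illustrative Python; `advantage` is a made-up helper, and a Monte Carlo estimate only approximates the true quantity, up to sampling error):

```python
import random

def advantage(sample_x, sample_y, distinguisher, trials=10000):
    """Empirically estimate |Pr[D(X)=1] - Pr[D(Y)=1]| for a distinguisher D,
    given samplers for the two distributions."""
    hits_x = sum(distinguisher(sample_x()) for _ in range(trials))
    hits_y = sum(distinguisher(sample_y()) for _ in range(trials))
    return abs(hits_x - hits_y) / trials

# Identical distributions: any distinguisher's advantage should be near zero.
rng = random.Random(0)
X = lambda: rng.getrandbits(8)  # uniform byte
Y = lambda: rng.getrandbits(8)  # same distribution
D = lambda v: v < 128           # a simple (and necessarily useless) distinguisher

print(advantage(X, Y, D) < 0.05)  # -> True
```

For genuinely different distributions a good D pushes the estimate toward their statistical distance; computational security says no polynomial-time D can do better than a negligible function of λ.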

Fascism’s Incognito – Conjuncted

“Being asked to define fascism is probably the scariest moment for any expert of fascism,” Montague said.
Brecht’s circular circuitry is here.
Allow me to make cross-sectional (both historically and geographically) references. I start with Mussolini, who spoke of what use fascism could be put to by stating that capitalism throws itself into the protection of the state when it is in crisis, and he illustrated this point by referring to the Great Depression as a failure of laissez-faire capitalism, one that created an opportunity for the fascist state to provide an alternative. This in a way points to the fact that fascism springs to life economically in the event of capitalism’s deterioration. To highlight this point, let me take recourse to Samir Amin, who calls the fascist choice for managing a capitalist society in crisis a categorical rejection of democracy, despite that stage having been reached democratically. The masses are subjected to values of submission to a unity of socio-economic, political and/or religious ideological discourses. This is one reason why I call fascism not a derivative category of capitalism, in the sense of the former being a historic phase of the latter, but rather a coterminous tendency waiting in dormancy for capitalism to deteriorate, so that fascism could then detonate. But whether fascism and capitalism are related in multiple ways is as open a question as how socialism is related to fascism, albeit in categorically different ways.
It is imperative for me to add by way of what I perceive as financial capitalism and bureaucracy and where exactly art gets sandwiched in between the two, for more than anything else, I would firmly believe in Brecht as continuing the artistic practices of Marxian sociology and political-economy.
Financial capitalism combined with impersonal bureaucracy has inverted the traditional schematic, forcing us to live in a totalitarian system of financial governance divorced from democratic polity. It’s not even fascism in the older sense of the term, as a collusion of state and corporate power, since the political is bankrupt and has become a mediatainment system of control and a buffer against the fact of Plutocracies. The state will remain only as long as the police systems are needed to fend off people claiming rights to their rights. Politicians are dramaturgists and media personalities rather than workers in law. If one were to just study the literature and paintings of the last 3-4 decades, it is fathomable where it is all going. The arts still continue to speak what we do not want to hear. Most of our academics are idiots clinging on to the ideological culture of the left that has put on its blinkers and has only one enemy, which is the right (whatever the hell that is). Instead of moving outside their straitjackets and embracing the world of the present, they still seem to be ensconced in 19th-century utopianism, with the only addition to their arsenal being the dramatic affects of mass media. Remember Thomas Pynchon of Gravity’s Rainbow fame (I prefer calling him the illegitimate cousin of James Joyce for his craftiness and smoothly sailing contrite plots: there goes the first of my paroxysms!!), who likened the system of techno-politics to an extension of our inhuman core, at best autonomous, intelligent and ever willing to exist outside the control of politics altogether. This befits the operational closure, echoing time and time again that technology isn’t an alien thing, but rather a manifestation of our inhuman core, a mutation of our shared fragments sieved together in ungodly ways. This is alien technology in gratitude.
We have never been natural, and purportedly so, having built defence systems against the natural both intrinsically and extrinsically. Take, for example, Civilisation, the most artificial construct of all, which humans had busied themselves building and are now busying themselves upholding. What is it? A Human Security System staving off the entropy of existence through the self-perpetuation of a cultural complex of temporal immortalisation, if nothing less, and vulnerable to editions by scores of pundits laying claim to a larger schemata often overlooked by parochiality. Haven’t we become accustomed to hibernating in an artificial time, now exposed by inhabiting the infosphere, creating dividualities by reckoning with the data we intake, partake and outtake? Isn’t analysing the part/whole dividuality really scoring our worthiness? I know the answer is yes, but it merely refuses to jump off the tongue. Democracies have made us indolent, with extremities ever so flirting with electronic knowledge waiting to be turned to digital ash when confronted with the existential threat to our locus standi.
We always think of a secret cabal conspiring to dehumanise us. But we also forget the impersonality of the dataverse, the infosphere, the carnival we simply cannot avoid being a part of. Our mistaken beliefs lie in reductionism, and this is a serious detriment to causes created ex nihilo, for a fight is inevitably diluted if we treat as insignificant the global meshwork of complex systems of economics and control, for these far outstrip our ability to pin them down to a critical apparatus. This apparatus needs to be different from ones based on criticism, for the latter is prone to sciolist tendencies. Maybe one needs to admit allegiance to the perils of our position and go along in a Socratic irony before turning against the admittance at opportune times. The right deserves tackling through Socratic irony, lest taking offence become platitudinous. Let us not forget that the modern state is nothing but a PR firm to keep the children asleep and unthinking, believing in the dramaturgy of the political as real. And this is where Brecht comes right back in, for he considered the creation of bureaucracies an affront not just to fascist states, but even to communist ones. The above aside, or digression, is just a reality check on how complex capitalism has become and, with it, its derivatives of fascism, as these are too intertwined within bureaucratic spaces. Even when Brecht was writing in his heyday, he took a deviation from his culinary-as-ever epic theatre to found a new form of what he called theatre as learning-play, one that resembled his political seminars modeled on the rejection of the concept of bureaucratic elitism in partisan politics, where the theorists and functionaries issued directives and controlled activities on behalf of the masses to the point of submission of the latter to the former.
This point is highlighted not just for fascist states, but equally well for socialist/communist regimes reiterating the fact that fascism is potent enough to develop in societies other than capitalistic ones.
Moving on: mentions of democracy as bourgeois democracy, made in the same breath as equality only for the holders of capital, are turning platitudinous. Well, structurally, yes, this is what it seems like, but reality goes a bit deeper and thereafter fissures itself into the question of whether capital indeed is what it is generally perceived to be, or whether there is more to it than meets the eye. I quip this to confront two theorists of equality with one another: Piketty and Sally Goerner. Piketty misses a great opportunity to tie the “r > g” idea (after-tax returns on capital r > growth rate of economy g) to the “limits to growth”. With a careful look at history, there are several quite important choice points along the path from the initial hope it won’t work out that way… to the inevitable distressing end he describes, and sees, and regrets. It’s what seduces us into so foolishly believing we can maintain “g > r”, despite the very clear and hard evidence of that failing all the time. The real “central contradiction of capitalism”, then, is that it promises “g > r”, and then we inevitably find it is only temporary. Growth is actually nature’s universal start-up process, used to initially build every life, including the lives of every business and the lives of every society. Nature begins building things with growth. She’s then also happy to destroy them with more of the same: those lives that began with healthy growth make the fateful choice of continuing to devote their resources to driving their internal and external strains to the breaking point, trying to make g > r perpetual. It can’t be. So the secret to the puzzle seems to be: once you’ve taken growth from “g > r” to spoiling its promise in “r > g”, you’ve missed the real opportunity it presented.
Sally Goerner writes about how systems need to find new ways to grow through a process of rising intricacy that literally reorganizes the system into a higher level of complexity. Systems that fail to do that collapse. So smart growth is possible (a cell divides into multiple cells that then form an organ of higher complexity and greater intricacy through working cooperatively). Such smart growth is regenerative in that it manifests new potential. How different that feels from the conventional scaling up of a business, often at the expense of intricacy (in order to achieve so-called economies of scale). Leaps of complexity do satisfy growing demands for productivity, but only temporarily, as continually rising demands of productivity inevitably require ever bigger leaps of complexity. Reorganizing the system by adopting ever higher levels of intricacy eventually makes things ever more unmanageable, naturally becoming organizationally unstable, to collapse for that reason. So seeking the rise in productivity in exchange for a rising risk of disorderly collapse is like jumping out of the frying pan right into the fire! As a path to system longevity, then, it is tempting but risky, indeed appearing regenerative temporarily, until the same impossible challenge of keeping up with ever-increasing demands for new productivity drives us to abandon the next level of complexity too! The more intricacy (tight, small-scale weave) grows horizontally, the more unmanageable it becomes. That’s why all sorts of systems develop what we would call hierarchical structures. Here, however, hierarchical structures serve primarily as connective tissue that helps coordinate, facilitate and communicate across scales. One of the reasons human societies are falling apart is because many of our hierarchical structures no longer serve this connective-tissue role, but rather fuel processes of draining and self-destruction by creating sinks where refuse could have been regenerated.
Capitalism, in its present financial form, is precisely this sink, whereas capitalism wedded to fascism as an historical alliance doesn’t fit the purpose, thus proving once more that the collateral damage would be lent out to fascist states if that were to be the case, which would indeed materialize that way.
That democracy is bourgeois democracy is an idea associated with the Swedish political theorist Goran Therborn, who as recently as the 2016 US elections proved his point by questioning the whole edifice of inclusive-exclusive aspects of democracy, when he said,
Even if capitalist markets do have an inclusive aspect, open to exchange with anyone…as long as it is profitable, capitalism as a whole is predominantly and inherently a system of social exclusion, dividing people by property and excluding the non-profitable. a system of this kind is, of course, incapable of allowing the capabilities of all humankind to be realized. and currently the system looks well fortified, even though new critical currents are hitting against it.
Democracy did take on a positive meaning, and ironically enough, it was through the rise of nation-states and the consolidation of popular sovereignty championed by the west that it met its two most vociferous challenges, in the form of communism and fascism, of which the latter was a reactionary response to the discontents of capitalist modernity. Its radicality lay in racism and populism. A degree of deference toward the privileged and propertied, rather than radical opposition as in populism, went along with elite concessions affecting the welfare, social security, and improvement of the working masses. This was countered, even in the programs of moderate and conservative parties, by using state power to curtail the most malign effects of unfettered market dynamics. It was only in the works of Hayek that such interventions began to represent the road to serfdom, thus paving the way for modern-day right-wing economies, in which the state has absolutely no role to play as regards market fundamentals and dynamics. The counter to bourgeois democracy was rooted in social democratic movements, and still is: one based on negotiation, compromise, give and take, and a grudgingly given respect for others (whether ideologically or individually). The point again is just to reiterate that fascism, in my opinion, is not to be seen as the nakedest form of capitalism, but is generally seen to flounder on the shoals of an economic slowdown or a crisis of stagflation.
On ideal categories, I am not a Weberian at heart. I am a bit ambiguous or even ambivalent to the role of social science as a discipline that could draft a resolution to ideal types and interactions between those generating efficacies of real life. Though, it does form one aspect of it. My ontologies would lie in classificatory and constructive forms from more logical grounds that leave ample room for deviations and order-disorder dichotomies. Complexity is basically an offspring of entropy.
And here is where my student-days of philosophical pessimism surface, or were they ever dead, as the real way out is a dark path through the world we too long pretended did not exist.