Albert Camus Reads Richard Morgan: Unsaid Existential Absurdism…(Abstract/Blurb)

For the upcoming conference on “The Intellectual Geography of Albert Camus” on the 3rd of May, 2019, at the Alliance Française, New Delhi. Watch this space…

Imagine the real world extending into the fictive milieu, or its mirror image, the fictive world territorializing the real, leaving it to portend such an intercourse consequent to an existential angst. Such an imagination now moves along the coordinates of hyperreality, where it collaterally damages meaning in a violent burst of EX/IM-plosion. This violent burst disturbs the idealized truth, which is overridden by a hallucinogenic madness prompting iniquities calibrated for an unpleasant future. This invading, dissonant realism slithers through the science fiction of Richard Morgan before it culminates in the human characteristic of expediency. Such expediencies abhor fixation to a being-in-the-world built on deluded principles, which in my reading is Camus’ recommendation of confrontation with the absurd. This paper attempts to unravel the hyperreal as congruent with the absurd in a fictitious landscape where “existentialism meets the intensity of a relatable yet cold future”.

———————–

What I purport to do in this paper is pick up two sci-fi works of Richard Morgan: the first, Altered Carbon, which also happens to open the Takeshi Kovacs trilogy, and the second, Market Forces, a brutal journey into the heart of conflict investment by way of conscience elimination. Thereafter, a conflation with Camus’ absurdity unravels the paradoxical ambiguity underlying absurdism as a human condition. This paradoxical ambiguity results from Camus’ ambivalence towards the neo-Platonist conception of the ultimate unifying principle: accepting Plotinus’ principled pattern, or steganography, while rejecting its culmination.

Richard Morgan’s is a parody, a commentary, or even an epic fantasy overcharged almost to the point of absurdity and bordering on extropianism. If at all there is a semblance of optimism about the future as a result of Moore’s Law, with ever denser hardware realizable through computational extravagance, it is spectacularly offset by the complexities of software code, resulting in a disconnect that Morgan brilliantly transposes onto a society in a dystopian ethic underlining his plot’s pattern recognitions. This offsetting disconnect between the physical and the mental, between the tangible and the intangible, is the existential angst writ large on the societal, maneuvered by the powers that be… to be continued

Feed Forward Perceptron and Philosophical Representation

In a network that does not tie any node to a specific concept, the hidden layer is populated by neurons in such a way that the architecture blueprints each neuron as connected to every input layer node. What happens to information passed into the network is interesting from the point of view of its distribution over all of the neurons that populate the hidden layer. This distribution strips any particular neuron within the hidden layer of privileged status, which in turn means no privileged (ontological) status for any of the weights and nodes either. With no privileged status accorded to nodes, weights, or even neurons in the hidden layer, representation comes to mean something entirely different from what it normally meant in semantic networks: representations are no longer representative of any coherent concept. Such a scenario is representational with sub-symbolic features, and since all the weights share in the participation each time the network is faced with a task such as pattern recognition, the representation is what is called a distributed representation. The best example of such a distributed representation is the multilayer perceptron, a feedforward artificial neural network that maps sets of input data onto a set of appropriate outputs and finds use in image recognition, pattern recognition and even speech recognition.
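To make this distributed character concrete, here is a minimal sketch in Python/NumPy (the layer sizes, the random weights and the input pattern are invented purely for illustration): the representation of an input is the whole vector of hidden-layer activations, spread across every hidden neuron at once, with no single node standing for a concept.

import numpy as np

rng = np.random.default_rng(0)

# A toy input pattern and a fully connected input-to-hidden weight matrix:
# every hidden neuron is wired to every input node, so none is privileged.
x = np.array([0.2, -0.7, 1.0, 0.4])      # 4 input nodes (an arbitrary pattern)
W_hidden = rng.normal(size=(5, 4))       # 5 hidden neurons, each connected to all 4 inputs

h = np.tanh(W_hidden @ x)                # hidden-layer activations

print(h)   # the "representation" of x: distributed over all 5 hidden neurons at once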

A multilayer perceptron is characterized by each neuron using a nonlinear activation function that models the firing of biological neurons in the brain. The activation functions considered here are sigmoids, and the equations are:

yᵢ = Ф(vᵢ) = tanh(vᵢ)   and   yᵢ = Ф(vᵢ) = (1 + e^(−vᵢ))⁻¹

where yᵢ is the output of the i-th neuron and vᵢ is the weighted sum of its input synapses. The former function is the hyperbolic tangent, which ranges from −1 to +1; the latter, the logistic function, is similar in shape but ranges from 0 to +1. Learning takes place through backpropagation: the connection weights are adjusted according to how far the network’s output deviates from the expected result. To get a sense of the technical side, let us see how backpropagation makes this learning possible in the multilayer perceptron.
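Before moving to backpropagation, the two sigmoids above are straightforward to state in code. A minimal sketch (the function names are mine, chosen for illustration):

import numpy as np

def phi_tanh(v):
    """Hyperbolic tangent activation: outputs range from -1 to +1."""
    return np.tanh(v)

def phi_logistic(v):
    """Logistic activation: the same S-shape, but outputs range from 0 to +1."""
    return 1.0 / (1.0 + np.exp(-v))

v = np.linspace(-3, 3, 7)     # a few sample values of the weighted input sum
print(phi_tanh(v))
print(phi_logistic(v))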

The error in output node j for the n-th data point is represented by,

eⱼ(n) = dⱼ(n) − yⱼ(n),

where d is the target value and y is the value produced by the perceptron. Corrections to the node weights are made so as to minimize the error in the entire output, given by,

ξ(n) = 0.5 ∑ⱼ eⱼ²(n)

Using gradient descent, the change in each weight is given by,

∆wⱼᵢ(n) = −η (∂ξ(n)/∂vⱼ(n)) yᵢ(n)

where yᵢ is the output of the previous neuron, and η is the learning rate, selected carefully so that the weights converge to a response quickly enough without oscillating. Gradient descent is based on the observation that if a real-valued function F(x) is defined and differentiable in a neighborhood of a point ‘a’, then F(x) decreases fastest if one moves from ‘a’ in the direction of the negative gradient of F at ‘a’.
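That observation is easy to check on a one-variable function. A minimal sketch (the function F, the starting point and the step size are all invented for illustration):

def F(x):
    return (x - 3.0) ** 2        # a simple differentiable function with its minimum at x = 3

def dF(x):
    return 2.0 * (x - 3.0)       # its derivative, i.e. the gradient in one dimension

eta = 0.1                        # learning rate, kept small to avoid oscillation
x = 0.0                          # the starting point 'a'
for _ in range(100):
    x = x - eta * dF(x)          # step in the direction of the negative gradient

print(x, F(x))                   # x has converged close to 3, where F is smallest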

The derivative to be calculated depends on the induced local field vⱼ, which itself varies. For an output node the derivative simplifies to,

−(∂ξ(n)/∂vⱼ(n)) = eⱼ(n) Ф′(vⱼ(n))

where Ф′ is the first-order derivative of the activation function Ф, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is,

−(∂ξ(n)/∂vⱼ(n)) = Ф′(vⱼ(n)) ∑ₖ (−(∂ξ(n)/∂vₖ(n))) wₖⱼ(n)

which depends on the change in weights of the k-th nodes, which represent the output layer. So, to change the hidden layer weights, the output layer weights must change first according to the derivative of the activation function, and in this sense the algorithm represents a backpropagation of the activation function. The perceptron as a distributed representation is gaining wider application in AI projects, but since biological knowledge is prone to change over time, its biological plausibility remains doubtful. A major drawback, despite its scoring over semantic networks or symbolic models, is its loose modeling of neurons and synapses; at the same time, backpropagation multilayer perceptrons do not closely resemble brain-like structures, and for near-complete efficiency they require synapses to vary. A sketch of one full backpropagation step, following the equations above, is given below.
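The sketch strings the equations together into a single training step on one data point, using the logistic activation; the layer sizes, initial weights, input and target are invented, and biases are left out to keep it short, so this is an illustration under those assumptions rather than a definitive implementation.

import numpy as np

def phi(v):                                # logistic activation
    return 1.0 / (1.0 + np.exp(-v))

def phi_prime(v):                          # its first-order derivative
    s = phi(v)
    return s * (1.0 - s)

rng = np.random.default_rng(1)
x = np.array([0.5, -0.2, 0.8])             # one data point with three predictors (invented)
d = np.array([1.0, 0.0])                   # its target values (invented)
W1 = rng.normal(scale=0.5, size=(4, 3))    # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(2, 4))    # hidden -> output weights
eta = 0.5                                  # learning rate

# Forward pass: induced local fields v and activations at each layer.
v_hidden = W1 @ x
h = phi(v_hidden)
v_out = W2 @ h
y = phi(v_out)

# Output layer: e_j(n) = d_j(n) - y_j(n), and delta_j = -dxi/dv_j = e_j * phi'(v_j).
e = d - y
delta_out = e * phi_prime(v_out)

# Hidden layer: delta_j = phi'(v_j) * sum_k delta_k * w_kj, i.e. the output-layer
# deltas propagated backwards through the hidden-to-output weights.
delta_hidden = phi_prime(v_hidden) * (W2.T @ delta_out)

# Weight corrections: delta_w_ji(n) = eta * delta_j(n) * y_i(n),
# where y_i is the output of the previous layer's neuron.
W2 += eta * np.outer(delta_out, h)
W1 += eta * np.outer(delta_hidden, x)

print("error xi(n):", 0.5 * np.sum(e ** 2))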

A typical multilayer perceptron would look something like the following figure,

[Figure: multilayer feedforward network with weights]

where (x₁, …, xₚ) are the predictor variable values presented to the input layer. Note that the standardized values for these variables lie in the range −1 to +1. wⱼᵢ is the weight that multiplies each value coming from input neuron i, and uⱼ is the combined value obtained by adding the resulting weighted values at hidden neuron j. This weighted sum is fed into a sigmoidal/non-linear transfer function σ, which outputs a value hⱼ that is then distributed to the output layer. Arriving at a neuron in the output layer, the value from each hidden layer neuron is multiplied by a weight wₖⱼ, and the resulting weighted values are added together to produce a combined value vₖ. This weighted sum vₖ is fed into a sigmoidal/non-linear transfer function σ, which outputs a value yₖ; these yₖ are the outputs of the network.
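A minimal sketch of that forward pass, using the same symbols as the figure description (the sizes and weight values are invented, and σ is taken to be the logistic function):

import numpy as np

def sigma(v):                          # the sigmoidal/non-linear transfer function
    return 1.0 / (1.0 + np.exp(-v))

rng = np.random.default_rng(2)
x = np.array([0.1, -0.6, 0.9])         # predictor values x1..xp, standardized to [-1, +1]
W_ji = rng.normal(size=(4, 3))         # weights from input neurons i to hidden neurons j
W_kj = rng.normal(size=(2, 4))         # weights from hidden neurons j to output neurons k

u = W_ji @ x                           # u_j: combined weighted value at each hidden neuron
h = sigma(u)                           # h_j: hidden outputs, distributed to the output layer
v = W_kj @ h                           # v_k: combined weighted value at each output neuron
y = sigma(v)                           # y_k: the outputs of the network

print(y)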