Are Leibnizian ‘Monads’ Complex?

One crucial characteristic of complexity (complex systems) is the ignorance of any element constituting the system about the system as a whole: were the whole known to the element, the whole of complexity would be resident in that element, begging the metaphysical question of how the consciousness of the whole could reside in a unit/part. Taking into account the non-linearity of the system, this ignorance turns into bliss, as complexity emerges from the patterns of interaction between the constituting elements. There might seem to be some similarity between Leibnizian monads and these elements, were it not for the non-monadic character of the elements constituting the system. Leibnizian monads are subject to their own laws, with individualism as a key property; they are non-interactive in nature and, most importantly, each reflects the entire universe. The elements of complex systems lack these qualities. The monads are metaphysical, whereas the elements shun that tag. Monads have an ontological existence lying in their irreducible simplicity, a commonality with the elements constituting complex systems, but they are differentiated by their merely apparent interactions, as against the real interactions of the elements.

Churchlands: Representational Disconcert via State-Space Physics & Phase-Space Mathematics

Feedforward Perceptron and Philosophical Representation

In a network that does not tie a node to any specific concept, the hidden layer is populated by neurons in such a way that, architecturally, each neuron is connected with every input-layer node. What happens to information passed into the network is interesting from the point of view of distribution: it is spread over all of the neurons that populate the hidden layer. This distribution strips any particular neuron within the hidden layer of privileged status, which in turn means no privileged (ontological) status for any weights and nodes either. With no privileged status accorded to nodes, weights, or even neurons in the hidden layer, representation comes to mean something entirely different from what it normally meant in semantic networks: representations are not representative of any coherent concept. Such a scenario is representational with sub-symbolic features, and since all the weights share in participation each time the network is faced with something like pattern recognition, the representation is what is called a distributed representation. The best example of such a distributed representation is the multilayer perceptron, a feedforward artificial neural network that maps sets of input data onto a set of appropriate outputs, and finds its use in image recognition, pattern recognition and even speech recognition.

A multilayer perceptron is characterized by each neuron using a nonlinear activation function that models the firing of biological neurons in the brain. The activation functions for the current application are sigmoids, and the equations are:

y_i = \phi(v_i) = \tanh(v_i) \qquad \text{and} \qquad y_i = \phi(v_i) = (1 + e^{-v_i})^{-1}

where y_i is the output of the ith neuron and v_i is the weighted sum of the input synapses. The former function is a hyperbolic tangent ranging from -1 to +1; the latter is equivalent in shape but ranges from 0 to +1.
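As a concrete illustration, here is a minimal Python/NumPy sketch of the two sigmoids (function names are illustrative, not from any particular library), including the derivatives \phi'(v) that the backpropagation rules below will need.

```python
import numpy as np

def tanh_activation(v):
    """Hyperbolic tangent: output ranges from -1 to +1."""
    return np.tanh(v)

def logistic_activation(v):
    """Logistic sigmoid: equivalent in shape, output ranges from 0 to +1."""
    return 1.0 / (1.0 + np.exp(-v))

def tanh_derivative(v):
    """phi'(v) = 1 - tanh(v)^2."""
    return 1.0 - np.tanh(v) ** 2

def logistic_derivative(v):
    """phi'(v) = phi(v) * (1 - phi(v))."""
    s = logistic_activation(v)
    return s * (1.0 - s)
```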

Learning takes place through backpropagation: the connection weights are changed after the output is compared with the expected result and the adjustments are propagated backwards. To be on the technical side, let us see how backpropagation is responsible for learning in the multilayer perceptron. The error at output node j for the nth data point is represented by

e_j(n) = d_j(n) - y_j(n)

where d_j is the target value and y_j is the value produced by the perceptron. Corrections to the weights of the nodes are then made so as to minimize the error in the entire output, given by

\xi(n) = \frac{1}{2} \sum_j e_j^2(n)
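In code, the per-node error and this total error might be computed as follows (a small sketch; names are illustrative):

```python
import numpy as np

def output_errors(d, y):
    """e_j(n) = d_j(n) - y_j(n), per output node."""
    return d - y

def total_error(d, y):
    """xi(n) = 0.5 * sum_j e_j(n)^2, the quantity the weight corrections minimize."""
    return 0.5 * np.sum(output_errors(d, y) ** 2)
```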

With the help of gradient descent, the change in each weight is given by

\Delta w_{ji}(n) = -\eta \, \frac{\partial \xi(n)}{\partial v_j(n)} \, y_i(n)

where y_i is the output of the previous neuron and \eta is the learning rate, which is carefully selected so that the weights converge to a response quickly enough without oscillating. Gradient descent is based on the observation that if a real-valued function F(x) is defined and differentiable in a neighborhood of a point a, then F(x) decreases fastest if one goes from a in the direction of the negative gradient of F at a.
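Both observations fit in a few lines (a hedged sketch; eta, grad_F and the rest are illustrative names, not from the text):

```python
def gradient_descent_step(a, grad_F, eta):
    """F decreases fastest from point a along the negative gradient of F at a."""
    return a - eta * grad_F(a)

def weight_change(eta, minus_dxi_dvj, y_i):
    """Delta w_ji(n) = eta * (-d xi(n) / d v_j(n)) * y_i(n)."""
    return eta * minus_dxi_dvj * y_i
```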

The derivative to be calculated depends on the induced local field v_j, which itself varies. For an output node the derivative simplifies to

-\frac{\partial \xi(n)}{\partial v_j(n)} = e_j(n) \, \phi'(v_j(n))

where \phi' is the first-order derivative of the activation function \phi, which itself does not vary. The analysis is more difficult for a change in the weights to a hidden node, but it can be shown that the relevant derivative is

-\frac{\partial \xi(n)}{\partial v_j(n)} = \phi'(v_j(n)) \sum_k \left( -\frac{\partial \xi(n)}{\partial v_k(n)} \right) w_{kj}(n)

which depends on the change of the weights of the kth nodes, which represent the output layer. So to change the hidden-layer weights, we must first change the output-layer weights according to the derivative of the activation function, and in this sense the algorithm represents a backpropagation of the activation function. The perceptron as a distributed representation is gaining wider application in AI projects, but since biological knowledge is prone to change over time, its biological plausibility is doubtful. A major drawback, despite its scoring over semantic networks or symbolic models, is its loose modeling of neurons and synapses. At the same time, backpropagation multilayer perceptrons do not too closely resemble brain-like structures, and for near-complete efficiency they require synapses to vary.
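To tie the output-node and hidden-node derivatives together, here is a minimal, self-contained sketch of one backpropagation step for a single-hidden-layer perceptron with logistic activations. It is written in Python/NumPy under stated assumptions: all function names, argument names and shapes are illustrative, not taken from the text or any particular library.

```python
import numpy as np

def sigmoid(v):
    """Logistic activation phi(v) = (1 + e^{-v})^{-1}."""
    return 1.0 / (1.0 + np.exp(-v))

def backprop_step(x, d, W_hidden, W_output, eta=0.1):
    """One forward/backward pass; W_hidden[j, i] = w_ji, W_output[k, j] = w_kj."""
    # Forward pass: induced local fields and activations.
    v_hid = W_hidden @ x
    y_hid = sigmoid(v_hid)
    v_out = W_output @ y_hid
    y_out = sigmoid(v_out)

    # Output nodes: -d(xi)/d(v_k) = e_k(n) * phi'(v_k), with phi' = y * (1 - y).
    e = d - y_out
    delta_out = e * y_out * (1.0 - y_out)

    # Hidden nodes: -d(xi)/d(v_j) = phi'(v_j) * sum_k delta_k * w_kj.
    delta_hid = y_hid * (1.0 - y_hid) * (W_output.T @ delta_out)

    # Gradient-descent updates: Delta w = eta * delta * (input to that weight).
    W_output = W_output + eta * np.outer(delta_out, y_hid)
    W_hidden = W_hidden + eta * np.outer(delta_hid, x)
    return W_hidden, W_output
```

Note how the hidden-layer deltas can only be computed once the output-layer deltas are known; this is the backpropagation the text describes.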

A typical multilayer perceptron would look something like:

[Figure: a multilayer feedforward network with weights]

where (x_1, …, x_p) are the predictor-variable values presented to the input layer. Note that the standardized values for these variables lie in the range -1 to +1. w_{ji} is the weight that multiplies each value coming from input neuron i, and u_j is the combined value resulting from the addition of the weighted values in the hidden layer. This weighted sum is fed into a sigmoidal/non-linear transfer function \sigma, which outputs a value h_j before it is distributed to the output layer. Arriving at a neuron in the output layer, the value from each hidden-layer neuron is multiplied by a weight w_{kj}, and the resulting weighted values are added together to produce a combined value v_k. This weighted sum v_k is fed into a sigmoidal/non-linear transfer function \sigma, which outputs the values y_k, the outputs of the network.
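A minimal sketch of the forward pass just described (again Python/NumPy, with variable names mirroring the text's symbols rather than any library's API):

```python
import numpy as np

def forward_pass(x, W_hidden, W_output):
    """x holds the standardized predictors (x_1, ..., x_p), each in [-1, +1]."""
    u = W_hidden @ x   # u_j: weighted sums formed in the hidden layer
    h = np.tanh(u)     # h_j: sigmoidal transfer sigma applied to u_j
    v = W_output @ h   # v_k: weighted sums formed in the output layer
    y = np.tanh(v)     # y_k: the outputs of the network
    return y
```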

Reality as Contingently Generating the Actual

If reality could be copied, the mapping or simulation of a neural network digitally would be possible. This scheme has some nagging problems, chief among them the simulated nature of the neural network: the possibility of a mismatch between reality and the natural neural network leads to what could be termed non-reductionist essentialism. The other option that could aid a better apprehension of hyperrealism and simulation in terms of neural networks comes from Roy Bhaskar's idea of critical realism. This aspect differs significantly from Baudrillard's in that the latter takes reality as potentially open to copying, whereas the former delves into reality as a generative mechanism. On Bhaskar's take, reality is not something that can be copied, but something that contingently generates the actual.

Lyotardian Libidinal Energies (Addendum)

For Lyotard, the turn away from the philosophy encompassing libidinal energy to PoMo was primarily based on his concern with the problem of representation and his commitment to an ontology of events. In Libidinal Economy, Lyotard gets quite tied up in trying to resolve the problems associated with structures that harbor libidinal energies, as they tend to become hegemonic. Invested with such hegemonic status, these structures are prone to deny other libidinal intensities/energies by claiming the sole right to stand as stable structures, and subsequently become nihilistic and limiting. Since libidinal energies can exist only within structures, Lyotard fails to show a way out for liberating desire, and he does not set up a place beyond representation that would be immune to the effects of nihilism; instead, he comes up with a metaphysical system in which both structures and intensities are essential components of a functioning libidinal economy. The nihilism of structures could only be checked by an adherence to notions of dissimulation, by considering libidinal energy itself as the event, dormant with under-exploited potentiality, waiting for its release into other structures.

SMART CITIES: WHERE DOES THE FINANCIAL VIABILITY LIE?

‘Smart’ as an adjective or as a noun isn’t really the question anymore, as the growing narratives around it seem to impose the latter part of speech almost overwhelmingly. This could be a political strategy wrought by policy makers, IT honchos and urban planners, amongst others, to make a vision as expansive as it could be exclusionary. The exclusionary component only precipitates the divide with the inclusionary, thus swelling the excluded even denser. Turning from this generic stance about the notion of ‘Smart’, it is imperative to look at a juggernaut that is swamping the political, the policy makers, the architects-cum-urban planners, the financiers, and most crucially the urban dwellers belonging to a myriad of social and economic strata. While a few look at this as an opportunity to revamp the urbane, for the majority it is turning out to be a silent battle to eke out a future amidst uncertainty. In a nutshell, the viability of such ambitions depends on a clear-sightedness whose void seems to be filling up via the ersatz.

[Figure: challenges to achieving the smart cities goal]

One thing that needs to be clarified here, though, is that the use of ‘smart’ is quite similar to the use of ‘post’ in some theories, where it is not the temporal factor that is accounted for, but rather an integrative one, a coalition of temporal and spatial aspects. Many a time, intentions such as these are constructed as a need, and the difference between that and necessity is subtle. Smart cities were conceived precisely because of such rationales, rather than as cahoots of impending neo-colonization conspiracies. There is an urban drift, and this dense diaspora is allegedly associated with pollution, resource crunch and dwindling infrastructure, resulting in a stagflation of economic growth. So, instead of having decentralised kiosks, the idea is to have a central control addressing such constraining conditions. With a central control, the monitoring of inputs and outputs is brought in-house through networking solutions. What’s more, this is more of an e-governance schema. But, digging deep, this e-governance could go into a tailspin because of two burning issues: is it achievable, and how far into the future can one look as regards the handling and carrying capacity of data over these network solutions, since the load might rise exponentially, without falling under any mathematical formula, and could easily collapse the grid supporting this network or these networks? Strangely enough, this hypothesising takes on political robes and starts thinking of technology as its enemy no. 1. There is no resolution to this constructed bitterness unless one accommodates the one into the other, whichever way that could be. The doctrine of Ludditism is the cadence of the dirge for the ‘Leftists’ today. The reality, irreality or surreality of smart cities is a corrosion of the conformity of ideals spoken from the loudspeakers of the ‘Left’, merely grounded in violations of basic human rights, and refusing to flip the coin to rationally transform the wrongs into rights.

While these discourses aren’t few and far between, what merits analysis is the finance industry and its allied instruments. Now that the Government of India has scored a century of planned smart cities, the view that they will become dystopias and centres of social apathy and apartheid is gaining momentum in one camp, due to a host of issues, one amongst which is the finance raised to see them materialise. In the immediate aftermath of Modi’s election, the BJP Government announced Rs. 70.6 billion for 100 smart cities, which shrank in the following year to Rs. 1.4 billion. But, aside from what has been allocated, the project is not run by the Union Government alone; it is an integrative approach between the Union Government and the State Governments/Urban Local Bodies (ULBs), catalysed through a Special Purpose Vehicle (SPV). For understanding smart cities, it is obligatory to understand the viability of these SPVs through their architecture and governance. The SPVs are invested with the responsibilities to plan, appraise, approve, release funds for, implement and evaluate development projects within the ambit of smart cities. According to the Union Government, every smart city will be headed by a full-time CEO, and will have nominees from the central and state governments in addition to members from the elected ULBs on its Board. Who the CEO will be isn’t clearly defined, but if experts are to be believed, these might come from the corporate world. Another justification lending credence to this possibility is the proclivity of the Government to go in for Public-Private Partnerships (PPPs). The states and ULBs would ensure that a substantial and dedicated revenue stream is made available to the SPV. Once this is accomplished, the SPV would have to become self-sustainable by building its own creditworthiness, realised through its mechanisms for raising resources from the market. It needs to be re-emphasised here that the role of the Union Government, as far as allocation is concerned, is in the form of a tied grant for creating infrastructure for the larger benefit of the people. This role, though, lacks clarity unless juxtaposed with the agenda that the Central Government has set out to achieve through PPPs, JVs, subsidiaries and turnkey contracts.

If one were to look at the architecture of SPV holdings, things get a bit muddled, in that not only is the SPV a limited company registered under the Companies Act 2013, but the promotion of the SPV lies chiefly with the state/union territory and the elected ULB on a 50:50 equity holding. The state/UT and the ULB have full onus to call upon private players as part of the equity, but with the stringent condition that the shares of the state/UT and the ULB always remain equal and, taken together, in majority. So, with permutations and combinations, it is deduced that the maximum share a private player can have is 48%, with the state/UT and the ULB holding 26% each. Initially, to ensure a minimum capital base for the SPV, the paid-up capital of the SPV should be such that the ULB’s share is at least equal to Rs. 100 crore, with an option to increase it to the full amount of the first instalment provided by the Government of India, which stands at Rs. 194 crore for each smart city. With matching capital of Rs. 100 crore provided by the state/UT, the total initial paid-up capital for the SPV would rise to Rs. 200 crore; and if one is to consider the GoI contribution of Rs. 194 crore, then the total initial capital for the SPV would be Rs. 394 crore. This paragraph commenced by saying the finances are muddled, but on the contrary this arrangement looks pretty logical, right? There is more than meets the eye here, since a major component is the equity shareholding, and from here on things begin to get complex. This is also the stage where the SPV gets down to fulfilling its responsibilities, and where the role of the elected representatives of the people, whether at the state/UT level or at the ULB level, appears to get hazy. Why is this so? The Board of the SPV, despite having these elected representatives, offers no certainty that the decisions of those they represent will make a strong mark when the SPV discharges its responsibilities. SPVs, now armed with finances, can take consultative expertise on board from the market, thus taking on the role befitting their installation in the first place, i.e. going along with the privatisation of services in tune with market-oriented neoliberal policies. Probably the only saving grace in such a scenario would be a list of such consultative experts drafted by the Ministry of Urban Development, which itself might be riding the highs of neoliberalism in accordance with the Government’s stance at the centre. Such an arrangement essentially dresses up the Special Economic Zones in new clothes, sewn with exemptions from taxes, duties and stringent labour laws, bringing forth the most dangerous aspect of smart cities, viz. privatised governance.
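To make the shareholding and capital arithmetic above concrete, a small sketch using only the figures quoted in the text (an illustration, not a statutory formula; variable names are mine):

```python
# Equity: state/UT and ULB hold equal shares that together stay in majority,
# which caps a private player at 48% against 26% each for state/UT and ULB.
state_ut, ulb = 26, 26
private_player = 100 - (state_ut + ulb)              # 48
assert state_ut == ulb and state_ut + ulb > private_player

# Initial paid-up capital (Rs. crore): the ULB's Rs. 100 crore plus the
# matching Rs. 100 crore, then the GoI first instalment of Rs. 194 crore.
ulb_capital, matching_capital, goi_instalment = 100, 100, 194
initial_paid_up = ulb_capital + matching_capital     # 200
with_goi = initial_paid_up + goi_instalment          # 394
```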

Whatever be the template of these smart cities, social apathy would be built into it, where only two kinds of inhabitants would walk free: economically productive consumers and economically productive producers.