Stuxnet is a threat targeting a specific industrial control system likely in Iran, such as a gas pipeline or power plant. The ultimate goal of Stuxnet is to sabotage that facility by reprogramming programmable logic controllers (PLCs) to operate as the attackers intend them to, most likely out of their specified boundaries.

Stuxnet was discovered in July 2010, but is confirmed to have existed at least one year prior and likely even earlier. The majority of infections were found in Iran. Stuxnet contains many features, such as:

  • Self-replicates through removable drives by exploiting a vulnerability that allows automatic execution.
    Microsoft Windows Shortcut ‘LNK/PIF’ Files Automatic File Execution Vulnerability (BID 41732)
  • Spreads in a LAN through a vulnerability in the Windows Print Spooler.
    Microsoft Windows Print Spooler Service Remote Code Execution Vulnerability (BID 43073)
  • Spreads through SMB by exploiting the Microsoft Windows Server Service RPC Handling Remote Code Execution Vulnerability (BID 31874).
  • Copies and executes itself on remote computers through network shares.
  • Copies and executes itself on remote computers running a WinCC database server.
  • Copies itself into Step 7 projects in such a way that it automatically executes when the Step 7 project is loaded.
  • Updates itself through a peer-to-peer mechanism within a LAN.
  • Exploits a total of four unpatched Microsoft vulnerabilities: two are the previously mentioned self-replication vulnerabilities, and the other two are escalation-of-privilege vulnerabilities that have yet to be disclosed.
  • Contacts a command and control server that allows the hacker to download and execute code, including updated versions.
  • Contains a Windows rootkit that hides its binaries.
  • Attempts to bypass security products.
  • Fingerprints a specific industrial control system and modifies code on the Siemens PLCs to potentially sabotage the system.
  • Hides modified code on PLCs, essentially a rootkit for PLCs.

The following is a possible attack scenario. It is only speculation driven by the technical features of Stuxnet.

Industrial control systems (ICS) are operated by specialized assembly-like code on programmable logic controllers (PLCs). The PLCs are often programmed from Windows computers that are not connected to the Internet or even to the internal network. In addition, the industrial control systems themselves are unlikely to be connected to the Internet.

First, the attackers needed to conduct reconnaissance. As each PLC is configured in a unique manner, the attackers would first need the ICS’s schematics. These design documents may have been stolen by an insider or even retrieved by an early version of Stuxnet or another malicious binary. Once the attackers had the design documents and potential knowledge of the computing environment in the facility, they would develop the latest version of Stuxnet. Each feature of Stuxnet was implemented for a specific reason and for the final goal of potentially sabotaging the ICS.

Attackers would need to set up a mirrored environment that included the necessary ICS hardware, such as PLCs, modules, and peripherals, in order to test their code. The full cycle may have taken six months and five to ten core developers, not counting numerous other individuals such as quality assurance and management.

In addition, their malicious binaries contained driver files that needed to be digitally signed to avoid suspicion. The attackers compromised two digital certificates to achieve this task. They would have needed to obtain the certificates from someone who may have physically entered the premises of the two companies and stolen them, as the two companies are in close physical proximity.

To infect its target, Stuxnet would need to be introduced into the target environment. This may have occurred by infecting a willing or unknowing third party, such as a contractor who perhaps had access to the facility, or an insider. The original infection may have been introduced by removable drive.

Once Stuxnet had infected a computer within the organization, it began to spread in search of Field PGs, which are typical Windows computers used to program PLCs. Since most of these computers are non-networked, Stuxnet would first try to spread to other computers on the LAN through a zero-day vulnerability, a two-year-old vulnerability, infected Step 7 projects, and removable drives. Propagation through a LAN likely served as the first step, and propagation through removable drives as a means to cover the final hop to a Field PG that is never connected to an untrusted network.

While attackers could control Stuxnet with a command and control server, as mentioned previously the key computer was unlikely to have outbound Internet access. Thus, all the functionality required to sabotage a system was embedded directly in the Stuxnet executable. Updates to this executable would be propagated throughout the facility through a peer-to-peer method established by Stuxnet.

When Stuxnet finally found a suitable computer, one that ran Step 7, it would then modify the code on the PLC. These modifications likely sabotaged the system, which was presumably considered a high-value target given the large resources invested in the creation of Stuxnet.

Victims attempting to verify the issue would not see any rogue PLC code as Stuxnet hides its modifications.

While the choice of self-replication methods may have been necessary to ensure the attackers would find a suitable Field PG, it also caused noticeable collateral damage by infecting machines outside the target organization. The attackers may have considered the collateral damage a necessity in order to effectively reach the intended target. The attackers also likely completed their initial attack by the time they were discovered.

Stuxnet dossier

Conjuncted: Gross Domestic Product. Part 2.


The topology of the World Trade, which is encapsulated in its adjacency matrix aij(t) defined by

aij(t) ≡ 1 if fij(t) > 0
aij(t) ≡ 0 if fij(t) = 0

strongly depends on the GDP values wi. Indeed, the problem can be mapped onto the so-called fitness model, where it is assumed that the probability pij of a link from i to j is a function p(xi, xj) of the values of a fitness variable x assigned to each vertex and drawn from a given distribution. The importance of this model lies in the possibility of writing all the expected topological properties of the network (whose specification in principle requires knowledge of the N² entries of its adjacency matrix) in terms of only N fitness values. Several topological properties, including the degree distribution, the degree correlations and the clustering hierarchy, are determined by the GDP distribution. Moreover, an additional understanding of the World Trade as a directed network comes from the study of its reciprocity, which represents the strong tendency of the network to form pairs of mutual links pointing in opposite directions between two vertices. In this case too, the observed reciprocity structure can be traced back to the GDP values.

The probability that at time t a link exists from i to j (aij = 1) is empirically found to be

pt [xi(t), xj(t)] = [α(t) xi(t) xj(t)]/[1 + β(t) xi(t) xj(t)]

where xi is the rescaled GDP and the parameters α(t) and β(t) can be fixed by imposing that the expected number of links

Lexp(t) = ∑i≠j pt [xi(t), xj(t)]

equals its empirical value

L(t) = ∑i≠j aij(t)

and that the expected number of reciprocated links

Lrexp(t) = ∑i≠j pt[xi(t), xj(t)] pt[xj(t), xi(t)]

equals its observed value

Lr(t) = ∑i≠j aij(t) aji(t)
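
The conditions above can be illustrated numerically. The sketch below is a toy example, not real trade data: the fitness values x and the parameters alpha and beta are arbitrary choices, and it simply checks that a directed network sampled from p(xi, xj) reproduces the expected link and reciprocity counts.

```python
import numpy as np

# Toy numerical sketch: the fitness values x and the parameters alpha, beta
# are arbitrary illustrative choices, not fitted values. We check that a
# network sampled from p(x_i, x_j) reproduces the expected numbers of links
# and of reciprocated links.
rng = np.random.default_rng(0)
N = 300
x = rng.lognormal(mean=0.0, sigma=0.5, size=N)
alpha, beta = 0.5, 0.5

# p(x_i, x_j) = alpha * x_i * x_j / (1 + beta * x_i * x_j), as in the text.
P = alpha * np.outer(x, x) / (1.0 + beta * np.outer(x, x))
np.fill_diagonal(P, 0.0)   # no self-loops

# Expected number of links and of reciprocated links (sums over i != j).
L_exp = P.sum()
Lr_exp = (P * P.T).sum()

# Sample a directed network with these link probabilities and count empirically.
A = (rng.random((N, N)) < P).astype(int)
np.fill_diagonal(A, 0)
L_emp = A.sum()
Lr_emp = (A * A.T).sum()
```

For large N the empirical counts concentrate around the expected values; in the empirical study the logic runs the other way, with α(t) and β(t) fixed so that the expected counts match the observed ones.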

This particular structure of the World Trade topology can be tested by comparing various expected topological properties with the empirical ones. For instance, we can compare the empirical and the theoretical plots of vertex degrees (at time t) versus their rescaled GDP xi(t). Note that since pt[xi(t), xj(t)] is symmetric under the exchange of i and j, at any given time the expected in-degree and the expected out-degree of a vertex i are equal. We denote both by kiexp(t), which can be expressed as

kiexp(t) = ∑j≠i pt[xi(t), xj(t)]

Since the number of countries N(t) increases in time, we define the rescaled degrees

k̃i(t) ≡ ki(t)/[N(t) − 1]

that always represent the fraction of vertices which are connected to i (the term −1 comes from the fact that there are no self-loops in the network, hence the maximum degree is always N − 1). In this way, we can easily compare the data corresponding to different years and network sizes. The results are shown in the figure below for various snapshots of the system.
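
A minimal sketch of the expected-degree computation, under the same toy assumptions (arbitrary fitness values and parameters, no real GDP data):

```python
import numpy as np

# Toy sketch of the expected-degree prediction k_i^exp = sum_{j != i} p(x_i, x_j),
# rescaled by N - 1. The fitness values x and parameters alpha, beta are
# arbitrary illustrative choices.
rng = np.random.default_rng(1)
N = 200
x = np.sort(rng.lognormal(0.0, 0.5, size=N))   # sorted so the trend is visible
alpha, beta = 0.5, 0.5

P = alpha * np.outer(x, x) / (1.0 + beta * np.outer(x, x))
np.fill_diagonal(P, 0.0)    # no self-loops

k_exp = P.sum(axis=1)       # expected degree (in- and out- coincide: p is symmetric)
k_tilde = k_exp / (N - 1)   # rescaled degree: fraction of other vertices linked to i
```

Because p(xi, xj) is increasing in xi, the rescaled expected degree is a monotonically increasing function of the fitness value, which is the trend compared against data in the figure.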


Figure: Plot of the rescaled degrees versus the rescaled GDP at four different years, and comparison with the expected trend. 

The empirical trends are in accordance with the expected ones. Then we can also compare the cumulative distribution P>exp(k̃exp) of the expected degrees with the empirical degree distributions P>in(k̃in) and P>out(k̃out). The results are shown in the following figure and show good agreement between the theoretical prediction and the observed behavior.


Figure: Cumulative degree distributions of the World Trade topology for four different years and comparison with the expected trend. 

Note that the accordance with the predicted behaviour is extremely important, since the expected quantities are computed by using only the N GDP values of all countries, with no information regarding the N² trade values. On the other hand, the empirical properties of the World Trade topology are extracted from trade data, with no knowledge of the GDP values. The agreement between the properties obtained by using these two independent sources of information is therefore surprising. This also shows that the World Trade topology crucially depends on the GDP distribution ρ(x).

Music Composition Using Long Short-Term Memory (LSTM) Recurrent Neural Networks


The most straightforward way to compose music with a Recurrent Neural Network (RNN) is to use the network as a single-step predictor. The network learns to predict notes at time t + 1 using notes at time t as inputs. After learning has been stopped, the network can be seeded with initial input values – perhaps from training data – and can then generate novel compositions by using its own outputs to generate subsequent inputs. This note-by-note approach was first examined by Todd.
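
The seed-and-feed-back loop described above can be sketched as follows. The `predict_next` function is a hypothetical stand-in for a trained RNN's single-step output; here it is just a uniform transition table over a toy pitch alphabet.

```python
import numpy as np

# Sketch of note-by-note generation: a single-step predictor is seeded with
# an initial note, then fed its own outputs as subsequent inputs.
# `predict_next` stands in for a trained RNN; here it is a fixed transition
# table over a toy pitch alphabet (an assumption for illustration).
rng = np.random.default_rng(42)
PITCHES = ["C", "D", "E", "F", "G", "A", "B"]

# Toy "learned" transition probabilities (each row sums to 1).
T = np.ones((7, 7)) / 7.0

def predict_next(pitch_index):
    """Sample the next note from the predictor's output distribution."""
    return int(rng.choice(7, p=T[pitch_index]))

def compose(seed, length):
    """Seed the predictor, then loop its outputs back in as inputs."""
    melody = [seed]
    current = PITCHES.index(seed)
    for _ in range(length - 1):
        current = predict_next(current)
        melody.append(PITCHES[current])
    return melody

melody = compose("C", 16)
```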

A feed-forward network would have no chance of composing music in this fashion. Lacking the ability to store any information about the past, such a network would be unable to keep track of where it is in a song. In principle an RNN does not suffer from this limitation. With recurrent connections it can use hidden layer activations as memory and thus is capable of exhibiting (seemingly arbitrary) temporal dynamics. In practice, however, RNNs do not perform very well at this task. As Mozer aptly wrote about his attempts to compose music with RNNs,

While the local contours made sense, the pieces were not musically coherent, lacking thematic structure and having minimal phrase structure and rhythmic organization.

The reason for this failure is likely linked to the problem of vanishing gradients (Hochreiter et al.) in RNNs. In gradient methods such as Back-Propagation Through Time (BPTT) (Williams and Zipser) and Real-Time Recurrent Learning (RTRL) error flow either vanishes quickly or explodes exponentially, making it impossible for the networks to deal correctly with long-term dependencies. In the case of music, long-term dependencies are at the heart of what defines a particular style, with events spanning several notes or even many bars contributing to the formation of metrical and phrasal structure. The clearest example of these dependencies are chord changes. In a musical form like early rock-and-roll music for example, the same chord can be held for four bars or more. Even if melodies are constrained to contain notes no shorter than an eighth note, a network must regularly and reliably bridge time spans of 32 events or more.

The most relevant previous research is that of Mozer, who did note-by-note composition of single-voice melodies accompanied by chords. In the “CONCERT” model, Mozer used sophisticated RNN procedures including BPTT, log-likelihood objective functions and probabilistic interpretation of the output values. In addition to these neural network methods, Mozer employed a psychologically-realistic distributed input encoding (Shepard) that gave the network an inductive bias towards chromatically and harmonically related notes. He used a second encoding method to generate distributed representations of chords.



A BPTT-trained RNN does a poor job of learning long-term dependencies. To offset this, Mozer used a distributed encoding of duration that allowed him to process a note of any duration in a single network timestep. By representing a note, rather than a slice of time, in a single timestep, the number of time steps to be bridged by the network in learning global structure is greatly reduced. For example, allowing sixteenth notes in a network that encodes slices of time directly requires that a whole note span at minimum 16 time steps. Though his networks regularly outperformed third-order transition table approaches, they failed in all cases to find global structure. In analyzing this performance, Mozer suggests that for the note-by-note method to work it is necessary that the network be able to induce structure at multiple levels.

A First Look at Music Composition using LSTM Recurrent Neural Networks

Deanonymizing Tor


My anonymity is maintained in Tor as long as no single entity can link me to my destination. If an attacker controls both the entry and the exit of my circuit, my anonymity can be compromised, as the attacker is able to perform traffic or timing analysis to link my traffic to the destination. For hidden services, this implies that the attacker needs to control the two entry guards used for the communication between the client and the hidden service. This significantly limits the attacker, as the probability that both the client and the hidden service select a malicious entry guard is much lower than the probability that only one of them makes a bad choice.

Our goal is to show that it is possible for a local passive adversary to deanonymize users with hidden service activities without the need to perform end-to-end traffic analysis. We assume that the attacker is able to monitor the traffic between the user and the Tor network. The attacker’s goal is to identify that a user is either operating or connected to a hidden service. In addition, the attacker then aims to identify the hidden service associated with the user.

In order for our attack to work effectively, the attacker needs to be able to extract circuit-level details such as the lifetime, number of incoming and outgoing cells, sequences of packets, and timing information. We discuss the conditions under which our assumptions are true for the case of a network admin/ISP and an entry guard.

Network administrator or ISP: A network administrator (or ISP) may be interested in finding out who is accessing a specific hidden service, or whether a hidden service is being run from the network. Under some conditions, such an attacker can extract circuit-level knowledge from TCP traces by monitoring all the TCP connections between me and my entry guards. For example, if only a single active circuit is used in every TCP connection to the guards, the TCP segments can easily be mapped to the corresponding Tor cells. While it is hard to estimate how often this condition occurs in the live network, as users have different usage models, we argue that the probability of observing this condition increases over time.

Malicious entry guard: Entry guard status is bestowed upon relays in the Tor network that offer plenty of bandwidth and demonstrate reliable uptime for a few days or weeks. To become one, an attacker only needs to join the network as a relay, keep their head down and wait. The attacker can then focus their efforts to deanonymize users and hidden services on a much smaller volume of traffic. The next step is to observe the traffic and identify what’s going on inside it – something the researchers achieved with a technique called website fingerprinting. Because each web page is different, the network traffic it generates as it is downloaded is different too. Even if you cannot see the content inside the traffic, you can identify the page from the way it passes through the network, if you have seen it before.

Controlling entry guards allows the adversary to perform the attack more realistically and effectively. Entry guards are in a perfect position to perform our traffic analysis attacks since they have full visibility into Tor circuits. In today’s Tor network, each OP chooses 3 entry guards and uses them for 45 days on average, after which it switches to other guards. For circuit establishment, those entry guards are chosen with equal probability. Every entry guard thus relays on average 33.3% of a user’s traffic, and relays 50% of a user’s traffic if one entry guard is down. Note that Tor is currently considering using a single fast entry guard for each user. This would provide the attacker with even better circuit visibility, which would further amplify the effectiveness of our attack. This adversary is shown in the figure below:


The Tor project has responded to the coverage generated by the research with an article of its own, written by Roger Dingledine, Tor’s project leader and one of the project’s original developers. Fingerprinting home pages is all well and good, he suggests, but hidden services aren’t just home pages:

…is their website fingerprinting classifier actually accurate in practice? They consider a world of 1000 front pages, but […] and other onion-space crawlers have found millions of pages by looking beyond front pages. Their 2.9% false positive rate becomes enormous in the face of this many pages – and the result is that the vast majority of the classification guesses will be mistakes.
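
Dingledine's objection is the classic base-rate problem. Below is a rough worked example; the total page count and the true-positive rate are assumptions for illustration, and only the 2.9% false-positive rate comes from the quoted passage.

```python
# Worked sketch of the base-rate effect behind Dingledine's objection.
# The true-positive rate and the size of onion space are assumptions for
# illustration; the 2.9% false-positive rate comes from the quoted text.
monitored_pages = 1_000          # front pages the classifier was trained on
total_pages = 5_000_000          # assumed size of crawlable onion space
fpr = 0.029                      # false-positive rate from the text
tpr = 0.90                       # assumed true-positive rate

true_positives = tpr * monitored_pages
false_positives = fpr * (total_pages - monitored_pages)

# Precision: the fraction of classifier "matches" that are actually
# monitored pages. With these numbers it is well under 1%, so the vast
# majority of classification guesses are mistakes.
precision = true_positives / (true_positives + false_positives)
```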

Distributed Representation Revisited


If the conventional symbolic model mandates the creation of a theory to address the issues pertaining to a problem, this mandatory theory construction is bypassed in distributed representational systems, since the latter are characterized by a large number of interactions occurring in a nonlinear fashion. No such attempts at theory construction are made in distributed representational systems, for fear that high-end abstraction would drain away the nutrient that is the hallmark of the model. Distributed representation is likely to encounter onerous issues if the size of the network inflates, but this issue is addressed through what is commonly known as the redundancy technique, whereby information generated by numerous interactions is encoded simultaneously, thus improving the adequacy with which information is presented to the network. In the words of Paul Cilliers, this is an important point, for,

the network used for the model of a complex system will have to have the same level of complexity as the system itself….However, if the system is truly complex, a network of equal complexity may be the simplest adequate model of such a system, which means that it would be just as difficult to analyze as the system itself.

Following, he also presents a caveat,

This has serious methodological implications for the scientists working with complex systems. A model which reduces the complexity may be easier to implement, and may even provide a number of economical descriptions of the system, but the price paid for this should be considered carefully.

One of the outstanding qualities of distributed representational systems is their adaptability – adaptability in the sense of reusing the network to offer solutions to other problems. What this connotes is that the learning the network has undergone for a problem ‘A’ could be shared for a problem ‘B’, since many of the input neurons are bounded by information learned through ‘A’ that could be applicable to ‘B’. In other words, the weights are the dictators for solving or resolving issues, no matter when and for which problem the learning took place. There is a slight hitch here: this quality of generalizing solutions could suffer if the level of abstraction starts to shoot up. This itself could be arrested if, in the initial stages, the right kind of framework is decided upon, thus reducing the hitch to an almost non-existent impacting factor. The very notion of weights is considered problematic by Sterelny, and he uses it to attack distributed representation in general and connectionism as a whole in particular. In an analogically witty paragraph, Sterelny says,

There is no distinction drawable, even in principle, between functional and non-functional connections. A positive linkage between two nodes in a distributed network might mean a constitutive link (eg. catlike, in a network for tiger); a nomic one (carnivore, in the same network), or a merely associative one (in my case, a particular football team that plays in black and orange).

It should be noted that this criticism of weights arises because, for Sterelny, the relationship between distributed representations and the micro-features that compose them is deeply problematic. If such is the criticism, then no doubt Sterelny still seems to be ensconced within the conventional semantic/symbolic model. And since all weights can take part in information processing, there is a sort of democratic liberty accorded to the weights within a distributed representation, and hence any talk of constitutive, nomic, or even associative links is mere humbug. Even if there is a prevailing disagreement that a large pattern of weights is not convincing enough as an explanation, as it tends to complicate matters, distributed representational systems work consistently enough compared with an alternative system that offers explanation through reasoning, and it would therefore be quite foolhardy to jettison distributed representation by the sheer force of criticism. If a neural network can be adapted to produce the correct answer for a number of training cases that is large compared with the size of the network, it can be trusted to respond correctly to previously unseen cases, provided they are drawn from the same population using the same distribution as the training cases – thus undermining the commonly held idea that explanations are a necessary feature of trustworthy systems (Baum and Haussler). Another objection that distributed representation faces is that, if representations are distributed, then the possibility of two different representations of the same thing cannot be ruled out.
So one of them is the true representation, while the other is only an approximation of it.(1) This is a criticism of merit and is attributed to Fodor, in his influential book titled Psychosemantics.(2) For, if there is only one representation, Fodor would not shy from saying that this is the yucky solution that folk psychology believes in. But since connectionism believes in the plausibility of indeterminate representations, the question of flexibility scores well and high over the conventional semantic/symbolic models – and is it not common sense to encounter flexibility in daily life? The other response to this objection comes from post-structuralist theories (Baudrillard is quite important here; see the first footnote below). The objection of a true representation and a copy of the true representation meets its pharmacy in post-structuralism, where meaning is constituted by synchronic as well as diachronic contextualities, thereby supplementing distributed representation with no need for concept and context, as they are inherent in the idea of such a representation itself. Sterelny still seems to ride on his obstinacy, and in a vitriolic tone poses his demand to know why a distributed representation should be regarded as a state of the system at all. Moreover, he says,

It is not clear that a distributed representation is a representation for the connectionist system at all…given that the influence of node on node is local, given that there is no processor that looks at groups of nodes as a whole, it seems that seeing a distributed representation in a network is just an outsider’s perspective on the system.

This is moving around in circles, if nothing more. Or maybe, he was anticipating what G. F. Marcus would write and echo to some extent in his book The Algebraic Mind. In the words of Marcus,

…I agree with Stemberger(3) that connectionism can make a valuable contribution to cognitive science. The only place we differ is that, first, he thinks that the contribution will be made by providing a way of eliminating symbols, whereas I think that connectionism will make its greatest contribution by accepting the importance of symbols, seeking ways of supplementing symbolic theories and seeking ways of explaining how symbols could be implemented in the brain. Second, Stemberger feels that symbols may play no role in cognition; I think that they do.

Whatever Sterelny claims, after most of the claims and counter-claims have been taken into account, the only conclusion for the time being is that distributed representation has not been undermined, his adamant position notwithstanding.

(1) This notion finds its parallel in Baudrillard’s Simulation. And subsequently, the notion would be invoked in studying the parallel nature. Of special interest is the order of simulacra in the period of post-modernity, where the simulacrum precedes the original, and the distinction between reality and representation vanishes. There is only the simulacrum and the originality becomes a totally meaningless concept.

(2) This book is known for putting folk psychology firmly on the theoretical ground by rejecting any external, holist and existential threat to its position.

(3) Joseph Paul Stemberger is a professor in the Department of Linguistics at The University of British Columbia in Vancouver, British Columbia, Canada, with primary interests in phonology, morphology, and their interactions. His theoretical orientations are towards Optimality Theory, employing his own version of the theory, and towards connectionist models.


Feed Forward Perceptron and Philosophical Representation

In a network that does not tie a node to any specific concept, the hidden layer is populated by neurons in such a way that each neuron is architecturally connected with every input-layer node. What happens to information passed into the network is interesting from the point of view of its distribution over all of the neurons that populate the hidden layer. This distribution strips any particular neuron within the hidden layer of any privileged status, which in turn means no privileged (ontological) status for any weights and nodes either. With the absence of any privileged status accorded to nodes, weights, or even neurons in the hidden layer, representation comes to mean something entirely different from what it normally meant in semantic networks: representations are not representative of any coherent concept. Such a scenario is representational with sub-symbolic features, and since all the weights have a share in participation, each time the network faces a task such as pattern recognition, the representation is what is called a distributed representation. The best example of such a distributed representation is the multilayer perceptron, a feedforward artificial neural network that maps sets of input data onto a set of appropriate outputs, and which finds use in image recognition, pattern recognition and even speech recognition.

A multilayer perceptron is characterized by each neuron using a nonlinear activation function that models the firing of biological neurons in the brain. The activation functions for the current application are sigmoids, and the equations are:

yi = Ф(vi) = tanh(vi)   and   yi = Ф(vi) = (1 + e^(−vi))^(−1)

where yi is the output of the ith neuron and vi is the weighted sum of the input synapses; the former function is a hyperbolic tangent ranging from −1 to +1, and the latter is equivalent in shape but ranges from 0 to +1. Learning takes place through backpropagation: the connection weights are changed after adjustments are made to the output compared with the expected result. To be on the technical side, let us see how backpropagation is responsible for learning in the multilayer perceptron.

Error in the output node j in the nth data point is represented by,

ej(n) = dj(n) − yj(n),

where d is the target value and y is the value produced by the perceptron. Corrections to the weights of the nodes are made so as to minimize the error in the entire output, given by

ξ(n) = 0.5 * ∑j ej²(n)

With the help of gradient descent, the change in each weight happens to be given by,

∆wji(n) = −η * (∂ξ(n)/∂vj(n)) * yi(n)

where yi is the output of the previous neuron and η is the learning rate, carefully selected to ensure that the weights converge to a response quickly enough without undergoing oscillations. Gradient descent is based on the observation that if a real-valued function F(x) is defined and differentiable in a neighborhood of a point ‘a’, then F(x) decreases fastest if one goes from ‘a’ in the direction of the negative gradient of F at ‘a’.
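
As a minimal illustration of that observation, the sketch below runs gradient descent on the illustrative function F(x) = (x − 3)², repeatedly stepping from a starting point along the negative gradient toward the minimum at x = 3.

```python
# Minimal gradient-descent sketch of the observation above: F decreases
# fastest along the negative gradient. F(x) = (x - 3)^2 is an illustrative
# choice, as are the learning rate and the starting point.
def F(x):
    return (x - 3.0) ** 2

def dF(x):
    return 2.0 * (x - 3.0)    # gradient of F

eta = 0.1     # learning rate
x = 0.0       # starting point 'a'
for _ in range(100):
    x -= eta * dF(x)          # step in the direction of the negative gradient
```

Each step multiplies the distance to the minimum by (1 − 2η), so with η = 0.1 the iterate approaches x = 3 geometrically.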

The derivative to be calculated depends on the local induced field vj, that is susceptible to variations. The derivative is simplified for the output node,

− (∂ξ(n)/∂vj(n)) = ej(n) Ф′(vj(n))

where Ф′ is the first-order derivative of the activation function Ф, which itself does not vary. The analysis is more difficult for a change in weights to a hidden node, but it can be shown that the relevant derivative is

− (∂ξ(n)/∂vj(n)) = Ф′(vj(n)) ∑k − (∂ξ(n)/∂vk(n)) * wkj(n)

which depends on the change in weights of the kth nodes, which represent the output layer. So, to change the hidden layer weights, we must first change the output layer weights according to the derivative of the activation function, and this algorithm hence represents a backpropagation of the activation function. The perceptron as a distributed representation is gaining wider application in AI projects, but since biological knowledge is prone to change over time, its biological plausibility is doubtful. A major drawback, despite scoring over semantic networks or symbolic models, is its loose modeling of neurons and synapses. At the same time, backpropagation multilayer perceptrons do not too closely resemble brain-like structures, and for near-complete efficiency would require synapses to vary. A typical multilayer perceptron would look something like,


where (x1,….,xp) are the predictor variable values as presented to the input layer. Note that the standardized values for these variables are in the range −1 to +1. wji is the weight that multiplies each of the values coming from the input neurons, and uj is the combined value of the addition of the resulting weighted values in the hidden layer. The weighted sum is fed into a transfer function of a sigmoidal/non-linear kind, σ, that outputs a value hj before it is distributed to the output layer. Arriving at a neuron in the output layer, the value from each hidden-layer neuron is multiplied by a weight wkj, and the resulting weighted values are added together, producing a combined value vk. This weighted sum vk is fed into a transfer function of a sigmoid/non-linear kind, σ, that outputs the values yk, which are the outputs of the network.
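
The update rules derived above can be sketched end-to-end in a few lines. The code below is a toy one-hidden-layer perceptron with tanh activations trained by backpropagation on XOR; the layer sizes, learning rate η, and iteration count are illustrative choices, not prescribed by the text.

```python
import numpy as np

# Toy one-hidden-layer perceptron trained by backpropagation on XOR,
# implementing the update rules from the text: e_j(n) = d_j(n) - y_j(n),
# xi(n) = 0.5 * sum_j e_j^2(n), and Delta w = eta * delta * (layer input).
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
d = np.array([[0.], [1.], [1.], [0.]])     # target values d_j(n)

W1 = rng.normal(0, 0.5, (2, 4))            # input -> hidden weights w_ji
W2 = rng.normal(0, 0.5, (4, 1))            # hidden -> output weights w_kj
eta = 0.1                                  # learning rate

def loss():
    h = np.tanh(X @ W1)
    y = np.tanh(h @ W2)
    return 0.5 * np.sum((d - y) ** 2)      # xi(n)

initial = loss()
for _ in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1)
    y = np.tanh(h @ W2)
    e = d - y                              # e_j(n)
    # Output-node local gradient: e_j(n) * phi'(v_j(n)), with phi' = 1 - tanh^2.
    delta2 = e * (1.0 - y ** 2)
    # Hidden-node local gradient: phi'(v_j) * sum_k delta_k * w_kj.
    delta1 = (1.0 - h ** 2) * (delta2 @ W2.T)
    # Gradient-descent weight updates.
    W2 += eta * h.T @ delta2
    W1 += eta * X.T @ delta1

final = loss()   # the loss decreases over training
```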


‘Smart’ as an adjective or as a noun isn’t really the question anymore, as the growing narratives around it seem to impose the latter part of speech almost overwhelmingly. This could be a political strategy wrought by policy makers, IT honchos and urban planners, amongst others, to make a vision as expansive as it could be exclusionary. The exclusionary component only precipitates the divide from the inclusionary, thus swelling the former even denser. Turning from this generic stance on the notion of ‘Smart’, it is imperative to look at a juggernaut that is swamping the politicians, the policy makers, the architects-cum-urban planners, the financiers and, most crucially, the urban dwellers belonging to a myriad of social and economic strata. While a few look at this as an opportunity for revamping the urbane, for the majority it is turning out to be a silent battle to eke out a future amidst uncertainty. In a nutshell, the viability of such ambitions depends on clear-sightedness, a void that seems to be filling up with the ersatz.


Though one thing that needs to be clarified here is that the use of ‘smart’ is quite similar to the use of ‘post’ in some theories, where it is not the temporal factor that is accounted for but rather an integrative one, a coalition of temporal and spatial aspects. Many a time, intentions such as these are constructed as a need, and the difference between the need and the necessity is subtle. Smart cities were conceived precisely because of such rationales, rather than as cahoots of impending neo-colonisation conspiracies. There is an urban drift, and this dense diaspora is allegedly associated with pollution, a resource crunch and dwindling infrastructure, resulting in a stagflation of economic growth. So, instead of having decentralised kiosks, the idea is to have a central control addressing such constraining conditions. With a central control, inputs and outputs find their monitoring in-housed through networking solutions. What’s more, this is essentially an e-governance schema. But, digging deeper, this e-governance could go into a tailspin because of two burning issues: is it achievable, and how far into the future can one look as far as the handling and carrying capacity of data over these network solutions is concerned, since the load might rise exponentially without falling under any mathematical formula and could easily collapse the grid supporting these networks? Strangely enough, this hypothesising takes on political robes and starts thinking of technology as its enemy no. 1. There is no resolution to this constructed bitterness unless one accommodates one into the other, whichever way that could be. The doctrines of Ludditism are the cadence of the dirge for the ‘Leftists’ today.
The reality, irreality or surreality of smart cities is a corrosion of the conformity of ideals spoken from the loudspeakers of the ‘Left’, merely grounded in violations of basic human rights, and refusing to flip the coin to rationally transform the wrongs into rights.

While these discourses aren’t few and far between, what merits analysis is the finance industry and its allied instruments. Now that the Government of India has scored a century of planned smart cities, their becoming dystopias and centres of social apathy and apartheid is gaining momentum on one side of the camp due to a host of issues, one amongst which is the finance raised to see their materiality. In the immediate aftermath of Modi’s election, the BJP Government announced Rs. 70.6 billion for 100 smart cities, which shrank in the following year to Rs. 1.4 billion. But, aside from what has been allocated, the project is not run by the Government alone; it is an integrative approach between the Union Government and the State Governments/Urban Local Bodies (ULBs), catalysed through a Special Purpose Vehicle (SPV). For understanding smart cities, it is obligatory to understand the viability of these SPVs through their architecture and governance. These SPVs are invested with responsibilities to plan, appraise, approve, release funds for, implement and evaluate development projects within the ambit of smart cities. According to the Union Government, every smart city will be headed by a full-time CEO, and will have nominees from the central and state governments in addition to members from the elected ULBs on its Board. Who the CEO would be isn’t clearly defined, but if experts are to be believed, these might come from the corporate world. Another justification lending credence to this possibility is the proclivity of the Government to go in for Public-Private Partnerships (PPPs). The states and ULBs would ensure that a substantial and dedicated revenue stream is made available to the SPV. Once this is accomplished, the SPV would have to become self-sustainable by inculcating its own credit worthiness, realised through its mechanisms of raising resources from the market.
It needs to be re-emphasised here that the role of the Union Government, as far as allocation is concerned, is in the form of a tied grant for creating infrastructure for the larger benefit of the people. This role, though, lacks clarity unless juxtaposed with the agenda that the Central Government has set out to achieve, which is through PPPs, JVs, subsidiaries and turnkey contracts.

If one were to look at the architecture of SPV holdings, things get a bit muddled: not only is the SPV a limited company registered under the Companies Act 2013, but the promotion of the SPV lies chiefly with the state/union territory and the elected ULB on a 50:50 equity holding. The state/UT and the ULB have full onus to call upon private players as part of the equity, but with the stringent condition that the shares of the state/UT and the ULB would always remain equal and, upon addition, remain in majority above 50%. So, working through the permutations and combinations, the maximum share a private player can hold is 48%, with the state/UT and the ULB holding 26% each. Initially, to ensure a minimum capital base for the SPV, the paid-up capital should be such that the ULB’s share is at least Rs. 100 crore, with an option to increase it to the full amount of the first instalment provided by the Government of India, which stands at Rs. 194 crore for each smart city. With a matching capital of Rs. 100 crore provided by the state/UT, the total initial paid-up capital of the SPV would rise to Rs. 200 crore; and if one is to add the GoI contribution of Rs. 194 crore, the total initial capital for the SPV would be Rs. 394 crore. This paragraph commenced by saying the finances are muddled, but on the contrary this arrangement looks pretty logical, right? There is more than meets the eye here, since a major component is the equity shareholding, and from here on things begin to get complex. This is also the stage where the SPV gets down to fulfilling its responsibilities, and where the role of the elected representatives of the people, either at the state/UT level or at the ULB level, appears to get hazy. Why is this so? The Board of the SPV, despite having these elected representatives, offers no certainty that the decisions of those they represent will make a strong mark when the SPV applies its responsibilities.
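The shareholding and capital arithmetic above can be checked with a few lines of illustrative Python. The only constraint encoded, per the description, is the assumed rule that the state/UT and ULB shares stay equal and jointly remain in majority (above 50%); the function name is ours.

```python
def max_private_share(total=100):
    """Largest whole-percentage private stake such that the state/UT and
    ULB shares stay equal and together remain in majority (> 50%)."""
    for private in range(total, -1, -1):
        public = total - private
        if public % 2 == 0 and public > 50:   # equal split, joint majority
            return private, public // 2
    return 0, total // 2

private, each_public = max_private_share()
print(private, each_public)   # 48% private, 26% each for state/UT and ULB

# Initial paid-up capital (Rs. crore), per the scheme described:
ulb = 100                      # ULB's minimum share
state_match = 100              # matching contribution
goi_first_instalment = 194     # first GoI instalment per smart city
initial_paid_up = ulb + state_match                    # 200
with_goi = initial_paid_up + goi_first_instalment      # 394
print(initial_paid_up, with_goi)
```

The 48/26/26 split falls out of requiring the two public shares to be equal integers summing to more than half the equity, which is presumably the "permutations and combinations" the text alludes to.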
SPVs, now armed with finances, can take on board consultative expertise from the market, thus taking on the role befitting their installation in the first place, i.e. going along with the privatisation of services in tune with market-oriented neoliberal policies. Probably the only saving grace in such a scenario would be a list of such consultative experts drafted by the Ministry of Urban Development, which itself might be riding the highs of neoliberalism in accordance with the stance of the Government at the centre. Such an arrangement essentially dresses up the Special Economic Zones in new clothes sewn with tax exemptions, duties and stringent labour laws, bringing forth the most dangerous aspect of smart cities, viz. privatised governance.

Whatever be the template of these smart cities, social apathy would be built into it, where only two kinds of inhabitants would walk free: economically productive consumers and economically productive producers.