Black Hole Entropy in terms of Mass. Note Quote.


If M-theory is compactified on a d-torus it becomes a D = 11 – d dimensional theory with Newton constant

G_D = G_11/L^d = l_11^9/L^d —– (1)

A Schwarzschild black hole of mass M has a radius

R_s ~ M^(1/(D-3)) G_D^(1/(D-3)) —– (2)

According to Bekenstein and Hawking the entropy of such a black hole is

S = Area/(4 G_D) —– (3)

where Area refers to the D – 2 dimensional hypervolume of the horizon:

Area ~ R_s^(D-2) —– (4)

Thus

S ~ (1/G_D) (M G_D)^((D-2)/(D-3)) ~ M^((D-2)/(D-3)) G_D^(1/(D-3)) —– (5)
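
Since (2)–(4) fix the entropy only up to dimensionless prefactors, the scaling in (5) can be checked numerically. The following is a minimal sketch, assuming all prefactors in (2)–(4) are set to one and using arbitrary test values for M, G_D and D:

```python
# Minimal numerical check of the scaling (5), assuming unit prefactors in (2)-(4).
D, M, G = 7.0, 3.2, 0.9            # arbitrary test values with D > 3

Rs = (M * G) ** (1.0 / (D - 3))    # horizon radius, eq. (2)
area = Rs ** (D - 2)               # horizon hypervolume, eq. (4)
S = area / (4 * G)                 # Bekenstein-Hawking entropy, eq. (3)

S_scaling = M ** ((D - 2) / (D - 3)) * G ** (1.0 / (D - 3)) / 4   # eq. (5)
print(abs(S - S_scaling) < 1e-12)  # True: (5) follows from (2)-(4)
```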

From the traditional relativists’ point of view, black holes are extremely mysterious objects. They are described by unique classical solutions of Einstein’s equations. All perturbations quickly die away, leaving a featureless “bald” black hole with “no hair”. On the other hand, Bekenstein and Hawking have given persuasive arguments that black holes possess thermodynamic entropy and temperature, which point to the existence of a hidden microstructure. In particular, entropy generally represents the counting of hidden microstates which are invisible in a coarse grained description. An ultimate exact treatment of objects in matrix theory requires a passage to the infinite N limit. Unfortunately this limit is extremely difficult. For the study of Schwarzschild black holes, the optimal value of N (the value which is large enough to obtain an adequate description without involving many redundant variables) is of order the entropy, S, of the black hole.

Considering the minimum such value for N, we have

N_min(S) = M R_s = M (M G_D)^(1/(D-3)) = S —– (6)

We see that the value of N_min in every dimension is proportional to the entropy of the black hole. The thermodynamic properties of super Yang-Mills theory can be estimated by standard arguments only if S ≥ N. Thus we are caught between conflicting requirements. For N >> S we don’t have tools to compute. For N << S the black hole will not fit into the compact geometry. Therefore we are forced to study the black hole using N = N_min = S.

Matrix theory compactified on a d-torus is described by d + 1 dimensional super Yang-Mills theory with 16 real supercharges. For d = 3 we are dealing with a very well known and special quantum field theory. In the standard 3 + 1 dimensional terminology it is U(N) Yang-Mills theory with 4 supersymmetries and with all fields in the adjoint representation. This theory is very special in that, in addition to having electric/magnetic duality, it enjoys another property which makes it especially easy to analyze, namely it is exactly scale invariant.

Let us begin by considering it in the thermodynamic limit. The theory is characterized by a “moduli” space defined by the expectation values of the scalar fields φ. Since the φ also represent the positions of the original D0-branes in the noncompact directions, we choose them at the origin. This represents the fact that we are considering a single compact object – the black hole – and not several disconnected pieces.

The equation of state of the system is defined by giving the entropy S as a function of temperature. Since entropy is extensive, it is proportional to the volume Σ_3 of the dual torus. Furthermore, scale invariance ensures that S has the form

S = constant T^3 Σ_3 —– (7)

The constant in this equation counts the number of degrees of freedom. For vanishing coupling constant, the theory is described by free quanta in the adjoint of U(N). This means that the number of degrees of freedom is ~ N^2.

From the standard thermodynamic relation,

dE = TdS —– (8)

it follows that the energy of the system is

E ~ N^2 T^4 Σ_3 —– (9)

In order to relate entropy and mass of the black hole, let us eliminate temperature from (7) and (9).

S = N^2 Σ_3 (E/(N^2 Σ_3))^(3/4) —– (10)

Now the energy of the quantum field theory is identified with the light cone energy of the system of D0-branes forming the black hole. That is

E ≈ M^2 R/N —– (11)

Plugging (11) into (10)

S = N^2 Σ_3 (M^2 R/(N^3 Σ_3))^(3/4) —– (12)

This makes sense only when N << S; when N >> S, computing the equation of state is slightly trickier. At N ~ S, (12) is precisely the correct form for the black hole entropy in terms of the mass.
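
As a quick consistency check, the step from (7) and (9) to (10) can be verified numerically: with the order-one constants set to one, S = N^2 Σ_3 T^3 and E = N^2 Σ_3 T^4 indeed combine into (10). A minimal sketch with arbitrary values:

```python
# Eliminate T between the equation of state (7) and the energy (9),
# with all order-one constants set to one.
N, Sigma3, T = 50, 2.0, 0.7      # arbitrary test values

S = N**2 * Sigma3 * T**3         # eq. (7), with the N^2 counting of degrees of freedom
E = N**2 * Sigma3 * T**4         # eq. (9)

S_from_E = N**2 * Sigma3 * (E / (N**2 * Sigma3)) ** 0.75   # eq. (10)
print(abs(S - S_from_E) < 1e-9 * S)   # True
```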

Politics of Teleonomies of Blockchain…Thought of the Day 155.0


All of this starts with the dictum, “There are no men at work”.

The notion of blockchain is a decentralized polity. Blockchain is immutable, for once written on to the block, it is practically un-erasable. And most importantly, it is collateralized, in that, even in the absence of physical assets, the digital ownership could be traded as collateral. So, once you have a blockchain, you create a stack that could be database controlled using a Virtual Machine; think of it as some sort of digital twin. So, what exactly are the benefits of this decentralized digital polity? One crucial benefit is getting rid of intermediaries (unless one considers escrow accounts as an invisible intermediary, which seldom fulfills the definitional criteria). So, in short, digital twinning helps further social scalability by moving intermediaries into an invisible mode. Now, when blockchains are juxtaposed with algorithmically run machines (AI is just one branch of it), one gets the benefits of social scalability with analytics, the ever-increasing ocean of raw data hermeneutically sealed into information for utilitarian purposes. The advantages of decentralized polity and social scalability combine for a true democratic experience in an open-sourced modeling, where netizens (since we are still mired in the controversy of net neutrality) experience participatory democracy.
How would these combine with exigencies of scarce nature or resources? It is here that such hackathons combine the ingenuity of blockchain with AI in a process generally referred to as “mining”. This launch from nature as we know it is Nature 2.0. To repeat, decentralized polity and social scalability create a self-sustaining ecosystem in a sense of Anti-Fragility (yes, Taleb’s anti-fragile is a feedback into this) with autonomously created machine learning systems that are largely correctional in nature on one hand and improving learning capacities from the environment on the other. These two hands coordinate, giving rise to resource manipulation in lending a synthetic definition of materialities taken straight from physics textbooks and scared-to-apprehend materialities as thermodynamic quotients. And this is where AI steams up in a grand globalized alliance of machines embodying agencies always looking for cognitive enhancements to fulfill teleonomic life derived from the above stated thermodynamic quotient of randomness and disorder into gratifying sensibilities of self-sustenance. Synthetic biologists (of the Craig Venter and CRISPR-like lines) call this genetic programming, whereas singularitarians term it evolution, a break away from simulated evolution that defined the initial days of AI. The synthetic life is capable of decision making, the more it is subjected to the whims and fancies of the surrounding environment via the process of machine learning, leading to autonomous materialities with cognitive capabilities. These are parthenogenetic machines with unencumbered networking capacities. Such is the advent of self-ownership, and taking it to mean nature as we have hitherto known it is a cathectic fallacy in ethics. Taking it to mean differently, in the sense of establishing a symbiotic relationship between biology and machines to yield bio machines with characteristics of biomachinations, replication (reproduction, CC and CV to be thrown open for editing via genetic programming) and self-actualization, is what makes blockchain, in composite with AI and Synthetic Biology, Nature 2.0.
Yes, there are downsides to traditional mannerisms of thought, man playing god with nature and so on and so on…these are ethical constraints and thus political in undertone, but they rest on conservative theoretics and are thus unable to come to terms with the politics of resource abundance that the machinic promulgates…

Meillassoux, Deleuze, and the Ordinal Relation Un-Grounding Hyper-Chaos. Thought of the Day 41.0


As Heidegger demonstrates in Kant and the Problem of Metaphysics, Kant limits the metaphysical hypostatization of the logical possibility of the absolute by subordinating the latter to a domain of real possibility circumscribed by reason’s relation to sensibility. In this way he turns the necessary temporal becoming of sensible intuition into the sufficient reason of the possible. Instead, the anti-Heideggerian thrust of Meillassoux’s intellectual intuition is that it absolutizes the a priori realm of pure logical possibility and disconnects the domain of mathematical intelligibility from sensibility. (Ray Brassier’s The Enigma of Realism: Robin Mackay – Collapse_ Philosophical Research and Development. Speculative Realism.) Hence the chaotic structure of his absolute time: Anything is possible. Whereas real possibility is bound to correlation and temporal becoming, logical possibility is bound only by non-contradiction. It is a pure or absolute possibility that points to a radical diachronicity of thinking and being: we can think of being without thought, but not of thought without being.

Deleuze clearly situates himself in this camp when he argues with Kant and Heidegger that time as pure auto-affection (folding) is the transcendental structure of thought. Whatever exists, in all its contingency, is grounded by the first two syntheses of time and ungrounded by the third, disjunctive synthesis in the implacable difference between past and future. For Deleuze, it is precisely the eternal return of the ordinal relation between what exists and what may exist that destroys necessity and guarantees contingency. As a transcendental empiricist, he thus agrees with the limitation of logical possibility to real possibility. On the one hand, he thus also agrees with Hume and Meillassoux that “[r]eality is not the result of the laws which govern it.” The law of entropy or degradation in thermodynamics, for example, is unveiled as nihilistic by Nietzsche’s eternal return, since it is based on a transcendental illusion in which difference [of temperature] is the sufficient reason of change only to the extent that the change tends to negate difference. On the other hand, Meillassoux’s absolute capacity-to-be-other relative to the given (Quentin Meillassoux, Ray Brassier, Alain Badiou – After finitude: an essay on the necessity of contingency) falls away in the face of what is actual here and now. This is because although Meillassoux’s hyper-chaos may be like time, it also contains a tendency to undermine or even reject the significance of time. Thus one may wonder with Jon Roffe (Time_and_Ground_A_Critique_of_Meillassou) how time, as the sheer possibility of any future or different state of affairs, can provide the (non-)ground for the realization of this state of affairs in actuality. The problem is less that Meillassoux’s contingency is highly improbable than that his ontology includes no account of actual processes of transformation or development. As Peter Hallward (Levi Bryant, Nick Srnicek and Graham Harman (editors) – The Speculative Turn: Continental Materialism and Realism) has noted, the abstract logical possibility of change is an empty and indeterminate postulate, completely abstracted from all experience and worldly or material affairs. For this reason, the difference between Deleuze and Meillassoux seems to come down to what is more important (rather than what is more originary): the ordinal sequences of sensible intuition or the logical lack of reason.

But for Deleuze time as the creatio ex nihilo of pure possibility is not just irrelevant in relation to real processes of chaosmosis, which are both chaotic and probabilistic, molecular and molar. Rather, because it puts the Principle of Sufficient Reason as principle of difference out of real action, it is either meaningless with respect to the real or it can only have a negative or limitative function. This is why Deleuze replaces the possible/real opposition with that of virtual/actual. Whereas conditions of possibility always relate asymmetrically and hierarchically to any real situation, the virtual as sufficient reason is no less real than the actual since it is first of all its unconditioned or unformed potential of becoming-other.

Thermodynamics of Creation. Note Quote.


Just like the early-time cosmic acceleration associated with inflation, a negative pressure can be seen as a possible driving mechanism for the late-time accelerated expansion of the Universe as well. One of the earliest alternatives that could provide a mechanism producing such an accelerating phase of the Universe is a negative pressure produced by viscous or particle production effects. The viscous pressure contributions can be seen as small nonequilibrium contributions to the energy-momentum tensor for nonideal fluids.

Let us posit the thermodynamics of matter creation for a single fluid. To describe the thermodynamic states of a relativistic simple fluid we use the following macroscopic variables: the energy-momentum tensor T^αβ; the particle flux vector N^α; and the entropy flux vector s^α. The energy-momentum tensor satisfies the conservation law ∇_β T^αβ = 0, and here we consider situations in which it has the perfect-fluid form

T^αβ = (ρ + P) u^α u^β − P g^αβ

In the above equation ρ is the energy density, P is the isotropic dynamical pressure, g^αβ is the metric tensor and u^α is the fluid four-velocity (with normalization u^α u_α = 1).

The dynamical pressure P is decomposed as

P = p + Π

where p is the equilibrium (thermostatic) pressure and Π is a term present in scalar dissipative processes. Usually, it is associated with the so-called bulk pressure. In the cosmological context, besides this meaning, Π can also be relevant when particle number is not conserved. In this case, Π ≡ p_c is called the “creation pressure”. The bulk pressure can be seen as a correction to the thermostatic pressure when near to equilibrium; thus, it should always be smaller than the thermostatic pressure, |Π| < p. This restriction, however, does not apply to the creation pressure. So, when we have matter creation, the total pressure P may become negative and, in principle, drive an accelerated expansion.

The particle flux vector is assumed to have the following form

N^α = n u^α

where n is the particle number density. N^α satisfies the balance equation ∇_α N^α = nΓ, where Γ is the particle production rate. If Γ > 0, we have particle creation; particle destruction occurs when Γ < 0; and if Γ = 0 particle number is conserved.

The entropy flux vector is given by

s^α = n σ u^α

where σ is the specific (per particle) entropy. Note that the entropy must satisfy the second law of thermodynamics, ∇_α s^α ≥ 0. Here we consider adiabatic matter creation, that is, we analyze situations in which σ is constant. With this condition, by using the Gibbs relation, it follows that the creation pressure is related to Γ by

p_c = − (ρ + p) Γ/(3H)

where H = ȧ/a is the Hubble parameter, a is the scale factor of the Friedmann-Robertson-Walker (FRW) metric and the overdot means differentiation with respect to the cosmic time. If σ is constant, the second law of thermodynamics implies that Γ ≥ 0 and, as a consequence, particle destruction (Γ < 0) is thermodynamically forbidden. Since Γ ≥ 0, it follows that, in an expanding universe (H > 0), the creation pressure p_c cannot be positive.
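
A minimal numerical sketch of the relation above, with illustrative values only (all quantities dimensionless): it computes p_c = −(ρ + p)Γ/(3H) for a radiation-like fluid and shows that the total pressure P = p + p_c turns negative once the particle production rate Γ is large enough.

```python
# Creation pressure p_c = -(rho + p) * Gamma / (3 H) and total pressure P = p + p_c.
rho = 1.0        # energy density (illustrative value)
p = rho / 3.0    # thermostatic pressure, radiation-like equation of state
H = 0.5          # Hubble parameter

for Gamma in (0.0, 0.3, 1.5):                # particle production rates, Gamma >= 0
    p_c = -(rho + p) * Gamma / (3.0 * H)     # creation pressure (never positive for H > 0)
    P = p + p_c                              # total dynamical pressure
    print(f"Gamma = {Gamma:.1f}:  p_c = {p_c:+.3f},  P = {P:+.3f}")
# Gamma = 0 leaves P = p > 0; a large enough Gamma drives P < 0, the accelerating case.
```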

Honey-Trap Catalysis or Why Chemistry Mechanizes Complexity? Note Quote.

Was browsing through Yuri Tarnopolsky’s Pattern Chemistry and its effect on/from the humanities. Tarnopolsky states his “chemistry” + “humanities” connectivity ideas thus:


Practically all comments to the folk tales in my collection contained references to a book by the Russian ethnographer Vladimir Propp, who systematized Russian folk tales as ‘molecules‘ consisting of the same ‘atoms‘ of plot arranged in different ways, and even wrote their formulas. His book was published in the 30’s, when Claude Levi-Strauss, the founder of what became known as structuralism, was studying another kind of “molecules:” the structures of kinship in tribes of Brazil. Remarkably, this time a promise of a generalized and unifying vision of the world was coming from a source in humanities. What later happened to structuralism, however, is a different story, but the opportunity to build a bridge between sciences and humanities was missed. The competitive and pugnacious humanities could be a rough terrain.

I believed that chemistry carried a universal message about changes in systems that could be described in terms of elements and bonds between them. Chemistry was a particular branch of a much more general science about breaking and establishing bonds. It was not just about molecules: a small minority of hothead human ‘molecules’ drove a society toward change. A nation could be hot or cold. A child playing with Lego and a poet looking for a word to combine with others were in the company of a chemist synthesizing a drug.

Further on, Tarnopolsky, following his chemistry and then thermodynamics leads, found the pattern theory work of the Swedish mathematician Ulf Grenander, which he describes as follows:

In 1979 I heard about a mathematician who tried to list everything in the world. I easily found in a bookstore the first volume of Pattern Theory (1976) by Ulf Grenander, translated into Russian. As soon as I had opened the book, I saw that it was exactly what I was looking for and what I called ‘meta-chemistry’, i.e., something more general than chemistry, which included chemistry as an application, together with many other applications. I can never forget the physical sensation of a great intellectual power that gushed into my face from the pages of that book.

Although the mathematics in the book was well above my level, Grenander’s basic idea was clear. He described the world in terms of structures built of abstract ‘atoms’ possessing bonds to be selectively linked with each other. Body movements, society, pattern of a fabric, chemical compounds, and scientific hypothesis—everything could be described in the atomistic way that had always been considered indigenous for chemistry. Grenander called his ‘atoms of everything’ generators, which tells something to those who are familiar with group theory, but for the rest of us could be a good little metaphor for generating complexity from simplicity. Generators had affinities to each other and could form bonds of various strength. Atomism is a millennia old idea. In the next striking step so much appealing to a chemist, Ulf Grenander outlined the foundation of a universal physical chemistry able to approach not only fixed structures but also “reactions” they could undergo.

The two major means of control in chemistry and organic life are thermodynamic control (shift of equilibrium) and kinetic control (selective change of speed). People might not be aware that the same mechanisms are employed in social and political control, as well as in large historical events out of control, for example, the great global migration of people and jobs in our time or just the one-way flow of people across the US-Mexican border!!! Thus, with an awful degree of simplification, the intensification of a hunt for illegal immigrants looks like thermodynamic control by a honey trap, while the punishment for illegal employers is typical negative catalysis, although both may lead to a less stable and more stressed state. In both cases, a new equilibrium will be established, different equilibria housed upon different sets of conditions.


Should I treat people as molecules? Not unless I am from the Andromeda Galaxy. Complex-systems never come to global equilibrium, although local equilibrium can exist for some time. They can be in the state of homeostasis, which, again, is not the same as steady state in physics and chemistry. Homeostasis is the global complement of the classical local Darwinism of mutation and selection.

Taking other examples, the immigration discrimination in favor of educated or wealthy professionals is also a catalysis of affirmative action type. It speeds up the drive to equilibrium. Attractive salary for rare specialists is an equilibrium shift (honey trap) because it does not discriminate between competitors. Ideally, neither does exploitation of foreign labor. Bureaucracy is a global thermodynamic freeze that can be selectively overcome by 100% catalytic connections and bribes. Severe punishment for bribe is thermodynamic control. The use of undercover agents looks like a local catalyst: you can wait for the crook to make a mistake or you can speed it up. Tax incentive or burden is a shift of equilibrium. Preferred (or discouraging) treatment of competitors is catalysis (or inhibition).

There is no catalysis without selectivity and no selectivity without competition. Equilibrium, however, is not selective: it applies globally to the fluid enough system. Organic life, society, and economy operate by both equilibrium shift and catalysis. More examples: by manipulating the interest rate, the RBI employs thermodynamic control; by tax cuts for efficient use of energy, the government employs kinetic control, until saturation comes. Thermodynamic and kinetic factors are necessary for understanding Complex-systems, although only professionals can talk about them reasonably, but they are not sufficient. History is not chemistry because organic life and human society develop by design patterns, so to speak, or archetypal abstract devices, which do not follow from any physical laws. They all, together with René Thom morphologies, have roots not in thermodynamics but in topology. Anything that cannot be presented in terms of points, lines, and interactions between the points is far from chemistry. Topology is blind to metrics, but if Pattern Theory were not metrical, it would be just a version of graph theory.

Representation as a Meaningful Philosophical Quandary


The deliberation on representation indeed becomes a meaningful quandary, if most of the shortcomings are to be overcome, without actually accepting the way they permeate the scientific and philosophical discourse. The problem is more ideological than one could have imagined, since it is only within the space of this quandary that one can assume success in overthrowing the quandary. Unless the classical theory of representation that guides the expert systems has been accepted as existing, there is no way to dislodge the relationship of symbols and meanings that build up such systems, lest the predicament of falling prey to the Scylla of a metaphysically strong notion of meaningful representation as natural or the Charybdis of an external designer should gobble us up. If one somehow escapes these maliciously aporetic entities, representation as a metaphysical monster stands to block our progress. Is it really viable then to think of machines that can survive this representational foe, a foe that gets no aid from the clusters of internal mechanisms? The answer is very much in the affirmative, provided a consideration of such a non-representational system as continuous and homogeneous is done away with. And in its place one has functional units that are no longer representational ones, for these derive their efficiency and legitimacy through autopoiesis. What is required is to consider this notional representational critique of distributed systems on the objectivity of science, since objectivity as a property of science has an intrinsic value of independence from the subject who studies the discipline. Kuhn had some philosophical problems with this precise way of treating science as an objective discipline. For Kuhn, scientists operate under or within paradigms, thus obligating hierarchical structures. Such hierarchical structures ensure the position of scientists to voice their authority on matters of dispute, and when there is a crisis within, or for, the paradigm, scientists, to begin with, do not outright reject the paradigm, but try their level best at resolution of the same. In cases where resolution becomes a difficult task, an outright rejection of the paradigm would follow suit, thus effecting what is commonly called the paradigm shift. If such were the case, obviously, the objective tag for science takes a hit, and Kuhn argues in favor of a shift in the social order that science undergoes, signifying the subjective element. Importantly, these paradigm shifts occur to benefit scientific progress and, in almost all of the cases, occur non-linearly. Such a view no doubt slides Kuhn into a position of relativism, and has been the main point of attack on paradigm shifting. At the forefront of attacks has been Michael Polanyi and his bunch of supporters, whose work on the epistemology of science has much of the same ingredients, but was eventually deprived of fame. Kuhn was charged with plagiarism. The commonality of their arguments could be measured by a dissenting voice for objectivity in science. Polanyi thought of it as a false ideal, since for him the epistemological claims that defined science were based more on personal judgments, and therefore susceptible to fallibilism. The objective nature of science that obligates the scientists to see things as they really are is kind of dislodged by the above principle of subjectivity.
But, if science were to be seen as objective, then human subjectivity would indeed create a rupture as far as the purified version of scientific objectivity is sought for. The subject or the observer undergoes what is termed the “observer effect”, which refers to the changes that the act of observation makes upon the phenomenon being observed. This effect is as good as ubiquitous in most of the domains of science and technology, ranging from the Heisenbug(1) in computing, via particle physics and thermodynamics, to quantum mechanics. The quantum mechanics observer effect is quite perplexing, and is a result of a phenomenon called “superposition” that signifies existence in all possible states at once. Superposition gets its credit due to Schrödinger’s cat experiment. The experiment entails a cat that is neither dead nor alive until observed. This has led physicists to take into account the acts of “observation” and “measurement” to comprehend the paradox in question, and thereby come out resolving it. But there is still a minority of quantum physicists out there who vouch for the supremacy of an observer, despite the quantum entanglement effects that go on to explain “observation” and “measurement” impacts.(2) Such a standpoint is indeed reflected in Derrida (9-10) as well, when he says (I quote him in full),

The modern dominance of the principle of reason had to go hand in hand with the interpretation of the essence of beings as objects, and object present as representation (Vorstellung), an object placed and positioned before a subject. This latter, a man who says ‘I’, an ego certain of itself, thus ensures his own technical mastery over the totality of what is. The ‘re-‘ of repraesentation also expresses the movement that accounts for – ‘renders reason to’ – a thing whose presence is encountered by rendering it present, by bringing it to the subject of representation, to the knowing self.

If Derridean deconstruction needs to work on science and theory, the only way out is to relinquish the boundaries that define or divide the two disciplines. Moreover, if there is any looseness encountered in objectivity, the ramifications are felt straight at the level of scientific activity. Even theory does not remain immune to these consequences. Importantly, as scientific objectivity starts to wane, a corresponding philosophical luxury of avoiding the contingent wanes with it. Such a loss of representation congruent with a certain theory of meaning we live by has serious ethical-political implications.

(1) Heisenbug is a pun on Heisenberg’s uncertainty principle and is a bug in computing that is characterized by a disappearance of the bug itself when an attempt is made to study it. One common example is a bug that occurs in a program that was compiled with an optimizing compiler, but not in the same program when compiled without optimization (e.g., for generating a debug-mode version). Another example is a bug caused by a race condition. A heisenbug may also appear in a system that does not conform to the command-query separation design guideline, since a routine called more than once could return different values each time, generating hard-to-reproduce bugs in a race condition scenario. One common reason for heisenbug-like behaviour is that executing a program in debug mode often cleans memory before the program starts, and forces variables onto stack locations, instead of keeping them in registers. These differences in execution can alter the effect of bugs involving out-of-bounds member access, incorrect assumptions about the initial contents of memory, or floating-point comparisons (for instance, when a floating-point variable in a 32-bit stack location is compared to one in an 80-bit register). Another reason is that debuggers commonly provide watches or other user interfaces that cause additional code (such as property accessors) to be executed, which can, in turn, change the state of the program. Yet another reason is a fandango on core, the effect of a pointer running out of bounds. In C++, many heisenbugs are caused by uninitialized variables. Another similar pun-intended bug encountered in computing is the Schrödinbug. A schrödinbug is a bug that manifests only after someone reading source code or using the program in an unusual way notices that it never should have worked in the first place, at which point the program promptly stops working for everybody until fixed. The Jargon File adds: “Though… this sounds impossible, it happens; some programs have harbored latent schrödinbugs for years.”

(2) There is a related issue in quantum mechanics relating to whether systems have pre-existing – prior to measurement, that is – properties corresponding to all measurements that could possibly be made on them. The assumption that they do is often referred to as “realism” in the literature, although it has been argued that the word “realism” is being used in a more restricted sense than philosophical realism. A recent experiment in the realm of quantum physics has been quoted as meaning that we have to “say goodbye” to realism, although the author of the paper states only that “we would [..] have to give up certain intuitive features of realism”. These experiments demonstrate a puzzling relationship between the act of measurement and the system being measured, although it is clear from experiment that an “observer” consisting of a single electron is sufficient – the observer need not be a conscious observer. Also, note that Bell’s Theorem suggests strongly that the idea that the state of a system exists independently of its observer may be false. Note that the special role given to observation (the claim that it affects the system being observed, regardless of the specific method used for observation) is a defining feature of the Copenhagen Interpretation of quantum mechanics. Other interpretations resolve the apparent paradoxes from experimental results in other ways. For instance, the Many-Worlds Interpretation posits the existence of multiple universes in which an observed system displays all possible states to all possible observers. In this model, observation of a system does not change the behavior of the system – it simply answers the question of which universe(s) the observer(s) is(are) located in: In some universes the observer would observe one result from one state of the system, and in others the observer would observe a different result from a different state of the system.

Complexity and Entropy

Complexity is not characterized by variegated forms of connectivity among the parts that go on to build up the system alone, but is also measured by ‘orderliness’. Entropy is the measure of disorder in a system.(1) The qualitative nature of entropy describes the changes the system undergoes vis-à-vis its earlier stages by noting energy transformations from one state to the other, and this has urged scientists to devise formulas to exactly measure the change or the degree of disorder the system underwent during such transformations. One such exponent is the physicist Peter Landsberg, who used inputs from thermodynamics and information theory in arguing that, under the constraints operating upon a system, whereby the system is prevented from entering one or more of its possible/permitted states, the measure of the total amount of disorder is given by the formula:
Disorder = CD/CI
and
Order = 1 – CO/CI
where CD is the “disorder” capacity of the system, which is the entropy of the parts contained in the permitted ensemble, CI is the “information” capacity of the system, an expression similar to Shannon’s channel capacity (2), and CO is the “order” capacity of the system.
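
A minimal sketch of how these quantities might be computed, under stated assumptions: here C_D is taken as the Shannon entropy of a distribution over the states the constraints still permit, C_I as the maximal entropy log2 of the total number of states, and Order is computed simply as 1 − Disorder, i.e., the special case in which the order and disorder capacities coincide; the general definition of C_O is left open.

```python
import numpy as np

# Sketch under stated assumptions (not the text's general definitions):
# C_D = Shannon entropy of the distribution over the permitted states,
# C_I = log2(total number of states), Disorder = C_D / C_I,
# and Order is taken here simply as 1 - Disorder.
def disorder_and_order(p_permitted, n_total_states):
    p = np.asarray(p_permitted, dtype=float)
    p = p / p.sum()                      # normalize over the permitted ensemble
    p = p[p > 0]
    C_D = -np.sum(p * np.log2(p))        # "disorder" capacity
    C_I = np.log2(n_total_states)        # "information" capacity
    disorder = C_D / C_I
    return disorder, 1.0 - disorder

# Example: 8 states in principle, constraints permit only 4, occupied unevenly.
print(disorder_and_order([0.4, 0.3, 0.2, 0.1], 8))
```
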
Despite certain strides being made in quantifying complexity using the techniques of entropy, the major hitch remains the diversity of ways of talking about the notion itself, and this affects the use value of entropy in dealing with complexity on a more practical level. Another disadvantage of entropy as a measure of complexity is the lack of detailed structural information that it provides.
(1) There are many ways of depicting or talking about entropy, e.g. as based on thermodynamics or on randomness/stochasticity; despite being mathematically equivalent, they are used according to the systems under study. Thermodynamically, entropy entails a system’s susceptibility towards spontaneous change, with an isolated system never undergoing any decrease in entropy. Entropy is also used in information theory, as developed by Claude Shannon, to denote the number of bits required for storage and/or communication. For Shannon, entropy quantifies the uncertainty involved in encountering a random variable. An excellent book dealing with the philosophical and physical dimensions of time reversal, the second law of thermodynamics, and asymmetries in our epistemic access to the past and the future is by David Albert.
(2) In information theory, channel capacity is the tightest upper bound on the amount of information that can be reliably transmitted over a communications channel. Claude E. Shannon defines the notion of channel capacity and provides a mathematical model by which one can compute it. The capacity of the channel is given by the maximum of the mutual information between the input and output of the channel, where the maximization is with respect to the input distribution.
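
As a small illustration of this definition, here is a sketch for the binary symmetric channel (an assumed example, not from the text): maximizing the mutual information I(X;Y) over input distributions numerically reproduces the known capacity 1 − H2(ε), where ε is the flip probability.

```python
import numpy as np

def h2(x):
    # binary entropy function, in bits
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def mutual_information(p, eps):
    # I(X;Y) = H(Y) - H(Y|X) for a binary symmetric channel:
    # input P(X=1) = p, each bit flipped independently with probability eps
    p_y1 = p * (1 - eps) + (1 - p) * eps
    return h2(p_y1) - h2(eps)

eps = 0.11
capacity = max(mutual_information(p, eps) for p in np.linspace(0, 1, 1001))
print(np.isclose(capacity, 1 - h2(eps), atol=1e-6))   # True: maximum at uniform input
```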

Philosophy of Quantum Entanglement and Topology


Many-body entanglement is essential for the existence of topological order in condensed matter systems, and understanding many-body entanglement provides a promising approach to understanding in general what topological orders exist. It also leads to tensor network descriptions of many-body wave functions, with the potential of classifying phases of quantum matter. Here the generic many-body entanglement is reduced to a bipartite setting: the entanglement between a chosen region and the rest of the system. Consider the equation,

S(A) ≡ −tr(ρA log2A)) —– (1)

where ρA ≡ trB |ΨAB⟩⟨ΨAB| is the density matrix for part A, and where we assumed that the whole system is in a pure state |ΨAB⟩.

Specializing |ΨAB⟩ to a ground state of a local Hamiltonian in D spatial dimensions, the central observation is that the entanglement between a region A of size L^D and the (much larger) rest B of the lattice is then often proportional to the size |σ(A)| of the boundary σ(A) of region A,

S(A) ≈ |σ(A)| ≈ L^(D−1) —– (2)

where the correction −1 (as in the toric code) is due to topological order; this signifies adherence to the boundary law observed in the ground state of gapped local Hamiltonians in arbitrary dimension D, as well as in some gapless systems in D > 1 dimensions. Instead, in gapless systems in D = 1 dimensions, as well as in certain gapless systems in D > 1 dimensions (namely systems with a Fermi surface of dimension D − 1), ground state entanglement displays a logarithmic correction to the boundary law,

S(A) ≈ |σ(A)| log2(|σ(A)|) ≈ L^(D−1) log2(L) —– (3)

At an intuitive level, the boundary law of (2) is understood as resulting from entanglement that involves degrees of freedom located near the boundary between regions A and B. Also intuitively, the logarithmic correction of (3) is argued to have its origin in contributions to entanglement from degrees of freedom that are further away from the boundary between A and B. Given the entanglement between A and B, one introduces an entanglement contour sA that assigns a real number sA(i) ≥ 0 to each lattice site i contained in region A, such that the sum of sA(i) over all the sites i ∈ A is equal to the entanglement entropy S(A),

S(A) = Σi∈A sA(i) —– (4) 

and that aims to quantify how much the degrees of freedom in site i participate in/contribute to the entanglement between A and B. As Chen and Vidal put it, the entanglement contour sA(i) is not equivalent to the von Neumann entropy S(i) ≡ −tr ρ(i) log2 ρ(i) of the reduced density matrix ρ(i) at site i. Notice that, indeed, the von Neumann entropy of individual sites in region A is not additive in the presence of correlations between the sites, and therefore generically

S(A) ≠ Σi∈A S(i)

whereas the entanglement contour sA(i) is required to fulfil (4). Relatedly, when site i is only entangled with neighboring sites contained within region A, and it is thus uncorrelated with region B, the entanglement contour sA(i) will be required to vanish, whereas the one-site von Neumann entropy S(i) still takes a non-zero value due to the presence of local entanglement within region A.

As an aside, in the traditional approach to quantum mechanics, a physical system is described in a Hilbert space: Observables correspond to self-adjoint operators and statistical operators are associated with the states. In fact, a statistical operator describes a mixture of pure states. Pure states are the really physical states and they are given by rank one statistical operators, or equivalently by rays of the Hilbert space. Von Neumann associated an entropy quantity to a statistical operator and his argument was a gedanken experiment on the ground of phenomenological thermodynamics. Let us consider a gas of N (≫ 1) molecules in a rectangular box K. Suppose that the gas behaves like a quantum system and is described by a statistical operator D, which is a mixture λ|φ1⟩⟨φ1| + (1 − λ)|φ2⟩⟨φ2|, where |φi⟩ (i = 1, 2) is a state vector. We may take λN molecules in the pure state φ1 and (1−λ)N molecules in the pure state φ2. On the basis of phenomenological thermodynamics, we assume that if φ1 and φ2 are orthogonal, then there is a wall that is completely permeable for the φ1-molecules and isolating for the φ2-molecules. We add an equally large empty rectangular box K′ to the left of the box K and we replace the common wall with two new walls. Wall (a), the one to the left, is impenetrable, whereas the one to the right, wall (b), lets through the φ1-molecules but keeps back the φ2-molecules. We add a third wall (c) opposite to (b) which is semipermeable, transparent for the φ2-molecules and impenetrable for the φ1-ones. Then we push slowly (a) and (c) to the left, maintaining their distance. During this process the φ1-molecules are pressed through (b) into K′ and the φ2-molecules diffuse through wall (c) and remain in K. No work is done against the gas pressure, no heat is developed. Replacing the walls (b) and (c) with a rigid absolutely impenetrable wall and removing (a) we restore the boxes K and K′ and succeed in the separation of the φ1-molecules from the φ2-ones without any work being done, without any temperature change and without evolution of heat. The entropy of the original D-gas (with density N/V) must be the sum of the entropies of the φ1- and φ2-gases (with densities λN/V and (1 − λ)N/V, respectively). If we compress the gases in K and K′ to the volumes λV and (1 − λ)V, respectively, keeping the temperature T constant by means of a heat reservoir, the entropy change amounts to κλN log λ and κ(1 − λ)N log(1 − λ), respectively. Indeed, we have to add heat in the amount of λiNκT log λi (< 0) when the φi-gas is compressed, and dividing by the temperature T we get the change of entropy. Finally, mixing the φ1- and φ2-gases of identical density we obtain a D-gas of N molecules in a volume V at the original temperature. If S0(ψ,N) denotes the entropy of a ψ-gas of N molecules (in a volume V and at the given temperature), we conclude that

S0(φ1,λN)+S0(φ2,(1−λ)N) = S0(D, N) + κλN log λ + κ(1 − λ)N log(1 − λ) —– (5)

must hold, where κ is Boltzmann’s constant. Assuming that S0(ψ,N) is proportional to N and dividing by N we have

λS(φ1) + (1 − λ)S(φ2) = S(D) + κλ log λ + κ(1 − λ) log(1 − λ) —– (6)

where S is a certain thermodynamical entropy quantity (relative to the fixed temperature and molecule density). We arrived at the mixing property of entropy, but we should not forget about the initial assumption: φ1 and φ2 are supposed to be orthogonal. Instead of a two-component mixture, von Neumann operated with an infinite mixture, which does not make a big difference, and he concluded that

S (Σiλi|φi⟩⟨φi|) = ΣiλiS(|φi⟩⟨φi|) − κ Σiλi log λi —– (7)
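A quick numerical illustration of (7), as a sketch with κ set to 1 and an arbitrary mixture of two orthogonal pure states: the von Neumann entropy of the mixture, computed from its eigenvalues, coincides with −Σ λi log λi, since pure states contribute zero entropy.

```python
import numpy as np

lam = 0.3
phi1 = np.array([1.0, 0.0])                  # orthogonal state vectors
phi2 = np.array([0.0, 1.0])
D = lam * np.outer(phi1, phi1) + (1 - lam) * np.outer(phi2, phi2)

evals = np.linalg.eigvalsh(D)
evals = evals[evals > 1e-12]
S_D = -np.sum(evals * np.log(evals))         # von Neumann entropy, kappa = 1

print(np.isclose(S_D, -(lam * np.log(lam) + (1 - lam) * np.log(1 - lam))))   # True
```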

Von Neumann’s argument does not require that the statistical operator D is a mixture of pure states. What we really needed is the property D = λD1 + (1 − λ)D2 in such a way that the possible mixed states D1 and D2 are disjoint. D1 and D2 are disjoint in the thermodynamical sense when there is a wall which is completely permeable for the molecules of a D1-gas and isolating for the molecules of a D2-gas. In other words, if the mixed states D1 and D2 are disjoint, then this should be demonstrated by a certain filter. Mathematically, the disjointness of D1 and D2 is expressed in the orthogonality of the eigenvectors corresponding to nonzero eigenvalues of the two density matrices. The essential point is in the remark that (6) must hold also in a more general situation when possibly the states do not correspond to density matrices, but orthogonality of the states makes sense:

λS(D1) + (1 − λ)S(D2) = S(D) + κλ log λ + κ(1 − λ) log(1 − λ) —– (8)

(7) reduces the determination of the (thermodynamical) entropy of a mixed state to that of pure states. The so-called Schatten decomposition Σi λi|φi⟩⟨φi| of a statistical operator is not unique even if ⟨φi , φj ⟩ = 0 is assumed for i ≠ j . When λi is an eigenvalue with multiplicity, then the corresponding eigenvectors can be chosen in many ways. If we expect the entropy S(D) to be independent of the Schatten decomposition, then we are led to the conclusion that S(|φ⟩⟨φ|) must be independent of the state vector |φ⟩. This argument assumes that there are no superselection sectors, that is, any vector of the Hilbert space can be a state vector. On the other hand, von Neumann wanted to avoid degeneracy of the spectrum of a statistical operator. Von Neumann’s proof of the property that S(|φ⟩⟨φ|) is independent of the state vector |φ⟩ was different. He did not want to refer to a unitary time development sending one state vector to another, because that argument requires great freedom in choosing the energy operator H. Namely, for any |φ1⟩ and |φ2⟩ we would need an energy operator H such that

e^(itH)|φ1⟩ = |φ2⟩

This process would be reversible. Anyways, that was quite a digression.

Entanglement between A and B is naturally described by the coefficients {pα} appearing in the Schmidt decomposition of the state |ΨAB⟩,

|ΨAB⟩ = Σα √pα |ΨAα⟩ ⊗ |ΨBα⟩ —– (9)

These coefficients {pα} correspond to the eigenvalues of the reduced density matrix ρA, whose spectral decomposition reads

ρA = Σα pα |ΨAα⟩⟨ΨAα| —– (10)

defining a probability distribution, pα ≥ 0, Σα pα = 1, in terms of which the von Neumann entropy S(A) is

S(A) = − Σα pα log2(pα) —– (11)
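
A minimal numerical sketch of (9)–(11) for a random bipartite pure state (illustrative dimensions only, not from the text): the Schmidt coefficients pα are the squared singular values of the coefficient matrix of |ΨAB⟩, and the entropy computed from them coincides with the von Neumann entropy of ρA from (1).

```python
import numpy as np

dA, dB = 3, 4                                   # local dimensions of parts A and B
rng = np.random.default_rng(0)
psi = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
psi /= np.linalg.norm(psi)                      # normalized |Psi_AB>, coefficients as a dA x dB matrix

p = np.linalg.svd(psi, compute_uv=False) ** 2   # Schmidt spectrum {p_alpha}, eqs. (9)-(10)
S_schmidt = -np.sum(p * np.log2(p))             # eq. (11)

rho_A = psi @ psi.conj().T                      # rho_A = tr_B |Psi><Psi|
evals = np.linalg.eigvalsh(rho_A)
evals = evals[evals > 1e-12]
S_vn = -np.sum(evals * np.log2(evals))          # eq. (1)

print(np.isclose(S_schmidt, S_vn))              # True
```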

On the other hand, the Hilbert space VA of region A factorizes as the tensor product

VA = ⊗ i∈A V(i) —– (12)

where V(i) describes the local Hilbert space of site i. The reduced density matrix ρA in (10) and the factorization of (12) define two inequivalent structures within the vector space VA of region A. The entanglement contour sA is a function from the set of sites i ∈ A to the real numbers,

sA : A → ℜ —– (13)

that attempts to relate these two structures, by distributing the von Neumann entropy S(A) of (11) among the sites i ∈ A. According to Chen and Vidal, there are five conditions/requirements that an entanglement contour needs to satisfy.

a. Positivity: sA(i) ≥ 0

b. Normalization: Σi∈AsA(i) = S(A) 

These constraints amount to defining a probability distribution pi ≡ sA(i)/S(A) over the sites i ∈ A, with pi ≥ 0 and Σi pi = 1, such that sA(i) = pi S(A); they do not, however, require sA to inform us about the spatial structure of entanglement in A, but only relate it to the density matrix ρA through its total von Neumann entropy S(A).

c. Symmetry: if T is a symmetry of ρA, that is, TρAT† = ρA, and T exchanges site i with site j, then sA(i) = sA(j).

This condition ensures that the entanglement contour is the same on two sites i and j of region A that, as far as entanglement is concerned, play an equivalent role in region A. It uses the (possible) presence of a spatial symmetry, such as invariance under space reflection, or under discrete translations/rotations, to define an equivalence relation in the set of sites of region A, and requires that the entanglement contour be constant within each resulting equivalence class. Notice, however, that this condition does not tell us whether the entanglement contour should be large or small on a given site (or equivalence class of sites). In particular, the three conditions above are satisfied by the canonical choice sA(i) = S(A)/|A|, that is, a flat entanglement contour over the |A| sites contained in region A, which once more does not tell us anything about the spatial structure of the von Neumann entropy in ρA; a small sketch of this choice follows.
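
A minimal sketch of that canonical flat contour, with an assumed, precomputed value for S(A): it satisfies positivity (a) and normalization (b) by construction, while carrying no spatial information.

```python
import numpy as np

S_A = 2.37           # assumed value of the entanglement entropy S(A), in bits
n_sites = 6          # |A|, the number of sites in region A

contour = np.full(n_sites, S_A / n_sites)       # flat contour s_A(i) = S(A)/|A|
print(np.all(contour >= 0))                     # condition a: positivity
print(np.isclose(contour.sum(), S_A))           # condition b: normalization
```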

The remaining conditions refer to subregions within region A, instead of referring to single sites. It is therefore convenient to (trivially) extend the definition of entanglement contour to a set X of sites in region A, X ⊆ A, with vector space

VX = ⊗i∈X V(i) —– (14)

as the sum of the contour over the sites in X,

sA(X) ≡  Σi∈XsA(i) —– (15)

It follows from this extension that for any two disjoint subsets X1, X2 ⊆ A, with X1 ∩ X2 = ∅, the contour is additive,

sA(X1 ∪ X2) = sA(X1) + sA(X2—– (16)

In particular, condition b can now be recast as sA(A) = S(A). Similarly, if X1, X2 ⊆ A are such that all the sites of X1 are also contained in X2, X1 ⊆ X2, then the contour must be larger on X2 than on X1 (monotonicity of sA(X)),

sA(X1) ≤ sA(X2) if X1 ⊆ X2 —– (17)

d. Invariance under local unitary transformations: if the state |Ψ′AB⟩ is obtained from the state |ΨAB⟩ by means of a unitary transformation UX that acts on a subset X ⊆ A of sites of region A, that is, |Ψ′AB⟩ ≡ UX|ΨAB⟩, then the entanglement contour sA(X) must be the same for state |ΨAB⟩ and for state |Ψ′AB⟩.

That is, the contribution of region X to the entanglement between A and B is not affected by a redefinition of the sites or change of basis within region X. Notice that it follows that UX can also not change sA(X’), where X’ ≡ A − X is the complement of X in A.

To motivate our last condition, let us consider a state |ΨAB⟩ that factorizes as the product

|ΨAB⟩ = |ΨXXB⟩ ⊗ |ΨX’X’B⟩ —– (18)

where X ⊆ A and XB ⊆ B are subsets of sites in regions A and B, respectively, and X’ ⊆ A and X’B ⊆ B are their complements within A and B, so that

VA = VX ⊗ VX’, —– (19)

VB = VXB ⊗ VX’B —– (20)

in this case the reduced density matrix ρA factorizes as ρA = ρX ⊗ ρX’ and the entanglement entropy is additive,

S(A) = S(X) + S(X’) —– (21)

Since the entanglement entropy S(X) of subregion X is well-defined, let the entanglement profile over X be equal to it,

sA(X) = S(X) —– (22)

The last condition refers to a more general situation where, instead of obeying (18), the state |ΨAB⟩ factorizes as the product

|ΨAB⟩ = |ΨΩAΩB⟩ ⊗ |ΨΩ’AΩ’B⟩, —– (23)

with respect to some decomposition of VA and VB as tensor products of factor spaces,

VA = VΩA ⊗ VΩ’A, —– (24)

VB = VΩB ⊗ VΩ’B —– (25)

Let S(ΩA) denote the entanglement entropy supported on the first factor space VΩA of  VA, that is

S(ΩA) = −tr(ρΩA log2ΩA)) —– (26)

ρΩA ≡ trΩB |ΨΩAΩB⟩⟨ΨΩAΩB| —– (27)

and let X ⊆ A be a subset of sites whose vector space VX is completely contained in VΩA , meaning that VΩA can be further decomposed as

VΩA ≈ VX ⊗ VX’ —– (28)

e. Upper bound: if a subregion X ⊆ A is contained in a factor space ΩA (24 and 28) then the entanglement contour of subregion X cannot be larger than the entanglement entropy S(ΩA) (26)

sA(X) ≤ S(ΩA) —– (29)

This condition says that whenever we can ascribe a concrete value S(ΩA) of the entanglement entropy to a factor space ΩA within region A (that is, whenever the state |ΨAB⟩ factorizes as in (23)–(24)), then the entanglement contour has to be consistent with this fact, meaning that the contour sA(X) in any subregion X contained in the factor space ΩA is upper bounded by S(ΩA).

Let us consider a particular case of condition e. When a region X ⊆ A is not at all correlated with B, that is, ρXB = ρX ⊗ ρB, then it can be seen that X is contained in some factor space ΩA such that the state |ΨΩAΩB⟩ itself further factorizes as |ΨΩA⟩ ⊗ |ΨΩB⟩, so that (23) becomes

|ΨAB⟩ = |ΨΩA⟩ ⊗ |ΨΩB⟩ ⊗ |ΨΩ’AΩ’B⟩, —– (30)

and S(ΩA) = 0. Condition e then requires that sA(X) = 0, that is

ρXB = ρX ⊗ ρB ⟹ sA(X) = 0, —– (31)

reflecting the fact that a region X ⊆ A that is not correlated with B does not contribute at all to the entanglement between A and B. Finally, the upper bound in e can alternatively be stated as a lower bound. Let Y ⊆ A be a subset of sites whose vector space VY completely contains VΩA in (24), meaning that VY can be further decomposed as

VY ≈ VΩA ⊗ VΩ’A —– (32)

e’. Lower bound: The entanglement contour of subregion Y is at least equal to the entanglement entropy S(ΩA) in (26),

sA(Y) ≥ S(ΩA) —– (33)

Conditions a-e (e’) are not expected to completely determine the entanglement contour. In other words, there probably are inequivalent functions sA : A → ℜ that conform to all the conditions above. So, where do we get philosophical from here? It is by following the entanglement contour of selected states, for instance through the time evolution ensuing a global or a local quantum quench, that entanglement between regions rather than within regions is characterized, revealing a detailed real-space structure of the entanglement of a region A and its dynamics, well beyond what is accessible from the entanglement entropy alone. But, that isn’t all. Questions of how to quantify entanglement and non-locality, and the need to clarify the relationship between them, are important not only conceptually, but also practically, insofar as entanglement and non-locality seem to be different resources for the performance of quantum information processing tasks. Whether in a given quantum information protocol (cryptography, teleportation, algorithms…) it is better to look for the largest amount of entanglement or the largest amount of non-locality becomes decisive. The ever-evolving field of quantum information theory is devoted to using the principles and laws of quantum mechanics to aid in the acquisition, transmission, and processing of information. In particular, it seeks to harness the peculiarly quantum phenomena of entanglement, superposition, and non-locality to perform all sorts of novel tasks, such as enabling computations that operate exponentially faster or more efficiently than their classical counterparts (via quantum computers) and providing unconditionally secure cryptographic systems for the transfer of secret messages over public channels (via quantum key distribution). By contrast, classical information theory is concerned with the storage and transfer of information in classical systems. It uses the “bit” as the fundamental unit of information, where the system capable of representing a bit can take on one of two values (typically 0 or 1). Classical information theory is based largely on the concept of information formalized by Claude Shannon in the late 1940s. Quantum information theory, which was later developed in analogy with classical information theory, is concerned with the storage and processing of information in quantum systems, such as the photon, electron, quantum dot, or atom. Instead of using the bit, however, it defines the fundamental unit of quantum information as the “qubit.” What makes the qubit different from a classical bit is that the smallest system capable of storing a qubit, the two-level quantum system, not only can take on the two distinct values |0⟩ and |1⟩, but can also be in a state of superposition of these two states: |ψ⟩ = α0|0⟩ + α1|1⟩.
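
A minimal sketch of that last point, with arbitrary amplitudes assumed for illustration: a qubit state |ψ⟩ = α0|0⟩ + α1|1⟩ is just a normalized two-component complex vector, with measurement probabilities |α0|² and |α1|² summing to one.

```python
import numpy as np

alpha0, alpha1 = 0.6, 0.8j               # arbitrary complex amplitudes
psi = np.array([alpha0, alpha1])
psi = psi / np.linalg.norm(psi)          # enforce normalization |a0|^2 + |a1|^2 = 1

probs = np.abs(psi) ** 2                 # outcome probabilities for a |0>/|1> measurement
print(probs, probs.sum())                # approximately [0.36 0.64] and 1.0
```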

Quantum information theory has opened up a whole new range of philosophical and foundational questions in quantum cryptography or quantum key distribution, which involves using the principles of quantum mechanics to ensure secure communication. Some quantum cryptographic protocols make use of entanglement to establish correlations between systems that would be lost upon eavesdropping. Moreover, a quantum principle known as the no-cloning theorem prohibits making identical copies of an unknown quantum state. In the context of a C∗-algebraic formulation,  quantum theory can be characterized in terms of three information-theoretic constraints: (1) no superluminal signaling via measurement, (2) no cloning (for pure states) or no broadcasting (mixed states), and (3) no unconditionally secure bit commitment.

Entanglement does not refute the principle of locality. A sketch of the sort of experiment commonly said to refute locality runs as follows. Suppose that you have two electrons with entangled spin. For each electron you can measure the spin along the X, Y or Z direction. If you measure X on both electrons, then you get opposite values, likewise for measuring Y or Z on both electrons. If you measure X on one electron and Y or Z on the other, then you have a 50% probability of a match. And if you measure Y on one and Z on the other, the probability of a match is 50%. The crucial issue is that whether you find a correlation when you do the comparison depends on whether you measure the same quantity on each electron. Bell’s theorem just explains that the extent of this correlation is greater than a local theory would allow if the measured quantities were represented by stochastic variables (i.e. – numbers picked out of a hat). This fact is often misrepresented as implying that quantum mechanics is non-local. But in quantum mechanics, systems are not characterised by stochastic variables, but, rather, by Hermitian operators. There is an entirely local explanation of how the correlations arise in terms of properties of systems represented by such operators. But, another answer to such violations of the principle of locality could also be “Yes, unless you get really obsessive about it.” It has been formally proven that one can have determinacy in a model of quantum dynamics, or one can have locality, but cannot have both. If one gives up the determinacy of the theory in various ways, one can imagine all kinds of ‘planned flukes’ like the notion that the experiments that demonstrate entanglement leak information and pre-determine the environment to make the coordinated behavior seem real. Since this kind of information shaping through distributed uncertainty remains a possibility, folks can cling to locality until someone actually manages something like what those authors are attempting, or we find it impossible. If one gives up locality instead, entanglement does not present a problem, the theory of relativity does. Because the notion of a frame of reference is local. Experiments on quantum tunneling that violate the constraints of the speed of light have been explained with the idea that probabilistic partial information can ‘lead’ real information faster than light by pushing at the vacuum underneath via the ‘Casimir Effect’. If both of these make sense, then the information carried by the entanglement when it is broken would be limited as the particles get farther apart — entanglements would have to spontaneously break down over time or distance of separation so that the probabilities line up. This bodes ill for our ability to find entangled particles from the Big Bang, which seems to be the only prospect in progress to debunk the excessively locality-focussed.
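
The specific correlations described in the sketch above can be checked directly. A minimal example, assuming the two electrons are prepared in the spin singlet state: same-axis measurements are perfectly anticorrelated, while different axes among X, Y, Z give zero correlation, i.e. a 50% chance of matching outcomes.

```python
import numpy as np
from itertools import product

paulis = {
    'X': np.array([[0, 1], [1, 0]], dtype=complex),
    'Y': np.array([[0, -1j], [1j, 0]], dtype=complex),
    'Z': np.array([[1, 0], [0, -1]], dtype=complex),
}
# Singlet state (|01> - |10>)/sqrt(2) in the basis {|00>, |01>, |10>, |11>}
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

for a, b in product('XYZ', repeat=2):
    corr = np.real(singlet.conj() @ np.kron(paulis[a], paulis[b]) @ singlet)
    match_prob = (1 + corr) / 2          # probability that the two +/-1 outcomes agree
    print(a, b, round(corr, 3), round(match_prob, 2))
# Same axis: correlation -1, match probability 0; different axes: correlation 0, match 0.5.
```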

But much of the work remains undone, and this is to be continued…

 

Truth & Theories: Facetious Treatment

Truth is the main regulative principle in the criticism of theories; their power to solve problems, or to raise new ones, is another. As Popper says, the quest for precision is equivalent to the quest for certainty, and both should be avoided. I think it is undesirable to raise the precision of a theory or statement for its own sake (for example, linguistically), since it would tarnish the very clarity one is seeking. Can we ever know what we are talking about, say a proposition ‘A’, when in all likelihood some successor to that proposition may one day refute it, or may even one day entail it? To know, as a matter of fact, what we are talking about is a hopeless task in itself, since an infinite number of logical irregularities might confront us as we work through ‘A’, the theory or statement in question. To understand the truth content of a theory, the foremost thing is to link its logic to the problems it poses, since that is where the most room lies for a better understanding of the theory. As an illustration, take the remark made by Clifford Truesdell about the laws of thermodynamics:

“Every physicist knows what is meant by the first and the second law of thermodynamics, but no two physicists agree on them.”


All learning is a modification, or at times a refutation, of knowledge already gained, or even of so-called inborn knowledge. In a great work of art, the artist does not try to impose his ambitions on it, but merely uses his faculties to serve the purpose the work is intended to expound. So for an artist or a musician the adage should be: “from the heart, may it go back to the heart.” The point is that even the most rigorously tested scientific theory is an approximation to the truth, as was evident in Einstein’s revolutionizing of Newton’s theory of gravitation.

Theories are actively produced by our minds rather than being imposed upon us by reality, and they transcend our experience through the expressive minds we possess. In the Kantian vein, we should all ask ourselves a simple question: can we really know things in themselves, or are they entirely submerged in the deep seas of purely hypothetical conjecture? Universal scientific theories are not deducible from singular statements, but they can definitely be refuted by them whenever a clash occurs with known or knowable facts. We always find ourselves in some kind of problematic situation, and we then deliberately choose a situation that is no doubt problematic but at the same time conducive enough for us to solve. The solution to this situation is, or might be, a tentative one consisting in a hypothetical theorization, a conjecture which in due course leads us to believe that the solution is nothing more than the nearest approximation to the truth. The resulting theories are then critically discussed and compared so as to check their shortcomings; these discussions in turn bring forth further competing theories, and this recurring structure constitutes science. It thus lacks induction, and as Popper says, “we never argue from facts to theories”, except by way of falsification or refutation.

What really matters in the case of theories is not their visual intuitiveness but the logical basis that forces its way through to an understanding of a theory, its explanatory content, and, most importantly, its bearing on the problems confronted in relation to other theories. No test of any theoretical statement is final or conclusive, and the empirical, or critical, attitude involves adherence to methodological rules that direct us not to evade criticism but to accept refutations. As a consequence, an acceptance or a refutation is nearly as risky as the temporary adoption of a hypothesis: it is the acceptance of a conjecture. Knowledge grows out of trial and the elimination of error, and such knowledge is likely to be scientific knowledge, since it is always wise to conscientiously acclaim or disclaim a theory in its fullest criticality. Thus the critical adoption of a theory is what is responsible for the growth of knowledge, having been accepted as its major instrument. The rationality of man consists in taking nothing for granted.

Explanation is always incomplete: it entices us into asking new questions, which in turn give birth to new theories, which not only correct the older version of the theory but open up new breeding grounds for further questions, thus entering the vicious circle of refuting the old and accepting the new. The theory clearly underlies the fact that science can never reach completion; rightly put, the evolution of science is going to be an endless quest of corrections and new approximations. Here Gödel’s incompleteness theorems come into the picture: in view of the mathematical background of physics, at best an infinite sequence of true theories would be needed to answer the questions which, in any given formalized theory, would be undecidable. Science not only begins with problems but at the same time ends with problems, as Popper rightly said.
Truth is conceived as an absolute, timeless discourse whose effect is to suppress the narrative differend and thus to commit an ethico-discursive wrong against anyone who fails to accept its criterion or its veridical status. Without the possibility of a right understanding of truth as a precondition of any communicative act, we should have no means of recognizing cases where understanding has simply broken down, or where translation comes up against some genuine problem of localized interpretative grasp. The truth is rather that language is a condition for having conventions. And since language, taken adequately in that sense of the term, involves not just words, signs or vocabularies taken out of context, but also sentences or propositions and higher-level forms of discourse, it follows that the conventionalist doctrine, along with its relativist progeny, need not carry the day. Any problems of communication on either side can best be sorted out by appealing to those features that are not otiose but make up the strictly ineliminable basis of all understanding within or between theories.

So formulating a theory is a never-ending myth that can give birth to a cycle of myths, so entailing in nature that they set before us the very fact that the truth content of a theory remains limited until criticisms of it, or at times cynicisms about it, are raised.