Yin & Yang Logarithmic Spirals

[Figure: the black-and-white Yin/Yang symbol]

The figure depicts the well-known black-and-white symbol of Yin and Yang. The differently colored dots in the area delimited by each force symbolize the fact that each force bears the seed of its counterpart within itself. According to the principle of Yin and Yang outlined above, neither Yin nor Yang can be observed directly. Both are intertwined forces that always occur in pairs, rather than isolated forces independent of each other. In Chinese philosophy, Yin and Yang assume the form of spirals. Let us now show that the net force in

K = −p(K) · ln((1 − p(K)) / p(K))

where the performance of a given confidence value K exactly matches the expectation, i.e. E = p(K), is a spiral too. In order to do so, let us introduce the general definition of the logarithmic spiral before illustrating its similarity to the famous Yin/Yang symbol.

A logarithmic spiral is a special type of spiral curve which plays an important role in nature. It occurs in many different kinds of objects and processes, such as mollusk shells, hurricanes, galaxies, etc. In polar coordinates (r, θ), the general definition of a logarithmic spiral is

r = a · e^(bθ)

Parameter a is a scale factor determining the size of the spiral, while parameter b controls the direction and tightness of the wrapping. For a logarithmic spiral, the distances between successive turnings increase in geometric progression. This distinguishes the logarithmic spiral from the Archimedean spiral, which features constant distances between turnings. The figure below depicts a typical example of a logarithmic spiral.

[Figure: a typical logarithmic spiral]

Solving r = a · e^(bθ) for θ leads to the following general form of logarithmic spirals:

θ = (1/b) · ln(r/a)
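To make the definition concrete, here is a minimal numeric sketch (my own illustration, not from the original text) that samples a logarithmic spiral and checks the inverse form above; the parameter values a and b are arbitrary.

```python
import numpy as np

# Illustrative sketch: sample the logarithmic spiral r = a * exp(b * theta)
# and check the inverse form theta = (1/b) * ln(r/a).
a, b = 1.0, 0.2                          # scale and tightness (arbitrary)
theta = np.linspace(0.0, 6.0 * np.pi, 500)
r = a * np.exp(b * theta)

theta_back = (1.0 / b) * np.log(r / a)   # recover theta from r
assert np.allclose(theta, theta_back)

# Radii one full turn apart differ by the constant *ratio* exp(2*pi*b),
# not by a constant difference as for an Archimedean spiral.
print(np.exp(2.0 * np.pi * b))
```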

In order to show that the net force in

K = −p(K) · ln((1 − p(K)) / p(K))

defines a logarithmic spiral, and for the sake of easier illustration, let us take the negative version of the net force in the above equation and consider the polar coordinates (r, θ) it defines, namely:

θ = −p(K) · ln(p(K) / (1 − p(K)))

and

r = (1 − p(K)) · e^(−θ/p(K))

A comparison of θ = −p(K) · ln(p(K)/(1 − p(K))) and r = (1 − p(K)) · e^(−θ/p(K)) with the general form of logarithmic spirals, θ = (1/b) · ln(r/a), shows that the net force does indeed describe a spiral. Both equations match when we set the parameters a and b to the following values:

a = 1 − p(K)

and

b = −1/p(K)
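As a quick check (my own step, added here rather than taken from the original): substituting these values into the general form gives

\[
\theta \;=\; \frac{1}{b}\,\ln\frac{r}{a} \;=\; -\,p(K)\,\ln\frac{r}{1-p(K)},
\]

and solving this for r returns exactly r = (1 − p(K)) · e^(−θ/p(K)) from above.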

In particular, we can check that a and b are identical when p(K) equals the golden ratio, which happens to be

φ ≈ 1.618 or −0.618
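A short derivation of this value (added for completeness; it follows directly from the two parameter settings above): a = b requires

\[
1 - p(K) \;=\; -\frac{1}{p(K)}
\;\;\Longleftrightarrow\;\;
p(K)^{2} - p(K) - 1 = 0
\;\;\Longleftrightarrow\;\;
p(K) = \frac{1 \pm \sqrt{5}}{2},
\]

whose roots are approximately 1.618 and −0.618, the golden ratio and its negative reciprocal.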

If we let p(K) run from 0 to 1 and mirror the resulting spiral along both axes, we obtain two spirals. Figure 8 shows both spirals plotted in a Cartesian coordinate system. Both spirals are, of course, symmetrical, and their turnings approach the unit circle. A comparison of the Yin/Yang symbol with the spirals in the figure below shows the strong similarities between the two figures.

[Figure: the two mirrored spirals plotted in Cartesian coordinates]
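A plotting sketch of this construction (my own reconstruction from the formulas above; the sampling and mirroring conventions are assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt

# Let p run over (0, 1), with extra resolution near 1 where the curve winds.
t = np.linspace(0.001, 1.0, 6000)
p = 1.0 - 10.0 ** (-9.0 * t)           # p climbs from ~0 towards 1

theta = -p * np.log(p / (1.0 - p))
r = (1.0 - p) * np.exp(-theta / p)     # algebraically this reduces to r = p

x, y = r * np.cos(theta), r * np.sin(theta)

fig, ax = plt.subplots(figsize=(5, 5))
ax.plot(x, y)                          # the spiral itself
ax.plot(-x, -y)                        # its mirror image along both axes
ax.add_patch(plt.Circle((0, 0), 1.0, fill=False, linestyle="--"))
ax.set_aspect("equal")                 # the turnings wind toward the unit circle
plt.show()
```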

A simple mirror operation transforms the spirals in the above figure into the Yin/Yang symbol. The addition of a time dimension to the above figure generates a three-dimensional object.

[Figure: the spirals extended along a time dimension into a three-dimensional object]

The above is an informational universe. Note that the use of performance as time is reasonable because the exponential distribution is typically used to model time-dependent processes, and the expectation value is thus typically associated with time.

Yin and Yang, with deep roots in Chinese philosophy, stand for two principles that are opposites of each other and that are constantly trying to gain the upper hand over each other. However, neither one will ever succeed in doing so, though one principle may temporarily dominate the other. Both principles cannot exist without each other. It is rather the constant struggle between the two principles that defines our world and produces the rhythm of life. According to Chinese philosophy, Yin and Yang are the foundation of our entire universe. They flow through, and thus affect, every being. Typical examples of Yin/Yang opposites are night/day, cold/hot, rest/activity, etc. Chinese philosophy does not confine itself to a mere description of Yin and Yang. It also provides guidelines on how to live in accordance with Yin and Yang. The central statement is that Yin and Yang need to be in harmony. Any imbalance of an economic, biological, physical, or chemical system can be directly attributed to a distorted equilibrium between Yin and Yang.

 


Nobel Prize in Economics and Crimino(logy)/(genic). How Contracts Work? Note Quote.


How has the Swedish Central Bank’s committee that awards prizes in Economics in honor of Nobel responded to the field’s abject failures regarding the recent financial crisis and the Great Recession? A lesser group would display humility, acknowledge its failures, and promise a fundamental rethink of the field. Neoclassical economists, however, are made of sterner stuff. The committee’s response is to praise the discipline for its theoretical advances and proposed policies related to finance, regulation, and corporate governance. Oliver Hart and Bengt Holmström exemplify this pattern.

The economics prize is a bit different. It was created by Sweden’s Central Bank in 1969, nearly 75 years after Alfred Nobel’s will established the original prizes. The award’s real name is the “Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel.” It was not established by Nobel, but supposedly in memory of Nobel. It’s a ruse and a PR trick, and I mean that literally. And it was done completely against the wishes of the Nobel family.

Sweden’s Central Bank quietly snuck it in with all the other Nobel Prizes to give free-market economics for the 1% credibility. One of the Federal Reserve banks explained it succinctly, “Few realize, especially outside of economists, that the prize in economics is not an “official” Nobel. . . . The award for economics came almost 70 years later—bootstrapped to the Nobel in 1968 as a bit of a marketing ploy to celebrate the Bank of Sweden’s 300th anniversary.” Yes, you read that right: “a marketing ploy.”

“The Economics Prize has nestled itself in and is awarded as if it were a Nobel Prize. But it’s a PR coup by economists to improve their reputation,” Nobel’s great-great-nephew Peter Nobel told AFP in 2005, adding that “It’s most often awarded to stock market speculators …. There is nothing to indicate that [Alfred Nobel] would have wanted such a prize.”

Members of the Nobel family are among the harshest, most persistent critics of the economics prize, and have repeatedly called for it to be abolished or renamed. In 2001, on the 100th anniversary of the Nobel Prizes, four family members published a letter in the Swedish paper Svenska Dagbladet, arguing that the economics prize degrades and cheapens the real Nobel Prizes. They aren’t the only ones.

Scientists never had much respect for the new economics Nobel prize. In fact, a scientist who headed Nixon’s Science Advisory Committee in 1969 was shocked to learn that economists were even allowed on stage to accept their award with the real Nobel laureates. He was incredulous: “You mean they sat on the platform with you?”

Why economics? To answer that question we have to go back to Sweden in the 1960s.

Around the time the prize was created, Sweden’s banking and business interests were busy trying to ram through various so-called “free-market” economic reforms. Their big objective at the time was to loosen political oversight and control over the country’s central bank. According to Philip Mirowski, a professor at the University of Notre Dame who specializes in the history of economics, the

Bank of Sweden was trying to become more independent of democratic accountability in the late 60s, and there was a big political dispute in Sweden as to whether the bank could have effective political independence. In order to support that position, the bank needed to claim that it had a kind of scientific credibility that was not grounded in political support.

Promoters of central bank independence couched their arguments in the obscure language of the neoclassical economic theory of market efficiency. The problem was that few people in Sweden took their neoclassical babble very seriously; they saw the plan for central bank independence for what it was: an attempt to transfer control over economic matters from the democratically elected government into the hands of big business interests, giving them a free hand in running Sweden’s economy without pesky interference from labor unions, voters and elected officials.

For the first few years, the Swedish Central Bank Prize in Economics went to fairly mainstream and maybe even semi-respectable economists. But after establishing the award as credible and serious, the prizes took a hard turn to the right. Over the next decade, the prize was awarded to the most fanatical supporters of theories that concentrated wealth among the top 1% of industrialized society. At the time, neoclassical economics was not fully accepted by the media and political establishment. But the Nobel Prize changed all that. What started as a project to help the Bank of Sweden achieve political independence ended up boosting the credibility of the most regressive strains of free-market economics and paving the way for widespread acceptance of libertarian ideology.

The Swedish Riksbank awarded this year’s Nobel prize for economic sciences to Oliver Hart, a British economist at Harvard University, and Bengt Holmstrom, a Finnish economist at MIT, for their work improving our understanding of how and why contracts work, and when they can be made to work better.

Their work focuses attention on the necessity of trade-offs in setting contract terms; it is yet another in a series of recent prizes which explores the unavoidable imperfections in many critical markets. Mr Holmstrom’s analyses of insurance contracts describe the inevitable trade-off between the completeness of an insurance contract and the extent to which that contract encourages moral hazard. From an insurance perspective, the co-payments that patients must sometimes make when receiving treatment are a waste; it would be better for people to be able to insure fully. Yet because insurers cannot know that all patients are receiving only the treatment they need and no more, they employ co-payments as a way to lean against the problem of moral hazard: that some people will choose to use much more health care than they need when the pool of all those being insured picks up the bill.

A common and important thread in work by Messrs Hart and Holmstrom is the role of power in planning co-operative ventures. Individuals or firms with the ability to hold up arrangements – by withholding their service or the use of a resource they own – wield economic power. That power allows them to capture more of the value generated by a co-operative effort, and potentially to sink it entirely, even if the venture would yield big gains for all participants and society as a whole. Contracts exist to shape power relationships. In some cases, they are there to limit the exercise of hold-up power so that a venture can go forward. In others, they are intended to create or protect certain power relationships in order to encourage good behaviour: workers or firms with the right to exit a relationship, for instance, force other parties to that relationship to take their interests into account. The broader lesson – that power matters – is one economics too often neglects.
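A toy numeric sketch of that co-payment trade-off (my own illustration, not taken from the Laureates’ papers): suppose an insured patient values care c at √c and pays only a share s of its unit cost, with the insurer covering the rest.

```python
# Toy model (illustrative assumptions): the patient consumes care until the
# marginal benefit 1/(2*sqrt(c)) equals their own marginal cost s*price,
# which gives c* = 1/(2*s*price)**2.
def chosen_care(s, price=1.0):
    return 1.0 / (2.0 * s * price) ** 2

for s in (1.0, 0.5, 0.2, 0.05):
    c = chosen_care(s)
    print(f"co-payment share {s:>4}: care consumed {c:8.2f}, insurer pays {(1 - s) * c:8.2f}")

# As s falls toward full insurance (s -> 0), consumption and the insurer's
# bill explode: that is the moral hazard a positive co-payment leans against,
# at the cost of exposing the patient to more out-of-pocket risk.
```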

The theory holds that the contracting costs between economic units are shaped by the nature of the interaction between them. These costs are not operational costs, such as commission fees or transportation costs. Instead, they stem from the lack of clarity and enforceability of the terms of the interaction and each unit’s dependence on the interaction. And, in the words of today’s prize winners, they cause contracts to be incomplete. 

Difficulties in Negotiating a Transaction

Difficulties in Monitoring an Ongoing Transaction

Difficulties in Enforcing an Agreement

When managers spot these sorts of problems on the horizon, a deal that potentially will create value may not get done because the contract is bound to be incomplete. The danger is that the contract will not specify how to resolve conflicts in the future. This is because the agreement between the parties does not cover all contingencies, all issues, or all possible states of the world. To govern a partnership successfully, then, you need to manage the gaps in the contract. Traditional management techniques call for command and control in these situations, to respond quickly and decisively to new conditions. But this solution is missing from typical partnerships, most of which are characterized by a sharing of control. It may be a formal joint venture with shared ownership or a looser arrangement whereby one party controls certain parts of the joint project and the other party controls others. So, each partner’s control in these combinations is also incomplete.

Neoclassical economic dogma holds that money is the “high power” incentive. Normal humans know that this is preposterous. The highest-power incentives are rarely monetary. People give up their lives for others. Some of them do so nominally for “duty, honor, country,” but actually because of the effects of “small unit cohesion.” A second neoclassical dogma is the ignoring of fraud and predation. The 2016 prizes show how, despite their knowledge of the falsity of this implicit assumption, neoclassical economists repeatedly ignore the ways in which CEOs shape perverse incentives and thereby render the Laureates’ compensation and governance policies criminogenic. A third neoclassical dogma is, implicitly, to assume that perverse incentives do not influence CEOs and those they suborn. Holmström and Steven N. Kaplan’s article about corporate governance in light of the Enron-era frauds unintentionally displayed this third neoclassical dogma about incentives. The fourth dogma is that regulation cannot succeed because it lacks “high power” incentives. Criminologists’ understanding of incentives, and of how CEOs set and pervert incentives, is far more sophisticated than neoclassical economists’ myths about incentives. Criminologists provide the content of how predatory CEOs “rig the system.” Criminologists agree that perverse financial incentives are important contributors to white-collar crime.

 

Gravity

Gravity has often been suggested as playing a role in quantum theory, principally as a mechanism that induces quantum state vector collapse. However, at an ontological level, the Invariant Set Postulate does not require superposed states and hence does not require a collapse mechanism, gravitational or otherwise.

[Figure: Gravity Probe B and the warping of space-time]

On the other hand, the order-of-magnitude estimates provided by Penrose, that gravitational processes can be locally significant when a quantum sub-system and a measuring apparatus interact, seem persuasive. Here, we would interpret these estimates as supporting the notion that gravity plays a key role in defining the state space geometry of the invariant set, in particular in defining the regions of relative stability (small local Lyapunov exponents) and relative instability (large local Lyapunov exponents). Black-hole thermodynamics may additionally provide the mechanism which leads to the dimensional reduction of the invariant set compared with that of the embedding state space.

Indeed this leads to the following rather radical suggestion. If the geometry of invariant set I is to be considered primitive, then the geometric properties of the invariant set which lead to certain regions being relatively stable and other regions unstable should be considered a generalization of the notion introduced by Einstein that the phenomenon we call ‘gravity’ is merely a manifestation of some more primitive notion of geometry—here the geometry of a dynamically invariant subset of state space. As such, a challenge will be to try to unify the notions of pseudo-Riemannian geometry for space–time, and fractal geometry for state space. This is a very different perspective on ‘quantum gravity’. 

From this we can make two gravitationally relevant predictions. Firstly, since gravitational processes are not needed to collapse the quantum state vector, experiments to detect gravitational decoherence may fail. By contrast with Objective Reduction, the invariant set I could be seen as providing the preferred basis with respect to which conventional non-gravitational decoherent processes operate. Secondly, if gravity should be seen as a manifestation of the heterogeneity in the geometry of the invariant set, then attempts to quantize gravity within the framework of standard quantum theory will also fail. As such, it is misguided to assume that ‘theories of everything’ can be formulated within conventional quantum theory.

Quantum-Theoretic Schrödinger Equation in Abstract Hilbert Space

If quantum theory could ‘see’ the intricate structure of the invariant set, it would ‘know’ whether a particular putative measurement orientation θ was counterfactual or not. It must be noted that counterfactuals in quantum mechanics appear in discussions of (a) non-locality, (b) pre- and post-selected systems, and (c) interaction-free measurement (quantum interrogation). Only the first two issues are related to counterfactuals as they are considered in the general philosophical literature:

If it were that A, then it would be that B.

The truth value of a counterfactual is decided by the analysis of similarities between the actual and possible counterfactual worlds. The difference between a counterfactual (or counterfactual conditional) and a simple conditional, “If A, then B”, is that in the actual world A is not true, and we need some “miracle” in the counterfactual world to make it true. In the analysis of counterfactuals outside the scope of physics, this miracle is crucial for deciding whether B is true. In physics, however, miracles are not involved. Typically:

A : A measurement M is performed
B : The outcome of M has property P

Physical theory does not deal with the questions of which measurement is performed, or whether a particular measurement is performed at all. Physics yields conditionals: “If Ai, then Bi“. The reason why in some cases these conditionals are considered to be counterfactual is that several conditionals with incompatible premises Ai are considered with regard to a single system. The most celebrated example is the Einstein–Podolsky–Rosen (EPR) argument, in which incompatible measurements of the position or, instead, the momentum of a particle are considered. Stapp has applied a formal calculus of counterfactuals to various EPR-type proofs and, in spite of extensive criticism, continues to claim that the nonlocality of quantum mechanics can be proved without the assumption of “reality”.

However, since, by hypothesis, quantum theory is blind to the intricate structure of the invariant set I, it is unable to discriminate between factual and counterfactual measurement preparations and therefore admits them all as theoretically valid. Hence the quantum-theoretic notion of state is defined on a quantum sub-system in preparation for any measurement that could conceivably be performed on it, irrespective of whether this measurement turns out to be real or counterfactual. This raises a fundamental question. If we interpret the quantum-theoretic notion of state in terms of a sample space defined by an ℏ-neighbourhood on the invariant set, how are we to interpret the quantum-theoretic notion of state associated with counterfactual world states of unreality, not on the invariant set, where no corresponding sample space exists?

Hence, when p ∈ I (the real axis for the Gaussian integers), α|A⟩ + β|B⟩ can be interpreted as a probability defined by some underlying sample space. However, when p ∉ I (the rest of the complex plane for the Gaussian integers), we define a probability-like state α|A⟩ + β|B⟩ from the algebraic properties of probability, i.e. in terms of the algebraic rules of vector spaces. Under such circumstances, α|A⟩ + β|B⟩ can no longer be associated with any underlying sample space. This ‘continuation off the invariant set’ does not contradict Hardy’s definition of state, since if p ∉ I, then its points are not elements of physical reality, and hence cannot be subject to actual measurement. A classical dynamical system is one defined by a set of deterministic differential equations. As such, there is no requirement in classical physics for states to lie on an invariant set, even if the differential equations support such a set. As a result, for a classical system, every point in phase space is a point of ‘physical reality’, and the counterfactual states discussed above are as much states of ‘physical reality’ as are the real-world states. Hence, the world of classical physics is perfectly non-contextual, and is not consistent with the invariant set postulate.

The following interpretation of the two-dimensional Hilbert space spanned by the vectors |A⟩ and |B⟩ emerges from the Invariant Set Postulate. At any time t there corresponds a point in the Hilbert space, where α|A⟩ + β|B⟩ can be interpreted straightforwardly as a frequentist probability based on an underlying sample space given by a √ℏ-neighbourhood of trajectories on the invariant set. However, since the invariant set, and hence its underlying deterministic dynamics, are themselves non-computable, it is algorithmically undecidable whether any given point in the Hilbert space can be associated with such a sample space or not; as such, each point of the Hilbert space is as likely to support an underlying sample space as any other. For points in the Hilbert space which have no correspondence with a sample space on the invariant set, α|A⟩ + β|B⟩ must be considered an abstract mathematical quantity defined purely in terms of the algebraic rules governing a vector space.

Consistent with the rather straightforward probabilistic interpretation of the quantum-theoretic notion of state on the invariant set, it is reasonable to suppose that, on the invariant set, the Schrödinger equation is itself a Liouville equation for conservation of probability in regions where dynamical evolution is Hamiltonian. Since quantum theory is blind to the intricate structure of the invariant set, the quantum-theoretic Schrödinger equation must be formulated in abstract Hilbert Space form, i.e. in terms of unitary evolution, using algebraic properties of probability without reference to an underlying sample space.

One algebraic property inherited from the Schrödinger equation’s interpretation as a Liouville equation on the invariant set is linearity: as an equation for conservation of probability, the Liouville equation is always linear, even when the underlying dynamics Ẋ = f(X) are strongly nonlinear. This suggests that attempts to add nonlinear terms (deterministic or stochastic) to the Schrödinger equation, e.g. during measurement, are misguided.
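To spell out that inherited linearity (a standard fact, restated here rather than quoted from the source): for deterministic dynamics Ẋ = f(X), conservation of probability reads

\[
\frac{\partial \rho}{\partial t} + \nabla_{X}\!\cdot\!\big(\rho\, f(X)\big) = 0,
\]

which is linear in the density ρ no matter how nonlinear f is; on this reading, the unitary (linear) Schrödinger evolution is simply the Hilbert-space transcription of such a Liouville equation.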

Honey-Trap Catalysis or Why Chemistry Mechanizes Complexity? Note Quote.

Was browsing through Yuri Tarnopolsky’s Pattern Chemistry and its influence on, and from, the humanities. Tarnopolsky states his “chemistry” + “humanities” connectivity ideas thusly:


Practically all comments to the folk tales in my collection contained references to a book by the Russian ethnographer Vladimir Propp, who systematized Russian folk tales as ‘molecules’ consisting of the same ‘atoms’ of plot arranged in different ways, and even wrote their formulas. His book was published in the 30’s, when Claude Levi-Strauss, the founder of what became known as structuralism, was studying another kind of “molecules:” the structures of kinship in tribes of Brazil. Remarkably, this time a promise of a generalized and unifying vision of the world was coming from a source in humanities. What later happened to structuralism, however, is a different story, but the opportunity to build a bridge between sciences and humanities was missed. The competitive and pugnacious humanities could be a rough terrain.

I believed that chemistry carried a universal message about changes in systems that could be described in terms of elements and bonds between them. Chemistry was a particular branch of a much more general science about breaking and establishing bonds. It was not just about molecules: a small minority of hothead human ‘molecules’ drove a society toward change. A nation could be hot or cold. A child playing with Lego and a poet looking for a word to combine with others were in the company of a chemist synthesizing a drug.

Further on, Tarnopolsky, following his leads from chemistry and then thermodynamics, found the pattern theory work of the Swedish mathematician Ulf Grenander, which he describes as follows:

In 1979 I heard about a mathematician who tried to list everything in the world. I easily found in a bookstore the first volume of Pattern Theory (1976) by Ulf Grenander, translated into Russian. As soon as I had opened the book, I saw that it was exactly what I was looking for and what I called ‘meta-chemistry’, i.e., something more general than chemistry, which included chemistry as an application, together with many other applications. I can never forget the physical sensation of a great intellectual power that gushed into my face from the pages of that book.

Although the mathematics in the book was well above my level, Grenander’s basic idea was clear. He described the world in terms of structures built of abstract ‘atoms’ possessing bonds to be selectively linked with each other. Body movements, society, pattern of a fabric, chemical compounds, and scientific hypothesis—everything could be described in the atomistic way that had always been considered indigenous for chemistry. Grenander called his ‘atoms of everything’ generators, which tells something to those who are familiar with group theory, but for the rest of us could be a good little metaphor for generating complexity from simplicity. Generators had affinities to each other and could form bonds of various strength. Atomism is a millennia old idea. In the next striking step so much appealing to a chemist, Ulf Grenander outlined the foundation of a universal physical chemistry able to approach not only fixed structures but also “reactions” they could undergo.

There are two major means of control in chemistry and organic life: thermodynamic control (shift of equilibrium) and kinetic control (selective change of speed). People might not be aware that the same mechanisms are employed in social and political control, as well as in large historical events out of control, for example the great global migration of people and jobs in our time, or just the one-way flow of people across the US–Mexican border! Thus, with an awful degree of simplification, the intensification of a hunt for illegal immigrants looks like thermodynamic control by a honey trap, while the punishment of illegal employers is typical negative catalysis, although both may lead to a less stable and more stressed state. In both cases a new equilibrium will be established, different equilibria resting upon different sets of conditions.
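A toy sketch of the two control modes in their original chemical setting (my own illustration of the analogy, not Tarnopolsky’s text): a reversible reaction A ⇌ B with forward and backward rate constants kf and kb.

```python
def relax(kf, kb, a0=1.0, b0=0.0, t_end=10.0, steps=10_000):
    """Euler-integrate dA/dt = -kf*A + kb*B and return the final (A, B)."""
    dt = t_end / steps
    a, b = a0, b0
    for _ in range(steps):
        flux = (kf * a - kb * b) * dt
        a -= flux
        b += flux
    return round(a, 3), round(b, 3)

print(relax(kf=1.0, kb=1.0))    # baseline equilibrium: A = B = 0.5
print(relax(kf=4.0, kb=1.0))    # "thermodynamic control": the kf/kb ratio changes,
                                # so a *different* equilibrium is reached
print(relax(kf=10.0, kb=10.0))  # "kinetic control"/catalysis: both constants scaled
                                # equally, the *same* equilibrium is reached, faster
```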


Should I treat people as molecules? Hardly, unless I am from the Andromeda Galaxy. Complex systems never come to global equilibrium, although local equilibrium can exist for some time. They can be in a state of homeostasis, which, again, is not the same as a steady state in physics and chemistry. Homeostasis is the global complement of the classical local Darwinism of mutation and selection.

Taking other examples, immigration discrimination in favor of educated or wealthy professionals is also catalysis of the affirmative-action type. It speeds up the drive to equilibrium. An attractive salary for rare specialists is an equilibrium shift (honey trap) because it does not discriminate between competitors. Ideally, neither does exploitation of foreign labor. Bureaucracy is a global thermodynamic freeze that can be selectively overcome by 100% catalytic connections and bribes. Severe punishment for bribery is thermodynamic control. The use of undercover agents looks like a local catalyst: you can wait for the crook to make a mistake or you can speed it up. A tax incentive or burden is a shift of equilibrium. Preferred (or discouraging) treatment of competitors is catalysis (or inhibition).

There is no catalysis without selectivity and no selectivity without competition. Equilibrium, however, is not selective: it applies globally to a sufficiently fluid system. Organic life, society, and economy operate by both equilibrium shift and catalysis. More examples: by manipulating the interest rate, the RBI employs thermodynamic control; by tax cuts for efficient use of energy, the government employs kinetic control, until saturation comes. Thermodynamic and kinetic factors are necessary for understanding complex systems (although only professionals can talk about them reasonably), but they are not sufficient. History is not chemistry, because organic life and human society develop by design patterns, so to speak, or archetypal abstract devices, which do not follow from any physical laws. They all, together with René Thom’s morphologies, have roots not in thermodynamics but in topology. Anything that cannot be presented in terms of points, lines, and interactions between the points is far from chemistry. Topology is blind to metrics, but if Pattern Theory were not metrical, it would be just a version of graph theory.

Algebraic Representation of Space-Time as Esoteric?

If the philosophical analysis of the singular feature of space-time is able to shed some new light on the possible nature of space-time, one should not lose sight of the fact that, although connected to fundamental issues in cosmology, like the ‘initial’ state of our universe, space-time singularities involve unphysical behaviour (like, for instance, the very geodesic incompleteness implied by the singularity theorems or some possible infinite value for physical quantities) and constitute therefore a physical problem that should be overcome. We now consider some recent theoretical developments that directly address this problem by drawing some possible physical (and mathematical) consequences of the above considerations.


Indeed, according to the algebraic approaches to space-time, the singular feature of space-time is an indicator of the fundamental non-local character of space-time: it is actually conceived as a very important part of General Relativity, one that reveals the fundamentally pointless structure of space-time. The latter cannot be described by the usual mathematical tools like standard differential geometry, since, as we have seen above, it presupposes some “amount of locality” and is inherently point-like.

The mathematical roots of such considerations are to be found in the full equivalence of, on the one hand, the usual (geometric) definition of a differentiable manifold M in terms of a set of points with a topology and a differential structure (compatible atlases) with, on the other hand, the definition using only the algebraic structure of the (commutative) ring C(M) of the smooth real functions on M (under pointwise addition and multiplication; indeed C(M) is a (concrete) algebra). For instance, the existence of points of M is equivalent to the existence of maximal ideals of C(M). Indeed, all the differential geometric properties of the space-time Lorentz manifold (M, g) are encoded in the (concrete) algebra C(M). Moreover, the Einstein field equations and their solutions (which represent the various space-times) can be constructed only in terms of the algebra C(M).

Now, the algebraic structure of C(M) can be considered as primary (in exactly the same way in which space-time points or regions, represented by manifold points or sets of manifold points, may be considered as primary) and the manifold M as derived from this algebraic structure. Indeed, one can define the Einstein field equations from the very beginning in abstract algebraic terms, without any reference to the manifold M, as well as the abstract algebras, called ‘Einstein algebras’, satisfying these equations. The standard geometric description of space-time in terms of a Lorentz manifold (M, g) can then be considered as inducing a mathematical representation of an Einstein algebra.

Without entering into too many technical details, the important point for our discussion is that Einstein algebras and sheaf-theoretic generalizations thereof reveal the above-discussed non-local feature of (essential) space-time singularities from a different point of view. In the framework of the b-boundary construction M̄ = M ∪ ∂M, the (generalized) algebraic structure C corresponding to M can be prolonged to the (generalized) algebraic structure C̄ corresponding to the b-completed M̄, such that the restriction of C̄ to M is C; in the singular cases, only constant functions (and therefore only zero vector fields) can be prolonged. This underlines the non-local feature of the singular behaviour of space-time, since constant functions are non-local in the sense that they do not distinguish points. This fundamental non-local feature suggests non-commutative generalizations of the Einstein-algebra formulation of General Relativity, since non-commutative spaces are highly non-local. In general, non-commutative algebras have no maximal ideals, so that the very concept of a point has no counterpart within this non-commutative framework. Therefore, according to this line of thought, space-time at the fundamental level is completely non-local. It then seems that the very distinction between singular and non-singular is no longer meaningful at the fundamental level; within this framework, space-time singularities are ‘produced’ at a less fundamental level, together with standard physics and its standard differential (commutative) geometric representation of space-time.
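A concrete instance of the point–ideal correspondence invoked above (standard commutative algebra, added for illustration): every point p ∈ M determines a maximal ideal of the function algebra,

\[
\mathfrak{m}_p \;=\; \{\, f \in C^\infty(M) \;:\; f(p) = 0 \,\},
\]

so talk of ‘points’ can be traded for talk of maximal ideals of C(M); in the non-commutative generalizations invoked above there is, as noted, no analogous supply of such ideals to play the role of points, which is the precise sense in which those spaces are pointless and non-local.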

Although these theoretical developments are rather speculative, it must be emphasized that the algebraic representation of space-time itself is “by no means esoteric”. Starting from an algebraic formulation of the theory, which is completely equivalent to the standard geometric one, it provides another point of view on space-time and its singular behaviour that should not be dismissed too quickly. At least it underlines the fact that our interpretative framework for space-time should not be dependent on the traditional atomistic and local (point-like) conception of space-time (induced by the standard differential geometric formulation). Indeed, this misleading dependence on the standard differential geometric formulation seems to be at work in some reference arguments in contemporary philosophy of space-time, like in the field argument. According to the field argument, field properties occur at space-time points or regions, which must therefore be presupposed. Such an argument seems to fall prey to the standard differential geometric representation of space-time and fields, since within the algebraic formalism of General Relativity, (scalar) fields – elements of the algebra C – can be interpreted as primary and the manifold (points) as a secondary derived notion.

Kant, Poincaré, Sklar and Philosophico-Geometrical Problem of Under-Determination. Note Quote.


What did Kant really mean in viewing Euclidean geometry as the correct geometrical structure of the world? It is widely known that one of the main goals that Kant pursued in the First Critique was that of unearthing the a priori foundations of Newtonian physics, which describes the structure of the world in terms of Euclidean geometry. How did he achieve that? Kant maintained that our understanding of the physical world had its foundations not merely in experience, but in both experience and a priori concepts. He argues that the possibility of sensory experience depends on certain necessary conditions which he calls a priori forms and that these conditions structure and hold true of the world of experience. As he maintains in the “Transcendental Aesthetic”, Space and Time are not derived from experience but rather are its preconditions. Experience provides those things which we sense. It is our mind, though, that processes this information about the world and gives it order, allowing us to experience it. Our mind supplies the conditions of space and time to experience objects. Thus “space” for Kant is not something existing – as it was for Newton. Space is an a priori form that structures our perception of objects in conformity to the principles of the Euclidean geometry. In this sense, then, the latter is the correct geometrical structure of the world. It is necessarily correct because it is part of the a priori principles of organization of our experience. This claim is exactly what Poincaré criticized about Kant’s view of geometry. Poincaré did not agree with Kant’s view of space as precondition of experience. He thought that our knowledge of the physical space is the result of inferences made out of our direct perceptions.

This knowledge is a theoretical construct, i.e., we infer the existence and nature of physical space as an explanatory hypothesis which provides us with an account of the regularity we experience in our direct perceptions. But this hypothesis does not possess the necessity of an a priori principle that structures what we directly perceive. Although Poincaré does not endorse an empiricist account, he seems to think that an empiricist view of geometry is more adequate than the Kantian conception. In fact, he considers more plausible the idea that only a large number of observations probing the geometry of the physical world can establish which geometrical structure is the correct one. But this empiricist approach is not going to work either. In fact, Poincaré does not endorse an empiricist view of geometry. The outcome of his considerations about a comparison between the empiricist and Kantian accounts of geometry is well described by Sklar:

Nevertheless the empiricist account is wrong. For, given any collections of empirical observations a multitude of geometries, all incompatible with one another, will be equally compatible with the experimental results.

This is the problem of under-determination of hypotheses about the geometrical structure of physical space by experimental evidence. The under-determination is not due to our limited ability to collect experimental facts. No matter how rich and sophisticated our experimental procedures for accumulating empirical results are, these results will never be compelling enough to support just one of the hypotheses about the geometry of physical space, ruling out the competitors once and for all. Actually, it is even worse than that: empirical results seem not to give us any reason at all to think one or the other hypothesis correct. Poincaré thought that this problem was grist to the mill of the conventionalist approach to geometry. The adoption of a geometry for physical space is a matter of making a conventional choice. A brief description of Poincaré’s disk model might unravel the issue a bit more. The short story about this imaginary world shows that an empiricist account of geometry fails to be adequate. In fact, Poincaré describes a scenario in which Euclidean and hyperbolic geometrical descriptions of that physical space end up being equally consistent with the same collection of empirical data. However, what this story tells us can be generalized to any other scenario, including ours, in which a scientific inquiry concerning the intrinsic geometry of the world is performed.

The imaginary world described in Poincaré’s example is a Euclidean two-dimensional disk heated to a constant temperature at the center, whereas along the radius R it is heated in a way that produces a temperature variation described by R² − r². Therefore, the edge of the disk is uniformly cooled to zero.

A group of scientists living on the disk are interested in knowing what the intrinsic geometry of their world is. As Sklar says, the equipment available to them consists of rods that uniformly dilate with increasing temperature, i.e. at each point of the space they all change their lengths in a way that is directly proportional to the value of the temperature at that point. However, the scientists are not aware of this peculiar temperature distortion of their rods. So, without anybody knowing, every time a measurement is performed the rods shrink or dilate, depending on whether they are close to the edge or to the center. After repeated measurements all over the disk, they have a list of empirical data that seems to strongly support the idea that their world is a Lobachevskian plane. So, this view becomes the official one. However, a different interpretation of the data is presented by a member of the community who, striking a discordant note, claims that those empirical data can be taken to indicate that the world is in fact a Euclidean disk, but equipped with fields shrinking or dilating lengths.
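A compact way to see why the data come out Lobachevskian (my reconstruction of the standard reading of Poincaré’s example, not a quotation): since every rod’s length scales with the local temperature R² − r², measured lengths are governed by

\[
ds_{\mathrm{measured}} \;=\; \frac{ds_{\mathrm{Euclidean}}}{R^{2}-r^{2}},
\]

and up to a constant factor this is exactly the metric of the Poincaré disk model of hyperbolic geometry, so every measurement the inhabitants can make is consistent with a Lobachevskian plane even though, viewed from outside, the disk is Euclidean.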

Although the two geometrical theories about the structure of physical space are competitors, the empirical results collected by the scientists support both of them. According to our external three-dimensional Euclidean perspective we know their two-dimensional world is Euclidean, and so we know that only the innovator’s interpretation is the correct one. Using our standpoint, the problem of under-determination would seem indeed a problem of epistemic access due to the particular experimental repertoire of the inhabitants. After all, expanding this repertoire and increasing the amount of empirical data can overcome the problem. But, according to Poincaré, that would completely miss the point. Moving from our “superior” perspective to theirs would place us in exactly the same situation as they are in, i.e. in the impossibility of deciding which geometry is the correct one. But more importantly, Poincaré seems to say that no arbitrarily large amount of empirical data can refute a geometric hypothesis. In fact, a scientific theory about space is divided into two branches, a geometric one and a physical one. These two parts are deeply related. It would be possible to save any geometric hypothesis about space from experimental refutation by suitably changing some features of the physical branch of the theory. According to Sklar, this fact forces Poincaré to the conclusion that the choice of one hypothesis among several competitors is purely conventional.

The problem of under-determination comes up in the analysis of dual string theories: two string theories postulating geometrically inequivalent backgrounds can, if dual, produce the same experimental results: same expectation values, same scattering amplitudes, and so on. Therefore, similarly to Poincaré’s short story, empirical data relative to the physical properties and physical dynamics of strings are not sufficient to determine which one of the two different geometries postulated for the background is the right one, or whether there is any more fundamental geometry at all influencing physical dynamics.

Genomic analysis of family data reveals additional genetic effects on intelligence and personality

H/T West Hunter.

Bookmarked.

The rare variants that affect IQ will generally decrease IQ – and since pleiotropy is the norm, usually they’ll be deleterious in other ways as well. Genetic load.


But, considering what is now known about the biological origins of cognition and intelligence, it is generally difficult to take claims of discrimination seriously when underrepresented groups also display relatively lower intelligence profiles. However, in this case there is no reason to think that conservatives as a group have an intellectual profile below the general population. Social conservatives tend to be a little lower in intelligence relative to liberals, but free-market conservatives (libertarians) tend to be smarter than liberals. Being very partisan, either liberal or conservative, tends to be associated with high IQ as well. Increased income levels, which are a proxy for IQ, also move people right ideologically. In other words, there is nothing that biologically determined intelligence can do to explain the lack of conservatives, and even moderates, in the humanities… (Race Hustling, /r/science)


Bad loan crisis continues: 56.4 per cent rise in NPAs of banks

Gross non-performing assets (NPAs), or bad loans, of state-owned banks surged 56.4 per cent to Rs 614,872 crore during the 12-month period ended December 2016, and appear set to rise further in the next two quarters, with many units, especially in the small and medium sectors, struggling to repay after being hit by the government’s decision to withdraw currency notes of Rs 500 and Rs 1,000 denomination… The RBI discontinued fresh corporate debt restructuring (CDR) with effect from April 1, 2016. Here, promoters’ equity was financed by the borrowed amount, which added the burden of debt servicing on banks. The CDR cell faced problems on account of delays in the sale of unproductive assets due to the various legalities involved.


And despite the discontinuation, some strands of CDR are retained, to say the least. What is wrong, or what must have gone wrong (or been perceived as such), for the Central Bank to have withdrawn support for CDR? A small take follows.

15 per cent is still talking about minimalist valuations. The most important part of the whole report lies in CDR failing, and that too when promoters’ equity is funded with borrowed money, intensifying the burden of bank-directed debt financing. This works at cross-purposes with Sebi regulations on companies/corporations pledging their shares: when the valuations of pledged shares slump relative to market capitalization, the situation is really a fix. Companies where promoters have pledged a large share of their holdings are viewed with caution, because if a promoter defaults on this debt, the lender transfers the shares into its own hands on the one hand and, when it needs funds, dumps the stock on the market on the other, leading to sharp movements in share prices. These fluctuations really nosedive when the economy is on a downturn, forcing promoters to borrow against their shares (not that they do not do that otherwise) and all the more prompting them to go out and borrow to meet volatility checks, denting balance-sheet health.

Frege’s Ontological Correlates of Propositions

For Frege there were only two ontological correlates of propositions: the True and the False. All true propositions denote the True, and all false propositions denote the False. From an ontological point of view, if all true propositions denote exactly one and the same entity, then the underlying philosophical position is the absolute monism of facts.

Let us disprove what Suszko called ‘Frege’s axiom’: namely, the assumption that there exist only two referents for propositions.

Frege’s position on propositions was part of a more general view. Indeed, Frege adopted a principle of homogeneity (Perzanowski) according to which there are two fundamental categories of signs (names and propositions), each with its own fundamental category of referent (Bedeutungen and truth-values) and of sense (Sinne and Gedanken).

Both categories of signs (names and propositions) have sense and reference. The sense of a name is its Sinn, that way in which its referent is given, while the referent itself, the Bedeutung, is the object named by the name. As for propositions, their sense is the Gedanke, while their reference is their logical value.

Since the two semiotic triangles are entirely similar in structure, we need analyze only one of them: that relative to propositions.

[Figure: the semiotic triangle for propositions, with vertices p, s(p) and r(p)]

Here p is a proposition, s(p) is the sense of p, and r(p) is the referent of p. The functional composition states that s(p) is the way in which p yields r(p). The triangle has been drawn with the functions linking its vertexes explicitly shown. When the functions are composable, the triangle is said to commute, yielding

f(s(p)) = r(p), i.e. (f ∘ s)(p) = r(p)

An interesting question now arises: is it possible to generalize the semiotic triangle? And if it is possible to do so, what is required? A first reorganization and generalization of the semiotic triangle therefore involves an explicit differentiation between the truth-value assigning function and the referent assigning function. We thus have the following double semiotic triangle:

[Figure: the double semiotic triangle]

where r stands for the referent assigning function and t for the truth-value assigning function. Extending the original semiotic triangle by also considering utterances:

[Figure: the semiotic triangle extended to utterances]

Suszko uses the terms logical valuations for the procedures that assign truth-values, and algebraic valuations for those that assign referents. By arguing for the existence of only two referents, Frege ends up by collapsing logical and algebraic valuations together, thereby rendering them indistinguishable.

Having generalized the semiotic triangle into the double semiotic triangle, we must now address the following questions:

  1. when do two propositions have the same truth value?
  2. when do two propositions have the same referent?
  3. when do two propositions have the same sense?

Sameness of logical value will be denoted by ↔ (logical equivalence), while sameness of referent will be indicated with ≡ (not to be confused with the equiform sign = used to express indiscernibility) and sameness of sense (synonymy) by ≈. Two propositions are synonymous when they have the same sense:

(p ≈ q) = 1 iff (s(p) = s(q)) = 1

Two propositions are identical when they have the same referent:

(p ≡ q) = 1 iff (r(p) = r(q)) = 1

Two propositions are equivalent when they have the same truth value:

(p ↔ q) = 1 iff (t(p) = t(q)) = 1

These various concepts are functionally connected as follows:

s(p) = s(q) implies r(p) = r(q), r(p) = r(q) implies t(p) = t(q)
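A minimal toy model of these three valuations (my own sketch, not from Suszko): propositions carry a sense, a referent drawn from a set larger than {True, False}, and a truth value; the chain of implications above then holds by construction, while the converses fail.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Prop:
    sense: str      # s(p): the Gedanke, here just a label
    referent: str   # r(p): a "situation"; more than two are allowed
    truth: bool     # t(p): the logical value

def synonymous(a, b): return a.sense == b.sense        # p ≈ q
def identical(a, b):  return a.referent == b.referent  # p ≡ q
def equivalent(a, b): return a.truth == b.truth        # p ↔ q

# Hypothetical examples: three propositions over two distinct situations.
p1 = Prop("snow is white",                 "snow-situation",  True)
p2 = Prop("grass is green",                "grass-situation", True)
p3 = Prop("it is true that snow is white", "snow-situation",  True)

print(equivalent(p1, p2), identical(p1, p2))   # True False: same value, distinct referents
print(identical(p1, p3), synonymous(p1, p3))   # True False: same referent, distinct senses
```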

In general, the constraints that we impose on referents correspond to the ontological assumptions that characterize the theory. The most general logic of all is the one that imposes no restriction at all on r-valuations. Just as Fregean logic recognizes only two referents, so the most general logic recognizes a more than denumerable set of them. Between these two extremes, of course, there are numerous intermediate cases. Pure non-Fregean logic is extremely weak, a chaos. If it is to yield something, it has to be strengthened.