Anthropocosmism. Thought of the Day 20.0

Anthropocosmic

Russian cosmism appeared as a sort of antithesis to the classical physicalist paradigm of thinking, which was based on a strict differentiation of man and nature. It attempted to revive the ontology of an integral vision that organically unites man and cosmos. These problems were discussed in both the scientific and the religious forms of cosmism. In the religious form, N. Fedorov’s conception was the most significant. Like other cosmists, he was not satisfied with the split of the Universe into man and nature as opposed entities. Such an opposition, in his opinion, condemned nature to thoughtlessness and destructiveness, and man to submission to the existing “evil world”. Fedorov maintained the ideas of a unity of man and nature, a connection between “soul” and cosmos in terms of regulation and resurrection. He offered a project of resurrection that was not understood only as a resurrection of ancestors; it contained at least two aspects: raising from the dead in a narrow, direct sense, and in a wider, metaphoric sense that includes nature’s ability of self-reconstruction. Fedorov’s resurrection project was connected with the idea of the human mind’s going out into outer space. For him, “the Earth is not bound”, and “human activity cannot be restricted by the limits of the terrestrial planet”, which is only the starting point of this activity. One should look critically at the Utopian and fantastic elements of N. Fedorov’s views, which contain a considerable grain of mysticism, but there are nevertheless important rational moments in his conception: the quite clearly expressed idea of the interconnection and unity of man and cosmos, the idea of the correlation of the rational and moral elements of man, and the ideal of the unity of humanity as a planetary community of people.

But while religious cosmism was more notable for the fantastic and speculative character of its discourses, the natural-scientific trend, in solving the problem of the interconnection between man and cosmos, paid special attention to the comprehension of the scientific achievements that confirmed that interconnection. N. G. Kholodny developed these ideas in terms of anthropocosmism, opposing it to anthropocentrism. He wrote: “Having put himself in the place of God, man destroyed his natural connections with nature and condemned himself to a long solitary existence”. In Kholodny’s opinion, anthropocentrism passed through several stages in its development: at the first stage man did not yet oppose himself to nature; rather, he “humanized” the natural forces. At the second stage man, extracting himself from nature, looks at it as an object of research, the basis of his well-being. At the next stage man uplifts himself over nature, relying in this activity on the spiritual forces with which he studies the Universe. And, lastly, the final stage is characterized by a crisis of the anthropocentric worldview, which starts to collapse under the influence of the achievements of science and philosophy. N. G. Kholodny was right in noting that in the past anthropocentrism had played a positive role: it freed man from his fear of nature by uplifting him over the latter. But gradually, beside anthropocentrism, there appeared the sprouts of a new vision – anthropocosmism. Kholodny regarded anthropocosmism as a certain line of development of the human intellect, will and feelings, which led people to their aims. An essential element in anthropocosmism was the attempt to reconsider the question of man’s place in nature, and of his interrelations with cosmos, on the foundation of natural-scientific knowledge.

Cosmology: Friedmann-Lemaître Universes


Cosmology starts by assuming that the large-scale evolution of spacetime can be determined by applying Einstein’s field equations of Gravitation everywhere: global evolution will follow from local physics. The standard models of cosmology are based on the assumption that once one has averaged over a large enough physical scale, isotropy is observed by all fundamental observers (the preferred family of observers associated with the average motion of matter in the universe). When this isotropy is exact, the universe is spatially homogeneous as well as isotropic. The matter motion is then along irrotational and shearfree geodesic curves with tangent vector ua, implying the existence of a canonical time-variable t obeying ua = −t,a. The Robertson-Walker (‘RW’) geometries used to describe the large-scale structure of the universe embody these symmetries exactly. Consequently they are conformally flat, that is, the Weyl tensor is zero:

Cijkl := Rijkl + 1/2(Rikgjl + Rjlgik − Rilgjk − Rjkgil) − 1/6R(gikgjl − gilgjk) = 0 —– (1)

this tensor represents the free gravitational field, enabling non-local effects such as tidal forces and gravitational waves which do not occur in the exact RW geometries.

Comoving coordinates can be chosen so that the metric takes the form:

ds2 = −dt2 + S2(t)dσ2, ua = δa0 (a=0,1,2,3) —– (2)

where S(t) is the time-dependent scale factor, and the worldlines with tangent vector ua = dxa/dt represent the histories of fundamental observers. The space sections {t = const} are surfaces of homogeneity and have maximal symmetry: they are 3-spaces of constant curvature K = k/S2(t) where k is the sign of K. The normalized metric dσ2 characterizes a 3-space of normalized constant curvature k; coordinates (r, θ, φ) can be chosen such that

dσ2 = dr2 + f2(r)(dθ2 + sin2θ dφ2) —– (3)

where f(r) = {sin r, r, sinh r} if k = {+1, 0, −1} respectively. The rate of expansion at any time t is characterized by the Hubble parameter H(t) = Ṡ/S.

To determine the metric’s evolution in time, one applies the Einstein Field Equations, showing the effect of matter on space-time curvature, to the metric (2,3). Because of local isotropy, the matter tensor Tab necessarily takes a perfect fluid form relative to the preferred worldlines with tangent vector ua:

Tab = (μ + p/c2)uaub + (p/c2)gab —– (4)

where c is the speed of light. The energy density μ(t) and the pressure term p(t)/c2 are the timelike and spacelike eigenvalues of Tab. The integrability conditions for the Einstein Field Equations are the energy-density conservation equation

Tab;b = 0 ⇔ μ̇ + (μ + p/c2)3Ṡ/S = 0 —– (5)

This becomes determinate when a suitable equation of state function w := p/(c2μ) relates the pressure p to the energy density μ and temperature T: p = w(μ,T)c2μ (w may or may not be constant). Baryons have {pbar = 0 ⇔ w = 0} and radiation has {prad = μradc2/3 ⇔ w = 1/3, μrad = aT4rad}, which by (5) imply

μbar ∝ S−3, μrad ∝ S−4, Trad ∝ S−1 —– (6)
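For constant w these scalings follow directly from the conservation equation (5); as a sketch, substituting p = wc2μ and separating variables:

```latex
\dot\mu + 3(1+w)\,\mu\,\frac{\dot S}{S} = 0
\quad\Longrightarrow\quad
\frac{d\mu}{\mu} = -3(1+w)\,\frac{dS}{S}
\quad\Longrightarrow\quad
\mu \propto S^{-3(1+w)}
```

Setting w = 0 gives μbar ∝ S−3 and w = 1/3 gives μrad ∝ S−4, while Trad ∝ S−1 then follows from μrad = aT4rad.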

The scale factor S(t) obeys the Raychaudhuri equation

3S̈/S = −1/2 κ(μ + 3p/c2) + Λ —– (7)

where κ is the gravitational constant and Λ is the cosmological constant. A cosmological constant can also be regarded as a fluid with pressure p related to the energy density μ by {p = −μc2 ⇔ w = −1}. Equation (7) shows that the active gravitational mass density of the matter and fields present is μgrav := μ + 3p/c2. For ordinary matter this will be positive:

μ + 3p/c2 > 0 ⇔ w > −1/3 —– (8)

(the ‘Strong Energy Condition’), so ordinary matter will tend to cause the universe to decelerate (S̈ < 0). It is also apparent that a positive cosmological constant on its own will cause an accelerating expansion (S̈ > 0). When matter and a cosmological constant are both present, either result may occur depending on which effect is dominant. The first integral of equations (5, 7) when Ṡ ≠ 0 is the Friedmann equation

Ṡ2/S2 = κμ/3 + Λ/3 − k/S2 —– (9)

This is just the Gauss equation relating the 3-space curvature to the 4-space curvature, showing how matter directly causes a curvature of 3-spaces. Because of the spacetime symmetries, the ten Einstein Field Equations are equivalent to the two equations (7, 9). Models of this kind, that is with a Robertson-Walker (‘RW’) geometry with metric (2, 3) and dynamics governed by equations (5, 7, 9), are called Friedmann-Lemaître universes (‘FL’). The Friedmann equation (9) controls the expansion of the universe, and the conservation equation (5) controls the density of matter as the universe expands; when Ṡ ≠ 0, equation (7) will necessarily hold if (5, 9) are both satisfied. Given a determinate matter description (specifying the equation of state w = w(μ, T) explicitly or implicitly) for each matter component, the existence and uniqueness of solutions follows both for a single matter component and for a combination of different kinds of matter, for example μ = μbar + μrad + μcdm + μν where we include cold dark matter (cdm) and neutrinos (ν). Initial data for such solutions at an arbitrary time t0 (e.g. today) consists of:

• The Hubble constant H0 := (Ṡ/S)0 = 100h km/sec/Mpc;

• A dimensionless density parameter Ωi0 := κμi0/3H02 for each type of matter present (labelled by i);

• If Λ ≠ 0, either ΩΛ0 := Λ/3H02, or the dimensionless deceleration parameter q0 := −(S̈/S)0H0−2.

Given the equations of state for the matter, this data then determines a unique solution {S(t), μ(t)}, i.e. a unique corresponding universe history. The total matter density is the sum of the terms Ωi0 for each type of matter present, for example

Ωm0 = Ωbar0 + Ωrad0 + Ωcdm0 + Ων0, —– (10)

and the total density parameter Ω0 is the sum of that for matter and for the cosmological constant:

Ω0 = Ωm0 + ΩΛ0 —– (11)

Evaluating the Raychaudhuri equation (7) at the present time gives an important relation between these parameters: when the pressure term p/c2 can be ignored relative to the matter term μ (as is plausible at the present time, and assuming we represent ‘dark energy’ as a cosmological constant),

q0 = 1/2 Ωm0 − ΩΛ0 —– (12)

This shows that a cosmological constant Λ can cause an acceleration (negative q0); if it vanishes, the expression simplifies: Λ = 0 ⇒ q0 = 1/2 Ωm0, showing how matter causes a deceleration of the universe. Evaluating the Friedmann equation (9) at the time t0, the spatial curvature is

K0 := k/S02 = H02(Ω0 − 1) —– (13)

The value Ω0 = 1 corresponds to spatially flat universes (K0 = 0), separating models with positive spatial curvature (Ω0 > 1 ⇔ K0 > 0) from those with negative spatial curvature (Ω0 < 1 ⇔ K0 < 0).
The FL models are the standard models of modern cosmology, surprisingly effective in view of their extreme geometrical simplicity. One of their great strengths is their explanatory role in terms of making explicit the way the local gravitational effect of matter and radiation determines the evolution of the universe as a whole, this in turn forming the dynamic background for local physics (including the evolution of the matter and radiation).

Natural History of Ashkenazi Intelligence


There are a number of genetic diseases that are unusually common among the Ashkenazim. We also know a fair amount about genetic disease among the Sephardic and Asian Jews. How can we categorize these diseases and the associated mutations?

Most fall into a few categories, as noted by Ostrer: sphingolipid storage diseases, glycogen storage diseases, clotting disorders, disorders of adrenal steroid biosynthesis, and disorders of DNA repair. It is interesting that although several Jewish disorders fall into each of these categories, sometimes several in the same population, none of the Finnish genetic diseases, for example, fall into any of these categories (Norio), while only one of the genetic disorders common in Quebec does, Tay-Sachs (Scriver). But that is as expected: genetic diseases made common by drift would be very unlikely to cluster in only a few metabolic paths, as if on a few pages of a biochemistry text. The existence of these categories or disease clusters among the Jews suggests selective forces at work, just as the many different genetic disorders affecting hemoglobin and the red cell in the Old World tropics suggest selection, which we know is for malaria resistance.

The two most important genetic disease clusters among the Ashkenazim are the sphingolipid storage disorders (Tay-Sachs, Gaucher, Niemann-Pick, and mucolipidosis type IV) and the disorders of DNA repair (BRCA1, BRCA2, Fanconi’s anemia type C, and Bloom syndrome) but there are several others that are at quite elevated frequency in Ashkenazim. Using published allele frequencies we can calculate that the probability at conception of having at least one allele of the sphingolipid or DNA repair complex is 15%. If we add Canavan disease, familial dysautonomia, Factor XI deficiency (Peretz et al.), and the I1307K allele of the APC locus (Gryfe et al.) this figure grows to 32%, and if we further include non-classical congenital adrenal hyperplasia the probability of having at least one allele from these disorders is 59%.

The sphingolipid storage mutations were probably favored and became common because of natural selection, yet we don’t see them in adjacent populations. We suggest that this is because the social niche favoring intelligence was key, rather than geographic location. It is unlikely that these mutations led to disease resistance in heterozygotes, for two reasons. First, there is no real evidence for any disease resistance in heterozygotes (claims of TB resistance are unsupported), and most of the candidate serious diseases (smallpox, TB, bubonic plague, diarrheal diseases) affected the neighboring populations, that is, people living literally across the street, as well as the Ashkenazim. Second, and most important, the sphingolipid mutations look like IQ boosters. The key datum is the effect of increased levels of the storage compounds. Glucosylceramide, the Gaucher storage compound, promotes axonal growth and branching (Schwartz et al.). In vitro, decreased glucosylceramide results in stunted neurons with short axons, while an increase over normal levels (caused by chemically inhibiting glucocerebrosidase) increases axon length and branching. There is a similar effect in Tay-Sachs: decreased levels of GM2 ganglioside inhibit dendrite growth, while an increase over normal levels causes a marked increase in dendritogenesis. This increased dendritogenesis also occurs in Niemann-Pick type A cells, and in animal models of Tay-Sachs and Niemann-Pick.

Dendritogenesis appears to be a necessary step in learning. Associative learning in mice significantly increases hippocampal dendritic spine density, while enriched environments are also known to increase dendrite density. It is likely that a tendency to increased dendritogenesis (in Tay-Sachs and Niemann-Pick heterozygotes) or to increased axonal growth and branching (in Gaucher heterozygotes) facilitates learning. Heterozygotes have half the normal amount of the lysosomal hydrolases and should show modest elevations of the sphingolipid storage compounds. A prediction is that Gaucher, Tay-Sachs, and Niemann-Pick heterozygotes will have higher tested IQ than control groups, probably on the order of 5 points.

 


Duqu 2.0


unsigned int __fastcall xor_sub_10012F6D(int encrstr, int a2)
{
  unsigned int result; // eax@2
  int v3;              // ecx@4

  if ( encrstr )
  {
    result = *(_DWORD *)encrstr ^ 0x86F186F1;
    *(_DWORD *)a2 = result;
    if ( (_WORD)result )
    {
      v3 = encrstr - a2;
      do
      {
        if ( !*(_WORD *)(a2 + 2) )
          break;
        a2 += 4;
        result = *(_DWORD *)(v3 + a2) ^ 0x86F186F1;
        *(_DWORD *)a2 = result;
      }
      while ( (_WORD)result );
    }
  }
  else
  {
    result = 0;
    *(_WORD *)a2 = 0;
  }
  return result;
}

A closer look at the above C code reveals that the string decryptor routine actually has two parameters: “encrstr” and “a2”. First, the decryptor function checks whether the input buffer (the pointer to the encrypted string) points to a valid memory area (i.e., it is not NULL). After that, the first 4 bytes of the encrypted string buffer are XORed with the key “0x86F186F1” and the result of the XOR operation is stored in the variable “result”. The first DWORD (first 4 bytes) of the output buffer a2 is then populated with this resulting value (*(_DWORD *)a2 = result;). Therefore, the first 4 bytes of the output buffer will contain the first 4 bytes of the cleartext string.

If the first two bytes (first WORD) of the current value stored in the variable “result” are ‘\0’ characters, the original cleartext string was an empty string and the resulting output buffer will be populated with a zero value, stored in 2 bytes. If the first half of the decrypted block (the “result” variable) contains something else, the decryptor routine checks the second half of the block (“if ( !*(_WORD *)(a2 + 2) )”). If this WORD value is NULL, then decryption ends and the output buffer will contain only one Unicode character followed by two closing ’\0’ bytes.

If the first decrypted block doesn’t contain a zero character (generally this is the case), the decryption cycle continues with the next 4-byte encrypted block. The pointer of the output buffer is incremented by 4 bytes to be able to store the next cleartext block (”a2 += 4;”). After that, the following 4-byte block of the ”ciphertext” is decrypted with the fixed decryption key (“0x86F186F1”). The result is then stored in the next 4 bytes of the output buffer. Now the output buffer contains 2 blocks of the cleartext string.

The condition of the cycle checks whether the decryption has reached its end by checking the first half of the current decrypted block. If it has not reached the end, the cycle continues with the decryption of the next input blocks, as described above. Before the decryption of each 4-byte ”ciphertext” block, the routine also checks the second half of the previous cleartext block to decide whether the decoded string has ended.

The original Duqu used a very similar string decryption routine, which is shown below. This routine is essentially an exact copy of the previously discussed one (variable ”a1” is analogous to the ”encrstr” argument). The only difference between the Duqu 2.0 (duqu2) and Duqu string decryptor routines is that the XOR keys differ (in Duqu, the key is ”0xB31FB31F”).

We can also see that the decompiled code of Duqu contains the decryptor routine in a more compact manner (within a ”for” loop instead of a ”while”), but the two routines are essentially the same. For example, the two boundary checks in the Duqu 2.0 routine (”if ( !*(_WORD *)(a2 + 2) )” and ”while ( (_WORD)result );”) are analogous to the boundary check at the end of the ”for” loop in the Duqu routine (”if ( !(_WORD)v4 || !*(_WORD *)(result + 2) )”). Similarly, the increment operation within the head of the for loop in the Duqu sample (”result += 4”) is analogous to the increment operation ”a2 += 4;” in the Duqu 2.0 sample.

int __cdecl b31f_decryptor_100020E7(int a1, int a2)
{
  _DWORD *v2;      // edx@1
  int result;      // eax@2
  unsigned int v4; // edi@6

  v2 = (_DWORD *)a1;
  if ( a1 )
  {
    for ( result = a2; ; result += 4 )
    {
      v4 = *v2 ^ 0xB31FB31F;
      *(_DWORD *)result = v4;
      if ( !(_WORD)v4 || !*(_WORD *)(result + 2) )
        break;
      ++v2;
    }
  }
  else
  {
    result = 0;
    *(_WORD *)a2 = 0;
  }
  return result;
}