# Day: May 14, 2017

# Cosmology: Friedmann-Lemaître Universes

Cosmology starts by assuming that the large-scale evolution of spacetime can be determined by applying Einstein's field equations of gravitation everywhere: global evolution will follow from local physics. The standard models of cosmology are based on the assumption that once one has averaged over a large enough physical scale, isotropy is observed by all fundamental observers (the preferred family of observers associated with the average motion of matter in the universe). When this isotropy is exact, the universe is spatially homogeneous as well as isotropic. The matter motion is then along irrotational and shear-free geodesic curves with tangent vector u_{a}, implying the existence of a canonical time variable t obeying u_{a} = −t_{,a}. The Robertson-Walker ('RW') geometries used to describe the large-scale structure of the universe embody these symmetries exactly. Consequently they are conformally flat, that is, the Weyl tensor is zero:

C_{ijkl} := R_{ijkl} + 1/2(R_{ik}g_{jl} + R_{jl}g_{ik} − R_{il} g_{jk} − R_{jk}g_{il}) − 1/6R(g_{ik}g_{jl} − g_{il}g_{jk}) = 0 —– (1)

This tensor represents the free gravitational field, enabling non-local effects such as tidal forces and gravitational waves, which do not occur in the exact RW geometries.

Comoving coordinates can be chosen so that the metric takes the form:

ds^{2} = −dt^{2} + S^{2}(t)dσ^{2}, u^{a} = δ^{a}_{0} (a=0,1,2,3) —– (2)

where S(t) is the time-dependent scale factor, and the worldlines with tangent vector u^{a} = dx^{a}/dt represent the histories of fundamental observers. The space sections {t = const} are surfaces of homogeneity and have maximal symmetry: they are 3-spaces of constant curvature K = k/S^{2}(t) where k is the sign of K. The normalized metric dσ^{2} characterizes a 3-space of normalized constant curvature k; coordinates (r, θ, φ) can be chosen such that

dσ^{2} = dr^{2} + f^{2}(r)(dθ^{2} + sin^{2}θdφ^{2}) —– (3)

where f(r) = {sin r, r, sinh r} if k = {+1, 0, −1} respectively. The rate of expansion at any time t is characterized by the Hubble parameter H(t) = Ṡ/S.

To determine the metric’s evolution in time, one applies the Einstein Field Equations, showing the effect of matter on space-time curvature, to the metric (2,3). Because of local isotropy, the matter tensor T_{ab} necessarily takes a perfect fluid form relative to the preferred worldlines with tangent vector u^{a}:

T_{ab} = (μ + p/c^{2})u_{a}u_{b} + (p/c^{2})g_{ab} —– (4)

where c is the speed of light. The energy density μ(t) and pressure term p(t)/c^{2} are the timelike and spacelike eigenvalues of T_{ab}. The integrability conditions for the Einstein Field Equations are the energy-density conservation equation

T^{ab}_{;b} = 0 ⇔ μ̇ + (μ + p/c^{2})3Ṡ/S = 0 —– (5)

This becomes determinate when a suitable equation of state function w := p/(c^{2}μ) relates the pressure p to the energy density μ and temperature T: p = w(μ,T)μc^{2} (w may or may not be constant). Baryons have {p_{bar} = 0 ⇔ w = 0} and radiation has {p_{rad} = μ_{rad}c^{2}/3 ⇔ w = 1/3, μ_{rad} = aT^{4}_{rad}}, which by (5) imply

μ_{bar} ∝ S^{−3}, μ_{rad} ∝ S^{−4}, T_{rad} ∝ S^{−1} —– (6)
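For constant w these scalings follow directly from the conservation equation (5). Writing the pressure term as p/c^{2} = wμ, a short derivation:

```latex
\dot\mu + 3(1+w)\,\mu\,\frac{\dot S}{S} = 0
\;\Longrightarrow\;
\frac{d\mu}{\mu} = -3(1+w)\,\frac{dS}{S}
\;\Longrightarrow\;
\mu \propto S^{-3(1+w)}
```

which gives μ ∝ S^{−3} for w = 0 (baryons), μ ∝ S^{−4} for w = 1/3 (radiation), and μ = const for w = −1 (a cosmological constant); combining the radiation case with μ_{rad} = aT^{4}_{rad} then yields T_{rad} ∝ S^{−1}.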

The scale factor S(t) obeys the Raychaudhuri equation

3S̈/S = −1/2 κ(μ + 3p/c^{2}) + Λ —– (7)

where κ is the gravitational constant and Λ is the cosmological constant. A cosmological constant can also be regarded as a fluid with pressure p related to the energy density μ by {p = −μc^{2} ⇔ w = −1}. Equation (7) shows that the active gravitational mass density of the matter and fields present is μ_{grav} := μ + 3p/c^{2}. For ordinary matter this will be positive:

μ + 3p/c^{2} > 0 ⇔ w > −1/3 —– (8)

(the ‘Strong Energy Condition’), so ordinary matter will tend to cause the universe to decelerate (S̈ < 0). It is also apparent that a positive cosmological constant on its own will cause an accelerating expansion (S̈ > 0). When matter and a cosmological constant are both present, either result may occur depending on which effect is dominant. The first integral of equations (5, 7) when Ṡ ≠ 0 is the Friedmann equation

Ṡ^{2}/S^{2} = κμ/3 + Λ/3 − k/S^{2} —– (9)

This is just the Gauss equation relating the 3-space curvature to the 4-space curvature, showing how matter directly causes a curvature of 3-spaces. Because of the spacetime symmetries, the ten Einstein Field Equations are equivalent to the two equations (7, 9). Models of this kind, that is with a Robertson-Walker (‘RW’) geometry with metric (2, 3) and dynamics governed by equations (5, 7, 9), are called Friedmann-Lemaître universes (‘FL’). The Friedmann equation (9) controls the expansion of the universe, and the conservation equation (5) controls the density of matter as the universe expands; when Ṡ ≠ 0, equation (7) will necessarily hold if (5, 9) are both satisfied. Given a determinate matter description (specifying the equation of state w = w(μ, T) explicitly or implicitly) for each matter component, the existence and uniqueness of solutions follows both for a single matter component and for a combination of different kinds of matter, for example μ = μ_{bar} + μ_{rad} + μ_{cdm} + μ_{ν} where we include cold dark matter (cdm) and neutrinos (ν). Initial data for such solutions at an arbitrary time t_{0} (e.g. today) consists of:

• The Hubble constant H_{0} := (Ṡ/S)_{0} = 100h km/sec/Mpc;

• A dimensionless density parameter Ω_{i0} := κμ_{i0}/3H_{0}^{2} for each type of matter present (labelled by i);

• If Λ ≠ 0, either Ω_{Λ0} := Λ/3H^{2}_{0}, or the dimensionless deceleration parameter q_{0} := −(S̈/S)_{0} H^{−2}_{0}.

Given the equations of state for the matter, this data then determines a unique solution {S(t), μ(t)}, i.e. a unique corresponding universe history. The total matter density is the sum of the terms Ω_{i0} for each type of matter present, for example

Ω_{m0} = Ω_{bar0} + Ω_{rad0} + Ω_{cdm0} + Ω_{ν0}, —– (10)

and the total density parameter Ω_{0} is the sum of that for matter and for the cosmological constant:

Ω_{0} = Ω_{m0} + Ω_{Λ0} —– (11)

Evaluating the Raychaudhuri equation (7) at the present time gives an important relation between these parameters: when the pressure term p/c^{2} can be ignored relative to the matter term μ (as is plausible at the present time, and assuming we represent ‘dark energy’ as a cosmological constant),

q_{0} = 1/2 Ω_{m0} − Ω_{Λ0} —– (12)

This shows that a cosmological constant Λ can cause an acceleration (negative q_{0}); if it vanishes, the expression simplifies: Λ = 0 ⇒ q_{0} = 1/2 Ω_{m0}, showing how matter causes a deceleration of the universe. Evaluating the Friedmann equation (9) at the time t_{0}, the spatial curvature is

K_{0} := k/S_{0}^{2} = H_{0}^{2}(Ω_{0} − 1) —– (13)

The value Ω_{0} = 1 corresponds to spatially flat universes (K_{0} = 0), separating models with positive spatial curvature (Ω_{0} > 1 ⇔ K_{0} > 0) from those with negative spatial curvature (Ω_{0} < 1 ⇔ K_{0} < 0).

# Natural History of Ashkenazi Intelligence

There are a number of genetic diseases that are unusually common among the Ashkenazim. We also know a fair amount about genetic disease among the Sephardic and Asian Jews. How can we categorize these diseases and the associated mutations?

Most fall into a few categories, as noted by Ostrer: sphingolipid storage diseases, glycogen storage diseases, clotting disorders, disorders of adrenal steroid biosynthesis, and disorders of DNA repair. It is interesting that although several Jewish disorders fall into each of these categories, sometimes several in the same population, none of the Finnish genetic diseases, for example, fall into any of these categories (Norio), while only one of the genetic disorders common in Quebec does, Tay-Sachs (Scriver). But that is as expected: genetic diseases made common by drift would be very unlikely to cluster in only a few metabolic paths, as if on a few pages of a biochemistry text. The existence of these categories or disease clusters among the Jews suggests selective forces at work, just as the many different genetic disorders affecting hemoglobin and the red cell in the Old World tropics suggest selection, which we know is for malaria resistance.

The two most important genetic disease clusters among the Ashkenazim are the sphingolipid storage disorders (Tay-Sachs, Gaucher, Niemann-Pick, and mucolipidosis type IV) and the disorders of DNA repair (BRCA1, BRCA2, Fanconi’s anemia type C, and Bloom syndrome), but there are several others that are at quite elevated frequency in Ashkenazim. Using published allele frequencies we can calculate that the probability at conception of having at least one allele of the sphingolipid or DNA repair complex is 15%. If we add Canavan disease, familial dysautonomia, Factor XI deficiency (Peretz et al.), and the I1307K allele of the APC locus (Gryfe et al.), this figure grows to 32%, and if we further include non-classical congenital adrenal hyperplasia the probability of having at least one allele from these disorders is 59%.

The sphingolipid storage mutations were probably favored and became common because of natural selection, yet we don’t see them in adjacent populations. We suggest that this is because the social niche favoring intelligence was key, rather than geographic location. It is unlikely that these mutations led to disease resistance in heterozygotes, for two reasons. First, there is no real evidence for any disease resistance in heterozygotes (claims of TB resistance are unsupported), and most of the candidate serious diseases (smallpox, TB, bubonic plague, diarrheal diseases) affected the neighboring populations, that is, people living literally across the street, as well as the Ashkenazim. Second and most important, the sphingolipid mutations look like IQ boosters. The key datum is the effect of increased levels of the storage compounds. Glucosylceramide, the Gaucher storage compound, promotes axonal growth and branching (Schwartz et al.). In vitro, decreased glucosylceramide results in stunted neurons with short axons, while an increase over normal levels (caused by chemically inhibiting glucocerebrosidase) increases axon length and branching. There is a similar effect in Tay-Sachs: decreased levels of GM2 ganglioside inhibit dendrite growth, while an increase over normal levels causes a marked increase in dendritogenesis. This increased dendritogenesis also occurs in Niemann-Pick type A cells, and in animal models of Tay-Sachs and Niemann-Pick.

Dendritogenesis appears to be a necessary step in learning. Associative learning in mice significantly increases hippocampal dendritic spine density, while enriched environments are also known to increase dendrite density. It is likely that a tendency to increased dendritogenesis (in Tay-Sachs and Niemann-Pick heterozygotes) or to increased axonal growth and branching (in Gaucher heterozygotes) facilitates learning. Heterozygotes have half the normal amount of the lysosomal hydrolases and should show modest elevations of the sphingolipid storage compounds. A prediction is that Gaucher, Tay-Sachs, and Niemann-Pick heterozygotes will have higher tested IQ than control groups, probably on the order of 5 points.


# Duqu 2.0

```c
unsigned int __fastcall xor_sub_10012F6D(int encrstr, int a2)
{
  unsigned int result; // eax@2
  int v3; // ecx@4

  if ( encrstr )
  {
    result = *(_DWORD *)encrstr ^ 0x86F186F1;
    *(_DWORD *)a2 = result;
    if ( (_WORD)result )
    {
      v3 = encrstr - a2;
      do
      {
        if ( !*(_WORD *)(a2 + 2) )
          break;
        a2 += 4;
        result = *(_DWORD *)(v3 + a2) ^ 0x86F186F1;
        *(_DWORD *)a2 = result;
      }
      while ( (_WORD)result );
    }
  }
  else
  {
    result = 0;
    *(_WORD *)a2 = 0;
  }
  return result;
}
```

A closer look at the above C code reveals that the string decryptor routine has two parameters: “encrstr” and “a2”. First, the decryptor function checks whether the input buffer (the pointer to the encrypted string) points to a valid memory area (i.e., it is not a NULL pointer). After that, the first 4 bytes of the encrypted string buffer are XORed with the key “0x86F186F1” and the result of the XOR operation is stored in the variable “result”. The first DWORD (first 4 bytes) of the output buffer a2 is then populated with this resulting value (*(_DWORD *)a2 = result;). Therefore, the first 4 bytes of the output buffer will contain the first 4 bytes of the cleartext string.

If the first two bytes (first WORD) of the current value stored in the variable “result” contain ‘\0’ characters, the original cleartext string was an empty string, and the output buffer is populated with a zero value stored in 2 bytes. If the first half of the decrypted block (the “result” variable) contains something else, the decryptor routine checks the second half of the block (“if ( !*(_WORD *)(a2 + 2) )”). If this WORD value is NULL, decryption ends and the output buffer will contain only one Unicode character followed by two closing ’\0’ bytes.

If the first decrypted block doesn’t contain a zero character (generally this is the case), the decryption cycle continues with the next 4-byte encrypted block. The pointer of the output buffer is incremented by 4 bytes to be able to store the next cleartext block (”a2 += 4;”). After that, the following 4-byte block of the ”ciphertext” is decrypted with the fixed decryption key (“0x86F186F1”). The result is then stored within the next 4 bytes of the output buffer. Now the output buffer contains 2 blocks of the cleartext string.

The condition of the cycle checks whether the decryption has reached its end by checking the first half of the current decrypted block. If it has not reached the end, the cycle continues with the decryption of the next input blocks, as described above. Before the decryption of each 4-byte ”ciphertext” block, the routine also checks the second half of the previous cleartext block to decide whether the decoded string has ended.

The original Duqu used a very similar string decryption routine, shown below. We can see that this routine is an almost exact copy of the previously discussed routine (the variable ”a1” is analogous to the ”encrstr” argument). The only difference between the Duqu 2.0 (duqu2) and Duqu string decryptor routines is that the XOR keys differ (in Duqu, the key is ”0xB31FB31F”).

We can also see that the decompiled code of Duqu contains the decryptor routine in a more compact manner (within a ”for” loop instead of a ”while”), but the two routines are essentially the same. For example, the two boundary checks in the Duqu 2.0 routine (”if ( !*(_WORD *)(a2 + 2) )” and ”while ( (_WORD)result );”) are analogous to the boundary check at the end of the ”for” loop in the Duqu routine (”if ( !(_WORD)v4 || !*(_WORD *)(result + 2) )”). Similarly, the increment operation within the head of the for loop in the Duqu sample (”result += 4”) is analogous to the increment operation ”a2 += 4;” in the Duqu 2.0 sample.

```c
int __cdecl b31f_decryptor_100020E7(int a1, int a2)
{
  _DWORD *v2; // edx@1
  int result; // eax@2
  unsigned int v4; // edi@6

  v2 = (_DWORD *)a1;
  if ( a1 )
  {
    for ( result = a2; ; result += 4 )
    {
      v4 = *v2 ^ 0xB31FB31F;
      *(_DWORD *)result = v4;
      if ( !(_WORD)v4 || !*(_WORD *)(result + 2) )
        break;
      ++v2;
    }
  }
  else
  {
    result = 0;
    *(_WORD *)a2 = 0;
  }
  return result;
}
```