Poincaré and Geometry of Curvature. Thought of the Day 60.0


It is not clear that Poincaré regarded Riemannian, variably curved, “geometry” as a bona fide geometry. On the one hand, his insistence on generality and the iterability of mathematical operations leads him to dismiss geometries of variable curvature as merely “analytic”. Distinctive of mathematics, he argues, is generality and the fact that induction applies to its processes. For geometry to be genuinely mathematical, its constructions must be everywhere iterable, so everywhere possible. If geometry is in some sense about rigid motion, then a manifold of variable curvature, especially where the degree of curvature depends on something contingent like the distribution of matter, would not allow a thoroughly mathematical, idealized treatment. Yet Poincaré also writes favorably about Riemannian geometries, defending them as mathematically coherent. Furthermore, he admits that geometries of constant curvature rest on a hypothesis – that of rigid body motion – that “is not a self-evident truth”. In short, he seems ambivalent. Whether his conception of geometry includes or rules out variable curvature is unclear. We can surmise that he recognized Riemannian geometry as mathematical, and interesting, but as very different from, and more abstract than, geometries of constant curvature, which are based on the further limitations discussed above (those motivated by a world satisfying certain empirical preconditions). These limitations enable key idealizations, which in turn allow constructions and synthetic proofs that we recognize as “geometric”.

Music Composition Using Long Short-Term Memory (LSTM) Recurrent Neural Networks


The most straightforward way to compose music with a Recurrent Neural Network (RNN) is to use the network as a single-step predictor. The network learns to predict notes at time t + 1 using notes at time t as inputs. After learning has stopped, the network can be seeded with initial input values – perhaps from training data – and can then generate novel compositions by using its own outputs to generate subsequent inputs. This note-by-note approach was first examined by Todd.
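A minimal sketch of this note-by-note loop, assuming an already trained single-step predictor; the `model.predict` interface, the one-hot pitch encoding and the sampling strategy are illustrative assumptions, not details from Todd's work:

```python
import numpy as np

def generate(model, seed_notes, length, n_pitches=128, rng=None):
    """Autoregressive note-by-note generation: seed the network with
    initial notes, then feed its own outputs back in as the next inputs."""
    rng = rng or np.random.default_rng()
    sequence = list(seed_notes)                  # seed values, e.g. taken from training data
    for _ in range(length):
        x = np.zeros(n_pitches)
        x[sequence[-1]] = 1.0                    # one-hot encoding of the note at time t
        probs = model.predict(x)                 # hypothetical predictor: P(note at t+1 | note at t)
        next_note = rng.choice(n_pitches, p=probs)
        sequence.append(int(next_note))          # the output becomes the next input
    return sequence
```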

A feed-forward network would have no chance of composing music in this fashion. Lacking the ability to store any information about the past, such a network would be unable to keep track of where it is in a song. In principle an RNN does not suffer from this limitation. With recurrent connections it can use hidden layer activations as memory and thus is capable of exhibiting (seemingly arbitrary) temporal dynamics. In practice, however, RNNs do not perform very well at this task. As Mozer aptly wrote about his attempts to compose music with RNNs,

While the local contours made sense, the pieces were not musically coherent, lacking thematic structure and having minimal phrase structure and rhythmic organization.

The reason for this failure is likely linked to the problem of vanishing gradients (Hochreiter et al.) in RNNs. In gradient methods such as Back-Propagation Through Time (BPTT) (Williams and Zipser) and Real-Time Recurrent Learning (RTRL), error flow either vanishes quickly or explodes exponentially, making it impossible for the networks to deal correctly with long-term dependencies. In the case of music, long-term dependencies are at the heart of what defines a particular style, with events spanning several notes or even many bars contributing to the formation of metrical and phrasal structure. The clearest example of these dependencies is chord changes. In a musical form like early rock-and-roll, for example, the same chord can be held for four bars or more. Even if melodies are constrained to contain notes no shorter than an eighth note, a network must regularly and reliably bridge time spans of 32 events or more (four bars of 4/4 time amount to 32 eighth-note steps).
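The scale of the problem can be illustrated with a back-of-the-envelope computation: in BPTT the backpropagated error is multiplied by the recurrent Jacobian once per timestep, so over T steps it scales roughly like the T-th power of that Jacobian's largest singular value. A toy sketch with invented numbers:

```python
import numpy as np

# Toy illustration of error flow in BPTT: the gradient is multiplied by the
# recurrent Jacobian at every timestep, so it scales like sigma**T, where
# sigma is that Jacobian's largest singular value (weights here are invented).
rng = np.random.default_rng(0)
W_rec = rng.normal(scale=0.05, size=(50, 50))    # recurrent weight matrix
sigma = np.linalg.norm(W_rec, ord=2)             # largest singular value

for T in (1, 8, 16, 32):
    print(f"timesteps bridged = {T:2d}, gradient scale ~ {sigma ** T:.2e}")
# sigma < 1: the factor shrinks exponentially (vanishing gradients);
# sigma > 1: it grows exponentially (exploding gradients).
```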

The most relevant previous research is that of Mozer, who did note-by-note composition of single-voice melodies accompanied by chords. In the “CONCERT” model, Mozer used sophisticated RNN procedures including BPTT, a log-likelihood objective function and a probabilistic interpretation of the output values. In addition to these neural network methods, Mozer employed a psychologically realistic distributed input encoding (Shepard) that gave the network an inductive bias towards chromatically and harmonically related notes. He used a second encoding method to generate distributed representations of chords.

A BPTT-trained RNN does a poor job of learning long-term dependencies. To offset this, Mozer used a distributed encoding of duration that allowed him to process a note of any duration in a single network timestep. By representing a note, rather than a slice of time, in a single timestep, the number of timesteps the network must bridge in learning global structure is greatly reduced. For example, allowing sixteenth notes in a network that encodes slices of time directly requires that a whole note span at minimum 16 timesteps. Though Mozer's networks regularly outperformed third-order transition-table approaches, they failed in all cases to find global structure. In analyzing this performance, Mozer suggests that, for the note-by-note method to work, the network must be able to induce structure at multiple levels.
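To see why encoding one note per timestep shortens the horizon the network must bridge, compare it with a direct time-slice encoding at sixteenth-note resolution; the melody and both representations below are invented for illustration and are not Mozer's actual encodings:

```python
# A whole note followed by two quarter notes; durations in sixteenth-note units.
melody = [("c4", 16), ("e4", 4), ("g4", 4)]

# Time-slice encoding: one network timestep per sixteenth note.
time_slices = [pitch for pitch, duration in melody for _ in range(duration)]
print(len(time_slices))    # 24 timesteps for three notes

# Note-per-timestep encoding (in the spirit of Mozer's duration encoding):
# one timestep per note, with duration carried in the input representation.
note_events = list(melody)
print(len(note_events))    # 3 timesteps for the same three notes
```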

Quantum Music

Human neurophysiology suggests that artistic beauty cannot easily be disentangled from sexual attraction. It is, for instance, very difficult to appreciate Sandro Botticelli’s Primavera, the arguably “most beautiful painting ever painted,” when a beautiful woman or man is standing in front of that picture. Indeed so strong may be the distraction, and so deep the emotional impact, that it might not be unreasonable to speculate whether aesthetics, in particular beauty and harmony in art, could be best understood in terms of surrogates for natural beauty. This might be achieved through the process of artistic creation, idealization and “condensation.”


In this line of thought, in Hegelian terms, artistic beauty is the sublimation, idealization, completion, condensation and augmentation of natural beauty. Very different from Hegel, who asserts that artistic beauty is “born of the spirit and born again, and the higher the spirit and its productions are above nature and its phenomena, the higher, too, is artistic beauty above the beauty of nature,” what is believed here is that human neurophysiology can hardly be disregarded in the human creation and perception of art, and, in particular, of beauty in art. Stated differently, we are inclined to believe that humans are so invariably determined by (or at least intertwined with) their natural basis that any neglect of it results in a humbling experience of irritation or even outright ugliness, no matter what social pressure groups or secret services may want to promote.

Thus, when it comes to the intensity of the experience, the human perception of artistic beauty, as sublime and refined as it may be, can hardly transcend natural beauty in its full exposure. In that way, art represents both the capacity as well as the humbling ineptitude of its creators and audiences.

Leaving these idealistic realms, let us come back to the quantization of musical systems. The universe of music consists of an infinity – indeed a continuum – of tones and ways to compose, correlate and arrange them. It is not evident how to quantize sounds, and in particular music, in general. One way to proceed would be a microphysical one: to start with the frequencies of sound waves in air and quantize the spectral modes of these (longitudinal) vibrations, much as phonons are quantized in solid state physics.
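In such a microphysical reading, each spectral mode of angular frequency $\omega_k$ would be treated as an independent harmonic oscillator, exactly as for phonons, so that its energy comes only in discrete quanta. A standard form of this quantization, stated here for orientation rather than taken from the original text, is

$$
H \;=\; \sum_{k} \hbar\omega_{k}\left(a_{k}^{\dagger}a_{k} + \tfrac{1}{2}\right),
\qquad
E_{\{n_{k}\}} \;=\; \sum_{k} \hbar\omega_{k}\left(n_{k} + \tfrac{1}{2}\right),
\quad n_{k} = 0, 1, 2, \ldots
$$

where $a_{k}^{\dagger}$ and $a_{k}$ create and annihilate one quantum of sound in mode $k$.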

For the sake of relating to music, however, a different approach suggests itself, one not dissimilar to the Deutsch-Turing approach to universal (quantum) computability, or to Moore's automata analogues of complementarity: a musical instrument is quantized at the level of a single octave, realized by the eight white keyboard keys typically written c, d, e, f, g, a, b, c′ (in the C major scale).

In analogy to quantum information, a quantization of tones is considered, with a nomenclature paralleling the classical musical representation; this is then followed up by introducing typical quantum mechanical features such as the coherent superposition of classically distinct tones, as well as entanglement and complementarity in music.
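As a concrete toy of what a coherent superposition of classically distinct tones could look like in this octave model, the eight white-key tones can be taken as orthonormal basis vectors of an eight-dimensional Hilbert space; the sketch below (the representation and the particular states are illustrative choices, not taken from the original proposal) builds a superposed single tone and an entangled two-tone state:

```python
import numpy as np

TONES = ["c", "d", "e", "f", "g", "a", "b", "c'"]    # one octave as an 8-dimensional basis

def ket(tone):
    """Basis state |tone> in the eight-dimensional tone space."""
    v = np.zeros(len(TONES), dtype=complex)
    v[TONES.index(tone)] = 1.0
    return v

# Coherent superposition of the classically distinct tones c and g:
psi = (ket("c") + ket("g")) / np.sqrt(2)
print(np.abs(psi) ** 2)            # probability 1/2 each of registering c or g on measurement

# An entangled two-tone state, (|c, e> + |g, b>) / sqrt(2), for two "quantum instruments":
phi = (np.kron(ket("c"), ket("e")) + np.kron(ket("g"), ket("b"))) / np.sqrt(2)
print(np.linalg.norm(phi))         # normalized 64-dimensional state
```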