Axiomatizing Artificial Intelligence


Solomonoff’s work was seminal in that he single-handedly axiomatized AI, discovering the minimal conditions necessary for any machine to attain general intelligence.

Informally, these axioms are:

AI0 AI must have in its possession a universal computer M (Universality).
AI1 AI must be able to learn any solution expressed in M’s code (Learning recursive solutions).
AI2 AI must use probabilistic prediction (Bayes’ theorem).
AI3 AI must embody in its learning a principle of induction (Occam’s razor).

While a more compact characterization may be possible, these axioms capture what is ultimately necessary for the kind of general learning that Solomonoff induction achieves. ALP can be seen as a complete formalization of Occam’s razor (as well as Epicurus’s principle) and thus serves as the foundation of universal induction, capable in principle of solving all AI problems of significance. The axioms are important because they allow us to assess whether a given system is capable of general intelligence.
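The sense in which ALP formalizes both principles can be stated precisely. Writing U for a universal prefix machine, Solomonoff’s algorithmic prior assigns to a string x the probability (a standard statement of the definition; the notation x* for “output beginning with x” is assumed here):

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|}
```

Every program p that reproduces x contributes to the sum (Epicurus: retain all consistent hypotheses), but each is weighted by 2^{-|p|}, so the shortest programs, i.e. the simplest hypotheses, dominate (Occam).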

Obviously, AI1 entails AI0, so AI0 is redundant and could be omitted entirely; we state it separately only for historical reasons, since one of the landmarks of early AI research, in retrospect, was the invention of the universal computer. That invention goes back to Leibniz’s idea of a universal language (characteristica universalis) that could express every statement in science and mathematics, and found its perfect embodiment in Turing’s research. A related achievement of early AI was the development of LISP, a universal programming language based on lambda calculus (a functional model of computation) that shaped much of early AI research.

The Minimum Message Length (MML) principle, introduced by Wallace and Boulton in 1968, is a formalization of induction developed within the framework of classical information theory. It establishes a trade-off between model complexity and fit to data by finding the shortest message that encodes both the model and the data. This trade-off is quite similar to the earlier forms of induction that Solomonoff developed, though it was discovered independently. Dowe points out that Occam’s razor means choosing the simplest single theory when the data are equally well matched, which MML formalizes exactly (and it remains functional in the case of unequal fits), whereas Solomonoff induction maintains a mixture of alternative solutions. However, it was Solomonoff who first observed the importance of universality for AI (AI0-AI1). The plurality of probabilistic approaches to induction supports the importance of AI3 (and hints that a diversity of solutions may be useful). AI2, however, requires little further explanation.
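The MML trade-off can be illustrated with a toy model-selection sketch: pick the polynomial degree that minimizes a two-part message length, the cost of stating the model plus the cost of stating the data given the model. The constants below (e.g. 16 bits per parameter) and the Gaussian-residual coding term are illustrative assumptions, not Wallace’s exact MML coding scheme.

```python
import numpy as np

# Toy two-part MML-style model selection (illustrative sketch).
rng = np.random.default_rng(0)
n = 50
x = np.linspace(-1.0, 1.0, n)
# True generating model is a degree-2 polynomial plus Gaussian noise.
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0.0, 0.5, n)

BITS_PER_PARAM = 16.0  # assumed cost of stating one coefficient

def message_length(deg):
    coeffs = np.polyfit(x, y, deg)
    rss = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
    model_bits = (deg + 1) * BITS_PER_PARAM       # L(model)
    data_bits = (n / 2.0) * np.log2(rss / n)      # ~ L(data | model)
    return model_bits + data_bits

lengths = {deg: message_length(deg) for deg in range(9)}
best = min(lengths, key=lengths.get)
print(best)
```

Higher degrees always reduce the residual error, but each extra coefficient adds a fixed model cost; the total message length is minimized at the true complexity, which is the trade-off MML formalizes.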
