Quantum k-NN Algorithm


k-nearest neighbours (k-NN) is a supervised classification algorithm in which test vectors are compared against labelled training vectors. A test vector is classified by taking a majority vote over the classes of its k nearest training vectors. In the case of k=1, this reduces to simply assigning the class of the single nearest training vector. k-NN lends itself to applications such as handwriting recognition and to useful heuristic approximations to the travelling salesman problem.
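To make the classical procedure concrete, here is a minimal k-NN classifier sketch in Python using only NumPy; the function name, the toy data and the choice of plain Euclidean distance are illustrative assumptions rather than a reference to any particular library.

```python
import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, x_test, k=3):
    """Classify one test vector by majority vote over its k nearest
    training vectors, using Euclidean distance."""
    # Distance from the test vector to every training vector
    dists = np.linalg.norm(X_train - x_test, axis=1)
    # Indices of the k closest training vectors
    nearest = np.argsort(dists)[:k]
    # Majority vote over their labels
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Toy example: two labelled clusters in 2D
X_train = np.array([[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]])
y_train = np.array([0, 0, 1, 1])
print(knn_classify(X_train, y_train, np.array([0.95, 1.05]), k=3))  # -> 1
```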

[Figure: k-nearest-neighbour classifiers applied to the simulation data above. The broken purple curve in the background is the Bayes decision boundary.]

k-NN has two subtleties. First, for datasets in which one class contributes the majority of the training points, classifications are biased towards that class. One remedy is to weight each neighbour's vote by its distance from the test vector, although this may still perform poorly for severely under-represented classes; a sketch of this weighting is given below. Second, the distance between the test vector and every training vector must be computed for each classification, which is resource intensive. The goal is therefore an algorithm whose cost scales favourably with the number of training vectors.
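A minimal sketch of distance-weighted voting follows. The helper name and the inverse-distance weighting scheme are assumptions made for illustration; other weighting functions are equally valid.

```python
import numpy as np

def weighted_knn_classify(X_train, y_train, x_test, k=3, eps=1e-9):
    """Weight each of the k nearest neighbours by the inverse of its
    distance, so closer training vectors count for more in the vote."""
    dists = np.linalg.norm(X_train - x_test, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = {}
    for i in nearest:
        label = y_train[i]
        votes[label] = votes.get(label, 0.0) + 1.0 / (dists[i] + eps)
    # Return the label with the largest accumulated weight
    return max(votes, key=votes.get)
```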

An extension of the nearest-centroid algorithm has been developed by Wiebe et al. First, the algorithm prepares a superposition of qubit states in which the distance between each training vector and the test vector is encoded in the amplitudes, using a suitable quantum subroutine. Rather than measuring this state directly, the amplitudes are transferred onto an ancilla register using coherent amplitude estimation. Grover's search is then used to find the register with the smallest value, corresponding to the training vector closest to the test vector. The entire classification therefore takes place within the quantum computer, so we can categorise the quantum k-NN as an L2 algorithm. The advantages over Lloyd's algorithm are that Grover's search provides a speedup and that the algorithm comes with a full and explicit recipe for implementation. The time scaling of the quantum k-NN algorithm is involved, but to first order it scales as Õ(√n log n), and the dependence on m appears only at higher orders.
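The sketch below is not an implementation of the quantum routine, only a classical mock-up of its structure, assuming inner-product distances between unit-normalised vectors: the distance loop stands in for the amplitude-encoding and coherent amplitude estimation steps, and the final argmin stands in for the Grover-style minimum search.

```python
import numpy as np

def mock_quantum_nn(train, test):
    """Classical stand-in for the quantum nearest-neighbour pipeline.

    Step 1 (stands in for distance encoding + amplitude estimation):
    estimate the distance from the test vector to every training vector,
    here via inner products of unit-normalised vectors.
    Step 2 (stands in for Grover's minimum search): return the index of
    the smallest estimated distance.
    """
    t = test / np.linalg.norm(test)
    dists = []
    for v in train:
        v = v / np.linalg.norm(v)
        # For unit vectors, |t - v|^2 = 2 - 2 <t, v>
        dists.append(np.sqrt(max(0.0, 2.0 - 2.0 * np.dot(t, v))))
    return int(np.argmin(dists))
```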

The quantum k-NN algorithm is not a panacea. There are clearly stated conditions on its application, such as a dependence on the sparsity of the data. The classification is decided by an unweighted majority vote and, as such, it is unsuitable for biased datasets. k-NN works well with a small number of input variables (p) but struggles when the number of inputs is very large. Each input variable can be considered a dimension of a p-dimensional input space; for example, with two input variables x1 and x2, the input space is 2-dimensional. As the number of dimensions increases, the volume of the input space grows exponentially. In high dimensions, points that are similar can still be separated by large distances: all points end up far from one another, and our intuition from simple 2- and 3-dimensional spaces breaks down. This can feel unintuitive at first; the general problem is known as the "curse of dimensionality".
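A quick numerical illustration of this effect, with purely illustrative parameters: for uniformly random points, the spread of pairwise distances shrinks relative to their mean as the dimension p grows, so "nearest" becomes less meaningful.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
for p in (2, 10, 100, 1000):
    X = rng.random((n, p))                       # n random points in [0, 1]^p
    sq = (X ** 2).sum(axis=1)
    # Squared pairwise distances via ||x||^2 + ||y||^2 - 2 x.y
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    d = np.sqrt(np.clip(d2, 0.0, None))[np.triu_indices(n, k=1)]
    # Relative spread (max - min) / mean shrinks as p grows
    print(p, round((d.max() - d.min()) / d.mean(), 3))
```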
