*k-nearest neighbours* (**k-NN**) is a supervised algorithm where test vectors are compared against labelled training vectors. Classification of a test vector is performed by taking a majority vote of the classes of the **k** nearest training vectors. In the case of **k = 1**, this algorithm reduces to an equivalent of nearest-centroid. **k-NN** algorithms lend themselves to applications such as handwriting recognition and useful approximations to the traveling salesman problem.

[Figure: k-nearest-neighbour classifiers applied to the simulation data above. The broken purple curve in the background is the Bayes decision boundary.]
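The majority-vote procedure above can be sketched in a few lines. This is a minimal illustration only, using Euclidean distance and a toy dataset; the function name `knn_classify` and the data are invented for this example.

```python
import math
from collections import Counter

def knn_classify(test_vec, training_data, k=3):
    """Classify test_vec by majority vote among the classes of its
    k nearest labelled training vectors (Euclidean distance)."""
    # Sort training vectors by distance to the test vector, keep the k nearest.
    neighbours = sorted(
        training_data,
        key=lambda pair: math.dist(test_vec, pair[0]),
    )[:k]
    # Majority vote over the labels of those k neighbours.
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Toy dataset: (vector, class label) pairs.
training_data = [
    ((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
    ((4.0, 4.0), "B"), ((4.2, 3.9), "B"), ((3.8, 4.1), "B"),
]

print(knn_classify((1.1, 1.0), training_data, k=3))  # "A"
print(knn_classify((4.1, 4.0), training_data, k=3))  # "B"
```

With `k = 1` the same code reduces to picking the single nearest training vector, matching the nearest-centroid-like behaviour noted above.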

*k-NN* has two subtleties. Firstly, for datasets where a particular class has the majority of the training data points, there is a bias towards classifying into this class. One solution is to weight each classification calculation by the distance of the test vector from the training vector; however, this may still yield poor classifications for particularly under-represented classes. Secondly, the distance between each test vector and all training data must be calculated for each classification, which is resource-intensive. The goal is to seek an algorithm with favourable scaling in the number of training data vectors.
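The distance-weighting fix can be sketched as follows. Inverse-distance weights are one illustrative choice among many, and the function name and dataset are invented for this example; note how the dominant class would win a plain majority vote here, but the weighted vote favours the closer, under-represented class.

```python
import math
from collections import defaultdict

def weighted_knn_classify(test_vec, training_data, k=3):
    """Classify by summing inverse-distance weights per class, so
    closer training vectors count for more than distant ones."""
    neighbours = sorted(
        training_data,
        key=lambda pair: math.dist(test_vec, pair[0]),
    )[:k]
    scores = defaultdict(float)
    for vec, label in neighbours:
        d = math.dist(test_vec, vec)
        # Inverse-distance weight; the small epsilon avoids division by
        # zero when the test vector coincides with a training vector.
        scores[label] += 1.0 / (d + 1e-9)
    return max(scores, key=scores.get)

# A biased dataset: class "A" has most of the points, but "B" is closest.
training_data = [
    ((0.0, 0.0), "B"),
    ((2.0, 0.0), "A"), ((2.1, 0.1), "A"), ((2.0, -0.1), "A"),
]
print(weighted_knn_classify((0.2, 0.0), training_data, k=4))  # "B"
```

As the text notes, this helps but is not a cure: a class with very few, distant training vectors can still be outvoted.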

An extension of the nearest-centroid algorithm has been developed by *Wiebe*. First, the algorithm prepares a superposition of qubit states with the distance between each training vector and the input vector, using a suitable quantum sub-routine that encodes the distances in the qubit amplitudes. Rather than measuring the state, the amplitudes are transferred onto an ancilla register using coherent amplitude estimation.

**Grover's search** is then used to find the smallest-valued register, corresponding to the training vector closest to the test vector. Therefore, the entire classification occurs within the quantum computer, and we can categorize the quantum **k-NN** as an **L2 algorithm**. The advantage over **Lloyd's algorithm** is that the power of Grover's search has been used to provide a speedup, and it provides a full and clear recipe for implementation. The time scaling of the quantum **k-NN** algorithm is complex; however, it scales as **Õ(√n log(n))** to first order. The dependence on **m** no longer appears except at higher orders.

The quantum *k-NN* algorithm is not a panacea. There are clearly laid-out conditions on the application of quantum **k-NN**, though, such as the dependence on the sparsity of the data. The classification is decided by majority rule with no weighting and, as such, it is unsuitable for biased datasets.
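To make the quoted first-order scaling concrete, the quick comparison below evaluates √n·log(n) against the linear scan over all n training vectors that the classical classification requires. This is purely arithmetic: constant factors and the polylogarithmic terms hidden by the tilde in Õ(·) are ignored.

```python
import math

# Classical k-NN computes a distance to all n training vectors per
# classification (cost ~ n); the quantum algorithm's quoted first-order
# scaling is O~(sqrt(n) * log(n)).  Constants and the polylog factors
# hidden by the tilde are ignored in this rough comparison.
for n in (10**3, 10**6, 10**9):
    classical = float(n)
    quantum = math.sqrt(n) * math.log2(n)
    print(f"n={n:>12,}  classical~{classical:.2e}  quantum~{quantum:.2e}")
```

The gap widens rapidly with n, which is where the favourable scaling in the number of training vectors comes from.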

**k-NN***works well with a small number of input variables (*

**K-NN***), but struggles when the number of inputs is very large. Each input variable can be considered a dimension of a*

**p***input space. For example, if you had two input variables x*

**p-dimensional**_{1}and x

_{2}, the input space would be

*. As the number of dimensions increases the volume of the input space increases at an exponential rate. In high dimensions, points that may be similar may have very large distances. All points will be far away from each other and our intuition for distances in simple*

**2-dimensiona****l***spaces breaks down. This might feel unintuitive at first, but this general problem is called the “*

**2 and 3-dimensional***“.*

**Curse of Dimensionality**
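This distance concentration is easy to see numerically. The sketch below (an illustration with made-up parameters, not a formal argument) samples random points in the unit hypercube and measures the relative spread of their distances from a reference point; as the dimension grows, the spread collapses and "nearest" becomes much less meaningful.

```python
import math
import random

def distance_spread(dim, n_points=200, seed=0):
    """Relative spread (max - min) / min of distances from a reference
    point to random points in the dim-dimensional unit hypercube.
    It shrinks as dim grows: distances concentrate in high dimensions."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(dim)] for _ in range(n_points)]
    ref = pts[0]
    dists = [math.dist(ref, p) for p in pts[1:]]
    return (max(dists) - min(dists)) / min(dists)

for d in (2, 10, 100, 1000):
    print(f"dim={d:>5}: relative spread = {distance_spread(d):.3f}")
```

In low dimensions the nearest point can be dramatically closer than the farthest; by dim = 1000 all the sampled points sit at nearly the same distance, which is exactly why k-NN's distance-based vote degrades as p grows.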