*k*-means clustering is a popular machine learning algorithm that structures an unlabelled dataset into *k* classes. *k*-means clustering is an NP-hard problem, but examining methods that reduce the average-case complexity is an open area of research. A popular way of classifying the input vectors is to compare the distance of a new vector with the centroid vector of each class (the latter being calculated from the mean of the vectors already in that class). The class with the shortest distance to the vector is the one to which the vector is classified. We refer to this form of classification sub-routine for *k*-means clustering as the nearest-centroid algorithm.
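As a point of reference, the classical nearest-centroid step described above can be sketched in a few lines. This is a minimal illustration with made-up two-class data, not taken from any particular dataset:

```python
import numpy as np

def nearest_centroid(u, classes):
    """Classify u into the class whose centroid (mean vector) is nearest in Euclidean distance."""
    centroids = {label: np.mean(vectors, axis=0) for label, vectors in classes.items()}
    return min(centroids, key=lambda label: np.linalg.norm(u - centroids[label]))

# Illustrative two-class data
classes = {
    "A": np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]),   # centroid (1/3, 1/3)
    "B": np.array([[4.0, 4.0], [5.0, 4.0], [4.0, 5.0]]),   # centroid (13/3, 13/3)
}
print(nearest_centroid(np.array([0.5, 0.5]), classes))      # → A
```

The quantum algorithm discussed next replaces the distance computation in this loop, not the overall classification logic.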

Seth Lloyd and others have constructed a quantum nearest-centroid algorithm, only classifying vectors after the optimal clustering has been found. They show that the distance between an input vector, |u⟩, and the set of n reference vectors {|v_{j}^{C}⟩} of length m in class C can be efficiently calculated to within error ε in O(ε^{−1} log nm) steps on a quantum computer. The algorithm works by constructing the state

|Ψ⟩ = 1/√2 (|u⟩|0⟩ + 1/√n Σ^{n}_{j=1} |v_{j}^{C}⟩|j⟩)

and performing a swap test with the state

|Φ⟩ = 1/√Z (|u||0⟩ − 1/√n Σ^{n}_{j=1} |v_{j}^{C}||j⟩)

where Z = |u|^{2} + (1/n) Σ_{j} |v_{j}^{C}|^{2}. Here |u| and |v_{j}^{C}| denote vector norms, carried as amplitudes on the index register.

The distance between the input vector and the weighted average of the vectors in class C is then proportional to the probability of a successful swap test. The algorithm is repeated for each class until a desired confidence is reached, with the vector being classified into the class to which it has the shortest distance. The complexity arguments on the dependence on m were rigorously confirmed by Lloyd and others using the QPCA (quantum principal component analysis) construction for a support vector machine (SVM) algorithm. This can roughly be thought of as a *k*-means clustering problem with *k* = 2. A speedup is obtained because the classical computation of the required inner products takes O(nm) time.

The algorithm has some caveats: in particular, it only classifies data without performing the harder task of clustering, and it assumes access to a QRAM (quantum random access memory). In the same paper, Lloyd and others develop a *k*-means algorithm, including clustering, for implementation on an adiabatic quantum computer. The potential of this algorithm is hard to judge, and it is perhaps less promising given the current focus of the quantum computing field on circuit-based architectures.
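The identity behind the distance estimate can be checked numerically. The sketch below assumes the standard construction in which |Φ⟩ carries the vector norms (with a relative minus sign) as amplitudes on the index register; projecting the index register of |Ψ⟩ onto |Φ⟩ succeeds with probability p satisfying |u − v̄|² = 2Zp, where v̄ is the class centroid. This is a dense classical simulation used only to verify the algebra, not a demonstration of any speedup:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3                          # vector length m, n reference vectors in class C
u = rng.normal(size=m)
V = rng.normal(size=(n, m))          # rows are the reference vectors v_j

Z = u @ u + np.mean((V * V).sum(axis=1))    # Z = |u|^2 + (1/n) Σ_j |v_j|^2

# |Ψ⟩ lives in H_data ⊗ H_index: index 0 carries |u⟩, index j carries |v_j⟩
psi = np.zeros((m, n + 1))
psi[:, 0] = u / np.linalg.norm(u) / np.sqrt(2)
for j in range(n):
    psi[:, j + 1] = V[j] / np.linalg.norm(V[j]) / np.sqrt(2 * n)

# |Φ⟩ lives in the index register alone, with the norms as amplitudes
phi = np.concatenate(([np.linalg.norm(u)], -np.linalg.norm(V, axis=1) / np.sqrt(n)))
phi /= np.sqrt(Z)

# Contracting the index register leaves an amplitude proportional to u - v̄;
# its squared norm is the success probability p, and |u - v̄|² = 2 Z p
p = np.sum((psi @ phi) ** 2)
dist2 = np.sum((u - V.mean(axis=0)) ** 2)
assert np.isclose(2 * Z * p, dist2)
```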

Machine learning data is usually high-dimensional, often containing redundant or irrelevant information. Thus, machine learning benefits from pre-processing data through statistical procedures such as principal component analysis (PCA). PCA reduces the dimensionality by transforming the data to a new set of uncorrelated variables (the principal components), of which the first few retain most of the variation present in the original dataset. The standard way to calculate the principal components boils down to finding the eigenvalues of the data covariance matrix.
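The classical covariance-eigendecomposition route just described can be sketched as follows, on illustrative data whose variance lies mostly in a 2-dimensional subspace:

```python
import numpy as np

rng = np.random.default_rng(1)
# 200 samples in 5 dimensions, generated so that almost all variance
# comes from a 2-dimensional signal plus a little isotropic noise
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5)) + 0.01 * rng.normal(size=(200, 5))

Xc = X - X.mean(axis=0)                  # centre the data
cov = Xc.T @ Xc / (len(Xc) - 1)          # sample covariance matrix
evals, evecs = np.linalg.eigh(cov)       # eigh returns eigenvalues in ascending order
evals, evecs = evals[::-1], evecs[:, ::-1]

# The first two principal components retain almost all of the variance here
explained = evals[:2].sum() / evals.sum()
projected = Xc @ evecs[:, :2]            # the reduced, uncorrelated representation
```

Finding all d eigenvalues of the d × d covariance matrix is the step the quantum proposal below aims to accelerate for low-rank data.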

Lloyd and others suggested a quantum version of PCA (QPCA). The bulk of the algorithm consists of the ability to efficiently exponentiate an arbitrary density matrix ρ. More specifically, given n copies of ρ, Lloyd and others propose a way to apply the unitary operator e^{−iρt} to any state σ with accuracy ε = O(t^{2}/n). This is done using repeated infinitesimal applications of the swap operator on ρ ⊗ σ. Using phase estimation, the result is used to generate a state which can be sampled from to attain information on the eigenvectors and eigenvalues of the state ρ. The algorithm is most effective when ρ contains a few large eigenvalues and can be represented well by its principal components. In this case, the subspace spanned by the principal components ρ′ closely approximates ρ, such that ||ρ − PρP|| ≤ ε, where P is the projector onto ρ′. This method of QPCA allows construction of the eigenvectors and eigenvalues of the matrix ρ in time O(R log(d)), where d and R are the dimensions of the spaces spanned by ρ and ρ′ respectively. For low-rank matrices, this is an improvement over the classical algorithm, which requires O(d) time. In a machine learning context, if ρ is the covariance matrix of the data, this procedure performs PCA in the desired fashion.
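The density-matrix exponentiation step above can be verified with a small dense simulation: repeatedly apply e^{−iSΔt} (S the swap operator) to ρ ⊗ σ and trace out the ρ register, and the residual state approaches e^{−iρt} σ e^{iρt} with error shrinking as O(t²/n). This is a classical check of the error scaling (exponential in system size), not the quantum algorithm itself; the helper names are illustrative:

```python
import numpy as np

d = 2
rng = np.random.default_rng(2)

def random_density(d):
    """A random d x d density matrix (positive semi-definite, unit trace)."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

rho, sigma = random_density(d), random_density(d)

# Swap operator S on C^d ⊗ C^d: S|i⟩|j⟩ = |j⟩|i⟩
S = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        S[i * d + j, j * d + i] = 1.0

t, steps = 1.0, 400
dt = t / steps
# Since S^2 = I, the infinitesimal swap is e^{-iS dt} = cos(dt) I - i sin(dt) S
U = np.cos(dt) * np.eye(d * d) - 1j * np.sin(dt) * S

state = sigma
for _ in range(steps):
    joint = np.kron(rho, state)                  # consume a fresh copy of rho each step
    joint = U @ joint @ U.conj().T
    state = joint.reshape(d, d, d, d).trace(axis1=0, axis2=2)   # trace out the rho register

# Compare with the exact e^{-i rho t} sigma e^{+i rho t}, via eigendecomposition of rho
w, Q = np.linalg.eigh(rho)
Urho = Q @ np.diag(np.exp(-1j * w * t)) @ Q.conj().T
target = Urho @ sigma @ Urho.conj().T
err = np.abs(state - target).max()               # shrinks as O(t^2 / steps)
```

Doubling `steps` roughly halves `err`, consistent with the ε = O(t²/n) accuracy claim.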

The QPCA algorithm has a number of caveats that need to be covered before one can apply it to machine learning scenarios. For example, to gain a speedup, some of the eigenvalues of ρ need to be large (i.e. ρ needs to be well approximated by ρ′). In the case where all eigenvalues are equal and of size O(1/d), the algorithm reduces to scaling in time O(d), which offers no improvement over classical algorithms. Other aspects that need to be considered include the necessity of QRAM and the scaling of the algorithm with the allowed error ε. As of yet, it is unclear how these requirements affect the applicability of the algorithm to real scenarios.