Machine learning/Unsupervised Learning/K-means Clustering

From testwiki
Revision as of 08:28, 11 March 2023 by imported>MathXplore (added Category:Machine learning using HotCat)

K-means is a clustering method; clustering is an unsupervised learning problem.

  • In this method the number of clusters is an input to the algorithm (a hyper-parameter).
  • The K-means method is a greedy algorithm. It is a special case of the expectation-maximization (EM) algorithm, in which we try to find a maximum likelihood estimate (MLE).
  • The K-means algorithm is not guaranteed to converge to the global minimum of the loss function (the sum of squared Euclidean distances from each point to its assigned cluster center). Finding the global minimum is an NP-hard problem.
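To make the local-minimum caveat concrete, here is a small sketch in plain Python (the function names and toy data are illustrative, not from the original): two initializations of the standard alternating algorithm on the same four points converge to different final losses.

```python
def loss(points, centers, assign):
    # Sum of squared Euclidean distances from each point to its assigned center.
    return sum(sum((p - c) ** 2 for p, c in zip(x, centers[a]))
               for x, a in zip(points, assign))

def lloyd(points, centers, iters=50):
    # Alternate assignment and mean-update steps until assignments stop changing.
    # (Assumes no cluster goes empty, which holds on this toy data.)
    assign = None
    for _ in range(iters):
        new = [min(range(len(centers)),
                   key=lambda j: sum((p - c) ** 2 for p, c in zip(x, centers[j])))
               for x in points]
        if new == assign:
            break
        assign = new
        centers = [tuple(sum(vals) / len(members) for vals in zip(*members))
                   for members in ([x for x, a in zip(points, assign) if a == j]
                                   for j in range(len(centers)))]
    return centers, assign

# Four corners of a wide rectangle; the best k=2 clustering splits left/right.
pts = [(0.0, 0.0), (0.0, 1.0), (4.0, 0.0), (4.0, 1.0)]
c_good, a_good = lloyd(pts, [(0.0, 0.5), (4.0, 0.5)])  # converges to the left/right split
c_bad, a_bad = lloyd(pts, [(2.0, 0.0), (2.0, 1.0)])    # stuck at the top/bottom split
print(loss(pts, c_good, a_good))  # lower loss (global minimum here)
print(loss(pts, c_bad, a_bad))    # higher loss: a local minimum the greedy steps cannot escape
```

The bad initialization is a fixed point of the algorithm: reassigning points does not change the clusters, so the greedy updates stop even though a strictly better clustering exists. This is why K-means is typically restarted from several random initializations in practice.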

Relation to Gaussian Mixture Model (GMM):

  • GMM is a more probabilistic approach to clustering.
  • The expectation-maximization (EM) algorithm is used to fit a Gaussian mixture model that clusters the data.

Intuition

Data points in each cluster are closer to their own cluster's center than to the centers of the other clusters.

Algorithm

Assume that the number of clusters is given by $k$ and the cluster centers are denoted $\mu_1, \mu_2, \ldots, \mu_k \in \mathbb{R}^d$.

We define a loss function for clustering and try to minimize it through the following greedy algorithm.

The loss function is defined as

$$L = \sum_{j=1}^{k} \sum_{i} a_{ij} \, \|x_i - \mu_j\|^2, \quad \text{where } a_{ij} = \begin{cases} 1 & \text{if } x_i \text{ is assigned to cluster } j \\ 0 & \text{otherwise} \end{cases}$$

Minimize $L$ with respect to $a$ and $\mu$ by alternating these two steps until convergence is achieved:

  1. Choose the optimal $a$ for fixed $\mu$ by assigning each $x_i$ to the nearest center: $$a_{ij} = \begin{cases} 1 & \text{if } j = \arg\min_l \|x_i - \mu_l\|^2 \\ 0 & \text{otherwise} \end{cases}$$
  2. Choose the optimal $\mu$ for fixed $a$ by updating each $\mu_j$ to be the empirical mean of the points assigned to cluster $j$: $$\mu_j = \frac{1}{n_j} \sum_{i:\, x_i \in \text{cluster } j} x_i, \quad \text{where } n_j = \sum_{i=1}^{n} a_{ij} = \text{number of data points assigned to cluster } j$$
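The two steps above can be sketched as a minimal, pure-Python implementation of Lloyd's algorithm (the function name, toy data, and initial centers are illustrative assumptions, not from the original):

```python
def kmeans(points, k, centers, iters=100):
    """Alternate the two steps until assignments stop changing."""
    assign = None
    for _ in range(iters):
        # Step 1: assign each point to its nearest center (optimal a for fixed mu).
        new_assign = [
            min(range(k), key=lambda j: sum((p - c) ** 2 for p, c in zip(x, centers[j])))
            for x in points
        ]
        if new_assign == assign:  # converged: assignments are a fixed point
            break
        assign = new_assign
        # Step 2: move each center to the empirical mean of its points (optimal mu for fixed a).
        for j in range(k):
            members = [x for x, a in zip(points, assign) if a == j]
            if members:  # keep the old center if a cluster happens to be empty
                centers[j] = tuple(sum(vals) / len(members) for vals in zip(*members))
    return centers, assign

# Two well-separated pairs of 2-D points; k=2 recovers them.
points = [(0.0, 0.0), (0.0, 1.0), (10.0, 0.0), (10.0, 1.0)]
centers, assign = kmeans(points, k=2, centers=[(0.0, 0.5), (10.0, 0.5)])
```

Because step 1 minimizes $L$ over $a$ and step 2 minimizes $L$ over $\mu$, the loss is non-increasing at every iteration, which is why the loop terminates once the assignments repeat.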

Justification

In this section, we show that choosing the cluster center $\mu_j$ according to step 2 of the algorithm minimizes the loss function for a fixed set of assignment variables $a_{ij}$.

In order to find the minimum of the loss function $L$ as a function of $\mu_j$, we find the point at which the gradient vanishes:

$$\nabla_{\mu_j} L = 0.$$

Therefore, we have

$$\nabla_{\mu_j} L = \sum_i a_{ij} \, \nabla_{\mu_j} (x_i - \mu_j)^T (x_i - \mu_j) = \sum_i a_{ij} \, \nabla_{\mu_j} \left( x_i^T x_i - 2 \mu_j^T x_i + \mu_j^T \mu_j \right) = \sum_i a_{ij} (-2 x_i + 2 \mu_j) = 0.$$

Getting rid of the factor of 2 in the last expression, we have

$$\mu_j \sum_i a_{ij} = \sum_i a_{ij} x_i \quad \Longrightarrow \quad \mu_j = \frac{\sum_i a_{ij} x_i}{\sum_i a_{ij}}.$$

We also have $n_j = \sum_{i=1}^{n} a_{ij} = \#\{i : x_i \text{ is assigned to cluster } j\}$, which simplifies $\mu_j$ to

$$\mu_j = \frac{1}{n_j} \sum_{i:\, x_i \in \text{cluster } j} x_i.$$
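The conclusion of this derivation, that the empirical mean minimizes the sum of squared distances for a fixed assignment, can also be checked numerically. A minimal sketch in plain Python, using made-up 1-D data for one cluster:

```python
def sq_loss(points, mu):
    # Sum of squared distances from each assigned point to a candidate center mu.
    return sum((x - mu) ** 2 for x in points)

cluster = [1.0, 2.0, 4.0, 9.0]          # toy points assigned to one cluster
mean = sum(cluster) / len(cluster)      # empirical mean: mu_j = (1/n_j) * sum of x_i
best = sq_loss(cluster, mean)

# The mean should beat every other candidate center on a fine grid.
candidates = [i / 100 for i in range(0, 1001)]   # 0.00, 0.01, ..., 10.00
assert all(sq_loss(cluster, m) >= best for m in candidates)
```

The grid search is only a sanity check, of course; the gradient argument above shows the mean is the unique minimizer, since the loss is a strictly convex quadratic in $\mu_j$.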