Improved Outlier Robust Seeding for k-means

09/06/2023
by Amit Deshpande, et al.

k-means is a popular clustering objective, but it is inherently non-robust and sensitive to outliers. Its popular seeding (initialization) method, k-means++, uses D^2 sampling and comes with a provable O(log k) approximation guarantee <cit.>. However, in the presence of adversarial noise or outliers, D^2 sampling is more likely to pick centers from distant outliers than from inlier clusters, so its approximation guarantee with respect to the k-means solution on the inliers does not hold. Assuming that the outliers constitute a constant fraction of the given data, we propose a simple variant of the D^2 sampling distribution that makes it robust to outliers. Our algorithm runs in O(ndk) time, outputs O(k) clusters, discards only marginally more points than the optimal number of outliers, and comes with a provable O(1) approximation guarantee. It can also be modified to output exactly k clusters instead of O(k), while keeping its running time linear in n and d. This improves over previous results for robust k-means based on LP relaxation and rounding <cit.>, <cit.>, and over robust k-means++ <cit.>. Our empirical results show the advantage of our algorithm over k-means++ <cit.>, uniform random seeding, greedy sampling for k-means <cit.>, and robust k-means++ <cit.> on standard real-world and synthetic data sets used in previous work. Our proposal is easily amenable to scalable, faster, parallel implementations of k-means++ <cit.> and is of independent interest for coreset constructions in the presence of outliers <cit.>.
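To make the D^2 sampling discussed above concrete, here is a minimal sketch of standard k-means++ seeding, which the paper's variant modifies to gain outlier robustness. This is plain D^2 sampling, not the authors' robust algorithm: each new center is drawn with probability proportional to its squared distance to the nearest center picked so far, which is exactly why a distant outlier can attract the draw. The function name and NumPy-based implementation are illustrative choices, not from the paper.

```python
import numpy as np

def d2_seeding(X, k, rng=None):
    """Standard k-means++ (D^2) seeding sketch.

    Each new center is sampled with probability proportional to its
    squared distance to the nearest already-chosen center, which is
    what makes far-away outliers disproportionately likely picks.
    """
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    # First center: uniform at random over the data.
    centers = [X[rng.integers(n)]]
    # d2[i] = squared distance from point i to its nearest center so far.
    d2 = np.sum((X - centers[0]) ** 2, axis=1)
    for _ in range(k - 1):
        probs = d2 / d2.sum()          # the D^2 distribution
        idx = rng.choice(n, p=probs)
        centers.append(X[idx])
        d2 = np.minimum(d2, np.sum((X - X[idx]) ** 2, axis=1))
    return np.array(centers)
```

The paper's contribution is to reshape this sampling distribution so that, when a constant fraction of points are outliers, the mass placed on distant outliers is damped and centers land in inlier clusters with good probability.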
