Greedy algorithms such as adaptive sampling (k-means++) and furthest point traversal are popular choices for clustering problems. On the one hand, they possess ...
Bhaskara, Vadgama, and Xu (NeurIPS 2019; cited below) prove that for k-means clustering, one can obtain a factor-50 approximation to the value of OPT while declaring at most z points as outliers, as desired.
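For reference, here is a minimal sketch of the two standard greedy rules named above, written in plain numpy. The function names and the use of squared Euclidean distances are illustrative choices, not taken from the paper.

```python
import numpy as np

def d2_sampling(X, k, rng=None):
    """Adaptive (k-means++ / D^2) sampling: each new center is drawn with
    probability proportional to its squared distance to the centers chosen so far."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]               # first center uniformly at random
    d2 = np.sum((X - centers[0]) ** 2, axis=1)   # squared distance to nearest center
    for _ in range(1, k):
        c = X[rng.choice(n, p=d2 / d2.sum())]    # sample proportional to D^2
        centers.append(c)
        d2 = np.minimum(d2, np.sum((X - c) ** 2, axis=1))
    return np.array(centers)

def furthest_point_traversal(X, k):
    """Gonzalez-style greedy for k-center: repeatedly add the point
    furthest from the current set of centers."""
    centers = [X[0]]                             # any starting point works
    d = np.linalg.norm(X - centers[0], axis=1)
    for _ in range(1, k):
        c = X[np.argmax(d)]
        centers.append(c)
        d = np.minimum(d, np.linalg.norm(X - c, axis=1))
    return np.array(centers)
```

Both rules maintain, for every point, its distance to the nearest chosen center, so each of the k rounds costs O(nd) time.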
This work proposes simple algorithmic modifications to the well-studied greedy approximation algorithms for the k-center and k-means clustering problems, and shows that the modified algorithms retain nearly identical guarantees while additionally being robust to outliers.
What if some of the data points are outliers? Suppose the outliers lie far away from the true clusters. Both greedy rules are then vulnerable: furthest point traversal will pick the most distant outlier as a center, and D^2 sampling assigns such points a disproportionately large sampling probability.
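A tiny synthetic example (hypothetical data, with the furthest point rule restated inline so the snippet runs on its own) makes the failure mode concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
# two tight clusters of 50 points each, plus a single far-away outlier
cluster_a = rng.normal(loc=0.0, scale=0.1, size=(50, 2))
cluster_b = rng.normal(loc=5.0, scale=0.1, size=(50, 2))
outlier = np.array([[1000.0, 1000.0]])
X = np.vstack([cluster_a, cluster_b, outlier])

# furthest point traversal with k = 2: the second center is the point
# furthest from the first, which is the outlier rather than the second cluster
centers = [X[0]]
d = np.linalg.norm(X - centers[0], axis=1)
centers.append(X[np.argmax(d)])
print(centers[1])   # approximately [1000, 1000]: the outlier becomes a center
```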
Related work studies the problem of k-clustering with fair outlier removal and provides the first approximation algorithm for well-known clustering formulations.
A. Bhaskara, S. Vadgama, H. Xu. Greedy sampling for approximate clustering in the presence of outliers. Advances in Neural Information Processing Systems 32, 2019.
We propose a simple variant of the D^2 sampling distribution which makes it robust to outliers. Our algorithm runs in O(ndk) time and outputs O(k) clusters.
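The snippet does not spell out the modified distribution, so the following is only a plausible illustration of what a robust D^2 variant can look like, not the paper's exact procedure: each point's sampling weight is capped at a threshold derived from the z-th largest current distance, so a handful of far-away outliers cannot dominate the distribution. The capping rule, the parameter beta, and the oversampling to roughly beta * k centers are assumptions made for this sketch.

```python
import numpy as np

def robust_d2_sampling(X, k, z, beta=2.0, rng=None):
    """Hypothetical outlier-aware D^2 sampling sketch (not the paper's algorithm).

    Per-point weights are capped at the (n - z)-th smallest squared distance,
    so the z suspected outliers cannot soak up all of the sampling probability.
    Oversamples to about beta * k centers, matching the O(k)-clusters regime
    mentioned in the snippet above.
    """
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    num_centers = int(np.ceil(beta * k))
    centers = [X[rng.integers(n)]]
    d2 = np.sum((X - centers[0]) ** 2, axis=1)
    for _ in range(1, num_centers):
        # cap each weight at the (n - z)-th smallest squared distance
        threshold = np.partition(d2, n - z - 1)[n - z - 1]
        w = np.minimum(d2, threshold)
        if w.sum() == 0:                    # every point already coincides with a center
            break
        c = X[rng.choice(n, p=w / w.sum())]
        centers.append(c)
        d2 = np.minimum(d2, np.sum((X - c) ** 2, axis=1))
    return np.array(centers)
```

Each round still costs O(nd) time plus a linear-time selection for the threshold, so sampling O(k) centers stays within the O(ndk) budget; a natural post-processing step is to declare the z points furthest from the returned centers as the outliers.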