I have a set of objects {obj_1, obj_2, obj_3, ..., obj_n}. I have calculated the pairwise distances of all possible pairs. The distances are stored in an n*n matrix M, with M_ij being the distance between obj_i and obj_j. M is therefore a symmetric matrix.
Now I wish to perform unsupervised clustering on these objects. After some searching, I found that Spectral Clustering may be a good candidate, since it handles such pairwise-distance cases.
However, after carefully reading its description, I find it unsuitable for my case, as it requires the number of clusters as an input. I don't know the number of clusters before clustering; it has to be figured out by the algorithm itself, as DBSCAN does.
Given this, please suggest some clustering methods that fit my case, where:
- The pairwise distances are all available.
- The number of clusters is unknown.
You can try hierarchical clustering. It has two types: agglomerative (bottom-up) and divisive (top-down).
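For instance, agglomerative clustering can be run directly on a precomputed distance matrix with SciPy, and cutting the dendrogram at a distance threshold means you never specify the number of clusters up front. A minimal sketch with a toy distance matrix (the threshold value is illustrative):

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

# Toy symmetric distance matrix for 4 objects: the first two and the
# last two are close to each other; the two groups are far apart.
M = np.array([
    [0.0, 0.1, 5.0, 5.2],
    [0.1, 0.0, 5.1, 5.3],
    [5.0, 5.1, 0.0, 0.2],
    [5.2, 5.3, 0.2, 0.0],
])

# linkage() expects the condensed (upper-triangular) form of the matrix
Z = linkage(squareform(M), method='average')

# Cut the dendrogram at distance 1.0 instead of asking for k clusters
labels = fcluster(Z, t=1.0, criterion='distance')
print(labels)  # two clusters recovered automatically
```

The number of clusters falls out of the threshold choice, which you can pick by inspecting the dendrogram.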
It's easy to do with the metric='precomputed' argument in scikit-learn clustering algorithms: you fit the model on the pairwise distance matrix rather than on the original features. The same idea works when you still need to compute the pairwise distance matrix yourself first.
Have you considered Correlation Clustering?
If you read carefully section 2.1 in that paper you'll see a probabilistic interpretation to the recovered number of clusters.
The only modification you need for your M matrix is to set a threshold deciding which distances are small enough to be considered "same" and which are too large and should be considered "not-same". Section 7.2 in the aforementioned paper deals with clustering a full matrix, where recovering the underlying number of clusters is an important part of the task at hand.
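The thresholding step itself is simple; a sketch (the threshold value 1.0 is an arbitrary placeholder you would tune for your data):

```python
import numpy as np

# Toy symmetric distance matrix
M = np.array([[0.0, 0.4, 3.0],
              [0.4, 0.0, 2.8],
              [3.0, 2.8, 0.0]])

threshold = 1.0  # placeholder: distances below it count as "same"

# +1 for "same" pairs, -1 for "not-same" pairs, which is the signed
# similarity input that correlation clustering operates on
S = np.where(M < threshold, 1, -1)
np.fill_diagonal(S, 1)  # an object is always "same" as itself
print(S)
```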
You can try multidimensional scaling (MDS). After you use MDS to convert the distance-like data into a geometric representation, you can apply common clustering methods (like k-means) for clustering. See here and here for more.
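A sketch of that pipeline with scikit-learn (the embedding dimension and k are illustrative; note that k-means itself still needs a cluster count, so you would pair this with a method for choosing k, or swap in a density-based algorithm):

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

# Toy symmetric distance matrix for 4 objects forming two tight pairs
M = np.array([
    [0.0, 0.2, 4.0, 4.1],
    [0.2, 0.0, 4.2, 4.0],
    [4.0, 4.2, 0.0, 0.3],
    [4.1, 4.0, 0.3, 0.0],
])

# Embed the precomputed distances into 2-D Euclidean space
mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
coords = mds.fit_transform(M)

# Any feature-space clustering method now applies to the embedding
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)
print(labels)
```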
There are many possible clustering methods, and none of them can be considered "best"; everything depends on the data, as always.
Another approach that nobody has suggested thus far, if you like probabilistic clustering, is Bayesian non-parametrics (Dirichlet process priors being the simplest case). You can use multinomial likelihood for count-type data, or multivariate Gaussian likelihood if your data are continuous.
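As a concrete sketch: scikit-learn's BayesianGaussianMixture with a Dirichlet-process prior treats n_components only as an upper bound and prunes unused components, so the effective number of clusters is inferred from the data. Note it works on feature vectors, not on a distance matrix directly (you could feed it an MDS embedding of M); the toy data below are illustrative:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Toy continuous data: two well-separated Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(50, 2)),
    rng.normal(loc=5.0, scale=0.3, size=(50, 2)),
])

# Truncated Dirichlet-process mixture: n_components is an upper bound only
bgm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type='dirichlet_process',
    random_state=0,
).fit(X)

# Components with non-negligible weight give the inferred cluster count
effective_k = int(np.sum(bgm.weights_ > 0.05))
print(effective_k)
```

The weight cutoff (0.05 here) is a judgment call; unused components get weights driven toward zero by the prior.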