I have a collection of objects, each of which has a numerical 'weight'. I would like to create groups of these objects such that each group has approximately the same arithmetic mean of object weights.
The groups won't necessarily have the same number of members, but the size of groups will be within one of each other. In terms of numbers, there will be between 50 and 100 objects and the maximum group size will be about 5.
Is this a well-known type of problem? It seems a bit like a knapsack or partition problem. Are efficient algorithms known to solve it?
As a first step, I created a Python script that achieves very crude equivalence of mean weights by sorting the objects by weight, slicing them into subgroups, and then distributing one member of each subgroup to each of the final groups.
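Roughly, the script does something like this (a simplified sketch using bare numbers in place of objects, not the actual code; the group count is an illustrative parameter):

```python
def crude_groups(weights, n_groups):
    # Sort by weight, slice the sorted list into consecutive subgroups of
    # n_groups items, then deal one member of each subgroup to each group.
    ordered = sorted(weights)
    groups = [[] for _ in range(n_groups)]
    for start in range(0, len(ordered), n_groups):
        subgroup = ordered[start:start + n_groups]
        for group, w in zip(groups, subgroup):
            group.append(w)
    return groups
```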
I am comfortable programming in Python, so if existing packages or modules exist to achieve part of this functionality, I'd appreciate hearing about them.
Thank you for your help and suggestions.
The program that follows is a low-cost heuristic. What it does is distribute the values among "buckets", placing large values along with small ones by choosing values from one end of a sorted list on one round and from the other end on the next. Doing the distribution round-robin guarantees that the rules about the number of elements per bucket are met. It is a heuristic rather than an exact algorithm because it tends to produce good solutions without any guarantee that better ones don't exist.
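A minimal sketch of that heuristic (the bucket count parameter and the names here are illustrative, not the original code):

```python
import random

def fill_buckets(values, n_buckets):
    """Distribute values into n_buckets, pairing large values with small
    ones: the sorted values are dealt round-robin, drawing from the low end
    of the list on one pass over the buckets and from the high end on the
    next."""
    remaining = sorted(values)
    buckets = [[] for _ in range(n_buckets)]
    take_low = True
    while remaining:
        for bucket in buckets:
            if not remaining:
                break
            bucket.append(remaining.pop(0) if take_low else remaining.pop())
        take_low = not take_low          # switch ends for the next round
    return buckets

buckets = fill_buckets([random.random() * 100 for _ in range(57)], 12)
print([sum(b) / len(b) for b in buckets])
```

Because the values are dealt round-robin, the bucket sizes always end up within one of each other, as required.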
In theory, if there are enough values and they are uniformly or normally distributed, then just randomly placing the values in the buckets is likely to produce similar means across the buckets. Assuming the dataset is small, this heuristic improves the chances of a good solution. Knowing more about the size and statistical distribution of the datasets would help devise a better heuristic, or an algorithm.
Complexity is O(N log N), dominated by the sorting step. These are sample results:
Note that the requirement on the size of the buckets dominates, which means that the means won't be close if the variance in the original data is large. You can try it with a dataset containing one very large value, say 100000, among much smaller ones: the bucket containing 100000 will still get at least another 3 datums. I came up with this heuristic thinking that it's what a group of kids would do if handed a pack of bills of different denominations and told to share them according to this game's rules. It's statistically reasonable, and O(N log N).
You might try using k-means clustering:
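Something along these lines (a sketch of the idea; N, the seed, the threshold value, and the variable names are mine):

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

np.random.seed(0)
N = 50
weights = np.random.random(N) * 100        # a random sequence of N weights
obs = weights.reshape(-1, 1)               # kmeans wants one observation per row

k = 1
threshold = 5.0                            # maximum acceptable distortion (tune this)
while True:
    centroids, distortion = kmeans(obs, k)
    if distortion < threshold:
        break
    k += 1                                 # distortion too high: try more clusters

labels, _ = vq(obs, centroids)             # assign each weight to its nearest centroid
for i in range(len(centroids)):
    print(sorted(weights[labels == i]))
```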
The code above generates a random sequence of N weights. It uses scipy.cluster.vq.kmeans to partition the sequence into k clusters of numbers which are close together. If the distortion is above a threshold, the kmeans is recomputed with k increased. This repeats until the distortion is below the given threshold. It yields clusters such as this:
Note that the k-means clustering algorithm uses random guesses to initially pick the centers of the k groups. This means that repeated runs of the same code can produce different results, particularly if the weights do not separate themselves into clearly distinct groups. You'll also have to twiddle the threshold parameter to produce the desired number of groups.
You could also try a centroid-based linkage algorithm, which achieves a similar grouping.
See this for the code and this for understanding.
UPGMA (a closely related average-linkage method) is probably what you want to do.
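A sketch of that with scipy's hierarchical clustering (in scipy.cluster.hierarchy, method='average' is UPGMA and method='centroid' is centroid linkage; the weights, group count, and names below are illustrative):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

weights = np.random.random(60) * 100
# Build the hierarchy with centroid linkage ('average' would give UPGMA).
Z = linkage(weights.reshape(-1, 1), method='centroid')
# Cut the tree into a chosen number of flat clusters.
n_groups = 12                               # illustrative target group count
labels = fcluster(Z, t=n_groups, criterion='maxclust')
for i in np.unique(labels):
    group = weights[labels == i]
    print(len(group), group.mean())
```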