Optimizing manually-coded k-means in MATLAB?

Posted 2020-02-11 08:36

Question:

So I'm writing a k-means script in MATLAB, since the native function doesn't seem to be very efficient, and it seems to be fully operational. It works on the small training set I'm using (a 150x2 matrix fed in via text file). However, the runtime becomes prohibitively long for my target data set, which is a 3924x19 matrix.

I'm not the greatest at vectorization, so any suggestions would be greatly appreciated. Here's my k-means script so far. (I know I'll have to adjust my convergence condition, since it currently looks for an exact match, and I'll probably need more iterations for a dataset this large, but I want it to finish in a reasonable time before I crank that number up.)

clear all;

%take input file (manually specified by user)
disp('Please type input filename (in working directory): ')
target_file = input('filename: ', 's');

%parse and load into matrix
data = load(target_file);

%prompt name of output file (for later) - UNCOMMENT BELOW TWO LINES LATER
% disp('Please type output filename (to be saved in working directory): ')
% output_name = input('filename:', 's')

%prompt number of clusters
disp('Please type desired number of clusters: ')
c = input('number of clusters: ');

%specify type of kmeans algorithm ('regular' for regular, 'fuzzy' for fuzzy)
%UNCOMMENT BELOW TWO LINES LATER
% disp('Please specify type (regular or fuzzy):')
% runtype = input('type: ', 's')

%initialize cluster centroid locations within bounds given by data set

%initialize rangemax and rangemin row vectors
%with length same as number of dimensions
rangemax = zeros(1,size(data,2)); 
rangemin = zeros(1,size(data,2));

%map max and min values for bounds
for dim = 1:size(data,2)
    rangemax(dim) = max(data(:,dim));
    rangemin(dim) = min(data(:,dim));
end

% rangemax
% rangemin

%randomly initialize mu_k (center) locations in (k x n) matrix where k is
%cluster number and n is number of dimensions/coordinates

mu_k = zeros(c,size(data,2));

for k = 1:c
    mu_k(k,:) = rangemin + (rangemax - rangemin).*rand(1,size(data,2));
end

mu_k

%iterate k-means

%initialize holding variable for distance comparison
comparisonmatrix = [];

%initialize assignment vector
assignment = zeros(size(data,1),1);

%initialize distance holding vector
dist = zeros(1,c);

%specify convergence threshold
%threshold = 0.001;

for iteration = 1:25

    %save current assignment values to check convergence condition
    hold_assignment = assignment;

    for point = 1:size(data,1)

        %calculate distances from point to centers
        for k = 1:c
            %holding variables
            comparisonmatrix = [data(point,:);mu_k(k,:)];

            %pdist (from the Statistics Toolbox) returns the distance
            %between the two rows of comparisonmatrix
            dist(k) = pdist(comparisonmatrix);
        end

        %record location of minimum distance (location value will be
        %between 1 and c)
        [~, location] = min(dist);

        %assign cluster number (analogous to location value)
        assignment(point) = location;

    end

    %check convergence criteria

    if isequal(assignment,hold_assignment)
        break
    end

    %revise mu_k locations

    %count number of each label
    assignment_count = zeros(1,c);

    for i = 1:size(data,1)
        assignment_count(assignment(i)) = assignment_count(assignment(i)) + 1;
    end

    %compute centroids
    point_total = zeros(size(mu_k));

    for row = 1:size(data,1)
        point_total(assignment(row),:) = point_total(assignment(row),:) + data(row,:);
    end

    %move mu_k values to centroids
    %(note: any cluster with zero assigned points divides by zero here,
    %producing a NaN row in mu_k)
    for center = 1:c
        mu_k(center,:) = point_total(center,:)/assignment_count(center);
    end
end

There are a lot of loops in there, so I feel there's a lot of room for optimization. However, I think I've been staring at this code for far too long, so some fresh eyes would help. Please let me know if I need to clarify anything in the code block.

When the above code block is executed (in context) on the large dataset, it takes 3732.152 seconds, according to MATLAB's profiler, to complete the full 25 iterations (I'm assuming it hasn't "converged" by my criteria yet) with 150 clusters, but about 130 of those clusters come back as NaNs (130 rows of NaN in mu_k).
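My guess on the NaNs: any cluster that never gets a point assigned leaves its assignment_count at zero, so the centroid update computes 0/0 and fills that row of mu_k with NaN. Here's a sketch of a possible guard, re-seeding empty clusters to a random data point (the re-seeding idea is just one option, not something already in the script):

%centroid-update loop with a guard: re-seed any empty cluster
%to a randomly chosen data point instead of dividing by zero
for center = 1:c
    if assignment_count(center) == 0
        mu_k(center,:) = data(randi(size(data,1)),:);
    else
        mu_k(center,:) = point_total(center,:)/assignment_count(center);
    end
end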

Answer 1:

Profiling will help, but the part of your code to rework is the loop over the number of data points (for point = 1:size(data,1)). Vectorize that.
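For the profiling itself, MATLAB's built-in profiler will show exactly where the time goes (the script name below is a placeholder for yours):

profile on
my_kmeans_script    %placeholder for your script name
profile viewer      %opens the report, sorted by time spent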

Here is a quick partial example of what could replace the body of your for iteration loop:

[nPoints,nDims] = size(data);

% Calculate all high-dimensional distances at once
kdiffs = bsxfun(@minus,data,permute(mu_k,[3 2 1])); % NxDx1 - 1xDxK => NxDxK
distances = sum(kdiffs.^2,2); % no need to do sqrt
distances = squeeze(distances); % Nx1xK => NxK

% Find closest cluster center for each point
[~,ik] = min(distances,[],2); % Nx1

% Calculate the new cluster centers (mean the data)
mu_k_new = zeros(c,nDims);
clustersizes = zeros(1,c);
for i = 1:c
    indk = (ik == i);
    clustersizes(i) = nnz(indk);
    %specify dim 1 so a single-point cluster isn't averaged across dimensions
    mu_k_new(i,:) = mean(data(indk,:),1);
end

This isn't the only (or the best) way to do it, but it should be a decent example.
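To show how the pieces fit together, here is one way the full iteration loop might look with the vectorized distances dropped in. This is a sketch, not a drop-in replacement: the empty-cluster guard and the center-movement stopping test are additions beyond your original code.

[nPoints,nDims] = size(data);
maxIter = 100;
tol = 1e-6;

for iteration = 1:maxIter
    %all point-to-center squared distances at once
    kdiffs = bsxfun(@minus,data,permute(mu_k,[3 2 1])); % NxD - 1xDxK => NxDxK
    distances = squeeze(sum(kdiffs.^2,2));              % NxK

    %closest center for each point
    [~,ik] = min(distances,[],2);                       % Nx1

    %recompute centers, leaving empty clusters where they are
    mu_k_new = mu_k;
    for i = 1:c
        indk = (ik == i);
        if any(indk)
            mu_k_new(i,:) = mean(data(indk,:),1);
        end
    end

    %stop once the centers stop moving
    if max(abs(mu_k_new(:) - mu_k(:))) < tol
        mu_k = mu_k_new;
        break
    end
    mu_k = mu_k_new;
end

The only remaining loop runs over the c centers instead of the thousands of data points, which is where the original code spends its time.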

Some other comments:

  1. Instead of using input, make this script into a function so the arguments are passed in directly.
  2. If you want an easy way to specify a file, see uigetfile.
  3. With many MATLAB functions, such as max, min, sum, mean, etc., you can specify a dimension over which the function should operate. This way you can run it on a whole matrix and compute values for all columns at once (see the sketch after this list).
  4. Once you get decent performance, consider iterating longer, specifically until the centers no longer change or the number of samples that change clusters becomes small.
  5. The closest cluster for each point, ik, is the same whether you use Euclidean or squared Euclidean distance, which is why the sqrt can be skipped above.
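For example (items 1 and 3), the rangemax/rangemin loop in the question collapses to a single call each with the dimension argument, and wrapping everything in a function removes the interactive input prompts. The function name and signature below are just an illustration:

function [assignment,mu_k] = my_kmeans(data,c)
%MY_KMEANS  Illustrative signature only: data is NxD, c is the cluster count.

    %column-wise max and min in one call each (dim 1 = down the rows)
    rangemax = max(data,[],1);   % 1xD
    rangemin = min(data,[],1);   % 1xD

    %one random offset per cluster AND per dimension
    mu_k = bsxfun(@plus,rangemin, ...
        bsxfun(@times,rangemax - rangemin,rand(c,size(data,2))));

    %... iteration loop from the example above goes here ...
    assignment = zeros(size(data,1),1);  %placeholder
end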