Can this code be vectorised further to eliminate loops?

Posted 2019-04-28 08:39

Question:

I am working on a ray-tracing geometry problem in MATLAB and have reached a bottleneck in my program.

The function takes in the start and end points of a ray (lStart and lEnd), a set of plane-points and normals (pPoint and norms). The function then computes the distance along the ray at which it intersects each of the planes.

Here is a reference to the original equation: https://en.wikipedia.org/wiki/Line%E2%80%93plane_intersection#Algebraic_form
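For a single ray and a single plane, the algebraic form boils down to d = ((p0 - l0) · n) / (l · n), where l = lEnd - lStart is the ray direction. A minimal NumPy sketch (a translation for illustration; the values are taken from the cuboid example further down):

```python
import numpy as np

l0 = np.array([-1.0, 0.0, 0.0])    # ray start (lStart)
l1 = np.array([ 1.0, 0.0, 0.0])    # ray end (lEnd)
p0 = np.array([-0.5, -0.5, -0.5])  # a point on the plane
n  = np.array([ 1.0, 0.0, 0.0])    # the plane's normal

# d is the fraction of the ray's length at which it crosses the plane
d = np.dot(p0 - l0, n) / np.dot(l1 - l0, n)  # -> 0.25
```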

The code I have so far is as follows:

dists = (diag(bsxfun(@minus, pPoint, lStart) * norms')) ./ ((lEnd - lStart) * norms')';

This is called in a loop over rays:

nRays   = size(lStart, 1);
nPlanes = size(pPoint, 1);
dists   = zeros(nPlanes, nRays);

for rayCtr = 1:nRays

    dists(:, rayCtr) = (diag(bsxfun(@minus, pPoint, lStart(rayCtr, :)) * norms')) ./...
         ((lEnd(rayCtr, :) - lStart(rayCtr, :)) * norms')';

end
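For readers more comfortable with NumPy, the per-ray loop can be sketched as follows (a translation, not the original MATLAB; the helper name `ray_plane_dists_loop` is hypothetical). Note that the row-wise dot product replaces the `diag(... * norms')` construction, which builds a full nPlanes-by-nPlanes matrix only to keep its diagonal:

```python
import numpy as np

def ray_plane_dists_loop(lStart, lEnd, pPoint, norms):
    """Distance along each ray at which it intersects each plane, one ray at a time."""
    nRays, nPlanes = lStart.shape[0], pPoint.shape[0]
    dists = np.zeros((nPlanes, nRays))
    for r in range(nRays):
        num = np.sum((pPoint - lStart[r]) * norms, axis=1)  # (p0 - l0) . n per plane
        den = norms @ (lEnd[r] - lStart[r])                 # l . n per plane
        with np.errstate(divide='ignore'):                  # parallel rays give +/-Inf
            dists[:, r] = num / den
    return dists
```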

This works perfectly well for a single ray.

Given one ray [1 x 3] and 300 planes [300 x 3], I get a [300 x 1] matrix with the distance of each plane intersection.

What I am struggling with is vectorising this to work on a list of rays.

Sizes in a typical dataset are:

lStart, lEnd  = [14e6, 3];
pPoint, norms = [300,  3];

Ray processing is usually batched into tens of thousands of rays at a time to fit in memory. For each batch, I'd like to eliminate the rayCtr loop. With the looped method the entire program takes just over 8 hours (32-bit Windows, 2 GB RAM).

Here are some coordinates for six planes (forming a cuboid) and two rays as an MWE:

pPoint = [-0.5000   -0.5000   -0.5000;
          -0.5000   -0.5000    0.5000;
          -0.5000   -0.5000   -0.5000;
          -0.5000    0.5000   -0.5000;
          -0.5000   -0.5000   -0.5000;
           0.5000   -0.5000   -0.5000]

norms = [0  0   1;
         0  0   1;
         0  1   0;
         0  1   0;
         1  0   0;
         1  0   0]

lStart = [-1 0 0;
          -1 0.25 0]

lEnd   = [1 0 0;
          1 0.25 0]

The expected output from the example is:

dists = [-Inf -Inf;
          Inf  Inf; 
         -Inf -Inf; 
          Inf  Inf; 
          0.25 0.25; 
          0.75 0.75]

Many thanks.

UPDATE: With the solutions proposed in the accepted answer, runtime is down to approximately 30 mins - now limited by read-write operations and voxel lookup.

Answer 1:

I think what you need is

% numerator:   (pPoint - lStart) . n  over a [nPlanes x nRays x 3] array, summed along dim 3
% denominator: (lEnd - lStart) . n    over the same layout
dists = sum(bsxfun(@times, ...
                   bsxfun(@minus, permute(pPoint, [1 3 2]), permute(lStart, [3 1 2])), ...
                   permute(norms, [1 3 2])), 3) ...
        ./ sum(bsxfun(@times, ...
                      permute(lEnd - lStart, [3 1 2]), permute(norms, [1 3 2])), 3)

This assumes that pPoint and norms are of size [nPlanes 3], while lStart and lEnd are of size [nRays 3]. The result is of size [nPlanes nRays], each entry corresponding to a (plane, ray) pair.
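As a cross-check, the same broadcasting idea can be sketched in NumPy (an equivalent translation, not part of the original answer); `np.einsum` plays the role of the `bsxfun(@times)` plus `sum` reduction:

```python
import numpy as np

# Cuboid MWE from the question.
pPoint = np.array([[-0.5, -0.5, -0.5], [-0.5, -0.5,  0.5],
                   [-0.5, -0.5, -0.5], [-0.5,  0.5, -0.5],
                   [-0.5, -0.5, -0.5], [ 0.5, -0.5, -0.5]])
norms  = np.array([[0., 0., 1.], [0., 0., 1.],
                   [0., 1., 0.], [0., 1., 0.],
                   [1., 0., 0.], [1., 0., 0.]])
lStart = np.array([[-1., 0.,   0.], [-1., 0.25, 0.]])
lEnd   = np.array([[ 1., 0.,   0.], [ 1., 0.25, 0.]])

# numerator: (pPoint - lStart) . n for every (plane, ray) pair -> [nPlanes, nRays]
num = np.einsum('pk,prk->pr', norms, pPoint[:, None, :] - lStart[None, :, :])
# denominator: (lEnd - lStart) . n via one matrix product -> [nPlanes, nRays]
den = norms @ (lEnd - lStart).T
with np.errstate(divide='ignore'):   # rays parallel to a plane divide by zero -> +/-Inf
    dists = num / den                # matches the expected output in the question
```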

This gives the correct result for your example:

dists =

      -Inf      -Inf
       Inf       Inf
      -Inf      -Inf
       Inf       Inf
    0.2500    0.2500
    0.7500    0.7500

Here's another way that brings fast matrix multiplication into play for the denominator calculation -

p1 = sum(bsxfun(@times, bsxfun(@minus, pPoint, permute(lStart, [3 2 1])), norms), 2)  % [nPlanes x 1 x nRays]
p2 = norms*(lEnd - lStart).'                                                          % [nPlanes x nRays]
dists = squeeze(p1)./p2

Since lStart is the large dataset here, it might be better to leave it as it is and permute the other arrays around it. One alternative way to obtain squeeze(p1) is -

squeeze(sum(bsxfun(@times, ...
                   bsxfun(@minus, permute(pPoint, [3 2 1]), lStart), ...
                   permute(norms, [3 2 1])), 2)).'
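For completeness, the p1/p2 variant maps to NumPy almost one-to-one (again a hedged translation of the MATLAB, not the answer's own code): broadcasting replaces the `bsxfun` calls, and a single matrix product supplies the denominator for all (plane, ray) pairs at once:

```python
import numpy as np

# Same cuboid MWE as in the question.
pPoint = np.array([[-0.5, -0.5, -0.5], [-0.5, -0.5,  0.5],
                   [-0.5, -0.5, -0.5], [-0.5,  0.5, -0.5],
                   [-0.5, -0.5, -0.5], [ 0.5, -0.5, -0.5]])
norms  = np.array([[0., 0., 1.], [0., 0., 1.],
                   [0., 1., 0.], [0., 1., 0.],
                   [1., 0., 0.], [1., 0., 0.]])
lStart = np.array([[-1., 0.,   0.], [-1., 0.25, 0.]])
lEnd   = np.array([[ 1., 0.,   0.], [ 1., 0.25, 0.]])

# p1: per-(plane, ray) numerator via broadcasting -> [nPlanes, nRays]
p1 = np.sum((pPoint[:, None, :] - lStart[None, :, :]) * norms[:, None, :], axis=2)
# p2: one fast matrix multiply for the denominator -> [nPlanes, nRays]
p2 = norms @ (lEnd - lStart).T
with np.errstate(divide='ignore'):   # parallel rays give +/-Inf, as expected
    dists = p1 / p2
```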