I am working on a ray-tracing geometry problem in MATLAB and have reached a bottleneck in my program.
The function takes the start and end points of a ray (lStart and lEnd) and a set of plane points and normals (pPoint and norms), then computes the distance along the ray at which it intersects each of the planes.
Here is a reference to the original equation: https://en.wikipedia.org/wiki/Line%E2%80%93plane_intersection#Algebraic_form
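In that algebraic form, with ray origin l0, direction l = lEnd - lStart, plane point p0 and normal n, the distance is d = dot(p0 - l0, n) / dot(l, n). For a single ray and a single plane (values taken from the fifth plane and first ray of the MWE further down) that is simply:

```matlab
% Line-plane intersection, algebraic form: d = dot(p0 - l0, n) / dot(l, n)
% (values from the fifth plane and first ray of the MWE below)
l0 = [-1  0  0];               % ray start (lStart)
l  = [ 1  0  0] - l0;          % ray direction (lEnd - lStart)
p0 = [-0.5 -0.5 -0.5];         % point on the plane (pPoint)
n  = [ 1  0  0];               % plane normal (norms)
d  = dot(p0 - l0, n) / dot(l, n)   % 0.25: the intersection lies a quarter of the way along the ray
```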
The code I have so far is as follows:
```matlab
dists = (diag(bsxfun(@minus, pPoint, lStart) * norms')) ./ ((lEnd - lStart) * norms')';
```
This is called in a loop over rays, such as:
```matlab
nRays   = size(lStart, 1);
nPlanes = size(pPoint, 1);
dists   = zeros(nPlanes, nRays);
for rayCtr = 1:nRays
    dists(:, rayCtr) = (diag(bsxfun(@minus, pPoint, lStart(rayCtr, :)) * norms')) ./ ...
                       ((lEnd(rayCtr, :) - lStart(rayCtr, :)) * norms')';
end
```
This works perfectly well for a single ray: given one ray ([1 x 3]) and 300 planes ([300 x 3]), I get a [300 x 1] vector with the distance at which the ray intersects each plane.
What I am struggling with is vectorising this to work on a list of rays.
Sizes in a typical dataset are:

```
lStart, lEnd  = [14e6, 3];
pPoint, norms = [300, 3];
```
The rays are processed in batches of a few tens of thousands to fit in memory, and for each batch I'd like to eliminate the rayCtr loop. With the looped method the entire program takes just over 8 hours (32-bit Windows, 2 GB RAM).
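The diag(...) in the loop body is really just the per-plane dot product dot(pPoint(i,:) - lStart(rayCtr,:), norms(i,:)). Expanding that dot product splits the numerator into two matrix products, which suggests a fully broadcast version along these lines (a sketch, only checked against the MWE below):

```matlab
% Vectorised over a whole batch of rays: the result is [nPlanes x nRays],
% matching dists from the loop. Expanding the numerator's dot product:
%   (pPoint(i,:) - lStart(j,:)) * norms(i,:)'
%     = pPoint(i,:) * norms(i,:)'  -  lStart(j,:) * norms(i,:)'
num   = bsxfun(@minus, sum(pPoint .* norms, 2), norms * lStart');  % [nPlanes x nRays]
den   = norms * (lEnd - lStart)';                                  % [nPlanes x nRays]
dists = num ./ den;
```

On R2016b and later, implicit expansion allows the bsxfun call to be written directly as `sum(pPoint .* norms, 2) - norms * lStart'`.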
Here are some coordinates for six planes (forming a cuboid) and two rays as an MWE:
```matlab
pPoint = [-0.5000 -0.5000 -0.5000;
          -0.5000 -0.5000  0.5000;
          -0.5000 -0.5000 -0.5000;
          -0.5000  0.5000 -0.5000;
          -0.5000 -0.5000 -0.5000;
           0.5000 -0.5000 -0.5000]

norms = [0 0 1;
         0 0 1;
         0 1 0;
         0 1 0;
         1 0 0;
         1 0 0]

lStart = [-1 0    0;
          -1 0.25 0]

lEnd = [1 0    0;
        1 0.25 0]
```
The expected output from the example is:
```matlab
dists = [-Inf -Inf;
          Inf  Inf;
         -Inf -Inf;
          Inf  Inf;
         0.25 0.25;
         0.75 0.75]
```
Many thanks.
UPDATE: With the solutions proposed in the accepted answer, runtime is down to approximately 30 minutes, now limited by read/write operations and voxel lookup.