Here is an example of a subset of the matrix I would like to use:
1 3 5
2 3 6
1 1 1
3 5 4
5 5 5
8 8 0
This matrix is in fact 3000 x 3.
For the first 3 rows, I wish to subtract the first of these three rows from each of them.
For the next 3 rows, I wish to subtract the first of those three from each of them, and so on.
As such, the output matrix will look like:
0 0 0
1 0 1
0 -2 -4
0 0 0
2 0 1
5 3 -4
What code in MATLAB will do this for me?
Here's another way to implement this with bsxfun, slightly different from natan's bsxfun implementation.

A slightly shorter and vectorized way will be (if a is your matrix):

Let's test:
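The code blocks for these two answers were lost when the page was scraped; the following is a hedged reconstruction (assuming the matrix is a and groups of 3 rows, as in the question):

```matlab
a = [1 3 5; 2 3 6; 1 1 1; 3 5 4; 5 5 5; 8 8 0];

% one possible bsxfun variant: reshape into 3-row blocks (pages) and
% subtract each block's first row via singleton expansion
out1 = reshape(bsxfun(@minus, reshape(a.', 3, 3, []), ...
                      reshape(a(1:3:end,:).', 3, 1, [])), 3, []).';

% the kron one-liner: replicate every third row three times, then subtract
out2 = a - kron(a(1:3:end,:), ones(3,1));
```

Both produce the output matrix shown in the question.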
Edit
Here's a bsxfun solution (less elegant, but hopefully faster):
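The code here was also stripped; one possible bsxfun formulation (a sketch, not necessarily the original) builds, for every row, the index of its block's first row:

```matlab
a = [1 3 5; 2 3 6; 1 1 1; 3 5 4; 5 5 5; 8 8 0];

% idx expands 1:3:n into a 3-by-(n/3) matrix [1 4; 1 4; 1 4], so
% idx(:) reads out column-wise as 1 1 1 4 4 4 ...
idx = bsxfun(@plus, 1:3:size(a,1), zeros(3,1));
out = a - a(idx(:), :);
```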
Edit 2
OK, this got me curious, as I know bsxfun starts to be less efficient for bigger array sizes. So I tried to check my two solutions using timeit (because they are one-liners, it's easy). And here it is:

So I didn't test the for loop or Divakar's bsxfun, but you can see that for arrays smaller than 3e4 kron is better than bsxfun, and this changes at larger arrays (a ratio of <1 means kron took less time for that array size). This was done in MATLAB R2012a on Windows 7 (an i5 machine).
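A comparison along these lines could be reproduced with timeit (a sketch; the original loop over sizes and the plotting are assumed):

```matlab
% compare the kron one-liner against a bsxfun formulation for n rows
n = 3000;                      % number of rows; must be a multiple of 3
a = rand(n, 3);
fKron = @() a - kron(a(1:3:end,:), ones(3,1));
fBsx  = @() a - a(reshape(bsxfun(@plus, 1:3:n, zeros(3,1)), [], 1), :);
ratio = timeit(fKron) / timeit(fBsx);   % ratio < 1 means kron was faster
```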
You can do it using just indexing:

Of course, the 3 above can be replaced by any other number. It works even if that number doesn't divide the number of rows.

Simple for loop. This does each 3x3 block separately. Whilst it may be possible to vectorise this, I don't think the performance gains would be worth it unless you will do this many times. For a 3000x3 matrix it doesn't take long at all.
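The indexing approach described above might look like this (a sketch; variable names assumed):

```matlab
A = [1 3 5; 2 3 6; 1 1 1; 3 5 4; 5 5 5; 8 8 0];

% ceil((1:n)/3)*3-2 maps rows 1..3 -> 1, rows 4..6 -> 4, and so on,
% pairing each row with the first row of its group of 3
result = A - A(ceil((1:size(A,1))/3)*3-2, :);
```

Replacing the 3s (and the -2, which is -(3-1)) generalizes this to other group sizes.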
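The simple for loop could be sketched as follows (variable names assumed; repmat replicates the group's first row, matching the later remark that this answer used repmat):

```matlab
A = [1 3 5; 2 3 6; 1 1 1; 3 5 4; 5 5 5; 8 8 0];
B = zeros(size(A));
for k = 1:3:size(A,1)
    % subtract the first row of this 3-row block from the whole block
    B(k:k+2,:) = A(k:k+2,:) - repmat(A(k,:), 3, 1);
end
```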
Edit: In fact this seems to be pretty fast. I think that's because Matlab's JIT compilation can speed up simple for loops well.

You could also do this completely vectorized by using mat2cell, cellfun, then cell2mat. Assuming our matrix is stored in A, try:

The first line figures out how many 3 x 3 blocks we need. This assumes that the number of rows is a multiple of 3. The second line uses mat2cell to decompose the matrix into individual 3 x 3 blocks, placing each into its own cell. The third line then uses cellfun so that, for each cell in our cell array (which is a 3 x 3 matrix), it subtracts the first row from every row of that matrix. This is very much like what @David did, except I didn't use repmat, to minimize overhead. The fourth line then takes each of these matrices and stacks them back together so that we get our final matrix in the end.

Example (this is using the matrix that was defined in your post):

Output:
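The four lines and the example run were lost in the scrape; a reconstruction that follows the description above (variable names assumed) is:

```matlab
A = [1 3 5; 2 3 6; 1 1 1; 3 5 4; 5 5 5; 8 8 0];

numBlocks = size(A,1) / 3;                       % how many 3 x 3 blocks we need
B = mat2cell(A, 3*ones(numBlocks,1), 3);         % split A into 3 x 3 cells
C = cellfun(@(x) bsxfun(@minus, x, x(1,:)), ...  % subtract each block's first row
            B, 'UniformOutput', false);
D = cell2mat(C)                                  % stack the blocks back together

% D =
%      0     0     0
%      1     0     1
%      0    -2    -4
%      0     0     0
%      2     0     1
%      5     3    -4
```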
In hindsight, I think @David is right with respect to performance gains. Unless this code is repeated many times, I think the for loop will be more efficient. Either way, I wanted to provide another alternative. Cool exercise!

Edit: Timing and Size Tests
Because of our discussion earlier, I have decided to do timing and size tests. These tests were performed on an Intel i7-4770 @ 3.40 GHz CPU with 16 GB of RAM, using MATLAB R2014a on Windows 7 Ultimate. Basically, I did the following:
Test #1 - I set the random number generator seed to 1 for reproducibility. I wrote a loop that cycled 10000 times. For each iteration in the loop, I generated a random integer 3000 x 3 matrix, then performed each of the methods described here. I took note of how long it took for each method to complete after 10000 cycles. The timing results are:

David's for loop method: 0.092129 seconds
My mat2cell/cellfun/cell2mat method: 1.9828 seconds
natan's kron method: 0.20097 seconds
natan's bsxfun method: 0.10972 seconds
Divakar's bsxfun method: 0.0689 seconds
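The timing harness itself wasn't preserved; it might have looked roughly like this (a sketch; the per-method bookkeeping is assumed):

```matlab
rng(1);                                  % fix the random seed for reproducibility
tic;
for iter = 1:10000
    a = randi(10, 3000, 3);              % random integer 3000 x 3 matrix
    out = a - kron(a(1:3:end,:), ones(3,1));   % swap in each method in turn
end
toc                                      % elapsed time after 10000 cycles
```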
As such, Divakar's method is the fastest, followed by David's for loop method, followed closely by natan's bsxfun method, followed by natan's original kron method, followed last by the sloth (a.k.a. mine).

Test #2 - I then timed each method for matrices of increasing size, and generated a stem
plot that shows you the timing for each size of matrix. I subsetted the plot so that it displays timings from 200000 x 3 to 300000 x 3. Take note that the horizontal axis records the number of rows at each iteration. The first stem is for 3000 rows, the next is for 6000 rows, and so on. The columns remain the same at 3 (of course). I can't explain the random spikes throughout the graph... probably attributable to something happening in RAM. However, I'm very sure I cleared the variables at each iteration to ensure no bias. In any case, Divakar and David are closely tied; next comes natan's bsxfun method, then natan's kron method, followed last by mine. It's interesting to see how Divakar's bsxfun method and David's for loop method are side by side in timing.

Test #3 - I have plotted on a semi-logarithmic scale on the horizontal axis, while the vertical axis is still a linear scale. Again, the horizontal axis describes the number of rows in the matrix.
I changed the vertical limits so we can see most of the methods. My method performs so poorly that it would squash the other timings towards the lower end of the graph, so I changed the viewing limits to take my method out of the picture. Essentially, what was seen in Test #2 is verified here.