I am currently working on a project where I need to do some processing steps with legacy Matlab code (using the Matlab engine) and the rest in Python (numpy).
I noticed that converting the results from Matlab's matlab.mlarray.double to numpy's numpy.ndarray seems horribly slow.
Here is some example code for creating an ndarray with 1000 elements from another ndarray, a list and an mlarray:
import timeit

setup_range = ("import numpy as np\n"
               "x = list(range(1000))")  # list(), to match Python 2 range() semantics
setup_arange = ("import numpy as np\n"
                "x = np.arange(1000)")
setup_matlab = ("import numpy as np\n"
                "import matlab.engine\n"
                "eng = matlab.engine.start_matlab()\n"
                "x = eng.linspace(0., 1000.-1., 1000.)")

print('From other array')
print(timeit.timeit('np.array(x)', setup=setup_arange, number=1000))
print('From list')
print(timeit.timeit('np.array(x)', setup=setup_range, number=1000))
print('From matlab')
print(timeit.timeit('np.array(x)', setup=setup_matlab, number=1000))
This prints the following times:
From other array
0.00150722111994
From list
0.0705359556928
From matlab
7.0873282467
The conversion from Matlab takes about 100 times as long as the conversion from a list.
Is there any way to speed up the conversion?
Moments after posting the question, I found the solution.
For one-dimensional arrays, access only the _data property of the Matlab array:
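A minimal sketch, assuming x is the one-dimensional matlab.double returned by eng.linspace in the benchmark above (note that _data is an undocumented attribute holding the flat array.array buffer, so it may change between Matlab releases):

np_x = np.array(x._data)  # wraps the raw buffer directly, avoiding the slow element-wise conversion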
For multi-dimensional arrays you need to reshape the array afterwards. In the case of two-dimensional arrays this means calling np.array(x._data).reshape(x.size[::-1]).T: reshape with the dimensions reversed, then transpose, because Matlab stores its data in column-major order while numpy defaults to row-major.
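For instance, a sketch assuming eng is the running engine from above (eng.magic is just a convenient way to get a small 2D matlab.double for illustration):

m = eng.magic(3.)                                 # 3x3 Matlab matrix
np_m = np.array(m._data).reshape(m.size[::-1]).T  # reverse the dims, then transpose,
                                                  # since the buffer is column-major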
Tim's answer is great for 2D arrays, but a way to adapt it to N-dimensional arrays is to use the order parameter of np.reshape():

np_x = np.array(x._data).reshape(x.size, order='F')