I am writing a thin wrapper around the ArUco augmented reality library (which is based on OpenCV). The interface I am trying to build is very simple:
- Python passes an image to the C++ code;
- the C++ code detects markers and returns their locations and other info back to Python as a tuple of dicts.
However, I couldn't figure out how to represent an image in Python so that I can pass it to the C++ layer. For GUI and camera management I am going to use PyQt, so initially the image is going to be a QImage, but I can't simply pass that to OpenCV (or can I?). At first, I tried to use nested tuples to represent the row, column and color of each pixel, which gave me this sample code:
#include <boost/python.hpp>
#include <opencv2/opencv.hpp>

using namespace cv;
namespace py = boost::python;

void display(py::tuple pix)
{
    /*
    Receive image from Python and display it.
    */
    Mat img(py::len(pix), py::len(pix[0]), CV_8UC3, Scalar(0, 0, 255));
    for (int y = 0; y < py::len(pix); y++)
        for (int x = 0; x < py::len(pix[y]); x++)
        {
            // Unpack one (r, g, b) tuple; every extract<> crosses the
            // Python/C++ boundary, once per channel of every pixel.
            Vec3b rgb;
            for (int i = 0; i < 3; i++)
                rgb[i] = py::extract<int>(pix[y][x][i]);
            img.at<Vec3b>(Point(x, y)) = rgb;
        }
    imshow("Image", img);
    waitKey(0);
}

BOOST_PYTHON_MODULE(aruco)
{
    py::def("display", display);
}
It turned out to be painfully slow (a few seconds for a single frame), so I went googling and found a solution that should be much faster: use NumPy arrays, so the code would look something like this:
void display(py::object array)
{
    Mat img;
    // ... some magic here to convert NumPy array to Mat ...
    imshow("Image", img);
    waitKey(0);
}
However, I have no idea how to convert a NumPy array (which at the C++ level is just a generic Python object) to an OpenCV Mat. I would appreciate any help here.
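For reference, here is the direction I have sketched so far for that "magic" (untested, so take it with a grain of salt): as far as I understand from the NumPy docs, every array exposes a plain-dict __array_interface__ attribute describing its buffer, so the C++ side can read the data address and shape from it and wrap them in a Mat header, without copying pixels and without even including the NumPy headers. This assumes the caller passes a C-contiguous uint8 array of shape (height, width, 3):

#include <cstdint>
#include <stdexcept>
#include <string>
#include <boost/python.hpp>
#include <opencv2/opencv.hpp>

using namespace cv;
namespace py = boost::python;

void display(py::object array)
{
    // NumPy's documented __array_interface__ is an ordinary dict
    // describing the array's underlying buffer.
    py::dict iface = py::extract<py::dict>(array.attr("__array_interface__"));
    py::tuple shape = py::extract<py::tuple>(iface["shape"]);
    std::string typestr = py::extract<std::string>(iface["typestr"]);
    if (py::len(shape) != 3 || py::extract<int>(shape[2]) != 3 || typestr != "|u1")
        throw std::runtime_error("expected a uint8 HxWx3 array");

    // 'data' is an (address, read_only) pair. Wrapping the address in a
    // Mat header makes img a zero-copy view into the NumPy buffer, so it
    // is only valid while the Python array stays alive.
    uintptr_t addr = py::extract<uintptr_t>(iface["data"][0]);
    Mat img(py::extract<int>(shape[0]), py::extract<int>(shape[1]),
            CV_8UC3, reinterpret_cast<void*>(addr));

    imshow("Image", img);
    waitKey(0);
}

On the Python side I would then force the layout with numpy.ascontiguousarray(arr, dtype=numpy.uint8) before calling aruco.display. But I haven't verified any of this, and I don't know whether it is the idiomatic way.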
Alternatively, maybe NumPy is not really needed, and I could just pass the QImage Python object directly to the C++ layer? Or maybe there is a different approach to this problem entirely? Any advice is appreciated!
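In case the QImage route is viable, this is roughly the C++-side mapping I would expect, sketched under two assumptions: qimage_to_mat is just a name I made up, and the QImage has somehow already crossed the Python/C++ boundary (which, as far as I can tell, needs extra glue such as sip, and that is exactly the part I am unsure about):

#include <QImage>
#include <opencv2/opencv.hpp>

// Hypothetical helper: copy a QImage into a BGR cv::Mat.
cv::Mat qimage_to_mat(const QImage& src)
{
    // Format_RGB32 stores each pixel as 0xffRRGGBB, which in memory on a
    // little-endian machine is B, G, R, A -- i.e. it matches CV_8UC4.
    QImage img = src.convertToFormat(QImage::Format_RGB32);

    // Wrap the QImage buffer in a Mat header (no copy yet), honouring the
    // per-row stride reported by bytesPerLine().
    cv::Mat view(img.height(), img.width(), CV_8UC4,
                 const_cast<uchar*>(img.constBits()),
                 static_cast<size_t>(img.bytesPerLine()));

    // Drop the alpha channel; cvtColor allocates a fresh buffer, so the
    // result stays valid after the temporary QImage goes away.
    cv::Mat bgr;
    cv::cvtColor(view, bgr, cv::COLOR_BGRA2BGR);
    return bgr;
}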