I have a lot of code based on OpenCV, but there are many ways in which the Arm Compute Library improves performance, so I'd like to integrate some Arm Compute Library code into my project. Has anyone tried converting between the two corresponding Image structures? If so, what did you do? Alternatively, is there a way to share a pointer to the underlying data buffer, setting strides and flags appropriately, so that no image data needs to be copied?
I was able to configure an arm_compute::Image matching my cv::Mat's properties, allocate the memory, and point it at the data portion of my cv::Mat.
This way, I can process my image efficiently using arm_compute while keeping the OpenCV infrastructure I had for the rest of my project.
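A minimal sketch of the idea, assuming a recent Arm Compute Library version where `TensorAllocator::import_memory(void*)` is available (the exact import API has changed across releases, so check your version's headers). The helper function name `wrap_mat` is mine, not from either library:

```cpp
#include <arm_compute/runtime/Tensor.h>
#include <opencv2/core.hpp>

// Wrap an existing cv::Mat buffer in an arm_compute::Image without copying.
// Assumes a continuous, single-channel 8-bit image (CV_8UC1) so that the
// cv::Mat row stride matches what arm_compute expects with zero padding.
arm_compute::Image wrap_mat(cv::Mat &mat)
{
    CV_Assert(mat.type() == CV_8UC1 && mat.isContinuous());

    arm_compute::Image image;
    // Describe the tensor: width, height, and U8 format, matching the Mat.
    image.allocator()->init(
        arm_compute::TensorInfo(mat.cols, mat.rows, arm_compute::Format::U8));
    // Share the Mat's buffer instead of allocating; the cv::Mat must
    // outlive the arm_compute::Image, since no copy is made.
    image.allocator()->import_memory(mat.data);
    return image;
}
```

If the Mat has row padding (`mat.step != mat.cols` for CV_8UC1), the strides no longer line up and you would either need to make the Mat continuous first (`mat.clone()`) or configure matching padding on the tensor.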
Update for ACL 18.05 or newer
In ACL 18.05, importing external memory requires implementing the IMemoryRegion interface (see IMemoryRegion.h).
I have created a gist for that: link
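For the 18.05-era API, a sketch of what such an IMemoryRegion implementation might look like. The interface's exact virtual methods vary between ACL releases (e.g. `extract_subregion` appeared in later versions), so treat this as an outline to check against your version's IMemoryRegion.h rather than a drop-in implementation; `CvMemoryRegion` is a name I made up:

```cpp
#include <arm_compute/runtime/IMemoryRegion.h>
#include <cstdint>
#include <memory>

// A memory region that wraps an externally owned buffer (e.g. cv::Mat::data)
// without taking ownership, so arm_compute can use it in place.
class CvMemoryRegion : public arm_compute::IMemoryRegion
{
public:
    CvMemoryRegion(void *ptr, size_t size)
        : IMemoryRegion(size), _ptr(ptr)
    {
    }

    void *buffer() override { return _ptr; }
    const void *buffer() const override { return _ptr; }

    // Present in later ACL versions; returns a view into part of the buffer.
    std::unique_ptr<arm_compute::IMemoryRegion>
    extract_subregion(size_t offset, size_t size) override
    {
        return std::make_unique<CvMemoryRegion>(
            static_cast<uint8_t *>(_ptr) + offset, size);
    }

private:
    void *_ptr; // not owned; the cv::Mat must outlive this region
};
```

The key point is the same as before: the region only borrows the pointer, so the cv::Mat owning the buffer has to stay alive for as long as arm_compute uses the image.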