I created a C DLL from my C++ class, which uses OpenCV for image manipulation, and I want to use this DLL in my C# application. This is how I have currently implemented it:
#ifdef CDLL2_EXPORTS
#define CDLL2_API __declspec(dllexport)
#else
#define CDLL2_API __declspec(dllimport)
#endif
#include "../classification.h"
extern "C"
{
CDLL2_API void Classify_Image(unsigned char* img_pointer, unsigned int height, unsigned int width, char* out_result, int* length_of_out_result, int top_n_results = 2);
//...
}
C# related code:
DLL Import section:
//Dll import
[DllImport(@"CDll2.dll", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi)]
static extern void Classify_Image(IntPtr img, uint height, uint width, byte[] out_result, out int out_result_length, int top_n_results = 2);
The actual function sending the image to the DLL:
//...
//main code
private string Classify(int top_n)
{
byte[] res = new byte[200];
int len;
Bitmap img = new Bitmap(txtImagePath.Text);
BitmapData bmpData = img.LockBits(new Rectangle(0, 0, img.Width, img.Height),
ImageLockMode.ReadWrite,
PixelFormat.Format24bppRgb);
Classify_Image(bmpData.Scan0, (uint)bmpData.Height, (uint)bmpData.Width, res, out len, top_n);
img.UnlockBits(bmpData); //Remember to unlock!!!
//...
}
And the C++ code in the DLL:
CDLL2_API void Classify_Image(unsigned char* img_pointer, unsigned int height, unsigned int width,
char* out_result, int* length_of_out_result, int top_n_results)
{
auto classifier = reinterpret_cast<Classifier*>(GetHandle());
cv::Mat img = cv::Mat(height, width, CV_8UC3, (void*)img_pointer, cv::Mat::AUTO_STEP);
std::vector<Prediction> result = classifier->Classify(img, top_n_results);
//...
*length_of_out_result = ss.str().length();
}
This works perfectly with some images, but not with others. For example, when I imshow the image inside Classify_Image, right after it is created from the data sent by the C# application, I get images like this:
Problematic example:
Fine example:
Your initial issue is down to what is called the stride or pitch of an image buffer. For performance reasons, pixel rows can be memory-aligned; in your case the rows no longer line up because the row size in memory is not equal to the pixel row width in bytes.
The general case is that the stride is the pixel row width in bytes rounded up to the buffer's alignment boundary; in your case the Bitmap class states that the stride is rounded up to a four-byte boundary.
So if we look at the problematic image, it has a width of 1414 pixels and is an 8-bit-per-channel (24bpp) RGB bitmap, so if we do the maths:
1414 pixels * 3 bytes = 4242 bytes
Now divide by 4 bytes:
4242 / 4 = 1060.5
So we are left with half a 4-byte block, which gets rounded up:
0.5 * 4 bytes = 2 bytes padding
So the stride is in fact 4242 + 2 = 4244 bytes.
This stride needs to be passed through to the DLL so that the cv::Mat is built with the correct row step.
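A minimal sketch of the DLL side, assuming you add an extra stride parameter to the export (the name stride is my own; on the C# side you would extend the DllImport signature the same way and pass bmpData.Stride, cast to uint, as that argument):

CDLL2_API void Classify_Image(unsigned char* img_pointer, unsigned int height, unsigned int width,
                              unsigned int stride, // assumed new parameter: bytes per row, padding included
                              char* out_result, int* length_of_out_result, int top_n_results)
{
    auto classifier = reinterpret_cast<Classifier*>(GetHandle());
    // Use the caller-supplied stride as the Mat's step instead of AUTO_STEP,
    // so the padding bytes at the end of each row are skipped correctly.
    cv::Mat img = cv::Mat(height, width, CV_8UC3, (void*)img_pointer, stride);
    std::vector<Prediction> result = classifier->Classify(img, top_n_results);
    //...
}

With the correct step, OpenCV starts each row at the right offset and the skewed rows disappear.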
Looking at what you're doing, I'd instead pass the file contents as memory to your OpenCV DLL and call imdecode, which will sniff the file type. Additionally, you can pass the flag cv::IMREAD_GRAYSCALE, which will load the image and convert it to grayscale on the fly.
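A minimal sketch of that approach, assuming a hypothetical export (here called Classify_ImageFile) that receives the still-encoded file bytes, which the C# side could obtain with File.ReadAllBytes:

#include <opencv2/opencv.hpp>

// Hypothetical export: takes the raw, still-encoded file bytes instead of pixel data.
CDLL2_API void Classify_ImageFile(unsigned char* file_data, unsigned int file_length,
                                  char* out_result, int* length_of_out_result, int top_n_results)
{
    auto classifier = reinterpret_cast<Classifier*>(GetHandle());
    // Wrap the bytes without copying, then let imdecode sniff the file type.
    cv::Mat buffer(1, (int)file_length, CV_8UC1, (void*)file_data);
    cv::Mat img = cv::imdecode(buffer, cv::IMREAD_GRAYSCALE); // decode + convert to grayscale on the fly
    if (img.empty())
        return; // not a valid/supported image file
    std::vector<Prediction> result = classifier->Classify(img, top_n_results);
    //...
}

Because the DLL decodes the file itself, stride and pixel format are handled entirely inside OpenCV and the interop surface shrinks to a plain byte buffer plus its length.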