I'm trying to use the Microsoft Azure OCR API found here for a React Native app.
I can get the API to work fine on local images with Postman, but for some reason I get an "Unsupported Media Type" error when I try using fetch within my app.
I originally called the API with this code:
    _analyzeImage = () => {
      const { image } = this.state;
      const url = 'https://westcentralus.api.cognitive.microsoft.com/vision/v1.0/ocr';
      const data = new FormData();
      data.append(image);
      fetch(url, {
        method: 'post',
        body: data,
        headers: {
          'Ocp-Apim-Subscription-Key': '***********************',
        }
      }).then(res => {
        console.log(res)
      });
    }
where image is the picked image object held in state. Running this in the Xcode simulator fails, and the response is:
    {
        "code": "UnsupportedMediaType",
        "requestId": "6ff43374-e5f9-4992-9657-82ec1e95b238",
        "message": "Supported media types: application/octet-stream, multipart/form-data or application/json"
    }
Weirdly, the content-type seemed to be text/plain. So, even though I thought the FormData object was supposed to take care of the content type, I tried adding 'content-type': 'multipart/form-data', but got the same response (although the content-type header in the network inspector did change to multipart/form-data).
I used create-react-native-app to set up the project, and I want it to work on both iOS and Android. If anyone has any ideas - or any other ways to do OCR, if there's a better native solution - I'd appreciate it!
As stated in the doc page you link to, if you send application/json, your payload must look like this:
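    {"url": "http://example.com/images/test.jpg"}

where the URL is a placeholder for an image the Azure service can reach over the internet.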
If application/octet-stream, the body must be the raw binary data of the image itself; if multipart/form-data, the image file must go in as a named form part. Right now you're not sending anything that matches those expectations.
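Since your code already uses FormData, here's a hedged sketch of the multipart variant in React Native. Note that FormData.append takes a field name plus a value, so data.append(image) on its own doesn't produce a valid file part; React Native accepts a { uri, type, name } file descriptor as the value. The field name, MIME type, and file name below are assumptions:

    const url = 'https://westcentralus.api.cognitive.microsoft.com/vision/v1.0/ocr';

    const data = new FormData();
    // React Native serializes this descriptor as a file part in the multipart body.
    data.append('file', {
      uri: image.uri,      // local URI from your picker
      type: 'image/jpeg',  // assumed MIME type of the picked image
      name: 'image.jpg',   // assumed file name for the part
    });

    fetch(url, {
      method: 'POST',
      headers: {
        // Leave Content-Type unset so fetch can add the multipart boundary itself.
        'Ocp-Apim-Subscription-Key': '***********************',
      },
      body: data,
    })
      .then(res => res.json())
      .then(json => console.log(json));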
Example POST

Pass the image by URL:
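Here's a minimal sketch with fetch; the endpoint and subscription-key header are copied from your snippet, and the image URL is a placeholder:

    const url = 'https://westcentralus.api.cognitive.microsoft.com/vision/v1.0/ocr';

    fetch(url, {
      method: 'POST',
      headers: {
        // Declare the body as JSON, one of the three supported media types.
        'Content-Type': 'application/json',
        'Ocp-Apim-Subscription-Key': '***********************',
      },
      // The JSON contract expects a publicly reachable image URL.
      body: JSON.stringify({ url: 'http://example.com/images/test.jpg' }),
    })
      .then(res => res.json())
      .then(json => console.log(json))
      .catch(err => console.error(err));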
or pass the image by raw bytes:
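This variant assumes a React Native version whose fetch can read a local file URI into a Blob; if yours can't, a library such as rn-fetch-blob can supply the raw bytes instead. image.uri is the local path from your picker:

    const url = 'https://westcentralus.api.cognitive.microsoft.com/vision/v1.0/ocr';

    // Read the picked image into a Blob, then POST the raw bytes.
    fetch(image.uri)
      .then(res => res.blob())
      .then(blob =>
        fetch(url, {
          method: 'POST',
          headers: {
            'Content-Type': 'application/octet-stream',
            'Ocp-Apim-Subscription-Key': '***********************',
          },
          body: blob, // raw image bytes, matching application/octet-stream
        })
      )
      .then(res => res.json())
      .then(json => console.log(json))
      .catch(err => console.error(err));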