I'm trying to build an app that takes a streamed audio input (e.g., a line-in or microphone) and does Speech-to-Text using IBM Bluemix (Watson).
I slightly modified the example Java code found here. The example sends a WAV, but I'm sending a FLAC instead; this should be irrelevant.
The results are bad, very bad. This is what I get when using the Java Websockets code:
{
  "result_index": 0,
  "results": [
    {
      "final": true,
      "alternatives": [
        {
          "transcript": "it was six weeks ago today the terror ",
          "confidence": 0.92
        }
      ]
    }
  ]
}
Now, compare the above results with the ones below. These are the results when sending the same thing but instead using cURL (HTTP POST):
{
  "results": [
    {
      "alternatives": [
        {
          "confidence": 0.945,
          "transcript": "it was six weeks ago today the terrorists attacked the U. S. consulate in Benghazi Libya now we've obtained email alerts that were put out by the state department as the attack unfolded as you know four Americans were killed including ambassador Christopher Stevens "
        }
      ],
      "final": true
    },
    {
      "alternatives": [
        {
          "confidence": 0.942,
          "transcript": "sharyl Attkisson has our story "
        }
      ],
      "final": true
    }
  ],
  "result_index": 0
}
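For reference, an HTTP POST along these lines would produce that kind of result (the endpoint is the one documented for the service at the time; the credentials and file path are placeholders):

```shell
# Send the whole FLAC file in one POST; "continuous=true" keeps the
# service transcribing across pauses instead of stopping at the first one.
curl -X POST -u "<username>:<password>" \
  --header "Content-Type: audio/flac" \
  --data-binary @4.flac \
  "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize?continuous=true"
```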
That's an almost flawless result.
Why the difference when using WebSockets?
The issue was fixed in the 3.0.0-RC1 version. You can get the new jar via Maven, Gradle, or as a direct JAR download (jar-with-dependencies, ~1.4 MB).
Here is an example of how to recognize a FLAC audio file using WebSockets.
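A sketch of such a WebSocket recognition call, assuming the 3.0 SDK's `SpeechToText` service class and builder-style `RecognizeOptions` (the `<username>`/`<password>` credentials and the file path are placeholders; exact method names may differ slightly between release candidates):

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import com.ibm.watson.developer_cloud.http.HttpMediaType;
import com.ibm.watson.developer_cloud.speech_to_text.v1.SpeechToText;
import com.ibm.watson.developer_cloud.speech_to_text.v1.model.RecognizeOptions;
import com.ibm.watson.developer_cloud.speech_to_text.v1.model.SpeechResults;
import com.ibm.watson.developer_cloud.speech_to_text.v1.websocket.BaseRecognizeCallback;

public class RecognizeUsingWebSocketsExample {
  private static final CountDownLatch lock = new CountDownLatch(1);

  public static void main(String[] args) throws Exception {
    SpeechToText service = new SpeechToText();
    service.setUsernameAndPassword("<username>", "<password>");

    InputStream audio = new FileInputStream("4.flac");

    RecognizeOptions options = new RecognizeOptions.Builder()
        .continuous(true)                       // keep transcribing across pauses
        .interimResults(true)                   // stream partial hypotheses as they arrive
        .contentType(HttpMediaType.AUDIO_FLAC)  // tell the service the audio is FLAC
        .build();

    service.recognizeUsingWebSocket(audio, options, new BaseRecognizeCallback() {
      @Override
      public void onTranscription(SpeechResults speechResults) {
        System.out.println(speechResults);
      }

      @Override
      public void onDisconnected() {
        lock.countDown();  // release the main thread once the socket closes
      }
    });

    // Wait (up to one minute) for the final transcript before exiting.
    lock.await(1, TimeUnit.MINUTES);
  }
}
```

The `continuous(true)` option matters here: without it, recognition stops at the first pause in the audio, which matches the truncated transcript seen above.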
FLAC file to test: https://s3.amazonaws.com/mozart-company/tmp/4.flac
NOTE: 3.0.0-RC1 is a release candidate. We will do a production release next week (3.0.1).