
how to create a blob in node.js to be used in a websocket

Posted 2019-09-05 15:09

Question:

I'm trying to use IBM's websocket implementation of their Speech to Text service, but I can't figure out how to send a .wav file over the connection. I know I need to transform it into a blob, but I'm not sure how. Right now I'm getting one of two errors:

You must pass a Node Buffer object to WebSocketConnection#sendBytes()

-or-

Could not read a WAV header from a stream of 0 bytes

...depending on what I try to pass to the service. Note that I am sending the start message correctly and am reaching the listening state.
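For context, here is roughly what I'm attempting (a minimal sketch; `connection` is assumed to be an already-open WebSocketConnection from the `websocket` npm package, with the start message already sent, and the file name is made up):

var fs = require('fs');

// Read the whole .wav into a Node Buffer and send it as one binary frame.
fs.readFile('audio-to-recognize.wav', function(err, buffer) {
  if (err) throw err;
  // sendBytes() requires a Node Buffer -- passing a string, a stream,
  // or a browser-style Blob triggers the first error above.
  connection.sendBytes(buffer);
});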

Answer 1:

Starting with v1.0 (still in beta), the watson-developer-cloud npm module has support for websockets.

npm install watson-developer-cloud@1.0.0-beta.2

Recognizing a .wav file:

var watson = require('watson-developer-cloud');
var fs = require('fs');

var speech_to_text = watson.speech_to_text({
  username: 'INSERT YOUR USERNAME FOR THE SERVICE HERE',
  password: 'INSERT YOUR PASSWORD FOR THE SERVICE HERE',
  version: 'v1',
});


// create the stream
var recognizeStream = speech_to_text.createRecognizeStream({ content_type: 'audio/wav' });

// pipe in some audio
fs.createReadStream('audio-to-recognize.wav').pipe(recognizeStream);

// and pipe out the transcription
recognizeStream.pipe(fs.createWriteStream('transcription.txt'));


// listen for 'data' events for just the final text
// listen for 'results' events to get the raw JSON with interim results, timings, etc.

recognizeStream.setEncoding('utf8'); // to get strings instead of Buffers from `data` events

['data', 'results', 'error', 'connection-close'].forEach(function(eventName) {
  recognizeStream.on(eventName, console.log.bind(console, eventName + ' event: '));
});
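If you only want the final transcript out of the raw JSON, something like the following should work (a sketch; the field names assume the v1 Speech to Text response format, i.e. results[].final and results[].alternatives[].transcript):

recognizeStream.on('results', function(data) {
  // Interim results arrive with final === false; skip those and only
  // print the top alternative of each finalized result.
  var result = data.results && data.results[0];
  if (result && result.final) {
    console.log('final transcript:', result.alternatives[0].transcript);
  }
});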

See more examples in the watson-developer-cloud Node SDK repository on GitHub.