I've tried to implement a continuous SpeechRecognition mechanism. When I start speech recognition, I get the following messages in logcat:
06-05 12:22:32.892 11753-11753/com.aaa.bbb D/SpeechManager: startSpeechRecognition:
06-05 12:22:33.022 11753-11753/com.aaa.bbb D/SpeechManager: onError: Error 7
06-05 12:22:33.352 11753-11753/com.aaa.bbb D/SpeechManager: onReadyForSpeech:
06-05 12:22:33.792 11753-11753/com.aaa.bbb D/SpeechManager: onBeginningOfSpeech: Beginning
06-05 12:22:34.492 11753-11753/com.aaa.bbb D/SpeechManager: onEndOfSpeech: Ending
06-05 12:22:34.612 11753-11753/com.aaa.bbb D/SpeechManager: onError: Error 7
Error 7 means ERROR_NO_MATCH. As you can see, it is reported almost immediately after startListening(), with the first one arriving even before onReadyForSpeech fires. Isn't that improper behavior?
Here are the full logs between startSpeechRecognition and the first error 7:
06-05 12:22:32.892 11753-11753/com.aaa.bbb D/SpeechManager: startSpeechRecognition:
06-05 12:22:32.932 4600-4600/? I/GRecognitionServiceImpl: #startListening [en-US]
--------- beginning of system
06-05 12:22:32.932 3510-7335/? V/AlarmManager: remove PendingIntent] PendingIntent{6307291: PendingIntentRecord{2af25f6 com.google.android.googlequicksearchbox startService}}
06-05 12:22:32.932 4600-4600/? W/LocationOracle: Best location was null
06-05 12:22:32.932 3510-4511/? D/AudioService: getStreamVolume 3 index 90
06-05 12:22:32.942 3510-7335/? D/SensorService: SensorEventConnection::SocketBufferSize, SystemSocketBufferSize - 102400, 2097152
06-05 12:22:32.942 3510-7360/? D/Sensors: requested delay = 66667000, modified delay = 0
06-05 12:22:32.942 3510-7360/? I/Sensors: Proximity old sensor_state 16384, new sensor_state : 16512 en : 1
06-05 12:22:32.952 4600-4600/? D/SensorManager: registerListener :: 5, TMD4903 Proximity Sensor, 66667, 0,
06-05 12:22:32.952 4600-11932/? D/SensorManager: Proximity, val = 8.0 [far]
06-05 12:22:32.952 3510-5478/? I/Sensors: Acc old sensor_state 16512, new sensor_state : 16513 en : 1
06-05 12:22:32.952 3510-4705/? I/Sensors: Mag old sensor_state 16513, new sensor_state : 16529 en : 1
06-05 12:22:32.952 3510-4037/? I/AppOps: sendInfoToFLP, code=41 , uid=10068 , packageName=com.google.android.googlequicksearchbox , type=startOp
06-05 12:22:32.962 3510-4511/? D/SensorService: GravitySensor2 setDelay ns = 66667000 mindelay = 66667000
06-05 12:22:32.962 3510-4511/? I/Sensors: RotationVectorSensor old sensor_state 16529, new sensor_state : 147601 en : 1
06-05 12:22:32.972 3510-3617/? V/BroadcastQueue: [background] Process cur broadcast BroadcastRecord{f9fab82 u0 com.google.android.apps.gsa.search.core.location.GMS_CORE_LOCATION qIdx=4}, state= (APP_RECEIVE) DELIVERED for app ProcessRecord{cb66323 4600:com.google.android.googlequicksearchbox:search/u0a68}
06-05 12:22:32.972 3510-4040/? D/NetworkPolicy: isUidForegroundLocked: 10068, mScreenOn: true, uidstate: 2, mProxSensorScreenOff: false
06-05 12:22:32.982 3510-7360/? D/AudioService: getStreamVolume 3 index 90
06-05 12:22:32.982 3510-3971/? I/Sensors: ProximitySensor - 8(cm)
06-05 12:22:32.992 4600-11315/? I/MicrophoneInputStream: mic_starting com.google.android.apps.gsa.speech.audio.ah@ef02224
06-05 12:22:32.992 3140-3989/? I/APM::AudioPolicyManager: getInputForAttr() source 6, samplingRate 16000, format 1, channelMask 10,session 84, flags 0
06-05 12:22:32.992 3140-3989/? V/audio_hw_primary: adev_open_input_stream: request sample_rate:16000
06-05 12:22:32.992 3140-3989/? V/audio_hw_primary: in->requested_rate:16000, pcm_config_in.rate:48000 in->config.channels=2
06-05 12:22:32.992 3140-3989/? D/audio_hw_primary: adev_open_input_stream: call echoReference_init(12)
06-05 12:22:32.992 3140-3989/? V/echo_reference_processing: echoReference_init +
06-05 12:22:32.992 3140-3989/? I/audio_hw_primary: adev_open_input_stream: input is null, set new input stream
06-05 12:22:32.992 4600-11932/? D/SensorManager: Proximity, val = 8.0 [far]
06-05 12:22:32.992 3510-3555/? I/MediaFocusControl: AudioFocus requestAudioFocus() from android.media.AudioManager$8c7dfbdcom.google.android.apps.gsa.speech.audio.c.a$1$c7409b2 req=4flags=0x0
06-05 12:22:32.992 3140-11937/? I/AudioFlinger: AudioFlinger's thread 0xecac0000 ready to run
06-05 12:22:33.012 4600-11317/? W/CronetAsyncHttpEngine: Upload request without a content type.
06-05 12:22:33.012 4600-12335/? I/FavoriteContactNamesSup: get()
06-05 12:22:33.012 4600-12335/? I/FavoriteContactNamesSup: get() : Execute directly (BG thread)
06-05 12:22:33.012 4600-12335/? I/FavoriteContactNamesSup: get()
06-05 12:22:33.012 4600-12335/? I/FavoriteContactNamesSup: get() : Execute directly (BG thread)
06-05 12:22:33.012 4600-12335/? I/FavoriteContactNamesSup: get()
06-05 12:22:33.012 3510-4533/? D/BatteryService: !@BatteryListener : batteryPropertiesChanged!
06-05 12:22:33.012 4600-12335/? I/FavoriteContactNamesSup: get() : Execute directly (BG thread)
06-05 12:22:33.012 3510-4533/? D/BatteryService: level:80, scale:100, status:2, health:2, present:true, voltage: 4093, temperature: 337, technology: Li-ion, AC powered:false, USB powered:true, POGO powered:false, Wireless powered:false, icon:17303446, invalid charger:0, maxChargingCurrent:0
06-05 12:22:33.012 3510-4533/? D/BatteryService: online:4, current avg:48, charge type:1, power sharing:false, high voltage charger:false, capacity:280000, batterySWSelfDischarging:false, current_now:240
06-05 12:22:33.012 3510-3510/? D/BatteryService: Sending ACTION_BATTERY_CHANGED.
06-05 12:22:33.022 11753-11753/com.aaa.bbb D/SpeechManager: onError: Error 7
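For reference, here is how the numeric error codes in these logs map to the SpeechRecognizer error constants, as a minimal plain-Java helper. The numeric values are copied from the Android SpeechRecognizer documentation rather than imported from the SDK, so this compiles off-device; the isRetryable split is my own suggestion for a continuous-listening loop, not something the API defines:

```java
// Plain-Java helper mapping SpeechRecognizer error codes to names.
// Values mirror the documented android.speech.SpeechRecognizer constants.
class SpeechErrors {
    static final int ERROR_NETWORK_TIMEOUT = 1;
    static final int ERROR_NETWORK = 2;
    static final int ERROR_AUDIO = 3;
    static final int ERROR_SERVER = 4;
    static final int ERROR_CLIENT = 5;
    static final int ERROR_SPEECH_TIMEOUT = 6;
    static final int ERROR_NO_MATCH = 7;
    static final int ERROR_RECOGNIZER_BUSY = 8;
    static final int ERROR_INSUFFICIENT_PERMISSIONS = 9;

    static String name(int code) {
        switch (code) {
            case ERROR_NETWORK_TIMEOUT:         return "ERROR_NETWORK_TIMEOUT";
            case ERROR_NETWORK:                 return "ERROR_NETWORK";
            case ERROR_AUDIO:                   return "ERROR_AUDIO";
            case ERROR_SERVER:                  return "ERROR_SERVER";
            case ERROR_CLIENT:                  return "ERROR_CLIENT";
            case ERROR_SPEECH_TIMEOUT:          return "ERROR_SPEECH_TIMEOUT";
            case ERROR_NO_MATCH:                return "ERROR_NO_MATCH";
            case ERROR_RECOGNIZER_BUSY:         return "ERROR_RECOGNIZER_BUSY";
            case ERROR_INSUFFICIENT_PERMISSIONS: return "ERROR_INSUFFICIENT_PERMISSIONS";
            default:                            return "UNKNOWN(" + code + ")";
        }
    }

    // Suggested policy: transient errors are worth retrying in a
    // continuous-listening loop; client/permission errors are not.
    static boolean isRetryable(int code) {
        return code == ERROR_NO_MATCH
                || code == ERROR_SPEECH_TIMEOUT
                || code == ERROR_RECOGNIZER_BUSY;
    }
}
```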
And here's my code:
public class SpeechManager {
    private static final String TAG = "SpeechManager";

    private final MainActivity mActivity;
    private final SpeechRecognizer mSpeechRecognizer;
    private final Intent mRecognitionIntent;
    private final Handler mHandler;
    private boolean mTurnedOn = false;

    public SpeechManager(@NonNull MainActivity activity) {
        mActivity = activity;
        mSpeechRecognizer = SpeechRecognizer.createSpeechRecognizer(mActivity.getApplicationContext());
        mSpeechRecognizer.setRecognitionListener(new MySpeechRecognizer());
        mHandler = new Handler(Looper.getMainLooper());
        mRecognitionIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        // mRecognitionIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        mRecognitionIntent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, false);
        mRecognitionIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "en-US");
    }

    public void startSpeechRecognition() {
        Log.d(TAG, "startSpeechRecognition: ");
        mTurnedOn = true;
        mSpeechRecognizer.startListening(mRecognitionIntent);
    }

    public void stopSpeechRecognition() {
        Log.d(TAG, "stopSpeechRecognition: ");
        if (mTurnedOn) {
            mTurnedOn = false;
            mSpeechRecognizer.stopListening();
        }
    }

    public void destroy() {
        Log.d(TAG, "destroy: ");
        mSpeechRecognizer.destroy();
    }

    private class MySpeechRecognizer implements RecognitionListener {
        @Override
        public void onReadyForSpeech(Bundle params) {
            Log.d(TAG, "onReadyForSpeech: ");
        }

        @Override
        public void onBeginningOfSpeech() {
            Log.d(TAG, "onBeginningOfSpeech: Beginning");
        }

        @Override
        public void onRmsChanged(float rmsdB) {
        }

        @Override
        public void onBufferReceived(byte[] buffer) {
            Log.d(TAG, "onBufferReceived: ");
        }

        @Override
        public void onEndOfSpeech() {
            Log.d(TAG, "onEndOfSpeech: Ending");
        }

        @Override
        public void onError(int error) {
            Log.d(TAG, "onError: Error " + error);
            if (error == SpeechRecognizer.ERROR_NETWORK || error == SpeechRecognizer.ERROR_CLIENT) {
                mTurnedOn = false;
                return;
            }
            // For recoverable errors, restart listening after a short delay.
            if (mTurnedOn) {
                mHandler.postDelayed(new Runnable() {
                    @Override
                    public void run() {
                        // mSpeechRecognizer.cancel();
                        startSpeechRecognition();
                    }
                }, 100);
            }
        }

        @Override
        public void onResults(Bundle results) {
            Log.d(TAG, "onResults: ");
            ArrayList<String> matches = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
            if (matches != null && matches.size() > 0) {
                for (String str : matches) {
                    Log.d(TAG, "onResults: " + str);
                    if (str.equalsIgnoreCase(mActivity.getString(R.string.turn_off_recognition))) {
                        FlashManager.getInstance().turnOff();
                        mTurnedOn = false;
                        return;
                    }
                }
            }
            // Restart listening to keep recognition continuous.
            mHandler.postDelayed(new Runnable() {
                @Override
                public void run() {
                    startSpeechRecognition();
                }
            }, 100);
        }

        @Override
        public void onPartialResults(Bundle partialResults) {
            Log.d(TAG, "onPartialResults: ");
        }

        @Override
        public void onEvent(int eventType, Bundle params) {
            Log.d(TAG, "onEvent: " + eventType);
        }
    }
}
My device is a Samsung Note 5. How can I fix this?
Yes, you are correct. Google speech recognition is linked to their "Google App". Since version 5.10 (Feb. 2016) things have started to go wrong, and with the recent version 6 it is a complete mess! See my thread here in the Basic4Android forum: https://www.b4x.com/android/forum/threads/google-speech-recognition-sr-not-working-properly-anymore.68871/
This is a known bug, which I filed a report on. You can reproduce the issue using this simple gist.
The only way around it is to recreate the SpeechRecognizer object every time (see edit). This causes other issues, as mentioned in the gist, but none that will cause a problem for your app.
Google will eventually find a way to prevent continuous listening, as it's not what the API was designed for. You're better off looking at PocketSphinx as a long-term option.
EDIT 22.06.16 - With the most recent Google release, the behaviour has changed for the worse. A new solution is linked from the gist, which subclasses the RecognitionListener to only react to 'genuine' callbacks.
EDIT 01.07.16 - See this question for another new bug.
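The "recreate the SpeechRecognizer every time" workaround can be sketched in plain Java. The Recognizer interface and factory below are hypothetical stand-ins so the restart logic can be shown (and tested) off-device; on Android, the factory would call SpeechRecognizer.createSpeechRecognizer(context) plus setRecognitionListener(...), and destroy() would call SpeechRecognizer.destroy():

```java
import java.util.function.Supplier;

// Hypothetical abstraction over android.speech.SpeechRecognizer.
interface Recognizer {
    void startListening();
    void destroy();
}

class ContinuousRecognition {
    private final Supplier<Recognizer> factory;
    private Recognizer current;

    ContinuousRecognition(Supplier<Recognizer> factory) {
        this.factory = factory;
    }

    // Instead of reusing one instance (which triggers the spurious
    // ERROR_NO_MATCH described above), tear the old recognizer down
    // and build a fresh one before every listening session.
    void restart() {
        if (current != null) {
            current.destroy();
        }
        current = factory.get();
        current.startListening();
    }
}
```

In the question's code, this would mean calling restart() from onResults/onError (via the Handler) instead of reusing the mSpeechRecognizer field created once in the constructor.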