I'm trying to build a human-like dictionary (a database) of Persian speech, so I tried speaking and having C# produce what I said phonetically, but the problem is I can't find any event that gives me the phonetic content of what I spoke. For example, there is a SpeechRecognized
event, but it fires only after the speech has been recognized as words. Here is my code sample:
<pre>
<code>
using System.Speech.Recognition;

SpeechRecognizer rec = new SpeechRecognizer();

public Form1()
{
    InitializeComponent();
    // fires only after the engine has recognized a phrase
    rec.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(rec_SpeechRecognized);
    // fires every time speech is detected, but carries no recognized text or phonetic result
    rec.SpeechDetected += new EventHandler<SpeechDetectedEventArgs>(rec_SpeechDetected);
    rec.Enabled = true;
}
</code>
</pre>
Note:
I want C# to produce the phonetic value of what I say, not to recognize it as words.
I don't think System.Speech.Recognition will expose a phonetic interpretation of what you said. The Windows recognizer uses a language-specific model to try to match words in the specified language.
The speech engine in Windows 7 supports the following languages: Chinese (Simplified), Chinese (Traditional), French, German, Japanese, Spanish, UK English, and US English. See http://msdn.microsoft.com/en-us/goglobal/ee426904
The Microsoft server speech engine supports 26 languages. I don't believe Persian is supported. See http://www.microsoft.com/downloads/en/details.aspx?FamilyID=F704CD64-1DBF-47A7-BA49-27C5843A12D5
Perhaps using C++ and SAPI you can get at the underlying phonemes. If you search for "SAPI Phoneme Extraction" you may find something helpful. In particular, look at
Speech Recognition with SAPI: Custom Language Support through Phonemes, which describes building a custom grammar to try to extract phonemes from an alternate language.
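A rough sketch of that same idea is possible in managed code with the System.Speech SRGS classes: give each grammar token an explicit Pronunciation so the recognizer matches on phones rather than on its built-in English lexicon, then read back the Pronunciation of each recognized word. The phone strings below are placeholders, not a real Persian phone set, and the rule name and token texts are made up for illustration; which phone labels are accepted depends on the PhoneticAlphabet you choose and on the installed engine.
<pre>
<code>
using System;
using System.Globalization;
using System.Speech.Recognition;
using System.Speech.Recognition.SrgsGrammar;

class PhonemeGrammarSketch
{
    static void Main()
    {
        SrgsDocument doc = new SrgsDocument();
        doc.PhoneticAlphabet = SrgsPhoneticAlphabet.Ups; // or Sapi / Ipa

        // Tokens with explicit pronunciations; phone strings are placeholders,
        // adjust them to whatever the chosen alphabet/engine actually accepts.
        SrgsToken salaam = new SrgsToken("salaam") { Pronunciation = "S A L A M" };
        SrgsToken khodaa = new SrgsToken("khodaa") { Pronunciation = "H O D A" };

        SrgsOneOf choice = new SrgsOneOf(new SrgsItem(salaam), new SrgsItem(khodaa));
        SrgsRule rule = new SrgsRule("persianish", choice);
        doc.Rules.Add(rule);
        doc.Root = rule;

        using (SpeechRecognitionEngine engine =
                   new SpeechRecognitionEngine(new CultureInfo("en-US")))
        {
            engine.LoadGrammar(new Grammar(doc));
            engine.SetInputToDefaultAudioDevice();
            engine.SpeechRecognized += (s, e) =>
            {
                // RecognizedWordUnit.Pronunciation holds the phones the engine matched.
                foreach (RecognizedWordUnit w in e.Result.Words)
                    Console.WriteLine("{0} -> {1}", w.Text, w.Pronunciation);
            };
            engine.RecognizeAsync(RecognizeMode.Multiple);

            Console.WriteLine("Speak... press Enter to quit.");
            Console.ReadLine();
        }
    }
}
</code>
</pre>
This only approximates the article's approach: you still have to hand-map Persian syllables onto phones the US English engine knows, so accuracy will be limited.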
Other interesting references I came across are http://developer.valvesoftware.com/wiki/Phoneme_Tool and http://www.mail-archive.com/hlcoders@list.valvesoftware.com/msg19793.html