I am looking for a way to convert English text (a string) to speech (sound) in C#. Does anyone know of a way, or some open-source library, that can help me with this task?
Google recently published Google Cloud Text-to-Speech.
.NET Client version of Google.Cloud.TextToSpeech can be found here: https://github.com/jhabjan/Google.Cloud.TextToSpeech.V1
NuGet:
Install-Package JH.Google.Cloud.TextToSpeech.V1
Here is a short example of how to use the client:
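The sketch below is written against the shape of Google's official Google.Cloud.TextToSpeech.V1 client, which this package mirrors; the credential setup for the JH package may differ slightly, so treat the exact calls as an assumption and check the repository's README. It synthesizes a string and writes the result to an MP3 file:

    using System.IO;
    using Google.Cloud.TextToSpeech.V1;

    class GoogleTtsExample
    {
        static void Main()
        {
            // Create the client (assumes credentials are configured,
            // e.g. via the GOOGLE_APPLICATION_CREDENTIALS environment variable).
            TextToSpeechClient client = TextToSpeechClient.Create();

            // The text to synthesize.
            SynthesisInput input = new SynthesisInput
            {
                Text = "Hello from Google Cloud Text-to-Speech."
            };

            // Ask for an English voice.
            VoiceSelectionParams voice = new VoiceSelectionParams
            {
                LanguageCode = "en-US",
                SsmlGender = SsmlVoiceGender.Female
            };

            // Request MP3 output.
            AudioConfig audioConfig = new AudioConfig
            {
                AudioEncoding = AudioEncoding.Mp3
            };

            SynthesizeSpeechResponse response =
                client.SynthesizeSpeech(input, voice, audioConfig);

            // Write the returned audio bytes to disk.
            using (Stream output = File.Create("output.mp3"))
            {
                response.AudioContent.WriteTo(output);
            }
        }
    }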
You can do this with the help of the System.Speech.Synthesis namespace. For that, you first need to add a reference to System.Speech.dll.
Try this:
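A minimal sketch, assuming the System.Speech.dll reference is in place; it speaks a string through the default audio device:

    using System.Speech.Synthesis;

    class Program
    {
        static void Main()
        {
            // Dispose the synthesizer when done with it.
            using (var synthesizer = new SpeechSynthesizer())
            {
                // Send the audio to the default sound device.
                synthesizer.SetOutputToDefaultAudioDevice();

                // Speak the text synchronously.
                synthesizer.Speak("Hello, this is a test of text to speech in C#.");
            }
        }
    }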
You can use the built-in .NET library System.Speech (specifically the System.Speech.Synthesis namespace).
According to Microsoft:
A speech synthesizer takes text as input and produces an audio stream as output. Speech synthesis is also referred to as text-to-speech (TTS).
A synthesizer must perform substantial analysis and processing to accurately convert a string of characters into an audio stream that sounds just as the words would be spoken. The easiest way to imagine how this works is to picture the front end and back end of a two-part system.
Text Analysis
The front end specializes in the analysis of text using natural language rules. It analyzes a string of characters to determine where the words are (which is easy to do in English, but not as easy in languages such as Chinese and Japanese). This front end also figures out grammatical details like functions and parts of speech. For instance, which words are proper nouns, numbers, and so forth; where sentences begin and end; whether a phrase is a question or a statement; and whether a statement is past, present, or future tense.
All of these elements are critical to the selection of appropriate pronunciations and intonations for words, phrases, and sentences. Consider that in English, a question usually ends with a rising pitch, or that the word "read" is pronounced very differently depending on its tense. Clearly, understanding how a word or phrase is being used is a critical aspect of interpreting text into sound. To further complicate matters, the rules are slightly different for each language. So, as you can imagine, the front end must do some very sophisticated analysis.
Sound Generation
The back end has quite a different task. It takes the analysis done by the front end and, through some non-trivial analysis of its own, generates the appropriate sounds for the input text. Older synthesizers (and today's synthesizers with the smallest footprints) generate the individual sounds algorithmically, resulting in a very robotic sound. Modern synthesizers, such as the one in Windows Vista and Windows 7, use a database of sound segments built from hours and hours of recorded speech. The effectiveness of the back end depends on how good it is at selecting the appropriate sound segments for any given input and smoothly splicing them together.
Ready to Use
The text-to-speech capabilities described above are built into the Windows Vista and Windows 7 operating systems, allowing applications to easily use this technology. This eliminates the need to create your own speech engines. You can invoke all of this processing with a single function call. See Speak the Contents of a String.
Try this code:
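A minimal sketch, assuming System.Speech.dll is referenced; it speaks a string aloud and then redirects output to a WAV file:

    using System.Speech.Synthesis;

    class Program
    {
        static void Main()
        {
            using (var synth = new SpeechSynthesizer())
            {
                // Play through the speakers.
                synth.SetOutputToDefaultAudioDevice();
                synth.Speak("This text is spoken aloud.");

                // Redirect output to a WAV file instead.
                synth.SetOutputToWaveFile("speech.wav");
                synth.Speak("This text is saved to speech.wav.");
            }
        }
    }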
This functionality exists in the main Class Library in the System.Speech namespace. Particularly, look in System.Speech.Synthesis.
Note that you will likely need to add a reference to System.Speech.dll.
As with all MSDN documentation, there are code samples to use. The following is from the System.Speech.Synthesis.SpeechSynthesizer class.
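Reconstructed here as a sketch rather than a verbatim copy, the documented SpeechSynthesizer sample is roughly along these lines:

    using System;
    using System.Speech.Synthesis;

    namespace SampleSynthesis
    {
        class Program
        {
            static void Main(string[] args)
            {
                // Initialize a new instance of the SpeechSynthesizer.
                SpeechSynthesizer synth = new SpeechSynthesizer();

                // Configure the audio output.
                synth.SetOutputToDefaultAudioDevice();

                // Speak a string synchronously.
                synth.Speak("This example demonstrates a basic use of Speech Synthesizer.");

                Console.WriteLine();
                Console.WriteLine("Press any key to exit...");
                Console.ReadKey();
            }
        }
    }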
You can do that using the System.Speech library. Take a look at this example:
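A small sketch, assuming System.Speech.dll is referenced; the Volume and Rate values are arbitrary:

    using System.Speech.Synthesis;

    class BasicExample
    {
        static void Main()
        {
            using (var synthesizer = new SpeechSynthesizer())
            {
                synthesizer.SetOutputToDefaultAudioDevice();

                // Volume is 0-100, Rate is -10 to 10.
                synthesizer.Volume = 100;
                synthesizer.Rate = 0;

                synthesizer.Speak("Hello world from System.Speech.");
            }
        }
    }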
Change VoiceGender
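For example, SelectVoiceByHints can pick an installed voice by gender (a sketch; which voices are actually available depends on the machine):

    using System.Speech.Synthesis;

    class GenderExample
    {
        static void Main()
        {
            using (var synthesizer = new SpeechSynthesizer())
            {
                // Ask for an installed female voice.
                synthesizer.SelectVoiceByHints(VoiceGender.Female);
                synthesizer.Speak("Speaking with a female voice, if one is installed.");
            }
        }
    }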
Change VoiceAge
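Similarly, an overload of SelectVoiceByHints accepts both a gender and an age hint (again a sketch; the closest installed voice is used):

    using System.Speech.Synthesis;

    class AgeExample
    {
        static void Main()
        {
            using (var synthesizer = new SpeechSynthesizer())
            {
                // Hint both gender and age.
                synthesizer.SelectVoiceByHints(VoiceGender.Male, VoiceAge.Adult);
                synthesizer.Speak("Speaking with an adult male voice, if one is installed.");
            }
        }
    }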