Identifying segments in which a specific person is speaking?

Asked 2019-01-14 04:21

Does anyone know a (preferably C# .Net) library that would allow me to locate, in voice recordings, those segments in which a specific person is speaking?

2 Answers
成全新的幸福
#2 · 2019-01-14 04:43

While the other answer is accurate, I have an update on an installation issue that occurred to me on Linux while installing SHoUT: an "undefined reference to pthread_join" linker error. The solution I found was to open configure-make.sh from the SHoUT installation zip and modify the line

CXXFLAGS="-O3 -funroll-loops -mfpmath=sse -msse -msse2" LDFLAGS="-lpthread" ../configure

to

CXXFLAGS="-O3 -funroll-loops -mfpmath=sse -msse -msse2" LDFLAGS="-pthread" ../configure

NOTE: -lpthread is changed to -pthread on Linux systems.

OS: Linux Mint 18; SHoUT version: release-2010-version-0-3

家丑人穷心不美
#3 · 2019-01-14 04:58

It's possible with the toolkit SHoUT: http://shout-toolkit.sourceforge.net/index.html

It's written in C++ and tested on Linux, but it should also run under Windows or OS X.

The toolkit was a by-product of my PhD research on automatic speech recognition (ASR). Using it for ASR itself is perhaps not that straightforward, but for Speech Activity Detection (SAD) and diarization (finding all speech of one specific person) it is quite easy to use. Here is an example:

  1. Create a headerless PCM audio file: 16 kHz, 16-bit, little-endian, mono. I use ffmpeg to create the raw files: ffmpeg -i [INPUT_FILE] -vn -acodec pcm_s16le -ar 16000 -ac 1 -f s16le [RAW_FILE]. Prefix the headerless data with the little-endian encoded file size (4 bytes). Be sure the file has a .raw extension, as shout_cluster detects the file type based on the extension.

  2. Perform speech/non-speech segmentation: ./shout_segment -a [RAW_FILE] -ams [SHOUT_SAD_MODEL] -mo [SAD_OUTPUT]. The output file will provide you with segments in which someone is speaking (labeled "SPEECH"; of course, because it is all done automatically, the system might make mistakes), in which there is sound that is not speech ("SOUND"), or silence ("SILENCE").

  3. Perform diarization: ./shout_cluster -a [RAW_FILE] -mo [DIARIZATION_OUTPUT] -mi [SAD_OUTPUT]. Using the output of shout_segment, it will try to determine how many speakers were active in the recording, label each speaker ("SPK01", "SPK02", etc.), and then find all speech segments of each speaker.
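The 4-byte size prefix from step 1 can be produced in a POSIX shell. This is a minimal sketch, with a dummy payload standing in for the ffmpeg output; the file names body.bin, prefix.bin, and out.raw are placeholders:

```shell
# Stand-in payload (in practice this would be the raw PCM from ffmpeg).
printf 'abcd' > body.bin

# Byte length of the payload.
size=$(wc -c < body.bin)

# Emit the size one byte at a time, least-significant byte first
# (little-endian), using printf's octal escape for each byte value.
for i in 0 1 2 3; do
  byte=$(( (size >> (8 * i)) & 255 ))
  printf "\\$(printf '%03o' "$byte")"
done > prefix.bin

# Final .raw file: 4-byte little-endian size, then the headerless audio.
cat prefix.bin body.bin > out.raw
```

For the 4-byte payload above, od -An -tx1 out.raw shows 04 00 00 00 followed by 61 62 63 64, i.e. the size prefix and then the data.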

I hope this will help!
