Is there prior open-source work in the field of audio analysis to detect a human voice (say, despite some background noise), determine the speaker's gender, and possibly determine the number of speakers, the age of the speaker(s), and the emotion of the speakers?
My hunch is that the speech recognition software like CMU Sphinx could be a good place to start, but if there's something better, it'd be great.
For your speech/non-speech classification and diarization question (determining the number of speakers and when they are speaking): there is an open-source toolkit that can do this (automatically, so there will of course be mistakes in the output). Have a look at this post:
stackoverflow question on diarization
It can be kind of difficult to extract low-level information such as pitch and power using CMU Sphinx 4 (though the older version might have that capability). I would suggest you use Praat instead. You can write scripts to extract the pitch tier and each of the formants in a speaker's voice. Honestly, the Praat scripting language is horrific, but it does many things quickly that would otherwise take a long time. Many Praat scripts are posted online, too. See http://www.fon.hum.uva.nl/praat/.
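If you would rather call Praat from Python than write Praat scripts directly, one option (my suggestion, not something mentioned in the answer above) is the praat-parselmouth wrapper. A minimal sketch, with "speaker.wav" as a placeholder file name:

import parselmouth

# Load a recording; "speaker.wav" is a placeholder, not a real file.
sound = parselmouth.Sound("speaker.wav")

# Pitch contour (fundamental frequency over time), as Praat would compute it.
pitch = sound.to_pitch()
f0 = pitch.selected_array["frequency"]   # 0.0 where no pitch was found
times = pitch.xs()

# Formant tracks via Praat's Burg algorithm; query F1 at each pitch frame.
formants = sound.to_formant_burg()
f1 = [formants.get_value_at_time(1, t) for t in times]

print("mean F0 over voiced frames:", f0[f0 > 0].mean())

The objects returned here are the same Pitch and Formant objects a Praat script would produce, just exposed as Python objects.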
I'm a graduate student doing speech recognition research. These are open research problems, and, unfortunately, I'm not aware of open-source packages that can do these things out of the box.
If you have some background in implementing signal-processing or machine-learning algorithms, you could try looking up academic papers using search terms drawn from your question, such as:
voice activity detection (speech/non-speech classification)
speaker diarization
gender classification from speech
speaker age estimation
speech emotion recognition
According to its FAQ (http://cmusphinx.sourceforge.net/sphinx4/doc/Sphinx4-faq.html#speaker_identification), CMU Sphinx, which is probably the leading open-source speech recognizer out there, does not support speaker identification; I'm doubtful that it has any of the other capabilities described above.
Some academic researchers post their code online, and/or might be willing to share it with you. A search of Google Scholar reveals many people who've written Master's or PhD theses using Sphinx, so that could be a good place to start.
Lastly, you could try to implement a very crude gender-recognition algorithm without getting into the speech recognizer itself, if you know a little bit of signal processing. Basically, male and female voices differ in their fundamental frequency: according to Wikipedia (http://en.wikipedia.org/wiki/Voice_frequency), male voices fall between 85-180 Hz, while female voices fall between 165-255 Hz. You could use something like sox to compute the frequency spectrum of an utterance (using the fast Fourier transform) and classify the speech as "male" or "female" based on some summary statistic such as the average frequency (see http://classicalconvert.com/tag/sox/). To make this work robustly (i.e. across many speakers, microphones, or recording environments), there are plenty of things that you can do. I'm not sure I can predict how much time and effort would be required to get 70% accuracy, since it would depend on the nature of your task; my sense is that 90%+ would definitely be very hard. Good luck!
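As a rough illustration of that idea (not part of the original answer), here is a minimal Python sketch that estimates the dominant low-frequency component of a recording with an FFT and applies a crude threshold; the file name, the 60-300 Hz search band, and the 165 Hz cutoff are all assumptions chosen for illustration:

import numpy as np
from scipy.io import wavfile

# Hypothetical input: a mono, PCM WAV file of a single utterance.
rate, samples = wavfile.read("utterance.wav")
samples = samples.astype(np.float64)

# Magnitude spectrum via the fast Fourier transform.
spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)

# Only search the band where human fundamental frequencies live (assumed 60-300 Hz).
band = (freqs >= 60) & (freqs <= 300)
dominant = freqs[band][np.argmax(spectrum[band])]

# Crude cutoff between the male (85-180 Hz) and female (165-255 Hz) ranges.
label = "male" if dominant < 165 else "female"
print(f"dominant frequency ~{dominant:.1f} Hz -> guessed {label}")

In practice you would at least window the signal, average over short frames, and use a proper pitch estimator (e.g. autocorrelation), since the strongest FFT bin is often a harmonic rather than the fundamental; that is part of why getting this robust across speakers and microphones is hard.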