I'm trying to figure out how to write an app that can decode audio Morse code on the fly. I found this document, which explains how to record audio from the microphone in Android. What I'd like to know is whether it's possible to access the raw input from the microphone, or whether it has to be written to a file and read back.
Thanks.
There is a sensing framework from the MIT Media Lab called funf: http://code.google.com/p/funf-open-sensing-framework/
It already provides classes for audio input and some analysis (FFT and the like); saving to files and uploading are also implemented as far as I've seen, and it handles most of the sensors available on the phone. You can also take inspiration from the code they wrote, which I think is pretty good.
It looks like it has to be dumped to a file first.
If you peek at the android.media.AudioRecord source, the native audio data byte buffers are not exposed to the public API.
In my experience, having built an audio synthesizer for Android, it's hard to achieve real-time performance and maintain audio fidelity. A Morse code "translator" is certainly doable though, and it sounds like a fun little project. Good luck!
Using AudioRecord is overkill. Just check MediaRecorder.getMaxAmplitude() every 1000 milliseconds for loud noises versus silence.
If you really need to analyze the waveform, then yes, you need AudioRecord. Get the raw data and calculate something like the root mean square of the part of the raw bytes you are concerned with to get a sense of the volume.
But why do all that when MediaRecorder.getMaxAmplitude() is so much easier to use?
See my code from this answer: this question
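Roughly, the polling approach could look like the sketch below. The class name, threshold, and poll interval are my own placeholders, and note that MediaRecorder still has to be given an output file even if you only ever look at the amplitude readings (and you need the RECORD_AUDIO permission):

```java
import android.media.MediaRecorder;
import android.os.Handler;

public class AmplitudePoller {
    private static final int POLL_INTERVAL_MS = 1000;  // check once per second
    private static final int SILENCE_THRESHOLD = 2000; // tune experimentally

    private final MediaRecorder recorder = new MediaRecorder();
    private final Handler handler = new Handler();

    public void start(String dummyOutputPath) throws Exception {
        // MediaRecorder requires a full recording pipeline, even though we only
        // care about getMaxAmplitude(), so record to a throwaway file.
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        recorder.setOutputFile(dummyOutputPath);
        recorder.prepare();
        recorder.start();
        handler.postDelayed(poll, POLL_INTERVAL_MS);
    }

    private final Runnable poll = new Runnable() {
        @Override
        public void run() {
            // getMaxAmplitude() returns the maximum absolute amplitude
            // sampled since the previous call.
            int amplitude = recorder.getMaxAmplitude();
            boolean toneOn = amplitude > SILENCE_THRESHOLD;
            // TODO: feed toneOn into the Morse on/off timing logic
            handler.postDelayed(this, POLL_INTERVAL_MS);
        }
    };

    public void stop() {
        handler.removeCallbacks(poll);
        recorder.stop();
        recorder.release();
    }
}
```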
If you use MediaRecorder (as in the example above), it will save compressed audio to a file.
If you use AudioRecord, you can get audio samples directly.
Yes, what you want to do should be possible.
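A minimal sketch of reading raw PCM with AudioRecord and computing a per-buffer RMS volume (as suggested in the other answer) might look like this. The class name, sample rate, buffer size, and threshold placement are illustrative assumptions, not anything prescribed by the platform:

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class RawAudioReader {
    private static final int SAMPLE_RATE = 8000; // Hz; widely supported

    private volatile boolean running;

    public void start() {
        int bufferSize = AudioRecord.getMinBufferSize(
                SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT);

        final AudioRecord record = new AudioRecord(
                MediaRecorder.AudioSource.MIC,
                SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT,
                bufferSize);

        running = true;
        new Thread(new Runnable() {
            @Override
            public void run() {
                short[] buffer = new short[SAMPLE_RATE / 10]; // ~100 ms of samples
                record.startRecording();
                while (running) {
                    // read() blocks until the buffer is filled with raw PCM samples
                    int read = record.read(buffer, 0, buffer.length);
                    if (read > 0) {
                        double rms = rms(buffer, read);
                        // TODO: threshold rms (or run Goertzel/FFT on the buffer)
                        // to decide whether the Morse tone is on or off
                    }
                }
                record.stop();
                record.release();
            }
        }).start();
    }

    public void stop() {
        running = false;
    }

    // Root mean square of the first 'length' samples: a simple loudness measure.
    private static double rms(short[] samples, int length) {
        long sumOfSquares = 0;
        for (int i = 0; i < length; i++) {
            sumOfSquares += (long) samples[i] * samples[i];
        }
        return Math.sqrt(sumOfSquares / (double) length);
    }
}
```

With the raw samples in hand, nothing is ever written to a file; the decoding can run entirely in memory as the audio arrives.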