I'm planning to make a universal application that analyzes audio samples. By 'universal' I mean that any technology (JavaScript, C, Java, etc.) can use it. I made an iOS application, using Apple's AVFoundation, that receives the microphone samples in real time in buffers of length 512 (bufferSize = 512). In Python I did the same thing using PyAudio, but unfortunately I got very different values...
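For reference, this is roughly the kind of capture I'm doing on the Python side (the actual code is in the gist linked below); the format constant and sample rate here are just what I assume is being used, not necessarily exactly what the gist does:

    import pyaudio
    import numpy as np

    CHUNK = 512                 # same buffer length as on iOS
    FORMAT = pyaudio.paInt16    # assumption: 16-bit integer samples
    CHANNELS = 1
    RATE = 44100                # assumption: 44.1 kHz

    p = pyaudio.PyAudio()
    stream = p.open(format=FORMAT,
                    channels=CHANNELS,
                    rate=RATE,
                    input=True,
                    frames_per_buffer=CHUNK)

    # Read one buffer of 512 samples and interpret the bytes as int16
    data = stream.read(CHUNK)
    samples = np.frombuffer(data, dtype=np.int16)
    print(samples)

    stream.stop_stream()
    stream.close()
    p.terminate()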
Here are the samples.
Samples with bufferSize = 512 on iOS:
[0.0166742969, 0.0181432627, 0.0184620395, 0.0182254426, 0.0181945376, 0.0185530782, 0.0192517322, 0.0199078992, 0.0204724055, 0.0212812237, 0.022370765, 0.0230008475, 0.0225516111, 0.0213304944, 0.0200473778, 0.019841563, 0.0206818394, 0.0211550407, 0.0207783803, 0.020227218 ....
Samples with bufferSize = 512 on Python:
[ -52. -32. -11. 10. 24. 31. 37. 38. 33. 25. 10. -4.
-18. -26. -29. -39. ....
For the complete sample lists:
https://pastebin.com/jrM2VWXR
The Python code:
https://gist.github.com/denisb411/7c6f601175e8bb9f735d8aa43a0db340
In both cases I used the same computer.
How can I 'convert' (I'm not sure that's the proper word) them to the same scale?
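My guess (not confirmed) is that iOS delivers Float32 samples normalized to [-1.0, 1.0] while my PyAudio stream returns raw 16-bit integers. If that's the case, I imagine the conversion would be something like the sketch below, but I'd like to know whether this is actually the right approach:

    import numpy as np

    # Example int16 samples as returned by the PyAudio stream (raw 16-bit PCM)
    int16_samples = np.array([-52, -32, -11, 10, 24, 31, 37, 38], dtype=np.int16)

    # Assumption: scale to the [-1.0, 1.0] range that iOS appears to use
    # by dividing by 32768 (the magnitude of the most negative int16 value)
    float_samples = int16_samples.astype(np.float32) / 32768.0
    print(float_samples)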
If the question isn't clear, please let me know.