This question is not about a plugin; it's about the design of a standalone application, and it's connected with a few questions I've asked before.
I have to write a multi-threaded audio synthesis engine whose data crunching by far exceeds what can be accommodated on the CoreAudio render thread: several thousand independent, sample-accurate sine-wave oscillators with amplitude and phase interpolation, in real time. This requires more CPU power than any single processor core can deliver, even with every available optimization.
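For concreteness, this is roughly the per-sample work each oscillator has to do (a minimal sketch; `Osc` and `renderBank` are my own names, and linear per-sample ramping of frequency and amplitude is just one assumption about the interpolation scheme):

```cpp
// Minimal sketch of the per-oscillator work, assuming linear interpolation
// of frequency and amplitude between control points.
#include <cmath>
#include <cstddef>
#include <vector>

struct Osc {
    double phase   = 0.0;  // current phase, radians
    double freq    = 0.0;  // current frequency, Hz
    double freqInc = 0.0;  // per-sample frequency ramp toward next control point
    double amp     = 0.0;  // current amplitude
    double ampInc  = 0.0;  // per-sample amplitude ramp
};

// Mix all oscillators in `bank` into `out` for `frames` samples.
void renderBank(std::vector<Osc>& bank, float* out, size_t frames, double sampleRate)
{
    const double twoPi = 2.0 * M_PI;
    for (size_t i = 0; i < frames; ++i) out[i] = 0.0f;
    for (Osc& o : bank) {
        double phase = o.phase, freq = o.freq, amp = o.amp;
        for (size_t i = 0; i < frames; ++i) {
            out[i] += static_cast<float>(amp * std::sin(phase));
            phase += twoPi * freq / sampleRate;
            if (phase >= twoPi) phase -= twoPi;  // keep phase bounded
            freq += o.freqInc;
            amp  += o.ampInc;
        }
        o.phase = phase; o.freq = freq; o.amp = amp;
    }
}
```

Multiply that inner loop by a few thousand oscillators and it simply doesn't fit into one render quantum on one core.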
I'm doing my best to learn this, but it feels like a wall, not a curve. The consumer thread can be a plain CoreAudio real-time-priority render callback accepting an AudioBufferList (ioData) and so on, but what should the producer thread(s) be? If I choose another AudioComponent, it does no better than keeping everything on the output thread: it only gets more complicated and introduces additional latency.
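Here is roughly the consumer side I have in mind: the render callback only drains a lock-free FIFO and never blocks. `RingBuffer` and `ringPop` are placeholders for whatever SPSC FIFO ends up being used (e.g. TPCircularBuffer, or the sketch further down), and mono float output is assumed:

```cpp
// Sketch of the real-time consumer: the callback only copies pre-rendered
// audio out of a lock-free FIFO; on underrun it outputs silence, never blocks.
#include <AudioToolbox/AudioToolbox.h>
#include <cstring>

struct RingBuffer;                                   // hypothetical lock-free FIFO
size_t ringPop(RingBuffer*, float* dst, size_t n);   // returns frames actually read

struct Engine { RingBuffer* fifo; };

static OSStatus renderCallback(void* inRefCon,
                               AudioUnitRenderActionFlags* ioActionFlags,
                               const AudioTimeStamp* inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList* ioData)
{
    Engine* e = static_cast<Engine*>(inRefCon);
    float* out = static_cast<float*>(ioData->mBuffers[0].mData);
    size_t got = ringPop(e->fifo, out, inNumberFrames);
    if (got < inNumberFrames) {
        // Underrun: fill the remainder with silence rather than wait.
        std::memset(out + got, 0, (inNumberFrames - got) * sizeof(float));
    }
    return noErr;
}
```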
If I put n parallel AudioComponents into a graph that feeds a ring buffer, which in turn feeds the consumer thread, how can I guarantee they won't all end up on the same thread, and that they stay in sync and sample-accurate?
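For reference, the ring buffer I keep mentioning could be as simple as this single-producer/single-consumer sketch (no cache-line padding or other production niceties; in practice something proven like TPCircularBuffer would do):

```cpp
// Minimal lock-free SPSC ring buffer sketch. Indices grow monotonically;
// the difference write - read is the number of frames currently buffered.
#include <atomic>
#include <cstddef>
#include <vector>

class SpscRing {
public:
    explicit SpscRing(size_t capacity) : buf_(capacity) {}

    // Producer side: returns frames actually written (may be < n when full).
    size_t push(const float* src, size_t n) {
        size_t w = write_.load(std::memory_order_relaxed);
        size_t r = read_.load(std::memory_order_acquire);
        size_t free = buf_.size() - (w - r);
        if (n > free) n = free;
        for (size_t i = 0; i < n; ++i)
            buf_[(w + i) % buf_.size()] = src[i];
        write_.store(w + n, std::memory_order_release);
        return n;
    }

    // Consumer side (real-time safe: no locks, no allocation).
    size_t pop(float* dst, size_t n) {
        size_t r = read_.load(std::memory_order_relaxed);
        size_t w = write_.load(std::memory_order_acquire);
        size_t avail = w - r;
        if (n > avail) n = avail;
        for (size_t i = 0; i < n; ++i)
            dst[i] = buf_[(r + i) % buf_.size()];
        read_.store(r + n, std::memory_order_release);
        return n;
    }

private:
    std::vector<float> buf_;
    std::atomic<size_t> write_{0};
    std::atomic<size_t> read_{0};
};
```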
If I write n traditional POSIX threads and join their outputs, how can CoreAudio's pull model coexist with such a push model in real time?
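To make the push/pull question concrete, here is the kind of producer loop I can imagine, built on the `Osc`/`renderBank` and `SpscRing` sketches above. Spawning a thread per block is deliberately naive (persistent workers woken by semaphores would replace it); the point is only the data flow: every worker renders its slice of the bank over the same block range, so the slices stay sample-accurate, and the FIFO absorbs the push/pull mismatch:

```cpp
// Naive fan-out/fan-in producer: n slices rendered in parallel, summed,
// then pushed into the FIFO that the render callback drains.
#include <algorithm>
#include <atomic>
#include <chrono>
#include <thread>
#include <vector>

constexpr size_t kBlock = 512;

void producerLoop(std::vector<std::vector<Osc>>& slices, SpscRing& fifo,
                  double sampleRate, std::atomic<bool>& running)
{
    const size_t n = slices.size();
    std::vector<std::vector<float>> partial(n, std::vector<float>(kBlock));
    std::vector<float> mix(kBlock);

    while (running.load()) {
        // Fan out: each worker renders its slice for the SAME block range,
        // so the partial outputs are sample-aligned by construction.
        std::vector<std::thread> workers;
        for (size_t i = 0; i < n; ++i)
            workers.emplace_back([&, i] {
                renderBank(slices[i], partial[i].data(), kBlock, sampleRate);
            });
        for (auto& t : workers) t.join();   // block boundary = sync point

        // Fan in: sum the partial mixes.
        std::fill(mix.begin(), mix.end(), 0.0f);
        for (size_t i = 0; i < n; ++i)
            for (size_t s = 0; s < kBlock; ++s) mix[s] += partial[i][s];

        // Push the block; if the FIFO is full, the consumer is behind us,
        // so back off briefly instead of spinning.
        size_t done = 0;
        while (done < kBlock && running.load()) {
            done += fifo.push(mix.data() + done, kBlock - done);
            if (done < kBlock)
                std::this_thread::sleep_for(std::chrono::microseconds(200));
        }
    }
}
```

But is that the right shape at all, and how does one keep such workers reliably ahead of the render callback without priority inversion?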
Is there any freely available example code for this? Is there a reference, a textbook, or a tutorial on writing such code? I haven't found any publicly available information, which makes me wonder whether this question has ever been asked before.
Thanks in advance!