Short
I would like to stream multiple overlapping audio files (sound effects that play at certain random times): a generated audio stream that will NEVER repeat exactly the same way. Some audio files are looping, some play at specific times. Some kind of real-time stream insertion would probably be needed.
What is the best way to write such server software? What protocols should be used for streaming it (I would prefer HTTP)? I would probably want to expose a URL for each configuration (tracks & timing of sound effects).
Any pointers to code/libraries? Any language like Java/Kotlin/Go/Rust/Ruby/Python/Node/... would be fine.
Example
Url: https://server.org/audio?file1=loop&file2=every30s&file2_volume=0.5
Response: Audio stream
(that plays on cast devices)
The stream loops file1. Every 30s it plays file2 at 50% volume (overlaid over file1, which plays at 100%). File1 is about 10m9s long, so the combination never really repeats, which means we cannot just serve a pregenerated MP3 file.
Some background
I currently have an Android application that plays different audio files at random. Some are looping, some play every x seconds. Sometimes as many as 10 play at the same time.
Now I would like to add support for Chromecast / Chromecast Audio / Google Home / ... . I guess the best approach would be a server that streams the audio. Every user would have their own stream when playing; there is no need for multiple users to listen to the same stream (even though that would probably be supported as well).
The server would basically read the URL, get the configuration, and then respond with an audio stream. The server opens one (or multiple) audio files, which it combines/overlays into a single stream. Some of those audio files are looped; others are opened at specific times and added/overlaid to the stream. Each audio file is played at a different volume level (some louder, some quieter). The question is how to produce such an audio stream and how to add the different files to it in real time.
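Roughly, I imagine the server doing something along these lines (just a sketch in Node to illustrate what I mean; the names are made up, and streamMixedAudio is exactly the part I don't know how to build):

    // Rough sketch only; streamMixedAudio() is hypothetical and is the part
    // I don't know how to implement.
    const http = require('http');
    const url = require('url');

    http.createServer((req, res) => {
      // e.g. ?file1=loop&file2=every30s&file2_volume=0.5
      const q = url.parse(req.url, true).query;
      const config = [
        { file: 'file1.mp3', mode: q.file1, volume: 1.0 },
        { file: 'file2.mp3', mode: q.file2, volume: parseFloat(q.file2_volume || '1') },
      ];
      res.writeHead(200, { 'Content-Type': 'audio/mpeg' });
      streamMixedAudio(config, res);  // <-- how do I do this part?
    }).listen(8080);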
So there are two parts to your problem: combining/overlaying the audio files (the ffmpeg part) and streaming the result over HTTP.
I can help you with the latter part; you will need to figure out the first part yourself.
Below is a sample Node.js script. Create a directory, save the file below in it as server.js, and run it with:
node server.js
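server.js could look something like the following. This is a minimal sketch, assuming ffmpeg is installed and on your PATH (with MP3 encoding support), and it only uses Node's built-in http and child_process modules; the file names background.mp3 and effect.mp3 are placeholders.

    // server.js
    // Spawn ffmpeg to loop one file, overlay a second file at 50% volume,
    // and pipe the encoded MP3 output straight to the HTTP response.
    const http = require('http');
    const { spawn } = require('child_process');

    http.createServer((req, res) => {
      if (req.url !== '/merged') {
        res.writeHead(404);
        return res.end();
      }
      res.writeHead(200, { 'Content-Type': 'audio/mpeg' });

      const ffmpeg = spawn('ffmpeg', [
        '-stream_loop', '-1', '-i', 'background.mp3',   // loop the base track forever
        '-i', 'effect.mp3',
        '-filter_complex',
        '[1:a]volume=0.5[fx];[0:a][fx]amix=inputs=2:duration=first',
        '-f', 'mp3', 'pipe:1',                           // encode MP3 to stdout
      ]);

      ffmpeg.stdout.pipe(res);                       // stream the audio to the client
      ffmpeg.stderr.resume();                        // drain ffmpeg's log output
      req.on('close', () => ffmpeg.kill('SIGKILL')); // stop ffmpeg on disconnect
    }).listen(9090);

Note that each request spawns its own ffmpeg process, so every listener gets their own independently generated stream.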
Now in VLC open http://localhost:9090/merged
Now for your requirement, the part that will change is the ffmpeg command and its filter arguments.
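For example (just a sketch, and I am not claiming this is the best filter graph), starting the effect 30 seconds into the stream would only mean changing the argument list passed to spawn; adelay takes a delay in milliseconds per channel:

    // Same script, different ffmpeg arguments: delay the effect by 30 s
    // before mixing it in at 50% volume over the looping background.
    const ffmpegArgs = [
      '-stream_loop', '-1', '-i', 'background.mp3',
      '-i', 'effect.mp3',
      '-filter_complex',
      '[1:a]adelay=30000|30000,volume=0.5[fx];[0:a][fx]amix=inputs=2:duration=first',
      '-f', 'mp3', 'pipe:1',
    ];

Making the effect repeat every 30 seconds (rather than play once) is exactly the kind of ffmpeg question the threads below deal with.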
But I am no ffmpeg expert to guide you around in that area. Perhaps that calls for another question, or take a lead from the many existing SO threads:
ffmpeg - how to merge multiple audio with time offset into a video?
How to merge two audio files while retaining correct timings with ffmpeg
ffmpeg mix audio at specific time
https://superuser.com/questions/850527/combine-three-videos-between-specific-time-using-ffmpeg