I have 3 threads in my program:
t1 reads frame1 of data and writes it onto a hard disk.
t2 reads frame2 of data and writes it onto a hard disk.
t3 reads frame3 of data and writes it onto a hard disk.
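For concreteness, a minimal sketch of what such a program might look like, assuming POSIX threads and a hypothetical read_frame() helper (the frame size, file names, and helper are illustrative, not part of the original program):

#include <pthread.h>
#include <stdio.h>

#define FRAME_SIZE 4096   /* hypothetical frame size */

/* hypothetical helper: fills buf with one frame of data */
static void read_frame(int frame_no, char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        buf[i] = (char)frame_no;     /* placeholder data */
}

struct job { int frame_no; const char *path; };

/* each thread reads one frame and writes it to its own file */
static void *worker(void *arg)
{
    struct job *j = arg;
    char buf[FRAME_SIZE];

    read_frame(j->frame_no, buf, sizeof buf);

    FILE *f = fopen(j->path, "wb");
    if (!f) { perror("fopen"); return NULL; }
    fwrite(buf, 1, sizeof buf, f);   /* buffered by stdio, not yet on disk */
    fclose(f);                       /* flushes the stdio buffer */
    return NULL;
}

int main(void)
{
    struct job jobs[3] = {
        {1, "frame1.bin"}, {2, "frame2.bin"}, {3, "frame3.bin"}
    };
    pthread_t t[3];

    for (int i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, worker, &jobs[i]);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    return 0;
}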
When the program runs and t1, t2, and t3 are scheduled for execution one by one, how are the operations performed internally?
Ex:
say t1 -> t2 -> t3 are scheduled in this order
Scenario 1:
t1 finishes one full cycle of reading frame1 and writing frame1 before t2 is scheduled, t2 finishes one full cycle of reading frame2 and writing frame2 before t3 is scheduled, and so on.
or
Scenario 2:
any, some, or all of t1, t2, and t3 can be stopped in the middle of their execution before the next thread gets scheduled.
Which of these scenarios is correct?
I am specifically mentioning the hard disk write because of the possibility of a blocking fwrite call, which cannot be interrupted in the middle of its execution.
You should consider (and code, and think) as if all threads are running concurrently (e.g. at the same time on different cores of your processor).
A thread usually doesn't write directly to the disk: it writes files to some file system (and the kernel is buffering, e.g. in the page cache, so the disk IO can happen several seconds later).
If you need synchronization, you should make it explicit (e.g. with mutexes). If you need to synchronize file contents, consider using some file locking machinery à la lockf(3) (but you should really avoid having several threads or processes accessing and writing the same file). BTW stdio is buffered (so you might want to fflush(3) after fwrite(3)...)
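A minimal sketch of such explicit synchronization, assuming the threads share one FILE* and their writes must not interleave (the mutex and function names are illustrative):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t file_lock = PTHREAD_MUTEX_INITIALIZER;

/* serialize writes to a shared FILE*: only one thread at a time
   may run the fwrite/fflush pair */
static void write_frame(FILE *out, const void *frame, size_t len)
{
    pthread_mutex_lock(&file_lock);
    fwrite(frame, 1, len, out);   /* goes into the stdio buffer */
    fflush(out);                  /* hand the bytes to the kernel (page cache) */
    pthread_mutex_unlock(&file_lock);
}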
And when the kernel is scheduling some thread or process, it schedules preemptively, at arbitrary times (at any machine instruction).
Read a pthreads tutorial and Operating Systems: Three Easy Pieces. Also read about memory models (they are tricky).
So all of your scenarios could be, and are likely to be, wrong.
How much of a thread's code gets executed every time it is scheduled?
You should not care, and you cannot know. It can be as tiny as nothing (read about thrashing), and as large as several million machine instructions. BTW, be aware of optimizing compilers and of sequence points in C; so actually the question does not even make sense (from the observable point of view of a C programmer).
I am especially mentioning hard disk write as there is a possibility of a blocking fwrite call
When the stdio library (or your application directly) is actually write(2)-ing a file descriptor, it is likely (but not certain) that the kernel will schedule other tasks during such system calls. However, the actual disk IO will probably happen later.
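If the program needs the bytes to actually reach the disk at a known point, it has to ask for that explicitly; a sketch, assuming a stdio stream named out (the wrapper function is illustrative):

#include <stdio.h>
#include <unistd.h>   /* fsync */

/* fwrite() only fills the stdio buffer; fflush() hands the bytes to the
   kernel (page cache); fsync() blocks until the kernel has written them
   to the underlying device */
static int write_and_sync(FILE *out, const void *buf, size_t len)
{
    if (fwrite(buf, 1, len, out) != len)
        return -1;
    if (fflush(out) != 0)
        return -1;
    return fsync(fileno(out));
}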
PS. Read also about undefined behavior.
It depends on the method (or methods) these threads are calling. If all these threads call the same method and that method is synchronized, then only one thread will execute it at a time; during that time the rest of the threads will wait for the currently running thread to complete. If it is not synchronized, or the threads call different methods, then there is no guarantee which thread will be processed first or finish first, and they may also end up overwriting shared (class-level) variables.
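A small sketch of that last hazard in C terms (the shared counter and thread count are hypothetical), showing how unsynchronized threads can overwrite each other's updates to shared state:

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                  /* shared ("class-level") state */

/* data race: counter++ is a non-atomic read-modify-write, so two threads
   running this concurrently can overwrite each other's updates */
static void *unsafe_inc(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, unsafe_inc, NULL);
    pthread_create(&b, NULL, unsafe_inc, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* usually prints less than 2000000 because updates were lost;
       protecting counter++ with a mutex (like a synchronized method) fixes it */
    printf("counter = %ld\n", counter);
    return 0;
}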