I read the signal man page using man 7 signal, where I see two types of signals. So, I have a question:
What is the difference between POSIX reliable signals and POSIX real-time signals in Linux?
These days, it might be better to phrase these as ordinary signal semantics versus realtime signal semantics.
In some early UNIX systems, signals were unreliable in that they could be "lost": there was no facility to block a signal and keep it pending. For example, code that checks a wake_up_flag set by a signal handler and then calls pause() could sleep forever if the signal arrived just after the check but before the pause(). Signal blocking and sigpause() were the reliable improvements to this situation; the modern equivalents are sigprocmask() and sigsuspend(), sketched below.
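As a concrete illustration (not part of the man page), here is a minimal sketch of the race-free pattern using sigprocmask() and sigsuspend(); the choice of SIGUSR1, the handler name, and the flag are all illustrative:

```c
#define _POSIX_C_SOURCE 200809L
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t wake_up_flag = 0;

static void on_sigusr1(int sig)
{
    (void)sig;
    wake_up_flag = 1;
}

int main(void)
{
    struct sigaction sa = { 0 };
    sa.sa_handler = on_sigusr1;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, NULL);

    /* Block SIGUSR1 so it cannot be delivered between the flag check
       and the wait; if it arrives now, it stays pending instead of
       being lost. */
    sigset_t block, oldmask;
    sigemptyset(&block);
    sigaddset(&block, SIGUSR1);
    sigprocmask(SIG_BLOCK, &block, &oldmask);

    printf("pid %d waiting; try: kill -USR1 %d\n", (int)getpid(), (int)getpid());

    while (!wake_up_flag) {
        /* Atomically restore the old mask and sleep; returns after a
           handler has run, with SIGUSR1 blocked again. */
        sigsuspend(&oldmask);
    }

    sigprocmask(SIG_SETMASK, &oldmask, NULL);  /* restore original mask */
    printf("woken up\n");
    return 0;
}
```

Because sigsuspend() unblocks the signal and waits in a single atomic step, there is no window in which the wakeup can slip through unnoticed.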
Additionally, the semantics of signal() meant that a user-defined signal handler was reset to SIG_DFL upon entry into the handler. The usual technique, then, was to immediately re-install the user-defined disposition inside the signal handler. However, since signals could not be blocked, there was a race window in which the program could be signaled again and suffer the consequences of SIG_DFL (often termination). In modern systems, sigaction() addresses this reliably: the disposition stays installed, and by default the signal is blocked while its handler runs.
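A minimal sketch of the sigaction() approach, with SIGINT chosen purely for illustration; the point is that the handler does not need to re-install itself:

```c
#define _POSIX_C_SOURCE 200809L
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void on_sigint(int sig)
{
    (void)sig;
    /* No need to re-install the handler here, as old signal()-based
       code had to do: the disposition is not reset to SIG_DFL. */
}

int main(void)
{
    struct sigaction sa = { 0 };
    sa.sa_handler = on_sigint;
    sigemptyset(&sa.sa_mask);  /* no extra signals blocked during the handler */
    sa.sa_flags = 0;           /* default: no SA_RESETHAND, no SA_NODEFER */
    if (sigaction(SIGINT, &sa, NULL) == -1) {
        perror("sigaction");
        return 1;
    }

    pause();  /* wait for SIGINT (e.g. Ctrl-C) */
    printf("SIGINT handled; the handler is still installed for the next one\n");
    return 0;
}
```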
So, "reliable" signals are what most of us these days think of as ordinary signal semantics. (For more information here, I'd recommend Advanced Programming in the UNIX Environment by Stephens and Rago, specifically § 10.4 "Unreliable Signals")
POSIX realtime signals add a few features over ordinary signals, for example, a new range of signals for application purposes (SIGRTMIN ... SIGRTMAX), the ability to queue pending signals, and the ability to deliver a word of data with a signal.
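A minimal sketch of those features, with SIGRTMIN+1 and the payload values chosen arbitrarily: two instances of the same realtime signal are queued with sigqueue() while the signal is blocked, and both are delivered to an SA_SIGINFO handler that can read the accompanying data word.

```c
#define _POSIX_C_SOURCE 200809L
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t last_value;
static volatile sig_atomic_t deliveries;

static void rt_handler(int sig, siginfo_t *info, void *ucontext)
{
    (void)sig;
    (void)ucontext;
    last_value = info->si_value.sival_int;  /* the data word sent with sigqueue() */
    deliveries++;
}

int main(void)
{
    struct sigaction sa = { 0 };
    sa.sa_sigaction = rt_handler;
    sa.sa_flags = SA_SIGINFO;  /* ask for the extended siginfo_t argument */
    sigemptyset(&sa.sa_mask);
    sigaction(SIGRTMIN + 1, &sa, NULL);

    /* Block the signal, queue two instances, then unblock. Unlike an
       ordinary signal (where a second pending instance would be merged
       into the first), both realtime instances are delivered, each
       carrying its own value. */
    sigset_t block;
    sigemptyset(&block);
    sigaddset(&block, SIGRTMIN + 1);
    sigprocmask(SIG_BLOCK, &block, NULL);

    sigqueue(getpid(), SIGRTMIN + 1, (union sigval){ .sival_int = 41 });
    sigqueue(getpid(), SIGRTMIN + 1, (union sigval){ .sival_int = 42 });

    sigprocmask(SIG_UNBLOCK, &block, NULL);  /* both handler runs happen here */

    printf("deliveries: %d, last value: %d\n", (int)deliveries, (int)last_value);
    return 0;
}
```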