I'm using the kernel_fpu_begin and kernel_fpu_end functions in asm/i387.h to protect the FPU register state for some simple floating-point arithmetic inside a Linux kernel module. I'm curious about the behavior of calling the kernel_fpu_begin function twice before the kernel_fpu_end function, and vice versa. For example:
#include <asm/i387.h>

double foo(unsigned num)
{
    kernel_fpu_begin();
    double x = 3.14;
    x += num;
    kernel_fpu_end();
    return x;
}
...
kernel_fpu_begin();
double y = 1.23;
unsigned z = 42;
y -= foo(z);
kernel_fpu_end();
In the foo function, I call kernel_fpu_begin and kernel_fpu_end; but kernel_fpu_begin was already called before the call to foo. Would this result in undefined behavior?

Furthermore, should I even be calling kernel_fpu_end inside the foo function? I return a double after the kernel_fpu_end call, which means accessing floating-point registers is unsafe, right?
My initial guess is just not to use the kernel_fpu_begin and kernel_fpu_end calls inside the foo function; but what if foo returned the double cast to unsigned instead? Then the programmer wouldn't know to use kernel_fpu_begin and kernel_fpu_end outside of foo.
Short answer: no, it is incorrect to nest kernel_fpu_begin() calls, and it will lead to the userspace FPU state getting corrupted.

Medium answer: This won't work because kernel_fpu_begin() uses the current thread's struct task_struct to save off the FPU state (task_struct has an architecture-dependent member thread, and on x86, thread.fpu holds the thread's FPU state), and doing a second kernel_fpu_begin() will overwrite the original saved state. A subsequent kernel_fpu_end() will then end up restoring the wrong FPU state.
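To make the single-save-slot problem concrete, here is a deliberately simplified userspace model (plain C; every name here is invented for illustration, and the "FPU" is just an int, but the clobbering pattern is the same):

    #include <stdio.h>

    static int fpu = 100;    /* stands in for the live FPU registers */
    static int saved_fpu;    /* the ONE save slot, like thread.fpu   */

    static void fake_fpu_begin(void) { saved_fpu = fpu; }  /* "FXSAVE" */
    static void fake_fpu_end(void)   { fpu = saved_fpu; }  /* restore  */

    int main(void)
    {
        fake_fpu_begin();   /* saves userspace's 100 */
        fpu = 200;          /* kernel does some FP work */
        fake_fpu_begin();   /* nested: overwrites the slot with 200 */
        fpu = 300;
        fake_fpu_end();     /* restores 200 */
        fake_fpu_end();     /* restores 200 again, not userspace's 100 */
        printf("final state: %d (userspace expected 100)\n", fpu);
        return 0;
    }

Running this prints 200: the inner begin/end pair looks balanced, but the outer restore no longer has the original userspace state to put back.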
Long answer: As you saw looking at the actual implementation in <asm/i387.h>, the details are a bit tricky. In older kernels (like the 3.2 source you looked at), FPU handling is always "lazy": the kernel wants to avoid the overhead of reloading the FPU until it really needs it, because the thread might run and be scheduled out again without ever actually using the FPU or needing its FPU state. So kernel_fpu_end() just sets the TS flag, which causes the next access to the FPU to trap and reload the FPU state. The hope is that the FPU is used rarely enough for this to be cheaper overall.

However, if you look at newer kernels (3.7 or newer, I believe), you'll see that there is actually a second code path for all of this: "eager" FPU. This exists because newer CPUs have the optimized XSAVEOPT instruction, and newer userspace uses the FPU more often (for SSE in memcpy, etc.). The cost of XSAVEOPT/XRSTOR is lower, and the chance of the lazy optimization actually avoiding an FPU reload is lower too, so with a new kernel on a new CPU, kernel_fpu_end() just goes ahead and restores the FPU state.
However, in both the "lazy" and "eager" FPU modes there is still only one slot in the task_struct to save the FPU state, so nesting kernel_fpu_begin() calls will end up corrupting userspace's FPU state.
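Given that, the cleanest fix for the code in the question is to make sure only the outermost caller manages the FPU section. Here is a sketch of the questioner's example restructured that way (the _fpu_held naming convention is just my suggestion for marking helpers that require an already-open FPU section):

    #include <asm/i387.h>

    /* Must be called between kernel_fpu_begin() and kernel_fpu_end();
     * touches FP registers but never manages the FPU context itself. */
    static double foo_fpu_held(unsigned num)
    {
        double x = 3.14;

        x += num;
        return x;
    }

    static unsigned bar(unsigned z)
    {
        double y;
        unsigned result;

        kernel_fpu_begin();      /* single, outermost FPU section   */
        y = foo_fpu_held(z);     /* no nested begin/end inside      */
        result = (unsigned)y;    /* convert before the section ends */
        kernel_fpu_end();
        return result;
    }

Because bar() converts to unsigned before kernel_fpu_end(), no floating-point value crosses the section boundary. That also handles the cast-to-unsigned variant from the question: the begin/end pair lives inside bar(), so its callers never need to know the FPU was involved.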
Below I've commented the asm/i387.h Linux source code (version 3.2) with what I understand to be happening:
static inline void kernel_fpu_begin(void)
{
    /* get the thread_info structure for the current thread */
    struct thread_info *me = current_thread_info();

    /* increment preempt_count by 1 to disable preemption
     * (preempt_count > 0 disables preemption,
     * while preempt_count < 0 signifies a bug) */
    preempt_disable();

    if (me->status & TS_USEDFPU)
        /* this thread's FPU state is live in the registers:
         * save it off (so kernel code can clobber the FPU
         * registers) and clear the TS_USEDFPU flag */
        __save_init_fpu(me->task);
    else
        /* no live FPU state: just clear the CR0.TS bit so
         * that FPU instructions do not fault */
        clts();
}

static inline void kernel_fpu_end(void)
{
    /* set the CR0.TS bit again, so the next FPU access
     * will trap (device-not-available fault) */
    stts();

    /* decrement preempt_count by 1 and reschedule if needed
     * (the thread cannot be preempted while preempt_count != 0) */
    preempt_enable();
}
The FXSAVE instruction is typically used to save the FPU state. However, I believe the memory destination stays the same every time kernel_fpu_begin is called within the same thread; unfortunately, that would mean FXSAVE overwrites the previously saved FPU state.

Therefore I suspect that you CANNOT safely nest kernel_fpu_begin calls.
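If the nesting cannot be avoided by restructuring the callers, one conceivable workaround is a pair of wrappers that count nesting depth, so only the outermost pair actually touches the FPU. This is purely my own sketch, not a kernel API, and as written it assumes exactly one context ever uses the wrappers; a real version would need a per-task or per-CPU counter:

    #include <asm/i387.h>

    /* Hypothetical nesting-aware wrappers -- NOT a kernel API.
     * A single global counter is only safe if exactly one context
     * ever calls these; otherwise make it per-task or per-CPU. */
    static unsigned int fpu_nest_depth;

    static void my_fpu_begin(void)
    {
        if (fpu_nest_depth++ == 0)
            kernel_fpu_begin();  /* only the outermost call saves state */
    }

    static void my_fpu_end(void)
    {
        if (--fpu_nest_depth == 0)
            kernel_fpu_end();    /* only the outermost call restores */
    }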
What I still cannot understand, though, is how the FPU state is being restored, since the kernel_fpu_end call does not appear to execute an FXRSTOR instruction. Also, why is the CR0.TS bit set in the kernel_fpu_end call if we are no longer using the FPU?
Yes: since you define double variables and foo also returns a double value, you have to use the kernel_fpu_begin and kernel_fpu_end calls outside foo as well.

This similar problem also covers certain instances where you can write the code without using the kernel_fpu_begin and kernel_fpu_end calls at all.
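For example (my own sketch of that idea, not code from the linked question), if the values only need a couple of decimal digits of precision, scaled-integer (fixed-point) arithmetic keeps the FPU out of the picture entirely, so neither foo nor its callers need kernel_fpu_begin and kernel_fpu_end:

    /* Fixed-point rewrite of the question's foo(): values are scaled
     * by 100, so 314 represents 3.14 and no FPU registers are used. */
    static unsigned foo_fixed(unsigned num)
    {
        unsigned x = 314;    /* 3.14 scaled by 100 */

        x += num * 100;      /* stay in the scaled domain */
        return x / 100;      /* truncates, like casting the double to unsigned */
    }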