Windows: avoid pushing full x86 context on stack

Question:

I have implemented PARLANSE, a language under MS Windows that uses cactus stacks to implement parallel programs. The stack chunks are allocated on a per-function basis and are just the right size to handle local variables, expression temp pushes/pops, and calls to libraries (including stack space for the library routines to work in). Such stack frames can be as small as 32 bytes in practice and often are.

This all works great unless the code does something stupid and causes a hardware trap... at which point Windows appears to insist on pushing the entire x86 machine context "on the stack". This is some 500+ bytes if you include the FP/MMX/etc. registers, which it does. Naturally, a 500 byte push on a 32 byte stack smashes things it should not. (The hardware pushes a few words on a trap, but not the entire context).
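
For concreteness, here is a tiny C program (assumed to be built as a 32-bit Windows executable) that prints the size of the records Windows has to deposit somewhere when it dispatches an exception; the exact numbers vary by SDK and OS version, so treat it as a sanity check of the "500+ bytes" figure rather than a measurement of everything the dispatcher touches:

    #include <stdio.h>
    #include <windows.h>

    /* Print the size of the user-visible exception records: the CONTEXT
       (integer, control, FP and extended/SSE state) plus the
       EXCEPTION_RECORD.  On 32-bit x86 the CONTEXT alone is already far
       larger than a 32-byte PARLANSE frame. */
    int main(void)
    {
        printf("sizeof(CONTEXT)          = %u bytes\n", (unsigned)sizeof(CONTEXT));
        printf("sizeof(EXCEPTION_RECORD) = %u bytes\n", (unsigned)sizeof(EXCEPTION_RECORD));
        return 0;
    }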

[EDIT 11/27/2012: See this for measured details on the ridiculous amount of stack Windows actually pushes]

Can I get Windows to store the exception context block someplace else (e.g., to a location specific to a thread)? Then the software could take the exception hit on the thread and process it without overflowing my small stack frames.

I don't think this is possible, but I thought I'd ask a much larger audience. Is there an OS standard call/interface that can cause this to happen?

It would be trivial to do in the OS, if I could con MS into letting my process optionally define a context storage location, "contextp", which is initialized to enable the current legacy behavior by default. Then replacing the interrupt/trap vector code:

  hardwareint:  push  context        ; current behavior: dump the full context on the faulting stack
                mov   contextp, esp  ; and remember where it landed

... with ...

  hardwareint:  mov   <somereg>, contextp   ; fetch the per-process/per-thread context pointer
                test  <somereg>, <somereg>  ; zero => keep the legacy behavior
                jnz   $2
                push  context               ; legacy: push the context on the current stack
                mov   contextp, esp
                jmp   $1
         $2:    store context @ <somereg>   ; otherwise write it to the caller-designated area
         $1:    equ   *

with the obvious changes required to save somereg, etc.

[What I do now is: check the generated code for each function. If it has any chance of generating a trap (e.g., divide by zero), or we are debugging (possible bad pointer dereference, etc.), add enough space to the stack frame for the FP context. Stack frames now end up being ~500-1000 bytes in size, programs can't recurse as far, and that is sometimes a real problem for the applications we are writing. So we have a workable solution, but it complicates debugging.]
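
Roughly, that frame-sizing rule amounts to the sketch below; the names (frame_bytes, may_trap, DISPATCH_SLACK) and the slack constant are invented for illustration and are not the actual PARLANSE code generator:

    #include <windows.h>

    #define DISPATCH_SLACK 256   /* assumed extra room for EXCEPTION_RECORD + dispatcher scratch */

    /* may_trap: the function contains divides, dereferences we cannot prove
       safe, FP code, etc.; debugging: the function is compiled for debugging. */
    static size_t frame_bytes(size_t locals, size_t temps, int may_trap, int debugging)
    {
        size_t n = locals + temps;
        if (may_trap || debugging)
            n += sizeof(CONTEXT) + DISPATCH_SLACK;   /* roughly 716 + slack on 32-bit x86 */
        return n;
    }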

EDIT Aug 25: I've managed to get this story to a Microsoft internal engineer who apparently has the authority to find out who in MS might actually care. There might be faint hope for a solution.

EDIT Sept 14: An MS Kernel Group architect has heard the story and is sympathetic. He said MS will consider a solution (like the one proposed), but it is unlikely to arrive in a service pack; it might have to wait for the next version of Windows. (Sigh... I might grow old...)

EDIT: Sept 13, 2010 (1 year later). No action on Microsoft's part. My latest nightmare: when a 32-bit process takes a trap on Windows x64, does Windows push the entire x64 context on the stack before the interrupt handler fakes pushing a 32-bit context? That would be even larger (twice as many integer registers, each twice as wide, and twice as many SSE registers?).

EDIT: February 25, 2012: (1.5 years have gone by...) No reaction on Microsoft's part. I guess they just don't care about my kind of parallelism. I think this is a disservice to the community; the "big stack model" used by MS under normal circumstances limits the number of parallel computations one can have alive at any one instant by eating vast amounts of VM. The PARLANSE model lets one have an application with a million live "grains" in various states of running/waiting; this really occurs in some of our applications where a 100 million node graph is processed "in parallel". The PARLANSE scheme can do this with about 1 GB of RAM, which is pretty manageable. If you tried that with MS's 1 MB "big stacks" you'd need 10^12 bytes of VM just for the stack space, and I'm pretty sure Windows won't let you manage a million threads.

EDIT: April 29, 2014: (4 years have gone by). I guess MS just doesn't read SO. I've done enough engineering on PARLANSE that we only pay the price of large stack frames during debugging or when there are FP operations going on, so we've managed to find very practical ways to live with this. MS has continued to disappoint; the amount of stuff pushed on the stack by various versions of Windows seems to vary considerably, egregiously above and beyond the need for just the hardware context. There's some hint that part of this variability is caused by non-MS products (e.g., antivirus) sticking their noses into the exception handling chain; why can't they do that from outside my address space? Anyway, we handle all this by simply adding a large slop factor for FP/debug traps, and waiting for the inevitable MS system in the field that exceeds that amount.

Answer 1:

Basically you would need to re-implement many interrupt handlers, i.e. hook yourself into the Interrupt Descriptor Table (IDT). The problem is that you would also need to re-implement a kernel-mode -> user-mode callback (for SEH this callback resides in ntdll.dll and is named KiUserExceptionDispatcher; it triggers all the SEH logic). The point is that the rest of the system relies on SEH working the way it does right now, and your solution would break things because you would be doing it system-wide. Maybe you could check which process you are in at the time of the interrupt. Still, the overall concept is prone to errors and very badly affects system stability, imho.
These are actually rootkit-like techniques.

Edit:
Some more details: the reason why you would need to re-implement interrupt handlers is that exceptions (e.g. divide by zero) are essentially software interrupts, and those always go through the IDT. When the exception has been thrown, the kernel collects the context and signals the exception back to user mode (through the aforementioned KiUserExceptionDispatcher in ntdll). You'd need to interfere at this point, and therefore you would also need to provide a mechanism to get back to user mode. (There is a function in ntdll which is used as the entry point from kernel mode - I don't remember the name, but it's something with KiUserACP.....)
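
For completeness, the supported user-mode interception point is a vectored exception handler. The minimal sketch below (32-bit build assumed, hence Eip) only runs after the kernel has already written the CONTEXT and EXCEPTION_RECORD onto the faulting thread's stack and bounced through KiUserExceptionDispatcher, so it can observe the damage but not prevent it:

    #include <stdio.h>
    #include <windows.h>

    /* 32-bit x86 build assumed (ContextRecord->Eip). */
    static LONG CALLBACK OnException(PEXCEPTION_POINTERS info)
    {
        if (info->ExceptionRecord->ExceptionCode == EXCEPTION_INT_DIVIDE_BY_ZERO) {
            /* The full machine state is visible here... */
            printf("divide by zero at EIP=%#lx\n",
                   (unsigned long)info->ContextRecord->Eip);
            /* ...but it was copied to this thread's stack before we got control. */
        }
        return EXCEPTION_CONTINUE_SEARCH;   /* let normal SEH dispatch continue */
    }

    int main(void)
    {
        AddVectoredExceptionHandler(1 /* call before frame-based handlers */, OnException);
        /* ... run code that might trap ... */
        return 0;
    }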



Answer 2:

Consider decoupling the parameter/local stack from the real one. Use another register (e.g., EBP) as the effective stack pointer, and leave the ESP-based stack the way Windows wants it.

You can't use PUSH/POP anymore; you'd have to use a SUB/MOV/MOV/MOV combo instead of a PUSH. But hey, it beats patching the OS.
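
A rough C rendering of that idea, with the roughly equivalent assembly noted in comments; the arena, its size, and the helper names are made up for illustration, and a real implementation would keep the pointer pinned in a register such as EBP:

    #include <stddef.h>

    /* The arena and its size are stand-ins; real chunks would be allocated
       per function, as the question describes. */
    static char arena[1 << 20];
    static char *psp = arena + sizeof arena;   /* plays the role of EBP; grows down */

    static void psp_push(int value)            /* replaces: PUSH eax       */
    {
        psp -= sizeof value;                   /*   SUB ebp, 4             */
        *(int *)psp = value;                   /*   MOV [ebp], eax         */
    }

    static int psp_pop(void)                   /* replaces: POP eax        */
    {
        int value = *(int *)psp;               /*   MOV eax, [ebp]         */
        psp += sizeof value;                   /*   ADD ebp, 4             */
        return value;
    }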



Answer 3:

If Windows uses the x86 hardware trap machinery directly, you need ring 0 access (via a driver or an API) to change which gate is used for traps.

An x86 gate points to one of:

  • an interrupt/trap gate (code segment + offset pointer): the handler is invoked with a frame pushed on the stack in use at that point (the current esp, or the ring-0 stack on a privilege change); the hardware pushes only a few words, and the handler saves whatever else it needs there, or
  • a task gate (a task descriptor), which switches to another hardware task (which can be looked upon as a hardware-supported thread); the interrupted register state is saved in the task's TSS, and the handler runs on that task's own stack (its esp) instead.

You of course want the latter. I would look at how Wine implemented it; that might prove more effective than asking Google.

My guess is that you unfortunately need to implement a driver to get it working on x86, and according to Wikipedia it is impossible for drivers to change this on the IA-64 platform. The second-best option might be to interleave space in your stacks, so that a context push from a trap always fits?
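
For reference, each 8-byte IDT entry ("gate") on 32-bit x86 looks roughly like the packed struct below; the type_attr field is what distinguishes an interrupt/trap gate from a task gate (layout as described in the Intel manuals, field names invented):

    #include <stdint.h>

    #pragma pack(push, 1)
    typedef struct {
        uint16_t offset_low;   /* handler offset, bits 15..0 (unused for task gates)     */
        uint16_t selector;     /* code-segment selector, or TSS selector for a task gate */
        uint8_t  reserved;     /* always zero                                            */
        uint8_t  type_attr;    /* P, DPL, type: 0x8E = interrupt gate, 0x8F = trap gate,
                                  0x85 = task gate (ring-0 examples)                     */
        uint16_t offset_high;  /* handler offset, bits 31..16 (unused for task gates)    */
    } IDT_GATE32;
    #pragma pack(pop)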



Answer 4:

I ran out of space in the comment box...

Anyway, I'm not sure where the vector points; I was basing the comment on SDD's answer and its mention of "KiUserExceptionDispatcher"... except that upon further searching (http://www.nynaeve.net/?p=201) it looks like by that point it might already be too late.

SIDT can be executed in ring 3... this will reveal the location (base and limit) of the interrupt table, and you may be able to load the segment and at least read the contents of the table. With any luck you can then read the entry for (for example) vector 0 (divide by zero), and read the contents of its handler.
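
A small sketch of that first step, assuming a 32-bit MSVC build (inline __asm); note that SIDT only reveals where the IDT lives, not its contents, and newer CPUs/OSes (e.g., with UMIP) may block or virtualize it:

    #include <stdio.h>
    #include <stdint.h>

    #pragma pack(push, 1)
    typedef struct {
        uint16_t limit;   /* size of the IDT in bytes, minus 1 */
        uint32_t base;    /* linear address of the IDT          */
    } IDTR32;
    #pragma pack(pop)

    int main(void)
    {
        IDTR32 idtr = { 0, 0 };
        __asm { sidt idtr }      /* store IDTR; not privileged on classic x86 */
        printf("IDT base = %08lx, limit = %04x (%u entries)\n",
               (unsigned long)idtr.base, (unsigned)idtr.limit,
               (idtr.limit + 1u) / 8u);
        return 0;
    }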

At this point I'd try to match hex bytes to match the code with a system file, but there may be a better way to determine which file the code belongs to (it's not necessarily a DLL; it could be win32k.sys, or it could be dynamically generated, who knows). I don't know if there's a way to dump the physical memory layout from user mode.

If all else fails, you could either set up a kernel-mode debugger or emulate Windows (Bochs), where you can view the interrupt tables and memory layout directly. Then you could trace up to the point where the CONTEXT is pushed, and look for an opportunity to gain control before that happens.



Answer 5:

Windows exception handling is called SEH. IIRC you can disable it, but the runtime of the language you are using might not like it.
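
For readers who haven't met SEH, here is a minimal MSVC __try/__except sketch; note that any frame- or vector-based handler runs only after the kernel has already pushed the context onto the faulting thread's stack, so this does not avoid the problem in the question:

    #include <stdio.h>
    #include <windows.h>

    static int filter(unsigned int code)
    {
        return code == EXCEPTION_INT_DIVIDE_BY_ZERO ? EXCEPTION_EXECUTE_HANDLER
                                                    : EXCEPTION_CONTINUE_SEARCH;
    }

    int main(void)
    {
        volatile int zero = 0;
        __try {
            printf("%d\n", 1 / zero);            /* raises EXCEPTION_INT_DIVIDE_BY_ZERO */
        }
        __except (filter(GetExceptionCode())) {
            puts("caught an integer divide by zero via SEH");
        }
        return 0;
    }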