I'm not exactly sure how to tag this question or how to write the title, so if anyone has a better idea, please edit it.
Here's the deal:
Some time ago I wrote a small but crucial part of a computing olympiad management system. The system's job is to get submissions from participants (code files), compile them, run them against predefined test cases, and return results. Plus all the rest of the stuff you can imagine it should do.
The part I wrote was called Limiter. It is a small program whose job is to take another program and run it in a controlled environment. Controlled in this case means limits on available memory, computing time, and access to system resources. In addition, if the program crashes, I should be able to determine the type of the exception and report it to the user. Also, when the process terminates, the Limiter should record how long it executed (with a resolution of at least 0.01 seconds, preferably better).
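For reference, reading the pure CPU time afterwards is the easy bit; a sketch like the following (assuming hProcess is a handle to the finished child process, and GetUserSeconds is just a name I made up for illustration) gets it in 100-ns units, well within the required resolution:

#include <windows.h>

// Sketch: read the child's CPU times after it exits. The kernel and
// user FILETIMEs are durations in 100-ns ticks, far finer than the
// required 0.01 s resolution.
double GetUserSeconds(HANDLE hProcess)
{
    FILETIME creationTime, exitTime, kernelTime, userTime;
    GetProcessTimes(hProcess, &creationTime, &exitTime,
                    &kernelTime, &userTime);

    ULONGLONG ticks = ((ULONGLONG)userTime.dwHighDateTime << 32)
                      | userTime.dwLowDateTime;
    return ticks / 1e7;  // 100-ns ticks -> seconds
}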
Of course, the ideal solution to this would be virtualization, but I'm not experienced enough to write that.
My solution to this was split into three parts.
The simplest part was the access to system resources. The program is simply executed with a limited access token. I combined some of the basic SIDs (Everyone, Anonymous, etc.) that are available to all processes into a restricted token that provides practically read-only access to the system, with the exception of the folder the program executes in.
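Stripped of error handling and cleanup, that part looks roughly like this (a simplified sketch, not my exact code; cmdLine stands for the contestant's command line, and restricting to just the Everyone SID is an illustration, not the real SID set):

#include <windows.h>

// Sketch: create a restricted token from the current token and start
// the child with it. Error handling and cleanup are omitted.
PROCESS_INFORMATION RunWithRestrictedToken(LPWSTR cmdLine)
{
    HANDLE hToken = NULL, hRestricted = NULL;
    OpenProcessToken(GetCurrentProcess(), TOKEN_ALL_ACCESS, &hToken);

    // Restrict the token to the Everyone SID and drop all privileges;
    // a real limiter would pick SIDs amounting to read-only access
    // everywhere except the working folder.
    SID_IDENTIFIER_AUTHORITY worldAuth = SECURITY_WORLD_SID_AUTHORITY;
    PSID everyoneSid = NULL;
    AllocateAndInitializeSid(&worldAuth, 1, SECURITY_WORLD_RID,
                             0, 0, 0, 0, 0, 0, 0, &everyoneSid);
    SID_AND_ATTRIBUTES restrictSid = { everyoneSid, 0 };

    CreateRestrictedToken(hToken, DISABLE_MAX_PRIVILEGE,
                          0, NULL,         // SIDs to disable
                          0, NULL,         // privileges to delete
                          1, &restrictSid, // restricting SIDs
                          &hRestricted);

    // Start suspended so the job object limits can be applied first.
    STARTUPINFOW si = {};
    si.cb = sizeof(si);
    PROCESS_INFORMATION pi = {};
    CreateProcessAsUserW(hRestricted, NULL, cmdLine, NULL, NULL,
                         FALSE, CREATE_SUSPENDED, NULL, NULL, &si, &pi);
    return pi;
}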
The limitation of memory was done through job objects, which allow specifying a maximum memory limit per process.
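In code, that reduces to something like this sketch (memoryLimitBytes is a hypothetical parameter, e.g. 64 * 1024 * 1024 for 64 MB; hProcess is the still-suspended child):

#include <windows.h>

// Sketch: put the (still suspended) child process into a job object
// with a hard per-process memory cap.
HANDLE LimitMemory(HANDLE hProcess, SIZE_T memoryLimitBytes)
{
    HANDLE hJob = CreateJobObjectW(NULL, NULL);

    JOBOBJECT_EXTENDED_LIMIT_INFORMATION limits = {};
    limits.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
    limits.ProcessMemoryLimit = memoryLimitBytes;

    SetInformationJobObject(hJob, JobObjectExtendedLimitInformation,
                            &limits, sizeof(limits));
    AssignProcessToJobObject(hJob, hProcess);

    return hJob;  // keep the handle: the job also holds the time counters
}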
And lastly, to limit execution time and catch all the exceptions, my Limiter attaches to the process as a debugger. That way I can monitor the time it has spent and terminate it if it takes too long. Note that I cannot use job objects for this, because they only report kernel time and user time for the job. A process might do something like Sleep(99999999), which would count toward neither, but would still tie up the testing machine. Thus, although I don't count a process's idle time in its final execution time, it still has to have a limit.
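The heart of my debugger loop looks roughly like this (a simplified sketch, not my exact code; it assumes the child was created with the DEBUG_ONLY_THIS_PROCESS flag, and ReportCrash is a hypothetical helper that turns an exception code into a user-visible message):

#include <windows.h>

void ReportCrash(DWORD exceptionCode);  // hypothetical reporting helper

// Sketch: pump debug events, enforce a wall-clock limit, and catch
// the exception that kills the child.
void DebugLoop(PROCESS_INFORMATION pi, DWORD wallClockLimitMs)
{
    DWORD start = GetTickCount();
    BOOL seenInitialBreakpoint = FALSE;
    DEBUG_EVENT ev;

    for (;;)
    {
        // Wake at least every 100 ms so the clock can be checked
        // even while the child sleeps or idles.
        if (!WaitForDebugEvent(&ev, 100))
        {
            if (GetTickCount() - start > wallClockLimitMs)
                TerminateProcess(pi.hProcess, 1);  // limit exceeded
            continue;  // the exit event will still arrive below
        }

        DWORD status = DBG_CONTINUE;
        if (ev.dwDebugEventCode == EXCEPTION_DEBUG_EVENT)
        {
            EXCEPTION_DEBUG_INFO &e = ev.u.Exception;
            if (e.ExceptionRecord.ExceptionCode == EXCEPTION_BREAKPOINT
                && !seenInitialBreakpoint)
            {
                // The loader breakpoint every debuggee raises once.
                seenInitialBreakpoint = TRUE;
            }
            else if (!e.dwFirstChance)
            {
                // Second chance: the child could not handle it.
                ReportCrash(e.ExceptionRecord.ExceptionCode);
                TerminateProcess(pi.hProcess, 1);
            }
            else
            {
                // First chance: give the child's own handlers a shot.
                status = DBG_EXCEPTION_NOT_HANDLED;
            }
        }
        else if (ev.dwDebugEventCode == EXIT_PROCESS_DEBUG_EVENT)
        {
            ContinueDebugEvent(ev.dwProcessId, ev.dwThreadId, DBG_CONTINUE);
            break;
        }
        ContinueDebugEvent(ev.dwProcessId, ev.dwThreadId, status);
    }
}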
Now, I'm no expert in low-level stuff like this. I spent a few days reading MSDN and playing around, and came up with the best solution I could. Unfortunately, it doesn't seem to run as well as could be expected. For the most part it seems to work fine, but weird cases keep creeping up. Just now I have a little C++ program which runs in a split second on its own, but my Limiter reports 8 seconds of user-mode time (taken from the job counters). Here's the code. It prints the output in about half a second and then spends more than 7 seconds just waiting:
#include <iostream>
#include <vector>
using namespace std;
int main()
{
    vector< vector<int> > dp(50000, vector<int>(4, -1));
    cout << dp.size();
}
The code of the limiter is pretty lengthy, so I'm not including it here. I also feel that there might be something wrong with my approach - perhaps I shouldn't do the debugger stuff. Perhaps there are some common pitfalls that I don't know of.
I would like some advice on how other people would tackle this problem. Perhaps there is already something that does this, and my Limiter is obsolete?
Added: The problem seems to be in the little program that I posted above. I've opened a new question for it, since it is somewhat unrelated. I'd still like comments on this approach to limiting a program.