This question comes from the recent question "Correct way to cap Mathematica memory use?"
I wonder: is it possible to programmatically restart MathKernel, keeping the current FrontEnd process connected to the new MathKernel process and evaluating some code in the new MathKernel session? I mean a "transparent" restart that allows a user to continue working with the FrontEnd while having a fresh new MathKernel process with some code from the previous kernel evaluated/evaluating in it?
The motivation for the question is to have a way to automate restarting MathKernel when it takes too much memory, without breaking the computation. In other words, the computation should be continued automatically in a new MathKernel process without user interaction (while keeping the user's ability to interact with Mathematica as before). The details of what code should be evaluated in the new kernel are of course specific to each computational task; I am looking for a general way to continue the computation automatically.
The following approach uses one kernel to open a front end that has its own (second) kernel; the front end is then closed and reopened, renewing the second kernel.
This file is the MathKernel input, C:\Temp\test4.m
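The file's contents are not reproduced here; as a rough, hedged sketch, a controller script of this kind could look like the following (the paths and Pause lengths are guesses, and closing/reopening the notebook stands in for the original's full front-end restart):

(* open the demo notebook and evaluate its first cell *)
UsingFrontEnd[
  nb = NotebookOpen["C:\\Temp\\run.nb"];
  SelectionMove[nb, Before, Notebook];
  SelectionMove[nb, Next, Cell];
  SelectionEvaluate[nb]
];
Pause[30];                           (* crude: wait for the first evaluation to finish *)
UsingFrontEnd[NotebookClose[nb]];    (* the original quits and relaunches the front end here *)
Pause[5];
(* reopen the notebook and evaluate its second (last) cell *)
UsingFrontEnd[
  nb = NotebookOpen["C:\\Temp\\run.nb"];
  SelectionMove[nb, After, Notebook];
  SelectionMove[nb, Previous, Cell];
  SelectionEvaluate[nb]
];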
The demo notebook, C:\Temp\run.nb, contains two cells:
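The two cells are not shown here; as a placeholder, they might form a pair like this, where the first cell does the first part of the work and saves it, and the second cell reloads it and finishes (the file name and symbols are made up):

(* cell 1 *)
part1 = Table[RandomReal[], {10^6}];
DumpSave[FileNameJoin[{$TemporaryDirectory, "part1.mx"}], part1];

(* cell 2 *)
Get @ FileNameJoin[{$TemporaryDirectory, "part1.mx"}];
result = Total[part1]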
The initial kernel opens a front-end and runs the first cell, then it quits the front-end, reopens it and runs the second cell.
The whole thing can be run either by pasting (in one go) the MathKernel input into a kernel session, or it can be run from a batch file, e.g. C:\Temp\RunTest2.bat
It's a little elaborate to set up, and in its current form it depends on knowing how long to wait before closing and restarting the second kernel.
You can programmatically terminate the kernel using Exit[]. The front end (notebook) will automatically start a new kernel when you next try to evaluate an expression.

Preserving "some code from the previous kernel" is going to be more difficult. You have to decide what you want to preserve. If you think you want to preserve everything, then there's no point in restarting the kernel. If you know what definitions you want to save, you can use DumpSave to write them to a file before terminating the kernel, and then use << to load that file into the new kernel.
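For example, a minimal sketch of that save-and-reload round trip (the file name and the symbol bigData are placeholders, not part of the original answer):

(* in the old kernel: save the definitions worth keeping, then quit *)
bigData = RandomReal[1, 10^6];
DumpSave[FileNameJoin[{$TemporaryDirectory, "saved.mx"}], bigData];
Exit[]

(* in the fresh kernel started by the front end: reload them *)
Get @ FileNameJoin[{$TemporaryDirectory, "saved.mx"}]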
On the other hand, if you know what definitions are taking up too much memory, you can use Unset, Clear, ClearAll, or Remove to remove those definitions. You can also set $HistoryLength to something smaller than Infinity (the default) if that's where your memory is going.

Sounds like a job for CleanSlate.
From: http://library.wolfram.com/infocenter/TechNotes/4718/
"CleanSlate, tries to do everything possible to return the kernel to the state it was in when the CleanSlate.m package was initially loaded."
Perhaps the parallel computation machinery could be used for this? Here is a crude set-up that illustrates the idea:
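The original definitions are not reproduced here; the following is a hedged sketch of the kind of framework described below (doStep, the 1,000-element target, and the failure test are placeholders, not the original code):

(* unit of work, evaluated in the subkernel: produce the next triple,
   numbering it from the distributed copy of resultSoFar *)
doStep[] := N @ {Length[resultSoFar] + 1, RandomReal[], RandomReal[]}

getTheJobDone[] :=
  Module[{kernel, value},
    resultSoFar = {};
    kernel = First @ LaunchKernels[1];
    While[Length[resultSoFar] < 1000,
      DistributeDefinitions[doStep, resultSoFar];   (* send current state to the subkernel *)
      value = Quiet @ ParallelEvaluate[doStep[], kernel];
      If[ListQ[value],
        AppendTo[resultSoFar, value],               (* keep the subkernel's triple *)
        Print["Ouch!"];                             (* subkernel died: replace it *)
        Quiet @ CloseKernels[kernel];
        kernel = First @ LaunchKernels[1]
      ]
    ];
    CloseKernels[kernel];
    resultSoFar
  ]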
This is an over-elaborate setup to generate a list of 1,000 triples of numbers. getTheJobDone runs a loop that continues until the result list contains the desired number of elements. Each iteration of the loop is evaluated in a subkernel. If the subkernel evaluation fails, the subkernel is relaunched. Otherwise, its return value is added to the result list.

To try this out, evaluate:
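Using the sketch above, that would be:

getTheJobDone[]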
To demonstrate the recovery mechanism, open the Parallel Kernel Status window and kill the subkernel from time to time. getTheJobDone will feel the pain and print Ouch! whenever the subkernel dies. However, the overall job continues and the final result is returned.

The error handling here is very crude and would likely need to be bolstered in a real application. Also, I have not investigated whether really serious error conditions in the subkernels (like running out of memory) would have an adverse effect on the main kernel. If so, then perhaps subkernels could kill themselves if MemoryInUse[] exceeded a predetermined threshold.

Update - Isolating the Main Kernel From Subkernel Crashes
While playing around with this framework, I discovered that any use of shared variables between the main kernel and subkernel rendered Mathematica unstable should the subkernel crash. This includes the use of DistributeDefinitions[resultSoFar] as shown above, and also explicit shared variables using SetSharedVariable.

To work around this problem, I transmitted the resultSoFar through a file. This eliminated the synchronization between the two kernels, with the net result that the main kernel remained blissfully unaware of a subkernel crash. It also had the nice side-effect of retaining the intermediate results in the event of a main kernel crash as well. Of course, it also makes the subkernel calls quite a bit slower. But that might not be a problem if each call to the subkernel performs a significant amount of work.

Here are the revised definitions:
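The revised code is not reproduced here either; a hedged sketch of the file-based hand-off, building on the earlier sketch (the progress file and doStep remain placeholders):

(* the subkernel reads the progress file, appends one triple, writes it back,
   and reports only the current count to the main kernel *)
progressFile = FileNameJoin[{$TemporaryDirectory, "resultSoFar.mx"}];

doStep[file_String] :=
  Module[{done = If[FileExistsQ[file], Import[file], {}]},
    done = Append[done, N @ {Length[done] + 1, RandomReal[], RandomReal[]}];
    Export[file, done];
    Length[done]
  ]

getTheJobDone[] :=
  Module[{kernel, n = 0},
    kernel = First @ LaunchKernels[1];
    DistributeDefinitions[doStep];
    While[n < 1000,
      n = Quiet @ With[{file = progressFile},      (* inject the path; nothing is shared *)
        ParallelEvaluate[doStep[file], kernel]];
      If[!IntegerQ[n],
        Print["Ouch!"];                            (* subkernel died: replace it *)
        n = Length @ If[FileExistsQ[progressFile], Import[progressFile], {}];
        Quiet @ CloseKernels[kernel];
        kernel = First @ LaunchKernels[1];
        DistributeDefinitions[doStep]              (* make sure the new kernel knows doStep *)
      ]
    ];
    CloseKernels[kernel];
    Import[progressFile]                           (* the accumulated triples *)
  ]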
I have a similar requirement when I run a CUDAFunction for a long loop and CUDALink runs out of memory (similar to this: https://mathematica.stackexchange.com/questions/31412/cudalink-ran-out-of-available-memory). There is no improvement on the memory leak even with the latest Mathematica 10.4 version. I figured out a workaround and hope that you may find it useful. The idea is to use a bash script to call a Mathematica program (run in batch mode) multiple times, passing parameters from the bash script. Here are the detailed instructions and a demo (this is for the Windows OS):
Here is a demo of the test.m file
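The original file isn't shown here; a hedged sketch of what test.m could look like, assuming it is invoked with the kernel's -script option (e.g. math -script test.m 1) so that the parameter arrives in $ScriptCommandLine:

(* read the chunk index passed on the command line *)
Needs["CUDALink`"];
n = ToExpression @ Last @ $ScriptCommandLine;
data = RandomReal[1, {512, 512}];
result = CUDADot[data, data];                                (* stand-in for the real CUDA work *)
Export["C:\\Temp\\result_" <> ToString[n] <> ".mx", result]  (* each run saves its own chunk *)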
This Mathematica code reads the parameter from the command line and uses it for the calculation. Here is the bash script (script.sh) to run test.m many times with different parameters.
In the Cygwin terminal, type "chmod a+x script.sh" to make the script executable; then you can run it by typing "./script.sh".
From a comment by Arnoud Buzing yesterday, on Stack Exchange Mathematica chat, quoted in full:
In a notebook, if you have multiple cells, you can put Quit in a cell by itself and set this option:
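The option is not reproduced above; presumably it is the front end setting that stops the evaluation queue from being cleared when the kernel quits, along these lines (the exact option name and target are my assumption):

(* assumed: keep queued evaluations alive across a kernel Quit *)
SetOptions[$FrontEndSession, "ClearEvaluationQueueOnKernelQuit" -> False]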
Then if you have a cell above it and below it and select all three and evaluate, the kernel will Quit but the frontend evaluation queue will continue (and restart the kernel for the last cell).
-- Arnoud Buzing