I use Jupyter Notebook to run a series of experiments that take some time. Certain cells take far too long to execute, so it's natural that I'd like to close the browser tab and come back later. But when I do, the kernel interrupts the running code.
I guess there is a workaround for this, but I can't find it.
The simplest workaround to this seems to be the built-in cell magic `%%capture`: start the long-running cell with `%%capture output`. Save, close the tab, come back later. The output is now stored in the `output` variable, and showing it will replay all interim `print` results as well as the plain or rich output of the cell.
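A minimal sketch of the pattern (the cell bodies here are illustrative, not from the original answer):

```python
%%capture output
# Long-running code whose output you want to keep goes here.
import time
for i in range(10):
    time.sleep(60)
    print(f"finished stage {i}")
```

Then, in a later cell, after reopening the notebook:

```python
# `output` is an IPython CapturedIO object; show() replays the
# captured stdout/stderr and any rich display output.
output.show()
```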
TL;DR: Code doesn't stop when a tab closes, but the output can no longer find the browser session it belongs to and loses the information about how it is supposed to be displayed, so it throws away all new output received until the code that was running when the tab closed finishes.
Long Version:
Unfortunately, this isn't implemented (as of Nov 24th). If there's a workaround, I can't find it either. (Still looking; I'll update with news.) There is a workaround that saves the output and then reprints it, but it won't work if code is still running in that notebook. An alternative would be to have a second notebook that you can receive the output in.
I also need this functionality, and for the same reason. The kernel doesn't shut down or get interrupted when a tab closes, and the code doesn't stop running. The warning given is exactly correct: "The kernel is busy, outputs may be lost."
Running a cell that sleeps for 100 seconds in one box, then closing the tab, opening it up again, and running `print(100)` from another box will cause the notebook to hang until the 100 seconds have finished and the first cell completes; then it will print 100.
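A reconstruction of that experiment (the exact cell contents were lost; these are illustrative):

```python
# Box 1: run this, then close the tab while it is still executing.
import time
time.sleep(100)
```

```python
# Box 2: after reopening the tab, run this in another cell.
# It queues behind Box 1 and only prints once the sleep finishes.
print(100)
```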
When a tab is closed and you return, the Python process will be in the same state you left it in (as of the last completed save). That is their intended behavior, and it is what they should have been clearer about in their documentation. The output from the running code actually does get sent to the browser upon reopening it (I've lost the reference that explains this), so hacks like the one in this comment will work, as they can receive those messages and just throw them into some cell.
Output is, in effect, only saved in an accessible way through the endpoint connection. They've been working on this for a while (since before Jupyter), although I cannot find the current issue in the Jupyter repository (this one references it, but is not it).
The only general workaround seems to be finding a computer you can always leave on, leaving the page open there while the code runs, and then remoting in, or relying on autosave to be able to access the results elsewhere. This is a bad way to do it, but unfortunately it's the way I have to for now.
I have been struggling with this issue as well for some time now.
My workaround was to write all my logs to a file, so that when my browser closes (indeed, when a lot of logs come through the browser, it hangs too) I can follow the kernel job's progress by opening the log file (the log file can be opened with Jupyter too).
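A minimal sketch of that pattern using the standard `logging` module (the file name is illustrative):

```python
import logging

# Send all log records to a file so they survive a closed tab.
# force=True replaces any handlers Jupyter may already have installed.
logging.basicConfig(
    filename="experiment.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
    force=True,
)
log = logging.getLogger(__name__)

for step in range(1_000):
    # ... long-running work for this step ...
    log.info("finished step %d", step)
```

The file can then be tailed from a terminal or opened through Jupyter's file browser while the kernel is still running.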
First, install
Now run your notebook in the background with the command below:
The output file will be saved as it runs, and you can also watch the logs while it's running with:
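The exact package and commands were lost from this answer; a sketch of one common way to achieve this, assuming `jupyter nbconvert`'s execute mode (file names are illustrative):

```bash
# Execute the notebook headlessly in the background; nohup keeps the
# process alive after the terminal closes, and all console output is
# appended to notebook.log.
nohup jupyter nbconvert --to notebook --execute \
    --output Experiment-out.ipynb Experiment.ipynb >> notebook.log 2>&1 &

# Follow the log while the notebook runs.
tail -f notebook.log
```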