I'm used to debugging my code using ghci. Often, something like this happens (not quite so obviously, of course):
ghci> let f@(_:x) = 0 : 1 : zipWith (+) f x
ghci> length f
Then nothing happens for some time, and if I don't react fast enough, ghci has eaten maybe 2 GB of RAM and my system freezes. If it's too late, the only way out is [ALT] + [PRINT] + [K].
My question: Is there an easy way to limit the memory that ghci can consume to, let's say, 1 GB? If the limit is exceeded, the computation should be aborted or ghci should be killed.
Running it under a shell with ulimit -m set is a fairly easy way. If you want to run with some limit on a regular basis, you can create a wrapper script that does ulimit before running ghci.
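A minimal sketch of such a wrapper (assuming a POSIX shell; the 1 GB figure is only illustrative, and whether ulimit -m is actually enforced depends on your platform):

#!/bin/sh
# Cap the memory available to this shell and its children, then start ghci,
# passing any arguments through.
ulimit -m 1048576    # resident-set-size limit in kB, roughly 1 GB
exec ghci "$@"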
A platform-independent way to accomplish this is to supply the -M option to the Haskell runtime; see the GHC documentation's page on how to control the RTS (runtime system) for details.
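For example, a 1 GB maximum heap (the size here is only an example) can be requested when starting ghci:

ghci +RTS -M1g -RTS

When the heap grows past that limit, the RTS reports that the heap is exhausted instead of consuming all available memory.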
The ghci output now looks like: