I am using a 32-bit Perl on my OpenVMS system (so Perl can access up to 2 GB of virtual address space).
I am hitting "Out of memory!" in a large Perl script. I zeroed in on the variables causing this, but after my tests with Devel::Size it turns out the array is using only about 13 MB of memory and the hash is using much less than that.
My question is about memory profiling this Perl script on VMS: is there a good way of doing memory profiling on VMS?
I used Devel::Size to get the size of the array and the hash (the array has local scope, the hash has global scope).
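The relevant part of the script boils down to something like this (simplified sketch only; the variable names and data below are placeholders, not the real ones from scanner_SCANDIR.PL):

use strict;
use warnings;
use Devel::Size qw(total_size);

# placeholder data standing in for the real per-directory array and hash
my @array = ('A_FAIRLY_LONG_FILE_NAME.DAT;1') x 10_000;
my %hash;
$hash{"key_$_"} = [ (0) x 10 ] for 1 .. 10_000;

# total_size() walks each structure and reports bytes, including referenced data
print total_size(\@array), " is total on array\n";
print total_size(\%hash),  " is total on hash\n";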
DV Z01 A4:[INTRO_DIR]$ perl scanner_SCANDIR.PL
Directory is Z3:[new_dir]
13399796 is total on array
3475702 is total on hash
Directory is Z3:[new_dir.subdir]
2506647 is total on array
4055817 is total on hash
Directory is Z3:[new_dir.subdir.OBJECT]
5704387 is total on array
6040449 is total on hash
Directory is Z3:[new_dir.subdir.XFET]
1585226 is total on array
6390119 is total on hash
Directory is Z3:[new_dir.subdir.1]
3527966 is total on array
7426150 is total on hash
Directory is Z3:[new_dir.subdir.2]
1698678 is total on array
7777489 is total on hash
(edited: mis-spelled PGFLQUOTA)
Where is that output coming from? To OpenVMS folks it suggests files in directories, which the code might be sucking in. There is typically considerable malloc/alignment overhead for each element saved.
Anyway, the available ADDRESSABLE memory when strictly using 32-bit pointers on OpenVMS is 1 GB, not 2 GB: 0x0 .. 0x3fffffff for program code and (malloc) data in 'P0' space. There is also room in P1 space (0x40000000 .. 0x7fffffff) for thread-local stack storage, but Perl does not use (much of) that.
From a second session you can look at that with DCL:
$ pid = "xxxxxxxx"
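$ ! first free virtual address in P0 and P1 space, i.e. how far the address range has been consumed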
$ write sys$output f$getjpi(pid,"FREP0VA"), " ", f$getjpi(pid,"FREP1VA")
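$ ! process-private and global (shared) page counts, i.e. the pages in use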
$ write sys$output f$getjpi(pid,"PPGCNT"), " ", f$getjpi(pid,"GPGCNT")
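$ ! the page-file quota that limits how much the process may allocate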
$ write sys$output f$getjpi(pid,"PGFLQUOTA")
However... those are just address ranges, NOT how much memory the process is allowed to use. That is governed by the process page-file quota (PGFLQUOTA). Check it with $ SHOW PROC/QUOTA before running Perl. Actual usage can be reported from the outside as shown above, by adding the private pages (PPGCNT) and the global, shared pages (GPGCNT).
Another nice way to look at memory (and other quota) usage is $ SHOW PROC/CONT ... and then hit "q".
So how many elements are stored in each large active array? How large is an average element, rounded up to 16 bytes? How many hash elements? How large are key + value on average (round up generously)? A sketch for collecting those numbers follows after these questions.
What is the exact message?
Does the program 'blow up' right away, or after a while (so that you can use SHOW PROC/CONT)?
Is there a source file data set (size) that does work?
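If it helps, here is a minimal sketch of one way to gather those numbers, using the same Devel::Size module already in use; @array and %hash are placeholders to be replaced by the real variables in the script:

use strict;
use warnings;
use Devel::Size qw(total_size);

# stand-ins for the real structures; substitute the actual variables
my @array = ('A_FAIRLY_LONG_FILE_NAME.DAT;1') x 10_000;
my %hash;
$hash{"key_$_"} = "value_$_" for 1 .. 10_000;

# average bytes per array element as seen by Devel::Size
my $n = @array;
printf "array: %d elements, %.0f bytes/element\n",
       $n, $n ? total_size(\@array) / $n : 0;

# average key length and value size for the hash
my $keys = scalar keys %hash;
my ($klen, $vsize) = (0, 0);
while ( my ($k, $v) = each %hash ) {
    $klen  += length $k;        # raw key length
    $vsize += total_size(\$v);  # value size, following references
}
printf "hash: %d keys, avg key %.1f bytes, avg value %.1f bytes\n",
       $keys, $keys ? $klen / $keys : 0, $keys ? $vsize / $keys : 0;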
Cheers,
Hein.