I want to render 4 million triangles in my Windows-based software, which is built with Visual Studio C++ 2010 (Release mode). When I render 3.9 million triangles, the total RAM consumed by the software is 400 MB. But when I try to render 4 million triangles (just 100K more), the system gives me an error.
For example:
Point *P = new (std::nothrow) Point[nb_triangles]; // std::nothrow makes new return NULL on failure instead of throwing (Point holds three floats: X, Y, Z)
if (P == NULL)
    message("System can't allocate this much memory."); // This is the error I get, meaning the system can't reserve that much memory for this operation.
I have to allocate memory for vertices, face normals, vertex normals, etc.
What I don't understand is this: I have 8 GB of RAM (of which 32-bit Windows XP can use about 3.2 GB), the software has only reserved 400 MB, and more than 1 GB of RAM is still free. Yet when I try to render just 100K more triangles, it fails. Why does it give me an error when there is still more than 1 GB of free RAM?
Is there any way to fix this issue? How can I make all the available memory usable by my application? Because of this problem I have to cap the software at 3.9 million triangles, which is not good.
One more question: the C++ "new" operator gives me this allocation error; what about C's "malloc"? Can "malloc" fix this issue, and is there any difference between the two?
Please guide me. Thanks.
Update # 1:
I have tried a lot: modified the code, removed memory leaks, etc., but I still cannot allocate more than about 4 million elements. It is not possible to change my whole code over to "vector"; I am stuck with my own data structures and "new" for now. The following are the pointers I need to allocate in order to render one object:
P = new points[10000000]; // points is the class with 3 floats X, Y, Z;
N = new Norm[10000000]; // Norm is the class with 3 floats X, Y, Z;
V = new vNorm[10000000]; // vNorm is the class with 3 floats X, Y, Z;
T = new Tri[10000000]; // Tri is the class with 3 integers v1, v2, v3;
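For scale, here is a rough sketch of what these four allocations ask for, assuming 4-byte float/int members and no struct padding (the struct definitions just mirror the classes named above):

#include <cstddef>
#include <iostream>

// Stand-ins for the classes above, assuming plain 4-byte members and no padding.
struct points { float X, Y, Z; };
struct Norm   { float X, Y, Z; };
struct vNorm  { float X, Y, Z; };
struct Tri    { int v1, v2, v3; };

int main()
{
    const std::size_t count    = 10000000;                 // elements per array
    const std::size_t perArray = count * sizeof(points);   // ~114 MB, and each array must be one contiguous block
    const std::size_t total    = count * (sizeof(points) + sizeof(Norm) + sizeof(vNorm) + sizeof(Tri)); // ~458 MB in total

    std::cout << "each array: " << perArray / (1024 * 1024) << " MB contiguous, "
              << "total: " << total / (1024 * 1024) << " MB\n";
    return 0;
}

Each of those new[] calls needs one contiguous free hole of that size inside the 2 GB address space of a 32-bit process, which is what the answers below are getting at.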
For one of the questions:

"is there any difference between these two?"

The difference between new and malloc is as follows:

malloc is used in C; it allocates uninitialized memory, and the allocated memory has to be released with free. new initializes the allocated memory by calling the corresponding constructor, and memory allocated with new should be released with delete (which calls the destructor). You don't need to give the size of the memory block in order to release the allocated memory.

It is not clear whether new and malloc are related according to the standard (it depends on whether a specific compiler implements new using malloc or not), so the issue may or may not be resolved by simply replacing new with malloc.

From the code you showed, it is difficult to spot the cause of the error. You may try to replace the dynamic arrays with vector to see if that solves your problem (see the sketch below). Meanwhile, you could use valgrind to check whether you have memory leaks in your code (if you can somehow port your code to Linux with makefiles, since unfortunately valgrind is not available on Windows).
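A minimal sketch of what that vector replacement could look like, assuming a Point class with three floats as in the question (the names are placeholders, not the original code):

#include <cstddef>
#include <iostream>
#include <new>
#include <vector>

struct Point { float X, Y, Z; };   // assumed layout, mirroring the question

int main()
{
    const std::size_t nb_triangles = 4000000;

    std::vector<Point> P;
    try {
        P.resize(nb_triangles);    // the vector allocates and value-initializes the storage
    } catch (const std::bad_alloc&) {
        std::cout << "System can't allocate this much memory.\n";
        return 1;
    }

    std::cout << "allocated " << P.size() << " points\n";
    return 0;
}

Note that a vector still needs one contiguous block of memory, so by itself it does not make the allocation more likely to succeed; it mainly removes the manual delete[] bookkeeping.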
There are differences between malloc and new. For example, new will initialize your memory by calling the constructor of the class automatically (plain new of primitive types such as float, int or char leaves them uninitialized unless you value-initialize them). Also, the memory allocated by new should be released with the delete keyword, which calls the destructor.

C's malloc() as well as the new operator in Visual Studio internally call HeapAlloc(). HeapAlloc() calls VirtualAlloc() if the memory required is too large or is shared between processes. So malloc will not necessarily fix your issue; in fact, if you are using C++, stick to new.
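To make the constructor/destructor point concrete, here is a minimal sketch contrasting the two; Tri is an assumed stand-in for the triangle class in the question:

#include <cstdlib>
#include <new>

struct Tri
{
    int v1, v2, v3;
    Tri() : v1(0), v2(0), v3(0) {}   // constructor: runs with new[], never with malloc
};

int main()
{
    // malloc: raw, uninitialized bytes; no constructor runs; pair it with free().
    Tri* a = static_cast<Tri*>(std::malloc(1000 * sizeof(Tri)));
    if (a == NULL)
        return 1;
    std::free(a);

    // new[]: the constructor runs for every element; pair it with delete[].
    Tri* b = new (std::nothrow) Tri[1000];
    if (b == NULL)
        return 1;
    delete[] b;

    return 0;
}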
It is one of the Great Myths of Windows programming: a process can never run out of RAM. Windows is a demand-paged virtual memory operating system; if a process needs more RAM, the operating system makes room by paging out other memory pages, owned by other processes, or by the process itself, swapping out pages that haven't been used for a while.
That myth is encouraged by the way Task Manager reports memory usage for a process with its default settings. It shows the working set, the actual number of bytes of the process that are in RAM, a value that's usually much smaller than the amount of virtual memory allocated by the process. A process dies of OOM when it can't allocate virtual memory anymore (that's the VM size statistic in Taskmgr). And it usually dies not because all of the VM was used, but because there isn't a free hole left that's big enough. The SysInternals VMMap utility is a good way to see how a process uses its address space.
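If you want to see that fragmentation from inside the program rather than in VMMap, a rough sketch like the following walks the address space with VirtualQuery and reports the largest free region (Win32-specific, error handling omitted):

#include <windows.h>
#include <iostream>

int main()
{
    MEMORY_BASIC_INFORMATION mbi;
    SIZE_T largestFree = 0;
    char* address = 0;

    // Walk the user-mode address space region by region.
    while (VirtualQuery(address, &mbi, sizeof(mbi)) == sizeof(mbi))
    {
        if (mbi.State == MEM_FREE && mbi.RegionSize > largestFree)
            largestFree = mbi.RegionSize;
        address = static_cast<char*>(mbi.BaseAddress) + mbi.RegionSize;
    }

    std::cout << "largest free block: " << largestFree / (1024 * 1024) << " MB\n";
    return 0;
}

Even when hundreds of megabytes are free in total, a single 114 MB new[] fails if no one free region is that large.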
Getting a larger virtual memory address space requires a pretty fundamental overhaul, albeit an easy one today: just choose x64 as the platform target. A 64-bit process has massive amounts of address space available, limited only by the maximum size of the paging file. You could also limp along in 32-bit mode, as long as you can count on actually running on a 64-bit operating system, by using the /LARGEADDRESSAWARE linker option, which increases the VM size from 2 GB to 4 GB on a 64-bit operating system.
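As a quick way to confirm that address space, not RAM, is the limit, a small sketch using GlobalMemoryStatusEx prints the virtual versus physical totals the process actually sees (on 32-bit without /LARGEADDRESSAWARE, ullTotalVirtual stays around 2 GB no matter how much RAM is installed):

#include <windows.h>
#include <iostream>

int main()
{
    MEMORYSTATUSEX status;
    status.dwLength = sizeof(status);   // must be set before calling GlobalMemoryStatusEx

    if (GlobalMemoryStatusEx(&status))
    {
        std::cout << "physical RAM:          " << status.ullTotalPhys    / (1024 * 1024) << " MB\n";
        std::cout << "virtual address space: " << status.ullTotalVirtual / (1024 * 1024) << " MB\n";
        std::cout << "free virtual space:    " << status.ullAvailVirtual / (1024 * 1024) << " MB\n";
    }
    return 0;
}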