If I have a multi-processor board with cache-coherent non-uniform memory access (NUMA), i.e. separate "northbridges" with separate RAM for each processor, does any compiler know how to automatically spread data across the different memory systems so that threads mostly retrieve their data from the RAM attached to the processor they are running on?
I have a setup where 1 GB of RAM is attached to processor 0, 1 GB to processor 1, and so on, up to 4 processors. In the coherent memory space, the physical addresses for the RAM on the first processor run from 0 to 1 GB-1, for the second processor from 1 GB to 2 GB-1, and so on.
Will any compiler, or perhaps malloc specifically, associate memory newly allocated by a process running on a specific core with the physical RAM attached to that core?
NUMA-aware memory allocation is not done at compile time. Making assumptions like this would be bad for portability.
On Linux, this is a kernel function, though you can control it at runtime with numactl, set_mempolicy, or libnuma. The Linux kernel knows about NUMA and will try to give your process pages from memory local to the CPU it is currently running on (source: U. Drepper, "What Every Programmer Should Know About Memory").
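For illustration only (not part of the answer above), here is a minimal sketch of requesting node-local and node-specific memory through libnuma. It assumes a Linux box with libnuma installed (link with -lnuma); the 64 MiB size and the choice of node 0 are arbitrary example values.

    /* Sketch: allocate memory on the NUMA node local to the calling thread,
     * then on an explicitly chosen node. Assumes libnuma; link with -lnuma. */
    #include <numa.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        if (numa_available() == -1) {
            fprintf(stderr, "NUMA is not available on this system\n");
            return EXIT_FAILURE;
        }

        size_t size = 64 * 1024 * 1024;   /* 64 MiB, arbitrary example size */

        /* Pages backed by the node the calling thread is currently running on. */
        void *local = numa_alloc_local(size);

        /* Pages explicitly bound to node 0 (arbitrary choice for the example). */
        void *on_node0 = numa_alloc_onnode(size, 0);

        if (!local || !on_node0) {
            fprintf(stderr, "allocation failed\n");
            return EXIT_FAILURE;
        }

        /* Touch the pages so they are actually faulted in and placed. */
        memset(local, 0, size);
        memset(on_node0, 0, size);

        printf("highest NUMA node: %d\n", numa_max_node());

        numa_free(local, size);
        numa_free(on_node0, size);
        return EXIT_SUCCESS;
    }

Without changing the code, a similar placement policy can usually be imposed from the outside with numactl, e.g. numactl --cpunodebind=0 --membind=0 ./app.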
On Microsoft platforms, the compiler is likewise not NUMA-aware. However, the operating system is, and it will attempt to allocate memory on the node local to the requesting thread.
See http://code.msdn.microsoft.com/64plusLP for some more details on how recent versions of Windows handle NUMA.
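As an illustration (again, not part of the original answer), a minimal sketch of explicitly preferring a NUMA node on Windows via VirtualAllocExNuma. It assumes Windows Vista / Server 2008 or later; the 64 MiB size is an arbitrary example value.

    /* Sketch: find the NUMA node of the current processor and allocate memory
     * preferring that node. Assumes Windows Vista / Server 2008 or later. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        ULONG highest_node = 0;
        if (!GetNumaHighestNodeNumber(&highest_node)) {
            fprintf(stderr, "GetNumaHighestNodeNumber failed: %lu\n", GetLastError());
            return 1;
        }

        /* Node that the processor currently executing this thread belongs to. */
        UCHAR node = 0;
        GetNumaProcessorNode((UCHAR)GetCurrentProcessorNumber(), &node);

        SIZE_T size = 64 * 1024 * 1024;   /* 64 MiB, arbitrary example size */
        void *mem = VirtualAllocExNuma(GetCurrentProcess(), NULL, size,
                                       MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE,
                                       node);
        if (!mem) {
            fprintf(stderr, "VirtualAllocExNuma failed: %lu\n", GetLastError());
            return 1;
        }

        printf("highest node: %lu, allocated %zu bytes preferring node %u\n",
               highest_node, size, (unsigned)node);

        VirtualFree(mem, 0, MEM_RELEASE);
        return 0;
    }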