I memory-optimized some code I have for embedded use. It works well, but as a result I now have a lot of 1D, 2D and 3D mallocs and frees in the middle of functions that slow down execution.
For several reasons, I decided to change my approach. I want to allocate all the memory I can with a single malloc at the start of execution and just point each array at the correct place inside that block.
For info, I execute this on x86 for now, so I don't have any memory space issues. I declare my arrays this way:
unsigned char *memory;
memory = (unsigned char *)malloc(MYSIZE*sizeof(unsigned char));
type* p1;
p1 = (type *)((int)memory + p1_offset);
type** p2;
p2 = (type **)((int)memory + p2_offset);
for (int i=0 ; i<p2_height ; i++)
{
    p2[i] = (type *)((int)p2 + p2_width*sizeof(type));
}
While this works well for my 1D pointer, it gives me a segfault for my 2D pointer setup. I checked my offsets and they are correct relative to the memory pointer. As I'm not experienced with this way of declaring my pointers, maybe I'm misunderstanding something here, so I would appreciate it if someone could explain more about this technique!
You’re declaring p2 as a pointer to an array of pointers, not as a pointer to a flat two-dimensional array. You’re also initializing p2 with a garbage integer (the result of casting a pointer to int), then casting that back to a pointer and dereferencing it.
Here is some example code:
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>
/* Boilerplate to turn this into a MWE: */
#define MYSIZE 1024U
typedef double elem_t;
static const size_t p1_offset = 0, p2_offset = 512;
/* Our buffer will hold W 1d elements and X*Y 2d elements. */
#define W 64U
#define X 32U
#define Y 2U
typedef struct {
    elem_t array1[W];
    elem_t array2[X][Y];
} spaces_t;
/* Test driver: */
int main(void)
{
    /* sizeof(unsigned char) is defined as 1. Do you mean to allocate an
     * array of MYSIZE bytes or MYSIZE elements of type?
     */
    spaces_t * const memory = malloc(sizeof(spaces_t));
    if (!memory) {
        perror("malloc");
        exit(EXIT_FAILURE);
    }
    elem_t* p1 = memory->array1;
    elem_t* p2 = (elem_t*)(memory->array2);
    /* Never cast a pointer to int: if the pointer doesn't fit, the behavior
     * is undefined. Why does this assertion succeed? Why are memory and
     * bad_idea equal, but memory+1 and bad_idea+1 different by the size of
     * both of our arrays combined, minus one byte?
     */
    const uintptr_t bad_idea = (uintptr_t)memory;
    assert( (uintptr_t)(memory+1) - (bad_idea+1) == sizeof(spaces_t) - 1 );
    /* Let's initialize all the arrays. No segfaults? */
    size_t i,j;
    for (i = 0; i < W; ++i) {
        *p1 = (elem_t)i;
        assert( memory->array1[i] == *p1 );
        ++p1;
    }
    /* This is a lot faster when X is a power of 2: */
    for (i = 0; i < X; ++i)
        for ( j = 0; j < Y; ++j) {
            *p2 = (elem_t)(100*i + j);
            assert( memory->array2[i][j] == *p2 );
            ++p2;
        }
    return EXIT_SUCCESS;
}
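For completeness: if you really want to keep the pointer-table (type **) layout from your code, the row storage has to sit after the pointer table and each row pointer has to be advanced by i; your loop does neither, so every p2[i] would end up pointing at the same place. Below is a minimal sketch, not your exact code: the element type, dimensions and offset are stand-in names, and the offset is assumed to be suitably aligned for a pointer.
#include <stdlib.h>
typedef double type;    /* stand-in for the question's element type */
#define P2_HEIGHT 32U
#define P2_WIDTH  2U
int main(void)
{
    const size_t p2_offset = 0;   /* assumed to be pointer-aligned */
    unsigned char *memory = malloc(p2_offset
                                   + P2_HEIGHT * sizeof(type *)
                                   + P2_HEIGHT * P2_WIDTH * sizeof(type));
    if (!memory)
        return EXIT_FAILURE;
    /* Row-pointer table at the start of the p2 region, row storage after it. */
    type **p2 = (void *)(memory + p2_offset);
    unsigned char *rows = memory + p2_offset + P2_HEIGHT * sizeof(type *);
    for (size_t i = 0; i < P2_HEIGHT; i++) {
        /* Each row is offset by i * P2_WIDTH elements into the row storage. */
        p2[i] = (void *)(rows + i * P2_WIDTH * sizeof(type));
    }
    p2[10][1] = 42.0;   /* lands safely inside the buffer */
    free(memory);
    return EXIT_SUCCESS;
}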
The problem is that type** p2; declares a pointer to a pointer; it has nothing to do with 2D arrays.
Instead, declare a pointer to a 2D array:
type (*p2)[x][y];
If you don't like to de-reference the 2D array as (*p2)[i][j], then simply drop the leftmost dimension in the declaration:
type (*p2)[y];
Now you can de-reference the 2D array as p2[i][j]. This trick works because p2[i] gives you pointer arithmetic based on the pointed-to type (an array of y items of type), same as for any other pointer.
Also, the cast from pointer to (signed) int is obscure and unsafe. Everywhere in the code where you do this, keep the unsigned char type, calculate the offset, and then cast the result to (void*). Example:
p2 = (void*)(memory + p2_offset);
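To tie this back to the single-buffer setup in the question, here is a minimal sketch of both views carved out of one malloc with this declaration style. The element type, sizes and offsets below are made-up stand-ins for the question's, and the offsets are assumed to be suitably aligned for the element type.
#include <stdlib.h>
typedef double elem_t;   /* stand-in for "type" */
#define W 64U            /* 1D length */
#define X 32U            /* 2D height */
#define Y 2U             /* 2D width  */
int main(void)
{
    const size_t p1_offset = 0;
    const size_t p2_offset = W * sizeof(elem_t);
    unsigned char *memory = malloc((W + X * Y) * sizeof(elem_t));
    if (!memory)
        return EXIT_FAILURE;
    elem_t *p1 = (void *)(memory + p1_offset);       /* 1D view */
    elem_t (*p2)[Y] = (void *)(memory + p2_offset);  /* 2D view */
    p1[3] = 1.0;        /* plain 1D indexing */
    p2[10][1] = 2.0;    /* plain 2D indexing, no row-pointer table needed */
    free(memory);
    return EXIT_SUCCESS;
}
Because the whole 2D array is contiguous, there is no per-row pointer table to set up, and a single free releases everything.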