
Sort objects of dynamic size

Published 2019-04-05 01:57

Question:

Problem

Suppose I have a large array of bytes (think up to 4GB) containing some data. These bytes correspond to distinct objects in such a way that every s bytes (think s up to 32) will constitute a single object. One important fact is that this size s is the same for all objects, not stored within the objects themselves, and not known at compile time.

At the moment, these objects are logical entities only, not objects in the programming language. I have a comparison on these objects which consists of a lexicographical comparison of most of the object data, with a bit of different functionality to break ties using the remaining data. Now I want to sort these objects efficiently (this is really going to be a bottleneck of the application).

Ideas so far

I've thought of several possible ways to achieve this, but each of them appears to have some rather unfortunate consequences. You don't necessarily have to read all of these. I tried to print the central question of each approach in bold. If you are going to suggest one of these approaches, then your answer should respond to the related questions as well.

1. C quicksort

Of course the C qsort function is available in C++ applications as well. Its signature matches my requirements almost perfectly. But using that function prohibits inlining of the comparison function, which means every comparison carries function-call overhead. I had hoped for a way to avoid that. Any experience about how C qsort_r compares to STL sort in terms of performance would be very welcome.
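For concreteness, here is roughly what I imagine the qsort_r variant would look like, with the object size passed through the context pointer. This assumes the glibc argument order (BSD/macOS differ) and uses a plain memcmp in place of my real comparison:

#include <cstdlib>   // qsort_r (glibc extension; may require _GNU_SOURCE)
#include <cstring>

struct cmp_ctx { std::size_t s; };                 // context: the run-time object size

static int compare_objects(const void *a, const void *b, void *arg)
{
    const cmp_ctx *ctx = static_cast<const cmp_ctx *>(arg);
    // Byte-wise lexicographical comparison; the real comparator would add
    // the tie-breaking logic on the remaining data.
    return std::memcmp(a, b, ctx->s);
}

void sort_buffer(unsigned char *data, std::size_t n_objects, std::size_t s)
{
    cmp_ctx ctx{s};
    qsort_r(data, n_objects, s, compare_objects, &ctx);
}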

2. Indirection using Objects pointing at data

It would be easy to write a bunch of objects holding pointers to their respective data. Then one could sort those. There are two aspects to consider here. On the one hand, just moving around pointers instead of all the data would mean fewer memory operations. On the other hand, not moving the objects would probably break memory locality and thus cache performance. Chances that the deeper levels of quicksort recursion could actually access all their data from a few cache pages would vanish almost completely. Instead, each cached memory page would yield only very few usable data items before being replaced. If anyone could provide some experience about the tradeoff between copying and memory locality I'd be very glad.
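To illustrate, sorting pointers rather than the data itself would look roughly like this (memcmp stands in for my real comparison):

#include <algorithm>
#include <cstring>
#include <vector>

std::vector<const unsigned char*>
sorted_pointers(const unsigned char *data, std::size_t n_objects, std::size_t s)
{
    std::vector<const unsigned char*> ptrs(n_objects);
    for (std::size_t i = 0; i < n_objects; ++i)
        ptrs[i] = data + i * s;                        // one pointer per object

    std::sort(ptrs.begin(), ptrs.end(),
              [s](const unsigned char *a, const unsigned char *b) {
                  return std::memcmp(a, b, s) < 0;     // byte-wise lexicographical
              });
    return ptrs;                                       // sorted order, data untouched
}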

3. Custom iterator, reference and value objects

I wrote a class which serves as an iterator over the memory range. Dereferencing this iterator yields not a reference but a newly constructed object holding the pointer to the data and the size s, which is given at construction of the iterator. So these objects can be compared, and I even have an implementation of std::swap for them. Unfortunately, it appears that std::swap isn't enough for std::sort. In some parts of the process, my gcc implementation uses insertion sort (as implemented in __insertion_sort in file stl_algo.h), which moves a value out of the sequence, moves a number of items by one step, and then moves the first value back into the sequence at the appropriate position:

          typename iterator_traits<_RandomAccessIterator>::value_type
            __val = _GLIBCXX_MOVE(*__i);
          _GLIBCXX_MOVE_BACKWARD3(__first, __i, __i + 1);
          *__first = _GLIBCXX_MOVE(__val);

Do you know of a standard sorting implementation which doesn't require a value type but can operate with swaps alone?

So I'd not only need my class which serves as a reference, but I would also need a class to hold a temporary value. And as the size of my objects is dynamic, I'd have to allocate that on the heap, which means memory allocations at the very leaves of the recursion tree. Perhaps one alternative would be a value type with a static size that should be large enough to hold objects of the sizes I currently intend to support. But that would mean even more hackery in the relation between the reference_type and the value_type of the iterator class. And it would mean I would have to update that size for my application to one day support larger objects. Ugly.

If you can think of a clean way to get the above code to manipulate my data without having to allocate memory dynamically, that would be a great solution. I'm using C++11 features already, so using move semantics or similar won't be a problem.
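For concreteness, here is a rough sketch of that fixed-maximum-size value type (assuming 32 bytes remains the upper bound; obj_ref is a stand-in for my actual reference proxy, and only the move out of and back into the sequence is covered):

#include <cstring>

constexpr std::size_t MAX_OBJECT_SIZE = 32;     // assumed upper bound on s

struct obj_value;                               // forward declaration

struct obj_ref {                                // proxy returned by the iterator
    unsigned char *p;                           // start of the object's bytes
    std::size_t    s;                           // run-time object size
    obj_ref &operator=(const obj_value &v);     // write the temporary back
};

struct obj_value {
    unsigned char buf[MAX_OBJECT_SIZE];         // stack storage, no heap allocation
    std::size_t   s;
    obj_value(const obj_ref &r) : s(r.s) { std::memcpy(buf, r.p, s); }  // move out
};

inline obj_ref &obj_ref::operator=(const obj_value &v)
{
    std::memcpy(p, v.buf, v.s);                 // move back into the sequence
    return *this;
}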

4. Custom sorting

I even considered reimplementing all of quicksort. Perhaps I could make use of the fact that my comparison is mostly a lexicographical compare, i.e. I could sort sequences by the first byte and only switch to the next byte when the first byte is the same for all elements. I haven't worked out the details on this yet, but if anyone can suggest a reference, an implementation or even a canonical name to be used as a keyword for such a byte-wise lexicographical sorting, I'd be very happy. I'm still not convinced that with reasonable effort on my part I could beat the performance of the STL template implementation.

5. Completely different algorithm

I know there are many, many kinds of sorting algorithms out there. Some of them might be better suited to my problem. Radix sort comes to my mind first, but I haven't really thought it through yet. If you can suggest a sorting algorithm more suited to my problem, please do so. Preferably with an implementation, but even without one.
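To make the radix idea concrete: a byte-wise MSD (most-significant-digit) radix sort seems to be the canonical name for what I describe under idea 4. A rough, unoptimized sketch, sorting pointers as in idea 2 and ignoring the tie-breaking part of my comparison:

#include <cstddef>
#include <vector>

using ptr_t = const unsigned char *;

// Sort v[lo, hi) by the byte at position `byte`, then recurse on the next byte.
static void msd_radix_sort(std::vector<ptr_t> &v, std::size_t lo, std::size_t hi,
                           std::size_t byte, std::size_t s)
{
    if (hi - lo < 2 || byte >= s)
        return;

    // Bucket the range by the current byte (256 possible values).
    std::vector<std::vector<ptr_t>> buckets(256);
    for (std::size_t i = lo; i < hi; ++i)
        buckets[v[i][byte]].push_back(v[i]);

    // Write the buckets back in order and recurse into each one.
    std::size_t out = lo;
    for (auto &b : buckets) {
        std::size_t start = out;
        for (ptr_t p : b)
            v[out++] = p;
        msd_radix_sort(v, start, out, byte + 1, s);
    }
}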

Question

So basically my question is this:
“How would you efficiently sort objects of dynamic size in heap memory?”

Any answer to this question which is applicable to my situation is good, no matter whether it is related to my own ideas or not. Answers to the individual questions marked in bold, or any other insight which might help me decide between my alternatives, would be useful as well, particularly if no definite answer to a single approach turns up.

Answer 1:

Since there are only 32 different object variations (1 to 32 bytes), you could easily create an object type for each and select a call to std::sort based on a switch statement. Each call will get inlined and highly optimized.

Some object sizes might require a custom iterator, as the compiler will insist on padding native objects to align to address boundaries. Pointers can be used as iterators in the other cases since a pointer has all the properties of an iterator.
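A minimal sketch of what I mean, assuming the comparison can be expressed as a plain memcmp over the raw bytes (your real tie-breaking comparison would replace it):

#include <algorithm>
#include <cstring>

template <std::size_t S>
struct fixed_obj {
    unsigned char bytes[S];                     // size S, alignment 1, no padding
    bool operator<(const fixed_obj &o) const {
        return std::memcmp(bytes, o.bytes, S) < 0;
    }
};

template <std::size_t S>
void sort_fixed(unsigned char *data, std::size_t n)
{
    fixed_obj<S> *begin = reinterpret_cast<fixed_obj<S> *>(data);
    std::sort(begin, begin + n);                // comparison fully inlined
}

void dispatch_sort(unsigned char *data, std::size_t n, std::size_t s)
{
    switch (s) {                                // one case per supported size
    case 1:  sort_fixed<1>(data, n);  break;
    case 2:  sort_fixed<2>(data, n);  break;
    // ... cases 3 to 31 elided ...
    case 32: sort_fixed<32>(data, n); break;
    }
}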



Answer 2:

The most practical solution is to use the C style qsort that you mentioned.

#include <cstdlib>   // qsort

template <unsigned S>
struct my_obj {
    enum { SIZE = S };
    const void *p_;
    my_obj (const void *p) : p_(p) {}
    //...accessors to get data from the pointer, plus an operator<
    //   implementing the real lexicographic + tie-breaking comparison
    static int c_style_compare (const void *a, const void *b) {
        my_obj aa(a);
        my_obj bb(b);
        return (aa < bb) ? -1 : (bb < aa);
    }
};

template <unsigned N, typename OBJ>
void my_sort (char (&large_array)[N], const OBJ &) {
    qsort(large_array, N/OBJ::SIZE, OBJ::SIZE, OBJ::c_style_compare);
}

(Or, you can call qsort_r if you prefer.) Since STL sort inlines the comparison calls while qsort cannot, you may not get the fastest possible sorting this way. If all your system does is sorting, it may be worth it to add the code to get custom iterators to work. But if most of the time your system is doing something other than sorting, the extra gain may just be noise to your overall system.
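Hypothetical usage of the sketch above, for 16-byte objects (assuming the elided accessors and an operator< for my_obj have been filled in; note that this particular wrapper needs the array bound at compile time, so with a runtime-sized heap buffer you would call qsort/qsort_r directly instead):

int main()
{
    static char buffer[1024 * 16];              // 1024 objects of 16 bytes, filled elsewhere
    my_sort(buffer, my_obj<16>(nullptr));       // dispatches to qsort
    return 0;
}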



Answer 3:

I'd agree with std::sort using a custom iterator, reference and value type; it's best to use the standard machinery where possible.

You worry about memory allocations, but modern memory allocators are very efficient at handing out small chunks of memory, particularly when the same size is repeatedly reused. You could also consider using your own (stateful) allocator, handing out length-s chunks from a small pool.
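A minimal sketch of such a pool (not a std::allocator adaptor, just a class handing out and recycling s-byte chunks so the temporaries never hit the general-purpose allocator):

#include <cstddef>
#include <vector>

class chunk_pool {
public:
    explicit chunk_pool(std::size_t s, std::size_t chunks_per_block = 1024)
        : s_(s), per_block_(chunks_per_block) {}

    void *allocate() {
        if (free_.empty())
            grow();
        void *p = free_.back();
        free_.pop_back();
        return p;
    }

    void deallocate(void *p) { free_.push_back(p); }    // recycle the chunk

private:
    void grow() {                                       // carve a new block into chunks
        blocks_.emplace_back(s_ * per_block_);
        unsigned char *base = blocks_.back().data();
        for (std::size_t i = 0; i < per_block_; ++i)
            free_.push_back(base + i * s_);
    }

    std::size_t s_, per_block_;
    std::vector<std::vector<unsigned char>> blocks_;    // owns the raw storage
    std::vector<void *> free_;                          // currently free chunks
};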



Answer 4:

If you can overlay an object onto your buffer, then you can use std::sort, as long as your overlay type is copyable (in this example, a fixed-width block of bytes compared via its leading 64-bit word). With 4GB of data, you're going to need a lot of memory though.

As discussed in the comments, you can have a selection of possible sizes based on some number of fixed-size templates. You would have to pick from these types at runtime (using a switch statement, for example). Here's an example of the template type at various widths, and of sorting the 64-byte one.

Here's a simple example:

#include <vector>
#include <algorithm>
#include <iostream>
#include <cstdint>
#include <cstdlib>
#include <ctime>

template <int WIDTH>
struct variable_width
{
   unsigned char w_[WIDTH];
};

typedef variable_width<8> vw8;
typedef variable_width<16> vw16;
typedef variable_width<32> vw32;
typedef variable_width<64> vw64;
typedef variable_width<128> vw128;
typedef variable_width<256> vw256;
typedef variable_width<512> vw512;
typedef variable_width<1024> vw1024;

bool operator<(const vw64& l, const vw64& r)
{
   const int64_t* l64 = reinterpret_cast<const int64_t*>(l.w_);
   const int64_t* r64 = reinterpret_cast<const int64_t*>(r.w_);

   return *l64 < *r64;
}

std::ostream& operator<<(std::ostream& out, const vw64& w)
{
   const int64_t* w64 = reinterpret_cast<const int64_t*>(w.w_);
   out << *w64;
   return out;
}

int main()
{
   srand(time(NULL));
   std::vector<unsigned char> buffer(10 * sizeof(vw64));
   vw64* w64_arr = reinterpret_cast<vw64*>(&buffer[0]);

   for(int x = 0; x < 10; ++x)
   {
*reinterpret_cast<int64_t*>(w64_arr[x].w_) = rand();
   }

   std::sort(
      w64_arr,
      w64_arr + 10);

   for(int x = 0; x < 10; ++x)
   {
      std::cout << w64_arr[x] << '\n';
   }

   std::cout << std::endl;

   return 0;
}
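And a sketch of the runtime dispatch mentioned above, building on the typedefs from the example (each width used would need an operator< like the vw64 one):

template <typename VW>
void sort_as(unsigned char* data, std::size_t n)
{
   VW* begin = reinterpret_cast<VW*>(data);
   std::sort(begin, begin + n);
}

void sort_dispatch(unsigned char* data, std::size_t n, std::size_t object_size)
{
   switch (object_size)                 // size in bytes, read at runtime
   {
   case 8:   sort_as<vw8>(data, n);   break;
   case 16:  sort_as<vw16>(data, n);  break;
   case 32:  sort_as<vw32>(data, n);  break;
   case 64:  sort_as<vw64>(data, n);  break;
   // ... remaining widths ...
   }
}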


Answer 5:

Given the enormous size (4GB), I would seriously consider dynamic code generation. Compile a custom sort into a shared library, and dynamically load it. The only non-inlined call should be the call into the library.

With precompiled headers, the compilation times may actually not be that bad. The whole <algorithm> header doesn't change, nor does your wrapper logic. You just need to recompile a single predicate each time. And since you only get a single function, linking is trivial.
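A rough, POSIX-only sketch of what that could look like (the temp paths, the g++ invocation and the exported symbol name are all just placeholders; link the host program with -ldl):

#include <cstdio>
#include <cstdlib>
#include <dlfcn.h>

using sort_fn = void (*)(unsigned char *data, unsigned long n_objects);

sort_fn build_sorter(unsigned long s)
{
    // Emit a tiny translation unit with the object size baked in as a constant.
    std::FILE *src = std::fopen("/tmp/gen_sort.cpp", "w");
    std::fprintf(src,
        "#include <algorithm>\n"
        "#include <cstring>\n"
        "static const unsigned long S = %lu;\n"
        "struct obj { unsigned char b[S];\n"
        "  bool operator<(const obj &o) const { return std::memcmp(b, o.b, S) < 0; } };\n"
        "extern \"C\" void gen_sort(unsigned char *d, unsigned long n) {\n"
        "  obj *p = reinterpret_cast<obj *>(d); std::sort(p, p + n); }\n", s);
    std::fclose(src);

    // Compile it into a shared library; error handling omitted in this sketch.
    std::system("g++ -O2 -shared -fPIC -o /tmp/gen_sort.so /tmp/gen_sort.cpp");

    void *lib = dlopen("/tmp/gen_sort.so", RTLD_NOW);
    return reinterpret_cast<sort_fn>(dlsym(lib, "gen_sort"));
}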



Answer 6:

#include <vector>
#include <algorithm>
#include <cstdlib>

#define OBJECT_SIZE 32
struct structObject
{
    unsigned char* pObject;
    bool operator < (const structObject &n) const
    {
        for(int i=0; i<OBJECT_SIZE; i++)
        {
            if(*(pObject + i) != *(n.pObject + i))
                return (*(pObject + i) < *(n.pObject + i));
        }

        return false;       
    }
};

int main()
{       
    std::vector<structObject> vObjects;
    unsigned char* pObjects = (unsigned char*)malloc(10 * OBJECT_SIZE); // 10 Objects


    for(int i=0; i<10; i++)
    {
        structObject stObject;
        stObject.pObject = pObjects + (i*OBJECT_SIZE);      
        *stObject.pObject = 'A' + 9 - i; // Add a value to the start to check the sort
        vObjects.push_back(stObject);
    }

    std::sort(vObjects.begin(), vObjects.end());


    free(pObjects);

    return 0;
}

To skip the #define

struct structObject
{
    unsigned char* pObject; 
};

struct structObjectComparerAscending 
{
    int iSize;

    structObjectComparerAscending(int _iSize)
    {
        iSize = _iSize;
    }

    bool operator ()(const structObject &stLeft, const structObject &stRight) const
    { 
        for(int i=0; i<iSize; i++)
        {
            if(*(stLeft.pObject + i) != *(stRight.pObject + i))
                return (*(stLeft.pObject + i) < *(stRight.pObject + i));
        }

        return false;       
    }
};

int main()
{   
    int iObjectSize = 32; // Read it from somewhere

    std::vector<structObject> vObjects;
    unsigned char* pObjects = (unsigned char*)malloc(10 * iObjectSize);

    for(int i=0; i<10; i++)
    {
        structObject stObject;
        stObject.pObject = pObjects + (i*iObjectSize);      
        *stObject.pObject = 'A' + 9 - i; // Add a value to the start to work with something...  
        vObjects.push_back(stObject);
    }

    std::sort(vObjects.begin(), vObjects.end(), structObjectComparerAscending(iObjectSize));


    free(pObjects);

    return 0;
}