Sorting 1 million 8-digit numbers in 1 MB of RAM

Posted 2020-02-16 05:24

I have a computer with 1 MB of RAM and no other local storage. I must use it to accept 1 million 8-digit decimal numbers over a TCP connection, sort them, and then send the sorted list out over another TCP connection.

The list of numbers may contain duplicates, which I must not discard. The code will be placed in ROM, so I need not subtract the size of my code from the 1 MB. I already have code to drive the Ethernet port and handle TCP/IP connections, and it requires 2 KB for its state data, including a 1 KB buffer via which the code will read and write data. Is there a solution to this problem?

Sources of question and answer: slashdot.org, cleaton.net

30 answers
[account banned]
#2 · 2020-02-16 05:40

Suppose this task is possible. Just prior to output, there will be an in-memory representation of the million sorted numbers. How many different such representations are there? Since there may be repeated numbers we can't use nCr (choose), but there is an operation called multichoose that works on multisets.

  • There are about 2.2e2436455 ways to choose a million numbers (with repetition) from the range 0..99,999,999.
  • Distinguishing every possible combination therefore requires 8,093,730 bits, or 1,011,717 bytes.

So theoretically it may be possible, if you can come up with a sane (enough) representation of the sorted list of numbers. For example, an insane representation might require a 10MB lookup table or thousands of lines of code.

However, if "1M RAM" means one million bytes, then clearly there is not enough space. The fact that 5% more memory makes it theoretically possible suggests to me that the representation will have to be VERY efficient and probably not sane.

放我归山
#3 · 2020-02-16 05:40

I think the solution is to combine techniques from video encoding, namely differential (delta) encoding, the step used alongside the discrete cosine transform. In digital video, rather than recording the brightness or colour as absolute values such as 110 112 115 116, each value is subtracted from the previous one (similar to run-length encoding): 110 112 115 116 becomes 110 2 3 1. The deltas 2 3 1 require fewer bits than the originals.

So let's say we build a list of the input values as they arrive on the socket, storing in each element not the value itself but the offset from the element before it. We sort as we go, so the offsets are always non-negative. But an offset can be up to 8 decimal digits wide, which needs 27 bits, and a fixed 4 bytes per element is too many, so we need to pack them. We can use the top bit of each byte as a "continue bit", indicating that the next byte is part of the same number and the lower 7 bits of each byte are to be concatenated. A zero offset is valid and encodes duplicates.

As the list fills up, the numbers get closer together, meaning that on average only 1 byte is needed to encode the distance to the next value. 7 bits of value plus 1 continuation bit per byte is convenient, but there may be a sweet spot that uses fewer than 8 bits per group.

Anyway, I did an experiment: using a random number generator, I can fit a million sorted 8-digit decimal numbers into about 1,279,000 bytes. The average gap between consecutive numbers is consistently about 99...

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Arrays;
import java.util.Random;

public class Test {
    public static void main(String[] args) throws IOException {
        // 1 million values
        int[] values = new int[1000000];

        // create random values up to 8 digits long
        Random random = new Random();
        for (int x=0;x<values.length;x++) {
            values[x] = random.nextInt(100000000);
        }
        Arrays.sort(values);

        ByteArrayOutputStream baos = new ByteArrayOutputStream();

        int av = 0;    
        writeCompact(baos, values[0]);     // first value
        for (int x=1;x<values.length;x++) {
            int v = values[x] - values[x-1];  // difference
            av += v;
            // System.out.println(values[x] + " diff " + v); // very chatty: one line per value
            writeCompact(baos, v);
        }

        System.out.println("Average offset " + (av/values.length));
        System.out.println("Fits in " + baos.toByteArray().length);
    }

    // Varint-style encoding: 7 data bits per byte; a set high bit
    // means another byte follows.
    public static void writeCompact(OutputStream os, long value) throws IOException {
        do {
            int b = (int) value & 0x7f;   // low 7 bits
            value >>>= 7;                 // unsigned shift to the next 7-bit group
            os.write(value == 0 ? b : (b | 0x80));
        } while (value != 0);
    }
}
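For completeness, the matching decoder is the usual varint read loop (my sketch, not part of the original answer; it assumes java.io.InputStream and java.io.EOFException are imported):

    // Reassembles a value from 7-bit groups, least significant group first,
    // stopping at the first byte whose high bit is clear.
    public static long readCompact(InputStream is) throws IOException {
        long value = 0;
        int shift = 0;
        int b;
        do {
            b = is.read();
            if (b < 0) throw new EOFException("truncated varint");
            value |= (long) (b & 0x7f) << shift;
            shift += 7;
        } while ((b & 0x80) != 0);
        return value;
    }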
狗以群分
#4 · 2020-02-16 05:41

Here's some working C++ code which solves the problem.

Proof that the memory constraints are satisfied:

Editor: There is no proof of the maximum memory requirements offered by the author either in this post or in his blogs. Since the number of bits necessary to encode a value depends on the values previously encoded, such a proof is likely non-trivial. The author notes that the largest encoded size he could stumble upon empirically was 1011732, and chose the buffer size 1013000 arbitrarily.

typedef unsigned int u32;

namespace WorkArea
{
    static const u32 circularSize = 253250;
    u32 circular[circularSize] = { 0 };         // consumes 1013000 bytes

    static const u32 stageSize = 8000;
    u32 stage[stageSize];                       // consumes 32000 bytes

    ...

Together, these two arrays take 1045000 bytes of storage. That leaves 1048576 - 1045000 - 2×1024 = 1528 bytes for remaining variables and stack space.

It runs in about 23 seconds on my Xeon W3520. You can verify that the program works using the following Python script, assuming a program name of sort1mb.exe.

# Python 3 port (the original used Python 2's xrange and unencoded writes)
from subprocess import Popen, PIPE
import random

sequence = [random.randint(0, 99999999) for i in range(1000000)]

sorter = Popen('sort1mb.exe', stdin=PIPE, stdout=PIPE)
for value in sequence:
    sorter.stdin.write(b'%08d\n' % value)
sorter.stdin.close()

result = [int(line) for line in sorter.stdout]
print('OK!' if result == sorted(sequence) else 'Error!')

A detailed explanation of the algorithm can be found in the following series of posts:

爷的心禁止访问
#5 · 2020-02-16 05:41

To represent the sorted array, one can just store the first element and the differences between adjacent elements. In this way we are concerned with encoding 10^6 elements that can sum to at most 10^8. Let's call this sequence D. To encode the elements of D one can use a Huffman code. The dictionary for the Huffman code can be created on the go, with the array updated every time a new item is inserted into the sorted array (insertion sort). Note that when the dictionary changes because of a new item, the whole array should be re-encoded to match the new encoding.

The average number of bits for encoding each element of D is maximized if we have an equal number of each unique element. Say the elements d1, d2, ..., dN of D each appear F times. In that case (in the worst case we have both 0 and 10^8 in the input sequence) we have

sum_{i=1..N} F * di = 10^8

where

sum_{i=1..N} F = 10^6, so F = 10^6/N, and the normalized frequency is p = F/10^6 = 1/N

The average number of bits is then -log2(p) = log2(N). Under these circumstances we should find the case that maximizes N. This happens if we have consecutive values of di starting from 0, i.e. di = i-1, therefore

10^8 = sum_{i=1..N} F * di = sum_{i=1..N} (10^6/N)(i-1) = (10^6/N) * N(N-1)/2 = 10^6 (N-1)/2

i.e.

N <= 201. And for this case the average number of bits is log2(201) = 7.6511, which means we need around 1 byte per input element to store the sorted array. Note that this doesn't mean D in general cannot have more than 201 unique elements. It just shows that if the elements of D are uniformly distributed, it cannot have more than 201 unique values.
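To make the Huffman idea concrete, here is a minimal static-Huffman sketch (my illustration; the answer above describes an adaptive dictionary, which this does not implement). It builds a tree for the worst-case table of 201 equally frequent gap values and reports the average code length:

import java.util.Arrays;
import java.util.PriorityQueue;

// Static Huffman sketch: given frequencies of each gap value,
// build the tree and report the average code length in bits.
public class HuffmanGapCost {
    static class Node implements Comparable<Node> {
        long freq; Node left, right;
        Node(long f) { freq = f; }
        Node(Node l, Node r) { freq = l.freq + r.freq; left = l; right = r; }
        public int compareTo(Node o) { return Long.compare(freq, o.freq); }
    }

    public static void main(String[] args) {
        // Worst case from the analysis: 201 distinct gaps, equally frequent.
        int n = 201;
        long[] freq = new long[n];
        Arrays.fill(freq, 1_000_000L / n);

        PriorityQueue<Node> pq = new PriorityQueue<>();
        for (long f : freq) pq.add(new Node(f));
        while (pq.size() > 1) pq.add(new Node(pq.poll(), pq.poll()));
        Node root = pq.poll();

        double avgBits = weightedDepth(root, 0) / (double) root.freq;
        // prints ~7.73 bits, close to the entropy bound log2(201) = 7.6511
        System.out.printf("average code length: %.4f bits%n", avgBits);
    }

    // Sum of freq * depth over all leaves = total encoded bits.
    static long weightedDepth(Node node, int depth) {
        if (node.left == null) return node.freq * depth;
        return weightedDepth(node.left, depth + 1) + weightedDepth(node.right, depth + 1);
    }
}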

时光不老,我们不散
#6 · 2020-02-16 05:42

Please see the first correct answer or the later answer with arithmetic encoding. Below you may find some fun, but not a 100% bullet-proof, solution.

This is quite an interesting task, and here is another solution. I hope somebody finds the result useful (or at least interesting).

Stage 1: Initial data structure, rough compression approach, basic results

Let's do some simple math: we have 1M (1,048,576 bytes) of RAM available to store 10^6 8-digit decimal numbers in the range [0; 99,999,999]. Storing one number takes 27 bits (assuming unsigned numbers are used), so a raw stream would need about 3.4M of RAM. Somebody already said this doesn't seem feasible, but I would say the task can be solved if the input is "good enough". Basically, the idea is to compress the input data to a factor of about 0.31 or lower and to do the sorting in a proper manner.

Let's solve the compression issue first. There are some relevant tests already available:

http://www.theeggeadventure.com/wikimedia/index.php/Java_Data_Compression

"I ran a test to compress one million consecutive integers using various forms of compression. The results are as follows:"

None     4000027
Deflate  2006803
Filtered 1391833
BZip2    427067
Lzma     255040

It looks like LZMA (Lempel–Ziv–Markov chain algorithm) is a good choice to continue with. I've prepared a simple PoC, but there are still some details to be highlighted:

  1. Memory is limited, so the idea is to presort numbers and use compressed buckets (dynamic size) as temporary storage
  2. It is easier to achieve a better compression factor with presorted data, so there is a static buffer for each bucket (numbers from the buffer are sorted before LZMA)
  3. Each bucket holds a specific range, so the final sort can be done for each bucket separately
  4. The bucket size can be set appropriately, so there will be enough memory to decompress the stored data and do the final sort for each bucket separately

In-memory sorting

Please note, the attached code is a PoC; it can't be used as a final solution, it just demonstrates the idea of using several smaller buffers to store presorted numbers in some optimal way (possibly compressed). LZMA is not proposed as a final solution; it is used as the fastest possible way to introduce compression to this PoC.

See the PoC code below (please note it is just a demo; to compile it, LZMA-Java will be needed):

import java.io.*;
import java.util.*;
// plus the Encoder/Decoder classes from the LZMA-Java library

public class MemorySortDemo {

static final int NUM_COUNT = 1000000;
static final int NUM_MAX   = 100000000;

static final int BUCKETS      = 5;
static final int DICT_SIZE    = 16 * 1024; // LZMA dictionary size
static final int BUCKET_SIZE  = 1024;
static final int BUFFER_SIZE  = 10 * 1024;
static final int BUCKET_RANGE = NUM_MAX / BUCKETS;

static class Producer {
    private Random random = new Random();
    public int produce() { return random.nextInt(NUM_MAX); }
}

static class Bucket {
    public int size, pointer;
    public int[] buffer = new int[BUFFER_SIZE];

    public ByteArrayOutputStream tempOut = new ByteArrayOutputStream();
    public DataOutputStream tempDataOut = new DataOutputStream(tempOut);
    public ByteArrayOutputStream compressedOut = new ByteArrayOutputStream();

    public void submitBuffer() throws IOException {
        Arrays.sort(buffer, 0, pointer);

        for (int j = 0; j < pointer; j++) {
            tempDataOut.writeInt(buffer[j]);
            size++;
        }            
        pointer = 0;
    }

    public void write(int value) throws IOException {
        if (isBufferFull()) {
            submitBuffer();
        }
        buffer[pointer++] = value;
    }

    public boolean isBufferFull() {
        return pointer == BUFFER_SIZE;
    }

    public byte[] compressData() throws IOException {
        tempDataOut.close();
        return compress(tempOut.toByteArray());
    }        

    private byte[] compress(byte[] input) throws IOException {
        final BufferedInputStream in = new BufferedInputStream(new ByteArrayInputStream(input));
        final DataOutputStream out = new DataOutputStream(new BufferedOutputStream(compressedOut));

        final Encoder encoder = new Encoder();
        encoder.setEndMarkerMode(true);
        encoder.setNumFastBytes(0x20);
        encoder.setDictionarySize(DICT_SIZE);
        encoder.setMatchFinder(Encoder.EMatchFinderTypeBT4);

        ByteArrayOutputStream encoderProperties = new ByteArrayOutputStream();
        encoder.writeCoderProperties(encoderProperties);
        encoderProperties.flush();
        encoderProperties.close();

        encoder.code(in, out, -1, -1, null);
        out.flush();
        out.close();
        in.close();

        return encoderProperties.toByteArray();
    }

    public int[] decompress(byte[] properties) throws IOException {
        InputStream in = new ByteArrayInputStream(compressedOut.toByteArray());
        ByteArrayOutputStream data = new ByteArrayOutputStream(10 * 1024);
        BufferedOutputStream out = new BufferedOutputStream(data);

        Decoder decoder = new Decoder();
        decoder.setDecoderProperties(properties);
        decoder.code(in, out, 4 * size);

        out.flush();
        out.close();
        in.close();

        DataInputStream input = new DataInputStream(new ByteArrayInputStream(data.toByteArray()));
        int[] array = new int[size];
        for (int k = 0; k < size; k++) {
            array[k] = input.readInt();
        }

        return array;
    }
}

static class Sorter {
    private Bucket[] bucket = new Bucket[BUCKETS];

    public void doSort(Producer p, Consumer c) throws IOException {

        for (int i = 0; i < bucket.length; i++) {  // allocate buckets
            bucket[i] = new Bucket();
        }

        for(int i=0; i< NUM_COUNT; i++) {         // produce some data
            int value = p.produce();
            int bucketId = value/BUCKET_RANGE;
            bucket[bucketId].write(value);
            c.register(value);
        }

        for (int i = 0; i < bucket.length; i++) { // submit non-empty buffers
            bucket[i].submitBuffer();
        }

        byte[] compressProperties = null;
        for (int i = 0; i < bucket.length; i++) { // compress the data
            compressProperties = bucket[i].compressData();
        }

        printStatistics();

        for (int i = 0; i < bucket.length; i++) { // decode & sort buckets one by one
            int[] array = bucket[i].decompress(compressProperties);
            Arrays.sort(array);

            for(int v : array) {
                c.consume(v);
            }
        }
        c.finalCheck();
    }

    public void printStatistics() {
        int size = 0;
        int sizeCompressed = 0;

        for (int i = 0; i < BUCKETS; i++) {
            int bucketSize = 4*bucket[i].size;
            size += bucketSize;
            sizeCompressed += bucket[i].compressedOut.size();

            System.out.println("  bucket[" + i
                    + "] contains: " + bucket[i].size
                    + " numbers, compressed size: " + bucket[i].compressedOut.size()
                    + String.format(" compression factor: %.2f", ((double)bucket[i].compressedOut.size())/bucketSize));
        }

        System.out.println(String.format("Data size: %.2fM",(double)size/(1014*1024))
                + String.format(" compressed %.2fM",(double)sizeCompressed/(1014*1024))
                + String.format(" compression factor %.2f",(double)sizeCompressed/size));
    }
}

static class Consumer {
    private Set<Integer> values = new HashSet<>();

    int v = -1;
    public void consume(int value) {
        if(v < 0) v = value;

        if(v > value) {
            throw new IllegalArgumentException("Current value is greater than previous: " + v + " > " + value);
        }else{
            v = value;
            values.remove(value);
        }
    }

    public void register(int value) {
        values.add(value);
    }

    public void finalCheck() {
        System.out.println(values.size() > 0 ? "NOT OK: " + values.size() : "OK!");
    }
}

public static void main(String[] args) throws IOException {
    Producer p = new Producer();
    Consumer c = new Consumer();
    Sorter sorter = new Sorter();

    sorter.doSort(p, c);
}
}

With random numbers it produces the following:

bucket[0] contains: 200357 numbers, compressed size: 353679 compression factor: 0.44
bucket[1] contains: 199465 numbers, compressed size: 352127 compression factor: 0.44
bucket[2] contains: 199682 numbers, compressed size: 352464 compression factor: 0.44
bucket[3] contains: 199949 numbers, compressed size: 352947 compression factor: 0.44
bucket[4] contains: 200547 numbers, compressed size: 353914 compression factor: 0.44
Data size: 3.85M compressed 1.70M compression factor 0.44

For a simple ascending sequence (one bucket is used) it produces:

bucket[0] contains: 1000000 numbers, compressed size: 256700 compression factor: 0.06
Data size: 3.85M compressed 0.25M compression factor 0.06

EDIT

Conclusion:

  1. Don't try to fool Nature
  2. Use simpler compression with a lower memory footprint
  3. Some additional clues are really needed; a common bullet-proof solution does not seem to be feasible

Stage 2: Enhanced compression, final conclusion

As was already mentioned in the previous section, any suitable compression technique can be used. So let's get rid of LZMA in favor of a simpler and better (if possible) approach. There are a lot of good options, including arithmetic coding, radix trees, etc.

Anyway, a simple but useful encoding scheme will be more illustrative than yet another external library providing some nifty algorithm. The actual solution is pretty straightforward: since there are buckets with partially sorted data, deltas can be used instead of the numbers themselves.

[figure: the delta encoding scheme]

Random input test shows slightly better results:

bucket[0] contains: 10103 numbers, compressed size: 13683 compression factor: 0.34
bucket[1] contains: 9885 numbers, compressed size: 13479 compression factor: 0.34
...
bucket[98] contains: 10026 numbers, compressed size: 13612 compression factor: 0.34
bucket[99] contains: 10058 numbers, compressed size: 13701 compression factor: 0.34
Data size: 3.85M compressed 1.31M compression factor 0.34

Sample code

// getBinarySize(v) is presumably the number of significant bits in v
// (e.g. 32 - Integer.numberOfLeadingZeros(v)); its definition is in the full code linked below.
public static void encode(int[] buffer, int length, BinaryOut output) {
    short size = (short)(length & 0x7FFF);

    output.write(size);
    output.write(buffer[0]);

    for(int i=1; i< size; i++) {
        int next = buffer[i] - buffer[i-1];
        int bits = getBinarySize(next);

        int len = bits;

        if(bits > 24) {
          output.write(3, 2);
          len = bits - 24;
        }else if(bits > 16) {
          output.write(2, 2);
          len = bits-16;
        }else if(bits > 8) {
          output.write(1, 2);
          len = bits - 8;
        }else{
          output.write(0, 2);
        }

        if (len > 0) {
            if ((len % 2) > 0) {
                len = len / 2;
                output.write(len, 2);
                output.write(false);
            } else {
                len = len / 2 - 1;
                output.write(len, 2);
            }

            output.write(next, bits);
        }
    }
}

public static short decode(BinaryIn input, int[] buffer, int offset) {
    short length = input.readShort();
    int value = input.readInt();
    buffer[offset] = value;

    for (int i = 1; i < length; i++) {
        int flag = input.readInt(2);

        int bits;
        int next = 0;
        switch (flag) {
            case 0:
                bits = 2 * input.readInt(2) + 2;
                next = input.readInt(bits);
                break;
            case 1:
                bits = 8 + 2 * input.readInt(2) +2;
                next = input.readInt(bits);
                break;
            case 2:
                bits = 16 + 2 * input.readInt(2) +2;
                next = input.readInt(bits);
                break;
            case 3:
                bits = 24 + 2 * input.readInt(2) +2;
                next = input.readInt(bits);
                break;
        }

        buffer[offset + i] = buffer[offset + i - 1] + next;
    }

    return length;
}

Please note, this approach:

  1. does not consume a lot of memory
  2. works with streams
  3. provides decent results

The full code can be found here; the BinaryInput and BinaryOutput implementations can be found here.

Final conclusion

No final conclusion :) Sometimes it is a really good idea to move one level up and review the task from a meta-level point of view.

It was fun to spend some time on this task. BTW, there are a lot of interesting answers below. Thank you for your attention and happy coding.

Viruses.
#7 · 2020-02-16 05:43

(My original answer was wrong, sorry for the bad math, see below the break.)

How about this?

The first 27 bits store the lowest number you have seen; then comes the difference to each subsequent number seen, encoded as follows: 5 bits to store the number of bits used for the difference, then the difference itself. Use 00000 to indicate that you saw the same number again.

This works because, as more numbers are inserted, the average difference between adjacent numbers goes down, so you use fewer bits to store each difference as you add more numbers. I believe this is called a delta list.
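Here is a minimal sketch of that encoding (my illustration; the BitWriter helper is hypothetical, not from the original post):

import java.util.ArrayList;
import java.util.List;

// Sketch of the 5-bit-length-prefix + difference encoding described above.
public class DeltaListSketch {
    static class BitWriter {
        final List<Boolean> bits = new ArrayList<>();
        // append the lowest 'width' bits of 'value', most significant first
        void write(long value, int width) {
            for (int i = width - 1; i >= 0; i--) bits.add(((value >> i) & 1) != 0);
        }
        int size() { return bits.size(); }
    }

    // 'sorted' must be in ascending order
    static BitWriter encode(int[] sorted) {
        BitWriter w = new BitWriter();
        w.write(sorted[0], 27);                    // lowest number, 27 bits
        for (int i = 1; i < sorted.length; i++) {
            int diff = sorted[i] - sorted[i - 1];
            if (diff == 0) {
                w.write(0, 5);                     // 00000 marks a duplicate
            } else {
                int bits = 32 - Integer.numberOfLeadingZeros(diff);
                w.write(bits, 5);                  // 5-bit length prefix
                w.write(diff, bits);               // the difference itself
            }
        }
        return w;
    }

    public static void main(String[] args) {
        int[] sample = { 0, 100, 100, 250, 99999999 };
        System.out.println("encoded size: " + encode(sample).size() + " bits");
    }
}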

The worst case I can think of is all numbers evenly spaced (by 100), e.g. Assuming 0 is the first number:

000000000000000000000000000 00111 1100100
                            ^^^^^^^^^^^^^
                            a million times

27 + 1,000,000 * (5+7) bits = ~12M bits ≈ 1.43M bytes — already over the 1M budget.

Reddit to the rescue!

If all you had to do was sort them, this problem would be easy: it takes 122k (1 million bits) to store which numbers you have seen (0th bit on if 0 was seen, 2300th bit on if 2300 was seen, etc.). Note this assumes the values span a range of only 10^6; for the full 8-digit range of 10^8 values, the bitmap alone would need ~12.5 MB.

You read the numbers, store them in the bit field, and then shift the bits out while keeping a count.
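A sketch of that bitmap sort (my addition, using the 10^6-value range this answer assumes). Note that it silently drops duplicates, which is exactly the problem discussed next:

// Bitmap sort for the no-duplicates case: set one bit per value seen,
// then scan the bits in order to emit the sorted output.
public class BitmapSortSketch {
    public static void main(String[] args) {
        int range = 1_000_000;
        long[] seen = new long[(range + 63) / 64];    // 1 bit per value: ~122 KB

        int[] input = { 423, 7, 999999, 7_000, 423 }; // the duplicate 423 is lost!
        for (int v : input) seen[v >> 6] |= 1L << (v & 63);

        // "shift the bits out": emit the index of every set bit, in order
        for (int v = 0; v < range; v++) {
            if ((seen[v >> 6] & (1L << (v & 63))) != 0) {
                System.out.println(v);
            }
        }
    }
}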

BUT, you have to remember how many you have seen. I was inspired by the sublist answer above to come up with this scheme:

Instead of using one bit, use either 2 or 27 bits:

  • 00 means you did not see the number.
  • 01 means you saw it once
  • 1 means you saw it, and the next 26 bits are the count of how many times.

I think this works: if there are no duplicates, you have a 244k list. In the worst case you see each number twice (if you see one number three times, that shortens the rest of the list for you), which means you have seen 50,000 numbers more than once and 950,000 numbers zero or one times.

50,000 * 27 + 950,000 * 2 = 3.25M bits ≈ 396.7k bytes.

You can make further improvements with the following encoding:

  • 0 means you did not see the number
  • 10 means you saw it once
  • 11 is how you keep count (the count follows, as before)

which will, on average, result in 280.7k of storage.

EDIT: my Sunday morning math was wrong.

The worst case is that we see 500,000 different numbers twice, so the math becomes:

500,000 * 27 + 500,000 * 2 = 14.5M bits ≈ 1.73M bytes

The alternate encoding results in an average storage of

500,000 * 27 + 500,000 * 1 = 14M bits ≈ 1.67M bytes

: (
