Efficiently choosing a random line from a text file

Published 2019-04-22 17:55

Question:

This is essentially a more constrained version of this question.

Suppose we have a very large text file, containing a large number of lines.

We need to choose a line at random from the file, with uniform probability, but there are constraints:

  • Because this is a soft realtime application, we cannot iterate over the entire file. The choice should take a constant-ish amount of time.
  • Because of memory constraints, the file cannot be cached.
  • Because the file is permitted to change at runtime, the length of the file cannot be assumed to be a constant.

My first thought is to use an lstat() call to get the total filesize in bytes. fseek() can then be used to directly access a random byte offset, getting something like O(1) access into a random part of the file.

The problem is that we can't then do something like read to the next newline and call it a day, because that would produce a distribution biased toward long lines.

My first thought for solving this issue is to read the first "n" newline-terminated lines after the seek point (wrapping back to the file's beginning if required), and then choose a line with uniform probability from this smaller set. It is safe to assume the file's contents are randomly ordered, so this sub-sample should be uniform with respect to length, and, since its starting point was selected uniformly from all possible points, it should represent a uniform choice from the file as a whole. So, in pseudo-C, our algorithm looks something like:

    struct stat filestat;
    lstat(filepath, &filestat);  // total file size is in filestat.st_size
    fseek(file, (long)(filestat.st_size*drand48()), SEEK_SET); // jump to a random byte
    char sample[n][BUFSIZ];
    for(int i=0;i<n;i++)
        fgets(sample[i], BUFSIZ, file); // plus some stuff to deal with file wraparound...
    return sample[(int)(n*drand48())];  // uniform pick among the n lines read

This doesn't seem like an especially elegant solution, and I'm not completely confident it will be uniform, so I'm wondering if there's a better way to do it. Any thoughts?

EDIT: On further consideration, I'm now pretty sure this method is not uniform: the starting point is more likely to fall inside a longer line, so the sub-sample it produces is not uniform either. Tricky!

Answer 1:

Select a random character from the file (via rand and fseek, as you noted). Then, instead of simply scanning to the next newline (which, as you noted, is biased toward long lines), I would apply the following algorithm:


Is the character a newline character?
   yes - use the preceding line
   no  - try again

I can't see how this could give anything but a uniform distribution of lines. The efficiency depends on the average length of a line. If your file has relatively short lines, this could be workable, though if the file really can't be precached even by the OS, you might pay a heavy price in physical disk seeks.
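
For what it's worth, here is a minimal sketch of this rejection scheme in C. It is my own illustration rather than code from the answer: the function name random_line is hypothetical, srand48() is assumed to have been seeded already, and error handling is mostly omitted. One corner case to note: if the file's last line lacks a trailing newline, it can never be selected.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>

    /* Pick a uniformly random byte; accept only if it is '\n', then
       return the line that this newline terminates. Every line owns
       exactly one newline, so each is selected with equal probability. */
    int random_line(const char *path, char *out, int outlen){
        struct stat st;
        if(stat(path, &st) != 0) return -1;
        FILE *f = fopen(path, "r");
        if(!f) return -1;

        for(;;){
            off_t pos = (off_t)(st.st_size * drand48());
            fseek(f, pos, SEEK_SET);
            if(fgetc(f) != '\n')
                continue;                /* not a newline: try again */
            /* Accepted: scan backwards to the start of this line. */
            off_t start = pos;
            while(start > 0){
                fseek(f, start - 1, SEEK_SET);
                if(fgetc(f) == '\n') break;
                start--;
            }
            fseek(f, start, SEEK_SET);
            fgets(out, outlen, f);       /* reads up to and including pos */
            fclose(f);
            return 0;
        }
    }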



Answer 2:

A solution was found that works surprisingly well. Documenting it here for myself and others.

This example code does around 80,000 draws per second in practice, with a mean line length that matches that of the file to 4 significant digits on most runs. In contrast, I get around 250 draws per second using the method from the cross-referenced question.

Essentially, it samples a random place in the file, then discards it and draws again with probability inversely proportional to the line length. This cancels out the bias toward longer lines. On average, the method makes a number of draws equal to the file's average line length before accepting one.
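
To see why this works: a random byte lands inside line i with probability len(i)/total_bytes, and that draw then survives the rejection with probability 1/len(i), so every line is accepted with the same per-draw probability of 1/total_bytes.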

Some notable drawbacks:

  • Files with longer line lengths will produce more rejections per draw, making this much slower.
  • Files with longer line lengths require a larger constant than 50 in the rdraw function, which appears to mean much longer seek times in practice if line lengths exhibit high variance. For instance, setting it to BUFSIZ on one file I tested with reduced draw speeds to around 10,000 draws per second. Still much faster than counting lines in the file though.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <sys/stat.h>

    /* Seek to a random byte, back up (at most) 50 bytes, then read
       forward in 50-byte chunks so that `storage` ends up holding the
       text surrounding the random byte. Returns its length. */
    int rdraw(FILE *where, char *storage, off_t bytes){
        off_t offset = (off_t)(bytes*drand48());
        off_t initial_seek = offset>50 ? offset-50 : 0;
        fseek(where, initial_seek, SEEK_SET);
        off_t chars_read = 0;
        while(chars_read + initial_seek < offset){
                fgets(storage, 50, where);
                chars_read += strlen(storage);
        }
        return strlen(storage);
    }

    int main(void){
        srand48(time(NULL));
        struct stat blah;
        stat("/usr/share/dict/words", &blah);
        FILE *where = fopen("/usr/share/dict/words", "r");
        off_t bytes = blah.st_size;
        char b[BUFSIZ+1];

        int i;
        for(i=0; i<1000000; i++){
                /* Rejection step: accept with probability 1/length,
                   cancelling the length bias of the random seek. */
                while(drand48() > 1.0/rdraw(where, b, bytes))
                        ;  /* rejected: draw again */
        }
        return 0;
    }


Answer 3:

If the file only changes at the end (lines are only appended), you can create an algorithm with uniform probability:

Preparation: Create an index file that contains the byte offset of every n-th line. Use a fixed-width format so that a record's position in the index determines which record it is.

  1. Open the index file and read the last record. Use ftell to determine the record number.

  2. Open the big file and fseek to the offset obtained in step 1.

  3. Read the big file from that offset to the end, counting the number of newlines. Together with the record number from step 1, this gives the total number of lines in the big file.

  4. Generate a random number up to the number of lines obtained in step 3.

  5. fseek to, and read, the appropriate record in the index file (record number = random number divided by n).

  6. fseek to that offset in the large file and skip the remainder (random number modulo n) of lines.

  7. Read the line!

Example

Let's assume we chose n=100 and that the large file contains 367 lines.

Index file:

00000000,00004753,00009420,00016303
  1. The index file has 4 records, so the large file contains at least 300 lines (100 * (4 - 1)). The last offset is 16303.

  2. Open the large file and fseek to 16303.

  3. Count the remaining number of lines (67).

  4. Generate a random number in the range [0, 366]. Let's say we got 112.

  5. 112/100 = 1 with remainder 12. Read the index file record number 1. We get the result 4753.

  6. fseek to 4753 in the large file and then skip 11 (12-1) lines.

  7. Read the 12th line.

Voila!
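
Here is a minimal sketch of the whole scheme in C, under assumptions of my own rather than the answer's: n = 100, each index record is an 8-digit zero-padded offset plus a comma (9 bytes, matching the example above), lines are numbered from 0, srand48() has been seeded, and error handling is omitted. build_index and random_indexed_line are hypothetical names.

    #include <stdio.h>
    #include <stdlib.h>

    #define N    100   /* index every N-th line */
    #define RECW 9     /* "00000000," = 8 digits + comma */

    /* Preparation: record the byte offset of lines 0, N, 2N, ... */
    void build_index(const char *datapath, const char *indexpath){
        FILE *data = fopen(datapath, "r");
        FILE *idx  = fopen(indexpath, "w");
        char buf[BUFSIZ];
        long line = 0, offset = 0;
        while(fgets(buf, BUFSIZ, data)){
            if(line % N == 0)
                fprintf(idx, "%08ld,", offset);  /* fixed-width record */
            offset = ftell(data);
            line++;
        }
        fclose(data); fclose(idx);
    }

    /* Steps 1-7: count lines via the index tail, then draw uniformly. */
    void random_indexed_line(const char *datapath, const char *indexpath,
                             char *out, int outlen){
        FILE *idx  = fopen(indexpath, "r");
        FILE *data = fopen(datapath, "r");
        char rec[RECW + 1], buf[BUFSIZ];

        /* 1-3: last index record + trailing line count = total lines */
        fseek(idx, -RECW, SEEK_END);
        long records = ftell(idx) / RECW + 1;   /* position => record no. */
        fread(rec, 1, RECW, idx);
        rec[RECW] = '\0';
        fseek(data, atol(rec), SEEK_SET);
        long total = (records - 1) * N;
        while(fgets(buf, BUFSIZ, data)) total++;

        /* 4: uniform line number in [0, total) */
        long pick = (long)(total * drand48());

        /* 5-7: jump via index record pick/N, then skip pick%N lines */
        fseek(idx, (pick / N) * RECW, SEEK_SET);
        fread(rec, 1, RECW, idx);
        rec[RECW] = '\0';
        fseek(data, atol(rec), SEEK_SET);
        for(long i = 0; i <= pick % N; i++)
            fgets(out, outlen, data);           /* last read is the pick */

        fclose(idx); fclose(data);
    }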

Edit:

I saw the comment about the target file changing. If the target file changes only rarely, then this may still be a viable approach. You would need to create a new index file before switching target files. You may also want to update the index file when the target file has grown by more than n lines.