Need help designing a search algorithm in a more efficient way

Posted 2020-07-11 05:45

I have a problem from the field of biology. Right now I have 4 VERY LARGE files (each with about 100 million lines), but their structure is rather simple: each line of these files has only 2 fields, each representing a type of gene.

My goal is to design an efficient algorithm that achieves the following: find a circle within the contents of these 4 files, where a circle is defined as:

field #1 in a line in file 1 == field #1 in a line in file 2 and
field #2 in a line in file 2 == field #1 in a line in file 3 and
field #2 in a line in file 3 == field #1 in a line in file 4 and
field #2 in a line in file 4 == field #2 in a line in file 1
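
For example (with made-up gene names), the following four lines, one from each file, form a circle:

  file 1: geneA geneD
  file 2: geneA geneB
  file 3: geneB geneC
  file 4: geneC geneD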

I cannot think of a decent way to solve this, so for now I have just written a brute-force, stupid, 4-layer nested loop. I am considering sorting the files in alphabetical order, and even if that might help a little, it is also obvious that the computer's memory would not allow me to load everything at once. Can anybody tell me a good way to solve this problem that is efficient in both time and space? Thanks!!

4 answers
看我几分像从前
#2 · 2020-07-11 06:31

Here is one algorithm:

  • Select an appropriate lookup structure: if field #1 is an integer, use a bit-field; if it is a string, use a dictionary (or a set). Use one lookup structure per file, i.e. 4 in your case.
  • Initialization phase: for each file, parse it line by line and either set the appropriate bit in the bit-field or add the field to the dictionary in that file's lookup structure.
  • After initializing the lookup structures, check the condition in your question (see the sketch below).

The complexity of this depends on the lookup structure implementation. For bit-fields a lookup is O(1); for a set or dictionary implemented as a balanced search tree it is O(log n) (hash-based implementations are O(1) on average). The complete complexity will be O(n) or O(n log n); the solution in your question has complexity O(n^4).
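
A minimal Python sketch of the dictionary variant (the file names and the whitespace-separated two-field format are assumptions, and it assumes the keys of files 2-4 fit in memory):

    from collections import defaultdict

    def load_map(path):
        """Map field #1 -> set of field #2 values for one file."""
        mapping = defaultdict(set)
        with open(path) as f:
            for line in f:
                a, b = line.split()
                mapping[a].add(b)
        return mapping

    # One lookup structure per file; file 1 is streamed instead of stored.
    m2, m3, m4 = (load_map(p) for p in ("file2.txt", "file3.txt", "file4.txt"))

    with open("file1.txt") as f1:
        for line in f1:
            a1, b1 = line.split()
            for b2 in m2.get(a1, ()):          # field #1 file 1 == field #1 file 2
                for b3 in m3.get(b2, ()):      # field #2 file 2 == field #1 file 3
                    if b1 in m4.get(b3, ()):   # field #2 file 3 == field #1 file 4
                        print(a1, b2, b3, b1)  # and field #2 file 4 == field #2 file 1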

You can get the code and solution for bit fields from here

HTH

一纸荒年 Trace。
#3 · 2020-07-11 06:33

Here is one approach:

We will use the notation Fxy, where x = field number and y = file number.

Sort each of the 4 files on the first fields.

For each field F11, find a match in file 2. Since both files are sorted, this is linear. Save these matches, with all four fields, to a new file. Then use the corresponding field of this new file to get all the matches from file 3. Continue with file 4 and back to file 1.

In this way, as you progress to each new file, you are dealing with fewer lines. And since you have sorted the files, the search is linear and can be done by reading from disk.

Here the complexity is O(n log n) for sorting and O(m log n) for lookup, assuming m << n.
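
A minimal Python sketch of one matching pass as a sort-merge join, assuming whitespace-separated fields and inputs already sorted on the join field (e.g. with the Unix sort utility); the file names are placeholders:

    import itertools

    def groups(path, key_index):
        """Yield (key, [lines]) groups from a file sorted on field key_index."""
        with open(path) as f:
            for key, grp in itertools.groupby(f, key=lambda ln: ln.split()[key_index]):
                yield key, list(grp)

    def merge_join(path_a, key_a, path_b, key_b, out_path):
        """Join two sorted files on the given field indices in one linear scan."""
        with open(out_path, "w") as out:
            ga, gb = groups(path_a, key_a), groups(path_b, key_b)
            a, b = next(ga, None), next(gb, None)
            while a is not None and b is not None:
                if a[0] < b[0]:
                    a = next(ga, None)
                elif a[0] > b[0]:
                    b = next(gb, None)
                else:
                    # Matching key: emit every pairing of the two groups,
                    # combining all fields into one record.
                    for la in a[1]:
                        for lb in b[1]:
                            out.write(la.rstrip("\n") + " " + lb)
                    a, b = next(ga, None), next(gb, None)

    # First pass: match F11 against F12 (both files sorted on field #1).
    merge_join("file1.sorted", 0, "file2.sorted", 0, "matches12.txt")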

虎瘦雄心在
#4 · 2020-07-11 06:40

First of all, I note that you can sort a file without holding it in memory all at once, and most operating systems have some program that does this, often called just "sort". Usually you can get it to sort on a field within the file, but if not, you can rewrite each line to get it to sort the way you want.

Given this, you can connect two files by sorting them so that the first is sorted on field #1 and the second on field #2. You can then create one record for each match, combining all the fields, holding in memory only a chunk from each file in which all the fields you have sorted on have the same value. This will allow you to connect the result with another file - four such connections should solve your problem.

Depending on your data, the time it takes to solve your problem may depend on the order in which you make the connections. One rather naive way to make use of this is, at each stage, to take a small random sample from each file, use it to estimate how many results each possible connection would produce, and choose the connection that produces the fewest. One way to take a random sample of N items from a large file is to keep the first N lines and then, once you have read m lines so far, read the next line and with probability N/(m + 1) exchange one of the N held lines for it, otherwise throw it away. Keep on until you have read through the whole file.
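
A minimal Python sketch of that sampling scheme (it is known as reservoir sampling); the function name is a placeholder:

    import random

    def reservoir_sample(path, n):
        """Draw a uniform random sample of n lines from a file of any size."""
        sample = []
        with open(path) as f:
            for m, line in enumerate(f):
                if m < n:
                    sample.append(line)  # keep the first n lines
                else:
                    # Keep the new line with probability n / (m + 1),
                    # replacing one of the held lines chosen at random.
                    j = random.randrange(m + 1)
                    if j < n:
                        sample[j] = line
        return sample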

ゆ 、 Hurt°
#5 · 2020-07-11 06:40

It's a bit easier to explain if your File 1 is the other way around (so each second element points to a first element in the next file).

  1. Start with File 1, copy it to a new file writing each A, B pair as B, A, 'REV'

  2. Append the contents of File 2 to it writing each A, B pair as A, B, 'FWD'

  3. Sort the file

  4. Process the file in chunks with the same initial value

    • Within that chunk, group the lines into REVs and FWDs
    • Take the Cartesian product of the REVs and the FWDs (a nested loop)
    • Write a line consisting of reverse(fwd) concatenated with rev, excluding the repeated token
    • e.g. B, A, 'REV' and B, C, 'FWD' -> C, B, A, 'REV'
  5. Append the next file to this new output file (adding 'FWD' to each line)

  6. Repeat from step 3

In essence you are building up a chain in reverse order and using a file-based sort algorithm to put sequences together that can be combined.
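
A minimal Python sketch of one combine pass (steps 3-4 above), assuming whitespace-separated tokens and an input file already sorted on its first field; file names are placeholders:

    import itertools

    def combine_pass(in_path, out_path):
        """Join sorted REV chains to FWD lines that share their first token."""
        with open(in_path) as fin, open(out_path, "w") as fout:
            rows = (line.split() for line in fin)
            for key, grp in itertools.groupby(rows, key=lambda t: t[0]):
                chunk = list(grp)
                revs = [r for r in chunk if r[-1] == "REV"]
                fwds = [r for r in chunk if r[-1] == "FWD"]
                # Cartesian product: reverse(fwd) + rev, dropping the repeated
                # token, e.g. B A REV and B C FWD -> C B A REV
                for rev in revs:
                    for fwd in fwds:
                        tokens = list(reversed(fwd[:-1]))[:-1] + rev
                        fout.write(" ".join(tokens) + "\n")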

Of course it would be even easier to just read these files into a database and let it do the work ...
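
For instance, a minimal sketch with Python's built-in sqlite3 module (table and file names are assumptions):

    import sqlite3

    con = sqlite3.connect("genes.db")
    cur = con.cursor()
    for i in (1, 2, 3, 4):
        cur.execute(f"CREATE TABLE f{i} (a TEXT, b TEXT)")  # field #1, field #2
        with open(f"file{i}.txt") as f:
            cur.executemany(f"INSERT INTO f{i} VALUES (?, ?)",
                            (line.split() for line in f))
        cur.execute(f"CREATE INDEX idx_f{i}_a ON f{i}(a)")  # index the join key
    con.commit()

    # The circle condition from the question, expressed as a 4-way join.
    query = """
        SELECT f1.a, f2.b, f3.b, f4.b
        FROM f1
        JOIN f2 ON f2.a = f1.a   -- field #1 file 1 == field #1 file 2
        JOIN f3 ON f3.a = f2.b   -- field #2 file 2 == field #1 file 3
        JOIN f4 ON f4.a = f3.b   -- field #2 file 3 == field #1 file 4
        WHERE f4.b = f1.b        -- field #2 file 4 == field #2 file 1
    """
    for row in cur.execute(query):
        print(row)  # one circle per row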
