Palindrome detection efficiency

Posted 2020-02-08 08:30

Jon Limjap's interview mishap got me curious, and I started looking for efficient ways to do palindrome detection. I checked the palindrome golf answers, and it seems to me that there are only two algorithms in the answers: reversing the string, and checking from the tail and head inward.

def palindrome_short(s):
    length = len(s)
    for i in xrange(0,length/2):
        if s[i] != s[(length-1)-i]: return False
    return True

def palindrome_reverse(s):
    return s == s[::-1]

I don't think either of these methods is used for detecting exact palindromes in huge DNA sequences. I looked around a bit and didn't find any freely available article about what an ultra-efficient way to do this might be.

A good way might be to parallelize the first version in a divide-and-conquer approach, assigning a pair of character ranges, 1..n and length-1-n..length-1, to each thread or processor.
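Something like this rough sketch is what I have in mind (concurrent.futures and the chunking are just for illustration; with CPython's GIL the threads won't actually run the comparisons in parallel, so this only shows how the work could be split):

from concurrent.futures import ThreadPoolExecutor

def check_range(s, start, stop):
    # Compare s[i] with its mirror s[len(s)-1-i] for i in [start, stop).
    n = len(s)
    return all(s[i] == s[n - 1 - i] for i in range(start, stop))

def palindrome_parallel(s, workers=4):
    half = len(s) // 2
    chunk = max(1, (half + workers - 1) // workers)
    ranges = [(i, min(i + chunk, half)) for i in range(0, half, chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each worker checks one slice of index pairs; all slices must match.
        return all(pool.map(lambda r: check_range(s, *r), ranges))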

What would be a better way?

Do you know any?

9 Answers
唯我独甜
#2 · 2020-02-08 08:36

They are both O(N), so I don't think there is any particular efficiency problem with either of these solutions. Maybe I am not creative enough, but I can't see how it would be possible to compare N elements in fewer than N steps, so something like O(log N) is definitely not possible IMHO.

Parallelism might help, but it still wouldn't change the big-O complexity of the algorithm, since it is equivalent to running it on a faster machine.

Evening l夕情丶
#3 · 2020-02-08 08:38

Obviously, you're not going to be able to get better than O(n) asymptotic efficiency, since each character must be examined at least once. You can get better multiplicative constants, though.

For a single thread, you can get a speedup using assembly. You can also do better by examining data in chunks larger than a byte at a time, though this may be tricky due to alignment considerations. You'll do even better with SIMD, if you can examine chunks as large as 16 bytes at a time.
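This isn't SIMD assembly, but here is a rough Python/NumPy analogue of the same idea (NumPy, the byte view, and the ASCII assumption are my additions, not the answer's): compare the first half against the reversed second half in bulk rather than one character at a time.

import numpy as np

def palindrome_vectorized(s):
    # View the text as raw bytes so the comparison runs over wide,
    # contiguous chunks instead of one Python character object at a time.
    data = np.frombuffer(s.encode("ascii"), dtype=np.uint8)  # assumes ASCII input, e.g. DNA
    half = len(data) // 2
    # Compare the first half against the reversed second half in bulk.
    return bool(np.array_equal(data[:half], data[:-half - 1:-1]))

The comparison loop runs over contiguous bytes in C, which gives the "chunks larger than a byte" effect, though a mismatch near the start no longer bails out early.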

If you wanted to parallelize it over N processors, you could divide the work into N pieces and have processor i compare the segment [i*L/(2N), (i+1)*L/(2N)) with the mirrored segment [L-(i+1)*L/(2N), L-i*L/(2N)), where L is the string length.

家丑人穷心不美
#4 · 2020-02-08 08:38
public class Palindrome{
    private static boolean isPalindrome(String s){
        if(s == null)
            return false;   // uninitialized String? return false
        if(s.isEmpty())     // an empty String is a palindrome
            return true;
        // we want to check characters on opposite sides of the string
        // and stop in the middle <divide and conquer>
        int left = 0;               // left-most char
        int right = s.length() - 1; // right-most char

        while(left < right){  // this elegantly handles odd-length strings
            if(s.charAt(left) != s.charAt(right)) // the left char must equal the right
                return false;                     // char, else it's not a palindrome
            left++;  // converge left toward right
            right--; // converge right toward left
        }
        return true; // the loop finished without a mismatch
    }
}

Though we are doing n/2 comparisons, it's still O(n). This could also be done using threads, but the calculations get messy; it's best to avoid that. This doesn't test for special characters and is case sensitive. I have code that handles those, but this code can easily be modified to do so.
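If it helps, here is a rough sketch of that normalization step in Python (my illustration; the poster's own code for it isn't shown):

import re

def is_palindrome_ignoring_case_and_symbols(s):
    # Keep only letters and digits, then lowercase, so punctuation,
    # spaces and case don't affect the check.
    cleaned = re.sub(r'[^0-9A-Za-z]', '', s).lower()
    return cleaned == cleaned[::-1]

For example, is_palindrome_ignoring_case_and_symbols("A man, a plan, a canal: Panama") returns True.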

Fickle 薄情
#5 · 2020-02-08 08:42

Comparing from the center is always much more efficient, since you can bail out early on a mismatch, but it also lets you do a faster maximal-palindrome search, regardless of whether you are looking for the maximal radius or for all non-overlapping palindromes.
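To illustrate the center-outward idea for maximal-palindrome search, here is a naive expand-around-center sketch in Python (my illustration, not the poster's code; odd-length centers only, for brevity):

def expand(s, center):
    # Grow the radius while the mirrored characters around `center` match;
    # the first mismatch stops the scan immediately (the early bail-out).
    r = 0
    while center - r - 1 >= 0 and center + r + 1 < len(s) and s[center - r - 1] == s[center + r + 1]:
        r += 1
    return r

def longest_odd_palindrome(s):
    # Try every center; O(n^2) worst case, but misses are cheap.
    best_c, best_r = 0, 0
    for c in range(len(s)):
        r = expand(s, c)
        if r > best_r:
            best_c, best_r = c, r
    return s[best_c - best_r:best_c + best_r + 1]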

The only real parallelization opportunity is when you have multiple independent strings to process. Splitting a single string into chunks wastes a lot of work on every miss, and there are always many more misses than hits.
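A minimal sketch of that kind of parallelism over independent strings, assuming a plain list of sequences and a multiprocessing.Pool (my own framing, not the poster's code):

from multiprocessing import Pool

def is_palindrome(seq):
    return seq == seq[::-1]

def which_are_palindromes(sequences, workers=4):
    # Each worker checks whole, independent strings, so no partial work
    # is wasted and the processes need no coordination.
    with Pool(processes=workers) as pool:
        return pool.map(is_palindrome, sequences)

(Under a spawn-based start method this needs to run behind an if __name__ == "__main__": guard.)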

The star
#6 · 2020-02-08 08:43

For a single palindrome check, you will have to do it in O(N), yes. You can get more efficiency out of multiple processors by splitting the string, as you said.

Now say you want to do exact DNA matching. These strings are thousands of characters long, and they are very repetitive. This gives us the opportunity to optimize.

Say you split a 1000-character string into 5 pairs of 100-character segments. The code would look like this:

isPal(w[0:100], w[-100:]) and isPal(w[100:200], w[-200:-100]) and ...

and so on. The first time you do these matches, you will have to process them. However, you can add the results you've already computed to a hashtable mapping pairs to booleans:

isPal = {("ATTAGC", "CGATTA"): True, ("ATTGCA", "CAGTAA"): False}

This will take way too much memory, though. For pairs of 100-character segments, the hash map could have up to 2*4^100 entries. Even if you only store two 32-bit hashes of the strings as the key, you would need something like 10^55 megabytes, which is ridiculous.

Maybe if you use smaller segments the problem becomes tractable. You'll still have a huge hashmap, but at least a palindrome check for, say, a 10x10 pair will take O(1), so checking whether a 1000-character string is a palindrome will take 100 lookups instead of 500 compares. It's still O(N), though...
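For what it's worth, a minimal sketch of that memoization idea with small fixed-size segments (functools.lru_cache and the 10-character chunk size are my own choices):

from functools import lru_cache

@lru_cache(maxsize=None)
def pair_is_mirrored(left, right):
    # True if `right` read backwards equals `left`; the cache means a
    # repetitive sequence only ever pays for each distinct pair once.
    return left == right[::-1]

def palindrome_memoized(s, chunk=10):
    n = len(s)
    return all(
        pair_is_mirrored(s[i:i + chunk], s[max(0, n - i - chunk):n - i])
        for i in range(0, n // 2, chunk)
    )

Each lookup still has to hash the two segments, so it's O(chunk) per pair rather than truly O(1), but on highly repetitive sequences most pairs are cache hits.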

倾城 Initia
#7 · 2020-02-08 08:44

Another variant of your second function. We don't need to compare the right half of the normal string with the right half of the reversed string; comparing the first half against the reversed second half is enough.

def palindrome_reverse(s):
    l = len(s) // 2
    return s[:l] == s[len(s) - l:][::-1]  # first half vs reversed second half