I'm trying to obfuscate a large amount of data. I've created a list of words (tokens) which I want to replace and I am replacing the words one by one using the StringBuilder class, like so:
var sb = new StringBuilder(one_MB_string);
foreach (var token in tokens)
{
    sb.Replace(token, "new string");
}
It's pretty slow! Are there any simple things that I can do to speed it up?
tokens is a list of about one thousand strings, each 5 to 15 characters in length.
Instead of doing replacements in a huge string (which means that you move around a lot of data), work through the string and replace a token at a time.
Make a list containing the next index for each token, locate the token that comes first, then copy the text up to that token to the result, followed by the token's replacement. Then check where the next occurrence of that token is, to keep the list up to date. Repeat until no more tokens are found, then copy the remaining text to the result.
I made a simple test, and this method did 125,000 replacements on a 1,000,000-character string in 208 milliseconds.
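Here's a rough sketch of that approach in Java (the question is C#, but the idea translates directly; the class and names here are mine, not the original Token/TokenList code):

```java
import java.util.*;

public class Obfuscator {
    // Track the next occurrence of each token, always handle the earliest
    // one, and copy the text between matches straight to the output.
    public static String replaceAll(String source, Map<String, String> replacements) {
        // next occurrence index for each token (-1 = no more occurrences)
        Map<String, Integer> next = new HashMap<>();
        for (String token : replacements.keySet()) {
            next.put(token, source.indexOf(token));
        }
        StringBuilder result = new StringBuilder(source.length());
        int pos = 0;
        while (true) {
            // find the token whose next occurrence is earliest
            String best = null;
            int bestIndex = Integer.MAX_VALUE;
            for (Map.Entry<String, Integer> e : next.entrySet()) {
                int i = e.getValue();
                if (i >= 0 && i < bestIndex) { bestIndex = i; best = e.getKey(); }
            }
            if (best == null) break; // no tokens left anywhere
            // copy the text up to the token, then the token's replacement
            result.append(source, pos, bestIndex).append(replacements.get(best));
            pos = bestIndex + best.length();
            // re-search any token whose stored index we've now passed
            for (Map.Entry<String, Integer> e : next.entrySet()) {
                if (e.getValue() >= 0 && e.getValue() < pos) {
                    e.setValue(source.indexOf(e.getKey(), pos));
                }
            }
        }
        result.append(source, pos, source.length());
        return result.toString();
    }
}
```

Like the code described above, this simplified version has the same overlapping-token caveat.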
Token and TokenList classes:
Example of usage:
Output:
Note: This code does not handle overlapping tokens. If you have, for example, the tokens "pineapple" and "apple", the code doesn't work properly.
Edit:
To make the code work with overlapping tokens, replace this line:
with this code:
OK, you see why it's taking long, right?
You have a 1 MB string, and for each token, Replace iterates through the 1 MB and makes a new 1 MB copy. Well, not an exact copy, as any token found is replaced with the new token value. But for each token you're reading 1 MB, newing up 1 MB of storage, and writing 1 MB.
Now, can we think of a better way of doing this? How about, instead of iterating over the 1 MB string once per token, we walk it just once?
Before walking it, we'll create an empty output string.
As we walk the source string, if we find a token, we'll jump
token.length()
characters forward and write out the obfuscated token. Otherwise, we'll proceed to the next character. Essentially, we're turning the process inside out: doing the for loop on the long string, and at each point looking for a token. To make this fast, we'll want quick lookup for the tokens, so we put them into some sort of associative array (a set).
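A rough Java sketch of that walk-once idea (my code, not the poster's): since token lengths are bounded (5 to 15 characters in the question), only a few substring lengths need testing at each position.

```java
import java.util.*;

public class WalkOnce {
    // Scan the source one position at a time and check, via a hash set,
    // whether any token starts there. minLen/maxLen bound the token lengths.
    public static String replace(String source, Set<String> tokens,
                                 String replacement, int minLen, int maxLen) {
        StringBuilder out = new StringBuilder(source.length());
        int i = 0;
        while (i < source.length()) {
            String match = null;
            // try the longest candidate first so longer tokens win
            for (int len = Math.min(maxLen, source.length() - i); len >= minLen; len--) {
                String candidate = source.substring(i, i + len);
                if (tokens.contains(candidate)) {
                    match = candidate;
                    break;
                }
            }
            if (match != null) {
                out.append(replacement); // write out the obfuscated token
                i += match.length();     // jump token.length() characters forward
            } else {
                out.append(source.charAt(i++));
            }
        }
        return out.toString();
    }
}
```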
In general, what takes longest in programming? New'ing up memory.
Now when we create a StringBuilder, what likely happens is that some amount of space is allocated (say, 64 bytes), and whenever we append more than its current capacity, it probably doubles its space and copies the old character buffer to the new one. (It's possible it can use C's realloc and avoid the copy.)
So if we start with 64 bytes, to get up to 1 MB we allocate and copy: 64, then 128, then 256, then 512, then 1024, then 2048 ... we do this fourteen times to get up to 1 MB. And in getting there, we've allocated roughly 1 MB of buffers just to throw them away.
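A quick check of that arithmetic (the 64-byte starting size and pure doubling are assumptions; real growth policies vary):

```java
public class GrowthCheck {
    public static void main(String[] args) {
        int capacity = 64;
        int doublings = 0;
        long wasted = 0;
        while (capacity < 1_048_576) { // grow until we reach 1 MB
            wasted += capacity;        // the old buffer is discarded each time
            capacity *= 2;
            doublings++;
        }
        // prints: 14 doublings, 1048512 bytes thrown away
        System.out.println(doublings + " doublings, " + wasted + " bytes thrown away");
    }
}
```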
Pre-allocating, by using something analogous to C++'s
reserve()
function, will at least let us do that all at once. But it's still all at once for each token. You're producing at least a 1 MB temporary string for each token. If you have 2000 tokens, you're allocating about 2 billion bytes of memory, all to end up with 1 MB. Each 1 MB throwaway contains the transformation of the previous resulting string, with the current token applied. And that's why this is taking so long.
Now yes, deciding which token to apply (if any), at each character, also takes time. You may wish to use a regular expression, which internally builds a state machine to run through all possibilities, rather than a set lookup, as I suggested initially. But what's really killing you is the time to allocate all that memory, for 2000 copies of a 1 MB string.
Dan Gibson suggests:
That was my reasoning behind putting them into an associative array (e.g., a Java HashSet). But the other problem is matching: if one token is "a" and another is "an", that is, if there are any common prefixes, how do we decide which one matches?
This is where Keltex's answer comes in handy: he delegates the matching to a regex, which is a great idea, as a regex already defines how matching works (greedy matching) and implements it. Once the match is made, we can examine what's captured, then use a Java Map (also an associative array) to look up the obfuscated token for the matched, unobfuscated one.
I wanted to concentrate my answer not just on how to fix this, but on why there was a problem in the first place.
If you can find your tokens via a regular expression, you can do something like this:
Then define Replacer as:
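A sketch of that idea in Java (the C# version would use Regex.Replace with a MatchEvaluator delegate; the class and names here are illustrative, not the original snippet):

```java
import java.util.*;
import java.util.regex.*;

public class RegexObfuscator {
    // Build one alternation of all tokens, then let a callback supply each
    // replacement from a map -- one pass over the whole string.
    public static String obfuscate(String source, Map<String, String> replacements) {
        if (replacements.isEmpty()) return source;
        // longer tokens first, so "pineapple" wins over "apple"
        List<String> tokens = new ArrayList<>(replacements.keySet());
        tokens.sort((a, b) -> b.length() - a.length());
        StringJoiner alternation = new StringJoiner("|");
        for (String token : tokens) {
            alternation.add(Pattern.quote(token)); // escape regex metacharacters
        }
        Matcher m = Pattern.compile(alternation.toString()).matcher(source);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            // look up the replacement for whichever token matched
            m.appendReplacement(out, Matcher.quoteReplacement(replacements.get(m.group())));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

Sorting the alternation longest-first also sidesteps the overlapping-token problem mentioned in the other answer.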
Would it be faster to build the string one token at a time, only replacing if need be? For this,
GetObfuscatedString()
could be implemented like so:
Now, you can add each token to the builder like this:
You'll only have to make one pass over the string, and it might be faster.
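A Java sketch of that one-pass, token-at-a-time idea (splitting on spaces is a simplification; the original post's tokenizer and GetObfuscatedString() aren't shown, so the lookup here is my stand-in):

```java
import java.util.*;

public class TokenBuilder {
    // Stand-in for GetObfuscatedString(): return the obfuscated form if the
    // map has one, otherwise the token unchanged.
    public static String getObfuscatedString(String token, Map<String, String> map) {
        return map.getOrDefault(token, token);
    }

    // Walk the source once, appending each word (or its replacement)
    // to a single builder.
    public static String build(String source, Map<String, String> map) {
        StringBuilder sb = new StringBuilder(source.length());
        for (String word : source.split(" ", -1)) {
            if (sb.length() > 0) sb.append(' ');
            sb.append(getObfuscatedString(word, map));
        }
        return sb.toString();
    }
}
```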