I know how to create a random string, like:

```python
''.join(secrets.choice(string.ascii_uppercase + string.digits) for _ in range(N))
```

However, there should be no duplicates, so I am currently just checking whether the key already exists in a list, as shown in the following code:
```python
import secrets
import string
import numpy as np

amount_of_keys = 40000
keys = []
for i in range(0, amount_of_keys):
    N = np.random.randint(12, 20)
    n_key = ''.join(secrets.choice(string.ascii_uppercase + string.digits) for _ in range(N))
    if n_key not in keys:
        keys.append(n_key)
```
This is okay for a small number of keys like 40000; however, the problem does not scale well the more keys there are. So I am wondering if there is a faster way to get to the result for even more keys, like 999999?
Alternate approach: Uniqueness in creation rather than by test
The obvious approach to your question would be to generate random output and then check whether it is unique. Though I do not offer an implementation, here is an alternate approach: generate a part that is guaranteed to be unique (for example, an encoded counter), and append randomly generated characters to it. Now you have output that is guaranteed to be unique and appears to be random.
Example
Suppose you want to generate 999999 strings with lengths from 12 to 20. The approach will of course work for all character sets, but let's keep it simple and assume you want to use only 0-9.
Small scale example
Generate randomness:

sdfdsf xxer ver

Generate uniqueness:

xd ae bd

Combine:

xdsdfdsf aexxer bdver
Note that this method assumes that you have a minimum number of characters per entry, which seems to be the case in your question.
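The answer offers no implementation, so here is a minimal sketch of the idea under its assumptions (the names `int_to_base36` and `unique_keys` are mine; a fixed-width counter prefix is used so that one key's prefix can never collide with another's):

```python
import secrets
import string

ALPHABET = string.ascii_uppercase + string.digits  # the question's 36-character set

def int_to_base36(n):
    # encode a counter with the same alphabet, so the unique part blends in
    digits = []
    while True:
        n, rem = divmod(n, 36)
        digits.append(ALPHABET[rem])
        if n == 0:
            return ''.join(reversed(digits))

def unique_keys(amount, min_len=12, max_len=19):
    for counter in range(amount):
        # fixed-width prefix: guarantees uniqueness without any membership test
        prefix = int_to_base36(counter).rjust(4, ALPHABET[0])
        length = min_len + secrets.randbelow(max_len - min_len + 1)
        suffix = ''.join(secrets.choice(ALPHABET) for _ in range(length - len(prefix)))
        yield prefix + suffix
```

Four base36 characters cover 36 ** 4 (about 1.68 million) counters, more than enough for 999999 keys.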
A simple and fast one:
-- Edit: The below refers to a previous revision of Martijn's answer. After our discussion he added another solution to it, which is essentially the same as mine but with some optimizations. They don't help much, though; it's only about 3.4% faster than mine in my testing, so in my opinion they mostly just complicate things. --
Compared with Martijn's final solution in his accepted answer, mine is a lot simpler, about a factor of 1.7 faster, and not biased.

Martijn's has a bias in the first character: `A` appears far too often and `8` far too seldom. I ran my test ten times; his most common first character was always `A` or `B` (five times each), and his least common character was always `7`, `8` or `9` (two, three and five times, respectively). I also checked the lengths separately; length 17 was particularly bad: his most common first character always appeared about 51500 times while his least common first character appeared about 25400 times.

Fun side note: I'm using the `secrets` module that Martijn dismissed :-)

My whole script:
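The script itself did not survive extraction; a minimal sketch of the approach as described (a `set` plus `secrets.choice`; the function name is mine) might look like:

```python
import string
from secrets import choice, randbelow

def produce_amount_keys(amount_of_keys):
    # a set makes the membership test O(1) and silently drops duplicates
    chars = string.ascii_uppercase + string.digits
    keys = set()
    while len(keys) < amount_of_keys:
        length = 12 + randbelow(8)  # 12..19, like np.random.randint(12, 20)
        keys.add(''.join(choice(chars) for _ in range(length)))
    return keys
```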
Basic improvements, sets and local names
Use a set, not a list, and testing for uniqueness is much faster; set membership testing takes constant time independent of the set size, while lists take O(N) linear time. Use a set comprehension to produce a batch of keys at a time, to avoid having to look up and call the `set.add()` method in a loop; properly random, larger keys have a very small chance of producing duplicates anyway.

Because this is done in a tight loop, it is worth your while optimising away all name lookups as much as possible:
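A sketch of such a function (my reconstruction, consistent with the `_randint` keyword argument and `pickchar()` partial explained below, not necessarily the exact original):

```python
import string
from functools import partial
from secrets import choice

import numpy as np

def produce_amount_keys(amount_of_keys, _randint=np.random.randint):
    keys = set()
    # a single callable with all references bound, cheap to call in a loop
    pickchar = partial(choice, string.ascii_uppercase + string.digits)
    while len(keys) < amount_of_keys:
        # produce a whole batch per iteration; the set weeds out duplicates
        keys |= {''.join([pickchar() for _ in range(_randint(12, 20))])
                 for _ in range(amount_of_keys - len(keys))}
    return keys
```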
- The `_randint` keyword argument binds the `np.random.randint` name to a local in the function, which is faster to reference than a global, especially when attribute lookups are involved.
- The `pickchar()` partial avoids looking up attributes on modules or more locals; it is a single callable that has all the references in place, so it is faster to execute, especially when done in a loop.
- The `while` loop keeps iterating only if duplicates were produced. We produce enough keys in a single set comprehension to fill the remainder if there are no duplicates.

Timings for that first improvement
For 100 items, the difference is not that big:
but when you start scaling this up, you'll notice that the O(N) membership test cost against a list really drags your version down:
My version is already almost twice as fast at 10k items; 40k items can be run 10 times in about 32 seconds:
The list version took over 2 minutes, more than ten times as long.
Numpy's random.choice function, not cryptographically strong
You can make this faster still by forgoing the `secrets` module and using `np.random.choice()` instead; this won't produce cryptographic-level randomness, but picking a random character is twice as fast. This makes a huge difference: 10 times 40k keys can now be produced in just 16 seconds:
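A sketch of that variant (my reconstruction), swapping `secrets.choice` for bulk picks with `np.random.choice()`:

```python
import string

import numpy as np

def produce_amount_keys(amount_of_keys, _randint=np.random.randint):
    keys = set()
    chars = list(string.ascii_uppercase + string.digits)
    while len(keys) < amount_of_keys:
        # np.random.choice is fast, but not backed by a CSPRNG
        keys |= {''.join(np.random.choice(chars, _randint(12, 20)))
                 for _ in range(amount_of_keys - len(keys))}
    return keys
```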
Further tweaks with the itertools module and a generator
We can also take the `unique_everseen()` function from the `itertools` module's Recipes section to have it take care of the uniqueness, then use an infinite generator and the `itertools.islice()` function to limit the results to just the number we want. This is slightly faster still, but only marginally so:
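A sketch of that arrangement (my reconstruction), with `unique_everseen()` adapted from the itertools documentation's recipes:

```python
import string
from functools import partial
from itertools import islice
from secrets import choice

import numpy as np

def unique_everseen(iterable):
    # adapted from the "Recipes" section of the itertools documentation
    seen = set()
    seen_add = seen.add
    for element in iterable:
        if element not in seen:
            seen_add(element)
            yield element

def produce_keys(_randint=np.random.randint):
    # an infinite generator of random keys; islice() limits how many we take
    pickchar = partial(choice, string.ascii_uppercase + string.digits)
    while True:
        yield ''.join([pickchar() for _ in range(_randint(12, 20))])

keys = list(islice(unique_everseen(produce_keys()), 500))
```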
os.urandom() bytes and a different method of producing strings
Next, we can follow on from Adam Barnes's idea of using UUID4 (which is basically just a wrapper around `os.urandom()`) and Base64. But by case-folding Base64 and replacing 2 characters with randomly picked ones, his method severely limits the entropy in those strings (you won't produce the full range of unique values possible; a 20-character string only uses `(256 ** 15) / (36 ** 20)` == 1 in every 99437 bits of entropy!).

The Base64 encoding uses both upper- and lowercase characters and digits, but also adds the `-` and `/` characters (or `+` and `_` for the URL-safe variant). For only uppercase letters and digits, you'd have to uppercase the output and map those extra two characters to other random characters, a process that throws away a large amount of entropy from the random data provided by `os.urandom()`. Instead of using Base64, you could also use the Base32 encoding, which uses uppercase letters and the digits 2 through 8, so produces strings with 32 ** n possibilities versus 36 ** n. However, this can speed things up further from the above attempts, and it is really fast:
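A sketch of the Base32 variant (my reconstruction): `length` random bytes always Base32-encode to at least `length` characters, so we can slice the encoding down to size.

```python
import os
from base64 import b32encode

import numpy as np

def produce_amount_keys(amount_of_keys, _urandom=os.urandom,
                        _encode=b32encode, _randint=np.random.randint):
    keys = set()
    while len(keys) < amount_of_keys:
        count = amount_of_keys - len(keys)
        # b32encode yields 8 chars per 5 bytes, so [:length] never hits padding
        keys |= {_encode(_urandom(length))[:length].decode('ascii')
                 for length in _randint(12, 20, count)}
    return keys
```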
40k keys, 10 times, in just over 4 seconds. So about 75 times as fast; the speed of using `os.urandom()` as a source is undeniable.

This is cryptographically strong again; `os.urandom()` produces bytes fit for cryptographic use. On the other hand, we reduced the number of possible strings produced by more than 90% (`((36 ** 20) - (32 ** 20)) / (36 ** 20) * 100` is 90.5); we are no longer using the `0`, `1`, `8` and `9` digits in the outputs.

So perhaps we should use the `urandom()` trick to produce a proper Base36 encoding; we'll have to produce our own `b36encode()` function, and use that:
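A simplified version of such a `b36encode()` function and its use (my own sketch, not the answer's exact speed-tweaked code):

```python
import math
import os
import string

import numpy as np

ALPHABET = string.digits + string.ascii_uppercase  # the 36 base36 'digits'
BITS_PER_DIGIT = math.log2(36)  # about 5.17 bits per base36 character

def b36encode(data):
    # treat the bytes as one big integer and convert it to base 36
    n = int.from_bytes(data, 'big')
    length = math.ceil(len(data) * 8 / BITS_PER_DIGIT)
    chars = []
    for _ in range(length):
        n, idx = divmod(n, 36)
        chars.append(ALPHABET[idx])
    return ''.join(reversed(chars))

def produce_amount_keys(amount_of_keys, _randint=np.random.randint):
    keys = set()
    while len(keys) < amount_of_keys:
        count = amount_of_keys - len(keys)
        # draw just enough bytes to cover `length` base36 digits, then slice
        keys |= {b36encode(os.urandom(math.ceil(length * BITS_PER_DIGIT / 8)))[-length:]
                 for length in _randint(12, 20, count)}
    return keys
```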
This is reasonably fast, and above all produces the full range of 36 uppercase letters and digits:

Granted, the Base32 version is almost twice as fast as this one (thanks to an efficient Python implementation using a table), but using a custom Base36 encoder is still twice the speed of the non-cryptographically-secure `numpy.random.choice()` version.

However, using `os.urandom()` produces bias again; we have to produce more bits of entropy than are required for between 12 and 19 base36 'digits'. For 17 digits, for example, we can't produce `36 ** 17` different values using bytes, only the nearest equivalent of `256 ** 11` bytes, which is about 1.08 times too high, and so we'll end up with a bias towards `A`, `B`, and to a lesser extent `C` (thanks, Stefan Pochmann, for pointing this out).

Picking an integer below (36 ** length) and mapping integers to base36

So we need to reach for a secure random method that can give us values evenly distributed between `0` (inclusive) and `36 ** (desired length)` (exclusive). We can then map the number directly to the desired string.

First, mapping the integer to a string; the following has been tweaked to produce the output string the fastest:
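A straightforward version of that mapping (my own sketch; the answer's version is tweaked further for speed):

```python
import string

def b36number(n, length, _digits=string.digits + string.ascii_uppercase):
    # map an integer 0 <= n < 36**length to a fixed-width base36 string
    chars = []
    for _ in range(length):
        n, idx = divmod(n, 36)
        chars.append(_digits[idx])
    return ''.join(reversed(chars))
```

For example, `b36number(36, 2)` gives `'10'` and `b36number(0, 3)` gives `'000'`.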
Next, we need a fast and cryptographically secure method of picking our number in a range. You could still use `os.urandom()` for this, but then you'd have to mask the bytes down to a maximum number of bits and then loop until your actual value is below the limit. This is actually already implemented, by the `secrets.randbelow()` function. In Python versions < 3.6 you can use `random.SystemRandom().randrange()`, which uses the exact same method with some extra wrapping to support a lower bound greater than 0 and a step size.

Using `secrets.randbelow()`, the function becomes the following, and it is then quite close in speed to the (probably biased) Base64 solution:
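A sketch of the resulting function (my reconstruction, reusing the integer-to-base36 mapping):

```python
import string
from secrets import randbelow

import numpy as np

def b36number(n, length, _digits=string.digits + string.ascii_uppercase):
    # map an integer 0 <= n < 36**length to a fixed-width base36 string
    chars = []
    for _ in range(length):
        n, idx = divmod(n, 36)
        chars.append(_digits[idx])
    return ''.join(reversed(chars))

def produce_amount_keys(amount_of_keys, _randbelow=randbelow,
                        _randint=np.random.randint):
    keys = set()
    while len(keys) < amount_of_keys:
        count = amount_of_keys - len(keys)
        # randbelow() gives an unbiased integer below 36**length
        keys |= {b36number(_randbelow(36 ** int(length)), int(length))
                 for length in _randint(12, 20, count)}
    return keys
```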
This is almost as fast as the Base32 approach, but produces the full range of keys!
So it's a speed race, is it?

Building on the work of Martijn Pieters, I've got a solution which cleverly leverages another library for generating random strings: `uuid`.

My solution is to generate a `uuid4`, Base64-encode it and uppercase it to get only the characters we're after, then slice it to a random length.

This works for this case because the lengths we're after (12-20) are shorter than the shortest Base64 encoding of a `uuid4`. It's also really fast, because `uuid` is very fast. I also made it a generator instead of a regular function, because generators can be more efficient.

Interestingly, using the standard library's `randint` function was faster than `numpy`'s.

Here is the test output:

And here is the entire project (at commit d9925d at the time of writing). Here is the code for `uuidgen()`:
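A sketch of `uuidgen()` consistent with that description (my reconstruction; the two non-alphanumeric Base64 characters are patched with random picks, as in the method Martijn critiques above):

```python
import string
from base64 import b64encode
from random import randint
from secrets import choice
from uuid import uuid4

def uuidgen():
    # infinite generator: b64-encode a uuid4, uppercase it, patch the two
    # non-alphanumeric base64 characters, and slice to a random length
    alphabet = string.ascii_uppercase + string.digits
    while True:
        encoded = b64encode(uuid4().bytes).decode('ascii').upper()
        encoded = encoded.replace('+', choice(alphabet)).replace('/', choice(alphabet))
        yield encoded[:randint(12, 19)]  # 12..19, matching np.random.randint(12, 20)
```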
Thanks to feedback from Martijn Pieters, I've improved the method somewhat, increasing the entropy and speeding it up by a factor of about 1/6th.

There is still a lot of entropy lost in casting all lowercase letters to uppercase. If that's important, then it's possibly advisable to use `b32encode()` instead, which has the characters we want, minus `0`, `1`, `8`, and `9`.

The test output is in the repository; the new commit is 5625fd. The new solution reads as follows:
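A sketch of the improved generator (my reconstruction; the name `urandomgen` follows the comparison in the next update), drawing bytes straight from `os.urandom()` instead of going through the `uuid` module:

```python
import os
import string
from base64 import b64encode
from random import randint
from secrets import choice

def urandomgen():
    # same idea as uuidgen(), but pull the 16 random bytes directly
    # from os.urandom(), skipping uuid4's version/variant bookkeeping
    alphabet = string.ascii_uppercase + string.digits
    while True:
        encoded = b64encode(os.urandom(16)).decode('ascii').upper()
        encoded = encoded.replace('+', choice(alphabet)).replace('/', choice(alphabet))
        yield encoded[:randint(12, 19)]
```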
Martijn's comments on entropy got me thinking. The method I used with `base64` and `.upper()` makes letters SO much more common than numbers. I revisited the problem with a more binary mind-set.

The idea was to take output from `os.urandom()`, interpret it as a long string of 6-bit unsigned numbers, and use those numbers as an index into a rolling array of the allowed characters. The first 6-bit number would select a character from the range `A..Z0..9A..Z01`, the second 6-bit number would select a character from the range `2..9A..Z0..9A..T`, and so on.

This has a slight crushing of entropy in that the first character will be slightly less likely to contain `2..9`, the second character less likely to contain `U..Z0`, and so on, but it's so much better than before.

It's slightly faster than `uuidgen()` and slightly slower than `urandomgen()`, as shown below:

I'm not entirely sure how to eliminate the last bit of entropy crushing; offsetting the start point for the characters will just move the pattern along a little, randomising the offset will be slow, and shuffling the map will still have a period... I'm open to ideas.
The new code is as follows:
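A sketch of that 6-bit rolling-window idea (my reconstruction; `make_string` and `bitmaskgen` are my names, not the repository's):

```python
import os
import string
from random import randint

ALPHABET = string.ascii_uppercase + string.digits  # 36 characters

def make_string(length):
    # read the urandom bytes as a stream of 6-bit numbers; the i-th number
    # indexes a 64-character window of the endlessly repeated alphabet that
    # starts 64 characters further along each step (hence the rolling ranges
    # A..Z0..9A..Z01, then 2..9A..Z0..9A..T, and so on)
    data = int.from_bytes(os.urandom(length), 'big')  # 8*length bits >= 6*length
    chars = []
    for i in range(length):
        six_bits = (data >> (i * 6)) & 0x3F  # 0..63
        chars.append(ALPHABET[(i * 64 + six_bits) % 36])
    return ''.join(chars)

def bitmaskgen():
    # infinite generator of strings with random lengths, like the others
    while True:
        yield make_string(randint(12, 19))
```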
And the full code, along with a more in-depth README on the implementation, is over at de0db8.
I tried several things to speed the implementation up, as visible in the repo. Something that would definitely help is a character encoding where the numbers and ASCII uppercase letters are sequential.
Caveat: This is not cryptographically secure. I want to give an alternative `numpy` approach to the one in Martijn's great answer.

`numpy` functions aren't really optimised to be called repeatedly in a loop for small tasks; rather, it's better to perform each operation in bulk. This approach gives more keys than you need (massively so in this case, because I over-exaggerated the need to overestimate) and so is less memory-efficient, but it is still super fast.

We know that all your string lengths are between 12 and 20. Just generate all the string lengths in one go. We know that the final `set` has the possibility of trimming down the final list of strings, so we should anticipate that and make more "string lengths" than we need. 20,000 extra is excessive, but it's to make a point:

```python
string_lengths = np.random.randint(12, 20, 60000)
```
Rather than create all our sequences in a `for` loop, create a 1D list of characters that is long enough to be cut into 40,000 lists. In the absolute worst-case scenario, all the random string lengths in (1) were the max length of 20. That means we need 800,000 characters.

```python
pool = list(string.ascii_letters + string.digits)
random_letters = np.random.choice(pool, size=800000)
```
Now we just need to chop that list of random characters up. Using `np.cumsum()` we can get the sequential end indices for the sublists, and `np.roll()` will offset that array of indices by 1 (with the first entry reset to 0) to give the corresponding start indices.

```python
ends = string_lengths.cumsum()
starts = np.roll(ends, 1)
starts[0] = 0
```
Chop up the list of random characters by the indices.

```python
final = [''.join(random_letters[starts[x]:ends[x]]) for x in range(len(starts))]
```
Putting it all together:
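A sketch assembling the steps (the function name is mine; the pool is sized to the exact total length needed rather than a fixed 800,000, and the rolled start indices get their first entry reset to 0 so every slice lines up):

```python
import string

import numpy as np

def produce_keys(amount=40000, extra=20000):
    # 1. all the string lengths in one go, deliberately overestimating
    string_lengths = np.random.randint(12, 20, amount + extra)
    # 2. one flat pool of random characters, sized to the actual total need
    pool = list(string.ascii_letters + string.digits)
    random_letters = np.random.choice(pool, size=string_lengths.sum())
    # 3. end indices via cumsum; start indices by rolling them right by one
    ends = string_lengths.cumsum()
    starts = np.roll(ends, 1)
    starts[0] = 0
    # 4. chop the pool into the individual strings, deduplicating with a set
    final = [''.join(random_letters[starts[x]:ends[x]]) for x in range(len(starts))]
    return set(final)
```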
And `timeit` results: