I ran across this situation this afternoon, so I thought I'd ask what you guys do.
We have a randomized password generator for user password resets and while fixing a problem with it, I decided to move the routine into my (slowly growing) test harness.
I want to test that passwords generated conform to the rules we've set out, but of course the results of the function will be randomized (or, well, pseudo-randomized).
What would you guys do in the unit test? Generate a bunch of passwords, check they all pass and consider that good enough?
You can also look into mutation testing (Jester for Java, Heckle for Ruby), which deliberately introduces small bugs into your code and checks that your tests catch them.
Without knowing what your rules are it's hard to say for sure, but assuming they are something like "the password must be at least 8 characters with at least one upper case letter, one lower case letter, one number and one special character", then it's infeasible even with brute force to check enough generated passwords to prove the algorithm is correct. With an alphabet of roughly 70 characters there are already around 70^8 = 5.76x10^14 possible 8-character passwords (the exact figure depends on how many special characters you designate for use), and the space grows exponentially with every extra character of length.
Ultimately all you can do is test as many passwords as is feasible, and if any break the rules then you know the algorithm is incorrect. Probably the best thing to do is leave it running overnight, and if all is well in the morning you're likely to be OK.
If you want to be doubly sure in production, then implement an outer function that calls the password generation function in a loop and checks it against the rules. If it fails then log an error indicating this (so you know you need to fix it) and generate another password. Continue until you get one that meets the rules.
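A minimal sketch of that wrapper in Python, where meets_rules, generate_password and the !@#$%^&* special-character set are all hypothetical stand-ins for your real code:

```python
import logging
import random
import string

SPECIALS = "!@#$%^&*"  # assumed set of special characters

def meets_rules(password):
    """Hypothetical rule check: at least 8 chars, one upper case letter,
    one lower case letter, one digit and one special character."""
    return (len(password) >= 8
            and any(c.isupper() for c in password)
            and any(c.islower() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in SPECIALS for c in password))

def generate_password(length=10):
    """Stand-in for the real generator (production code should draw from a
    cryptographically secure source such as the secrets module)."""
    alphabet = string.ascii_letters + string.digits + SPECIALS
    return "".join(random.choice(alphabet) for _ in range(length))

def generate_valid_password(max_attempts=100):
    """Outer wrapper: retry until the generator produces a conforming
    password, logging every failure so you know the generator needs fixing."""
    for _ in range(max_attempts):
        candidate = generate_password()
        if meets_rules(candidate):
            return candidate
        logging.error("Generated password broke the rules: %r", candidate)
    raise RuntimeError("Password generator failed %d times in a row" % max_attempts)
```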
I'm assuming that the user-entered passwords conform to the same restrictions as the random generated ones. So you probably want to have a set of static passwords for checking known conditions, and then you'll have a loop that does the dynamic password checks. The size of the loop isn't too important, but it should be large enough that you get that warm fuzzy feeling from your generator, but not so large that your tests take forever to run. If anything crops up over time, you can add those cases to your static list.
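Something like this, say, assuming the generator and rule check live in a hypothetical password_gen module:

```python
import unittest

from password_gen import generate_password, meets_rules  # hypothetical module

class PasswordRuleTest(unittest.TestCase):
    # Static passwords covering known conditions; extend this list as
    # new problem cases crop up over time.
    STATIC_CASES = [
        ("Abcdef1!", True),    # minimal conforming password
        ("abcdefg1!", False),  # no upper case letter
        ("ABCDEFG1!", False),  # no lower case letter
        ("Abcdefgh!", False),  # no digit
        ("Abcdefg12", False),  # no special character
        ("Ab1!", False),       # too short
    ]

    def test_static_cases(self):
        for password, expected in self.STATIC_CASES:
            self.assertEqual(meets_rules(password), expected, password)

    def test_generated_passwords(self):
        # 1,000 iterations: enough for a warm fuzzy feeling without
        # making the suite take forever to run.
        for _ in range(1000):
            self.assertTrue(meets_rules(generate_password()))
```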
In the long run though, a weak password isn't going to break your program, and password security is ultimately in the hands of the user. So your priority should be to make sure that the dynamic generation and the strength check don't break the system.
A unit test should do the same thing every time that it runs, otherwise you may run into a situation where the unit test only fails occasionally, and that could be a real pain to debug.
Try seeding your pseudo-randomizer with the same seed every time (in the test, that is--not in production code). That way your test will generate the same set of inputs every time.
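For example (again assuming a hypothetical password_gen module, and that its generator draws from Python's global random module):

```python
import random
import unittest

from password_gen import generate_password, meets_rules  # hypothetical module

class DeterministicGeneratorTest(unittest.TestCase):
    def test_fixed_seed_gives_repeatable_results(self):
        # Seed in the test only, never in production code.
        random.seed(42)
        first_run = [generate_password() for _ in range(100)]

        random.seed(42)
        second_run = [generate_password() for _ in range(100)]

        # Same seed, same sequence: the test sees the same inputs every run.
        self.assertEqual(first_run, second_run)
        for password in first_run:
            self.assertTrue(meets_rules(password))
```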
If you can't control the seed and there is no way to prevent the function you are testing from being randomized, then I guess you are stuck with an unpredictable unit test. :(
The function is a hypothesis that for all inputs, the output conforms to the specifications. The unit test is an attempt to falsify that hypothesis. So yes, the best you can do in this case is to generate a large number of outputs. If they all pass your specification, then you can be reasonably sure that your function works as specified.
Consider putting the random number generator outside this function and passing a random number to it, making the function deterministic, instead of having it access the random number generator directly. This way, you can generate a large number of random inputs in your test harness, pass them all to your function, and test the outputs. If one fails, record what that value is so that you have a documented test case.
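A sketch of that restructuring, with the rule check again imported from a hypothetical password_gen module:

```python
import random
import string

from password_gen import meets_rules  # hypothetical module

SPECIALS = "!@#$%^&*"  # assumed set of special characters

def generate_password(rng, length=10):
    """Deterministic given its rng argument: all randomness is passed in
    from outside instead of being fetched inside the function."""
    alphabet = string.ascii_letters + string.digits + SPECIALS
    return "".join(rng.choice(alphabet) for _ in range(length))

def test_many_seeds():
    failures = []
    for seed in range(10_000):
        password = generate_password(random.Random(seed))
        if not meets_rules(password):
            # Record the seed: a failure becomes a reproducible,
            # documented test case rather than a one-off fluke.
            failures.append((seed, password))
    assert not failures, failures
```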
Firstly, use a fixed seed for your PRNG. The input is then no longer random, which gets rid of the problem of unpredictable output: your unit test is now deterministic.
This doesn't, however, solve the problem of testing the implementation itself, so here is an example of how a typical method that relies upon randomness can be tested.
Imagine we've implemented a function that takes a collection of red and blue marbles and picks one at random, but a weighting can be assigned to the probability, i.e. weights of 2 and 1 would mean red marbles are twice as likely to be picked as blue marbles.
We can test this by setting the weight of one choice to zero and verifying that in all cases (in practice, for a large number of test inputs) we always get e.g. blue marbles. Reversing the weights should then give the opposite result (all red marbles).
This doesn't guarantee our function is behaving as intended (if we pass in an equal number of red and blue marbles and have equal weights do we always get a 50/50 distribution over a large number of trials?) but in practice it is often sufficient.
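Here is one way those tests might look, with pick_marble as a hypothetical implementation of the weighted picker:

```python
import random
import unittest

def pick_marble(marbles, weights, rng):
    """Hypothetical implementation: pick one marble at random, with each
    colour's probability scaled by its (integer) weight."""
    weighted = [m for m in marbles for _ in range(weights[m])]
    return rng.choice(weighted)

class WeightedPickTest(unittest.TestCase):
    MARBLES = ["red"] * 50 + ["blue"] * 50

    def test_zero_weight_red_never_picked(self):
        rng = random.Random(0)  # fixed seed keeps the test deterministic
        for _ in range(10_000):
            # With red weighted to zero, every pick must be blue.
            self.assertEqual(
                pick_marble(self.MARBLES, {"red": 0, "blue": 1}, rng), "blue")

    def test_zero_weight_blue_never_picked(self):
        rng = random.Random(0)
        for _ in range(10_000):
            # Reversing the weights should give the opposite result.
            self.assertEqual(
                pick_marble(self.MARBLES, {"red": 1, "blue": 0}, rng), "red")
```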