I would like to generate a random number n such that n is in the range (a, b) or (a, b], where a < b. Is this possible in Python? It seems the only choices are a + random.random()*(b-a), which covers [a, b), or random.uniform(a, b), which covers [a, b], so neither meets my needs.
Answer 1:
Computer generation of "random" numbers is tricky, and especially of "random" floats. You need to think long & hard about what you really want. In the end, you'll need to build something on top of integers, not directly out of floats.
Under the covers, in Python (and every other language using the Mersenne Twister's source code), generating a "random" IEEE-754 double (Python's basic random.random()) really works by generating a random 53-bit integer, then dividing by 2**53:
randrange(2**53) / 9007199254740992.0
That's why the output range is [0.0, 1.0), but not all representable floats in that range are equally likely: only those that can be expressed in the form I / 2**53 for an integer 0 <= I < 2**53. For example, the float 1.0 / 2**60 can never be returned.
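A quick way to see this in action (my sketch, not part of the original answer): scale any value random.random() returns by 2**53 and you recover an exact integer, since multiplying by a power of two is exact in binary floating point.
from random import random

for _ in range(5):
    x = random()
    scaled = x * 2**53           # exact: x is I / 2**53 for some integer I
    assert scaled == int(scaled)
    print(x, int(scaled))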
There are no "real numbers" here, just representable binary-floating-point numbers, so to answer your question first requires that you specify the exact set of those from which you're trying to pick.
If the answer is that you don't want to get that picky, then the distinction between open and closed is also too picky to bother with. If you can specify the precise set, then the solution is to generate more-or-less obvious random integers that map to your output set.
For example, if you want to pick "random" floats from [3.0, 6.0] with just 2 bits after the radix point, there are 13 possible outputs. So the first step is
i = random.randrange(13)
Then map to the range of interest:
return 3.0 + i / 4.0
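Putting the two steps together as a function (a minimal sketch; the name pick_quarter is mine):
from random import randrange

def pick_quarter():
    # 13 evenly spaced outputs: 3.00, 3.25, ..., 6.00
    i = randrange(13)         # integer in [0, 12]
    return 3.0 + i / 4.0      # map to [3.0, 6.0] in steps of 0.25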
EDIT: USELESS BUT EDUCATIONAL ;-)
As noted in the comments, picking uniformly from all representable floats x with 0.0 < x < 1.0 can be done, but the result is very far from uniformly distributed across that range. There are, for example, 2**52 representable floats in [0.5, 1.0), but also 2**52 representable floats in [0.25, 0.5), and likewise 2**52 in [2.0**-i, 2.0**(1-i)) for each increasing i, until the number of representable floats starts shrinking when we hit the subnormal range, eventually falling to none when we underflow to 0 completely.
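One way to verify those counts (my sketch; adjacent representable doubles of the same sign have adjacent bit patterns, so the count in an interval is a difference of bit patterns):
from struct import pack, unpack

def bits(x):
    # Reinterpret a double's 8 bytes as an unsigned 64-bit integer.
    return unpack(">Q", pack(">d", x))[0]

print(bits(1.0) - bits(0.5))    # 4503599627370496 == 2**52
print(bits(0.5) - bits(0.25))   # 4503599627370496 == 2**52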
As bit patterns they're very simple, though: the set of representable IEEE-754 doubles (Python floats on almost all platforms) in (0, 1) consists of, when viewing the bit patterns as integers, simply
range(1, 0x3ff0000000000000)
So a function to generate each of those with equal likelihood is straightforward to write using bit-fiddling tricks:
from struct import unpack
from random import randrange

def gen01():
    # Pick a bit pattern uniformly from those of the doubles in (0, 1)...
    i = randrange(1, 0x3ff0000000000000)
    # ...and reinterpret those 8 bytes as a big-endian IEEE-754 double.
    as_bytes = i.to_bytes(8, "big")
    return unpack(">d", as_bytes)[0]
Just run that a few times to see why it's useless - it's very heavily skewed toward the 0.0 end of the range:
>>> for i in range(10):
...     print(gen01())
...
9.796357610869274e-104
4.125848254595866e-197
1.8114434720880952e-253
1.4937625148849258e-285
1.0537573744489343e-304
2.79008159472542e-58
4.718459887295062e-217
2.7996009087703915e-295
3.4129442284798105e-170
2.299402306630583e-115
Answer 2:
For (a, b], do b - random.random()*(b-a).
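As a function (a minimal sketch; the name uniform_open_closed is mine):
import random

def uniform_open_closed(a, b):
    # random.random() lies in [0.0, 1.0), so the product lies in
    # [0.0, b - a), and subtracting it from b lands in (a, b].
    return b - random.random() * (b - a)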
Answer 3:
random.randint(a, b) seems to do that, but note it works on integers and is inclusive at both ends: it returns a random integer N with a <= N <= b. https://docs.python.org/2/library/random.html
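For example (my illustration):
import random

# Inclusive on both ends: yields one of 3, 4, 5, 6.
print(random.randint(3, 6))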
Answer 4:
Though a bit tricky, you may use np.random.rand to generate random numbers in (a, b]:
import numpy as np
size = 10 # No. of random numbers to be generated
a, b = 0, 10 # Can be any values
rand_num = np.random.rand(size) # [0, 1)
rand_num *= -1 # (-1, 0]
rand_num += 1 # (0, 1]
rand_num = a + rand_num * (b - a) # (a, b]
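A quick bounds check (my addition): since np.random.rand draws from [0, 1), the intermediate 1 - rand value lies in (0, 1], so every output should satisfy a < x <= b:
assert np.all((rand_num > a) & (rand_num <= b))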