A basic simulation of geometric Brownian motion (GBM) doesn't seem to work. What am I doing wrong? The following code always outputs values less than 1e-20, instead of values distributed randomly around 1.0:
import math
import random

p = 1
dt = 1
mu = 0
sigma = 1
for k in range(100):
    p *= math.exp((mu - sigma * sigma / 2) * dt +
                  sigma * random.normalvariate(0, dt * dt))
print(p)
I'm running:
ActivePython 3.1.2.3 (ActiveState Software Inc.) based on
Python 3.1.2 (r312:79147, Mar 22 2010, 12:30:45) [MSC v.1500 64 bit (AMD64)] on
win32
My OS is Windows 7 Professional on an i7-930 CPU (64-bit).
I'd be happy to run any other tests on my machine to isolate the problem.
I found the answer. There is no problem with the code. It's just that the resulting lognormal distribution has an enormous scale parameter: sigma * sqrt(T) = 1 * sqrt(100) = 10. With a scale of 10, the skewness is extreme.
Thus, even though the mean of the distribution is 1.0, it would take millions of iterations on average before I see a single value greater than 1.0.
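A quick sanity check of just how skewed this is (a rough sketch, assuming the same parameters as above: mu = 0, sigma = 1, dt = 1, 100 steps, so log(p) is normal with mean -50 and standard deviation 10; note math.erfc needs Python 2.7/3.2 or later):
import math

# P(p > 1) = P(log(p) > 0) = P(Z > 5) for a standard normal Z
prob = 0.5 * math.erfc(5 / math.sqrt(2))
print(prob)  # about 2.9e-07, i.e. roughly one path in 3.5 million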
Seems fine:
import math
import random

def compute():
    p = 1
    dt = 1
    mu = 0
    sigma = 1
    for k in range(100):
        p *= math.exp((mu - sigma * sigma / 2) * dt +
                      sigma * random.normalvariate(0, dt * dt))
    return p

print [compute() for x in range(20)]
gives:
[118.85952235362008, 7.3312246638859059e-14, 29.509674994408684, 1.8720575806397408, 1.5882398997219834e-05, 2967.524471541024, 0.0031130343677571093, 19942.669293314699, 0.00011878818261757498, 5382.80288111769, 0.22867624175360118, 0.028535167622775418, 12.6324011631628, 20.604456159054738, 0.0034504567371058613, 6.5804828930878056e-06, 6398.0493448486704, 0.0014978345496292245, 721546.38343724841, 1285.546939393781]
This is using Python 2.6.1.
Running the code in Python 2.6.6 gives reasonable answers, while running it in Python 3.1.2 gives the small numbers you describe. I think this is because in Python 2.x the division operator gives an integer result when dividing two integers, while in Python 3.x it gives a floating-point result in the same situation. Hence the divide-by-two in the calculation gives different results depending on which version you run: with sigma = 1, sigma * sigma / 2 is 0 in Python 2 but 0.5 in Python 3.
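As a quick illustration (a sketch, using the same parameter values as the question; the print call works in both 2.x and 3.x):
sigma = 1
mu = 0
# Python 2.x: sigma * sigma / 2 == 0   (integer division)
# Python 3.x: sigma * sigma / 2 == 0.5 (true division)
drift = mu - sigma * sigma / 2   # per-step drift: 0 in 2.x, -0.5 in 3.x
print(drift * 100)               # total drift over 100 steps: 0 vs -50
A total drift of -50 puts the typical final value near exp(-50), about 2e-22, which is exactly the sub-1e-20 range reported in the question.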
To make it consistent, force the output of the division to an integer:
p *= math.exp((mu - int(sigma * sigma / 2)) * dt +
              sigma * random.normalvariate(0, dt * dt))
This makes the output the same in both 2.x and 3.x. A sample set of output on my machine is:
0.0243898032782, 6126.78299771, 0.00450839758621, 1.17316856812, 0.00479489258202, 4.88995369021e-06, 0.033957530608, 29.9492464423, 3.16953460691
This seems to be in the ballpark of what you are looking for.
Using floor (integer) division (//) rather than true (float) division (/) in Python 3.1 works:
import math
import random

p = 1
dt = 1
mu = 0
sigma = 1
for k in range(100):
    p *= math.exp((mu - sigma * sigma // 2) * dt +
                  sigma * random.normalvariate(0, dt * dt))
print(p)
Over a few example runs, I got the following numbers:
0.0989269233704
2536660.91466
2146.09989782
0.502233504924
0.43052439984
14.1156450335
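For what it's worth, the huge spread is expected here (a rough check, assuming the drift term is zeroed out as above): log(p) is then just the sum of 100 standard normal draws, i.e. normal with mean 0 and standard deviation 10, so the median of p is 1.0 but individual runs routinely land anywhere from roughly 1e-9 to 1e+9:
import random

# Empirical check: the log of the final value is roughly Normal(0, 10**2)
logs = [sum(random.normalvariate(0, 1) for _ in range(100)) for _ in range(10000)]
mean_log = sum(logs) / len(logs)
sd_log = (sum((x - mean_log) ** 2 for x in logs) / len(logs)) ** 0.5
print(mean_log, sd_log)  # roughly 0 and 10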