I know about the basic data types and that floating-point types (float, double) cannot hold some numbers exactly.
While porting some code from MATLAB to Python (NumPy), however, I found some significant differences in the calculations, and I think it goes back to precision.
Take the following code, z-normalizing a 500-dimensional vector in which only the first two elements have a non-zero value.
Matlab:
Z = repmat(0,500,1); Z(1)=3;Z(2)=1;
Za = (Z-repmat(mean(Z),500,1)) ./ repmat(std(Z),500,1);
Za(1)
>>> 21.1694
Python:
from numpy import zeros,mean,std
Z = zeros((500,))
Z[0] = 3
Z[1] = 1
Za = (Z - mean(Z)) / std(Z)
print(Za[0])
>>> 21.1905669677
Apart from the formatting showing a few more digits in Python, there is a huge difference (IMHO) of more than 0.02.
Both Python and MATLAB use a 64-bit data type (AFAIK): Python uses numpy.float64 and MATLAB uses double.
Why is the difference so huge? Which one is more correct?
According to the documentation of std at SciPy, it has a parameter called ddof (delta degrees of freedom): in NumPy, ddof is zero by default, while in MATLAB it is one. So I think this may solve the problem.
To answer your question: no, this is not a problem of precision. As @rocksportrocker points out, there are two popular estimators for the standard deviation. MATLAB's std has both available, but by default it uses a different one from the one you used in Python. Try std(Z,1) instead of std(Z): this leads to 21.1906 in MATLAB (the same value NumPy prints, shown with MATLAB's default display format). Read rocksportrocker's answer about which of the two results is more appropriate for what you want to do ;-).
Maybe the difference comes from the mean and std calls. Compare those first.
There are several definitions for std. Some use the square root of
1/N * sum((x_i - mean(x))**2)
while others use
1/(N-1) * sum((x_i - mean(x))**2)
instead.
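As a quick sketch (the variable names here are mine, not from the answer), writing both definitions out in NumPy shows they differ only by the factor sqrt(N/(N-1)), which for N = 500 is about 1.001 and accounts for the ~0.02 gap seen above:
import numpy as np
Z = np.zeros(500)
Z[0], Z[1] = 3, 1
N = len(Z)
d = Z - Z.mean()
std_1_over_N   = np.sqrt(np.sum(d**2) / N)        # NumPy's default (ddof=0)
std_1_over_Nm1 = np.sqrt(np.sum(d**2) / (N - 1))  # MATLAB's default (ddof=1)
print(std_1_over_N, std_1_over_Nm1)               # their ratio is sqrt(N/(N-1)) ~ 1.001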
From a mathematical point of view, these formulas are estimators of the variance of a normally distributed random variable. The distribution has two parameters, sigma and mu. If you know mu exactly, the optimal estimator for sigma**2 is
1/N * sum((x_i - mu)**2)
If you have to estimate mu from the data using mu = mean(x_i), the optimal estimator for sigma**2 is
1/(N-1) * sum((x_i - mean(x))**2)
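To illustrate that last point, here is a simulation sketch (not from the original answer; the setup and names are mine): averaging the two estimators over many samples from a normal distribution shows that 1/(N-1) is unbiased when mu is estimated by the sample mean, while 1/N then underestimates sigma**2; with mu known, 1/N is unbiased as well.
import numpy as np

rng = np.random.default_rng(0)
sigma2, N, trials = 4.0, 10, 200000   # small N makes the bias easy to see

x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
d = x - x.mean(axis=1, keepdims=True)

print(((x**2).sum(axis=1) / N).mean())        # 1/N with the true mu (= 0): ~4.0, unbiased
print(((d**2).sum(axis=1) / N).mean())        # 1/N with the estimated mean: ~3.6, biased low
print(((d**2).sum(axis=1) / (N - 1)).mean())  # 1/(N-1) with the estimated mean: ~4.0, unbiased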