I wrote a sigmoid function:
def sigmoid(inX):
    return 1.0 / (1 + exp(-inX))
When inX is a 100x1 matrix, this error arises:
/home/abyss/python/machine/logistic/logReg.py in sigmoid(inX)
12
13 def sigmoid(inX):
---> 14 r = 1.0 / (1 + exp(-inX))
15 return r
16
TypeError: only length-1 arrays can be converted to Python scalars
but I can use this same expression directly on the command line:
In [59]: r = 1.0 / (1 + exp(-h))
In [60]: shape(r)
Out[60]: (100, 1)
I am totally confused. How did this happen?
When taking the exponential of an array, or of anything else that is not a single scalar, don't use Python's math library; use the NumPy library instead:
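Here is a minimal sketch of the fixed function (h is just a placeholder 100x1 array standing in for your data). math.exp tries to convert its argument to a single float, which fails for any array with more than one element; that is exactly the "only length-1 arrays can be converted to Python scalars" error. In your interactive session, exp most likely already resolved to numpy.exp (e.g. via from numpy import *), which is why the same expression worked there:

import numpy as np

def sigmoid(inX):
    # np.exp is applied elementwise to arrays and matrices,
    # whereas math.exp only accepts a single Python scalar
    return 1.0 / (1 + np.exp(-inX))

h = np.zeros((100, 1))  # placeholder 100x1 array
r = sigmoid(h)
print(r.shape)          # (100, 1)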