How to get dimensions right using fmin_cg in scipy

Posted 2019-06-04 08:40

Question:

I have been trying to use fmin_cg to minimize the cost function for logistic regression.

This is how I call fmin_cg:

xopt = fmin_cg(costFn, fprime=grad, x0=initial_theta,
               args=(X, y, m), maxiter=400, disp=True, full_output=True)

Here is my costFn:

def costFn(theta, X, y, m):
    h = sigmoid(X.dot(theta))
    J = 0
    J = 1 / m * np.sum((-(y * np.log(h))) - ((1-y) * np.log(1-h)))
    return J.flatten()

Here is my grad:

def grad(theta, X, y, m):
    h = sigmoid(X.dot(theta))
    J = 1 / m * np.sum((-(y * np.log(h))) - ((1-y) * np.log(1-h)))
    gg = 1 / m * (X.T.dot(h-y))
    return gg.flatten()

It seems to be throwing this error:

/Users/sugethakch/miniconda2/lib/python2.7/site-packages/scipy/optimize/linesearch.pyc in phi(s)
     85     def phi(s):
     86         fc[0] += 1
---> 87         return f(xk + s*pk, *args)
     88 
     89     def derphi(s):

ValueError: operands could not be broadcast together with shapes (3,) (300,) 

I know it's something to do with my dimensions, but I can't seem to figure it out. I'm a noob, so I might be making an obvious mistake.
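For reference, one quick way to see where the shapes diverge is a throwaway wrapper that prints what the optimizer actually passes in (debug_cost here is just a debugging helper, not part of the assignment):

def debug_cost(theta, X, y, m):
    # fmin_cg ravels x0, so theta arrives here as (3,)
    print('theta:', theta.shape, 'X:', X.shape, 'y:', y.shape)
    return costFn(theta, X, y, m)

xopt = fmin_cg(debug_cost, fprime=grad, x0=initial_theta, args=(X, y, m))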

I have read this link:

fmin_cg: Desired error not necessarily achieved due to precision loss

But it somehow doesn't seem to work for me.

Any help?


Update: sizes of X, y, m, and theta:

X     -> (100, 3)
y     -> (100, 1)
m     -> 100
theta -> (3, 1)


This is how I initialize X, y, and m:

import numpy as np
import pandas as pd

data = pd.read_csv('ex2data1.txt', sep=",", header=None)
data.columns = ['x1', 'x2', 'y']
x1 = data.iloc[:, 0].values[:, None]
x2 = data.iloc[:, 1].values[:, None]
y = data.iloc[:, 2].values[:, None]
# join x1 and x2 to make one array of X
X = np.concatenate((x1, x2), axis=1)
m, n = X.shape
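(x1 and x2 alone only give X a shape of (100, 2), so to match the (100, 3) listed above an intercept column of ones is presumably prepended, as in the original assignment; something like:)

X = np.hstack([np.ones((m, 1)), X])   # prepend intercept column -> (100, 3)
initial_theta = np.zeros((n + 1, 1))  # (3, 1), matching the sizes listed above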

ex2data1.txt:

34.62365962451697,78.0246928153624,0
30.28671076822607,43.89499752400101,0
35.84740876993872,72.90219802708364,0
.....

If it helps, I am trying to re-code one of the homework assignments from Andrew Ng's ML course on Coursera in Python.

Answer 1:

Finally, I figured out what the problem in my initial program was.

My y was (100, 1), but fmin_cg expects a 1-d array of shape (100,). Because of the extra dimension, h - y broadcast (100,) against (100, 1) into a (100, 100) array, so grad returned a flattened (300,) vector instead of (3,), and the line search then failed trying to add it to the (3,) theta. That is exactly the shape pair in the ValueError.
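The fix is a one-liner before calling fmin_cg:

y = y.flatten()  # (100, 1) -> (100,)

That cleared the initial error, but the optimization still wasn't working: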

 Warning: Desired error not necessarily achieved due to precision loss.
     Current function value: 0.693147
     Iterations: 0
     Function evaluations: 43
     Gradient evaluations: 41

This was the same function value as before optimization (ln 2 ≈ 0.693147 is the cost at theta = 0), so effectively nothing had been optimized.

I figured out that the way to get this to optimize was to use the 'Nelder-Mead' method, following this answer: scipy is not optimizing and returns "Desired error not necessarily achieved due to precision loss"

import scipy.optimize as op

Result = op.minimize(fun=costFn,
                     x0=initial_theta,
                     args=(X, y, m),
                     method='Nelder-Mead',
                     options={'disp': True})
# Nelder-Mead is derivative-free, so jac=grad is not passed

This method doesn't need a Jacobian (the gradient). I got the results I was looking for:

Optimization terminated successfully.
     Current function value: 0.203498
     Iterations: 157
     Function evaluations: 287
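The fitted parameters then come back on the standard scipy OptimizeResult object:

theta_opt = Result.x      # optimized theta
final_cost = Result.fun   # 0.203498 here
print(theta_opt, final_cost)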


Answer 2:

Well, since I don't know exactly how you're initializing m, X, y, and theta, I had to make some assumptions. Hopefully my answer is relevant:

import numpy as np
from scipy.optimize import fmin_cg
from scipy.special import expit

def costFn(theta, X, y, m):
    # expit is the same as sigmoid, but faster
    h = expit(X.dot(theta))

    # instead of 1/m, take the mean (this also sidesteps Python 2's
    # integer division, where 1 / m silently evaluates to 0)
    J = np.mean((-(y * np.log(h))) - ((1 - y) * np.log(1 - h)))
    return J  # should be a scalar


def grad(theta, X, y, m):
    h = expit(X.dot(theta))
    # divide by the sample count so the gradient stays consistent with
    # the mean-based cost above
    gg = X.T.dot(h - y) / len(y)
    return gg.flatten()

# initialize matrices
X = np.random.randn(100, 3)
y = np.random.randint(0, 2, 100).astype(float)  # needs to be a 1-d vector; 0/1 labels keep the log-loss bounded
m = np.ones((3,))  # m goes unused; np.mean supplies the 1/m factor (see ali_m's comment)
theta = np.ones((3, 1))  # fmin_cg ravels x0 to (3,) internally

xopt = fmin_cg(costFn, fprime=grad, x0=theta, args=(X, y, m), maxiter=400, disp=True, full_output=True)

While the code runs, I don't know enough about your problem to know if this is what you're looking for. But hopefully this can help you understand the problem better. One way to check your answer is to call fmin_cg with fprime=None and see how the answers compare.
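Another quick sanity check is scipy.optimize.check_grad, which compares the analytic gradient against a finite-difference estimate of the cost (using the costFn/grad pair above):

from scipy.optimize import check_grad

# should print a tiny number (~1e-6 or smaller) if grad matches costFn
err = check_grad(costFn, grad, np.ones(3), X, y, m)
print(err)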