I am trying to perform a portfolio optimization that returns the weights which maximize my utility function. I can do this part just fine, including the constraint that the weights sum to one and the constraint that the weights hit a target risk, and I have also included bounds of 0 <= weight <= 1. The code looks as follows:
import numpy as np
from numpy import sqrt
from pandas import Series

def rebalance(PortValue, port_rets, risk_tgt):
    # convert continuously compounded returns to simple returns
    Rt = np.exp(port_rets) - 1
    covar = Rt.cov()

    def fitness(W):
        port_Rt = np.dot(Rt, W)
        port_rt = np.log(1 + port_Rt)
        q95 = Series(port_rt).quantile(.05)   # 5th-percentile (left-tail) cutoff
        cVaR = (port_rt[port_rt < q95] * sqrt(20)).mean() * PortValue
        mean_cVaR = (PortValue * (port_rt.mean() * 20)) / cVaR
        return -1 * mean_cVaR                 # maximize by minimizing the negative

    def solve_weights(W):
        import scipy.optimize as opt
        b_ = [(0.0, 1.0) for i in Rt.columns]
        c_ = ({'type': 'eq', 'fun': lambda W: sum(W) - 1},   # weights sum to one
              {'type': 'eq', 'fun': lambda W: sqrt(np.dot(W, np.dot(covar, W)) * 252) - risk_tgt})  # annualized vol hits the target
        optimized = opt.minimize(fitness, W, method='SLSQP', constraints=c_, bounds=b_)
        if not optimized.success:
            raise BaseException(optimized.message)
        return optimized.x  # Return optimized weights

    init_weights = Rt.iloc[1].copy()
    init_weights.iloc[:] = np.ones(len(Rt.columns)) / len(Rt.columns)
    return solve_weights(init_weights)
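For context, a minimal usage sketch, assuming port_rets is the DataFrame of continuously compounded asset returns; the 10% annualized risk target is an illustrative value, not from the question:

# Illustrative call; the 0.10 risk target is an assumed example value.
weights = rebalance(PortValue=100000, port_rets=port_rets, risk_tgt=0.10)
print(weights.round(3))   # per-asset weights that sum to 1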
Now I can delve into the problem: I have my weights stored in a MultiIndex pandas Series such that each asset is an ETF corresponding to an asset class. When an equally weighted portfolio is printed out, it looks like this:
Out[263]:
equity       CZA     0.045455
             IWM     0.045455
             SPY     0.045455
intl_equity  EWA     0.045455
             EWO     0.045455
             IEV     0.045455
bond         IEF     0.045455
             SHY     0.045455
             TLT     0.045455
intl_bond    BWX     0.045455
             BWZ     0.045455
             IGOV    0.045455
commodity    DBA     0.045455
             DBB     0.045455
             DBE     0.045455
pe           ARCC    0.045455
             BX      0.045455
             PSP     0.045455
hf           DXJ     0.045455
             SRV     0.045455
cash         BIL     0.045455
             GSY     0.045455
Name: 2009-05-15 00:00:00, dtype: float64
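For reference, a minimal sketch of how such a (asset class, ticker) MultiIndex Series can be built, using the tickers and class labels shown above (the name asset_idx is only a placeholder to avoid clashing with anything else in the question):

import numpy as np
import pandas as pd

# Hypothetical reconstruction of the (asset_class, ticker) MultiIndex shown above
pairs = [
    ('equity', 'CZA'), ('equity', 'IWM'), ('equity', 'SPY'),
    ('intl_equity', 'EWA'), ('intl_equity', 'EWO'), ('intl_equity', 'IEV'),
    ('bond', 'IEF'), ('bond', 'SHY'), ('bond', 'TLT'),
    ('intl_bond', 'BWX'), ('intl_bond', 'BWZ'), ('intl_bond', 'IGOV'),
    ('commodity', 'DBA'), ('commodity', 'DBB'), ('commodity', 'DBE'),
    ('pe', 'ARCC'), ('pe', 'BX'), ('pe', 'PSP'),
    ('hf', 'DXJ'), ('hf', 'SRV'),
    ('cash', 'BIL'), ('cash', 'GSY'),
]
asset_idx = pd.MultiIndex.from_tuples(pairs, names=['asset_class', 'ticker'])
init_weights = pd.Series(np.ones(len(asset_idx)) / len(asset_idx), index=asset_idx)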
How can I include an additional bounds requirement such that, when I group this data together, the sum of the weights falls within the allocation range I have predetermined for each asset class?
So concretely, I want to include an additional boundary such that
init_weights.groupby(level=0, axis=0).sum()
Out[264]:
equity         0.136364
intl_equity    0.136364
bond           0.136364
intl_bond      0.136364
commodity      0.136364
pe             0.136364
hf             0.090909
cash           0.090909
dtype: float64
is within these bounds
[(.08,.51), (.05,.21), (.05,.41), (.05,.41), (.2,.66), (0,.16), (0,.76), (0,.11)]
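For reference, a quick check of the equal-weight class sums against these ranges; the pairing of classes to ranges below is an assumption based on the order in which both are listed above:

# Assumed pairing of asset classes to ranges, in the order listed above
class_ranges = {'equity': (.08, .51), 'intl_equity': (.05, .21),
                'bond': (.05, .41), 'intl_bond': (.05, .41),
                'commodity': (.2, .66), 'pe': (0, .16),
                'hf': (0, .76), 'cash': (0, .11)}
sums = init_weights.groupby(level=0).sum()
for cls, (low, high) in class_ranges.items():
    total = sums[cls]
    print(cls, round(total, 3), 'ok' if low <= total <= high else 'outside range')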
[UPDATE] I figured I would show my progress with a clumsy pseudo-solution that I am not too happy with, mainly because it doesn't solve the weights using the entire data set but rather asset class by asset class. The other issue is that it returns the return series rather than the weights. But I am sure someone more apt than myself could offer some insight regarding the groupby function.
So with a mild tweak to my initial code, I have:
from pandas import DataFrame

PortValue = 100000
# port_idx holds the eight asset-class labels
model = DataFrame(np.array([.08, .12, .05, .05, .65, 0, 0, .05]),
                  index=port_idx, columns=['strategic'])
model['tactical'] = [(.08, .51), (.05, .21), (.05, .41), (.05, .41),
                     (.2, .66), (0, .16), (0, .76), (0, .11)]

def fitness(W, Rt):
    port_Rt = np.dot(Rt, W)
    port_rt = np.log(1 + port_Rt)
    q95 = Series(port_rt).quantile(.05)
    cVaR = (port_rt[port_rt < q95] * sqrt(20)).mean() * PortValue
    mean_cVaR = (PortValue * (port_rt.mean() * 20)) / cVaR
    return -1 * mean_cVaR

def solve_weights(Rt, b_=None):
    import scipy.optimize as opt
    if b_ is None:
        b_ = [(0.0, 1.0) for i in Rt.columns]
    W = np.ones(len(Rt.columns)) / len(Rt.columns)
    c_ = ({'type': 'eq', 'fun': lambda W: sum(W) - 1},)
    optimized = opt.minimize(fitness, W, args=(Rt,), method='SLSQP',
                             constraints=c_, bounds=b_)
    if not optimized.success:
        raise ValueError(optimized.message)
    return optimized.x  # Return optimized weights
The following will then return the somewhat-optimized portfolio return series:
# per-class return series from intra-class weights, then weights across classes
class_rets = port_rets.groupby(level=0, axis=1).agg(lambda x: np.dot(x, solve_weights(x)))
port = np.dot(class_rets, solve_weights(class_rets, list(model['tactical'].values)))

Series(port, name='portfolio').cumsum().plot()
[Update 2]
The following changes will return the constrained weights, though the result is still not optimal: the problem is broken down and optimized asset class by asset class, so when the target-risk constraint is applied, only a collapsed version of the initial covariance matrix is available.
def solve_weights(Rt, b_=None):
    import scipy.optimize as opt
    W = np.ones(len(Rt.columns)) / len(Rt.columns)
    if b_ is None:
        # intra-class pass: weights for the assets inside one class
        b_ = [(0.01, 1.0) for i in Rt.columns]
        c_ = ({'type': 'eq', 'fun': lambda W: sum(W) - 1},)
    else:
        # class-level pass: tactical ranges as bounds, plus the risk target
        covar = Rt.cov()
        c_ = ({'type': 'eq', 'fun': lambda W: sum(W) - 1},
              {'type': 'eq', 'fun': lambda W: sqrt(np.dot(W, np.dot(covar, W)) * 252) - risk_tgt})
    optimized = opt.minimize(fitness, W, args=(Rt,), method='SLSQP',
                             constraints=c_, bounds=b_)
    if not optimized.success:
        raise ValueError(optimized.message)
    return optimized.x  # Return optimized weights

# inside rebalance(): solve weights within each asset class, then scale the
# classes themselves against the tactical ranges
class_cont = Rt.iloc[0].copy()
class_cont.iloc[:] = np.around(
    np.hstack(Rt.groupby(axis=1, level=0).apply(solve_weights).values), 3)
scalars = class_cont.groupby(level=0).sum()
scalars.iloc[:] = np.around(
    solve_weights((class_cont * port_rets).groupby(level=0, axis=1).sum(),
                  list(model['tactical'].values)), 3)

return class_cont.groupby(level=0).transform(lambda x: x * scalars[x.name])
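A usage sketch for the reworked function, checking the class sums against the tactical ranges; the 10% annualized risk target is an illustrative value, not from the question:

# Illustrative only: risk target assumed.  The returned Series keeps the
# (asset class, ticker) MultiIndex, so class sums are easy to inspect.
weights = rebalance(100000, port_rets, 0.10)
class_sums = weights.groupby(level=0).sum()
for cls, total in class_sums.items():
    low, high = model.loc[cls, 'tactical']
    print(cls, round(total, 3), low <= total <= high)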
After much time this seems to be the only solution that fits...
Not totally sure I understand, but I think you can add the following as another constraint:
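A rough sketch of what such a constraint could look like, where x is assumed to be a DataFrame with one row per asset, an asset_class column, and a range column holding that class's (low, high) interval, and c_ is the constraint tuple from solve_weights (these names are placeholders, not the original code):

# Sketch only: two 'ineq' constraints per asset class, each of which SLSQP
# requires to evaluate to >= 0.  'x' and its columns are assumed names.
group_c = []
for cls, grp in x.groupby('asset_class'):
    low, high = grp['range'].iloc[0]      # the class's (low, high) interval
    pos = grp.index.to_numpy()            # integer positions of this class in W
    group_c.append({'type': 'ineq', 'fun': lambda W, p=pos, lo=low: W[p].sum() - lo})
    group_c.append({'type': 'ineq', 'fun': lambda W, p=pos, hi=high: hi - W[p].sum()})

c_ = c_ + tuple(group_c)                  # appended to the existing constraints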
where x.range is the interval (tuple) repeated K[i] times, where K is the number of times a particular level occurs and i is the i-th level. column_name in your case happens to be a date. This says: constrain the weights such that the sum of the weights in the i-th group is between the associated tuple interval.

To map each of the level names to an interval, do this:
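A minimal sketch of one way to build that mapping, using the model DataFrame from the update above and the init_weights Series shown earlier (the column names below are assumptions, not the original code):

# Sketch: map each asset-class level to its tactical interval, then attach the
# interval to every asset in that class.  Column names here are assumptions.
ranges = model['tactical'].to_dict()   # maps each asset class to its (low, high) range
x = init_weights.reset_index()         # the weight column is named after the date
x.columns = ['asset_class', 'ticker', 'weight']
x['range'] = x['asset_class'].map(ranges)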