Given several vectors:
x1 = [3 4 6]
x2 = [2 8 1]
x3 = [5 5 4]
x4 = [6 2 1]
I want to find weights w1, w2, w3 for each component and take the weighted sum of each vector: yi = w1*xi1 + w2*xi2 + w3*xi3. For example, y1 = 3*w1 + 4*w2 + 6*w3.
The goal is to make the variance of these values (y1, y2, y3, y4) as small as possible.
Note: w1, w2, w3 must be > 0, and w1 + w2 + w3 = 1.
I don't know what kind of problem this is, or how to solve it in Python or MATLAB.
You can start by building a loss function expressing the variance plus the constraints on the w's. The mean is m = (1/4)*(y1 + y2 + y3 + y4), and the variance is then (1/4)*((y1-m)^2 + (y2-m)^2 + (y3-m)^2 + (y4-m)^2). The constraint term is a*(w1 + w2 + w3 - 1), where a is the Lagrange multiplier. This looks to me like a convex optimisation problem with convex constraints, since the loss function is quadratic in the target variables (w1, w2, w3) and the constraints are linear. You can look for projected gradient descent algorithms that respect the given constraints. Take a look at http://www.ifp.illinois.edu/~angelia/L5_exist_optimality.pdf. There is no straightforward analytic solution to this kind of problem in general.
w = [5, 6, 7]
x1 = [3, 4, 6]
x2 = [2, 8, 1]
x3 = [5, 5, 4]
y1, y2, y3 = 0, 0, 0
for index, i in enumerate(w):
    y1 = y1 + i * x1[index]
    y2 = y2 + i * x2[index]
    y3 = y3 + i * x3[index]
print(min(y1, y2, y3))
I think I may have understood the purpose of your problem. If what you want is the smallest weighted sum, I hope this can help you.
I hard-coded the values; you can turn this into a def if this turns out to be one way to solve your question.
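For example, the same computation wrapped in a function (a sketch; the name min_weighted_sum is mine, not from the question):

def min_weighted_sum(w, vectors):
    # Weighted sum of each vector, then the smallest of them.
    return min(sum(wi * xi for wi, xi in zip(w, vec)) for vec in vectors)

print(min_weighted_sum([5, 6, 7], [[3, 4, 6], [2, 8, 1], [5, 5, 4]]))  # 65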
I don't know much about optimization problems, but I get the idea of gradient descent, so I tried to iteratively shift weight so as to shrink the gap between the max score and the min score. My script is below:
# coding: utf-8
import numpy as np

def get_max(alist):
    max_score = max(alist)
    idx = alist.index(max_score)
    return max_score, idx

def get_min(alist):
    min_score = min(alist)
    idx = alist.index(min_score)
    return min_score, idx

def get_weighted(alist, aweight):
    # Element-wise product of scores and weights.
    res = []
    for i in range(0, len(alist)):
        res.append(alist[i] * aweight[i])
    return res

def get_sub(list1, list2):
    # Element-wise difference of two lists.
    res = []
    for i in range(0, len(list1)):
        res.append(list1[i] - list2[i])
    return res

def grad_dec(w, dist, st=0.001):
    # Move a little weight from the component with the largest gap
    # to the component with the smallest gap.
    max_item, max_item_idx = get_max(dist)
    min_item, min_item_idx = get_min(dist)
    w[max_item_idx] = w[max_item_idx] - st
    w[min_item_idx] = w[min_item_idx] + st

def cal_score(w, x):
    score = []
    print('weight', w, x)
    for i in range(0, len(x)):
        score_i = 0
        for j in range(0, len(w)):
            score_i = w[j] * x[i][j] + score_i
        score.append(score_i)
    # check whether the variance is small enough
    print('score', score)
    return score

if __name__ == "__main__":
    # Five weights, one per component of each sample.
    init_w = [0.2, 0.2, 0.2, 0.2, 0.2]
    # Initial scores with equal weights: 7.72, 7.6, 8.26
    x = [[7.3, 10, 8.3, 8.8, 4.2], [6.8, 8.9, 8.4, 9.7, 4.2], [6.9, 9.9, 9.7, 8.1, 6.7]]
    score = cal_score(init_w, x)
    variance = np.var(score)
    for round in range(0, 100):
        if variance < 0.012:
            print('ok')
            break
        max_score, idx = get_max(score)
        min_score, idx2 = get_min(score)
        # Component-wise weighted scores of the best and worst samples.
        weighted_1 = get_weighted(x[idx], init_w)
        weighted_2 = get_weighted(x[idx2], init_w)
        dist = get_sub(weighted_1, weighted_2)
        grad_dec(init_w, dist)
        score = cal_score(init_w, x)
        variance = np.var(score)
        print('variance', variance)
    print(score)
In my tests it really does reduce the variance. I'm glad it works, but I don't know whether my solution is mathematically solid.
My full solution can be viewed in the PDF.
The trick is to put the vectors x_i as columns of a matrix X. Then the problem becomes a convex problem with the constraint that the solution lies on the unit simplex: minimize (1/2)*||X*w - (1/n)*(1'*X*w)*1||^2 subject to sum(w) = 1, w >= 0.
I solved it using the Projected Sub Gradient method: I calculated the gradient of the objective function and created a projection onto the unit simplex; then all that is needed is to iterate the two.
I validated my solution against CVX.
% StackOverflow 44984132
% How to calculate weight to minimize variance?
% Release Notes
% - 1.0.000 08/07/2017
% * First release.
%% General Parameters
run('InitScript.m');
figureIdx = 0; %<! Continue from Question 1
figureCounterSpec = '%04d';
generateFigures = OFF;
%% Simulation Parameters
dimOrder = 3;
numSamples = 4;
mX = randi([1, 10], [dimOrder, numSamples]);
vE = ones([dimOrder, 1]);
%% Solve Using CVX
cvx_begin('quiet')
    cvx_precision('best');
    variable vW(numSamples)
    minimize( (0.5 * sum_square_abs( mX * vW - (1 / numSamples) * (vE.' * mX * vW) * vE )) )
    subject to
        sum(vW) == 1;
        vW >= 0;
cvx_end
disp([' ']);
disp(['CVX Solution - [ ', num2str(vW.'), ' ]']);
%% Solve Using Projected Sub Gradient
numIterations = 20000;
stepSize = 0.001;
simplexRadius = 1; %<! Unit Simplex Radius
stopThr = 1e-6;
hKernelFun = @(vW) ((mX * vW) - ((1 / numSamples) * ((vE.' * mX * vW) * vE)));
hObjFun = @(vW) 0.5 * sum(hKernelFun(vW) .^ 2);
hGradFun = @(vW) (mX.' * hKernelFun(vW)) - ((1 / numSamples) * vE.' * (hKernelFun(vW)) * mX.' * vE);
vW = rand([numSamples, 1]);
vW = vW(:) / sum(vW);
for ii = 1:numIterations
    vGradW = hGradFun(vW);
    vW = vW - (stepSize * vGradW);

    % Project onto the unit simplex: sum(vW) == 1, vW >= 0.
    vW = ProjectSimplex(vW, simplexRadius, stopThr);
end
disp([' ']);
disp(['Projected Sub Gradient Solution - [ ', num2str(vW.'), ' ]']);
%% Restore Defaults
% set(0, 'DefaultFigureWindowStyle', 'normal');
% set(0, 'DefaultAxesLooseInset', defaultLoosInset);
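The ProjectSimplex routine is not listed above. For completeness, here is a small Python sketch of the same idea, under two assumptions I'm adding myself: the projection is the standard sort-based Euclidean projection onto the unit simplex (Duchi et al. 2008), and the data is laid out with the question's x_i as rows, i.e. transposed relative to mX above:

import numpy as np

def project_simplex(v, z=1.0):
    # Euclidean projection onto {w : w >= 0, sum(w) = z},
    # via the standard sort-and-threshold algorithm.
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - z
    ind = np.arange(1, len(v) + 1)
    cond = u - css / ind > 0
    theta = css[cond][-1] / ind[cond][-1]
    return np.maximum(v - theta, 0.0)

# Rows are the question's vectors x1..x4.
X = np.array([[3, 4, 6],
              [2, 8, 1],
              [5, 5, 4],
              [6, 2, 1]], dtype=float)
n = X.shape[0]

w = np.full(X.shape[1], 1.0 / X.shape[1])  # start at the simplex center
step = 0.01
for _ in range(5000):
    y = X @ w
    r = y - y.mean()               # centered scores
    grad = (2.0 / n) * (X.T @ r)   # gradient of var(X @ w)
    w = project_simplex(w - step * grad)

print(w, np.var(X @ w))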
You can see the full code in StackOverflow Q44984132 (a PDF is available as well).