In Python only, and using data from a pandas dataframe, how can I use PuLP to solve linear programming problems the same way I can in Excel? How much budget should be allocated to each Channel under the New Budget column so that we maximize the total number of estimated successes? I'm looking for a concrete example that uses data from a dataframe, not high-level advice.
Problem Data Setup
|   | Channel  | 30-day Cost | Trials | Success | Cost Min | Cost Max | New Budget |
|---|----------|-------------|--------|---------|----------|----------|------------|
| 0 | Channel1 | 1765.21     | 9865   | 812     | 882.61   | 2647.82  | 0          |
| 1 | Channel2 | 2700.00     | 15000  | 900     | 1350.00  | 4050.00  | 0          |
| 2 | Channel3 | 2160.00     | 12000  | 333     | 1080.00  | 3240.00  | 0          |
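For a reproducible setup, the dataframe above can be built with something like this (column names copied exactly from the printout):

```python
import pandas as pd

# Same data as in the table above
df = pd.DataFrame({
    'Channel':     ['Channel1', 'Channel2', 'Channel3'],
    '30-day Cost': [1765.21, 2700.00, 2160.00],
    'Trials':      [9865, 15000, 12000],
    'Success':     [812, 900, 333],
    'Cost Min':    [882.61, 1350.00, 1080.00],
    'Cost Max':    [2647.82, 4050.00, 3240.00],
    'New Budget':  [0, 0, 0],
})
```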
This is a Maximization problem.
The objective function is:
objective_function = sum((df['New Budget']/(df['30-day Cost']/df['Trials']))*(df['Success']/df['Trials']))
The constraints are:

- The sum of `df['New Budget']` must equal 5000
- The `New Budget` for a given channel can go no lower than the `Cost Min`
- The `New Budget` for a given channel can go no higher than the `Cost Max`
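Written out as a linear program, with $b_i$, $c_i$, $t_i$, and $s_i$ standing for the New Budget, 30-day Cost, Trials, and Success of channel $i$ (symbols introduced here only for illustration), the problem is:

$$
\begin{aligned}
\text{maximize} \quad & \sum_i \frac{b_i}{c_i / t_i} \cdot \frac{s_i}{t_i} \;=\; \sum_i \frac{s_i}{c_i}\, b_i \\
\text{subject to} \quad & \sum_i b_i = 5000 \\
& \text{Cost Min}_i \le b_i \le \text{Cost Max}_i \quad \text{for every channel } i
\end{aligned}
$$

Each coefficient $s_i / c_i$ (estimated successes per dollar) is a constant taken from the dataframe, so the objective is linear in the decision variables $b_i$, which is what makes this a standard LP.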
Any ideas how to translate this pandas dataframe linear programming problem into PuLP (or any other solver)? The end result would be the dataframe above with the optimal allocation filled in under the New Budget column.
In general you create a dictionary of variables (`x` in this case) and a model variable (`mod` in this case). To create the objective you use `sum` over the variables times some scalars, adding that result to `mod`. You construct constraints by again computing linear combinations of variables, using `>=`, `<=`, or `==`, and adding that constraint to `mod`. Finally you use `mod.solve()` to get the solutions.

Data:
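A minimal sketch of that recipe applied to the dataframe from the question (the model name `BudgetAllocation` and the `budget_` variable prefix are just illustrative; the dataframe is rebuilt inline so the snippet runs on its own):

```python
import pandas as pd
import pulp

# Dataframe from the question (same data as above)
df = pd.DataFrame({
    'Channel':     ['Channel1', 'Channel2', 'Channel3'],
    '30-day Cost': [1765.21, 2700.00, 2160.00],
    'Trials':      [9865, 15000, 12000],
    'Success':     [812, 900, 333],
    'Cost Min':    [882.61, 1350.00, 1080.00],
    'Cost Max':    [2647.82, 4050.00, 3240.00],
})

# One decision variable per channel, bounded below/above by Cost Min / Cost Max
x = {
    row['Channel']: pulp.LpVariable(
        'budget_' + row['Channel'],
        lowBound=float(row['Cost Min']),
        upBound=float(row['Cost Max']),
    )
    for _, row in df.iterrows()
}

mod = pulp.LpProblem('BudgetAllocation', pulp.LpMaximize)

# Objective: (New Budget / (30-day Cost / Trials)) * (Success / Trials),
# which simplifies to New Budget * Success / 30-day Cost for each channel
coeffs = (df['Success'] / df['30-day Cost']).tolist()
mod += pulp.lpSum(c * x[ch] for ch, c in zip(df['Channel'], coeffs))

# Constraint: the new budgets must add up to exactly 5000
mod += pulp.lpSum(x.values()) == 5000

mod.solve()

# Write the optimal allocation back into the New Budget column
df['New Budget'] = [x[ch].value() for ch in df['Channel']]
print(df)
print('Estimated successes:', pulp.value(mod.objective))
```

With these numbers the solver should push as much budget as possible toward the channels with the highest Success / 30-day Cost ratio, subject to each channel's Cost Min / Cost Max bounds and the 5000 total.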