Bin Packing - Brute force recursive solution - how can it be made faster?

Posted 2019-02-15 11:12

I have an array which contains a list of different sizes of materials: {4,3,4,1,7,8}. However, the bin can accommodate materials only up to size 10. I need to find the minimum number of bins needed to pack all the elements in the array.

For the above array, you can pack everything into 3 bins, divided as follows: {4,4,1}, {3,7}, {8}. There are other possible arrangements that also fit into three bins, but it cannot be done with fewer.

I am trying to solve this problem through recursion in order to understand it better.

I am using this DP formulation (page 20 of the pdf file)

Consider an input (n_1, ..., n_k) with n = n_1 + ... + n_k items, where n_j is the number of items of the j-th distinct size.
Determine the set of k-tuples (sub-multisets of the input) that can be packed into a single bin, i.e. all tuples (q_1, ..., q_k) for which OPT(q_1, ..., q_k) = 1. Denote this set by Q; for each k-tuple q in Q we have OPT(q) = 1.
Calculate the remaining values using the recurrence: OPT(i_1, ..., i_k) = 1 + min over q in Q of OPT(i_1 - q_1, ..., i_k - q_k).
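
To check my understanding of the recurrence on the example above (bin size 10): Q contains, among others, the tuples corresponding to {4,4,1}, {3,7} and {8}. Choosing {4,4,1} as the first bin leaves {3,7,8}, choosing {3,7} next leaves {8}, and {8} is itself in Q, so the recurrence gives OPT = 3, which matches the packing shown above.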

I have written the code, and it works fine for a small data set. But if I increase the size of my array to more than 6 elements, it becomes extremely slow -- it takes about 25 seconds to solve an array containing 8 elements. Can you tell me if there's anything wrong with the algorithm? I don't need an alternative solution -- I just need to know why this is so slow and how it can be improved.

Here is the code I have written in C++:

void recCutStock(Vector<int> & requests, int numStocks)
{
    if (requests.size() == 0)                  // nothing left to pack
    {
        if (numStocks <= minSize)
        {
            minSize = numStocks;
        }
        // cout << "GOT A RESULT : " << numStocks << endl;
        return;
    }
    else
    {
        if (numStocks + 1 < minSize)           // minSize is a global variable initialized with a big value
        {
            Vector<int> temp;
            Vector<Vector<int> > posBins;
            // posBins receives every possible single-bin formation from the remaining requests
            getBins(requests, temp, 0, posBins);

            for (int i = 0; i < posBins.size(); i++)
            {
                Vector<int> subResult;
                // subtract the chosen bin from the request array
                reqMinusPos(requests, subResult, posBins[i]);
                // displayArr(subResult);
                recCutStock(subResult, numStocks + 1);
            }
        }
        else return;
    }
}

The getBins function is as follows:

void getBins(Vector<int> & requests, Vector<int> current, int index, Vector<Vector<int> > & bins)
{
    if (index == requests.size())
    {
        // keep this subset if it is non-empty and fits into one bin
        // (stockLength is the global bin capacity)
        if (sum(current, requests) <= stockLength && sum(current, requests) > 0)
        {
            bins.add(current);
            // printBins(current, requests);
        }
        return;
    }
    else
    {
        // either skip the item at 'index' ...
        getBins(requests, current, index + 1, bins);
        // ... or include it (current stores indices into requests)
        current.add(index);
        getBins(requests, current, index + 1, bins);
    }
}

5 Answers
地球回转人心会变 · 2019-02-15 11:29

I've written a bin-packing solution and I can recommend best-fit with random order.
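
For illustration, here is a minimal sketch of best-fit with random restarts (not my actual code; it assumes the question's bin capacity of 10, plain std::vector, and an arbitrary number of restarts):

#include <algorithm>
#include <climits>
#include <iostream>
#include <random>
#include <vector>
using namespace std;

// Place each item into the open bin that would be left with the least free
// space; open a new bin when nothing fits. Repeat with shuffled item orders.
int bestFitRandomOrder(vector<int> items, int capacity, mt19937 & rng)
{
    shuffle(items.begin(), items.end(), rng);
    vector<int> space;                                   // remaining space per open bin
    for (int item : items)
    {
        int best = -1;
        for (int b = 0; b < (int)space.size(); b++)
            if (space[b] >= item && (best < 0 || space[b] < space[best]))
                best = b;                                // tightest bin the item still fits in
        if (best < 0) space.push_back(capacity - item);  // open a new bin
        else space[best] -= item;
    }
    return (int)space.size();
}

int main()
{
    mt19937 rng(12345);
    vector<int> items = {4, 3, 4, 1, 7, 8};
    int best = INT_MAX;
    for (int run = 0; run < 100; run++)                  // random restarts
        best = min(best, bestFitRandomOrder(items, 10, rng));
    cout << best << endl;                                // almost certainly 3 after 100 restarts
}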

在下西门庆 · 2019-02-15 11:36

But if I increase the size of my array to more than 6 elements, it becomes extremely slow -- it takes about 25 seconds to solve an array containing 8 elements. Can you tell me if there's anything wrong with the algorithm?

That's normal with brute force. Brute force does not scale at all.

萌系小妹纸 · 2019-02-15 11:42

In your case: the bin size is 10 and the items sum to 27, so at least 3 bins (total capacity 30) are needed. You could try first fit decreasing, and it works!
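
For example, a minimal first-fit-decreasing sketch (my own illustration, assuming std::vector and the question's capacity of 10) produces the 3-bin packing:

#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

// Sort items in decreasing order, then drop each one into the first open bin
// that still has room; open a new bin when none does.
int firstFitDecreasing(vector<int> items, int capacity)
{
    sort(items.rbegin(), items.rend());                  // largest items first
    vector<int> space;                                   // remaining space per open bin
    for (int item : items)
    {
        bool placed = false;
        for (int & s : space)
            if (s >= item) { s -= item; placed = true; break; }
        if (!placed) space.push_back(capacity - item);   // open a new bin
    }
    return (int)space.size();
}

int main()
{
    cout << firstFitDecreasing({4, 3, 4, 1, 7, 8}, 10) << endl;   // prints 3
}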

More ways to improve: with 3 bins and 27 size units you will have 3 units of space left over, which means you can ignore the item of size 1; if you fit the other items into 3 bins, the 1 will fit somewhere. That leaves you with 26 size units, so at least one bin will have at least two units of empty space. If you had items of size 2, you could ignore them as well because they would fit. If you had two items of size 2, you could ignore items of size 3 as well.

You have two items, of sizes 7 and 3, which together add up to exactly the bin size. There is always an optimal solution where these two share a bin: if the 7 were packed with other items, those items would have total size 3 or less, so you could swap them with the 3 if it sits in another bin.

Another method: If you have k items >= bin size / 2 (you can't have two items equal to bin size / 2 at this point), then you need k bins. This might increase the minimum number of bins that you estimated initially which in turn increases the guaranteed empty space in all bins which increases the minimum size of leftover space in one bin. If for j = 1, 2, ..., k you can fit all items with them into j bins that could possibly fit into the same bin, then this is optimal. For example, if you had sizes 8, 1, 1 but no size 2, then 8+1+1 in a bin would be optimal. Since you have 8 + 4 + 4 left, and nothing fits with the 8, "8" alone in its bin is optimal. (If you had items of sizes 8, 8, 8, 2, 1, 1, 1 and nothing else of size 2, packing them into three bins would be optimal).

More things to try if you have large items: If you have a large item, and the largest item that fits with it is as large or larger than any combination of items that would fit, then combining them is optimal. If there is more space, then this can be repeated.

So all in all, a bit of thinking reduced the problem to fitting two items of sizes 4, 4 into one or more bins. With larger problems, every little bit helps.

迷人小祖宗 · 2019-02-15 11:42

After doing what you can to reduce the problem, you are left with the problem to fit n items into k bins if possible, into k + 1 bins otherwise, or into k + 2 bins etc. If k bins fail, then you know that you will have more empty space in an optimal solution of k + 1 bins, which may make it possible to remove more small items, so that's the first thing to do.

Then you try some simple methods: first fit descending, next fit descending. I tried a reasonably fast variation of first fit descending: as long as the two largest items fit, add the largest item. Otherwise, find the single item or the largest combination of two items that fit, and add the single item or the larger of that combination. If any of these algorithms fits your items into k bins, you have solved the problem.

And eventually you need brute force. You can decide: Do you attempt to fit everything into k bins, or do you attempt to prove it isn't possible? You will have some empty space to play with. Let's say 10 bins of size 100 and items of total size 936, that would leave you 64 units of empty space. If you put only items of size 80 into your first bin, then 20 of your 64 units are already gone, making it much harder to find a solution from there. So you don't try things in random order. You first try combinations for the first bin that fill it completely or close to completely. Since small items make it easier to fill containers completely you try not to use them in the first bins but leave them for later, when you have less choice of items. And when you've found items to put into a bin, try one of the simple algorithms to see if they can finish it. For example, if first fit descending put 90 units into the first bin, and you just managed to put 99 units in there, it is quite possible that this is enough improvement to fit everything.

On the other hand, if there is very little space (10 bins, total item size 995, for example) you may want to prove that fitting the items is not possible. You don't need to care about optimising the algorithm to find a solution quickly, because you need to try all combinations to see that none of them work. Obviously, with these numbers you need to fit at least 95 units into the first bin, and so on; that might make it easy to rule out solutions quickly. If you proved that k bins are not achievable, then k + 1 bins should be a much easier target.

\"骚年 ilove
6楼-- · 2019-02-15 11:43

The dynamic programming algorithm is O(n^{2k}), where k is the number of distinct item sizes and n is the total number of items. This can be very slow irrespective of the implementation. Typically, when solving an NP-hard problem, heuristics are required for speed.
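
For concreteness, here is a minimal memoized sketch of the recurrence cited in the question (my own illustration, not the asker's code; it uses std::vector instead of the Stanford Vector and assumes the bin capacity of 10 from the example):

#include <algorithm>
#include <climits>
#include <functional>
#include <iostream>
#include <map>
#include <vector>
using namespace std;

const int CAPACITY = 10;
const vector<int> sizes  = {1, 3, 4, 7, 8};   // distinct item sizes in the example
const vector<int> counts = {1, 1, 2, 1, 1};   // how many items of each size

map<vector<int>, int> memo;                    // OPT value per remaining-count tuple

// OPT(i_1, ..., i_k): minimum number of bins for the remaining counts.
int OPT(const vector<int> & state)
{
    bool empty = true;
    for (int c : state) if (c > 0) empty = false;
    if (empty) return 0;
    auto it = memo.find(state);
    if (it != memo.end()) return it->second;

    int best = INT_MAX;
    vector<int> q(sizes.size(), 0);            // contents of one bin, as counts

    // Enumerate every tuple q <= state that fits into a single bin (the set Q),
    // and take 1 + OPT(state - q) over all of them.
    function<void(int, int)> fill = [&](int idx, int load)
    {
        if (idx == (int)sizes.size())
        {
            if (load == 0) return;             // a bin must contain at least one item
            vector<int> rest(state);
            for (int j = 0; j < (int)rest.size(); j++) rest[j] -= q[j];
            best = min(best, 1 + OPT(rest));
            return;
        }
        for (int take = 0; take <= state[idx] && load + take * sizes[idx] <= CAPACITY; take++)
        {
            q[idx] = take;
            fill(idx + 1, load + take * sizes[idx]);
        }
        q[idx] = 0;                            // reset before returning to the caller
    };
    fill(0, 0);
    return memo[state] = best;
}

int main()
{
    cout << OPT(counts) << endl;               // prints 3 for {4,3,4,1,7,8}
}

Memoization avoids recomputing states, but the number of states is still roughly the product of the (n_j + 1) values, which is where the n^k factors come from.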

I suggest you consider Next Fit Decreasing Height (NFDH) and First Fit Decreasing Height (FFDH) from Coffman et al. They are 2-optimal and 17/10-optimal, respectively, and they run in O(n log n) time.

I recommend you first try NFDH: sort in decreasing order, store the result in a linked list, then repeatedly try to insert the items starting from the beginning (largest values first) until you have filled the bin or there are no more items that can be inserted. Then go to the next bin, and so on.
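
A minimal sketch of that procedure (my own illustration, using std::list and the question's capacity of 10) could look like this:

#include <algorithm>
#include <iostream>
#include <list>
#include <vector>
using namespace std;

// Sort in decreasing order, keep the items in a linked list, and for each bin
// walk the list from the front, taking every item that still fits; then move
// on to the next bin.
int nextFitDecreasing(vector<int> items, int capacity)
{
    sort(items.rbegin(), items.rend());
    list<int> remaining(items.begin(), items.end());
    int bins = 0;
    while (!remaining.empty())
    {
        bins++;
        int space = capacity;
        for (auto it = remaining.begin(); it != remaining.end(); )
        {
            if (*it <= space) { space -= *it; it = remaining.erase(it); }
            else ++it;
        }
    }
    return bins;
}

int main()
{
    cout << nextFitDecreasing({4, 3, 4, 1, 7, 8}, 10) << endl;   // prints 3
}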

References:

Owen Kaser, Daniel Lemire, Tag-Cloud Drawing: Algorithms for Cloud Visualization, Tagging and Metadata for Social Information Organization (WWW 2007), 2007. (See Section 5.1 for a related discussion.)

E. G. Coffman, Jr., M. R. Garey, D. S. Johnson, and R. E. Tarjan. Performance bounds for level-oriented two-dimensional packing algorithms. SIAM J. Comput., 9(4):808–826, 1980.
