I have a small problem understanding the coin change problem in dynamic programming.
Simply put, I have to make change for a given sum using the minimum number of coins.
I have n denominations of coins with values 1 = v1 < v2 < ... < vn, and we let M(j) denote the minimum number of coins required to make change for the amount j.
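For reference, the formula I'm asking about is, as far as I understand it:

    M(j) = 1 + min{ M(j - vi) : 1 <= i <= n and vi <= j },  with M(0) = 0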
In the above formula I don't understand what M(j - vi) means. Does vi have to be the maximum value among the coins used to make the amount j - 1?
You're making piles of coins for different values j, named M(j). The point of M(j - vi) is to consider a coin of value vi, then which pile do you add it to in order to reach the value j? The pile with value j - vi of course, because that plus the coin you're considering now add up to the value j.
Of course the goal is to use as few coins as possible, so you take the smallest pile that you can extend to reach the value j by adding a coin of value vi to it. That's what the min does. The +1 is there because you add that one coin vi to the pile to form the new pile M(j).
The plus one means you are spending one more coin, and the amount that still has to be made up shrinks by the value of that coin, which is why the argument drops from j to j - vi.
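If it helps to see the recurrence as running code, here is a minimal bottom-up sketch in Python (the function name min_coins and the example denominations are my own, just for illustration; it assumes the smallest coin is 1, as in your problem statement, so every amount is reachable):

    def min_coins(amount, denominations):
        INF = float("inf")
        M = [0] + [INF] * amount              # M[0] = 0: zero coins make the amount 0
        for j in range(1, amount + 1):
            for v in denominations:
                if v <= j and M[j - v] + 1 < M[j]:
                    # extend the best pile for j - v with one coin of value v
                    M[j] = M[j - v] + 1
        return M[amount]

    print(min_coins(11, [1, 2, 5]))           # 3, e.g. 5 + 5 + 1

Each M[j - v] + 1 is exactly the "take the pile for j - v and drop one more coin of value v on it" step described above.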