Consider the problem below:
You are given an infinite number of nickels (5 cents) and pennies (1 cent). Write code to calculate the number of ways of representing n cents.
My code
def coins(n):
    if n < 0:
        return 0
    elif n == 0:
        return 1
    else:
        if cnt_lst[n-1] == 0:
            cnt_lst[n-1] = coins(n-1) + coins(n-5)
        return cnt_lst[n-1]

if __name__ == "__main__":
    cnt = int(input())
    cnt_lst = [0] * cnt  # memoization
    ret = coins(cnt)
    print(ret)
The approach above counts repeated patterns more than once (obviously I'm not checking for them explicitly):
[5,1,1,1,1,1] [1,5,1,1,1,1] [1,1,5,1,1,1], etc
Maintaining another list that holds the previously seen patterns would require a lot of memory as n grows. What other approach can we use to overcome this problem?
Instead of using a 1-dimensional list, you can use a 2-dimensional array where one dimension is the total, and the second dimension is the largest-value coin available.
Let's say C is your list of coin values in ascending order (in your example, C = [1, 5]). Then let A[i][j] be the number of ways to represent value i with coins 0 through j.
We know that for any j, A[0][j] = 1, because there is exactly one way to represent the value 0: no coins.
Now suppose we want to find A[8][1], the number of ways to represent 8 cents with pennies and nickels. Each representation will either use a nickel or it won't. If we don't use a nickel, then we can only use pennies, so there are A[8][0] ways to do this. If we do use a nickel, then we have 3 cents left, so there are A[3][1] ways to do this.
To compute A[8][0] we only have one coin available, so A[8][0] = A[7][0] = ... = A[0][0] = 1.
To compute A[3][1] we can't use a nickel since 3 < 5, so A[3][1] = A[3][0]. From there we have A[3][0] = A[2][0] = ... = 1, as above.
In general:
A[i][j] = A[i][j-1]                    if i < C[j]
A[i][j] = A[i][j-1] + A[i-C[j]][j]     otherwise
This algorithm works for any set of coin values.
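For reference, here is a minimal bottom-up sketch of this table in Python (the name count_ways and the exact filling order are my own choices for illustration, not part of the answer above):

def count_ways(n, C):
    # A[i][j] = number of ways to represent value i with coins C[0..j]
    A = [[0] * len(C) for _ in range(n + 1)]
    for j in range(len(C)):
        A[0][j] = 1  # exactly one way to represent 0: no coins
    for i in range(1, n + 1):
        for j in range(len(C)):
            # representations that do not use coin C[j]
            A[i][j] = A[i][j - 1] if j > 0 else 0
            # representations that use at least one coin C[j]
            if i >= C[j]:
                A[i][j] += A[i - C[j]][j]
    return A[n][len(C) - 1]

print(count_ways(8, [1, 5]))  # 2, matching A[8][1] = A[8][0] + A[3][1] = 1 + 1 above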
Your implementation is a twist on the typical Coin Change Problem with one dimension fewer, which may cause you to double count a lot of the combinations. Still, I think the problem is not that complicated for the given coin system.
I think the solution is simply floor(N/5) + 1: you can use anywhere from 0 to floor(N/5) nickels, and whatever is left over must be made up with pennies.
It can be done this easily because your coin system is canonical, so a greedy algorithm / mindset can be applied.
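As a quick sanity check of that formula (the function name below is just for illustration):

def coins_closed_form(n):
    # pick k nickels for any k in 0..n//5; the remainder is forced to be pennies
    return n // 5 + 1

print(coins_closed_form(8))   # 2: one nickel + three pennies, or eight pennies
print(coins_closed_form(11))  # 3: zero, one, or two nickels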
If you insist on using Dynamic Programming to solve the problem, you need to add one dimension, which represents which types of coins are being used. This method is general and works with any coin system.
Define C(x, m) := the number of ways to make value x using the first m types of coins.
The recurrence formula then becomes:
C(x, m) = C(x - coin_type[m], m) + C(x, m-1)
which means you either use the m-th type of coin or you don't.
Here is the main point, and the reason this recurrence works while yours does not: the order in which the states are iterated.
Using this recurrence formula, we can do something like:
For i = 0 to # of coin_type
    For j = 0 to n
        C(j, i) = C(j - coin_type[i], i) + C(j, i - 1)
Notice that the outer loop forces an ordering on the coin types. Say coin_type = {1, 3, 5}: the first iteration of the loop counts the ways of using {1} only, the next iteration counts the ways of using {1, 3}, and the last iteration counts the ways of using {1, 3, 5}.
You will never count a way as using {3, 1, 5} or {1, 5, 3}, etc. The order of the coin types is fixed, meaning you will not double count anything.
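Here is a small sketch of that double loop in Python (the name count_change and the explicit 2-D table are my own layout choices; the recurrence and the iteration order are the ones described above):

def count_change(n, coin_type):
    # table[i][j] = C(j, i): ways to make value j using the first i+1 coin types
    k = len(coin_type)
    table = [[0] * (n + 1) for _ in range(k)]
    for i in range(k):
        table[i][0] = 1  # one way to make 0: use no coins
        for j in range(1, n + 1):
            # ways that do not use coin_type[i] at all
            table[i][j] = table[i - 1][j] if i > 0 else 0
            # ways that use at least one coin_type[i]
            if j >= coin_type[i]:
                table[i][j] += table[i][j - coin_type[i]]
    return table[k - 1][n]

print(count_change(8, [1, 5]))     # 2
print(count_change(8, [1, 3, 5]))  # 5 once 3-cent coins are allowed as well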