Given the number of rows and columns of a 2D matrix, where initially all elements are 0.
Given the number of 1's that should be present in each row.
Given the number of 1's that should be present in each column.
Determine whether it is possible to form such a matrix.
Example:
Input: r=3 c=2 (no. of rows and columns)
2 1 0 (number of 1's that should be present in each row respectively)
1 2 (number of 1's that should be present in each column respectively)
Output: Possible
Explanation:
1 1
0 1
0 0
I spent around 12 hours trying to solve this problem by checking whether the summation of Ri equals the summation of Ci.
But I realized that equal sums alone wouldn't be sufficient for cases like
3 3
1 3 0
0 2 2
r and c can be up to 10^5.
Any ideas on how I should move further?
Edit: Constraints added and output should only be "possible" or "impossible". The possible matrix need not be displayed.
Can anyone help me now?
Inspired by the solution given by RobertBaron, I have tried to build a new algorithm.
Here, I have sorted the rows in ascending order and the columns in descending order, decrementing a particular row count and column count whenever a 1 needs to be placed. It is working for all the test cases posted here; for the rest, God knows!
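Sketched in Python, the greedy reads roughly like this (my reconstruction, not the asker's actual code: rows are processed largest-first and each row's 1's always go into the columns that still need the most 1's, which I believe is equivalent to the ascending order described):

```python
def matrix_possible(rows, cols):
    """Greedy feasibility check: for each row (largest demand first),
    place its 1's in the columns that still need the most 1's."""
    if sum(rows) != sum(cols):
        return False
    cols = sorted(cols, reverse=True)
    for need in sorted(rows, reverse=True):
        if need > len(cols):
            return False
        # the 'need' largest remaining column counts must all be positive
        if need > 0 and cols[need - 1] <= 0:
            return False
        for j in range(need):
            cols[j] -= 1
        cols.sort(reverse=True)            # restore descending order
    return all(c == 0 for c in cols)
```

As written this is O(r * c log c) because of the re-sorting; the sketch favors clarity over speed.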
You can use brute force (iterating through all 2^(r * c) possibilities) to solve it, but that will take a long time. If r * c is under 64, you can accelerate it to a certain extent using bit-wise operations on 64-bit integers; however, even then, iterating through all 64-bit possibilities would take, at 1 try per ms, over 500M years.

A wiser choice is to add bits one by one, and only continue placing bits if no constraints are broken. This will eliminate the vast majority of possibilities, greatly speeding up the process. Look up backtracking for the general idea. It is not unlike solving sudokus through guesswork: once it becomes obvious that your guess was wrong, you erase it and try guessing a different digit.
As with sudokus, there are certain strategies that can be written into code and will result in speedups when they apply. For example, if the sum of 1s in rows is different from the sum of 1s in columns, then there are no solutions.
If over 50% of the bits will be on, you can instead work on the complementary problem (transform all ones to zeroes and vice-versa, while updating row and column counts). Both problems are equivalent, because any answer for one is also valid for the complementary.
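As an illustration of the backtracking idea, here is a minimal Python sketch (function name mine; only practical for small matrices, but handy as a reference checker): fill cells one by one, and prune as soon as a row budget can no longer be met.

```python
def exists_backtracking(rows, cols):
    """Try to fill an r x c 0/1 matrix cell by cell, pruning early."""
    r, c = len(rows), len(cols)
    if sum(rows) != sum(cols):
        return False
    rows, cols = list(rows), list(cols)

    def place(i, j):
        if i == r:                       # all rows filled: columns must be exact
            return all(x == 0 for x in cols)
        if j == c:                       # end of row i: its budget must be spent
            return rows[i] == 0 and place(i + 1, 0)
        if rows[i] > c - j:              # not enough cells left in this row
            return False
        # option 1: put a 1 here, if both budgets allow
        if rows[i] > 0 and cols[j] > 0:
            rows[i] -= 1; cols[j] -= 1
            if place(i, j + 1):
                return True
            rows[i] += 1; cols[j] += 1   # undo the guess (backtrack)
        # option 2: leave a 0 here
        return place(i, j + 1)

    return place(0, 0)
```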
I will illustrate the algorithm with an example.
Assume we have m rows and n columns. Let rows[i] be the number of 1s in row i, for 0 <= i < m, and cols[j] be the number of 1s in column j, for 0 <= j < n.

For example, for m = 3 and n = 4, we could have rows = {4 2 3}, cols = {1 3 2 3}, and the solution array would be:

1 1 1 1
0 1 0 1
0 1 1 1

Because we only want to know whether a solution exists, the values in
rows and cols may be permuted in any order. The solution of each permutation is just a permutation of the rows and columns of the above solution.

So, given rows and cols, sort cols in decreasing order and rows in increasing order. For our example, we have cols = {3 3 2 1} and rows = {2 3 4}, and the equivalent problem.

We transform
cols into a form that is better suited for the algorithm. What cols tells us is that we have two series of 1s of length 3, one series of 1s of length 2, and one series of 1s of length 1, that are to be distributed among the rows of the array. We rewrite cols to capture just that, that is COLS = {2/3 1/2 1/1}: 2 series of length 3, 1 series of length 2, and 1 series of length 1.

Because we have 2 series of length 3, a solution exists only if we can put two 1s in the first row. This is possible because rows[0] = 2. We do not actually put any 1 in the first row, but record the fact that 1s have been placed there by decrementing the length of the series of length 3. So COLS becomes:

COLS = {2/2 1/2 1/1}

and we combine our two counts for series of length 2, yielding:

COLS = {3/2 1/1}
We now have the reduced problem: rows = {3 4}, COLS = {3/2 1/1}.

Again we need to place 1s from our series of length 2 to have a solution. Fortunately, rows[1] = 3 and we can do this. We decrement the length of 3/2 and get COLS = {3/1 1/1}, which combines to COLS = {4/1}.

We have the reduced problem: rows = {4}, COLS = {4/1}.
Which is solved by 4 series of length 1, just what we have left. If at any step, the series in COLS cannot be used to satisfy a row count, then no solution is possible.

The general processing for each row may be stated as follows. For each row r, starting from the first element in COLS, decrement the lengths of as many elements count[k]/length[k] of COLS as needed, so that the sum of the count[k]'s equals rows[r]. Eliminate series of length 0 in COLS and combine series of the same length.

Note that because elements of COLS are in decreasing order of lengths, the length of the last element decremented is always less than or equal to the next element in COLS
(if there is a next element).

EXAMPLE 2: Solution exists. Take rows = {1 3 2} and cols = {2 2 2}, that is COLS = {3/2}.

1 series of length 2 is decremented to satisfy rows[0] = 1, and the 2 other series of length 2 remain at length 2, giving COLS = {2/2 1/1}.

Next, the 2 series of length 2 are decremented, and 1 of the series of length 1. The series whose length has become 0 is deleted, and the series of length 1 are combined, giving COLS = {2/1}.

A solution exists, for rows[2] can be satisfied.

EXAMPLE 3: Solution does not exist. Take rows = {1 3 0} and cols = {0 2 2} from the question: after sorting, rows = {0 1 3} and COLS = {2/2}. Satisfying rows[1] = 1 decrements one series, giving COLS = {1/2 1/1}, but rows[2] = 3 then requires three series and only two remain, so no solution is possible.
SPACE COMPLEXITY

It is easy to see that it is O(m + n).

TIME COMPLEXITY

We iterate over each row only once. For each row i, we need to iterate over at most rows[i] <= n elements of COLS. The time complexity is therefore O(m x n)
.

After finding this algorithm, I found the following theorem in the post Finding if binary matrix exists given the row and column sums: a matrix exists if and only if, processing the rows one at a time, we can always subtract 1 from each of the rows[i] largest remaining column sums without any column sum going negative.

This is basically what my algorithm does, while trying to optimize the decrementing part, i.e., all the -1's in the above theorem. Now that I see the above theorem, I know my algorithm is correct. Nevertheless, I checked the correctness of my algorithm by comparing it with a brute-force algorithm for arrays of up to 50 cells.
Here is the C# implementation.
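The original C# code is not reproduced here; the following is a Python sketch of the series-decrementing algorithm as described (my reconstruction, with COLS held as (count, length) pairs in decreasing order of length):

```python
from collections import Counter

def exists_series(rows, cols):
    """Feasibility check using the series-of-1s (COLS) representation:
    each row consumes 1s from the longest remaining series first."""
    if sum(rows) != sum(cols):
        return False
    # COLS: list of [count, length] pairs, lengths strictly decreasing
    counts = Counter(c for c in cols if c > 0)
    COLS = sorted(([cnt, ln] for ln, cnt in counts.items()),
                  key=lambda p: -p[1])
    for need in sorted(rows):              # rows in increasing order
        new, k = [], 0
        while need > 0 and k < len(COLS):
            cnt, ln = COLS[k]
            take = min(cnt, need)          # decrement 'take' series of length ln
            if ln - 1 > 0:
                new.append([take, ln - 1])
            if take < cnt:                 # some series of length ln untouched
                new.append([cnt - take, ln])
            need -= take
            k += 1
        if need > 0:                       # not enough series to satisfy the row
            return False
        new.extend(COLS[k:])
        # merge series of equal length, restore decreasing order
        merged = Counter()
        for cnt, ln in new:
            merged[ln] += cnt
        COLS = sorted(([cnt, ln] for ln, cnt in merged.items()),
                      key=lambda p: -p[1])
    return True
```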
This problem can be solved in O(n log n) using the Gale-Ryser theorem (where n is the maximum of the lengths of the two degree sequences).

First, make both sequences of equal length by adding 0's to the smaller sequence, and let this length be n. Let the sequences be A and B. Sort A in non-increasing order, and sort B in non-increasing order. Create a prefix sum array P for B such that the ith element of P is equal to the sum of the first i elements of B. Now, iterate over k from 1 to n, and check (together with sum(A) = sum(B)) that

A[1] + A[2] + ... + A[k] <= min(B[1], k) + min(B[2], k) + ... + min(B[n], k)

The second sum can be calculated in O(log n) using binary search for the index of the last number in B smaller than k, and then using the precalculated P.
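A Python sketch of this check (assuming the Gale-Ryser inequality stated above; `bisect_left` on B reversed into non-decreasing order counts how many elements of B are at least k):

```python
from bisect import bisect_left

def gale_ryser(rows, cols):
    """O(n log n) feasibility test via the Gale-Ryser condition."""
    n = max(len(rows), len(cols))
    # pad with 0's to equal length, sort both non-increasing
    A = sorted(rows, reverse=True) + [0] * (n - len(rows))
    B = sorted(cols, reverse=True) + [0] * (n - len(cols))
    if sum(A) != sum(B):
        return False
    P = [0]                                 # prefix sums of B
    for b in B:
        P.append(P[-1] + b)
    D = B[::-1]                             # B in non-decreasing order
    left = 0
    for k in range(1, n + 1):
        left += A[k - 1]                    # sum of the k largest of A
        t = n - bisect_left(D, k)           # how many B[j] are >= k
        # sum of min(B[j], k): k per element >= k, plus sum of smaller ones
        right = k * t + (P[n] - P[t])
        if left > right:
            return False
    return True
```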
Hint: one possible solution utilizes the Maximum Flow Problem, by creating a special graph and running a standard maximum flow algorithm on it.

If you're not familiar with the above problem, you may start reading about it e.g. here https://en.wikipedia.org/wiki/Maximum_flow_problem

If you're interested in the full solution please comment and I'll update the answer. But it requires understanding the above algorithm.
Solution as requested:
Create a graph of r+c+2 nodes.

Node 0 is the source, node r+c+1 is the sink. Nodes 1..r represent the rows, while r+1..r+c represent the columns.

Create the following edges:

- from the source to each row node i=1..r, of capacity r_i
- from each column node i=r+1..r+c to the sink, of capacity c_i
- between each row node i=1..r and each column node j=r+1..r+c, of capacity 1

Run a maximum flow algorithm; the saturated edges between row nodes and column nodes define where you should put the 1s.

If it's not possible, then the maximum flow value is less than the number of expected ones in the matrix.
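A Python sketch of this construction, using Edmonds-Karp on a dense capacity matrix (illustrative only: with r and c up to 10^5, the r*c unit edges make this graph far too large in practice, so you'd use the Gale-Ryser check instead):

```python
from collections import deque

def exists_maxflow(rows, cols):
    """Build the flow network described above and check whether the
    maximum flow equals the total number of 1s to place."""
    r, c = len(rows), len(cols)
    if sum(rows) != sum(cols):
        return False
    n = r + c + 2
    S, T = 0, n - 1
    cap = [[0] * n for _ in range(n)]
    for i in range(r):                      # source -> row node, capacity r_i
        cap[S][1 + i] = rows[i]
    for j in range(c):                      # column node -> sink, capacity c_j
        cap[1 + r + j][T] = cols[j]
    for i in range(r):                      # row -> column, capacity 1
        for j in range(c):
            cap[1 + i][1 + r + j] = 1
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[S] = S
        q = deque([S])
        while q and parent[T] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[T] == -1:                 # no augmenting path left
            break
        # find the bottleneck, then update residual capacities
        bottleneck = float('inf')
        v = T
        while v != S:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = T
        while v != S:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck
    return flow == sum(rows)
```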
(Note: to avoid confusion between when I'm talking about the actual numbers in the problem vs. when I'm talking about the zeroes and ones in the matrix, I'm going to instead fill the matrix with spaces and X's. This obviously doesn't change the problem.)
Some observations:
With that in mind, here's one fairly simple approach:
(Note: the reason I say to start with the row needing the fewest X's, and work your way up to the row needing the most X's, is that a row needing more X's may involve examining and updating more elements of the array and of the stack, so the rows needing fewer X's are cheaper. This isn't just a matter of postponing the work: the rows needing fewer X's can help "consolidate" the array, so that there will be fewer distinct column-counts, making the later rows cheaper than they would otherwise be. In a very-bad-case scenario, such as a square matrix where every single row needs a distinct positive number of X's and every single column needs a distinct positive number of X's, the fewest-to-most order means you can handle each row in O(1) time, for linear time overall, whereas the most-to-fewest order would mean that each row takes time proportional to the number of X's it needs, for quadratic time overall.)
Overall, this takes no worse than O(r+c+n) time (where n is the number of X's); I think that the optimizations I've listed are enough to ensure that it's closer to O(r+c) time, but it's hard to be 100% sure. I recommend trying it to see if it's fast enough for your purposes.