Arranging 1s in a 2D matrix

Posted 2020-02-24 12:07

You are given the number of rows and columns of a 2D matrix.

Initially, all elements of the matrix are 0.

You are given the number of 1's that should be present in each row.

You are given the number of 1's that should be present in each column.

Determine whether it is possible to form such a matrix.

Example:

Input: r=3 c=2 (no. of rows and columns)
2 1 0 (number of 1's that should be present in each row respectively)
1 2 (number of 1's that should be present in each column respectively)

Output: Possible

Explanation:

1 1
0 1
0 0

I tried solving this problem for about 12 hours by checking whether the summation of the Ri equals the summation of the Ci.

But that check alone is not sufficient; the sums can match even when no matrix is possible, as in:

3 3
1 3 0
0 2 2

(Both sums equal 4, yet the row that needs 3 ones would have to put a 1 in every column, including the one whose required count is 0.)

r and c can be up to 10^5.

Any ideas on how I should proceed?

Edit: constraints added, and the output should only be "possible" or "impossible". The matrix itself need not be displayed.

Can anyone help me now?

6 Answers
狗以群分
#2 · 2020-02-24 12:21

Inspired by the solution given by RobertBaron, I have tried to build a new algorithm.

rows = [int(x) for x in input().split()]
cols = [int(x) for x in input().split()]
rows.sort()

for i in range(len(rows)):
    # Re-sort so this row's 1s go into the columns with the largest
    # remaining counts (the greedy needs this to stay correct).
    cols.sort(reverse=True)
    for j in range(len(cols)):
        if rows[i] == 0:
            break
        if cols[j] != 0:
            rows[i] -= 1
            cols[j] -= 1

print("rows: ", rows)
print("cols: ", cols)

# If any count is still nonzero, print NO; else print YES.
if any(rows) or any(cols):
    print("NO")
else:
    print("YES")

Here, I have sorted the rows in ascending order and, before filling each row, re-sorted the columns in descending order, decrementing a row count and a column count whenever a 1 is placed. It works for all the test cases posted here! The rest, GOD knows.

走好不送
#3 · 2020-02-24 12:31

You can use brute force (iterating through all 2^(r * c) possibilities) to solve it, but that will take a very long time. If r * c is under 64, you can accelerate it to a certain extent using bit-wise operations on 64-bit integers; however, even then, iterating through all 2^64 possibilities at one try per millisecond would take over 500M years.

A wiser choice is to add bits one by one, and only continue placing bits if no constraints are broken. This will eliminate the vast majority of possibilities, greatly speeding up the process. Look up backtracking for the general idea. It is not unlike solving sudokus through guesswork: once it becomes obvious that your guess was wrong, you erase it and try guessing a different digit.
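
To make the idea concrete, here is a minimal Python backtracking sketch (my illustration, with illustrative names; it is exponential and only practical for tiny matrices):

from itertools import combinations

def exists_by_backtracking(row_counts, col_counts):
    if sum(row_counts) != sum(col_counts):   # quick necessary condition
        return False
    cols = list(col_counts)

    def place(r):
        if r == len(row_counts):
            return all(c == 0 for c in cols)          # every column satisfied
        candidates = [j for j, c in enumerate(cols) if c > 0]
        for chosen in combinations(candidates, row_counts[r]):
            for j in chosen:                           # guess: put this row's 1s here
                cols[j] -= 1
            if place(r + 1):
                return True
            for j in chosen:                           # guess was wrong: erase it
                cols[j] += 1
        return False

    return place(0)

print(exists_by_backtracking([2, 1, 0], [1, 2]))     # True (the question's example)
print(exists_by_backtracking([1, 3, 0], [0, 2, 2]))  # False (sums match, no matrix)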

As with sudokus, there are certain strategies that can be written into code and will result in speedups when they apply. For example, if the sum of 1s in rows is different from the sum of 1s in columns, then there are no solutions.

If over 50% of the bits will be on, you can instead work on the complementary problem (transform all ones to zeroes and vice versa, while updating the row and column counts). The two problems are equivalent, because any answer for one, once flipped, is an answer for the other.
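
A sketch of that transformation (the helper name is mine): flipping every cell turns a row with k ones among c columns into one with c - k ones, and likewise for columns:

def complement_counts(row_counts, col_counts):
    r, c = len(row_counts), len(col_counts)
    # After flipping, a row that had k ones has c - k, and a column r - k.
    return [c - k for k in row_counts], [r - k for k in col_counts]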

smile是对你的礼貌
#4 · 2020-02-24 12:44

I will illustrate the algorithm with an example.

Assume we have m rows and n columns. Let rows[i] be the number of 1s in row i, for 0 <= i < m, and cols[j] be the number of 1s in column j, for 0 <= j < n.

For example, for m = 3, and n = 4, we could have: rows = {4 2 3}, cols = {1 3 2 3}, and the solution array would be:

    1 3 2 3
  +--------
4 | 1 1 1 1
2 | 0 1 0 1
3 | 0 1 1 1

Because we only want to know whether a solution exists, the values in rows and cols may be permuted in any order. The solution of each permutation is just a permutation of the rows and columns of the above solution.

So, given rows and cols, sort cols in decreasing order and rows in increasing order. For our example, we get cols = {3 3 2 1} and rows = {2 3 4}, and the equivalent problem:

    3 3 2 1
  +--------
2 | 1 1 0 0
3 | 1 1 1 0
4 | 1 1 1 1

We transform cols into a form that is better suited for the algorithm. What cols tells us is that we have two series of 1s of length 3, one series of 1s of length 2, and one series of 1s of length 1, which are to be distributed among the rows of the array. We rewrite cols to capture just that, namely COLS = {2/3 1/2 1/1}: 2 series of length 3, 1 series of length 2, and 1 series of length 1.

Because we have 2 series of length 3, a solution exists only if we can put two 1s in the first row. This is possible because rows[0] = 2. We do not actually put any 1 in the first row, but record the fact that 1s have been placed there by decrementing the length of the series of length 3. So COLS becomes:

COLS = {2/2 1/2 1/1}

and we combine our two counts for series of length 2, yielding:

COLS = {3/2 1/1}

We now have the reduced problem:

3 | 1 1 1 0
4 | 1 1 1 1

Again we need to place 1s from our series of length 2 to have a solution. Fortunately, rows[1] = 3 and we can do this. We decrement the length of 3/2 and get:

COLS = {3/1 1/1} = {4/1}

We have the reduced problem:

4 | 1 1 1 1

Which is solved by 4 series of length 1, just what we have left. If at any step, the series in COLS cannot be used to satisfy a row count, then no solution is possible.

The general processing for each row may be stated as follows. For each row r, starting from the first element in COLS, decrement the lengths of as many elements count[k]/length[k] of COLS as needed, so that the sum of the count[k]'s equals rows[r]. Eliminate series of length 0 in COLS and combine series of same length.

Note that because elements of COLS are in decreasing order of lengths, the length of the last element decremented is always less than or equal to the next element in COLS (if there is a next element).

EXAMPLE 2 : Solution exists.

rows = {1 3 3}, cols = {2 2 2 1} => COLS = {3/2 1/1}

1 series of length 2 is decremented to satisfy rows[0] = 1, and the 2 other series of length 2 remain at length 2.

rows[0] = 1
COLS = {2/2 1/1 1/1} = {2/2 2/1}

The 2 series of length 2 are decremented, and 1 of the series of length 1. The series whose length has become 0 is deleted, and the series of length 1 are combined.

rows[1] = 3
COLS = {2/1 1/0 1/1} = {2/1 1/1} = {3/1}

A solution exists, since rows[2] can be satisfied.

rows[2] = 3
COLS = {3/0} = {}

EXAMPLE 3: Solution does not exist.

rows = {0 2 3}, cols = {3 2 0 0} => COLS = {1/3 1/2}

rows[0] = 0
COLS = {1/3 1/2}

rows[1] = 2
COLS = {1/2 1/1}

rows[2] = 3  => impossible to satisfy; no solution.

SPACE COMPLEXITY

It is easy to see that it is O(m + n).

TIME COMPLEXITY

We iterate over each row only once. For each row i, we need to iterate over at most rows[i] <= n elements of COLS. Time complexity is O(m x n).
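
For reference, here is a compact Python sketch of this reduction (my own rendition of the description above, not a translation of the C# implementation below; it favors clarity over the linked-list optimizations, so it does not achieve the tightest possible bounds):

from collections import Counter

def exists_by_reduction(rows, cols):
    if sum(rows) != sum(cols):
        return False
    COLS = Counter(c for c in cols if c > 0)    # series length -> number of series
    for need in sorted(rows):
        if need > sum(COLS.values()):           # fewer series than 1s to place
            return False
        moved = Counter()                       # series already shortened for this row
        for length in sorted(COLS, reverse=True):   # longest series first
            if need == 0:
                break
            take = min(need, COLS[length])
            COLS[length] -= take
            if length > 1:
                moved[length - 1] += take       # a 1 placed: the series shrinks by one
            need -= take
        COLS += moved                           # merge equal lengths; zero counts drop out
    return not COLS                             # success iff every series was consumed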

After finding this algorithm, I found the following theorem:

The Havel-Hakimi theorem (Havel 1955, Hakimi 1962) states that there exists a matrix X_{n,m} of 0's and 1's with row totals a^0 = (a_1, a_2, ..., a_n) and column totals b^0 = (b_1, b_2, ..., b_m), where b_i >= b_{i+1} for every 0 < i < m, if and only if a matrix X_{n-1,m} of 0's and 1's with row totals a^1 = (a_2, a_3, ..., a_n) and column totals b^1 = (b_1 - 1, b_2 - 1, ..., b_{a_1} - 1, b_{a_1 + 1}, ..., b_m) also exists.

from the post Finding if binary matrix exists given the row and column sums.

This is basically what my algorithm does, while trying to optimize the decrementing part, i.e., all the -1's in the above theorem. Now that I see the above theorem, I know my algorithm is correct. Nevertheless, I checked the correctness of my algorithm by comparing it with a brute-force algorithm for arrays of up to 50 cells.

Here is the C# implementation.

public class Pair
{
    public int Count;
    public int Length;
}

public class PairsList
{
    public LinkedList<Pair> Pairs;
    public int TotalCount;
}

class Program
{

    static void Main(string[] args)
    {
        int[] rows = new int[] { 0, 0, 1, 1, 2, 2 };
        int[] cols = new int[] { 2, 2, 0 };
        bool success = Solve(cols, rows);
    }

    static bool Solve(int[] cols, int[] rows)
    {
        PairsList pairs = new PairsList() { Pairs = new LinkedList<Pair>(), TotalCount = 0 };

        FillAllPairs(pairs, cols);

        for (int r = 0; r < rows.Length; r++)
        {
            if (rows[r] > 0)
            {
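                // Not enough series left to place rows[r] 1s in distinct columns.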
                if (pairs.TotalCount < rows[r])
                    return false;

                // The longest remaining series is longer than the number of rows left.
                if (pairs.Pairs.First != null && pairs.Pairs.First.Value.Length > rows.Length - r)
                    return false;

                DecrementPairs(pairs, rows[r]);
            }
        }

        return pairs.Pairs.Count == 0 || pairs.Pairs.Count == 1 && pairs.Pairs.First.Value.Length == 0;
    }

    // Places 'count' 1s for the current row: decrements the lengths of the
    // 'count' longest series, removing exhausted series and merging series
    // that end up with the same length.
    static void DecrementPairs(PairsList pairs, int count)
    {
        LinkedListNode<Pair> pair = pairs.Pairs.First;

        while (count > 0 && pair != null)
        {
            LinkedListNode<Pair> next = pair.Next;

            if (pair.Value.Count == count)
            {
                pair.Value.Length--;
                if (pair.Value.Length == 0)
                {
                    pairs.Pairs.Remove(pair);
                    pairs.TotalCount -= count;
                }
                else if (pair.Next != null && pair.Next.Value.Length == pair.Value.Length)
                {
                    pair.Value.Count += pair.Next.Value.Count;
                    pairs.Pairs.Remove(pair.Next);
                    next = pair;
                }
                count = 0;
            }
            else if (pair.Value.Count < count)
            {
                count -= pair.Value.Count;
                pair.Value.Length--;
                if (pair.Value.Length == 0)
                {
                    pairs.Pairs.Remove(pair);
                    pairs.TotalCount -= pair.Value.Count;
                }
                else if(pair.Next != null && pair.Next.Value.Length == pair.Value.Length)
                {
                    pair.Value.Count += pair.Next.Value.Count;
                    pairs.Pairs.Remove(pair.Next);
                    next = pair;
                }
            }
            else // pair.Value.Count > count
            {
                Pair p = new Pair() { Count = count, Length = pair.Value.Length - 1 };
                pair.Value.Count -= count;
                if (p.Length > 0)
                {
                    if (pair.Next != null && pair.Next.Value.Length == p.Length)
                        pair.Next.Value.Count += p.Count;
                    else
                        pairs.Pairs.AddAfter(pair, p);
                }
                else
                    pairs.TotalCount -= count;
                count = 0;
            }

            pair = next;
        }
    }

    // Builds the list of (Count, Length) pairs from cols, which is assumed to
    // be sorted in decreasing order; runs of equal column counts are merged.
    static int FillAllPairs(PairsList pairs, int[] cols)
    {
        List<Pair> newPairs = new List<Pair>();

        int c = 0;
        while (c < cols.Length && cols[c] > 0)
        {
            int k = c++;
            if (cols[k] > 0)
                pairs.TotalCount++;
            while (c < cols.Length && cols[c] == cols[k])
            {
                if (cols[k] > 0) pairs.TotalCount++;
                c++;
            }
            newPairs.Add(new Pair() { Count = c - k, Length = cols[k] });
        }

        LinkedListNode<Pair> pair = pairs.Pairs.First;

        foreach (Pair p in newPairs)
        {
            while (pair != null && p.Length < pair.Value.Length)
                pair = pair.Next;

            if (pair == null)
            {
                pairs.Pairs.AddLast(p);
            }
            else if (p.Length == pair.Value.Length)
            {
                pair.Value.Count += p.Count;
                pair = pair.Next;
            }
            else // p.Length > pair.Value.Length
            {
                pairs.Pairs.AddBefore(pair, p);
            }
        }

        return c;
    }
}
家丑人穷心不美
#5 · 2020-02-24 12:44

This problem can be solved in O(n log n) using the Gale-Ryser theorem (where n is the maximum of the lengths of the two degree sequences).

First, make both sequences the same length by appending 0's to the shorter one, and let this common length be n. Call the sequences A and B. Sort A in non-increasing order, and sort B in non-increasing order. Create a prefix sum array P for B such that the ith element of P equals the sum of the first i elements of B. Now, iterate over k from 1 to n, and check the Gale-Ryser condition:

A_1 + A_2 + ... + A_k <= min(B_1, k) + min(B_2, k) + ... + min(B_n, k)

The second sum can be calculated in O(log n): binary-search for the boundary index where the entries of B drop below k (a single boundary, since B is sorted), then use the precalculated P.
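
Here is a short Python sketch of this check (my rendition, with illustrative names; B is stored in ascending order so that bisect finds the boundary directly, which changes nothing since the sum of min(B_j, k) does not depend on the order of B):

from bisect import bisect_left

def exists_by_gale_ryser(row_sums, col_sums):
    n = max(len(row_sums), len(col_sums))
    A = sorted(row_sums + [0] * (n - len(row_sums)), reverse=True)  # non-increasing
    B = sorted(col_sums + [0] * (n - len(col_sums)))                # ascending, for bisect
    if sum(A) != sum(B):
        return False
    P = [0]
    for b in B:
        P.append(P[-1] + b)       # P[i] = sum of the i smallest entries of B
    lhs = 0
    for k in range(1, n + 1):
        lhs += A[k - 1]           # sum of the k largest row sums
        i = bisect_left(B, k)     # B[:i] < k and B[i:] >= k
        rhs = P[i] + k * (n - i)  # sum over j of min(B_j, k)
        if lhs > rhs:
            return False
    return True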

家丑人穷心不美
#6 · 2020-02-24 12:45

Hint: one possible solution utilizes the Maximum Flow Problem, by creating a special graph and running the standard maximum flow algorithm on it.

If you're not familiar with the above problem, you may start reading about it, e.g., here: https://en.wikipedia.org/wiki/Maximum_flow_problem

If you're interested in the full solution, please comment and I'll update the answer. But it requires understanding the above algorithm.

Solution as requested:

Create a graph of r+c+2 nodes.

Node 0 is the source, node r+c+1 is the sink. Nodes 1..r represent the rows, while r+1..r+c the columns.

Create following edges:

  • from source to nodes i=1..r of capacity r_i
  • from nodes i=r+1..r+c to sink of capacity c_i
  • between all the nodes i=1..r and j=r+1..r+c of capacity 1

Run the maximum flow algorithm; the saturated edges between row nodes and column nodes define where you should put the 1s.

If it's not possible, then the maximum flow value is less than the number of expected 1s in the matrix.
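
Here is a sketch of this construction using networkx (an assumed third-party dependency, for illustration only; note that the r x c unit-capacity edges make this graph far too large for r = c = 10^5, where a specialized algorithm would be needed):

import networkx as nx  # assumed dependency; illustration only

def exists_by_maxflow(row_counts, col_counts):
    r, c = len(row_counts), len(col_counts)
    G = nx.DiGraph()
    source, sink = 0, r + c + 1
    for i, need in enumerate(row_counts, start=1):
        G.add_edge(source, i, capacity=need)        # source -> row i
        for j in range(r + 1, r + c + 1):
            G.add_edge(i, j, capacity=1)            # row i -> column j
    for j, need in enumerate(col_counts, start=r + 1):
        G.add_edge(j, sink, capacity=need)          # column j -> sink
    flow_value, _ = nx.maximum_flow(G, source, sink)
    # Possible iff the flow saturates every source and sink edge.
    return flow_value == sum(row_counts) == sum(col_counts)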

疯言疯语
#7 · 2020-02-24 12:47

(Note: to avoid confusion between the actual numbers in the problem and the zeros and ones in the matrix, I'm going to fill the matrix with spaces and X's instead. This obviously doesn't change the problem.)

Some observations:

  • If you're filling in a row, and there's (for example) one column needing 10 more X's and another column needing 5 more X's, then you're sometimes better off putting the X in the "10" column and saving the "5" column for later (because you might later run into 5 rows that each need 2 X's), but you're never better off putting the X in the "5" column and saving the "10" column for later (because even if you later run into 10 rows that all need an X, they won't mind if they don't all go in the same column). So we can use a somewhat "greedy" algorithm: always put an X in the column still needing the most X's. (Of course, we'll need to make sure that we don't greedily put an X in the same column multiple times for the same row!) A heap-based sketch of this greedy appears right after this list.
  • Since you don't need to actually output a possible matrix, the rows are all interchangeable and the columns are all interchangeable; all that matters is how many rows still need 1 X, how many still need 2 X's, etc., and likewise for columns.
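
Here is a small Python sketch of that greedy (my illustration; it uses a max-heap of column counts instead of the bucket array described below, so it pays an extra log factor):

import heapq

def exists_by_greedy(row_counts, col_counts):
    if sum(row_counts) != sum(col_counts):
        return False
    heap = [-c for c in col_counts if c > 0]   # max-heap via negated counts
    heapq.heapify(heap)
    for need in sorted(row_counts):            # cheapest rows first (see the note below)
        if need > len(heap):                   # fewer nonzero columns than X's needed
            return False
        taken = [heapq.heappop(heap) for _ in range(need)]  # the 'need' neediest columns
        for t in taken:
            if t + 1 < 0:                      # column still needs X's afterwards
                heapq.heappush(heap, t + 1)
    return not heap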

With that in mind, here's one fairly simple approach:

  • (Optimization.) Add up the counts for all the rows, add up the counts for all the columns, and return "impossible" if the sums don't match.
  • Create an array of length r+1 and populate it with how many columns need 1 X, how many need 2 X's, etc. (You can ignore any columns needing 0 X's.)
  • (Optimization.) To help access the array efficiently, build a stack/linked-list/etc. of the indices of nonzero array elements, in decreasing order (e.g., starting at index r if it's nonzero, then index r−1 if it's nonzero, etc.), so that you can easily find the elements representing columns to put X's in.
  • (Optimization.) To help determine when there's a row that can't be satisfied, also make note of the total number of columns needing any X's, and make note of the largest number of X's needed by any row. If the former is less than the latter, return "impossible".
  • (Optimization.) Sort the rows by the number of X's they need.
  • Iterate over the rows, starting with the one needing the fewest X's and ending with the one needing the most X's, and for each one:
    • Update the array accordingly. For example, if a row needs 12 X's, and the array looks like [..., 3, 8, 5], then you'll update the array to look like [..., 3+7 = 10, 8+5−7 = 6, 5−5 = 0]. If it's not possible to update the array because you run out of columns to put X's in, return "impossible". (Note: this part should never actually return "impossible", because we're keeping count of the number of columns left and the max number of columns we'll need, so we should have already returned "impossible" if this was going to happen. I mention this check only for clarity.)
    • Update the stack/linked-list of indices of nonzero array elements.
    • Update the total number of columns needing any X's. If it's now less than the greatest number of X's needed by any row, return "impossible".
    • (Optimization.) If the first nonzero array element has an index greater than the number of rows left, return "impossible".
  • If we complete our iteration without having returned "impossible", return "possible".

(Note: the reason I say to start with the row needing the fewest X's, and work your way to the row with the most X's, is that a row needing more X's may involve examining and updating more elements of the array and of the stack, so the rows needing fewer X's are cheaper. This isn't just a matter of postponing the work: the rows needing fewer X's can help "consolidate" the array, so that there will be fewer distinct column-counts, making the later rows cheaper than they would otherwise be. In a very-bad-case scenario, such as the case of a square matrix where every single row needs a distinct positive number of X's and every single column needs a distinct positive number of X's, the fewest-to-most order means you can handle each row in O(1) time, for linear time overall, whereas the most-to-fewest order would mean that each row would take time proportional to the number of X's it needs, for quadratic time overall.)

Overall, this takes no worse than O(r+c+n) time (where n is the number of X's); I think that the optimizations I've listed are enough to ensure that it's closer to O(r+c) time, but it's hard to be 100% sure. I recommend trying it to see if it's fast enough for your purposes.
