Removing unsolvable equations from an underdetermined system


Question:

My program tries to solve a system of linear equations. In order to do that, it assembles a matrix coeff_matrix and a vector value_vector, and uses Eigen to solve the system like this:

Eigen::VectorXd sol_vector = coeff_matrix
        .colPivHouseholderQr().solve(value_vector);

The problem is that the system can be both over- and under-determined. In the former case, Eigen gives either a correct or an incorrect solution, and I check the solution using coeff_matrix * sol_vector - value_vector.

However, please consider the following system of equations:

a + b - c     =  0
        c - d =  0
        c     = 11
      - c + d =  0

In this particular case, Eigen solves the latter three equations correctly but also makes up values for a and b, even though they are not uniquely determined.

What I would like is for only the equations that have a unique solution to be solved, while the remaining ones (the first equation here) are retained in the system.

In other words, I'm looking for a method to find out which equations in a given system can be solved at this point, and which cannot because they would have more than one solution.

Could you suggest any good way of achieving that?

Edit: please note that in most cases the matrix won't be square. I've added one more row here just to note that over-determination can happen too.

Answer 1:

I think what you want is the singular value decomposition (SVD), which will give you exactly what you ask for. After the SVD, "the equations which have only one solution" will be solved, and the solution is given by the pseudoinverse. The SVD will also give you the null space (where the infinitely many solutions come from) and the left null space (where inconsistency comes from, i.e. no solution).
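
For reference, a minimal sketch of how this could look with Eigen's SVD classes; coeff_matrix and value_vector are the names from the question, and the zero tolerance is an assumption that may need tuning:

#include <Eigen/Dense>

// SVD of the coefficient matrix; full U and V are requested so the
// null space and left null space can be read off afterwards.
Eigen::BDCSVD<Eigen::MatrixXd> svd(coeff_matrix,
        Eigen::ComputeFullU | Eigen::ComputeFullV);

// Least-squares / pseudoinverse solution of the system.
Eigen::VectorXd sol_vector = svd.solve(value_vector);

// Count the numerically non-zero singular values (the rank).
const double tol = 1e-10;  // assumed tolerance
Eigen::Index rank = 0;
while (rank < svd.singularValues().size()
        && svd.singularValues()(rank) > tol)
    ++rank;

// The last (cols - rank) columns of V span the null space; a variable
// is uniquely determined only if its row in this block is zero.
Eigen::MatrixXd null_space =
        svd.matrixV().rightCols(coeff_matrix.cols() - rank);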



Answer 2:

Based on the SVD comment, I was able to do something like this:

Eigen::FullPivLU<Eigen::MatrixXd> lu = coeff_matrix.fullPivLu();

// A particular solution of the system (meaningful only if one exists).
Eigen::VectorXd sol_vector = lu.solve(value_vector);
// Sum of the null-space (kernel) basis vectors, row by row.
Eigen::VectorXd null_vector = lu.kernel().rowwise().sum();

AFAICS, the null_vector rows corresponding to uniquely determined variables are 0, while the ones corresponding to indeterminate variables are 1. I can reproduce this throughout all my examples with the default threshold Eigen has.

However, I'm not sure if I'm doing something correct or just noticed a random pattern.
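
A possibly more robust variant of the same idea (only a sketch, with coeff_matrix and value_vector as in the question and an assumed tolerance): instead of summing the kernel columns, which could in principle cancel each other out, test each row of the kernel basis directly. A variable is uniquely determined exactly when its row of lu.kernel() is numerically zero.

#include <Eigen/Dense>
#include <iostream>

Eigen::FullPivLU<Eigen::MatrixXd> lu = coeff_matrix.fullPivLu();
Eigen::VectorXd sol_vector = lu.solve(value_vector);
Eigen::MatrixXd kernel = lu.kernel();  // columns span the null space

const double tol = 1e-10;  // assumed tolerance
for (Eigen::Index i = 0; i < kernel.rows(); ++i) {
    // Variable i is uniquely determined iff its kernel row is ~zero;
    // only then can sol_vector(i) be trusted. Otherwise variable i
    // should stay in the remaining system.
    if (kernel.row(i).norm() < tol)
        std::cout << "x" << i << " = " << sol_vector(i) << "\n";
}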



Answer 3:

What you need is to calculate the determinant of your system. If the determinant is 0, then you have an infinite number of solutions. If the determinant is very small, a solution exists, but I wouldn't trust the solution found by a computer (it will suffer from numerical instabilities).

Here is a link to what is the determinant and how to calculate it: http://en.wikipedia.org/wiki/Determinant

Note that Gaussian elimination should also work: http://en.wikipedia.org/wiki/Gaussian_elimination With this method, you end up with rows of 0s if there are an infinite number of solutions.
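
As a small illustration with Eigen (a sketch only; the determinant is meaningful only for a square coeff_matrix, and the threshold is an assumption):

#include <Eigen/Dense>
#include <cmath>

// Determinant check: only valid if coeff_matrix is square.
double det = coeff_matrix.determinant();
bool nearly_singular = std::abs(det) < 1e-10;  // assumed threshold

// The rank from a rank-revealing decomposition equals the number of
// non-zero rows left after Gaussian elimination.
Eigen::Index rank = coeff_matrix.fullPivLu().rank();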

Edit

In case the matrix is not square, you first need to extract a square matrix. There are two cases:

  1. You have more variables than equations: then you have either no solution, or an infinite number of them.
  2. You have more equations than variables: in this case, find a square sub-matrix with a non-zero determinant. Solve for this matrix and check the solution. If the solution doesn't fit, it means you have no solution. If the solution fits, it means the extra equations were linearly dependent on the extracted ones.

In both cases, before checking the dimensions of the matrix, remove rows and columns that contain only 0s.

As for Gaussian elimination, it should work directly with non-square matrices. However, this time, you should check that the number of non-empty rows (i.e. rows with some non-zero values) is equal to the number of variables. If it's less, you have an infinite number of solutions, and if it's more, you don't have any solution.
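
The same counting can be done without performing the elimination by hand, by comparing ranks (a sketch using the names from the question): there is a unique solution if rank(A) = rank([A|b]) = number of variables, infinitely many solutions if the two ranks are equal but smaller, and no solution if rank([A|b]) > rank(A).

#include <Eigen/Dense>

// Augmented matrix [A | b].
Eigen::MatrixXd augmented(coeff_matrix.rows(), coeff_matrix.cols() + 1);
augmented << coeff_matrix, value_vector;

Eigen::Index rank_A   = coeff_matrix.fullPivLu().rank();
Eigen::Index rank_Ab  = augmented.fullPivLu().rank();
Eigen::Index num_vars = coeff_matrix.cols();

if (rank_Ab > rank_A) {
    // Inconsistent system: no solution.
} else if (rank_A == num_vars) {
    // Unique solution.
} else {
    // rank_A < num_vars: infinitely many solutions.
}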