I have been looking for a C++ library to solve an underdetermined system like the one below.
Here q is a vector; w, x, y, z are variables; and a, b, c, d are constants.
argmin_q MAX(q) - MIN(q)
s.t.
q[1] = a - w - y
q[2] = b - w - z
q[3] = c - x - y
q[4] = d - x - z
It would be very useful to find a solver, an algorithm, or the like. I found a couple of libraries able to solve underdetermined systems, but in addition I need to minimize the distance between the coefficients.
Thank you in advance
altober
Though you may not realize it, your question is probably more a linear-algebra question than a programming question, though it does have a programming dimension to it.
In this answer, I assume that your actual problem could be much larger than the N = 8 sketch you have posed in illustration. The mathematical discussion comes first. Sample code follows.
The nature of linear-algebra problems seems to be that they are either practically treatable by general linear-algebra techniques or they are not. General linear-algebra techniques such as the L-U factorization and the singular-value decomposition have been thoroughly developed since the 1960s and have a firm theoretical foundation. The trouble is that such general techniques tend to cost O(N*N*N) in computation, which is to say that making your vector 10 times longer demands 1000 times the computer time. Beyond some maximum practical size, your problem grows computationally intractable -- at least by general linear-algebra techniques.
IF YOUR PROBLEM IS EXTREMELY LARGE
Fortunately, there exist a bewildering variety of special-purpose techniques to reduce the O(N*N*N) problem to an O(N*N) problem. These techniques have mostly emerged since the 1960s and are an area of active research. Unfortunately, to know which technique to apply, and how, requires an intimate understanding of the problem and a fairly good grasp of matrix mathematics. At O(N*N*N), general techniques are known. At O(N*N), there are no general techniques, only lots of special-purpose techniques that work on restricted kinds of systems. You can often find a suitable restriction, if you work at it long enough, but it is never as simple as dumping your matrix into some general-purpose solver. Not at O(N*N).
The best book I have read on the subject is Henk A. van der Vorst's Iterative Krylov Methods for Large Linear Systems. Happily, the book's price is reasonable. Of course, you will need a firm grounding in general linear-algebra techniques before tackling van der Vorst. I suggest Joel N. Franklin's short 1968 book Matrix Theory, available in a cheap 1993 paperback from Dover. (Van der Vorst does not happen to mention Kapp's and Brown's clever, simple and often effective Method of Ordered Multiple Interactions, which I have repeatedly found useful, so let me mention it here, so that you can look it up if you need it. Disclosure: I know Brown personally and could be partial for this reason; but, still, his and Kapp's technique is a fine one which van der Vorst does omit.)
For reasons I do not fully understand, most of the recent techniques (though not Kapp's and Brown's) seem to prefer to work in real numbers, treating complex numbers as a special case or ignoring them entirely. You have not mentioned whether your numbers are complex, but if they are, then this may limit your options somewhat.
As an intermediate level between Franklin and van der Vorst, if you lack the time or interest to tackle the latter, you should at least look up and acquaint yourself with the method of conjugate gradients.
IF YOUR PROBLEM IS ONLY MODERATELY LARGE
If your problem is only moderately large -- say, N < 20,000 or so -- then your task is much easier. Forget van der Vorst in this case: you don't need him. Depending on whether you want your answer in milliseconds, seconds, minutes or hours (you haven't mentioned which, but it only affects the practical limit on N), you can tolerate O(N*N*N) performance, and in this case Franklin's general techniques from the 1960s are quite robust.
Fast, efficient, thorough and correct, the LAPACK library, and its low-level helper BLAS, are the standard in this area. You should use them.
LAPACK, BLAS AND C++
A colleague of mine and I have each tried various C and C++ interfaces to LAPACK and BLAS. For various reasons, we have found none of them wholly satisfactory, though they do try to be helpful. Each of us ultimately decided to skip the interfaces altogether and to use LAPACK and BLAS directly. This is what I recommend to you, too.
LAPACK and BLAS were meant to be called from FORTRAN 77, not from C++. Still, calling them from C++ is not so hard, and you don't need a C++ interface to do it. In fact, you probably don't want a C++ interface, at least not a generic one prepared by somebody else. LAPACK and BLAS will be happier if you don't smother them in one. (Remember that FORTRAN 77, whose linking facilities though effective were limited, lacked C-style header files.)
The first thing you must know to use LAPACK and BLAS directly is this, and you will be miserable until you understand it: matrices are stored column-major. That is, each column, not each row, of the matrix is represented in memory sequentially as a unit. Thus, a matrix element is stored immediately between the elements above and below it, not immediately between the elements left and right of it. Actually, LAPACK and BLAS are prudent to store matrices this way, for Kernighan and Ritchie, who invented C, never seemed very interested in linear algebra. C is a fine language, but C's default matrix storage convention was probably a mistake from the first. From the mathematician's perspective, LAPACK and BLAS store matrices the way they ought to be stored, column-major; and, if you are writing linear-algebra code then you should store your matrices this way, too. Ignore C's default, row-major convention: it has nothing to do with the conventions of matrix mathematics and you don't want it.
Here is a complete example:
#include <iostream>

extern "C" {
    void dgesv_(
        const int *N,
        const int *NRHS,
        double    *A,
        const int *LDA,
        int       *IPIV,
        double    *B,
        const int *LDB,
        int       *INFO
    );
}

int main()
{
    const int N = 2;

    // Let A = | 3.0 6.1 |
    //         |-1.2 1.7 |
    double A[N*N] = {3.0, -1.2, 6.1, 1.7};

    // Let b = | 5.5 |
    //         | 0.4 |
    double x[N] = {5.5, 0.4};
    // (Why is b named x? Answer: because b and x share
    // the same storage, because Lapack will write x over b.)

    // Solve the linear system A*x = b.
    const int NRHS = 1;
    const int LDA  = N;
    int IPIV[N];
    const int LDB  = N;
    int INFO;
    dgesv_(&N, &NRHS, A, &LDA, IPIV, x, &LDB, &INFO);

    // Uncomment the next line if you wish to see
    // the error code Lapack returns.
    //std::cout << "INFO == " << INFO << "\n";

    // Output x.
    for (int i = 0; i < N; ++i) std::cout << x[i] << "\n";
    return 0;
}
[Why is the function named dgesv_()? Answer: it isn't, at least not in FORTRAN 77, in which it is named DGESV. However, FORTRAN 77 is case-insensitive and, when we translate that to C's linking conventions, we get dgesv_(). At least, that's what my colleague and I get on every Linux, BSD or OSX machine on which we've tried it. On Debian or Ubuntu, the shell command "readelf --symbols /usr/lib/liblapack.so | grep -i DGESV" discovers this. On a Microsoft platform, the C-linked symbol may have some other name: you'll have to investigate if this is relevant to you.]
LAPACK and BLAS are very good at what they do. You should use them.
Whether you store your data in C-style arrays (as the example does) or in C++-style std::valarrays is up to you. LAPACK and BLAS don't care, so long as the storage is column-major.
THE MINIMIZATION OF COEFFICIENTS
You have not given enough information for me to tell exactly what you wish to do with your coefficients, but one suspects that what you may need is the solution to an N-by-(2*N) underdetermined system. Such a problem runs beyond the scope of Franklin's book but, if you have already absorbed Franklin's material, then my own notes on the pseudoinverse may help you.
Obviously, if you were looking for a quick solution, I don't have it. There may be a precanned software package that does precisely what you want, but my experience suggests that the odds are against you in this. There are just too many hundreds of distinct kinds of matrix problems that arise in practice for precanned software to crack. However, LAPACK and BLAS, together with Franklin, van der Vorst and the answer you are now reading, offer all the tools you should need to crack your specific problem.
Good luck.