When we need to optimize a function over the positive half-line and only have unconstrained optimization routines, we substitute y = exp(x) or y = x^2, which maps the whole real line onto the positive reals, and optimize over the log or the (signed) square root of the variable instead.
Can we do something similar for linear constraints of the form Ax = b, where x is an n-dimensional vector, A is an (N, n) matrix and b is a vector of length N defining the constraints?
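Concretely, for a single positive variable I mean something like this minimal Python sketch (using scipy.optimize.minimize as the unconstrained routine; the objective f is just a made-up example):
import numpy as np
from scipy.optimize import minimize

def f(y):
    # toy objective defined for y > 0, with its minimum at y = 2
    return (np.log(y) - np.log(2.0)) ** 2

# optimise over x = log(y), which ranges over the whole real line
res = minimize(lambda x: f(np.exp(x[0])), x0=[0.0])
y_opt = np.exp(res.x[0])
print(y_opt)   # close to 2.0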
While, as Erwin Kalvelagen says, this is not always a good idea, here is one way to do it.
Suppose we take the SVD of A, getting
A = U*S*V'
where, since A is N x n:
U is N x N orthogonal,
S is N x n, zero off the main diagonal,
V is n x n orthogonal.
Computing the SVD is not a trivial computation.
We first zero out the elements of S which we think are non-zero just due to noise -- which can be a slightly delicate thing to do.
Then we can find one solution x~ to
A*x = b
as
x~ = V*pinv(S)*U'*b
(where pinv(S) is the pseudo-inverse of S, i.e. transpose it and replace the non-zero diagonal elements by their reciprocals)
Note that x~ is a least-squares solution to the constraints, so we need to check that it is close enough to being a real solution, i.e. that A*x~ is close enough to b -- another somewhat delicate thing. If x~ doesn't satisfy the constraints closely enough you should give up: if the constraints have no solution, neither does the optimisation.
Any other solution to the constraints can be written
x = x~ + sum c[i]*V[i]
where the V[i] are the columns of V corresponding to zero (or absent) singular values -- that is, a basis for the null space of A -- and the c[i] are arbitrary constants. So we can change variables and optimise over the c[i] instead, and the constraints will be satisfied automatically. However, this change of variables can be somewhat irksome!
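Here is a minimal numpy sketch of the whole recipe. The matrix A, the vector b and the objective f are invented purely for illustration, and the two tolerance checks are the 'delicate' steps mentioned above:
import numpy as np
from scipy.optimize import minimize

# toy constraint set: 2 equations in 4 unknowns (A and b invented)
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 3.]])
b = np.array([4., 5.])

U, s, Vt = np.linalg.svd(A)                  # A = U * diag(s) * Vt
tol = 1e-10
rank = int(np.sum(s > tol))                  # zero out singular values we regard as noise

# particular (least-squares) solution x~ = V * pinv(S) * U' * b
x_part = Vt[:rank].T @ (U[:, :rank].T @ b / s[:rank])
assert np.allclose(A @ x_part, b), "constraints look infeasible"

# columns of V spanning the null space of A
N = Vt[rank:].T

def f(x):                                    # invented objective to minimise
    return np.sum((x - 1.0) ** 2)

# optimise over the free coefficients c; A*(x_part + N@c) = b automatically
res = minimize(lambda c: f(x_part + N @ c), x0=np.zeros(N.shape[1]))
x_opt = x_part + N @ res.x
print(x_opt, A @ x_opt)                      # A @ x_opt is (numerically) b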
In going through the exercises of SICP, it defines a fixed point of a function F as a value x that satisfies the equation F(x) = x, found by iterating the function until the value stops changing, for example F(F(F(x))).
The thing I don't understand is how a square root of, say, 9 has anything to do with that.
For example, if I have F(x) = sqrt(9), obviously x=3. Yet, how does that relate to doing:
F(F(F(x))) --> sqrt(sqrt(sqrt(9)))
Which I believe just converges to one:
>>> math.sqrt(math.sqrt(math.sqrt(math.sqrt(math.sqrt(math.sqrt(9))))))
1.0349277670798647
Since sqrt(x) = x when x = 1. In other words, how does finding the square root of a constant have anything to do with finding fixed points of functions?
When calculating the square root of a number, say a, you essentially have an equation of the form x^2 - a = 0. That is, to find the square root of a, you have to find an x such that x^2 = a, or x^2 - a = 0 -- call the latter equation (1). The form given in (1) is an equation of the form g(x) = 0, where g(x) := x^2 - a.
To use the fixed-point method for calculating the roots of this equation, you have to make some subtle modifications to bring it to the form f(x) = x. One way to do this is to rewrite (1) as x = a/x -- call it (2). Now (2) has the form required for solving an equation by the fixed-point method: here f(x) = a/x.
Observe that this method requires the right-hand side to actually involve x: an equation of the form x = sqrt(a), whose right-hand side is a constant, gives you nothing to iterate on and hence can't be solved this way by the fixed-point method.
"The thing I don't understand is how a square root of, say, 9 has anything to do with that. For example, if I have F(x) = sqrt(9), obviously x=3. Yet, how does that relate to doing: F(F(F(x))) --> sqrt(sqrt(sqrt(9)))"
These are standard methods for the numerical calculation of roots of non-linear equations, quite a complex topic on its own and one which is usually covered in engineering courses. So don't worry if you don't get the "hang of it"; the authors probably felt it was a good example of iterative problem solving.
You need to convert the problem f(x) = 0 to a fixed point problem g(x) = x that is likely to converge to the root of f(x). In general, the choice of g(x) is tricky.
If f(x) = x² - a = 0, then you should choose g(x) as follows:
g(x) = 1/2*(x + a/x)
(This choice is based on Newton's method, which is a special case of fixed-point iterations).
To find the square root sqrt(a):
Guess an initial value x_0.
Given a tolerance ε, compute x_{n+1} = 1/2*(x_n + a/x_n) for n = 0, 1, ... until |x_{n+1} - x_n| < ε.
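A minimal Python sketch of that iteration (this is the same average-damped map x -> a/x that SICP arrives at; the numbers below compute sqrt(9) starting from x_0 = 1):
def fixed_point_sqrt(a, x0=1.0, eps=1e-10, max_iter=100):
    # iterate x_{n+1} = (x_n + a/x_n) / 2 until successive iterates agree to within eps
    x = x0
    for _ in range(max_iter):
        nxt = 0.5 * (x + a / x)
        if abs(nxt - x) < eps:
            return nxt
        x = nxt
    raise RuntimeError("did not converge")

print(fixed_point_sqrt(9.0))   # 3.0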
I'm writing a program to do the Newton-Raphson method for n variables (a system of equations) using a DataGridView. My problem is determining the inverse of the Jacobian matrix. I've searched the internet for a solution but really couldn't find one, so if someone can help me I would really appreciate it. Thanks in advance.
If you are asking for a recommendation of a library, that is explicitly off topic in Stack Overflow. However below I mention some algorithms that are commonly used; this may help you to find, or write, what you need. I would, though, not recommend writing something, unless you really want to, as it can be tricky to get these algorithms right. If you do decide to write something I'd recommend the QR method, as the easiest to write, though the theory is a little subtle.
First off do you really need to compute the inverse? If, for example, what you need to do is to compute
x = inv(J)*y
then it's faster and more accurate to treat this problem as
solve J*x = y for x
The methods below all factorise J into other matrices for which this solve can be done easily. A good package that implements the factorisation will also have the code to perform the solution.
If you really, really do need the inverse, often the best way is to solve, one column at a time,
J*K = I for K, where I is the identity matrix
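In numpy terms the distinction looks roughly like this (J and y are invented for illustration):
import numpy as np

J = np.array([[3., 1.],
              [1., 2.]])
y = np.array([9., 8.])

x = np.linalg.solve(J, y)           # preferred: solve J*x = y directly
K = np.linalg.solve(J, np.eye(2))   # only if the inverse itself is really needed:
                                    # this solves J*K = I one column at a time
print(x, np.allclose(J @ K, np.eye(2)))   # [2. 3.] True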
LU decomposition
This may well be the fastest of the algorithms described here but is also the least accurate. An important point is that the algorithm must include (partial) pivoting, or it will not work on all invertible matrices, for example it will fail on a rotation through 90 degrees.
What you get is a factorisation of J into:
J = P*L*U
where P is a permutation matrix,
L lower triangular,
U upper triangular
So having factorised, to solve for x we do three steps, each straightforward, and each of which can be done in place (i.e. all the x's can be the same variable):
Solve P*x1 = y for x1
Solve L*x2 = x1 for x2
Solve U*x = x2 for x
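With scipy this might look as follows -- lu_factor computes the pivoted P*L*U factorisation and lu_solve performs the three solves above; the example matrix is the 90-degree rotation mentioned earlier, which LU without pivoting would choke on:
import numpy as np
from scipy.linalg import lu_factor, lu_solve

J = np.array([[0., -1.],
              [1.,  0.]])          # rotation through 90 degrees: needs pivoting
y = np.array([2., 3.])

lu, piv = lu_factor(J)             # factorise J = P*L*U once ...
x = lu_solve((lu, piv), y)         # ... then the three solves are cheap per right-hand side
print(x)                           # [ 3. -2.]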
QR decomposition
This may be somewhat slower than LU but is more accurate. Conceptually this factorises J into
J = Q*R
where Q is orthogonal and R is upper triangular. However, as it is usually implemented, you in fact pass y as well as J to the routine, and it returns R (in J) and Q'*y (in the passed y), so to solve for x you just need to solve
R*x = Q'*y
which, given that R is upper triangular, is easy.
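A rough numpy/scipy sketch, with the factorisation and the Q'*y step written out explicitly rather than packed into a single routine as described above (J and y invented):
import numpy as np
from scipy.linalg import solve_triangular

J = np.array([[3., 1.],
              [1., 2.]])
y = np.array([9., 8.])

Q, R = np.linalg.qr(J)               # J = Q*R, Q orthogonal, R upper triangular
x = solve_triangular(R, Q.T @ y)     # solve R*x = Q'*y by back-substitution
print(x, np.allclose(J @ x, y))      # [2. 3.] True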
SVD (Singular value decomposition)
This is the most accurate, but also the slowest. Moreover unlike the others you can make progress even if J is singular (you can compute the 'generalised inverse' applied to y).
I recommend reading up on this, but advise against implementing it yourself.
Briefly you factorise J as
J = U*S*V'
where U and V are orthogonal and S diagonal.
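A rough numpy sketch of using the SVD this way, including the thresholding that gives the generalised (pseudo-)inverse when J is singular or nearly so (J, y and the tolerance are invented):
import numpy as np

J = np.array([[3., 1.],
              [1., 2.]])
y = np.array([9., 8.])

U, s, Vt = np.linalg.svd(J)                                      # J = U*S*V'
tol = 1e-12
s_inv = np.divide(1.0, s, out=np.zeros_like(s), where=s > tol)   # drop negligible singular values
x = Vt.T @ (s_inv * (U.T @ y))                                   # generalised inverse applied to y
print(x, np.allclose(J @ x, y))                                  # [2. 3.] True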
There are, of course, many other ways of solving this problem. For example, if your matrices are very large (dimension in the thousands), it may be faster to use an iterative method, particularly if they are sparse (lots of zeroes).
For this question, a region is a subset of Z^d defined by finitely many linear inequalities with integer coefficients, where Z^d is the set of d-tuples of integers. For example, the set of pairs (x, y) of non-negative integers with 2x+3y >= 10 constitutes a region with d=2 (non-negativity just imposes the additional inequalities x>=0 and y>=0).
Question: is there a good way, using integer programming (or something else?), to check if one region is contained in a union of finitely many other regions?
I know one way to check containment, which I describe below, but I'm hoping someone may be able to offer some improvements, as it's not too efficient.
Here's the way I know to check containment. First, integer programming libraries can directly check if a region is empty: in integer programming terminology (as I understand it), emptiness of a region corresponds to infeasibility of a model. I have coded up something using the gurobi library to check emptiness, and it seems to work well in practice for the kind of regions I care about.
Suppose now that we want to check if a region X is contained in another region Y (a special case of the question). Let Z be the intersection of X with the complement of Y. Then X is contained in Y if and only if Z is empty. Now, Z itself is not a region in my sense of the word, but it is a union of regions Z_1, ..., Z_n, where n is the number of inequalities used to define Y. We can check if Z is empty by checking that each of Z_1, ..., Z_n is empty, and we can do this as described above.
The general case can be handled in exactly the same way: if Y is a finite union of regions Y_1, ..., Y_k then Z is still a finite union of regions Z_1, ..., Z_n, and so we just check that each Z_i is empty. If Y_i is defined by m_i inequalities then n = m_1 * m_2 * ... * m_k.
So to summarize, we can reduce the containment problem to the emptiness problem, which the library can solve directly. The issue is that we may have to solve a very large number of emptiness problems to solve containment (e.g., if each Y_i is defined by only two inequalities then n = 2^k grows exponentially with k), and so this may take a lot of time.
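For what it's worth, here is a sketch of that reduction using gurobipy. The (coefficients, bound) representation of a region and the helper names are my own, and whether Gurobi reports a plain INFEASIBLE (rather than INF_OR_UNBD) for a pure feasibility model like this is worth double-checking:
import itertools
import gurobipy as gp
from gurobipy import GRB

def region_is_empty(ineqs, d):
    # is {x in Z^d : a.x >= b for every (a, b) in ineqs} empty?
    m = gp.Model()
    m.Params.OutputFlag = 0
    x = m.addVars(d, vtype=GRB.INTEGER, lb=-GRB.INFINITY)
    for a, b in ineqs:
        m.addConstr(gp.quicksum(a[j] * x[j] for j in range(d)) >= b)
    m.optimize()
    return m.Status == GRB.INFEASIBLE

def contained_in_union(X, Ys, d):
    # is region X contained in the union of the regions in Ys?
    # each region is a list of (a, b) pairs meaning a.x >= b;
    # pick one inequality from each Y_i, negate it (a.x >= b becomes a.x <= b-1,
    # i.e. -a.x >= 1-b over the integers), intersect with X and test emptiness
    for choice in itertools.product(*Ys):
        Z = list(X) + [([-c for c in a], 1 - b) for a, b in choice]
        if not region_is_empty(Z, d):
            return False
    return True

# the region from the question: x >= 0, y >= 0, 2x + 3y >= 10
X = [([1, 0], 0), ([0, 1], 0), ([2, 3], 10)]
Y1 = [([1, 1], 2)]                        # the region x + y >= 2
print(contained_in_union(X, [Y1], d=2))   # True: every such integer point has x + y >= 4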
You can't really expect a simple answer. Suppose that A is defined by all constraints of the form 0 <= x_i <= 1. A can be thought of as the collection of all possible rows of a truth table. Given any logical expression of the form e.g. x or (not y) or z, you can express it as a linear inequality
such as x + (1-y) + z >= 1 (along with the 0-1 constraints). Using this approach, any Boolean formula in conjunctive normal form (CNF) can be expressed as a region in Z^n. If A is defined as above and B_1, B_2, ...., B_k is a list of regions corresponding to CNFs then A is contained in the union of the B_i if and only if the disjunction of those CNFs is a tautology. But tautology-checking is a canonical example of an NP-complete problem.
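For concreteness, the clause-to-inequality step might look like this hypothetical helper (DIMACS-style literals, producing (coefficients, bound) pairs like those in the sketch in the question, to be combined with the 0-1 box constraints):
def clause_to_inequality(clause, d):
    # DIMACS-style clause over x_0 .. x_{d-1}: literal +j means x_{j-1}, -j means (not x_{j-1});
    # returns (a, b) meaning a.x >= b, to be combined with the 0 <= x_i <= 1 constraints
    a = [0] * d
    negated = 0
    for lit in clause:
        i = abs(lit) - 1
        if lit > 0:
            a[i] += 1
        else:
            a[i] -= 1
            negated += 1
    return a, 1 - negated

# (x0 or not x1 or x2) becomes x0 + (1 - x1) + x2 >= 1, i.e. x0 - x1 + x2 >= 0
print(clause_to_inequality([1, -2, 3], d=3))   # ([1, -1, 1], 0)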
None of this is to say that it can't be usefully reduced to ILP (which is itself NP-complete). I don't see any direct way to do so, though I suspect that some of the techniques used to identify redundant constraints would be relevant.
How do you impose a constraint that all values in a vector you are trying to optimize for are greater than zero, using fmincon()?
According to the documentation, I need some parameters A and b, where A*x ≤ b, but I think if I make A a row vector of -1's and b = 0, then I will only have constrained the sum of x to be non-negative, instead of each element of x being greater than 0.
Just in case you need it, here is my code. I am trying to optimize over a vector (x) such that the (componentwise) product of x and a matrix (called multiplierMatrix) makes a matrix for which the sum of the columns is x.
function [sse] = myfun(x) % nested function; multiplierMatrix and expectedAnswer come from the enclosing workspace
    bigMatrix = repmat(x,1,120) .* multiplierMatrix; % multiply each column of multiplierMatrix elementwise by x
    answer = sum(bigMatrix,1)';                      % column sums, as a column vector
    sse = sum((expectedAnswer - answer).^2);         % sum of squared errors
end
xGuess = ones(120,1);
[sse xVals] = fmincon(@myfun,xGuess,???);
Let me know if I need to explain my problem better. Thanks for your help in advance!
You can use the lower bound:
xGuess = ones(120,1);
lb = zeros(120,1);
[sse xVals] = fmincon(@myfun,xGuess, [],[],[],[], lb);
Note that xVals and sse should probably be swapped (if their names mean anything): fmincon returns the solution first and the objective value second.
The lower bound lb means that elements in your decision variable x will never fall below the corresponding element in lb, which is what you are after here.
The empties ([]) indicate you're not using the linear constraints (A, b, Aeq, beq), only the lower bounds lb.
Some advice: fmincon is a pretty advanced function. You'd better memorize the documentation on it, and play with it for a few hours, using many different example problems.
I have two large square sparse matrices, A & B, and need to compute the following: A * B^-1 in the most efficient way. I have a feeling that the answer involves using scipy.sparse, but can't for the life of me figure it out.
After extensive searching, I have run across the following thread: Efficient numpy / lapack routine for product of inverse and sparse matrix? but can't figure out what the most efficient way would be.
Someone suggested using the LU decomposition which is built into the sparse module of scipy, but when I try LU on a sample matrix it says the result is singular (although when I just compute A * B^-1 directly I get an answer). I have also heard someone suggest using linalg.spsolve(), but I can't figure out how to use it here, as it requires a vector as the second argument.
If it helps, once I have the solution s.t. A * B^-1 = C, I only need to know the value of one row of the matrix C. The matrices will be roughly 1000x1000 to 1500x1500.
Actually 1000x1000 matrices are not that large. You can compute the inverse of such a matrix using numpy.linalg.inv(B) in less than 1 second on a modern desktop computer.
But you can be much more efficient if you rewrite your problem taking into account the fact that you only need one row of C (this is actually very often the case).
Let us write d_i = [0 0 0 ... 0 1 0 ... 0], the vector whose only non-zero entry is a 1 in the i-th position.
You can write, if ^t denotes the transpose:
AB^-1 = C <=> A = CB <=> A^t = B^t C^t
For the i-th row :
A^t d_i = B^t C^t d_i <=> a_i = B^t c_i
So for each row you have a linear inverse problem, which can be solved using numpy.linalg.solve:
ci = np.linalg.solve(B.T, a[i])
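If B is stored as a scipy.sparse matrix, the same one-row solve with the sparse solver might look like this (a tiny invented example; in practice A and B would be your ~1000x1000 sparse matrices):
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

A = sp.csr_matrix(np.array([[1., 0., 2.],
                            [0., 3., 0.],
                            [4., 0., 5.]]))
B = sp.csc_matrix(np.array([[2., 0., 1.],
                            [0., 1., 0.],
                            [0., 0., 3.]]))

i = 1                                    # the one row of C = A * B^-1 that is needed
a_i = A.getrow(i).toarray().ravel()      # i-th row of A as a dense vector
c_i = spsolve(B.T.tocsc(), a_i)          # solves B^t * c_i = a_i
print(c_i)                               # [0. 3. 0.], the i-th row of A @ inv(B.toarray())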