Alternative to SWI-Prolog's clpq library for solving simplex optimization

Excuse me if this is the wrong place to ask.
I have been using SWI-Prolog's clpq library to solve simplex problems. I find the syntax pretty simple and expressive. It looks like this:
:- use_module(library(clpq)).

main(U, V, W) :-
    { 0 =< U, U =< 1,
      0 =< V, V =< 1,
      0 =< W, W =< 1 },
    maximize(U + V - W).
No need to convert into any special format; you just type your constraints and the objective function. Great, but it has come to my attention that clpq has bugs and is unmaintained, so I lack confidence in it.
So I was wondering whether someone knows something open-source and equally simple, without the bugs? The best I have found so far is the GNU Linear Programming Kit (GLPK). What are other people using for experimenting with simplex?
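For comparison, the same toy LP can be solved with SciPy's linprog (one of the open-source options in this space). This is only a sketch of the workflow, not a drop-in clpq replacement; note that linprog minimizes, so the objective is negated:

```python
# Sketch: the toy LP from the question, solved with SciPy's linprog.
# linprog minimizes, so negate the objective to maximize U + V - W.
from scipy.optimize import linprog

c = [-1.0, -1.0, 1.0]               # minimize -U - V + W
bounds = [(0, 1), (0, 1), (0, 1)]   # 0 =< U, V, W =< 1

res = linprog(c, bounds=bounds)
U, V, W = res.x
print(U, V, W, -res.fun)            # optimum at U = V = 1, W = 0, objective 2
```

The trade-off versus clpq is exactly the one the question complains about: the constraints must first be rewritten in matrix/bounds form rather than typed as-is.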

For the archive: the simplex implementation in Maxima (http://maxima.sourceforge.net/) is very good.

Related

Does any one know how to solve the following equation?

While reading this paper http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1976ApJ...209..214B&data_type=PDF_HIGH&whole_paper=YES&type=PRINTER&filetype=.pdf
I tried to solve eq. (49) numerically. It seems to be a Fokker-Planck equation, but I find that the finite difference method doesn't work; it's unstable.
Does anyone know how to solve it?
Computational Science Stack Exchange is where you can ask and hope for an answer, or you could try its physics cousin. The equation you quote is an integro-differential equation, fairly non-linear... a Fokker-Planck-looking equation, but definitely not the typical Fokker-Planck.
What you can try is to discretize the space part of the function g(x,t) using finite differences or finite elements. After all, 0 < x < x_max and you have boundary conditions. You also have to discretize the corresponding integration, so maybe finite elements are more appropriate. Finite elements means you can write g(x,t) as a series over a well-chosen basis of compactly supported, simple enough functions B_j(x), j = 1...N, on the interval [0, x_max]:
g(x,t) = sum_{j=1}^{N} g_j(t) * B_j(x)
That will turn your function into a (large) vector g_j(t) = g(x_j, t), for j = 1, 2, ..., N. As a result, you will obtain a non-linear system of ODEs
dg_j(t)/dt = Q_j(g_1(t), g_2(t), ..., g_N(t)),
j = 1 ... N
After that, use something like Runge-Kutta to integrate the ODE system numerically.
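The discretize-then-integrate recipe above (the "method of lines") can be sketched as follows. As a stand-in problem this uses a plain heat equation dg/dt = d²g/dx² with fixed boundaries, NOT the paper's integro-differential equation; the structure (spatial finite differences feeding a Runge-Kutta ODE solver) is the point:

```python
# Sketch of the method of lines: discretize space with finite differences,
# then integrate the resulting ODE system with a Runge-Kutta method.
# Stand-in problem: heat equation dg/dt = d2g/dx2, g(0)=g(x_max)=0.
import numpy as np
from scipy.integrate import solve_ivp

N = 50
x_max = 1.0
x = np.linspace(0.0, x_max, N)
dx = x[1] - x[0]

def rhs(t, g):
    dgdt = np.zeros_like(g)
    # interior points: second-order central difference for d2g/dx2
    dgdt[1:-1] = (g[2:] - 2.0 * g[1:-1] + g[:-2]) / dx**2
    # boundary values stay fixed at zero
    return dgdt

g0 = np.sin(np.pi * x)                    # initial profile satisfying the BCs
sol = solve_ivp(rhs, (0.0, 0.1), g0, method="RK45")
print(sol.y[:, -1].max())                 # the profile decays over time
```

For the actual equation, rhs would also have to evaluate the discretized integral term at each step, which is where a finite-element quadrature would slot in.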

Should T be a parameter, a function, or what?

I'm new here and I don't really know how to phrase my question precisely. I have to prepare code that performs an update like x1 = x0 + t*e, which in practice looks like:
x1 = [0.5, 1] + [0, t]
x1 = [0.5, 1+t]
How should I declare t to make this work? I mean, t has to remain in the expression the whole time, to make it possible to calculate the roots of a quadratic function a few steps later.
This would be hard to implement in a general-purpose programming language, because you need t to stay "symbolic" so you can do algebraic manipulations with it. You should look into a Computer Algebra System (CAS), because those are specifically designed to handle symbolic computations. Implementing what you describe would be quick and easy in a CAS.
There is well-known (and expensive, proprietary) CAS software like Mathematica or Matlab. If you are working in C++ or Python, there are SymbolicC++ and SymPy, which integrate well with each of them respectively. See Wikipedia for a list of CAS software.
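To make the advice concrete, here is a sketch in SymPy of exactly the asker's situation: t stays symbolic through the update x1 = x0 + t*e, and a quadratic in t is solved a few steps later. The particular quadratic (|x1|² = 2) is an illustrative stand-in, not the asker's actual function:

```python
# Sketch in SymPy: keep t symbolic through the update x1 = x0 + t*e.
import sympy as sp

t = sp.symbols('t')
x0 = sp.Matrix([sp.Rational(1, 2), 1])
e = sp.Matrix([0, 1])

x1 = x0 + t * e                 # Matrix([1/2, 1 + t]); t stays symbolic
print(x1.T)

# A few steps later, solve a quadratic involving t, e.g. |x1|^2 = 2:
roots = sp.solve(sp.Eq(x1.dot(x1), 2), t)
print(roots)
```

Everything downstream of x1 (dot products, norms, substitutions) keeps carrying t along, which is the behavior the question is after.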

backtracking line search parameter

I am reading/practicing a bit with optimization using Nocedal & Wright, and I got to the simple backtracking algorithm, where if d is my line direction and a is the step size, the algorithm looks for a such that
f(x + a*d) <= f(x) + c*a*∇f(x)·d
for some 0 < c < 1. They advise using a very small c, on the order of 10^-4.
That seemed very odd to me, as it is a very loose demand.
I did some experimenting with c = 0.3 and it seemed to work much better than the suggested 10^-4 (for a simple quadratic problem and steepest descent).
Any intuition as to why such a low value should work, and why it didn't do well for me?
Thanks.
∇ f() may have completely different scales for different problems;
one stepsize cannot fit all.
Consider f(x) = sin( ω . x ): the right c will depend on ω,
which may be on the order of 1, or 1e-6, or ...
Thus it's a good idea to scale ∇ f() to about norm 1, then play with c.
(People who recommend "c = ...", please describe your problem size and scales.)
Add some noise to your quadratic, see what happens as you increase the noise.
Try quadratic + noise in 2d, 10d.
In machine learning there seems to be quite a lot of folklore on c, a.k.a. the learning rate; google
"learning-rate" on stackexchange.com,
also "gradient-descent step-size"
and "adagrad adaptive gradient".

define / declare a variable in Scilab

I would like to ask how I can define/declare a variable in Scilab. Some PDFs I read say that I can just type it in and Scilab will take care of the declaration. Not so. I want to set up a matrix equation of something like:
Ax + By + Cz = D
Mx + Ny + Pz = E
Rx + Sy + Tz = F
And then I want to get the general values of x, y, z in terms of A, B, C, D, E, F, M, N, P, R, S, T. I remember this being possible in Matlab. Later on, I want to plug in values to get actual numbers. Please help.
Scilab is much more oriented toward numerical computation than symbolic algebra, but you can still do it.
In your case you should first put the system in the form M1*x = M2, with M1 upper triangular.
I suggest you look at the help for solve() and trianfml(); there are nice examples there.
After that you can evaluate the resulting expressions, giving any values you want for A, B, C, ..., using evstr()
For symbolic algebra, I recommend Wolfram mathematica, Maple, or Maxima (this last one is open-source like Scilab)
OK, this is what I found. Scilab requires a symbolic math toolbox in order to do symbolic math. The scimax/overload toolbox (by Calixte Denizet) can do this by integrating Maxima with Scilab; however, it is only available on Linux/Unix. Another option is the OVLD/SYM toolbox (by the late Jean-François Magni), which works on Windows (even Win 7); however, support for this toolbox has ceased with its author's passing, and the installation guide on spoken-tutorial.org no longer exists. Thus, I am left with using Maxima by itself to solve symbolic equations and calculus problems.
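For completeness, the symbolic solve the question asks for is a one-liner in SymPy (another open-source CAS, alongside the Maxima route above). This is a sketch; the numeric values plugged in at the end are arbitrary examples:

```python
# Sketch: solve the 3x3 system symbolically, then substitute numbers.
import sympy as sp

A, B, C, D, E, F, M, N, P, R, S, T = sp.symbols('A B C D E F M N P R S T')
x, y, z = sp.symbols('x y z')

sol = sp.solve(
    [sp.Eq(A*x + B*y + C*z, D),
     sp.Eq(M*x + N*y + P*z, E),
     sp.Eq(R*x + S*y + T*z, F)],
    [x, y, z], dict=True)[0]

print(sol[x])        # general expression in A..T

# Later, plug in actual numbers (arbitrary example values):
nums = {A: 1, B: 2, C: 0, D: 3, M: 0, N: 1, P: 1, E: 2, R: 1, S: 0, T: 1, F: 1}
print({v: sol[v].subs(nums) for v in (x, y, z)})
```

This is the same define-generally-then-substitute workflow as the trianfml()/evstr() route described above, just in a system built for symbolic work.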

Is there a vDSP function to do the following operation?

Sorry if this is obvious. I'm just getting into the Accelerate framework and trying to go beyond the very simple stuff. I'm staring down the vDSP reference, but I'm not sure how the following would be phrased, or what it might be called in technical lingo. I want the following operation; what's the best way to do this with vDSP? I'm just having trouble finding it. (In pseudocode, for i from 0 to some N:)
O[i] = A + B * (sum of I[j] for j from 0 to i)
Thanks!
To clarify: these are both vectors of floats and speed is critical.
It turns out this is equivalent to:
vDSP_vrsum(I, 1, &B, O, 1, N);
vDSP_vsadd(O, 1, &A, O, 1, N);
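As a cross-check of what those two calls compute together, here is the same operation in NumPy notation (a sketch; consult the vDSP_vrsum docs for whether its running sum includes the current element, and for the exact handling of the first output):

```python
# Sketch in NumPy of O[i] = A + B * sum(I[0..i]) from the question.
# np.cumsum includes the current element in each partial sum.
import numpy as np

I = np.array([1.0, 2.0, 3.0, 4.0])
A, B = 10.0, 2.0

O = A + B * np.cumsum(I)   # running sum, scaled by B, offset by A
print(O)
```

The structure mirrors the vDSP pair: cumsum plays the role of vDSP_vrsum (with its scale factor B), and adding A plays the role of vDSP_vsadd.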