Is it possible to use commercially available solvers such as Gurobi, CPLEX or Mosek with Gekko? If yes, could anyone give a small example showing how to do it?
Thanks.
The solvers that you referenced are for linear, mixed-integer linear, quadratic, mixed-integer quadratic, and quadratically constrained problems. There is no current interface to them because they can't solve the full range of problems that Gekko requires, such as Nonlinear Programming (NLP) and Mixed-Integer Nonlinear Programming (MINLP). MINLP solvers such as APOPT can solve LP, QP, and MILP problems, but they aren't as fast as Gurobi or CPLEX on MILP problems. It is possible to link new solvers to Gekko, and several proprietary solvers that require a license to activate are already linked. Gurobi and CPLEX both have Python APIs, so I recommend those if you want to use them from Python. More information on publicly available solvers is in the APMonitor documentation.
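If it helps, here is a minimal sketch of how solver selection looks in Gekko today (the solver numbering follows the Gekko/APMonitor options; the tiny model itself is just illustrative):

```python
from gekko import GEKKO

# Minimal sketch: choosing one of the bundled solvers in Gekko
# (1 = APOPT, 2 = BPOPT, 3 = IPOPT per the Gekko documentation).
m = GEKKO(remote=False)                         # solve locally instead of on the public server
x = m.Var(value=1, lb=0, ub=10)
y = m.Var(value=2, lb=0, ub=10, integer=True)   # integer variable makes this an MINLP
m.Equation(x * y >= 4)                          # nonlinear constraint
m.Minimize(x + y)
m.options.SOLVER = 1                            # APOPT handles the mixed-integer nonlinear case
m.solve(disp=False)
print(x.value[0], y.value[0])
```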
I tried this: https://github.com/titu1994/tfdiffeq, but ran into an issue and cannot proceed further: https://github.com/titu1994/tfdiffeq/issues/10
TensorFlow Probability has differentiable ODE solvers here.
You should get used to the TFP solvers quickly because the interface is quite similar to tfdiffeq's.
(But it also has some issues, and I'm having trouble with it too.)
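For reference, a rough sketch of how the TFP ODE interface is typically called (exact keyword names may differ between versions):

```python
import tensorflow as tf
import tensorflow_probability as tfp

# Integrate dy/dt = -y with y(0) = 1 and read the solution at a few times.
def ode_fn(t, y):
    return -y

results = tfp.math.ode.DormandPrince().solve(
    ode_fn,
    initial_time=0.0,
    initial_state=tf.constant([1.0]),
    solution_times=[0.0, 0.5, 1.0, 1.5, 2.0],
)
print(results.states.numpy())   # approximately exp(-t) at the requested times
```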
We used DeepXDE for solving differential equations. (DeepXDE is a framework for solving differential equations, based on TensorFlow.) It works fine, but the accuracy of the solution is limited, and optimizing the meta-parameters did not help. Is this limitation a well-known problem? How can the accuracy of the solutions be increased? We used the Adam optimizer; are there optimizers that are more suitable for numerical problems if high precision is needed?
(I think the problem is not specific to any concrete equation, but I can add an example if needed.)
There are actually some methods that could increase the accuracy of the model:
Random Resampling (see the sketch at the end of this answer)
Residual Adaptive Refinement (RAR): https://arxiv.org/pdf/1907.04502.pdf
They even have a worked example in their GitHub repository:
https://github.com/lululxvi/deepxde/blob/master/examples/Burgers_RAR.py
Also, you could try using a different architecture such as Multi-Scale Fourier NNs. They seem to outperform PINNs in cases where the solution contains lots of "spikes".
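Here is a hedged sketch of the random-resampling idea on a toy 1D Poisson problem. The API names vary by DeepXDE version (the resampling callback has appeared as PDEResidualResampler in older releases and PDEPointResampler in newer ones; dde.nn and dde.icbc were dde.maps and dde.DirichletBC in older releases), so adjust to the version you have installed:

```python
import numpy as np
import deepxde as dde

# Toy problem: u'' + pi^2 sin(pi x) = 0 on [-1, 1], u(-1) = u(1) = 0,
# whose exact solution is u(x) = sin(pi x). Assumes the TensorFlow backend.
def pde(x, u):
    du_xx = dde.grad.hessian(u, x)
    return du_xx + np.pi ** 2 * dde.backend.tf.sin(np.pi * x)

geom = dde.geometry.Interval(-1, 1)
bc = dde.icbc.DirichletBC(geom, lambda x: 0, lambda x, on_boundary: on_boundary)
data = dde.data.PDE(geom, pde, bc, num_domain=64, num_boundary=2)
net = dde.nn.FNN([1, 50, 50, 1], "tanh", "Glorot uniform")

model = dde.Model(data, net)
model.compile("adam", lr=1e-3)

# Random resampling: redraw the collocation points periodically during training.
# (Callback name and the iterations/epochs keyword depend on the DeepXDE version.)
resampler = dde.callbacks.PDEPointResampler(period=100)
model.train(iterations=10000, callbacks=[resampler])

# A common follow-up when high precision is needed: refine with L-BFGS after Adam.
model.compile("L-BFGS")
model.train()
```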
I am looking for optimization modelling libraries in Python, like CVXPY and Pyomo, with support for complex variables (variables with real and imaginary parts) and nonlinear problems. CVXPY supports complex variables but doesn't support nonlinear functions in constraints. On the other hand, Pyomo supports nonlinear problems but doesn't support complex variables.
In short: I am working on a large-scale nonlinear and nonconvex optimization problem with some complex variables, and I am looking for something like CVXPY for these types of problems.
Any suggestions?
Thanks
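Not an answer about a specific library, but one common workaround for the Pyomo limitation mentioned above is to split each complex variable into its real and imaginary parts and write the nonlinear expressions in terms of those. A minimal illustrative sketch (the constraint and objective here are made up):

```python
import pyomo.environ as pyo

# Illustration only: model a complex variable z = zr + j*zi by its
# real and imaginary parts, since Pyomo variables must be real-valued.
m = pyo.ConcreteModel()
m.zr = pyo.Var(initialize=1.0)   # Re(z)
m.zi = pyo.Var(initialize=0.0)   # Im(z)

# Example nonlinear (nonconvex) constraint: |z|^2 == 4, i.e. z lies on a circle.
m.mag = pyo.Constraint(expr=m.zr**2 + m.zi**2 == 4)

# Arbitrary illustrative objective written in terms of the real/imaginary parts.
m.obj = pyo.Objective(expr=(m.zr - 1)**2 + (m.zi - 1)**2, sense=pyo.minimize)

# Any NLP solver reachable from Pyomo would work here, e.g. Ipopt if installed:
# pyo.SolverFactory("ipopt").solve(m)
```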
Knowing that TensorFlow is good for working with matrices, would I be able to use TensorFlow to create a cellular automaton? And would this offer a significant speedup over just coding it in Python?
Are there any tutorials or websites that could point me in the right direction to use Tensorflow for more general purpose computing than machine learning (for example, simulations)?
If so, could someone help point me in the right direction to the type of Tensorflow commands I would need to learn to make this program? Thanks!
A TensorFlow implementation is likely to offer an improvement in execution time, especially if executed on a GPU, since cellular automata can be updated in parallel. See: https://cs.stackexchange.com/a/320/67726.
A starting point for TensorFlow in general might be the official guide and documentation, which do go beyond just machine learning. Also available are two tutorials on non-ML examples: Mandelbrot Set, Partial Differential Equations.
While TensorFlow is usually mentioned in the context of machine learning, it is worth noting that:
"TensorFlow™ is an open source software library for high performance numerical computation. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices."
Edit: here's an implementation and a tutorial about Conway's Game of Life using TF.
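To give a feel for what such an implementation looks like, here is a rough sketch (not the linked tutorial's code) of one Game of Life step, using a convolution to count live neighbors; this per-cell update is exactly the kind of work that maps well onto TensorFlow/GPU execution:

```python
import tensorflow as tf

def life_step(board):
    """One Game of Life update; board is a float32 [H, W] tensor of 0.0/1.0."""
    # 3x3 kernel of ones with a zero center counts the live neighbors of each cell.
    kernel = tf.reshape(tf.constant([[1., 1., 1.],
                                     [1., 0., 1.],
                                     [1., 1., 1.]]), [3, 3, 1, 1])
    x = board[tf.newaxis, :, :, tf.newaxis]                    # NHWC layout for conv2d
    neighbors = tf.nn.conv2d(x, kernel, strides=1, padding="SAME")[0, :, :, 0]
    # Conway's rules: alive next step if exactly 3 neighbors, or alive now with 2.
    alive = tf.logical_or(
        tf.equal(neighbors, 3.0),
        tf.logical_and(tf.equal(board, 1.0), tf.equal(neighbors, 2.0)))
    return tf.cast(alive, tf.float32)

board = tf.cast(tf.random.uniform([64, 64]) > 0.5, tf.float32)  # random initial state
for _ in range(10):
    board = life_step(board)
```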
Nvidia, for example, has CUBLAS, which promises a 7-14x speedup. Naively, this is nowhere near the theoretical throughput of any of Nvidia's GPU cards. What are the challenges in speeding up linear algebra on GPUs, and are there faster linear algebra routines already available?
As far as I know, CUBLAS is the fastest linear algebra implementation available for Nvidia GPUs. If you require LAPACK functionality, there's CULAPACK.
Note that CUBLAS only covers dense linear algebra; for sparse matrices, there's CUSPARSE (also provided as part of the CUDA toolkit).
The speedup greatly depends on the type of data you're operating on, as well as the specific operation you're performing. Some linear algebra operations parallelize very well, and others don't because they're inherently sequential. Optimization of numerical algorithms for parallel architectures is (and has been, for decades) an ongoing area of research -- so the performance of the algorithms is continually improving.
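As a practical aside (my example, not mentioned above): from Python, the usual way to reach cuBLAS without writing CUDA C is through a wrapper such as CuPy, where a dense matrix multiply is dispatched to a cuBLAS GEMM on the GPU:

```python
import numpy as np
import cupy as cp   # requires a CUDA-capable GPU and a matching CuPy build

# Dense GEMM on the GPU; cp.matmul / the @ operator dispatch to cuBLAS.
n = 2048
a = cp.random.random((n, n), dtype=cp.float32)
b = cp.random.random((n, n), dtype=cp.float32)

c = a @ b                           # runs on the GPU via cuBLAS
cp.cuda.Stream.null.synchronize()   # wait for the kernel before timing or reading results

# Same computation on the CPU with NumPy, for a rough comparison.
c_cpu = cp.asnumpy(a) @ cp.asnumpy(b)
print(np.allclose(cp.asnumpy(c), c_cpu, rtol=1e-3, atol=1e-3))
```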