Which of the Python libraries like PuLP and SciPy can work with the CPLEX solver, if we have huge constraint sets and datasets for supply-chain optimisation?
PuLP is explicitly designed to model (and solve) LPs and it has bindings to use CPLEX under the hood.
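For illustration, a minimal sketch of handing a PuLP model to CPLEX (this assumes the CPLEX command-line executable is installed and on your PATH; otherwise pass its location via the path= argument):

import pulp

# Build a tiny LP; a real supply-chain model would generate
# variables and constraints from your data instead.
prob = pulp.LpProblem("supply_chain", pulp.LpMinimize)
x = pulp.LpVariable("x", lowBound=0)
y = pulp.LpVariable("y", lowBound=0)
prob += 2 * x + 3 * y   # objective: minimize cost
prob += x + y >= 10     # demand constraint
prob.solve(pulp.CPLEX_CMD())  # use CPLEX instead of the default CBC solver
print(pulp.LpStatus[prob.status], x.value(), y.value())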
You can also use SciPy with CPLEX, just not directly: you can organize your data in SciPy data structures and, when it comes to creating constraints, construct them from that data.
Background
In TensorFlow, even when using mutable variables, there appears to be no out option as in NumPy to specify the location in which to store the calculation result. One of the reasons calculations get slower is temporary copies, as explained in From Python to Numpy, and in my understanding re-using an existing buffer would avoid such copies.
Question
I would like to understand why there is no out option equivalent in TensorFlow. For instance, matmul appears to have no such option to specify the output location. Is it because, by design, TensorFlow avoids making temporary copies, or does it always create them?
There appear to be no copy-indexing or view-indexing concepts like those NumPy has. When an array is extracted from an existing array, is it a shallow copy (view), a deep copy, or does it depend?
Please advise where to look for an overview of the internal behavior, similar to From Python to Numpy, which gives good insight into NumPy's internal architecture and performance considerations.
TensorFlow produces computation graphs, which are highly optimized in terms of data flow. For example, if some of the stated computations are not needed to produce the final result, TF will not evaluate them. Moreover, TF compiles procedures to its own low-level operations. Hence the out parameter of NumPy does not make sense in this context.
Thus, TF internally optimizes all steps of the dataflow, and you do not need to provide any such instructions. You can optimize the procedure of getting the result as an algorithm, but not how the algorithm works internally.
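To make the contrast concrete, here is a minimal sketch of what out= buys you in NumPy; TF deliberately has no equivalent, since buffer placement is decided by the graph runtime rather than by the user:

import numpy as np
import tensorflow as tf

a = np.random.rand(1000, 1000)
b = np.random.rand(1000, 1000)
c = np.empty((1000, 1000))

np.matmul(a, b, out=c)  # NumPy: result is written into the preallocated buffer c

@tf.function  # TF: traced into a graph; the runtime manages allocations itself
def product(x, y):
    return tf.matmul(x, y)  # no out= parameter exists

result = product(tf.constant(a), tf.constant(b))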
To get familiar with what a computational graph is, consider reading this guide.
Does anybody have a Tensorflow 2 tf.keras subclass for the L-BFGS algorithm? If one wants to use L-BFGS, one currently has two (official) options:
TF Probability
SciPy optimization
These two options are quite cumbersome to use, especially with custom models. So I am planning to implement a custom subclass of tf.keras.optimizers to use L-BFGS. But before I start, I was curious whether somebody had already tackled this task.
I've implemented an interface between keras and SciPy optimize.
https://github.com/pedro-r-marques/keras-opt
I'm using 'cg' by default but you should also be able to use 'l-bfgs'. Take a look at the unit tests for example usage. I will add documentation as soon as possible.
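For reference, here is a minimal sketch of the general pattern such an interface implements (the helper name fit_with_scipy is made up for illustration and is not the keras-opt API): flatten the model's trainable variables into one vector, let scipy.optimize.minimize drive the full-batch solver, and write the result back.

import numpy as np
import tensorflow as tf
from scipy import optimize

def fit_with_scipy(model, loss_fn, x, y, method="L-BFGS-B"):
    shapes = [tuple(v.shape) for v in model.trainable_variables]
    sizes = [int(np.prod(s)) for s in shapes]

    def set_weights(flat):
        # Scatter the flat parameter vector back into the model variables.
        parts = np.split(flat, np.cumsum(sizes)[:-1])
        for v, p, s in zip(model.trainable_variables, parts, shapes):
            v.assign(p.reshape(s).astype(np.float32))

    def value_and_grad(flat):
        # Full-batch loss and gradient, flattened for scipy.
        set_weights(flat)
        with tf.GradientTape() as tape:
            loss = loss_fn(y, model(x, training=True))
        grads = tape.gradient(loss, model.trainable_variables)
        flat_grad = np.concatenate([g.numpy().ravel() for g in grads])
        return float(loss.numpy()), flat_grad.astype(np.float64)

    x0 = np.concatenate([v.numpy().ravel() for v in model.trainable_variables])
    result = optimize.minimize(value_and_grad, x0, jac=True, method=method)
    set_weights(result.x)
    return result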
Does anybody have a Tensorflow 2 tf.keras subclass for the L-BFGS algorithm?
Yes, here's (yet another) implementation of L-BFGS (and any other scipy.optimize.minimize solver) for your consideration, in case it fits your use case:
https://pypi.org/project/kormos/
https://github.com/mbhynes/kormos
This package has a similar goal to Pedro's answer above, but I would recommend it over the keras-opt package if you run into issues with memory consumption during training. I implemented kormos when trying to build a Rendle-type factorization machine and kept OOMing with other full-batch solver implementations.
These two options are quite cumbersome to use, especially with custom models. So I am planning to implement a custom subclass of tf.keras.optimizers to use L-BFGS. But before I start, I was curious whether somebody had already tackled this task.
Agreed, it's a little cumbersome to fit the signatures of tfp and scipy into the parameter-fitting procedure in keras, because of the way that keras steps in and out of an optimizer that has persistent state between calls, which is not how most [old school?] optimization libraries work.
This is addressed specifically in the kormos package, since IMO it's a pretty common prototyping workflow to alternate between a stochastic optimizer and a full-batch deterministic optimizer, and this should be simple enough to do ad hoc in the Python interpreter.
The package has models that extend keras.Model and keras.Sequential:
kormos.models.BatchOptimizedModel
kormos.models.BatchOptimizedSequentialModel
These can be compiled to be fit with either the standard or the scipy solvers; it would look something like this:
from tensorflow import keras
from kormos.models import BatchOptimizedSequentialModel
# Create an Ordinary Least Squares regressor
model = BatchOptimizedSequentialModel()
model.add(keras.layers.Dense(
units=1,
input_shape=(5,),
))
# compile the model for stochastic optimization
model.compile(loss=keras.losses.MeanSquaredError(), optimizer="sgd")
model.fit(...)
# compile the model for deterministic optimization using scipy.optimize.minimize
model.compile(loss=keras.losses.MeanSquaredError(), optimizer="L-BFGS-B")
model.fit(...)
I have been using TensorRT and TensorFlow-TRT to accelerate the inference of my DL algorithms.
Then I heard of:
JAX https://github.com/google/jax
Trax https://github.com/google/trax
Both seem to accelerate DL, but I am having a hard time understanding them. Can anyone explain them in simple terms?
Trax is a deep learning framework created by Google and used extensively by the Google Brain team. It comes as an alternative to TensorFlow and PyTorch for implementing off-the-shelf state-of-the-art deep learning models, for example Transformers, BERT, etc., principally in the Natural Language Processing field.
Trax is built upon TensorFlow and JAX. JAX is an enhanced and optimised version of NumPy. The important distinction between JAX and NumPy is that the former uses a library called XLA (Accelerated Linear Algebra), which allows your NumPy code to run on GPU and TPU rather than only on CPU as plain NumPy does, thus speeding up computation.
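A small illustration of the JAX side (assuming jax is installed; the function names here are just examples): NumPy-style code is jit-compiled through XLA and differentiated automatically.

import jax
import jax.numpy as jnp

@jax.jit  # XLA-compiles the function for CPU/GPU/TPU
def predict(w, x):
    return jnp.tanh(x @ w)

grad_fn = jax.grad(lambda w, x: jnp.sum(predict(w, x)))  # autodiff for free

w = jnp.ones((3, 2))
x = jnp.ones((4, 3))
print(predict(w, x).shape)  # (4, 2), compiled on the first call
print(grad_fn(w, x).shape)  # (3, 2)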
I am looking for optimization modelling libraries in Python, like CVXPY and Pyomo, with support for complex variables (variables with real and imaginary parts) and non-linear problems. CVXPY supports complex variables but doesn't support nonlinear functions in constraints. On the other hand, Pyomo can handle nonlinear problems but doesn't support complex variables.
In conclusion: I am working on a large-scale nonlinear and nonconvex optimization problem with some complex variables, and I am looking for something like CVXPY for these types of problems.
Any suggestions?
Thanks
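For what it's worth, one common workaround when the modelling layer lacks complex variables is to split each complex variable into its real and imaginary parts; a minimal Pyomo sketch of the idea (the toy objective and bound are illustrative only):

from pyomo.environ import ConcreteModel, Var, Objective, Constraint, minimize

m = ConcreteModel()
m.z_re = Var(initialize=1.0)  # real part of z
m.z_im = Var(initialize=0.0)  # imaginary part of z
# the complex-magnitude constraint |z|^2 <= 4, written on the parts
m.mag = Constraint(expr=m.z_re**2 + m.z_im**2 <= 4)
m.obj = Objective(expr=(m.z_re - 1)**2 + (m.z_im - 2)**2, sense=minimize)
# SolverFactory("ipopt").solve(m)  # needs a nonlinear solver such as Ipopt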
Is it possible to use TensorFlow in a distributed manner and use fit_generator()? In my research so far I have not seen anything on how to do this, or whether it is possible. If it is not possible, what are some possible solutions for using distributed TensorFlow when all the data will not fit in memory?
Using fit_generator() is not possible under a TensorFlow distribution scope.
Have a look at tf.data. I rewrote all my Keras ImageDataGenerators as TensorFlow data pipelines. It doesn't take much time, is more transparent, and is remarkably faster.
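A minimal sketch of such a pipeline under a distribution strategy (synthetic tensors stand in for decoded image files here; in practice you would list files and map a decode/resize function over them):

import tensorflow as tf

images = tf.random.uniform([256, 64, 64, 3])
labels = tf.random.uniform([256], maxval=10, dtype=tf.int32)

dataset = (
    tf.data.Dataset.from_tensor_slices((images, labels))
    .shuffle(256)
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)  # overlap preprocessing with training
)

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(64, 64, 3)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

model.fit(dataset, epochs=2)  # fit() consumes tf.data directly, no generator needed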