Is there any function in Tensorflow which does the exact same thing as theano.tensor.switch(cond, ift, iff)?
In TensorFlow you can use tf.cond (https://www.tensorflow.org/api_docs/python/tf/cond). There are some examples in the documentation.
Edit: As you mentioned, tf.cond is not element-wise; the element-wise equivalent is tf.where (https://www.tensorflow.org/api_docs/python/tf/where).
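For example, a minimal sketch of the element-wise case (the tensor values are illustrative):
import tensorflow as tf

cond = tf.constant([True, False, True])
ift = tf.constant([1, 2, 3])
iff = tf.constant([10, 20, 30])

result = tf.where(cond, ift, iff)  # -> [1, 20, 3], like theano.tensor.switch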
I'm trying to learn how to use XLA for my models, and I'm looking at the official doc here: https://www.tensorflow.org/xla#enable_xla_for_tensorflow_models. It documents two methods to enable XLA: 1) explicit compilation, by decorating your training function with @tf.function(jit_compile=True); 2) auto-clustering, by setting environment variables.
Since I'm using TensorFlow 1.15, not 2.x, I think the second approach is the same as using this statement:
config.graph_options.optimizer_options.global_jit_level = (
    tf.OptimizerOptions.ON_1)
You can also find info here: https://www.tensorflow.org/xla/tutorials/autoclustering_xla. It seems this is what they use in TF 2.x:
tf.config.optimizer.set_jit(True) # Enable XLA.
I think they are the same; correct me if I'm wrong.
OK, so for the first approach, I think the TF 1.15 equivalent is using
tf.xla.experimental.compile(computation)
So my question is: if I have used
tf.xla.experimental.compile(computation) to decorate my whole training function, is this equivalent to using
config.graph_options.optimizer_options.global_jit_level = (
    tf.OptimizerOptions.ON_1)
? Does anybody know? Much appreciated.
According to this video from the TF team (2021), auto-clustering will automatically look for places to optimize. Nevertheless, because its behaviour can be unpredictable, they recommend decorating tf.functions with @tf.function(jit_compile=True) over relying on out-of-the-box clustering.
If you do want to use auto-clustering, set_jit(True) is being deprecated; the preferred call now is tf.config.optimizer.set_jit('autoclustering').
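To illustrate both options, here is a minimal TF 2.x sketch (the train_step body is a placeholder; jit_compile requires a reasonably recent TF 2.x release, earlier versions used experimental_compile):
import tensorflow as tf

# Option 1 (recommended): explicitly compile one function with XLA.
@tf.function(jit_compile=True)
def train_step(x, y):
    return x + y  # placeholder for the real forward/backward pass

# Option 2: enable auto-clustering for the whole program.
tf.config.optimizer.set_jit('autoclustering')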
What is the equivalent of Python function tf.gradients(loss, [var]) in C++? Thanks!
The equivalent function in C++ is tensorflow::AddSymbolicGradients(). You will need to obtain a tensorflow::Graph object representing your graph to use this function. However, adding gradients in C++ is still experimental, so beware that this function signature is subject to change.
I can't find where chip is defined in TensorFlow, but it is used everywhere.
For example: params.template chip<0>(index).
What does the number in <> mean? It seems the number can be 0, 1, 2, or 3.
It is part of the Eigen library that TensorFlow uses: chip<DimId>(offset) slices the tensor at the given offset along dimension DimId, so the number in <> is the dimension being chipped. The code is here:
https://bitbucket.org/eigen/eigen/src/6f952374ef2b6b8786b653e3fe8b7b7b712950ef/unsupported/Eigen/CXX11/src/Tensor/TensorBase.h?fileviewer=file-view-default
Is it possible to use data stored in an SFrame to train, e.g., the scikit-learn implementation of a random forest, without converting the whole dataset to numpy?
According to the Turi forum:
"If you use the most recent version of SFrame (which only became available via pip yesterday) you can use the tonumpy function to create an ndarray from an SFrame."
https://forum.turi.com/discussion/1642/
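A minimal sketch of that approach, assuming the SFrame package exposes the to_numpy conversion described above (the file name and column names are hypothetical):
import sframe
from sklearn.ensemble import RandomForestClassifier

sf = sframe.SFrame.read_csv('data.csv')        # hypothetical input file
X = sf[['feature_1', 'feature_2']].to_numpy()  # conversion method assumed from the forum post
y = sf['label'].to_numpy()

clf = RandomForestClassifier()
clf.fit(X, y)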
I have a function in Python:
def f(x):
    return x[0]**3 + x[1]**2 + 7
    # Actually more than this.
    # No analytical expression
It's a scalar-valued function of a vector.
How can I approximate the Jacobian and Hessian of this function in numpy or scipy numerically?
(Updated in late 2017 because there have been a lot of updates in this space.)
Your best bet is probably automatic differentiation. There are now many packages for this, because it's the standard approach in deep learning:
Autograd works transparently with most numpy code. It's pure-Python, requires almost no code changes for typical functions, and is reasonably fast.
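For instance, a minimal sketch with Autograd, using the toy f from the question:
import autograd.numpy as np          # thin wrapper around numpy
from autograd import grad, hessian

def f(x):
    return x[0]**3 + x[1]**2 + 7

x = np.array([1.0, 2.0])
print(grad(f)(x))     # gradient: [3., 4.]
print(hessian(f)(x))  # Hessian:  [[6., 0.], [0., 2.]]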
There are many deep-learning-oriented libraries that can do this.
Some of the most popular are TensorFlow, PyTorch, Theano, Chainer, and MXNet. Each will require you to rewrite your function in their kind-of-like-numpy-but-needlessly-different API, and in return will give you GPU support and a bunch of deep-learning-oriented features that you may or may not care about.
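As a hedged illustration of the rewrite involved, here is the same toy function with PyTorch's functional autograd helpers (available in reasonably recent releases):
import torch
from torch.autograd.functional import jacobian, hessian

def f(x):
    return x[0]**3 + x[1]**2 + 7

x = torch.tensor([1.0, 2.0])
print(jacobian(f, x))  # tensor([3., 4.])
print(hessian(f, x))   # tensor([[6., 0.], [0., 2.]])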
FuncDesigner is an older package I haven't used; its website is currently down.
Another option is to approximate it with finite differences, basically just evaluating (f(x + eps) - f(x - eps)) / (2 * eps) (but obviously with more effort put into it than that). This will probably be slower and less accurate than the other approaches, especially in moderately high dimensions, but is fully general and requires no code changes. numdifftools seems to be the standard Python package for this.
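For instance, a minimal finite-difference sketch with numdifftools:
import numpy as np
import numdifftools as nd

def f(x):
    return x[0]**3 + x[1]**2 + 7

x = np.array([1.0, 2.0])
print(nd.Gradient(f)(x))  # approximately [3., 4.]
print(nd.Hessian(f)(x))   # approximately [[6., 0.], [0., 2.]]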
You could also attempt to find fully symbolic derivatives with SymPy, but this will be a relatively manual process.
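A minimal sketch of the symbolic route with SymPy (you would still have to transcribe your function into SymPy expressions by hand):
import sympy as sp

x0, x1 = sp.symbols('x0 x1')
expr = x0**3 + x1**2 + 7

gradient = [sp.diff(expr, v) for v in (x0, x1)]  # [3*x0**2, 2*x1]
hess = sp.hessian(expr, (x0, x1))                # Matrix([[6*x0, 0], [0, 2]])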
Restricted to just SciPy, the most convenient way I found was scipy.misc.derivative, within the appropriate loops, with lambdas to curry the function of interest.
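A minimal sketch of that pattern (the partial-derivative helper is my own illustration, and note that scipy.misc.derivative has been deprecated in recent SciPy releases):
import numpy as np
from scipy.misc import derivative

def f(x):
    return x[0]**3 + x[1]**2 + 7

def partial(func, x, i, dx=1e-6):
    # Differentiate func along coordinate i, holding the other coordinates fixed.
    def along_i(xi):
        xp = np.array(x, dtype=float)
        xp[i] = xi
        return func(xp)
    return derivative(along_i, x[i], dx=dx)

x = np.array([1.0, 2.0])
gradient = np.array([partial(f, x, i) for i in range(len(x))])  # approximately [3., 4.]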