What is the tensorflow/keras equivalent of Theano's gt?
Based on Theano's documentation, theano.tensor.gt returns a symbolic 'int8' tensor representing the result of logical greater-than (a>b).
In Theano, I have:
import theano.tensor as T
posInd1 = T.gt(D1, eps).nonzero()[0]
How do I convert it to TensorFlow?
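For reference, a minimal sketch of one TensorFlow 2.x equivalent (D1 and eps below are example stand-ins for your own data): tf.greater corresponds to T.gt, and tf.where with a single argument plays the role of .nonzero().
import tensorflow as tf
D1 = tf.constant([0.5, -1.0, 2.0, 0.0])  # example data
eps = 0.1
# tf.greater is the counterpart of T.gt; tf.where(condition) returns
# the indices of the True entries, like .nonzero()
posInd1 = tf.where(tf.greater(D1, eps))[:, 0]
print(posInd1)  # tf.Tensor([0 2], shape=(2,), dtype=int64)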
Related
Can I use numpy arrays when using pytorch?
I am converting code from TensorFlow to PyTorch, and the code uses NumPy arrays during the computation. Can I keep my inputs as NumPy arrays during the computation, or do I have to convert them to torch tensors?
If the array is being passed to a PyTorch model built from torch.nn layers, then it must be a torch.Tensor, not a NumPy array.
Depending on the layer, the tensor also has to have a specific shape: nn.Conv2d layers expect a 4-d tensor, and nn.Linear expects its input's last dimension to match in_features.
This is one of many reasons it cannot stay a NumPy array.
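A minimal sketch of the usual conversion (the shapes here are made up for illustration):
import numpy as np
import torch

x_np = np.random.rand(8, 3, 32, 32).astype(np.float32)
# torch.from_numpy shares memory with the numpy array (no copy)
x = torch.from_numpy(x_np)
conv = torch.nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
out = conv(x)                  # works: the input is now a 4-d torch tensor
out_np = out.detach().numpy()  # back to numpy after the computation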
I want to get the scalar value of a function parameter, like the following code does:
import tensorflow as tf

@tf.function
def test(key_value):
    tf.print(key_value.numpy())

a = tf.constant(0)
test(a)
But the numpy() method is not available when running in autograph.
numpy() is only available outside of tf.function, where Tensors have actual values. Within tf.function, you have access to a restricted API. As long as you pass the tensor to a TensorFlow op, you don't need to call numpy():
import tensorflow as tf

@tf.function
def test(key_value):
    tf.print(key_value)

a = tf.constant(0)
test(a)
Have a look at this guide for more info.
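And outside of tf.function, in eager mode, numpy() works as expected (a quick sketch):
import tensorflow as tf

a = tf.constant(0)
print(a.numpy())  # 0 -- eager tensors have actual values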
I want to implement a classifier using the sklearn library. Is there a way to save the model, or to convert the file into a saved TensorFlow file, so that I can convert it to TensorFlow Lite later?
If you replicate the architecture in TensorFlow, which will be pretty easy given that scikit-learn models are usually rather simple, you can explicitly assign the parameters from the learned scikit-learn models to TensorFlow layers.
Here is an example with logistic regression turned into a single dense layer:
import tensorflow as tf
import numpy as np
from sklearn.linear_model import LogisticRegression
# some random data to train and test on
x = np.random.normal(size=(60, 21))
y = np.random.uniform(size=(60,)) > 0.5
# fit the sklearn model on the data
sklearn_model = LogisticRegression().fit(x, y)
# create a TF model with the same architecture
tf_model = tf.keras.models.Sequential()
tf_model.add(tf.keras.Input(shape=(21,)))
tf_model.add(tf.keras.layers.Dense(1))
# assign the parameters from sklearn to the TF model
tf_model.layers[0].weights[0].assign(sklearn_model.coef_.transpose())
tf_model.layers[0].bias.assign(sklearn_model.intercept_)
# verify the models do the same prediction
assert np.all((tf_model(x) > 0)[:, 0].numpy() == sklearn_model.predict(x))
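From there, the TensorFlow Lite conversion should be the standard one; a sketch of the usual TF 2.x path (not part of the original answer):
import tensorflow as tf  # continuing from tf_model above

converter = tf.lite.TFLiteConverter.from_keras_model(tf_model)
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)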
It is not always easy to replicate a scikit-learn model in TensorFlow, though. For instance, scikit-learn has a lot of on-the-fly imputation utilities, which would be a bit tricky to implement in TensorFlow.
I need to create an upper-triangular masking tensor in PyTorch. In TensorFlow it's easy using matrix_band_part. Is there any PyTorch equivalent for this function?
It's like numpy's triu_indices, but I need it for a tensor, not just a matrix.
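One common approach is torch.triu, which, like tf.matrix_band_part, operates on the last two dimensions and so also handles batches of matrices (a sketch, with made-up shapes):
import torch

x = torch.randn(4, 5, 5)  # a batch of 4 matrices
upper = torch.triu(x)     # keeps the upper triangle of each matrix, zeros the rest
# boolean mask of the strictly upper triangle, e.g. for attention masking
mask = torch.triu(torch.ones(5, 5), diagonal=1).bool()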
It's possible to read dense data this way:
# tf - tensorflow, np - numpy, sess - session
m = np.ones((2, 3), dtype=np.int32)
placeholder = tf.placeholder(tf.int32, shape=m.shape)
sess.run(placeholder, feed_dict={placeholder: m})
How do I read a scipy sparse matrix (for example scipy.sparse.csr_matrix) into a tf.placeholder, or maybe a tf.sparse_placeholder?
I think that currently TF does not have a good way to read sparse data. If you do not want to convert your sparse matrix into a dense one, you can try to construct a sparse tensor.
Here is what the official tutorial tells you:
SparseTensors don't play well with queues. If you use SparseTensors
you have to decode the string records using tf.parse_example after
batching (instead of using tf.parse_single_example before batching).
To feed a SciPy sparse matrix to a TF placeholder:
Option 1: use tf.sparse_placeholder. The question Use coo_matrix in TensorFlow shows the way to feed data to a sparse_placeholder.
Option 2: convert the sparse matrix to a dense NumPy matrix and feed it to tf.placeholder (of course, this way is impossible when the converted dense matrix does not fit in memory).
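A minimal TF 1.x sketch of option 1, going through COO format (the variable names are mine):
import numpy as np
import scipy.sparse
import tensorflow as tf  # TF 1.x API

m = scipy.sparse.random(4, 5, density=0.2, format="csr")
coo = m.tocoo()  # sparse_placeholder wants coordinate-style indices
indices = np.stack([coo.row, coo.col], axis=-1)

sp = tf.sparse_placeholder(tf.float64)
to_dense = tf.sparse_tensor_to_dense(sp, validate_indices=False)
with tf.Session() as sess:
    value = tf.SparseTensorValue(indices, coo.data, coo.shape)
    print(sess.run(to_dense, feed_dict={sp: value}))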