How can one get the smallest positive float value in TensorFlow?
In NumPy one can use np.nextafter and do something like:
>>> import numpy as np
>>> np.nextafter(0, 1)
4.9406564584124654e-324
For this, TensorFlow has tf.math.nextafter. See here: TensorFlow doc
(It's actually unsurprising that TensorFlow is very similar to NumPy here, since the mathematics of TensorFlow are based on those of NumPy, and you can easily use NumPy arrays as data input in tf.)
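For example, a minimal sketch (note that TensorFlow defaults to float32, so explicit float64 constants are used here to reproduce NumPy's float64 result):
import tensorflow as tf

# Analogue of np.nextafter(0, 1); float64 constants match the NumPy example
zero = tf.constant(0.0, dtype=tf.float64)
one = tf.constant(1.0, dtype=tf.float64)
print(tf.math.nextafter(zero, one))  # tf.Tensor(5e-324, shape=(), dtype=float64)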
Can I use numpy arrays when using pytorch?
I am converting code from TensorFlow to PyTorch, and the code uses NumPy arrays during the computation. Can I keep my inputs as NumPy arrays during the computation, or do I have to convert them to torch tensors?
If that array is being passed to a PyTorch model built from torch.nn layers, then it must be a torch.Tensor, not a NumPy array.
Depending on the layer, the tensor also has to have a specific shape: for nn.Conv2d layers you must have a 4-D tensor, and for nn.Linear a 2-D tensor.
This is one of several reasons it cannot be a NumPy array.
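A minimal sketch of the required conversion (the shapes and layer parameters here are invented for illustration):
import numpy as np
import torch

# A NumPy array must become a torch.Tensor before an nn layer accepts it.
# nn.Conv2d expects a 4-D input: (batch, channels, height, width).
arr = np.random.rand(8, 3, 32, 32).astype(np.float32)
x = torch.from_numpy(arr)  # shares memory with arr; torch.tensor(arr) would copy

conv = torch.nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
out = conv(x)
print(out.shape)  # torch.Size([8, 16, 30, 30])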
I tried searching the documentation online but I can't find anything that gives me an answer. What does the .numpy() function do? The example code given is:
y_true = []
for X_batch, y_batch in mnist_test:
    y_true.append(y_batch.numpy()[0].tolist())
In both PyTorch and TensorFlow, the .numpy() method is pretty much straightforward: it converts a tensor object into a numpy.ndarray object. This implicitly means that the converted tensor will now be processed on the CPU.
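For example (a small sketch; the .cpu() call is only needed when the tensor lives on a GPU):
import torch

t = torch.ones(3)
if torch.cuda.is_available():
    t = t.cuda()
    arr = t.cpu().numpy()  # a CUDA tensor must be moved to host memory first
else:
    arr = t.numpy()
print(type(arr))  # <class 'numpy.ndarray'>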
If you ever have a problem understanding some PyTorch function, you can ask help():
import torch
t = torch.tensor([1,2,3])
help(t.numpy)
Out:
Help on built-in function numpy:
numpy(...) method of torch.Tensor instance
numpy() -> numpy.ndarray
Returns :attr:`self` tensor as a NumPy :class:`ndarray`. This tensor and the
returned :class:`ndarray` share the same underlying storage. Changes to
:attr:`self` tensor will be reflected in the :class:`ndarray` and vice versa.
This numpy() function is the converter from torch.Tensor to a NumPy array.
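The shared-storage behaviour mentioned in the docstring is easy to verify (a quick sketch):
import torch

t = torch.tensor([1, 2, 3])
a = t.numpy()   # no copy: a and t share the same underlying storage
a[0] = 99
print(t)        # tensor([99,  2,  3]) -- the change shows up in the tensor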
If we look at the code below, we see a simple example where tensors and NumPy arrays are converted back and forth automatically, and where .numpy() converts a Tensor to a NumPy array explicitly.
import tensorflow as tf
import numpy as np
ndarray = np.ones([3, 3])
print("TensorFlow operations convert numpy arrays to Tensors automatically")
tensor = tf.multiply(ndarray, 42)
print(tensor)
print("And NumPy operations convert Tensors to numpy arrays automatically")
print(np.add(tensor, 1))
print("The .numpy() method explicitly converts a Tensor to a numpy array")
print(tensor.numpy())
In the second-to-last line of code, we see that the TensorFlow authors themselves describe .numpy() as the explicit converter from a Tensor to a NumPy array.
You can check it out in the TensorFlow documentation.
Whenever I try printing a tensor, I always get truncated results.
import tensorflow as tf
import numpy as np
np.set_printoptions(threshold=np.nan)
tensor = tf.constant(np.ones(999))
tensor = tf.Print(tensor, [tensor])
sess = tf.Session()
sess.run(tensor)
As you can see, I've followed the guide in Print full value of tensor into console or write to file in tensorflow.
But the output is simply
...\core\kernels\logging_ops.cc:79] [1 1 1...]
I want to see the full tensor, thanks.
This is solved easily by checking the TensorFlow API docs for tf.Print: pass summarize=n, where n is the number of elements you want displayed.
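Applied to the question's TF 1.x code, that would look something like this (a sketch):
import tensorflow as tf
import numpy as np

tensor = tf.constant(np.ones(999))
# summarize=999 makes tf.Print show up to 999 elements instead of the default 3
tensor = tf.Print(tensor, [tensor], summarize=999)

sess = tf.Session()
sess.run(tensor)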
You can do it as follows in TensorFlow 2.x:
import tensorflow as tf
import numpy as np

tensor = tf.constant(np.ones(999))
tf.print(tensor, summarize=-1)
From TensorFlow docs -> summarize: The first and last summarize elements within each dimension are recursively printed per Tensor. If set to -1, it will print all elements of every tensor.
https://www.tensorflow.org/api_docs/python/tf/print
To print all tensors without truncation in TensorFlow 2.x:
import numpy as np
import sys
np.set_printoptions(threshold=sys.maxsize)
It's possible to read dense data this way:
# tf - tensorflow, np - numpy, sess - session
m = np.ones((2, 3))
placeholder = tf.placeholder(tf.int32, shape=m.shape)
sess.run(placeholder, feed_dict={placeholder: m})
How can I read a SciPy sparse matrix (for example scipy.sparse.csr_matrix) into a tf.placeholder, or maybe tf.sparse_placeholder?
I think that currently TF does not have a good way to read sparse data. If you do not want to convert your sparse matrix into a dense one, you can try to construct a sparse tensor.
Here is what the official tutorial tells you:
SparseTensors don't play well with queues. If you use SparseTensors
you have to decode the string records using tf.parse_example after
batching (instead of using tf.parse_single_example before batching).
To feed a SciPy sparse matrix to a TF placeholder:
Option 1: use tf.sparse_placeholder. The question Use coo_matrix in TensorFlow shows a way to feed data to a sparse_placeholder.
Option 2: convert the sparse matrix to a dense NumPy matrix and feed it to tf.placeholder (of course, this is impossible when the converted dense matrix does not fit in memory).
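A minimal sketch of Option 1 (TF 1.x API, as in the question; the example matrix is made up):
import numpy as np
import scipy.sparse as sp
import tensorflow as tf

# Convert a SciPy matrix to COO form and feed the (indices, values, shape)
# triple into a tf.sparse_placeholder.
m = sp.csr_matrix(np.eye(3)).tocoo()
indices = np.stack([m.row, m.col], axis=1).astype(np.int64)

sp_ph = tf.sparse_placeholder(tf.float64)
dense = tf.sparse_tensor_to_dense(tf.sparse_reorder(sp_ph))

with tf.Session() as sess:
    print(sess.run(dense, feed_dict={
        sp_ph: tf.SparseTensorValue(indices, m.data, m.shape)
    }))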
I wonder if there is a way to compute the Gaussian kernel of a numpy masked array?
I import:
from sklearn.metrics.pairwise import rbf_kernel
If one uses a masked array and gives it as input to the rbf_kernel function of the scikit-learn package, the result is not a masked array. It seems that all the pairwise distances are calculated regardless of some of them being masked!
Scikit-learn doesn't support masked arrays.
Computing the RBF kernel is really simple if you can compute Euclidean distances, though.
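A minimal sketch of that approach, with an ordinary array for illustration (masking the distance matrix first is where a masked-array workflow would hook in):
import numpy as np

# RBF kernel K(x, y) = exp(-gamma * ||x - y||^2), following sklearn's
# rbf_kernel convention, built from squared Euclidean distances.
X = np.random.rand(5, 3)
sq_norms = (X ** 2).sum(axis=1)
# ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y
sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2 * X @ X.T
K = np.exp(-0.5 * np.maximum(sq_dists, 0))  # gamma = 0.5, chosen arbitrarily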