Sampling None-size tensor from distribution in tensorflow

The following code:
import tensorflow as tf
tfd = tf.contrib.distributions
mean = [0.0, 0.0]
scale = [1.0, 1.0]
dist = tfd.MultivariateNormalDiag(loc=mean, scale_diag=scale)
samp = dist.sample([None])
Gives the error:
TypeError: Expected int32, got None of type '_Message' instead.
But generates n samples from the distribution if None is replaced with an integer n. Is there any way to get an unknown number of samples from the distribution?
EDIT: The original question may be badly phrased; I want to sample a tensor of shape (None, ...) to combine with other tensors of this shape. Clearly somewhere in there an input is needed to fix the size at runtime.

You could do
num_samples = tf.placeholder(dtype=tf.int32, shape=())
samp = dist.sample(num_samples)
and then feed in the number of samples. Likewise, if you have a scalar tensor representing the number of samples, you can pass that in.
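For completeness, here is a minimal end-to-end sketch under the question's TF 1.x / tf.contrib setup (the feed value 5 is just an example):
import tensorflow as tf
tfd = tf.contrib.distributions
dist = tfd.MultivariateNormalDiag(loc=[0.0, 0.0], scale_diag=[1.0, 1.0])
# scalar placeholder that fixes the sample count at run time
num_samples = tf.placeholder(dtype=tf.int32, shape=())
samp = dist.sample(num_samples)  # leading dimension stays unknown until fed
with tf.Session() as sess:
    print(sess.run(samp, feed_dict={num_samples: 5}).shape)  # -> (5, 2)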

Related

Passing random value in tensorflow function as a parameter

I have code in my augmentation tf.data pipeline...
# BLUR
filter_size = tf.random.uniform(shape=[], minval=0, maxval=5)
image = tfa.image.mean_filter2d(image, filter_shape=filter_size)
But I'm constantly getting this error...
TypeError: The `filter_shape` argument must be a tuple of 2 integers. Received: Tensor("filter_shape:0", shape=(), dtype=int32)
I tried getting a static value from the random tensor like this...
# BLUR
filter_size = tf.get_static_value(tf.random.uniform(shape=[], minval=0, maxval=5))
image = tfa.image.mean_filter2d(image, filter_shape=filter_size)
But I get error...
TypeError: The `filter_shape` argument must be a tuple of 2 integers. Received: None
And this error makes me sad :(
I want to create an augmentation pipeline for tf.data, btw...
You should specify an output shape. However, when I did that, I ran into another error which hints that the filter_shape expected by mean_filter2d should not be a Tensor. Therefore, I decided to simply use Python's random module to generate a random tuple to blur the image.
import random
import tensorflow_addons as tfa
filter_size = tuple(random.randrange(0, 5) for _ in range(2))
image_bllr = tfa.image.mean_filter2d(image, filter_shape=filter_size)
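One caveat on this design: random.randrange is plain Python, so the filter size is fixed at the moment this line runs. If the snippet is placed inside a tf.data map function, that function is traced once, and the same tuple is then reused for every image in the pipeline.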

Tensorflow loss function no gradient provided

Currently I am trying to write my own loss function, but when returning the result (a tensor consisting of a list of loss values) I get the following error:
ValueError: No gradients provided for any variable: ['conv2d/kernel:0', 'conv2d/bias:0', 'conv2d_1/kernel:0', 'conv2d_1/bias:0', 'dense/kernel:0', 'dense/bias:0', 'dense_1/kernel:0', 'dense_1/bias:0', 'dense_2/kernel:0', 'dense_2/bias:0'].
However, in tutorials and in their docs they also use tf.reduce_mean, and when using it like them (they showed how to code an MSE loss function) I don't get the error, so it seems that I am missing something.
My code:
gl = tfa.losses.GIoULoss()
def loss(y_true, y_pred):
    batch_size = y_true.shape[0]
    # now contains 32 lists (a batch) of bbxs -> shape is (32, 7876)
    bbx_true = y_true.numpy()
    # now contains 32 lists (a batch) of bbxs; here we have to access [0] twice in order to get the entry itself
    # -> shape is (32, 1, 1, 7876)
    bbx_pred = y_pred.numpy()
    losses = []
    curr_true = []
    curr_pred = []
    for i in range(batch_size):
        curr_true = bbx_true[i]
        curr_pred = bbx_pred[i][0][0]
        curr_true = [curr_true[x:x+4] for x in range(0, len(curr_true), 4)]
        curr_pred = [curr_pred[x:x+4] for x in range(0, len(curr_pred), 4)]
        if len(curr_true) == 0:
            curr_true.append([0., 0., 0., 0.])
        curr_loss = gl(curr_true, curr_pred)
        losses.append(curr_loss)
    return tf.math.reduce_mean(losses, axis=-1)
Basically I want to achieve bounding box regression, and because of that I want to use the GIoU loss function. Because my model outputs 7896 neurons (the maximum number of bounding boxes I want to predict according to my training set, times 4) and the GIoU loss function needs the input as an array of lists with 4 elements each, I have to perform this transformation.
How do I have to change my code in order to also build up a gradient?
Numpy doesn't provide autograd functions, so you need to have TensorFlow tensors exclusively in your loss (otherwise the gradient is lost during backpropagation). So avoid using .numpy() and use TensorFlow operators and slicing on TensorFlow tensors instead.
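As an illustration, here is a minimal sketch of the same loss written purely with TensorFlow ops; the reshape into 4-coordinate boxes is an assumption based on the shapes described in the question, and the empty-ground-truth special case is skipped for brevity:
import tensorflow as tf
import tensorflow_addons as tfa

gl = tfa.losses.GIoULoss()

def loss(y_true, y_pred):
    # y_true: (batch, 7876), y_pred: (batch, 1, 1, 7876) as described in the question.
    # View the flat vectors as one long list of 4-coordinate boxes, staying inside
    # TensorFlow so the gradient can flow back through y_pred.
    boxes_true = tf.reshape(y_true, (-1, 4))
    boxes_pred = tf.reshape(y_pred, (-1, 4))
    return gl(boxes_true, boxes_pred)  # GIoULoss averages over all boxes by default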

Strange output of Conv2D in tflite graph

I have a tflite graph, a fragment of which is depicted in the attached picture.
I needed to debug its behavior, and already on the first step I got quite puzzling results.
When I feed a zeros tensor as input, I expect the output of the first Conv2D to consist only of values from the Conv2D's bias (since all kernel elements get multiplied by zeros), but instead I got a tensor with seemingly random data. Here is the code snippet:
def test_graph(path=PATH_DEFAULT):
    interp = tf.lite.Interpreter(path)
    interp.allocate_tensors()
    input_details = interp.get_input_details()
    in_idx = input_details[0]['index']
    zeros = np.zeros(shape=(1, 256, 256, 3), dtype=np.float32)
    interp.set_tensor(in_idx, zeros)
    interp.invoke()
    # index of output of first conv2d operator is 3 (see netron pic)
    after_conv_2d = interp.get_tensor(3)
    # shape of bias is just [count of output channels]
    n, h, w, c = after_conv_2d.shape
    # if we feed zeros as input, we can expect that the only values we get are the values of bias
    # since all kernel elems in that case are multiplied by zeros
    uniq_vals_cnt = len(np.unique(after_conv_2d))
    assert uniq_vals_cnt <= c, f"There are {uniq_vals_cnt} in output, should be <= than {c}"
output:
AssertionError: There are 287928 in output, should be <= than 24
Can someone help me with my misunderstanding?
It seems my assumption that I can get any intermediate tensor from the interpreter is wrong; we can only do it for outputs, even though the interpreter does not raise an error and even returns tensors of the right shape for indices of non-output tensors.
One way to debug such a graph would be to make all tensors outputs, but it seems the easiest way to do that is to convert the tflite file to pb with toco and then convert the pb back to tflite with the new outputs specified. This way is not ideal, though, because toco support for tflite -> pb conversion was removed after 1.9, and using versions before that can break (in my case it breaks) on some graphs.
More on this here:
tflite: get_tensor on non-output tensors gives random values
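As a side note, newer TensorFlow releases (roughly 2.5 and later, so after this answer was written) expose an experimental interpreter flag that keeps intermediate tensors, which would avoid the toco round trip. A hedged sketch, assuming such a version is installed and reusing the question's PATH_DEFAULT:
import numpy as np
import tensorflow as tf

# experimental flag that prevents intermediate tensors from being overwritten/reused
interp = tf.lite.Interpreter(model_path=PATH_DEFAULT,
                             experimental_preserve_all_tensors=True)
interp.allocate_tensors()
in_idx = interp.get_input_details()[0]['index']
interp.set_tensor(in_idx, np.zeros((1, 256, 256, 3), dtype=np.float32))
interp.invoke()
after_conv_2d = interp.get_tensor(3)  # intermediate tensor now holds real data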

What's the difference between _keras_shape and _shape for a tensor in tensorflow?

In tensorflow, I find that a tensor has two shape attributes: _shape and _keras_shape. The following picture is an example:
K.int_shape() returns the _keras_shape.
What's the difference between _shape and _keras_shape?
The difference is already shown in the picture above: look at the output type. One is a tuple, while the other is of type TensorShape.
For Example:
import tensorflow as tf
a = tf.placeholder(tf.float32, [None, 128])
a.shape
# output: TensorShape([Dimension(None), Dimension(128)])
from keras import backend as K
print(K.int_shape(a))
# output: (None, 128)
tf.keras.backend.int_shape returns the shape of a tensor or variable as a tuple of int or None entries, while _shape is of type TensorShape.
Read more about tf.keras.backend.int_shape here and about TensorShape here.
The Keras layer computes the shape of the tensor, but plain TensorFlow does not. In this way, _keras_shape is the shape calculated by the self-defined layer.

Gradients of Bernoulli Samples

I'm trying to calculate the gradients of the samples from a Bernoulli distribution w.r.t. the probabilities p (of a sample being 1).
I tried using both the implementation of the Bernoulli distribution provided in tensorflow.contrib.distributions and my own simple implementation based on this discussion. However both methods fail when I try to calculate the gradients.
Using the Bernoulli implementation:
import tensorflow as tf
from tensorflow.contrib.distributions import Bernoulli
p = tf.constant([0.2, 0.6])
b = Bernoulli(p=p)
s = b.sample()
g = tf.gradients(s, p)
with tf.Session() as session:
    print(session.run(g))
The above code gives me the following error:
TypeError: Fetch argument None has invalid type <class 'NoneType'>
Using my implementation:
import tensorflow as tf
p = tf.constant([0.2, 0.6])
shape = [1, 2]
s = tf.select(tf.random_uniform(shape) - p > 0.0, tf.ones(shape), tf.zeros(shape))
g = tf.gradients(s, p)
with tf.Session() as session:
    print(session.run(g))
Same error:
TypeError: Fetch argument None has invalid type <class 'NoneType'>
Is there a way to calculate the gradients of Bernoulli samples?
(My TensorFlow version is 0.12).
You cannot backprop through a discrete stochastic node, for the obvious reason that gradients are not defined there.
However, if you approximate the Bernoulli with a continuous distribution controlled by a temperature parameter, then yes, you can.
This idea is called the reparametrization trick and is implemented as RelaxedBernoulli in TensorFlow Probability (and also in the tf.contrib library):
RelaxedBernoulli
You can specify the probability p of your Bernoulli, which is your random variable, et voilà.
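For instance, a minimal sketch with a current TensorFlow Probability install (the original question used TF 0.12 and graph mode, so the eager API below is an assumption about a newer environment):
import tensorflow as tf
import tensorflow_probability as tfp

p = tf.Variable([0.2, 0.6])
temperature = 0.5  # lower values push samples closer to 0/1 but make gradients noisier

with tf.GradientTape() as tape:
    dist = tfp.distributions.RelaxedBernoulli(temperature, probs=p)
    s = dist.sample()        # differentiable "soft" samples in (0, 1)
    loss = tf.reduce_sum(s)

print(tape.gradient(loss, p))  # well-defined gradients w.r.t. p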