TensorFlow: tf.multinomial, getting the associated probabilities fails

I am trying to use tf.multinomial to sample, and I want to get the associated probability of each sampled value. Here is my example code:
In [1]: import tensorflow as tf
In [2]: tf.enable_eager_execution()
In [3]: probs = tf.constant([[0.5, 0.2, 0.1, 0.2], [0.6, 0.1, 0.1, 0.1]], dtype=tf.float32)
In [4]: idx = tf.multinomial(probs, 1)
In [5]: idx # print the indices
Out[5]:
<tf.Tensor: id=43, shape=(2, 1), dtype=int64, numpy=
array([[3],
[2]], dtype=int64)>
In [6]: probs[tf.range(probs.get_shape()[0]), tf.squeeze(idx)]
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-8-56ef51f84ca2> in <module>
----> 1 probs[tf.range(probs.get_shape()[0]), tf.squeeze(idx)]
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py in _slice_helper(tensor, slice_spec, var)
616 new_axis_mask |= (1 << index)
617 else:
--> 618 _check_index(s)
619 begin.append(s)
620 end.append(s + 1)
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py in _check_index(idx)
514 # TODO(slebedev): IndexError seems more appropriate here, but it
515 # will break `_slice_helper` contract.
--> 516 raise TypeError(_SLICE_TYPE_ERROR + ", got {!r}".format(idx))
517
518
TypeError: Only integers, slices (`:`), ellipsis (`...`), tf.newaxis (`None`) and scalar tf.int32/tf.int64 tensors are valid indices, got <tf.Tensor: id=7, shape=(2,), dtype=int32, numpy=array([3, 2])>
The expected result I want is [0.2, 0.1], as indicated by idx.
In NumPy, this kind of indexing works, as answered in https://stackoverflow.com/a/23435869/5046896.
How can I fix it?

You can use tf.gather_nd. For example:
>>> import tensorflow as tf
>>> tf.enable_eager_execution()
>>> probs = tf.constant([[0.5, 0.2, 0.1, 0.2], [0.6, 0.1, 0.1, 0.1]], dtype=tf.float32)
>>> idx = tf.multinomial(probs, 1)
>>> row_indices = tf.range(probs.get_shape()[0], dtype=tf.int64)
>>> full_indices = tf.stack([row_indices, tf.squeeze(idx)], axis=1)
>>> rs = tf.gather_nd(probs, full_indices)
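For example, if the sampled indices happen to be [3] and [2], as in the question, rs evaluates to [0.2, 0.1], which is exactly the expected result.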
Alternatively, you can use tf.distributions.Multinomial. The advantage is that you do not need to care about the batch size as in the code above; it also works with a varying batch size (e.g. when the batch dimension is None). Here is a simple example:
multinomial = tf.distributions.Multinomial(
    total_count=tf.constant(1, dtype=tf.float32),  # draw one sample for each record in the batch
    probs=probs)
sampled_actions = multinomial.sample()              # one one-hot sample per record in the batch
predicted_actions = tf.argmax(sampled_actions, axis=-1)
action_probs = sampled_actions * probs              # keep only the probability of the sampled action
action_probs = tf.reduce_sum(action_probs, axis=-1)
I prefer the latter one because it is flexible and elegant.
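For completeness, here is a self-contained sketch of that approach under TF 1.x eager execution (not the answerer's original code); note that the second row of probs is adjusted to sum to 1, since tf.distributions.Multinomial expects each row to be a valid probability distribution:
import tensorflow as tf
tf.enable_eager_execution()

# Each row must sum to 1 for tf.distributions.Multinomial (second row adjusted).
probs = tf.constant([[0.5, 0.2, 0.1, 0.2],
                     [0.6, 0.1, 0.1, 0.2]], dtype=tf.float32)

multinomial = tf.distributions.Multinomial(total_count=1.0, probs=probs)
sampled_actions = multinomial.sample()                          # one-hot row per record, e.g. [[0, 0, 0, 1], [1, 0, 0, 0]]
predicted_actions = tf.argmax(sampled_actions, axis=-1)         # sampled indices, e.g. [3, 0]
action_probs = tf.reduce_sum(sampled_actions * probs, axis=-1)  # probability of each sampled index, e.g. [0.2, 0.6]
print(predicted_actions.numpy(), action_probs.numpy())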

Related

ValueError: Shapes must be equal rank in assign_add()

I am reading about tf.Variable in TensorFlow r2.0 (TF2):
import tensorflow as tf
# Create a variable.
w = tf.constant([1, 2, 3, 4], tf.float32, shape=[2, 2])
# Use the variable in the graph like any Tensor.
y = tf.matmul(w,tf.constant([7, 8, 9, 10], tf.float32, shape=[2, 2]))
v= tf.Variable(w)
# The overloaded operators are available too.
z = tf.sigmoid(w + y)
tf.shape(z)
# Assign a new value to the variable with `assign()` or a related method.
v.assign(w + 1)
v.assign_add(tf.constant([1.0, 21]))
ValueError: Shapes must be equal rank, but are 2 and 1 for
'AssignAddVariableOp_4' (op: 'AssignAddVariableOp') with input shapes:
[], 2.
Also, why does the following return False?
tf.shape(v) == tf.shape(tf.constant([1.0, 21],tf.float32))
My other question: now that we are in TF 2, we should not use tf.Session() anymore, correct? It seems we should never call session.run(), but the API documentation keeps doing it with tf.compat.v1, etc. So why are they using it in the TF2 docs?
Any help would be appreciated.
CS
As the error says, assign_add expects a value with the same shape as v, which has shape [2, 2].
If you pass a tensor with any shape other than the variable's initial shape, assign_add raises this error.
Below is the modified code with the expected shape for the operation.
import tensorflow as tf
# Create a variable.
w = tf.constant([1, 2, 3, 4], tf.float32, shape=[2, 2])
# Use the variable in the graph like any Tensor.
y = tf.matmul(w,tf.constant([7, 8, 9, 10], tf.float32, shape=[2, 2]))
v= tf.Variable(w)
# The overloaded operators are available too.
z = tf.sigmoid(w + y)
tf.shape(z)
# Assign a new value to the variable with `assign()` or a related method.
v.assign(w + 1)
print(v)
v.assign_add(tf.constant([1, 2, 3, 4], tf.float32, shape=[2, 2]))
Output for v:
<tf.Variable 'UnreadVariable' shape=(2, 2) dtype=float32, numpy=
array([[3., 5.],
[7., 9.]], dtype=float32)>
Now the following tensor comparison returns True element-wise.
tf.shape(v) == tf.shape(tf.constant([1.0, 21],tf.float32))
<tf.Tensor: shape=(2,), dtype=bool, numpy=array([ True, True])>
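If you want a single boolean for the whole comparison instead of an element-wise result, you can reduce it, for example:
tf.reduce_all(tf.equal(tf.shape(v), tf.shape(tf.constant([1.0, 21], tf.float32))))
# <tf.Tensor: shape=(), dtype=bool, numpy=True>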
Coming to your tf.Session() question: in TensorFlow 2.0 eager execution is enabled by default, but if you still need it, you can disable eager execution and use tf.Session as below.
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
hello = tf.constant('Hello, TensorFlow!')
sess = tf.compat.v1.Session()
print(sess.run(hello))

shuffling two tensors in the same order

As in the title. I tried these to no avail:
tf.random.shuffle( (a,b) )
tf.random.shuffle( zip(a,b) )
I used to concatenate them, do the shuffling, and then unconcatenate/unpack. But now I'm in a situation where (a) is a rank-4 tensor while (b) is 1D, so there is no way to concatenate them.
I also tried passing the seed argument to the shuffle method so that using it twice reproduces the same shuffling => failed. I also tried to do the shuffling myself with a randomly shuffled range of numbers, but TF is not as flexible as NumPy with fancy indexing ==> failed.
What I'm doing now is converting everything back to NumPy, using shuffle from sklearn, and then recasting back to tensors. That is clumsy and wasteful, and this is supposed to happen inside a graph.
You could just shuffle the indices and then use tf.gather() to extract values corresponding to those shuffled indices:
TF2.x (UPDATE)
import tensorflow as tf
import numpy as np
x = tf.convert_to_tensor(np.arange(5))
y = tf.convert_to_tensor(['a', 'b', 'c', 'd', 'e'])
indices = tf.range(start=0, limit=tf.shape(x)[0], dtype=tf.int32)
shuffled_indices = tf.random.shuffle(indices)
shuffled_x = tf.gather(x, shuffled_indices)
shuffled_y = tf.gather(y, shuffled_indices)
print('before')
print('x', x.numpy())
print('y', y.numpy())
print('after')
print('x', shuffled_x.numpy())
print('y', shuffled_y.numpy())
# before
# x [0 1 2 3 4]
# y [b'a' b'b' b'c' b'd' b'e']
# after
# x [4 0 1 2 3]
# y [b'e' b'a' b'b' b'c' b'd']
TF1.x
import tensorflow as tf
import numpy as np
x = tf.placeholder(tf.float32, (None, 1, 1, 1))
y = tf.placeholder(tf.int32, (None))
indices = tf.range(start=0, limit=tf.shape(x)[0], dtype=tf.int32)
shuffled_indices = tf.random.shuffle(indices)
shuffled_x = tf.gather(x, shuffled_indices)
shuffled_y = tf.gather(y, shuffled_indices)
Make sure that you compute shuffled_x, shuffled_y in the same session run. Otherwise they might get different index orderings.
# Testing
x_data = np.concatenate([np.zeros((1, 1, 1, 1)),
                         np.ones((1, 1, 1, 1)),
                         2 * np.ones((1, 1, 1, 1))]).astype('float32')
y_data = np.arange(4, 7, 1)
print('Before shuffling:')
print('x:')
print(x_data.squeeze())
print('y:')
print(y_data)
with tf.Session() as sess:
    x_res, y_res = sess.run([shuffled_x, shuffled_y],
                            feed_dict={x: x_data, y: y_data})
print('After shuffling:')
print('x:')
print(x_res.squeeze())
print('y:')
print(y_res)
Before shuffling:
x:
[0. 1. 2.]
y:
[4 5 6]
After shuffling:
x:
[1. 2. 0.]
y:
[5 6 4]
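If the tensors are ultimately fed to a training loop, another option (not from the original answer) is tf.data, which shuffles (x, y) pairs together; a minimal TF 2.x sketch:
import tensorflow as tf
import numpy as np

x = tf.convert_to_tensor(np.arange(5))
y = tf.convert_to_tensor(['a', 'b', 'c', 'd', 'e'])

# Each dataset element is an (x_i, y_i) pair, so shuffling keeps them aligned.
dataset = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(buffer_size=5)
for xi, yi in dataset:
    print(xi.numpy(), yi.numpy())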

Keras - pad tensor with values on the borders

I have an image whose size is not even, so a convolution scales it down by a factor of 2, and then when I apply Conv2DTranspose I don't get consistent sizes back, which is a problem.
So I thought I'd pad the intermediate tensor with an extra row and column, using the same values as on the edges, for minimal disruption. How do I do this in Keras? Is it even possible? What are my alternatives?
With TensorFlow as the backend, you can use tf.concat() to append a duplicate of the last row/column to your tensor.
Supposing you want to duplicate the last row/column:
import tensorflow as tf
from keras.layers import Lambda, Input
from keras.models import Model
import numpy as np
def duplicate_last_row(tensor):
    return tf.concat((tensor, tf.expand_dims(tensor[:, -1, ...], 1)), axis=1)

def duplicate_last_col(tensor):
    return tf.concat((tensor, tf.expand_dims(tensor[:, :, -1, ...], 2)), axis=2)

# --------------
# Demonstrating with TF:
x = tf.convert_to_tensor([[[1, 2, 3], [4, 5, 6]],
                          [[10, 20, 30], [40, 50, 60]]])
x = duplicate_last_row(duplicate_last_col(x))

with tf.Session() as sess:
    print(sess.run(x))
# [[[ 1 2 3 3]
# [ 4 5 6 6]
# [ 4 5 6 6]]
#
# [[10 20 30 30]
# [40 50 60 60]
# [40 50 60 60]]]
# --------------
# Using as a Keras Layer:
inputs = Input(shape=(5, 5, 3))
padded = Lambda(lambda t: duplicate_last_row(duplicate_last_col(t)))(inputs)
model = Model(inputs=inputs, outputs=padded)
model.compile(optimizer="adam", loss='mse', metrics=['mse'])
batch = np.random.rand(2, 5, 5, 3)
x = model.predict(batch, batch_size=2)
print(x.shape)
# (2, 6, 6, 3)
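Alternatively (this is a sketch, not part of the original answer), tf.pad with mode='SYMMETRIC' mirrors the edge values, which for a single row/column of padding has the same effect as duplicating the last row and column. Reusing the imports and the inputs layer from above:
def pad_last_row_col(tensor):
    # Pad one row at the bottom and one column at the right by mirroring the edge values.
    return tf.pad(tensor, [[0, 0], [0, 1], [0, 1], [0, 0]], mode='SYMMETRIC')

padded = Lambda(pad_last_row_col)(inputs)   # (None, 5, 5, 3) -> (None, 6, 6, 3)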

reduce_sum by certain dimension

I have two embedding tensors A and B, which look like
[
[1,1,1],
[1,1,1]
]
and
[
[0,0,0],
[1,1,1]
]
What I want to do is calculate the squared L2 distance d(A, B) for each row.
First I did a tf.square(tf.sub(lhs, rhs)) to get
[
[1,1,1],
[0,0,0]
]
and then I want to do a row-wise reduction which returns
[
3,
0
]
but tf.reduce_sum does not seem to allow me to reduce by row. Any input would be appreciated. Thanks.
Add the reduction_indices argument with a value of 1, e.g.:
tf.reduce_sum( tf.square( tf.sub( lhs, rhs) ), 1 )
That should produce the result you're looking for. Here is the documentation on reduce_sum().
According to the TensorFlow documentation, reduce_sum has the following signature:
tf.reduce_sum(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)
However, reduction_indices has been deprecated; it is better to use axis instead. If axis is not set, all dimensions are reduced.
As an example (taken from the documentation):
# 'x' is [[1, 1, 1]
# [1, 1, 1]]
tf.reduce_sum(x) ==> 6
tf.reduce_sum(x, 0) ==> [2, 2, 2]
tf.reduce_sum(x, 1) ==> [3, 3]
tf.reduce_sum(x, 1, keep_dims=True) ==> [[3], [3]]
tf.reduce_sum(x, [0, 1]) ==> 6
The above requirement can be written as follows:
import numpy as np
import tensorflow as tf

a = np.array([[1, 7, 1], [1, 1, 1]])
b = np.array([[0, 0, 0], [1, 1, 1]])

xtr = tf.placeholder("float", [None, 3])
xte = tf.placeholder("float", [None, 3])

pred = tf.reduce_sum(tf.square(tf.subtract(xtr, xte)), 1)

# Initializing the variables
init = tf.global_variables_initializer()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    nn_index = sess.run(pred, feed_dict={xtr: a, xte: b})
    print(nn_index)
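For reference, in TensorFlow 2.x (eager by default) the same row-wise reduction can be written directly with the axis argument; a small sketch using the data from the question:
import tensorflow as tf

a = tf.constant([[1., 1., 1.], [1., 1., 1.]])
b = tf.constant([[0., 0., 0.], [1., 1., 1.]])

d = tf.reduce_sum(tf.square(a - b), axis=1)
print(d.numpy())  # [3. 0.]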

Scikit Learn: RandomForest: clf.predict works with float, but not clf.score

I'm working on a classification problem. The labels I am trying to predict:
df3['relevance'].unique()
array([ 3. , 2.5 , 2.33, 2.67, 2. , 1. , 1.67, 1.33, 1.25,
2.75, 1.75, 1.5 , 2.25])
When I call predict using the features I've made, it works OK:
clf = RandomForestClassifier()
clf.fit(df3[features], df['relevance'])
pd.crosstab(clf.predict(df3[features]), df3['relevance'])
But when I call clf.score:
clf.score(df3['features'], df3['relevance'])
I get
ValueError: continuous is not supported
Should I be classifying the relevance label I am trying to predict as another data type? Thanks for any help.
The issue you are facing is likely because your relevance column is made up of continuous numbers.
I would suggest switching over to the RandomForestRegressor() if you are trying to predict continuous numbers. Otherwise, convert your variables into 1s and 0s based on some threshold value.
Simply encode labels as integers and everything will work well. Floats suggest regression.
In particular you can use LabelEncoder http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html
>>> from sklearn.ensemble import RandomForestClassifier as RF
>>> import numpy as np
>>> X = np.array([[0], [1], [1.2]])
>>> y = [0.5, 1.2, -0.1]
>>> clf = RF()
>>> clf.fit(X, y)
RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,
oob_score=False, random_state=None, verbose=0,
warm_start=False)
>>> print(clf.score(X, y))
Traceback (most recent call last):
[.....]
ValueError: continuous is not supported
>>> y = [0, 1, 2]
>>> clf.fit(X, y)
RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,
oob_score=False, random_state=None, verbose=0,
warm_start=False)
>>> print(clf.score(X, y))
1.0
Or compute .score yourself, since it is an extremely simple function:
print(np.mean(clf.predict(X) == y))
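Finally, a hedged sketch of the LabelEncoder route for the original question (df3, features, and the relevance column are taken from the question, so this is illustrative only):
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
y_encoded = le.fit_transform(df3['relevance'])      # float labels -> integer class labels

clf = RandomForestClassifier()
clf.fit(df3[features], y_encoded)
print(clf.score(df3[features], y_encoded))          # accuracy now works
# le.inverse_transform(clf.predict(df3[features]))  # map predictions back to the original floats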