TensorFlow tf.nn.embedding_lookup

Is there a small neural network inside tf.nn.embedding_lookup?
When I train my model, the values at a given index keep changing. So is the embedding also trained while I'm training my model?
I checked the official embedding_lookup code, but I can't see any tf.Variable for the trainable embedding parameter. However, when I print all tf.Variables, I can find a Variable within the embedding scope.
Thank you.

Yes, the embedding is learned. You can look at the tf.nn.embedding_lookup operation as doing the following matrix multiplication more efficiently:
import tensorflow as tf
import numpy as np
NUM_CATEGORIES, EMBEDDING_SIZE = 5, 3
y = tf.placeholder(name='class_idx', shape=(1,), dtype=tf.int32)
RS = np.random.RandomState(42)
W_em_init = RS.randn(NUM_CATEGORIES, EMBEDDING_SIZE)
W_em = tf.get_variable(name='W_em',
                       initializer=tf.constant_initializer(W_em_init),
                       shape=(NUM_CATEGORIES, EMBEDDING_SIZE))
# Using tf.nn.embedding_lookup
y_em_1 = tf.nn.embedding_lookup(W_em, y)
# Using multiplication
y_one_hot = tf.one_hot(y, depth=NUM_CATEGORIES)
y_em_2 = tf.matmul(y_one_hot, W_em)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
sess.run([y_em_1, y_em_2], feed_dict={y: [1]})
# [array([[ 1.5230298 , -0.23415338, -0.23413695]], dtype=float32),
# array([[ 1.5230298 , -0.23415338, -0.23413695]], dtype=float32)]
The variable W_em will be trained in exactly the same way irrespective of whether you use the y_em_1 or the y_em_2 formulation; y_em_1 is likely to be more efficient, though.
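As a quick sanity check that the lookup is trainable, here is a small sketch that continues the snippet above (all names come from it): a single gradient-descent step on a dummy loss should change only the row of W_em that was looked up.
# Sketch, reusing W_em, y_em_1, y, sess and np from the snippet above.
loss = tf.reduce_sum(y_em_1)                       # dummy loss built from the lookup result
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
before = sess.run(W_em)
sess.run(train_op, feed_dict={y: [1]})
after = sess.run(W_em)
print(np.where(np.any(before != after, axis=1)))   # expected: (array([1]),) - only row 1 changed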

Related

How to initialize a Variable tensor for the weight matrix in a Keras model?

I am trying to use tensor Variables as the weights in a Keras layer.
I know that I can use NumPy arrays instead, but the reason I want to feed tensors is that I want my weight matrices to be of type SparseTensor.
This is a small example that I have coded so far:
import tensorflow as tf
from tensorflow import keras, nn

def model_keras(seed, new_hidden_size_list=None):
    number_of_layers = 1
    hidden_size = 512
    hidden_size_list = [hidden_size] * number_of_layers
    input_size = 784
    output_size = 10
    if new_hidden_size_list is not None:
        hidden_size_list = new_hidden_size_list
    weight_input = tf.Variable(tf.random.normal([784, 512], mean=0.0, stddev=1.0))
    bias_input = tf.Variable(tf.random.normal([512], mean=0.0, stddev=1.0))
    weight_output = tf.Variable(tf.random.normal([512, 10], mean=0.0, stddev=1.0))
    # This gives me an error when used as kernel_initializer and bias_initializer in the keras model
    weight_initializer_input = tf.initializers.variables([weight_input])
    bias_initializer_input = tf.initializers.variables([bias_input])
    weight_initializer_output = tf.initializers.variables([weight_output])
    # This works fine
    #weight_initializer_input = tf.initializers.lecun_uniform(seed=None)
    #bias_initializer_input = tf.initializers.lecun_uniform(seed=None)
    #weight_initializer_output = tf.initializers.lecun_uniform(seed=None)
    print(weight_initializer_input, bias_initializer_input, weight_initializer_output)
    model = keras.models.Sequential()
    for index in range(number_of_layers):
        if index == 0:
            # input layer
            model.add(keras.layers.Dense(hidden_size_list[index], activation=nn.selu, use_bias=True,
                                         kernel_initializer=weight_initializer_input,
                                         bias_initializer=bias_initializer_input,
                                         input_shape=(input_size,)))
        else:
            model.add(keras.layers.Dense(hidden_size_list[index], activation=nn.selu, use_bias=True,
                                         kernel_initializer=weight_initializer_hidden,
                                         bias_initializer=bias_initializer_hidden))
    # output layer
    model.add(keras.layers.Dense(output_size, use_bias=False, kernel_initializer=weight_initializer_output))
    model.add(keras.layers.Activation(nn.softmax))
    return model
I am using TensorFlow 1.15.
Any idea how one can use custom (user-defined) tensor Variables as initializers instead of the pre-set schemes (e.g. Glorot, truncated normal, etc.)? Another approach I could take is to define the computations explicitly instead of using keras.Layer.
Many thanks.
Your code works after enabling eager execution.
import tensorflow as tf
tf.compat.v1.enable_eager_execution()
Add this at the top of your file.
See this for working code.
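As an aside, and only as a sketch of an alternative (not part of the original answer): if enabling eager execution is not an option, custom initial values can instead be passed as plain NumPy arrays through tf.constant_initializer, which also works in graph mode. The array values below are placeholders for whatever weights you actually want to start from.
import numpy as np
import tensorflow as tf
from tensorflow import keras
# Hypothetical custom values; replace with your own weight matrices.
weight_input_np = np.random.normal(0.0, 1.0, (784, 512)).astype(np.float32)
bias_input_np = np.random.normal(0.0, 1.0, (512,)).astype(np.float32)
model = keras.models.Sequential([
    keras.layers.Dense(512, activation=tf.nn.selu,
                       kernel_initializer=tf.constant_initializer(weight_input_np),
                       bias_initializer=tf.constant_initializer(bias_input_np),
                       input_shape=(784,)),
    keras.layers.Dense(10, use_bias=False),
])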

Cannot convert a symbolic Tensor (dense_2_target_2:0) to a numpy array

I'm trying to use an SVM as the last layer of a CNN for classification. This is the code I'm trying to implement:
def custom_loss_value(y_true, y_pred):
    print(y_true)
    print(y_pred)
    X = y_pred
    print(X)
    Y = y_true
    Predict = []
    Prob = []
    scaler = StandardScaler()
    # X = scaler.fit_transform(X)
    param_grid = {'C': [0.1, 1, 8, 10], 'gamma': [0.001, 0.01, 0.1, 1]}
    SVM = GridSearchCV(SVC(kernel='rbf', probability=True), cv=3, param_grid=param_grid, scoring='auc', verbose=1)
    SVM.fit(X, Y)
    Final_Model = SVM.best_estimator_
    Predict = Final_Model.predict(X)
    Prob = Final_Model.predict_proba(X)
    return categorical_hinge(tf.convert_to_tensor(Y, dtype=tf.float32), tf.convert_to_tensor(Predict, dtype=tf.float32))

sgd = tf.keras.optimizers.SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss=custom_loss_value, optimizer=sgd, metrics=['accuracy'])
I'm getting the error: Cannot convert a symbolic Tensor (dense_2_target_2:0) to a numpy array
on the line SVM.fit(X,Y)
I also tried converting y_true and y_pred to NumPy arrays, but I got an error then as well.
To train a neural network with gradient descent, the model needs to be differentiable: you must be able to take a gradient w.r.t. every trainable parameter.
Some problems arise in your code:
A Keras loss function takes TensorFlow tensors, must be built from TF ops, and must return a TensorFlow tensor. sklearn can work with NumPy arrays or lists, but not with tensors, so you can't directly train an SVM inside a Keras loss function.
It is very hard, and practically not useful, to train an SVM through backpropagation. Something about it can be read here.
You can instead train an SVM on top of the pretrained model in place of the final fully-connected layer, as sketched below.
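A rough sketch of that last suggestion (names like model, x_train and y_train are assumptions, not from the question): train the CNN as usual, then use its penultimate layer as a fixed feature extractor and fit the SVM on those features with scikit-learn, outside of any Keras loss function.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC
# Assumes `model` is the already-trained Keras CNN and x_train / y_train are NumPy arrays
# (y_train one-hot encoded, as in a typical classification setup).
feature_extractor = tf.keras.Model(inputs=model.input, outputs=model.layers[-2].output)
features = feature_extractor.predict(x_train)      # plain NumPy features, usable by sklearn
svm = SVC(kernel='rbf', probability=True)
svm.fit(features, np.argmax(y_train, axis=1))       # fit the SVM on the CNN features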

Importing pre-trained embeddings into Tensorflow's Embedding Feature Column

I have a TF Estimator that uses Feature Columns at its input layer. One of these is an EmbeddingColumn which I have been initializing randomly (the default behaviour).
Now I would like to pre-train my embeddings in gensim and transfer the learned embeddings into my TF model. The embedding_column accepts an initializer argument which expects a callable that can be created using tf.contrib.framework.load_embedding_initializer.
However, that function expects a saved TF checkpoint, which I don't have, because I trained my embeddings in gensim.
The question is: how do I save gensim word vectors (which are numpy arrays) as a tensor in the TF checkpoint format so that I can use that to initialize my embedding column?
Figured it out! This worked in TensorFlow 1.14.0.
You first need to turn the embedding vectors into a tf.Variable. Then use tf.train.Saver to save it in a checkpoint.
import tensorflow as tf
import numpy as np

ckpt_name = 'gensim_embeddings'
vocab_file = 'vocab.txt'
tensor_name = 'embeddings_tensor'

vocab = ['A', 'B', 'C']
embedding_vectors = np.array([
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
], dtype=np.float32)

embeddings = tf.Variable(initial_value=embedding_vectors)
init_op = tf.global_variables_initializer()
saver = tf.train.Saver({tensor_name: embeddings})

with tf.Session() as sess:
    sess.run(init_op)
    saver.save(sess, ckpt_name)

# writing vocab file
with open(vocab_file, 'w') as f:
    f.write('\n'.join(vocab))
To use this checkpoint to initialize an embedding feature column:
cat = tf.feature_column.categorical_column_with_vocabulary_file(
    key='cat', vocabulary_file=vocab_file)
embedding_initializer = tf.contrib.framework.load_embedding_initializer(
    ckpt_path=ckpt_name,
    embedding_tensor_name='embeddings_tensor',
    new_vocab_size=3,
    embedding_dim=3,
    old_vocab_file=vocab_file,
    new_vocab_file=vocab_file
)
emb = tf.feature_column.embedding_column(cat, dimension=3, initializer=embedding_initializer, trainable=False)
And we can test to make sure it has been initialized properly:
def test_embedding(feature_column, sample):
    feature_layer = tf.keras.layers.DenseFeatures(feature_column)
    print(feature_layer(sample).numpy())

tf.enable_eager_execution()
sample = {'cat': tf.constant(['B', 'A'], dtype=tf.string)}
test_embedding(emb, sample)
The output, as expected, is:
[[4. 5. 6.]
[1. 2. 3.]]
These are the embeddings for 'B' and 'A', respectively.

Keras TimeDistributed Not Masking CNN Model

For the sake of example, I have an input consisting of 2 images, of total shape (2, 299, 299, 3). I'm trying to apply InceptionV3 on each image, and then subsequently process the output with an LSTM. I'm using a masking layer to exclude a blank image from being processed (specified below).
The code is:
import numpy as np
from keras import backend as K
from keras.models import Sequential, Model
from keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D, BatchNormalization, \
    Input, GlobalAveragePooling2D, Masking, TimeDistributed, LSTM, Dense, Flatten, Reshape, Lambda, Concatenate
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.applications import inception_v3

IMG_SIZE = (299, 299, 3)

def create_base():
    base_model = inception_v3.InceptionV3(weights='imagenet', include_top=False)
    x = GlobalAveragePooling2D()(base_model.output)
    base_model = Model(base_model.input, x)
    return base_model

base_model = create_base()

# Image mask to ignore images with pixel values of -2
IMAGE_MASK = -2 * np.expand_dims(np.ones(IMG_SIZE), 0)

final_input = Input((2, IMG_SIZE[0], IMG_SIZE[1], IMG_SIZE[2]))
final_model = Masking(mask_value=-2.)(final_input)
final_model = TimeDistributed(base_model)(final_model)
final_model = Lambda(lambda x: x, output_shape=lambda s: s)(final_model)
#final_model = Reshape(target_shape=(2, 2048))(final_model)
#final_model = Masking(mask_value = 0.)(final_model)
final_model = LSTM(5, return_sequences=False)(final_model)
final_model = Model(final_input, final_model)

# Create a sample test image
TEST_IMAGE = np.ones(IMG_SIZE)
# Create a test sample input, consisting of a normal image and a masked image
TEST_SAMPLE = np.concatenate((np.expand_dims(TEST_IMAGE, axis=0), IMAGE_MASK))

inp = final_model.input                                     # input placeholder
outputs = [layer.output for layer in final_model.layers]    # all layer outputs
functors = [K.function([inp] + [K.learning_phase()], [out]) for out in outputs]
layer_outs = [func([np.expand_dims(TEST_SAMPLE, 0), 1.]) for func in functors]
This does not work correctly. Specifically, the model should mask the IMAGE_MASK part of the input, but it instead processes it with Inception (giving a nonzero output). Here are the details:
layer_outs[-1], the LSTM output, is fine:
[array([[-0.15324114, -0.09620268, -0.01668587, 0.07938149, -0.00757846]], dtype=float32)]
layer_outs[-2] and layer_outs[-3], the LSTM input, are wrong; the second array should be all zeros:
[array([[[ 0.37713543, 0.36381325, 0.36197218, ..., 0.23298527,
0.43247852, 0.34844452],
[ 0.24972123, 0.2378867 , 0.11810347, ..., 0.51930511,
0.33289322, 0.33403745]]], dtype=float32)]
layer_outs[-4], the input to the CNN, is correctly masked:
[[ 1., 1., 1.],
[ 1., 1., 1.],
[ 1., 1., 1.],
...,
[ 1., 1., 1.],
[ 1., 1., 1.],
[ 1., 1., 1.]]],
[[[-0., -0., -0.],
[-0., -0., -0.],
[-0., -0., -0.],
...,
[-0., -0., -0.],
[-0., -0., -0.],
[-0., -0., -0.]],
Note that the code seems to work correctly with a simpler base_model such as:
def create_base():
    input_layer = Input(IMG_SIZE)
    base_model = Flatten()(input_layer)
    base_model = Dense(2048)(base_model)
    base_model = Model(input_layer, base_model)
    return base_model
I have exhausted most online resources on this. Permutations of this question have been asked on Keras's github, such as here, here and here, but I can't seem to find any concrete resolution.
The links suggest that the issue stems from the combination of TimeDistributed being applied to BatchNormalization; the hacky fixes of either a Lambda identity layer or Reshape layers remove the errors, but don't seem to produce the correct model.
I've tried to force the base model to support masking via:
base_model.__setattr__('supports_masking',True)
and I've also tried applying an identity layer via:
TimeDistributed(Lambda(lambda x: base_model(x), output_shape=lambda s:s))(final_model)
but none of these seem to work. Note that I would like the final model to be trainable, in particular the CNN part of it should remain trainable.
I'm not entirely sure this will work, but based on the comment made here, it should with a newer version of TensorFlow + Keras:
final_model = TimeDistributed(Flatten())(final_input)
final_model = Masking(mask_value = -2.)(final_model)
final_model = TimeDistributed(Reshape(IMG_SIZE))(final_model)
final_model = TimeDistributed(base_model)(final_model)
final_model = Model(final_input,final_model)
I took a look at the source code of masking, and I noticed Keras creates a mask tensor that only reduces the last axis. As long as you're dealing with 5D tensors, it will cause no problem, but when you reduce the dimensions for the LSTM, this masking tensor becomes incompatible.
Doing the first flatten step, before masking, ensures that the masking tensor works properly for 3D tensors. Then you expand the image back to its original size.
I'll probably try to install newer versions soon to test it myself, but the installation procedures have caused too much trouble and I'm in the middle of something important here.
On my machine, this code compiles, but that strange error appears at prediction time (see the link in the first line of this answer).
Creating a model for predicting the intermediate layers
From the code I've seen, I'm not sure the masking is kept internally in the tensors. I don't know exactly how it works, but it seems to be managed separately from the building of the functions inside the layers.
So, try using a standard Keras model to make the predictions:
inp = final_model.input # input placeholder
outputs = [layer.output for layer in final_model.layers] # all layer outputs
fullModel = Model(inp,outputs)
layerPredictions = fullModel.predict(np.expand_dims(TEST_SAMPLE,0))
print(layerPredictions[-2])
It seems to be working as intended. Masking in Keras doesn't produce zeros as you would expect; instead, the masked timesteps are skipped in downstream layers such as the LSTM and the loss calculation. In the case of RNNs, Keras (at least the TensorFlow backend) is implemented such that the states from the previous step are carried over; see tensorflow_backend.py. This is done in part to preserve the shapes of tensors when dynamic input is given.
If you really want zeros, you will have to implement your own layer with logic similar to Masking and return zeros explicitly. To solve your problem, you need a mask before the final LSTM layer that uses the final_input:
class MyMask(Masking):
    """Layer that adds a mask based on the initial input."""
    def compute_mask(self, inputs, mask=None):
        # Might need to adjust shapes
        return K.any(K.not_equal(inputs[0], self.mask_value), axis=-1)

    def call(self, inputs):
        # We just return the input back
        return inputs[1]

    def compute_output_shape(self, input_shape):
        return input_shape[1]

final_model = MyMask(mask_value=-2.)([final_input, final_model])
You can probably attach the mask in a simpler manner, but this custom class essentially adds a mask based on your initial inputs and outputs a Keras tensor that now carries a mask.
In your example, the LSTM will ignore the second image. To confirm, you can set return_sequences=True and check that the outputs for the two images are identical.
I'm trying to implement the same thing; I want my LSTM sequences to have variable sizes. However, I can't even implement your original model. I obtain the following error: TypeError: Layer input_1 does not support masking, but was passed an input_mask: Tensor("time_distributed_1/Reshape_1:0", shape=(?, 100, 100), dtype=bool). I'm using tensorflow 1.10 and keras 2.2.2.
I solved the problem by adding a second input, a mask to specify which timesteps to take into account for the LSTM. That way the image sequence always has the same number of timesteps, the CNN always generates an output, but some of them are ignored for the LSTM input. However, the missing images need to be chosen carefully so that the batch normalization is not affected.
def LSTM_CNN(params):
    resnet = ResNet50(include_top=False, weights='imagenet', pooling='avg')
    input_layer = Input(shape=(params.numFrames, params.height, params.width, 3))
    input_mask = Input(shape=(params.numFrames, 1))
    curr_layer = TimeDistributed(resnet)(input_layer)
    resnetOutput = Dropout(0.5)(curr_layer)
    curr_layer = multiply([resnetOutput, input_mask])
    cnn_output = curr_layer
    curr_layer = Masking(mask_value=0.0)(curr_layer)
    lstm_out = LSTM(256, dropout=0.5)(curr_layer)
    output = Dense(output_dim=params.numClasses, activation='sigmoid')(lstm_out)
    model = Model([input_layer, input_mask], output)
    return model

Minimal RNN example in tensorflow

Trying to implement a minimal toy RNN example in tensorflow.
The goal is to learn a mapping from the input data to the target data, similar to this wonderful concise example in theanets.
Update: We're getting there. The only part remaining is to make it converge (and less convoluted). Could someone help to turn the following into running code or provide a simple example?
import tensorflow as tf
from tensorflow.python.ops import rnn_cell

init_scale = 0.1
num_steps = 7
num_units = 7
input_data = [1, 2, 3, 4, 5, 6, 7]
target = [2, 3, 4, 5, 6, 7, 7]
#target = [1,1,1,1,1,1,1] #converges, but not what we want
batch_size = 1

with tf.Graph().as_default(), tf.Session() as session:
    # Placeholder for the inputs and target of the net
    # inputs = tf.placeholder(tf.int32, [batch_size, num_steps])
    input1 = tf.placeholder(tf.float32, [batch_size, 1])
    inputs = [input1 for _ in range(num_steps)]
    outputs = tf.placeholder(tf.float32, [batch_size, num_steps])

    gru = rnn_cell.GRUCell(num_units)
    initial_state = state = tf.zeros([batch_size, num_units])
    loss = tf.constant(0.0)

    # setup model: unroll
    for time_step in range(num_steps):
        if time_step > 0: tf.get_variable_scope().reuse_variables()
        step_ = inputs[time_step]
        output, state = gru(step_, state)
        loss += tf.reduce_sum(abs(output - target))  # all norms work equally well? NO!
    final_state = state

    optimizer = tf.train.AdamOptimizer(0.1)  # CONVERGEs sooo much better
    train = optimizer.minimize(loss)  # let the optimizer train

    numpy_state = initial_state.eval()
    session.run(tf.initialize_all_variables())
    for epoch in range(10):  # now
        for i in range(7):  # feed fake 2D matrix of 1 byte at a time ;)
            feed_dict = {initial_state: numpy_state, input1: [[input_data[i]]]}  # no
            numpy_state, current_loss, _ = session.run([final_state, loss, train], feed_dict=feed_dict)
            print(current_loss)  # hopefully going down, always stuck at 189, why!?
I think there are a few problems with your code, but the idea is right.
The main issue is that you're using a single tensor for inputs and outputs, as in:
inputs = tf.placeholder(tf.int32, [batch_size, num_steps]).
In TensorFlow the RNN functions take a list of tensors (because num_steps can vary in some models). So you should construct inputs like this:
inputs = [tf.placeholder(tf.int32, [batch_size, 1]) for _ in xrange(num_steps)]
Then you need to take care of the fact that your inputs are int32s, but an RNN cell works on float vectors - that's what embedding_lookup is for.
And finally you'll need to adapt your feed to put in the input list.
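Putting those points together, a minimal sketch of the input handling (TF 1.x-era API; the sizes are illustrative, not taken from the question) could look like this:
import tensorflow as tf

batch_size, num_steps, num_units, vocab_size = 1, 7, 7, 8
# one placeholder per time step, each holding integer ids
inputs = [tf.placeholder(tf.int32, [batch_size, 1]) for _ in range(num_steps)]
# an embedding turns the int32 ids into float vectors that an RNN cell can consume
embedding = tf.get_variable('embedding', [vocab_size, num_units])
float_inputs = [tf.nn.embedding_lookup(embedding, tf.reshape(step, [batch_size]))
                for step in inputs]
# the feed then maps each placeholder in the list to its value for that step
input_data = [1, 2, 3, 4, 5, 6, 7]
feed_dict = {ph: [[input_data[t]]] for t, ph in enumerate(inputs)}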
I think the ptb tutorial is a reasonable place to look, but if you want an even more minimal example of an out-of-the-box RNN you can take a look at some of the rnn unit tests, e.g., here.
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/kernel_tests/rnn_test.py#L164