Extend word embedding layer for incremental word2vec training with Tensorflow - tensorflow

I'd like to train word vectors/embeddings incrementally. With each incremental run I want to extend the vocabulary of the model and add new rows to the embeddings matrix.
The embeddings matrix is a partitioned variable, so ideally I want to avoid using assign since it's not implemented for partitioned variables.
One way I tried looks like this:
# Set prev_vocab_size and new_vocab_size
# according to the corpus/text of the current run
prev_embeddings = tf.get_variable(
    'prev_embeddings',
    shape=[prev_vocab_size, FLAGS.embedding_size],
    dtype=tf.float32,
    initializer=tf.random_uniform_initializer(-1.0, 1.0))
new_embeddings = tf.get_variable(
    'new_embeddings',
    shape=[new_vocab_to_add, FLAGS.embedding_size],
    dtype=tf.float32,
    initializer=tf.random_uniform_initializer(-1.0, 1.0))
combined_embeddings = tf.concat(
    [prev_embeddings, new_embeddings], 0)
embeddings = tf.Variable(
    combined_embeddings,
    expected_shape=[prev_vocab_size + new_vocab_to_add, FLAGS.embedding_size],
    dtype=tf.float32,
    name='embeddings')
Now, this works well for the first run. But if I do a second run, I get an Assign requires shapes of both tensors to match error because the restored original prev_embeddings variable (from the first run) doesn't match the new shape (based on the extended vocabulary) I declare in the second run.
So I modified the tf.train.Saver to save the new_embeddings as the prev_embeddings like this:
saver = tf.train.Saver({"prev_embeddings": new_embeddings})
Now, in the second run, prev_embeddings has the shape that new_embeddings had in the previous run, and I don't get an error for this.
However, now the new_embeddings in the second run has a different shape than in the first run and therefore when restoring the variables from the first run, I get another Assign requires shapes of both tensors to match error.
What's the best way to extend/expand the embeddings variable incrementally with new vectors for new words in the vocabulary while keeping the old and trained vectors?
Any help would be much appreciated.
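For illustration only, here is one possible direction (a sketch I put together, not from the original post): always checkpoint the concatenated matrix under a single fixed name, so each run restores exactly one variable whose shape matches the previous vocabulary.
# Hypothetical sketch: one checkpoint entry named 'embeddings' that grows each run.
prev_embeddings = tf.get_variable(
    'prev_embeddings',
    shape=[prev_vocab_size, FLAGS.embedding_size],
    dtype=tf.float32,
    initializer=tf.random_uniform_initializer(-1.0, 1.0))
new_rows = tf.get_variable(
    'new_rows',
    shape=[new_vocab_to_add, FLAGS.embedding_size],
    dtype=tf.float32,
    initializer=tf.random_uniform_initializer(-1.0, 1.0))
embeddings = tf.Variable(
    tf.concat([prev_embeddings, new_rows], 0), name='embeddings')

restore_saver = tf.train.Saver({'embeddings': prev_embeddings})  # reads the old matrix
save_saver = tf.train.Saver({'embeddings': embeddings})          # writes the grown matrix

# In the session: on runs after the first, restore prev_embeddings, initialize
# new_rows, then run embeddings.initializer so the concat picks up both values;
# at the end, save with save_saver so the next run sees a single, larger 'embeddings'.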

Related

How to use the TensorFlow dataset API with unknown shapes properly?

I've been trying for several hours to complete this task with no success.
I have a very large dataset which is comprised of the following structure:
I want to split this data into X and Y (and pass Y to tf.to_categorical) as in the picture using the tf.data.Dataset API, but unfortunately every attempt I've made to use it has ended in some kind of error.
How do I use tf.data.Dataset to:
Split each row to x and y.
Convert Y to categorical with tf.to_categorical.
Split the dataset into batches.
Feed my model with the dataset.
My current attempt:
def map_sequence():
    for sequence in input_sequences:
        yield sequence[:-1], keras.utils.to_categorical(sequence[-1], total_words)

dataset = tf.data.Dataset.from_generator(map_sequence,
                                         (tf.int32, tf.int32),
                                         (tf.TensorShape(title_length-1), tf.TensorShape(total_words)))
But when I try to train my model with the following code:
inputs = keras.layers.Input(shape=(title_length-1, ))
x = keras.layers.Embedding(total_words, 32)(inputs)
x = keras.layers.Bidirectional(keras.layers.LSTM(64, return_sequences=True))(x)
x = keras.layers.Bidirectional(keras.layers.LSTM(64))(x)
predictions = keras.layers.Dense(total_words, activation='softmax')(x)
model = keras.Model(inputs=inputs, outputs=predictions)
model.compile('Adam', 'categorical_crossentropy', metrics=['acc'])
model.fit(dataset)
I am getting this error: ValueError: Shapes (32954, 1) and (65, 32954) are incompatible
I think you have a similar problem as in this question. Keras expects the dataset that you give to produce batches, not individual examples. Since you are giving it two one-dimensional vectors at a time, Keras interprets each of these as a batch of examples with one feature. So your X data, which has 65 elements, is interpreted as a batch of 65 examples with a single feature (a 65x1 tensor). This fixes the batch size to 65. The output of the model then has shape 65x32,954 (which I assume is the value of total_words). But your Y vector, with 32,954 elements, is again interpreted as a batch of 32,954 examples with one feature (a 32,954x1 tensor). These two things don't match, hence the error. You should be able to fix it by simply making a new dataset with batch before passing it to fit.
In any case, if your input_sequences is a NumPy array, as it seems to be, your method of producing the dataset is not really good, as using a generator will be really slow. This is a better way to do the same:
def map_sequence(sequence):
    # Using tf.one_hot instead of keras.utils.to_categorical
    # because we are working with TensorFlow tensors now
    return sequence[:-1], tf.one_hot(sequence[-1], total_words)

dataset = tf.data.Dataset.from_tensor_slices(input_sequences)
dataset = dataset.map(map_sequence)
dataset = dataset.batch(batch_size)
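With the dataset batched like this, Keras receives proper (x, y) batches of shape (batch_size, title_length-1) and (batch_size, total_words), so the model from the question can be fit on it directly. A short usage sketch (the epochs value is an arbitrary choice on my part):
model.fit(dataset, epochs=10)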

tf.gradients(model.output, model.input) computes a different value each time I run it

I'm trying to compute the gradient of the output layer with respect to the input layer. My neural network is relatively small (input layer composed of 9 activation units and the output layer of 1) and the training went fine as the test provided a very good accuracy. I made the NN model using Keras.
In order to solve my problem, I need to compute the gradient of the output with respect to the input. That is, I need to obtain the Jacobian, which has dimension [1x9]. The gradients function in TensorFlow should provide me with everything I need, but when I run the code below I obtain a different solution every time.
output_v = model.output
input_v = model.input
gradients = tf.gradients(output_v, input_v)
sess = tf.Session()
sess.run(tf.initialize_all_variables())
print(sess.run(model.input,feed_dict={model.input:x_test_N[0:1,:]}))
evaluated_gradients = sess.run(gradients, feed_dict={model.input: x_test_N[0:1,:]})
print(evaluated_gradients)
sess.close()
The first print command shows this value every time I run it (just to make sure that the input values are not modified):
[[-1.4306372 -0.1272892 0.7145787 1.338818 -1.2957293 -0.5402862 -0.7771702 -0.5787912 -0.9157122]]
But the second print shows different ones:
[[ 0.00175761, -0.0490326 , -0.05413761, 0.09952173, 0.06112418, -0.04772799, 0.06557006, -0.02473242, 0.05542536]]
[[-0.00416433, 0.08235116, -0.00930298, 0.04440641, 0.03752216, 0.06378302, 0.03508484, -0.01903783, -0.0538374 ]]
Using finite differences, evaluated_gradients[0,0] = 0.03565103, which isn't close to any of the first values previously printed.
Thanks for your time!
Alberto
Solved by creating a specific session just before training my model:
sess = tf.Session()
sess.run(tf.global_variables_initializer())
K.set_session(sess)
history = model.fit(x_train_N, y_train_N, epochs=n_epochs,
                    validation_split=split, verbose=1, batch_size=n_batch_size,
                    shuffle='true', callbacks=[early_stop, tensorboard])
And evaluating the gradients after training, while the tf.Session is still open:
evaluated_gradients = sess.run(K.gradients(model.output, model.input), feed_dict={model.input: x_test_N})
Presumably your network is set up to initialize weights to random values. When you run sess.run(tf.initialize_all_variables()), you are initializing your variables to new random values. Therefore you get different values for output_v in every run, and hence different gradients. If you want to use a model you trained before, you should replace the initialization via initialize_all_variables() with a restore command. I am not familiar with how this is done in Keras since I usually work directly with TensorFlow, but I would try this.
Also note that initialize_all_variables is deprecated and you should use global_variables_initializer instead.
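For completeness, here is a minimal sketch (my own, assuming a TF 1.x Keras setup and an already-trained model) of the restore-free alternative: reuse the session that Keras trained the model in, so the trained weights are used instead of freshly re-initialized ones.
from keras import backend as K

sess = K.get_session()   # the session that holds the trained weights
gradients = tf.gradients(model.output, model.input)
evaluated_gradients = sess.run(gradients,
                               feed_dict={model.input: x_test_N[0:1, :]})
print(evaluated_gradients)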

How to initialize a keras tensor employed in an API model

I am trying to implement a memory-augmented neural network, in which the memory and the read/write/usage weight vectors are updated according to a combination of their previous values. These weights are different from the classic weight matrices between layers that are automatically updated with the fit() function! My problem is the following: how can I correctly initialize these weights as Keras tensors and use them in the model? I explain it better with the following simplified example.
My API model is something like:
input = Input(shape=(5,6))
controller = LSTM(20, activation='tanh',stateful=False, return_sequences=True)(input)
write_key = Dense(4,activation='tanh')(controller)
read_key = Dense(4,activation='tanh')(controller)
w_w = Add()([w_u, w_r]) #<---- UPDATE OF WRITE WEIGHTS
to_write = Dot()([w_w, write_key])
M = Add()([M,to_write])
cos_sim = Dot()([M,read_key])
w_r = Lambda(lambda x: softmax(x,axis=1))(cos_sim) #<---- UPDATE OF READ WEIGHTS
w_u = Add()([w_u,w_r,w_w]) #<---- UPDATE OF USAGE WEIGHTS
retrieved_memory = Dot()([w_r,M])
controller_output = concatenate([controller,retrieved_memory])
final_output = Dense(6,activation='sigmoid')(controller_output)
You can see that, in order to compute w_w^t, I have to have first defined w_r^{t-1} and w_u^{t-1}. So, at the beginning I have to provide a valid initialization for these vectors. What is the best way to do it? The initializations I would like to have are:
M = K.variable(numpy.zeros((10,4))) # MEMORY
w_r = K.variable(numpy.zeros((1,10))) # READ WEIGHTS
w_u = K.variable(numpy.zeros((1,10))) # USAGE WEIGHTS
But, analogously to what is said in #2486 (entron), these commands do not return a Keras tensor with all the needed metadata, and so I get the following error:
AttributeError: 'NoneType' object has no attribute 'inbound_nodes'
I also thought of using the old M, w_r and w_u as further inputs at each iteration and getting the same variables back as outputs to close the loop. But this means that I would have to use the fit() function to train the model online with just the target as the final output (Model 1), and use the predict() function on the model with all the secondary outputs (Model 2) to get the variables for the next iteration. I would also have to pass the weight matrices from Model 1 to Model 2 using get_weights() and set_weights(). As you can see, it becomes a little messy and too slow.
Do you have any suggestions for this problem?
P.S. Please, do not focus too much on the API model above because it is a simplified (almost meaningless) version of the complete one where I skipped several key steps.
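One commonly suggested workaround, offered here only as a hedged sketch (it may not fit the full model above), is to wrap the backend variables in Lambda layers so that their outputs carry the Keras metadata that raw K.variable tensors lack:
from keras.layers import Lambda
import keras.backend as K
import numpy

M_init   = K.variable(numpy.zeros((10, 4)))   # memory
w_r_init = K.variable(numpy.zeros((1, 10)))   # read weights
w_u_init = K.variable(numpy.zeros((1, 10)))   # usage weights

# Each Lambda ignores its input and returns the variable, but because the result
# comes out of a Keras layer it has the inbound_nodes metadata the model needs.
M_0   = Lambda(lambda x: M_init)(input)
w_r_0 = Lambda(lambda x: w_r_init)(input)
w_u_0 = Lambda(lambda x: w_u_init)(input)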

What's the difference between tf.placeholder and tf.Variable?

I'm a newbie to TensorFlow. I'm confused about the difference between tf.placeholder and tf.Variable. In my view, tf.placeholder is used for input data, and tf.Variable is used to store the state of data. That is all I know.
Could someone explain to me more in detail about their differences? In particular, when to use tf.Variable and when to use tf.placeholder?
In short, you use tf.Variable for trainable variables such as weights (W) and biases (B) for your model.
weights = tf.Variable(
    tf.truncated_normal([IMAGE_PIXELS, hidden1_units],
                        stddev=1.0 / math.sqrt(float(IMAGE_PIXELS))),
    name='weights')
biases = tf.Variable(tf.zeros([hidden1_units]), name='biases')
tf.placeholder is used to feed actual training examples.
images_placeholder = tf.placeholder(tf.float32, shape=(batch_size, IMAGE_PIXELS))
labels_placeholder = tf.placeholder(tf.int32, shape=(batch_size))
This is how you feed the training examples during the training:
for step in xrange(FLAGS.max_steps):
    feed_dict = {
        images_placeholder: images_feed,
        labels_placeholder: labels_feed,
    }
    _, loss_value = sess.run([train_op, loss], feed_dict=feed_dict)
Your tf.Variables will be trained (modified) as a result of this training.
See more at https://www.tensorflow.org/versions/r0.7/tutorials/mnist/tf/index.html. (Examples are taken from the web page.)
The difference is that with tf.Variable you have to provide an initial value when you declare it. With tf.placeholder you don't have to provide an initial value; you specify it at run time with the feed_dict argument to Session.run.
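A minimal sketch of that difference (TF 1.x, the names are my own):
W = tf.Variable(tf.zeros([3, 2]))                 # initial value is mandatory
x = tf.placeholder(tf.float32, shape=[None, 3])   # no initial value; fed at run time

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(tf.matmul(x, W), feed_dict={x: [[1., 2., 3.]]}))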
Since tensor computations are composed as graphs, it's better to interpret the two in terms of graphs.
Take for example the simple linear regression
WX+B=Y
where W and B stand for the weights and bias and X for the observations' inputs and Y for the observations' outputs.
Obviously X and Y are of the same nature (manifest variables), which differs from that of W and B (latent variables). X and Y are values of the samples (observations) and hence need a place to be filled, while W and B are the weights and bias, Variables (whose previous values affect the later ones) in the graph, which should be trained using different X and Y pairs. We place different samples into the Placeholders to train the Variables.
We only need to save or restore the Variables (at checkpoints) to save or rebuild the graph with the code.
Placeholders are mostly holders for the different datasets (for example training data or test data). However, Variables are trained in the training process for the specific task, i.e., to predict the outcome of the input or map the inputs to the desired labels. They remain the same until you retrain or fine-tune the model using different or the same samples, which are filled into the Placeholders, often through the feed_dict. For instance:
session.run(a_graph, feed_dict={a_placeholder_name: sample_values})
Placeholders are also passed as parameters to set up models.
If you change placeholders (add, delete, change the shape etc) of a model in the middle of training, you can still reload the checkpoint without any other modifications. But if the variables of a saved model are changed, you should adjust the checkpoint accordingly to reload it and continue the training (all variables defined in the graph should be available in the checkpoint).
To sum up, if the values come from the samples (observations you already have), you can safely make a placeholder to hold them, while if you need a parameter to be trained, use a Variable (simply put, set Variables for the values you want TF to obtain automatically).
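To make that concrete, here is a small sketch of the WX+B=Y picture (the shapes are illustrative assumptions):
W = tf.Variable(tf.random_normal([3, 1]), name='W')   # latent, trained
B = tf.Variable(tf.zeros([1]), name='B')              # latent, trained
X = tf.placeholder(tf.float32, shape=[None, 3])       # manifest, fed per sample batch
Y = tf.placeholder(tf.float32, shape=[None, 1])       # manifest, fed per sample batch

Y_pred = tf.matmul(X, W) + B
loss = tf.reduce_mean(tf.square(Y_pred - Y))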
In some interesting models, like a style transfer model, the input pixels are going to be optimized and the normally-called model variables are fixed; then we should make the input (usually initialized randomly) a variable, as implemented in that link.
For more information please refer to this simple and illustrative doc.
TL;DR
Variables
For parameters to learn
Values can be derived from training
Initial values are required (often random)
Placeholders
Allocated storage for data (such as for image pixel data during a feed)
Initial values are not required (but can be set, see tf.placeholder_with_default)
The most obvious difference between the tf.Variable and the tf.placeholder is that
you use variables to hold and update parameters. Variables are
in-memory buffers containing tensors. They must be explicitly
initialized and can be saved to disk during and after training. You
can later restore saved values to exercise or analyze the model.
Initialization of the variables is done with sess.run(tf.global_variables_initializer()). Also while creating a variable, you need to pass a Tensor as its initial value to the Variable() constructor and when you create a variable you always know its shape.
On the other hand, you can't update a placeholder. It also should not be initialized, but because it is a promise to have a tensor, you need to feed a value into it with sess.run(<op>, {a: <some_val>}). And lastly, in comparison to a variable, a placeholder might not know its shape. You can either provide parts of the dimensions or provide nothing at all.
There are other differences:
the values inside the variable can be updated during optimizations
variables can be shared, and can be non-trainable
the values inside the variable can be stored after training
when the variable is created, 3 ops are added to a graph (variable op, initializer op, ops for the initial value)
placeholder is a function, Variable is a class (hence the uppercase)
when you use TF in a distributed environment, variables are stored in a special place (parameter server) and are shared between the workers.
Interesting part is that not only placeholders can be fed. You can feed the value to a Variable and even to a constant.
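A small illustration of that last point (my own sketch, TF 1.x): feeding simply overrides a tensor's value for that single run call.
v = tf.Variable(1.0)
c = tf.constant(2.0)
out = v + c

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(out))                                # 3.0
    print(sess.run(out, feed_dict={c: 20.0}))           # 21.0 -- the constant is overridden
    print(sess.run(out, feed_dict={v: 10.0, c: 20.0}))  # 30.0 -- the variable as well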
Adding to others' answers, they also explain it very well in this MNIST tutorial on the TensorFlow website:
We describe these interacting operations by manipulating symbolic
variables. Let's create one:
x = tf.placeholder(tf.float32, [None, 784])
x isn't a specific value. It's a placeholder, a value that we'll input when we ask TensorFlow to
run a computation. We want to be able to input any number of MNIST
images, each flattened into a 784-dimensional vector. We represent
this as a 2-D tensor of floating-point numbers, with a shape [None,
784]. (Here None means that a dimension can be of any length.)
We also need the weights and biases for our model. We could imagine
treating these like additional inputs, but TensorFlow has an even
better way to handle it: Variable. A Variable is a modifiable tensor
that lives in TensorFlow's graph of interacting operations. It can be
used and even modified by the computation. For machine learning
applications, one generally has the model parameters be Variables.
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
We create these Variables by giving tf.Variable the initial value of
the Variable: in this case, we initialize both W and b as tensors full
of zeros. Since we are going to learn W and b, it doesn't matter very
much what they initially are.
TensorFlow uses three types of containers to store/execute the process:
Constants: constants hold fixed data.
Variables: data values that will be changed, with respect to functions such as cost_function.
Placeholders: training/testing data will be passed into the graph.
Example snippet:
import numpy as np
import tensorflow as tf
### Model parameters ###
W = tf.Variable([.3], tf.float32)
b = tf.Variable([-.3], tf.float32)
### Model input and output ###
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)
### loss ###
loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares
### optimizer ###
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
### training data ###
x_train = [1,2,3,4]
y_train = [0,-1,-2,-3]
### training loop ###
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init) # reset values to wrong
for i in range(1000):
    sess.run(train, {x: x_train, y: y_train})
As the name says, a placeholder is a promise to provide a value later.
Variables are simply the training parameters (W (matrix), b (bias)), the same as the normal variables you use in your day-to-day programming, which the trainer updates/modifies on each run/step.
A placeholder doesn't require any initial value: when you created x and y, TF didn't allocate any memory; instead, later, when you feed the placeholders in sess.run() using feed_dict, TensorFlow allocates appropriately sized memory for them (x and y) - this unconstrainedness allows us to feed data of any size and shape.
In a nutshell:
Variable - a parameter you want the trainer (i.e. GradientDescentOptimizer) to update after each step.
Placeholder demo -
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b # + provides a shortcut for tf.add(a, b)
Execution:
print(sess.run(adder_node, {a: 3, b:4.5}))
print(sess.run(adder_node, {a: [1,3], b: [2, 4]}))
resulting in the output
7.5
[ 3. 7.]
In the first case 3 and 4.5 are passed to a and b respectively, and then to adder_node, outputting 7.5. In the second case there's a feed list: in the first step 1 and 2 are added, then 3 and 4 (a and b).
Relevant reads:
tf.placeholder doc.
tf.Variable doc.
Variable VS placeholder.
Variables
A TensorFlow variable is the best way to represent shared, persistent state manipulated by your program. Variables are manipulated via the tf.Variable class. Internally, a tf.Variable stores a persistent tensor. Specific operations allow you to read and modify the values of this tensor. These modifications are visible across multiple tf.Sessions, so multiple workers can see the same values for a tf.Variable. Variables must be initialized before using.
Example:
x = tf.Variable(3, name="x")
y = tf.Variable(4, name="y")
f = x*x*y + y + 2
This creates a computation graph. The variables (x and y) can be initialized and the function (f) evaluated in a tensorflow session as follows:
with tf.Session() as sess:
    x.initializer.run()
    y.initializer.run()
    result = f.eval()
    print(result)
42
Placeholders
A placeholder is a node (same as a variable) whose value can be initialized in the future. These nodes basically output the value assigned to them during runtime. A placeholder node can be created using tf.placeholder(), to which you can provide arguments such as the type of the variable and/or its shape. Placeholders are extensively used for representing the training dataset in a machine learning model, as the training dataset keeps changing.
Example:
A = tf.placeholder(tf.float32, shape=(None, 3))
B = A + 5
Note: 'None' for a dimension means 'any size'.
with tf.Session() as sess:
    B_val_1 = B.eval(feed_dict={A: [[1, 2, 3]]})
    B_val_2 = B.eval(feed_dict={A: [[4, 5, 6], [7, 8, 9]]})
print(B_val_1)
[[6. 7. 8.]]
print(B_val_2)
[[9. 10. 11.]
[12. 13. 14.]]
References:
https://www.tensorflow.org/guide/variables
https://www.tensorflow.org/api_docs/python/tf/placeholder
O'Reilly: Hands-On Machine Learning with Scikit-Learn & Tensorflow
Think of a Variable in TensorFlow as the normal variables we use in programming languages. We initialize variables, and we can modify them later as well. A placeholder, on the other hand, doesn't require an initial value. A placeholder simply allocates a block of memory for future use. Later, we can use feed_dict to feed data into the placeholder. By default, a placeholder has an unconstrained shape, which allows you to feed tensors of different shapes in a session. You can make a constrained shape by passing the optional argument shape, as I have done below.
x = tf.placeholder(tf.float32,(3,4))
y = x + 2
sess = tf.Session()
print(sess.run(y)) # will cause an error
s = np.random.rand(3,4)
print(sess.run(y, feed_dict={x:s}))
While doing a machine learning task, most of the time we don't know the number of rows in advance, but (let's assume) we do know the number of features or columns. In that case, we can use None.
x = tf.placeholder(tf.float32, shape=(None,4))
Now, at run time we can feed any matrix with 4 columns and any number of rows.
Also, placeholders are used for input data (they are a kind of variable that we use to feed our model), whereas Variables are parameters such as weights that we train over time.
Placeholder :
A placeholder is simply a variable that we will assign data to at a later date. It allows us to create our operations and build our computation graph, without needing the data. In TensorFlow terminology, we then feed data into the graph through these placeholders.
Initial values are not required, but placeholders can have default values with tf.placeholder_with_default.
We have to provide values at runtime, like:
a = tf.placeholder(tf.int16)  # declare a placeholder
b = tf.placeholder(tf.int16)  # declare a placeholder
add = tf.add(a, b)            # an op built on the placeholders
and use it in a session like:
sess.run(add, feed_dict={a: 2, b: 3})  # the values are assigned at runtime
Variable :
A TensorFlow variable is the best way to represent shared,
persistent state manipulated by your program.
Variables are manipulated via the tf.Variable class. A tf.Variable
represents a tensor whose value can be changed by running ops on it.
Example : tf.Variable("Welcome to tensorflow!!!")
Tensorflow 2.0 Compatible Answer: The concept of Placeholders (tf.placeholder) is not available in Tensorflow 2.x (>= 2.0) by default, as the default execution mode is Eager Execution.
However, we can use them in Graph Mode (by disabling Eager Execution).
The equivalent command for the TF Placeholder in version 2.x is tf.compat.v1.placeholder.
The equivalent command for the TF Variable in version 2.x is tf.Variable; if you want to migrate code from 1.x to 2.x, the equivalent command is tf.compat.v2.Variable.
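For instance, a minimal sketch of using the compat placeholder in Graph Mode under TF 2.x:
import tensorflow as tf

tf.compat.v1.disable_eager_execution()   # switch to Graph Mode
x = tf.compat.v1.placeholder(tf.float32, shape=(None, 3))
w = tf.Variable(tf.ones((3, 1)))
y = tf.matmul(x, w)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1., 2., 3.]]}))   # [[6.]]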
Please refer to this Tensorflow Page for more information about Tensorflow version 2.0.
Please refer to the Migration Guide for more information about migration from versions 1.x to 2.x.
Think of a computation graph. In such a graph, we need input nodes to pass our data to the graph; those nodes should be defined as Placeholders in TensorFlow.
Do not think of it as a general program in Python. You can write a Python program and do all the things explained in the other answers just with Variables, but for computation graphs in TensorFlow, to feed your data into the graph, you need to define those nodes as Placeholders.
For TF V1:
A Constant has an initial value and won't change in the computation.
A Variable has an initial value and can change in the computation (so it is good for parameters).
A Placeholder has no initial value and won't change in the computation (so it is good for inputs like prediction instances).
For TF V2 it's the same, but they try to hide Placeholders (graph mode is not preferred).
In TensorFlow, a variable is just another tensor (like tf.constant or tf.placeholder). It just so happens that variables can be modified by the computation. tf.placeholder is used for inputs that will be provided externally to the computation at run-time (e.g. training data). tf.Variable is used for inputs that are part of the computation and are going to be modified by the computation (e.g. weights of a neural network).

Update only part of the word embedding matrix in Tensorflow

Assuming that I want to update a pre-trained word-embedding matrix during training, is there a way to update only a subset of the word embedding matrix?
I have looked into the Tensorflow API page and found this:
# Create an optimizer.
opt = GradientDescentOptimizer(learning_rate=0.1)
# Compute the gradients for a list of variables.
grads_and_vars = opt.compute_gradients(loss, <list of variables>)
# grads_and_vars is a list of tuples (gradient, variable). Do whatever you
# need to the 'gradient' part, for example cap them, etc.
capped_grads_and_vars = [(MyCapper(gv[0]), gv[1]) for gv in grads_and_vars]
# Ask the optimizer to apply the capped gradients.
opt.apply_gradients(capped_grads_and_vars)
However, how do I apply that to the word-embedding matrix? Suppose I do:
word_emb = tf.Variable(0.2 * tf.random_uniform([syn0.shape[0],s['es']], minval=-1.0, maxval=1.0, dtype=tf.float32),name='word_emb',trainable=False)
gather_emb = tf.gather(word_emb,indices) #assuming that I pass some indices as placeholder through feed_dict
opt = tf.train.AdamOptimizer(1e-4)
grad = opt.compute_gradients(loss,gather_emb)
How do I then use opt.apply_gradients and tf.scatter_update to update the original embedding matrix? (Also, TensorFlow throws an error if the second argument of compute_gradients is not a tf.Variable.)
TL;DR: With the default implementation of opt.minimize(loss), TensorFlow will generate a sparse update for word_emb that modifies only the rows of word_emb that participated in the forward pass.
The gradient of the tf.gather(word_emb, indices) op with respect to word_emb is a tf.IndexedSlices object (see the implementation for more details). This object represents a sparse tensor that is zero everywhere, except for the rows selected by indices. A call to opt.minimize(loss) calls AdamOptimizer._apply_sparse(word_emb_grad, word_emb), which makes a call to tf.scatter_sub(word_emb, ...)* that updates only the rows of word_emb that were selected by indices.
If on the other hand you want to modify the tf.IndexedSlices that is returned by opt.compute_gradients(loss, word_emb), you can perform arbitrary TensorFlow operations on its indices and values properties, and create a new tf.IndexedSlices that can be passed to opt.apply_gradients([(..., word_emb)]). For example, you could cap the gradients with MyCapper() (as in the example) using the following calls:
(grad, var), = opt.compute_gradients(loss, [word_emb])
capped_grad = tf.IndexedSlices(MyCapper(grad.values), grad.indices)
train_op = opt.apply_gradients([(capped_grad, var)])
Similarly, you could change the set of indices that will be modified by creating a new tf.IndexedSlices with a different indices.
* In general, if you want to update only part of a variable in TensorFlow, you can use the tf.scatter_update(), tf.scatter_add(), or tf.scatter_sub() operators, which respectively set, add to (+=) or subtract from (-=) the value previously stored in a variable.
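As a tiny sketch of those scatter ops (the shapes are chosen arbitrarily):
emb = tf.Variable(tf.zeros([5, 3]))
# Overwrite rows 0 and 2 with ones; scatter_add / scatter_sub work analogously.
update_op = tf.scatter_update(emb, [0, 2], tf.ones([2, 3]))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(update_op)
    print(sess.run(emb))   # rows 0 and 2 are now ones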
Since you just want to select the elements to be updated (and not to change the gradients), you can do as follows.
Let indices_to_update be a boolean tensor that indicates the indices you wish to update, and let entry_stop_gradients be defined as in the link. Then:
gather_emb = entry_stop_gradients(gather_emb, indices_to_update)
(Source)
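For reference, a helper of that shape could look roughly like this (a sketch built around the same stop_gradient masking trick used in the next answer, not a verbatim copy of the linked code):
def entry_stop_gradients(target, mask):
    # mask: boolean tensor, True where gradients should flow (entries to update)
    mask_h = tf.cast(tf.logical_not(mask), dtype=target.dtype)
    mask = tf.cast(mask, dtype=target.dtype)
    # Blocked entries pass through stop_gradient; the rest keep their gradients.
    return tf.stop_gradient(mask_h * target) + mask * target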
Actually, I was also struggling with such a problem. In my case, I needed to train a model with w2v embeddings, but not all of the tokens existed in the embedding matrix. Thus, for those tokens which were not in the matrix, I used random initialization. Of course, the tokens for which embeddings were already trained shouldn't be updated, so I came up with the following solution:
class PartialEmbeddingsUpdate(tf.keras.layers.Layer):
    def __init__(self, len_vocab, weights, indices_to_update):
        super(PartialEmbeddingsUpdate, self).__init__()
        self.embeddings = tf.Variable(weights, name='embedding', dtype=tf.float32)
        self.bool_mask = tf.equal(tf.expand_dims(tf.range(0, len_vocab), 1),
                                  tf.expand_dims(indices_to_update, 0))
        self.bool_mask = tf.reduce_any(self.bool_mask, 1)
        self.bool_mask_not = tf.logical_not(self.bool_mask)
        self.bool_mask_not = tf.expand_dims(tf.cast(self.bool_mask_not, dtype=self.embeddings.dtype), 1)
        self.bool_mask = tf.expand_dims(tf.cast(self.bool_mask, dtype=self.embeddings.dtype), 1)

    def call(self, input):
        input = tf.cast(input, dtype=tf.int32)
        embeddings = tf.stop_gradient(self.bool_mask_not * self.embeddings) + self.bool_mask * self.embeddings
        return tf.gather(embeddings, input)
Here len_vocab is your vocabulary length, weights is the matrix of weights (some of which shouldn't be updated), and indices_to_update are the indices of the tokens which should be updated. After that I applied this layer instead of tf.keras.layers.Embedding. I hope it helps everyone who has encountered the same problem.
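A hypothetical usage sketch (the names and shapes are assumptions on my part):
embedding_layer = PartialEmbeddingsUpdate(len_vocab, weights, indices_to_update)
token_ids = tf.constant([[1, 5, 7], [2, 0, 3]])
vectors = embedding_layer(token_ids)   # gradients only reach the rows listed in indices_to_update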