I have followed the tutorial available at: https://www.tensorflow.org/quantum/tutorials/mnist. I have modified this tutorial to the simplest example I could think of: an input set in which x increases linearly from 0 to 1 and y = x < 0.3. I then use a PQC with a single Rx gate with a symbol, and a readout using a Z gate.
When retrieving the optimized symbol and adjusting it manually, I can easily find a value that gives 100% accuracy, but when I let the Adam optimizer run, it converges to either always predicting 1 or always predicting -1. Does anybody spot what I'm doing wrong? (And I apologize for not being able to break the code down into a smaller example.)
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
# used to embed classical data in quantum circuits
def convert_to_circuit_cont(image):
    """Encode truncated classical image into quantum datapoint."""
    values = np.ndarray.flatten(image)
    qubits = cirq.GridQubit.rect(4, 1)
    circuit = cirq.Circuit()
    for i, value in enumerate(values):
        if value:
            circuit.append(cirq.rx(value).on(qubits[i]))
    return circuit
# define classical dataset
length = 1000
np.random.seed(42)
# create a linearly increasing set for x from 0 to 1 in 1/length steps
x_train_sorted = np.asarray([[x/length] for x in range(0,length)], dtype=np.float32)
# p is used to shuffle x and y similarly
p = np.random.permutation(len(x_train_sorted))
x_train = x_train_sorted[p]
# y = x < 0.3 in {-1, 1} for Hinge loss
y_train_sorted = np.asarray([1 if (x/length)<0.30 else -1 for x in range(0,length)])
y_train = y_train_sorted[p]
# test == train for this example
x_test = x_train_sorted[:]
y_test = y_train_sorted[:]
# convert classical data into quantum circuits
x_train_circ = [convert_to_circuit_cont(x) for x in x_train]
x_test_circ = [convert_to_circuit_cont(x) for x in x_test]
x_train_tfcirc = tfq.convert_to_tensor(x_train_circ)
x_test_tfcirc = tfq.convert_to_tensor(x_test_circ)
# define the PQC circuit, consisting out of 1 qubit with 1 gate (Rx) and 1 parameter
def create_quantum_model():
    data_qubits = cirq.GridQubit.rect(1, 1)
    circuit = cirq.Circuit()
    a = sympy.Symbol("a")
    circuit.append(cirq.rx(a).on(data_qubits[0]))
    return circuit, cirq.Z(data_qubits[0])
model_circuit, model_readout = create_quantum_model()
# Build the Keras model.
model = tf.keras.Sequential([
    # The input is the data-circuit, encoded as a tf.string
    tf.keras.layers.Input(shape=(), dtype=tf.string),
    # The PQC layer returns the expected value of the readout gate, range [-1,1].
    tfq.layers.PQC(model_circuit, model_readout),
])
# used for logging progress during optimization
def hinge_accuracy(y_true, y_pred):
    y_true = tf.squeeze(y_true) > 0.0
    y_pred = tf.squeeze(y_pred) > 0.0
    result = tf.cast(y_true == y_pred, tf.float32)
    return tf.reduce_mean(result)
# compile the model with Hinge loss and Adam, as done in the example. Have tried with various learning_rates
model.compile(
    loss=tf.keras.losses.Hinge(),
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.1),
    metrics=[hinge_accuracy])
EPOCHS = 20
BATCH_SIZE = 32
NUM_EXAMPLES = 1000
# fit the model
qnn_history = model.fit(
    x_train_tfcirc, y_train,
    batch_size=32,
    epochs=EPOCHS,
    verbose=1,
    validation_data=(x_test_tfcirc, y_test),
    use_multiprocessing=False)
results = model.predict(x_test_tfcirc)
results_mapped = [-1 if x<=0 else 1 for x in results[:,0]]
print(np.sum(np.equal(results_mapped, y_test)))
After 20 epochs of optimization, I get the following:
1000/1000 [==============================] - 0s 410us/sample - loss: 0.5589 - hinge_accuracy: 0.6982 - val_loss: 0.5530 - val_hinge_accuracy: 0.7070
This results in 700 samples out of 1000 predicted correctly. Looking at the mapped results, this is because all samples are predicted as -1. Looking at the raw results, they decrease monotonically from -0.5484014 to -0.99996257.
When retrieving the weight with w = model.layers[0].get_weights(), subtracting 0.8, and setting it again with model.layers[0].set_weights(w), I get 920/1000 correct. Fine-tuning this process allows me to achieve 1000/1000.
Update 1:
I have also printed the update of the weight over the various epochs:
4.916246, 4.242602, 3.3765688, 2.6855211, 2.3405066, 2.206207, 2.1734586, 2.1656137, 2.1510274, 2.1634471, 2.1683235, 2.188944, 2.1510284, 2.1591303, 2.1632445, 2.1542525, 2.1677444, 2.1702878, 2.163104, 2.1635907
I set the weight to 1.36, a value which gives 908/1000 correct (as opposed to 700/1000). The optimizer moves away from it:
1.7992111, 2.0727847, 2.1370323, 2.15711, 2.1686404, 2.1603785, 2.183334, 2.1563332, 2.156857, 2.169908, 2.1658351, 2.170673, 2.1575692, 2.1505954, 2.1561477, 2.1754034, 2.1545155, 2.1635509, 2.1464484, 2.1707492
One thing I noticed is that the hinge accuracy was 0.75 with the weight at 1.36, which is higher than the 0.7 at 2.17. If that is the case, either I am in an unlucky part of the optimization landscape, where the minimum of the loss does not correspond to the maximum accuracy, or the loss value is computed incorrectly. This is what I will investigate next.
The minimum of the Hinge loss for this example does not correspond to the maximum number of correctly classified examples. Please see the plot of both as a function of the parameter value (and the parameter sweep sketch below). Given that the optimizer works towards the minimum of the loss, not the maximum of the number of correctly classified examples, the code (and framework/optimizer) do what they are supposed to do. Alternatively, one could use a different loss function to try to find a better fit, for example a binarized L1 loss. That function would have the same global optimum, but would likely have a very flat landscape.
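The loss/accuracy comparison can be reproduced with a minimal sweep over the single weight. The sketch below assumes the model, x_test_tfcirc and y_test defined above; the sweep range [0, 2*pi] and the number of sweep points are arbitrary choices:
w = model.layers[0].get_weights()          # list with one array holding the single symbol value
hinge = tf.keras.losses.Hinge()
sweep = np.linspace(0.0, 2 * np.pi, 200)
losses, accuracies = [], []
for value in sweep:
    w[0][...] = value                      # overwrite the parameter in place, whatever its exact shape
    model.layers[0].set_weights(w)
    preds = model.predict(x_test_tfcirc)[:, 0]
    losses.append(float(hinge(y_test, preds)))
    accuracies.append(np.mean(np.where(preds > 0, 1, -1) == y_test))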
Related
I've been going back and forth with this for ages without being able to find a solution anywhere. I have a HuggingFace model ('bert-base-cased') that I'm using with TensorFlow and a custom dataset. I have: (1) tokenized my data; (2) split the data; (3) converted the data to TF dataset format; (4) instantiated, compiled and fit the model.
During training, it behaves as you'd expect: training and validation accuracy go up. But when I evaluate the model on the test dataset using TF's model.evaluate and model.predict, the results are very different. The accuracy reported by model.evaluate is higher (and more or less in line with the validation accuracy); the accuracy derived from model.predict is about 10% lower. (Maybe it's just a coincidence, but it's similar to the reported training accuracy after the single epoch of fine-tuning.)
Can anyone figure out what's causing this? I include snippets of my code below.
# tokenize the dataset
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path="bert-base-cased",use_fast=False)
def tokenize_function(examples):
    return tokenizer(examples['text'], padding="max_length", truncation=True)
tokenized_datasets = dataset.map(tokenize_function, batched=True)
# splitting dataset
trainSize = 0.7
valTestSize = 1 - trainSize
train_testvalid = tokenized_datasets.train_test_split(test_size=valTestSize,stratify_by_column='class')
valid_test = train_testvalid['test'].train_test_split(test_size=0.5,stratify_by_column='class')
# renaming each of the datasets for convenience
train_set = train_testvalid['train']
val_set = valid_test['train']
test_set = valid_test['test']
# converting the tokenized datasets to TensorFlow datasets
data_collator = DefaultDataCollator(return_tensors="tf")
tf_train_dataset = train_set.to_tf_dataset(
    columns=["attention_mask", "input_ids", "token_type_ids"],
    label_cols=['class'],
    shuffle=True,
    collate_fn=data_collator,
    batch_size=8)
tf_validation_dataset = val_set.to_tf_dataset(
    columns=["attention_mask", "input_ids", "token_type_ids"],
    label_cols=['class'],
    shuffle=False,
    collate_fn=data_collator,
    batch_size=8)
tf_test_dataset = test_set.to_tf_dataset(
    columns=["attention_mask", "input_ids", "token_type_ids"],
    label_cols=['class'],
    shuffle=False,
    collate_fn=data_collator,
    batch_size=8)
# loading tensorflow model
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=1)
# compiling the model
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-6),
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=tf.metrics.BinaryAccuracy())
# fitting model
history = model.fit(tf_train_dataset,
                    validation_data=tf_validation_dataset,
                    epochs=1)
# Evaluating the model on the test data using `evaluate`
results = model.evaluate(x=tf_test_dataset,verbose=2) # reports binary_accuracy: 0.9152
# first attempt at using model.predict method
hits = 0
misses = 0
for x, y in tf_test_dataset:
    logits = tf.keras.backend.get_value(model(x, training=False).logits)
    labels = tf.keras.backend.get_value(y)
    for i in range(len(logits)):
        if logits[i][0] < 0:
            z = 0
        else:
            z = 1
        if z == labels[i]:
            hits += 1
        else:
            misses += 1
print(hits/(hits+misses))  # reports binary_accuracy: 0.8187
# second attempt at using model.predict method
modelPredictions = model.predict(tf_test_dataset).logits
testDataLabels = np.concatenate([y for x, y in tf_test_dataset], axis=0)
hits = 0
misses = 0
for i in range(len(modelPredictions)):
    if modelPredictions[i][0] >= 0:
        z = 1
    else:
        z = 0
    if z == testDataLabels[i]:
        hits += 1
    else:
        misses += 1
print(hits/(hits+misses))  # reports binary_accuracy: 0.8187
Things I've tried include:
different loss functions (it's a binary classification problem with the label column of the dataset filled with either a zero or a one for each row);
different ways of unpacking the test dataset and feeding it to model.predict;
altering the 'num_labels' parameter between 1 and 2.
I fixed the problem by changing the num_labels parameter to two and the loss function to sparse categorical cross entropy. (I then had to change my model.predict loop by taking the argmax of the two logits produced by the model.)
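For reference, a sketch of what that fix can look like (not my verbatim code; the hyperparameters are the illustrative ones from the snippets above):
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-6),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[tf.metrics.SparseCategoricalAccuracy()])
model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=1)

# predict by taking the argmax over the two logits
logits = model.predict(tf_test_dataset).logits
predicted_classes = np.argmax(logits, axis=1)
test_labels = np.concatenate([y for x, y in tf_test_dataset], axis=0)
print(np.mean(predicted_classes == test_labels))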
I'm trying to solve a multi-linear-regression problem with a very simple linear network. The network consists of a single dense layer as its output layer, with the activation function set to linear. I synthesize the output data Y by multiplying the input data X by the system (weight) matrix A: Y = A.X. Both X and A contain random numbers with normal or uniform distributions (the problem happens regardless). In this case, the network reaches above 99% accuracy in only 7 epochs over 1000 samples, as one would expect.
Now, if I instead synthesize X from Y (which this time contains uniform random numbers) using A's inverse, X = inv(A).Y, and try to train the network, after two hundred epochs the accuracy only reaches 94%.
Why is there such a huge disparity between the two cases, even though the system matrix (weights) is exactly the same? The only difference is the random distribution of X and Y. If I'm forced to follow the second case, how can I improve the trainability of my network so that it can be trained in a few epochs?
I have tried different optimizers, initializers and regularizations but they didn't help.
Here's the code for the version that doesn't converge so well. To get the first version, I pass gen1 to Dataset.from_generator(...) instead of gen2.
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import keras
N = 256
np.random.seed(0)
A = np.random.normal(0,.4,(N,N))
Ainv = np.linalg.inv(A)
import itertools
input_size = N
def gen1():
    for i in itertools.count(1):
        X = np.random.rand(N,1) - .5
        Y = np.dot(A, X)
        yield (X[:,0], Y[:,0])

def gen2():
    for i in itertools.count(1):
        Y = np.random.rand(N,1) - 0.5
        X = np.dot(Ainv, Y)
        yield (X[:,0], Y[:,0])
dataset = tf.data.Dataset.from_generator(
    gen2,
    (tf.float64, tf.float64),
    (tf.TensorShape([N]), tf.TensorShape([N])))
train_ds = dataset.take(950)
valid_ds = dataset.skip(950).take(50)
#train_ds = train_ds.shuffle(2000, reshuffle_each_iteration = True)
train_ds = train_ds.batch(1)
valid_ds = valid_ds.batch(1)
from keras.layers import Input, Dense
from keras.models import Model
from keras import backend
def rabs(y_t, y_p):
    return backend.mean(backend.abs(y_p - y_t), axis=-1)/(tf.keras.backend.max(y_t) - tf.keras.backend.min(y_t))*100
inp = Input(shape=(input_size,))
out = Dense(N, activation='linear')(inp)
autoencoder = Model(inp, out)
#opt = tf.keras.optimizers.Adam(learning_rate=.0001)
opt = tf.keras.optimizers.SGD(learning_rate=.2, momentum=0.7)
autoencoder.compile(optimizer=opt,
                    loss=tf.keras.losses.MeanSquaredError(), metrics=[rabs])
autoencoder.summary()
autoen_model = autoencoder.fit(train_ds, validation_data = valid_ds, epochs = 200)
plt.plot(autoen_model.history['rabs'])
plt.plot(autoen_model.history['val_rabs'])
plt.title('Model Accuracy')
plt.ylabel('Relative Absolute Mean Error %')
plt.xlabel('Epoch')
plt.legend(['Training set', 'Validation set'], loc='upper left')
plt.show()
[Training graphs: Case 1 (Y synthesized) vs. Case 2 (X synthesized).]
Why I think this is happening
I'm going to ignore that you're doing stochastic gradient descent and just imagine that you're working with the entire dataset for each step. In this case, your problem in both cases is to minimize ||Y - AX||^2 over A. After doing some algebra, you can write this as a quadratic optimization problem of the form
\min_{z} z^T Q z + b^T z,
where z \in R^{256^2} represents the entries of the matrix A, Q is a symmetric matrix obtained only from X, and b is a vector obtained from X and Y. What you are asking TensorFlow to do is solve this problem using gradient descent.
The convergence rate of gradient descent on this type of problem is governed by the condition number of Q, which is its largest eigenvalue divided by its smallest. A condition number much larger than one leads to slow gradient descent, as some variables update much faster than others; a condition number closer to one is best for fast convergence. In Guler's Foundations of Optimization (Section 14.2) you can read more about the effect of the condition number on the convergence of (a variant of) gradient descent, though there are probably better resources on this out there.
In your case, the eigenvalues of Q are just the eigenvalues of XX^T, which are the squared singular values of X. For the first dataset, X is uniformly distributed, and in the second X = A_0^{-1} Y, where Y is uniformly distributed. The difference in convergence you are observing comes from the fact that multiplication by A_0^{-1} wildly increases the condition number of your matrix. In the following Python code I did some random trials of this and found that the condition number of the second matrix is way bigger: thousands of times bigger.
import numpy as np

cond1 = []
cond2 = []
for i in range(10):
    A = np.random.normal(0, 0.4, (256, 256))
    Ainv = np.linalg.inv(A)
    X1 = np.random.rand(256, 950)
    X1sv = np.linalg.svd(X1, compute_uv=False)
    Y = np.random.rand(256, 950)
    X2 = np.dot(Ainv, Y)
    X2sv = np.linalg.svd(X2, compute_uv=False)
    cond1.append((X1sv.max()/X1sv.min())**2)
    cond2.append((X2sv.max()/X2sv.min())**2)
cond1 = np.array(cond1)
cond2 = np.array(cond2)
print('X1\'s condition number has mean {:.2f} and std {:.2f}'.format(cond1.mean(), cond1.std()))
print('X2\'s condition number has mean {:.2f} and std {:.2f}'.format(cond2.mean(), cond2.std()))
print('X2\'s mean condition number is {:.1f} times as big as X1\'s'.format(cond2.mean()/cond1.mean()))
So that's my guess as to why you're seeing worse convergence in the second case than the first. I could be wrong, but maybe this will point you in the right direction.
Suggested Solutions
There are a couple of solutions to this:
1) Use an optimization algorithm like Adam or RMSprop, which makes some effort to improve the condition number of your matrix. You can learn more about those in Chapter 8 of https://www.deeplearningbook.org/.
2) Do you need A to be a Gaussian matrix? A matrix with eigenvalues closer to 1 would reduce this problem.
3) There are optimization techniques (nothing to do with machine learning) that ameliorate the difficulties of a large condition number. You might look up preconditioned gradient descent for more information; a rough sketch of the idea follows below.
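A rough sketch of that preconditioning idea (illustrative only, not tested on your setup): whiten X so that its empirical covariance is close to the identity before feeding it to the dense layer. N and Ainv are as defined in your code; X_raw and W_white are placeholder names.
X_raw = np.dot(Ainv, np.random.rand(N, 1000) - 0.5)   # columns are samples, as in gen2
cov = np.cov(X_raw)                                   # N x N empirical covariance
evals, evecs = np.linalg.eigh(cov)
W_white = evecs @ np.diag(1.0 / np.sqrt(evals + 1e-8)) @ evecs.T   # ZCA whitening matrix
X_white = W_white @ X_raw                             # train on X_white instead of X_raw
The dense layer trained on X_white then learns M = A @ inv(W_white), so A itself can be recovered afterwards as M @ W_white.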
I don't think there is anything wrong with the optimization process; I think the problem is your misleading metric rabs(y_t, y_p).
For the outputs of rabs(y_t, y_p) to be comparable after the MAE is divided by (backend.max(y_t) - backend.min(y_t)), the Y of gen1 and the Y of gen2 would need to follow the same probability distribution, which is not the case here: in gen1, Y = np.dot(A, X) with X uniform, while in gen2, Y = np.random.rand(N,1) - 0.5.
A simple example: consider y_true_1 = (0.1, 0.2, 0.3), y_true_2 = (0.1, 0.2, 0.5) and y_predict_1 = (0.0, 0.1, 0.2), y_predict_2 = (0.0, 0.1, 0.4). Then MAE_1 = MAE_2 = 0.1, but dividing MAE_1 by (max(y_true_1) - min(y_true_1)) gives RMAE_1 = 0.5, while dividing MAE_2 by (max(y_true_2) - min(y_true_2)) gives RMAE_2 = 0.25. You can see now why, if the distribution of y_true_1 differs from the distribution of y_true_2, you cannot expect the two outputs of rabs(y_t, y_p) to be the same.
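A quick numerical check of that example:
import numpy as np

y_true_1, y_pred_1 = np.array([0.1, 0.2, 0.3]), np.array([0.0, 0.1, 0.2])
y_true_2, y_pred_2 = np.array([0.1, 0.2, 0.5]), np.array([0.0, 0.1, 0.4])
mae_1 = np.mean(np.abs(y_pred_1 - y_true_1))              # 0.1
mae_2 = np.mean(np.abs(y_pred_2 - y_true_2))              # 0.1
rmae_1 = mae_1 / (y_true_1.max() - y_true_1.min())        # 0.1 / 0.2 = 0.5
rmae_2 = mae_2 / (y_true_2.max() - y_true_2.min())        # 0.1 / 0.4 = 0.25
print(rmae_1, rmae_2)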
I changed rabs(y_t, y_p) to plain MAE:
def rabs(y_t, y_p):
    return backend.mean(backend.abs(y_p - y_t))
And the optimizer to:
learning_rate_fn = tf.keras.optimizers.schedules.InverseTimeDecay(1.0, 950 * 100, 9)
opt = tf.keras.optimizers.Adam(learning_rate=learning_rate_fn)
I ran it many times with epochs = 100; the outputs for both gen1() and gen2() are around:
gen1:
Epoch 1/100
950/950 [==============================] - 1s 625us/step - loss: 1631.5898 - rabs: 31.9912 - val_loss: 1568.4200 - val_rabs: 31.6044
Epoch 100/100
950/950 [==============================] - 1s 541us/step - loss: 16.1436 - rabs: 3.1877 - val_loss: 19.1974 - val_rabs: 3.5311
gen2:
Epoch 1/100
950/950 [==============================] - 1s 614us/step - loss: 51.9863 - rabs: 5.7896 - val_loss: 20.9347 - val_rabs: 3.5948
Epoch 100/100
950/950 [==============================] - 1s 540us/step - loss: 0.7340 - rabs: 0.6716 - val_loss: 0.5478 - val_rabs: 0.5920
As you can see, the optimizer basically does the same job in both cases: it reduces the loss (MSE) by a factor of about 100 and rabs (MAE) by a factor of about 10.
I want to implement word2vec using TensorFlow 2.0.
I have prepared a dataset according to the skip-gram model and have approximately 18 million observations (target and context words).
I have used the following dataset for my goal:
https://www.kaggle.com/c/quora-question-pairs/notebooks
I created a new dataset for the skip-gram model, using a window size of 2 and a number of skips equal to 2, so that for each target word (the input) context words (what I have to predict) are created. It looks like this (a sketch of one way to generate such pairs follows the table):
target  context
     1        3
     1        1
     2        1
     2     1222
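For reference, one way such (target, context) pairs can be generated is with tf.keras.preprocessing.sequence.skipgrams; this is not the code I used, and sequence and vocab_size below are placeholder values:
from tensorflow.keras.preprocessing.sequence import skipgrams

sequence = [1, 3, 2, 1222, 1]          # word indices of one question (placeholder)
vocab_size = 100000                    # placeholder, roughly len(word_index)
pairs, _ = skipgrams(sequence, vocabulary_size=vocab_size,
                     window_size=2, negative_samples=0.0)
target = [t for t, c in pairs]
context = [[c] for t, c in pairs]      # shape (None, 1), matching the dataset shape shown in the EDIT below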
Here is my code:
dataset_train = tf.data.Dataset.from_tensor_slices((target, context))
dataset_train = dataset_train.shuffle(buffer_size=1024).batch(64)
#Parameters:
num_words = len(word_index)#approximately 100000
embed_size = 300
num_sampled = 64
initializer_softmax = tf.keras.initializers.GlorotUniform()
#Variables:
embeddings_weight = tf.Variable(tf.random.uniform([num_words,embed_size],-1.0,1.0))
softmax_weight = tf.Variable(initializer_softmax([num_words,embed_size]))
softmax_bias = tf.Variable(initializer_softmax([num_words]))
optimizer = tf.keras.optimizers.Adam()
#As before, we are supplying a list of integers (that correspond to our validation vocabulary words) to the embedding_lookup() function, which looks up these rows in the normalized_embeddings tensor, and returns the subset of validation normalized embeddings.
#Now that we have the normalized validation tensor, valid_embeddings, we can multiply this by the full normalized vocabulary (normalized_embedding) to finalize our similarity calculation:
#tf.function
def training(X, y):
    with tf.GradientTape() as tape:
        embed = tf.nn.embedding_lookup(embeddings_weight, X)
        loss = tf.reduce_mean(tf.nn.sampled_softmax_loss(weights=softmax_weight, biases=softmax_bias, inputs=embed,
                                                         labels=y, num_sampled=num_sampled, num_classes=num_words))
    variables = [embeddings_weight, softmax_weight, softmax_bias]
    gradients = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(gradients, variables))
EPOCHS = 30
for epoch in range(EPOCHS):
    print('Epoch:', epoch)
    for X, y in dataset_train:
        training(X, y)

# compute similarity of words:
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings_weight), 1, keepdims=True))
norm_embed = embeddings_weight / norm
temp_emb = tf.nn.embedding_lookup(norm_embed, X)
similarity = tf.matmul(temp_emb, tf.transpose(norm_embed))
But even one epoch takes too long to compute. Is it possible to somehow improve the performance of my code? (I am using Google Colab for the code execution.)
EDIT: this is the shape of my train dataset:
dataset_train
<BatchDataset shapes: ((None,), (None, 1)), types: (tf.int64, tf.int64)>
I was following the instructions from this guide: https://adventuresinmachinelearning.com/word2vec-tutorial-tensorflow/
This is because the softmax function is computationally quite expensive when dealing with millions of classes in the Word2Vec algorithm, as explained here. Faster training is possible with negative sampling.
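For example, here is a sketch of the training step with tf.nn.nce_loss, a common negative-sampling objective, as a drop-in for the sampled softmax above (variable names are reused from the question; this is an illustration, not tested code). Restoring the @tf.function decorator that appears commented out in the question should also help, since the step then runs as a compiled graph:
@tf.function
def training_nce(X, y):
    with tf.GradientTape() as tape:
        embed = tf.nn.embedding_lookup(embeddings_weight, X)
        loss = tf.reduce_mean(tf.nn.nce_loss(
            weights=softmax_weight, biases=softmax_bias,
            labels=y, inputs=embed,
            num_sampled=num_sampled, num_classes=num_words))
    variables = [embeddings_weight, softmax_weight, softmax_bias]
    gradients = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(gradients, variables))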
This question already has answers here: Weighted cost function in tensorflow (2 answers). Closed 4 years ago.
I have a neural network with an MSE loss function implemented something like this:
# input x_ph is of size Nx1 and output should also be of size Nx1
def train_neural_network_batch(x_ph, predict=False):
    prediction = neural_network_model(x_ph)
    # MSE loss function
    cost = tf.reduce_mean(tf.square(prediction - y_ph))
    optimizer = tf.train.AdamOptimizer(learn_rate).minimize(cost)
    # mini-batch optimization here
I'm fairly new to neural networks and Python, but I understand that in each iteration a sample of training points is fed into the neural network and the loss function is evaluated at the points in this sample. However, I would like to modify the loss function so that it weights certain data more heavily. Some pseudocode of what I mean:
# manually compute the MSE of the data without the first sampled element
cost = 0.0
for ii in range(1, len(y_ph)):
    cost += tf.square(prediction[ii] - y_ph[ii])
cost = cost/(len(y_ph) - 1.0)
# weight the squared error of the first sampled data point more heavily according to some parameter W
cost += W*tf.square(prediction[0] - y_ph[0])
I might have more points I wish to weight differently as well, but for now, I'm just wondering how I can implement something like this in tensorflow. I know len(y_ph) is invalid as y_ph is just a placeholder, and I can't just do something like y_ph[i] or prediction[i].
You can do this in multiple ways:
1) If some of your data instances should simply weigh 2 or 3 times more than normal instances, you can just copy those instances multiple times in your dataset. They would then occupy more weight in the loss, and hence satisfy your intention. This is the simplest way.
2) If your weighting is more complex, say a float weighting, you can define a placeholder for the weights, multiply it into the loss, and use feed_dict to feed the weights in the session together with the x batch and y batch. Just make sure instance_weight has the same size as batch_size.
E.g.
import tensorflow as tf
import numpy as np
with tf.variable_scope("test", reuse=tf.AUTO_REUSE):
    x = tf.placeholder(tf.float32, [None, 1])
    y = tf.placeholder(tf.float32, [None, 1])
    instance_weight = tf.placeholder(tf.float32, [None, 1])
    w1 = tf.get_variable("w1", shape=[1, 1])
    prediction = tf.matmul(x, w1)
    cost = tf.square(prediction - y)
    loss = tf.reduce_mean(instance_weight * cost)
    opt = tf.train.AdamOptimizer(0.5).minimize(loss)

with tf.Session() as sess:
    x1 = [[1.], [2.], [3.]]
    y1 = [[2.], [4.], [3.]]
    instance_weight1 = [[10.0], [10.0], [0.1]]
    sess.run(tf.global_variables_initializer())
    print(x1)
    print(y1)
    print(instance_weight1)
    for i in range(1000):
        _, loss1, prediction1 = sess.run([opt, loss, prediction],
                                         feed_dict={instance_weight: instance_weight1, x: x1, y: y1})
        if (i % 100) == 0:
            print(loss1)
            print(prediction1)
NOTE instance_weight1: you may change it to see the difference (here batch_size is set to 3). The first two (x, y) pairs follow the rule y = 2*x, whereas the third pair follows y = x. With weights of [10, 10, 0.1], prediction1 converges to the y = 2*x rule and almost ignores the third pair. The output is:
[[1.9823183]
[3.9646366]
[5.9469547]]
PS: in a TensorFlow graph, it's highly recommended not to use Python for loops, but to use matrix operators instead to parallelize the computation.
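For instance, the weighted cost from the question's pseudocode can be written without any Python loop (sketch only; sample_weights is an illustrative placeholder holding one weight per sample, e.g. W for the first sample and 1.0 elsewhere, and prediction and y_ph are the tensors from the question):
# one weight per sample, fed via feed_dict just like instance_weight above
sample_weights = tf.placeholder(tf.float32, [None, 1])
weighted_cost = tf.reduce_mean(sample_weights * tf.square(prediction - y_ph))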
I have a convolutional neural network with three images as inputs:
x_anchor = tf.placeholder('float', [None, 4900], name='x_anchor')
x_positive = tf.placeholder('float', [None, 4900], name='x_positive')
x_negative = tf.placeholder('float', [None, 4900], name='x_negative')
Within a train function, I feed the placeholders with the actual images:
input1, input2, input3 = training.next_batch(start,end)
....some other operations...
loss_value = sess.run([cost], feed_dict={x_anchor:input1, x_positive:input2, x_negative:input3})
I'm using a triplet loss function on these three inputs (that's actually the cost variable above):
def triplet_loss(d_pos, d_neg):
    margin = 0.2
    loss = tf.reduce_mean(tf.maximum(0., margin + d_pos - d_neg))
    return loss
How can I filter the losses, so only the images with loss_value > 0 will be used to train the network?
How can I implement something like:
if loss_value for (input1, input2, input3) > 0:
    use inputs to train network
else:
    do nothing / try another input
What I have tried so far:
I took the images one by one (input1[0], input2[0], input3[0]), calculated the loss, and if the loss was positive I would calculate (and apply) the gradients. But the problem is I use dropout in my model and I have to apply the model twice on my inputs:
First, to calculate the loss and verify whether it's greater than 0.
Second, to run the optimizer: this is where things go wrong. As I mentioned before, I use dropout, so the results of the model on my inputs are different, and the new loss will sometimes be 0 even if the loss determined at step 1 is greater than 0.
I also tried to use tf.py_func but got stuck.
There's a new TensorFlow feature called “AutoGraph”. AutoGraph converts Python code, including control flow, print() and other Python-native features, into pure TensorFlow graph code. For example:
@autograph.convert()
def huber_loss(a):
    if tf.abs(a) <= delta:
        loss = a * a / 2
    else:
        loss = delta * (tf.abs(a) - delta / 2)
    return loss
becomes this code at execution time due to the decorator:
def tf__huber_loss(a):
    with tf.name_scope('huber_loss'):

        def if_true():
            with tf.name_scope('if_true'):
                loss = a * a / 2
                return loss,

        def if_false():
            with tf.name_scope('if_false'):
                loss = delta * (tf.abs(a) - delta / 2)
                return loss,

        loss = ag__.utils.run_cond(tf.less_equal(tf.abs(a), delta), if_true,
                                   if_false)
        return loss
What you wanted to do could have been implemented before using tf.cond().
I found out about this through this medium post.
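Independently of AutoGraph, a simpler loss-level alternative (a sketch of my own, sometimes described as averaging over the violating triplets only, not something from the question's code) is to let the per-triplet maximum do the filtering and normalize by the number of triplets whose loss is positive, so triplets with zero loss neither contribute gradient nor dilute the mean:
def triplet_loss_violating_only(d_pos, d_neg, margin=0.2):
    # per-triplet hinge: triplets that already satisfy the margin contribute exactly 0
    per_example = tf.maximum(0., margin + d_pos - d_neg)
    num_violating = tf.reduce_sum(tf.cast(per_example > 0., tf.float32))
    # average only over the triplets that actually produce a positive loss
    return tf.reduce_sum(per_example) / (num_violating + 1e-16)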