Keras - custom loss function - chamfer distance - tensorflow

I am attempting object segmentation using a custom loss function as defined below:
from keras import backend as K

def chamfer_loss_value(y_true, y_pred):
    # flatten the batch
    y_true_f = K.batch_flatten(y_true)
    y_pred_f = K.batch_flatten(y_pred)
    # ==========
    # get chamfer distance sum
    # error here
    y_pred_mask_f = K.cast(K.greater_equal(y_pred_f, 0.5), dtype='float32')
    finalChamferDistanceSum = K.sum(y_pred_mask_f * y_true_f, axis=1, keepdims=True)
    return K.mean(finalChamferDistanceSum)

def chamfer_loss(y_true, y_pred):
    return chamfer_loss_value(y_true, y_pred)
y_pred_f is the output of my U-net. y_true_f is the result of a Euclidean distance transform on the ground truth label mask x, as shown below:
from scipy import ndimage
distTrans = ndimage.distance_transform_edt(1 - x)
To compute the Chamfer distance, you multiply the predicted image (ideally a mask of 1s and 0s) with the ground truth distance transform and simply sum over all pixels. To do this, I need a mask y_pred_mask_f obtained by thresholding y_pred_f, which I then multiply with y_true_f and sum over all pixels.
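As a toy numpy illustration of this computation (a hypothetical 4x4 example, not from the original question):

import numpy as np
from scipy import ndimage

# Ground truth mask and its distance transform (zero on the object).
x = np.zeros((4, 4))
x[1:3, 1:3] = 1
dist_trans = ndimage.distance_transform_edt(1 - x)
# An imperfect predicted mask: each predicted pixel off the object
# contributes its distance to the object as a penalty.
pred_mask = np.zeros((4, 4))
pred_mask[0:2, 0:2] = 1
print(np.sum(pred_mask * dist_trans))  # ~3.41 = 1 + 1 + sqrt(2)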
y_pred_f provides a continuous range of values in [0,1], and I get the error "None type not supported" at the evaluation of y_pred_mask_f. I know the loss function has to be differentiable, and greater_equal and cast are not. But is there a way to circumvent this in Keras, perhaps using some workaround in TensorFlow?

Well, this was tricky. The reason behind your error is that there is no continuous dependence between your loss and your network. In order to compute gradients of your loss w.r.t. the network's weights, your loss must be differentiable through the indicator of whether your output is greater than 0.5 (as this is the only connection between your final loss value and the output y_pred of your network). This is impossible, as this indicator is piecewise constant and not continuous.
Possible solution - smooth your indicator:
def chamfer_loss_value(y_true, y_pred):
    # flatten the batch
    y_true_f = K.batch_flatten(y_true)
    y_pred_f = K.batch_flatten(y_pred)
    y_pred_mask_f = K.sigmoid(y_pred_f - 0.5)
    finalChamferDistanceSum = K.sum(y_pred_mask_f * y_true_f, axis=1, keepdims=True)
    return K.mean(finalChamferDistanceSum)
The sigmoid is a continuous version of a step function. If your output already comes from a sigmoid, you could simply use y_pred_f instead of y_pred_mask_f.
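If the plain sigmoid turns out to be too soft an approximation of the hard threshold, one variation (my assumption, not part of the original answer) is to scale its argument by a steepness factor; steepness here is a hypothetical hyperparameter:

from keras import backend as K

def chamfer_loss_value(y_true, y_pred, steepness=10.0):
    # flatten the batch
    y_true_f = K.batch_flatten(y_true)
    y_pred_f = K.batch_flatten(y_pred)
    # larger steepness -> closer to the hard >= 0.5 indicator,
    # while remaining differentiable everywhere
    y_pred_mask_f = K.sigmoid(steepness * (y_pred_f - 0.5))
    finalChamferDistanceSum = K.sum(y_pred_mask_f * y_true_f, axis=1, keepdims=True)
    return K.mean(finalChamferDistanceSum)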

Related

Reconstruction loss function of VAE

I am learning from an example in the TensorFlow documentation, https://www.tensorflow.org/tutorials/generative/cvae#define_the_loss_function_and_the_optimizer:
VAEs train by maximizing the evidence lower bound (ELBO) on the marginal log-likelihood. In practice, optimize the single-sample Monte Carlo estimate of this expectation: log p(x|z) + log p(z) - log q(z|x).
The loss function was implemented as:
import numpy as np
import tensorflow as tf

def log_normal_pdf(sample, mean, logvar, raxis=1):
    log2pi = tf.math.log(2. * np.pi)
    return tf.reduce_sum(
        -.5 * ((sample - mean) ** 2. * tf.exp(-logvar) + logvar + log2pi),
        axis=raxis)

def compute_loss(model, x):
    mean, logvar = model.encode(x)
    z = model.reparameterize(mean, logvar)
    x_logit = model.decode(z)
    cross_ent = tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=x)
    logpx_z = -tf.reduce_sum(cross_ent, axis=[1, 2, 3])
    logpz = log_normal_pdf(z, 0., 0.)
    logqz_x = log_normal_pdf(z, mean, logvar)
    return -tf.reduce_mean(logpx_z + logpz - logqz_x)
Since this example uses the MNIST dataset, x can be normalized to [0, 1], and sigmoid_cross_entropy_with_logits is used here.
My questions are:
What if x > 1? What kind of loss could be used then?
Can we use other loss functions as the reconstruction loss in a VAE, such as the Huber loss (https://en.wikipedia.org/wiki/Huber_loss)?
Another example uses an MSE loss (see below); is MSE a valid ELBO loss for measuring p(x|z)?
https://www.tensorflow.org/guide/keras/custom_layers_and_models#putting_it_all_together_an_end-to-end_example
# Iterate over the batches of the dataset.
for step, x_batch_train in enumerate(train_dataset):
    with tf.GradientTape() as tape:
        reconstructed = vae(x_batch_train)
        # Compute reconstruction loss
        loss = mse_loss_fn(x_batch_train, reconstructed)
        loss += sum(vae.losses)  # Add KLD regularization loss
In the loss function of a variational autoencoder, you jointly optimize two terms:
The reconstruction loss between prediction and label, like in a normal autoencoder
The distance between the parametrized probability distribution and the assumed true probability distribution. In practice, the true distribution is usually assumed to be Gaussian, and the distance is measured in terms of the Kullback-Leibler divergence.
For the reconstruction loss part, you can pick any loss function that fits your data, including MSE and Huber. It is generally still a good idea to normalize your input features, though.
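As a minimal sketch of that idea, here is the question's compute_loss with the reconstruction term swapped for a Huber loss. This assumes the same model interface and the log_normal_pdf helper from the question, and additionally assumes the decoder outputs values in data space rather than logits:

import tensorflow as tf

huber = tf.keras.losses.Huber(delta=1.0,
                              reduction=tf.keras.losses.Reduction.NONE)

def compute_loss_huber(model, x):
    mean, logvar = model.encode(x)
    z = model.reparameterize(mean, logvar)
    x_recon = model.decode(z)  # assumed to be in data space, not logits
    # Huber reduces over the channel axis, leaving (batch, H, W);
    # sum over pixels and negate so it plays the role of log p(x|z)
    logpx_z = -tf.reduce_sum(huber(x, x_recon), axis=[1, 2])
    logpz = log_normal_pdf(z, 0., 0.)
    logqz_x = log_normal_pdf(z, mean, logvar)
    return -tf.reduce_mean(logpx_z + logpz - logqz_x)

Note that swapping in Huber (or MSE) changes the implicit likelihood assumption, so this works as a practical reconstruction penalty but is no longer the exact Bernoulli log p(x|z) of the tutorial.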

How to map an array of values for y_true to a single value in order to compare to y_pred in a Tensorflow loss function (Tensorflow/Tensorflow Quantum)

I am trying to implement the circuits listed on page 8 of the following paper: https://arxiv.org/pdf/1905.10876.pdf using Tensorflow Quantum (TFQ). I have done so previously for a subset of circuits using Qiskit and ended up with the accuracies that can be found on page 14 of the following paper: https://arxiv.org/pdf/2003.09887.pdf. In TFQ, my accuracies are way down. I think this delta originates from the fact that in TFQ I only used one observable, a Pauli Z operator on the first qubit, and the circuits do not seem to "transfer all knowledge" to the first qubit. I place this in quotes because I am sure there is a better way to describe this. In Qiskit, on the other hand, 16 states (4^2) get mapped to 2 states.
My question: how can I get my accuracies back up?
Potential answer a): some method of "transferring all information" to a single qubit, potentially an ancilla qubit, and doing a readout on this qubit.
Potential answer b) placing a Pauli Z observable on all qubits (4 in total), mapping half of the 16 states to a label 0 and the other half to a label 1. I attempted this in the code below.
My attempt at answer b):
I have a Tensorflow Quantum (TFQ) circuit implemented in Tensorflow. The circuit has multiple observables, which I try to bring together in my loss function. I prefer to use as many standard components as possible, but need to map my quantum states to a label in order to determine the loss. I think what I am trying to achieve is not unique to TFQ. I define my model in the following way:
import cirq
import tensorflow as tf
import tensorflow_quantum as tfq

def circuit():
    data_qubits = cirq.GridQubit.rect(4, 1)
    circuit = cirq.Circuit()
    ...
    return circuit, [cirq.Z(data_qubits[0]), cirq.Z(data_qubits[1]),
                     cirq.Z(data_qubits[2]), cirq.Z(data_qubits[3])]

model_circuit, model_readout = circuit()

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(), dtype=tf.string),
    # The PQC layer returns the expected value of the readout gate, range [-1,1].
    tfq.layers.PQC(model_circuit, model_readout),
])

# compile model
model.compile(
    loss=loss_mse,
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
    metrics=[])
In loss_mse (mean squared error), I receive a (32, 4) tensor for y_pred. One row could look like:
[-0.2, 0.33, 0.6, 0.3]
This would first have to be mapped from [-1,1] to a binarized version with values in {0, 1}, so that it looks like:
[0, 1, 1, 1]
Now a table lookup needs to happen, which tells whether this combination is 0 or 1. Finally, the regular (y_true - y_pred)^2 can be computed for that row, followed by an np.sum over all rows. I tried to implement this:
import numpy as np
import tensorflow as tf

def get_label(measurement):
    if measurement == [0, 0, 0, 0]: return 0
    ...
    elif measurement == [1, 1, 1, 1]: return 0
    else: return -1

def py_call(y_true, y_pred):
    # cast tensor to numpy
    y_pred_np = np.asarray(y_pred)
    loss = np.zeros((len(y_pred)))  # could be a single variable with += within the loop
    # evaluate all 32 samples
    for pred in range(len(y_pred_np)):
        # map, binarize and look up
        y_labelled = get_label([0 if y < 0 else 1 for y in y_pred_np[pred]])
        # regular loss comparison
        loss[pred] = (y_labelled - y_true[pred])**2
    # reduce
    loss = np.sum(loss) / len(y_true)
    return loss

@tf.function
def loss_mse(y_true, y_pred):
    external_list = []
    loss = tf.py_function(py_call, inp=[y_true, y_pred], Tout=[tf.float64])
    return loss
However, the system still appears to expect a (32, 4) tensor, whereas I would have thought I could simply provide a single loss value (a float). My question: how can I map multiple values of y_true to a single number in order to compare it with a single y_pred value in a TensorFlow loss function?
So it looks like there are a couple of things going on here. To answer your question
how can I map multiple values for y_true to a single number in order to compare with a single y_pred value in a tensorflow loss function ?
What you might want is some kind of tf.reduce_* function like tf.reduce_mean or tf.reduce_sum. These functions let you apply a reduction operation across a given tensor axis, converting a tensor of shape (32, 4) into a tensor of shape (32,) or (4,). Here is a quick snippet:
@tf.function
def my_loss(y_true, y_pred):
    # y_true is shape (32, 4)
    # y_pred is shape (32, 4)
    # Scale from [-1, 1] to [0, 1]
    y_true += 1
    y_true /= 2
    y_pred += 1
    y_pred /= 2
    # These are now both (32,), with the mean reduction applied along
    # the second axis.
    reduced_true = tf.reduce_mean(y_true, axis=1)
    reduced_pred = tf.reduce_mean(y_pred, axis=1)
    # Now a scalar loss.
    loss = tf.reduce_mean((reduced_true - reduced_pred) ** 2)
    return loss
Now, the above isn't exactly what you want, since it's not entirely clear to me what exact reduction rule you have in mind for mapping something like [0,1,1,1] -> 0 vs [0,0,0,0] -> 1.
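For illustration only: if your lookup table happens to encode a parity rule (purely an assumption on my part), the product of the four expectations implements it smoothly and differentiably:

import tensorflow as tf

@tf.function
def parity_mse_loss(y_true, y_pred):
    # y_pred: (batch, 4) Pauli-Z expectations in [-1, 1].
    # The product behaves like a soft parity: it approaches +1 when an even
    # number of qubits lean towards |1> and -1 otherwise, and the gradient
    # flows through every expectation.
    parity = tf.reduce_prod(y_pred, axis=1)   # shape (batch,)
    prob_label_1 = (1.0 - parity) / 2.0       # map [-1, 1] -> [0, 1]
    y_true = tf.cast(tf.reshape(y_true, [-1]), tf.float32)
    return tf.reduce_mean((y_true - prob_label_1) ** 2)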
Another thing I will mention: if you want JUST the sum of the Pauli operators that you have term by term in the list [cirq.Z(data_qubits[0]), cirq.Z(data_qubits[1]), cirq.Z(data_qubits[2]), cirq.Z(data_qubits[3])], and all you care about is the final sum of these expectations, you could just as easily do:
my_operator = sum([cirq.Z(data_qubits[0]), cirq.Z(data_qubits[1]),
                   cirq.Z(data_qubits[2]), cirq.Z(data_qubits[3])])
print(my_operator)
Which should give something like:
cirq.PauliSum(cirq.LinearDict({frozenset({(cirq.GridQubit(0, 0), cirq.Z)}): (1+0j), frozenset({(cirq.GridQubit(0, 1), cirq.Z)}): (1+0j), frozenset({(cirq.GridQubit(0, 2), cirq.Z)}): (1+0j), frozenset({(cirq.GridQubit(0, 3), cirq.Z)}): (1+0j)}))
This PauliSum is also compatible as a readout operation in the PQC layer. Lastly, I would recommend reading through some of the snippets and examples here:
https://www.tensorflow.org/quantum/api_docs/python/tfq/layers/PQC
and here:
https://www.tensorflow.org/quantum/api_docs/python/tfq/layers/Expectation
These pages give a pretty good description of the input and output signatures of the functions, as well as the shapes you can expect from them.
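A quick usage sketch of that summed readout (assuming model_circuit and my_operator from the snippets above):

import tensorflow as tf
import tensorflow_quantum as tfq

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(), dtype=tf.string),
    # With a single PauliSum readout, the PQC layer outputs shape (batch, 1).
    tfq.layers.PQC(model_circuit, my_operator),
])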

Is the output scaled twice in the Keras categorical_crossentropy code?

I see categorical_crossentropy is implemented in Keras as follows:
def categorical_crossentropy(target, output, from_logits=False, axis=-1):
    """Categorical crossentropy between an output tensor and a target tensor.

    # Arguments
        target: A tensor of the same shape as `output`.
        output: A tensor resulting from a softmax
            (unless `from_logits` is True, in which
            case `output` is expected to be the logits).
        from_logits: Boolean, whether `output` is the
            result of a softmax, or is a tensor of logits.
        axis: Int specifying the channels axis. `axis=-1`
            corresponds to data format `channels_last`,
            and `axis=1` corresponds to data format
            `channels_first`.

    # Returns
        Output tensor.

    # Raises
        ValueError: if `axis` is neither -1 nor one of
            the axes of `output`.
    """
    output_dimensions = list(range(len(output.get_shape())))
    if axis != -1 and axis not in output_dimensions:
        raise ValueError(
            '{}{}{}'.format(
                'Unexpected channels axis {}. '.format(axis),
                'Expected to be -1 or one of the axes of `output`, ',
                'which has {} dimensions.'.format(len(output.get_shape()))))
    # Note: tf.nn.softmax_cross_entropy_with_logits
    # expects logits, Keras expects probabilities.
    if not from_logits:
        # scale preds so that the class probas of each sample sum to 1
        output /= tf.reduce_sum(output, axis, True)
        # manual computation of crossentropy
        _epsilon = _to_tensor(epsilon(), output.dtype.base_dtype)
        output = tf.clip_by_value(output, _epsilon, 1. - _epsilon)
        return -tf.reduce_sum(target * tf.log(output), axis)
I don't understand the part from
output_dimensions = list(range(len(output.get_shape())))
to
output /= tf.reduce_sum(output, axis, True)
I understand that output is a tensor of probabilities resulting from a softmax, which means the class probabilities of each sample should already sum to 1. Why do they need to scale the preds so that the class probabilities of each sample sum to 1 again? Please explain this.
Because you need to make sure that each probability is between 0 and 1, or else the cross-entropy computation will be incorrect. It's also a way to prevent user errors when they pass (unnormalized) probabilities outside that range.
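A small demonstration of what that rescaling does (a hypothetical example; the numbers are chosen for illustration):

import tensorflow as tf

output = tf.constant([[2.0, 6.0, 2.0]])   # "probabilities" that sum to 10, not 1
target = tf.constant([[0.0, 1.0, 0.0]])

# The same rescaling as in the Keras source
output = output / tf.reduce_sum(output, axis=-1, keepdims=True)
# output is now [[0.2, 0.6, 0.2]], a valid distribution

loss = -tf.reduce_sum(target * tf.math.log(output), axis=-1)
print(loss.numpy())  # ~0.5108 = -log(0.6)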

InvalidArgumentError: In[0] is not a matrix. Instead it has shape []

I'm not able to train the network using Keras; I get the following error at epoch 1, on the first batch:
InvalidArgumentError: In[0] is not a matrix. Instead it has shape []
[[{{node training/SGD/gradients/dense_1/MatMul_grad/MatMul}}]]
I'm trying to solve a regression problem using Keras and a custom loss function provided by https://github.com/farrell236/DeepPose
The network is a fairly simple VGG-like CNN.
I think the problem is the loss function. In particular, I suppose that the weight initialization is the issue (take a look at the TensorFlow example: https://github.com/farrell236/DeepPose/blob/master/tensorflow/example).
This is my loss function:
def custom_loss(y_true, y_pred):
    loss = SE3GeodesicLoss(np.ones((1, 6)))
    tf.initializers.constant([loss])
    y_pred = tf.cast(y_pred, dtype=tf.float32)
    y_true = tf.cast(y_true, dtype=tf.float32)
    loss = SE3GeodesicLoss(np.ones(6))
    geodesic_loss = loss.geodesic_loss(y_pred, y_true)
    geodesic_loss = tf.cast(geodesic_loss, dtype=tf.float32)
    return geodesic_loss
What's strange is that I'm able to use this function as a metric during training.
Further information:
What I'm trying to do is estimate the position of an object, with images as input and the relative Euler angles and distance of the target as labels (i.e. 6 parameters: [r_x, r_y, r_z, t_x, t_y, t_z]). I'm trying to implement this loss function in order to solve the attitude estimation problem. Other losses (e.g. MSE, MAE) are not effective enough at solving the attitude regression problem.
Do you have any suggestion?

Can you process a tensor in chunks in a custom Keras loss function?

I am trying to write a custom Keras loss function in which I process the tensors in sub-vector chunks. For example, if an output tensor represents a concatenation of quaternion coefficients (i.e. w,x,y,z,w,x,y,z,...), I might wish to normalize each quaternion before calculating the mean squared error, in a loss function like:
from keras import backend as K

def norm_quat_mse(y_true, y_pred):
    diff = y_pred - y_true
    dist = 0
    for i in range(0, 16, 4):
        dist += K.sum(K.square(diff[i:i+4] / K.sqrt(K.sum(K.square(diff[i:i+4])))))
    return dist / 4
While Keras accepts this function without error and uses it in training, it outputs a different loss value than when I apply it as an independent function to the output of model.predict(), so I suspect it is not working properly. None of the built-in Keras loss functions uses this per-chunk processing approach. Is it possible to do this within Keras' auto-differentiation framework?
Try:
def norm_quat_mse(y_true, y_pred):
    diff = y_pred - y_true
    dist = 0
    for i in range(0, 16, 4):
        dist += K.sum(K.square(diff[:, i:i+4] / K.sqrt(K.sum(K.square(diff[:, i:i+4])))))
    return dist / 4
You need to know that the shape of y_true and y_pred is (batch_size, output_size), so you need to skip the first (batch) dimension during the computations.
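A quick sanity check of the fixed version (hypothetical shapes: 2 samples with 16 outputs each, i.e. four quaternions per sample):

import numpy as np
from keras import backend as K

y_true = K.constant(np.random.rand(2, 16))
y_pred = K.constant(np.random.rand(2, 16))
# Should print a single scalar, matching what Keras reports during training.
print(K.eval(norm_quat_mse(y_true, y_pred)))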