Getting loss for test image - cntk

I'm trying to get the loss from a test image in Faster R-CNN.
If I run copy.copy(trainer.previous_minibatch_loss_average) right after trainer.train_minibatch(data), then I can get the loss for the trained image (mb=1).
When I try to do the exact same after trainer.test_minibatch(data) I get: This Value object is invalid and can no longer be accessed.
I've been looking around and it seems others may have accomplished something similar. Here.
Anyone know what to do to get the loss of a test image?

results = []
results.append(trainer.previous_minibatch_loss_average)
The above should work.
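For the test pass specifically, one option (a minimal sketch, assuming the Trainer was constructed with the loss also passed as the evaluation criterion, e.g. Trainer(model, (loss, loss), learner)) is to use the value returned by test_minibatch itself, since CNTK reports the average evaluation criterion per sample for the tested minibatch:

# Hedged sketch: capture the per-sample loss for each test minibatch (mb=1),
# assuming the evaluation criterion given to the Trainer is the loss itself.
test_losses = []
for data in test_minibatches:  # hypothetical iterable of test minibatches
    avg_loss = trainer.test_minibatch(data)  # average eval criterion per sample
    test_losses.append(avg_loss)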

Related

CNN: Unstable model score vs iteration

My model score vs. iteration graph is unstable. How can I improve it?
This is what I get: [plot of model score vs. iteration omitted]
Here is my code: [code snippets 1-5 omitted]
Your network looks fairly stock/copy and pasted. I'm pretty sure I've seen this code before.
Without knowing much about your input data, I'm not sure whether you're solving a classification problem, but first try switching the output to softmax with a negative log likelihood loss.
Your current output activation and loss function are mainly suited to binary classification.
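For reference, a minimal Keras sketch of that change (the original framework isn't stated, and the 10-class, one-hot-label setup here is just an assumption):

# Hedged sketch: softmax output + categorical cross-entropy
# (the negative log likelihood of the true class), assuming 10 classes.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(32, 32, 3)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),  # softmax over the classes
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])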
You can also get rid of the ReNormalizeL2PerLayer. That might hinder the network from learning depending on your data.
It's also hard to help without knowing much about your input data, but sometimes zero-mean, unit-variance normalization may not be suitable for your data set. Consider switching to a 0-to-1 scaling instead.
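A 0-to-1 rescaling is just a per-feature min-max transform; a small NumPy sketch with stand-in data:

import numpy as np

x_train = np.random.randn(100, 8).astype(np.float32)  # stand-in for the real features
x_min, x_max = x_train.min(axis=0), x_train.max(axis=0)
x_train_scaled = (x_train - x_min) / (x_max - x_min + 1e-8)  # epsilon guards constant columns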
Lastly, for quick iteration times consider overfitting on a small amount of data first when testing. That will help you see if there's any signal in your data and if your network can learn.

Tensorflow validation_data error for multi-input model

My tensorflow 2.6 model has two inputs. When I train this model without validation data (model.fit(x=[train_data1, train_data2], y=train_target)), it works perfectly. When I try to add some validation data, however, I receive errors.
model.fit(x=[train_data1, train_data2], y= train_target,
validation_data=([val_data1, val_data2], val_target))
throws the following error:
Layer Input__ expects 2 input(s), but it received 3 input tensors.
The closest thing I got for help is this question. There, the answerer suggests doing exactly as I have done. What can be done so that this model can use validation_data?
After an hour of beating my head against the wall, I restarted the kernel then tried
model.fit(x=[train_data1, train_data2], y= train_target,
validation_data=([val_data1, val_data2], val_target))
again, just like in the question. It worked...
As every IT person in the history of the human race will remind you: "Did you try turning it off and on again?" Lesson learned.
Try wrapping it in a NumPy array or a tensor, like this:
validation_data=(np.array([val_data1, val_data2]), val_target)
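For anyone hitting this later, here is a minimal self-contained sketch (made-up shapes, layer names, and toy data) showing the tuple form of validation_data from the question working on a two-input model under recent tf.keras versions; if it errors in a long-running notebook, a kernel restart is worth trying first:

import numpy as np
from tensorflow import keras

# Hypothetical two-input model with toy shapes.
in1 = keras.Input(shape=(4,))
in2 = keras.Input(shape=(6,))
out = keras.layers.Dense(1)(keras.layers.Concatenate()([in1, in2]))
model = keras.Model([in1, in2], out)
model.compile(optimizer="adam", loss="mse")

train_data1, train_data2 = np.random.rand(32, 4), np.random.rand(32, 6)
train_target = np.random.rand(32, 1)
val_data1, val_data2 = np.random.rand(8, 4), np.random.rand(8, 6)
val_target = np.random.rand(8, 1)

# validation_data is an (inputs, targets) tuple; inputs is a list with one array per model input.
model.fit(x=[train_data1, train_data2], y=train_target,
          validation_data=([val_data1, val_data2], val_target),
          epochs=1, verbose=0)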

Predict probability of predicted class

ml beginner here.
I have a dataset containing GPA, GRE, TOEFL, SOP & LOR rankings (out of 5), etc. (all numerical), and a final column that states whether or not the applicant was admitted to a university (0 or 1), which is what we'll use as y_train.
I'm supposed to not just classify the predicted labels, but also calculate the probability of each person getting admitted.
Edit: following the first comment, I built a Logistic Regression model, and with some googling I found 'predict_proba' from sklearn and tried implementing it. There weren't any syntax errors, but the values given by predict_proba were horribly wrong.
Link: https://github.com/tarunn2799/gre-pred/blob/master/GRE%20Admission%20Probability-%20Extraaedge.ipynb
Please help me find where I've gone wrong, and I'd also appreciate tips to reduce the loss.
Thank you!
I read your notebook, but I'm confused about why you think the predict_proba values are horribly wrong.
Is the prediction accuracy not good, or is the format of predict_proba not what you expected?
You could use sklearn.metrics.accuracy_score() and sklearn.metrics.confusion_matrix() to check your predicted labels, or use sklearn.metrics.roc_auc_score() to check the result of predict_proba. Checking both the train and test parts is better.
I think the format of predict_proba is correct; alternatively, you could try predict_log_proba() to calculate the log probability.
Hope this helps.
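To illustrate those checks, a minimal sketch with stand-in data (the real GPA/GRE/TOEFL features would replace the random matrix):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

X = np.random.rand(400, 6)                                           # stand-in numeric features
y = (X.sum(axis=1) + 0.3 * np.random.randn(400) > 3.0).astype(int)   # 1 = admitted

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

probs = clf.predict_proba(X_test)[:, 1]   # probability of class 1 (admitted)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print("ROC AUC :", roc_auc_score(y_test, probs))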

Tensorflow error:The graph couldn't be sorted in topological order

When I run my loss function, this error (or warning) occurs.
I really cannot figure out what causes it.
My guess is that maybe I didn't use the original input, for example:
def loss(predict, label):
    # for some reason I need to extract some values from predict
    predictProcessed = process(predict)
    # predictProcessed is a subset of predict
    loss = tf.square(predict - label)
    return loss
Is my guess right or not?
I also use a double for-loop in this code. Should the code use fewer for-loops? Thanks.

Gradient for Each Example Using map_fn

I want to get the gradient of a layer with respect to a parameter matrix for each example. Normally, I would need a Jacobian, but following this idea, I decided to use map_fn so I could feed forward data in a batch rather than one by one. This gives me a problem I do not understand, unfortunately. With the code
get_grads = tf.map_fn(lambda x: tf.gradients(x, W['1'])[0], softmax_probs)
sess.run(get_grads, feed_dict={x: images[0:100]})
I get this error
InvalidArgumentError: TensorArray map_21/TensorArray_36#map_21/while/gradients: Could not write to TensorArray index 0 because it has already been read.
W['1'] is a variable in the graph. Ideas?
It seems like your issue may be connected with this bug:
https://github.com/tensorflow/tensorflow/issues/7643
One commenter posts a possible fix at the end. You could try that out.
Alternatively, if what you want is the Jacobian, then you can check out this solution:
https://github.com/tensorflow/tensorflow/issues/675#issuecomment-362853672
although it appears that it will not work when nested.
I don't think this will work because x in this case is a loop variable which TensorFlow does not know how to connect to softmax_probs.
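If moving to TensorFlow 2.x is an option, a minimal sketch (hypothetical shapes; W stands in for W['1']) that gets per-example gradients without map_fn is GradientTape.jacobian, which keeps the batch dimension:

import tensorflow as tf

# Hypothetical single-layer softmax model; W stands in for W['1'].
W = tf.Variable(tf.random.normal([784, 10]))
images = tf.random.normal([100, 784])

with tf.GradientTape() as tape:
    softmax_probs = tf.nn.softmax(images @ W)   # shape [100, 10]

# jacobian keeps the batch dimension: shape [100, 10, 784, 10],
# i.e. the gradient of every output of every example w.r.t. W.
per_example_grads = tape.jacobian(softmax_probs, W)
print(per_example_grads.shape)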