I am facing a strange problem. I build an RNN model using TensorFlow and then store all the model variables using tf.train.Saver after I finish training.
During testing, I rebuild just the inference part and restore the variables into the graph. The restoration step does not give any error.
But when I start testing on the evaluation set, I always get the same output from the inference call, i.e. the same output for every test input.
I printed the output during training, and there I do see different outputs for different training samples, and the cost is also decreasing.
But during testing, it always gives me the same output no matter what the input is.
Can someone help me understand why this could be happening? I would like to post a minimal example, but since I am not getting any error, I am not sure what to post. I will be happy to share more information if it helps.
One difference between the inference graph during training and during testing is the number of time steps in the RNN. During training I run n steps (n = 20 or more) per batch before updating gradients, while during testing I use just one step, since I only want a prediction for that single input.
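For reference, a graph built with a variable time dimension can share the same weights between a many-step training graph and a one-step inference graph. A minimal sketch in TF 1.x style (the sizes here are made up):

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

NUM_UNITS, NUM_FEATURES = 32, 8  # hypothetical sizes

# Leaving the time dimension as None lets the same weights serve a
# 20-step training graph and a 1-step inference graph.
inputs = tf1.placeholder(tf.float32, [None, None, NUM_FEATURES])
cell = tf1.nn.rnn_cell.BasicLSTMCell(NUM_UNITS)
outputs, state = tf1.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
```

The same `outputs` tensor can then be evaluated with a (batch, 20, 8) feed during training and a (batch, 1, 8) feed at test time.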
Thanks
I have been able to resolve this issue. It seemed to be happening because one of my input features was very dominant in its original scale, which caused all values to converge to a single number after some operations.
Scaling that feature resolved the problem.
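For illustration, standardizing the dominant feature to zero mean and unit variance; a NumPy sketch where the array and the dominant column are made up:

```python
import numpy as np

def standardize_feature(X, col):
    """Scale one column of a feature matrix to zero mean and unit variance."""
    mean = X[:, col].mean()
    std = X[:, col].std()
    X = X.copy()
    X[:, col] = (X[:, col] - mean) / std
    # Keep mean/std so the identical scaling can be applied at test time.
    return X, mean, std

# Example: column 0 dominates the other feature by orders of magnitude.
X_train = np.array([[10000.0, 0.5], [20000.0, 0.1], [15000.0, 0.9]])
X_scaled, mu, sigma = standardize_feature(X_train, col=0)
```

Remember to reuse the training-set mean and standard deviation when scaling evaluation inputs, otherwise train and test distributions will not match.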
Thanks
Can you create a small reproducible case and post it as a bug to https://github.com/tensorflow/tensorflow/issues? That will help this question get attention from the right people.
Everything is in the title: I was reproducing the TensorFlow tutorial about time series here: https://www.tensorflow.org/tutorials/structured_data/time_series?hl=en#performance_3
I reproduce the same graph and obtain the following :
results obtained
Why do they plot validation and test error instead of train and test error?
And we can observe a significant difference between validation and test errors. How should this be interpreted, as overfitting?
Thank you in advance,
Best regards
I am trying to train a simple image classification model using TensorFlow Lite. I am following this documentation to write my code. As specified in the documentation, in order to train my model I have written model = image_classifier.create(train_data, model_spec=model_spec.get('mobilenet_v2'), validation_data=validation_data). After training for a few seconds, however, I get an InvalidArgumentError.

I believe the error is due to something in my dataset, but it is too difficult to eliminate all sources of the error manually because the dataset consists of thousands of images. After some research I found a potential solution: I could use tf.data.experimental.ignore_errors, which would "produce a dataset that contains the same elements as the input, but silently drop any elements that caused an error." From its documentation (here), however, I couldn't figure out how to integrate this transformation with my code.

If I place the line dataset = dataset.apply(tf.data.experimental.ignore_errors()) before training the model, the system doesn't know which elements to drop. If I place it after, the system never reaches the line, because the error arises during training. Moreover, the system gives the error message AttributeError: 'ImageClassifierDataLoader' object has no attribute 'apply'. I would appreciate it if someone could tell me how to integrate tf.data.experimental.ignore_errors() with my model, or suggest alternative solutions to the issue I am facing.
Hi, if you are following the documentation exactly, then tf.data.experimental.ignore_errors won't work for you, because you are not loading your data with tf.data; you are most probably using from tflite_model_maker.image_classifier import DataLoader.
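If corrupt image files are the cause, one workaround is to drop unreadable files from the image folder before handing it to DataLoader. A sketch, assuming a TF 2.x eager environment and a hypothetical directory layout:

```python
import os
import tensorflow as tf

def drop_unreadable_images(image_dir):
    """Try to decode every file under image_dir; delete files that fail.

    Returns the list of removed paths so you can inspect what was dropped.
    """
    bad = []
    for root, _, files in os.walk(image_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                data = tf.io.read_file(path)
                # Raises InvalidArgumentError on corrupt/unsupported files.
                tf.io.decode_image(data)
            except tf.errors.InvalidArgumentError:
                bad.append(path)
    for path in bad:
        os.remove(path)  # or move to a quarantine folder instead
    return bad
```

Running this once on the dataset directory, before DataLoader.from_folder, should remove the elements that would otherwise trigger the InvalidArgumentError during training.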
Note: please include the complete code snippet so that we can help you solve the issue.
I am running inference using one of the models from TensorFlow's object detection module. I'm looping over my test images in the same session and calling sess.run(). However, on profiling these runs, I realize the first run always takes longer than the subsequent runs.
I found an answer here as to why that happens, but it offered no solution for how to fix it.
I'm deploying the object detection inference pipeline on an Intel i7 CPU. The time for one session.run() for the 1st, 2nd, 3rd, and 4th image looks something like this (in seconds):
1. 84.7132628
2. 1.495621681
3. 1.505012751
4. 1.501652718
Just a background on what all I have tried:
I tried the TFRecords approach TensorFlow gives as a sample here. I hoped it would work better because it doesn't use a feed_dict, but since more I/O operations are involved, I'm not sure it would be ideal. I tried making it work without writing to disk, but I always got some error regarding the encoding of the image.
I tried using TensorFlow datasets to feed the data, but I wasn't sure how to provide the input, since during inference I need to feed the "image_tensor" key in the graph. Any ideas on how to use this to provide input to a frozen graph?
Any help will be greatly appreciated!
TL;DR: I'm looking to reduce the inference time for the first image, for deployment purposes.
Even though I have seen that the first inference takes longer, the difference shown there (84 s vs. 1.5 s) seems hard to believe. Are you counting the time to load the model inside this metric? Could that explain the large gap? Is the topology complex enough to justify such a difference?
My suggestions:
Try OpenVINO: see whether the topology you are working on is supported in OpenVINO. OpenVINO is known to speed up inference workloads by a substantial amount due to its capability to optimize network operations. Also, the time taken to load an OpenVINO model is comparatively lower in most cases.
Regarding the TFRecords Approach, could you please share the exact error and at what stage you got the error?
Regarding TensorFlow datasets, please check out https://github.com/tensorflow/tensorflow/issues/23523 and https://www.tensorflow.org/guide/datasets. Regarding the "image tensor" key in the graph, your original inference pipeline should give you some clues.
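One common mitigation for the slow first run is a warm-up: run the session once on dummy data at startup, so that one-time graph optimization and memory allocation do not land on the first real request. A sketch against a TF 1.x frozen graph; the tensor names below are placeholders, substitute the ones from your own graph:

```python
import numpy as np
import tensorflow as tf

# Hypothetical tensor names; take the real ones from your frozen graph.
INPUT_TENSOR = 'image_tensor:0'
OUTPUT_TENSORS = ['detection_boxes:0', 'detection_scores:0']

def warm_up(sess, height=300, width=300, runs=2):
    """Run the graph on dummy input so one-time setup costs are paid
    at startup rather than on the first real inference."""
    dummy = np.zeros((1, height, width, 3), dtype=np.uint8)
    outs = None
    for _ in range(runs):
        outs = sess.run(OUTPUT_TENSORS, feed_dict={INPUT_TENSOR: dummy})
    return outs
```

This doesn't make the first run cheaper, it just moves it to deployment startup, which is usually acceptable for serving.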
I'm gaining some learning experience with TensorFlow's Estimator API. Doing a classification task on a small dataset with TensorFlow's tf.contrib.learn.DNNClassifier (I know there is tf.estimator.DNNClassifier, but I have to work with TensorFlow 1.2), I get the accuracy graph on my test dataset. I wonder why there are these negative peaks.
I thought they could occur because of overfitting followed by self-repair. The data point right after a peak seems to have the same value as the point before it.
I tried to look into the code to find any proof that estimator's train function has such a mechanism but did not find any.
So, is there such a mechanism or are there other possible explanations?
I don't think the Estimator's train function has any such mechanism.
Some possible theories:
Does your training restart at any point? If you have an exponential moving average (EMA) in your model, it is possible that the moving average has to be recomputed upon restart.
Is your input data randomized? If not, it is possible that a patch of input data is entirely misclassified and that, again, the EMA is smoothing it out.
This is pretty mysterious to me. If you do find out what the real issue is please do share!
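If the second theory applies, the simplest fix is to re-randomize the example order every epoch. A NumPy sketch (the array names are hypothetical, and labels are shuffled with the same permutation so pairs stay aligned):

```python
import numpy as np

def shuffled_epochs(features, labels, seed=0):
    """Yield a freshly shuffled copy of the training data each epoch,
    so the network never sees long runs of similar examples."""
    rng = np.random.RandomState(seed)
    n = len(features)
    while True:
        idx = rng.permutation(n)  # same permutation for features and labels
        yield features[idx], labels[idx]
```

Each next() call on the generator returns one shuffled epoch; feed those batches to the input function instead of the raw arrays.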
I have to analyse some images of drops, taken with a microscope, which may contain a cell. What would be the best way to go about this?
Every acquisition returns around a thousand pictures: each picture contains a drop, and I have to determine whether the drop has a cell inside or not. Each acquisition dataset has very different contrast and brightness, and the shape of the cells is slightly different in every setup due to micro-variations in the focus of the microscope.
I tried to create a classification model following the guide "TensorFlow for Poets", defining two classes: empty drops and drops containing a cell. Unfortunately, the result wasn't successful.
I also tried labelling the cells and feeding them to an object detection algorithm using DIGITS 5, but it does not detect anything.
I was wondering whether these algorithms are designed to recognise more complex objects, or whether I did something wrong during setup. Any solution or hint would be helpful!
Thank you!
This is a collage of drops from different samples: the cells look a bit different in every acquisition due to the different setups and ambient light.
This kind of problem should definitely be solvable. I would suggest starting with a CIFAR-10 convolutional neural network tutorial and customizing it for your problem.
In future posts you should tell us how your training is progressing. Make sure you're outputting the following information every few steps (maybe every 10-100 steps):
Loss/cost function output, you should see your loss decreasing over time.
Classification accuracy on the current batch of your training data
Classification accuracy on a held out test set (if you've implemented test set evaluation, you might implement this second)
There are many, many things that can go wrong, from bad learning rates to preprocessing steps that go awry. Neural networks are very hard to debug: they are resilient to bugs, which makes it hard to even know whether you have a bug in your code. For that reason, make sure you're visualizing everything.
Another very important step is to save the images exactly as you pass them to TensorFlow. You will have them in matrix form, and you can save that matrix as an image. Do that immediately before you pass the data to TensorFlow. Make sure you are giving the network what you expect it to receive. I can't tell you how many times I and others I know have unknowingly passed garbage into the network. Assume the worst and prove yourself wrong!
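As a sketch of that debugging step, assuming a float batch of shape (N, H, W, C); the rescaling is only so the dump is viewable, the saved pixels otherwise come straight from the batch:

```python
import numpy as np
from PIL import Image

def dump_batch(batch, prefix='debug_input'):
    """Save each image in a batch exactly as it will be fed to the network.

    batch: array of shape (N, H, W, C), float or uint8 values.
    Values are min-max rescaled to [0, 255] purely for viewing.
    """
    for i, img in enumerate(batch):
        arr = np.asarray(img, dtype=np.float64)
        arr = arr - arr.min()
        if arr.max() > 0:
            arr = arr / arr.max()
        out = (arr * 255).astype(np.uint8).squeeze()  # (H, W) for grayscale
        Image.fromarray(out).save('%s_%03d.png' % (prefix, i))
```

Call dump_batch on the very array you pass to the training step; if the saved PNGs look wrong (shifted channels, garbage, wrong crops), the bug is in the input pipeline, not the model.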
Your next post should look something like this:
I'm training a convolutional neural network in tensorflow
My loss function (sigmoid cross entropy) is decreasing consistently (show us a picture!)
My input images look like this (show us a picture of what you ACTUALLY FEED to the network)
My learning rate and other parameters are A, B, and C
I preprocessed the data by doing M and N
The accuracy the network achieves on training data (and/or test data) is Y
In answering those questions you're likely to solve 10 problems along the way, and we'll help you find the 11th and, with some luck, last one. :)
Good luck!