How to drop elements in dataset that can cause an error while training a TensorFlow Lite model - tensorflow

I am trying to train a simple image classification model using TensorFlow Lite. I am following this documentation to write my code. As specified in the documentation, in order to train my model I have written model = image_classifier.create(train_data, model_spec=model_spec.get('mobilenet_v2'), validation_data=validation_data). After training for a few seconds, however, I get an InvalidArgumentError. I believe the error is caused by something in my dataset, but it is too difficult to eliminate all the sources of the error manually because the dataset consists of thousands of images.

After some research, I found a potential solution: I could use tf.data.experimental.ignore_errors, which would "produce a dataset that contains the same elements as the input, but silently drop any elements that caused an error." From its documentation (here), however, I couldn't figure out how to integrate this transformation with my code. If I place the line dataset = dataset.apply(tf.data.experimental.ignore_errors()) before training the model, the system doesn't know which elements to drop. If I place it after, the system never reaches the line because the error arises during training. Moreover, the system gives the error message AttributeError: 'ImageClassifierDataLoader' object has no attribute 'apply'.

I would appreciate it if someone could tell me how to integrate tf.data.experimental.ignore_errors() with my model, or suggest possible alternatives to the issue I am facing.

Hi, if you are following the documentation exactly, then tf.data.experimental.ignore_errors won't work for you, because you are not loading your data with tf.data; you are most probably using from tflite_model_maker.image_classifier import DataLoader, which has no apply method (one possible workaround is sketched below).
Note: please include the complete code snippet so that we can help you solve the issue.
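If you want to keep using Model Maker's DataLoader, one workaround (a sketch only, not a Model Maker feature; the folder path is a placeholder for the directory you load your images from) is to sweep the image folder first and drop any file that TensorFlow cannot decode, so the bad elements never reach the training pipeline:

import os
import tensorflow as tf

DATA_DIR = 'path/to/image_folder'  # placeholder: the folder your DataLoader reads from

# Try to decode every file; delete the ones TensorFlow cannot read,
# since those are the elements that would raise InvalidArgumentError later.
for root, _, files in os.walk(DATA_DIR):
    for name in files:
        path = os.path.join(root, name)
        try:
            tf.io.decode_image(tf.io.read_file(path))
        except tf.errors.InvalidArgumentError:
            print('Dropping undecodable file:', path)
            os.remove(path)

This is only a pre-filter on disk, so it works regardless of how the DataLoader builds its internal pipeline.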

Related

Yolov5 object detection training

Please, I need your help with my YOLOv5 training process for object detection.
I am trying to train my YOLOv5 object detection model to detect a small object (a scratch). To label my images I used Roboflow, where I applied some of the data augmentation and pre-processing that Roboflow offers as a service. When I finished the pre-processing and data augmentation steps, Roboflow let me choose among different output formats, in my case YOLOv5 PyTorch, and it split the data into training, validation, and test sets for me. So everything was set up as it should be for my data preparation, and at the end I got the folder with data.yaml and the images with their labels. In data.yaml I put the paths of my training and validation sets, as shown in the GitHub tutorial for YOLOv5. I followed the steps very carefully, though.
The problem is that when training starts I get nan in the obj and box columns, as you can see in the picture below, and I don't know why. Can someone relate to that or give me any clue to find a solution, please? It's my first project in computer vision.
This is what I get when the training process starts.
This is the last error message when training finishes.
I think the problem may come from here, but I don't know how to fix it; I used the YOLOv5 team's code as it is in the tutorial.
The training continues without any problem, but the mAP and precision remain 0 for the whole run!
PS: Here is the link to the tutorial I followed: https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data
This is what I would do to troubleshoot it:
- Run your code on Colab, because that environment is proven to work well.
- Confirm that your labels look good and are set up correctly. Can you check that the classes look right? In one of the screenshots it looks like you have no labels.
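To check the labels programmatically, here is a minimal sanity check (a sketch assuming the usual YOLO layout of parallel images/ and labels/ folders with one .txt file per image; adjust the paths, image extension, and class count to your dataset):

from pathlib import Path

IMG_DIR = Path('dataset/train/images')   # placeholder paths
LBL_DIR = Path('dataset/train/labels')
NUM_CLASSES = 1                          # the nc value from data.yaml

for img in IMG_DIR.glob('*.jpg'):
    lbl = LBL_DIR / (img.stem + '.txt')
    if not lbl.exists():
        print('missing label:', lbl)
        continue
    for line in lbl.read_text().splitlines():
        parts = line.split()
        if not parts:
            continue
        cls, coords = int(parts[0]), [float(x) for x in parts[1:]]
        # Class ids must be < nc and box coordinates must be normalized to [0, 1].
        if cls >= NUM_CLASSES or any(not 0.0 <= c <= 1.0 for c in coords):
            print('bad annotation in', lbl, ':', line)

Missing or malformed label files typically show up as the mAP and precision staying at 0.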
Running my code in Colab worked successfully and the results were good. I think the problem was in my personal laptop environment, maybe the version of PyTorch I was using ('1.10.0+cu113') or something else. If you have any advice on how to set up my environment properly for YOLOv5, I would be happy to hear it. Many thanks again to #alexheat.
I'm using YOLOv5 for my custom dataset too. This problem might be due to a misplaced directory (see the path check after the commands below).
Using a different version of PyTorch should not be a problem, but you can try the version mentioned in 'requirements.txt'.
It's better if you run
cd yolov5
pip3 install -r requirements.txt
Let me know if this helps.
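Since a misplaced directory usually means the paths inside data.yaml do not match the actual folder layout, a quick check like this (a sketch; it assumes PyYAML is installed and that 'data.yaml' is the file Roboflow exported) shows immediately whether the train and val paths resolve:

import os
import yaml  # PyYAML

with open('data.yaml') as f:              # placeholder: path to the exported data.yaml
    cfg = yaml.safe_load(f)

for key in ('train', 'val'):
    path = cfg.get(key)
    status = 'exists' if path and os.path.exists(path) else 'MISSING'
    print(key, '->', path, status)
print('nc:', cfg.get('nc'), 'names:', cfg.get('names'))

If the check reports MISSING, try putting absolute paths in data.yaml to rule out any relative-path resolution issues.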

TensorFlow Federated TypeError (generator must be callable)

I'm attempting to train a character prediction model similar to the TensorFlow Federated tutorial. I'm preprocessing my data and setting up my model as specified in the Google Research GitHub repository. I'm wrapping the model for use with TensorFlow Federated using the method tff.learning.from_keras_model.
When I begin training I encounter this error:
TypeError: generator must be callable.
I'm having trouble figuring out exactly what is causing this issue; I'm so confused that I'm not even sure what information to include in this question to give the necessary context.
Any help with what this error means or how I can overcome it would be appreciated, and please let me know if more information is required.
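Without the full code it is hard to say for certain, but in TensorFlow this exact message usually comes from tf.data.Dataset.from_generator being handed a generator object instead of a callable that returns one, which can happen somewhere inside a custom preprocessing pipeline. A minimal, standalone illustration of the distinction (not taken from the federated tutorial):

import tensorflow as tf

def sample_generator():
    for i in range(3):
        yield [i], [i + 1]

output_signature = (
    tf.TensorSpec(shape=(1,), dtype=tf.int32),
    tf.TensorSpec(shape=(1,), dtype=tf.int32),
)

# Wrong: sample_generator() is a generator *object*, so this raises
# "TypeError: `generator` must be callable."
# ds = tf.data.Dataset.from_generator(sample_generator(), output_signature=output_signature)

# Right: pass the function itself so the dataset can call it to create fresh generators.
ds = tf.data.Dataset.from_generator(sample_generator, output_signature=output_signature)

If your preprocessing builds datasets this way, check that you are passing the function, not its result.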

Saving RL agent by pickle, cannot save because of _thread.RLock -- what is the source of this error?

I am trying to pickle my reinforcement learning agent class after training so that I can continue training it later on.
The script used is:
import pickle

with open('agent.pickle', 'wb') as agent_file:
    pickle.dump(agent, agent_file)
I am receiving an error:
TypeError: can't pickle _thread.RLock objects
I have searched for this error message but I am not sure what the actual source of the error is. The traceback is uninformative about which specific line of code causes it. The scripts I use come from 3 independent .py files, and a TensorFlow/Keras model is built in one of them, but again I am unsure where exactly this comes from. I have read that this error can come from lambda functions, but none of these are defined by me, unless they are used internally by a package such as TensorFlow.
I faced the same error too, but found a workaround. After model.fit(), use model.save("modelName"). It will create a folder in which the model is saved.
To load the model, use keras.models.load_model("modelName").
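Expanding on that workaround: the unpicklable lock comes from the Keras/TensorFlow model inside the agent, so one approach (a sketch; agent.model is an assumed attribute name, replace it with wherever your class keeps its network) is to save the model with Keras's own serializer and pickle only the plain-Python state:

import pickle
import tensorflow as tf

def save_agent(agent, model_dir='agent_model', state_file='agent_state.pickle'):
    # Save the Keras model separately; it holds _thread.RLock objects that pickle rejects.
    agent.model.save(model_dir)
    # Pickle everything else (replay buffer, hyperparameters, counters, ...).
    state = {k: v for k, v in vars(agent).items() if k != 'model'}
    with open(state_file, 'wb') as f:
        pickle.dump(state, f)

def load_agent_model(model_dir='agent_model'):
    # Rebuild the network half of the agent; restore the pickled state separately.
    return tf.keras.models.load_model(model_dir)

This avoids pickling the model object at all, which is usually where the lock lives.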

TensorFlow restored model gives different results

I have recently been working on word2vec in TensorFlow and it works well, so I decided to try to save and load it, but when I restore the model it gives different results, and it does this every time I restore it. Here is my code:
https://github.com/drok0920/cobalt/tree/master
I am sorry if I am using terms improperly, as I am relatively new to this topic.
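Without digging into the linked repository it is hard to be definitive, but a very common cause of this symptom with word2vec is re-running the variable initializer after restoring, which overwrites the restored embeddings with fresh random values. A minimal TF1-style save/restore sketch (using tf.compat.v1 so it also runs on a TF2 install; names and shapes are placeholders):

import tensorflow as tf
tf.compat.v1.disable_eager_execution()

embeddings = tf.compat.v1.get_variable('embeddings', shape=[1000, 128])
saver = tf.compat.v1.train.Saver()

# Training side: initialize, train, then save.
with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    # ... training steps ...
    saver.save(sess, './word2vec.ckpt')

# Restoring side: restore only. Do NOT run global_variables_initializer()
# after restore, because it would re-randomize the embeddings and give
# different results on every run.
with tf.compat.v1.Session() as sess:
    saver.restore(sess, './word2vec.ckpt')
    print(sess.run(embeddings)[0][:5])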

TensorFlow - Any input gives me the same output

I am facing a very strange problem: I am building an RNN model using TensorFlow and then storing all the model variables using tf.train.Saver after I finish training.
During testing, I just build the inference part again and restore the variables into the graph. The restoration part does not give any error.
But when I start testing on the evaluation set, I always get the same output from the inference call, i.e. for all test inputs I get the same output.
I printed the output during training and I do see that the output is different for different training samples and that the cost is also decreasing.
But when I do testing, it always gives me the same output no matter what the input is.
Can someone help me understand why this could be happening? I want to post a minimal example, but as I am not getting any error, I am not sure what I should post here. I will be happy to share more information if that helps.
One difference between the inference graph during training and testing is the number of time steps in the RNN. During training I train for n steps (n = 20 or more) per batch before updating the gradients, while for testing I just use one step, as I only want the prediction for that input.
Thanks
I have been able to resolve this issue. It seemed to be happening because one of my input features was very dominant in its original values, due to which, after some operations, all values were converging to a single number.
Scaling that feature helped to resolve this (see the sketch below).
Thanks
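For reference, this is roughly what the fix looks like (a sketch with synthetic data; the point is to compute the scaling statistics on the training set only and reuse them at test time):

import numpy as np

rng = np.random.default_rng(0)
scales = np.array([1.0, 1.0, 1e4])              # the third feature dominates the others
X_train = rng.normal(size=(100, 3)) * scales
X_test = rng.normal(size=(20, 3)) * scales

# Standardize each feature to zero mean and unit variance.
mean = X_train.mean(axis=0)
std = X_train.std(axis=0) + 1e-8                # avoid division by zero
X_train_scaled = (X_train - mean) / std
X_test_scaled = (X_test - mean) / std

After scaling, no single feature can dominate the RNN's activations and push every input toward the same output.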
Can you create a small reproducible case and post this as a bug to https://github.com/tensorflow/tensorflow/issues ? That will help this question get attention from the right people.