YOLOv5 object detection training - object-detection

Please, I need your help with my YOLOv5 training process for object detection!
I am trying to train my YOLOv5 object detection model to detect small objects (scratches). To label my images I used Roboflow, where I applied some of the data augmentation and pre-processing that Roboflow offers as a service. When I finished the pre-processing and data augmentation steps, Roboflow gave me the choice of different output formats, in my case YOLOv5 PyTorch, and it split the data into training, validation and test sets for me. Hence, everything was set up as it should be for my data preparation, and at the end I got a folder with data.yaml and the images with their labels. In data.yaml I put the paths of my training and validation sets, as shown in the GitHub tutorial for YOLOv5. I followed the steps very carefully, though.
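For reference, a single-class Roboflow YOLOv5 export typically produces a data.yaml along these lines (the relative paths and the class name here are assumptions for a one-class scratch dataset):

train: ../train/images
val: ../valid/images
nc: 1
names: ['scratch']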
The problem is that when the training starts, I get nan in the obj and box columns, as you can see in the picture below, and I don't know why. Can someone relate to that or give me any clue to find the solution, please? It's my first project in computer vision.
This is what I get when the training process starts.
This is the last error message when the training finishes.
I think the problem maybe comes from here, but I don't know how to fix it. I used the YOLOv5 team's code as it is in the tutorial.
The training continues without any problem, but the mAP and precision remain 0 for the whole process!
PS: Here is the link to the tutorial I followed: https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data

This is what I would do to troubleshoot it:
- Run your code on Colab, because that environment is proven to work well.
- Confirm that your labels look good and are set up correctly. Can you check to ensure the classes look right? In one of the screenshots it looks like you have no labels.
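On the second point, here is a minimal sketch for checking YOLO-format labels (the train/labels path is an assumption; adjust it to your export's layout). Each line of a label file should be "class x_center y_center width height" with the four coordinates normalized to [0, 1]:

import glob

# assumption: labels live in train/labels/*.txt (standard YOLOv5 layout)
for path in glob.glob('train/labels/*.txt'):
    with open(path) as f:
        lines = f.read().splitlines()
    if not lines:
        print('empty label file:', path)
    for line in lines:
        cls, *coords = line.split()
        # class index must be >= 0 and coordinates must be normalized
        if int(cls) < 0 or any(not 0.0 <= float(c) <= 1.0 for c in coords):
            print('suspicious label in', path, ':', line)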

Running my code in Colab worked successfully and the results were good. I think the problem was in my personal laptop environment, maybe the version of PyTorch I was using ('1.10.0+cu113'), or something else! If you have any advice on setting up my environment for YOLOv5 properly, I would be happy to hear it, guys. Many thanks again to #alexheat.

I'm using YOLOv5 for my custom dataset too. This problem might be due to a directory misplacement.
Using a different version of PyTorch should not be a problem in itself; anyway, you can try using the versions they mention in 'requirements.txt'.
It's better if you run:
cd yolov5
pip3 install -r requirements.txt
Let me know if this helps.
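To rule out a broken PyTorch/CUDA pairing on the laptop (a plausible suspect given the '+cu113' build), a quick check using standard PyTorch calls is:

import torch

# prints the installed build and whether CUDA is actually usable;
# a mismatch between the wheel's CUDA version and the local driver
# is a common source of silent numeric problems
print(torch.__version__)
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))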

Related

Freeze Saved_Model.pb created from converted Keras H5 model

I am currently trying to train a custom model for use in Unity (Barracuda) for object detection, and I am struggling at what I believe to be the last part of the pipeline. Following various tutorials and git repos, I have done the following:
- Using Darknet, I trained a custom model based on Tiny-YOLOv2 (model tested successfully in a webcam Python script).
- I took the final weights from that training and converted them to a Keras (h5) file (model tested successfully in a webcam Python script).
- From Keras, I then used tf.saved_model to turn it into a saved_model.pb.
- From saved_model.pb, I then converted it using tf2onnx.convert to an ONNX file.
- Supposedly from there it can then work in one of a few Unity sample projects...
...however, this model fails to load in the Unity sample projects I've tried to use. From various posts it seems that I may need to use a 'frozen' saved_model.pb before converting it to ONNX. However, all the guides and Python functions that seem to be used for freezing saved models require a lot more arguments than I have awareness of, or data for, after going through so many systems. https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py - for example, after converting into Keras, I am only left with an h5 file, with no knowledge of what an input_graph_def or output_node_names might refer to.
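One thing worth noting: tf2onnx can also consume the Keras h5 file directly, which sidesteps the saved_model and freezing steps entirely. A minimal sketch, assuming the file is named model.h5 and tf2onnx is installed (check which ONNX opset your Barracuda version supports and pin it with --opset):

python -m tf2onnx.convert --keras model.h5 --output model.onnx --opset 9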
Additionally, for whatever reason, I cannot find any TF version (1 or 2) that can successfully run a Python script using 'from tensorflow.python.checkpoint import checkpoint_management'; it genuinely seems like it no longer exists.
I am not sure why I am going through all of these conversions and steps, but every attempt to find a cleaner process between training and Unity seemed to lead only to dead ends.
Any help or guidance on this topic would be sincerely appreciated, thank you.

CNTK - Faster R-CNN trained with my own labeled data set cannot train on more than 20 images

I'm working with CNTK Faster R-CNN object detection and I have been facing a problem.
To help you understand it, I will explain my work process from the start.
First, I followed https://learn.microsoft.com/en-us/cognitive-toolkit/object-detection-using-faster-r-cnn
to install all of the needed packages. I succeeded in that step. Then I tried the grocery data set, which contains 20 training images (I'm using AlexNet as the base model).
And the results were fine; everything looked to be working at this point.
Then I used VoTT to label my data set and put it into the data set folder of CNTK. I also used annotations_helper.py to generate the other input files to prepare for the model training step.
After I created My_DataSet_config.py and changed some configuration, I realized that I cannot train my data set on more than 20 images. Let's say I train on 30 images: the program errors out with something like gt_boxes is empty (it really is empty, but with some specific numbers of training images it's no longer empty).
So I tried to follow some instructions I found on GitHub, e.g. that the problem is in the image and annotation files, and to try deleting the offending image and running again.
I really did that, but it was not the solution in my case. Whenever the number of training images is not 20, I get the error again with some image. Please take a look. Thank you.
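As a sanity check, it may help to verify that every training image has non-empty annotation files before annotations_helper.py runs. This is a minimal sketch, assuming VoTT's CNTK export layout, where each image has sibling .bboxes.tsv and .bboxes.labels.tsv files (the folder name is a placeholder):

import glob
import os

# assumption: images and their annotation files live side by side in positive/
for img in glob.glob('positive/*.jpg'):
    stem = os.path.splitext(img)[0]
    for suffix in ('.bboxes.tsv', '.bboxes.labels.tsv'):
        ann = stem + suffix
        if not os.path.exists(ann) or os.path.getsize(ann) == 0:
            print('missing or empty annotation:', ann)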
Python 3.5
Windows
CNTK 2.7
Here is my data set configuration file.
Here is my model configuration file.

TensorBoard projector will compute PCA endlessly

I have just over 100k word embeddings which I created using gensim, each originally with 200 dimensions. I've been trying to visualize them in TensorBoard's projector, but I have only failed so far.
My problem is that TensorBoard seems to freeze while computing PCA. At first I left the page open for 16 hours, imagining that it was just too much to be calculated, but nothing happened. At that point I started testing different scenarios, in case all I needed was more time and I was trying to rush things. The following is a list of my tests so far, all of which failed at the same spot, computing PCA:
I plotted only 10 points of 200 dimensions;
I retrained my gensim model so that I could reduce its dimensionality to 100;
Then I reduced it to 10;
Then to 2;
Then I tried plotting only 2 points, i.e. 2 two dimensional points;
I am using Tensorflow 1.11;
You can find my last saved TensorFlow session here; would you mind trying it out?
I am still a beginner, therefore I used a couple of tutorials to get me started; I have been using Sud Harsan's work so far.
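For context, the TF 1.x projector setup described in those tutorials boils down to something like the sketch below (the variable name, log directory, metadata file, and the random placeholder vectors are all assumptions):

import numpy as np
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector

# placeholder for the real gensim vectors (num_words x dims)
vectors = np.random.rand(100, 200).astype(np.float32)

embedding_var = tf.Variable(vectors, name='word_embeddings')
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter('logdir', sess.graph)
    config = projector.ProjectorConfig()
    embedding = config.embeddings.add()
    embedding.tensor_name = embedding_var.name
    embedding.metadata_path = 'metadata.tsv'
    # writes projector_config.pbtxt so TensorBoard can find the tensor
    projector.visualize_embeddings(writer, config)
    tf.train.Saver([embedding_var]).save(sess, 'logdir/embeddings.ckpt')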
Any help is much appreciated. Thanks.
Updates:
A) I've found someone else dealing with the same problem; I tried the solution provided, but it didn't change anything.
B) I thought it could have something to do with my installation, therefore I tried uninstalling TensorFlow and reinstalling it; no luck. I then proceeded to create a new environment dedicated to TensorFlow, and that also didn't work.
C) Assuming there was something wrong with my code, I ran TensorFlow's basic embedding tutorial to check if I could open its projector's results. And guess what?! I still can't get past "Calculating PCA".
Now, I did visit the online projector example and that loads perfectly.
Again, Any help would be more than appreciated. Thanks!
I have the same problem with word2vec_basic.py.
My environment: Win10, conda, Python 3.6.7, TensorFlow 1.11, TensorBoard 1.11.
It may not be your fault, because I rolled back tensorflow & tensorboard from 1.11 to 1.7,
and guess what?! The projector appeared after just a few seconds!
reference
Update 10/11:
tensorboard & tensorflow 1.12 became available in conda today. I gave it a try, and this problem seems to be fixed.
As mentioned by Bluedrops, updating TensorBoard and TensorFlow seems to fix the problem.
I created a new environment with conda and installed the newest versions of TensorFlow, TensorBoard and their dependencies, and that seems to fix the issue.
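For the record, that amounts to something like the following (the environment name is arbitrary, and the pinned versions are just the ones reported to work in the answers above):

conda create -n tf112 python=3.6
conda activate tf112
conda install tensorflow=1.12 tensorboard=1.12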

TensorFlow object detection API not displaying global steps

I am new here. I recently started working with object detection and decided to use the TensorFlow Object Detection API. But when I start training the model, it does not display the global step like it should, although it's still training in the background.
Details:
I am training on a server and accessing it using OpenSSH on Windows. I trained a custom dataset, collecting the pictures and labeling them myself, using model_main.py. Also, until a couple of months back the API was a little different, and only recently did they change to the latest version. For instance, it used to use train.py for training instead of model_main.py. All the online tutorials I can find use train.py, so it might be a problem with the latest commit. But I can't find anyone else hitting this problem.
Thanks in advance!
Add tf.logging.set_verbosity(tf.logging.INFO) after the import section of the model_main.py script. It will display a summary after every 100th step.
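In other words, the top of model_main.py would look roughly like this (a sketch with the surrounding imports abbreviated):

from absl import flags
import tensorflow as tf

# added line: make the estimator log loss and the global step every 100 steps
tf.logging.set_verbosity(tf.logging.INFO)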
As Thommy257 suggested, adding tf.logging.set_verbosity(tf.logging.INFO) after the import section of model_main.py prints the summary after every 100 steps by default.
Further, to specify the frequency of the summary, change
config = tf.estimator.RunConfig(model_dir=FLAGS.model_dir)
to
config = tf.estimator.RunConfig(model_dir=FLAGS.model_dir, log_step_count_steps=k)
where it will print after every k steps.
Regarding the recent change to model_main, the previous version is available in the 'legacy' folder. I use train.py and eval.py from this legacy folder with the same functionality as before.
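If you take the legacy route, the usual invocation looks like this (the directory and config paths are placeholders):

python legacy/train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/pipeline.config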

TensorFlow: Softmax cross entropy with logits becomes inf

I am working on the TensorFlow for Poets tutorial. Most of the time, training fails with the error Nan in summary histogram.
I run the following command on the original data to retrain:
python -m scripts.retrain \
  --bottleneck_dir=tf_files/bottlenecks \
  --model_dir=tf_files/models/ \
  --summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" \
  --output_graph=tf_files/retrained_graph.pb \
  --output_labels=tf_files/retrained_labels.txt \
  --image_dir=/ml/data/images
This error has occurred in other reports as well. I followed the instructions there using tfdbg, which gave me a bit more insight (see below). However, I am still stuck, because I do not know why this happens or what I can do to fix it, without much experience in TF and neural networks. This is especially confusing because it happens with 100% tutorial code & data.
Here is the output from tfdbg. The first time the error appears:
And the node in detail:
To look at the retrain script you can find Google's original code here. It was not modified in my case. Sorry for not including it (too many characters).
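Some background on the error itself (general TF behavior, not specific to this script): inf in a softmax cross-entropy usually comes from taking the log of a probability that has underflowed to zero, which is why the fused op that works in log-space is preferred. A small TF 1.x demonstration:

import tensorflow as tf

logits = tf.constant([[100.0, -100.0]])
labels = tf.constant([[0.0, 1.0]])

# manual version: softmax underflows to exactly 0, and log(0) = -inf
manual = -tf.reduce_sum(labels * tf.log(tf.nn.softmax(logits)), axis=1)

# fused version: computed in log-space, stays finite (~200 here)
fused = tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=logits)

with tf.Session() as sess:
    print(sess.run([manual, fused]))  # [inf] vs. [200.]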
Hyperparameters & results
For additional information: training works with ridiculously small values for the learning rate (e.g. 0.000001). However, this does not lead to good results. No matter how many epochs I train, performance stays at a low level (probably stuck in a local minimum during optimisation).
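For reference, the learning rate in the retrain script is controlled by a flag, so an experiment like the one above would be run as (the value is just the example from the question; other flags as in the original command):

python -m scripts.retrain --learning_rate=0.000001 --image_dir=/ml/data/images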
I had searched about compatibility, since I was also on Python 2.7, but it said 3.5 is now the best version, with all the latest TensorFlow support. So I created a virtual environment with Python 3.5. I think that's where the stability issue comes from.
Are you sure the tf_files folder is being created?
I faced some issues with Python on the command line. I switched to Spyder and changed the input data variable as required in retrain.py, and it runs smoothly. I know it's not a solution, but a workaround.