PyCaret model performance reducing after hyperparameter tuning

I'm trying to train a model using PyCaret and have noticed many times that model performance drops after hyperparameter tuning.
I have attached an image showing how I'm doing the tuning.
Can anyone suggest what I'm doing wrong?
Thanks in advance!

It also happened to me. After a little searching, from the documentation:
"In order to tune hyperparameters, the tune_model() function is used. This function automatically tunes the hyperparameters of a model on a pre-defined search space and scores it using stratified cross-validation."
So what I did was tune not only the best model but also the 2nd and 3rd best, in case of overfitting problems :).
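A minimal sketch of that approach, assuming PyCaret 2.x and a DataFrame `df` with a `'target'` column (both placeholders); `choose_better=True` tells `tune_model()` to return the original model whenever tuning does not improve the cross-validated score:

```python
# Minimal sketch, assuming PyCaret 2.x and a DataFrame `df` with a 'target' column.
from pycaret.classification import setup, compare_models, tune_model

setup(data=df, target='target', session_id=42)

# Keep the top 3 models instead of committing to only the best one
top3 = compare_models(n_select=3)

# Tune each candidate; choose_better=True falls back to the untuned model
# whenever tuning fails to beat its cross-validated score
tuned = [
    tune_model(m, n_iter=50, optimize='Accuracy', choose_better=True)
    for m in top3
]
```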

Related

Can I use EfficientNetB7 as my baseline model for image recognition?

I have seen many articles that used EfficientNetB0 as their baseline model, but I have not yet seen anyone use EfficientNetB7. From the EfficientNet GitHub page (https://github.com/qubvel/efficientnet) I saw that EfficientNetB7 achieved a very high accuracy result. Why doesn't everyone just use EfficientNetB7? Is it because of the memory limit, or is there some other reason to prefer EfficientNetB0?
A baseline is the result of a very basic model or approach to a problem. It is used to compare performance of more complex methods such as larger models, feature engineering or data augmentation.
EfficientNetB0 is used because it is a reliable model with reasonably good accuracy and is fast to train thanks to its low parameter count.
EfficientNetB7 could also serve as a baseline model; however, when testing non-architecture-related changes, such as the data augmentation mentioned earlier, retraining the large network will take longer and slow down your iteration speed.
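As a rough illustration of the size gap, here is a sketch assuming tf.keras 2.3 or later, which ships both variants:

```python
# Sketch assuming tf.keras >= 2.3, which includes the EfficientNet family.
import tensorflow as tf

# EfficientNetB0: a small, fast baseline that is cheap to retrain per experiment
b0 = tf.keras.applications.EfficientNetB0(weights='imagenet', include_top=False, pooling='avg')

# EfficientNetB7: a much larger model, far slower to retrain when iterating
b7 = tf.keras.applications.EfficientNetB7(weights='imagenet', include_top=False, pooling='avg')

# Compare the parameter counts directly
print(b0.count_params(), b7.count_params())
```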

Does changing a token name in an image caption model affect performance?

If I train an image caption model then stop to rename a few tokens:
Should I train the model from scratch?
Or can I reload the model and continue training from the last epoch with the updated vocabulary?
Will either approach affect model accuracy/performance differently?
I would go for option 2.
When training the model from scratch, you initialize the model's weights randomly and then fit them to your problem. However, if, instead of using random weights, you use weights that have already been trained for a similar problem, you may decrease the convergence time. This option is somewhat similar to the idea of transfer learning.
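A minimal sketch of option 2 with tf.keras; `build_caption_model`, `new_vocab_size`, `train_ds`, and the weights file name are hypothetical placeholders for your own architecture and data:

```python
# Hypothetical sketch: warm-start a rebuilt captioning model from saved weights.
import tensorflow as tf

# Rebuild the architecture with the updated token mapping / vocabulary
model = build_caption_model(vocab_size=new_vocab_size)  # hypothetical builder function

# Copy weights layer-by-layer by name, skipping any layer whose shape changed
# (e.g. a resized embedding or output layer)
model.load_weights('caption_model.h5', by_name=True, skip_mismatch=True)

# Continue training from this warm start instead of random initialization
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.fit(train_ds, epochs=5)
```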
Just to give the other side a voice: what is actually the difference between training from scratch and reloading a model and continuing training?
Option 2 will converge faster, but option 1 will probably reach better performance and should thus be chosen. Do we really care about training time when trading it off against final performance? Usually we do not.
The further your model has already converged towards a specific problem, the harder it becomes to move it into another optimum. You might be lucky, and the chance that you are going down the right rabbit hole rises with similar tasks and similar data, yet with a change in your setup this cannot be guaranteed.
Initializing with a few epochs on a domain other than your target domain definitely makes sense and can be beneficial, yet the question arises why you would not train on your target domain from the very beginning.
Note: For a more substantial read I'd like to refer you to this paper, where they explain in more depth why domain is of the essence and transfer learning could mess with your final performance.
It depends on the number of tokens being relabeled compared to the total amount. Since you mention there are only a few of them, the optimal solution in my opinion is clear.
You should restart the training but initialize the weights with the values they had when the previous training stopped (again, it is crucial that the re-labeled samples are not a substantial fraction). This way, the model will likely converge faster than when starting from random weights, and also better than trying to re-fit ("forget") what it managed to learn in the previous training.
Topologically speaking, you are initializing at a position that is closer to a global minimum but has not yet taken any steps towards a particular local minimum.
Hope this helps.

Object Detection model stuck at low mAP

I am trying to reproduce the results of the SSDLite model reported in the MobileNetV2 paper (arXiv:1801.04381), which should achieve about 22.1% mAP on the COCO detection challenge. However, I am stuck at 9% mAP. This is strange because the model does work somewhat, but it is still far off from the reported result. Can this much of a gap be caused by hyperparameter/optimizer choices (I am using Adam instead of SGD), or is it almost certain that there is a bug in my implementation?
It is also worth mentioning that the model successfully overfits a small subset of the training set, but on the whole training set the loss seems to reach a plateau fairly quickly.
Has anyone encountered a problem similar to this?
Can this much of a gap be caused by hyperparameter/optimizer choices (I am using Adam instead of SGD), or is it almost certain that there is a bug in my implementation?
Even small changes in the hyperparameters, and a different optimizer choice, can strongly impact the training and the resulting precision of the model. So your low precision is not necessarily due to a bug; it could also be due to incorrect parametrization.
It is also worth mentioning that the model successfully overfits a small subset of the training set, but on the whole training set the loss seems to reach a plateau fairly quickly.
It seems like you are running into a local optimum that only works for a subset of your data, which could also be a pointer to suboptimal parametrization.
As @Matias Valdenegro also mentioned, to reproduce the exact result you might have to use the same parameters as the original implementation.
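If you want to move closer to the original training setup, here is a hedged PyTorch sketch of an SSD-style SGD configuration; the specific values are common defaults for such detectors, not the paper's exact ones, and `model` stands in for your SSDLite implementation:

```python
# Sketch (PyTorch): SGD with momentum and a step learning-rate schedule, as
# SSD-style detectors are usually trained; values are common defaults, not
# the exact ones from the MobileNetV2 paper.
import torch

optimizer = torch.optim.SGD(
    model.parameters(),   # `model` is your SSDLite implementation (placeholder)
    lr=0.01,
    momentum=0.9,
    weight_decay=4e-5,
)

# Decay the learning rate at fixed milestones during training
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[80, 100], gamma=0.1)
```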

Which model (GPT2, BERT, XLNet and etc) would you use for a text classification task? Why?

I'm trying to train a model for a sentence classification task. The input is a sentence (a vector of integers) and the output is a label (0 or 1). I've seen some articles here and there about using BERT and GPT2 for text classification tasks, but I'm not sure which one I should pick to start with. Which of these recent NLP models, such as the original Transformer, BERT, GPT2, or XLNet, would you start with, and why? I'd rather implement it in TensorFlow, but I'm flexible about going with PyTorch too.
Thanks!
It highly depends on your dataset, and it is part of the data scientist's job to find which model is more suitable for a particular task in terms of the selected performance metric, training cost, model complexity, etc.
When you work on the problem you will probably test all of the above models and compare them. Which one should you try first? Andrew Ng in "Machine Learning Yearning" suggests starting with a simple model so you can quickly iterate and test your ideas, data preprocessing pipeline, etc.:
Don’t start off trying to design and build the perfect system. Instead, build and train a basic system quickly—perhaps in just a few days.
Following this suggestion, you can start with a simpler model such as ULMFiT as a baseline, verify your ideas, and then move on to more complex models and see how they improve your results.
Note that modern NLP models contain a large number of parameters and are difficult to train from scratch without a large dataset. That's why you may want to use transfer learning: you can download a pre-trained model, use it as a basis, and fine-tune it on your task-specific dataset to achieve better performance and reduce training time.
I agree with Max's answer, but if the constraint is to use a state-of-the-art large pretrained model, there is a really easy way to do this: the library by HuggingFace called pytorch-transformers. Whether you choose BERT, XLNet, or whatever, they're easy to swap out. Here is a detailed tutorial on using that library for text classification.
EDIT: I just came across this repo, pytorch-transformers-classification (Apache 2.0 license), which is a tool for doing exactly what you want.
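For reference, a minimal fine-tuning sketch using the successor library `transformers` (the pytorch-transformers package was later renamed); `train_texts` and `train_labels` are placeholders for your data, and a real script would loop over mini-batches:

```python
# Sketch assuming a recent `transformers` release; train_texts/train_labels are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"   # swap for "xlnet-base-cased", etc.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tokenize the raw sentences and collect the binary labels
enc = tokenizer(train_texts, padding=True, truncation=True, return_tensors="pt")
labels = torch.tensor(train_labels)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**enc, labels=labels)   # one forward pass; batch this in practice
outputs.loss.backward()
optimizer.step()
```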
Well, as others mentioned, it depends on the dataset, and multiple models should be tried so the best one can be chosen.
However, sharing my experience: XLNet has beaten all the other models so far by a good margin. Hence, if learning is not the objective, I would simply start with XLNet, then try a few more down the line and conclude. It just saves exploration time.
The repo below is excellent for doing all of this quickly. Kudos to them.
https://github.com/microsoft/nlp-recipes
It uses Hugging Face transformers and makes them dead simple. 😃
I have used XLNet, BERT, and GPT2 for summarization tasks (English only). Based on my experience, GPT2 works the best among all 3 on short paragraph-size notes, while BERT performs better for longer texts (up to 2-3 pages). You can use XLNet as a benchmark.

How to fix incorrect guess in image recognition

I'm very new to this stuff, so please bear with me. I followed a quick, simple video about image recognition/classification on YT, and the program could indeed classify the image with a high percentage. But then I have some other images that were incorrectly classified.
On the TensorFlow site: https://www.tensorflow.org/tutorials/image_retraining#distortions
However, one should generally avoid point-fixing individual errors in the test set, since they are likely to merely reflect more general problems in the (much larger) training set.
So here are my questions:
What would be the best way to correct the program's guess? E.g. the image is B but the app returned "A - 70%, B - 30%".
If the answer to the first question is to retrain, how do I go about retraining the program without deleting the previously created bottleneck files? I.e. I want the program to keep learning while retaining the data I have already trained it to recognize.
Unfortunately there is often no easy fix, because the model you are training is highly complex and very hard for a human to interpret.
However, there are techniques you can use to try and reduce your test error. First make sure your model isn't overfitting or underfitting by observing the difference between train and test errors. If either is the case then try applying standard techniques, such as choosing a deeper model and/or using more filters if underfitting or adding regularization if overfitting.
Since you say you are already classifying correctly a high percentage of the time, I would start inspecting misclassified examples directly to try and gain insight into what you might be able to improve.
If possible, try to observe what your misclassified images have in common. If you are lucky they will all fall into one or a small number of categories. Here are some examples of what you might see and possible solutions (a short code sketch follows the list):
Problem: Dogs facing left are misclassified as cats
Solution: Try augmenting your training set with rotations
Problem: Darker images are being misclassified
Solution: Make sure you are normalizing your images properly
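Here is a minimal tf.keras sketch of the normalization and augmentation fixes above; `train_dir` and the specific parameter values are placeholders to adapt to your data:

```python
# Sketch (tf.keras): pixel normalization plus simple augmentation; values are placeholders.
import tensorflow as tf

datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255,     # normalize pixel values so darker images land on the same scale
    rotation_range=20,     # small random rotations
    horizontal_flip=True,  # mirror images so orientation matters less
)

train_gen = datagen.flow_from_directory(train_dir, target_size=(224, 224), batch_size=32)
```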
It is also possible that you have reached the limits of your current approach. If you still need to do better, consider trying a different approach, such as using a network pretrained for image recognition like VGG.
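A sketch of that pretrained-network route with tf.keras and VGG16; `num_classes` and the small classifier head are assumptions for illustration:

```python
# Sketch (tf.keras): frozen VGG16 backbone with a small classifier head on top.
import tensorflow as tf

base = tf.keras.applications.VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the pretrained convolutional features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation='softmax'),  # num_classes is a placeholder
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```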