Is it possible to train a new cascade classifier from an existing xml file?

I am currently using OpenCV's haarcascade_frontalface_alt.xml as a face detector. When I test it, it fails to find the face in some images. I am wondering whether it is possible to train a new classifier from the current one by adding those failure cases.

You have to retrain with your new dataset :( The cascade XML only stores the trained stage classifiers, not the training samples, so there is nothing in it you can add new examples to.
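Before committing to a full retrain, it can also be worth checking whether the misses come from the detection parameters rather than the cascade itself. A minimal sketch, with placeholder file names, that loosens detectMultiScale to be more sensitive:

```python
import cv2

# Load the existing cascade; the XML only stores the trained stages,
# so it cannot be extended with new samples after the fact.
cascade = cv2.CascadeClassifier("haarcascade_frontalface_alt.xml")

img = cv2.imread("missed_face.jpg")            # one of the failing images
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)                  # normalize lighting

# A smaller scale step and fewer required neighbors make the detector
# more sensitive, at the cost of more false positives.
faces = cascade.detectMultiScale(
    gray,
    scaleFactor=1.05,
    minNeighbors=3,
    minSize=(30, 30),
)
print("Detections:", faces)
```

If the faces are still missed with relaxed parameters, retraining with opencv_traincascade on an extended dataset is the remaining option.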

Related

Deploying recommendation model from Tensorflow/models after training?

I followed the small tutorial here: https://github.com/tensorflow/models/tree/master/official/recommendation
to train a recommendation model based on the ml-1m movielens dataset. How would I go about deploying this to start using it?
I've tried adding my own code to convert the Keras model into TFLite to put on Firebase, but converter.convert() throws a ValueError. I've also looked into TensorFlow Serving, but the checkpoint the training script outputs does not appear to be in the format Serving needs. I'm not even sure how to format the input data to get recommendations.
I am new to ml and tensorflow, so I appreciate details. Thank you.
The content recommendation codelab from Firebase ML codelabs has detailed steps on how to train, convert, and deploy a TFLite recommendations model to a mobile app.
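If the ValueError comes from ops that have no TFLite builtin kernel, a common workaround is to allow select TensorFlow ops during conversion. A minimal sketch, with a placeholder model standing in for the trained recommendation model:

```python
import tensorflow as tf

# Placeholder model; replace with the Keras model produced by the
# tutorial's training script.
keras_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,), dtype=tf.int32),
    tf.keras.layers.Embedding(input_dim=1000, output_dim=16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1),
])

converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)

# Fall back to select TF ops for layers without a TFLite builtin kernel;
# missing kernels are a frequent cause of ValueError in convert().
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()

with open("recommendation.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting .tflite file is what gets uploaded to Firebase ML and bundled with the app.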

How to build a TensorFlow object detection model for custom classes that also includes the 90 classes the SSD Mobilenet model contains

I am building a new TensorFlow model based on the SSD Mobilenet V1 COCO model to perform real-time object detection in a video. I am trying to find out whether there is a way to add new classes to the existing model, so that my model keeps all 90 classes available in SSD Mobilenet COCO V1 and also contains the new classes I want to detect.
For example, I have created training data for two classes: man, woman
Now, I have built a new TensorFlow model that identifies a man and/or woman in a video. However, my model does not have the other 90 classes present in the original SSD Mobilenet model. I am looking for a way to combine both models, or pass more than one model to my code, to detect the objects.
If anything is unclear, please feel free to ask me for more details.
The only way I can see is to get the dataset the SSD Mobilenet model was originally trained on.
Make sure all the images are in one directory and the annotations in another.
There should be a corresponding annotation file for each image file,
e.g. myimage.jpg and myimage.xml.
If the images in your custom dataset are in the same format as the SSD Mobilenet dataset, annotate them with a tool called LabelImg.
Add those images and annotation files to the respective images and annotations directories where the SSD Mobilenet data is already stored.
Then regenerate the TFRecords (a simplified sketch is shown below) and continue with the rest of the training procedure.
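As a rough illustration of the TFRecord step, the sketch below writes one tf.train.Example per image/annotation pair. The paths and the label map are placeholders, and a real Object Detection API record also needs bounding boxes, image dimensions, and class names; the API ships create_*_tf_record scripts that handle those details.

```python
import os
import xml.etree.ElementTree as ET

import tensorflow as tf

IMAGES_DIR = "dataset/images"             # COCO images plus your own
ANNOTATIONS_DIR = "dataset/annotations"   # matching Pascal VOC XML files
LABEL_MAP = {"man": 91, "woman": 92}      # new ids after the 90 COCO classes

def make_example(image_path, xml_path):
    """Build a minimal tf.train.Example from one image and its VOC XML."""
    with tf.io.gfile.GFile(image_path, "rb") as f:
        encoded_jpg = f.read()
    root = ET.parse(xml_path).getroot()
    labels = [LABEL_MAP[obj.find("name").text]
              for obj in root.findall("object")
              if obj.find("name").text in LABEL_MAP]
    feature = {
        "image/encoded": tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[encoded_jpg])),
        "image/object/class/label": tf.train.Feature(
            int64_list=tf.train.Int64List(value=labels)),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

with tf.io.TFRecordWriter("train.record") as writer:
    for name in os.listdir(IMAGES_DIR):
        stem, _ = os.path.splitext(name)
        xml_path = os.path.join(ANNOTATIONS_DIR, stem + ".xml")
        if os.path.exists(xml_path):
            example = make_example(os.path.join(IMAGES_DIR, name), xml_path)
            writer.write(example.SerializeToString())
```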
You can use transfer learning with the TensorFlow API.
Transfer learning allows you to load a pre-trained network and modify the fully connected layer by introducing your own classes.
There is a full description of this in the following references:
Codelab
A good explanation here
The TensorFlow API here for more details
You can also use Google Cloud Platform for better and faster results.
I hope this helps you.
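For illustration, here is a minimal Keras transfer-learning sketch of that idea: load a pre-trained network and swap the fully connected layer for one sized to your classes. For detection models the Object Detection API does the equivalent through the fine_tune_checkpoint setting in the pipeline config; this sketch only shows the classification case, with a placeholder class count and dataset.

```python
import tensorflow as tf

NUM_CLASSES = 2  # e.g. "man" and "woman"

# Load a network pre-trained on ImageNet, without its classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained feature extractor

# Replace the fully connected layer with one for the new classes.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_dataset, epochs=5)   # train only the new head on your data
```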
I don't think there is a way to add your classes to the existing 90 classes without using the dataset the model was previously trained with. Your only option is to combine that dataset with your own and retrain the model.

How to train the bigger version of SSD (600x600?) in the TensorFlow Object Detection API?

The given config files for the SSD models have 300x300 as the input size.
I would like to train the model with the bigger version to try to get better accuracy, as reported in the paper. How do I do this? Do I simply change the values in the config file, or is there a specific way to do it?
Yep, you just have to modify that parameter in the config file before training.
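The parameter in question is the fixed_shape_resizer block of the pipeline config; you can edit the two numbers by hand, or change them programmatically. A sketch using the Object Detection API's config utilities, with placeholder paths (check the field layout against the pipeline.config of your installed release):

```python
from object_detection.utils import config_util

# Read the existing pipeline config into its component protos.
configs = config_util.get_configs_from_pipeline_file("pipeline.config")

# Bump the SSD input resolution from 300x300 to 600x600.
resizer = configs["model"].ssd.image_resizer.fixed_shape_resizer
resizer.height = 600
resizer.width = 600

# Write the modified config back out for training.
pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
config_util.save_pipeline_config(pipeline_proto, "configs/ssd_600x600/")
```

Note that a larger input size also increases memory use, so the batch size may need to be reduced.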

Fine-tuning a TensorFlow seq2seq model

I've trained a seq2seq model for machine translation (DE-EN), and I have saved the trained model checkpoint. Now I'd like to fine-tune this checkpoint on some domain-specific data samples that were not seen in the previous training phase. Is there a way to achieve this in TensorFlow, for example by modifying the embedding matrix somehow?
I couldn't find any relevant papers or works addressing this issue.
Also, I'm aware that the vocabulary files need to be updated according to the new sentence pairs. But do we then have to start training from scratch again? Isn't there an easy way to dynamically update the vocabulary files and embedding matrix according to the new samples and continue training from the latest checkpoint?
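If the vocabulary is kept fixed (e.g. by using subword units that already cover the new domain), continuing from the latest checkpoint is straightforward; changing the vocabulary changes the embedding matrix shape, so the old checkpoint no longer restores cleanly. A minimal sketch of the fixed-vocabulary case, with a placeholder model and paths, assuming a tf.train.Checkpoint-based setup:

```python
import tensorflow as tf

# Placeholder network standing in for the trained DE-EN seq2seq model;
# in practice this must be the same model definition used originally.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=32000, output_dim=256),
    tf.keras.layers.LSTM(256, return_sequences=True),
    tf.keras.layers.Dense(32000),
])
optimizer = tf.keras.optimizers.Adam(1e-4)   # smaller LR for fine-tuning

# Restore the latest checkpoint and continue training on domain data.
ckpt = tf.train.Checkpoint(model=model, optimizer=optimizer)
latest = tf.train.latest_checkpoint("checkpoints/")   # placeholder directory
if latest:
    ckpt.restore(latest).expect_partial()

# for batch in domain_dataset:      # new in-domain sentence pairs
#     ... run the same training step / loss as the original run ...

ckpt.save("checkpoints/finetuned")
```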

How can I use a Torch model?

I have a Torch model that was trained on a large-scale dataset (the Places dataset), and its authors uploaded it to GitHub. I am working on a similar project and want to reuse its trained weights instead of training on the large dataset myself, to save time and effort. Is that possible? How can I extract only the trained filter weights? I don't want to copy the code, I only want to reuse the weights.
NOTE: I use TensorFlow in my implementation.
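One possible route, assuming the released checkpoint is a PyTorch .pth state dict (for a Lua Torch .t7 file the torchfile package can read it instead): load the file once with torch, dump every trained tensor to numpy, and copy those arrays into your TensorFlow variables. The file name and layer names below are placeholders.

```python
import numpy as np
import torch  # used only to read the released checkpoint, not to train

# Load the published weights on CPU without needing the original code.
state_dict = torch.load("places_model.pth", map_location="cpu")

# Convert each trained tensor (conv filters, biases, ...) to numpy so it
# can be copied into the corresponding TensorFlow variables.
np_weights = {name: np.asarray(tensor) for name, tensor in state_dict.items()}

for name, array in np_weights.items():
    print(name, array.shape)

# When assigning to TF, remember the layout difference: PyTorch conv
# weights are (out, in, h, w) while TF expects (h, w, in, out), e.g.
# tf_kernel.assign(np.transpose(np_weights["conv1.weight"], (2, 3, 1, 0)))
```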