How Can I Get the Training Pipeline from the ITransformer of an ML Model in ML.NET - asp.net-core

ITransformer mlmodel = mlContext.Model.Load(path, out var inputschema);
This is my saved model. I want to change its input schema, but I want to retain the trained model.
I have tried
mlmodel.Transform(trainingDataView);
where trainingDataView is my new data, but it has not helped.
How can I get the training pipeline back from an ITransformer model that has already been trained?

Related

Can't save model in saved_model format when fine-tuning a BERT model

When training the BERT model, the weights are saved fine, but the entire model is not.
After model.fit, saving with model.save_weights('bert_xxx.h5') and loading with load_weights works fine, but since only the weights are saved, the model architecture has to be rebuilt separately.
So I want to save the entire model at once.
However, the following error occurs.
The TensorFlow version was 2.4, and the BERT code used was https://qiita.com/namakemono/items/4c779c9898028fc36ff3
Why are only the weights saved and not the entire model?
And how can I save the whole model?
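A minimal sketch of the usual way around this (not from the original post; the tiny stand-in model below exists only to make the example runnable, it is not the BERT model): export in the TensorFlow SavedModel format, which does not require every custom layer to be HDF5-serializable, or implement get_config() on the custom layers if a single .h5 file is needed.

import tensorflow as tf

# Tiny stand-in for the BERT-based model from the post; replace with the real model.
inputs = tf.keras.Input(shape=(128,), dtype=tf.int32)
x = tf.keras.layers.Embedding(30522, 64)(inputs)
x = tf.keras.layers.GlobalAveragePooling1D()(x)
outputs = tf.keras.layers.Dense(2, activation='softmax')(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Option 1: SavedModel format (a directory, not a single file). This usually
# works even when custom layers cannot be serialized to HDF5.
model.save('bert_saved_model')
restored = tf.keras.models.load_model('bert_saved_model')

# Option 2: a single HDF5 file. This only works if every custom layer
# implements get_config() and is passed via custom_objects when loading.
# model.save('bert_full.h5')

If the error points at a custom layer, the usual fix is to add a get_config() method to that layer so Keras can reconstruct it when the whole model is loaded.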

Training a keras model on pretrained weights using load_weights()

I am using a custom Keras model in a Databricks environment.
For a custom Keras model, model.save("model.h5") does not work, because the custom model is not serializable. Instead, it is recommended to use model.save_weights(path) as an alternative.
model.save_weights(pathDirectory) works. This yields three files (checkpoint, .data-00000-of-00001, .index) in pathDirectory.
For loading the weights, the following mechanism works fine:
model = Model()
model.load_weights(path)
But I want to continue training my model from the pretrained weights I just saved, i.e. save the weights and then keep training on top of them.
So when I load the model weights and run the training loop, I get this error: TypeError: 'CheckpointLoadStatus' object is not callable
After much research, I found a workaround:
we can also save the model using model.save("model.hpy5") and read the saved model back in Databricks.
model.h5 does not work for customized models, but it works for standard models.
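For completeness, here is a minimal sketch of continuing training after load_weights (the toy model and random data are placeholders, not the custom model from the post). The key point is that load_weights returns a CheckpointLoadStatus object; calling that return value is what raises the TypeError, so keep training on the model object itself:

import numpy as np
import tensorflow as tf

# Toy stand-in for the custom model; the weight-saving/loading pattern is the same.
def build_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation='relu', input_shape=(8,)),
        tf.keras.layers.Dense(1)
    ])
    model.compile(optimizer='adam', loss='mse')
    return model

x, y = np.random.rand(32, 8), np.random.rand(32, 1)   # dummy training data

model = build_model()
model.fit(x, y, epochs=1, verbose=0)
model.save_weights('ckpt/weights')        # writes checkpoint, .index, .data-* files

model = build_model()
status = model.load_weights('ckpt/weights')   # returns a CheckpointLoadStatus
# Calling status(...) raises TypeError: 'CheckpointLoadStatus' object is not callable.
# Continue training on the model itself instead:
model.fit(x, y, epochs=2, verbose=0)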

How to train my already trained model with a new class?

I am fairly new to MXNet, and I wanted to ask whether I can run a command to train my already trained model on a custom dataset. The first time, I trained my model with only one class, ['dog']; after training, I want to train it again with a new class, 'cat', so the classes become ['dog', 'cat']. Is this possible? Thanks in advance.
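One common approach (a rough Gluon sketch, not from the original thread; resnet18_v2 is only a stand-in for the already trained model): keep the trained feature layers, swap in a new output layer sized for the new class list, and fine-tune on data that contains both classes.

import mxnet as mx
from mxnet import gluon, init
from mxnet.gluon.model_zoo import vision

ctx = mx.cpu()

# Stand-in for the already trained one-class model.
pretrained = vision.resnet18_v2(pretrained=True, ctx=ctx)

# New network with a 2-class head: ['dog', 'cat'].
finetune_net = vision.resnet18_v2(classes=2)
finetune_net.features = pretrained.features             # reuse the learned features
finetune_net.output.initialize(init.Xavier(), ctx=ctx)  # only the new head starts from scratch

trainer = gluon.Trainer(finetune_net.collect_params(), 'sgd', {'learning_rate': 0.001})
loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
# ...then run a normal training loop over a dataset labelled with both classes.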

TensorFlow: Is it possible to identify whether data was used for training?

I have created a text classification model (.pb) using TensorFlow. Prediction works well.
Is it possible to check whether the sentence being used for prediction was already used to train the model? I need to retrain the model when a new sentence is given to it to predict.
I did some research and couldn't find a way to get the training data from the .pb file alone, because that file only stores the learned parameters and not the actual training data; but if you still have the dataset, you can easily verify a sentence against it.
I don't think you can ever recover the exact training data from only the trained model, because the model contains the learned parameters, not the actual training data.
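If the training corpus is still available, the check suggested above is straightforward. A minimal sketch, assuming the training sentences are stored one per line in a file called train_sentences.txt (the file name and the lower-casing rule are assumptions, not from the original post):

# Hypothetical file name; adjust to wherever your training sentences live.
def load_training_sentences(path='train_sentences.txt'):
    with open(path, encoding='utf-8') as f:
        return {line.strip().lower() for line in f}

def seen_in_training(sentence, training_set):
    return sentence.strip().lower() in training_set

training_set = load_training_sentences()
if not seen_in_training('is this sentence new?', training_set):
    pass  # queue the sentence for the next retraining run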

Can I retrain an old model with new data using TensorFlow?

I am new to TensorFlow and I am just trying to see if my idea is even possible.
I have trained a model with a multi-class classifier. Now I can classify an input sentence, but I would like to change the result of the CNN, for example to improve the classification score or to change the predicted class.
I want to try training just a single sentence with its class on an already trained model; is this possible?
If I understand your question correctly, you are trying to reload a previously trained model either to run it through further iterations, test it on a new sentence, or fine-tune it a bit. If this is the case, yes, you can do this. Look into saving and restoring models (https://www.tensorflow.org/api_guides/python/state_ops#Saving_and_Restoring_Variables).
To give you a rough outline, when you initially train your model, after setting up the network architecture, set up a saver:
trainable_var = tf.trainable_variables()
sess = tf.Session()
saver = tf.train.Saver()
sess.run(tf.global_variables_initializer())
# Run/train your model until some completion criterion is reached
#....
#....
saver.save(sess, 'model.ckpt')
Now, to reload your model:
saver = tf.train.import_meta_graph('model.ckpt.meta')
saver.restore(sess, 'model.ckpt')
# Note: if you have already defined all variables before restoring the model, import_meta_graph is not necessary
This will give you access to all the trained variables and you can now feed in whatever new sentence you have. Hope this helps.
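To actually run the extra training steps after restoring, the continuation looks roughly like the sketch below. The tensor/op names x, y_, and train_op and the input shapes are placeholders for whatever the original graph defines, not names from the answer above:

import numpy as np
import tensorflow as tf

sess = tf.Session()
saver = tf.train.import_meta_graph('model.ckpt.meta')
saver.restore(sess, 'model.ckpt')

# Placeholder tensor/op names -- use the names your own graph actually defines.
graph = tf.get_default_graph()
x = graph.get_tensor_by_name('x:0')              # input features
y_ = graph.get_tensor_by_name('y_:0')            # target labels
train_op = graph.get_operation_by_name('train_op')

# Illustrative single new example; shapes must match your model's input.
new_sentence_features = np.zeros((1, 100), dtype=np.float32)
new_label = np.array([2], dtype=np.int64)

# A few extra gradient steps on the one new (sentence, class) pair.
for _ in range(10):
    sess.run(train_op, feed_dict={x: new_sentence_features, y_: new_label})

saver.save(sess, 'model.ckpt')   # overwrite the checkpoint with the fine-tuned weights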