I am given two datasets.
First, the training data with a known class (target).
Second, the test data with no class (no target).
The dataset is imbalanced, so I split the training data into a train set and a validation set, oversample the train set, and evaluate on the validation set.
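For concreteness, here is a minimal sketch of that split-then-oversample setup, assuming imbalanced-learn's SMOTE (the variable names are illustrative):

from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split

# Split first, so the validation set keeps the real class distribution
X_train, X_val, Y_train, Y_val = train_test_split(
    X, Y, test_size=0.2, stratify=Y, random_state=42)

# Oversample only the training portion
X_train_smote, Y_train_smote = SMOTE(random_state=42).fit_resample(X_train, Y_train)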
After picking out the best model, should I fit it back on my entire dataset for my final prediction on the test (unseen) data:
from lightgbm import LGBMClassifier

model = LGBMClassifier()
model.fit(X, Y)
model.predict(test)
or should I fit it on the oversampled training data:
model = LGBMClassifier()
model.fit(X_train_smote, Y_train_smote)
model.predict(test)
I use TensorFlow 2 to train my CNN model. After training, I save the best model and want to test it on my test dataset. However, the result is much worse than the validation result during training. I find that when the test batch is bigger, the result is better, and when the test dataset is not shuffled, the result is worse.
My CNN model includes batch normalization layers; I guess that is the reason, but I can't resolve the problem. Help me, please. This is my code:
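(For context: batch normalization behaves differently at inference time. With training=False it normalizes with the moving averages accumulated during training, while with training=True it uses the current batch's statistics, which makes outputs depend on batch size and ordering. A minimal TF2 sketch of the difference, using a toy model since the original code is not shown:)

import numpy as np
import tensorflow as tf

# Toy model with a batch-norm layer (illustrative, not the asker's network)
inputs = tf.keras.Input(shape=(4,))
x = tf.keras.layers.BatchNormalization()(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)

batch = np.random.rand(8, 4).astype("float32")

# Inference mode: uses moving mean/variance, independent of batch composition
y_infer = model(batch, training=False)

# Training mode: normalizes with this batch's statistics,
# so results shift with batch size and shuffling
y_train_mode = model(batch, training=True)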
Here's the situation I'm working with. I've got ONE model but a bunch of different pre-trained sets of weights for it. I need to iterate through all these sets of weights and make a prediction for the same input once per set of weights. I am currently doing this basically as follows:
def ModelIterator(model, pastModelWeights, modelInput):
    elapsedIters = len(pastModelWeights)
    outputs = []
    for t in range(elapsedIters):
        iterModel = model  # This is just a Keras model object with no pre-set weights
        iterModel.set_weights(pastModelWeights[t])
        iterOutput = iterModel.predict(x=modelInput)
        outputs.append(iterOutput)
    return outputs
As you can see, this is really just a single model whose weights I'm swapping out for each iteration t in order to make predictions on the same input each time. Each prediction is very fast, but I need to do this with many (thousands of) sets of weights, and as elapsedIters increases, this loop becomes quite slow.
So the question is, can I parallelize this process? Instead of setting the model's weights for each t and generating predictions in series, is there a relatively simple (heh...) way any of you know of to make these predictions simultaneously?
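(Not a parallel solution, but one commonly cited speed-up for exactly this loop, offered here as a sketch: model.predict builds an input pipeline on every call, so for thousands of small predictions, calling the model directly is usually much faster while keeping the loop structure unchanged:)

def ModelIteratorDirect(model, pastModelWeights, modelInput):
    outputs = []
    for weights in pastModelWeights:
        model.set_weights(weights)
        # Direct __call__ avoids predict()'s per-call pipeline setup
        outputs.append(model(modelInput, training=False).numpy())
    return outputs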
When dealing with time series forecasting, I've seen most people follow these steps when using an LSTM model (a condensed sketch follows the list):
Obtain, clean, and pre-process data
Take out validation dataset for future comparison with model predictions
Initialise and train LSTM model
Use a copy of validation dataset to be pre-processed exactly like the training data
Use trained model to make predictions on the transformed validation data
Evaluate results: predictions vs validation
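A condensed sketch of those steps in Keras, assuming a univariate series scaled to [0, 1] and windowed into (samples, timesteps, features) arrays (all names here are illustrative):

import numpy as np
from sklearn.preprocessing import MinMaxScaler
from tensorflow import keras

# Scale, then window the series into supervised (window -> next value) pairs
scaler = MinMaxScaler()
scaled = scaler.fit_transform(series.reshape(-1, 1))  # series: 1-D numpy array

N = 30
X = np.array([scaled[i:i + N] for i in range(len(scaled) - N)])  # (samples, N, 1)
y = scaled[N:]

# Hold out the tail of the series as the validation period
split = int(0.8 * len(X))
X_train, X_val, y_train, y_val = X[:split], X[split:], y[:split], y[split:]

model = keras.Sequential([
    keras.layers.LSTM(64, input_shape=(N, 1)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=20)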
However, if the model is accurate, how do you make predictions that go beyond the end of the validation period?
The model only accepts data that has been transformed in the same way as the training data, but for predictions that go beyond the validation period, you don't have any input data to feed to it. So, how do people do this?
# Predictions vs validation
predictions = model.predict(transformed_validation)
# Future predictions
future_predictions = model.predict(?)
To predict the i-th value, your LSTM model needs the last N values.
So if you want to forecast, you should use each prediction to predict the next one.
In other words, you have to loop over something like
prediction = model.predict(X[-N:])
X.append(prediction)
As you can guess, you are feeding your output back in as input, which is why your predictions can diverge and amplify uncertainty.
Other models are more stable at predicting the far future.
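A minimal sketch of that recursive loop, assuming a Keras LSTM trained on windows of the last N values of a univariate, already-scaled series (names and shapes are illustrative; remember to invert the scaling afterwards):

import numpy as np

N = 30          # window length the model was trained on
HORIZON = 50    # steps to forecast beyond the available data

history = list(scaled_series)  # the transformed series, as in training
future = []
for _ in range(HORIZON):
    window = np.array(history[-N:]).reshape(1, N, 1)  # (batch, timesteps, features)
    next_value = model.predict(window, verbose=0)[0, 0]
    future.append(next_value)
    history.append(next_value)  # feed the prediction back in as input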
You have to break your data into training and testing sets and then fit your model. Finally, you make a prediction like this.
future_predictions = model.predict(X_test)
Check out the link below for all details.
Time-Series Forecasting: Predicting Stock Prices Using An LSTM Model
I built an image classification model (CNN) using tensorflow-keras. I have some new images which I need to feed into the same model in order to increase its accuracy.
I tried using the following code, but it decreases the accuracy.
re_calibrated_model = loaded_model.fit_generator(new_training_set,
                                                 steps_per_epoch=int(stp),
                                                 epochs=int(epc),
                                                 validation_data=new_test_set,
                                                 verbose=1,
                                                 validation_steps=50)
Is there any method that I can use to re-calibrate my CNN model?
Your new training session does not continue from the previous training accuracy if you use a completely different dataset for the second round of training.
You need to feed (old_images + new_images) to achieve what you intend.
What I normally do is train the CNN model on the first batch of images and save that model. If I need to "retrain" the model with additional images, I load the previously saved model from disk, apply the inputs (test and train), and call the fit method. As mentioned before, this only works if your outputs are exactly the same, i.e. if you are using the same input and output classes.
In my experience, training models using different image batches does not necessarily make your model more or less accurate, but rather increases the training time with each batch. Since I am using a CPU to train, my training time is about 30% faster if I train two batches of 1000 images each as opposed to training one batch of 2000 images, for example.
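A minimal sketch of that load-and-continue workflow, assuming the original model was saved with model.save and the class set is unchanged (paths and generator names are illustrative):

import tensorflow as tf

# Load the previously trained model, weights and optimizer state included
model = tf.keras.models.load_model("cnn_model.h5")

# combined_generator should yield old + new images together,
# e.g. an ImageDataGenerator flowing from a merged directory
model.fit(combined_generator,
          epochs=5,
          validation_data=validation_generator)

model.save("cnn_model_recalibrated.h5")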
I am training a UNET for semantic segmentation but I only have 200 labeled images. Given the small size of the dataset, it definitely needs some data augmentation techniques.
I have a question about the test and validation sets.
I have a custom data generator that keeps feeding data from a folder to train the model.
So what I plan to do is:
do data augmentation for the training set and keep all of it in the same folder
"randomly" pick some of training data into test and validation set (of course, before training).
I am not sure if this is fine, since we just do some simple processing (flipping, transposing, adjusting brightness)
Would it be better to separate the data first and do the augmentation for the rest of data in the training folder?
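For reference, a minimal sketch of that split-first approach, assuming the labeled images sit in one folder and can be copied by filename (paths and split ratios are illustrative):

import os
import random
import shutil

files = sorted(os.listdir("labeled_images"))
random.seed(42)
random.shuffle(files)

n = len(files)
splits = {"test": files[:n // 10],
          "val": files[n // 10:n // 5],
          "train": files[n // 5:]}

for split, names in splits.items():
    os.makedirs(split, exist_ok=True)
    for name in names:
        shutil.copy(os.path.join("labeled_images", name),
                    os.path.join(split, name))

# Augmentation (flipping, transposing, brightness) is then applied
# only to the files in train/, so val/ and test/ stay untouched.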