I built a neural network model in Python 3.6.
I'm trying to predict the price of a condominium based on attributes such as latitude, longitude, distance to public transport, year built, and so on.
I use the same training set for the model. However, each time I run it, the values of the variables in the hidden layer that I print out are different.
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

testing_df_w_price = testing_df.copy()
testing_df.drop('PricePerSq', axis=1, inplace=True)
training_df, testing_df = training_df.drop(['POID'], axis=1), testing_df.drop(['POID'], axis=1)
col_train = list(training_df.columns)
col_train_bis = list(training_df.columns)
col_train_bis.remove('PricePerSq')
mat_train = np.matrix(training_df)
mat_test = np.matrix(testing_df)
mat_new = np.matrix(training_df.drop('PricePerSq', axis = 1))
mat_y = np.array(training_df.PricePerSq).reshape((training_df.shape[0],1))
prepro_y = MinMaxScaler()
prepro_y.fit(mat_y)
prepro = MinMaxScaler()
prepro.fit(mat_train)
prepro_test = MinMaxScaler()
prepro_test.fit(mat_new)
train = pd.DataFrame(prepro.transform(mat_train),columns = col_train)
test = pd.DataFrame(prepro_test.transform(mat_test),columns = col_train_bis)
# List of features
COLUMNS = col_train
FEATURES = col_train_bis
LABEL = "PricePerSq"
# Columns for tensorflow
feature_cols = [tf.contrib.layers.real_valued_column(k) for k in FEATURES]
# Training set and Prediction set with the features to predict
training_set = train[COLUMNS]
prediction_set = train.PricePerSq
# Train and Test
x_train, x_test, y_train, y_test = train_test_split(training_set[FEATURES] , prediction_set, test_size=0.25, random_state=42)
y_train = pd.DataFrame(y_train, columns = [LABEL])
training_set = pd.DataFrame(x_train, columns = FEATURES).merge(y_train, left_index = True, right_index = True) # good
# Training for submission
training_sub = training_set[col_train] # good
# Same thing but for the test set
y_test = pd.DataFrame(y_test, columns = [LABEL])
testing_set = pd.DataFrame(x_test, columns = FEATURES).merge(y_test, left_index = True, right_index = True) # good
# Model
# tf.logging.set_verbosity(tf.logging.INFO)
tf.logging.set_verbosity(tf.logging.ERROR)
regressor = tf.contrib.learn.DNNRegressor(feature_columns=feature_cols,
                                          hidden_units=[int(len(col_train)+1/2)],
                                          model_dir="/tmp/tf_model")

for k in regressor.get_variable_names():
    print(k)
    print(regressor.get_variable_value(k))
Example of hidden layer value difference
The variables are initialized with random values when you construct the network. Since your loss function is likely to have many local minima, the fitted parameters will change every time you run the network.
In addition, even if your loss function were convex (only one, global, minimum), the ordering of the variables is somewhat arbitrary. If, for example, you fit a network with one hidden layer with two hidden nodes, the parameters of node 1 in your first run might correspond to the parameters of node 2 in the next run, and vice versa.
In machine learning, the current "knowledge state" of your neural network is expressed through the weights of the connections in your graph. Viewed as a whole, your network represents a high-dimensional function, and the task of learning means finding the global optimum of this function. The learning process changes the weights of the connections in your neural network according to the specified optimizer, which in your case is the default of tf.contrib.learn.DNNRegressor (the Adagrad optimizer). But there are other parameters that affect the final "knowledge state" of your model, for instance (and I make no claim that the following list is complete):
The initial learning rate in your model
The learning rate schedule that adapts the learning rate over time
Any regularization and early stopping you may have defined
The initialization strategy used for the weights (e.g. He initialization or random initialization)
On top of that (and this is maybe the most important point for understanding why your weights differ after each retraining), you have to consider that you use a stochastic gradient descent algorithm during training. This means that for each optimization step the algorithm chooses a random subset of your whole training set. Therefore, one optimization step does not always point toward the global optimum of your high-dimensional function, but toward the steepest descent that can be computed from the randomly chosen subset. Because of this stochastic component in the optimization process, you will likely never reach the global optimum for your task. But with carefully chosen hyperparameters (and of course good data) you will reach a good approximate solution, which lies within a local optimum of the function and which can change every time you retrain the model.
So, to conclude: don't look at the weights to judge the performance of your model, because they will be slightly different each time. Use a performance measure instead, such as accuracy computed in cross validation or a confusion matrix computed on the test set.
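If what you mainly want is reproducible runs for debugging, a minimal sketch (assuming the TF 1.x contrib.learn API from your code and reusing your feature_cols and col_train) is to pin the random seeds via RunConfig:

import numpy as np
import tensorflow as tf

np.random.seed(42)  # seeds numpy-based shuffling; also pass random_state to train_test_split

# tf_random_seed pins the estimator's graph-level randomness (e.g. weight initialization)
config = tf.contrib.learn.RunConfig(tf_random_seed=42)
regressor = tf.contrib.learn.DNNRegressor(feature_columns=feature_cols,
                                          hidden_units=[int(len(col_train)+1/2)],
                                          model_dir="/tmp/tf_model",
                                          config=config)

With the same seed and the same input pipeline, two runs should then print the same variable values.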
P.S.: tf.contrib.learn.DNNRegressor is deprecated in the newest TensorFlow release, as you can see in the docs. Use tf.estimator.DNNRegressor instead.
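A rough migration sketch (assuming TF 1.x-style estimators; the input-function plumbing is omitted here), reusing FEATURES and col_train from the question:

feature_cols = [tf.feature_column.numeric_column(k) for k in FEATURES]
regressor = tf.estimator.DNNRegressor(feature_columns=feature_cols,
                                      hidden_units=[int(len(col_train)+1/2)],
                                      model_dir="/tmp/tf_model")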
Related
I am trying to find out how exactly the BatchNormalization layer behaves in TensorFlow. I came up with the following piece of code, which to the best of my knowledge should be a perfectly valid Keras model; however, the mean and variance of BatchNormalization don't appear to be updated.
From the docs (https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization):
in the case of the BatchNormalization layer, setting trainable = False on the layer means that the layer will be subsequently run in inference mode (meaning that it will use the moving mean and the moving variance to normalize the current batch, rather than using the mean and variance of the current batch).
I expect the model to return a different value with each subsequent predict call.
What I see, however, are the exact same values returned 10 times.
Can anyone explain to me why the BatchNormalization layer does not update its internal values?
import tensorflow as tf
import numpy as np

if __name__ == '__main__':
    np.random.seed(1)
    x = np.random.randn(3, 5) * 5 + 0.3

    bn = tf.keras.layers.BatchNormalization(trainable=False, epsilon=1e-9)
    z = input = tf.keras.layers.Input([5])
    z = bn(z)

    model = tf.keras.Model(inputs=input, outputs=z)

    for i in range(10):
        print(x)
        print(model.predict(x))
        print()
I am using TensorFlow 2.1.0.
Okay, I found the mistake in my assumptions. The moving average is updated during training, not during inference as I thought. This makes perfect sense, as updating the moving averages during inference would likely result in an unstable production model (for example, a long sequence of highly pathological input samples [e.g. ones whose generating distribution differs drastically from the one on which the network was trained] could potentially bias the network and result in worse performance on valid input samples).
The trainable parameter is useful when you're fine-tuning a pretrained model and want to freeze some of the layers of the network even during training, because when you call model.predict(x) (or even model(x) or model(x, training=False)), the layer automatically uses the moving averages instead of batch statistics anyway.
The code below demonstrates this clearly
import tensorflow as tf
import numpy as np

if __name__ == '__main__':
    np.random.seed(1)
    x = np.random.randn(10, 5) * 5 + 0.3

    z = input = tf.keras.layers.Input([5])
    z = tf.keras.layers.BatchNormalization(trainable=True, epsilon=1e-9, momentum=0.99)(z)

    model = tf.keras.Model(inputs=input, outputs=z)
    # a dummy loss function
    model.compile(loss=lambda x, y: (x - y) ** 2)

    # a dummy fit just to update the batchnorm moving averages
    model.fit(x, x, batch_size=3, epochs=10)

    # first predict uses the moving averages from training
    pred = model(x).numpy()
    print(pred.mean(axis=0))
    print(pred.var(axis=0))
    print()

    # outputs the same thing as the previous predict
    pred = model(x).numpy()
    print(pred.mean(axis=0))
    print(pred.var(axis=0))
    print()

    # calling the model with training=True updates the moving averages;
    # furthermore, it uses the batch mean and variance as in training,
    # so the result is very different
    pred = model(x, training=True).numpy()
    print(pred.mean(axis=0))
    print(pred.var(axis=0))
    print()

    # here we see again that the moving averages are used, but they differ slightly
    # from the previous call, as expected
    pred = model(x).numpy()
    print(pred.mean(axis=0))
    print(pred.var(axis=0))
    print()
In the end, I found that the documentation (https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization) mentions this:
When performing inference using a model containing batch normalization, it is generally (though not always) desirable to use accumulated statistics rather than mini-batch statistics. This is accomplished by passing training=False when calling the model, or using model.predict.
Hopefully this will help someone with a similar misunderstanding in the future.
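As an aside, here is a minimal fine-tuning sketch of the trainable=False use case mentioned earlier; the choice of MobileNetV2 and the input size are arbitrary placeholders, not part of the question:

import tensorflow as tf

# freeze a pretrained feature extractor, including its BatchNormalization layers,
# so neither its weights nor its moving statistics change during fit()
base_model = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                               include_top=False, pooling='avg')
base_model.trainable = False

inputs = tf.keras.layers.Input([224, 224, 3])
features = base_model(inputs, training=False)  # keep BatchNorm in inference mode
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(features)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy')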
I'm currently working with data that has a rare binary outcome, i.e. the response vector contains mostly 0s and only a few 1s (approximately 1.5% ones). I have about 20 continuous explanatory variables. I tried training models using GBM, random forests, and TensorFlow with the Keras backend.
I observed a particular behaviour of the models, regardless of which method I used:
The accuracy is high (~98%), but the model predicts a probability of ~98.5% for class "0" and ~1.5% for class "1" for every observation.
How can I prevent this behaviour?
I'm using RStudio. For example, a TF model with Keras would be:
model <- keras_model_sequential()
model %>%
  layer_dense(units = 256, activation = "relu", input_shape = c(20)) %>%
  layer_dense(units = 256, activation = "relu") %>%
  layer_dense(units = 2, activation = "sigmoid")

parallel_model <- multi_gpu_model(model, gpus = 2)
parallel_model %>% compile(
  optimizer = "adam",
  loss = "binary_crossentropy",
  metrics = "binary_accuracy")

history <- parallel_model %>% fit(
  x_train, y_train,
  batch_size = 64,
  epochs = 100,
  class_weight = list("0" = 1, "1" = 70),
  verbose = 1,
  validation_split = 0.2
)
But my observation is not limited to TF, which makes my question more general. I'm not asking for specific adjustments to the model above; rather, I'd like to discuss why all outcomes end up being assigned the same probability.
My guess is that the issue is connected to the loss function.
I know there is no way to use AUC as a loss function, since it's not differentiable. If I evaluate the models by AUC on unseen data, the result is no better than random guessing.
I don't mind answers with code in Python, since this is a question about general behaviour and algorithms rather than about code.
When your problem has unbalanced classes, I suggest using SMOTE (on the training data only!!! never use SMOTE on your test data!!!) before training the model.
For example:
from imblearn.over_sampling import SMOTE

# note: in newer imbalanced-learn releases this call is fit_resample() and the
# ratio argument has been renamed to sampling_strategy
X_trn_balanced, Y_trn_balanced = SMOTE(random_state=1, ratio=1).fit_sample(X_trn, Y_trn)

# next, fit the model with the balanced data
model.fit(X_trn_balanced, Y_trn_balanced)
In my (not so big) experience with AUC problems and rare positives, I work with models that have a single output class (not two): the output is either "result is positive (1)" or "result is negative (0)".
Metrics like accuracy are useless for these problems; you should use AUC-based metrics with big batch sizes.
For these problems, it doesn't matter that the output probabilities are very small, as long as there is a difference between them. (Forests, GBM, etc. will indeed output these small values, but this is not a problem.)
For neural networks, you can try to use class weights to increase the output probabilities. But notice that if you split the result into two separate classes (instead of having only one class that should be positive), weights won't help, because:
For the first class, with a low weight: predicting all ones is good
For the second class, with a high weight: predicting all zeros is good (weighted up to very good)
So, as an initial solution, you can:
Use a 'softmax' activation (to guarantee your model will have only one correct output) and a 'categorical_crossentropy' loss.
(Or, preferably) Use a model with only one output class and keep 'sigmoid' with 'binary_crossentropy'.
I always work with the preferable option above (a minimal sketch follows below). In this case, if you use batch sizes that are big enough to contain one or two positive examples (a batch size around 100 for you), weights may even be unnecessary. If the batch sizes are too small and many batches contain no positive results, you may get too many weight updates towards plain zeros, which is bad.
You may also resample your data and, for instance, multiply the number of positive examples by 10, so your batches contain more positives and training becomes easier.
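A minimal Keras sketch of that single-output setup, with dummy data standing in for your 20 features and ~1.5% positives (layer sizes, class weights, and epochs are placeholders, not tuned values):

import numpy as np
import tensorflow as tf

x_train = np.random.randn(1000, 20)
y_train = (np.random.rand(1000) < 0.015).astype("float32")  # ~1.5% positives

model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # a single output class
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# large batches so that most batches contain at least one positive example;
# class_weight is optional in this single-output setup
model.fit(x_train, y_train, batch_size=128, epochs=5,
          class_weight={0: 1.0, 1: 70.0}, validation_split=0.2)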
Example of AUC metric to determine when training should end:
# in python - considering outputs with only one class
import tensorflow as tf
from keras import backend as K

def aucMetric(true, pred):

    true = K.flatten(true)
    pred = K.flatten(pred)

    totalCount = K.shape(true)[0]

    values, indices = tf.nn.top_k(pred, k=totalCount)

    sortedTrue = K.gather(true, indices)
    tpCurve = K.cumsum(sortedTrue)

    negatives = 1 - sortedTrue
    auc = K.sum(tpCurve * negatives)

    totalCount = K.cast(totalCount, K.floatx())
    positiveCount = K.sum(true)
    negativeCount = totalCount - positiveCount
    totalArea = positiveCount * negativeCount

    return auc / totalArea
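A hedged usage sketch, assuming a model and data already defined as above: pass it to compile as a metric (not a loss) and monitor the validation value; the patience below is just an example.

from keras.callbacks import EarlyStopping

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=[aucMetric])
early_stop = EarlyStopping(monitor='val_aucMetric', mode='max', patience=10)
model.fit(x_train, y_train, validation_split=0.2, callbacks=[early_stop])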
When I read the TensorFlow guidance about graphs and sessions (Graphs and Sessions), I found that they suggest creating two graphs, one for training and one for validation.
I think this is reasonable and I want to use it, because my training and validation models are different (an encoder-decoder model with dropout). However, I don't know how to make the variables in the trained graph available to the test graph without using tf.train.Saver().
When I create two graphs and create variables inside each graph, I find that the two sets of variables are completely separate, as they belong to different graphs.
I have googled a lot and I know there are questions about this problem, such as question1, but there is still no useful answer. Is there any code example, or does anyone know how to create two graphs for training and validation separately, such as:
def train_model():
    g_train = tf.Graph()
    with g_train.as_default():
        train_models

def validation_model():
    g_test = tf.Graph()
    with g_test.as_default():
        test_models
One easy way of doing that is to create a 'forward function' that defines the model and changes behaviour based on extra parameters.
Here is an example:
def forward_pass(x, is_training, reuse=tf.AUTO_REUSE, name='model_forward_pass'):
    # Note the reuse attribute: it tells the getter to either create the variables or reuse the existing weights
    with tf.variable_scope(name, reuse=reuse):
        x = tf.layers.conv2d(x, ...)
        ...
        x = tf.layers.dense(x, ...)
        x = tf.layers.dropout(x, rate, training=is_training)  # Note the is_training attribute
        ...
    return x
Now you can call the forward_pass function anywhere in your code. You simply need to provide the is_training attribute to get the correct mode for dropout, for example. The reuse argument will automatically fetch the correct values for your weights as long as the name of the variable_scope is the same.
For example:
train_logits_model1 = forward_pass(x_train, is_training=True, name='model1')
# Graph is defined and dropout is used in training mode
test_logits_model1 = forward_pass(x_test, is_training=False, name='model1')
# Graph is reused but the dropout behaviour change to inference mode
train_logits_model2 = forward_pass(x_train2, is_training=True, name='model2')
# Name changed, model2 is added to the graph and dropout is used in training mode
To add to this answer: since you stated that you want to have 2 separate graphs, you could do that using an assign function:
train_graph = forward_pass(x, is_training=True, reuse=False, name='train_graph')
...
test_graph = forward_pass(x, is_training=False, reuse=False, name='test_graph')
...
train_vars = tf.get_collection('variables', 'train_graph/.*')
test_vars = tf.get_collection('variables', 'test_graph/.*')

test_assign_ops = []
for test, train in zip(test_vars, train_vars):
    test_assign_ops += [tf.assign(test, train)]
assign_op = tf.group(*test_assign_ops)

sess.run(assign_op)  # Replace the vars in test_graph by the ones in train_graph
I'm a big advocate of method 1, as it is way cleaner and reduces memory usage.
Here is what I'm trying to implement:
We calculate the loss based on F(X), as usual. But we also define an "adversarial loss", which is a loss based on F(X + e). e is defined as dF(X)/dX multiplied by some constant. Both the loss and the adversarial loss are backpropagated for the total loss.
In TensorFlow, this part (getting dF(X)/dX) can be coded as below:
grad, = tf.gradients( loss, X )
grad = tf.stop_gradient(grad)
e = constant * grad
Below is my PyTorch code:
class DocReaderModel(object):
    def __init__(self, embedding=None, state_dict=None):
        self.train_loss = AverageMeter()
        self.embedding = embedding
        self.network = DNetwork(opt, embedding)
        self.optimizer = optim.SGD(parameters)

    def adversarial_loss(self, batch, loss, embedding, y):
        self.optimizer.zero_grad()
        loss.backward(retain_graph=True)
        grad = embedding.grad
        grad.detach_()

        perturb = F.normalize(grad, p=2) * 0.5
        self.optimizer.zero_grad()
        adv_embedding = embedding + perturb
        network_temp = DNetwork(self.opt, adv_embedding)  # This is how to get F(X)
        network_temp.training = False
        network_temp.cuda()
        start, end, _ = network_temp(batch)  # This is how to get F(X)
        del network_temp  # I even deleted this instance.

        return F.cross_entropy(start, y[0]) + F.cross_entropy(end, y[1])

    def update(self, batch):
        self.network.train()
        start, end, pred = self.network(batch)
        loss = F.cross_entropy(start, y[0]) + F.cross_entropy(end, y[1])
        loss_adv = self.adversarial_loss(batch, loss, self.network.lexicon_encoder.embedding.weight, y)
        loss_total = loss + loss_adv

        self.optimizer.zero_grad()
        loss_total.backward()
        self.optimizer.step()
I have a few questions:
1) I substituted tf.stop_gradient with grad.detach_(). Is this correct?
2) I was getting "RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.", so I added retain_graph=True to the loss.backward call. That specific error went away.
However, now I'm getting a memory error after a few epochs (RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1525909934016/work/aten/src/THC/generic/THCStorage.cu:58). I suspect I'm unnecessarily retaining the graph.
Can someone let me know PyTorch's best practice on this? Any hint, even a short comment, will be highly appreciated.
I think you are trying to implement a generative adversarial network (GAN), but from the code I can't quite follow what you are trying to achieve, as there are a few missing pieces for a GAN to work. I can see there's a discriminator network module, DNetwork, but the generator network module is missing.
If I had to guess, when you say 'loss function twice', I assume you mean you have one loss function for the discriminator net and another for the generator net. If that's the case, let me share how I would implement a basic GAN model.
As an example, let's take a look at this Wasserstein GAN Jupyter notebook
I'll skip the less important bits and zoom into the important ones here:
First, import PyTorch libraries and set up
# Set up batch size, image size, and size of noise vector:
bs, sz, nz = 64, 64, 100 # nz is the size of the latent z vector for creating some random noise later
Build a discriminator module
class DCGAN_D(nn.Module):
    def __init__(self):
        ... truncated, the usual neural net stuff: layers, etc ...

    def forward(self, input):
        ... truncated, the usual neural net stuff: layers, etc ...
Build a generator module
class DCGAN_G(nn.Module):
    def __init__(self):
        ... truncated, the usual neural net stuff: layers, etc ...

    def forward(self, input):
        ... truncated, the usual neural net stuff: layers, etc ...
Put them all together
netG = DCGAN_G().cuda()
netD = DCGAN_D().cuda()
The optimizer needs to be told which variables to optimize. A module automatically keeps track of its variables.
optimizerD = optim.RMSprop(netD.parameters(), lr = 1e-4)
optimizerG = optim.RMSprop(netG.parameters(), lr = 1e-4)
One forward step and one backward step for Discriminator
Here, the network can calculate the gradient during the backward pass, depending on the input to this function. In my case, I have 3 types of losses: the generator loss, the discriminator real-image loss, and the discriminator fake-image loss. I can get the gradient of the loss function three times, for 3 different net passes.
def step_D(input, init_grad):
    # input can be the generator's generated image data or an input image from the dataset
    err = netD(input)
    err.backward(init_grad)  # backward pass through the net to calculate the gradient
    return err  # loss
Control trainable parameters [IMPORTANT]
Trainable parameters in the model are those that require gradients.
def make_trainable(net, val):
    for p in net.parameters():
        p.requires_grad = val  # note: this is later set to False for netD during the netG update in the train loop
In TensorFlow, this part can be coded like below:
grad = tf.gradients(loss, X)
grad = tf.stop_gradient(grad)
So I think this answers your first question: "I substituted tf.stop_gradient with grad.detach_(). Is this correct?"
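For completeness, a minimal self-contained sketch (with a stand-in model and input, not your DocReaderModel) of how that TensorFlow snippet usually translates to PyTorch:

import torch

X = torch.randn(4, 10, requires_grad=True)  # stand-in input
F = torch.nn.Linear(10, 1)                  # stand-in model
loss = F(X).sum()

# grad, = tf.gradients(loss, X); grad = tf.stop_gradient(grad)
grad, = torch.autograd.grad(loss, X, retain_graph=True)
grad = grad.detach()   # treat the gradient as a constant, like stop_gradient
e = 0.5 * grad         # constant * gradient, as in the question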
Train loop
You can see here how the 3 different loss functions are being called.
def train(niter, first=True):
    for epoch in range(niter):
        # Make an iterable from the PyTorch DataLoader
        data_iter = iter(dataloader)
        i = 0
        while i < n:
            ###########################
            # (1) Update D network
            ###########################
            make_trainable(netD, True)

            # train the discriminator d_iters times
            d_iters = 100
            j = 0
            while j < d_iters and i < n:
                j += 1
                i += 1

                # clamp parameters to a cube
                for p in netD.parameters():
                    p.data.clamp_(-0.01, 0.01)

                data = next(data_iter)

                ##### train with real #####
                real_cpu, _ = data
                real_cpu = real_cpu.cuda()
                real = Variable(data[0].cuda())
                netD.zero_grad()

                # Real image discriminator loss
                errD_real = step_D(real, one)

                ##### train with fake #####
                fake = netG(create_noise(real.size()[0]))
                input.data.resize_(real.size()).copy_(fake.data)

                # Fake image discriminator loss
                errD_fake = step_D(input, mone)

                # Discriminator loss
                errD = errD_real - errD_fake
                optimizerD.step()

            ###########################
            # (2) Update G network
            ###########################
            make_trainable(netD, False)
            netG.zero_grad()

            # Generator loss
            errG = step_D(netG(create_noise(bs)), one)
            optimizerG.step()

            print('[%d/%d][%d/%d] Loss_D: %f Loss_G: %f Loss_D_real: %f Loss_D_fake %f'
                  % (epoch, niter, i, n,
                     errD.data[0], errG.data[0], errD_real.data[0], errD_fake.data[0]))
"I was getting "RuntimeError: Trying to backward through the graph a second time..."
PyTorch has this behaviour; to reduce GPU memory usage, during the .backward() call, all the intermediary results (if you have like saved activations, etc.) are deleted when they are not needed anymore. Therefore, if you try to call .backward() again, the intermediary results don't exist and the backward pass cannot be performed (and you get the error you see).
It depends on what you are trying to do. You can call .backward(retain_graph=True) to make a backward pass that will not delete intermediary results, and so you will be able to call .backward() again. All but the last call to backward should have the retain_graph=True option.
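A minimal self-contained sketch of that pattern, with a placeholder model and two toy losses sharing one graph:

import torch

model = torch.nn.Linear(10, 1)                      # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(4, 10), torch.randn(4, 1)

out = model(x)
loss_a = ((out - y) ** 2).mean()   # first loss
loss_b = out.abs().mean()          # second loss sharing the same graph

optimizer.zero_grad()
loss_a.backward(retain_graph=True)  # keep the graph for the next backward
loss_b.backward()                   # last backward; the graph can now be freed
optimizer.step()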
Can someone let me know pytorch's best practice on this
As you can see from the PyTorch code above, and from the way things are generally done in PyTorch (which tries to stay Pythonic), you can get a sense of PyTorch's best practice there.
If you want to work with higher-order derivatives (i.e. a derivative of a derivative) take a look at the create_graph option of backward.
For example:
loss = get_loss()
loss.backward(create_graph=True)
loss_grad_penalty = loss + loss.grad
loss_grad_penalty.backward()
I am training a neural network using Keras. Every time I train my model, I use a slightly different set of features selected using tree-based feature selection via ExtraTreesClassifier(). After every training run, I compute the AUROC on my validation set and then go back in a loop to train the model again with a different set of features. This process is very inefficient, and I want to select the optimum number of features using some optimization technique available in a Python library.
The function to be optimized is the AUROC for cross validation, which can only be calculated after training the model on the selected features. The features are selected via the following function: ExtraTreesClassifier(n_estimators=10, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto'). Here we see that the objective function does not depend directly on the parameters to be optimized. The objective function, the AUROC, is related to the neural network training, and the neural network takes as input the features that are extracted on the basis of their importance by ExtraTreesClassifier.
So, in a way, the parameters for which I optimize the AUROC are n_estimators=10, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', or some other variables in ExtraTreesClassifier. These are not directly related to the AUROC.
You should combine GridSearchCV and Pipeline. Find more here.
Use Pipeline when you need to run a set of instructions in sequence to get the optimal configuration.
For example, you have these steps to run:
1. Select the KBest feature(s)
2. Use a classifier, DecisionTree or NaiveBayes
By combining GridSearchCV and Pipeline, you can select which features are best for a particular classifier, the best configuration of the classifier, and so on, based on the scoring criteria.
Example:
# set your configuration options
param_grid = [{
    'classify': [DecisionTreeClassifier()],  # first option: use DT
    'kbest__k': range(1, 22),                # range of n in SelectKBest(n)

    # classifier-specific configs
    'classify__criterion': ('gini', 'entropy'),
    'classify__min_samples_split': range(2, 10),
    'classify__min_samples_leaf': range(1, 10)
},
{
    'classify': [GaussianNB()],              # second option: use NB
    'kbest__k': range(1, 22),                # range of n in SelectKBest(n)
}]

pipe = Pipeline(steps=[("kbest", SelectKBest()), ("classify", DecisionTreeClassifier())])
# I put DT as the default, but the program will ignore it when you use GridSearchCV.

# Here is where GridSearchCV does its work; this may take time, especially if you
# have more than one classifier to be evaluated
grid = GridSearchCV(pipe, param_grid=param_grid, cv=10, scoring='f1')
grid.fit(features, labels)

# Find your best params if you want to use the optimal settings later without running
# the grid search again (by commenting out all these grid search lines)
print(grid.best_params_)

# You can now use Pipeline again to wrap the steps with the best configs to build your model
pipe = Pipeline(steps=[("kbest", SelectKBest(k=12)), ("classify", DecisionTreeClassifier(criterion="entropy", min_samples_leaf=2, min_samples_split=9))])
Hope this helps
The flow of my program is in two stages.
I am using the sklearn ExtraTreesClassifier along with the SelectFromModel method to select the most important features. Here it should be noted that ExtraTreesClassifier takes many parameters as input, like n_estimators etc., and eventually gives a different set of important features for different values of n_estimators via SelectFromModel. This means that I can optimize n_estimators to get the best features.
In the second stage, I am training my NN Keras model based on the features selected in the first stage. I am using AUROC as the score for the grid search, but this AUROC is calculated using the Keras-based neural network. I want to use grid search over n_estimators in my ExtraTreesClassifier to optimize the AUROC of the Keras neural network. I know I have to use Pipeline, but I am confused about implementing both together.
I don't know where to put Pipeline in my code. I am getting an error which says: TypeError: estimator should be an estimator implementing 'fit' method, <function fs at 0x0000023A12974598> was passed
#################################################################################
# I concatenate the CV set and the train set so that I may select the most
# important features in both CV and Train together.
#################################################################################
frames11 = [train_x_upsampled, cross_val_x_upsampled]
train_cv_x = pd.concat(frames11)
frames22 = [train_y_upsampled, cross_val_y_upsampled]
train_cv_y = pd.concat(frames22)

def fs(n_estimators):
    m = ExtraTreesClassifier(n_estimators=tree_number)
    m.fit(train_cv_x, train_cv_y)
    sel = SelectFromModel(m, prefit=True)

    ##################################################
    # The code below is to get the names of the selected important features
    ##################################################
    feature_idx = sel.get_support()
    feature_name = train_cv_x.columns[feature_idx]
    feature_name = pd.DataFrame(feature_name)
    X_new = sel.transform(train_cv_x)
    X_new = pd.DataFrame(X_new)

    ######################################################################
    # So now the important features selected are in the data frame X_new. In the
    # code below, I am again dividing the data into train and CV, but this time
    # only with the important features selected.
    ######################################################################
    train_selected_x = X_new.iloc[0:train_x_upsampled.shape[0], :]
    cv_selected_x = X_new.iloc[train_x_upsampled.shape[0]:train_x_upsampled.shape[0] + cross_val_x_upsampled.shape[0], :]
    train_selected_y = train_cv_y.iloc[0:train_x_upsampled.shape[0], :]
    cv_selected_y = train_cv_y.iloc[train_x_upsampled.shape[0]:train_x_upsampled.shape[0] + cross_val_x_upsampled.shape[0], :]

    train_selected_x = train_selected_x.values
    cv_selected_x = cv_selected_x.values
    train_selected_y = train_selected_y.values
    cv_selected_y = cv_selected_y.values

##############################################################
# Now with this new data, which only contains the important features,
# I am training a neural network as below.
##############################################################
def create_model():
    n_x_new = train_selected_x.shape[1]
    model = Sequential()
    model.add(Dense(n_x_new, input_dim=n_x_new, kernel_initializer='glorot_normal', activation='relu'))
    model.add(Dense(10, kernel_initializer='glorot_normal', activation='relu'))
    model.add(Dropout(0.8))
    model.add(Dense(1, kernel_initializer='glorot_normal', activation='sigmoid'))
    optimizer = keras.optimizers.Adam(lr=0.001)
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model  # build_fn for KerasClassifier must return the compiled model
seed = 7
np.random.seed(seed)
model = KerasClassifier(build_fn=create_model, epochs=20, batch_size=400, verbose=0)
n_estimators=[10,20,30]
param_grid = dict(n_estimators=n_estimators)
grid = GridSearchCV(estimator=fs, param_grid=param_grid,scoring='roc_auc',cv = PredefinedSplit(test_fold=my_test_fold), n_jobs=1)
grid_result = grid.fit(np.concatenate((train_selected_x, cv_selected_x), axis=0), np.concatenate((train_selected_y, cv_selected_y), axis=0))