How to use the output of one ANN model as input to another ANN model with a different dataset?

I am building two ANN models (ANN_1 and ANN_2), and I want to use the output of ANN_1 as input to ANN_2.
The structure of ANN_1 goes as follows:
# define the Keras model for ANN_1
model_1 = Sequential()
model_1.add(Dense(46, input_dim=46, activation='relu'))
model_1.add(Dense(31, activation='relu'))  # input_dim is only needed on the first layer
model_1.add(Dense(1, activation='sigmoid'))

# compile the ANN_1 model
model_1.compile(loss="mean_absolute_error", optimizer='adam')
model_1.summary()

# train ANN_1 on the training set
history = model_1.fit(X_train, y_train, epochs=800, batch_size=15,
                      validation_data=(X_test, y_test))
The structure of ANN_2 goes as follows:
# define the Keras model for ANN_2
model_2 = Sequential()
model_2.add(Dense(52, input_dim=52, activation='relu'))
model_2.add(Dense(40, activation='relu'))
model_2.add(Dense(1, activation='sigmoid'))

# compile the ANN_2 model
model_2.compile(loss="mean_absolute_error", optimizer='adam')
model_2.summary()

# train ANN_2 on the training set
history = model_2.fit(X_train_2, y_train_2, epochs=1000, batch_size=15,
                      validation_data=(X_test_2, y_test_2))
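As written, neither model is linked to the other. One common pattern (a sketch, not from the original post) is to run model_1.predict on the second dataset and append the prediction as an extra feature column for ANN_2; the names X_1_features and X_2_features below are hypothetical stand-ins for how the second dataset is organised:
import numpy as np

# Hypothetical: X_1_features holds the 46 inputs ANN_1 expects for the second
# dataset's samples; X_2_features holds the remaining raw features for ANN_2.
preds = model_1.predict(X_1_features)          # shape: (n_samples, 1)

# Append ANN_1's prediction as one extra input column for ANN_2; ANN_2's
# input_dim must then equal X_2_features.shape[1] + 1.
X_train_2 = np.hstack([X_2_features, preds])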

Related

How to customise CNN layers with TensorFlow 2; feed new inputs at the Dense layers of a CNN [duplicate]

I have 1D sequences which I want to use as input to a Keras VGG classification model, split into x_train and x_test. For each sequence, I also have custom features stored in feats_train and feats_test which I do not want to feed into the convolutional layers, but into the first fully connected layer.
A complete training or test sample thus consists of a 1D sequence plus n floating-point features.
What is the best way to feed the custom features into the fully connected layer? I thought about concatenating the input sequence and the custom features, but I do not know how to keep them separate inside the model. Are there any other options?
The code without the custom features:
x_train, x_test, y_train, y_test, feats_train, feats_test = load_balanced_datasets()
model = Sequential()
model.add(Conv1D(10, 5, activation='relu', input_shape=(timesteps, 1)))
model.add(Conv1D(10, 5, activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.5, seed=789))
model.add(Conv1D(5, 6, activation='relu'))
model.add(Conv1D(5, 6, activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.5, seed=789))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5, seed=789))
model.add(Dense(2, activation='softmax'))
model.compile(loss='logcosh', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=20, shuffle=False, verbose=1)
y_pred = model.predict(x_test)
The Sequential model is not very flexible; you should look into the Keras functional API. I would try something like this:
from keras.layers import (Conv1D, MaxPool1D, Dropout, Flatten, Dense,
                          Input, concatenate)
from keras.models import Model, Sequential

timesteps = 50
n = 5

def network():
    sequence = Input(shape=(timesteps, 1), name='Sequence')
    features = Input(shape=(n,), name='Features')

    # convolutional branch that only sees the 1D sequence
    conv = Sequential()
    conv.add(Conv1D(10, 5, activation='relu', input_shape=(timesteps, 1)))
    conv.add(Conv1D(10, 5, activation='relu'))
    conv.add(MaxPool1D(2))
    conv.add(Dropout(0.5, seed=789))
    conv.add(Conv1D(5, 6, activation='relu'))
    conv.add(Conv1D(5, 6, activation='relu'))
    conv.add(MaxPool1D(2))
    conv.add(Dropout(0.5, seed=789))
    conv.add(Flatten())
    part1 = conv(sequence)

    # merge the convolutional output with the custom features
    merged = concatenate([part1, features])
    final = Dense(512, activation='relu')(merged)
    final = Dropout(0.5, seed=789)(final)
    final = Dense(2, activation='softmax')(final)

    model = Model(inputs=[sequence, features], outputs=[final])
    model.compile(loss='logcosh', optimizer='adam', metrics=['accuracy'])
    return model
m = network()
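Since the model now has two inputs, training and prediction pass the sequence data and the custom features as a list, in the order the Input layers were declared (using the variable names from the question):
m.fit([x_train, feats_train], y_train,
      batch_size=batch_size, epochs=20, shuffle=False, verbose=1)
y_pred = m.predict([x_test, feats_test])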

How to improve Volatile GPU-Util?

When using Keras to train a model, the Volatile GPU-Util of both GPUs is too low.
The block of code:
%%time
np.random.seed(seed)
model_d2v_01 = Sequential()
model_d2v_01.add(Dense(64, activation='relu', input_dim=400))
model_d2v_01.add(Dense(1, activation='sigmoid'))
model_d2v_01 = multi_gpu_model(model_d2v_01, gpus=2)
model_d2v_01.compile(loss='binary_crossentropy',
                     optimizer='adam',
                     metrics=['accuracy'])
model_d2v_01.fit(train_vecs_dbow_dmm, y_train,
                 validation_data=(validation_vecs_dbow_dmm, y_validation),
                 epochs=5, batch_size=32*2, verbose=2)
How should I modify this code? Any suggestions, please.
I found that this situation was normal: the cause was too few layers, so the model was too simple to keep the GPUs busy. When you add more layers (and a larger batch size), the situation improves. For example:
%%time
np.random.seed(seed)
# with tf.device('/cpu:0'):
model_d2v_12 = Sequential()
model_d2v_12.add(Dense(512, activation='relu', input_dim=400))
model_d2v_12.add(Dense(512, activation='relu'))
model_d2v_12.add(Dense(512, activation='relu'))
model_d2v_12.add(Dense(1, activation='sigmoid'))
model_d2v_12 = multi_gpu_model(model_d2v_12, gpus=2)
model_d2v_12.compile(loss='binary_crossentropy',
                     optimizer='adam',
                     metrics=['accuracy'])
model_d2v_12.fit(train_vecs_dbow_dmm, y_train,
                 validation_data=(validation_vecs_dbow_dmm, y_validation),
                 epochs=10, batch_size=2048*2, verbose=2)

How do I merge several Keras models to get a single output without any further training

Here is the code of the simplest CNN model I trained. My problem is that, instead of one model, I need to create multiple models (for example, 5 models = [model1, model2, ..., model5]) and train them on the data in a loop, then merge them as sketched after the code below.
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.SGD(),
              metrics=['accuracy'])

# train the model
history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
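One way to do this (a sketch, not an answer from the original thread) is to wrap the model definition above in a hypothetical build_model() helper, train the copies in a loop, and then merge their softmax outputs with an Average layer, which requires no further training. It assumes every model shares the same input_shape:
from keras.layers import Input, Average
from keras.models import Model

def build_model():
    # hypothetical helper: a fresh, compiled copy of the CNN defined above
    m = Sequential()
    m.add(Conv2D(32, kernel_size=(3, 3), activation='relu',
                 input_shape=input_shape))
    m.add(Conv2D(64, (3, 3), activation='relu'))
    m.add(MaxPooling2D(pool_size=(2, 2)))
    m.add(Dropout(0.25))
    m.add(Flatten())
    m.add(Dense(128, activation='relu'))
    m.add(Dropout(0.5))
    m.add(Dense(num_classes, activation='softmax'))
    m.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.SGD(), metrics=['accuracy'])
    return m

# train several independent models in a loop
models = []
for i in range(5):
    m = build_model()
    m.fit(x_train, y_train, batch_size=batch_size, epochs=epochs,
          verbose=1, validation_data=(x_test, y_test))
    models.append(m)

# merge: feed one shared input through every model and average their outputs
inp = Input(shape=input_shape)
ensemble = Model(inputs=inp, outputs=Average()([m(inp) for m in models]))
y_pred = ensemble.predict(x_test)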

keras with tensorflow runs fine, until I add callbacks

I'm running a model using Keras with the TensorFlow backend. Everything works perfectly:
model = Sequential()
model.add(Dense(dim, input_dim=dim, activation='relu'))
model.add(Dense(200, activation='relu'))
model.add(Dense(1, activation='linear'))
model.compile(loss='mse', optimizer='Adam', metrics=['mae'])
history = model.fit(X, Y, epochs=12,
                    batch_size=100,
                    validation_split=0.2,
                    shuffle=True,
                    verbose=2)
But as soon as I include a logger in the callbacks so I can log for TensorBoard, I get
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'input_layer_input_2' with dtype float and shape [?,1329]...
Here's my code (and actually, it worked once, the very first time; ever since, I have been getting that error):
model = Sequential()
model.add(Dense(dim, input_dim=dim, activation='relu'))
model.add(Dense(200, activation='relu'))
model.add(Dense(1, activation='linear'))
model.compile(loss='mse', optimizer='Adam', metrics=['mae'])
logger = keras.callbacks.TensorBoard(log_dir='/tf_logs',
                                     write_graph=True,
                                     histogram_freq=1)
history = model.fit(X, Y,
                    epochs=12,
                    batch_size=100,
                    validation_split=0.2,
                    shuffle=True,
                    verbose=2,
                    callbacks=[logger])
The TensorBoard callback uses the tf.summary.merge_all function to collect all tensors for histogram computations. Because of that, your summary is collecting tensors from previous model runs that were never cleared. To clear those previous models, try:
from keras import backend as K
K.clear_session()
model = Sequential()
model.add(Dense(dim, input_dim=dim, activation='relu'))
model.add(Dense(200, activation='relu'))
model.add(Dense(1, activation='linear'))
model.compile(loss='mse', optimizer='Adam', metrics=['mae'])
logger = keras.callbacks.TensorBoard(log_dir='/tf_logs',
                                     write_graph=True,
                                     histogram_freq=1)
history = model.fit(X, Y,
                    epochs=12,
                    batch_size=100,
                    validation_split=0.2,
                    shuffle=True,
                    verbose=2,
                    callbacks=[logger])

Keras - Stationary result using model.fit()

I'm implementing this simple neural network, with these input data:
x_train = np.asarray(x_train)
y_train = np.asarray(y_train)
x_test = np.asarray(x_test)
y_test = np.asarray(y_test)
After having defined the network's structure:
model = Sequential()
model.add(Dense(20, input_dim=5, init='normal', activation='sigmoid'))
model.add(Dense(1, init='normal', activation='sigmoid'))
I run this to train and evaluate the NN:
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(x_train, y_train, nb_epoch=10, validation_split=0.2)
and I always get the same result from model.fit(...):
32/143 [=====>........................] - ETA: 0s
It seems that it doesn't work at all, even though I obtain consistent results on training and validation. How should I interpret this stationary output from model.fit?
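The "32/143 [=====>...]" line is Keras's in-place progress bar captured mid-epoch, not a final result; the per-epoch metrics live in the History object that model.fit returns. A minimal sketch to check whether the loss actually changes across epochs (assuming the older Keras API used in the question, where the accuracy key is 'acc'):
# history.history maps each metric name to a list with one value per epoch
for epoch, (loss, acc) in enumerate(zip(history.history['loss'],
                                        history.history['acc']),
                                    start=1):
    print('epoch %d: loss=%.4f acc=%.4f' % (epoch, loss, acc))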