I am learning Keras and would like to understand how I can apply a classifier (Sequential) to all rows in my data set, not just the x% left out for test validation.
The confusion I am having is that when I define my data split, I end up with one portion for training and one for testing. How would I apply the model to the full data set to show me the predicted value for each row? The end goal I have is to produce and concatenate the predicted value for every customer in the data set.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

dataset = pd.read_csv('BankCustomers.csv')
X = dataset.iloc[:, 3:13]
y = dataset.iloc[:, 13]
feature_train, feature_test, label_train, label_test = train_test_split(X, y, test_size = 0.2, random_state = 0)

# Scale features: fit on the training split only, then apply the same transform to the test split
sc = StandardScaler()
feature_train = sc.fit_transform(feature_train)
feature_test = sc.transform(feature_test)
For completeness the classifier looks like below.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Initialising the ANN
classifier = Sequential()
# Adding the input layer and the first hidden layer
classifier.add(Dense(activation="relu", input_dim=11, units=6, kernel_initializer="uniform"))
# Adding the second hidden layer
classifier.add(Dense(activation="relu", units=6, kernel_initializer="uniform"))
# Adding the output layer
classifier.add(Dense(activation="sigmoid", units=1, kernel_initializer="uniform"))
# Compiling the ANN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
# Fitting the ANN to the Training set
classifier.fit(feature_train, label_train, batch_size = 10, epochs = 100)  # `epochs` replaces the deprecated `nb_epoch`
The course I am doing suggests ways to get accuracy and predictions for the test set, like below, but not for the full data set.
from sklearn.metrics import confusion_matrix, accuracy_score

# Predicting the Test set results
label_pred = classifier.predict(feature_test)
label_pred = (label_pred > 0.5) # FALSE/TRUE depending on above or below 50%
cm = confusion_matrix(label_test, label_pred)
accuracy=accuracy_score(label_test,label_pred)
I tried concatenating the predictions from the model applied to both the training and test data, but I was then unsure how to work out which index matched up with the original data set (i.e. I don't know which rows of the original set the 20% test data corresponds to).
I apologise in advance if this question is superfluous; I have been looking for answers on Stack and via the course, but so far no luck.
You can utilize pandas indexes to sort your results back to the original order.
Predict on feature_train and feature_test (not sure why you'd want to predict on feature_train, though).
Add a new column to each of feature_train and feature_test, which will contain the predictions:
feature_train["predictions"] = pd.Series(classifier.predict(feature_train))
feature_test["predictions"] = pd.Series(classifier.predict(feature_test))
If you look at the indexes of each data frame above, you can see they're shuffled (because of the train_test_split).
You can now concatenate them, use sort_index, and retrieve the predictions column, which will hold the predictions in the order the rows appeared in your initial dataframe (X):
pd.concat([feature_train, feature_test], axis=0).sort_index()
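Putting the steps together, here is a minimal end-to-end sketch. It assumes you keep the unscaled DataFrames (so their original indexes survive) and scale into separate arrays for the model; the variable names follow the question's code.

# Keep the DataFrames so the original row indexes are preserved
scaled_train = sc.fit_transform(feature_train)
scaled_test = sc.transform(feature_test)

# Predict on the scaled arrays and attach the results to the DataFrames
feature_train["predictions"] = (classifier.predict(scaled_train) > 0.5).flatten()
feature_test["predictions"] = (classifier.predict(scaled_test) > 0.5).flatten()

# Concatenate and restore the original row order
all_rows = pd.concat([feature_train, feature_test], axis=0).sort_index()

# Join back onto the original dataset, e.g. to keep the customer columns alongside the prediction
result = dataset.join(all_rows["predictions"])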
Related
I have 2 models I am training, one for each column of data in my dataset.
It seems one model is fairly accurate in its results, so I want to give it a higher weight in determining the actual outputs.
I do not know whether I should be trying to concatenate these two models and somehow provide the weights using something like a Rescaling layer in Keras, OR whether I should leave them separate and just do my own processing afterwards.
What are the pros and cons of each?
def get_model(n_inputs_1, n_inputs_2, n_outputs):
    DENSE_LAYER_SIZE = 20

    inp1 = keras.layers.Input(shape=(n_inputs_1,))
    de1 = keras.layers.Dense(DENSE_LAYER_SIZE, activation='relu')(inp1)
    dr1 = keras.layers.Dropout(.2)(de1)

    inp2 = keras.layers.Input(shape=(n_inputs_2,))
    de2 = keras.layers.Dense(DENSE_LAYER_SIZE, activation='relu')(inp2)
    dr2 = keras.layers.Dropout(.2)(de2)
    rs2 = keras.layers.Rescaling(0.01)(dr2)  # reduce impact of input 2 - is this ok?

    conc = keras.layers.Concatenate()([dr1, rs2])
    out = keras.layers.Dense(n_outputs, activation='sigmoid')(conc)

    model = keras.models.Model([inp1, inp2], out)
    opt = keras.optimizers.Adam(learning_rate=0.01)
    model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['categorical_accuracy'])
    return model
Full code here
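For the second option (keeping the models separate and combining afterwards), a minimal sketch of post-hoc weighting could look like the following. The fitted models model_1/model_2, their inputs x_1/x_2, and the weights w1/w2 are hypothetical placeholders, not part of the original code.

import numpy as np

# Hypothetical: model_1 and model_2 are two already-fitted Keras models,
# each trained on its own column of the dataset.
pred_1 = model_1.predict(x_1)   # shape (n_samples, n_outputs)
pred_2 = model_2.predict(x_2)

# Give the more reliable model a larger weight; the values here are arbitrary
# and could instead be tuned on a validation set.
w1, w2 = 0.9, 0.1
combined = w1 * pred_1 + w2 * pred_2

final_labels = (combined > 0.5).astype(int)  # threshold for binary outputs

The upside of combining outside the graph is that the weights are easy to tune or grid-search after training; the downside is that they are not learned jointly with the rest of the network, which the concatenate-plus-Rescaling approach would allow.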
I've been back and forth with this for ages without being able to find a solution anywhere. So, I have a HuggingFace model ('bert-base-cased') that I'm using with TensorFlow and a custom dataset. I've: (1) tokenized my data; (2) split the data; (3) converted the data to TF dataset format; (4) instantiated, compiled and fit the model.
During training, it behaves as you'd expect: training and validation accuracy go up. But when I evaluate the model on the test dataset using TF's model.evaluate and model.predict, the results are very different. The accuracy as reported by model.evaluate is higher (and more or less in line with the validation accuracy); the accuracy as reported by model.predict is about 10% lower. (Maybe it's just a coincidence, but it's similar to the reported training accuracy after the single epoch of fine-tuning.)
Can anyone figure out what's causing this? I include snippets of my code below.
import numpy as np
import tensorflow as tf
from transformers import AutoTokenizer, DefaultDataCollator, TFAutoModelForSequenceClassification

# tokenize the dataset (a datasets.Dataset loaded earlier)
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path="bert-base-cased", use_fast=False)

def tokenize_function(examples):
    return tokenizer(examples['text'], padding="max_length", truncation=True)

tokenized_datasets = dataset.map(tokenize_function, batched=True)
# splitting dataset
trainSize = 0.7
valTestSize = 1 - trainSize
train_testvalid = tokenized_datasets.train_test_split(test_size=valTestSize,stratify_by_column='class')
valid_test = train_testvalid['test'].train_test_split(test_size=0.5,stratify_by_column='class')
# renaming each of the datasets for convenience
train_set = train_testvalid['train']
val_set = valid_test['train']
test_set = valid_test['test']
# converting the tokenized datasets to TensorFlow datasets
data_collator = DefaultDataCollator(return_tensors="tf")

tf_train_dataset = train_set.to_tf_dataset(
    columns=["attention_mask", "input_ids", "token_type_ids"],
    label_cols=['class'],
    shuffle=True,
    collate_fn=data_collator,
    batch_size=8)

tf_validation_dataset = val_set.to_tf_dataset(
    columns=["attention_mask", "input_ids", "token_type_ids"],
    label_cols=['class'],
    shuffle=False,
    collate_fn=data_collator,
    batch_size=8)

tf_test_dataset = test_set.to_tf_dataset(
    columns=["attention_mask", "input_ids", "token_type_ids"],
    label_cols=['class'],
    shuffle=False,
    collate_fn=data_collator,
    batch_size=8)
# loading tensorflow model
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=1)

# compiling the model
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-6),
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=tf.metrics.BinaryAccuracy())

# fitting model
history = model.fit(tf_train_dataset,
                    validation_data=tf_validation_dataset,
                    epochs=1)
# Evaluating the model on the test data using `evaluate`
results = model.evaluate(x=tf_test_dataset,verbose=2) # reports binary_accuracy: 0.9152
# first attempt at using model.predict method
hits = 0
misses = 0
for x, y in tf_test_dataset:
    logits = tf.keras.backend.get_value(model(x, training=False).logits)
    labels = tf.keras.backend.get_value(y)
    for i in range(len(logits)):
        if logits[i][0] < 0:
            z = 0
        else:
            z = 1
        if z == labels[i]:
            hits += 1
        else:
            misses += 1
print(hits/(hits+misses)) # reports binary_accuracy: 0.8187
# second attempt at using model.predict method
modelPredictions = model.predict(tf_test_dataset).logits
testDataLabels = np.concatenate([y for x, y in tf_test_dataset], axis=0)
hits = 0
misses = 0
for i in range(len(modelPredictions)):
    if modelPredictions[i][0] >= 0:
        z = 1
    else:
        z = 0
    if z == testDataLabels[i]:
        hits += 1
    else:
        misses += 1
print(hits/(hits+misses)) # reports binary_accuracy: 0.8187
Things I've tried include:
different loss functions (it's a binary classification problem with the label column of the dataset filled with either a zero or a one for each row);
different ways of unpacking the test dataset and feeding it to model.predict;
altering the 'num_labels' parameter between 1 and 2.
I fixed the problem by changing the num_labels parameter to two and the loss function to sparse categorical cross-entropy. (I then had to change my model.predict loop to take the argmax of the two logits produced by the model.)
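For concreteness, a minimal sketch of the change described above, reusing the dataset variables from the question; the hyperparameters are assumptions, not part of the original answer.

import numpy as np
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

# two output logits instead of one
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-6),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=tf.metrics.SparseCategoricalAccuracy())

model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=1)

# the prediction loop now takes the argmax over the two logits
modelPredictions = model.predict(tf_test_dataset).logits
predictedLabels = np.argmax(modelPredictions, axis=1)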
I'm trying to understand how to create a simple tensorflow 2.2 keras model that can predict a simple function value:
f(a, b, c, d) = a < b ? max(a/2, c/3) : max(b/2, d/3)
I know this exact question can be reduced to a categorical classification, but my intention is to find a good way to build a model that can estimate the value, and then build more and more functions on top of that, with more and more complex conditions, later on.
To start with, I am stuck on understanding why such a simple function is so hard to learn.
To use with TensorFlow on the model I have created, I generate data like this:
import numpy as np

def generate_input(multiplier):
    return np.random.rand(1024 * multiplier, 4) * 1000

def generate_output(input):
    def compute_func(row):
        return max(row[0]/2, row[2]/3) if row[0] < row[1] else max(row[1]/2, row[3]/3)
    return np.apply_along_axis(compute_func, 1, input)

for epochs in range(0, 512):
    # print('Generating data...')
    train_input = generate_input(1000)
    train_output = generate_output(train_input)
    # print('Training...')
    fit_history = model.fit(
        train_input, train_output,
        epochs=1,
        batch_size=1024
    )
I have tried different models, both simpler and more complex, but I still haven't got good convergence.
For example, a simple feed-forward one:
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.activations import tanh
from tensorflow.keras.losses import mean_squared_error

input = Input(shape=(4,))
layer = Dense(8, activation=tanh)(input)
layer = Dense(16, activation=tanh)(layer)
layer = Dense(32, activation=tanh)(layer)
layer = Dense(64, activation=tanh)(layer)
layer = Dense(128, activation=tanh)(layer)
layer = Dense(32, activation=tanh)(layer)
layer = Dense(8, activation=tanh)(layer)
output = Dense(1)(layer)
model = Model(inputs=input, outputs=output)
model.compile(optimizer=Adam(), loss=mean_squared_error)
Can you point me in the direction one should follow in order to model this type of conditional function?
Or am I missing some pre-processing?
In my honest opinion, you have a pretty deep model, and therefore you do not have enough data to train it. I do not think you need that deep an architecture.
Your problem definition is not what I would have done. You do not actually want to generate the max value at the output; you want the max value to get selected, right? If that is the case, I would go with a multiclass classification instead of a regression problem in my design. That is to say, I would use output = Dense(4, activation='softmax')(layer) as the last layer, and use categorical cross-entropy as the loss. Of course, in the output generation, you need to return an array of three zeros and a single 1, something like this:
def compute_func(row):
    # one-hot encoding of which of a/2, b/2, c/3, d/3 is selected
    ret_value = [0, 0, 0, 0]
    if row[0] < row[1]:
        # result is max(a/2, c/3)
        if row[0] / 2 < row[2] / 3:
            ret_value[2] = 1
        else:
            ret_value[0] = 1
    else:
        # result is max(b/2, d/3)
        if row[1] / 2 < row[3] / 3:
            ret_value[3] = 1
        else:
            ret_value[1] = 1
    return ret_value
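For illustration, a minimal sketch of the classification setup described above; the layer sizes and optimizer settings are assumptions, not from the original answer.

import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

inp = Input(shape=(4,))
layer = Dense(32, activation='relu')(inp)
layer = Dense(32, activation='relu')(layer)
output = Dense(4, activation='softmax')(layer)  # one probability per candidate: a/2, b/2, c/3, d/3

model = Model(inputs=inp, outputs=output)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# labels come from compute_func above; the numeric value can be recovered afterwards
# by evaluating the candidate (a/2, b/2, c/3 or d/3) that the predicted class points to
train_input = np.random.rand(1024, 4) * 1000
train_labels = np.apply_along_axis(compute_func, 1, train_input)
model.fit(train_input, train_labels, epochs=10, batch_size=64)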
I want to implement word2vec using tensorflow 2.0
I have prepared a dataset according to the skip-gram model and have got approx. 18 million observations (target and context words).
I have used the following dataset for my goal:
https://www.kaggle.com/c/quora-question-pairs/notebooks
I have created a new dataset for the skip-gram model. I used a window size of 2 and a number of skips equal to 2, so that for each target word (as our input) a context word (that is what I have to predict) is created. It looks like this:
target context
1 3
1 1
2 1
2 1222
Here is my code:
dataset_train = tf.data.Dataset.from_tensor_slices((target, context))
dataset_train = dataset_train.shuffle(buffer_size=1024).batch(64)
#Parameters:
num_words = len(word_index)#approximately 100000
embed_size = 300
num_sampled = 64
initializer_softmax = tf.keras.initializers.GlorotUniform()
#Variables:
embeddings_weight = tf.Variable(tf.random.uniform([num_words,embed_size],-1.0,1.0))
softmax_weight = tf.Variable(initializer_softmax([num_words,embed_size]))
softmax_bias = tf.Variable(initializer_softmax([num_words]))
optimizer = tf.keras.optimizers.Adam()
#As before, we are supplying a list of integers (that correspond to our validation vocabulary words) to the embedding_lookup() function, which looks up these rows in the normalized_embeddings tensor, and returns the subset of validation normalized embeddings.
#Now that we have the normalized validation tensor, valid_embeddings, we can multiply this by the full normalized vocabulary (normalized_embedding) to finalize our similarity calculation:
#tf.function
def training(X, y):
    with tf.GradientTape() as tape:
        embed = tf.nn.embedding_lookup(embeddings_weight, X)
        loss = tf.reduce_mean(tf.nn.sampled_softmax_loss(weights=softmax_weight, biases=softmax_bias, inputs=embed,
                                                         labels=y, num_sampled=num_sampled, num_classes=num_words))
    variables = [embeddings_weight, softmax_weight, softmax_bias]
    gradients = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(gradients, variables))

EPOCHS = 30
for epoch in range(EPOCHS):
    print('Epoch:', epoch)
    for X, y in dataset_train:
        training(X, y)
#compute similarity of words:
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings_weight), 1, keepdims=True))
norm_embed = embeddings_weight/ norm
temp_emb = tf.nn.embedding_lookup(norm_embed,X)
similarity = tf.matmul(temp_emb,tf.transpose(norm_embed))
But even the computation of 1 epoch takes too long. Is it possible to somehow improve the performance of my code? (I am using Google Colab for the code execution.)
EDIT: this is the shape of my train dataset:
dataset_train
<BatchDataset shapes: ((None,), (None, 1)), types: (tf.int64, tf.int64)>
I was following the instructions from this guide: https://adventuresinmachinelearning.com/word2vec-tutorial-tensorflow/
This is because the softmax function is computationally quite expensive when it has to be evaluated over a vocabulary of hundreds of thousands or millions of words, as in the Word2Vec algorithm, as explained here. Faster training would be possible with negative sampling.
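As an illustration of that suggestion, a negative-sampling-style loss can be swapped in with tf.nn.nce_loss, which only evaluates a handful of randomly drawn negative words per step instead of the full vocabulary. This is a minimal sketch reusing the variable names from the question's code; it is not code from the original answer.

import tensorflow as tf

@tf.function
def training_step(X, y):
    with tf.GradientTape() as tape:
        embed = tf.nn.embedding_lookup(embeddings_weight, X)
        # NCE loss: the true context word is contrasted against `num_sampled`
        # sampled negative words rather than the whole vocabulary
        loss = tf.reduce_mean(
            tf.nn.nce_loss(weights=softmax_weight,
                           biases=softmax_bias,
                           labels=y,          # shape (batch, 1)
                           inputs=embed,
                           num_sampled=num_sampled,
                           num_classes=num_words))
    variables = [embeddings_weight, softmax_weight, softmax_bias]
    gradients = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(gradients, variables))
    return loss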
I am working with time series EEG data recorded from 10 individual locations on the body to classify future behavior in terms of increasing heart activity. I would like to better understand how my labeled data corresponds to the training inputs.
So far, several RNN configurations as well as countless combinations of vanilla dense networks have not gotten me great results, so I figured a 1D convnet is worth a try.
The things I'm having trouble understanding are:
1.) Feeding data into the model.
orig shape = (30000 timesteps, 10 channels)
array fed to layer = (300 slices, 100 timesteps, 10 channels)
Are the slices offset from each other by 1 time step, or are the 300 slices laid end to end across the original array? If the second is true, how could I create an array of (30000 - 100) slices, each offset by one ts, that is also compatible with the 1D CNN layer?
2) Matching labels with the training and testing data
My understanding is that when you feed in a sequence of train_x_shape = (30000, 10), there are 30000 labels with train_y_shape = (30000, 2) (2 classes) associated with the train_x data.
So, when (300 slices of) 100 timesteps of train_x data with shape = (300, 100, 10) are fed into the model, does the label value correspond to the entire 100 ts (one label per 100 ts, with this label being equal to the last time step's label), or are each 100 rows/vectors in the slice labeled- one for each ts?
Train input:
train_x = train_x.reshape(train_x.shape[0], 1, train_x.shape[1])
n_timesteps = 100
n_channels = 10
layer : model.add(Convolution1D(filters = n_channels * 2, padding = 'same', kernel_size = 3, input_shape = (n_timesteps, n_channels)))
final layer : model.add(Dense(2, activation = 'softmax'))
I use categorical_crossentropy for loss.
Answer 1
This will really depend on how you got those slices.
The answer is totally dependent on what you're doing. So, what do you want?
If you have simply reshaped (array.reshape(...)) the original array from shape (30000,10) to shape (300,100,10), the model will see:
300 individual (and not connected) sequences
100 timesteps in each sequence
Sequence 1 goes from step 0 to 99;
Sequence 2 goes from step 100 to 199, and so on.
Creating overlapping slices - Sliding window
If you want to create sequences shifted by only one timestep, make a loop for that.
import numpy as np

originalSequence = someArrayWithShape((30000,10))

newSlices = [] #empty list
start = 0
end = start + 100  # window length of 100 timesteps, matching the slices in the question
while end <= 30000:
    newSlices.append(originalSequence[start:end])
    start += 1
    end += 1
newSlices = np.asarray(newSlices)
Beware: if you do this in the input data, you will have to do a similar thing in your output data as well.
Answer 2
Again, that's totally up to you. What do you want to achieve?
Convolutional layers will keep the timesteps with these options:
If you use padding='same', the final length will be the same as the input
If you don't, the final length will be reduced depending on the kernel size you choose
Recurrent layers will keep the timesteps or not depending on:
Whether you use return_sequences=True - Output has timesteps
Or you use return_sequences=False - Output has no timesteps
If you want only one output for each sequence (not per timestep):
Recurrent models:
Use LSTM(...., return_sequences=True) until the last LSTM
The last LSTM will be LSTM(..., return_sequences=False)
Convolutional models:
At some point after the convolutions, choose one of these to add:
GlobalMaxPooling1D
GlobalAveragePooling1D
Flatten (but then reduce to the number of classes later with a Dense(2))
Reshape((2,))
I think I'd go with GlobalMaxPooling1D if using convolutions, but recurrent models seem better for this. (Not a rule, though.)
You can choose to use intermediate MaxPooling1D layers to gradually reduce the length from 100 to 50, then to 25 and so on. This will probably reach a better output.
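For illustration, a minimal sketch of the convolutional variant with GlobalMaxPooling1D; the filter counts and pooling sizes are assumptions, not part of the original answer.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Convolution1D, MaxPooling1D, GlobalMaxPooling1D, Dense

n_timesteps = 100
n_channels = 10

model = Sequential()
model.add(Convolution1D(filters=n_channels * 2, kernel_size=3, padding='same',
                        activation='relu', input_shape=(n_timesteps, n_channels)))
model.add(MaxPooling1D(pool_size=2))   # 100 -> 50 timesteps
model.add(Convolution1D(filters=n_channels * 4, kernel_size=3, padding='same', activation='relu'))
model.add(GlobalMaxPooling1D())        # collapse the remaining timesteps into one vector per sequence
model.add(Dense(2, activation='softmax'))

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# expects inputs of shape (n_slices, 100, 10) and one 2-element label per slice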
Remember to keep X and Y paired:
import numpy as np

train_x = someArrayWithShape((30000,10))
train_y = someArrayWithShape((30000,2))

newXSlices = [] #empty list
newYSlices = [] #empty list
start = 0
end = start + 100  # same 100-timestep window as above
while end <= 30000:
    newXSlices.append(train_x[start:end])
    newYSlices.append(train_y[end-1:end])  # label of the last timestep in the window
    start += 1
    end += 1
newXSlices = np.asarray(newXSlices)
newYSlices = np.asarray(newYSlices)