Electricity categorization - tensorflow

I'm trying to categorize which electronic devices are turned ON based only on the total electricity consumption of my apartment. I have a setup where I measure each watt hour (a blink of an LED), so the current consumption in watts has a resolution of about 10 seconds, which is great.
I am trying to do this in TensorFlow. In the first iteration I want to use only one input (the total watts, e.g. 200W) and one output per electronic device. For now I use dummy data to see how it works (and because it would be very troublesome to label every measurement in order to train the model).
Here is my code now:
import tensorflow as tf
import numpy as np
LABELS = [
    'Nothing',
    'Toaster',  # Toaster uses 800W
    'Lamp']     # Lamp uses just 100W
DATA_LENGTH = 20000
np.random.seed(1) # To be able to reproduce
# Create dummy data (1s or 0s)
nothing_data = np.array([1] * DATA_LENGTH)
toaster_data = np.random.randint(2, size=DATA_LENGTH)
lamp_data = np.random.randint(2, size=DATA_LENGTH)
labels = np.array(list(zip(nothing_data, toaster_data, lamp_data)))
x_train = (toaster_data * 800 + lamp_data * 100) / 900 # Normalize
y_train = labels
# Split up train and test data
x_test = x_train[15000:]
y_test = y_train[15000:]
x_train = x_train[:15000]
y_train = y_train[:15000]
# The model
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1, input_dim=1),
    tf.keras.layers.Dense(4, activation=tf.nn.relu),
    tf.keras.layers.Dense(4, activation=tf.nn.relu),
    tf.keras.layers.Dense(3, activation=tf.nn.sigmoid)
])
model.compile(optimizer='adadelta',
              loss='binary_crossentropy',
              metrics=['accuracy'])
history = model.fit(x_train, y_train, epochs=10)
val_loss, val_acc = model.evaluate(x_test, y_test)
print(val_loss, val_acc)
Now to the problem: the val_acc is 1.0, 100% (val_loss=0.059, val_acc=1.0).
Still, when I predict, the predictions are very off.
# Predict
predict_input = [0.88888, 0.111111, 1.0000, 0.222]
predict_output = model.predict(predict_input)
The first one should be toaster + nothing, but it also gives about 33% for lamp. I would have liked a binary output, if that were possible.
Do I need to have a "nothing" output?

You need to match the model type to your problem. You've applied what is basically a mixed linear regression prediction to a problem of binary classification. The model is good if you want to predict the wattage given the appliances that are turned on, but it's not so good in the opposite direction.
It's going to try all sorts of things given the paucity of data and the freedom inherent in the model. Note that you really have only four distinct training inputs: making multiple copies in equal amounts doesn't really make your training better.
Most of all, why are you not doing this with a "sum to target" algorithm, a much simpler and more effective way to solve the problem? The presented problem isn't really an ML sort of problem.
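For a handful of known appliances, the "sum to target" idea can be as simple as enumerating every on/off combination and picking the one whose wattages come closest to the measured total. A minimal sketch, using the two appliance wattages from the question (the tolerance value is an arbitrary choice for illustration):
from itertools import product

APPLIANCES = {'Toaster': 800, 'Lamp': 100}  # wattages from the question

def infer_devices(total_watts, tolerance=50):
    # Try every on/off combination and keep the one closest to the reading
    best_states, best_error = None, float('inf')
    for states in product([0, 1], repeat=len(APPLIANCES)):
        predicted = sum(w * on for w, on in zip(APPLIANCES.values(), states))
        error = abs(total_watts - predicted)
        if error < best_error:
            best_states, best_error = states, error
    if best_error > tolerance:
        return None  # no combination explains the reading well
    return {name: bool(on) for name, on in zip(APPLIANCES, best_states)}

print(infer_devices(900))  # {'Toaster': True, 'Lamp': True}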
If you simply want to do this by training a model, then build one with multiple binary outputs. You can research "multiple labels" for leads on how to do so. If you're doing it for only a handful of appliances in your home, you might want to beat it to death with 2^n output states and not worry about the structural accuracy.
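If you do go the model route, a minimal multi-label sketch might look like the following: one sigmoid output per device, binary_crossentropy loss, and predictions thresholded at 0.5 to get the binary on/off answer you asked about. The layer sizes and optimizer are illustrative choices, not tuned values, and x_train/y_train are the arrays from your code:
import numpy as np
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(1,)),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(3, activation='sigmoid')  # Nothing, Toaster, Lamp
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=20)

# Predict on a NumPy array and threshold each output independently
predict_input = np.array([0.88888, 0.111111, 1.0000, 0.222]).reshape(-1, 1)
probabilities = model.predict(predict_input)
on_off = (probabilities > 0.5).astype(int)  # binary on/off per device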

Related

Training with Dataset API and numpy array yields completely different results

I have a CNN regression model and the features come in a (2000, 3000, 1) shape, where 2000 is the total number of samples, each being a (3000, 1) 1D array. The batch size is 8, and 20% of the full dataset is used for validation.
However, zipping features and labels into a tf.data.Dataset gives completely different scores from feeding the NumPy arrays in directly.
The tf.data.Dataset code looks like:
# Load features and labels
features = np.array(features) # shape is (2000, 3000, 1)
labels = np.array(labels) # shape is (2000,)
dataset = tf.data.Dataset.from_tensor_slices((features, labels))
dataset = dataset.shuffle(buffer_size=2000)
dataset = dataset.batch(8)
train_dataset = dataset.take(200)
val_dataset = dataset.skip(200)
# Training model
model.fit(train_dataset, validation_data=val_dataset,
          batch_size=8, epochs=1000)
The numpy code looks like:
# Load features and labels
features = np.array(features) # exactly the same as previous
labels = np.array(labels) # exactly the same as previous
# Training model
model.fit(x=features, y=labels, shuffle=True, validation_split=0.2,
          batch_size=8, epochs=1000)
Except for this, all other code is exactly the same, for example:
# Set global random seed
tf.random.set_seed(0)
np.random.seed(0)
# No preprocessing of feature at all
# Load model (exactly the same)
model = load_model()
# Compile model
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss=tf.keras.losses.MeanSquaredError(),
    metrics=[tf.keras.metrics.mean_absolute_error],
)
The former method via the tf.data.Dataset API yields a mean absolute error (MAE) around 1e-3 on both the training and validation sets, which looks quite suspicious as the model doesn't have any dropout or regularization to prevent overfitting. On the other hand, feeding the NumPy arrays in directly gives a training MAE around 0.1 and a validation MAE around 1.
The low MAE of the tf.data.Dataset method looks super suspicious, but I just couldn't figure out anything wrong with the code. I could also confirm that the number of training batches is 200 and the number of validation batches is 50, meaning I didn't use the training set for validation.
I tried varying the global random seed and using different shuffle seeds, which didn't change the results much. Training was done on NVIDIA V100 GPUs, and I tried TensorFlow versions 2.9, 2.10 and 2.11, which didn't make much difference.
The problem lies in the default behaviour of the shuffle method of tf.data.Dataset, more specifically the reshuffle_each_iteration argument, which is True by default. Meaning, if I implement the following code:
dataset = tf.data.Dataset.from_tensor_slices((features, labels))
dataset = dataset.shuffle(buffer_size=2000)
dataset = dataset.batch(8)
train_dataset = dataset.take(200)
val_dataset = dataset.skip(200)
model.fit(train_dataset, validation_data=val_dataset, batch_size=8, epochs=1000)
The dataset would actually be reshuffled after each epoch, even though it might not look like it. As a result, validation data would leak into the training set (in fact there would be no distinction between the two sets, as the order is reshuffled every epoch).
So make sure to set reshuffle_each_iteration to False if you would like to shuffle the dataset and then do a train-val split.
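A sketch of that fix applied to the code above; the buffer size and split sizes are the ones from the question, and the shuffle order is now frozen so take/skip always see the same samples:
dataset = tf.data.Dataset.from_tensor_slices((features, labels))
dataset = dataset.shuffle(buffer_size=2000, reshuffle_each_iteration=False)
dataset = dataset.batch(8)
train_dataset = dataset.take(200)  # 200 batches of 8 = 1600 training samples
val_dataset = dataset.skip(200)    # remaining 50 batches = 400 validation samples
model.fit(train_dataset, validation_data=val_dataset, epochs=1000)
Alternatively, split the raw dataset first and apply a per-epoch shuffle only to the training portion.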
UPDATE: TensorFlow has confirmed this issue, and a warning will be added to future docs.
PS: It's a hard lesson for me, as I have been using the model for analysing the results for several months (as a graduating MPhil student).

Why can't I classify my data perfectly on this simple problem using a NN?

I have a set of observations made of 10 features, each of these features being a real number in the interval (0,2). Say I wanted to train a simple neural network to classify whether the average of those features is above or below 1.0.
Unless I'm missing something, it should be enough to use a two-layer network with one neuron in each layer. The activation functions would be linear (i.e. no activation function) on the first layer and a sigmoid on the output layer. An example of an NN with this architecture that would work is one that calculates the average in the first layer (i.e. all weights = 0.1 and bias = 0) and assesses whether that is above or below 1.0 in the second layer (i.e. weight = 1.0 and bias = -1.0).
When I implement this using TensorFlow (see code below), I obviously get a very high accuracy quite quickly, but never reach 100% accuracy... I would like some help to understand conceptually why this is the case. I don't see why the backpropagation algorithm does not reach a set of optimal weights (maybe this is related to the loss function I'm using, which has local minima?). I would also like to know whether 100% accuracy is achievable if I use different activations and/or a different loss function.
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
X = [np.random.random(10)*2.0 for _ in range(10000)]
X = np.array(X)
y = X.mean(axis=1) >= 1.0
y = y.astype('int')
train_ratio = 0.8
train_len = int(X.shape[0]*0.8)
X_train, X_test = X[:train_len,:], X[train_len:,:]
y_train, y_test = y[:train_len], y[train_len:]
def create_classifier(lr = 0.001):
    classifier = tf.keras.Sequential()
    classifier.add(tf.keras.layers.Dense(units=1))
    classifier.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))  #, input_shape=input_shape
    optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
    metrics = [tf.keras.metrics.BinaryAccuracy()]
    classifier.compile(optimizer=optimizer, loss=tf.keras.losses.BinaryCrossentropy(from_logits=False), metrics=metrics)
    return classifier
classifier = create_classifier(lr = 0.1)
history = classifier.fit(X_train, y_train, batch_size=1000, validation_split=0.1, epochs=2000)
Ignoring the fact that a neural network is an odd approach for this problem, and answering your specific question: it looks like your learning rate might be too high, which could explain the fluctuations around the optimal point.
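As a sketch of that suggestion, the same two-layer model can be compiled with a much smaller learning rate (0.001 here is an illustrative value, not a tuned one) so the weights settle near the optimum instead of oscillating around it; X_train and y_train are the arrays from the question:
classifier = tf.keras.Sequential([
    tf.keras.layers.Dense(units=1),                       # linear "averaging" layer
    tf.keras.layers.Dense(units=1, activation='sigmoid')  # threshold around 1.0
])
classifier.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=[tf.keras.metrics.BinaryAccuracy()])
history = classifier.fit(X_train, y_train, batch_size=1000, validation_split=0.1, epochs=2000)
Another option is to keep a larger initial rate and decay it over time with a learning-rate schedule.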

Keras/Tensorflow - Generate predictions in batch for imagenet (I get only one result back)

I am generating ImageNet tags for all keyframes in a video with a single call and have this code:
# all keras/tf/mobilenet imports
model_imagenet = MobileNetV2(weights='imagenet')
frames_list = []
for frame in frame_set:
    frame_img = frame.to_image()
    frame_pil = frame_img.resize((224,224), Image.ANTIALIAS)
    ts = int(frame.pts)
    x = image.img_to_array(frame_pil)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    frames_list.append(x)
print(len(frames_list))
preds_list = model_imagenet.predict_on_batch(frames_list)
print("[*]",preds_list)
The result appears thus:
frames_list count: 125
and the predictions look like this: one row of 1000 dimensions (ImageNet classes). Shouldn't it be 125 rows?
[[1.15425530e-04 1.83317825e-04 4.28701424e-05 2.87547664e-05
:
7.91769926e-05 1.30803732e-04 4.81895368e-05 3.06891889e-04]]
This is generating a prediction for only a single row of the batch. I have tried both predict and predict_on_batch with the same result.
How can I get a bulk prediction for, say, 200 frames in one go with Keras/TensorFlow/MobileNet?
ImageNet is a popular database which consists of 1000 different categories.
The dimension of 1000 is natural and to be expected, since for one image the softmax outputs a probability for each of the 1000 classes.
EDIT: For multiple-image predictions, you should use predict_generator(). Note that as of TensorFlow 2.0, if you use the Keras API, predict_generator() has been deprecated in favor of plain predict, which also accepts generators as input data.
E.g. : (from How to use predict_generator with ImageDataGenerator?) :
test_datagen = ImageDataGenerator(rescale=1./255)
#Modify the batch size here
test_generator = test_datagen.flow_from_directory(
    test_dir,
    target_size=(200, 200),
    color_mode="rgb",
    shuffle=False,
    class_mode='categorical',
    batch_size=1)
filenames = test_generator.filenames
nb_samples = len(filenames)
predict = model.predict_generator(test_generator,steps = nb_samples)
Please bear in mind that it is unlikely you can get a huge number of predictions at once, since you are constrained by the memory of the video card.
Also, note the difference between predict and predict_on_batch: What is the difference between the predict and predict_on_batch methods of a Keras model?
OK, here is how I solved it, hope this helps someone else:
preds_list = model_imagenet.predict(np.vstack(frames_list),batch_size=32)
print("[*]",preds_list)
Please note the np.vstack, and adjust the batch_size to whatever your machine can handle.
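As an alternative sketch, the per-frame expand_dims can be skipped entirely: build one (N, 224, 224, 3) array, preprocess it once, and let predict() handle the batching. This assumes the same frame_set, image helper, preprocess_input and model_imagenet objects as in the question:
frames = np.stack([image.img_to_array(f.to_image().resize((224, 224))) for f in frame_set])
frames = preprocess_input(frames)                       # works on a whole batch
preds = model_imagenet.predict(frames, batch_size=32)   # shape (N, 1000)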

Low accuracy of DNN created using tf.keras on dataset having small feature set

total train data record: 460000
total cross-validation data record: 89000
number of output class: 392
tensorflow 1.8.0 CPU installation
Each data record has 26 features, of which 25 are numeric and one is categorical, which is one-hot encoded into 19 additional features. Initially, not every feature value was present for each data record. I used the average to fill missing float-type features and the most frequent value for missing int-type features. The output can be any of 392 classes, labeled 0 to 391.
Finally, all features are passed through a StandardScaler()
Here is my model:
output_class = 392
X_train, X_test, y_train, y_test = get_data()
# y_train and y_test contains int from 0-391
# Make y_train and y_test categorical
y_train = tf.keras.utils.to_categorical(y_train, unique_dtc_count)
y_test = tf.keras.utils.to_categorical(y_test, unique_dtc_count)
# Convert to float type
y_train = y_train.astype(np.float32)
y_test = y_test.astype(np.float32)
# tf.enable_eager_execution() # turned off to use rmsprop optimizer
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(400, activation=tf.nn.relu, input_shape=(44,)))
model.add(tf.keras.layers.Dense(40000, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(392, activation=tf.nn.softmax))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
import logging
logging.getLogger().setLevel(logging.INFO)
model.fit(X_train, y_train, epochs=3)
loss, acc = model.evaluate(X_test, y_test)
print('Accuracy', acc)
But this model gives only 28% accuracy on both the training and test data. What should I change here to get good accuracy on both? Should I go wider and deeper? Or should I consider adding more features?
Note: there were 400 unique features in total in the dataset, but most of them only appeared randomly in 5 to 10 data records, and some features have no relevance to other data records. I picked 26 features based on domain knowledge and frequency in the data records.
Any suggestion is appreciated. Thanks.
EDIT: I forgot to add this in the original post: #Neb suggested a less wide, deeper network, which I actually tried. My first model used a [44, 400, 400, 392] layer layout. It gave me around 30% accuracy in training and testing.
Your model is too wide. You have 400 nodes in the first hidden layer and 40,000 in the second layer, for a total of 44*400 + 400*40,000 + 40,000*392 ≈ 31.7 million weights. However, you only input 44 features!
Because of this, your net is capable of detecting even the smallest, most imperceptible variations in the inputs, and it ends up treating them as valuable information instead of noise. I'm quite sure that if you leave your network training for a long time (here I only see 3 epochs), it will end up overfitting your training set.
You have some solutions:
reduce the number of nodes per layer. You may also experiment with adding 1 or 2 new layers. A possible structure might be [44, 128, 512, 392] (a sketch combining several of these suggestions follows at the end of this answer)
Implement regularization. You have multiple ways to do this:
restrict the range in which network parameters live
implement Dropout
implement Batch normalization (which is known to have a small regularization effect)
use Adam Optimizer instead of RMSprop
If your features are somewhat correlated, you may try a CNN instead of a Fully connected network.
Then, to improve generalization you can:
explore the dataset looking for outliers and remove them. An outlier is a sample which can confuse the network or does not convey any additional information.
"randomly" initialize your parameters, e.g using Xavier's Initialization
Finally, I would say: do you really need 392 classes? Could you merge some of them?
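For concreteness, here is a minimal sketch combining several of the suggestions above: the narrower [44, 128, 512, 392] layout with Dropout, Batch normalization and the Adam optimizer. The dropout rates are illustrative values, not tuned ones, and X_train/y_train are the arrays from the question:
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu, input_shape=(44,)))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Dense(512, activation=tf.nn.relu))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Dense(392, activation=tf.nn.softmax))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=20)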

How can I improve my LSTM accuracy in Tensorflow

I'm trying to figure out how to decrease the error in my LSTM. It's an odd use case because rather than classifying, we take in short lists (up to 32 elements long) and output a series of real numbers ranging from -1 to 1, representing angles. Essentially, we want to reconstruct short protein loops from amino acid inputs.
In the past we had redundant data in our datasets, so the accuracy reported was incorrect. Since removing the redundant data our validation accuracy has gotten much worse, which suggests our network had learned to memorise the most frequent examples.
Our dataset is 10,000 items, split 70/20/10 between train, validation and test. We use a bi-directional LSTM as follows:
x = tf.cast(tf_train_dataset, dtype=tf.float32)
output_size = FLAGS.max_cdr_length * 4
dmask = tf.placeholder(tf.float32, [None, output_size], name="dmask")
keep_prob = tf.placeholder(tf.float32, name="keepprob")
sizes = [FLAGS.lstm_size, int(math.floor(FLAGS.lstm_size / 2)), int(math.floor(FLAGS.lstm_size / 4))]
single_rnn_cell_fw = tf.contrib.rnn.MultiRNNCell( [lstm_cell(sizes[i], keep_prob, "cell_fw" + str(i)) for i in range(len(sizes))])
single_rnn_cell_bw = tf.contrib.rnn.MultiRNNCell( [lstm_cell(sizes[i], keep_prob, "cell_bw" + str(i)) for i in range(len(sizes))])
length = create_length(x)
initial_state = single_rnn_cell_fw.zero_state(FLAGS.batch_size, dtype=tf.float32)
initial_state = single_rnn_cell_bw.zero_state(FLAGS.batch_size, dtype=tf.float32)
outputs, states = tf.nn.bidirectional_dynamic_rnn(cell_fw=single_rnn_cell_fw, cell_bw=single_rnn_cell_bw, inputs=x, dtype=tf.float32, sequence_length = length)
output_fw, output_bw = outputs
states_fw, states_bw = states
output_fw = last_relevant(FLAGS, output_fw, length, "last_fw")
output_bw = last_relevant(FLAGS, output_bw, length, "last_bw")
output = tf.concat((output_fw, output_bw), axis=1, name='bidirectional_concat_outputs')
test = tf.placeholder(tf.float32, [None, output_size], name="train_test")
W_o = weight_variable([sizes[-1]*2, output_size], "weight_output")
b_o = bias_variable([output_size],"bias_output")
y_conv = tf.tanh( ( tf.matmul(output, W_o)) * dmask, name="output")
Essentially, we use 3 layers of LSTM, with 256, 128 and 64 units each. We take the last step of both the forward and backward passes and concatenate them together. These feed into a final, fully connected layer that presents the data in the way we need it. We use a mask to set the steps we don't need to zero.
Our cost function uses a mask again, and takes the mean of the squared difference. We build the mask from the test data. Values to ignore are set to -3.0.
def cost(goutput, gtest, gweights, FLAGS):
    mask = tf.sign(tf.add(gtest, 3.0))
    basic_error = tf.square(gtest - goutput) * mask
    basic_error = tf.reduce_sum(basic_error)
    basic_error /= tf.reduce_sum(mask)
    return basic_error
To train the net I've used a variety of optimizers. The lowest scores have been obtained with the AdamOptimizer. The others, such as Adagrad, Adadelta and RMSProp, tend to flatline around 0.3/0.4 error, which is not particularly great.
Our learning rate is 0.004, batch size of 200. We use a 0.5 probability dropout layer.
I've tried adding more layers, changing learning rates, batch sizes, even the representation of the data. I've attempted batch regularisation, L1 and L2 weight regularisation (though perhaps incorrectly) and I've even considered switching to a convnet approach instead.
Nothing seems to make any difference. What has seemed to work is changing the optimizer. Adam seems noisier as it improves, but it does get closer than the other optimizers.
We need to get down to a value much closer to 0.05 or 0.01. Sometimes the training error touches 0.09 but the validation doesn't follow. I've run this network for about 500 epochs so far (about 8 hours) and it tends to settle around 0.2 validation error.
I'm not quite sure what to attempt next. A decayed learning rate might help, but I suspect there is something more fundamental I need to do. It could be something as simple as a bug in the code - I need to double-check the masking.
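For the decayed learning rate idea, a minimal sketch in the same TF1 style as the code above might look like the following. The starting rate is the 0.004 used so far, the decay schedule values are illustrative, and the names (y_conv, test, W_o, FLAGS, cost) are the ones from the snippets above:
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(
    learning_rate=0.004,   # starting rate used so far
    global_step=global_step,
    decay_steps=1000,      # decay every 1000 training steps (illustrative)
    decay_rate=0.96,
    staircase=True)
loss = cost(y_conv, test, W_o, FLAGS)
train_step = tf.train.AdamOptimizer(learning_rate).minimize(loss, global_step=global_step)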