Identify misclassified images with TensorFlow

I have been working on an image classifier and I would like to have a look at the images that the model misclassified during validation. My idea was to compare the true and predicted labels and use the indices of the values that don't match to retrieve the images.
However, when I compare the accuracy this way, I don't get the same result that the evaluate method gives me.
This is what I have done:
I import the data using this function:
def create_dataset(folder_path, name, split, seed, shuffle=True):
    return tf.keras.preprocessing.image_dataset_from_directory(
        folder_path, labels='inferred', label_mode='categorical', color_mode='rgb',
        batch_size=32, image_size=(320, 320), shuffle=shuffle, interpolation='bilinear',
        validation_split=split, subset=name, seed=seed)
train_set = create_dataset(dir_path, 'training', 0.1, 42)
valid_set = create_dataset(dir_path, 'validation', 0.1, 42)
# output:
# Found 16718 files belonging to 38 classes.
# Using 15047 files for training.
# Found 16718 files belonging to 38 classes.
# Using 1671 files for validation.
Then to evaluate the accuracy on the validation set I use this line:
model.evaluate(valid_set)
# output:
# 53/53 [==============================] - 22s 376ms/step - loss: 1.1322 - accuracy: 0.7349
# [1.1321837902069092, 0.7348892688751221]
which is fine, since the values are exactly the same as the ones from the last epoch of training.
To extract the true labels from the validation set I use this line of code, based on this answer. Note that I need to create the validation set again, because every time I iterate over the variable that refers to the validation set, it gets reshuffled.
I thought this reshuffling was the cause of the inconsistent accuracy, but apparently recreating the dataset didn't solve the problem.
y_val_true = np.concatenate([y for x, y in create_dataset(dir_path, 'validation', 0.1, 42)], axis=0)
y_val_true = np.argmax(y_val_true, axis=1)
I make the prediction:
y_val_pred = model.predict(create_dataset(dir_path, 'validation', 0.1, 42))
y_val_pred = np.argmax(y_val_pred, axis=1)
And finally I compute once again the accuracy to verify that everything is ok:
m = tf.keras.metrics.Accuracy()
m.update_state(y_val_true, y_val_pred)
m.result().numpy()
# output:
# 0.082585275
As you can see, instead of getting the same value the evaluate method reported, I now get only about 8%.
I would be truly grateful if you could point out where my approach is flawed.
Since this is the first question I post, I apologize in advance for any mistakes I made.

This approach can help if you want to display or analyse the images batch by batch:
m = tf.keras.metrics.Accuracy()

# Iterate over individual batches to keep track of the images
# being fed to the model.
for valid_images, valid_labels in valid_set.as_numpy_iterator():
    y_val_true = np.argmax(valid_labels, axis=1)
    # The model can also take plain arrays as input, so once the images
    # of a batch are collected you can pass them in directly.
    y_val_pred = model.predict(valid_images)
    y_val_pred = np.argmax(y_val_pred, axis=1)
    # Update the state of the accuracy metric after every batch
    m.update_state(y_val_true, y_val_pred)

m.result().numpy()
If you want to feed the whole validation set at once:
valid_ds = create_dataset(dir_path, 'validation', 0.1, 42, shuffle=False)
y_val_true = np.concatenate([y for x, y in valid_ds], axis=0)
y_val_true = np.argmax(y_val_true, axis=1)
y_val_pred = model.predict(valid_ds)
y_val_pred = np.argmax(y_val_pred, axis=1)
m = tf.keras.metrics.Accuracy()
m.update_state(y_val_true, y_val_pred)
m.result().numpy()
I couldn't find the bug in your code though.
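Building on the batch-by-batch loop above, here is a minimal sketch for the original goal of collecting the misclassified images (assuming the trained model and valid_set from the question):

misclassified_images = []
misclassified_true = []
misclassified_pred = []

for valid_images, valid_labels in valid_set.as_numpy_iterator():
    y_true = np.argmax(valid_labels, axis=1)
    y_pred = np.argmax(model.predict(valid_images), axis=1)
    wrong = y_true != y_pred
    # Images, true labels and predictions come from the same batch, so they stay aligned
    # even though the dataset is reshuffled between iterations.
    misclassified_images.append(valid_images[wrong])
    misclassified_true.append(y_true[wrong])
    misclassified_pred.append(y_pred[wrong])

misclassified_images = np.concatenate(misclassified_images, axis=0)
misclassified_true = np.concatenate(misclassified_true, axis=0)
misclassified_pred = np.concatenate(misclassified_pred, axis=0)

Because only the misclassified samples are kept, this stays reasonably light on memory even for a few thousand validation images.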

Related

Can I use the output of tf.keras.utils.image_dataset_from_directory to train an autoencoder?

To put it simply, I'd like to be able to use a keras dataset created from a local image directory to train an autoencoder. To clarify, this is a model that approximates the Identity function for images: ideally, the output is exactly equal to the input.
The dataset is too large to fit in memory, so converting the dataset to a numpy array with np.concatenate will not help me here.
Or in other words, I'd like an Identity image dataset, where the label for each image in the dataset is exactly equal to the image itself.
Here's my (non-working) sample code:
train_ds, validate_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir,
    labels=None,
    validation_split=0.1,
    subset="both",
    shuffle=True,
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size,
    crop_to_aspect_ratio=True)

history = autoencoder.fit(
    x=train_ds,
    y=train_ds,
    validation_data=(validate_ds, validate_ds),
    epochs=epochs,
    batch_size=16
)
The image_dataset_from_directory function gives me a dataset of images with no labels. So far so good.
The second command fails with the error message:
ValueError: `y` argument is not supported when using dataset as input.
On the other hand, if I exclude the y variable I get this error:
ValueError: Target data is missing. Your model was compiled with loss=binary_crossentropy, and therefore expects target data to be provided in `fit()`.
Which is not at all surprising, because there are NO labels, as I requested none. And yet it won't let me use the dataset as the labels, which is what I need to do.
Any help would be appreciated.
While there are ways to modify the dataset, I think the best option is to write a custom model class. This is modified from the official tutorial:
class Autoencoder(tf.keras.Model):
    def train_step(self, data):
        # Unpack the data. Its structure depends on your model and
        # on what you pass to `fit()`.
        x = data  # CHANGE 1: changed from x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute the loss value
            # (the loss function is configured in `compile()`)
            loss = self.compiled_loss(x, y_pred, regularization_losses=self.losses)  # CHANGE 2: replaced y by x as label
        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(x, y_pred)  # CHANGE 3: like change 2
        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}

    def test_step(self, data):
        # CHANGED in the same way
        x = data
        # Compute predictions
        y_pred = self(x, training=False)
        # Updates the metrics tracking the loss
        self.compiled_loss(x, y_pred, regularization_losses=self.losses)
        # Update the metrics.
        self.compiled_metrics.update_state(x, y_pred)
        # Return a dict mapping metric names to current value.
        # Note that it will include the loss (tracked in self.metrics).
        return {m.name: m.result() for m in self.metrics}
This is for the functional API (tf.keras.Model). In case you are using a Sequential model, you should inherit from that instead. You can use this as a direct replacement for the normal model constructor.
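As a rough usage sketch (the tiny convolutional stack, the rescaling step and the hyperparameters below are placeholders of my own, not the asker's actual architecture):

# Rescale to [0, 1] outside the model, because train_step uses the raw model
# input as the reconstruction target.
train_ds = train_ds.map(lambda im: im / 255.0)
validate_ds = validate_ds.map(lambda im: im / 255.0)

inputs = tf.keras.Input(shape=(img_height, img_width, 3))
h = tf.keras.layers.Conv2D(16, 3, padding='same', activation='relu')(inputs)
outputs = tf.keras.layers.Conv2D(3, 3, padding='same', activation='sigmoid')(h)

autoencoder = Autoencoder(inputs, outputs)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

# The datasets yield only images (labels=None), which is exactly what the
# custom train_step/test_step expect.
history = autoencoder.fit(train_ds, validation_data=validate_ds, epochs=epochs)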
Another option could be to use train_zipped = tf.data.Dataset.zip((train_ds, train_ds)) to create an input, target dataset that you can put directly into the usual model and loss function. Personally, I don't like the duplication. Also, I'm not sure if this will behave correctly for the shuffled data (will both copies of train_ds be shuffled in the same way?).
You could circumvent this by setting shuffle=False in image_dataset_from_directory, and then use train_zipped = train_zipped.shuffle(buffer_size) instead. However, in my experience this is very slow.
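If you do go the zipping route, a rough sketch of that workaround (buffer_size is an arbitrary placeholder; note that shuffling an already batched dataset shuffles whole batches, not individual images):

base_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir,
    labels=None,
    shuffle=False,  # keep the file order fixed so the two copies stay aligned
    image_size=(img_height, img_width),
    batch_size=batch_size)

# Pair each batch with itself as the target, then shuffle the pairs together.
# This (input, target) dataset works with a standard model, no custom train_step needed.
train_zipped = tf.data.Dataset.zip((base_ds, base_ds)).shuffle(buffer_size=64)

history = autoencoder.fit(train_zipped, epochs=epochs)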

Printing the multilayer perceptron ANN prediction values in a data frame

I am a beginner with TensorFlow and need to get the predicted value (whether a customer will subscribe to a term deposit or not) as the output, in data frame format, from a multilayer perceptron ANN model, given a data frame as the input. The data comes from a banking campaign.
We are referring to this sample:
https://github.com/ManikandanJeyabal/Workplace/blob/master/ANN/TensorFlow/BankMarketing.py
We have tried to run this in notebooks on an Azure virtual machine with Python 3.6.
In the sample above, we need to modify the source code below to get the predictions as a data frame, so that they can be displayed in a report.
plt.plot(mse_his, 'r')
plt.show()
plt.plot(accu_his)
plt.show()
# print the final accuracy
correct_pred = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
print("Test Accuracy--> ", (sess.run(accuracy, feed_dict={x: x_test, y_: y_test})))
# print final mean square error
pred_y = sess.run(y, feed_dict={x: x_test})
mse = tf.reduce_mean(tf.square(pred_y - y_test))
print("MSE: %.4f" % sess.run(mse))
print(correct_pred)
print(y_test)
We need to get the output in the form of a pandas data frame along with the predicted columns. Please guide me here.
----------------------------------------------
Updates:
Thank you for the response, McAngus.
After the changes in the comments below, I could render the data frame output, but from this output, how can I derive the True or False predicted value?
[Dataframe Output][1]
[1]: https://i.stack.imgur.com/f8iJ9.png
If I understand correctly, you're looking to put the results in a data frame. From here you can use pd.DataFrame.from_dict like so:
pd.DataFrame.from_dict({"target": y_test.tolist(), "prediction": pred_y.tolist()})
This will give the column headers target and prediction.
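Regarding the follow-up about deriving a True/False value: assuming y_test is one-hot encoded with two classes and pred_y holds the raw network outputs (both assumptions on my side), something along these lines should work:

import numpy as np
import pandas as pd

# Take the class with the highest score; column 1 is assumed to be the "subscribed" class.
predicted_class = np.argmax(pred_y, axis=1)
actual_class = np.argmax(y_test, axis=1)

results = pd.DataFrame({
    "target": actual_class == 1,        # True if the customer actually subscribed
    "prediction": predicted_class == 1  # True if the model predicts a subscription
})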

How To Get the Confusion Matrix when Using Tflearn

I want to get a confusion matrix, but for that I need the set of predicted items and labels.
How can I get this data from tflearn, for example for this example (Pannous speech_data)? https://github.com/llSourcell/tensorflow_speech_recognition_demo/blob/master/demo.py
Thanks!
model.fit(trainX, trainY, n_epoch=10, validation_set=(testX, testY), show_metric=True,batch_size=batch_size)
_y=model.predict(X)
predictions.append(_y)
labels.append(trainY)
bp()
confusionMat=tf.confusion_matrix(labels,predictions,num_classes=classes,dtype=tf.int32,name=None,weights=None)
print(np.matrix(confusionMat))
_y = model.predict(X)  # predictions
y = trainY             # I think this is the actual labels data
tf.confusion_matrix(
    labels=y,          # put y here
    predictions=_y,    # put _y here
    num_classes=None,
    dtype=tf.int32,
    name=None,
    weights=None
)
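Since model.predict returns class probabilities and tf.confusion_matrix only builds a tensor, a rough end-to-end sketch (assuming testX, testY and classes from the linked demo) could look like this:

import numpy as np
import tensorflow as tf

# Convert probabilities and one-hot labels to class indices.
predictions = np.argmax(model.predict(testX), axis=1)
labels = np.argmax(testY, axis=1)

# The confusion matrix op still has to be evaluated in a session.
cm_op = tf.confusion_matrix(labels, predictions, num_classes=classes)
with tf.Session() as sess:
    print(sess.run(cm_op))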

How to average summaries over multiple batches?

Assuming I have a bunch of summaries defined like:
loss = ...
tf.scalar_summary("loss", loss)
# ...
summaries = tf.merge_all_summaries()
I can evaluate the summaries tensor every few steps on the training data and pass the result to a SummaryWriter.
The result will be noisy summaries, because they're only computed on one batch.
However, I would like to compute the summaries on the entire validation dataset.
Of course, I can't pass the validation dataset as a single batch, because it would be too big.
So, I'll get summary outputs for each validation batch.
Is there a way to average those summaries so that it appears as if the summaries have been computed on the entire validation set?
Do the averaging of your measure in Python and create a new Summary object for each mean. Here is what I do:
accuracies = []
# Calculate your measure over as many batches as you need
# (accuracy_op is whatever op computes the measure for one batch)
for batch in validation_set:
    accuracies.append(sess.run(accuracy_op, feed_dict=batch))
# Take the mean of your measure
accuracy = np.mean(accuracies)

# Create a new Summary object with your measure
summary = tf.Summary()
summary.value.add(tag="%sAccuracy" % prefix, simple_value=accuracy)

# Add it to the Tensorboard summary writer
# Make sure to specify a step parameter to get nice graphs over time
summary_writer.add_summary(summary, global_step)
I would avoid calculating the average outside the graph.
You can use tf.train.ExponentialMovingAverage:
ema = tf.train.ExponentialMovingAverage(decay=my_decay_value, zero_debias=True)
maintain_ema_op = ema.apply(your_losses_list)

# Create an op that will update the moving averages after each training step.
with tf.control_dependencies([your_original_train_op]):
    train_op = tf.group(maintain_ema_op)
Then, use:
sess.run(train_op)
That will call maintain_ema_op because it is defined as a control dependency.
In order to get your exponential moving averages, use:
moving_average = ema.average(an_item_from_your_losses_list_above)
And retrieve its value using:
value = sess.run(moving_average)
This calculates the moving average within your calculation graph.
I think it's always better to let tensorflow do the calculations.
Have a look at the streaming metrics. They have an update function to feed the information of your current batch and a function to get the averaged summary.
It's going to look somewhat like this:
accuracy = ...
streaming_accuracy, streaming_accuracy_update = tf.contrib.metrics.streaming_mean(accuracy)
streaming_accuracy_scalar = tf.summary.scalar('streaming_accuracy', streaming_accuracy)

# set up your session etc.

for i in iterations:
    for b in batches:
        sess.run([streaming_accuracy_update], feed_dict={...})
    streaming_summ = sess.run(streaming_accuracy_scalar)
    writer.add_summary(streaming_summ, i)
Also see the tensorflow documentation: https://www.tensorflow.org/versions/master/api_guides/python/contrib.metrics
and this question:
How to accumulate summary statistics in tensorflow
You can store the running sum and recalculate the average after each batch, like:
loss_sum = tf.Variable(0.)
inc_op = tf.assign_add(loss_sum, loss)
clear_op = tf.assign(loss_sum, 0.)
average = loss_sum / batches
tf.scalar_summary("average_loss", average)

sess.run(clear_op)
for i in range(batches):
    sess.run([loss, inc_op])
sess.run(average)
For future reference, the TensorFlow metrics API now supports this by default. For example, take a look at tf.metrics.mean_squared_error:
For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the mean_squared_error. Internally, a squared_error operation computes the element-wise square of the difference between predictions and labels. Then update_op increments total with the reduced sum of the product of weights and squared_error, and it increments count with the reduced sum of weights.
These total and count variables are added to the set of metric variables, so in practice what you would do is something like:
x_batch = tf.placeholder(...)
y_batch = tf.placeholder(...)
model_output = ...
mse, mse_update = tf.metrics.mean_squared_error(y_batch, model_output)
# This operation resets the metric internal variables to zero
metrics_init = tf.variables_initializer(
    tf.get_default_graph().get_collection(tf.GraphKeys.METRIC_VARIABLES))

with tf.Session() as sess:
    # Train...

    # On evaluation step
    sess.run(metrics_init)
    for x_eval_batch, y_eval_batch in ...:
        mse = sess.run(mse_update, feed_dict={x_batch: x_eval_batch, y_batch: y_eval_batch})
    print('Evaluation MSE:', mse)
I found one solution myself. I think it's kind of hacky and I hope there is a more elegant solution.
During setup:
valid_loss_placeholder = tf.placeholder(dtype=tf.float32, shape=[])
valid_loss_summary = tf.scalar_summary("valid loss", valid_loss_placeholder)
Or for tensorflow versions after 0.12 (change in name for tf.scalar_summary):
valid_loss_placeholder = tf.placeholder(dtype=tf.float32, shape=[])
valid_loss_summary = tf.summary.scalar("valid loss", valid_loss_placeholder)
Within training loop:
# Compute valid loss in python by doing sess.run() for each batch
# and averaging
valid_loss = ...
summary = sess.run(valid_loss_summary, {valid_loss_placeholder: valid_loss})
summary_writer.add_summary(summary, step)
As of August 2018, streaming metrics have been deprecated. However, unintuitively, all tf.metrics are streaming. So, use tf.metrics.accuracy.
However, if you want accuracy (or another metric) over only a subset of batches, then you can use an exponential moving average, as in the answer by MZHm, or reset any of the tf.metrics by following this very informative blog post.
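A minimal sketch of that reset approach (the scope name, label_ids, pred_ids and eval_batches are placeholders of my own): put the metric in its own variable scope and re-initialize only the local variables created there.

with tf.variable_scope('eval_metrics'):
    acc, acc_update = tf.metrics.accuracy(labels=label_ids, predictions=pred_ids)

# Only this metric's internal total/count variables live in the scope.
metric_vars = tf.get_collection(tf.GraphKeys.LOCAL_VARIABLES, scope='eval_metrics')
metrics_reset = tf.variables_initializer(metric_vars)

# During evaluation:
sess.run(metrics_reset)              # start a fresh accumulation
for batch_feed in eval_batches:      # assumed iterable of feed_dicts
    sess.run(acc_update, feed_dict=batch_feed)
accuracy_over_all_batches = sess.run(acc)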
For quite some time I had only been saving the summary once per epoch. I never knew that TensorFlow's summary would then only record the last batch that was run.
Shocked, I looked into this problem. This is the solution I came up with (using the dataset API):
loss = ...
train_op = ...

loss_metric, loss_metric_update = tf.metrics.mean(loss)
tf.summary.scalar('loss', loss_metric)
merged = tf.summary.merge_all()

train_writer = tf.summary.FileWriter(os.path.join(res_dir, 'train'))
test_writer = tf.summary.FileWriter(os.path.join(res_dir, 'test'))

init_local = tf.initializers.local_variables()
init_global = tf.initializers.global_variables()
sess.run(init_global)

def train_run(epoch):
    sess.run([dataset.train_init_op, init_local])  # train_init_op is the operation that switches to training data
    for i in range(dataset.num_train_batches):  # num_train_batches is the number of batches that should be run for the training set
        sess.run([train_op, loss_metric_update])
    summary, cur_loss = sess.run([merged, loss_metric])
    train_writer.add_summary(summary, epoch)
    return cur_loss

def test_run(epoch):
    sess.run([dataset.test_init_op, init_local])  # test_init_op is the operation that switches to test data
    for i in range(dataset.num_test_batches):  # num_test_batches is the number of batches that should be run for the test set
        sess.run(loss_metric_update)
    summary, cur_loss = sess.run([merged, loss_metric])
    test_writer.add_summary(summary, epoch)
    return cur_loss

for epoch in range(epochs):
    train_loss = train_run(epoch + 1)
    test_loss = test_run(epoch + 1)
    print("Epoch: {0:3}, loss: (train: {1:10.10f}, test: {2:10.10f})".format(epoch + 1, train_loss, test_loss))
For the summary I'm just wrapping the tensor I'm interested in into tf.metrics.mean(). For each batch run I call the metrics update operation. At the end of every epoch the metrics tensor will return the correct mean of all batch results.
Don't forget to initialize local variables every time you switch between training and test data. Otherwise your train and test metrics will be near identical.
I had the same problem when I realized I had to iterate over my validation data, because memory got cramped and OOM errors came flooding in.
As multiple of these answers say, tf.metrics has this built in, but I'm not using tf.metrics in my project. So, inspired by that, I made this:
import tensorflow as tf
import numpy as np

def batch_persistent_mean(tensor):
    # Make a variable that keeps track of the sum
    accumulator = tf.Variable(initial_value=tf.zeros_like(tensor), dtype=tf.float32)
    # Keep count of batches in accumulator (needed to estimate mean)
    batch_nums = tf.Variable(initial_value=tf.zeros_like(tensor), dtype=tf.float32)
    # Make an operation for accumulating, increasing batch count
    accumulate_op = tf.assign_add(accumulator, tensor)
    step_batch = tf.assign_add(batch_nums, 1)
    update_op = tf.group([step_batch, accumulate_op])
    eps = 1e-5
    output_tensor = accumulator / (tf.nn.relu(batch_nums - eps) + eps)
    # In regards to the tf.nn.relu, it's a hacky zero guard:
    # if batch_nums is zero then return eps, else it'll be batch_nums
    # Make an operation to reset
    flush_op = tf.group([tf.assign(accumulator, 0), tf.assign(batch_nums, 0)])
    return output_tensor, update_op, flush_op

# Make a variable that we want to accumulate
X = tf.Variable(0., dtype=tf.float32)
# Make our persistent mean operations
Xbar, upd, flush = batch_persistent_mean(X)
Now you send Xbar to your summary e.g. tf.scalar_summary("mean_of_x", Xbar), and where you'd do sess.run(X) before, you'll do sess.run(upd). And between epochs you'd do sess.run(flush).
Testing behaviour:
### INSERT ABOVE CODE CHUNK IN S.O. ANSWER HERE ###
with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()])
    # Calculate the mean of 0 + 1 + ... + 19
    for i in range(20):
        sess.run(upd, {X: i})
    print(sess.run(Xbar), "=", np.mean(np.arange(20)))
    for i in range(40):
        sess.run(upd, {X: i})
    # Now Xbar is the mean of (0 + 1 + ... + 19) + (0 + 1 + ... + 39):
    print(sess.run(Xbar), "=", np.mean(np.concatenate([np.arange(20), np.arange(40)])))
    # Now flush it
    sess.run(flush)
    print("flushed. Xbar=", sess.run(Xbar))
    for i in range(40):
        sess.run(upd, {X: i})
    print(sess.run(Xbar), "=", np.mean(np.arange(40)))

tensorflow neural net with continuous / floating point output?

I'm trying to create a simple neural net in tensorflow that learns some simple relationship between inputs and outputs (for example, y=-x) where the inputs and outputs are floating point values (meaning, no softmax used on the output).
I feel like this should be pretty easy to do, but I must be messing up somewhere. Wondering if there are any tutorials or examples out there that do something similar. I looked through the existing tensorflow tutorials and didn't see anything like this and looked through several other sources of tensorflow examples I found by googling, but didn't see what I was looking for.
Here's a trimmed down version of what I've been trying. In this particular version, I've noticed that my weights and biases always seem to be stuck at zero. Perhaps this is due to my single input and single output?
I've had good luck altering the mnist example for various nefarious purposes, but everything I've gotten to work successfully used softmax on the output for categorization. If I can figure out how to generate a raw floating point output from my neural net, there are several fun projects I'd like to do with it.
Anyone see what I'm missing? Thanks in advance!
- J.
# Trying to define the simplest possible neural net where the output layer of the neural net is a single
# neuron with a "continuous" (a.k.a floating point) output. I want the neural net to output a continuous
# value based off one or more continuous inputs. My real problem is more complex, but this is the simplest
# representation of it for explaining my issue. Even though I've oversimplified this to look like a simple
# linear regression problem (y=m*x), I want to apply this to more complex neural nets. But if I can't get
# it working with this simple problem, then I won't get it working for anything more complex.

import tensorflow as tf
import random
import numpy as np

INPUT_DIMENSION = 1
OUTPUT_DIMENSION = 1
TRAINING_RUNS = 100
BATCH_SIZE = 10000
VERF_SIZE = 1

# Generate two arrays, the first array being the inputs that need trained on, and the second array containing outputs.
def generate_test_point():
    x = random.uniform(-8, 8)
    # To keep it simple, output is just -x.
    out = -x
    return (np.array([x]), np.array([out]))

# Generate a bunch of data points and then package them up in the array format needed by
# tensorflow
def generate_batch_data(num):
    xs = []
    ys = []
    for i in range(num):
        x, y = generate_test_point()
        xs.append(x)
        ys.append(y)
    return (np.array(xs), np.array(ys))

# Define a single-layer neural net. Originally based off the tensorflow mnist for beginners tutorial

# Create a placeholder for our input variable
x = tf.placeholder(tf.float32, [None, INPUT_DIMENSION])

# Create variables for our neural net weights and bias
W = tf.Variable(tf.zeros([INPUT_DIMENSION, OUTPUT_DIMENSION]))
b = tf.Variable(tf.zeros([OUTPUT_DIMENSION]))

# Define the neural net. Note that since I'm not trying to classify digits as in the tensorflow mnist
# tutorial, I have removed the softmax op. My expectation is that 'net' will return a floating point
# value.
net = tf.matmul(x, W) + b

# Create a placeholder for the expected result during training
expected = tf.placeholder(tf.float32, [None, OUTPUT_DIMENSION])

# Same training as used in mnist example
cross_entropy = -tf.reduce_sum(expected*tf.log(tf.clip_by_value(net,1e-10,1.0)))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

sess = tf.Session()
init = tf.initialize_all_variables()
sess.run(init)

# Perform our training runs
for i in range(TRAINING_RUNS):
    print "trainin run: ", i,

    batch_inputs, batch_outputs = generate_batch_data(BATCH_SIZE)

    # I've found that my weights and bias values are always zero after training, and I'm not sure why.
    sess.run(train_step, feed_dict={x: batch_inputs, expected: batch_outputs})

    # Test our accuracy as we train... I am defining my accuracy as the error between what I
    # expected and the actual output of the neural net.
    # accuracy = tf.reduce_mean(tf.sub( expected, net))
    accuracy = tf.sub(expected, net)  # using just subtract since I made my verification size 1 for debug

    # Uncomment this to debug
    # import pdb; pdb.set_trace()

    batch_inputs, batch_outputs = generate_batch_data(VERF_SIZE)
    result = sess.run(accuracy, feed_dict={x: batch_inputs, expected: batch_outputs})

    print " progress: "
    print " inputs: ", batch_inputs
    print " outputs:", batch_outputs
    print " actual: ", result
Your loss should be the squared difference of output and true value:
loss = tf.reduce_mean(tf.square(expected - net))
This way the network learns to optimize this loss and make the output closer to the real result. Cross entropy should only be used for output values between 0 and 1, i.e. for classification.
If anyone is interested, I got this example to work. Here's the code:
# Trying to define the simplest possible neural net where the output layer of the neural net is a single
# neuron with a "continuous" (a.k.a floating point) output. I want the neural net to output a continuous
# value based off one or more continuous inputs. My real problem is more complex, but this is the simplest
# representation of it for explaining my issue. Even though I've oversimplified this to look like a simple
# linear regression problem (y=m*x), I want to apply this to more complex neural nets. But if I can't get
# it working with this simple problem, then I won't get it working for anything more complex.

import tensorflow as tf
import random
import numpy as np

INPUT_DIMENSION = 1
OUTPUT_DIMENSION = 1
TRAINING_RUNS = 100
BATCH_SIZE = 10000
VERF_SIZE = 1

# Generate two arrays, the first array being the inputs that need trained on, and the second array containing outputs.
def generate_test_point():
    x = random.uniform(-8, 8)
    # To keep it simple, output is just -x.
    out = -x
    return (np.array([x]), np.array([out]))

# Generate a bunch of data points and then package them up in the array format needed by
# tensorflow
def generate_batch_data(num):
    xs = []
    ys = []
    for i in range(num):
        x, y = generate_test_point()
        xs.append(x)
        ys.append(y)
    return (np.array(xs), np.array(ys))

# Define a single-layer neural net. Originally based off the tensorflow mnist for beginners tutorial

# Create a placeholder for our input variable
x = tf.placeholder(tf.float32, [None, INPUT_DIMENSION])

# Create variables for our neural net weights and bias
W = tf.Variable(tf.zeros([INPUT_DIMENSION, OUTPUT_DIMENSION]))
b = tf.Variable(tf.zeros([OUTPUT_DIMENSION]))

# Define the neural net. Note that since I'm not trying to classify digits as in the tensorflow mnist
# tutorial, I have removed the softmax op. My expectation is that 'net' will return a floating point
# value.
net = tf.matmul(x, W) + b

# Create a placeholder for the expected result during training
expected = tf.placeholder(tf.float32, [None, OUTPUT_DIMENSION])

# Same training as used in mnist example
loss = tf.reduce_mean(tf.square(expected - net))
# cross_entropy = -tf.reduce_sum(expected*tf.log(tf.clip_by_value(net,1e-10,1.0)))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

sess = tf.Session()
init = tf.initialize_all_variables()
sess.run(init)

# Perform our training runs
for i in range(TRAINING_RUNS):
    print("trainin run: ", i, )

    batch_inputs, batch_outputs = generate_batch_data(BATCH_SIZE)

    # I've found that my weights and bias values are always zero after training, and I'm not sure why.
    sess.run(train_step, feed_dict={x: batch_inputs, expected: batch_outputs})

    # Test our accuracy as we train... I am defining my accuracy as the error between what I
    # expected and the actual output of the neural net.
    # accuracy = tf.reduce_mean(tf.sub( expected, net))
    accuracy = tf.subtract(expected, net)  # using just subtract since I made my verification size 1 for debug
    # tf.subtract()

    # Uncomment this to debug
    # import pdb; pdb.set_trace()
    print("W=%f, b=%f" % (sess.run(W), sess.run(b)))

    batch_inputs, batch_outputs = generate_batch_data(VERF_SIZE)
    result = sess.run(accuracy, feed_dict={x: batch_inputs, expected: batch_outputs})

    print(" progress: ")
    print(" inputs: ", batch_inputs)
    print(" outputs:", batch_outputs)
    print(" actual: ", result)
When using the built-in, easy way of constructing the NN, I used
loss=tf.keras.losses.MeanSquaredError().
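For reference, a small sketch of that built-in route (layer sizes and epoch count are arbitrary choices of mine, not tuned values):

import numpy as np
import tensorflow as tf

# Same toy relationship as above: y = -x.
x_train = np.random.uniform(-8, 8, size=(10000, 1)).astype(np.float32)
y_train = -x_train

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(1,)),
    tf.keras.layers.Dense(1)  # linear output for a continuous target
])
model.compile(optimizer='adam', loss=tf.keras.losses.MeanSquaredError())
model.fit(x_train, y_train, epochs=10, batch_size=32, verbose=0)

print(model.predict(np.array([[3.0]], dtype=np.float32)))  # should be close to -3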