How do I batch a TF.Dataset without adding an additional dimension? - tensorflow

Short:
I'm applying several manipulations to a tensorflow.python.data.ops.dataset_ops.BatchDataset object. For a variety of reasons, I need to do these manipulations before batching the dataset. In pseudo code, I'm doing something like this:
# create data
X = np...
y = np...
window_size = ...
# create tf dataset
Xy_ds = tf.keras.utils.timeseries_dataset_from_array(X,y,window_size,...batch_size=1)
#doing operations
Xy_ds = Xy_ds.filter(...)
Xy_ds = Xy_ds.map(...)
#batching
Xy_ds = Xy_ds.batch(...)
The issue is, the shape of Xy_ds ends up being (batch size, 1, window_size, n), presumably because I'm batching something which already has a batch_size of 1. How do I batch this so that the shape becomes (batch size, window_size, n)?
Long:
This is the actual code I'm working with. It's essentially the pseudo code above, but runnable.
import tensorflow as tf
import numpy as np
#creating numpy data structures representing the problem
X = np.random.random((100,5))
y = np.random.random((100))
src = np.expand_dims(np.array([0]*50 + [1]*50),1)
window_size = 10
batch_size = 3
#appending source information to X, for filtration
X = np.append(src, X, 1)
#making a time series dataset which does not respect src
Xy_ds = tf.keras.utils.timeseries_dataset_from_array(X, y, sequence_length=window_size, batch_size=1,
                                                     sequence_stride=1, shuffle=True)
#filtering by and removing src info
def single_source(x,y):
    source = x[:,:,0]
    return tf.reduce_all(source == source[0])

def drop_source(x,y):
    x_ = x[:, :, 1:]
    return x_, y
Xy_ds = Xy_ds.filter(single_source)
Xy_ds = Xy_ds.map(drop_source)
#batching
Xy_ds = Xy_ds.batch(batch_size)
i = 0
for x, y in Xy_ds:
    if i == 0:
        print('batch shape: ', x.shape)
    i += 1
print('total batches: {}'.format(i))
print('total datums: {}'.format(i*batch_size))
The result of the printout is:
batch shape: (3, 1, 10, 5)
total batches: 31
total datums: 93
This question is a spinoff of this question, if you're curious.

Use:
Xy_ds = Xy_ds.unbatch().batch(batch_size)
src: https://github.com/tensorflow/tensorflow/issues/31548
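Applied to the code in the question, that would look roughly like this (a sketch; the printed shape assumes the example values window_size=10, batch_size=3 and the 5 remaining features):
Xy_ds = Xy_ds.filter(single_source)
Xy_ds = Xy_ds.map(drop_source)
# undo the batch_size=1 batching before re-batching
Xy_ds = Xy_ds.unbatch().batch(batch_size)

for x, y in Xy_ds.take(1):
    print(x.shape)  # (3, 10, 5), i.e. (batch_size, window_size, n)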

Also, you started with:
Xy_ds = tf.keras.utils.timeseries_dataset_from_array(...batch_size=1)
I checked the code and it skips batching if you set batch_size=None. Try:
Xy_ds = tf.keras.utils.timeseries_dataset_from_array(...batch_size=None)
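If you go that route, note (an inference from the question's code, not something verified here) that the elements reaching filter and map would then be single windows of shape (window_size, n+1) rather than batches of size 1, so the helper functions would need one less axis, roughly:
def single_source(x, y):
    source = x[:, 0]        # was x[:, :, 0] with the extra batch axis
    return tf.reduce_all(source == source[0])

def drop_source(x, y):
    return x[:, 1:], y      # was x[:, :, 1:]

Xy_ds = Xy_ds.filter(single_source).map(drop_source).batch(batch_size)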

Related

Time-Series Transformer Model trains well but performs worse on Test Data

I have created a transformer model for multivariate time series predictions (many-to-one classification model).
Details about the Dataset
I have hourly varying data, i.e., 8 different features (hour, month, temperature, humidity, wind speed, solar radiation concentration, etc.), and with them I am trying to predict a time sequence (the energy consumption of a building). So my input has the shape X.shape = (8783, 168, 8), i.e., 8783 time sequences, each sequence containing 168 hourly entries/vectors, and each vector containing 8 features. My output has the shape Y.shape = (8783,1), i.e., 8783 sequences, each containing 1 output value (the building energy consumption value after every hour).
Model Details
I took as a model an example from the official Keras site. It was created for classification problems; I modified it for my regression problem by changing the activation of the last output layer from sigmoid to relu.
Input shape (train_f) = (8783, 168, 8)
Output shape (train_P) = (8783,1)
When I train the model for 100 epochs, it converges in fewer epochs than my reference models (i.e., LSTMs and LSTMs with self-attention). After training, when the model is asked to make predictions on the test data, the prediction performance is worse compared to the reference models.
I would be grateful if you could have a look at the code and let me know of potential steps to improve the prediction/test accuracy.
Here is the code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
import missingno as msna
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from math import sqrt
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
df_weather = pd.read_excel(r"Downloads\WeatherData.xlsx")
df_energy = pd.read_excel(r"Downloads\Building_energy_consumption_record.xlsx")
visa = pd.concat([df_weather, df_energy], axis = 1)
df_data = visa.loc[:, ~visa.columns.isin(["Time1", "TD", "U", "DR", "FX"])]
msna.bar(df_data)
plt.figure(figsize = (16,6))
sb.heatmap(df_data.corr(), annot = True, linewidths=1, fmt = ".2g", cmap= 'coolwarm')
plt.xticks(rotation = 'horizontal')  # orientation of the x-axis tick labels
extract_for_normalization = list(df_data)[1:9]
df_data_float = df_data[extract_for_normalization].astype(float)
from sklearn.model_selection import train_test_split
train_X, test_X = train_test_split(df_data_float, train_size = 0.7, shuffle = False)
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_train_X=scaler.fit_transform(train_X)
**Converting train_X into required shape (inputs,sequences, features)**
train_f = [] #features input from training data
train_p = [] # prediction values
#test_q = []
#test_r = []
n_future = 1 #number of days we want to predict into the future
n_past = 168 # no. of time series input features to be considered for training
for val in range(n_past, len(scaled_train_X) - n_future+1):
    train_f.append(scaled_train_X[val - n_past:val, 0:scaled_train_X.shape[1]])
    train_p.append(scaled_train_X[val + n_future - 1:val + n_future, -1])
train_f, train_p = np.array(train_f), np.array(train_p)
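As a quick sanity check (added for illustration; the expected values follow from the shapes described above), the windowed arrays can be verified with:
print(train_f.shape)  # expected: (n_sequences, 168, 8)
print(train_p.shape)  # expected: (n_sequences, 1)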
**Transformer Model**
def transformer_encoder(inputs, head_size, num_heads, ff_dim, dropout=0):
    # Normalization and Attention
    x = layers.LayerNormalization(epsilon=1e-6)(inputs)
    x = layers.MultiHeadAttention(
        key_dim=head_size, num_heads=num_heads, dropout=dropout
    )(x, x)
    x = layers.Dropout(dropout)(x)
    res = x + inputs
    # Feed Forward Part
    x = layers.LayerNormalization(epsilon=1e-6)(res)
    x = layers.Conv1D(filters=ff_dim, kernel_size=1, activation="relu")(x)
    x = layers.Dropout(dropout)(x)
    x = layers.Conv1D(filters=inputs.shape[-1], kernel_size=1)(x)
    return x + res
def build_model(
    input_shape,
    head_size,
    num_heads,
    ff_dim,
    num_transformer_blocks,
    mlp_units,
    dropout=0,
    mlp_dropout=0,
):
    inputs = keras.Input(shape=input_shape)
    x = inputs
    for _ in range(num_transformer_blocks):
        x = transformer_encoder(x, head_size, num_heads, ff_dim, dropout)
    x = layers.GlobalAveragePooling1D(data_format="channels_first")(x)
    for dim in mlp_units:
        x = layers.Dense(dim, activation="relu")(x)
        x = layers.Dropout(mlp_dropout)(x)
    outputs = layers.Dense(train_p.shape[1])(x)
    return keras.Model(inputs, outputs)
input_shape = (train_f.shape[1], train_f.shape[2])
model = build_model(
    input_shape,
    head_size=256,
    num_heads=4,
    ff_dim=4,
    num_transformer_blocks=4,
    mlp_units=[128],
    mlp_dropout=0.4,
    dropout=0.25,
)
model.compile(loss=tf.keras.losses.mean_absolute_error,
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
              metrics=["mse"])
model.summary()
history = model.fit(train_f, train_p, epochs=100, batch_size = 32, validation_split = 0.15, verbose = 1)
trainYPredict = model.predict(train_f)
**Inverse transform the prediction and keep the last value(output)**
trainYPredict1 = np.repeat(trainYPredict, scaled_train_X.shape[1], axis = -1)
trainYPredict_actual = scaler.inverse_transform(trainYPredict1)[:, -1]
train_p_actual = np.repeat(train_p, scaled_train_X.shape[1], axis = -1)
train_p_actual1 = scaler.inverse_transform(train_p_actual)[:, -1]
Prediction_mse=mean_squared_error(train_p_actual1 ,trainYPredict_actual)
print("Mean Squared Error of prediction is:", str(Prediction_mse))
Prediction_rmse =sqrt(Prediction_mse)
print("Root Mean Squared Error of prediction is:", str(Prediction_rmse))
prediction_r2=r2_score(train_p_actual1 ,trainYPredict_actual)
print("R2 score of predictions is:", str(prediction_r2))
prediction_mae=mean_absolute_error(train_p_actual1 ,trainYPredict_actual)
print("Mean absolute error of prediction is:", prediction_mae)
**Testing of model**
scaled_test_X = scaler.transform(test_X)
test_q = []
test_r = []
for val in range(n_past, len(scaled_test_X) - n_future+1):
    test_q.append(scaled_test_X[val - n_past:val, 0:scaled_test_X.shape[1]])
    test_r.append(scaled_test_X[val + n_future - 1:val + n_future, -1])
test_q, test_r = np.array(test_q), np.array(test_r)
testPredict = model.predict(test_q)
A plot of the training and validation loss is also attached (image: Training and validation Loss).

Retrieve (or add) filenames when loading tensorflow dataset

I am training a 3D CNN using keras and I would like to find a way to preserve the filenames from every input that comes into the dataset. The data consists of a series of .NII images that I load using tools from the Nibabel package. However, when I use the function get_fdata() from nibabel, I lose all the file names from the header and I just get the 3D arrays from the image (which I need to create the tensors that go into the model). I would like to know if there is a way to either preserve those names or add them as an attribute of each 3D array (maybe incorporating them into the data loader). I have followed the example from the keras CT scan classification, with some modifications for these fMRI images. Here I have added some code to show what I have so far:
import os
from re import A
import zipfile
import numpy as np
import tensorflow as tf
import random
from tensorflow import keras
from tensorflow.keras import layers
import nibabel as nib
## Loading data and preprocessing
MNAME = "test_A.h5"
def read_nifti_file(filepath):
    """Read and load volume"""
    # Read file
    scan = nib.load(filepath)
    # Get raw data
    scan = scan.get_fdata()
    return scan
scene_D_paths = np.asarray([
    os.path.join(os.getcwd(), 'data\scene_D', x)
    for x in os.listdir("data\scene_D")
])
scene_H_paths = np.asarray([
    os.path.join(os.getcwd(), 'data\scene_H', x)
    for x in os.listdir("data\scene_H")
])
# Read the scans.
D_scans = np.array([read_nifti_file(path) for path in scene_D_paths])
H_scans = np.array([read_nifti_file(path) for path in scene_H_paths])
# One hot encoding
D_labels = np.array([[1., 0., 0., 0.] for _ in range(len(D_scans))])
H_labels = np.array([[0., 1., 0., 0.] for _ in range(len(H_scans))])
# Divide dataset in 0.7 training and 0.3 testing
# We use a single randomization per scene and we keep track of the indexing
def split_dataset(scene):
    n_images = len(scene)
    n_train = round((n_images-1)*.7)
    train_idx = np.full(n_images, False)
    train_idx[random.sample(range(0, n_images), n_train)] = True
    test_idx = np.full(len(train_idx), True)
    test_idx[train_idx] = False
    return (train_idx, test_idx)
D_index = split_dataset(scene_D_paths)
H_index = split_dataset(scene_H_paths)
paths_train = np.concatenate((scene_D_paths[D_index[0]], scene_H_paths[H_index[0]]), axis = 0)
x_train = np.concatenate((D_scans[D_index[0]], H_scans[H_index[0]]), axis=0)
y_train = np.concatenate((D_labels[D_index[0]], H_labels[H_index[0]]), axis=0)
paths_val = np.concatenate((scene_D_paths[D_index[1]], scene_H_paths[H_index[1]]), axis = 0)
x_val = np.concatenate((D_scans[D_index[1]], H_scans[H_index[1]]), axis=0)
y_val = np.concatenate((D_labels[D_index[1]], H_labels[H_index[1]]), axis=0)
# Define data loaders.
train_filenames = tf.constant(paths_train)
train_loader = tf.data.Dataset.from_tensor_slices((#train_filenames,
                                                   x_train, y_train))
validation_filenames = tf.constant(paths_val)
validation_loader = tf.data.Dataset.from_tensor_slices((#validation_filenames,
                                                        x_val, y_val))
# If you increase this value, reduce the learning rate
batch_size = 2
# Augment the data on the fly during training.
train_dataset = (
    train_loader.shuffle(len(x_train))
    .batch(batch_size)
    .prefetch(2)
)
# Only rescale.
validation_dataset = (
    validation_loader.shuffle(len(x_val))
    .batch(batch_size)
    .prefetch(2)
)
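For illustration, a minimal sketch (assuming the commented-out filename tensors above are included) of how each dataset element could carry its file path alongside the volume and the label:
# Sketch: keep the filename as a third component of each dataset element.
train_loader = tf.data.Dataset.from_tensor_slices((train_filenames, x_train, y_train))
train_dataset = (
    train_loader.shuffle(len(x_train))
    .batch(batch_size)
    .prefetch(2)
)
# Filenames stay paired with their volumes through shuffling and batching:
for fnames, volumes, labels in train_dataset.take(1):
    print(fnames.numpy(), volumes.shape, labels.shape)
# For model.fit, the filename component can be dropped again with a map:
fit_dataset = train_dataset.map(lambda fname, vol, label: (vol, label))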
def get_model(width=53, height=63, depth=52):
    """Build a 3D convolutional neural network model."""
    inputs = keras.Input((width, height, depth, 1))
    x = layers.Conv3D(filters=52, kernel_size=3, activation="relu")(inputs)
    x = layers.MaxPool3D(pool_size=2)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Conv3D(filters=52, kernel_size=3, activation="relu")(x)
    x = layers.MaxPool3D(pool_size=2)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Conv3D(filters=104, kernel_size=3, activation="relu")(x)
    x = layers.MaxPool3D(pool_size=2)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Conv3D(filters=208, kernel_size=3, activation="relu")(x)
    x = layers.MaxPool3D(pool_size=2)(x)
    x = layers.BatchNormalization()(x)
    # Tweak the number of units. More will tend to overfit to the training, less will tend to underfit
    # We can increase dropout or reduce the number of parameters
    # Reduce number of epochs during the initial training
    x = layers.GlobalAveragePooling3D()(x)
    x = layers.Dense(units=208, activation="relu")(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(units=4, activation="softmax")(x)
    # Define the model.
    model = keras.Model(inputs, outputs, name="3d_tinn_fmri")
    return model
# Build model.
model = get_model(width=53, height=63, depth=52)
model.summary()
Thanks for your help,
Eduardo

Can neural networks handle redundant inputs?

I have a fully connected neural network with the following number of neurons in each layer [4, 20, 20, 20, ..., 1]. I am using TensorFlow and the 4 real-valued inputs correspond to a particular point in space and time, i.e. (x, y, z, t), and the 1 real-valued output corresponds to the temperature at that point. The loss function is just the mean square error between my predicted temperature and the actual temperature at that point in (x, y, z, t). I have a set of training data points with the following structure for their inputs:
(x,y,z,t):
(0.11,0.12,1.00,0.41)
(0.34,0.43,1.00,0.92)
(0.01,0.25,1.00,0.65)
...
(0.71,0.32,1.00,0.49)
(0.31,0.22,1.00,0.01)
(0.21,0.13,1.00,0.71)
Namely, what you will notice is that the training data all have the same redundant value in z, but x, y, and t are generally not redundant. Yet what I find is my neural network cannot train on this data due to the redundancy. In particular, every time I start training the neural network, it appears to fail and the loss function becomes nan. But, if I change the structure of the neural network such that the number of neurons in each layer is [3, 20, 20, 20, ..., 1], i.e. now data points only correspond to an input of (x, y, t), everything works perfectly and training is all right. But is there any way to overcome this problem? (Note: it occurs whether any of the variables are identical, e.g. either x, y, or t could be redundant and cause this error.) I have also attempted different activation functions (e.g. ReLU) and varying the number of layers and neurons in the network, but these changes do not resolve the problem.
My question: is there any way to still train the neural network while keeping the redundant z as an input? It just so happens the particular training data set I am considering at the moment has all z redundant, but in general, I will have data coming from different z in the future. Therefore, a way to ensure the neural network can robustly handle inputs at the present moment is sought.
A minimal working example is included below. When running this example, the loss output is nan, but if you simply uncomment the alternative x_z (the commented-out line below) to ensure there is variation in x_z, then there is no longer any problem. But this is not a solution, since the goal is to use the original x_z with all constant values.
import numpy as np
import tensorflow as tf
end_it = 10000 #number of iterations
frac_train = 1.0 #randomly sampled fraction of data to create training set
frac_sample_train = 0.1 #randomly sampled fraction of data from training set to train in batches
layers = [4, 20, 20, 20, 20, 20, 20, 20, 20, 1]
len_data = 10000
x_x = np.array([np.linspace(0.,1.,len_data)])
x_y = np.array([np.linspace(0.,1.,len_data)])
x_z = np.array([np.ones(len_data)*1.0])
#x_z = np.array([np.linspace(0.,1.,len_data)])
x_t = np.array([np.linspace(0.,1.,len_data)])
y_true = np.array([np.linspace(-1.,1.,len_data)])
N_train = int(frac_train*len_data)
idx = np.random.choice(len_data, N_train, replace=False)
x_train = x_x.T[idx,:]
y_train = x_y.T[idx,:]
z_train = x_z.T[idx,:]
t_train = x_t.T[idx,:]
v1_train = y_true.T[idx,:]
sample_batch_size = int(frac_sample_train*N_train)
np.random.seed(1234)
tf.set_random_seed(1234)
import logging
logging.getLogger('tensorflow').setLevel(logging.ERROR)
tf.logging.set_verbosity(tf.logging.ERROR)
class NeuralNet:
    def __init__(self, x, y, z, t, v1, layers):
        X = np.concatenate([x, y, z, t], 1)
        self.lb = X.min(0)
        self.ub = X.max(0)
        self.X = X
        self.x = X[:,0:1]
        self.y = X[:,1:2]
        self.z = X[:,2:3]
        self.t = X[:,3:4]
        self.v1 = v1
        self.layers = layers
        self.weights, self.biases = self.initialize_NN(layers)
        self.sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=False,
                                                     log_device_placement=False))
        self.x_tf = tf.placeholder(tf.float32, shape=[None, self.x.shape[1]])
        self.y_tf = tf.placeholder(tf.float32, shape=[None, self.y.shape[1]])
        self.z_tf = tf.placeholder(tf.float32, shape=[None, self.z.shape[1]])
        self.t_tf = tf.placeholder(tf.float32, shape=[None, self.t.shape[1]])
        self.v1_tf = tf.placeholder(tf.float32, shape=[None, self.v1.shape[1]])
        self.v1_pred = self.net(self.x_tf, self.y_tf, self.z_tf, self.t_tf)
        self.loss = tf.reduce_mean(tf.square(self.v1_tf - self.v1_pred))
        self.optimizer = tf.contrib.opt.ScipyOptimizerInterface(self.loss,
                                                                method = 'L-BFGS-B',
                                                                options = {'maxiter': 50,
                                                                           'maxfun': 50000,
                                                                           'maxcor': 50,
                                                                           'maxls': 50,
                                                                           'ftol' : 1.0 * np.finfo(float).eps})
        init = tf.global_variables_initializer()
        self.sess.run(init)

    def initialize_NN(self, layers):
        weights = []
        biases = []
        num_layers = len(layers)
        for l in range(0,num_layers-1):
            W = self.xavier_init(size=[layers[l], layers[l+1]])
            b = tf.Variable(tf.zeros([1,layers[l+1]], dtype=tf.float32), dtype=tf.float32)
            weights.append(W)
            biases.append(b)
        return weights, biases

    def xavier_init(self, size):
        in_dim = size[0]
        out_dim = size[1]
        xavier_stddev = np.sqrt(2/(in_dim + out_dim))
        return tf.Variable(tf.truncated_normal([in_dim, out_dim], stddev=xavier_stddev), dtype=tf.float32)

    def neural_net(self, X, weights, biases):
        num_layers = len(weights) + 1
        H = 2.0*(X - self.lb)/(self.ub - self.lb) - 1.0
        for l in range(0,num_layers-2):
            W = weights[l]
            b = biases[l]
            H = tf.tanh(tf.add(tf.matmul(H, W), b))
        W = weights[-1]
        b = biases[-1]
        Y = tf.add(tf.matmul(H, W), b)
        return Y

    def net(self, x, y, z, t):
        v1_out = self.neural_net(tf.concat([x,y,z,t], 1), self.weights, self.biases)
        v1 = v1_out[:,0:1]
        return v1

    def callback(self, loss):
        global Nfeval
        print(str(Nfeval)+' - Loss in loop: %.3e' % (loss))
        Nfeval += 1

    def fetch_minibatch(self, x_in, y_in, z_in, t_in, den_in, N_train_sample):
        idx_batch = np.random.choice(len(x_in), N_train_sample, replace=False)
        x_batch = x_in[idx_batch,:]
        y_batch = y_in[idx_batch,:]
        z_batch = z_in[idx_batch,:]
        t_batch = t_in[idx_batch,:]
        v1_batch = den_in[idx_batch,:]
        return x_batch, y_batch, z_batch, t_batch, v1_batch

    def train(self, end_it):
        it = 0
        while it < end_it:
            x_res_batch, y_res_batch, z_res_batch, t_res_batch, v1_res_batch = self.fetch_minibatch(self.x, self.y, self.z, self.t, self.v1, sample_batch_size) # Fetch residual mini-batch
            tf_dict = {self.x_tf: x_res_batch, self.y_tf: y_res_batch, self.z_tf: z_res_batch, self.t_tf: t_res_batch,
                       self.v1_tf: v1_res_batch}
            self.optimizer.minimize(self.sess,
                                    feed_dict = tf_dict,
                                    fetches = [self.loss],
                                    loss_callback = self.callback)

    def predict(self, x_star, y_star, z_star, t_star):
        tf_dict = {self.x_tf: x_star, self.y_tf: y_star, self.z_tf: z_star, self.t_tf: t_star}
        v1_star = self.sess.run(self.v1_pred, tf_dict)
        return v1_star
model = NeuralNet(x_train, y_train, z_train, t_train, v1_train, layers)
Nfeval = 1
model.train(end_it)
I think your problem is in this line:
H = 2.0*(X - self.lb)/(self.ub - self.lb) - 1.0
In the third column of X, corresponding to the z variable, both self.lb and self.ub are the same value, equal to the constant z value in the example (in this case 1), so it is actually computing:
2.0*(1.0 - 1.0)/(1.0 - 1.0) - 1.0 = 2.0*0.0/0.0 - 1.0
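As a small, self-contained NumPy illustration of that 0/0 (using the constants from the example data):
import numpy as np

lb = np.array([0.0, 0.0, 1.0, 0.0])  # column minima of X; the z column is constant
ub = np.array([1.0, 1.0, 1.0, 1.0])  # column maxima of X
X = np.array([[0.5, 0.5, 1.0, 0.5]])
H = 2.0*(X - lb)/(ub - lb) - 1.0     # 0/0 in the z column
print(H)                             # [[ 0.  0. nan  0.]] plus a RuntimeWarning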
That expression is nan. You can work around the issue in a few different ways; a simple option is:
# Avoids dividing by zero
X_d = tf.math.maximum(self.ub - self.lb, 1e-6)
H = 2.0*(X - self.lb)/X_d - 1.0
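With that guard in place, the constant z column simply normalizes to the constant -1.0 instead of producing nan, and the other columns are unchanged (since their ub - lb is well above 1e-6).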
This is an interesting situation. A quick check on an online tool for regression shows that even simple regression suffers from the problem of being unable to fit data points when one of the inputs is constant throughout the dataset. Taking a look at the algebraic solution for a two-variable linear regression problem shows that the solution involves division by the standard deviation, which, being zero for a constant column, is a problem.
As far as solving through backprop is concerned (as is the case in your neural network), I strongly suspect that the derivative of the loss with respect to the input (these expressions) is the culprit, and that the algorithm is not able to update the weights W using W := W - α·dZ, so they end up remaining constant.

Simple custom layer in keras, tensorflow confusion

First before I explain, here's the relevant code snippet:
input = Input(shape=(784, ))
hidden1 = Dense(784, activation='relu')(input)
hidden2 = Dense(784, activation='relu')(hidden1)
hidden3 = Dense(1568, activation='relu')(hidden2)
hidden4 = Lambda(lambda x: makeComplex(x))(hidden3)
hidden5 = Reshape((1, 28, 28))(hidden4)
hidden6 = Lambda(lambda x: ifft2(x))(hidden5)
hidden7 = Flatten()(hidden6)
output = Dense(train_targets.shape[1], activation='linear')(hidden7)
model = Model(inputs=input, outputs=output)
print(model.summary())
where ifft2(x) is
def ifft2(x):
    import tensorflow as tf
    return tf.cast(tf.spectral.ifft2d(tf.cast(x,dtype=tf.complex64)),tf.float32)
My goal now is to implement the makeComplex method.
Basically, it gets a vector of size 1568, and I want it to return a vector of size 784 in the following very simple fashion:
new[k] = old[k] + old[k + 1] * i where i is the imaginary unit
Here's my attempt:
def makeComplex(x):
    y = np.zeros((1, 784))
    for i in range(784):
        y[i] = np.complex(x[i], x[i + 1])
    return y
Of course this doesn't work, because x is not actually a vector but rather a TensorFlow tensor, something I know nothing about. How can I make this work?
For an example tensor [1.,2.,3.,4.], what you want is [1.+2.j,3.+4.j]. I think you can use tf.gather to get two tensors [1.,3.] (the real parts) and [2.,4.] (the imaginary parts), then use tf.complex to get the answer.
import tensorflow as tf
import numpy as np
a = tf.constant([1.,2.,3.,4.])
real = tf.gather(a,np.arange(0,a.get_shape().as_list()[0],2))
imag = tf.gather(a,np.arange(1,a.get_shape().as_list()[0],2))
res = tf.complex(real, imag)
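Along the same lines, a possible makeComplex for the Lambda layer could look like this (a sketch, assuming x has shape (batch, 1568) and that consecutive pairs of entries form one complex number; not tested inside the full model):
def makeComplex(x):
    real = x[:, 0::2]  # even-indexed entries -> real parts, shape (batch, 784)
    imag = x[:, 1::2]  # odd-indexed entries -> imaginary parts, shape (batch, 784)
    return tf.complex(real, imag)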

TensorFlow code not giving intended results

The following code has the irritating trait of making every row of "out" the same. I am trying to classify k time series in Xtrain as [1,0,0,0], [0,1,0,0], [0,0,1,0], or [0,0,0,1], according to the way they were generated (by one of four random algorithms). Anyone know why? Thanks!
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import copy
n = 100
m = 10
k = 1000
hidden_layers = 50
learning_rate = .01
training_epochs = 10000
Xtrain = []
Xtest = []
Ytrain = []
Ytest = []
# ... fill variables with data ..
x = tf.placeholder(tf.float64,shape = (k,1,n,1))
y = tf.placeholder(tf.float64,shape = (k,1,4))
conv1_weights = 0.1*tf.Variable(tf.truncated_normal([1,m,1,hidden_layers],dtype = tf.float64))
conv1_biases = tf.Variable(tf.zeros([hidden_layers],tf.float64))
conv = tf.nn.conv2d(x,conv1_weights,strides = [1,1,1,1],padding = 'VALID')
sigmoid1 = tf.nn.sigmoid(conv + conv1_biases)
s = sigmoid1.get_shape()
sigmoid1_reshape = tf.reshape(sigmoid1,(s[0],s[1]*s[2]*s[3]))
sigmoid2 = tf.nn.sigmoid(tf.layers.dense(sigmoid1_reshape,hidden_layers))
sigmoid3 = tf.nn.sigmoid(tf.layers.dense(sigmoid2,4))
penalty = tf.reduce_sum((sigmoid3 - y)**2)
train_op = tf.train.AdamOptimizer(learning_rate).minimize(penalty)
model = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(model)
    for i in range(0,training_epochs):
        sess.run(train_op,{x: Xtrain,y: Ytrain})
    out = sigmoid3.eval(feed_dict = {x: Xtest})
Likely because your loss function is mean squared error. If you're doing classification, you should be using a cross-entropy loss.
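A minimal sketch of what that could look like here, in the same TF1 style as the question (assuming the one-hot labels in y are reshaped to [k, 4], and replacing the final sigmoid with raw logits):
logits = tf.layers.dense(sigmoid2, 4)  # no activation: raw logits
labels = tf.reshape(y, (k, 4))         # drop the extra middle dimension of the y placeholder
penalty = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=logits))
train_op = tf.train.AdamOptimizer(learning_rate).minimize(penalty)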
Your loss is penalty = tf.reduce_sum((sigmoid3 - y)**2), that is, the element-wise squared difference between a batch of predictions and a batch of target values, summed over everything.
Your network output (sigmoid3) is a tensor with shape [?, 4] and y (I guess) is a tensor with shape [?, 4] too.
The squared difference has thus shape [?, 4].
This means that tf.reduce_sum is computing, in order:
1. the sum over the second dimension of the squared difference, producing a tensor with shape [?];
2. the sum over the first dimension (the batch size, here indicated with ?), producing a scalar value (shape ()) that is your loss value.
Probably you don't want this behavior (the sum over the batch dimension) and you're looking for the mean squared error over the batch:
penalty = tf.reduce_mean(tf.squared_difference(sigmoid3, y))
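To make the difference concrete, a toy example (made-up numbers):
import tensorflow as tf

diff = tf.constant([[1., 1., 1., 1.],
                    [3., 3., 3., 3.]])          # (batch=2, classes=4)
sum_penalty = tf.reduce_sum(tf.square(diff))    # 40.0: grows with the batch size
mean_penalty = tf.reduce_mean(tf.square(diff))  # 5.0: an average per element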