Orthogonal weights in a network with Keras / TensorFlow

I want to initialise orthogonal weights with Keras / TensorFlow 1.14.
So I am modifying the following code (which works fine):
def __dictNN(self, x):
    # Parameters
    dim = self.__dim
    hdim = self.__hdim
    ddim = self.__ddim
    kmatdim = ddim + 1 + dim
    num_layers = self.__num_layers
    std = 1.0 / np.sqrt(hdim)
    std_proj = 1.0 / np.sqrt(dim)
    with tf.variable_scope("Input_projection",
                           initializer=tf.random_uniform_initializer(
                               maxval=std_proj, minval=-std_proj)):
        P = tf.get_variable(name='weights',
                            shape=(dim,hdim),
                            dtype=tf.float64)
        res_in = tf.matmul(x, P)
    with tf.variable_scope("Residual"):
        for j in range(self.__num_layers):
            layer_name = "Layer_" + str(j)
            with tf.variable_scope(layer_name):
                W = tf.get_variable(name="weights", shape=(hdim,hdim),
                                    dtype=tf.float64)
                b = tf.get_variable(name="biases", shape=(hdim),
                                    dtype=tf.float64)
                if j == 0:  # first layer
                    res_out = res_in + self.__tf_nlr(
                        tf.matmul(res_in, W) + b)
                else:  # subsequent layers
                    res_out = res_out + self.__tf_nlr(
                        tf.matmul(res_out, W) + b)
    with tf.variable_scope("Output_projection",
                           initializer=tf.random_uniform_initializer(
                               maxval=std, minval=-std)):
        W = tf.get_variable(name="weights", shape=(hdim, ddim),
                            dtype=tf.float64)
        b = tf.get_variable(name="biases", shape=(ddim),
                            dtype=tf.float64)
        out = tf.matmul(res_out, W) + b
    return out
So I am replacing initializer=tf.random_uniform_initializer() with initializer=tf.orthogonal_initializer(), but it does not seem to work; the error that I get is:
ValueError: The tensor to initialize must be at least two-dimensional
I hope that you can help me.

According to the TensorFlow docs, the orthogonal initializer normalizes a 2-D tensor to be orthogonal: if the tensor is not square, whichever of the rows or columns is fewer will be orthogonal, and when the rank is greater than 2 the tensor is reshaped to two dimensions. However, 1-D tensors like b are not handled, which is exactly what raises the ValueError above.
The trick: change your biases to 2D tensors:
b = tf.get_variable(name="biases", shape=(1, ddim),
                    dtype=tf.float64)
It should "orthogonalize" the row (which here just means normalizing it, I guess). Note that I have not tested this, and it may raise an error because the initializer expects more than one row.
Or, more simply, give the biases their own initializer.
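For example, a minimal sketch (untested, assuming TF 1.14 as in the question): keep the orthogonal initializer on the scope so the 2-D weight matrices use it, and pass an explicit initializer to each 1-D bias so tf.orthogonal_initializer never sees it:
with tf.variable_scope("Output_projection",
                       initializer=tf.orthogonal_initializer()):
    W = tf.get_variable(name="weights", shape=(hdim, ddim),
                        dtype=tf.float64)
    # The per-variable initializer overrides the scope default,
    # so the 1-D bias never reaches the orthogonal initializer.
    b = tf.get_variable(name="biases", shape=(ddim,),
                        dtype=tf.float64,
                        initializer=tf.zeros_initializer())
    out = tf.matmul(res_out, W) + b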

Related

Can neural networks handle redundant inputs?

I have a fully connected neural network with the following number of neurons in each layer [4, 20, 20, 20, ..., 1]. I am using TensorFlow and the 4 real-valued inputs correspond to a particular point in space and time, i.e. (x, y, z, t), and the 1 real-valued output corresponds to the temperature at that point. The loss function is just the mean square error between my predicted temperature and the actual temperature at that point in (x, y, z, t). I have a set of training data points with the following structure for their inputs:
(x,y,z,t):
(0.11,0.12,1.00,0.41)
(0.34,0.43,1.00,0.92)
(0.01,0.25,1.00,0.65)
...
(0.71,0.32,1.00,0.49)
(0.31,0.22,1.00,0.01)
(0.21,0.13,1.00,0.71)
Namely, what you will notice is that the training data all have the same redundant value in z, but x, y, and t are generally not redundant. Yet what I find is my neural network cannot train on this data due to the redundancy. In particular, every time I start training the neural network, it appears to fail and the loss function becomes nan. But, if I change the structure of the neural network such that the number of neurons in each layer is [3, 20, 20, 20, ..., 1], i.e. now data points only correspond to an input of (x, y, t), everything works perfectly and training is all right. But is there any way to overcome this problem? (Note: it occurs whether any of the variables are identical, e.g. either x, y, or t could be redundant and cause this error.) I have also attempted different activation functions (e.g. ReLU) and varying the number of layers and neurons in the network, but these changes do not resolve the problem.
My question: is there any way to still train the neural network while keeping the redundant z as an input? It just so happens the particular training data set I am considering at the moment has all z redundant, but in general, I will have data coming from different z in the future. Therefore, a way to ensure the neural network can robustly handle inputs at the present moment is sought.
A minimal working example is included below. When running this example, the loss output is nan, but if you simply uncomment the alternative (commented-out) definition of x_z so that there is variation in x_z, then there is no longer any problem. But this is not a solution, since the goal is to use the original x_z with all constant values.
import numpy as np
import tensorflow as tf
end_it = 10000 #number of iterations
frac_train = 1.0 #randomly sampled fraction of data to create training set
frac_sample_train = 0.1 #randomly sampled fraction of data from training set to train in batches
layers = [4, 20, 20, 20, 20, 20, 20, 20, 20, 1]
len_data = 10000
x_x = np.array([np.linspace(0.,1.,len_data)])
x_y = np.array([np.linspace(0.,1.,len_data)])
x_z = np.array([np.ones(len_data)*1.0])
#x_z = np.array([np.linspace(0.,1.,len_data)])
x_t = np.array([np.linspace(0.,1.,len_data)])
y_true = np.array([np.linspace(-1.,1.,len_data)])
N_train = int(frac_train*len_data)
idx = np.random.choice(len_data, N_train, replace=False)
x_train = x_x.T[idx,:]
y_train = x_y.T[idx,:]
z_train = x_z.T[idx,:]
t_train = x_t.T[idx,:]
v1_train = y_true.T[idx,:]
sample_batch_size = int(frac_sample_train*N_train)
np.random.seed(1234)
tf.set_random_seed(1234)
import logging
logging.getLogger('tensorflow').setLevel(logging.ERROR)
tf.logging.set_verbosity(tf.logging.ERROR)
class NeuralNet:
    def __init__(self, x, y, z, t, v1, layers):
        X = np.concatenate([x, y, z, t], 1)
        self.lb = X.min(0)
        self.ub = X.max(0)
        self.X = X
        self.x = X[:,0:1]
        self.y = X[:,1:2]
        self.z = X[:,2:3]
        self.t = X[:,3:4]
        self.v1 = v1
        self.layers = layers
        self.weights, self.biases = self.initialize_NN(layers)
        self.sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=False,
                                                     log_device_placement=False))
        self.x_tf = tf.placeholder(tf.float32, shape=[None, self.x.shape[1]])
        self.y_tf = tf.placeholder(tf.float32, shape=[None, self.y.shape[1]])
        self.z_tf = tf.placeholder(tf.float32, shape=[None, self.z.shape[1]])
        self.t_tf = tf.placeholder(tf.float32, shape=[None, self.t.shape[1]])
        self.v1_tf = tf.placeholder(tf.float32, shape=[None, self.v1.shape[1]])
        self.v1_pred = self.net(self.x_tf, self.y_tf, self.z_tf, self.t_tf)
        self.loss = tf.reduce_mean(tf.square(self.v1_tf - self.v1_pred))
        self.optimizer = tf.contrib.opt.ScipyOptimizerInterface(self.loss,
                                                                method = 'L-BFGS-B',
                                                                options = {'maxiter': 50,
                                                                           'maxfun': 50000,
                                                                           'maxcor': 50,
                                                                           'maxls': 50,
                                                                           'ftol' : 1.0 * np.finfo(float).eps})
        init = tf.global_variables_initializer()
        self.sess.run(init)

    def initialize_NN(self, layers):
        weights = []
        biases = []
        num_layers = len(layers)
        for l in range(0,num_layers-1):
            W = self.xavier_init(size=[layers[l], layers[l+1]])
            b = tf.Variable(tf.zeros([1,layers[l+1]], dtype=tf.float32), dtype=tf.float32)
            weights.append(W)
            biases.append(b)
        return weights, biases

    def xavier_init(self, size):
        in_dim = size[0]
        out_dim = size[1]
        xavier_stddev = np.sqrt(2/(in_dim + out_dim))
        return tf.Variable(tf.truncated_normal([in_dim, out_dim], stddev=xavier_stddev), dtype=tf.float32)

    def neural_net(self, X, weights, biases):
        num_layers = len(weights) + 1
        H = 2.0*(X - self.lb)/(self.ub - self.lb) - 1.0
        for l in range(0,num_layers-2):
            W = weights[l]
            b = biases[l]
            H = tf.tanh(tf.add(tf.matmul(H, W), b))
        W = weights[-1]
        b = biases[-1]
        Y = tf.add(tf.matmul(H, W), b)
        return Y

    def net(self, x, y, z, t):
        v1_out = self.neural_net(tf.concat([x,y,z,t], 1), self.weights, self.biases)
        v1 = v1_out[:,0:1]
        return v1

    def callback(self, loss):
        global Nfeval
        print(str(Nfeval)+' - Loss in loop: %.3e' % (loss))
        Nfeval += 1

    def fetch_minibatch(self, x_in, y_in, z_in, t_in, den_in, N_train_sample):
        idx_batch = np.random.choice(len(x_in), N_train_sample, replace=False)
        x_batch = x_in[idx_batch,:]
        y_batch = y_in[idx_batch,:]
        z_batch = z_in[idx_batch,:]
        t_batch = t_in[idx_batch,:]
        v1_batch = den_in[idx_batch,:]
        return x_batch, y_batch, z_batch, t_batch, v1_batch

    def train(self, end_it):
        it = 0
        while it < end_it:
            x_res_batch, y_res_batch, z_res_batch, t_res_batch, v1_res_batch = self.fetch_minibatch(self.x, self.y, self.z, self.t, self.v1, sample_batch_size) # Fetch residual mini-batch
            tf_dict = {self.x_tf: x_res_batch, self.y_tf: y_res_batch, self.z_tf: z_res_batch, self.t_tf: t_res_batch,
                       self.v1_tf: v1_res_batch}
            self.optimizer.minimize(self.sess,
                                    feed_dict = tf_dict,
                                    fetches = [self.loss],
                                    loss_callback = self.callback)

    def predict(self, x_star, y_star, z_star, t_star):
        tf_dict = {self.x_tf: x_star, self.y_tf: y_star, self.z_tf: z_star, self.t_tf: t_star}
        v1_star = self.sess.run(self.v1_pred, tf_dict)
        return v1_star
model = NeuralNet(x_train, y_train, z_train, t_train, v1_train, layers)
Nfeval = 1
model.train(end_it)
I think your problem is in this line:
H = 2.0*(X - self.lb)/(self.ub - self.lb) - 1.0
In the third column of X, corresponding to the z variable, both self.lb and self.ub are the same value, equal to the constant value in the example (here 1), so it is actually computing:
2.0*(1.0 - 1.0)/(1.0 - 1.0) - 1.0 = 2.0*0.0/0.0 - 1.0
Which is nan. You can work around the issue in a few different ways; a simple option is:
# Avoids dividing by zero
X_d = tf.math.maximum(self.ub - self.lb, 1e-6)
H = 2.0*(X - self.lb)/X_d - 1.0
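As a quick sanity check, a small NumPy illustration (my own; it assumes NumPy arithmetic mirrors what the graph computes) reproduces the nan for the constant column and shows the guard fixing it:
import numpy as np
lb = np.array([0.0, 1.0])              # second column is constant, so lb == ub there
ub = np.array([1.0, 1.0])
X = np.array([[0.5, 1.0]])
print(2.0*(X - lb)/(ub - lb) - 1.0)    # -> [[ 0. nan]] (plus a NumPy warning)
X_d = np.maximum(ub - lb, 1e-6)        # the proposed guard
print(2.0*(X - lb)/X_d - 1.0)          # -> [[ 0. -1.]]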
This is an interesting situation. A quick check with an online regression tool shows that even simple regression is unable to fit the data when one of the inputs is constant across the dataset. Looking at the algebraic solution of a two-variable linear regression problem, the solution involves division by the standard deviation of the inputs, which is a problem when a column is constant and its standard deviation is zero.
As far as solving through backprop is concerned (as is the case in your neural network), I strongly suspect that the derivative of the loss with respect to the input is the culprit, and that the algorithm is not able to update the weights W via W := W - α·dW, so they end up remaining constant.
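To make that concrete, here is the textbook slope of a one-predictor least-squares fit (a standard formula, shown only for illustration): the denominator is the variance of x, which is zero when x is constant.
import numpy as np

def ols_slope(x, y):
    # beta_1 = sum((x_i - mean(x)) * (y_i - mean(y))) / sum((x_i - mean(x))**2)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)

print(ols_slope([0.1, 0.5, 0.9], [0.2, 0.5, 0.9]))   # well-defined slope
print(ols_slope([1.0, 1.0, 1.0], [0.2, 0.5, 0.9]))   # nan: constant x gives 0/0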

Simple LSTM implementation in TensorFlow: "Consider casting elements to a supported type" error

I'm trying to implement a simple LSTM cell in TensorFlow to compare its performance with another one I implemented previously.
x = tf.placeholder(tf.float32,[BATCH_SIZE,SEQ_LENGTH,FEATURE_SIZE])
y = tf.placeholder(tf.float32,[BATCH_SIZE,SEQ_LENGTH,FEATURE_SIZE])
weights = { 'out': tf.Variable(tf.random_normal([FEATURE_SIZE, 8 * FEATURE_SIZE, NUM_LAYERS]))}
biases = { 'out': tf.Variable(tf.random_normal([4 * FEATURE_SIZE, NUM_LAYERS]))}
def RNN(x, weights, biases):
    x = tf.unstack(x, SEQ_LENGTH, 1)
    lstm_cell = tf.keras.layers.LSTMCell(NUM_LAYERS)
    outputs = tf.keras.layers.RNN(lstm_cell, x, dtype=tf.float32)
    return outputs
pred = RNN(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
I used an example I found on GitHub and tried to change it to get the behavior I want, but I got this error message:
TypeError: Failed to convert object of type <class 'tensorflow.python.keras.layers.recurrent.RNN'> to Tensor. Contents: <tensorflow.python.keras.layers.recurrent.RNN object at 0x7fe437248710>. Consider casting elements to a supported type.
Try
outputs = tf.keras.layers.RNN(lstm_cell, dtype=tf.float32)(x)
instead
Here are the examples from TF documentation:
# Let's use this cell in a RNN layer:
cell = MinimalRNNCell(32)
x = keras.Input((None, 5))
layer = RNN(cell)
y = layer(x)
# Here's how to use the cell to build a stacked RNN:
cells = [MinimalRNNCell(32), MinimalRNNCell(64)]
x = keras.Input((None, 5))
layer = RNN(cells)
y = layer(x)
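Applied to the original RNN() helper, a minimal sketch (untested, TF 1.x with tf.keras) could look like this; return_sequences=True is my assumption so the per-timestep outputs match y's [BATCH_SIZE, SEQ_LENGTH, ...] shape:
def RNN(x, weights, biases):
    # weights/biases are unused here, as in the original snippet; the cell
    # creates its own parameters. NUM_LAYERS is used as the number of units.
    lstm_cell = tf.keras.layers.LSTMCell(NUM_LAYERS)
    rnn_layer = tf.keras.layers.RNN(lstm_cell, return_sequences=True, dtype=tf.float32)
    return rnn_layer(x)   # keep x as a 3-D tensor; no tf.unstack needed

pred = RNN(x, weights, biases)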

How to create a linear regression model for multi-dimensional data in TensorFlow?

I have read a getting started guide for TensorFlow; here's a link.
The guide uses a one-dimensional example for the model y = W*x + b.
After that I tried to create a two-dimensional x. Following is my code:
import tensorflow as tf
import numpy as np
import random as rd
rd.seed(2)
#model is 2*x1 + x2 - 3 = y
def create_data_train():
    x_train = np.asarray([[2,3],[6,7],[1,5],[4,6],[10,-1],[0,0],[5,6],
                          [8,9],[4.5,6.2],[1,1],[0.3,0.2]])
    w_train = np.asarray([[2,1]])
    b = np.asarray([[-3]])
    y_train = np.dot(x_train, w_train.T) + b
    for i in range(y_train.shape[0]):
        for j in range(y_train.shape[1]):
            y_train[i][j] += 1-rd.randint(0,2)
    return x_train,y_train
# step 1
x = tf.placeholder(tf.float32, [None, 2])
W = tf.Variable(tf.zeros([2, 1]))
b = tf.Variable(tf.zeros([1]))
y = tf.matmul(x, W) + b
y_ = tf.placeholder(tf.float32, [None, 1])
# step 2
loss = tf.reduce_sum(tf.pow(tf.subtract(y,y_),2))
optimizer = tf.train.GradientDescentOptimizer(0.05)
train = optimizer.minimize(loss)
#step 3
x1,y1 = create_data_train()
x_train = tf.convert_to_tensor(x1)
y_train = tf.convert_to_tensor(y1)
print(x_train)
print(y_train)
init = tf.global_variables_initializer()
sess = tf.Session()
print(sess.run(x_train))
sess.run(init)
for i in range(1000):
    sess.run(train,feed_dict={x:x_train,y_:y_train})
endW ,endb = sess.run([W,b])
print(endW)
print(endb)
But when I run it, I encounter this error:
TypeError: The value of a feed cannot be a tf.Tensor object. Acceptable feed values include Python scalars, strings, lists, numpy ndarrays, or TensorHandles.
The error here is because you cannot feed a tf.Tensor through feed_dict.
Since you have the following two lines,
x_train = tf.convert_to_tensor(x1)
y_train = tf.convert_to_tensor(y1)
you are converting x1 and y1 (i.e. your inputs and outputs) to tensors. Feed x1 and y1 directly, without converting them to tensors.
Hope this helps.
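In other words, a minimal sketch of the corrected end of the script (untested), feeding the NumPy arrays directly:
x1, y1 = create_data_train()
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for i in range(1000):
    # Feed the NumPy arrays directly; no tf.convert_to_tensor needed.
    sess.run(train, feed_dict={x: x1, y_: y1})
endW, endb = sess.run([W, b])
print(endW)
print(endb)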

Multiplying a rank 3 tensor with a rank 2 tensor in TensorFlow

Given a rank 3 tensor:
sentence_max_length = 5
batch_size = 3
n_hidden = 10
n_classes = 2
x = tf.constant(np.reshape(np.arange(150),(batch_size,sentence_max_length, n_hidden)), dtype = tf.float32)
And a rank 2 tensor:
W = tf.constant(np.reshape(np.arange(20), (n_hidden, n_classes)), dtype = tf.float32)
And a rank 1 bias tensor:
b = tf.constant(np.reshape(np.arange(n_classes), (n_classes,)), dtype = tf.float32)
I was wondering how one would multiply the last two axes of x by W such that the resulting tensor Z would be of shape (batch_size, sentence_max_length, n_classes). (batch_size would not be known during graph creation; I've just given it a value here for demonstration purposes.)
So to clarify:
Z[0] = tf.matmul(x[0,:,:], W) + b
So that W and b are shared across all the batches. The reason for this is that I am trying to use the output of tf.dynamic_rnn, whose output is of shape (batch_size, sentence_max_length, n_hidden), and build another layer on top of that output which has shared weights W and b.
One approach could be ...
import tensorflow as tf
import numpy as np
from tensorflow.python.layers.core import Dense
sentence_max_length = 5
batch_size = 3
n_hidden = 10
n_classes = 2
x = tf.constant(np.reshape(np.arange(150),(batch_size,sentence_max_length, n_hidden)), dtype = tf.float32)
linear_layer = Dense(n_classes, use_bias=True) #required projection value
z = linear_layer(x)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    res = sess.run(z)

res.shape   # (3, 5, 2)
Internally, the Dense layer creates trainable W and b variables, and it uses the standard_ops.tensordot operation to project the last dimension onto n_classes. For further details, refer to the source code here.
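For reference, an equivalent explicit formulation (my own sketch, mirroring what Dense does internally) contracts the last axis of x with the first axis of W via tf.tensordot and adds a shared rank-1 bias of length n_classes:
# Z has shape (batch_size, sentence_max_length, n_classes); W and b are shared
# across the batch and time dimensions.
Z = tf.tensordot(x, W, axes=[[2], [0]]) + b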

Why is the gradient of tf.sign() not equal to 0?

I expected the gradient for tf.sign() in TensorFlow to be equal to 0 or None. However, when I examined the gradients, I found that they were equal to very small numbers (e.g. 1.86264515e-09). Why is that?
(If you are curious as to why I even want to know this, it is because I want to implement the "straight-through estimator" described here, and before overriding the gradient for tf.sign(), I wanted to check that the default behavior was in fact what I was expecting.)
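For context, one common TF 1.x pattern for such an override, once the default behavior is confirmed, is gradient_override_map. This is only an illustrative sketch; "SignSTE" is a made-up name for the registered gradient:
@tf.RegisterGradient("SignSTE")
def _sign_ste_grad(op, grad):
    # Straight-through: pass the incoming gradient through unchanged.
    return grad

def sign_st(x):
    g = tf.get_default_graph()
    with g.gradient_override_map({"Sign": "SignSTE"}):
        return tf.sign(x)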
EDIT: Here is some code which reproduces the error. The model is just the linear regression model from the introduction to TensorFlow, except that I use y=sign(W)x + b instead of y=Wx + b.
import tensorflow as tf
import numpy as np
def gvdebug(g, v):
    g2 = tf.zeros_like(g, dtype=tf.float32)
    v2 = tf.zeros_like(v, dtype=tf.float32)
    g2 = g
    v2 = v
    return g2,v2
# Create 100 phony x, y data points in NumPy, y = x * 0.1 + 0.3
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3
# Try to find values for W and b that compute y_data = W * x_data + b
# (We know that W should be 0.1 and b 0.3, but TensorFlow will
# figure that out for us.)
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y = tf.sign(W) * x_data + b
# Minimize the mean squared errors.
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
grads_and_vars = optimizer.compute_gradients(loss)
gv2 = [gvdebug(gv[0], gv[1]) for gv in grads_and_vars]
apply_grads = optimizer.apply_gradients(gv2)
# Before starting, initialize the variables. We will 'run' this first.
init = tf.initialize_all_variables()
# Launch the graph.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.01)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
sess.run(init)
# Fit the line.
for step in range(201):
    sess.run(apply_grads)
    if (step % 20 == 0) or ((step-1) % 20 == 0):
        print("")
        print(sess.run(gv2[0][1])) #the variable
        print(sess.run(gv2[0][0])) #the gradient
        print("")
    print(step, sess.run(W), sess.run(b))
# Learns best fit is W: [0.1], b: [0.3]