TensorFlow - TypeError: 'Tensor' object does not support item assignment

When I try to run the following code:
import random

def pixel_drop(image, drop_rate=0.5):
    img_h, img_w, _ = image.shape
    pixel_count = img_h * img_w
    drop_num = pixel_count * drop_rate
    for drop in range(int(drop_num)):
        rand_x = random.randint(0, img_h - 1)
        rand_y = random.randint(0, img_w - 1)
        image[rand_x, rand_y, :] = 0
    return image
I seem to get the following error:
TypeError: 'Tensor' object does not support item assignment
It looks like I can't assign things to a tensor. How should I go about implementing this?

A plain Tensor does not support item assignment; a tf.Variable does, through its assign method. This notebook has the details about how to assign values to different variables and constants.
The example below assigns zeros of the appropriate shape to the variable, but you may have a different type of variable in your code.
import tensorflow as tf
import numpy as np

def pixel_drop(image, drop_rate=0.5):
    img_h, img_w, _ = image.shape
    pixel_count = img_h * img_w
    drop_num = pixel_count * drop_rate
    for drop in range(int(drop_num)):
        rand_x = np.random.randint(0, img_h - 1)
        rand_y = np.random.randint(0, img_w - 1)
        image[rand_x, rand_y, :].assign(tf.zeros(shape=(3,)))
    return image

img_data = tf.Variable(tf.random.uniform((100, 100, 3)))
print(pixel_drop(img_data))
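For what it's worth, a vectorized sketch (my own alternative, not part of the answer above) avoids item assignment entirely by multiplying with a random keep/drop mask. Unlike the loop, it drops each pixel independently with probability drop_rate rather than exactly drop_num pixels, and the helper name is illustrative:
import tensorflow as tf

def pixel_drop_masked(image, drop_rate=0.5):
    # Illustrative helper (name not from the original post).
    img_h, img_w, _ = image.shape
    # 1 where a pixel is kept, 0 where it is dropped; broadcasts over the channels.
    keep = tf.cast(tf.random.uniform((img_h, img_w, 1)) >= drop_rate, image.dtype)
    return image * keep

img_data = tf.random.uniform((100, 100, 3))
print(pixel_drop_masked(img_data))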

Related

How to implement a moving max (and min) calculation in a custom tf2.keras layer

During the training procedure, I want to calculate the moving maximum (and minimum) values of a batch of feature maps and then implement a quantization algorithm based on those moving max (or min) values. For example: moving_max = (1 - momentum) * (previous moving_max) + momentum * (current max value of a batch).
I implemented the following code as a custom tf2.keras layer:
import tensorflow as tf
from tensorflow.keras.layers import Layer

class QATQuantizerLayer(Layer):
    def __init__(self, num_bits, momentum=0.01, **kwargs):
        super(QATQuantizerLayer, self).__init__(**kwargs)
        self.num_bits = num_bits
        self.momentum = momentum
        self.num_flag = 0
        self.quant_min_val = 0
        self.quant_max_val = (1 << self.num_bits) - 1
        self.quant_range = float(self.quant_max_val - self.quant_min_val)

    def build(self, input_shape):
        self.moving_min = self.add_weight("moving_min", shape=(1,), initializer=tf.constant_initializer(-6), trainable=False)
        self.moving_max = self.add_weight("moving_max", shape=(1,), initializer=tf.constant_initializer(6), trainable=False)
        return super(QATQuantizerLayer, self).build(input_shape)

    def call(self, inputs, training, **kwargs):
        if training is None:
            training = False
        if training == True:
            batch_min = tf.reduce_min(inputs)
            batch_max = tf.reduce_max(inputs)
            if self.num_flag == 0:
                self.num_flag += 1
                self.moving_min = batch_min
                self.moving_max = batch_max
            else:
                temp_min = (1 - self.momentum) * self.moving_min + self.momentum * batch_min
                temp_max = (1 - self.momentum) * self.moving_max + self.momentum * batch_max
                self.moving_min = temp_min
                self.moving_max = temp_max
        float_range = self.moving_max - self.moving_min
        scale = float_range / self.quant_range
        scale = tf.maximum(scale, tf.keras.backend.epsilon())
        zero_point = tf.math.round(self.moving_min / scale)
        output = (tf.clip_by_value(_round_imp(inputs / scale) - zero_point,
                                   self.quant_min_val, self.quant_max_val) + zero_point) * scale
        return output
However, when I start to train I get the following errors:
TypeError: An op outside of the function building code is being passed a "Graph" tensor. It is possible to have Graph tensors leak out of the function building context by including a tf.init_scope in your function building code. For example, the following function will fail:......
If I change the statement [temp_min = (1 - self.momentum) * self.moving_min + self.momentum * batch_min] to [temp_min = (1 - self.momentum) + self.momentum * batch_min], the error disappears. (That is, removing self.moving_min from the statement makes it go away.)
How can I solve this problem?
Thank you very much.
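One commonly suggested direction for this kind of error, offered here only as a hedged sketch and not a confirmed fix, is to update the variables created in build in place with assign instead of rebinding the Python attributes to new graph tensors inside call:
# Hedged sketch: keep moving_min / moving_max as the tf.Variables created in
# build and update them in place, instead of rebinding the attributes to the
# tensors produced inside call.
temp_min = (1 - self.momentum) * self.moving_min + self.momentum * batch_min
temp_max = (1 - self.momentum) * self.moving_max + self.momentum * batch_max
self.moving_min.assign(temp_min)
self.moving_max.assign(temp_max)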

Deep neural-network with backpropagation implementation does not work

I want to implement a multilayer NN with backpropagation. I have been trying for days, but it simply does not work. It is extremely clear in my head how it is supposed to work, and I have streamlined my code to be as simple as possible, but I still can't get it right. It's probably something stupid, but I cannot see it.
The implementation I have done has an input layer of 784 (28x28), two (L) hidden layers of 300 and an output of 10 classes. I have a bias in every layer (except the last...).
The output activation is softmax and the hidden activation is ReLU.
I use mini-batches of 600 examples over a dataset of 60k examples with 50 to 500 epochs.
Here is the core of my code:
Preparation:
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt

fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

L = 2
K = len(np.unique(train_labels))
lr = 0.001
nb_epochs = 50
node_per_hidden_layer = 300
nb_batches = 100
W = []
losses_test = []

X_train = np.reshape(train_images, (train_images.shape[0], train_images.shape[1]*train_images.shape[2]))
X_test = np.reshape(test_images, (test_images.shape[0], train_images.shape[1]*train_images.shape[2]))
Y_train = np.zeros((train_labels.shape[0], K))
Y_train[np.arange(Y_train.shape[0]), train_labels] = 1
Y_test = np.zeros((test_labels.shape[0], K))
Y_test[np.arange(Y_test.shape[0]), test_labels] = 1

W.append(np.random.normal(0, 0.01, (X_train.shape[1]+1, node_per_hidden_layer)))
for i in range(L-1):
    W.append(np.random.normal(0, 0.01, (node_per_hidden_layer+1, node_per_hidden_layer)))
W.append(np.random.normal(0, 0.01, (node_per_hidden_layer+1, K)))
Helper functions:
def softmax(z):
    exp = np.exp(z - z.max(1)[:, np.newaxis])
    return np.array(exp / exp.sum(1)[:, np.newaxis])

def softmax_derivative(z):
    sm = softmax(z)
    return sm * (1 - sm)

def ReLU(z):
    return np.maximum(z, 0)

def ReLU_derivative(z):
    return (z >= 0).astype(int)

def get_loss(y, y_pred):
    return -np.sum(y * np.log(y_pred))
Fitting:
def fit():
    minibatch_size = len(X_train) // nb_batches
    for epoch in range(nb_epochs):
        permutaion = list(np.random.permutation(X_train.shape[0]))
        X_shuffle = X_train[permutaion]
        Y_shuffle = Y_train[permutaion]
        print("Epoch----------------", epoch)
        for batche in range(0, X_shuffle.shape[0], minibatch_size):
            Z = [None] * (L + 2)
            a = [None] * (L + 2)
            delta = [None] * (L + 2)
            X = X_train[batche:batche+minibatch_size]
            Y = Y_shuffle[batche:batche+minibatch_size]
            ### forward propagation
            a[0] = np.append(X, np.ones((minibatch_size, 1)), axis=1)
            for i in range(L):
                Z[i + 1] = a[i] @ W[i]
                a[i + 1] = np.append(ReLU(Z[i+1]), np.ones((minibatch_size, 1), dtype=int), axis=1)
            Z[-1] = a[L] @ W[L]
            a[-1] = softmax(Z[-1])
            ### back propagation
            delta[-1] = (Y - a[-1]) * softmax_derivative(Z[-1])
            for i in range(L, 0, -1):
                delta[i] = (delta[i+1] @ W[i].T)[:, :-1] * ReLU_derivative(Z[i])
            for i in range(len(W)):
                g = a[i].T @ delta[i+1] / minibatch_size
                W[i] = W[i] + lr * g
        get_loss_on_test()
Loss:
def get_loss_on_test():
    Z_test = [None] * (L + 2)
    a_test = [None] * (L + 2)
    a_test[0] = np.append(X_test, np.ones((len(X_test), 1)), axis=1)
    for i in range(L):
        Z_test[i + 1] = a_test[i] @ W[i]
        a_test[i + 1] = np.append(ReLU(Z_test[i+1]), np.ones((len(X_test), 1)), axis=1)
    Z_test[-1] = a_test[L] @ W[L]
    a_test[-1] = softmax(Z_test[-1])
    losses_test.append(get_loss(Y_test, a_test[-1]))
Main:
losses_test.clear()
fit()
plt.plot(losses_test)
plt.show()
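As an aside on the output layer (a hedged note, not part of the original post): with a softmax output and the cross-entropy loss used in get_loss, the error term at the last layer usually simplifies, so a common formulation is:
# For softmax combined with cross-entropy, the gradient of the loss with respect
# to the pre-activation Z[-1] collapses to (prediction - target); with this form
# the update descends the gradient, i.e. W[i] = W[i] - lr * g, so the sign
# convention of the weight update must match.
delta[-1] = a[-1] - Y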
If you want to see it in my notebook with an example of a losses graph, here is the link: https://github.com/beurnii/INF8225/blob/master/tp2/jpt.ipynb
If you want more details on my assignment, this is part 1b (page 2 for English):
https://github.com/beurnii/INF8225/blob/master/tp2/INF8225_TP2_2020.pdf

Create custom Keras/Tensorflow functions that uses NumPy arrays

I'm training a neural network to act as a nonlinear controller. Basically, the ANN (F*) must provide a signal w = F*(u) such that B(G(w)) = G(u) for some dynamical model B.
To simulate systems and nonlinearities, I'm using Python Control, and using Keras to create a Sequential model:
# Creating model:
F = Sequential (name = 'compensator')
F.add (Dense (4, input_dim = 1, activation = 'linear', name = 'input_layer'))
F.add (Dense (4, activation = deadzoneInverse, name = 'dense_layer'))
F.add (Dense (1, activation = 'linear', name = 'output_layer'))
and adding another layer for simulation:
F.add (Dense (1, activation = simulation, name = 'simulation_layer'))
Since simulation is a custom function that uses Python Control modules, in particular control.matlab.lsim, its computations need to be done on NumPy arrays. The conversions between the models/functions and Keras tensors can be done as follows.
For B inverse:
# NumPy function:
def _dstar(x):
    y = x
    if (x > 5. * eps) or (x < -5. * eps):
        y = x
    elif (x > eps):
        y = x + a
    elif (x < -eps):
        y = x - a
    else:
        y = x * (1. + a / eps)
    return np.reshape(y, (-1, 1))

# Keras conversion:
def deadzoneInverse(x):
    x_array = K.eval(x)
    y_array = _dstar(x)
    return K.variable(y_array)
and for simulation:
def _simul(x):
    x_array = x
    t_array = np.linspace(0, currTime, int(currTime / Ts))
    y_array, _, _ = cm.lsim(G, x_array, t_array)
    y_array = B(y_array, t_array, a)
    return y_array[-1]

def simulation(x):
    x_array = K.eval(x)
    y_value = _simul(x_array)
    return K.variable(y_value)
But when I try to F.compile, I get:
InvalidArgumentError: You must feed a value for placeholder tensor 'input_layer_input_14' with dtype float and shape [?,1]
[[Node: input_layer_input_14 = Placeholder[dtype=DT_FLOAT, shape=[?,1], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Is there a better way to implement these functions, even if it means still using Python Control (and, therefore, evaluating NumPy arrays)?
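As a hedged sketch of one possible direction (my assumption, not a confirmed answer): the dead-zone inverse itself can be written with TensorFlow ops instead of evaluating symbolic tensors with K.eval, which keeps the activation differentiable. The eps and a values below are placeholders for the constants used in _dstar; the lsim-based simulation has no such direct TensorFlow equivalent, so this only addresses the activation part.
import tensorflow as tf

def deadzone_inverse_tf(x, eps=0.1, a=0.5):
    # eps and a are illustrative placeholders; reuse the constants from _dstar.
    return tf.where(tf.abs(x) > 5.0 * eps, x,
           tf.where(x > eps, x + a,
           tf.where(x < -eps, x - a,
                    x * (1.0 + a / eps))))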

tensorflow tf.expand_dims Error

I'm a beginner in TensorFlow and I'm using tf.expand_dims, but I get an error and I can't understand the reason, so what am I missing?
This is the code:
ML_OUTPUT = None
input_for_classification = None

def ConstructML(input_tensor, layers_count, node_for_each_layer):
    global ML_OUTPUT
    global input_for_classification
    FeatureVector = np.array(input_tensor)
    FeatureVector = FeatureVector.flatten()
    print(FeatureVector.shape)
    ML_ModelINT(FeatureVector, layers_count, node_for_each_layer)

def ML_ModelINT(FeatureVector, layers_count, node_for_each_layer):
    hidden_layer = []
    Alloutputs = []
    hidden_layer.append({'weights': tf.Variable(tf.random_normal([FeatureVector.shape[0], node_for_each_layer[0]])), 'biases': tf.Variable(tf.random_normal([node_for_each_layer[0]]))})
    for i in range(1, layers_count):
        hidden_layer.append({'weights': tf.Variable(tf.random_normal([node_for_each_layer[i - 1], node_for_each_layer[i]])), 'biases': tf.Variable(tf.random_normal([node_for_each_layer[i]]))})
    FeatureVector = tf.expand_dims(FeatureVector, 0)
    layers_output = tf.add(tf.matmul(FeatureVector, hidden_layer[0]['weights']), hidden_layer[0]['biases'])
    layers_output = tf.nn.relu(layers_output)
    Alloutputs.append(layers_output)
    for j in range(1, layers_count):
        layers_output = tf.add(tf.matmul(layers_output, hidden_layer[j]['weights']), hidden_layer[j]['biases'])
        layers_output = tf.nn.relu(layers_output)
        Alloutputs.append(layers_output)
    ML_OUTPUT = layers_output
    input_for_classification = Alloutputs[1]
    return ML_OUTPUT

ML_Net = ConstructML(input, 3, [1024, 512, 256])
And it gives me an error on this line:
FeatureVector = tf.expand_dims(FeatureVector, 0)
The error is: Expected binary or unicode string, got tf.Tensor 'Relu_11:0' shape=(?, 7, 7, 512) dtype=float32
Note: the input is the output tensor of another network, and that network works fine.
Okay, the NumPy part was the problem: when the prediction function is first called, there is no feed yet for input_imgs, so the NumPy code is not executed correctly. I replaced it with TensorFlow ops and it works now.
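For reference, a minimal sketch of the kind of replacement described above (my own illustration, assuming the goal is just to flatten the incoming tensor with graph ops rather than NumPy):
# Flatten with TensorFlow ops so the symbolic tensor is never converted to NumPy;
# the batch dimension is kept and the remaining dimensions are collapsed.
FeatureVector = tf.reshape(input_tensor, [tf.shape(input_tensor)[0], -1])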

Tensorflow: constructing the params tensor for tf.map_fn

import tensorflow as tf
import numpy as np
def lineeqn(slope, intercept, y, x):
    return np.sign(y - (slope * x) - intercept)
# data size
DS = 100000
N = 100
x1 = tf.random_uniform([DS], -1, 0, dtype=tf.float32, seed=0)
x2 = tf.random_uniform([DS], 0, 1, dtype=tf.float32, seed=0)
# line representing the target function
rand1 = np.random.randint(0, DS)
rand2 = np.random.randint(0, DS)
T_x1 = x1[rand1]
T_x2 = x1[rand2]
T_y1 = x2[rand1]
T_y2 = x2[rand2]
slope = (T_y2 - T_y1)/(T_x2 - T_x1)
intercept = T_y2 - (slope * T_x2)
# extracting training samples from the data set
training_indices = np.random.randint(0, DS, N)
training_x1 = tf.gather(x1, training_indices)
training_x2 = tf.gather(x2, training_indices)
training_x1_ex = tf.expand_dims(training_x1, 1)
training_x2_ex = tf.expand_dims(training_x2, 1)
slope_tensor = tf.fill([N], slope)
slope_ex = tf.expand_dims(slope_tensor, 1)
intercept_tensor = tf.fill([N], intercept)
intercept_ex = tf.expand_dims(intercept_tensor, 1)
params = tf.concat(1, [slope_ex, intercept_ex, training_x2_ex, training_x1_ex])
training_y = tf.map_fn(lineeqn, params)
The lineeqn function requires 4 parameters, so params should be a tensor where each element is a 4-element tensor. When I try to run the above code, I get the error TypeError: lineeqn() takes exactly 4 arguments (1 given). Can someone please explain what is wrong with the way I have constructed the params tensor? What does tf.map_fn do to the params tensor?
A similar question has been asked here. The reason you are getting this error is that the function called by map_fn - lineeqn in your case - is required to take exactly one tensor argument.
Rather than a list of arguments to the function, the parameter elems is expected to be a list of items, where the mapped function is called for each item contained in the list.
So in order to take multiple arguments to your function, you would have to unpack them yourself from each item, e.g.
def lineeqn(item):
    slope, intercept, y, x = tf.unstack(item, num=4)
    return np.sign(y - (slope * x) - intercept)
and call it as
training_y = tf.map_fn(lineeqn, list_of_parameter_tensors)
Here, you call the line equation for each tensor in the list_of_parameter_tensors, where each tensor would describe a tuple (slope, intercept, y, x) of packed arguments.
(Note that depending on the shape of the actual argument tensors, it might also be that instead of tf.concat you could have to use tf.pack.)
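Putting the pieces together, a minimal sketch using the tensors defined in the question (this assumes the current API names tf.stack/tf.unstack rather than the older tf.pack, and is illustrative rather than a drop-in answer):
# Pack the per-sample arguments along axis 1, giving a params tensor of shape (N, 4).
params = tf.stack([slope_tensor, intercept_tensor, training_x2, training_x1], axis=1)

def lineeqn(item):
    # Each item is one row of params: (slope, intercept, y, x).
    slope, intercept, y, x = tf.unstack(item, num=4)
    return tf.sign(y - slope * x - intercept)

training_y = tf.map_fn(lineeqn, params)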