When I have an image, I can standardize it channel-wise as follows:
image[:, :, 0] = ((image[:, :, 0]-mean_1))/std_1
image[:, :, 1] = ((image[:, :, 1]-mean_2))/std_2
image[:, :, 2] = ((image[:, :, 2]-mean_3))/std_3
Here mean_1 and std_1 are the mean and standard deviation of the first channel, and likewise for mean_2, std_2, mean_3, and std_3. But right now the image is a tensor with the following info:
(460, 700, 3) <dtype: 'float32'>
<class 'tensorflow.python.framework.ops.Tensor'>
I am new to TensorFlow and I don't know how to convert the above formulas into code that performs the same task on the tensor image.
Edit: I calculated the means and stds over all the dataset images myself, so I have their values.
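A minimal sketch of the same per-channel standardization applied directly to a float32 tensor (assuming the per-channel means/stds listed in Update 1 below); broadcasting over the last axis replaces the per-channel indexing:
import tensorflow as tf

# Per-channel dataset statistics taken from Update 1 below.
means = tf.constant([200.827, 160.252, 195.008], dtype=tf.float32)
stds = tf.constant([33.154, 45.877, 29.523], dtype=tf.float32)

image = tf.random.uniform((460, 700, 3), maxval=255.0)  # stand-in for the real image tensor
# Broadcasting applies each channel's mean/std across the whole (H, W) plane.
standardized = (image - means) / stds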
Update 1: I have tried to solve this problem using tf.keras.layers.Normalization embedded into my model:
inputs = keras.Input(shape=(460, 700, 3))
norm_layer = Normalization(mean=[200.827, 160.252, 195.008],
                           variance=[np.square(33.154),
                                     np.square(45.877),
                                     np.square(29.523)])
inputs = norm_layer(inputs)
This raises two new questions:
Does tf.keras.layers.Normalization with the above code normalize the inputs per channel, as I need?
Using the above code, will tf.keras.layers.Normalization work on test and validation data, or on training data only? I need it to work on all the datasets.
Please help me, guys :( I am so confused.
Update 1: Fixed to show how to use it with a preprocessing layer
import tensorflow as tf
import numpy as np
# Create a random image batch
img = tf.Variable(np.random.randint(0, 255, (10, 224, 224, 3)), dtype=tf.uint8)
# Create the preprocessing layer
# Note: works with TensorFlow 2.6 and up
norm_layer = tf.keras.layers.Normalization(mean=[200.827, 160.252, 195.008],
                                           variance=[np.square(33.154),
                                                     np.square(45.877),
                                                     np.square(29.523)])
# Apply norm_layer to your image
# You need not add it to your model
norm_img = norm_layer(img)
# or
# Use it via NumPy; the output is still a tensor since you're running a preprocessing layer
# norm_img = norm_layer(img.numpy())
# Run model prediction
predictions = model.predict(norm_img)
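Regarding the second question above, a hedged sketch (not part of the original answer): if the layer is kept inside the model it runs on every dataset passed to fit/evaluate/predict, and it can also be mapped over any tf.data split explicitly, so validation and test data are treated identically. The dataset below is a dummy stand-in:
dummy_imgs = np.random.randint(0, 255, (10, 460, 700, 3)).astype("float32")
dummy_labels = np.random.randint(0, 2, (10,))
train_ds = tf.data.Dataset.from_tensor_slices((dummy_imgs, dummy_labels)).batch(2)
# Mapping the layer normalizes this split; do the same for the val/test datasets.
train_ds = train_ds.map(lambda x, y: (norm_layer(x), y))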
I am following this tutorial on Policy Gradient using Keras, and can't quite figure out the following.
In the below case, how exactly are input tensors with different shapes fed to the model?
The input layers are neither Concatenated nor Added.
input1.shape = (4, 4)
input2.shape = (4,)
"input" layer has 4 neurons, and accepts input1 + input2 as 4d vector??
The code excerpt (modified to make it simpler):
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras import backend as K
import numpy as np
input = tf.keras.Input(shape=(4, ))
advantages = tf.keras.Input(shape=[1])
dense1 = layers.Dense(32, activation='relu')(input)
dense2 = layers.Dense(32, activation='relu')(dense1)
output = layers.Dense(2, activation='softmax')(dense2)
model = tf.keras.Model(inputs=[input, advantages], outputs=[output])
# *********************************
input1 = np.array(
    [[ 4.52281174e-02,  4.31672811e-02, -4.57789579e-02,  4.35560472e-02],
     [ 4.60914630e-02, -1.51269339e-01, -4.49078369e-02,  3.21451106e-01],
     [ 4.30660763e-02,  4.44624011e-02, -3.84788148e-02,  1.49510297e-02],
     [ 4.39553243e-02, -1.50087194e-01, -3.81797942e-02,  2.95249428e-01]]
)
input2 = np.array(
    [1.60063125, 1.47153674, 1.34113826, 1.20942261]
)
label = np.array(
    [[1, 0],
     [0, 1],
     [1, 0],
     [0, 1]]
)
model.compile(optimizer=optimizers.Adam(lr=0.0005), loss="binary_crossentropy")
model.train_on_batch([input1, input2], label)
In cases where you want to figure out what type of graph you have just built, it is helpful to use the model.summary() or tf.keras.utils.plot_model() methods for debugging:
tf.keras.utils.plot_model(model, to_file="test.png", show_shapes=True, show_layer_names=True, show_dtype=True)
This will show you that your input_2 is indeed not used. Since you haven't connected it to the main graph with any operations, it has no weights associated with it (the graph runs, but there is nothing to update on the right side).
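For comparison, a hedged sketch (not from the original question or answer) of how the second input would actually be wired into the graph if you wanted both tensors used, for example via Concatenate:
import tensorflow as tf
from tensorflow.keras import layers

state_in = tf.keras.Input(shape=(4,))
advantages_in = tf.keras.Input(shape=(1,))
merged = layers.Concatenate()([state_in, advantages_in])   # shape (None, 5)
dense = layers.Dense(32, activation='relu')(merged)
out = layers.Dense(2, activation='softmax')(dense)
connected_model = tf.keras.Model(inputs=[state_in, advantages_in], outputs=out)
connected_model.summary()   # both inputs now feed weights in the graph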
I am trying to print and log custom metrics (dice scores) for all classes on the validation set during training. I want Keras to calculate the custom metrics on the validation set after each epoch. My current program works, but I have to use some tricks that ultimately cause memory problems during training.
The issue is printing and logging the dice scores of all classes: the calculations are done on tensors, which I am unable to print. I can't use eager mode due to some compatibility issues with TensorFlow 2.0 and am forced to initialize another session.
My custom metrics class is given below:
class Metrics(tf.keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.val_lv = []
        self.val_rk = []
        self.val_lk = []
        self.val_sp = []

    def on_epoch_end(self, batch, logs={}):
        layer_name = 'loss6'
        self.intermediate_layer_model = tf.keras.models.Model(
            inputs=self.model.input,
            outputs=self.model.get_layer(layer_name).output)
        for batch_index in range(0, len(self.validation_data)):
            temp_targ = self.validation_data[batch_index][1][0]
            temp_targ = temp_targ.astype('float32')
            temp_predict = (np.asarray(self.intermediate_layer_model.predict(
                self.validation_data[batch_index][0]))).round()
            val_lvs = tf.reduce_mean(dice_coef(temp_targ[:, 1, :, :], temp_predict[:, 1, :, :]))
            val_rks = tf.reduce_mean(dice_coef(temp_targ[:, 2, :, :], temp_predict[:, 2, :, :]))
            val_lks = tf.reduce_mean(dice_coef(temp_targ[:, 3, :, :], temp_predict[:, 3, :, :]))
            val_sps = tf.reduce_mean(dice_coef(temp_targ[:, 4, :, :], temp_predict[:, 4, :, :]))
            self.val_lv.append(val_lvs)
            self.val_rk.append(val_rks)
            self.val_lk.append(val_lks)
            self.val_sp.append(val_sps)
        sess = tf.Session()
        print('liver-score:', sess.run(tf.reduce_mean(self.val_lv)))
        print('rk-score:', sess.run(tf.reduce_mean(self.val_rk)))
        print('lk-score:', sess.run(tf.reduce_mean(self.val_lk)))
        print('sp-score:', sess.run(tf.reduce_mean(self.val_sp)))
        logs['liver-score'] = sess.run(tf.reduce_mean(self.val_lv))
        logs['rk-score'] = sess.run(tf.reduce_mean(self.val_rk))
        logs['lk-score'] = sess.run(tf.reduce_mean(self.val_lk))
        logs['sp-score'] = sess.run(tf.reduce_mean(self.val_sp))
        sess.close()
        return
Note that the variables lv, rk, lk and sp are abbreviations for my class names.
Is there any alternative way to print and log the metrics without using a session?
As far as I understand, temp_targ and temp_predict are NumPy arrays, so the only reason you end up with tensors is that you are using tf.reduce_mean. You can replace it with np.mean. This will only work if dice_coef has no TensorFlow ops; if it does, you will have to replace them with NumPy functions. Once you do that, you won't have to open new sessions.
Also, instead of creating a new model at the end of every epoch (intermediate_layer_model), you can construct a Keras function using tf.keras.backend.function; more about it here.
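A minimal sketch of those two suggestions (not the poster's exact code; dice_coef_np is a hypothetical NumPy re-implementation of dice_coef, and the commented lines show where the pieces would slot into the callback):
import numpy as np
import tensorflow as tf

def dice_coef_np(y_true, y_pred, smooth=1.0):
    # Plain NumPy dice, so no session or tensor evaluation is needed.
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

# Built once (e.g. in on_train_begin) instead of rebuilding a Model each epoch:
# self.get_loss6_output = tf.keras.backend.function(
#     [self.model.input], [self.model.get_layer('loss6').output])
#
# Inside on_epoch_end the tensor ops then become plain NumPy:
# temp_predict = np.round(self.get_loss6_output([batch_inputs])[0])
# logs['liver-score'] = float(np.mean(dice_coef_np(temp_targ[:, 1], temp_predict[:, 1])))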
I want to create a tensor which is some kind of transformation matrix (a rotation matrix, for instance).
My model predicts 2 parameters: x1 and x2,
so the output is a tensor of shape (B, 2), where B is the batch size.
However, when I write my loss, I have to know this "B" since I want to iterate over it:
def get_rotation_tensor(x):
    roll_mat = K.stack([[[1, 0, 0],
                         [0, K.cos(x[i, 0]), -K.sin(x[i, 0])],
                         [0, K.sin(x[i, 0]), K.cos(x[i, 0])]] for i in range(BATCH_SIZE)])
    pitch_mat = K.stack([[[K.cos(x[i, 1]), 0, K.sin(x[i, 1])],
                          [0, 1, 0],
                          [-K.sin(x[i, 1]), 0, K.cos(x[i, 1])]] for i in range(BATCH_SIZE)])
    return K.batch_dot(pitch_mat, roll_mat)
The only solution I could think of is to pre-define BATCH_SIZE in advance... but is there a way to write a general loss function that will work for every batch size?
THANKS
I found a solution:
def get_rotation_tensor(x):
    ones = K.ones_like(x[:, 0])
    zeros = K.zeros_like(x[:, 0])
    roll_mat = K.stack([[ones, zeros, zeros],
                        [zeros, K.cos(x[:, 0]), -K.sin(x[:, 0])],
                        [zeros, K.sin(x[:, 0]), K.cos(x[:, 0])]])
    pitch_mat = K.stack([[K.cos(x[:, 1]), zeros, K.sin(x[:, 1])],
                         [zeros, ones, zeros],
                         [-K.sin(x[:, 1]), zeros, K.cos(x[:, 1])]])
    return K.batch_dot(K.permute_dimensions(pitch_mat, (2, 0, 1)),
                       K.permute_dimensions(roll_mat, (2, 0, 1)))
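A quick usage sketch (assuming the function above is in scope): because the rewritten version no longer references BATCH_SIZE, it works for any batch dimension:
import tensorflow as tf

x_small = tf.random.uniform((4, 2))    # batch of 4 predicted (x1, x2) pairs
x_large = tf.random.uniform((16, 2))   # batch of 16
print(get_rotation_tensor(x_small).shape)  # (4, 3, 3)
print(get_rotation_tensor(x_large).shape)  # (16, 3, 3)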
Perhaps I'm not fully understanding your issue, but can't you just determine the batch size from the shape of the tensors passed into the loss function? Below is an example that shows the idea. I hope this helps.
# Install TensorFlow
try:
    # %tensorflow_version only exists in Colab.
    %tensorflow_version 2.x
except Exception:
    pass
import tensorflow as tf
print(tf.__version__)
print(tf.executing_eagerly())
# Setup repro section from Keras FAQ with TF1 to TF2 adjustments
import numpy as np
import random as rn
# The below is necessary for starting Numpy generated random numbers
# in a well-defined initial state.
np.random.seed(42)
# The below is necessary for starting core Python generated random numbers
# in a well-defined state.
rn.seed(12345)
# Force TensorFlow to use single thread.
# Multiple threads are a potential source of non-reproducible results.
# For further details, see: https://stackoverflow.com/questions/42022950/
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1,
                                        inter_op_parallelism_threads=1)
# The below tf.set_random_seed() will make random number generation
# in the TensorFlow backend have a well-defined initial state.
# For further details, see:
# https://www.tensorflow.org/api_docs/python/tf/set_random_seed
tf.compat.v1.set_random_seed(1234)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
# Rest of code follows ...
# Custom Loss
def my_custom_loss(y_true, y_pred):
    tf.print('inside my_custom_loss:')
    tf.print('y_true:')
    tf.print(y_true)
    tf.print('y_true column 0:')
    tf.print(y_true[:, 0])
    tf.print('y_true column 1:')
    tf.print(y_true[:, 1])
    tf.print('y_pred:')
    tf.print(y_pred)
    # get length/batch size
    batch_size = tf.shape(y_pred)[0]
    tf.print('batch_size:')
    tf.print(batch_size)
    y_zeros = tf.zeros_like(y_pred)
    y_mask = tf.math.greater(y_pred, y_zeros)
    res = tf.boolean_mask(y_pred, y_mask)
    logres = tf.math.log(res)
    finres = tf.math.reduce_sum(logres)
    return finres
# Define model
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(1, activation='linear', input_dim=1, name="Dense1"))
model.compile(optimizer='rmsprop', loss=my_custom_loss)
print('model.summary():')
print(model.summary())
# Generate dummy data
data = np.array([[2.0],[1.0],[1.0],[3.0],[4.0]])
labels = np.array([[[2.0], [1.0]],
                   [[0.0], [3.0]],
                   [[0.0], [3.0]],
                   [[0.0], [3.0]],
                   [[0.0], [3.0]]])
# Train the model.
print('training the model:')
print('-----')
model.fit(data, labels, epochs=1, batch_size=3)
print('done training the model.')
print(data.shape)
print(labels.shape)
I am currently trying to get a simple TensorFlow model to train on data provided by a custom input pipeline. It should work as efficiently as possible. Although I've read lots of tutorials, I can't get it to work.
THE DATA
I have my training data split over several CSV files. For example, file 'a.csv' has 20 samples and 'b.csv' has 30. They have the same structure with the same header:
feature1; feature2; feature3; feature4
0.1; 0.2; 0.3; 0.4
...
(No labels, as it is for an autoencoder.)
THE CODE
I have written an input pipeline and would like to feed the data from it to the model. My code looks like this:
import tensorflow as tf
def input_pipeline(filenames, batch_size):
    dataset = tf.data.Dataset.from_tensor_slices(filenames)
    dataset = dataset.flat_map(
        lambda filename: (
            tf.data.TextLineDataset(filename)
            .skip(1)
            .shuffle(10)
            .map(lambda csv_row: tf.decode_csv(
                csv_row,
                record_defaults=[[-1.0]] * 4,
                field_delim=';'))
            .batch(batch_size)
        )
    )
    return dataset.make_initializable_iterator()
iterator = input_pipeline(['/home/sku/data/a.csv',
                           '/home/sku/data/b.csv'],
                          batch_size=5)
next_element = iterator.get_next()
# Build the autoencoder
x = tf.placeholder(tf.float32, shape=[None, 4], name='in')
z = tf.contrib.layers.fully_connected(x, 2, activation_fn=tf.nn.relu)
x_hat = tf.contrib.layers.fully_connected(z, 4)
# loss function with epsilon for numeric stability
epsilon = 1e-10
loss = -tf.reduce_sum(
x * tf.log(epsilon + x_hat) + (1 - x) * tf.log(epsilon + 1 - x_hat))
train_op = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(loss)
with tf.Session() as sess:
    sess.run(iterator.initializer)
    sess.run(tf.global_variables_initializer())
    for i in range(50):
        batch = sess.run(next_element)
        sess.run(train_op, feed_dict={x: batch, x_hat: batch})
THE PROBLEM
When trying to feed the data to the model, I get an error:
ValueError: Cannot feed value of shape (4, 5) for Tensor 'in:0', which has shape '(?, 4)'
When printing out a batch of the data, I get this, for example:
(array([ 4.1, 5.9, 5.5, 6.7, 10. ], dtype=float32), array([0.4, 7.7, 0. , 3.4, 8.7], dtype=float32), array([3.5, 4.9, 8.3, 7.2, 6.4], dtype=float32), array([-1. , -1. , 9.6, -1. , -1. ], dtype=float32))
It makes sense, but where and how do I have to reshape this? Also, the dtype info only appears with batching.
I also considered that I did the feeding wrong. Do I need input_fn or something like that? I remember that feeding dicts is way too slow. If somebody could show me an efficient way to prepare and feed the data, I would be really grateful.
I've figured out a solution that requires a second mapping function. You have to add the following line to the input function:
def input_pipeline(filenames, batch_size):
    dataset = tf.data.Dataset.from_tensor_slices(filenames)
    dataset = dataset.flat_map(
        lambda filename: (
            tf.data.TextLineDataset(filename)
            .skip(1)
            .shuffle(10)
            .map(lambda csv_row: tf.decode_csv(
                csv_row,
                record_defaults=[[-1.0]] * 4,
                field_delim=';'))
            .map(lambda *inputs: tf.stack(inputs))  # <-- mapping required
            .batch(batch_size)
        )
    )
    return dataset.make_initializable_iterator()
This seems to convert the tuple-like output into a matrix that can be fed to the network.
However, I'm still not sure if feeding it via feed_dict is the most efficient way. I'd still appreciate support here!
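For what it's worth, a hedged alternative sketch (not from the original posts): newer releases expose tf.data.experimental.CsvDataset (previously tf.contrib.data.CsvDataset), which parses the columns directly and can be stacked the same way, avoiding the per-row decode_csv lambda:
import tensorflow as tf

def csv_pipeline(filenames, batch_size):
    dataset = tf.data.experimental.CsvDataset(
        filenames,
        record_defaults=[tf.constant(-1.0)] * 4,
        header=True,           # skips the header line, like .skip(1) above
        field_delim=';')
    # Each element is a tuple of 4 scalars; stack them into a (4,) vector.
    dataset = dataset.map(lambda *cols: tf.stack(cols)).batch(batch_size)
    return dataset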
I am trying to use scikit-learn with Keras to fine-tune a model that has one input (images) and two outputs (a rotation vector and a translation vector). The code snippet is as below:
img_input = Input(shape=(img_rows, img_cols, img_channels))
model = KerasRegressor(build_fn=toy_model, verbose=1)
loss_weights = [[1.0, 250.0], [1.0, 500.0], [1.0, 750.0]]
epochs = [10, 20]
batches = [5, 10]
param_grid = dict(loss_weight=loss_weights, epochs=epochs,
                  batch_size=batches)
grid = GridSearchCV(estimator=model, param_grid=param_grid)
grid_result = grid.fit(train_imgs, [train_pose_tx, train_pose_rt])
I want to fine-tune the "loss_weights" parameter for this model. However, I get the following error:
ValueError: Found input variables with inconsistent numbers of samples:[895, 2]
As I understand it, since this model has a single input, this functionality should be supported.
Link to the GitHub gist:
https://gist.github.com/sushant4788/1f84cd2781f96fb752ee1f16a56d1bcb
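For context on the error, a hedged illustration (not from the original post; the array shapes are hypothetical): scikit-learn counts the outer length of y as the number of samples, so passing a Python list of two target arrays makes it see 2 samples against 895 images:
import numpy as np

train_imgs = np.zeros((895, 224, 224, 3))   # hypothetical image array
train_pose_tx = np.zeros((895, 3))          # hypothetical translation targets
train_pose_rt = np.zeros((895, 3))          # hypothetical rotation targets

y_as_list = [train_pose_tx, train_pose_rt]
# scikit-learn's sample-count check compares len(X) with len(y):
print(len(train_imgs), len(y_as_list))      # 895 vs 2 -> "inconsistent numbers of samples: [895, 2]"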