Can we train a specific part of a tensor with TensorFlow? - tensorflow

I am trying to make an adversarial image for the InceptionV3 model with TensorFlow. For that I use a specific loss on the pixels of my input image. This works well:
model_input_layer = model.layers[0].input
model_output_layer = model.layers[-1].output
cost_function = model_output_layer[0, object_type_to_fake]
gradient_function = K.gradients(cost_function, model_input_layer)[0]
grab_cost_and_gradients_from_model = K.function([model_input_layer, K.learning_phase()], [cost_function, gradient_function])
Now I would like to make only certain pixels trainable, so that the patch is created on a specific square rather than on the whole input image.
I have tried variable = tf.slice(model_input_layer, [0, 100, 100, 0], [-1, 100, 100, -1]), but it does not work.
Has anyone already done this?
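(One possible way to achieve this, sketched here as an illustration rather than a confirmed solution: instead of slicing the input tensor, keep computing the gradient for the whole image and zero it out everywhere except the patch before applying the update. The patch location, size, step value, and the hacked_image array below are assumptions, not code from the original post.)
import numpy as np

# Hypothetical patch location/size and step size -- adjust to your use case.
y0, x0, patch_h, patch_w = 100, 100, 100, 100
step = 0.01

# hacked_image: a (1, H, W, 3) numpy copy of the preprocessed input image.
# Binary mask that is 1 only inside the patch.
mask = np.zeros(hacked_image.shape)
mask[:, y0:y0 + patch_h, x0:x0 + patch_w, :] = 1.0

cost, gradients = grab_cost_and_gradients_from_model([hacked_image, 0])

# Apply the gradient only inside the patch; all other pixels stay untouched.
hacked_image += gradients * mask * step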

Related

Input 0 is incompatible with layer model_1: expected shape=(None, 244, 720, 3), found shape=(None, 720, 3)

I wanted to test my model by uploading an image, but I got this error. I think the error comes from somewhere in these lines; I'm just not sure how to fix it.
IMAGE_SIZE = [244,720]
inception = InceptionV3(input_shape=IMAGE_SIZE + [3], weights='imagenet', include_top=False)
Also, here's the code for loading my test image:
picture = image.load_img('/content/DSC_0365.JPG', target_size=(244,720))
img = img_to_array(picture)
prediction = model.predict(img)
print (prediction)
I'm still a newbie in machine learning, so my knowledge is not that deep yet.
This is mostly because you didn't prepare your input's dimensions for your Inception model. Here is one possible solution.
Model
from tensorflow.keras.applications import *
IMAGE_SIZE = [244,720]
inception = InceptionV3(input_shape=IMAGE_SIZE + [3],
                        weights='imagenet', include_top=False)
# check its input shape
inception.input_shape
(None, 244, 720, 3)
Inference
Let's test a sample by passing it to the model.
from PIL import Image
a = Image.open('/content/1.png').convert('RGB')
display(a)
Check its basic properties.
a.mode, a.size, a.format
('RGB', (297, 308), None)
So, as an array its shape is already (308 x 297 x 3). But to be able to pass it to the model, we need an extra axis, the batch axis. To do that, we can do:
import tensorflow as tf
import numpy as np
a = tf.expand_dims(np.array(a), axis=0)
a.shape
TensorShape([1, 308, 297, 3])
Much better. Now, we may want to normalize our data and resize it according to the model input shape. To do that, we can do:
a = tf.divide(a, 255)
a = tf.image.resize(a, [244,720])
a.shape
TensorShape([1, 244, 720, 3])
And lastly, pass it to the model.
inception(a).shape
TensorShape([1, 6, 21, 2048])
# or, keep the prediction for later analysis
y_pred = inception(a)
Updated
If you're using the tf.keras image-processing function, which loads the image in PIL format, then we can simply do:
image = tf.keras.preprocessing.image.load_img('/content/1.png',
                                               target_size=(244,720))
input_arr = tf.keras.preprocessing.image.img_to_array(image)
input_arr = np.array([input_arr]) # Convert single image to a batch.
inception(input_arr).shape
TensorShape([1, 6, 21, 2048])
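Note that with include_top=False the model returns a 4D feature map rather than class probabilities, which is why the output shape is (1, 6, 21, 2048). If you need actual predictions, one possible extension (a sketch only; the number of classes below is an assumption, not part of the original answer) is to add your own pooling and classification head on top:
from tensorflow.keras import layers, Model

num_classes = 10  # hypothetical number of classes for your task

x = layers.GlobalAveragePooling2D()(inception.output)        # (None, 2048)
out = layers.Dense(num_classes, activation='softmax')(x)
model = Model(inputs=inception.input, outputs=out)

model(input_arr).shape  # TensorShape([1, 10])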

Using datasets in tensorflow

So, I am trying to figure out how to use the new input pipeline framework in TF. The toy model I am using for it tries to memorize an image by training on pixel coordinates as inputs and RGB values as labels. The code I have at the moment goes something like this
W=442
H=500
image = tf.read_file('kitteh.png')
image = tf.image.decode_png(image, channels=3)
# normalize to 0-1 range
image = (image - tf.reduce_min(image)) / (tf.reduce_max(image) - tf.reduce_min(image))
# features and labels
coordinates = tf.constant([(x, y) for x in range(W) for y in range(H)], dtype=tf.float32)
rgb = tf.reshape(image, [-1, 3])
# dataset and input pipeline
features = tf.data.Dataset.from_tensors(coordinates)
labels = tf.data.Dataset.from_tensors(rgb)
data = tf.data.Dataset.zip((features, labels))
batched = data.batch(100)
iterator = batched.make_one_shot_iterator()
inputs, labels = iterator.get_next()
def net(inputs, reuse=False):
    l1 = tf.layers.dense(inputs, 20, activation=tf.nn.relu, name='l1', reuse=reuse)
    l2 = tf.layers.dense(l1, 20, activation=tf.nn.relu, name='l2', reuse=reuse)
    l3 = tf.layers.dense(l2, 20, activation=tf.nn.relu, name='l3', reuse=reuse)
    l4 = tf.layers.dense(l3, 20, activation=tf.nn.relu, name='l4', reuse=reuse)
    l5 = tf.layers.dense(l4, 20, activation=tf.nn.relu, name='l5', reuse=reuse)
    l6 = tf.layers.dense(l5, 20, activation=tf.nn.relu, name='l6', reuse=reuse)
    l7 = tf.layers.dense(l6, 20, activation=tf.nn.relu, name='l7', reuse=reuse)
    return tf.layers.dense(l7, 3, activation=tf.nn.sigmoid, name='out', reuse=reuse)
model = net(inputs)
loss = tf.losses.mean_squared_error(labels, model)
step = tf.train.get_global_step()
train = tf.train.AdamOptimizer().minimize(loss, global_step=step)
test = net(coordinates, reuse=True)
with tf.Session() as session:
    session.run((tf.global_variables_initializer(), tf.local_variables_initializer()))
    orig = session.run(image)
    for i in range(50000):
        f, l = session.run([inputs, labels])
        print(f.shape, l.shape)
And here are the questions:
This code doesn't work. For whatever reason, the batch() function doesn't work right. When I try to print my label and input shapes, I expect to get (100, 2) and (100, 3), but I get (1, 221000, 2), (1, 221000, 3) and an OutOfRangeError. I seem to be following the "importing data" tutorial, but I do not get the expected result.
How do I get a full set of data from a dataset? I want to have it generate a complete picture on every Nth step; can I get all the coordinates from the dataset?
I have the width and height of the image hard-coded, but it would be nice to get them from the decoded data. I tried W = image.get_shape()[0], but it resulted in my parser() function failing because W is not defined yet. Is there a solution?
Edit #1: updated the code to my latest attempt and updated the questions to reflect the latest problem I am getting.
Edit #3: it seems I made a mistake in my previous edit. The problem seems to be with batch() rather than zip(). When I print output shapes for data and batched datasets, I get the following
(TensorShape([Dimension(221000), Dimension(2)]),
TensorShape([Dimension(221000), Dimension(3)]))
(TensorShape([Dimension(None), Dimension(221000), Dimension(2)]),
TensorShape([Dimension(None), Dimension(221000), Dimension(3)]))
Solved the first and primary issue. It seems there is a subtle difference between Dataset.from_tensors() and Dataset.from_tensor_slices(). Both take tensors as arguments, but the former treats the whole tensor as a single training sample, while the latter treats the first axis as the sample axis and splits the tensor into samples along it. Using the latter function fixed the problem I had.
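For reference, a minimal sketch of the fix (the only change to the pipeline above is the dataset constructor; the variable names are taken from the original code):
# from_tensor_slices splits along axis 0, so each element is one (coordinate, rgb) pair
features = tf.data.Dataset.from_tensor_slices(coordinates)
labels = tf.data.Dataset.from_tensor_slices(rgb)
data = tf.data.Dataset.zip((features, labels))
batched = data.batch(100)   # now yields batches of shape (100, 2) and (100, 3)
iterator = batched.make_one_shot_iterator()
inputs, labels = iterator.get_next()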
As for questions 2 and 3, I would still love to hear any answers, but currently there seems to be no way to do it the way I want, so I have ended up hard-coding the image dimensions and using the coordinates constant tensor to paint a complete image.

Using scikit-learn for single input multiple output model in Keras

I am trying to use the scikit-learn wrapper in Keras to fine-tune a model that has one input (images) and two outputs (a rotation vector and a translation vector). The code snippet is below:
img_input =Input(shape=(img_rows, img_cols, img_channels))
model = KerasRegressor(build_fn = toy_model, verbose = 1)
loss_weights = [[1.0, 250.0], [1.0, 500.0], [1.0, 750.0]]
epochs =[10, 20]
batches = [5, 10]
param_grid = dict(loss_weight=loss_weights, epochs=epochs,
                  batch_size=batches)
grid = GridSearchCV(estimator = model, param_grid=param_grid)
grid_result = grid.fit(train_imgs, [train_pose_tx, train_pose_rt])
I want to fine-tune the "loss_weights" parameter for this model. However, I get the following error:
ValueError: Found input variables with inconsistent numbers of samples:[895, 2]
As I understand it, since this model has a single input, this functionality should be supported.
Link to GitHub gist:
https://gist.github.com/sushant4788/1f84cd2781f96fb752ee1f16a56d1bcb
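(The error comes from scikit-learn's input validation: fit(X, y) checks that X and y have the same number of samples, and a Python list of two target arrays is interpreted as y having only 2 samples, hence [895, 2]. One possible workaround, sketched here as an assumption rather than taken from the original post, is to hand scikit-learn a single 2-D target array and have the wrapped Keras model produce one output of matching width, e.g. by concatenating the two heads.)
import numpy as np

# Stack the two targets side by side: shape (n_samples, tx_dim + rt_dim).
train_targets = np.hstack([train_pose_tx, train_pose_rt])

grid_result = grid.fit(train_imgs, train_targets)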

TensorFlow: Rerun network with a different input tensor?

Suppose I have a typical CNN model in TensorFlow.
def inference(images):
    # images: 4D tensor of [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3] size.
    conv_1 = conv_layer(images, 64, 7, 2)
    pool_2 = pooling_layer(conv_1, 2, 2)
    conv_3 = conv_layer(pool_2, 192, 3, 1)
    pool_4 = pooling_layer(conv_3, 2, 2)
    ...
    conv_28 = conv_layer(conv_27, 1024, 3, 1)
    fc_29 = fc_layer(conv_28, 512)
    fc_30 = fc_layer(fc_29, 4096)
    return fc_30
A typical forward pass could be done like this:
images = input()
logits = inference(images)
output = sess.run([logits])
Now suppose my input function returns a pair of arguments, left_images and right_images (stereo camera). I want to run right_images up to conv_28 and left_images up to fc_30, so something like this:
images = tf.placeholder(tf.float32, [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3])
left_images, right_images = input()
conv_28, fc_30 = inference(images)
right_images_val = sess.run([conv_28], feed_dict={images: right_images})
left_images_val = sess.run([fc_30], feed_dict={images: left_images})
This however fails with
TypeError: The value of a feed cannot be a tf.Tensor object.
Acceptable feed values include Python scalars, strings, lists, or
numpy ndarrays.
I want to avoid having to evaluate the inputs and then feed them back to TensorFlow. Calling inference twice with different arguments will also not work because functions like conv_layer create variables.
Is it possible to rerun the network with a different input tensor?
TensorFlow shared variables are what you are looking for. Replace all calls to tf.Variable with tf.get_variable() in inference. Then you can run:
images_left, images_right = input()
with tf.variable_scope("logits") as scope:
    logits_left = inference(images_left)
    scope.reuse_variables()
    logits_right = inference(images_right)
output = sess.run([logits_left, logits_right])
Variables are not created again in the second call of inference, so the left and right images are processed using the same weights. Also check out my TensorFlow CNN training toolkit (look at the training code); I use this technique to run validation and training forward passes in the same TensorFlow graph.
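To make this concrete, a conv_layer built on tf.get_variable might look roughly like the sketch below (the name argument, variable names, and initializers are assumptions, not from the answer; the point is only that get_variable, unlike tf.Variable, returns the existing variable when the enclosing scope has reuse enabled, and that each layer needs its own sub-scope):
def conv_layer(inputs, filters, size, stride, name):
    with tf.variable_scope(name):
        in_channels = inputs.get_shape()[-1].value
        # Reused (not re-created) on the second call once scope.reuse_variables() is active.
        weights = tf.get_variable('weights', [size, size, in_channels, filters],
                                  initializer=tf.truncated_normal_initializer(stddev=0.1))
        biases = tf.get_variable('biases', [filters],
                                 initializer=tf.zeros_initializer())
        conv = tf.nn.conv2d(inputs, weights, strides=[1, stride, stride, 1],
                            padding='SAME')
        return tf.nn.relu(conv + biases)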

Minimal RNN example in tensorflow

I am trying to implement a minimal toy RNN example in TensorFlow.
The goal is to learn a mapping from the input data to the target data, similar to this wonderful concise example in theanets.
Update: We're getting there. The only remaining part is to make it converge (and make it less convoluted). Could someone help turn the following into running code, or provide a simple example?
import tensorflow as tf
from tensorflow.python.ops import rnn_cell
init_scale = 0.1
num_steps = 7
num_units = 7
input_data = [1, 2, 3, 4, 5, 6, 7]
target = [2, 3, 4, 5, 6, 7, 7]
#target = [1,1,1,1,1,1,1] #converges, but not what we want
batch_size = 1
with tf.Graph().as_default(), tf.Session() as session:
    # Placeholder for the inputs and target of the net
    # inputs = tf.placeholder(tf.int32, [batch_size, num_steps])
    input1 = tf.placeholder(tf.float32, [batch_size, 1])
    inputs = [input1 for _ in range(num_steps)]
    outputs = tf.placeholder(tf.float32, [batch_size, num_steps])
    gru = rnn_cell.GRUCell(num_units)
    initial_state = state = tf.zeros([batch_size, num_units])
    loss = tf.constant(0.0)
    # setup model: unroll
    for time_step in range(num_steps):
        if time_step > 0: tf.get_variable_scope().reuse_variables()
        step_ = inputs[time_step]
        output, state = gru(step_, state)
        loss += tf.reduce_sum(abs(output - target))  # all norms work equally well? NO!
    final_state = state
    optimizer = tf.train.AdamOptimizer(0.1)  # CONVERGEs sooo much better
    train = optimizer.minimize(loss)  # let the optimizer train
    numpy_state = initial_state.eval()
    session.run(tf.initialize_all_variables())
    for epoch in range(10):
        for i in range(7):  # feed fake 2D matrix of 1 byte at a time ;)
            feed_dict = {initial_state: numpy_state, input1: [[input_data[i]]]}
            numpy_state, current_loss, _ = session.run([final_state, loss, train], feed_dict=feed_dict)
            print(current_loss)  # hopefully going down, always stuck at 189, why!?
I think there are a few problems with your code, but the idea is right.
The main issue is that you're using a single tensor for inputs and outputs, as in:
inputs = tf.placeholder(tf.int32, [batch_size, num_steps]).
In TensorFlow the RNN functions take a list of tensors (because num_steps can vary in some models). So you should construct inputs like this:
inputs = [tf.placeholder(tf.int32, [batch_size, 1]) for _ in xrange(num_steps)]
Then you need to take care of the fact that your inputs are int32s, but an RNN cell works on float vectors; that's what embedding_lookup is for.
And finally, you'll need to adapt your feed to fill in the input list.
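A rough sketch of what that feed might look like (variable names follow the question's code, the placeholders here are float32 to match it, and everything beyond that is an assumption rather than code from the answer):
inputs = [tf.placeholder(tf.float32, [batch_size, 1]) for _ in range(num_steps)]

# Build a feed that assigns one value per unrolled step.
feed_dict = {placeholder: [[float(value)]]
             for placeholder, value in zip(inputs, input_data)}
feed_dict[initial_state] = numpy_state

numpy_state, current_loss, _ = session.run([final_state, loss, train],
                                           feed_dict=feed_dict)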
I think the ptb tutorial is a reasonable place to look, but if you want an even more minimal example of an out-of-the-box RNN you can take a look at some of the rnn unit tests, e.g., here.
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/kernel_tests/rnn_test.py#L164