per_image_whitening in Python - tensorflow

I'm trying to set up TensorFlow to accept one image at a time, but I believe I'm getting incorrect results because I pass in a plain array without first applying tf.image.per_image_whitening(). Is there an easy way to do this in Python for an individual image, without using the image queue?
Here's my code so far:
im = Image.open(request.FILES.values()[0])
im = im.convert('RGB')
im = im.crop((0, 0, cifar10.IMAGE_SIZE, cifar10.IMAGE_SIZE))
(width, height) = im.size
image_array = list(im.getdata())
image_array = np.array(image_array)
image_array = image_array.reshape((1, height, width, 3))
# tf.image.per_image_whitening() should be done here
#mean = np.mean(image_array)
#stddev = np.std(image_array)
#adjusted_stddev = max(stddev, 1.0/len(image_array.flatten()))
feed_dict = {"shuffle_batch:0": image_array}
# predictions always returns something close to [1, 0]
predictions = sess.run(tf.nn.softmax(logits), feed_dict=feed_dict)

If you want to avoid the image queue and do the predictions one by one, I think
image_array = (image_array - mean) / adjusted_stddev
should be able to do the trick.
If you want to do the prediction in batches, it's a little more complicated, since per_image_whitening (now per_image_standardization) only works on single images. So you either need to apply it to each image before forming the batch, as above, or set up a preprocessing procedure.
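For a single image, a minimal NumPy sketch of that standardization step (assuming image_array has shape (1, height, width, 3)) could look like this; note that TensorFlow's op clamps the standard deviation at 1/sqrt(num_elements):
import numpy as np

mean = np.mean(image_array)
stddev = np.std(image_array)
# per_image_standardization clamps stddev at 1/sqrt(number of elements).
adjusted_stddev = max(stddev, 1.0 / np.sqrt(image_array.size))
image_array = (image_array - mean) / adjusted_stddev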

Related

Unfurling TFRecords growing slower and slower

I'm trying to convert a TFRecord dataset back to images and I'm using the following code to do so:
def get_im_and_label_helper(parsed_features, im_format, label_format):
    im = tf.image.decode_png(parsed_features['image/encoded'])
    label = tf.image.decode_png(parsed_features['image/segmentation/class/encoded'])
    im, label = im.eval(), label.eval()
    return im, label

for tfr_file_path_name in tfr_files_list:
    tfr_file_path = os.path.join(sub_dataset_dir, tfr_file_path_name)
    record_iterator = tf.python_io.tf_record_iterator(tfr_file_path)
    for string_record in record_iterator:
        parsed_features = tf.parse_single_example(string_record, READ_FEATURES)
        filename = parsed_features['image/filename'].eval().decode("utf-8")
        im, label = get_im_and_label_helper(parsed_features, im_format, label_format)
        imageio.imwrite(os.path.join(target_dir, "images", filename + ".png"), im)
        imageio.imwrite(os.path.join(target_dir, "labels", filename + ".png"), label)
It works fine and does what I expect: it extracts the images and labels and saves them in the proper place. It starts fast and gets slower and slower as it goes on. I'm inexperienced with TensorFlow, so I assume I'm causing some computation graph to grow bigger and bigger, but I don't really know.
Any ideas?
Using tf.enable_eager_execution() (checking with tf.executing_eagerly()), and replacing every .eval() with .numpy(), solved the problem. In graph mode, each call to tf.parse_single_example and tf.image.decode_png adds new nodes to the default graph, so every .eval() runs against a graph that keeps growing; in eager mode the ops simply execute and return values.
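For reference, a sketch of the eager version of the loop, assuming the same READ_FEATURES, tfr_files_list, sub_dataset_dir, and target_dir as above:
import os
import imageio
import tensorflow as tf

tf.enable_eager_execution()
assert tf.executing_eagerly()

for tfr_file_path_name in tfr_files_list:
    tfr_file_path = os.path.join(sub_dataset_dir, tfr_file_path_name)
    for string_record in tf.python_io.tf_record_iterator(tfr_file_path):
        parsed_features = tf.parse_single_example(string_record, READ_FEATURES)
        filename = parsed_features['image/filename'].numpy().decode("utf-8")
        # Ops run immediately and return values; nothing accumulates in a graph.
        im = tf.image.decode_png(parsed_features['image/encoded']).numpy()
        label = tf.image.decode_png(parsed_features['image/segmentation/class/encoded']).numpy()
        imageio.imwrite(os.path.join(target_dir, "images", filename + ".png"), im)
        imageio.imwrite(os.path.join(target_dir, "labels", filename + ".png"), label)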

How to export adversarial examples for Facenet in Cleverhans?

I am trying to follow this blog https://brunolopezgarcia.github.io/2018/05/09/Crafting-adversarial-faces.html to generate adversarial face images against Facenet. The code is here https://github.com/tensorflow/cleverhans/tree/master/examples/facenet_adversarial_faces and works fine! My question is: how can I export these adversarial images? Maybe the question is too straightforward, which is why the blog doesn't mention it and only shows some sample pictures.
I was thinking it would not be a hard problem, since I know the generated adversarial samples are in adv. But this adv (float32) came from faces1, after being prewhitened and normalized. To restore integer images from adv (float32), I have to reverse the normalization and the prewhitening. It seems that if we want to output images from Facenet, we have to go through this process.
I am new to Facenet and Cleverhans, so I am not sure whether this is the best way to do it, or whether there is a common way (such as existing functions) to export images from Facenet.
In facenet_fgsm.py, we finally get the adversarial samples. I need to export adv as plain integer images.
adv = sess.run(adv_x, feed_dict=feed_dict)
In set_loader.py, there is some normalization:
def load_testset(size):
    # Load images paths and labels
    pairs = lfw.read_pairs(pairs_path)
    paths, labels = lfw.get_paths(testset_path, pairs, file_extension)

    # Random choice
    permutation = np.random.choice(len(labels), size, replace=False)
    paths_batch_1 = []
    paths_batch_2 = []
    for index in permutation:
        paths_batch_1.append(paths[index * 2])
        paths_batch_2.append(paths[index * 2 + 1])
    labels = np.asarray(labels)[permutation]
    paths_batch_1 = np.asarray(paths_batch_1)
    paths_batch_2 = np.asarray(paths_batch_2)

    # Load images
    faces1 = facenet.load_data(paths_batch_1, False, False, image_size)
    faces2 = facenet.load_data(paths_batch_2, False, False, image_size)

    # Change pixel values to 0 to 1 values
    min_pixel = min(np.min(faces1), np.min(faces2))
    max_pixel = max(np.max(faces1), np.max(faces2))
    faces1 = (faces1 - min_pixel) / (max_pixel - min_pixel)
    faces2 = (faces2 - min_pixel) / (max_pixel - min_pixel)
In the facenet.py load_data function, there is a prewhiten process.
nrof_samples = len(image_paths)
images = np.zeros((nrof_samples, image_size, image_size, 3))
for i in range(nrof_samples):
    img = misc.imread(image_paths[i])
    if img.ndim == 2:
        img = to_rgb(img)
    if do_prewhiten:
        img = prewhiten(img)
    img = crop(img, do_random_crop, image_size)
    img = flip(img, do_random_flip)
    images[i,:,:,:] = img
return images
I hope some expert can point me to a hidden function in Facenet or Cleverhans that can directly export the adv images; otherwise, reversing the normalization and prewhitening process seems awkward. Thank you very much.
I don't know much about the Facenet code. From your discussion, it seems like you will have to save the values of min_pixel and max_pixel to reverse the normalization, and then look at the prewhiten function to see how you can reverse it. I'll email Bruno to see if he has any further comments to help you out.
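As a rough illustration (this is not an existing Facenet or Cleverhans function), undoing the two steps might look like the sketch below. It assumes you saved min_pixel and max_pixel from load_testset(), plus the per-image mean and adjusted standard deviation that prewhiten() computed, since all of those are needed to invert the pipeline:
import numpy as np
import imageio

def unwhiten_batch(adv, min_pixel, max_pixel, means, std_adjs):
    # Undo the (x - min_pixel) / (max_pixel - min_pixel) scaling from set_loader.py.
    x = adv * (max_pixel - min_pixel) + min_pixel
    # Undo prewhiten, which computed (img - mean) / std_adj per image.
    x = x * std_adjs.reshape(-1, 1, 1, 1) + means.reshape(-1, 1, 1, 1)
    return np.clip(x, 0, 255).astype(np.uint8)

# Hypothetical usage, assuming means and std_adjs were recorded while loading faces1:
# for i, img in enumerate(unwhiten_batch(adv, min_pixel, max_pixel, means, std_adjs)):
#     imageio.imwrite("adv_%03d.png" % i, img)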
EDIT: Now image exporting is included in the Facenet example of Cleverhans: https://github.com/tensorflow/cleverhans/commit/08f6fb9cf2a7f199467d5ed60179fc3ae9140458

how to merge 'Conv-BN-Scale' into a single 'Conv' layer for tensorflow?

For faster inference of a model, I want to merge 'Conv-BN-Scale' into a single 'Conv' layer in my TensorFlow model, but I cannot find a useful, complete example of how to do it.
Can anyone give some advice or a complete code example?
Thanks!
To merge the two layers, you will need to pass a tensor in and get a tensor back after both layers have been applied. Suppose your input tensor is X:
def MlConvBnScale(X, kernel, strides, padding='SAME', scale=False,
                  beta_initializer=tf.constant_initializer(0.1),
                  gamma_initializer=tf.constant_initializer(0.1),
                  moving_mean_initializer=tf.constant_initializer(0.1),
                  moving_variance_initializer=tf.constant_initializer(0.1)):
    # Convolution followed by batch normalization.
    convLout = tf.nn.conv2d(X,
                            filter=kernel,
                            strides=strides,
                            padding=padding)
    # tf.layers.batch_normalization accepts the initializer arguments
    # (tf.nn.batch_normalization does not).
    return tf.layers.batch_normalization(convLout,
                                         scale=scale,
                                         beta_initializer=beta_initializer,
                                         gamma_initializer=gamma_initializer,
                                         moving_mean_initializer=moving_mean_initializer,
                                         moving_variance_initializer=moving_variance_initializer)
That will return a tensor after both operations have been applied. I have used default values for the parameters, but you can override them in your function call. If your input is not already a tensor but a NumPy array, you can use tf.convert_to_tensor() (https://www.tensorflow.org/api_docs/python/tf/convert_to_tensor), and if you are struggling with the kernel/filter and how it is applied, check out this thread: What does tf.nn.conv2d do in tensorflow?
If you have any queries or run into trouble implementing it, comment down below and we will see.
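A minimal usage sketch of the function above, with hypothetical shapes (the 3x3x3x16 kernel and the 1x32x32x3 input are just placeholders):
import numpy as np
import tensorflow as tf

X = tf.convert_to_tensor(np.random.rand(1, 32, 32, 3).astype(np.float32))
kernel = tf.Variable(tf.random_normal([3, 3, 3, 16]))  # HWIO filter layout
out = MlConvBnScale(X, kernel, strides=[1, 1, 1, 1])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(out).shape)  # (1, 32, 32, 16)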

tensorflow error - you must feed a value for placeholder tensor 'in'

I'm trying to implement queues for my tensorflow prediction but get the following error -
you must feed a value for placeholder tensor 'in' with dtype float and shape [1024,1024,3]
The program works fine if I use feed_dict; I'm trying to replace feed_dict with queues.
The program basically takes a list of positions and passes the image np array to the input tensor.
for each in positions:
    y, x = each
    images = img[y:y+1024, x:x+1024, :]
    a = images.astype('float32')
    q = tf.FIFOQueue(capacity=200, dtypes=dtypes)
    enqueue_op = q.enqueue(a)
    qr = tf.train.QueueRunner(q, [enqueue_op] * 1)
    tf.train.add_queue_runner(qr)
data = q.dequeue()
graph = load_graph('/home/graph/frozen_graph.pb')
with tf.Session(graph=graph, config=tf.ConfigProto(log_device_placement=True)) as sess:
    p_boxes = graph.get_tensor_by_name("cat:0")
    p_confs = graph.get_tensor_by_name("sha:0")
    y = [p_confs, p_boxes]
    x = graph.get_tensor_by_name("in:0")
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord, sess=sess)
    confs, boxes = sess.run(y)
    coord.request_stop()
    coord.join(threads)
How can I make sure that the input data I enqueued is recognized while running the graph in the session?
In my original run I call the following:
confs, boxes = sess.run([p_confs, p_boxes], feed_dict=feed_dict_testing)
I'd suggest not using queues for this problem, and switching to the new tf.data API. In particular tf.data.Dataset.from_generator() makes it easier to feed in data from a Python function. You can rewrite your code to be much simpler, as follows:
def generator():
    for y, x in positions:
        images = img[y:y+1024, x:x+1024, :]
        yield images.astype('float32')

dataset = tf.data.Dataset.from_generator(
    generator, tf.float32, [1024, 1024, img.shape[2]])

# Add any extra transformations in here, like `dataset.batch()` or
# `dataset.repeat()`.
# ...

iterator = dataset.make_one_shot_iterator()
data = iterator.get_next()
Note that in your program, there's no connection between the data tensor and the graph you loaded in load_graph() (at least, assuming that load_graph() doesn't grab data from the global state!). You will probably need to use tf.import_graph_def() and the input_map argument to associate data with one of the tensors in your frozen graph (possibly "in:0"?) to complete the task.
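For example, a sketch of that input_map wiring, assuming the frozen graph's input tensor really is named "in:0" and reusing the generator from above (the Dataset and its iterator must be built in the same graph that receives the import):
graph_def = tf.GraphDef()
with tf.gfile.GFile('/home/graph/frozen_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    dataset = tf.data.Dataset.from_generator(
        generator, tf.float32, [1024, 1024, img.shape[2]])
    data = dataset.make_one_shot_iterator().get_next()
    # Splice `data` in where the frozen graph expects its "in:0" placeholder.
    tf.import_graph_def(graph_def, input_map={"in:0": data}, name="")
    p_confs = graph.get_tensor_by_name("sha:0")
    p_boxes = graph.get_tensor_by_name("cat:0")

with tf.Session(graph=graph) as sess:
    confs, boxes = sess.run([p_confs, p_boxes])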

Dynamic Axes with a custom RNN

I'm running into a number of issues relating to dynamic axes. I am trying to implement a convolutional RNN similar to the LSTM() function, but one that handles sequential image input and outputs an image.
I’m able to build the network and pass dummy data through it to produce output, but when I try to compute the error with an input_variable label I consistently see the following error:
RuntimeError: Node '__v2libuid__Input471__v2libname__img_label' (InputValue operation): DataFor: FrameRange's dynamic axis is inconsistent with matrix: {numTimeSteps:1, numParallelSequences:2, sequences:[{seqId:0, s:0, begin:0, end:1}, {seqId:1, s:1, begin:0, end:1}]} vs. {numTimeSteps:2, numParallelSequences:1, sequences:[{seqId:0, s:0, begin:0, end:2}]}
If I understand this error message correctly, it claims that the value I passed in as the label has axes inconsistent with what is expected: it has 2 time steps and 1 parallel sequence, when what is desired is 1 time step and 2 sequences. This makes sense to me, but I'm not sure how the data I'm passing in fails to conform to this. Here are (roughly) the variable declarations and eval statements:
…
img_input = input_variable(shape=img_shape, dtype=np.float32, name="img_input")
convlstm = Recurrence(conv_lstm_cell, initial_state=initial_state)(img_input)
out = select_last(convlstm)
img_label = input_variable(shape=img_shape, dynamic_axes=out.dynamic_axes, dtype=np.float32, name="img_label")
error = squared_error(out, img_label)
…
dummy_input = np.ones(shape=(2, 3, 3, 32, 32)) # (batch, seq_len, channels, height, width)
dummy_label = np.ones(shape=(2, 3, 32, 32)) # (batch, channels, height, width)
out = error.eval({img_input:dummy_input, img_label:dummy_label})
I believe part of the issue is with the dynamic_axes set when creating the img_label input_variable. I've also tried setting it to [Axis.default_batch_axis()], and not setting it at all; either squared_error complains about inconsistent axes between out and img_label, or I see the same error as above.
The only issue I see with the above setup is that your dummy label should have an explicit dynamic axis, so it should be declared as:
dummy_label = np.ones(shape=(2, 1, 3, 32, 32))
Assuming your convlstm works similarly to an lstm, the following works without issues for me and evaluates the loss for two input/output pairs:
import cntk as C
import numpy as np

x = C.input_variable((3,32,32))
cx = convlstm(x)
lx = C.sequence.last(cx)
y = C.input_variable(lx.shape, dynamic_axes=lx.dynamic_axes)
loss = C.squared_error(y, lx)
x0 = np.arange(2*3*3*32*32,dtype=np.float32).reshape(2,3,3,32,32)
y0 = np.arange(2*1*3*32*32,dtype=np.float32).reshape(2,1,3,32,32)
loss.eval({x:x0, y:y0})