This question is based on: Tensorflow image reading & display
Following the code there, we have:
string = ['/home/user/test.jpg']
filepath_queue = tf.train.string_input_producer(string)
self.reader = tf.WholeFileReader()
key, value = self.reader.read(filepath_queue)
print(value)
# Output: Tensor("ReaderRead:1", shape=TensorShape([]), dtype=string)
my_img = tf.image.decode_jpeg(value, channels=3)
print(my_img)
# Output: Tensor("DecodeJpeg:0", shape=TensorShape([Dimension(None), Dimension(None), Dimension(3)]), dtype=uint8)
Why does my_img have no dimensions? (Dimension(3) is only because of the argument channels=3)
Does this mean that the image is not properly loaded? (img = misc.imread('/home/user/test.jpg') does load that image).
The image will be properly loaded, but TensorFlow doesn't have enough information to infer the image's shape until the op is run. This arises because tf.image.decode_jpeg() can produce tensors of different shapes (heights and widths), depending on the contents of the string tensor value. This enables you to build input pipelines using a collection of images with different sizes.
The Dimension(None) in the shape means "unknown" rather than "empty".
If you happen to know that all images read by this operation will have the same size, you can use Tensor.set_shape() to provide this information, and doing so will help to validate the shapes of later parts of the graph:
my_img = tf.image.decode_jpeg(value, channels=3)
KNOWN_HEIGHT = 28
KNOWN_WIDTH = 28
my_img.set_shape([KNOWN_HEIGHT, KNOWN_WIDTH, 3])
print(my_img)
# Output: Tensor("DecodeJpeg:0", shape=TensorShape([Dimension(28), Dimension(28), Dimension(3)]), dtype=uint8)
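If you need the actual dimensions at runtime (for example, to crop relative to the image size), note that tf.shape() returns the dynamic shape as a tensor, evaluated when the graph runs, even while the static shape is partially unknown; a minimal sketch:
# tf.shape() gives the dynamic shape, so it works even though the
# static shape of my_img is (None, None, 3)
dynamic_shape = tf.shape(my_img)  # 1-D int32 tensor: [height, width, channels]
height, width = dynamic_shape[0], dynamic_shape[1]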
Related
I am using this function to predict the class of previously unseen images:
def predictor(img, model):
    image = cv2.imread(img)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, (224, 224))
    image = np.array(image, dtype='float32') / 255.0
    plt.imshow(image)
    image = image.reshape(1, 224, 224, 3)
    clas = model.predict(image).argmax()
    name = dict_class[clas]
    print('The given image is of \nClass: {0} \nSpecies: {1}'.format(clas, name))
How do I change it if I want the top 2 (or top k) predictions? I.e.:
70% chance it's a dog
15% chance it's a bear
If you are using TensorFlow + Keras and doing multi-class classification, then the output of model.predict() is a tensor representing either the logits or already the probabilities (softmax on top of the logits).
I am taking this example from the documentation and slightly modifying it: https://www.tensorflow.org/api_docs/python/tf/math/top_k.
# See the softmax: probabilities add up to 1
network_predictions = [0.7, 0.2, 0.05, 0.05]
prediction_probabilities = tf.math.top_k(network_predictions, k=2)
top_2_scores = prediction_probabilities.values.numpy()
dict_class_entries = prediction_probabilities.indices.numpy()
In dict_class_entries you then have the indices, ordered by descending probability, matching the scores (i.e. dict_class_entries[0] = 0, which corresponds to 0.7, and top_2_scores[0] = 0.7, etc.).
You just need to replace network_predictions with model.predict(image).
Notice I removed the argmax() in order to send an array of probabilities instead of only the index of the maximum score/probability.
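Putting this together, here is a rough sketch of how predictor() could be adapted for top-k output; it assumes, as in your code, that dict_class maps class indices to names, and that model.predict() returns probabilities:
import cv2
import numpy as np
import tensorflow as tf

def predictor_top_k(img, model, dict_class, k=2):
    # Same preprocessing as the original predictor()
    image = cv2.imread(img)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, (224, 224))
    image = np.array(image, dtype='float32') / 255.0
    image = image.reshape(1, 224, 224, 3)
    # Keep the full probability vector instead of calling argmax()
    probs = model.predict(image)[0]
    top_k = tf.math.top_k(probs, k=k)
    for score, idx in zip(top_k.values.numpy(), top_k.indices.numpy()):
        print('{0:.0%} chance it is {1}'.format(score, dict_class[idx]))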
I am trying to read an image dataset for a segmentation problem (1 class) by following this link. My main folder contains two subfolders, (a) img and (b) mask: img contains the image samples and mask contains the corresponding masks. My approach was to generate the path for each image and then change the string path (i.e. img -> mask). I modified the code provided here, which now looks like:
def process_path(file_path):
    file_path_str = str(file_path)
    file_path_mask = file_path_str.replace('img', 'mask')
    # load the raw data from the file as a string
    img = tf.io.read_file(file_path)
    img = decode_img(img)
    mask = tf.io.read_file(str(file_path_mask))
    mask = decode_mask(mask)
    return img, mask
However, when I try to inspect the shape of my samples using:
for image, mask in labeled_ds.take(1):
    print("Image shape: ", image.numpy().shape)
    print("Mask shape: ", mask.numpy().shape)
I am getting the following error:
InvalidArgumentError: NewRandomAccessFile failed to Create/Open: Tensor("arg0:0", shape=(), dtype=string) : The filename, directory name, or volume label syntax is incorrect.
; Unknown error
[[{{node ReadFile_1}}]] [Op:IteratorGetNextSync]
Question: Any suggestion on how to read image and mask both from a given folder without above error?
We can use tf.regex_replace to rewrite the string tensor. So, in place of the Python string replacement, use: file_path_mask = tf.regex_replace(file_path, "img", "mask"). For TF 2.0, use tf.strings.regex_replace.
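Under that assumption, a sketch of the fixed process_path() for TF 2.x (tf.strings.regex_replace operates on string tensors, which is what map() passes in):
def process_path(file_path):
    # Rewrite the path with a TF string op, since file_path is a
    # string *tensor* here, not a Python string
    file_path_mask = tf.strings.regex_replace(file_path, 'img', 'mask')
    img = decode_img(tf.io.read_file(file_path))
    mask = decode_mask(tf.io.read_file(file_path_mask))
    return img, mask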
Alternative workaround for a similar problem. I have 200 (nb_of_images = 200) grayscale images of shape (512, 512) loaded as np.array, and 200 binary masks, also of shape (512, 512), loaded as np.array. Within a for loop, I take each image, convert it to an EagerTensor (with tf.convert_to_tensor), cast it to tf.float32 through the dtype argument, and add one dimension with:
img = img[:, :, tf.newaxis]
so that my images are now EagerTensors of shape (512, 512, 1), and finally I append them to an external list called images.
Within the same loop, I do the exact same operations for the masks and in the end I append them to an external list called masks.
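In sketch form the loop looks roughly like this (load_image() and load_mask() are hypothetical placeholders for however the arrays are actually read):
import tensorflow as tf

images, masks = [], []
for i in range(nb_of_images):
    img = tf.convert_to_tensor(load_image(i), dtype=tf.float32)   # shape (512, 512)
    images.append(img[:, :, tf.newaxis])                          # shape (512, 512, 1)
    mask = tf.convert_to_tensor(load_mask(i), dtype=tf.float32)
    masks.append(mask[:, :, tf.newaxis])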
Once the for loop is finished, I basically have two lists of EagerTensors, with
len(images) == len(masks) == nb_of_images
Lastly, I re-convert the two lists to tf.Tensor with:
images_tf = tf.convert_to_tensor(images) # convert list back to tf.Tensor
masks_tf = tf.convert_to_tensor(masks) # convert list back to tf.Tensor
and finally I create the tf.data.Dataset with:
dataset = tf.data.Dataset.from_tensor_slices((images_tf, masks_tf)) # create tf.data.Dataset
I am trying to follow this blog https://brunolopezgarcia.github.io/2018/05/09/Crafting-adversarial-faces.html to generate adversarial face images against Facenet. The code is here https://github.com/tensorflow/cleverhans/tree/master/examples/facenet_adversarial_faces and works fine! My question is how I can export these adversarial images. Maybe this question is too straightforward, so the blog didn't mention it and only shows some sample pictures.
I was thinking it is not a hard problem, since I know the generated adversarial samples are in adv. But this adv (float32) came from faces1, after being prewhitened and normalized. To restore 8-bit images from adv (float32), I have to reverse the normalization and the prewhitening process. It seems like if we want to output some images from Facenet, we have to do this.
I am new to Facenet and Cleverhans, and I am not sure whether this is the best way to do it, or whether there is a common way (such as existing functions) to export images from Facenet.
In facenet_fgsm.py, we finally get the adversarial samples. I need to export adv as plain integer images.
adv = sess.run(adv_x, feed_dict=feed_dict)
In set_loader.py there is some normalization:
def load_testset(size):
    # Load image paths and labels
    pairs = lfw.read_pairs(pairs_path)
    paths, labels = lfw.get_paths(testset_path, pairs, file_extension)

    # Random choice
    permutation = np.random.choice(len(labels), size, replace=False)
    paths_batch_1 = []
    paths_batch_2 = []
    for index in permutation:
        paths_batch_1.append(paths[index * 2])
        paths_batch_2.append(paths[index * 2 + 1])
    labels = np.asarray(labels)[permutation]
    paths_batch_1 = np.asarray(paths_batch_1)
    paths_batch_2 = np.asarray(paths_batch_2)

    # Load images
    faces1 = facenet.load_data(paths_batch_1, False, False, image_size)
    faces2 = facenet.load_data(paths_batch_2, False, False, image_size)

    # Change pixel values to 0 to 1 values
    min_pixel = min(np.min(faces1), np.min(faces2))
    max_pixel = max(np.max(faces1), np.max(faces2))
    faces1 = (faces1 - min_pixel) / (max_pixel - min_pixel)
    faces2 = (faces2 - min_pixel) / (max_pixel - min_pixel)
In the facenet.py load_data function, there is a prewhiten process.
nrof_samples = len(image_paths)
images = np.zeros((nrof_samples, image_size, image_size, 3))
for i in range(nrof_samples):
    img = misc.imread(image_paths[i])
    if img.ndim == 2:
        img = to_rgb(img)
    if do_prewhiten:
        img = prewhiten(img)
    img = crop(img, do_random_crop, image_size)
    img = flip(img, do_random_flip)
    images[i,:,:,:] = img
return images
I hope some expert can point me to some hidden function in facenet or cleverhans that can directly export the adv images; otherwise reversing the normalization and prewhitening process seems awkward. Thank you very much.
I don't know much about the Facenet code. From your description, it seems like you will have to save the values of min_pixel and max_pixel to reverse the normalization, and then look at the prewhiten function to see how you can reverse it. I'll email Bruno to see if he has any further comments to help you out.
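For what it's worth, a rough sketch of that reversal, assuming you keep min_pixel/max_pixel from load_testset() and record the per-image mean and adjusted standard deviation (std_adj) that prewhiten() computes when it standardizes each image:
import numpy as np
from scipy import misc  # matches the misc.imread usage in facenet.py

# Undo the [0, 1] scaling from load_testset()
adv_unscaled = adv * (max_pixel - min_pixel) + min_pixel

# Undo prewhiten(); mean and std_adj must have been recorded per image at load time
adv_raw = adv_unscaled * std_adj + mean

# Clip and cast back to 8-bit pixels before saving
adv_uint8 = np.clip(adv_raw, 0, 255).astype(np.uint8)
misc.imsave('adv_0.png', adv_uint8[0])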
EDIT: Now image exporting is included in the Facenet example of Cleverhans: https://github.com/tensorflow/cleverhans/commit/08f6fb9cf2a7f199467d5ed60179fc3ae9140458
I am adding this summarization of my issue to make it easier to understand:
I want to do exactly what is done in the following tensorflow example:
https://www.tensorflow.org/guide/datasets
# Reads an image from a file, decodes it into a dense tensor, and resizes it
# to a fixed shape.
def _parse_function(filename, label):
    image_string = tf.read_file(filename)
    image_decoded = tf.image.decode_jpeg(image_string)
    image_resized = tf.image.resize_images(image_decoded, [28, 28])
    return image_resized, label
# A vector of filenames.
filenames = tf.constant(["/var/data/image1.jpg", "/var/data/image2.jpg", ...])
# `labels[i]` is the label for the image in `filenames[i]`.
labels = tf.constant([0, 37, ...])
dataset = tf.data.Dataset.from_tensor_slices((filenames, labels))
dataset = dataset.map(_parse_function)
The only differences are: I read the data from CSV that has many more features and then I call the map method:
dataset = tf.data.experimental.make_csv_dataset(file_pattern=CSV_PATH_TRAIN,
                                                batch_size=2,
                                                header=True,
                                                label_name='label').map(_parse_function)
What does my _parse_function need to look like? How do I access the image path feature, update it to an image representation, and return the modified numeric matrix feature of the image without changing anything in the other features?
thanks,
eilalan
================== Here are my code attempts: ==================
My code reads a CSV with feature columns and a label. One of the features is an image path; the others are strings.
The image path needs to be processed into an image number matrix.
I have tried doing so with the following options. In both cases tf.read_file fails with an input dimension error.
My question is how to pass one image at a time into the map methods.
def read_image_png_option_1(image_path, depth=3, scale=False):
    """Reads the image from image_path (tf.string tensor) [jpg image].
    Casts the result to float32 and, if scale=True, scales it to [-1,1]
    using scale_image. Otherwise the values are in [0,1].
    Return:
        the decoded jpeg image, cast to float32
    """
    image = tf.image.convert_image_dtype(
        tf.image.decode_png(tf.read_file(image_path), channels=depth),
        dtype=tf.float32)
    if scale:
        image = scale_image(image)
    return image
def read_image_png_option_2(features, depth=3, scale=False):
    """Reads the image from features['image'] (tf.string tensor) [jpg image].
    Casts the result to float32 and, if scale=True, scales it to [-1,1]
    using scale_image. Otherwise the values are in [0,1].
    Return:
        the features dict with the decoded jpeg image, cast to float32
    """
    image = tf.image.convert_image_dtype(
        tf.image.decode_png(tf.read_file(features['image']), channels=depth),
        dtype=tf.float32)
    if scale:
        image = scale_image(image)
    features['image'] = image
    return features
def make_input_fn(fileName, batch_size=8, perform_shuffle=True):
    """An input function for training"""
    def _input_fn():
        def decode_csv(line):
            print('line is ', line)
            filename_col, label_col, gender_col, ethinicity = tf.decode_csv(
                line,
                [[""]] * amount_of_columns_csv,
                field_delim=",",
                na_value='NA',
                select_cols=None)
            image_col = read_image_png_option_1(filename_col)
            d = dict(zip(['image', 'label', 'gender', 'ethinicity'],
                         [image_col, label_col, gender_col, ethinicity])), label
            return d

        ## OPTION 1:
        # filenames could be more than one
        # dataset = tf.data.TextLineDataset(filenames=fileName).skip(1).batch(batch_size).map(decode_csv)
        ## OPTION 2:
        dataset = tf.data.experimental.make_csv_dataset(file_pattern=CSV_PATH_TRAIN,
                                                        batch_size=2,
                                                        header=True,
                                                        label_name='label').map(read_image_png_option_2)
        # select_columns=[0,1]) #[tf.string,tf.string,tf.string,tf.string])
        if perform_shuffle:
            dataset = dataset.shuffle(buffer_size=256)
        return dataset
    return _input_fn()
train_input_fn = lambda: make_input_fn(CSV_PATH_TRAIN)
train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=50)
eval_input_fn = lambda: make_input_fn(CSV_PATH_VAL)
eval_spec = tf.estimator.EvalSpec(eval_input_fn)
feature_columns = [tf.feature_column.numeric_column("image", shape=(224, 224)),  # here I need a python method to transform
                   tf.feature_column.categorical_column_with_vocabulary_list("gender", ["ww", "ee"]),
                   tf.feature_column.categorical_column_with_vocabulary_list("ethinicity", ["xx", "yy"])]
estimator = tf.estimator.DNNClassifier(feature_columns=feature_columns, hidden_units=[1024, 512, 256], warm_start_from=ws)
tf.estimator.train_and_evaluate(estimator, train_spec=train_spec, eval_spec=eval_spec)
Error for option 2:
ValueError: Shape must be rank 0 but is rank 1 for 'ReadFile' (op: 'ReadFile') with input shapes: [2].
Error for option 1:
ValueError: Shape must be rank 0 but is rank 1 for 'ReadFile' (op: 'ReadFile') with input shapes: [?].
Any help is appreciated.
Thanks
First you need to read the CSV file into a dataset.
Then for each row in your CSV you can call your parse function.
def getInput(fileList):
    # returns a dataset containing a list of filenames
    files = tf.data.Dataset.from_tensor_slices(fileList)
    # returns a dataset containing the rows taken from all the files in the file list;
    # the dataset is filled dynamically and not all entries are read at once
    dataset = files.interleave(tf.data.TextLineDataset)
    # call the parse function for each row;
    # the returned dataset will contain whatever the parse function returns for the row --
    # we want the image path to be converted to a decoded image in the parse function
    dataset = dataset.map(_parse_function, num_parallel_calls=8)
    # return an iterator for the dataset, which will be used to get elements
    return dataset.make_one_shot_iterator().get_next()
The parse function will be passed only one parameter, a single row from the CSV file. You need to decode the CSV and do further processing on each value.
Let's say you have 3 columns in your CSV, each being a string.
def _parse_function(value):
    columns_default = [[""], [""], [""]]
    # this will be a tensor of the columns in the row
    columns = tf.decode_csv(value, record_defaults=columns_default,
                            field_delim=',')
    col_names = ["label", "imagepath", "c3"]
    features = dict(zip(col_names, columns))
    for f, tensor in features.items():
        # process imagepath into a decoded image
        if f == "imagepath":
            image_string = tf.read_file(tensor)
            image_decoded = tf.image.decode_jpeg(image_string)
            image_resized = tf.image.resize_images(image_decoded, [28, 28])
            features[f] = image_resized
    labels = tf.equal(features.pop('label'), "1")
    labels = tf.expand_dims(labels, 0)
    return features, labels
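A usage sketch tying the two together (the CSV filename is just a placeholder):
features, labels = getInput(['train.csv'])
with tf.Session() as sess:
    batch_features, batch_labels = sess.run([features, labels])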
Edit:
Explanation for comment:
A Dataset object simply contains a list of elements. The elements can be tensors, or tuples of tensors, etc. A tensor object can contain anything: it could represent a single feature, a single record, or a batch of records. Further, the dataset API provides handy methods to manipulate the elements within.
If you are using the dataset with another API like Estimator, then that API expects the dataset elements to be in a specific format, which is what we need to return from our input function; see e.g.
https://www.tensorflow.org/api_docs/python/tf/estimator/Estimator#train
I have edited my code block above to describe what the dataset object will contain at each step.
From what I understand, you have the image path as one of the fields in your CSV, and you want to convert that path into an actual decoded image which you will use as one of the features.
Since the image is going to be just one of the features, you should not try to create a dataset using the image files alone. The dataset object will include all your features at once.
So doing this would be incorrect:
files = tf.data.Dataset.from_tensor_slices(ds['imagepath'])
dataset = files.interleave(tf.data.TextLineDataset)
If you are using the make_csv_dataset() function to read your CSV, then it will convert each row of your CSV into one record, where a record contains a list of all the features, same as the columns of the CSV.
So each element in the returned dataset should contain a single tensor containing all your features.
Here your image path will be one of the features; now you want to transform that image path into a decoded image.
I suppose you can do it by applying a parse function to the elements of the dataset using the map() function, but it will be slightly tricky as all your features are already packed inside a single tensor.
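One way around the rank error you saw: make_csv_dataset() batches the rows before your map() runs, so features['image'] is a rank-1 tensor of paths, while ReadFile expects a scalar string. A rough sketch (TF 1.x, as in your snippets) using tf.map_fn to decode one path at a time, resizing so the per-image results can be stacked:
def read_images_in_batch(features, labels):
    def _decode_one(path):
        # tf.read_file requires a scalar string, hence the per-element decoding
        image = tf.image.decode_png(tf.read_file(path), channels=3)
        image = tf.image.convert_image_dtype(image, tf.float32)
        return tf.image.resize_images(image, [224, 224])
    features['image'] = tf.map_fn(_decode_one, features['image'], dtype=tf.float32)
    return features, labels

dataset = tf.data.experimental.make_csv_dataset(file_pattern=CSV_PATH_TRAIN,
                                                batch_size=2,
                                                header=True,
                                                label_name='label').map(read_images_in_batch)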
I want to perform a multi-label classification with TensorFlow.
I have about 95000 images and for each image there is a corresponding label vector. For every image there are 7 labels. These 7 labels are represented as a tensor with size 7. Each image has the shape of (299,299,3).
How can I now write the images with their corresponding label vectors/tensors to the .tfrecords file?
My current code/approach:
def get_decode_and_resize_image(image_id):
    image_queue = tf.train.string_input_producer(['../../original-data/'+image_id+".jpg"])
    image_reader = tf.WholeFileReader()
    image_key, image_value = image_reader.read(image_queue)
    image = tf.image.decode_jpeg(image_value, channels=3)
    resized_image = tf.image.resize_images(image, 299, 299, align_corners=False)
    return resized_image
init_op = tf.initialize_all_variables()

with tf.Session() as sess:
    # Start populating the filename queue.
    sess.run(init_op)
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

    # get all labels and image ids
    csv = pd.read_csv('../../filteredLabelsToPhotos.csv')

    # create a writer for writing to the .tfrecords file
    writer = tf.python_io.TFRecordWriter("tfrecords/data.tfrecords")

    for index, row in csv.iterrows():
        # the labels
        image_id = row['photo_id']
        lunch = tf.to_float(row["lunch"])
        dinner = tf.to_float(row["dinner"])
        reservations = tf.to_float(row["TK"])
        outdoor = tf.to_float(row["OS"])
        waiter = tf.to_float(row["WS"])
        classy = tf.to_float(row["c"])
        gfk = tf.to_float(row["GFK"])
        labels_list = [lunch, dinner, reservations, outdoor, waiter, classy, gfk]
        labels_tensor = tf.convert_to_tensor(labels_list)

        # get the corresponding image
        image_file = get_decode_and_resize_image(image_id=image_id)

        # here: how do I now create a TFExample and write it to the .tfrecords file?

    coord.request_stop()
    coord.join(threads)
And after I've created the .tfrecords file, can I then read it from my TensorFlow training code and batch the data automatically?
To expand on Alexandre's answer, you can do something like this:
# Set this up before your for-loop, you'll use this repeatedly
tfrecords_filename = 'myfile.tfrecords'
writer = tf.python_io.TFRecordWriter(tfrecords_filename)
# Then within your for-loop, you can write like so:
for ...:
    # here: how do I now create a TFExample and write it to the .tfrecords file
    example = tf.train.Example(features=tf.train.Features(feature={
        'image_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_file])),
        # the other features/labels you wish to include go here too
    }))
    writer.write(example.SerializeToString())
# then finally, don't forget to close the writer.
writer.close()
This assumes you have already converted the image into a byte array in the image_file variable.
I adapted this from this very helpful post that goes into detail on serialising images & may be helpful to you if my assumption above is false.
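As for reading the .tfrecords file back and batching, a rough sketch in the same queue-based style as your writing code, assuming image_raw holds the raw uint8 pixel buffer of a 299x299x3 image (the feature keys and shape must match what you wrote):
filename_queue = tf.train.string_input_producer(['tfrecords/data.tfrecords'])
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)
features = tf.parse_single_example(serialized_example, features={
    'image_raw': tf.FixedLenFeature([], tf.string),
    # declare your seven label floats here too, matching what you wrote
})
image = tf.decode_raw(features['image_raw'], tf.uint8)
image = tf.reshape(image, [299, 299, 3])
# shuffle_batch assembles shuffled batches for you behind the scenes
image_batch = tf.train.shuffle_batch([image], batch_size=32,
                                     capacity=1000, min_after_dequeue=500)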
To create a tf.train.Example, simply do example = tf.train.Example(). You can then manipulate it using the normal protocol buffers Python API.
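For instance, a short sketch of filling one in field by field through the protobuf API, reusing the writer from above (the feature names and values are just placeholders):
example = tf.train.Example()
# float labels, e.g. the 7-element labels_list from the question (as plain Python floats)
example.features.feature['labels'].float_list.value.extend([1.0, 0.0, 1.0])
# raw image bytes
example.features.feature['image_raw'].bytes_list.value.append(image_bytes)
writer.write(example.SerializeToString())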