I am a PyTorch user and I have a dataset that comes as .tfrec (TFRecord) files. I want to convert them to JPEG/PNG format so that I can read them in my PyTorch code.
I have searched Google but found nothing.
Any help on how a PyTorch user can handle .tfrec files?
If I read them directly like this:
import torchvision.transforms as T
from torchvision.datasets import ImageFolder
transform_train = T.Compose([
    T.RandomCrop(128, padding_mode="reflect"),
    T.RandomHorizontalFlip(),
    T.ToTensor()
])
train_ds = ImageFolder(
    root=path_to_folder,
    transform=transform_train
)
it throws this error:
RuntimeError: Found 0 files in subfolders Supported extensions are:
.jpg,.jpeg,.png,.ppm,.bmp,.pgm,.tif,.tiff,.webp
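A minimal conversion sketch with TensorFlow, assuming each record stores the encoded image under an 'image' bytes feature — the real key depends on how the .tfrec files were written, so print one parsed record first to check:

import tensorflow as tf

raw_ds = tf.data.TFRecordDataset("train.tfrec")
features = {"image": tf.io.FixedLenFeature([], tf.string)}  # hypothetical key

tf.io.gfile.makedirs("images")
for i, raw in enumerate(raw_ds):
    example = tf.io.parse_single_example(raw, features)
    image = tf.io.decode_image(example["image"])  # handles JPEG/PNG bytes
    tf.io.write_file(f"images/{i:05d}.png", tf.io.encode_png(image))

The resulting folder of PNGs can then be read with the usual PyTorch tooling.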
I can't seem to find any documentation on how to use this model.
I am trying to use it to print out the objects that appear in a video.
Any help would be greatly appreciated. I am just starting out, so go easy on me.
"I am trying to use it to print out the objects that appear in a video"
I interpret this as: your problem is to print out the names of the detected objects.
I don't know where you got a Faster R-CNN trained on Open Images v4, so I will show the approach using the corresponding model from TensorFlow Hub.
After some digging around and a LOT of trial and error, I came up with this:
#!/home/ahmed/anaconda3/envs/TensorFlow/bin/python3.8
import sys
import imageio
import tensorflow as tf
import tensorflow_hub as hub

# sys.argv[1] is used for taking the video path from the terminal
video_path = sys.argv[1]
# pass the video file to imageio so it can be read later frame by frame
video = imageio.get_reader(video_path)

dictionary = {}

# download and extract the model (faster_rcnn/openimages_v4/inception_resnet_v2 or
# openimages_v4/ssd/mobilenet_v2) into the same folder
module_handle = "*Path to the model folder*"
detector = hub.load(module_handle).signatures['default']

# loop over every frame in the video
for index, frame in enumerate(video):
    # convert the frame to tf.float32, the only input format the model accepts
    image = tf.image.convert_image_dtype(frame, tf.float32)[tf.newaxis]
    # pass the converted image to the model
    detector_output = detector(image)
    class_names = detector_output["detection_class_entities"]
    scores = detector_output["detection_scores"]
    # in case there are multiple objects in the frame
    for i in range(len(scores)):
        if scores[i] > 0.3:
            # convert from bytes to string
            obj = class_names[i].numpy().decode("ascii")
            # record each detected object and the frame numbers it appears in
            if obj not in dictionary:
                dictionary[obj] = [index]
            else:
                dictionary[obj].append(index)

print(dictionary)
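Assuming the script is saved as detect_objects.py (a name chosen here, not from the original post), it is run from the terminal with the video path as its only argument:

python detect_objects.py /path/to/video.mp4

The printed dictionary maps each detected class name to the list of frame indices in which it appears.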
I've been trying to use a hyperspectral image dataset that comes as .mat files. I found that with scipy's loadmat function I can load the hyperspectral images and select some bands to view them as an RGB image.
def RGBread(image):
    images = loadmat(image).get('new_image')
    return abs(images[:,:,(12,6,4)])

def SIread(image):
    images = loadmat(image).get('new_image')
    return abs(images[:,:,:])
While implementing the pix2pix architecture I ran into an unexpected error. I pass the list of dataset file names to a function that is responsible for loading the data (which are still .mat files). TensorFlow has no built-in way to read this format, so I load the data with my RGBread and SIread functions and then turn them into tensors.
def load_image(filename, augment=True):
    inimg = tf.cast(tf.convert_to_tensor(RGBread(ImagePATH+'/'+filename),
                                         dtype=tf.float32), tf.float32)[...,:3]
    tgimg = tf.cast(tf.convert_to_tensor(SIread(ImagePATH+'/'+filename),
                                         dtype=tf.float32), tf.float32)[...,:12]
    inimg, tgimg = resize(inimg, tgimg, IMG_HEIGH, IMG_WIDTH)
    if augment:
        inimg, tgimg = random_jitter(inimg, tgimg)
    return inimg, tgimg
When I call load_image with the path of a single .mat file (one hyperspectral image) from my dataset as the argument, it works perfectly.
plt.imshow(load_train_image(tr_urls[1])[0])
The problem started when I created my dataset tensor, because my RGBread function cannot take a tensor as a parameter: loadmat expects a string. I get the following error:
train_dataset = tf.data.Dataset.from_tensor_slices(tr_urls)
train_dataset = train_dataset.map(load_train_image,
                                  num_parallel_calls=tf.data.experimental.AUTOTUNE)
TypeError: expected str, bytes or os.PathLike object, not Tensor
After reading a lot about loading .mat files I found a user who recommended converting the data to TFRecord format. I've been trying to do this but couldn't get it to work. Could someone help me?
Rasterio may be useful here.
https://rasterio.readthedocs.io/en/latest/
It can read hyperspectral .tif files, which can be fed to tf.data through a tf.keras data generator. It may be a bit slow, so it is perhaps best done before training rather than at runtime.
An alternative is to ask whether you need the geotiff metadata. If not, you can preprocess and save as numpy arrays for tfrecords.
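For the TFRecord route specifically, a minimal sketch of that preprocessing step might look like the following. It reuses the RGBread and SIread functions from the question and serializes each image pair as raw float32 bytes; the feature names are choices made here for illustration, not a fixed API:

import numpy as np
import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

with tf.io.TFRecordWriter('train.tfrecords') as writer:
    for filename in tr_urls:  # tr_urls as in the question
        inimg = RGBread(ImagePATH + '/' + filename).astype(np.float32)
        tgimg = SIread(ImagePATH + '/' + filename).astype(np.float32)
        feature = {
            'inimg': _bytes_feature(inimg.tobytes()),
            'tgimg': _bytes_feature(tgimg.tobytes()),
        }
        example = tf.train.Example(features=tf.train.Features(feature=feature))
        writer.write(example.SerializeToString())

On the training side, the records can then be parsed with tf.io.parse_single_example and tf.io.decode_raw(..., tf.float32), followed by a reshape to the known image dimensions.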
I got a TFRecord data file filename = train-00000-of-00001 which contains images of unknown size and maybe other information as well. I know that I can use dataset = tf.data.TFRecordDataset(filename) to open the dataset.
How can I extract the images from this file to save it as a numpy-array?
I also don't know if there is any other information saved in the TFRecord file such as labels or resolution. How can I get these information? How can I save them as a numpy-array?
I normally only use numpy-arrays and am not familiar with TFRecord data files.
1.) How can I extract the images from this file to save it as a numpy-array?
What you are looking for is this:
record_iterator = tf.python_io.tf_record_iterator(path=filename)

for string_record in record_iterator:
    example = tf.train.Example()
    example.ParseFromString(string_record)

    print(example)

    # Exit after 1 iteration as this is purely demonstrative.
    break
2.) How can I get these information?
Here is the official documentation. I strongly suggest that you read it, because it goes step by step through how to extract the values that you are looking for.
Essentially, you have to convert example to a dictionary. So if I wanted to find out what kind of information is in a tfrecord file, I would do something like this (in context with the code stated in the first question): dict(example.features.feature).keys()
3.) How can I save them as a numpy-array?
I would build upon the for loop mentioned above. So for every loop, it extracts the values that you are interested in and appends them to numpy arrays. If you want, you could create a pandas dataframe from those arrays and save it as a csv file.
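A hedged sketch of that loop, assuming an integer 'label' feature (use the .keys() trick above to find the real feature names):

import numpy as np

labels = []
record_iterator = tf.python_io.tf_record_iterator(path=filename)
for string_record in record_iterator:
    example = tf.train.Example()
    example.ParseFromString(string_record)
    # int64_list here is an assumption; use bytes_list/float_list for other types
    labels.append(example.features.feature['label'].int64_list.value[0])

labels = np.array(labels)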
But...
You seem to have multiple tfrecord files... tf.data.TFRecordDataset(filename) returns a dataset that is used to train models.
So in the event of multiple tfrecords, you would need a double for loop. The outer loop goes through each file; for that particular file, the inner loop goes through all of its tf.Examples.
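For example (shard names are placeholders):

filenames = ['train-00000-of-00002', 'train-00001-of-00002']
for fname in filenames:
    record_iterator = tf.python_io.tf_record_iterator(path=fname)
    for string_record in record_iterator:
        example = tf.train.Example()
        example.ParseFromString(string_record)
        # ... extract the values you need, as shown above ...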
EDIT:
Converting to np.array()
import io

import numpy as np
import tensorflow as tf
from PIL import Image

record_iterator = tf.python_io.tf_record_iterator(path=filename)

for string_record in record_iterator:
    example = tf.train.Example()
    example.ParseFromString(string_record)

    print(example)

    # Get the values in a dictionary
    example_bytes = dict(example.features.feature)['image_raw'].bytes_list.value[0]
    image_array = np.array(Image.open(io.BytesIO(example_bytes)))
    print(image_array)

    break
Sources for the code above:
Base code
Converting bytes to PIL.JpegImagePlugin.JpegImageFile
Converting from PIL.JpegImagePlugin.JpegImageFile to np.array
Official Documentation for PIL
EDIT 2:
import io

import numpy as np
import tensorflow as tf
from PIL import Image

# Load image
cat_in_snow = tf.keras.utils.get_file(
    '320px-Felis_catus-cat_on_snow.jpg',
    'https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg')

#------------------------------------------------------Convert to tfrecords
def _bytes_feature(value):
    """Returns a bytes_list from a string / byte."""
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def image_example(image_string):
    feature = {
        'image_raw': _bytes_feature(image_string),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

with tf.python_io.TFRecordWriter('images.tfrecords') as writer:
    image_string = open(cat_in_snow, 'rb').read()
    tf_example = image_example(image_string)
    writer.write(tf_example.SerializeToString())
#------------------------------------------------------

#------------------------------------------------------Begin Operation
record_iterator = tf.python_io.tf_record_iterator(path='images.tfrecords')

for string_record in record_iterator:
    example = tf.train.Example()
    example.ParseFromString(string_record)

    print(example)

    # OPTION 1: convert bytes to arrays using PIL and IO
    example_bytes = dict(example.features.feature)['image_raw'].bytes_list.value[0]
    PIL_array = np.array(Image.open(io.BytesIO(example_bytes)))

    # OPTION 2: convert bytes to arrays using Tensorflow
    with tf.Session() as sess:
        TF_array = sess.run(tf.image.decode_jpeg(example_bytes, channels=3))

    break
#------------------------------------------------------

#------------------------------------------------------Compare results
(PIL_array.flatten() != TF_array.flatten()).sum()
PIL_array == TF_array

PIL_img = Image.fromarray(PIL_array, 'RGB')
PIL_img.save('PIL_IMAGE.jpg')

TF_img = Image.fromarray(TF_array, 'RGB')
TF_img.save('TF_IMAGE.jpg')
#------------------------------------------------------
Remember that a tfrecord is simply a way of storing information for tensorflow models to read in an efficient manner.
I use PIL and io to essentially convert the bytes back to an image: io takes the bytes and turns them into a file-like object that PIL.Image can then read.
Yes, there is a pure tensorflow way to do it: tf.image.decode_jpeg
Yes, there is a difference between the two approaches when you compare the two arrays
Which one should you pick? Tensorflow is not the way to go if you are worried about accuracy. As stated in Tensorflow's github: "The TensorFlow-chosen default for jpeg decoding is IFAST, sacrificing image quality for speed". Credit for this information belongs to this post
I'm trying to get TFDV working with RGB images as feature inputs, reading from a TFRecords file. I can read/write the image data to TFRecord files fine. Here are the relevant code snippets for writing, where img is a numpy [32,32,3] array:
feature = {'train/label': _int64_feature(y_train[i]),
           'train/image': _bytes_feature(tf.compat.as_bytes(img.tostring()))
           }
And reading back:
read_features = {'train/label': tf.FixedLenFeature([], tf.int64),
                 'train/image': tf.FixedLenFeature([], tf.string)}
I can then use frombuffer and reshape to get my image back correctly.
The issue is that when I run tfdv.generate_statistics_from_tfrecord() on that TFRecords file, it throws an error:
ValueError: '\xff ...... \x87' has type str, but isn't valid UTF-8 encoding. Non-UTF-8 strings must be converted to unicode objects before being added. [while running 'GenerateStatistics/RunStatsGenerators/TopKStatsGenerator/TopK_ConvertToSingleFeatureStats']
I've tried all kinds of different ways of writing the images, using astype(unicode) and more, but I can't get this working.
Any ideas please?
Thanks,
Paul
try the following:
image_string = open(image_location, 'rb').read()

feature = {'train/label': _int64_feature(y_train[i]),
           'train/image': _bytes_feature(image_string)
           }
This is taken from the official tutorial.
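If you switch to this, the reading side changes too: the feature now holds encoded image bytes rather than a raw pixel buffer, so you decode instead of calling frombuffer/reshape. A minimal sketch, where serialized_example stands for one record read from the file:

parsed = tf.parse_single_example(serialized_example, read_features)
label = parsed['train/label']
image = tf.image.decode_image(parsed['train/image'])  # uint8 [H, W, C] tensor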
I have noticed that Tensorflow provides standard procedures for decoding jpeg, png and gif images after reading files. For instance for png:
import tensorflow as tf
filename_queue = tf.train.string_input_producer(['/Image.png']) # list of files to read
reader = tf.WholeFileReader()
key, value = reader.read(filename_queue)
decoded_image = tf.image.decode_png(value) # use png or jpg decoder based on your files.
However, the tiff format decoder seems to be missing.
So what solutions exist for tiff files? Surely, I could convert my input images to png, but this doesn't seem to be a very smart solution.
There's currently no decoder for TIFF images. Look in tensorflow/core/kernels and you'll see:
decode_csv_op.cc
decode_gif_op.cc
decode_jpeg_op.cc
decode_png_op.cc
decode_raw_op.cc
No decode_tiff_op.cc. This could be a good target for community contribution.
As of February 2019, some (limited & experimental) TIFF support has been added as part of the Tensorflow I/O library:
Added a very preliminary TIFF support. TIFF format is rather complex so compressions such as JPEG have not been supported yet, but could be added if needed.
The following methods are currently available:
tfio.experimental.image.decode_tiff — decodes a TIFF-encoded image to a uint8 tensor.
tfio.experimental.image.decode_tiff_info — decodes a TIFF-encoded image's metadata.
An example usage from a Tensorflow tutorial:
import tensorflow as tf
import tensorflow_io as tfio
...
def parse_image(img_path: str) -> dict:
    ...
    image = tf.io.read_file(img_path)
    image = tfio.experimental.image.decode_tiff(image)
    ...
If tfio.experimental.image.decode_tiff() won't work for you (as it won't with my 32-bit TIFF files), you could try using cv2 as described in the answer to this post.
Other options are to use the .map() function with the (a) rasterio, (b) skimage, or (c) pillow packages; a sketch of the pillow variant follows.
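Since none of these libraries operate on tensors directly, the usual trick is to wrap the reader in tf.py_function inside .map(). A minimal sketch with pillow, assuming 3-channel TIFFs that PIL can open (file names are placeholders):

import numpy as np
import tensorflow as tf
from PIL import Image

def read_tiff(path):
    # path arrives as a 0-d string tensor inside tf.py_function
    img = Image.open(path.numpy().decode('utf-8'))
    return np.array(img, dtype=np.float32)

def parse_image(img_path):
    image = tf.py_function(read_tiff, [img_path], tf.float32)
    image.set_shape([None, None, 3])  # py_function loses static shape info
    return image

ds = tf.data.Dataset.from_tensor_slices(['a.tif', 'b.tif']).map(parse_image)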