I am trying to classify WAV files into different classes. However, the number of sound files (>10,000) is far more than I can load into RAM at once, so the only practical way to feed them in is in batches, using a data generator (like Keras's ImageDataGenerator with flow_from_directory). Can someone please help me with this? I have a custom spectrogram function that I would like to apply to each WAV file as it is processed.
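One possible approach (a minimal sketch; the paths/labels lists and the compute_spectrogram helper are placeholders for your own data and function) is to subclass tf.keras.utils.Sequence, so that Keras loads only one batch of WAV files from disk at a time:
import numpy as np
import tensorflow as tf

class SpectrogramSequence(tf.keras.utils.Sequence):
    """Loads WAV files in batches and converts each one to a spectrogram."""
    def __init__(self, file_paths, labels, batch_size, spectrogram_fn):
        self.file_paths = file_paths          # list of paths to .wav files
        self.labels = np.asarray(labels)      # class labels, same order as file_paths
        self.batch_size = batch_size
        self.spectrogram_fn = spectrogram_fn  # your custom spectrogram function

    def __len__(self):
        # number of batches per epoch
        return int(np.ceil(len(self.file_paths) / self.batch_size))

    def __getitem__(self, idx):
        # only this slice of files is read from disk for the current batch
        batch_paths = self.file_paths[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_labels = self.labels[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_x = np.stack([self.spectrogram_fn(p) for p in batch_paths])
        return batch_x, batch_labels

# usage (paths, labels and compute_spectrogram stand in for your own data/function):
# train_gen = SpectrogramSequence(paths, labels, batch_size=32, spectrogram_fn=compute_spectrogram)
# model.fit(train_gen, epochs=10)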
I am trying to modify a TensorFlow project so that it becomes compatible with TPUs.
For this, I started with the code explained on this site.
There, the COCO dataset is downloaded and its features are first extracted using the InceptionV3 model.
I wanted to modify this code so that it supports TPUs.
For this, I added the mandatory TPU setup code as per this link.
Within the TPU strategy scope, I created the InceptionV3 model using the Keras library and loaded it with ImageNet weights, as in the existing code.
Since the TPU needs its data to be stored on Google Cloud Storage, I created a TFRecord file using tf.Example with the help of this link.
I tried building this file in several ways so that it would contain the data the TPU will read through TFRecordDataset.
At first I added the image data and the image path directly to the file and uploaded it to a GCP bucket, but while reading it back I realized the image data was not useful: it carried no shape/size information, and I had not resized the images to the required dimensions before storing them. That file came to about 2.5 GB, which was fine.
Then I thought I would keep only the image paths in the cloud, so I created another TFRecord file containing just the paths. But that did not seem optimal either, since the TPU would have to open each image individually, resize it to 299x299, and then feed it to the model; it would be better to provide the image data through a .map() call on the TFRecordDataset. So I tried again, this time following this link, storing the R, G and B values along with the image path inside the TFRecord file.
However, the resulting TFRecord file is abnormally large, around 40-45 GB, and I ultimately stopped the run because memory was filling up on the Google Colab TPU.
The original COCO dataset is not that large, roughly 13 GB, and the TFRecord file is being built from only the first 30,000 records, so 40 GB looks like a strange number.
What is the problem with this way of storing the features? Is there a better way to store image data in a TFRecord file and then extract it through TFRecordDataset?
I think the COCO dataset processed as TFRecords should be around 24-25 GB on GCS. Note that TFRecords aren't meant to act as a form of compression; they represent data as protobufs so that it can be loaded efficiently into TensorFlow programs.
You might have more success if you refer to: https://cloud.google.com/tpu/docs/coco-setup (corresponding script can be found here) for converting COCO (or a subset) into TFRecords.
Furthermore, we have implemented detection models for COCO using TF2/Keras optimized for GPU/TPU here which you might find useful for optimal input pipelines. An example tutorial can be found here. Thanks!
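As a side note on where the 40-45 GB comes from: storing decoded R, G, B values writes raw pixel arrays, which are far larger than the compressed JPEGs on disk. A common alternative is to store the already-encoded JPEG bytes in the TFRecord and do the decoding/resizing inside .map() at read time. A rough sketch (the bucket path, file name and image_paths list are placeholders, not taken from the linked scripts):
import tensorflow as tf

def image_example(image_path):
    # store the compressed JPEG bytes plus the path; decoding happens at read time
    image_bytes = tf.io.gfile.GFile(image_path, "rb").read()
    feature = {
        "image_raw": tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_bytes])),
        "path": tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_path.encode()])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

# writing
with tf.io.TFRecordWriter("gs://your-bucket/coco_images.tfrecord") as writer:
    for path in image_paths:  # image_paths: your list of local COCO image files
        writer.write(image_example(path).SerializeToString())

# reading: decode and resize to 299x299 for InceptionV3 inside .map()
feature_description = {
    "image_raw": tf.io.FixedLenFeature([], tf.string),
    "path": tf.io.FixedLenFeature([], tf.string),
}

def parse_fn(serialized):
    parsed = tf.io.parse_single_example(serialized, feature_description)
    image = tf.io.decode_jpeg(parsed["image_raw"], channels=3)
    image = tf.image.resize(image, [299, 299])
    image = tf.keras.applications.inception_v3.preprocess_input(image)
    return image, parsed["path"]

dataset = (tf.data.TFRecordDataset("gs://your-bucket/coco_images.tfrecord")
           .map(parse_fn, num_parallel_calls=tf.data.AUTOTUNE)
           .batch(64)
           .prefetch(tf.data.AUTOTUNE))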
I see in the explanation that a TFRecord contains multiple classes and multiple images (a cat and a bridge). When it is written, both images go into a single TFRecord, and during read-back it is verified that the TFRecord contains two images.
Elsewhere I have seen people generate one TFRecord per image. I know you can load multiple TFRecord files like this:
train_dataset = tf.data.TFRecordDataset(tf.io.gfile.glob("<Path>/*.tfrecord"))
But which way is recommended? Should I build one TFRecord per image, or one TFRecord holding multiple images? And if I put multiple images into one TFRecord, how many is the maximum?
As you said, it is possible to save an arbitrary amount of entries in a single TFRecord file, and one can create as many TFRecord files as desired.
I would recommend using practical considerations to decide how to proceed:
On one hand, try to use fewer TFRecord files, which makes them easier to handle and move around in the filesystem
On the other hand, avoid letting individual TFRecord files grow to a size that becomes a problem for the filesystem
Keep in mind that it is useful to keep separate TFRecord files for train / validation / test split
Sometimes the nature of the dataset makes it obvious how to split into separate files (for example, I have a video dataset where I use one TFRecord file per participant session)
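To make the multiple-files option concrete, here is a rough sketch (the shard count and file names are arbitrary placeholders) of writing examples round-robin into a few TFRecord shards and then reading them all back with interleaving:
import tensorflow as tf

NUM_SHARDS = 8  # arbitrary; pick it so each file stays a manageable size

# write: round-robin the serialized examples across the shards
writers = [tf.io.TFRecordWriter(f"train-{i:03d}-of-{NUM_SHARDS:03d}.tfrecord")
           for i in range(NUM_SHARDS)]
for idx, example in enumerate(examples):  # examples: an iterable of tf.train.Example
    writers[idx % NUM_SHARDS].write(example.SerializeToString())
for w in writers:
    w.close()

# read: glob all shards back and interleave them for better shuffling/throughput
files = tf.data.Dataset.list_files("train-*-of-*.tfrecord", shuffle=True)
dataset = files.interleave(tf.data.TFRecordDataset,
                           cycle_length=4,
                           num_parallel_calls=tf.data.AUTOTUNE)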
I am new to this field. I have collected some point cloud data using a lidar sensor and a camera, so I now have .pcd files for the point clouds and .png files for the images. I want to organize this data like the KITTI dataset structure for 3D object detection, so I can use it with a model that takes the KITTI dataset as training data. Therefore I want to convert my .pcd files to .bin files as in KITTI, and I also need .txt label files, so I need to annotate my data in a way that produces the same label format as the KITTI dataset. Can somebody help me? I have searched a lot, and none of the labelling tools output the same attributes that are in KITTI's .txt label files.
This is the link for the KITTI 3D dataset.
http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d
There are a lot of different questions in your post, so I'm going to answer those I can. Here is a snippet of code showing how you can read a .pcd file:
import open3d as o3d
# read the point cloud from disk
pcd = o3d.io.read_point_cloud("../../path_to_your_file.pcd")
# print(pcd)  # prints a short summary of the loaded point cloud
and then you can format it as you want, including writing to binary.
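For example, KITTI velodyne .bin files are just flat float32 arrays with four values (x, y, z, reflectance) per point, so a rough conversion sketch could look like the following (assuming your .pcd has no intensity channel, a zero column is padded in; swap in real intensities if you have them):
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("../../path_to_your_file.pcd")
points = np.asarray(pcd.points, dtype=np.float32)        # N x 3 array of x, y, z

# KITTI expects 4 float32 values per point: x, y, z, reflectance.
# If your .pcd has no intensity field, pad with zeros (or plug in your own values).
intensity = np.zeros((points.shape[0], 1), dtype=np.float32)
kitti_points = np.hstack([points, intensity])

kitti_points.tofile("000000.bin")   # flat float32 binary in the KITTI velodyne layout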
This could be a helpful library; check it out:
link to open3D docs
link to open3D github
You can get more references below:
https://paperswithcode.com/task/3d-object-detection
I want to train my word2vec models on the HPC cluster provided by my university. However, I have been told that, in order to optimize storage on the cluster, I must convert my data to HDF5 and upload that to the cluster instead. My data consists of .txt files (the text files I want to train word2vec on). How am I supposed to convert .txt files into HDF5?
I have been going through the documentation but cannot seem to find a tool for .txt files. Should I write a script for this?
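There is no ready-made txt-to-HDF5 converter that I know of; a small script with h5py is the usual route. A minimal sketch (the file names are placeholders) that stores each document as one variable-length UTF-8 string in a single HDF5 dataset:
import glob
import h5py

txt_files = sorted(glob.glob("corpus/*.txt"))   # your training documents

with h5py.File("corpus.h5", "w") as f:
    dt = h5py.string_dtype(encoding="utf-8")    # variable-length string type
    dset = f.create_dataset("documents", shape=(len(txt_files),), dtype=dt)
    for i, path in enumerate(txt_files):
        with open(path, "r", encoding="utf-8") as fh:
            dset[i] = fh.read()                 # one document per entry

# later, on the cluster, stream the documents back out for word2vec training:
with h5py.File("corpus.h5", "r") as f:
    for doc in f["documents"]:
        text = doc.decode("utf-8") if isinstance(doc, bytes) else doc  # h5py may return bytes
        tokens = text.split()
        # feed `tokens` to your word2vec training loop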
I am a new user of TensorFlow. I would like to use it to train on a dataset of 2M images. I did this experiment in Caffe using the LMDB file format.
After reading TensorFlow-related posts, I realized that TFRecord is the most suitable file format for this. Therefore, I am looking for complete CNN examples that use TFRecord data. I noticed that the image-related tutorials (MNIST and CIFAR-10 in link1 and link2) use a different binary file format where the entire dataset is loaded at once. I would like to know whether these tutorials (MNIST and CIFAR-10) are available using TFRecord data (for both CPU and GPU).
I assume that you want to both write and read TFRecord files. I think what is done here in reading_data.py should help you convert the MNIST data into TFRecords.
For reading it back, this script does the trick: fully_connected_reader.py
This could be done similarly with CIFAR-10.
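As a rough sketch of what an end-to-end pipeline can look like once such a TFRecord file exists (the feature names, file name and the tiny model below are placeholder assumptions, not taken from those scripts), you can read the records with tf.data and feed them straight into a Keras CNN:
import tensorflow as tf

feature_description = {
    "image_raw": tf.io.FixedLenFeature([], tf.string),   # raw uint8 pixel bytes
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def parse_fn(serialized):
    parsed = tf.io.parse_single_example(serialized, feature_description)
    image = tf.io.decode_raw(parsed["image_raw"], tf.uint8)
    image = tf.reshape(image, [28, 28, 1])                # MNIST-sized example
    image = tf.cast(image, tf.float32) / 255.0
    return image, parsed["label"]

dataset = (tf.data.TFRecordDataset(["mnist_train.tfrecord"])
           .map(parse_fn, num_parallel_calls=tf.data.AUTOTUNE)
           .shuffle(10000)
           .batch(128)
           .prefetch(tf.data.AUTOTUNE))

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(dataset, epochs=5)   # never materializes the whole dataset in memory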