How to feed the training examples one by one in tensorflow (e.g., cifar example) - tensorflow

I just want to feed the data in its original order in TensorFlow. Specifically, I am working on the cifar10 example. Does anyone know how to guarantee that examples are fed in exactly the same order as they appear in the data file?
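With the modern tf.data API (rather than the queue-based input pipeline the cifar10 tutorial uses), order preservation is the default as long as you never call shuffle(); even a parallel map() is deterministic unless you opt out. A minimal sketch illustrating this:

```python
import tensorflow as tf

# A tf.data pipeline yields elements in source order as long as you never
# call shuffle(); map() is deterministic by default even when parallelized.
data = tf.range(10)
ds = tf.data.Dataset.from_tensor_slices(data)
ds = ds.map(lambda x: x * 2, num_parallel_calls=tf.data.AUTOTUNE)  # still ordered
out = list(ds.as_numpy_iterator())
# out preserves the original element order: [0, 2, 4, ..., 18]
```

The same holds when the source is a file reader such as tf.data.TFRecordDataset over a single file: records come out in file order.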

Related

How to extract weights of DQN agent in TF-Agents framework?

I am using TF-Agents for a custom reinforcement learning problem, where I train a DQN (constructed using DqnAgents from the TF-Agents framework) on some features from my custom environment, and separately use a keras convolutional model to extract these features from images. Now I want to combine these two models into a single model and use transfer learning, where I want to initialize the weights of the first part of the network (images-to-features) as well as the second part which would have been the DQN layers in the previous case.
I am trying to build this combined model using keras.layers and wrapping it with the TF-Agents tf.networks.sequential class to bring it into the form required when passing it to the DqnAgent() class. (Let's call this statement (a).)
I am able to initialize the image feature extractor network's layers with the saved weights, since I stored it as a .h5 file and can obtain NumPy arrays from it. So I am able to do the transfer learning for this part.
The problem is with the DQN layers. I saved the policy from the previous example in the prescribed TensorFlow SavedModel format (.pb), which gives me a folder containing the model attributes. However, I am unable to view or extract the weights of my DQN this way, and the recommended tf.saved_model.load('policy_directory') is not very transparent about what data I can see regarding the policy. If I am to follow the transfer learning of statement (a), I need to extract the weights of my DQN and assign them to the new network. The documentation seems quite sparse for this transfer learning case.
Can anyone help me in this, by explaining how I can extract weights from the Saved Model method (from the pb file)? Or is there a better way to go about this problem?
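One workaround (not specific to TF-Agents, and the variable names you get back are whatever the SavedModel recorded, not necessarily your layer names): every SavedModel folder stores its weights as an ordinary checkpoint under variables/, which tf.train.load_checkpoint can read back as NumPy arrays. A self-contained sketch, using a tiny stand-in tf.Module in place of the saved policy; in practice you would point ckpt_path at your own 'policy_directory/variables/variables':

```python
import tempfile
import tensorflow as tf

# Stand-in for the saved policy (hypothetical; replace with your own SavedModel).
class Tiny(tf.Module):
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.ones([3, 4]), name='w')

export_dir = tempfile.mkdtemp()
tf.saved_model.save(Tiny(), export_dir)

# A SavedModel stores its weights as a checkpoint under variables/.
ckpt_path = export_dir + '/variables/variables'
names_and_shapes = tf.train.list_variables(ckpt_path)  # [(name, shape), ...]
reader = tf.train.load_checkpoint(ckpt_path)
# Pull each entry out as a NumPy array, ready to assign to a new network.
weights = {name: reader.get_tensor(name) for name, _ in names_and_shapes}
```

From there you can match arrays to the layers of the new combined network by shape and assign them with layer.set_weights().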

Keras - Changing the way images are taken in a dataset in fit()

I'm studying machine learning and deep learning. I'm trying to customize this model from the Keras website https://keras.io/examples/generative/wgan_gp/
My model takes 3 512x512 images in each training iteration (from 10 different directories), which are then divided into patches used to train the generator and discriminator. These images must be consecutive and belong to the same directory. The directory can be chosen randomly in each iteration, and the 3 images must be taken from it.
In summary, for each training iteration, the algorithm must select a random directory, take 3 consecutive images and divide them into patches to train the two networks.
How can I customize the way I iterate over the dataset in fit() to achieve this?
Providing the solution here from the answer given by Shubham Panchal in the comment section, for the benefit of the community.
You can do this using TensorFlow. See this tutorial on DCGAN. With the TensorFlow API, you can create a custom training loop around any existing Keras model. You may implement the custom training loop from the tutorial above and use the WGAN model you have.
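The sampling step inside such a custom loop could look like the sketch below (a minimal sketch, assuming one subdirectory per source and filenames that sort into chronological order; the function name and layout are illustrative, not from the WGAN-GP example):

```python
import os
import random

def sample_consecutive_paths(root_dir, n=3):
    """Pick a random subdirectory of root_dir, then n consecutive files
    from it. Assumes filenames sort into chronological order."""
    sub = random.choice(sorted(os.listdir(root_dir)))
    files = sorted(os.listdir(os.path.join(root_dir, sub)))
    start = random.randint(0, len(files) - n)
    return [os.path.join(root_dir, sub, f) for f in files[start:start + n]]
```

Each training iteration would call this once, decode the three images (e.g. with tf.io.read_file and tf.io.decode_png), split them into patches, and run one generator/discriminator step of the custom loop.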

Classification of a sequence of images (fixed number)

I successfully trained a CNN for a single image classification, using pre-trained resnet50 from tensorflow_hub.
Now my goal is to give as input to my network a chronological sequence of images (not a video), to classify the behavior of the subject.
Each sequence consists of 20 images taken every 100ms.
What is the best kind of NN? Where can I find documentation/examples for problems similar to mine?
Any time there is sequential data, some type of Recurrent Neural Network is a great candidate (usually in the form of an LSTM).
Your model may look like a CNN-LSTM combination, because your pictures have a sequential relationship.
Here is a link to some examples and tutorials. He sets up a CNN in his example, but you could probably rig the architecture to use the ResNet you have already trained. Though you are not dealing with a video, your problem shares the same domain.
Here is a paper that uses an NN architecture like the one described above, which you might find useful.
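A minimal sketch of the CNN-LSTM shape described above: a small per-frame CNN (a stand-in for the pretrained ResNet50 feature extractor) wrapped in TimeDistributed, followed by an LSTM over the 20-step sequence. The frame size and class count are illustrative assumptions:

```python
import tensorflow as tf

num_frames, size, num_classes = 20, 64, 4  # assumed sizes for illustration

# Per-frame feature extractor (stand-in for the pretrained ResNet50).
frame_cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation='relu'),
    tf.keras.layers.GlobalAveragePooling2D(),
])

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(num_frames, size, size, 3)),
    tf.keras.layers.TimeDistributed(frame_cnn),  # -> (batch, 20, features)
    tf.keras.layers.LSTM(32),                    # aggregates over the 20 steps
    tf.keras.layers.Dense(num_classes, activation='softmax'),
])
```

The model maps a batch of shape (batch, 20, 64, 64, 3) to per-sequence class probabilities of shape (batch, num_classes).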

Using TensorFlow object detection API models at prediction

I have used the TensorFlow object detection API to train the SSD Inception model from scratch. The evaluation script shows that the model has learned something and now I want to use the model.
I have looked at the object detection ipynb that can feed single images to a trained model. However, this is for SSD with MobileNet. I have used the following line (after loading the meta graph) to print the tensor names of the TensorFlow model I trained.
print([str(op.name) for op in tf.get_default_graph().get_operations()])
But it does not contain the same input or output tensor names as in the ipynb. I have also searched through the code, but many functions point toward each other and it is difficult to find what I am looking for.
How can I find the tensor names I need? Or is there another method I do not know about?
To use the graph, you need to freeze/export it, using this provided script. The resulting .pb file will contain the nodes you need. I don't know why it's organized like that, but it is.
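Once the graph is exported, you can list its node names by parsing the GraphDef out of the .pb file. A minimal sketch, using a toy in-memory graph in place of the exported file (in practice you would ParseFromString the bytes of your frozen .pb); for graphs exported by the object detection API the input is typically named image_tensor, with outputs detection_boxes, detection_scores, detection_classes and num_detections:

```python
import tensorflow as tf

# Toy stand-in for a frozen graph; in practice, read the bytes of your
# exported .pb file instead of serializing a graph built here.
g = tf.Graph()
with g.as_default():
    tf.constant(1.0, name='image_tensor')

graph_def = tf.compat.v1.GraphDef()
graph_def.ParseFromString(g.as_graph_def().SerializeToString())
node_names = [n.name for n in graph_def.node]
# node_names now includes 'image_tensor'
```

Grepping node_names for 'image' or 'detection' is usually enough to locate the input and output tensors of an exported detection model.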

mnist and cifar10 examples with TFRecord train/test file

I am a new user of TensorFlow. I would like to use it to train on a dataset of 2M images. I did this experiment in Caffe using the LMDB file format.
After reading TensorFlow-related posts, I realized that TFRecord is the most suitable file format for this. Therefore, I am looking for complete CNN examples that use TFRecord data. I noticed that the image-related tutorials (mnist and cifar10 in link1 and link2) use a different binary file format where the entire dataset is loaded at once. I would like to know whether these tutorials (mnist and cifar10) are available using TFRecord data (for both CPU and GPU).
I assume that you want to both write and read TFRecord files. I think what is done here in reading_data.py should help you convert the MNIST data into TFRecords.
For reading them back, this script does the trick: fully_connected_reader.py
This could be done similarly with cifar10.
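The round trip those scripts perform can be sketched compactly with the current tf.io/tf.data API (a minimal sketch; a random 32x32x3 CIFAR-like image and a temporary file path stand in for real data):

```python
import os
import tempfile
import numpy as np
import tensorflow as tf

path = os.path.join(tempfile.mkdtemp(), 'train.tfrecord')
image = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
label = 7

# Write: serialize each (image, label) pair as a tf.train.Example.
with tf.io.TFRecordWriter(path) as writer:
    example = tf.train.Example(features=tf.train.Features(feature={
        'image': tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[image.tobytes()])),
        'label': tf.train.Feature(
            int64_list=tf.train.Int64List(value=[label])),
    }))
    writer.write(example.SerializeToString())

# Read: parse each record back into a (image, label) tensor pair.
def parse(record):
    feats = tf.io.parse_single_example(record, {
        'image': tf.io.FixedLenFeature([], tf.string),
        'label': tf.io.FixedLenFeature([], tf.int64),
    })
    img = tf.reshape(tf.io.decode_raw(feats['image'], tf.uint8), [32, 32, 3])
    return img, feats['label']

ds = tf.data.TFRecordDataset(path).map(parse).batch(1)
```

With 2M images you would shard the writes across multiple TFRecord files and interleave them at read time, but the per-record format stays the same.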