How to use torch.nn.parallel.DistributedDataParallel in this case?

In my case, the training data is too large to store on a single computer or a single computing node in a cluster (due to the limited disk space on each node), so it is split into several parts and each part is stored on one computing node. Suppose there are 3 computing nodes: A, B, and C. Part 1 is stored in /data/training_data/part1/ on node A, part 2 in /data/training_data/part2/ on node B, and part 3 in /data/training_data/part3/ on node C.
Then, how do I train a convolutional neural network with torch.nn.parallel.DistributedDataParallel in this situation?
Could you please give some advice? Thanks a lot!
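For reference, here is a minimal sketch of one way this could be set up (an assumption-laden sketch, not an official recipe): run one training process per node, let each process read only the part stored on its own disk, and let DistributedDataParallel average the gradients across the nodes. MyDataset and MyConvNet are hypothetical placeholders for your own classes, and the script assumes MASTER_ADDR, MASTER_PORT, RANK, and WORLD_SIZE are set in the environment on each node. Note that DDP expects every process to run the same number of iterations, so the three parts should be roughly equal in size (or the imbalance has to be handled, e.g. by truncating to the shortest part).
# Minimal sketch: one process per node, each node reads only its local shard,
# and DDP all-reduces the gradients across the 3 nodes.
# MyDataset and MyConvNet are hypothetical placeholders.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader

def main():
    rank = int(os.environ["RANK"])              # 0 on node A, 1 on B, 2 on C
    world_size = int(os.environ["WORLD_SIZE"])  # 3
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

    # Each node loads only the part that lives on its own disk, so no
    # DistributedSampler is needed: the data is already partitioned.
    local_dir = "/data/training_data/part{}/".format(rank + 1)
    loader = DataLoader(MyDataset(local_dir), batch_size=32,
                        shuffle=True, num_workers=4)

    device = torch.device("cuda:0")
    model = DDP(MyConvNet().to(device), device_ids=[0])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    criterion = torch.nn.CrossEntropyLoss()

    for epoch in range(10):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()    # gradients are averaged across the 3 nodes here
            optimizer.step()

if __name__ == "__main__":
    main()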

Related

Person recognition using ML.NET/TensorFlow

I am a noob in ML. I have a Person table that has:
-----------------------------------
User
-----------------------------------
UserId | UserName | UserPicturePath
1 | MyName | MyName.jpeg
Now I have tens of millions of persons in my database. I want to train my model to predict the UserId given an image (png/jpeg/tiff) in bytes. So the input will be images and the output I am looking for is the UserId. Right now I am looking for a solution in ML.NET, but I am open to switching to TensorFlow.
Well, this is nothing but a mapping problem, particularly an id-to-face mapping problem, and neural nets excel at this more than at almost anything else.
As you have understood by now, you can do this using tensorflow, pytorch, or any other library that serves the same purpose.
But if you want to use tensorflow, read on for some ready-made code at the end. The easiest way to achieve your task is transfer learning, i.e. loading some pretrained model, freezing all but the last layer, and then training the network to produce a latent one-dimensional vector for a given face image. You can then save this vector in a database and map it to an id.
Then, whenever there is a new image and you want to predict an id for it, you run the image through the network, get its vector, and compute the cosine similarity with the vectors in your database. If the similarity is above some threshold and is the highest among them, you have found your id.
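To make that concrete, here is a rough sketch of the embedding-plus-cosine-similarity step, assuming a stock Keras backbone (MobileNetV2) stands in for a properly trained face-embedding model; known_embeddings, known_ids, and the threshold value are illustrative placeholders:
# Sketch only: a frozen pretrained backbone used as a feature extractor, plus
# a cosine-similarity lookup against the vectors stored in your database.
import numpy as np
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, pooling="avg")
base.trainable = False  # freeze everything, use it only to produce vectors

def embed(image_batch):
    # Map a batch of 224x224 face crops to L2-normalised vectors.
    x = tf.keras.applications.mobilenet_v2.preprocess_input(image_batch)
    vectors = base(x, training=False).numpy()
    return vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

def predict_user(image, known_embeddings, known_ids, threshold=0.7):
    # known_embeddings: (num_users, dim) array of unit-norm vectors from the DB,
    # known_ids: the UserId stored for each row. Both are placeholders here.
    query = embed(image[None, ...])[0]
    sims = known_embeddings @ query          # cosine similarity (rows are unit norm)
    best = int(np.argmax(sims))
    return known_ids[best] if sims[best] >= threshold else None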
There are many ways to go about this. Sure, you have to preprocess your data and augment it at the same time, but if you want some ready code to play with, have a look at this famous Happy House tutorial from Andrew Ng and his team:
https://github.com/gemaatienza/Deep-Learning-Coursera/blob/master/4.%20Convolutional%20Neural%20Networks/Keras%20-%20Tutorial%20-%20Happy%20House%20v2.ipynb
This should be sufficient for your needs.
Hope it helps!

Multiple-input multiple-output CNN with custom loss function

I have a set of m x n 2D input arrays, namely A, B, and C, and I have to predict two 2D output arrays, namely d and e, for which I do have the expected values. You can think of the inputs/outputs as grey images if you like.
Because the spatial information is relevant (these are actually 2D physical domains), I want to use a Convolutional Neural Network to predict d and e. My design (not tested yet) looks as follows:
Because I have multiple inputs, I guess I should use multiple columns (or branches) to find different features for each of the inputs (they look fairly different). Each of these columns follows an encoding-decoding architecture used in segmentation (see SegNet): a Conv2D block involves a convolution + batch normalisation + ReLU layer, and a Deconv2D block involves a deconvolution + batch normalisation + ReLU.
Then I can merge the outputs of the columns by concatenating, averaging, or taking the maximum, for example. To obtain the original m x n shape for each of the outputs, I have seen that I could do this with a 1 x 1 kernel convolution.
I want to predict the two outputs from that single layer. Is that okay from the network structure point of view? Finally my loss function depends on the outputs themselves compared to the target plus another relation I want to impose.
I would like some expert opinion on this since this is my first CNN design and I am not sure if it makes sense as it is now and/or if there are better approaches (or network architectures) to this problem.
I originally posted this on datascience but did not get much feedback. I am now posting it here since there is a bigger community on these topics, plus I would be very grateful to receive implementation tips besides network-architecture ones. Thanks.
I think your design makes sense in general:
Since A, B, and C are fairly different, you give each input its own transform sub-network and then fuse their outputs into an intermediate representation.
From the intermediate representation, you apply an additional CNN to decode D and E, respectively.
Several things:
A, B, and C looking different does not necessarily mean you can't stack them together as a 3-channel input. The decision should be based on whether the values in A, B, and C have different meanings. For example, suppose A is a grayscale image, B is a depth map, and C is also a grayscale image captured by a different camera. Then A and B are better processed in your suggested way, but A and C can be concatenated into one single input before feeding it to your network.
D and E are two outputs of the network and will be trained in a multi-task manner. Of course, they should share some latent features, and one should split at those features to apply a downstream branch with non-shared weights for each output. However, where to split is usually tricky.
This is really a broad question, asking for answers that rely mostly on opinions. Here are my two cents though, which you might find interesting as they do not go along with the previous answers here and on datascience.
First, I wouldn't go with separate columns for each input. AFAIK, when different inputs are processed by different columns, it is almost always the case that the network is some sort of Siamese network and the columns share the same weights, or at least the columns all need to produce a similar code. That is not your case here, so I would simply not bother.
Second, you are blessed with a problem that has a dense output and no need to learn a code. This should direct you straight to U-nets, which outperform any bottleneck-designed network without much effort. U-nets were introduced for dense segmentation, but they really shine at any dense-output problem.
In short, just stack your inputs together and use a U-net.
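To illustrate, here is a rough Keras sketch of that suggestion: A, B, and C stacked as the three channels of a single input, a small U-Net-style encoder-decoder with skip connections, and two 1 x 1 convolution heads for d and e. The filter counts are placeholders, m and n are assumed to be divisible by 4 (pad otherwise), and the extra relation between the outputs could be added on top with model.add_loss:
# Sketch only: stacked 3-channel input, small U-Net body, two dense-output heads.
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def build_model(m, n):
    inputs = layers.Input(shape=(m, n, 3))          # A, B, C stacked as channels

    # Encoder
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)
    c3 = conv_block(p2, 128)                        # bottom of the "U"

    # Decoder with skip connections back to the encoder features
    u2 = layers.Concatenate()([layers.UpSampling2D()(c3), c2])
    c4 = conv_block(u2, 64)
    u1 = layers.Concatenate()([layers.UpSampling2D()(c4), c1])
    c5 = conv_block(u1, 32)

    # Two 1x1-convolution heads give the m x n outputs d and e
    d = layers.Conv2D(1, 1, name="d")(c5)
    e = layers.Conv2D(1, 1, name="e")(c5)
    return tf.keras.Model(inputs, [d, e])

model = build_model(64, 64)                         # example m, n
model.compile(optimizer="adam", loss={"d": "mse", "e": "mse"})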

How does Nvidia Digits batch size and data shuffling work?

I am trying to train a neural network to detect steganographic images using TensorFlow and Nvidia Digits. I loaded a data set which has two subdirectories - Cover Images and Steg Images. I think the network has to process the cover/stegano image pairs together to learn which are the covers and which are steganographic images. Am I correct?
How does batch size work? If I give 1, does it take one image from both subdirectories and process them? Or do I have to set the batch size to 2 for that?
How does shuffling the data on each epoch work? Does it shuffle both subdirectories equally? For example, will 1.jpg be the third photo in both folders, or will it be different in each?
I think the network has to process the cover/stegano image pairs together to learn which are the covers and which are steganographic images. Am I correct?
I am not familiar with object detection (right?) in Nvidia Digits, so please check out their tutorials for more information.
You need to think about how the training data is labeled first. The examples I see usually use only one training folder and one validation folder (each with images and labels) - Digits divides your dataset itself, e.g. into 90 % training and 10 % validation images.
How does batch size work? If I give 1, does it take one image from both subdirectories and process them? Or do I have to set the batch size to 2 for that?
With the batch size you tell Digits how many images to use per iteration. It's used to divide the dataset (memory for the calculations is limited; you can't fit the whole dataset into one iteration). In one epoch the whole dataset is processed.
As written above, one image at a time, as far as I know.
How does shuffling the data on each epoch work? Does it shuffle both subdirectories equally? For example, will 1.jpg be the third photo in both folders, or will it be different in each?
The data should be shuffled automatically.

How can we benefit from sharding the data to speed up training time?

My main issue is: I have 204 GB of training tfrecords for 2 million images, and 28 GB of validation tfrecord files for 302,900 images. It takes 8 hours to train one epoch, and this will take 33 days of training. I want to speed that up by using multiple threads and shards, but I am a little bit confused about a couple of things.
In the tf.data.Dataset API there is a shard function. The documentation says the following about the shard function:
Creates a Dataset that includes only 1/num_shards of this dataset.
This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset.
When reading a single input file, you can skip elements as follows:
d = tf.data.TFRecordDataset(FLAGS.input_file)
d = d.shard(FLAGS.num_workers, FLAGS.worker_index)
d = d.repeat(FLAGS.num_epochs)
d = d.shuffle(FLAGS.shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=FLAGS.num_map_threads)
Important caveats:
Be sure to shard before you use any randomizing operator (such as shuffle).
Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline:
d = Dataset.list_files(FLAGS.pattern)
d = d.shard(FLAGS.num_workers, FLAGS.worker_index)
d = d.repeat(FLAGS.num_epochs)
d = d.shuffle(FLAGS.shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
                 cycle_length=FLAGS.num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=FLAGS.num_map_threads)
So my question regarding the code above is: when I shard my data using the shard function and set the number of shards (num_workers) to 10, I will have 10 splits of my data. Should I then set num_readers in the d.interleave function to 10 to guarantee that each reader takes one of the 10 splits?
And how can I control which split the interleave function will take? Because if I set the shard index (worker_index) in the shard function to 1, it will give me the first split. Can anyone give me an idea of how I can perform this distributed training using the above functions?
And what about num_parallel_calls? Should I set it to 10 as well?
Note that I have a single tfrecords file for training and another one for validation; I don't split the tfrecords files into multiple files.
First of all, how come the dataset is 204 GB for only 2 million images? I think your images are way too large. Try to resize them; after all, you would probably need to resize them to 224 x 224 in the end.
Second, try to reduce the size of your model. Your model could be either too deep or not efficient enough.
Third, try to parallelize your input reading process. It could be the bottleneck.
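On the third point, here is a sketch of what a parallel tf.data input pipeline could look like, assuming the single TFRecord file is first split into several files matching file_pattern and that parse_fn is your own parsing function (a reasonably recent tf.data API is assumed for AUTOTUNE):
# Sketch only: read several TFRecord files concurrently, parse on many threads,
# and prefetch so input preparation overlaps with training.
import tensorflow as tf

def build_dataset(file_pattern, parse_fn, batch_size,
                  num_readers=10, num_parse_threads=10):
    files = tf.data.Dataset.list_files(file_pattern, shuffle=True)
    dataset = files.interleave(
        tf.data.TFRecordDataset,
        cycle_length=num_readers,
        num_parallel_calls=tf.data.experimental.AUTOTUNE)
    dataset = dataset.shuffle(10000)
    dataset = dataset.map(parse_fn, num_parallel_calls=num_parse_threads)
    dataset = dataset.batch(batch_size)
    return dataset.prefetch(tf.data.experimental.AUTOTUNE)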

Inference on several inputs in order to calculate the loss function

I am modeling a perceptual process in tensorflow. In the setup I am interested in, the modeled agent plays a resource game: it has to choose 1 out of n resources, relying only on the label that a classifier gives to the resource. Each resource is an ordered pair of two reals. The classifier only sees the first real, but payoffs depend on the second. There is a function taking the first to the second.
Anyway, ideally I'd like to train the classifier in the following way:
In each run, the classifier gives labels to the n resources.
The agent then gets the payoff of the resource corresponding to the highest label in some predetermined ranking (say, A > B > C > D), chosen randomly in case of a draw.
The loss is taken to be the normalized absolute difference between the payoff thus obtained and the maximum payoff in the set of resources. I.e., (Payoff_max - Payoff) / Payoff_max
For this to work, one needs to run inference n times, once for each resource, before calculating the loss. Is there a way to do this in tensorflow? If I am tackling the problem in the wrong way feel free to say so, too.
I don't have much knowledge of the ML aspects of this, but from a programming point of view, I can see two ways of doing it. One is by copying your model n times. All the copies can share the same variables. The outputs of all of these copies would go into some function that determines the highest label. As long as this function is differentiable, the variables are shared, and n is not too large, it should work. You would need to feed all n inputs together. Note that backprop will run through each copy and update your weights n times. This is generally not a problem, but if it is, I heard about some fancy tricks one can do using partial_run.
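A small Keras sketch of that first option, in case it helps: the same classifier (shared weights) is applied to each of the n resources, and the stacked label distributions can then be fed to the custom payoff-based loss. n = 4 and the layer sizes are purely illustrative.
# Sketch only: n forward passes through one classifier with shared weights.
import tensorflow as tf
from tensorflow.keras import layers

n = 4  # number of resources per run

classifier = tf.keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(1,)),  # sees only the first real
    layers.Dense(4, activation="softmax"),                  # labels A, B, C, D
])

inputs = [layers.Input(shape=(1,)) for _ in range(n)]
label_probs = [classifier(x) for x in inputs]        # same weights reused n times
stacked = layers.Lambda(lambda t: tf.stack(t, axis=1))(label_probs)  # (batch, n, 4)

model = tf.keras.Model(inputs, stacked)
# A custom payoff-based loss would compare `stacked` with the resources' payoffs;
# gradients flow back through every copy into the shared weights.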
Another way is to use tf.while_loop. It is pretty clever - it stores activations from each run of the loop and can do backprop through them. The only tricky part should be to accumulate the inference results before feeding them to your loss. Take a look at TensorArray for this. This question can be helpful: Using TensorArrays in the context of a while_loop to accumulate values
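And a rough sketch of the second option, with dummy stand-ins for the classifier and the resources (replace them with your own model and inputs):
# Sketch only: run inference n times inside tf.while_loop and accumulate the
# results in a TensorArray before computing the loss.
import tensorflow as tf

# Dummy stand-ins so the sketch is self-contained.
model = tf.keras.Sequential([tf.keras.layers.Dense(4, activation="softmax")])
resources = tf.constant([[0.1], [0.5], [0.9]])   # n = 3 resources, first real only

n = tf.shape(resources)[0]
ta = tf.TensorArray(dtype=tf.float32, size=n)

def body(i, ta):
    logits = model(resources[i:i + 1])           # inference on the i-th resource
    return i + 1, ta.write(i, logits[0])

_, ta = tf.while_loop(lambda i, ta: i < n, body, [tf.constant(0), ta])
all_logits = ta.stack()                          # shape (n, 4), fed to the loss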