CNN parameter estimate [closed] - tensorflow

I want to design a CNN for a binary image classification task: detecting whether a small object is present or absent in the images. The images are greyscale (unsigned short) with size 512x512 (downsampled from 2048x2048), and I have thousands of these images for training and testing.
It's my first time using a CNN for this kind of task, and I hope to achieve ~80% accuracy to start, so I'd like to know, IN GENERAL, how to design the CNN so that I have the best chance of achieving my goal.
My specific questions are:
How many convolution layers and fully-connected layers should I use?
How many feature maps should there be in each convolution layer, and how many nodes in each fully-connected layer?
What's the filter size in each convolution layer?
I'm trying to implement the CNN using Keras with the TensorFlow backend, and my computer's specs are: 8 Intel Xeon CPUs @ 3.5 GHz; 32 GB memory; 2 Nvidia GPUs: GeForce GTX 980 and Quadro K4200.
With that hardware and software, I'd also like to know the computational time of the training. Specifically,
How long will it take to train the CNN (with the above structure) on the 1000 images mentioned above per epoch, and (in general) how many epochs are needed to achieve ~80% accuracy?
The reason I want to know the typical computational time is to make sure I set up everything properly.
Hope I didn't ask too many questions in my first post.

You'd probably do very well by taking one of the existing models that Keras makes available for this task, such as VGG16, VGG19, InceptionV3 and others: https://keras.io/applications/.
You can experiment with them, try different parameters, little tweaks here and there, and so on. Since you've got only a single object class to detect, you can probably try smaller versions of them.
All the code can be found at https://github.com/fchollet/keras/tree/master/keras/applications
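As a rough illustration of that suggestion (my own sketch, not part of the original answer: the input size, the small classification head and the optimizer are assumptions, and the greyscale images would need to be stacked to 3 channels to use the ImageNet weights):

from keras.applications import VGG16
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

# Pretrained convolutional base; include_top=False drops the ImageNet classifier.
base = VGG16(weights='imagenet', include_top=False, input_shape=(512, 512, 3))
for layer in base.layers:
    layer.trainable = False          # start by freezing the pretrained layers

# Small binary head: object present / absent.
x = GlobalAveragePooling2D()(base.output)
x = Dense(128, activation='relu')(x)
out = Dense(1, activation='sigmoid')(x)

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()

Once this head trains reasonably well, you could unfreeze the last convolutional block and fine-tune it with a small learning rate.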
Speed is very relative. It's impossible to predict it exactly, because each installation method, each driver, each version, each operating system may or may not actually use your hardware capabilities properly or entirely.
But with your specifications it should be pretty fast, if everything is set up well.

What is the main purpose of ResNet if the vanishing gradient problem is solved using the RELU activation function? [closed]

I read that ResNet solves the vanishing gradient problem by using skip connections. But isn't that already solved by using RELU? Is there some other important thing I'm missing about ResNet, or does the vanishing gradient problem occur even when using RELU?
The ReLU activation solves the problem of vanishing gradient that is due to sigmoid-like non-linearities (the gradient vanishes because of the flat regions of the sigmoid).
The other kind of "vanishing" gradient seems to be related to the depth of the network (see this for example). Basically, when backpropagating the gradient from layer N to layer N-k, the gradient vanishes as a function of depth (in vanilla architectures). The idea of resnets is to help with gradient backpropagation (see, for example, Identity mappings in deep residual networks, where they present ResNet v2 and argue that identity skip connections are better at this).
A very interesting and relatively recent paper that sheds light on the workings of resnets is Residual Networks Behave Like Ensembles of Relatively Shallow Networks. The tl;dr of this paper could be (very roughly) summarized as follows: residual networks behave as an ensemble. Removing a single layer (i.e. a single residual branch, not its skip connection) doesn't really affect performance, but performance decreases in a smooth manner as a function of the number of layers that are removed, which is the way ensembles behave. Most of the gradient during training comes from short paths. They show that training only these short paths doesn't affect performance in a statistically significant way compared to when all paths are trained. This means that the effect of residual networks doesn't really come from depth, as the contribution of long paths is almost non-existent.
The main purpose of ResNet is to enable much deeper models. In theory, deeper models (VGG-style plain stacks, say) should show better accuracy, but in real life they usually do not. If we add shortcut connections to the model, we can increase the number of layers and the accuracy as well.
While the ReLU activation function does address the vanishing gradient problem, it does not provide the deeper layers with extra information as ResNets do. The idea of propagating the original input as deep as possible through the network, thereby helping the network learn much more complex features, is why the ResNet architecture was introduced and why it achieves such high accuracy on a variety of tasks.
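To illustrate the skip connection the answers above refer to, here is a minimal sketch in Keras (my own code; the filter count and input shape are made up):

from keras.layers import Input, Conv2D, BatchNormalization, Activation, Add
from keras.models import Model

def residual_block(x, filters):
    shortcut = x                                   # identity skip connection
    y = Conv2D(filters, (3, 3), padding='same')(x)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = Conv2D(filters, (3, 3), padding='same')(y)
    y = BatchNormalization()(y)
    y = Add()([shortcut, y])                       # add the input back onto the branch output
    return Activation('relu')(y)

inputs = Input(shape=(32, 32, 64))
outputs = residual_block(inputs, 64)
model = Model(inputs, outputs)

Because the addition passes the input through unchanged, the gradient has a direct path around the convolutional branch during backpropagation.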

CPU simulations for deep reinforcement learning cause latency problems when running CNN on the GPU [closed]

Recently, I've trained an AI for the game Snake using deep reinforcement learning. For this I've used the Keras API with tensorflow-gpu 2.
(Not sure if you need to look at the code, but if so, these are the most important parts:
https://github.com/aeduin/snake_ai/blob/2e935fd124a54ef19f4287d19be9bd88af50f337/train.py
https://github.com/aeduin/snake_ai/blob/2e935fd124a54ef19f4287d19be9bd88af50f337/ai.py#L125
https://github.com/aeduin/snake_ai/blob/2e935fd124a54ef19f4287d19be9bd88af50f337/convnet_ai.py#L161
)
I measured that the total time spent in model.predict is 30 times more than in model.fit, while there are only 3 times as many predictions as actual training examples. My hypothesis is that this is because the simulation of the game runs on the CPU, and the latency of transferring data between CPU and GPU is what makes predicting so slow. (Fitting happens at the end of a simulated game, so all the training data is transferred in one go.)
A secondary question would be: how do I test this hypothesis? But the main question is: how do I solve this problem? I know how to write CUDA programs and could write a CUDA implementation of Snake. However, I don't know enough about TensorFlow or any other ML framework to know how to let it use this data without the data going through the CPU/RAM first. How would I do that?
One thing that helps is increasing the number of simultaneously simulated games. The 30x factor was with 512 or 256 (I don't remember which) simultaneous games, and with lower counts it gets much worse (50x with only 2 simultaneous games). But I don't expect going beyond 512 simultaneous games would add much value.
I've also tried running the convolutional neural network on the CPU. While the ratio is more reasonable (predicting is only 4 times slower than fitting), fitting and predicting both take more time than when using the GPU.
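One way to probe the transfer-latency hypothesis (my own sketch, not from the question; the toy model and input shape are made up) is to time the forward pass on a NumPy batch living in host RAM versus a tensor already placed on the GPU:

import time
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(16, 16, 4)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(4),
])

batch = np.random.rand(512, 16, 16, 4).astype(np.float32)  # data in host RAM
with tf.device('/GPU:0'):
    batch_gpu = tf.constant(batch)                          # data already on the GPU

for name, data in [('NumPy batch (copied to GPU each call)', batch),
                   ('tensor already on GPU', batch_gpu)]:
    model(data, training=False)                             # warm-up
    start = time.time()
    for _ in range(100):
        model(data, training=False)
    print(name, (time.time() - start) / 100, 's per call')

If the gap between the two timings is large, the host-to-device copy is a real part of the cost. Independently of that, calling the model directly (model(x, training=False)) tends to have less per-call overhead than model.predict for small batches in TF 2.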

How to specify the layers to be "fine-tuned" in TF object detection API? [closed]

I followed the steps on TF object detection API tutorial page using the ssd_mobilenet_v1_pets.config configuration at https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_mobilenet_v1_pets.config
It is very cool that I am able to train a custom detector using my own images. However, as the whole training process appears to me to be a "black box", I am wondering how I could configure which layers are fine-tuned, just like how one can configure which layers are retrained when using an Inception model in tensorflow/keras.
I think that the layers to be fine-tuned could (or should) be different if I had, say, 10000 images rather than 100 images.
To fine-tune specific layers, you have to freeze the rest. If you know the names of the layers of your model that you want to freeze, you can add them to freeze_variables (Source) in the train_config part of your config file, prefixing the pattern with '.*' so that a regular expression such as '.*FeatureExtractor.' matches them.
For example, this:
train_config: {
  ...
  freeze_variables: ".*FeatureExtractor."
}
will freeze all layers of the feature extractor in the Faster R-CNN architecture.
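For comparison, the Keras-side workflow the question alludes to looks roughly like this (my own sketch; the choice of InceptionV3 and the number of unfrozen layers are arbitrary):

from keras.applications import InceptionV3

# Load a pretrained backbone without its classification head.
base = InceptionV3(weights='imagenet', include_top=False)

# Freeze everything except the last 20 layers, which will be fine-tuned.
for layer in base.layers[:-20]:
    layer.trainable = False
for layer in base.layers[-20:]:
    layer.trainable = True

The object detection API achieves the same effect declaratively, via the freeze_variables regular expressions shown above.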

Weights of neural network not changing [closed]

I'm working on a TensorFlow project in which I have a neural network in a reinforcement learning system, used to predict the Q values. I have 50 inputs and 10 outputs. Some of the inputs are in the range 30-70 and the rest are between 0-1, so I normalize only the first group, using this formula:
x_new = (x - x_min)/(x_max - x_min)
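For concreteness, a small sketch of that normalization (my own code; which columns fall in the 30-70 range is a made-up assumption):

import numpy as np

def min_max_scale(x, cols):
    # Min-max normalize only the selected columns of a (batch, 50) array.
    x = x.copy()
    col_min = x[:, cols].min(axis=0)
    col_max = x[:, cols].max(axis=0)
    x[:, cols] = (x[:, cols] - col_min) / (col_max - col_min)
    return x

# Example: assume the first 25 features are the ones in the 30-70 range.
batch = np.random.uniform(30, 70, size=(10, 50)).astype(np.float32)
batch[:, 25:] = np.random.rand(10, 25)          # the rest is already in [0, 1]
scaled = min_max_scale(batch, np.arange(25))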
Although I know the mathematical basis of neural networks, I don't have experience applying them to real cases, so I don't really know whether the hyperparameters I am using are chosen correctly. The ones I currently have are:
2 hidden layers with 10 and 20 neurons respectively
Learning rate of 0.5
Batch size of 10 (I have tried different values up to 256, obtaining the same result)
The problem I'm not able to solve is that the weights of this neural network only change in the first two or three iterations and stay fixed afterwards.
What I had read in other posts is that the algorithm is getting stuck in a local optimum, and that normalizing the inputs is a good way to address it. However, after normalizing the inputs, I am still in the same state. So my question is whether anyone knows where the problem may be, and whether there is any other technique (like normalization) that I should add to my pipeline.
I haven't added any code to the question because I think my problem is rather conceptual. However, if more details are needed, I will add them.
Some pointers you can check:
50 input data points with 10 classes? If this is the case, the data is too small for the network to learn anything at all.
Which activation function are you using? Try ReLU instead of sigmoid or tanh:
activation functions
How deep is your network? Maybe your gradient is either vanishing or exploding:
vanishing or exploding gradients
Check whether your network can overfit the training data; if it can't, it isn't learning anything (see the sketch below).
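In the spirit of the last pointer, a quick sanity check one might run (my own sketch mirroring the question's 50-input / 10-output setup; the random data is only there to show the mechanics, you would use a small fixed slice of your real data):

import numpy as np
import tensorflow as tf

# Tiny network similar to the one described in the question.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu', input_shape=(50,)),
    tf.keras.layers.Dense(20, activation='relu'),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss='mse')

# Try to overfit a tiny fixed batch; if the loss barely moves,
# something in the pipeline (learning rate, targets, gradients) is broken.
x_small = np.random.rand(16, 50).astype(np.float32)
y_small = np.random.rand(16, 10).astype(np.float32)
history = model.fit(x_small, y_small, epochs=500, verbose=0)
print('final training loss:', history.history['loss'][-1])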

New docs representation in doc2vec Tensorflow [closed]

I trained a doc2vec model in TensorFlow, so now I have embedded vectors for the words in the dictionary and vectors for the documents.
In the paper "Distributed Representations of Sentences and Documents" (Quoc Le, Tomas Mikolov), the authors write:
"the inference stage" to get paragraph vectors D for new paragraphs (never seen before) by adding more columns in D and gradient descending on D while holding W, U, b fixed.
I have the pretrained model, so we have W, U and b as graph variables. The question is how to implement inference of D (for a new document) efficiently in TensorFlow.
For most neural networks, the output of the network (a class for classification problems, a number for regression, ...) is the value you are interested in. In those cases, inference means running the frozen network on some new data (forward propagation) to compute the desired output.
For those cases, several strategies can be used to deliver the desired output quickly for multiple new data points: scaling horizontally, reducing the complexity of the computation through quantisation of the weights, optimising the frozen graph computation (see https://devblogs.nvidia.com/tensorrt-3-faster-tensorflow-inference/), ...
doc2vec (and word2vec) is a different use case, however: the neural net is used to compute an output (a prediction of the next word), but the meaningful and useful data are the weights of the neural network after training. The inference stage is therefore different: you do not read the output of the neural net to get a vector representation of a new document; instead you train the part of the neural net that provides the vector representation of your document, while the rest of the neural net (W, U, b) is frozen.
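To make that inference step concrete, here is a minimal sketch (my own, written in TF 2 eager style for brevity; the shapes, the PV-DM-style concatenation and the exact roles of W, U, b are assumptions) of optimizing only the new document vector while the trained weights stay fixed:

import tensorflow as tf

embed_dim, vocab_size = 128, 10000

# Pretrained parameters; in practice these are loaded from the trained model.
W = tf.Variable(tf.random.normal([vocab_size, embed_dim]), trainable=False)      # word embeddings
U = tf.Variable(tf.random.normal([2 * embed_dim, vocab_size]), trainable=False)  # output weights
b = tf.Variable(tf.zeros([vocab_size]), trainable=False)                         # output bias

# The only trainable variable: the vector of the new, unseen document.
d_new = tf.Variable(tf.random.normal([1, embed_dim]))
optimizer = tf.keras.optimizers.SGD(learning_rate=0.05)

def inference_step(context_word_ids, target_word_id):
    # One gradient step on d_new; W, U and b are held fixed.
    with tf.GradientTape() as tape:
        context = tf.reduce_mean(tf.gather(W, context_word_ids), axis=0, keepdims=True)
        h = tf.concat([d_new, context], axis=1)            # document vector + averaged context
        logits = tf.matmul(h, U) + b
        loss = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(
                labels=[target_word_id], logits=logits))
    grads = tape.gradient(loss, [d_new])                   # gradient w.r.t. d_new only
    optimizer.apply_gradients(zip(grads, [d_new]))
    return loss

# Slide this over the new document's (context, target) pairs for a few epochs,
# then read the learned vector with d_new.numpy().
loss = inference_step(context_word_ids=[12, 7, 301, 42], target_word_id=99)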
How can you efficiently compute D (the document vector) in TensorFlow?
Experiment to find a good learning rate (a smaller value might be a better fit for shorter documents), as it determines how quickly your network converges to a representation of the document.
As the other parts of the neural net are frozen, you can scale the inference across multiple processes / machines.
Identify the bottlenecks: what is currently slow? Model computation? Text retrieval from disk or from an external data source? Storage of the results?
Knowing more about your current issues and their context might help.