I am training an object detection pipeline developed with the TensorFlow library. My problem is that even after stopping the script, memory usage stays very high and does not go down. Can somebody recommend a remedy for this problem?
I am using TensorFlow 2.6 and the TensorFlow Object Detection API to train on my data.
Even after I stopped the older runs and re-ran my script (model_main_tf2), the older processes (with the same name, model_main_tf2) are still consuming a lot of memory, as can be seen in the figure below.
Try running:
tf.keras.backend.clear_session()
after you have finished running a model.
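For example, a minimal sketch (the model and training data here are just placeholders for your actual training code):

import tensorflow as tf

def train_once():
    # Build and train a model (placeholder for your real training loop).
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="adam", loss="mse")
    model.fit(tf.random.normal((32, 4)), tf.random.normal((32, 1)),
              epochs=1, verbose=0)

train_once()
# Release the global Keras/TensorFlow state once you are done with the model.
tf.keras.backend.clear_session()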
I have written some code that performs inference with TensorFlow's C API (CPU only). It runs on a cluster node where I have access to 24 CPUs and 1 GPU. I do not use the GPU, as the task will later need to run CPU-only.
Somehow, every time I call the TensorFlow code from the other program (OpenFOAM), TensorFlow seems to run parallelized across all CPUs, although I have not done anything to cause this behavior. Does TensorFlow apply this parallelization by default?
Greets and thanks in advance!
I am not sure exactly how you are using TensorFlow, but a typical TensorFlow training job has an input pipeline that can be thought of as an ETL process. The main activities involved are:
Extract: Read data from persistent storage
Transform: Use CPU cores to parse and perform preprocessing operations on the data such as image decompression, data augmentation transformations (such as random crop, flips, and color distortions), shuffling, and batching.
Load: Load the transformed data onto the accelerator device(s) (for example, GPU(s) or TPU(s)) that execute the machine learning model.
CPUs are mainly used during the transform step, where the input elements are preprocessed. To improve performance, this preprocessing is parallelized across multiple CPU cores by default.
TensorFlow provides the tf.data API, which offers the tf.data.Dataset.map transformation. To control the parallelism, map takes a num_parallel_calls argument.
Read more about this here:
https://www.tensorflow.org/guide/performance/datasets
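For example, a minimal sketch of a tf.data pipeline where the degree of parallelism is set explicitly (the data and the preprocessing function are placeholders; in recent TF versions you can also pass tf.data.AUTOTUNE instead of a fixed number):

import tensorflow as tf

def parse_and_augment(image, label):
    # Placeholder preprocessing: decode/resize/augment as needed.
    image = tf.image.random_flip_left_right(image)
    return image, label

# Assume in-memory tensors here just for illustration.
images = tf.random.normal((100, 52, 52, 3))
labels = tf.zeros((100,), dtype=tf.int32)

dataset = (
    tf.data.Dataset.from_tensor_slices((images, labels))
    # This is where the CPU parallelism of the preprocessing is controlled.
    .map(parse_and_augment, num_parallel_calls=2)
    .batch(32)
    .prefetch(1)
)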
I have a system with two GPUs and am using Keras with the TensorFlow backend. gpu:0 is allocated to PyCUDA, which performs a unique operation whose output is fed forward to Keras and which changes with each batch. I would therefore like to run a Keras model on gpu:1 while leaving gpu:0 allocated to PyCUDA.
Is there any way to do this? Looking through prior threads, I've found several deprecated solutions.
I don't think this feature is meaningfully implemented in Keras currently. The workaround I found and recommend is simply to create multiple processes using Python's standard multiprocessing library.
Note: currently, for this setup you need to spawn the new process rather than fork it, to avoid a weird interaction with one of the PyCUDA backend libraries.
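A minimal sketch of what I mean, assuming you pin each process to a GPU via the CUDA_VISIBLE_DEVICES environment variable (the model code is just a placeholder):

import multiprocessing as mp
import os

def run_keras_on_gpu1():
    # Hide gpu:0 from TensorFlow so only the second GPU is visible to this process.
    os.environ["CUDA_VISIBLE_DEVICES"] = "1"
    import tensorflow as tf  # import after setting the environment variable

    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="adam", loss="mse")
    model.fit(tf.random.normal((32, 4)), tf.random.normal((32, 1)),
              epochs=1, verbose=0)

if __name__ == "__main__":
    # Spawn (not fork) to avoid clashing with the PyCUDA context in the parent.
    ctx = mp.get_context("spawn")
    p = ctx.Process(target=run_keras_on_gpu1)
    p.start()
    # ... the parent process keeps gpu:0 for its PyCUDA work here ...
    p.join()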
I am building a neural network with TensorFlow and have run into a problem using a generator to split my data up: it is simply too slow.
My training data consists of 52x52 NumPy arrays. I need to split each array into a 52x52x3 array before feeding it into my network. As mentioned, I have a working generator that does this, but I noticed that even though my network runs on the GPU, GPU utilization is very low (usually under 10%). I think this is because the generator runs on the CPU.
Is there any way of running my generator on the GPU?
What I tried:
- I thought of using PyCUDA to implement the generator on the GPU, but found that TensorFlow and PyCUDA don't support each other
- I tried using the from_generator function from the Dataset API (a rough sketch of this attempt appears after this list), as documented here:
https://www.tensorflow.org/api_docs/python/tf/contrib/data/Dataset
While having issues with it, I ran into this GitHub thread, which mentions that this function isn't supported to run on the GPU anyway:
https://github.com/tensorflow/tensorflow/issues/13610
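For reference, a rough sketch of the from_generator attempt (the data source and the channel-stacking step are simplified from my actual code):

import numpy as np
import tensorflow as tf

def generator():
    # Placeholder for my real data source: yields 52x52 arrays and labels.
    for _ in range(1000):
        frame = np.random.rand(52, 52).astype(np.float32)
        # Stack the single channel into a 52x52x3 array before feeding the NN.
        yield np.stack([frame, frame, frame], axis=-1), 0

dataset = tf.data.Dataset.from_generator(
    generator,
    output_types=(tf.float32, tf.int32),
    output_shapes=((52, 52, 3), ()),
)
dataset = dataset.batch(32).prefetch(1)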
Any help would be greatly appreciated.
UPDATE: I have to rewrite this question because, after some investigation, I realised that this is a different problem.
Context: running Keras in a grid-search setting using the KerasClassifier wrapper with scikit-learn. System: Ubuntu 16.04; libraries: Anaconda distribution 5.1, Keras 2.0.9, scikit-learn 0.19.1, TensorFlow 1.3.0 or Theano 0.9.0, using CPUs only.
Code:
I simply used the code here for testing: https://machinelearningmastery.com/use-keras-deep-learning-models-scikit-learn-python/, the second example 'Grid Search Deep Learning Model Parameters'. Pay attention to line 35, which reads:
grid = GridSearchCV(estimator=model, param_grid=param_grid)
Symptoms: when the grid search uses more than one job (meaning CPUs?), e.g., setting n_jobs in the line above to 2, as in the line below:
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=2)
the code hangs indefinitely, with either TensorFlow or Theano, and there is no CPU usage (see the attached screenshot, where 5 Python processes were created but none is using the CPU).
By debugging, it appears that the following line in sklearn.model_selection._search causes the problem:
line 648: for parameters, (train, test) in product(candidate_params,
cv.split(X, y, groups)))
This is the point at which the program hangs and cannot continue.
I would really appreciate some insights as to what this means and why this could happen.
Thanks in advance
Are you using a GPU? If so, you can't have multiple threads running each variation of the params because they won't be able to share the GPU.
Here's a full example of how to use the Keras scikit-learn wrappers in a Pipeline with GridSearchCV: Pipeline with a Keras Model
If you really want to have multiple jobs in the GridSearchCV, you can try to limit the GPU fraction used by each job (e.g. if each job only allocates 0.5 of the available GPU memory, you can run 2 jobs simultaneously); a sketch of this is shown after the issue links below.
See these issues:
Limit the resource usage for tensorflow backend
GPU memory fraction does not work in keras 2.0.9 but it works in 2.0.8
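A minimal sketch of limiting the per-process memory fraction with the TensorFlow 1.x backend (the 0.5 value is just an illustration; adjust it to how many jobs you want to fit on the GPU):

import tensorflow as tf
from keras import backend as K

# Let each job claim only half of the GPU's memory,
# so two jobs can share one GPU.
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5
K.set_session(tf.Session(config=config))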
I dealt with this problem too, and it really slowed me down not being able to run what is essentially trivially parallelizable code. The issue is indeed with the TensorFlow session. If a session is created in the parent process before GridSearchCV.fit(), it will hang!
The solution for me was to keep all session/graph creation code restricted to the KerasClassifier class and the model creation function I passed to it.
Also, what Felipe said about memory is true: you will want to restrict TensorFlow's memory usage in either the model creation function or a subclass of KerasClassifier.
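A minimal sketch of what I mean, assuming a TensorFlow 1.x backend (the architecture, memory fraction, and parameter grid are just placeholders):

import tensorflow as tf
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

def create_model():
    # All session/graph setup happens here, inside the worker,
    # not in the parent process before GridSearchCV.fit().
    config = tf.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = 0.3
    K.set_session(tf.Session(config=config))

    model = Sequential()
    model.add(Dense(12, input_dim=8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model

model = KerasClassifier(build_fn=create_model, epochs=10, batch_size=16, verbose=0)
grid = GridSearchCV(estimator=model, param_grid={'batch_size': [8, 16]}, n_jobs=2)
# grid.fit(X, y)  # X, y: your training data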
Related info:
Session hang issue with python multiprocessing
Keras + Tensorflow and Multiprocessing in Python
TL;DR Answer: You can't because your Keras model can't be serialized, and serialization is needed for parallelizing in Python with joblib.
This problem is much detailed here: https://www.neuraxle.org/stable/scikit-learn_problems_solutions.html#problem-you-can-t-parallelize-nor-save-pipelines-using-steps-that-can-t-be-serialized-as-is-by-joblib
The solution to parallelize your code is to make your Keras estimator serializable. This can be done using savers as described at the link above.
If you're lucky enough to be using TensorFlow v2's built-in Keras module, the following practical code sample will prove useful to you, as you'd essentially just need to take the code and adapt it to yours:
https://github.com/guillaume-chevalier/seq2seq-signal-prediction
In this example, all the saving and loading code is pre-written for you using Neuraxle-TensorFlow, which makes it parallelizable if you use Neuraxle's AutoML methods (e.g., Neuraxle's grid search and Neuraxle's own parallelism features).