I'm using Google Colab to train my model. After training, I want to change the model, but I can't because there isn't enough RAM for it. I tried re-assigning the old model to None, but the RAM usage didn't decrease.
I don't want to close the session and start from the beginning. Is there any way to free up the RAM used in Google Colab?
I had this problem. I was looping through different models I was building, and it helped to clear the session from memory after each run, as per this other Stack Overflow contribution:
from tensorflow.keras import backend as K
K.clear_session()
For some other users this also helped:
tf.reset_default_graph()
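For illustration, a minimal sketch of that pattern inside a model-building loop; build_model, x_train, and y_train here are placeholders for your own code:
from tensorflow.keras import backend as K

for units in (32, 64, 128):               # hypothetical hyperparameter sweep
    model = build_model(units)            # build_model stands in for your own model-building code
    model.fit(x_train, y_train, epochs=1)
    del model                             # drop the Python reference to the model
    K.clear_session()                     # release the graph/session state Keras keeps around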
It might also be, without you noticing, that your RAM gets exhausted because you are loading your data from a pandas DataFrame. In that case this might help you too; more precisely, adding the following lines inside each loop cleared the memory in my case:
import gc
import pandas as pd

del df             # drop the reference to the old DataFrame
gc.collect()       # force garbage collection so the memory is actually reclaimed
df = pd.DataFrame()
As a workaround to increase your RAM to 25 GB, you can run the code below and wait for the notebook to pop up the option to increase the RAM. Once you accept it, your session has 25 GB of RAM.
d = []
while True:
    d.append('1')
Colab no longer provides this feature to increase RAM.
One workaround you can opt for is to delete variables as soon as you are done with them.
Secondly, try to dump your intermediate results using the pickle or joblib libraries, so that if the RAM crashes you don't have to start all over again.
Example:
import joblib      # sklearn.externals.joblib is deprecated; import joblib directly
from google.colab import files

# save the variable to a file in the Colab filesystem
joblib.dump(var, 'var.pkl')

# download the file to your local Downloads folder
files.download('var.pkl')

# reload your saved data
var = joblib.load('var.pkl')
Colab doesn't support this feature. The only option is to start all over again.
Related
Do you know any elegant way to do inference in 2 Python processes with 1 GPU in TensorFlow?
Suppose I have 2 processes: the first one classifies cats/dogs, the second one classifies birds/planes. Each process runs a different TensorFlow model on the GPU, and the 2 models are fed images from different cameras continuously.
Usually, TensorFlow occupies all the memory of the entire GPU, so when you start another process it crashes with OUT OF MEMORY, a failed CUDA convolution, or something along those lines.
Is there a tutorial/article/sample code that shows how to load 2 models in different processes and run both in parallel?
This is also very useful if you are running model inference while doing some heavy graphics work, e.g. playing games. I also want to know how running the model affects the game.
I've tried using Python's Thread and it works, but each model predicts 2 times slower (and, as you know, Python threads don't utilize multiple CPU cores). I want to use Python's Process, but it's not working. If you have a sample of a few lines of code that work, I would appreciate it very much.
I've also attached my current Thread code:
As summarized here, you can specify the proportion of GPU memory allocated per process.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
Using Keras, it may be simpler to allow 'memory growth' which will expand the allocated memory on demand as described here.
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # Memory growth must be set before the GPUs have been initialized
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
    except RuntimeError as e:
        print(e)
The following should work for Tensorflow 2.0:
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession
config = ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.2
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)
Apart from setting the GPU memory fraction, you need to enable MPS in CUDA to get better speed if you are running more than one model on the GPU simultaneously. Otherwise, inference will be slower compared to a single model running on the GPU.
sudo nvidia-smi -i 0 -c EXCLUSIVE_PROCESS
sudo nvidia-cuda-mps-control -d
Here 0 is your GPU number.
After finishing, stop the MPS daemon:
echo quit | sudo nvidia-cuda-mps-control
OK. I think I've found the solution now.
I use TensorFlow 2, and there are essentially 2 methods to manage the GPU memory usage:
set memory growth to true
set memory limit to some number
You can use either method and ignore all the out-of-memory warning messages. I still don't know exactly what they mean, but the model keeps running, and that's what I care about.
I measured the exact time the model takes to run, and it's a lot better than running on the CPU. If I run both processes at the same time, the speed drops a bit, but it's still a lot better than running on the CPU.
With the memory growth approach, my GPU has 3 GB, so the first process tries to allocate everything and the second process then reports out of memory. But it still works.
With the memory limit approach, I set the limit to some number, e.g. 1024 MB. Both processes work.
So what is the right minimum number you can set?
I kept reducing the memory limit until I found that my model works fine with a 64 MB limit. The prediction speed is still the same as when I set the limit to 1024 MB. When I set the limit to 32 MB, I noticed a 50% speed drop. When I set it to 16 MB, the model refuses to run because it does not have enough memory to store the image tensor.
This means that my model requires a minimum of 64 MB, which is very little considering that I have 3 GB to spare. It also allows me to run the model while playing some video games.
Conclusion: I chose to use the memory limit approach with a 64 MB limit. You can check how to use the memory limit here: https://www.tensorflow.org/guide/gpu
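For reference, a minimal sketch of that memory limit approach using the tf.config.experimental API from the guide linked above (64 MB being the value found by trial here):
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # Cap this process to a 64 MB virtual GPU; must run before the GPU is initialized.
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=64)])
    except RuntimeError as e:
        print(e)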
I suggest you try changing the memory limit to see the minimum your model needs. You will see a speed drop, or the model will refuse to run, when the memory is not enough.
Your session crashed for an unknown reason
I get this error when I run the following cell in Google Colab:
from keras import backend as K

if 'tensorflow' == K.backend():
    import tensorflow as tf
    from keras.backend.tensorflow_backend import set_session
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    config.gpu_options.visible_device_list = "0"
    set_session(tf.Session(config=config))
I have been receiving this message since I uploaded two data sets to Google Drive.
Does anyone know this message and can give me some advice?
Many thanks for any hints.
Update: I still always receive the message.
Update: I have removed the data sets from Google Drive, but the session is still crashing.
Google Colab is crashing because you are trying to run GPU-related code while the runtime is set to CPU.
The execution succeeds if you change the runtime to GPU. The steps for that are:
Runtime -> Change runtime type -> GPU (select from the dropdown).
Please find the working code in the GitHub Gist.
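As a quick sanity check (not part of the original answer), you can confirm that the runtime actually exposes a GPU before running GPU-specific code:
import tensorflow as tf

device_name = tf.test.gpu_device_name()
print(device_name if device_name else 'No GPU found - change the runtime type to GPU')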
Just a side note: sometimes you may want to reinstall a slightly older version of the relevant module (check the error log). That worked for me in one case.
This error happens when the expected device and the actual device are different.
For example, if you run code written with torch_xla (which is for TPU training) on the GPU (CUDA), then Colab will return this error.
It is really tricky since it does not give you an actual debugging message, which makes it hard to find the actual problem.
I have the issue that my GPU memory is not released after closing a TensorFlow session in Python. These three lines suffice to cause the problem:
import tensorflow as tf
sess=tf.Session()
sess.close()
After the third line the memory is not released. I have been up and down many forums and tried all sorts of suggestions, but nothing has worked for me. For details please also see my comment at the bottom here:
https://github.com/tensorflow/tensorflow/issues/19731
Here I have documented the ways in which I manage to kill the process and thus release the memory, but this is not useful for long-running and automated processes. I would very much appreciate any further suggestions to try. I am using Windows.
EDIT: I have now found a solution that at least allows me to do what I am trying to do. I am still NOT able to release the memory, but I am able to 'reuse' it. The code has this structure:
import tensorflow as tf
from keras import backend as K

cfg = K.tf.ConfigProto()
# cfg.gpu_options.allow_growth = True  # this is optional
cfg.gpu_options.per_process_gpu_memory_fraction = 0.8  # you can use any percentage here

# upload your data and define your model (2 layers in this case) here

for i in range(len(neuron1)):
    for j in range(len(neuron2)):
        K.set_session(K.tf.Session(config=cfg))
        # train your NN for i, j
The first time the script enters the loop the GPU memory is still allocated (80% in the above example) and thus cluttered; however, this code nonetheless seems to reuse the same memory somehow. I reckon K.set_session(K.tf.Session(config=cfg)) somehow destroys or resets the old session, allowing the memory to be 'reused' within this context at least. Note that I am not using sess.close() or K.clear_session() or resetting the default graph explicitly; that still does not work for me. When the loops are done, the GPU memory is still full.
Refer to this discussion. You can reuse your allocated memory but if you want to free the memory, then you would have to exit the Python interpreter itself.
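One hedged workaround that follows from this is to run the TensorFlow work in a child process, so the GPU memory is released to the OS when that process exits while the parent interpreter keeps running; train_once below is just a placeholder for your own training code:
import multiprocessing as mp

def train_once():
    # Import TF inside the child so the GPU context is created (and destroyed) there.
    import tensorflow as tf
    sess = tf.Session()
    # ... build and train your model here ...
    sess.close()

if __name__ == '__main__':
    p = mp.Process(target=train_once)
    p.start()
    p.join()   # the GPU memory is freed when the child process exits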
If I'm understanding correctly, it should be as simple as:
from numba import cuda
cuda.close()
UPDATE: I have to rewrite this question, as after some investigation I realised that this is a different problem.
Context: running Keras in a grid-search setting using the KerasClassifier wrapper with scikit-learn. System: Ubuntu 16.04; libraries: Anaconda distribution 5.1, Keras 2.0.9, scikit-learn 0.19.1, TensorFlow 1.3.0 or Theano 0.9.0, using CPUs only.
Code:
I simply used the code here for testing: https://machinelearningmastery.com/use-keras-deep-learning-models-scikit-learn-python/, the second example 'Grid Search Deep Learning Model Parameters'. Pay attention to line 35, which reads:
grid = GridSearchCV(estimator=model, param_grid=param_grid)
Symptoms: when the grid search uses more than 1 job (meaning CPUs?), e.g. setting 'n_jobs' in the above line to '2', as in the line below:
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=2)
With this, the code hangs indefinitely, with either TensorFlow or Theano, and there is no CPU usage (see the attached screenshot, where 5 Python processes were created but none is using the CPU).
By debugging, it appears to be the following line in 'sklearn.model_selection._search' that causes the problem:
line 648: for parameters, (train, test) in product(candidate_params,
                                                   cv.split(X, y, groups)))
The program hangs on this line and cannot continue.
I would really appreciate some insights as to what this means and why this could happen.
Thanks in advance
Are you using a GPU? If so, you can't have multiple threads running each variation of the params because they won't be able to share the GPU.
Here's a full example of how to use Keras' sklearn wrappers in a Pipeline with GridSearchCV: Pipeline with a Keras Model
If you really want to have multiple jobs in the GridSearchCV, you can try to limit the GPU fraction used by each job (e.g. if each job only allocates 0.5 of the available GPU memory, you can run 2 jobs simultaneously)
See these issues:
Limit the resource usage for tensorflow backend
GPU memory fraction does not work in keras 2.0.9 but it works in 2.0.8
I dealt with this problem too, and it really slowed me down not being able to run what is essentially trivially parallelizable code. The issue is indeed with the TensorFlow session: if a session is created in the parent process before GridSearchCV.fit(), it will hang!
The solution for me was to keep all session/graph creation code restricted to the KerasClassifier class and the model creation function I passed to it.
Also, what Felipe said about the memory is true: you will want to restrict the memory usage of TF in either the model creation function or a subclass of KerasClassifier.
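As a hedged illustration, assuming Keras 2 with the TF1-style backend from the question, the memory restriction can live inside the model-creation function passed to KerasClassifier (the layer sizes and the 0.4 fraction are just examples):
import tensorflow as tf
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense

def create_model():
    # Limit this worker's share of the GPU before any graph/session work happens.
    config = tf.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = 0.4
    K.set_session(tf.Session(config=config))

    model = Sequential()
    model.add(Dense(12, input_dim=8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model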
Related info:
Session hang issue with python multiprocessing
Keras + Tensorflow and Multiprocessing in Python
TL;DR Answer: You can't because your Keras model can't be serialized, and serialization is needed for parallelizing in Python with joblib.
This problem is much detailed here: https://www.neuraxle.org/stable/scikit-learn_problems_solutions.html#problem-you-can-t-parallelize-nor-save-pipelines-using-steps-that-can-t-be-serialized-as-is-by-joblib
The solution to parallelize your code is to make your Keras estimator serializable. This can be done using savers as described at the link above.
If you're lucky enough to be using TensorFlow v2's built-in Keras module, the following practical code sample will prove useful to you, as you'd practically just need to take the code and modify it with yours:
https://github.com/guillaume-chevalier/seq2seq-signal-prediction
In this example, all the saving and loading code is pre-written for you using Neuraxle-TensorFlow, and this makes it parallelizable if you use Neuraxle's AutoML methods (e.g. Neuraxle's grid search and Neuraxle's own parallelism features).
I'm currently implementing YOLO in TensorFlow and I'm a little surprised at how much memory it is taking. On my GPU I can train YOLO using the Darknet framework with batch size 64. In TensorFlow I can only do it with batch size 6; with 8 I already run out of memory. For the test phase I can run with batch size 64 without running out of memory.
I am wondering how I can calculate how much memory is being consumed by each tensor. Are all tensors stored on the GPU by default? Can I simply calculate the total memory consumption as the shape * 32 bits?
I noticed that since I'm using momentum, all my tensors also have a /Momentum tensor. Could that also be using a lot of memory?
I am augmenting my dataset with a method distorted_inputs, very similar to the one defined in the CIFAR-10 tutorial. Could it be that this part is occupying a huge chunk of memory? I believe Darknet does these modifications on the CPU.
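On the 'shape * 32 bits' question, a rough back-of-the-envelope estimate would be the product of the shape times the bytes per element; this is only a ballpark assumption on my part, not how TensorFlow actually accounts for memory:
import numpy as np

def tensor_bytes(shape, dtype=np.float32):
    # product of the shape times the bytes per element
    return int(np.prod(shape)) * np.dtype(dtype).itemsize

# e.g. a float32 batch of 64 RGB images at 448x448 (a YOLO-style input):
print(tensor_bytes((64, 448, 448, 3)) / 1024 ** 2, 'MiB')   # ~147 MiB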
Now that 1258 has been closed, you can enable memory logging in Python by setting an environment variable before importing TensorFlow:
import os
os.environ['TF_CPP_MIN_VLOG_LEVEL']='3'
import tensorflow as tf
There will be a lot of logging as a result of this. You'll want to grep the results to find the appropriate lines. For example:
grep MemoryLogTensorAllocation train.log
Sorry for the slow reply. Unfortunately right now the only way to set the log level is to edit tensorflow/core/platform/logging.h and recompile with e.g.
#define VLOG_IS_ON(lvl) ((lvl) <= 1)
There is a bug open 1258 to control logging more elegantly.
MemoryLogTensorOutput entries are logged at the end of each Op execution, and indicate the tensors that hold the outputs of the Op. It's useful to know these tensors since the memory is not released until the downstream Op consumes the tensors, which may be much later on in a large graph.
See the description in this commit.
The raw memory allocation info is there, although it needs a script to collect the information in an easy-to-read form.
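For instance, a small hedged helper (the log file name and the exact message types of interest are assumptions on my part) that pulls the memory-log lines out of a run captured with TF_CPP_MIN_VLOG_LEVEL=3:
def collect_memory_lines(path='train.log'):
    # Keep only the memory-logging messages emitted by the verbose TF log.
    keep = ('MemoryLogTensorAllocation', 'MemoryLogTensorDeallocation', 'MemoryLogTensorOutput')
    with open(path, errors='ignore') as f:
        return [line.rstrip() for line in f if any(k in line for k in keep)]

for line in collect_memory_lines():
    print(line)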