Tensorflow installation made execution of basic programs slow - tensorflow

I am new to ML. I just installed TensorFlow through Anaconda and it seems tremendously slow. Even for basic printing like the following, the execution time was around 5 seconds:
import tensorflow as tf
print(1+3)
How do I fix this?
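If it helps to narrow this down, you can time the import and the print separately. A minimal sketch in plain Python (nothing else assumed) that shows whether the import, rather than the print, is what takes the ~5 seconds:
import time

t0 = time.time()
import tensorflow as tf  # importing TensorFlow is typically the slow step
t1 = time.time()

print(1 + 3)
t2 = time.time()

print("import tensorflow: %.2fs, print: %.4fs" % (t1 - t0, t2 - t1))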

Related

Google Colab GPU speed-up works with 2.x, but not with 1.x

In https://colab.research.google.com/notebooks/gpu.ipynb, which I assume is an official demonstration of GPU speed-up by Google, if I follow the steps, the GPU speed-up (around 60 times faster than with CPU) using TensorFlow 2.x works. However, if I want to use version 1.15 as in https://colab.research.google.com/drive/12dduH7y0GPztxSM0AFlfpjj8FU5x8YSv (the only change compared to the notebook from the first link is removing "%tensorflow_version 2.x" both times), tf.test.gpu_device_name() returns the string /device:GPU:0 but there is no speed-up. I would really love to use a TensorFlow version between 1.5 and 1.15, though, as the code I want to run uses functions removed in TensorFlow 2.x. Does anyone know how to use TensorFlow 1.x while still getting the GPU speed-up?
In your notebook the code is not actually executed, since you neither called session.run() nor enabled eager execution.
Add tf.enable_eager_execution() at the top of your code and you'll see the real difference between CPU and GPU times.
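A minimal sketch of that kind of CPU-vs-GPU comparison under TF 1.15 eager execution (the op and tensor shapes are illustrative, and a GPU runtime is assumed to be attached):
import time
import tensorflow as tf

tf.enable_eager_execution()  # without this (or session.run), TF 1.x ops only build a graph and never run

def bench(device):
    # Time a few convolutions on the given device.
    with tf.device(device):
        x = tf.random.normal([100, 100, 100, 3])
        k = tf.random.normal([7, 7, 3, 32])
        start = time.time()
        for _ in range(10):
            y = tf.nn.conv2d(x, k, strides=[1, 2, 2, 1], padding='SAME')
        y.numpy()  # force the work to finish before stopping the clock
        return time.time() - start

print('CPU (s):', bench('/device:CPU:0'))
print('GPU (s):', bench('/device:GPU:0'))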

Keras + scikit-learn wrapper appears to hang when using GridSearchCV with n_jobs > 1

UPDATE: I have to rewrite this question, as after some investigation I realised that this is a different problem.
Context: running Keras in a grid-search setting using the KerasClassifier wrapper with scikit-learn. System: Ubuntu 16.04; libraries: Anaconda distribution 5.1, Keras 2.0.9, scikit-learn 0.19.1, TensorFlow 1.3.0 or Theano 0.9.0, using CPUs only.
Code:
I simply used the code here for testing: https://machinelearningmastery.com/use-keras-deep-learning-models-scikit-learn-python/, the second example 'Grid Search Deep Learning Model Parameters'. Pay attention to line 35, which reads:
grid = GridSearchCV(estimator=model, param_grid=param_grid)
Symptoms: when the grid search uses more than 1 job (meaning CPUs?), e.g. setting n_jobs on the line above to 2, as below:
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=2)
the code hangs indefinitely, with either TensorFlow or Theano as the backend, and there is no CPU usage (see the attached screenshot, where 5 Python processes were created but none is using the CPU).
By debugging, the problem appears to be the following line in sklearn.model_selection._search (line 648):
for parameters, (train, test) in product(candidate_params, cv.split(X, y, groups))
on which the program hangs and cannot continue.
I would really appreciate some insights as to what this means and why this could happen.
Thanks in advance
Are you using a GPU? If so, you can't have multiple threads running each variation of the params because they won't be able to share the GPU.
Here's a full example of how to use the Keras scikit-learn wrappers in a Pipeline with GridSearchCV: Pipeline with a Keras Model
If you really want to have multiple jobs in the GridSearchCV, you can try to limit the GPU fraction used by each job (e.g. if each job only allocates 0.5 of the available GPU memory, you can run 2 jobs simultaneously); a sketch of this follows the linked issues below.
See these issues:
Limit the resource usage for tensorflow backend
GPU memory fraction does not work in keras 2.0.9 but it works in 2.0.8
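For reference, a minimal sketch of the memory-fraction approach using the TF 1.x and standalone Keras APIs discussed in this question (the 0.5 fraction is illustrative):
import tensorflow as tf
from keras import backend as K

# Let each worker claim only half of the GPU memory, so two GridSearchCV
# jobs can share a single GPU.
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5
K.set_session(tf.Session(config=config))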
I dealt with this problem too, and it really slowed me down not being able to run what is essentially trivially parallelizable code. The issue is indeed with the TensorFlow session: if a session is created in the parent process before GridSearchCV.fit(), it will hang!
The solution for me was to keep all session/graph creation code restricted to the KerasClassifier class and the model creation function I passed to it; a sketch of that pattern follows the related links below.
Also, what Felipe said about the memory is true: you will want to restrict the memory usage of TF in either the model creation function or a subclass of KerasClassifier.
Related info:
Session hang issue with python multiprocessing
Keras + Tensorflow and Multiprocessing in Python
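A minimal sketch of that pattern, following the structure of the linked tutorial (the layer sizes, hyperparameters and X, y are placeholders):
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

def create_model(optimizer='adam'):
    # All graph/session/model construction happens here, inside the worker,
    # so no TensorFlow session exists in the parent process before fit().
    model = Sequential()
    model.add(Dense(12, input_dim=8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model

model = KerasClassifier(build_fn=create_model, epochs=10, batch_size=16, verbose=0)
param_grid = {'optimizer': ['adam', 'rmsprop']}
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=2)
# grid.fit(X, y)  # X, y as in the linked tutorial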
TL;DR Answer: You can't because your Keras model can't be serialized, and serialization is needed for parallelizing in Python with joblib.
This problem is much detailed here: https://www.neuraxle.org/stable/scikit-learn_problems_solutions.html#problem-you-can-t-parallelize-nor-save-pipelines-using-steps-that-can-t-be-serialized-as-is-by-joblib
The solution to parallelize your code is to make your Keras estimator serializable. This can be done using savers as described at the link above.
If you're lucky enough to be using TensorFlow v2's prebuilt Keras module, the following practical code sample should prove useful, as you'd mostly just need to take the code and adapt it to yours:
https://github.com/guillaume-chevalier/seq2seq-signal-prediction
In this example, all the saving and loading code is pre-written for you using Neuraxle-TensorFlow, and this makes it parallelizable if you use Neuraxle's AutoML methods (e.g. Neuraxle's own grid search and parallelism features).

Import tensorflow contrib module is slow in tensorflow 1.2.1

Is there a reason for the tensorflow contrib module imports being slower in 1.2.1 than 1.1.0? I am using Python 3.5.
The overhead is not significant when using the command line; it's perhaps around 2-3 seconds. However, in an IDE it becomes quite significant (~10 seconds to import tensorflow.contrib, as opposed to ~0.5 seconds in TensorFlow 1.1.0).
Thanks in advance.
It is slow to import because it calls inspect.stack() many times, and each call takes a lot of time. I have reported this issue and submitted a PR with a fix.
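If you want to confirm this on your own installation, here is a quick sketch using the standard-library profiler (assuming TF 1.2.1 is what's installed; inspect.stack should show up near the top of the cumulative times):
import cProfile
import pstats

cProfile.run("import tensorflow.contrib", "contrib_import.prof")
pstats.Stats("contrib_import.prof").sort_stats("cumulative").print_stats(20)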

Does tensorflow automatically detect GPU or do I have to specify it manually?

I have code written in TensorFlow that I run on CPUs and it runs fine.
I am moving to a new machine which has GPUs, and I ran the code on the new machine, but the training speed did not improve as expected (it takes almost the same time).
I understood that TensorFlow automatically detects GPUs and runs the operations on them (https://www.quora.com/How-do-I-automatically-put-all-my-computation-in-a-GPU-in-TensorFlow) & (https://www.tensorflow.org/tutorials/using_gpu).
Do I have to change the code to manually run the operations on the GPU (for now I have a single GPU)? And what would be gained by doing that manually?
Thanks
If the GPU version of TensorFlow is installed and if you don't assign all your tensors to CPU, some of them should be assigned to GPU.
To find out which devices (CPU, GPU) are available to TensorFlow, you can use this:
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
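If you want to check placement explicitly, here is a minimal TF 1.x sketch that pins an op to the GPU and logs where every op actually runs (the shapes are illustrative; a single visible GPU is assumed):
import tensorflow as tf

with tf.device('/device:GPU:0'):
    a = tf.random_normal([1000, 1000])
    b = tf.random_normal([1000, 1000])
    c = tf.matmul(a, b)

# log_device_placement prints the device assigned to each op
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    sess.run(c)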
Regarding the question of performance, it's quite a broad subject and it really depends on your model, your data and so on. Here are a few broad remarks on TensorFlow performance.

Performance of tf inside jupyter notebooks vs. command line

I am noticing quite significant performance (speed) differences when running tensorflow code from inside a jupyter notebook, versus running it as a script from the command line.
For example, below are the results of running the MNIST CNN tutorial (https://www.tensorflow.org/code/tensorflow/examples/tutorials/mnist/fully_connected_feed.py)
Setup:
AWS instance with 32 Xeon-CPUS, 62GB memory, 4 K520 GPUS (4GB mem)
Linux: 3.13.0-79 Ubuntu
Tensorflow: 0.10.0rc0 (built from sources with GPU support)
Python: 3.5.2 |Anaconda custom (64-bit)|
CUDA libraries : libcublas.so.7.5 , libcudnn.so.5, libcufft.so.7.5, libcuda.so.1, libcurand.so.7.5
Training steps: 2000
Jupyter notebook execution time:
This does not include time for imports and loading the dataset - just the training phase
CPU times: user 8min 58s, sys: 0 ns, total: 8min 58s
Wall time: 8min 20s
Command line execution:
This is the time for execution of the full script.
real 0m18.803s
user 0m11.326s
sys 0m13.200s
The GPU is being used in both cases, but utilization is much higher for the command-line run (typically 35% during the training phase vs. 2-3% for the notebook version). I even tried manually placing the computation on different GPUs, but that doesn't make a big difference to the notebook execution time.
Any ideas / suggestions about why this might be?
I am seeing the reverse case: GPU utilisation in the notebook is better than on the command line.
I have been training on Pong using DQN; the frame rate using the command line falls to 17 fps, while using notebooks it falls to 100 fps.
The nvidia-smi stats show 294 MB of GPU memory used by the command-line method and 984 MB used by the Jupyter notebook method.
I don't know the reason, but I observe something similar in Colab as well.
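For anyone comparing the two setups, a small sketch for sampling GPU utilization and memory the way the posts above describe (the polling interval and count are arbitrary; run it alongside the training process):
import subprocess
import time

for _ in range(10):
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used",
         "--format=csv,noheader"]).decode()
    print(out.strip())
    time.sleep(5)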