I have this code to disable GPU usage:
import numpy as np
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import tensorflow as tf

w = tf.Variable([[1.],
                 [2.]])
I still get this output, and I'm not sure why:
E:\MyTFProject\venv\Scripts\python.exe E:/MyTFProject/tfvariable.py
2021-11-03 14:09:16.971644: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2021-11-03 14:09:16.971644: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2021-11-03 14:09:19.563793: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2021-11-03 14:09:19.566793: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: newtonpc
2021-11-03 14:09:19.567793: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: mypc
2021-11-03 14:09:19.567793: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
TF Version: '2.6.1'
I'm not able to stop it from loading the CUDA DLLs. I don't want to set up CUDA right now; maybe later.
I am using the latest PyCharm and installed TensorFlow with pip, as described on the site.
You can try reinstalling TensorFlow with the CPU-only package. The links are available here, depending on your OS and Python version:
https://www.tensorflow.org/install/pip?hl=fr#windows_1
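As a quick check after reinstalling, here is a minimal sketch (assuming a TF 2.x build such as the 2.6.1 in the question) that confirms no GPU devices are visible and that the variable lands on the CPU:

import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"  # optional: reduce TensorFlow's startup log noise

import tensorflow as tf

print(tf.__version__)
print(tf.config.list_physical_devices('GPU'))  # expected: [] with the CPU-only package

w = tf.Variable([[1.],
                 [2.]])
print(w.device)  # e.g. .../device:CPU:0

With the CPU-only package, TensorFlow is not built against CUDA, so the cudart64_110.dll and cuInit messages should disappear as well.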
Related
I am new to deep learning. I have an A100 GPU installed with CUDA 11.6. Using Conda, I installed tensorflow 1.15, tensorflow-gpu 1.15, cudatoolkit 10.0, and Python 3.7, but the code I am trying to run from GitHub comes with the note below, and it shows errors that I find difficult to interpret; I can't see where I went wrong. The error is displayed as:
failed to run cuBLAS routine: CUBLAS_STATUS_EXECUTION_FAILED
2022-06-30 09:37:12.049400: I tensorflow/stream_executor/stream.cc:4925] [stream=0x55d668879990,impl=0x55d668878ac0] did not memcpy device-to-host; source: 0x7f2fe2d0d400
2022-06-30 09:37:12.056385: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at iterator_ops.cc:867 : Cancelled: Operation was cancelled
tensorflow.python.framework.errors_impl.InternalError: 2 root error(s) found.
(0) Internal: Blas GEMM launch failed : a.shape=(25, 25), b.shape=(25, 102400), m=25, n=102400, k=25
[[{{node Hyperprior/HyperAnalysis/layer_Hyperprior_1/MatMul}}]]
(1) Internal: Blas GEMM launch failed : a.shape=(25, 25), b.shape=(25, 102400), m=25, n=102400, k=25
[[{{node Hyperprior/HyperAnalysis/layer_Hyperprior_1/MatMul}}]]
[[Hyperprior/truediv_3/_3633]]
NOTE: At the moment, we only support CUDA 10.0, Python 3.6-3.7, TensorFlow 1.15, and Tensorflow Compression 1.3. TensorFlow must be installed via pip, not conda. Unfortunately, newer versions of Tensorflow or Python will not work due to various constraints in the dependencies and in the TF binary API.
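One way to narrow this down, as a hedged sketch using the TF 1.15 API required by the note (the shapes below simply mirror the failing node), is to run the same matrix multiplication on its own; if this also fails with a Blas GEMM launch error, the problem is the CUDA/driver/TensorFlow version mismatch rather than the GitHub code:

import tensorflow as tf  # assumes tensorflow 1.15 installed via pip, per the note

# Same shapes as the failing node: (25, 25) x (25, 102400)
a = tf.random.normal([25, 25])
b = tf.random.normal([25, 102400])
product = tf.matmul(a, b)

with tf.Session() as sess:
    print(sess.run(product).shape)  # expected: (25, 102400)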
Excuse the newbie question. I just installed TensorFlow, with both GPU and CPU support, on Ubuntu 20.04 LTS on WSL2 with CUDA. My GPU is a GeForce 940MX. When I test the installation, the first attempt produces the warnings shown below, but the exact same statement then runs smoothly afterwards. I wonder what happened here, and is it actually using the CPU or the GPU?
>>> import tensorflow as tf
2021-01-22 12:01:14.284464: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
>>> print(tf.reduce_sum(tf.random.normal([1000, 1000])))
2021-01-22 12:01:20.034653: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-01-22 12:01:20.035316: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-11.2/lib64
2021-01-22 12:01:20.035400: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-01-22 12:01:20.035518: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (DESKTOP-3E6PSHT): /proc/driver/nvidia/version does not exist
2021-01-22 12:01:20.035984: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-01-22 12:01:20.036526: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
tf.Tensor(-872.4863, shape=(), dtype=float32)
>>> print(tf.reduce_sum(tf.random.normal([1000, 1000])))
tf.Tensor(590.53516, shape=(), dtype=float32)
>>> print(tf.reduce_sum(tf.random.normal([1000, 1000])))
tf.Tensor(294.59973, shape=(), dtype=float32)
>>> print(tf.reduce_sum(tf.random.normal([1000, 1000])))
tf.Tensor(261.34412, shape=(), dtype=float32)
Anything helps!
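A quick way to answer the CPU-vs-GPU part yourself, as a minimal sketch for the TF 2.x install in the question, is to list the visible GPUs and turn on device-placement logging:

import tensorflow as tf

# An empty list here means no GPU is visible; that matches the libcuda.so.1 /
# "kernel driver does not appear to be running" messages above, so the
# reductions ran on the CPU.
print(tf.config.list_physical_devices('GPU'))

# Log the device each subsequent op is placed on.
tf.debugging.set_log_device_placement(True)
print(tf.reduce_sum(tf.random.normal([1000, 1000])))

The warnings typically show up only once because the CUDA driver check happens the first time the runtime initializes, which is why the later calls look clean.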
I am trying to run TensorFlow with CPU support.

tensorflow: 1.14.0
Keras: 2.3.1
When I try to run the following piece of code:

from keras.preprocessing.image import ImageDataGenerator  # import needed for the snippet

def run_test_harness(trainX, trainY, testX, testY):
    datagen = ImageDataGenerator(rescale=1.0/255.0)
    train_it = datagen.flow(trainX, trainY, batch_size=1)
    test_it = datagen.flow(testX, testY, batch_size=1)
    model = define_model()
    history = model.fit_generator(train_it, steps_per_epoch=len(train_it),
                                  validation_data=test_it, validation_steps=len(test_it),
                                  epochs=1, verbose=0)
I get the error shown in the attached screenshot (the warning messages are reproduced in the EDIT below).
I tried configuring Bazel for this, but it was of no use. It would be helpful if someone could direct me to resources or otherwise help with the problem. Thank you.
EDIT: (warning messages)
WARNING:tensorflow:From /home/neha/valiance/kerascpu/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:4070: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.
WARNING:tensorflow:From /home/neha/valiance/kerascpu/lib/python3.6/site-packages/tensorflow/python/ops/nn_impl.py:180: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
2020-10-22 12:41:36.023849: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-10-22 12:41:36.326420: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2299965000 Hz
2020-10-22 12:41:36.327496: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5502350 executing computations on platform Host. Devices:
2020-10-22 12:41:36.327602: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): <undefined>, <undefined>
2020-10-22 12:41:36.679930: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
2020-10-22 12:41:36.890241: W tensorflow/core/framework/allocator.cc:107] Allocation of 3406823424 exceeds 10% of system memory.
^Z
[1]+ Stopped python3 model.py
You should try running your code on Google Colab. Judging by the allocation warning above (about 3.4 GB, exceeding 10% of system memory), there probably aren't enough resources on your PC for the task you are trying to run, even though you are using a batch_size of 1.
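As a quick, hedged check of that memory hypothesis (reusing the array names from the question), you can print how much memory the input arrays already occupy before training starts:

import numpy as np

def report_memory(name, arr):
    # nbytes is the raw size of the array's data buffer in memory
    print("%s: shape=%s, dtype=%s, %.2f GiB"
          % (name, arr.shape, arr.dtype, arr.nbytes / 1024.0 ** 3))

# e.g. report_memory("trainX", trainX)
#      report_memory("testX", testX)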
I downloaded CUDA Toolkit 8.0 GA2, cuDNN v6.0 for CUDA 8, and TensorFlow 1.4. I have an NVIDIA 740M graphics chip. I tried running this code to test TensorFlow:
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
This is what it returns:
2017-11-09 16:25:42.999638: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2017-11-09 16:25:43.007712: E C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\stream_executor\cuda\cuda_driver.cc:406] failed call to cuInit: CUDA_ERROR_UNKNOWN
2017-11-09 16:25:43.010457: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\stream_executor\cuda\cuda_diagnostics.cc:158] retrieving CUDA diagnostic information for host: DESKTOP-06398IN
2017-11-09 16:25:43.010576: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\stream_executor\cuda\cuda_diagnostics.cc:165] hostname: DESKTOP-06398IN
b'Hello, TensorFlow!'
What does the error CUDA_ERROR_UNKNOWN mean? How do I fix it? Thanks
Updating the graphics driver fixed it.
I get these warnings:
2017-09-26 14:50:45.956966: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-26 14:50:45.956986: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-09-26 14:50:45.956990: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-26 14:50:45.956996: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
So, based on what I found on the internet, I then followed this link:
https://www.tensorflow.org/install/install_sources
However, when I try:
$ python
and:
# Python
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
The output should be: Hello, TensorFlow!
However, I don't get that at all...
emixam23#pt-mguittet:~/Workspace$ python
Python 3.6.2 (default, Sep 4 2017, 16:58:35)
[GCC 4.2.1 Compatible Apple LLVM 8.1.0 (clang-802.0.42)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
2017-09-26 14:56:33.905065: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-26 14:56:33.905096: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-09-26 14:56:33.905105: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-26 14:56:33.905112: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
>>> print(sess.run(hello))
b'Hello, TensorFlow!'
>>>
Even after I installed it, I still get the warnings. Why is that? I used Xcode 7.3 and I did not enable the GPU in the ./configure step.
Any idea? :/ Thanks in advance!
You can mask these warnings like so (note that the variable must be set before TensorFlow is imported):
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
Notice that your program still outputs 'Hello, TensorFlow!', simply after all of the warnings regarding SSE4.2 and the other instruction sets.
Why are you saying it doesn't seem to be installed correctly?
You are getting what you expected (Hello, TensorFlow!), but you are also getting these warnings. Regarding the warnings, they have been discussed here: "Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2".
Regarding the b'' prefix before Hello, TensorFlow!, see "What does the 'b' character do in front of a string literal?".
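If the b'' prefix bothers you, a minimal sketch with the TF 1.x API from the question is to decode the bytes object that sess.run returns:

import tensorflow as tf

hello = tf.constant('Hello, TensorFlow!')
with tf.Session() as sess:
    out = sess.run(hello)         # a Python bytes object, hence the b'' prefix
    print(out.decode('utf-8'))    # prints: Hello, TensorFlow!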
Hope it helps.