Is tensorflow running on GPU or CPU? (windows) - tensorflow

I've been trying for a while to install tensorflow-gpu and have had a lot of trouble with CUDA. The Visual Studio integration in the CUDA setup always gave me an error, but if I leave the Visual Studio integration out during the CUDA installation, the installation works.
Is the Visual Studio integration mandatory for using tensorflow-gpu?
I then installed all three patches for CUDA 9.0 and placed the cuDNN files in the CUDA folder.
Next, I went to my environment variables and added this path: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0
When I open a command prompt and import TensorFlow, I don't see the lines about libraries being loaded successfully that every tutorial on this topic shows.
So I ran this little piece of code:
import tensorflow as tf
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
and got this output:
"C:\Program Files\Python35\python.exe" C:/Users/Felix/Documents/Uni/Semesterarbeit/doesitwork.py
2018-06-21 13:41:41.187933: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-06-21 13:41:41.748188: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1356] Found device 0 with properties:
name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate(GHz): 1.7085
pciBusID: 0000:21:00.0
totalMemory: 8.00GiB freeMemory: 6.63GiB
2018-06-21 13:41:41.748527: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1435] Adding visible gpu devices: 0
2018-06-21 13:43:44.853239: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-06-21 13:43:44.853436: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:929] 0
2018-06-21 13:43:44.853564: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:942] 0: N
2018-06-21 13:43:44.853860: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6401 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:21:00.0, compute capability: 6.1)
Device mapping:
2018-06-21 13:43:45.164653: I T:\src\github\tensorflow\tensorflow\core\common_runtime\direct_session.cc:284] Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GTX 1070, pci bus id: 0000:21:00.0, compute capability: 6.1
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GTX 1070, pci bus id: 0000:21:00.0, compute capability: 6.1
Process finished with exit code 0
Is this what it's supposed to look like?
Looking forward to an answer
Cheers,
Felix

Take a look at the last line in your log:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GTX 1070, pci bus id: 0000:21:00.0, compute capability: 6.1
TensorFlow created a device backed by your GeForce GTX 1070, so yes, it is running on the GPU.
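If you prefer to check programmatically rather than eyeballing the log, you can parse the mapping line; a minimal pure-Python sketch (no TensorFlow required; in a live session, `tf.test.is_gpu_available()` gives the same answer directly):

```python
import re

# The device-mapping line TensorFlow printed (copied from the log above)
line = ("/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, "
        "name: GeForce GTX 1070, pci bus id: 0000:21:00.0, compute capability: 6.1")

# A GPU is in use if a logical /device:GPU:N entry maps to a physical device
match = re.search(r"/device:GPU:(\d+) -> device: (\d+), name: ([^,]+)", line)
if match:
    print(f"Logical GPU:{match.group(1)} is backed by physical device "
          f"{match.group(2)} ({match.group(3)})")
else:
    print("No GPU mapped; TensorFlow is running on the CPU")
```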

Related

Error generated while running a model (tensorflow)

When I run my model (fitting a model), I get the output below and the kernel stops. Please guide me.
I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /device:GPU:0 with 1661 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3050 Ti Laptop GPU, pci bus id: 0000:01:00.0, compute capability: 8.6
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 1661 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3050 Ti Laptop GPU, pci bus id: 0000:01:00.0, compute capability: 8.6
I tensorflow/stream_executor/cuda/cuda_dnn.cc:384] Loaded cuDNN version 8303
I have an NVIDIA GeForce RTX 3050 Ti Laptop GPU:
NVIDIA GeForce RTX 3050 Ti Laptop GPU
Driver version: 31.0.15.1640
Driver date: 06-06-2022
DirectX version: 12 (FL 12.1)
Physical location: PCI bus 1, device 0, function 0
Utilization 0%
Dedicated GPU memory 0.8/4.0 GB
Shared GPU memory 0.1/6.9 GB
GPU Memory 0.9/10.9 GB
TensorFlow version: 2.9.1
CUDA: cuda_11.5.0_496.13
cuDNN: cudnn_8.3.3.40
Windows 11

Why does TensorFlow show less memory available than the GPU specs?

The command below reports a GPU with 4614 MB of memory, but the RTX 2060 has 6 GB. Why is it showing just over 4 GB?
>>> tf.test.is_built_with_cuda()
True
>>> tf.test.is_gpu_available(cuda_only=False,min_cuda_compute_capability=None)
2019-10-29 17:02:40.062465: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-10-29 17:02:40.072455: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library nvcuda.dll
2019-10-29 17:02:40.105489: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce RTX 2060 major: 7 minor: 5 memoryClockRate(GHz): 1.2
pciBusID: 0000:01:00.0
2019-10-29 17:02:40.111138: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-10-29 17:02:40.117217: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2019-10-29 17:02:44.865862: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-10-29 17:02:44.870341: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
2019-10-29 17:02:44.872351: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
2019-10-29 17:02:44.876727: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/device:GPU:0 with 4614 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2060, pci bus id: 0000:01:00.0, compute capability: 7.5)
True
TensorFlow reports the memory it is able to use on the device. So if anything else is using some GPU memory, the amount reported by TensorFlow will be lower than the total available memory.
Also, some operating systems only allow applications to use a certain fraction of the memory, to ensure the OS can still drive the device (cf. this issue on TensorFlow's GitHub: https://github.com/tensorflow/tensorflow/issues/22623).
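Back-of-the-envelope, that matches the numbers in the question (plain arithmetic; the exact usable amount varies with driver and display load):

```python
total_mb = 6144     # nominal 6 GB on an RTX 2060
reported_mb = 4614  # what TensorFlow logged above
fraction = reported_mb / total_mb
# Roughly three quarters of the card is visible to TensorFlow;
# the rest is held by the display, other processes, and the OS.
print(f"TensorFlow sees {fraction:.0%} of the card's nominal memory")
```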

Almost no free 1080 ti memory allocation when running a tensorflow-gpu device

I am testing a recently bought ASUS ROG STRIX 1080 Ti (11 GB) card with a simple Python test program (matmul.py) from https://learningtensorflow.com/lesson10/ .
The virtual environment (venv) setup is as follows: ubuntu=16.04, tensorflow-gpu==1.5.0, python=3.6.6, CUDA==9.0, Cudnn==7.2.1.
CUDA_ERROR_OUT_OF_MEMORY occurred.
And, strangest of all: totalMemory: 10.91GiB freeMemory: 61.44MiB.
I am not sure whether it is due to the environment setup or to the 1080 Ti itself. I would appreciate it if any experts could advise here.
The terminal showed:
(venv) xx#xxxxxx:~/xx$ python matmul.py gpu 1500
2018-10-01 09:05:12.459203: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-10-01 09:05:12.514203: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:895] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-10-01 09:05:12.514445: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1105] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.607
pciBusID: 0000:01:00.0
totalMemory: 10.91GiB freeMemory: 61.44MiB
2018-10-01 09:05:12.514471: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1195] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
2018-10-01 09:05:12.651207: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 11.44M (11993088 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
......
It can happen that a Python process gets stuck on the GPU. Always check the processes with nvidia-smi and kill them manually if necessary.
I solved this problem by putting a cap on the memory usage:
def gpu_config(memory_fraction=0.8):
    config = tf.ConfigProto(
        allow_soft_placement=True, log_device_placement=False)
    config.gpu_options.allow_growth = True  # allocate incrementally instead of grabbing it all
    config.gpu_options.allocator_type = 'BFC'
    config.gpu_options.per_process_gpu_memory_fraction = memory_fraction
    print("GPU memory upper bound:", memory_fraction)
    return config
Then you can just do:
config = gpu_config()
with tf.Session(config=config) as sess:
....
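For reference, here is what a 0.8 cap works out to on an 11 GB card like the 1080 Ti, using the totalMemory figure from the log (plain arithmetic, not TensorFlow API):

```python
total_gib = 10.91      # totalMemory reported by TensorFlow for the 1080 Ti
memory_fraction = 0.8  # per_process_gpu_memory_fraction
cap_gib = total_gib * memory_fraction
# The process will stop allocating once it reaches this bound,
# leaving the remainder for the display and other processes.
print(f"TensorFlow will allocate at most {cap_gib:.2f} GiB")
```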
After a reboot, I was able to run the sample code from tensorflow.org (https://www.tensorflow.org/guide/using_gpu) without memory issues.
Before running the TensorFlow samples to check the 1080 Ti, I had difficulty training Mask R-CNN models, as posted in:
Mask RCNN Resource exhausted (OOM) on my own dataset
After replacing cuDNN 7.2.1 with 7.0.5, no more resource-exhausted (OOM) issues occurred.

Although the PC recognizes the GPU, tensorflow-gpu uses the CPU

I am using tensorflow-gpu. I want to use my GTX 1070, but tensorflow-gpu uses my CPU, and I don't know what to do.
I use CUDA 9.0 and cuDNN 7.1.4. My tensorflow-gpu version is 1.9.
Here is what I got after running the commands from the official website:
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
2018-07-30 10:53:43.369025: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-07-30 10:53:43.829922: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1392] Found device 0 with properties: name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate(GHz): 1.683 pciBusID: 0000:01:00.0 totalMemory: 8.00GiB freeMemory: 6.63GiB
2018-07-30 10:53:43.919043: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1392] Found device 1 with properties: name: GeForce GTX 1050 major: 6 minor: 1 memoryClockRate(GHz): 1.455 pciBusID: 0000:05:00.0 totalMemory: 2.00GiB freeMemory: 1.60GiB
2018-07-30 10:53:43.926001: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1456] Ignoring visible gpu device (device: 1, name: GeForce GTX 1050, pci bus id: 0000:05:00.0, compute capability: 6.1) with Cuda multiprocessor count: 5. The minimum required count is 8. You can adjust this requirement with the env var TF_MIN_GPU_MULTIPROCESSOR_COUNT.
2018-07-30 10:53:43.934810: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1471] Adding visible gpu devices: 0
2018-07-30 10:53:44.761551: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:952] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-07-30 10:53:44.765678: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:958] 0 1
2018-07-30 10:53:44.768363: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:971] 0: N N
2018-07-30 10:53:44.771773: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:971] 1: N N
2018-07-30 10:53:44.774913: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6395 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
As the log excerpt shows, your TensorFlow engine is using GPU device 0:
(/job:localhost/replica:0/task:0/device:GPU:0 with 6395 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
but it refuses to use your GeForce GTX 1050. That is due to the environment variable TF_MIN_GPU_MULTIPROCESSOR_COUNT, which defaults to 8, while the GTX 1050 has only 5 multiprocessors.
Try setting it to 5, as the log itself suggests:
set TF_MIN_GPU_MULTIPROCESSOR_COUNT=5
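You can also set the variable from Python, as long as you do it before importing TensorFlow (it is read once at import time); a small sketch:

```python
import os

# Must happen before `import tensorflow`, or it has no effect;
# 5 matches the GTX 1050's multiprocessor count from the log.
os.environ["TF_MIN_GPU_MULTIPROCESSOR_COUNT"] = "5"
print(os.environ["TF_MIN_GPU_MULTIPROCESSOR_COUNT"])
```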
If you want to be sure which device is used, initialize the session with:
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
You can read more on the Using GPUs page of the TensorFlow docs.

Regarding the "Ignoring visible gpu device" message

When running a TensorFlow program leveraging GPU devices, I got the following message. What does the statement "Ignoring visible gpu device (device: 0, name: Quadro 5000, pci bus id: 0000:05:00.0) with Cuda compute capability 2.0." mean? What is being ignored?
2017-12-04 16:06:17.784599: I tensorflow/core/common_runtime/gpu/gpu_device.cc:887] Found device 0 with properties:
name: Quadro 5000
major: 2 minor: 0 memoryClockRate (GHz) 1.026
pciBusID 0000:05:00.0
Total memory: 2.50GiB
Free memory: 2.22GiB
2017-12-04 16:06:17.784636: I tensorflow/core/common_runtime/gpu/gpu_device.cc:908] DMA: 0
2017-12-04 16:06:17.784645: I tensorflow/core/common_runtime/gpu/gpu_device.cc:918] 0: Y
2017-12-04 16:06:17.784658: I tensorflow/core/common_runtime/gpu/gpu_device.cc:950] Ignoring visible gpu device (device: 0, name: Quadro 5000, pci bus id: 0000:05:00.0) with Cuda compute capability 2.0. The minimum required Cuda capability is 3.0.
TensorFlow prints this message when you have a GPU installed in the local machine but its compute capability is too low for TensorFlow to use it for accelerated computation. As the message states:
The minimum required Cuda capability is 3.0.
This means that you must use a CUDA-capable GPU with compute capability 3.0 or greater to run TensorFlow on the GPU.
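The check TensorFlow is effectively performing can be sketched as a simple tuple comparison (illustrative only; is_gpu_usable is a hypothetical helper, not a TensorFlow API):

```python
MIN_CUDA_CAPABILITY = (3, 0)  # minimum required by the TensorFlow build in the log

def is_gpu_usable(major: int, minor: int) -> bool:
    """Return True if a device with this compute capability is eligible."""
    # Tuple comparison orders by major version first, then minor.
    return (major, minor) >= MIN_CUDA_CAPABILITY

print(is_gpu_usable(2, 0))  # Quadro 5000 from the log -> False, device is ignored
print(is_gpu_usable(6, 1))  # e.g. a GTX 1070 -> True
```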