Screen slows down when GPU-Util is about 50% - gpu

Screen runs smoothly when GPU-Util is around 25%, but pretty slowly when it is around 55%. In the first case GPU memory usage was around 5.7GB/8GB, and in the second 5.2GB/8GB.
On a second GPU (which I'm pretty sure the OS is not using) I see GPU-Util at 99%, which makes me think GPUs are capable of reaching very high GPU-Util when needed.
My hypothesis is there is nothing wrong with my computer, but that I'm missing something of how things work.
Why does the screen slow down at 55% and not in the 90s?
In case it helps, I'm on Linux 14.04 with two GTX 1080s, and I get GPU-Util from running nvidia-smi.

Ended up being something dumb.
Even though the processes I was running were GPU-intensive, system RAM was even more strained.
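
For anyone hitting a similar slowdown, a small monitoring loop that prints GPU utilization, GPU memory, and system RAM side by side makes this kind of mismatch easy to spot. This is just a sketch: it assumes nvidia-smi is on the PATH and that the third-party psutil package is installed (pip install psutil); the query fields used are standard nvidia-smi options.

    # Print GPU utilization/memory (via nvidia-smi) next to system RAM (via psutil)
    # so a RAM or swap bottleneck stands out even when the GPU looks under-used.
    import subprocess
    import time

    import psutil

    def gpu_stats():
        """Return (util %, mem used MiB, mem total MiB) for each GPU."""
        out = subprocess.check_output(
            ["nvidia-smi",
             "--query-gpu=utilization.gpu,memory.used,memory.total",
             "--format=csv,noheader,nounits"],
            text=True)
        return [tuple(int(v) for v in line.split(", ")) for line in out.strip().splitlines()]

    while True:
        ram = psutil.virtual_memory()
        gpus = " | ".join(f"GPU {i}: {u}% util, {m}/{t} MiB"
                          for i, (u, m, t) in enumerate(gpu_stats()))
        print(f"{gpus} | RAM {ram.percent}% used, swap {psutil.swap_memory().percent}% used")
        time.sleep(2)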

Related

nvidia-smi gpu-util meaning

I am new to using GPUs. I've been searching for the meaning of GPU-Util in nvidia-smi, but I haven't found a satisfying answer.
I attached a PNG of the nvidia-smi output, and my question is below:
As you can see, my current GPU-Util is 100%, my GPU memory usage is 5838MiB, and the total GPU memory is 5941MiB. If I add another process 'A' that uses 50MiB of GPU memory, will process 'A' be left pending because GPU-Util is already 100%, or will it proceed because there is enough GPU memory left to run it?
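
A toy experiment can illustrate what GPU-Util does and does not mean (a sketch assuming PyTorch and a CUDA-capable GPU, not an authoritative answer): the loop below keeps a GPU at roughly 100% utilization while allocating only a few MiB of memory. GPU-Util measures how busy the compute engine has been over the sampling window, not how much capacity is left; in the default compute mode another process can still submit work to a 100%-busy GPU as long as its memory allocation fits, with the two processes simply sharing the hardware.

    # Keep the GPU ~100% busy while using only a few MiB of memory (PyTorch sketch).
    import torch

    x = torch.randn(1024, 1024, device="cuda")   # ~4 MiB tensor
    for _ in range(50_000):
        x = x @ x                 # keep the compute engine occupied
        x = x / x.norm()          # rescale so values stay finite
    torch.cuda.synchronize()
    print(torch.cuda.memory_allocated() / 2**20, "MiB allocated")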

Does low GPU utilization indicate a bad fit for GPU acceleration?

I'm running some GPU-accelerated PyTorch code and training it against a custom dataset, but while monitoring the state of my workstation during the process, I see GPU usage along the following lines:
I have never written my own GPU primitives, but I have a long history of doing low-level optimizations for CPU-intensive workloads, and my experience there makes me concerned that, while pytorch/torchvision are offloading the work to the GPU, this may not be an ideal workload for GPU acceleration.
When optimizing CPU-bound code, the goal is to get the CPU to perform as much (meaningful) work as possible per unit of time: a supposedly CPU-bound task that shows only 20% CPU utilization (of a single core or of all cores, depending on whether the task is parallelizable) is not being performed efficiently, because the CPU is sitting idle when ideally it would be working towards your goal. Low CPU usage means that something other than number crunching is taking up your wall-clock time, whether it's inefficient locking, heavy context switching, pipeline flushes, blocking IO in the main loop, etc., all of which prevent the workload from properly saturating the CPU.
When looking at the GPU utilization in the chart above, and again speaking as a complete novice when it comes to GPU utilization, it strikes me that the GPU usage is extremely low and appears to be limited by the rate at which data is being copied into the GPU memory. Is this assumption correct? I would expect to see a spike in copy (to GPU) followed by an extended period of calculations/transforms, followed by a brief copy (back from the GPU), repeated ad infinitum.
I notice that despite the low (albeit constant) copy utilization, the GPU memory usage is constantly pegged at the 8GB limit. Can I assume the workload is being limited by the small amount of GPU memory available (i.e. not maxing out the copy bandwidth because there's only so much that can be copied at a time)?
Does that mean this is a workload better suited for the CPU (in this particular case with this RTX 2080 and in general with any card)?
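
One way to test the "limited by copies" hypothesis in PyTorch is to make the input pipeline overlap host-to-device transfers with compute: multiple DataLoader workers, pinned host memory, and non-blocking copies. The snippet below is only a sketch with a synthetic stand-in dataset and a trivial model rather than the actual training code; if GPU utilization rises noticeably with these changes, the data path was the limiter rather than the GPU itself.

    # Input pipeline tuned so batch preparation and copies overlap with GPU compute.
    # The dataset and model are synthetic placeholders, not the real workload.
    import torch
    from torch.utils.data import DataLoader, TensorDataset

    device = torch.device("cuda")

    data = TensorDataset(torch.randn(2000, 3, 64, 64),
                         torch.randint(0, 10, (2000,)))
    loader = DataLoader(data,
                        batch_size=64,
                        num_workers=4,      # prepare batches on CPU workers in parallel
                        pin_memory=True)    # page-locked buffers enable async copies

    model = torch.nn.Sequential(torch.nn.Flatten(),
                                torch.nn.Linear(3 * 64 * 64, 10)).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for inputs, targets in loader:
        # non_blocking=True lets the host-to-device copy overlap with GPU work
        inputs = inputs.to(device, non_blocking=True)
        targets = targets.to(device, non_blocking=True)
        loss = torch.nn.functional.cross_entropy(model(inputs), targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()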

How to allocate more CPU and RAM to SUMO (Simulation of Urban Mobility)

I have downloaded and unzipped sumo-win64-0.32.0 and am running sumo.exe on a powerful machine (64GB RAM, Xeon CPU E5-1650 v4 @ 3.6GHz) for about 140k trips, 108k edges, and 25k vehicle types, which depart in the first 30 minutes of the simulation. I have noticed that my CPU is utilized only 30% and memory only 38%. Is there any way to increase the speed by forcing SUMO to use more CPU and RAM, or possibly run in parallel? From "Can SUMO be run in parallel (on multiple cores or computers)?
The simulation itself always runs on a single core."
it appears that parallel processing is not possible, but what about dedicating more CPU and RAM?
Windows usually shows CPU utilization such that 100% means all cores are used, so 30% is probably already more than one core (the E5-1650 v4 has 6 cores / 12 threads, so 30% overall corresponds to roughly 3-4 busy threads), and there is no way of increasing that for a single-threaded application such as SUMO. Also, if your scenario fits within RAM completely, there is no point in allocating more. You might want to try one of the several parallelization approaches SUMO has, but none of them got further than some toy examples (and none is in the official distribution), and the speed improvements are sometimes only marginal. Probably the best you can do is some profiling to find the performance bottlenecks and/or send your results to the developers.

Tensorflow: GPU util big difference when setting CUDA_VISIBLE_DEVICES to different values

Linux: Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-38-generic x86_64)
Tensorflow: compile from source, 1.4
GPU: 4xP100
I am trying the new released object detection tutorial training program.
I noticed that there is a big difference when I set CUDA_VISIBLE_DEVICES to different values. Specifically, when it is set to "gpu:0", the GPU util is quite high, around 80%-90%, but when I set it to other GPU devices, such as gpu:1, gpu:2 etc., the GPU util is very low, between 10% and 30%.
As for the training speed, it seems to be roughly the same either way, and much faster than when using the CPU only.
I'm just curious how this happens.
As this answer mentions, GPU-Util is a measure of the usage/busyness of each GPU's compute engine.
I'm not an expert, but from my experience GPU 0 is generally where most of your processes run by default. CUDA_VISIBLE_DEVICES sets the GPUs seen by the processes you run from that bash session. Therefore, by setting CUDA_VISIBLE_DEVICES to gpu:1/2 you are making it run on less busy GPUs.
Moreover, you only reported one value; in theory you should have one per GPU. There is the possibility you were only looking at GPU-Util for GPU 0, which would of course decrease if you are no longer using it.
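
As a concrete illustration of the remapping (a sketch for the TF 1.x API the question uses, with the device index chosen arbitrarily): CUDA_VISIBLE_DEVICES has to be set before CUDA is initialized, and whichever physical GPU you expose is renumbered as device 0 inside the process, so tf.device("/gpu:0") then runs on the physical GPU you picked. nvidia-smi still reports one GPU-Util row per physical GPU, so check the row matching the device you actually exposed.

    # TF 1.x sketch (the question mentions Tensorflow 1.4): expose only physical
    # GPU 1; inside this process it appears as "/gpu:0".
    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = "1"   # must be set before TensorFlow touches CUDA

    import tensorflow as tf

    with tf.device("/gpu:0"):                  # maps to physical GPU 1 in this process
        c = tf.constant([1.0, 2.0]) + tf.constant([3.0, 4.0])

    # log_device_placement prints which (visible) device each op actually ran on
    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        print(sess.run(c))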

How fast is PhysX on GPU compared to physics engines on CPU?

I have an application that is written to use the Bullet physics engine. I am running it on an Intel i7 2600K CPU (4 cores / 8 threads). The application has to process millions of chunks of physics work, each of which can be done independently. It currently runs with 8 processes, each working through its quota of the total independently. In summary, this work has a lot of easy parallelism.
Assuming that I can acquire the best NVIDIA consumer graphics card (say Titan), what is the ballpark improvement in the physics engine performance I can see by switching from Bullet on CPU to Physx on GPU? That is, approximately how much faster will this application run if rewritten for Physx?
I found a few papers that compare the result quality between Bullet and Physx, but could not find anything about the performance comparison.
Pierre Terdiman has done an extensive series of performance comparisons between Bullet 2.81 and PhysX 2.8.4, 3.2 and 3.3 here. These are comparisons between Bullet and PhysX, both running on the CPU. It can be seen that the performance difference between the two depends on which features of the engine are being used. For a few features, the performance is about the same, while for most others there is a 3-5x speedup.
He also mentions in the addendum that not all physics features have been ported to PhysX on GPU. Cloth and particles can be accelerated on the GPU, while rigid bodies are currently being ported in a feature called GPU Rigid Bodies (GRB). If a feature is GPU accelerated, you can expect it to be faster than on the CPU, but by how much is not clear.
I found this; it's not a comparison against any specific CPU physics engine, but one hopes they are comparing like with like and running PhysX on the CPU.
It's rather unspecific and comes from a FAQ by the makers of PhysX, so take it with a pinch of salt.
From here:
Running PhysX on a mid-to-high-end GeForce GPU will enable 10-20 times more effects and visual fidelity than physics running on a high-end CPU.
Let's say PhysX is doing particle interactions, such as gravity or fluid movement. Then cache control is very important, since these workloads are embarrassingly parallel. You cannot directly control your CPU's cache, but you can explicitly use the on-chip shared memory of a Titan, which can make it maybe 100x faster than an 8-thread CPU.
If the workload is not so parallel, has a lot of branching, and does not involve heavy computation, then the speedup is around 5x-10x (or roughly whatever the bandwidth ratio of graphics RAM to main RAM is).