If a (discrete) GPU has its own video RAM, I have to copy my data from RAM to VRAM to be able to use it. But if the GPU is integrated with the CPU (e.g. AMD Ryzen) and shares the memory, do I still have to make copies, or can both access the same memory block?
It is possible to avoid copying in the case of integrated graphics, but this feature is platform-specific and may work differently for different vendors.
The article How to Increase Performance by Minimizing Buffer Copies on Intel® Processor Graphics describes how to achieve this for Intel hardware:
To create zero copy buffers, do one of the following:
Use CL_MEM_ALLOC_HOST_PTR and let the runtime handle creating a zero copy allocation buffer for you
If you already have the data and want to load the data into an OpenCL buffer object, then use CL_MEM_USE_HOST_PTR with a buffer allocated at a 4096 byte boundary (aligned to a page and cache line boundary) and a total size that is a multiple of 64 bytes (cache line size).
When reading or writing data to these buffers from the host, use clEnqueueMapBuffer(), operate on the buffer, then call clEnqueueUnmapMemObject().
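For example, the first option (CL_MEM_ALLOC_HOST_PTR) might look roughly like the sketch below; error handling is omitted, and the cl_context and cl_command_queue named context and queue are assumed to already exist:

```c
/* Minimal zero-copy sketch (error handling omitted). Assumes an existing
 * cl_context "context" and cl_command_queue "queue". */
#include <CL/cl.h>

cl_mem create_and_fill_zero_copy_buffer(cl_context context,
                                        cl_command_queue queue,
                                        size_t n)
{
    cl_int err;
    size_t bytes = n * sizeof(float);

    /* Let the runtime allocate host-visible memory for us. To wrap data you
     * already have, use CL_MEM_USE_HOST_PTR with a 4096-byte-aligned pointer
     * and a total size that is a multiple of 64 bytes instead. */
    cl_mem buf = clCreateBuffer(context,
                                CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR,
                                bytes, NULL, &err);

    /* Map instead of clEnqueueWriteBuffer(): on integrated GPUs the runtime
     * can return a pointer into the shared allocation, so no copy between
     * "host" and "device" memory is needed. */
    float *ptr = (float *)clEnqueueMapBuffer(queue, buf, CL_TRUE, CL_MAP_WRITE,
                                             0, bytes, 0, NULL, NULL, &err);
    for (size_t i = 0; i < n; ++i)
        ptr[i] = (float)i;                   /* produce the data in place */
    clEnqueueUnmapMemObject(queue, buf, ptr, 0, NULL, NULL);

    return buf;   /* ready to be used by kernels */
}
```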
GPU and CPU memory sharing?
A GPU has many cores but no dedicated control unit of its own; the CPU controls the GPU through its control unit. A dedicated GPU has its own DRAM (VRAM/GRAM), which is faster than system RAM. An integrated GPU is placed on the same chip as the CPU, and the CPU and GPU use the same RAM (shared memory).
References to other similar Q&As:
GPU - System memory mapping
Data sharing between CPU and GPU on modern x86 hardware with OpenCL or other GPGPU framework
Related
Does OpenCL local memory really exist on Mali/Adreno GPUs, or does it only exist on some special mobile phones?
If it exists, in which cases should we use local memory, e.g. for GEMM/Conv or other CL kernels?
Interesting question. OpenCL defines a number of conceptual memories, including local memory, constant memory, global memory, and private memory. Physically, as you know, the implementation of these memories is hardware dependent; for instance, some devices may emulate local memory using cache or system memory instead of providing dedicated physical storage.
AFAIK, ARM Mali GPUs do not have dedicated local memory, whereas Qualcomm Adreno GPUs do.
For instance, the table in OpenCL Optimization and Best Practices for Qualcomm Adreno GPUs defines each OpenCL memory type and gives its relative latency and physical location on Adreno GPUs.
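To make this concrete, here is an illustrative OpenCL C kernel (names and sizes are made up for this example) that stages data in local memory for a work-group reduction; on GPUs that emulate local memory in system memory, this staging may bring little or no benefit:

```c
/* Illustrative kernel: each work-group sums its slice of "in" using local
 * memory as a scratchpad and writes one partial sum. The __local argument is
 * sized from the host with clSetKernelArg(kernel, 2,
 * local_size * sizeof(float), NULL). */
__kernel void workgroup_sum(__global const float *in,
                            __global float *partial_sums,
                            __local float *scratch,
                            const unsigned int n)
{
    size_t gid = get_global_id(0);
    size_t lid = get_local_id(0);

    /* Stage one element per work-item into (ideally on-chip) local memory. */
    scratch[lid] = (gid < n) ? in[gid] : 0.0f;
    barrier(CLK_LOCAL_MEM_FENCE);

    /* Tree reduction carried out entirely in local memory. */
    for (size_t stride = get_local_size(0) / 2; stride > 0; stride /= 2) {
        if (lid < stride)
            scratch[lid] += scratch[lid + stride];
        barrier(CLK_LOCAL_MEM_FENCE);
    }

    /* Work-item 0 writes the group's partial sum back to global memory. */
    if (lid == 0)
        partial_sums[get_group_id(0)] = scratch[0];
}
```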
Answer updated:
As commented by SK-logic below, Mali-6xx does have local memory (shared with the cache).
On recent Mali GPUs the memory is shared rather than truly local, but OpenCL still models it as separate, so there are special mechanisms to make sure no copying takes place. Use of private/local memory is not recommended there.
For more information on best use of memory with Mali OpenCL, please read:
https://developer.arm.com/documentation/101574/0400/Optimizing-OpenCL-for-Mali-GPUs/Optimizing-memory-allocation/About-memory-allocation?lang=en
When running a TensorFlow job I sometimes get a non-fatal error that says GPU memory exceeded, and then I see the "Shared memory GPU usage" go up on the Performance Monitor on Windows 10.
How does TensorFlow achieve this? I have looked at CUDA documentation and not found a reference to the Dedicated and Shared concepts used in the Performance Monitor. There is a Shared Memory concept in CUDA but I think it is something on the device, not the RAM I see in the Performance Monitor, which is allocated by the BIOS from CPU RAM.
Note: A similar question was asked but not answered by another poster.
Shared memory in Windows 10 does not refer to the same concept as CUDA shared memory (or local memory in OpenCL); it refers to host-accessible memory allocated for the GPU. For integrated graphics, host and device memory are usually one and the same, and effectively shared, because the CPU and GPU sit on the same die and can access the same RAM. For dedicated graphics cards with their own memory, it is separate memory allocated on the host side for use by the GPU.
Shared memory in compute APIs such as GLSL compute shaders or Nvidia CUDA kernels refers to a programmer-managed cache layer (sometimes referred to as "scratchpad memory") which on Nvidia devices exists per SM, can only be accessed by a single SM, and is usually between 32 kB and 96 kB per SM. Its purpose is to speed up access to data that is used often.
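If you want to see how large that on-chip scratchpad is on a particular device, the OpenCL analogue can be queried with clGetDeviceInfo; a minimal sketch, assuming a cl_device_id named device is already available:

```c
/* Query the size and kind of the per-compute-unit scratchpad ("local
 * memory" in OpenCL terms). Assumes "device" is an existing cl_device_id. */
#include <CL/cl.h>
#include <stdio.h>

void print_local_mem_info(cl_device_id device)
{
    cl_ulong local_mem_size = 0;
    cl_device_local_mem_type mem_type;

    clGetDeviceInfo(device, CL_DEVICE_LOCAL_MEM_SIZE,
                    sizeof(local_mem_size), &local_mem_size, NULL);
    /* CL_LOCAL means dedicated on-chip storage; CL_GLOBAL means local memory
     * is emulated in regular global memory. */
    clGetDeviceInfo(device, CL_DEVICE_LOCAL_MEM_TYPE,
                    sizeof(mem_type), &mem_type, NULL);

    printf("local memory: %llu bytes (%s)\n",
           (unsigned long long)local_mem_size,
           mem_type == CL_LOCAL ? "dedicated" : "emulated in global memory");
}
```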
If you see an increase in shared memory used by TensorFlow, you have a dedicated graphics card, and you are hitting "GPU memory exceeded", it most likely means you are using too much memory on the GPU itself, so the driver is trying to allocate memory from elsewhere (i.e. from system RAM). This can make your program much slower, as the bandwidth and latency are much worse for non-device memory on a dedicated graphics card.
I think I figured this out by accident. The "Shared GPU Memory" reported by the Windows 10 Task Manager Performance tab does get used if there are multiple processes hitting the GPU simultaneously. I discovered this by writing a Python program that used multiprocessing to queue up multiple GPU tasks, and I saw the "Shared GPU memory" start filling up. This is the only way I've seen it happen.
So it is only for queueing tasks. Each individual task is still limited to the onboard DRAM minus whatever is permanently allocated to actual graphics processing, which seems to be around 1GB.
I am writing code to pick a physical device, but I want to put in some logic to prefer newer devices (more VRAM or higher clock speed) in case multiple ones fit my minimum feature requirements.
Is this possible?
Vulkan has no specific API calls to get such GPU details; for those you'd need to go with vendor-specific APIs like NVAPI. The only hint may be the deviceType member of VkPhysicalDeviceProperties, which tells you whether the device is an integrated, discrete, or virtual GPU.
The VRAM size, though, can be determined by finding the memory heaps with the DEVICE_LOCAL bit set using vkGetPhysicalDeviceMemoryProperties. The VkPhysicalDeviceMemoryProperties structure returned by that function lists all available memory heaps in its memoryHeaps member. The configuration differs, especially between discrete and integrated GPUs, so this may not always be what you're looking for, e.g. on integrated GPUs with shared memory.
Heaps for a discrete GPU: http://vulkan.gpuinfo.org/displayreport.php?id=1432#memoryheaps
Heaps for an integrated GPU: http://vulkan.gpuinfo.org/displayreport.php?id=1200#memoryheaps
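As a rough sketch of how such a selection could look in C (assuming a valid VkInstance named instance; the preference heuristic is made up for illustration and error handling is omitted):

```c
/* Pick a physical device, preferring discrete GPUs and, among those, the
 * largest total DEVICE_LOCAL heap. Illustrative heuristic only. */
#include <vulkan/vulkan.h>
#include <stdint.h>
#include <stdlib.h>

VkPhysicalDevice pick_physical_device(VkInstance instance)
{
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);
    VkPhysicalDevice *devices = malloc(count * sizeof(VkPhysicalDevice));
    vkEnumeratePhysicalDevices(instance, &count, devices);

    VkPhysicalDevice best = VK_NULL_HANDLE;
    uint64_t best_score = 0;

    for (uint32_t i = 0; i < count; ++i) {
        VkPhysicalDeviceProperties props;
        VkPhysicalDeviceMemoryProperties mem;
        vkGetPhysicalDeviceProperties(devices[i], &props);
        vkGetPhysicalDeviceMemoryProperties(devices[i], &mem);

        /* Sum all DEVICE_LOCAL heaps. On integrated GPUs this is shared
         * system memory, so it is only a rough proxy for "VRAM". */
        uint64_t local_bytes = 0;
        for (uint32_t h = 0; h < mem.memoryHeapCount; ++h)
            if (mem.memoryHeaps[h].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT)
                local_bytes += mem.memoryHeaps[h].size;

        /* Made-up preference: discrete beats everything, then heap size. */
        uint64_t score = local_bytes;
        if (props.deviceType == VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU)
            score += (uint64_t)1 << 62;

        if (score > best_score) {
            best_score = score;
            best = devices[i];
        }
    }
    free(devices);
    return best;
}
```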
Can a 32-bit application utilize more than 4GB between system RAM and video RAM?
Context: Some games (for instance, Skyrim) are compiled under 32-bit architectures. I am running a 64-bit system with 16GB of DDR3 on the motherboard and 4GB of GDDR5 on the graphics card. Does the 32-bit architecture of the program limit its address capability to 4GB of total space, or is the graphics memory (which is on-board the graphics card) in a separate address space, thus neatly allowing the total sum of addressable memory to be greater than 4GB?
I ask because I have no way of knowing whether the paged amount in task manager and the amount of graphics memory used in GPU-Z are independent sets (which would seem to indicate that they are separately addressed, as the sum can be greater than 4GB), or if there is overlap between them.
The architecture the game is compiled for does matter: a 32-bit process gets at most 4 GB of virtual address space even on a 64-bit OS (and on Windows only 2 GB by default, or 4 GB if the executable is built large-address-aware). Video memory, however, is managed by the driver and does not all have to be mapped into the process's address space at once, so the amounts reported by Task Manager and GPU-Z are largely independent of each other.
Most of the benchmarks for GPU performance and load testing are graphics related. Is there any benchmark that is computationally intensive but not graphics related? I am using:
DELL XPS 15 laptop,
nvidia GT 525M graphics card,
Ubuntu 11.04 with bumblebee installed.
I want to load test my system to find the maximum load the graphics card can handle. Are there any non-graphics benchmarks for GPUs?
What exactly do you want to measure?
To measure GFLOPS on the card, just write a simple kernel in CUDA (or OpenCL).
If you have never written anything in CUDA, let me know and I can post something for you.
If your application is not compute-intensive (take a look at the roofline model paper), then I/O will be the bottleneck: getting data from global (card) memory to the processors takes hundreds of cycles.
On the other hand, if your application IS compute-intensive, then just time it and calculate how many bytes you process per second. To hit the maximum GFLOPS (your card can do about 230), you need many FLOPs per memory access, so that the processors stay busy instead of stalling on memory and switching threads.
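For example, a compute-throughput microbenchmark kernel could look roughly like the following OpenCL C sketch (the kernel name and iteration count are arbitrary). Launch many work-items, time the kernel with events or a host timer, and then GFLOPS ≈ (global work size × ITERATIONS × 2) / (elapsed seconds × 10^9), which you can compare against the card's theoretical peak:

```c
/* Illustrative throughput kernel: each work-item runs a long chain of
 * dependent multiply-adds so arithmetic, not memory traffic, dominates.
 * Total FLOPs executed = global work size * ITERATIONS * 2. */
#define ITERATIONS 4096

__kernel void flops_bench(__global float *out, const float seed)
{
    float a = seed + (float)get_global_id(0);
    float b = 0.5f;

    for (int i = 0; i < ITERATIONS; ++i)
        a = a * b + 1.0f;      /* one multiply + one add per iteration */

    /* Write the result so the compiler cannot optimize the loop away. */
    out[get_global_id(0)] = a;
}
```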