Are HOST_CACHED_BIT and HOST_COHERENT_BIT contradicting each other? - vulkan

There are two types of memory in Vulkan that are puzzling me:
VK_MEMORY_PROPERTY_HOST_COHERENT_BIT bit indicates that the host cache
management commands vkFlushMappedMemoryRanges and
vkInvalidateMappedMemoryRanges are not needed to flush host writes to
the device or make device writes visible to the host, respectively.
VK_MEMORY_PROPERTY_HOST_CACHED_BIT bit indicates that memory allocated
with this type is cached on the host. Host memory accesses to uncached
memory are slower than to cached memory, however uncached memory is
always host coherent.
From what I understand, modifications to memory of type COHERENT are seen immediately by both the host and the device, while modifications to memory of type CACHED may not be seen immediately by the host and/or the device, i.e. invalidating/flushing the memory is needed to keep the caches in sync.
I have seen some implementations combine both flags, and that is a valid combination according to the 10.2. Device Memory section of the documentation. Isn't that a contradiction (cached and coherent)?

Cached/coherent memory effectively means that the GPU can see the CPU's caches. This often happens on architectures where the GPU and the CPU are sitting on the same chip. The GPU is effectively just another core on the CPU's die, with access to the CPU's caches.
But it can happen on other architectures as well. Some standalone GPUs offer cached/coherent memory. Indeed, most of them don't offer cached memory without coherency. From an architectural standpoint, it represents some way for the GPU to access data through at least part of the CPU cache.
The key thing about cached/coherent memory you should remember is this: if there is an alternative memory type for that memory pool, then the alternative is probably faster for the device to access. Also, if alternatives exist, it is entirely possible that the device may not be able to have images or buffers of certain types/formats stored in such memory types. So unless you really need cached memory access from the CPU, or the device offers no alternative, it's best to avoid it.
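To see which of these combinations your own implementation actually exposes, you can just walk the memory types it reports. Here is a minimal sketch (not from the original answer); it assumes you already have a valid VkPhysicalDevice called physicalDevice:

```cpp
// Sketch: enumerate the memory types an implementation exposes, so you can see
// which property combinations (e.g. HOST_CACHED | HOST_COHERENT) exist on your hardware.
#include <vulkan/vulkan.h>
#include <cstdio>

void printMemoryTypes(VkPhysicalDevice physicalDevice) {
    VkPhysicalDeviceMemoryProperties props{};
    vkGetPhysicalDeviceMemoryProperties(physicalDevice, &props);

    for (uint32_t i = 0; i < props.memoryTypeCount; ++i) {
        VkMemoryPropertyFlags f = props.memoryTypes[i].propertyFlags;
        std::printf("type %u (heap %u):%s%s%s%s\n",
                    i, props.memoryTypes[i].heapIndex,
                    (f & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT)  ? " DEVICE_LOCAL"  : "",
                    (f & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)  ? " HOST_VISIBLE"  : "",
                    (f & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) ? " HOST_COHERENT" : "",
                    (f & VK_MEMORY_PROPERTY_HOST_CACHED_BIT)   ? " HOST_CACHED"   : "");
    }
}
```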

There are cache schemes that monitor write accesses to RAM over the memory bus to invalidate the Host's cache when the memory is written to.
This allows the best of both worlds, cached yet coherent accesses, but at the cost of a more complex architecture.

Related

Is there a way to map a host-cached Vulkan buffer to a specific memory location?

Vulkan is able to import host memory using VkImportMemoryHostPointerInfoEXT. I queried the supported memory types for VK_EXTERNAL_MEMORY_HANDLE_TYPE_HOST_ALLOCATION_BIT_EXT but the only kind of memory that was available for it was coherent, which does not work for my use case. The memory needs to use explicit invalidations/flushes for performance reasons. So really, I don't want the API to allocate any host-side memory, I just want to tell it the base address that the buffer should upload from/download to. Otherwise I have to use intermediate copies. Using the address returned by vkMapMemory for the host-side work is not desirable for my use-case.
If the Vulkan implementation does not allow you to import memory allocations as "CACHED", then you can't force it to do so. The API provides the opportunity for the implementation to advertise the ability to import your allocations as "CACHED", but the implementation explicitly refused to do it.
Which probably means that it can't. And you can't make the implementation do something it can't do.
So if you have some API that created and manipulates some memory (which cannot use memory provided by someone else), and the Vulkan implementation won't allow reading from that memory unless it is allowed to remove the cached nature of the allocation, and you need CPU caching of that memory, then you're going to have to fall back on memcpy.
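Before falling back to memcpy, you can at least ask the implementation up front whether a given host pointer could be imported as a CACHED type. A rough sketch, assuming device and physicalDevice are valid, VK_EXT_external_memory_host is enabled, and hostPtr respects minImportedHostPointerAlignment:

```cpp
// Sketch: check whether a host pointer could be imported as a HOST_CACHED
// memory type via VK_EXT_external_memory_host.
#include <vulkan/vulkan.h>

bool canImportAsCached(VkDevice device, VkPhysicalDevice physicalDevice, void* hostPtr) {
    auto pfn = (PFN_vkGetMemoryHostPointerPropertiesEXT)
        vkGetDeviceProcAddr(device, "vkGetMemoryHostPointerPropertiesEXT");
    if (!pfn) return false;

    VkMemoryHostPointerPropertiesEXT hostProps{};
    hostProps.sType = VK_STRUCTURE_TYPE_MEMORY_HOST_POINTER_PROPERTIES_EXT;
    if (pfn(device, VK_EXTERNAL_MEMORY_HANDLE_TYPE_HOST_ALLOCATION_BIT_EXT,
            hostPtr, &hostProps) != VK_SUCCESS)
        return false;

    VkPhysicalDeviceMemoryProperties memProps{};
    vkGetPhysicalDeviceMemoryProperties(physicalDevice, &memProps);

    // hostProps.memoryTypeBits has one bit set per memory type the pointer may be imported as.
    for (uint32_t i = 0; i < memProps.memoryTypeCount; ++i) {
        if ((hostProps.memoryTypeBits & (1u << i)) &&
            (memProps.memoryTypes[i].propertyFlags & VK_MEMORY_PROPERTY_HOST_CACHED_BIT))
            return true;
    }
    return false; // no CACHED type offered for imported host allocations
}
```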
I want to mirror memory between the CPU and GPU so that I can access it from either without an implicit PCI-e bus transfer.
If the GPU is discrete, that's impossible. In a discrete GPU setup, the GPU and the CPU have separate local memory pools, and access to either pool from the other requires some form of PCIe transfer operation. Vulkan lets you pick which one is going to have slower access, but one of them will have slower access to the memory.
If the GPU is integrated, then typically there is only one memory pool and one memory type for it. That type will be both local and coherent (and probably cached too), which represents fast access from both devices.
Whether you use VkImportMemoryHostPointerInfoEXT or vkMapMemory on a non-DEVICE_LOCAL_BIT heap, you will typically get a COHERENT memory type.
That is because conventional host heap memory obtained from malloc in C is naturally coherent (CPUs typically have automatic cache-coherency mechanisms); there is no cflush() nor cinvalidate() in C.
There is no reason for there to be implicit PCI-e transfers when reading or writing such memory from the host side. Of course, a dedicated GPU has to read it somehow, so there would be bus transfers when the device tries to access the memory. Alternatively, you need explicit memory in a DEVICE_LOCAL_BIT heap, and you transfer data between the two explicitly via vkCmdCopy* to keep them the same.
Actual UMA architectures could have a non-COHERENT memory type. But their memory heap is always advertised as DEVICE_LOCAL_BIT (even if it is the main memory).
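For the explicit vkCmdCopy* path mentioned above, the recording looks roughly like this. This is only a sketch; it assumes cmd is a command buffer in the recording state, and that staging (HOST_VISIBLE) and deviceLocal (DEVICE_LOCAL) are existing buffers of at least size bytes with TRANSFER_SRC/TRANSFER_DST usage:

```cpp
// Sketch: keep a HOST_VISIBLE staging buffer and a DEVICE_LOCAL buffer in sync
// with an explicit vkCmdCopyBuffer, then make the transfer visible to later reads.
#include <vulkan/vulkan.h>

void recordUpload(VkCommandBuffer cmd, VkBuffer staging, VkBuffer deviceLocal,
                  VkDeviceSize size) {
    VkBufferCopy region{};
    region.srcOffset = 0;
    region.dstOffset = 0;
    region.size = size;
    vkCmdCopyBuffer(cmd, staging, deviceLocal, 1, &region);

    // Make the transfer write visible to subsequent GPU reads.
    VkMemoryBarrier barrier{};
    barrier.sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER;
    barrier.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;
    barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
    vkCmdPipelineBarrier(cmd, VK_PIPELINE_STAGE_TRANSFER_BIT,
                         VK_PIPELINE_STAGE_ALL_COMMANDS_BIT, 0,
                         1, &barrier, 0, nullptr, 0, nullptr);
}
```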

Transferring memory from GPU to CPU with Vulkan and vkInvalidateMappedMemoryRanges synchronization?

In Vulkan, when I want to transfer some memory from the GPU back to the CPU, I think the most efficient way to do this is to write the data into memory which has the flags VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_CACHED_BIT.
Question #1: Is that assumption correct?
(Full list of available memory property flags can be found in Vulkan's documentation of VkMemoryPropertyFlagBits)
In order to get the latest data, I have to invalidate the memory using vkInvalidateMappedMemoryRanges, right?
Question #2: What is happening under the hood during vkInvalidateMappedMemoryRanges? Is this just a memcpy from some internal cache or can this be a longer procedure?
Question #3: If this could take longer (i.e. it is not a simple memcpy), then I probably should have some possibility to synchronize with the completion of it, right? However, vkInvalidateMappedMemoryRanges does not offer any synchronization parameters. Actually, my question is: IF I have to synchronize it, HOW do I synchronize it?
Question #1: Is that assumption correct?
Probably not, but it depends on your platform whether the alternative is supported. For GPU->CPU transfers there are really three options:
1. HOST_VISIBLE
This type is visible to the host and guaranteed to be coherent, but not cached on the host. CPU reads will be very slow but that might be OK if you are only reading back a small amount of data (and might be cheaper than issuing vkInvalidateMappedMemoryRanges(), and there is little point streaming data into the CPU cache if you never expect to touch it again on the CPU).
2. HOST_VISIBLE | HOST_CACHED
This type is visible to the host and cached, but not guaranteed to be coherent (CPU and GPU might see different things at the same address if you don't manually enforce coherency). For this type of memory you must use vkInvalidateMappedMemoryRanges() after GPU writes and before CPU reads (or vkFlushMappedMemoryRanges() for the other direction) to ensure that one processor can see what the other wrote, or you might read stale data.
Data access will be fast once it is in the cache, and you can benefit from CPU-side data fetch tricks such as explicit preloads and cache prefetching, but you will pay an overhead for the invalidate operation.
3. HOST_VISIBLE | HOST_CACHED | HOST_COHERENT
Finally you have the host cached AND coherent memory type, which gives you something like the best of both worlds if you have high-bandwidth reads to make on the CPU. The hardware provides the coherency implementation automatically, so there is no need to invalidate, BUT it's not guaranteed to be available on all platforms. For bulk data reads on the CPU I would expect this to be the most efficient type in cases where it is available.
It's worth noting that there is no single "best" memory setting for all allocations. Do not use host cached or host coherent memory for things you never expect to transfer back to the CPU (memory coherency isn't free in terms of power or memory performance).
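Putting the three options above into code, a readback allocator might pick a memory type in that order of preference. A minimal sketch (my own illustration, not from the answer), where typeBits comes from VkMemoryRequirements::memoryTypeBits of the readback buffer:

```cpp
// Sketch: choose a readback memory type, preferring
// HOST_VISIBLE | HOST_CACHED | HOST_COHERENT, then HOST_VISIBLE | HOST_CACHED,
// then plain HOST_VISIBLE.
#include <vulkan/vulkan.h>
#include <cstdint>

uint32_t findReadbackType(VkPhysicalDevice physicalDevice, uint32_t typeBits) {
    VkPhysicalDeviceMemoryProperties props{};
    vkGetPhysicalDeviceMemoryProperties(physicalDevice, &props);

    const VkMemoryPropertyFlags preferences[] = {
        VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_CACHED_BIT |
            VK_MEMORY_PROPERTY_HOST_COHERENT_BIT,
        VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_CACHED_BIT,
        VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT,
    };

    for (VkMemoryPropertyFlags wanted : preferences) {
        for (uint32_t i = 0; i < props.memoryTypeCount; ++i) {
            if ((typeBits & (1u << i)) &&
                (props.memoryTypes[i].propertyFlags & wanted) == wanted)
                return i;
        }
    }
    return UINT32_MAX; // no host-visible type allowed for this resource (unlikely)
}
```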
Question #2: What is happening under the hood during vkInvalidateMappedMemoryRanges? Is this just a memcpy from some internal cache or can this be a longer procedure?
In the case where you have non-coherent memory then it does whatever is needed to make them coherent. Typically this means invalidating (discarding) cache lines in CPU cache which may contain stale copies of the data, ensuring that subsequent reads by the CPU see the version that the GPU actually wrote.
Question #3: If this could take longer (i.e. it is not a simple memcpy), then I probably should have some possibility to synchronize with the completion of it, right?
No. Invalidation is a CPU-side operation, so it takes CPU time to complete and the CPU is busy while the operation is completing. In general you can avoid the need to do it at all by using coherent memory though.
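Tying Questions #2 and #3 together, the readback sequence looks roughly like this. A sketch under stated assumptions: the readback memory is already mapped at mapped, the GPU copy was submitted with fence, and coherent says whether the chosen memory type has HOST_COHERENT:

```cpp
// Sketch of a GPU->CPU readback: synchronize against the GPU with a fence,
// then (for non-coherent memory) invalidate the CPU cache before reading.
#include <vulkan/vulkan.h>
#include <cstdint>
#include <cstring>

void readBack(VkDevice device, VkDeviceMemory memory, VkFence fence,
              const void* mapped, void* dst, VkDeviceSize size, bool coherent) {
    // 1. Wait until the GPU has actually finished writing. This is the
    //    synchronization you need; the invalidate itself is a CPU-side operation.
    vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);

    // 2. For non-coherent memory, discard stale CPU cache lines before reading.
    if (!coherent) {
        VkMappedMemoryRange range{};
        range.sType = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE;
        range.memory = memory;
        range.offset = 0;
        range.size = VK_WHOLE_SIZE;
        vkInvalidateMappedMemoryRanges(device, 1, &range);
    }

    // 3. Now the mapped pointer sees the data the GPU wrote.
    std::memcpy(dst, mapped, static_cast<size_t>(size));
}
```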

What's the difference between local device memory and dedicated memory?

In the specification, I saw several extensions for working with so-called dedicated memory. My understanding is that this is the on-chip memory. But how does it then differ from device-local memory?
In the documentation of the VK_KHR_dedicated_allocation extension we can read:
This extension enables resources to be bound to a dedicated allocation, rather than suballocated. For any particular resource, applications can query whether a dedicated allocation is recommended, in which case using a dedicated allocation may improve the performance of access to that resource.
So I think the difference is not between dedicated memory and device-local memory but between a dedicated allocation and a normal/general suballocation. Which memory heap that object is allocated from is another story. The extension allows you to check whether a dedicated allocation is recommended for a given resource, or whether the resource can suballocate from (use part of) a larger memory allocation.
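The query itself is done by chaining VkMemoryDedicatedRequirements into the memory-requirements call (VK_KHR_dedicated_allocation, core in Vulkan 1.1). A sketch, assuming buffer is an existing VkBuffer:

```cpp
// Sketch: ask whether a buffer prefers or requires a dedicated allocation.
#include <vulkan/vulkan.h>

bool wantsDedicatedAllocation(VkDevice device, VkBuffer buffer,
                              VkMemoryRequirements* outReqs) {
    VkMemoryDedicatedRequirements dedicated{};
    dedicated.sType = VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS;

    VkMemoryRequirements2 reqs2{};
    reqs2.sType = VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2;
    reqs2.pNext = &dedicated;

    VkBufferMemoryRequirementsInfo2 info{};
    info.sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_REQUIREMENTS_INFO_2;
    info.buffer = buffer;

    vkGetBufferMemoryRequirements2(device, &info, &reqs2);
    *outReqs = reqs2.memoryRequirements;

    return dedicated.prefersDedicatedAllocation == VK_TRUE ||
           dedicated.requiresDedicatedAllocation == VK_TRUE;
}
```

When it returns true, you chain VkMemoryDedicatedAllocateInfo (with its buffer member set) into VkMemoryAllocateInfo::pNext instead of suballocating the resource from a big block.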

Should I synchronize access to memory with HOST_VISIBLE_BIT | HOST_COHERENT_BIT flags?

In other words, is it possible that the GPU will read the memory while I am mapping it on the host and writing to it?
There is a distinction between "visibility" and "availability" in Vulkan's memory model. You need both if you want to access a value.
Coherency is about "visibility", but you still need availability. HOST_COHERENT means you don't need vkFlushMappedMemoryRanges or vkInvalidateMappedMemoryRanges: for CPU writes, it effectively provides what vkFlushMappedMemoryRanges would. But that alone is insufficient, because the flush guarantee only extends this far:
vkFlushMappedMemoryRanges guarantees that host writes to the memory ranges described by pMemoryRanges can be made available to device access, via availability operations from the VK_ACCESS_HOST_WRITE_BIT access type.
The "availability operations" section links to the Vulkan section on "Execution and Memory Dependencies". So even with coherent mapping, you still need to have a dependency between the host writing that memory and the GPU operation reading it.
For GPU reading operations from CPU-written data, a call to vkQueueSubmit acts as a host memory dependency on any CPU writes to GPU-accessible memory, so long as those writes were made prior to the function call.
If you need more fine-grained write dependency (you want the GPU to be able to execute some stuff in a batch while you're writing data, for example), or if you need to read data written by the GPU, you need an explicit dependency.
For in-batch GPU reading, this could be handled by an event; the host sets the event after writing the memory, and the command buffer operation that reads the memory first issues vkCmdWaitEvents for that event. And you'll need to set the appropriate memory barriers and source/destination stages.
For CPU reading of GPU-written data, this could be an event, a timeline semaphore, or a fence.
But overall, CPU writes to GPU-accessible memory still need some form of synchronization.
Coherent memory just means that you don't need to manually manage the CPU caches with vkInvalidateMappedMemoryRanges and vkFlushMappedMemoryRanges. You still need to use synchronization to make sure that reads and writes from CPU and GPU happen in the right order, and you need memory barriers on the GPU side to manage GPU caches (make CPU writes visible to GPU reads, and make GPU writes available to CPU reads).
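For the simple case (CPU writes, GPU reads within the next submission), the code ends up looking like this. A hedged sketch, assuming mapped points into HOST_COHERENT mapped memory that the command buffer cmd reads from:

```cpp
// Sketch: with HOST_COHERENT memory no flush is needed, but the CPU write must
// still happen-before the vkQueueSubmit that reads it.
#include <vulkan/vulkan.h>
#include <cstring>

void writeThenSubmit(VkQueue queue, VkCommandBuffer cmd, void* mapped,
                     const void* src, size_t size, VkFence fence) {
    // CPU write to coherent, mapped memory: no vkFlushMappedMemoryRanges needed.
    std::memcpy(mapped, src, size);

    // vkQueueSubmit itself acts as the host-write dependency for writes made before it.
    VkSubmitInfo submit{};
    submit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit.commandBufferCount = 1;
    submit.pCommandBuffers = &cmd;
    vkQueueSubmit(queue, 1, &submit, fence);

    // Writing `mapped` again before `fence` signals would race with the GPU read;
    // that ordering is what you must still synchronize yourself.
}
```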

Vulkan on devices that share host memory

For the purpose of this question, we'll say vkMapMemory for all allocations on such a device cannot fail; they are trivially host-visible, and the result is a direct pointer to some other region of host memory (no work needs to be done).
Is there some way to detect this situation?
The purpose in mind is an arena-based allocator that aggressively maps any host-visible memory, and an objective is to avoid redundant allocations on such hardware.
Yes, it can be detected relatively reliably.
If vkGetPhysicalDeviceMemoryProperties reports only one memory heap (which would then have to be labeled VK_MEMORY_HEAP_DEVICE_LOCAL_BIT), then it is practically certain it is the same memory as the host's.
In the words of the authors:
https://www.khronos.org/registry/vulkan/specs/1.0-extensions/html/vkspec.html#memory-device
In a unified memory architecture (UMA) system, there is often only a single memory heap which is considered to be equally “local” to the host and to the device, and such an implementation must advertise the heap as device-local.
In other cases you know trivially whether the memory is on the host (i.e. the given memory heap on a dGPU would not have VK_MEMORY_HEAP_DEVICE_LOCAL_BIT set).
Though, implementations for the UMA-based systems described by @krOoze have little reason not to expose direct pointers to buffer data.
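The heuristic from the quote is a few lines of code. A minimal sketch of that check:

```cpp
// Sketch: a single memory heap that is DEVICE_LOCAL strongly suggests a UMA
// device where host and device share the same physical memory.
#include <vulkan/vulkan.h>

bool looksLikeUMA(VkPhysicalDevice physicalDevice) {
    VkPhysicalDeviceMemoryProperties props{};
    vkGetPhysicalDeviceMemoryProperties(physicalDevice, &props);

    return props.memoryHeapCount == 1 &&
           (props.memoryHeaps[0].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT) != 0;
}
```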
Your question seems to proceed from a false assumption.
Vulkan is not OpenGL. Generally speaking, it does not try to hide things from you. If a memory heap cannot be accessed directly by the CPU, then the Vulkan implementation will not expose a memory type for that heap that is host-visible. Conversely, if a memory heap can be accessed directly by the CPU, then the Vulkan implementation will expose a memory type for that heap that is host-visible.
Therefore, if you can map a device allocation at all in Vulkan, then you should assume that you have a "direct pointer to buffer data".