What is the relationship of RenderPass and Pipeline in Vulkan?

What is the logical relationship between RenderPass and Pipeline in Vulkan?
Setting RenderPass aside, my understanding of the rendering process is: the application prepares the vertex data and texture data and submits them to the driver, the data then goes through the various stages of the pipeline, and once the results are written to the framebuffer, one rendering is complete.
So what is the responsibility of the RenderPass? Is it an abstraction that provides metadata for each rendering stage (such as formats), or does it have some other role?
Is the relationship between RenderPass and Pipeline one of containment, for example each Pipeline belonging to a Subpass? Or one of dependency, for example the final output of the Pipeline being handled by the RenderPass? Or is it something else?

At the end of the day, Vulkan is a fairly modern, object-oriented API. Each Vulkan object is practically defined by the parameters it is created with. I say this only to ease your learning: you can look at vkCreateX and largely understand what VkX does in Vulkan.
VkPipeline is a GPU context. Think of the GPU as an FPGA (which it isn't, but bear with me). Calling vkCmdBindPipeline would set the GPU to the given gate configuration. Except the GPU is not an FPGA; in our case, binding a pipeline puts the GPU into a state where it can execute the shader programs and fixed-function pipeline stages defined by that VkPipeline.
VkRenderPass is a data-oriented thing. It is necessitated by tiled-architecture GPUs (mobile GPUs). On desktop GPUs it can still act as an oracle for optimization and/or accommodate partially tiled architectures (or any other architecture that can make use of it).
Tiled-architecture GPUs need to "load" an image/buffer from general-purpose RAM into on-chip memory. When they are done, they "store" their results back to RAM.
VkRenderPass defines which inputs (attachments) will be needed. It defines how they are loaded and stored before and after the render pass instance*, respectively. It also has subpasses, and it defines the synchronization between them (replacing vkCmdPipelineBarrier). Finally, it defines the purpose each attachment will serve (e.g. whether it is a color buffer or a depth buffer).
* A render pass instance is the thing created from a VkRenderPass by vkCmdBeginRenderPass. Yeah, not confusing at all, right?
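To make the relationship concrete, here is a minimal sketch in C-style Vulkan (names such as device and the shader-stage setup are assumed to exist, and most of the required pipeline state is elided behind a comment). A render pass describes attachments and subpasses; a graphics pipeline is then created against one specific subpass of that render pass:

    // One color attachment: cleared on load, written back to memory at the end.
    VkAttachmentDescription color = {};
    color.format        = VK_FORMAT_B8G8R8A8_UNORM;
    color.samples       = VK_SAMPLE_COUNT_1_BIT;
    color.loadOp        = VK_ATTACHMENT_LOAD_OP_CLEAR;
    color.storeOp       = VK_ATTACHMENT_STORE_OP_STORE;
    color.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
    color.finalLayout   = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR;

    VkAttachmentReference colorRef = {};
    colorRef.attachment = 0;   // index into the attachment array below
    colorRef.layout     = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;

    // One subpass that uses attachment 0 as its color output.
    VkSubpassDescription subpass = {};
    subpass.pipelineBindPoint    = VK_PIPELINE_BIND_POINT_GRAPHICS;
    subpass.colorAttachmentCount = 1;
    subpass.pColorAttachments    = &colorRef;

    VkRenderPassCreateInfo rpInfo = {};
    rpInfo.sType           = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
    rpInfo.attachmentCount = 1;
    rpInfo.pAttachments    = &color;
    rpInfo.subpassCount    = 1;
    rpInfo.pSubpasses      = &subpass;

    VkRenderPass renderPass;
    vkCreateRenderPass(device, &rpInfo, nullptr, &renderPass);

    // A graphics pipeline is always built against one subpass of a compatible
    // render pass; this is the structural link between the two objects.
    VkGraphicsPipelineCreateInfo pipeInfo = {};
    pipeInfo.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
    // ... shader stages, vertex input, rasterization, blend state, layout
    //     all elided here; a real call needs them filled in ...
    pipeInfo.renderPass = renderPass;
    pipeInfo.subpass    = 0;

    VkPipeline pipeline;
    vkCreateGraphicsPipelines(device, VK_NULL_HANDLE, 1, &pipeInfo, nullptr, &pipeline);

The two assignments near the end answer the original question in code form: a pipeline does not float freely, it is baked against the attachment formats and subpass structure that a render pass declares.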

Rendering Terrain Dynamically with Argument Buffers : Understanding why the particle buffer is not overwritten by the GPU inflight

I am looking through an Apple demo project that is associated with the 2017 WWDC video entitled "Introducing Metal 2", in which the developers demonstrate the use of argument buffers. The project is linked here on the page titled "Rendering Terrain Dynamically with Argument Buffers" on the Apple developer website. Here, they synchronize resource writes by the CPU to prevent race conditions with a dispatch_semaphore_t, signaling it when the command buffer finishes executing on the GPU and waiting on it if the CPU is writing data several frames ahead of the GPU. This is consistent with what was shown in the earlier 2014 WWDC session "Working With Metal: Fundamentals".
I noticed that the APPLParticleRenderer appears to write to a buffer from the GPU in a compute pass before the fragment shader of the previous render pass has finished reading from that same buffer. The resource storage mode of the buffer is MTLResourceStorageModePrivate. My question: does Metal automatically synchronize access to private id<MTLBuffer>s accessible only by the GPU? Do render, compute, and blit passes created from new id<MTLCommandEncoder>s get access to the buffer only after other passes have finished writing and reading it (exclusive access)? I have seen that there are guaranteed barriers within tile shaders, where tile memory is accessed exclusively by the kernel before subsequent fragment shaders access that memory.
Lastly, in the 2016 WWDC "What's New in Metal, Part 2", the first presenter, Charles Brissart, at 16:44 mentions that fragment and vertex functions reading and writing from the same buffer must be placed into two render command encoders, but for compute kernels one compute command encoder suffices. This is consistent with what is seen within the particle renderer.
See my comment on the original question for a brief version of this answer.
It turns out that Metal tracks dependencies between commands scheduled to the GPU by default for MTLResource types. The hazardTrackingMode property of a MTLResource is defaulted to MTLHazardTrackingModeTracked (MTLHazardTrackingMode.tracked in Swift) according to the Metal documentation. This means Metal tracks dependencies across commands that modify the resource, as is the case with the particle kernel, and delays execution until prior commands accessing the resource are complete.
Therefore, since the _particleDataPool buffer has a storage mode of MTLResourceStorageModePrivate (storageModePrivate in Swift), it can only be written to by the GPU; hence, no CPU/GPU synchronization is necessary with a semaphore for this buffer and thus no multi-buffer system is necessary for the resource.
Only when a resource can be written to by the CPU while the GPU is still reading from it do we want multiple buffers so the CPU is not idle.
Note that the default hazard tracking mode for a MTLHeap is MTLHazardTrackingModeUntracked (MTLHazardTrackingMode.untracked in Swift), in which case you are responsible for synchronizing resource writes by the GPU yourself.
EDIT
After reading more about resource synchronization in Metal, I have some additional points that I think further clarify what's going on. Note that the remaining portion uses the Swift API names. To learn more in detail, I recommend reading the "Synchronization" section in the Metal documentation here.
MTLFence
Firstly, a MTLFence is used to synchronize accesses to untracked resources within the execution of a single command buffer. A fence gives you explicit control over when the GPU accesses resources and is necessary when you are working with an untracked resource. Otherwise, Metal handles this synchronization for you.
It is important to note that the automatic management I mention in the answer only occurs within a single command buffer, between encoding passes. This does not, however, mean we need to synchronize across command buffers scheduled in the same command queue, since a command buffer is not immediately scheduled for execution. In fact, according to the documentation on the addScheduledHandler(_:) method of the MTLCommandBuffer protocol found here:
The device object schedules the command buffer after it identifies any dependencies with work tasks submitted by other command buffers or other APIs in the system.
at which point it would be safe to access these same buffers. Note that within a single render encoding pass, if a vertex shader writes into a buffer that the fragment shader of the same pass reads from, the behavior is undefined. I mentioned this in the original question; the solution is to use two render command encoders. I have yet to determine why this is not necessary for a compute encoder, but I imagine it has to do with how kernels are executed in comparison to vertex and fragment shaders.
MTLEvent
In some cases, however, command buffers in different queues created by the same MTLDevice need access to the same resource or depend on one another in some way. In this case, synchronization is necessary because the separate queues schedule their own command buffers without knowledge of the other, meaning there is potential for the two command buffers to be executing concurrently.
To fix this problem, you use an MTLEvent instance created by the device using makeEvent(), and encode signal and wait operations at specific points in each command buffer.
MTLSharedEvent
In the event (no pun intended) that you have multiple processors (different CPU cores, CPU and GPU, or multi-GPU), resource synchronization is needed. Here, you create a MTLSharedEvent in place of a MTLEvent that can be used to synchronize across devices and processes. It is essentially the same API as that of the MTLEvent, but involves command queues on different devices.

How does the GPU process non-graphics data in parallel?

The introduction of programmable shaders in the graphics pipeline enabled the GPGPU concept, which uses the GPU as a general-purpose processing engine suited to parallel data.
However, as far as I know, because the GPU is still used much more for graphics processing than for GPGPU, it contains many fixed graphics pipeline stages that cannot be programmed.
If my understanding is correct, when data is processed by the GPU, regardless of its type (graphics or general), it should pass through the graphics pipeline, which includes both programmable stages and non-programmable fixed-function stages.
Does that mean non-graphics processing has to go through the graphics stages even though it doesn't use them? Or can it bypass those fixed stages? If someone can explain how the GPU pipeline works for GPGPU, I would appreciate it.
TL;DR:
GPGPU completely bypasses the rendering pipeline, but the pipeline is still used today.
GPUs consist of two main parts (in relation to your question). The first is the processing part, which consists of the memory, registers, warp units, dispatchers and streaming processors. The other is a set of controllers that are responsible for geometry processing and the graphics pipeline. Those controllers just issue commands to the streaming processors on how to process the data for each step of the rendering pipeline, either hardwired or based on user-supplied shaders. NVidia calls them the "PolyMorph Engine", AMD the "Geometric Processor".
Historically, some of those controllers were hardwired to do things a single way, so you could only program the vertex shader and the fragment/pixel shader. The tessellation controller, for example, was hardwired on the GPU and not user-programmable. As demands grew, more and more of those controllers became user-programmable, and today most of them are completely programmable (Wikipedia).
In the early days of GPGPU, the only way to do computing was to hack the available shaders: you rendered a full-screen quad with your input data stored in a texture, computed the result in the shader, and then read the rendered image back (see slide 26 of this introduction).
With CUDA, NVidia allowed users not only to program the shaders/PolyMorph Engine, but also to interact directly with the streaming processors and execute code on them (see slides 31 & 32).
This does not mean that the graphics pipeline became obsolete, but there is now a way to completely bypass it and run code directly on the GPU's processors. Nvidia has a nice explanation of how the pipeline works today, where you can also see both the PolyMorph Engine and the Streaming Processors, here.
The graphics pipeline still helps the developer by offloading the repetitive and more complicated parts of the process, like managing memory, managing warps, and passing data around. Theoretically you could write your own pipeline directly on the streaming processors using CUDA and then render the result, but it would be tedious, just as writing GPGPU code with shaders was tedious.
Although old GPUs had their pipelines hardcoded in the chip, a modern GPU is essentially just a large ASIC that can process vectorized data at very high speed. It is humans who define what it does: the render pipeline is defined in the graphics library, such as OpenGL, not in the GPU. Thus the GPU does not care what it is computing; as long as the data is vectorized, it can do all the computation needed and give you a result.
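To make the "bypass" concrete in Vulkan terms (the API discussed elsewhere on this page), here is a minimal sketch. It assumes a valid device, a SPIR-V computeShaderModule, a pipelineLayout, a descriptorSet, and a command buffer cmd in the recording state; a compute dispatch never involves a render pass, the rasterizer, or any other fixed-function graphics stage:

    // Build a compute pipeline: a single kernel (compute shader), no render pass,
    // no vertex input, no fixed-function graphics stages involved at all.
    VkComputePipelineCreateInfo info = {};
    info.sType        = VK_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO;
    info.stage.sType  = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
    info.stage.stage  = VK_SHADER_STAGE_COMPUTE_BIT;
    info.stage.module = computeShaderModule;   // SPIR-V compute shader (assumed to exist)
    info.stage.pName  = "main";
    info.layout       = pipelineLayout;        // describes the buffers/images the kernel accesses

    VkPipeline computePipeline;
    vkCreateComputePipelines(device, VK_NULL_HANDLE, 1, &info, nullptr, &computePipeline);

    // Record: bind the compute pipeline and launch a grid of workgroups.
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, computePipeline);
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, pipelineLayout,
                            0, 1, &descriptorSet, 0, nullptr);
    vkCmdDispatch(cmd, 1024, 1, 1);            // pure data-parallel work, no rendering

The same shape applies to CUDA or DirectCompute: the work is expressed as a grid of threads over buffers, and the graphics-specific controllers simply are not involved.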

What is the DirectX 12 equivalent of Vulkan's "transient attachment"?

I have a compute shader whose output I'd like to write to an image/buffer that is meant to be intermediate storage between two pipelines: a compute pipeline and a graphics pipeline. The graphics pipeline is actually a "dummy", in that it does nothing apart from copying the contents of the intermediate buffer into a swapchain image. This is necessitated by the fact that DX12 deprecated the ability of compute pipelines to use UAVs to write directly into swapchain images.
I think the intermediate storage should be a "transient" attachment, in the Vulkan sense:
VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT specifies that the memory bound to this image will have been allocated with the VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT (see Memory Allocation for more detail). This bit can be set for any image that can be used to create a VkImageView suitable for use as a color, resolve, depth/stencil, or input attachment.
This is explained in this article:
Finally, Vulkan includes the concept of transient attachments. These are framebuffer attachments that begin in an uninitialized or cleared state at the beginning of a renderpass, are written by one or more subpasses, consumed by one or more subpasses and are ultimately discarded at the end of the renderpass. In this scenario, the data in the attachments only lives within the renderpass and never needs to be written to main memory. Although we’ll still allocate memory for such an attachment, the data may never leave the GPU, instead only ever living in cache. This saves bandwidth, reduces latency and improves power efficiency.
Does DirectX 12 have a similar image usage concept?
Direct3D 12 does not have this concept. And the reason for that limitation ultimately boils down to why transient allocation exists. TL;DR: It's not for doing the kind of thing you're trying to do.
Vulkan's render pass system exists for one purpose: to make tile-based renderers first-class citizens of the rendering system. TBRs do not fit well in OpenGL or D3D's framebuffer model. In both APIs, you can just swap framebuffers in and out whenever you want.
TBRs do not render to memory directly. They perform rendering operations into internal buffers, which are seeded from memory and then possibly written to memory after the rendering operation is complete. Switching rendered images whenever you want works against this structure, which is why TBR vendors have a list of things you're not supposed to do if you want high-performance in your OpenGL ES code.
Vulkan's render pass system is an abstraction of a TBR system. In the abstract model, the render pass system potentially reads data from the images in the frame buffer, then performs a bunch of subpasses on copies of this data, and at the end, potentially writes the updated data back out into the images. So from the outside of the process, it looks like you're rendering to the images, but you're not. To maintain this illusion, for the duration of a render pass, you can only use those framebuffer images in the way that the render pass model allows: as attachments.
Now consider deferred rendering. In deferred rendering, you render to g-buffers, which you then read in your lighting passes to generate the final image. Once you've generated the final image, you don't need those g-buffers anymore. In a regular GPU, that doesn't mean anything; because rendering goes directly to memory, those g-buffers must take up actual storage.
But consider how a TBR works. It does rendering into a single tile; in optimal cases, it processes all of the fragments for a single tile at once. Which means it goes through the geometry and lighting passes. For a TBR, the g-buffer is just a piece of scratch memory you use to get the final answer; it doesn't need to be read from memory or copied to memory.
In short, it doesn't need memory.
Enter lazily allocated memory and transient attachment images. They exist to allow TBRs to keep g-buffers in tile memory and never to have to allocate actual storage for them (or at least, it only happens if some runtime circumstance occurs that forces it, like shoving too much geometry at the GPU). And it only works within a render pass; if you end a render pass and have to use one of the g-buffers in another render pass, then the magic has to go away and the data has to touch actual storage.
The Vulkan API makes it very explicit how specific this use case is. You cannot bind a piece of lazily-allocated memory to an image that does not have the USAGE_TRANSIENT_ATTACHMENT flag set on it (or to a buffer of any kind). And you'll notice that it says "transient attachment", as in render pass attachments. It says this because you'll also notice that transient attachments cannot be used for non-attachment uses (part of the valid usage tests for VkImageCreateInfo). At all.
What you want to do is not the sort of thing that lazily allocated memory is made for. It can't work.
As for Direct3D 12, the API is not designed to run on mobile GPUs, and since only mobile GPUs are tile-based renderers (some recent desktop GPUs have TBR similarities, but are not full TBRs), it has no facilities designed explicitly for them. And thus, it has no need for lazily allocated memory or transient attachments.
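For reference, this is roughly what the Vulkan-side pattern under discussion looks like. A minimal sketch, assuming device exists and findMemoryType(...) is a hypothetical helper that picks a memory type index with the requested property; the essential parts are the TRANSIENT_ATTACHMENT usage bit and the LAZILY_ALLOCATED memory property:

    // A depth buffer intended to live only inside a single render pass instance.
    VkImageCreateInfo img = {};
    img.sType       = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
    img.imageType   = VK_IMAGE_TYPE_2D;
    img.format      = VK_FORMAT_D32_SFLOAT;
    img.extent      = {1920, 1080, 1};
    img.mipLevels   = 1;
    img.arrayLayers = 1;
    img.samples     = VK_SAMPLE_COUNT_1_BIT;
    img.tiling      = VK_IMAGE_TILING_OPTIMAL;
    // Transient: only ever used as an attachment inside a render pass instance.
    img.usage       = VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT |
                      VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT;

    VkImage depthImage;
    vkCreateImage(device, &img, nullptr, &depthImage);

    VkMemoryRequirements req;
    vkGetImageMemoryRequirements(device, depthImage, &req);

    VkMemoryAllocateInfo alloc = {};
    alloc.sType          = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
    alloc.allocationSize = req.size;
    // Lazily allocated: on a tile-based GPU the driver may never back this
    // with real memory, because the data can live entirely in tile memory.
    alloc.memoryTypeIndex = findMemoryType(req.memoryTypeBits,
                                           VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT);

    VkDeviceMemory depthMemory;
    vkAllocateMemory(device, &alloc, nullptr, &depthMemory);
    vkBindImageMemory(device, depthImage, depthMemory, 0);

As the answer explains, D3D12 has no direct equivalent of this combination of flags.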

Why do we need multiple render passes and subpasses?

I had some experience with DirectX 12 in the past and I don't remember anything similar to Vulkan's render passes, so I can't make an analogy. If I understand correctly, command buffers inside the same subpass don't need to be synchronized. So why complicate things and make multiple of them? Why can't I just take one command buffer and put all my frame-related work there?
Imagine that the GPU cannot render to images directly. Imagine that it can only render to special framebuffer memory storage, which is completely separate from regular image memory. You cannot talk to this framebuffer memory directly, and you cannot allocate from it. However, during a rendering operation, you can copy data from images into it, read data out of it into images, and of course render to this internal memory.
Now imagine that your special framebuffer memory is fixed in size, a size which is smaller than the size of the overall framebuffer you want to render to (perhaps much smaller). To be able to render to images that are bigger than your framebuffer memory, you basically have to execute all rendering commands for those targets multiple times. To avoid running vertex processing multiple times, you need a way to store the output of vertex processing stages.
Furthermore, when generating rendering commands, you need to have some idea of how to apportion your framebuffer memory. You may have to divide up your framebuffer memory differently if you're rendering to one 32-bpp image than if you're rendering to two. And how you assign your framebuffer memory can affect how your fragment shader code works. After all, this framebuffer rendering memory may be directly accessible by the fragment shader during a rendering operation.
That is the basic idea of the render pass model: you are rendering to special framebuffer memory, of an indeterminate size. Every aspect of the render pass system's complexity is based on this conceptual model.
Subpasses are the part where you determine exactly which things you're rendering to at the moment. Because this affects framebuffer memory arrangement, graphics pipelines are always built by referring to a subpass of a render pass. Similarly, secondary command buffers that are to be executed within a subpass must specify the subpass they will be used within.
When a render pass instance begins execution on a queue, it (conceptually) copies the attachment images we intend to render to into framebuffer rendering memory. At the end of the render pass, the data we render is copied back out to the attachment images.
During the execution of a render pass instance, the data for attachment images is considered "indeterminate". While the model says that we're copying into framebuffer rendering memory, Vulkan doesn't want to force implementations to actually copy stuff if they directly render to images.
As such, Vulkan merely states that no operation can access images that are being used as attachments, except for those which access the images as attachments. For example, you cannot read an attachment image as a texture. But you can read from it as an input attachment.
This is a conceptual description of the way tile-based renderers work. And this is the conceptual model that is the foundation of the Vulkan render pass architecture. Render targets are not accessible memory; they're special things that can only be accessed in special ways.
You can't "just" read from a G-buffer because, while you're rendering to that G-buffer, it exists in special framebuffer memory that isn't in the image yet.
Both features primarily exist for tile-based GPUs, which are common in mobile but, historically, uncommon on desktop computers. That's why DX12 doesn't have an equivalent, and Metal (iOS) does. Though both Nvidia's and AMD's recent architectures do a variant of tile-based rendering now also, and with the recent Windows-on-ARM PCs using Qualcomm chips (tile-based GPU), it will be interesting to see how DX12 evolves.
The benefit of render passes is that during pixel shading, you can keep the framebuffer data in on-chip memory instead of constantly reading and writing external memory. Caches help some, but without reordering pixel shading, the cache tends to thrash quite a bit since it's not large enough to store the entire framebuffer. A related benefit is you can avoid reading in previous framebuffer contents if you're just going to completely overwrite them anyway, and avoid writing out framebuffer contents at the end of the render pass if they're not needed after it's over. In many applications, tile-based GPUs never have to read and write depth buffer data or multisample data to or from external memory, which saves a lot of bandwidth and power.
Subpasses are an advanced feature that, in some cases, allow the driver to effectively merge multiple render passes into one. The goal and underlying mechanism is similar to the OpenGL ES Pixel Local Storage extension, but the API is a bit different in order to allow more GPU architectures to support it and to make it more extensible / future-proof. The classic example where this helps is with basic deferred shading: the first subpass writes out gbuffer data for each pixel, and later subpasses use that to light and shade pixels. Gbuffers can be huge, so keeping all of that on-chip and never having to read or write it to main memory is a big deal, especially on mobile GPUs which tend to be more bandwidth- and power-constrained.
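As an illustration of the deferred-shading case described above, here is a hedged sketch of just the subpass wiring (attachment descriptions and the rest of the render-pass creation are elided): subpass 0 writes a g-buffer attachment, subpass 1 reads it back as an input attachment, and a by-region VkSubpassDependency orders the two without ever forcing the g-buffer out to main memory:

    // Attachment 0: g-buffer (written by subpass 0, read by subpass 1, never stored).
    // Attachment 1: final color (written by subpass 1, stored at the end).
    VkAttachmentReference gbufWrite = {0, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL};
    VkAttachmentReference gbufRead  = {0, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL};
    VkAttachmentReference finalOut  = {1, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL};

    VkSubpassDescription subpasses[2] = {};
    subpasses[0].pipelineBindPoint    = VK_PIPELINE_BIND_POINT_GRAPHICS;
    subpasses[0].colorAttachmentCount = 1;
    subpasses[0].pColorAttachments    = &gbufWrite;      // geometry pass: fill the g-buffer

    subpasses[1].pipelineBindPoint    = VK_PIPELINE_BIND_POINT_GRAPHICS;
    subpasses[1].inputAttachmentCount = 1;
    subpasses[1].pInputAttachments    = &gbufRead;       // lighting pass: read the g-buffer per pixel
    subpasses[1].colorAttachmentCount = 1;
    subpasses[1].pColorAttachments    = &finalOut;

    // Dependency: subpass 1's fragment-shader reads wait for subpass 0's color writes,
    // and BY_REGION allows the driver to satisfy it tile by tile, on-chip.
    VkSubpassDependency dep = {};
    dep.srcSubpass      = 0;
    dep.dstSubpass      = 1;
    dep.srcStageMask    = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
    dep.dstStageMask    = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
    dep.srcAccessMask   = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
    dep.dstAccessMask   = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT;
    dep.dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;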

TensorFlow Device Contexts, Streams and Context Switching

In the GPUDevice code, I noticed that one GPUDeviceContext is made per stream.
Is the purpose of this so that each context can control one OpKernelContext, and then, as the various streams need to be executed, the contexts can simply be switched, which handles pushing the different data/code onto the GPU and then executing it?
Do the various streams get registered as different devices (i.e. '/gpu:0' and '/gpu:1')?
Per this, ThreadPoolDevices don't have contexts, but if I were to add contexts to ThreadPoolDevice, would they fit best as a sort of ThreadContext?
For GPU, we maintain a few streams for execution: a compute stream (on which most computational kernels run), and some memcopy streams (for executing memcopies between host and device and vice versa). This is done to overlap communication and computation on GPU devices, but is particular to the way that we use GPUs. One could easily also just create one GPU stream for all computation and communication and it would be correct, although slower.
We want to give the computation stream to kernels that do computation, and the memcopy stream to the kernels that do copying. We create a GPUDeviceContext object for each stream, and then pass the right device context object to the OpKernelContext.
So the particular implementations here reflect the properties of the asynchronous hardware device (GPU), which is why the ThreadPoolDevice doesn't have these sorts of mechanisms. On CPU all computation is synchronous, so there is no need for an abstraction such as streams.
The execution model of the custom hardware will likely determine what kind of state and management a custom device's support will require in TensorFlow.
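To make the compute-stream/copy-stream overlap concrete outside of TensorFlow's internals, here is a minimal sketch using the plain CUDA runtime API (this is not TensorFlow code, and launchKernelOn(...) below is only a commented-out placeholder for whatever computation you would enqueue). Work issued on the two streams can overlap, and an event orders the copy before the compute that depends on it:

    #include <cuda_runtime.h>

    void overlap_example(float* devIn, const float* hostIn, size_t bytes) {
        cudaStream_t computeStream, copyStream;
        cudaStreamCreate(&computeStream);
        cudaStreamCreate(&copyStream);

        // The host-to-device copy is enqueued on the dedicated copy stream.
        cudaMemcpyAsync(devIn, hostIn, bytes, cudaMemcpyHostToDevice, copyStream);

        // Record an event when the copy finishes and make the compute stream wait on it,
        // so dependent kernels see the data without serializing the two streams on the host.
        cudaEvent_t copyDone;
        cudaEventCreate(&copyDone);
        cudaEventRecord(copyDone, copyStream);
        cudaStreamWaitEvent(computeStream, copyDone, 0);

        // launchKernelOn(computeStream);   // placeholder: enqueue your kernel here;
        //                                  // independent copies on copyStream can overlap with it

        cudaStreamSynchronize(computeStream);
        cudaEventDestroy(copyDone);
        cudaStreamDestroy(computeStream);
        cudaStreamDestroy(copyStream);
    }

This is essentially the arrangement described above: one stream for computation, separate streams for memcopies, with the device context telling each kernel which stream its work belongs on.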