Impossible to acquire and present in parallel with rendering? - vulkan

Note: I'm self-learning Vulkan with little knowledge of modern OpenGL.
Reading the Vulkan specifications, I can see very nice semaphores that allow the command buffer and the swapchain to synchronize. Here's what I understand to be a simple (yet I think inefficient) way of doing things:
Get image with vkAcquireNextImageKHR, signalling sem_post_acq
Build command buffer (or use pre-built) with:
Image barrier to transition image away from VK_IMAGE_LAYOUT_UNDEFINED
render
Image barrier to transition image to VK_IMAGE_LAYOUT_PRESENT_SRC_KHR
Submit to queue, waiting on sem_post_acq on fragment stage and signalling sem_pre_present.
vkQueuePresentKHR waiting on sem_pre_present.
The problem here is that the image barriers in the command buffer must know which image they are transitioning, which means that vkAcquireNextImageKHR must return before one knows how to build the command buffer (or which pre-built command buffer to submit). But vkAcquireNextImageKHR could potentially sleep a lot (because the presentation engine is busy and there are no free images). On the other hand, the submission of the command buffer is costly itself, and more importantly, all stages before fragment can run without having any knowledge of which image the final result will be rendered to.
Theoretically, it seems to me that a scheme like the following would allow a higher degree of parallelism:
Build command buffer (or use pre-built) with:
Image barrier to transition image away from VK_IMAGE_LAYOUT_UNDEFINED
render
Image barrier to transition image to VK_IMAGE_LAYOUT_PRESENT_SRC_KHR
Submit to queue, waiting on sem_post_acq on fragment stage and signalling sem_pre_present.
Get image with vkAcquireNextImageKHR, signalling sem_post_acq
vkQueuePresentKHR waiting on sem_pre_present.
Which would, again theoretically, allow the pipeline to execute all the way up to the fragment shader, while we wait for vkAcquireNextImageKHR. The only reason this doesn't work is that it is neither possible to tell the command buffer that this image will be determined later (with proper synchronization), nor is it possible to ask the presentation engine for a specific image.
My first question is: is my analysis correct? If so, is such an optimization not possible in Vulkan at all and why not?
My second question is: wouldn't it have made more sense if you could tell vkAcquireNextImageKHR which particular image you want to acquire, and iterate through them yourself? That way, you could know in advance which image you are going to ask for, and build and submit your command buffer accordingly.

Like Nicol said, you can record secondaries independent of which image they will be rendering to.
However, you can take it a step further and record command buffers for all swapchain images in advance, then select the correct one to submit based on the image index acquired.
This kind of reuse does require some extra consideration, because all memory ranges used are baked into the command buffer. But in many situations the required render commands don't actually change from one frame to the next; only a little of the data used does.
So the sequence of such a frame would be:
// Acquire the next image; vk.acquire (a semaphore) is signalled when it is ready.
vkAcquireNextImageKHR(vk.dev, vk.swap, UINT64_MAX, vk.acquire, VK_NULL_HANDLE, &vk.image_ind);
// Wait until the GPU has finished the last frame that used this image's resources,
// then reset the fence so it can be reused below.
vkWaitForFences(vk.dev, 1, &vk.fences[vk.image_ind], VK_TRUE, UINT64_MAX);
vkResetFences(vk.dev, 1, &vk.fences[vk.image_ind]);
// Write this frame's data into the persistently mapped staging memory.
engine_update_render_data(vk.mapped_staging[vk.image_ind]);
// Submit the pre-recorded command buffer for this image: wait on vk.acquire,
// signal vk.present, and signal the fence on completion.
VkSubmitInfo submit = build_submit(vk.acquire, vk.rend_cmd[vk.image_ind], vk.present);
vkQueueSubmit(vk.rend_queue, 1, &submit, vk.fences[vk.image_ind]);
// Present the image once vk.present is signalled.
VkPresentInfoKHR present = build_present(vk.present, vk.swap, vk.image_ind);
vkQueuePresentKHR(vk.queue, &present);
Granted, this does not allow for conditional rendering, but the GPU is in general fast enough to render some geometry out of frame without any noticeable delays. So until the player reaches a loading zone where new geometry has to be displayed, you can keep those command buffers alive.

Your entire question is predicated on the assumption that you cannot do any command buffer building work without a specific swapchain image. That's not true at all.
First, you can always build secondary command buffers; providing a VkFramebuffer is merely a courtesy, not a requirement. And this is very important if you want to use Vulkan to improve CPU performance. After all, being able to build command buffers in parallel is one of the selling points of Vulkan. For you to only be creating one is something of a waste for a performance-conscious application.
In such a case, only the primary command buffer needs the actual image.
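For concreteness, here is a minimal sketch of recording such a secondary command buffer; renderPass and secondaryCmdBuf are assumed to have been created elsewhere, and note that the framebuffer member is left null:
VkCommandBufferInheritanceInfo inherit = { VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO };
inherit.renderPass  = renderPass;      // must be compatible with the pass used at execute time
inherit.subpass     = 0;
inherit.framebuffer = VK_NULL_HANDLE;  // the "courtesy" can be omitted

VkCommandBufferBeginInfo begin = { VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO };
begin.flags            = VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT;
begin.pInheritanceInfo = &inherit;

vkBeginCommandBuffer(secondaryCmdBuf, &begin);
// ... bind pipelines, draw, etc., all independent of the swapchain image ...
vkEndCommandBuffer(secondaryCmdBuf);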
Second, who says that you will be doing the majority of your rendering to the presentable image? If you're doing deferred rendering, most of your stuff will be written to deferred buffers. Even post-processing effects like tone-mapping, SSAO, and so forth will probably be done to an intermediate buffer.
Worst-case scenario, you can always render to your own image. Then you build a command buffer whose only content is an image copy from your image to the presentable one.
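A sketch of that worst case, recorded once the acquired index is known; offscreenImage, copyCmdBuf, width, height, swapchainImages, and imageIndex are assumed names, and the layout-transition barriers are elided:
VkImageCopy region = {0};
region.srcSubresource = (VkImageSubresourceLayers){ VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1 };
region.dstSubresource = region.srcSubresource;
region.extent         = (VkExtent3D){ width, height, 1 };

// Assumes prior barriers put offscreenImage in TRANSFER_SRC_OPTIMAL and the
// swapchain image in TRANSFER_DST_OPTIMAL.
vkCmdCopyImage(copyCmdBuf,
               offscreenImage, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
               swapchainImages[imageIndex], VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
               1, &region);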
all stages before fragment can run without having any knowledge of which image the final result will be rendered to.
You assume that the hardware has a strict separation between vertex processing and rasterization. This is true only for tile-based hardware.
Direct renderers just execute the whole pipeline, top to bottom, for each rendering command. They don't store post-transformed vertex data in large buffers. It just flows down to the next step. So if the "fragment stage" has to wait on a semaphore, then you can assume that all other stages will be idle as well while waiting.
wouldn't it have made more sense if you could tell vkAcquireNextImageKHR which particular image you want to acquire, and iterate through them yourself?
No. The implementation would be unable to decide which image to give you next. This is precisely why you have to ask for an image: so that the implementation can figure out on its own which image it is safe for you to have.
Also, there's specific language in the specification that the semaphore and/or fence you provide must not only be unsignaled but cannot have any outstanding operations waiting on them. Why?
Because vkAcquireNextImageKHR can fail. If you have some operation in a queue that's waiting on a semaphore that's never going to fire, that will cause huge problems. You have to successfully acquire first, then submit work that is based on the semaphore.
Generally speaking, if you're regularly having trouble getting presentable images in a timely fashion, you need to make your swapchain longer. That's the point of having multiple buffers, after all.


How do I know when Vulkan isn't using memory anymore so I can overwrite it / reuse it?

When working with Vulkan, it's common that when creating a buffer, such as a uniform buffer, you create multiple "versions" of it, because with double buffering, for example, you don't know whether the graphics API is still drawing the last frame (using the memory you bound and instructed it to use in the last loop). I've seen this done with uniform buffers, but not with vertex or index buffers or image/texture buffers. Is this because uniform buffers are updated regularly and vertex buffers or images are not?
If you wanted to update an image or a vertex buffer, how would you go about it, given that you don't know whether the graphics API is still using it? Do you simply allocate new memory for that image/buffer and start anew, even if you just want to update a section of it? And if you do allocate a new buffer, when would you know it is safe to release the old one? Would, say, 5 frames into the future be OK? Or 2 seconds? After all, it could still be in use. How is this done?
given that you don't know whether the graphics API is still using it?
But you do know.
Vulkan doesn't arbitrarily use resources. It uses them exactly and only how your code tells it to use the resource. You created and submitted the commands that use those resources, so if you need to know when a resource is in use, it is you who must keep track of it and manage this.
You have to use API synchronization functions to follow the GPU's execution of commands.
If an action command uses some set of resources, then those resources are in use while that command is being executed. You have tools like events which can be used to stop subsequent commands from executing until some prior commands have finished. And events can tell when a particular command has finished, so that you'll know when those resources are no longer in use.
Semaphores have similar powers, but at the level of a batch of work. If a semaphore is signaled, then all of the commands in the batch that signaled it have completed and are no longer using the resources they use. Fences can be used for extremely coarse synchronization, at the level of a submit command.
You multi-buffer uniform data because the nature of uniform data is such that it typically needs to change every frame. If you have vertex buffers or images that change every frame, then you'll need to do the same thing with those.
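As a hedged sketch of that per-frame pattern (device, frameFence, mappedUniform, frameData, and FRAMES_IN_FLIGHT are assumed names):
uint32_t slot = frameNumber % FRAMES_IN_FLIGHT;
// Block until the submission that last used this slot has finished;
// after that, the slot's uniform memory is safe to overwrite.
vkWaitForFences(device, 1, &frameFence[slot], VK_TRUE, UINT64_MAX);
vkResetFences(device, 1, &frameFence[slot]);
memcpy(mappedUniform[slot], &frameData, sizeof frameData);
// ... record and submit work that reads uniform buffer 'slot',
//     passing frameFence[slot] to vkQueueSubmit so it signals on completion ...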
For infrequent changes, you may want to have extra memory available so that you can just create new images or buffers, then delete the old ones when the memory is no longer in use. Or you may have to stall the CPU until the GPU has finished using those resources.

Can vkQueuePresentKHR be synced using a pipeline barrier?

The fact that vkQueuePresentKHR gets a queue parameter makes me think that it is like a command that is delivered to the queue for execution. If so, is it possible to make it wait (until the writing into the image to be presented is finished) using a pipeline barrier whose source stage is VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT and whose destination is VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT? Or maybe even by an image barrier, to ease the sync constraint to the image only?
But the fact that in every tutorial and book the sync is done using a semaphore makes me think that my assumption is wrong. If so, why does vkQueuePresentKHR need a queue parameter? The semaphore parameter seems to be enough: when it is signaled, vkQueuePresentKHR can present the image according to the image index parameter and the swapchain handle parameter.
There are a couple of outstanding issues against the specification. Notably, KhronosGroup/Vulkan-Docs#1308 is exactly your question.
Meanwhile everyone usually follows this language:
The processing of the presentation happens in issue order with other queue operations, but semaphores have to be used to ensure that prior rendering and other commands in the specified queue complete before the presentation begins.
Which implies a semaphore has to be used. And given that we are not 110% sure, a semaphore should be used until we know any better.
Another semi-official source is the sync wiki, which uses a semaphore.
Despite what this quote says, I think it is reasonable to believe it is also permissible to use other sync that makes the image already visible before the vkQueuePresentKHR, such as a fence wait.
But pipeline barriers alone are likely not sufficient. The presentation is outside the queue system:
However, the scope of this set of queue operations does not include the actual processing of the image by the presentation engine.
Additionally there is no VkPipelineStageFlagBit for it, and vkQueuePresentKHR is not included in the submission order, so it cannot be in the synchronization scope of any vkCmdPipelineBarrier.
The confusing part is this unfortunate wording:
Any writes to memory backing the images referenced by the pImageIndices and pSwapchains members of pPresentInfo, that are available before vkQueuePresentKHR is executed, are automatically made visible to the read access performed by the presentation engine.
I believe the trick is the "before vkQueuePresentKHR is executed". As said above, vkQueuePresentKHR is not part of submission order, therefore you do not know if the memory was or wasn't made available via a pipeline barrier before the vkQueuePresentKHR is executed.
Presentation is a queue operation. That's why you submit it to a queue. A queue that will execute the presentation of the image. And specifically to a queue that is able to perform present operations.
As for how to synchronize... the specification is a bit ambiguous on this point.
Semaphores are definitely able to work; there's a specific callout for this:
Semaphores are not necessary for making the results of prior commands visible to the present:
Any writes to memory backing the images referenced by the pImageIndices and pSwapchains members of pPresentInfo, that are available before vkQueuePresentKHR is executed, are automatically made visible to the read access performed by the presentation engine. This automatic visibility operation for an image happens-after the semaphore signal operation, and happens-before the presentation engine accesses the image.
While provisions are made for semaphores, there is no specific statement of other things. In particular, if you don't wait on a semaphore, it's not clear what "happens-after the semaphore signal operation" means, since no such signal operation happened.
Now, the API for vkQueuePresentKHR makes it clear that you don't need to provide a semaphore to wait on:
waitSemaphoreCount is the number of semaphores to wait for before issuing the present request.
The number may be zero.
One might think that, as a queue operation, all prior synchronization on that queue would still affect presentation. For example, an external subpass dependency if you wrote to the swapchain image as an attachment. And it probably would... if not for one little problem.
See, synchronization is ultimately based on dependencies between stages. And presentation... doesn't have a stage. So while your source for the external dependency would be well-understood, it's not clear what destination stage would work. Even specifying the all-stages flag wouldn't necessarily work.
Does "not a stage" exist in the set of all stages?
In any case, it's best to just use a semaphore. You'll probably need one anyway, so just use that.
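For reference, the standard shape of that, sketched with assumed names (queue, swapchain, renderCmdBuf, renderFinished, imageIndex); the wait on the acquire semaphore is elided for brevity:
VkSubmitInfo submit = { VK_STRUCTURE_TYPE_SUBMIT_INFO };
submit.commandBufferCount   = 1;
submit.pCommandBuffers      = &renderCmdBuf;
submit.signalSemaphoreCount = 1;
submit.pSignalSemaphores    = &renderFinished;  // signalled when rendering is done
vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);

VkPresentInfoKHR present = { VK_STRUCTURE_TYPE_PRESENT_INFO_KHR };
present.waitSemaphoreCount = 1;
present.pWaitSemaphores    = &renderFinished;   // the present waits here, not on a barrier
present.swapchainCount     = 1;
present.pSwapchains        = &swapchain;
present.pImageIndices      = &imageIndex;
vkQueuePresentKHR(queue, &present);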

Copy a VkImage after TraceRaysKHR to CPU

Copying a VkImage that is being used to render to an offscreen framebuffer gives a black image.
When using a rasterizer the rendered image is non-empty but as soon as I switch to ray tracing the output image is empty:
// Offscreen render pass
vk::RenderPassBeginInfo offscreenRenderPassBeginInfo;
// setup framebuffer ...
if(useRaytracer) {
  helloVk.raytrace(cmdBuf, clearColor);
} else {
  cmdBuf.beginRenderPass(offscreenRenderPassBeginInfo, vk::SubpassContents::eInline);
  helloVk.rasterize(cmdBuf);
  cmdBuf.endRenderPass();
}
// saving to image
if(write_to_image) {
  helloVk.copy_to_image(cmdBuf);
}
Both the ray tracer and rasterizer are using the same resources (e.g. the output image) through shared descriptor sets. I have also a post processing stage where the output is tone mapped & rendered to the swapchain.
I copy the image via a linear image and vkCmdCopyImage.
I have already tried so much, but there are so many questions:
How can I get the ray traced image? Is it possible to get the output through memory barriers only in a single command buffer as I am using? Should I create an independent command buffer and get the output after a synchronization barrier? Does ray tracing need special VkPipelineStageFlags?
You always need to consider synchronization in Vulkan. It is a good idea to learn how this works, because in my opinion this is one of the most important things.
Here is one of the blogs:
https://www.khronos.org/blog/understanding-vulkan-synchronization
In summary for lazy people:
Barriers (command and memory): split commands into groups and make sure the groups execute in order rather than in parallel; used to synchronize execution and memory access between pipeline stages within one queue.
Events: similar to barriers, but the signal and wait halves of the dependency can be recorded at separate points; typically slower than barriers.
Subpass dependencies: synchronize subpasses within a render pass.
Semaphores: synchronize multiple queues; used to synchronize shared data on the GPU.
Fences: similar to semaphores, but used to synchronize GPU->CPU.
Timeline semaphores (since Vulkan 1.2): synchronize work between CPU and GPU in both directions.
Queue wait idle (vkQueueWaitIdle): like a fence, synchronizes GPU->CPU, but for the entire queue rather than one particular submission.
Device wait idle (vkDeviceWaitIdle): like queue wait idle, but for all queues of the device.
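To the question's last point: yes, ray tracing has its own stage bit. A hedged sketch of the barrier between the trace and the copy (cmdBuf and outputImage are assumed, and the GENERAL layout assumes the trace wrote a storage image):
VkImageMemoryBarrier barrier = { VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER };
barrier.srcAccessMask       = VK_ACCESS_SHADER_WRITE_BIT;
barrier.dstAccessMask       = VK_ACCESS_TRANSFER_READ_BIT;
barrier.oldLayout           = VK_IMAGE_LAYOUT_GENERAL;
barrier.newLayout           = VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL;
barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.image               = outputImage;
barrier.subresourceRange    = (VkImageSubresourceRange){ VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };

vkCmdPipelineBarrier(cmdBuf,
                     VK_PIPELINE_STAGE_RAY_TRACING_SHADER_BIT_KHR,
                     VK_PIPELINE_STAGE_TRANSFER_BIT,
                     0, 0, NULL, 0, NULL, 1, &barrier);
// ... vkCmdCopyImage to the linear image follows ...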
Resolved by now:
When submitting the command buffer to the queue, it required an additional vkQueueWaitIdle(m_queue), since ray tracing finishes with a certain latency.

Vulkan's execution model and synchronization [closed]

I am trying to clear up my confusion around Vulkan's execution model and I would like to have my understanding verified and get answers to questions that still remain unclear to me.
So my understanding is following:
The host and the device execute completely asynchronously with respect to each other. I have to use VkFence to synchronize between them, i.e. when I want to know that a particular submission has finished executing on the device, I have to wait on the host for the appropriate VkFence to be signaled.
Different command queues execute asynchronously with respect to each other. Vulkan specification does not provide any guarantees about the order in which submissions to these queues start or finish execution. So vkQueueSubmit on queue A executes completely independently from vkQueueSubmit on queue B and I have to use VkSemaphore in order to make sure that for example submission to queue B starts executing after the submission to queue A is finished.
However different commands submitted to the same command queue respect their submission order, which means that commands submitted later won't start execution unless commands submitted earlier have already started their execution, but on the other hand this does not mean that these later commands cannot finish execution before earlier commands.
State setting commands (e.g. vkCmdBindPipeline, vkCmdBindVertexBuffers ...) are not asynchronous and delayed for later (like e.g. vkCmdDraw). Actually they execute right away on the host (not on the device) and modify the state of VkCommandBuffer and this cumulatively modified state is used in recording action commands that come after.
From the perspective of synchronization VkRenderPass can be thought of as just a simpler interface to pipeline barriers. It can be thought of as having one pipeline barrier in the beginning of render pass instance (in place of vkCmdBeginRenderPass), one at the end of render pass instance (in place of vkCmdEndRenderPass) and one pipeline barrier after each subpass (in place of vkCmdNextSubpass).
In my head the mental model of how commands execute on a single command queue is as one huge stream of commands (ordered in the order that they were recorded to command buffer and the order that these command buffers were submitted to the queue) split by pipeline barriers. Each pipeline barrier splits the stream into two sections, commands before the barrier (section A) and commands after the barrier (section B). Commands in section B are allowed to start (or rather continue their execution with pipeline stage Y) only after all commands in section A have finished executing pipeline stage X.
Questions:
The Vulkan specification (section 2.2.1. Queue Operation) states:
Command buffer submissions to a single queue respect submission order and other implicit ordering guarantees, but otherwise may overlap or execute out of order. Other types of batches and queue submissions against a single queue (e.g. sparse memory binding) have no implicit ordering constraints with any other queue submission or batch.
Let's say that in my program I have only one general queue that can issue all kinds of commands (graphics, compute, transfer, presentation, ...), so does the above statement mean the following?
vkQueueSubmit #3 starts execution only after vkQueueSubmit #2 has already started execution, which starts only after vkQueueSubmit #1 has already started... but vkQueueBindSparse or vkQueuePresentKHR can start at any time regardless of when they were issued by the host. In other words, I always have to use a VkSemaphore to ensure that presentation (vkQueuePresentKHR) starts at the right time (only after all my graphics work was submitted and executed and thus is ready to be presented).
I am a little bit confused with the definition of submission order within command buffers themselves. Specification states (section 6.2. Implicit Synchronization Guarantees):
1) For commands recorded outside a render pass, this includes all other commands recorded outside a render pass, including vkCmdBeginRenderPass and vkCmdEndRenderPass commands; it does not directly include commands inside a render pass.
2) For commands recorded inside a render pass, this includes all other commands recorded inside the same subpass, including the vkCmdBeginRenderPass and vkCmdEndRenderPass commands that delimit the same render pass instance; it does not include commands recorded to other subpasses.
The first bullet point seems to be clear: the submission order is the order in which commands were recorded to command buffers, whilst whatever is inside a vkCmdBeginRenderPass/vkCmdEndRenderPass block is considered one command for the purpose of this bullet point. The second bullet point is a bit unclear to me, though. How is the submission order defined here? It is clear that any command within a specific subpass does not start its execution unless a previous command has already started its execution, or unless vkCmdBeginRenderPass was executed. But what about different subpasses? Does this mean that subpass 1 can start its execution before subpass 0 has started its execution? This does not make sense to me. What would make sense is if later subpasses were only allowed to start once previous subpasses have finished.
Vulkan specification (section 6.1.2. Pipeline Stages) states:
Execution of operations across pipeline stages must adhere to implicit ordering guarantees, particularly including pipeline stage order.
Does this mean that, for example, the vertex shader stage from draw call 2 is not allowed to begin execution unless the vertex shader stage from draw call 1 has already started its execution?
My mental model of Vulkan's command queue execution (number 6 of my understanding) provokes the question of whether a pipeline barrier submitted at the beginning of a command buffer (B) would affect an earlier command buffer (A). I mean, would it make the commands in command buffer B wait to start execution until the commands in command buffer A are finished? I read somewhere that synchronization between different command buffers is the job of events, but according to my understanding this should also be possible with barriers.
Also, if I used VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT as the source stage and VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT as the destination stage of a pipeline barrier, that should basically disable any overlap between the commands before and after the barrier, right?
So as I see it, there are several different parallelisms in Vulkan:
Between CPU and GPU, these are synchronized with VkFence
Between different commands queues on the GPU, these are synchronized with VkSemaphore
Between different submissions to the same queue; the exception seems to be submissions made with vkQueueSubmit, which respect submission order. These are also synchronized with VkSemaphore.
Between different draw calls. These are synchronized with pipeline barrier.
This one is the most confusing to me. If I have a drawcall that in some way uses the results of a previous drawcall, or writes to the same render target (framebuffer), then as far as I understand, I need to make sure that the later drawcall sees the memory effects of all previous drawcalls. But what about when I am rendering a scene with a bunch of game characters, trees, and buildings? Let's say that each such object is one drawcall and all these drawcalls write to the same framebuffer. Do I need to issue a memory barrier after every drawcall? Intuitively this feels redundant, and the demos that I checked out did not issue any barriers in this case, but are there any guarantees that drawcalls logically following after will see the memory effects of drawcalls logically before them? The question is, when do I need to synchronize between different drawcalls?
Within a single draw call. Synchronization on this level is possible with shader atomic instructions.
However, as long as I am not doing anything unusual, like writing to the same memory address from multiple shader instances or reading from memory that I have just written to (e.g. implementing custom blending in the fragment shader), I should be fine. In other words, if every fragment shader reads and writes only its corresponding pixel or vertex data, I do not need to worry about synchronization within the same drawcall.
The host and the device execute completely asynchronously with respect to each other.
Yes.
Unless explicit synchronization is used (that is, VkFence, vk*WaitIdle, or VkEvent). Or the one rare implicit synchronization (host writes are visible to device access from any subsequent vkQueueSubmit).
Do note there also has to be a "memory domain operation". I.e. you must use VK_PIPELINE_STAGE_HOST_BIT when reading the output of the GPU on the CPU. (A VkFence alone, providing the execution and memory dependency, does not suffice.)
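A sketch of that domain operation, recorded as the last command before the fence-signalling submit (cmdBuf is assumed; the source stage and access depend on what actually produced the data):
VkMemoryBarrier toHost = { VK_STRUCTURE_TYPE_MEMORY_BARRIER };
toHost.srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT;   // assumed producer access
toHost.dstAccessMask = VK_ACCESS_HOST_READ_BIT;

vkCmdPipelineBarrier(cmdBuf,
                     VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,  // assumed producer stage
                     VK_PIPELINE_STAGE_HOST_BIT,
                     0, 1, &toHost, 0, NULL, 0, NULL);
// After vkWaitForFences on the submit's fence, the CPU may read the mapped memory.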
Different command queues execute asynchronously with respect to each other.
Correct. In other words, commands from any two queues may run serially, next to each other (in parallel), or even be pre-empted and time-shared, or some combination of the above. Anything goes. Unless explicit synchronization (VkSemaphore or VkFence) is used.
However different commands submitted to the same command queue respect their submission order
Yes. But it is only a specification formalism that has no real-world effect on its own. It is only specified so that we have a formal linguistic framework in which to describe other things in the specification (e.g. it provides the nomenclature necessary to describe the behavior of pipeline barriers).
State setting commands (e.g. vkCmdBindPipeline, vkCmdBindVertexBuffers ...) are not asynchronous and delayed for later (like e.g. vkCmdDraw).
No, that is not exactly how I would describe it.
They are not "delayed". They are simply executed exactly where they are recorded in the command buffers.
This is perhaps one of the places where we need the "submission order" formalism: all commands later in submission order than a state command see the new state. (I.e. only the commands recorded after the state command see the new state.)
From the perspective of synchronization VkRenderPass can be thought of as just a simpler interface to pipeline barriers.
I don't think so. It is actually perhaps a bit more complex.
What it does is more efficient synchronization, although it perhaps defines functionally the same synchronization as pipeline barriers could. What it does differently is that (among other things) it defines this synchronization as a monolith (i.e. you tell the driver upfront what resources you are gonna use, and you outline all the things you are gonna do to them later).
A render pass is a harness necessitated by mobile tiling-architecture GPUs. On desktop it is also useful if the GPU has some architectural inspiration from mobile GPUs, or simply as an oracle for driver optimization.
so does the above statement mean the following? vkQueueSubmit #3 starts execution only after vkQueueSubmit #2 has already started execution, which starts only after vkQueueSubmit #1 has already started
Yes, and no. Read above about the formalism of submission order.
Technically, yes, the commands are guaranteed to execute their VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT in order. But that stage does nothing.
As said above, it is only a specification formalism used for other things. It does not say anything in and of itself.
I am a little bit confused with the definition of submission order within command buffers themselves.
Yes, the language is a bit tricky. The part that trips you up is the subpasses. Note that subpasses are by definition also asynchronous; therefore we cannot use the simple rule from quote "1)".
If I decode it, what the spec quote means is:
a) Any command recorded before the Render Pass Instance (i.e. before vkCmdBeginRenderPass) is earlier in submission order than the vkCmdBeginRenderPass, and earlier than any and all the commands in the subpasses. (And vice versa, anything in the subpasses is later in submission order.)
b) Similarly any command recorded after the Render Pass Instance (i.e. after vkCmdEndRenderPass) is later in submission order than the vkCmdEndRenderPass, and later than any and all the commands in the subpasses.
c) The commands in a single subpass have the submission order same as the order they were recorded in (vkCmd*).
d) Commands in any two subpasses do not have submission order wrt each other.
Remember, submission order is only a formalism. What "d)" means in reality is only that you cannot execute a vkCmdPipelineBarrier in subpass 1 and expect that barrier to cover anything from subpass 0. (What you must do instead is use a VkSubpassDependency, rather than vkCmdPipelineBarrier, to achieve a dependency between subpasses 0 and 1.)
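A sketch of such a VkSubpassDependency; the stage and access masks here are illustrative assumptions for a "subpass 0 writes color, subpass 1 reads it as an input attachment" case:
VkSubpassDependency dep = {0};
dep.srcSubpass      = 0;
dep.dstSubpass      = 1;
dep.srcStageMask    = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
dep.dstStageMask    = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
dep.srcAccessMask   = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
dep.dstAccessMask   = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT;
dep.dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;
// Passed via VkRenderPassCreateInfo::pDependencies when creating the render pass.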
Execution of operations across pipeline stages must adhere to implicit ordering guarantees, particularly including pipeline stage order.
This is only an introductory statement linking to some of the other material in the specification. It does not say anything in and of itself.
"implicit ordering guarantees" links to the submission order we covered.
"pipeline stage order" simply links to pipeline stage ordering. This simply specifies "logical order" between pipeline stages (e.g. Vertex Shader is before Fragment Shader). What it means is whenever you use stage flag bit in any srcStage parameter, Vulkan will implicitly assume you also mean any logically earlier stage flag bit. (And similarly for dstStage).
My mental model of Vulkan's command queue execution (number 6 of my understanding) provokes the question, whether a pipeline barrier submitted to the beginning of a command buffer (B) would affect an earlier command buffer (A)
Yes, that is the general idea.
Think of it like this: vkQueueSubmit concatenates the commands from the command buffers onto the end of the queue. It is called a "queue" for a reason. Therefore a pipeline barrier affects command buffers that were submitted earlier. (And BTW, that's why it is called submission order.)
Also, if I used VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT as the source stage and VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT as the destination stage of a pipeline barrier, that should basically disable any overlap between the commands before and after the barrier, right?
Yes, but that is code rot.
In this case use VK_PIPELINE_STAGE_ALL_COMMANDS_BIT instead. It is much easier to understand for anyone reading such code.
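For illustration, a sketch of that readable full barrier (cmdBuf assumed); unlike BOTTOM_OF_PIPE and TOP_OF_PIPE, these stages can also carry access masks for a memory dependency:
VkMemoryBarrier full = { VK_STRUCTURE_TYPE_MEMORY_BARRIER };
full.srcAccessMask = VK_ACCESS_MEMORY_WRITE_BIT;
full.dstAccessMask = VK_ACCESS_MEMORY_READ_BIT | VK_ACCESS_MEMORY_WRITE_BIT;

vkCmdPipelineBarrier(cmdBuf,
                     VK_PIPELINE_STAGE_ALL_COMMANDS_BIT,   // instead of BOTTOM_OF_PIPE
                     VK_PIPELINE_STAGE_ALL_COMMANDS_BIT,   // instead of TOP_OF_PIPE
                     0, 1, &full, 0, NULL, 0, NULL);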
So as I see it, there are several different parallelisms in Vulkan:
Asynchrony.
Parallelism is not guaranteed. I.e. the driver is allowed to serialize the workload, or time-share it.
But e.g. with some common sense you can guess there will be (notable) parallelism between CPU and GPU, if it is a dedicated GPU.
The question is, when do I need to synchronize between different drawcalls?
Yes, I think no framebuffer sync between draw commands is one of the exceptions/simplifications Vulkan has.
I believe people support it by the specification of Primitive Order and Rasterization Order.
I.e. in a single subpass you should not need a pipeline barrier between two vkCmdDraw* commands to synchronize the color and depth buffer. (I think) you still need to explicitly synchronize a draw in a subpass with other subpasses and with work outside the render pass instance.
However, as long as I am not doing anything unusual, like writing to the same memory address from multiple shader instances or reading from memory that I have just written to (e.g. implementing custom blending in the fragment shader), I should be fine.
Yes. The pipeline and its fixed and programmable stages should work similarly to OpenGL's. You should for the most part be able to use OpenGL shaders with little to no modification and achieve the same behavior.

The best way to add a new 3D object at runtime

All the time till now I had 3D objects created during the startup. But now I need to add them dynamically. What can be simpler, I thought...
The main issue right now is how to upload the new object's data in the fastest way and find out when the data is uploaded.
Here's my setup:
I'm using the Vulkan Memory Allocator library, so I'm free from the memory-management burden.
I'm planning to use a separate VkBuffer for every object - this way I don't need to manage offsets, alignments and it would be easier to add/remove objects.
And here are my thoughts/questions:
How to upload the data? I want the buffer to be gpu-visible only, that means I need a staging buffer.
If I use the staging buffer I need to know when the data is ready to use on the gpu. I don't want to flush the pipeline and wait. The only way I see is to use a fence per object and only call the draw command when this fence is ready.
If I use a staging buffer and want to upload multiple objects during a short frame, I somehow need to be sure that parts of this staging buffer are not overwritten by different objects. For this, I need to keep it big and handle alignment for the offsets. But how big?
I'm pretty sure I'm overcomplicating. I believe there should be a much simpler pattern. How would you do this?
I believe there should be a much simpler pattern.
It's Vulkan; it's an explicit, low-level API. "Simple" is not its goal.
Overall, your Vulkan code needs to be written to adapt to the capabilities of the hardware. That's the best way to get performance out of it.
The first decision that needs to be made is whether you need staging at all. Staging (for buffer copies) is only necessary if your device's DEVICE_LOCAL memory is not mappable. And yes, there are (integrated) GPUs that allow you to map DEVICE_LOCAL memory. If that is the case, then you can just write directly to where you need the data to go.
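A sketch of that capability check (physicalDevice is assumed); if any memory type carries both flags, you can map it and skip staging for buffer data:
VkPhysicalDeviceMemoryProperties memProps;
vkGetPhysicalDeviceMemoryProperties(physicalDevice, &memProps);

const VkMemoryPropertyFlags wanted =
    VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT | VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
VkBool32 canMapDeviceLocal = VK_FALSE;
for (uint32_t i = 0; i < memProps.memoryTypeCount; ++i) {
    if ((memProps.memoryTypes[i].propertyFlags & wanted) == wanted) {
        canMapDeviceLocal = VK_TRUE;  // write data directly; no staging copy needed
        break;
    }
}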
If staging is needed, then you need to decide if the hardware supports an independent transfer-only queue. If so, then you will likely get performance benefits by employing it. Not all hardware supports transfer-only queues, so your application needs to adapt. Also, transfer-only queues can have restrictions on the granularity of memory transfers taking place on those queues, so you need to check to see if your streaming strategy fits within the limits of that particular hardware.
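A sketch of that queue-family query (the fixed-size array is a sketch simplification):
uint32_t familyCount = 0;
vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &familyCount, NULL);
VkQueueFamilyProperties families[16];            // sketch assumption: at most 16 families
if (familyCount > 16) familyCount = 16;
vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &familyCount, families);

uint32_t transferOnlyFamily = UINT32_MAX;
for (uint32_t i = 0; i < familyCount; ++i) {
    VkQueueFlags flags = families[i].queueFlags;
    if ((flags & VK_QUEUE_TRANSFER_BIT) &&
        !(flags & (VK_QUEUE_GRAPHICS_BIT | VK_QUEUE_COMPUTE_BIT))) {
        transferOnlyFamily = i;
        // Image copies on this queue must respect this granularity:
        VkExtent3D granularity = families[i].minImageTransferGranularity;
        (void)granularity;
        break;
    }
}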
Also, if there is no appropriate transfer queue, you can create the effect of a transfer queue by using a second compute or graphics queue... if the hardware supports multiple queues at all. Being able to submit transfer commands and rendering commands on different queues is a good thing, assuming you are taking advantage of threading (ie: issuing submits of the batches to the different queues on different threads).
If you are able to use a separate queue for transfers (whether a true transfer queue or just a separate compute/graphics queue), then you get to play around with semaphores. The batch that transfers data must signal a semaphore when it completes; this is part of the batch in the vkQueueSubmit call. The batch on the main queue that uses the transferred data for some process needs to wait on that semaphore. So both threads need to be using the same VkSemaphore object. And the wait on the semaphore should just have a global memory barrier, to make the memory visible.
The tricky part is this: you cannot submit the batch that waits on the semaphore until the submit call for the batch that signals it has been submitted. You don't have to wait until completion, but you do have to wait until the vkQueueSubmit call on the transfer queue has returned. So you need a way to transfer the semaphore between different threads, or you could just issue both submit commands on the same thread.
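A sketch of those two submits (transferDone is an assumed VkSemaphore, the other handles likewise; note the first vkQueueSubmit must have returned before the second is issued):
VkSubmitInfo xfer = { VK_STRUCTURE_TYPE_SUBMIT_INFO };
xfer.commandBufferCount   = 1;
xfer.pCommandBuffers      = &transferCmdBuf;
xfer.signalSemaphoreCount = 1;
xfer.pSignalSemaphores    = &transferDone;
vkQueueSubmit(transferQueue, 1, &xfer, VK_NULL_HANDLE);

// On the main queue: wait before consuming the transferred data.
VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_VERTEX_INPUT_BIT;  // assumed consumer
VkSubmitInfo render = { VK_STRUCTURE_TYPE_SUBMIT_INFO };
render.waitSemaphoreCount = 1;
render.pWaitSemaphores    = &transferDone;
render.pWaitDstStageMask  = &waitStage;
render.commandBufferCount = 1;
render.pCommandBuffers    = &renderCmdBuf;
vkQueueSubmit(graphicsQueue, 1, &render, VK_NULL_HANDLE);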
If you aren't using a second queue, then things are slightly simpler.
You still want to build the transfer command buffer itself on a different thread (to take advantage of threading CB construction). But that CB now needs to be communicated to the thread responsible for submitting the rendering stuff. And this channel of communication needs to know that this CB contains transfer commands, which some of the rendering CB processes ought to wait on.
The simplest and most flexible way to do this is to build the transfer CB so that the last command is a vkCmdSetEvent command (and the first command is a vkCmdResetEvent to reset it from previous frames of usage). The submission thread then only needs to create a small CB that only contains a vkCmdWaitEvents command which waits on the transfer event that will be set. That command should issue a full memory barrier, and that CB should execute between the transfer CB and any rendering CBs that read from the transferred data.
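A sketch of both ends of that scheme (transferCB, waitCB, and transferEvent are assumed handles):
// Tail of the transfer CB, recorded on the loader thread:
vkCmdResetEvent(transferCB, transferEvent, VK_PIPELINE_STAGE_TRANSFER_BIT);
// ... vkCmdCopyBuffer / vkCmdCopyImage commands ...
vkCmdSetEvent(transferCB, transferEvent, VK_PIPELINE_STAGE_TRANSFER_BIT);

// The small wait CB, recorded on the submission thread, with a full memory barrier:
VkMemoryBarrier vis = { VK_STRUCTURE_TYPE_MEMORY_BARRIER };
vis.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;
vis.dstAccessMask = VK_ACCESS_MEMORY_READ_BIT;
vkCmdWaitEvents(waitCB, 1, &transferEvent,
                VK_PIPELINE_STAGE_TRANSFER_BIT,      // stage at which the event is signalled
                VK_PIPELINE_STAGE_ALL_COMMANDS_BIT,  // stages that must wait
                1, &vis, 0, NULL, 0, NULL);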
The flexibility of this is in the structure of the process. It is structured similarly to how the multi-queue version works. In both cases, a separate thread needs to communicate something to the render submission thread (in one case, a semaphore; in the other, a CB and an event). And the render submission thread needs to do things to wait on that "something", but without disrupting the process of building the rendering commands itself (in one case, you just change the batch to wait on the semaphore; in the other, you insert a CB that waits for the event).
If you want to get a bit smarter about execution dependencies, you can even have the transfer operation forward information about which pipeline stages need to wait on the operation. But that's mostly an optimization.
Here's the thing though: all of the staging cases are not performance-friendly. They're problematic because you can't do anything while the transfer operation is going on. And that is the case because... you're trying to read from the memory in the same frame you're writing to it. That's bad.
You should endeavor instead to delay rendering any objects for which loading is not complete. Or put another way, you want to load the data for new objects before you need them, not on the same frame you need them. This is what streaming systems do: they pre-emptively load data that will be needed soon, but not right now.
But how big?
Only you and your use cases can answer that question. If you are streaming in fixed-sized blocks (which you should do where possible), then it's fairly easy: your staging buffer should be one or maybe two streaming blocks in size. If your rendering system is more flexible, imposing few limitations on the higher-level code, then your staging buffer and your streaming system needs to be more flexible. And there's no right answer for that; it depends entirely on how it gets used.
Welcome to using explicit, low-level APIs.