WebGL: after gl.drawArrays(), the background is gone?

gl.clearColor(0, 0, 0, 1);
gl.clear(gl.COLOR_BUFFER_BIT);
gl.vertexAttrib3f(position, 0, 0, 0);
setTimeout(() => {
  gl.drawArrays(gl.POINTS, 0, 1);
  // After this draw the background is gone -- why?
  // I already called gl.clearColor(0, 0, 0, 1) above.
});
Why is the background gone after the draw call? I set gl.clearColor(0, 0, 0, 1) beforehand, so why does the background end up with an opacity of 0? Please help me.

This is explained by the WebGL specification. On context attributes:
The depth, stencil and antialias attributes, when set to true, are requests, not requirements. The WebGL implementation should make a best effort to honor them. When any of these attributes is set to false, however, the WebGL implementation must not provide the associated functionality. Combinations of attributes not supported by the WebGL implementation or graphics hardware shall not cause a failure to create a WebGLRenderingContext. The actual context parameters are set to the attributes of the created drawing buffer. The alpha, premultipliedAlpha and preserveDrawingBuffer attributes must be obeyed by the WebGL implementation.
WebGL presents its drawing buffer to the HTML page compositor immediately before a compositing operation, but only if at least one of the following has occurred since the previous compositing operation:
Context creation
Canvas resize
clear, drawArrays, or drawElements has been called while the drawing buffer is the currently bound framebuffer
Before the drawing buffer is presented for compositing the implementation shall ensure that all rendering operations have been flushed to the drawing buffer. By default, after compositing the contents of the drawing buffer shall be cleared to their default values.
This default behavior can be changed by setting the preserveDrawingBuffer attribute of the WebGLContextAttributes object. If this flag is true, the contents of the drawing buffer shall be preserved until the author either clears or overwrites them. If this flag is false, attempting to perform operations using this context as a source image after the rendering function has returned can lead to undefined behavior. This includes readPixels or toDataURL calls, using this context as the source image of another context's texImage2D or drawImage call, or creating an ImageBitmap [HTML] from this context's canvas.
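Concretely for the question above: the setTimeout callback runs after the browser has already composited the page, so by default the drawing buffer has been cleared by the time gl.drawArrays runs. A minimal sketch of the two usual fixes (the canvas id "c" is made up for illustration):

```javascript
const canvas = document.getElementById('c');

// Fix 1: opt out of the post-composite clear entirely.
const gl = canvas.getContext('webgl', { preserveDrawingBuffer: true });

// Fix 2 (usually better for performance): redo the clear in the same
// callback as the draw, so the clear and the draw land in the same frame.
setTimeout(() => {
  gl.clearColor(0, 0, 0, 1);
  gl.clear(gl.COLOR_BUFFER_BIT);
  gl.drawArrays(gl.POINTS, 0, 1);
});
```

With either fix the black clear color is still in the drawing buffer when the point is drawn and composited.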

Related

Vulkan Rendering - Portion of Surface

How do I render a Vulkan framebuffer (VkImage) into only a portion of the surface?
When I draw into the framebuffer, Vulkan clears the whole surface with the clear color.
The surface is 800x600, but I would like Vulkan to render a 300x200 region at an offset of 100x100, for example.
When you begin a render pass, you provide the VkRenderPassBeginInfo object. In this object is the renderArea rectangle, which defines the area of each of the attachment images that the render pass will affect. Any pixels of attachments outside of this area are unaffected by render pass operations, including the clear load op and vkCmdClearAttachments.
Note that the renderArea is subject to the limitations of the render area granularity, as queried from vkGetRenderAreaGranularity.
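A sketch of what that looks like when beginning the render pass. This is GPU-state setup only; `cmd`, `renderPass` and `framebuffer` are assumed to already exist and be compatible:

```cpp
#include <vulkan/vulkan.h>

// Only the 300x200 region at offset (100,100) is touched by the
// render pass, including the clear load op.
VkClearValue clearColor{};
clearColor.color = {{0.0f, 0.0f, 0.0f, 1.0f}};

VkRenderPassBeginInfo beginInfo{};
beginInfo.sType = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO;
beginInfo.renderPass = renderPass;          // assumed to exist
beginInfo.framebuffer = framebuffer;        // assumed to exist
beginInfo.renderArea.offset = {100, 100};
beginInfo.renderArea.extent = {300, 200};
beginInfo.clearValueCount = 1;
beginInfo.pClearValues = &clearColor;

vkCmdBeginRenderPass(cmd, &beginInfo, VK_SUBPASS_CONTENTS_INLINE);
// ... draw calls ...
vkCmdEndRenderPass(cmd);
```

Remember the granularity caveat above: round the rectangle to a multiple of the value reported by vkGetRenderAreaGranularity, or performance (and on some implementations correctness of the clear) may suffer.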
You can also restrict rendering to a subregion of the window by setting the viewport and scissor rectangle in the pipeline's viewport state (part of VkGraphicsPipelineCreateInfo). The viewport can be configured dynamically at draw time using vkCmdSetViewport(), and the scissor with vkCmdSetScissor().
For vkCmdClearAttachments() you set the clear area via the pRects argument (it is not affected by the viewport).

Metal -- skipping commandBuffer.present(drawable) to not display a frame?

In my Metal app for macOS, I have a situation where I only want to display the render results every so often. I want to complete the rendering pass every frame, and save the drawable texture image to a file, but I only want to display the render every sixteenth frame or so. I tried just skipping commandBuffer.present(drawable) when I don't want to display, but it is not working. It just stops displaying new frames once I do that. After skipping one call to commandBuffer.present(), it just doesn't display any new frames. It does continue to run, however.
Why would that happen? Once I commit a command buffer, is it required for it to be presented?
If I can't get this to work, then I will try to render into an offscreen buffer for these frames I don't want displayed. But it would be extra work and require more memory for the offscreen render buffer, so I'd rather just be able to use my regular onscreen render buffer if possible.
Thanks!
It's not required that a command buffer present a drawable. I think the issue is that, once you've obtained the drawable, it's not returned to the pool maintained by the CAMetalLayer (or, indirectly, MTKView) that provided it until it is presented.
Do not render to a drawable's texture if you don't plan on presenting. Rendering to an off-screen texture is the right approach. In fact, if you always render first to an off-screen texture and then, only for the frames you want to display, copy that to a drawable's texture, then you can leave the framebufferOnly property of the CAMetalLayer with its default true value. In that case, there's a decent chance that you won't increase the memory required (because the drawable's texture is really just part of the screen's backing store).

How can I overlay my UI render target onto the back buffer using DirectX 11?

I have two render targets, the back buffer and a UI render target where all 2d UI will be drawn.
I have used the graphics debugger to confirm that both render targets are being written to with the correct data, but I'm having trouble combining the two right at the end.
Question:
My world objects are drawn directly to the backbuffer so there is no problem displaying these, but how do I now overlay the UI render target OVER the backbuffer?
Desired effect:
[screenshot: back buffer render target]
[screenshot: UI render target]
There are several ways to do this. The easiest is to render your UI elements to a texture that has both a RenderTargetView and a ShaderResourceView, then render the whole texture to the back buffer as a single quad in orthographic projection space. This effectively draws a 2D square containing your UI in screen space on the back buffer. It also has the benefit of allowing transparency.
You could also use the OutputMerger stage to blend the UI render target with the back buffer during rendering of the world geometry. You'd need to be careful how you set up your blend operations, as it could result in items being drawn over the UI, or blending inappropriately.
If your UI is not transparent, you could do the UI rendering first and mark the area under the UI in the stencil buffer, then do your world rendering while the stencil test is enabled. This would cause the GPU to ignore any pixels underneath the UI, and not send them to the pixel shader.
The above could also be modified to write the minimum depth value to the pixels within the UI render target, ensuring all geometry underneath it would fail the depth test. This modification would free up the stencil buffer for mirrors/shadows/etc.
The above all work for flat UIs drawn over the existing 3D world. To actually draw more complex UIs that appear to be a part of the world, you'll need to actually render the elements to 3D objects in the world space, or do complex projection operations to make it seem like they are.
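A sketch of the blend-state setup for the first approach. Assumed to already exist: `device`, `context`, `uiSRV` (the ShaderResourceView over the UI texture), and shaders plus input assembly for a full-screen quad:

```cpp
#include <d3d11.h>

// Standard alpha blending: result = src*srcAlpha + dst*(1 - srcAlpha),
// so transparent UI pixels leave the 3D scene visible underneath.
D3D11_BLEND_DESC bd = {};
bd.RenderTarget[0].BlendEnable           = TRUE;
bd.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
bd.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
bd.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
bd.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
bd.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_INV_SRC_ALPHA;
bd.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
bd.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

ID3D11BlendState* uiBlend = nullptr;
device->CreateBlendState(&bd, &uiBlend);

// After the 3D world has been drawn to the back buffer:
context->OMSetBlendState(uiBlend, nullptr, 0xffffffff);
context->PSSetShaderResources(0, 1, &uiSRV);
context->Draw(4, 0);   // full-screen quad as a 4-vertex triangle strip
```

Note the UI texture must not be bound as a render target while it is bound as a shader resource, so unbind it with OMSetRenderTargets before this draw.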

Vulkan update descriptor every frame

I want to render my scene to a texture and then use that texture in a shader, so I created a framebuffer from an image view and recorded a command buffer for it. I successfully submitted and executed the command buffer on the GPU, but the descriptor for the image view samples black. I create the descriptor from the image view before the rendering loop. Is it black because I create it before anything is rendered to the framebuffer? If so, do I have to update the descriptor every frame, or even create a new descriptor from the image view every frame? Or is there another way I can do this?
I have read another thread with this title. Please don't mark this as a duplicate: that thread is about textures, and this is about a texture backed by an image view.
Thanks.
#IAS0601 I will answer the questions from Your comment through an answer, as it allows for much longer text and its formatting is much better. I hope this also answers Your original question, but You don't have to treat it as the answer. As I wrote, I'm not sure what You are asking about.
1) In practically all cases, the GPU accesses images through image views. They specify additional parameters which define how the image is accessed (for example, which part of the image is accessed), but it is still the original image that gets accessed. An image view, as the name suggests, is just a view, a list of access parameters. It doesn't have any memory bound to it and it doesn't contain any data (apart from the parameters specified during image view creation).
So when You create a framebuffer and render into it, You render into original images or, to be more specific, to those parts of original images which were specified in image views. For example, You have a 2D texture with 3 array layers. You create a 2D image view for the middle (second) layer. Then You use this image view during framebuffer creation. And now when You render into this framebuffer, in fact You are rendering into the second layer of the original 2D texture array.
Another thing - when You later access the same image, and when You use the same image view, You still access the original image. If You rendered something into the image, then You will get the updated data (provided You have done everything correctly, like perform appropriate synchronization operations, layout transition if necessary etc.). I hope this is what You mean by updating image view.
2) I'm not sure what You mean by updating descriptor set. In Vulkan when we update a descriptor set, this means that we specify handles of Vulkan resources that should be used through given descriptor set.
If I understand You correctly - You want to render something into an image. You create an image view for that image and provide that image view during framebuffer creation. Then You render something into that framebuffer. Now You want to read data from that image. You have two options. If You want to access only one sample location that is associated with fragment shader's location, You can do this through an input attachment in the next subpass of the same render pass. But this way You can only perform operations which don't require access to multiple texels, for example a color correction.
But if You want to do something more advanced, like blurring or shadow mapping, if You need access to several texels, You must end a render pass and start another one. In this second render pass, You can read data from the original image through a descriptor set. It doesn't matter when this descriptor set was created and updated (when the handle of image view was specified). If You don't change the handles of resources - meaning, if You don't create a new image or a new image view, You can use the same descriptor set and You will access the data rendered in the first render pass.
If You have problems accessing the data, for example (as You wrote) You get only black colors, this suggests You didn't perform everything correctly - render pass load or store ops are incorrect, or initial and final layouts are incorrect. Or synchronization isn't performed correctly. Unfortunately, without access to Your project, we can't be sure what is wrong.
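To make the "update once" point concrete, here is a sketch of writing the image view into a descriptor set a single time, before the rendering loop. Assumed to already exist: `device`, a `descriptorSet` whose layout has a combined image sampler at binding 0, the `imageView` over the rendered image, and a `sampler`:

```cpp
#include <vulkan/vulkan.h>

// Done once, before the rendering loop. As long as the image view
// handle doesn't change, this descriptor set will see whatever the
// earlier render pass wrote into the image -- no per-frame update
// is needed.
VkDescriptorImageInfo imageInfo{};
imageInfo.sampler     = sampler;
imageInfo.imageView   = imageView;
imageInfo.imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;

VkWriteDescriptorSet write{};
write.sType           = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
write.dstSet          = descriptorSet;
write.dstBinding      = 0;
write.dstArrayElement = 0;
write.descriptorCount = 1;
write.descriptorType  = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
write.pImageInfo      = &imageInfo;

vkUpdateDescriptorSets(device, 1, &write, 0, nullptr);
```

The per-frame work is then only the synchronization: the render pass's final layout (or a barrier) must transition the image to SHADER_READ_ONLY_OPTIMAL before the second pass samples it.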

Disable mipmapping in OpenGL ES 2.0

I would like to draw several of the same figures (with the same texture) on screen (OpenGL ES 2.0). These figures will use different magnification and minification filters, and different mipmapping states.
The issue is: once I use mipmapping to draw any figure (i.e. once I have called glGenerateMipmap()), I can't switch mipmapping off.
Is it possible to switch mipmapping off after glGenerateMipmap() has been called at least once?
glGenerateMipmap only generates the smaller mipmap images (based on the top-level image). Those mipmaps are not used for filtering unless you select a proper mipmapping filter mode (through glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_..._MIPMAP_...)). So if you don't want your texture mipmap-filtered, just disable it for this particular texture by setting either GL_NEAREST or GL_LINEAR as the minification filter.
Likewise, not calling glGenerateMipmap does not mean that no mipmapping is going on. A mipmapping filter mode (which is also the default for a newly created texture) will still be used, just that the mipmap images contain rubbish (or the texture is actually incomplete, resulting in implementation-defined behaviour, usually a black texture).
You also shouldn't call glGenerateMipmap each frame before rendering. Call it once after setting the base image of the texture. As said, it generates the mipmap images; those won't go away after they have been generated. What decides whether mipmapping is actually used is the texture object's filter mode.
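A sketch of toggling mipmapping per draw for one and the same texture (assumes `tex` is a GL_TEXTURE_2D whose base image has already been uploaded):

```cpp
#include <GLES2/gl2.h>

glBindTexture(GL_TEXTURE_2D, tex);
glGenerateMipmap(GL_TEXTURE_2D);   // once, after uploading the base image

// Draw with mipmapping: pick a *_MIPMAP_* minification filter.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                GL_LINEAR_MIPMAP_LINEAR);
// ... draw the mipmapped figures ...

// Draw the same texture without mipmapping: plain GL_LINEAR (or
// GL_NEAREST) ignores the mip chain entirely.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// ... draw the non-mipmapped figures ...
```

The mip images stay resident either way; only the minification filter decides whether they are sampled.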