I need to draw a textured quad. My texture has some transparent (alpha) pixels, so I need glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
That's fine. But I also need another blending function on that same quad (glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);) to achieve texture masking. How can I do that? If I call glBlendFunc with both, only the last call takes effect.
Blending is a framebuffer operation and cannot be set per primitive. If you need to combine several texture layers on a single primitive, do this in a shader and emit a compound color/alpha that interacts in the right way with the chosen blending function. If you need different blending functions, you must use separate draw calls.
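For intuition, the two blend functions really do compute different results from the same inputs, which is why a single draw call can only get one of them. A CPU-side sketch of both equations (illustrative only; the function names are made up, this is not part of any GL API):

```cpp
#include <array>

// One RGBA pixel with components in [0, 1].
using Rgba = std::array<float, 4>;

// GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA: classic transparency,
// weighted by the incoming fragment's alpha.
Rgba blendSrcAlpha(const Rgba& src, const Rgba& dst) {
    Rgba out{};
    for (int i = 0; i < 4; ++i)
        out[i] = src[i] * src[3] + dst[i] * (1.0f - src[3]);
    return out;
}

// GL_DST_ALPHA / GL_ONE_MINUS_DST_ALPHA: masking, weighted by the
// alpha already stored in the framebuffer.
Rgba blendDstAlpha(const Rgba& src, const Rgba& dst) {
    Rgba out{};
    for (int i = 0; i < 4; ++i)
        out[i] = src[i] * dst[3] + dst[i] * (1.0f - dst[3]);
    return out;
}
```

With a half-transparent red source over an opaque blue destination, the first equation yields a 50/50 mix while the second passes the source through unchanged, so the two cannot be collapsed into one glBlendFunc state.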
Related
Does Vulkan provide functionality to draw basic primitives? Point, Line, Rectangle, Filled Rectangle, Rounded Corner Rectangle, Filled Rounded Corner Rectangle, Circle, Filled Circle, etc.. ?
I don't believe there are any vkCmdDraw* commands that provide this functionality. If that is true, what needs to be done to draw simple primitives like these?
Vulkan is not a vector graphics library. It is an API for your GPU.
It does have (square) points and lines, though, but sizes other than 1 are optional. Any other high-level features you can think of are not part of the API, except those in the VK_EXT_line_rasterization extension.
Rectangle can be a Line Strip of four lines.
Filled Rectangle is probably two filled triangles (i.e. a Triangle Strip primitive).
Rounded corners and Circles probably could be made by rendering the bounding rectangle, and discarding the unwanted parts of the shape in the Fragment Shader. Or something can be done with a Stencil Buffer. Or there is a Compute Shader, which can do anything. Alternatively they can be emulated with triangles.
There are no such utility functions in Vulkan. If you need to draw a certain primitive you need to provide vertices (and indices) yourself. So if you e.g. want to draw a circle you need to calculate the vertices using standard trigonometric functions, and provide them for your draw calls using a buffer.
This means creating a buffer via vkCreateBuffer, allocating the memory required to back it via vkAllocateMemory, and, after mapping that memory into host address space, copying your primitive's vertices (and/or indices) into it.
If you're on a non-unified memory architecture (i.e. desktop GPUs), you will also want to upload that data from the host to the device for best performance.
Once you have a buffer set up, backed by memory, with your values stored in it, you can draw your primitive using the vkCmdDraw* commands.
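To make the circle example concrete, here is a sketch of generating triangle-fan vertices with standard trigonometric functions (the function name is made up). The resulting array is what you would copy into the mapped buffer and draw with the VK_PRIMITIVE_TOPOLOGY_TRIANGLE_FAN topology listed below:

```cpp
#include <cmath>
#include <vector>

// Generates 2D vertices (x, y interleaved) for a filled circle as a
// triangle fan: the center first, then segmentCount + 1 points on the
// rim (the first rim point is repeated to close the fan).
std::vector<float> circleFanVertices(float cx, float cy, float radius,
                                     int segmentCount) {
    std::vector<float> verts;
    verts.push_back(cx);
    verts.push_back(cy);
    for (int i = 0; i <= segmentCount; ++i) {
        float angle = 2.0f * 3.14159265358979f *
                      static_cast<float>(i) / static_cast<float>(segmentCount);
        verts.push_back(cx + radius * std::cos(angle));
        verts.push_back(cy + radius * std::sin(angle));
    }
    return verts;
}
```

More segments give a rounder circle at the cost of more vertices; for a wireframe circle you would drop the center vertex and use a line strip instead.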
All available types of primitives are defined in the standard, and can be set through the VkPrimitiveTopology member topology in VkPipelineInputAssemblyStateCreateInfo.
The manual page of VkPrimitiveTopology states the following possible values:
VK_PRIMITIVE_TOPOLOGY_POINT_LIST = 0,
VK_PRIMITIVE_TOPOLOGY_LINE_LIST = 1,
VK_PRIMITIVE_TOPOLOGY_LINE_STRIP = 2,
VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST = 3,
VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP = 4,
VK_PRIMITIVE_TOPOLOGY_TRIANGLE_FAN = 5,
VK_PRIMITIVE_TOPOLOGY_LINE_LIST_WITH_ADJACENCY = 6,
VK_PRIMITIVE_TOPOLOGY_LINE_STRIP_WITH_ADJACENCY = 7,
VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST_WITH_ADJACENCY = 8,
VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP_WITH_ADJACENCY = 9,
VK_PRIMITIVE_TOPOLOGY_PATCH_LIST = 10,
You may also need to change polygonMode, if you're rendering a shape you don't want filled.
I don't know if this is of any help, but I have used geometry shaders with OpenGL to draw circles and ellipses. This works by adding uniform values for the amount of subdivision and the radius, and then generating a bunch of triangles or a bunch of lines (depending on whether the shape should be filled or "wireframe"). This requires a little trigonometry (sin and cos). For filled circles I would use the triangle-fan primitive, and for wireframe circles the line-loop; for Vulkan, whichever primitive is available, as @theRPGMaster suggested.
I hear in many places that geometry shaders are comparatively slow, so they should probably not be your go-to choice, as I assume you picked Vulkan for performance reasons. One thing geometry shaders could be good for is the rectangular selection box you see in e.g. Windows Explorer when holding down the left mouse button and moving the cursor. At least I found that to work well.
From what I have seen of Vulkan so far it seems even more barebones than OpenGL is, so I would expect nothing in terms of supporting this kind of thing.
I am using wxWidgets to design a GUI that draws multiple layers with transparency on top of each other.
Therefore I have one method for each layer that draws with wxGraphicsContext onto the "shared" wxImage, which is then plotted to the wxWindow in the paintEvent method.
I have the layer data in arrays of exactly the same dimensions as my wxImage, so I need to draw/manipulate pixel-wise, of course. Currently I am doing that with the DrawRectangle routine, one 1x1 rectangle per pixel. My guess is that this is quite inefficient.
Is there a clever way to manipulate wxImage's pixel data directly, while still being able to use the transparency of each separate layer in the resulting image? Or is the 1x1 pixel drawing with DrawRectangle sufficient?
Thanks for any thoughts on this!
You can efficiently manipulate wxImage pixels by accessing them directly: they are stored in two contiguous arrays (RGB and alpha) that you can work with directly.
The problem is usually converting this wxImage to wxBitmap which can be displayed -- this is the expensive operation, and to avoid it raw bitmap access can be used to manipulate wxBitmap directly instead.
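As an illustration of that layout, here is a sketch using plain arrays shaped the way wxImage stores its pixels: a packed RGB plane plus a separate alpha plane, which is what wxImage::GetData() and wxImage::GetAlpha() return pointers to. In real code you would write through those pointers instead of owning the vectors; the struct and its names here are made up:

```cpp
#include <cstddef>
#include <vector>

// Mirrors wxImage's storage: a packed, row-major RGB byte array plus
// a separate one-byte-per-pixel alpha plane. Writing pixels this way
// replaces issuing a 1x1 DrawRectangle per pixel.
struct LayerBuffers {
    int width, height;
    std::vector<unsigned char> rgb;    // 3 bytes per pixel
    std::vector<unsigned char> alpha;  // 1 byte per pixel

    LayerBuffers(int w, int h)
        : width(w), height(h), rgb(3u * w * h, 0), alpha(1u * w * h, 0) {}

    void setPixel(int x, int y, unsigned char r, unsigned char g,
                  unsigned char b, unsigned char a) {
        std::size_t i = static_cast<std::size_t>(y) * width + x;
        rgb[3 * i]     = r;
        rgb[3 * i + 1] = g;
        rgb[3 * i + 2] = b;
        alpha[i]       = a;
    }
};
```

Per-layer transparency is preserved because each layer keeps its own alpha plane; the expensive step, as noted above, is the wxImage-to-wxBitmap conversion, not the pixel writes.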
I want to display mesh models in OpenGL ES 2.0 in a way that clearly shows the actual mesh, so I don't want smooth shading across each primitive/triangle. The only two options I can think of are:
Each triangle has its own set of normals, all perpendicular to the triangle's surface (but then I guess I can't share vertices among the triangles with this option)
Indicate triangle/primitive edges using black lines and stick to the normal way with shared vertices and one normal for each vertex
Does it have to be like this? Why can't I simply read in primitives, not specify any normals, and somehow let OpenGL ES 2.0 produce flat shading on each face?
Similar Stack Overflow question, but with no suggested solution.
Because in order to have shading on your mesh (smooth or flat), you need a lighting model, and OpenGL ES can't guess it for you. There is no fixed pipeline in GL ES 2, so you can't use any built-in function that would do the job for you (using a built-in lighting model).
In flat shading, the whole triangle will be drawn with the same color, computed from the angle between its normal and the light source (Yes, you also need a light source, which could simply be the origin of the perspective view). This is why you need at least one normal per triangle.
Also, a GPU works in a highly parallel way, processing several vertices (and then fragments) at the same time. To be efficient, it cannot share data among vertices. This is why you need to replicate the normal for each vertex.
And, as you said, your mesh can't share vertices among triangles anymore, because triangles share only the vertex position, not the vertex normal. So you need to put 3 * NbTriangles vertices in your buffer, each with one position and one normal. You also lose the benefit of triangle strips/fans, because none of your faces will have a vertex in common with another (again, because of the differing normals).
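To make the per-face normal concrete, here is a sketch of computing one (unnormalized) face normal as the cross product of two triangle edges; in a flat-shaded vertex buffer you would store this same vector alongside each of the triangle's three vertices (the helper names are made up):

```cpp
#include <array>

using Vec3 = std::array<float, 3>;

// Component-wise vector subtraction.
Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a[0] - b[0], a[1] - b[1], a[2] - b[2]};
}

// Unnormalized face normal: cross product of the two edges leaving p0.
// For counter-clockwise winding this points out of the front face.
Vec3 faceNormal(const Vec3& p0, const Vec3& p1, const Vec3& p2) {
    Vec3 u = sub(p1, p0);
    Vec3 v = sub(p2, p0);
    return {u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]};
}
```

Normalize the result before using it in lighting; since all three vertices of a face carry the same normal, the interpolated normal is constant across the triangle and the shading comes out flat.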
I have a 2D texture that I want to reuse instead of having a different texture each time for different colours. So what I wanted to know is: can you apply a colour to that texture, and if so, how?
Yes, you can blend a texture together with the material color (and even the color of lights). Take a look at this site, especially the parts about glMaterial and the blend function:
http://www.opengl.org/resources/faq/technical/lights.htm
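The mechanism behind this is a per-channel multiply: with the (default) GL_MODULATE texture environment, each texel is multiplied component-wise by the current color, so a white or grayscale texture takes on the tint exactly. A CPU-side sketch of that math (the function name is made up):

```cpp
#include <array>

using Rgb = std::array<float, 3>;

// GL_MODULATE-style tinting: multiply texel and color per channel.
// A white texel (1,1,1) comes out exactly as the tint color; darker
// texels are darkened proportionally.
Rgb modulate(const Rgb& texel, const Rgb& color) {
    return {texel[0] * color[0],
            texel[1] * color[1],
            texel[2] * color[2]};
}
```

This is why a single grayscale texture can stand in for many colored variants: you only change the color you draw with, not the texture data.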
With modern hardware, what is the fastest way to draw an image with a "bitmask", i.e., a mask that specifies whether a given pixel will be drawn or not (this could be extracted from "magic pink" pixels, for example) using OpenGL?
Should I just use alpha blending and set invisible pixels to a=0?
Should I use the old "AND black/white mask then OR image on black bg" technique?
Should I use the alpha pass test?
Should I use a shader?
This matters because I'm planning on drawing massive quantities of such images - as much as I can afford to.
If the mask and the texture are always the same (e.g. for splatting), you should probably use blending with pre-multiplied color values. This usually amounts to a saturated add of the texture onto the background (no per-pixel multiply needed).
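To see why pre-multiplication gives masking essentially for free: with the (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) blend function, a masked-out source pixel (alpha 0 and, because of the pre-multiply, RGB 0) leaves the destination untouched. A CPU-side sketch of that compositing equation (the function name is made up):

```cpp
#include <array>

// One RGBA pixel with components in [0, 1]; source RGB is assumed to
// be pre-multiplied by its alpha.
using Rgba = std::array<float, 4>;

// Pre-multiplied "over" compositing, matching the
// (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) blend function.
Rgba compositePremultiplied(const Rgba& src, const Rgba& dst) {
    Rgba out{};
    for (int i = 0; i < 4; ++i)
        out[i] = src[i] + dst[i] * (1.0f - src[3]);
    return out;
}
```

A fully masked pixel (all zeros) adds nothing and scales the destination by 1, so the background survives intact, while an opaque pixel replaces it outright.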
The alpha test is also a good fit here, but note it is not active by default: enable it with glEnable(GL_ALPHA_TEST) and set a threshold such as glAlphaFunc(GL_GREATER, 0.0f), and fragments whose alpha you set to 0.0 will then be discarded automatically.