DrawPrimitiveUP in D3D11 - direct3d11

Just checking, but it appears that D3D9's DrawPrimitiveUP, and any other method of drawing without a vertex buffer, has been all but stripped out of D3D11.
Is there a way to draw in D3D11 that does not use a vertex buffer, ideally with an example?

No, you have to use a vertex buffer.
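Since DrawPrimitiveUP is gone, the usual replacement is a dynamic vertex buffer that you Map/Unmap every time you have new "user pointer" data. Here is a minimal sketch, assuming you already have a device and context; the Vertex layout and the helper names CreateDynamicVB/DrawUP are illustrative, not part of the API:

#include <d3d11.h>
#include <cstring>

struct Vertex { float x, y, z; float u, v; };

ID3D11Buffer* CreateDynamicVB(ID3D11Device* device, UINT byteSize)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth      = byteSize;
    desc.Usage          = D3D11_USAGE_DYNAMIC;        // CPU-writable every frame
    desc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

    ID3D11Buffer* vb = nullptr;
    device->CreateBuffer(&desc, nullptr, &vb);
    return vb;
}

void DrawUP(ID3D11DeviceContext* context, ID3D11Buffer* vb,
            const Vertex* verts, UINT vertexCount)
{
    // Copy the "user pointer" data into the dynamic buffer.
    D3D11_MAPPED_SUBRESOURCE mapped;
    if (SUCCEEDED(context->Map(vb, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        std::memcpy(mapped.pData, verts, vertexCount * sizeof(Vertex));
        context->Unmap(vb, 0);
    }

    UINT stride = sizeof(Vertex), offset = 0;
    context->IASetVertexBuffers(0, 1, &vb, &stride, &offset);
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    context->Draw(vertexCount, 0);
}

The D3D11_MAP_WRITE_DISCARD pattern hands you fresh memory on each call, which is roughly what the D3D9 runtime did for you behind DrawPrimitiveUP.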

Related

Is it possible to process an FBO's color attachment (texture) using a fragment/pixel shader without a vertex shader?

I'm currently playing around with some terrain-generation stuff using OpenGL ES 2.0 on iOS devices. I have a texture and a heightmap. What I want to do is blur the terrain's texture using a fragment shader, but not on every draw call (just on demand and at the beginning). This is why I decided to process the blurring offscreen inside an FBO and then attach this FBO as a texture to the terrain. Now I'm wondering if it is possible to just add the image (texture) as a color attachment to a newly generated FBO and process it with a fragment shader? Or is there a better approach? No projection, lighting, etc. is needed.
You can't circumvent a vertex shader and have your fragment shader do anything. There are plenty of ways to minimize how much the vertex shader does - you can just pass the geometry right through to the fragment shader. Shaders like that are usually called (unsurprisingly) "pass-through shaders" because they just shuffle information on to the next piece of the pipeline without doing a whole lot.
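For reference, the vertex stage for an offscreen blur like this can be as small as the sketch below (supplied as a C string for glShaderSource; the attribute and varying names are made up for illustration):

static const char* kPassThroughVS =
    "attribute vec4 a_position;   \n"   // already in clip space, e.g. a full-screen quad
    "attribute vec2 a_texCoord;   \n"
    "varying vec2 v_texCoord;     \n"
    "void main() {                \n"
    "    v_texCoord = a_texCoord; \n"
    "    gl_Position = a_position;\n"   // no projection, no lighting
    "}                            \n";

The fragment shader then samples the FBO's color attachment at v_texCoord and does the blur.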

multiple glBlendFunc for one object

I need to draw a textured quad. My texture has some alpha pixels, so I need to do glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
That's OK. But I need some other blending function on that quad (glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);) to achieve texture masking. How can I do that? Because if I set both glBlendFunc calls, one of them is ignored.
Blending is a framebuffer operation and cannot be set per primitive. If you need to combine several texture layers on a single primitive, do this in a shader and emit a compound color/alpha that interacts in the right way with the chosen blending function. If you need different blending functions, you must do this using separate draw calls.
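If you go the separate-draw-calls route, it is just two passes over the same quad with a different blend function each time. A rough sketch, where drawTexturedQuad() stands in for your own quad-drawing code and is not a GL call:

void drawMaskedQuad(GLuint colorTexture, GLuint maskTexture)
{
    glEnable(GL_BLEND);

    // Pass 1: normal alpha blending of the color texture
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    drawTexturedQuad(colorTexture);

    // Pass 2: masking pass that uses the destination alpha written by pass 1
    glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
    drawTexturedQuad(maskTexture);
}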

Difference between Frame buffer object, Render buffer object and texture?

What is the difference between a framebuffer object, a renderbuffer object and a texture? In what contexts will they be used?
Framebuffer
A framebuffer is not actually a buffer. It's an abstraction for an object that defines parameters for a draw operation. It's a small object that holds one or more attachments, which are themselves the actual buffers. Understand the framebuffer as a C struct with many fields. Each field (each attachment in OpenGL terms) can be a pointer to a render buffer, texture, depth buffer, etc.
Texture
An array of standard pixels. This is an actual buffer and can be attached to a framebuffer as the destination of pixels being drawn. Each pixel in a texture typically contains color components and an alpha value (a pixel in the texture can be translated from and into an RGBA quad with 4 floats).
After drawing to a framebuffer that has a texture attached, it's possible to read pixels from the texture to use in another draw operation. This allows, for instance, multi-pass drawing or drawing a scene inside another scene.
Textures can be attached to a shader program and used as samplers.
Renderbuffer
An array of native pixels. A renderbuffer is just like a texture, but stores pixels in a native, internal format. It's optimized for pixel transfer operations. It can be attached to a framebuffer as the destination of pixels being drawn, then quickly copied to the viewport or another framebuffer. This allows implementing double-buffering algorithms, where the next scene is drawn while the previous scene is displayed.
A renderbuffer can also be used to store depth and stencil information that is used just for a single draw procedure. This is possible because only the implementation itself needs to read renderbuffer data, and it tends to be faster than a texture because it uses a native format.
Because it uses a native format, a renderbuffer cannot be attached to a shader program and used as a sampler.
A framebuffer object is more or less just a managing construct. It manages a complete framebuffer as a whole, with all its sub-buffers, like the color buffers, the depth buffer and the stencil buffer.
The textures or renderbuffers provide the actual storage for the individual sub-buffers. This way you can have multiple color buffers, a depth buffer and a stencil buffer, all stored in different textures/renderbuffers. But together they make up a single logical framebuffer into which you render.
So a final fragment (you may call it a pixel, but it actually isn't one yet) written to the framebuffer has one or more color values, a depth value and a stencil value, and they all end up in different sub-buffers of the framebuffer.
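To make the relationship concrete, here is a minimal OpenGL ES 2.0 sketch that wires the three objects together: a texture as the color attachment (so it can be sampled later) and a renderbuffer as the depth attachment (which never needs to be sampled). The size and formats are just illustrative:

GLuint fbo, colorTex, depthRb;
const GLsizei w = 512, h = 512;

// Texture: readable later as a sampler
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Renderbuffer: depth storage that the shaders never read
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, w, h);

// Framebuffer: just wires the attachments together
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the incomplete-framebuffer error
}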

Drawing a line using openGL es 2.0 and iphone touchscreen

This is the super simple version of the question I posted earlier (which I think was too complicated).
How do I draw a line in OpenGL ES 2.0, using a stroke on the touch screen as a reference?
For example, if I draw a square with my finger on the screen, I want it to be drawn on the screen with OpenGL.
I have tried researching a lot but no luck so far.
(I only know how to draw objects which already have fixed vertex arrays; I have no idea how to draw one with a constantly changing array, nor how to implement it.)
You should use vertex buffer objects (VBOs) as the backing OpenGL structure for your vertex data. Then, the gesture must be converted to a series of positions (I don't know how that happens on your platform). These positions must then be pushed to the VBO with glBufferSubData if the existing VBO is large enough or glBufferData if the existing VBO is too small.
Using VBOs to draw lines or any other OpenGL shape is easy and many tutorials exist to accomplish it.
update
Based on your other question, you seem to be almost there! You already create VBOs like I mentioned but they are probably not large enough. The current size is sizeof(Vertices) as specified in glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
You need to change the size given to glBufferData to something large enough to hold all the original vertices plus those added later. You should also use GL_STREAM_DRAW (or GL_DYNAMIC_DRAW) as the last argument (read up on the function).
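For example, the buffer could be allocated once with room to spare; MAX_VERTICES is an assumed upper bound and vertexBuffer is whatever VBO handle you already create:

#define MAX_VERTICES 4096
// Pass NULL so the storage is allocated but not filled yet; GL_STREAM_DRAW
// hints that the contents will be re-specified frequently.
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, MAX_VERTICES * 3 * sizeof(float), NULL, GL_STREAM_DRAW);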
To add a new vertex, use something like this :
glBufferSubData(GL_ARRAY_BUFFER,
                current_nb_vertices * 3 * sizeof(float),  // byte offset where the new data starts
                nb_vertices_to_add * 3 * sizeof(float),   // byte size of the new data (not a vertex count)
                newVertices);
current_nb_vertices += nb_vertices_to_add;
//...
// drawing lines
glDrawArrays(GL_LINE_STRIP, 0, current_nb_vertices);
You don't need the indices in the element array to draw lines.

How do I upload sub-rectangles of image data to an OpenGLES 2 framebuffer texture?

In my OpenGLES 2 application (on an SGX535 on Android 2.3, not that it matters), I've got a large texture that I need to make frequent small updates to. I set this up as a pair of FBOs, where I render updates to the back buffer, then render the entire back buffer as a texture to the front buffer to "swap" them. The front buffer is then used elsewhere in the scene as a texture.
The updates are sometimes solid color sub-rectangles, but most of the time, the updates are raw image data, in the same format as the texture, e.g., new image data is coming in as RGB565, and the framebuffer objects are backed by RGB565 textures.
Using glTexSubImage2D() is slow, as you might expect, particularly on a deferred renderer like the SGX. Not only that, using glTexSubImage2D on the back FBO eventually causes the app to crash somewhere in the SGX driver.
I tried creating new texture objects for each sub-rectangle, calling glTexImage2D to initialize them, then render them to the back buffer as textured quads. I preserved the texture objects for two FBO buffer swaps before deleting them, but apparently that wasn't long enough, because when the texture IDs were re-used, they retained the dimensions of the old texture.
Instead, I'm currently taking the entire buffer of raw image data and converting it to an array of structs of vertices and colors, like this:
struct rawPoint {
    GLfloat  x;   // position
    GLfloat  y;
    GLclampf r;   // color, as floats in [0, 1]
    GLclampf g;
    GLclampf b;
};
I can then render this array to the back buffer using GL_POINTS. For a buffer of RGB565 data, this means allocating a buffer literally 10x bigger than the original data, but it's actually faster than using glTexSubImage2D()!
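For reference, drawing that array with GL_POINTS looks roughly like this; posLoc, colorLoc, points and pointCount are placeholder names, not taken from my actual code:

// Interleaved client-side attributes straight out of the rawPoint array.
glVertexAttribPointer(posLoc,   2, GL_FLOAT, GL_FALSE, sizeof(struct rawPoint), &points[0].x);
glVertexAttribPointer(colorLoc, 3, GL_FLOAT, GL_FALSE, sizeof(struct rawPoint), &points[0].r);
glEnableVertexAttribArray(posLoc);
glEnableVertexAttribArray(colorLoc);
glDrawArrays(GL_POINTS, 0, pointCount);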
I can't keep the vertices or the colors in their native unsigned short format, because OpenGL ES 2 only takes floats in vertex attributes and shader uniforms. I have to submit every pixel as a separate set of coordinates, because I don't have geometry shaders. Finally, I can't use the EGL_KHR_gl_texture_2D_image extension, since my platform doesn't support it!
There must be a better way to do this! I'm burning tons of CPU cycles just to convert image data into a wasteful floating point color format just so the GPU can convert it back into the format it started with.
Would I be better off using EGL Pbuffers? I'm not excited by that prospect, since it requires context switching, and I'm not even sure it would let me write directly to the image buffer.
I'm kind of new to graphics, so take this with a big grain of salt.
Create a native buffer the size of your texture.
Use the native buffer to create an EGL image:
EGLImageKHR image = eglCreateImageKHR(eglGetCurrentDisplay(),
                                      eglGetCurrentContext(),
                                      EGL_GL_TEXTURE_2D_KHR,
                                      buffer,
                                      attr);
I know this uses EGL_GL_TEXTURE_2D_KHR. Are you sure your platform doesn't support this? I am developing on a platform that uses SGX535 as well, and mine seems to support it.
After that, bind the texture as usual. You can memcpy into your native buffer to update sub-rectangles very quickly, I believe.
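Binding it "as usual" then goes through the GL_OES_EGL_image extension, roughly like this (the function pointer is normally fetched with eglGetProcAddress, and availability depends on the driver):

// image is the EGLImageKHR created above; textureId is an ordinary GL texture.
glBindTexture(GL_TEXTURE_2D, textureId);
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)image);
// After this, writes into the native buffer (e.g. via memcpy) should show up
// in the texture without a glTexSubImage2D upload.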
I realize I'm answering a month old question, but if you need to see some more code or something, let me know.