I have a line art texture applied to an object in 3D space. By default, both the object and the texture receive perspective scaling from the model-view-projection matrix. Is there an established technique that preserves the positioning and scaling of the 3D object while keeping the line width constant relative to the screen? The desired effect is as though a pen of fixed screen width were used to trace a path on the 3D object.
Would something like SDF-based font rendering help?
Or maybe some kind of projective texture mapping?
Or render the object and texture to a buffer and expand the lines using edge detection?
Unfortunately, I'm using OpenGL ES 2, so I can't use a geometry shader or anything like that.
The solution I came up with is inspired by procedural SDF generation, as @Felipe suggested, combined with Chris Green's Improved Alpha-Tested Magnification for Vector Textures and Special Effects.
Basically, I hand-draw shapes into textures using pure red, green, and blue. I then render the scene with those textures and generate an SDF on the fly in a second render pass. The SDF generation uses Green's algorithm with a small spread to improve performance. The SDF is then passed to a final render pass that thresholds and antialiases it per Green's approach, using fwidth to maintain a constant line weight regardless of the object's distance from the camera.
Since the original question was just about the approach/concept, I'm not posting a complete example at the moment, but I'll see if I can put together a Shadertoy sometime soon.
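In the meantime, a rough sketch of the final thresholding pass could look something like the code below. This is a hypothetical GLSL ES 2.0 fragment shader embedded as a C++ string (as it would be fed to glShaderSource); the names kInkFragmentSrc, u_sdfTex, and u_halfWidthPx are made up, the SDF is assumed to sit in the red channel with 0.5 marking the drawn contour, and fwidth needs the GL_OES_standard_derivatives extension on ES 2.

// Hypothetical final-pass fragment shader (GLSL ES 2.0), embedded as a C++
// string for glShaderSource(). The SDF produced in the second pass is assumed
// to be in the red channel of u_sdfTex, with 0.5 marking the drawn contour
// as in Green's paper.
static const char* kInkFragmentSrc = R"(
    #extension GL_OES_standard_derivatives : enable
    precision mediump float;

    uniform sampler2D u_sdfTex;
    uniform float     u_halfWidthPx;  // half the stroke width, in screen pixels
    varying vec2      v_uv;

    void main() {
        // Distance-field value at this fragment.
        float dist = texture2D(u_sdfTex, v_uv).r;

        // fwidth() estimates how much 'dist' changes per screen pixel, so it
        // converts a width given in pixels into distance-field units here.
        float px = fwidth(dist);

        // Stroke of constant screen width around the 0.5 contour, with a
        // roughly one-pixel antialiasing band on each side.
        float d     = abs(dist - 0.5);
        float halfW = u_halfWidthPx * px;
        float alpha = 1.0 - smoothstep(halfW - px, halfW + px, d);

        gl_FragColor = vec4(vec3(0.0), alpha);  // black "ink", alpha blended
    }
)";

The key point is that fwidth(dist) tells you how many SDF units one screen pixel covers at that fragment, so a width specified in pixels stays constant no matter how close or far the object is.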
You could create the texture procedurally in a fragment shader and use the size of a pixel for interpolations.
See:
Fabrice Neyret's blog
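As a hypothetical illustration of that suggestion (none of this is taken from Fabrice Neyret's posts): compute a simple pattern, such as grid lines in UV space, directly in the fragment shader, and use the screen-space derivative of the pattern coordinate as the "size of a pixel" for the interpolation. Again GLSL ES embedded as a C++ string; u_cells and u_halfWidthPx are invented uniforms.

// Hypothetical procedural variant: no texture at all, just a grid pattern
// computed per fragment, with fwidth() used so the lines stay the same
// width in screen pixels at any distance.
static const char* kProceduralGridFragmentSrc = R"(
    #extension GL_OES_standard_derivatives : enable
    precision mediump float;

    uniform float u_cells;        // number of grid cells across the UV square
    uniform float u_halfWidthPx;  // half the line width, in screen pixels
    varying vec2  v_uv;

    void main() {
        vec2 g = v_uv * u_cells;
        // Distance to the nearest grid line, in cell units.
        vec2 distToLine = 0.5 - abs(fract(g) - 0.5);
        // Cell units per screen pixel at this fragment.
        vec2 px = fwidth(g);
        vec2 w  = u_halfWidthPx * px;
        vec2 line = 1.0 - smoothstep(w - px, w + px, distToLine);
        float alpha = max(line.x, line.y);
        gl_FragColor = vec4(vec3(0.0), alpha);
    }
)";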
Does Vulkan provide functionality to draw basic primitives? Point, Line, Rectangle, Filled Rectangle, Rounded Corner Rectangle, Filled Rounded Corner Rectangle, Circle, Filled Circle, etc.?
I don't believe there are any vkCmdDraw* commands that provide this functionality. If that is true, what needs to be done to draw simple primitives like these?
Vulkan is not a vector graphics library. It is an API for your GPU.
It does have (square) Points and Lines, though sizes and widths other than 1.0 are optional. Any other high-level features you can think of are not part of the API, apart from what the VK_EXT_line_rasterization extension adds.
Rectangle can be a Line Strip of four lines.
A Filled Rectangle is probably two filled triangles (i.e. a Triangle Strip primitive).
Rounded corners and Circles could probably be made by rendering the bounding rectangle and discarding the unwanted parts of the shape in the Fragment Shader. Or something can be done with a Stencil Buffer. Or there is the Compute Shader, which can do anything. Alternatively, they can be approximated with triangles.
There are no such utility functions in Vulkan. If you need to draw a certain primitive you need to provide vertices (and indices) yourself. So if you e.g. want to draw a circle you need to calculate the vertices using standard trigonometric functions, and provide them for your draw calls using a buffer.
This means creating a buffer via vkCreateBuffer, allocating the required memory via vkAllocateMemory, and, after mapping that memory into host address space, copying your primitive's vertices (and/or indices) into it.
If you're on a non-unified memory architecture (i.e. a desktop GPU), you'll also want to upload that data from the host to the device for best performance.
Once you've got a buffer set up, backed by memory, with your values stored in it, you can draw your primitive using the vkCmdDraw* commands.
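As a sketch of the vertex-calculation step for the circle example (a hypothetical helper; the name and parameters are made up), something like this would produce the positions, which you then upload through the vkCreateBuffer / vkAllocateMemory / map / copy steps described above:

#include <cmath>
#include <vector>

// Hypothetical helper: generate 2D positions (x, y interleaved) for a circle.
// With 'filled' set, the result suits VK_PRIMITIVE_TOPOLOGY_TRIANGLE_FAN
// (center first, then the perimeter); without it, the closed perimeter suits
// VK_PRIMITIVE_TOPOLOGY_LINE_STRIP.
std::vector<float> makeCircleVertices(float cx, float cy, float radius,
                                      int segments, bool filled) {
    std::vector<float> verts;
    if (filled) {
        verts.push_back(cx);  // fan center
        verts.push_back(cy);
    }
    for (int i = 0; i <= segments; ++i) {
        // Standard trigonometry: walk around the circle in 'segments' steps,
        // repeating the first perimeter vertex at the end to close the shape.
        float a = 6.2831853f * static_cast<float>(i) / static_cast<float>(segments);
        verts.push_back(cx + radius * std::cos(a));
        verts.push_back(cy + radius * std::sin(a));
    }
    return verts;
}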
All available primitive types are defined in the specification and are selected through the topology member (of type VkPrimitiveTopology) of VkPipelineInputAssemblyStateCreateInfo.
The manual page for VkPrimitiveTopology lists the following possible values:
VK_PRIMITIVE_TOPOLOGY_POINT_LIST = 0,
VK_PRIMITIVE_TOPOLOGY_LINE_LIST = 1,
VK_PRIMITIVE_TOPOLOGY_LINE_STRIP = 2,
VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST = 3,
VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP = 4,
VK_PRIMITIVE_TOPOLOGY_TRIANGLE_FAN = 5,
VK_PRIMITIVE_TOPOLOGY_LINE_LIST_WITH_ADJACENCY = 6,
VK_PRIMITIVE_TOPOLOGY_LINE_STRIP_WITH_ADJACENCY = 7,
VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST_WITH_ADJACENCY = 8,
VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP_WITH_ADJACENCY = 9,
VK_PRIMITIVE_TOPOLOGY_PATCH_LIST = 10,
You may also need to change polygonMode, if you're rendering a shape you don't want filled.
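As a minimal sketch of where those two settings live (the chosen values are just an example for drawing a rectangle outline, not a complete pipeline setup):

#include <vulkan/vulkan.h>

// Hypothetical fragment of pipeline setup: choose the primitive topology and,
// for unfilled shapes, the polygon mode. Both structs are later referenced
// from VkGraphicsPipelineCreateInfo.
void configurePrimitiveState(VkPipelineInputAssemblyStateCreateInfo& inputAssembly,
                             VkPipelineRasterizationStateCreateInfo& rasterization) {
    inputAssembly = {};
    inputAssembly.sType    = VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO;
    inputAssembly.topology = VK_PRIMITIVE_TOPOLOGY_LINE_STRIP;  // e.g. a rectangle outline
    inputAssembly.primitiveRestartEnable = VK_FALSE;

    rasterization = {};
    rasterization.sType       = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO;
    rasterization.polygonMode = VK_POLYGON_MODE_LINE;  // wireframe; needs the fillModeNonSolid feature
    rasterization.cullMode    = VK_CULL_MODE_NONE;
    rasterization.frontFace   = VK_FRONT_FACE_COUNTER_CLOCKWISE;
    rasterization.lineWidth   = 1.0f;                   // widths other than 1.0 need wideLines
}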
I don't know whether this is of any help, but I have used geometry shaders with OpenGL to draw circles and ellipses. This works by adding uniform values for the amount of subdivision and the radius, and then generating a bunch of triangles or a bunch of lines (depending on whether the shape should be filled or "wireframe"). It requires a little trigonometry (sin and cos). For filled circles I would use the triangle-fan primitive, and for wireframe circles the line-loop primitive. For Vulkan: whichever primitive is available, as @theRPGMaster suggested.
I hear in many places that geometry shaders are comparatively slow, so they should probably not be your go-to choice, as I assume you picked Vulkan for performance reasons. One thing geometry shaders can be good for is the rectangular selection box you see in e.g. Windows Explorer when holding down the left mouse button and moving the cursor. At least I found that to work well.
From what I have seen of Vulkan so far it seems even more barebones than OpenGL is, so I would expect nothing in terms of supporting this kind of thing.
When learning to program simple 2D games, each object would have a sprite sheet with little pictures of how a player would look in every frame/animation. 3D models don't seem to work this way, or we would need one image for every possible view of the object!
For example, a rotating cube would need a lot of images depicting how it looks from every single side. So my question is: how are 3D model "images" represented and rendered by the engine when viewed from arbitrary perspectives?
Multiple methods
There are a number of methods for rendering and storing 3D graphics and models. There are even different methods for rendering 2D graphics! In addition to 2D bitmaps, you also have SVG. SVG uses numbers to define points in an image; these points make shapes, and they can also define curves. This lets you describe images without the need for pixels, which can mean smaller file sizes and the ability to transform the image (scale and rotate) without distortion. Most 3D graphics use a similar technique, except in 3D. What these methods have in common, however, is that they all ultimately render the data to a 2D grid of pixels.
Projection
The most common method for rendering 3D models is projection. All of the shapes to be rendered are broken down into triangles before rendering. Why triangles? Because triangles are guaranteed to be coplanar. That saves a lot of work for the renderer since it doesn't have to worry about "coloring outside of the lines". One drawback to this is that most 3D graphics projection technologies don't support perfect spheres or other round surfaces. You have to use approximations and other tricks to make round surfaces (although there are some renderers which support round surfaces). The next step is to convert or project all of the 3D points into 2D points on the screen (as seen below).
From there, you essentially "color in" the triangles to make everything look solid. While this is pretty fast, another downside is that you can't really have true reflections and refractions. Any time you see a refractive or reflective surface in a game, the engine is only using trickery to make it look like a reflective or refractive material. The same goes for lighting and shading.
Here is an example of special coloring being used to make a sphere approximation look smooth. Notice that you can still see straight lines around the smoothed version:
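To make the projection step concrete, here is a hypothetical, heavily simplified sketch of projecting one camera-space point to pixel coordinates; real renderers do this with a 4x4 projection matrix and also handle aspect ratio, clipping, and depth, which are omitted here:

#include <array>

// Hypothetical, heavily simplified projection: a point in camera space
// (camera at the origin looking down -z) is divided by its depth and then
// mapped to pixel coordinates.
std::array<float, 2> projectToScreen(float x, float y, float z,
                                     float focalLength,
                                     float screenWidth, float screenHeight) {
    // Perspective divide: the farther away the point, the closer it lands
    // to the center of the screen.
    float ndcX = focalLength * x / -z;
    float ndcY = focalLength * y / -z;

    // Map from roughly [-1, 1] to pixels (y flipped: screen y grows downward).
    return { (ndcX * 0.5f + 0.5f) * screenWidth,
             (1.0f - (ndcY * 0.5f + 0.5f)) * screenHeight };
}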
Ray tracing
You can also render polygons using ray tracing. With this method, you basically trace the paths that the light takes to reach the camera. This allows you to make realistic reflections and refractions. However, I won't go into detail, since it is currently too slow to realistically use in games. It is mainly used for 3D animations (like what Pixar makes). Simple scenes with low quality settings can be ray traced pretty quickly, but with complicated, realistic scenes, rendering can take several hours for a single frame (as is the case with Pixar movies). However, it does produce ultra-realistic images:
Ray casting
Ray casting is not to be confused with the above-mentioned ray tracing. Ray casting does not trace the light paths, which means you only get flat, non-reflective surfaces and no realistic lighting. However, it can be done relatively quickly, since in most cases you don't even need to cast a ray for every pixel. This is the method that was used for early games such as Doom and Wolfenstein 3D. In those games, ray casting was used for the maps, while the characters and other items were rendered using 2D sprites that always faced the camera. The sprites were drawn from a few different angles to make them look 3D. Here is an image of Wolfenstein 3D:
Castle Wolfenstein with JavaScript and HTML5 Canvas: Image by Martin Kliehm
Storing the data
3D data can be stored using multiple methods. It is not necessarily dependent on the rendering method that is used. The stored data doesn't mean anything by itself, so you have to render it using one of the methods that have already been mentioned.
Polygons
This is similar to SVG. It is also the most common method for storing model data. You define the geometry using 3D points. These points can have other properties, such as texture data (in the form of UV mapping), color data, and whatever else you might want.
The data can be stored using a number of file formats. A common file format that is used is COLLADA, which is an XML file that stores the 3D data. There are a lot of other formats though. Fundamentally, however, all file formats are still storing the 3D data.
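Whatever the file format, once loaded the data typically ends up in an in-memory layout along these lines (a hypothetical sketch; the type and field names are made up):

#include <cstdint>
#include <vector>

// Hypothetical in-memory layout for polygon data: each vertex carries a
// position plus optional attributes such as UV texture coordinates and a
// color, and every three indices form one triangle.
struct Vertex {
    float position[3];  // x, y, z
    float uv[2];        // texture coordinates (UV mapping)
    float color[4];     // per-vertex RGBA color
};

struct Mesh {
    std::vector<Vertex>   vertices;
    std::vector<uint32_t> indices;  // triangle list: indices.size() % 3 == 0
};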
Here is an example of a polygon model:
Voxels
This method is pretty simple. You can think of voxel models like bitmaps, except they are a bunch of bitmaps layered together to make a 3D bitmap, so you have a 3D grid of pixels. One way of rendering voxels is converting the voxel points to 3D cubes. Note that voxels do not have to be rendered as cubes, however; like pixels, they are only points that may have color data, which can be interpreted in different ways. I won't go into much detail since this isn't too common and you generally render the voxels with polygon methods (like when you render them as cubes). Here is an example of a voxel model:
Image by Wikipedia user Vossman
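As a hypothetical sketch of how a voxel grid might be stored in memory (the names are made up):

#include <cstdint>
#include <vector>

// Hypothetical minimal voxel grid: a flat array addressed as a 3D volume,
// where each cell stores a color/material index (0 meaning empty). A renderer
// could then emit one cube's worth of triangles per non-empty cell.
struct VoxelGrid {
    int width = 0, height = 0, depth = 0;
    std::vector<uint8_t> cells;  // width * height * depth entries

    uint8_t at(int x, int y, int z) const {
        return cells[x + y * width + z * width * height];
    }
};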
In the 2D world with sprite sheets, you are drawing one of the sprites depending on the state of the actor (visual representation of your object). In the 3D world you are rendering a model for your actor that is a series of polygons with a texture mapped to it. There are standardized model files (I am mostly familiar with Autodesk 3DS Max), in which the model and the assigned textures can be packaged together (a .3DS or .MAX file), providing everything your graphics library needs to render the object and its textures.
In a nutshell, you don't use images for each view of a 3D object, you have a model with a texture rendered on it, creating a dynamic view as it is rendered by the graphics library.
Can anybody explain how rendering differs from rasterization, especially in the context of font rendering (why not font rasterization)?
Can rendering be called a special technique (like greyscale rendering and subpixel rendering) before the rasterizer rasterizes the image?
Rendering is a broad term that generally means transforming computer-readable information, for example objects in a 3D scene, into one or more images.
Rasterization is a more specific term that typically means the process of transforming a vector (curve based) image to a rasterized (pixel based) image.
Rendering involves performing the calculations for vectors and shape geometry for the elements to be drawn.
Rasterizing involves converting the rendered vectors and shapes into pixel bit maps for display.
I have created an experimental fast rectangular object tracking system; it will be used for head tracking and controlling objects in a 3D engine (Ogre3D).
For now I am able to show the webcam any kind of brightly colored rectangle (text markers are good objects), and the system registers basic properties of this object (hue/value/lightness and initial width and height at 0 degrees rotation).
After I have registered the trackable object, I do some simple frame processing to create a grayscale probability map.
So now I have 2 known things:
1) 4 corners for the last object position (it's always a rectangle but it may be rotated)
2) a pretty rectangular (but still far from perfect) blob which is the brightest in the frame. I can get coordinates of any point of the blob without problems, point detection is stable enough.
I can find a bounding rectangle of the object without problems, but I have a problem with detecting the object corners themselves.
I need the simplest possible (quick & dirty would be great) algorithm to scan the image starting from some known coordinates (a point inside the blob) and detect the 4 new (x, y) coordinates of the corners of the "blobish" rectangle (not the corners of the bounding box, but the corners of the rectangular blob itself).
A ready-to-use C++ function would be awesome, but somehow Google doesn't like me today :(
I think it would be overkill to use some complicated function from the OpenCV library just to extract 4 points of a single rectangular blob. But if you know a quick and efficient way to do it using OpenCV (it must be real-time and light on the CPU, because I'll be running the 3D engine at the same time), then I would be really grateful.
You can apply a Hough transform to the segmented image to detect lines. From the detected lines you can calculate their intersections to find the corner coordinates of the blob.
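A rough OpenCV sketch of that idea (the Hough threshold and the helper names are made up and would need tuning for your probability map):

#include <cmath>
#include <vector>
#include <opencv2/imgproc.hpp>

// Hypothetical helper: intersect two lines given in (rho, theta) form,
// i.e. x*cos(theta) + y*sin(theta) = rho.
static cv::Point2f intersect(const cv::Vec2f& a, const cv::Vec2f& b) {
    float r1 = a[0], t1 = a[1], r2 = b[0], t2 = b[1];
    float det = std::cos(t1) * std::sin(t2) - std::sin(t1) * std::cos(t2);
    return { (r1 * std::sin(t2) - r2 * std::sin(t1)) / det,
             (r2 * std::cos(t1) - r1 * std::cos(t2)) / det };
}

// Hypothetical sketch: run the standard Hough transform on the binarized
// probability map, then intersect roughly perpendicular line pairs to get
// candidate corners of the rectangular blob.
std::vector<cv::Point2f> findCorners(const cv::Mat& binaryBlob) {
    std::vector<cv::Vec2f> lines;
    cv::HoughLines(binaryBlob, lines, 1, CV_PI / 180, 80);  // threshold needs tuning
    std::vector<cv::Point2f> corners;
    for (size_t i = 0; i < lines.size(); ++i)
        for (size_t j = i + 1; j < lines.size(); ++j)
            // Skip near-parallel pairs: they are opposite sides of the
            // rectangle and have no useful intersection.
            if (std::abs(std::sin(lines[i][1] - lines[j][1])) > 0.5f)
                corners.push_back(intersect(lines[i], lines[j]));
    return corners;
}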
I have a requirement to apply a texture pattern to the bars of a 3D bar chart using iReport. I can see the texture pattern in the default JRViewer, but when I export the same report to PDF, the texture pattern is missing and the 3D bars appear transparent instead.
Does anyone have a solution?
With a little research we found the answer. There is an option in iReport for charts called renderType; we need to set it to svg (Scalable Vector Graphics).
The texture pattern will then appear in the PDF as well.
The disadvantage of this approach is that the PDF file size increases.