OpenGL lights, textures, etc. correct way?

Until this moment I've only implemented all the effects in GLSL shaders using inputs, outputs and uniforms, except for a couple of really essential built-ins like gl_Position. I've read several tutorials and had a lecture on computer graphics, and every time they all implement things by looking at the physical model and calculating everything from input values and uniforms. That is roughly how I thought it all works.
Now I've run into the fact that there is much more to this: the glLight* API functions, and the gl_LightSource and gl_Texture built-ins in GLSL, with a big set of light types and lighting models predefined. It seems to be a rather different way of programming shaders.
I wonder if there are any advantages/disadvantages to using one way or the other? Did I miss something very important? It looks like I'm doing a lot of redundant work.

All the glLight* calls in the OpenGL API, and the related gl_Light* variables in GLSL, are from the old and deprecated fixed-function pipeline!
Now you must do all the calculations yourself through shaders, as I gather you're already doing.
Why did they "remove" all the awesome stuff?
They "removed" (deprecated) the Matrix Stack, Light calls, Immediate Mode Rendering, etc. etc. etc. and the list goes one for various reason. But the overall reason is that it's better to implement and control those things yourself.
It requires more work on our side to implement and control all those things, but in return you're in total control of everything, and of when you actually want to use something.
Using the fixed-function pipeline, OpenGL would allocate and load various things you might never even want to use.
Also, taking the Matrix Stack as an example: you would usually (the lazy way) make OpenGL re-calculate the Matrix Stack each render call, using the old glPushMatrix(), glPopMatrix(), glTranslate*(), etc. functions. Now, because YOU HAVE TO, you are forced to do all those calculations and handle the matrices yourself. And then you realize that most of the matrices, and much more, could simply be allocated and calculated once, or at least not every render call.
Of course, Immediate Mode Rendering wasn't deprecated just so that we would have to implement it ourselves; now we simply use Buffers, because they are so much better in every way.
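As a rough illustration of both points, here is a minimal GLSL vertex shader sketch: the application computes one model-view-projection matrix itself (once, or whenever it actually changes) and uploads it through a plain uniform, while the vertex data arrives through a buffer-backed attribute instead of glVertex* calls. The names uMVP and aPosition are illustrative, not standard:

#version 330 core

// Vertex position streamed from a VBO instead of immediate-mode glVertex* calls.
layout(location = 0) in vec3 aPosition;

// Model-view-projection matrix computed by the application and uploaded via
// glUniformMatrix4fv, replacing the deprecated matrix stack.
uniform mat4 uMVP;

void main()
{
    gl_Position = uMVP * vec4(aPosition, 1.0);
}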
Extra
If you want a great spreadsheet that shows which functions are deprecated, which are core functions, which are extension functions, etc., then take a look here. Though be aware that this spreadsheet is made by people who use OpenGL, not by the Khronos Group (the current developers of OpenGL) nor Silicon Graphics (the creators of OpenGL).

Ignore the glLightXXX functions, the related gl_LightXXX variables, and all the documentation associated with them. It's all deprecated, and if you look closely at the docs you'll probably find that they're several years old or specifically written for OpenGL versions <= 2.x. Instead, continue to work with your own vertex attributes and set up your lighting configuration in your own uniforms however you please, based on the model of lighting you want to implement. It's more work, but it's more flexible in the long run.
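As one example (by no means the only possible lighting model), here is a minimal per-fragment diffuse sketch driven entirely by user-defined uniforms; all the names are illustrative and would be set with ordinary glUniform* calls instead of glLight*:

#version 330 core
in vec3 vNormal;            // interpolated surface normal from the vertex shader
in vec3 vWorldPos;          // interpolated fragment position
uniform vec3 uLightPos;     // your own light parameters, replacing gl_LightSource
uniform vec3 uLightColor;
uniform vec3 uAlbedo;       // surface base color
out vec4 fragColor;

void main()
{
    vec3 n = normalize(vNormal);
    vec3 l = normalize(uLightPos - vWorldPos);
    float diff = max(dot(n, l), 0.0);    // Lambertian diffuse term
    fragColor = vec4(uAlbedo * uLightColor * diff, 1.0);
}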
The OpenGL lighting model that uses glLight pre-dates the programmable shader pipeline and represents a particular way of doing lighting in the fixed-function pipeline.
Once GLSL entered the scene, it became possible to use the OpenGL lighting model in conjunction with shaders. You could use the same glLight function and its relatives to set up your lighting parameters, but then write shaders that used the same information in different ways, allowing per-pixel lighting calculations.
Textures are a little more murky, because OpenGL still has a texture model and many of the GL functions relating to textures are still valid, though some are deprecated. However, any documentation that refers to GLSL variables like gl_Texture is similarly out of date. Current OpenGL uses samplers declared in the shader for texture access.
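For instance, modern texture access in a fragment shader goes through a sampler uniform you declare yourself and bind to a texture unit with glUniform1i (uDiffuseMap is an illustrative name):

#version 330 core
in vec2 vTexCoord;
uniform sampler2D uDiffuseMap;   // bound to a texture unit by the application
out vec4 fragColor;

void main()
{
    // texture() replaces the old gl_Texture*-style built-ins
    fragColor = texture(uDiffuseMap, vTexCoord);
}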
If you want to make sure you're doing it the 'modern' way, create a forward-compatible OpenGL context with a core profile version of 3.3 or higher, and make sure your shaders declare the matching version number as their first line, like so:
#version 330
This will cause the use of any deprecated OpenGL function or deprecated shader variable to generate an error so that you know to avoid them.

Current graphics hardware offers an interface to customize the individual rendering steps, e.g. vertex shading, tessellation, geometry shading, fragment shading, and so on. GLSL is the language used to program or influence these rendering steps on the graphics hardware through this interface.
The predefined functions glLight*, glTex* and so on belong to OpenGL's deprecated fixed-function graphics pipeline. Modern OpenGL still supports these fixed-function calls, but it is strongly recommended to use GLSL for the different rendering steps instead.
glLight is a fixed function that only influences vertex processing, so you can only achieve per-vertex shading, which does not look very realistic.
When you program the lighting yourself in the fragment shader using GLSL, you can directly influence every pixel.
So to summarize: the main advantage is that the programmer is more flexible and able to influence every rendering step, which enables you to achieve sophisticated and realistic 3D graphics. The main disadvantage is that you need much more knowledge (GLSL, the graphics pipeline) and much more programming effort to achieve the same result as with the fixed functions.
Best regards

Related

Is programming a voxel based graphics API theoretically possible?

This is entirely a theoretical question, because I understand the time it would take to do such a thing would be ridiculous.
I've been working with "voxels" a lot lately, and the only way I can display them to a user is to either triangulate the visible surfaces or write a CPU ray tracer, but both come with their own problems.
Simply put: if we dismiss the storage space needed for voxel meshes and targeted a very specific GPU, would someone wanting to create a graphics API like OpenGL, but with "true" voxel primitives that don't need to be converted, be able to make such a thing? Or are GPUs designed specifically for triangles, with no way to introduce a new base primitive?
It's possible, and it has already been done many times:
games like Minecraft, SpaceEngineers, ...
3D printing tools and slicers
MRI/PET scan tools
Yes, rendering on the GPU is possible with both of the base methods you mention. Games usually use the transformation to boundary-representation 3D geometry. With the rise of shaders, even ray tracers are now possible; here is mine:
simple GLSL voxel ray tracer
It uses the native OpenGL architecture and passes the geometry as a 3D texture. In order to obtain speed you need to add a BVH or similar spatial subdivision of the geometry...
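The core idea can be sketched in a few lines of GLSL: step a per-fragment ray (set up by the vertex shader) through a volume stored in a sampler3D and stop at the first solid voxel. This is a heavily simplified sketch with a fixed step size and no BVH; uVolume and the varyings are illustrative names, not taken from the linked ray tracer:

#version 330 core
uniform sampler3D uVolume;   // voxel data, one texel per voxel (alpha > 0.5 = solid)
in vec3 vRayOrigin;          // ray start in [0,1]^3 texture space
in vec3 vRayDir;             // normalized ray direction
out vec4 fragColor;

void main()
{
    vec3 p = vRayOrigin;
    vec3 delta = vRayDir / 256.0;            // ~one texel per step for a 256^3 volume
    for (int i = 0; i < 512; ++i) {
        vec4 voxel = texture(uVolume, p);
        if (voxel.a > 0.5) {                 // hit a solid voxel
            fragColor = vec4(voxel.rgb, 1.0);
            return;
        }
        p += delta;
        if (any(lessThan(p, vec3(0.0))) || any(greaterThan(p, vec3(1.0))))
            break;                           // ray left the volume
    }
    fragColor = vec4(0.0);                   // background
}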
However, voxel-based tools have been around for quite some time. For example, many isometric games/engines are voxel-based (a tile is a voxel), like this one:
Improving performance of click detection on a staggered column isometric grid
Also, do you remember UFO? It was playable on a 286, and it was also a "voxel/tile"-based isometric game.

Custom rendering with GPU, Direct3D or OpenGL

I have a Windows application that currently renders graphics largely using MFC, which I'd like to change to make better use of the GPU. Most of the graphics are straightforward and could easily be built up into a scene graph, but some could prove very difficult. Specifically, in addition to the normal mesh-type objects, I'm also dealing with point clouds which are liable to contain billions of Cartesian points stored in a very compact manner, and which use quite a lot of custom culling techniques to be displayed in real time (Example). What I'm looking for is a mechanism that does the bulk of the scene rendering to a buffer and then gives me access to that buffer, a z-buffer, and camera parameters, such that I can modify them before putting them out to the display. I'm wondering whether this is possible with Direct3D, OpenGL, or possibly a higher-level framework like OpenSceneGraph, and what would be the best starting point? Given the software is Windows-based, I'd probably prefer to use Direct3D, as this is likely to lead to the fewest driver issues, which I'm eager to avoid. OpenSceneGraph seems to provide custom culling via octrees, which are close but not identical to what I'm using.
Edit: To clarify a bit more, currently I have the following;
A display list / scene in memory which will typically contain up to a few million triangles, lines, and pieces of text, which I cull in software and output to a bitmap using low-performing drawing primitives
A point cloud in memory which may contain billions of points in a highly compressed format (~4.5 bytes per 3D point) which I cull and output to the same bitmap
Cursor information that gets added to the bitmap prior to output
A camera, z-buffer and attribute buffers for navigation and picking purposes
The slow bit is the drawing part of item 1, which I'd like to replace with GPU rendering of some kind. The solution I envisage is to build a scene for the GPU, render it to a bitmap (with matching z-buffer) based on my current camera parameters, and then add my point cloud prior to output.
Alternatively, I could move to a scene-based framework that managed the cameras and navigation for me, and provide points in view as spheres or splats based on volume and level of detail during the rendering loop. In this scenario I'd also need to be able to add cursor information to the view.
In either scenario, the hosting application will be MFC C++ based on VS2017 which would require too much work to change for the purposes of this exercise.
It's hard to say exactly based on your description of a complex problem.
OSG can probably do what you're looking for.
Depending on your timeframe, I'd consider eschewing both OpenGL (OSG) and DirectX in favor of the newer Vulkan 3D API. It's a successor to both D3D and OGL, and is designed by the GPU manufacturers themselves to provide optimal performance exceeding both of its predecessors.
The OSG project is currently developing a Vulkan scenegraph known as VSG, which already demonstrates superior performance to OSG and will have more generalized culling ability.
I've worked a bunch with point clouds and am pretty experienced with them, but I'm not exactly clear on what you're proposing to do.
If you want to actually have a verbal discussion about the matter, I'm pretty easy to find (my company is AlphaPixel -- AlphaPixel.com) and you could call us. I'm in the European time zone right now, it's not clear from your question where you are but you sound US-based.

D3D12 Use backbuffer surface as unordered access view (UAV)

I'm making a simple ray tracer for a school project where a compute shader is supposed to be used to shade a triangle or some other primitive.
For this I'd like to write to a back-buffer surface directly from the compute shader, and then present the results immediately. I know for certain that this is possible in DX11, though I can't seem to get it to work in DX12.
I couldn't gather that much information about this, but I found this gamedev thread discussing the exact same problem I'm trying to figure out, and they seem to come to the conclusion that was my go-to workaround: writing to an intermediate texture and then sampling it in a pipeline.
I can't fully accept that this would be impossible to achieve in DX12. Why would that feature be removed? Could it be that the queuing system removes some overhead that makes this feature unnecessary?
Is there any way to achieve a ray tracer without writing to a separate texture and then sampling it in a pipeline or copying it onto the back buffer? What are my best alternatives for achieving performance?
You will have to accept that as the answer. They removed the capability to create a UAV on the swapchain surface, the same way they removed the capability to use multisample surfaces in the swapchain.
The problem with authorizing UAVs on the swapchain surface is that they would have to forfeit tracking of what is happening to it. DX12 relies on descriptor heaps that are 100% volatile at runtime for UAVs (render-target views are CPU-side only and can be tracked).
Microsoft needs to track the swapchain surface status strictly in order to guarantee correct behavior with desktop presentation, and for that reason they chose to deny the UAV binding.

Translating the OpenGL ES 1.1 fixed-function pipeline to the programmable pipeline on the fly

Is it possible to emulate the complete fixed-function pipeline with shaders on the fly? By on the fly I mean not rewriting the fixed-function code to use shaders, but a sort of intermediate driver which receives fixed-function GLES calls (possibly caching them for a full frame, as there is no direct one-to-one translation from the fixed to the programmable pipeline) and outputs equivalent GLES 2.0 calls.
And even if it is possible, how much work would it really be?
For most of ES 1.1, that looks pretty straightforward. All the typical fixed functionality, like transformations, lights, and materials, translates directly into shader code.
For a complete replacement, you would obviously have to implement all the functionality. From skimming over the ES 1.1 entry points, I spotted a few items that would not directly translate to ES 2.0, where the last of these looks particularly problematic:
Arbitrary clipping planes. These are not available in ES 2.0, but they are not terribly hard to emulate in shaders by calculating a distance in the vertex shader and then discarding the clipped fragments in the fragment shader (see the first sketch after this list).
ES 1.1 has something called "palette textures". From my understanding, it looks somewhat painful to implement in ES 2.0, but possible. You would probably need two textures, one for the indices and one for the palette, with two levels of sampling in the fragment shader (see the second sketch after this list).
ES 1.1 supports logical operations (glLogicOp) as part of the per-fragment operations that are executed after the fragment shader. ES 2.0 does not have this, and I can't think of a good way to replicate it. The only thing that comes to mind is to render, read back the result, do the logical operation on the CPU, and then render the resulting image. And you would have to do that every time the operation changes.
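To make the first two items concrete, here are minimal GLSL ES 2.0 sketches; all the uniform and varying names are illustrative, and a real translation layer would generate many variations of these. First, one user clip plane emulated with a signed distance:

// Vertex shader: compute the signed distance to one eye-space clip plane.
uniform mat4 uModelView;
uniform mat4 uProjection;
uniform vec4 uClipPlane;      // plane equation (a, b, c, d) in eye space
attribute vec4 aPosition;
varying float vClipDist;

void main()
{
    vec4 eyePos = uModelView * aPosition;
    vClipDist = dot(uClipPlane, eyePos);   // > 0 on the kept side
    gl_Position = uProjection * eyePos;
}

// Fragment shader: discard fragments on the clipped side of the plane.
precision mediump float;
varying float vClipDist;

void main()
{
    if (vClipDist < 0.0)
        discard;
    gl_FragColor = vec4(1.0);  // the real shading would go here
}

And second, palette textures emulated with an index texture plus a palette lookup:

// Fragment shader: two-level sampling, index texture then a 256x1 palette.
// The index texture must use NEAREST filtering so indices are not blended.
precision mediump float;
uniform sampler2D uIndexTex;    // palette indices stored in a luminance texture
uniform sampler2D uPaletteTex;  // 256 x 1 texture holding the palette colors
varying vec2 vTexCoord;

void main()
{
    float index = texture2D(uIndexTex, vTexCoord).r;         // index scaled to [0,1]
    gl_FragColor = texture2D(uPaletteTex, vec2(index, 0.5)); // palette lookup
}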

OpenSceneGraph non-uniform terrain support

I would like to add terrain to my project, which uses OSG.
I've read the osgTerrain documentation. As I understand from its interface, it treats data as a uniform height field -- a grid of heights.
I want the terrain to be non-uniform: it would be represented as a triangulation with heights specified at the vertices.
Does osgTerrain support this out of the box? Or should I implement it myself, deriving from Layer? Where can I find extensive docs? Where should I start?
osgTerrain at one point, through the VPB tool, supported irregular triangulated terrain models. There's nothing in OSG itself that prevents you from still doing this. However, I must question your reasons for doing so. Are you looking for performance? The reason OSG uses regular heightfields now is that with modern GPUs they're just as fast as the old indexed triangles. Are you planning on making modifications to the terrain at runtime that require an irregular mesh?
Also, you might consider osgEarth. It is sort of the replacement terrain subsystem for OSG and is much more feature-filled than osgTerrain. It uses quadtrees of regular grids too, though.