Effort Required to make 3D Game Engine? - game-engine

For the sake of theory (and general understanding),
I would like a moderately exhaustive list of all the things that must be done in order to create a "modern" 3D game engine (from a coder's perspective).
I seem to have a hard time finding this information anywhere else, so I think that you folks at Stack Overflow will have the knowledge I seek.
By "moderately exhaustive", I mean things like a general explanation of the design stages of such an engine (such as binary space partitioning), then the actual implementation of such an engine, and the uses of the software (it would be helpful if means of rendering other than BSP could be explained as well).
I don't want to make a 3D engine, but simply to understand the sheer amount of effort required to make one.

Focusing on 3D rendering alone:
Binary space partitioning, like many elements of 3D rendering, is optional. In this case it is an optimization, allowing the computer to do less work to render a scene by cutting out invisible sections.
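To give a feel for the idea (not how any particular engine implements it), here is a minimal C++ sketch of a BSP node and a camera-relative traversal; all names are illustrative:

```cpp
#include <memory>
#include <vector>

struct Plane    { float a, b, c, d; };      // plane equation ax + by + cz + d = 0
struct Triangle { /* three vertices, material id, ... */ };

// Each node splits space with a plane; geometry on the far side of enough
// planes can be skipped, or drawn back-to-front without a z-buffer.
struct BspNode {
    Plane splitter;
    std::vector<Triangle> onPlane;          // triangles lying on the splitter
    std::unique_ptr<BspNode> front, back;   // the two half-spaces
};

// Back-to-front (painter's order) traversal relative to the camera position.
// A culling variant would also test each half-space against the view frustum
// and skip subtrees that cannot possibly be visible.
void Traverse(const BspNode* node, const float cam[3] /*, draw callback */) {
    if (!node) return;
    const Plane& p = node->splitter;
    const float side = p.a * cam[0] + p.b * cam[1] + p.c * cam[2] + p.d;
    const BspNode* nearSide = (side >= 0.0f) ? node->front.get() : node->back.get();
    const BspNode* farSide  = (side >= 0.0f) ? node->back.get()  : node->front.get();
    Traverse(farSide, cam);
    // ... draw node->onPlane here ...
    Traverse(nearSide, cam);
}
```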
At its core, rendering is a five-stage process. First, a list of triangles is generated. Next, the triangles are converted from 3-space to 2-space using matrix multiplication. Then the triangles are filled in with pixels and meta-information. After that, the pixels are shaded individually using the meta-information. Finally, the pixels are drawn to the screen.
Most of those steps are partially or wholly done by a graphics card, meaning the programmer's job is to tell the card which step to perform and provide the input data.
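As a rough illustration of the second step (3-space to 2-space), here is a small C++ sketch that projects a point with an already-built model-view-projection matrix; in a real engine this work happens in a vertex shader on the GPU:

```cpp
#include <array>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<std::array<float, 4>, 4>;   // row-major 4x4 matrix

// Multiply a 4x4 matrix with a homogeneous point.
Vec4 Transform(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            r[row] += m[row][col] * v[col];
    return r;
}

// Project a 3D point to pixel coordinates with a combined
// model-view-projection matrix: transform, perspective divide, viewport map.
void ProjectToScreen(const Mat4& mvp, float x, float y, float z,
                     int screenW, int screenH, float& outPx, float& outPy) {
    const Vec4 clip = Transform(mvp, {x, y, z, 1.0f});
    const float ndcX = clip[0] / clip[3];                    // perspective divide
    const float ndcY = clip[1] / clip[3];
    outPx = (ndcX * 0.5f + 0.5f) * float(screenW);           // [-1,1] -> pixels
    outPy = (1.0f - (ndcY * 0.5f + 0.5f)) * float(screenH);  // flip Y for screen
}
```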
This bare-bones engine is not even close to a modern engine, however. Modern engines are filled with optimizations like binary space partitioning, mesh simplification, background loading and texture compression. They are also filled with special features like shadows, mirrors, mist and particle effects.
Modern engines have to be able to load and interpret textures and meshes, and in some cases, deform and modify both at runtime. The most common example would be interpolating between keyframes.
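For example, the simplest form of keyframe interpolation is a per-vertex linear blend between two stored poses; a sketch, assuming both frames share the same vertex order:

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Linear interpolation between the same vertex in two keyframes;
// t is 0 at keyframe A and 1 at keyframe B.
Vec3 Lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// Blend a whole mesh pose; both keyframes must share vertex count and order.
void BlendKeyframes(const std::vector<Vec3>& frameA,
                    const std::vector<Vec3>& frameB,
                    float t, std::vector<Vec3>& out) {
    out.resize(frameA.size());
    for (std::size_t i = 0; i < frameA.size(); ++i)
        out[i] = Lerp(frameA[i], frameB[i], t);
}
```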
Engines may need to interact with game logic modules in order to reuse data for collision detection. Collision detection is what determines whether bullets hit something, and also what makes walls and floors solid.
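In its simplest form that boils down to cheap tests like an axis-aligned bounding-box overlap (a sketch; real engines add broad-phase structures and exact narrow-phase tests on top):

```cpp
// Axis-aligned bounding box given by its minimum and maximum corners.
struct Aabb { float minX, minY, minZ, maxX, maxY, maxZ; };

// True if the two boxes overlap on every axis. Cheap tests like this decide
// whether a bullet, a player or a wall segment needs a more exact check.
bool Overlaps(const Aabb& a, const Aabb& b) {
    return a.minX <= b.maxX && a.maxX >= b.minX &&
           a.minY <= b.maxY && a.maxY >= b.minY &&
           a.minZ <= b.maxZ && a.maxZ >= b.minZ;
}
```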

Related

Is programming a voxel based graphics API theoretically possible?

This is entirely a theoretical question, because I understand that the time it would take to do such a thing would be ridiculous.
I've been working with "voxels" a lot lately, and the only way I can display them to a user is to either triangulate the visible surfaces or write a CPU ray tracer, but both come with their own problems.
Simply put: if we dismiss the storage space needed for voxel meshes and targeted a very specific GPU, would someone who wanted to create a graphics API like OpenGL, but with "true" voxel primitives that don't need to be converted, be able to make such a thing? Or are GPUs designed specifically for triangles, with no way to introduce a new base primitive?
It's possible, and it has already been done many times:
games like Minecraft, Space Engineers, ...
3D printing tools and slicers
MRI/PET scan tools
Yes, rendering on the GPU is possible with the two base methods you mention. Games usually transform the voxels into boundary-representation 3D geometry. With the rise of shaders, even ray tracers are now possible; here is mine:
simple GLSL voxel ray tracer
It uses the native OpenGL architecture and passes the geometry as a 3D texture. In order to obtain speed you need to add a BVH or similar spatial subdivision of the geometry...
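The linked tracer is GLSL, but the core idea of marching a ray through a voxel grid looks roughly like this on the CPU (an illustrative sketch with a naive fixed step; a real tracer would use a proper 3D DDA and the BVH/octree mentioned above to skip empty space):

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// A dense voxel grid: 0 = empty cell, any other value = material id.
struct VoxelGrid {
    int nx, ny, nz;
    std::vector<std::uint8_t> cells;
    std::uint8_t at(int x, int y, int z) const {
        return cells[(std::size_t(z) * ny + y) * nx + x];
    }
};

// March a ray with a small fixed step until a filled voxel is hit.
bool CastRay(const VoxelGrid& g,
             float ox, float oy, float oz,        // ray origin
             float dx, float dy, float dz,        // normalized ray direction
             float maxDist, int& hitX, int& hitY, int& hitZ) {
    const float step = 0.25f;                     // fraction of a voxel
    for (float t = 0.0f; t < maxDist; t += step) {
        const int x = int(std::floor(ox + dx * t));
        const int y = int(std::floor(oy + dy * t));
        const int z = int(std::floor(oz + dz * t));
        if (x < 0 || y < 0 || z < 0 || x >= g.nx || y >= g.ny || z >= g.nz)
            continue;                             // outside the grid
        if (g.at(x, y, z) != 0) { hitX = x; hitY = y; hitZ = z; return true; }
    }
    return false;
}
```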
However, voxel-based tools have been around for quite some time. For example, many isometric games/engines are voxel based (a tile is a voxel), like this one:
Improving performance of click detection on a staggered column isometric grid
Also, do you remember UFO? It was playable on a 286, and it was also "voxel/tile"-based isometric.

OpenCascade: How to subdivide elongated triangles?

I am using OpenCascade to import STEP/IGES as meshes in my software. It works nicely.
But I need small triangles, and the ones I get are sometimes very large (in flat areas) or very elongated (e.g. when meshing a cylinder). The best would be to split triangle edges longer than some absolute value, while avoiding T-vertices.
I wasn't able to google anything about that... So, currently, I pass the mesh to OpenMesh, apply the OpenMesh::Subdivider::Uniform::LongestEdgeT operator, then pass it back to my software. Tedious and costly when I manage several million triangles...
Questions:
Is there an equivalent in OpenCascade?
Or a simple code snippet to implement my own loop to do so?
Thanks!
The meshing algorithm BRepMesh_IncrementalMesh coming with Open CASCADE Technology is mainly focused on two usage scenarios:
Visualization in the 3D Viewer. Large and elongated triangles do no harm to the presentation, as vertex normals ensure proper smooth shading. Deflection parameters allow managing the presentation quality.
Computing algorithms using triangulation as an approximation (to speed up calculations compared to the same algorithm working on exact geometry). In this case, the deflection parameters determine the target precision of the algorithm. Large and elongated triangles should not cause problems here either, as deviation from the exact geometry is controlled by the meshing parameters (a minimal example of setting them is sketched just below this list).
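For reference, those deflection parameters are supplied when the mesher is run; a minimal sketch using standard OCCT calls (the deflection values here are arbitrary examples):

```cpp
#include <BRepMesh_IncrementalMesh.hxx>
#include <BRep_Tool.hxx>
#include <Poly_Triangulation.hxx>
#include <TopExp_Explorer.hxx>
#include <TopLoc_Location.hxx>
#include <TopoDS.hxx>
#include <TopoDS_Shape.hxx>

// Mesh a shape with explicit linear and angular deflection, then walk the
// per-face triangulations. Smaller deflections give finer triangles, but the
// parameters do not directly control element shape (aspect ratio, skewness).
void MeshShape(const TopoDS_Shape& shape)
{
    const Standard_Real linDeflection = 0.1;   // arbitrary example values
    const Standard_Real angDeflection = 0.3;
    BRepMesh_IncrementalMesh mesher(shape, linDeflection,
                                    Standard_False /*isRelative*/,
                                    angDeflection,
                                    Standard_True  /*isInParallel*/);

    for (TopExp_Explorer ex(shape, TopAbs_FACE); ex.More(); ex.Next())
    {
        TopLoc_Location loc;
        Handle(Poly_Triangulation) tri =
            BRep_Tool::Triangulation(TopoDS::Face(ex.Current()), loc);
        if (tri.IsNull())
            continue;
        // tri->NbNodes(), tri->NbTriangles(), ... feed these into your own code.
    }
}
```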
There are, however, some categories of algorithms where the shape of the mesh elements is very important. Solvers (numerical simulation) are one such category, where unfortunate mesh elements may cause numerical instability or other issues. What exactly matters / causes issues depends on the specific algorithm - this may include element skewness, element aspect ratio, element size and the element grid. Some solvers work much better on quads than on triangles.
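For concreteness, one common way to quantify the elongation the question complains about is the ratio of a triangle's longest edge to the altitude onto that edge (a sketch; other aspect-ratio definitions exist and specific solvers may use their own):

```cpp
#include <algorithm>
#include <cmath>

struct Pnt { double x, y, z; };

static double Dist(const Pnt& a, const Pnt& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Aspect ratio = longest edge / altitude onto that edge.
// Roughly 1.15 for an equilateral triangle, very large for the thin slivers
// produced when meshing a cylinder.
double AspectRatio(const Pnt& a, const Pnt& b, const Pnt& c) {
    const double ab = Dist(a, b), bc = Dist(b, c), ca = Dist(c, a);
    const double longest = std::max({ab, bc, ca});
    const double s = 0.5 * (ab + bc + ca);                        // semi-perimeter
    const double area = std::sqrt(std::max(0.0, s * (s - ab) * (s - bc) * (s - ca)));
    const double altitude = 2.0 * area / longest;                 // smallest altitude
    return altitude > 0.0 ? longest / altitude : 1e300;           // degenerate guard
}
```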
If you take a look at the meshing result of the BRepMesh_IncrementalMesh algorithm, you may notice that not only are there large elongated triangles, the entire mesh structure is somewhat suboptimal for solver algorithms.
There are several options you may consider:
Triangulation refinement algorithm. Such an algorithm processes an existing triangulation and tries to heal properties like skewness. This is what OpenMesh from your question does, I suppose. Such a postprocessing algorithm might give satisfactory results at good performance, but the final result will depend dramatically on the properties of the original meshing algorithm. For the moment, OCCT doesn't have any refinement tool, although it is possible to write such an algorithm on your own (I cannot give you a small code snippet, because such an algorithm is not as small and trivial as it may look at first glance).
Consider an alternative meshing algorithm. A probably incomplete list:
Express Mesh by Open Cascade (hence working directly on OCCT shapes). This tool generates triangulations with a nice grid-like structure (for smooth surfaces), a configurable element size and a quad-dominant generation option. It is a commercial product, though.
Netgen mesher. This open-source tool provides bindings to OCCT, and although it is focused on 3D tetrahedral mesh generation, it may also be used for generating a plain triangular surface mesh. I cannot say anything good about this tool - it was rather slow and unstable when I saw it at work many years ago.
MeshGems surface meshing. Another commercial tool providing an interface to OCCT. I have never worked with this product, so I cannot share any opinion on it.
Consider improving BRepMesh_IncrementalMesh. As OCCT is an open-source framework, you may consider extending its meshing algorithm with more parameters and contributing to the project.

Custom rendering with GPU, Direct3D or OpenGL

I have a Windows application that currently renders graphics largely using MFC that I'd like to change to make better use of the GPU. Most of the graphics are straightforward and could easily be built up into a scene graph, but some of the graphics could prove very difficult. Specifically, in addition to the normal mesh-type objects, I'm also dealing with point clouds which are liable to contain billions of Cartesian points stored in a very compact manner, and which use quite a lot of custom culling techniques to be displayed in real time (Example).
What I'm looking for is a mechanism that does the bulk of the scene rendering to a buffer and then gives me access to that buffer, a z-buffer, and the camera parameters, such that I can modify them before putting them out to the display. I'm wondering whether this is possible with Direct3D, OpenGL, or possibly a higher-level framework like OpenSceneGraph, and what would be the best starting point. Given the software is Windows based, I'd probably prefer to use Direct3D as this is likely to lead to the fewest driver issues, which I'm eager to avoid. OpenSceneGraph seems to provide custom culling via octrees, which are close but not identical to what I'm using.
Edit: To clarify a bit more, currently I have the following;
A display list / scene in memory which will typically contain up to a few million triangles, lines, and pieces of text, which I cull in software and output to a bitmap using low-performing drawing primitives
A point cloud in memory which may contain billions of points in a highly compressed format (~4.5 bytes per 3d point) which I cull and output to the same bitmap
Cursor information that gets added to the bitmap prior to output
A camera, z-buffer and attribute buffers for navigation and picking purposes
The slow bit is the highlighted part of item 1, which I'd like to replace with GPU rendering of some kind. The solution I envisage is to build a scene for the GPU, render it to a bitmap (with a matching z-buffer) based on my current camera parameters, and then add my point cloud prior to output.
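For illustration, a rough OpenGL sketch of the kind of access I have in mind - render the scene offscreen, then read the colour and depth buffers back so I can composite the point cloud and cursor myself (Direct3D offers equivalent render-target and readback mechanisms; the setup here is simplified):

```cpp
// Assumes an OpenGL 3.x context and a function loader (GLEW/GLAD) are set up.
#include <GL/glew.h>
#include <cstddef>
#include <vector>

struct OffscreenTarget {
    GLuint fbo = 0, colorTex = 0, depthTex = 0;
    int width = 0, height = 0;
};

OffscreenTarget CreateTarget(int w, int h) {
    OffscreenTarget t; t.width = w; t.height = h;
    glGenTextures(1, &t.colorTex);
    glBindTexture(GL_TEXTURE_2D, t.colorTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glGenTextures(1, &t.depthTex);
    glBindTexture(GL_TEXTURE_2D, t.depthTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, w, h, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    glGenFramebuffers(1, &t.fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, t.fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, t.colorTex, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,  GL_TEXTURE_2D, t.depthTex, 0);
    // Production code should check glCheckFramebufferStatus here.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return t;
}

// After drawing the triangle/line/text scene into t.fbo with the current
// camera, read both buffers back for CPU-side compositing of the point cloud
// and cursor before presenting the final bitmap.
void ReadBack(const OffscreenTarget& t,
              std::vector<unsigned char>& rgba, std::vector<float>& depth) {
    rgba.resize(std::size_t(t.width) * t.height * 4);
    depth.resize(std::size_t(t.width) * t.height);
    glBindFramebuffer(GL_FRAMEBUFFER, t.fbo);
    glReadPixels(0, 0, t.width, t.height, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
    glReadPixels(0, 0, t.width, t.height, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```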
Alternatively, I could move to a scene-based framework that managed the cameras and navigation for me, and provide the points in view as spheres or splats based on volume and level of detail during the rendering loop. In this scenario I'd also need to be able to add cursor information to the view.
In either scenario, the hosting application will be MFC/C++ based on VS2017, which would require too much work to change for the purposes of this exercise.
It's hard to say exactly based on your description of a complex problem.
OSG can probably do what you're looking for.
Depending on your timeframe, I'd consider eschewing both OpenGL (OSG) and DirectX in favor of the newer Vulkan 3D API. It's the successor to OpenGL (and a peer of Direct3D 12), designed by the GPU manufacturers themselves to provide performance exceeding the older APIs.
The OSG project is currently developing a Vulkan scene graph known as VSG, which already demonstrates superior performance to OSG and will have more generalized culling abilities.
I've worked a bunch with point clouds and am pretty experienced with them, but I'm not exactly clear on what you're proposing to do.
If you want to actually have a verbal discussion about the matter, I'm pretty easy to find (my company is AlphaPixel -- AlphaPixel.com) and you could call us. I'm in the European time zone right now, it's not clear from your question where you are but you sound US-based.

Basics of face Sculpting in Blender

I mean, the basics...
1) I have seen in online videos that they model a character (or anything) from one object only, extruding, loop cutting, scaling, etc. to build up the character. Why don't they design the different parts separately (like the hands separately, the legs separately, the body separately) and then join them together to make one object?
2) What does the texturing department have to look for so that they don't send the model back to the modelling department? I mean things like the meshes (polygons) over the model's face having to be quads, not triangles, while modelling a character.
What kind of basics should I know? Is there any checklist or set of basics I should go through before modelling a character?
Please correct me if I am wrong, and please answer both of my questions. Thanks!
It may be common, but it definitely isn't mandatory to have a model as one solid mesh. Some models will have the parts of the body underneath clothing removed to reduce the poly count. How the model is to be used will be a big factor in how you model it: for a single image it is easy to get away with multiple parts, while a character that will be animated in a cartoony animation could be stretched and distorted in ways that could show holes in a model made of multiple pieces. When working in a team, there may be rules in place determining whether a solid or multi-part model is considered acceptable.
An example of an animated model made from multiple parts is Sintel, the main character in the Sintel short animation.
There is nothing stopping you from making a library of separate body parts and joining them together when you make your model. Be aware that this can bring complications: if you model an arm with 12 verts and then make your hand with 15, you have to fiddle around to merge them together.
You will also find some extra freedom to work with multiple body parts during the sculpting phase as you are creating a high density mesh that is used as a template to model a clean mesh over. This step is called retopology.
It is more likely that the rigging department will send a model back for fixing than the texturing department. When adding a rig and deforming the mesh in different ways, any parts that deform badly will be revealed and need fixing.
[...] (like the hands separately, the legs separately, the body separately) and then join them together to make one object [...]
Some modelers I know do precisely this and they do it in a way where they block in the design using broad primitive shapes, start slicing some edge loops and add broad details, then merge everything together, then sculpt it a bit further with high-res sculpting tools, and finally retopologize everything.
The main modelers I know who do this, however, model in a way that tries to adhere as close as possible to the concept artist's illustration. They're not creating their own models from scratch but are instead given top/front/back/side illustrations of a character, for example, and are just trying to match it as closely as possible.
When you start modeling everything in small pieces, it helps to have that concept illustration since you can get lost in the topology otherwise and fusing organic meshes together can be difficult to do in a clean way.
[...] why don't they design different objects separately? [...]
Again they sometimes do, but one of the appeals of creating organic meshes by keeping it seamless the entire time is that you can start to focus on how edge loops propagate across the entire model. It helps to know that the base of a finger is a hexagon, for example, in figuring out how to cleanly propagate and terminate the edge loops for a hand, and likewise have a strategy for the hand to cleanly propagate and terminate edge loops as it joins into the forearm.
It can be hard to get the topology to match up cleanly if you designed everything in small pieces and then had to figure out how to merge it all together. Polygonal modeling is very topology-oriented. It tends to require as much thinking about the wireframe and edge flows as it does the shape of the model, since it needs to be a certain way for everything to subdivide cleanly and smoothly and animate predictably with subdivision surfaces.
I used to work with developers who took one glance at the topology-dominated workflow of polygonal modeling and immediately wanted to jump to seeking alternatives, like voxel sculpting. With voxels you could potentially model everything in pieces and fuse it all together in a nice and smooth organic way without thinking about topology whatsoever.
However, that loses sight of the key appeal of polygonal meshes. Their wire flow forms a control lattice with a very finite number of control points for the artist to animate and move around to predictably control the shape of their model. You immediately lose that with a voxel representation -- so while voxels free the artist from thinking about how the topology works and how the wireframe flows through the model, they also lose all the control benefits that come with it. So if people use voxel sculpting, they often end up meticulously retopologizing everything at the end anyway, to gain back that level of coarse and predictable control they had with polygonal meshes.
I mean things like the meshes (polygons) over the model's face having to be quads, not triangles, while modelling a character.
This is all in the context of subdivision surfaces, the most popular of which are variants of Catmull-Clark. That scheme favors quads to get the most predictable subdivision. It's much easier for the artist to predict how everything will look and deform if they favor, as much as possible, uniform grids of quadrangles wrapped around their model, with 4-valence vertices and every polygon having 4 points. Then, only in the cases where they need to "join" these quad grids together, they might create some funky topology: a 5-valence vertex here, a 3-valence vertex there, a 5-sided polygon here, a triangle there -- but those cases tend to deform a bit unpredictably (or at least unintuitively), so artists try to avoid them as much as possible.
Because when artists model polygonal meshes in this way, they are not just trying to create a statue with a nice shape. If that were all they wanted, they'd save themselves a lot of grief by avoiding dealing with things in terms of individual vertices/edges/polygons in the first place and using something like Sculptris. Instead they are designing not only shapes but also a control lattice, a wire flow and a set of control points they can easily move around in the future to get predictable behavior out of their control cage. They're basically designing controls, almost an "interactive GUI/rig" for themselves, with how they design the topology.
2) What does the texturing department have to look for so that they don't send the model back to the modelling department?
Generally, how a mesh is modeled in a direct sense shouldn't affect the texturing department's work much at all if they're working with UV maps and painting textures over them (at that point it doesn't really matter whether a model has clean wire flows or not, since all the texture artists do is paint images over the 2D UV map or directly onto the 3D model).
However, if the modeler does the UV mapping, then regardless of whether he uses quad meshes and clean wire flows or not, if the UV mapping is poor, then the resulting texture images will look all distorted. So the UV maps need to be made well with minimal distortion, though that's usually easy to do automatically these days.
The other exception is if the department doesn't use UV maps and instead uses, say, PTex from Disney. PTex really favors quads. In the original paper at least, it only worked with quads.

Non power of two textures and memory consumption optimization

I read somewhere that the XNA framework upscales a texture to the nearest power-of-two size and then sends that to VRAM which, provided that's how it really works, might not be efficient when loading many small (in my case 150×150) textures, which essentially waste memory with the unused texture data resulting from upscaling.
So is there some automatic optimization, or should I make my own implementation of it: loading all textures, figuring out where the "upscaled" space is big enough to hold some other texture and placing it there, remembering the sprite positions, thus using one texture instead of two (or more)?
It isn't always handy to do this manually for each texture (placing many small sprites in a single texture), because it's harder to work with later (essentially it becomes less human-oriented), and a sprite won't always be needed in a given level of the game, so it would be better if the sprites could be grouped into different compositions - so it should be done automatically.
There are tools available to create what are known as "sprite sheets" or "texture atlases". This XNA sample does this for you as part of a content pipeline extension.
Note that the padding of textures only happens on devices that do not support non-power-of-two textures - Windows Phone, for example. Modern GPUs won't waste the RAM. However, this is still a useful optimisation because it allows you to merge sprites into batches (see this answer for details).
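To put numbers on the original concern: on a device that does pad to powers of two, a 150×150 sprite ends up stored as 256×256. A quick back-of-the-envelope sketch (assuming 4 bytes per pixel, uncompressed):

```cpp
#include <cstdio>

// Round up to the next power of two (illustration only).
unsigned NextPow2(unsigned v) {
    unsigned p = 1;
    while (p < v) p <<= 1;
    return p;
}

int main() {
    const unsigned w = 150, h = 150, bytesPerPixel = 4;
    const unsigned pw = NextPow2(w), ph = NextPow2(h);       // 256 x 256
    const unsigned used   = w * h * bytesPerPixel;           //  90,000 bytes
    const unsigned padded = pw * ph * bytesPerPixel;         // 262,144 bytes
    std::printf("padded to %ux%u: %u bytes, %.0f%% wasted\n",
                pw, ph, padded, 100.0 * (padded - used) / padded);
    // Packing many such sprites into one large atlas avoids most of that
    // waste and also lets the sprites be drawn in a single batch.
    return 0;
}
```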