What are 'physically-based' lighting/rendering/materials?

What's the difference between physically based lighting, physically based rendering, and physically based materials? How are they different from deferred and forward rendering? Is physically based rendering used in films while deferred rendering is used in games?

It would appear that "physically-based rendering" (PBR), "physically-based materials", and "physically-based shading" are different names for the same general concept: creating materials and shading that look more "real", i.e. physically plausible, when rendered.
PBR can be used in both movies (offline rendering systems, Pixar has a paper) and games (realtime: Unity and Unreal have docs). It can also be done in WebGL, check out Marmoset's PBR-enabled WebGL Viewer, find the picture of the camera lens, and try rotating that around. Another WebGL example can be found in OpenSceneGraphJS's PBR Demo.
Different PBR implementations have different parameters, but most center around a concept of specifying how "metallic" and how "rough" a particular material is, and allowing the rendering engine to make sure that the material doesn't reflect light in an unrealistic way, or reflect more light than it receives, etc. For example:
This example is a composite of screenshots from the OSGJS demo above, and shows a single base color with varying degrees of metallic and roughness. The idea is that the artist can vary these parameters per-material and even per-texel, and not wind up with something that appears to violate the laws of physics when it comes to lighting. This is considered better than more traditional rendering techniques, where you can create a specular highlight that overexposes objects even in low light conditions.
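To make the metallic/roughness idea concrete, here is a much-simplified, hedged sketch in plain JavaScript (not any particular engine's code; all names and constants are illustrative) of how those two parameters typically feed a physically-based shading model: metallic blends the specular reflectance F0 between a dielectric constant (about 0.04) and the base color while removing the diffuse term, and roughness widens or tightens a GGX-style specular lobe.

    // Simplified sketch of the usual metallic/roughness parameterization.
    // Not production shader code; constants and helper names are illustrative.
    function mix(a, b, t) { return a + (b - a) * t; }

    // Metallic controls *what* the surface reflects:
    //  - dielectrics (metallic = 0) keep their base color as diffuse and reflect
    //    roughly 4% of light specularly (F0 ~= 0.04),
    //  - metals (metallic = 1) have no diffuse term and tint the specular
    //    reflection with their base color.
    function surfaceResponse(baseColor, metallic) {
      return {
        f0: {
          r: mix(0.04, baseColor.r, metallic),
          g: mix(0.04, baseColor.g, metallic),
          b: mix(0.04, baseColor.b, metallic),
        },
        diffuse: {
          r: baseColor.r * (1 - metallic),
          g: baseColor.g * (1 - metallic),
          b: baseColor.b * (1 - metallic),
        },
      };
    }

    // Roughness controls *how tightly* the specular lobe is focused.
    // GGX / Trowbridge-Reitz is the distribution many PBR implementations use;
    // nDotH is the cosine between the surface normal and the half vector.
    function ggxDistribution(nDotH, roughness) {
      const a = roughness * roughness;   // common "alpha = roughness^2" remapping
      const a2 = a * a;
      const d = nDotH * nDotH * (a2 - 1) + 1;
      return a2 / (Math.PI * d * d);
    }

A full BRDF also needs Fresnel and geometry/shadowing terms; the point is simply that both sliders plug into a model built to stay physically plausible, which is exactly what the screenshots above vary.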
Disclaimer: I'm still learning about PBR myself. I've been doing a lot of reading up on it lately in hopes of being able to contribute to the current effort underway to add PBR to the glTF 3D model spec. Khronos (the standards organization behind OpenGL and WebGL) is creating an open standard 3D model delivery format that will eventually support PBR and can convey PBR materials to a wide variety of rendering engines on the web, mobile, and desktop. Currently, though, the PBR portion of this is still a draft extension (as of this writing) and will remain under development for some time to come.
I'm not a deferred/forward rendering expert; perhaps someone else can chime in with the differences there. In the meantime, I'd suggest reading more from Marmoset about the theory of PBR and PBR in practice. Good luck.

Related

Custom rendering with GPU, Direct3D or OpenGL

I have a Windows application that currently renders graphics largely using MFC that I'd like to change to make better use of the GPU. Most of the graphics are straightforward and could easily be built up into a scene graph, but some of the graphics could prove very difficult. Specifically, in addition to the normal mesh-type objects, I'm also dealing with point clouds which are liable to contain billions of Cartesian points stored in a very compact manner, and which need quite a lot of custom culling techniques to be displayed in real time (Example). What I'm looking for is a mechanism that does the bulk of the scene rendering to a buffer and then gives me access to that buffer, a z-buffer, and camera parameters such that I can modify them before putting them out to the display. I'm wondering whether this is possible with Direct3D, OpenGL, or possibly a higher-level framework like OpenSceneGraph, and what would be the best starting point? Given the software is Windows based, I'd probably prefer to use Direct3D as this is likely to lead to the fewest driver issues, which I'm eager to avoid. OpenSceneGraph seems to provide custom culling via octrees, which are close but not identical to what I'm using.
Edit: To clarify a bit more, currently I have the following;
A display list / scene in memory which will typically contain up to a few million triangles, lines, and pieces of text, which I cull in software and output to a bitmap using low-performing drawing primitives
A point cloud in memory which may contain billions of points in a highly compressed format (~4.5 bytes per 3d point) which I cull and output to the same bitmap
Cursor information that gets added to the bitmap prior to output
A camera, z-buffer and attribute buffers for navigation and picking purposes
The slow bit is the software culling and primitive drawing in item 1, which I'd like to replace with GPU rendering of some kind. The solution I envisage is to build a scene for the GPU, render it to a bitmap (with matching z-buffer) based on my current camera parameters, and then add my point cloud prior to output (see the sketch below for the general shape of this).
Alternatively, I could move to a scene-based framework that managed the cameras and navigation for me, and provide points in view as spheres or splats based on volume and level of detail during the rendering loop. In this scenario I'd also need to be able to add cursor information to the view.
In either scenario, the hosting application will be MFC C++ based on VS2017 which would require too much work to change for the purposes of this exercise.
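As a rough, hedged illustration of the "render to a buffer plus matching z-buffer, then composite" idea, here is what it looks like in WebGL2 terms (OpenGL-family rather than Direct3D; drawScene and drawPointCloudComposite are hypothetical placeholders for your own rendering passes):

    // Sketch only: render the triangle scene into an offscreen framebuffer with a
    // colour texture and a depth texture, so both can be read afterwards when
    // compositing the point cloud and cursor overlay.
    const canvas = document.querySelector('canvas');   // assumed to exist on the page
    const gl = canvas.getContext('webgl2');
    const width = canvas.width, height = canvas.height;

    const colorTex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, colorTex);
    gl.texStorage2D(gl.TEXTURE_2D, 1, gl.RGBA8, width, height);

    const depthTex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, depthTex);
    gl.texStorage2D(gl.TEXTURE_2D, 1, gl.DEPTH_COMPONENT24, width, height);

    const fbo = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, colorTex, 0);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, depthTex, 0);

    // 1. Render the mesh scene into the offscreen target.
    gl.viewport(0, 0, width, height);
    gl.enable(gl.DEPTH_TEST);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    drawScene(gl);                                     // hypothetical: your scene pass

    // 2. Back to the default framebuffer: sample colorTex/depthTex in a full-screen
    //    pass and depth-test the point-cloud splats against depthTex.
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    drawPointCloudComposite(gl, colorTex, depthTex);   // hypothetical: your composite pass

The same structure exists in Direct3D as well (a render-target texture plus a readable depth texture for the composite pass).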
It's hard to say exactly based on your description of a complex problem.
OSG can probably do what you're looking for.
Depending on your timeframe, I'd consider eschewing both OpenGL (OSG) and DirectX in favor of the newer Vulkan 3D API. It's a successor to both D3D and OGL, and is designed by the GPU manufacturers themselves to provide optimal performance exceeding both of its predecessors.
The OSG project is currently developing a Vulkan scenegraph known as VSG, which already demonstrates superior performance to OSG and will have more generalized culling ability.
I've worked a bunch with point clouds and am pretty experienced with them, but I'm not exactly clear on what you're proposing to do.
If you want to actually have a verbal discussion about the matter, I'm pretty easy to find (my company is AlphaPixel -- AlphaPixel.com) and you could call us. I'm in the European time zone right now; it's not clear from your question where you are, but you sound US-based.

What frameworks for depth cameras are out there?

I want to evaluate the performance of several SDKs / frameworks for depth cameras. These cameras can either be using Time-of-Flight or structured light.
The framework should be capable (at least) of person tracking / blob detection and gesture recognition.
So far I found the following frameworks:
OpenNI (structured light only)
Microsoft Kinect SDK (Kinect only)
Beckon SDK by Omek Interactive (ToF and structured light)
iisu by SoftKinetic (ToF and structured light)
Are there any other frameworks I should be aware of?
EDIT: I found this article by Techradar that seems to indicate that these are indeed the only options currently available.
Any feedback would be very much appreciated!
I have found some interesting links on this. You can take MIT's approach using CoDAC. They list lots of facts in this post; the most important ones I will quote here.
9. What are limitations of this technique?
The main limitation of our framework is inapplicability to scenes with curvilinear objects, which would require extensions of the current mathematical model. Another limitation is that a periodic light source creates a wrap-around error as it does in other TOF devices. For scenes in which surfaces have high reflectance or texture variations, availability of a traditional 2D image prior to our data acquisition allows for improved depth map reconstruction as discussed in our paper.
10. What are advantages of this technique/device and how does it compare with existing TOF-based range sensing techniques?
In laser scanning, spatial resolution is limited by the scanning time. TOF cameras do not provide high spatial resolution because they rely on a low-resolution 2D pixel array of range-sensing pixels. CoDAC is a single-sensor, high spatial resolution depth camera which works by exploiting the sparsity of natural scene structure.
11. What is the range resolution and spatial resolution of the CoDAC system?
We have demonstrated sub-centimeter range resolution in our experiments. This is significantly better than the fundamental limit of about 10 cm that would arise from using a detector with 0.7 nanosecond rise time if we were not using parametric signal modeling. The improvement in range resolution comes from the parametric modeling and deconvolution in our framework. We refer the reader to our publications for complete details and analysis. We have demonstrated 64-by-64 pixel spatial resolution, as this is the spatial resolution of our spatial light modulator. Spatially patterning with a digital micromirror device (DMD) will enable much higher spatial resolution. Our experiments use only 205 projection patterns, which correspond to just 5% of the number of pixels in the reconstructed depth map. This is a significant improvement over raster scanning in LIDAR, and it is obtained without the 2D sensor array used in TOF cameras.
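As a quick sanity check of the figures quoted above (my own arithmetic, not from the FAQ): a 64-by-64 depth map has 4096 pixels, and 205 patterns is indeed about 5% of that.

    // Checking the quoted numbers: 205 patterns vs. a 64x64 reconstructed depth map.
    const pixels = 64 * 64;                                      // 4096
    const patterns = 205;
    console.log(((patterns / pixels) * 100).toFixed(1) + '%');   // "5.0%"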
Another interesting project I found on YouTube uses libfreenect and libusb.
There is also dSensingNI, which is described as:
This work presents an approach to overcome the disadvantages of existing interaction frameworks and technologies for touch detection and object interaction. The robust and easy to use framework dSensingNI (Depth Sensing Natural Interaction) is described, which supports multitouch and tangible interaction with arbitrary objects. It uses images from a depth-sensing camera and provides tracking of users' fingers or palms of hands and combines this with object interaction, such as grasping, grouping and stacking, which can be used for advanced interaction techniques.
So you have hit most of the ones out there, especially those that use Kinect, but there are a few other options too. Hope this helps!

Description-File for physical setup of Multi-Monitors

I need machine-readable descriptions for Multi-Monitor and VR Setups, like simple dual-screen computers, Powerwalls, and Caves. This description must include the sizes and placements of all outputs (displays or projections) in the physical space.
The far goal is to combine User-(Head)-tracking, device tracking for mobile devices, etc. with multi-display environments.
The simplest issue is to be aware of the gap between the screens of a multi-monitor setup because of the borders of the display cases.
The most complex setup would probably be caves with polygonal or curved projection surfaces.
My impression is that every piece of VR software out there defines its own setup-config-crackpot-text-file-format. Is there a common standard or common practice I am missing?
There are no common standards in VR (yet), especially of the type you're interested in, but you might want to check out vrui.
The author of that project understands the need for middleware that would do what you want to do: http://doc-ok.org/?p=123. He also has a great article arguing that the standard camera model could be changed for VR with great benefits, in a way similar to what you seem to ask for in your question: http://doc-ok.org/?p=27
Maybe, once VR gets some popularity and traction thanks to Oculus and all, a need for standardisation will arise - there already is one for HMDs, check out the OSVR project. But I don't see it as very probable - CAVEs and Powerwall setups won't be so widespread due to the costs involved and the space required. Using HMDs will probably be a lot cheaper and more portable/handy.
EDIT: I also found this - http://www.middlevr.com/
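For what it's worth, here is a purely hypothetical sketch (not an existing standard, and not any tool's actual format) of the kind of machine-readable description the question asks for: each output carries its pixel resolution plus its physical size and placement in one shared coordinate frame, so bezel gaps between monitors fall out of the numbers automatically.

    // Hypothetical dual-monitor description; every name and value here is invented.
    const setup = {
      units: "millimeters",
      outputs: [
        {
          id: "left-monitor",
          resolution: [2560, 1440],
          physicalSize: [597, 336],          // visible screen area, not the case
          origin: [-603, 0, 0],              // lower-left corner in the shared frame
          normal: [0, 0, 1],                 // facing the viewer
          up: [0, 1, 0],
        },
        {
          id: "right-monitor",
          resolution: [2560, 1440],
          physicalSize: [597, 336],
          origin: [6, 0, 0],                 // leaves a 12 mm bezel gap to the left screen
          normal: [0, 0, 1],
          up: [0, 1, 0],
        },
      ],
      // Head/device tracking would report positions in the same frame.
      trackingFrame: "same as outputs",
    };

A CAVE or curved Powerwall would replace the flat rectangles with polygonal or parameterised surfaces, which is where such a format gets genuinely hard.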

How to make a 2D Soft-body physics engine?

The definition of a rigid body in Box2D is:
A chunk of matter that is so strong that the distance between any two bits of matter on the chunk is completely constant.
And this is exactly what I don't want, as I would like to make 2D (maybe 3D eventually) elastic, deformable, breakable, and even sticky bodies.
What I'm hoping to get out of this community are resources that teach me the math behind how objects bend, break and interact. I don't care about the molecular or chemical properties of these objects, and often this is all I find when I try to search for how to calculate what a piece of wood, metal, rubber, goo, liquid, organic material, etc. might look like after a force is applied to it.
Also, I'm a very visual person, so diagrams and such are EXTREMELY HELPFUL for me.
================================================================================
Ignore these questions, they're old, and I'm only keeping them here for contextual purposes
1. Are there any simple 2D soft-body physics engines out there like this? Preferably free or open source?
2. If not, would it be possible to make my own without spending years on it?
3. Could I use existing engines like Bullet and Box2D as a start and simply transform their code, or would this just lead to more problems later, considering my one year of programming experience and Bullet being 3D?
4. Finally, if I were to transform another library, would it be best to change Box2D's already-2D code, Bullet's already-soft-body code, or to mix both source codes?
Thanks!
(1) Both Bullet and PhysX have support for deformable objects in some capacity. Bullet is open source and PhysX is free to use. They both have ports for Windows, Mac, Linux and all the major consoles.
(2) You could hack something together if you really know what you are doing, and it might even work. However, there will probably be bugs unless you have a damn good understanding of how Box2D's sequential impulse constraint solver works and what types of measures are going to be necessary to keep your system stable. That said, there are many ways to get deformable objects working with minimal fuss within a game-like environment. The first option is to take a second (or higher) order approximation of the deformation. This lets you deal with deformations in much the same way as you deal with rigid motions, only now you have a few extra degrees of freedom. See for example the following paper:
http://www.matthiasmueller.info/publications/MeshlessDeformations_SIG05.pdf
A second method is pressure soft bodies, which basically model the body as a set of particles with some distance constraints and pressure forces (a rough sketch of that idea follows after the links below). This is what both PhysX and Bullet do, and it is a pretty standard technique by now (for example, Gish used it):
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.4.2828&rep=rep1&type=pdf
If you google around, you can find lots of tutorials on implementing it, but I can't vouch for their quality. Finally, there has been a more recent push toward doing deformable objects the 'right' way, using realistic elastic models and finite element type approaches. This is still an area of active research, so it is not for the faint of heart. For example, you could look at any number of the papers in this year's SIGGRAPH proceedings:
http://kesen.realtimerendering.com/sig2011.html
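Here is the promised sketch of the particle-plus-springs-plus-pressure idea, in plain JavaScript with made-up constants; it only shows the structure of the technique, not Bullet's or PhysX's actual implementation:

    // A 2D deformable ball as a ring of particles: neighbouring particles are tied
    // together by distance springs, and an internal pressure term pushes the outline
    // outward whenever the enclosed area shrinks. All constants are invented.
    const N = 16, RADIUS = 1.0, K_SPRING = 40, K_PRESSURE = 60, DT = 1 / 60;

    const particles = [];
    for (let i = 0; i < N; i++) {
      const a = (2 * Math.PI * i) / N;
      particles.push({ x: Math.cos(a) * RADIUS, y: Math.sin(a) * RADIUS, vx: 0, vy: 0 });
    }
    const restLen = 2 * RADIUS * Math.sin(Math.PI / N);   // rest length of each edge spring
    const restArea = Math.PI * RADIUS * RADIUS;

    function polygonArea(ps) {                            // shoelace formula
      let area = 0;
      for (let i = 0; i < ps.length; i++) {
        const p = ps[i], q = ps[(i + 1) % ps.length];
        area += p.x * q.y - q.x * p.y;
      }
      return area / 2;
    }

    function step() {
      const forces = particles.map(() => ({ fx: 0, fy: -9.8 }));  // gravity (unit mass)

      // Distance springs between neighbouring particles hold the outline together.
      for (let i = 0; i < N; i++) {
        const j = (i + 1) % N;
        const p = particles[i], q = particles[j];
        const dx = q.x - p.x, dy = q.y - p.y;
        const len = Math.hypot(dx, dy);
        const f = K_SPRING * (len - restLen);             // Hooke's law
        forces[i].fx += (f * dx) / len;  forces[i].fy += (f * dy) / len;
        forces[j].fx -= (f * dx) / len;  forces[j].fy -= (f * dy) / len;
      }

      // Pressure: if the enclosed area drops below its rest value, push each edge
      // outward along its (un-normalized) normal, which also scales the force by
      // the edge length.
      const pressure = K_PRESSURE * (restArea - polygonArea(particles));
      for (let i = 0; i < N; i++) {
        const j = (i + 1) % N;
        const p = particles[i], q = particles[j];
        const nx = q.y - p.y, ny = -(q.x - p.x);          // outward normal (CCW winding)
        forces[i].fx += 0.5 * pressure * nx;  forces[i].fy += 0.5 * pressure * ny;
        forces[j].fx += 0.5 * pressure * nx;  forces[j].fy += 0.5 * pressure * ny;
      }

      // Semi-implicit Euler integration.
      for (let i = 0; i < N; i++) {
        particles[i].vx += forces[i].fx * DT;  particles[i].vy += forces[i].fy * DT;
        particles[i].x += particles[i].vx * DT;  particles[i].y += particles[i].vy * DT;
      }
    }

Collisions, damping, and breaking are left out; in a real engine those, plus solver stability, are where most of the work goes, which is the point made above.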
(3) Probably not, though certain 2D-style games (for example, top-down games) can work with a 3D physics engine for special effects.
(4) Based on what I just said, you should probably know the answer by now. If you are the adventurous sort and got some time to kill and the will to learn, then I say go for it! Of course it will be hard at first, but like anything it gets easier over time. Plus, learning new stuff is lots of fun!
On the other hand, if you just want results now, then don't do it. It will take a lot of time, and you will probably fail (a lot). If you just want to make games, then stick to the existing libraries and build on whatever abstractions they provide you.
Quick and partial answer:
Rigid bodies are easy to model due to their properties (you can use physics tools, like the French "torseur" (linked on the French Wikipedia; the English equivalent points to screw theory), to model the forces applying at any point of your element).
In comparison, non-rigid elements range from almost solid (think very hard rubber: it can deform but is almost solid) to almost liquid (think very soft rubber, or latex). This means the dynamic properties of such objects are much more complex and depend on the nature of the object.
Take the example of a spring: it's easy to model on its own (F = k·x), but creating a generic tool able to model even that specific case is a nightmare (especially if you think of the corner cases: extension is not infinite, compression reaches a lower limit, the material is non-linear...).
As far as I know, when dealing with "elastic" materials, people do their own modelling for their own purpose (a generic one does not exist).
Now the answers:
Probably not, not that I know of at least.
Not easily, see above for why.
Unless you have a strong background in elastic materials, I fear it's going to be painful.
Hope this helped.
Some specific cases such as deformable balls can be simulated pretty well using spring-joint bodies:
Here is an implementation example with cocos2d: http://2sa-studio.blogspot.com/2014/05/soft-bodies-with-cocos2d-v3.html
Depending on the complexity of the deformable objects that you need, you might be able to emulate them using Box2D, constraining rigid bodies with joints or springs. I did this in the past using a Box2D clone for XNA (Farseer) and it worked nicely. Hope this helps.
The physics of your question breaks down into two different topics:
Inelastic Collisions: The math here is easy, and you could write a pretty decent library yourself without too much work for 2D points/balls. (And with more work, you could learn the physics for extended bodies.)
Materials Bending and Breaking: This will be hard. In general, you will have to model many of the topics in Mechanical Engineering:
Continuum Mechanics
Structural Analysis
Failure Analysis
Stress Analysis
Strain Analysis
I am not being glib. Modeling the bending and breaking of materials is, in general, a very detailed and varied topic. It will take a long time. And the only way to succeed will be to understand the science well enough that you can make clever shortcuts in limiting the scope of the science you need to model in your game.
However, the other half of your problem (modeling Inelastic Collisions) is a much more achievable goal.
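To give a feel for how easy the collision half really is, here is the core formula for a perfectly inelastic collision of two 2D point masses (plain JavaScript, names invented for the example): momentum is conserved, kinetic energy is not, and both bodies end up sharing one velocity.

    // Perfectly inelastic collision: v = (m1*v1 + m2*v2) / (m1 + m2), per component.
    function perfectlyInelastic2D(m1, v1, m2, v2) {
      const mTotal = m1 + m2;
      return {
        x: (m1 * v1.x + m2 * v2.x) / mTotal,
        y: (m1 * v1.y + m2 * v2.y) / mTotal,
      };
    }

    // A 2 kg body moving right at 3 m/s hits a 1 kg body at rest:
    // 6 kg·m/s of momentum shared by 3 kg gives 2 m/s for the merged pair.
    console.log(perfectlyInelastic2D(2, { x: 3, y: 0 }, 1, { x: 0, y: 0 }));   // { x: 2, y: 0 }

Elastic and partially elastic (bouncy) collisions add a restitution coefficient and a contact normal, but for points and balls the bookkeeping stays at roughly this level of difficulty.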
Good luck!

What's the fastest force-directed network graph engine for large data sets? [duplicate]

We currently have a dynamically updated network graph with around 1,500 nodes and 2,000 edges. It's ever-growing. Our current layout engine uses Prefuse - the force directed layout in particular - and it takes about 10 minutes with a hefty server to get a nice, stable layout.
I've looked a little at GraphViz's sfdp algorithm, but haven't tested it yet...
Are there faster alternatives I should look at?
I don't care about the visual appearance of the nodes and edges - we process that separately - just putting x, y on the nodes.
We do need to be able to tinker with the layout properties for specific parts of the graph, for instance, applying special tighter or looser springs for certain nodes.
Thanks in advance, and please comment if you need more specific information to answer!
EDIT: I'm particularly looking for speed comparisons between the layout engine options. Benchmarks, specific examples, or just personal experience would suffice!
I wrote a JavaScript-based graph drawing library VivaGraph.js.
It calculates the layout and renders a graph with 2K+ vertices and 8.5K edges in ~10-15 seconds. If you don't need the rendering part it should be even faster.
Here is a video demonstrating it in action: WebGL Graph Rendering With VivaGraphJS.
Online demo is available here. WebGL is required to view the demo but is not needed to calculate graphs layouts. The library also works under node.js, thus could be used as a service.
Example of API usage (layout only):
    var graph = Viva.Graph.graph(),
        layout = Viva.Graph.Layout.forceDirected(graph);

    graph.addLink(1, 2);
    layout.run(50); // runs 50 iterations of graph layout

    // print results:
    graph.forEachNode(function(node) { console.log(node.position); });
Hope this helps :)
I would have a look at OGDF, specifically http://www.ogdf.net/doku.php/tech:howto:frcl
I have not used OGDF, but I do know that the Fast Multipole Multilevel method is a highly performant algorithm, and when you're dealing with the kinds of runtimes involved in force-directed layout at the number of nodes you want, that matters a lot.
Why, among other reasons, that algorithm is awesome: the fast multipole method. It approximates a dense matrix-vector product (here, the all-pairs repulsive force computation), reducing the O(n²) runtime while keeping the approximation error small. Ideally, you'd have code from something like this: http://mgarland.org/files/papers/layoutgpu.pdf but I can't find it anywhere; maybe a CUDA solution isn't up your alley anyway.
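For context on where that O(n²) comes from, here is a deliberately naive force-directed iteration (plain JavaScript, all names and constants invented); every node repels every other node, which is exactly the part that multipole-style methods approximate:

    // One naive force-directed layout iteration. The nested loop over all node
    // pairs is the O(n^2) hot spot; edges only add an O(m) spring pass.
    // `edges` is an array of [i, j] index pairs into `nodes`.
    function iterate(nodes, edges, repulsion = 1000, springLength = 30, springK = 0.05, dt = 0.02) {
      for (const n of nodes) { n.fx = 0; n.fy = 0; }

      // Repulsive forces between every pair of nodes: O(n^2).
      for (let i = 0; i < nodes.length; i++) {
        for (let j = i + 1; j < nodes.length; j++) {
          const a = nodes[i], b = nodes[j];
          const dx = a.x - b.x, dy = a.y - b.y;
          const d = Math.hypot(dx, dy) || 0.01;           // avoid division by zero
          const f = repulsion / (d * d);
          a.fx += (f * dx) / d;  a.fy += (f * dy) / d;
          b.fx -= (f * dx) / d;  b.fy -= (f * dy) / d;
        }
      }

      // Attractive spring forces along edges: O(m).
      for (const [i, j] of edges) {
        const a = nodes[i], b = nodes[j];
        const dx = b.x - a.x, dy = b.y - a.y;
        const d = Math.hypot(dx, dy) || 0.01;
        const f = springK * (d - springLength);
        a.fx += (f * dx) / d;  a.fy += (f * dy) / d;
        b.fx -= (f * dx) / d;  b.fy -= (f * dy) / d;
      }

      // Move each node a small step along its accumulated force.
      for (const n of nodes) {
        n.x += n.fx * dt;
        n.y += n.fy * dt;
      }
    }

At 1,500 nodes that is already over a million pair evaluations per iteration, which is why the multilevel/multipole variants matter at your graph sizes.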
Good luck.
The Gephi Toolkit might be what you need: some layouts are very fast yet with a good quality: http://gephi.org/toolkit/
30 seconds to 2 minutes is enough to lay out such a graph, depending on your machine.
You can use the ForceAtlas layout, or the Yifan Hu Multilevel layout.
For very large graphs (50K+ nodes and 500K links), the OpenOrd layout will be the better choice.
In a commercial scenario, you might also want to look at the family of yFiles graph layout and visualization libraries.
Even the JavaScript version of it can perform layouts for thousands of nodes and edges using different arrangement styles. The "organic" layout style is an implementation of a force directed layout algorithm similar in nature to the one used in Neo4j's browser application. But there are a lot more layout algorithms available that can give better visualizations for certain types of graph structures and diagrams. Depending on the settings and structure of the problem, some of the algorithms take only seconds, while more complex implementations can also bring your JavaScript engine to its knees. The Java and .net based variants still perform quite a bit better, as of today, but the JavaScript engines are catching up.
You can play with these algorithms and settings in this online demo.
Disclaimer: I work for yWorks, which is the maker of these libraries, but I do not represent my employer on SO.
I would take a look at http://neo4j.org/. It's open source, which is beneficial in your case as you can customize it to your needs. The GitHub account can be found here.