When using path tracing, rough surfaces look good with very few samples. Even while the camera is moving the surface clearly looks rough, even though the renderer has hardly accumulated any frames.
As I understand it, roughness is a measure of how randomly light rays "bounce" when they hit a surface. Intuitively, when light bounces very randomly, you would need many samples to converge to a realistic color.
As a comparison I created a material with a roughness close to 0, but using a very fine-grained noisy normal map. This material did indeed require many samples to start looking rough.
My questions are:
How does UE achieve roughness using only very few samples?
Is there a fundamental difference between using roughness and using a noisy, extremely fine-grained normal map?
I am currently developing a hair strand system for my project. I am using Verlet integration to simulate gravity and wind.
The wind is currently just a constant vector, but I want to make it more realistic.
Are there any papers or articles I should read about this? Thanks.
It depends on how deep you want to go with the simulation. I suppose that you want something more interesting than uniform wind with varying direction and intensity.
I would suggest adding turbulent velocity to each strand with 3D Curl/Simplex noise. Even animated Perlin noise might be cheap and fast enough for your needs, but you might be able to get more dramatic effects with curl noise.
The original paper for curl noise is here: http://www.cs.ubc.ca/~rbridson/docs/bridson-siggraph2007-curlnoise.pdf
You can also find several implementations of it, but the basic idea is still the same - perturbing particles according to an underlying flow-field.
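As a rough sketch of that idea (not Bridson's exact formulation): build a vector potential from three offset noise samples and take its curl with central differences. stb_perlin.h is just one possible noise source; any 3D Simplex/Perlin implementation works, and the offsets and scales below are arbitrary.

```cpp
#define STB_PERLIN_IMPLEMENTATION
#include "stb_perlin.h"   // https://github.com/nothings/stb

struct Vec3 { float x, y, z; };

static float noise3(float x, float y, float z) {
    return stb_perlin_noise3(x, y, z, 0, 0, 0);
}

// Three decorrelated noise channels (offsets are arbitrary) animated by time t.
static Vec3 potential(Vec3 p, float t) {
    return { noise3(p.x,         p.y + t,     p.z),
             noise3(p.x + 31.4f, p.y,         p.z + t),
             noise3(p.x + t,     p.y + 47.7f, p.z) };
}

// v = curl(P), estimated with central differences. The curl of any field is
// divergence-free, which is what makes the motion read as swirling flow
// rather than random jitter.
Vec3 curlNoise(Vec3 p, float t) {
    const float e = 0.01f;  // finite-difference step
    Vec3 dx1 = potential({p.x + e, p.y, p.z}, t), dx0 = potential({p.x - e, p.y, p.z}, t);
    Vec3 dy1 = potential({p.x, p.y + e, p.z}, t), dy0 = potential({p.x, p.y - e, p.z}, t);
    Vec3 dz1 = potential({p.x, p.y, p.z + e}, t), dz0 = potential({p.x, p.y, p.z - e}, t);
    const float s = 1.0f / (2.0f * e);
    return { (dy1.z - dy0.z - (dz1.y - dz0.y)) * s,
             (dz1.x - dz0.x - (dx1.z - dx0.z)) * s,
             (dx1.y - dx0.y - (dy1.x - dy0.x)) * s };
}
// Per Verlet step: wind = baseWind + gustStrength * curlNoise(strandPos * scale, time);
```

The last comment line shows the intended use: the turbulent term is simply added to whatever uniform wind vector you already feed into the integration.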
Currently I'm working on a little project just for a bit of fun. It is a C++, WinAPI application using OpenGL.
I hope it will turn into an RTS game played on a hexagon grid, and when I get the basic game engine done I have plans to expand it further.
At the moment my application consists of a VBO that holds vertex and heightmap information. The heightmap is generated using a midpoint displacement algorithm (diamond-square).
In order to implement a hexagon grid I went with the idea explained here. It shifts down odd rows of a normal grid to allow relatively easy rendering of hexagons without too many further complications (I hope).
After a few days it is beginning to come together and I've added mouse picking, which is implemented by rendering each hex in the grid in a unique colour, and then sampling a given mouse position within this FBO to identify the ID of the selected cell (visible in the top right of the screenshot below).
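For reference, the readback half of that picking pass looks roughly like this (illustrative OpenGL; assumes the ID pass has just been rendered into the bound FBO, with each hex coloured by its ID at 8 bits per RGB channel):

```cpp
#include <GL/gl.h>

int pickCell(int mouseX, int mouseY, int viewportHeight) {
    GLubyte pix[3];
    // Window coordinates have a top-left origin; OpenGL reads from bottom-left.
    glReadPixels(mouseX, viewportHeight - mouseY - 1, 1, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, pix);
    // Reserve one colour (e.g. the clear colour) to mean "no hex under cursor".
    return pix[0] | (pix[1] << 8) | (pix[2] << 16);
}
```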
In the next stage of my project I would like to look at generating more 'playable' terrains. To me this means that the shape of each hexagon should be more regular than those seen in the image above.
So finally coming to my point, is there:
A way of smoothing or adjusting the vertices in my current method that would bring all points of a hexagon onto one plane (coplanar)? (See the sketch after this list.)
EDIT: For anyone looking for information on how to make points coplanar, here is a great explanation.
A better approach to procedural terrain generation that would allow for better control of this sort of thing?
A way to represent my vertex information differently that allows for this?
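For illustration, here is a minimal sketch of the first option, assuming each hexagon stores the indices of its centre vertex and six corners in the shared heightmap; snapping all seven heights to their average trivially makes them coplanar (on a horizontal plane):

```cpp
#include <vector>

// Naive flattening: set the centre and six corner heights of one hex to
// their average. idx holds the seven vertex indices for this hex.
void flattenHex(std::vector<float>& heights, const int (&idx)[7]) {
    float sum = 0.0f;
    for (int i : idx) sum += heights[i];
    const float avg = sum / 7.0f;
    for (int i : idx) heights[i] = avg;
}
```

Since corner vertices are shared between neighbouring hexes, flattening one hex nudges its neighbours; running the pass over the whole grid a few times (or blending toward the average instead of snapping) keeps the terrain joined rather than stepped.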
To be clear, I am not trying to achieve a flat hex grid with raised edges or platforms (as seen below).
I would like all the geometry to join and lead into the next bit.
I hope to achieve something similar to what I have now (relatively nice undulating hills & terrain) but with more controllable plateaus. This gives me the flexibility of cordoning off areas (unplayable tiles) later on, where I can add higher-detail meshes if needed.
Any feedback is welcome, I'm using this as a learning exercise so please - all comments welcome!
It depends on what you actually want and what you mean by "more controlled".
Do you want to be able to say "there will be a mountain at coordinates [11, -127] with radius 20"? The complexity of this depends on how far you want to go. If you want just mountains, then radial gradients are enough (just add the gradient values to the noise values). But if you want more complex shapes, you are in for a treat.
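A hedged sketch of the radial-gradient approach (all names illustrative): stamp a smooth falloff onto the heightmap, on top of whatever noise heights are already there.

```cpp
#include <cmath>
#include <vector>

// Add a mountain centred at (cx, cy) with the given radius and peak height.
// h is a row-major heightmap of size w x rows.
void stampMountain(std::vector<float>& h, int w, int rows,
                   float cx, float cy, float radius, float peak) {
    for (int y = 0; y < rows; ++y)
        for (int x = 0; x < w; ++x) {
            float dx = x - cx, dy = y - cy;
            float d = std::sqrt(dx * dx + dy * dy) / radius; // 0 at centre, 1 at rim
            if (d < 1.0f)
                h[y * w + x] += peak * 0.5f * (std::cos(d * 3.14159265f) + 1.0f);
        }
}
```

The cosine falloff is just one choice; any monotonic function that reaches zero at the rim avoids a visible seam where the stamp meets the surrounding noise.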
I explore this idea in great depth in my project (please consider that the published version is just a prototype which is currently undergoing a major redesign; it is completely usable as a map generator, though).
Another way is to make the generation much more procedural - you just specify a sequence of mathematical functions, which you apply on the terrain. Even a simple value transformation can get you very far.
All of these methods should work just fine for a hex grid. If artefacts occur because of the odd-row shift, you could interpolate the odd rows instead (calculate the height value for each shifted vertex from the two vertices between which it is located, with a simple linear interpolation formula), as in the sketch below.
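A sketch of that interpolation, under the assumption that odd rows are shifted half a cell to the right relative to the underlying heightmap grid:

```cpp
#include <algorithm>
#include <vector>

// Height for a vertex in a shifted row: halfway between the two unshifted
// samples it sits between (t = 0.5 because the shift is half a cell).
float shiftedHeight(const std::vector<float>& h, int w, int x, int y) {
    int x1 = std::min(x + 1, w - 1);              // clamp at the map edge
    return 0.5f * (h[y * w + x] + h[y * w + x1]);
}
```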
Consider a function that maps the purple line into the blue curve - it emphasizes low-lying areas as well as very high areas, but makes the transition between them steeper (this example is just a cosine function; making the curve less smooth would make the transformation more prominent).
You could also use only the bottom half of the curve, making peaks sharper and low-lying areas flatter (thus more playable).
The "sharpness" of the curve can easily be modulated with a power (making the effect much more dramatic) or a square root (decreasing the effect).
Implementing this is actually extremely simple (especially if you use the cosine function) - just apply the function to each pixel in the map. If the function isn't mathematically trivial, lookup tables work just fine (use cubic interpolation between the table values; linear interpolation creates artefacts).
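A minimal sketch of that per-pixel transformation, assuming heights normalised to [0, 1] (function names are illustrative):

```cpp
#include <cmath>
#include <vector>

// Cosine remap: flat near 0 and 1, steepest at 0.5. power = 1 gives the
// plain cosine curve; power > 1 sharpens the effect, power < 1 softens it.
float remap(float t, float power) {
    float c = 0.5f * (1.0f - std::cos(t * 3.14159265f));
    return std::pow(c, power);
}

void applyRemap(std::vector<float>& heights, float power) {
    for (float& h : heights) h = remap(h, power);
}
```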
Several more simple methods of "gamification" of random noise terrain can be found in this paper: "Realtime Synthesis of Eroded Fractal Terrain for Use in Computer Games".
Good luck with your project!
I want to evaluate the performance of several SDKs / frameworks for depth cameras. These cameras use either time-of-flight or structured light.
The framework should be capable (at least) of person tracking / blob detection and gesture recognition.
So far I found the following frameworks:
OpenNI (structured light only)
Microsoft Kinect SDK (Kinect only)
Beckon SDK by Omek Interactive (ToF and structured light)
iisu by SoftKinetic (ToF and structured light)
Are there any other frameworks I should be aware of?
EDIT: I found this article by Techradar that seems to indicate that these are indeed the only options currently available.
Any feedback would be very much appreciated!
I have found some interesting links on this. You can take MIT's approach using CoDAC. They list lots of facts in this post; I'll quote the most important ones here.
9. What are limitations of this technique?
The main limitation of our framework is inapplicability to scenes with curvilinear objects, which would require extensions of the current mathematical model. Another limitation is that a periodic light source creates a wrap-around error as it does in other TOF devices. For scenes in which surfaces have high reflectance or texture variations, availability of a traditional 2D image prior to our data acquisition allows for improved depth map reconstruction as discussed in our paper.
10. What are advantages of this technique/device and how does it compare with existing TOF-based range sensing techniques?
In laser scanning, spatial resolution is limited by the scanning time. TOF cameras do not provide high spatial resolution because they rely on a low-resolution 2D pixel array of range-sensing pixels. CoDAC is a single-sensor, high spatial resolution depth camera which works by exploiting the sparsity of natural scene structure.
11. What is the range resolution and spatial resolution of the CoDAC system?
We have demonstrated sub-centimeter range resolution in our experiments. This is significantly better than the fundamental limit of about 10 cm that would arise from using a detector with 0.7 nanosecond rise time if we were not using parametric signal modeling. The improvement in range resolution comes from the parametric modeling and deconvolution in our framework. We refer the reader to our publications for complete details and analysis.
We have demonstrated 64-by-64 pixel spatial resolution, as this is the spatial resolution of our spatial light modulator. Spatially patterning with a digital micromirror device (DMD) will enable much higher spatial resolution. Our experiments use only 205 projection patterns, which correspond to just 5% of the number of pixels in the reconstructed depth map. This is a significant improvement over raster scanning in LIDAR, and it is obtained without the 2D sensor array used in TOF cameras.
Another interesting project I found on YouTube uses libfreenect and libusb.
There is also dSensingNI, which is described as:
This work presents an approach to overcome the disadvantages of existing interaction frameworks and technologies for touch detection and object interaction. The robust and easy to use framework dSensingNI (Depth Sensing Natural Interaction) is described, which supports multitouch and tangible interaction with arbitrary objects. It uses images from a depth-sensing camera and provides tracking of users' fingers or palms of hands, and combines this with object interaction, such as grasping, grouping and stacking, which can be used for advanced interaction techniques.
So you have hit most of the ones out there, especially those that use the Kinect, but there are a few other options! Hope this helps!
Recently I've started developing a voxel engine. What I need is only colorful voxels without textures, but in very large amounts (each voxel much smaller than in Minecraft) - and the question is how to draw the scene very fast. I'm using C#/XNA, but in my opinion that is not very important here; let's talk about the general case. Look at these two games:
http://www.youtube.com/watch?v=EKdRri5jSMs
http://www.youtube.com/watch?v=in0bavLJ8KQ
In particular, I think video #2 demonstrates great optimization methods (my gfx card starts choking at just 192 x 192 x 64). How do they achieve this?
What I would like to have in the engine:
colorful voxels without textures, but shaded
many, many voxels, say a minimum of 512 x 512 x 128, to achieve something like video #2
shadows (smooth shadows would be great but are not necessary)
optional: dynamic lighting (for example from flying fireballs which light up nearby voxel structures)
a minimum framerate of 40 FPS
the camera has 3 degrees of freedom (move along the x-axis, y-axis and z-axis); no camera rotation is needed
finally, an optional feature may be depth of field (it would be sweet ^^)
Optimizations I already know about:
remove unseen voxels that reside inside a voxel structure (covered from all six directions by other voxels)
remove unseen faces of voxels - because the camera has no rotation and always looks slantwise forward as in TPP games, if we divide the screen with a vertical cut, the left voxels and right voxels will each show only 3 faces
keep voxels in a Dictionary instead of a 3-dimensional array - jumping through an array of size 512 x 512 x 128 takes milliseconds, which is unacceptable - but a dictionary of int:color, where the int describes a packed 3D position, is much faster (see the packing sketch after this list)
use instancing where applicable
occlusion culling? (how to do this?)
space partitioning / octree (is it a good idea?)
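Here is a sketch of the packed key mentioned in the list (written in C++ for illustration; the same bit shifts work for a C# Dictionary<int, int>):

```cpp
#include <cstdint>
#include <unordered_map>

// 512 x 512 x 128 needs 9 + 9 + 7 = 25 bits, so one 32-bit key is enough.
inline uint32_t packPos(uint32_t x, uint32_t y, uint32_t z) {
    return x | (y << 9) | (z << 18);  // x, y in [0, 512), z in [0, 128)
}

std::unordered_map<uint32_t, uint32_t> voxels; // packed position -> packed RGBA
```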
I'll be very thankful if someone can give me a tip on how to improve the existing optimizations listed above, or share ideas for new improvements. Thanks!
1) Voxatron uses a software renderer rather than the GPU. You can read some details about it if you read the comments in this blog post:
http://www.lexaloffle.com/bbs/?tid=201
I haven't looked in detail myself so can't tell you much more than that.
2) I've never played 3D Dot Game Heroes but I don't have any reason to believe it uses voxels at all. I mean, I don't see any cubes being added or deleted. Most likely it is just a static polygon mesh with a nice texture applied.
As for implementing it yourself, do not try to draw the world by rendering cubes as this is very slow. Instead you should process the volume and generate meshes lying on the intersection of solid voxels and empty ones. Break the volume into suitable sized regions (e.g. 32x32x32) and generate a mesh for each.
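As a hedged sketch of that extraction step (this is not PolyVox's actual API; Volume and emitQuad are placeholders for your own storage and mesh builder): a face is emitted only where a solid voxel borders an empty one, so interior voxels and buried faces never reach the GPU.

```cpp
#include <cstdint>
#include <vector>

enum Face { PosX, NegX, PosY, NegY, PosZ, NegZ };

struct Volume {
    int sx, sy, sz;
    std::vector<uint8_t> data;  // 0 = empty, 1 = solid
    bool solid(int x, int y, int z) const {
        if (x < 0 || y < 0 || z < 0 || x >= sx || y >= sy || z >= sz) return false;
        return data[(z * sy + y) * sx + x] != 0;
    }
};

void emitQuad(int x, int y, int z, Face f) {
    // Stub: append four vertices / two triangles for this face to the
    // region's vertex buffer.
    (void)x; (void)y; (void)z; (void)f;
}

void buildMesh(const Volume& v) {
    for (int z = 0; z < v.sz; ++z)
        for (int y = 0; y < v.sy; ++y)
            for (int x = 0; x < v.sx; ++x) {
                if (!v.solid(x, y, z)) continue;
                if (!v.solid(x + 1, y, z)) emitQuad(x, y, z, PosX);
                if (!v.solid(x - 1, y, z)) emitQuad(x, y, z, NegX);
                if (!v.solid(x, y + 1, z)) emitQuad(x, y, z, PosY);
                if (!v.solid(x, y - 1, z)) emitQuad(x, y, z, NegY);
                if (!v.solid(x, y, z + 1)) emitQuad(x, y, z, PosZ);
                if (!v.solid(x, y, z - 1)) emitQuad(x, y, z, NegZ);
            }
}
```

Run this per 32x32x32 region, so that editing a voxel only forces one small mesh to be rebuilt.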
I have written a book article about this which you might find useful. It's actually about smooth voxel terrain, but a lot of the principles still apply.
You can read it on Google books here: http://books.google.com/books?id=WNfD2u8nIlIC&lpg=PR1&dq=game%20engine%20gems&pg=PA39#v=onepage&q&f=false
And you can find the associated source code here: http://www.thermite3d.org
Since you are using XNA, you can just use instancing to get the desired effect: http://www.float4x4.net/index.php/2010/06/hardware-instancing-in-xna/
http://roecode.wordpress.com/2008/03/17/xna-framework-gameengine-development-part-19-hardware-instancing-pc-only/
The underlying concept is instancing: this feature lets you specify some amount of repeating data and some amount of varying data in a single DrawIndexedPrimitive call. In your case, the instance stream would be a single solid box, and the other stream would be the transform and color information.
As the title says, I'm fleshing out a design for a 2D platformer engine. It's still in the design stage, but I'm worried that I'll be running into issues with the renderer, and I want to avoid them if they will be a concern.
I'm using SDL for my base library, and the game will be set up to use a single large array of Uint16 to hold the tiles. These index into a second array of "tile definitions" that are used by all parts of the engine, from collision handling to the graphics routine, which is my biggest concern.
The graphics engine is designed to run at a 640x480 resolution, with 32x32 tiles. There are 21x16 tiles drawn per layer per frame (to handle the extra tile that shows up when scrolling), and there are up to four layers that can be drawn. Layers are simply separate tile arrays, but the tile definition array is common to all four layers.
What I'm worried about is that I want to be able to take advantage of transparencies and animated tiles with this engine, and as I'm not too familiar with such designs, I'm worried that my current solution is going to be too inefficient to work well.
My target FPS is a flat 60 frames per second, and with all four layers being drawn, I'm looking at 21x16x4x60 = 80,640 separate 32x32px tiles needing to be drawn every second, plus however many odd-sized blits are needed for sprites, and this seems just a little excessive. So, is there a better way to approach rendering the tilemap setup I have? I'm looking at the possibility of using hardware acceleration to draw the tilemaps, if it will improve performance much. I'd also like the game to run well on slightly older computers.
If I'm looking for too much, then I don't think that reducing the engine's capabilities is out of the question.
I think the thing that will be an issue is the sheer amount of draw calls, rather than the total "fill rate" of all the pixels you are drawing. Remember - that is over 80000 calls per second that you must make. I think your biggest improvement will be to batch these together somehow.
One strategy to reduce the fill rate of the tiles and layers would be to composite static areas together. For example, if you know an area doesn't need updating, it can be cached. A lot depends on whether the layers are scrolled independently (parallax style).
Also, have a look on Google for "dirty rectangles" and see if any of those schemes fit your needs; there is a rough sketch below.
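A rough sketch of the idea with classic SDL 1.2 calls (tileChanged and redrawTile are placeholders for your own change tracking and blitting): only changed tiles are redrawn, and only their rectangles are pushed to the display instead of flipping the whole screen.

```cpp
#include <SDL.h>
#include <vector>

bool tileChanged(int tx, int ty);                      // placeholder: your change tracking
void redrawTile(SDL_Surface* screen, int tx, int ty);  // placeholder: blit all layers here

void presentDirty(SDL_Surface* screen) {
    std::vector<SDL_Rect> dirty;
    for (int ty = 0; ty < 16; ++ty)
        for (int tx = 0; tx < 21; ++tx)
            if (tileChanged(tx, ty)) {
                redrawTile(screen, tx, ty);
                SDL_Rect r = { Sint16(tx * 32), Sint16(ty * 32), 32, 32 };
                dirty.push_back(r);
            }
    SDL_UpdateRects(screen, (int)dirty.size(), dirty.data());
}
```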
Personally, I would just try it and see. This probably won't affect your overall game design, and if you have good separation between logic and presentation, you can optimise the tile drawing til the cows come home.
Make sure to use alpha transparency only on tiles that actually use alpha, and skip drawing blank tiles. Make sure the tile surface color depth matches the screen color depth when possible (not really an option for tiles with an alpha channel), and store tiles in video memory so SDL will use hardware acceleration when it can. Color-key transparency will be faster than a full alpha channel for simple tiles where partial transparency or blending antialiased edges with the background aren't necessary.
On a 500 MHz system you'll get about 6.8 CPU cycles per pixel per layer, or 27 per screen pixel (500 MHz / (640 x 480 x 60) ≈ 27), which (I believe) isn't going to be enough if you have full alpha channels on every tile of every layer, but should be fine if you take shortcuts like those mentioned where possible.
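A small SDL 1.2 sketch of those shortcuts: convert each tile to the screen's format once at load time, and use a colour key (magenta here is just a convention) instead of a full alpha channel where possible.

```cpp
#include <SDL.h>

SDL_Surface* prepareTile(SDL_Surface* loaded) {
    // RLE-accelerated colour key: magenta pixels become transparent.
    SDL_SetColorKey(loaded, SDL_SRCCOLORKEY | SDL_RLEACCEL,
                    SDL_MapRGB(loaded->format, 255, 0, 255));
    // Convert to the display format so blits don't pay a per-frame conversion.
    SDL_Surface* converted = SDL_DisplayFormat(loaded);
    SDL_FreeSurface(loaded);
    return converted;
}
```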
I agree with Kombuwa. If this is just a simple tile-based 2D game, you really ought to lower the standards a bit, as this is not Crysis. 30 FPS is very smooth (look at Command & Conquer 3, which is limited to 30 FPS). Still, I once wrote a remote desktop viewer that ran at 14 FPS (1900 x 1200) using GDI+, and it was still pretty smooth. I think your 2D game will probably be okay, especially using SDL.
Could you buffer each complete layer at its view size plus one extra tile on each of the four edges (if you have vertical scrolling), then, when scrolling, reuse the buffer to create a new one minus the first column and draw only the new end column?
This would avoid a lot of needless redrawing.
Additionally, if you want 60 FPS, you can look into frame-skip methods for slower systems, skipping every other or every third draw phase.
I think you will be pleasantly surprised by how many of these tiles you can draw a second. Modern graphics hardware can fill a 1600x1200 framebuffer numerous times per frame at 60 fps, so your 640x480 framebuffer will be no problem. Try it and see what you get.
You should definitely take advantage of hardware acceleration. This will give you 1000x performance for very little effort on your part.
If you do find you need to optimise, then the simplest way is to only redraw the areas of the screen that have changed since the last frame. Sounds like you would need to know about any animating tiles, and any tiles that have changed state each frame. Depending on the game, this can be anywhere from no benefit at all, to a massive saving - it really depends on how much of the screen changes each frame.
You might consider merging neighbouring tiles with the same texture into a larger polygon with texture tiling (sort of a build process).
What about decreasing the frame rate to 30 FPS? I think that will be good enough for a 2D game.