Can we control progressive rendering in the Viewer based on the distance to the camera? - rendering

We have to work with very large models and we're hoping to use the first person camera to walk through them, and eventually do this in VR. The progressive rendering does wonders for improving perceived responsiveness, but it can be disorienting to have so many items around you disappear as you move.
Is there any way to turn progressive rendering off, but only for the objects closest to the camera? Maybe up to a maximum number of objects, or all objects within a radius of the camera. Everything further away can load in later and flicker during motion, but it would be nice to keep the nearby objects rendered, especially structural objects like stairs. I've often walked towards stairs just to have them disappear in front of me, forcing me to fly to a platform with E and Q instead of walking.
So far I've only found a way to toggle progressive rendering for the entire model on or off with viewer.setProgressiveRendering(bool) but I haven't found a way to customize the rendering behavior.

Per our Engineering team's recommendation, can you try setting rendering targets with:
viewer.impl.setFPSTargets(1, 5, 15) //min, target, max
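For example, a minimal usage sketch (viewer is assumed to be an already-initialized Autodesk.Viewing.GuiViewer3D instance; note that setFPSTargets lives on the undocumented viewer.impl object, so verify it against your viewer version):
// Keep progressive rendering enabled, but give the renderer a lower
// frame-rate budget so fewer nearby objects get dropped while moving.
viewer.setProgressiveRendering(true);  // documented toggle
viewer.impl.setFPSTargets(1, 5, 15);   // min, target, max (undocumented impl API)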
In fact, we've had similar reports from other developers asking for the ability to fine-tune rendering with large models, so our Engineering team is considering options to extend the existing functions and even build them into extensions.

Related

Choosing game model design

I need help designing a game where characters:
have universal actions (sit, jump, etc.) that are the same across all characters; roughly 50 animations
have unique attack patterns (different attacks); roughly 6 animations per character
have item-usage attacks (the same across all characters); roughly 4 animations per item, which could scale to 500+ items
What would be the best way to design this? I use Blender for animations, and I just started a week ago.
I'm thinking of either using one model for everything and limiting the actions, or creating multiple models and importing them separately. Any help is appreciated!
Edit: I'm also considering optimization, since I don't want to introduce lag; I'm making an MMO-like game.
There is an initial release (MIT License) of the module GodotAnimationRetargeting that I referenced in comments. Update: There is a GDScript version now.
Usually in Godot you have an AnimationPlayer with the animations tied to a given model, which means you would have to add them for all the models. However, this module allows you to apply animations from one AnimationPlayer to another model. You can also apply them partially (e.g. only the rotation, position or scaling of bones).
That should help you apply a common set of animations to different models.
Being a module, it requires compiling Godot with it included. See Compiling in the Godot docs.

FabricJS v3.4.0: Filters & maxTextureSize - performance/size limitations

Intro:
I've been messing with the fabricJS image filtering features in an attempt to start using them in my webapp, but I've run into the following issues.
It seems fabricJS by default only sets the image size cap (textureSize) on filters to be 2048, meaning the largest image is 2048x2048 pixels.
I've attempted to raise the default by calling fabric.isWebGLSupported() and then setting fabric.textureSize = fabric.maxTextureSize, but that still caps it at 4096x4096 pixels, even though my maxTextureSize on my device is in the 16000~ range.
I realize that devices usually report the full value without accounting for current memory actually available, but that still seems like a hard limitation.
So I guess these are the main issues I need to address to start using this feature effectively:
1- Render blocking applyFilters() method:
The current filter application method seems to block rendering in the browser. Is there a way to call it without blocking rendering, so I can show an indeterminate loading spinner or something?
Is it as simple as making the apply-filter method async and calling it from somewhere else in the app? (For context, I'm using Vue, with webpack/babel which polyfills async/await etc.)
2- Size limits:
Is there a way to bypass the size limit on images? I'm looking to filter images up to 4800x7200 pixels
I can think of at least one way to do this, which is to "break up" the image into smaller images, apply the filters, and then stitch it back together. But I worry it might be a performance hit, as there will be a lot of canvas exports and canvas initializations in this process.
I'm surprised fabricJS doesn't do this "chunking" by default, as it's quite a comprehensive library; they've already gone to the point of using WebGL shaders (a black box to me) for filtering under the hood for performance. Is there a better way to do this?
My other solution would be to send the image to a service (one I hand-roll, or a pre-existing paid one) that applies the filters somewhere in the cloud and returns the result to the user, but that's not a solution I'd prefer to resort to just yet.
For context, I'm mostly using fabric.Canvas and fabric.StaticCanvas to initialize canvases in my app.
Any insights/help with this would be great.
I wrote the filtering backend for fabricJS together with Mr. Scott Seaward (credit to him too), so I can give you some answers.
Hard cap at 2048
A lot of MacBooks with only an integrated Intel video card report a max texture size of 4096, but then crash the WebGL instance at anything higher than 2280. This was happening widely in 2017, when the WebGL filtering was written. A default of 4096 would have left a LOT of notebooks uncovered, and don't forget mobile phones either.
You know your userbase, so you can raise the limit to whatever your video cards allow and whatever canvas allows in your browsers. However big the texture is, the final image must still be copied into a canvas and displayed (canvas has a different maximum size depending on browser and device).
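If you do decide to raise the cap for a known userbase, a minimal sketch might look like the following (names as in fabric ~3.x; the exact spelling of the support check and how the filter backend is initialized can vary between versions, so verify against your build):
// Hedged sketch: raise the texture cap, then rebuild the filter backend so
// the new cap is actually used. Verify these names in your fabric version.
if (fabric.isWebGLSupported && fabric.isWebGLSupported(fabric.textureSize)) {
  // fabric.maxTextureSize is filled in by the support check above;
  // clamp it to something your devices and browsers can really handle
  fabric.textureSize = Math.min(fabric.maxTextureSize, 8192);
  fabric.filterBackend = fabric.initFilterBackend();
}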
Render blocking applyFilters() method
WebGL is synchronous, as far as I understand.
Creating parallel execution in a thread for filtering operations that are on the order of 20-30 ms (sometimes just a couple of ms in Chrome) seems excessive.
Also consider that I tried it, but when more than 4 WebGL contexts were open in Firefox some of them would get dropped, so I settled on one at a time.
The non-WebGL filtering takes longer, of course, and that could probably be done in a separate thread, but fabricJS is a generic library that does vectors, filtering and serialization; it already has a lot on its plate, and filtering performance is not that bad. But I'm open to arguing about it.
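If all you need is an indeterminate spinner, a small sketch like this is usually enough: let the browser paint the spinner first, then run the synchronous filter pass (showSpinner/hideSpinner are placeholder names; img and canvas are an existing fabric.Image and fabric.Canvas):
// Hedged sketch: paint a spinner, yield to the browser, then run the
// synchronous applyFilters() call. showSpinner/hideSpinner are placeholders.
function applyFiltersWithSpinner(img, canvas) {
  showSpinner();
  requestAnimationFrame(function () {   // wait until the spinner is painted
    setTimeout(function () {            // yield one more tick to be safe
      img.applyFilters();               // still blocks, but the spinner is visible
      canvas.requestRenderAll();
      hideSpinner();
    }, 0);
  });
}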
Chunking
The Shutterstock editor uses fabricJS and is the main reason a WebGL backend was written. The editor also has chunking and can filter bigger images with tiles of 2048 pixels. We did not release that as open source, and I do not plan to ask. That kind of tiling limits the kinds of filters you can write, because the code only has knowledge of a limited portion of the image at a time; even just blurring becomes complicated.
Here is a description of the tiling process; it's written for the casual reader and not only for software engineers, it's just a blog post:
https://tech.shutterstock.com/2019/04/30/canvas-webgl-filtering-concepts
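For what it's worth, a naive sketch of the tiling idea described in the question could look like the code below. It only works for purely per-pixel filters (brightness, contrast and so on); anything that samples neighbouring pixels, like blur, needs overlapping tiles and extra bookkeeping. The fabric calls are the public ~3.x ones, but treat this as an illustration under those assumptions, not production code.
// Naive per-tile filtering sketch. Only valid for purely per-pixel filters.
// 'imgElement' is a loaded HTMLImageElement, 'filters' an array of
// fabric.Image.filters instances, 'tileSize' e.g. 2048.
function filterInTiles(imgElement, filters, tileSize) {
  var out = document.createElement('canvas');
  out.width = imgElement.width;
  out.height = imgElement.height;
  var outCtx = out.getContext('2d');
  for (var y = 0; y < imgElement.height; y += tileSize) {
    for (var x = 0; x < imgElement.width; x += tileSize) {
      var w = Math.min(tileSize, imgElement.width - x);
      var h = Math.min(tileSize, imgElement.height - y);
      var tileCanvas = document.createElement('canvas');
      tileCanvas.width = w;
      tileCanvas.height = h;
      tileCanvas.getContext('2d').drawImage(imgElement, x, y, w, h, 0, 0, w, h);
      var tile = new fabric.Image(tileCanvas);   // fabric accepts a canvas element
      tile.filters = filters;
      tile.applyFilters();                       // filtered result replaces the element
      outCtx.drawImage(tile.getElement(), x, y); // stitch the filtered tile back in
    }
  }
  return out; // a canvas holding the stitched, filtered image
}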
Generic render blocking consideration
So fabricJS has some pre-written filters made with shaders.
The timings I note here are from memory and not re-verified.
The time spent filtering an image breaks down as:
Uploading the image to the GPU (I do not know how many ms)
Compiling the shader (up to 40 ms, it depends)
Running the shader (around 2 ms)
Downloading the result from the GPU (around 0 ms or 13 ms, depending on which method is used)
Now the first time you run a filter on a single image:
The image gets uploaded
Filter compiled
Shader Run
Result downloaded
The second time you do this:
Shader Run
Result downloaded
When a new filter is added or a filter is changed:
New filter compiled
One or both shaders run
Result downloaded
The most common errors I have noticed when building applications with filtering are:
You forget to remove old filters, leaving them active with a value near 0 that produces no visual change but still adds time
You connect the filter to a slider change event without throttling, which, depending on the browser/device, fires up to 120 filtering operations per second (see the sketch below)
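A minimal throttle sketch, assuming a Brightness filter already sits at img.filters[0] (slider, img and canvas are placeholder references to your DOM input, fabric.Image and fabric.Canvas):
// Run at most ~15 filter passes per second instead of one per 'input' event.
var pending = false;
slider.addEventListener('input', function () {
  if (pending) { return; }
  pending = true;
  setTimeout(function () {
    pending = false;
    img.filters[0].brightness = parseFloat(slider.value); // e.g. a Brightness filter
    img.applyFilters();
    canvas.requestRenderAll();
  }, 66); // ~15 updates per second
});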
Look at the official simple demo:
http://fabricjs.com/image-filters
Use the sliders to filter, apply even more filters; everything seems pretty smooth to me.

Custom rendering with GPU, Direct3D or OpenGL

I have a Windows application that currently renders graphics largely using MFC, and I'd like to change it to make better use of the GPU. Most of the graphics are straightforward and could easily be built up into a scene graph, but some could prove very difficult. Specifically, in addition to the normal mesh-type objects, I'm also dealing with point clouds which are liable to contain billions of Cartesian points stored in a very compact manner, and which use quite a lot of custom culling techniques to be displayed in real time (Example). What I'm looking for is a mechanism that does the bulk of the scene rendering to a buffer and then gives me access to that buffer, a z-buffer, and the camera parameters, so that I can modify them before putting them out to the display. I'm wondering whether this is possible with Direct3D or OpenGL, or possibly with a higher-level framework like OpenSceneGraph, and what would be the best starting point. Given the software is Windows based, I'd probably prefer to use Direct3D, as this is likely to lead to the fewest driver issues, which I'm eager to avoid. OpenSceneGraph seems to provide custom culling via octrees, which are close but not identical to what I'm using.
Edit: To clarify a bit more, currently I have the following:
A display list / scene in memory which will typically contain up to a few million triangles, lines, and pieces of text, which I cull in software and output to a bitmap using low-performing drawing primitives
A point cloud in memory which may contain billions of points in a highly compressed format (~4.5 bytes per 3D point) which I cull and output to the same bitmap
Cursor information that gets added to the bitmap prior to output
A camera, z-buffer and attribute buffers for navigation and picking purposes
The slow part is the software drawing with primitives in item 1, which I'd like to replace with GPU rendering of some kind. The solution I envisage is to build a scene for the GPU, render it to a bitmap (with a matching z-buffer) based on my current camera parameters, and then add my point cloud prior to output.
Alternatively, I could move to a scene-based framework that manages the cameras and navigation for me, and provide the points in view as spheres or splats based on volume and level of detail during the rendering loop. In this scenario I'd also need to be able to add cursor information to the view.
In either scenario, the hosting application will be MFC C++ based on VS2017, which would require too much work to change for the purposes of this exercise.
It's hard to say exactly based on your description of a complex problem.
OSG can probably do what you're looking for.
Depending on your timeframe, I'd consider eschewing both OpenGL (OSG) and DirectX in favor of the newer Vulkan 3D API. It's a successor to both D3D and OGL, and is designed by the GPU manufacturers themselves to provide optimal performance exceeding both of its predecessors.
The OSG project is currently developing a Vulkan scenegraph known as VSG, which already demonstrates superior performance to OSG and will have more generalized culling ability.
I've worked a bunch with point clouds and am pretty experienced with them, but I'm not exactly clear on what you're proposing to do.
If you want to actually have a verbal discussion about the matter, I'm pretty easy to find (my company is AlphaPixel -- AlphaPixel.com) and you could call us. I'm in the European time zone right now, it's not clear from your question where you are but you sound US-based.

3D objects are not shown in their regular shape at a distance

I am working on a game that was developed earlier by someone else. I am facing a problem: when the player (with the camera) starts running along the road, the buildings are not shown in their regular shape, and as we move forward (closer to the buildings) they regain their original shapes. Sometimes the buildings on either side of the road are not visible to the camera at all (empty space), and when we move closer, a building suddenly becomes visible. I think it may be some Unity3D setting problem (rendering, camera or quality). Maybe it was done to increase performance on mobile devices.
Does anybody know what the issue may be or how to resolve it?
Any help will be appreciated. Thanks in advance.
This sounds like it's a problem with the available LODs for each building's 3D model.
Basically, 3D games work by having 2-3 different versions of each 3D model, with varying Levels Of Detail. So for example, if you have a house model that uses 500 polygons, you'll probably have another 2 versions (e.g. 250 polys and 100 polys), which are used depending on the distance between the player and the object. The farther away the player is, the simpler the version used.
The issue occurs when developers use automatically generated LOD models, which can look distorted or not appear at all. Unity probably auto-generates them, but I'm unsure where you'll find the settings for this in Unity. However, I've seen 3D models on the Unity Asset Store offered with different LODs, so Unity probably gives you the option to set your own. The simplest solution would be to increase the distance at which the LODs change, while the more involved solution would be to make custom lower-poly versions of the 3D models for larger distances.
I have resolved the problem. It was due to the LOD (level of detail) setup used for the objects (buildings) in Unity3D to enhance performance on slower devices. LOD provides several levels of detail for an object, which you can adjust according to your needs. In my specific case the buildings appeared suddenly because of a different (wrong) position for LOD1: for LOD1 the building was in the wrong place, while for LOD0 it was in the right place. So when the camera looked from a distance it saw LOD1, which was in the wrong place, and hence it saw empty space with no building at the expected position. But when the camera came closer it saw LOD0, in which the building is in the right position, so the buildings seemed to appear or become visible suddenly.

Per frame optimization for large datasets

Summary
New to iPhone programming, I'm having trouble picking the right optimization strategy to filter a set of view components in a scrollview with huge content. In what area would my app gain the most performance?
Introduction
My current iPad app-in-progress lets users explore fairly large binary tree structures. The trees contain between 30 and 900 nodes, and when drawn inside a scroll view (with limited zoom) it looks like this.
The nodes' contents are stored in a SQLite backed Core Data model. It's a binary tree and if a node has children, there are always exactly two. The x and y positions are part of the model, as are the dimensions of the node connections, shown as dotted lines.
Optimization
Only about 50 nodes fit the screen at any given time. With the largest trees containing up to 900 nodes, it's not possible to put everything in a scroll-view-controlled, zooming UIView; that's a recipe for crashes. So I have to do per-frame filtering of the nodes.
And that's where my troubles start. I don't have the experience to make a well-founded decision between the possible filtering options, and in addition I probably don't know about the really fast special magic buried deep in Objective-C or Cocoa Touch. Because the backing store is close to 200 MB in size (some 90,000 nodes in hundreds of trees), it's very time-consuming to test every single tree on the iPad device, which is why I'd like to ask you guys for advice.
For all my attempts I'm putting a filter method in the scrollViewDidScroll: and scrollViewDidZoom:. I'm also blocking the main thread with the filter, because I can't show the content without the nodes anyway. But maybe someone has an idea in that area?
Because all the positioning is present in the Core Data model, I might use NSFetchRequest to do the filtering. Is that really fast though? I have the idea it's not a very optimized method.
From what I've tried, the faulted managed objects seem to fit in memory at once, but it might be tricky for the larger trees once their contents start firing faults. Is it a good idea to loop over the NSSet of nodes and see what items should be on screen?
Are there other tricks to gain performance? Do you see ways I could use multithreading to compute the display set faster, even though the model's context was created on the main thread?
Thanks for your advice,
EP.
Ironically, your binary tree could be divided using Binary Space Partitioning done in 2D, so rendering will be very fast and the number of checks will be close to the minimum necessary.