Glass Material in Unreal Engine 4 for Virtual Reality (VR)? - rendering

The glass materials that ship with UE4 work very well in regular games; however, they are very expensive to render in VR.
Are there any glass materials or textures out there that work well for VR? My guess is that the less transparent the glass, the cheaper it is to render, since there will be less reflection/refraction?

Related

HoloLens external rendering

Does anyone have a good solution for external rendering for Microsoft HoloLens apps? Specifically: is it possible to let my laptop render a set of 3D objects that is too demanding for the HoloLens GPU and then display them on the HoloLens over Wi-Fi, including the spatial mapping and interaction?
It's possible to render remotely, both directly from the Unity editor and from a built application.
While neither achieves your goal of a "good solution", they both allow very intensive applications to at least run at all.
This walks you through how to add it to an app you're building.
https://learn.microsoft.com/en-us/windows/mixed-reality/add-holographic-remoting
This is for running directly from the editor:
https://blogs.unity3d.com/2018/05/30/create-enhanced-3d-visuals-with-holographic-emulation-in-uwp/
I don't think this is possible, since you can't really access the OS or the processor at all on the HoloLens. Even if you do manage to send the data to a third party to process, the data will still need to be run back through the HoloLens, which is really just the same as before.
You may find a way to hook up a VR backpack to it, but even then I highly doubt it would be possible.
If you are having trouble rendering 3D objects, then you should reduce the number of triangles, put a lower-resolution shader on them, or reduce the size of the objects. The biggest factor in processing 3D objects on the HoloLens is how much space is being drawn on the lens. If your object takes up 25% of the view instead of 100%, it will be easier to process on the HoloLens.
Also, if you can't avoid having a lot of objects in the scene, check out LOD (level of detail), which reduces the resolution of objects based on their distance from the camera and vice versa; a minimal sketch of the idea follows.
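As a sketch of distance-based LOD selection, assuming a hypothetical renderer where each object carries a list of pre-built meshes (the Mesh type and the thresholds here are illustrative, not from any particular engine):

    #include <vector>

    struct Mesh; // hypothetical mesh asset type; stands in for your engine's mesh handle

    struct LodLevel {
        float maxDistance; // use this mesh while the camera is closer than this
        Mesh* mesh;
    };

    // Pick the appropriate LOD for an object given its distance to the camera.
    // Levels must be sorted by ascending maxDistance.
    Mesh* SelectLod(const std::vector<LodLevel>& levels, float distanceToCamera) {
        for (const LodLevel& level : levels)
            if (distanceToCamera < level.maxDistance)
                return level.mesh;
        return nullptr; // beyond the last threshold: cull the object entirely
    }

In Unity you would normally use the built-in LODGroup component rather than rolling this yourself, but the selection logic it applies is essentially the same.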

Why do we use CPUs for ray tracing instead of GPUs?

After doing some research on rasterisation and ray tracing, I have discovered that there is not much information available on the internet about how CPUs are used for ray tracing. I came across an article about Pixar and how they pre-rendered Cars 2 on the CPU, which took them 11.5 hours per frame. Would a GPU not have rendered this faster with the same image quality?
http://gizmodo.com/5813587/12500-cpu-cores-were-required-to-render-cars-2
https://www.engadget.com/2014/10/18/disney-big-hero-6/
http://www.firstshowing.net/2009/michael-bay-presents-transformers-2-facts-and-figures/
I'm one of the rendering software architects at a large VFX and animated feature studio with a proprietary renderer (not Pixar, though I was once the rendering software architect there as well, long, long ago).
Almost all high-quality rendering for film (at all the big studios, with all the major renderers) is CPU only. There are a bunch of reasons why this is the case. In no particular order, some of the really compelling ones to give you the flavor of the issues:
GPUs only go fast when everything is in memory. The biggest GPU cards have, what, 12GB or so, and that has to hold everything. Well, we routinely render scenes with 30GB of geometry that reference 1TB or more of texture. We can't load that into GPU memory; it's literally two orders of magnitude too big. So GPUs are simply unable to deal with our biggest (or even average) scenes. (With CPU renderers, we can page stuff from disk whenever we need to; GPUs aren't good at that.)
Don't believe the hype: ray tracing with GPUs is not an obvious win over CPUs. GPUs are great at highly coherent work (doing the same things to lots of data at once). Ray tracing is very incoherent (each ray can go in a different direction, intersect different objects, shade different materials, access different textures), and this access pattern degrades GPU performance very severely. It's only very recently that GPU ray tracing could match the best CPU-based ray tracing code, and even where it has surpassed it, it's not by much -- not enough to throw out all the old code and start fresh with buggy, fragile code for GPUs. And the biggest, most expensive scenes are the ones where GPUs are only marginally faster. Being much faster on the easy scenes is not really important to us.
If you have 50 or 100 man years of production-hardened code in your CPU-based renderer, you just don't throw it out and start over in order to get a 2x speedup. Software engineering effort, stability, and so on, is more important and a bigger cost factor.
Similarly, if your studio has an investment in a data center holding 20,000 CPU cores, all in the smallest, most power- and heat-efficient form factor you can get, that's also a sunk-cost investment you don't just throw away. Replacing them with new machines containing top-of-the-line GPUs vastly increases the cost of your render farm, and they are bigger and produce more heat, so it literally might not fit in your building.
Amdahl's Law: The actual "rendering" per se is only one stage in generating the final frames, and GPUs don't help with the rest. Let's say that it takes 1 hour to fully generate and export the scene to the renderer, and 9 hours to "render", and out of that 9 hours, an hour is reading textures, volumes, and other data from disk. So of the total 10 hours the user experiences as rendering (push button until final image is ready), only 8 hours can potentially be sped up by GPUs. Even if the GPU were 10x as fast as the CPU for that part, you go from 10 hours to 1+1+0.8 = nearly 3 hours, so a 10x GPU speedup only translates to roughly a 3.5x actual gain. And if the GPU were 1,000,000x faster than the CPU at ray tracing, you'd still have 1+1+tiny, which is only a 5x speedup.
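To make the arithmetic concrete, here is the same calculation written out as a small program; the 1/8/1-hour split is the hypothetical from the paragraph above:

    #include <cstdio>
    #include <initializer_list>

    int main() {
        const double exportHours = 1.0; // generating/exporting the scene: no GPU help
        const double ioHours     = 1.0; // reading textures, volumes, etc. from disk: no GPU help
        const double traceHours  = 8.0; // the ray tracing itself: the only part a GPU accelerates

        for (double gpuSpeedup : {10.0, 1000000.0}) {
            double total = exportHours + ioHours + traceHours / gpuSpeedup;
            std::printf("GPU %.0fx faster -> %.2f hours total (%.1fx overall)\n",
                        gpuSpeedup, total, (exportHours + ioHours + traceHours) / total);
        }
        return 0;
    }

Running it prints 2.80 hours (about 3.5x overall) for the 10x case and 2.00 hours (exactly 5x) for the million-x case, matching the figures above.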
But what's different about games? Why are GPUs good for games but not film?
First of all, when you make a game, remember that it's got to render in real time -- that means your most important constraint is the 60Hz (or whatever) frame rate, and you sacrifice quality or features where necessary to achieve that. In contrast, with film, the unbreakable constraint is making the director and VFX supervisor happy with the quality and look he or she wants, and how long it takes you to get that is (to a degree) secondary.
Also, with a game, you render frame after frame after frame, live in front of every user. But with film, you effectively are rendering ONCE, and what's delivered to theaters is a movie file -- so moviegoers will never know or care if it took you 10 hours per frame, but they will notice if it doesn't look good. So again, there is less of a penalty placed on those renders taking a long time, as long as they look fabulous.
With a game, you don't really know what frames you are going to render, since the player may wander all around the world and view from just about anywhere. You can't and shouldn't try to make it all perfect; you just want it to be good enough all the time. But for a film, the shots are all hand-crafted! A tremendous amount of human time goes into composing, animating, lighting, and compositing every shot, and then you only need to render it once. Think about the economics -- once 10 days of calendar time (and salary) have gone into lighting and compositing the shot just right, the advantage of rendering it in an hour (or even a minute) versus overnight is pretty small, and not worth any sacrifice of quality or achievable complexity of the image.
ADDENDUM (2022):
The world has changed a lot since I wrote this answer in 2016! Once ray tracing acceleration was added to hardware (with NVIDIA RTX cards) ray tracing on GPUs was finally, definitively faster than ray tracing the same scene on a CPU -- for scenes that are of a size that can fit on the GPUs. And GPUs have a lot more memory than they did in 2016, so that includes a much wider range of scenes. Lots of games in 2022 use a combination of rasterization and ray tracing (when available) and probably within a couple years there may be games that are ray traced only. And in the film world, we are all racing to get our renderers ray tracing on GPUs with full feature parity with the CPU ray tracers. But we're not quite there yet. We use the GPUs more and more for various interactive uses during production, but final frames are still CPU rendered for full-complexity frames. But I think we're within a year or two of some portion of final frames being rendered strictly with GPU ray tracing, and probably within 5 years of nearly all final film frames being GPU ray traced (though not anywhere near at realtime rates).

OpenGL power of two texture performance [duplicate]

I am creating an OpenGL video player using Ffmpeg, and none of my videos are power-of-two sized (as they are normal video resolutions). It runs at a fine fps with my Nvidia card, but I've found that it won't run on older ATI cards because they don't support non-power-of-two textures.
I will only be using this on an Nvidia card, so I don't really care about the ATI problem too much, but I was wondering how much of a performance boost I'd get if the textures were power-of-two. Is it worth padding them out?
Also, if it is worth it, how do I go about padding them out to the nearest larger power of two?
Writing a video player, you should update your texture content using glTexSubImage2D(). This function allows you to supply arbitrarily sized images that will be placed somewhere within the target texture. So you can initialize the texture first with a call to glTexImage2D() with the data pointer being NULL, then fill in the data.
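A sketch of that pattern, assuming RGBA pixels coming out of your Ffmpeg conversion step (the helper function names are illustrative; the GL calls are the standard ones):

    #include <GL/gl.h>

    // Round up to the next power of two, e.g. 720 -> 1024.
    static int NextPowerOfTwo(int x) {
        int p = 1;
        while (p < x) p <<= 1;
        return p;
    }

    // Allocate a power-of-two texture once, with no initial data (NULL pointer).
    GLuint CreateVideoTexture(int videoWidth, int videoHeight) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                     NextPowerOfTwo(videoWidth), NextPowerOfTwo(videoHeight),
                     0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        return tex;
    }

    // Each frame: overwrite just the video-sized region in the corner.
    void UploadFrame(GLuint tex, int videoWidth, int videoHeight, const void* pixels) {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                        videoWidth, videoHeight,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }

When drawing, scale your texture coordinates by videoWidth/potWidth and videoHeight/potHeight so that only the filled region of the texture is sampled.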
The performance gain of pure power of 2 textures strongly depends on the hardware used, but in extreme cases it may be up to 300%.

Best physics engine with VB.net

I'm building a simple program: basically some simple meshes, some cubes, etc. I'll be having them crash around a bit (against some solid objects). I've worked with a couple of rendering engines, but nothing like what I want (i.e. with physics :] ).
Give this a try: http://sourceforge.net/projects/vbphysxdx9/
It uses PhysX by Nvidia. You will need an Nvidia graphics card with PhysX support to use it, though.

Hardware requirements for development machines

Given that:
SSDs are now [high-end] mainstream
Two+ cores are not hard to come across
24+ inch monitors are plentiful
Dual video outputs are the norm
64-bit OSes complement very cheap memory
Can I ask two questions of hardware-enthused developers [not the gamers!]?
What high-end hardware item could you not develop without [what is your hardware crutch]?
What should a baseline [no frills] dev machine look like and what basic specs should it have to ensure that any dev can still be productive?
Note: it might be worth mentioning what platform and dev environment your baseline is for.
The most important hardware update (and most underrated) is the monitor.
If you're coding 8+ hours a day, don't hesitate on cost: get a nice high-end 24" at least, or even a pair of them.
An absolute must-have is a good monitor that is easy on the eyes; after all, you stare at it all day. I go with a 24" Samsung (I forget the model). I used to go with two monitors but prefer the one wide screen now. You need to be able to get docs and code on the same screen.
Second is a good chair and desk (sorry, not very technical).
Followed lastly by plenty of RAM (2GB minimum). Once you get past any thrashing due to paging, you are fine. Anything with a dual core has enough processing power.
This is entirely dependent upon what you are developing for. Take your target system requirements, double them, and use that as the minimum spec for the dev machines. That may seem odd, but it is about the level I've found I've needed when developing various projects.
As others have mentioned, the importance of good monitors, keyboards, and chairs is underrated. If you are going to spend a lot of time at this PC, those are very important.
RAM is cheap, and you'll likely never have enough. If you are running 32-bit Windows, max it out at 4GB of RAM. If you are using an OS that supports more than 4GB of RAM (Linux, or 64-bit Windows, for example), start at 8GB minimum, and if you are working on multimedia projects, be ready to upgrade from there.
Best bang for the buck on CPUs seems to be quad cores right now, so I would say that at least a quad core (2.4GHz or so) should be the minimum. You may not see much difference going beyond that until you get to dual quad cores, which is a large price jump.
Find a reliable hard drive or two. Reliability and speed are going to be more important than size. Personally, I currently go for a pair of 640GB Western Digital drives in all machines I build.
24 inch or larger monitor
Baseline dev machine would be a 15 inch MacBook Pro with 4GB of RAM. (For web development)
A pair of the fastest hard drives available. I never realized how much difference separate, fast system and data drives can make.
(And please, none of those slow SSDs that you usually get nowadays in <$2000 laptops -- if you really want to hop on the SSD train, get a proper one; otherwise you might as well use a 32GB SDHC card.)
There's been a study on the optimum size of computer monitors by the University of Utah, covered in a Wall Street Journal article. Not surprisingly, bigger monitors boost the speed of work. Surprisingly, there seems to be an optimum size of 26". There's no explanation why, though.
I am not a developer, but do sit at the computer all day.
For me the must-haves are a desk that is a good height or easily adjusted; dual monitors (a 26" plus a second widescreen that can rotate to portrait, to view documents full-length without a lot of scrolling); a computer with a dual core (I'd prefer quad) and at least 4GB of RAM (I tend to do a lot of VM work); and, as stated above, a good chair with lumbar support that will let me lean back when I am reading or pondering a situation. The last one is specific to me: since I wear glasses and tend to hear high frequencies, I prefer incandescent lighting with a slightly warm spectrum. I can hear a fluorescent ballast over someone playing loud speakers. I also find I get less glare and can focus my eyes for longer periods of time with incandescent.
RAM, lots and lots of RAM. RAM compensates for many performance bottlenecks.
But do make sure you keep an eye on the memory usage of whatever you're building. When you're building an app with a 60MB footprint on a system with 2GB of developer tools loaded at run-time, it's easy to lose that footprint in the noise, even when it doubles.
Don't bother shelling out for a high-end CPU. The CPU is the most overpowered component in modern systems; a standard cheap dual core should be more than enough. Compiles tend to be disk-bound, not CPU-bound, so that money is better invested in a faster drive.
Dell Outlet sells 30" LCD monitors for about $800.00.
That is a good place to start.
Besides that, invest time into tweaking your OS to your needs and automate as much as possible.
It's like I keep telling people, "I'll upgrade to the latest Mac when it somehow manages to help me run more Terminal windows and Text Editors." Until then, you're better off saving the money for a new machine and investing it into a decent monitor and keyboard.
It depends on the project.
For large imaging applications, like medical imaging, you may require: large monitors (you have to view the images properly and in detail), powerful graphics, lots of RAM, and a good processor (imaging applications usually need lots of power).
I'm going to echo most people on the large monitors part, and you can always make good use of a pair.
Second to that is a good keyboard. What that means varies depending on which school of keyboard design you subscribe to. I'm with the ergonomic camp.
Following that is 2GB+ of RAM and a recent desktop CPU (anything released in the past 2-3 years, really).
As has been previously said, large monitors are essential. These days it's not that expensive to have two hooked up to a machine. At work I'm lucky enough to have three hooked up to one PC, and it makes a huge difference to how I work.
A decent keyboard and mouse are essential. For the last 10 or so years I've always taken my own mouse and keyboard to work, as you typically end up with whatever comes from the PC manufacturer. I use a Microsoft ergonomic keyboard; it's very hard to find these in the workplace, or to get your employer to stump up for one, but I've never worked anywhere where the employer had an issue with me bringing my own.
High-end hardware I cannot do without:
Kinesis contoured ergonomic keyboard ($300)
Fast twin SATA drives, striped for speed ($150)
Affordable luxuries I could do without:
Dell 30" widescreen monitor ($900)
Twin Velociraptor hard drives ($600)