HoloLens external rendering

Does anyone have a good solution for external rendering for Microsoft HoloLens apps? Specifically: is it possible to have my laptop render a number of 3D objects that is too much for the HoloLens GPU, and then display them on the HoloLens over Wi-Fi, including the spatial mapping and interaction?

It's possible to render remotely, both directly from the Unity editor and from a built application.
While neither is quite the "good solution" you're after, both allow very intensive applications to at least run.
This walks you through adding it to an app you're building:
https://learn.microsoft.com/en-us/windows/mixed-reality/add-holographic-remoting
This is for running directly from the editor:
https://blogs.unity3d.com/2018/05/30/create-enhanced-3d-visuals-with-holographic-emulation-in-uwp/

I don't think this is possible, since you can't really access the OS or the processor on the HoloLens. Even if you do manage to send the data to a third party to process, the result will still need to be run back through the HoloLens, which is effectively the same as before.
You may find a way to hook up a VR backpack to it, but even then I highly doubt it would be possible.
If you are having trouble rendering 3D objects, you should reduce the number of triangles, use a lower-resolution shader, or reduce the size of the object. The biggest factor in processing 3D objects on the HoloLens is how much screen area is being drawn: if your object takes up 25% of the view instead of 100%, it will be easier for the HoloLens to process.
Also, if you can't avoid having a lot of objects in the scene, check out LOD (level of detail), which reduces the resolution of objects based on their distance from the camera.
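To make the LOD idea concrete, here is a minimal, platform-independent sketch of distance-based level selection; the struct, thresholds, and triangle counts are made up for illustration (in Unity you would normally just use the built-in LODGroup component rather than rolling your own):

```cpp
#include <cstddef>
#include <vector>

// One entry per detail level: use this mesh while the camera is
// closer than maxDistance. Levels are ordered near-to-far.
struct LodLevel {
    float maxDistance;   // switch to the next level beyond this
    int   triangleCount; // rough rendering cost of this level
};

// Pick the detail level for an object at the given camera distance.
// Beyond the last threshold the object is culled (returns -1).
int selectLod(const std::vector<LodLevel>& levels, float distance) {
    for (std::size_t i = 0; i < levels.size(); ++i) {
        if (distance < levels[i].maxDistance) return static_cast<int>(i);
    }
    return -1; // too far away: don't draw it at all
}
```

The point is that the expensive high-triangle mesh is only ever submitted when the object is close enough to fill a large part of the view, which is exactly the fill-rate situation described above.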

Related

How to modify DirectX camera

Suppose I have a 3D (but not stereoscopic) DirectX game or program. Is there a way for a second program (or a driver) to change the camera position in the game?
I'm trying to build a head-tracking plugin or driver that I can use for my DirectX games/programs. An inertial motion sensor will give me the position of my head but my problem is using that position data to change the camera position, not with the hardware/math concerns of head tracking.
I haven't been able to find anything on how to do this so far, but iZ3D was able to create two cameras near the original camera and use it for stereoscopic stuff, so I know there exists some hook/link/connection into DirectX that makes camera manipulation by a second program possible.
If I am able to get this to work I'll release the code.
-Shane
Hooking Direct3D calls is by nature just hooking DLL calls; i.e., it's not something special to D3D, but a generic technique. Try googling for "hook dll" or start from here: [C++] Direct3D hooking sample. As always happens with hooks, there are many caveats, and you'll have to write a fair amount of boilerplate to satisfy all the needs of the hooked application.
That said, manipulating the camera in games usually gives poor results. There are at least two key features of modern PC games that will severely limit your idea:
Pre-clipping. Almost any game engine culls objects that are outside the viewing frustum. So when you rotate the camera to the side, you won't see the objects you'd expect to see in the real world: they were simply never sent to D3D, since the game doesn't know the viewing plane has changed.
Multiple render passes. Many popular post-processing effects are done in extra passes (either over the whole scene or just part of it). Mirrors and in-game "screens" are the best-known examples. Without knowing which camera you're manipulating, you'll most likely just break the scene.
By the way, #2 is the reason stereoscopic mode is not 100% compatible with all games. For example, in the Source engine, HDR scenes are rendered in three passes, and if you don't know how to distinguish them you'll do nothing but break the game. Look at how NVIDIA implements its stereoscopic mode: they write a separate hook for every popular game, and even with this approach it's not always possible to get the expected result.
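Caveats aside, the camera manipulation itself is small once you have intercepted the game's view matrix (e.g. inside a hooked IDirect3DDevice9::SetTransform call for D3DTS_VIEW). A minimal sketch of the math, using a plain row-major 4x4 matrix and a translation-only head offset; the function names are hypothetical and the actual hooking machinery is omitted, since that part is Windows/DirectX specific:

```cpp
#include <array>

using Mat4 = std::array<float, 16>; // row-major 4x4, D3D row-vector style

// 4x4 matrix product: out = a * b.
Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 out{};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            for (int k = 0; k < 4; ++k)
                out[r * 4 + c] += a[r * 4 + k] * b[k * 4 + c];
    return out;
}

// Translation matrix for the *inverse* of the head movement:
// moving the head +x must shift the whole scene -x in view space.
Mat4 headOffset(float dx, float dy, float dz) {
    return {1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            -dx, -dy, -dz, 1};
}

// What a hooked SetTransform(D3DTS_VIEW, ...) would do: compose the
// head-tracker offset onto the game's view matrix before passing it on.
Mat4 injectHeadTracking(const Mat4& gameView, float dx, float dy, float dz) {
    return multiply(gameView, headOffset(dx, dy, dz));
}
```

This only handles translation; adding head rotation is the same idea with a rotation matrix composed in, and it runs straight into the pre-clipping problem described above.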

Is QML worth using for Embedded systems which run around 600MHz and no GPU?

I'm planning to build an embedded system which is almost like an organizer, i.e. it handles contacts, games, applications, and Wi-Fi/2G/3G for internet. I planned to build the UI with QML because of its easy-to-use and quick application-building nature, and to use a Linux kernel.
But after reading these articles:
http://qt-project.org/forums/viewthread/5820 &
http://en.roolz.org/Blog/Entries/2010/10/29_Qt_QML_on_embedded_devices.html
I am discouraged and am reconsidering my idea of using QML!
My hardware will have this configuration: a processor around 600MHz, 128MB of RAM, and no GPU.
Please give your comments on this and suggest some alternatives.
Thanks in advance.
inblueswithu
I have created a QML application for the Nokia E63, which has a 369MHz processor and 128MB of RAM; I don't think it has a GPU. The application is a stopwatch application. I have animated button-click events, like jumping balls. The animations are really smooth, even when two buttons jump at the same time. A 600MHz processor can be expected to handle QML easily.
This is the link for the sis file: http://store.ovi.com/content/184985. If you have a Nokia mobile you can test it.
Maybe you should consider building QML elements by hand instead of exporting them from Photoshop or GIMP. For example, using Item in place of Rectangle where possible is more efficient. So give it a try: create a rough sketch with a good amount of animations to check whether your processor can handle it. If that doesn't work as expected, then consider Qt widgets to build your UI instead.
QML applications work fine on low-spec Nokia devices. I have made one for the 5800 XpressMusic smartphone without any problems.

Improving the efficiency of Kinect for Windows DTWGestureRecognizer Application

Currently I am using the DTWGestureRecognizer open source tool for Kinect SDK v1.5. I have recorded a few gestures and use them to navigate through Windows 7. I also have implemented voice control for simple things such as opening PowerPoint, Chrome, etc.
My main issue is that the application uses quite a bit of CPU power, which causes it to become slow. During gestures and voice commands, the CPU usage sometimes spikes to 80-90%, which makes the application unresponsive for a few seconds. I am running it on a 64-bit Windows 7 machine with an i5 processor and 8GB of RAM. I was wondering if anyone with experience using this tool, or Kinect in general, has made it more efficient and less of a performance hog.
I have already removed the sections which display the RGB video and the depth video, but even that did not make a big impact. Any help is appreciated, thanks!
Some of the factors I can think of are:
Reduce the resolution.
Reduce the frames being recorded/processed by the application by using the polling model, i.e. the OpenNextFrame(int millisecondsWait) method of DepthStream, ColorStream, and SkeletonStream, instead of the event model.
Use the Default tracking mode instead of Seated (sensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Default), as Seated consumes more resources.
Use sensor.MapDepthFrameToColorFrame instead of calling sensor.MapDepthToColorImagePoint in a loop.
Last and most important: the algorithm used in the open-source tool itself.
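The frame-reduction point can be sketched generically: instead of running the expensive recognizer on every frame the sensor delivers, wrap it so only every Nth frame is processed. This is an illustrative, platform-independent sketch (the FrameThrottle class is hypothetical; with the real SDK you would combine it with the polling OpenNextFrame call mentioned above):

```cpp
#include <functional>
#include <utility>

// Wraps an expensive per-frame handler (e.g. DTW gesture matching)
// so that only every Nth incoming frame is actually processed.
// Dropping frames this way trades recognition latency for CPU headroom.
class FrameThrottle {
public:
    FrameThrottle(int keepEvery, std::function<void(int)> handler)
        : keepEvery_(keepEvery), handler_(std::move(handler)) {}

    // Called once per incoming frame; frameId is just a counter here.
    void onFrame(int frameId) {
        if (count_++ % keepEvery_ == 0) handler_(frameId);
    }

private:
    int count_ = 0;
    int keepEvery_;
    std::function<void(int)> handler_;
};
```

With keepEvery set to 3, a 30fps skeleton stream drives the recognizer at an effective 10fps, which is often still plenty for whole-body gestures.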

retrieve video ram usage iphone

I have seen this article about retrieving an iPhone app's memory usage programmatically: programmatically-retrieve-memory-usage-on-iphone. It's great!
In my project I want to retrieve the amount of free VRAM, because my app loads many textures, and I must preload these into video RAM for fast rendering.
But in vm_statistics I don't see these properties: vm_statistics man page.
Thanks a lot for your help.
As you've seen so far, getting hard numbers for GL texture memory usage is quite difficult. It's complicated further by the fact that CoreAnimation will also use GL Texture memory without "consulting" you, including from processes other than yours.
Practically speaking, I suggest that you use the VM Tracker instrument in Instruments to watch changes in the VM pages your process maps under the IOKit tag. It's a bit crude, but it's the best approach I've found. In my experience, this process is largely guess and check.
You asked specifically for a way to determine the amount of free VRAM, but even if you could get that info, it's not really likely to be helpful. Even if your app is totally OpenGL and uses no UIViews or CoreAnimation layers, other processes, most importantly those more privileged than yours, can consume that memory at any time, either explicitly or implicitly through CoreAnimation. It's also probably safe to assume that if your app prevents those more-privileged apps from getting the texture memory they need, your process will be killed.
Put differently, even if you could ascertain the instantaneous state of the GL texture memory, you probably couldn't count on being the only consumer of that resource, so it's pretty useless.
At the end of the day, you should spend your effort designing your app to be a good citizen in terms of GL memory and manage (read: minimize) your own consumption of texture memory. iOS devices are not old-school game consoles -- you are not the only thing running -- so you need to be mindful and tolerant of that fact, lest your app be one of those where everyone has to reboot their phone every few minutes in order to use it.

How to estimate size of Windows CE run-time image

I am developing an application, and need to estimate how much resources (RAM and ROM) it will need to run on a device. I have been looking online, but couldn't find any good tip on how to do this.
The system in question is an industrial system. The application itself will need to have a .NET Compact framework, and following components besides Windows CE Core: SYSGEN_HTTPD (Web Server), SYSGEN_ATL (Active Template Libraries), SYSGEN_SOAPTK_SERVER (SOAP Server), SYSGEN_MSXML_HTTP (XML/HTTP), SYSGEN_CPP_EH_AND_RTTI (Exception Handling and Runtime Type Information).
Tx
There really is no way to estimate this, because application behavior and code can have wildly different requirements. Something that does image manipulation is likely to require more RAM than a simple HMI, but even two graphical apps that do the same thing could differ vastly based on how image algorithms and buffer sizes are set up.
The only way to really get an idea is to actually run the application and see what the footprint looks like. I would guess that you're going to want a BOM that includes at least 64MB of RAM and 32MB of flash at a bare minimum. Depending on the app, I'd probably ask for 128MB of RAM. Flash would depend greatly on what the app needs to do.
Since you are specifying core OS components, and since I assume you can estimate your own application's resources, I take it you are asking for an estimate of the OS as a whole.
The simplest way to get an approximation is to build an emulator image (CE6 has an ARM one); that should give you a sense of the size. The difference from the final image will be in the size of the drivers for the actual platform you will use.