Hardware-accelerated Bing Maps... not accelerated? - silverlight-4.0

In my SL4 application I add a lot of Polyline objects onto a Bing Map control. The end result is that the application is sluggish when, for example, moving the map.
Thus, I've tried enabling GPU acceleration.
I've added an extra parameter to the .aspx page hosting the SL application:
<param name="EnableGPUAcceleration" value="true" />
I've also added the following bit of XAML code to the map control:
<bing:Map.CacheMode>
    <BitmapCache/>
</bing:Map.CacheMode>
Unfortunately, it's still just as slow as before. Did I forget something? Or does this mean Bing Maps won't benefit from GPU acceleration?

Hardware acceleration won't help when you have a lot of polylines/polygons. Here are two resources for getting good performance with the Bing Maps Silverlight control:
http://rbrundritt.wordpress.com/2010/11/19/optimize-map-layers-in-bing-maps-silverlight/
http://rbrundritt.wordpress.com/2010/03/06/multipolygon-multilinestring-classes-for-bing-maps-silverlight/
Using these two pieces of code I'm able to render 95 MB of polygon data without any performance issues.
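The common thread in those posts is cutting down the number of vertices and UIElements the map control has to manage. To illustrate the idea (this is not the code from the linked posts), here is a minimal C# sketch that thins a polyline's vertices before handing it to the map; the Thin helper and the tolerance value are hypothetical, and the standard Microsoft.Maps.MapControl types are assumed.

using System;
using System.Collections.Generic;
using System.Windows.Media;
using Microsoft.Maps.MapControl;

public static class PolylineThinning
{
    // Keep a vertex only if it is at least `tolerance` degrees away from the
    // last vertex kept (simple radial-distance simplification). A production
    // version would also always keep the final vertex of the line.
    public static LocationCollection Thin(IList<Location> points, double tolerance)
    {
        var thinned = new LocationCollection();
        Location last = null;
        foreach (Location p in points)
        {
            if (last == null ||
                Math.Abs(p.Latitude - last.Latitude) > tolerance ||
                Math.Abs(p.Longitude - last.Longitude) > tolerance)
            {
                thinned.Add(p);
                last = p;
            }
        }
        return thinned;
    }

    // Build one MapPolyline from the thinned points instead of many small ones.
    public static MapPolyline BuildPolyline(IList<Location> points)
    {
        return new MapPolyline
        {
            Locations = Thin(points, 0.0005), // tolerance is an arbitrary starting value
            Stroke = new SolidColorBrush(Colors.Red),
            StrokeThickness = 2
        };
    }
}

Fewer vertices per polyline, and fewer UIElements overall, is what keeps panning responsive; the two linked posts automate this far more rigorously.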

Related

HoloLens external rendering

Does anyone have a good solution for external rendering for Microsoft HoloLens apps? Specifically: is it possible to have my laptop render a set of 3D objects that is too heavy for the HoloLens GPU and then display them on the HoloLens over Wi-Fi, including the spatial mapping and interaction?
It's possible to render remotely, both directly from the Unity editor and from a built application.
While neither achieves your goal of a "good solution", they both allow very intensive applications to at least run at all.
This walks you through how to add it to an app you're building:
https://learn.microsoft.com/en-us/windows/mixed-reality/add-holographic-remoting
This is for running directly from the editor:
https://blogs.unity3d.com/2018/05/30/create-enhanced-3d-visuals-with-holographic-emulation-in-uwp/
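For orientation, here is roughly what the in-app connection flow looked like with the Unity 2018-era UnityEngine.XR.WSA API that these links describe. Treat it as a sketch only: these APIs have since been deprecated and removed from newer Unity versions, and the IP address is a placeholder.

using System.Collections;
using UnityEngine;
using UnityEngine.XR;
using UnityEngine.XR.WSA;

public class RemotingConnector : MonoBehaviour
{
    public string holoLensAddress = "192.168.0.2"; // placeholder: your HoloLens IP

    public void ConnectToDevice()
    {
        // Start streaming to the headset at the given address.
        HolographicRemoting.Connect(holoLensAddress);
        StartCoroutine(LoadWindowsMRWhenConnected());
    }

    private IEnumerator LoadWindowsMRWhenConnected()
    {
        // Wait until the streamer reports an established connection.
        while (HolographicRemoting.ConnectionState != HolographicStreamerConnectionState.Connected)
            yield return null;

        // Switching the XR device only takes effect on the next frame.
        XRSettings.LoadDeviceByName("WindowsMR");
        yield return null;
        XRSettings.enabled = true;
    }
}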
I don't think this is possible, since you can't really access the OS or the processor at all on the HoloLens. Even if you do manage to send the data to a third party to process, the result still needs to be pushed back through the HoloLens, which leaves you with the same bottleneck as before.
You may find a way to hook up a VR backpack to it, but even then I highly doubt it would be possible.
If you are having trouble rendering 3D objects, you should reduce the number of triangles, use a lower-resolution shader, or reduce the size of the object. The biggest factor in processing 3D objects on the HoloLens is how much of the lens is being drawn on: if your object takes up 25% of the view instead of 100%, it will be easier to process.
Also, if you can't avoid a lot of objects in the scene, check out LOD (level of detail), which swaps in lower-resolution versions of objects the farther the camera is from them; see the sketch below.
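As a concrete illustration, here is a minimal Unity C# sketch that sets up an LODGroup at runtime. The child object names "HighDetail"/"LowDetail" and the transition thresholds are invented for the example.

using UnityEngine;

public class LodSetup : MonoBehaviour
{
    void Start()
    {
        // Assumes two child objects holding high- and low-poly versions of the mesh.
        Renderer high = transform.Find("HighDetail").GetComponent<Renderer>();
        Renderer low = transform.Find("LowDetail").GetComponent<Renderer>();

        var lods = new LOD[]
        {
            // High-detail mesh while the object covers at least 40% of screen height...
            new LOD(0.4f, new[] { high }),
            // ...low-detail mesh down to 5%, culled entirely below that.
            new LOD(0.05f, new[] { low })
        };

        LODGroup group = gameObject.AddComponent<LODGroup>();
        group.SetLODs(lods);
        group.RecalculateBounds();
    }
}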

Kinect v2 XAML performance vs WPF performance

I've recently acquired a new MS Kinect v2 for Windows, and I'm messing with it in order to learn how it works and how I would approach my future ideas for it.
So far, I'm only playing with the samples that come with the Kinect Browser (downloaded with the new SDK), using an almost-new Toshiba C55 notebook (i5 2.5 GHz, 8 GB RAM, NVIDIA 710M).
The thing is, I've tried the "Coordinate Mapping Basics" sample, which comes in several forms (D2D, XAML, HTML and WPF). This sample just removes the background using the depth frame.
I've tried all the versions so far, and the XAML sample runs very, very slowly... while the rest run very smoothly...
So I've tried external code from GitHub which technically does the same thing, also using XAML. And it also runs too slowly.
Since I'm not used to developing for MS platforms, I don't know if it is really a hardware problem or if XAML has higher requirements, and I cannot figure out why it behaves so badly only with XAML.
I tried to find similar questions, but didn't find any that seemed useful for my case.
I know it is probably my fault, but I don't know why... Maybe a misunderstanding of the whole setup?
The external sample I found: https://github.com/Vangos/kinect-2-background-removal
I also tried the CoordinateMapper from the same GitHub account; same issue: https://github.com/Vangos/kinect-2-coordinate-mapping
Thank you all.
UPDATE:
After developing and deploying the WPF app successfully, I started to check the Kinect's performance on Windows RT, and I found lots of problems at the memory level. Windows 8.1 RT is slow and does not support the Kinect v2 very well, at least on my test hardware. These problems may lead to the symptoms described in this other question I found: Kinect camera freeze
This issue also made me notice that the new Kinect v2 is VERY sensitive to ambient temperature.
Hope this helps some Stack Overflow developers with similar problems :).
The Coordinate Mapper XAML and Coordinate Mapper WPF samples both use XAML. The version marked "XAML" is a Windows Store App. The version marked "WPF" is a Windows Desktop app. I didn't see much of a difference on my machine between the two until I ran the Performance and Diagnostics tools in Visual Studio 2013. I suggest running them and creating an analysis report. That will help you discover what exactly is causing the differences.
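For a sense of where the time goes, here is a trimmed C# sketch of the per-pixel hot path that these coordinate-mapping samples share; frame acquisition and the bitmap update are omitted, and the class and method names are invented. Note that the WPF version pushes the result out via WriteableBitmap.WritePixels, while the Windows Store (XAML) version goes through a different WriteableBitmap API, which is one place the two can diverge in a profile.

using System;
using Microsoft.Kinect;

public class BackgroundRemovalCore
{
    private readonly KinectSensor sensor = KinectSensor.GetDefault();
    private DepthSpacePoint[] colorMappedToDepth;

    // Called once per frame; outputPixels is a BGRA buffer the caller
    // then copies into a WriteableBitmap on the UI thread.
    public void Process(ushort[] depthData, byte[] bodyIndexData,
                        byte[] colorPixels, byte[] outputPixels,
                        int depthWidth, int depthHeight)
    {
        if (colorMappedToDepth == null)
            colorMappedToDepth = new DepthSpacePoint[colorPixels.Length / 4];

        // One mapper call per frame; the loop below touches every color pixel.
        sensor.CoordinateMapper.MapColorFrameToDepthSpace(depthData, colorMappedToDepth);

        for (int i = 0; i < colorMappedToDepth.Length; i++)
        {
            DepthSpacePoint p = colorMappedToDepth[i];
            bool isPlayer = false;

            // Color pixels with no depth mapping come back as negative infinity.
            if (!float.IsNegativeInfinity(p.X) && !float.IsNegativeInfinity(p.Y))
            {
                int dx = (int)(p.X + 0.5f);
                int dy = (int)(p.Y + 0.5f);
                if (dx >= 0 && dx < depthWidth && dy >= 0 && dy < depthHeight)
                    isPlayer = bodyIndexData[dy * depthWidth + dx] != 0xFF; // 0xFF = no body
            }

            int o = i * 4; // BGRA
            if (isPlayer)
                Buffer.BlockCopy(colorPixels, o, outputPixels, o, 4); // keep player pixel
            else
                outputPixels[o + 3] = 0; // transparent background
        }
    }
}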

Need to add an interactive 3D model to my otherwise non-3D app

As briefly as I can: are there any frameworks available that I can drop into an iPad app I'm working on, along with a 3D model, that allow me to add a view presenting the model in an interactive format?
The model needs to be rotatable, and ideally I would like to be able to add interactive points onto the model that pop up modal views when tapped.
I have never worked with 3D before in any respect, so I'm coming at that part as a complete novice. The 3D model is being supplied to me and will be available in "various formats". The rest of the app is pure Objective-C, in which I'm proficient enough.
I have Googled and Googled and have come up with nothing so far.
If there are no drop-in frameworks, how much of a challenge is it likely to be to get myself up to speed with what I would need to know? Are there any good starting points to expand my knowledge here?
3D is a complex matter; if you don't see your future dealing with it on a regular basis, I wouldn't recommend writing your own solutions for it.
The closest you can find to a drag-and-drop framework would be the SDK from the manufacturer of the iPhone/iPad GPU. It's pretty easy to use.
PowerVR SDK Download
After a free registration on their website, you can download the SDK, which contains lots of samples with source code. Their framework displays 3D models in their own POD format, which is of course heavily optimized for iOS devices. Ask your 3D model provider to give you the models in POD format (you can find POD converters/exporters for Maya etc. on PowerVR's website as well).

Is QML worth using for embedded systems that run at around 600 MHz with no GPU?

I'm planning to build an embedded system that is almost like an organizer, i.e. it handles contacts, games, applications, and Wi-Fi/2G/3G for internet access. I planned to build the UI with QML because of its ease of use and quick application-building nature, and to use a Linux kernel.
But after reading these articles:
http://qt-project.org/forums/viewthread/5820 &
http://en.roolz.org/Blog/Entries/2010/10/29_Qt_QML_on_embedded_devices.html
I am depressed and reconsidering my idea of using QML!
My hardware will be with these configurations : Processor around 600MHz, RAM 128MB and no GPU.
Please give comments on this and suggest some alternatives.
Thanks in Advance.
inblueswithu
I have created a QML application for the Nokia E63, which has a 369 MHz processor and 128 MB RAM; I don't think it has a GPU. The application is a stopwatch, and I animated button-click events (jumping balls). The animations are really smooth, even when two buttons jump at the same time. A 600 MHz processor can be expected to handle QML easily.
This is the link to the sis file: http://store.ovi.com/content/184985. If you have a Nokia mobile you can test it.
Maybe you should consider building QML elements by hand instead of generating them from Photoshop or GIMP. For example, using Item in place of Rectangle is more optimal. So give it a try, perhaps by creating a rough sketch with a good amount of animations, to check whether your processor can handle it. If it doesn't work as expected, consider plain Qt to build your UI.
QML applications work fine on low-spec Nokia devices. I have made one for the 5800 XpressMusic smartphone without any problems.

Getting started with image processing on Mac OS X

I recently moved from a PC to a MacBook Pro, and I'm starting to go through tutorials on Objective-C and developing in Cocoa. I do a lot of image processing algorithm development work (pixel-by-pixel manipulation) in my day job, so I'd like to create a test image processing app or two for OS X. I'm struggling to figure out where to start. Let's say I want to create a simple application (that I could reuse) that does the following:
load an image via an Open option within the File menu
display this within the GUI
click a button to apply pixel-by-pixel processing
update the displayed image
save the processed image via the Save option within the File menu
Any pointers or links would be most appreciated.
Thanks
Other info:
I'm pretty familiar with OpenCV on Linux, but I haven't looked at using it within the Objective-C/Cocoa/Xcode environment yet; I'm not even sure whether that would be a good idea.
I guess it would be nice to use GPU acceleration as well, but I'm not familiar with OpenGL/OpenCL, so I might have to put that one on the long finger for the moment.
As you are looking at the Apple platform, you should look into the Core Image framework; it will provide you with most of the pre-baked cookies, ready to be consumed in your application.
For more advanced purposes, you can start off with OpenCV.
Best of luck!!
As samfisher suggests, OpenCV is not that hard to get working on the Mac, and Core Image is a great Cocoa framework for doing GPU-accelerated image processing. I'm working on porting my GPUImage framework from iOS to the Mac; it's entirely geared around making accelerated image processing easy to work with, but unfortunately that port isn't working right now.
If you're just getting started on the Mac, one tool that I can point out which you might overlook is Quartz Composer. You have to download the separate Graphics Tools package from Apple's developer site to install Quartz Composer, because it's no longer shipped with Xcode.
Quartz Composer is a graphical development tool that lets you drag and drop modules, connect inputs and outputs, and rapidly develop some fairly interesting things. One task it's great for is rapid prototyping of image processing, using either Core Image or OpenGL shaders. I've even heard of people using OpenCV with it via custom patches. You can easily connect an image or camera source into a filter chain, then edit the filters and see live updates as you work on them, without requiring a compile-run cycle.
If you want some sample QC projects to play with, I have a couple of them linked from this article I wrote a couple of years ago. They both do the same color-based object tracking, with one using Core Image and the other OpenGL shaders. You can dig into that and play around to see how that works, without having to get too far into writing any code.