Vuforia for HoloLens - tracking

I was wondering if anyone else has tried developing a HoloLens application using Vuforia, specifically using Vuforia's capability to recognize and track objects.
I tried it and it seems to be working. I was just not sure about the result I got from the Debug.Log that prints the name of the tracked object.
I placed two trackable targets millimeters away from each other and pointed my gaze at the space between the objects (hoping it would pick up both).
The output window gave me this.
It seems I was able to track both targets, but I want to know whether I tracked two different objects at the same time.
I have this doubt because at some point, even though the HoloLens was in the same position as before, the output changed and started printing only one of the two objects (the one on the right).
I suspect this is caused by the HoloLens' narrow camera field of view or by its limited hardware.

In the VuforiaConfiguration you should be able to set the maximum number of objects your app can track simultaneously. You need to make sure it is set to more than 1.
The image above shows how to set the maximum number of tracked images in Unity.
If you are not using Unity, you'll have to access the VuforiaConfiguration in another way and set the maximum number of simultaneously tracked objects there.
From code you can do it in C# like this:
VuforiaConfiguration.Instance.Vuforia.MaxSimultaneousImageTargets = 2;
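To verify that both targets really are tracked at the same time (rather than the log simply alternating), you can log found/lost transitions per target. Below is a minimal sketch following the pattern of Vuforia's DefaultTrackableEventHandler sample; the exact handler interface varies between Vuforia versions, so treat the names as indicative rather than definitive. Attach one copy to each target's GameObject:

using UnityEngine;
using Vuforia;

// Logs when this target is found or lost. Two "found" messages without an
// intervening "lost" means both targets were tracked simultaneously.
public class TrackableLogger : MonoBehaviour, ITrackableEventHandler
{
    private TrackableBehaviour trackable;

    void Start()
    {
        trackable = GetComponent<TrackableBehaviour>();
        if (trackable != null)
            trackable.RegisterTrackableEventHandler(this);
    }

    // Called by Vuforia whenever this target's tracking status changes.
    public void OnTrackableStateChanged(
        TrackableBehaviour.Status previousStatus,
        TrackableBehaviour.Status newStatus)
    {
        bool tracked = newStatus == TrackableBehaviour.Status.DETECTED ||
                       newStatus == TrackableBehaviour.Status.TRACKED ||
                       newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

        Debug.Log(trackable.TrackableName + (tracked ? " found" : " lost"));
    }
}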

Related

FabricJS v3.4.0: Filters & maxTextureSize - performance/size limitations

Intro:
I've been experimenting with the fabricJS image filtering features, in an attempt to start using them in my web app, but I've run into the following.
It seems fabricJS by default caps the image size (textureSize) for filters at 2048, meaning the largest filterable image is 2048x2048 pixels.
I've attempted to raise the default by calling fabric.isWebGLSupported() and then setting fabric.textureSize = fabric.maxTextureSize, but that still caps it at 4096x4096 pixels, even though the maxTextureSize on my device is in the ~16000 range.
I realize that devices usually report the full value without accounting for the memory actually available, but that still seems like a hard limitation.
So these are the main issues I need to resolve before I can use this feature effectively:
1- Render blocking applyFilters() method:
The current filter application function seems to block rendering in the browser. Is there a way to call it without blocking rendering, so I can show an indeterminate loading spinner or something?
Is it as simple as making the apply-filter method async and calling it from somewhere else in the app? (For context, I'm using Vue, with webpack/babel, which polyfills async/await etc.)
2- Size limits:
Is there a way to bypass the size limit on images? I'm looking to filter images up to 4800x7200 pixels.
I can think of at least one way to do this, which is to "break up" the image into smaller images, apply the filters, and then stitch it back together. But I worry it might be a performance hit, as there will be a lot of canvas exports and canvas initializations in this process.
I'm surprised fabricJS doesn't do this "chunking" by default, as it's quite a comprehensive library, and it already uses WebGL shaders (which are a black box to me) for filtering under the hood for performance. Is there a better way to do this?
My other solution would be to send the image to a service (one I hand-roll, or a pre-existing paid one) that applies the filters somewhere in the cloud and returns the result to the user, but that's not a solution I prefer to resort to just yet.
For context, I'm mostly using fabric.Canvas and fabric.StaticCanvas to initialize canvases in my app.
Any insights/help with this would be great.
I wrote the filtering backend for fabricJS, together with Mr. Scott Seaward (credit to him too), so I can give you some answers.
Hard cap at 2048
Many MacBooks with only an integrated Intel video card report a max texture size of 4096, but then crash the WebGL instance at anything higher than 2280. This was happening widely in 2017, when the WebGL filtering was written; a default of 4096 would have left a LOT of notebooks uncovered. Do not forget mobile phones either.
Since you know your user base, you can raise the limit to whatever your video card allows and whatever the canvas allows in your browser. However big the texture can be, the final image must be copied into a canvas and displayed (the canvas has a different maximum size depending on browser and device).
Render blocking applyFilters() method
WebGL is synchronous, as far as I understand.
Creating parallel execution in a thread for filtering operations that are on the order of 20-30 ms (sometimes just a couple of ms in Chrome) seems excessive.
Also consider that I tried it, but when more than 4 WebGL contexts were open in Firefox, some would get dropped, so I settled on one at a time.
The non-WebGL filtering takes longer, of course, and could probably be done in a separate thread. But fabricJS is a generic library that does vectors, filtering, and serialization; it already has a lot on its plate, and filtering performance is not that bad. I'm open to discussing it, though.
Chunking
The Shutterstock editor uses fabricJS and is the main reason the WebGL backend was written. The editor also has chunking and can filter bigger images with tiles of 2048 pixels. We did not release that as open source, and I do not plan to ask. That kind of tiling limits the kind of filters you can write, because the code only has knowledge of a limited portion of the image at a time; even simple blurring becomes complicated.
Here is a description of the tiling process; it is just a blog post, written for the casual reader and not only for software engineers.
https://tech.shutterstock.com/2019/04/30/canvas-webgl-filtering-concepts
General render-blocking considerations
So fabricJS has some pre-written filters made with shaders.
The timings I note here are from memory and have not been re-verified.
The time spent filtering an image breaks down as follows:
Uploading the image to the GPU (I do not know how many ms)
Compiling the shader (up to 40 ms, it depends)
Running the shader (around 2 ms)
Downloading the result from the GPU (around 0 ms or 13 ms, depending on which method is used)
Now the first time you run a filter on a single image:
The image gets uploaded
Filter compiled
Shader Run
Result downloaded
The second time you do this:
Shader Run
Result downloaded
When a new filter is added or a filter is changed:
New filter compiled
One or both shaders run
Result downloaded
The most common errors I have noticed in applications built with filtering are:
Forgetting to remove old filters, leaving them active with a value near 0 that produces no visual change but still adds time.
Connecting the filter to a slider change event without throttling, which depending on the browser/device can trigger up to 120 filtering operations per second.
Look at the official simple demo:
http://fabricjs.com/image-filters
Use the sliders to filter, apply even more filters; everything seems pretty smooth to me.

Regrounding Zero Based ColumnSeries in Apache/Adobe Flex

I have tweeted an image illustrating the problem with Flex ColumnSeries on a PlotChart when trying to overlay one on top of another.
Essentially, it can display one series fine, and two or more OK on initialization, but after a bit of manipulation (in the user session) the columns lose their sense of where zero is and begin to float (these series have no minField, so zero is their starting point). FWIW: the axis for these columns is on the right, but that can change given the type of data displayed.
The app this is for allows users to turn multiple series of multiple plotting styles on and off, change visual parameters, and even the order in which the series stack on top of each other -- just to give you an idea of what's going on.
Due to how dynamic this all is, I am doing most of the code in ActionScript.
So the questions are:
Is this fixable? Googling around has provided no insights, regardless of how I phrase the inquiry.
Is there a refresh function or equivalent within PlotChart/CartesianCharts that may help?
Could this be a problem not with the chart canvas, but rather with the axis the series points to? Or with the series itself?
If it has not been made clear already: I am lost on this. I have known about the issue for about a year now; it was first discovered on a beta version of the app I am working on, but it took a while for it to surface in an average user session. As the complexity of the app has grown (by client demand), the issue takes a lot less time to surface.
The issue also occurs on all versions of Flex I have used: 4.5, 4.6, 4.9... etc.
Please help, or offer pointers. Thanks!

3D objects are not shown in their regular shape at a distance

I am working on a game that was developed earlier by someone else. I am facing a problem: when the player (with the camera) starts running along the road, the buildings are not shown in their regular shape, and as we move forward (closer to the buildings) they regain their original shapes. Sometimes the buildings on either side of the road are not visible to the camera at all (empty space), and when we move closer a building suddenly becomes visible. I think it may be a Unity3D settings problem (rendering, camera, or quality); it may have been done to increase performance on mobile devices.
Does anybody know what the issue may be or how to resolve it?
Any help will be appreciated. Thanks in advance.
This sounds like it's a problem with the available LODs for each building's 3D model.
Basically, 3D games work by having 2-3 different versions of each 3D model, with varying Levels Of Detail (LODs). For example, if you have a house model that uses 500 polygons, you'll probably have another two versions (e.g. 250 and 100 polygons), which are used depending on the distance between the player and the object. The farther away the player is, the simpler the version used.
The issue occurs when developers use automatically generated LOD models, which can look distorted or not appear at all. Unity probably auto-generates them, but I'm unsure where you'll find the settings for this in Unity. However, I've seen 3D models on the Unity Asset Store offering different LODs, so Unity probably gives you the option to set your own. The simplest solution would be to increase the distance at which the LODs change (see the sketch below), while the more involved solution would be to create custom lower-poly versions of the 3D models for larger distances.
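As a rough illustration of the simple route, here is a minimal sketch using Unity's LODGroup API to push the LOD transitions farther away from code; the threshold values are purely illustrative and need tuning per scene:

using UnityEngine;

// Lowers a LODGroup's transition thresholds so the detailed LODs stay
// active at greater distances. Attach to an object that has a LODGroup.
public class LodDistanceTweaker : MonoBehaviour
{
    void Start()
    {
        LODGroup lodGroup = GetComponent<LODGroup>();
        if (lodGroup == null)
            return;

        LOD[] lods = lodGroup.GetLODs();

        // screenRelativeTransitionHeight is the fraction of screen height
        // below which the group switches to the next, simpler LOD.
        // Lowering it keeps the current LOD visible from farther away.
        if (lods.Length >= 2)
        {
            lods[0].screenRelativeTransitionHeight = 0.05f; // LOD0 -> LOD1
            lods[1].screenRelativeTransitionHeight = 0.01f; // LOD1 -> culled
            lodGroup.SetLODs(lods);
            lodGroup.RecalculateBounds();
        }
    }
}

The same thresholds can also be adjusted by dragging the LOD boundaries on the LODGroup component in the Inspector, which is usually easier than doing it from code.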
I have resolved the problem. It was due to the LOD (Level Of Detail) setup used for the objects (buildings) in Unity3D to improve performance on slower devices. LOD provides multiple levels of detail for an object, which you can adjust to your needs. In my specific case, the buildings appeared suddenly because LOD1 had a different (wrong) position: for LOD1 the building was in the wrong place, while for LOD0 it was in the right place. So when my camera looked from a distance it saw LOD1, which was in the wrong place, and hence saw empty space with no building at the expected position. When the camera came closer it saw LOD0, in which the building is in the right position, so the buildings seemed to suddenly appear or become visible.

Inserting CCParticleSystemQuad between sprites in different CCSpriteBatchNodes

I currently have a few layers in a Cocos2D scene (running in Kobold2D). Each layer has a sprite batch node attached to it; I need batch nodes given the ridiculous number of sprites I have on screen at once. Everything is working fine, and I've set up a little particle system. The problem I'm running into is that the CCParticleBatchNode's particle emitters are always on top of everything (as it is the layer with the highest zOrder), which obviously doesn't work for an isometric game.
Is there a way to sneak the emitters between the sprites on any of my layers containing CCSpriteBatchNodes? I've tried messing around with vertexZ (I'm on the newest version of cocos2d 2.x), but no matter what I do it doesn't seem to change anything, even though the Lua config file for Kobold2D that should enable this is set properly and the programForKey:kCCShader_PositionTextureColorAlphaTest shader is enabled on my batch nodes. But maybe this isn't even the best solution?
Has anyone run into anything like this, or can you suggest any trade-offs or tricks I'm not thinking of?
To use vertexZ you need to enable depth buffering (see config.lua). vertexZ is the only way to change the draw order between sprite batches and other nodes.

Location manager, not accurate even after setting kCLLocationAccuracyBest

Hi there.
I am using the location manager and MapKit. I am able to get the current location, but it's not accurate enough; this is my problem:
My current location on the map is, for example, 3.0856333888778926, 101.67204022407532, but the location manager only returns +3.08370327, +101.67506444, which is a few decimal places short.
This results in the wrong location (about 1 km away) when I try to show directions.
I have already set the desired accuracy to kCLLocationAccuracyBest.
Any suggestions?
Where are you trying it? Indoors, GPS accuracy is inherently limited (usually not by 1 km, though; but within big cities, reflections from buildings are possible). And another thing: is the measurement done in the simulator? I'm not sure how the location is determined there, but in my tests I'm also usually quite far off my actual position.
It may be related to how you have set up your location manager.
Could you please post it here for us to check? Maybe that would help.
Are you on Wi-Fi? This happens to me when I am on Wi-Fi; when I switch to EDGE/3G, everything returns to normal. Try the standard Maps application and see whether it also shows you in the wrong place.
Try kCLLocationAccuracyBestForNavigation if you want the highest possible accuracy combined with additional sensor data. From the documentation:
kCLLocationAccuracyBestForNavigation
This level of accuracy is intended for use in navigation applications that require precise position information at all times and are intended to be used only while the device is plugged in.