How do augmented reality browsers track your location accurately enough to overlay relevant information? - gps

AR browsers, such as Wikitude and Layar, are available for iPhone and Android smartphones.
When you point your camera at a landmark, they automatically overlay previously stored information for that location on top of it (e.g. the name of a restaurant over its door).
If the accuracy of GPS is, as some state, ~10 m, how can this be done accurately?
I mean, if they track the location of your phone to display geographically accurate information, a difference of even 2-3 m can cause havoc.

You can check for yourself in the code of mixare, an augmented reality browser released under a free software license (GPLv3). The source code is available on GitHub for both Android and iPhone.
To answer your question: there are errors, mostly because of the compass readings (digital compasses are unreliable because they pick up every kind of noise). What helps is that you are usually looking at objects that are quite big (buildings etc.), hence the error is not THAT visible to the end user, but it's still there, trust me :)
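To put rough numbers on this (a back-of-the-envelope sketch, not code from mixare): the angular shift caused by a ~10 m GPS error shrinks quickly with distance to the object, while compass noise stays at several degrees no matter what.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double kPi = 3.14159265358979323846;
    const double kGpsError = 10.0;  // assumed GPS position error, meters

    // Worst case: the position error is perpendicular to the line of
    // sight, so the overlay shifts by atan(error / distance).
    const double distances[] = {20.0, 100.0, 500.0};
    for (double d : distances) {
        double shiftDeg = std::atan2(kGpsError, d) * 180.0 / kPi;
        std::printf("POI at %5.0f m -> overlay shift up to ~%4.1f deg\n",
                    d, shiftDeg);
    }
    // For comparison, a phone's digital compass is routinely off by
    // 5-15 degrees on its own, so beyond ~100 m the compass, not the
    // GPS fix, dominates the overlay error.
    return 0;
}
```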
HTH
Daniele
Disclosure: I am the project leader of mixare and the main developer of the Android version

Related

Relocalize a smartphone on a preloaded point cloud

Being a novice, I need advice on how to solve the following problem.
Say I have obtained, via photogrammetry, a point cloud of part of my room. I then upload this point cloud to an Android phone and want it to track its camera pose relative to this point cloud in real time.
As far as I know, there can be problems with different cameras' intrinsics (a standalone camera or another phone's camera vs. my phone's camera) that can affect the precision of localisation, right?
Actually, it's supposed to be an AR app, so I've tried existing SDKs - Vuforia, Wikitude, Placenote (I haven't tried ARCore yet because my device most likely won't support it). The problem is that they all use their own clouds for their services, and I don't want to depend on them. Ideally, I would perform the 3D reconstruction on my own PC, from which my phone would download the point cloud.
I need SLAM (with IMU fusion) or VIO on my phone, don't I? Are there any ready-to-go implementations in libraries like ARToolKit or, maybe, PCL? Will an existing SLAM system pick up a map reconstructed with other algorithms, or should I use one and the same SLAM system for both mapping and localization?
So, the main question is how to do everything ARCore and Vuforia do without using third-party servers. (I suspect the answer is to devise the same underlying layer that Vuforia and the other SDKs use to employ all available hardware.)
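For reference, the registration step being asked about can be prototyped with PCL's ICP. A minimal sketch, assuming you already have a live partial cloud from some depth or SLAM front end and a rough initial alignment (the file names are placeholders; a real pipeline would add global registration and IMU fusion on top):

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>
#include <iostream>

int main() {
    using Cloud = pcl::PointCloud<pcl::PointXYZ>;
    Cloud::Ptr map(new Cloud), frame(new Cloud);

    // Placeholder file names: the preloaded reconstruction and one
    // live (partial) cloud captured on the phone.
    if (pcl::io::loadPCDFile("room_map.pcd", *map) < 0 ||
        pcl::io::loadPCDFile("live_frame.pcd", *frame) < 0) {
        std::cerr << "could not load point clouds\n";
        return 1;
    }

    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(frame);              // cloud to localize
    icp.setInputTarget(map);                // preloaded map
    icp.setMaxCorrespondenceDistance(0.1);  // meters; tune to map scale

    Cloud aligned;
    icp.align(aligned);  // an initial guess matrix can be passed here

    if (icp.hasConverged()) {
        // 4x4 rigid transform: the frame's pose relative to the map.
        std::cout << icp.getFinalTransformation() << std::endl;
    }
    return 0;
}
```

Note that ICP only converges locally, which is one reason the commercial SDKs pair such registration with VIO and a global relocalization step.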

Defining a tracking area for the Kinect

Is it possible to specify a (rectangular) area for skeleton tracking with the Kinect (using any of the available SDKs)? I want to make sure that only users inside that designated area are tracked and that the sensor is not distracted by people outside it. Think of a game zone, in which a player interacts with the Kinect and where bystanders outside of the zone should be ignored lest they confuse the sensor.
The reason I want this is that many times the Kinect "locks" onto someone or even something, whether it should or not, and then it's difficult for the sensor to track other individuals who come into tracking range. I want to avoid that by defining this zone.
It's not possible to specify a target area for the skeleton tracking with Microsoft's official SDKs, but there are some potential workarounds.
(Note that I'm not familiar with other SDKs for the Kinect, and note that I'm not sure if you are using the Kinect v1 or v2.)
If you are using the Kinect v1, note that it can track 6 players simultaneously (with a skeleton body position), but it can only provide full-body skeletal tracking for 2 players at a time. It's possible to specify which 2 players you want full skeletal tracking for in the official SDK, and you can do this based on which skeletons are in your target game zone.
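A minimal sketch of that selection with the Kinect v1 C++ API (NuiApi.h). It assumes skeleton tracking was enabled with the NUI_SKELETON_TRACKING_FLAG_TITLE_SETS_TRACKED_SKELETONS flag, and the zone bounds are made-up example values:

```cpp
#include <Windows.h>
#include <NuiApi.h>

// Example game zone in sensor space (meters): x is left/right of the
// sensor, z is distance from it. Adjust to your own setup.
static bool inGameZone(const Vector4& p) {
    return p.x > -1.0f && p.x < 1.0f &&
           p.z >  1.0f && p.z < 3.0f;
}

void chooseSkeletonsInZone() {
    NUI_SKELETON_FRAME frame = {0};
    if (FAILED(NuiSkeletonGetNextFrame(0, &frame))) return;

    // Position-only candidates are enough to decide who is in the zone.
    DWORD ids[NUI_SKELETON_MAX_TRACKED_COUNT] = {0};  // room for 2 players
    int n = 0;
    for (int i = 0; i < NUI_SKELETON_COUNT; ++i) {
        const NUI_SKELETON_DATA& s = frame.SkeletonData[i];
        if (n < NUI_SKELETON_MAX_TRACKED_COUNT &&
            s.eTrackingState != NUI_SKELETON_NOT_TRACKED &&
            inGameZone(s.Position)) {
            ids[n++] = s.dwTrackingID;
        }
    }
    // Hand the runtime the (up to 2) players we want fully tracked.
    NuiSkeletonSetTrackedSkeletons(ids);
}
```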
If this isn't the problem, and the problem is that the Kinect (v1 or v2) has already detected 6 players and it can't detect a 7th individual that's in your game zone, then that is a more difficult problem. With the official SDK, you have no control over which 6 players are selected to be tracked. The sensor will lock onto the first 6 players it finds, so if a 7th player walks in, there is no simple way to lock onto that player.
However, there are some possible workarounds that involve resetting the sensor to clear all skeletons and force it to re-select the 6 tracked skeletons (see the thread Skeleton tracking in crowds - Kinect v2):
Kinect body tracking is always scanning and finding candidate bodies to track. The body tracking only locks on when it detects the head and shoulders of a person facing the camera. You could do something like look for stable blob points in the target area and, if there isn't a tracked body, reset the Kinect Monitor service.
The SDK is resilient to this type of failure of the runtime, but it is a hard approach. Additionally, you could employ a way to cover the depth camera (with your hand) to reset the tracking, since this will make all depth/IR data invalid and the tracking will need to rebuild.
-- Carmine Sirignano - MSFT
In the same thread, RobAcheson points out that restarting the sensor is another workaround:
I've been using the by-hand method successfully for a while and that definitely works - when I'm in the crowd :)
I have started calling KinectSensor.Close() and KinectSensor.Open() when there are >6 skeletons and none are in the target area. That seems to be working well too. Now I just need a crowd to test with.
-- RobAcheson
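Sketched with the Kinect v2 C++ API, that second workaround looks roughly like this (the body counting is assumed to happen in your IBodyFrame processing loop and is omitted here):

```cpp
#include <Kinect.h>

// Cycle the sensor when the runtime has locked onto the maximum number
// of bodies (BODY_COUNT == 6) and none of them is inside the game zone.
void resetIfZoneEmpty(IKinectSensor* sensor,
                      int trackedBodies, int bodiesInZone) {
    if (trackedBodies >= BODY_COUNT && bodiesInZone == 0) {
        sensor->Close();  // drops all currently tracked skeletons
        sensor->Open();   // runtime starts scanning for candidates again
    }
}
```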

How can we test a Roku application?

I'm new to Roku development (actually in the R&D phase). I read that we can't test a Roku app on a simulator and need a real device. If we develop an application, how will we test it?
I checked the Roku developer site and various links on the internet, but could not find anything that answers my questions.
As per my info, Roku sells 5 devices, so:
Can we do one app that supports all 5 devices?
Do we need assets in multiple resolutions?
Do I need to buy all devices?
Can we do one app that supports all 5 devices?
Yes. Roku is trying hard to keep their platform coherent, though there are performance differences between the OpenGL and non-OpenGL devices. The "legacy" models (<2222) are no longer supported; the firmware is kept current for the others.
Do we need assets in multiple resolutions?
Theoretically yes; practically, not really. You can make do with assets in only one resolution if you RTFM and pre-plan carefully. You'll need 3 sizes of app icon, no sweat. For the real UI though, you can do either HD (720) or FHD (1080) and let it scale accordingly - the thing is, TV is very forgiving of scaled graphics because of the 10 ft viewing distance (a 60" 1080p screen is "Retina" beyond 8 ft). You can largely snub SD.
Do I need to buy all devices?
No. And there are many more than 5 devices in use - see https://forums.roku.com/viewtopic.php?f=34&t=86471&start=15#p536994 for some statistics (RokuCo does not publish statistics, so that's about the best info available). If you buy only 2 devices, I'd say get:
a #42xx (Roku 3 or current Roku 2) as a reference model with OpenGL
a #27xx (Roku 1 or SE) or a #5xxx Roku TV as a reference for the "slower", non-OGLES devices
As a 3rd model, I'd suggest the "new HDMI stick", #3600. You could get that one as your only device, since its performance is somewhere between (1) and (2) above... but I don't think developing with only 1 device is a good idea.
One thing you may not have noticed is that there are also these "Roku TV" things under the Hisense/TCL/Sharp/Insignia brands, models #5xxx. These are proper TVs with proper Roku smarts - meaning they can run your Roku app. And one can be had for as little as... (skimming the BestBuy site) $130-150 for a 24-32" screen.
And I haven't even mentioned the 4K/HDR craze here, nor the new 37xx/46xx models that will be out for the holiday season (I only expect minor, evolutionary changes there).
Disclosure: I am a Roku employee.
That's correct - you'll need an actual Roku device to test your application. You can buy them used on eBay very cheaply ($20-35), or you can buy a brand-new unit from our website for $50. The latest Roku Streaming Stick (Model #3600X) is my personal favorite option, and a great value.
You don't need to buy all devices, although we do recommend having many models so that you can QA test across devices. However, one popular development approach is to build your channel on a lower-end model, which theoretically ensures it works on higher-end models as well. This also means you spend less on your purchase.
Download our Precertification Checklist and open the third sheet, which includes a list of all our model numbers and corresponding code names. I'd recommend building on a "Giga" or a "Paolo."
Think of this cost as an R&D expense. Plus, you'll get to enjoy the device in your free time as well!
As for your other questions:
Yes, you build just one app, and it will work on all the different devices. We do recommend taking the time to make sure your app is optimized across all devices, including older devices with less processing power. Our Performance Guide is a great starting point for this.
The other option is to check whether the first digit of the device model is less than "3" (which indicates a lower-end device) and add conditionals based on that, such as removing animations (see the sketch after the examples below).
You can find two examples of this on our RokuDev GitHub page:
Hero-Grid-Channel —> Components —> LoadingIndicator —> LoadingIndicator.brs —> Line 244
Multi-Live-Channel —> Source —> Main.brs —> Line 21
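In those samples the check is done in BrightScript via roDeviceInfo.GetModel(); purely to illustrate the branching logic (not as runnable Roku code), it amounts to something like:

```cpp
#include <string>

// Model codes look like "3600X"; per the advice above, a leading digit
// below '3' is treated as indicating a lower-end device.
bool isLowEndModel(const std::string& model) {
    return !model.empty() && model[0] < '3';
}

// e.g. if (isLowEndModel(deviceModel)) { /* skip animations, etc. */ }
```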
Yes, you do need different assets based on resolutions. Take a look at this document: https://github.com/rokudev/docs/blob/master/design/channel-artwork.md

Why are maps in Indian apps like Uber, Google Maps so inaccurate?

I've been using some popular taxi-hailing apps, Uber and OLA, in India, and Uber in the USA. The locations of the cars, the direction they're moving, and my own position on the app's map are always off - so much so that I need to call the driver to tell them where I am. From the Quora thread below I was able to narrow the problem down to the use of the Maps API or of GPS signals.
The Quora post: https://www.quora.com/Why-is-GPS-in-India-so-inaccurate
The parody video: https://www.youtube.com/watch?v=hjBM-zSq3NU
It is possible that the problem is caused by your device or by the quality of GPS reception in your area. Newer phones can use dozens of satellites, plus network assistance data, to locate themselves - this is called AGPS (assisted GPS). Older phones fall back on three cell towers (not GPS) to triangulate your position; while this method is roughly usable, it is often off by far more than a couple of feet. Even older phones may use only two cell towers, and because the timing signals involved travel at the speed of light, tiny timing errors produce a very large margin of error, which could be your problem. Also, phones without AGPS may lock onto only a few satellites, which may further complicate things for you.

Data usage from any application

I want to read how much 3G data each app uses. Is this possible in iOS 5.x? And in iOS 4.x? My goal is, for example:
Maps consumed 3 MB from your data plan
Mail consumed 420 kB from your data plan
etc, etc. Is this possible?
EDIT:
I just found an app doing that: Data Man Pro
EDIT 2:
I'm starting a bounty. Extra points go to the answer that makes this clear. I know it is possible (see the screenshot from Data Man Pro), and I'm sure the solution is limited. But what is the solution, and how does one implement it?
These are just hints, not a solution. I have thought about this many times but never really started implementing the whole thing.
First of all, you can calculate transferred bytes by querying the network interfaces; take a look at this SO answer for code and a nice explanation of network interfaces on iOS (a minimal sketch follows this list).
Use sysctl or similar system functions to detect which apps are currently running (and by running I mean the process state is set to RUNNING, as the ps or top commands report on OS X; I have never tried this, I just suppose it to be possible on iOS, hoping there are no problems for an app running as an unprivileged user). That way you can deduce which apps are running and save the traffic stats for those apps. Obviously, given the possibility of applications running in the background, it is hard to determine which app is transferring data.
It could also be possible to retrieve information about network activity per process/app, as nettop does on OS X Lion; unfortunately, nettop uses the private NetworkStatistics.framework, so you can't dig anything out of its implementation.
Take time into account: sample the counters periodically, so traffic can be attributed to the apps that were running during each interval.
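For the first hint, the usual approach (as in the linked SO answer) reads the per-interface byte counters with getifaddrs(). A minimal sketch; note these counters are device-wide totals since boot, so attributing them to individual apps is exactly the hard part described above ("pdp_ip*" are the cellular interfaces on iOS, "en0" is Wi-Fi; the function name is mine):

```cpp
#include <ifaddrs.h>
#include <net/if.h>
#include <net/if_var.h>   // struct if_data (ifi_ibytes / ifi_obytes)
#include <sys/socket.h>
#include <cstdio>
#include <cstring>

void printCellularBytes() {
    ifaddrs* addrs = nullptr;
    if (getifaddrs(&addrs) != 0) return;

    for (ifaddrs* cur = addrs; cur != nullptr; cur = cur->ifa_next) {
        // Only link-level entries carry the statistics payload.
        if (cur->ifa_addr && cur->ifa_addr->sa_family == AF_LINK &&
            cur->ifa_data &&
            std::strncmp(cur->ifa_name, "pdp_ip", 6) == 0) {
            const if_data* stats =
                static_cast<const if_data*>(cur->ifa_data);
            std::printf("%s: in=%u bytes, out=%u bytes\n",
                        cur->ifa_name, stats->ifi_ibytes, stats->ifi_obytes);
        }
    }
    freeifaddrs(addrs);
}
```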
My 2 cents
No. All applications on iOS are sandboxed, meaning you cannot access anything outside of your own application. I do not believe this is possible. Nor do I believe data traffic is recorded at this level on the device; otherwise Apple would have exposed it in either the network page or the usage page in Settings.app.
Besides that, not everybody has a "data plan". E.g. in Sweden it's common for data traffic to be free of charge, without limits on either volume or speed.