Why are maps in Indian apps like Uber, Google Maps so inaccurate? - gps

I've been using some popular taxi-hailing apps like Uber and OLA in India, and Uber in the USA. The locations of the cars, the direction they're moving in, and my own position on the app's map are always off, so much so that I need to call the driver to tell them where I am. From this Quora thread I was able to narrow the problem down to either the use of the Maps API or the GPS signals.
The Quora post: https://www.quora.com/Why-is-GPS-in-India-so-inaccurate
The parody video: https://www.youtube.com/watch?v=hjBM-zSq3NU

It is possible that the problem is caused by your device or by the quality of GPS reception in your area. Newer phones use assisted GPS (A-GPS): they combine signals from multiple GPS satellites with data from the cellular network to get a faster and more precise fix. Older phones instead triangulate your position from three cell towers (no GPS involved); that method works, but it can be off by tens or even hundreds of meters. Still older phones may only have two towers to work with, and because the position is estimated from signal timing (the signals travel at the speed of light), the margin of error becomes very large, which could be your problem. Finally, a phone without A-GPS has to rely on the satellites alone and needs a clear view of at least four of them for a reliable fix, and this may also complicate things for you.
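To make the triangulation idea concrete, here is a minimal C# sketch of the geometry: given three towers at known positions and an estimated range to each, the receiver's position falls out of a small linear solve. The tower positions and ranges are made-up example values, and real network fixes add large range errors on top of this clean geometry.

    using System;

    class Trilateration
    {
        // Solves for the receiver position given three towers at known positions
        // and an estimated range to each. Returns false if the towers are collinear.
        static bool Solve(double x1, double y1, double r1,
                          double x2, double y2, double r2,
                          double x3, double y3, double r3,
                          out double x, out double y)
        {
            // Subtracting the circle equations pairwise gives two linear equations:
            //   2(x2-x1)x + 2(y2-y1)y = r1^2 - r2^2 - x1^2 + x2^2 - y1^2 + y2^2
            double a1 = 2 * (x2 - x1), b1 = 2 * (y2 - y1);
            double c1 = r1 * r1 - r2 * r2 - x1 * x1 + x2 * x2 - y1 * y1 + y2 * y2;
            double a2 = 2 * (x3 - x1), b2 = 2 * (y3 - y1);
            double c2 = r1 * r1 - r3 * r3 - x1 * x1 + x3 * x3 - y1 * y1 + y3 * y3;

            double det = a1 * b2 - a2 * b1;
            x = y = 0;
            if (Math.Abs(det) < 1e-9) return false;     // towers are (nearly) collinear

            x = (c1 * b2 - c2 * b1) / det;
            y = (a1 * c2 - a2 * c1) / det;
            return true;
        }

        static void Main()
        {
            // Towers at (0,0), (1000,0) and (0,1000) meters; ranges measured from a
            // receiver that is really at (300, 400). Example values only.
            double x, y;
            if (Solve(0, 0, 500, 1000, 0, 806.2, 0, 1000, 670.8, out x, out y))
                Console.WriteLine("Estimated position: {0:F1}, {1:F1}", x, y);   // ~ 300, 400
        }
    }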

Related

Defining a tracking area for the Kinect

Is it possible to specify a (rectangular) area for skeleton tracking with the Kinect (using any of the available SDKs)? I want to make sure that only users inside that designated area are tracked and that the sensor is not distracted by people outside it. Think of a game zone, in which a player interacts with the Kinect and where bystanders outside of the zone should be ignored lest they confuse the sensor.
The reason I want this is that the Kinect often "locks" onto someone or even something, whether it should or not, and then it's difficult for the sensor to track other individuals who come into tracking range. I want to avoid that by defining this zone.
It's not possible to specify a target area for the skeleton tracking with Microsoft's official SDKs, but there are some potential workarounds.
(Note that I'm not familiar with other SDKs for the Kinect, and note that I'm not sure if you are using the Kinect v1 or v2.)
If you are using the Kinect v1, note that it can track 6 players simultaneously (with a skeleton body position), but it can only provide full-body skeletal tracking for 2 players at a time. It's possible to specify which 2 players you want full skeletal tracking for in the official SDK, and you can do this based on which skeletons are in your target game zone.
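If you go that route with the v1 SDK, the relevant pieces are SkeletonStream.AppChoosesSkeletons and ChooseSkeletons(). Here is a rough C# sketch; the rectangular zone bounds are made-up example values in skeleton space (meters), so adjust them to your setup.

    using System;
    using System.Linq;
    using Microsoft.Kinect;

    class GameZoneTracking
    {
        static KinectSensor sensor;

        static void Main()
        {
            sensor = KinectSensor.KinectSensors
                .FirstOrDefault(s => s.Status == KinectStatus.Connected);
            if (sensor == null) return;

            sensor.SkeletonStream.Enable();
            sensor.SkeletonStream.AppChoosesSkeletons = true;   // we pick who gets full tracking
            sensor.SkeletonFrameReady += OnSkeletonFrameReady;
            sensor.Start();

            Console.ReadLine();
            sensor.Stop();
        }

        // X: left/right offset from the sensor, Z: distance from the sensor, in meters.
        static bool InGameZone(SkeletonPoint p)
        {
            return p.X > -1.0f && p.X < 1.0f && p.Z > 1.5f && p.Z < 3.0f;
        }

        static void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
        {
            using (SkeletonFrame frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return;

                var skeletons = new Skeleton[frame.SkeletonArrayLength];
                frame.CopySkeletonDataTo(skeletons);

                // Candidates: any detected body (position-only or fully tracked) inside the zone.
                int[] inZone = skeletons
                    .Where(sk => sk.TrackingState != SkeletonTrackingState.NotTracked
                                 && InGameZone(sk.Position))
                    .Take(2)
                    .Select(sk => sk.TrackingId)
                    .ToArray();

                if (inZone.Length == 2)
                    sensor.SkeletonStream.ChooseSkeletons(inZone[0], inZone[1]);
                else if (inZone.Length == 1)
                    sensor.SkeletonStream.ChooseSkeletons(inZone[0]);
            }
        }
    }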
If this isn't the problem, and the problem is that the Kinect (v1 or v2) has already detected 6 players and it can't detect a 7th individual that's in your game zone, then that is a more difficult problem. With the official SDK, you have no control over which 6 players are selected to be tracked. The sensor will lock onto the first 6 players it finds, so if a 7th player walks in, there is no simple way to lock onto that player.
However, there are some possible workarounds that involve resetting the sensor to clear all skeletons to re-select the 6 tracked skeletons (see the thread Skeleton tracking in crowds - Kinect v2):
Kinect body tracking is always scanning and finding candidate bodies to track. The body tracking only locks on when it detects head and shoulders of the person facing the camera. You could do something like look for stable blob points in the target area and if there isn't a tracked body, reset the Kinect Monitor service.
The SDK is resilient to this type of failure of the runtime, but it is a hard approach. Additionally, you could employ a way to cover the depth camera (your hand) to reset the tracking since this will make all depth/ir invalid and will need to rebuild.
-- Carmine Sirignano - MSFT
In the same thread, RobAcheson points out that restarting the sensor is another workaround:
I've been using the by-hand method successfully for a while and that definitely works - when I'm in the crowd :)
I have started calling KinectSensor.Close() and KinectSensor.Open() when there are >6 skeletons if none are in the target area. That seems to be working well too. Now I just need a crowd to test with.
-- RobAcheson
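For reference, here is a rough C# sketch of what that restart approach might look like against the Kinect v2 SDK: if every tracking slot is taken but none of the tracked bodies is inside the target area, close and reopen the sensor so the runtime re-selects bodies. The zone bounds and the per-frame check are illustrative assumptions, and after a Close()/Open() cycle you may also need to recreate your readers.

    using System;
    using System.Linq;
    using Microsoft.Kinect;

    class ResetWhenZoneEmpty
    {
        static void Main()
        {
            KinectSensor sensor = KinectSensor.GetDefault();
            sensor.Open();

            BodyFrameReader reader = sensor.BodyFrameSource.OpenReader();
            Body[] bodies = new Body[sensor.BodyFrameSource.BodyCount];

            reader.FrameArrived += (s, e) =>
            {
                using (BodyFrame frame = e.FrameReference.AcquireFrame())
                {
                    if (frame == null) return;
                    frame.GetAndRefreshBodyData(bodies);

                    var tracked = bodies.Where(b => b.IsTracked).ToList();
                    bool anyInZone = tracked.Any(b =>
                    {
                        CameraSpacePoint p = b.Joints[JointType.SpineMid].Position;
                        return p.X > -1.0f && p.X < 1.0f && p.Z > 1.5f && p.Z < 3.0f;
                    });

                    // All six slots used and nobody in the target area: restart tracking.
                    if (tracked.Count == bodies.Length && !anyInZone)
                    {
                        sensor.Close();
                        sensor.Open();
                    }
                }
            };

            Console.ReadLine();
            sensor.Close();
        }
    }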

How can we test a Roku application?

I'm new to Roku development (in the R&D phase, actually). I read that we can't test a Roku app on a simulator and need a real device. If we develop an application, how will we test it?
I checked the Roku developer site and various links on the internet, but could not find anything that answers my questions.
As far as I know, Roku sells 5 devices, so:
Can we do one app that supports all 5 devices?
Do we need assets in multiple resolutions?
Do I need to buy all devices?
Can we do one app that supports all 5 devices?
Yes. Roku is trying hard to keep their platform coherent, though there are performance differences between the OpenGL and non-OpenGL devices. The "legacy" models (<2222) are no longer supported; the firmware is kept current for the others.
Do we need assets in multiple resolutions?
Theoretically yes; practically, not really. You can make do with assets in only one resolution if you RTFM and pre-plan carefully. You'll need 3 sizes of app icon, no sweat. For the real UI though, you can either do HD (720) or FHD (1080) and let it scale accordingly - the thing is, TV is very forgiving of scaled graphics because of the 10 ft viewing distance (a 60" 1080p screen is "Retina" beyond 8 ft). You can largely snub SD.
Do I need to buy all devices?
No. And there are many more than 5 device models in use - see https://forums.roku.com/viewtopic.php?f=34&t=86471&start=15#p536994 for some statistics (RokuCo does not publish statistics, so that's about the best info available). If you buy only 2 devices, I'd say get:
a #42xx (Roku 3 or current Roku 2) as a reference model with OpenGL
a #27xx (Roku 1 or SE) or #5xxx Roku TV as a reference for the "slower", non-OpenGL ES (OGLES) devices
As a 3rd model I'd suggest the "new HDMI stick", #3600. You could get that one as your only device, since its performance is somewhere between (1) and (2) above... but I don't think developing with only 1 device is a good idea.
One thing you may not have noticed is that there are also the "Roku TV" sets sold under the Hisense/TCL/Sharp/Insignia brands, models #5xxx. These are proper TVs with proper Roku smarts, meaning they can run your Roku app. And one can be had for as little as... (skimming the Best Buy web site) $130-150 for a 24-32" screen.
And I haven't even mentioned the 4K/HDR craze here, nor the new 37xx/46xx models that will be out for the holiday season (I only expect minor, evolutionary changes there).
Disclosure: I am a Roku employee.
That's correct, you'll need an actual Roku device to test your application. You can buy them used on eBay for very cheap ($20-35), or you can buy a brand new unit from our website for $50. The latest Roku Streaming Stick (Model #3600X) is my personal favorite option, and a great value.
You don't need to buy all devices, although we do recommend having several models so that you can QA test across devices. However, one popular development approach is to build your channel on a lower-end model, which theoretically assures it will work on higher-end models as well. It also means you spend less on hardware.
Download our Precertification Checklist and open the third sheet, which includes a list of all our model numbers and corresponding code names. I'd recommend building on a "Giga" or a "Paolo."
Think of this cost as an R&D expense. Plus, you'll get to enjoy the device in your free time as well!
As for your other questions:
Yes, you will only build one app that will work on all different devices. We do recommend taking the time to make sure your app is optimized across all devices, including older devices with less processing power. Our Performance Guide is a great starting point for this.
The other option is to check whether the first digit of the device model is less than "3" (which indicates it's a lower-end device) and add conditionals based on that, such as removing animations.
You can find two examples of this on our RokuDev GitHub page:
Hero-Grid-Channel —> Components —> LoadingIndicator —> LoadingIndicator.brs —> Line 244
Multi-Live-Channel —> Source —> Main.brs —> Line 21
Yes, you do need different assets based on resolutions. Take a look at this document: https://github.com/rokudev/docs/blob/master/design/channel-artwork.md

How to read GPS coordinates from device via USB port

I need to read GPS coordinates using a VB.NET program directly from a GPS device connected to the computer via USB (bluetooth also OK but prefer USB). My constraints are:
The computer running the software is NOT connected to the internet. It is a stand-alone machine in a moving vehicle.
I need to be able to read GPS coordinates from the device while the vehicle moves and use the device to perform location-aware queries on a local database
The GPS device can be anything (e.g. a Garmin GPS or a GPS card without a display), as long as it can be purchased off the shelf or over the internet.
The user group for this solution is quite small (about 40 users).
I have already checked out GPSGate (http://gpsgate.com/) and emailed my requirements to them. They replied, and I quote: "I am sorry but we have no product for you." (end of reply).
I also checked out Eye4Software and tried using their demo product, but it does not pick up my Garmin Nuvi via USB. They responded to my questions, but unfortunately their OEM product is an ActiveX DLL and I am looking for a .NET based solution.
So if anyone has a "home-grown" solution based on the .NET framework, that can be easily duplicated, I would really appreciate it. Many thanks!
Most of the USB GPS pucks speak a standardized protocol called NMEA 0183. There are several .NET libraries out there that decode this protocol; see here for some pointers to get started.
So if, when shopping around, you just check that the device can output NMEA, you should be up and running in a minimum of time, and at a reasonable cost.
EDIT: a "GPS puck" is a GPS receiver shaped more or less like a hockey puck, like this one.
For in-car use there are specific versions that can be fixed onto the vehicle's roof.
They are pretty common (many online shops carry them), but select them based on the chip that's inside: the popular Sirf Star 3 is still a solid performer, stable and accurate. I haven't had the chance to play with its successor, the Sirf Star 4, yet, and I'm not implying these are the only good chips around, only that this is the chip I have the most experience with.
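If you go the NMEA route, the .NET side is small enough to roll yourself. Below is a minimal C# sketch (the same System.IO.Ports calls are available from VB.NET) that reads a GPS puck exposed as a virtual COM port and converts the $..GGA latitude/longitude fields to decimal degrees. "COM3" and the 4800 baud rate are assumptions - check Device Manager and the receiver's documentation for the real values - and checksum validation is omitted for brevity.

    using System;
    using System.Globalization;
    using System.IO.Ports;

    class NmeaReader
    {
        static void Main()
        {
            // Most USB GPS pucks show up as a virtual serial port; the port name and
            // baud rate below are assumptions -- adjust for your device.
            using (var port = new SerialPort("COM3", 4800, Parity.None, 8, StopBits.One))
            {
                port.NewLine = "\r\n";
                port.Open();

                while (true)
                {
                    // e.g. $GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,...
                    string sentence = port.ReadLine();
                    string[] f = sentence.Split(',');

                    if (f.Length > 5 && f[0].EndsWith("GGA") && f[2] != "" && f[4] != "")
                    {
                        double lat = ToDegrees(f[2], f[3]);
                        double lon = ToDegrees(f[4], f[5]);
                        Console.WriteLine("{0:F6}, {1:F6}", lat, lon);
                    }
                }
            }
        }

        // NMEA encodes latitude as ddmm.mmmm and longitude as dddmm.mmmm.
        static double ToDegrees(string value, string hemisphere)
        {
            double raw = double.Parse(value, CultureInfo.InvariantCulture);
            double degrees = Math.Floor(raw / 100);
            double minutes = raw - degrees * 100;
            double result = degrees + minutes / 60.0;
            return (hemisphere == "S" || hemisphere == "W") ? -result : result;
        }
    }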

How do Augmented Reality browsers track your location accurately enough to overlay relevant information?

AR browsers include apps like Wikitude, Layar, etc., that are available for iPhone and Android smartphones.
When you point your camera at a landmark, they automatically overlay previously available information about that location on top of it (e.g. the name of a restaurant over its door).
If the accuracy of GPS, as stated by some, is ~10 m, how can this be done accurately?
I mean, if they track the location of your phone to display geographically accurate information, a difference of even 2-3 m can cause havoc.
You can check for yourself in the code of mixare, which is an augmented reality browser released under a free software license (GPLv3). The source code is available on GitHub for both Android and iPhone.
To answer your question: there are errors, mostly because of the compass readings (digital compasses are unreliable because they pick up every kind of noise). What helps is that you are usually looking at objects that are quite big (buildings etc.), hence the error is not THAT visible to the end user - but it's still there, trust me :)
HTH
Daniele
Disclosure: I am the project leader of mixare and main developer of the android version
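To put rough numbers on the compass issue: the browser only needs your GPS position, the POI's stored position, and a compass bearing. The C# sketch below (made-up coordinates and an assumed 5-degree compass error) shows how far sideways the overlay drifts at a typical distance, which is small relative to a building-sized target but very visible on a doorway-sized one.

    using System;

    class ArOverlayError
    {
        const double EarthRadius = 6371000;   // meters

        static void Main()
        {
            // Phone and POI positions in decimal degrees -- made-up example values.
            double phoneLat = 48.2082, phoneLon = 16.3738;
            double poiLat = 48.2090, poiLon = 16.3750;

            double bearing = BearingDegrees(phoneLat, phoneLon, poiLat, poiLon);
            double distance = DistanceMeters(phoneLat, phoneLon, poiLat, poiLon);

            // A compass reading that is off by 5 degrees shifts the overlay sideways by:
            double lateralError = distance * Math.Tan(5 * Math.PI / 180);

            Console.WriteLine("bearing {0:F1} deg, distance {1:F0} m, " +
                              "5 deg compass error ~ {2:F1} m sideways",
                              bearing, distance, lateralError);
        }

        // Initial great-circle bearing from point 1 to point 2.
        static double BearingDegrees(double lat1, double lon1, double lat2, double lon2)
        {
            double f1 = lat1 * Math.PI / 180, f2 = lat2 * Math.PI / 180;
            double dl = (lon2 - lon1) * Math.PI / 180;
            double y = Math.Sin(dl) * Math.Cos(f2);
            double x = Math.Cos(f1) * Math.Sin(f2) - Math.Sin(f1) * Math.Cos(f2) * Math.Cos(dl);
            return (Math.Atan2(y, x) * 180 / Math.PI + 360) % 360;
        }

        // Equirectangular approximation -- accurate enough at AR-browser ranges.
        static double DistanceMeters(double lat1, double lon1, double lat2, double lon2)
        {
            double x = (lon2 - lon1) * Math.PI / 180 * Math.Cos((lat1 + lat2) / 2 * Math.PI / 180);
            double y = (lat2 - lat1) * Math.PI / 180;
            return Math.Sqrt(x * x + y * y) * EarthRadius;
        }
    }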

How to demo examples of embedded systems?

It seems that a lot of small business people have a need for some customized embedded systems, but don't really know too much about the possibilities and cannot quite envisage them.
I had the same problem when trying to explain what Android could do; I was generally met with glazed eyes - and then I made a few demos. Somehow, when people can see something - touch it and play around with it - they have that cartoon lightbulb moment.
Even if it is not directly applicable to them, a demo starts them thinking about what could be useful to them.
The sort of person I am talking about may or may not be technical, but is certainly intelligent, having built from scratch a business which turns over millions.
Their needs are varied, from RFID or GPS asset & people tracking, to simple stock control systems, displays, and communications - sometimes satellite, sometimes VPN or LAN (wifi or RJ45). A lot of it needs a good back-end database with a web site to display, query, and data-mine …
So, to get to the question, I am looking for a simple project, or projects, which will cause that cartoon lightbulb moment. It need not be too complicated as those who need complicated solutions are generally tech-savvy, just something straightforward & showing what could be done to streamline their business and make it more profitable.
It would be nice if it could include some wifi/RJ45 comms, communicate across the internet (e.g. not just a micro-controller attached to a single PC - it should then communicate with a server/web site), an RFID reader would be nice, something actually happening (LEDs, sounds, etc), plus some database work and database analysis/data-mining - something end-to-end, preferably in both directions.
A friend was suggesting a Rube Goldberg-like contraption with a Lego Mindstorms kit attached to a local PC, but also controllable from a remote PC (representing head office) or a web site. That would show remote control of devices. Maybe it could pick up some RFID tags and move them around (at random, or on command), representing stock control (or maybe employee/asset movement within a factory or warehouse (Location Based Services/GIS)), which could then be shown on the web site, with some nice charts & graphs etc.
Any other ideas?
How best to implement it? One of those micro-controller starter kits like http://www.nerdkits.com/ ? Maybe some Lego, or a similar robot kit, a few cheap RFID readers … anything else?
And - the $409,600 question - what's a good, representative demo which demonstrates as many functionalities as possible, as impressively as possible, with the least effort? (keeping it modular and allowing for easy addition of features, since there is such a wide area to cover)
P.S. a tie-in with an Android slate PC would be welcome too.
Your customers might respond better to a solid looking R/C truck which seeks RFID tags than to a Lego robot. Lego is cool, but it has a bit of a slapped-together 'kiddie' feel.
What if you:
scatter some RFID tags across the conference room.
add a GPS & wifi transmitter to your truck.
drive the truck to the tag
(manually - unless you want to invest a lot of time in steering algorithms).
have a PC drawing a real-time track of the truck's path.
every time the truck gets within range of a tag, add it to an inventory list on the screen, showing item id, location, time recorded, and total units so far.
indicate the position of the item on the map.
I'd be impressed.
Is it 'least effort'? I don't know, but I'd hope that if this is the type of solution you are pitching, you already have a good handle on how to read GPS and RFID devices, how to establish a TCP or UDP connection over wifi, and how to send and decode packets. Add some simple graphics and a database lookup, and you are set.
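For the "send and decode packets" part, the PC side really is small. Here is a minimal C# sketch of a UDP listener that receives hypothetical "tagId,lat,lon" reports from the truck and keeps a running inventory list; the port number and the comma-separated packet format are assumptions for illustration.

    using System;
    using System.Collections.Generic;
    using System.Globalization;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    class InventoryListener
    {
        static void Main()
        {
            // tag id -> last reported position and time
            var seen = new Dictionary<string, Tuple<double, double, DateTime>>();

            using (var udp = new UdpClient(9750))              // assumed port
            {
                var remote = new IPEndPoint(IPAddress.Any, 0);
                while (true)
                {
                    byte[] data = udp.Receive(ref remote);
                    string[] f = Encoding.ASCII.GetString(data).Split(',');
                    if (f.Length != 3) continue;               // ignore malformed packets

                    double lat = double.Parse(f[1], CultureInfo.InvariantCulture);
                    double lon = double.Parse(f[2], CultureInfo.InvariantCulture);
                    seen[f[0]] = Tuple.Create(lat, lon, DateTime.UtcNow);

                    Console.WriteLine("{0} at {1:F6},{2:F6}  ({3} tags so far)",
                                      f[0], lat, lon, seen.Count);
                }
            }
        }
    }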
Regarding hardware, I don't have any first-hand experience with any of these, but the GadgetPC Wi-Fi G Kit + a USB RFID reader + a USB GPS receiver looks like a nice platform for experimenting with this.
Many chip manufacturers have off-the-shelf demo boards. Microchip has some great demo boards for TCP/IP communications on an embedded system. I haven't seen one yet for RFID. Showing potential customers some of these demos could get them thinking about what is possible.