For the Kinect v2, where is the point cloud's origin located, relative to some fixed, external feature of the physical Kinect?
For context, I have multiple point clouds taken by a Kinect v2 from multiple precisely known locations, but I can't get them to line up manually. (I've been manually moving the origin around, hoping everything would just come into focus.)
It should be the center of the IR (depth) sensor, as stated in https://msdn.microsoft.com/it-it/library/dn785530.aspx
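For reference, here is a minimal sketch (C#, SDK 2.0) of producing a point cloud in that coordinate system; MapDepthFrameToCameraSpace applies the factory calibration, and there is no user-configurable origin offset:

    using Microsoft.Kinect;

    // Sketch for Kinect v2 / SDK 2.0: convert a depth frame into a point
    // cloud in "camera space". The origin sits at the sensor itself and
    // 1 unit = 1 meter, per the coordinate-mapping page linked above.
    class PointCloudGrabber
    {
        public static CameraSpacePoint[] DepthToCameraSpace(
            KinectSensor sensor, ushort[] depthData)
        {
            var points = new CameraSpacePoint[depthData.Length];
            sensor.CoordinateMapper.MapDepthFrameToCameraSpace(depthData, points);
            return points;
        }
    }

If clouds taken from precisely measured sensor poses still don't line up, small errors in the measured poses may dominate, so a registration refinement step (e.g. ICP) on top of the physical alignment is worth considering.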
Hope that this helps,
Alex
Being a novice, I need some advice on how to solve the following problem.
Say, with photogrammetry, I have obtained a point cloud of part of my room. I then upload this point cloud to an Android phone and want it to track its camera pose relative to this point cloud in real time.
As far as I know, there can be problems with different camera intrinsics (a plain camera or another phone's camera vs. my phone's camera) that can affect the precision of localization, right?
Actually, it's supposed to be an AR app, so I've tried existing SDKs: Vuforia, Wikitude, Placenote (I haven't tried ARCore yet because my device most likely won't support it). The problem is that they all use their own clouds for their services, and I don't want to depend on them. Ideally, I'd perform the 3D reconstruction on my own PC, and my phone would download the point cloud from there.
I need SLAM (with IMU fusion) or VIO on my phone, don't I? Are there any ready-to-go implementations in libraries like ARToolKit or, maybe, PCL? Will an existing SLAM system localize against a map reconstructed with other algorithms, or should I use one and the same SLAM system for both mapping and localization?
So, the main question is how to do everything ARCore and Vuforia do without using third-party servers. (I suspect the answer is to devise the same underlying layer that Vuforia and the other SDKs use to employ all the available hardware.)
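Regarding the intrinsics concern: yes, a mismatch between the intrinsics used for reconstruction and those of the live phone camera directly skews pose estimation. A minimal pinhole-projection sketch (generic math, not tied to any SDK; all numbers are made-up examples) shows the effect:

    using System;

    // Pinhole model: project a 3D point in camera coordinates to pixels
    // using intrinsics (fx, fy = focal lengths, cx, cy = principal point).
    class PinholeDemo
    {
        static (double u, double v) Project(
            double x, double y, double z, // 3D point in camera space (meters)
            double fx, double fy,         // focal lengths (pixels)
            double cx, double cy)         // principal point (pixels)
        {
            return (fx * x / z + cx, fy * y / z + cy);
        }

        static void Main()
        {
            // Hypothetical point 2 m ahead, 0.5 m to the side, seen by a
            // 1920x1080 camera; then the same point with focal lengths
            // that are off by ~3.5%.
            var (u1, v1) = Project(0.5, 0, 2, 1400, 1400, 960, 540);
            var (u2, v2) = Project(0.5, 0, 2, 1450, 1450, 960, 540);
            Console.WriteLine($"{u1},{v1} vs {u2},{v2}"); // 1310 vs 1322.5: ~12 px shift
        }
    }

A localizer matching image features against the cloud inverts exactly this projection, so calibrating each device (or reading the intrinsics the platform reports) matters regardless of which SLAM library you pick.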
Is it possible to specify a (rectangular) area for skeleton tracking with the Kinect (using any of the available SDKs)? I want to make sure that only users inside that designated area are tracked and that the sensor is not distracted by people outside it. Think of a game zone, in which a player interacts with the Kinect and where bystanders outside of the zone should be ignored lest they confuse the sensor.
The reason I want this is that many times the Kinect "locks" onto someone or even something, whether it should or not, and then it's difficult for the sensor to track other individuals, who come into tracking range. I want to avoid that by defining this zone.
It's not possible to specify a target area for the skeleton tracking with Microsoft's official SDKs, but there are some potential workarounds.
(Note that I'm not familiar with other SDKs for the Kinect, and note that I'm not sure if you are using the Kinect v1 or v2.)
If you are using the Kinect v1, note that it can track 6 players simultaneously (with a skeleton body position), but it can only provide full-body skeletal tracking for 2 players at a time. It's possible to specify which 2 players you want full skeletal tracking for in the official SDK, and you can do this based on which skeletons are in your target game zone.
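A sketch of that selection logic (C#, SDK 1.8; the zone bounds are hypothetical values to replace with your own):

    using System.Linq;
    using Microsoft.Kinect;

    // Kinect v1 / SDK 1.8: let the app choose which 2 skeletons receive
    // full tracking, preferring skeletons inside a rectangular game zone.
    class ZoneChooser
    {
        // Hypothetical zone bounds in skeleton space (meters).
        const float XMin = -0.75f, XMax = 0.75f, ZMin = 1.0f, ZMax = 2.5f;

        public void Setup(KinectSensor sensor)
        {
            sensor.SkeletonStream.Enable();
            sensor.SkeletonStream.AppChoosesSkeletons = true; // we pick the 2
            sensor.SkeletonFrameReady += OnSkeletonFrameReady;
            sensor.Start();
        }

        void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
        {
            using (SkeletonFrame frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return;
                var skeletons = new Skeleton[frame.SkeletonArrayLength];
                frame.CopySkeletonDataTo(skeletons);

                // Candidates inside the zone (position-only detection suffices).
                var inZone = skeletons
                    .Where(s => s.TrackingState != SkeletonTrackingState.NotTracked)
                    .Where(s => s.Position.X >= XMin && s.Position.X <= XMax
                             && s.Position.Z >= ZMin && s.Position.Z <= ZMax)
                    .Take(2)
                    .ToArray();

                var sensor = (KinectSensor)sender;
                if (inZone.Length == 2)
                    sensor.SkeletonStream.ChooseSkeletons(
                        inZone[0].TrackingId, inZone[1].TrackingId);
                else if (inZone.Length == 1)
                    sensor.SkeletonStream.ChooseSkeletons(inZone[0].TrackingId);
            }
        }
    }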
If this isn't the problem, and the problem is that the Kinect (v1 or v2) has already detected 6 players and it can't detect a 7th individual that's in your game zone, then that is a more difficult problem. With the official SDK, you have no control over which 6 players are selected to be tracked. The sensor will lock onto the first 6 players it finds, so if a 7th player walks in, there is no simple way to lock onto that player.
However, there are some possible workarounds that involve resetting the sensor, clearing all skeletons so that the 6 tracked skeletons are re-selected (see the thread Skeleton tracking in crowds - Kinect v2):
Kinect body tracking is always scanning and finding candidate bodies to track. The body tracking only locks on when it detects head and shoulders of the person facing the camera. You could do something like look for stable blob points in the target area and if there isn't a tracked body, reset the Kinect Monitor service.
The SDK is resilient to this type of failure of the runtime, but it is a hard approach. Additionally, you could employ a way to cover the depth camera (your hand) to reset the tracking since this will make all depth/ir invalid and will need to rebuild.
-- Carmine Sirignano - MSFT
In the same thread, RobAcheson points out that restarting the sensor is another workaround:
I've been using the by-hand method successfully for a while and that definitely works - when I'm in the crowd :)
I have started calling KinectSensor.Close() and KinectSensor.Open() when there are >6 skeletons if none are in the target area. That seems to be working well too. Now I just need a crowd to test with.
-- RobAcheson
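A sketch of that reset heuristic for the Kinect v2 (C#, SDK 2.0; the zone test is a hypothetical placeholder, and in practice you may need to re-create the reader after reopening the sensor):

    using System.Linq;
    using Microsoft.Kinect;

    // Kinect v2 / SDK 2.0: if all 6 body slots are occupied and none of
    // the tracked bodies is inside the game zone, close and reopen the
    // sensor to force a re-selection (RobAcheson's workaround above).
    class CrowdReset
    {
        KinectSensor sensor;
        Body[] bodies;

        public void Start()
        {
            sensor = KinectSensor.GetDefault();
            sensor.Open();
            bodies = new Body[sensor.BodyFrameSource.BodyCount]; // 6 on the v2
            var reader = sensor.BodyFrameSource.OpenReader();
            reader.FrameArrived += OnFrameArrived;
        }

        void OnFrameArrived(object s, BodyFrameArrivedEventArgs e)
        {
            bool resetNeeded = false;
            using (BodyFrame frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;
                frame.GetAndRefreshBodyData(bodies);
                var tracked = bodies.Where(b => b.IsTracked).ToArray();
                resetNeeded = tracked.Length == bodies.Length
                    && !tracked.Any(b => InGameZone(b.Joints[JointType.SpineBase].Position));
            }
            if (resetNeeded) { sensor.Close(); sensor.Open(); }
        }

        // Hypothetical rectangular zone in camera space (meters).
        static bool InGameZone(CameraSpacePoint p) =>
            p.X >= -0.75f && p.X <= 0.75f && p.Z >= 1.0f && p.Z <= 2.5f;
    }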
This is my first interaction with the Xbox Kinect. I have to count the number of people getting in and out through a door. What I have learned is that I should get a depth map, then:
Detect the top of the head closest to the sensor and track it
Increment/decrement the count when the head crosses a specific region
I am able to get the depth image, but I am totally blank on how to detect a head from the depth image.
I am using an Xbox 360 Kinect and the Kinect for Windows SDK v1.8 in C#.
Thanks in advance
The Kinect SDK gives you the coordinates of tracked body parts, such as the head, by default. The SDK can fully track up to 2 people at a time (Kinect v1 / SDK 1.8) or up to 6 (Kinect v2 / SDK 2.0).
For starters, you can take the head position reported by the SDK and increment the count when it crosses your region. Simply search for "Kinect head tracking" and you will see that it's easy to locate the head in a scene using the SDK (instead of working on the depth map directly).
An example for the 1.8 SDK is given here.
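A minimal sketch of that head-crossing counter (C#, SDK 1.8; the 1.5 m line is a made-up threshold, and skeletal tracking assumes people face the sensor rather than a pure top-down view):

    using System.Collections.Generic;
    using Microsoft.Kinect;

    // Kinect v1 / SDK 1.8: count people as their tracked head crosses a
    // virtual line in depth (Z).
    class DoorCounter
    {
        const float LineZ = 1.5f; // hypothetical crossing line (meters)
        readonly Dictionary<int, float> lastZ = new Dictionary<int, float>();
        public int Count { get; private set; }

        public void Setup(KinectSensor sensor)
        {
            sensor.SkeletonStream.Enable();
            sensor.SkeletonFrameReady += OnSkeletonFrameReady;
            sensor.Start();
        }

        void OnSkeletonFrameReady(object s, SkeletonFrameReadyEventArgs e)
        {
            using (SkeletonFrame frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return;
                var skeletons = new Skeleton[frame.SkeletonArrayLength];
                frame.CopySkeletonDataTo(skeletons);

                foreach (var sk in skeletons)
                {
                    if (sk.TrackingState != SkeletonTrackingState.Tracked) continue;
                    float headZ = sk.Joints[JointType.Head].Position.Z;
                    if (lastZ.TryGetValue(sk.TrackingId, out float prevZ))
                    {
                        if (prevZ > LineZ && headZ <= LineZ) Count++;      // entered
                        else if (prevZ <= LineZ && headZ > LineZ) Count--; // left
                    }
                    lastZ[sk.TrackingId] = headZ;
                }
            }
        }
    }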
I've been using some popular taxi-hailing apps, like Uber and OLA in India and Uber in the USA. The locations of the cars, their direction of movement, and my own position on the app's map are always off, so much so that I need to call the driver to tell them where I am. From this Quora thread, I was able to narrow the problem down to the use of the Maps API or GPS signals.
The Quora post: https://www.quora.com/Why-is-GPS-in-India-so-inaccurate
The parody video: https://www.youtube.com/watch?v=hjBM-zSq3NU
It is possible that the problem is caused by your device or by GPS reception in your area. Newer phones can track dozens of satellites at once and typically use Assisted GPS (A-GPS), where the cellular network supplies data that speeds up and improves the fix. Older phones fall back on triangulating between three cell towers (no GPS involved); while reasonably accurate for its time, this method can be off by tens or even hundreds of meters. Still older phones may use only two cell towers, and because the measurements are based on signal travel time at the speed of light, small timing errors translate into a very large margin of error, which could be your problem. Phones without A-GPS rely on the unassisted GPS receiver alone, which can also degrade the result.
Is it possible to disable GPS without disabling location services?
What I would like to do is essentially dumb down location accuracy by removing the GPS function from my phone.
I'm aware that I may not have an accurate location, and I'm OK with that.
I just want to know if GPS can be disabled so that only cell tower triangulation is used to determine my (approximate) location.
Thank you.
Igor
Assuming your question is related to programming, i.e. this is for testing:
You cannot disable GPS with an iPhone/iPad setting while keeping cell tower location services,
but:
GPS is easily shielded by metal foil, while GSM is very difficult to shield (GPS signals arrive from distant satellites and are far weaker than signals from a nearby cell tower).
Try wrapping your phone in aluminium foil, which you can get in any supermarket.
That should shield GPS while cell tower triangulation still works.
That's how I tested my GPS app.
You can't control the GPS directly, but you can specify the desired accuracy of the location: request a very large acceptable distance, and stop updating the location once you get a fix.
Also, if you use region monitoring, the GPS is much less impacted (if at all), because region monitoring primarily relies on cell towers.
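A sketch of that approach, written with the Xamarin.iOS CoreLocation bindings to keep this in C# like the examples above (in native Swift/Objective-C the constant is kCLLocationAccuracyThreeKilometers; authorization prompts are omitted):

    using CoreLocation;

    // Ask for very coarse fixes: with a 3 km accuracy target, the system
    // can often satisfy the request from cell towers / Wi-Fi without
    // spinning up the GPS hardware; we stop after the first fix.
    class CoarseLocator
    {
        readonly CLLocationManager manager = new CLLocationManager();

        public void Start()
        {
            manager.DesiredAccuracy = CLLocation.AccuracyThreeKilometers;
            manager.LocationsUpdated += (s, e) =>
            {
                CLLocation loc = e.Locations[e.Locations.Length - 1];
                System.Console.WriteLine(
                    $"~{loc.Coordinate.Latitude}, {loc.Coordinate.Longitude}");
                manager.StopUpdatingLocation(); // don't let GPS ramp up
            };
            manager.StartUpdatingLocation();
        }
    }

Note that iOS treats desired accuracy only as a hint; it does not guarantee the GPS chip stays off.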