I am making an application that will somewhat work like the Kinect's WebServer WPF sample. I am currently trying to map hand positions to screen coordinates so they can work like cursors. Seems all fine and dandy, so let's look at the InteractionHandPointer documentation:
Gets interaction-adjusted X-coordinate of hand pointer position. 0.0 corresponds to left edge of interaction region and 1.0 corresponds to right edge of interaction region, but values could be outside of this range.
And the same goes for Y. Wow, sounds good. If it's a value between 0 and 1 I can just multiply it by the screen resolution and get my coordinates, right?
Well, turns out it regularly returns values outside that range, going as low as -3 and as high as 4. I also checked out SkeletonPoint, but the usage of meters as scale makes it even harder to use reliably.
So, has anyone had any luck using the InteractionHandPointer? If so, what kind of adjustments did you do?
Best Regards,
João Fernandes
The interaction zone is an area for each hand where the users can comfortably interact. When the value is lower than 0 or greater than 1, the hand of the user is outside the interaction region and you should ignore the movement.
To those wondering: as kallocain said, if the value is greater than 1 or lower than 0, then the hand of the user is outside the interaction region. The fact that the boundaries of this region aren't configurable is quite the bother.
When the values do go outside that range you can indeed choose to ignore them. Instead of doing that, I bounded them to the region like this:
Math.Max(0, Math.Min(hand.X, 1)) // clamp to the [0, 1] interaction range
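Putting it together, here's a minimal sketch of the full mapping from the interaction-adjusted coordinates to a screen position. The `HandPointerMapper` class and parameter names are mine; pass in `hand.X`/`hand.Y` from the `InteractionHandPointer` and whatever resolution your app actually targets:

```csharp
using System;

static class HandPointerMapper
{
    // Maps an interaction-adjusted coordinate (nominally 0..1) to a pixel position.
    // screenWidth/screenHeight are placeholders for your actual display resolution.
    public static (int X, int Y) ToScreen(double handX, double handY,
                                          int screenWidth, int screenHeight)
    {
        double clampedX = Math.Max(0.0, Math.Min(handX, 1.0));
        double clampedY = Math.Max(0.0, Math.Min(handY, 1.0));
        return ((int)(clampedX * screenWidth), (int)(clampedY * screenHeight));
    }
}
```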
I hope this helps someone someday.
I am working on a game that was originally developed by someone else. The problem is that when the player (with the camera) starts running down the road, the buildings are not shown in their regular shape, and as we move forward (closer to the buildings) they regain their original shapes. Sometimes the buildings on either side of the road are not visible to the camera at all (empty space), and when we move closer a building suddenly pops into view. I think it may be a Unity3D setting (rendering, camera or quality), possibly done to improve performance on mobile devices.
Does anybody know what the issue might be or how to resolve it?
Any help will be appreciated. Thanks in advance
This sounds like it's a problem with the available LODs for each building's 3D model.
Basically, 3D games work by having two or three different versions of each 3D model, with varying *L*evels *O*f *D*etail. For example, if you have a house model that uses 500 polygons, you'll probably have another two versions (e.g. 250 polys and 100 polys), which are used depending on the distance between the player and the object. The farther away the player is, the simpler the version used will be.
The issue occurs when developers use automatically generated LOD models, which can look distorted or not appear at all. Unity probably auto-generates them, but I'm not sure where you'll find the settings for this in Unity. However, I've seen 3D models on the Unity Asset Store offering different LODs, so Unity probably gives you the option to set your own. The simplest solution would be to increase the distance at which the LODs change, while the more involved solution would be to create custom lower-poly versions of the 3D models for larger distances.
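If you'd rather control this from code than through the LOD Group component in the inspector, here's a rough sketch of setting up a `LODGroup` with explicit transition heights. The renderer fields and the threshold values are placeholders, not something from the original project:

```csharp
using UnityEngine;

public class BuildingLodSetup : MonoBehaviour
{
    // Placeholder references: assign the high- and low-detail meshes in the inspector.
    public Renderer highDetailRenderer;
    public Renderer lowDetailRenderer;

    void Start()
    {
        var lodGroup = gameObject.AddComponent<LODGroup>();

        // screenRelativeTransitionHeight = how much of the screen the object must
        // cover before this LOD is used. Raising the last value keeps some version
        // of the building visible at greater distances.
        var lods = new LOD[]
        {
            new LOD(0.5f, new[] { highDetailRenderer }), // LOD0: used up close
            new LOD(0.1f, new[] { lowDetailRenderer })   // LOD1: used far away; culled below 0.1
        };

        lodGroup.SetLODs(lods);
        lodGroup.RecalculateBounds();
    }
}
```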
I have resolved the problem. It was caused by the LOD (level of detail) setup used for the objects (buildings) in Unity3D to improve performance on slower devices. LOD provides several levels of detail for an object, which you can adjust to your needs. In my specific case the buildings appeared suddenly because LOD1 had a different (wrong) position: for LOD1 the building was in the wrong place, while for LOD0 it was in the right place. So when the camera looked from a distance it saw LOD1, which was in the wrong place, and hence it saw empty space with no building at the expected position. But when the camera came closer it saw LOD0, in which the building is in the right position, so the buildings seemed to suddenly appear or become visible.
Are there any ideas on how to cluster different body segments using the depth map from the Kinect device? There are two problems: the first is how to distinguish body parts from each other, for example the lower arm from the upper arm; the second is how to identify a body part when part of it is occluded.
I hope someone can guide me in solving this.
Many Thanks for your kind assistance
You can use skeleton recognition middleware (e.g. NiTE) to get the coordinates of the joints of the body (such as shoulder, elbow, fingertip). After reading the Z (depth) value of the joints, you can consider only the points whose Z value is close to the body joints' Z values.
For example, if the middleware tells you that the Z value of the hand is 2000 mm, you can safely assume that all the pixels/points that are part of the fingers and palm will have a Z value around 1900-2100 mm, while the wall or desk behind or in front of the user will have a very different Z value. So you can just disregard any point outside 1900-2100 mm.
You should also disregard any points that are far from the joints in image space. For example, there might be a book that is also exactly 2000 mm from the camera, but located far from the user.
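A hedged sketch of that filtering step; the row-major depth layout, the pixel-space joint coordinates, and the tolerance values are assumptions, so plug in whatever your middleware actually provides:

```csharp
using System.Collections.Generic;

static class HandSegmenter
{
    // Keeps only depth pixels close to the hand joint, both in depth (Z) and in
    // image space, so the wall behind the user and distant objects are dropped.
    // depthMm is a row-major depth image in millimetres; jointX/jointY are the
    // joint's position already projected to pixel coordinates.
    public static List<(int X, int Y)> SegmentHand(
        ushort[] depthMm, int width, int height,
        int jointX, int jointY, int jointZMm,
        int depthToleranceMm = 100, int radiusPx = 80)
    {
        var handPixels = new List<(int X, int Y)>();
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                int z = depthMm[y * width + x];
                if (z == 0) continue; // no depth reading at this pixel

                bool closeInDepth = System.Math.Abs(z - jointZMm) <= depthToleranceMm;
                int dx = x - jointX, dy = y - jointY;
                bool closeToJoint = dx * dx + dy * dy <= radiusPx * radiusPx;

                if (closeInDepth && closeToJoint)
                    handPixels.Add((x, y));
            }
        }
        return handPixels;
    }
}
```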
Need a little help from someone who knows a little about logistics.
I am currently working with an application known as Framework. The application is not really something I am familiar with, but I can figure out how it works. One of the tabs in the application is for expected orders (shipping trucks). Within that, I can see an outbound truck's current location as well as its destination. I am trying to add functionality that would let me see an estimated time of arrival at its current destination plus the drive back to my location. This seems simple enough, but I'm trying to figure out the best way to calculate it. I looked into the Google Distance Matrix API, but I have no need to display a map in the application; all I want is the ETA. I am pretty inexperienced with this kind of thing, so I was hoping someone could point me in the right direction.
Thanks guys.
This may not be the best forum for this question...
It looks like Google Distance Matrix requires you to display the map. An alternative is the open source OSRM project. Natively it's a C++ routing engine that outputs directions and the total route information, so any map display is up to you.
There is a demo and HTTP API hosted on the project site but you will need to check if it's suitable for your usage level.
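For example, here's a quick sketch of asking the public OSRM demo server for a driving duration between two points. The coordinates are placeholders, and in production you'd normally host your own OSRM instance instead of relying on the demo:

```csharp
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class OsrmEtaSample
{
    static async Task Main()
    {
        // OSRM's route service takes lon,lat pairs separated by ';'.
        // overview=false skips the geometry since we only want the duration.
        var url = "https://router.project-osrm.org/route/v1/driving/" +
                  "-87.6298,41.8781;-87.9073,41.9742?overview=false";

        using var http = new HttpClient();
        var json = await http.GetStringAsync(url);

        using var doc = JsonDocument.Parse(json);
        double seconds = doc.RootElement
            .GetProperty("routes")[0]
            .GetProperty("duration")  // route duration in seconds
            .GetDouble();

        Console.WriteLine($"ETA: {TimeSpan.FromSeconds(seconds)}");
    }
}
```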
Just an idea, but depending on the size of your delivery area, and how accurate you want the estimated time, you may be able to keep it all in a database.
Let's assume your delivery area is 10 miles x 10 miles.
So that's 100 square miles. We'll use each square mile as a point.
Do a one-time calculation of how long it takes to get from each point to every other point. You can use the Google Distance Matrix API for this since you're only doing it once.
This will give you 10,000 records, one for every point-to-point time.
So, if your truck is at point 25 and has to get to point 64, you do a lookup and see that it should take about 10 minutes, and the drive from point 64 back to the warehouse (point 10) is 8 minutes. Then you'll know the truck should be back in about 18 minutes.
It's not super accurate, but it might be close enough for your needs. I would be curious if you do implement this method.
Btw, if your delivery area is 100 miles x 100 miles, that's 10,000 points if each point is 1 square mile, which means 100,000,000 point-to-point records. If that's too much, increasing your point size to 2 miles x 2 miles (4 square miles) gives you 2,500 points, or roughly 6,250,000 records.
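Once the matrix is precomputed, the lookup itself is trivial. A rough sketch (the point-numbering scheme and the warehouse point are assumptions):

```csharp
using System;
using System.Collections.Generic;

class EtaLookup
{
    // Precomputed point-to-point drive times, e.g. loaded from the table you
    // filled once with the Distance Matrix API, keyed by (from, to) point ids.
    private readonly Dictionary<(int From, int To), TimeSpan> driveTimes;
    private readonly int warehousePoint;

    public EtaLookup(Dictionary<(int From, int To), TimeSpan> driveTimes, int warehousePoint)
    {
        this.driveTimes = driveTimes;
        this.warehousePoint = warehousePoint;
    }

    // Time until the truck is back = truck -> destination + destination -> warehouse.
    public TimeSpan EstimateReturn(int truckPoint, int destinationPoint)
    {
        return driveTimes[(truckPoint, destinationPoint)]
             + driveTimes[(destinationPoint, warehousePoint)];
    }
}
```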
I'm not sure what is going wrong here. It works in some projects but not in others, and I can't figure out what the difference is between them. To test the problem, I created a point set with a single point at a position I am sure is inside the cube. When I call IndexAtPosition, I sometimes get obviously wrong answers: for example, inline or crossline indexes that are negative or way beyond the maximum index, and a z-dimension index that is very unrealistic as well.
I am fairly certain that my data is all consistent, i.e. same domain and CRS. There must be some settings I'm not checking.
My guess is that your point is something like x, y, 1000 m in depth, while the cube is probably in the time domain. So when you try to find that point in the cube, you are effectively looking up x, y, 1000 seconds, which is very far out of range and gives you a crazy number for your k index (super high). Depending on the angle from north, your i and j indices could be crazy too, that far from reality. You need to have your point in time, or some way to convert it from depth to time.
Unfortunately, I made a mistake. One of my data points was in fact outside of the cube area when the cube was rotated.
We're building a GIS interface to display GPS track data, e.g. imagine the raw data set from a guy wandering around a neighborhood on a bike for an hour. A set of data like this with perhaps a new point recorded every 5 seconds, will be large and displaying it in a browser or a handheld device will be challenging. Also, displaying every single point is usually not necessary since a user can't visually resolve that much data anyway.
So for performance reasons we are looking for algorithms that are good at 'reducing' data like this so that the number of points being displayed is reduced significantly but in such a way that it doesn't risk data mis-interpretation. For example, if our fictional bike rider stops for a drink, we certainly don't want to draw 100 lat/lon points in a cluster around the 7-Eleven.
We are aware of clustering, which is good for when looking at a bunch of disconnected points, however what we need is something that applies to tracks as described above. Thanks.
A more scientific and perhaps more math-heavy solution is to use the Ramer-Douglas-Peucker algorithm to generalize your path. I used it when I studied for my Master of Surveying, so it's a proven thing. :-)
Given your path and the maximum deviation from it that you can tolerate, it simplifies the path by reducing the number of points.
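In case it helps, here's a compact sketch of Ramer-Douglas-Peucker over 2D points. It's the standard formulation with a perpendicular-distance tolerance, not tuned for lat/lon; for real GPS data you'd usually project to a planar coordinate system first:

```csharp
using System;
using System.Collections.Generic;

static class RamerDouglasPeucker
{
    // Simplifies a polyline so that every removed point lies within 'epsilon'
    // of the simplified path.
    public static List<(double X, double Y)> Simplify(
        List<(double X, double Y)> points, double epsilon)
    {
        if (points.Count < 3)
            return new List<(double X, double Y)>(points);

        // Find the point farthest from the segment joining the two endpoints.
        int farthest = 0;
        double maxDistance = 0;
        for (int i = 1; i < points.Count - 1; i++)
        {
            double d = PerpendicularDistance(points[i], points[0], points[points.Count - 1]);
            if (d > maxDistance) { maxDistance = d; farthest = i; }
        }

        // If everything is within tolerance, the whole run collapses to its endpoints.
        if (maxDistance <= epsilon)
            return new List<(double X, double Y)> { points[0], points[points.Count - 1] };

        // Otherwise keep that point and recurse on both halves.
        var left = Simplify(points.GetRange(0, farthest + 1), epsilon);
        var right = Simplify(points.GetRange(farthest, points.Count - farthest), epsilon);

        left.RemoveAt(left.Count - 1); // avoid duplicating the split point
        left.AddRange(right);
        return left;
    }

    private static double PerpendicularDistance(
        (double X, double Y) p, (double X, double Y) a, (double X, double Y) b)
    {
        double dx = b.X - a.X, dy = b.Y - a.Y;
        double length = Math.Sqrt(dx * dx + dy * dy);
        if (length == 0)
            return Math.Sqrt((p.X - a.X) * (p.X - a.X) + (p.Y - a.Y) * (p.Y - a.Y));
        // Twice the triangle (a, b, p) area divided by the base length |ab|.
        return Math.Abs(dy * p.X - dx * p.Y + b.X * a.Y - b.Y * a.X) / length;
    }
}
```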
Typically the best way of doing that is:
1. Determine the minimum number of screen pixels you want between displayed GPS points.
2. Determine the distance represented by each pixel at the current zoom level.
3. Multiply answer 1 by answer 2 to get the minimum distance you want between displayed coordinates.
4. Starting from the first coordinate in the journey path, read each subsequent coordinate until you've reached the required minimum distance from the current point, keep that one, and repeat (sketched in code below).
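A rough sketch of step 4, assuming steps 1-3 already gave you the minimum distance in metres (the haversine helper is a reasonable approximation at these scales):

```csharp
using System;
using System.Collections.Generic;

static class TrackThinner
{
    // Keeps a point only once the track has moved at least minDistanceMeters
    // away from the last point that was kept.
    public static List<(double Lat, double Lon)> Thin(
        IReadOnlyList<(double Lat, double Lon)> track, double minDistanceMeters)
    {
        var result = new List<(double Lat, double Lon)>();
        if (track.Count == 0) return result;

        var last = track[0];
        result.Add(last);

        for (int i = 1; i < track.Count; i++)
        {
            if (HaversineMeters(last, track[i]) >= minDistanceMeters)
            {
                result.Add(track[i]);
                last = track[i];
            }
        }
        return result;
    }

    private static double HaversineMeters((double Lat, double Lon) a, (double Lat, double Lon) b)
    {
        const double earthRadius = 6371000.0; // metres
        double dLat = ToRadians(b.Lat - a.Lat);
        double dLon = ToRadians(b.Lon - a.Lon);
        double h = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                   Math.Cos(ToRadians(a.Lat)) * Math.Cos(ToRadians(b.Lat)) *
                   Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
        return 2 * earthRadius * Math.Asin(Math.Sqrt(h));
    }

    private static double ToRadians(double degrees) => degrees * Math.PI / 180.0;
}
```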