Can EDSDK provide the camera's pixel size?

Is there any way to get the physical pixel size (e.g. in µm) from the EDSDK? Lots of rather useless information is provided, like the camera's serial number, but an elementary property of the camera like the pixel size is hidden, or I was too stupid to find it. I need this information for astrophotography and FITS file conversion. If the SDK does not provide this parameter, I will need to set up a database of Canon cameras that provides this information (sketched below), but that would really suck!
I have searched through the whole SDK documentation, but only the number of pixels in the x- and y-directions seems to be provided via the GetProperty commands.
Thank you for your help!
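If the SDK really doesn't expose the pixel pitch, the database fallback mentioned above can be as small as a lookup from the model name (which EDSDK does report, e.g. via kEdsPropID_ProductName) to the sensor width and horizontal pixel count. A minimal sketch in C++; the two entries are examples and should be verified against Canon's specifications:

    #include <iostream>
    #include <map>
    #include <string>

    // Fallback: a hand-maintained database mapping the camera model name
    // (as reported by the SDK) to its sensor width in mm and native
    // horizontal pixel count. Pixel pitch follows by simple division.
    // Entries are examples; verify against the manufacturer's specs.
    struct SensorSpec {
        double sensorWidthMm;  // physical sensor width in mm
        int    pixelsX;        // native horizontal resolution
    };

    double pixelPitchMicrons(const SensorSpec& s) {
        return s.sensorWidthMm * 1000.0 / s.pixelsX;  // um per pixel
    }

    int main() {
        std::map<std::string, SensorSpec> db = {
            {"Canon EOS 5D Mark III", {36.0, 5760}},  // full frame
            {"Canon EOS 600D",        {22.3, 5184}},  // APS-C
        };
        for (const auto& [model, spec] : db)
            std::cout << model << ": " << pixelPitchMicrons(spec) << " um/pixel\n";
    }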

Related

Create a GeoTIFF file from an undocumented TIFF image

I have an undocumented TIFF image which I need to use with a piece of software that can read only GeoTIFF files. My simplest idea was to pretend the image is at 0°N, 0°W with a pixel size of 0.00000899928° (about 1 m at the equator) in both directions; a sketch of that is below.
I have read the thread here but I was unable to reproduce the answer.
Thanks for helping. I am a dummy in geodesy, GIS and the like.
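For what it's worth, the naive stamping described above can be written with GDAL's C++ API roughly as follows. This is a sketch (the filename is an assumption), and as the answer below explains, the degree-to-metre equivalence only holds near the equator:

    #include "gdal_priv.h"
    #include "ogr_spatialref.h"

    int main() {
        GDALAllRegister();
        // Open the bare TIFF in update mode (filename is an assumption).
        GDALDataset* ds =
            static_cast<GDALDataset*>(GDALOpen("image.tif", GA_Update));
        if (!ds) return 1;

        // Geotransform: origin at 0N 0W, ~1 m pixels expressed in degrees.
        // This degree/metre equivalence only holds near the equator.
        double gt[6] = {0.0, 0.00000899928, 0.0, 0.0, 0.0, -0.00000899928};
        ds->SetGeoTransform(gt);

        // Stamp a WGS84 geographic coordinate system.
        OGRSpatialReference srs;
        srs.SetWellKnownGeogCS("WGS84");
        char* wkt = nullptr;
        srs.exportToWkt(&wkt);
        ds->SetProjection(wkt);
        CPLFree(wkt);

        GDALClose(ds);
        return 0;
    }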
You are attempting to georeference a raster, which is often a difficult task with multiple techniques. It's not possible to provide an answer to your question given the information you have supplied. Also, never assume that lengths in degrees can be converted to lengths in metres (the Earth isn't flat).
Search around GIS.SE for ideas, e.g. using the [georeferencing] tag. There are tools available with QGIS to help manually georeference rasters against other geospatial data.

Multiple resolutions on Windows Phone 8.1

I'm moving to WP8.1 and there are different resolutions to be supported so I need to create the correct resources for each scaling factor.
Following this guide I've found out that I can use a simple naming convention and the appropriate image will be loaded automatically, without having to manage it from code.
What the guide doesn't say, and I can't seem to find it anywhere, is what the original size should be.
I need to know which resolution corresponds to the 100% scaling factor so that I can calculate the sizes of the scaled images.
Do you have any idea on this?
There's a lot of documentation for Windows Store apps (I'm not using Silverlight), but everything always seems to relate to desktops/tablets, even when the phone icon is shown.
Please refer to section "1.a.i" of the guideline you mentioned (http://msdn.microsoft.com/library/windows/apps/xaml/hh965325.aspx).
You need to create your images at 96 DPI, treating that as the 100% scale (see the sketch below for the arithmetic).
I also recommend the guidelines provided at http://msdn.microsoft.com/en-US/library/windows/apps/xaml/hh465362.aspx. Note the following statement from that link:
"Windows Runtime apps (that run on Windows, Windows Phone, or both) are automatically scaled by the system to ensure consistent readability and functionality regardless of a screen's pixel density."
And also this one:
"Windows determines which scaling plateau to use based on the physical screen size, the screen resolution, the DPI of the screen, and form factor."
Hope this helps.
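To make the arithmetic concrete, here is a small sketch assuming the Windows Phone 8.1 scale plateaus of 100, 140 and 240 and the scale-NNN asset naming convention; the base size is an example:

    #include <cmath>
    #include <cstdio>

    // Given an asset's size at the 100% plateau (authored at 96 DPI),
    // compute the pixel sizes to produce for the other WP8.1 plateaus.
    int main() {
        const int base = 48;                   // 100% asset, e.g. 48x48 px
        const int scales[] = {100, 140, 240};  // scale-100 / scale-140 / scale-240
        for (int s : scales) {
            int px = static_cast<int>(std::ceil(base * s / 100.0));
            std::printf("img.scale-%d.png -> %dx%d px\n", s, px, px);
        }
    }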

Kinect SDK skeletonization method

I was wondering if there's a way to modify the depth map prior to sending it to the skeletonization algorithm used by the Kinect; for example, if we want to run the skeletonization on the output of a segmented depth image. So far I have reviewed the methods in the SDK but I haven't been able to find a skeletonization method exposed. It's like you can either turn the skeleton on or off, but you have no control over its inputs.
If anyone has any idea regarding this topic I will be much obliged.
Shamita: skeletonization means tracking the joints of the user in real time. I edit because I can't comment (not enough reputation).
All the joints give a depth coordinate, and I don't think you can mess with the Kinect hardware input stream. But you can categorize the joints by depth segments. For example, with the live stream you assign each joint to the corresponding category: if its depth is below 10 and above 5, it is in category A (see the sketch below). This can be done on the live stream itself because it is just a simple calculation.
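Here is a minimal sketch of that depth-band categorization in C++ (plain code, not tied to any SDK calls; the band boundaries are the example values above):

    #include <iostream>
    #include <vector>

    // Assign a category label to a joint based on its depth coordinate,
    // following the banding idea above: depth in (5, 10) -> category A.
    // The boundaries are the example values, not anything from the SDK.
    char categorize(float depth) {
        if (depth > 5.0f && depth < 10.0f) return 'A';
        if (depth <= 5.0f)                 return 'B';
        return 'C';
    }

    int main() {
        // Depth coordinates as they might arrive per joint from the live stream.
        std::vector<float> jointDepths = {2.5f, 6.0f, 11.2f};
        for (float d : jointDepths)
            std::cout << "depth " << d << " -> category " << categorize(d) << '\n';
    }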

Is it possible to recognize all objects in a room with Microsoft Kinect?

I have a project where I have to recognize an entire room so I can calculate the distances between objects (big ones, e.g. bed, table, etc.) and a person in that room. Is something like that possible using Microsoft Kinect?
Thank you!
Kinect provides you with the following:
Depth stream
Color stream
Skeleton information
It's up to you how you use this data.
To answer your question: the official Microsoft Kinect SDK doesn't provide shape detection out of the box. But it does provide skeleton data and face tracking, with which you can detect the distance of a user from the Kinect.
Also, by mapping the color stream to the depth stream you can detect how far a particular pixel is from the Kinect (see the sketch below). In your implementation, if the objects have unique characteristics like color, shape and size, you can probably detect them and also measure their distances.
OpenCV is one of the libraries I use for computer vision and the like.
Again, it's up to you how you use this data.
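As a rough sketch of the per-pixel distance idea, assuming the depth frame has already been unpacked to millimetre values (the exact unpacking depends on the SDK version; the sample value here is fabricated for illustration):

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Sketch: given a depth frame as millimetre values, report how far
    // a particular pixel is from the sensor.
    int main() {
        const int W = 640, H = 480;                      // typical depth resolution
        std::vector<std::uint16_t> depthMm(W * H, 0);    // filled from the depth stream
        depthMm[240 * W + 320] = 1830;                   // fake sample for illustration

        int x = 320, y = 240;                            // pixel of interest
        std::uint16_t mm = depthMm[y * W + x];
        std::printf("pixel (%d,%d) is %.2f m from the sensor\n", x, y, mm / 1000.0);
    }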
The Kinect camera provides depth and consequently 3D information (a point cloud) about matte objects in the range 0.5-10 meters. With this information it is possible to segment out the floor of the room (by fitting a plane), and possibly the walls and the ceiling. This step is important since these surfaces often connect separate objects, making them one big object.
The remaining parts of the point cloud can be segmented by depth if they don't touch each other physically. Using color, one can separate the objects even further. Note that we implicitly define an object as a 3D-dense and color-consistent entity, while other definitions are also possible.
As soon as you have your objects segmented, you can measure the distances between segments, analyse their shape, recognize artifacts or humans, etc. To the best of my knowledge, however, the skeleton library can only recognize humans after they have moved for a few seconds. As an illustration, a simple depth map can be broken into a few segments using depth but not color information (see the sketch below).
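A minimal sketch of such depth-only segmentation, assuming floor/wall removal has already been done and that depth is given in millimetres with 0 meaning no reading; a breadth-first flood fill groups neighbouring pixels whose depths differ by less than a tolerance:

    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>
    #include <queue>
    #include <vector>

    // Pixels join the same segment when neighbouring depths differ by
    // less than tolMm, i.e. when the surfaces physically touch.
    std::vector<int> segmentByDepth(const std::vector<std::uint16_t>& depth,
                                    int w, int h, int tolMm) {
        std::vector<int> label(w * h, -1);   // -1 = unlabelled / no reading
        int next = 0;
        for (int start = 0; start < w * h; ++start) {
            if (label[start] != -1 || depth[start] == 0) continue;
            std::queue<int> q;               // flood fill from this seed
            q.push(start);
            label[start] = next;
            while (!q.empty()) {
                int p = q.front(); q.pop();
                int px = p % w, py = p / w;
                const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
                for (int k = 0; k < 4; ++k) {
                    int nx = px + dx[k], ny = py + dy[k];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    int n = ny * w + nx;
                    if (label[n] != -1 || depth[n] == 0) continue;
                    if (std::abs(int(depth[n]) - int(depth[p])) < tolMm) {
                        label[n] = next;
                        q.push(n);
                    }
                }
            }
            ++next;
        }
        return label;
    }

    int main() {
        // Toy 4x3 depth map: two surfaces separated by missing readings.
        const int w = 4, h = 3;
        std::vector<std::uint16_t> depth = {
            1000, 1005,    0, 3000,
            1002, 1008,    0, 3004,
            1001,    0,    0, 3002,
        };
        std::vector<int> labels = segmentByDepth(depth, w, h, 20);
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) std::printf("%2d ", labels[y * w + x]);
            std::printf("\n");
        }
    }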

Indoor positioning

I am trying to get indoor GPS by orienting my floor plan with the actual building from Google Maps. I know perfect accuracy is not possible. Any idea how to do this? Do the maps need to be converted to KML format?
Forget that!
Only with luck will you get GPS signals indoors, probably only near a window, and even then the error is likely to be larger than the size of your building.
You can only try to get coordinates outside, at the corners of the building.
For precise measurements you would need some averaging of the readings, which only a few GPS devices offer. For less precision, take a single reading, or measure at different hours or on different days.
Otherwise, you should think about geolocation using Wi-Fi/HF and any other wireless/radio sources that you can locate precisely, since you probably installed them yourself, or at least someone from your company/service is responsible for them and could give you the complete list with coordinates. Then, once you have the radio locations, you can geolocate devices using radio propagation (see the sketch below).
I know that's not the answer you were looking for, but think about it as an alternate one if you really need to locate people inside your building.
PS: I did this at work and it works pretty well (except in some areas where the radio emitters are broken).
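For illustration, here is a minimal sketch of the radio-propagation idea: RSSI is converted to distance with a log-distance path-loss model, and the position is trilaterated from three access points. The transmit power, path-loss exponent and AP layout are all assumed calibration values, not something you can use off the shelf:

    #include <cmath>
    #include <cstdio>

    // Log-distance path-loss model: distance grows exponentially with the
    // dB gap between the 1 m reference power and the received power.
    // txPowerDbm and pathLossExp are assumed per-site calibration values.
    double rssiToMetres(double rssiDbm, double txPowerDbm = -40.0,
                        double pathLossExp = 2.5) {
        return std::pow(10.0, (txPowerDbm - rssiDbm) / (10.0 * pathLossExp));
    }

    struct P { double x, y; };

    // 2D trilateration from exactly three anchors by linearizing the
    // circle equations (subtracting one equation from the next two).
    P trilaterate(P a, double ra, P b, double rb, P c, double rc) {
        double A = 2 * (b.x - a.x), B = 2 * (b.y - a.y);
        double C = ra*ra - rb*rb - a.x*a.x + b.x*b.x - a.y*a.y + b.y*b.y;
        double D = 2 * (c.x - b.x), E = 2 * (c.y - b.y);
        double F = rb*rb - rc*rc - b.x*b.x + c.x*c.x - b.y*b.y + c.y*c.y;
        double det = A * E - B * D;
        return {(C * E - B * F) / det, (A * F - C * D) / det};
    }

    int main() {
        // Three access points at known positions (assumed layout, metres).
        P ap1{0, 0}, ap2{10, 0}, ap3{0, 10};
        double r1 = rssiToMetres(-52), r2 = rssiToMetres(-60), r3 = rssiToMetres(-60);
        P pos = trilaterate(ap1, r1, ap2, r2, ap3, r3);
        std::printf("estimated position: (%.1f, %.1f) m\n", pos.x, pos.y);
    }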