I have two layers: a polygon layer defining 20 study sites and a raster layer defining depth. Using the zonal statistics tool, I calculated mean depth and standard deviation per study site. Now I wish to extract those depth pixels (per site) that exceed 1 standard deviation from the mean of the depth values for each site. The goal is to define the deepest areas of habitat per site. Any idea how to do this for all 20 sites simultaneously (since they each have a distinct mean and standard deviation value)?
I was able to devise a method to accomplish this task (though likely not in the most elegant fashion). As described above, I used zonal statistics to create a new raster with the mean depth per site (delineated by the polygon layer), and then used the same tool to create a raster representing the standard deviation per site. Then, in the raster calculator, I set to NULL all values that were less than 1 standard deviation from the mean depth per site: SetNull("depth" > "mean_depth" - "sd_depth", "depth"). This created a new raster containing only those pixel values more than 1 standard deviation from the mean depth (i.e. the deepest habitat per site). Note: because the depth values were negative, we used > (greater than).
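If someone wants to do the same per-site thresholding outside the raster calculator, here is a minimal Python/NumPy sketch of the idea; the array names (depth, zones) are illustrative, and the rasters are assumed to already be read into aligned arrays (e.g. with rasterio), with a zone id of 0 meaning "outside any site".

import numpy as np

def deepest_pixels(depth, zones):
    # Keep only pixels more than 1 SD deeper than their site's mean depth.
    out = np.full(depth.shape, np.nan, dtype=float)
    for zone_id in np.unique(zones[zones > 0]):
        in_zone = zones == zone_id
        mean, sd = depth[in_zone].mean(), depth[in_zone].std()
        # Depth values are negative, so "deeper" means less than mean - sd.
        keep = in_zone & (depth < mean - sd)
        out[keep] = depth[keep]
    return out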
Basically, what I'm looking for is an algorithm or an extension similar to least cost analysis, but instead of using points on top of a DEM to create a path (line vector) between the points, I wish to create Thiessen (Voronoi) polygons (centered on points) whose spatial limits would be defined by the DEM.
So for example, a border between 2 polygons would be determined by the least cost analysis between the center points of the 2 polygons. The aim would then be, instead of getting a set of Thiessen polygons with arrow-straight borders (like in the pic), to create a set of polygons whose limits would be determined by the DEM (relief). Sort of like a watershed centered on a single point.
Btw, it would be great if there was a solution applicable in QGIS.
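For illustration only, here is a rough sketch of that cost-allocation idea using scikit-image's minimum-cost-path tools as a stand-in for the least-cost analysis; the DEM, the seed coordinates, and the cost definition are all placeholder assumptions (in QGIS something similar could likely be built from GRASS r.cost via the Processing toolbox, which is not shown here).

import numpy as np
from skimage.graph import MCP_Geometric

dem = np.random.default_rng(0).random((200, 200)) * 100   # placeholder DEM
seeds = [(20, 30), (150, 60), (90, 170)]                   # placeholder centre points

# Use elevation differences (a slope-like quantity) as the traversal cost.
dy, dx = np.gradient(dem)
cost_surface = 1.0 + np.abs(dy) + np.abs(dx)

# Accumulated least cost from each seed to every cell.
cost_stack = []
for seed in seeds:
    mcp = MCP_Geometric(cost_surface)
    cumulative, _ = mcp.find_costs([seed])
    cost_stack.append(cumulative)

# Each cell is allocated to the seed it can reach most cheaply; the borders of
# these allocation zones are the DEM-driven "Thiessen" boundaries described above.
allocation = np.argmin(np.stack(cost_stack), axis=0)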
Thanks!
I need some help figuring out the Volume Voxels Per Meter and Resolution settings in Kinect Fusion, mostly how (and whether at all) they interact with the Depth Threshold settings in the Kinect Fusion Explorer program. What I don't get is: if the depth threshold minimum is increased and the maximum is reduced, does that smaller range increase the overall precision of the scanned volume, or does it stay the same?
Say I set Kinect Fusion's depth threshold minimum to 2 m and the maximum to 3 m, thus setting the scanned range to 3 m - 2 m = 1 m. Does a volume voxels per meter setting of, say, 256 and a resolution of also 256 then mean that I would get a voxel depth precision of 1 m / 256 ≈ 0.004 m ≈ 0.4 cm? Or is the resolution applicable only to the complete Kinect depth range rather than the one set via the depth threshold? Also, how are width and height affected by the depth threshold settings, and how do I calculate precision along those two remaining axes?
Thanks in advance
P.S.
If the volume voxel resolution is set to the maximum on all three axes (768x768x768), what is the minimum amount of GPU memory needed to make Kinect Fusion work?
Answering an old topic, since there is no other answer:
A. Simple Answer:
The depth threshold settings simply decide which region of the depth map you are interested in. Any value below the minimum depth threshold or above the maximum depth threshold is simply replaced with 0 during depth map generation.
B. Detailed Answer:
Volume voxels per meter: this determines the physical size represented by a single voxel. So 1000 mm / 256 (voxels per meter) ≈ 3.9 mm per voxel.
(See: PCL documentation)
Voxel Resolution: The number of voxels in the volume you are constructing.
So:
Voxel resolution / voxels per meter = edge length of the reconstruction volume (in meters)
E.g.: 512 voxels / 256 vpm = 2.0 m per side of the reconstruction cube, given that the number of voxels per side is the same (each axis can be defined independently).
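As a quick sanity check of that arithmetic, here is a minimal sketch in Python (the function name and arguments are illustrative only, not Kinect SDK calls):

def reconstruction_volume(voxels_per_meter, voxels_x, voxels_y, voxels_z):
    # Physical size of one voxel and edge lengths of the reconstruction cube.
    voxel_size_mm = 1000.0 / voxels_per_meter        # e.g. 1000 / 256 ~ 3.9 mm per voxel
    extent_m = (voxels_x / voxels_per_meter,
                voxels_y / voxels_per_meter,
                voxels_z / voxels_per_meter)
    return voxel_size_mm, extent_m

print(reconstruction_volume(256, 512, 512, 512))     # (~3.9 mm, (2.0, 2.0, 2.0))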
If you have the Kinect SDK installed, see the descriptions of the following variables:
minDepthClip = FusionDepthProcessor.DefaultMinimumDepth;
maxDepthClip = FusionDepthProcessor.DefaultMaximumDepth;
voxelsPerMeter; voxelsX; voxelsY; voxelsZ;
So these values do not depend on the depth threshold values (or vice versa).
A good example of using the depth threshold values is in the great video by Daniel Shiffman (Kinect & Processing).
Is there any available DM script that can compare two images and determine how different they are?
I mean a script that can compare two or more images and determine their similarity: for example, if 95% of the area of one image is the same as in another image, then the similarity of these two images is 95%.
The script should be able to compare the brightness and contrast distributions of the images.
Thanks,
This question is a bit ill-defined, as "similarity" between images depends a lot on what you want.
If by "95% of the area is the same" you mean that 95% of the pixels are of identical value in images A & B, you can simply create a mask and sum() it to count the number of pixels, i.e.:
sum( abs(A-B)==0 ? 1 : 0 )
However, this will utterly fail if images A & B are shifted with respect to each other by even a single pixel. It will also fail if A & B have the same contrast but different absolute values.
I guess the intended question was to find the similarity of two images in a fuzzy way.
For that, one way is to do cross-correlation. DM has this function built in. Like this:
image xcorr= CrossCorrelate(ref,img)
From xcorr, the peak position gives the x- and y-shift between the two images, and the peak intensity gives the "similarity" of the two.
If you know there is no shift between the two, you can just take the sum of the pixel-wise multiplication:
number similarity1=sum(img1*img2)
Another way to measure similarity is to calculate the Euclidean distance between the two:
number similarity2 = sqrt(sum((img1-img2)**2))
"similarity2" calculates the "pure" similarity. "similarity1" is the pure similarity plus the mean intensity of img1 and img2. The difference is essentially this,
(a-b)**2=a**2+b**2-2*a*b.
The left term is "similarity2", the last term on the right is the "crosscorrelation" or "similarity1".
I think "similarity1" is called cross-correlation, "similarity2" is called correlation coefficient.
In example comparing two diffraction patterns, if you want to compute the degree of similarity, use "similarity2". If you want to compute the degree of similarity plus a certain character of the diffraction pattern, use "similarity1".
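For anyone who wants to see the relationship between the two measures outside DM, here is a small NumPy illustration; the images are synthetic placeholders and the variable names simply mirror the discussion above.

import numpy as np

rng = np.random.default_rng(0)
img1 = rng.random((64, 64))
img2 = img1 + 0.05 * rng.random((64, 64))          # a slightly perturbed copy

similarity1 = np.sum(img1 * img2)                  # cross-correlation term (no shift)
similarity2 = np.sqrt(np.sum((img1 - img2) ** 2))  # Euclidean distance

# The identity (a-b)**2 = a**2 + b**2 - 2*a*b, summed over all pixels:
assert np.isclose(similarity2 ** 2,
                  np.sum(img1 ** 2) + np.sum(img2 ** 2) - 2 * similarity1)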
I am now implementing an email filtering application using the Naive Bayes algorithm. My application uses the Spambase Data Set from the UCI Machine Learning Repository. Since the attributes are continuous, I calculate the probability using the probability density function (PDF). However, when I evaluate the data using k-fold cross validation, a training set may contain only 0 for one of its attributes. As a result, I get a standard deviation of 0, the PDF returns NaN, and a huge number of spam messages are not correctly classified with that training set. What should I do to fix the problem?
You could use a discrete PDF, which will always be bounded.
Alternatively, simply ignore any attribute with zero variance. There is no point in including distributions with zero variance, because they won't actually do anything. For example, you want to know how old I am, and then I tell you that I live on planet Earth. That shouldn't change your estimate, because every single piece of data you have is for people on planet Earth.
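A minimal sketch of the "ignore zero-variance attributes" suggestion for a Gaussian PDF, in Python; the array names are illustrative and not taken from the asker's code.

import numpy as np

def class_log_likelihood(x, X_class):
    # X_class: training examples of one class, shape (n_samples, n_features);
    # x: a single example to score.
    mean = X_class.mean(axis=0)
    std = X_class.std(axis=0)
    valid = std > 0                      # drop attributes with zero variance
    x, mean, std = x[valid], mean[valid], std[valid]
    # Sum of log Gaussian PDFs over the remaining attributes (no NaNs possible).
    return np.sum(-0.5 * np.log(2 * np.pi * std ** 2)
                  - (x - mean) ** 2 / (2 * std ** 2))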
I have a set of vectors in a multidimensional space (possibly several thousand dimensions). In this space, I can calculate the distance between 2 vectors (as the cosine of the angle between them, if it matters). What I want is to visualize these vectors while preserving those distances. That is, if vector a is closer to vector b than to vector c in the multidimensional space, it must also be closer to it on the 2-dimensional plot. Is there any kind of diagram that can clearly depict this?
I don't think so. Imagine any two-dimensional picture of a tetrahedron: there is no way of depicting its four vertices in two dimensions with equal distances from each other. So you will have a hard time trying to depict more than three n-dimensional vectors in 2 dimensions while conserving their mutual distances.
(But right now I can't think of a rigorous proof.)
Update:
OK, second idea, maybe it's dumb: try to find clusters of closely associated objects/texts, then calculate the center or mean vector of each cluster. That way you can reduce the problem space. First, find a 2D arrangement of the clusters that preserves their relative distances. Then insert the original vectors, accounting only for their relative distances within a cluster and their distance to the centers of the two or three closest clusters.
This approach will be OK for a large number of vectors, but it will not be exact: there will always be somewhat similar vectors that end up in distant places.
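One way to realize the cluster-then-embed idea is sketched below, using scikit-learn's KMeans and MDS as stand-ins (the answer does not prescribe particular algorithms, and the data X, the cluster count, and the cosine metric are placeholders taken from the question).

import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import MDS
from sklearn.metrics import pairwise_distances

X = np.random.default_rng(0).random((500, 2000))   # placeholder high-dimensional vectors

# Step 1: find clusters and their mean vectors.
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)
centers = np.array([X[labels == k].mean(axis=0) for k in range(10)])

# Step 2: place the cluster centres in 2-D, approximately preserving their
# pairwise cosine distances (MDS minimises the distortion of the given distances).
D = pairwise_distances(centers, metric="cosine")
centers_2d = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(D)
# The individual vectors would then be positioned relative to centers_2d.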