How to read the calibration in a line profile - dm-script

I would like to read the calibration factor of a line profile. It is stored in "Image Display Info - Calibration". I use the function GetUnitsH (image, num), but I only obtain the channel number, not the calibrated position (in nanometers).
Thank you in advance.

The commands you are seeking are:
Number ImageGetDimensionScale( BasicImage, Number dimension )
Number ImageGetDimensionOrigin( BasicImage, Number dimension )
String ImageGetDimensionUnitString( BasicImage, Number dimension )
Number ImageGetIntensityScale( BasicImage )
Number ImageGetIntensityOrigin( BasicImage )
String ImageGetIntensityUnitString( BasicImage )
These will give you the calibrations as shown in the image-display.
In order to convert between calibrated and uncalibrated units, you have to do the according maths yourself.
And yes, each of the "Get" commands has an according "Set" command as well, if you need it.
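The conversion maths itself is a one-liner. Here is a sketch in Python-style pseudocode (the convention shown is an assumption, not a DM-script API; check how your GMS version applies origin and scale before relying on it):

```python
# Hypothetical helpers, not DM-script API calls.
# Assumed convention: calibrated = (index - origin) * scale,
# with origin in channels and scale in calibrated units per channel.
def channel_to_calibrated(i, origin, scale):
    return (i - origin) * scale

def calibrated_to_channel(pos, origin, scale):
    return pos / scale + origin

assert channel_to_calibrated(10, 0.0, 0.5) == 5.0   # channel 10 -> 5 nm
assert calibrated_to_channel(5.0, 0.0, 0.5) == 10   # and back again
```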
One thing to watch out for is:
Do you really look at your image, or at a copy of it?
In particular, make sure that you use := and not = when assigning image variables to images.
Example:
This will work:
Image img := GetFrontImage()
number scale_x = img.ImageGetDimensionScale(0)
Result("\n Scale X:" + scale_x )
This will not work:
Image img = GetFrontImage()
number scale_x = img.ImageGetDimensionScale(0)
Result("\n Scale X:" + scale_x )
In the second case, one gets the reference to the front-most image, but the = will just copy the values (and not the calibrations or other metadata) into a new image.

Related

Plotting an exponential function given one parameter

I'm fairly new to Python, so bear with me. I have plotted a histogram using some generated data. This data has very many points, stored in the variable vals. I have then plotted a histogram with these values, limited so that only values between 104 and 155 are taken into account. This has been done as follows:
bin_heights, bin_edges = np.histogram(vals, range=[104, 155], bins=30)
bin_centres = (bin_edges[:-1] + bin_edges[1:])/2.
plt.errorbar(bin_centres, bin_heights, np.sqrt(bin_heights), fmt=',', capsize=2)
plt.xlabel(r"$m_{\gamma\gamma} (GeV)$")
plt.ylabel("Number of entries")
plt.show()
Giving the above plot:
My next step is to take into account values from vals which are less than 120. I have done this as follows:
background_data=[j for j in vals if j <= 120] #to avoid taking the signal bump, upper limit of 120 GeV set
I need to plot a curve on the same plot as the histogram, which follows the form B(x) = Ae^(-x/λ)
I then estimated a value of λ using the maximum likelihood estimator formula:
background_data=[j for j in vals if j <= 120] #to avoid taking the signal bump, upper limit of 120 GeV set
#print(background_data)
N_background=len(background_data)
print(N_background)
sigma_background_data=sum(background_data)
print(sigma_background_data)
lamb = (sigma_background_data)/(N_background) #maximum likelihood estimator for lambda
print('lambda estimate is', lamb)
where lamb = λ. I got a value of roughly lamb = 27.75, which I know is correct. I now need to get an estimate for A.
I have been advised to do this as follows:
Given a value of λ, find A by scaling the PDF to the data such that the area beneath
the scaled PDF has equal area to the data
I'm not quite sure what this means, or how I'd go about trying to do this. PDF means probability density function. I assume an integration will have to take place, so to get the area under the data (vals), I have done this:
data_area= integrate.cumtrapz(background_data, x=None, dx=1.0)
print(data_area)
plt.plot(background_data, data_area)
However, this gives me an error
ValueError: x and y must have same first dimension, but have shapes (981555,) and (981554,)
I'm not sure how to fix it. The end result should be something like:
See the cumtrapz docs:
Returns: ... If initial is None, the shape is such that the axis of integration has one less value than y. If initial is given, the shape is equal to that of y.
So you either pass an initial value like
data_area = integrate.cumtrapz(background_data, x=None, dx=1.0, initial = 0.0)
or discard the first value of the background_data:
plt.plot(background_data[1:], data_area)
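To see why initial fixes the shape mismatch, here is a minimal numpy-only sketch that mimics cumtrapz's behavior on hypothetical toy data (the real call lives in scipy.integrate):

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0, 4.0])                 # toy data
# trapezoidal cumulative integral, as cumtrapz computes it
area = np.cumsum((y[1:] + y[:-1]) / 2.0)
assert area.shape[0] == y.shape[0] - 1             # one element shorter than y

# initial=0.0 simply prepends the starting area, restoring the length
area0 = np.concatenate(([0.0], area))
assert area0.shape == y.shape
assert area0[0] == 0.0
```

Each trapezoid needs two sample points, so n samples yield n-1 cumulative areas; either prepend the zero starting area or drop the first x value when plotting.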

How to arbitrarily extract a specific subset of images from a dataset?

Recently I have been planning to manipulate a stack of images; the goal is to extract a specific subset of slices from it (for example only even, only odd, or arbitrary indices) and then save them into another dataset.
In DM, there are a number of helpful functions in the Volume menu, but unfortunately they cannot really fulfill what I want to do.
I am just wondering whether this idea can be realized via scripting.
Many thanks for your help in advance.
There are two ways you can go about it: one of them is only suitable for data up to 3D and is generally slower than the other, but more flexible.
As you have been asking for arbitrary subsampling, I'm starting with that option, but it is more likely that the second option gives you what you want: orthogonal, regular subsampling.
If you are in a hurry, the short answer is: Use the SliceN command.
1) Using expressions (arbitrary subsampling)
Individual pixel positions in image data (img) can be addressed using the notations
img[ X, 0 ] ... for 1D data at position X
img[ X, Y ] ... for 2D data at position X/Y
img[ X, Y, Z ] ... for 3D data at position X/Y/Z
Note that even if this addresses a single number, the result is an expression of size 1x1 or 1x1x1 and not a scalar number, so you cannot do: number num = img[10,4]
However, you can use a little trick: apply any of the functions that convert an expression to a single number, e.g. summation. So you can do: number num = sum(img[10,4])
So how does this relate to your question? Well, in the expressions above, we used scalar values as X, Y and Z, and the resulting expressions were expressions of size 1x1 and 1x1x1, but
You can use expressions of any size as X, Y, Z in these notations, as long as all of them are expressions of the same size. The resulting addressed data is of this size, with values referenced by the corresponding coordinates.
This will become clearer with the examples below. Starting out with a simple 1D example:
image img1D := RealImage( "TestData", 4, 100 )
image coord := RealImage( "Coordinates", 4, 10 )
img1D = 1000 + icol // Just some test data
coord = trunc(100*Random()) // random integer 0-99
image subImg := img1D[coord,0]
img1D.ShowImage()
coord.ShowImage()
subImg.ShowImage()
Our test data (img1D) here is just a linear graph from 1000 to 1099 using the icol expression which, at each pixel, represents that pixel's X coordinate.
The coordinate image (coord) contains random integer values between 0 and 99.
The 'magic' happens in subImg. We use an expression with the coord image as X coordinates. That image is of size 10(x1), so the resulting expression is of size 10(x1), which we assign to the image subImg before showing it.
Note that the expression we have built really just points to that data of the image. Instead of showing it as a new image, we could have used that expression to change these points in the data instead, using:
img1D[coord,0] = 0
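For readers coming from Python, this coordinate addressing behaves much like numpy's fancy indexing (an analogy only; the DM-script above is the actual syntax):

```python
import numpy as np

img1D = 1000 + np.arange(100)          # linear test data, like 1000 + icol
coord = np.array([3, 50, 7, 99])       # arbitrary integer coordinates
sub = img1D[coord]                     # gather, like img1D[coord, 0]
assert sub.tolist() == [1003, 1050, 1007, 1099]

img1D[coord] = 0                       # scatter-assign through the same indices
assert img1D[3] == 0 and img1D[99] == 0
```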
Taking it from here, it is straight forward to extend the example to 2D:
image img2D := RealImage( "TestData", 4, 30, 30 )
image coordX := RealImage( "Coordinates X", 4, 10 )
image coordY := RealImage( "Coordinates Y", 4, 10 )
img2D = 10000 + icol + irow * 100
coordX = trunc(30*Random())
coordY = trunc(30*Random())
img2D[coordX,coordY] = 0
coordX.ShowImage()
coordY.ShowImage()
img2D.ShowImage()
...and 3D:
image img3D := RealImage( "TestData", 4, 30, 30, 30 )
image coordX := RealImage( "Coordinates X", 4, 10 )
image coordY := RealImage( "Coordinates Y", 4, 10 )
image coordZ := RealImage( "Coordinates Z", 4, 10 )
img3D = 10000 + icol + irow * 100 + iplane * 1000
coordX = trunc(30*Random())
coordY = trunc(30*Random())
coordZ = trunc(30*Random())
img3D[coordX,coordY,coordZ] = 0
coordX.ShowImage()
coordY.ShowImage()
coordZ.ShowImage()
img3D.ShowImage()
Unfortunately, it ends here.
You can no longer do this type of addressing in 4D or 5D data, because expressions with 4 parameters are already defined to address a rectangular region in 2D data as img[T,L,B,R].
2) Using SliceN (orthogonal subsampling)
Data subsets along the dimension directions of data can be addressed using the command SliceN and its simplified variants Slice1, Slice2 and Slice3.
The SliceN command is maybe one of my favourite commands in the language when dealing with data. It looks intimidating at first, but it is straightforward.
Let's start with its simplified version for 1D extraction, Slice1.
To extract 1D data from any data up to 3D with the Slice1 command, you need the following (and these are exactly the 7 parameters used by the command):
data source
start point in the source
sampling direction
sampling length
sampling step-size
The only thing you need to know on top of that is:
The start point is always defined as an X,Y,Z triplet, even if the data source is only 2D or 1D. 0 is used for the dimensions not needed.
Directions are given as dimension index: 0 = X, 1 = Y, 2 = Z
Step-size can be negative to indicate opposite directions
The specified sampling must be contained within the source data. (You cannot 'extrapolate'.)
So a very simple example of extracting 1D data from a 3D dataset would be:
number sx = 20
number sy = 20
number sz = 20
image img3D := RealImage( "Spectrum Image", 4, sx, sy, sz )
img3D = 5000000 + icol + irow * 100 + iplane * 10000
number px = 5
number py = 7
image spec1D := Slice1( img3D, px,py,0, 2,sz,1 )
ShowImage( img3D )
ShowImage( spec1D )
This example shows a quite typical situation in analytical microscopy when dealing with "3D Spectrum Image" data: extracting a "1D Spectrum" at a specific spatial position.
The example did that for the spatial point px,py. Starting at the point at that position (px,py,0), it samples along the Z direction (2) for all pixels of the data (sz) with a step-size of 1.
Note that the command again returns an expression within the source data, and that you can use this to set values as well, e.g.:
Slice1( img3D, px,py,0, 2,sz,1 ) = 0
The extension for 2D and 3D data using the commands Slice2 and Slice3 is straight forward. Instead of defining one output direction, you define two or three, respectively. Each with a triplet of numbers: direction, length, step-size.
The following example extracts an "image plane" of a "3D Spectrum image":
number sx = 20
number sy = 20
number sz = 20
image img3D := RealImage( "Spectrum Image", 4, sx, sy, sz )
img3D = 5000000 + icol + irow * 100 + iplane * 10000
number pz = 3
image plane2D := Slice2( img3D, 0,0,pz, 0,sx,1, 1,sy,1 )
ShowImage( img3D )
ShowImage( plane2D )
And the following example "rotates" a 3D image:
number sx = 6
number sy = 4
number sz = 3
image img3D := RealImage( "Spectrum Image", 4, sx, sy, sz )
img3D = 1000 + icol + irow * 10 + iplane * 100
image rotated := Slice3( img3D, 0,0,0, 0,sx,1, 2,sz,1, 1,sy,1 )
ShowImage( img3D )
ShowImage( rotated )
You can get all sorts of rotations, mirroring, and binning with these commands. If you want the full flexibility to get any expression up to 5D from any source data up to 5D, then you need the most versatile SliceN command.
It works exactly the same, but you need to specify both the dimensionality of the source data, and the dimensionality of the output expression. Then, the 'starting' point needs to be defined with as many coordinates as the source data dimension suggests, and you need one triplet of specification for each output dimension.
For source data of N dimensions and an output of M dimensions, you need: 2 + N + 3*M parameters.
As an example, let's extract the "plane" at a specific spatial position from "4D Diffraction image" data, which stores a 2D image at each spatial location of a 2D scan:
number sx = 9
number sy = 9
number kx = 9
number ky = 9
image img4D := RealImage( "Diffraction Image", 4, sx, sy, kx, ky )
img4D = 50000 + icol + irow * 10 + idimindex(2)*100 + idimindex(3)*1000
number px = 3
number py = 4
image img2D := SliceN( img4D, 4, 2, px,py,0,0, 2,kx,1, 3,ky,1 )
ShowImage( img4D )
ShowImage( img2D )

Kinect depth conversion from mm to pixels

Does anybody know how many pixels correspond to each millimeter of depth value in images taken from the Kinect for Xbox 360?
I'm using the standard resolution and settings...
Thanks!
1 pixel corresponds to a number of millimeters that depends on the depth value of that pixel (i.e. its level of gray).
The simplest way to get the distance between two pixels in a depth image is to convert those pixels (which are expressed in Depth Space) to real-world coordinates (i.e. in Skeleton Space)1. Then you can calculate the distance between those points using the common Euclidean distance formula.
So if you have two pixels P1 and P2, with depth values respectively equal to D1 and D2, you can proceed as follows:
DepthImagePoint dip1 = new DepthImagePoint();
dip1.X = P1.x;
dip1.Y = P1.y;
dip1.Depth = D1;
DepthImagePoint dip2 = new DepthImagePoint();
dip2.X = P2.x;
dip2.Y = P2.y;
dip2.Depth = D2;
SkeletonPoint sp1 = CoordinateMapper.MapDepthPointToSkeletonPoint(DepthImageFormat.Resolution640x480Fps30, dip1);
SkeletonPoint sp2 = CoordinateMapper.MapDepthPointToSkeletonPoint(DepthImageFormat.Resolution640x480Fps30, dip2);
double dist = euclideanDistance(sp1, sp2);
1 See Coordinate Spaces for more information.
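Note that euclideanDistance in the snippet above is not an SDK method; it is just the standard 3D distance on the skeleton-space X/Y/Z values. A sketch of that computation (shown in Python for brevity; the C# version is analogous):

```python
import math

def euclidean_distance(p1, p2):
    # p1, p2: (x, y, z) points in real-world (skeleton-space) coordinates
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

# classic 3-4-5 triangle as a sanity check
assert euclidean_distance((0, 0, 0), (3, 4, 0)) == 5.0
```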

Advice with a file program asking for the largest number

The assignment this time around deals with using files: "Assume that a file containing a series of integers is named numbers.dat and exists on the computer's disk. Design a program that determines the largest number stored in the file." The instructor told us not to use array-based implementations, and to include a variable to count the number of items read from the file and output this count after displaying the largest value found in the file. I am having trouble with how to get the largest value without using an array-based implementation. Here is what I have so far:
def main():
    n = 1
    largest = None
    num_input = int(input("How many numbers do " + "you have to input? "))
    numbers_file = open('numbers.dat', 'w')
    for count in range(1, num_input + 1):
        number = float(input('Enter the number #' + str(count) + ': '))
        if largest is None or n > largest:
            largest = n
    print('The largest value inputted is: ', largest)
    numbers_file.close()

main()
Imagine you have a sheet of paper with hundreds of numbers on it. Using nothing but your brain and eyes, read those numbers and find the largest one.
How would you do this?
Now, how would you tell the computer to do it the same way?
Use a variable to store the current number, assuming it is the largest. As you go through the file, compare the stored number to the current number from the file; if the number from the file is greater, store it in the variable, otherwise keep reading. Repeat until you reach the end of the file.
In Python (matching your question), with the count the assignment asks for:
largest = None
count = 0
with open('numbers.dat') as numbers_file:
    for line in numbers_file:
        num = int(line)
        count += 1
        if largest is None or num > largest:
            largest = num
print('Largest value:', largest)
print('Items read:', count)

CUDAFunctionLoad in Mathematica - Indexing problem

I am trying to debug an index problem I am having on my CUDA machine
Cuda Machine Info:
{1->{Name->Tesla C2050,Clock Rate->1147000,Compute Capabilities->2.,GPU Overlap->1,Maximum Block Dimensions->{1024,1024,64},Maximum Grid Dimensions->{65535,65535,65535},Maximum Threads Per Block->1024,Maximum Shared Memory Per Block->49152,Total Constant Memory->65536,Warp Size->32,Maximum Pitch->2147483647,Maximum Registers Per Block->32768,Texture Alignment->512,Multiprocessor Count->14,Core Count->448,Execution Timeout->0,Integrated->False,Can Map Host Memory->True,Compute Mode->Default,Texture1D Width->65536,Texture2D Width->65536,Texture2D Height->65535,Texture3D Width->2048,Texture3D Height->2048,Texture3D Depth->2048,Texture2D Array Width->16384,Texture2D Array Height->16384,Texture2D Array Slices->2048,Surface Alignment->512,Concurrent Kernels->True,ECC Enabled->True,Total Memory->2817982462},
All this code does is set the values of a 3D array equal to the index that CUDA is using:
__global__ void cudaMatExp(
    float *matrix1, float *matrixStore, int lengthx, int lengthy, int lengthz){
long UniqueBlockIndex = blockIdx.y * gridDim.x + blockIdx.x;
long index = UniqueBlockIndex * blockDim.z * blockDim.y * blockDim.x +
threadIdx.z * blockDim.y * blockDim.x + threadIdx.y * blockDim.x +
threadIdx.x;
if (index < lengthx*lengthy*lengthz) {
matrixStore[index] = index;
}
}
For some reason, once the dimension of my 3D array becomes too large, the indexing stops.
I have tried different block dimensions (blockDim.x by blockDim.y by blockDim.z):
8x8x8 only gives correct indexing up to array dimension 12x12x12
9x9x9 only gives correct indexing up to array dimension 14x14x14
10x10x10 only gives correct indexing up to array dimension 15x15x15
For dimensions larger than these, the maximum index for all of the different block sizes eventually starts to increase again, but it never reaches dim^3 - 1 (the maximum index that the CUDA thread should reach).
Here are some plots that illustrate this behavior:
For example: this plots on the x axis the dimension of the 3D array (which is dim x dim x dim), and on the y axis the maximum index number that is processed during the CUDA execution. This particular plot is for block dimensions of 10x10x10.
Here is the (Mathematica) code to generate that plot, but when I ran this one, I used block dimensions of 1024x1x1:
CUDAExp = CUDAFunctionLoad[codeexp, "cudaMatExp",
{{"Float", _,"Input"}, {"Float", _,"Output"},
_Integer, _Integer, _Integer},
{1024, 1, 1}]; (*These last three numbers are the block dimensions*)
max = 100; (* the maximum dimension of the 3D array *)
hold = Table[1, {i, 1, max}];
compare = Table[i^3, {i, 1, max}];
Do[
dim = ii;
AA = CUDAMemoryLoad[ConstantArray[1.0, {dim, dim, dim}], Real,
"TargetPrecision" -> "Single"];
BB = CUDAMemoryLoad[ConstantArray[1.0, {dim, dim, dim}], Real,
"TargetPrecision" -> "Single"];
hold[[ii]] = Max[Flatten[
CUDAMemoryGet[CUDAExp[AA, BB, dim, dim, dim][[1]]]]];
, {ii, 1, max}]
ListLinePlot[{compare, Flatten[hold]}, PlotRange -> All]
This is the same plot, but now also plotting x^3 to compare with where it should be. Notice that it diverges after the dimension of the array exceeds 32.
I test the dimensions of the 3D array and look at how far the indexing goes and compare it with dim^3-1. E.g. for dim=32, the cuda max index is 32767 (which is 32^3 -1), but for dim=33 the cuda output is 33791 when it should be 35936 (33^3 -1). Notice that 33791-32767 = 1024 = blockDim.x
Question:
Is there a way to correctly index an array with dimensions larger than the block dimensions in Mathematica?
Now, I know that some people use __mul24(threadIdx.y,blockDim.x) in their index equation to prevent errors in bit multiplication, but it doesn't seem to help in my case.
Also, I have seen someone mention that you should compile your code with -arch=sm_11 because by default it's compiled for compute capability 1.0. I don't know if this is the case in Mathematica though. I would assume that CUDAFunctionLoad[] knows to compile with 2.0 capability. Any one know?
Any suggestions would be extremely helpful!
So, Mathematica has a somewhat hidden way of dealing with grid dimensions. To fix your grid dimension to something that will work, you have to add another number to the end of the function call.
The argument denotes the number of threads to launch (or grid dimension times block dimension).
For example, in my code above:
CUDAExp =
CUDAFunctionLoad[codeexp,
"cudaMatExp", {
{"Float", _, "Input"}, {"Float", _,"Output"},
_Integer, _Integer, _Integer},
{8, 8, 8}, "ShellOutputFunction" -> Print];
{8, 8, 8} denotes the dimensions of the block.
When you call CUDAExp[] in mathematica, you can add an argument that denotes the number of threads to launch:
In this example I finally got it to work with the following:
(* AA and BB are 3D arrays of 0 with dimensions dim x dim x dim *)
dim = 64;
CUDAExp[AA, BB, dim, dim, dim, 4096];
Note that when you compile with CUDAFunctionLoad[], it only expects 5 inputs: the first is the array you pass it (of dimensions dim x dim x dim), the second is where the output is stored, and the third, fourth, and fifth are the dimensions.
When you pass it a 6th, Mathematica translates that as gridDim.x * blockDim.x, so, since I know I need gridDim.x = 512 in order for every element in the array to be dealt with, I set this number equal to 512 * 8 = 4096.
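The arithmetic behind both the failure and the fix can be sketched in a few lines (Python, with numbers taken from the post; the 33-block figure is inferred from the observed max index, since 33791 = 33 * 1024 - 1):

```python
import math

block_threads = 1024                       # the 1024x1x1 block from the question

def max_index(num_blocks):
    # highest flattened index any launched thread can write
    return num_blocks * block_threads - 1

assert max_index(32) == 32767              # exactly covers 32**3 - 1
assert max_index(33) == 33791              # the value observed for dim = 33
assert 33**3 - 1 == 35936                  # the value that was expected
# covering all 33**3 elements actually needs 36 blocks of 1024 threads:
assert math.ceil(33**3 / block_threads) == 36

# and for the fixed 8x8x8-block example: dim = 64 needs 512 blocks
assert math.ceil(64**3 / (8 * 8 * 8)) == 512
```

In other words, once the default grid launches fewer threads than there are elements, the guard `if (index < lengthx*lengthy*lengthz)` silently leaves the tail of the array untouched; forcing a large enough grid fixes it.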
I hope this is clear and useful to someone in the future that comes across this issue.