Script command to Align Slice Horizontally by Calibration - dm-script

Is there a script command with which I can specify a particular slice in a LinePlotImageDisplay and perform the Align Slice Horizontally by Calibration (or Uncalibrated (channels)) action?

The following script is a complete implementation based on the example code provided by BmyGuest. It will align all slices in a LinePlotImageDisplay horizontally, either by calibration or by channel (i.e. uncalibrated).
class SliceAlignment : object {
    number true, false;  // boolean
    image imgLPID;
    imageDisplay LPID;   // line plot image display

    number CalculateImageToGroupTransformFactors( object self, image slice_src, image slice_ref, number &relOff, number &relScale ) {
        number origin_ref, scale_ref, origin_src, scale_src;
        string unit_ref, unit_src;
        number calFMT = 0;  // origin is expressed in calibrated units
        //
        slice_src.ImageGetDimensionCalibration( 0, origin_src, scale_src, unit_src, calFMT );
        slice_ref.ImageGetDimensionCalibration( 0, origin_ref, scale_ref, unit_ref, calFMT );
        //
        relScale = scale_src / scale_ref;
        relOff = (origin_src - origin_ref) / scale_ref;
        // check if both images are calibrated in the same unit
        if( unit_src != unit_ref ) return false;
        return true;
    };

    void AlignNthSliceHorizontallyByChannel( object self, number slice_idx ) {
        // get current reference slice index
        number refSlice_idx = LPID.LinePlotImageDisplayGetSlice();
        // get slice IDs (as objects)
        object slice_ref = LPID.ImageDisplayGetSliceIDByIndex( refSlice_idx );
        object slice_src = LPID.ImageDisplayGetSliceIDByIndex( slice_idx );
        number int_offset = 0, int_scale = 1.0;  // vertical (intensity) offset and scaling factors
        number pos_offset = 0, pos_scale = 1.0;  // horizontal (position) offset and scaling factors
        LPID.LinePlotImageDisplaySetImageToGroupTransform( slice_src, slice_ref, int_offset, int_scale, pos_offset, pos_scale );
    };

    void AlignNthSliceHorizontallyByCalibration( object self, number slice_idx ) {
        // get current reference slice index
        number refSlice_idx = LPID.LinePlotImageDisplayGetSlice();
        // get slice IDs (as objects)
        object slice_ref = LPID.ImageDisplayGetSliceIDByIndex( refSlice_idx );
        object slice_src = LPID.ImageDisplayGetSliceIDByIndex( slice_idx );
        number int_offset = 0, int_scale = 1.0;  // vertical (intensity) offset and scaling factors
        number pos_offset, pos_scale;            // horizontal (position) offset and scaling factors
        number unit_check = self.CalculateImageToGroupTransformFactors( imgLPID{slice_idx}, imgLPID{refSlice_idx}, pos_offset, pos_scale );
        if( unit_check == false ) {
            string prompt = "slice #" + slice_idx + " [" + LPID.ImageDisplayGetSliceLabelById( LPID.ImageDisplayGetSliceIDByIndex( slice_idx ) ) + "] is calibrated in a different unit!";
            if( !ContinueCancelDialog( prompt ) ) return;
        };
        LPID.LinePlotImageDisplaySetImageToGroupTransform( slice_src, slice_ref, int_offset, int_scale, pos_offset, pos_scale );
        return;
    };

    void AlignAllSlicesHorizontallyByChannel( object self ) {
        number nSlices = LPID.LinePlotImageDisplayCountSlices();
        for( number idx = 0; idx < nSlices; idx++ ) self.AlignNthSliceHorizontallyByChannel( idx );
        return;
    };

    void AlignAllSlicesHorizontallyByCalibration( object self ) {
        number nSlices = LPID.LinePlotImageDisplayCountSlices();
        for( number idx = 0; idx < nSlices; idx++ ) self.AlignNthSliceHorizontallyByCalibration( idx );
        return;
    };

    object init( object self, image img ) {
        // check that the image display is the correct type
        imgLPID := img;
        LPID = imgLPID.ImageGetImageDisplay(0);
        if( LPID.ImageDisplayGetDisplayType() != 3 ) throw( "Please choose a valid line plot display" );
        return self;
    };

    SliceAlignment( object self ) {
        true = 1; false = 0;
        result( "SliceAlignment [obj ID:" + self.ScriptObjectGetID().hex() + "] constructed\n" );
    };

    ~SliceAlignment( object self ) {
        result( "SliceAlignment [obj ID:" + self.ScriptObjectGetID().hex() + "] destructed\n\n" );
    };
};

{
    object objAlign = alloc(SliceAlignment);
    objAlign.init( GetFrontImage() );
    if( OptionDown() ) objAlign.AlignAllSlicesHorizontallyByChannel();
    else objAlign.AlignAllSlicesHorizontallyByCalibration();
};

No, there is no single 'convenience' command to achieve this alignment. You will have to create the corresponding function yourself by reading a slice's calibration and setting its display-coordinate system. You might find the following (old) tutorial PDF on the FELMI homepage useful:
SlicesInLinePlotDisplay.pdf
The following example script might also be useful. It shows how one slice is aligned relative to another slice (just on the X-axis).
// All Slices in a LinePlot are grouped into a single 'group'
// Slices can be moved relative to each other by specifying their image-to-group transform,
// and the whole image (i.e. the group) can be moved with respect to the display using the group-to-display transform.
// To set the image-to-group transform of the slice specified by 'slice_id', with respect to the slice specified by 'ref_id'
// use the command:
// LinePlotImageDisplaySetImageToGroupTransform( LinePlotImageDisplay lpid, ScriptObject slice_id, ScriptObject ref_id, double off_val, double scale_val, double off_dim_0, double scale_dim_0 )
/*********************************************************/
// Create 2 LinePlots and add them into one display
// (Initially they are aligned by their calibrations)
number sc1 = 1
number of1 = -50
number sc2 = 2
number of2 = -20
image sl1 := realImage("S1",4,300)
image sl2 := realImage("S2",4,300)
sl1 = (iwidth-icol)/iwidth
sl2 = (iwidth-icol)/iwidth
sl1[0,50,1,60] = 1
sl1[0,250,1,260] = 1
sl2[0,10,1,15] = 1
sl2[0,110,1,115] = 1
// Adding Calibrations
sl1.ImageSetDimensionCalibration(0,of1,sc1,"CH",0)
sl2.ImageSetDimensionCalibration(0,of2,sc2,"CH",0)
sl1.DisplayAt(20,30)
sl2.DisplayAt(750,30)
OKDialog( "Put into one Display" )
imageDisplay disp = sl1.ImageGetImageDisplay(0)
disp.ImageDisplayAddImage( sl2, "S2") // When added like this, the slices are automatically aligned by their respective calibration!
disp.LinePlotImageDisplaySetDoAutoSurvey( 0, 0 )
object ref_id = disp.ImageDisplayGetSliceIDByIndex(0) // Slice 0
object slice_id = disp.ImageDisplayGetSliceIDByIndex(1) // Slice 1
OKDialog("Now align by channels (i.e. undo any relative slice alignment)")
// Simply set the relative "shifts" and "scales" to 0 and 1.
disp.LinePlotImageDisplaySetImageToGroupTransform( slice_id, ref_id, 0, 1, 0, 1 )
OKDialog("Now align by calibration")
number relScale = sc2/sc1
number relOff = (of2-of1)/sc1 // divide by the reference scale; here sc1 is 1, so the value is unchanged
disp.LinePlotImageDisplaySetImageToGroupTransform( slice_id, ref_id, 0, 1, relOff, relScale )

Related

DirectX 11 heightmap texture real-time modification problem

I'm making a terrain tool.
I made a 2D texture and am using it as a height map.
I want to change a specific part of the heightmap, but I'm having a problem.
I changed certain small parts, but the whole landscape of the texture changed.
I would like to know the cause of this problem and how to solve it. Thank you.
HeightMap ShaderResourceView creation code:
void TerrainRenderer::BuildHeightmapSRV(ID3D11Device* device)
{
ReleaseCOM(mHeightMapSRV);
ReleaseCOM(m_hmapTex);
D3D11_TEXTURE2D_DESC texDesc;
texDesc.Width = m_terrainData.HeightmapWidth; //basic value 2049
texDesc.Height = m_terrainData.HeightmapHeight; //basic value 2049
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_R16_FLOAT;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D11_USAGE_DYNAMIC;
texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
texDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
texDesc.MiscFlags = 0;
// HALF is defined in xnamath.h, for storing 16-bit float.
std::vector<HALF> hmap(mHeightmap.size());
//current mHeightmap is all zero.
std::transform(mHeightmap.begin(), mHeightmap.end(), hmap.begin(), XMConvertFloatToHalf);
D3D11_SUBRESOURCE_DATA data;
data.pSysMem = &hmap[0];
data.SysMemPitch = m_terrainData.HeightmapWidth * sizeof(HALF);
data.SysMemSlicePitch = 0;
HR(device->CreateTexture2D(&texDesc, &data, &m_hmapTex));
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
srvDesc.Format = texDesc.Format;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MostDetailedMip = 0;
srvDesc.Texture2D.MipLevels = -1;
HR(device->CreateShaderResourceView(m_hmapTex, &srvDesc, &mHeightMapSRV));
}
HeightMap texture modifying code:
D3D11_MAPPED_SUBRESOURCE mappedData;
//m_hmapTex is ID3D11Texture2D*
HR(m_texMgr.m_context->Map(m_hmapTex, D3D11CalcSubresource(0, 0, 1), D3D11_MAP_WRITE_DISCARD, 0, &mappedData));
HALF* heightMapData = reinterpret_cast<HALF*>(mappedData.pData);
D3D11_TEXTURE2D_DESC heightmapDesc;
m_hmapTex->GetDesc(&heightmapDesc);
UINT width = heightmapDesc.Width;
for (int row = 0; row < width/4; ++row)
{
for (int col = 0; col < width/4; ++col)
{
UINT idx = (row * width) + col;
heightMapData[idx] = static_cast<HALF>(XMConvertFloatToHalf(200));
}
}
m_texMgr.m_context->Unmap(m_hmapTex, D3D11CalcSubresource(0,0,1));
The screenshots from the original post are omitted here; the lower right area renders the HeightMap texture.
I wanted to edit only 1/4 of the width and height, but all of it changed.
When the completed heightmap is applied, it works normally.
A texture does not always have the same width in memory as its description suggests: the rows (strides) may be padded to a larger size. You have to use stride size * row (the RowPitch reported by Map) to calculate the offset to write into.
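A rough sketch of the corrected write loop (reusing the question's m_hmapTex, m_texMgr, HR, HALF and heightmapDesc names, all taken from the posted code):
D3D11_MAPPED_SUBRESOURCE mapped;
HR(m_texMgr.m_context->Map(m_hmapTex, D3D11CalcSubresource(0, 0, 1), D3D11_MAP_WRITE_DISCARD, 0, &mapped));

// RowPitch is the real size of one row in bytes and may be larger than
// Width * sizeof(HALF), so compute each row's start from the mapped pointer.
BYTE* base = reinterpret_cast<BYTE*>(mapped.pData);
for (UINT row = 0; row < heightmapDesc.Height / 4; ++row)
{
    HALF* rowData = reinterpret_cast<HALF*>(base + row * mapped.RowPitch);
    for (UINT col = 0; col < heightmapDesc.Width / 4; ++col)
        rowData[col] = XMConvertFloatToHalf(200.0f);
}
m_texMgr.m_context->Unmap(m_hmapTex, D3D11CalcSubresource(0, 0, 1));
Note also that D3D11_MAP_WRITE_DISCARD hands back a buffer whose previous contents are undefined, so with a dynamic texture you have to rewrite every texel on each Map call, not only the region you edit.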

anychart line - xScale limited input data + offset

The maximum data shown on screen is 31 points plus an offset, but my input has 50 points. How do I show more?
(An image and a live example with the JavaScript code were linked in the original post.)
The X axis shows days; the problem is in the code below:
//array
arr = []
for (let i = 0; i < 50; i++) {
arr.push(i)
}
//object x y
const data = []
const time = new Date().getTime();
const oneDay = 1000*60*60*24
for (let i = 0; i < arr.length; i++) {
const day = new Date( time + ( oneDay * (i+1) ) );
const currentDate = day.getDate()
data.push( {
x: 'day: ' + currentDate,
y: arr[i]
} );
}
// create line chart
chart = anychart.line();
// set chart padding
chart.padding([5, 5, 5, 5]);
// chart animation (disabled)
chart.animation(false);
// crosshair (disabled)
chart.crosshair(false);
// set chart title text settings
const name = 'Name'
chart.title(name);
// set y axis title
chart.yAxis().title('Price');
chart.xScale()
var series;
series = chart.line(data);
series.name('series name');
series.labels().enabled(true).anchor('right-bottom').padding(2);
series.labels().enabled(true).anchor('left-bottom').padding(2);
series.markers(true);
// turn the legend on
chart.legend().enabled(true).fontSize(11).padding([0, 0, 10, 0]);
chart.container('chartContainer');
chart.draw();
It happens because you are using a Cartesian chart whose default xScale is ordinal. The ordinal scale works with categories and assumes that all categories are unique. But your data set includes duplicate categories, like day: 19, day: 20, etc., so the next point with the same category name overrides the previous one.
In your case, you should use a dateTime scale and real dateTime X coordinates (a date string, a timestamp in ms, or a Date object). For details, check the article.
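A minimal sketch of that change (assuming the AnyChart 8 API, in particular anychart.scales.dateTime() and the [x, value] array point format; adjust names to your setup):
// build the data with real timestamps (ms) instead of 'day: N' category strings
const oneDay = 1000 * 60 * 60 * 24;
const start = Date.now();
const data = [];
for (let i = 0; i < 50; i++) {
  data.push([start + oneDay * (i + 1), i]);   // [x, value] pairs
}
const chart = anychart.line(data);
chart.xScale(anychart.scales.dateTime());     // date-time scale instead of the default ordinal one
chart.container('chartContainer');
chart.draw();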

How to add annotation information in each frame of a stack image?

I know how to add arrows to a single front-displayed image, but now I need to add arrow annotations to each frame of an image stack to indicate the position of a contrast change, and show them using the GMS "slice player". How can I do that?
There is no difference between a 2D image and a 3D stack in DigitalMicrograph; both are just dimensioned data. As such, "slices" in a stack have neither their own individual tags nor their own annotations - there is just a single imageTagGroup and a single imageDisplay.
So to achieve what you want you need a different approach: you need to move your annotation whenever the display updates to show a different slice.
In order to do this, you need to add a display listener to your image display and act on the slice_property_changed event.
A basic example script for this:
Class CStackAnno
{
    ImageDisplay disp
    Component arrow
    Number ListenerID

    // This method is called whenever the imageDisplay fires the slice update event
    void OnSlicePropChanged( object self, Number disp_flags, ImageDisplay disp, Number flags1, Number flags2, object slice_id_beg, object slice_id_end )
    {
        image img := disp.ImageDisplayGetImage()
        if ( 3 != img.ImageGetNumDimensions() ) return
        if ( !arrow.ComponentIsValid() ) return
        number sx = img.ImageGetDimensionSize(0)
        number sy = img.ImageGetDimensionSize(1)
        number sz = img.ImageGetDimensionSize(2)
        number start, end
        disp.ImageDisplayGetDisplayedLayers( start, end )
        number kLineEndPoint = 2
        arrow.ComponentSetControlPoint( kLineEndPoint, sx/sz * start, sx/sz * start, 0 )
    }

    Object Launch( object self, image img )
    {
        if ( !img.ImageIsValid() ) Throw( "Invalid input image." )
        if ( 3 != img.ImageGetNumDimensions() ) Throw( "This script only supports 3D images." )
        disp = img.ImageGetImageDisplay(0)
        // Register DisplayListener to catch when it updates
        ListenerID = disp.ImageDisplayAddEventListener( self, "slice_property_changed:OnSlicePropChanged" )
        // Add the annotation
        arrow = NewArrowAnnotation( img.ImageGetDimensionSize(1)/5, img.ImageGetDimensionSize(0)*4/5, 0, 0 )
        arrow.ComponentSetForegroundColor( 0, 0.5, 1 )
        arrow.ComponentSetBackgroundColor( 0, 0.8, 1 )
        arrow.ComponentSetDrawingMode( 1 )
        disp.ComponentAddChildAtEnd( arrow )
        return self
    }
}

// Main call
image fImg
GetFrontImage( fImg )
Alloc(CStackAnno).Launch( fImg )

How to apply a virtual aperture to a 4D-STEM dataset in an EFFICIENT way?

I would like to apply an arbitrarily defined bit mask as a virtual aperture to a 4D-STEM data set in an EFFICIENT way.
I did it using the SliceN function and applied the mask pixel-by-pixel, which is very slow for large datasets. How can I optimize it so that it runs faster?
Image 4DSTEM := GetFrontImage() // dimension [ScanX, ScanY, Dx, Dy]
Image mask := iradius // just an arbitrary mask (aperture)
Image out // dimension [ScanX, ScanY]
for (number i=0; i<ScanX; i++)
{ for (number j=0; j<ScanY; j++)
{
Diff2D = 4DSTEM.SliceN(4,2,i,j,0,0,2,Dx,1,3,Dy,1)
out.setpixel(i,j, sum(diff2D*mask))
}
}
out.showimage()
For a [100, 100, 512, 512] dataset, that took a few minutes to finish. When I have to repeat the operation several times, that is way too slow compared to a matrix operation, but I don't know how to do it in an efficient way.
Thanks!
You're hitting the limitations of scripting languages here. Using SliceN is already pretty much the optimum you can get to, unfortunately. Everything else in speed optimization requires parallelized, compiled code (i.e. you could write C++ code and use the SDK to compile your own plugin).
However, there is a bit of room for improvement over your example.
First of all, your example above doesn't run :c) But that is quickly fixed.
Point #1:
Try to avoid number type casting. DM-script only knows number, but internally there is a difference between the proper number types (integer, floating point, signed/unsigned, byte size). The script language uses real-4-byte as the default unless told otherwise explicitly, and some methods will return real-4-byte by default. For this reason, processing will be fastest if both data and mask use real-4-byte data as well.
In my testing, the time difference between running with uint16 data plus a uint8 mask and real-4 data plus a real-4 mask was significant: nearly 30%.
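For example, a possible pre-conversion step could look like this (a sketch assuming the standard ImageClone and ConvertToFloat commands, and that the front image is the 4D dataset):
image raw := GetFrontImage()        // e.g. uint16 4D-STEM data
image data32 := ImageClone( raw )   // work on a copy so the original data stays untouched
ConvertToFloat( data32 )            // data32 is now real-4, matching a real-4 mask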
Point #2:
Don't copy your sliced image! Use := not = for your Diff2D.
The SliceN command returns an expression directly addressing the required memory. You can use it directly in any other expression (like I do below) or you can assign an image variable to it using := to give it a name.
The speed increase is not huge, but it's one copy-operation less per loop iteration.
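A tiny illustration of the difference between the two assignments (independent of the 4D data):
image data := RealImage( "data", 4, 64, 64 )
image aView := data   // ':=' only creates another reference to the same image
image aCopy = data    // '='  allocates a new image and copies all values
aView = 1             // fills 'data' with 1 as well, since aView references it
aCopy = 2             // fills only the copy; 'data' stays untouched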
Point #3:
Use additional knowledge: for arbitrary masks there is not much you can do, but most often masks are zero-valued over large stretches, and it is possible to define a smaller ROI containing all non-zero points. If this is the case, you can limit your math operations to that region.
I.e. instead of multiplying the whole DP with a same-sized mask, just use a smaller mask and the corresponding sub-section of the DP.
This can actually make a big difference, but it will depend on your mask.
Of course you need to "find" this ROI first. In my script below I have a helper method to do that, utilizing the comparatively fast max() command and image rotation as a trick for speed-up.
Point #4:
...would be to get rid of the double for-loop and replace it with image expressions. Unfortunately, DigitalMicrograph does not currently (GMS 3.3) support this for 4D or 5D data.
The script below executed on a [53 x 52 x 512 x 512] STEM DI (of real-4 byte data) gave me the following timings:
Original: 12.80910 sec
Test 1 : 10.77700 sec
Test 2 : 1.83017 sec
// Helper class for timing
class CTimer{
number s
string n
~CTimer(object self){result("\n"+n+": "+ (GetHighResTickCount()-s)/GetHighResTicksPerSecond()+" sec");}
object Start(object self, string n_) { n=n_; s=GetHighResTickCount(); return self;}
}
// Helper method to find best non-zero containing ROI
void GetNonZeroArea( image src, number &t, number &l, number &b, number &r )
{
image work = !!src // Make a binary image which is 0 only where src==0
number d
max(work,d,t) // get "first" non-zero pixel coordinate, this is y = dist from TOP
rotateRight(work) // rotate image right
max(work,d,l) // get "first" non-zero pixel coordinate, this is y = dist from LEFT
rotateRight(work) // rotate image right
max(work,d,b) // get "first" non-zero pixel coordinate, this is y = dist from BOTTOM
b = work.ImageGetDimensionSize(1) - b // Opposite side!
rotateRight(work) // rotate image right
max(work,d,r) // get "first" non-zero pixel coordinate
r = work.ImageGetDimensionSize(1) - r // Opposite side!
}
// The original proposed script (plus fixes to make it actually run)
image Original(image STEM4D, image mask)
{
Number ScanX = STEM4D.ImageGetDimensionSize(0)
Number ScanY = STEM4D.ImageGetDimensionSize(1)
Number Dx = STEM4D.ImageGetDimensionSize(2)
Number Dy = STEM4D.ImageGetDimensionSize(3)
Image out := RealImage("Test1",4,ScanX,ScanY)
for (number i=0; i<ScanX; i++)
{ for (number j=0; j<ScanY; j++)
{
image Diff2D = STEM4D.SliceN(4,2,i,j,0,0,2,Dx,1,3,Dy,1)
out.setpixel(i,j, sum(Diff2D*mask))
}
}
return out
}
// Remove copying the slice, just reference it
image Test1(image STEM4D, image mask)
{
Number ScanX = STEM4D.ImageGetDimensionSize(0)
Number ScanY = STEM4D.ImageGetDimensionSize(1)
Number Dx = STEM4D.ImageGetDimensionSize(2)
Number Dy = STEM4D.ImageGetDimensionSize(3)
Image out := RealImage("Test1",4,ScanX,ScanY)
for (number i=0; i<ScanX; i++)
{ for (number j=0; j<ScanY; j++)
{
image Diff2D := STEM4D.SliceN(4,2,i,j,0,0,2,Dx,1,3,Dy,1)
out.setpixel(i,j, sum(Diff2D*mask))
}
}
return out
}
// Limit mask size to what is needed!
image Test2(image STEM4D, image mask )
{
Number ScanX = STEM4D.ImageGetDimensionSize(0)
Number ScanY = STEM4D.ImageGetDimensionSize(1)
Number Dx = STEM4D.ImageGetDimensionSize(2)
Number Dy = STEM4D.ImageGetDimensionSize(3)
Image out := RealImage("Test1",4,ScanX,ScanY)
Number t,l,b,r
GetNonZeroArea(mask,t,l,b,r)
Number w = r - l
Number h = b - t
image subMask := mask.slice2(l,t,0, 0,w,1, 1,h,1 )
for (number i=0; i<ScanX; i++)
for (number j=0; j<ScanY; j++)
out.setpixel(i,j, sum(STEM4D.SliceN(4,2,i,j,l,t,2,w,1,3,h,1)*subMask))
return out
}
Image src := GetFrontImage() // dimension [ScanX, ScanY, Dx, Dy]
Number ScanX = src.ImageGetDimensionSize(0)
Number ScanY = src.ImageGetDimensionSize(1)
Number Dx = src.ImageGetDimensionSize(2)
Number Dy = src.ImageGetDimensionSize(3)
Number r = 50 // mask radius
Image maskImg := RealImage("Mask",4,Dx,Dy)
maskImg = iradius < r ? 1 : 0 // just an aperture mask
image resultImg
{
object timer = Alloc(CTimer).Start("Original")
resultImg := Original(src,maskImg)
}
resultImg.SetName("Original")
resultImg.ShowImage()
{
object timer = Alloc(CTimer).Start("Test 1")
resultImg := Test1(src,maskImg)
}
resultImg.SetName("Test 1")
resultImg.ShowImage()
{
object timer = Alloc(CTimer).Start("Test 2")
resultImg := Test2(src,maskImg)
}
resultImg.SetName("Test 2")
resultImg.ShowImage()
Compiled code comparison:
Now, it should be added that the above script is still rather slow, because it iterates in the script language. The fully compiled C++ code of DigitalMicrograph is much faster. So if you have the licensed packages giving you the SI menu, you want to use the SI/Map/Signal command. This is near-instantaneous for the example STEM DI I've mentioned above. My other answer shows how one could utilize this functionality by script.
As mentioned in my other answer, a real speed win comes when compiled, parallelized code is used. DigitalMicrograph does this, after all, in the available SI "signal" map functionality. This feature is not available in the free version, but if you have Spectrum-Imaging acquisition, you most likely have the appropriate license as well.
The answer below utilizes this functionality by accessing the UI with the command ChooseMenuItem() and applying a few more tricks. The script is a bit lengthy, but its parts also show some other nice tricks worth knowing:
TestSignalIntegrationInSI is the main script demoing how things can work.
CreatePickerByScript shows how one can create picker-spectra on SIs. This is used to open a 'Picker Diffraction Pattern' image from the STEM DI.
AddTestMasksToDP_ROIs programmatically adds ROIs to the diffraction pattern to be used as a mask.
AddTestMasksToDP_Threshold programmatically adds an intensity-threshold mask to be used as a mask.
AddTestMasksToDP_DPMasks programmatically adds the various types of diffraction masks to be used as a mask.
GetIntegratedSignalViaSIMenu is the central step of the script. With a picker-DP and the required 'masks' on it front-most, the menu command is called to perform the signal extraction (as fast as possible). Then the displayed result image is returned.
GetNewestImage is just a utility method showing how one can access the latest memory-created image.
Here is the script:
image GetNewestImage()
{
// New images get the next higher imageID.
// This can be used to identify the "latest" created image.
if ( 0 == CountImages() ) Throw( "No image in memory!" )
// We create a temp. image to get the upper limit
number lastID = RealImage("Dummy",4,1).ImageGetID()
// Then we search for the next lower existing one
image lastImg
for( number ID = lastID - 1; ID>0; ID-- )
{
lastImg := FindImageByID(ID)
if ( lastImg.ImageIsValid() ) break
}
return lastImg
}
image CreatePickerByScript( image SI, number t, number l, number b, number r )
{
if ( SI.ImageGetNumDimensions()<3 ) Throw( "Sorry, LineScans are not supported here." )
// Adding a non-volatile ROI of specific RoiNAME acts as if using
// the picker-tool. The ID string must be unique!
ROI pickerROI = NewROI()
pickerROI.RoiSetVolatile( 0 )
string uniqueID = GetDate(0)+"#"+GetTime(1)+";"+round(random()*1000)
pickerROI.RoiSetName( "SICursor(##"+uniqueID+"##)" )
SI.ImageGetImageDisplay(0).ImageDisplayAddROI( pickerROI )
// This creates the picker image.
// So the child is now the "newest" image in memory
image child := GetNewestImage()
return child
}
void AddTestMasksToDP_ROIs( image DP )
{
// Add ROIs to the DP which are your masks (any number and type of ROI works)
imageDisplay DPdisp = DP.ImageGetImageDisplay(0)
number dpX = DP.ImageGetDimensionSize(0)
number dpY = DP.ImageGetDimensionSize(1)
// Only simple RECT ROIs are supported
ROI maskRoi1 = NewROI()
maskRoi1.ROISetRectangle( dpY*0.1, dpX*0.1, dpY*0.8, dpX*0.3 )
DPdisp.ImageDisplayAddROI(maskRoi1)
// Arbitrary multi-vertex (use for ovals etc.)
ROI maskRoi2 = NewROI()
maskRoi2.ROISetRectangle( dpY*0.7, dpX*0.1, dpY*0.9, dpX*0.9 )
DPdisp.ImageDisplayAddROI(maskRoi2)
}
void AddTestMasksToDP_Threshold( image DP )
{
// Add intensity threshold mask (highest 95% intensity range)
imageDisplay DPdisp = DP.ImageGetImageDisplay(0)
DPdisp.RasterImageDisplaySetThresholdOn( 1 )
number low = max(DP) * 0.05
number high = max(DP)
DPdisp.RasterImageDisplaySetThresholdLimits( low, high )
}
void AddTestMasksToDP_DPMasks( image DP )
{
// Add Diffraction masks to the DP
imageDisplay DPdisp = DP.ImageGetImageDisplay(0)
// Spot masks (always symmetric pair)
Component spotMask = NewComponent(8,0,0,0,0) // 8 = Spotmask
spotMask.ComponentSetControlPoint(4, 0, 0,0) // 4 = TopLeft of one spot [Size only]
spotMask.ComponentSetControlPoint(7,10,10,0) // 7 = BottomRight of one spot [Size only]
spotMask.ComponentSetControlPoint(8,150,0,0) // 8 = Spot position [center]
DPdisp.ComponentAddChildAtEnd(spotMask)
// Bandpass mask (Only circles are correctly supported)
Component bandpassMask = NewComponent(15,0,0,0,0) // 15 = Bandpass (ring)
number r1 = 100
number r2 = 120
bandpassMask.ComponentSetControlPoint(7,r1,r1,0) // 7 = BottomRight of one ring [Size only]
bandpassMask.ComponentSetControlPoint(14,r2,r2,0) // 14 = BottomRight of one ring [Size only]
DPdisp.ComponentAddChildAtEnd(bandpassMask)
// Wedge mask (symmetric)
Component wedgeMask = NewComponent(19,0,0,0,0) // 19 = wedgemask (ringsegment)
wedgeMask.ComponentSetControlPoint(9,10,20,0) // 9 = One wedge vector
wedgeMask.ComponentSetControlPoint(10,-20,40,0) // 10 = Other wedge vector
DPdisp.ComponentAddChildAtEnd(wedgeMask)
// Array mask (symmetric)
Component arrayMask = NewComponent(9,0,0,0,0) // 9 = arrayMask (ringsegment)
arrayMask.ComponentSetControlPoint(9,-70,-60,0) // 9 = One array vector
arrayMask.ComponentSetControlPoint(10,99,-99,0) // 10 = Other array vector
arrayMask.ComponentSetControlPoint(4, 0, 0,0) // 4 = TopLeft of one spot [Size only]
arrayMask.ComponentSetControlPoint(7,20,20,0) // 7 = BottomRight of one spot [Size only]
DPdisp.ComponentAddChildAtEnd(arrayMask)
}
image GetIntegratedSignalViaSIMenu( image pickerChild )
{
// Call the Menu to do the work
// The picker-spectrum or DP needs to be front-most
pickerChild.SelectImage()
ChooseMenuItem("SI","Map","Signal")
// The created signal map is NOT the newest image
// (some internal images are created for the mask)
// but it is the front-most displayed one.
image signalMap := GetFrontImage()
return signalMap
}
image GetMaskFromSignalMap( image signalMap, number DPx, number DPy )
{
// The actual mask is stored in the tags
string tagPath = "Processing:[0]:Parameters:Mask"
tagGroup tg = signalMap.ImageGetTagGroup()
if ( !tg.TagGroupDoesTagExist(tagPath) )
Throw( "Sorry, no mask tag found." )
image mask := RealImage("Mask",4,DPx, DPy )
if ( !tg.TagGroupGetTagAsArray(tagPath,mask) )
Throw( "Sorry, could not retrieve mask. Maybe wrong size?" )
return mask
}
void TestSignalIntegrationInSI()
{
image STEMDI := GetFrontImage()
image DP := STEMDI.CreatePickerByScript(0,0,1,1)
if ( TwoButtonDialog( "Add ROIs as mask?", "Yes", "No" ) )
AddTestMasksToDP_ROIs( DP )
else if ( TwoButtonDialog( "Add intensity threshold as mask?", "Yes", "No" ) )
AddTestMasksToDP_Threshold( DP )
else if ( TwoButtonDialog( "Add diffraction masks as mask?", "Yes", "No" ) )
AddTestMasksToDP_DPMasks( DP )
image signalMap := GetIntegratedSignalViaSIMenu( DP )
number dpX = DP.ImageGetDimensionSize(0)
number dpY = DP.ImageGetDimensionSize(1)
// We may want to close the DP again. No longer needed
//DP.DeleteImage()
// Verification: Get Mask image from SignalMap
image usedMask := GetMaskFromSignalMap( signalMap, dpX, dpY )
usedMask.SetName( "This mask was used." )
usedMask.ShowImage()
}
TestSignalIntegrationInSI()
The solution below utilizes the intrinsic expression loops by performing in-place multiplication and then projection.
Disappointingly, it turns out this solution is actually a bit slower than the for-loop with the SliceN command.
For the same test-data of size [53 x 52 x 512 x 512] the resulting timing is:
Data copy: 1.28073 sec
Inplace multiply: 30.1978 sec
Project 1/2: 1.1208 sec
Project 2/2: 0.0019557 sec
InPlace multiplication with projections (total): 32.9045 sec
InPlace multiplication with projections (total): 34.9853 sec
// Helper class for timing
class CTimer{
number s
string n
~CTimer(object self){result("\n"+n+": "+ (GetHighResTickCount()-s)/GetHighResTicksPerSecond()+" sec");}
object Start(object self, string n_) { n=n_; s=GetHighResTickCount(); return self;}
}
image MaskMultipliedSum( image STEM4D, image MASK2D, number copyFirst )
{
// Boring feasibility checks...
if ( 4 != STEM4D.ImageGetNumDimensions() )
Throw( "Input data is not 4D." )
if ( 2 != MASK2D.ImageGetNumDimensions() )
Throw( "Input mask is not 2D." )
Number ScanX = STEM4D.ImageGetDimensionSize(0)
Number ScanY = STEM4D.ImageGetDimensionSize(1)
Number Dx = STEM4D.ImageGetDimensionSize(2)
Number Dy = STEM4D.ImageGetDimensionSize(3)
if ( Dx != MASK2D.ImageGetDimensionSize(0) )
Throw ("X dimension of mask does not match input data." )
if ( Dy != MASK2D.ImageGetDimensionSize(1) )
Throw ("Y dimension of mask does not match input data." )
// Do the maths!
image workCopy4D
if ( copyFirst )
{
object timer = Alloc(CTimer).Start("Data copy")
workCopy4D = STEM4D
}
else
workCopy4D := STEM4D
{
object timer = Alloc(CTimer).Start("Inplace multiply")
workCopy4D *= MASK2D[idimindex(2),idimindex(3)]
}
// Now we want to "sum up" over Dx and Dy
image p1,p2
{
object timer = Alloc(CTimer).Start("Project 1/2")
p1 := project( workCopy4D, 3 )
}
{
object timer = Alloc(CTimer).Start("Project 2/2")
p2 := project( p1, 2 )
}
return p2
}
image stack4D, mask2D
If ( GetTwoLabeledImagesWithPrompt("Please select 4D data and 2D mask", "Select input", "4D data", stack4D, "2D mask", mask2D ) )
{
number doCopy = TwoButtonDialog("Create workcopy?","Yes (takes time)","No (overwrites input data!)")
object timer = Alloc(CTimer).Start("InPlace multiplication with projections (total)")
MaskMultipliedSum(stack4D,mask2D,doCopy).ShowImage()
}

Kinect background removal

I followed the code provided by Robert Levy at this link: http://channel9.msdn.com/coding4fun/kinect/Display-Kinect-color-image-containing-only-players-aka-background-removal
I tried implementing it into my existing code and have had inconsistent results. If the user is in the Kinect's field of view when the program starts up, it will remove the background some of the time. If the user walks into the field of view, it will not pick them up.
namespace KinectUserRecognition
{
    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();
        }

        //Kinect Runtime
        Runtime kinect = Runtime.Kinects[0];
        PlanarImage colorImage;
        PlanarImage depthImage;
        bool isDepthImage;
        WriteableBitmap player1;

        private void Window_Loaded(object sender, RoutedEventArgs e)
        {
            isDepthImage = false;

            //UseDepthAndPlayerIndex and UseSkeletalTracking
            kinect.Initialize(RuntimeOptions.UseDepthAndPlayerIndex | RuntimeOptions.UseColor);// | RuntimeOptions.UseSkeletalTracking);

            //register for event
            kinect.VideoFrameReady += new EventHandler<ImageFrameReadyEventArgs>(nui_VideoFrameReady);
            kinect.DepthFrameReady += new EventHandler<ImageFrameReadyEventArgs>(nui_DepthFrameReady);

            //Video image type
            kinect.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480,
                ImageType.Color);

            //DepthAndPlayerIndex ImageType
            kinect.DepthStream.Open(ImageStreamType.Depth, 2, ImageResolution.Resolution320x240,
                ImageType.DepthAndPlayerIndex);
        }

        void nui_VideoFrameReady(object sender, ImageFrameReadyEventArgs e)
        {
            colorImage = e.ImageFrame.Image;
            image1.Source = BitmapSource.Create(colorImage.Width, colorImage.Height, 96, 96,
                PixelFormats.Bgr32, null, colorImage.Bits, colorImage.Width * colorImage.BytesPerPixel);

            if (isDepthImage)
            {
                player1 = GeneratePlayerImage(e.ImageFrame, 1);
                image3.Source = player1;
            }
        }

        void nui_DepthFrameReady(object sender, ImageFrameReadyEventArgs e)
        {
            //Convert depth information for a pixel into color information
            byte[] ColoredBytes = GenerateColoredBytes(e.ImageFrame);

            depthImage = e.ImageFrame.Image;
            image2.Source = BitmapSource.Create(depthImage.Width, depthImage.Height, 96, 96, PixelFormats.Bgr32, null,
                ColoredBytes, depthImage.Width * PixelFormats.Bgr32.BitsPerPixel / 8);

            isDepthImage = true;
        }

        private WriteableBitmap GeneratePlayerImage(ImageFrame imageFrame, int playerIndex)
        {
            int depthWidth = kinect.DepthStream.Width;
            int depthHeight = kinect.DepthStream.Height;

            WriteableBitmap target = new WriteableBitmap(depthWidth, depthHeight, 96, 96, PixelFormats.Bgra32, null);
            var depthRect = new System.Windows.Int32Rect(0, 0, depthWidth, depthHeight);

            byte[] color = imageFrame.Image.Bits;
            byte[] output = new byte[depthWidth * depthHeight * 4];

            //loop over each pixel in the depth image
            int outputIndex = 0;
            for (int depthY = 0, depthIndex = 0; depthY < depthHeight; depthY++)
            {
                for (int depthX = 0; depthX < depthWidth; depthX++, depthIndex += 2)
                {
                    short depthValue = (short)(depthImage.Bits[depthIndex] | (depthImage.Bits[depthIndex + 1] << 8));

                    int colorX, colorY;
                    kinect.NuiCamera.GetColorPixelCoordinatesFromDepthPixel(
                        imageFrame.Resolution,
                        imageFrame.ViewArea,
                        depthX, depthY,          //depth coordinate
                        depthValue,              //depth value
                        out colorX, out colorY); //color coordinate

                    //ensure that the calculated color location is within the bounds of the image
                    colorX = Math.Max(0, Math.Min(colorX, imageFrame.Image.Width - 1));
                    colorY = Math.Max(0, Math.Min(colorY, imageFrame.Image.Height - 1));

                    output[outputIndex++] = color[(4 * (colorX + (colorY * imageFrame.Image.Width))) + 0];
                    output[outputIndex++] = color[(4 * (colorX + (colorY * imageFrame.Image.Width))) + 1];
                    output[outputIndex++] = color[(4 * (colorX + (colorY * imageFrame.Image.Width))) + 2];
                    output[outputIndex++] = GetPlayerIndex(depthImage.Bits[depthIndex]) == playerIndex ? (byte)255 : (byte)0;
                }
            }

            target.WritePixels(depthRect, output, depthWidth * PixelFormats.Bgra32.BitsPerPixel / 8, 0);
            return target;
            //return output;
        }

        private static int GetPlayerIndex(byte firstFrame)
        {
            //returns 0 = no player, 1 = 1st player, 2 = 2nd player...
            //bitwise & on firstFrame
            return (int)firstFrame & 7;
        }
    }
}
-Edit 1-
I think I've narrowed the problem down, but I'm not sure of a way to resolve it. I assumed that having only one person in the Kinect's field of view would return a value of one from my "GetPlayerIndex" method. This is not the case. I was hoping to produce a separate image for each person with the background removed. What type of values should I expect to receive from my GetPlayerIndex method?
-Edit 2-
From my tests I've noticed that I can get a max value of 6 for the player index, but the index that I get isn't consistent. Is there a way to know what player index will be assigned to a skeleton? For example, if I were the only person in the FOV, would there be a way to know that my player index would always be 1?
The player index is not guaranteed to be anything. Once it catches a skeleton, the index will stay the same for that skeleton until it loses sight of it, but you can't assume that the first player will be 1, the second 2, etc.
What you'll need to do is determine a valid skeleton index prior to the player1 = GeneratePlayerImage(e.ImageFrame, 1); call, or alter the GeneratePlayerImage function to find an index. If you're only interested in removing the background and leaving the pixels for all the people in the frame untouched, just change this:
output[outputIndex++] = GetPlayerIndex(depthImage.Bits[depthIndex]) == playerIndex ? (byte)255 : (byte)0;
to this, which will just check for ANY player, instead of a specific player:
output[outputIndex++] = GetPlayerIndex(depthImage.Bits[depthIndex]) != 0 ? (byte)255 : (byte)0;
The other two ways I can think of to do this for a specific player instead of all players:
1. Open the Kinect's Skeleton feed and loop through the array of skeletons it gives you to find a valid index. Create a global integer to hold this index, then call the GeneratePlayerImage method with this global integer (a rough sketch of this follows below).
2. Change the GeneratePlayerImage method to check for a player index in each pixel, and if one is found use that index to remove the background for the entire image (ignore any other index it finds).
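A possible sketch of the first option, assuming the beta SDK's SkeletonFrameReady event and assuming that the depth stream's player index corresponds to the skeleton's 1-based position in the returned collection (verify this mapping against your SDK version):
int trackedPlayerIndex = 0;   // field: 0 = no tracked player yet

// In Window_Loaded, also enable skeletal tracking and subscribe:
// kinect.Initialize(RuntimeOptions.UseDepthAndPlayerIndex | RuntimeOptions.UseColor | RuntimeOptions.UseSkeletalTracking);
// kinect.SkeletonFrameReady += new EventHandler<SkeletonFrameReadyEventArgs>(nui_SkeletonFrameReady);

void nui_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    trackedPlayerIndex = 0;
    int position = 0;
    foreach (SkeletonData skeleton in e.SkeletonFrame.Skeletons)
    {
        position++;   // player indices in the depth data start at 1
        if (skeleton.TrackingState == SkeletonTrackingState.Tracked)
        {
            trackedPlayerIndex = position;   // assumed mapping: 1-based collection position == player index
            break;
        }
    }
}

// ...and in nui_VideoFrameReady, use the found index instead of the hard-coded 1:
// if (trackedPlayerIndex != 0)
//     player1 = GeneratePlayerImage(e.ImageFrame, trackedPlayerIndex);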