Acquire a series of STEM images via an event listener - dm-script

Recently I have been trying to attach an event listener to a live display so as to acquire a series of images automatically, using the "data_value_changed" event map. In TEM mode everything is fine and the 3D stack is obtained properly. Unfortunately, when applying this to a live STEM image from the DigiScan, the script fails completely. I later realized that in this mode the image is updated pixel by pixel as the beam scans, rather than frame by frame. I also tested the "data_changed" event map, but that ended in failure as well.
With DM 2.0 or later it seems much easier to acquire a series of customized STEM images, since DigiScan control is conveniently accessible via scripting. Unfortunately, our microscope is quite old and only has DM 1.5 installed.
Is there an event map specific to this purpose, or is the event-handler approach not suitable at all?
Thanks in advance

The data-changed event handler is suitable, but there is no dedicated frame-complete event.
Instead, your event-handling code needs to be creative and deal with the fact that you get more events than you want. You are really only interested in the events which (also) change the last pixel in the image (as the frame is filled sequentially), but you get events whenever sub-parts of the image change.
So you need to filter those events out, as quickly and with as little CPU load as possible.
The easiest way is to read the last pixel's value at each event and compare it to a stored value. If the value changed, then this pixel was changed, indicating the frame is "complete" and you want to use the event. Otherwise, just return without further action.
There is a very slim chance (for scanned images) that a "new" frame has a value numerically identical to the frame before, but in most cases this is all you need to do.
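Purely to illustrate that logic, here is a minimal sketch. It is intentionally not DM-script: a NumPy array stands in for the live image and a plain function stands in for the event handler, so only the filtering idea carries over.

import numpy as np

class LastPixelFilter:
    """Filter for data-changed events: fires only when the final pixel
    changed, which indicates that a frame has just been completed."""

    def __init__(self, live_image):
        self.live_image = live_image              # 2D array filled pixel by pixel
        self.last_value = live_image[-1, -1]      # buffered value of the final pixel

    def frame_completed(self):
        """Call from the event handler; True only for frame-end events."""
        current = self.live_image[-1, -1]
        if current == self.last_value:
            return False                          # sub-frame update: ignore cheaply
        self.last_value = current                 # buffer for the next comparison
        return True                               # final pixel changed: frame done

# Usage sketch: copy each completed frame into a growing stack.
live = np.zeros((256, 256), dtype=np.float32)
stack = []
flt = LastPixelFilter(live)

def on_data_changed():                            # stand-in for the real event handler
    if flt.frame_completed():
        stack.append(live.copy())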
If this isn't enough for you, you may look at longer, but also more CPU-intensive, checks, such as computing a Boolean change map between "now" and "buffered" at each event and keeping track of the "last" changed position. Then, if there is a "jump back" to an earlier index, you know that your buffered last image actually was a full frame.
(Note that you will always see the data update once at the end of a frame, hence this will work.)
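And a sketch of that heavier check, again in plain Python rather than DM-script, with the frame boundary detected when the changed region jumps back towards the start of the image:

import numpy as np

class ChangeMapFilter:
    """Diff the live image against a buffered copy and detect the wrap-around
    of the scan position, i.e. the start of a new frame."""

    def __init__(self, live_image):
        self.live_image = live_image
        self.buffered = live_image.copy()
        self.last_changed = -1                    # flat index of the most recent change

    def completed_frame(self):
        """Return the previous full frame when the scan wrapped, else None."""
        changed = np.flatnonzero(self.live_image != self.buffered)
        if changed.size == 0:
            return None                           # nothing new, ignore this event
        wrapped = changed[0] < self.last_changed  # changes restarted near the top
        frame = self.buffered.copy() if wrapped else None
        self.last_changed = int(changed[-1])
        np.copyto(self.buffered, self.live_image) # keep the buffer current
        return frame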
There is an example of this type of script in this answer here. If this isn't working for you, please comment or rephrase your question with more details on where you run into issues.

How to create a list which sorts itself with animation, like in games

I am creating a game in Flutter and want to build a leaderboard page. I want a list that behaves like the Reorderable widget, but there the tiles are dragged manually; I need it to be automatic, so that tiles elevate and move above or below the other tiles with the same animation as a reorderable list, just without user interaction.
I hope I will find a solution with this large community.
You can take a look at the AnimatedList in Flutter: https://api.flutter.dev/flutter/widgets/AnimatedList-class.html
When a new value is inserted (or removed), it is automatically animated (and you can of course customize this animation to suit your needs).
I'm not sure how you would handle moving an item from one place to the next in the list, though.
Update
I found another SO thread which mentions the great_list_view package, where you can just update your underlying list (with the new score, or sort it again) and it will automatically animate the change for you. That sounds like a good option, and the package has recently been updated and has a decent number of likes on pub.dev.
https://pub.dev/packages/great_list_view

Nsight Graphics and RenderDoc cannot trace application

I am stuck writing a Vulkan renderer. The final output I see on the screen is only the clear color, animated over time, but no geometry. Even with all possible validation turned on I don't get any errors, warnings, best-practices or performance hints, except for the best-practices warning "You are using VK_PIPELINE_STAGE_ALL_COMMANDS_BIT when vkQueueSubmit is called". I'm not actually sure I have enabled all possible validation, but I have the Vulkan Configurator running and ticked all checkboxes under "VK_LAYER_KHRONOS_validation", after which the vkQueueSubmit hint showed up.
After poking around for some hours I decided to look into RenderDoc. I can launch the application from it just fine, but RenderDoc says "Connection status: Established, Api: none" and I cannot capture a frame.
Quite confused, I thought I would try Nsight Graphics, only to find the same problem: I can run the application, but it says "Attachable process detected. Status: No Graphics API". I read somewhere that I can start the process first and then use the attach functionality to attach to the running process, which I did, unfortunately with the same outcome.
I read that there can be problems when not properly presenting every frame, which is why I change the clear color over time: to make sure I actually present every frame, which I can confirm is the case.
I am quite lost at this point. Has anyone had similar experiences? Any ideas as to what I could do to get RenderDoc / Nsight Graphics working properly? Neither shows anything in its logs; I guess they just assume the process does not use any graphics API and therefore will not trace it.
I am also thankful for ideas about why I cannot see my geometry, though I understand this is even harder to guess from your side. Still, some notes: I have forced depth and stencil tests off; although the vertices should be COUNTER_CLOCKWISE I have also tried CLOCKWISE just to make sure; I have set the face cull mode off, checked the color write mask and rasterizerDiscard, and even set gl_Position to ignore the vertex positions and transform matrices completely and use random values in the range -1 to 1 instead. Basically everything that came to mind when I hear "only clear color, but no errors", all to no avail.
In case it helps: I am on Windows 11, using either an RTX 3070 or an Intel UHD 770, both with the same outcome.
Small update:
Using the Vulkan Configurator I could force the VK_LAYER_RENDERDOC_capture layer on; when running the application I then see the overlay, and after pressing F12 it reports that it captured a frame. However, RenderDoc still cannot find a graphics API for this process, and I have no idea how to access that capture.
I then forced VK_LAYER_LUNARG_api_dump on and dumped the output into an HTML file, which I inspected, and I still cannot see anything wrong. I looked especially closely at the pipeline and render pass creation calls.
This left me thinking it might be the uniform / vertex buffer contents, offsets or the like, so I removed all of that and used hardcoded vertex positions and fragment outputs, and still I can only see the clear color in the final image on the screen.
Thanks
It turns out that confused me should have been converting the relative viewport I expose into absolute values using the current camera's width and height, i.e. giving (0, 0, 1920, 1080) to Vulkan instead of (0, 0, 1, 1).
Holy moly, what a ride.
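For anyone landing here with the same symptom, the fix boils down to this conversion, sketched in Python just to show the arithmetic (the 1920x1080 framebuffer size is only an example):

def to_absolute_viewport(rel_x, rel_y, rel_w, rel_h, fb_width, fb_height):
    """Map a viewport given in relative units (0..1) to the absolute pixel
    values expected in VkViewport's x, y, width and height fields."""
    return (rel_x * fb_width, rel_y * fb_height,
            rel_w * fb_width, rel_h * fb_height)

# (0, 0, 1, 1) on a 1920x1080 framebuffer -> (0, 0, 1920, 1080)
print(to_absolute_viewport(0, 0, 1, 1, 1920, 1080))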

Creating blocks in Grasshopper dynamically

What I have: When my device moves, point data is sent to Grasshopper via UDP.
What I want: Visualize the path in Rhino/Grasshopper based on the incoming points.
Is it possible to create the points / blocks dynamically?
Thank you very much in advance!
You could use the timer component in Grasshopper to continuously update your UDP receiver. Anything downstream from there, such as a spline through a list of points, will be redrawn with every change.
So far it is unclear what exactly you are having a problem with. If you have trouble interpreting the incoming text messages, try the text split component to break your message down into XYZ point coordinates, then use the construct point component.
If the problem is storing the points, try the data recorder component.
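If you would rather script it, here is a minimal GhPython sketch of the same idea. It assumes a single text input named msg carrying the latest message as comma-separated "x,y,z", and it keeps the received points in scriptcontext.sticky so they survive the recomputes triggered by the timer:

import Rhino.Geometry as rg
import scriptcontext as sc

key = "udp_path_points"
pts = sc.sticky.setdefault(key, [])       # point list persisted between solutions

if msg:                                   # msg: assumed text input, e.g. "1.0,2.0,0.5"
    x, y, z = [float(v) for v in msg.split(",")]
    pts.append(rg.Point3d(x, y, z))

points = pts                              # output: all points received so far
path = rg.Polyline(pts) if len(pts) > 1 else None   # output: the path as a polyline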

Read code markers from images

I do not know the real name of this type of "QR"-like code; they are used in augmented reality and other tracking applications.
Here is an image of what it looks like.
I want to build a VB.NET program that finds as many of these as possible in an image. I do not need to get angles and so on, only a number.
The marker needs to handle more than 10K numbers and tolerate rotation.
I did use https://github.com/jcmellado/js-aruco as a template to solve my problem.

Getting the screen position of a non-view based NSStatusItem

My application makes use of an NSStatusItem. I need to grab the screen coordinates for the status item, but since I have no need for the functionality offered by setting a custom view for the item I am using a standard icon-based one instead.
Is there a way to get the status item's position without having to resort to setting a custom view to it?
Unless you find a better way, you could use some undocumented API: - (id)_window of NSStatusItem probably returns the window enclosing the item itself. Maybe you can get some interesting information out of that?
Beware: Some fundamentalists might actually try to break your neck for using undocumented API. Just make sure you check regularly if this portion of your code works with newer OS versions (either by hand or using some unit tests).