Fast EELS acquisition - hardware

To acquire EELS, I have been using these calls:
img:=camera.cm_acquire(procType,exp,binX, binY,tp,lf,bt,rt)
imgSP:=img.verticalSum() //this is a custom function to do vertical sum
and this:
imgSP:=EELSAcquireSpectrum(exp, nFrames, binX, binY, processing)
When I use either one in my customized 2D mapping, it is much slower than Gatan's "Spectrum Imaging" (the first one is faster than the second). Is the lack of speed a natural limitation of scripting, or are there better function calls?

Yes, the lack of speed is a limitation of scripting, which gives you access to the camera only in single-read mode, i.e. one command initializes the camera, exposes it, reads it out, and returns the image.
In Spectrum Imaging the camera is run in continuous mode, i.e. the same as when the live view is running: the camera is constantly exposed and read out (with a shutter, depending on the type of camera). This mode of camera acquisition is available as a camera script command from GMS 3.4.0 onward.

Do I need to create all the surfaces before creating the device?

Just a quick question here...
So, as you know, when you create a Vulkan device, you need to make sure the physical device you chose supports presenting to a surface with vkGetPhysicalDeviceSurfaceSupportKHR(), right? That means you need to create the surface before creating the device.
Now let's say that at run time the user presses a button which opens a new window, and stuff is going to be drawn to that window, so you need a new surface, right? But the device has already been created...
Does this mean I have to create all the surfaces before I create the device, or do I have to recreate the device? And if I need to recreate it, what happens to all the stuff that has been created/allocated from that device?
Does this mean I have to create all the surfaces before I create the device, or do I have to recreate the device?
Neither.
If the physical device cannot draw to the surface, then you need to find a physical device which can. This could happen if you have two GPUs, each plugged into a different monitor. Each GPU can only draw to surfaces that are on its monitor (though sometimes there are ways for implementations to get around this).
So if the physical device behind the logical VkDevice you're using cannot draw to the surface, you don't "recreate" the device. You create a new device, one which is almost certainly unable to draw to the surfaces that the old device could draw to. So in this case, you'd need two separate devices to render to the two surfaces.
But for most multi-monitor cases this isn't an issue. If you have a single GPU with multi-monitor output support, then any windows you create will almost certainly be compatible with that GPU. Integrated GPU + discrete GPU cases also tend to support the same surfaces.
The Vulkan API simply requires that you check to see if there is an incompatibility, and then deal with it however you can. Which could involve moving the window to the proper monitor or other OS-specific things.
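For concreteness, here is a minimal C++ sketch of that check (the function name and structure are my own, not from the answer): it enumerates the physical devices and returns the first device/queue-family pair that vkGetPhysicalDeviceSurfaceSupportKHR reports as able to present to the given surface.

#include <vulkan/vulkan.h>
#include <cstdint>
#include <vector>

// Find a physical device and queue family that can present to `surface`.
// Returns true and fills the outputs on success.
bool FindPresentablePhysicalDevice(VkInstance instance,
                                   VkSurfaceKHR surface,
                                   VkPhysicalDevice* outDevice,
                                   uint32_t* outQueueFamily)
{
    uint32_t deviceCount = 0;
    vkEnumeratePhysicalDevices(instance, &deviceCount, nullptr);
    std::vector<VkPhysicalDevice> devices(deviceCount);
    vkEnumeratePhysicalDevices(instance, &deviceCount, devices.data());

    for (VkPhysicalDevice device : devices) {
        uint32_t familyCount = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(device, &familyCount, nullptr);
        std::vector<VkQueueFamilyProperties> families(familyCount);
        vkGetPhysicalDeviceQueueFamilyProperties(device, &familyCount, families.data());

        for (uint32_t family = 0; family < familyCount; ++family) {
            VkBool32 presentSupported = VK_FALSE;
            vkGetPhysicalDeviceSurfaceSupportKHR(device, family, surface, &presentSupported);
            if (presentSupported == VK_TRUE) {
                *outDevice = device;
                *outQueueFamily = family;
                return true;
            }
        }
    }
    return false; // no enumerated device can present to this surface
}

When a new window (and therefore a new surface) appears at run time, you can run the same check against the physical device your existing VkDevice was created from; only if that check fails do you actually need a second device.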

Swap chains for windows covering multiple monitors

I'm currently developing a multi-monitor DX11 app and I ran into a very specific problem. When creating a swap chain for a window, a window handle and a pointer to a device object must be passed, and both parameters are required to be non-NULL. But when a window covers two monitors connected to different devices, a pointer to which device should be passed? Or should I create a swap chain for each monitor in order to render the parts of the window?
I'm aware that in windowed mode, DWM performs the final merging of swap-chain back buffers into the real back buffer of its very own swap chain. But I can't understand how to render to a window that can be dragged from one monitor to another and back.
On the other hand, I do understand that swap-chain buffers are located in device memory, so a device must be specified when creating a swap chain. The window handle is required too, because rendering is performed to a window. The problem is that I can't understand exactly which device must be used in the case of a window spanning two monitors, and, if I should create a swap chain for each monitor, whether I should merge the rendering results from all the swap chains.
Thank you!
In general, DWM makes it work. For your window you can create a swap chain on any device, and DWM will composite it. However, there may be a performance drop when your window moves from a monitor connected to the adapter on which the window's swap chain was created (most efficient) to a monitor connected to another adapter (less efficient, more copies).
Also, the window cannot go fullscreen on a monitor connected to an adapter different from the one on which the window's swap chain was created.
Perhaps for maximum performance you need one device per adapter, and you would juggle your rendering from one to the other depending on where the window sits. But I have no experience with that. (Also, those adapters may have very different performance profiles, to the point that copying may be less expensive than rendering on the slow adapter.)
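To make the "create it on one device and let DWM composite it" approach concrete, here is a hedged C++ sketch (function name and parameter choices are my own) using D3D11 and the DXGI 1.2 CreateSwapChainForHwnd path, so it assumes Windows 8 or later. The device is created on the default adapter; the same swap chain keeps working when the window is dragged onto a monitor driven by another adapter, with DWM doing the composition (possibly with the extra copies mentioned above).

#include <d3d11.h>
#include <dxgi1_2.h>
#include <wrl/client.h>
// link against d3d11.lib and dxgi.lib

using Microsoft::WRL::ComPtr;

HRESULT CreateDeviceAndSwapChain(HWND hwnd,
                                 ComPtr<ID3D11Device>& device,
                                 ComPtr<ID3D11DeviceContext>& context,
                                 ComPtr<IDXGISwapChain1>& swapChain)
{
    // Create the device on the default hardware adapter.
    HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                   nullptr, 0, D3D11_SDK_VERSION,
                                   &device, nullptr, &context);
    if (FAILED(hr)) return hr;

    // Walk up from the device to the DXGI factory that owns its adapter.
    ComPtr<IDXGIDevice> dxgiDevice;
    if (FAILED(hr = device.As(&dxgiDevice))) return hr;
    ComPtr<IDXGIAdapter> adapter;
    if (FAILED(hr = dxgiDevice->GetAdapter(&adapter))) return hr;
    ComPtr<IDXGIFactory2> factory;
    if (FAILED(hr = adapter->GetParent(IID_PPV_ARGS(&factory)))) return hr;

    // One swap chain for the whole window, regardless of how many monitors it covers.
    DXGI_SWAP_CHAIN_DESC1 desc = {};
    desc.Width = 0;                                  // 0 = use the window's client size
    desc.Height = 0;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount = 2;
    desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;  // flip model, Windows 8+

    return factory->CreateSwapChainForHwnd(device.Get(), hwnd, &desc,
                                           nullptr, nullptr, &swapChain);
}

Exclusive fullscreen is the exception noted above: for that, the swap chain's device should be on the adapter that drives the target monitor, which is where a one-device-per-adapter design starts to pay off.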

Is there a DM script command to control the GIF cinema mode?

I have been making digital micrograph scripts to take some sequential frame acquisitions on a JEOL ARM200F. For some experiments, I need a faster readout speed than the usual CCD acquisition mode can do.
The GIF Quantum camera is able to do a "cinema" mode in which half the pixels are used as memory storage such that the camera can be exposed and read out simultaneously. This is utilized for EELS acquisitions.
Does anybody know if there is a DM scripting command to activate (acquire images in) the cinema mode?
My current script sets the number of frames to acquire, the acquisition time per frame, and the binning. However, the readout time between frames is too slow. Setting the camera to cinema mode before running the script still acquires only full-frame images.
There is no simple command for this. The advanced camera modes are not available as simple commands, and they are generally not part of the supported DM-script interface.
Usually, these modes can only be accessed via the object-oriented camera-script interface (CM_ commands) used by Gatan service and R&D. This script interface is, at least until now, not end-user supported.
It definitely falls into the category of 'advanced' scripting, so you will need to know how to handle object-oriented script coding style.
With the above said, the following might help you if you already know how to use the CM_ commands in general:
In the extended (non-end-user-supported) script interface, the way to achieve cinema mode is to modify the acquisition parameter set. One needs to set the readMode parameter.
The following code snippet shows this:
object camera = cm_GetCurrentCamera()  // currently selected camera
number read_mode = camera.cm_GetReadModeForNamedAcquisitionStyle("Cinema")  // fails if the camera has no Cinema mode
number create_if_not_exist = 1;
object acq_params = camera.CM_GetCameraAcquisitionParameterSet("Imaging", "Acquire", "Record", create_if_not_exist)
cm_SetReadMode(acq_params, read_mode)  // switch the parameter set to Cinema readout
cm_Validate_AcquisitionParameters(camera, acq_params);  // let the camera system check the parameters
image img := cm_AcquireImage(camera, acq_params)  // acquire with the modified parameters
img.ShowImage()
Note that not all cameras support the Cinema read mode. The command on the second line will throw an error in that case.

HoloLens external rendering

Does someone have a good solution for external rendering for Microsoft HoloLens apps? Specifically: is it possible to have my laptop render a number of 3D objects that is too much for the HoloLens GPU and then display them on the HoloLens over WiFi, including the spatial mapping and interaction?
It's possible to render remotely, both directly from the Unity editor and from a built application.
While neither achieves your goal of a "good solution", they both allow very intensive applications to at least run at all.
This walks you through how to add it to an app you're building:
https://learn.microsoft.com/en-us/windows/mixed-reality/add-holographic-remoting
This is for running directly from the editor:
https://blogs.unity3d.com/2018/05/30/create-enhanced-3d-visuals-with-holographic-emulation-in-uwp/
I don't think this is possible, since you can't really access the OS or the processor at all on the HoloLens. Even if you do manage to send the data to a third party to process, the data will still need to be run back through the HoloLens, which is really just the same as before.
You may find a way to hook up a VR backpack to it, but even then, I highly doubt it would be possible.
If you are having trouble rendering 3D objects, then you should reduce the number of triangles, use a lower-resolution shader, or reduce the size of the objects. The biggest factor in processing 3D objects on the HoloLens is how much of the lens is being drawn on. If your object takes up 25% of the view instead of 100%, it will be easier to process on the HoloLens.
Also, if you can't avoid a lot of objects in the scene, maybe check out LOD, which reduces the resolution of objects based on the distance to them and vice versa.

How to modify DirectX camera

Suppose I have a 3D (but not stereoscopic) DirectX game or program. Is there a way for a second program (or a driver) to change the camera position in the game?
I'm trying to build a head-tracking plugin or driver that I can use for my DirectX games/programs. An inertial motion sensor will give me the position of my head, but my problem is using that position data to change the camera position, not the hardware/math concerns of head tracking.
I haven't been able to find anything on how to do this so far, but iZ3D was able to create two cameras near the original camera and use them for stereoscopic rendering, so I know there exists some hook/link/connection into DirectX that makes camera manipulation by a second program possible.
If I am able to get this to work I'll release the code.
-Shane
Hooking Direct3D calls is, in its nature, just hooking DLL calls; i.e. it's not something special to D3D but just a generic technique. Try googling for "hook dll" or start from here: [C++] Direct3D hooking sample. As always happens with hooks, there are many caveats, and you'll have to write a fair amount of boilerplate to satisfy all the needs of the hooked application (a rough sketch of such a hook is given at the end of this answer).
That said, manipulating the camera in games usually does not give good results. There are at least two key features of modern PC games which will severely limit your idea:
Pre-clipping. Almost any game engine filters out objects that are behind the viewing plane. So when you rotate the camera to the side, you won't see the objects you'd expect to see in the real world: they were simply not sent to D3D, since the game doesn't know that the viewing plane has changed.
Multiple-pass rendering. Many popular post-processing effects are done in extra passes (either through the whole scene or just part of it). Mirrors and "screens" are the best-known such effects. Without knowing which camera you're manipulating, you'll most likely just break the scene.
By the way, #2 is the reason why stereoscopic mode is not 100% compatible with all games. For example, in the Source engine, HDR scenes are rendered in three passes, and if you don't know how to distinguish them you'll do nothing but break the game. Take a look at how NVIDIA implements their stereoscopic mode: they make a separate hook for every popular game, and even with this approach it's not always possible to get the expected result.
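As a rough illustration of the hooking technique itself (not the injection, and not the per-game caveats above), here is a C++ sketch that assumes a Direct3D 9 game which still sets its camera through the fixed-function view matrix, a DLL already injected into the game process, and the MinHook detour library. The head-offset variable, the function names, and the way the device pointer is obtained are assumptions for illustration; many modern engines pass the camera via shader constants instead, so SetTransform may never even see a view matrix.

#include <d3d9.h>
#include <MinHook.h>   // MinHook detour library, assumed to be available

typedef HRESULT (STDMETHODCALLTYPE *SetTransform_t)(IDirect3DDevice9*,
                                                    D3DTRANSFORMSTATETYPE,
                                                    const D3DMATRIX*);
static SetTransform_t g_origSetTransform = nullptr;
static float g_headOffset[3] = {0.0f, 0.0f, 0.0f};   // hypothetical: fed by the head tracker

// Detour: nudge the view matrix by the head offset before the game uses it.
static HRESULT STDMETHODCALLTYPE HookedSetTransform(IDirect3DDevice9* device,
                                                    D3DTRANSFORMSTATETYPE state,
                                                    const D3DMATRIX* matrix)
{
    if (state == D3DTS_VIEW && matrix != nullptr) {
        D3DMATRIX adjusted = *matrix;
        adjusted._41 += g_headOffset[0];   // crude view-space translation only;
        adjusted._42 += g_headOffset[1];   // a real plugin would compose a full
        adjusted._43 += g_headOffset[2];   // rotation/translation matrix here
        return g_origSetTransform(device, state, &adjusted);
    }
    return g_origSetTransform(device, state, matrix);
}

// Install the hook, given a pointer to a device whose vtable matches the game's
// (e.g. a dummy device created inside the injected DLL).
bool InstallSetTransformHook(IDirect3DDevice9* device)
{
    void** vtable = *reinterpret_cast<void***>(device);
    const int kSetTransformIndex = 44;   // assumed vtable slot; verify against d3d9.h
    if (MH_Initialize() != MH_OK) return false;
    if (MH_CreateHook(vtable[kSetTransformIndex],
                      reinterpret_cast<void*>(&HookedSetTransform),
                      reinterpret_cast<void**>(&g_origSetTransform)) != MH_OK)
        return false;
    return MH_EnableHook(vtable[kSetTransformIndex]) == MH_OK;
}

Even when this works, the pre-clipping and multi-pass issues above still apply: the game culls against its own camera, so large offsets will expose missing geometry, and the detour has no way to tell a mirror or post-processing pass from the main pass.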