Is there a DM-script command to control the GIF cinema mode? - dm-script

I have been writing DigitalMicrograph scripts to perform sequential frame acquisitions on a JEOL ARM200F. For some experiments, I need a faster readout speed than the usual CCD acquisition mode can provide.
The GIF Quantum camera is able to do a "cinema" mode in which half the pixels are used as memory storage such that the camera can be exposed and read out simultaneously. This is utilized for EELS acquisitions.
Does anybody know if there is a DM scripting command to activate (acquire images in) the cinema mode?
My current script sets the number of frames to acquire, the acquisition time per frame, and the binning. However, the readout time between frames is too slow. Setting the camera to cinema mode before running the script still only acquires full-frame images.

There is no simple command for this. The advanced camera modes are not exposed as simple commands and are generally not part of the supported DM-script interface.
Usually, these modes can only be accessed via the object-oriented camera-script interface (the CM_ commands) used by Gatan service and R&D. This script interface is, at least so far, not end-user supported.
It definitely falls into the category of 'advanced' scripting, so you will need to know how to handle object-oriented script coding style.
With the above said, the following might help you if you already know how to use the CM_ commands in general:
In the extended (not end-user-supported) script interface, cinema mode is achieved by modifying the acquisition parameter set; one needs to set the readMode parameter.
The following code snippet shows this:
// Get the currently selected camera
object camera = cm_GetCurrentCamera()

// Look up the read-mode value for the named "Cinema" acquisition style.
// This call throws an error if the camera does not support Cinema mode.
number read_mode = camera.cm_GetReadModeForNamedAcquisitionStyle("Cinema")

// Get (or create) the acquisition parameter set and set its read mode
number create_if_not_exist = 1
object acq_params = camera.CM_GetCameraAcquisitionParameterSet("Imaging", "Acquire", "Record", create_if_not_exist)
cm_SetReadMode(acq_params, read_mode)
cm_Validate_AcquisitionParameters(camera, acq_params)

// Acquire a single image with these parameters and display it
image img := cm_AcquireImage(camera, acq_params)
img.ShowImage()
Note that not all cameras support the Cinema read mode. The cm_GetReadModeForNamedAcquisitionStyle call will throw an error message in that case.

Related

Fast EELS acquisition

To acquire EELS spectra, I have used the two calls below:
img := camera.cm_acquire(procType, exp, binX, binY, tp, lf, bt, rt)
imgSP := img.verticalSum() // a custom function that performs a vertical sum
and this:
imgSP := EELSAcquireSpectrum(exp, nFrames, binX, binY, processing)
When using either one in my customized 2D mapping, both are much slower than Gatan's "Spectrum Imaging" (the first is faster than the second). Is the lack of speed a natural limitation of scripting, or are there better function calls?
Yes, the lack of speed is a limitation of the scripting interface, which gives you access to the camera only in single-read mode, i.e. one command initializes the camera, exposes it, reads it out, and returns the image.
In Spectrum Imaging the camera runs in continuous mode, i.e. the same as when the live view is running: the camera is constantly exposed and read out (with a shutter, depending on the type of camera). This mode of camera acquisition is available as a camera script command from GMS 3.4.0 onward.

Error -200361 using USB-6356 X-series DAQ board for SPI control

I'm using a USB-6356 DAQ board to control an IC via SPI.
I'm using parts of the NI SPI Digital Waveform library to create the digital waveform, then a small wrapper VI to transmit the code.
My IC measures temperature on an RTD, and currently the controlling VI has a 'push for single measurement' style button.
When I push it, the temperature is returned by a series of other VIs running the SPI communication.
After some number of pushes (clicking the button very quickly makes this happen sooner, though not necessarily in fewer clicks), the VI generates error -200361, which is nominally a FIFO buffer overflow on the DAQ board.
It's unclear to me if that could actually be the cause of the problem, but I don't think so...
An NI guide describing this error for USB-600{0,8,9} devices looks promising, but following the suggestions didn't help me. I substituted 'DI.UsbXferReqCount' for the analog equivalent, since my read task is digital. Reading the default returned 4, so I changed the property to write and selected '1', but this made no difference.
I tried uninstalling the DAQ board using the Device Manager, unplugging and replugging, but this also didn't change anything.
My guess is that additional clock samples are generated after the end of the 'Finite Samples' part for the Read and Write tasks, and that these might be adding blank data that overflows, but the temperatures returned don't indicate strange data, and I'd have assumed that if this were the case, my VIs would be unable to interpret the data read in as the correct temperature.
I've attached an image of the block diagram for the Transmit VI I'm using, but actually getting it to run would require an entire library of VIs.
The controlling VI is attached to a nearly identical forum post at NI forums.
I think the USB-6356 doesn't have output buffers for digital signals. You can verify this in NI MAX: if you select a digital output, you may find that there are no parameters for samples; it only outputs a Boolean value (0 or 1) at a time.
You can also use the DAQ Assistant in LabVIEW: when you configure a digital output and select N Samples or Continuous Samples, then press OK, a dialog appears telling you that there is no buffer for the lines you selected.

Impossible to acquire and present in parallel with rendering?

Note: I'm self-learning Vulkan with little knowledge of modern OpenGL.
Reading the Vulkan specifications, I can see very nice semaphores that allow the command buffer and the swapchain to synchronize. Here's what I understand to be a simple (yet I think inefficient) way of doing things:
1. Get an image with vkAcquireNextImageKHR, signalling sem_post_acq.
2. Build a command buffer (or use a pre-built one) containing:
   - an image barrier to transition the image away from VK_IMAGE_LAYOUT_UNDEFINED,
   - the render commands,
   - an image barrier to transition the image to VK_IMAGE_LAYOUT_PRESENT_SRC_KHR.
3. Submit to the queue, waiting on sem_post_acq at the fragment stage and signalling sem_pre_present.
4. vkQueuePresentKHR, waiting on sem_pre_present.
The problem here is that the image barriers in the command buffer must know which image they are transitioning, which means that vkAcquireNextImageKHR must return before one knows how to build the command buffer (or which pre-built command buffer to submit). But vkAcquireNextImageKHR could potentially sleep a lot (because the presentation engine is busy and there are no free images). On the other hand, the submission of the command buffer is costly itself, and more importantly, all stages before fragment can run without having any knowledge of which image the final result will be rendered to.
Theoretically, it seems to me that a scheme like the following would allow a higher degree of parallelism:
1. Build a command buffer (or use a pre-built one) containing:
   - an image barrier to transition the image away from VK_IMAGE_LAYOUT_UNDEFINED,
   - the render commands,
   - an image barrier to transition the image to VK_IMAGE_LAYOUT_PRESENT_SRC_KHR.
2. Submit to the queue, waiting on sem_post_acq at the fragment stage and signalling sem_pre_present.
3. Get an image with vkAcquireNextImageKHR, signalling sem_post_acq.
4. vkQueuePresentKHR, waiting on sem_pre_present.
Which would, again theoretically, allow the pipeline to execute all the way up to the fragment shader, while we wait for vkAcquireNextImageKHR. The only reason this doesn't work is that it is neither possible to tell the command buffer that this image will be determined later (with proper synchronization), nor is it possible to ask the presentation engine for a specific image.
My first question is: is my analysis correct? If so, is such an optimization not possible in Vulkan at all and why not?
My second question is: wouldn't it have made more sense if you could tell vkAcquireNextImageKHR which particular image you want to acquire, and iterate through them yourself? That way, you could know in advance which image you are going to ask for, and build and submit your command buffer accordingly.
As Nicol said, you can record secondary command buffers independent of which image they will render to.
However, you can take it a step further and record command buffers for all swapchain images in advance, then select the correct one to submit based on the image acquired.
This type of reuse takes some extra consideration because all memory ranges used are baked into the command buffer. But in many situations the required render commands don't actually change from one frame to the next; only a little of the data they use does.
So the sequence of such a frame would be:
// Acquire the next image index; vk.acquire is signalled when it is ready
vkAcquireNextImageKHR(vk.dev, vk.swap, 0, vk.acquire, VK_NULL_HANDLE, &vk.image_ind);
// Wait until the previous submit that used this image has finished,
// so its staging memory can safely be overwritten; the fence must then
// be reset before it can be passed to vkQueueSubmit again
vkWaitForFences(vk.dev, 1, &vk.fences[vk.image_ind], true, ~0);
vkResetFences(vk.dev, 1, &vk.fences[vk.image_ind]);
engine_update_render_data(vk.mapped_staging[vk.image_ind]);
// Submit the pre-recorded command buffer for this image
VkSubmitInfo submit = build_submit(vk.acquire, vk.rend_cmd[vk.image_ind], vk.present);
vkQueueSubmit(vk.rend_queue, 1, &submit, vk.fences[vk.image_ind]);
// Present, waiting on the render-finished semaphore
VkPresentInfoKHR present = build_present(vk.present, vk.swap, vk.image_ind);
vkQueuePresentKHR(vk.queue, &present);
Granted, this does not allow for conditional rendering, but the GPU is in general fast enough to render some geometry out of frame without any noticeable delay. So until the player reaches a loading zone where new geometry has to be displayed, you can keep those command buffers alive.
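The bookkeeping in the sequence above can be sketched outside of Vulkan. The following is a plain-Python toy model, not real Vulkan code; Frame, record_all and the staging list are illustrative stand-ins for the per-image command buffers, fences, and mapped staging memory:

```python
# Toy model (plain Python, NOT real Vulkan) of the pattern above:
# one pre-recorded command buffer and one fence per swapchain image,
# selected by the index returned from the acquire step.

class Frame:
    def __init__(self, index):
        self.index = index
        # Recorded once at startup, reused every time this image comes up
        self.cmd = f"cmd_buffer[{index}]"
        self.in_flight = False  # stands in for the per-image fence

def record_all(swapchain_len):
    # One Frame (command buffer + fence) per swapchain image
    return [Frame(i) for i in range(swapchain_len)]

def render_one_frame(frames, acquired_index, staging):
    frame = frames[acquired_index]
    # vkWaitForFences: ensure the last submit that used this image is done
    # before overwriting the staging memory it reads from
    frame.in_flight = False
    staging[acquired_index] = "updated render data"  # engine_update_render_data
    frame.in_flight = True  # vkQueueSubmit; fence signalled on completion
    return frame.cmd  # the pre-recorded buffer that gets submitted

frames = record_all(3)
staging = [None] * 3
# Six frames with a 3-image swapchain: each buffer is submitted twice
submitted = [render_one_frame(frames, i % 3, staging) for i in range(6)]
print(submitted)
```

The point of the model is that nothing is recorded per frame: the acquire only picks which pre-built buffer to submit, so no command-buffer building sits on the critical path after vkAcquireNextImageKHR returns.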
Your entire question is predicated on the assumption that you cannot do any command buffer building work without a specific swapchain image. That's not true at all.
First, you can always build secondary command buffers; providing a VkFramebuffer is merely a courtesy, not a requirement. And this is very important if you want to use Vulkan to improve CPU performance. After all, being able to build command buffers in parallel is one of the selling points of Vulkan. For you to only be creating one is something of a waste for a performance-conscious application.
In such a case, only the primary command buffer needs the actual image.
Second, who says that you will be doing the majority of your rendering to the presentable image? If you're doing deferred rendering, most of your stuff will be written to deferred buffers. Even post-processing effects like tone-mapping, SSAO, and so forth will probably be done to an intermediate buffer.
Worst-case scenario, you can always render to your own image. Then you build a command buffer whose only contents are an image copy from your image to the presentable one.
all stages before fragment can run without having any knowledge of which image the final result will be rendered to.
You assume that the hardware has a strict separation between vertex processing and rasterization. This is true only for tile-based hardware.
Direct renderers just execute the whole pipeline, top to bottom, for each rendering command. They don't store post-transformed vertex data in large buffers. It just flows down to the next step. So if the "fragment stage" has to wait on a semaphore, then you can assume that all other stages will be idle as well while waiting.
wouldn't it have made more sense if you could tell vkAcquireNextImageKHR which particular image you want to acquire, and iterate through them yourself?
No. The implementation would be unable to decide which image to give you next. This is precisely why you have to ask for an image: so that the implementation can figure out on its own which image it is safe for you to have.
Also, there's specific language in the specification that the semaphore and/or event you provide must not only be unsignaled but there cannot be any outstanding operations waiting on them. Why?
Because vkAcquireNextImageKHR can fail. If you have some operation in a queue that's waiting on a semaphore that's never going to fire, that will cause huge problems. You have to successfully acquire first, then submit work that is based on the semaphore.
Generally speaking, if you're regularly having trouble getting presentable images in a timely fashion, you need to make your swapchain longer. That's the point of having multiple buffers, after all.

Record raw data in LabVIEW

I have this VI in Labview that streams video from a webcam (Logitech C300) and processes the colored layers of each image as arrays. I am trying to get raw Bayer data from the webcam using Logitech's program (http://web.archive.org/web/20100830135714/http://www.quickcamteam.net/documentation/how-to/how-to-enable-raw-streaming-on-logitech-webcams) and the Vision Acquisition tool but I only get as much data as with regular capture, instead of four times more.
Basically, I get 1280x1024 24-bit pixels where I want 1280x1024 32-bit or 2560x2048 8-bit pixels.
Has anyone had any experience with this and knows a way for LabVIEW to process the camera's raw output, or how to actually record a raw file from the camera?
Thank you!
The driver flag you've enabled simply packs the raw pixel value (8/10 bits per pixel) into the least significant bits of the 24-bit values. Assuming the 8 bpp mode is used, the raw values can be extracted from the blue color plane as in the following example. The result can then be debayered to obtain RGB values.
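A minimal sketch of that extraction, in NumPy rather than LabVIEW (the RGGB mosaic layout and the crude nearest-neighbour "debayer" are illustrative assumptions; real code would interpolate, and the actual mosaic layout depends on the camera):

```python
import numpy as np

def extract_raw(frame_rgb):
    """frame_rgb: (H, W, 3) uint8 array as delivered by the capture driver.
    The raw Bayer sample sits in the least significant byte, i.e. the
    blue color plane, so the mosaic is recovered by slicing it out."""
    return frame_rgb[:, :, 2]

def debayer_nearest(raw):
    """Crude nearest-neighbour demosaic, assuming an RGGB layout, just to
    show the principle; a real debayer would interpolate each plane."""
    r = raw[0::2, 0::2]
    g = raw[0::2, 1::2]  # one of the two green samples per 2x2 cell
    b = raw[1::2, 1::2]
    return np.dstack([r, g, b])  # (H/2, W/2, 3) RGB image

# Tiny synthetic 4x4 frame: raw values packed into the blue plane
raw = np.arange(16, dtype=np.uint8).reshape(4, 4)
frame = np.dstack([np.zeros_like(raw), np.zeros_like(raw), raw])
recovered = extract_raw(frame)
rgb = debayer_nearest(recovered)
print(rgb.shape)  # (2, 2, 3)
```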
Unless you can improve on the debayer algorithm in the firmware, or have very specific needs, this is not very useful. Normally one can at least reduce the amount of data transferred by enabling raw mode, which is not the case here.
The above assumes that the raw video mode isn't being overridden by the LabVIEW IMAQdx driver. If it is, you might be able to enable raw mode from LabVIEW through property nodes. This requires manually configuring the acquisition, as the configurability of the express VIs is limited. Use the EnumStrings property to list all possible attributes, and then see if there is something like the one specified outside of the diagram disable structure (this is from a different camera).

How to modify DirectX camera

Suppose I have a 3D (but not stereoscopic) DirectX game or program. Is there a way for a second program (or a driver) to change the camera position in the game?
I'm trying to build a head-tracking plugin or driver that I can use for my DirectX games/programs. An inertial motion sensor will give me the position of my head but my problem is using that position data to change the camera position, not with the hardware/math concerns of head tracking.
I haven't been able to find anything on how to do this so far, but iZ3D was able to create two cameras near the original camera and use it for stereoscopic stuff, so I know there exists some hook/link/connection into DirectX that makes camera manipulation by a second program possible.
If I am able to get this to work I'll release the code.
-Shane
Hooking Direct3D calls is, by its nature, just hooking DLL calls, i.e. it's not something specific to D3D but a generic technique. Try googling for "hook dll" or start from here: [C++] Direct3D hooking sample. As always happens with hooks, there are many caveats, and you'll have to write a fairly large amount of boilerplate to satisfy all the needs of the hooked application.
That said, manipulating the camera in games usually gives poor results. There are at least two features of modern PC games that will severely limit your idea:
1. Pre-clipping. Almost any game engine culls objects that fall outside the viewing volume. So when you rotate the camera to the side, you won't see the objects you'd expect to see in the real world - they were simply never sent to D3D, since the game doesn't know that the viewing plane has changed.
2. Multi-pass rendering. Many popular post-processing effects are done in extra passes (either over the whole scene or just part of it). Mirrors and "screens" are the best-known examples. Without knowing which camera you're manipulating, you'll most likely just break the scene.
By the way, #2 is the reason why stereoscopic mode is not 100% compatible with all games. For example, in the Source engine HDR scenes are rendered in three passes, and if you don't know how to distinguish them you'll do nothing but break the game. Take a look at how nVidia implements their stereoscopic mode: they make a separate hook for every popular game, and even with this approach it's not always possible to get the expected result.