Record raw data in LabVIEW

I have a VI in LabVIEW that streams video from a webcam (Logitech C300) and processes the colored layers of each image as arrays. I am trying to get raw Bayer data from the webcam using Logitech's program (http://web.archive.org/web/20100830135714/http://www.quickcamteam.net/documentation/how-to/how-to-enable-raw-streaming-on-logitech-webcams) and the Vision Acquisition tool, but I only get as much data as with regular capture, instead of four times more.
Basically, I get 1280x1024 24-bit pixels where I want 1280x1024 32-bit or 2560x2048 8-bit pixels.
Has anyone had any experience with this and knows a way for LabVIEW to process the camera's raw output, or how to actually record a raw file from the camera?
Thank you!

The driver flag you've enabled simply packs the raw pixel value (8/10 bpp) into the least significant bits of the 24-bit values. Assuming the 8 bpp mode is used, the raw values can be extracted from the blue color plane, as in the following example, and then debayered to obtain RGB values.
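A rough Python/NumPy sketch of that extraction plus a minimal debayer step; the RGB plane order and the RGGB mosaic layout are assumptions that may differ for your camera:

import numpy as np

def extract_raw_bayer(frame):
    # The driver packs the 8-bit raw sensor value into the least significant
    # byte of each 24-bit pixel, which the capture library reports as the
    # blue color plane.
    return frame[:, :, 2]

def debayer_rggb(raw):
    # Minimal nearest-neighbour debayer for an assumed RGGB mosaic: each
    # 2x2 mosaic cell becomes one RGB pixel, averaging the two green sites.
    r = raw[0::2, 0::2]
    g = raw[0::2, 1::2] // 2 + raw[1::2, 0::2] // 2
    b = raw[1::2, 1::2]
    return np.dstack([r, g, b])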
Unless you can improve on the debayer algorithms in the firmware, or have very specific needs, this is not very useful. Normally, one can at least reduce the amount of data transferred by enabling raw mode, which is not the case here.
The above assumes that the raw video mode isn't being overridden by the LabVIEW IMAQdx driver. If it is, you might be able to enable raw mode from LabVIEW through property nodes. This requires configuring the acquisition manually, as the configurability of Express VIs is limited. Use the EnumStrings property to get all possible attributes, and then see if there is something like the one specified outside of the diagram disable structure (that screenshot is from a different camera).

Related

Fast EELS acquisition

To acquire EELS, I used this:
img:=camera.cm_acquire(procType,exp,binX, binY,tp,lf,bt,rt)
imgSP:=img.verticalSum() //this is a custom function to do vertical sum
and this:
imgSP:=EELSAcquireSpectrum(exp, nFrames, binX, binY, processing)
When using either one in my customized 2D mapping, both are much slower than Gatan's "Spectrum Imaging" (the first one is faster than the second). Is the lack of speed a natural limitation of scripting, or are there better function calls?
Yes, the lack of speed is a limitation of the scripting, which only gives you access to the camera in single-read mode, i.e. one command sets up the camera, exposes it, reads it out, and returns the image.
In Spectrum Imaging the camera is run in continuous mode, i.e. the same as when the live view is running: the camera is constantly exposed and read out (with shutter, depending on the type of camera). This mode of camera acquisition is available as a camera script command from GMS 3.4.0 onward.

Is there a dm script command to control the GIF cinema mode

I have been writing DigitalMicrograph scripts to take sequential frame acquisitions on a JEOL ARM200F. For some experiments, I need a faster readout speed than the usual CCD acquisition mode can provide.
The GIF Quantum camera is able to do a "cinema" mode in which half the pixels are used as memory storage such that the camera can be exposed and read out simultaneously. This is utilized for EELS acquisitions.
Does anybody know if there is a DM scripting command to activate (acquire images in) the cinema mode?
My current script sets the number of frames to acquire, the acquisition time per frame, and the binning. However, the readout time between frames is too long. Setting the camera to cinema mode before running the script still only acquires full-frame images.
There is no simple command for this. The advanced camera modes are not available as simple commands, and they are generally not part of the supported DM-script interface.
Usually, these modes can only be accessed via the object-oriented camera-script interface (CM_ commands) used by Gatan service and R&D. This script interface is, at least until now, not end-user supported.
It definitely falls into the category of 'advanced' scripting, so you will need to know how to handle object-oriented script coding style.
With the above said, the following might help you, if you already know how to use the CM_ commands in general:
In the extended (not end-user supported) script interface, the way to achieve cinema mode is to modify the acquisition parameter set: one needs to set the readMode parameter.
The following code snippet shows this:
// Get the current camera and look up the ID of the "Cinema" read mode
object camera = cm_GetCurrentCamera()
number read_mode = camera.cm_GetReadModeForNamedAcquisitionStyle("Cinema")

// Fetch (or create) the parameter set used for Record acquisitions,
// switch it to the cinema read mode, and validate it
number create_if_not_exist = 1
object acq_params = camera.CM_GetCameraAcquisitionParameterSet("Imaging", "Acquire", "Record", create_if_not_exist)
cm_SetReadMode(acq_params, read_mode)
cm_Validate_AcquisitionParameters(camera, acq_params)

// Acquire a single image with these parameters and display it
image img := cm_AcquireImage(camera, acq_params)
img.ShowImage()
Note that not all cameras support the Cinema read mode; the cm_GetReadModeForNamedAcquisitionStyle call will throw an error in that case.

Error -200361 using USB-6356 X-series DAQ board for SPI control

I'm using a USB-6356 DAQ board to control an IC via SPI.
I'm using parts of the NI SPI Digital Waveform library to create the digital waveform, then a small wrapper VI to transmit the code.
My IC measures temperature on an RTD, and currently the controlling VI has a 'push for single measurement' style button.
When I push it, the temperature is returned by a series of other VIs running the SPI communication.
After some number of pushes (clicking the button very quickly makes this happen sooner, but not necessarily in fewer clicks), the VI generates error -200361, which nominally indicates a FIFO buffer overflow on the DAQ board.
It's unclear to me whether that could actually be the cause of the problem, but I don't think so...
An NI guide describing this error for USB-600{0,8,9} devices looks promising, but following its suggestions didn't help me. I substituted 'DI.UsbXferReqCount' for the analog equivalent, since my read task is digital. Reading the default returned 4, so I changed the property node to write and set it to 1, but this made no difference.
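For reference, the same tweak in NI's Python nidaqmx bindings would look roughly like the sketch below; the channel string is a placeholder, and di_usb_xfer_req_count is my assumed Python-side name for the DAQmx attribute DI.UsbXferReqCount:

import nidaqmx

with nidaqmx.Task() as task:
    # "Dev1/port0/line0" is a placeholder; use your actual device and lines.
    chan = task.di_channels.add_di_chan("Dev1/port0/line0")
    print(chan.di_usb_xfer_req_count)  # reads back 4 by default here
    chan.di_usb_xfer_req_count = 1     # allow only one USB transfer request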
I tried uninstalling the DAQ board using the Device Manager, unplugging and replugging, but this also didn't change anything.
My guess is that additional clock samples are generated after the end of the 'Finite Samples' portion of the Read and Write tasks, and that these might be adding blank data that overflows the buffer. But the temperatures returned don't look like strange data, and I'd have assumed that if this were the case, my VIs would be unable to interpret the data read in as the correct temperature.
I've attached an image of the block diagram for the Transmit VI I'm using, but actually getting it to run would require an entire library of VIs.
The controlling VI is attached to a nearly identical forum post at NI forums.
I think the USB-6356 doesn't have output buffers for digital signals. You can check this in NI MAX: if you select a digital output, you may find that there are no parameters for Samples; it only outputs a Boolean value (0 or 1) at a time.
You can also use the DAQ Assistant in LabVIEW: when you configure a digital output and select N Samples or Continuous Samples, then push the OK button, a dialog appears telling you there is no buffer for the lines you selected.

Postprocess Depth Image to get Skeleton using the Kinect SDK / other tools?

The short question: I am wondering if the Kinect SDK / NiTE can be exploited to build depth-image-in, skeleton-out software.
The long question: I am trying to dump the depth, RGB, and skeleton data streams captured from a Kinect v2 into rosbags. However, to the best of my knowledge, capturing the skeleton stream on Linux with ROS and a Kinect v2 isn't possible yet. Therefore, I was wondering if I could dump rosbags containing the RGB and depth streams, and then post-process them to get the skeleton stream.
I can capture all three streams on Windows using the Microsoft Kinect v2 SDK, but dumping them to rosbags with all the metadata (camera_info, sync info, etc.) would be painful (correct me if I am wrong).
It's been quite some time since I worked with NiTE (and I only used the Kinect v1), so maybe someone else can give a more up-to-date answer, but from what I remember this should easily be possible.
As long as all relevant data is published via ROS topics, it is quite easy to record it with rosbag and play it back afterwards. Every node that can handle live data from the sensor will also be able to do the same work on recorded data coming from a bag file.
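For example, reading recorded frames back for post-processing takes only a few lines with the rosbag Python API (a sketch; the bag name and topic are placeholders for whatever your Kinect driver publishes):

import rosbag

def handle_depth_frame(msg):
    # Placeholder for the actual depth-to-skeleton post-processing step.
    print(msg.header.stamp)

with rosbag.Bag('kinect_session.bag') as bag:
    for topic, msg, stamp in bag.read_messages(topics=['/kinect2/sd/image_depth']):
        handle_depth_frame(msg)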
One issue you may encounter is that bag files of Kinect data quickly become very large (several gigabytes). This can be problematic if you want to edit the file afterwards on a machine with little RAM. If you only want to play the file back, or if you have enough RAM, this should not really be a problem, though.
Indeed, it is possible to perform NiTE2 skeleton tracking on any depth image stream.
Refer to:
https://github.com/VIML/VirtualDeviceForOpenNI2/wiki/How-to-use
and
https://github.com/VIML/VirtualDeviceForOpenNI2/wiki/About-PrimeSense-NiTE
With this extension one can add a virtual device that allows manipulating each pixel of the depth stream. This device can then be used to create a userTracker object. Skeleton tracking works as long as the right device name is provided:
\OpenNI2\VirtualDevice\Kinect
but consider the usage limits:
NiTE is only allowed to be used with "Authorized Hardware".

ZyXEL ADPCM codec

I have a ZyXEL USB Omni56K Duo modem and want to send and receive voice streams on it, but to reach adequate quality I probably need to implement some "ZyXEL ADPCM" encoding, because plain PCM provides too low a sampling rate to transmit even medium-quality voice, and it doesn't work over USB either (probably because even this bitrate is too high for the USB-serial converter inside it).
This mysterious codec appears in all Microsoft WAV-related libraries as one of many codecs theoretically supported, but I have found no implementations.
Can someone offer an implementation in any language or maybe some documentation? Writing a custom mu-law decoding algorithm won't be a problem for me.
Thanks.
I'm not sure how ZyXEL ADPCM differs from other flavors of ADPCM, but various ADPCM implementations can be found with some Google searches.
However, the real reason for my post is the choice of ADPCM itself. ADPCM is adaptive differential pulse-code modulation, which means that the data being passed is the difference between samples, not the current value (which is also why you see such great compression). In a clean environment with no bit loss (e.g. a disk drive), this is fine. However, in a streaming environment, it's generally assumed that bits may be periodically mangled. With any bit damage to the data, you'll be hearing static or other audio artifacts very quickly and, usually, fairly badly.
ADPCM's reset mechanism isn't frame-based, which means the audio problems can go on for an extended period of time, depending on the encoder. The reset code is usually a run of zeros (16 comes to mind, but it's been years since I wrote my own ports).
ADPCM in the telephony environment usually converts a 12-bit PCM sample to a 4-bit ADPCM sample (not bad). As for audio quality: not bad for phone conversations and the spoken word, but most people, in a blind test, can easily detect the quality drop.
In your last sentence, you throw a curveball into the question: you start mentioning mu-law. Mu-law is a PCM encoding that takes a 12-bit sample and transforms it, on a logarithmic scale, into an 8-bit sample. This is the typical compression mechanism for TDM (phone) networks in North America (most of the rest of the world uses a similar algorithm called A-law).
So, I'm confused about what you are actually trying to find.
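For reference, the mu-law transform is just a logarithmic companding curve; here is a minimal Python sketch of the continuous formula (real G.711 codecs use a segmented piecewise-linear approximation and bit tricks rather than floating point):

import math

MU = 255  # the mu constant used by North American telephony

def mulaw_compress(x):
    # Map a linear sample in [-1.0, 1.0] onto the logarithmic mu-law scale.
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mulaw_expand(y):
    # Inverse transform: recover the linear sample from the compressed one.
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)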
You also mentioned Microsoft and WAV implementations. You probably know, but just in case: WAV is just a wrapper around the audio data that provides format, sampling information, channel count, size, and other useful information. Without WAV, AU, or other wrappers involved, mu-law and ADPCM are usually presented as raw data.
One other tip if you are implementing ADPCM. As I indicated, it uses 4 bits to represent a 12-bit sample. It gets away with this because both sides share a multiplier table. Your position in the table changes based on the 4-bit value (in other words, the value is both multiplied against a step size and used to figure out the new step size). I've seen a variety of algorithms use slightly different tables (no idea why, but you typically see the sent and received signals slowly stray off the bias). One of the older, popular sound packages was different from what I typically saw from the telephony hardware vendors.
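To make that table mechanism concrete, here is a minimal Python decoder for the common IMA/DVI flavor; a sketch for illustration only, since ZyXEL's variant almost certainly differs in tables or framing:

# Index-adjust and step-size tables from the IMA ADPCM specification.
INDEX_TABLE = [-1, -1, -1, -1, 2, 4, 6, 8]
STEP_TABLE = [
    7, 8, 9, 10, 11, 12, 13, 14, 16, 17, 19, 21, 23, 25, 28, 31,
    34, 37, 41, 45, 50, 55, 60, 66, 73, 80, 88, 97, 107, 118, 130, 143,
    157, 173, 190, 209, 230, 253, 279, 307, 337, 371, 408, 449, 494, 544,
    598, 658, 724, 796, 876, 963, 1060, 1166, 1282, 1411, 1552, 1707,
    1878, 2066, 2272, 2499, 2749, 3024, 3327, 3660, 4026, 4428, 4871,
    5358, 5894, 6484, 7132, 7845, 8630, 9493, 10442, 11487, 12635, 13899,
    15289, 16818, 18500, 20350, 22385, 24623, 27086, 29794, 32767,
]

def ima_adpcm_decode(nibbles, predictor=0, index=0):
    """Decode a sequence of 4-bit IMA ADPCM codes into 16-bit PCM samples."""
    samples = []
    for code in nibbles:
        step = STEP_TABLE[index]
        # Reconstruct the difference: each magnitude bit contributes a
        # fraction of the current step size, plus a step/8 rounding term.
        diff = step >> 3
        if code & 4: diff += step
        if code & 2: diff += step >> 1
        if code & 1: diff += step >> 2
        predictor += -diff if code & 8 else diff      # bit 3 is the sign
        predictor = max(-32768, min(32767, predictor))  # clamp to 16 bits
        # The same code also moves the position in the step-size table.
        index = max(0, min(88, index + INDEX_TABLE[code & 7]))
        samples.append(predictor)
    return samples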
And, for more useless trivia, there are multiple flavors of ADPCM. The variances involve the table, source sample size, and destination sample size, but I've never had a need to work with them; they're just documented flavors I found when searching for specifications of the various audio formats used in telephony.
Piping your PCM through ffmpeg -f u16le -i - -f wav -acodec adpcm_ms - will likely work.
http://ffmpeg.org/
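If you want to drive that pipeline from code rather than a shell, a small Python wrapper along these lines should work (a sketch; the sample rate and channel count are assumptions you'll need to match to your capture format):

import subprocess

def pcm_to_adpcm_wav(pcm_bytes, rate=8000):
    # Feed raw 16-bit PCM to ffmpeg on stdin and collect an MS-ADPCM WAV
    # from stdout. '-f u16le' matches the one-liner above; use 's16le'
    # instead if your samples are signed.
    cmd = ["ffmpeg", "-f", "u16le", "-ar", str(rate), "-ac", "1",
           "-i", "-", "-f", "wav", "-acodec", "adpcm_ms", "-"]
    return subprocess.run(cmd, input=pcm_bytes,
                          stdout=subprocess.PIPE, check=True).stdout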