I have an application built in LabVIEW for sensors. It takes files as input and produces .cal files as output: it takes the maximum pressure and temperature values and outputs the sensor data with minimum errors. I want to automate this process.
The application is very old and written in LabVIEW. Should I rebuild the whole application on another platform or in another language, or is there another way to automate the error reduction and file output within LabVIEW itself?
I want to automate the whole process: I should be able to enter pressure and temperature, and it should give me the sensor value at the end.
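If rewriting the application is not worth it, one option is to leave the VI as it is and drive it from a script through LabVIEW's VI Server (ActiveX) interface, so the pressure/temperature inputs and the .cal output step can be automated from outside. Below is a minimal sketch of that idea in Python, assuming a Windows machine with LabVIEW's ActiveX server enabled and pywin32 installed; the VI path and the control/indicator names are placeholders, not the real ones from the calibration application.

```python
# Minimal sketch: driving an existing LabVIEW VI from Python via the
# LabVIEW VI Server ActiveX interface (Windows only; requires pywin32 and
# a LabVIEW installation with the ActiveX server enabled).
# The VI path and front-panel names below are hypothetical placeholders.
import win32com.client

labview = win32com.client.Dispatch("LabVIEW.Application")
vi = labview.GetVIReference(r"C:\cal\sensor_calibration.vi")  # hypothetical path

# Control/indicator names must match the VI's front panel exactly.
vi.SetControlValue("Max Pressure", 150.0)      # hypothetical control name
vi.SetControlValue("Max Temperature", 85.0)    # hypothetical control name
vi.Run(False)                                  # False = run synchronously, wait until done
print(vi.GetControlValue("Sensor Output"))     # hypothetical indicator name
```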
Related
I'm a newbie using LabVIEW for my project. I'm developing a program that gathers data from sensors attached to a DAQmx board and also from an STS-VIS Ocean Optics spectrometer. At first I combined both devices in one loop inside the same flat sequence structure, but I got an error saying: "The application is not able to keep up with the hardware acquisition." I could not get the data to show on the graphs for both devices, although each one worked fine when run separately. I found a suggestion that I should put the two devices in separate while loops because they may have different buffer sizes. I did that and it worked: all the sensors now show on their own graphs. But the strange thing is that I have to stop the program after the first run and run it a second time before the graphs show in the application. Can anyone tell me what I did wrong and suggest a solution? Due to project rules I cannot share my VI here publicly, but if anyone is interested in helping, I'd be happy to share it privately. Thank you.
You are doing the right thing, but you have to understand how data acquisition works in LabVIEW and in the hardware.
You can increase the hardware buffer programmatically using a property node, or try to read as fast as possible; then you don't need two separate loops.
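For reference, the same buffer adjustment looks roughly like this in Python with the nidaqmx package (in LabVIEW it would be a DAQmx Buffer property node); the channel name, sample rate, buffer size, and read size below are only example values, not recommendations for this specific setup.

```python
# Sketch: enlarging the DAQmx host input buffer programmatically and reading
# continuously, using the nidaqmx Python package. Values are placeholders.
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")           # placeholder channel
    task.timing.cfg_samp_clk_timing(10_000,                    # example rate: 10 kS/s
                                    sample_mode=AcquisitionType.CONTINUOUS)
    task.in_stream.input_buf_size = 100_000                    # enlarge the host-side buffer
    task.start()
    data = task.read(number_of_samples_per_channel=1_000)      # read often enough to keep up
```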
I currently work with an NI DAQmx device too, and I became desperate using LabVIEW because there is no good documentation and/or examples. Then I started to use Python, which I found more intuitive. The only disadvantage is that the user interface is not generated as easily, but for that you can use Qt Designer (an open-source program available online).
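As a rough illustration of that workflow, here is a minimal sketch combining a window designed in Qt Designer with a single DAQmx read via the nidaqmx Python package; the .ui filename, widget names, and channel are assumptions made up for the example.

```python
# Sketch: a Qt Designer window ("main.ui", placeholder filename) plus a
# single on-demand DAQmx read. Widget names are assumed placeholders.
import sys
import nidaqmx
from PyQt5 import QtWidgets, uic

class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        uic.loadUi("main.ui", self)                  # file exported from Qt Designer
        self.readButton.clicked.connect(self.read_sample)

    def read_sample(self):
        # Read one sample from one analog input channel (placeholder name).
        with nidaqmx.Task() as task:
            task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
            value = task.read()
            self.valueLabel.setText(f"{value:.4f} V")

app = QtWidgets.QApplication(sys.argv)
window = MainWindow()
window.show()
sys.exit(app.exec_())
```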
I have a snippet of LabVIEW code. The issue is that I am working on a larger project and have therefore decided not to use LabVIEW for it. Is there a way to work out what this LabVIEW program is doing without having to learn all of LabVIEW? I just need to know the final voltage and current being applied. I have worked out that the program outputs a pulse sequence at 20 Hz with a 200 microsecond pulse duration. Is there a way I can simulate this type of program in LabVIEW instead of learning what each component does?
What would help is the actual VI, or a LabVIEW code snippet (not a screenshot of the code). Then it would be possible to run the code and save the generated signal array to a file. For now, please check the functions below; the links point to their descriptions:
http://zone.ni.com/reference/en-XX/help/371361P-01/lvanls/signal_generator_duration/
http://zone.ni.com/reference/en-XX/help/371361N-01/lvanls/ramp_pattern/
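In the meantime, given only the parameters stated in the question (20 Hz repetition rate, 200 µs pulse width), a rough NumPy sketch of the pulse train is shown below; the amplitude and simulation sample rate are assumptions, since the real values are set inside the VI.

```python
# Rough sketch of a 20 Hz pulse train with 200 microsecond pulses.
# Amplitude and sample rate are placeholders, not values taken from the VI.
import numpy as np

fs = 1_000_000            # 1 MS/s simulation sample rate (assumed)
duration = 0.5            # seconds of signal to generate
rate = 20.0               # pulse repetition rate, Hz (from the question)
width = 200e-6            # pulse width, seconds (from the question)
amplitude = 1.0           # placeholder; the real level comes from the VI

t = np.arange(int(fs * duration)) / fs
# High while the elapsed time within each 1/rate period is less than the pulse width.
signal = amplitude * ((t % (1.0 / rate)) < width).astype(float)

np.savetxt("pulse_train.csv", np.column_stack((t, signal)), delimiter=",")
```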
I'm recording data with a USRP X310 using GNU Radio. I need to record and process data for days (possibly weeks) non-stop at a very high sampling rate, so I cannot store all the data on an HDD; I need to process it on the fly. I want to cut the continuous stream of data into chunks of N samples each and process those chunks with a Python program I have. One way of doing this, as I see it, is to store each chunk in a file on the HDD on the fly, so that I can simultaneously access it with another program and process it.
Is there a way of doing this in GNU Radio?
Thank you!
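For the processing side of that approach, assuming the stream is written to disk the way a GNU Radio File Sink normally writes a complex stream (interleaved 32-bit float I/Q, i.e. NumPy complex64), reading it back in chunks of N samples could look roughly like the sketch below; the filename, chunk size, and processing routine are placeholders.

```python
# Sketch: consuming fixed-size chunks of N complex samples from a raw file
# written by a GNU Radio File Sink. Filename and N are placeholders.
import numpy as np

N = 1_000_000                      # samples per chunk (placeholder)
BYTES_PER_SAMPLE = 8               # complex64 = 2 x float32

def process(chunk):
    # placeholder for the existing processing routine
    print(chunk.size, np.abs(chunk).mean())

with open("usrp_capture.dat", "rb") as f:
    while True:
        raw = f.read(N * BYTES_PER_SAMPLE)
        if len(raw) < N * BYTES_PER_SAMPLE:
            break                  # incomplete tail chunk; handle as needed
        process(np.frombuffer(raw, dtype=np.complex64))
```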
I'm trying to write a piece of code in which, every 1 ms, a number is incremented by one and replaces the old number (something like a stopwatch).
The problem is that whenever the CPU usage increases because of other programs running on the PC, this 1 millisecond interval also increases and the timing in my program changes.
Is there any way to prevent CPU load changes from affecting the timing in my program?
It sounds as though you are trying to generate an analogue output waveform with a digital-to-analogue converter card using software timing, where your software is responsible for determining what value should be output at any given time and updating the output accordingly.
This is OK for stationary or low-speed signals, but you are trying to do it at 1 ms intervals, in other words to output 1000 samples per second, or 1 kS/s. You cannot do this reliably on a desktop operating system: there are too many other processes going on which can use CPU time and block your program from running for many milliseconds (or even seconds, e.g. for network access).
Here are a few ways you could solve this:
Use buffered, hardware-clocked output if your analogue output device supports it. Instead of writing one sample at a time, you send the device a waveform or array of samples and it outputs them at regular intervals using a timing signal generated in hardware. Unfortunately, low-end DAQ devices often don't support hardware-clocked output.
Instead of expecting the loop that writes your samples to the AO to run every millisecond, read LabVIEW's Tick Count (ms) value in the loop and use it as an index into your array of samples: rather than trying to output every sample, your code now asks 'what time is it now, and therefore what should the output be?' That won't give you a perfect output signal, but at least it should keep the correct frequency rather than being 'slowed down'; instead you will see glitches imposed on the signal whenever the loop can't keep up. This is easy to test, and it may be adequate for your needs (there is a sketch of this approach after this list).
Use a real-time operating system instead of a desktop OS. In the case of LabVIEW this would mean using the Real-Time software module and either a National Instruments hardware device that supports RT, such as the CompactRIO series, or installing the RT OS on a dedicated PC if the hardware is compatible. This is not a cheap option, obviously (unless it's strictly for personal, home use). In any case you would need to have an RT-compatible driver for your output device.
Use your computer's sound output as the output device. LabVIEW has functions for buffered sound output and you should be able to get reliable results. You'll need to upsample your signal to one of the sound output's available sample rates, probably 44.1 kS/s. The drawbacks are that the output level is limited in range and is not calibrated, and will probably be AC-coupled so you can't output a DC or very low-frequency signal. However, if the level is OK for what you want to connect it to, or you can add suitable signal conditioning, this could be a neat solution. If you need the output level to be calibrated, you could simultaneously measure it with your DAQ card and scale the sound waveform you're outputting to keep it correct.
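To make the second option above more concrete, here is a language-neutral sketch in Python of the 'what time is it now, so what should the output be?' idea (in LabVIEW, Tick Count (ms) plays the role of the monotonic clock); the waveform, update rate, and output call are placeholders.

```python
# Sketch of time-indexed output: the elapsed time picks the sample, so loop
# jitter causes glitches but does not slow the signal down.
# write_to_ao() is a placeholder for whatever call updates the analog output.
import time
import numpy as np

fs = 1000                                                # intended update rate, samples/s
waveform = np.sin(2 * np.pi * 5 * np.arange(fs) / fs)    # example: 5 Hz sine, 1 s long

def write_to_ao(value):
    pass                                                 # placeholder output call

t0 = time.monotonic()
while True:                                              # runs until interrupted
    elapsed_ms = int((time.monotonic() - t0) * 1000)
    index = elapsed_ms % len(waveform)                   # which sample should be out right now?
    write_to_ao(waveform[index])
    time.sleep(0.0005)                                   # loose pacing; exact timing no longer matters
```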
The answer to your question is "not on a desktop computer." This is why products like LabVIEW Real-Time and dedicated deterministic hardware exist: you need a computer built around dedication to a particular process in order to consistently serve that process. Every application in a regular Windows/Mac/Linux desktop system has the problem you are seeing of potentially being interrupted by other system processes, particularly in its UI layer.
There is no way to prevent CPU load changes from affecting the timing in your program unless the computer has a real-time clock.
If it doesn't have one, there is no reason to expect it to behave deterministically. Does your program really need to run at that pace?
I'm working with a new Kinect v2 sensor and using Kinect Studio to record the Kinect stream data during some experiments. The problem is that our experiments are expected to last ~10 minutes, which, including the uncompressed video, would be equivalent to ~80 GB. In addition, the buffer fills up quite fast: around two minutes in, the remainder of the data ends up stuttering at around 2 fps instead of a smooth 25 fps.
Is there any way I can record all the data I need in compressed form? Would it be easy to create an app similar to Kinect Studio that just writes out a video file and a .xef file containing all the other sensor data?
Kinect Studio does have APIs that can be used to programmatically record particular data streams into an XEF file. Additionally, it's possible to have multiple applications using the sensor simultaneously, so in theory you should be able to have three applications collecting data from the sensor (you could combine these into one application as well):
Your application;
An application using the Kinect Studio APIs, or Kinect Studio itself, to record the non-RGB streams;
Another application that collects the RGB data stream and performs compression and then saves the data.
However, the latency and buffer issue is likely to be a problem here. Kinect Studio data collection is extremely resource-intensive and it may not be possible to do real-time video compression while maintaining 25 fps. Depending on the network infrastructure available, you might be able to offload the RGB data to another machine to compress and store, but this would need to be well tested. This is likely to be a lot of work.
I'd suggest that you first see whether switching to another high-spec machine, with a fast SSD drive and good CPU and GPU, makes the buffering issue go away. If this is the case you could then record using Kinect Studio and then post-process the XEF files after the sessions to compress the videos (using the Kinect Studio APIs to open the XEF files).