labview - buffer data then save to excel file

My question concerns a LabVIEW VI (2013) that I am trying to modify. (I am only just learning to use this language. I have searched the NI site and Stack Overflow for help without success; I suspect I am using the wrong keywords.)
My VI consists of a flat sequence, one pane of which contains a while loop where integer data is collected from a device and displayed on a graph.
I would like to be able to buffer this data and then send it to disk once a preset number of samples has been collected. My attempts so far result in only the last record being saved.
Specifically, I need to know how to accumulate the data in a buffer (array) and then, once the required number of samples has been captured, save it all to disk (saving each sample as it is captured slows the process down too much).
Hope the question is clear and thanks very much in advance for any suggestions.
Tom

Below is a simple circular buffer that holds the most recent 100 readings. Each time the buffer is refilled, its contents are written to a text file. Drag the image onto a VI's block diagram to try it out.
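Since the snippet above is an image (a LabVIEW VI snippet), here is a rough text sketch of the same logic in Python (LabVIEW itself is graphical; the buffer size, file name, and read_sample() stand-in are assumptions for illustration):

    import random

    BUFFER_SIZE = 100  # most recent readings kept in memory

    def read_sample():
        # stand-in for the device read; replace with real acquisition
        return random.randint(0, 1023)

    buffer = [0] * BUFFER_SIZE
    index = 0
    while True:  # mirrors the LabVIEW while loop
        buffer[index] = read_sample()
        index += 1
        if index == BUFFER_SIZE:
            # buffer is full: flush everything to disk in one write
            with open("readings.txt", "a") as f:
                f.write("\n".join(str(v) for v in buffer) + "\n")
            index = 0  # start refilling from the beginning

Writing once per 100 samples instead of once per sample is what keeps the disk I/O from slowing down the acquisition loop.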
As you learn more about LabVIEW and as your performance and multi-threaded needs increase, consider reading about some of the LabVIEW design patterns mentioned in the other answers:
State machine: http://www.ni.com/tutorial/7595/en/
Producer-consumer: http://www.ni.com/white-paper/3023/en/

I'd suggest splitting the data acquisition and the data saving into two different loops using a producer/consumer design pattern.
Moreover, if you need very high throughput, consider using the TDMS file format.
Have a look here for an overview: http://www.ni.com/white-paper/3727/en/
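As a rough sketch of the producer/consumer split (Python for illustration, since LabVIEW is graphical; the queue, batch size, and file name are all assumptions):

    import queue
    import random
    import threading
    import time

    q = queue.Queue()
    BATCH = 1000  # assumed number of samples per disk write

    def producer():
        # acquisition loop: does nothing but read the device and enqueue
        while True:
            q.put(random.random())  # stand-in for a device read

    def consumer():
        # saving loop: dequeues and writes in batches, so slow disk I/O
        # never stalls the acquisition loop (in LabVIEW this loop would
        # typically call the TDMS Write VIs instead)
        batch = []
        while True:
            batch.append(q.get())
            if len(batch) >= BATCH:
                with open("data.txt", "a") as f:
                    f.write("\n".join(map(str, batch)) + "\n")
                batch.clear()

    threading.Thread(target=producer, daemon=True).start()
    threading.Thread(target=consumer, daemon=True).start()
    time.sleep(5)  # let the two loops run for a bit in this demo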

A screenshot would definitely help. However, some things are clear:
Unless you are dealing with a very high volume of data, very slow hard drives, or other unusual requirements, open the file before your while loop, write to it every time you acquire a sample (leaving buffering to the OS), and close it afterwards (see the sketch below).
If you decide you need to manage buffering on your own, you can use queues. See this example: https://decibel.ni.com/content/docs/DOC-14804 for reference (they stream data from disk, buffering it in the queue, but it is the same idea)
"My VI consists of a flat sequence one pane of which"
Replace the flat sequence with a finite state machine (e.g. http://forums.ni.com/t5/LabVIEW/Ending-a-Flat-Sequence-Inside-a-case-structure/td-p/3170025).
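A sketch of the open-once/write-many/close-once pattern from the first point above (Python for illustration; in LabVIEW this maps to Open/Create/Replace File before the loop, Write to Text File inside it, and Close File after it; names are assumptions):

    import random

    # open the file once, before the acquisition loop
    with open("samples.txt", "w") as f:
        for _ in range(10_000):                # acquisition loop
            sample = random.randint(0, 1023)   # stand-in for the device read
            f.write(f"{sample}\n")             # OS-level buffering absorbs the cost
    # the file is closed once, when the with-block ends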

Related

Labview--Input-Output Data delay

I am a new LabVIEW user. I am trying to implement a controller in real time using LabVIEW. For that purpose, I started doing an analog input/output exercise. As part of the learning process, I was trying to apply an input to a system, get the data, and feed it back through an analog channel. However, I noticed a significant delay between input and output, about 1 ms. Then I thought of doing the simplest exercise: I generated an input signal, read it through LabVIEW, and fed it back out again. So basically it is a task for the ADC and DAC only. But it still has the same amount of delay. I was under the impression that hardware-timed data reads and writes would reduce the delay, but still, no change.
Could you help me out with this issue? Any advice would be really helpful.
What you're seeing might be the result of the while loop method you're using.
The while loop picks up data from the AI once every iteration and then sends it to the AO. This ends up being done in batches (maybe with a batch size of 1, but still), and the delay you are seeing could be caused by that batching.
If you need more control over timing, LabVIEW does provide it. Read up on the different clocks, such as sample clocks and trigger clocks. They allow you to preload output signals into an NI-DAQ buffer and then output the signals point by point at specific, coordinated moments.
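For example, with NI's nidaqmx Python API, a hardware-timed output preloads the buffer and lets the card clock samples out itself; the device name, channel, and rate below are assumptions for illustration:

    import numpy as np
    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    samples = np.sin(np.linspace(0, 2 * np.pi, 1000))  # example waveform

    with nidaqmx.Task() as task:
        task.ao_channels.add_ao_voltage_chan("Dev1/ao0")  # assumed device/channel
        task.timing.cfg_samp_clk_timing(
            rate=10_000,                         # hardware sample clock, not software timing
            sample_mode=AcquisitionType.FINITE,
            samps_per_chan=len(samples))
        task.write(samples, auto_start=False)    # preload the output buffer
        task.start()                             # card outputs point by point on its own clock
        task.wait_until_done()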

Why doesn't throttled file source data display in GUI time sink for GNU radio?

I have a flow graph with a file source (with repeat off) and a GUI Time Sink. The graph is throttled by a throttle block at 2 samples/sec. I expect to see two new samples in my GUI Time Sink every second. However, instead of 1-second updates, the GUI Time Sink doesn't display anything at all. If I turn repeat on in the file source, the GUI Time Sink does update. Why doesn't it update when repeat is off?
My question is similar to this one. In my case, I also have a file source throttled down to a very slow sample rate. However, my sink is a GUI Time Sink, not a file sink; I see no option for an "Unbuffered" parameter on the Time Sink.
[Screenshots: my flow graph, shown with repeat off and with repeat on]
This is actually multiple problems in one:
You're assuming the time sink will show two new values as they come in. That's not true: it will only update the display when it has (at least) as many new items as you configured it to show in its number of points.
You're assuming GNU Radio will happily read single items (or two) at a time. Typically, that is not the case: it will ask the file source for as many items as there is space in the output buffer, something like 8192 (not fixed), and typically the file source will deliver that many in one go.
Throttle doesn't work the way you think. It takes the number of input samples it gets in each call to its work function (e.g. 8192), divides that number by the throttle rate you set, and then simply blocks for that many seconds. Throttle regulates the average rate on a longer time scale, or in your really minimal-rate case, a very long time scale (see the sketch below).
You can limit the number of items in an output buffer, but not below a page size (4 kB); for complex samples that is at least 1024 items.
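To make the throttle point concrete, here is a simplified sketch of what its work function effectively does (not GNU Radio's actual source):

    import time

    def work(input_items, sample_rate):
        # e.g. 8192 items arrive in one call; block long enough that the
        # *average* rate matches sample_rate, then pass everything through
        n = len(input_items)
        time.sleep(n / sample_rate)  # at 2 samples/sec and 8192 items: ~4096 seconds!
        return input_items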
I think the classical graphical GNU Radio sinks might just not be the right thing to analyze files sample-by-sample.
I recommend trying the example flow graphs that come with Tim O'Shea's gr-pyqt. They are very handy for this kind of analysis.

convert CSV to a database and graph the data: advice on optimisation

Was hoping for some advice on best practice here. Working in Objective-C and Xcode.
I made a "FileConverter" class which has a method to read a CSV file with 7 columns of float values into a SQLite database (after verifying and parsing the data).
The way I have done this is to load the whole file into an NSString, then split it into row components, then split each row into column components (saving the result as a two-dimensional NSArray).
I then open the database and copy the array into the SQLite database. I'm using the TEXT datatype for storage at the moment. Once the data is there, I plan to graph it.
It seems to check and convert the CSV OK. However, if the CSV is quite long (say 10,000 rows), I get the spinning wheel for several seconds while it does its work. For shorter files it converts almost instantly.
Ultimately, at the point the user clicks "Convert CSV", I will also be running another method which will graph the data, and I expect this will result in a huge delay while it reads the SQL database, assembles the data into CGPoints and then draws into the graph view.
My question is about how best to optimise the process, so it can handle the larger files without spinning wheels appearing. Is this possible?
a) Using NSStrings and NSArrays certainly makes the job of reading and splitting up the data super simple and makes verifying the data easy. Is this the best way? Should I malloc a float array instead?
b) I'm working on the basis that by saving the data as TEXT values in the database, converting them to CGFloat values will be straightforward, but I realise this will add processing time.
c) I'm imagining that an SQLite database would be a faster way of getting at the data when I come to graph it, but I could also simply copy the CSV file and parse it at the point of graphing the data.
Really appreciate advice on this
Your main thread runs the UI, and you're trying to do your computation (the CSV processing) on the main thread, making the UI hang (stop updating) and the OS show the spinning wheel. To avoid this you need to move your compute-intensive operations to secondary threads.
When the CSV file is small, a synchronous operation can perform the CSV computation and update the UI immediately. With big files, the UI (app window) has to wait for the computations to finish before it can be updated. In such cases you need to compute the CSV calculations asynchronously on a background thread. There are many ways to achieve this, the most popular being Apple's GCD.
The following links, Apple's guide to a non-blocking UI and the GCD reference, explain it in detail:
Link to Apple's non blocking guide
Link to GCD
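The question is Objective-C/GCD, but the pattern is language-independent; here is a minimal sketch of the same idea in Python (the file name and function names are illustrative):

    import threading

    def parse_csv(path):
        # compute-intensive work: 7 float columns per row
        rows = []
        with open(path) as f:
            for line in f:
                rows.append([float(v) for v in line.split(",")])
        return rows

    def convert_async(path, on_done):
        # run the parse on a background thread so the UI thread stays responsive
        def worker():
            rows = parse_csv(path)
            on_done(rows)  # in a real GUI, marshal this back to the main thread
        threading.Thread(target=worker).start()

    convert_async("data.csv", lambda rows: print(f"parsed {len(rows)} rows"))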

Reading specific bytes of data from a large text file... quickly

For argument's sake, let's say you have a single, enormous file to hold your map save data. The game that comes to mind as a great example is Terraria. They save all MapWidth*MapHeight tile data within a single map file (a horrible idea, really), but they can render only what is visible within the camera (and some outlying tiles, for smoothness's sake) based on the camera position.
So my question is, "How can they search through all of that data in real time starting at the camera position?"
That would entail reading through potentially millions of tile records just to get to the screen coordinates. I understand you could skip bytes of data based on the x/y coordinates if the tile data were of consistent size (this is all I can find in my week or so of searching), but that is where my problem lies. The tile data is dynamic. If one tile is empty, the data beyond "isValid" is nonexistent, so that is fewer bytes to search through. If a tile has water, multiple states, a background, etc., it contains all the data and is the largest in terms of bytes. So the size is not constant at all. In that case we cannot just skip X bytes, as X changes (constantly, as tiles are modified).
My current solutions are: Read it line by line (Ugh), use chunk files, or ensure fixed line sizes (Padding? Data wasted... Ugh).
I know chunks would be the best option, but being able to reach that deep into text files quickly would still be a nice thing to know.
If you have chunk-based data, you need a chunk-based reader, simple as that.
Additionally, if you're interested only in certain parts of the data and you can process the file first, one option is to build a second file/list that stores the offsets to the start of every object in the first file.
In that case, whenever you need to reference an object, you look up the offset first and then do a straight jump to it in your original file. It still requires you to read through the whole file at least once.
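A sketch of that offset-index idea in Python, assuming (purely for illustration) that each record starts with a 4-byte little-endian length prefix:

    import struct

    def build_index(path):
        # one full pass: record where each object starts, skipping the bodies
        offsets = []
        with open(path, "rb") as f:
            while True:
                pos = f.tell()
                header = f.read(4)
                if len(header) < 4:
                    break
                (size,) = struct.unpack("<I", header)  # assumed length prefix
                offsets.append(pos)
                f.seek(size, 1)  # jump over the variable-sized body
        return offsets

    def read_record(path, offsets, i):
        # random access afterwards: seek straight to object i, no scanning
        with open(path, "rb") as f:
            f.seek(offsets[i])
            (size,) = struct.unpack("<I", f.read(4))
            return f.read(size)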

How to periodically update a LabVIEW chart when collecting multi-channel data at a high rate

Looking for some help with a LabVIEW data collection program. I collect 2 ms of data at 8 kHz (giving 16 data points) per channel (I am collecting data on 4 analog channels with a National Instruments data acquisition board). The DAQmx collection task gives a 1D array of 4 waveforms.
If I don't display the data, all my computation takes about 2 ms, and it is OK if the processing loop lags a little behind the collection loop. Updating the chart on LabVIEW's front panel, however, introduces an unacceptable delay. We don't need to update the display very quickly; 5-10 Hz would probably be sufficient, but I don't know how to set this up.
My current LabVIEW VI has three parallel loops:
A timed-loop for data collection
A loop for analysis and processing
A low priority loop for caching data to disk as a TDMS file
Data is passed from the collection loop to the other loops using a queue. LabVIEW examples gave me some ideas, but I am stuck.
Any suggestions, references, ideas would be appreciated.
Thanks
Azim
Follow Up Question
eaolson suggests that I re-sample the data for display purposes. The data coming from the DAQmx read is a one-dimensional array of waveforms, so I would need to somehow build or concatenate the waveform data for each channel and then re-sample it before updating the front panel chart. I suppose the best approach would be to queue the data and, in a display loop, dequeue it, build up the history, and re-sample it based on screen resolution before updating the chart. Would there be any other approach? I will look on the NI LabVIEW Forum (http://forums.ni.com/ni/board?board.id=170) for more information, as suggested by eaolson.
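In outline, that display loop might look like this (Python for illustration, since LabVIEW is graphical; the queue, update_chart(), and point budget are stand-ins):

    import queue
    import time

    display_queue = queue.Queue()  # fed by the acquisition loop

    def display_loop(update_chart, max_points=500):
        history = []
        while True:
            # drain everything that arrived since the last redraw
            while not display_queue.empty():
                history.extend(display_queue.get())
            # re-sample to roughly one point per pixel before drawing
            step = max(1, len(history) // max_points)
            update_chart(history[::step])
            time.sleep(0.2)  # ~5 Hz display rate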
Updates
changed acceptable update rate for graphs to 5-10 Hz (thanks Underflow and eaolson)
disk cache loop is a low priority one (thanks eaolson)
Thanks for all the responses.
Your overall architecture description sounds solid, but... getting to 30 Hz for any non-trivial graph is going to be challenging. Make sure you really need that rate before trying to make it happen. Optimizing to that level might take some time.
References that should be helpful:
You can defer panel updates. This keeps the front panel from refreshing until you're ready for it to do so, allowing you to buffer data in the background, and only draw it occasionally.
You should know about (a)synchronous display. This option allows some control over display rates.
There is some general advice available about speeding execution.
There is a (somewhat dated) report on execution speed on the LAVA forums. Googling around the LAVA forums is a great idea if you need to optimize your speed.
Television updates at about 30 Hz, and anything more than that is faster than the human eye can see. 30 Hz should be the maximum update rate you should consider for a display, not the starting point. Consider an update rate of 5-10 Hz.
LabVIEW charts append the most recent data to the historical data they store and display all the data at once. At 8 kHz, you're acquiring at least 8000 data points per channel per second. That means the array backing that graph has to continuously be resized to hold the new data. Also, even if your graph is 1000 pixels across, that means you're displaying 8 data points per screen pixel. There's not usually any reason to display any more than one data point per pixel. If you really need fast update rates, plot less data. Create an array to hold the historical data and plot only every Nth data point, where N is chosen so you're plotting, say, only a few hundred points.
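A minimal sketch of that idea (Python; the history length and pixel width are assumptions): preallocate the history once so nothing is resized per append, and hand the chart only every Nth point:

    import numpy as np

    HISTORY = 8000 * 10          # preallocated: 10 s at 8 kHz, never resized
    history = np.zeros(HISTORY)
    count = 0

    def append_block(block):
        # store new samples into the fixed-size array (oldest overwritten;
        # chronological ordering after wrap-around is ignored for brevity)
        global count
        for v in block:
            history[count % HISTORY] = v
            count += 1

    def points_to_plot(width_px=1000):
        # plot at most one point per screen pixel
        valid = history[:min(count, HISTORY)]
        n = max(1, len(valid) // width_px)
        return valid[::n]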
Remember that your loops can run at different rates. It may be satisfactory to run the write-to-disk loop at a much lower frequency than the data collection rate, maybe every couple of seconds.
Avoid property nodes if you can. They run in the UI thread, which is slower than most other execution.
Other than that, it's really hard to offer a lot of substantial advice without seeing code or more specifics. Consider also asking your question at the NI LabVIEW forums. There are a lot of helpful people there.