I am using a LeCroy LT264 oscilloscope to trace a lightning impulse test (1.2/50 µs waveform). I have the LabVIEW driver and use the edge-triggered example. One impulse out of every three is not shown. What could be the reason? Thanks for your interest.
You can also download the code here: http://forums.ni.com/t5/Instrument-Control-GPIB-Serial/LeCroy-LT264-Lightning-Impulse-Test-Signal-Loss/td-p/2510994
OK, I solved the problem. The cause was the trigger level: I had set it to 40 V or 80 V. When I set the trigger level to 150 V, the program shows all impulses.
Check how many samples you are recording. If the time it takes to collect the samples is longer than the spacing between triggers, the hardware won't reset until all the samples are collected, which means the trigger won't get rearmed.
Try reducing the number of samples collected if you need to trigger on every impulse.
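As a quick back-of-the-envelope check, compare the acquisition window to the impulse spacing; all three numbers in this Python sketch are placeholders, not values from your setup.

```python
# All three values below are placeholders -- substitute the scope's actual
# sample rate, record length, and the impulse repetition interval.
sample_rate = 1e9            # samples per second (assumed)
record_length = 250e3        # samples per acquisition (assumed)
impulse_interval = 100e-6    # seconds between impulses (assumed)

acquisition_time = record_length / sample_rate    # 250 us with these numbers
if acquisition_time > impulse_interval:
    print("Acquisition outlasts the impulse spacing: triggers will be missed")
else:
    print("Record fits between impulses: the trigger can re-arm in time")
```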
Also, seeing your code might help.
Related
I am a new LabVIEW user trying to implement a controller in real time. As a first step, I started with an analog input/output exercise: apply an input to a system, read the data, and feed it back through an analog output channel. However, I noticed a significant delay of about 1 ms between input and output. I then tried the simplest possible exercise: generate an input signal, read it through LabVIEW, and write it straight back out, so the task involves only the ADC and DAC. It still shows the same delay. I was under the impression that hardware-timed reads and writes would reduce the delay, but there was no change.
Could you help me with this issue? Any advice would be really helpful.
What you're seeing might be the result of the while loop method you're using.
The while loop picks up data from the AI once per iteration and then sends it to the AO. This ends up being done in batches (maybe with a batch size of 1, but batches nonetheless), and the delay you're seeing can come from that batching.
If you need more control over timing, LabVIEW does provide it. Read up on the different clocks, sample clocks and triggers for example. They allow you to preload output signals into a DAQ buffer and then output the signals point by point at specific, coordinated moments.
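As a rough illustration of the hardware-timed idea, expressed in the Python nidaqmx API rather than LabVIEW G (the device names, channels and rates are assumptions, not values from your setup): slave both tasks to one sample clock so the AO points go out at fixed instants instead of whenever the loop gets around to them.

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

SAMPLE_RATE = 1000      # Hz -- assumed, not from the original post
N_SAMPLES = 1000        # one second of preloaded output -- assumed

with nidaqmx.Task() as ao_task, nidaqmx.Task() as ai_task:
    ai_task.ai_channels.add_ai_voltage_chan("Dev1/ai0")   # device/channel names assumed
    ao_task.ao_channels.add_ao_voltage_chan("Dev1/ao0")

    # Drive both tasks from the AI sample clock so AI and AO samples line up in hardware.
    ai_task.timing.cfg_samp_clk_timing(SAMPLE_RATE,
                                       sample_mode=AcquisitionType.FINITE,
                                       samps_per_chan=N_SAMPLES)
    ao_task.timing.cfg_samp_clk_timing(SAMPLE_RATE,
                                       source="/Dev1/ai/SampleClock",
                                       sample_mode=AcquisitionType.FINITE,
                                       samps_per_chan=N_SAMPLES)

    waveform = [0.0] * N_SAMPLES                 # preload whatever output you need
    ao_task.write(waveform, auto_start=False)

    ao_task.start()                              # AO arms and waits for the shared clock
    ai_task.start()                              # starting AI starts the clock for both
    data = ai_task.read(number_of_samples_per_channel=N_SAMPLES)
```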
I've inherited a LabVIEW "circuit" that integrates G's to output IPS. The problem is that, at full speed, the numbers in the output text window (a double) scroll so fast you can't read them. I only need to see the largest number detected. I'm not too well versed in LabVIEW. Can anyone help me with a function that will display the largest number output to the text window over a duration of 1/2 second? I'm basically looking for a peak detect-and-hold function. I'd prefer to work with the double-precision value that is constantly updated, if possible, rather than the array feeding my integrator. I looked through the Functions > Signal Processing palette and saw one peak detector, but I'm not sure that's the right utility.
Thanks!
It's easier to use the Array Max & Min PtByPt.vi, which can be found in the Signal Processing > Point By Point palette. Below is a VI snippet showing how it works.
It will update the maximum value every 10 points. A waveform chart showing the values is also attached.
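For reference, the logic inside that VI is roughly a rolling maximum over the most recent points; here is an approximate Python sketch of the same idea (the window length of 10 matches the snippet above; pick whatever number of iterations covers your half second).

```python
from collections import deque

# A rolling-window maximum, approximating Array Max & Min PtByPt with a
# sample length of 10: keep the most recent points and report their max.
class PeakHoldPtByPt:
    def __init__(self, sample_length=10):
        self.buffer = deque(maxlen=sample_length)

    def update(self, value):
        self.buffer.append(value)        # oldest point falls out automatically
        return max(self.buffer)          # largest value seen in the window

# Feed it the scalar that currently scrolls past in the text window.
peak = PeakHoldPtByPt(sample_length=10)
for reading in [0.2, 1.7, 0.9, 3.4, 0.1]:
    print(peak.update(reading))          # 0.2, 1.7, 1.7, 3.4, 3.4
```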
We're going to be using GNU Radio to stream in data from a radio peripheral. In addition, we have another peripheral in the system that we control programmatically; I have a basic C program that does the control.
I'd like to be able to implement this in GNU Radio, but I don't know the best way to do it. I've seen that you can make blocks, so I was thinking I could make a sink block, feed a constant into it, and have the constant's value set by some control like a WX slider.
It would remove a needless part if I could drop the constant block and assign the variable tied to the WX slider directly to the control block, but then the block would have no input. Can you make a block with no inputs or outputs that just runs some program or subroutine?
Also, when doing a basic test to see if this was feasible, I used a slider to a constant source to a WX scope plot. There seems to be a lag or delay between putting in an option and seeing the result show up on the plot. Is there a more efficient way to do this that will reduce that lag? Or is the lag just because my computer is slow?
It would remove a needless part if I could drop the constant block and assign the variable tied to the WX slider directly to the control block, but then the block would have no input. Can you make a block with no inputs or outputs that just runs some program or subroutine?
Yes, that will work. In fact, you can put any sort of Python code in a GRC block's XML file, and if you set up the properties and setter code properly, you'll get what you want. It doesn't even have to create any GNU Radio blocks per se.
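As a hedged sketch of one way this can look, the control "block" can be a plain Python class whose setter GRC calls whenever the slider variable changes (the block's XML would declare `set_value($value)` as the callback). The name `send_to_peripheral` is only a stand-in for your existing C control code, for example wrapped with ctypes; it is not a real API.

```python
# A plain Python "block" with no inputs or outputs: GRC calls set_value()
# whenever the slider variable changes (declare set_value($value) as the
# callback in the block's XML). send_to_peripheral is only a stand-in for
# the existing C control program (e.g. wrapped with ctypes), not a real API.
class peripheral_ctrl(object):
    def __init__(self, initial_value=0.0):
        self.set_value(initial_value)

    def set_value(self, value):
        self.value = value
        self.send_to_peripheral(value)

    def send_to_peripheral(self, value):
        # Placeholder: call into the C control code here.
        print("would set peripheral to", value)
```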
Also, when doing a basic test to see if this was feasible, I used a slider to a constant source to a WX scope plot. There seems to be a lag or delay between putting in an option and seeing the result show up on the plot.
GNU Radio is not optimized for minimum latency but for efficient bulk processing. You're seeing the buffering between the source and the sink: whenever a source computes values rather than being tied to a hardware clock, the buffers downstream of it will always be nearly full, and you'll get this lag.
In the advanced options there are settings to tune the buffer size, but they will only help so much.
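For example, in a Python flow graph you can cap a block's output buffer before the graph is started; the block choices and the 512-item limit below are purely illustrative.

```python
from gnuradio import gr, analog, blocks

# Illustrative only: a computed (non-hardware) source feeding a throttle and a sink.
# Capping the source's output buffer limits how much stale data can queue up
# ahead of the display. 512 items is an arbitrary example value.
samp_rate = 32000

tb = gr.top_block()
src = analog.sig_source_f(samp_rate, analog.GR_CONST_WAVE, 0, 0, 1.0)
thr = blocks.throttle(gr.sizeof_float, samp_rate)
snk = blocks.null_sink(gr.sizeof_float)   # stand-in for the WX scope sink

src.set_max_output_buffer(512)            # must be set before the graph starts

tb.connect(src, thr, snk)
# tb.start(); tb.wait()                   # run as usual once wired into your graph
```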
I would guess you need a throttle block in your flow graph, or the sampling rate between blocks is incorrect.
It's almost impossible to help you unless you post your GRC file or an image of it.
I've been using OxyPlot for a month now and I'm pretty happy with what it delivers. I'm getting data from an oscilloscope and, after some quick processing, I'm plotting it to a graph in real time.
However, if I compare my application's CPU usage to that of the application provided by the oscilloscope manufacturer, I'm loading the CPU a lot more. Maybe they're using some GPU-based plotter, but I think I can reduce my CPU usage with some modifications.
I'm capturing 10,000 samples per second and adding them to a LineSeries. I'm not plotting all of that data; I'm decimating it to a constant number of points, say 80 points for a 20-second measurement, so I have 4 points/sec when fully zoomed out and a bit more detail if I zoom in to a specific range.
With the aid of ReSharper, I've noticed that the application calls the IsValidPoint method a huge number of times (something like 400,000,000 calls across my 6 different plots), which takes a lot of time.
I think the problem is that, when I add new points to the series, it checks every point for validity instead of only the newly added values.
It also spends a lot of time in the MeasureText/DrawText methods.
My question is: is there a way to override those methods and adapt them to my needs? I'm adding 10,000 new values each second, but the earlier ones don't change, so there's no need to re-validate them, and the text shown doesn't change either.
Thank you in advance for any advice you can give me. Have a good day!
Looking for some help with a LabVIEW data collection program. I collect 2 ms of data at 8 kHz (16 data points) per channel, on 4 analog channels of a National Instruments data acquisition board. The DAQmx collection task gives a 1D array of 4 waveforms.
If I don't display the data, all my computation takes about 2 ms, and it is OK if the processing loop lags a little behind the collection loop. Updating the chart on LabVIEW's front panel, however, introduces an unacceptable delay. We don't need to update the display very quickly; 5-10 Hz would probably be sufficient, but I don't know how to set this up.
My current LabVIEW VI has three parallel loops:
A timed-loop for data collection
A loop for analysis and processing
A low priority loop for caching data to disk as a TDMS file
Data is passed from the collection loop to the other loops using queues. LabVIEW examples gave me some ideas, but I am stuck.
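In rough Python-style pseudocode (read_daq, process, and append_to_tdms are placeholders for the DAQmx read, my analysis code, and the TDMS write, not real APIs), the structure looks like this:

```python
import queue
import threading
import time

# Placeholder stand-ins for the real pieces: the DAQmx read, the analysis
# code, and the TDMS write. None of these are real APIs.
def read_daq():
    return [[0.0] * 16 for _ in range(4)]    # 4 channels x 16 samples per read

def process(block):
    pass                                     # ~2 ms of computation goes here

def append_to_tdms(block):
    pass                                     # low-priority file write

data_q = queue.Queue()    # collection -> processing
disk_q = queue.Queue()    # collection -> disk cache

def collection_loop():
    while True:
        block = read_daq()                   # the timed loop in the real VI
        data_q.put(block)
        disk_q.put(block)
        time.sleep(0.002)                    # one 2 ms acquisition per iteration

def processing_loop():
    while True:
        process(data_q.get())                # blocks until data is queued

def disk_loop():
    while True:
        append_to_tdms(disk_q.get())

for loop in (collection_loop, processing_loop, disk_loop):
    threading.Thread(target=loop, daemon=True).start()
```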
Any suggestions, references, ideas would be appreciated.
Thanks
Azim
Follow Up Question
eaolson suggests that I re-sample the data for display purposes. The data coming from the DAQmx read is a one-dimensional array of waveforms, so I would need to somehow build or concatenate the waveform data for each channel and then re-sample it before updating the front panel chart. I suppose the best approach would be to queue the data, then in a display loop dequeue it, build the arrays, re-sample the data based on screen resolution, and update the chart. Would there be any other approach? I will look on the [NI LabVIEW Forum](http://forums.ni.com/ni/board?board.id=170) for more information, as suggested by eaolson.
Updates
Changed the acceptable update rate for graphs to 5-10 Hz (thanks Underflow and eaolson)
Made the disk cache loop a low-priority one (thanks eaolson)
Thanks for all the responses.
Your overall architecture description sounds solid, but... getting to 30 Hz for any non-trivial graph is going to be challenging. Make sure you really need that rate before trying to make it happen. Optimizing to that level might take some time.
References that should be helpful:
You can defer panel updates. This keeps the front panel from refreshing until you're ready for it to do so, allowing you to buffer data in the background, and only draw it occasionally.
You should know about (a)synchronous display. This option allows some control over display rates.
There is some general advice available about speeding execution.
There is a (somewhat dated) report on execution speed on the LAVA forums. Googling around the LAVA forums is a great idea if you need to optimize your speed.
Television updates at about 30 Hz, and much more than that is faster than the eye can follow. 30 Hz should be the maximum update rate you consider for a display, not the starting point. Consider an update rate of 5-10 Hz.
LabVIEW charts append the most recent data to the historical data they store and display all of it at once. At 8 kHz, you're acquiring at least 8000 data points per channel per second, which means the array backing that chart has to be resized continuously to hold the new data. Also, even if your graph is 1000 pixels across, a single second of data already gives you 8 data points per screen pixel, and there's usually no reason to display more than one data point per pixel. If you really need fast update rates, plot less data: keep the historical data in an array of your own and plot only every Nth data point, where N is chosen so you're plotting, say, only a few hundred points.
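In text form, the decimation step is just this (a Python sketch with made-up names, since the real thing would be a small piece of G in the display loop):

```python
# MAX_PLOT_POINTS and the history list are illustrative values only.
MAX_PLOT_POINTS = 300

def decimate_for_display(history):
    """Return at most ~MAX_PLOT_POINTS evenly spaced samples from history."""
    n = max(1, len(history) // MAX_PLOT_POINTS)
    return history[::n]

history = [0.0] * 160000                 # e.g. 20 s of one channel at 8 kHz
plot_data = decimate_for_display(history)
print(len(plot_data))                    # ~300 points instead of 160000
```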
Remember that your loops can run at different rates. It may be satisfactory to run the write-to-disk loop at a much lower frequency than the data collection rate, maybe every couple of seconds.
Avoid property nodes if you can. They run in the UI thread, which is slower than most other execution.
Other than that, it's really hard to offer a lot of substantial advice without seeing code or more specifics. Consider also asking your question at the NI LabVIEW forums. There are a lot of helpful people there.