I need to know the speed of the waveform charts in LabVIEW.
The program generates two waveforms shifted by 90 degrees, and I need to write a program to find the speed of both.
Neither waveform is "generated first". Every iteration of the loop will result in a different true/false value being placed onto the chart. On some iterations, the top one will update first; on other iterations, the bottom one will update first.
What you are seeing in the charts is NOT a coherent waveform. It is just a series of values that you have chosen to plot. There's no time data associated with this, just the values and an iteration count. The iteration counter is the clock of this algorithm, so, in that sense, both waveforms are generated at exactly the same rate at exactly the same time. (See below for comments about the Timed Loop.)
I doubt that this answers the question you think you are asking. You seem to want to know some information computed from these series of true/false values, but the terminology that you've used is not meaningful, and I cannot determine what information it is that you actually want.
I said earlier that the only clock for this algorithm is the iteration counter of the loop. You used a Timed Loop with a dt of 1. Are you on Windows? If so, then my statement is correct: the Timed Loop on Windows is only a simulation without any guarantee of timing, so you might as well be using a regular While Loop. If you are on a real-time OS with the LabVIEW Real-Time module, then this is generating a point every 1 millisecond; the iteration rate is tied to the computer's clock, so the speed of both waveforms is one point per millisecond.
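To make that concrete, here is a rough plain-Python sketch (not LabVIEW, and the period of 20 iterations is just an assumed example) of what the loop is doing: each iteration produces exactly one Boolean point for each chart, so the iteration counter is the only clock either waveform has.

```python
import math

def two_square_waves(n_iterations, period=20):
    """One loop iteration yields one point on each chart; the iteration
    counter i is the only 'clock' the two waveforms share."""
    chart_a, chart_b = [], []
    for i in range(n_iterations):
        # First waveform: a Boolean square wave derived from the iteration count.
        chart_a.append(math.sin(2 * math.pi * i / period) >= 0)
        # Second waveform: the same square wave shifted by 90 degrees.
        chart_b.append(math.sin(2 * math.pi * i / period + math.pi / 2) >= 0)
    return chart_a, chart_b

a, b = two_square_waves(100)  # both lists are filled at exactly the same rate
```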
I am a new LabVIEW user. I am trying to implement a controller in real time using LabVIEW. For that purpose, I started doing an analog input/output exercise. As part of the learning process, I was trying to apply an input to a system, get the data, and feed it back through an analog channel. However, I noticed there is a significant delay between input and output; it was about 1 ms. Then I thought of doing the simplest exercise: I generated an input signal, read it through LabVIEW, and fed it back out again. So basically it is a task for the ADC and DAC only. But it still has the same amount of delay. I was under the impression that if I did hardware-timed data read and write, it would reduce the delay, but still, no change.
Could you help me out with this issue? Any kind of advice would be really helpful.
What you're seeing might be the result of the while loop method you're using.
The while loop picks up data from the AI once every time it goes around and then sends that to the AO. This ends up being done in batches (maybe with a batch size of 1, but still) and you could be seeing a delay caused by that conversion of batches.
If you need more control over timing, LabVIEW does provide that. Read up on the different clocks (sample clocks and trigger clocks, for example). They allow you to preload output signals into an NI-DAQ buffer and then output the signals point by point at specific, coordinated moments.
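For a concrete picture of hardware-timed, buffered I/O, here is a minimal sketch using NI's nidaqmx Python package rather than LabVIEW G (the DAQmx concepts map one-to-one); the device and channel names, rates, and buffer sizes are assumptions for illustration only, not values from the question.

```python
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

SAMPLE_RATE = 10_000   # Hz (assumed)
N_SAMPLES = 1_000      # one buffer of output (assumed)

# The waveform is preloaded into the device buffer and clocked out point by
# point by the board's sample clock, not by a while loop on the host.
waveform = np.sin(2 * np.pi * 50 * np.arange(N_SAMPLES) / SAMPLE_RATE)

with nidaqmx.Task() as ao, nidaqmx.Task() as ai:
    ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")

    # Hardware-timed generation.
    ao.timing.cfg_samp_clk_timing(SAMPLE_RATE,
                                  sample_mode=AcquisitionType.FINITE,
                                  samps_per_chan=N_SAMPLES)
    # Slave the input task to the output sample clock so the read and the
    # write happen at coordinated moments.
    ai.timing.cfg_samp_clk_timing(SAMPLE_RATE,
                                  source="/Dev1/ao/SampleClock",
                                  sample_mode=AcquisitionType.FINITE,
                                  samps_per_chan=N_SAMPLES)

    ao.write(waveform, auto_start=False)
    ai.start()   # AI arms and waits for clock edges
    ao.start()   # starting AO starts the shared clock
    data = ai.read(number_of_samples_per_channel=N_SAMPLES)
```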
In GNU Radio I am trying to use the frequency of one signal to generate another signal of a different frequency. Here is the flow diagram that I am using:
I generate a 50 kHz signal with a signal source block and feed this into a Log Power FFT block. I used the Argmax block to find the FFT bin with the most power and multiply that with a constant. I want to use this result as the input to the complex vco block to generate another signal with a different frequency. All vectors have a length of 4096.
However, looking at the output of the complex QT Gui Time Sink block, the output of the vco is always zero. This is strange to me because using a float QT Gui Time Sink to look at the output of the multiply block (which is also going to the input of the vco block), the result is 50,000 as expected. Why am I only getting zero out of the vco?
Also, my sample rate is set to 1M. I am assuming because of the vector length of 4096 that the sample rate out of the Argmax block will be 1M/4096 = 244. Is this correct?
I am running GNU Radio Companion on Windows 10.
The proposed solution is not a solution. Please don't abuse the signal probe, which is really just that: a probe, for slow debugging or purely visual purposes. Every time I use it myself, I see how architecturally bad it is, and I personally think the project should remove it from the block library altogether.
Now, instead of just saying "the probe is bad, do something else", let's analyse where your flow graph falls short:
Your frequency estimation depends on the argmax of a block that was meant for pure visualization purposes. No, the output rate is not (sampling rate / FFT length); the output rate is roughly the frame rate (and not even exactly that: the block is terrible and mixes "sample times" with "wall clock times"). Don't do that. If you need something like that, use the FFT block followed by "complex to magnitude squared". You don't even want the logarithm; you're just looking for a maximum.
Instead of looking for a maximum absolute value in an FFT, which is inherently a quantizing frequency estimator (see the quick numpy sketch after this list), use something that actually gives you an oscillation. There are multiple ways you can do that with a PLL!
Your VCO solution probably does what it's programmed to do. You're just using an inadequate sensitivity!
The sampling rate you assume in your time sinks is totally off, which is probably why you have the impression of a constant output – it just changes so slowly that you'll not notice.
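As a quick aside on the "quantizing" point, a plain numpy sketch (not GNU Radio, just an illustration) shows that with a 4096-point FFT at 1 MS/s the argmax estimate can only move in steps of samp_rate/N, about 244 Hz:

```python
import numpy as np

samp_rate, N = 1e6, 4096
t = np.arange(N) / samp_rate
for f_true in (50e3, 50.1e3):                      # two tones 100 Hz apart
    x = np.exp(2j * np.pi * f_true * t)
    peak_bin = np.argmax(np.abs(np.fft.fft(x)) ** 2)
    print(f_true, "->", peak_bin * samp_rate / N)  # both land on the same bin
```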
So, instead, I propose one of the following (either/or):
Use the PLL Freq Det. Feed the output of that into the VCO (see the sketch at the end of this answer). Don't scale with a constant, but simply apply the proper sensitivity. Sensitivity is the factor between "input amplitude" and "phase advance per sample on the output in radians".
Use the PLL Carrier recovery. Use a resampler, or some other mathematical method, to generate the new frequency. You haven't told us how that other frequency relates to the input frequency, so I can't give you concrete advice.
Also notice that this very much suggests this is a case of "I'm trying to recreate an analog approach in digital"; that might be a good approach, but in many cases it's not.
If I may be so brazen: describe why you need to generate that other frequency, and for which purpose, in a post on https://dsp.stackexchange.com or to the GNU Radio mailing list, discuss-gnuradio@gnu.org (sign up here). This is really only barely a programming problem; it is really a signal processing problem. And there are a lot of people out there eager to help you find an appropriate solution that actually tackles your problem!
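For the first option (PLL Freq Det feeding the VCO), a minimal Python flow graph might look roughly like the sketch below. The loop bandwidth and frequency limits are placeholder guesses, and the sensitivity is set to samp_rate on the assumption that pll_freqdet_cf outputs radians per sample while vco_c interprets sensitivity as radians per second per unit input; verify both against your GNU Radio version's documentation.

```python
import math
import time
from gnuradio import gr, blocks, analog

class pll_vco_example(gr.top_block):
    def __init__(self, samp_rate=1e6, f_in=50e3):
        gr.top_block.__init__(self)
        # Test source: the 50 kHz tone whose frequency we want to track.
        src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, f_in, 1.0)
        throttle = blocks.throttle(gr.sizeof_gr_complex, samp_rate)
        # PLL frequency detector: outputs the instantaneous frequency,
        # nominally in radians per sample (loop bw and limits are guesses).
        w = 2 * math.pi * f_in / samp_rate
        pll = analog.pll_freqdet_cf(0.04, 2 * w, 0.5 * w)
        # VCO: re-synthesizes an oscillation from that frequency estimate.
        # sensitivity = samp_rate assumes rad/s-per-unit semantics; if your
        # version treats sensitivity as rad/sample per unit, use 1.0 instead.
        vco = blocks.vco_c(samp_rate, samp_rate, 1.0)
        sink = blocks.null_sink(gr.sizeof_gr_complex)
        self.connect(src, throttle, pll, vco, sink)

if __name__ == "__main__":
    tb = pll_vco_example()
    tb.start()
    time.sleep(1.0)
    tb.stop()
    tb.wait()
```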
It looks like a better solution was to probe the output of the multiplier using a probe signal block along with a Function Probe Block to create a new variable. This variable could then be used as the frequency value in a separate Signal Source Block that is used to generate the new signal. This flow diagram seems to satisfy the original intended purpose: new flow diagram
I've inherited a LabVIEW "circuit" that integrates g's to output IPS. The problem is that the output text window (a double), at full speed, has numbers scrolling so fast you can't read them. I only need to see the largest number detected. I'm not too well versed in LabVIEW. Can anyone help me with a function that will display the largest number output to the text window for a duration of 1/2 second? I'm basically looking for a peak detect-and-hold function. I'd prefer to work with the double-precision value that is constantly updated, if possible, rather than the array feeding my integrator. I tried looking through the Functions > Signal Processing menu and saw one peak detector, but I'm not sure that's the right utility.
Thanks!
It is easier to use the Array Max & Min PtByPt.vi; this can be found in the Signal Processing, Point By Point palette. Below is a VI snippet showing how it works.
It will update the maximum value every 10 points. Also attached is a waveform chart that shows the values.
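The point-by-point VI is graphical, but its peak-hold idea is easy to sketch in plain Python; this is only an approximation of the VI's behaviour (it holds the maximum of the last n points fed in one at a time), and choosing n as roughly half a second's worth of samples is an assumption, since the actual sample rate isn't given.

```python
from collections import deque

class SlidingMax:
    """Hold the maximum of the last n points, fed in one at a time."""
    def __init__(self, n=10):
        self.window = deque(maxlen=n)   # keeps only the most recent n values

    def update(self, x):
        self.window.append(x)
        return max(self.window)         # the peak to show on the display

# Feed the constantly updated double point by point; with samples arriving
# at rate fs, n = fs // 2 would hold the peak over roughly half a second.
pk = SlidingMax(n=10)
for value in (0.1, 3.2, 1.5, 0.7, 2.9, 0.2, 0.4, 5.1, 0.3, 1.0, 0.2):
    display_value = pk.update(value)
```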
I've been using OxyPlot for a month now and I'm pretty happy with what it delivers. I'm getting data from an oscilloscope and, after some fast processing, I'm plotting it in real time to a graph.
However, if I compare my application's CPU usage to that of the application provided by the oscilloscope manufacturer, I'm loading the CPU a lot more. Maybe they're using some GPU-based plotter, but I think I can reduce my CPU usage with some modifications.
I'm capturing 10,000 samples per second and adding them to a LineSeries. I'm not plotting all that data; I'm decimating it to a constant number of points, say 80 points for a 20-second measurement, so I have 4 points/sec while totally zoomed out and a bit more detail if I zoom in to a specific range.
With the aid of ReSharper, I've noticed that the application (I have 6 different plots) is calling the IsValidPoint method a huge number of times (something like 400,000,000 calls), which is taking a lot of time.
I think the problem is that, when I add new points to the series, it checks for every point if it is a valid point, instead of the added values only.
Also, it spends a lot of time in the MeasureText/DrawText method.
My question is: is there a way to override those methods and adapt them to my needs? I'm adding 10,000 new values each second, but the first ones remain the same, so there's no need to re-validate them. Also, the text shown doesn't change.
Thank you in advance for any advice you can give me. Have a good day!
Looking for some help with a LabVIEW data collection program. I collect 2 ms of data at 8 kHz (which gives 16 data points) per channel; I am collecting data on 4 analog channels with a National Instruments data acquisition board. The DAQmx collection task gives a 1D array of 4 waveforms.
If I don't display the data, my total computation time is about 2 ms, and it is OK if the processing loop lags a little behind the collection loop. Updating the chart on LabVIEW's front panel introduces an unacceptable delay. We don't need to update the display very quickly; 5-10 Hz would probably be sufficient. But I don't know how to set this up.
My current LabVIEW VI has three parallel loops:
A timed-loop for data collection
A loop for analysis and processing
A low priority loop for caching data to disk as a TDMS file
Data is passed from the collection loop to the other loops using a queue. LabVIEW examples gave me some ideas, but I am stuck.
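To show the shape of the architecture in text form, here is a hedged Python-threads sketch of the producer/consumer idea; the queue stands in for the LabVIEW queue, the block size and rates come from the question, the display period of 0.2 s corresponds to a 5 Hz update, and the processing and disk loops are omitted to keep it short.

```python
import queue
import threading
import time

SAMPLE_RATE = 8000        # Hz, from the question
BLOCK = 16                # 2 ms of data per channel
N_CHANNELS = 4
DISPLAY_PERIOD = 0.2      # redraw the "chart" at 5 Hz, not once per block

data_q = queue.Queue()    # collection loop -> display loop

def collection_loop(stop):
    """Stands in for the timed DAQ loop: push one block of 4 waveforms."""
    while not stop.is_set():
        block = [[0.0] * BLOCK for _ in range(N_CHANNELS)]  # fake DAQmx read
        data_q.put(block)
        time.sleep(BLOCK / SAMPLE_RATE)                     # 2 ms cadence

def display_loop(stop):
    """Drain everything queued since the last redraw, then update once."""
    history = [[] for _ in range(N_CHANNELS)]
    while not stop.is_set():
        time.sleep(DISPLAY_PERIOD)
        while not data_q.empty():
            block = data_q.get_nowait()
            for ch, samples in enumerate(block):
                history[ch].extend(samples)
        # here you would decimate `history` and write it to the chart indicator

stop = threading.Event()
threads = [threading.Thread(target=collection_loop, args=(stop,)),
           threading.Thread(target=display_loop, args=(stop,))]
for t in threads:
    t.start()
time.sleep(1.0)
stop.set()
for t in threads:
    t.join()
```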
Any suggestions, references, ideas would be appreciated.
Thanks
Azim
Follow Up Question
eaolson suggests that I re-sample the data for display purposes. The data coming from the DAQmx read is a one-dimensional array of waveforms, so I would need to somehow build or concatenate the waveform data for each channel and then re-sample it before updating the front panel chart. I suppose the best approach would be to queue the data and, in a display loop, dequeue it, build the per-channel arrays, re-sample the data based on the screen resolution, and then update the chart. Would there be any other approach? I will look on
the NI LabVIEW Forum (http://forums.ni.com/ni/board?board.id=170) for more information, as suggested by eaolson.
Updates
changed acceptable update rate for graphs to 5-10Hz (thanks Underflow and eaolson)
disk cache loop is a low priority one (thanks eaolson)
Thanks for all the responses.
Your overall architecture description sounds solid, but... getting to 30Hz for any non-trivial graph is going to be challenging. Make sure you really need that rate before trying to make it happen. Optimizing to that level might take some time.
References that should be helpful:
You can defer panel updates. This keeps the front panel from refreshing until you're ready for it to do so, allowing you to buffer data in the background, and only draw it occasionally.
You should know about (a)synchronous display. This option allows some control over display rates.
There is some general advice available about speeding execution.
There is a (somewhat dated) report on execution speed on the LAVA forums. Googling around the LAVA forums is a great idea if you need to optimize your speed.
Television updates at about 30 Hz. Any more than that is faster than the human eye can see. 30 Hz should be the maximum update rate you consider for a display, not the starting point. Consider an update rate of 5-10 Hz.
LabVIEW charts append the most recent data to the historical data they store and display all the data at once. At 8 kHz, you're acquiring at least 8000 data points per channel per second. That means the array backing that graph has to continuously be resized to hold the new data. Also, even if your graph is 1000 pixels across, that means you're displaying 8 data points per screen pixel. There's not usually any reason to display any more than one data point per pixel. If you really need fast update rates, plot less data. Create an array to hold the historical data and plot only every Nth data point, where N is chosen so you're plotting, say, only a few hundred points.
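A hedged sketch of the "plot every Nth point" idea in Python (the target of roughly 500 plotted points is an arbitrary choice, not something from the answer):

```python
def decimate_for_display(history, target_points=500):
    """Keep the chart to about target_points by plotting every Nth sample.
    history is the full per-channel record; only the returned pairs are
    actually sent to the graph."""
    n = max(1, len(history) // target_points)
    xs = list(range(0, len(history), n))     # sample indices that survive
    ys = [history[i] for i in xs]
    return xs, ys

# Example: 8 seconds of one channel at 8 kHz is 64,000 stored points,
# but only about 500 of them are drawn.
full_record = [float(i % 100) for i in range(64_000)]
x_plot, y_plot = decimate_for_display(full_record)
```

If peaks matter, plotting the minimum and maximum of each bucket instead of a single sample keeps short spikes visible.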
Remember that your loops can run at different rates. It may be satisfactory to run the write-to-disk loop at a much lower frequency than the data collection rate, maybe every couple of seconds.
Avoid property nodes if you can. They run in the UI thread, which is slower than most other execution.
Other than that, it's really hard to offer a lot of substantial advice without seeing code or more specifics. Consider also asking your question at the NI LabVIEW forums. There are a lot of helpful people there.