LED control of multichannel voltage output in LabVIEW - loop ends with LED on all the time

I have a four-channel LED system where I'd like to control LED intensity and time-on for each LED separately, with a specific number of iterations of this stimulus (see diagram below).
We're using an AO module on a cDAQ system to control each individual LED, with zero-frequency sine waves to set up the LED voltages (snippets attached); each LED also has a specific 0 V pre- and post-stimulus time. The LEDs need to share the same timing so that each waveform executes at the same time, and the stimulus needs to be repeated several times (controlled from the front panel and a loop). I've set up a subVI that ensures all the waveforms are of equal length (MakeWavelengthEqual(SubVI)), as well as one subVI that generates a digital output for the duration of the LED on-time (TrigOutGen(SubVI)) to trigger our data acquisition equipment.
The VI creates these waveforms and outputs them to each individual channel, while also repeating them. However, I keep having an issue where the LED outputs don't quite match the waveforms I've set up, and the final stimulus repetition leaves the LED on even after the VI completes its run. See VI below:
I'm new to LabVIEW, so troubleshooting these issues is very difficult for me. I get no errors, the waveforms look correct when plotted, and the task is set up to be cleared outside of the loop, so I'm not sure where the problem with the final output occurs. I'd really appreciate any help anyone can provide.

Related

How can I have Variable Frequency PWM with STM32?

I am working on an LLC converter project, so I need PWM signals with variable frequency. I mean I need to change the frequency in real time, for example frequency modulation between 40 kHz and 80 kHz. Can anyone give me an idea which timer mode I have to use? Thanks.
It's a little tricky to answer your question when you don't state the exact hardware you're working with. Seeing your tags, I will assume it's a member of the STM32 family.
STM32 standard timers have registers you usually don't need to interface with directly; HAL does that for you. However, as far as I am aware, HAL does not support this functionality. The standard STM32 timer has a TIMx_ARR and a TIMx_CCRn register. These hold some of the configuration necessary for PWM generation. You should be able to change your frequency by adjusting the ARR register and the duty cycle by adjusting the CCRn register.
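As a minimal sketch (assuming a CMSIS-style device header, TIM3 channel 1, and a prescaler already configured to 0 -- all of these are assumptions, adapt them to your part), changing frequency while keeping ~50% duty looks like this:

#include "stm32f4xx.h"   /* assumed device header -- use the one for your part */

/* Change PWM frequency while holding ~50% duty.
   f_pwm = timer_clk / ((PSC + 1) * (ARR + 1)); PSC assumed 0 here. */
void pwm_set_frequency(uint32_t timer_clk_hz, uint32_t f_pwm_hz)
{
    uint32_t arr = timer_clk_hz / f_pwm_hz - 1U;
    TIM3->ARR  = arr;                /* TIMx_ARR sets the period */
    TIM3->CCR1 = (arr + 1U) / 2U;    /* TIMx_CCR1 sets the duty  */
}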
Be careful with that approach, though, as it usually has no built-in protection. You will not damage your device, but it is very easy to produce unintended behavior.
You also need to consider the prescaler values and the general configuration of your timer.
For detailed information, refer to the general-purpose timer (GPTIM) chapter in the reference manual of your device, as I cannot give you a more detailed description with the little information you have provided.
As far as I understood from your question and follow-up comments, you want a constant duty cycle (~50%), but variable frequency as well as a phase shift. That is totally doable, and you can change the values on the fly, but for the phase shift I would suggest using two timers: one master, one slave.
Idea:
The master controls the phase shift. The period of the master is equal to the period of the final waveform. It counts from 0 to its ARR, and somewhere in between sits the phase shift value in the compare register, which flips the output of the master from LOW to HIGH on its way from 0 to ARR.
The slave is activated by the master's output change from LOW to HIGH and runs for one period, which is equal to the period of the master (ARR). It outputs PWM on some pin. Once it reaches ARR, it stops (only for the master to start it again). Obviously, you need to adjust the compare register of the PWM output to keep the duty cycle constant.
I made a crude illustration of what I mean, because discussing timers with text only can be a little (very) tricky. Paint skills 10/10:
How to adjust stuff:
Adjust the frequency (period length) by changing the ARR of both timers (it's always the same); if you want to keep the duty cycle, you will need to immediately adjust the compare value of the slave timer to ARR/2 (for ~50% duty cycle). Make sure the compare value of the phase-shifting master stays below ARR if you reduce ARR, otherwise the slave will never get triggered.
Adjust the phase shift by changing the compare value of the master timer between 0 and ARR (a sketch of both updates follows this list).
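A minimal sketch of that on-the-fly update, assuming TIM2 as the phase-shifting master and TIM3 as the slave (the same hypothetical pairing as in the configuration sketch further down), working in raw timer ticks:

#include "stm32f4xx.h"   /* assumed device header */

/* Update period and phase on the fly. With ARR preload (ARPE) enabled,
   the new values take effect at the next update event, avoiding jitter. */
void pwm_pair_update(uint32_t period_ticks, uint32_t phase_ticks)
{
    if (phase_ticks >= period_ticks)       /* guard: compare must stay  */
        phase_ticks = period_ticks - 1U;   /* below ARR or no trigger   */
    TIM2->ARR  = period_ticks - 1U;        /* master period             */
    TIM2->CCR1 = phase_ticks;              /* phase shift               */
    TIM3->ARR  = period_ticks - 1U;        /* slave period (same)       */
    TIM3->CCR1 = period_ticks / 2U;        /* ~50% duty                 */
}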
Additional notes:
The master timer is configured to output TRGO (trigger output, a master feature) on the switch from LOW to HIGH of its compare output.
The slave timer is in one pulse mode (OPM), meaning it disables itself after a single period. It will be reactivated by the next master's phase shift pulse (compare HIGH).
The master's signal is supposed to reset and activate the slave timer (it resets CNT); there is a list of slave modes describing what the TRGI trigger input does to the slave timer. Resetting the timer will load new values into ARR (see next point).
Both master and slave have ARR buffer enabled. This will allow you to change ARR values, but the changes take effect only when the current cycle ends. This will prevent jitter while changing period length and/or phase shift.
The slave timer is in PWM1 or PWM2 mode, depending on whether you want the first part of the output waveform to be LOW or HIGH; that's the only difference. A register-level sketch pulling these notes together follows.
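This is only a sketch, not a drop-in driver: it assumes an STM32F4-class part with CMSIS headers, TIM2 as the master reaching TIM3's ITR1 trigger input, and channel 1 on both timers. Internal trigger wiring and the availability of a combined reset+trigger slave mode vary by device, so check your reference manual.

#include "stm32f4xx.h"   /* assumed device header */

/* Assumed pairing: TIM2 = master (phase shifter), TIM3 = slave (PWM).
   On STM32F4 parts, TIM3's ITR1 input is wired to TIM2's TRGO. */
void pwm_pair_init(uint32_t period_ticks, uint32_t phase_ticks)
{
    RCC->APB1ENR |= RCC_APB1ENR_TIM2EN | RCC_APB1ENR_TIM3EN;

    /* Master: OC1REF goes HIGH when CNT reaches CCR1 (PWM mode 2),
       and that edge is exported as TRGO (MMS = 100 -> OC1REF). */
    TIM2->ARR   = period_ticks - 1U;
    TIM2->CCR1  = phase_ticks;                          /* phase shift   */
    TIM2->CCMR1 = TIM_CCMR1_OC1M;                       /* 111 = PWM2    */
    TIM2->CR2   = TIM_CR2_MMS_2;                        /* TRGO = OC1REF */
    TIM2->CR1   = TIM_CR1_ARPE | TIM_CR1_CEN;           /* buffered ARR  */

    /* Slave: started by TRGO (trigger mode), one-pulse, PWM1 output.
       Some timers also offer combined reset+trigger mode (SMS = 1000),
       matching the "reset and activation" note above -- check your RM. */
    TIM3->ARR   = period_ticks - 1U;
    TIM3->CCR1  = period_ticks / 2U;                    /* ~50% duty     */
    TIM3->CCMR1 = TIM_CCMR1_OC1M_2 | TIM_CCMR1_OC1M_1;  /* 110 = PWM1    */
    TIM3->CCER  = TIM_CCER_CC1E;                        /* OC1 -> pin    */
    TIM3->SMCR  = TIM_SMCR_TS_0                         /* ITR1 = TIM2   */
                | TIM_SMCR_SMS_2 | TIM_SMCR_SMS_1;      /* trigger mode  */
    TIM3->CR1   = TIM_CR1_ARPE | TIM_CR1_OPM;           /* one pulse     */
}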
Helpful example from me:
I have written an implementation of master/slave timers activating each other in different ways, purely on registers, and with every line of code commented. I was a little new to it all (which shows in the structure of the project); it was literally my first experiment with timers after studying them in the reference manual for days, but I tried my best. I have a description of what I do in main.c. You may find it helpful. Note that timers with the same numbers are similar or even identical across various STM32 devices, so my code is likely portable down to copy-paste into your code (which I'm totally OK with if you or anyone does it). Here is a link to main.c on my GitHub. I also have oscilloscope screenshots there.

Error -200361 using USB-6356 X-series DAQ board for SPI control

I'm using a USB-6356 DAQ board to control an IC via SPI.
I'm using parts of the NI SPI Digital Waveform library to create the digital waveform, then a small wrapper VI to transmit the code.
My IC measures temperature on an RTD, and currently the controlling VI has a 'push for single measurement' style button.
When I push it, the temperature is returned by a series of other VIs running the SPI communication.
After some number of pushes (clicking the button very quickly makes this happen more quickly in time, but not necessarily in number of clicks), the VI generates an error -200361, which is nominally FIFO buffer overflow on the DAQ board.
It's unclear to me if that could actually be the cause of the problem, but I don't think so...
An NI guide describing this error for USB-600{0,8,9} devices looks promising, but following the suggestions didn't help me. I substituted 'DI.UsbXferReqCount' for the analog equivalent, since my read task is digital. Reading the default returned 4, so I changed the property to write and selected '1', but this made no difference.
I tried uninstalling the DAQ board using the Device Manager, unplugging and replugging, but this also didn't change anything.
My guess is that additional clock samples are generated after the end of the 'Finite Samples' portion of the Read and Write tasks, and that these might be adding blank data that overflows the buffer. But the temperatures returned don't indicate strange data, and I'd have assumed that if this were the case, my VIs would be unable to interpret the data read in as the correct temperature.
I've attached an image of the block diagram for the Transmit VI I'm using, but actually getting it to run would require an entire library of VIs.
The controlling VI is attached to a nearly identical forum post at NI forums.
I think the USB-6356 doesn't have an output buffer for digital signals. You can check this in NI MAX: if you select a digital output, you may find that there are no parameters for samples. It only outputs a Boolean value (0 or 1) at a time.
You can also check with the DAQ Assistant in LabVIEW: when you configure a digital output, select N Samples or Continuous Samples, and push the OK button, a dialog appears telling you there is no buffer for the lines you selected.

FFT in a non-flowgraph-centric application differs from flowgraph-centric apps like uhd_fft

I've written a small GNU Radio program to capture and plot FFT data from the USRP N210.
To avoid locking up my GUI (matplotlib and wxpython), I am only running the flowgraph after the GUI reports that it's idle.
In order to do that kind of timing, I'm using a non-flowgraph centered approach, introduced in the GNU Radio Tutorial.
Essentially, I have a main loop that looks like this (pseudocode):
topblock.set_usrp_freq()
while True:
    topblock.run()
    data = topblock.vector_sink.data()
    # thread-safe call to plot data
    # wait for gui.EVT_IDLE
    topblock.skiphead.reset()
    topblock.head.reset()
Where the flowgraph looks basically like:
self.connect(usrp, skiphead, stream_to_vector, head, fft, c2mag_sq, stats, volts_to_dBm, vector_sink)
# skiphead is modified to be resettable like head
When I use similar parameters, I expect to see the same thing that I see when I run uhd_fft -f 700M -s 10e6:
The output from my matplotlib plot is, at first, very similar, with the exception of a very pronounced LO. I have tried to follow the code through uhd_fft and I don't see it doing any LO offsetting, so my first question is: Q: Does uhd_fft avoid plotting the LO in some way, or is the way I'm running my flowgraph from a main loop causing the LO to be pronounced?
Edit: I've confirmed that the extreme LO is a byproduct of a voltage spike that occurs each time the flowgraph is run(). The number of samples you need to drop to lower the LO can be seen in the time data in my follow-up post: voltage pulse from USRP when using simple GNU Radio flowgraph from Python
After the second run, I periodically get strange data plotted that definitely doesn't happen in uhd_fft. I can make this go away by dumping several thousand samples per flowgraph run with the skiphead block, but my second question is: Q: Why does running the flowgraph from a separate main loop cause junk data to be plotted, even though the USRP is not being retuned? uhd_fft uses a flowgraph-centric process and does not have this issue:
My intuition is that there are some caveats with running a non-flowgraph centered app that aren't mentioned in the tutorial.

Using NAudio, How do I get the amplitude and rhythm of an MP3 file?

The wife asked for a device to make the xmas lights 'rock' with the best of music. I am going to use an Arduino microcontroller to control relays hooked up to the lights, sending down 6 signals from C# WinForms to turn them off and on. I want to use NAudio to extract the amplitude and rhythm to drive the six signals: a specific range of hertz for each signal, like an equalizer with six bars, with the timing taken from the rhythm. I have seen the WPF demo, and the waveform seems like the answer. I want to know how to get those values in real time while the song is playing.
I'm thinking ...
1. Create a simple mp3 player and load all my songs.
2. Start the songs playing.
3. Sample the current dynamics of the song and put that into an integer that I can send to the appropriate channel on the Arduino microcontroller via USB.
I'm not sure how to capture the current sound information in real time and produce integer values for that moment. I can read the e.MaxSampleValues[0] values in real time while the song is playing, but I want to be able to distinguish which frequency range is active at that moment.
Any help or direction would be appreciated for this interesting project.
Thank you
Sounds like a fun signal processing project.
Using the NAudio.Wave.WasapiLoopbackCapture object you can get the audio data being produced by the sound card on the local computer. This lets you skip the 'create an MP3 player' step, although at the cost of a slight delay between sound and lights. To get better synchronization, you can do the MP3 decoding yourself and pre-calculate the beat patterns and output states before playback. This will let you adjust the delay between sending the outputs and playing the audio block those outputs were generated from, getting near-perfect synchronization between lights and music.
Once you have the samples, the next step is to use an FFT to find the frequency components. Fortunately NAudio includes a class to help with this: NAudio.Dsp.FastFourierTransform. (Thank you Mark!) Take the output of the FFT() function and sum out the frequency ranges you want for each controlled light.
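A minimal sketch of that summing step (in C for brevity; NAudio's FastFourierTransform.FFT hands you an array of complex bins instead, but the arithmetic is the same). The band edges, FFT size, and sample rate here are assumptions to tune for your lights:

#include <math.h>

#define FFT_SIZE    1024
#define SAMPLE_RATE 44100
#define NUM_BANDS   6

/* Assumed band edges in Hz -- tune these to taste. */
static const double band_edges[NUM_BANDS + 1] =
    { 20, 60, 250, 500, 2000, 4000, 20000 };

/* Sum FFT bin magnitudes into NUM_BANDS bands.
   re/im hold the first FFT_SIZE/2 complex output bins. */
void sum_bands(const float *re, const float *im, double *bands)
{
    double hz_per_bin = (double)SAMPLE_RATE / FFT_SIZE;
    for (int b = 0; b < NUM_BANDS; b++)
        bands[b] = 0.0;
    for (int i = 1; i < FFT_SIZE / 2; i++) {   /* skip the DC bin */
        double freq = i * hz_per_bin;
        double mag  = sqrt(re[i] * re[i] + im[i] * im[i]);
        for (int b = 0; b < NUM_BANDS; b++) {
            if (freq >= band_edges[b] && freq < band_edges[b + 1]) {
                bands[b] += mag;
                break;
            }
        }
    }
}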
The next step is beat detection. There's an interesting article on this here. The main difference is that instead of doing energy detection on a stream of sample blocks, you'll be feeding the data from your spectral analysis stage into the beat detection algorithm. The ranges you summed become inputs into individual beat detection processors, giving you one output for each frequency range you defined. You might want to add individual scaling/threshold factors for each frequency group, with some sort of on-screen controls to adjust these for best effect.
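A minimal sketch of one such per-band detector, where the comparison factor k and the smoothing constants are assumptions you would expose as those on-screen controls:

/* Simple per-band beat detector: flag a beat when the current band
   energy exceeds k times its recent running average. Call once per
   sample block, with one running_avg state variable per band. */
int detect_beat(double energy, double *running_avg, double k)
{
    int beat = energy > k * (*running_avg);
    *running_avg = 0.9 * (*running_avg) + 0.1 * energy;  /* smooth */
    return beat;
}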
At the end of the process you will have a stream of sample blocks, each with a set of output flags. Push the flags out to your Arduino and queue the samples to play, with a delay on either of those operations to achieve your synchronization.

Why does post-layout simulation take a long time?

I am wondering: why do post-layout simulations for digital designs take so long?
Why can't software just figure out a chip's timing and model the behavior with a program that creates delays with sleep() or something? My guess is that sleep() isn't accurate enough to model hardware, but I'm not sure.
So, what is it actually doing that takes so long?
Thanks!
Post-layout simulations (in fact, anything post-synthesis) will be simulating gates rather than RTL, and there are a lot of gates.
I think you've got your understanding of how a simulator works a little confused. I say that because a call like sleep() is related to waiting for time as measured by the clock on the wall, not the simulator time counter. Simulator time advances however quickly the simulator runs.
A simulator is a loop that evaluates the system state. Each iteration of the loop is a 'time slice' e.g. what the state of the system is at time 100ns. It only advances from one time slice to the next when all the signals in it have reached a steady state.
In an RTL or untimed gate simulation, most evaluation of signals happens in 'zero time', which is to say that the effect of evaluating an assignment happens in the same time slice. The one exception tends to be the clock, which is defined to change at a certain time and it causes registers to fire, which causes them to change their output, which causes processes, modules, assignments which have inputs from registers to re-evaluate, which causes other signals to change, which causes other processes to re-evaluate, etc, etc, etc.... until everything has settled down, and we can move to the next clock edge.
In a post-layout simulation with back-annotated timing, every gate in the system has a delay from input to output associated with it. This means nothing happens in 'zero time' any more. The simulator now has to put the effect of every assignment on a list saying 'signal b will change to 1 at time 102.35ns'. Every gate has different timing. Every input on every gate will have different timing to the output. This means that a back-annotated simulation has to evaluate lots and lots of time slices, as signals are changing state at lots of different times, not just when the clock changes. There probably isn't much happening in each slice, but there are lots of them.
...and I've only talked about adding gate timing. Add wire timing and things get even more complex.
Basically there's a whole lot more to worry about, and so the sims get slower.
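To make the contrast with sleep() concrete, here is a toy event-driven core in C: a three-inverter chain with made-up per-gate delays standing in for back-annotated timing. Note the simulator jumps its time counter straight to the next scheduled event; wall-clock waiting never enters into it, and every gate delay spawns a new time slice.

#include <stdio.h>

#define NGATES     3
#define MAX_EVENTS 16

/* One scheduled signal change: "gate g's input becomes v at time t". */
typedef struct { double t_ns; int gate; int value; } Event;

static Event q[MAX_EVENTS];
static int qlen = 0;

/* Insert an event, keeping the queue sorted by time. */
static void schedule(double t_ns, int gate, int value)
{
    int i = qlen++;
    while (i > 0 && q[i - 1].t_ns > t_ns) { q[i] = q[i - 1]; i--; }
    q[i] = (Event){ t_ns, gate, value };
}

int main(void)
{
    /* Per-gate delays, as a back-annotated netlist would supply them. */
    const double delay_ns[NGATES] = { 0.35, 0.42, 0.28 };

    schedule(0.0, 0, 1);   /* an input edge hits gate 0 at t = 0 */
    while (qlen > 0) {
        Event e = q[0];                       /* pop earliest event  */
        for (int i = 1; i < qlen; i++) q[i - 1] = q[i];
        qlen--;
        double t_out = e.t_ns + delay_ns[e.gate];
        int out = !e.value;                   /* inverter evaluates  */
        printf("t = %.2f ns: gate %d output -> %d\n", t_out, e.gate, out);
        if (e.gate + 1 < NGATES)              /* drive the next gate */
            schedule(t_out, e.gate + 1, out);
    }
    return 0;
}

A real simulator does this for millions of gates, with multiple inputs per gate and wire delays on top, which is exactly why the event list gets so busy.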