How do I configure DAQ Assistant to generate voltage pulses defined by a waveform? - labview

How do I feed the waveform pulses into DAQ Assistant so that a DAQ 6259 board generates the desired voltage pulses?
Using the Simulate Signal Express VI I have created a square pulse waveform.
My goal is to let a LabVIEW user configure the frequency and pulse width with knobs on the GUI in order to generate a desired pulse train. This pulse train should be sent to the DAQ 6259 to generate a voltage pulse train, which would be captured by an oscilloscope to verify its correctness (i.e. the captured pulse train looks exactly like the waveform displayed in the LabVIEW GUI).
What is the simplest way to achieve this? Are there any tutorials that explain how it can be done?

Have you checked out the Example Finder (Help > Find Examples...)?
Hardware Input and Output > DAQmx > Analog Output / Digital Output
There are a bunch of examples in there that will get you 90% of the way.
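Those examples are the LabVIEW way to do it. If it helps to see the same idea in text form, here is a rough sketch using the nidaqmx Python package; the device name "Dev1" and all rates and levels are assumptions, not values from your setup:

```python
# Hedged sketch: continuous pulse-train generation with the nidaqmx
# Python package. "Dev1", the rates, and the amplitude are assumptions.
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

frequency = 100.0      # pulse repetition rate in Hz (GUI knob)
pulse_width = 2e-3     # pulse width in seconds (GUI knob)
sample_rate = 100e3    # AO sample clock in Hz
amplitude = 5.0        # pulse level in volts

# Build exactly one period: high for pulse_width, low for the rest.
samples_per_period = int(sample_rate / frequency)
period = np.zeros(samples_per_period)
period[:int(pulse_width * sample_rate)] = amplitude

with nidaqmx.Task() as task:
    task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    task.timing.cfg_samp_clk_timing(
        sample_rate,
        sample_mode=AcquisitionType.CONTINUOUS,
        samps_per_chan=samples_per_period)
    task.write(period, auto_start=True)   # DAQmx regenerates the buffer
    input("Generating on Dev1/ao0; press Enter to stop.")
    task.stop()
```

Writing a single period and letting DAQmx regenerate it continuously is the usual pattern for a periodic pulse train; when the user turns a knob, rebuild the buffer and restart the task.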

Related

GFSK modulation/demodulation with GNU Radio and USRP

I'm currently creating a satellite ground station which will be used to control our CubeSat in the coming months. The modulation scheme used is GFSK and the baud rate is 9600. Before trying to communicate with the satellite, I ran some tests with a USRP board by directly connecting the tx and rx blocks in the flowgraph. I was able to send and receive a png file using this flowgraph.
However, when I connect the tx and rx ends to my USRP B210's TX/RX (transmission sink) and RX2 (reception source) ports as shown below, I receive no data, even though the source and sink have been carefully connected to each other by RF cables with attenuators.
Below are the assumptions I took into account when making the second flowgraph. Please tell me if I'm on the right path.
Transmitter side: The packet encoder and GFSK mod blocks use 20 samples per symbol. The baud rate is 9600 and the sample rate is 20 * baud_rate = 192k. Since the symbol rate expected by the satellite is baud_rate = 9600, I included a rational resampler and set the UHD sample rate to baud_rate. Is my logic correct?
GFSK mod and demod: For both of these blocks I calculated the sensitivity as S = Pi * modulation_index / samples_per_symbol. The default BT value of 0.5 is used. Are my calculations sound (a quick numeric check is sketched after this list)? Is there a link to documentation for the GFSK blocks? My derivations are based on the GFSK Python source code, which is a poor substitute for documentation.
Packet Encoder/Decoder: The output of the packet decoder is null, even though the GFSK demod block gives some kind of output, which is rather meaningless. Is this normal? What is the meaning of the threshold variable, and why is its value -1?
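For reference, the rate and sensitivity arithmetic from the items above in runnable form; the modulation index of 0.5 is an assumption on my part (the conventional MSK-type value), not something stated in the flowgraph:

```python
# Numeric check of the rates and sensitivity described above.
from math import pi

baud_rate = 9600
samples_per_symbol = 20
sample_rate = samples_per_symbol * baud_rate    # 192000 Hz

modulation_index = 0.5          # assumption: conventional MSK-type index
sensitivity = pi * modulation_index / samples_per_symbol
print(sample_rate)              # 192000
print(sensitivity)              # ~0.0785 rad/sample, i.e. pi/40
```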
I'm a newbie in GNU Radio as well as GFSK in general, so please point me to any further references.
Thanks in advance.
Moses.
I was finally able to solve the problem. All I did was re-implement the GFSK demod in GRC. If you go into the source of gfsk.py, you will find that the blocks used are Quadrature Demod --> M&M Clock Recovery --> Binary Slicer, which can easily be connected in GRC directly. As Marcus suggested in my other thread, GFSK demodulation with Xlating filter in GNU Radio, I replaced the M&M Clock Recovery block with a PFB block. My flowgraph is shown below.
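In Python terms the same chain looks roughly like this; treat it as a sketch only, since the file names are placeholders and the clock-recovery loop parameters below are the gfsk.py defaults rather than values from this flowgraph (GNU Radio 3.7-era API):

```python
# Sketch of the demod chain gfsk.py builds internally:
# Quadrature Demod --> M&M Clock Recovery --> Binary Slicer.
from gnuradio import gr, analog, digital, blocks
from math import pi

class gfsk_rx(gr.top_block):
    def __init__(self, samples_per_symbol=20, sensitivity=pi / 40):
        gr.top_block.__init__(self, "GFSK demod chain")
        src = blocks.file_source(gr.sizeof_gr_complex, "capture.iq", False)
        # Quadrature demod gain is the inverse of the mod sensitivity.
        quad = analog.quadrature_demod_cf(1.0 / sensitivity)
        gain_mu = 0.175  # gfsk.py default loop gain
        clock = digital.clock_recovery_mm_ff(
            samples_per_symbol, 0.25 * gain_mu * gain_mu, 0.5, gain_mu, 0.005)
        slicer = digital.binary_slicer_fb()
        sink = blocks.file_sink(gr.sizeof_char, "bits.bin")
        self.connect(src, quad, clock, slicer, sink)
```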
Even if I cannot answer all of your questions, I offer some thoughts below:
When using hardware devices, the Throttle block MUST be removed from the flowgraph. The hardware device is now responsible for the rate limiting. Mixing a hardware device and the Throttle block may break the real-time boundary of your flowgraph required by the device; underflow or overflow messages should be produced by the UHD driver in such a case.
Are you sure that the USRP can support the requested sampling rate? You may also need to change the master_clock_rate of the device if the requested sampling rate is not an integer decimation of the clock. If this is not possible, consider some kind of re-sampling.
EDIT: The B200 cannot provide a 192e3 sampling rate with the default clock. You can set the master_clock_rate to 19.2e6 and the hardware will then apply the proper decimation. The master_clock_rate can be changed either through the device-specific arguments or via the Clock Rate field of the UHD Sink/Source blocks that is present in the latest GNU Radio versions.
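In generated Python the device-argument route looks something like the following; the constructor details vary between GNU Radio versions, so treat this as a sketch:

```python
# Sketch: requesting a custom master clock via UHD device arguments.
from gnuradio import uhd

usrp_sink = uhd.usrp_sink(
    "master_clock_rate=19.2e6",   # device arguments
    uhd.stream_args(cpu_format="fc32", channels=[0]),
)
usrp_sink.set_samp_rate(192e3)    # 19.2e6 / 192e3 = 100, integer decimation
```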

LabVIEW: I can't read the voltage from more than one channel (DAQmx read)

I have an SCB-68A connector block from National Instruments and I want to read out the open-circuit voltage from it. So I used the example code provided by National Instruments (https://decibel.ni.com/content/docs/DOC-28502).
I got 5 mV, which is a reasonable value (I measured the noise signal with an oscilloscope). Now I want to read out the noise signal from a few channels, so I slightly changed the VI (according to the documentation, I need to create an array of channels and flatten it).
But now I read approximately 200 mV on both channels (one of which is the same channel as in the first VI). It doesn't make any sense.
What am I doing wrong?
I want the user to be able to choose the channels, so I can't just write "Dev1/ai0:4".
Edit: I'm using DAQmx 14.0.0.
Edit 2: 1) There is nothing connected to the device - I just want to read out the noise signal.
2) I'm using the connector block in MIO mode with the temperature sensor disabled (the default configuration).
You are observing charge injection from the DAQ device's multiplexer. Connect each aiN terminal to aignd and you will be able to measure the noise of the DAQ device.
Charge Injection
Most NI DAQ boards have a single analog-to-digital converter (ADC) and provide multiple input channels by using a multiplexer (MUX) to switch the input of the ADC between the different analog input terminals ai0, ai1, etc.
As NI explains, when the DAQ device's multiplexer moves from one channel to the next, it can introduce a small charge on each channel. Since an open channel has no path for this charge to dissipate, the voltage of the channel will increase. This can also cause the channel to rail, slowly floating up to the maximum input voltage (usually 10 V).
Characterizing Noise
You can determine the noise of each component in your system by:
Measuring the noise of the DAQ device
Measuring the noise of the DAQ device and terminal block
Subtracting the DAQ device noise (step 1) from the system noise (step 2)
When you're finished, the value from step 1 is the noise of the DAQ device, and the value from step 3 is the noise of the SCB-68.
To measure the noise of an electrical path, there must be a complete circuit for the ADC to sample. For step 1, connect each aiN terminal to aignd and run your VI. For step 2, connect the terminal block to the DAQ device, disconnect the sensor, connect the terminal block's channel terminals to its ground terminal, and run your VI.
Minimizing Noise
In addition to charge injection, noise can be introduced to a DAQ system from several sources, including the environment. Open terminals act like small antennas and receive radiated energy from other electronics, lights, and the AC mains.
The link also outlines how to find and minimize noise, but the gist is:
Systematically identify the sources of the noise.
Remove sources of noise that aren't necessary for your measurements.
Depending on the nature and source of the remaining noise, use appropriate shielding, cabling, and terminal configuration.
Over-sample and average the signal (a minimal sketch of this follows).
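As an illustration of the last point, averaging blocks of over-sampled readings; the numbers are arbitrary, and the random data stands in for real DAQ samples:

```python
# Over-sample and average: acquire many samples per reported reading
# and average blocks of them. NumPy stand-in for real DAQ data.
import numpy as np

oversample = 1000                                     # samples per reading
raw = np.random.normal(0.0, 0.005, 10 * oversample)   # fake 5 mV RMS noise
readings = raw.reshape(-1, oversample).mean(axis=1)
print(readings)   # white-noise RMS drops by roughly sqrt(oversample)
```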
Please have a look at the links below:
http://forums.ni.com/t5/Multifunction-DAQ/How-to-use-DAQmx-Read-to-measure-multiple-analog-channels/td-p/2620949
http://digital.ni.com/public.nsf/allkb/A3A05920BF915F1486256D210069BE49
They contain the complete solution to your question.
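For completeness, a text-based analogue of the multi-channel read those links describe, sketched with the nidaqmx Python package; "Dev1" and the channel list are assumptions:

```python
# Hedged sketch: reading several user-selected AI channels in one task.
import nidaqmx

channels = ["Dev1/ai0", "Dev1/ai1", "Dev1/ai3"]   # user-selected channels

with nidaqmx.Task() as task:
    for ch in channels:
        task.ai_channels.add_ai_voltage_chan(ch)
    # Returns one list of samples per channel, in the order added.
    data = task.read(number_of_samples_per_channel=100)
    for ch, samples in zip(channels, data):
        print(ch, sum(samples) / len(samples))
```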

How to link analog output and analog input in simulated DAQ device?

I am simulating an NI PCI-6110 device in NI MAX. In LabVIEW I need to send some signal on AO0 of this device and read THIS signal from the device in another scope (it doesn't matter whether it is read from AO0 or AI0).
How do I configure a redirect from AO to AI?
I could link AI and AO with a wire on a real/physical device, but I don't know how to do this on a simulated device.
LabVIEW 2013 x86.
The simulated input data for DAQmx devices is always going to be a sine wave when read in LabVIEW. If you want to test how your application will respond to an input (in this case, by outputting a voltage on the 6110), you're going to need to simulate both the input and the output with custom code.
I do this by placing a case structure around the DAQmx VIs, with a "Debug?" control OR'd with a "Debug?" global to toggle simulated data. In the debug cases you then write new code that simulates the acquisition/generation. This allows you to switch between simulated and real data fairly easily for unit testing or integration testing.
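The same pattern in text form, sketched in Python; the flag name, the simulated waveform, and the read call are assumptions standing in for the LabVIEW case structure:

```python
# Route every acquisition through one function that checks a debug flag.
import math, random

DEBUG = True   # stands in for the "Debug?" control OR'd with the global

def read_channel(task, t):
    if DEBUG:
        # Simulated acquisition: 1 Hz sine plus a little noise.
        return math.sin(2 * math.pi * 1.0 * t) + random.gauss(0.0, 0.01)
    return task.read()   # real acquisition path (nidaqmx-style call)
```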

Regeneration of sine wave using microcontroller

I don't have much knowledge about microcontrollers. In my project I need to shift a sine wave. I want to know: if I feed a pure sine wave to port A pin 2, will I get a shifted version of the pure sine wave at port B pin 2? Will the following instructions work?
Initialise port A as input and port B as output
call delay
portb = porta
We can generate a sine wave using the DAC in a microcontroller, but as it is not perfect, it won't meet the required conditions.
First of all, the input needs to go to an ADC, and the output needs to come from a DAC (or a PWM with appropriate output filtering). It is not clear from your question that the pins you have chosen are appropriate for that.
If you are generating the sine wave from the DAC, why would you apply it to an input only to output it again? If you need two sine waves shifted in phase, why not simply generate calculated outputs from two DACs or PWMs? Either way you need two analogue outputs, but that way you do not need any input. A PWM will need more analogue filtering than a DAC and is likely to support a lower bandwidth, but most microcontrollers have more PWMs than DACs.
You cannot simply call a delay and then copy port A to port B; that would simply be a copy of A to B after a delay. You need to take samples from A and place them in a FIFO buffer, then apply the output of the FIFO to B. The length of the FIFO determines the delay (see the sketch below).
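A host-side Python sketch of that loop; on a real microcontroller this logic would live in the ADC sample interrupt, and dac_write is a hypothetical stand-in for the DAC register write:

```python
# Fixed-length FIFO as a delay line: output at time n is input at
# time n - DELAY_SAMPLES, so the FIFO length sets the delay.
from collections import deque

DELAY_SAMPLES = 100
fifo = deque([0] * DELAY_SAMPLES, maxlen=DELAY_SAMPLES)

def on_sample(adc_value):
    delayed = fifo[0]          # oldest sample, about to be pushed out
    fifo.append(adc_value)     # newest sample goes in
    dac_write(delayed)

def dac_write(value):
    pass   # hypothetical stand-in for the real DAC output
```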
A microcontroller is not an analogue device; you cannot put an analogue signal in on just any old pin and transfer that signal to another pin. Most pins are digital GPIO: they accept just two states, representing 0 or 1. No matter what voltage you apply, it will be interpreted as either high or low.
Rather, you will have to use an ADC input, sample at a sufficiently high frequency, delay the samples through a FIFO, then apply the delayed samples to a DAC. Reconstruction of a "pure" sine wave from the quantised DAC output requires analogue filtering circuitry. With a filter cut-off lower than half the sampling rate, you will recover a reasonably good representation of the original signal (which can be any signal with components below half the sampling frequency - it need not be a sine wave). If you do use a more complex signal, you will need to analogue-filter the input to remove components above half the sampling rate to avoid aliasing.
It might be possible to do all that on one chip using a Cypress PSoC, since these are hybrid chips with reconfigurable analogue elements as well as a microcontroller.

Detecting heartbeat signals with "Digital heart beat rate sensor (IC)" - iOS

I just bought a digital heart beat rate sensor:
http://www.dealextreme.com/p/digital-heart-beat-rate-sensor-3-5mm-data-port-16009
And I'm looking at how I can make an iOS application that works with it.
The sensor has a 3.5mm jack and I can detect the signal with an audio framework on iOS.
Can you give me some guidelines on how to turn these signals into heart beat rates?
That sensor looks rather like one I have here in my junk box. If so, it generates a voltage signal which depends on the pressure exerted on it by the skin against which it is pressed. If there is a strong pulse at the point of pressure, I see a signal on an oscilloscope which has a component at the pulse rate: so it is at a frequency of around 1-2 Hz.
This is WAY below the audio range, and in most audio interfaces it would be filtered out before it ever got to the audio-in ADC. I don't have a handy iPhone to check this on, but it would be bad design if the audio input did let such frequencies through. And Mr Jobs (R.I.P.) did not approve of bad design!
There is also a lot of interference at other frequencies: mains hum (50 Hz here), and at lower frequencies spurious signals from muscle twitches.
To make this work, you would need some sort of signal conditioning. If it were up to me, I would use a high-input-impedance amplifier with roughly a 0.1 Hz - 10 Hz passband, followed by a voltage-to-frequency converter. That would give me a tone, which I could set in the audio band, whose frequency varied up and down as the pressure on the sensor changed. That would let me use fairly simple frequency-detection software to recover the pressure waveform, which could then be processed using autocorrelation or similar techniques to recover the heartbeat frequency. A DTMF decoder is not the right tool, though.
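For that last step, a NumPy sketch of autocorrelation-based rate estimation; the sample rate of the recovered pressure waveform and the search band are assumptions:

```python
# Estimate heartbeat rate from the recovered pressure waveform by
# finding the strongest autocorrelation peak in a plausible lag range.
import numpy as np

def heart_rate_bpm(signal, sample_rate=50.0):
    x = signal - np.mean(signal)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags >= 0
    # Look for periodicity between 0.5 Hz and 3 Hz (30-180 bpm).
    lo = int(sample_rate / 3.0)
    hi = int(sample_rate / 0.5)
    lag = lo + int(np.argmax(acf[lo:hi]))
    return 60.0 * sample_rate / lag
```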
I did find, when I played about with the sensor, that it was very touchy, responding to almost everything going on, and it wouldn't be easy to pick out the heartbeat. Your sensor may be different, though.