DDS frequency synthesizer digital phase-locked loop - frequency

I am working on a project about a frequency-hopping transceiver. I want to implement a phase-locked loop on an FPGA, i.e. a digital PLL. I am multiplying the incoming signal with a certain frequency and passing the product through a low-pass filter. Now I feed this low-frequency signal to a DDS. I want my DDS to work like a VCO and lock to the incoming phase/frequency. How can I do that?
I also need to know how the phase accumulator in a DDS works: what input does it receive to generate the corresponding frequency?

The datasheets of the Xilinx DDS Compiler have some information about the theory of operation. You may want to have a look at them.
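For the second part of the question: a DDS phase accumulator is just an N-bit register to which a frequency tuning word (FTW) is added on every clock cycle; the accumulator's top bits address a sine lookup table, and the output frequency is f_out = FTW * f_clk / 2^N. To make the DDS behave like a VCO, the output of your phase detector and loop filter adjusts the tuning word. Below is a minimal C model of that behaviour, assuming a 32-bit accumulator and a 1024-entry lookup table (the names are illustrative, not taken from any particular core):

    #include <stdint.h>

    #define LUT_BITS  10u
    #define LUT_SIZE  (1u << LUT_BITS)

    extern const int16_t sine_lut[LUT_SIZE];   /* precomputed sine table */

    static uint32_t phase_acc = 0;             /* 32-bit phase accumulator */
    static uint32_t ftw = 0;                   /* frequency tuning word */

    /* Called once per DDS clock: f_out = ftw * f_clk / 2^32. */
    int16_t dds_step(void)
    {
        phase_acc += ftw;                      /* phase wraps naturally at 2^32 */
        return sine_lut[phase_acc >> (32u - LUT_BITS)];
    }

    /* In a digital PLL, the phase detector + loop filter output updates ftw,
     * e.g. ftw = ftw_center + loop_filter_out, so the DDS acts as the "VCO". */

In the FPGA the same structure is an adder and a register in fabric (or inside the DDS core), clocked at the DDS sample rate.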


How to prevent CPU usage from changing timing in LabVIEW?

I'm trying to write code in which, every 1 ms, a number incremented by one should replace the old number (something like a chronometer).
The problem is that whenever the CPU usage increases because of other programs running on the PC, this 1 ms interval also increases and the timing in my program changes.
Is there any way to prevent CPU load changes from affecting the timing in my program?
It sounds as though you are trying to generate an analogue output waveform with a digital-to-analogue converter card using software timing, where your software is responsible for determining what value should be output at any given time and updating the output accordingly.
This is OK for stationary or low-speed signals but you are trying to do it at 1 ms intervals, in other words to output 1000 samples per second or 1 kS/s. You cannot do this reliably on a desktop operating system - there are too many other processes going on which can use CPU time and block your program from running for many milliseconds (or even seconds, e.g. for network access).
Here are a few ways you could solve this:
Use buffered, hardware-clocked output if your analogue output device supports it. Instead of writing one sample at a time, you send the device a waveform or array of samples and it outputs them at regular intervals using a timing signal generated in hardware. Unfortunately, low-end DAQ devices often don't support hardware-clocked output.
Instead of expecting the loop that writes your samples to the AO to run every millisecond, read LabVIEW's Tick Count (ms) value in the loop and use that as an index into your array of samples: rather than trying to output every sample, your code will now ask 'what time is it now, and therefore what should the output be?' (see the sketch after this list). That won't give you a perfect signal out, but at least it should keep the correct frequency rather than being 'slowed down'; instead you will see glitches imposed on the signal whenever the loop can't keep up. This is easy to test and it may be adequate for your needs.
Use a real-time operating system instead of a desktop OS. In the case of LabVIEW this would mean using the Real-Time software module and either a National Instruments hardware device that supports RT, such as the CompactRIO series, or installing the RT OS on a dedicated PC if the hardware is compatible. This is not a cheap option, obviously (unless it's strictly for personal, home use). In any case you would need to have an RT-compatible driver for your output device.
Use your computer's sound output as the output device. LabVIEW has functions for buffered sound output and you should be able to get reliable results. You'll need to upsample your signal to one of the sound output's available sample rates, probably 44.1 kS/s. The drawbacks are that the output level is limited in range and is not calibrated, and will probably be AC-coupled so you can't output a DC or very low-frequency signal. However if the level is OK for what you want to connect it to, or you can add suitable signal conditioning, this could be a neat solution. If you need the output level to be calibrated you could simultaneously measure it with your DAQ card and scale the sound waveform you're outputting to keep it correct.
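The time-indexed lookup from the second option can be sketched in C-like pseudocode (a minimal illustration only; get_tick_ms(), write_ao() and the waveform array are hypothetical stand-ins for LabVIEW's Tick Count (ms), the analogue output write and your sample array):

    #include <stdint.h>

    /* Hypothetical stand-ins for the LabVIEW primitives described above. */
    extern uint32_t get_tick_ms(void);          /* Tick Count (ms) */
    extern void     write_ao(double value);     /* analogue output write */

    #define N_SAMPLES 1000u                     /* one waveform period at 1 kS/s */
    extern const double waveform[N_SAMPLES];    /* precomputed samples */

    void output_loop(void)
    {
        uint32_t t0 = get_tick_ms();
        for (;;) {
            /* Ask "what time is it now?" and output the sample that belongs to
             * that instant, instead of assuming the loop runs exactly every 1 ms. */
            uint32_t idx = (get_tick_ms() - t0) % N_SAMPLES;
            write_ao(waveform[idx]);
        }
    }

The frequency stays correct because the index is derived from the clock rather than from the number of loop iterations; late iterations show up as steps or glitches instead of a slowed-down waveform.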
The answer to your question is "not on a desktop computer." This is why products like LabVIEW Real-Time and dedicated deterministic hardware exist: you need a system dedicated to a particular process in order to serve that process consistently. Every application on a regular Windows/Mac/Linux desktop system has the problem you are seeing of potentially being interrupted by other system processes, particularly in its UI layer.
There is no way to prevent CPU load changes from affecting the timing in your program unless the computer has a real-time operating system.
If it doesn't, there is no reason to expect it to behave deterministically. Do you really need your program to run at that pace?

USRP1 and GNU Radio

I am trying to set up five USRP1s and some daughterboards on 2.4 and 5 GHz.
Some of them are out of order and some work properly, but I don't know which is which. I am trying to send a symbol sequence (QAM modulation), passing it from a file source to both a USRP sink and an FFT sink.
I am trying to find articles and tutorials on how the sample rates are connected and how to set them up, but I can't understand what I am missing. Could somebody please help with the flow graphs?
128 MS/s is not a rate that is possible with the USRP1. The console will contain a UHD warning that a different, possible rate was chosen (most likely 8 MS/s).
Now, you contradict that rate by having a "Throttle" block in your flow graph - that block's only job is to slow down the average rate at which samples are let through, and that is something your "USRP Sink" already does. In fact, modern versions of GRC will warn you that using a Throttle block in the same flow graph as a hardware sink or source is a bad idea.
Now, you'll say "OK, if the USRP sink actually only needs to consume 8 MS/s, and my interpolator makes 128 MS/s out of my nominally 1 MS/s flow (really, signals within GNU Radio don't have an intrinsic sampling rate), then that's got to be fast enough to satisfy the 8 MS/s demand!"
But the fact is that a 128x interpolator is a very CPU-intensive thing, and the resulting rate might not be that high, made even worse by the choppy way Throttle works.
In fact, your interpolator is totally unnecessary. The USRP internally has proper interpolators for integer fractions of its 64 MS/s master clock rate, which means that you can set the USRP Sink to a sampling rate of 1 MS/s and just connect the file source to it directly.

Is it possible to do SPI operation using GPIO pins?

I want to execute the SPI protocol using GPIO pins, configured for single-slave operation. How should I configure this? I am using an STM32F100RB microcontroller and the CooCox IDE, running on Windows XP.
If anybody has example source code for configuring SPI operation using GPIO pins, please send it to me.
It would be very helpful for my project. Thanks in advance.
Regards,
Pavan Neo.
You're asking about bit banging. This is the process of using an I/O pin (or several) to encode or decode a serial signal in software. Wikipedia has a good description of the process.
For SPI specifically, you will need two or three outputs (depending on whether or not a chip select is needed) and one input. You'll have to ensure that your bits are set or read in the correct order so as not to violate any setup/hold requirements of your peripheral, and you'll need to pay attention to the clock polarity and phase (to make sure you're reading/writing data on the correct edge).
The Wikipedia link has some example code for bit banging that you may find useful as a starting point.
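As a starting point, here is a minimal bit-banged SPI master sketch in C for mode 0 (CPOL = 0, CPHA = 0), MSB first. The gpio_write()/gpio_read() helpers, the delay routine and the pin names are hypothetical placeholders for whatever GPIO access your STM32 code uses (for example GPIO_WriteBit / GPIO_ReadInputDataBit from the standard peripheral library):

    #include <stdint.h>

    /* Hypothetical GPIO helpers - replace with your own register or library calls. */
    extern void gpio_write(int pin, int level);
    extern int  gpio_read(int pin);
    extern void delay_half_clock(void);          /* sets the bit-banged SPI clock rate */

    /* Hypothetical pin assignments. */
    enum { PIN_SCK, PIN_MOSI, PIN_MISO, PIN_CS };

    /* Exchange one byte, SPI mode 0: data is driven while SCK is low
     * and sampled on the rising edge, MSB first. */
    uint8_t spi_transfer(uint8_t out)
    {
        uint8_t in = 0;
        for (int bit = 7; bit >= 0; bit--) {
            gpio_write(PIN_MOSI, (out >> bit) & 1);   /* set data while SCK is low */
            delay_half_clock();
            gpio_write(PIN_SCK, 1);                   /* rising edge: both sides sample */
            in = (uint8_t)((in << 1) | (gpio_read(PIN_MISO) & 1));
            delay_half_clock();
            gpio_write(PIN_SCK, 0);                   /* falling edge: prepare next bit */
        }
        return in;
    }

    /* Usage for a single slave: assert chip select, exchange bytes, deassert. */
    void spi_write_byte(uint8_t out)
    {
        gpio_write(PIN_CS, 0);
        (void)spi_transfer(out);
        gpio_write(PIN_CS, 1);
    }

Adjust the clock polarity/phase and the bit order to match whatever peripheral you are talking to.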

Two analog channels affect each other in PIC

I am doing a project to recognize gestures by reading ADC values on a PIC16F73 using embedded C. Everything works fine when using a single ADC channel, but when I use multiple channels the values affect each other. Is this a hardware error or a software problem?
Probably. It's very likely to be one, or the other, or both. Split the problem in half.
Eliminate one at a time. Scope/meter on both analog inputs. Change one input - does the other change too? If it does, there is a hardware issue at least. If not, it's software.
This is debugging 101.
It's a hardware effect, but not an error.
From the datasheet:
11.1 A/D Acquisition Requirements
For the A/D converter to meet its specified accuracy, the charge holding capacitor (CHOLD) must be allowed to fully charge to the input channel voltage level. The analog input model is shown in Figure 11-2. The source impedance (RS) and the internal sampling switch (RSS) impedance directly affect the time required to charge the capacitor CHOLD. The sampling switch (RSS) impedance varies over the device voltage (VDD), see Figure 11-2. The source impedance affects the offset voltage at the analog input (due to pin leakage current). The maximum recommended impedance for analog sources is 10 kΩ. After the analog input channel is selected (changed), the acquisition period must pass before the conversion can be started.
To calculate the minimum acquisition time, TACQ, see the PICmicro™ Mid-Range MCU Family Reference Manual (DS33023). In general, however, given a maximum source impedance of 10 kΩ and at a temperature of 100°C, TACQ will be no more than 16 µs.
It will likely be because you have high-impedance sources driving all the ADC pins. When the multiplexer switches from one input to the next, any charge stored on the ADC's sampling capacitor from the previous input will still be there.
If you drive each input with the output of a suitable op amp, then when the ADC's multiplexer switches, the op amp can drive charge into or pull charge out of the sampling capacitor, and the acquisition time needed for the new input can be significantly reduced. Plus, with this method you are not loading the voltage you want to read.
If you cannot drive the pin from a low-impedance source, then make sure you allow plenty of acquisition time for the new input's value to settle before starting the conversion.
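On the software side, the usual pattern is: select the channel, wait out the acquisition time, then start the conversion. A minimal sketch in C (register and bit names follow the PIC16F73 datasheet and typical XC8 headers, but check your own compiler's definitions; the 20 µs delay and 4 MHz clock are assumptions):

    #define _XTAL_FREQ 4000000UL   /* assumed 4 MHz oscillator, needed by __delay_us() */
    #include <xc.h>

    /* Read one ADC channel on the PIC16F73, allowing the acquisition time
     * (TACQ, ~16 us worst case per the datasheet) to elapse after the
     * multiplexer is switched and before the conversion is started. */
    unsigned char adc_read(unsigned char channel)
    {
        /* Select the channel: the CHS bits are ADCON0<5:3> on this part. */
        ADCON0 = (unsigned char)((ADCON0 & 0xC7) | ((channel & 0x07) << 3));

        __delay_us(20);                 /* let CHOLD charge to the new input voltage */

        ADCON0bits.GO_nDONE = 1;        /* start the conversion */
        while (ADCON0bits.GO_nDONE)     /* wait for it to complete */
            ;
        return ADRES;                   /* 8-bit result register on the PIC16F73 */
    }

If your sources have higher impedance, lengthen the delay accordingly (or buffer them as described above).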

Control stepper motors via USB

I'm building a USB device to control stepper motors. I've done this before using a parallel port, but because those ports no longer exist on current motherboards, I decided to implement USB communication between my device and the PC (host).
To achieve my objective, I built the device around a Freescale microcontroller that has a 12 Mbps USB module.
My USB device must receive 4 bytes (one for each motor driver) at a given time, because each byte is a step command that should move a motor.
On the PC (host), a user application processes a text file and generates the trajectory coordinates, sending bytes at a certain rate for each motor (the timing is what achieves the acceleration and speed of the motors).
Using the parallel port this was an easy task, because each byte is sent sequentially at a time determined by the user application.
Doing a little research about the full-speed USB protocol, I understood that a frame is sent every 1 ms.
So I can send 4 bytes or many more every 1 ms, but I cannot manage the timing the way I did with the parallel port.
My microcontroller can send up to 64 bytes per frame (depending on the transfer type: control, bulk, interrupt, isochronous).
Question 1:
In what way can I send 4-byte packets faster than every 1 ms?
Question 2:
What type of transfer would you advise for this kind of device?
Thanks.
Like Ricardo said, USB-serial will suffice.
As for the type of transfer, try implementing a CDC stack and use your SCI receiver to listen for PC commands. That will give you a receive buffer which will meet your needs.
Initialize your SCI (baud rate, etc.).
Enable the receiver and its interrupt.
On data receive, move the byte into your 4-byte command buffer.
Clear the receive flag and wait for more.
When you have all 4 bytes, fire off the steppers! Four bytes should only take microseconds to arrive.
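A minimal sketch of that receive logic in C (the sci_read_byte() and step_motors() helpers are hypothetical stand-ins; the actual SCI registers and interrupt vector depend on which Freescale part you use):

    #include <stdint.h>

    /* Hypothetical hardware hooks - replace with your part's SCI register
     * accesses and your own stepper-drive routine. */
    extern uint8_t sci_read_byte(void);                /* read received byte, clear flag */
    extern void    step_motors(const uint8_t cmd[4]);  /* one step pattern per driver */

    #define CMD_LEN 4u

    static volatile uint8_t cmd_buf[CMD_LEN];
    static volatile uint8_t cmd_idx = 0;

    /* SCI receive interrupt: collect bytes until a full 4-byte command is in. */
    void sci_rx_isr(void)
    {
        cmd_buf[cmd_idx++] = sci_read_byte();

        if (cmd_idx >= CMD_LEN) {
            step_motors((const uint8_t *)cmd_buf);     /* fire off the steppers */
            cmd_idx = 0;                               /* wait for the next command */
        }
    }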
Check with Freescale to see if your processor is supported.
http://cache.freescale.com/files/microcontrollers/doc/support_info/USB_STACK_RELEASE_NOTES_V4.1.1.pdf?fpsp=1
There might even be some sample code to get you started.
-Cheers
I am achieving the same goal (driving/controlling CNC machines) like this:
The USB device is just a synchronous parallel I/O port, using continuous bulk transfers with one pipe as input and one as output. This way I was able to achieve synchronous 64-bit parallel communication with a ~70 kHz sample rate. It uses traffic of around 4.27 MBit/s in + 4.27 MBit/s out, which is the limit for my MCU and code; higher speeds cause jitter on the output due to USB event interrupts.
How to do it (on the MCU side):
I have two FIFOs, one for incoming and one for outgoing data. I have a timer interrupt occurring at the sample-rate frequency. In it I read the inputs and feed them into the first FIFO, then read data from the other FIFO and send it to the outputs.
On top of that, the USB task is called (inside the same interrupt) to check the FIFOs for data to send to USB, collect incoming data from USB, and handle the transfer itself.
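A minimal sketch of that interrupt structure in C (the FIFO type, the port-access helpers and usb_task() are assumptions standing in for the author's actual code):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical helpers standing in for the real MCU code. */
    typedef struct { uint64_t buf[256]; uint16_t head, tail; } fifo_t;

    extern bool     fifo_pop (fifo_t *f, uint64_t *word);
    extern void     fifo_push(fifo_t *f, uint64_t word);
    extern uint64_t read_input_port(void);             /* sample the 64 input bits */
    extern void     write_output_port(uint64_t w);     /* drive the 64 output bits */
    extern void     usb_task(fifo_t *to_host, fifo_t *from_host); /* service bulk pipes */

    static fifo_t fifo_in;    /* samples captured from the inputs, going to the host */
    static fifo_t fifo_out;   /* samples received from the host, going to the outputs */

    /* Timer interrupt, fired at the sample-rate frequency (e.g. 70 kHz). */
    void timer_isr(void)
    {
        uint64_t word;

        fifo_push(&fifo_in, read_input_port());        /* capture the inputs */

        if (fifo_pop(&fifo_out, &word))                /* next output sample ready? */
            write_output_port(word);                   /* drive the outputs synchronously */

        usb_task(&fifo_in, &fifo_out);                 /* service the bulk transfers */
    }

The key point is that the outputs are updated from the timer interrupt, so the sample timing comes from hardware, while the USB transfers only keep the FIFOs topped up.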
I chose Atmel AT32UC3A chips for this task. After long and painful research I decided on these MCUs because they have enough memory for both FIFOs and the program, so no additional IC is needed. They are available in a QFP package, which I can use (BGA is not an option for me). They have HS USB (most USB MCUs have only FS, like yours). They run at 66 MHz. They support many interesting features (I did interesting projects with them in the past), and of course I have experience with Atmel MCUs from the past.
So if you want to achieve something similar, then:
Start with bulk transfers (PC -> USB -> MCU -> output).
Add a FIFO if needed.
I do not know what sample rate you need. The old LPT ports could handle 80-196 kHz depending on the manufacturer; the modern ones are much, much slower (which is silly and sad).
Measure the critical sample rate.
You need an oscilloscope or very good hearing for this. The output data must be synchronous, so no holes in it, no jitter, etc.
If any of these are present, you have to lower the sample rate. My setup could handle even a 1 MHz sample rate, but USB jitter was present (sometimes a USB event froze the sending for longer than one sample), so I achieved only 70 kHz of stable output.
If you also need inputs, then add them.
But only if the output is working as it should. Do not forget to lower the sample rate after this too. Use separate bulk pipes and FIFOs for input and output.