Incorporate the spreading code into the matched filter input to a PFB clock sync - gnuradio

I have a BPSK modulator/demodulator working, around which I have added a few blocks to effectively get a DSSS system working. However, when I try to add a second user (or spreading code) I can only lock on to one of the signals. I assume this is because I am using the standard root-raised-cosine filter in the PFB Clock Sync block, and it has no direct knowledge of the spreading code used.
My question is whether there is a way to incorporate the spreading code into the root-raised-cosine filter, or perhaps into the PFB Clock Sync block in some other way, so that I can perform symbol timing recovery on the correct set of symbols.
The RRC I am using now is:
firdes.root_raised_cosine(nfilts,nfilts,1.0,0.35,11*sps*nfilts)
where nfilts = 32 and sps = 2.

I am sorry I am not directly answering your question, but first we need to understand where the RRC is applied. If you are using the Constellation Modulator (CM) block to generate the BPSK and then spreading, the RRC is being applied before the spreading; i.e., it's performed by the CM. If this is true, then I think it may be just luck that it worked for one spreading code.
On the other hand, if you apply an RRC post-spreading, then the PFB Clock Sync should not care. I suggest changing sps to 4 and then looking at the time domain signal post-spreading. Do you see RRC-shaped symbols?
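If it helps to picture the post-spreading case, here is a minimal NumPy sketch of "spread first, RRC second". The spreading code, signal lengths, and sps = 4 are illustrative assumptions; firdes comes from GNU Radio's Python bindings:

    import numpy as np
    from gnuradio.filter import firdes

    sps = 4                                                      # samples per *chip*; 4 makes the shape easy to see
    code = np.array([1, -1, 1, 1, -1, 1, -1, -1], dtype=float)   # example spreading code

    symbols = 2.0 * np.random.randint(0, 2, 200) - 1.0           # random BPSK symbols, +/-1
    chips = np.kron(symbols, code)                               # spread: one full code per symbol

    # RRC at the chip rate, same 0.35 roll-off the question uses
    rrc_taps = np.array(firdes.root_raised_cosine(1.0, sps, 1.0, 0.35, 11 * sps))

    upsampled = np.zeros(len(chips) * sps)
    upsampled[::sps] = chips
    shaped = np.convolve(upsampled, rrc_taps)                    # what the PFB Clock Sync should see

    # Plotting `shaped` should show RRC-shaped chips if the RRC really is applied post-spreading.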

Related

Zero derivative calculation during optimization (no impact to objective)

I am writing an optimization code for a finite-difference radiation solver model. I started to use "src_indices" for connecting parameters rather than promoting all the variables. But after I changed the connection style, the optimization no longer calculates derivatives, gives a "no impact to objective" error, and "successfully" terminates after the first iteration. I could not find any clue for locating the error in the logs (the bug may have a completely different cause).
Is there any suggestion where I can start?
I uploaded the code to GitHub https://github.com/TufanAkba/opt_question
The first thing that comes to mind when you mention "design variables have no impact on objective" is that there may be a missing connection. Since this behavior only started after you changed the connection style, I think this is even more likely.
There are a couple of tools you can use to diagnose this. The first is the n2 viewer, which you can launch by typing the following at your command prompt:
openmdao n2 receiver_opt.py
This will launch a browser window that contains a graphical model viewer which is described in detail here. You can use this to explore the structure of your model. To find unconnected inputs in your model, look for any input blocks that are colored orange. These are technically connected to a hidden IndepVarComp called _auto_ivc, and will include design variables, which are set by the optimizer. You will want to look for any that should be connected to other component outputs.
OpenMDAO also has a connection viewer that just shows connections.
openmdao view_connections receiver_opt.py
You can use this tool to focus just on the connections. It is described here. If you choose to use it, filter the source column for _auto_ivc to see the unconnected inputs.
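Both viewers can also be launched from a short run script instead of the command line. A minimal, self-contained sketch, assuming your OpenMDAO version exposes n2 and view_connections in openmdao.api (the toy component here is purely illustrative):

    import openmdao.api as om

    prob = om.Problem()
    prob.model.add_subsystem('comp', om.ExecComp('y = 2.0 * x'))
    prob.setup()
    prob.run_model()

    om.n2(prob)                # same diagram as `openmdao n2 receiver_opt.py`
    om.view_connections(prob)  # connection table; filter the source column for _auto_ivc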
If you reach this point, and are satisfied that all the connections are correct, then there are a couple of other possibilities:
Are all of your src_indices correct? Maybe some of them are an empty set, or maybe some create a "degenerate" case. For example, if you have a set of cascading components that each multiply an incoming vector by a diagonal matrix, and if your indices are [0] in one connection, and [4] in another connection, then you've effectively severed the entire model. None of our visualization tools can pick that up, and you will need to inspect the indices manually.
It could also be a derivative problem, though what you describe sounds like connections. In that case, I recommend using check_partials to look for any missing or incorrect derivatives.
Are you computing any derivatives using complex step? It is possible that you are losing the complex part of the calculation through a complex-unsafe operation. Checking your derivatives against 'fd' can help to find these.
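For the last two points, here is a minimal sketch of what those checks look like in code. The model is a toy stand-in (the component and variable names are hypothetical); the point is the src_indices connection and the check_partials call:

    import numpy as np
    import openmdao.api as om

    prob = om.Problem()
    model = prob.model
    model.add_subsystem('src', om.ExecComp('y = 2.0 * x', x=np.ones(5), y=np.ones(5)))
    model.add_subsystem('tgt', om.ExecComp('z = 3.0 * a'))

    # src_indices picks one element out of the 5-vector; an empty or "degenerate"
    # pick here is exactly the kind of thing that silently severs a model.
    model.connect('src.y', 'tgt.a', src_indices=[0])

    prob.setup(force_alloc_complex=True)  # only needed if a component uses complex step
    prob.run_model()

    # Compare every component's declared partials against finite differences.
    prob.check_partials(compact_print=True, method='fd')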

Why am I getting only zeros out of the VCO block in GNU Radio?

In GNU radio I am trying to use the frequency of one signal to generate another signal of a different frequency. Here is the flow diagram that I am using:
I generate a 50 kHz signal with a signal source block and feed this into a Log Power FFT block. I used the Argmax block to find the FFT bin with the most power and multiply that with a constant. I want to use this result as the input to the complex vco block to generate another signal with a different frequency. All vectors have a length of 4096.
However, looking at the output of the complex QT Gui Time Sink block, the output of the vco is always zero. This is strange to me because using a float QT Gui Time Sink to look at the output of the multiply block (which is also going to the input of the vco block), the result is 50,000 as expected. Why am I only getting zero out of the vco?
Also, my sample rate is set to 1M. I am assuming because of the vector length of 4096 that the sample rate out of the Argmax block will be 1M/4096 = 244. Is this correct?
I am running gnu radio companion on windows 10.
The proposed solution is not a solution. Please don't abuse the signal probe, which is really just that: a probe for slow, debugging, or purely visual purposes. Every time I use it myself, I see how architecturally bad it is, and I personally think the project should remove it from the block library altogether.
Now, instead of just saying "probe is bad, do something else", let's analyse where your flow graph falls short:
Your frequency estimation depends on the argmax of a block that was meant for pure visualization purposes. No, the output rate is not (sampling rate / FFT length); the output rate is roughly the "frame rate" (but not exactly: that block is terrible and mixes "sample times" with "wall clock times"). Don't do that. If you need something like that, use the FFT block followed by "Complex to Magnitude Squared". You don't even want the logarithm - you're just looking for a maximum.
Instead of looking for a maximum absolute value in an FFT, which is inherently a quantizing frequency estimator, use something that actually gives you an oscillation. There are multiple ways you can do that with a PLL!
Your VCO probably does exactly what it's programmed to do - you're just using an inadequate sensitivity!
The sampling rate you assume in your time sinks is totally off, which is probably why you have the impression of a constant output – it just changes so slowly that you'll not notice.
So instead, I propose one of the following:
Use the PLL Freq Det. Feed its output into the VCO. Don't scale with a constant; simply apply the proper sensitivity. Sensitivity is the factor between "input amplitude" and "phase advance per sample on the output in radians" (see the sketch after these two options).
Use the PLL Carrier recovery. Use a resampler, or some other mathematical method, to generate the new frequency. You haven't told us how that other frequency relates to the input frequency, so I can't give you concrete advice.
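A minimal sketch of the first option, assuming GNU Radio 3.8-era Python bindings. The loop bandwidth and frequency limits are illustrative; since PLL Freq Det outputs radians per sample and vco_c's sensitivity is in radians per second per unit input, a sensitivity equal to samp_rate reproduces the detected frequency with no scaling constant:

    import math
    from gnuradio import gr, blocks, analog

    samp_rate = 1000000

    tb = gr.top_block()
    src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 50000, 1.0)
    freq_det = analog.pll_freqdet_cf(
        2 * math.pi / 100,                    # loop bandwidth
        2 * math.pi * 100000 / samp_rate,     # max frequency, radians per sample
        -2 * math.pi * 100000 / samp_rate)    # min frequency, radians per sample
    vco = blocks.vco_c(samp_rate, samp_rate, 1.0)   # sensitivity = samp_rate, amplitude 1
    sink = blocks.null_sink(gr.sizeof_gr_complex)

    tb.connect(src, freq_det, vco, sink)      # run with tb.run() or tb.start()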
Also note that this very much suggests a case of "I'm trying to recreate an analog approach digitally"; that might be a good approach, but in many cases it's not.
If I may be so brazen: describe why you need to generate that other frequency, and for which purpose, in a post on https://dsp.stackexchange.com or to the GNU Radio mailing list discuss-gnuradio@gnu.org (sign up here). This is really only barely a programming problem; it's really a signal processing problem. And there are a lot of people out there eager to help you find an appropriate solution that actually tackles your problem!
It looks like a better solution was to probe the output of the multiplier using a probe signal block along with a Function Probe Block to create a new variable. This variable could then be used as the frequency value in a separate Signal Source Block that is used to generate the new signal. This flow diagram seems to satisfy the original intended purpose: new flow diagram

QPSK works in simulation but not with SDR

I'm going to start off by saying that I'm very new to SDR and GNU Radio. This may be a dumb question, but I have been googling and testing things for about two months now trying to get this to work without success. Any help or pointers would be appreciated!
I'm attempting to use GNU Radio 3.8 to transfer a file using differential QPSK. I've tried to follow the tutorials on the wiki as well as several similar academic papers I found on the internet (which also seem to be based off the wiki tutorial). None of them worked on their own but combining what actually works from each one, I managed to create a flowgraph sans hardware that does indeed send and receive the data from a file. Here's the Flowgraph and here is a screenshot of the results. The results show the four constellation points, and the data from the file source matches up perfectly with the data having gone though the entire transmit+receive chain. In the simulation I have a throttle block and a channel model block where the LimeSDR Source and LimeSDR Sink block would be. So far so good (at least as far as I can tell).
When I actually start transmitting this signal with the SDR, the received data no longer matches up with what is transmitted. Here's the flowgraph I've been using for the transmission. I added a protocol formatter and some FEC blocks that I could have removed for this illustration, but the point is that simply looking at what bits are going into the modulator vs. what's being recovered, the two do not match up. The constellation looks good (as far as I can tell) but the bits are all wrong. Here's a screenshot showing the bits being transmitted. You'll notice in the screenshot of the transmitted signal that the signal has a repeating series of three flat-top "1's" surrounded on both sides by a period of "0's" (at times 1.5 ms and 3.5 ms). This is a screenshot of the received bits. At times 1 ms and 3 ms you can see that it has significantly more transitions between 1 and 0 than it should.
So at this point I'm stumped. The simulation worked but the real world test does not. I've messed around with the RRC filter properties a significant amount. I have no clue if the values I have chosen are correct as I have not found a tutorial or explanation on how to do so. I just looked at some of the example flowgraphs and made some guesses as to how those values were derived and applied those guesses to my use case. It worked well in the simulation so I thought it would be fine in the real world test. I've tried a variety of samples per symbol but my goal is for a 4800 bit per second transfer speed, and using different samples per symbol didn't help anyway. What should I change in order to get this to work?
Bonus question: The constellation object has QPSK and DQPSK, and the constellation modulator has a differential checkbox. What is the best practice combination of selections to get a differential QPSK modulation?

Can we use GPIO_PinAFConfig function to make a pin as output on STM32L1?

I'm using the STM32L100RC board. I need to switch a pin between output and input several times, so can I use the GPIO_PinAFConfig function to do that, or do I have to initialize the whole GPIO_InitTypeDef structure each time?
A GPIO can be configured as either an input, output, or one of possibly several alternate functions. As its name suggests, GPIO_PinAFConfig sets one of those alternate functions, so would not achieve your aim at all.
If switching between input and output using the standard peripheral library, there will be some redundancy - register values that do not change, or which are mutually exclusive to input or output. If you need the switch to be as fast as possible (and we are talking tens or hundreds of nanoseconds here), then direct register access might make significant savings by changing only those registers that actually need to change.

Making a Gnuradio Settings block

We're going to be using GNU Radio to stream in data from a radio peripheral. In addition, we have another peripheral that is part of the system, which we control programmatically. I have a basic C program to do the controls.
I'd like to be able to implement this in GNU Radio, but I don't know what the best way to do this is. I've seen that you can make blocks, so I was thinking I could make a sink block, have a constant feed into that, and have the constant's value defined by some control like a WX slider.
It would take a needless part out of this if I could remove the constant block and just have the variable assigned to the WX slider directly be assigned to the control block, but then there would be no input. Can you make an inputless and outputless block that just runs some program or subroutine?
Also, when doing a basic test to see if this was feasible, I used a slider to a constant source to a WX scope plot. There seems to be a lag or delay between putting in an option and seeing the result show up on the plot. Is there a more efficient way to do this that will reduce that lag? Or is the lag just because my computer is slow?
It would take a needless part out of this if I could remove the constant block and just have the variable assigned to the WX slider directly be assigned to the control block, but then there would be no input. Can you make an inputless and outputless block that just runs some program or subroutine?
Yes, if you do this it will work. In fact, you can write any sort of Python code in a GRC XML file, and if you set up the properties and setter code properly, what you want will work. It doesn't have to actually create any GNU Radio blocks per se.
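For concreteness, here is a minimal sketch of the Python side of such a "block": a plain class that GRC instantiates and whose setter the slider's callback invokes. The GRC XML wiring (the make/callback entries) is assumed, and the control program path "./peripheral_ctl" is purely hypothetical:

    import subprocess

    class PeripheralControl(object):
        """Not a real GNU Radio block - just a settings holder with a setter."""

        def __init__(self, value=0):
            self.set_value(value)

        def set_value(self, value):
            # GRC calls this from the slider's callback whenever the variable changes.
            self.value = value
            subprocess.call(["./peripheral_ctl", str(value)])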
Also, when doing a basic test to see if this was feasible, I used a slider to a constant source to a WX scope plot. There seems to be a lag or delay between putting in an option and seeing the result show up on the plot.
GNU Radio is not optimized for minimum latency, but for efficient bulk processing. You're seeing the buffering between the source and the sink. Whenever you have a source that computes values rather than being tied to some hardware clock, the buffers downstream of it will be always-nearly-full and you'll get this lag.
In the advanced options there are settings to tune the buffer size, but they will only help so much.
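As an illustration of that buffer tuning (a sketch only; block names are made up and the exact effect depends on your GNU Radio version), the "Max Output Buffer" advanced option ends up as a set_max_output_buffer call in the generated top block, which you can also write by hand:

    from gnuradio import gr, blocks, analog

    samp_rate = 32000
    tb = gr.top_block()
    src = analog.sig_source_f(samp_rate, analog.GR_CONST_WAVE, 0, 0.5, 0)
    thr = blocks.throttle(gr.sizeof_float, samp_rate)
    snk = blocks.null_sink(gr.sizeof_float)   # stands in for the scope sink

    # Smaller output buffers mean less stale data queued between a slider change
    # and the display; this must be set before the flowgraph starts.
    src.set_max_output_buffer(1024)

    tb.connect(src, thr, snk)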
I would guess you need a throttle in your flowgraph, or the sampling rate between blocks is incorrect.
It's almost impossible to help you unless you post your .grc file or an image of it.