GNU Radio - bit rate

I probably have a very stupid/simple question for GNU Radio users.
I have a Random Source as a source of bits [-1, 1], and I want to multiply every bit with a cosine to make a BPSK modulator.
The problem is that the bits are generated as fast as possible (they have nothing to do with samp_rate). Over one period of the cosine, many bits are generated by the Random Source.
The question is: how can I slow down the bit rate generation?
Thanks for any help.
(I don't want to use DPSK Mod :))

Strictly speaking, you cannot delay the generation of bits. However, you can increase the duration of each symbol. This can be done with the Repeat block of GNU Radio. This block takes a parameter called interpolation that corresponds to the number of times an input item is repeated at the output.
So you find the period of your cosine in samples, say p. Each random bit produced by the Random Source block is repeated p times by the Repeat block. This way you increase the duration of your random symbol. Then you pass the resulting samples to the Multiply block of your flowgraph.
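For reference, here is a minimal sketch of that flowgraph using GNU Radio's Python API (block names as in GNU Radio 3.8+; samp_rate and the carrier frequency are placeholder values, so check them against your version and setup):

from gnuradio import gr, blocks, analog

samp_rate = 32000
f_cos = 1000                                    # carrier frequency (assumed)
p = samp_rate // f_cos                          # samples per cosine period

tb = gr.top_block()
bits = analog.random_uniform_source_b(0, 2, 0)  # random bytes, 0 or 1
to_f = blocks.char_to_float(1, 1.0)
center = blocks.add_const_ff(-0.5)              # 0/1 -> -0.5/+0.5
scale = blocks.multiply_const_ff(2.0)           # -> -1/+1
rep = blocks.repeat(gr.sizeof_float, p)         # hold each symbol for p samples
carrier = analog.sig_source_f(samp_rate, analog.GR_COS_WAVE, f_cos, 1.0)
mixer = blocks.multiply_ff()

tb.connect(bits, to_f, center, scale, rep)
tb.connect(rep, (mixer, 0))
tb.connect(carrier, (mixer, 1))
# connect mixer to a sink of your choice, then call tb.run()

The same structure can be built in GNU Radio Companion by placing a Repeat block (with interpolation p) between the Random Source and the Multiply block.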

Big Oh! algorithms running in O(4^N)

For algorithms running in O(4^N): if we triple the size, the time is multiplied by what number?
This is an interesting question because while equivalent questions for runtimes like Θ(n) or Θ(n^3) have clean answers, the answer here is a bit more nuanced.
Let's start with a simpler question. We have an algorithm whose runtime is Θ(n^2), and on a "sufficiently large" input the runtime is T seconds. What should we expect the runtime to be once we triple the size of the input? To answer this, let's imagine, just for simplicity's sake, that the actual runtime of this function is closely approximated by cn^2, and let's have k be the "sufficiently large" input we plugged into it. Then, plugging in 3k, we see that the runtime is
c(3k)^2 = 9ck^2 = 9(ck^2) = 9T.
That last step follows because the cost of running the algorithm on an input of size k is T, meaning that ck^2 = T.
Something important to notice here - tripling the size of the input does not change the fact that the runtime here is Θ(n^2). The runtime is still quadratic; we're just changing how big the input is.
More generally, for any algorithm whose runtime is Θ(n^m) for some fixed constant m, the runtime will grow by roughly a factor of 3^m if you triple the size of the input. That's because
c(3k)^m = 3^m * ck^m = 3^m * T.
But something interesting happens if we try performing this same analysis on a function whose runtime is Θ(4^n). Let's imagine that we ran this algorithm on some input k and it took T time units to finish. Then running this algorithm on an input of size 3k will take time roughly
c * 4^(3k) = c * 4^k * 4^(2k) = T * 4^(2k) = 16^k * T.
Notice how we aren't left with a constant multiple of the original cost, but rather something that's 16^k times bigger. In particular, that means that the amount by which the algorithm slows down will depend on how big the input is. For example, the slowdown going from input size 10 to input size 30 is a factor of 16^10, while the slowdown going from input size 60 to input size 180 is a staggering 16^60. For what it's worth, 16^60 = 2^240, which is pretty close to the number of atoms in the observable universe.
And, intuitively, that makes sense. Exponential functions grow at a rate proportional to how big they already are, so doubling or tripling the size of the input leads to a runtime change that depends on the size of that input.
And, as above, notice that the runtime is not now Θ(4^(3n)). The runtime is still Θ(4^n); we're just changing which inputs we're plugging in.
So, to summarize:
The runtime of the function slows down by a factor of 4^(2n) = 16^n if you triple the size of the input n. This means that the slowdown depends on how big the input is.
The runtime of the function stays at Θ(4^n) when we do this. All that's changing is where we're evaluating the 4^n.
Hope this helps!
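As a quick numeric sanity check of that factor, here is a throwaway Python snippet (using exact integer arithmetic; the function name is made up):

# Model runtime T(n) = 4^n, dropping the constant factor c.
def t(n):
    return 4 ** n

for n in (5, 10, 20):
    # The slowdown from input size n to 3n should be 4^(2n) = 16^n.
    assert t(3 * n) // t(n) == 16 ** n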
The time complexity of an algorithm describes how its run time grows with respect to growth in the input size. So if our input size triples, we simply have a new value for the input size.
Hence, the time complexity of the algorithm still remains the same, i.e. O(4^N).

Why does my simulation compute only a certain number of digits before only changing the power magnitude?

I am using another person's code to try to demonstrate this problem in physics:
A large mass M collides with a smaller mass m, which then rebounds off a wall and returns to collide with the larger mass M. This process repeats until the larger mass has turned around and the sign of its velocity flips. If the larger block is 16*100^n times more massive than the smaller block (where n is an integer), the number of collisions between the large block and the small block computes the first (n+1) digits of pi. For example, when the block is 1600 times bigger there are 31 collisions; if the block is 16000000 times bigger, there are 3141 collisions.
I wrote my code in VPython and it works, but only up to a point. I was able to get 31415 collisions with the original code. When I make N=5 the simulation completely fails and the screen turns black; apparently this is because the time step is not small enough. So I made the time step smaller to see if it could compute more digits, and it does: I was able to count 314159 collisions by changing the time step to 0.00001. But then I input N=6 and it collapses again. So I decreased the time step further, to 0.000001, and it works, but it only gives me the number 3.14159e+6 without the extra digit of pi.
Can someone please tell me why this is? Why do I not get the next digit? Is my computer not powerful enough? I do not need to actually fix this problem; that is not the point. I just need to understand the limitations of my simulation and computer, and why they cannot compute the next digit.

How to slow down a file source in GNU Radio?

I'm attempting to unpack bytes from an input file in GNU Radio Companion into a binary bitstream. My problem is that the Unpack K Bits block works at the same sample rate as the file source. So by the time the first bit of byte 1 is clocked out, byte 2 has already been loaded. How do I either slow down the file source or speed up the Unpack K Bits block? Is there a way I can tell GNU Radio Companion to repeat each byte from the file source 8 times?
Note that "after pack" is displaying 4 times as much data as "before pack".
My problem is that the Unpack K Bits block works at the same sample rate as the file source
No, it doesn't. Unpack K Bits is an interpolator block. In your case the interpolation is 8: for every input byte, 8 new bytes (one per bit) are produced.
The result is right, but the time scale of your sink is wrong. You have to change the sampling rate at the second GUI Time Sink to fit the true sampling rate of the flowgraph after the Unpack K Bits.
So instead of 32e3 it should be 8*32e3.
Manos' answer is very good, but I want to add to this:
This is a common misunderstanding for people who have just come into contact with digital signal processing down at the sample level:
GNU Radio doesn't have a notion of sampling rate itself. The term sampling rate is only used by certain blocks, e.g. to calculate the period of a sine in samples (in the case of the signal source: period = f_sample/f_signal), or to calculate the times or frequencies that are written on display axes (as in your case).
"Slowing down" means "making the computer process samples slower", but doesn't change the signal.
All you need to do is make the time units the displaying sink shows match the actual rate of the stream you feed it.
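As an illustration, a minimal version of such a flowgraph in GNU Radio's Python API might look like this (the file names and the 32e3 rate are placeholders):

from gnuradio import gr, blocks

samp_rate = 32e3
tb = gr.top_block()
src = blocks.file_source(gr.sizeof_char, "input.bin", False)
unpack = blocks.unpack_k_bits_bb(8)   # 1 input byte -> 8 output items (bits)
sink = blocks.file_sink(gr.sizeof_char, "bits.bin")
tb.connect(src, unpack, sink)
# A GUI Time Sink placed after unpack would need to be configured with
# 8*samp_rate, not samp_rate, for its time axis to be correct.
tb.run()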

Need help generating discrete random numbers from distribution

I searched the site but did not find exactly what I was looking for. I want to generate a discrete random number from a normal distribution.
For example, say I have a range from a minimum of 4 to a maximum of 10, with an average of 7. What code or function call (Objective-C preferred) would I need to return a number in that range? Naturally, due to the normal distribution, more of the returned numbers would center around the average of 7.
As a second example, can the bell curve/distribution be skewed toward one end or the other? Let's say I need to generate a random number in a range with a minimum of 4 and a maximum of 10, and I want the majority of the numbers returned to center around the number 8, with a natural falloff based on a skewed bell curve.
Any help is greatly appreciated.
Anthony
What do you need this for? Can you do it the craps player's way?
Generate two random integers in the range of 2 to 5 (inclusive, of course) and add them together. Or flip a coin (0,1) six times and add 4 to the result.
Summing multiple dice produces a normal distribution (a "bell curve"), while eliminating high or low throws can be used to skew the distribution in various ways.
The key is you are going for discrete numbers (and I hope you mean integers by that). Multiple dice throws famously generate a normal distribution. In fact, I think that's how we were first introduced to the Gaussian curve in school.
Of course the more throws, the more closely you approximate the bell curve. Rolling a single die gives a flat line. Rolling two dice just creates a ramp up and down that isn't terribly close to a bell. Six coin flips gets you closer.
So consider this...
If I understand your question correctly, you only have seven possible outcomes--the integers (4,5,6,7,8,9,10). You can set up an array of seven probabilities to approximate any distribution you like.
Many frameworks and libraries have this built-in.
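For instance, in Python (the weights here are invented purely for illustration; any language with a weighted-choice routine works the same way):

import random

values = [4, 5, 6, 7, 8, 9, 10]
weights = [1, 3, 6, 8, 6, 3, 1]   # roughly bell-shaped, peaked at 7
sample = random.choices(values, weights=weights, k=1)[0]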
Also, just like TokenMacGuy said, a normal distribution isn't characterized by the interval it's defined on, but rather by two parameters: the mean μ and the standard deviation σ. With both of these parameters you can confine a certain quantile of the distribution to a certain interval, so that, say, 95% of all points fall in that interval. But restricting it completely to any interval other than (−∞, ∞) is impossible.
There are several methods to generate normally distributed values from uniform random values (which is what most random or pseudorandom number generators produce):
The Box-Muller transform is probably the easiest, although not exactly fast to compute. Depending on how many numbers you need, it should be sufficient, and it is definitely very easy to write (see the sketch at the end of this answer).
Another option is Marsaglia's polar method, which is usually faster [1].
A third method is the ziggurat algorithm, which is considerably faster to compute but much more complex to program. In applications that really use a lot of random numbers it may be the best choice, though.
As general advice, though: don't write it yourself if you have access to a library that already generates normally distributed random numbers for you.
For skewing your distribution I'd just use a regular normal distribution, choosing μ and σ appropriately for one side of your curve, then determining on which side of your desired mean a point fell and stretching it appropriately to fit your desired distribution.
For generating only integers, I'd suggest you round to the nearest integer when the random number falls within your desired interval, and reject it if it doesn't (drawing a new random number in that case). This way you won't artificially skew the distribution (as you would if you clamped the values at 4 or 10, respectively).
[1] In testing with deliberately bad random number generators (yes, worse than RANDU) I've noticed that the polar method can result in an endless loop, rejecting every sample. This won't happen with random numbers that fulfill the usual statistical expectations, though.
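To make the first method and the rounding advice above concrete, here is a minimal Python sketch (the μ and σ values are placeholders; in real code prefer a library routine such as random.gauss):

import math
import random

def box_muller(mu, sigma):
    # Turn two uniform samples into one normally distributed sample.
    u1, u2 = random.random(), random.random()
    z = math.sqrt(-2.0 * math.log(1.0 - u1)) * math.cos(2.0 * math.pi * u2)
    return mu + sigma * z

def discrete_normal(lo=4, hi=10, mu=7.0, sigma=1.5):
    # Round to the nearest integer; redraw anything outside [lo, hi]
    # instead of clamping, so the shape inside the range is preserved.
    while True:
        n = round(box_muller(mu, sigma))
        if lo <= n <= hi:
            return n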
Yes, there are sophisticated mathematical solutions, but for "simple but practical" I'd go with Nosredna's comment. For a simple Java solution:
import java.util.Random;

private final Random random = new Random();

// Sum of six coin flips plus 4: a discrete, roughly bell-shaped
// distribution over 4..10, centered on 7.
public int bell7() {
    int n = 4;
    for (int x = 0; x < 6; ++x)
        n += random.nextInt(2);   // add 0 or 1
    return n;
}
If you're not a Java person, Random.nextInt(n) returns a random integer between 0 and n-1. I think the rest should be similar to what you'd see in any programming language.
If the range were large, then instead of nextInt(2) calls I'd use a bigger number there, so there would be fewer iterations through the loop, depending on the frequency of calls and the performance requirements.
Dan Dyer and Jay are exactly right. What you really want is a binomial distribution, not a normal distribution. The shape of a binomial distribution looks a lot like a normal distribution, but it is discrete and bounded whereas a normal distribution is continuous and unbounded.
Jay's code generates a binomial distribution with 6 trials and a 50% probability of success on each trial. If you want to "skew" your distribution, simply change the line that decides whether to add 1 to n so that the probability is something other than 50%.
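In Python terms, that one-line change might look like this (a toy sketch; p = 2/3 is an arbitrary choice that moves the mean of the 4..10 range from 7 to 8):

import random

def skewed_bell(p=2/3, trials=6, base=4):
    # Binomial(trials, p) shifted by base; p != 0.5 skews the peak.
    return base + sum(random.random() < p for _ in range(trials))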
The normal distribution is not described by its endpoints. Normally it's described by its mean (which you have given as 7) and its standard deviation. An important consequence is that it is possible to get a value from this distribution far outside the expected range, although that becomes vanishingly rare the further you get from the mean.
The usual means of getting a value from a distribution is to generate a random value from a uniform distribution, which is easily done with, for example, rand(), and then use it as the argument to the inverse of the cumulative distribution function, which maps probabilities to upper bounds. For the normal distribution, the CDF is
F(x) = 0.5 + 0.5*erf((x - μ)/(σ*sqrt(2)))
where erf() is the error function, which may be described by a Taylor series:
erf(z) = 2/sqrt(π) * Σ_{n=0 to ∞} (-1)^n * z^(2n+1) / (n! * (2n+1))
I'll leave it as an exercise to translate this into C.
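For reference, here is a direct (and deliberately naive) rendering of that series in Python; the standard library's math.erf should be preferred in real code:

import math

def erf_series(z, terms=30):
    # Truncated Taylor series for erf(z); fine for modest |z|.
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * z ** (2 * n + 1) / (math.factorial(n) * (2 * n + 1))
    return 2.0 / math.sqrt(math.pi) * total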
If you prefer not to engage in the exercise, you might consider using the GNU Scientific Library, which, among many other features, can generate random numbers from many common distributions, of which the Gaussian distribution (hint) is one.
Obviously, all of these functions return floating point values. You will have to use some rounding strategy to convert to a discrete value. A useful (but naive) approach is to simply downcast to integer.

Mathematical analysis of a sound sample (as an array of numbers)

I need to find the frequency of a sample, stored (in VB) as an array of bytes. The sample is a sine wave of known frequency (so I can check), but the numbers are a bit odd, and my maths-foo is weak.
The full range of values is 0-255. 99% of the numbers are in the range 235 to 245, but there are some outliers, down to 0 and 1 and up to 255, in the remaining 1%.
How do I normalise this to remove the outliers (calculating the 235-245 interval, as it may change with different samples), and how do I then calculate zero crossings to get the frequency?
Apologies if this description is rubbish!
The FFT is probably the best answer, but if you really want to do it by your method, try this:
To normalize, first make a histogram counting the occurrences of each value from 0 to 255. Then throw out X percent of the values from each end with something like:
// Note: N*X/100, not N*(X/100); with integer X < 100, X/100 would truncate to 0.
for (i = lower = 0; i < N * X / 100; lower++)
    i += count[lower];
// repeat in the other direction (from 255 downward) to find upper
Now normalize with
A[i] = 255*(A[i]-lower)/(upper-lower)-128
Throw away results outside the -128..127 range.
Now you can count zero crossings. To make sure you are not fooled by noise, you might want to keep track of the slope over the last several points, and only count crossings when the average slope is going the right way.
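The same recipe as a rough Python sketch (the 1-percent trim is an arbitrary choice, and a is assumed to be an integer array of byte values 0..255):

import numpy as np

def normalize(a, x_percent=1.0):
    # Histogram of byte values, then the lower/upper cutoffs that exclude
    # x_percent of the samples from each end.
    count = np.bincount(a, minlength=256)
    cutoff = len(a) * x_percent / 100
    lower = np.searchsorted(np.cumsum(count), cutoff)
    upper = 255 - np.searchsorted(np.cumsum(count[::-1]), cutoff)
    # Rescale to -128..127 and throw away anything outside that range.
    out = 255 * (a.astype(float) - lower) / (upper - lower) - 128
    return out[(out >= -128) & (out <= 127)]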
The standard method of attacking this problem is to take one block of data, ideally covering at least two periods of the actual frequency (taking more data isn't bad, so it's good to overestimate a bit), then take the FFT and guess that the frequency corresponds to the largest peak in the resulting FFT spectrum.
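A hedged sketch of that idea in Python with NumPy (synthetic samples stand in for the real byte array, and the sample rate is an assumed placeholder):

import numpy as np

fs = 8000                                   # sample rate, assumed known
t = np.arange(1024) / fs
x = 240 + 5 * np.sin(2 * np.pi * 440 * t)   # sine riding on a DC offset
x = x - x.mean()                            # remove DC so bin 0 doesn't win
spectrum = np.abs(np.fft.rfft(x))
freq = np.fft.rfftfreq(len(x), 1 / fs)[np.argmax(spectrum)]
print(freq)                                 # ~440 Hz, within bin resolution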
By the way, very similar problems have been asked here before - you could search for those answers as well.
Use the Fourier transform; it's much less sensitive to noise than counting zero crossings.
Edit: #WaveyDavey
I found an F# library to do an FFT. From here:
"As it turns out, the best free implementation that I've found for F# users so far is still the fantastic FFTW library. Their site has a precompiled Windows DLL. I've written minimal bindings that allow thread-safe access to FFTW from F#, with both guru and simple interfaces. Performance is excellent: 32-bit Windows XP Pro is only up to 35% slower than 64-bit Linux."
Now, I'm sure you can call an F# library from VB.NET, C#, etc.; that should be in their docs.
If I understood your description correctly, what you have is a signal that is a combination of a sine plus a constant plus some random glitches, say, like
x[n] = A*sin(f*n + phi) + B + N[n]
where N[n] is the "glitch" noise you want to get rid of.
If the glitches are one sample long, you can remove them with a median filter whose window extends beyond the glitch on both sides. For glitches of length 1, a median over 3 samples is enough:
y[n] = median3(x[n-1], x[n], x[n+1])
The median is computed as follows: take the samples you want to filter (x[n-1], x[n], x[n+1]), sort them, and output the middle one.
Now that the glitch noise is gone, get rid of the constant signal. I understand the buffer is of a limited and known length, so you can just compute the mean of the whole buffer and subtract it.
Now you have your single sine signal. You can compute the fundamental frequency by counting zero crossings: count the number of samples above 0 whose previous sample was below 0. The period is the total number of samples in your buffer divided by this count, and the frequency is the reciprocal (1/x) of the period.
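Putting the whole recipe together as a rough Python sketch (synthetic signal and made-up parameters; the median window and rates would come from your actual data):

import numpy as np

fs = 8000
n = np.arange(2048)
x = 240 + 5 * np.sin(2 * np.pi * 440 * n / fs)
x[100] = 0                                  # inject a one-sample glitch

y = np.median(np.stack([x[:-2], x[1:-1], x[2:]]), axis=0)   # 3-sample median
y = y - y.mean()                            # remove the constant offset
crossings = np.count_nonzero((y[:-1] < 0) & (y[1:] >= 0))   # rising crossings
freq = crossings / len(y) * fs              # cycles per buffer -> Hz
print(freq)                                 # ~440 Hz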
Although I would go with the majority and say that what you want is an FFT solution (the FFT algorithm is pretty quick), if the FFT is not the answer for whatever reason, you may want to try fitting a sine curve to the data with a fitting program and reading the frequency off the fit.
Using Fityk, you can load the data and fit it to a*sin(b*x - c), where b/(2*pi) gives you the frequency after fitting (2*pi/b is the period).
Fityk can be used from a GUI or from the command line for scripting, and it has a C++ API, so it could be included in your programs directly.
I googled for "basic fft" and found a Visual Basic FFT implementation. Your question screams FFT, but be careful: using the FFT without understanding even a little bit of DSP can lead to results that you don't understand, or don't know where they came from.
Get the Frequency Analyzer at http://www.relisoft.com/Freeware/index.htm, run it, and look at the code.