Which ns2 protocol is the most practical to use? (awk)

My trace file from an AODV run is more than 1 GB. My problem is which protocol I should use: AODV, DSR, DSDV, or another.
My scenario has about 300 nodes, 3 of which are mobile. I am having great difficulty writing awk scripts to compute the evaluation metrics for my scenario.
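A minimal sketch of one such metric (packet delivery ratio), assuming the old ns2 wireless trace format where application-layer events look like `s 10.0 _1_ AGT --- 5 cbr 512 ...`; field positions vary between ns2 versions, so check the indices against your own trace before trusting the numbers:

```python
# Sketch: compute packet delivery ratio (PDR) from an ns2 wireless trace.
# Assumes the *old* ns2 trace format, e.g.:
#   s 10.0 _1_ AGT --- 5 cbr 512 ...
#   r 10.2 _3_ AGT --- 5 cbr 512 ...
# Field positions vary between ns2 versions -- adjust the indices
# to match your own trace.

def packet_delivery_ratio(lines):
    sent = received = 0
    for line in lines:
        fields = line.split()
        if len(fields) < 7:
            continue
        event, layer = fields[0], fields[3]
        if layer != "AGT":          # only count application-layer packets
            continue
        if event == "s":
            sent += 1
        elif event == "r":
            received += 1
    return received / sent if sent else 0.0

# Stream the file line by line -- a 1 GB trace must not be read at once:
# with open("out.tr") as f:
#     print(packet_delivery_ratio(f))
```

Because the function consumes an iterable of lines, it works on the open file object directly and never holds the whole 1 GB trace in memory.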

Related

USRP N210 overflows in virtual machine using GnuRadio

I am using the USRP N210 through a Debian (4.19.0-6-amd64 #1 SMP Debian 4.19.67-2+deb10u1) VM and very quickly run into processing overflows. GNU Radio Companion prints the letter "D" the moment one of the CPUs reaches 100 % load. I tested this by increasing the number of taps of a low-pass filter, as shown in the picture, at a sampling rate of 6.25 MHz.
I have followed all the instructions in "How to tune a USRP", except setting the CPU governor, which I cannot do because of a missing driver reported by cpufreq-info. The exact output is
No or unknown cpufreq driver is active on this CPU.
The output of the lscpu command is also shown in a picture.
Does anyone have an idea how I can resolve this problem? Or is GNU Radio just not fully supported in VMs?
Dropping packets when your CPU can't keep up is expected; the "D" you see is the direct effect of that.
The problem most likely lies not within your VM, but with the virtualizer.
Virtualization adds overhead, and while modern virtualizers have gotten pretty good at it, you're asking an application with hard real-time requirements to run under high network load.
This can eat CPU cycles on the host side that your VM doesn't even know about – your 100 % is less than it looks!
So, first of all, make sure your virtualizer does as little with the network traffic as possible: in particular, no NAT; ideally, hardware bridging.
Then, the frequency-xlating FIR definitely isn't the highest-performing block. Try using a rotator instead, followed by an FFT FIR. In your case, let that FIR decimate by a factor of 2 – you've already done enough low-pass filtering to reduce the sampling rate without getting aliases.
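The rotator-plus-decimating-FIR chain suggested here can be sketched in plain Python. This only illustrates the signal math – in GNU Radio you would use the actual rotator and FFT FIR blocks, which are far faster:

```python
# Sketch of the rotator + decimating FIR chain, for illustration only.
# In GNU Radio, use the rotator block followed by an FFT FIR filter
# configured with decimation 2; this shows what those blocks compute.
import cmath

def rotate(samples, freq_shift, samp_rate):
    """Mix the signal down by freq_shift Hz (what a rotator block does)."""
    phase_inc = -2j * cmath.pi * freq_shift / samp_rate
    return [s * cmath.exp(phase_inc * n) for n, s in enumerate(samples)]

def fir_decimate(samples, taps, decim=2):
    """Apply a FIR filter, keeping every decim-th output sample."""
    out = []
    for n in range(len(taps) - 1, len(samples), decim):
        acc = sum(taps[k] * samples[n - k] for k in range(len(taps)))
        out.append(acc)
    return out

# Example: shift a 1 MHz tone to DC, then low-pass and decimate by 2.
fs = 6.25e6
tone = [cmath.exp(2j * cmath.pi * 1e6 * n / fs) for n in range(64)]
baseband = rotate(tone, 1e6, fs)
taps = [0.25, 0.5, 0.25]   # toy low-pass; design real taps properly
decimated = fir_decimate(baseband, taps, decim=2)
```

Note how the FIR only ever computes the outputs it keeps – that is exactly why moving the decimation into the filter saves CPU compared to filtering at the full rate and discarding samples afterwards.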
Lastly, it might be a good idea to use a newer version of GNU Radio: in Debian testing, apt will get you a 3.8-series GNU Radio.

Calculating throughput in lte network simulation in ns3

I want to calculate throughput, packet loss, and other network parameters in an LTE network that I simulate in ns-3. I get three trace files as results: one each for the MAC, RLC, and PDCP traces.
On the ns-3 website, the trace part of the LTE documentation is empty!
https://www.nsnam.org/docs/release/3.10/manual/html/lte.html
I do not understand which part you are referring to, as there is no 'trace' section in the LTE manual. Also, use the latest version of the ns-3 documentation, not the one from ns-3.10 (almost 5 years old).
Specifically, the LTE output section explains what the files you get contain. You would then need to analyse those files to get the total throughput or other statistics.
In addition to the LTE specific output, you can use FlowMonitor and/or the data collection framework (both explained in the ns-3 manual) and with examples available in the codebase.
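As a sketch of analysing those files, here is one way to get per-UE downlink throughput out of the RLC stats file. The column layout assumed below (start, end, CellId, IMSI, RNTI, LCID, nTxPDUs, TxBytes, nRxPDUs, RxBytes, ...) matches recent ns-3 releases, but verify it against the '%' header line of your own `DlRlcStats.txt` before relying on the indices:

```python
# Sketch: total downlink throughput per IMSI from ns-3's DlRlcStats.txt.
# Column layout assumed (check the '%' header line of your own file):
#   start end CellId IMSI RNTI LCID nTxPDUs TxBytes nRxPDUs RxBytes ...

def rlc_throughput(lines):
    rx_bytes = {}
    t_min, t_max = float("inf"), 0.0
    for line in lines:
        if line.startswith("%") or not line.strip():
            continue                          # skip header / blank lines
        f = line.split()
        start, end = float(f[0]), float(f[1])
        imsi = int(f[3])
        rx_bytes[imsi] = rx_bytes.get(imsi, 0) + int(f[9])  # RxBytes
        t_min, t_max = min(t_min, start), max(t_max, end)
    duration = t_max - t_min
    # bits per second per IMSI, averaged over the whole trace duration
    return {imsi: b * 8 / duration for imsi, b in rx_bytes.items()}

# with open("DlRlcStats.txt") as f:
#     print(rlc_throughput(f))
```

The same pattern applies to the MAC and PDCP stats files once you adjust the column indices.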

Usrp1 and Gnuradio

I am trying to set up 5 USRP1 and some daughterboards on 2.4 and 5 GHz.
Some of them are out of order and some work properly, but I don't know which is which. I am trying to send a symbol sequence (QAM modulation), passing it from a file source to both a USRP sink and an FFT sink.
I have been trying to find articles and tutorials on how the sample rates are connected and how to set them up, but I can't work out what I am missing. Could somebody please help with the flow graphs?
128 MS/s is not a rate that is possible with the USRP1. The console will contain a UHD warning that a different, possible rate was chosen (most likely 8 MS/s).
Now, you contradict that rate by having a "Throttle" block in your flow graph – that block's only job is to slow down the average rate at which samples are let through, and that is something your "USRP Sink" already does. In fact, modern versions of GRC will warn you that using a Throttle block in the same flow graph as a hardware sink or source is a bad idea.
Now, you'll say "ok, if the USRP sink actually needs to consume only 8 MS/s, and my interpolator makes 128 MS/s out of my nominal 1 MS/s flow (really, signals within GNU Radio don't have a sampling rate), then that has got to be fast enough to satisfy the 8 MS/s demand!".
But the fact is that a 128× interpolator is a really CPU-intensive thing, and the rate it actually achieves might not be that high – made even worse by the choppy way Throttle works.
In fact, your interpolator is totally unnecessary. The USRP internally has proper interpolators for integer fractions of its 64 MS/s master clock rate, which means you can set the USRP Sink to a sampling rate of 1 MS/s and connect the file source to it directly.
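The "integer fractions of the master clock" rule can be sketched in a few lines – the exact allowed divisor range depends on the device and UHD version, so the bound of 256 here is just an illustrative assumption:

```python
# Sketch: which sample rates a USRP1 can deliver, assuming rates are
# integer fractions of the 64 MS/s master clock as described above.
# The divisor bound (256) is an illustrative assumption, not a spec value.
MASTER_CLOCK = 64e6

def achievable_rates(max_divisor=256):
    return [MASTER_CLOCK / n for n in range(1, max_divisor + 1)]

def nearest_rate(requested, max_divisor=256):
    """The rate UHD would actually pick for a requested rate."""
    return min(achievable_rates(max_divisor), key=lambda r: abs(r - requested))
```

This is why requesting 128 MS/s gets coerced down, while 1 MS/s (64 MS/s / 64) works exactly.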

Retrieve data from USRP N210 device

The N210 is connected to the RF frontend, which gets configured using the GNU Radio Companion.
I can see the signal in the FFT plot; I need the received signal (usrp2 output) as digital numbers. The usrp_sense_spectrum.py script outputs the power and noise_floor as digital numbers as well.
I would appreciate any help from your side.
Answer from the USRP/GNU Radio mailing lists:
Dear Abs,
you've asked this question on discuss-gnuradio and have already got two
answers. In case you missed those, and to avoid people telling
you what you already know:
Sylvain wrote that, due to the large number of factors contributing to
what you see as digital amplitude, you will need to calibrate it
yourself, using exactly the system you want to use to measure power:
You mean you want the signal power as a dBm value ?
Just won't happen ... Too many things in the chain, you'd have to
calibrate it for a specific freq / board / gain / temp / phase of the
moon / ...
And I explained that if you have a mathematical representation of how
your estimator works, you might be able to write a custom estimator
block for both of your values of interest:
I assume you already have definite formulas that define the estimator for these two numbers.
Unless you can directly "click together" that estimator in GRC, you will most likely have to implement it.
In many cases, doing this in Python is rather easy (especially if you come from a Python or MATLAB background),
so I'd recommend reading at least the first 3 chapters of
https://gnuradio.org/redmine/projects/gnuradio/wiki/Guided_Tutorials
If these answers didn't help you out, I think it would be wise to
explain what these answers are lacking, instead of just re-posting the
same question.
Best regards, Marcus
I suggest that you write a Python application and stream raw UDP bytes from the USRP block. Simply add a UDP Sink block and connect it to the output of the UHD: USRP Source block. Select an appropriate port and stream to 127.0.0.1 (localhost).
Now, in your Python application, open a listening UDP socket on the same port and receive the data. Each sample from the UHD: USRP Source is a complex pair of single-precision floats, i.e. 8 bytes per sample; the I float comes first, followed by the Q float.
Note that you need to pay special attention to the Payload Size field in the UDP Sink. Since you are streaming to localhost, you can use a very large value here; I suggest something like 1024*8, which means each packet will contain 1024 IQ pairs.
I suggest you first connect a Signal Source and just pipe a sine wave over the UDP socket into your Python or C application. This will let you verify that you are interpreting the float bytes correctly. Make sure to check for glitches due to overflowing buffers (this will be your biggest problem).
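A minimal sketch of such a listener, following the 8-bytes-per-sample layout described above (the port number is an arbitrary example, and little-endian byte order is assumed, which is what common hosts produce):

```python
# Sketch: receive the raw IQ stream from the UDP Sink and unpack it.
# Each sample is a complex pair of single-precision floats: 8 bytes,
# I first, then Q. Little-endian order is assumed (typical x86 host);
# the port is an arbitrary example value.
import socket
import struct

PORT = 52001
PAYLOAD = 1024 * 8      # must match the UDP Sink's "Payload Size"

def bytes_to_samples(data):
    """Convert raw bytes into a list of complex IQ samples."""
    n = len(data) // 8
    floats = struct.unpack("<%df" % (2 * n), data[:n * 8])
    return [complex(floats[2 * i], floats[2 * i + 1]) for i in range(n)]

def receive_samples():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", PORT))
    while True:
        data, _ = sock.recvfrom(PAYLOAD)
        yield from bytes_to_samples(data)
```

With the sine-wave test described above, printing the first few samples from `receive_samples()` should show a smoothly rotating complex tone – any jumps indicate dropped packets or a byte-order mistake.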
Please comment or update your post if you have further questions.

tc (netem) - reported loss is different to set loss

I'm doing some testing on some links, benchmarking, and I'm using the tc command to simulate packet loss across WANs.
What I have found is that after issuing the command to drop 15 % of packets, the reported packet loss is different for different streams within my VoIP analysis.
The setup is simple: two clients attached to a server through an L2 switch. There is minimal packet loss normally, with only around 50 machines on this switch.
I'm looking for factors which have made this so, are there facets of the 'tc' command I am not familiar with?
http://i.stack.imgur.com/OBzxH.png [ss of packet loss]
[edit]
The 26.1% is loss I caused, it is the other two I am curious about.
It turns out that Wireshark was causing my problem: when a VoIP call was put on hold, the stream switched to the hold music from the server, and when the user was taken off hold the sequence numbers had jumped a few hundred, making Wireshark think there was packet loss.