How to calculate LTE handover delay in NS-3

Hello stackoverflow community,
I am trying to calculate the handover delay for an LTE simulation using the LENA module in NS-3. To do so, I use the Simulator::Now().GetSeconds() function to log the current simulation time at the beginning and at the end of the handover process.
Surprisingly, the results from several handovers (the distance between the UE and eNB nodes differs, but the UE speed is the same in all runs) all show the same handover delay, as if it were constant. The calculated delay also seems too small to be true. Some results:
12.12 Prepare Handover (start)
12.12 Handover Recv Ack
12.1242 Handover Done To 8 (end)
^^(4.2ms)
19.8 Prepare Handover (start)
19.8 Handover Recv Ack
19.8042 Handover Done To 6 (end)
^^(4.2ms)
Am I calculating it wrong?
Or is it due to a misconfiguration of my scenario? If so, would you mind pointing me to configuration options that affect handover delay? I have already tried adding noise and an X2 link delay, but no luck there.
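For reference, here is roughly how I take the timestamps: a minimal sketch modeled on the stock lena-x2-handover example, hooking the standard LteUeRrc trace sources.

#include "ns3/core-module.h"
#include "ns3/lte-module.h"
#include <iostream>
using namespace ns3;

static Time g_handoverStart;  // stamped when HandoverStart fires

static void
UeHandoverStart (std::string context, uint64_t imsi,
                 uint16_t cellId, uint16_t rnti, uint16_t targetCellId)
{
  g_handoverStart = Simulator::Now ();
  std::cout << g_handoverStart.GetSeconds () << " Prepare Handover (start)\n";
}

static void
UeHandoverEndOk (std::string context, uint64_t imsi,
                 uint16_t cellId, uint16_t rnti)
{
  Time delay = Simulator::Now () - g_handoverStart;
  std::cout << Simulator::Now ().GetSeconds () << " Handover Done To "
            << cellId << " (delay " << delay.GetMilliSeconds () << " ms)\n";
}

// In main (), after the LTE devices are installed:
//   Config::Connect ("/NodeList/*/DeviceList/*/LteUeRrc/HandoverStart",
//                    MakeCallback (&UeHandoverStart));
//   Config::Connect ("/NodeList/*/DeviceList/*/LteUeRrc/HandoverEndOk",
//                    MakeCallback (&UeHandoverEndOk));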

Alright guys, I figured it out: it was because the RRC configuration was ideal instead of real. Although the results are not as good as I expected, they make sense now :P
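For anyone hitting the same thing, the switch is a one-liner (UseIdealRrc is an attribute of ns3::LteHelper; set it before the helper is created):

// In main (), before creating the LTE helper: use the real RRC protocol
// model (with RRC message delays) instead of the ideal, instantaneous one.
Config::SetDefault ("ns3::LteHelper::UseIdealRrc", BooleanValue (false));
Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();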

Related

How to log a particular address from an STM32 NUCLEO-F334R8 with an inbuilt ST-LINK in real time using SWD & openOCD without halting the processor?

I am trying to learn how to debug an MCU non-intrusively using SWD & openOCD.
while (1)
{
    my_count++;                                  // free-running counter
    HAL_GPIO_TogglePin(LD2_GPIO_Port, LD2_Pin);  // toggle the user LED
    HAL_Delay(750);
}
The code running on my MCU has a free-running counter, my_count. I want to sample/trace the data stored at the address holding my_count in real time:
I was doing it this way:
# openOCD console (Tcl); poll the variable's address in a loop
while {1} {
    mdw 0x200000ac    ;# openOCD command to read one 32-bit word from the address
}
0x200000ac is the address of the variable my_count from the .map file.
But this method is very slow and drops data at high update rates.
Is there any other way to trace the data at high frequencies without experiencing data drops?
I did some napkin math, and I have an idea that may work.
As per the reference manual, page 948, the maximum UART baud rate of the STM32F334 is 9 Mbit/s.
If we want to send the memory at that specific address, it will be 32 bits. One bit takes 1/9 Mbit/s, or 1.111*10^(-7) s; multiply that by 32 bits and it comes to 3.555 microseconds. Obviously, as I said, this is purely napkin math: there are start and stop bits involved. But we have a lot of wiggle room, and you could easily fit 64 bits into a transmission too.
Now, from what I've found online, it seems the ST-LINK, based on an STM32F103, has a maximum baud rate of 4.5 Mbit/s. A bummer, but we simply need to double our timings: 3.55*2 = 7.1 us for a 32-bit transmission and 14.2 us for a 64-bit one. Even given some start- and stop-bit overhead, we still seem to fit into our 25 us time budget.
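To put a number on that overhead (a back-of-envelope check, assuming standard 8N1 framing):

4 bytes * 10 bits/frame (1 start + 8 data + 1 stop) = 40 bits on the wire
40 bits / 4.5 Mbit/s ≈ 8.9 us per 32-bit sample

which still fits comfortably inside the 25 us budget.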
So the suggestion is the following:
You set up a timer with a 25 us period that fires an interrupt, which kicks off a DMA UART transmission. That way your MCU has very little overhead, since the DMA autonomously handles the transmission while the MCU does whatever it wants in the meantime. Entering and exiting the timer ISR is in fact the largest part of the overhead this causes, since in the ISR you literally flip a pair of bits to tell the DMA to send the data over UART at 4.5 Mbit/s.
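A minimal sketch of that idea in STM32 HAL terms (the handle names huart2/htim6, and TIM6 as the 25 us timer, are my assumptions, not from the question):

#include "stm32f3xx_hal.h"          // NUCLEO-F334R8 device family header

extern UART_HandleTypeDef huart2;   // assumed UART handle (CubeMX-style name)
extern volatile uint32_t my_count;  // the counter from the question

// Called by the HAL each time the (assumed) TIM6 period of 25 us elapses.
void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
    if (htim->Instance == TIM6) {
        // Fire-and-forget: the DMA streams 4 bytes while the CPU goes back to work.
        HAL_UART_Transmit_DMA(&huart2, (uint8_t *)&my_count, sizeof(my_count));
    }
}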

OFDM system with USRP implementation

I am trying to build an OFDM flowgraph with a USRP, as shown below:
Flowgraph
The flowgraph works fine without any errors. However, there is no received signal at the receiver end as shown in results(1) and results(2).
It seems that the USRP does not get data from the OFDM transmitter, and the signal shown in the Rx spectrum is just noise.
Any recommendations for solving this problem?
Your sampling rate is too low. The USRPs you're using probably aren't actually running at that rate: look for warnings at the beginning of your flowgraph execution about a different sampling rate being used (see the sketch after this list).
The gain is probably too low for over-the-air transmission.
The analog bandwidth settings make no sense. This seems to be a network-connected USRP, and only very few of those have adjustable front-end bandwidths anyway; in any case, using a front-end bandwidth much larger than your sampling rate makes no sense.
Most importantly: This is communications, with a random channel, noise, interferers and device imperfections. It's very normal for packets to get lost.
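To check the first two points, you can read back what the device actually accepted. A small sketch against the UHD C++ API (the device address and the example values are placeholders):

#include <uhd/usrp/multi_usrp.hpp>
#include <iostream>

int main()
{
    // Placeholder address for a network-connected USRP.
    auto usrp = uhd::usrp::multi_usrp::make(uhd::device_addr_t("addr=192.168.10.2"));

    usrp->set_tx_rate(1e6);     // requested sample rate (example value)
    usrp->set_tx_gain(60.0);    // example gain in dB

    // UHD coerces requests to what the hardware supports; compare with your request.
    std::cout << "actual TX rate: " << usrp->get_tx_rate() << " Sps\n"
              << "actual TX gain: " << usrp->get_tx_gain() << " dB\n";
    return 0;
}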

PSK Modulation in GNU Radio with a USRP N310

Currently I am working with Ettus Research's N310 on the implementation of different PSK modulation schemes. I am interested in measuring the bit error rate (BER) for each scheme when I transmit data between two USRPs located right beside each other, so I am using GNU Radio Companion for the software development. In the case of the BPSK transceiver, I use a standard configuration of a vector source and a constellation modulator to create the PSK symbols, which are transmitted at 2.45 GHz using VERT2450 antennas. These antennas work in the frequency ranges 2.4-2.5 GHz and 4.9-5.9 GHz. Since I have a desktop computer with only one Ethernet port, I am using a NetGear GS108 switch, which has 16 Gbps bandwidth and a forwarding rate of 10 Mbps per port. The current software setup is shown in the following figure:
As input I am using a vector of only zeros, since I want to verify that my transceiver correctly detects a single constellation point. However, I get continuous jumps between the constellation points, as you can see in the picture on the left side. I have several questions about my setup:
What is the correct baud rate for each modulation scheme? That is, how many symbols per second should I use for BPSK, QPSK, 8PSK and 16-QAM?
Since the USRP N310 has a default sample rate of 125 MS/s and my desktop machine can only handle 5 MS/s, I have a decimation rate of 25 (sample_rate_usrp / sample_rate_desktop). What value of the sps (samples per symbol) parameter should I assign in each block of the transceiver?
When is the CMA equalizer necessary? Since the USRPs have static positions, there are no frequency changes due to the Doppler effect, so an equalizer should not be necessary. Why is this reasoning not correct? When I removed the equalizer, the constellation diagram appeared as a circle.
Does the Polyphase Clock Synchronization block really synchronize the received signal with the transmitted signal, or can I drop it and replace it with an equalizer?
I would really appreciate it if someone could shed some light on these questions.
Thanks in advance
See my response at https://lists.gnu.org/archive/html/discuss-gnuradio/2020-08/msg00172.html
The 'correct' baud rate is anything you want to use.
You need to check the minimum sample rate for the N310.
The CMA Equalizer is optional under your conditions. I left it out of the BPSK to simplify the flowgraph.
The Polyphase Clock Sync Block recovers the timing of the received signal. The equalization is for fading and is a separate function.
It looks like you're modulating at 8 sps but then demodulating at 16 sps: 8:1 in the Polyphase Clock Sync and 2:1 in the CMA equalizer.
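In numbers: the receive-side decimations multiply, so 8:1 in the clock sync followed by 2:1 in the equalizer means the chain assumes 8 * 2 = 16 samples per symbol at its input, while the transmitter only produced 8. Making them match, for example 8:1 in the clock sync and 1:1 in the equalizer, keeps the chain consistent.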

Max number of encoder pulses through interrupt-on-change on PortB

I am using a PIC16F877A with a 20 MHz crystal and an interrupt-on-change on PORTB, pins 6-7, connected to an encoder. I'm using the encoder to calculate the velocity of a wheel, and I have a doubt: what is the maximum PPR I can use without the program stopping or freezing? Thanks.
I watched a student have this problem in a lab next to me.
Without interrupt shadow registers, you'll probably find the maximum quadrature decoding rate slower than you want: IIRC under 100,000 pps.
You can measure it easily by running your wheel backwards and forwards with a motor, going faster until the counts for the forward and reverse passes no longer line up.
Microchip recommends using the PIC16F18877 in new designs, which has automatic register shadowing on interrupt. All the PIC18-series parts have this feature too, and it raises the rate significantly: IIRC to over 200,000 pps.
I'm sorry I can't give hard numbers; the exact figures stayed with an earlier employer.

GFSK modulation/demodulation with GNU Radio and USRP

I'm currently creating a satellite ground station which will be used to control our CubeSat in the coming months. The modulation scheme used is GFSK and the baud rate is 9600. Before trying to communicate with the satellite, I ran some tests with a USRP board, first connecting the TX and RX blocks in the flowgraph directly. I was able to send and receive a PNG file using this flowgraph.
However, when I connect the TX and RX paths to my USRP B210's TX/RX port (transmission sink) and RX2 port (reception source) as shown below, I receive no data, even though the source and sink have been carefully connected to each other with RF cables and attenuators.
Below are the assumptions I took into account when I was making the second flowgraph. Please tell me if I'm on the right path.
Transmitter side: the Packet Encoder and GFSK Mod blocks use 20 samples per symbol. The baud rate is 9600, so the sample rate is 20 * baud_rate = 192 kS/s. Since the symbol rate expected by the satellite is baud_rate = 9600, I included a rational resampler and set the UHD symbol rate to baud_rate. Is my logic correct?
GFSK Mod and Demod: for both of these blocks I calculated the sensitivity as S = Pi * modulation_index / samples_per_symbol. The default BT value of 0.5 is used. Are my calculations sound? Is there a link to documentation for the GFSK blocks? My derivations are based on the GFSK Python source code, which is a poor substitute for documentation.
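For example, with 20 samples per symbol and a modulation index of h = 0.5 (my assumption here; it is the value MSK-style schemes use), this gives S = Pi * 0.5 / 20 ≈ 0.0785 rad/sample.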
Packet Encoder/Decoder: the output of the Packet Decoder is null, even though the GFSK Demod block gives some kind of output, which is rather meaningless. Is this normal? What is the meaning of the threshold variable, and why is its value -1?
I'm a newbie with GNU Radio as well as GFSK in general, so please drop me any further references.
Thanks in advance.
Moses.
I was finally able to solve the problem. All I did was re-implement the GFSK demod in GRC. If you go into the source of gfsk.py, you will find that the blocks used are Quadrature Demod --> M&M Clock Recovery --> Binary Slicer, which can easily be connected in GRC directly. As Marcus suggested in my other thread, GFSK demodulation with Xlating filter in GNU Radio, I replaced the M&M Clock Recovery block with a PFB Clock Sync block. My flowgraph is shown below.
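For reference, the same chain can also be written against the GNU Radio 3.7-era C++ API. This is only a sketch of the gfsk.py structure, not the exact flowgraph I ran (I wired mine in GRC, with the PFB block instead of M&M); the M&M parameters below are the gfsk.py defaults, and the null source/sink stand in for the USRP and packet blocks:

#include <gnuradio/top_block.h>
#include <gnuradio/gr_complex.h>
#include <gnuradio/analog/quadrature_demod_cf.h>
#include <gnuradio/digital/clock_recovery_mm_ff.h>
#include <gnuradio/digital/binary_slicer_fb.h>
#include <gnuradio/blocks/null_source.h>
#include <gnuradio/blocks/null_sink.h>

int main()
{
    auto tb = gr::make_top_block("gfsk_rx_sketch");

    const double sps = 20.0;        // samples per symbol, from the question
    const double gain_mu = 0.175;   // gfsk.py default

    // FM-demodulate, recover symbol timing, then slice to bits.
    auto fm  = gr::analog::quadrature_demod_cf::make(1.0);  // gfsk.py derives this gain from the sensitivity
    auto mm  = gr::digital::clock_recovery_mm_ff::make(
        sps, 0.25 * gain_mu * gain_mu, 0.5, gain_mu, 0.005);
    auto bit = gr::digital::binary_slicer_fb::make();

    auto src = gr::blocks::null_source::make(sizeof(gr_complex));  // stand-in for the UHD source
    auto dst = gr::blocks::null_sink::make(sizeof(char));          // stand-in for the packet decoder

    tb->connect(src, 0, fm, 0);
    tb->connect(fm, 0, mm, 0);
    tb->connect(mm, 0, bit, 0);
    tb->connect(bit, 0, dst, 0);
    tb->run();   // with a null source this runs forever; swap in real blocks
    return 0;
}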
Even if I cannot answer all of your questions, here are some thoughts:
When using hardware devices, the Throttle block MUST be removed from the flowgraph. The hardware device is then responsible for the rate limiting. Mixing a hardware device and the Throttle block may break the real-time behavior of your flowgraph that the device requires; in such a case the UHD driver should produce underflow or overflow messages.
Are you sure that the USRP can support the requested sampling rate? You may also need to change the master_clock_rate of the device if the requested sampling rate is not an integer decimation of the clock. If this is not possible, consider some kind of resampling.
EDIT: The B200 cannot provide a 192 kS/s sampling rate with the default clock. You can set the master_clock_rate to 19.2e6; the hardware will then apply the proper decimation. The master_clock_rate can be changed either through the device-specific arguments or through the Clock Rate field of the UHD Sink/Source blocks present in the latest GNU Radio versions.
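As a sketch of the device-arguments route (UHD C++ API; "type=b200" selects a B2xx device), assuming a 19.2 MHz master clock so that 192 kS/s is an integer 1:100 decimation:

#include <uhd/usrp/multi_usrp.hpp>
#include <iostream>

int main()
{
    // Request a 19.2 MHz master clock through the device arguments.
    auto usrp = uhd::usrp::multi_usrp::make(
        uhd::device_addr_t("type=b200,master_clock_rate=19.2e6"));

    usrp->set_tx_rate(192e3);   // 19.2e6 / 192e3 = 100, an integer decimation
    std::cout << "actual TX rate: " << usrp->get_tx_rate() << " Sps\n";
    return 0;
}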