I am working on creating a modified version of MRHOF for RPL. However, I
have some doubts about the ETX metric used. I am running the rpl-udp example (..../contiki-3.0/examples/ipv6/rpl-udp).
As per my understanding, the general definition of ETX is the following:
ETX = 1/(df * dr)
where df is the measured probability that a data packet successfully arrives at the recipient and dr is the probability that the ACK packet is successfully received.
The implementation of ETX is defined in neighbor_link_callback(rpl_parent_t *p, int status, int numtx) (contiki/core/net/rpl/rpl-mrhof.c) as below:
new_etx = ((uint32_t)recorded_etx * ETX_ALPHA + (uint32_t)packet_etx * (ETX_SCALE - ETX_ALPHA)) / ETX_SCALE
where
recorded_etx = nbr->link_metric
packet_etx = numtx * RPL_DAG_MC_ETX_DIVISOR, or MAX_LINK_METRIC * RPL_DAG_MC_ETX_DIVISOR when no ACK was received
nbr->link_metric = RPL_INIT_LINK_METRIC * RPL_DAG_MC_ETX_DIVISOR (rpl-dag.c)
RPL_INIT_LINK_METRIC = 2 (rpl-conf.h)
ETX_SCALE = 100
ETX_ALPHA = 90
RPL_DAG_MC_ETX_DIVISOR = 256 (rpl-private.h)
MAX_LINK_METRIC = 10
Every time the link layer receives an ACK or a timeout occurs, the function in this file (neighbor_link_callback) is invoked.
I understand the formal definition of ETX, but when I try to map the standard ETX formula onto ContikiRPL's ETX formula, I have trouble understanding the implementation of ETX in ContikiRPL.
How are the probability that a data packet successfully arrives at the recipient (df) and the probability that the ACK packet is successfully received (dr) implemented in ContikiRPL?
In the code, df and dr are not known individually. The algorithm runs on the sender device, which has no means to differentiate between the case where the data packet is lost and the case where the ACK is lost. They look exactly the same to it: as the absence of an ACK.
The value of packet_etx roughly corresponds to 1 / (df * dr) for the last packet. Note that a single packet may already have required multiple retransmissions on the link. The metric is updated only when the packet is successfully ACKed or when the maximum number of retransmissions is exceeded.
Another issue in Contiki is that, since it is designed for embedded systems, it does not have the memory to keep track of the ETX of many recent packets. Instead, this information is aggregated into a single value with the help of an exponentially weighted moving average (EWMA) filter. The alpha parameter of the filter is given as ETX_ALPHA / ETX_SCALE in the code; the scaling is done to avoid more expensive floating point operations.
The value of recorded_etx is the previous value of the ETX, reflecting the ETX calculated from all of the previous packets. The value of new_etx is the link's ETX after the previous ETX and the last packet's ETX have been combined by the EWMA filter.
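A minimal Python sketch of this EWMA update, using the constants listed above (the helper name and the example values are mine, not Contiki's):
# Sketch of ContikiRPL's EWMA-based ETX update; scaled integer arithmetic as in rpl-mrhof.c.
ETX_SCALE = 100
ETX_ALPHA = 90
RPL_DAG_MC_ETX_DIVISOR = 256
MAX_LINK_METRIC = 10

def update_etx(recorded_etx, numtx, ack_received):
    # ETX estimate of the last packet: the number of transmissions it needed,
    # or the maximum link metric if it was never ACKed.
    if ack_received:
        packet_etx = numtx * RPL_DAG_MC_ETX_DIVISOR
    else:
        packet_etx = MAX_LINK_METRIC * RPL_DAG_MC_ETX_DIVISOR
    # EWMA: new = alpha * old + (1 - alpha) * last, with alpha = ETX_ALPHA / ETX_SCALE
    return (recorded_etx * ETX_ALPHA + packet_etx * (ETX_SCALE - ETX_ALPHA)) // ETX_SCALE

# Example: previous ETX of 2.0 (scaled by 256), packet ACKed after 3 transmissions.
print(update_etx(2 * RPL_DAG_MC_ETX_DIVISOR, 3, True) / RPL_DAG_MC_ETX_DIVISOR)  # ~2.1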
I have been experimenting with message passing in the Signal Source block in GNU Radio companion. I can see from its source code that we can pass messages to change the frequency, amplitude, offset and phase of the source. For example, the following message PMT sent from a message strobe can change the amplitude of the signal to 0.5.
pmt.dict_add(pmt.make_dict(), pmt.intern("ampl"), pmt.from_double(0.5))
But when I viewed the code of the UHD USRP Sink, I couldn't get a clear idea of what commands can be sent to this block or which parameters can be changed. I have read in some places in the documentation that the frequency, gain, LO offset, timestamp, center frequency and other transceiver-related settings of the USRP Sink can be manipulated through command messages.
What commands can be sent to the USRP Sink block from a message strobe (in PMT format), and which parameters (and their keys) can be modified?
This is officially documented:
https://www.gnuradio.org/doc/doxygen/page_uhd.html#uhd_command_syntax
Command name (value type): description
chan (int): Specifies a channel. If this is not given, either all channels are chosen, or channel 0, depending on the action. A value of -1 forces 'all channels', where possible.
gain (double): Sets the Tx or Rx gain (in dB). Defaults to all channels.
power_dbm (double): Sets the Tx or Rx power reference level (in dBm). Defaults to all channels. Works for certain devices only, and only if calibration data is available.
freq (double): Sets the Tx or Rx frequency. Defaults to all channels. If specified without lo_offset, it will set the LO offset to zero.
lo_offset (double): Sets an LO offset. Defaults to all channels. Note this does not affect the effective center frequency.
tune (tune_request): Like freq, but sets a full tune request (i.e. center frequency and DSP offset). Defaults to all channels.
mtune (tune_request_t): Like tune, but supports a full manual tune request as uhd::tune_request_t. Defaults to all channels.
lo_freq (double): For fully manual tuning: sets the LO frequency (RF frequency). Conflicts with freq, lo_offset, and tune.
dsp_freq (double): For fully manual tuning: sets the DSP frequency (CORDIC frequency). Conflicts with freq, lo_offset, and tune.
direction (string): Used for timed transceiver tuning to ensure tuning order is maintained. Values other than 'TX' or 'RX' will be ignored.
rate (double): See usrp_block::set_samp_rate(). Always affects all channels.
bandwidth (double): See usrp_block::set_bandwidth(). Defaults to all channels.
time (timestamp): Sets a command time. See usrp_block::set_command_time(). A value of PMT_NIL will clear the command time.
mboard (int): Specify mboard index, where applicable.
antenna (string): See usrp_block::set_antenna(). Defaults to all channels.
gpio (gpio): PMT dictionary including bank, attr, value, mask for GPIO. See notes.
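For example, a command message using the keys above is built the same way as the Signal Source message earlier; here is a minimal Python sketch (the "command" port name and the example values are assumptions of mine, not taken from the table):
# Build one UHD command dict: retune channel 0 to 2.45 GHz and set 20 dB of Tx gain.
import pmt

cmd = pmt.make_dict()
cmd = pmt.dict_add(cmd, pmt.intern("freq"), pmt.from_double(2.45e9))
cmd = pmt.dict_add(cmd, pmt.intern("gain"), pmt.from_double(20.0))
cmd = pmt.dict_add(cmd, pmt.intern("chan"), pmt.from_long(0))
# Emit this PMT from a Message Strobe, or post it to the sink's command port, e.g.:
# usrp_sink.to_basic_block()._post(pmt.intern("command"), cmd)   # port name assumed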
I'm writing a very specific application protocol to enable communication between 2 nodes. Node 1 is an embedded platform (a microcontroller), while node 2 is a common computer.
This protocol defines messages of variable length. This means that sometimes node 1 sends a message of 100 bytes to node 2, while another time it sends a message of 452 bytes.
The protocol shall be independent of how the messages are transmitted. For instance, the same message can be sent over USB, Bluetooth, etc.
Let's assume that a protocol message is defined as:
| Length (4 bytes) | ...Payload (variable length)... |
I'm struggling with how the receiver can recognise how long the incoming message is. So far, I have thought about 2 approaches.
1st approach
The sender sends the length first (4 bytes, always fixed size), and the message afterwards.
For instance, the sender does something like this:
// assuming that the parameters of send() are: data, length of data,
// and that msg_length counts the whole message, including the 4-byte length field
send(msg_length, 4)
send(msg, msg_length - 4)
While the receiver side does:
msg_length = receive(4)
msg = receive(msg_length - 4)
This may be OK with some "physical protocols" (e.g. UART), but with more complex ones (e.g. USB) transmitting the length in a separate packet may introduce some overhead, since an additional USB packet (along with control data and ACK packets) has to be transmitted for only 4 bytes.
However, with this approach the receiver side is pretty simple.
2nd approach
The alternative would be that the receiver keeps receiving data into a buffer, and at some point tries to find a valid message. Valid means: finding the length of the message first, and then its payload.
Most likely this approach requires adding some "start message" byte(s) at the beginning of the message, such that the receiver can use them to identify where a message is starting.
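For reference, a minimal Python sketch of the buffering side of the 2nd approach, without the start-of-message bytes (the class name and the big-endian, payload-only length convention are assumptions, not part of the protocol above):
import struct

class FrameReceiver:
    """Accumulates raw bytes and extracts complete length-prefixed messages."""
    def __init__(self):
        self.buf = bytearray()

    def feed(self, chunk):
        """Append newly received bytes; return a list of complete message payloads."""
        self.buf.extend(chunk)
        messages = []
        while len(self.buf) >= 4:
            (length,) = struct.unpack_from(">I", self.buf, 0)   # 4-byte big-endian length
            if len(self.buf) < 4 + length:
                break                                           # payload not complete yet
            messages.append(bytes(self.buf[4:4 + length]))
            del self.buf[:4 + length]                           # drop the consumed message
        return messages

# Usage: feed whatever the transport delivers, in arbitrary chunk sizes.
rx = FrameReceiver()
print(rx.feed(struct.pack(">I", 3) + b"ab"))          # [] -- message still incomplete
print(rx.feed(b"c" + struct.pack(">I", 1) + b"x"))    # [b'abc', b'x']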
I have a question regarding the error rate calculation in the .cc file of the UDP app.
errorRate = ((float)(numPKTDropped) / (float)(numReceived + numPKTDropped))*100;
EV << "Error rate= "<<errorRate<<"%, Sent= "<<numSent<<" , Received= "<<numReceived<< endl;
This is my code, and it is a duplex system. The UDP packet receiver is unaware of the number of packets sent by the sender. How can the receiver know this via code in OMNeT++?
I would suggest putting a sequence number into the UDP payload, so you will know on the receiving side if a sequence number was skipped (except for the case when the last packets at the end of the simulation are lost). That would be a good enough estimate of the UDP packet loss.
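A small Python sketch of that idea (purely illustrative; the function name and the assumption that numbering starts at 0 are mine):
# Estimate the loss percentage from the sequence numbers that actually arrived.
def loss_estimate(received_seq_numbers):
    if not received_seq_numbers:
        return 0.0
    expected = max(received_seq_numbers) + 1      # assumes numbering starts at 0
    received = len(set(received_seq_numbers))     # ignore duplicates
    dropped = expected - received
    return 100.0 * dropped / expected

print(loss_estimate([0, 1, 3, 4, 7]))             # 2, 5 and 6 missing -> 37.5 (%)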
I have built an OFDM transceiver with a Rayleigh channel using standard PDPs like EPA, EVA and ETU. The problem is that I am getting a very high BER even for BPSK, i.e. 50-60% or more of the bits are in error. A scatter plot confirms it. My OFDM transceiver blocks include:
Random Data -- Modulation (BPSK, QPSK, QAM) -- Serial2Parallel -- IFFT -- Cyclic Prefix >>> Rayleigh Ch >>> Remove CP -- FFT -- Par2Ser -- DeMod -- Sink Data
I have used the built-in MATLAB function to create the Rayleigh channel, passing the standard PDP as a parameter.
channelObj = rayleighchan(tSampling,fDoppler,tau_in_sec,pdb_in_dB);
channelObj.ResetBeforeFiltering=0; % channel remains static before filtering
Filtering for n OFDM symbols and calculating the CIR:
for symb=1:OFDMSymb
ofdm_td_rx_signal(:,symb) = filter(channelObj, ofdm_td_TXdata(:,symb));
channel_cir(tapIndices,symb)= (channelObj.PathGains).';
end
channel_cfr = fft(channel_cir,nCarrier); % freq. response from CIR
Similarly at the receiver, after the FFT block, I just tried to use this CFR by dividing the received symbols by the CFR:
fft_RXdata=fft_data./channel_cfr;
What I am getting is a very high BER and scattered constellation symbols. The rest of the transceiver blocks are simple and all verified as bug-free. Do let me know how to improve it.
How could I improve the BER?
Is there any need for an equalizer? Would a matched filter help? Thanks in advance.
NOTE: ONLY THE RAYLEIGH CHANNEL IS USED; AWGN NOISE IS NOT ADDED AT ALL ...
One possible solution that has helped me is the use of block-based pilot (reference dummy data) transmission with the OFDM symbols. Least-squares channel estimation is performed at the RX using the received pilot data, which inherently captures the channel behavior.
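As an illustration of that idea (not the original MATLAB code), here is a small NumPy sketch of per-subcarrier least-squares estimation from a block pilot, followed by zero-forcing equalization; the toy channel and noise-free setup are assumptions:
# Block-pilot LS channel estimation + zero-forcing equalization, per subcarrier.
import numpy as np

n_carriers = 64
pilot = np.random.choice([-1.0, 1.0], n_carriers)       # known BPSK pilot OFDM symbol
h = (np.random.randn(n_carriers) + 1j * np.random.randn(n_carriers)) / np.sqrt(2)  # toy CFR

rx_pilot = h * pilot                                     # pilot after the channel (noise-free)
h_ls = rx_pilot / pilot                                  # LS estimate: Y / X per subcarrier

data = np.random.choice([-1.0, 1.0], n_carriers)        # one BPSK data OFDM symbol
rx_data = h * data
equalized = rx_data / h_ls                               # zero-forcing equalization

print(np.array_equal(equalized.real > 0, data > 0))      # True: all bits recovered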
Say that A sends B a UDP message of size N like
sockaddr_in to;
to.sin_family=AF_INET;
to.sin_port=htons(port);
to.sin_addr.s_addr=inet_addr(address);
sendto(sock,(const char*)buffer,N,0,(sockaddr*)&to,sizeof(to));
Now B receives this message expecting it to be of size N_1
sockaddr from;
socklen_t length_from=sizeof(from);
recvfrom(sock,(char*)buffer,N_1,0,&from,&length_from);
What happens when N_1!=N ?
What happens when N_1!=N ?
If the receive buffer is larger than the incoming datagram, the entire datagram is transferred into the buffer and the actual length is returned as the return value of recvfrom(). You're presently ignoring it. Don't do that.
If the receive buffer is smaller than the incoming datagram, it is truncated to fit into the receive buffer and the excess beyond that is discarded. The actual length of data transferred into the buffer is returned.
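To see both cases concretely, here is a small Python sketch (POSIX behavior; on some platforms, e.g. Windows, an undersized receive raises an error instead of silently truncating):
# Demonstrates recvfrom() behavior when the buffer size differs from the datagram size.
import socket

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))                     # let the OS pick a free port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

tx.sendto(b"x" * 100, rx.getsockname())       # N = 100
data, addr = rx.recvfrom(4096)                # N_1 > N: whole datagram is returned
print(len(data))                              # 100 -- the actual length, not 4096

tx.sendto(b"y" * 100, rx.getsockname())       # N = 100 again
data, addr = rx.recvfrom(10)                  # N_1 < N: datagram truncated to 10 bytes
print(len(data))                              # 10 -- the remaining 90 bytes are discarded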