UDP packet error rate in OMNeT++ - udp

I have a question regarding the error rate calculation in the .cc file of a UDP app.
errorRate = ((float)(numPKTDropped) / (float)(numReceived + numPKTDropped))*100;
EV << "Error rate= "<<errorRate<<"%, Sent= "<<numSent<<" , Received= "<<numReceived<< endl;
This is my code, and it is a duplex system. The UDP packet receiver is unaware of the number of packets sent by the sender. How is it possible to know this in code in OMNeT++?

I would suggest putting a sequence number into the UDP payload, so on the receiving side you will know when a sequence number is skipped (except for the case when the last packets at the end of the simulation are lost). That would be a good enough estimate of the UDP packet loss.
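As a rough illustration of that idea, here is a minimal C++ sketch of a receiver-side counter; the names (seqNo, expectedSeq, numLost) are made up for the example, and it assumes in-order delivery, so reordered packets would be miscounted.
#include <cstdint>

struct LossCounter {
    uint32_t expectedSeq = 0;   // next sequence number we expect to see
    uint64_t numReceived = 0;
    uint64_t numLost = 0;       // estimated from gaps in the sequence numbers

    void onPacket(uint32_t seqNo) {
        if (seqNo > expectedSeq)
            numLost += seqNo - expectedSeq;   // these packets never arrived
        ++numReceived;
        expectedSeq = seqNo + 1;
    }

    double errorRatePercent() const {
        uint64_t total = numReceived + numLost;
        return total ? 100.0 * numLost / total : 0.0;
    }
};
The receiver would call onPacket() with the sequence number extracted from each UDP payload and can then compute the same error rate formula without knowing numSent.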

Related

How is ETX (Expected Transmission Count) implemented in ContikiRPL?

I am working on creating a modified version of MRHOF for RPL. However, I have some doubts about the ETX metric used. I am running the rpl-udp example (..../contiki-3.0/examples/ipv6/rpl-udp).
As per my understanding, the general definition of ETX is the following:
ETX = 1/(df * dr)
where df is the measured probability that a data packet successfully arrives at the recipient and dr is the probability that the ACK packet is successfully received.
The implementation of ETX is defined in neighbor_link_callback(rpl_parent_t *p, int status, int numtx) (contiki/core/net/rpl/rpl-mrhof.c) as below:
new_etx = ((uint32_t)recorded_etx * ETX_ALPHA +(uint32_t)packet_etx * (ETX_SCALE - ETX_ALPHA)) / ETX_SCALE
where
recorded_etx = nbr->link_metric
packet_etx = MAX_LINK_METRIC * RPL_DAG_MC_ETX_DIVISOR
nbr->link_metric = RPL_INIT_LINK_METRIC * RPL_DAG_MC_ETX_DIVISOR (rpl-dag.c)
RPL_INIT_LINK_METRIC = 2 (rpl-conf.h)
ETX_SCALE = 100
ETX_ALPHA = 90
RPL_DAG_MC_ETX_DIVISOR = 256 (rpl-private.h)
MAX_LINK_METRIC = 10
Every time the link layer receives an ACK or a timeout event occurs, the function in this file (neighbor_link_callback) is fired.
I understand the formal definition of ETX, but when I try to map the standard ETX formula to ContikiRPL's ETX formula, I have trouble understanding the implementation of ETX in ContikiRPL.
How are the probability that a data packet successfully arrives at the recipient (df) and the probability that the ACK packet is successfully received (dr) implemented in ContikiRPL?
In the code, df and dr are not known individually. The algorithm runs on the sender device, which has no means to differentiate between the case when the packet is lost and the case when the ACK is lost. They look exactly the same to it: as the absence of an ACK.
The value of packet_etx roughly corresponds to 1 / (df * dr) of the last packet. Note that a single packet may already have had multiple retransmissions on the link. The metric is updated only when the packet is successfully ACKed or when the maximal number of retransmissions is exceeded.
Another issue is that Contiki, being designed for embedded systems, does not have the memory to keep track of the ETX of many recent packets. Instead, this information is aggregated into a single value with the help of an exponentially weighted moving average (EWMA) filter. The alpha parameter of the filter is given as ETX_ALPHA / ETX_SCALE in the code; the scaling is done to avoid the more expensive floating-point operations.
The value of recorded_etx is the previous value of the ETX, reflecting the ETX calculated from all of the previous packets. The value of new_etx is the link's ETX after the previous ETX and the last packet's ETX have been combined by the EWMA filter.
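To make the fixed-point arithmetic concrete, here is a minimal C++ sketch of the EWMA update (not the actual Contiki source); it assumes the last packet's ETX is derived from the number of link-layer transmissions (numtx) scaled by RPL_DAG_MC_ETX_DIVISOR, with the constants quoted above.
#include <cstdint>
#include <iostream>

const uint32_t ETX_SCALE   = 100;
const uint32_t ETX_ALPHA   = 90;
const uint32_t ETX_DIVISOR = 256;   // RPL_DAG_MC_ETX_DIVISOR

// recorded_etx: previous ETX, fixed-point (scaled by ETX_DIVISOR)
// numtx: transmissions the link layer needed for the last packet
uint32_t update_etx(uint32_t recorded_etx, uint32_t numtx) {
    uint32_t packet_etx = numtx * ETX_DIVISOR;   // last packet's ETX, fixed-point
    return (recorded_etx * ETX_ALPHA + packet_etx * (ETX_SCALE - ETX_ALPHA)) / ETX_SCALE;
}

int main() {
    uint32_t etx = 2 * ETX_DIVISOR;   // RPL_INIT_LINK_METRIC * ETX_DIVISOR
    etx = update_etx(etx, 3);         // last packet needed 3 transmissions
    std::cout << "scaled ETX = " << etx << ", ETX = " << etx / (double)ETX_DIVISOR << "\n";
}
With these numbers the new scaled ETX is (512*90 + 768*10) / 100 = 537, i.e. about 2.1 expected transmissions; a single bad packet only nudges the average because alpha is 0.9.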

How to send/receive variable-length protocol messages independently of the transmission layer

I'm writing a very specific application protocol to enable communication between 2 nodes. Node 1 is an embedded platform (a microcontroller), while node 2 is a common computer.
The protocol defines messages of variable length. This means that sometimes node 1 sends a message of 100 bytes to node 2, while at other times it sends a message of 452 bytes.
The protocol shall be independent of how the messages are transmitted. For instance, the same message can be sent over USB, Bluetooth, etc.
Let's assume that a protocol message is defined as:
| Length (4 bytes) | ...Payload (variable length)... |
I'm struggling with how the receiver can recognise how long the incoming message is. So far, I have thought of 2 approaches.
1st approach
The sender sends the length first (4 bytes, always fixed size), and the message afterwards.
For instance, the sender does something like this:
// assuming that the parameters of send() are: data, length of data
send(msg_length, 4)
send(msg, msg_length - 4)
While the receiver side does:
msg_length = receive(4)
msg = receive(msg_length)
This may be OK with some "physical protocols" (e.g. UART), but with more complex ones (e.g. USB), transmitting the length in a separate packet may introduce some overhead, the reason being that an additional USB packet (with control data and ACK packets as well) has to be transmitted for only 4 bytes.
However, with this approach the receiver side is pretty simple.
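As a minimal sketch of the receiver side of this approach (in C++, assuming a stream-like transport where a single read may return fewer bytes than requested; recv_some() is a placeholder for whatever the actual link provides, e.g. a UART driver, USB bulk endpoint, or socket):
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

size_t recv_some(uint8_t *buf, size_t max_len);    // placeholder transport call

void recv_exact(uint8_t *buf, size_t len) {
    size_t got = 0;
    while (got < len)
        got += recv_some(buf + got, len - got);     // keep reading until 'len' bytes have arrived
}

std::vector<uint8_t> receive_message() {
    uint8_t header[4];
    recv_exact(header, 4);                          // fixed-size length field first
    uint32_t msg_length;
    std::memcpy(&msg_length, header, 4);            // endianness must be agreed on by both nodes
    std::vector<uint8_t> payload(msg_length - 4);   // as above, the length includes its own 4 bytes
    recv_exact(payload.data(), payload.size());
    return payload;
}
The loop in recv_exact() is the part that is easy to forget: most transports are free to deliver the message in arbitrary chunks.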
2nd approach
The alternative would be that the receiver keeps receiving data into a buffer, and at some point tries to find a valid message. Valid means: finding the length of the message first, and then its payload.
Most likely this approach requires adding some "start message" byte(s) at the beginning of the message, such that the receiver can use them to identify where a message is starting.
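Here is a minimal C++ sketch of this second approach; the start byte value (0x7E) is arbitrary and the length field is taken to be the payload size only, both assumptions made for the example:
#include <cstdint>
#include <cstring>
#include <deque>
#include <optional>
#include <vector>

const uint8_t START_BYTE = 0x7E;   // illustrative value, not mandated by the question

// rx accumulates whatever the transport delivers; returns a payload once a full message is present
std::optional<std::vector<uint8_t>> try_extract(std::deque<uint8_t> &rx) {
    while (!rx.empty() && rx.front() != START_BYTE)   // discard bytes until a start byte is found
        rx.pop_front();
    if (rx.size() < 5)                                // start byte + 4-byte length not complete yet
        return std::nullopt;
    uint8_t len_bytes[4] = { rx[1], rx[2], rx[3], rx[4] };
    uint32_t msg_length;
    std::memcpy(&msg_length, len_bytes, 4);           // endianness must be agreed on by both nodes
    if (rx.size() < 5 + msg_length)                   // payload not fully received yet
        return std::nullopt;
    std::vector<uint8_t> payload(rx.begin() + 5, rx.begin() + 5 + msg_length);
    rx.erase(rx.begin(), rx.begin() + 5 + msg_length);
    return payload;
}
In practice a checksum after the payload helps the receiver detect a corrupted length field and re-synchronise on the next start byte.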

Telnet reader will split input after 1448 characters

I am writing a Java applet that will print what a telnet client sends to the connection. Unfortunately, the input gets split at 1448 characters.
The code that is proving to be a problem:
char[] line = new char[5000];
Reader r = new BufferedReader(new InputStreamReader(s.getInputStream(), "US-ASCII"));
int i = r.read(line);
I cannot change the source of what the telnet client reads from, so I am hoping it is an issue with the above three lines.
You're expecting to get telnet protocol data units from the TCP layer. It just doesn't work that way. You can only extract telnet protocol data units from the code that implements the telnet protocol. The segmentation of bytes of data at the TCP layer is arbitrary and it's the responsibility of higher layers to reconstruct the protocol data units.
The behavior you are seeing is normal, and unless you're diagnosing a performance issue, you should completely ignore the way the data is split at the TCP level.
The reason you're only getting 1448 bytes at a time is that the underlying protocols divide the transmission into packets. Frequently, this size is around 1500, with some bytes used for bookkeeping, so you're left with a chunk of 1448 bytes. The protocols don't guarantee that if you send X bytes in a 'single shot', the client will receive X bytes in a single shot (e.g. a single call to the receive method).
As has already been noted in the comments above, it's up to the receiving program to re-assemble these packets in a way that is meaningful to the client. In general, you perform receives and append the data you receive to some buffer until you find an agreed-upon 'end of the block of data' marker (such as an end-of-line, a newline, a carriage return, some symbol that won't appear in the data, etc.).
If the server is genuinely a telnet server, its output might be line-based (e.g. a single block of data is terminated with an 'end of line': carriage return and line feed characters). RFC 854 may be helpful; it details the Telnet protocol as originally specified.
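The pattern is language-agnostic; as a rough sketch (in C++ here, but the same loop applies to a Java Reader: keep calling read() and appending until the terminator shows up; read_some() is a placeholder for the socket read call):
#include <string>

int read_some(char *buf, int max_len);   // placeholder for the socket read call

std::string read_one_line() {
    std::string block;
    char chunk[2048];
    for (;;) {
        int n = read_some(chunk, sizeof chunk);   // may return any number of bytes, e.g. 1448
        if (n <= 0)
            break;                                // connection closed or error
        block.append(chunk, n);
        std::string::size_type pos = block.find("\r\n");   // Telnet lines end with CR LF
        if (pos != std::string::npos)
            return block.substr(0, pos);          // one complete line; a real implementation would
                                                  // keep the remaining bytes for the next call
    }
    return block;
}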

VB.NET: get the client connection params of the socket on the server side

I want the client and the server to write and read, respectively, at a constant rate (which can be configured in the client's GUI) to and from the buffer.
Say,
I am able to send from the client at 150 bytes per packet
Now, I should be able to read also at 150 bytes per packet on the server too
Since both are connected through a socket, can we retrieve the socket params (set on the client side, like 150 here) from the server end, using the tcpServer object?
Or is it a must to send an initial setup packet which tells the server about these client params, so that the server can continue accordingly?
It's kinda usual to sort message sizes out at the application level. You could indeed send a 'setup message' as the first data after a successful connect. You should send this setup message in a form that will not be misunderstood due to endianness or the number of bytes received per read call. Perhaps a fixed-size message in ASCII, maybe five bytes:
'00150'
The server can then read five bytes only, convert them to an integer, and save it in the server's per-client socket object, so that the server always knows how many bytes to send, and can then issue a read call for that number of bytes.
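A short sketch of reading that five-byte ASCII setup message, in C++ for illustration (the VB.NET equivalent would loop over NetworkStream.Read in the same way); recv_exact() is a placeholder for a loop that keeps reading until exactly N bytes have arrived:
#include <cstdlib>

void recv_exact(char *buf, int len);   // placeholder: keep reading until 'len' bytes have arrived

int read_setup_message() {
    char digits[6] = {0};              // five ASCII digits plus a terminating NUL
    recv_exact(digits, 5);             // e.g. receives "00150"
    return std::atoi(digits);          // -> 150; plain text, so no endianness issues
}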
Alternatively, you could use a simple protocol that embeds the size into each message, eg:
SOH
"0"
"0"
"1"
"5"
"0"
[150 bytes of data]
EOT
Rgds,
Martin

read specific size of data from boost asio udp socket

I want to read a specific number of bytes from a UDP socket. For a TCP socket I can use socket.read, where I can specify the amount of data to receive. I can't find a similar function for a UDP socket. I am using receive_from(), where I can specify the amount of data to read, but if there is more data then no data is read and I get the following error.
"A message sent on a datagram socket was larger than the internal message buffer or some other network limit, or the buffer used to receive a datagram into was smaller than the datagram itself" std::basic_string<char,std::char_traits<char>,std::allocator<char> >
I am not able to find what value I need to give for message_flags (the 3rd arg to receive_from) so that it will read the number of bytes specified. Currently I am using the following code to read data, but it either reads all the data or no data.
size_t size=socket.receive_from(boost::asio::buffer((const void*)&discRsp,sizeof(DataStructure)),remote_endpoint,0,errors);
Try this:
socket.set_option(boost::asio::socket_base::receive_buffer_size(65536));
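Note that a UDP datagram is delivered as a single unit, so the buffer handed to receive_from() has to be at least as large as the largest datagram you expect. A sketch of that (the DataStructure/discRsp names are the ones from the question) might look like:
#include <boost/asio.hpp>
#include <array>
#include <cstddef>

using boost::asio::ip::udp;

void receive_datagram(udp::socket &socket)
{
    // the option suggested above enlarges the kernel-side buffer
    socket.set_option(boost::asio::socket_base::receive_buffer_size(65536));

    std::array<char, 65507> buf;   // 65507 bytes is the maximum UDP payload
    udp::endpoint remote_endpoint;
    boost::system::error_code ec;
    std::size_t n = socket.receive_from(boost::asio::buffer(buf), remote_endpoint, 0, ec);

    // then copy only the bytes you need, e.g.:
    // if (!ec && n >= sizeof(DataStructure))
    //     std::memcpy(&discRsp, buf.data(), sizeof(DataStructure));
    (void)n;
}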