The User Datagram Protocol provides some error detection.
I've read that it has a checksum mechanism.
But this protocol has no handshaking process,
so it doesn't seem to care about data errors at all.
How can it have a checksum, then?
Checksum has nothing to do with handshaking. It simply validates the integrity of a transmitted packet. If a packet fails the check, it is discarded. In the case of TCP, the connection recovers: the receiver never acknowledges the discarded segment, so the sender eventually retransmits it. But in the case of UDP, it simply ends right there: the packet is discarded and that's it. Beware, though, that the UDP checksum is actually optional in IPv4; a sender may set the field to zero, meaning "no checksum at all". It's also a pretty weak 16-bit checksum even when it is used.
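For concreteness, here is a sketch of the 16-bit one's-complement arithmetic that UDP's checksum uses (per RFC 1071); the function name is mine, and the real UDP checksum also covers a pseudo-header, not just the bytes shown here. The weakness shows in the math: swapping two 16-bit words, or two errors that cancel, leave the sum unchanged.

#include <stddef.h>
#include <stdint.h>

/* One's-complement sum of 16-bit words in network byte order (RFC 1071). */
uint16_t inet_checksum(const uint8_t *data, size_t len) {
    uint32_t sum = 0;
    while (len > 1) {
        sum += (uint32_t)data[0] << 8 | data[1];
        data += 2;
        len  -= 2;
    }
    if (len == 1)                      /* odd trailing byte, pad with zero */
        sum += (uint32_t)data[0] << 8;
    while (sum >> 16)                  /* fold carries back into the low 16 bits */
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t)~sum;             /* one's complement of the sum */
}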
In general, modern-day "common wisdom" says that on modern equipment packets rarely get corrupted in transmission. If you're OK with that assumption, then I'd suggest simply assuming that packets are never corrupted, regardless of whether the checksum was used in transit or not. But if you aren't OK with occasional data corruption, then you should embed a better checksum into your data, such as a CRC or even a cryptographic hash. It all depends on how important data integrity is to you and how far you are willing to go to achieve it. That actually applies to TCP just as much as to UDP.
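As a sketch of the "embed a better checksum" option: compute a CRC-32 over the payload at the sender and verify it at the receiver. The helper names are mine, and the little-endian trailer layout is an arbitrary choice; the bitwise CRC-32 (reflected, polynomial 0xEDB88320) is slow but compact.

#include <stddef.h>
#include <stdint.h>

static uint32_t crc32(const uint8_t *data, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    }
    return ~crc;
}

/* Sender side: append the CRC to the payload. The buffer must have room
 * for 4 extra bytes. Returns the new total length. */
static size_t append_crc(uint8_t *buf, size_t len) {
    uint32_t c = crc32(buf, len);
    buf[len + 0] = (uint8_t)(c);
    buf[len + 1] = (uint8_t)(c >> 8);
    buf[len + 2] = (uint8_t)(c >> 16);
    buf[len + 3] = (uint8_t)(c >> 24);
    return len + 4;
}

/* Receiver side: returns 1 if the trailing CRC matches the payload. */
static int check_crc(const uint8_t *buf, size_t len) {
    if (len < 4) return 0;
    uint32_t got = (uint32_t)buf[len - 4]
                 | (uint32_t)buf[len - 3] << 8
                 | (uint32_t)buf[len - 2] << 16
                 | (uint32_t)buf[len - 1] << 24;
    return crc32(buf, len - 4) == got;
}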
See RFC and also this answer.
I've been trying to diagnose an issue with dropped UDP-IP datagrams, and one thing I'm noticing with Wireshark is that we're occasionally getting a datagram that Wireshark doesn't consider a packet (it won't do its trick of automagically reassembling the fragmented UDP datagram into the last fragment's packet).
Upon closer inspection, it looks like the last fragment of these packets that Wireshark doesn't like is a data-less fragment of length 60 (the minimum). What appears to be happening is that our stack's IP fragmentation algorithm, when the datagram fits into fragments exactly (its size is a multiple of 1480), rather than clearing the "more fragments" (MF) flag on that last completely full fragment, instead sends one extra empty fragment with the MF flag cleared.
Obviously this is weird enough to throw off Wireshark. But how wrong is it? Is this wrong enough to be causing receiving stacks (in this case some version of Windows I think) to discard the IP fragments? Is this actually a violation of the IPv4 standards for fragmented packets?
Fortunately, we have the sources for this IP stack, so we can fix it if we need to.
How wrong is it? I think that's a somewhat difficult question to answer definitively. From a technical standpoint, I don't think it's wrong per se, as it doesn't seem to violate any RFCs for fragment reassembly, but it's certainly inefficient, and there might be mechanisms in place that drop fragments of this sort. Perusing the various RFCs, I've come across the following relevant or related ones:
RFC 791: Internet Protocol
RFC 815: IP Datagram Reassembly Algorithms
RFC 1858: Security Considerations for IP Fragment Filtering
RFC 3128: Protection Against a Variant of the Tiny Fragment Attack
There may be others. These RFCs don't address this particular case; however, they do address reassembly and security considerations for other cases, which leads me to believe that the prudent thing to do would be to modify your IP stack, if possible, to avoid this inefficient transmission of an empty fragment. Not only will you improve efficiency, you'll avoid any potential problems that could arise from this case. Considering how many devices and IP stacks are out there, I'd say it's a bit risky to leave the implementation as it is, but that's just my opinion.
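For the fix itself, the core of a fragmentation loop that never emits an empty trailing fragment looks something like this. send_fragment() is a hypothetical stand-in for your stack's own transmit routine, and 1480 assumes a 1500-byte MTU with a 20-byte IP header (and is divisible by 8, as fragment offsets require).

#include <stddef.h>
#include <stdint.h>

#define MTU_PAYLOAD 1480   /* 1500-byte MTU minus 20-byte IP header */

/* Hypothetical: builds an IP header with the given fragment offset and
 * more-fragments (MF) flag, then transmits. */
extern void send_fragment(const uint8_t *frag, size_t len,
                          size_t byte_offset, int more_fragments);

void fragment_and_send(const uint8_t *data, size_t len) {
    size_t off = 0;
    while (off < len) {
        size_t chunk = len - off;
        if (chunk > MTU_PAYLOAD)
            chunk = MTU_PAYLOAD;
        /* MF is clear exactly when this chunk carries the final byte,
         * even when len is an exact multiple of MTU_PAYLOAD. */
        int more_fragments = (off + chunk < len);
        send_fragment(data + off, chunk, off, more_fragments);
        off += chunk;
    }
}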
As for why Wireshark doesn't reassemble the fragmented datagrams, that's simple. The IP dissector (packet-ip.c) currently expects at least 1 byte of payload. Here's the relevant code snippet from that file:
/* If ip_defragment is on, this is a fragment, we have all the data
* in the fragment, and the header checksum is valid, then just add
* the fragment to the hashtable.
*/
save_fragmented = pinfo->fragmented;
if (ip_defragment && (iph->ip_off & (IP_MF|IP_OFFSET)) &&
    iph->ip_len > hlen &&
    tvb_bytes_exist(tvb, offset, iph->ip_len - hlen) &&
    ipsum == 0) {
    ipfd_head = fragment_add_check(&ip_reassembly_table, tvb, offset,
                                   pinfo,
                                   iph->ip_proto ^ iph->ip_id ^ src32 ^ dst32 ^ pinfo->vlan_id,
                                   NULL,
                                   (iph->ip_off & IP_OFFSET) * 8,
                                   iph->ip_len - hlen,
                                   iph->ip_off & IP_MF);
    next_tvb = process_reassembled_data(tvb, offset, pinfo, "Reassembled IPv4",
                                        ipfd_head, &ip_frag_items,
                                        &update_col_info, ip_tree);
} else {
    ...
}
As a simple test, I tweaked the last fragment of a two-part packet such that it contained 0 bytes. I then made the following change to the IP dissector:
    iph->ip_len >= hlen &&
After recompiling Wireshark, the packet was reassembled. So I believe this simple change would allow Wireshark to successfully reassemble fragments like yours, where the last fragment contains no data. While I think your IP stack should still be modified to avoid sending these empty fragments, I also think, in keeping with Postel's Law, that Wireshark should be modified to handle this case, albeit with an "Expert Info" note added to flag the strange empty fragment so developers can be alerted to their inefficient implementations. To that end, I would recommend that you file a Wireshark enhancement bug request so Wireshark will be able to reassemble such fragments.
I'm currently working with the Amazon S3 API, and I have a general question about the server-side integrity checks that can be done if you provide the MD5 hash when posting an object.
I'm not sure I understand whether the integrity check is necessary if you send the data (including, I assume, the object data you're posting) via SSL/TLS, which provides its own support for data integrity in transit.
Should you send the digest regardless of whether you're posting over SSL/TLS? Isn't it superfluous to do so? Or is there something I'm missing?
Integrity checking provided by TLS offers no guarantees about what happens going into the TLS wrapper on the sender side, or coming out of it and being written to disk at the receiver.
So, no, it is not entirely superfluous because TLS is not completely end-to-end -- the unencrypted data is still processed, however little, on both ends of the connection... and any hardware or software that touches the unencrypted bits can malfunction and mangle them.
S3 gives you an integrity checking mechanism -- two, if you use both Content-MD5 and x-amz-content-sha256 -- and it seems unthinkable to try to justify bypassing them.
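For illustration, computing the Content-MD5 value (the base64 encoding of the raw 16-byte MD5 digest, not its hex form) might look like this with OpenSSL; the body string is a placeholder and error handling is minimal.

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

int main(void) {
    const unsigned char body[] = "hello, s3";   /* placeholder request body */
    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int digest_len = 0;

    /* Raw MD5 digest of the request body. */
    if (!EVP_Digest(body, sizeof body - 1, digest, &digest_len, EVP_md5(), NULL))
        return 1;

    /* Content-MD5 wants the digest base64-encoded, not hex. */
    char b64[EVP_MAX_MD_SIZE * 2];
    int b64_len = EVP_EncodeBlock((unsigned char *)b64, digest, digest_len);
    b64[b64_len] = '\0';

    printf("Content-MD5: %s\n", b64);
    return 0;
}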
Is there any standard that defines the behavior when malformed UDP packets (such as packets with an empty payload) are received in an RTP stream?
I don't think there is. The handling of malformed packets is largely left unspecified, as the assumption is that all is well with the world.
In most cases, you should expect one of three different behaviors, depending on how things were implemented:
Browser crash. In such a case, just file a bug with the browser vendor.
Ignore. The browser will ignore the packet altogether and move on.
Disconnect. The browser may decide to disconnect the peer connection because of it (highly unlikely, but not unheard of).
Just don't send malformed packets on a media connection and expect consistent behavior.
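Since the standards leave this undefined, a receiver that wants predictable behavior has to validate packets itself. A minimal sketch of the kind of check that catches the empty-payload case (the function and thresholds are illustrative, not from any particular stack):

#include <stddef.h>
#include <stdint.h>

#define RTP_MIN_HDR 12   /* fixed RTP header size */

/* Returns 1 if the packet is plausibly RTP, 0 if it should be dropped. */
int rtp_packet_looks_valid(const uint8_t *pkt, size_t len) {
    if (len < RTP_MIN_HDR)    /* empty or truncated payload: drop */
        return 0;
    if ((pkt[0] >> 6) != 2)   /* RTP version field must be 2 */
        return 0;
    return 1;
}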
The heartbeat protocol requires the other end to reply with the same data that was sent to it, to know that the other end is alive. Wouldn't sending a certain fixed message be simpler? Is it to prevent some kind of attack?
At least the size of the packet seems to be relevant, because according to RFC 6520, section 5.1, the heartbeat message will be used with DTLS (i.e., TLS over a datagram transport such as UDP) for PMTU discovery, in which case it needs messages of different sizes. Apart from that, it might simply be modelled after ICMP ping, where you can also specify the payload content for no particular reason.
Just like with ICMP Ping, the idea is to ensure you can match up a "pong" heartbeat response you received with whichever "ping" heartbeat request you made. Some packets may get lost or arrive out of order and if you send the requests fast enough and all the response contents are the same, there's no way to tell which of your requests were answered.
One might think, "WHO CARES? I just got a response; therefore, the other side is alive and well, ready to do my bidding :D!" But what if the response was actually for a heartbeat request 10 minutes ago (an extreme case, maybe due to the server being overloaded)? If you just sent another heartbeat request a few seconds ago and the expected responses are the same for all (a "fixed message"), then you would have no way to tell the difference.
A timely response is important in determining the health of the connection. From RFC6520 page 3:
... after a number of retransmissions without
receiving a corresponding HeartbeatResponse message having the
expected payload, the DTLS connection SHOULD be terminated.
By allowing the requester to specify the return payload (and assuming the requester always generates a unique payload), the requester can match up a heartbeat response to a particular heartbeat request made, and therefore be able to calculate the round-trip time, expiring the connection if appropriate.
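A sketch of that matching logic: key each outstanding request by a unique payload (a nonce here) and record when it was sent. The transport hook is a hypothetical placeholder; only the bookkeeping is the point.

#include <stdint.h>
#include <string.h>
#include <time.h>

#define MAX_PENDING 16

struct pending {
    uint64_t nonce;          /* unique payload we sent */
    struct timespec sent_at; /* when we sent it */
    int in_use;
};

static struct pending table[MAX_PENDING];
static uint64_t next_nonce = 1;

/* Hypothetical transport hook that wraps the payload in a HeartbeatRequest. */
extern void dtls_send_heartbeat(const void *payload, size_t len);

void send_heartbeat(void) {
    for (int i = 0; i < MAX_PENDING; i++) {
        if (!table[i].in_use) {
            table[i].nonce = next_nonce++;
            clock_gettime(CLOCK_MONOTONIC, &table[i].sent_at);
            table[i].in_use = 1;
            dtls_send_heartbeat(&table[i].nonce, sizeof table[i].nonce);
            return;
        }
    }
}

/* Called when a HeartbeatResponse arrives. Returns the round-trip time in
 * milliseconds, or -1 if the payload matches no outstanding request (a
 * stale or bogus response). */
double on_heartbeat_response(const void *payload, size_t len) {
    uint64_t nonce;
    if (len != sizeof nonce) return -1.0;
    memcpy(&nonce, payload, sizeof nonce);
    for (int i = 0; i < MAX_PENDING; i++) {
        if (table[i].in_use && table[i].nonce == nonce) {
            struct timespec now;
            clock_gettime(CLOCK_MONOTONIC, &now);
            table[i].in_use = 0;
            return (now.tv_sec - table[i].sent_at.tv_sec) * 1000.0
                 + (now.tv_nsec - table[i].sent_at.tv_nsec) / 1e6;
        }
    }
    return -1.0;
}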
This, of course, mainly makes sense if you are using TLS over an unreliable protocol like UDP rather than TCP.
So why allow the requester to specify the length of the payload? Couldn't it be inferred?
See this excellent answer: https://security.stackexchange.com/a/55608/44094
... seems to be part of an attempt at genericity and coherence. In the SSL/TLS standard, all messages follow regular encoding rules, using a specific presentation language. No part of the protocol "infers" length from the record length.
One gain of not inferring length from the outer structure is that it makes it much easier to include optional extensions afterwards. This was done with ClientHello messages, for instance.
In short, YES, it could have been inferred, but for consistency with the existing message formats and for future-proofing, the size is spelled out explicitly so that other data can follow in the same message.
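To make that concrete: RFC 6520's HeartbeatMessage carries its own explicit payload_length field right after the one-byte type, followed by the payload and at least 16 bytes of padding. A parsing sketch (the field layout follows the RFC; the function itself is mine):

#include <stddef.h>
#include <stdint.h>

struct heartbeat_msg {
    uint8_t type;             /* 1 = request, 2 = response */
    uint16_t payload_length;  /* claimed length; must be checked! */
    const uint8_t *payload;
};

/* Returns 0 on success. Rejects messages whose claimed payload length
 * exceeds what the record actually contains; skipping exactly this bounds
 * check was the Heartbleed bug. The RFC requires at least 16 bytes of
 * padding after the payload. */
int parse_heartbeat(const uint8_t *rec, size_t rec_len, struct heartbeat_msg *out) {
    if (rec_len < 3)
        return -1;
    out->type = rec[0];
    out->payload_length = (uint16_t)rec[1] << 8 | rec[2];
    if ((size_t)out->payload_length + 3 + 16 > rec_len)
        return -1;
    out->payload = rec + 3;
    return 0;
}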
I want to develop a program in C using pjsip for peer-to-peer file transfer. Since pjsip uses ICE, and ICE runs over UDP, do I need to handle packet delivery assurance myself?
And since I would be sending the file by breaking it into several parts and then reassembling all the parts at the receiver's end, do I have to maintain the sequence of the packets myself, or can I assume that packets are delivered in the correct sequence?
With UDP you can neither assume that packets are delivered in order, nor that they are delivered exactly once, nor that they are delivered at all! So you need to come up with a protocol that does a lot of the things TCP would normally take care of: it has to reassemble the original data stream and handle everything I listed above.
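As a starting point, each UDP datagram you send can carry a small header that gives the receiver enough information to reorder, de-duplicate, and detect completion. A sketch (field names and widths are illustrative; in real code, serialize the fields explicitly rather than sending the struct raw, because of padding and endianness):

#include <stdint.h>

struct chunk_hdr {
    uint32_t file_id;  /* identifies which transfer this chunk belongs to */
    uint32_t seq;      /* chunk index: lets the receiver reorder and de-duplicate */
    uint32_t total;    /* total chunk count, so the receiver knows when it's done */
    uint16_t len;      /* number of payload bytes following this header */
    uint16_t flags;    /* e.g. bit 0 = last chunk */
};
/* The receiver keeps a bitmap of received seq values and periodically
 * reports gaps back to the sender, which retransmits the missing chunks. */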
Additionally, with UDP you risk causing congestion. TCP avoids that with its congestion-avoidance algorithms; with UDP you can easily send packets too fast, causing them to be dropped at an overloaded router.
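Real congestion control reacts to feedback from the network, but the crudest stand-in is to pace your sends with a token bucket so you never exceed a configured rate; the values here are illustrative.

#include <stddef.h>
#include <stdint.h>
#include <time.h>

struct pacer {
    double tokens;        /* bytes we may send right now */
    double rate;          /* bytes per second to add */
    double burst;         /* cap on accumulated tokens */
    struct timespec last; /* last refill time */
};

void pacer_init(struct pacer *p, double rate, double burst) {
    p->tokens = burst;
    p->rate = rate;
    p->burst = burst;
    clock_gettime(CLOCK_MONOTONIC, &p->last);
}

/* Returns 1 if a packet of `len` bytes may be sent now, 0 to back off. */
int pacer_allow(struct pacer *p, size_t len) {
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    double dt = (now.tv_sec - p->last.tv_sec)
              + (now.tv_nsec - p->last.tv_nsec) / 1e9;
    p->last = now;
    p->tokens += dt * p->rate;       /* refill proportionally to elapsed time */
    if (p->tokens > p->burst)
        p->tokens = p->burst;
    if (p->tokens >= (double)len) {
        p->tokens -= (double)len;
        return 1;
    }
    return 0;
}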
All of these are non-trivial problems to solve, so I suggest you read up on the topic. I'd start with a good book about TCP.