I've been trying to diagnose an issue with dropped UDP/IP datagrams, and one thing I'm noticing with Wireshark is that we occasionally get a datagram that Wireshark doesn't treat as complete (it won't do its usual trick of automagically reassembling the fragmented UDP datagram and attaching it to the last fragment's packet).
Upon closer inspection, it looks like the last fragment of these packets that Wireshark doesn't like is a data-less fragment of length 60 (the minimum). What appears to be happening is that our stack's IP fragmentation algorithm, when the datagram fits into fragments exactly (its size is a multiple of 1480), rather than clearing the More Fragments (MF) flag on that last, completely full fragment, instead sends one extra, empty fragment with the MF flag cleared.
Obviously this is weird enough to throw off Wireshark. But how wrong is it? Is this wrong enough to be causing receiving stacks (in this case some version of Windows I think) to discard the IP fragments? Is this actually a violation of the IPv4 standards for fragmented packets?
Fortunately, we have the sources for this IP stack, so we can fix it if we need to.
How wrong is it? I think that's a somewhat difficult question to answer definitively. From a technical standpoint, I don't think it's wrong per se, as it doesn't seem to violate any RFCs governing fragment reassembly, but it's certainly inefficient, and there might be mechanisms in place that drop fragments of this sort. Perusing the various RFCs, I've come across the following relevant or related ones:
RFC 791: Internet Protocol
RFC 815: IP Datagram Reassembly Algorithms
RFC 1858: Security Considerations for IP Fragment Filtering
RFC 3128: Protection Against a Variant of the Tiny Fragment Attack
There may be others. These RFCs don't address this particular case; however, they do address reassembly and security considerations for other cases, which leads me to believe that the prudent thing to do would be to modify your IP stack, if possible, to avoid this inefficient transmission of an empty fragment. Not only will you improve efficiency, you'll also avoid any potential problems that could arise from this case. Considering how many devices and IP stacks are out there, I'd say it's a bit risky to leave the implementation as it is, but that's just my opinion.
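For what it's worth, here's a minimal, hypothetical sketch (in Python, not your stack's actual code; the constants and function names are mine) of how a fragmentation loop can produce exactly this behavior: if the "more fragments" decision uses >= instead of >, a payload that is an exact multiple of the per-fragment size gets one extra, zero-byte trailing fragment with MF cleared.

# Offsets are in bytes here for readability; the real IPv4 header stores them in 8-byte units.
FRAG_SIZE = 1480  # payload bytes per fragment with a 20-byte header and a 1500-byte MTU

def fragment_buggy(total_len):
    """Emits (offset, length, more_fragments) tuples; note the trailing
    empty fragment when total_len is an exact multiple of FRAG_SIZE."""
    frags, offset = [], 0
    while True:
        remaining = total_len - offset
        length = min(remaining, FRAG_SIZE)
        more = remaining >= FRAG_SIZE   # bug: a full final fragment keeps MF set
        frags.append((offset, length, more))
        offset += length
        if not more:
            break
    return frags

def fragment_fixed(total_len):
    """Clears MF on the last, completely full fragment instead."""
    frags, offset = [], 0
    while offset < total_len:
        length = min(total_len - offset, FRAG_SIZE)
        more = (offset + length) < total_len
        frags.append((offset, length, more))
        offset += length
    return frags

print(fragment_buggy(2960))  # [(0, 1480, True), (1480, 1480, True), (2960, 0, False)]
print(fragment_fixed(2960))  # [(0, 1480, True), (1480, 1480, False)]

If your stack's loop looks anything like the first variant, tightening that comparison (or checking before emitting the fragment) may be all the fix you need.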
As for why Wireshark doesn't reassemble the fragmented datagrams, that's simple. The IP dissector (packet-ip.c) currently expects at least 1 byte of payload. Here's the relevant code snippet from that file:
/* If ip_defragment is on, this is a fragment, we have all the data
 * in the fragment, and the header checksum is valid, then just add
 * the fragment to the hashtable.
 */
save_fragmented = pinfo->fragmented;
if (ip_defragment && (iph->ip_off & (IP_MF|IP_OFFSET)) &&
    iph->ip_len > hlen &&
    tvb_bytes_exist(tvb, offset, iph->ip_len - hlen) &&
    ipsum == 0) {
  ipfd_head = fragment_add_check(&ip_reassembly_table, tvb, offset,
                                 pinfo,
                                 iph->ip_proto ^ iph->ip_id ^ src32 ^ dst32 ^ pinfo->vlan_id,
                                 NULL,
                                 (iph->ip_off & IP_OFFSET) * 8,
                                 iph->ip_len - hlen,
                                 iph->ip_off & IP_MF);
  next_tvb = process_reassembled_data(tvb, offset, pinfo, "Reassembled IPv4",
                                      ipfd_head, &ip_frag_items,
                                      &update_col_info, ip_tree);
} else {
  ...
}
As a simple test, I tweaked the last fragment of a two-part packet such that it contained 0 bytes. I then made the following change to the IP dissector:
iph->ip_len >= hlen &&
After recompiling Wireshark, the packet was reassembled. So, I believe this simple change would allow Wireshark to successfully reassemble fragments of the type you have now where the last fragment contains no data. While I think your IP stack should still be modified to avoid sending these empty fragments, I also think in keeping with Postel's Law that Wireshark should be modified to handle this case, albeit with an "Expert Info" added to indicate this strange empty fragment so developers can be alerted to their inefficient implementations. To that end, I would recommend that you file a Wireshark enhancement bug request so Wireshark will be able to reassemble such fragments.
So far, everything I've read on webrtc peer connections says that an "offer" is sent, and it is responded to with an "answer". Then the connection starts and all is well.
In my understanding, the offer is like "Hey, let's use this codec and encryption". Given that the answer always leads to a connection, it seems the answer is always "okay, let's use that!". Can there be a counter offer like "No, let's use this codec instead!". Who ultimately decides which settings are used?
The offer contains a list of one side's acceptable codecs (prioritized).
The answer contains the subset of those codecs, listing only the ones that both sides can do - possibly in a different order.
So: no, the answer shouldn't contain a codec that wasn't in the offer.
But... Once Offer/Answer has happened, either side can send a second offer (this is typically used to add video to an existing audio-only session) and receive a new answer.
This means you could send an answer with no codecs and then send a second offer with a different set of codecs, but there is no reason to expect that the other side will change its mind (unless there was some resource exhaustion).
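To make the subset/reordering point concrete, here's a tiny sketch of the negotiation logic in plain Python (not the actual SDP or WebRTC API; the codec names are just examples):

def build_answer(offered_codecs, local_codecs):
    # Keep only codecs that appear in the offer, in the answerer's preferred order.
    offered = set(offered_codecs)
    return [c for c in local_codecs if c in offered]

offer = ["opus", "G722", "PCMU", "PCMA"]    # offerer's list, highest priority first
local = ["PCMU", "opus"]                    # what the answerer supports, in its own order
print(build_answer(offer, local))           # ['PCMU', 'opus'] -- a subset, possibly reordered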
The User Datagram Protocol provides some error detection.
Someone says that it has a checksum mechanism.
But this protocol does not have a handshaking process.
So this protocol does not seem to worry about data errors.
How can it have a checksum, then?
A checksum has nothing to do with handshaking. It simply validates the integrity of a packet in transit: if the packet is invalid, it is discarded. In the case of TCP, the sender notices the missing acknowledgement and retransmits the segment. In the case of UDP, it simply ends right there: the packet is discarded and that's it. Beware, though, that the UDP checksum is actually optional (over IPv4), so some senders don't compute it at all, and it's a pretty weak checksum even when it is used.
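For reference, the UDP checksum is the standard 16-bit one's-complement "Internet checksum" (RFC 768 / RFC 1071). Here's a minimal sketch of the core computation in Python; a real UDP checksum also covers the UDP header and an IP pseudo-header, which is omitted here:

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                              # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # sum 16-bit words
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF                         # one's complement of the sum

print(hex(internet_checksum(b"hello world")))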
In general, modern-day "common wisdom" says that on modern equipment packets rarely get corrupted in transmission. If you're OK with that assumption, then I'd suggest simply assuming that packets are never corrupted, regardless of whether a checksum was used in transit or not. But if you aren't OK with occasional data corruption, then you should embed a better check into your data, such as a CRC or even a cryptographic hash. It all depends on how important data integrity is to you and how far you are willing to go to achieve it. And that applies to TCP just as much as to UDP.
See RFC and also this answer.
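If you do decide to add your own end-to-end check, here's a minimal sketch of the idea; the 4-byte CRC-32 prefix and the layout are just an illustrative choice, not any standard format:

import struct, zlib

def pack(payload: bytes) -> bytes:
    return struct.pack("!I", zlib.crc32(payload)) + payload

def unpack(datagram: bytes) -> bytes:
    crc, payload = struct.unpack("!I", datagram[:4])[0], datagram[4:]
    if zlib.crc32(payload) != crc:
        raise ValueError("corrupted datagram")   # or just drop it, UDP-style
    return payload

assert unpack(pack(b"some data")) == b"some data"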
I'm writing a toy MUD client which uses a TCP/IP socket to make a connection to a telnet server.
As a common feature in MUD clients, I should be able to run a bunch of regular expressions on the responses from the server and do stuff when they are triggered.
Now the problem arises when the response is long and received in 2 or more TCP/IP packets, and therefore the regular expressions won't match when I run them on the responses, as they are not complete yet (the first or second part won't match alone).
So the question is: how do I know the server is done sending a packet of data before running my regular expressions on it?
The short answer is: you don't.
TCP is a stream protocol; it has no notion of packets or message boundaries at the application level.
If your application layer protocol uses packets (most do), then you have two options:
use a transport layer that supports packets natively (UDP, SCTP,...)
add packetizing information to your data stream
The simplest way to add packetizing info is to add delimiter characters (usually \n); obviously you then cannot use the delimiter in the payload, as it is reserved for framing.
If you need to be able to transmit any character in the payload (so you cannot reserve a delimiter), use something like SLIP on top of TCP/IP.
You can keep a buffer, append incoming packets to it, and keep testing until you get a full response.
If the MUD is to be played (almost) exclusively through your client (not through raw telnet), you can add your own delimiters: keep the same buffer, but instead of testing blindly, test only when you receive a delimiter.
If there is a command you can send that has no gameplay effect but gets a constant reply from the server (e.g. a ping), you could use it as a delimiter of sorts.
You may be overthinking it. Nearly all muds delimit lines with LF, i.e. \n (some oddball servers will use CRLF, \r\n, or even \n\r). So buffer your input and scan for the delimiter \n. When you find one, move the line out of the input buffer and then run your regexps.
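As a rough illustration, here's a minimal Python sketch of that buffer-and-scan loop (the host, port and trigger list are placeholders):

import re
import socket

triggers = [(re.compile(rb"You are hungry"), lambda m: print("eat!"))]  # example trigger

sock = socket.create_connection(("mud.example.com", 4000))  # placeholder host/port
buffer = b""
while True:
    data = sock.recv(4096)
    if not data:
        break                                   # server closed the connection
    buffer += data
    while b"\n" in buffer:
        line, buffer = buffer.split(b"\n", 1)   # pull one complete line out
        line = line.rstrip(b"\r")               # tolerate CRLF servers
        for pattern, action in triggers:
            match = pattern.search(line)
            if match:
                action(match)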
A special case is the telnet command IAC GA, which some muds use to denote prompts. Read the Telnet RFC (https://www.rfc-editor.org/rfc/rfc854) for more details, and do some research on mud-specific issues, for example http://cryosphere.net/mud-protocol.html.
Practically speaking, with muds you will never have a problem with waiting for a long line. If there's a lot of lag between mud and client, there's not a whole lot you can do about that.
I am currently using CocoaAsyncSocket to send UDP Socket messages to a server. Occasionally I need to enforce that messages arrive in a specific order. Basically my code structure is similar to below.
NSMutableArray *msgs = @[@0, @1, @2].mutableCopy;
- (void)sendMessages:(NSString *)str {
    // blackbox function that converts to NSData and sends to the socket server
}
Normally, I don't care about the order, so I am just blindly sending individual messages. For very specific commands this doesn't work. I have an example in Java that spawns a new thread and sends the messages with a 0.2-second delay between them. I was hoping to find a more elegant solution in Objective-C. Does anybody have any suggestions for an approach?
Guaranteeing a specific packet arrival order for UDP is exactly like doing the same for the postal system.
If you send two letters from country A to country B, there isn't really a way of telling which one will arrive first. Heck, one of them (or maybe even both) might even be lost and won't arrive at all. Sending the second letter 0.2 days after the first one increases the chances of "correct" ordering, but guarantees nothing.
The only way of maintaining order is to add sequence numbers to packets and buffer them on the receiving end. Then, once the relevant packets have arrived and have been ordered by sequence number, you deliver them to processing. Note that this means you'll also need a retransmission mechanism for lost packets, so if packets 1 and 3 arrive but 2 doesn't, the sender knows to send the missing packet before moving on. This is what TCP does.
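Here's a rough sketch of the receiving-end half of that idea in Python (the 4-byte sequence-number prefix is an assumption, and retransmission of lost packets isn't handled):

import struct

class ReorderBuffer:
    def __init__(self):
        self.expected = 0      # next sequence number to deliver
        self.pending = {}      # out-of-order payloads keyed by sequence number

    def receive(self, datagram: bytes):
        """Returns the payloads that can now be delivered in order."""
        seq = struct.unpack("!I", datagram[:4])[0]
        self.pending[seq] = datagram[4:]
        ready = []
        while self.expected in self.pending:
            ready.append(self.pending.pop(self.expected))
            self.expected += 1
        return ready

buf = ReorderBuffer()
print(buf.receive(struct.pack("!I", 1) + b"second"))   # [] -- still waiting for 0
print(buf.receive(struct.pack("!I", 0) + b"first"))    # [b'first', b'second']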
I am trying to write an app that exchanges data with other iPhones running the app through the Game Kit framework. The iPhones discover each other and connect fine, but the problems happens when I send the data. I know the iPhones are connected properly because when I serialize an NSString and send it through the connection it comes out on the other end fine. But when I try to archive a larger object (using NSKeyedArchiver) I get the error message "AGPSessionBroadcast failed (801c0001)".
I am assuming this is because the data I am sending is too large (my files are about 500k in size, Apple seems to recommend a max of 95k). I have tried splitting up the data into several transfers, but I can never get it to unarchive properly at the other end. I'm wondering if anyone else has come up against this problem, and how you solved it.
I had the same problem with files around 300K. The trouble is that the sender needs to know when the receiver has emptied the pipe before sending the next chunk.
I ended up with a simple state engine that ran on both sides. The sender transmits a header with how many total bytes will be sent and the packet size, then waits for acknowledgement from the other side. Once it gets the handshake, it proceeds to send fixed-size packets, each stamped with a sequence number.
The receiver gets each one, reads it, appends it to a buffer, then writes back to the pipe that it got the packet with that sequence number. The sender reads the acknowledged packet number, slices out another buffer's worth, and so on and so forth. Each side keeps track of the state it's in (idle, sending header, receiving header, sending data, receiving data, error, done, etc.). The two sides have to keep track of when to read/write the last fragment, since it's likely to be smaller than the full buffer size.
This works fine (albeit a bit slow) and it can scale to any size. I started with 5K packet sizes but it ran pretty slow. Pushed it to 10K but it started causing problems so I backed off and held it at 8096. It works fine for both binary and text data.
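For illustration, here's a rough sketch of that scheme in Python (not the original GameKit/Objective-C code; send() and wait_for_ack() stand in for the transport and the per-packet acknowledgement):

import struct

CHUNK = 8096  # the chunk size the answer above settled on

def send_blob(blob: bytes, send, wait_for_ack):
    send(struct.pack("!II", len(blob), CHUNK))   # header: total bytes, chunk size
    wait_for_ack()
    for seq, off in enumerate(range(0, len(blob), CHUNK)):
        send(struct.pack("!I", seq) + blob[off:off + CHUNK])
        wait_for_ack()                           # receiver echoes the sequence number back

def reassemble(packets):
    # Receiver side: first packet is the header, the rest are sequence-numbered chunks.
    total, _chunk = struct.unpack("!II", packets[0])
    data = b"".join(p[4:] for p in packets[1:])  # strip the 4-byte sequence number
    assert len(data) == total                    # all chunks have arrived
    return data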
Bear in mind that GameKit isn't a general file-transfer API; it's meant more for updates of where the player is, what the current locations of other objects are, etc. So sending 300k for a game doesn't seem that sensible, though I can understand hijacking the API for general sharing mechanisms.
The problem is that it isn't a TCP connection; it's more like a UDP (datagram) connection. In that case, the data isn't a stream (which TCP packetizes for you) but rather one giant chunk of data. (Technically, a UDP datagram can be fragmented into multiple IP packets, but lose one of those and the entire datagram is lost, as opposed to TCP, which will retry.)
The MTU for most wired networks is ~1.5k; for bluetooth, it's around ~0.5k. So any UDP packet that you sent (a) may get lost, (b) may be split into multiple MTU-sized IP packets, and (c) if one of those packets is lost, then you will automatically lose the entire set.
Your best strategy is to emulate TCP: send out packets with a sequence number, and have the receiving end request retransmission of any packets that went missing. If you're using the equivalent of an NSKeyedArchiver, then one suggestion is to iterate through the keys and write those out as individual keyed values (assuming each keyed value isn't that big on its own). You'll need some kind of ACK sent back for each packet, and a total ACK when you're done, so the sender knows it's OK to drop the data from memory.