How to send Socket Messages in Series with Obj-C

I am currently using CocoaAsyncSocket to send UDP Socket messages to a server. Occasionally I need to enforce that messages arrive in a specific order. Basically my code structure is similar to below.
NSMutableArray *msgs = [@[@0, @1, @2] mutableCopy];

- (void)sendMessages:(NSString *)str {
    // blackbox method that converts the string to NSData and sends it to the socket server
}
Normally I don't care about the order, so I just blindly send individual messages. For very specific commands this doesn't work. I have an example in Java that spawns a new thread and sends the messages spaced 0.2 seconds apart. I was hoping to find a more elegant solution in Objective-C. Does anybody have suggestions for an approach?
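For reference, here is roughly what that delayed-send approach looks like in Objective-C using dispatch_after; it only spaces the sends 0.2 s apart on the sending side and, as the answer below notes, guarantees nothing about arrival order. This sketch assumes it runs inside the class that implements sendMessages: above; the queue choice and message strings are illustrative only.

NSArray<NSString *> *ordered = @[@"0", @"1", @"2"];
dispatch_queue_t queue = dispatch_get_global_queue(QOS_CLASS_UTILITY, 0);
[ordered enumerateObjectsUsingBlock:^(NSString *msg, NSUInteger idx, BOOL *stop) {
    // schedule each message 0.2 s after the previous one
    dispatch_time_t when = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(idx * 0.2 * NSEC_PER_SEC));
    dispatch_after(when, queue, ^{
        [self sendMessages:msg];
    });
}];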

Guaranteeing a specific packet arrival order for UDP is exactly like doing the same for the postal system.
If you send two letters from country A to country B, there isn't really a way of telling which one will arrive first. Heck, one of them (or maybe even both) might even be lost and won't arrive at all. Sending the second letter 0.2 days after the first one increases the chances of "correct" ordering, but guarantees nothing.
The only way of maintaining order is adding sequence numbers to packets and buffering them on the receiving end. Then, once the relevant packets have arrived and have been ordered by sequence number you deliver them to processing. Note that this means that you'll also need a retransmission mechanism for lost packets, so if packets 1 and 3 arrive but 2 doesn't, the sender knows to send the missing packet before moving on. This is what TCP does.
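A minimal sketch of that sequence-number idea with CocoaAsyncSocket's GCDAsyncUdpSocket (the same library the question uses). The 4-byte big-endian header, the host/port, and the pending / nextExpected / processPayload: names are illustrative assumptions, retransmission of lost packets is omitted entirely, and both methods are assumed to live in the class that owns the socket.

// Sending side: prefix every payload with a 4-byte sequence number.
- (void)sendPayload:(NSData *)payload withSequence:(uint32_t)seq {
    uint32_t header = CFSwapInt32HostToBig(seq);                   // network byte order
    NSMutableData *packet = [NSMutableData dataWithBytes:&header length:sizeof(header)];
    [packet appendData:payload];
    [self.udpSocket sendData:packet toHost:@"example.com" port:9999 withTimeout:-1 tag:seq];
}

// Receiving side: buffer out-of-order packets and deliver them strictly in sequence.
- (void)udpSocket:(GCDAsyncUdpSocket *)sock didReceiveData:(NSData *)data
      fromAddress:(NSData *)address withFilterContext:(id)filterContext {
    uint32_t header = 0;
    [data getBytes:&header length:sizeof(header)];
    uint32_t seq = CFSwapInt32BigToHost(header);
    self.pending[@(seq)] = [data subdataWithRange:NSMakeRange(sizeof(header), data.length - sizeof(header))];
    while (self.pending[@(self.nextExpected)]) {                   // flush any in-order run
        [self processPayload:self.pending[@(self.nextExpected)]];
        [self.pending removeObjectForKey:@(self.nextExpected)];
        self.nextExpected += 1;
    }
}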

Related

UDP hole punching logic puzzle

I am trying to solve a logical puzzle in my UDP hole punching implementation.
The puzzle is the following: can I guarantee that two clients I am trying to connect will come to the same conclusion (hole punched / hole not punched) within a reasonable time (ideally no more than a few seconds after they were given each other's IP addresses)?
With UDP hole punching, both clients have to start sending "punch" packets at around the same time. Some of these initial packets are expected to be lost because of the NAT/firewall. Then at some point client #1's message gets through (let's take an optimistic scenario), but this does not mean that the message client #2 sent also gets through. The clients then have to reply with "ack" messages to confirm that the connection has been established. This is where the timing issue also comes into play: one of the clients receives an ack before the timeout while the other does not, and they come to different conclusions.
I also tried to make the logic more complex: keeping track of how many acks each client has sent, giving each newly sent ack a unique number, and keeping track of how many distinct acks the given client has received. Still, one client's "success" condition does not mean that the other client comes to the same conclusion within a limited time, given that UDP packets can get lost. And from a usability point of view, I cannot keep the user waiting forever; I have to present the result ideally within 2 seconds after the server connects the two clients.
To be more concrete, I can have a loop:
while (within_timeout)
{
    // check for new data
    // reply with acks if received
    if (acks_sent >= 3 && acks_received >= 3) break;
}
success = (acks_sent >= 3 && acks_received >= 3);
Client #1 in our case knows that it has sent three or more acks and has received at least 3 acks, so it leaves the loop. Client #2 knows that it has sent at least 3 acks (because client #1 received that many), but it may not have received all the acks client #1 sent, and the timeout runs out for client #2.
The server can also designate one client as "master" and the other as "slave", with the master having the final word. Still, even then the master has to tell the slave its decision and expect an ack, which may not arrive within the reasonable timeout.
It may be that there's no 100% solution; is there a solution that approaches 100%?
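For what it's worth, the ack bookkeeping described above is small. A hedged Objective-C sketch (assuming a connected GCDAsyncUdpSocket from CocoaAsyncSocket; the class, method names, and ACK wire format are all hypothetical, and the punch/timeout loop itself is omitted) might look like this:

#import <CocoaAsyncSocket/GCDAsyncUdpSocket.h>

@interface PunchState : NSObject
@property (nonatomic) uint32_t acksSent;                           // how many acks we have sent
@property (nonatomic, strong) NSMutableSet<NSNumber *> *acksSeen;  // distinct ack numbers received
@end

@implementation PunchState
- (instancetype)init {
    if ((self = [super init])) { _acksSeen = [NSMutableSet set]; }
    return self;
}
- (void)sendAckOn:(GCDAsyncUdpSocket *)socket {
    // give each outgoing ack a fresh number so duplicates can be told apart
    NSString *msg = [NSString stringWithFormat:@"ACK:%u", self.acksSent];
    self.acksSent += 1;
    [socket sendData:[msg dataUsingEncoding:NSUTF8StringEncoding] withTimeout:-1 tag:0];
}
- (void)receivedAckNumber:(uint32_t)number {
    [self.acksSeen addObject:@(number)];                           // a set, so only distinct acks count
}
- (BOOL)holePunched {
    return self.acksSent >= 3 && self.acksSeen.count >= 3;         // the loop's exit test
}
@end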

UDP packet fragmentation

After reading dozens of articles I can't find an answer to a simple question - can a UDP datagram arrive fragmented? I know that it can get fragmented on the way if its size is above 576 bytes or something like that, but will it get merged back together when it arrives?
In other words, if I send a single packet via udp::socket::send_to(), can I assume that if it's not dropped on the way, I'll retrieve it by a single call to udp::socket::async_receive_from()?
The OS network stack will reassemble the fragments and hand user space the complete datagram. And if one of the fragments gets lost, user space will not receive the rest; it will receive nothing at all.
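In BSD-socket terms (which is what Asio sits on top of), the point is simply that one recvfrom() call returns one whole, already-reassembled datagram, as long as the buffer you supply is big enough. A small sketch; sockfd is assumed to be an already-bound UDP socket descriptor:

#include <sys/socket.h>
#include <netinet/in.h>

// One call, one complete datagram: pass a buffer of up to 65535 bytes
// (the largest possible UDP payload) and the kernel hands back the whole thing.
ssize_t receive_whole_datagram(int sockfd, char *buffer, size_t buflen) {
    struct sockaddr_in sender;
    socklen_t senderLen = sizeof(sender);
    // returns the length of the complete datagram, or -1 on error;
    // a datagram that lost a fragment in transit is never delivered at all
    return recvfrom(sockfd, buffer, buflen, 0, (struct sockaddr *)&sender, &senderLen);
}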

Why does the TLS heartbeat extension allow user supplied data?

The heartbeat protocol requires the other end to reply with the same data that was sent to it, to know that the other end is alive. Wouldn't sending a certain fixed message be simpler? Is it to prevent some kind of attack?
At least the size of the packet seems to be relevant, because according to RFC 6520, section 5.1 the heartbeat message will be used with DTLS (e.g. TLS over UDP) for PMTU discovery, in which case it needs messages of different sizes. Apart from that, it might simply be modelled after ICMP ping, where you can also specify arbitrary payload content.
Just like with ICMP Ping, the idea is to ensure you can match up a "pong" heartbeat response you received with whichever "ping" heartbeat request you made. Some packets may get lost or arrive out of order and if you send the requests fast enough and all the response contents are the same, there's no way to tell which of your requests were answered.
One might think, "WHO CARES? I just got a response; therefore, the other side is alive and well, ready to do my bidding :D!" But what if the response was actually for a heartbeat request 10 minutes ago (an extreme case, maybe due to the server being overloaded)? If you just sent another heartbeat request a few seconds ago and the expected responses are the same for all (a "fixed message"), then you would have no way to tell the difference.
A timely response is important in determining the health of the connection. From RFC6520 page 3:
... after a number of retransmissions without
receiving a corresponding HeartbeatResponse message having the
expected payload, the DTLS connection SHOULD be terminated.
By allowing the requester to specify the return payload (and assuming the requester always generates a unique payload), the requester can match up a heartbeat response to a particular heartbeat request made, and therefore be able to calculate the round-trip time, expiring the connection if appropriate.
This of course only makes much sense if you are using TLS over a non-reliable protocol like UDP instead of TCP.
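A hedged sketch of the matching logic described above. This is not OpenSSL or any TLS-library API, just the bookkeeping a requester could keep: remember when each unique payload was sent, and when the echoed payload comes back, compute the round-trip time for that specific request.

#import <Foundation/Foundation.h>

static NSMutableDictionary<NSData *, NSDate *> *outstanding;

// When sending a heartbeat request: generate a unique payload and remember when it left.
NSData *HeartbeatMakePayload(void) {
    if (!outstanding) outstanding = [NSMutableDictionary dictionary];
    NSData *payload = [[NSUUID UUID].UUIDString dataUsingEncoding:NSUTF8StringEncoding];
    outstanding[payload] = [NSDate date];
    return payload;   // hand this to whatever actually sends the HeartbeatRequest
}

// When a heartbeat response arrives: the echoed payload identifies which request it answers.
void HeartbeatHandleResponse(NSData *echoed) {
    NSDate *sentAt = outstanding[echoed];
    if (sentAt) {
        NSTimeInterval rtt = -[sentAt timeIntervalSinceNow];   // round-trip time for that request
        [outstanding removeObjectForKey:echoed];
        NSLog(@"heartbeat RTT: %.3f s", rtt);
    } else {
        // stale or unknown payload: ignore it, or count it toward terminating the connection
    }
}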
So why allow the requester to specify the length of the payload? Couldn't it be inferred?
See this excellent answer: https://security.stackexchange.com/a/55608/44094
... seems to be part of an attempt at genericity and coherence. In the SSL/TLS standard, all messages follow regular encoding rules, using a specific presentation language. No part of the protocol "infers" length from the record length.
One gain of not inferring length from the outer structure is that it makes it much easier to include optional extensions afterwards. This was done with ClientHello messages, for instance.
In short, YES, it could've been, but for consistency with the existing format and for future-proofing, the size is spec'd out so that other data can follow within the same message.

UDP Packet size and fragments

Let's say I am trying to send data using a UDP socket. If the data is big, then I think it is going to be divided into several packets and sent to the destination.
At the destination, if there is more than one incoming packet, how do I combine those separate packets back into the original data? Do I need a data structure that saves all the incoming UDP packets per sender? Thanks in advance.
If you are simply sending the data in one datagram, using a single send() call, then the fragmentation and reassembly will be done for you, by the transport layer. All you need to do is supply a large enough buffer to recv(), and if all the fragments have arrived, then they will be reassembled and presented to you as a single datagram.
Basically, this is the service that UDP provides you (where a "datagram" is a single block of data sent by a single send() call):
The datagram may not arrive at all;
The datagram may arrive out-of-order with respect to other datagrams;
The datagram may arrive more than once;
If the datagram does arrive, it will be complete and correct1.
However, if you are performing the division of the data into several UDP datagrams yourself, at the application layer, then you will of course be responsible for reassembling it too.
1. Correct with the probability implied by the UDP checksum, anyway.
You should use TCP for this. TCP is for structured data that needs to arrive in a certain order without being dropped.
On the other hand, UDP is used when the packet becomes irrelevant after ~500 ms. This is used in games, telephony, and so on.
If your problem requires UDP, then you need to handle any lost, duplicate, or out-of-order packets yourself, or at least write code that is resilient to that possibility.
http://en.wikipedia.org/wiki/User_Datagram_Protocol
If you can't afford lost packets, then TCP is probably a better option than UDP, since it provides that guarantee out of the box.

iPhone: Sending large data with Game Kit

I am trying to write an app that exchanges data with other iPhones running the app through the Game Kit framework. The iPhones discover each other and connect fine, but the problems happens when I send the data. I know the iPhones are connected properly because when I serialize an NSString and send it through the connection it comes out on the other end fine. But when I try to archive a larger object (using NSKeyedArchiver) I get the error message "AGPSessionBroadcast failed (801c0001)".
I am assuming this is because the data I am sending is too large (my files are about 500k in size, Apple seems to recommend a max of 95k). I have tried splitting up the data into several transfers, but I can never get it to unarchive properly at the other end. I'm wondering if anyone else has come up against this problem, and how you solved it.
I had the same problem w/ files around 300K. The trouble is the sender needs to know when the receiver has emptied the pipe before sending the next chunk.
I ended up with a simple state engine that ran on both sides. The sender transmits a header with how many total bytes will be sent and the packet size, then waits for acknowledgement from the other side. Once it gets the handshake it proceeds to send fixed-size packets, each stamped with a sequence number.
The receiver gets each one, reads it and appends it to a buffer, then writes back to the pipe that it got the packet with that sequence #. The sender reads the packet #, slices out another buffer's worth, and so on and so forth. Each side keeps track of the state it's in (idle, sending header, receiving header, sending data, receiving data, error, done, etc.). The two sides have to keep track of when to read/write the last fragment, since it's likely to be smaller than the full buffer size.
This works fine (albeit a bit slow) and it can scale to any size. I started with 5K packet sizes but it ran pretty slow. Pushed it to 10K but it started causing problems so I backed off and held it at 8096. It works fine for both binary and text data.
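A simplified sketch of the splitting step in that kind of scheme. The 8-byte sequence/total header, the 8096-byte chunk size, and the function name are illustrative only, not GameKit API, and the ack/state-machine side is omitted:

#import <Foundation/Foundation.h>

static const NSUInteger kChunkSize = 8096;

// Split a blob into fixed-size packets, each prefixed with its sequence number
// and the total packet count (both big-endian), ready to be sent one at a time.
static NSArray<NSData *> *PacketsForData(NSData *data) {
    NSMutableArray<NSData *> *packets = [NSMutableArray array];
    uint32_t total = (uint32_t)((data.length + kChunkSize - 1) / kChunkSize);
    for (uint32_t seq = 0; seq < total; seq++) {
        NSUInteger offset = seq * kChunkSize;
        NSUInteger len = MIN(kChunkSize, data.length - offset);
        uint32_t header[2] = { CFSwapInt32HostToBig(seq), CFSwapInt32HostToBig(total) };
        NSMutableData *packet = [NSMutableData dataWithBytes:header length:sizeof(header)];
        [packet appendData:[data subdataWithRange:NSMakeRange(offset, len)]];
        [packets addObject:packet];
    }
    return packets;
}

The receiver reads the two header fields off each packet, appends the payload at offset seq * kChunkSize into its buffer, and acks the sequence number before the sender releases the next chunk.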
Bear in mind that GameKit isn't a general file-transfer API; it's meant more for updates of where the player is, what the current locations of other objects are, etc. So sending 300k for a game doesn't seem that sensible, though I can understand hijacking the API for general sharing mechanisms.
The problem is that it isn't a TCP connection; it's more like a UDP (datagram) connection. In these cases the data isn't a stream (which TCP packetizes for you) but rather one giant chunk of data. (Technically, a UDP datagram can be fragmented into multiple IP packets; lose one of those and the entire datagram is lost, as opposed to TCP, which will retry.)
The MTU for most wired networks is ~1.5 KB; for Bluetooth it's around ~0.5 KB. So any UDP packet that you send (a) may get lost, (b) may be split into multiple MTU-sized IP packets, and (c) if one of those IP packets is lost, you automatically lose the entire datagram.
Your best strategy is to emulate TCP: send packets with a sequence number, and let the receiving end request retransmission of any packets that went missing. If you're using the equivalent of an NSKeyedArchiver, one suggestion is to iterate through the keys and write those out as individual keys (assuming each keyed value isn't that big on its own). You'll need some kind of ACK sent back for each packet, and a total ACK when you're done, so the sender knows it's OK to drop the data from memory.
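For the per-key suggestion, a rough sketch, assuming the object being sent can be represented as a dictionary of NSCoding-compliant values; boardState, moveList, and sendChunk: are purely hypothetical stand-ins for your own data and your own sequence/ack layer, not GameKit or Foundation calls:

NSDictionary<NSString *, id> *payload = @{ @"board" : boardState, @"moves" : moveList };
for (NSString *key in payload) {
    // archive each key/value pair as its own, much smaller, packet
    NSData *chunk = [NSKeyedArchiver archivedDataWithRootObject:@{ key : payload[key] }];
    [self sendChunk:chunk];   // your own sequenced/acked send
}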