iOS Packet Length - objective-c

I am writing a small app that essentially swaps XML back and forth a-la SOAP. I have an OS X-based server and an iPad client. I use KissXML on the client and the built-in XML parser on the server. I use GCDAsyncSocket on both to communicate.
When I test my app on the iPad simulator, the full XML comes through. Everything works fine.
However, when I use my development device (an actual physical iPad), everything else works fine, but the XML terminates after the 1426th character. I have verified that this error occurs on multiple iPads.
When I subscribe to the incoming packets on GCDAsyncSocket I use
[sock readDataWithTimeout:-1
                   buffer:[NSMutableData new]
             bufferOffset:0
                maxLength:0
                      tag:0];
and previously just a simple [sock readDataWithTimeout:-1 tag:0]; but both have the same result. It seems that GCDAsyncSocket is not to blame at any rate, since the execution is fine on the simulator. Note that the 0 for maxLength indicates an 'infinite' buffer.
Does anyone have any idea what could be causing this?

1426 sounds very much like the MTU (Maximum Transmission Unit), which caps how much data can be carried in a single packet. It varies across network media and configurations, but 1426 is a pretty common value.
This suggests that you're confusing the reception of a TCP packet with the completion of an XML message. There is no guarantee that TCP packets will end on an XML message boundary. GCDAsyncSocket is a low-level library that talks TCP, not XML.
As you get each packet, it's your responsibility to concatenate it onto an NSMutableData and then to decide when you have enough to process it. If your protocol closes the connection after every message, then you can read until the connection is closed. If not, then you will have to deal with the fact that a given packet might even include some of the next message. You'll have to parse the data sufficiently to decide where the boundaries are.
BTW, it is very possible that your Mac has a different MTU than your iPad, which is why you may be seeing different behavior on the different platforms.
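For illustration, a minimal sketch of that accumulate-and-split approach (the buffer property, the handler method and the "\r\n\r\n" terminator are assumptions, not part of GCDAsyncSocket itself):

// Hypothetical delegate sketch: append every read to a running buffer and only
// hand complete, terminator-delimited messages to the XML parser.
- (void)socket:(GCDAsyncSocket *)sock didReadData:(NSData *)data withTag:(long)tag
{
    [self.inBuffer appendData:data];   // self.inBuffer: NSMutableData created at connect time

    NSData *terminator = [@"\r\n\r\n" dataUsingEncoding:NSUTF8StringEncoding];
    NSRange found = [self.inBuffer rangeOfData:terminator
                                       options:0
                                         range:NSMakeRange(0, self.inBuffer.length)];
    if (found.location != NSNotFound) {
        // A complete message is available; anything after it belongs to the next message.
        NSData *message = [self.inBuffer subdataWithRange:NSMakeRange(0, found.location)];
        [self.inBuffer replaceBytesInRange:NSMakeRange(0, NSMaxRange(found)) withBytes:NULL length:0];
        [self handleCompleteXMLMessage:message];   // hypothetical handler
    }

    [sock readDataWithTimeout:-1 tag:0];   // keep reading; -1 means no timeout
}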

The solution was that when left unspecified, AsyncSocket reads to the next line return; when the packet terminates, it simply returns that line. I was using (sock is my GCDAsyncSocket object)
[sock readDataWithTimeout:-1 tag:0]
but have since moved to
[sock readDataToData:[msgTerm dataUsingEncoding:NSUTF8StringEncoding]
withTimeout:-1
tag:0]
where msgTerm is an external constant NSString defined as "\r\n\r\n" and shared between the client and server source. This effectively sidesteps the problem of a line return ending the read.
One additional note regarding this solution: because I am using a SOAP-like protocol, the whitespace is not an issue. However, if yours is temperamental about trailing whitespace lines, you can clean the message up with something like [incomingDecodedNsstringMessage stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]].
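For completeness, a sketch of the matching write on the sending side (the variable names here are assumptions, not the actual server source); the shared terminator is appended to every outgoing message before it is written:

// Hypothetical sending side: append the shared "\r\n\r\n" terminator so the
// receiver's readDataToData: call knows where the message ends.
NSMutableData *outgoing = [[xmlString dataUsingEncoding:NSUTF8StringEncoding] mutableCopy];
[outgoing appendData:[msgTerm dataUsingEncoding:NSUTF8StringEncoding]];
[sock writeData:outgoing withTimeout:-1 tag:0];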

Having had a look at the code for GCDAsyncSocket, I'd say it is entirely possible there is a bug in it. For instance, if you are reading a secure socket, the CFSocket mechanism is used on iPhone instead of ordinary Unix-style file descriptors, and the author may be making invalid assumptions about when a socket is closed. Since you have the source code, I'd try stepping through it with a debugger to see whether end of file is being flagged prematurely.
TCP is a stream-based protocol. Theoretically, the packet size of the underlying IP protocol should make no difference, but if you read the socket fast enough, you may well get your data in chunks the size of the IP packets, especially if the IP stack is somehow tuned for memory use (guessing here!).

Related

wavfile_sink: Value nan can not be represented in the target integer type

I have a couple of gnuradio apps that communicate across the internet. The flow graphs are rather complex, so I boiled them down to their simplest form to re-create the issue I'm seeing.
The issue is that when I invoke the client, it connects to the server as expected but immediately throws the exception "Value nan can not be represented in the target integer type", which appears to come from the wavfile sink block.
I simplified the server down to a simple dial tone flow graph that presents the dial tone signal on a TCP port. The client can connect to that port and expects to receive the dial tone signal.
If I run the client and server on the same computer and connect using localhost, the client connects fine and works as expected. However if the client and server are separated at a distance (over the internet) then I receive the nan exception every time.
Theories:
- Perhaps the client is starting up and not receiving data soon enough (connect time + 20ms latency), and the wavfile block is receiving NaNs from the socket PDU or stream to tagged stream block?
- Perhaps the data is being mangled as it crosses the internet in such a way that the packets aren't arriving as expected (fragmentation?). I tried MTU 512 but the problem still exists. Tried MTU 10000: same result.
- Maybe I'm overlooking a simple usage/syntax error when building my flowgraph.
- There's a bug in the wavfile sink and it should handle the absence of data more gracefully.
UPDATE 1:
Logged a bug report with the gnuradio project: https://github.com/gnuradio/gnuradio/issues/1763
UPDATE 2:
The socket PDU block seems to play a significant role. If I tweak the packet length parameter of the stream to tagged stream block, along with the MTU parameter of the socket PDU block, I can get either an error-free stream with dropped packets (low MTU) or a dead stream with the NaN exception (high MTU).
[Image: Server Flowgraph]
[Image: Client Flowgraph (XYZ.com is of course not my real IP)]
[Image: wav result across the internet (20-100ms latency)]
This wav file is 1k and is truncated, probably at the same time as the exception, but it's hard to tell for sure.
[Image: wav result across the local interface (1ms or less latency)]
Across the local interface the client runs happily until I stop it. The received signal is as expected.
One small thing: notice the glitchy first few samples? Not sure if this is a factor. Probably not. But the data is junk for the first 10ms or so, even across the local interface.

Java and its signed bytes: Sending hex information via UDP possible?

I am currently working on a Java application to change my RGBWW light strips.
The information has to be sent via UDP packets in order to be understood by the controller.
Unfortunately, the hex number 0x80 has to be sent - which is causing some problems.
Whenever I send a byte array containing only numbers from 0x00 to 0x79 (using DatagramPacket and a DatagramSocket), I do get a UDP packet popping up on my network monitor.
As soon as I include the number 0x80 or anything higher, I see two things happen:
1: I no longer get plain UDP; the messages are displayed as RTP / RTCP most of the time.
2: The method Integer.toHexString() does not display "80", but gives me "ffffff80".
My question: Is there something I am missing when it comes to sending hex info via UDP? Or is there another way of sending it, possibly avoiding the annoyingly signed bytes?
I unfortunately did not find any information that would have significantly helped me on that issue, but I hope you can help me!
Thanks in advance!
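(A note on the "ffffff80": byte is signed in Java, so when 0x80, which is -128 as a signed byte, is widened to an int, the upper bits are filled with ones; the byte that actually goes into the UDP packet is still a single 0x80. The usual Java fixes are writing the literal as (byte) 0x80 and masking before printing, e.g. Integer.toHexString(b & 0xFF). The same promotion can be seen in a small Objective-C sketch:)

// Illustration only: sign extension when a signed 8-bit value is widened.
signed char b = (signed char)0x80;   // bit pattern 1000 0000, value -128
NSLog(@"%x", b);                     // logs ffffff80 (promoted to int, sign-extended)
NSLog(@"%x", b & 0xFF);              // logs 80 (masked back to a single byte)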

Twisted - success (or failure) callback for LineReceiver sendLine

I'm still trying to master Twisted while in the midst of finishing an application that uses it.
My question is:
My application uses LineReceiver.sendLine to send messages from a Twisted TCP server.
I would like to know if the sendLine succeeded.
I gather that I need to somehow add a success (and error?) callback to sendLine but I don't know how to do this.
Thanks for any pointers / examples
You need to define "succeeded" in order to come up with an answer to this.
All sendLine does immediately (probably) is add some bytes to a send buffer. In some sense, as long as it doesn't raise an exception (eg, MemoryError because your line is too long or TypeError because your line was the number 3 instead of an actual line) it has succeeded.
That's not a very useful kind of success, though. Unfortunately, the useful kind of success is more like "the bytes were added to the send buffer, the send buffer was flushed to the socket, the peer received the bytes, and the receiving application acted on the data in a persistent way".
Nothing in LineReceiver can tell you that all those things happened. The standard solution is to add some kind of acknowledgement to your protocol: when the receiving application has acted on the data, it sends back some bytes that tell the original sender the message has been handled.
You won't get LineReceiver.sendLine to help you much here because all it really knows how to do is send some bytes in a particular format. You need a more complex protocol to handle acknowledgements.
Fortunately, Twisted comes with a few. twisted.protocols.amp is one: it offers remote method calls (complete with responses) as a basic feature. I find that AMP is suitable for a wide range of applications so it's often safe to recommend for new development. It largely supersedes the older twisted.spread (aka "PB") which also provides both remote method calls and remote object references (and is therefore more complex - in my experience, more complex than most applications need). There are also some options that are a bit more standard: for example, Twisted Web includes an HTTP implementation (HTTP, as you may know, is good at request/response style interaction).

UDP using socket API

My server uses UDP. It sends 900 bytes every 1 ms to my program automatically after being acquired. I'm using the socket API in Windows (VB6). I made a test and I know that my program's message processing time (about 0.3 ms) is shorter than the cycle time (1 ms). So the cause should be the socket's internal receive buffer. I tried calling the setsockopt function to set a bigger buffer:
setsockopt(SockNum, SOL_SOCKET, SO_RCVBUF, SockBuffer(1), 1048576)
but I still lose data. How can I fix this?
I'm using the recv function to receive data. Would recvfrom be better?
Furthermore, I need to make a FIFO buffer for UDP. How can I do that (i.e., algorithms or examples)?
In your question you seem to be complaining about using UDP and losing data.
If you are using UDP, you are going to lose data. The way that you avoid losing data is to use TCP, not UDP. If you try to take the User Datagram Protocol and add reliable delivery of data to it, you will end up with something that has all of the flow-control and data windowing of TCP... except it won't be implemented as well as you want.
Remember, "Those who do not understand TCP are doomed to reinvent it.... poorly"

iPhone: Sending large data with Game Kit

I am trying to write an app that exchanges data with other iPhones running the app through the Game Kit framework. The iPhones discover each other and connect fine, but the problem happens when I send the data. I know the iPhones are connected properly because when I serialize an NSString and send it through the connection it comes out fine on the other end. But when I try to archive a larger object (using NSKeyedArchiver) I get the error message "AGPSessionBroadcast failed (801c0001)".
I am assuming this is because the data I am sending is too large (my files are about 500k in size, Apple seems to recommend a max of 95k). I have tried splitting up the data into several transfers, but I can never get it to unarchive properly at the other end. I'm wondering if anyone else has come up against this problem, and how you solved it.
I had the same problem w/ files around 300K. The trouble is the sender needs to know when the receiver has emptied the pipe before sending the next chunk.
I ended up with a simple state engine that ran on both sides. The sender transmits a header with how many total bytes will be sent and the packet size, then waits for acknowledgement from the other side. Once it gets the handshake it proceeds to send fixed size packets each stamped with a sequence number.
The receiver gets each one, reads it and appends it to a buffer, then writes back to the pipe that it got the packet with that sequence #. The sender reads the packet #, slices out another buffer's worth, and so on and so forth. Each side keeps track of the state it's in (idle, sending header, receiving header, sending data, receiving data, error, done, etc.). The two sides have to keep track of when to read/write the last fragment, since it's likely to be smaller than the full buffer size.
This works fine (albeit a bit slow) and it can scale to any size. I started with 5K packet sizes but it ran pretty slow. Pushed it to 10K but it started causing problems so I backed off and held it at 8096. It works fine for both binary and text data.
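A rough sketch of the sending side of such a scheme (the property names, chunk size and 4-byte header layout are assumptions, not the poster's actual code); each chunk carries a sequence number, and the next one only goes out after its acknowledgement comes back:

// Hypothetical chunked send over a GKSession-style transport.
static const NSUInteger kChunkSize = 8096;

- (void)sendNextChunk
{
    NSUInteger offset = self.nextSequence * kChunkSize;
    if (offset >= self.pendingData.length) {
        return;   // everything sent; the receiver has reassembled the full buffer
    }
    NSUInteger length = MIN(kChunkSize, self.pendingData.length - offset);

    // 4-byte big-endian sequence number header followed by the chunk payload.
    uint32_t seq = CFSwapInt32HostToBig((uint32_t)self.nextSequence);
    NSMutableData *packet = [NSMutableData dataWithBytes:&seq length:sizeof(seq)];
    [packet appendData:[self.pendingData subdataWithRange:NSMakeRange(offset, length)]];

    NSError *error = nil;
    [self.session sendData:packet
                   toPeers:self.peerIDs
              withDataMode:GKSendDataReliable
                     error:&error];
    // Do not send more yet: sendNextChunk is called again (with nextSequence
    // incremented) once the ACK for this sequence number arrives.
}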
Bear in mind that GameKit isn't a general file-transfer API; it's meant more for updates of where the player is, what the current location of other objects is, etc. So sending 300k for a game doesn't seem that sensible, though I can understand hijacking the API for general sharing mechanisms.
The problem is that it isn't a TCP connection; it's more like a UDP (datagram) connection. In these cases, the data isn't a stream (which gets packetized by TCP) but rather one giant chunk of data. (Technically, UDP can be fragmented into multiple IP packets - but lose one of those and the entire UDP datagram is lost, as opposed to TCP, which will retry.)
The MTU for most wired networks is ~1.5k; for bluetooth, it's around ~0.5k. So any UDP packet that you sent (a) may get lost, (b) may be split into multiple MTU-sized IP packets, and (c) if one of those packets is lost, then you will automatically lose the entire set.
Your best strategy is to emulate TCP - send out packets with a sequence number, so the receiving end can request retransmission of any packets that went missing. If you're using the equivalent of an NSKeyedArchiver, then one suggestion is to iterate through the keys and write those out as individual keys (assuming each keyed value isn't that big on its own). You'll need some kind of ACK sent back for each packet, and a total ACK when you're done, so the sender knows it's OK to drop the data from memory.
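A sketch of that per-key idea (the dictionary accessor and the session variable are hypothetical, and each value is assumed to conform to NSCoding and to be reasonably small on its own):

// Hypothetical per-key sending: archive each value separately instead of the
// whole object graph, so every individual message stays small.
NSDictionary *values = [myObject dictionaryRepresentation];   // assumed accessor
for (NSString *key in values) {
    NSDictionary *fragment = [NSDictionary dictionaryWithObject:[values objectForKey:key]
                                                          forKey:key];
    NSData *packet = [NSKeyedArchiver archivedDataWithRootObject:fragment];
    [session sendDataToAllPeers:packet withDataMode:GKSendDataReliable error:NULL];
}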