Java and its signed bytes: Sending hex information via UDP possible?

I am currently working on an application to control my RGBWW light strips from a Java application.
The commands have to be sent as UDP packets in order to be understood by the controller.
Unfortunately, the hex value 0x80 has to be sent, which is causing some problems.
Whenever I send a byte array containing only values from 0x00 to 0x79 (using a DatagramPacket and a DatagramSocket), I do get a UDP packet popping up on my network monitor.
As soon as I include the value 0x80 or anything higher, I see two things happen:
1: I no longer get plain UDP packets; the messages are displayed as RTP / RTCP most of the time.
2: Integer.toHexString() does not display "80" for that byte, but gives me "ffffff80".
My question: Is there something I am missing when it comes to sending hex data via UDP? Or is there another way of sending it, possibly avoiding the annoyingly signed bytes?
Unfortunately, I did not find any information that significantly helped me with this issue, but I hope you can help me!
Thanks in advance!
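For later readers, here is a minimal sketch of the kind of sender being described, assuming a plain DatagramSocket; the command bytes, controller address, and port below are placeholders, not the controller's real protocol:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class LedCommandSender {
    public static void main(String[] args) throws Exception {
        // 0x80 does not fit into Java's signed byte as a positive literal, so it must be cast;
        // the bit pattern that goes on the wire is still 1000 0000 either way.
        byte[] command = { 0x31, 0x00, 0x00, 0x00, (byte) 0x80, 0x0F }; // placeholder payload

        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(command, command.length,
                    InetAddress.getByName("192.168.0.50"), 5577));      // placeholder address/port
        }

        // Mask with 0xFF before printing; otherwise the byte is sign-extended to an int
        // and Integer.toHexString() shows "ffffff80" instead of "80".
        System.out.println(Integer.toHexString(command[4] & 0xFF));
    }
}

The sign extension only affects how the value is displayed inside Java; the byte that leaves the socket is identical whether it is written as 0x80 or as -128.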

Related

USB HID: why should I write "null" to the control pipe in the OUT endpoint interrupt

Digging around with/for HID reports, I ran into a strange problem with a USB HID device. I'm implementing an HID-class device and have based my program on the HID USB example supplied by Keil. Some code has been changed in this project, and it seems to work fine with 32-byte input and 32-byte output reports. Somehow, after thousands of data transfers, Endpoint 1 OUT would hang and become a bad pipe. I searched Google for tips, and one topic suggested that you should write a zero-length packet after sending a packet whose length matches what you defined in the report descriptor. But that is not working for me. Then I wrote a zero-length packet to the control pipe after receiving an OUT packet and, magically, it works! It never hangs, even after millions of transfers!
Here is my question: why does it work after writing a zero-length packet to the control pipe? The data transfers on the OUT pipe should have no relationship with the data on the control pipe. It confuses me!
If you transfer any data that is less than the expected payload size, you must send a zero-length packet to indicate that the transfer is complete.
But this depends heavily on the host controller's implementation; not all devices follow the specification to the letter, and some may stall.
Source:
When do USB Hosts require a zero-length IN packet at the end of a Control Read Transfer?

Trouble with RTMP ingest chunk stream

I am trying to build my own client RTMP library for an app that I am working on. So far everything has gone pretty successfully: I am able to connect to the RTMP server, negotiate the handshake, and then send all the necessary packets (FCPublish, Publish, etc.). From the server I then get the onStatus message NetStream.Publish.Start, which means that I have successfully got the server to allow me to start publishing my live video broadcast. Wireshark also confirms that the information (/data packetizing) is correct, as it shows up correctly there as well.
Where I am having some trouble is RTMP chunking. Going off the Adobe RTMP specification, pages 17-18 show an example of how a message is chunked; from this example I can see that it is broken down based on the chunk size (128 bytes). For me the chunk size is negotiated in the initial connect and exchange, and is always 4096 bytes. So when I am sending video data larger than 4096 bytes, I need to chunk the message: send the RTMP packet header combined with the first 4096 bytes of data, then send a small one-byte RTMP header, 0xc4 (0xc0 | packetHeaderType (0x04)), combined with the next 4096 bytes of video data, until the full payload specified by the header has been sent. Then a new frame comes in and the same process is repeated.
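For illustration, here is a minimal Java sketch of that splitting (Java only to keep this page's examples in one language; the class and method names are mine, and 0xC4 assumes chunk stream ID 4, i.e. 0xC0 | 0x04, as in the question):

import java.io.ByteArrayOutputStream;

final class RtmpChunker {
    // Splits one RTMP message payload into chunks of at most chunkSize bytes,
    // inserting the one-byte continuation ("type 3") header before every chunk
    // after the first one.
    static byte[] chunk(byte[] messageHeader, byte[] payload, int chunkSize) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(messageHeader, 0, messageHeader.length); // full RTMP header for the first chunk
        int offset = 0;
        while (offset < payload.length) {
            int len = Math.min(chunkSize, payload.length - offset);
            out.write(payload, offset, len);
            offset += len;
            if (offset < payload.length) {
                out.write(0xC4); // continuation header before the next chunk
            }
        }
        return out.toByteArray();
    }
}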
By checking other RTMP client examples written in different languages, this seems to be what they are all doing. Unfortunately, the ingest server that I am trying to stream to is not picking up the broadcast video data. It doesn't close the connection on me; it just never shows video or any sign that the video is right. Wireshark shows that after the video atom packet is sent, most packets are listed as Unknown (0x0) for a little while, and then they switch to Video Data and sort of flip-flop between Unknown (0x0) and Video Data. However, if I restrict my maximum payload size to 20000 bytes, Wireshark shows everything as Video Data. Obviously the ingest server will not show video in this situation, as I am removing chunks of data to keep each payload to only 20k bytes.
Trying to figure out what is going wrong, I started another Xcode project that lets me spoof an RTMP server on my LAN so that I can see what the data looks like from libRTMP iOS as it comes into the server. With libRTMP's packet logging I can see that it injects the byte 0xc4 every 128 bytes, even though I have sent the Set Chunk Size message as the server. When I try to replicate this in my RTMP client library by just using a 128-byte chunk size, even though it has been negotiated to 4096 bytes, the server closes my connection on me. However, if I point libRTMP at the live RTMP server, it still logs that it is sending packets with a chunk size of 128, and that server seems to accept it, as video shows up. When I look at the data coming into my own RTMP server, I can see that it is all there.
Anyone have any idea what could be going on?
While I haven't worked specifically with RTMP, I have worked with RTSP/RTP/RTCP pretty extensively, so based on that experience and the bruises I picked up along the way, here are some random, possibly applicable tips and things to look for that might be causing an issue:
Does your video encoding match what you're telling the server? In other words, if your video is encoded as H.264, is that what you're specifying to the server?
Does the data match the container format that the server is expecting? For example, if the server expects to receive an MPEG-4 movie (.m4v) file but you're sending only an encoded MPEG-4 (.mp4) stream, you'll need to encapsulate the MPEG-4 video stream into an MPEG-4 movie container. Conversely, if the server is expecting only a single MPEG-4 video stream but you're sending an encapsulated MPEG-4 Movie, you'll need to de-mux the MPEG-4 stream out of its container and send only that content.
Have you taken into account the MTU of your transmission medium? Regardless of chunk size, an MTU mismatch between the client and server can be hard to debug (and is possibly why you're seeing some packets listed as "Unknown" type and others as "Video Data" type). Much of this is taken care of by most OSes' built-in segmentation-and-reassembly (SAR) infrastructure as long as the MTU is consistent, but in cases where you have to do your own SAR logic it's very easy to get this wrong.
Have you tried capturing traffic in Wireshark with libRTMP iOS and your own client and comparing the packets side by side? Sometimes a "reference" packet trace can be invaluable in finding that one little bit (or many) that didn't originally seem important.
Good luck!

UDP packet fragmentation

After reading dozens of articles I can't find the answer to a simple question: can a UDP datagram arrive fragmented? I know it can get fragmented on the way if its size is above roughly 576 bytes, but will it be merged back together by the time it arrives?
In other words, if I send a single packet via udp::socket::send_to(), can I assume that if it's not dropped on the way, I'll retrieve it by a single call to udp::socket::async_receive_from()?
The OS network stack will reassemble the fragments and hand the complete datagram to user space. If one of the fragments is lost, user space will not receive the rest of the datagram; it receives nothing.
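A small Java sketch of the same point (the 3000-byte size is arbitrary; on a typical 1500-byte-MTU link it would be fragmented at the IP layer, yet one receive call still returns the whole datagram):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class ReassemblyDemo {
    public static void main(String[] args) throws Exception {
        byte[] payload = new byte[3000]; // larger than a typical 1500-byte Ethernet MTU

        try (DatagramSocket receiver = new DatagramSocket(0);
             DatagramSocket sender = new DatagramSocket()) {
            sender.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));

            DatagramPacket in = new DatagramPacket(new byte[65535], 65535);
            receiver.receive(in); // blocks until the reassembled datagram arrives, or never returns it at all
            System.out.println("received " + in.getLength() + " bytes in a single call");
        }
    }
}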

iOS Packet Length

I am writing a small app that essentially swaps XML back and forth, a la SOAP. I have an OS X-based server and an iPad client. I use KissXML on the client and the built-in XML parser on the server. I use GCDAsyncSocket on both to communicate.
When I test my app on the iPad simulator, the full XML comes through. Everything works fine.
However, when I use my development device (an actual physical iPad), everything else works fine, but the XML terminates after the 1426th character. I have verified that this error occurs on multiple iPads.
When I subscribe to the incoming packets on GCDAsyncSocket I use

[sock readDataWithTimeout:-1
                   buffer:[NSMutableData new]
             bufferOffset:0
                maxLength:0
                      tag:0];

and previously just a simple [sock readDataWithTimeout:-1 tag:0]; but both have the same result. It seems that GCDAsyncSocket is not to blame at any rate, since execution is fine on the simulator. Note that the 0 for maxLength indicates an 'infinite' buffer.
Does anyone have any idea what could be causing this?
1426 sounds very much like the MTU (Maximum Transmission Unit), which caps how much TCP data you can send in a single packet. It's a different size on different network media and configurations, but 1426 is pretty common.
This suggests that you're confusing the reception of a TCP packet with the completion of an XML message. There is no guarantee that TCP packets will end on an XML message boundary. GCDAsyncSocket is a low-level library that talks TCP, not XML.
As you get each packet, it's your responsibility to concatenate it onto an NSMutableData and then to decide when you have enough to process it. If your protocol closes the connection after every message, then you can read until the connection is closed. If not, then you will have to deal with the fact that a given packet might even include some of the next message. You'll have to parse the data sufficiently to decide where the boundaries are.
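To make that concrete, here is a small sketch of that buffer-and-split approach (written in Java simply to keep this page's examples in one language, and assuming a "\r\n\r\n" message terminator like the one used in the solution below):

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

final class DelimiterFramer {
    private static final byte[] DELIM = "\r\n\r\n".getBytes(StandardCharsets.UTF_8);
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    // Append whatever the socket just delivered, then pull out every complete message.
    List<String> feed(byte[] chunk, int length) {
        buffer.write(chunk, 0, length);
        byte[] data = buffer.toByteArray();
        List<String> messages = new ArrayList<>();
        int start = 0, idx;
        while ((idx = indexOf(data, DELIM, start)) >= 0) {
            messages.add(new String(data, start, idx - start, StandardCharsets.UTF_8));
            start = idx + DELIM.length;
        }
        buffer.reset();
        buffer.write(data, start, data.length - start); // keep the trailing partial message
        return messages;
    }

    private static int indexOf(byte[] haystack, byte[] needle, int from) {
        outer:
        for (int i = from; i <= haystack.length - needle.length; i++) {
            for (int j = 0; j < needle.length; j++) {
                if (haystack[i + j] != needle[j]) continue outer;
            }
            return i;
        }
        return -1;
    }
}

feed() would be called with each chunk the socket hands you; whatever remains in the buffer afterwards is the beginning of the next, not-yet-complete message.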
BTW, it is very possible that your Mac has a different MTU than your iPad, which is why you may be seeing different behavior on the different platforms.
The solution was that, when left unspecified, AsyncSocket reads up to the next line return. When the packet terminates, it indeed returns the line. I was using (sock is my GCDAsyncSocket object)
[sock readDataWithTimeout:-1 tag:0]
but have since moved to
[sock readDataToData:[msgTerm dataUsingEncoding:NSUTF8StringEncoding]
         withTimeout:-1
                 tag:0]
where msgTerm is an external constant NSString defined as "\r\n\r\n" and shared between the client and server source. This effectively works around the line return prematurely ending the packet.
One additional note regarding this solution: because I am using a SOAP-like protocol, the whitespace is not an issue. However, if yours is temperamental about trailing whitespace lines, you can clean it up with something like [incomingDecodedNsstringMessage stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]].
Having had a look at the code for GCDAsyncSocket, I'd say it is entirely possible there is a bug in it. For instance, if you are reading a secure socket on iPhone, the CFSocket mechanism is used instead of ordinary Unix-style file descriptors, and the author may be making invalid assumptions about when a socket is closed. Since you have the source code, I'd try stepping through it with a debugger to see if end-of-file is being flagged prematurely.
TCP is a stream-based protocol. Theoretically, the packet size of the underlying IP protocol should make no difference, but if you read the socket fast enough you may well get your data in chunks the size of the IP packets, especially if the IP stack is somehow tuned for memory use (guessing here!).

iPhone: Sending large data with Game Kit

I am trying to write an app that exchanges data with other iPhones running the app through the Game Kit framework. The iPhones discover each other and connect fine, but the problem happens when I send the data. I know the iPhones are connected properly, because when I serialize an NSString and send it through the connection it comes out fine on the other end. But when I try to archive a larger object (using NSKeyedArchiver) I get the error message "AGPSessionBroadcast failed (801c0001)".
I am assuming this is because the data I am sending is too large (my files are about 500k in size; Apple seems to recommend a max of 95k). I have tried splitting the data into several transfers, but I can never get it to unarchive properly at the other end. I'm wondering if anyone else has come up against this problem, and how you solved it.
I had the same problem with files around 300K. The trouble is that the sender needs to know when the receiver has emptied the pipe before sending the next chunk.
I ended up with a simple state engine that ran on both sides. The sender transmits a header with the total number of bytes to be sent and the packet size, then waits for an acknowledgement from the other side. Once it gets the handshake, it proceeds to send fixed-size packets, each stamped with a sequence number.
The receiver gets each one, reads it, appends it to a buffer, then writes back to the pipe that it got the packet with that sequence number. The sender reads the packet number, slices out another buffer's worth, and so on and so forth. Each side keeps track of the state it's in (idle, sending header, receiving header, sending data, receiving data, error, done, etc.). The two sides also have to keep track of when to read/write the last fragment, since it's likely to be smaller than the full buffer size.
This works fine (albeit a bit slow) and it can scale to any size. I started with 5K packet sizes, but that ran pretty slow. I pushed it to 10K, but that started causing problems, so I backed off and held it at 8096. It works fine for both binary and text data.
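A rough sketch of that packet layout (the field layout, names, and use of Java are illustrative assumptions, not the actual code described above):

import java.nio.ByteBuffer;

final class ChunkProtocol {
    static final int CHUNK_SIZE = 8096; // the size the answer above settled on

    // Header announcing the transfer: total byte count followed by the chunk size.
    static byte[] header(int totalBytes) {
        return ByteBuffer.allocate(8).putInt(totalBytes).putInt(CHUNK_SIZE).array();
    }

    // Data packet #seq: a sequence number followed by that slice of the payload.
    static byte[] packet(byte[] payload, int seq) {
        int offset = seq * CHUNK_SIZE;
        int len = Math.min(CHUNK_SIZE, payload.length - offset); // last fragment may be short
        ByteBuffer buf = ByteBuffer.allocate(4 + len).putInt(seq);
        buf.put(payload, offset, len);
        return buf.array();
    }

    // Receiver's acknowledgement: echo the sequence number it has finished reading.
    static byte[] ack(int seq) {
        return ByteBuffer.allocate(4).putInt(seq).array();
    }
}

The sender would wait for ack(n) before building packet(payload, n + 1), which is what keeps the pipe from being overrun.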
Bear in mind that Game Kit isn't a general file-transfer API; it's meant more for updates of where the player is, what the current locations of other objects are, etc. So sending 300k for a game doesn't seem that sensible, though I can understand hijacking the API for general sharing mechanisms.
The problem is that it isn't a TCP connection; it's more like a UDP (datagram) connection. In that case, the data isn't a stream (which gets packetized by TCP) but rather one giant chunk of data. (Technically, UDP can be fragmented into multiple IP packets, but lose one of those and the entire UDP datagram is lost, as opposed to TCP, which will retry.)
The MTU for most wired networks is ~1.5k; for bluetooth, it's around ~0.5k. So any UDP packet that you sent (a) may get lost, (b) may be split into multiple MTU-sized IP packets, and (c) if one of those packets is lost, then you will automatically lose the entire set.
Your best strategy is to emulate TCP: send out packets with a sequence number, and have the receiving end request retransmission of any packets that went missing. If you're using the equivalent of an NSKeyedArchiver, one suggestion is to iterate through the keys and write those out as individual keys (assuming each keyed value isn't that big on its own). You'll need some kind of ACK sent back for each packet, and a total ACK when you're done, so the sender knows it's OK to drop the data from memory.