I'm trying to understand the messages I receive from a Concox AT2 tracker, but I'm stuck on this:
After successfully handling the login message, I never receive the 0x12 protocol; the closest thing I get is a 0x2c protocol. This is part of one message I received:
0x78,0x78,0x5d,0x2c,0x14,0x7,0x1d,0x0d,0x1b,0x34,0x01,0xfe,0x59,0x35,0xc8,0x00,0x36,0x5c,0x19,0x35,0xc8,0x00
The 6 bytes of data after the protocol byte correspond to a date and time, as in the location data protocol, but the data after that doesn't seem to correspond to location data.
Can anyone tell me what's wrong here?
Never mind, I read the manual for the wrong device.
I'm writing a very specific application protocol to enable communication between 2 nodes. Node 1 is an embedded platform (a microcontroller), while node 2 is a common computer.
The protocol defines messages of variable length. This means that node 1 sometimes sends a 100-byte message to node 2, and at other times a 452-byte message.
The protocol shall be independent of how the messages are transmitted. For instance, the same message can be sent over USB, Bluetooth, etc.
Let's assume that a protocol message is defined as:
| Length (4 bytes) | ...Payload (variable length)... |
I'm struggling with how the receiver can determine the length of an incoming message. So far, I have thought of 2 approaches.
1st approach
The sender sends the length first (4 bytes, always fixed size), and the message afterwards.
For instance, the sender does something like this:
// assuming that the parameters of send() are: data, length of data
send(msg_length, 4)
send(msg, msg_length - 4)
While the receiver side does:
msg_length = receive(4)
msg = receive(msg_length - 4) // the length field counts itself, matching the sender above
This may be OK with some "physical protocols" (e.g. UART), but with more complex ones (e.g. USB) transmitting the length as a separate packet may introduce some overhead: an additional USB packet (with control data and ACK packets as well) has to be transmitted for only 4 bytes.
However, with this approach the receiver side is pretty simple.
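For illustration, here is a minimal runnable sketch of the first approach in Python over TCP (the transport is an assumption; the question leaves it open). Note that a real receive call may return fewer bytes than requested, so the receiver needs a small loop to read exactly N bytes:

import struct

def send_message(sock, payload):
    # The length field counts the whole frame: 4 header bytes + payload
    header = struct.pack(">I", len(payload) + 4)
    sock.sendall(header + payload)

def recv_exactly(sock, n):
    # recv() may return fewer bytes than requested, so loop until done
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def recv_message(sock):
    msg_length = struct.unpack(">I", recv_exactly(sock, 4))[0]
    return recv_exactly(sock, msg_length - 4)  # subtract the 4 header bytes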
2nd approach
The alternative would be that the receiver keeps receiving data into a buffer and, at some point, tries to find a valid message in it. Valid means: the message length is found first, followed by that many bytes of payload.
Most likely this approach requires adding some "start message" byte(s) at the beginning of the message, such that the receiver can use them to identify where a message is starting.
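A minimal sketch of this second approach in Python; the 2-byte start marker 0xAA 0x55 and the function name are assumptions for illustration:

START = b"\xaa\x55"  # assumed start-of-message marker

def extract_messages(buf):
    # Frame layout assumed: START | length (4 bytes) | payload
    messages = []
    while True:
        start = buf.find(START)
        if start < 0:
            # No marker found; keep the last byte in case the marker
            # is split across two reads
            return messages, buf[-1:]
        buf = buf[start:]
        if len(buf) < 6:               # marker + length field incomplete
            return messages, buf
        length = int.from_bytes(buf[2:6], "big")
        if len(buf) < 6 + length:      # payload incomplete
            return messages, buf
        messages.append(buf[6:6 + length])
        buf = buf[6 + length:]

The receiver appends every chunk it reads to the buffer, calls extract_messages() on it, and keeps the returned remainder for the next read.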
1. Is it possible to receive multiple messages in one receive call?
Sender (Python):
import socket

target = ("xxx.xxx.xxx.xxx", 1234)
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"Hello", target)
sender.sendto(b"World", target)
Receiver (Python):
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("", 1234))
while True:
    data, addr = receiver.recvfrom(512)
    print(data)
Is it possible that the receiver will receive "HelloWorld" in one receive call instead of "Hello" and "World" separately?
I have been told that it is possible, but I'd like to make sure.
2. If it is possible to receive multiple messages in one receive call, how do I ensure that my code processes both messages separately?
I've been thinking about this but couldn't come up with any solution.
My first idea was to add a byte at the beginning of each send call stating the length of the message.
I don't believe this would be reliable either, because if too much data is in the receiver's buffer, the beginning (the message length) may be cut off and my program would fail.
Thanks for any help!
After much research I have found an answer to my question.
One recvfrom call will only ever receive the data from one sendto call. UDP is datagram-oriented, so message boundaries are preserved; if the buffer passed to recvfrom is smaller than the datagram, the excess is typically discarded rather than carried over to the next call.
Sources:
https://stackoverflow.com/a/8748884/1541397
https://stackoverflow.com/a/26185032/1541397
I'm using CocoaAsyncSocket for an iOS project. I'm trying to read VarInts through an asynchronous interface. The problem is that, unlike something such as a String, where I can prefix a length, I don't know the length of a VarInt beforehand. It needs to be processed one byte at a time, but since each read operation is asynchronous, other read calls may have been queued in between.
I considered reading into a buffer and then processing it, say reading 5 bytes (the maximum length of a varint-32) and pushing the extra bytes back, but that may hang unnecessarily if the varint is only 4 bytes long and I'm waiting for a 5th byte to become available.
How can I do this? Also, I cannot change the protocol on the other end, to use fixed size ints.
Here's a snippet of code, as Josh requested:
- (void)readByte:(void (^)(int8_t))onComplete {
    NSUInteger size = 1;
    // Atomically generate a unique tag for this read
    int32_t tag = OSAtomicAdd32(1, &_nextTag);
    dispatch_async(self.dispatchQueue, ^{
        // Save the completion handler under the tag; it is looked up later
        // in the socket:didReadData:withTag: delegate method
        [self.onCompleteHandlers setObject:(^void (NSData *data) {
            int8_t x = 0;
            [data getBytes:&x length:size];
            onComplete(x);
        }) forKey:[NSNumber numberWithInteger:((NSInteger)tag)]];
        [self.socket readDataToLength:size withTimeout:-1 tag:tag];
    });
}
A callback is saved in a dictionary, which is used in the socket:didReadData:withTag: delegate method.
Suppose I'm reading a VarInt byte by byte:
1. the read of the first byte of the varint executes
2. we don't know yet whether the varint needs another byte; that depends on the result of the first read
3. (possibly) another read, for something else, gets queued in between
4. the read of the second varint byte executes, but it actually receives the 3rd byte from the stream
I can imagine using a flag to indicate whether I'm in the middle of a multipart read, plus a queue to hold reads that should run after the multipart read finishes. I've started writing that, but it's getting quite messy. I'm just wondering whether there is a standard/recommended/better way to approach this problem.
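Not CocoaAsyncSocket-specific, but one common structure for this is a tiny incremental decoder that consumes one byte at a time; the next one-byte read is only scheduled while the continuation bit is set, and no other reads are enqueued until the decoder finishes. A sketch in Python (the class and method names are made up for illustration):

class VarintDecoder:
    # Incremental base-128 varint decoder: feed it one byte at a time
    def __init__(self):
        self.value = 0
        self.shift = 0

    def feed(self, byte):
        # Low 7 bits carry data; the high bit means "more bytes follow"
        self.value |= (byte & 0x7F) << self.shift
        self.shift += 7
        if byte & 0x80:
            return None       # incomplete: schedule exactly one more 1-byte read
        return self.value     # complete

Each didReadData: callback feeds its single byte to the decoder; only when feed() returns None do you issue the next 1-byte read, and any unrelated reads wait in your own queue until a value comes back. That serializes the multipart read without scattering flags through the code.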
in short there are 4 ways to know how much to read from a socket (a sketch of the second option follows the list)...
1. read some format that you can infer the length from, like a Content-Length header; this only works if the whole request can be put together before the body is sent.
2. read until some pattern, like \r\n\r\n at the end of the headers.
3. read until some timeout: after you get no bytes for n seconds, you flush the buffers and close the connection.
4. read until the server closes the connection; this actually used to be pretty common.
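A minimal sketch of option 2 in Python, accumulating bytes until the delimiter appears (\r\n\r\n here to match the headers example; max_size is an assumed safety limit):

def read_until(sock, delimiter=b"\r\n\r\n", max_size=65536):
    # Accumulate data until the delimiter shows up, then split there
    buf = b""
    while delimiter not in buf:
        if len(buf) > max_size:
            raise ValueError("delimiter not found within max_size bytes")
        chunk = sock.recv(4096)
        if not chunk:
            raise ConnectionError("peer closed before the delimiter arrived")
        buf += chunk
    head, _, rest = buf.partition(delimiter)
    return head, rest   # 'rest' is data already read past the delimiter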
these each have problems, and in your case I would probably lean toward using some existing protocol.
of course there is overhead to doing it that way, and you may find that you don't want to use any of that application-level stuff, and your requests may look like:
client>"doMath(2+5)\0"
server>"(7)\0"
but it is hard to answer your general question specifically.
edit:
So I looked into the varint base-128 issue a little more, and I think only a timeout or the server closing the connection will really work if you are writing these right at the TCP level, which is horrible...
I have a sending application that uses TCP to send files. Sometimes these files contain one message, and other times the file may contain multiple messages. Unfortunately, I do not have access to the Sending application's code.
I am working on editing legacy code to receive these messages. I have managed to get the legacy application to accept a file when there is a single message sent. However, since I disconnect the socket after receiving a single message, the Sender gives a disconnect error.
I wrote a small process to help determine whether there was another message. If it worked, I was going to incorporate it into the code, but I had mixed results:
Dim check(1) As Byte
' Peek checks whether data is available without consuming it
If (handler.Receive(check, SocketFlags.Peek) > 0) Then
    Dim bytesRec As Integer
    ReDim bytes(1024)
    bytesRec = handler.Receive(bytes)
End If
If there is another message being sent, this will detect it. However, if the file only has a single message, it locks up on Receive until I send another file, and then it is accepted.
Is there a way to tell if there is another message pending that will not lock up if the stream is empty?
I won't post all of the code for accepting the message, as it is a legacy rat's nest, but the general idea is below:
s2 = CType(ar.AsyncState, Socket)
handler = s2.EndAccept(ar)
bytes = New Byte(1024) {}
Dim bytesRec As Integer = handler.Receive(bytes)
' Send Ack/Nak.
numAckBytesSent = handler.Send(myByte)
Thank you in advance for any assistance.
Socket.Select can be used as a quick way of polling a socket for readability. Pass in a timeout of 0 seconds, and the socket in question in the readability list, and it will simply check and report back immediately.
Two other options might be to set Socket.ReceiveTimeout on your socket, or make the socket non-blocking using Socket.Blocking, so that you can find out (as part of the Receive call) whether there is incoming data. These look a bit inconvenient to do in .NET, though, as they throw exceptions rather than simply returning a value, which might make the code a little longer.
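The question is .NET, but the same poll-before-receive idea, with a zero-second timeout, looks like this in Python's select module (which wraps the same underlying mechanism):

import select

def message_pending(sock):
    # A 0-second timeout makes select() report immediately instead of blocking
    readable, _, _ = select.select([sock], [], [], 0)
    return bool(readable)

# Only call Receive when data is already waiting:
# if message_pending(handler): data = handler.recv(1024)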
Just keep reading. If there is nothing left you will get an end-of-stream indication of some kind, depending on your API.
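In Python, for instance, the end-of-stream indication is recv() returning an empty byte string once the peer closes its side of the connection (sock is assumed to be a connected TCP socket):

chunks = []
while True:
    chunk = sock.recv(4096)
    if not chunk:          # b"" means the sender closed the connection
        break
    chunks.append(chunk)
data = b"".join(chunks)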
I'm executing 4 startup commands and expecting to receive 4 responses. The server is already implemented, and another dev, who is developing the Android client, is able to receive those 4 separate responses. However, I'm getting 2 good (separate) responses, and then the 3rd and 4th responses come as one. I've placed an NSLog of the NSData result in completeCurrentRead, and it outputs the merged packet "0106000000000b0600000000" instead of the separate packets "010600000000" and "0b0600000000". I've also tested the 3rd and 4th commands separately (only one at a time) and everything is OK with the server; it sends them separately. The merge (of the 3rd and 4th) only occurs when all four commands are executed in a row. Any ideas?
UPDATE: I think I've traced the problem to its roots. There's a call that reads packet data from a stream in the doBytesAvailable method:
CFIndex result = [self readIntoBuffer:subBuffer maxLength:bytesToRead];
And in readIntoBuffer:maxLength:, there's this call (with length == 256):
return CFReadStreamRead(theReadStream, (UInt8 *)buffer, length);
So CFReadStreamRead returns an unexpected packet length: it returns 12 (instead of 6) and also grabs the merged data. Hmm, what might be causing CFReadStreamRead to read two packets as one instead of reading them separately...
UPDATE2: I'm using the onSocket:didReadData:withTag: delegate method and expecting to receive the response data with the tag of the request I performed. I have recently realized that streams are streams, not packets, but how can I solve this? The server's responses have no terminating chars at the start or end of a response, just a response size that comes as 2 - 5 bytes. I could cut off the first part of the response (the first packet) and ignore the second part, but then how would AsyncSocket make another callback with the second part of the response (the second packet)? If I cut only the first parts and ignore the rest, then IMHO the second "packet" will be lost...
How can I cut off the first part of the response and tell AsyncSocket to make another callback, with a tag and the second part of the response, as a separate callback?
UPDATE3: In onSocket:didReadData:withTag:, I manually cut the merged response apart, handle the first part (the first packet), and then at the end call onSocket:didReadData:withTag: again:
if (isMergedPacket) {
    ...
    [self onSocket:sock didReadData:restPartOfTheResponse withTag:myCommandTag];
}
However, it looks like AsyncSocket itself pairs every request packet with its response packet (via the AsyncReadPacket class) using tags. So my manual cutting works, but AsyncSocket does not know that I have already handled both packets, and it still tries to read the second one. As a result I get the sock:shouldTimeoutReadWithTag:... callback, which is called when a read operation has reached its timeout without completing.
Found the solution. It's not necessary to change or dig into AsyncSocket. You just need to define the length of each response: how many bytes you are interested in reading before getting your callback. More info is in another post here.
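The idea of that fix, sketched in Python rather than the AsyncSocket API: read exactly the size field first, then read exactly that many payload bytes, so each read covers one logical packet and nothing can appear merged. A fixed 2-byte size header is assumed here; the question says the real one is 2 - 5 bytes:

def read_exactly(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed mid-packet")
        buf += chunk
    return buf

def read_packet(sock):
    size = int.from_bytes(read_exactly(sock, 2), "big")  # phase 1: header
    return read_exactly(sock, size)                      # phase 2: payload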