I'm executing 4 startup commands and expecting to receive 4 responses. The server is already implemented, and another dev working on Android receives those 4 responses separately. I, however, get 2 good (separate) responses, and then the 3rd and 4th responses arrive as one. I've placed an NSLog of the NSData result in completeCurrentRead, and it outputs the merged packet "0106000000000b0600000000" instead of the separate packets "010600000000" and "0b0600000000". I've also tested the 3rd and 4th commands separately (only one at a time), and everything is OK on the server side: it sends them separately. The merge (of the 3rd and 4th) only occurs if all four commands are executed in a row. Any ideas?
UPDATE: I think I've traced the problem to its root. There's a call that reads packet data from a stream in the doBytesAvailable method:
CFIndex result = [self readIntoBuffer:subBuffer maxLength:bytesToRead];
And in readIntoBuffer:maxLength, there's a call (with length == 256):
return CFReadStreamRead(theReadStream, (UInt8 *)buffer, length);
So CFReadStreamRead returns an incorrect packet length: it returns 12 (instead of 6) and also grabs the merged data. Hmm, what might be causing CFReadStreamRead to read two packets into one instead of reading them separately?
UPDATE2: I'm using the onSocket:didReadData:withTag: delegate method and expecting to receive response data tagged with the tag of the request I performed. I have recently realized that streams are streams, not packets, but how can I solve this? Server responses do not have terminating characters at the start and end; there is just a response size, which comes as 2 to 5 bytes. I can cut off the first part of the response (the first packet) and ignore the second part, but how will AsyncSocket then make another callback with the second part of the response (the second packet)? If I cut only the first part and ignore the second, then IMHO the second "packet" will be lost...
How can I cut off the first part of the response and tell AsyncSocket to deliver the second part of the response in a separate callback with its own tag?
UPDATE3: In onSocket:didReadData:withTag:, I manually cut the merged response, handle the first part (the first packet), and then at the end call onSocket:didReadData:withTag: again myself:
if (isMergedPacket) {
    ...
    [self onSocket:sock didReadData:restPartOfTheResponse withTag:myCommandTag];
}
However, it looks like AsyncSocket itself pairs every request packet with its response packet (via the AsyncReadPacket class) using tags. So my manual cutting works, but AsyncSocket does not know that I have already handled both packets, and it still tries to read the second one. As a result, I get the onSocket:shouldTimeoutReadWithTag:... callback, which is called when a read operation has reached its timeout without completing.
Found the solution. It's not necessary to dig into or change AsyncSocket. You just need to define the length of each response: how many bytes you are interested in reading before getting your callback. More info can be found in another post here.
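In AsyncSocket terms that means issuing one readDataToLength: read for the size prefix and another for the body, so each callback delivers exactly one frame. For readers outside Cocoa, here is a minimal C sketch of the same length-prefixed framing idea; the 2-byte big-endian prefix and the helper names (read_exact, read_response) are illustrative assumptions, not the actual protocol:

    #include <stdint.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Read exactly len bytes from fd, looping because a TCP stream may
     * deliver fewer bytes than requested per read(). */
    static int read_exact(int fd, uint8_t *buf, size_t len)
    {
        size_t got = 0;
        while (got < len) {
            ssize_t n = read(fd, buf + got, len - got);
            if (n <= 0)
                return -1;              /* error or peer closed */
            got += (size_t)n;
        }
        return 0;
    }

    /* One framed response: a 2-byte big-endian length prefix (an assumed
     * format; adapt to the real wire protocol), then that many bytes. */
    static int read_response(int fd, uint8_t *payload, size_t max)
    {
        uint8_t prefix[2];
        if (read_exact(fd, prefix, sizeof prefix) != 0)
            return -1;
        size_t len = ((size_t)prefix[0] << 8) | prefix[1];
        if (len > max)
            return -1;                  /* frame larger than caller's buffer */
        if (read_exact(fd, payload, len) != 0)
            return -1;
        return (int)len;
    }

The loop in read_exact is the crucial part: a TCP read may return fewer bytes than requested, and the stream may batch several application-level "packets" together, so the code has to count bytes itself.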
1. Is it possible to receive multiple messages in one receive call?
Sender pseudo-code:
target = ("xxx.xxx.xxx.xxx", 1234)
sender = new_udp_socket()
sender.send("Hello", target)
sender.send("World", target)
Receiver pseudo-code:
receiver = new_udp_socket()
receiver.bind("", 1234)
while true
    data = receiver.recvfrom(512)
    print(data)
Is it possible that the receiver will receive "HelloWorld" in one receive call instead of "Hello" and "World" separately?
I have been told that it is possible, but I'd like to make sure.
2. If it is possible to receive multiple messages in one receive call, how do I ensure that my code processes both messages separately?
I've been thinking about this but couldn't come up with any solution.
My first idea was to add a byte at the beginning of each send call stating the length of the message.
I don't believe this would be reliable either, because if too much data is in the receiver's buffer, the beginning (the message length) may be cut off, and my program would fail.
Thanks for any help!
After much research I have found an answer to my question.
One recvfrom call will only ever receive the data from one sendto call: UDP preserves datagram boundaries (see the sketch after the sources below).
Sources:
https://stackoverflow.com/a/8748884/1541397
https://stackoverflow.com/a/26185032/1541397
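As a concrete illustration of that answer, here is a minimal C sketch of the receiver pseudo-code above (error handling mostly omitted; the port number comes from the question):

    #include <arpa/inet.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        /* Receiver bound to port 1234, as in the pseudo-code above. */
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(1234);
        bind(sock, (struct sockaddr *)&addr, sizeof addr);

        char buf[512];
        for (;;) {
            /* Each recvfrom() returns at most one datagram: "Hello" and
             * "World" arrive in two separate calls, never concatenated. */
            ssize_t n = recvfrom(sock, buf, sizeof buf, 0, NULL, NULL);
            if (n < 0)
                break;
            printf("%.*s\n", (int)n, buf);
        }
        close(sock);
        return 0;
    }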
I have a GSM module hooked up to a PIC18F87J11, and they communicate just fine. I can send an AT command from the microcontroller and read the response back. However, I have to know how many characters are in the response so I can have the PIC wait for that many characters. But if an error occurs, the response length might change. What is the best way to handle such a scenario?
For example:
AT+CMGF=1
Will result in the following response.
\r\nOK\r\n
So I have to tell the PIC to wait for 6 characters. However, if the response were an error message, it would be something like this:
\r\nERROR\r\n
And if I have already told the PIC to wait for only 6 characters, it will miss the rest of the characters; as a result, they might show up the next time I tell the PIC to read the response to a new AT command.
What is the best way to find the end of the line automatically and handle any error messages?
Thanks!
In a single line
There is no single best way, only trade-offs.
In detail
The problem can be divided into two related subproblems.
1. Receiving messages of arbitrary finite length
The trade-offs:
available memory vs implementation complexity;
bandwidth overhead vs implementation complexity.
In the simplest case, the amount of available RAM is not restricted. We just use a buffer wide enough to hold the longest possible message and keep receiving the messages bytewise. Then, we have to determine somehow that a complete message has been received and can be passed to further processing. That essentially means analyzing the received data.
2. Parsing the received messages
Analyzing the data in search of its syntactic structure is, by definition, parsing. And that is where the subtasks are related. Parsing in general is a very complex topic, and dealing with it is expensive, both computationally and in engineering effort. It's often possible to reduce the costs if we limit the genericity of the data: the simpler the data structure, the easier it is to parse. And that limitation is called a "transport layer protocol".
Thus, we have to read the data to parse it, and parse the data to read it. This kind of interlocked problem is generally solved with coroutines.
In your case we have to deal with the AT protocol. It is old and it is human-oriented by design. That's bad news, because parsing it correctly can be challenging despite how simple it can look sometimes. It has some terribly inconvenient features, such as '+++' escape timing!
Things become worse when you're short of memory. In that situation we can't defer parsing until the end of the message, because the message very well might not even fit in the available RAM; we have to parse it chunkwise.
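To illustrate that chunkwise, coroutine-like style, here is a minimal C sketch (the line-oriented framing and all names are assumptions for the example): the parser keeps its state in a struct, consumes whatever chunk has arrived, and resumes where it left off on the next one:

    #include <stddef.h>

    /* Parser state survives between chunks, so the message never has to
     * fit in RAM all at once. */
    struct at_parser {
        int saw_cr;        /* did the previous byte end with '\r'? */
        size_t line_len;   /* bytes seen on the current line so far */
    };

    /* Feed one received chunk to the parser. Returns the number of
     * complete lines found in this chunk; real code would dispatch each
     * line (OK / ERROR / data) to a handler instead of just counting. */
    static int at_parser_feed(struct at_parser *p, const char *chunk, size_t n)
    {
        int lines = 0;
        for (size_t i = 0; i < n; i++) {
            char c = chunk[i];
            if (p->saw_cr && c == '\n') {
                if (p->line_len > 0)   /* ignore the blank delimiter lines */
                    lines++;
                p->line_len = 0;
                p->saw_cr = 0;
            } else if (c == '\r') {
                p->saw_cr = 1;
            } else {
                p->saw_cr = 0;
                p->line_len++;
            }
        }
        return lines;
    }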
...And we are not even close to opening TCP connections or making calls! You'll meet some unexpected troubles there as well, such as the dreaded "unsolicited result codes". The matter is wide enough for a whole book. Please have a look at least here: http://en.wikibooks.org/wiki/Serial_Programming/Modems_and_AT_Commands. The wikibook discloses many more problems with the Hayes protocol and describes some approaches to solving them.
Let's break the problem down into some layers of abstraction.
At the top layer is your application. The application layer deals with the response message as a whole and understands the meaning of a message. It shouldn't be mired in details such as how many characters to expect.
The next layer is responsible for framing a message from the stream of characters. Framing means extracting the message from the stream by identifying its beginning and end.
The bottom layer is responsible for reading individual characters from the port.
Your application could call a function such as GetResponse(), which implements the framing layer. And GetResponse() could call GetChar(), which implements the bottom layer. It sounds like you've got the bottom layer under control and your question is about the framing layer.
A good pattern for framing a stream of characters into a message is to use a state machine. In your case the state machine includes states such as BEGIN_DELIM, MESSAGE_BODY, and END_DELIM. For more complex serial protocols other states might include MESSAGE_HEADER and MESSAGE_CHECKSUM, for example.
Here is some very basic code to give you an idea of how to implement the state machine in GetResponse(). You should add various types of error checking to prevent a buffer overflow and to handle dropped characters and such.
#include <stdbool.h>

/* Framing states for a "\r\n<body>\r\n" message. */
enum { BEGIN_DELIM1, BEGIN_DELIM2, MESSAGE_BODY, END_DELIM };

extern char GetChar(void);   /* bottom layer: blocking single-character read */

void GetResponse(char *message_buffer)
{
    unsigned int state = BEGIN_DELIM1;
    bool is_message_complete = false;
    while (!is_message_complete)
    {
        char c = GetChar();
        switch (state)
        {
        case BEGIN_DELIM1:
            if (c == '\r')            /* comparison, not assignment */
                state = BEGIN_DELIM2;
            break;
        case BEGIN_DELIM2:
            if (c == '\n')
                state = MESSAGE_BODY;
            break;
        case MESSAGE_BODY:
            if (c == '\r')
                state = END_DELIM;
            else
                *message_buffer++ = c;
            break;
        case END_DELIM:
            if (c == '\n')
                is_message_complete = true;
            break;
        }
    }
    *message_buffer = '\0';           /* hand back a proper C string */
}
I'm using CocoaAsyncSocket for an iOS project. I'm trying to read VarInts through an asynchronous interface. The problem is that, unlike something like a String where I can prefix a length, I don't know the length of a VarInt beforehand. It needs to be processed one byte at a time, but since each read operation is asynchronous, other read calls may have been queued in between.
I considered reading into a buffer then processing it, say reading 5 bytes (the max length for a varint-32), and pushing extra bytes back, but that may hang unnecessarily if the varint is only 4 bytes and I'm waiting for a 5th byte to be available.
How can I do this? Also, I cannot change the protocol on the other end, to use fixed size ints.
Here's a snippet of code as Josh requested
- (void)readByte:(void (^)(int8_t))onComplete {
    NSUInteger size = 1;
    int32_t tag = OSAtomicAdd32(1, &_nextTag);
    dispatch_async(self.dispatchQueue, ^{
        [self.onCompleteHandlers setObject:(^void (NSData* data) {
            int8_t x = 0;
            [data getBytes:&x length:size];
            onComplete(x);
        }) forKey:[NSNumber numberWithInteger:((NSInteger) tag)]];
        [self.socket readDataToLength:size withTimeout:-1 tag:tag];
    });
}
A callback is saved in a dictionary, which is then used in the delegate method socket:didReadData:withTag:.
Suppose I'm reading a VarInt byte by byte:
execute read first byte for varint
don't know if we need to read another byte for a varint or not; that depends on the result of the first read
(possible) read another byte for something else
read second byte for varint, but now it's actually the 3rd byte being read
I can imagine using a flag to indicate whether or not I'm in a multipart-read, and a queue to hold reads that should be executed after the multipart-read, and I've started writing it but it's quite messy. Just wondering if there is a standard/recommended/better way to approach this problem.
In short, there are 4 ways to know how much to read from a socket:
1. Read some format that you can infer the length from, like the Content-Length header. This only works if the whole request can be put together before the body is sent.
2. Read until some pattern, like the \r\n\r\n at the end of the headers (see the sketch after this list).
3. Read until some timeout: after you get no bytes for n seconds, you flush the buffers and close the connection.
4. Read until the server closes the connection. This actually used to be pretty common.
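Here is a minimal C sketch of option 2, read-until-pattern; the function name and buffer handling are assumptions for illustration:

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Keep appending to a buffer until the "\r\n\r\n" delimiter shows up.
     * Returns the message length including the delimiter, or -1 on error,
     * closed connection, or a full buffer with no delimiter. Any bytes
     * after the delimiter stay in buf; they belong to the next message. */
    static ssize_t read_until_blank_line(int fd, char *buf, size_t cap)
    {
        size_t used = 0;
        while (used < cap) {
            ssize_t n = recv(fd, buf + used, cap - used, 0);
            if (n <= 0)
                return -1;
            used += (size_t)n;
            /* scan for the terminating blank line */
            for (size_t i = 0; i + 4 <= used; i++)
                if (memcmp(buf + i, "\r\n\r\n", 4) == 0)
                    return (ssize_t)(i + 4);
        }
        return -1;
    }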
These each have problems, and in your case I would probably lean toward using some existing protocol.
Of course there is overhead to doing it that way, and you may find that you don't want to use any of that application-level stuff; your requests may be as simple as:
client>"doMath(2+5)\0"
server>"(7)\0"
but it is hard to answer your general question specifically.
edit:
So I looked into the varint base-128 issue a little more, and I think only a timeout or the server closing the connection will really work if you are writing these right at the TCP level, which is horrible...
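That said, a base-128 varint is self-delimiting: the high bit of each byte says whether another byte follows, so a decoder fed one byte at a time can tell by itself when the value is complete (with CocoaAsyncSocket, that would mean one readDataToLength:1 read per byte until the decoder reports completion). A minimal C sketch, with all names invented for illustration:

    #include <stdint.h>

    /* Incremental base-128 varint decoder. Start with {0, 0}, then feed
     * each byte as it is read from the socket. */
    struct varint32 {
        uint32_t value;
        int shift;
    };

    /* Returns 1 when the value is complete, 0 if another byte is
     * expected, -1 on overflow (more than 5 bytes for a 32-bit value). */
    static int varint32_feed(struct varint32 *v, uint8_t byte)
    {
        if (v->shift >= 35)
            return -1;                   /* malformed: too long */
        v->value |= (uint32_t)(byte & 0x7f) << v->shift;
        v->shift += 7;
        return (byte & 0x80) ? 0 : 1;    /* high bit set: keep reading */
    }

Usage: declare struct varint32 v = {0, 0}; and call varint32_feed for each byte until it returns 1 (the result is in v.value) or -1 (malformed input).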
I've been reading the Redis source code recently, and I'm now studying the networking code.
Redis uses non-blocking sockets and epoll (or something similar) to read and write networking data. When a read event arrives, the readQueryFromClient function is called, and in this function the request data is read into a buffer.
In readQueryFromClient, if data has actually arrived, it is read into the buffer through one 'read' call, and then the request is handled:
nread = read(fd, c->querybuf+qblen, readlen); // one read() call
// ... some other code that checks read()'s return value
processInputBuffer(c); // the request is handled in this function
My question is: how does Redis ensure that all the request data is read into the buffer by only this one 'read' call? Couldn't the data arrive in pieces, requiring more 'read' calls?
processInputBuffer(c); // the request is handled in this function
That part is not true. The Redis protocol is designed to include the length of every chunk of data passed around, so the server always knows how much data it has to read to make a complete request. Inside processInputBuffer, if neither processInlineBuffer nor processMultibulkBuffer returns REDIS_OK (i.e. the request terminator was not found in the buffer, or there were not enough arguments), control simply falls out of the function. All processInputBuffer did in that case was fill up a chunk of the client buffer and update the parsing state. Then, on the next iteration of the event loop, in the call to aeProcessEvents, if there is unread data remaining in the socket buffer, the readQueryFromClient callback is triggered again to parse the remaining data.
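The pattern is easy to see in a stripped-down form. Here is a C sketch of that buffer-and-resume loop (not Redis's actual code; the newline-terminated framing and all names are invented for the example):

    #include <stddef.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Each readable event appends whatever is available to the client's
     * buffer, then the parser carves out as many complete requests as it
     * can. Incomplete data just waits for the next event. */
    struct client {
        char buf[16 * 1024];
        size_t used;
    };

    /* Returns bytes consumed if a full request (here: one '\n'-terminated
     * line) is present, or 0 to wait for more data. */
    static size_t try_parse(struct client *c)
    {
        char *end = memchr(c->buf, '\n', c->used);
        if (end == NULL)
            return 0;                    /* incomplete: keep buffering */
        return (size_t)(end - c->buf) + 1;
    }

    static void on_readable(int fd, struct client *c)
    {
        ssize_t n = read(fd, c->buf + c->used, sizeof c->buf - c->used);
        if (n <= 0)
            return;                      /* error/EOF handling elided */
        c->used += (size_t)n;

        size_t consumed;
        while ((consumed = try_parse(c)) > 0) {
            /* handle_request(c->buf, consumed); */
            memmove(c->buf, c->buf + consumed, c->used - consumed);
            c->used -= consumed;
        }
    }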
I have a sending application that uses TCP to send files. Sometimes these files contain one message, and other times the file may contain multiple messages. Unfortunately, I do not have access to the Sending application's code.
I am working on editing legacy code to receive these messages. I have managed to get the legacy application to accept a file when a single message is sent. However, since I disconnect the socket after receiving a single message, the sender reports a disconnect error.
I wrote a small process to help determine whether there was another message. If it worked, I was going to incorporate it into the code, but I had mixed results:
Dim check(1) As Byte
If (handler.Receive(check, SocketFlags.Peek) > 0) Then
    Dim bytesRec As Integer
    ReDim bytes(1024)
    bytesRec = handler.Receive(bytes)
End If
If there is another message being sent, this detects it. However, if the file contains only a single message, it locks up on Receive until I send another file, and then it is accepted.
Is there a way to tell if there is another message pending that will not lock up if the stream is empty?
I won't post all of the code for accepting the message, as it is a legacy rat's nest, but the general idea is below:
s2 = CType(ar.AsyncState, Socket)
handler = s2.EndAccept(ar)
bytes = New Byte(1024) {}
Dim bytesRec As Integer = handler.Receive(bytes)
' Send Ack/Nak.
numAckBytesSent = handler.Send(myByte)
Thank you in advance for any assistance.
Socket.Select can be used as a quick way of polling a socket for readability. Pass in a timeout of 0 seconds, and the socket in question in the readability list, and it will simply check and report back immediately.
Two other options might be to set Socket.ReceiveTimeout on your socket, or make the socket non-blocking using Socket.Blocking, so that you can find out (as part of the Receive call) whether there is incoming data. These look a bit inconvenient to do in .NET, though, as they throw exceptions rather than simply returning a value, which might make the code a little longer.
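For reference, here is the same zero-timeout polling idea at the BSD sockets level in C, which is what Socket.Select builds on:

    #include <sys/select.h>

    /* A select() call with a zero timeout reports immediately whether fd
     * is readable, without blocking. Returns 1 if a receive would not
     * block, 0 otherwise. */
    static int readable_now(int fd)
    {
        fd_set readfds;
        struct timeval tv = {0, 0};      /* zero timeout: poll and return */
        FD_ZERO(&readfds);
        FD_SET(fd, &readfds);
        return select(fd + 1, &readfds, NULL, NULL, &tv) > 0;
    }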
Just keep reading. If there is nothing left, you will get an end-of-stream indication of some kind, depending on your API.
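With blocking BSD sockets in C, for instance, that indication is recv() returning 0 (a minimal sketch):

    #include <sys/socket.h>
    #include <sys/types.h>

    /* Read until the peer closes its end of the connection. */
    static void drain(int fd)
    {
        char buf[1024];
        ssize_t n;
        while ((n = recv(fd, buf, sizeof buf, 0)) > 0) {
            /* process(buf, n); */
        }
        /* n == 0: orderly shutdown by the peer; n < 0: error. */
    }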