CRC calculation for header CRC and data block using the C# .NET API

I am trying to send a customised protocol message as byte data from my C# application. In this application, I was asked to do a CRC calculation for the header block, and then a second CRC calculation covering the existing CRC (of the header block) and the data block.
So I have two questions related to this issue.
1) Why can't we do the CRC calculation over the header and data together, as a single block?
2) What existing C# API can be used to calculate the CRC for the header block, and then the second CRC over the existing one and the actual data?
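For illustration, here is a minimal sketch of the two-stage calculation. It assumes a CRC-16/CCITT-FALSE polynomial and a big-endian header CRC, neither of which comes from the question; your protocol specification dictates the actual polynomial, initial value, and byte layout. The base class library has no general-purpose CRC type (the System.IO.Hashing NuGet package offers Crc32/Crc64 if those happen to match your spec), so a hand-rolled routine is shown:

using System;

static class Crc16
{
    // CRC-16/CCITT-FALSE (poly 0x1021, init 0xFFFF) -- a placeholder;
    // substitute whatever polynomial and initial value your protocol mandates.
    public static ushort Compute(byte[] bytes, ushort crc = 0xFFFF)
    {
        foreach (byte b in bytes)
        {
            crc ^= (ushort)(b << 8);
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 0x8000) != 0 ? (ushort)((crc << 1) ^ 0x1021)
                                          : (ushort)(crc << 1);
        }
        return crc;
    }
}

class CrcExample
{
    static void Main()
    {
        byte[] header = { 0x01, 0x02, 0x03, 0x04 }; // hypothetical header block
        byte[] data   = { 0x10, 0x20, 0x30 };       // hypothetical data block

        // Stage 1: CRC over the header block alone.
        ushort headerCrc = Crc16.Compute(header);

        // Stage 2: CRC over header + header CRC + data, built as one buffer.
        byte[] full = new byte[header.Length + 2 + data.Length];
        Buffer.BlockCopy(header, 0, full, 0, header.Length);
        full[header.Length]     = (byte)(headerCrc >> 8);   // assumed big-endian
        full[header.Length + 1] = (byte)(headerCrc & 0xFF);
        Buffer.BlockCopy(data, 0, full, header.Length + 2, data.Length);
        ushort fullCrc = Crc16.Compute(full);

        Console.WriteLine("header CRC: 0x{0:X4}, full CRC: 0x{1:X4}", headerCrc, fullCrc);
    }
}

As for question 1: a common reason for a separate header CRC is that the receiver can validate the header on its own (length, message type, addressing) before committing to read the rest of the frame, but only your protocol's specification can confirm whether that is the rationale here.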

Related

MULE 3.7.0 C.E. - BufferInputStream payload turns into String

We are programming a MULE REST service which is divided in several layers.
The API layer (RAML-based) receives the inbound requests and prepares some flowVars so that the lower layers know how to proceed.
The second layer is also defined per service, so there's one flow for each service offered.
Finally, the third layer contains a single flow which, depending on the flowVars configured in the upper layer, calls the required third-party service using an HTTP Request component.
In this third layer, some audit records are written so that we know what we are sending and what we are receiving. Our audit component (a custom MULE connector) needs to write the content of the payload to our database, so a message.getPayloadAsString() (or similar) is needed. If we use a plain getter (like message.getPayload()), only the data type is obtained and written into the database.
The problem lies right here. Every payload received seems to be a BufferInputStream and, when calling message.getPayloadAsString(), an internal cast seems to affect the payload. Normally this wouldn't be a problem, except for one case we have found: one of the services we invoke returns a PNG file, so message.getPayloadAsString() turns it into a String and breaks the image.
We've tried to clone the payload in order to keep one copy safe from the cast but, as an Object, it doesn't implement the Cloneable interface; we've tried to copy the payload in every other way we could think of, but only a new reference is generated; we've tried to serialize the payload to create a new copy from the serialized data, but the Object doesn't implement the Serializable interface either... Nothing has worked.
Any help, idea or piece of advice would be appreciated.
We finally managed to solve the problem by using message.getPayloadAsBytes(), whose return value is a brand new byte[] object. This method doesn't alter the payload within the message either. Using the byte array, we can create a String object to be written to our audit like this:
byte[] auditByteArray = message.getPayloadAsBytes();
String auditString = new String(auditByteArray);
Moreover, we ran a test that set the byte array as the new payload of the message, and both JSON and PNG responses were handled correctly by the browser.

SBJson Stream Parser

I'm working in Xcode 4.3.2, building an app for iOS 5.
I've decided to use SBJson to parse streams of data from our server. I've verified that I'm receiving a valid JSON response from the server. My question concerns the design behind the classes SBJsonStreamParser and the SBJsonParser.
It appears that in SBJsonParser the method "objectWithData" takes the data received from the JSON response and uses the SBJsonStreamParserAccumulator to append the stream of data into a single JSON document. Once the data stream is gathered into one object, it is then parsed by the "parse" method in SBJsonStreamParser.
I've run into several issues when requesting larger JSON documents. The size of the responses seems reasonable (specifically, a 9.4 KB response). It appears that SBJsonStreamParser breaks when given a data stream greater than a certain size: the parser succeeds when the response is small (~3 KB) but fails when the response is larger (~10 KB).
I used NSLog to verify that in both cases, pulling a small and a large stream, the methods successfully receive the full JSON document - because it looks like [{"id": .... 123}]. I'm convinced that the issue is that the data stream is too long.
I'm wondering if I'm using SBJson incorrectly or is this simply a limitation of the parser? Is there anything that I can configure that allows SBJsonStreamParser to not throw an error for larger (but reasonable) data streams & continue to parse the full response?
Thanks in advance!
Actually you have the workings of objectWithData: backwards. SBJsonStreamParserAccumulator is used to accumulate the parsed output, not the unparsed data stream.

How to define a communication protocol?

I'm new to networking concepts and need an explanation of how to implement a communication protocol for sending different types of messages. I'm currently working on a Cocoa app that will send video messages between iPhones. Currently I only send messages of type 3. Here's the app flow I need to implement:
Browsing for available iPhones on the network (using Bonjour)
When an iPhone client is found, send NSData "request contact info" (MessageType1)
iPhone client will send back an NSData instance with contact info (MessageType2)
Init a new message with recorded video, send to selected contact (MessageType3)
When the different types of message are received, they will need to be handled differently. I guess one way to solve this is to add a header to the message that identifies the message type, extract it on the receiver's side, and then handle it like this:
if (messageType == 1)      // MessageType1
    [self sendMyContactInfo:(Contact *)ownInfo];
else if (messageType == 2) // MessageType2
    [self updateViewWithContactInfo:(Contact *)contactInfo];
else if (messageType == 3) // MessageType3
    [self sendMessageToSelectedContact:(Message *)message];
To create a MessageType3 message, I'll do this:
/* Not currently implemented */
NSMutableData *data = [[NSMutableData alloc] init];
int messageType = 3;
[data appendBytes:&messageType length:sizeof(messageType)];
/* Already implemented */
NSData *encodedMessage = [NSKeyedArchiver archivedDataWithRootObject:message];
[data appendData:encodedMessage];
[self sendMessage:(NSData *)data];
Is this a nice way of doing it? If so, should the protocol rules be defined in a more formal way, e.g. in a separate class or something? I'm looking for the best overall solution here, so don't take too much notice of my drawings if there's a better way to do it...
Is this a nice way of doing it?
It's a standard way for defining a communications protocol. From the Wikipedia article:
Digital message bitstrings are exchanged. The bitstrings are divided in fields and each field carries information relevant to the protocol. Conceptually the bitstring is divided into two parts called the header area and the data area. The actual message is stored in the data area, so the header area contains the fields with more relevance to the protocol. The transmissions are limited in size, because the number of transmission errors is proportional to the size of the bitstrings being sent. Bitstrings longer than the maximum transmission unit (MTU) are divided in pieces of appropriate size. Each piece has almost the same header area contents, because only some fields are dependent on the contents of the data area (notably CRC fields, containing checksums that are calculated from the data area contents).
End Wikipedia quote
If so, should the protocol rules be defined in a more formal way, e.g. in a separate class or something?
That's up to you. It's not necessary, since your application is communicating with other copies of your application.

Is it better to create a library with several functions, or to create classes?

I'm developing a piece of software to communicate with a device.
The software will send commands to the device. The device has to answer using the protocol below:
<STX><STX><COMMAND>[<DATA_1><DATA_2>...<DATA_N>]<CHKSUM><ETX>
where:
<STX> is the Start of TeXt (0x55);
<COMMAND> can be 0x01 for read, 0x02 for write, etc;
<DATA> is any value;
<CHKSUM> is the checksum;
<ETX> is the End of TeXt (0x04).
So, I have to validate the received data.
Then, the received data:
cannot be empty;
must have 3 or more characters;
must have a header in the first two characters of the string data;
must have a "footer" in the last character of the string data;
must have a valid checksum.
If the answer is valid, then I can handle the data. But before that, I'll have to extract the data from the response received.
OK, this is a relatively easy task. Previously I would have done it in a procedural way, using only one function and many ifs.
Now that I'm studying more about good programming practices, things seem to be getting harder to do.
To validate the device's answer, is it better to create a class, "ValidateReceivedData" for example, and pass the received data to its constructor? And then create a public method called "IsReceivedDataValid" that checks all the steps given above?
Or would it be better to create a library with several functions to validate the received data?
I'd like to use unit test too.
As I said before, I'm studying in order to write better code. But I realise that I'm spending more time coding now than before. And too many questions are arising; they seem easy to solve, but I'm not getting there.
For what it's worth, I've done this sort of thing before using object-oriented design. Here's a high level possibility for your design:
ProtocolParser class:
Takes a SerialPort object, or equivalent, in the constructor and listens to it for incoming bytes
Passes received bytes to OnByteReceived, which implements the protocol-specific state machine (with states like Unknown, Stx1Received, Stx2Received, ..., CkSumReceived).
After an entire good message is received, creates an object of type Packet, which accepts a byte list in its constructor. It then raises an event PacketReceived, passing the Packet as an argument.
If a bad byte is received, it raises an event BadDataReceived and passes the bad data (for logging/debugging purposes, perhaps).
Packet class:
Takes a list/array of bytes and stores them as Command and Data properties.
Does not need to save the checksum, as this class is only meant to represent a valid packet.
The above classes are sufficient to implement the receive protocol. You should be able to test it by mocking a SerialPort class (i.e., the ProtocolParser could actually take an IDataSource instead of a SerialPort).
You could then add a higher-level class to implement your device-specific functions, which would listen to the PacketReceived event of the ProtocolParser.
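For what it's worth, a minimal C# sketch of that design might look like the following. Two things here are assumptions rather than part of your spec: the checksum is computed as a simple byte-sum over COMMAND and DATA, and DATA is assumed never to contain the ETX byte (a real protocol usually adds a length field or escaping to handle that).

using System;
using System.Collections.Generic;
using System.Linq;

public sealed class Packet
{
    public byte Command { get; private set; }
    public byte[] Data { get; private set; }
    public Packet(byte command, byte[] data) { Command = command; Data = data; }
}

public sealed class ProtocolParser
{
    private const byte Stx = 0x55, Etx = 0x04;
    private enum State { Unknown, Stx1Received, Stx2Received, InMessage }

    private State state = State.Unknown;
    private readonly List<byte> buffer = new List<byte>();

    public event Action<Packet> PacketReceived;
    public event Action<byte> BadDataReceived;

    // Feed this from SerialPort.DataReceived (or a mocked data source in tests).
    public void OnByteReceived(byte b)
    {
        switch (state)
        {
            case State.Unknown:
                if (b == Stx) state = State.Stx1Received;
                else if (BadDataReceived != null) BadDataReceived(b);
                break;
            case State.Stx1Received:
                if (b == Stx) { state = State.Stx2Received; buffer.Clear(); }
                else { state = State.Unknown; if (BadDataReceived != null) BadDataReceived(b); }
                break;
            case State.Stx2Received:
            case State.InMessage:
                if (b == Etx) Complete();           // assumes DATA never contains ETX
                else { buffer.Add(b); state = State.InMessage; }
                break;
        }
    }

    private void Complete()
    {
        state = State.Unknown;
        if (buffer.Count < 2) return;               // need at least COMMAND + CHKSUM
        byte expected = buffer[buffer.Count - 1];
        byte[] body = buffer.Take(buffer.Count - 1).ToArray();
        byte computed = 0;
        foreach (byte b in body) computed += b;     // assumed checksum algorithm
        if (computed == expected && PacketReceived != null)
            PacketReceived(new Packet(body[0], body.Skip(1).ToArray()));
        else if (computed != expected && BadDataReceived != null)
            BadDataReceived(expected);
    }
}

Raising events keeps the parser free of any device-specific logic, which is what makes it testable in isolation.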
Of course it would be better to use an OOP design.
From what you explained, I'd make at least two classes:
Message
Executer
The message will receive the command from the device, and the Executer will handle the message.
The Message object will be initialised with the device's answer. It will parse it and hold the fields as you described:
STX
COMMAND
DATA
CHKSUM
ETX
Then an Executer object will receive the Message object and do the actual execution of the message, and hold the logical code.
I would go a step further than Yochai's answer, and create the following classes:
Command: Actually not a class, but an Enum value so you can check against Command.Read, etc., rather than just "knowing" what 0x01 and 0x02 mean.
Message: Just a plain object (POJO/POCO/whatever) that's intended to hold a data representation of the message. This would contain the following fields:
Command (the enum type mentioned earlier)
Data: List of the data. Depending on how the data is represented, you might create a class for this, or you could just represent each datum as a string.
MessageParser: this would have a function that would parse a string or text stream and create a Message object. If the text is invalid, I'd throw a customized exception (another class), which can be caught by the caller.
MessageExecutor: This would take a Message object and perform the action that it represents.
By making the intermediate representation object (Message), you make it possible to separate the various actions you're performing. For example, if the Powers That Be decide that the message text can be sent as XML or JSON, you can create different MessageParser classes without having to mess with the logic that decides what to do with the message.
This also makes unit testing far easier, because you can test the message parser independently of the executor. First test the message parser by calling the parse function and examining the resulting Message object. Then test the executor by creating a Message object and ensuring that the appropriate action is taken.
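As a rough sketch of those classes (the names, the byte-sum checksum, and the exception type are illustrative, not an existing API):

using System;
using System.Collections.Generic;

public enum Command : byte { Read = 0x01, Write = 0x02 }

public sealed class Message
{
    public Command Command { get; set; }
    public List<byte> Data { get; set; }
    public Message() { Data = new List<byte>(); }
}

public sealed class InvalidMessageException : Exception
{
    public InvalidMessageException(string reason) : base(reason) { }
}

public static class MessageParser
{
    private const byte Stx = 0x55, Etx = 0x04;

    public static Message Parse(byte[] raw)
    {
        // Frame layout: STX STX COMMAND [DATA...] CHKSUM ETX
        if (raw == null || raw.Length < 5)
            throw new InvalidMessageException("Frame too short.");
        if (raw[0] != Stx || raw[1] != Stx)
            throw new InvalidMessageException("Missing STX header.");
        if (raw[raw.Length - 1] != Etx)
            throw new InvalidMessageException("Missing ETX footer.");

        byte computed = 0;
        for (int i = 2; i < raw.Length - 2; i++) computed += raw[i]; // assumed algorithm
        if (computed != raw[raw.Length - 2])
            throw new InvalidMessageException("Checksum mismatch.");

        Message msg = new Message { Command = (Command)raw[2] };
        for (int i = 3; i < raw.Length - 2; i++) msg.Data.Add(raw[i]);
        return msg;
    }
}

Testing then becomes straightforward: feed MessageParser.Parse known-good and known-bad byte arrays and assert on the resulting Message or the thrown InvalidMessageException, with no executor involved.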

How to determine WCF message size at the encoder level

I am building a custom encoder that compresses WCF responses. It is based on the Gzip encoder in Microsoft's WCF samples and this blog post:
http://frenk.wordpress.com/2009/12/04/gzip-compression-wcfsilverlight/
I've got it all working, but now I would like to apply the compression only if the reply is beyond a certain size, but I am not sure how to retrieve the total size of the actual message from the encoder level.
I would need to get the message size both in the WriteMessage(...) method of the encoder factory (so I know whether to compress the message) and in the BeforeSendReply(...) method of the DispatchMessageInspector (so that I can add the "gzip" Content-Encoding header to the response). Requests are always small and never compressed, so I don't need to worry about those.
Any help appreciated.
Jon.
I think you would do this in two stages. First, write a custom MessageEncoder that encodes the message to a byte[] as normal. Once you have the encoded byte array (and this can be any message encoding format: XML, JSON, binary, whatever), you can examine its size and decide whether to create a second, compressed byte array.
Several resources you may find useful:
MSDN WCF Sample Code for a custom compression message encoder
Nicholas Allen's "Build a Custom Message Encoder" blog series. In this series he creates a "counting encoder" that wraps another encoder of any type and lets you know what the encoded message size is (based on the byte[] size). You could probably adapt this and create a "ThresholdCompressionEncoder".
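Putting those two ideas together, a "ThresholdCompressionEncoder" might look roughly like the sketch below. The threshold value and constructor wiring are assumptions you would adapt from the Gzip sample, and the part where the encoder signals the DispatchMessageInspector (so BeforeSendReply knows to add the Content-Encoding header) is deliberately left out:

using System;
using System.IO;
using System.IO.Compression;
using System.ServiceModel.Channels;

public class ThresholdCompressionEncoder : MessageEncoder
{
    private readonly MessageEncoder innerEncoder; // e.g. a text or binary encoder
    private readonly int thresholdBytes;          // assumed: compress above this size

    public ThresholdCompressionEncoder(MessageEncoder innerEncoder, int thresholdBytes)
    {
        this.innerEncoder = innerEncoder;
        this.thresholdBytes = thresholdBytes;
    }

    public override string ContentType { get { return innerEncoder.ContentType; } }
    public override string MediaType { get { return innerEncoder.MediaType; } }
    public override MessageVersion MessageVersion { get { return innerEncoder.MessageVersion; } }

    public override ArraySegment<byte> WriteMessage(Message message, int maxMessageSize,
        BufferManager bufferManager, int messageOffset)
    {
        // Stage 1: let the inner encoder produce the bytes, so we know the real size.
        ArraySegment<byte> plain = innerEncoder.WriteMessage(
            message, maxMessageSize, bufferManager, messageOffset);
        if (plain.Count < thresholdBytes)
            return plain; // small reply: send uncompressed

        // Stage 2: the reply is large enough, so gzip the encoded bytes.
        using (MemoryStream ms = new MemoryStream())
        {
            using (GZipStream gzip = new GZipStream(ms, CompressionMode.Compress, true))
                gzip.Write(plain.Array, plain.Offset, plain.Count);
            bufferManager.ReturnBuffer(plain.Array);

            byte[] compressed = ms.ToArray();
            byte[] buffer = bufferManager.TakeBuffer(compressed.Length + messageOffset);
            Array.Copy(compressed, 0, buffer, messageOffset, compressed.Length);
            return new ArraySegment<byte>(buffer, messageOffset, compressed.Length);
        }
    }

    // Reading and streamed writing are simply delegated to the inner encoder.
    public override Message ReadMessage(ArraySegment<byte> buffer,
        BufferManager bufferManager, string contentType)
    {
        return innerEncoder.ReadMessage(buffer, bufferManager, contentType);
    }

    public override Message ReadMessage(Stream stream, int maxSizeOfHeaders, string contentType)
    {
        return innerEncoder.ReadMessage(stream, maxSizeOfHeaders, contentType);
    }

    public override void WriteMessage(Message message, Stream stream)
    {
        innerEncoder.WriteMessage(message, stream);
    }
}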
You can try calculating it based on reply.ToString().Length and message.ToString().Length.