How can I write a signed byte to a serial port in VB - vb.net

I need to write signed bytes to a serial port using the
SerialPort.Write() method, but that method only takes arrays of unsigned bytes (byte[]). How would I write a signed byte to the serial port?
For what I'm working on, the particular command takes values from -1700 to 1700.
thanks
nightmares

The serial communication channel has no concept of signed or unsigned, only a concept of 1's and 0's on the wire. It is your operating system (and ultimately your CPU architecture) that assigns a numeric value to those 1's and 0's, on both the sending and receiving side.
The value range you state cannot be represented in a byte (per my comment and your reply). You need to understand what bit pattern the receiving device expects for a given number (is the other device big-endian or little-endian?), and then you can send an appropriate sequence of bytes to represent the number you want to transmit.
If both devices have the same endianness, you can set up an array of short values and copy it to an array of bytes like this:
// Two 16-bit signed values; their two's-complement bit patterns are copied verbatim.
short[] sdata = new short[] { 1, -1 };
byte[] bdata = new byte[sdata.Length * 2];
// BlockCopy copies the raw bytes in the host CPU's byte order.
Buffer.BlockCopy(sdata, 0, bdata, 0, bdata.Length);
However, be sure to test for a range of values. Especially if you are dealing with embedded devices, the numeric encoding may not be exactly the same as on an Intel PC.
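If you need explicit control over the byte order rather than relying on the host's memory layout, you can split each value yourself. A minimal sketch, assuming the device expects big-endian (high byte first); the port name and settings are made up for illustration:

using System.IO.Ports;

class SignedWriter
{
    static void Main()
    {
        using (var port = new SerialPort("COM1", 9600)) // hypothetical settings
        {
            port.Open();
            short value = -1700;             // two's-complement bit pattern 0xF95C
            byte[] frame = new byte[2];
            frame[0] = (byte)(value >> 8);   // high byte first; swap these two lines for little-endian
            frame[1] = (byte)(value & 0xFF); // low byte
            port.Write(frame, 0, frame.Length);
        }
    }
}

The casts to byte simply keep the low 8 bits of each shift; nothing is lost for values in the -32768 to 32767 range, which covers -1700 to 1700.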


Does the "C" code algorithm in RFC1071 work well on big-endian machine?

As described in RFC 1071, when the byte count is odd, an extra zero byte should be appended after the last byte for the purpose of computing the checksum, so the final 16-bit word is [Z,0].
But in the "C" code algorithm, only the last byte is added:
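/* Add left-over byte, if any */
if( count > 0 )
    sum += * (unsigned char *) addr;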
The above code works on a little-endian machine, where [Z,0] equals Z, but I think there is a problem on a big-endian machine, where [Z,0] equals Z*256.
So I wonder whether the example "C" code in RFC 1071 only works on little-endian machines?
Edit:
There's one more example in RFC 1071 of "breaking the sum into two groups". We can take the data addr[] = {0x00, 0x01, 0xf2} as an example. Here, "standard" represents the situation described in formula [2], while "C-code" represents the C code algorithm:

standard:               words [0x00,0x01] + [0xf2,0x00] -> sum f201 on either endianness
C-code, big-endian:     0x0001 + 0x00f2 = 0x00f3 -> stored as the bytes [0x00,0xf3]
C-code, little-endian:  0x0100 + 0x00f2 = 0x01f2 -> stored as the bytes [0xf2,0x01], i.e. f201 in network order

As we can see, in the "standard" situation the final sum is f201 regardless of endianness, since there is no endian issue with the abstract form [Z,0] after the swap. But it matters in the "C-code" situation, because f2 is always added as the low byte, whether on a big-endian or a little-endian machine.
Thus, the checksum differs for the same data (addr and count) on machines of different endianness.
I think you're right. The code in the RFC adds the last byte in as low-order, regardless of whether it is running on a little-endian or big-endian machine.
In these examples of code on the web, we can see that special care has been taken with the last byte:
https://github.com/sjaeckel/wireshark/blob/master/epan/in_cksum.c
and in
http://www.opensource.apple.com/source/tcpdump/tcpdump-23/tcpdump/print-ip.c
it does this:
if (nleft == 1)
    sum += htons(*(u_char *)w << 8);  /* the odd byte becomes the high-order byte of the padded word */
Which means that this text in the RFC is incorrect:
    Therefore, the sum may be calculated in exactly the same way
    regardless of the byte order ("big-endian" or "little-endian")
    of the underlaying hardware. For example, assume a "little-endian"
    machine summing data that is stored in memory in network
    ("big-endian") order. Fetching each 16-bit word will swap
    bytes, resulting in the sum; however, storing the result
    back into memory will swap the sum back into network byte order.
The following code in place of the original odd byte handling is portable (i.e. will work on both big- and little-endian machines), and doesn't depend on an external function:
if (count > 0)
{
    /* Pad the odd byte with a zero and read the pair back in host order;
       this reproduces the [Z,0] padding on both byte orders. */
    char buf2[2] = {*addr, 0};
    sum += *(unsigned short *)buf2;
}
(Assumes addr is char * or const char *).

Interoperability of AES CTR mode?

I use AES-128 in CTR mode for encryption, implemented for different clients (Android/Java and iOS/ObjC). The 16-byte IV used when encrypting a packet is formatted like this:
<11 byte nonce> | <4 byte packet counter> | 0
The packet counter (included in each sent packet) is increased by one for every packet sent. The last byte is used as the block counter, so that packets with fewer than 256 blocks always get a unique counter value. I was under the assumption that CTR mode specifies that the counter is increased by 1 for each block, using the last 8 bytes as a big-endian counter, or that this is at least a de facto standard. This also seems to be the case in the Sun crypto implementation.
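(For concreteness, building that IV looks roughly like the following C# sketch; BuildIv and its parameters are names made up for illustration:)

using System;

static class IvHelper
{
    public static byte[] BuildIv(byte[] nonce, uint packetCounter)
    {
        // <11 byte nonce> | <4 byte packet counter, big-endian> | <1 byte block counter = 0>
        var iv = new byte[16];
        Buffer.BlockCopy(nonce, 0, iv, 0, 11);
        iv[11] = (byte)(packetCounter >> 24);
        iv[12] = (byte)(packetCounter >> 16);
        iv[13] = (byte)(packetCounter >> 8);
        iv[14] = (byte)packetCounter;
        iv[15] = 0;
        return iv;
    }
}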
I was a bit surprised when the corresponding iOS implementation (using CommonCryptor, iOS 5.1) failed to decode every block except the first when decoding a packet. It seems that CommonCryptor defines the counter in some other way. A CommonCryptor can be created in both big-endian and little-endian mode, but some vague comments in the CommonCryptor code indicate that this is not (or at least has not been) fully supported:
http://www.opensource.apple.com/source/CommonCrypto/CommonCrypto-60026/Source/API/CommonCryptor.c
/* corecrypto only implements CTR_BE. No use of CTR_LE was found so we're marking
this as unimplemented for now. Also in Lion this was defined in reverse order.
See <rdar://problem/10306112> */
By decoding block by block, each time setting the IV as specified above, it works nicely.
My question: is there a "right" way of implementing the CTR IV handling when decoding multiple blocks in a single go, or should I expect interoperability problems when using different crypto libraries? Is CommonCrypto buggy in this regard, or is it just a question of implementing CTR mode differently?
The definition of the counter is (loosely) specified in NIST recommendation SP 800-38A, Appendix B. Note that NIST only specifies how to use CTR mode with regard to security; it does not define one standard algorithm for the counter.
To answer your question directly: whatever you do, you should expect the counter to be incremented by one for each block. The counter should represent a 128-bit big-endian integer according to the NIST specification. It may be that only the least significant (rightmost) bits are incremented, but that will usually not make a difference unless you pass the 2^32 - 1 or 2^64 - 1 value.
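A minimal sketch of that increment, treating the whole 16-byte block as a big-endian integer (this mirrors the usual CTR_BE behaviour; it is not code from any particular library):

// Increment a 16-byte CTR counter block in place, big-endian:
// the rightmost byte is least significant; carries propagate to the left.
static void IncrementCounter(byte[] counter)
{
    for (int i = counter.Length - 1; i >= 0; i--)
    {
        if (++counter[i] != 0)
            break; // no carry, done
    }
}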
For the sake of compatibility you could decide to use the first (leftmost) 12 bytes as a random nonce and leave the last 4 bytes zero, then let the CTR implementation do the increments. In that case you simply generate a 96-bit / 12-byte random value at the start, and there is no need for a packet counter.
You are however limited to 2^32 * 16 bytes of plaintext before the counter uses up all the available bits. It is implementation-specific whether the counter wraps to zero or whether the carry propagates into the nonce, so you may want to limit yourself to messages of 68,719,476,736 bytes = ~68 GB (yes, that's base 10; giga means 1,000,000,000). Two caveats:
because of the birthday problem, a nonce collision becomes likely after about 2^48 messages (48 = 96 / 2); the nonce must be unique per message, not per block, so you should limit the number of messages;
if an attacker tricks you into encrypting more than 2^32 blocks under the same nonce, you run out of counter.
In case this is still incompatible (test!), use the initial 8 bytes as the nonce instead. Unfortunately that does mean you need to limit the number of messages even more strictly because of the birthday problem.
Further investigation sheds some light on the CommonCrypto problem:
In iOS 6.0.1 the little-endian option is now unimplemented. Also, I have verified that CommonCrypto is bugged in that the CCCryptorReset method does not in fact change the IV as it should, but keeps using the pre-existing IV. The behaviour in 6.0.1 is different from that in 5.x.
This is potentially a security risk: if you initialize CommonCrypto with a nulled IV and reset it to the actual IV right before encrypting, all your data ends up encrypted with the same (nulled) IV, and multiple streams (which perhaps should have different IVs but use the same key) will leak data via a simple XOR of packets with corresponding counters.

Bluetooth incoming data string distortion

I have scales equipped with an RS232 serial port and a Bluetooth transmitter. I made a program in VBA to receive data from the scales. However, out of every 10 incoming strings I get about 3 distorted. My regular strings look like "+001500./3 G S". This means 1500.3 grams above zero, and the output is stable. But sometimes I get fragments like "+" or "001500./3" or "G S". When I use a plain serial cable I get no distortions.
Serial ports are just byte streams. You can never make assumptions about how many bytes will show up in each read operation. It's only a coincidence that when you use a real cable you read the whole string at once. You have to do the string splitting yourself, and continue reading when you only get a partial result.
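The question is VBA, but the buffering idea is the same in any language. A minimal C# sketch, assuming the scales terminate each frame with CR/LF (check the manual for the real terminator) and hypothetical port settings:

using System;
using System.IO.Ports;
using System.Text;

class ScaleReader
{
    static readonly StringBuilder buffer = new StringBuilder();

    static void Main()
    {
        using (var port = new SerialPort("COM3", 9600)) // hypothetical settings
        {
            port.DataReceived += (s, e) =>
            {
                buffer.Append(port.ReadExisting()); // may be a fragment, or several frames
                int end;
                // Extract every complete frame; keep any partial tail for the next event.
                while ((end = buffer.ToString().IndexOf("\r\n", StringComparison.Ordinal)) >= 0)
                {
                    string frame = buffer.ToString(0, end);
                    buffer.Remove(0, end + 2);
                    Console.WriteLine("Frame: " + frame); // e.g. "+001500./3 G S"
                }
            };
            port.Open();
            Console.ReadLine(); // keep the program alive while events arrive
        }
    }
}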

NSInteger to byte array in reverse order

I've been banging my head for the last couple of hours over what seemed to be a very easy task.
My app is communicating with a server over TCP/IP. The protocol requires that the first 4 bytes of each request be the length of the stream, in reverse order. For example, if the length is 13, I need to supply (decimal) {0,0,0,13}; if it's 300, I need to supply {0,0,1,44}. Then the actual data follows.
Apparently this is something very straightforward to do in Java, and also in VB (e.g. BitConverter.GetBytes(sendString.Length).Reverse().ToArray()). But in Obj-C I just couldn't make it work; I've tried all sorts of conversions between NSString/NSData/NSArray, with no luck.
Thanks in advance!
The server is asking for the data in big-endian order (most significant byte first). Big-endian is the standard network byte order for Internet protocols (including IP, TCP, UDP, DNS, and lots more). It happens that you're compiling for a little-endian platform, so you need to swap the bytes.
However, you should not rely on being on a little-endian platform. Instead, you should make your code independent of the local (host) byte order, using the Core Foundation byte-swapping functions.
Specifically, you should use CFSwapInt32HostToBig to convert your 4-byte int to big-endian order. On a little-endian platform, this rearranges the bytes. On a big-endian platform, this does nothing.
Similarly, you should use CFSwapInt32BigToHost to convert the 4-byte ints you receive from the server to your host byte order.
Alternatively, you can use the standard POSIX byte-swapping functions. The htonl function stands for host-to-network-long, and converts a 32-bit int from host order to network (big-endian) order. The ntohl function converts a 32-bit int from network to host order. (Back when these functions were created, some popular operating systems had 16-bit ints and 32-bit longs. Can you believe it?)
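For comparison, .NET exposes the same idea through IPAddress.HostToNetworkOrder (its htonl), which connects back to the BitConverter line quoted in the question; a small C# sketch:

using System;
using System.Net;

class LengthPrefix
{
    static void Main()
    {
        int length = 300;
        // HostToNetworkOrder swaps to big-endian on little-endian hosts and is a no-op on big-endian ones.
        byte[] prefix = BitConverter.GetBytes(IPAddress.HostToNetworkOrder(length));
        Console.WriteLine(string.Join(",", prefix)); // prints 0,0,1,44
    }
}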
uint32_t a = 300; // 13; use a fixed-width type: NSInteger is 8 bytes on 64-bit platforms
char *aa = (char *)&a;
Byte b[] = {0, 0, 0, 0};
// Copy the bytes in reverse to turn little-endian host order into big-endian wire order.
memcpy(&b[0], &aa[3], 1);
memcpy(&b[1], &aa[2], 1);
memcpy(&b[2], &aa[1], 1);
memcpy(&b[3], &aa[0], 1);
As indicated in the accepted answer for the duplicate question, Foundation provides functions for byte swapping. In this case, since you're dealing with a long, you probably want NSSwapLong.

Reading signed and unsigned values from a stream in both byte orders

I need to read signed and unsigned 8-bit, 16-bit and 32-bit values from a file stream which may be little-endian or big-endian (it happens to be a TIFF file, which carries a byte-order indicator at the start).
I initially started by writing my own functions to read the values, and was able to do so for unsigned values, e.g.:
Public Function ReadUInt32() As UInt32
    Dim b(3) As Byte ' VB bounds are inclusive: b(3) holds exactly 4 bytes
    input.Read(b, 0, 4)
    Dim out As UInt32 = CUInt(If(IsBigEndian, b(0), b(3))) << 24
    out += CUInt(If(IsBigEndian, b(1), b(2))) << 16
    out += CUInt(If(IsBigEndian, b(2), b(1))) << 8
    out += CUInt(If(IsBigEndian, b(3), b(0)))
    Return out
End Function
But then I started looking at signed values and my brain broke.
As an alternative, I found the IO.BinaryReader which will let me read signed values directly but doesn't seem to have any way to indicate that the data is big-endian or little-endian.
Is there a nice way of handling this? Failing that, can someone tell me how to convert multiple bytes into signed values (in both byte orders)?
It's not ideal, but you can use the various overloads of the HostToNetworkOrder and NetworkToHostOrder methods from the System.Net.IPAddress class to do signed-integer endian conversion.
Have you taken a look at the BitConverter class?
http://msdn.microsoft.com/en-US/library/system.bitconverter_members(v=VS.80).aspx
Some byte shuffling and a call to ToUInt32 should get you what you want.
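In C# terms (the same types exist in VB.NET), a sketch of an endian-aware reader built on BinaryReader and BitConverter; IsBigEndian is assumed to be set from the TIFF byte-order mark ("II" or "MM"):

using System;
using System.IO;

class EndianReader
{
    private readonly BinaryReader reader;
    public bool IsBigEndian; // set from the TIFF header

    public EndianReader(Stream input)
    {
        reader = new BinaryReader(input);
    }

    private byte[] ReadOrdered(int count)
    {
        byte[] b = reader.ReadBytes(count);
        // BitConverter uses the host's byte order; reverse when the file disagrees with it.
        if (BitConverter.IsLittleEndian == IsBigEndian)
            Array.Reverse(b);
        return b;
    }

    public short ReadInt16() { return BitConverter.ToInt16(ReadOrdered(2), 0); }
    public ushort ReadUInt16() { return BitConverter.ToUInt16(ReadOrdered(2), 0); }
    public int ReadInt32() { return BitConverter.ToInt32(ReadOrdered(4), 0); }
    public uint ReadUInt32() { return BitConverter.ToUInt32(ReadOrdered(4), 0); }
    public sbyte ReadSByte() { return (sbyte)reader.ReadByte(); } // single bytes have no byte order
}

Once the bytes are in host order, the signed and unsigned conversions are just different reinterpretations of the same bits, so signed values need no extra handling.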