WebSocket (draft 76) handshake difficulties! - VB.NET

I'm using the following keys to calculate the correct handshake response string:
Key1: 18x 6]8vM;54 *(5: { U1]8 z [ 8
Key2: 1_ tx7X d < nw 334J702) 7]o}` 0
Key3: 54:6d:5b:4b:20:54:32:75
I've calculated Key1 and Key2's values:
Key1: 0947fa63 (hex)
Key2: 0a5510d3
However, I'm not sure what to do next. From what I can gather, you concatenate them and MD5 the result, but that doesn't seem to work out, i.e.
MD5 hashing: 0947fa630a5510d3546d5b4b20543275
Help!

This is the python code for creating the response hash:
from hashlib import md5
import struct
....
hashed = md5(struct.pack('>II8s', num1, num2, key3)).digest()
In the example num1 and num2 are the numeric values of key1 and key2. key3 is the actual textual string (raw bytes) received.
The struct.pack() call is using big endian mode (for the numeric values) and packing them 4 bytes for each number followed by the 8 byte key3 string (bytes).
See the documentation for the Python struct module.
The C version would look more like this:
char buf[17], target[17]; /* 16 packed bytes + NUL, 16-byte MD5 digest + NUL */
/* Pack it big-endian */
buf[0] = (num1 & 0xff000000) >> 24;
buf[1] = (num1 & 0xff0000) >> 16;
buf[2] = (num1 & 0xff00) >> 8;
buf[3] = num1 & 0xff;
buf[4] = (num2 & 0xff000000) >> 24;
buf[5] = (num2 & 0xff0000) >> 16;
buf[6] = (num2 & 0xff00) >> 8;
buf[7] = num2 & 0xff;
strncpy(buf+8, headers->key3, 8);
buf[16] = '\0';
md5_buffer(buf, 16, target);
target[16] = '\0';
md5_buffer is in glibc.
For further reference you can look at working implementations (where the above code came from) of websockify (disclaimer: I wrote websockify).
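For a concrete check against the numbers in the question, here is a minimal standalone sketch (assuming OpenSSL is available for the MD5 call; the buffer layout is the same as in the snippet above). The 16 bytes that get hashed are the two big-endian numbers followed by the raw key3 bytes, not their hex-string representation, and in the actual handshake you send back the 16 raw digest bytes rather than hex text:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <openssl/md5.h>   /* assumption: OpenSSL is available; link with -lcrypto */

int main(void)
{
    uint32_t num1 = 0x0947fa63;    /* numeric value of key1 from the question */
    uint32_t num2 = 0x0a5510d3;    /* numeric value of key2 from the question */
    const unsigned char key3[8] =  /* raw bytes of key3 */
        { 0x54, 0x6d, 0x5b, 0x4b, 0x20, 0x54, 0x32, 0x75 };
    unsigned char buf[16], digest[MD5_DIGEST_LENGTH];

    /* Pack num1 and num2 big-endian, then append key3 verbatim */
    for (int i = 0; i < 4; i++) {
        buf[i]     = (num1 >> (24 - 8 * i)) & 0xff;
        buf[4 + i] = (num2 >> (24 - 8 * i)) & 0xff;
    }
    memcpy(buf + 8, key3, 8);

    MD5(buf, sizeof buf, digest);  /* the 16 digest bytes are the response body */

    for (int i = 0; i < MD5_DIGEST_LENGTH; i++)
        printf("%02x", digest[i]);
    printf("\n");
    return 0;
}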

Here's my version:
https://github.com/boothead/stargate/blob/master/stargate/handshake.py#L104
If you use stargate then all of that nasty stuff is done for you :-)

Related

How to get 4 bytes of data (uint8_t) into a variable of type uint32_t

I've been working with Cypress BLE PSoC 4200, and I've set up my GATT database to send int32 data packets to my iPhone. However, you can only write to the GATT database with uint8 pieces of data. So I wrote the following to take this int32 voltage reading and put it into a uint8 byte array:
// function passes in int32 variable 'result'
uint8 array[4];
array[0] = result & 0xFF;
array[1] = (result >> 8) & 0xFF;
array[2] = (result >> 16) & 0xFF;
array[3] = (result >> 24) & 0xFF;
So, with that in mind, when that int32 packet gets sent, I want to be able to take each byte, recombine them somehow into the original int32 value, and print it to the screen (e.g. 456000 would be 0.456 V).
Right now, I obtain the 4 bytes and handle them like such:
NSData* data = [characteristic value];
const uint8_t *reportData = [data bytes];
// variable to hold the eventual 32-bit data
uint32_t voltage = 0;
Is there a way to go through each index of *reportData and concatenate the bytes? Any help will do, thanks.
Would something like this not work?
uint32_t v0 = (uint32_t)reportData[0];
uint32_t v1 = (uint32_t)reportData[1] << 8;
uint32_t v2 = (uint32_t)reportData[2] << 16;
uint32_t v3 = (uint32_t)reportData[3] << 24;
uint32_t voltage = v0 | v1 | v2 | v3;
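Since the peripheral packs a signed int32 little-endian (low byte first, matching the sender code in the question), one extra step turns the recombined bits back into a signed reading and then into volts. A minimal sketch in plain C; the 1,000,000 scale factor is an assumption taken from the question's 456000 -> 0.456 V example:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Example packet: 456000 packed little-endian, as the sender code does */
    const uint8_t reportData[4] = { 0x40, 0xF5, 0x06, 0x00 };

    uint32_t raw = (uint32_t)reportData[0]
                 | ((uint32_t)reportData[1] << 8)
                 | ((uint32_t)reportData[2] << 16)
                 | ((uint32_t)reportData[3] << 24);

    int32_t reading;
    memcpy(&reading, &raw, sizeof reading);   /* reinterpret as signed; handles negative readings too */

    printf("%.3f V\n", reading / 1000000.0);  /* 456000 -> 0.456 V */
    return 0;
}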

Calc CRC8 for objective c

I need a method to compute a CRC8 checksum.
I found this code, but it's not working:
- (int)crc8Checksum:(NSString*)dataFrame{
    char j;
    int crc8 = 0;
    int x = 0;
    for (int i = 0; i < [dataFrame length]; i++){
        x = [dataFrame characterAtIndex:i];
        for (int k = 0; k < 8; k++){
            j = 1 & (x ^ crc8);
            crc8 = floor(crc8 / 2) & 0xFF;
            x = floor(x / 2) & 0xFF;
            if (j != 0 ){
                crc8 = crc8 ^ 0x8C;
            }
        }
    }
    return crc8;
}
Help me please!
What do you mean "it's not working"? There are 14 different CRC-8 definitions in this catalog, and probably many more out there in the wild. Do you have some CRC values you are comparing to? Is there documentation on what CRC you actually need? What are your test messages and corresponding expected CRCs?
You can't just pick some random CRC-8 code and expect it to do what you need.
That particular code computes CRC-8/MAXIM from the linked catalog. However, it is truly awful code, with unnecessary divides and floors. Here is a better, simpler, faster inner loop:
crc8 ^= x;
for (int k = 0; k < 8; k++)
    crc8 = crc8 & 1 ? (crc8 >> 1) ^ 0x8c : crc8 >> 1;
You can get it faster still with tables and algorithms that compute the CRC a byte at a time or a machine word at a time.
The x in the code has its own problems: an NSString is a string of Unicode characters, so characterAtIndex: may not return a byte and length may not return the number of bytes. You need a way to get the message as a sequence of bytes.
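Putting those pieces together, here is a minimal sketch in plain C (usable from Objective-C as-is; the function name crc8_maxim is mine, not from any library) that computes CRC-8/MAXIM over a byte buffer. In Objective-C you would first obtain the bytes, e.g. from an NSData's bytes pointer, rather than using characterAtIndex::

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* CRC-8/MAXIM: reflected polynomial 0x8C (poly 0x31), init 0x00, one bit at a time */
static uint8_t crc8_maxim(const uint8_t *data, size_t len)
{
    uint8_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int k = 0; k < 8; k++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x8C : crc >> 1;
    }
    return crc;
}

int main(void)
{
    /* The catalog's check string "123456789"; CRC-8/MAXIM should give 0xA1 */
    const uint8_t msg[] = "123456789";
    printf("0x%02X\n", crc8_maxim(msg, sizeof msg - 1));
    return 0;
}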

Field packing to form a single byte

I'm struggling to learn how to pack four separate values into a single byte. I'm trying to get a hex output of 0x91, whose binary representation is supposed to be 10010001, but instead I'm getting 0x1010001 and 16842753 respectively. Or is there a better way to do this?
uint8_t globalColorTableFlag = 1;
uint8_t colorResolution = 001;
uint8_t sortFlag = 0;
uint8_t sizeOfGlobalColorTable = 001;
uint32_t packed = ((globalColorTableFlag << 24) | (colorResolution << 16) | (sortFlag << 8) | (sizeOfGlobalColorTable << 0));
NSLog(@"%d", packed); // Logs 16842753, should be: 10010001
NSLog(@"0x%02X", packed); // Logs 0x1010001, should be: 0x91
Try the following:
/* packed starts at 0 */
uint8_t packed = 0;
/* one bit of the flag is kept and shifted to the last position */
packed |= ((globalColorTableFlag & 0x1) << 7);
/* three bits of the resolution are kept and shifted to the fifth position */
packed |= ((colorResolution & 0x7) << 4);
/* one bit of the flag is kept and shifted to the fourth position */
packed |= ((sortFlag & 0x1) << 3);
/* three bits are kept and left in the first position */
packed |= ((sizeOfGlobalColorTable & 0x7) << 0);
For an explanation about the relation between hexadecimal and binary digits see this answer: https://stackoverflow.com/a/17914633/4178025
For bitwise operations see: https://stackoverflow.com/a/3427633/4178025
packed = ((globalColorTableFlag & 1) << 7) +
         ((colorResolution & 0x7) << 4) +
         ((sortFlag & 1) << 3) +
         (sizeOfGlobalColorTable & 0x7);
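As a quick sanity check, here is a minimal standalone sketch using the values from the question; either of the answers above packs them into one byte, and it should print 0x91:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t globalColorTableFlag   = 1;
    uint8_t colorResolution        = 1;
    uint8_t sortFlag               = 0;
    uint8_t sizeOfGlobalColorTable = 1;

    /* 1 | 001 | 0 | 001 -> 1001 0001 -> 0x91 */
    uint8_t packed = (uint8_t)(((globalColorTableFlag & 0x1) << 7) |
                               ((colorResolution & 0x7) << 4) |
                               ((sortFlag & 0x1) << 3) |
                               (sizeOfGlobalColorTable & 0x7));

    printf("0x%02X\n", packed);   /* prints 0x91 */
    return 0;
}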

How does this message splitting work?

I have been trying to reverse engineer various encryption algorithms in compiled code recently, and I came upon this code. It is part of an RSA algorithm. I've noted that the key size is too small to encrypt/decrypt the data it's supposed to (in this case an int), so the code splits the message into two pieces, encrypts/decrypts each, then recombines them. I've pulled out the segments of code that split and join the message and experimented with them. It appears that the numerical values it uses depend on the modulus n. So, what exactly is this scheme, and how does it work?
uint n = 32437;
uint origVal = 12345;
uint newVal = 0;
for (int i = 0; i < 2; ++i)
{
    ulong num = (ulong)origVal * 43827549;
    //uint num2 = ((origVal - (uint)(num >> 32)) / 2 + (uint)(num >> 32)) >> 14;
    uint num2 = (origVal + (uint)(num >> 32)) / 32768;
    origVal -= num2 * n;
    // RSA encrypt/decrypt here
    newVal *= n;
    newVal += origVal;
    origVal = num2;
}
// Put newVal into origVal, to reverse
origVal = newVal;
newVal = 0;
for (int i = 0; i < 2; ++i)
{
    ulong num = (ulong)origVal * 43827549;
    //uint num2 = ((origVal - (uint)(num >> 32)) / 2 + (uint)(num >> 32)) >> 14;
    uint num2 = (origVal + (uint)(num >> 32)) / 32768;
    origVal -= num2 * n;
    // RSA encrypt/decrypt here
    newVal *= n;
    newVal += origVal;
    origVal = num2;
}
Note: it seems the operations applied are symmetric.
After trying various values for origVal, I've found that the first three lines inside the for loop are just a division, and the line immediately after them is a modulo operation. The lines
ulong num = (ulong)origVal * 43827549;
//uint num2 = ((origVal - (uint)(num >> 32)) / 2 + (uint)(num >> 32)) >> 14;
uint num2 = (origVal + (uint)(num >> 32)) / 32768;
translate to
uint valDivN = origVal / n;
and
origVal -= num2 * n;
into
origVal = origVal % n;
So the final code inside the for loop looks like this:
uint valDivN = origVal / n;
origVal = origVal % n;
// RSA encrypt/decrypt here
newVal *= n;
newVal += origVal;
origVal = valDivN;
Analysis
This code splits the value by taking it modulo n, transforming that piece, and then, on the next pass, multiplying the running result by n before adding the next transformed piece; the previous quotient becomes the input for the next pass. The lines uint valDivN = origVal / n; and newVal *= n; are inverse operations. You can think of the input message as having two "boxes": after the loop has run through, the transformed values end up in opposite "boxes", and when the message is decrypted, the two values in the "boxes" are reverse-transformed and put back in their original spots. The divisor is n in order to keep the value being encrypted/decrypted under n, since the maximum value you can encrypt with RSA is no larger than the modulus. There is no possibility of the wrong value being decrypted, because the code unpacks the part that should be decrypted from the packed message before decrypting it. The loop only runs twice because there is no chance for the quotient to exceed the size of an int (since the input is an int).
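To make the "boxes" picture concrete, here is a minimal sketch of just the split/join arithmetic, with the RSA step replaced by an identity transform (the function and variable names are mine). One pass of the loop swaps the two base-n digits; for inputs below n*n, running it a second time restores the original value:

#include <stdint.h>
#include <stdio.h>

/* Split val into its two base-n digits, "transform" each (identity here,
   RSA in the original code), and repack with the digits swapped. */
static uint32_t swap_base_n_digits(uint32_t val, uint32_t n)
{
    uint32_t out = 0;
    for (int i = 0; i < 2; i++) {
        uint32_t quotient  = val / n;   /* what the magic-number multiply computes */
        uint32_t remainder = val % n;   /* stays below n, so it is safe to feed to RSA */
        /* RSA encrypt/decrypt of remainder would happen here */
        out = out * n + remainder;
        val = quotient;
    }
    return out;
}

int main(void)
{
    uint32_t n = 32437, m = 12345;
    uint32_t packed   = swap_base_n_digits(m, n);       /* digits swapped */
    uint32_t restored = swap_base_n_digits(packed, n);  /* swapped back   */
    printf("%u -> %u -> %u\n", m, packed, restored);    /* 12345 -> 400434765 -> 12345 */
    return 0;
}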

Separate signed int into bytes in NXC

Is there any way to convert a signed integer into an array of bytes in NXC? I can't use explicit type casting or pointers either, due to language limitations.
I've tried:
for(unsigned long i = 1; i <= 2; i++)
{
    MM_mem[id.idx] = (val & (0xFF << ((2 - i) * 8))) >> ((2 - i) * 8);
    id.idx++;
}
But it fails.
EDIT: This works... It just wasn't downloading. I've wasted about an hour trying to figure it out. >_>
EDIT: In NXC, >> is an arithmetic shift. int is a signed 16-bit integer type. A byte is the same thing as unsigned char.
NXC is 'Not eXactly C', a relative of C, but distinctly different from C.
How about
unsigned char b[4];
b[0] = (x & 0xFF000000) >> 24;
b[1] = (x & 0x00FF0000) >> 16;
b[2] = (x & 0x0000FF00) >> 8;
b[3] = x & 0xFF;
The best way to do this in NXC, with the opcodes available in the underlying VM, is to use FlattenVar to convert any type into a string (i.e. a byte array with a null added at the end). It results in a single VM opcode, whereas any of the options above that use shifts, logical ANDs, and array operations will require dozens of lines of assembly language.
task main()
{
    int x = Random(); // 16 bit random number - could be negative
    string data;
    data = FlattenVar(x); // convert type to byte array with trailing null
    NumOut(0, LCD_LINE1, x);
    for (int i=0; i < ArrayLen(data)-1; i++)
    {
#ifdef __ENHANCED_FIRMWARE
        TextOut(0, LCD_LINE2-8*i, FormatNum("0x%2.2x", data[i]));
#else
        NumOut(0, LCD_LINE2-8*i, data[i]);
#endif
    }
    Wait(SEC_4);
}
The best way to get help with LEGO MINDSTORMS and the NXT and Not eXactly C is via the mindboards forums at http://forums.mindboards.net/
Question originally tagged c; this answer may not be applicable to Not eXactly C.
What is the problem with this:
int value;
char bytes[sizeof(int)];
bytes[0] = (value >> 0) & 0xFF;
bytes[1] = (value >> 8) & 0xFF;
bytes[2] = (value >> 16) & 0xFF;
bytes[3] = (value >> 24) & 0xFF;
You can regard it as an unrolled loop. The shift by zero could be omitted; the optimizer would certainly do so. Even though the result of right-shifting a negative value is implementation-defined, there is no problem here because this code only uses the bits where the behaviour is well-defined.
This code gives the bytes in little-endian order - the least-significant byte is in bytes[0]. Clearly, big-endian order is achieved by:
int value;
char bytes[sizeof(int)];
bytes[3] = (value >> 0) & 0xFF;
bytes[2] = (value >> 8) & 0xFF;
bytes[1] = (value >> 16) & 0xFF;
bytes[0] = (value >> 24) & 0xFF;
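For completeness, here is a round-trip check of the big-endian version above in plain C (not NXC); casting to unsigned before shifting sidesteps the implementation-defined behaviour of right-shifting a negative value, and the memcpy at the end recovers the sign:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int32_t value = -456000;            /* arbitrary negative test value */
    unsigned char bytes[4];

    uint32_t u = (uint32_t)value;       /* well-defined two's-complement view */
    bytes[3] = u & 0xFF;
    bytes[2] = (u >> 8)  & 0xFF;
    bytes[1] = (u >> 16) & 0xFF;
    bytes[0] = (u >> 24) & 0xFF;

    /* Reassemble (big-endian) and reinterpret as signed */
    uint32_t r = ((uint32_t)bytes[0] << 24) | ((uint32_t)bytes[1] << 16) |
                 ((uint32_t)bytes[2] << 8)  |  (uint32_t)bytes[3];
    int32_t back;
    memcpy(&back, &r, sizeof back);

    printf("%d -> %d\n", (int)value, (int)back);   /* prints -456000 -> -456000 */
    return 0;
}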