How to get 4 bytes of data (uint8_t) into a variable of type uint32_t - objective-c

I've been working with Cypress BLE PSoC 4200, and I've set up my GATT database to send int32 data packets to my iPhone. However, you can only write to the GATT database with uint8 pieces of data. So I wrote the following to take this int32 voltage reading and put it into a uint8 byte array:
// function passes in int32 variable 'result'
uint8 array[4];
array[0] = result & 0xFF;
array[1] = (result >> 8) & 0xFF;
array[2] = (result >> 16) & 0xFF;
array[3] = (result >> 24) & 0xFF;
So, with that in mind, when that int32 packet gets sent, I want to be able to take each byte and recombine them somehow into the original int32 value, and print it to the screen (e.g. 456000 will be 0.456 V).
Right now, I obtain the 4 bytes and handle them like such:
NSData* data = [characteristic value];
const uint8_t *reportData = [data bytes];
// variable to hold the eventual 32-bit data
uint32_t voltage = 0;
Is there a way to go through each index of *reportData and concatenate the bytes? Any help will do, thanks.

Would something like this not work?
uint32_t v0 = (uint32_t)reportData[0];
uint32_t v1 = (uint32_t)reportData[1] << 8;
uint32_t v2 = (uint32_t)reportData[2] << 16;
uint32_t v3 = (uint32_t)reportData[3] << 24;
uint32_t voltage = v0 | v1 | v2 | v3;
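Putting that together with the handler code from the question, a minimal sketch (assuming the peripheral packs the value little-endian exactly as in the firmware snippet above, and that the reading is in microvolts so 456000 prints as 0.456 V):
NSData *data = [characteristic value];
if (data.length >= 4) {
    const uint8_t *reportData = [data bytes];
    // reassemble the 4 little-endian bytes into one 32-bit value
    uint32_t raw = (uint32_t)reportData[0]
                 | ((uint32_t)reportData[1] << 8)
                 | ((uint32_t)reportData[2] << 16)
                 | ((uint32_t)reportData[3] << 24);
    // the firmware sends an int32, so reinterpret as signed before scaling
    int32_t voltage = (int32_t)raw;
    NSLog(@"%.3f V", voltage / 1000000.0); // e.g. 456000 -> 0.456 V
}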

Related

Field packing byte returns unexpected results

I've been experimenting with changing values for some of the bits for field packing a byte, based on my last question: Field packing to form a single byte
However, I'm getting unexpected results based on the values. The first code sample below gives the expected output of 0x91; however, if I change the colorResolution and sizeOfGlobalColorTable variables to 010, I get an unexpected output of 0x80, which isn't the binary representation of what it should be: 10100010, based on this table: http://www.best-microcontroller-projects.com/hex-code-table.html. I would expect an output of 0xA2 for the second code sample. What am I missing or not understanding?
This code correctly logs: 0x91
uint8_t screenDescriptorPackedFieldByte = 0;
uint8_t globalColorTableFlag = 1;
uint8_t colorResolution = 001;
uint8_t screenDescriptorSortFlag = 0;
uint8_t sizeOfGlobalColorTable = 001;
screenDescriptorPackedFieldByte |= ((globalColorTableFlag & 0x1) << 7);
screenDescriptorPackedFieldByte |= ((colorResolution & 0x7) << 4);
screenDescriptorPackedFieldByte |= ((screenDescriptorSortFlag & 0x1) << 3);
screenDescriptorPackedFieldByte |= ((sizeOfGlobalColorTable & 0x7) << 0);
NSLog(#"0x%02X",screenDescriptorPackedFieldByte);
This code incorrectly logs: 0x80
uint8_t screenDescriptorPackedFieldByte = 0;
uint8_t globalColorTableFlag = 1;
uint8_t colorResolution = 010;
uint8_t screenDescriptorSortFlag = 0;
uint8_t sizeOfGlobalColorTable = 010;
screenDescriptorPackedFieldByte |= ((globalColorTableFlag & 0x1) << 7);
screenDescriptorPackedFieldByte |= ((colorResolution & 0x7) << 4);
screenDescriptorPackedFieldByte |= ((screenDescriptorSortFlag & 0x1) << 3);
screenDescriptorPackedFieldByte |= ((sizeOfGlobalColorTable & 0x7) << 0);
NSLog(#"0x%02X",screenDescriptorPackedFieldByte);
This value is not binary. It is octal.
uint8_t sizeOfGlobalColorTable = 010;
In (Objective-)C, integer constants starting with a leading 0 are interpreted as octal values. What you actually wrote works out to 1000 & 0111 in binary, which is 0.
It should be:
uint8_t sizeOfGlobalColorTable = 0x2;
010 in C is octal (base 8), i.e. decimal 8.
The leading 0 makes the compiler assume you want an octal value, similar to the way the 0x prefix indicates hexadecimal.
That will (correctly) result in 0 for lines 2 and 4 of the bit calculation (the colorResolution and sizeOfGlobalColorTable terms).
If you meant decimal ten, just drop the leading 0 and write 10; if you intended binary 010 (i.e. two), write 0x2.
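To make the base confusion concrete, here is a small sketch comparing the literal prefixes (plain C behaviour, the same in Objective-C):
uint8_t a = 10;   // decimal ten
uint8_t b = 010;  // octal: decimal 8 (binary 1000)
uint8_t c = 0x10; // hexadecimal: decimal 16 (binary 10000)
uint8_t d = 0x2;  // how "binary 010", i.e. two, has to be written
NSLog(@"%u %u %u %u", a, b, c, d); // logs: 10 8 16 2
With colorResolution and sizeOfGlobalColorTable set to 0x2, the second sample above logs the expected 0xA2 (binary 10100010).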

Field packing to form a single byte

I'm struggling to learn how to pack four separate values into a single byte. I'm trying to get a hex output of 0x91, whose binary representation should be 10010001, but instead I'm getting outputs of 0x1010001 and 16842753 respectively. Or is there a better way to do this?
uint8_t globalColorTableFlag = 1;
uint8_t colorResolution = 001;
uint8_t sortFlag = 0;
uint8_t sizeOfGlobalColorTable = 001;
uint32_t packed = ((globalColorTableFlag << 24) | (colorResolution << 16) | (sortFlag << 8) | (sizeOfGlobalColorTable << 0));
NSLog(#"%d",packed); // Logs 16842753, should be: 10010001
NSLog(#"0x%02X",packed); // Logs 0x1010001, should be: 0x91
Try the following:
/* packed starts at 0 */
uint8_t packed = 0;
/* one bit of the flag is kept and shifted to the last position */
packed |= ((globalColorTableFlag & 0x1) << 7);
/* three bits of the resolution are kept and shifted to the fifth position */
packed |= ((colorResolution & 0x7) << 4);
/* one bit of the flag is kept and shifted to the fourth position */
packed |= ((sortFlag & 0x1) << 3);
/* three bits are kept and left in the first position */
packed |= ((sizeOfGlobalColorTable & 0x7) << 0);
For an explanation about the relation between hexadecimal and binary digits see this answer: https://stackoverflow.com/a/17914633/4178025
For bitwise operations see: https://stackoverflow.com/a/3427633/4178025
packed = ((globalColorTableFlag & 1) << 7) +
         ((colorResolution & 0x7) << 4) +
         ((sortFlag & 1) << 3) +
         (sizeOfGlobalColorTable & 0x7);
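For completeness, the reverse operation (reading the fields back out of the packed byte) is the mirror shift-and-mask; a short sketch using the names from the question:
uint8_t packed = 0x91; // binary 1001 0001
uint8_t globalColorTableFlag   = (packed >> 7) & 0x1; // 1
uint8_t colorResolution        = (packed >> 4) & 0x7; // 1
uint8_t sortFlag               = (packed >> 3) & 0x1; // 0
uint8_t sizeOfGlobalColorTable =  packed       & 0x7; // 1
NSLog(@"%u %u %u %u", globalColorTableFlag, colorResolution, sortFlag, sizeOfGlobalColorTable);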

reading bit mask and bits in the following example

Can someone explain how to read these two bit masks?
uint32_t = 0x1 << 0;
uint32_t = 0x1 << 1;
Basically, how would you explain this to a person who can't read code? Which one is smaller than the other?
Well, 0x1 is just the hex value of 1, which in binary is represented as ...001. When you shift 0x1 left by 0 bits, the value is unchanged because you haven't actually shifted anything. When you shift it left by 1 bit, you're looking at the binary representation ...010, which in good ol' decimal is 2, because you have a 1 in the twos column and zeros everywhere else.
Therefore, uint32_t i = 0x1 << 0; has a lesser value than uint32_t j = 0x1 << 1;.
uint32_t i = 0x1 << 0;
uint32_t j = 0x1 << 1;
NSLog(#"%u",i); // outputs 1
NSLog(#"%u",j); // outputs 2

Separate signed int into bytes in NXC

Is there any way to convert a signed integer into an array of bytes in NXC? I can't use explicit type casting or pointers either, due to language limitations.
I've tried:
for (unsigned long i = 1; i <= 2; i++)
{
    MM_mem[id.idx] = (val & (0xFF << ((2 - i) * 8))) >> ((2 - i) * 8);
    id.idx++;
}
But it fails.
EDIT: This works... It just wasn't downloading. I've wasted about an hour trying to figure it out. >_>
EDIT: In NXC, >> is an arithmetic shift. int is a signed 16-bit integer type. A byte is the same thing as an unsigned char.
NXC is 'Not eXactly C', a relative of C, but distinctly different from C.
How about
unsigned char b[4];
b[0] = (x & 0xFF000000) >> 24;
b[1] = (x & 0x00FF0000) >> 16;
b[2] = (x & 0x0000FF00) >> 8;
b[3] = x & 0xFF;
The best way to do this in NXC with the opcodes available in the underlying VM is to use FlattenVar to convert any type into a string (aka a byte array with a null added at the end). It results in a single VM opcode, whereas any of the above options that use shifts, logical ANDs, and array operations will require dozens of lines of assembly language.
task main()
{
    int x = Random(); // 16 bit random number - could be negative
    string data;
    data = FlattenVar(x); // convert type to byte array with trailing null
    NumOut(0, LCD_LINE1, x);
    for (int i = 0; i < ArrayLen(data) - 1; i++)
    {
#ifdef __ENHANCED_FIRMWARE
        TextOut(0, LCD_LINE2-8*i, FormatNum("0x%2.2x", data[i]));
#else
        NumOut(0, LCD_LINE2-8*i, data[i]);
#endif
    }
    Wait(SEC_4);
}
The best way to get help with LEGO MINDSTORMS and the NXT and Not eXactly C is via the mindboards forums at http://forums.mindboards.net/
Question originally tagged c; this answer may not be applicable to Not eXactly C.
What is the problem with this:
int value;
char bytes[sizeof(int)];
bytes[0] = (value >> 0) & 0xFF;
bytes[1] = (value >> 8) & 0xFF;
bytes[2] = (value >> 16) & 0xFF;
bytes[3] = (value >> 24) & 0xFF;
You can regard it as an unrolled loop. The shift by zero could be omitted; the optimizer would certainly do so. Even though the result of right-shifting a negative value is implementation-defined rather than fully specified, there is no problem because this code only accesses the bits where the behaviour is defined.
This code gives the bytes in a little-endian order - the least-significant byte is in bytes[0]. Clearly, big-endian order is achieved by:
int value;
char bytes[sizeof(int)];
bytes[3] = (value >> 0) & 0xFF;
bytes[2] = (value >> 8) & 0xFF;
bytes[1] = (value >> 16) & 0xFF;
bytes[0] = (value >> 24) & 0xFF;
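A quick round-trip check, assuming a 32-bit int: reassembling in the same byte order used for packing recovers the original value, including negative ones (shown here for the little-endian layout):
int value = -123456;
unsigned char bytes[sizeof(int)];
bytes[0] = (value >> 0) & 0xFF;
bytes[1] = (value >> 8) & 0xFF;
bytes[2] = (value >> 16) & 0xFF;
bytes[3] = (value >> 24) & 0xFF;
unsigned u = (unsigned)bytes[0]
           | ((unsigned)bytes[1] << 8)
           | ((unsigned)bytes[2] << 16)
           | ((unsigned)bytes[3] << 24);
int back = (int)u;     // reinterpret the reassembled bits as signed
printf("%d\n", back);  // prints -123456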

Websocket (draft 76) handshake difficulties!

I'm using the following keys to calculate the correct handshake response string:
Key1: 18x 6]8vM;54 *(5: { U1]8 z [ 8
Key2: 1_ tx7X d < nw 334J702) 7]o}` 0
Key3: 54:6d:5b:4b:20:54:32:75
I've calculated Key1 and Key2's values:
Key1: 0947fa63 (hex)
Key2: 0a5510d3
However, I'm not sure what to do next. From what I can gather, you concatenate them and MD5 it, but that doesn't seem to work out, i.e.
MD5 hashing: 0947fa630a5510d3546d5b4b20543275
Help!
This is the python code for creating the response hash:
from hashlib import md5
import struct
....
hashed = md5(struct.pack('>II8s', num1, num2, key3)).digest()
In the example num1 and num2 are the numeric values of key1 and key2. key3 is the actual textual string (raw bytes) received.
The struct.pack() call is using big endian mode (for the numeric values) and packing them 4 bytes for each number followed by the 8 byte key3 string (bytes).
See the Documentation for the python struct module.
The C version would look more like this:
/* Pack it big-endian */
buf[0] = (num1 & 0xff000000) >> 24;
buf[1] = (num1 & 0xff0000) >> 16;
buf[2] = (num1 & 0xff00) >> 8;
buf[3] = num1 & 0xff;
buf[4] = (num2 & 0xff000000) >> 24;
buf[5] = (num2 & 0xff0000) >> 16;
buf[6] = (num2 & 0xff00) >> 8;
buf[7] = num2 & 0xff;
strncpy(buf+8, headers->key3, 8);
buf[16] = '\0';
md5_buffer(buf, 16, target);
target[16] = '\0';
md5_buffer is in glibc.
For further reference you can look at working implementations (where the above code came from) of websockify (disclaimer: I wrote websockify).
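Both code samples above assume num1 and num2 have already been derived from the Sec-WebSocket-Key1/Key2 header strings. For completeness, a sketch of that step as specified in draft 76 (concatenate the digits in the key, then divide by the number of spaces); the helper name key_to_number is just for illustration:
/* draft-76 key reduction: the digits form one decimal number, divided by the space count */
uint32_t key_to_number(const char *key)
{
    unsigned long long digits = 0;
    unsigned spaces = 0;
    for (const char *p = key; *p != '\0'; p++) {
        if (*p >= '0' && *p <= '9')
            digits = digits * 10 + (unsigned long long)(*p - '0');
        else if (*p == ' ')
            spaces++;
    }
    return (uint32_t)(digits / spaces); /* the spec guarantees at least one space */
}
The two results are then the num1/num2 values that get packed big-endian in front of key3, as shown above.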
Here's my version:
https://github.com/boothead/stargate/blob/master/stargate/handshake.py#L104
If you use stargate then all of that nasty stuff is done for you :-)