Can someone explain how to read these two bit masks?
uint32_t i = 0x1 << 0;
uint32_t j = 0x1 << 1;
Basically, how would you explain this to a person who can't read code? Which one is smaller than the other?
Well, 0x1 is just the hex value of 1, which in binary is ...0001. When you shift 0x1 left by 0, the value is unchanged, because you haven't actually shifted anything. When you shift it left by 1, you get ...0010, which in good ol' decimal is 2, because you have a 1 in the twos column and zeros everywhere else.
Therefore, uint32_t i = 0x1 << 0; has a smaller value than uint32_t j = 0x1 << 1;.
uint32_t i = 0x1 << 0;
uint32_t j = 0x1 << 1;
NSLog(#"%u",i); // outputs 1
NSLog(#"%u",j); // outputs 2
I've been working with Cypress BLE PSoC 4200, and I've set up my GATT database to send int32 data packets to my iPhone. However, you can only write to the GATT database with uint8 pieces of data. So I wrote the following to take this int32 voltage reading and put it into a uint8 byte array:
// function passes in int32 variable 'result'
uint8 array[4];
array[0] = result & 0xFF;
array[1] = (result >> 8) & 0xFF;
array[2] = (result >> 16) & 0xFF;
array[3] = (result >> 24) & 0xFF;
So, with that in mind, when that int32 packet gets sent, I want to be able to take each byte, recombine them somehow into the original int32 value, and print it to the screen (e.g. 456000 would be 0.456 V).
Right now, I obtain the 4 bytes and handle them like so:
NSData* data = [characteristic value];
const uint8_t *reportData = [data bytes];
// variable to hold the eventual 32-bit data
uint32_t voltage = 0;
Is there a way to go through each index of *reportData and concatenate the bytes? Any help will do, thanks.
Would something like this not work?
uint32_t v0 = (uint32_t)reportData[0];
uint32_t v1 = (uint32_t)reportData[1] << 8;
uint32_t v2 = (uint32_t)reportData[2] << 16;
uint32_t v3 = (uint32_t)reportData[3] << 24;
uint32_t voltage = v0 | v1 | v2 | v3;
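If the reading can be negative (it started out as an int32 on the PSoC side), you may also want to reinterpret the reassembled bits as signed before scaling. A small sketch, assuming the 456000 -> 0.456 V example in the question means the raw count is in microvolts:
int32_t signedVoltage = (int32_t)voltage;      // reinterpret the reassembled bits as signed
NSLog(@"%.3f V", signedVoltage / 1000000.0);   // e.g. 456000 prints "0.456 V"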
According to [1], I should be able to access Kinect accelerometer data with request 0x32, providing a buffer of 10 bytes. The accelerometer vector xyz values should be short ints at bytes 3 through 8. As stated in the text (and as expected), with a horizontal, stationary camera I should get values near 0 for both x and z, and about 981 for y. That would be g and would make sense. Instead, while the y values are as expected, I get x and z values near 0xffff. Here's the code (I skipped error checking for readability):
const unsigned short VENDOR_ID_MSFT = 0x045e;
const unsigned short PRODUCT_ID_KINECT360_MOTOR = 0x02b0;
XN_USB_DEV_HANDLE deviceHandle = NULL;
const XnUSBConnectionString *paths = NULL;
XnUInt32 count;
xnUSBInit();
xnUSBEnumerateDevices( VENDOR_ID_MSFT, PRODUCT_ID_KINECT360_MOTOR, &paths, &count );
xnUSBOpenDeviceByPath( paths[0], &this->deviceHandle );
XnStatus res;
XnUChar buf[10] = { 0 };
XnUInt32 size = 0;
//init motor
xnUSBSendControl( this->deviceHandle, (XnUSBControlType) 0xc0, 0x10, 0x00, 0x00, buf, sizeof( buf ), 0 );
//query motor data
xnUSBReceiveControl( this->deviceHandle, XN_USB_CONTROL_TYPE_VENDOR, 0x32, 0, 0, buf, sizeof( buf ), &size, 0 );
int accelCountX = (int) ( ( (short) buf[2] << 8 ) | buf[3] );
int accelCountY = (int) ( ( (short) buf[4] << 8 ) | buf[5] );
int accelCountZ = (int) ( ( (short) buf[6] << 8 ) | buf[7] );
std::cout << accelCountX << "/" << accelCountY << "/" << accelCountZ << std::endl;
The output shows values kind of like these:
65503/847/65516
Any idea what the problem could be? Thanks!
[1] http://fivedots.coe.psu.ac.th/~ad/jg/nui16/motorControl.pdf
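Values near 0xffff are what small negative 16-bit counts look like when they are widened without passing through a signed 16-bit type; a sketch of keeping the sign (toSigned16 is my own hypothetical helper, not part of OpenNI):
#include <cstdint>

// Combine a big-endian byte pair and keep the sign of the 16-bit value.
static int toSigned16( unsigned char hi, unsigned char lo )
{
    return (int16_t) ( ( (uint16_t) hi << 8 ) | lo );
}

// e.g. toSigned16( buf[2], buf[3] ) turns the 65503 shown above into -33,
// a near-zero x reading, which is what the question expects.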
I've been experimenting with changing values for some of the bits for field packing a byte, based on my last question: Field packing to form a single byte
However, I'm getting unexpected results depending on the values. The top code sample gives me the expected output of 0x91. However, if I change the colorResolution and sizeOfGlobalColorTable variables to 010, I get an unexpected output of 0x80, which isn't the binary representation of what it should be, 10100010, based on this hex table: http://www.best-microcontroller-projects.com/hex-code-table.html. I would expect an output of 0xA2 for the bottom code sample. What am I missing or not understanding?
This code correctly logs: 0x91
uint8_t screenDescriptorPackedFieldByte = 0;
uint8_t globalColorTableFlag = 1;
uint8_t colorResolution = 001;
uint8_t screenDescriptorSortFlag = 0;
uint8_t sizeOfGlobalColorTable = 001;
screenDescriptorPackedFieldByte |= ((globalColorTableFlag & 0x1) << 7);
screenDescriptorPackedFieldByte |= ((colorResolution & 0x7) << 4);
screenDescriptorPackedFieldByte |= ((screenDescriptorSortFlag & 0x1) << 3);
screenDescriptorPackedFieldByte |= ((sizeOfGlobalColorTable & 0x7) << 0);
NSLog(#"0x%02X",screenDescriptorPackedFieldByte);
This code incorrectly logs: 0x80
uint8_t screenDescriptorPackedFieldByte = 0;
uint8_t globalColorTableFlag = 1;
uint8_t colorResolution = 010;
uint8_t screenDescriptorSortFlag = 0;
uint8_t sizeOfGlobalColorTable = 010;
screenDescriptorPackedFieldByte |= ((globalColorTableFlag & 0x1) << 7);
screenDescriptorPackedFieldByte |= ((colorResolution & 0x7) << 4);
screenDescriptorPackedFieldByte |= ((screenDescriptorSortFlag & 0x1) << 3);
screenDescriptorPackedFieldByte |= ((sizeOfGlobalColorTable & 0x7) << 0);
NSLog(#"0x%02X",screenDescriptorPackedFieldByte);
This value is not binary. It is octal.
uint8_t sizeOfGlobalColorTable = 010;
In (Objective-)C, integer constants starting with 0 are interpreted as octal values. What you actually wrote works out to 0b1000 & 0b0111 = 0.
It should be:
uint8_t sizeOfGlobalColorTable = 0x2;
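With both literals written as 0x2, the four |= lines from the question then produce the value it expected:
uint8_t colorResolution        = 0x2;   // the intended binary 010, i.e. decimal 2
uint8_t sizeOfGlobalColorTable = 0x2;
// The packing now yields 0x80 | 0x20 | 0x00 | 0x02 = 0xA2.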
010 in C is octal (base 8) for the decimal 8.
The leading 0 makes the compiler assume you want an octal value, similar to the usage of the 0x prefix to indicate hexadecimal.
That (correctly) results in 0 for both the second and fourth lines of the bit calculation (the colorResolution and sizeOfGlobalColorTable lines).
If you meant decimal ten, just drop the leading 0 and write 10; if you intended the binary pattern 010, write 2 (or 0x2).
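A quick standalone check (plain C, my own snippet) shows how the compiler reads the three spellings of the same digits:
#include <stdio.h>

int main(void)
{
    printf("%d %d %d\n", 010, 10, 0x10);   /* prints: 8 10 16 */
    return 0;
}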
I'm struggling to learn how to pack four separate values into a single byte. I'm trying to get a hex output of 0x91, whose binary representation is supposed to be 10010001, but instead I'm getting outputs of 16842753 and 0x1010001 respectively. Or is there a better way to do this?
uint8_t globalColorTableFlag = 1;
uint8_t colorResolution = 001;
uint8_t sortFlag = 0;
uint8_t sizeOfGlobalColorTable = 001;
uint32_t packed = ((globalColorTableFlag << 24) | (colorResolution << 16) | (sortFlag << 8) | (sizeOfGlobalColorTable << 0));
NSLog(#"%d",packed); // Logs 16842753, should be: 10010001
NSLog(#"0x%02X",packed); // Logs 0x1010001, should be: 0x91
Try the following:
/* packed starts at 0 */
uint8_t packed = 0;
/* keep one bit of the flag and shift it to bit 7 (the most significant bit) */
packed |= ((globalColorTableFlag & 0x1) << 7);
/* keep three bits of the resolution and shift them to bits 4-6 */
packed |= ((colorResolution & 0x7) << 4);
/* keep one bit of the flag and shift it to bit 3 */
packed |= ((sortFlag & 0x1) << 3);
/* keep three bits and leave them in bits 0-2 */
packed |= ((sizeOfGlobalColorTable & 0x7) << 0);
For an explanation about the relation between hexadecimal and binary digits see this answer: https://stackoverflow.com/a/17914633/4178025
For bitwise operations see: https://stackoverflow.com/a/3427633/4178025
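Logging the result with the question's own format string is a quick way to confirm the packing:
NSLog(@"0x%02X", packed);   // expected to print 0x91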
packed = ((globalColorTableFlag & 1) << 7) +
         ((colorResolution & 0x7) << 4) +
         ((sortFlag & 1) << 3) +
         (sizeOfGlobalColorTable & 0x7);  /* + and | are equivalent here because the shifted fields occupy disjoint bits */
Is there any way to convert a signed integer into an array of bytes in NXC? I can't use explicit type casting or pointers either, due to language limitations.
I've tried:
for (unsigned long i = 1; i <= 2; i++)
{
    MM_mem[id.idx] = (val & (0xFF << ((2 - i) * 8))) >> ((2 - i) * 8);
    id.idx++;
}
But it fails.
EDIT: This works... It just wasn't downloading. I've wasted about an hour trying to figure it out. >_>
EDIT: In NXC, >> is an arithmetic shift, int is a signed 16-bit integer type, and byte is the same thing as unsigned char.
NXC is 'Not eXactly C', a relative of C but distinctly different from it.
How about
unsigned char b[4];
b[0] = (x & 0xFF000000) >> 24;  /* most significant byte first (big-endian order) */
b[1] = (x & 0x00FF0000) >> 16;
b[2] = (x & 0x0000FF00) >> 8;
b[3] = x & 0xFF;
The best way to do this in NXC, with the opcodes available in the underlying VM, is to use FlattenVar to convert any type into a string (i.e. a byte array with a null appended at the end). It results in a single VM opcode, whereas the options above, which use shifts, bitwise ANDs, and array operations, require dozens of lines of assembly language.
task main()
{
    int x = Random(); // 16 bit random number - could be negative
    string data;
    data = FlattenVar(x); // convert type to byte array with trailing null
    NumOut(0, LCD_LINE1, x);
    for (int i=0; i < ArrayLen(data)-1; i++)
    {
#ifdef __ENHANCED_FIRMWARE
        TextOut(0, LCD_LINE2-8*i, FormatNum("0x%2.2x", data[i]));
#else
        NumOut(0, LCD_LINE2-8*i, data[i]);
#endif
    }
    Wait(SEC_4);
}
The best way to get help with LEGO MINDSTORMS and the NXT and Not eXactly C is via the mindboards forums at http://forums.mindboards.net/
Question originally tagged c; this answer may not be applicable to Not eXactly C.
What is the problem with this:
int value;
char bytes[sizeof(int)];
bytes[0] = (value >> 0) & 0xFF;
bytes[1] = (value >> 8) & 0xFF;
bytes[2] = (value >> 16) & 0xFF;
bytes[3] = (value >> 24) & 0xFF;
You can regard it as an unrolled loop. The shift by zero could be omitted; the optimizer would certainly do so. Strictly speaking, the result of right-shifting a negative value is implementation-defined rather than undefined, but there is no problem here because the & 0xFF keeps only bits whose values do not depend on how the implementation fills the vacated high bits.
This code gives the bytes in a little-endian order - the least-significant byte is in bytes[0]. Clearly, big-endian order is achieved by:
int value;
char bytes[sizeof(int)];
bytes[3] = (value >> 0) & 0xFF;
bytes[2] = (value >> 8) & 0xFF;
bytes[1] = (value >> 16) & 0xFF;
bytes[0] = (value >> 24) & 0xFF;
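A self-contained round-trip check (my own sketch, plain C) of the little-endian variant, splitting a value and reassembling it:
#include <stdio.h>

int main(void)
{
    int value = 0x12345678;
    unsigned char bytes[sizeof(int)];

    /* little-endian split, as above */
    bytes[0] = (value >> 0) & 0xFF;
    bytes[1] = (value >> 8) & 0xFF;
    bytes[2] = (value >> 16) & 0xFF;
    bytes[3] = (value >> 24) & 0xFF;

    /* reassemble and compare */
    unsigned reassembled = (unsigned)bytes[0]
                         | (unsigned)bytes[1] << 8
                         | (unsigned)bytes[2] << 16
                         | (unsigned)bytes[3] << 24;

    printf("0x%X -> 0x%X\n", (unsigned)value, reassembled);   /* 0x12345678 -> 0x12345678 */
    return 0;
}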