NSMakeRange - replaceBytesInRange Question - objective-c

In the following code I get the expected results:
- (int)getInt {
    NSRange intRange = NSMakeRange(0, 3);
    char buffer[4];
    [stream getBytes:buffer range:intRange];
    [stream replaceBytesInRange:NSMakeRange(0, 3) withBytes:NULL length:0];
    return (int) (
        (((int)buffer[0] & 0xffff) << 24) |
        (((int)buffer[1] & 0xffff) << 16) |
        (((int)buffer[2] & 0xffff) << 8)  |
         ((int)buffer[3] & 0xffff) );
}
If I change intRange to 0, 4 I get the expected results.
If I change replaceBytesInRange to 0, 4 I seem to lose an extra byte in the stream.
I'm okay with using 0, 3 - but I'm wondering why this happens, because with 2- and 8-byte replacements I do not get this behavior.
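For comparison, here is a sketch of the 4-byte version I would expect to be equivalent (illustrative only - it assumes stream is an NSMutableData instance variable holding the incoming bytes and that the integer arrives big-endian; the masks are 0xff so each byte is kept to 8 bits):
- (int)getIntFourBytes {
    // Read all four bytes, then drop exactly those four bytes from the front.
    char buffer[4];
    [stream getBytes:buffer range:NSMakeRange(0, 4)];
    [stream replaceBytesInRange:NSMakeRange(0, 4) withBytes:NULL length:0];
    // Mask each byte to 8 bits before shifting it into place.
    return (int)((((int)buffer[0] & 0xff) << 24) |
                 (((int)buffer[1] & 0xff) << 16) |
                 (((int)buffer[2] & 0xff) << 8)  |
                  ((int)buffer[3] & 0xff));
}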

Field packing to form a single byte

I'm struggling to learn how to pack four separate values into a single byte. I'm trying to get a hex output of 0x91 (binary 10010001), but instead I'm getting outputs of 0x1010001 and 16842753 respectively. Or is there a better way to do this?
uint8_t globalColorTableFlag = 1;
uint8_t colorResolution = 001;
uint8_t sortFlag = 0;
uint8_t sizeOfGlobalColorTable = 001;
uint32_t packed = ((globalColorTableFlag << 24) | (colorResolution << 16) | (sortFlag << 8) | (sizeOfGlobalColorTable << 0));
NSLog(#"%d",packed); // Logs 16842753, should be: 10010001
NSLog(#"0x%02X",packed); // Logs 0x1010001, should be: 0x91
Try the following:
/* packed starts at 0 */
uint8_t packed = 0;
/* one bit of the flag is kept and shifted to bit 7, the most significant bit */
packed |= ((globalColorTableFlag & 0x1) << 7);
/* three bits of the resolution are kept and shifted to bits 6-4 */
packed |= ((colorResolution & 0x7) << 4);
/* one bit of the sort flag is kept and shifted to bit 3 */
packed |= ((sortFlag & 0x1) << 3);
/* three bits of the table size are kept and left in bits 2-0 */
packed |= ((sizeOfGlobalColorTable & 0x7) << 0);
For an explanation about the relation between hexadecimal and binary digits see this answer: https://stackoverflow.com/a/17914633/4178025
For bitwise operations see: https://stackoverflow.com/a/3427633/4178025
packed = ((globalColorTableFlag & 1) << 7) +
         ((colorResolution & 0x7) << 4) +
         ((sortFlag & 1) << 3) +
         (sizeOfGlobalColorTable & 0x7);
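Putting it together, here is a minimal self-contained sketch (same field values as in the question) that logs the expected 0x91:
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        uint8_t globalColorTableFlag = 1;
        uint8_t colorResolution = 1;        // 3-bit field
        uint8_t sortFlag = 0;
        uint8_t sizeOfGlobalColorTable = 1; // 3-bit field

        uint8_t packed = 0;
        packed |= (globalColorTableFlag   & 0x1) << 7; // bit 7
        packed |= (colorResolution        & 0x7) << 4; // bits 6-4
        packed |= (sortFlag               & 0x1) << 3; // bit 3
        packed |= (sizeOfGlobalColorTable & 0x7);      // bits 2-0

        NSLog(@"0x%02X", (unsigned int)packed); // logs 0x91
    }
    return 0;
}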

What is this operation in an enum type?

What do 1 << 0, 1 << 1, 1 << 2, 1 << 3, and 1 << 4 mean, as in NSStreamEventOpenCompleted = 1 << 0, in the example below?
typedef enum {
    NSStreamEventNone = 0,
    NSStreamEventOpenCompleted = 1 << 0,
    NSStreamEventHasBytesAvailable = 1 << 1,
    NSStreamEventHasSpaceAvailable = 1 << 2,
    NSStreamEventErrorOccurred = 1 << 3,
    NSStreamEventEndEncountered = 1 << 4
};
That's a bitwise shift operation. It is used so that you can set one or more flags from the enum. This answer has a good explanation: Why use the Bitwise-Shift operator for values in a C enum definition?
Basically, it's so that one integer can store multiple flags, which can be checked with the bitwise AND operator. The enum values end up looking like this:
typedef enum {
    NSStreamEventNone = 0,                    // 00000
    NSStreamEventOpenCompleted = 1 << 0,      // 00001
    NSStreamEventHasBytesAvailable = 1 << 1,  // 00010
    NSStreamEventHasSpaceAvailable = 1 << 2,  // 00100
    NSStreamEventErrorOccurred = 1 << 3,      // 01000
    NSStreamEventEndEncountered = 1 << 4      // 10000
};
So you can say:
// Set two flags with the bitwise OR operator
int flags = NSStreamEventEndEncountered | NSStreamEventOpenCompleted; // 10001
if (flags & NSStreamEventEndEncountered)    // true
if (flags & NSStreamEventHasBytesAvailable) // false
If you didn't have the bit shifts, the values could clash or overlap and the technique wouldn't work. You may also see enum values written out as 0, 1, 2, 4, 8, 16, which is the same thing as the shifts above.
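Here is the same idea as a small compilable sketch using the actual Foundation constants:
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Combine two flags into a single integer with bitwise OR.
        NSStreamEvent flags = NSStreamEventEndEncountered | NSStreamEventOpenCompleted; // 10001

        // Test individual flags with bitwise AND.
        if (flags & NSStreamEventEndEncountered)
            NSLog(@"end encountered flag is set");     // logged
        if (flags & NSStreamEventHasBytesAvailable)
            NSLog(@"has bytes available flag is set"); // never logged
    }
    return 0;
}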

Separate signed int into bytes in NXC

Is there any way to convert a signed integer into an array of bytes in NXC? I can't use explicit type casting or pointers either, due to language limitations.
I've tried:
for (unsigned long i = 1; i <= 2; i++)
{
    MM_mem[id.idx] = (val & (0xFF << ((2 - i) * 8))) >> ((2 - i) * 8);
    id.idx++;
}
But it fails.
EDIT: This works... It just wasn't downloading. I've wasted about an hour trying to figure it out. >_>
EDIT: In NXC, >> is an arithmetic shift. int is a signed 16-bit integer type. A byte is the same thing as unsigned char.
NXC is 'Not eXactly C', a relative of C, but distinctly different from C.
How about
unsigned char b[4];
b[0] = (x & 0xFF000000) >> 24;
b[1] = (x & 0x00FF0000) >> 16;
b[2] = (x & 0x0000FF00) >> 8;
b[3] = x & 0xFF;
The best way to do this in NXC with the opcodes available in the underlying VM is to use FlattenVar to convert any type into a string (aka a byte array with a null added at the end). It results in a single VM opcode operation, whereas any of the above options that use shifts, bitwise ANDs, and array operations will require dozens of lines of assembly language.
task main()
{
    int x = Random(); // 16-bit random number - could be negative
    string data;
    data = FlattenVar(x); // convert type to byte array with trailing null
    NumOut(0, LCD_LINE1, x);
    for (int i = 0; i < ArrayLen(data) - 1; i++)
    {
#ifdef __ENHANCED_FIRMWARE
        TextOut(0, LCD_LINE2-8*i, FormatNum("0x%2.2x", data[i]));
#else
        NumOut(0, LCD_LINE2-8*i, data[i]);
#endif
    }
    Wait(SEC_4);
}
The best way to get help with LEGO MINDSTORMS and the NXT and Not eXactly C is via the mindboards forums at http://forums.mindboards.net/
Question originally tagged c; this answer may not be applicable to Not eXactly C.
What is the problem with this:
int value;
char bytes[sizeof(int)];
bytes[0] = (value >> 0) & 0xFF;
bytes[1] = (value >> 8) & 0xFF;
bytes[2] = (value >> 16) & 0xFF;
bytes[3] = (value >> 24) & 0xFF;
You can regard it as an unrolled loop. The shift by zero could be omitted; the optimizer would certainly do so. Even though the result of right-shifting a negative value is implementation-defined, there is no problem because this code only uses the bits for which the behaviour is defined.
This code gives the bytes in little-endian order - the least-significant byte is in bytes[0]. Big-endian order is achieved by:
int value;
char bytes[sizeof(int)];
bytes[3] = (value >> 0) & 0xFF;
bytes[2] = (value >> 8) & 0xFF;
bytes[1] = (value >> 16) & 0xFF;
bytes[0] = (value >> 24) & 0xFF;
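As a sanity check, here is a small self-contained sketch (plain C, assuming a 32-bit two's-complement int as on typical desktop platforms) that splits a negative value and reassembles it:
#include <stdio.h>

int main(void) {
    int value = -12345;
    unsigned char bytes[sizeof(int)];

    /* Split: least-significant byte first (little-endian order). */
    bytes[0] = (value >> 0)  & 0xFF;
    bytes[1] = (value >> 8)  & 0xFF;
    bytes[2] = (value >> 16) & 0xFF;
    bytes[3] = (value >> 24) & 0xFF;

    /* Reassemble in the same order. */
    int restored = (int)((unsigned)bytes[0]
                       | ((unsigned)bytes[1] << 8)
                       | ((unsigned)bytes[2] << 16)
                       | ((unsigned)bytes[3] << 24));

    printf("%d -> %d\n", value, restored); /* prints -12345 -> -12345 on such platforms */
    return 0;
}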

Websocket (draft 76) handshake difficulties!

I'm using the following keys to calculate the correct handshake response string:
Key1: 18x 6]8vM;54 *(5: { U1]8 z [ 8
Key2: 1_ tx7X d < nw 334J702) 7]o}` 0
Key3: 54:6d:5b:4b:20:54:32:75
I've calculated Key1 and Key2's values:
Key1: 0947fa63 (hex)
Key2: 0a5510d3
However, I'm not sure what to do next. From what I can gather, you concatenate them and MD5 the result, but that doesn't seem to work out, i.e.
MD5 hashing: 0947fa630a5510d3546d5b4b20543275
Help!
This is the python code for creating the response hash:
from hashlib import md5
import struct
....
hashed = md5(struct.pack('>II8s', num1, num2, key3)).digest()
In the example, num1 and num2 are the numeric values of key1 and key2, and key3 is the actual string (raw bytes) received.
The struct.pack() call uses big-endian mode for the numeric values, packing 4 bytes for each number followed by the 8-byte key3 string.
See the Documentation for the python struct module.
The C version would look more like this:
/* Pack it big-endian */
buf[0] = (num1 & 0xff000000) >> 24;
buf[1] = (num1 & 0xff0000) >> 16;
buf[2] = (num1 & 0xff00) >> 8;
buf[3] = num1 & 0xff;
buf[4] = (num2 & 0xff000000) >> 24;
buf[5] = (num2 & 0xff0000) >> 16;
buf[6] = (num2 & 0xff00) >> 8;
buf[7] = num2 & 0xff;
strncpy(buf+8, headers->key3, 8);
buf[16] = '\0';
md5_buffer(buf, 16, target);
target[16] = '\0';
md5_buffer is in glibc.
For further reference you can look at working implementations (where the above code came from) of websockify (disclaimer: I wrote websockify).
Here's my version:
https://github.com/boothead/stargate/blob/master/stargate/handshake.py#L104
If you use stargate then all of that nasty stuff is done for you :-)
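For completeness, the same 16-byte challenge can be built and hashed in Objective-C with CommonCrypto. This is only a sketch (handshakeResponse is just an illustrative name); it assumes num1, num2, and the raw 8-byte key3 have already been parsed out of the client's handshake headers:
#import <Foundation/Foundation.h>
#import <CommonCrypto/CommonDigest.h>

// Pack big-endian num1 and num2 followed by the 8 raw key3 bytes, then MD5 the buffer.
static NSData *handshakeResponse(uint32_t num1, uint32_t num2, NSData *key3) {
    uint8_t buf[16];
    buf[0] = (num1 >> 24) & 0xff;
    buf[1] = (num1 >> 16) & 0xff;
    buf[2] = (num1 >> 8)  & 0xff;
    buf[3] =  num1        & 0xff;
    buf[4] = (num2 >> 24) & 0xff;
    buf[5] = (num2 >> 16) & 0xff;
    buf[6] = (num2 >> 8)  & 0xff;
    buf[7] =  num2        & 0xff;
    [key3 getBytes:(buf + 8) length:8]; // key3 must be exactly the 8 raw bytes from the request

    uint8_t digest[CC_MD5_DIGEST_LENGTH];
    CC_MD5(buf, (CC_LONG)sizeof(buf), digest);
    return [NSData dataWithBytes:digest length:sizeof(digest)]; // 16-byte response body
}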

How to change the value of an NSColor object into its 8-bit value

I need to convert the value of an NSColor object into an 8-bit integer value.
Code:
uint8_t r = (uint32_t)(MIN(1.0f, MAX(0.0f, [[CWhiteBoardController ReturnFillColor] redComponent])) * 0xff);
uint8_t g = (uint32_t)(MIN(1.0f, MAX(0.0f, [[CWhiteBoardController ReturnFillColor] greenComponent])) * 0xff);
uint8_t b = (uint32_t)(MIN(1.0f, MAX(0.0f, [[CWhiteBoardController ReturnFillColor] blueComponent])) * 0xff);
uint8_t a = (uint32_t)(MIN(1.0f, MAX(0.0f, [[CWhiteBoardController ReturnFillColor] alphaComponent])) * 0xff);
uint8_t value = (a << 24) | (r<< 16) | (g << 8) | b;
The value that I receive is 0.
I am not seeing where I am going wrong.
Can anyone help me out, please?
I found the problem: I needed to write
int value = (a << 24) | (r<< 16) | (g << 8) | b;
in place of
uint8_t value = (a << 24) | (r<< 16) | (g << 8) | b;
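For reference, here is a minimal sketch of the corrected packing. The helper name PackARGB is just illustrative; the uint32_t casts keep the promoted alpha byte from being shifted into the sign bit of int:
#import <Foundation/Foundation.h>

// Pack four 8-bit channels into one 32-bit ARGB word.
static uint32_t PackARGB(uint8_t a, uint8_t r, uint8_t g, uint8_t b) {
    return ((uint32_t)a << 24) | ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
}

int main(void) {
    @autoreleasepool {
        uint32_t value = PackARGB(0xFF, 0x12, 0x34, 0x56);
        NSLog(@"0x%08X", (unsigned int)value); // logs 0xFF123456
    }
    return 0;
}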