How to change the value of an NSColor object into its 8-bit value - objective-c

I need to convert the value of an NSColor object into 8-bit integer values.
Code:
uint8_t r = (uint32_t)(MIN(1.0f, MAX(0.0f, [[CWhiteBoardController ReturnFillColor] redComponent])) * 0xff);
uint8_t g = (uint32_t)(MIN(1.0f, MAX(0.0f, [[CWhiteBoardController ReturnFillColor] greenComponent])) * 0xff);
uint8_t b = (uint32_t)(MIN(1.0f, MAX(0.0f, [[CWhiteBoardController ReturnFillColor] blueComponent])) * 0xff);
uint8_t a = (uint32_t)(MIN(1.0f, MAX(0.0f, [[CWhiteBoardController ReturnFillColor] alphaComponent])) * 0xff);
uint8_t value = (a << 24) | (r<< 16) | (g << 8) | b;
The value that I receive is 0.
I am not seeing where I am wrong.
Can anyone help me out, please?

I found the problem: I actually need to write
int value = (a << 24) | (r<< 16) | (g << 8) | b;
in place of
uint8_t value = (a << 24) | (r<< 16) | (g << 8) | b;
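For reference, here is a minimal corrected sketch in plain C, assuming the four components have already been read from the color as floating-point values in the 0.0 - 1.0 range (the component variables below are placeholders standing in for the redComponent/greenComponent/blueComponent/alphaComponent calls):

#include <stdint.h>
#include <stdio.h>

/* Clamp a 0.0-1.0 component and scale it to 0-255. */
static uint8_t componentToByte(double c) {
    if (c < 0.0) c = 0.0;
    if (c > 1.0) c = 1.0;
    return (uint8_t)(c * 0xFF);
}

int main(void) {
    double red = 0.5, green = 0.25, blue = 1.0, alpha = 1.0; /* placeholder values */

    uint8_t r = componentToByte(red);
    uint8_t g = componentToByte(green);
    uint8_t b = componentToByte(blue);
    uint8_t a = componentToByte(alpha);

    /* The packed ARGB result needs a 32-bit type; a uint8_t would keep only the low (blue) byte. */
    uint32_t value = ((uint32_t)a << 24) | ((uint32_t)r << 16) | ((uint32_t)g << 8) | b;
    printf("0x%08X\n", (unsigned)value);
    return 0;
}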

Related

Jetpack Compose how to build custom Color by overlaying 2 existing colors in code?

Designers often build custom colors by putting existing colors from "our" custom theme one on top of another with alpha applied.
How can I calculate the resulting Color without applying multiple backgrounds one on top of another?
Something like
val background = MaterialTheme.colors.MyDanger.copy(alpha = 0.12f) + MaterialTheme.colors.CustomTint16
Plus is not defined for Color since it's not commutative, but is there a way to just put one Color on top of another in code and apply only the result?
Example1:
// Ratio has to be 0.5 to achieve an even mix.
// The 3rd argument is the ratio (the proportion used when blending the colors).
// If you want 30% of color1 and 70% of color2, use ColorUtils.blendARGB(***, ***, 0.3F);
int resultColor = androidx.core.graphics.ColorUtils.blendARGB(color1, color2, 0.5F);
Example2:
public static int mixColor(int color1, int color2, float ratio) {
    final float inverse = 1 - ratio;
    float a = (color1 >>> 24) * inverse + (color2 >>> 24) * ratio;
    float r = ((color1 >> 16) & 0xFF) * inverse + ((color2 >> 16) & 0xFF) * ratio;
    float g = ((color1 >> 8) & 0xFF) * inverse + ((color2 >> 8) & 0xFF) * ratio;
    float b = (color1 & 0xFF) * inverse + (color2 & 0xFF) * ratio;
    return ((int) a << 24) | ((int) r << 16) | ((int) g << 8) | (int) b;
}
val result = color1.compositeOver(color2)
is what I was looking for.
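For context, compositeOver performs standard "source over" alpha compositing. Here is a minimal sketch of that operation on packed 0xAARRGGBB ints in plain C (my own illustration of the formula, not the Compose implementation):

#include <stdint.h>

/* Composite src over dst, both packed as 0xAARRGGBB with non-premultiplied alpha. */
uint32_t compositeOver(uint32_t src, uint32_t dst) {
    float sa = ((src >> 24) & 0xFF) / 255.0f;
    float da = ((dst >> 24) & 0xFF) / 255.0f;
    float outA = sa + da * (1.0f - sa);
    if (outA <= 0.0f) return 0;

    uint32_t out = (uint32_t)(outA * 255.0f + 0.5f) << 24;
    for (int shift = 16; shift >= 0; shift -= 8) {
        float sc = ((src >> shift) & 0xFF) / 255.0f;
        float dc = ((dst >> shift) & 0xFF) / 255.0f;
        /* Blend each color channel weighted by alpha, then un-premultiply. */
        float oc = (sc * sa + dc * da * (1.0f - sa)) / outA;
        out |= (uint32_t)(oc * 255.0f + 0.5f) << shift;
    }
    return out;
}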

Field packing to form a single byte

I'm struggling to learn how to pack four separate values into a single byte. I'm trying to get a hex output of 0x91, and the binary representation is supposed to be 10010001, but instead I'm getting outputs of 0x1010001 and 16842753 respectively. Or is there a better way to do this?
uint8_t globalColorTableFlag = 1;
uint8_t colorResolution = 001;
uint8_t sortFlag = 0;
uint8_t sizeOfGlobalColorTable = 001;
uint32_t packed = ((globalColorTableFlag << 24) | (colorResolution << 16) | (sortFlag << 8) | (sizeOfGlobalColorTable << 0));
NSLog(#"%d",packed); // Logs 16842753, should be: 10010001
NSLog(#"0x%02X",packed); // Logs 0x1010001, should be: 0x91
Try the following:
/* packed starts at 0 */
uint8_t packed = 0;
/* one bit of the flag is kept and shifted to the last position */
packed |= ((globalColorTableFlag & 0x1) << 7);
/* three bits of the resolution are kept and shifted to the fifth position */
packed |= ((colorResolution & 0x7) << 4);
/* one bit of the flag is kept and shifted to the fourth position */
packed |= ((sortFlag & 0x1) << 3);
/* three bits are kept and left in the first position */
packed |= ((sizeOfGlobalColorTable & 0x7) << 0);
For an explanation about the relation between hexadecimal and binary digits see this answer: https://stackoverflow.com/a/17914633/4178025
For bitwise operations see: https://stackoverflow.com/a/3427633/4178025
packed = ((globalColorTableFlag & 1) << 7) +
         ((colorResolution & 0x7) << 4) +
         ((sortFlag & 1) << 3) +
         (sizeOfGlobalColorTable & 0x7);
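Putting that together, here is a minimal self-contained sketch that packs the four fields and prints the expected 0x91 (plain C, with printf standing in for NSLog):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t globalColorTableFlag = 1;    /* 1 bit  */
    uint8_t colorResolution = 1;         /* 3 bits */
    uint8_t sortFlag = 0;                /* 1 bit  */
    uint8_t sizeOfGlobalColorTable = 1;  /* 3 bits */

    /* Pack into one byte: flag | resolution | sort | size -> 1 001 0 001 */
    uint8_t packed = (uint8_t)(((globalColorTableFlag & 0x1) << 7) |
                               ((colorResolution & 0x7) << 4) |
                               ((sortFlag & 0x1) << 3) |
                               (sizeOfGlobalColorTable & 0x7));

    printf("0x%02X\n", (unsigned)packed); /* Prints 0x91 */
    return 0;
}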

What is the better way to implement a perfect hash function for an iOS app?

I need to create a perfect hash for a list of string identifiers, so before beginning with this implementation (I have never done it before) I want to know if there is any good framework or good tutorial that could be useful?
Thanks!
I use the MurmurHash written by Austin Appleby:
unsigned int Hash (const char* buffer, size_t size, unsigned seed)
{
    /* MurmurHash2 constants. */
    const unsigned int m = 0x5bd1e995;
    const int r = 24;

    unsigned int h = seed ^ (unsigned int)size;
    const unsigned char* data = (const unsigned char*)buffer;

    /* Mix four bytes of the input at a time into the hash. */
    while(size >= 4)
    {
        unsigned int k;
        k  = data[0];
        k |= data[1] << 8;
        k |= data[2] << 16;
        k |= data[3] << 24;

        k *= m;
        k ^= k >> r;
        k *= m;

        h *= m;
        h ^= k;

        data += 4;
        size -= 4;
    }

    /* Handle the last few bytes (intentional fall-through). */
    switch(size)
    {
    case 3: h ^= data[2] << 16;
    case 2: h ^= data[1] << 8;
    case 1: h ^= data[0];
            h *= m;
    }

    /* Final avalanche. */
    h ^= h >> 13;
    h *= m;
    h ^= h >> 15;

    return h;
}
But ultimately your choice of hashing function depends on the trade-off between quality and speed.
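A quick usage sketch (the identifier string and seed below are placeholders chosen for illustration):

#include <stdio.h>
#include <string.h>

/* Assumes the Hash() function above is in scope. */
int main(void) {
    const char *identifier = "com.example.someIdentifier"; /* placeholder identifier */
    unsigned int h = Hash(identifier, strlen(identifier), 0x9747b28c /* arbitrary seed */);
    printf("hash = 0x%08X\n", h);
    return 0;
}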

Separate signed int into bytes in NXC

Is there any way to convert a signed integer into an array of bytes in NXC? I can't use explicit type casting or pointers either, due to language limitations.
I've tried:
for(unsigned long i = 1; i <= 2; i++)
{
    MM_mem[id.idx] = (val & (0xFF << ((2 - i) * 8))) >> ((2 - i) * 8);
    id.idx++;
}
But it fails.
EDIT: This works... It just wasn't downloading. I've wasted about an hour trying to figure it out. >_>
EDIT: In NXC, >> is an arithmetic shift. int is a signed 16-bit integer type. A byte is the same thing as unsigned char.
NXC is 'Not eXactly C', a relative of C, but distinctly different from C.
How about
unsigned char b[4];
b[0] = (x & 0xFF000000) >> 24;
b[1] = (x & 0x00FF0000) >> 16;
b[2] = (x & 0x0000FF00) >> 8;
b[3] = x & 0xFF;
The best way to do this in NXC with the opcodes available in the underlying VM is to use FlattenVar to convert any type into a string (aka a byte array with a null added at the end). It results in a single VM opcode operation, whereas any of the above options that use shifts, logical ANDs, and array operations will require dozens of lines of assembly language.
task main()
{
    int x = Random(); // 16 bit random number - could be negative
    string data;
    data = FlattenVar(x); // convert type to byte array with trailing null
    NumOut(0, LCD_LINE1, x);
    for (int i=0; i < ArrayLen(data)-1; i++)
    {
#ifdef __ENHANCED_FIRMWARE
        TextOut(0, LCD_LINE2-8*i, FormatNum("0x%2.2x", data[i]));
#else
        NumOut(0, LCD_LINE2-8*i, data[i]);
#endif
    }
    Wait(SEC_4);
}
The best way to get help with LEGO MINDSTORMS and the NXT and Not eXactly C is via the mindboards forums at http://forums.mindboards.net/
Question originally tagged c; this answer may not be applicable to Not eXactly C.
What is the problem with this:
int value;
char bytes[sizeof(int)];
bytes[0] = (value >> 0) & 0xFF;
bytes[1] = (value >> 8) & 0xFF;
bytes[2] = (value >> 16) & 0xFF;
bytes[3] = (value >> 24) & 0xFF;
You can regard it as an unrolled loop. The shift by zero could be omitted; the optimizer would certainly do so. Even though the result of right-shifting a negative value is implementation-defined, there is no problem because this code only accesses the bits where the behaviour is defined.
This code gives the bytes in a little-endian order - the least-significant byte is in bytes[0]. Clearly, big-endian order is achieved by:
int value;
char bytes[sizeof(int)];
bytes[3] = (value >> 0) & 0xFF;
bytes[2] = (value >> 8) & 0xFF;
bytes[1] = (value >> 16) & 0xFF;
bytes[0] = (value >> 24) & 0xFF;
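For completeness, a sketch of the round trip, splitting a value into little-endian bytes as above and reassembling it (plain C; this assumes a 32-bit int and is not part of the original answer):

#include <stdio.h>

int main(void) {
    int value = -123456;
    unsigned char bytes[sizeof(int)];

    /* Split into bytes, least-significant first (little-endian order). */
    bytes[0] = (value >> 0)  & 0xFF;
    bytes[1] = (value >> 8)  & 0xFF;
    bytes[2] = (value >> 16) & 0xFF;
    bytes[3] = (value >> 24) & 0xFF;

    /* Reassemble; the cast back to int restores the sign on a 32-bit int. */
    unsigned int u = (unsigned int)bytes[0]
                   | ((unsigned int)bytes[1] << 8)
                   | ((unsigned int)bytes[2] << 16)
                   | ((unsigned int)bytes[3] << 24);
    int roundTrip = (int)u;

    printf("%d -> %d\n", value, roundTrip); /* Prints -123456 -> -123456 */
    return 0;
}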

NSMakeRange - replaceBytesInRange Question

In the following code I get the expected results
- (int)getInt{
    NSRange intRange = NSMakeRange(0,3);
    char buffer[4];
    [stream getBytes:buffer range:intRange];
    [stream replaceBytesInRange:NSMakeRange(0, 3) withBytes:NULL length:0];
    return (int) (
        (((int)buffer[0] & 0xffff) << 24) |
        (((int)buffer[1] & 0xffff) << 16) |
        (((int)buffer[2] & 0xffff) << 8) |
        ((int)buffer[3] & 0xffff) );
}
If I change intRange to 0, 4 I get the expected results.
If I change replaceBytesInRange to 0, 4 I seem to lose an extra byte in the stream.
I'm okay with using 0,3 - but I'm wondering why this happens, because with 2- and 8-byte replacements I do not get this behavior.
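As an aside, a conventional way to assemble four stream bytes into a 32-bit big-endian value is to widen each byte to unsigned before shifting; a minimal sketch in plain C (an illustration with placeholder bytes, not the original method):

#include <stdio.h>

int main(void) {
    /* Placeholder bytes; in the original code these come from the stream. */
    unsigned char buffer[4] = { 0x12, 0x34, 0x56, 0x78 };

    unsigned int value = ((unsigned int)buffer[0] << 24) |
                         ((unsigned int)buffer[1] << 16) |
                         ((unsigned int)buffer[2] << 8)  |
                          (unsigned int)buffer[3];

    printf("0x%08X\n", value); /* Prints 0x12345678 */
    return 0;
}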