Basic bit manipulation question. How can I declare a uint8_t bitmap value in Objective-C?
e.g. value: "00000001"
Is it as simple as:
uint8_t value = 00000001;
or does it need some hexadecimal prefix?
uint8_t valuePrefix = 0x00000001;
When you say "bitmap", I assume you're talking about a binary representation. If you're writing a binary literal, you use the 0b prefix (a Clang/GCC extension, only standardized in C23):
uint8_t value = 0b00000100; // 4
Or, if only one bit is set, it's common to use the bitwise shift operator:
uint8_t value = 1 << 2; // 4
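As an aside, a common pattern (a minimal sketch, not from the question above; the flag names are hypothetical) is to give each bit a name via a shift and combine the flags with bitwise OR:
#include <stdint.h>
#include <stdio.h>

// Hypothetical flag names; each one occupies a single bit.
enum {
    kFlagA = 1 << 0,   // 0b00000001
    kFlagB = 1 << 1,   // 0b00000010
    kFlagC = 1 << 2,   // 0b00000100
};

int main(void)
{
    uint8_t value = kFlagA | kFlagC;               // 0b00000101
    printf("value = 0x%02x\n", value);             // prints value = 0x05
    printf("C set? %d\n", (value & kFlagC) != 0);  // prints C set? 1
    return 0;
}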
Why do we get this value as output: ffffffff?
#include <stdio.h>

struct bitfield {
    signed char bitflag:1;
};

int main()
{
    unsigned char i = 1;
    struct bitfield *var = (struct bitfield*)&i;
    printf("\n %x \n", var->bitflag);
    return 0;
}
I know that when a memory block the size of the data type is interpreted as a signed type, the first bit represents whether the value is positive (0) or negative (1). But I still can't figure out why -1 (ffffffff) is printed. With only one bit set in the struct, I was expecting something different when it gets promoted to a 1-byte char: my machine is little-endian, so I expected that single bit in the field to be interpreted as the LSB of my 1-byte character.
Can someone please explain? I'm really confused.
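For context, here is a minimal standalone sketch (not the original code; it assumes a typical two's-complement compiler, where a 1-bit signed bit-field can only represent 0 and -1) of what printf actually receives:
#include <stdio.h>

struct bitfield {
    signed char bitflag:1;    // one signed bit: the representable values are 0 and -1
};

int main(void)
{
    struct bitfield b;
    b.bitflag = 1;            // the stored bit pattern 1 reads back as -1 in a 1-bit signed field

    int promoted = b.bitflag; // default argument promotion sign-extends -1 to a full int
    printf("%d %x\n", promoted, promoted);  // prints: -1 ffffffff (with 32-bit int)
    return 0;
}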
Here are two similar constraint blocks, one written using decimal notation, and the other using hexadecimal notation. The first works as expected, but the second only generates positive values (including 0) out of the 5 available values:
-- positive and negative values generated as expected
var rnd_byte : int(bits: 8);
for i from 0 to 9 {
    gen rnd_byte keeping {
        soft it == select {
            90 : [-1, -128, 127, 1];
            10 : 0x00;
        };
    };
    print rnd_byte;
};
-- only positive values (including 0) generated!!!
var rnd_byte : int(bits: 8);
for i from 0 to 9 {
    gen rnd_byte keeping {
        soft it == select {
            90 : [0xFF, 0x80, 0x7F, 0x01];
            10 : 0x00;
        };
    };
    print rnd_byte;
};
How can I make the second example behave like the first one, but keep the hexadecimal notation? I don't want to write large decimal numbers.
Some more about this issue: with procedural code there is auto-casting, so you can write
var rnd_byte : int(bits: 8);
rnd_byte = 0xff;
and it will result in rnd_byte == -1.
Constraints work with int(bits: 8) semantics, and this code would fail:
var rnd_byte : int(bits: 8);
gen rnd_byte keeping {it == 0xff};
As suggested, to get 0xff, define the field as unsigned.
0xff and 0x80 are not in the range of the rnd_byte data type. You need to declare rnd_byte as uint(bits:8).
Alternatively, try to typecast the literals (I could not verify the syntax):
(0xff).as_a(int(bits:8))
In procedural code, automatic casting between numeric types takes care of the absolute majority of cases. However, in generation numbers are viewed by their natural values, as in int(bits:*) semantics. Hex notation means the value is unsigned.
Need help with a union/struct. I'm receiving a byte stream that consists of various packets, so I'm putting the byte data into a union and accessing the needed data via struct members. The problem is with the uint32_t member: reading it skips two bytes and shows the wrong value when accessed via its member. Here's the full demo code:
PacketUtils.h
#include <stdint.h>

typedef struct {
    uint8_t startSymbol;
    uint8_t packetType;
    uint32_t deviceId;
    uint16_t packetCRC;
} PacketData;

typedef union {
    uint8_t *bytes;       // stores raw bytes
    PacketData *packet;
} Packet;

// Puts bytes into predefined struct
void getPacketFromBytes(void *bytes, Packet *packetRef);
PacketUtils.c
#include <stdio.h>
#include "PacketUtils.h"

void getPacketFromBytes(void *bytes, Packet *packetRef)
{
    uint8_t *rawBytes = (uint8_t *)bytes;
    packetRef->bytes = rawBytes;
}
Calling code:
// sample byte data
uint8_t packetBytes[] = {0x11, 0x02, 0x01, 0x01, 0x01, 0x03, 0xbb, 0xbd};
Packet packetRef;
getPacketFromBytes(packetBytes, &packetRef);
printf("%x\n", packetRef.packet->startSymbol); // good - prints 0x11
printf("%x\n", packetRef.packet->packetType); // good - prints 0x02
printf("%x\n", packetRef.packet->deviceId); // bad - prints bd bb 03 01
printf("%x\n", packetRef.packet->packetCRC); // bad - prints 36 80 (some next values in memory)
Everything is OK when the PacketData struct consists only of uint8_t or uint16_t members - then the print shows the correct values. However, printing deviceId of type uint32_t skips two bytes (0x01 0x01) and grabs the last 4 bytes. Printing packetCRC prints values outside the given byte array - two values further on in memory, like packetBytes[12] and packetBytes[13]. I can't figure out why it skips two bytes...
The problem is due to the fields being padded out to default alignment on your platform. On most modern architectures 32-bit values are most efficient when read/written to a 32-bit word aligned address.
In gcc you can avoid this by using a special attribute to indicate that the structure is "packed". See here:
http://gcc.gnu.org/onlinedocs/gcc-3.3.6/gcc/Type-Attributes.html
So the struct definition would look something like this:
typedef struct {
    uint8_t startSymbol;
    uint8_t packetType;
    uint32_t deviceId;
    uint16_t packetCRC;
} PacketData __attribute__((packed));
The 32-bit number will only be placed on a 4-byte boundary. If you move it to the start of your struct, it may just work as you want.
Processors are usually optimised to fetch data at multiples of the datum size - 4 bytes for 32-bit, 8 bytes for 64-bit... - and the compiler knows this, so it adds gaps (padding) into data structures to make sure the processor can fetch the data efficiently.
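To see the padding concretely, here is a minimal sketch (hypothetical check code, reusing the PacketData definition from the question; the exact numbers are ABI-dependent):
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

typedef struct {
    uint8_t startSymbol;
    uint8_t packetType;
    uint32_t deviceId;
    uint16_t packetCRC;
} PacketData;

int main(void)
{
    // On a typical ABI this prints offsets 0, 1, 4, 8 and size 12:
    // two padding bytes after packetType and two more at the end of the struct.
    printf("startSymbol at %zu\n", offsetof(PacketData, startSymbol));
    printf("packetType  at %zu\n", offsetof(PacketData, packetType));
    printf("deviceId    at %zu\n", offsetof(PacketData, deviceId));
    printf("packetCRC   at %zu\n", offsetof(PacketData, packetCRC));
    printf("total size  =  %zu\n", sizeof(PacketData));
    return 0;
}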
If you don't want to deal with the padding and can't move the data structure around, you could define
typedef struct {
uint8_t startSymbol;
uint8_t packetType;
uint16_t deviceIdLow;
uint16_t deviceIdHigh;
uint16_t packetCRC;
} PacketData;
and then just write
uint32_t deviceID = packetRef.packet->deviceIdLow | ((uint32_t)packetRef.packet->deviceIdHigh << 16);
I'm trying to implement a function that will read, from a byte array (which is a char* in my code), a 32-bit int stored with a different endianness. It was suggested that I use NSSwapInt, but I'm clueless about how to go about it. Could anyone show me a snippet?
Thanks in advance!
Here's a short example:
unsigned char bytes[] = { 0x00, 0x00, 0x01, 0x02 };
int intData = *((int *)bytes);
int reverseData = NSSwapInt(intData);
NSLog(@"integer:%d", intData);
NSLog(@"bytes:%08x", intData);
NSLog(@"reverse integer: %d", reverseData);
NSLog(@"reverse bytes: %08x", reverseData);
The output will be:
integer:33619968
bytes:02010000
reverse integer: 258
reverse bytes: 00000102
As mentioned in the docs,
Swaps the bytes of inv and returns the resulting value. Bytes are swapped from each low-order position to the corresponding high-order position and vice versa. For example, if the bytes of inv are numbered from 1 to 4, this function swaps bytes 1 and 4, and bytes 2 and 3.
There are also NSSwapShort and NSSwapLongLong.
There is a potential for a data misalignment exception if you solve this problem using integer pointers - e.g. some architectures require 32-bit values to be at addresses that are multiples of 2 or 4 bytes. The ARM architecture used by the iPhone et al. may throw an exception in this case, but I've no iOS device handy to test whether it does.
A safe way to do this which will never throw any misalignment exceptions is to assemble the integer directly:
int32_t bytes2int(unsigned char *b)
{
    int32_t i;
    i = b[0] | b[1] << 8 | b[2] << 16 | b[3] << 24; // little-endian, or
    i = b[3] | b[2] << 8 | b[1] << 16 | b[0] << 24; // big-endian (pick one)
    return i;
}
You can pass this any byte pointer and it will assemble 4 bytes to make a 32-bit int. You can extend the idea to 64-bit integers if required.
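A minimal usage sketch (assuming you keep only the little-endian line of bytes2int above, with unsigned casts added so the top-byte shift stays well-defined), plus the same idea widened to 64 bits:
#include <stdint.h>
#include <stdio.h>

// Little-endian variant of bytes2int, using unsigned arithmetic.
int32_t bytes2int(unsigned char *b)
{
    return (int32_t)((uint32_t)b[0] | (uint32_t)b[1] << 8 |
                     (uint32_t)b[2] << 16 | (uint32_t)b[3] << 24);
}

// Same idea extended to a 64-bit little-endian value.
int64_t bytes2int64(unsigned char *b)
{
    uint64_t i = 0;
    for (int n = 7; n >= 0; n--)          // start at the most significant byte
        i = (i << 8) | b[n];              // shift what we have and add the next byte
    return (int64_t)i;
}

int main(void)
{
    unsigned char bytes[] = { 0x00, 0x00, 0x01, 0x02 };
    printf("%d\n", bytes2int(bytes));     // prints 33619968 (0x02010000), as in the NSSwapInt example
    return 0;
}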
I have an array of chars pulled out of an NSData object with getBytes:range:.
I want to test whether a particular bit is set. I would assume I'd do it with a bitwise AND, but it doesn't seem to be working for me.
I have the following:
unsigned char firstBytes[3];
[data getBytes:&firstBytes range:range];

int bitIsSet = firstBytes[0] & 00100000;
if (bitIsSet) {
    // Do Something
}
The value of firstBytes[0] is 48 (or '0' as an ASCII character). However, bitIsSet always seems to be 0. I imagine I'm just doing something silly here; I'm new to working at the bit level, so maybe my logic is wrong.
If you put a 0 before a number, you are saying it's expressed in octal representation.
00100000 actually means 32768 in decimal, or 10000000 00000000 in binary.
Try
int bitIsSet = firstBytes[0] & 32;
or
int bitIsSet = firstBytes[0] & 0x20;
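Or, to sidestep literal-base confusion entirely, build the mask with a shift (a minimal standalone sketch; bit 5 is the one worth 0x20):
#include <stdio.h>

int main(void)
{
    unsigned char byte = 48;                  // '0' is 0x30, i.e. binary 00110000
    int bitIsSet = (byte & (1 << 5)) != 0;    // 1 << 5 == 0x20 == 32
    printf("%d\n", bitIsSet);                 // prints 1: bit 5 of 0x30 is set
    return 0;
}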