Why is the bit-field's least significant bit promoted to the MSb during typecasting in the program below?

Why do we get this value as output: ffffffff
#include <stdio.h>

struct bitfield {
    signed char bitflag:1;
};

int main()
{
    unsigned char i = 1;
    struct bitfield *var = (struct bitfield *)&i;
    printf("\n %x \n", var->bitflag);
    return 0;
}
I know that when a memory block the size of the data type is interpreted as a signed type, its first bit says whether the value is positive (0) or negative (1). But I still can't figure out why -1 (ffffffff) is printed. With only one bit set in the struct, I expected that bit to end up as the LSb of the 1-byte char it gets promoted to, because my machine is little-endian.
Can someone please explain? I'm really confused.
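A minimal sketch of the effect in isolation (my own illustration, not part of the original question): on typical implementations a 1-bit signed bit-field can only hold the values 0 and -1, so the stored bit pattern 1 reads back as -1, which is then sign-extended to int in the printf call.

#include <stdio.h>

struct bitfield {
    signed char bitflag:1;
};

int main(void)
{
    struct bitfield b;
    b.bitflag = 1;              /* typically stores the bit pattern 1, i.e. the value -1 */
    printf("%d\n", b.bitflag);  /* prints -1 */
    printf("%x\n", b.bitflag);  /* prints ffffffff after promotion to int */
    return 0;
}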

Related

How should GMP/MPFR limbs be interpreted?

The arbitrary precision libraries GMP and MPFR use heap-allocated arrays of machine word-sized integers to store the limbs that make up the high precision number/mantissa.
How should this array of limbs be interpreted to recover the arbitrary precision integer number? In other words: for N limbs holding B bits each, how should I interpret them to recover the N*B bit number?
Does the limb size really affect the in-memory representation (see below)? If so, what is the rationale behind this?
Background:
I wrote a small program to look inside the representation, but I was confused by what I saw. The limbs seem to be ordered in most significant digit order, whereas the limbs themselves are in native least significant digit format. When representing the 64-bit word 0xAAAABBBBCCCCDDDD using 32-bit words and precision fixed to 128 bits, I see:
% c++ limbs.cpp -lgmp -lmpfr -o limbs && ./limbs
ccccdddd|aaaabbbb|00000000|00000000
00000000|00000000|ccccdddd|aaaabbbb
This seems to imply that the in-memory representation cannot be read back as a string of bits to recover the arbitrary-precision number (e.g., if I loaded it into a register on a machine that supported N*B-sized words). Furthermore, it also seems to suggest that the limb size changes the representation, so I would not be able to deserialize a number without knowing which limb size was used to serialize it.
Here's my test program (uses 32-bit limbs with the __GMP_SHORT_LIMB macro):
#define __GMP_SHORT_LIMB
#include <gmp.h>
#include <mpfr.h>

#include <iomanip>
#include <iostream>

constexpr int PRECISION = 128;

void PrintLimbs(mp_limb_t const *const limbs) {
    std::cout << std::hex;
    constexpr int NUM_LIMBS = PRECISION / (8 * sizeof(mp_limb_t));
    for (int i = 0; i < NUM_LIMBS; ++i) {
        std::cout << std::setfill('0') << std::setw(2 * sizeof(mp_limb_t)) << limbs[i];
        if (i < NUM_LIMBS - 1) {
            std::cout << "|";
        }
    }
    std::cout << "\n";
}

int main() {
    { // GMP
        mpz_t num;
        mpz_init2(num, PRECISION);
        mpz_set_ui(num, 0xAAAABBBBCCCCDDDD);
        PrintLimbs(num->_mp_d);
        mpz_clear(num);
    }
    { // MPFR
        mpfr_t num;
        mpfr_init2(num, PRECISION);
        mpfr_set_ui(num, 0xAAAABBBBCCCCDDDD, MPFR_RNDN);
        PrintLimbs(num->_mpfr_d);
        mpfr_clear(num);
    }
    return 0;
}
Three things matter for the byte representation:
1. The limb size depends on your machine and the chosen ABI. The effective size is also affected by the optional presence of nails (an experimental feature, so it is unlikely that limbs have nails); MPFR does not support nails.
2. The limb representation in memory follows the endianness of the machine.
3. Limbs are stored least significant limb first (i.e., in little-endian limb order).
Note that, as a consequence of the last two points, on the same big-endian machine the byte representation of the array will depend on the limb size.
Concerning the size of the array of limbs, it depends on the type. For instance, with the mpn layer of GMP, it is entirely handled by the user.
For MPFR, the size is deduced from the precision of the mpfr_t object; if the precision is not a multiple of the limb bit size, the trailing bits are always set to 0. Note also that more memory may be allocated than is actually used; this must not be confused with the size of the array. You can ignore this, as the unused data always come after the actual array of limbs.
EDIT concerning the rationale: manipulating limbs instead of bytes is done for speed. I suppose little-endian order was chosen to represent the array of limbs for two reasons. First, it makes the basic operations (addition, subtraction, multiplication) easier to implement and potentially faster. Second, it is much better for implementing arithmetic modulo 2^K, in particular when K may change.
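If the goal is a limb-size-independent serialization, GMP's mpz_export/mpz_import functions already produce a plain byte string in a caller-chosen order. A minimal sketch (my addition, not from the original answer; it assumes a platform where unsigned long is 64 bits so the literal fits):

#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpz_t num;
    mpz_init_set_ui(num, 0xAAAABBBBCCCCDDDDUL);

    /* Export as 1-byte words, most significant word first: the resulting
     * byte string does not depend on the limb size GMP was built with. */
    unsigned char buf[16] = {0};
    size_t count = 0;
    mpz_export(buf, &count, 1 /* order: MS word first */, 1 /* word size: 1 byte */,
               0 /* endian: irrelevant for 1-byte words */, 0 /* nails */, num);

    for (size_t i = 0; i < count; ++i)
        printf("%02x", buf[i]);
    printf("\n");   /* prints aaaabbbbccccdddd */

    mpz_clear(num);
    return 0;
}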
It finally clicked for me. The limb size does not affect the in-memory representation.
The data in GMP/MPFR is stored consistently in little-endian byte order, even when read as a string of bytes across limb boundaries. What changes is how that data is presented when it is read back.
The seemingly inconsistent output when printing the limbs comes from how words are interpreted when they are loaded from memory: a loaded word is printed most significant digit first, so the little-endian bytes in memory are reinterpreted in most-significant-first order, one word at a time.
I've modified the example below to show that it is in fact the word size with which we reinterpret memory that determines how the content is printed; the output is the same whether 32-bit or 64-bit limbs are used:
#define __GMP_SHORT_LIMB
#include <gmp.h>
#include <mpfr.h>

#include <cstdint>
#include <iomanip>
#include <iostream>

constexpr int PRECISION = 128;

template <typename InterpretAs>
void PrintLimbs(mp_limb_t const *const limbs) {
    constexpr int LIMB_BITS = 8 * sizeof(InterpretAs);
    constexpr int NUM_LIMBS = PRECISION / LIMB_BITS;
    std::cout << LIMB_BITS << "-bit: ";
    for (int i = 0; i < NUM_LIMBS; ++i) {
        const auto limb = reinterpret_cast<InterpretAs const *>(limbs)[i];
        for (int b = 0; b < LIMB_BITS; ++b) {
            if (b > 0 && b % 16 == 0) {
                std::cout << " ";
            }
            uint64_t bit = (limb >> (LIMB_BITS - 1 - b)) & 0x1;
            std::cout << bit;
        }
        if (i < NUM_LIMBS - 1) {
            std::cout << "|";
        }
    }
    std::cout << "\n";
}

int main() {
    uint64_t literal = 0b1111000000000000000000000000000000000000000000000000000000001001;
    { // GMP
        mpz_t num;
        mpz_init2(num, PRECISION);
        mpz_set_ui(num, literal);
        std::cout << "GMP where limbs are interpreted as:\n";
        PrintLimbs<uint64_t>(num->_mp_d);
        PrintLimbs<uint32_t>(num->_mp_d);
        PrintLimbs<uint16_t>(num->_mp_d);
        mpz_clear(num);
    }
    { // MPFR
        mpfr_t num;
        mpfr_init2(num, PRECISION);
        mpfr_set_ui(num, literal, MPFR_RNDN);
        std::cout << "MPFR where limbs are interpreted as:\n";
        PrintLimbs<uint64_t>(num->_mpfr_d);
        PrintLimbs<uint32_t>(num->_mpfr_d);
        PrintLimbs<uint16_t>(num->_mpfr_d);
        mpfr_clear(num);
    }
    return 0;
}
This prints (regardless of limb size):
GMP where limbs are interpreted as:
64-bit: 1111000000000000 0000000000000000 0000000000000000 0000000000001001|0000000000000000 0000000000000000 0000000000000000 0000000000000000
32-bit: 0000000000000000 0000000000001001|1111000000000000 0000000000000000|0000000000000000 0000000000000000|0000000000000000 0000000000000000
16-bit: 0000000000001001|0000000000000000|0000000000000000|1111000000000000|0000000000000000|0000000000000000|0000000000000000|0000000000000000
MPFR where limbs are interpreted as:
64-bit: 0000000000000000 0000000000000000 0000000000000000 0000000000000000|1111000000000000 0000000000000000 0000000000000000 0000000000001001
32-bit: 0000000000000000 0000000000000000|0000000000000000 0000000000000000|0000000000000000 0000000000001001|1111000000000000 0000000000000000
16-bit: 0000000000000000|0000000000000000|0000000000000000|0000000000000000|0000000000001001|0000000000000000|0000000000000000|1111000000000000
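As a further check (my addition, not part of the original answer), dumping the limb array byte by byte on a little-endian machine yields the same byte string whatever the limb size, since a single byte has no internal endianness:

#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpz_t num;
    mpz_init_set_ui(num, 0xAAAABBBBCCCCDDDDUL);

    /* Dump the limb array byte by byte; on a little-endian machine this
     * prints dd dd cc cc bb bb aa aa regardless of the limb size. */
    const unsigned char *bytes = (const unsigned char *)num->_mp_d;
    size_t nbytes = mpz_size(num) * sizeof(mp_limb_t);
    for (size_t i = 0; i < nbytes; ++i)
        printf("%02x ", bytes[i]);
    printf("\n");

    mpz_clear(num);
    return 0;
}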

ARM: saturate signed int to unsigned byte

Saturating instructions saturate unsigned to unsigned or signed to signed int.
What's the best way to saturate signed 16-bit ints to unsigned byte?
In short, here's the logic
#include <stdint.h>

uint8_t usat8(uint8_t u8, int16_t s16)
{
    s16 += u8;                  /* the sum fits in int16_t since abs(s16) < 1000 */
    if (s16 <= 0) {
        return 0;
    } else if (s16 >= 255) {
        return 255;
    } else {
        return (uint8_t)s16;
    }
}

void add_row(uint8_t *dst, uint8_t *u8, int16_t *s16)
{
    for (int i = 0; i < XXX; ++i)
    {
        dst[i] = usat8(u8[i], s16[i]);
    }
}
Values of s16 are usually not far outside the [0, 255] range; e.g., it's safe to assume that abs(s16[x]) < 1000.
EDIT: I just realized that USAT16 actually saturates a signed 16-bit int to an unsigned range, so a simple USAT16 is the solution to the problem.
After 5 mins of thinking I have this idea (pseudo arm-asm):
sadd16 sum, s16, u8 # do two additions in parallel
orr signs, 0x1001, sum, lsr #15 # extract signs of the two 16 bit results
usat16 sum, sum, #8 # saturate both of the 16-bit sums to the unsigned 8-bit range
uadd16 sum, sum, signs
This way, if the sign bit was set for either of the sums, the resulting sum becomes 256, i.e. 0x100, and when the data is written back the shifted-out 0x1 is discarded.
Any comments? Does that seem like the optimal approach, or is there a better alternative?
PS: This is for an ARMv6 device, no NEON or ARMv6T2.
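For reference, a minimal per-element sketch (my addition, assuming a GCC-style compiler targeting ARMv6): the USAT instruction saturates a signed value to an unsigned n-bit range, so the scalar logic above collapses to a single instruction per element; the packed SADD16/USAT16 variant from the pseudo-asm above would handle two elements per iteration. The names usat8_asm and add_row_usat and the bound n are mine.

#include <stdint.h>

/* Saturate a signed 32-bit sum to [0, 255] with a single ARMv6 USAT. */
static inline uint8_t usat8_asm(int32_t x)
{
    uint32_t r;
    __asm__("usat %0, #8, %1" : "=r"(r) : "r"(x));
    return (uint8_t)r;
}

void add_row_usat(uint8_t *dst, const uint8_t *u8, const int16_t *s16, int n)
{
    for (int i = 0; i < n; ++i)
        dst[i] = usat8_asm((int32_t)u8[i] + s16[i]);
}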

NSSwapInt from byte array

I'm trying to implement a function that reads, from a byte array (a char* in my code), a 32-bit int stored with different endianness. It was suggested that I use NSSwapInt, but I'm clueless about how to go about it. Could anyone show me a snippet?
Thanks in advance!
Here's a short example:
unsigned char bytes[] = { 0x00, 0x00, 0x01, 0x02 };
int intData = *((int *)bytes);
int reverseData = NSSwapInt(intData);
NSLog(@"integer:%d", intData);
NSLog(@"bytes:%08x", intData);
NSLog(@"reverse integer: %d", reverseData);
NSLog(@"reverse bytes: %08x", reverseData);
The output will be:
integer:33619968
bytes:02010000
reverse integer: 258
reverse bytes: 00000102
As mentioned in the docs,
Swaps the bytes of inv and returns the resulting value. Bytes are swapped from each low-order position to the corresponding high-order position and vice versa. For example, if the bytes of inv are numbered from 1 to 4, this function swaps bytes 1 and 4, and bytes 2 and 3.
There are also NSSwapShort and NSSwapLongLong.
There is a potential for a data misalignment exception if you solve this problem using integer pointers: some architectures require 32-bit values to be at addresses that are multiples of 2 or 4 bytes. The ARM architecture used by the iPhone et al. may throw an exception in this case, but I have no iOS device handy to test whether it does.
A safe way to do this which will never throw any misalignment exceptions is to assemble the integer directly:
int32_t bytes2int(unsigned char *b)
{
    int32_t i;
    i = b[0] | b[1] << 8 | b[2] << 16 | b[3] << 24; // little-endian, or
    i = b[3] | b[2] << 8 | b[1] << 16 | b[0] << 24; // big-endian (pick one)
    return i;
}
You can pass this any byte pointer and it will assemble 4 bytes to make a 32-bit int. You can extend the idea to 64-bit integers if required.
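For example, the 64-bit extension mentioned above might look like this (a sketch of mine, shown for the big-endian byte-order case; the name bytes2int64 is not from the answer):

#include <stdio.h>
#include <stdint.h>

/* Assemble 8 bytes, most significant byte first, into a 64-bit integer. */
int64_t bytes2int64(const unsigned char *b)
{
    uint64_t v = 0;
    for (int i = 0; i < 8; ++i)
        v = (v << 8) | b[i];   /* shift in one byte at a time */
    return (int64_t)v;
}

int main(void)
{
    unsigned char buf[8] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x02 };
    printf("%lld\n", (long long)bytes2int64(buf));   /* prints 258 */
    return 0;
}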

Trying to test if a bit is set in a char value

I have an array of chars pulled out of an NSData object with getBytes:range:.
I want to test if a particular bit is set. I would assume I would do it with a bitwise AND but it doesn't seem to be working for me.
I have the following:
unsigned char firstBytes[3];
[data getBytes:&firstBytes range:range];
int bitIsSet = firstBytes[0] & 00100000;
if (bitIsSet) {
    // Do Something
}
The value of firstBytes[0] is 48 (or '0' as an ASCII character). However, bitIsSet always seems to be 0. I imagine I am just doing something silly here; I am new to working at the bit level, so maybe my logic is wrong.
A leading 0 on a numeric literal means it is expressed in octal.
00100000 therefore means 32768 in decimal, or 10000000 00000000 in binary.
Try
int bitIsSet = firstBytes[0] & 32;
or
int bitIsSet = firstBytes[0] & 0x20;
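If the intent was to test a specific bit position, writing the mask as a shift avoids the octal/decimal confusion entirely; a small sketch of mine:

#include <stdio.h>

int main(void)
{
    unsigned char byte = 48;            /* ASCII '0', binary 0011 0000 */

    /* Test bit 5 (value 32, 0x20) with an explicit shift instead of a literal. */
    int bitIsSet = (byte >> 5) & 1;     /* equivalently: byte & (1 << 5) */
    printf("bit 5 is %s\n", bitIsSet ? "set" : "clear");

    return 0;
}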

Reading Binary File

So I am trying to read a filesystem disk image, which has been provided.
What I want to do is read the value at byte 1044 of the filesystem. What I am currently doing is the following:
if (fp = fopen("filesysFile-full", "r")) {
    fseek(fp, 1044, SEEK_SET);  // seek to byte 1044
    int check[sizeof(char)*4];  // creates a buffer array 4 bytes long
    fread(check, 1, 4, fp);     // reads 4 bytes from the file
    printf("%d", check);        // prints
    int close = fclose(fp);
    if (close == 0) {
        printf("Closed");
    }
}
The value that check should print is 1. However, I am getting negative values that keep changing every time I run the program. I don't understand what I am doing wrong. Am I taking the right approach to reading bytes from the disk and printing them?
What I basically want to do is read bytes from the disk and look at the values at certain offsets. Those bytes are fields that will help me understand the structure/format of the disk.
Any help would be appreciated.
Thank you.
This line:
int check[sizeof(char)*4];
allocates an array of 4 ints.
The type of check is therefore int*, so this line:
printf("%d",check);
prints the address of the array.
What you should do is declare it as a single int:
int check;
and then fread into it:
fread(&check, 1, sizeof(int), fp);
(This code, incidentally, assumes that int is 4 bytes.)
int check[sizeof(char)*4]; //creates a buffer array 4 bytes long
This is incorrect. You are creating an array of four integers, which are typically 32 bits each, and then when you printf("%d",check) you are printing the address of that array, which will probably change every time you run the program. I think what you want is this:
if (fp = fopen("filesysFile-full", "r")) {
    fseek(fp, 1044, SEEK_SET);  // seek to byte 1044
    int check;                  // a buffer the size of one integer
    fread(&check, 1, sizeof(int), fp);  // reads an integer (presumably 1) from the file
    printf("%d", check);        // prints
    int close = fclose(fp);
    if (close == 0) {
        printf("Closed");
    }
}
Note that instead of declaring an array of integers, you are declaring just one. Also note the change from fread(check, ...) to fread(&check, ...). The first parameter to fread is the address of the buffer (in this case, a single integer) into which you want to read the data.
Keep in mind that while integers are probably 32 bits long, this isn't guaranteed. Also, on most machines (little-endian ones) integers are stored with the least significant byte first, so you will only read 1 if the data on the disk looks like this at byte 1044:
0x01 0x00 0x00 0x00
If it is the other way around, 0x00 00 00 01, that will be read as 16777216 (0x01000000).
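If the on-disk byte order is known, a portable alternative (my addition, in the spirit of the bytes2int function shown earlier on this page; the name read_le32 is mine) is to read unsigned chars and assemble the value explicitly, which works regardless of the host's int size or endianness:

#include <stdio.h>
#include <stdint.h>

/* Read a 32-bit little-endian value stored at the given offset in the file. */
static int32_t read_le32(FILE *fp, long offset)
{
    unsigned char b[4] = {0};
    fseek(fp, offset, SEEK_SET);
    fread(b, 1, 4, fp);
    return (int32_t)((uint32_t)b[0] | (uint32_t)b[1] << 8 |
                     (uint32_t)b[2] << 16 | (uint32_t)b[3] << 24);
}

int main(void)
{
    FILE *fp = fopen("filesysFile-full", "rb");   /* "rb": binary mode */
    if (fp) {
        printf("%d\n", read_le32(fp, 1044));
        fclose(fp);
    }
    return 0;
}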
If you want to read more than one integer, you can use an array as follows:
if (fp = fopen("filesysFile-full", "r")) {
    fseek(fp, 1044, SEEK_SET);  // seek to byte 1044
    int check[10];              // a buffer of ten integers
    fread(check, sizeof(int), 10, fp);  // reads 10 integers into the array
    for (int i = 0; i < 10; i++)
        printf("%d ", check[i]);        // prints
    int close = fclose(fp);
    if (close == 0) {
        printf("Closed");
    }
}
In this case, check (with no & and no index) is a pointer to the first element of the array, which is why I've changed the fread back to fread(check, ...).
Hope this helps!