I'm working on a problem where I need to convert an integer into a special text encoding. The requirements state that I pack the int into two bytes and then clear the most significant bit of each. I'm trying to do this with a bitwise operator, but I'm unsure how to clear the most significant bit. Here is the problem and the method I'm working with so far:
PROBLEM:
For this task, you need to write a small program including a pair of functions that can convert an integer into a special text encoding.
The Encoding Function
This function needs to accept a signed integer in the 14-bit range [-8192..+8191] and return a 4 character string.
The encoding process is as follows:
1. Add 8192 to the raw value, so its range is translated to [0..16383]
2. Pack that value into two bytes such that the most significant bit of each is cleared
Unencoded intermediate value (as a 16-bit integer):
00HHHHHH HLLLLLLL
Encoded value:
0HHHHHHH 0LLLLLLL
3. Format the two bytes as a single 4-character hexadecimal string and return it.
Sample values:
Unencoded (decimal) | Intermediate (decimal) | Intermediate (hex) | Encoded (hex)
0 | 8192 | 2000 | 4000
-8192 | 0 | 0000 | 0000
8191 | 16383 | 3fff | 7F7F
2048 | 10240 | 2800 | 5000
-4096 | 4096 | 1000 | 2000
My function
-(NSString *)encodeValue{
    // get the input value
    int decValue = [_inputValue.text intValue];
    char* bytes = (char*)&decValue;
    NSNumber *number = @(decValue+8192); //Add 8192 so that the number can't be negative, because we're about to lose the sign.
    u_int16_t shortNumber = [number unsignedShortValue]; //Convert the integer to an unsigned short (2 bytes) using NSNumber.
    shortNumber = shortNumber << 1; // !!!! This is what I'm doing to clear the MSB !!!!!!!
    NSLog(@"%hu", shortNumber);
    NSString *returnString = [NSString stringWithFormat:@"%x", shortNumber]; //Convert the 2 byte number to a hex string using format specifiers
    return returnString;
}
I'm using the bitwise shift operator to clear the MSB, and I get the correct answer for a couple of the values, but not every time.
If I am understanding you correctly then I believe you are after something like this:
u_int16_t number;
number = 0xFFFF;
number &= ~(1 << ((sizeof(number) * 8) - 1));
NSLog(@"%x", number); // Output will be 7fff
How it works:
sizeof(number) * 8 gives you the number of bits in the input number (e.g. 16 for a u_int16_t)
1 << (number of bits in number - 1) gives you a mask with only the MSB set (e.g. 0x8000)
~(mask) gives you the bitwise NOT of the mask (e.g. 0x7fff)
ANDing the mask with your number then clears only the MSB leaving all others as they were
You are misunderstanding your task.
You are not supposed to clear the most significant bit anywhere. You have 14 bits. You are supposed to separate these 14 bits into two groups of seven bits. And since a byte has 8 bits, storing 7 bits into a byte will leave the most significant bit cleared.
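To make that concrete, here is a minimal C sketch of the split described above (the sample value and the expected output come from the table in the question; it is only an illustration, not the poster's code):

#include <stdio.h>

int main(void) {
    int raw = 2048;                           // sample input from the table above
    unsigned value = (unsigned)(raw + 8192);  // step 1: translate into [0..16383]
    unsigned high = (value >> 7) & 0x7F;      // step 2: top 7 bits, MSB of the byte stays 0
    unsigned low  = value & 0x7F;             //         low 7 bits, MSB of the byte stays 0
    printf("%02x%02x\n", high, low);          // step 3: 4-character hex string -> "5000"
    return 0;
}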
PS. Why on earth are you using an NSNumber? If this is homework, I would fail you for the use of NSNumber alone, no matter what the rest of the code does.
PS. What is this char* bytes supposed to be good for?
PS. You are not clearing any most significant bit anywhere. You have an unsigned short containing 14 significant bits, so the two most significant bits are cleared. You shift the number to the left, so the most significant bit, which was always cleared, remains cleared, but the second most significant bit isn't. And all this has nothing to do with your task.
I have to convert an int value into a hex string.
From 4 to 0x04
From -4 to 0xFC
I use this code [NSString stringWithFormat:@"0x%02X", x], where x is an int.
With 4 I obtain 0x04, but with -4 I obtain 0xFFFFFFFC.
Where am I wrong ?
-4 of type int is indeed 0xFFFFFFFC on 32-bit systems. %02X will pad single-digit numbers with a zero, but it will not truncate a longer number to two digits.
If you are interested in printing only the last eight bits, you need to mask the number yourself:
[NSString stringWithFormat:@"0x%02X", (x & 0xFF)];
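For reference, here is a minimal plain-C sketch of the same masking idea (printf's %02X behaves like the format specifier above; the sample inputs are the ones from the question):

#include <stdio.h>

int main(void) {
    int values[] = { 4, -4 };
    for (int i = 0; i < 2; i++) {
        // mask to the low byte before formatting: 4 -> 0x04, -4 -> 0xFC
        printf("0x%02X\n", values[i] & 0xFF);
    }
    return 0;
}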
I was following "A Tour of Go" on http://tour.golang.org.
Table 15 has some code that I cannot understand. It defines two constants with the following syntax:
const (
    Big   = 1 << 100
    Small = Big >> 99
)
And it's not at all clear to me what it means. I tried modifying the code and running it with different values to observe the changes, but I was not able to understand what is going on there.
Then, it uses that operator again on table 24. It defines a variable with the following syntax:
MaxInt uint64 = 1<<64 - 1
And when it prints the variable, it prints:
uint64(18446744073709551615)
Where uint64 is the type. But I can't understand where 18446744073709551615 comes from.
They are Go's bitwise shift operators.
Here's a good explanation of how they work for C (they work in the same way in several languages).
Basically, 1<<64 - 1 corresponds to 2^64 - 1 = 18446744073709551615.
Think of it this way: in decimal, if you start from 001 (which is 10^0) and shift the 1 to the left, you end up with 010, which is 10^1. If you shift it again you end up with 100, which is 10^2. So shifting to the left is equivalent to multiplying by 10 as many times as you shift.
In binary it's the same thing, but in base 2, so 1<<64 means multiplying by 2 sixty-four times (i.e. 2^64).
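As a quick illustration of that doubling behaviour (written in C here, since the shift operators work the same way across the C family of languages):

#include <stdio.h>

int main(void) {
    for (int i = 0; i < 5; i++) {
        // each additional left shift doubles the value: 1, 2, 4, 8, 16
        printf("1 << %d = %u\n", i, 1u << i);
    }
    return 0;
}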
That's the same as in all languages of the C family: a bit shift.
See http://en.wikipedia.org/wiki/Bitwise_operation#Bit_shifts
This operation is commonly used to multiply or divide an unsigned integer by powers of 2:
b := a >> 1 // divides by 2
1<<100 is simply 2^100 (that's Big).
1<<64-1 is 2⁶⁴-1, which is the biggest integer you can represent in 64 bits (by the way, you can't represent 1<<64 as a 64-bit int; the point of table 15 is to demonstrate that Go's numeric constants can hold it anyway).
The >> and << are logical shift operations. You can see more about those here:
http://en.wikipedia.org/wiki/Logical_shift
Also, you can check all of Go's operators in the language specification on the Go website.
It's a logical shift:
every bit in the operand is simply moved a given number of bit positions, and the vacant bit-positions are filled in, usually with zeros
Go Operators:
<<   left shift    integer << unsigned integer
>>   right shift   integer >> unsigned integer
If the size of a float is 4 bytes, shouldn't it be able to hold values from -8,388,608 to 8,388,607, or somewhere around there? (I probably calculated that wrong.)
Why does f display the extra 15 at the end? The value of f (0.1) is still between -8,388,608 and 8,388,607, right?
int main(int argc, const char * argv[])
{
    @autoreleasepool {
        float f = .1;
        printf("%lu", sizeof(float));
        printf("%.10f", f);
    }
    return 0;
}
2012-08-28 20:53:38.537 prog[841:403] 4
2012-08-28 20:53:38.539 prog[841:403] 0.1000000015
The values -8,388,608 ... 8,388,607 lead me to believe that you think floats use two's complement, which they don't. In any case, the range you have indicates 24 bits, not the 32 that you'd get from four bytes.
Floats in C use IEEE754 representation, which basically has three parts:
the sign.
the exponent (sort of a scale).
the fraction (actual digits of the number).
You basically get a certain amount of precision (such as 7 decimal digits) and the exponent dictates whether you use those for a number like 0.000000001234567 or 123456700000.
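If it helps to see those three parts, here is a small C sketch that pulls them out of a float's bit pattern (assuming the usual 32-bit single-precision layout of 1 sign bit, 8 exponent bits and 23 fraction bits):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float f = 0.1f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);  // copy out the raw IEEE754 bit pattern

    printf("sign     = %u\n", (unsigned)(bits >> 31));          // 0
    printf("exponent = %u\n", (unsigned)((bits >> 23) & 0xFF)); // 123 (biased)
    printf("fraction = 0x%06X\n", (unsigned)(bits & 0x7FFFFF)); // 0x4CCCCD
    return 0;
}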
The reason you get those extra digits at the end of your 0.1 is because that number cannot be represented exactly in IEEE754. See this answer for a treatise explaining why that is the case.
Numbers are only representable exactly if they can be built by adding inverse powers of two (like 1/2, 1/16, 1/65536 and so on) within the number of bits of precision (i.e., the number of bits in the fraction), subject to scaling.
So, for example, a number like 0.5 is okay since it's 1/2. Similarly 0.8125 is okay since that can be built from 1/2, 1/4 and 1/16.
There is no way (at least within 23 bits of precision) that you can build 0.1 from inverse powers of two, so it gives you the nearest match.
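A quick way to see the difference between a value that is a sum of inverse powers of two and one that isn't (0.8125 and 0.1 being the examples used above):

#include <stdio.h>

int main(void) {
    printf("%.10f\n", 0.8125f); // 0.8125000000 -- 1/2 + 1/4 + 1/16, stored exactly
    printf("%.10f\n", 0.1f);    // 0.1000000015 -- no exact binary form, nearest float used
    return 0;
}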
My question is regarding structure padding. Can anyone tell me the logic behind structure padding?
Example:
struct Node {
    char c1;
    short s1;
    char c2;
    int i1;
};
Can anyone tell me how padding will be applied to this structure?
Assumption: an int takes 4 bytes.
Waiting for the answer.
How padding works depends entirely on the implementation.
For implementations where you have a two-byte short and four-byte int and types have to be aligned to a multiple of their size, you will have:
Offset  Var   Size
------  ----  ----
0       c1    1
1       ??    1
2       s1    2
4       c2    1
5       ??    3
8       i1    4
12      next
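One way to check what a particular implementation actually does is to print the offsets with offsetof; the values in the comment are just what the layout above would give and are not guaranteed:

#include <stdio.h>
#include <stddef.h>

struct Node {
    char  c1;
    short s1;
    char  c2;
    int   i1;
};

int main(void) {
    printf("c1 at %zu, s1 at %zu, c2 at %zu, i1 at %zu, sizeof %zu\n",
           offsetof(struct Node, c1), offsetof(struct Node, s1),
           offsetof(struct Node, c2), offsetof(struct Node, i1),
           sizeof(struct Node));  // e.g. 0, 2, 4, 8, 12 with the layout above
    return 0;
}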
An implementation is free to insert padding between fields of a structure and following the last field (but not before the first field) for any reason whatsoever. The ability to pad after a structure is important for aligning subsequent elements in an array. For example:
struct { int i1; char c1; };
may give you:
Offset  Var   Size
------  ----  ----
0       i1    4
4       c1    1
5       ??    3
8       next
Padding is usually done because either aligned data works faster, or misaligned data is illegal (some CPU architectures disallow misaligned access).
There is no simple answer to this, except "It depends".
It could be as little as 8 bytes, assuming two-byte shorts, or it could take 12 bytes, or it could take 42 bytes on a suitably bizarre implementation. It depends on at least the underlying architecture, the compiler and the compiler flags. Check your tools' manuals for information.
Inside a struct, each member's offset in memory is based on its size and alignment. Note that this is implementation-specific.
E.g. if char takes 1 byte, short takes 2 bytes and int takes 4 bytes:
struct Node {
    char c1;   // 1 byte
    // 1 byte padding (next member requires 2 byte alignment)
    short s1;  // 2 bytes
    char c2;   // 1 byte
    // 3 bytes padding (since next member requires 4 byte alignment)
    int i1;    // 4 bytes
};
This also depends on your compiler settings and architecture, and can also be modified.
If you packed this structure properly (by rearranging the order of members, in this case switching c2 and s1), you could fit it into 8 bytes instead of 12.
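A small sketch of that rearrangement (the sizes in the comments assume the same 1/2/4-byte char/short/int as above; exact results remain implementation-specific):

#include <stdio.h>

struct Padded {   // original member order
    char c1; short s1; char c2; int i1;
};

struct Packed {   // c2 and s1 swapped, which removes most of the padding
    char c1; char c2; short s1; int i1;
};

int main(void) {
    printf("original: %zu bytes\n", sizeof(struct Padded)); // typically 12
    printf("packed:   %zu bytes\n", sizeof(struct Packed)); // typically 8
    return 0;
}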
The reason for alignment enforcement is that the hardware can do certain operations faster with data that have a natural alignment; otherwise it would have to perform some bitmasking, shifting and ORing to construct the data before operating on it.
Please excuse my lack of knowledge here, but could someone let me know how I can get the first 16 bits of a 32-bit hex number?
That depends on what you mean by "first". Given a number, such as 0xdeadbeef, would you consider 0xdead or 0xbeef to be "first"?
If the former, divide the number by 65536 (as an integer). If the latter, compute the remainder modulo 65536.
This is of course also doable with binary operators such as shift/and; I'm just not sure how to express that in your desired language. I'm sure there will be other answers with more precise details.
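Expressed in C (used here purely for illustration, since the question doesn't name a language), the two arithmetic approaches and their shift/mask equivalents look like this:

#include <stdio.h>

int main(void) {
    unsigned long n = 0xDEADBEEF;  // the 32-bit example value mentioned above

    // most significant half: divide by 65536, or equivalently shift right by 16
    printf("%04lx %04lx\n", n / 65536, n >> 16);    // dead dead

    // least significant half: modulus 65536, or equivalently mask with 0xFFFF
    printf("%04lx %04lx\n", n % 65536, n & 0xFFFF); // beef beef
    return 0;
}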
Assuming that by "first" you mean the least significant part?
If My32BitNumber is an Integer:
Dim f16 As Integer = &HFFFF And My32BitNumber
If you're actually looking at a 32-bit number, e.g. "what are the first 16 bits of DEADBEEF?", that would be the last four hex digits: BEEF.
& it with 0xffff.
int input = 0xabcd;
int first2Bytes = input & 0xffff;
Dim i As Integer = &HDEADBEEF
Dim s16 As UShort
s16 = i And &HFFFF 'BEEF
'or
s16 = (i >> 16) And &HFFFF 'DEAD
This will get you the 32 bit number as a four byte array:
Dim bytes As Byte() = BitConverter.GetBytes(number)
To get the first two bytes as a signed 16 bit number:
Dim first As Int16 = BitConverter.ToInt16(bytes, 0)
To get the first two bytes as an unsigned 16 bit number:
Dim first As UInt16 = BitConverter.ToUInt16(bytes, 0)
This is of course a bit slower than using bit shifts or division, but it handles the sign bit (the most significant bit) correctly, which you may have problems with using bit shift or division.
You can also get the first two bytes as a 16 bit unsigned number and assign it to an Integer:
Dim first As Integer = BitConverter.ToUInt16(bytes, 0)
(Getting a signed 16-bit number and assigning it to an Integer means that the sign bit would also be copied to the top 16 bits of the Integer, which is probably not desired.)
If you want the other two bytes instead, just change the index in the ToUInt16/ToInt16 call from 0 to 2. (Which half of the number sits at index 0 depends on the platform's byte order; check BitConverter.IsLittleEndian. On the usual little-endian platforms, index 0 holds the least significant 16 bits.)