My question is regarding structure padding. Can anyone tell me the logic behind structure padding?
Example:
struct Node {
    char c1;
    short s1;
    char c2;
    int i1;
};
Can anyone tell me how padding will be applied to this structure?
Assumption: int takes 4 bytes.
Waiting for the answer.
How padding works depends entirely on the implementation.
For implementations where you have a two-byte short and four-byte int and types have to be aligned to a multiple of their size, you will have:
Offset  Var   Size
------  ----  ----
     0  c1       1
     1  ??       1
     2  s1       2
     4  c2       1
     5  ??       3
     8  i1       4
    12  next
An implementation is free to insert padding between fields of a structure and following the last field (but not before the first field) for any reason whatsoever. The ability to pad after a structure is important for aligning subsequent elements in an array. For example:
struct { int i1; char c1; };
may give you:
Offset  Var   Size
------  ----  ----
     0  i1       4
     4  c1       1
     5  ??       3
     8  next
Padding is usually done because either aligned data works faster, or misaligned data is illegal (some CPU architectures disallow misaligned access).
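If you want to see what your implementation actually chooses, a small sketch using offsetof will print the offsets and total size (this just reports whatever your compiler decided; the numbers above are only one possibility):

#include <stddef.h>
#include <stdio.h>

struct Node {
    char  c1;
    short s1;
    char  c2;
    int   i1;
};

int main(void) {
    /* Offsets and total size are implementation-defined; this only reports them. */
    printf("c1 at %zu\n", offsetof(struct Node, c1));
    printf("s1 at %zu\n", offsetof(struct Node, s1));
    printf("c2 at %zu\n", offsetof(struct Node, c2));
    printf("i1 at %zu\n", offsetof(struct Node, i1));
    printf("sizeof(struct Node) = %zu\n", sizeof(struct Node));
    return 0;
}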
There is no simple answer to this, except "It depends".
It could be as little as 8 bytes, assuming two-byte shorts, or it could take 12 bytes, or it could take 42 bytes on a suitably bizarre implementation. It depends on at least the underlying architecture, the compiler, and the compiler flags. Check your tool's manual for information.
Inside a struct, each member's offset in memory is determined by its size and alignment requirement. Note that this is implementation-specific.
E.g. if char takes 1 byte, short takes 2 bytes and int takes 4 bytes:
struct Node {
    char c1;   // 1 byte
               // 1 byte padding (next member requires 2-byte alignment)
    short s1;  // 2 bytes
    char c2;   // 1 byte
               // 3 bytes padding (next member requires 4-byte alignment)
    int i1;    // 4 bytes
};
This also depends on your compiler settings and architecture, and can also be modified.
If you packed this structure properly (by rearranging the order of members), you could fit it into 8 bytes, not 12 bytes (by switching c2 with s1).
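For instance, a rearranged version (a sketch, still assuming a 2-byte short and a 4-byte int) needs no padding at all:

struct Node {
    char  c1;   /* offset 0 */
    char  c2;   /* offset 1 */
    short s1;   /* offset 2 */
    int   i1;   /* offset 4 */
};              /* total size: 8 bytes on such an implementation */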
The reason for alignment enforcement is that the hardware can do certain operations faster with data that have a natural alignment; otherwise it would have to perform some bitmasking, shifting and ORing to construct the data before operating on it.
Related
What is the difference between just giving 1 and giving 1'b1 in Verilog code?
The 1 is 32 bits wide and is thus equivalent to 32'b00000000_00000000_00000000_00000001.
The 1'b1 is one bit wide.
There are several places where you should be aware of the difference in length, but the one most likely to catch you out is in concatenations ({}).
reg [ 7:0] A;
reg [ 8:0] B;
assign A = 8'b10100101;
assign B = {1'b1,A}; // B is 9'b110100101
assign B = {1,A}; // B is 9'b110100101
assign B = {A,1'b1}; // B is 9'b101001011
assign B = {A,1}; // B is 9'b000000001 !!!!
So, what's the difference between, say,
logic [7:0] count;
...
count <= count + 1'b1;
and
logic [7:0] count;
...
count <= count + 1;
Not a lot. In the first case your simulator/synthesiser will do this:
i) expand the 1'b1 to 8'b1 (because count is 8 bits wide)
ii) do all the maths using 8 bits (because now everything is 8 bits wide).
In the second case your simulator/synthesiser will do this:
i) do all the maths using 32 bits (because 1 is 32 bits wide)
ii) truncate the 32-bit result to 8 bits wide (because count is 8 bits wide)
The behaviour will be the same. However, that is not always the case. This:
count <= (count * 8'd255) >> 8;
and this:
count <= (count * 255) >> 8;
will behave differently. In the first case, 8 bits will be used for the multiplication (the width of the 8 in the >> 8 is irrelevant) and so the multiplication will overflow; in the second case, 32 bits will be used for the multiplication and so everything will be fine.
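If a software analogy helps, the same width effect can be sketched in C, using explicit truncation to stand in for Verilog's context-determined widths (this is only an analogy, not Verilog semantics):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t count = 3;

    /* "8-bit" arithmetic: force the product into 8 bits before shifting,
       like the first Verilog case, so the multiply overflows. */
    uint8_t narrow = (uint8_t)(count * 255) >> 8;     /* 765 truncated to 253, then >> 8 -> 0 */

    /* Full-width arithmetic: the product is computed at int width,
       like the second Verilog case, so the shift sees the real value. */
    uint8_t wide = (uint8_t)((count * 255) >> 8);     /* 765 >> 8 -> 2 */

    printf("narrow=%u wide=%u\n", (unsigned)narrow, (unsigned)wide);
    return 0;
}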
1'b1 is a binary, unsigned, 1-bit wide integral value. In the original Verilog specification, 1 had the same type as integer. It was signed, but its width was unspecified. A tool could choose the width based on its host implementation of the int type.
Since Verilog 2001 and SystemVerilog 2005, the width of integer and int has been fixed at 32 bits. However, because of this originally unspecified width, and the fact that so many people write 0 or 1 without realizing that it is now 32 bits wide, the standard does not allow you to use an unbased literal inside a concatenation: {A,1} is illegal.
I'm working on a problem where I need to convert an integer into a special text encoding. The requirements state that I pack the int into bytes and then clear the most significant bit. I am using bitwise operators, but I am unsure of how to clear the most significant bit. Here is the problem and the method I'm working with so far:
PROBLEM:
For this task, you need to write a small program including a pair of functions that can
convert an integer into a special text encoding
The Encoding Function
This function needs to accept a signed integer in the 14-bit range [-8192..+8191] and return a 4 character string.
The encoding process is as follows:
1. Add 8192 to the raw value, so its range is translated to [0..16383]
2. Pack that value into two bytes such that the most significant bit of each is cleared
Unencoded intermediate value (as a 16-bit integer):
00HHHHHH HLLLLLLL
Encoded value:
0HHHHHHH 0LLLLLLL
3. Format the two bytes as a single 4-character hexadecimal string and return it.
Sample values:
Unencoded (decimal) | Intermediate (decimal) | Intermediate (hex) | Encoded (hex)
0 | 8192 | 2000 | 4000
-8192 | 0 | 0000 | 0000
8191 | 16383 | 3fff | 7F7F
2048 | 10240 | 2800 | 5000
-4096 | 4096 | 1000 | 2000
My function
-(NSString *)encodeValue{
    // get the input value
    int decValue = [_inputValue.text intValue];
    char* bytes = (char*)&decValue;
    NSNumber *number = @(decValue+8192); //Add 8192 so that the number can't be negative, because we're about to lose the sign.
    u_int16_t shortNumber = [number unsignedShortValue]; //Convert the integer to an unsigned short (2 bytes) using NSNumber.
    shortNumber = shortNumber << 1; // !!!! This is what I'm doing to clear the MSB !!!!!!!
    NSLog(@"%hu", shortNumber);
    NSString *returnString = [NSString stringWithFormat:@"%x", shortNumber]; //Convert the 2 byte number to a hex string using format specifiers
    return returnString;
}
I'm using the bitwise shift operator to clear the MSB, and I get the correct answer for a couple of the values, but not every time.
If I am understanding you correctly then I believe you are after something like this:
u_int16_t number;
number = 0xFFFF;
number &= ~(1 << ((sizeof(number) * 8) - 1));
NSLog(#"%x", number); // Output will be 7fff
How it works:
sizeof(number) * 8 gives you the number of bits in the input number (e.g. 16 for a u_int16_t)
1 << (number of bits in number - 1) gives you a mask with only the MSB set (e.g. 0x8000)
~(mask) gives you the bitwise NOT of the mask (e.g. 0x7fff)
ANDing the mask with your number then clears only the MSB leaving all others as they were
You are misunderstanding your task.
You are not supposed to clear the most significant bit anywhere. You have 14 bits. You are supposed to separate these 14 bits into two groups of seven bits. And since a byte has 8 bits, storing 7 bits into a byte will leave the most significant bit cleared.
PS. Why on earth are you using an NSNumber? If this is homework, I would fail you for the use of NSNumber alone, no matter what the rest of the code does.
PS. What is this char* bytes supposed to be good for?
PS. You are not clearing any most significant bit anywhere. You have an unsigned short containing 14 significant bits, so the two most significant bits are cleared. You shift the number to the left, so the most significant bit, which was always cleared, remains cleared, but the second most significant bit isn't. And all this has nothing to do with your task.
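To make the intended packing concrete, here is a minimal C sketch (encode14 is a hypothetical helper name, not part of the asker's code): add 8192, split the result into two 7-bit groups, and format them as four hex characters.

#include <stdio.h>

/* Hypothetical helper: encode a value in [-8192..+8191] as a 4-character hex string. */
static void encode14(int value, char out[5]) {
    unsigned v  = (unsigned)(value + 8192);   /* translate to [0..16383] */
    unsigned hi = (v >> 7) & 0x7F;            /* upper 7 bits: MSB of the byte stays clear */
    unsigned lo = v & 0x7F;                   /* lower 7 bits: MSB of the byte stays clear */
    snprintf(out, 5, "%02X%02X", hi, lo);
}

int main(void) {
    char buf[5];
    encode14(8191, buf);      /* 8191 + 8192 = 16383 -> "7F7F", matching the sample table */
    printf("%s\n", buf);
    return 0;
}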
I was following 'A Tour of Go' on http://tour.golang.org.
Table 15 has some code that I cannot understand. It defines two constants with the following syntax:
const (
    Big   = 1 << 100
    Small = Big >> 99
)
And it's not at all clear to me what it means. I tried to modify the code and run it with different values to observe the change, but I was not able to understand what is going on there.
Then, it uses that operator again on table 24. It defines a variable with the following syntax:
MaxInt uint64 = 1<<64 - 1
And when it prints the variable, it prints:
uint64(18446744073709551615)
Where uint64 is the type. But I can't understand where 18446744073709551615 comes from.
They are Go's bitwise shift operators.
Here's a good explanation of how they work for C (they work in the same way in several languages).
Basically, 1<<64 - 1 corresponds to 2^64 - 1 = 18446744073709551615.
Think of it this way. In decimal if you start from 001 (which is 10^0) and then shift the 1 to the left, you end up with 010, which is 10^1. If you shift it again you end with 100, which is 10^2. So shifting to the left is equivalent to multiplying by 10 as many times as the times you shift.
In binary it's the same thing, but in base 2, so 1<<64 means multiplying by 2 64 times (i.e. 2 ^ 64).
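In C-family syntax the same idea looks like this (a tiny sketch; note that, unlike Go's untyped constants, shifting a 64-bit C integer by 64 is undefined, so the all-ones value is built differently):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    printf("%u\n", 1u << 4);            /* 16, i.e. 2^4: each left shift multiplies by 2 */
    printf("%u\n", 16u >> 1);           /* 8: a right shift divides an unsigned value by 2 */

    uint64_t maxU64 = ~(uint64_t)0;     /* all 64 bits set: 2^64 - 1 */
    printf("%" PRIu64 "\n", maxU64);    /* 18446744073709551615 */
    return 0;
}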
That's the same as in all languages of the C family: a bit shift.
See http://en.wikipedia.org/wiki/Bitwise_operation#Bit_shifts
This operation is commonly used to multiply or divide an unsigned integer by powers of 2:
b := a >> 1 // divides by 2
1<<100 is simply 2^100 (that's Big).
1<<64-1 is 2^64 - 1, and that's the biggest integer you can represent in 64 bits (by the way, you can't represent 1<<64 as a 64-bit int; the point of table 15 is to demonstrate that Go's numeric constants can hold it anyway).
The >> and << are logical shift operations. You can see more about those here:
http://en.wikipedia.org/wiki/Logical_shift
Also, you can check all the Go operators on their webpage.
It's a logical shift:
every bit in the operand is simply moved a given number of bit
positions, and the vacant bit-positions are filled in, usually with
zeros
Go Operators:
<<   left shift     integer << unsigned integer
>>   right shift    integer >> unsigned integer
I'm using FFTs for audio processing, and I've come up with some potentially very fast ways of doing the bit reversal needed, which might be of use to others. But because of the size of my FFTs (8192), I'm trying to reduce memory usage / cache flushing due to the size of lookup tables or code, and increase performance. I've seen lots of clever bit reversal routines; they all allow you to feed them any arbitrary value and get a bit-reversed output, but FFTs don't need that flexibility since they go in a predictable sequence. First let me state what I have tried and/or figured out, since it may be the fastest to date and you can see the problem; then I'll ask the question.
1) I've written a program to generate straight-through, unlooped x86 source code that can be pasted into my FFT code, which reads an audio sample, multiplies it by a window value (that's a lookup table itself) and then just places the resulting value in its proper bit-reversed position using absolute offsets in the x86 addressing modes, like: movlps [edi+1876],xmm0. This is the absolute fastest way to do this for smaller FFT sizes. The problem is that when I write straight-through code to handle 8192 values, the code grows beyond the L1 instruction cache size and performance drops way down. Of course, in contrast, a 32K bit-reversal lookup table mixed with a 32K window table, plus other stuff, is also too big to fit in the L1 data cache, and performance drops way down, but that's the way I'm currently doing it.
2) I've found patterns in the bit-reversal sequence that can be exploited to reduce lookup table size. For example, using 4-bit numbers (0..15), the bit-reversal sequence looks like: 0,8,4,12,2,10,6,14 | 1,9,5,13,3,11,7,15. The first thing that can be seen is that the last 8 numbers are the same as the first 8 plus 1, so I can chop my LUT in half. If I look at the differences between the numbers there is more redundancy, so if I start with a zero in a register and want to add values to it to get the next bit-reversed number, they would be: +0,+8,-4,+8,-10,+8,-4,+8, and the same for the second half. As can be seen, I could have a lookup table of just 0 and -10, because the +8's and -4's always show up in a predictable way. The code would be unrolled to handle 4 values per loop: one would be a lookup table read, and the other 3 would be straight code for +8, -4, +8, before looping around again. Then a second loop could handle the 1,9,5,13,3,11,7,15 sequence. This is great, because I can now chop down my lookup table by another factor of 4. This scales up the same way for an 8192-size FFT: I can now get by with a 4K-size LUT instead of 32K. I can exploit the same pattern and double the size of my code to chop down the LUT by another half yet again, however far I want to go. But in order to eliminate the LUT altogether, I'm back to the prohibitive code size.
For large FFT sizes, I believe that this #2 solution is the absolute fastest to date, since a relatively small percentage of lookup table reads need to be done, and every algorithm I currently find on the web requires too many serial/dependency calculations which can't be vectorized.
The question is, is there an algorithm that can increment numbers so the MSB acts like the LSB, and so on? In other words (in binary): 0000, 1000, 0100, 1100, 0010, etc… I've tried to think up some way, and so far, short of a bunch of nested loops, I can't seem to find a way for a fast and simple algorithm that is a mirror image of simply adding 1 to the LSB of a number. Yet it seems like there should be a way.
One other approach to consider: take a well known bit reversal algorithm - typically a few masks, shifts, and ORs - then implement this with SSE, so you get e.g. 8 x 16 bit bit reversals for the price of one. For 16 bits you need 5*log2(N) = 20 instructions, so the aggregate throughput would be 2.5 instructions per bit reversal.
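For reference, the scalar form of that well-known mask/shift/OR reversal for 16-bit values looks roughly like this (a sketch; an SSE version applies the same masks across eight 16-bit lanes at once):

#include <stdint.h>

static uint16_t reverse16(uint16_t x)
{
    x = (uint16_t)(((x & 0x5555) << 1) | ((x & 0xAAAA) >> 1));  /* swap adjacent bits */
    x = (uint16_t)(((x & 0x3333) << 2) | ((x & 0xCCCC) >> 2));  /* swap bit pairs     */
    x = (uint16_t)(((x & 0x0F0F) << 4) | ((x & 0xF0F0) >> 4));  /* swap nibbles       */
    x = (uint16_t)((x << 8) | (x >> 8));                        /* swap bytes         */
    return x;
}

For a 13-bit index (an 8192-point FFT) you can reverse all 16 bits and shift the result right by 3.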
This is the most trivial and straightforward solution (in C):
void BitReversedIncrement(unsigned *var, int bit)
{
    unsigned c, one = 1u << bit;  /* start at the top bit, which plays the role of the LSB */
    do {
        c = *var & one;           /* carry if this bit was already set  */
        (*var) ^= one;            /* toggle it                          */
        one >>= 1;                /* move to the next lower bit         */
    } while (one && c);           /* keep going while the carry ripples */
}
The main problem with it is the conditional branches, which are often costly on modern CPUs. You have one conditional branch per bit.
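For context, here is a typical way such an increment gets driven when permuting FFT input (a sketch; data and log2n are placeholders, and BitReversedIncrement is the function above):

/* Permute n = 1 << log2n samples into bit-reversed order,
   advancing the reversed index once per element. */
void BitReverseReorder(float *data, unsigned log2n)
{
    unsigned n = 1u << log2n;
    unsigned rev = 0;
    for (unsigned i = 0; i < n; i++) {
        if (i < rev) {                   /* swap each pair exactly once */
            float tmp = data[i];
            data[i]   = data[rev];
            data[rev] = tmp;
        }
        BitReversedIncrement(&rev, (int)log2n - 1);
    }
}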
You can do reversed increments by working on several bits at a time, e.g. 3 if ints are 32-bit:
void BitReversedIncrement2(unsigned *var, int bit)
{
    unsigned r = *var, t = 0;
    while (bit >= 2 && !t)
    {
        unsigned tt = (r >> (bit - 2)) & 7;
        t = (01327654 >> (tt * 3)) & 7;   /* 3-bit reversed-increment LUT, one octal digit per entry */
        r ^= ((tt ^ t) << (bit - 2));
        bit -= 3;
    }
    if (bit >= 0 && !t)
    {
        t = r & ((1 << (bit + 1)) - 1);
        r ^= t;
        t <<= 2 - bit;
        t = (01327654 >> (t * 3)) & 7;
        t >>= 2 - bit;
        r |= t;
    }
    *var = r;
}
This is better, you only have 1 conditional branch per 3 bits.
If your CPU supports 64-bit ints, you can work on 4 bits at a time:
void BitReversedIncrement3(unsigned *var, int bit)
{
    unsigned r = *var, t = 0;
    while (bit >= 3 && !t)
    {
        unsigned tt = (r >> (bit - 3)) & 0xF;
        t = (0x01327654FEDCBA98ULL >> (tt * 4)) & 0xF;   /* 4-bit reversed-increment LUT, one hex digit per entry */
        r ^= ((tt ^ t) << (bit - 3));
        bit -= 4;
    }
    if (bit >= 0 && !t)
    {
        t = r & ((1 << (bit + 1)) - 1);
        r ^= t;
        t <<= 3 - bit;
        t = (0x01327654FEDCBA98ULL >> (t * 4)) & 0xF;
        t >>= 3 - bit;
        r |= t;
    }
    *var = r;
}
Which is even better. And the only look-up table (01327654 or 0x01327654FEDCBA98) is tiny and likely encoded as an immediate instruction operand.
You can further improve the code if the bit position for the reversed "1" is a known constant: just unroll the while loop into nested ifs and substitute the constant for the reversed-one bit position.
For larger FFTs, paying attention to cache blocking (minimizing total uncovered cache miss cycles) can have a far larger effect on performance than optimization of the cycle count taken by indexing bit reversal. Make sure not to de-optimize a bigger effect by a larger cycle count while optimizing the smaller effect. For small FFTs, where everything fits in cache, LUTs can be a good solution as long as you pay attention to any load-use hazards by making sure things are or can be pipelined appropriately.
I have been reading about bit operators in Objective-C in Kochan's book, "Programming in Objective-C".
I am VERY confused about this part, although I have really understood most everything else presented to me thus far.
Here is a quote from the book:
The Bitwise AND Operator
Bitwise ANDing is frequently used for masking operations. That is, this operator can be used easily to set specific bits of a data item to 0. For example, the statement
w3 = w1 & 3;
assigns to w3 the value of w1 bitwise ANDed with the constant 3. This has the same effect as setting all the bits in w3, other than the rightmost two bits, to 0 and preserving the rightmost two bits from w1.
As with all binary arithmetic operators in C, the binary bit operators can also be used as assignment operators by adding an equal sign. The statement
word &= 15;
therefore performs the same function as the following:
word = word & 15;
Additionally, it has the effect of setting all but the rightmost four bits of word to 0. When using constants in performing bitwise operations, it is usually more convenient to express the constants in either octal or hexadecimal notation.
OK, so that is what I'm trying to understand. Now, I'm extremely confused with pretty much this entire concept and I am just looking for a little clarification if anyone is willing to help me out on that.
When the book references "setting all the bits", what exactly is a bit? Isn't that just a 0 or 1 in base 2, in other words, binary?
If so, why, in the first example, are all of the bits except the "rightmost 2" set to 0? Is it 2 because it's 3 - 1, taking the 3 from our constant?
Thanks!
Numbers can be expressed in binary like this:
3 = 000011
5 = 000101
10 = 001010
...etc. I'm going to assume you're familiar with binary.
Bitwise AND means to take two numbers, line them up on top of each other, and create a new number that has a 1 where both numbers have a 1 (everything else is 0).
For example:
  3 => 00011
& 5 => 00101
------------
  1 => 00001
Bitwise OR means to take two numbers, line them up on top of each other, and create a new number that has a 1 where either number has a 1 (everything else is 0).
For example:
  3 => 00011
| 5 => 00101
------------
  7 => 00111
Bitwise XOR (exclusive OR) means to take two numbers, line them up on top of each other, and create a new number that has a 1 where either number has a 1 AND the other number has a 0 (everything else is 0).
For example:
  3 => 00011
^ 5 => 00101
------------
  6 => 00110
Bitwise NOR (Not OR) means to take the Bitwise OR of two numbers, and then reverse everything (where there was a 0, there's now a 1, where there was a 1, there's now a 0).
Bitwise NAND (Not AND) means to take the Bitwise AND of two numbers, and then reverse everything (where there was a 0, there's now a 1, where there was a 1, there's now a 0).
Continuing: why does word &= 15 set all but the 4 rightmost bits to 0? You should be able to figure it out now...
   n => abcdefghijkl
& 15 => 000000001111
--------------------
   ? => 00000000ijkl
(0 AND a = 0, 0 AND b = 0, ... i AND 1 = i, j AND 1 = j, ...)
How is this useful? In many languages, we use things called "bitmasks". A bitmask is essentially a number that represents a whole bunch of smaller numbers combined together. We can combine numbers together using OR, and pull them apart using AND. For example:
int MagicMap = 1;
int MagicWand = 2;
int MagicHat = 4;
If I only have the map and the hat, I can express that as myInventoryBitmask = (MagicMap | MagicHat) and the result is my bitmask. If I don't have anything, then my bitmask is 0. If I want to see if I have my wand, then I can do:
int hasWand = (myInventoryBitmask & MagicWand);
if (hasWand > 0) {
printf("I have a wand\n");
} else {
printf("I don't have a wand\n");
}
Get it?
EDIT: more stuff
You'll also come across the "bitshift" operator: << and >>. This just means "shift everything left n bits" or "shift everything right n bits".
In other words:
1 << 3 = 0001 << 3 = 0001000 = 8
And:
8 >> 2 = 01000 >> 2 = 010 = 2
"Bit" is short for "binary digit". And yes, it's a 0 or 1. There are almost always 8 in a byte, and they're written kinda like decimal numbers are -- with the most significant digit on the left, and the least significant on the right.
In your example, w1 & 3 masks everything but the two least significant (rightmost) digits, because 3 in binary is 00000011 (2 + 1). The AND operation returns 0 if either bit being ANDed is 0, so everything but the last two bits is automatically 0.
w1 = ????...??ab
 3 = 0000...0011
----------------
 & = 0000...00ab
0 & any bit N = 0
1 & any bit N = N
So, anything bitwise ANDed with 3 has all of its bits except the last two set to 0. The last two bits, a and b in this case, are preserved.
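A runnable C snippet of the same masking idea (w1, w3 and word are just the names from the book's example):

#include <stdio.h>

int main(void) {
    unsigned w1 = 0xAB;           /* 1010 1011 */
    unsigned w3 = w1 & 3;         /* keep only the two rightmost bits: 0000 0011 = 3 */

    unsigned word = 0xAB;
    word &= 15;                   /* keep only the four rightmost bits: 0000 1011 = 11 */

    printf("%u %u\n", w3, word);  /* prints: 3 11 */
    return 0;
}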
@cHao & all: No! Bits are not numbers. They're not zero or one!
Well, 0 and 1 are possible and valid interpretations. Zero and one is the typical interpretation.
But a bit is only a thing representing a simple alternative. It says "it is" or "it is not". It doesn't say anything about the thing, the "it", itself. It doesn't tell you what thing it is.
In most cases this won't bother you. You can take bits for numbers (or digits of numbers), as you (or the combination of programming languages, CPU and other hardware you know as being "typical") usually do, and maybe you'll never have trouble with them.
But there is no principal problem if you switch the meaning of "0" and "1". OK, if you do this while programming assembler, you'll find it a bit problematic, as some mnemonics will perform different logic than their names suggest, numbers will be negated, and so on.
Have a look at http://webdocs.cs.ualberta.ca/~amaral/courses/329/webslides/Topic2-DeMorganLaws/sld017.htm if you want.
Greetings