I was looking for something and came across this enum in Apple's UITableViewCell.h.
I'm sorry if this is trivial, but I'm curious what the point of it is.
I know << from Ruby, but I don't really understand this enum:
enum {
UITableViewCellStateDefaultMask = 0,
UITableViewCellStateShowingEditControlMask = 1 << 0,
UITableViewCellStateShowingDeleteConfirmationMask = 1 << 1
};
Thanks
BTW
I've found this to be a great way to learn: once a day I try to dig into the header files of at least one object.
Shani
These are bit-field flags. They are used because you can combine them using the bitwise-OR operator. So for example you can combine them like
(UITableViewCellStateShowingEditControlMask | UITableViewCellStateShowingDeleteConfirmationMask)
They work by having one bit set in an integer. In this example, in binary,
UITableViewCellStateShowingEditControlMask = 0000 0001
UITableViewCellStateShowingDeleteConfirmationMask = 0000 0010
When they are OR'ed together, they produce 0000 0011. The framework then knows that both of these flags are set.
The << operator is a left-shift. It shifts the binary representation. So 1 << 1 means
0000 0001 shifted left by one bit = 0000 0010
1 << 2 would equal 0000 0100.
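For example, here's a minimal sketch (assuming a UITableViewCell subclass, using UIKit's willTransitionToState: method) of how you could test which of those bits are set when the framework hands you a combined state:
- (void)willTransitionToState:(UITableViewCellStateMask)state {
[super willTransitionToState:state];
if (state & UITableViewCellStateShowingEditControlMask) {
// the edit-control bit (1 << 0) is set
}
if (state & UITableViewCellStateShowingDeleteConfirmationMask) {
// the delete-confirmation bit (1 << 1) is set
}
}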
It's actually the bitwise shift operator:
<< Indicates the bits are to be shifted to the left.
>> Indicates the bits are to be shifted to the right.
So in your declaration, the value of 1 << 0 is 1 and 1 << 1 is 2.
It's a common trick in C to use the bitwise shift operator in enum values to allow you to combine enumeration values with the bitwise or operator.
That piece of code is equivalent to
enum {
UITableViewCellStateDefaultMask = 0,
UITableViewCellStateShowingEditControlMask = 1, // 01 in binary
UITableViewCellStateShowingDeleteConfirmationMask = 2 // 10 in binary
};
This allows you to bitwise-OR two or more enumeration constants together
(UITableViewCellStateShowingEditControlMask | UITableViewCellStateShowingDeleteConfirmationMask) // == 3 (or 11 in binary)
to give a new constant that means both of those things at once. In this case, the cell is showing both an editing control and a delete confirmation control, or something like that.
These operators are called bit shifts. Bit shifts are preferred for two reasons:
- the operations are fast, and
- they let you pack multiple boolean values into a single integer.
For example, 1 << 2 is a left shift: it takes 1 (0001 in binary) and shifts it left by two bits, so 0001 becomes 0100 (that is, 4).
Note that the shifted values are ordered as powers of two: 1, 2, 4, 8, 16, ...
typedef NS_OPTIONS(int, EntityEngineer) {
EntityEngineeriOS = 1 << 0,
EntityCategoryAndroid = 1 << 1,
EntityCategoryDB = 1 << 2,
EntityCategoryTeamLead = 1 << 16,
};
Now we want to check multiple boolean values in one line:
EntityEngineer engineer = (EntityEngineeriOS | EntityCategoryAndroid);
// in binary: 0011 = (0001 | 0010)
if (engineer & EntityEngineeriOS) {
NSLog(@"we are looking for a developer who can write objective c or java");
}
if (engineer & EntityCategoryDB) {
NSLog(@"we are not looking for a DB manager");
}
Result: "we are looking for a developer who can write objective c or java" (the EntityCategoryDB bit is not set in engineer, so the second NSLog never runs).
That is the bit-shift operator. It is commonly used for objects that may have multiple behaviors (each enum value being one behavior).
Here is a similar post that may clarify it better.
These are called bitwise operators; they operate on the individual bits of a number. These operations are very fast compared to other arithmetic operations.
I'm somewhat familiar with the typedef enum syntax of C and C++. I'm now programming in Objective-C and came across the syntax in the following example. I'm not sure whether the syntax is Objective-C specific or not, but my question is: in the following code snippet, what does syntax like 1 << 0 mean?
typedef enum {
CMAttitudeReferenceFrameXArbitraryZVertical = 1 << 0,
CMAttitudeReferenceFrameXArbitraryCorrectedZVertical = 1 << 1,
CMAttitudeReferenceFrameXMagneticNorthZVertical = 1 << 2,
CMAttitudeReferenceFrameXTrueNorthZVertical = 1 << 3
} CMAttitudeReferenceFrame;
This is common to the C-family of languages, and works identically in C, C++, and Objective-C. Unlike Java, Pascal, and similar languages, a C enum is not limited to the values named for it; it actually is an integral type of a size that can represent all the named values, and one can set a variable of the enum type to an arithmetic expression in the enum members. Typically, one uses bit shifts to make the values powers of 2, and one uses bit-wise logical operations to combine values.
typedef enum {
read = 1 << 2, // 4
write = 1 << 1, // 2
execute = 1 << 0 // 1
} permission; // A miniature version of UNIX file-permission masks
Again, the bit-shift operations are all from C.
You can now write:
permission all = read | write | execute;
You could even include that line in the permission declaration itself:
typedef enum {
read = 1 << 2, // 4
write = 1 << 1, // 2
execute = 1 << 0, // 1
all = read | write | execute // 7
} permission; // Version 2
How do you turn execute on for a file?
filePermission |= execute;
Note that this is dangerous:
filePermission += execute;
This will change something with value all to 8, which makes no sense.
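Continuing the sketch (filePermission is just an illustrative variable), testing and clearing a bit follow the same pattern:
if (filePermission & execute) {
// the execute bit is set
}
filePermission &= ~execute; // turn execute off without touching read or write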
<< is called the left shift operator.
http://www.techotopia.com/index.php/Objective-C_Operators_and_Expressions#Bitwise_Left_Shift
Long story short 1 << 0 = 1, 1 << 1 = 2, 1 << 2 = 4 and 1 << 3 = 8.
It looks like the typedef represents a bit-field value. 1 << n is 1 shifted left by n bits, so each enum item represents a different bit. That particular bit being set or clear indicates one of two states. (1 shifted left by zero bits is just 1.)
If a datum is declared:
CMAttitudeReferenceFrame foo;
Then you can check any one of four independent states using the enum values, and foo is no bigger than an int. For example:
if ( foo & CMAttitudeReferenceFrameXArbitraryCorrectedZVertical ) {
// Do something here if this state is set
}
I can see in Apple's documentation that enumerations are sometimes defined like this
enum {
UICollectionViewScrollPositionTop = 1 << 0,
UICollectionViewScrollPositionBottom = 1 << 1
};
What does the << mean?
It's the bitwise shift left operator. It's used to create values having a single bit set, very common when combination through bitwise OR is intended.
For those values, you might later say:
const int top_and_bottom = UICollectionViewScrollPositionTop | UICollectionViewScrollPositionBottom;
which would result in top_and_bottom being set to 3 (11 in binary).
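As a rough sketch (reusing the top_and_bottom constant from above), you could later check whether a particular position is contained in such a combined mask:
if (top_and_bottom & UICollectionViewScrollPositionTop) {
// the Top bit is set in the combined value
}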
Here it is simply a left bit shift, so 1<<0 equals 1 and 1<<1 equals 2. The author probably chose this way of initializing the enumeration to emphasize that UICollectionViewScrollPositionTop has only the least significant bit set and UICollectionViewScrollPositionBottom has only the second-least-significant bit set. I guess this enumeration is meant to be used to form bitmasks later.
<< stands for left shift.
It shifts the binary representation left by the specified number of bits; for example, 4<<1 is 8 and 4<<2 is 16.
Each left shift makes the value multiplied by 2.
1<<0 will be 1 while 1<<1 will be 2.
Check here
A << operator is used in UITableViewCell, as listed below:
enum {
UITableViewCellStateDefaultMask = 0,
UITableViewCellStateShowingEditControlMask = 1 << 0,
UITableViewCellStateShowingDeleteConfirmationMask = 1 << 1
};
I had been to this post, << operator in objective c enum?, but I am still not clear about the use of the << operator.
The same enum could be written as shown below, so why have they used the << operator?
enum {
UITableViewCellStateDefaultMask = 0,
UITableViewCellStateShowingEditControlMask = 1,
UITableViewCellStateShowingDeleteConfirmationMask = 2
};
The post you have linked explains why quite clearly. The << operator in C shifts numbers left by the specified number of bits. By shifting a 1 into each column, it is easy to see that the enum options can be bitwise ORed together. This allows the enum options to be combined together using the | operator and held in a single integer. This would not work if the enum declaration was as follows:
enum {
UITableViewCellStateDefaultMask = 0, (= 00 in binary)
UITableViewCellStateShowingEditControlMask = 1, (= 01 in binary)
UITableViewCellStateShowingDeleteConfirmationMask = 2, (= 10 in binary)
UITableViewCellStateThatIJustMadeUpForThisExample = 3 (= 11 in binary)
};
As 3 = 11 in binary, it is not possible to know from a single integer if you have the state UITableViewCellStateThatIJustMadeUpForThisExample or UITableViewCellStateShowingEditControlMask ORed with UITableViewCellStateShowingDeleteConfirmationMask.
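A small sketch of why the power-of-two values matter: with the real masks, each flag can be recovered independently from the combined integer (the state variable here is just illustrative):
NSUInteger state = UITableViewCellStateShowingEditControlMask | UITableViewCellStateShowingDeleteConfirmationMask; // 01 | 10 == 11 (3)
BOOL showingEditControl = (state & UITableViewCellStateShowingEditControlMask) != 0; // YES
BOOL showingDeleteConfirmation = (state & UITableViewCellStateShowingDeleteConfirmationMask) != 0; // YES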
The enum values give names to bits that are to be used in a bitmask. The bits in a bitmask have the values 1, 2, 4, 8, 16, ... (the powers of two). These values can be shown more clearly using the expressions (1<<0, 1<<1, 1<<2, 1<<3) -- i.e., 1 shifted to the left by 0, 1, ... places. It's clearer and less error-prone than listing the powers of 2 as decimal constants.
When you use the values, they are normally combined using a bitwise-OR operation ('|'). The goal is to specify zero or more bits, each of which has a specific meaning. Using a bitmask allows you to specify them independently but compactly. You may wish to read more on bitmasks for more details and examples.
I am using Objective-C to develop iOS applications.
I found in the documentation that enums have default values like this: "1<<0".
I don't understand this default value.
Example:
enum {
UIDataDetectorTypePhoneNumber = 1 << 0,
UIDataDetectorTypeLink = 1 << 1,
UIDataDetectorTypeAddress = 1 << 2,
UIDataDetectorTypeCalendarEvent = 1 << 3,
UIDataDetectorTypeNone = 0,
UIDataDetectorTypeAll = NSUIntegerMax
};
So, what is the default value of each element in this enum?
Thanks
That is an enum with bitwise values, or bit flags. Each value is a binary value in which only one bit is set to 1 and all the others are 0. That way you can store as many flags in one value as an integer has bits.
The shift-left operator '<<' moves the bits to the left, toward the most significant side of the binary number. Shifting by one position is the same as multiplying by 2.
For example in the enum you have send in your question, the first value, UIDataDetectorTypePhoneNumber, is 1. The second one, UIDataDetectorTypeLink, is 2 and the third one, UIDataDetectorTypeAddress, is 4.
You combine those values as flags to set several different bits in the same integer:
NSInteger fooIntValue = UIDataDetectorTypePhoneNumber | UIDataDetectorTypeLink;
Since the '|' operation is bitwise, the result will be the binary value ...0011, that is 3. You are indicating that your variable fooIntValue has two flags set for two different properties.
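A minimal sketch of reading those flags back out of fooIntValue with bitwise AND (variable name as above):
if (fooIntValue & UIDataDetectorTypeLink) {
// link detection is enabled
}
if (fooIntValue & UIDataDetectorTypeAddress) {
// this branch is not taken; the address bit was never set
}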
This << sign is for shifting bits to the left (multiplying by 2).
1 << 0 equals 1 (0b00000001)
1 << 1 equals 2 (0b00000010)
1 << 2 equals 4 (0b00000100)
Usually, if you don't assign any values, the compiler will define the first value as 0, the second as 1, and so on. You can always assign values yourself if you prefer (the assignment you're referring to is usually used for bitmasks, where each bit in a byte or word has its own meaning).
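For instance, a quick sketch of the default numbering versus the explicit bitmask style (the names here are purely illustrative):
typedef enum {
ColorRed, // 0 by default
ColorGreen, // 1
ColorBlue // 2
} Color;
typedef enum {
OptionA = 1 << 0, // 1
OptionB = 1 << 1, // 2
OptionC = 1 << 2 // 4
} Options;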
I have been reading about bit operators in Objective-C in Kochan's book, "Programming in Objective-C".
I am VERY confused about this part, although I have really understood most everything else presented to me thus far.
Here is a quote from the book:
The Bitwise AND Operator
Bitwise ANDing is frequently used for masking operations. That is, this operator can be used easily to set specific bits of a data item to 0. For example, the statement
w3 = w1 & 3;
assigns to w3 the value of w1 bitwise ANDed with the constant 3. This has the same effect as setting all the bits in w3, other than the rightmost two bits, to 0 and preserving the rightmost two bits from w1.
As with all binary arithmetic operators in C, the binary bit operators can also be used as assignment operators by adding an equal sign. The statement
word &= 15;
therefore performs the same function as the following:
word = word & 15;
Additionally, it has the effect of setting all but the rightmost four bits of word to 0. When using constants in performing bitwise operations, it is usually more convenient to express the constants in either octal or hexadecimal notation.
OK, so that is what I'm trying to understand. Now, I'm extremely confused with pretty much this entire concept and I am just looking for a little clarification if anyone is willing to help me out on that.
When the book references "setting all the bits", what exactly is a bit? Isn't that just a 0 or 1 in base 2, in other words, binary?
If so, why, in the first example, are all of the bits except the "rightmost 2" set to 0? Is it 2 because it's 3 - 1, taking the 3 from our constant?
Thanks!
Numbers can be expressed in binary like this:
3 = 000011
5 = 000101
10 = 001010
...etc. I'm going to assume you're familiar with binary.
Bitwise AND means to take two numbers, line them up on top of each other, and create a new number that has a 1 where both numbers have a 1 (everything else is 0).
For example:
  3 =>  00011
& 5 =>  00101
-----   -----
  1     00001
Bitwise OR means to take two numbers, line them up on top of each other, and create a new number that has a 1 where either number has a 1 (everything else is 0).
For example:
  3 =>  00011
| 5 =>  00101
-----   -----
  7     00111
Bitwise XOR (exclusive OR) means to take two numbers, line them up on top of each other, and create a new number that has a 1 where either number has a 1 AND the other number has a 0 (everything else is 0).
For example:
  3 =>  00011
^ 5 =>  00101
-----   -----
  6     00110
Bitwise NOR (Not OR) means to take the Bitwise OR of two numbers, and then reverse everything (where there was a 0, there's now a 1, where there was a 1, there's now a 0).
Bitwise NAND (Not AND) means to take the Bitwise AND of two numbers, and then reverse everything (where there was a 0, there's now a 1, where there was a 1, there's now a 0).
Continuing: why does word &= 15 set all but the 4 rightmost bits to 0? You should be able to figure it out now...
   n =>  abcdefghijkl
& 15 =>  000000001111
------   ------------
   ?     00000000ijkl
(0 AND a = 0, 0 AND b = 0, ... i AND 1 = i, j AND 1 = j, ...)
How is this useful? In many languages, we use things called "bitmasks". A bitmask is essentially a number that represents a whole bunch of smaller numbers combined together. We can combine numbers together using OR, and pull them apart using AND. For example:
int MagicMap = 1;
int MagicWand = 2;
int MagicHat = 4;
If I only have the map and the hat, I can express that as myInventoryBitmask = (MagicMap | MagicHat) and the result is my bitmask. If I don't have anything, then my bitmask is 0. If I want to see if I have my wand, then I can do:
int hasWand = (myInventoryBitmask & MagicWand);
if (hasWand > 0) {
printf("I have a wand\n");
} else {
printf("I don't have a wand\n");
}
Get it?
EDIT: more stuff
You'll also come across the "bitshift" operator: << and >>. This just means "shift everything left n bits" or "shift everything right n bits".
In other words:
1 << 3 = 0001 << 3 = 0001000 = 8
And:
8 >> 2 = 01000 >> 2 = 010 = 2
"Bit" is short for "binary digit". And yes, it's a 0 or 1. There are almost always 8 in a byte, and they're written kinda like decimal numbers are -- with the most significant digit on the left, and the least significant on the right.
In your example, w1 & 3 masks everything but the two least significant (rightmost) digits because 3, in binary, is 00000011. (2 + 1) The AND operation returns 0 if either bit being ANDed is 0, so everything but the last two bits are automatically 0.
w1 = ????...??ab
3 = 0000...0011
--------------------
& = 0000...00ab
0 & any bit N = 0
1 & any bit N = N
So, anything bitwise-ANDed with 3 has all of its bits except the last two set to 0. The last two bits, a and b in this case, are preserved.
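Here's a tiny sketch of that masking in C (the value of w1 is arbitrary, just for illustration):
unsigned int w1 = 0x2D; // 0010 1101 in binary
unsigned int w3 = w1 & 3; // 0000 0001: everything cleared except the last two bits, which come from w1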
@cHao & all: No! Bits are not numbers. They're not zero or one!
Well, 0 and 1 are possible and valid interpretations. Zero and one is the typical interpretation.
But a bit is only a thing representing a simple alternative. It says "it is" or "it is not". It doesn't say anything about the thing, the "it", itself. It doesn't tell what thing it is.
In most cases this won't bother you. You can take them for numbers (or digits of numbers), as you (or the combination of programming languages, CPU, and other hardware you know as being "typical") usually do, and maybe you'll never have trouble with them.
But there is no fundamental problem if you switch the meaning of "0" and "1". OK, if you do this while programming assembler you'll find it a bit problematic, as some mnemonics will perform different logic than their names suggest, numbers will be negated, and so on.
Have a look at http://webdocs.cs.ualberta.ca/~amaral/courses/329/webslides/Topic2-DeMorganLaws/sld017.htm if you want.
Greetings