Why do a lot of languages lack a logical XOR operator? - language-design

Off the top of my head, I cannot think of a single language I've used that had a logical exclusive-or operator, yet all of them have both logical and bitwise versions of and and or.
Looking around, the only reason for this that I could find was that exclusive-or cannot be short-circuited, so a logical version would be useless, which I really can't see being the case. The reason it came to my attention that most languages lack this is that I needed it (I was using Ruby, so I wrote a method to convert an integer to a boolean, and then used bitwise XOR, which on booleans acts like logical XOR).
Just using bitwise XOR does not work either, because it will give a different result.
0b0001 ^ 0b1000 = 0b1001 (True)
0b0001 XOR 0b1000 = False
// Where ^ is bitwise exclusive or and XOR is logical exclusive or
// Using != (not equal to) also doesn't work
0b0001 != 0b1000 = True
So why is it that most languages do not include a logical exclusive or operator?
Edit: I added an example showing how != also does not do what I want. It almost does, but falls into the same trap as bitwise exclusive-or: it only works if you know you are working with zero or one, and not any other number.
And to note, this is assuming that language uses zero as false and nonzero as true.

Nearly every language has a logical XOR. The symbol they use for it varies, but regardless of the symbol, most people pronounce it "not equal".
Edit: for those who doubt, test code for three variables:
#include <iostream>
int main() {
    for (int i=0; i<2; i++)
        for (int j=0; j<2; j++)
            for (int k=0; k<2; k++) {
                std::cout << "\n!=" << (i!=j!=k);
                std::cout << "\t^ " << (i^j^k);
            }
    return 0;
}

What do you mean by "logical XOR operator"? I'm not sure what result you expect from your examples, but here's my answer:
a (logical XOR) b is the same as bool(a) != bool(b)
Inequality is a logical XOR. Since you already have the bitwise XOR version, you don't need a special operator for the logical one.

You can also write !a ^ !b to get the same effect as a logical xor.
You probably don't find logical xor in programming languages because:
it doesn't generate very efficient assembly code
it is not needed very often
language designers have to draw the line somewhere: why not NAND, NOR, NXOR, etc.?
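Coming back to the !a ^ !b trick: a quick C check (a minimal sketch) shows it behaves as a logical XOR even for operands other than 0 and 1:
#include <stdio.h>

int main(void) {
    int a = 3, b = 7;          /* both "true", but with different bit patterns */
    printf("%d\n", a ^ b);     /* bitwise xor: prints 4, i.e. "true" - wrong   */
    printf("%d\n", !a ^ !b);   /* logical xor: 0 ^ 0, prints 0 - correct       */
    return 0;
}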

OK, so you're looking for a value-wise exclusive-or rather than a bitwise exclusive-or. It seems like you want something that treats each operand as simply "on" or "off", as though the values were "zero" or "nonzero", and then XORs those truth values. In other words, it sounds like you want to compress each operand into a single bit indicating true or false. So it sounds like you want (a!=0)^(b!=0)
a  b  a YOURXOR b  a!=0  b!=0  (a!=0)^(b!=0)
0  0       0        0     0          0
0  1       1        0     1          1
1  0       1        1     0          1
7  0       1        1     0          1
0  7       1        0     1          1
3  7       0        1     1          0
7  3       0        1     1          0
7  7       0        1     1          0
As for why that's not in every language... that I can't really answer. However, it's not all that difficult to implement with the bitwise-xor building block available in every language, and no language offers all possible functionality; they just offer enough to let you build the extended functionality you might need. Oh, and if this were a popular enough problem, you'd expect to see libraries or macros for it all over the place. While I may not have made a sufficient search for such libs/code, I didn't find any myself, indicating that the requirement is either trivial to write or a niche one.
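If you need this in more than one place, it is trivial to wrap; a hypothetical C helper (the name is mine, not anything standard):
/* hypothetical helper: logical XOR of two scalar values */
#define LOGICAL_XOR(a, b) (((a) != 0) ^ ((b) != 0))
/* e.g. LOGICAL_XOR(3, 7) == 0 and LOGICAL_XOR(7, 0) == 1, matching the table above */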

The only reason I can think of is that the operation is relatively rare. I don't see why something like ^^ couldn't be assigned to it, though; maybe people who design languages want to avoid operator clutter.
As to it being useless because it can't be short-circuited: in many strongly typed languages you can't use the bitwise XOR operator to compare booleans without casting back and forth, in which case even a non-short-circuiting logical XOR would make sense to have.

Probably because XOR isn't commonly needed as a logical operator, and you can do it almost as easily as this:
// foo xor bar
(foo && !bar) || (bar && !foo)
// or like this (assuming foo and bar are booleans)
!(foo==bar)
// assume booleans that can be cast to integers with true=1 and false=0
// foo XOR bar XOR baz XOR alice
((int)foo + (int)bar + (int)baz + (int)alice) & 1

Related

Understanding weird logical operators in Smalltalk

So my problem is the following:
When char = 0,
boolean = char ~= 0 & char ~= 256
evaluates to true, and if I invert the statements like so:
boolean = char ~= 256 & char ~= 0
I get false.
What's happening? I am expecting false in both cases.
As @Uko said, you must understand the precedence of messages: all binary messages (+ = < & ~= etc.) are evaluated from left to right.
Thus you evaluate:
(((boolean = char) ~= 256) & char) ~= 0
I think you were after:
boolean := (char ~= 256) & (char ~= 0).
So what happens with your expression ?
boolean is presumably uninitialized (thus nil).
char is 0.
boolean = char is false.
false ~= 256 is true.
true & char is char (see below why)
char ~= 0 is false (since char = 0)
If you invert 0 and 256, only the last step changes, and the answer is true.
The interesting part is the implementation of message & in class True: it probably does not assert that the parameter is a Boolean and looks like:
& aBoolean
    ^ aBoolean
If you pass something that is not a Boolean (like 0 in your case), it will return that thing, however surprising that may be...
If you use an IDE (Squeak/Pharo/VisualWorks/Dolphin... but not GNU Smalltalk), I suggest you use the Debug It menu entry and evaluate the expression step by step in the debugger.
Last, note that char is probably not a good name in Smalltalk context: it might be misleading. Indeed, if it holds 0, it's rather an Integer, not a Character.
There is something we are repeating in some answers that I think deserves further clarification. We say evaluation proceeds from left to right. True, but the actual semantics of messages is:
First evaluate the receiver, then the arguments in order; finally send the message.
Since the Smalltalk VM is stack based, this rule means that:
The receiver is evaluated first and the result is pushed on the stack.
The arguments are evaluated in order and their results pushed on the stack.
The message is sent.
Item 3 means that the method that the send invokes will find the receiver and the arguments in the stack, in the order defined above.
For instance, in
a := 1.
b := 2.
b := a - (a := b)
variable b will evaluate to (1 - (a := 2)) = -1 and a to 2. Why? Because by the time the assignment a := b is evaluated the receiver a of the subtraction has already been pushed with the value it had at that time, i.e., 1.
Note also that this semantics must be preserved even if the VM happens to use registers instead of the stack. The reason is that evaluations must preserve the semantics, and therefore the order. This fact has an impact on the optimizations that the native code may implement.
It is interesting to observe that this semantics along with the precedence unary > binary > keyword support polymorphism in a simple way. A language that gives more precedence to, say, * than + assumes that * is a multiplication and + an addition. In Smalltalk, however, it is up to the programmer to decide the meaning of these (and any other) selectors without the syntax getting in the way of the actual semantics.
My point here is that "left to right" comes from the fact that we write Smalltalk in English, which is read from left to right. If we implemented Smalltalk in a language that people read from right to left, this rule would be reversed. The actual definition, the one that will remain unchanged, is the one expressed in items 1, 2 and 3 above, which comes from the stack metaphor.

Languages that support boolean syntactic sugar

There's a certain over-verbosity that I have to engage in when writing certain Boolean expressions, at least with all the languages I've used, and I was wondering if there were any languages that let you write more concisely?
The way it goes is like this:
I want to find out if I have a Thing that can be either A, B, C, or D.
And I'd like to see if Thing is an A or a B.
The logical way for me to express this is
//1: true if Thing is an A or a B
Thing == (A || B)
Yet all the languages I know expect it to be written as
//2: true if Thing is an A or a B
Thing == A || Thing == B
Are there any languages that support 1? It doesn't seem problematic to me, unless Thing is a Boolean.
Yes. Icon does.
As a simple example, here is how to get the sum of all numbers less than 1000 that are divisible by three or five (the first problem of Project Euler).
procedure main ()
    local result
    local n
    result := 0
    every n := 1 to 999 do
        if n % (3 | 5) == 0 then
            result +:= n
    write (result)
end
Note the n % (3 | 5) == 0 expression. I'm a bit fuzzy on the precise semantics, but in Icon, the concept of booleans is not like other languages. Every expression is a generator and it may pass (generating a value) or fail. When used in an if expression, a generator will continue to iterate until it passes or exhausts itself. In this case, n % (3 | 5) == 0 is a generator which uses another generator (3 | 5) to test if n is divisible by 3 or 5. (To be entirely technical, this isn't even syntactic sugar.)
Likewise, in Python (which was influenced by Icon) you can use the in operator to test for equality against multiple elements. It's a little weaker than Icon, though (as in, you could not translate the modulo comparison above directly). In your case, you would write Thing in (A, B), which translates exactly to what you want.
There are other ways to express that condition without trying to add any magic to the conditional operators.
In Ruby, for example:
$> thing = "A"
=> "A"
$> ["A","B"].include? thing
=> true
I know you are looking for answers that have the functionality built into the language, but here are two other means that I find work better, as they solve more problems and have been in use for many decades.
Have you considered using a preprocessor?
Also, languages like Lisp have macros, which are part of the language.

How can I write like "x == either 1 or 2" in a programming language? [duplicate]

This question already has answers here:
Closed 12 years ago.
Possible Duplicate:
Why do most programming languages only have binary equality comparison operators?
I have had a simple question for a fairly long time--since I started learning programming languages.
I'd like to write like "if x is either 1 or 2 => TRUE (otherwise FALSE)."
But when I write it in a programming language, say in C,
( x == 1 || x == 2 )
it really works but looks awkward and hard to read. I guess it should be possible to simplify such an or operation, and so if you have any idea, please tell me. Thanks, Nathan
Python allows test for membership in a sequence:
if x in (1, 2):
An extension version in C#
step 1: create an extension method
using System.Linq;

public static class ObjectExtensions
{
    public static bool Either(this object value, params object[] array)
    {
        return array.Any(p => Equals(value, p));
    }
}
step 2: use the extension method
if (x.Either(1, 2, 3, 4, 5, 6))
{
}
else
{
}
While there are a number of quite interesting answers in this thread, I would like to point out that they may have performance implications if you're doing this kind of logic inside a loop, depending on the language. As far as the compiler is concerned, if (x == 1 || x == 2) is by far the easiest form to understand and optimize when it's compiled into machine code.
When I started programming it seemed weird to me as well that instead of something like:
(1 < x < 10)
I had to write:
(1 < x && x < 10)
But this is how most programming languages work, and after a while you will get used to it.
So I believe it is perfectly fine to write
( x == 1 || x == 2 )
Writing it this way also has the advantage that other programmers will understand easily what you wrote. Using a function to encapsulate it might just make things more complicated because the other programmers would need to find that function and see what it does.
Only more recent programming languages like Python, Ruby, etc. allow you to write it in a simpler, nicer way. That is mostly because these languages are designed to increase the programmer's productivity, while the older languages' main goal was application performance, not programmer productivity.
It's Natural, but Language-Dependent
Your approach would indeed seem more natural but that really depends on the language you use for the implementation.
Rationale for the Mess
C being a systems programming language, and fairly close to the hardware (funny, though, as we used to consider it a "high-level" language, as opposed to writing machine code), it's not exactly expressive.
Modern higher-level languages (again arguable; Lisp is not that modern, historically speaking, but would allow you to do that nicely) let you do such things by using built-in constructs or library support (for instance, using Ranges, Tuples or equivalents in languages like Python, Ruby, Groovy, the ML family, Haskell...).
Possible Solutions
Option 1
One option for you would be to implement a function or subroutine taking an array of values and checking them.
Here's a basic prototype, and I leave the implementation as an exercise to you:
/* returns non-zero value if check is in values */
int is_in(int check, int *values, int size);
However, as you will quickly see, this is very basic and not very flexible:
it works only on integers,
it works only to compare identical values.
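That said, a possible implementation of the is_in prototype could look like this (a minimal sketch, since the answer above leaves it as an exercise):
/* returns non-zero if check is in values, 0 otherwise */
int is_in(int check, int *values, int size)
{
    int i;
    for (i = 0; i < size; i++)
        if (values[i] == check)
            return 1;
    return 0;
}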
Option 2
One step higher on the complexity ladder (in terms of languages), an alternative would be to use pre-processor macros in C (or C++) to achieve a similar behavior, but beware of side effects.
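For example, a hypothetical two-value macro already shows the hazard:
/* hypothetical macro: x is expanded twice, so EITHER(f(), 1, 2)
   would call f() twice - the classic side-effect trap */
#define EITHER(x, a, b) ((x) == (a) || (x) == (b))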
Other Options
A next step could be to pass a function pointer as an extra parameter to define the behavior at call-point, define several variants and aliases for this, and build yourself a small library of comparators.
The next step then would be to implement a similar thing in C++ using templates to do this on different types with a single implementation.
And then keep going from there to higher-level languages.
Pick the Right Language (or learn to let go!)
Typically, languages favoring functional programming will have built-in support for this sort of thing, for obvious reasons.
Or just learn to accept that some languages can do things that others cannot, and that depending on the job and environment, that's just the way it is. It mostly is syntactic sugar, and there's not much you can do. Also, some languages will address their shortcomings over time by updating their specifications, while others will just stall.
Maybe a library already implements such a thing and I am not aware of it.
That was a lot of interesting alternatives. I am surprised nobody mentioned switch...case, so here goes:
switch(x) {
    case 1:
    case 2:
        // do your work
        break;
    default:
        // the else part
}
it is more readable than having a bunch of x == 1 || x == 2 || ...
it is more optimal than having an array/set/list for doing a membership check
I doubt I'd ever do this, but to answer your question, here's one way to achieve it in C# involving a little generic type inference and some abuse of operator overloading. You could write code like this:
if (x == Any.Of(1, 2)) {
    Console.WriteLine("In the set.");
}
Where the Any class is defined as:
using System;

public static class Any {
    public static Any2<T> Of<T>(T item1, T item2) {
        return new Any2<T>(item1, item2);
    }

    public struct Any2<T> {
        T item1;
        T item2;

        public Any2(T item1, T item2) {
            this.item1 = item1;
            this.item2 = item2;
        }

        public static bool operator ==(T item, Any2<T> set) {
            return item.Equals(set.item1) || item.Equals(set.item2);
        }

        // Defining the operator== requires these three methods to be defined as well:
        public static bool operator !=(T item, Any2<T> set) {
            return !(item == set);
        }

        public override bool Equals(object obj) { throw new NotImplementedException(); }
        public override int GetHashCode() { throw new NotImplementedException(); }
    }
}
You could conceivably have a number of overloads of the Any.Of method to work with 3, 4, or even more arguments. Other operators could be provided as well, and a companion All class could do something very similar but with && in place of ||.
Looking at the disassembly, a fair bit of boxing happens because of the need to call Equals, so this ends up being slower than the obvious (x == 1) || (x == 2) construct. However, if you change all the <T>'s to int and replace the Equals with ==, you get something which appears to inline nicely to be about the same speed as (x == 1) || (x == 2).
Err, what's wrong with it? Oh well, if you really use it a lot and hate the looks, do something like this in C#:
#region minimizethisandneveropen
public bool either(int value, int x, int y)
{
    return (value == x || value == y);
}
#endregion
and in places where you use it:
if (either(value, 1, 2))
    //yaddayadda
Or something like that in another language :).
In php you can use
$ret = in_array($x, array(1, 2));
As far as I know, there is no built-in way of doing this in C. You could add your own inline function for scanning an array of ints for values equal to x....
Like so:
inline int contains(int set[], int n, int x)
{
    int i;
    for (i = 0; i < n; i++)
        if (set[i] == x)
            return 1;
    return 0;
}
// To implement the check, you declare the set
int mySet[2] = {1,2};
// And evaluate like this:
contains(mySet,2,x) // returns non-zero if 'x' is contained in 'mySet'
In T-SQL
where x in (1,2)
In COBOL (it's been a long time since I've even glanced briefly at COBOL, so I may have a detail or two wrong here):
IF X EQUALS 1 OR 2
...
So the syntax is definitely possible. The question then boils down to "why is it not used more often?"
Well, the thing is, parsing expressions like that is a bit of a bitch. Not when standing alone like that, mind, but more when in compound expressions. The syntax starts to become opaque (from the compiler implementer's perspective) and the semantics downright hairy. IIRC, a lot of COBOL compilers will even warn you if you use syntax like that because of the potential problems.
In .Net you can use Linq:
int[] wanted = new int[] {1, 2};
// you can use Any to return true for the first item in the list that passes
bool result = wanted.Any( i => i == x );
// or use Contains
bool result = wanted.Contains( x );
Although personally I think the basic || is simple enough:
bool result = ( x == 1 || x == 2 );
Thanks Ignacio! I translated it into Ruby:
[ 1, 2 ].include?( x )
and it also works, but I'm not sure whether it'd look clear & normal. If you know about Ruby, please advise. Also if anybody knows how to write this in C, please tell me. Thanks. -Nathan
Perl 5 with Perl6::Junction:
use Perl6::Junction 'any';
say 'yes' if 2 == any(qw/1 2 3/);
Perl 6:
say 'yes' if 2 == 1|2|3;
This version is so readable and concise I’d use it instead of the || operator.
Pascal has a (limited) notion of sets, so you could do:
if x in [1, 2] then
(haven't touched a Pascal compiler in decades so the syntax may be off)
A try with only one non-bitwise boolean operator (not advised, not tested):
if( ((x&3) ^ x ^ ((x>>1)&1) ^ (x&1) ^ 1) == 0 )
(Note the outer parentheses: in C, == binds tighter than ^, so they are required for the expression to mean what it says.)
The (x&3) ^ x part should be equal to 0; this ensures that x is between 0 and 3. The other operands will only ever have the last bit set.
The ((x>>1)&1) ^ (x&1) ^ 1 part ensures the last and second-to-last bits differ. This holds for 1 and 2, but not for 0 and 3.
You say the notation (x==1 || x==2) is "awkward and hard to read". I beg to differ. It's different from natural language, but is very clear and easy to understand. You just need to think like a computer.
Also, notations mentioned in this thread like x in (1,2) are semantically different from what you are really asking: they ask whether x is a member of the set (1,2). What you are asking is whether x equals 1 or equals 2, which is logically (and semantically) equivalent to "x equals 1, or x equals 2", which translates to (x==1 || x==2).
In Java:
import java.util.*;

List<Integer> list = Arrays.asList(1, 2);
Set<Integer> set = new HashSet<>(list);
set.contains(1);
I have a macro that I use a lot that's somewhat close to what you want.
#define ISBETWEEN(Var, Low, High) ((Var) >= (Low) && (Var) <= (High))
ISBETWEEN(x, 1, 2) will return true if x is 1 or 2.
Neither C, C++, VB.NET, C#, nor any other such language I know of has an efficient way to test for something being one of several choices. Although (x==1 || x==2) is often the most natural way to code such a construct, that approach sometimes requires the creation of an extra temporary variable:
tempvar = somefunction(); // tempvar only needed for 'if' test:
if (tempvar == 1 || tempvar == 2)
    ...
Certainly an optimizer should be able to effectively get rid of the temporary variable (shove it in a register for the brief time it's used) but I still think that code is ugly. Further, on some embedded processors, the most compact and possibly fastest way to write (x == const1 || x==const2 || x==const3) is:
movf _x,w ; Load variable X into accumulator
xorlw const1 ; XOR with const1
btfss STATUS,ZERO ; Skip next instruction if zero
xorlw const1 ^ const2 ; XOR with (const1 ^ const2)
btfss STATUS,ZERO ; Skip next instruction if zero
xorlw const2 ^ const3 ; XOR with (const2 ^ const3)
btfss STATUS,ZERO ; Skip next instruction if zero
goto NOPE
That approach requires two more instructions for each constant; all instructions will execute. Early-exit tests save time if the branch is taken and waste time otherwise. Coding the separate comparisons literally would require four instructions for each constant.
If a language had an "if variable is one of several constants" construct, I would expect a compiler to use the above code pattern. Too bad no such construct exists in common languages.
(note: Pascal does have such a construct, but run-time implementations are often very wasteful of both time and code space).
In JavaScript: return x === 1 || x === 2;

What are the practical uses of modulus (%) in programming? [duplicate]

This question already has answers here:
Closed 12 years ago.
Possible Duplicate:
Recognizing when to use the mod operator
What are the practical uses of modulus? I know what modulo division is. The first scenarios that come to my mind are finding odd and even numbers, and clock arithmetic. But where else could I use it?
The most common use I've found is for "wrapping round" your array indices.
For example, if you just want to cycle through an array repeatedly, you could use:
int a[10];
for (int i = 0; true; i = (i + 1) % 10)
{
    // ... use a[i] ...
}
The modulo ensures that i stays in the [0, 10) range.
I usually use it in tight loops, when I have to do something every X iterations as opposed to on every iteration.
Example:
int i;
for (i = 1; i <= 1000000; i++)
{
    do_something(i);
    if (i % 1000 == 0)
        printf("%d processed\n", i);
}
One use for the modulus operation is when making a hash table. It's used to convert the value out of the hash function into an index into the array. (If the hash table size is a power of two, the modulus could be done with a bit-mask, but it's still a modulus operation.)
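As a minimal sketch in C (names are illustrative):
/* map an arbitrary hash value to a bucket index in [0, n_buckets) */
unsigned bucket_index(unsigned hash, unsigned n_buckets)
{
    return hash % n_buckets;
    /* for power-of-two table sizes this can be hash & (n_buckets - 1) */
}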
To print a number as a string, you need the modulus to find the value of each digit.
string number_to_string(uint number) {
    string result = "";
    while (number != 0) {
        result = cast(char)((number % 10) + '0') ~ result;
        //                   ^^^^^^^^^^^
        number /= 10;
    }
    return result;
}
For the check digits of international bank account numbers (IBANs), there is the mod-97 technique.
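The core of that check, sketched in C (letters must first be mapped to digits, A=10..Z=35; full IBAN handling is omitted):
/* piecewise mod-97 over an arbitrarily long digit string */
int mod97(const char *digits)
{
    int r = 0;
    for (; *digits; digits++)
        r = (r * 10 + (*digits - '0')) % 97;
    return r;   /* a valid IBAN, rearranged and digitized, yields 1 */
}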
Modulus is also useful in large batches, to do something after every n iterations. Here is an example for NHibernate:
ISession session = sessionFactory.openSession();
ITransaction tx = session.BeginTransaction();
for (int i = 0; i < 100000; i++) {
    Customer customer = new Customer(.....);
    session.Save(customer);
    if (i % 20 == 0) { // 20, same as the ADO batch size
        // Flush a batch of inserts and release memory:
        session.Flush();
        session.Clear();
    }
}
tx.Commit();
session.Close();
The usual implementation of buffered communications uses circular buffers, and you manage them with modulus arithmetic.
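A minimal ring-buffer sketch in C (names are illustrative; full/empty checks omitted for brevity):
#define BUF_SIZE 64

unsigned char buf[BUF_SIZE];
unsigned head = 0, tail = 0;    /* head: next write, tail: next read */

void put(unsigned char c)
{
    buf[head] = c;
    head = (head + 1) % BUF_SIZE;   /* wrap around via modulus */
}

unsigned char get(void)
{
    unsigned char c = buf[tail];
    tail = (tail + 1) % BUF_SIZE;
    return c;
}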
For languages that don't have bitwise operators, modulus can be used to get the lowest n bits of a number. For example, to get the lowest 8 bits of x:
x % 256
which is equivalent to:
x & 255
Cryptography. That alone would account for an obscene percentage of modulus (I exaggerate, but you get the point).
Try the Wikipedia page too:
Modular arithmetic is referenced in number theory, group theory, ring theory, knot theory, abstract algebra, cryptography, computer science, chemistry and the visual and musical arts.
In my experience, any sufficiently advanced algorithm is probably going to touch on one or more of the above topics.
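For instance, modular exponentiation (the workhorse of RSA and Diffie-Hellman) is little more than repeated multiply-and-reduce; a sketch in C:
/* square-and-multiply: computes (base^exp) mod m;
   overflow of the intermediate product is ignored for brevity */
unsigned long long modpow(unsigned long long base, unsigned long long exp,
                          unsigned long long m)
{
    unsigned long long result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}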
Well, there are many perspectives from which you can look at it. As a mathematical operation it's just modulo division. Strictly speaking we don't even need it, since whatever % does can be achieved with subtraction as well, but every programming language implements it in a very optimized way.
And modulo division is not limited to finding odd and even numbers or clock arithmetic. There are hundreds of algorithms that need the modulo operation, for example cryptography algorithms. So it's a general mathematical operation, like +, -, *, /, etc.
Beyond the mathematical perspective, different languages use the symbol for other things, such as defining built-in data structures: in Perl, %hash is used to show that the programmer declared a hash. So it all varies based on the programming language's design.
So there are still many other perspectives one could add to the list of uses of %.

Is there a practical limit to the size of bit masks?

There's a common way to store multiple values in one variable, by using a bitmask. For example, if a user has read, write and execute privileges on an item, that can be converted to a single number by saying read = 4 (2^2), write = 2 (2^1), execute = 1 (2^0) and then add them together to get 7.
I use this technique in several web applications, where I'd usually store the variable into a field and give it a type of MEDIUMINT or whatever, depending on the number of different values.
What I'm interested in is whether or not there is a practical limit to the number of values you can store like this. For example, if the number went over 64, you couldn't use a (64-bit) integer any more. If this were the case, what would you use? How would it affect your program logic (i.e., could you still use bitwise comparisons)?
I know that once you start getting really large sets of values, a different method would be the optimal solution, but I'm interested in the boundaries of this method.
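The technique being described, as a minimal C sketch (constant names are illustrative):
enum {
    PERM_EXECUTE = 1 << 0,   /* 1 */
    PERM_WRITE   = 1 << 1,   /* 2 */
    PERM_READ    = 1 << 2    /* 4 */
};

int main(void)
{
    int perms = PERM_READ | PERM_WRITE | PERM_EXECUTE;   /* 4 + 2 + 1 == 7 */
    if (perms & PERM_WRITE) { /* the write bit is set */ }
    perms &= ~PERM_EXECUTE;  /* revoke execute */
    return 0;
}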
Off the top of my head, I'd write set_bit and get_bit functions that take an array of bytes and a bit offset into the array, and use some bit-twiddling to set/get the appropriate bit. Something like this (in C, but hopefully you get the idea):
// sets the n-th bit in |bytes|. num_bytes is the number of bytes in the array
// result is 0 on success, non-zero on failure (offset out-of-bounds)
int set_bit(char* bytes, unsigned long num_bytes, unsigned long offset)
{
    // make sure offset is valid (offset is unsigned, so no < 0 check is needed)
    if (offset > (num_bytes << 3) - 1) { return -1; }
    // set the right bit
    bytes[offset >> 3] |= (1 << (offset & 0x7));
    return 0; // success
}

// gets the n-th bit in |bytes|. num_bytes is the number of bytes in the array
// returns (-1) on error, 0 if bit is "off", positive number if "on"
int get_bit(char* bytes, unsigned long num_bytes, unsigned long offset)
{
    // make sure offset is valid
    if (offset > (num_bytes << 3) - 1) { return -1; }
    // get the right bit
    return (bytes[offset >> 3] & (1 << (offset & 0x7)));
}
I've used bit masks in filesystem code where the bit mask is many times bigger than a machine word. Think of it like an "array of booleans"
(journalling masks in flash memory, if you want to know).
Many compilers know how to do this for you. Add a bit of OO code to have types that operate sensibly, and then your code starts looking like its intent, not some bit-banging.
My 2 cents.
With a 64-bit integer you can store values up to 2^64-1, but that is still only 64 flags (64 is only 2^6). So yes, there is a limit, but if you need more than 64 bits' worth of flags, I'd be very interested to know what they were all doing :)
How many states do you need to potentially think about? If you have 64 potential states, the number of combinations they can exist in is the full range of a 64-bit integer.
If you need to worry about 128 flags, then a pair of bit vectors would suffice (2^64 * 2).
Addition: in Programming Pearls there is an extended discussion of using a bit array of length 10^7, implemented in integers (for tracking which toll-free "800" numbers are in use); it's very fast, and very appropriate for the task described in that chapter.
Some languages (I believe Perl does, not sure) permit bitwise arithmetic on strings, giving you a much greater effective range ((string length * 8-bit chars) combinations).
However, I wouldn't use a single value for superimposition of more than one type of data. The basic r/w/x triplet of 3-bit ints would probably be the upper "practical" limit, not for space-efficiency reasons, but for practical development reasons.
(PHP uses this system to control its error messages, and I have already found that it's a bit over-the-top when you have to define values where PHP's constants are not resident and you have to generate the integer by hand; and to be honest, if chmod didn't support the 'ugo+rwx' style syntax, I'd never want to use it, because I can never remember the magic numbers.)
The instant you have to crack open a constants table to debug code, you know you've gone too far.
Old thread, but it's worth mentioning that there are cases requiring bloated bit masks, e.g., molecular fingerprints, which are often generated as 1024-bit arrays that we have packed into 32 bigint fields (SQL Server not supporting UInt32). Bitwise operations work fine, until your table starts to grow and you realize the sluggishness of separate function calls. The binary data type would work, were it not for T-SQL's ban on bitwise operators having two binary operands.
For example, .NET uses an array of integers as the internal storage for its BitArray class.
Practically, there's no other way around it.
That being said, in SQL you will need more than one column (or use BLOBs) to store all the states.
You tagged this question SQL, so I think you need to consult the documentation for your database to find the size of an integer. Then subtract one bit for the sign, just to be safe.
Edit: Your comment says you're using MySQL. The documentation for MySQL 5.0 Numeric Types states that the maximum size of a NUMERIC is 64 or 65 digits. That's 212 bits for 64 digits.
Remember that your language of choice has to be able to work with those digits, so you may be limited to a 64-bit integer anyway.