Is it bad practice to put a long condition in an if statement, like
if(((FIO2PIN & 0x00001000)>>12))
which will give a result of 0/1 at the end, on ARM7?
Is it the case that I can only check for 0 or 1 in an if condition?
For example
if(x!=0)
or
if(x==1)??
Indirectly, (FIO2PIN & 0x00001000)>>12 will also give some value at the end, which might be 0/1 depending on the FIO2PIN status, right?
The expression ((FIO2PIN & 0x00001000)>>12) is an integer expression and is implicitly converted to a boolean by the if(...), where zero is false and non-zero is true.
There is nothing wrong with that in the sense that it is entirely unambiguous as far as the compiler and language definition are concerned, but I prefer to use only explicitly boolean expressions in conditional statements - in order to make the intent of the programmer clear. That is easily done by explicitly comparing the result with zero; in this case:
if( ((FIO2PIN & 0x00001000) >> 12) != 0 )
However, the shift is entirely unnecessary in either case, because any non-zero value will be accepted as true (which is why you should always compare with zero - or nothing at all). So:
if( FIO2PIN & 0x00001000 )
or
if( (FIO2PIN & 0x00001000) != 0 )
are equally valid - the latter being my preference.
As mentioned, any non-zero value will be accepted as true while only zero is false, so where x is an integer expression, the test x == 1 is a dangerous one, and you should use x != 0 instead.
The if statement will be true if the expression is non-zero. So shifting right by twelve bits is not necessary in your example. Because (FIO2PIN & 0x00001000) is non-zero whenever ((FIO2PIN & 0x00001000) >> 12) is non-zero. In other words, it doesn't matter which bit is non-zero. The if statement will test true if any bit is non-zero.
In my opinion, using a complex expression within an if statement could be bad practice if the expression is so complex that it is difficult for a developer to understand or maintain. But otherwise, as long as the expression is correct, then the compiler should sort it out and you shouldn't need to worry whether it is too complex for the compiler.
So my problem is the following:
When char = 0
boolean = char ~= 0 & char ~= 256
evaluates to true and if I invert the statements like so:
boolean = char ~= 256 & char ~= 0
I get false.
What's happening? I am expecting false in both cases.
As Uko said, you must understand the precedence of messages: all binary messages (+, =, <, &, ~=, etc.) are evaluated from left to right.
Thus you evaluate:
(((boolean = char) ~= 256) & char) ~= 0
I think you were after:
boolean := (char ~= 256) & (char ~= 0).
So what happens with your expression ?
boolean is presumably uninitialized (thus nil)
char is 0.
boolean = char is false.
false ~= 256 is true.
true & char is char (see below why)
char ~= 0 is false (since char = 0)
If you invert 0 and 256, only the last step changes, and the answer is true.
The interesting part is the implementation of message & in class True: it probably does not assert that the parameter is a Boolean and looks like:
& aBoolean
^aBoolean
If you pass something that is not a Boolean (like 0 in your case), it will return that thing, however surprising the result may be...
If you use an IDE (Squeak/Pharo/Visualworks/Dolphin... but not gnu Smalltalk) I suggest you use the menu Debug It and evaluate the expression step by step in the Debugger.
Last, note that char is probably not a good name in Smalltalk context: it might be misleading. Indeed, if it holds 0, it's rather an Integer, not a Character.
There is something we are repeating in some answers that I think deserves further clarification. We say evaluation proceeds from left to right. True, but the actual semantics of messages is:
First evaluate the receiver, then the arguments in order; finally send the message.
Since the Smalltalk VM is stack based, this rule means that:
The receiver is evaluated first and the result is pushed on the stack.
The arguments are evaluated in order and their results pushed on the stack.
The message is sent.
Item 3 means that the method that the send invokes will find the receiver and the arguments in the stack, in the order defined above.
For instance, in
a := 1.
b := 2.
b := a - (a := b)
variable b will evaluate to (1 - (a := 2)) = -1 and a to 2. Why? Because by the time the assignment a := b is evaluated the receiver a of the subtraction has already been pushed with the value it had at that time, i.e., 1.
Note also that this semantics must be preserved even if the VM happens to use registers instead of the stack. The reason is that evaluations must preserve the semantics, and therefore the order. This fact has an impact on the optimizations that the native code may implement.
It is interesting to observe that this semantics along with the precedence unary > binary > keyword support polymorphism in a simple way. A language that gives more precedence to, say, * than + assumes that * is a multiplication and + an addition. In Smalltalk, however, it is up to the programmer to decide the meaning of these (and any other) selectors without the syntax getting in the way of the actual semantics.
My point here is that "left to right" comes from the fact that we write Smalltalk in English, which is read from "left to right". If we implemented Smalltalk using a language that people read from "right to left" this rule would be contradicted. The actual definition, the one that will remain unchanged, is the one expressed in items 1, 2 and 3 above, which comes from the stack metaphor.
I read a coding style suggestion about comparing bools that said to write
if (myBoolValue == true) { ... }
if (myBoolValue == false) { ... }
instead of writing
if (myBoolValue) { ... }
if (!myBoolValue) { ... }
since it increases readability of the code even though they are equivalent statements. I am well aware that it is not a usual coding practice but I will agree that it may increase readability in some cases.
My question is if there is any difference between the two in regards of optimization of code execution or if (a well implemented) compiler translate them to the same thing?
The productions are not the same in all languages.
For instance, they may produce different results for some "non-boolean" values of "myBoolValue" in both JavaScript and C.
// JavaScript
[] == true // false
[] ? true : false // true
// C
#define true 1
#define false 0
int x = -1;
x == true // 0 (false)
x ? true : false // true (1)
To see what a specific compiler does for a specific programming language (there is both what it is allowed to do and then what it will do), check the generated assembly/machine/byte code.
(Anyway, I prefer and use the latter form exclusively.)
Apart from cases like JavaScript and I guess Java with autoboxing and Boolean, where the semantics can differ between the two forms, it really depends on compiler optimization.
I never understand this recommendation. If myBool is a boolean, so is (myBool == true), so why doesn't it require ((myBool == true) == true), and so on forever? Where does this stop? Surely the answer is not to start in the first place?
Surely the more there is to read, the less readable it is?
In C (prior to C99) there's no boolean type; conditions are just numeric expressions, so myBoolValue may be equal to 2 or any value other than 0 and 1, and comparing it numerically with true won't give you the correct result.
#define true 1
#define false 0
if (5 == true) //...
To improve readability, give the variable a meaningful name (such as isNonZero, isInGoodState, isClosed, loopEnded...) rather than comparing it with true.
Another way is comparing them with 0 or false when you need the true case (myBoolValue != false). Also, using bool type (if it's available) may solve some problems.
The behavior depends on the language and on the type of the value being tested. The two forms are not equivalent in all cases.
In C, conditions are commonly represented using integers, with 0 being treated as false and any non-zero value being treated as true. Comparing such a value for equality to true is invalid; for example, if the value happens to be 2 or -1, it's logically true but not equal to true. As of C99, C does have a built-in boolean type called _Bool, aliased as bool if you have #include <stdbool.h>. But it's still common to use other types, particularly int to represent conditions, and even to redefine bool, false, and true in various ways.
C++ has had a built-in bool type, with false and true literals, since very early in its development, but it still retains much of its C ancestry. Other C-inspired languages are likely to have similar issues.
In languages that have a built-in boolean type and don't allow values of other types to be used directly as conditions, if (x) and if (x == true) are more likely to be equivalent. And if they're semantically equivalent, I would expect any decent optimizing compiler to generate the same code for either form, or very nearly so.
But I strongly disagree with the style advice. An explicit comparison to false or true, even in cases where it's equivalent to testing the value directly, does not IMHO aid readability.
If you have a variable whose value denotes a condition, it's more important to give it a name that indicates that. Programming language code shouldn't necessarily follow English grammar, but something like
if (file_is_open)
is not made more readable by changing it to
if (file_is_open == true)
As a C programmer, I like to write my conditions more explicitly than a lot of other programmers do; I'll write if (ptr != NULL) rather than if (ptr), and if (c != '\0') rather than if (c). But if a variable is already a condition, I see no point in adding a superfluous comparison.
And value == true is itself a condition; if writing if (value == true) is supposed to be more readable than if (value), why wouldn't if ((value == true) == true) be even better?
I keep running across (someone else's) code like this
Case
When (variable & (2|4|8|16)>0) Then ...
When (variable & (1|32)>0) Then ...
...
End
I figure what's happening here is that it's testing whether any of the bits in the 2^1, 2^2, 2^3, or 2^4 places of the variable variable is set. Is this right? Either way, I'm still unclear on why this expression is written the way it is. I'm unable to find any documentation on this logic, mostly because I don't know what to call it.
You're right, the "pipe" symbol | corresponds to the bitwise OR operator, whereas the ampersand & corresponds to the bitwise AND operator (at least in some databases).
They're checking bits using those bitwise operators. The most likely reason they are doing this the way they did, is for "improved readability". E.g. it is easier to see which bits are being checked when writing
2|4|8|16 -- rather than 30
1|32 -- rather than 33
I'm taking a course in Visual Basic 2010 and I'm trying to get a grasp on this new term called a flag. I kind of understand that it has something to do with a boolean condition, and I see references to it using the term flag: a condition triggers a flag. But what is the flag? How do you identify it? Can somebody give me an example?
In general, "Flag" is just another term for a true/false condition.
It may have more specific meanings in more specific contexts. For instance, a CPU may keep "arithmetic flags", each one indicating a true/false condition resulting from the previous arithmetic operation. For instance, if the previous operation was an "ADD", then the flags would indicate whether the result of the add was zero, less than zero, or greater than zero.
I believe the term comes from flags used to signal a go/no go condition, like, a railroad flagman indicating whether or not it is safe for the train to proceed.
You hear this quite a bit with BOOL being a 'flag', since there are only 2 outcomes: either TRUE or FALSE. Using BOOL in your decision-making process is an easy way to 'flag' a certain outcome when the condition is met.
An example could be:
if ($x == TRUE) {
    // DO THIS
}
else {
    //Flag not tripped, DO THIS
}
You can use this with bitwise operations. It can be used to pack 32 booleans into one integer. Here's a sample:
Dim flags As Integer
Const ADMINISTRATOR = 1
Const USER = 2
Const BLUE = 4
Const RED = 8
flags = ADMINISTRATOR or BLUE
If flags and ADMINISTRATOR then
' Do something since the person is an admin
End If
The Ors set the flags and the Ands check whether a flag is set.
Now we can check up to 32 booleans with this one variable. Great for storing in a database. You can use bigger data types, like a Long, to store more.
I want to check whether a value is equal to 1. Is there any difference in the following lines of code
Evaluated value == 1
1 == evaluated value
in terms of the compiler execution
In most languages it's the same thing.
People often do 1 == evaluated value because 1 is not an lvalue. Meaning that you can't accidentally do an assignment.
Example:
if(x = 6)//bug, but no compiling error
{
}
Instead you could force a compiling error instead of a bug:
if(6 = x)//compiling error
{
}
Now if x is not of int type, and you're using something like C++, then the user could have created an operator==(int) overload, which gives this question a new meaning. The 6 == x wouldn't compile in that case, but the x == 6 would.
It depends on the programming language.
In Ruby, Smalltalk, Self, Newspeak, Ioke and many other single-dispatch object-oriented programming languages, a == b is actually a message send. In Ruby, for example, it is equivalent to a.==(b). What this means, is that when you write a == b, then the method == in the class of a is executed, but when you write b == a, then the method in the class of b is executed. So, it's obviously not the same thing:
class A; def ==(other) false end; end
class B; def ==(other) true end; end
a, b = A.new, B.new
p a == b # => false
p b == a # => true
No, but the latter syntax will give you a compiler error if you accidentally type
if (1 = evaluatedValue)
Note that today any decent compiler will warn you if you write
if (evaluatedValue = 1)
so it is mostly relevant for historical reasons.
Depends on the language.
In Prolog or Erlang, = is a unification rather than an assignment (you're asserting that the values are equal, rather than testing that they are equal or forcing them to be equal), so you can use it as an assertion if one side is a constant.
So X = 3 would unify the variable X with the value 3, whereas 3 = X would attempt to unify the constant 3 with the current value of X, the equivalent of assert(x == 3) in imperative languages.
It's the same thing
In general, it hardly matters whether you use,
Evaluated value == 1 OR 1 == evaluated value.
Use whichever appears more readable to you. I prefer if(Evaluated value == 1) because it looks more readable to me.
And again, I'd like to cite a well-known scenario of string comparison in Java.
Consider a String str which you have to compare with say another string "SomeString".
str = getValueFromSomeRoutine();
Now at runtime, you are not sure whether str will be null. So to avoid an exception you'll write
if (str != null)
{
    if (str.equals("SomeString"))
    {
        //do stuff
    }
}
to avoid the outer null check you could just write
if ("SomeString".equals(str))
{
//do stuff
}
Though this is less readable (which again depends on the context), it saves you an extra if.
For this and similar questions, can I suggest you find out for yourself by writing a little code, running it through your compiler, and viewing the emitted assembler output?
For example, for the GNU compilers, you do this with the -S flag. For the VS compilers, the most convenient route is to run your test program in the debugger and then use the disassembly view.
Sometimes in C++ they do different things, if the evaluated value is a user type and operator== is defined. Badly.
But that's very rarely the reason anyone would choose one way around over the other: if operator== is not commutative/symmetric, including if the type of the value has a conversion from int, then you have A Problem that probably wants fixing rather than working around. Brian R. Bondy's answer, and others, are probably on the mark for why anyone worries about it in practice.
But the fact remains that even if operator== is commutative, the compiler might not do exactly the same thing in each case. It will (by definition) return the same result, but it might do things in a slightly different order, or whatever.
if value == 1
if 1 == value
are exactly the same, but if you accidentally do
if value = 1
if 1 = value
The first one will compile (and silently assign 1 to value), while the second one will produce a compile error.
They are the same. Some people prefer putting the 1 first, to avoid accidentally falling into the trap of typing
evaluated value = 1
which could be painful if the value on the left hand side is assignable. This is a common "defensive" pattern in C, for instance.
In C languages it's common to put the constant or magic number first so that if you forget one of the "=" of the equality check (==) then the compiler won't interpret this as an assignment.
In Java, the condition of an if must be a boolean expression, so an accidental assignment like value = 1 (where value is an int) simply won't compile; the order of the equality operands is therefore irrelevant, and the compiler flags the mistake either way.