I'm taking a course in Visual Basic 2010 and I'm trying to get a grasp on a new term: a flag. I understand it has something to do with a boolean condition, and that a condition somehow "triggers" a flag, but I don't quite understand what the flag itself is. How do you identify one? Can somebody give me an example?
In general, "Flag" is just another term for a true/false condition.
It may have more specific meanings in more specific contexts. For instance, a CPU may keep "arithmetic flags", each one indicating a true/false condition resulting from the previous arithmetic operation. If the previous operation was an ADD, the flags would indicate whether the result was zero, less than zero, or greater than zero.
I believe the term comes from flags used to signal a go/no-go condition, like a railroad flagman indicating whether or not it is safe for the train to proceed.
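To make this concrete, here is a minimal sketch of a flag in action (Python is used for brevity here; the pattern is the same in Visual Basic or any other language): the flag starts out false, the condition "trips" it inside a loop, and you test it afterwards.
numbers = [3, 7, 8, 11]

found_even = False         # the flag starts "down"
for n in numbers:
    if n % 2 == 0:         # the condition that trips the flag
        found_even = True  # raise the flag
        break              # no need to keep looking

if found_even:
    print("At least one even number was found.")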
You hear this quite a bit with BOOL being called a "flag", since there are only two outcomes: TRUE or FALSE. Using a BOOL in your decision-making is an easy way to "flag" a certain outcome when a condition is met.
An example could be:
if ($x == TRUE) {
    // Flag tripped, DO THIS
} else {
    // Flag not tripped, DO THIS
}
Flags also combine well with bitwise operations, which let you pack up to 32 booleans into one integer. Here's a sample:
Dim flags As Integer
Const ADMINISTRATOR As Integer = 1
Const USER As Integer = 2
Const BLUE As Integer = 4
Const RED As Integer = 8

flags = ADMINISTRATOR Or BLUE

If (flags And ADMINISTRATOR) <> 0 Then
    ' Do something since the person is an admin
End If
Or sets flags, and And checks whether a flag is set.
Now we can check up to 32 booleans with this one variable, which is great for storing in a database. You can use bigger data types, like a Long, to store more.
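For comparison, here is the same packing idea sketched in Python using the standard library's enum.IntFlag (the class and member names are illustrative, mirroring the VB constants above):
from enum import IntFlag

class Account(IntFlag):
    ADMINISTRATOR = 1
    USER = 2
    BLUE = 4
    RED = 8

flags = Account.ADMINISTRATOR | Account.BLUE  # Or sets flags
if flags & Account.ADMINISTRATOR:             # And tests a flag
    print("admin")
flags &= ~Account.BLUE                        # And Not clears a flag
flags ^= Account.RED                          # Xor toggles a flag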
In Objective-C (a variant of C), NS_OPTIONS exists to help validate bit masks. But it seems to have an inherent flaw. If I need to define a value representing a bitwise OR of all of the bits, e.g. FubarAllOptions, some would say that the convention is to simply use INT_MAX. However, this has a problem.
Imagine that I use NS_OPTIONS for the lower five bits of a uint8_t. e.g.
typedef NS_OPTIONS(uint8_t, FubarOptions) {
    FubarA = 1,
    FubarB = 1 << 1,
    FubarC = 1 << 2,
    FubarD = 1 << 3,
    FubarE = 1 << 4,
    FubarAllOptions = 0xff // MAX
};
If I bitwise-clear each of the assigned bits of a FubarOptions variable, the three unassigned upper bits will remain set. So if I test whether the variable is nonzero to check that all the flags are cleared, it will appear that some are still set: a bug. FubarAllOptions includes bits that are never assigned.
Q: How do I define FubarAllOptions so that it only includes assigned bits, without laboriously typing out all of the potential options and OR'ing them together (i.e. FubarA|FubarB|...)? That would be vulnerable to typos.
Sure, I can take the largest option, shift it left by 1 and subtract 1, but this too would be vulnerable to typos.
You will have to set all options manually:
FubarAllOptions = (FubarA | FubarB | FubarC | FubarD | FubarE)
Of course, you can also fix the problem by always checking each option individually instead of masking them all and comparing with zero.
You are worrying too much about typos when you should rather worry about what will happen when you start using another bit.
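For what it's worth, the "take the largest, shift and subtract 1" idea can be made less typo-prone with a sentinel that always names the next free bit. A minimal Python sketch of the trick (the names mirror the question's FubarOptions; a C enum version works the same way):
FUBAR_A = 1 << 0
FUBAR_B = 1 << 1
FUBAR_C = 1 << 2
FUBAR_D = 1 << 3
FUBAR_E = 1 << 4
FUBAR_NEXT = 1 << 5                 # sentinel: always the first unused bit
FUBAR_ALL_OPTIONS = FUBAR_NEXT - 1  # 0b00011111: exactly the assigned bits

# Sanity check: the derived mask equals the hand-written OR of all options.
assert FUBAR_ALL_OPTIONS == (FUBAR_A | FUBAR_B | FUBAR_C | FUBAR_D | FUBAR_E)
When you append a new option at 1 << 5, only the sentinel moves up to 1 << 6 and the mask stays correct.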
How do I write a conditional statement in Z3? E.g.:
if (a % 2 == 0) {
    value = 1;
}
I am trying to achieve this in the Z3 solver by Microsoft Research, but so far no luck.
Look up SSA form: https://en.wikipedia.org/wiki/Static_single_assignment_form
Essentially, you'll have to change your program to look something like:
value_0 = 0
value_1 = (a%2 == 0) ? 1 : value_0
Once it is in this so-called static single assignment form, you can translate each line more or less directly, with the latest assignment to value_N being the final value of value.
Loops will be problematic: the usual strategy is to unroll them up to a certain count (bounded model checking) and hope that this suffices. If you detect that the last unrolling isn't sufficient, you can generate an uninterpreted value at that point, which might cause your proofs to fail with spurious counterexamples; but that's the best you can do without a scheme that properly handles induction and loop invariants.
Note that this field of study is called "symbolic execution" and has a long history, with active research still being conducted. You might want to read through this: https://en.wikipedia.org/wiki/Symbolic_execution
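As a concrete illustration, here is how the SSA form above might be encoded using Z3's Python bindings (z3py, installable as the z3-solver package); this is a sketch, with the variable names taken from the answer:
from z3 import Int, If, Solver, sat

a = Int('a')
value_0 = Int('value_0')
value_1 = Int('value_1')

s = Solver()
s.add(value_0 == 0)                           # value_0 = 0
s.add(value_1 == If(a % 2 == 0, 1, value_0))  # the conditional, as an ite
s.add(a == 4)                                 # try a sample even input

if s.check() == sat:
    print(s.model()[value_1])                 # prints 1, since 4 is even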
I'm trying to make a Sudoku game, and I gathered the following validations for each number inserted:
Number must be between 1 and 9;
Number must be unique in the line;
Number must be unique in the column;
Number must be unique in the sub-matrix.
Since I was repeating the "Number must be unique in..." rule too much, I came up with the following design:
There are 3 kinds of groups, ColumnGroup, LineGroup, and SubMatrixGroup (all of them implement the GroupInterface);
GroupInterface has a method public boolean validate(Integer number);
Each cell is related to 3 groups, and the number must be unique across all of them; if any group doesn't evaluate to true, the number isn't allowed;
Each cell is an observable, making the group an observer, that reacts to one Cell change attempt.
And that s*cks.
I can't find what's wrong with my design; I've just gotten stuck with it.
Any ideas of how I can make it work?
Where is it over-objectified? I can feel that it is; maybe there is another solution that would be simpler than this...
Instead of having 3 validator classes, an abstract GroupInterface, an observable, etc., you can do it with a single function.
Pseudocode ahead:
bool setCell(int x, int y, int value)
{
    m_cells[x][y] = value;
    if (!isRowValid(y) || !isColumnValid(x) || !isSubMatrixValid(x, y))
    {
        m_cells[x][y] = null; // or 0, or however you represent an empty cell
        return false;
    }
    return true;
}
What is the difference between a ColumnGroup, LineGroup and SubMatrixGroup? IMO, these three should simply be instances of a generic "Group" type, as the kind of group changes nothing; it doesn't even need to be recorded.
It sounds like you want to create a checker ("user attempted to write number X"), not a solver. For this, your observable pattern sounds OK (with the change mentioned above).
Here (link) is an example of a simple sudoku solver using the above-mentioned "group" approach.
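For reference, here is a hedged Python sketch of the single-function checker from the pseudocode above (the helper names are invented for illustration):
def set_cell(cells, x, y, value):
    """Try to place value at (x, y); revert and return False if any rule breaks."""
    cells[y][x] = value
    if not (row_valid(cells, y) and column_valid(cells, x) and box_valid(cells, x, y)):
        cells[y][x] = None  # revert; None represents an empty cell
        return False
    return True

def _all_unique(values):
    filled = [v for v in values if v is not None]
    return len(filled) == len(set(filled))

def row_valid(cells, y):
    return _all_unique(cells[y])

def column_valid(cells, x):
    return _all_unique([row[x] for row in cells])

def box_valid(cells, x, y):
    bx, by = 3 * (x // 3), 3 * (y // 3)  # top-left corner of the 3x3 sub-matrix
    return _all_unique([cells[by + j][bx + i] for j in range(3) for i in range(3)])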
I remember many years back, when I was in school, one of my computer science teachers taught us that it was better to check for 'trueness' or 'equality' of a condition and not the negative stuff like 'inequality'.
Let me elaborate - If a piece of conditional code can be written by checking whether an expression is true or false, we should check the 'trueness'.
Example: Finding out whether a number is odd - it can be done in two ways:
if ( num % 2 != 0 )
{
// Number is odd
}
or
if ( num % 2 == 1 )
{
// Number is odd
}
(Please refer to the marked answer for a better example.)
When I was beginning to code, I knew that num % 2 == 0 implies the number is even, so I just put a ! there to check whether it is odd. But he said, 'Don't check NOT conditions. Get into the habit of checking the trueness or equality of conditions whenever possible.' And he recommended that I use the second piece of code.
I am not for or against either, but I just wanted to know - what difference does it make? Please don't reply 'Technically the output will be the same' - we ALL know that. Is it a general programming practice, or is it his own personal practice that he is preaching to others?
NOTE: I used C#/C++ style syntax for no particular reason. My question is equally applicable to the IsNot and <> operators in VB, etc. So the readability of the '!' operator is just one of the issues, not THE issue.
The problem occurs when, later in the project, more conditions are added. One of the projects I'm currently working on has steadily collected conditions over time (and then some of those conditions were moved into Struts tags, then some to JSTL...). One negative isn't hard to read, but 5+ are a nightmare, especially when someone decides to reorganize and negate the whole thing. Maybe on a new project, you'll write:
if (authorityLvl!=Admin){
doA();
}else{
doB();
}
Check back in a month, and it's become this:
if (!(authorityLvl!=Admin && authorityLvl!=Manager)){
doB();
}else{
doA();
}
Still pretty simple, but it takes another second to parse.
Now give it another 5 to 10 years to rot.
(x % 2 != 0) certainly isn't a problem, but perhaps the best way to avoid the above scenario is to teach students not to use negative conditions as a general rule, in the hope that they'll use some judgement before they do - because just saying that it could become a maintenance problem probably won't be enough motivation.
As an addendum, a better way to write the code would be:
userHasAuthority = (authorityLvl == Admin);
if (userHasAuthority) {
    doB();
} else {
    doA();
}
Now future coders are more likely to just add "|| authorityLvl==Manager", userHasAuthority is easier to move into a method, and even if the conditional is reorganized, it will only have one negative. Moreover, no one will add a security hole to the application by making a mistake while applying De Morgan's Law.
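If you want to convince yourself (or a student) that the hand rewrite is where mistakes creep in, De Morgan's Law itself is easy to verify exhaustively; a quick Python sanity check:
from itertools import product

# Check both De Morgan identities over every boolean combination.
for a, b in product([False, True], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))
    assert (not (a or b)) == ((not a) and (not b))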
I will disagree with your old professor: checking for a NOT condition is fine as long as you are checking for a specific NOT condition. It actually meets his criteria: you are checking that it is TRUE that a value is NOT something.
I grok what he means, though: usually the true conditions are orders of magnitude fewer than the NOT conditions, and therefore easier to test for, since you are checking a smaller set of values.
I've had people tell me that it's to do with how visible the bang (!) character is when skim-reading.
If someone habitually skim-reads code - perhaps because they feel their regular reading speed is too slow - then the ! can easily be missed, giving them a critical misunderstanding of the code.
On the other hand, if someone actually reads all of the code all of the time, then there is no issue.
Two very good developers I've worked with (and respect highly) will each write == false instead of using ! for similar reasons.
The key factor in my mind is less to do with what works for you (or me!), and more with what works for the guy maintaining the code. If the code is never going to be seen or maintained by anyone else, follow your personal whim; if the code needs to be maintained by others, better to steer more towards the middle of the road. A minor (trivial!) compromise on your part now, might save someone else a week of debugging later on.
Update: On further consideration, I would suggest that factoring the condition out into a separate predicate function gives still greater maintainability:
if (isOdd(num))
{
// Number is odd
}
You still have to be careful about things like this:
if ( num % 2 == 1 )
{
// Number is odd
}
If num is negative and odd, then depending on the language or implementation, num % 2 could equal -1. On that note, there is nothing wrong with checking for falseness if it simplifies the syntax of the check. Also, != is clearer to me than !-ing the whole expression, as the ! can blend in with the parentheses.
To check only the trueness you would have to do:
if ( num % 2 == 1 || num % 2 == -1 )
{
// Number is odd
}
That is just an example, obviously. The point is that if a negation allows for fewer checks or makes the syntax of the checks clearer, then that is clearly the way to go (as in the above example). Locking yourself into checking for trueness does not suddenly make your conditional more readable.
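The language-dependence is easy to demonstrate. Python's % follows the sign of the divisor, while C's % (since C99) truncates toward zero, a behavior math.fmod mimics; a small sketch:
import math

print(-3 % 2)            # 1 in Python: % takes the divisor's sign
print(math.fmod(-3, 2))  # -1.0, matching C's (-3) % 2
print(-3 % 2 != 0)       # True: the != 0 test classifies -3 as odd either way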
I remember hearing the same thing in my classes as well. I think it's more important to always use the more intuitive comparison, rather than to always check for the positive condition.
Really a very inconsequential issue. However, one drawback of checking for trueness in this sense is that it only works for binary comparisons. If you were, for example, checking some property of a ternary numerical system, you would be limited.
Replying to Bevan (it didn't fit in a comment):
You're right. !foo isn't always the same as foo == false. Let's see this example, in JavaScript:
var foo = true,
bar = false,
baz = null;
foo == false; // false
!foo; // false
bar == false; // true
!bar; // true
baz == false; // false (!)
!baz; // true
I also disagree with your teacher in this specific case. Maybe he was so attached to the generally good lesson of avoiding negatives where a positive will do just fine that he didn't see this tree for the forest.
Here's the problem. Today, you listen to him, and turn your code into:
// Print black stripe on odd numbers
void zebra(int num) {
    if (num % 2 == 1) {
        // Number is odd
        printf("*****\n");
    }
}
Next month, you look at it again and decide you don't like magic constants (maybe he teaches you this dislike too). So you change your code:
#define ZEBRA_PITCH 2
[snip pages and pages, these might even be in separate files - .h and .c]
// Print black stripe on non-multiples of ZEBRA_PITCH
void zebra(int num) {
    if (num % ZEBRA_PITCH == 1) {
        // Number is not a multiple of ZEBRA_PITCH
        printf("*****\n");
    }
}
and the world seems fine. Your output hasn't changed, and your regression test suite passes.
But you're not done. You want to support mutant zebras, whose black stripes are thicker than their white stripes. You remember from months back that you originally coded it so that a black stripe is printed wherever a white stripe shouldn't be - on the non-even numbers. So all you have to do is divide by, say, 3 instead of 2, and you should be done. Right? Well:
#define DEFAULT_ZEBRA_PITCH 2
[snip pages and pages, these might even be in separate files - .h and .c]
// Print black stripe on non-multiples of pitch
void zebra(int num, int pitch) {
    if (num % pitch == 1) {
        // Number is odd
        printf("*****\n");
    }
}
Hey, what's this? You now have mostly-white zebras where you expected them to be mostly black!
The problem here is how you think about numbers. Is a number "odd" because it isn't even, or because the remainder when dividing it by 2 is 1? Sometimes your problem domain will suggest a preference for one, and in those cases I'd suggest you write your code to express that idiom, rather than fixating on simplistic rules such as "don't test for negations".
I want to check whether a value is equal to 1. Is there any difference between the following lines of code,
evaluatedValue == 1
1 == evaluatedValue
in terms of how the compiler executes them?
In most languages it's the same thing.
People often write 1 == evaluatedValue because 1 is not an lvalue, meaning you can't accidentally do an assignment.
Example:
if (x = 6) // bug, but no compile error
{
}
Instead you can force a compile error rather than a bug:
if (6 = x) // compile error
{
}
Now if x is not of int type and you're using something like C++, the user could have created an operator==(int) overload, which takes this question to a new meaning: 6 == x wouldn't compile in that case, but x == 6 would.
It depends on the programming language.
In Ruby, Smalltalk, Self, Newspeak, Ioke and many other single-dispatch object-oriented programming languages, a == b is actually a message send. In Ruby, for example, it is equivalent to a.==(b). This means that when you write a == b, the == method in the class of a is executed, but when you write b == a, the method in the class of b is executed. So it's obviously not the same thing:
class A; def ==(other) false end; end
class B; def ==(other) true end; end
a, b = A.new, B.new
p a == b # => false
p b == a # => true
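Python behaves analogously: == dispatches to __eq__, with a reflected call to the other operand only if the first method returns NotImplemented. A sketch mirroring the Ruby example:
class A:
    def __eq__(self, other):
        return False  # a definitive answer, so no reflected call happens

class B:
    def __eq__(self, other):
        return True

a, b = A(), B()
print(a == b)  # False: A.__eq__ runs
print(b == a)  # True:  B.__eq__ runs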
No, but the latter syntax will give you a compiler error if you accidentally type
if (1 = evaluatedValue)
Note that today any decent compiler will warn you if you write
if (evaluatedValue = 1)
so it is mostly relevant for historical reasons.
Depends on the language.
In Prolog or Erlang, == is written = and is a unification rather than an assignment (you're asserting that the values are equal, rather than testing that they are equal or forcing them to be equal), so you can use it as an assertion if the left-hand side is a constant, as explained here.
So X = 3 would unify the variable X with the value 3, whereas 3 = X would attempt to unify the constant 3 with the current value of X, and be the equivalent of assert(x == 3) in imperative languages.
It's the same thing
In general, it hardly matters whether you use
evaluatedValue == 1 or 1 == evaluatedValue.
Use whichever appears more readable to you. I prefer if (evaluatedValue == 1) because it reads more naturally to me.
And again, I'd like to cite the well-known scenario of string comparison in Java. Consider a String str which you have to compare with, say, another string "SomeString".
str = getValueFromSomeRoutine();
Now at runtime you are not sure whether str is null. So to avoid an exception you'll write:
if (str != null)
{
    if (str.equals("SomeString"))
    {
        // do stuff
    }
}
To avoid the outer null check, you could just write:
if ("SomeString".equals(str))
{
//do stuff
}
Though this is arguably less readable (which again depends on the context), it saves you the extra if.
For this and similar questions, may I suggest you find out for yourself by writing a little code, running it through your compiler, and viewing the emitted assembler output.
For example, for the GNU compilers, you do this with the -S flag. For the VS compilers, the most convenient route is to run your test program in the debugger and then use the disassembly window.
Sometimes in C++ they do different things, if the evaluated value is a user type and operator== is defined. Badly.
But that's very rarely the reason anyone would choose one way around over the other: if operator== is not commutative/symmetric (including when the type of the value has a conversion from int), then you have A Problem that probably wants fixing rather than working around. Brian R. Bondy's answer, and others, are probably on the mark for why anyone worries about it in practice.
But the fact remains that even if operator== is commutative, the compiler might not do exactly the same thing in each case. It will (by definition) return the same result, but it might do things in a slightly different order, or whatever.
if (value == 1)
if (1 == value)
are exactly the same, but if you accidentally write
if (value = 1)
if (1 = value)
the first one will compile (and silently introduce a bug), while the second one will produce an error.
They are the same. Some people prefer putting the 1 first, to avoid accidentally falling into the trap of typing
evaluatedValue = 1
which could be painful if the value on the left-hand side is assignable. This is a common "defensive" pattern in C, for instance.
In C-like languages it's common to put the constant or magic number first, so that if you forget one of the "=" signs in the equality check (==), the compiler won't interpret it as an assignment.
In Java, the condition of an if must be a boolean expression, so unless the variable is itself a boolean, an accidental assignment like if (x = 1) won't compile; the order of the equality operands is therefore irrelevant, since the compiler flags the mistake anyway.