Is it good practice to create once-used variables?

A colleague of mine refactored this code:
private void btnGeneral_Click(object sender, RoutedEventArgs e)
{
    Button button = (Button)e.OriginalSource;
    Type type = this.GetType();
    Assembly assembly = type.Assembly;
    string userControlFullName = String.Format("{0}.{1}", type.Namespace, button.Name);
    UserControl userControl = (UserControl)assembly.CreateInstance(userControlFullName);
}
to this code:
private void btnGeneral_Click(object sender, RoutedEventArgs e)
{
    Button button = (Button)e.OriginalSource;
    Type type = this.GetType();
    Assembly assembly = type.Assembly;
    UserControl userControl = (UserControl)assembly.CreateInstance(String.Format("{0}.{1}", type.Namespace, button.Name));
}
saying that you don't need to create a variable if it is only going to be used once.
My response was that making once-used variables is good practice since it:
serves as documentation and reduces the need for comments (it is clear what "userControlFullName" is)
makes code easier to read, i.e. more of your code "reads like English"
avoids super-long statements by replacing parts of them with clear variable names
makes debugging easier, since you can mouse over the variable name; and when programming without a debugger (e.g. in PHP), you can simply echo the variable to see its value
The arguments against this approach, such as "more lines of code" and "unnecessary variables", amount to making life easier for the compiler, with no significant speed or resource savings.
Can anyone think of any situations in which one should not create once-used variable names?

I agree with you in this case. Readability is key. I'm sure the compiler produces the same executable in both cases, given how intelligent compilers are today.
But I wouldn't claim "always use once-used variables" either. Example:
String name = "John";
person.setName(name);
is unnecessary, because
person.setName("John");
reads equally well - if not even better. But, of course, not all cases are as clear cut. "Readability" is a subjective term, after all.

All your reasons seem valid to me.
There are occasions where you effectively have to avoid using intermediate variables, where you need a single expression (e.g. for member variable initialization in Java/C#) but introducing an extra variable for clarity is absolutely fine where it's applicable. Obviously don't do it for every argument to every method, but in moderation it can help a lot.
The debugging argument is particularly strong - it's also really nice to be able to step over the lines which "prepare" the arguments to a method, and step straight into the method itself, having seen the arguments easily in the debugger.

Your colleague doesn't seem to be consistent.
The consistent solution looks like this:
private void btnGeneral_Click(object sender, RoutedEventArgs e)
{
    UserControl userControl = (UserControl)this.GetType().Assembly.CreateInstance(String.Format("{0}.{1}", this.GetType().Namespace, ((Button)e.OriginalSource).Name));
}

I'm completely with you on this one.
I especially use this if a method takes a lot of booleans, e.g.
public void OpenDocument(string filename, bool asReadonly, bool copyLocal, bool somethingElse)
To me this is a lot more readable:
bool asReadonly = true;
bool copyLocal = false;
bool somethingElse = true;
OpenDocument("somefile.txt", asReadonly, copyLocal, somethingElse);
...than:
OpenDocument("somefile.txt", true, false, true);

Since the programming languages I use generally do not tell me what was null in an exception stack trace, I generally try to use variables so that no more than one item per line can be null. I actually find this to be the most significant limit on how many statements I want to put on a single line.
If you get a NullPointerException in this statement from your production logs, you're really in trouble:
getCustomer().getOrders().iterator().next().getItems().iterator().next().getProduct().getName()
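One way out, sketched here in C# with hypothetical types mirroring the chain above, is to give each dereference its own line, so a stack trace (or a debugger mouse-over) pins down exactly which step produced the null:
Customer customer = GetCustomer();           // hypothetical accessor
Order firstOrder = customer.Orders.First();
Item firstItem = firstOrder.Items.First();
Product product = firstItem.Product;
string name = product.Name;                  // a null anywhere above now fails on a specific line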
Although I agree with your thoughts, adding an extra variable introduces an extra concept in the method, and that concept may not always be relevant to its overall goal. So excessive addition of variables can also increase method complexity and reduce legibility. Note the word excessive here.

I guess there are some cases where it could have an effect on performance. In particular, in this example:
for (int i1 = 0; i1 < BIG_NR; i1++)
{
    for (int i2 = 0; i2 < BIG_NR; i2++)
    {
        for (int i3 = 0; i3 < BIG_NR; i3++)
        {
            for (int i4 = 0; i4 < BIG_NR; i4++)
            {
                int amount = a + b;
                someVar[i1][i2][i3][i4] = amount;
            }
        }
    }
}
... the extra assignment might have too big an impact on performance.
But in general, your arguments are 100% correct.

Both versions compile to exactly the same code. Of course, yours is more readable, maintainable, and debuggable; and if saving memory was your colleague's point, his version does NOT consume any less.

I think it's a judgement call based on how tidy you want your code to be. I also think that both you and your colleague are correct.
In this instance I would side with your colleague based on the code you presented (for performance reasons), although as I said before it depends on the context in which it will be used, and I think your position is perfectly acceptable.
I would point out that creating variables for once-used parameters can be pointless, unless they are const variables or things that you need to use in many places.
I would also argue that a once-used variable here and there is probably fine, but lots and lots of them could create more confusion when you are debugging.

Creating a new variable means one more concept for the reader to keep track of. (Consider the extreme case: int b=a;c=b;) If a method is so complex - in such need of breaking up - that the extra concept is a price worth paying, then you ought to go the whole hog and split it into two methods. This way you get both the meaningful name and the smaller size. (Smaller for your original method: if it's like you say, then people won't typically need to read the auxiliary method.)
That's a generalisation, particularly in a language with a lot of boilerplate for adding new methods, but you're not going to disagree with the generalisation often enough to make it worth leaving out of your style guide.
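Applied to the question's code, a minimal sketch of that extract-method alternative (CreateControlFor is a name invented here, not from the question):
private UserControl CreateControlFor(Button button)
{
    Type type = this.GetType();
    return (UserControl)type.Assembly.CreateInstance(String.Format("{0}.{1}", type.Namespace, button.Name));
}

// the click handler now reads like English:
// UserControl userControl = CreateControlFor((Button)e.OriginalSource);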

I'm completely with your colleague in principle, but not in this case.
The problem with throwaway variables is that they introduce state, and more state means the code is harder to understand, since you don't know what effects it could have on the program's flow. Pure functional programming has no mutable variables at all, exactly for this reason. So the fewer variables there are, the better. Eliminating variables is good.
However, in this particular case, where the method ends right after the variable's only use, the disadvantages of having it are minimal, and the advantages you mention are probably more substantial.

Trying hard to come up with an argument against introducing new variables, I'd say that when you read the code (for the first time, at least), you don't know whether the variable is used more than once. So you will immediately let your eyes scan down through the code to see if it is used in more places. The longer the function, the further you will have to look.
That's the best argument against it that I can come up with! :-)

That's how I used to code. Nowadays I try to minimize intermediate variables. That said, an intermediate variable is perfectly fine if it's immutable.

I'm in agreement with the majority here, code readability is key.
It's a rare line count crusader that actually writes highly readable, highly maintainable code.
Additionally, it all gets compiled to MSIL anyway and the compiler will optimise a lot for you.
In the following example, the compiler will optimise the code anyway:
List<string> someStrings = new List<string>();
for (int i = 0; i < 1000; i++)
{
    string localString = string.Format("prefix{0}", i);
    someStrings.Add(localString);
}
Rather than:
List<string> someStrings = new List<string>();
string localString = string.Empty;
for (int i = 0; i < 1000; i++)
{
    localString = string.Format("prefix{0}", i);
    someStrings.Add(localString);
}
So there's really no performance reason not to go with it in many cases.

Agreed.
"makes code easier to read, i.e. more of your code "reads like English""
I think this is the most important factor, as there is no difference in performance or functionality in most modern managed languages.
After all, we all know code is harder to read than it is to write.
Karl

Related

Naming a function that computes a thing

Consider a Python module (though this is relevant to other languages as well) with a series of functions meant to be used sequentially.
Namely, the functions are semantically linked according to the following scheme:
def function_a_to_b(thing_a):
    """Compute the thing_b."""
    thing_b = thing_a**2
    return thing_b

def function_b_to_c(thing_b):
    """Compute the thing_c."""
    thing_c = thing_b**3
    return thing_c
In this trivial example, a candidate name for function_a_to_b could be squaring and thing_b could be named square; likewise we could use cubing and cube.
Now suppose thing_a is something complicated that does not support verbing, say a weighted_glonk.
How can I name function_a_to_b to keep things short and obvious, and avoid variable clashes or error-prone namespace subtleties?
I incline towards compute_weighted_glonk for the function. Another option is Hungarian naming, say array_weighted_glonk for the thing.
It depends what you want to do with that glonk, but you are mostly right with compute_weighted_glonk. If you want to compute it, then name the function compute_glonk(); if you want to get its name, the function would probably be get_name_of_glonk(). Lots of people will have lots of suggestions; there are many ways to name your functions.
If it is a complicated thing that doesn't support verbing, then I would suggest creating a class for it and constructing the thing via the constructor.
My advice is to find a balance between short and verbose in naming --Dmitri Pavlutin
Don't be afraid to use an auxiliary word. In plain English you wouldn't struggle with "The plant growthed..." but would simply go with "The plant presented growth...".
Back to your example: like cube/cubing, you may write glonk/obtainGlonk or glonk/calculateGlonk.
Also, focus on readability more than on keeping the verbing standard; at the end of the day, readability is what the fellow programmer will experience. Your verbed word is, what, an effect? Keep it that way.
Your action should describe a change that has happened in your system.
My fifty cents: I would argue against introducing naming conventions to reveal computational structure (Robert Martin: "We have enough encodings to deal with without adding more to our burden" [Clean Code, p. 23]). If your functions are meant to be called in a strict order, you should encapsulate them in another function that does precisely that; or create some kind of latch, if that's appropriate.
I don't know Python, so I'd like to answer from a JavaScript perspective. I can't figure out what you mean by a glonk or by something that does not support verbing in a programming context. thing_a can at most be as complicated as an array, object, or callable function; in those cases you can put type checks in function_a_to_b. Also, "Compute the thing_b" can be delegated to another function if it involves multiple operations. What I would do is the following:
// I'm using $ for var names just for semantics.
// $var is a PHP usage, but I want to emphasise
// which things are variables.
function myComputer($thing) {
    if (typeof($thing) == "object") {
        // take total no. of properties, or whatever you want
        return Object.keys($thing).length;
    }
    else if (typeof($thing) == "number") {
        return $thing;
    }
    else if (typeof($thing) == "string") {
        return $thing.length;
    }
    else {
        return 0;
    }
}

function mySquarer($a) { // semantic naming instead of function_a_to_b
    var $b = Math.pow(myComputer($a), 2); // in your original version, what's the point of computing thing_b if you want to reassign a different value afterwards?
    return $b;
}

function myCuber($b) { // when you call myCuber you can do myCuber(mySquarer($anything));
    var $c = Math.pow(myComputer($b), 3);
    return $c;
}

console.log(mySquarer(3));
console.log(myCuber(mySquarer(2)));

Why use a single incrementer class

The code below is found in WebKit:
RefPtr<Element> element = pendingScript.releaseElementAndClear();
if (ScriptElement* scriptElement = toScriptElement(element.get())) {
    NestingLevelIncrementer nestingLevelIncrementer(m_scriptNestingLevel);
    IgnoreDestructiveWriteCountIncrementer ignoreDestructiveWriteCountIncrementer(m_document);
    // Do something else...
}
NestingLevelIncrementer is a simple class which increments a counter in its constructor and decrements it in its destructor. You can check the implementation here.
In this snippet, that seems equivalent to incrementing and later decrementing the counter directly. Perhaps the only benefit is that you can't forget to decrement the counter afterwards, but a new class is introduced.
Any other reason to use this pattern?
The intent is for the increment to be reversed no matter how the "something else" concludes; the stack variable will be destroyed when the method returns or an exception is thrown.
An alternative approach in other languages would use try...finally; see this for more discussion on RAII in C++ vs. finally:
Does C++ support 'finally' blocks? (And what's this 'RAII' I keep hearing about?)
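For comparison, here is a minimal C# sketch of the same scope-guard idea using IDisposable and a using block (NestingLevelGuard and Counter are made-up names, not WebKit's classes):
using System;

class Counter { public int Value; }

class NestingLevelGuard : IDisposable
{
    private readonly Counter counter;
    public NestingLevelGuard(Counter c) { counter = c; counter.Value++; }
    public void Dispose() { counter.Value--; }  // runs on normal exit or when an exception unwinds the using block
}

// usage: the decrement happens no matter how the block ends
// using (new NestingLevelGuard(scriptNestingLevel))
// {
//     DoSomethingThatMightThrow();
// }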

How do programmers solve the dilemma of reusing old variables versus creating new ones?

For example:
... some code
int sizeOfSomeObject = someObject.length();
... some code; sizeOfSomeObject is not needed anymore
Now I need another int variable for another purpose (for example, a position in some object), and I have a dilemma: create a new variable, or reuse sizeOfSomeObject? In the first case I keep readability but lose performance. In the second case, the opposite. What do programmers usually do in this situation?
In the first case I keep readability but lose performance. In the second case, the opposite.
So did you benchmark it? I suspect not. Most modern compilers do a lot of aggressive analysis during register allocation, so if the optimizer sees a variable that is no longer used and a new variable of the same type, it will simply merge the two into the same memory region or processor register. No need to worry about performance penalties.
And anyway, don't do premature optimization (which this is). In 90% of the cases, readability is more important than "performance".
All in all, go ahead and create a new variable with an appropriate, different, descriptive name. And just for fun, compile this version and the version in which you used the same variable name, and look at the generated assembly (or bytecode, or...) - and find out that they're identical.
I would use different named variables for different things.
In terms of something like this, I don't think just one variable would cause a massive performance hit. In most languages you have the option to clear variables from memory in some way when they are no longer in use, so I would recommend doing that so that the code means something to you or others when read at a later date.
In C++, you can use blocks for objects to be destroyed as soon as they are not needed anymore:
void some_function () {
    {
        MyClass c;
        // ... here we use c ...
    }
    // now c has been destroyed
    {
        MyClass d;
        // ... here we use d ...
    }
    // now d has been destroyed
}
In your example (with int variables), there is no reason to worry about performance. The worst thing that could happen is memory for two variables being used instead of one, but (i) that's negligible and (ii) the ints will probably live in a CPU register anyway. If you really worry, use the block approach for your int example.
It depends how often such an int would be initialized. If it's not in some hugely nested for loop, most (all) programmers will go for the first option. Besides, most modern programming languages have a garbage collector, which cleans up leftover objects.
A decent compiler will optimize out your second variable, so that shouldn't be an issue.
That said, there are situations where variable reuse makes sense. E.g., you might have a variable that holds generic output populated from calls to some external API. Depending on the context and the parameters passed to the API you'll process the data differently, but it's probably better (more readable, etc.) to reuse the same data variable.
For example, something like this:
void* data = getSomeData(params);
//process data
//change params
data = getSomeData(params);
//process data
//change params
data = getSomeData(params);

Is it bad practice to initialize a variable to a dummy value?

This question is a result of the answers to this question that I just asked.
It was claimed that this code is "ugly" because it initializes a variable to a value that will never be read:
String tempName = null;
try {
    tempName = buildFileName();
}
catch (Exception e) {
    ...
    System.exit(1);
}
FILE_NAME = tempName;
Is this indeed bad practice? Should one avoid initializing variables to dummy values that will never actually be used?
(EDIT -
And what about initializing a String variable to "" before a loop that will concatenate values to the String...? Or is this in a separate category?
e.g.
String whatever = "";
for(String str : someCollection){
whatever += str;
}
)
I think there's no point in initializing variables to values that won't be used unless required by the language.
For example, in C#, the default value for a string field is null anyway, so you're not even gaining anything by explicitly writing it out. It's mostly a style choice, but since strings are immutable, initializing one to something else would actually allocate an extra string in memory that you'd just throw away. Other languages may impose other considerations.
Regarding the string loop, if you change it to a StringBuilder instead you don't even have to think about it.
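A minimal sketch of that suggestion (C# shown here; Java's StringBuilder is nearly identical), assuming someCollection from the question:
var sb = new System.Text.StringBuilder();   // no dummy "" needed
foreach (string str in someCollection)
{
    sb.Append(str);
}
string whatever = sb.ToString();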
Edit: removed bits better answered by others.
As a practice, I tend to avoid setting variables to arbitrary values, and instead initialize them to a default value.
i.e.
int x = 0;
string name = "";
bool done = false;
Person randomGuy = null; //or = new Person();
I like this method best because it brings a sense of uniformity to your code while not forcing the next guy that comes along to deal with stuff like: string name = "luke skywalker";
This is all more of a personal preference, so it will vary between programmers.
At the very least, you should be following the standards that your project has set. You'll have an idea of how the legacy code handles these things, and it's probably best to conform to that so the overall coding style is the same throughout the system.
It depends on the compiler. The C# compiler requires that a variable be initialized before use, but the CLR does not have this requirement. At run time the CLR checks whether the variable is initialized; if not, it will throw a NullReferenceException.
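To illustrate the compiler half of that claim, C#'s definite-assignment rule rejects reading an unassigned local outright:
string name;                  // declared but never assigned
// Console.WriteLine(name);   // compile-time error CS0165: use of unassigned local variable 'name'
name = "ready";
Console.WriteLine(name);      // fine once assigned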
In my opinion, it might be more accurate to refer to it as a "code smell" - in the Martin Fowler sense of the word.
I don't think you can change your default initialisation in isolation - it would need to be made in conjunction with other refactoring methods. It also assumes that you have refactored your code so that you don't need temp variables:
try {
    FILE_NAME = buildFileName();
    // Do other stuff with file name
}
catch (Exception e) {
    ...
    System.exit(1);
}
It also makes the assumption that this code segment is the only code in the method in which it is contained - i.e. that the method only does one thing.
When I'm coding, I would be concerned about using dummy values with temp variables, but I would only change them once I have finished coding that section and it solves the problem as intended - and only in conjunction with other refactoring steps.

Can you write any algorithm without an if statement?

This site tickled my sense of humour - http://www.antiifcampaign.com/ but can polymorphism work in every case where you would use an if statement?
Smalltalk, which is considered a "truly" object-oriented language, has no "if" statement, no "for" statement, and no "while" statement. There are other examples (like Haskell), but this is a good one.
Quoting Smalltalk has no “if” statement:
Some of the audience may be thinking that this is evidence confirming their suspicions that Smalltalk is weird, but what I'm going to tell you is this:
An "if" statement is an abomination in an Object Oriented language.
Why? Well, an OO language is composed of classes, objects and methods, and an "if" statement is inescapably none of those. You can't write "if" in an OO way. It shouldn't exist. Conditional execution, like everything else, should be a method. A method of what? Boolean.
Now, funnily enough, in Smalltalk, Boolean has a method called ifTrue:ifFalse: (that name will look pretty odd now, but pass over it for now). It's abstract in Boolean, but Boolean has two subclasses: True and False. The method is passed two blocks of code. In True, the method simply runs the code for the true case. In False, it runs the code for the false case. Here's an example that hopefully explains:
(x >= 0) ifTrue: [
    'Positive'
] ifFalse: [
    'Negative'
]
You should be able to see ifTrue: and ifFalse: in there. Don't worry that they're not together.
The expression (x >= 0) evaluates to true or false. Say it's true, then we have:
true ifTrue: [
    'Positive'
] ifFalse: [
    'Negative'
]
I hope that it's fairly obvious that that will produce 'Positive'.
If it was false, we'd have:
false ifTrue: [
    'Positive'
] ifFalse: [
    'Negative'
]
That produces 'Negative'.
OK, that's how it's done. What's so great about it? Well, in what other language can you do this? More seriously, the answer is that there aren't any special cases in this language. Everything can be done in an OO way, and everything is done in an OO way.
I definitely recommend reading the whole post and Code is an object from the same author as well.
That website is against using if statements for checking if an object has a specific type. This is completely different from if (foo == 5). It's bad to use ifs like if (foo instanceof pickle). The alternative, using polymorphism instead, promotes encapsulation, making code infinitely easier to debug, maintain, and extend.
Being against ifs in general (doing a certain thing based on a condition) will gain you nothing. Notice how all the other answers here still make decisions, so what's really the difference?
Explanation of the why behind polymorphism:
Take this situation:
void draw(Shape s) {
    if (s instanceof Rectangle)
        // treat s as rectangle
    if (s instanceof Circle)
        // treat s as circle
}
It's much better if you don't have to worry about the specific type of an object, generalizing how objects are processed:
void draw(Shape s) {
    s.draw();
}
This moves the logic of how to draw a shape into the shape class itself, so we can now treat all shapes the same. This way if we want to add a new type of shape, all we have to do is write the class and give it a draw method instead of modifying every conditional list in the whole program.
This idea is everywhere in programming today, the whole concept of interfaces is all about polymorphism. (Shape is an interface defining a certain behavior, allowing us to process any type that implements the Shape interface in our method.) Dynamic programming languages take this even further, allowing us to pass any type that supports the necessary actions into a method. Which looks better to you? (Python-style pseudo-code)
def multiply(a, b):
    if (a is string and b is int):
        # repeat a b times
    if (a is int and b is int):
        # multiply a and b
or using polymorphism:
def multiply(a, b):
    return a*b
You can now use any two types that support the * operator, allowing you to use the method with types that haven't even been created yet.
See polymorphism and what is polymorphism.
Though not OOP-related: In Prolog, the only way to write your whole application is without if statements.
Yes actually, you can have a turing-complete language that has no "if" per se and only allows "while" statements:
http://cseweb.ucsd.edu/classes/fa08/cse200/while.html
As for OO design, it makes sense to use an inheritance pattern rather than switches based on a type field in certain cases... That's not always feasible or necessarily desirable though.
#ennuikiller: conditionals would just be a matter of syntactic sugar:
if (test) body; is equivalent to x=test; while (x) {x=nil; body;}
if-then-else is a little more verbose:
if (test) ifBody; else elseBody;
is equivalent to
x = test; y = true;
while (x) {x = nil; y = nil; ifBody;}
while (y) {y = nil; elseBody;}
The primitive data structure is a list of lists. You could say two scalars are equal if they are lists of the same length; you would loop over them simultaneously using the head/tail operators and see if they stop at the same point.
Of course, that could all be wrapped up in macros.
The simplest Turing-complete language is probably Iota. It contains only 2 symbols ('i' and '*').
Yep. if statements imply branches, which can be very costly on a lot of modern processors - particularly PowerPC. Many modern PCs do a lot of pipeline re-ordering, so branch mis-predictions can cost on the order of 30+ cycles per branch miss.
In console programming it's sometimes faster to just execute code and ignore the result than to check whether you should execute it!
Simple branch avoidance in C:
if (++i >= 16)
{
    i = 0;
}
can be re-written as
i = (i + 1) & 15;
However, if you want to see some real anti-if fu then read this
Oh, and on the OOP question - would I replace a branch mis-prediction with a virtual function call? No thanks...
The reasoning behind the "anti-if" campaign is similar to what Kent Beck said:
Good code invariably has small methods and small objects. Only by factoring the system into many small pieces of state and function can you hope to satisfy the "once and only once" rule. I get lots of resistance to this idea, especially from experienced developers, but no one thing I do to systems provides as much help as breaking it into more pieces.
If you don't know how to factor a program with composition and inheritance, then your classes and methods will tend to grow bigger over time. When you need to make a change, the easiest thing will be to add an IF somewhere. Add too many IFs, and your program will become less and less maintainable, and still the easiest thing will be to add more IFs.
You don't have to turn every IF into an object collaboration; but it's a very good thing when you know how to :-)
You can define True and False with objects (in pseudo-Python):
class True:
    def if(then, else):
        return then
    def or(a):
        return True()
    def and(a):
        return a
    def not():
        return False()

class False:
    def if(then, else):
        return else
    def or(a):
        return a
    def and(a):
        return False()
    def not():
        return True()
I think it is an elegant way to construct booleans, and it shows that you can replace every if with polymorphism, but that's not the point of the anti-if campaign. The goal is to avoid writing things such as this (in a pathfinding algorithm):
if type == Block or type == Player:
    # You can't pass through this
else:
    # You can
But rather call an is_traversable method on each object. In a sense, that's exactly the inverse of pattern matching. "if" is useful, but in some cases it is not the best solution.
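A minimal C# sketch of that inversion (Block, Player, and Floor are hypothetical tile types):
interface ITile { bool IsTraversable { get; } }

class Block : ITile { public bool IsTraversable => false; }
class Player : ITile { public bool IsTraversable => false; }
class Floor : ITile { public bool IsTraversable => true; }

// The pathfinder no longer enumerates types:
// if (tile.IsTraversable) { /* you can pass through */ }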
I assume you are actually asking about replacing if statements that check types, as opposed to replacing all if statements.
To replace an if with polymorphism requires a method in a common supertype you can use for dispatching, either by overriding it directly, or by reusing overridden methods as in the visitor pattern.
But what if there is no such method, and you can't add one to a common supertype because the super types are not maintained by you? Would you really go to the lengths of introducing a new supertype along with subtypes just to get rid of a single if? That would be taking purity a bit far in my opinion.
Also, both approaches (direct overriding and the visitor pattern) have their disadvantages: Overriding the method directly requires that you implement your method in the classes you want to switch on, which might not help cohesion. On the other hand, the visitor pattern is awkward if several cases share the same code. With an if you can do:
if (o instanceof OneType || o instanceof AnotherType) {
    // complicated logic goes here
}
How would you share the code with the visitor pattern? Call a common method? Where would you put that method?
So no, I don't think replacing such if statements is always an improvement. It often is, but not always.
I used to write a lot of code in the style recommended by the anti-if campaign, using either callbacks in a delegate dictionary or polymorphism.
It's quite a beguiling argument, especially if you are dealing with messy code bases, but to be honest, although it's great for a plugin model or for simplifying large nested if statements, it does make navigation and readability a bit of a pain.
For example F12 (Go To Definition) in visual studio will take you to an abstract class (or, in my case an interface definition).
It also makes quick visual scanning of a class very cumbersome, and adds an overhead in setting up the delegates and lookup hashes.
Following the anti-if campaign's recommendations to the extent they appear to be recommending looks like 'ooh, new shiny thing' programming to me.
As for the other constructs put forward in this thread, though offered in the spirit of a fun challenge, they are just substitutes for an if statement and don't really address the underlying beliefs of the anti-if campaign.
You can avoid ifs in your business logic code if you keep them in your construction code (factories, builders, providers, etc.). Your business logic code will be much more readable and easier to understand, maintain, or extend. See: http://www.youtube.com/watch?v=4F72VULWFvc
Haskell doesn't even have if statements, being pure functional. ;D
You can do it without if per se, but you can't do it without a mechanism that allows you to make a decision based on some condition.
In assembly, there's no if statement. There are conditional jumps.
In Haskell, for instance, you rarely need an explicit if; instead, you can define a function by cases, using guards:
posNeg x
    | x < 0 = "negative"
    | x == 0 = "zero"
    | otherwise = "positive"
When you call posNeg a, the guards are checked in order: if a < 0 the first case is chosen, if a == 0 the second, and otherwise it falls through to the third.
So while languages like Haskell and Smalltalk don't lean on the usual C-style if statement, they have other means of letting you make decisions.
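For what it's worth, modern C# (9 and later) has grown a similar mechanism: a switch expression with patterns expresses the same decision without an if statement. A sketch of posNeg:
static string PosNeg(int x) => x switch
{
    < 0 => "negative",
    0 => "zero",
    _ => "positive"
};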
This is actually a coding game I like to play with programming languages. It's called "if we had no if", which has its origins at: http://wiki.tcl.tk/4821
Basically, if we disallow the use of conditional constructs in the language - no if, no while, no for, no unless, no switch, etc. - can we recreate our own IF function? The answer depends on the language and what language features we can exploit (remember, using regular conditional constructs is cheating, so no ternary operators!)
For example, in tcl, a function name is just a string and any string (including the empty string) is allowed for anything (function names, variable names etc.). So, exploiting this we can do:
proc 0 {true false} {uplevel 1 $false; # execute false code block, ignore true}
proc 1 {true false} {uplevel 1 $true; # execute true code block, ignore false}
proc _IF {boolean true false} {
    $boolean $true $false
}
#usage:
_IF [expr {1<2}] {
    puts "this is true"
} {
    #else:
    puts "this is false"
}
or in javascript we can abuse the loose typing and the fact that almost anything can be cast into a string and combine that with its functional nature:
function fail (discard, execute) {execute()}
function pass (execute, discard) {execute()}

var truth_table = {
    'false' : fail,
    'true' : pass
};

function _IF (expr) {
    return truth_table[!!expr];
}

//usage:
_IF(3==2)(
    function(){alert('this is true')},
    //else
    function(){alert('this is false')}
);
Not all languages can do this sort of thing. But languages I like tend to be able to.
The idea of polymorphism is to be able to call an object without first verifying its class.
That doesn't mean the if statement should never be used at all; rather, you should avoid writing
if (object.isArray()) {
    // Code to execute when the object is an array.
} else if (object.isString()) {
    // Code to execute if the object is a string.
}
It depends on the language.
Statically typed languages should be able to handle all of the type checking by sharing common interfaces and overloading functions/methods.
Dynamically typed languages might need to approach the problem differently, since type is not checked when a message is passed, only when an object is being accessed (more or less). Using common interfaces is still good practice and can eliminate many type-checking if statements.
While some constructs are usually a sign of code smell, I am hesitant to eliminate any approach to a problem a priori. There may be times when type checking via if is the expedient solution.
Note: others have suggested using switch instead, but that is just a clever way of writing more legible if statements.
Well, if you're writing in Perl, it's easy!
Instead of
if (x) {
# ...
}
you can use
unless (!x){
# ...
}
;-)
In answer to the question, and as suggested by the last respondent, you need some if statements to detect state in a factory. At that point you then instantiate a set of collaborating classes that solve the state-specific problem. Of course, other conditionals would be required as needed, but they would be minimized.
What would be removed, of course, is the endless procedural state-checking rife in so much service-based code.
It's interesting that Smalltalk is mentioned, as that's the language I used before being dragged across into Java. I don't get home as early as I used to.
I thought about adding my two cents: you can optimize away ifs in many languages by exploiting short-circuit evaluation, where the second part of a boolean expression is not evaluated when it can't affect the result.
With the and operator, if the first operand evaluates to false, then there is no need to evaluate the second one. With the or operator, it's the opposite - there's no need to evaluate the second operand if the first one is true. Some languages always behave like that, others offer an alternative syntax.
Here's an if - elseif - else code made in JavaScript by only using operators and anonymous functions.
document.getElementById("myinput").addEventListener("change", function(e) {
(e.target.value == 1 && !function() {
alert('if 1');
}()) || (e.target.value == 2 && !function() {
alert('else if 2');
}()) || (e.target.value == 3 && !function() {
alert('else if 3');
}()) || (function() {
alert('else');
}());
});
<input type="text" id="myinput" />
This makes me want to try defining an esoteric language where blocks implicitly behave like self-executing anonymous functions and return true, so that you would write it like this:
(condition && {
    action
}) || (condition && {
    action
}) || {
    action
}