Why does FxCop think initializing fields to the default value is bad?

When assigning the default value to a field (here, false to a bool), FxCop says:
Resolution : "'Bar.Bar()' initializes field 'Bar.foo'
of type 'bool' to false. Remove this initialization
because it will be done automatically by the runtime."
Now, I know that code like int a = 0 or bool ok = false introduces some redundancy, but to me it seems a very, very good coding practice, and one my teachers insisted on, rightly so in my opinion.
Not only is the performance penalty tiny; more importantly, relying on the default means relying on every programmer who will ever use the code knowing the default of every data type that has one. (Quick: what is the default of DateTime?)
Seriously, I find this very strange: the very program that should protect you from making all-too-obvious mistakes is suggesting you make one, just for a bit of extra performance? We're talking about initialization code here, executed only once! Programmers who care that much can of course omit the initialization (and should probably be using C or assembler :-) ).
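For reference, these are the defaults the runtime guarantees for uninitialized fields (a quick illustrative sketch, the type names are just examples):
public class Defaults
{
    private int count;          // 0
    private bool ok;            // false
    private string name;        // null (as for every reference type)
    private System.DateTime at; // default(DateTime), i.e. DateTime.MinValue
}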
Is FxCop making an obvious mistake here, or is there more to it?
Two updates:
This is not just my opinion, but what I was taught at university (in Belgium). Not that I like to use an argumentum ad verecundiam, but just to show that it isn't only my opinion. And concerning that:
My apologies, I just found this one:
Should I always/ever/never initialize object fields to default values?

There can be significant performance benefits from this, in some cases. For details, see this CodeProject article.
The main issue is that it is unnecessary in C#. In C++, things are different, so many professors teach that you should always initialize. The way the objects are initialized has changed in .NET.
In .NET, objects are always initialized when constructed. If you add an initializer, it's possible (typical) that you cause a double initialization of your variables. This happens whether you initialize them in the constructor or inline.
In addition, since initialization is unnecessary in .NET (it always happens, even if you don't explicitly say to initialize to the default), adding an initializer suggests, from a readability standpoint, that you are trying to add code that has a function. Every piece of code should have a purpose, or be removed if unnecessary. The "extra" code, even if it was harmless, suggests that it is there for a reason, which reduces maintainability.
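To make that claim concrete, here is a minimal sketch (the names are mine) of the two places an initializer can live; the answer's point is that the runtime has already zeroed both fields before either assignment runs:
public class Counter
{
    private int count = 0;   // inline initializer: exactly what the FxCop rule flags, count is already 0
    private int step;

    public Counter()
    {
        step = 1;            // constructor initialization: runs after any inline initializers
    }
}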

Reed Copsey said specifying default values for fields impacts performance, and he refers to a test from 2005.
public class A
{
    // Use CLR's default value
    private int varA;
    private string varB;
    private DataSet varC;
}

public class B
{
    // Specify default value explicitly
    private int varA = 0;
    private string varB = null;
    private DataSet varC = null;
}
Now eight years and four versions of C# and .NET later I decided to repeat that test under .NET Framework 4.5 and C# 5, compiled as Release with default optimizations using Visual Studio 2012. I was surprised to see that there is still a performance difference between explicitly initializing fields with their default values and not specifying a default value. I would have expected the later C# compilers to optimize those constant assignments away by now.
No init: 512,75 Init on Def: 536,96 Init in Const: 720,50
Method No init: 140,11 Method Init on Def: 166,86
(the rest)
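For context, numbers like these come from a micro-benchmark of roughly this shape (a sketch of the idea only, not the exact CodeProject harness): construct a large number of instances of each class and time it.
using System;
using System.Diagnostics;

static class InitBenchmark
{
    const int Iterations = 100000000;

    static void Main()
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++) new A();   // fields left at CLR defaults
        Console.WriteLine("No init:     {0:F2}", sw.Elapsed.TotalMilliseconds);

        sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++) new B();   // fields explicitly set to defaults
        Console.WriteLine("Init on Def: {0:F2}", sw.Elapsed.TotalMilliseconds);
    }
}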
So I peeked inside the constructors of classes A and B in this test, and both are empty:
.method public hidebysig specialname rtspecialname
instance void .ctor () cil managed
{
// Code size 7 (0x7)
.maxstack 8
IL_0000: ldarg.0
IL_0001: call instance void [mscorlib]System.Object::.ctor()
IL_0006: ret
} // end of method .ctor
I could not find a reason in the IL why explicitly assigning default constant values to fields would consume more time. Apparently the C# compiler does optimize the constants away, and I suspect it always did.
So, then the test must be wrong... I swapped the contents of classes A and B, swapped the numbers in the result string, and reran the tests. And lo and behold now suddenly specifying default values is faster.
No init: 474,75 Init on Def: 464,42 Init in Const: 715,49
Method No init: 141,60 Method Init on Def: 171,78
(the rest)
Therefore I conclude that the C# compiler correctly optimizes for the case where you assign default values to fields. It makes no performance difference!
Now we know performance is really not an issue. And I disagree with Reed Copsey's statement "The 'extra' code, even if it was harmless, suggests that it is there for a reason, which reduces maintainability." and agree with Anders Hansson on this:
Think long term maintenance.
Keep the code as explicit as possible.
Don't rely on language specific ways to initialize if you don't have to. Maybe a newer version of the language will work differently?
Future programmers will thank you.
Management will thank you.
Why obfuscate things even the slightest?
Future maintainers may come from a different background. It really isn't about what is "right" it is more what will be easiest in the long run.

FxCop has to provide rules for everyone, so if this rule doesn't appeal to you, just exclude it.
Now, I would suggest that if you want to explicitly declare a default value, use a constant (or static readonly variable) to do it. It will be even clearer than initializing with a value, and FXCop won't complain.
private const int DEFAULT_AGE = 0;
private int age = 0; // FXCop complains
private int age = DEFAULT_AGE; // FXCop is happy
private static readonly DateTime DEFAULT_BIRTH_DATE = default(DateTime);
private DateTime birthDate = default(DateTime); // FXCop doesn't complain, but it isn't as readable as below
private DateTime birthDate = DEFAULT_BIRTH_DATE; // Everyone's happy

FxCop sees it as adding unnecessary code, and unnecessary code is bad. I agree with you: I like to see what a field is set to; I think it makes the code easier to read.
A similar problem we encounter is that, for documentation reasons, we may create an empty constructor like this:
/// <summary>
/// a test class
/// </summary>
/// <remarks>Documented on 4/8/2009 by richard</remarks>
public class TestClass
{
/// <summary>
/// Initializes a new instance of the <see cref="TestClass"/> class.
/// </summary>
/// <remarks>Documented on 4/8/2009 by Bob</remarks>
public TestClass()
{
//empty constructor
}
}
The compiler creates this constructor automatically, so FxCop complains, but our Sandcastle documentation rules require all public methods to be documented, so we just told FxCop not to complain about it.

It relies not on every programmer knowing some locally defined "corporate standard", which might change at any time, but on something formally defined in the language standard. You might as well say "don't use x++", because that relies on the knowledge of every programmer too.

You have to remember that FxCop rules are only guidelines, they are not unbreakable. It even says so on the page for the description of the rule you mentioned (http://msdn.microsoft.com/en-us/library/ms182274(VS.80).aspx, emphasis mine):
When to Exclude Warnings:
Exclude a warning from this rule if the constructor calls another constructor in the same or base class that initializes the field to a non-default value. It is also safe to exclude a warning from this rule, or disable the rule entirely, if performance and code maintenance are not priorities.
Now, the rule isn't entirely incorrect, but like it says, this is not a priority for you, so just disable the rule.
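If you'd rather silence it for one specific constructor than disable the rule globally, an in-source suppression would look roughly like this (this assumes the classic FxCop rule ID for the check, CA1805, and the SuppressMessage attribute):
using System.Diagnostics.CodeAnalysis;

public class Bar
{
    private bool foo;

    // Classic FxCop only honours this attribute when the CODE_ANALYSIS
    // compilation symbol is defined for the build.
    [SuppressMessage("Microsoft.Performance",
        "CA1805:DoNotInitializeUnnecessarily",
        Justification = "The explicit default documents the intent.")]
    public Bar()
    {
        foo = false;
    }
}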

Related

Are public variables all that bad?

I've read a lot of articles about "public vs getter/setter", but I still wonder if there is anything good about public variables.
Or the question is:
If you're going to make a new awesome programming language, are you still going to support public variables, and why?
I agree with almost everything that's been said by everyone else, but wanted to add this:
Public isn't automatically bad. Public is bad if you're writing an Object Class. Data Classes are just fine. There's nothing wrong with this class:
public class CommentRecord
{
    public int id;
    public string comment;
}
... why? Because the class isn't using the variables for anything. It's just a data object - it's meant to be just a simple data repository.
But there's absolutely something wrong with this class:
public class CommentRecord
{
    public int id;
    public string comment;

    public void UpdateInSQL()
    {
        // code to update the SQL table for the row with commentID = this.id
        // and set its UserComment column to this.comment
    }
}
... why is this bad? Because it's not a data class. It's a class that actually does stuff with its variables - and because of that, making them public forces the person using the class to know the internals of the class. The person using it needs to know "If I want to update the comment, I have to change the public variable, but not change the id, then call the UpdateInSQL() method." Worse, if they screw up, they use the class in a way it wasn't intended, in a way that'll cause unforeseen consequences down the line!
If you want to get some more info on this, take a look at Clean Code by Robert Martin, Chapter 6, on "Data/Object Anti-Symmetry"
A public variable essentially means you have a globally accessible/changeable variable within the scope of an object. Is there really a use case for this?
Take this example: you have a class DatabaseQueryHandler which has a variable databaseAccessor. Under what circumstances would you want this variable to be:
Publicly accessible (i.e. gettable)
Publicly settable
For option #1 I can think of a few cases - you may want to get the last insert ID after an insert operation, you may want to check any errors the last query generated, commit or roll back transactions, etc., and it might make more logical sense to have these methods written in the class DatabaseAccessor than in DatabaseQueryHandler.
Option #2 is less desirable, especially if you are doing OOP and abiding by the SOLID principles, in particular the ISP and DIP. In that case, when would you want to set the variable databaseAccessor in DatabaseQueryHandler? Probably on construction only, and never at any time after that. You probably also want it type-hinted at the interface level, so that you can code to interfaces. Also, why would you need an arbitrary object to be able to alter the database accessor? What happens if Foo changes the variable DatabaseQueryHandler->databaseAccessor to NULL and then Bar tries to call DatabaseQueryHandler->databaseAccessor->beginTransaction()?
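Sketched in C# (the class names come from the example above; a PHP version would be analogous), constructor-only, interface-typed injection looks like this:
public interface IDatabaseAccessor
{
    void BeginTransaction();
    // ...last insert ID, error info, commit/rollback, etc.
}

public class DatabaseQueryHandler
{
    // Set once at construction, typed to the interface, never reassigned afterwards.
    private readonly IDatabaseAccessor databaseAccessor;

    public DatabaseQueryHandler(IDatabaseAccessor accessor)
    {
        if (accessor == null) throw new System.ArgumentNullException("accessor");
        databaseAccessor = accessor;
    }

    public void Run(string query)
    {
        databaseAccessor.BeginTransaction();
        // ...execute the query...
    }
}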
I'm just giving one example here, and it is by no means bullet proof. I program in PHP (dodges the hurled rotten fruit) and take OOP and SOLID very seriously given the looseness of the language. I'm sure there will be arguments on both sides of the fence, but I would say that if you're considering using a public class variable, instead consider what actually needs to access it, and how that variable is to be used. In most cases the functionality can be exposed via public methods without allowing unexpected alteration of the variable type.
The simple answer is: yes, they are bad. There are many reasons for that, like coupling and unmaintainable code. In practice you should not use them. In OOP the public variable alternative is the Singleton, which is considered a bad practice. Check it out here.
It has a lot to do with encapsulation. You don't want your variable to be accessible just anyhow. Other languages, like Objective-C on iOS, use properties:
@property (nonatomic, strong) NSArray* array;
then the compiler generates the instance variable with its getter and setter implicitly. In this case there is no need to use the variable directly (though other developers still prefer to use variables). You can then make this property public by declaring it in the .h file, or private by declaring it in the .m file.

Optional Parameters, Good or Bad?

I am writing and browsing through a lot of methods in the project I'm working on, and as much as I think overloads are useful, I think that a simple optional parameter with a default value can get around the problem while aiding in writing more readable and, I would think, more efficient code.
Now I hear that using these parameters in methods could carry nasty side effects.
What are these side effects, and is it worth the risk of using these parameters to keep the code clean?
I'll start by prefacing my answer by saying that any language feature can be used well or used poorly. Optional parameters have some drawbacks, just like declaring locals as var does, or generics.
What are these side effects
Two come to mind.
The first is that the default values for optional parameters are compile-time constants that are embedded in the consumer of the method. Let's say I have this class in AssemblyA:
public class Foo
{
public void Bar(string baz = "cat")
{
//Omitted
}
}
And this in AssemblyB:
public void CallBar()
{
new Foo().Bar();
}
What really ends up being produced is this, in assemblyB:
public void CallBar()
{
new Foo().Bar("cat");
}
So, if you were to ever change your default value on Bar, both assemblyA and assemblyB would need to be recompiled. Because of this, I tend not to declare methods as public if they use optional parameters, rather internal or private. If I needed to declare it as public, I would use overloads.
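For completeness, the overload alternative mentioned above keeps the default inside AssemblyA, so changing it later doesn't require recompiling callers (a sketch):
public class Foo
{
    public void Bar()
    {
        Bar("cat");   // the default lives here, in exactly one place
    }

    public void Bar(string baz)
    {
        //Omitted
    }
}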
The second issue being how they interact with interfaces and polymorphism. Take this interface:
public interface IBar
{
void Foo(string baz = "cat");
}
and this class:
public class Bar : IBar
{
public void Foo(string baz = "dog")
{
Console.WriteLine(baz);
}
}
These lines will print different things:
IBar bar1 = new Bar();
bar1.Foo(); //Prints "cat"
var bar2 = new Bar();
bar2.Foo(); //Prints "dog"
Those are two negatives that come to mind. However, there are positives, as well. Consider this method:
void Foo(string bar = "bar", string baz = "baz", string yat = "yat")
{
}
Creating methods that offer all the possible permutations as default would be several if not dozens of lines of code.
Conclusion: optional parameters are good, and they can be bad. Just like anything else.
Necromancing.
The thing with optional parameters is, they are BAD because they are unintuitive - meaning they do NOT behave the way you would expect them to.
Here's why:
They break ABI compatibility!
(and strictly speaking, they also break API compatibility, when used in constructors)
For example:
You have a DLL, in which you have code such as this
public void Foo(string a = "dog", string b = "cat", string c = "mouse")
{
Console.WriteLine(a);
Console.WriteLine(b);
Console.WriteLine(c);
}
Now what kinda happens is, you expect the compiler to generate this code behind the scenes:
public void Foo(string a, string b, string c)
{
Console.WriteLine(a);
Console.WriteLine(b);
Console.WriteLine(c);
}
public void Foo(string a, string b)
{
Foo(a, b, "mouse");
}
public void Foo(string a)
{
Foo(a, "cat", "mouse");
}
public void Foo()
{
Foo("dog", "cat", "mouse");
}
or perhaps more realistically, you would expect it to pass NULLs and do
public void Foo(string a, string b, string c)
{
if(a == null) a = "dog";
if(b == null) b = "cat";
if(c == null) c = "mouse";
Console.WriteLine(a);
Console.WriteLine(b);
Console.WriteLine(c);
}
so you can change the default-arguments at one place.
But this is not what the C# compiler does, because then you couldn't do:
Foo(a:"dog", c:"dogfood");
So instead the C# compiler does this:
Everywhere where you write e.g.
Foo(a:"dog", c:"mouse");
or Foo(a:"dog");
or Foo(a:"dog", b:"bla");
It substitutes it with
Foo(your_value_for_a_or_default, your_value_for_b_or_default, your_value_for_c_or_default);
So that means if you add another default value, change a default value, or remove a value, you don't break API compatibility, but you do break ABI compatibility.
So what this means is, if you just replace the DLL out of all files that compose an application, you'll break every application out there that uses your DLL. That's rather bad. Because if your DLL contains a bad bug, and I have to replace it, I have to recompile my entire application with your latest DLL. That might contain a lot of changes, so I can't do it quickly. I also might not have the old source code handy, and the application might be in a major modification, with no idea what commit the old version of the application was compiled on. So I might not be able to recompile at this time. That is very bad.
And as for only using it in PUBLIC methods, not private, protected or internal.
Yea, nice try, but one can still use private, protected or internal methods with reflection. Not because one wants to, but because it sometimes is necessary, as there is no other way. (Example).
Interfaces have already been mentioned by vcsjones.
The problem there is code-duplication (which allows for divergent default-values - or ignoring of default-values).
But the real bummer is that, in addition to that, you can now introduce API-breaking changes in constructors...
Example:
public class SomeClass
{
public SomeClass(bool aTinyLittleBitOfSomethingNew = true)
{
}
}
And now, everywhere where you use
System.Activator.CreateInstance<SomeClass>();
you'll now get a RUNTIME exception, because now there is NO parameter-less constructor...
The compiler won't be able to catch this at compile time.
Good night if you happen to have a lot of Activator.CreateInstances in your code.
You'll be screwed, and screwed badly.
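The overload-based equivalent keeps a real parameterless constructor around, so System.Activator.CreateInstance<SomeClass>() keeps working (a sketch):
public class SomeClass
{
    public SomeClass() : this(true)   // a genuine parameterless constructor still exists
    {
    }

    public SomeClass(bool aTinyLittleBitOfSomethingNew)
    {
    }
}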
Bonus points will be awarded if some of the code you have to maintain uses reflection to create class instances, or use reflection to access private/protected/internal methods...
Don't use optional parameters !
Especially not in class constructors.
(Disclaimer: sometimes, there simply is no other way - e.g. an attribute on a property that takes the name of the property as a constructor argument automagically - but try to limit it to these few cases, especially if you can make do with overloading)
I guess theoretically they are fine for quick prototyping, but only for that.
But since prototypes have a strong tendency to go productive (at least in the company I currently work), don't use it for that, either.
I'd say that it depends how different the method becomes when you include or omit that parameter.
If a method's behaviour and internal functioning is very different without a parameter, then make it an overload. If you're using optional parameters to change behaviour, DON'T. Instead of having a method that does one thing with one parameter, and something different when you pass in a second one, have one method that does one thing, and a different method that does the other thing. If their behaviour differs greatly, then they should probably be entirely separate, and not overloads with the same name.
If you need to know whether a parameter was user-specified or left blank, then consider making it an overload. Sometimes you can use nullable values if the place they're being passed in from won't allow nulls, but generally you can't rule out the possibility that the user passed null, so if you need to know where the value came from as well as what the value is, don't use optional parameters.
Above all, remember that the optional parameters should (kinda by definition) be used for things that have a small, trivial or otherwise unimportant effect on the outcome of the method. If you change the default value, any place that calls the method without specifying a value should still be happy with the result. If you change the default and then find that some other bit of code that calls the method with the optional parameter left blank is now not working how it should, then it probably shouldn't have been an optional parameter.
Places where it can be a good idea to use optional parameters are:
Methods where it's safe to just set something to a default if a value isn't provided. This basically covers anything where the caller might not know or care what the value is. A good example is in encryption methods - the caller may just think "I don't know crypto, I don't know what value R should be set to, I just want this to be encrypted", in which case you set the defaults to sensible values. Often these start out as a method with an internal variable that you then move to be user-provided. It's pointless making two methods when the only difference is that one has var foo = bar; somewhere at the start.
Methods that have a set of parameters, but not all of them are needed. This is quite common with constructors; you'll see overloads that each set different combinations of the various properties, but if there's three or four parameters that may or may not need to be set, that can require a lot of overloads to cover all the possible combinations (it's basically a handshake problem), and all these overloads have more or less identical behaviour internally. You can solve this by having most of them just set defaults and call the one that sets all parameters, but it's less code to use optional parameters.
Methods where the coder calling them might want to set parameters, but you want them to know what a "normal" value is. For example, the encryption method we mentioned earlier might require various parameters for whatever maths goes on internally. A coder might see that they can pass in values for workFactor or blockSize, but they may not know what "normal" values are for these. Commenting and documentation will help here, but so will optional parameters - the coder will see in the signature [workFactor = 24], [blockSize = 256] which helps them judge what kind of values are sensible. (Of course, this is no excuse to not comment and document your code properly.)
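As a sketch of that last point (workFactor and blockSize are just the hypothetical names from above), the signature itself documents what "normal" values are:
public static class Crypto
{
    public static byte[] Encrypt(byte[] data, string passphrase,
                                 int workFactor = 24, int blockSize = 256)
    {
        // The actual key derivation and encryption are omitted; the point is that a
        // caller who "doesn't know crypto" can simply call Encrypt(data, passphrase).
        throw new System.NotImplementedException();
    }
}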
You're not making more readable and efficient code.
First, your method signatures will be gratuitously longer.
Second, overloads don't exist for the sole purpose of using default values - a quick look at the Convert class should show you that. Many times overloaded methods have different execution paths, which will become spaghetti code in your single non overloaded method.
Third, sometimes you need to know whether a value was used as input. How would you then know whether the user passed those values, if he happens to use the same value as the default one you were using?
Often I see optional parameters in C# like IMyInterface parameter = null.
Especially when I see that in constructors, I would even say it's a code smell.
I know that's a hard verdict - but in this case it obscures your dependencies, which is bad.
Like vcsjones said, you can use those language features right, but I believe optional parameters should be used only in some edge-cases.
Just my opinion.

Should private functions modify field variable, or use a return value?

I'm often running into the same train of thought when I'm creating private methods whose purpose is to modify (usually initialize) an existing variable in the scope of the class.
I can't decide which of the following two methods I prefer.
Let's say we have a class Test with a field x; let it be an integer. How do you usually modify/initialize x?
a) Modifying the field directly
private void initX(){
// Do something to determine x. Here its very simple.
x = 60;
}
b) Using a return value
private int initX(){
// Do something to determine x. Here its very simple.
return 60;
}
And in the constructor:
public Test(){
// a)
initX();
// b)
x = initX();
}
I like that it's clear in b) which variable we are dealing with. But on the other hand, a) seems sufficient most of the time - the function name implies perfectly well what we are doing!
Which one do you prefer and why?
Thanks for your answers, guys! I'll make this a community wiki as I realize that there is no correct answer to this.
I usually prefer b), only I pick a different name, like computeX() in this case. A few reasons why:
if I declare computeX() as protected, there is a simple way for a subclass to influence how it works, yet x itself can remain a private field;
I like to declare fields final if that's what they are; in this case a) is not an option, since a final field has to be initialized in the constructor (this is Java-specific, but your examples all look like Java as well).
That said, I don't have a strong preference between the two methods. For instance, if I need to initialize several related fields at once, I will usually pick option a). That, though, only if I cannot, or for some reason don't want to, initialize directly in the constructor.
For initialization I prefer constructor initialization if possible,
public Test():x(val){...}, or writing the initialization code in the constructor body. The constructor is the best place to initialize all the fields (actually, that is the purpose of a constructor). I'd use the private initX() approach only if the initialization code for x is too long (just for readability), and call that function from the constructor. private int initX(), in my opinion, has nothing to do with initialization (unless you implement lazy initialization, but in that case it should return int& or const int&); it is an accessor.
I would prefer option b), because you can make it a const function in languages that support it.
With option a), there is a temptation for new, lazy or just time-stressed developers to start adding little extra tasks into the initX method, instead of creating a new one.
Also, in b), you can remove initX() from the class definition, so consumers of the object don't even have to know it's there. For example, in C++.
In the header:
class Test {
private:
    int X;
public:
    Test();
    ...
};
In the CPP file:
static int initX() { return 60; }

Test::Test() {
    X = initX();
}
Removing the init functions from the header file simplifies the class for the people that have to use it.
Neither?
I prefer to initialize in the constructor and only extract out an initialization method if I need a lot of fields initialized and/or need the ability to re-initialize at another point in the life time of an instance (without going through a destruct/construct).
More importantly, what does 60 mean?
If it is a meaningful value, make it a const with a meaningful name: NUMBER_OF_XXXXX, MINUTES_PER_HOUR, FIVE_DOZEN_APPLES, SPEED_LIMIT, ... regardless of how and where you subsequently use it (constructor, init method or getter function).
Making it a named constant makes the value reusable in and of itself. And using a const is much more "findable", especially for more ubiquitous values (like 1 or -1), than using the actual value.
Only when you want to tie this const value to a specific class would it make sense to me to create a class const or var, or - if the language does not support those - a getter class function.
Another reason to make it a (virtual) getter function would be if descendant classes need the ability to start with a different initial value.
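For example (a C# sketch; the constant's name is invented, since we don't know what 60 actually means in the question), the named-constant version of the original example would be:
public class Test
{
    private const int SpeedLimit = 60;   // hypothetical name for the magic 60

    private int x;

    public Test()
    {
        x = SpeedLimit;
    }
}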
Edit (in response to comments):
For initializations that involve complex calculations I would also extract out a method to do the calculation. The choice of making that method a procedure that directly modifies the field value (a) or a function that returns the value it should be given (b), would be driven by the question whether or not the calculation would be needed at other times than "just the constructor".
If only needed at initialization in the constructor, I would prefer method (a).
If the calculation needs to be done at other times as well, I would opt for method (b) as it also makes it possible to assign the outcome to some other field or local variable and so can be used by descendants or other users of the class without affecting the inner state of the instance.
Actually, only method a) behaves as expected (judging by the method name). Method b) should be named 'return60' in your example, or 'getXValue' in a more complicated one.
Both options are correct in my opinion. It all depends on what your intention was when the design was chosen. If your method only has to do initialization, I would prefer a) because it is simpler. If the value of x is also used for something else somewhere in the logic, using option b) might lead to more consistent code.
You should also always write method names clearly and make them correspond to the actual logic (in this case method b) has a confusing name).
@Frederik, if you use option b) and you have a LOT of field variables, the constructor will become a quite unwieldy block of code. Sometimes you just can't help but have lots and lots of member variables in a class (example: it's a domain object and its data comes straight from a very wide table in the database). The most pragmatic approach would be to modularize the code as you need to.

Is there any disadvantage of writing a long constructor?

Does it affect the application's load time?
Are there any other issues in doing so?
The question is vague on what "long" means. Here are some possible interpretations:
Interpretation #1: The constructor has many parameters
Constructors with many parameters can lead to poor readability, and better alternatives exist.
Here's a quote from Effective Java 2nd Edition, Item 2: Consider a builder pattern when faced with many constructor parameters:
Traditionally, programmers have used the telescoping constructor pattern, in which you provide a constructor with only the required parameters, another with a single optional parameter, a third with two optional parameters, and so on...
The telescoping constructor pattern is essentially something like this:
public class Telescope {
    final String name;
    final int levels;
    final boolean isAdjustable;

    public Telescope(String name) {
        this(name, 5);
    }

    public Telescope(String name, int levels) {
        this(name, levels, false);
    }

    public Telescope(String name, int levels, boolean isAdjustable) {
        this.name = name;
        this.levels = levels;
        this.isAdjustable = isAdjustable;
    }
}
And now you can do any of the following:
new Telescope("X/1999");
new Telescope("X/1999", 13);
new Telescope("X/1999", 13, true);
You can't, however, currently set only the name and isAdjustable while leaving levels at its default. You can provide more constructor overloads, but obviously the number would explode as the number of parameters grows, and you may even have multiple boolean and int arguments, which would really make a mess out of things.
As you can see, this isn't a pleasant pattern to write, and even less pleasant to use (What does "true" mean here? What's 13?).
Bloch recommends using a builder pattern, which would allow you to write something like this instead:
Telescope telly = new Telescope.Builder("X/1999").setAdjustable(true).build();
Note that now the parameters are named, and you can set them in any order you want, and you can skip the ones that you want to keep at default values. This is certainly much better than telescoping constructors, especially when there's a huge number of parameters that belong to many of the same types.
See also
Wikipedia/Builder pattern
Effective Java 2nd Edition, Item 2: Consider a builder pattern when faced with many constructor parameters (excerpt online)
Related questions
When would you use the Builder Pattern?
Is this a well known design pattern? What is its name?
Interpretation #2: The constructor does a lot of work that costs time
If the work must be done at construction time, then doing it in the constructor or in a helper method doesn't really make too much of a difference. When a constructor delegates work to a helper method, however, make sure that it's not overridable, because that could lead to a lot of problems.
Here's some quote from Effective Java 2nd Edition, Item 17: Design and document for inheritance, or else prohibit it:
There are a few more restrictions that a class must obey to allow inheritance. Constructors must not invoke overridable methods, directly or indirectly. If you violate this rule, program failure will result. The superclass constructor runs before the subclass constructor, so the overriding method in the subclass will be invoked before the subclass constructor has run. If the overriding method depends on any initialization performed by the subclass constructor, the method will not behave as expected.
Here's an example to illustrate:
public class ConstructorCallsOverride {
    public static void main(String[] args) {

        abstract class Base {
            Base() { overrideMe(); }
            abstract void overrideMe();
        }

        class Child extends Base {
            final int x;
            Child(int x) { this.x = x; }

            @Override
            void overrideMe() {
                System.out.println(x);
            }
        }

        new Child(42); // prints "0"
    }
}
Here, when Base constructor calls overrideMe, Child has not finished initializing the final int x, and the method gets the wrong value. This will almost certainly lead to bugs and errors.
Interpretation #3: The constructor does a lot of work that can be deferred
The construction of an object can be made faster when some work is deferred to when it's actually needed; this is called lazy initialization. As an example, when a String is constructed, it does not actually compute its hash code. It only does it when the hash code is first required, and then it will cache it (since strings are immutable, this value will not change).
However, consider Effective Java 2nd Edition, Item 71: Use lazy initialization judiciously. Lazy initialization can lead to subtle bugs, and doesn't always yield improved performance that justifies the added complexity. Do not prematurely optimize.
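The same idea in a C# sketch (the String hash-code example above is Java, but the pattern is identical): keep the constructor cheap and defer the expensive part until it is first needed.
using System;

public class Report
{
    // Computed only on first access, then cached by Lazy<T>.
    private readonly Lazy<string> summary;

    public Report(string[] lines)
    {
        // The constructor stays cheap; it only captures what the deferred work needs.
        summary = new Lazy<string>(() => string.Join(Environment.NewLine, lines));
    }

    public string Summary
    {
        get { return summary.Value; }
    }
}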
Constructors are a little special in that an unhandled exception in a constructor may have weird side effects. Without seeing your code I would assume that a long constructor increases the risk of exceptions. I would make the constructor as simple as needed and utilize other methods to do the rest in order to provide better error handling.
The biggest disadvantage is probably the same as writing any other long function -- that it can get complex and difficult to understand.
The rest is going to vary. First of all, length and execution time don't necessarily correlate -- you could have a single line (e.g., function call) that took several seconds to complete (e.g., connect to a server) or lots of code that executed entirely within the CPU and finished quickly.
Startup time would (obviously) only be affected by constructors that were/are invoked during startup. I haven't had an issue with this in any code I've written (at all recently anyway), but I've seen code that did. On some types of embedded systems (for one example) you really want to avoid creating and destroying objects during normal use, so you create almost everything statically during bootup. Once it's running, you can devote all the processor time to getting the real work done.
A constructor is just another function. You need very long functions, called many times, to make a program slow. So if it's only called once, it usually won't matter how much code is inside.
It affects the time it takes to construct that object, naturally, but no more than having an empty constructor and calling methods to do that work instead. It has no effect on the application load time.
In the case of a copy constructor (in C++), if you do not take the argument by reference, then creating the parameter requires copying the argument, which itself calls the copy constructor, which again copies its argument, and so on. The recursion never ends, memory fills up, and the program eventually fails with an error. If you pass the argument by reference, no new object is created to hold the value and no recursion takes place.
I would avoid doing anything in your constructor that isn't absolutely necessary. Initialize your variables in there, and try not to do much else. Additional functionality should reside in separate functions that you call only if you need to.

Is there a commonly used OO Pattern for holding "constant variables"?

I am working on a little pinball-game project for a hobby and am looking for a pattern to encapsulate constant variables.
I have a model, within which there are values which will be constant over the life of that model e.g. maximum speed/maximum gravity etc. Throughout the GUI and other areas these values are required in order to correctly validate input. Currently they are included either as references to a public static final, or just plain hard-coded. I'd like to encapsulate these "constant variables" in an object which can be injected into the model, and retrieved by the view/controller.
To clarify, the values of the "constant variables" may not necessarily be defined at compile time; they could come from reading in a file, user input, etc. What is known at compile time is which ones are needed. Perhaps an easier way to explain it is that, whatever this encapsulation is, the values it provides are immutable.
I'm looking for a way to achieve this which:
has compile time type-safety (i.e. not mapping a string to variable at runtime)
avoids anything static (including enums, which can't be extended)
I know I could define an interface which has the methods such as:
public int getMaximumSpeed();
public int getMaximumGravity();
... and inject an instance of that into the model, and make it accessible in some way. However, this results in a lot of boilerplate code, which is pretty tedious to write/test etc (I am doing this for funsies :-)).
I am looking for a better way to do this, preferably something which has the benefits of being part of a shared vocabulary, as with design patterns.
Is there a better way to do this?
P.S. I've thought some more about this, and the best trade-off I could find would be to have something like:
public class Variables {
    enum Variable {
        MaxSpeed(100),
        MaxGravity(10);

        private final Object variableValue;

        Variable(Object variableValue) {
            // assign value to field
            this.variableValue = variableValue;
        }
    }

    public Object getVariable(Variable v) {
        // look up the enum member's value
        return v.variableValue;
    }
} // end of Variables
I could then do something like:
Model m = new Model(new Variables());
Advantages: the lookup of a variable is protected by having to be a member of the enum in order to compile, variables can be added with little extra code
Disadvantages: enums cannot be extended, brittleness (a recompile is needed to add a variable), variable values would have to be cast from Object (to Integer in this example), which again isn't type safe, though generics may be an option for that... somehow
Are you looking for the Singleton or, a variant, the Monostate? If not, how does that pattern fail your needs?
Of course, here's the mandatory disclaimer that Anything Global Is Evil.
UPDATE: I did some looking, because I've been having similar debates/issues. I stumbled across a list of "alternatives" to classic global/scope solutions. Thought I'd share.
Thanks for all the time spent by you guys trying to decipher what is a pretty weird question.
I think, in terms of design patterns, the closest to what I'm describing is the factory pattern, where I have a factory of pseudo-constants. Technically it's not creating an instance on each call, but rather always providing the same instance (in the sense of a Guice provider). But I can create several factories, each of which can provide different pseudo-constants, and inject each into a different model, so the model's UI can validate input a lot more flexibly.
If anyone's interested, I've come to the conclusion that an interface providing a method for each pseudo-constant is the way to go:
public interface IVariableProvider {
    public int maxGravity();
    public int maxSpeed();
    // and everything else...
}

public class VariableProvider implements IVariableProvider {
    private final int maxGravity, maxSpeed; // ...and everything else

    public VariableProvider(int maxGravity, int maxSpeed) {
        this.maxGravity = maxGravity;
        this.maxSpeed = maxSpeed;
    }

    public int maxGravity() { return maxGravity; }
    public int maxSpeed() { return maxSpeed; }
}
Then I can do:
Model firstModel = new Model(new VariableProvider(2, 10));
Model secondModel = new Model(new VariableProvider(10, 100));
I think as long as the interface doesn't provide a prohibitively large number of variable getters, it wins over some parameterised lookup (which will either be vulnerable at run-time, or will prohibit extension/polymorphism).
P.S. I realise some have been questioning what my problem is with static final values. I made the statement (with tongue in cheek) to a colleague that anything static is inherently not object-oriented. So in my hobby project I used that as the basis for a thought exercise where I try to remove anything static (next I'll be trying to remove all 'if' statements ;-D). If I were on a deadline and satisfied that public static final values wouldn't hamstring testing, I would have used them pretty quickly.
If you're just using Java/IoC, why not just dependency-inject the values?
E.g. have Spring inject the values via a map, and specify the object as a singleton:
<property name="values">
<map>
<entry> <key><value>a1</value></key><value>b1</value></entry>
<entry> <key><value>a2</value></key><value>b3</value></entry>
</map>
</property>
Your class is a singleton that holds an immutable copy of the map set in Spring:
private Map<String, String> m;

public String getValue(String s)
{
    return m.containsKey(s) ? m.get(s) : null;
}

public void setValues(Map<String, String> m)
{
    this.m = Collections.unmodifiableMap(m);
}
From what I can tell, you probably don't need to implement a pattern here -- you just need access to a set of constants, and it seems to me that's handled pretty well through the use of a publicly accessible static interface to them. Unless I'm missing something. :)
If you simply want to "objectify" the constants, though, for some reason, then the Singleton pattern would probably be called for, if any; I know you mentioned in a comment that you don't mind creating multiple instances of this wrapper object, but in response I'd ask: why even introduce the sort of confusion that could arise from having multiple instances at all? What practical benefit are you looking for that'd be satisfied by having the data in object form?
Now, if the values aren't constants, then that's different -- in that case, you probably do want a Singleton or Monostate. But if they really are constants, just wrap a set of enums or static constants in a class and be done! Keep-it-simple is as good a "pattern" as any.