I am working on a little pinball-game project for a hobby and am looking for a pattern to encapsulate constant variables.
I have a model, within which there are values which will be constant over the life of that model e.g. maximum speed/maximum gravity etc. Throughout the GUI and other areas these values are required in order to correctly validate input. Currently they are included either as references to a public static final, or just plain hard-coded. I'd like to encapsulate these "constant variables" in an object which can be injected into the model, and retrieved by the view/controller.
To clarify, the values of the "constant variables" may not necessarily be defined at compile time; they could come from reading in a file, user input, etc. What is known at compile time is which ones are needed. Perhaps an easier way to explain it is that, whatever this encapsulation is, the values it provides are immutable.
I'm looking for a way to achieve this which:
has compile-time type-safety (i.e. not mapping a string to a variable at runtime)
avoids anything static (including enums, which can't be extended)
I know I could define an interface which has the methods such as:
public int getMaximumSpeed();
public int getMaximumGravity();
... and inject an instance of that into the model, and make it accessible in some way. However, this results in a lot of boilerplate code, which is pretty tedious to write/test etc (I am doing this for funsies :-)).
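To make the boilerplate concrete, this is roughly the shape I have in mind (names purely illustrative):

public interface ModelConstants {
    int getMaximumSpeed();
    int getMaximumGravity();
    // ... one getter per constant
}

public class Model {
    private final ModelConstants constants;

    public Model(ModelConstants constants) {
        this.constants = constants;
    }

    public ModelConstants getConstants() {
        return constants; // the view/controller reads the same immutable values
    }
}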
I am looking for a better way to do this, preferably something which has the benefits of being part of a shared vocabulary, as with design patterns.
Is there a better way to do this?
P.S. I've thought some more about this, and the best trade-off I could find would be to have something like:
public class Variables {
    enum Variable {
        MaxSpeed(100),
        MaxGravity(10);

        private final Object value;

        Variable(Object variableValue) {
            this.value = variableValue; // assign value to field
        }
    }

    public Object getVariable(Variable v) {
        return v.value; // look up the enum member and get its value
    }
} // end of Variables
I could then do something like:
Model m = new Model(new Variables());
Advantages: a lookup only compiles if the variable is a member of the enum, and variables can be added with little extra code
Disadvantages: enums cannot be extended, brittleness (a recompile is needed to add a variable), variable values would have to be cast from Object (to Integer in this example), which again isn't type safe, though generics may be an option for that... somehow
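Following that thought, here is a rough sketch of how generics could push the cast into one place (a typed-key lookup; the names are only illustrative, and the keys are still effectively constants, so it doesn't fully satisfy my "no statics" goal):

import java.util.HashMap;
import java.util.Map;

final class VariableKey<T> {
    static final VariableKey<Integer> MAX_SPEED = new VariableKey<>();
    static final VariableKey<Integer> MAX_GRAVITY = new VariableKey<>();
}

class TypedVariables {
    private final Map<VariableKey<?>, Object> values = new HashMap<>();

    public <T> void set(VariableKey<T> key, T value) {
        values.put(key, value);
    }

    @SuppressWarnings("unchecked")
    public <T> T get(VariableKey<T> key) {
        return (T) values.get(key); // the single unchecked cast is contained here
    }
}

// usage: variables.set(VariableKey.MAX_SPEED, 100);
//        int maxSpeed = variables.get(VariableKey.MAX_SPEED);

The model could still receive a TypedVariables instance through its constructor, as before.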
Are you looking for the Singleton or, a variant, the Monostate? If not, how does that pattern fail your needs?
Of course, here's the mandatory disclaimer that Anything Global Is Evil.
UPDATE: I did some looking, because I've been having similar debates/issues. I stumbled across a list of "alternatives" to classic global/scope solutions. Thought I'd share.
Thanks for all the time spent by you guys trying to decipher what is a pretty weird question.
I think, in terms of design patterns, the closest that comes to what I'm describing is the factory pattern, where I have a factory of pseudo-constants. Technically it's not creating an instance on each call, but rather always providing the same instance (in the sense of a Guice provider). But I can create several factories, each of which can provide different pseudo-constants, and inject each into a different model, so the model's UI can validate input a lot more flexibly.
If anyone's interested, I've come to the conclusion that an interface providing a method for each pseudo-constant is the way to go:
public interface IVariableProvider {
    public int maxGravity();
    public int maxSpeed();
    // and everything else...
}

public class VariableProvider implements IVariableProvider {
    private final int maxGravity, maxSpeed; // ... and everything else

    public VariableProvider(int maxGravity, int maxSpeed) {
        this.maxGravity = maxGravity;
        this.maxSpeed = maxSpeed;
    }

    public int maxGravity() { return maxGravity; }
    public int maxSpeed() { return maxSpeed; }
}
Then I can do:
Model firstModel = new Model(new VariableProvider(2, 10));
Model secondModel = new Model(new VariableProvider(10, 100));
I think as long as the interface doesn't provide a prohibitively large number of variable getters, it wins over some parameterised lookup (which will either be vulnerable at run-time, or will prohibit extension/polymorphism).
P.S. I realise some have been questioning what my problem is with static final values. I made the statement (with tongue in cheek) to a colleague that anything static is inherently not object-oriented. So in my hobby project I used that as the basis for a thought exercise where I try to remove anything static from the project (next I'll be trying to remove all 'if' statements ;-D). If I was on a deadline and I was satisfied public static final values wouldn't hamstring testing, I would have used them pretty quickly.
If you're just using Java/IoC, why not just dependency-inject the values?
e.g. have Spring inject the values via a map, and specify the object as a singleton -
<property name="values">
<map>
<entry> <key><value>a1</value></key><value>b1</value></entry>
<entry> <key><value>a2</value></key><value>b3</value></entry>
</map>
</property>
Your class is a singleton that holds an immutable copy of the map set in Spring -
private Map<String, String> m;

public String getValue(String s)
{
    return m.containsKey(s) ? m.get(s) : null;
}

public void setValues(Map<String, String> m)
{
    this.m = Collections.unmodifiableMap(m);
}
From what I can tell, you probably don't need to implement a pattern here -- you just need access to a set of constants, and it seems to me that's handled pretty well through the use of a publicly accessible static interface to them. Unless I'm missing something. :)
If you simply want to "objectify" the constants though, for some reason, then the Singleton pattern would probably be called for, if any; I know you mentioned in a comment that you don't mind creating multiple instances of this wrapper object, but in response I'd ask, then why even introduce the sort of confusion that could arise from having multiple instances at all? What practical benefit are you looking for that'd be satisfied by having the data in object form?
Now, if the values aren't constants, then that's different -- in that case, you probably do want a Singleton or Monostate. But if they really are constants, just wrap a set of enums or static constants in a class and be done! Keep-it-simple is as good a "pattern" as any.
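For example, something as plain as this (values made up to match the question):

public final class PinballConstants {
    public static final int MAX_SPEED = 100;
    public static final int MAX_GRAVITY = 10;

    private PinballConstants() {
        // no instances needed
    }
}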
Related
I've read a lot of articles about "public vs getter/setter", but I still wonder whether there is anything good about public variables.
Or the question is:
If you're going to make a new awesome programming language, are you still going to support public variables, and why?
I agree with almost everything that's been said by everyone else, but wanted to add this:
Public isn't automatically bad. Public is bad if you're writing an Object Class. Data Classes are just fine. There's nothing wrong with this class:
public class CommentRecord
{
public int id;
public string comment;
}
... why? Because the class isn't using the variables for anything. It's just a data object - it's meant to be just a simple data repository.
But there's absolutely something wrong with this class:
public class CommentRecord
{
public int id;
public string comment;
public void UpdateInSQL()
{
// code to update the SQL table for the row with commentID = this.id
// and set its UserComment column to this.comment
}
}
... why is this bad? Because it's not a data class. It's a class that actually does stuff with its variables - and because of that, making them public forces the person using the class to know the internals of the class. The person using it needs to know "If I want to update the comment, I have to change the public variable, but not change the id, then call the UpdateInSQL() method." Worse, if they screw up, they use the class in a way it wasn't intended and in a way that'll cause unforeseen consequences down the line!
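For contrast, here is a rough sketch of the encapsulated version (shown in Java-style syntax; the names mirror the example above and the persistence details remain hypothetical):

public class CommentRecord {
    private final int id;
    private String comment;

    public CommentRecord(int id, String comment) {
        this.id = id;
        this.comment = comment;
    }

    // callers express their intent; the "change the field, then call the right method" dance goes away
    public void updateComment(String newComment) {
        this.comment = newComment;
        updateInSQL();
    }

    private void updateInSQL() {
        // code to update the SQL table for the row with commentID = this.id
        // and set its UserComment column to this.comment
    }
}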
If you want to get some more info on this, take a look at Clean Code by Robert Martin, Chapter 6, on "Data/Object Anti-Symmetry"
A public variable essentially means you have a globally accessible/changeable variable within the scope of an object. Is there really a use case for this?
Take this example: you have a class DatabaseQueryHandler which has a variable databaseAccessor. Under what circumstances would you want this variable to be:
Publicly accessible (i.e. gettable)
Publicly settable
For option #1 I can think of a few reasons - you may want to get the last insert ID after an insert operation, you may want to check any errors the last query generated, commit or rollback transactions, etc., and it might make more logical sense to have these methods written in the class DatabaseAccessor than DatabaseQueryHandler.
Option #2 is less desirable, especially if you are doing OOP and abiding by SOLID principles, in particular the ISP and DIP principles. In that case, when would you want to set the variable databaseAccessor in DatabaseQueryHandler? Probably on construction only, and never at any time after that. You probably also want it type-hinted at the interface level as well, so that you can code to interfaces. Also, why would you need an arbitrary object to be able to alter the database accessor? What happens if Foo changes the variable DatabaseQueryHandler->databaseAccessor to be NULL and then Bar tries to call DatabaseQueryHandler->databaseAccessor->beginTransaction()?
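To illustrate the construction-only option, a rough sketch (in Java-style syntax, since the idea is language-agnostic; the names are only illustrative):

interface DatabaseAccessor {
    void beginTransaction();
    void commit();
    void rollback();
}

class DatabaseQueryHandler {
    private final DatabaseAccessor databaseAccessor; // set once, never reassigned

    DatabaseQueryHandler(DatabaseAccessor databaseAccessor) {
        this.databaseAccessor = databaseAccessor;
    }

    void runInTransaction(Runnable work) {
        databaseAccessor.beginTransaction();
        try {
            work.run();
            databaseAccessor.commit();
        } catch (RuntimeException e) {
            databaseAccessor.rollback();
            throw e;
        }
    }
}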
I'm just giving one example here, and it is by no means bullet proof. I program in PHP (dodges the hurled rotten fruit) and take OOP and SOLID very seriously given the looseness of the language. I'm sure there will be arguments on both sides of the fence, but I would say that if you're considering using a public class variable, instead consider what actually needs to access it, and how that variable is to be used. In most cases the functionality can be exposed via public methods without allowing unexpected alteration of the variable type.
The simple answer is: yes, they are bad. There are many reasons for that, like coupling and unmaintainable code. In practice you should not use them. In OOP the public variable alternative is the Singleton, which is considered a bad practice. Check out here.
It has a lot to do with encapsulation. You don't want your variable to be accessed just anyhow. Other languages, like Objective-C on iOS, use properties:
@property (nonatomic, strong) NSArray* array;
Then the compiler will generate the instance variable with its getter and setter implicitly. In this case there is no need to use a variable directly (though other developers still prefer to use variables). You can then make this property public by declaring it in the .h file or private by declaring it in the .m file.
I am writing and browsing through a lot of methods in the project I'm working with, and as much as I think overloads are useful, I think that having a simple optional parameter with a default value can get around the problem, aiding in writing more readable and, I would think, efficient code.
Now I hear that using these parameters in methods could carry nasty side effects.
What are these side effects, and is it worth the risk of using these parameters to keep the code clean?
I'll start by prefacing my answer by saying Any language feature can be used well or it can be used poorly. Optional parameters have some drawbacks, just like declaring locals as var does, or generics.
What are these side effects
Two come to mind.
The first is that the default values for optional parameters are compile-time constants that are embedded in the consumer of the method. Let's say I have this class in AssemblyA:
public class Foo
{
public void Bar(string baz = "cat")
{
//Omitted
}
}
And this in AssemblyB:
public void CallBar()
{
new Foo().Bar();
}
What really ends up being produced is this, in assemblyB:
public void CallBar()
{
new Foo().Bar("cat");
}
So, if you were to ever change your default value on Bar, both assemblyA and assemblyB would need to be recompiled. Because of this, I tend not to declare methods as public if they use optional parameters, rather internal or private. If I needed to declare it as public, I would use overloads.
The second issue being how they interact with interfaces and polymorphism. Take this interface:
public interface IBar
{
void Foo(string baz = "cat");
}
and this class:
public class Bar : IBar
{
public void Foo(string baz = "dog")
{
Console.WriteLine(baz);
}
}
These lines will print different things:
IBar bar1 = new Bar();
bar1.Foo(); //Prints "cat"
var bar2 = new Bar();
bar2.Foo(); //Prints "dog"
Those are two negatives that come to mind. However, there are positives, as well. Consider this method:
void Foo(string bar = "bar", string baz = "baz", string yat = "yat")
{
}
Creating overloads to cover all the possible permutations of defaults would take several, if not dozens, of lines of code.
Conclusion: optional parameters are good, and they can be bad. Just like anything else.
Necromancing.
The thing with optional parameters is, they are BAD because they are unintuitive - meaning they do NOT behave the way you would expect them to.
Here's why:
They break ABI compatibility!
(and strictly speaking, they also break API compatibility, when used in constructors)
For example:
You have a DLL, in which you have code such as this
public void Foo(string a = "dog", string b = "cat", string c = "mouse")
{
Console.WriteLine(a);
Console.WriteLine(b);
Console.WriteLine(c);
}
Now, what you would probably expect to happen is that the compiler generates this code behind the scenes:
public void Foo(string a, string b, string c)
{
Console.WriteLine(a);
Console.WriteLine(b);
Console.WriteLine(c);
}
public void Foo(string a, string b)
{
Foo(a, b, "mouse");
}
public void Foo(string a)
{
Foo(a, "cat", "mouse");
}
public void Foo()
{
Foo("dog", "cat", "mouse");
}
or perhaps more realistically, you would expect it to pass NULLs and do
public void Foo(string a, string b, string c)
{
if(a == null) a = "dog";
if(b == null) b = "cat";
if(c == null) c = "mouse";
Console.WriteLine(a);
Console.WriteLine(b);
Console.WriteLine(c);
}
so you can change the default-arguments at one place.
But this is not what the C# compiler does, because then you couldn't do:
Foo(a:"dog", c:"dogfood");
So instead the C# compiler does this:
Everywhere where you write e.g.
Foo(a:"dog", c:"mouse");
or Foo(a:"dog");
or Foo(a:"dog", b:"bla");
It substitutes it with
Foo(your_value_for_a_or_default, your_value_for_b_or_default, your_value_for_c_or_default);
So that means if you add another default value, change a default value, or remove a value, you don't break API compatibility, but you do break ABI compatibility.
So what this means is, if you just replace the DLL out of all files that compose an application, you'll break every application out there that uses your DLL. That's rather bad. Because if your DLL contains a bad bug, and I have to replace it, I have to recompile my entire application with your latest DLL. That might contain a lot of changes, so I can't do it quickly. I also might not have the old source code handy, and the application might be in a major modification, with no idea what commit the old version of the application was compiled on. So I might not be able to recompile at this time. That is very bad.
And as for only using them in PUBLIC methods, not private, protected or internal ones:
Yea, nice try, but one can still use private, protected or internal methods with reflection. Not because one wants to, but because it sometimes is necessary, as there is no other way. (Example).
Interfaces have already been mentioned by vcsjones.
The problem there is code-duplication (which allows for divergent default-values - or ignoring of default-values).
But the real bummer is, that in addition to that, you can now introduce API-breaking-changes in Constructors...
Example:
public class SomeClass
{
public SomeClass(bool aTinyLittleBitOfSomethingNew = true)
{
}
}
And now, everywhere where you use
System.Activator.CreateInstance<SomeClass>();
you'll now get a RUNTIME exception, because now there is NO parameter-less constructor...
The compiler won't be able to catch this at compile time.
Good night if you happen to have a lot of Activator.CreateInstances in your code.
You'll be screwed, and screwed badly.
Bonus points will be awarded if some of the code you have to maintain uses reflection to create class instances, or use reflection to access private/protected/internal methods...
Don't use optional parameters !
Especially not in class constructors.
(Disclaimer: sometimes, there simply is no other way - e.g. an attribute on a property that takes the name of the property as constructor argument automagically - but try to limit it to these few cases, especially if you can make do with overloading)
I guess theoretically they are fine for quick prototyping, but only for that.
But since prototypes have a strong tendency to go productive (at least in the company I currently work), don't use it for that, either.
I'd say that it depends how different the method becomes when you include or omit that parameter.
If a method's behaviour and internal functioning is very different without a parameter, then make it an overload. If you're using optional parameters to change behaviour, DON'T. Instead of having a method that does one thing with one parameter, and something different when you pass in a second one, have one method that does one thing, and a different method that does the other thing. If their behaviour differs greatly, then they should probably be entirely separate, and not overloads with the same name.
If you need to know whether a parameter was user-specified or left blank, then consider making it an overload. Sometimes you can use nullable values if the place they're being passed in from won't allow nulls, but generally you can't rule out the possibility that the user passed null, so if you need to know where the value came from as well as what the value is, don't use optional parameters.
Above all, remember that the optional parameters should (kinda by definition) be used for things that have a small, trivial or otherwise unimportant effect on the outcome of the method. If you change the default value, any place that calls the method without specifying a value should still be happy with the result. If you change the default and then find that some other bit of code that calls the method with the optional parameter left blank is now not working how it should, then it probably shouldn't have been an optional parameter.
Places where it can be a good idea to use optional parameters are:
Methods where it's safe to just set something to a default if a value isn't provided. This basically covers anything where the caller might not know or care what the value is. A good example is in encryption methods - the caller may just think "I don't know crypto, I don't know what value R should be set to, I just want this to be encrypted", in which case you set the defaults to sensible values. Often these start out as a method with an internal variable that you then move to be user-provided. It's pointless making two methods when the only difference is that one has var foo = bar; somewhere at the start.
Methods that have a set of parameters, but not all of them are needed. This is quite common with constructors; you'll see overloads that each set different combinations of the various properties, but if there's three or four parameters that may or may not need to be set, that can require a lot of overloads to cover all the possible combinations (it's basically a handshake problem), and all these overloads have more or less identical behaviour internally. You can solve this by having most of them just set defaults and call the one that sets all parameters, but it's less code to use optional parameters.
Methods where the coder calling them might want to set parameters, but you want them to know what a "normal" value is. For example, the encryption method we mentioned earlier might require various parameters for whatever maths goes on internally. A coder might see that they can pass in values for workFactor or blockSize, but they may not know what "normal" values are for these. Commenting and documentation will help here, but so will optional parameters - the coder will see in the signature [workFactor = 24], [blockSize = 256] which helps them judge what kind of values are sensible. (Of course, this is no excuse to not comment and document your code properly.)
You're not making more readable and efficient code.
First, your method signatures will be gratuitously longer.
Second, overloads don't exist for the sole purpose of using default values - a quick look at the Convert class should show you that. Many times overloaded methods have different execution paths, which will become spaghetti code in your single non overloaded method.
Third, sometimes you need to know whether a value was used as input. How would you then know whether the user passed those values, if he happens to use the same value as the default one you were using?
Often I see optional parameters in C# like IMyInterface parameter = null.
Especially when I see that in constructors I would even say it's a code smell.
I know that's a hard verdict - but in this case it obscures your dependencies, which is bad.
Like vcsjones said, you can use those language features right, but I believe optional parameters should be used only in some edge-cases.
my opinion.
Today I read a book and the author wrote that in a well-designed class the only way to access attributes is through one of that class's methods. Is it a widely accepted thought? Why is it so important to encapsulate the attributes? What could be the consequences of not doing it? I read somewhere earlier that this improves security or something like that. Any example in PHP or Java would be very helpful.
Is it a widely accepted thought?
In the object-oriented world, yes.
Why is it so important to encapsulate the attributes? What could be the consequences of not doing it?
Objects are intended to be cohesive entities containing data and behavior that other objects can access in a controlled way through a public interface. If a class does not encapsulate its data and behavior, it no longer has control over the data being accessed and cannot fulfill its contracts with other objects implied by the public interface.
One of the big problems with this is that if a class has to change internally, the public interface shouldn't have to change. That way it doesn't break any code and other classes can continue using it as before.
Any example in PHP or Java would be very helpful.
Here's a Java example:
public class MyClass {
    // Should not be < 0
    public int importantValue;
    ...
    public void setImportantValue(int newValue) {
        if (newValue < 0) {
            throw new IllegalArgumentException("value cannot be < 0");
        }
        importantValue = newValue;
    }
    ...
}
The problem here is that because I haven't encapsulated importantValue by making it private rather than public, anyone can come along and circumvent the check I put in the setter to prevent the object from having an invalid state. importantValue should never be less than 0, but the lack of encapsulation makes it impossible to prevent it from being so.
What could be the consequences of not doing it?
The whole idea behind encapsulation is that all knowledge of anything related to the class (other than its interface) is within the class itself. For example, allowing direct access to attributes puts the onus of making sure any assignments are valid on the code doing the assigning. If the definition of what's valid changes, you have to go through and audit everything using the class to make sure they conform. Encapsulating the rule in a "setter" method means you only have to change it in one place, and any caller trying anything funny can get an exception thrown at it in return. There are lots of other things you might want to do when an attribute changes, and a setter is the place to do it.
Whether or not allowing direct access for attributes that don't have any rules to bind them (e.g., anything that fits in an integer is okay) is good practice is debatable. I suppose that using getters and setters is a good idea for the sake of consistency, i.e., you always know that you can call setFoo() to alter the foo attribute without having to look up whether or not you can do it directly. They also allow you to future-proof your class so that if you have additional code to execute, the place to put it is already there.
Personally, I think having to use getters and setters is clumsy-looking. I'd much rather write x.foo = 34 than x.setFoo(34), and I look forward to the day when some language comes up with the equivalent of database triggers for members that allow you to define code that fires before, after, or instead of assignments.
Opinions on how "good OOD" is achieved are dime a dozen, and also very experienced programmers and designers tend to disagree about design choices and philosophies. This could be a flame-war starter, if you ask people across a wide varieties of language background and paradigms.
And yes, in theory are theory and practice the same, so language choice shouldn't influence high level design very much. But in practice they do, and good and bad things happen because of that.
Let me add this:
It depends. Encapsulation (in a supporting language) gives you some control over how your classes are used, so you can tell people: this is the API, and you have to use this. In other languages (e.g. Python) the difference between the official API and informal (subject-to-change) interfaces is by naming convention only (after all, we're all consenting adults here)
Encapsulation is not a security feature.
Another thought to ponder
Encapsulation with accessors also provides much better maintainability in the future. In Feanor's answer above, it works great to enforce security checks (assuming your instance variable is private), but it can have much further-reaching benefits.
Consider the following scenario:
1) you complete your application, and distribute it to some set of users (internal, external, whatever).
2) BigCustomerA approaches your team and wants an audit trail added to the product.
If everyone is using the accessor methods in their code, this becomes almost trivial to implement. Something like so:
MyAPI Version 1.0
public class MyClass {
private int importantValue;
...
public void setImportantValue(int newValue) {
if (newValue < 0) {
throw new IllegalArgumentException("value cannot be < 0");
}
importantValue = newValue;
}
...
}
MyAPI V1.1 (now with audit trails)
public class MyClass {
private int importantValue;
...
public void setImportantValue(int newValue) {
if (newValue < 0) {
throw new IllegalArgumentException("value cannot be < 0");
}
this.addAuditTrail("importantValue", importantValue, newValue);
importantValue = newValue;
}
...
}
Existing users of the API make no changes to their code and the new feature (audit trail) is now available.
Without encapsulation using accessors, you're faced with a huge migration effort.
When coding for the first time, it will seem like a lot of work. It's much faster to type class.varName = something vs class.setVarName(something); but if everyone took the easy way out, getting paid for BigCustomerA's feature request would be a huge effort.
In Object Oriented Programming there is a principle known as the Open/Closed Principle (http://en.wikipedia.org/wiki/Open/closed_principle):
OCP --> the Open/Closed Principle. This principle states that a well-designed class should be open for extension (inheritance) but closed for modification of internal members (encapsulation). It means that you should not be able to modify the state of an object without the class itself being in control of the change.
So, newer languages only modify internal variables (fields) through properties (getter and setter methods in C++ or Java). In C#, properties compile to methods in MSIL.
C#:
int _myproperty = 0;
public int MyProperty
{
get { return _myproperty; }
set { if (_someVariable == someConstantValue) { _myproperty = value; } else { _myproperty = _someOtherValue; } }
}
C++/Java:
int _myproperty = 0;
public void setMyProperty(int value)
{
if (value == someConstantValue) { _myproperty = value; } else { _myproperty = _someOtherValue; }
}
public int getMyProperty()
{
return _myproperty;
}
Take these ideas (from Head First C#):
Think about ways the fields can be misused. What can go wrong if they're not set properly?
Is everything in your class public? Spend some time thinking about encapsulation.
What fields require processing or calculation? They are prime candidates.
Only make fields and methods public if you need to. If you don't have a reason to declare something public, don't.
Does it affect the time it takes to load the application?
Or are there any other issues in doing so?
The question is vague on what "long" means. Here are some possible interpretations:
Interpretation #1: The constructor has many parameters
Constructors with many parameters can lead to poor readability, and better alternatives exist.
Here's a quote from Effective Java 2nd Edition, Item 2: Consider a builder pattern when faced with many constructor parameters:
Traditionally, programmers have used the telescoping constructor pattern, in which you provide a constructor with only the required parameters, another with a single optional parameter, a third with two optional parameters, and so on...
The telescoping constructor pattern is essentially something like this:
public class Telescope {
final String name;
final int levels;
final boolean isAdjustable;
public Telescope(String name) {
this(name, 5);
}
public Telescope(String name, int levels) {
this(name, levels, false);
}
public Telescope(String name, int levels, boolean isAdjustable) {
this.name = name;
this.levels = levels;
this.isAdjustable = isAdjustable;
}
}
And now you can do any of the following:
new Telescope("X/1999");
new Telescope("X/1999", 13);
new Telescope("X/1999", 13, true);
You can't, however, currently set only the name and isAdjustable while leaving levels at its default. You can provide more constructor overloads, but obviously the number would explode as the number of parameters grows, and you may even have multiple boolean and int arguments, which would really make a mess out of things.
As you can see, this isn't a pleasant pattern to write, and even less pleasant to use (What does "true" mean here? What's 13?).
Bloch recommends using a builder pattern, which would allow you to write something like this instead:
Telescope telly = new Telescope.Builder("X/1999").setAdjustable(true).build();
Note that now the parameters are named, and you can set them in any order you want, and you can skip the ones that you want to keep at default values. This is certainly much better than telescoping constructors, especially when there's a huge number of parameters that belong to many of the same types.
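For reference, a rough sketch of what such a builder could look like for the Telescope example above (illustrative only; Bloch's own sample differs in detail):

public class Telescope {
    private final String name;
    private final int levels;
    private final boolean isAdjustable;

    private Telescope(Builder builder) {
        this.name = builder.name;
        this.levels = builder.levels;
        this.isAdjustable = builder.isAdjustable;
    }

    public static class Builder {
        private final String name;      // required
        private int levels = 5;         // optional, with defaults
        private boolean isAdjustable = false;

        public Builder(String name) {
            this.name = name;
        }

        public Builder setLevels(int levels) {
            this.levels = levels;
            return this;
        }

        public Builder setAdjustable(boolean isAdjustable) {
            this.isAdjustable = isAdjustable;
            return this;
        }

        public Telescope build() {
            return new Telescope(this);
        }
    }
}

The defaults (5 and false) mirror the values the telescoping constructors above fall back to.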
See also
Wikipedia/Builder pattern
Effective Java 2nd Edition, Item 2: Consider a builder pattern when faced with many constructor parameters (excerpt online)
Related questions
When would you use the Builder Pattern?
Is this a well known design pattern? What is its name?
Interpretation #2: The constructor does a lot of work that costs time
If the work must be done at construction time, then doing it in the constructor or in a helper method doesn't really make too much of a difference. When a constructor delegates work to a helper method, however, make sure that it's not overridable, because that could lead to a lot of problems.
Here's some quote from Effective Java 2nd Edition, Item 17: Design and document for inheritance, or else prohibit it:
There are a few more restrictions that a class must obey to allow inheritance. Constructors must not invoke overridable methods, directly or indirectly. If you violate this rule, program failure will result. The superclass constructor runs before the subclass constructor, so the overriding method in the subclass will be invoked before the subclass constructor has run. If the overriding method depends on any initialization performed by the subclass constructor, the method will not behave as expected.
Here's an example to illustrate:
public class ConstructorCallsOverride {
public static void main(String[] args) {
abstract class Base {
Base() { overrideMe(); }
abstract void overrideMe();
}
class Child extends Base {
final int x;
Child(int x) { this.x = x; }
@Override void overrideMe() {
System.out.println(x);
}
}
new Child(42); // prints "0"
}
}
Here, when Base constructor calls overrideMe, Child has not finished initializing the final int x, and the method gets the wrong value. This will almost certainly lead to bugs and errors.
Interpretation #3: The constructor does a lot of work that can be deferred
The construction of an object can be made faster when some work is deferred to when it's actually needed; this is called lazy initialization. As an example, when a String is constructed, it does not actually compute its hash code. It only does it when the hash code is first required, and then it will cache it (since strings are immutable, this value will not change).
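As a small illustration of the same caching idea (a hypothetical class, not the actual String source):

public final class Name {
    private final String value;
    private int hash; // 0 means "not computed yet"

    public Name(String value) {
        this.value = value;
    }

    @Override
    public int hashCode() {
        int h = hash;
        if (h == 0) {             // first use: compute and cache
            h = value.hashCode();
            hash = h;
        }
        return h;                 // later uses return the cached value
    }

    // equals(...) omitted for brevity; a real class overriding hashCode should override equals too
}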
However, consider Effective Java 2nd Edition, Item 71: Use lazy initialization judiciously. Lazy initialization can lead to subtle bugs, and doesn't always yield improved performance that justifies the added complexity. Do not prematurely optimize.
Constructors are a little special in that an unhandled exception in a constructor may have weird side effects. Without seeing your code I would assume that a long constructor increases the risk of exceptions. I would make the constructor as simple as needed and utilize other methods to do the rest in order to provide better error handling.
The biggest disadvantage is probably the same as writing any other long function -- that it can get complex and difficult to understand.
The rest is going to vary. First of all, length and execution time don't necessarily correlate -- you could have a single line (e.g., function call) that took several seconds to complete (e.g., connect to a server) or lots of code that executed entirely within the CPU and finished quickly.
Startup time would (obviously) only be affected by constructors that were/are invoked during startup. I haven't had an issue with this in any code I've written (at all recently anyway), but I've seen code that did. On some types of embedded systems (for one example) you really want to avoid creating and destroying objects during normal use, so you create almost everything statically during bootup. Once it's running, you can devote all the processor time to getting the real work done.
Constructor is yet another function. You need very long functions called many times to make the program work slow. So if it's only called once it usually won't matter how much code is inside.
It affects the time it takes to construct that object, naturally, but no more than having an empty constructor and calling methods to do that work instead. It has no effect on the application load time.
In the case of a copy constructor, if we do not pass the argument by reference, then passing the value into the copy constructor creates a new object, which in turn calls the copy constructor again. Each call creates yet another object and invokes the copy constructor once more, so the recursion never terminates, memory fills up, and an error message is displayed.
If we pass a reference, no new object is created to hold the value, and no recursion takes place.
I would avoid doing anything in your constructor that isn't absolutely necessary. Initialize your variables in there, and try not to do much else. Additional functionality should reside in separate functions that you call only if you need to.
I have an object called Parameters that gets tossed from method to method down and up the call tree, across package boundaries. It has about fifty state variables. Each method might use one or two variables to control its output.
I think this is a bad idea, because I can't easily see what a method needs in order to function, or even what might happen with a certain combination of parameters for module Y, which is totally unrelated to my current module.
What are some good techniques for decreasing coupling to this god object, or ideally eliminating it?
public void ExporterExcelParFonds(ParametresExecution parametres)
{
ApplicationExcel appExcel = null;
LogTool.Instance.ExceptionSoulevee = false;
bool inclureReferences = parametres.inclureReferences;
bool inclureBornes = parametres.inclureBornes;
DateTime dateDebut = parametres.date;
DateTime dateFin = parametres.dateFin;
try
{
LogTool.Instance.AfficherMessage(Variables.msg_GenerationRapportPortefeuilleReference);
bool fichiersPreparesAvecSucces = PreparerFichiers(parametres, Sections.exportExcelParFonds);
if (!fichiersPreparesAvecSucces)
{
parametres.afficherRapportApresGeneration = false;
LogTool.Instance.ExceptionSoulevee = true;
}
else
{
    // ... (rest of the export logic omitted)
}
}
catch (Exception)
{
    // ... (exception handling omitted)
}
}
The caller would do:
PortefeuillesReference pr = new PortefeuillesReference();
pr.ExporterExcelParFonds(parametres);
First, at the risk of stating the obvious: pass the parameters which are used by the methods, rather than the god object.
This, however, might lead to some methods needing huge amounts of parameters because they call other methods, which call other methods in turn, etcetera. That was probably the inspiration for putting everything in a god object. I'll give a simplified example of such a method with too many parameters; you'll have to imagine that "too many" == 3 here :-)
public void PrintFilteredReport(
Data data, FilterCriteria criteria, ReportFormat format)
{
var filteredData = Filter(data, criteria);
PrintReport(filteredData, format);
}
So the question is, how can we reduce the amount of parameters without resorting to a god object? The answer is to get rid of procedural programming and make good use of object oriented design. Objects can use each other without needing to know the parameters that were used to initialize their collaborators:
// dataFilter service object only needs to know the criteria
var dataFilter = new DataFilter(criteria);
// report printer service object only needs to know the format
var reportPrinter = new ReportPrinter(format);
// filteredReportPrinter service object is initialized with a
// dataFilter and a reportPrinter service, but it doesn't need
// to know which parameters those are using to do their job
var filteredReportPrinter = new FilteredReportPrinter(dataFilter, reportPrinter);
Now the FilteredReportPrinter.Print method can be implemented with only one parameter:
public void Print(Data data)
{
var filteredData = this.dataFilter.Filter(data);
this.reportPrinter.Print(filteredData);
}
Incidentally, this sort of separation of concerns and dependency injection is good for more than just eliminating parameters. If you access collaborator objects through interfaces, then that makes your class
very flexible: you can set up FilteredReportPrinter with any filter/printer implementation you can imagine
very testable: you can pass in mock collaborators with canned responses and verify that they were used correctly in a unit test
If all your methods are using the same Parameters class then maybe it should be a member variable of a class with the relevant methods in it; then you can pass Parameters into the constructor of this class, assign it to a member variable, and all your methods can use it without having to pass it as a parameter.
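A rough sketch of that idea, assuming Parameters is the class from the question (the other names are illustrative):

class ReportExporter {
    private final Parameters parameters;

    ReportExporter(Parameters parameters) {
        this.parameters = parameters; // injected once, at construction
    }

    void exportExcel() {
        // methods read this.parameters directly instead of receiving it as an argument
    }
}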
A good way to start refactoring this god class is by splitting it up into smaller pieces. Find groups of properties that are related and break them out into their own class.
You can then revisit the methods that depend on Parameters and see if you can replace it with one of the smaller classes you created.
Hard to give a good solution without some code samples and real world situations.
It sounds like you are not applying object-oriented (OO) principles in your design. Since you mention the word "object" I presume you are working within some sort of OO paradigm. I recommend you convert your "call tree" into objects that model the problem you are solving. A "god object" is definitely something to avoid. I think you may be missing something fundamental... If you post some code examples I may be able to answer in more detail.
Query each client for their required parameters and inject them?
Example: each "object" that requires "parameters" is a "Client". Each "Client" exposes an interface through which a "Configuration Agent" queries the Client for its required parameters. The Configuration Agent then "injects" the parameters (and only those required by a Client).
For the parameters that dictate behavior, one can instantiate an object that exhibits the configured behavior. Then client classes simply use the instantiated object - neither the client nor the service has to know what the value of the parameter is. For instance, for a parameter that tells where to read data from, have FlatFileReader, XMLFileReader and DatabaseReader all inherit the same base class (or implement the same interface). Instantiate one of them based on the value of the parameter; then clients of the reader class just ask the instantiated reader object for data without knowing whether the data comes from a file or from the DB.
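A rough sketch of that idea (illustrative names, using the reader classes mentioned above):

interface DataReader {
    String readData();
}

class FlatFileReader implements DataReader {
    public String readData() { return "data from a flat file"; }
}

class XMLFileReader implements DataReader {
    public String readData() { return "data from an XML file"; }
}

class DatabaseReader implements DataReader {
    public String readData() { return "data from the database"; }
}

class DataReaderFactory {
    // chosen once, based on the parameter value; clients only ever see DataReader
    static DataReader forSource(String source) {
        switch (source) {
            case "xml":      return new XMLFileReader();
            case "database": return new DatabaseReader();
            default:         return new FlatFileReader();
        }
    }
}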
To start you can break your big ParametresExecution class into several classes, one per package, which only hold the parameters for the package.
Another direction could be to pass the ParametresExecution object at construction time. You won't have to pass it around at every function call.
(I am assuming this is within a Java or .NET environment) Convert the class into a singleton. Add a static method called "getInstance()" or something similar to call to get the name-value bundle (and stop "tramping" it around -- see Ch. 10 of "Code Complete" book).
Now the hard part. Presumably, this is within a web app or some other non batch/single-thread environment. So, to get access to the right instance when the object is not really a true singleton, you have to hide selection logic inside of the static accessor.
In java, you can set up a "thread local" reference, and initialize it when each request or sub-task starts. Then, code the accessor in terms of that thread-local. I don't know if something analogous exists in .NET, but you can always fake it with a Dictionary (Hash, Map) which uses the current thread instance as the key.
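A minimal sketch of that Java thread-local approach (class and method names are illustrative; Parameters stands for the bundle from the question):

public final class ParametersContext {
    private static final ThreadLocal<Parameters> CURRENT = new ThreadLocal<>();

    private ParametersContext() { }

    // call when each request or sub-task starts
    public static void init(Parameters parameters) {
        CURRENT.set(parameters);
    }

    // the static accessor hides the per-thread selection logic
    public static Parameters getInstance() {
        return CURRENT.get();
    }

    // call when the request or sub-task finishes, to avoid leaks
    public static void clear() {
        CURRENT.remove();
    }
}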
It's a start... (there's always decomposition of the blob itself, but I built a framework that has a very similar semi-global value-store within it)