C# 4.0 'dynamic' and foreach statement

Not long ago I discovered that the new dynamic keyword doesn't work well with C#'s foreach statement:
using System;

sealed class Foo {
    public struct FooEnumerator {
        int value;
        public bool MoveNext() { return true; }
        public int Current { get { return value++; } }
    }

    public FooEnumerator GetEnumerator() {
        return new FooEnumerator();
    }

    static void Main() {
        foreach (int x in new Foo()) {
            Console.WriteLine(x);
            if (x >= 100) break;
        }

        foreach (int x in (dynamic)new Foo()) { // :)
            Console.WriteLine(x);
            if (x >= 100) break;
        }
    }
}
I expected that iterating over a dynamic variable would work exactly as if the type of the collection variable were known at compile time. Instead, I discovered that the second loop is compiled as if it looked like this:
foreach (object x in (IEnumerable) /* dynamic cast */ (object) new Foo()) {
    ...
}
and every access to the x variable results in a dynamic lookup/cast, so C# ignores that I specified the correct type for x in the foreach statement - that was a bit surprising to me... And the C# compiler also completely ignores that the collection behind the dynamically typed variable may implement the IEnumerable<T> interface!
The full foreach statement behavior is described in section 8.8.4, The foreach statement, of the C# 4.0 specification.
But... it's perfectly possible to implement the same behavior at runtime! One could add an extra CSharpBinderFlags.ForEachCast flag and correct the emitted code to look like:
foreach (int x in (IEnumerable<int>) /* dynamic cast with the CSharpBinderFlags.ForEachCast flag */ (object) new Foo()) {
    ...
}
And add some extra logic to CSharpConvertBinder:
Wrap IEnumerable collections and IEnumerators in IEnumerable<T>/IEnumerator<T>.
Wrap collections that don't implement IEnumerable<T>/IEnumerator<T> so that they do.
So today the foreach statement iterates over dynamic completely differently from how it iterates over a statically typed collection variable, and it completely ignores the type information specified by the user. The result is different iteration behavior (IEnumerable<T>-implementing collections are iterated as if they implemented only IEnumerable) and a more than 150x slowdown when iterating over dynamic. A simple fix yields much better performance:
foreach (int x in (IEnumerable<int>) dynamicVariable) {
But why should I have to write code like this?
It's very nice to see that C# 4.0 dynamic sometimes works exactly the same as if the type were known at compile time, but it's very sad to see that dynamic works completely differently in a case where it COULD work the same as statically typed code.
So my question is: why does foreach over dynamic work differently from foreach over anything else?

First off, to explain some background to readers who are confused by the question: the C# language actually does not require that the collection of a "foreach" implement IEnumerable. Rather, it requires either that it implement IEnumerable, or that it implement IEnumerable<T>, or simply that it have a GetEnumerator method (and that the GetEnumerator method returns something with a Current and MoveNext that matches the pattern expected, and so on.)
That might seem like an odd feature for a statically typed language like C# to have. Why should we "match the pattern"? Why not require that collections implement IEnumerable?
Think about the world before generics. If you wanted to make a collection of ints, you'd have to use IEnumerable. And therefore, every call to Current would box an int, and then of course the caller would immediately unbox it back to int. Which is slow and creates pressure on the GC. By going with a pattern-based approach you can make strongly typed collections in C# 1.0!
Nowadays of course no one implements that pattern; if you want a strongly typed collection, you implement IEnumerable<T> and you're done. Had a generic type system been available to C# 1.0, it is unlikely that the "match the pattern" feature would have been implemented in the first place.
As you've noted, instead of looking for the pattern, the code generated for a dynamic collection in a foreach looks for a dynamic conversion to IEnumerable (and then does a conversion from the object returned by Current to the type of the loop variable of course.) So your question basically is "why does the code generated by use of the dynamic type as a collection type of foreach fail to look for the pattern at runtime?"
Because it isn't 1999 anymore, and even when it was back in the C# 1.0 days, collections that used the pattern also almost always implemented IEnumerable too. The probability that a real user is going to be writing production-quality C# 4.0 code which does a foreach over a collection that implements the pattern but not IEnumerable is extremely low. Now, if you're in that situation, well, that's unexpected, and I'm sorry that our design failed to anticipate your needs. If you feel that your scenario is in fact common, and that we've misjudged how rare it is, please post more details about your scenario and we'll consider changing this for hypothetical future versions.
Note that the conversion we generate to IEnumerable is a dynamic conversion, not simply a type test. That way, the dynamic object may participate; if it does not implement IEnumerable but wishes to proffer up a proxy object which does, it is free to do so.
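For illustration, here is a minimal sketch of such a proxying object (the ProxyingCollection name is hypothetical); DynamicObject.TryConvert is what the dynamic conversion consults:

using System;
using System.Collections;
using System.Dynamic;

// A dynamic object that does not implement IEnumerable itself,
// but proffers one when asked for a dynamic conversion.
sealed class ProxyingCollection : DynamicObject
{
    private readonly int[] data = { 1, 2, 3 };

    public override bool TryConvert(ConvertBinder binder, out object result)
    {
        if (binder.Type == typeof(IEnumerable))
        {
            result = data; // the proxy: arrays do implement IEnumerable
            return true;
        }
        return base.TryConvert(binder, out result);
    }
}

class Demo
{
    static void Main()
    {
        dynamic d = new ProxyingCollection();
        foreach (int x in d)      // the dynamic conversion to IEnumerable calls TryConvert
            Console.WriteLine(x); // 1, 2, 3
    }
}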
In short, the design of "dynamic foreach" is "dynamically ask the object for an IEnumerable sequence", rather than "dynamically do every type-testing operation we would have done at compile time". This does in theory subtly violate the design principle that dynamic analysis gives the same result as static analysis would have, but in practice it's how we expect the vast majority of dynamically accessed collections to work.

But why should I have to write code like this?
Indeed. And why would the compiler write code like that? You've removed any chance it might have had to guess that the loop could be optimized. By the way, you seem to be interpreting the IL incorrectly: it is rebinding to obtain IEnumerator.Current; the MoveNext() call is direct and GetEnumerator() is called only once. Which I think is appropriate: the next element might or might not cast to an int without problems. It could be a collection of various types, each with their own binder.


Optional Parameters, Good or Bad?

I am writing and browsing through a lot of methods in the project I'm working on, and as much as I think overloads are useful, I also think that a simple optional parameter with a default value can get around the same problem while helping me write more readable, and I would think more efficient, code.
Now I hear that using these parameters in methods could carry nasty side effects.
What are these side effects, and is it worth the risk of using these parameters to keep the code clean?
I'll start by prefacing my answer by saying that any language feature can be used well or used poorly. Optional parameters have some drawbacks, just like declaring locals as var does, or generics.
What are these side effects
Two come to mind.
The first is that the default values for optional parameters are compile-time constants that are embedded in the consumer of the method. Let's say I have this class in AssemblyA:
public class Foo
{
    public void Bar(string baz = "cat")
    {
        //Omitted
    }
}
And this in AssemblyB:
public void CallBar()
{
    new Foo().Bar();
}
What really ends up being produced is this, in assemblyB:
public void CallBar()
{
    new Foo().Bar("cat");
}
So if you were ever to change your default value on Bar, both AssemblyA and AssemblyB would need to be recompiled. Because of this, I tend not to declare methods with optional parameters as public, but rather internal or private. If I needed to declare one as public, I would use overloads.
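For example, a minimal sketch of that overload-based alternative: the default value lives only inside AssemblyA, so callers never bake "cat" into their own IL.

public class Foo
{
    // The default value stays here, in AssemblyA.
    public void Bar()
    {
        Bar("cat");
    }

    public void Bar(string baz)
    {
        //Omitted
    }
}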
The second issue being how they interact with interfaces and polymorphism. Take this interface:
public interface IBar
{
    void Foo(string baz = "cat");
}
and this class:
public class Bar : IBar
{
    public void Foo(string baz = "dog")
    {
        Console.WriteLine(baz);
    }
}
These lines will print different things:
IBar bar1 = new Bar();
bar1.Foo(); //Prints "cat"
var bar2 = new Bar();
bar2.Foo(); //Prints "dog"
Those are two negatives that come to mind. However, there are positives, as well. Consider this method:
void Foo(string bar = "bar", string baz = "baz", string yat = "yat")
{
}
Creating overloads that offer all the possible permutations of defaults would take several, if not dozens, of lines of code.
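With optional parameters, by contrast, a caller can set just the ones it cares about by using named arguments (calls against the Foo defined above):

Foo();              // all defaults
Foo(baz: "other");  // skip bar, keep yat at its default
Foo("a", yat: "z"); // positional for bar, named for yat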
Conclusion: optional parameters are good, and they can be bad. Just like anything else.
Necromancing.
The thing with optional parameters is, they are BAD because they are unintuitive - meaning they do NOT behave the way you would expect them to.
Here's why:
They break ABI compatibility!
(And strictly speaking, they also break API compatibility when used in constructors.)
For example:
You have a DLL, in which you have code such as this
public void Foo(string a = "dog", string b = "cat", string c = "mouse")
{
    Console.WriteLine(a);
    Console.WriteLine(b);
    Console.WriteLine(c);
}
Now what kinda happens is, you expect the compiler to generate this code behind the scenes:
public void Foo(string a, string b, string c)
{
    Console.WriteLine(a);
    Console.WriteLine(b);
    Console.WriteLine(c);
}

public void Foo(string a, string b)
{
    Foo(a, b, "mouse");
}

public void Foo(string a)
{
    Foo(a, "cat", "mouse");
}

public void Foo()
{
    Foo("dog", "cat", "mouse");
}
or perhaps more realistically, you would expect it to pass NULLs and do
public void Foo(string a, string b, string c)
{
    if (a == null) a = "dog";
    if (b == null) b = "cat";
    if (c == null) c = "mouse";
    Console.WriteLine(a);
    Console.WriteLine(b);
    Console.WriteLine(c);
}
so you can change the default-arguments at one place.
But this is not what the C# compiler does, because then you couldn't do:
Foo(a:"dog", c:"dogfood");
So instead the C# compiler does this:
Everywhere you write, e.g.,
Foo(a:"dog", c:"mouse");
or Foo(a:"dog");
or Foo(a:"dog", b:"bla");
It substitutes it with
Foo(your_value_for_a_or_default, your_value_for_b_or_default, your_value_for_c_or_default);
So that means if you add another default value, change a default value, or remove a value, you don't break API compatibility, but you do break ABI compatibility.
So what this means is: if you just replace the DLL among all the files that compose an application, you'll break every application out there that uses your DLL. That's rather bad. If your DLL contains a bad bug and I have to replace it, I have to recompile my entire application against your latest DLL. That DLL might contain a lot of changes, so I can't do it quickly. I also might not have the old source code handy, and the application might be in the middle of a major modification, with no idea which commit the old version of the application was compiled from. So I might not be able to recompile at this time. That is very bad.
And as for only using them in PUBLIC methods, not private, protected, or internal ones:
Yea, nice try, but one can still call private, protected, or internal methods with reflection. Not because one wants to, but because it sometimes is necessary, as there is no other way. (Example).
Interfaces have already been mentioned by vcsjones.
The problem there is code duplication (which allows for divergent default values - or the ignoring of default values).
But the real bummer is that, in addition to all of that, you can now introduce API-breaking changes in constructors...
Example:
public class SomeClass
{
    public SomeClass(bool aTinyLittleBitOfSomethingNew = true)
    {
    }
}
And now, everywhere you use
System.Activator.CreateInstance<SomeClass>();
you'll get a RUNTIME exception, because there now is NO parameterless constructor...
The compiler won't be able to catch this at compile time.
Good night if you happen to have a lot of Activator.CreateInstances in your code.
You'll be screwed, and screwed badly.
Bonus points will be awarded if some of the code you have to maintain uses reflection to create class instances, or use reflection to access private/protected/internal methods...
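If you are stuck with such a constructor, one small defensive sketch (my suggestion; it assumes you control the class) is to keep an explicit parameterless constructor that chains to the parameterized one, so Activator.CreateInstance keeps working:

public class SomeClass
{
    // Explicit parameterless constructor: reflection can still find it.
    public SomeClass() : this(true)
    {
    }

    public SomeClass(bool aTinyLittleBitOfSomethingNew)
    {
    }
}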
Don't use optional parameters!
Especially not in class constructors.
(Disclaimer: sometimes there simply is no other way - e.g. an attribute on a property that automagically takes the name of the property as a constructor argument - but try to limit it to those few cases, especially if you can make do with overloading.)
I guess theoretically they are fine for quick prototyping, but only for that.
But since prototypes have a strong tendency to go productive (at least in the company where I currently work), don't use them for that, either.
I'd say that it depends how different the method becomes when you include or omit that parameter.
If a method's behaviour and internal functioning is very different without a parameter, then make it an overload. If you're using optional parameters to change behaviour, DON'T. Instead of having a method that does one thing with one parameter, and something different when you pass in a second one, have one method that does one thing, and a different method that does the other thing. If their behaviour differs greatly, then they should probably be entirely separate, and not overloads with the same name.
If you need to know whether a parameter was user-specified or left blank, then consider making it an overload. Sometimes you can use nullable values if the place they're being passed in from won't allow nulls, but generally you can't rule out the possibility that the user passed null, so if you need to know where the value came from as well as what the value is, don't use optional parameters.
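When null itself can never be a legitimate input, though, the nullable-sentinel trick looks like this minimal sketch (the value 30 is an assumed default):

void Foo(int? timeout = null)
{
    // null can only mean "the caller omitted the argument".
    int actualTimeout = timeout ?? 30; // assumed real default
}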
Above all, remember that the optional parameters should (kinda by definition) be used for things that have a small, trivial or otherwise unimportant effect on the outcome of the method. If you change the default value, any place that calls the method without specifying a value should still be happy with the result. If you change the default and then find that some other bit of code that calls the method with the optional parameter left blank is now not working how it should, then it probably shouldn't have been an optional parameter.
Places where it can be a good idea to use optional parameters are:
Methods where it's safe to just set something to a default if a value isn't provided. This basically covers anything where the caller might not know or care what the value is. A good example is in encryption methods - the caller may just think "I don't know crypto, I don't know what value R should be set to, I just want this to be encrypted", in which case you set the defaults to sensible values. Often these start out as a method with an internal variable that you then move to be user-provided. It's pointless making two methods when the only difference is that one has var foo = bar; somewhere at the start.
Methods that have a set of parameters, but not all of them are needed. This is quite common with constructors; you'll see overloads that each set different combinations of the various properties, but if there's three or four parameters that may or may not need to be set, that can require a lot of overloads to cover all the possible combinations (it's basically a handshake problem), and all these overloads have more or less identical behaviour internally. You can solve this by having most of them just set defaults and call the one that sets all parameters, but it's less code to use optional parameters.
Methods where the coder calling them might want to set parameters, but you want them to know what a "normal" value is. For example, the encryption method we mentioned earlier might require various parameters for whatever maths goes on internally. A coder might see that they can pass in values for workFactor or blockSize, but they may not know what "normal" values are for these. Commenting and documentation will help here, but so will optional parameters - the coder will see in the signature [workFactor = 24], [blockSize = 256] which helps them judge what kind of values are sensible. (Of course, this is no excuse to not comment and document your code properly.)
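A hypothetical signature along those lines (the names and default values are purely illustrative):

public byte[] Encrypt(byte[] data, int workFactor = 24, int blockSize = 256)
{
    // Callers who don't know crypto just call Encrypt(data);
    // those who do can tune workFactor and blockSize explicitly.
    throw new System.NotImplementedException("crypto omitted from sketch");
}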
You're not making more readable and efficient code.
First, your method signatures will be gratuitously longer.
Second, overloads don't exist for the sole purpose of using default values - a quick look at the Convert class should show you that. Many times overloaded methods have different execution paths, which will become spaghetti code in your single non overloaded method.
Third, sometimes you need to know whether a value was used as input. How would you then know whether the user passed those values, if he happens to use the same value as the default one you were using?
Often I see optional parameters in C# like IMyInterface parameter = null.
Especially when I see that in constructors, I would even say it's a code smell.
I know that's a harsh verdict - but in this case it obscures your dependencies, which is bad.
Like vcsjones said, you can use those language features right, but I believe optional parameters should be used only in some edge cases.
Just my opinion.

Language without type-casting

My question is pretty much what the title says: Is it possible to have a programming language which does not allow explicit type casting?
To clarify what I mean, assume we're working in some C#-like language with a parent Base class and a child Derived class. Clearly, such code would be safe:
Base a = new Derived();
Since going up the inheritance hierarchy is safe, but
Derived b = (Derived)a;
is not guaranteed safe, since going down is not safe.
But, regardless of the safety, such downcasts are valid in many languages (like Java or C#) - the code will compile, and will simply fail at runtime if the types aren't right. So technically, the code is still safe, but via runtime checks and not compile-time checks (btw, I'm not a fan of runtime checks).
I would personally find complete compile-time type safety to be very important, at least from a theoretical perspective, and at most from the perspective of reliable code. A consequence of compile-time type safety is that casts are no longer needed (which I think is great, 'cause they're ugly anyways). Any cast-like behaviour can be implemented by an implicit conversion operator or by a constructor.
So I'm wondering, are there currently any OO languages which provide such rigorous type safety at compile time that casts are obsolete? I.e., languages that don't allow any unsafe conversion operations whatsoever? Or is there a reason this wouldn't work?
Thanks for any input.
Edit
If I can clarify by example, here's the big reason I hate downcasts so much.
Let's say I have the following (loosely based on C#'s collections):
public interface IEnumerable<T>
{
    IEnumerator<T> GetEnumerator();
    IEnumerable<T> Filter( Func<T, bool> predicate );
}
public class List<T> : IEnumerable<T>
{
    // All of list's implementation here
}
Now suppose someone decides to write code like this:
List<int> list = new List<int>( new int[]{1, 2, 3, 4, 5, 6} );
// Keep only the odd numbers
List<int> result = (List<int>)list.Filter( x => x % 2 != 0 );
Notice how the cast is necessary on that last line. But is it valid? Not in general. Sure, it makes sense that the implementation of List<T>.Filter will return another List<T>, but this is not guaranteed (it could be any subtype of IEnumerable<T>). Even if this runs at one point in time, a later version may change this, exposing how brittle the code is.
Pretty much all of the situations I can think that require downcasts would boil down to something like this example - a method has a return type of some class or interface, but since we know some implementation details, we're confident in downcasting the result. But this is anti-OOP, since OOP actually encourages abstracting from implementation details. So why do we do it anyways, even in purely OOP languages?
Downcasts can be gradually eliminated by improving the power of the type system.
One proposed solution to the example you gave is to add the ability to declare the return type of a method as "the same as this". This allows a subclass to return a subclass without requiring a cast. Thus you get something like this:
public interface IEnumerable<T>
{
    IEnumerator<T> GetEnumerator();
    This<T> Filter( Func<T, bool> predicate );
}
public class List<T> : IEnumerable<T>
{
    // All of list's implementation here
}
Now the cast is unnecessary:
List<int> list = new List<int>( new int[]{1, 2, 3, 4, 5, 6} );
// Compiler "knows" that Filter returns the same type as its receiver
List<int> result = list.Filter( x => x % 2 != 0 );
Other cases of downcasting also have proposed solutions by improving the type system, but these improvements have not yet been made to C#, Java, or C++.
Well, it's certainly possible to have programming languages that don't have subtyping at all, and then naturally there's no need for downcasts there. Most non-OO language fall into that class.
Even in a class-based OO language like Java, most downcasts could formally be replaced simply by letting the base class have a method
Foo meAsFoo() {
    return null;
}
which the subclass would then override to return itself. However, that would still just be another way to express a run-time test, with the added downside of being more complicated to use. And it would be hard to forbid the pattern without losing all other advantages of inheritance-based subtyping.
Of course, this is only possible if you're able to modify the parent class. I suspect you might consider that a plus, but given how often one can modify the parent class and so use the workaround, I'm not sure how much that would be worth in terms of encouraging "good" design (for some more or less arbitrary value of "good").
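To make that workaround concrete, here is a minimal C# sketch (the Animal/Dog names are hypothetical; the null return plays the role of a failed cast):

class Animal
{
    // The base class answers "I am not a Dog."
    public virtual Dog AsDog() { return null; }
}

class Dog : Animal
{
    // The subclass overrides the method to return itself.
    public override Dog AsDog() { return this; }
}

// Usage - still a run-time test, just without a cast expression:
// Animal a = new Dog();
// Dog d = a.AsDog();
// if (d != null) { /* use d as a Dog */ }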
A case could be made that it would encourage safe programming more if the language offered a case-matching construct instead of a downcast expression:
Shape x = .... ;
switch( x ) {
    case Rectangle r:
        return 5*r.diagonal();
    case Circle c:
        return c.radius();
    case Point:
        return 0;
    default:
        throw new RuntimeException("This can't happen, and I, " +
                                   "the programmer, take full responsibility");
}
However, it might then be a problem in practice that without a closed-world assumption (which modern programming languages seem reluctant to make), many of those switches would need default: cases that the programmer knows can never happen, which might well desensitize the programmer to the resulting throws.
There are many languages with duck typing and/or implicit type conversion. Perl certainly comes to mind; the intricacies of how subtypes of the scalar type are converted internally are a frequent source of criticism, but also receive praise because when they do work like you expect, they contribute to the DWIM feel of the language.
Traditional Lisp is another good example - all you have is atoms and lists, and nil which is both at the same time. Otherwise, the twain never meet ...
(You seem to come from a universe where programming languages are necessarily object-oriented, strongly typed, and compiled, though.)

How to access properties dynamically / late-bound?

I'd like to implement a method which allows me to access a property of an unknown/anonymous object (-graph) in a late-bound / dynamic way (I don't even know how to correctly call it).
Here's an example of what I'd like to achieve:
// setup an anonymous object
var a = new { B = new { C = new { I = 33 } } };
// now get the value of a.B.C.I in a late-bound way
var i = Get(a, "B.C.I");
And here's a simple implementation using "classic" reflection:
public static object Get(object obj, string expression)
{
    // Walk the object graph, one property per path segment.
    foreach (var name in expression.Split('.'))
    {
        var property = obj.GetType().GetProperty(name);
        obj = property.GetValue(obj, null);
    }
    return obj;
}
What other options do I have with C# / .NET 4 to implement something similar as shown above, but maybe simpler, more performant, etc.?
Maybe there are ways to achieve the same thing, which would allow me to specify expression using a lambda expression instead of a string? Would expression trees be helpful in any way (e.g. as shown in this question)?
Update: the object and the expression are passed into my code via a web service call. That's why I used object and string in my Get() method.
Do you actually only have the expression as a string? Is it known at compile-time, just that the types themselves aren't known (or are hard to express)? If so:
dynamic d = a;
int i = d.B.C.I;
If you really only have it as a string (e.g. as user-entered data) that makes life a lot harder, and none of the C# 4 features really help you. You could potentially evaluate it as an IronPython script or something like that...
EDIT: Okay, after the update it sounds like you're in the latter situation - in which case, I don't know of a nicer way than using reflection. Some of the built-in property binding built for UIs may help, but I don't know of a clean way of accessing it.
If you want to use C# style, you could use the Mono compiler as a service from your application. I describe how to do this here:
Mono Compiler as a Service (MCS)
As an alternative approach, you could use reflection to put all of your object's properties into an ExpandoObject then use it like a dictionary (because ExpandoObject implements IDictionary). Alternatively, you could use JSON.NET and call JObject.FromObject, which will turn a regular object into a JObject which is accessible in a dictionary-like style (and as an added benefit has recursive graph support). Lastly, you could use the same approach to dump your object into a dictionary of dictionaries.
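For instance, a minimal sketch of the JSON.NET route (assuming the Newtonsoft.Json package is referenced); JObject.FromObject copies the graph into a JToken tree whose SelectToken method understands dotted paths:

using Newtonsoft.Json.Linq;

public static class Demo
{
    public static void Main()
    {
        var a = new { B = new { C = new { I = 33 } } };

        // Copy the object graph into a JToken tree, then walk it by path.
        JToken token = JObject.FromObject(a).SelectToken("B.C.I");
        int i = token.Value<int>(); // 33
    }
}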

Is there any disadvantage of writing a long constructor?

Does it affect the application's load time?
Are there any other issues in doing so?
The question is vague on what "long" means. Here are some possible interpretations:
Interpretation #1: The constructor has many parameters
Constructors with many parameters can lead to poor readability, and better alternatives exist.
Here's a quote from Effective Java 2nd Edition, Item 2: Consider a builder pattern when faced with many constructor parameters:
Traditionally, programmers have used the telescoping constructor pattern, in which you provide a constructor with only the required parameters, another with a single optional parameter, a third with two optional parameters, and so on...
The telescoping constructor pattern is essentially something like this:
public class Telescope {
    final String name;
    final int levels;
    final boolean isAdjustable;

    public Telescope(String name) {
        this(name, 5);
    }
    public Telescope(String name, int levels) {
        this(name, levels, false);
    }
    public Telescope(String name, int levels, boolean isAdjustable) {
        this.name = name;
        this.levels = levels;
        this.isAdjustable = isAdjustable;
    }
}
And now you can do any of the following:
new Telescope("X/1999");
new Telescope("X/1999", 13);
new Telescope("X/1999", 13, true);
You can't, however, set only the name and isAdjustable while leaving levels at its default. You can provide more constructor overloads, but obviously their number would explode as the number of parameters grows, and you may even have multiple boolean and int arguments, which would really make a mess of things.
As you can see, this isn't a pleasant pattern to write, and even less pleasant to use (What does "true" mean here? What's 13?).
Bloch recommends using a builder pattern, which would allow you to write something like this instead:
Telescope telly = new Telescope.Builder("X/1999").setAdjustable(true).build();
Note that now the parameters are named, and you can set them in any order you want, and you can skip the ones that you want to keep at default values. This is certainly much better than telescoping constructors, especially when there's a huge number of parameters that belong to many of the same types.
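For C# readers, here is a rough transliteration of that builder (a sketch only, keeping the defaults of 5 levels and non-adjustable shown above):

public class Telescope
{
    public string Name { get; private set; }
    public int Levels { get; private set; }
    public bool IsAdjustable { get; private set; }

    private Telescope() { }

    public class Builder
    {
        private readonly string name;
        private int levels = 5;            // default
        private bool isAdjustable = false; // default

        public Builder(string name) { this.name = name; }
        public Builder SetLevels(int levels) { this.levels = levels; return this; }
        public Builder SetAdjustable(bool adjustable) { isAdjustable = adjustable; return this; }

        public Telescope Build()
        {
            return new Telescope { Name = name, Levels = levels, IsAdjustable = isAdjustable };
        }
    }
}

// Usage:
// Telescope telly = new Telescope.Builder("X/1999").SetAdjustable(true).Build();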
See also
Wikipedia/Builder pattern
Effective Java 2nd Edition, Item 2: Consider a builder pattern when faced with many constructor parameters (excerpt online)
Related questions
When would you use the Builder Pattern?
Is this a well known design pattern? What is its name?
Interpretation #2: The constructor does a lot of work that costs time
If the work must be done at construction time, then doing it in the constructor or in a helper method doesn't really make too much of a difference. When a constructor delegates work to a helper method, however, make sure that it's not overridable, because that could lead to a lot of problems.
Here's some quote from Effective Java 2nd Edition, Item 17: Design and document for inheritance, or else prohibit it:
There are a few more restrictions that a class must obey to allow inheritance. Constructors must not invoke overridable methods, directly or indirectly. If you violate this rule, program failure will result. The superclass constructor runs before the subclass constructor, so the overriding method in the subclass will be invoked before the subclass constructor has run. If the overriding method depends on any initialization performed by the subclass constructor, the method will not behave as expected.
Here's an example to illustrate:
public class ConstructorCallsOverride {
    public static void main(String[] args) {

        abstract class Base {
            Base() { overrideMe(); }
            abstract void overrideMe();
        }

        class Child extends Base {
            final int x;
            Child(int x) { this.x = x; }
            @Override void overrideMe() {
                System.out.println(x);
            }
        }

        new Child(42); // prints "0"
    }
}
Here, when Base constructor calls overrideMe, Child has not finished initializing the final int x, and the method gets the wrong value. This will almost certainly lead to bugs and errors.
Interpretation #3: The constructor does a lot of work that can be deferred
The construction of an object can be made faster when some work is deferred to when it's actually needed; this is called lazy initialization. As an example, when a String is constructed, it does not actually compute its hash code. It only does it when the hash code is first required, and then it will cache it (since strings are immutable, this value will not change).
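In C#, System.Lazy<T> packages exactly this defer-and-cache behavior; here is a minimal sketch with a hypothetical Report class:

using System;

public class Report
{
    // Nothing expensive happens in the constructor; the summary is
    // computed on first access to .Value and cached thereafter.
    private readonly Lazy<string> summary =
        new Lazy<string>(() => ComputeSummary());

    public string Summary
    {
        get { return summary.Value; }
    }

    private static string ComputeSummary()
    {
        return "expensive result"; // stands in for real work
    }
}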
However, consider Effective Java 2nd Edition, Item 71: Use lazy initialization judiciously. Lazy initialization can lead to subtle bugs, and it doesn't always yield improved performance that justifies the added complexity. Do not prematurely optimize.
Constructors are a little special in that an unhandled exception in a constructor may have weird side effects. Without seeing your code I would assume that a long constructor increases the risk of exceptions. I would make the constructor as simple as needed and utilize other methods to do the rest in order to provide better error handling.
The biggest disadvantage is probably the same as writing any other long function -- that it can get complex and difficult to understand.
The rest is going to vary. First of all, length and execution time don't necessarily correlate -- you could have a single line (e.g., function call) that took several seconds to complete (e.g., connect to a server) or lots of code that executed entirely within the CPU and finished quickly.
Startup time would (obviously) only be affected by constructors that were/are invoked during startup. I haven't had an issue with this in any code I've written (at all recently anyway), but I've seen code that did. On some types of embedded systems (for one example) you really want to avoid creating and destroying objects during normal use, so you create almost everything statically during bootup. Once it's running, you can devote all the processor time to getting the real work done.
A constructor is just another function. You would need very long functions called many times to make a program slow. So if it's only called once, it usually won't matter how much code is inside.
It affects the time it takes to construct that object, naturally, but no more than having an empty constructor and calling methods to do that work instead. It has no effect on the application's load time.
In the case of a copy constructor, if you do not take the parameter by reference, then passing the argument by value will itself invoke the copy constructor. That creates a new object, which again invokes the copy constructor, and so on: the recursion never terminates, memory fills up, and eventually an error is reported.
If you pass the parameter by reference, no new object is created to hold the argument, and no recursion takes place.
I would avoid doing anything in your constructor that isn't absolutely necessary. Initialize your variables in there, and try not to do much else. Additional functionality should reside in separate functions that you call only if you need to.

Is there a commonly used OO Pattern for holding "constant variables"?

I am working on a little pinball-game project for a hobby and am looking for a pattern to encapsulate constant variables.
I have a model, within which there are values which will be constant over the life of that model e.g. maximum speed/maximum gravity etc. Throughout the GUI and other areas these values are required in order to correctly validate input. Currently they are included either as references to a public static final, or just plain hard-coded. I'd like to encapsulate these "constant variables" in an object which can be injected into the model, and retrieved by the view/controller.
To clarify, the value of the "constant variables" may not necessarily be defined at compile-time, they could come from reading in a file; user input etc. What is known at compile time is which ones are needed. A way which may be easier to explain it is that whatever this encapsulation is, the values it provides are immutable.
I'm looking for a way to achieve this which:
has compile time type-safety (i.e. not mapping a string to variable at runtime)
avoids anything static (including enums, which can't be extended)
I know I could define an interface which has the methods such as:
public int getMaximumSpeed();
public int getMaximumGravity();
... and inject an instance of that into the model, and make it accessible in some way. However, this results in a lot of boilerplate code, which is pretty tedious to write/test etc (I am doing this for funsies :-)).
I am looking for a better way to do this, preferably something which has the benefits of being part of a shared vocabulary, as with design patterns.
Is there a better way to do this?
P.S. I've thought some more about this, and the best trade-off I could find would be to have something like:
public class Variables {
    enum Variable {
        MaxSpeed(100),
        MaxGravity(10);

        Variable(Object variableValue) {
            // assign value to field, provide getter etc.
        }
    }

    public Object getVariable(Variable v) { /* look up enum and get member */ }
} // end of Variables
I could then do something like:
Model m = new Model(new Variables());
Advantages: the lookup of a variable is protected by having to be a member of the enum in order to compile, variables can be added with little extra code
Disadvantages: enums cannot be extended, brittleness (a recompile is needed to add a variable), variable values would have to be cast from Object (to Integer in this example), which again isn't type safe, though generics may be an option for that... somehow
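One way to flesh out that generics idea is a typed-key lookup, sketched here in C# (the Key/Variables names are hypothetical): the key's type parameter records the value's type, so the only cast is a single internal one.

using System.Collections.Generic;

// A key that remembers the type of the value it maps to.
public sealed class Key<T>
{
    public string Name { get; private set; }
    public Key(string name) { Name = name; }
}

public sealed class Variables
{
    private readonly Dictionary<object, object> values =
        new Dictionary<object, object>();

    public Variables Set<T>(Key<T> key, T value)
    {
        values[key] = value;
        return this;
    }

    public T Get<T>(Key<T> key)
    {
        // The cast is safe by construction: Set<T> is the only writer for this key.
        return (T)values[key];
    }
}

// Usage:
// var maxSpeed = new Key<int>("maxSpeed");
// var vars = new Variables().Set(maxSpeed, 100);
// int s = vars.Get(maxSpeed); // no cast at the call site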
Are you looking for the Singleton or, a variant, the Monostate? If not, how does that pattern fail your needs?
Of course, here's the mandatory disclaimer that Anything Global Is Evil.
UPDATE: I did some looking, because I've been having similar debates/issues. I stumbled across a list of "alternatives" to classic global/scope solutions. Thought I'd share.
Thanks for all the time spent by you guys trying to decipher what is a pretty weird question.
I think, in terms of design patterns, the closest to what I'm describing is the factory pattern, where I have a factory of pseudo-constants. Technically it's not creating an instance on each call, but rather always providing the same instance (in the sense of a Guice provider). But I can create several factories, each of which can provide different pseudo-constants, and inject each into a different model, so the model's UI can validate input a lot more flexibly.
If anyone's interested, I've come to the conclusion that an interface providing a method for each pseudo-constant is the way to go:
public interface IVariableProvider {
    public int maxGravity();
    public int maxSpeed();
    // and everything else...
}

public class VariableProvider implements IVariableProvider {
    private final int maxGravity, maxSpeed...;

    public VariableProvider(int maxGravity, int maxSpeed) {
        // assign final fields
    }
}
Then I can do:
Model firstModel = new Model(new VariableProvider(2, 10));
Model secondModel = new Model(new VariableProvider(10, 100));
I think as long as the interface doesn't provide a prohibitively large number of variable getters, it wins over some parameterised lookup (which will either be vulnerable at run-time, or will prohibit extension/polymorphism).
P.S. I realise some have been questioning what my problem is with static final values. I made the statement (with tongue in cheek) to a colleague that anything static is inherently not object-oriented. So in my hobby project I used that as the basis for a thought exercise in which I try to remove anything static from the project (next I'll be trying to remove all 'if' statements ;-D). If I were on a deadline and satisfied that public static final values wouldn't hamstring testing, I would have used them pretty quickly.
If you're just using java/IOC, why not just dependency-inject the values?
e.g. have Spring inject the values via a map, and specify the object as a singleton -
<property name="values">
    <map>
        <entry><key><value>a1</value></key><value>b1</value></entry>
        <entry><key><value>a2</value></key><value>b3</value></entry>
    </map>
</property>
Your class is a singleton that holds an immutable copy of the map set in Spring:
private Map<String, String> m;

public String getValue(String s)
{
    return m.containsKey(s) ? m.get(s) : null;
}

public void setValues(Map m)
{
    this.m = Collections.unmodifiableMap(m);
}
From what I can tell, you probably don't need to implement a pattern here -- you just need access to a set of constants, and it seems to me that's handled pretty well through the use of a publicly accessible static interface to them. Unless I'm missing something. :)
If you simply want to "objectify" the constants, though, for some reason, then the Singleton pattern would probably be called for, if any; I know you mentioned in a comment that you don't mind creating multiple instances of this wrapper object, but in response I'd ask: why even introduce the sort of confusion that could arise from having multiple instances at all? What practical benefit are you looking for that would be satisfied by having the data in object form?
Now, if the values aren't constants, then that's different -- in that case, you probably do want a Singleton or Monostate. But if they really are constants, just wrap a set of enums or static constants in a class and be done! Keep-it-simple is as good a "pattern" as any.