Say I have MyClass with 100s of fields.
If I use an object of MyClass as an input parameter, Pex simply chokes trying to generate all possible combinations (mine runs into 1000s of paths even on a simple test).
[PexMethod]
void MytestMethod(MyClass param){...}
How can I tell Pex to use only a set of predefined objects of MyClass rather than having it try to be smart and generate all possible combinations to test?
In other words, I want to manually specify a list of possible states for param in the code above and tell Pex to use it.
Cheers
If you find that Pex is generating large amounts of irrelevant, redundant, or otherwise unhelpful inputs, you can shape the values it generates for your parameterized unit tests with PexAssume, which ensures that every generated input meets a set of criteria that you provide.
If you want to ensure that arguments come from a predefined collection of values, for instance, you could do something like this:
[PexMethod]
public void TestSomething(Object a) {
    // someCollection is whatever predefined set of allowed values you define
    PexAssume.IsTrue(someCollection.Contains(a));
}
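Applied to the MyClass example from the question, that might look roughly like this (KnownStates is a hypothetical static collection you would fill with your predefined instances):
[PexMethod]
public void MytestMethod(MyClass param)
{
    // Only explore inputs that are one of the predefined states.
    PexAssume.IsNotNull(param);
    PexAssume.IsTrue(KnownStates.Contains(param));
    // ... the rest of the test
}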
PexAssume has other helper methods as well for more general input pruning, such as IsNotNull, AreNotEqual, etc. What little documentation is out there suggests that there is some collection-specific functionality as well, though if those methods exist, I'm not familiar with them.
Check out the Pex manual for a bit more information.
Pex will not try to generate every possible combination of values. Instead, it analyses your code and tries to cover every branch. So if you have
if (MyObject.Property1 == "something")
{
...
}
then it will try to create an object that has Property1 == "something". So limiting the tests to some predefined objects is rather against the 'Pex philosophy'. That said, you may find the following information interesting.
You can provide a Pex factory class. See, for instance, this blog post or this one.
[PexFactoryClass]
public partial class EmployeeFactory
{
[PexFactoryMethod(typeof(Employee))]
public static Employee Create(
int i0,
string s0,
string s1,
DateTime dt0,
DateTime dt1,
uint ui0,
Contract c0
)
{
Employee e0 = new Employee();
e0.EmployeeID = i0;
e0.FirstName = s0;
e0.LastName = s1;
e0.BirthDate = dt0;
e0.StartDateContract = dt1;
e0.Salary = ui0;
e0.TypeContract = c0;
return e0;
}
}
Pex will then call this factory class (instead of a default factory) using appropriate values it discovers from exploring your code. The factory method allows you to limit the possible parameters and values.
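Applied to the question, a factory that only ever hands Pex one of your predefined MyClass states might look roughly like this (KnownStates and the parameterless MyClass constructor are assumptions; populate the states however your class allows):
[PexFactoryClass]
public partial class MyClassFactory
{
    // Hypothetical predefined states; fill these in yourself.
    private static readonly MyClass[] KnownStates =
    {
        new MyClass(),
        new MyClass(),
    };

    [PexFactoryMethod(typeof(MyClass))]
    public static MyClass Create(int index)
    {
        // Prune exploration to valid indices into the predefined set.
        PexAssume.IsTrue(index >= 0 && index < KnownStates.Length);
        return KnownStates[index];
    }
}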
You can also use the PexArguments attribute to suggest values, but this will not prevent Pex from trying to generate other values to cover any branches in your code. It just tries the ones you provide first.
[PexArguments(1, "foo")] // try this first
void MyTest(int i, string s)
{
...
}
See here for more information on PexArguments and also search for 'seed values' in the PDF documentation on Parameterized Test Patterns.
I've always known what static methods are by definition, but I've always avoided using them at school because I was afraid of what I didn't know.
I already understand that you can use it as a counter throughout your entire project.
Now that I am interning, I want to know when exactly static methods are used. From my observation so far, static classes/methods are used when a class contains a lot of functions that will be used by many different classes and doesn't hold too many critical local variables of its own, so it isn't necessary to create an instance of it.
So as an example, you can have a static class called Zip that zips and unzips files and provide it to many different classes for them to do whatever with it.
Am I right? Do I have the right idea? I'm pretty sure there are many ways to use it.
Static functions are helpful as they do not rely on an instance of whatever class they are attached to.
Static functions can provide functionality related to a particular class without requiring the programmer to first create an instance of that class.
See this comparison:
class Numbers
{
public int Add(int x, int y)
{
return x + y;
}
public static int AddNumbers(int x, int y)
{
return x + y;
}
}
class Main
{
//in this first case, we use the non-static version of the Add function
int z1 = (new Numbers()).Add(2, 4);
//in the second case, we use the static one
int z2 = Numbers.AddNumbers(3, 5);
}
Technically, the answers above are correct, but the examples are not correct from the OOP point of view.
For example you have a class like this:
class Zip
{
public static function zipFile($fileName)
{
//
}
public static function unzipFile($fileName)
{
//
}
}
The truth is that there is nothing object-oriented here. You just defined two functions which you need to call using the fancy syntax like Zip::zipFile($myFile) instead of just zipFile($myFile).
You don't create any objects here and the Zip class is only used as a namespace.
So in this case it is better to just define these functions outside of a class, as regular functions. PHP has had namespaces since version 5.3; you can use them if you want to group your functions.
With the OOP approach, your class would look like this:
class ZipArchive
{
private $_archiveFileName;
private $_files;
public function __construct($archiveFileName) {
$this->_archiveFileName = $archiveFileName;
$this->_files = [];
}
public function add($fileName)
{
$this->_files[] = $fileName;
return $this; // allows to chain calls
}
public function zip()
{
// zip the files into archive specified
// by $_archiveFileName
}
}
And then you can use it like this:
$archive = new ZipArchive('path/to/archive.zip');
$archive->add('file1')->add('file2')->zip();
What is more important, you can now use the zip functionality in an OOP way.
For example, you can have a base class Archive and sub-classes like ZipArchive, TarGzArchive, etc.
Now, you can create an instance of the specific sub-class and pass it to other code which will not even know whether files are going to be zipped or tar.gz-ed. For example:
if ($config['archive_type'] === 'targz') {
// use tar.gz if specified
$archive = new TarGzArchive($path);
} else {
// use zip by default
$archive = new ZipArchive($path);
}
$backup = new Backup($archive /*, other params*/);
$backup->run();
Now the $backup object will use the specified archive type. Internally it doesn't know and doesn't care how exactly files will be archived.
You can even have a CopyArchive class which will simply copy files to another location.
It is easy to do it this way because your archive support is written in an OOP way. You have small objects responsible for specific things; you create and combine them and get the result you want.
And if you just have a bunch of static methods instead of a real class, you will be forced to write procedural-style code.
So I would not recommend using static methods to implement the actual features of your application.
Static methods may be helpful to support logging, debugging, testing and similar things. For example, if you want to count the number of objects created, you can use a class-level static counter, increment it in the constructor, and have a static method which reads the counter and prints it or writes it to a log file.
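A minimal sketch of that counter idea (in C#, with a made-up class name):
public class Widget
{
    // Class-level counter shared by every instance (not thread-safe;
    // use Interlocked.Increment if that matters).
    private static int instancesCreated;

    public Widget()
    {
        instancesCreated++;
    }

    // Static method: callable without any particular Widget instance.
    public static void ReportInstanceCount()
    {
        Console.WriteLine("Widgets created so far: " + instancesCreated);
    }
}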
Yes, static classes are used for problems that require stateless computation. Such as adding two numbers. Zipping a file. Etc.
If your class requires state, where you need to store connections or other longer living entities, then you wouldn't use static.
AFAIK, static methods do not depend on a class instance. Just that.
As an example:
If you have a single-threaded program that will only ever have ONE database connection and will run several queries against the database, it may be better to implement it as a static class (note that I specified that you will never connect to several databases or have several threads).
That way you will not need to create several connection objects, because you already know that you will only use one. Singletons are also an option in this scenario.
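A rough C# sketch of that single-connection idea (the connection string is a placeholder, and this assumes the single-threaded scenario described above):
using System.Data;
using System.Data.SqlClient;

public static class Database
{
    // The one connection the whole program uses.
    private static readonly IDbConnection connection =
        new SqlConnection("connection-string-goes-here");

    public static IDbConnection Connection
    {
        get
        {
            if (connection.State != ConnectionState.Open)
                connection.Open();
            return connection;
        }
    }
}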
There are other examples.
If you create a class to convert values:
#include <string>

class Convert {
public:
    static std::string fromIntToString(int value);
};
This way you will not need to create an instance of the Convert class every time you need to convert from an integer to a string.
std::string value = Convert::fromIntToString(10);
If you hadn't done that, you would need to instantiate this class several times throughout your program.
I know that you can find several other examples. It is up to you and your scenario to decide when you are going to do that.
I am writing and browsing through a lot of methods in the project I'm working with, and as much as I think overloads are useful, I think that having a simple optional parameter with a default value can get around the problem, aiding in writing more readable and, I would think, more efficient code.
Now I hear that using these parameters in methods could carry nasty side effects.
What are these side effects, and is it worth the risk of using these parameters to keep the code clean?
I'll preface my answer by saying that any language feature can be used well or it can be used poorly. Optional parameters have some drawbacks, just like declaring locals as var does, or generics.
What are these side effects
Two come to mind.
The first is that the default values for optional parameters are compile-time constants that are embedded in the consumer of the method. Let's say I have this class in AssemblyA:
public class Foo
{
public void Bar(string baz = "cat")
{
//Omitted
}
}
And this in AssemblyB:
public void CallBar()
{
new Foo().Bar();
}
What really ends up being produced is this, in assemblyB:
public void CallBar()
{
new Foo().Bar("cat");
}
So, if you were to ever change your default value on Bar, both assemblyA and assemblyB would need to be recompiled. Because of this, I tend not to declare methods as public if they use optional parameters, rather internal or private. If I needed to declare it as public, I would use overloads.
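For the public case, the overload approach alluded to above might look roughly like this; the "cat" default then lives in exactly one place inside AssemblyA, so AssemblyB never bakes it in:
public class Foo
{
    // Consumers bind to one of these overloads; the default value stays
    // private to AssemblyA and can change without recompiling AssemblyB.
    public void Bar()
    {
        Bar("cat");
    }

    public void Bar(string baz)
    {
        //Omitted
    }
}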
The second issue being how they interact with interfaces and polymorphism. Take this interface:
public interface IBar
{
void Foo(string baz = "cat");
}
and this class:
public class Bar : IBar
{
public void Foo(string baz = "dog")
{
Console.WriteLine(baz);
}
}
These lines will print different things:
IBar bar1 = new Bar();
bar1.Foo(); //Prints "cat"
var bar2 = new Bar();
bar2.Foo(); //Prints "dog"
Those are two negatives that come to mind. However, there are positives, as well. Consider this method:
void Foo(string bar = "bar", string baz = "baz", string yat = "yat")
{
}
Creating overloads to cover all the possible combinations of those defaults would take several if not dozens of lines of code.
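A hedged sketch of what those overloads start to look like; note that a caller who wants to supply only yat has no overload available at all, because it would collide with the one taking only bar:
// Overloads covering only the "prefix" combinations of the defaults:
void Foo() { Foo("bar"); }
void Foo(string bar) { Foo(bar, "baz"); }
void Foo(string bar, string baz) { Foo(bar, baz, "yat"); }
void Foo(string bar, string baz, string yat)
{
    // actual work
}
// There is no way to add an overload that takes only "yat",
// because it would have the same signature as Foo(string bar).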
Conclusion: optional parameters are good, and they can be bad. Just like anything else.
Necromancing.
The thing with optional parameters is, they are BAD because they are unintuitive - meaning they do NOT behave the way you would expect.
Here's why:
They break ABI compatibility !
(and strictly speaking, they also break API compatibility, when used in constructors)
For example:
You have a DLL, in which you have code such as this
public void Foo(string a = "dog", string b = "cat", string c = "mouse")
{
Console.WriteLine(a);
Console.WriteLine(b);
Console.WriteLine(c);
}
Now what kinda happens is, you expect the compiler to generate this code behind the scenes:
public void Foo(string a, string b, string c)
{
Console.WriteLine(a);
Console.WriteLine(b);
Console.WriteLine(c);
}
public void Foo(string a, string b)
{
Foo(a, b, "mouse");
}
public void Foo(string a)
{
Foo(a, "cat", "mouse");
}
public void Foo()
{
Foo("dog", "cat", "mouse");
}
or perhaps more realistically, you would expect it to pass NULLs and do
public void Foo(string a, string b, string c)
{
if(a == null) a = "dog";
if(b == null) b = "cat";
if(c == null) c = "mouse";
Console.WriteLine(a);
Console.WriteLine(b);
Console.WriteLine(c);
}
so you can change the default-arguments at one place.
But this is not what the C# compiler does, because then you couldn't do:
Foo(a:"dog", c:"dogfood");
So instead the C# compiler does this:
Everywhere where you write e.g.
Foo(a:"dog", c:"mouse");
or Foo(a:"dog");
or Foo(a:"dog", b:"bla");
It substitutes it with
Foo(your_value_for_a_or_default, your_value_for_b_or_default, your_value_for_c_or_default);
So that means if you add another default value, change a default value, or remove a value, you don't break API compatibility, but you do break ABI compatibility.
So what this means is, if you just replace the DLL out of all files that compose an application, you'll break every application out there that uses your DLL. That's rather bad. Because if your DLL contains a bad bug, and I have to replace it, I have to recompile my entire application with your latest DLL. That might contain a lot of changes, so I can't do it quickly. I also might not have the old source code handy, and the application might be in a major modification, with no idea what commit the old version of the application was compiled on. So I might not be able to recompile at this time. That is very bad.
And as for only using them in PUBLIC methods, not private, protected or internal:
Yea, nice try, but one can still call private, protected or internal methods with reflection. Not because one wants to, but because it sometimes is necessary, as there is no other way. (Example).
Interfaces have already been mentioned by vcsjones.
The problem there is code-duplication (which allows for divergent default-values - or ignoring of default-values).
But the real bummer is, that in addition to that, you can now introduce API-breaking-changes in Constructors...
Example:
public class SomeClass
{
public SomeClass(bool aTinyLittleBitOfSomethingNew = true)
{
}
}
And now, everywhere where you use
System.Activator.CreateInstance<SomeClass>();
you'll now get a RUNTIME exception, because now there is NO parameter-less constructor...
The compiler won't be able to catch this at compile time.
Good night if you happen to have a lot of Activator.CreateInstances in your code.
You'll be screwed, and screwed badly.
Bonus points will be awarded if some of the code you have to maintain uses reflection to create class instances, or use reflection to access private/protected/internal methods...
Don't use optional parameters !
Especially not in class constructors.
(Disclaimer: sometimes, there simply is no other way - e.g. an attribute on a property that takes the name of the property as a constructor argument automagically - but try to limit it to these few cases, especially if you can make do with overloading)
I guess theoretically they are fine for quick prototyping, but only for that.
But since prototypes have a strong tendency to go into production (at least at the company I currently work for), don't use them for that, either.
I'd say that it depends on how different the method becomes when you include or omit that parameter.
If a method's behaviour and internal functioning is very different without a parameter, then make it an overload. If you're using optional parameters to change behaviour, DON'T. Instead of having a method that does one thing with one parameter, and something different when you pass in a second one, have one method that does one thing, and a different method that does the other thing. If their behaviour differs greatly, then they should probably be entirely separate, and not overloads with the same name.
If you need to know whether a parameter was user-specified or left blank, then consider making it an overload. Sometimes you can use nullable values if the place they're being passed in from won't allow nulls, but generally you can't rule out the possibility that the user passed null, so if you need to know where the value came from as well as what the value is, don't use optional parameters.
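Where nullable values do fit, a hedged sketch (the names are invented): a null default lets the method tell "the caller said nothing" apart from any real value, provided null itself is never a legitimate input:
private const int DefaultBatchSize = 100;

void Process(int? batchSize = null)
{
    // HasValue tells us whether the caller actually supplied a value,
    // which an ordinary default like "int batchSize = 100" cannot do
    // (and which breaks down if callers may legitimately pass null).
    bool callerSpecified = batchSize.HasValue;
    int effectiveSize = batchSize ?? DefaultBatchSize;
    Console.WriteLine("size=" + effectiveSize + ", explicitly set: " + callerSpecified);
}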
Above all, remember that the optional parameters should (kinda by definition) be used for things that have a small, trivial or otherwise unimportant effect on the outcome of the method. If you change the default value, any place that calls the method without specifying a value should still be happy with the result. If you change the default and then find that some other bit of code that calls the method with the optional parameter left blank is now not working how it should, then it probably shouldn't have been an optional parameter.
Places where it can be a good idea to use optional parameters are:
Methods where it's safe to just set something to a default if a value isn't provided. This basically covers anything where the caller might not know or care what the value is. A good example is in encryption methods - the caller may just think "I don't know crypto, I don't know what value R should be set to, I just want this to be encrypted", in which case you set the defaults to sensible values. Often these start out as a method with an internal variable that you then move to be user-provided. It's pointless making two methods when the only difference is that one has var foo = bar; somewhere at the start.
Methods that have a set of parameters, but not all of them are needed. This is quite common with constructors; you'll see overloads that each set different combinations of the various properties, but if there's three or four parameters that may or may not need to be set, that can require a lot of overloads to cover all the possible combinations (it's basically a handshake problem), and all these overloads have more or less identical behaviour internally. You can solve this by having most of them just set defaults and call the one that sets all parameters, but it's less code to use optional parameters.
Methods where the coder calling them might want to set parameters, but you want them to know what a "normal" value is. For example, the encryption method we mentioned earlier might require various parameters for whatever maths goes on internally. A coder might see that they can pass in values for workFactor or blockSize, but they may not know what "normal" values are for these. Commenting and documentation will help here, but so will optional parameters - the coder will see in the signature [workFactor = 24], [blockSize = 256] which helps them judge what kind of values are sensible. (Of course, this is no excuse to not comment and document your code properly.)
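Tying the first and third points together, a hedged sketch of what such a signature might look like (the parameter names and defaults are illustrative only, not real crypto advice):
public byte[] Encrypt(byte[] plaintext, int workFactor = 24, int blockSize = 256)
{
    // Callers who understand the parameters can tune them; everyone else
    // just calls Encrypt(data) and gets the sensible defaults shown in
    // the signature.
    throw new NotImplementedException("actual encryption omitted from this sketch");
}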
You're not making more readable and efficient code.
First, your method signatures will be gratuitously longer.
Second, overloads don't exist for the sole purpose of using default values - a quick look at the Convert class should show you that. Many times overloaded methods have different execution paths, which will become spaghetti code in your single non overloaded method.
Third, sometimes you need to know whether a value was used as input. How would you then know whether the user passed those values, if he happens to use the same value as the default one you were using?
Often I see optional parameters in C# like IMyInterface parameter = null.
Especially when I see that in constructors, I would even say it's a code smell.
I know that's a hard verdict - but in this case it obscures your dependencies, which is bad.
Like vcsjones said, you can use those language features right, but I believe optional parameters should be used only in some edge-cases.
my opinion.
I have an object called Parameters that gets tossed from method to method down and up the call tree, across package boundaries. It has about fifty state variables. Each method might use one or two variables to control its output.
I think this is a bad idea, because I can't easily see what a method needs to function, or even what might happen with a certain combination of parameters for module Y, which is totally unrelated to my current module.
What are some good techniques for decreasing coupling to this god object, or ideally eliminating it?
public void ExporterExcelParFonds(ParametresExecution parametres)
{
ApplicationExcel appExcel = null;
LogTool.Instance.ExceptionSoulevee = false;
bool inclureReferences = parametres.inclureReferences;
bool inclureBornes = parametres.inclureBornes;
DateTime dateDebut = parametres.date;
DateTime dateFin = parametres.dateFin;
try
{
LogTool.Instance.AfficherMessage(Variables.msg_GenerationRapportPortefeuilleReference);
bool fichiersPreparesAvecSucces = PreparerFichiers(parametres, Sections.exportExcelParFonds);
if (!fichiersPreparesAvecSucces)
{
parametres.afficherRapportApresGeneration = false;
LogTool.Instance.ExceptionSoulevee = true;
}
else
{
The caller would do:
PortefeuillesReference pr = new PortefeuillesReference();
pr.ExporterExcelParFonds(parametres);
First, at the risk of stating the obvious: pass the parameters which are used by the methods, rather than the god object.
This, however, might lead to some methods needing huge amounts of parameters because they call other methods, which call other methods in turn, etcetera. That was probably the inspiration for putting everything in a god object. I'll give a simplified example of such a method with too many parameters; you'll have to imagine that "too many" == 3 here :-)
public void PrintFilteredReport(
Data data, FilterCriteria criteria, ReportFormat format)
{
var filteredData = Filter(data, criteria);
PrintReport(filteredData, format);
}
So the question is, how can we reduce the amount of parameters without resorting to a god object? The answer is to get rid of procedural programming and make good use of object oriented design. Objects can use each other without needing to know the parameters that were used to initialize their collaborators:
// dataFilter service object only needs to know the criteria
var dataFilter = new DataFilter(criteria);
// report printer service object only needs to know the format
var reportPrinter = new ReportPrinter(format);
// filteredReportPrinter service object is initialized with a
// dataFilter and a reportPrinter service, but it doesn't need
// to know which parameters those are using to do their job
var filteredReportPrinter = new FilteredReportPrinter(dataFilter, reportPrinter);
Now the FilteredReportPrinter.Print method can be implemented with only one parameter:
public void Print(Data data)
{
    var filteredData = this.dataFilter.Filter(data);
    this.reportPrinter.Print(filteredData);
}
Incidentally, this sort of separation of concerns and dependency injection is good for more than just eliminating parameters. If you access collaborator objects through interfaces, then that makes your class
very flexible: you can set up FilteredReportPrinter with any filter/printer implementation you can imagine
very testable: you can pass in mock collaborators with canned responses and verify that they were used correctly in a unit test
If all your methods are using the same Parameters class then maybe it should be a member variable of a class with the relevant methods in it. Then you can pass Parameters into the constructor of this class, assign it to a member variable, and all your methods can use it without having to pass it as a parameter.
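A hedged sketch of that suggestion, using the names from the question:
public class PortefeuillesReference
{
    private readonly ParametresExecution parametres;

    // The parameters object is passed in once, at construction time.
    public PortefeuillesReference(ParametresExecution parametres)
    {
        this.parametres = parametres;
    }

    public void ExporterExcelParFonds()
    {
        // Methods read this.parametres directly instead of taking it
        // as an argument.
        bool inclureReferences = this.parametres.inclureReferences;
        // ...
    }
}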
A good way to start refactoring this god class is by splitting it up into smaller pieces. Find groups of properties that are related and break them out into their own class.
You can then revisit the methods that depend on Parameters and see if you can replace it with one of the smaller classes you created.
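For example, using the fields visible in the question, one such extraction might look roughly like this (the class and property names are invented for illustration):
// Groups only the settings the Excel export actually needs.
public class ExcelExportOptions
{
    public bool InclureReferences { get; set; }
    public bool InclureBornes { get; set; }
    public DateTime DateDebut { get; set; }
    public DateTime DateFin { get; set; }
}

// The method signature now says exactly what it depends on.
public void ExporterExcelParFonds(ExcelExportOptions options)
{
    // ...
}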
Hard to give a good solution without some code samples and real world situations.
It sounds like you are not applying object-oriented (OO) principles in your design. Since you mention the word "object" I presume you are working within some sort of OO paradigm. I recommend you convert your "call tree" into objects that model the problem you are solving. A "god object" is definitely something to avoid. I think you may be missing something fundamental... If you post some code examples I may be able to answer in more detail.
Query each client for their required parameters and inject them?
Example: each "object" that requires "parameters" is a "Client". Each "Client" exposes an interface through which a "Configuration Agent" queries the Client for its required parameters. The Configuration Agent then "injects" the parameters (and only those required by a Client).
For the parameters that dictate behavior, one can instantiate an object that exhibits the configured behavior. Then client classes simply use the instantiated object - neither the client nor the service has to know what the value of the parameter is. For instance, for a parameter that tells where to read data from, have the FlatFileReader, XMLFileReader and DatabaseReader all inherit the same base class (or implement the same interface). Instantiate one of them based on the value of the parameter; then clients of the reader class just ask the instantiated reader object for data without knowing whether the data comes from a file or from the DB.
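A hedged C# sketch of that idea, using the reader names from this answer (the parametres.sourceDonnees field is invented for illustration):
public interface IReader
{
    string ReadData();
}

public class FlatFileReader : IReader
{
    public string ReadData() { /* read from the flat file */ return ""; }
}

public class XMLFileReader : IReader
{
    public string ReadData() { /* read from the XML file */ return ""; }
}

public class DatabaseReader : IReader
{
    public string ReadData() { /* query the database */ return ""; }
}

// The behaviour-dictating parameter is consulted exactly once, at setup.
public static IReader CreateReader(ParametresExecution parametres)
{
    // parametres.sourceDonnees is a hypothetical field naming the data source.
    return parametres.sourceDonnees == "db"
        ? (IReader)new DatabaseReader()
        : new FlatFileReader();
}
// From here on, clients just call reader.ReadData() and never see the parameter.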
To start you can break your big ParametresExecution class into several classes, one per package, which only hold the parameters for the package.
Another direction could be to pass the ParametresExecution object at construction time. You won't have to pass it around at every function call.
(I am assuming this is within a Java or .NET environment.) Convert the class into a singleton. Add a static method called "getInstance()" or something similar to get the name-value bundle (and stop "tramping" it around -- see Ch. 10 of the "Code Complete" book).
Now the hard part. Presumably, this is within a web app or some other non batch/single-thread environment. So, to get access to the right instance when the object is not really a true singleton, you have to hide selection logic inside of the static accessor.
In Java, you can set up a "thread local" reference, and initialize it when each request or sub-task starts. Then, code the accessor in terms of that thread-local. I don't know if something analogous exists in .NET, but you can always fake it with a Dictionary (Hash, Map) which uses the current thread instance as the key.
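For what it's worth, .NET does have an analogue: ThreadLocal<T> (and the older [ThreadStatic] attribute). A sketch of the accessor described above, assuming the ParametresExecution type from the question:
using System.Threading;

public static class ParametresContext
{
    // Each thread (request / sub-task) sees its own instance.
    private static readonly ThreadLocal<ParametresExecution> Current =
        new ThreadLocal<ParametresExecution>();

    // Called once when the request or sub-task starts.
    public static void Initialize(ParametresExecution parametres)
    {
        Current.Value = parametres;
    }

    // The static accessor the rest of the code calls.
    public static ParametresExecution GetInstance()
    {
        return Current.Value;
    }
}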
It's a start... (there's always decomposition of the blob itself, but I built a framework that has a very similar semi-global value-store within it)
As a sort of continuation of this, I have the following newbie question:
What difference is there between building a wrapper class that takes lots of input parameters and passing those parameters directly into the final constructor?
Don't get me wrong, I think the multiple-input-parameter thing is pretty ugly, and I'm trying to get around it since, just like the poster of that question, I need to deal with a Calculator-like class that requires a lot of parameters. But what I don't understand is what a wrapper class for input parameters would solve, since I also need to build the input class--and that's just as ugly as the other alternative.
To summarize, I don't think this:
MyClass::MyClass(int param1, int param2, int param3... int paramN)
{
this->param1 = param1;
this->param2 = param2;
this->param3 = param3;
...
this->paramN = paramN;
}
...is much different from this:
Result MyClass::myInterface(MyInputClass input)
{
//perform calculations
}
MyInputClass::MyInputClass(int param1, int param2, int param3... int paramN)
{
this->param1 = param1;
this->param2 = param2;
this->param3 = param3;
...
this->paramN = paramN;
}
And, of course, I am trying to avoid setters as much as possible.
Am I missing something here? I'd love to have some insight on this, since I'm still a rather newbie programmer.
Here are some of the reasons one would want to use a parameter class:
Reusability: You can save the parameters in a variable and reuse them, maybe in another method.
Separation of concerns: Parameter handling (say, verifying which parameters are active, which are in their correct ranges, etc.) is performed in the parameter class; your calculation method only knows how to calculate, so in the future you know where each piece of logic lives and nothing is mixed.
You can add a new value with minimal impact on your calculator method.
Maybe your calculator method can be applied to two different sets of parameters, say one based on integers and one on doubles. It is more readable/maintainable/faster to write the calculation logic just once and change the parameter object.
Some classes do not need to initialize every single field at the constructor. Sometimes setters are the way to go.
The biggest benefits are:
Insulation from changes. You can add new properties to the parameter class and not have to change the class that uses it or any of its callers.
Code reduction when you're chaining methods. If MyClass needs to pass its parameters around, MyInputClass eliminates a bunch of code.
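As a small illustration of the first point (a C# sketch using the names from the question): adding a new parameter later means touching MyInputClass and whatever reads it, but not the signature of myInterface or any of its call sites.
public class Result { }

public class MyInputClass
{
    public int Param1 { get; set; }
    public int Param2 { get; set; }
    // Adding Param3 later changes only this class; the method below
    // and every caller keep compiling unchanged.
}

public class MyClass
{
    public Result myInterface(MyInputClass input)
    {
        // perform calculations using input.Param1, input.Param2, ...
        return new Result();
    }
}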
All the other points are completely valid and sound. Here's a little reinforcement from some authoritative text:
Code Complete suggests that you limit the number of parameters of any routine to seven, as "seven is the magic number for people's comprehension." It then goes on to suggest that passing more than seven parameters increases coupling with the calling scope and that you should use a structured variable (ala MyInputClass) instead.
I am working on a little pinball-game project for a hobby and am looking for a pattern to encapsulate constant variables.
I have a model, within which there are values which will be constant over the life of that model e.g. maximum speed/maximum gravity etc. Throughout the GUI and other areas these values are required in order to correctly validate input. Currently they are included either as references to a public static final, or just plain hard-coded. I'd like to encapsulate these "constant variables" in an object which can be injected into the model, and retrieved by the view/controller.
To clarify, the value of the "constant variables" may not necessarily be defined at compile time; they could come from reading in a file, user input, etc. What is known at compile time is which ones are needed. Perhaps an easier way to explain it is that whatever this encapsulation is, the values it provides are immutable.
I'm looking for a way to achieve this which:
has compile time type-safety (i.e. not mapping a string to variable at runtime)
avoids anything static (including enums, which can't be extended)
I know I could define an interface which has the methods such as:
public int getMaximumSpeed();
public int getMaximumGravity();
... and inject an instance of that into the model, and make it accessible in some way. However, this results in a lot of boilerplate code, which is pretty tedious to write/test etc (I am doing this for funsies :-)).
I am looking for a better way to do this, preferably something which has the benefits of being part of a shared vocabulary, as with design patterns.
Is there a better way to do this?
P.S. I've thought some more about this, and the best trade-off I could find would be to have something like:
public class Variables {
    enum Variable {
        MaxSpeed(100),
        MaxGravity(10);

        Variable(Object variableValue) {
            // assign value to field, provide getter etc.
        }
    }

    public Object getVariable(Variable v) { /* look up enum and get member */ }
} // end of Variables
I could then do something like:
Model m = new Model(new Variables());
Advantages: the lookup of a variable is protected by having to be a member of the enum in order to compile, variables can be added with little extra code
Disadvantages: enums cannot be extended, brittleness (a recompile is needed to add a variable), variable values would have to be cast from Object (to Integer in this example), which again isn't type safe, though generics may be an option for that... somehow
Are you looking for the Singleton or, a variant, the Monostate? If not, how does that pattern fail your needs?
Of course, here's the mandatory disclaimer that Anything Global Is Evil.
UPDATE: I did some looking, because I've been having similar debates/issues. I stumbled across a list of "alternatives" to classic global/scope solutions. Thought I'd share.
Thanks for all the time spent by you guys trying to decipher what is a pretty weird question.
I think, in terms of design patterns, the closest that comes to what I'm describing is the factory pattern, where I have a factory of pseudo-constants. Technically it's not creating an instance each call, but rather always providing the same instance (in the sense of a Guice provider). But I can create several factories, each of which can provide different pseudo-constants, and inject each into a different model, so the model's UI can validate input a lot more flexibly.
If anyone's interested, I've come to the conclusion that an interface providing a method for each pseudo-constant is the way to go:
public interface IVariableProvider {
    public int maxGravity();
    public int maxSpeed();
    // and everything else...
}

public class VariableProvider implements IVariableProvider {
    private final int maxGravity, maxSpeed; // ...and everything else

    public VariableProvider(int maxGravity, int maxSpeed) {
        this.maxGravity = maxGravity;
        this.maxSpeed = maxSpeed;
    }

    public int maxGravity() { return maxGravity; }
    public int maxSpeed() { return maxSpeed; }
    // and everything else...
}
Then I can do:
Model firstModel = new Model(new VariableProvider(2, 10));
Model secondModel = new Model(new VariableProvider(10, 100));
I think as long as the interface doesn't provide a prohibitively large number of variable getters, it wins over some parameterised lookup (which will either be vulnerable at run-time, or will prohibit extension/polymorphism).
P.S. I realise some have been questioning what my problem is with static final values. I made the statement (with tongue in cheek) to a colleague that anything static is inherently not object-oriented. So in my hobby I used that as the basis for a thought exercise where I try to remove anything static from the project (next I'll be trying to remove all 'if' statements ;-D). If I was on a deadline and I was satisfied public static final values wouldn't hamstring testing, I would have used them pretty quickly.
If you're just using Java/IoC, why not just dependency-inject the values?
e.g. have Spring inject the values via a map, and specify the object as a singleton:
<property name="values">
<map>
<entry> <key><value>a1</value></key><value>b1</value></entry>
<entry> <key><value>a2</value></key><value>b3</value></entry>
</map>
</property>
Your class is a singleton that holds an immutable copy of the map set in Spring:
private Map<String, String> m;

public String getValue(String s)
{
    return m.containsKey(s) ? m.get(s) : null;
}

public void setValues(Map<String, String> m)
{
    this.m = Collections.unmodifiableMap(m);
}
From what I can tell, you probably don't need to implement a pattern here -- you just need access to a set of constants, and it seems to me that's handled pretty well through the use of a publicly accessible static interface to them. Unless I'm missing something. :)
If you simply want to "objectify" the constants though, for some reason, then the Singleton pattern would probably be called for, if any; I know you mentioned in a comment that you don't mind creating multiple instances of this wrapper object, but in response I'd ask, then why even introduce the sort of confusion that could arise from having multiple instances at all? What practical benefit are you looking for that'd be satisfied with having the data in object form?
Now, if the values aren't constants, then that's different -- in that case, you probably do want a Singleton or Monostate. But if they really are constants, just wrap a set of enums or static constants in a class and be done! Keep-it-simple is as good a "pattern" as any.