Stuck on how to apply OOP to estimation problems

I have been reading online articles and am in the process of reading "Head First Design Patterns", and it seems one of the fundamental OOP principles is "encapsulate what varies". I am stuck on how to apply that to my problem, and I might not even be looking at the problem the right way, so I was hoping to get some advice.
I will sum up my application, my desired goal, and the problem I am facing, then at the end show the code I have. I omitted some parts of the code and simplified the inputs. I figured since this is more of a design problem, I don't need to show all the constructors and so on.
My application takes a bunch of measurements and processes the measurements through what I will call estimators: Least Squares (LS), extended Kalman filter (EKF) or sequential least squares (SLS). I would like to at run-time change which estimator is used.
Each estimator fundamentally has two methods, "Update()" and "DetectBlunders()" (called in that order), with the internals of each method differing by estimator, but the objective is the same. So I originally made a "BaseEstimator" that has methods "Update()" and "DetectBlunders()", and each estimator inherits from "BaseEstimator". While implementing the concrete estimators I started to notice some problems.
1) EKF requires one extra step, "Predict()", before "Update()". So I originally added a "Predict()" method to "BaseEstimator", and in LS and SLS "Predict()" does nothing. However this seems wrong somehow. If I changed "Predict()" or removed it from "BaseEstimator", my LS and SLS would be affected even though they shouldn't have had a "Predict()" in the first place.
2) EKF and SLS require one extra input to "Update()". So I did something similar to before: I added this extra input to "Update()" in "BaseEstimator", and in the LS override I just pass an empty variable. Again, this seems wrong. This input is another object, so in LS I actually pass a default-constructed object and then test whether it is empty, which seems very wrong.
Would anyone have any advice on what I can do, if anything? Maybe this is the only way to approach the problem. The end goal was to make an "EstimatorFactory" (the only design pattern I understand well enough to use) that at run time would create an Estimator. I may also add other estimators in the future, like a particle filter (PF). Again, I am very new to actually using OOP; the old me would have just used ifs/switches everywhere and had one class that does all three estimators (a couple thousand lines later).
class BaseEstimator {
public:
virtual ~BaseEstimator() = default;
// I don't actually call my inputs 1,2,3,4
virtual bool Update(
double input1, // Required by all three estimators
double input2, // Required by all three estimators
double input3, // Required by all three estimators
double input4 // Not required in LS, but required in EKF and SLS
) = 0;
virtual bool DetectBlunders() const = 0;
// Only required by EKF
virtual bool Predict(
double input1,
double input2
) = 0;
};
class EKF : public BaseEstimator {
public:
bool Update(
double input1,
double input2,
double input3,
double input4) override;
bool DetectBlunders() const override;
bool Predict(
double input1,
double input2) override;
};
class LS : public BaseEstimator {
public:
bool Update(
double input1,
double input2,
double input3,
double input4 = 0 // Not needed in LS, so I actually pass an empty object here
) override;
bool DetectBlunders() const override;
bool Predict(
double input1,
double input2) override {
// Do nothing since it's not required in LS
return true;
}
};
class SLS : public BaseEstimator {
public:
bool Update(
double input1,
double input2,
double input3,
double input4) override;
bool DetectBlunders() const override;
bool Predict(
double input1,
double input2) override {
// Do nothing since it's not required in SLS
return true;
}
};

I've been working with OOP for a while now, and what you describe is not that uncommon.
A base class has two roles:
1. To hide the specific implementation of its children.
2. To guarantee a certain amount of information to its children.
Once the estimator is constructed you should be working with BaseEstimator exclusively. This completely hides which estimator you are using, allowing you to write generic code independent of the specific implementation of your estimator.
Because of that, you will have no way of knowing which estimator you are using, and you will have to provide all the information the estimator might need.
All children have access to the same information. The only difference is how they process this information.
For all these reasons, you also want to offer a certain degree of flexibility to your child classes. It is not uncommon to see PreLoad/Loading/Loaded methods in OOP frameworks. These methods give a certain flexibility and scalability to the derived classes. Even if not all children use the Predict method, it is good to offer this possibility.
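Following this advice, one way to sketch it (in C++, with illustrative names and deliberately simplified signatures, not the question's real inputs) is to give Predict() a default no-op body in the base class, so that only estimators with a prediction step override it, and to select the concrete estimator at run time through a small factory:

```cpp
#include <cassert>
#include <memory>
#include <stdexcept>
#include <string>

// Illustrative sketch only: real estimators would carry state and matrices.
class BaseEstimator {
public:
    virtual ~BaseEstimator() = default;
    // Hook with a default no-op body: only estimators that need a
    // prediction step (EKF, a future particle filter) override it.
    virtual void Predict(double dt) { (void)dt; }
    virtual bool Update(double measurement) = 0;
    virtual bool DetectBlunders() const = 0;
};

class LS : public BaseEstimator {
public:
    bool Update(double) override { return true; }           // least-squares update
    bool DetectBlunders() const override { return false; }  // residual tests here
};

class EKF : public BaseEstimator {
public:
    void Predict(double dt) override { predicted_ = true; (void)dt; }
    bool Update(double) override { return predicted_; }     // needs Predict() first
    bool DetectBlunders() const override { return false; }
private:
    bool predicted_ = false;
};

// Run-time selection: the caller only ever sees BaseEstimator.
std::unique_ptr<BaseEstimator> MakeEstimator(const std::string& kind) {
    if (kind == "LS")  return std::make_unique<LS>();
    if (kind == "EKF") return std::make_unique<EKF>();
    throw std::invalid_argument("unknown estimator: " + kind);
}
```

The driving loop can then call Predict() before Update() unconditionally; for LS and SLS the call is a harmless no-op, so the caller never needs to know which estimator it holds.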

Related

Object Oriented Programming: Subclasses or Enum

I am making a simple turn-based farming simulator game where players make choices whether to buy land or crops and have turn-count based growing. Different crops grow for different times, and have different purchase prices and sale values. Objective is to be the first to reach a dollar amount.
My question is how to develop these crops programmatically. I currently have each crop variation as a subclass of Crop; however, this leads to a lot of redundancy (mostly different field/attribute values, or image paths). Since these objects are very similar save for some values, should they be subclasses, or would it be better practice to make one Crop class with an enum for the type and use logic to determine the values it should have?
Superclass Crop
subclass Wheat
subclass Corn
subclass Barley
Or
Crop.Type = CropType.Wheat
if(this.Type == CropType.Wheat) { return StockMarket.Wheat_Sell_Value; }
else if(this.Type == CropType.Corn) { return StockMarket.Corn_Sell_Value; }
If you make a single crop class it risks becoming very large and unwieldy; in particular, if you want to add a new crop type you'll have to update the hundreds of if statements littered through your code (e.g. if(this.Type == CropType.Wheat) { return StockMarket.Wheat_Sell_Value; }).
To echo #oswin's answer, use inheritance sparingly. You are probably ok using a base-class with a few "dumb" properties, but be especially careful when adding anything that implements "behaviour" or complexity, like methods and logic; i.e. anything that acts on CropType within Crop is probably a bad idea.
One simple approach works if crop types all have the same properties, just with different values; crop instances then get acted on by processes within the game, as below. (Note: if crops have different properties then I would probably use interfaces to handle that, because they are more forgiving when you need to make changes.)
// Crop types - could be held in a database or config file, so it's easy to add new types.
// Water, light, heat are required to grow and influence how crops grow.
// Energy - how much energy you get from eating the crop.
Name:Barley, growthRate:1.3, water:1.3, light:1.9, heat:1.3, energy:1.4
Name:Corn, growthRate:1.2, water:1.2, light:1.6, heat:1.2, energy:1.5
Name:Rice, growthRate:1.9, water:1.5, light:1.0, heat:1.4, energy:1.8
The crop type values help drive logic later on. You also (I assume) need a crop instance:
class CropInstance
{
public CropType Crop { get; set; }
public double Size { get; set; }
public double Health { get; set; } // needs a setter: game logic updates it
}
Then you simply have other parts of your program that act on instances of Crop, e.g:
void ApplyWeatherForTurn(CropInstance crop, Weather weather)
{
// Logic that applies weather to a crop for the turn.
// E.g. might under- or over-supply the amount of water, light, heat
// depending on the type of crop, resulting in an 'x' value, which might
// increase or decrease the health of the crop instance.
double x = crop.Crop.Water - weather.RainFall;
// ...
crop.Health = x;
}
double CurrentValue(CropInstance crop)
{
return crop.Size * crop.Health * crop.Crop.Energy;
}
Note you can still add logic that does different things to different crops, but based on their values, not their types:
void CropThieves(CropInstance crop)
{
if(crop.Health > 2.0 && crop.Crop.Energy > 2.0)
{
// Thieves steal a % of the crop.
crop.Size = crop.Size * 0.9;
}
}
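To make the type-as-data idea concrete, here is a minimal sketch (written in C++ here; the names and the numbers are just the illustrative rows above): each crop type is one plain record, and game logic reads the values rather than branching on a type:

```cpp
#include <cassert>
#include <map>
#include <string>

// One plain record per crop type; rows could be loaded from a file or database.
struct CropType {
    double growthRate, water, light, heat, energy;
};

const std::map<std::string, CropType> kCropTypes = {
    {"Barley", {1.3, 1.3, 1.9, 1.3, 1.4}},
    {"Corn",   {1.2, 1.2, 1.6, 1.2, 1.5}},
    {"Rice",   {1.9, 1.5, 1.0, 1.4, 1.8}},
};

struct CropInstance {
    CropType type;
    double size;
    double health;
};

// Logic acts on values, not on concrete subclasses, so adding a new
// crop type is a new data row, not a new class.
double CurrentValue(const CropInstance& c) {
    return c.size * c.health * c.type.energy;
}
```

Adding a fourth crop here means adding one map entry; no code that consumes CropType has to change.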
Update - Interfaces:
I was thinking about this some more. Code like double CurrentValue(CropInstance crop) assumes you only ever deal in crop instances. If you were to add other types like Livestock, that sort of code could get cumbersome.
E.g. If you know for certain that you'll only ever have crops then the approach is fine. If you decide to add another type later, it will be manageable, if you become wildly popular and decide to add 20 new types you'll want to do a re-write / re-architecture because it won't scale well from a maintenance perspective.
This is where interfaces come in. Imagine you will eventually have many different types, including Crop (as above) and Livestock - note its properties aren't the same:
// growthRate - how effectively an animal grows.
// breedRate - how effectively the animals breed.
Name:Sheep, growthRate:2.1, water:1.9, food:2.0, energy:4.6, breedRate:1.7
Name:Cows, growthRate:1.4, water:3.2, food:5.1, energy:8.1, breedRate:1.1
class HerdInstance
{
public HerdType Herd { get; set; }
public int Population { get; set; }
public double Health { get; }
}
So how would interfaces come to the rescue? Crop and herd specific logic is located in the relevant instance code:
// Allows items to be valued
interface IFinancialValue
{
double CurrentValue();
}
class CropInstance : IFinancialValue
{
...
public double CurrentValue()
{
return this.Size * this.Health * this.Crop.Energy;
}
}
class HerdInstance : IFinancialValue
{
...
public double CurrentValue()
{
return this.Population * this.Health * this.Herd.Energy - this.Herd.Food;
}
}
You can then do things with objects that implement IFinancialValue:
public string NetWorth()
{
List<IFinancialValue> list = new List<IFinancialValue>();
list.AddRange(Crops);
list.AddRange(Herds);
double total = 0.0;
for(int i = 0; i < list.Count; i++)
{
total = total + list[i].CurrentValue();
}
return string.Format("Your total net worth is ${0} from {1} sellable assets", total, list.Count);
}
You might recall that above I said:
...but be especially careful when adding anything that implements
"behaviour" or complexity, like methods and logic; i.e. anything that
acts on CropType within Crop is probably a bad idea.
...which seems to contradict the code just above. The difference is that if you have one class with everything in it you won't be able to flex, whereas in the approach above I have assumed that I can add as many different game-asset types as I like by using the [x]Type and [x]Instance architecture.
The answer depends on the difference in functionality between the crop types. The general rule is to avoid unnecessary complexity where possible and inheritance should be used sparingly because it introduces hard dependencies.
So if all crops are functionally similar and only differ by their attribute values then you would want to use a single class for crop, but if your game logic demands the crop types to behave very differently and/or carry very different sets of data, then you may want to consider creating separate structures.
Whether inheritance would be the best choice (in case you need separate structures) cannot be answered without knowing the exact details of your game world either. Some alternatives you could consider are:
interfaces (or another type of mix-in), which allow you to re-use behavior or data across multiple types, e.g. if crops can be harvested, maybe forests can be harvested as well.
structs (or data classes), which only define the data structure and not the behavior. This is generally more efficient and forces you into a simpler design with fewer abstractions.
a functional programming approach where the crops only exist as primitives being passed around functions. This has the benefits of functional programming, such as no side effects, fewer bugs, tests that are easier to write, and easier concurrency designs, which can help your game scale.
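As a sketch of the interfaces/mix-in alternative (in C++, where a small abstract class plays the role of the interface; the names are hypothetical):

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Anything sellable exposes exactly one capability.
struct IFinancialValue {
    virtual ~IFinancialValue() = default;
    virtual double CurrentValue() const = 0;
};

struct Crop : IFinancialValue {
    double size = 0, health = 0, energy = 0;
    double CurrentValue() const override { return size * health * energy; }
};

struct Herd : IFinancialValue {
    double population = 0, health = 0, energy = 0, food = 0;
    double CurrentValue() const override {
        return population * health * energy - food;
    }
};

// Works for any mix of asset types, present or future.
double NetWorth(const std::vector<std::unique_ptr<IFinancialValue>>& assets) {
    double total = 0.0;
    for (const auto& a : assets) total += a->CurrentValue();
    return total;
}
```

NetWorth never needs to change when a twentieth asset type is added, which is the scaling point made above.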

C++/CLI ref parameter overloading

I have a C++/CLI value class (struct in C#) that is used for math.
As you know, math-related structs often use ref parameters in their methods to improve performance, and then add a non-ref overload for convenience.
Like this, in C#:
public struct Vector
{
public static Vector Add(Vector left, Vector right) => Add(ref left, ref right);
public static Vector Add(ref Vector left, ref Vector right)
{
// Something.
}
}
On the other hand, in C++/CLI, I made the ref version like below.
public value class Vector
{
public:
static Vector Add(Vector% left, Vector% right)
{
// Something.
}
}
That works without problems and exposes ref parameters properly to C# projects.
But how about the non-ref version?
I can declare its prototype, but I cannot call the ref version from it.
public value class Vector
{
public:
static Vector Add(Vector left, Vector right)
{
return Add(left, right); // Compile error C2668.
}
static Vector Add(Vector% left, Vector% right)
{
// Something.
}
}
How can I solve this?
What is your goal here?
If your goal is to have an API that can be called from C++/CLI, then ignoring the issue of how to implement the two versions, how would you call them from C++/CLI code? Since C++/CLI doesn't require an extra keyword when calling the ref version, you'll get error C2668 everywhere you try to use Add. In that case, I would implement the two versions with different names.
If your goal is to have an API that can be called from C#, then it might make sense to have these two methods with the same name, since C# can differentiate which one to call based on the ref keyword. In that case, I would implement Add(Vector, Vector) and Add(Vector%, Vector%) as both calling private method AddImpl(Vector%, Vector%).
I haven't checked this with a compiler. Since the way you call the methods is so similar, you may get compiler errors even after you fix the implementations. Last I checked, an unmanaged C++ compiler won't even let you call an overload pair along the lines of foo(int) and foo(int&), because they're called in exactly the same way. (Though I might be mis-remembering that.)
Now, all that said, let me say that this feels like a premature optimization to me. Consider implementing the 'call by value' versions only, and if your performance is acceptable, then there's no need for these extra hoops.
Also, you haven't shown what data is in your Vector type. If it's heavyweight enough that you're considering call by reference to save time copying the object, then perhaps it should be changed to a reference type.
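For what it's worth, standard C++ has the same ambiguity: a by-value/by-reference overload pair is legal to declare but ambiguous to call with lvalue arguments. The "different names, one shared implementation" idea can be sketched in ISO C++ like this (the C++/CLI version would use Vector% in place of the reference):

```cpp
#include <cassert>

struct Vector {
    double x, y;

    // Single shared implementation; the public entry points forward here.
    static Vector AddImpl(const Vector& l, const Vector& r) {
        return {l.x + r.x, l.y + r.y};
    }

    // By-value convenience overload.
    static Vector Add(Vector l, Vector r) { return AddImpl(l, r); }

    // A distinct name for the by-reference version sidesteps the
    // overload ambiguity entirely; callers always know which they get.
    static Vector AddRef(const Vector& l, const Vector& r) {
        return AddImpl(l, r);
    }
};
```

If both entry points had been named Add, a call like Add(a, b) with lvalues would match both equally well and fail to compile, which is the native cousin of error C2668.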

Understanding an object-oriented assignment while working with methods

I'm still new to java and writing/reading code, so I'm not quite sure what my professor wants. All I need is some reinforcement of what I should be doing.
The assignment is as follows:
Specify and then implement a method (of some class X) that is passed a NumberList and that returns an array containing the values from the NumberList.
(The NumberList is not changed by your method. Your method is NOT a member of NumberList. You won't be able to test your method by running it since I am not providing the NumberList class to you.)
If you need it, here are the public methods.
The one method that I use is:
public int size() //returns number of items in this NumberList
So, as I understand, all I am doing is taking the NumberList and creating an array of the values. Easy enough. Is this handling the work that is asked?
public double [] arrayNL(NumberList list){
//pre: NL is not empty
//post: array with NL values is returned
double [] arrayNL = new double [list.size()];
for(int x=0;x<list.size();x++){
arrayNL[x]=list.nextDouble;
}
return arrayNL;
}
Just uncertain about list.size() and list.nextDouble... and that is if I'm correct in understanding the problem. Really haven't done enough object coding to be familiar/confident with it and I heavily rely on testing, so I'm questioning everything. Any help would be great, I just have trouble following this professor's instructions for some reason.
Not sure I understand the question. Is the goal to write the code that copies the list to the array, or to implement the methods in the NumberList class based on the pre- and post- conditions?
I believe that one of the goals of this exercise is to teach you how to read an API (application programming interface) and implement code against its methods just by reading the documentation, without reading the actual code behind it.
This is an important practice since as a future developer you will have to use other people's methods and you won't be able to implement everything by yourself.
As for your code, I'm not sure where you've seen the nextDouble method, as I don't see it in the documentation. Unless it was given to you, I suggest you stick to the documentation of NumberList and other basic coding features.
Instead of using nextDouble you can use: public double get(int index) so your for loop would look something like this:
for(int i = 0; i < list.size() ;i++){
arrayNL[i]= list.get(i);
}
The rest of your code is basically fine.
Your code is basically all there, although nextDouble is undefined in the NumberList class, so that may give you trouble. Here's what each part is doing:
public double [] arrayNL(NumberList list){
// Initialize an array of doubles containing the same # of elements
// as the NumberList
double [] arrayNL = new double [list.size()];
// Iterate through the NumberList
for(int x=0;x<list.size();x++) {
// Copy the double from the NumberList object to the double array
// at the current index. Note "nextDouble" is undefined, but
// NumberList does have a method you can use instead.
arrayNL[x]=list.nextDouble;
}
// After iterating through the whole list, return the double array
return arrayNL;
}
Sorry for any formatting issues. Typed this on my phone

Optional Parameters, Good or Bad?

I am writing and browsing through a lot of methods in the project I'm working with, and as much as I think overloads are useful, I think that having a simple optional parameter with a default value can get around the problem, aiding in writing more readable and, I would think, more efficient code.
Now I hear that using these parameters in methods can carry nasty side effects.
What are these side effects, and is it worth the risk of using these parameters to keep the code clean?
I'll preface my answer by saying: any language feature can be used well or it can be used poorly. Optional parameters have some drawbacks, just like declaring locals as var does, or generics.
What are these side effects
Two come to mind.
The first is that the default value for an optional parameter is a compile-time constant that is embedded in the consumer of the method. Let's say I have this class in AssemblyA:
public class Foo
{
public void Bar(string baz = "cat")
{
//Omitted
}
}
And this in AssemblyB:
public void CallBar()
{
new Foo().Bar();
}
What really ends up being produced is this, in assemblyB:
public void CallBar()
{
new Foo().Bar("cat");
}
So, if you were to ever change your default value on Bar, both assemblyA and assemblyB would need to be recompiled. Because of this, I tend not to declare methods as public if they use optional parameters, rather internal or private. If I needed to declare it as public, I would use overloads.
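For comparison, C++ default arguments behave the same way: the default is substituted at each call site at compile time, so changing it in a shared header has no effect on already-compiled callers until they are rebuilt. A minimal C++ illustration:

```cpp
#include <cassert>
#include <string>

// If this lives in a library header and the default later changes to "dog",
// every caller compiled against the old header keeps passing "cat".
std::string Bar(const std::string& baz = "cat") {
    return baz;
}
// A call written as Bar() is compiled exactly as if it were Bar("cat").
```

So the "recompile all consumers" caveat is not a C# quirk; it is inherent to call-site substitution of defaults.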
The second issue being how they interact with interfaces and polymorphism. Take this interface:
public interface IBar
{
void Foo(string baz = "cat");
}
and this class:
public class Bar : IBar
{
public void Foo(string baz = "dog")
{
Console.WriteLine(baz);
}
}
These lines will print different things:
IBar bar1 = new Bar();
bar1.Foo(); //Prints "cat"
var bar2 = new Bar();
bar2.Foo(); //Prints "dog"
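C++ has exactly the same trap with virtual functions: the default argument is picked from the static type at the call site, while the body is dispatched dynamically. A C++ sketch mirroring the IBar/Bar example:

```cpp
#include <cassert>
#include <string>

struct IBar {
    virtual ~IBar() = default;
    virtual std::string Foo(std::string baz = "cat") { return baz; }
};

struct Bar : IBar {
    // Same method, different default: which default applies depends on
    // the static type used at the call site, not the dynamic type.
    std::string Foo(std::string baz = "dog") override { return baz; }
};
```

Calling Foo() through an IBar reference uses IBar's default ("cat") even though Bar's override runs, while calling it on a Bar directly uses "dog".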
Those are two negatives that come to mind. However, there are positives, as well. Consider this method:
void Foo(string bar = "bar", string baz = "baz", string yat = "yat")
{
}
Creating overloads that offer all the possible permutations of defaults would take several if not dozens of lines of code.
Conclusion: optional parameters are good, and they can be bad. Just like anything else.
Necromancing.
The thing with optional parameters is, they are BAD because they are unintuitive - meaning they do NOT behave the way you would expect.
Here's why:
They break ABI compatibility!
(and strictly speaking, they also break API compatibility, when used in constructors)
For example:
You have a DLL, in which you have code such as this
public void Foo(string a = "dog", string b = "cat", string c = "mouse")
{
Console.WriteLine(a);
Console.WriteLine(b);
Console.WriteLine(c);
}
Now, you would probably expect the compiler to generate this code behind the scenes:
public void Foo(string a, string b, string c)
{
Console.WriteLine(a);
Console.WriteLine(b);
Console.WriteLine(c);
}
public void Foo(string a, string b)
{
Foo(a, b, "mouse");
}
public void Foo(string a)
{
Foo(a, "cat", "mouse");
}
public void Foo()
{
Foo("dog", "cat", "mouse");
}
or perhaps more realistically, you would expect it to pass NULLs and do
public void Foo(string a, string b, string c)
{
if(a == null) a = "dog";
if(b == null) b = "cat";
if(c == null) c = "mouse";
Console.WriteLine(a);
Console.WriteLine(b);
Console.WriteLine(c);
}
so you can change the default-arguments at one place.
But this is not what the C# compiler does, because then you couldn't do:
Foo(a:"dog", c:"dogfood");
So instead the C# compiler does this:
Everywhere where you write e.g.
Foo(a:"dog", c:"mouse");
or Foo(a:"dog");
or Foo(a:"dog", b:"bla");
It substitutes it with
Foo(your_value_for_a_or_default, your_value_for_b_or_default, your_value_for_c_or_default);
So that means if you add another optional parameter, change a default value, or remove a parameter, you don't break API compatibility, but you do break ABI compatibility.
What this means is that if you just replace the DLL, without recompiling the rest of the application, you'll break every application out there that uses your DLL. That's rather bad. If your DLL contains a bad bug and I have to replace it, I have to recompile my entire application against your latest DLL. That might involve a lot of changes, so I can't do it quickly. I also might not have the old source code handy, and the application might be in the middle of a major modification, with no idea which commit the old version of the application was compiled from. So I might not be able to recompile at this time. That is very bad.
And as for only using optional parameters in PUBLIC methods, not private, protected or internal ones:
Nice try, but one can still call private, protected or internal methods with reflection. Not because one wants to, but because it sometimes is necessary, as there is no other way. (Example).
Interfaces have already been mentioned by vcsjones.
The problem there is code duplication (which allows for divergent default values - or ignoring of default values).
But the real bummer is that, in addition to that, you can now introduce API-breaking changes in constructors...
Example:
public class SomeClass
{
public SomeClass(bool aTinyLittleBitOfSomethingNew = true)
{
}
}
And now, everywhere where you use
System.Activator.CreateInstance<SomeClass>();
you'll now get a RUNTIME exception, because now there is NO parameter-less constructor...
The compiler won't be able to catch this at compile time.
Good night if you happen to have a lot of Activator.CreateInstances in your code.
You'll be screwed, and screwed badly.
Bonus points will be awarded if some of the code you have to maintain uses reflection to create class instances, or use reflection to access private/protected/internal methods...
Don't use optional parameters !
Especially not in class constructors.
(Disclaimer: sometimes there simply is no other way - e.g. an attribute on a property that automagically takes the name of the property as a constructor argument - but try to limit it to those few cases, especially if you can make do with overloading)
I guess theoretically they are fine for quick prototyping, but only for that.
But since prototypes have a strong tendency to go into production (at least at the company I currently work for), don't use them for that, either.
I'd say that it depends how different the method becomes when you include or omit that parameter.
If a method's behaviour and internal functioning is very different without a parameter, then make it an overload. If you're using optional parameters to change behaviour, DON'T. Instead of having a method that does one thing with one parameter, and something different when you pass in a second one, have one method that does one thing, and a different method that does the other thing. If their behaviour differs greatly, then they should probably be entirely separate, and not overloads with the same name.
If you need to know whether a parameter was user-specified or left blank, then consider making it an overload. Sometimes you can use nullable values if the place they're being passed in from won't allow nulls, but generally you can't rule out the possibility that the user passed null, so if you need to know where the value came from as well as what the value is, don't use optional parameters.
Above all, remember that the optional parameters should (kinda by definition) be used for things that have a small, trivial or otherwise unimportant effect on the outcome of the method. If you change the default value, any place that calls the method without specifying a value should still be happy with the result. If you change the default and then find that some other bit of code that calls the method with the optional parameter left blank is now not working how it should, then it probably shouldn't have been an optional parameter.
Places where it can be a good idea to use optional parameters are:
Methods where it's safe to just set something to a default if a value isn't provided. This basically covers anything where the caller might not know or care what the value is. A good example is in encryption methods - the caller may just think "I don't know crypto, I don't know what value R should be set to, I just want this to be encrypted", in which case you set the defaults to sensible values. Often these start out as a method with an internal variable that you then move to be user-provided. It's pointless making two methods when the only difference is that one has var foo = bar; somewhere at the start.
Methods that have a set of parameters, but not all of them are needed. This is quite common with constructors; you'll see overloads that each set different combinations of the various properties, but if there's three or four parameters that may or may not need to be set, that can require a lot of overloads to cover all the possible combinations (it's basically a handshake problem), and all these overloads have more or less identical behaviour internally. You can solve this by having most of them just set defaults and call the one that sets all parameters, but it's less code to use optional parameters.
Methods where the coder calling them might want to set parameters, but you want them to know what a "normal" value is. For example, the encryption method we mentioned earlier might require various parameters for whatever maths goes on internally. A coder might see that they can pass in values for workFactor or blockSize, but they may not know what "normal" values are for these. Commenting and documentation will help here, but so will optional parameters - the coder will see in the signature [workFactor = 24], [blockSize = 256] which helps them judge what kind of values are sensible. (Of course, this is no excuse to not comment and document your code properly.)
You're not making more readable and efficient code.
First, your method signatures will be gratuitously longer.
Second, overloads don't exist for the sole purpose of using default values - a quick look at the Convert class should show you that. Many times overloaded methods have different execution paths, which will become spaghetti code in your single non overloaded method.
Third, sometimes you need to know whether a value was used as input. How would you then know whether the user passed those values, if he happens to use the same value as the default one you were using?
Often I see optional parameters in C# like IMyInterface parameter = null.
Especially when I see that in constructors, I would even say it's a code smell.
I know that's a harsh verdict - but in this case it obscures your dependencies, which is bad.
Like vcsjones said, you can use those language features right, but I believe optional parameters should be used only in some edge cases.
Just my opinion.

How to define a set of input parameters in Pex?

Say I have MyClass with hundreds of fields.
If I use an object of MyClass as an input parameter, Pex will simply choke trying to generate all possible combinations (mine runs into thousands of paths even on a simple test):
[PexMethod]
void MytestMethod(MyClass param){...}
How can I tell Pex to use only a set of predefined objects of MyClass rather than having it try to be smart and generate all possible combinations to test?
In other words, I want to manually specify a list of possible states for param in the code above, and tell Pex to use it.
Cheers
If you find that Pex is generating large amounts of irrelevant, redundant, or otherwise unhelpful inputs, you can shape the values that it generates for your parametrized unit tests' input using PexAssume, which will ensure that all generated inputs meet a set of criteria that you provide.
If you were wanting to ensure that arguments came from a predefined collection of values, for instance, you could do something like this:
public void TestSomething(Object a) {
PexAssume.IsTrue(someCollection.Contains(a));
}
PexAssume has other helper methods as well for more general input pruning, such as IsNotNull, AreNotEqual, etc. What little documentation is out there suggests that there is some collection-specific functionality as well, though if those methods exist, I'm not familiar with them.
Check out the Pex manual for a bit more information.
Pex will not try to generate every possible combination of values. Instead, it analyses your code and tries to cover every branch. So if you have
if (MyObject.Property1 == "something")
{
...
}
then it will try to create an object that has Property1 == "something". So limiting the tests to some predefined objects is rather against the 'Pex philosophy'. That said, you may find the following information interesting.
You can provide a Pex factory class. See, for instance, this blog post or this one.
[PexFactoryClass]
public partial class EmployeeFactory
{
[PexFactoryMethod(typeof(Employee))]
public static Employee Create(
int i0,
string s0,
string s1,
DateTime dt0,
DateTime dt1,
uint ui0,
Contract c0
)
{
Employee e0 = new Employee();
e0.EmployeeID = i0;
e0.FirstName = s0;
e0.LastName = s1;
e0.BirthDate = dt0;
e0.StartDateContract = dt1;
e0.Salary = ui0;
e0.TypeContract = c0;
return e0;
}
}
Pex will then call this factory class (instead of a default factory) using appropriate values it discovers from exploring your code. The factory method allows you to limit the possible parameters and values.
You can also use PexArguments attribute to suggest values, but this will not prevent Pex from trying to generate other values to cover any branches in your code. It just tries the ones you provide first.
[PexArguments(1, "foo")] // try this first
void MyTest(int i, string s)
{
...
}
See here for more information on PexArguments and also search for 'seed values' in the PDF documentation on Parameterized Test Patterns.