I have a class which represents a set of numbers. The constructor takes three arguments: startValue, endValue and stepSize.
The class is responsible for holding a list of all values between the start and end value, taking the stepSize into account.
Example: startValue: 3, endValue: 1, stepSize = -1, Collection = { 3,2,1 }
I am currently creating the collection and some info strings about the object in the constructor. The public members are the read-only info strings and the collection.
My constructor does three things at the moment:
Checks the arguments; this could throw an exception from the constructor
Fills values into the collection
Generates the information strings
I can see that my constructor does real work, but how can I fix this? Or should I fix it at all? If I move the "methods" out of the constructor, it is like having an init function, which leaves me with a not fully initialized object. Is the existence of my object doubtful? Or is it not that bad to have some work done in the constructor, since it is still possible to test the constructor because no references to other objects are created?
To me it looks wrong, but I just can't find a solution. I have also considered a builder, but I am not sure that is right, because there is no choice between different kinds of construction. On the other hand, the individual unit tests would each have less responsibility.
I am writing my code in C#, but I would prefer a general solution; that's why the text contains no code.
EDIT: Thanks for editing my poor text (: I changed the title back because it represents my opinion and the edited title did not. I am not asking if real work is a flaw or not. For me, it is. Take a look at this reference.
http://misko.hevery.com/code-reviewers-guide/flaw-constructor-does-real-work/
The blog states the problems quite well. Still I can't find a solution.
Concepts that urge you to keep your constructors lightweight:
Inversion of control (Dependency Injection)
Single responsibility principle (as applied to the constructor rather than a class)
Lazy initialization
Testing
K.I.S.S.
D.R.Y.
Links to arguments of why:
How much work should be done in a constructor?
What (not) to do in a constructor
Should a C++ constructor do real work?
http://misko.hevery.com/code-reviewers-guide/flaw-constructor-does-real-work/
If you check the arguments in the constructor, that validation code can't be shared when the same arguments come in from some other source (a setter, another constructor, a parameter object).
If you fill values into the collection or generate the information strings in the constructor, that code can't be shared with other constructors you may need to add later.
Besides sharing, there is also deferring work until it is really needed (lazy init). And overriding through inheritance offers more options when you have many methods that each do one thing, rather than one do-everything constructor.
Your constructor only needs to put your class into a usable state. It does NOT have to be fully initialized. But it is perfectly free to use other methods to do the real work. That just doesn't take advantage of the "lazy init" idea. Sometimes you need it, sometimes you don't.
Just keep in mind that anything the constructor does or calls is shoved down the users' and testers' throats.
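To make the sharing and lazy-init points concrete, here's a rough C# sketch; the class and member names are mine, invented for illustration, not the poster's actual code:
using System;
using System.Collections.Generic;

public class NumberRange
{
    private readonly List<int> values;
    private string info;   // deliberately NOT built in the constructor

    public NumberRange(int startValue, int endValue, int stepSize)
    {
        Validate(startValue, endValue, stepSize);        // shared check, see below
        values = Fill(startValue, endValue, stepSize);
    }

    // Because validation is its own method, a setter, another constructor or a
    // parameter object can reuse exactly the same check.
    public static void Validate(int startValue, int endValue, int stepSize)
    {
        if (stepSize == 0)
            throw new ArgumentException("stepSize must not be zero.", "stepSize");
        if (startValue != endValue && Math.Sign(endValue - startValue) != Math.Sign(stepSize))
            throw new ArgumentException("stepSize points away from endValue.", "stepSize");
    }

    public IReadOnlyList<int> Values { get { return values; } }

    // Lazy init: the info string is only generated the first time someone asks for it.
    public string Info
    {
        get { return info ?? (info = string.Join(", ", values)); }
    }

    private static List<int> Fill(int startValue, int endValue, int stepSize)
    {
        var result = new List<int>();
        for (int v = startValue; stepSize > 0 ? v <= endValue : v >= endValue; v += stepSize)
            result.Add(v);
        return result;
    }
}
Other constructors (or a future setter) can call Validate as well, and nobody pays for the info string until they actually read Info.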
EDIT:
You still haven't accepted an answer, and I've had some sleep, so I'll take a stab at a design. A good design is flexible, so I'm going to assume it's OK that I'm not sure what the information strings are, or whether our object is required to represent a set of numbers by being a collection (and so provides iterators, size(), add(), remove(), etc.) or is merely backed by a collection and provides some narrow, specialized access to those numbers (such as being immutable).
This little guy is the Parameter Object pattern
/** Throws exception if sign of (endValue - startValue) != sign of stepSize */
ListDefinition(T startValue, T endValue, T stepSize);
T can be int or long or short or char. Have fun but be consistent.
/** An interface, independent from any one collection implementation */
ListFactory(ListDefinition ld){
/** Make as many as you like */
List<T> build();
}
If we don't need to narrow access to the collection, we're done. If we do, wrap it in a facade before exposing it.
/** Provides read access only. Immutable if List l kept private. */
ImmutableFacade(List l);
Oh wait, requirements change, forgot about 'information strings'. :)
/** Build list of info strings */
InformationStrings(String infoFilePath) {
List<String> read();
}
Have no idea if this is what you had in mind but if you want the power to count line numbers by twos you now have it. :)
/** Assuming information strings have a 1 to 1 relationship with our numbers */
MapFactory(List l, List infoStrings){
/** Make as many as you like */
Map<T, String> build();
}
So, yes I'd use the builder pattern to wire all that together. Or you could try to use one object to do all that. Up to you. But I think you'll find few of these constructors doing much of anything.
EDIT2
I know this answer's already been accepted, but I've realized there's room for improvement and I can't resist. The ListDefinition above works by exposing its contents with getters, ick. There is a "Tell, don't ask" design principle being violated here for no good reason.
ListDefinition(T startValue, T endValue, T stepSize) {
List<T> buildList(List<T> l);
}
This lets us build any kind of list implementation and have it initialized according to the definition. Now we don't need ListFactory. buildList is something I call a shunt: it returns the same reference it accepted, after having done something with it. It simply allows you to skip giving the new ArrayList a name. Making a list now looks like this:
ListDefinition<int> ld = new ListDefinition<int>(3, 1, -1);
List<int> l = new ImmutableFacade<int>( ld.buildList( new ArrayList<int>() ) );
Which works fine. Bit hard to read. So why not add a static factory method:
List<int> l = ImmutableRangeOfNumbers.over(3, 1, -1);
This doesn't accept dependency injections but it's built on classes that do. It's effectively a dependency injection container. This makes it a nice shorthand for popular combinations and configurations of the underlying classes. You don't have to make one for every combination. The point of doing this with many classes is now you can put together whatever combination you need.
Well, that's my 2 cents. I'm gonna find something else to obsess on. Feedback welcome.
As far as cohesion is concerned, there's no "real work", only work that's in line (or not) with the class/method's responsibility.
A constructor's responsibility is to create an instance of a class. And a valid instance for that matter. I'm a big fan of keeping the validation part as intrinsic as possible, so that you can see the invariants every time you look at the class. In other words, that the class "contains its own definition".
However, there are cases when an object is a complex assemblage of multiple other objects, with conditional logic, non-trivial validation or other creation sub-tasks involved. This is when I'd delegate the object creation to another class (Factory or Builder pattern) and restrain the accessibility scope of the constructor, but I think twice before doing it.
In your case, I see no conditionals (except argument checking), no composition or inspection of complex objects. The work done by your constructor is cohesive with the class because it essentially only populates its internals. While you may (and should) of course extract atomic, well identified construction steps into private methods inside the same class, I don't see the need for a separate builder class.
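For instance, here's a hedged C# sketch (names invented) of keeping the work in the constructor while giving each step its own name:
using System;
using System.Collections.Generic;

public class NumberRange
{
    public IReadOnlyList<int> Values { get; private set; }
    public string Info { get; private set; }

    public NumberRange(int startValue, int endValue, int stepSize)
    {
        CheckArguments(startValue, endValue, stepSize);        // step 1: validate
        Values = FillValues(startValue, endValue, stepSize);   // step 2: fill the collection
        Info = BuildInfo(Values);                              // step 3: build the info string
    }

    private static void CheckArguments(int startValue, int endValue, int stepSize)
    {
        if (stepSize == 0)
            throw new ArgumentException("stepSize must not be zero.", "stepSize");
    }

    private static IReadOnlyList<int> FillValues(int startValue, int endValue, int stepSize)
    {
        var result = new List<int>();
        for (int v = startValue; stepSize > 0 ? v <= endValue : v >= endValue; v += stepSize)
            result.Add(v);
        return result;
    }

    private static string BuildInfo(IReadOnlyList<int> values)
    {
        return values.Count + " value(s): " + string.Join(", ", values);
    }
}
The public surface is unchanged; only the inside of the class gains structure, and each step is small enough to read and reason about on its own.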
The constructor is a special member function in the sense that it constructs the object, but after all, it is a member function. As such, it is allowed to do things.
Consider, for example, C++'s std::fstream: it opens a file in its constructor. It can throw an exception, but doesn't have to.
As long as you can test the class, it is all good.
It's true, a constructor should do the minimum of work oriented towards a single aim: the successful creation of a valid object. Whatever that takes is OK, but no more.
In your example, creating the collection in the constructor is perfectly valid, as an object of your class represents a set of numbers (your words). If an object is a set of numbers, you should clearly create that set in the constructor! Otherwise the constructor would not do what it is there for: constructing a fresh, valid object.
These info strings catch my attention. What is their purpose? What exactly do they do? They sound like something peripheral, something that can be left for later and exposed through a method, like
String getInfo()
or similar.
If you want to use Microsoft's .NET Framework as an example here, it is perfectly valid, both semantically and in terms of common practice, for a constructor to do some real work.
An example of where Microsoft does this is in their implementation of System.IO.FileStream. This class performs string processing on path names, opens new file handles, opens threads, binds all sorts of things, and invokes many system functions. The constructor is actually, in effect, about 1,200 lines of code.
I believe your example, where you are creating a list, is absolutely fine and valid. I would just make sure that you fail as early as possible. Say you pass a minimum value higher than the maximum value: with a poorly written loop condition you could get stuck in an infinite loop and exhaust all available memory.
The takeaway is "it depends" and you should use your best judgement. If all you wanted was a second opinion, then I say you're fine.
It's not a good practice to do "real work" in the constructor: you can initialize class members, but you shouldn't call other methods or do more "heavy lifting" in the constructor.
If you need to do some initialization that requires a large amount of code to run, a good practice is to do it in an init() method which is called after the object has been constructed.
The reasoning for not doing heavy lifting inside the constructor is that if something bad happens and fails silently, you'll end up with a messed-up object, and it will be a nightmare to debug and work out where the issues are coming from.
In the case you describe above I would only do the assignments in the constructor and then, in two separate methods, I would implement the validations and generate the string-information.
Implementing it this way also conforms to the SRP (Single Responsibility Principle), which suggests that any method or function should do one thing, and one thing only.
Related
I'm a little unclear on how far to take the idea of making all members within a class private and providing public methods to handle mutations. Primitive types are not the issue; it's encapsulated objects that I am unclear about. The benefit of making object members private is the ability to hide methods that do not apply to the context of the class being built. The downside is that you have to provide public methods to pass parameters through to the underlying object (more methods, more work). On the other hand, if you want all of the underlying object's methods and properties exposed, couldn't you just make the object public? What are the dangers in having objects exposed this way?
For example, I would find it useful to have everything from a vector or ArrayList exposed. The only downside I can think of is that a public member could potentially be assigned a type it's not, via implicit casting (or something to that effect). Would a volatile designation reduce the potential for problems?
Just a side note: I understand that true encapsulation implies that members are private.
What are the dangers in having objects exposed this way?
Changing the type of those objects would require changing the interface to the class. With private objects + public getters/setters, you'd only have to modify the code in the getters and setters, assuming you want to keep the things being returned the same.
Note that this is why properties are useful in languages such as Python, which technically doesn't have private class members, only obscured ones at most.
The problem with making instance variables public is that you can never change your mind later and make them private without breaking existing code that relies on direct public access to those instance vars. Some examples:
You decide to later make your class thread-safe by synchronizing all access to instance vars, or maybe by using a ThreadLocal to create a new copy of the value for each thread. Can't do it if any thread can directly access the variables.
Using your example of a vector or array list: at some point, you realize that there is a security flaw in your code because those classes are mutable, so somebody else can replace the contents of the list. If the list were only available via an accessor method, you could easily solve the problem by making an immutable copy of it upon request (see the sketch after these examples), but you can't do that with a public variable.
You realize later that one of your instance vars is redundant and can be derived based on other variables. Once again, easy if you're using accessors, impossible with public variables.
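To make the vector/list example concrete, here's a hedged C# sketch of the accessor approach; the Holder class and its members are invented for illustration:
using System.Collections.Generic;
using System.Collections.ObjectModel;

public class Holder
{
    private readonly List<int> values = new List<int>();

    public void Add(int value) { values.Add(value); }

    // The accessor hands out an immutable copy, so callers cannot clear or
    // replace the contents held internally.
    public ReadOnlyCollection<int> Values
    {
        get { return new List<int>(values).AsReadOnly(); }
    }
}
With a public List<int> field there is no place to put that copy, so every caller shares (and can corrupt) the internal state.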
I think that it boils down to a practical point - if you know that you're the only one who will be using this code, and it pains you to write accessors (every IDE will do it for you automatically), and you don't mind changing your own code later if you decide to break the API, then go for it. But if other people will be using your class, or if you would like to make it easier to refactor later for your own use, stick with accessors.
Object oriented design is just a guideline. Think about it from the perspective of the person who will be using your class. Balance OOD with making it intuitive and easy to use.
You could run into issues depending on the language you are using and how it treats return statements or assignment operators. In some cases it may give you a reference, in other cases a value.
For example, say you have a PrimeCalculator class that figures out prime numbers, then you have another class that does something with those prime numbers.
public PrimeCalculator calculatorObject = new PrimeCalculator();
Vector<Integer> primeNumbers = calculatorObject.PrimeNumbersVector;
/* do something complicated here */
primeNumbers.clear(); // free up some memory
When you use this stuff later, possibly in another class, you don't want the overhead of calculating the numbers again so you use the same calculatorObject.
Vector<Integer> primes = calculatorObject.PrimeNumbersVector;
int tenthPrime = primes.elementAt(9);
It may not exactly be clear at this point whether primes and primeNumbers reference the same Vector. If they do, trying to get the tenth prime from primes would throw an error.
You can do it this way if you're careful and understand what exactly is happening in your situation, but you have a smaller margin of error using functions to return a value rather than assigning the variable directly.
Well, you can check these posts:
first this
then this
This should solve your confusion. It solved mine! Thanks to Nicol Bolas.
Also read the comments below the accepted answer (and notice the link given in the second-to-last comment by me in the first post).
Also visit the Wikipedia article.
I understand the differences between them (at least in C#). I know the effects they have on the elements to which they are assigned. What I don't understand is why it is important to implement them - why not have everything Public?
The material I read on the subject usually goes on about how classes and methods shouldn't have unnecessary access to others, but I've yet to come across an example of why/how that would be a bad thing. It seems like a security thing, but I'm the programmer; I create the methods and define what they will (or will not) do. Why would I spend all the effort to write a function which tried to change a variable it shouldn't, or tried to read information in another class, if that would be bad?
I apologize if this is a dumb question. It's just something I ran into on the first articles I ever read on OOP, and I've never felt like it really clicked.
"I'm the programmer" is a correct assumption only if you're the only programmer.
In many cases, other programmers work with the first programmer's code. They use it in ways he didn't intend by fiddling with the values of fields they shouldn't, and they create a hack that works, but breaks when the producer of the original code changes it.
OOP is about creating libraries with well-defined contracts. If all your variables are public and accessible to others, then the "contract" theoretically includes every field in the object (and its sub-objects), so it becomes much harder to build a new, different implementation that still honors the original contract.
Also, the more "moving parts" of your object are exposed, the easier it is for a user of your class to manipulate it incorrectly.
You probably don't need this, but here's an example I consider amusing:
Say you sell a car with no hood over the engine compartment. Come nighttime, the driver turns on the lights. He gets to his destination, gets out of the car and then remembers he left the light on. He's too lazy to unlock the car's door, so he pulls the wire to the lights out from where it's attached to the battery. This works fine - the light is out. However, because he didn't use the intended mechanism, he finds himself with a problem next time he's driving in the dark.
Living in the USA (go ahead, downvote me!), he refuses to take responsibility for his incorrect use of the car's innards, and sues you, the manufacturer for creating a product that's unsafe to drive in the dark because the lights can't be reliably turned on after having been turned off.
This is why all cars have hoods over their engine compartments :)
A more serious example: You create a Fraction class, with a numerator and denominator field and a bunch of methods to manipulate fractions. Your constructor doesn't let its caller create a fraction with a 0 denominator, but since your fields are public, it's easy for a user to set the denominator of an existing (valid) fraction to 0, and hilarity ensues.
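A hedged C# sketch of that failure; this Fraction is invented for illustration:
using System;

public class Fraction
{
    public int numerator;     // public fields: nothing protects the invariant below
    public int denominator;

    public Fraction(int numerator, int denominator)
    {
        if (denominator == 0)
            throw new ArgumentException("denominator must not be zero.");
        this.numerator = numerator;
        this.denominator = denominator;
    }

    public int Truncated()
    {
        return numerator / denominator;   // throws DivideByZeroException once the invariant is broken
    }
}

// var half = new Fraction(1, 2);   // the constructor enforces the rule...
// half.denominator = 0;            // ...but a public field lets anyone break it afterwards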
First, nothing in the language forces you to use access modifiers - you are free to make everything public in your class if you wish. However, there are some compelling reasons for using them. Here's my perspective.
Hiding the internals of how your class operates allows you to protect that class from unintended uses. While you may be the creator of the class, in many cases you will not be the only consumer, or even the only maintainer. Hiding internal state protects the class for people who may not understand its workings as well as you do. Making everything public creates the temptation to "tweak" the internal state or internal behavior when the class isn't acting the way you may want, rather than actually correcting the public interface or the internal implementation. This is the road to ruin.
Hiding internals helps to de-clutter the namespace, and allows tools like Intellisense to display only the relevant and meaningful methods/properties/fields. Don't discount tools like Intellisense - they are a powerful means for developers to quickly identify what they can do with your class.
Hiding internals allows you to structure an interface appropriate for the problem the class is solving. Exposing all of the internals (which often substantially outnumber the exposed interface) makes it hard to later understand what the class is trying to solve.
Hiding internals allows you to focus your testing on the appropriate portion - the public interface. When all methods/properties of a class are public, the number of permutations you must potentially test increases significantly - since any particular call path becomes possible.
Hiding internals helps you control (enforce) the call paths through your class. This makes it easier to ensure that your consumers understand what your class can be asked to do - and when. Typically, there are only a few paths through your code that are meaningful and useful. Allowing a consumer to take any path makes it more likely that they will not get meaningful results - and will interpret that as your code being buggy. Limiting how your consumers can use your class actually frees them to use it correctly.
Hiding the internal implementation frees you to change it with the knowledge that it will not adversely impact consumers of your class - so long as your public interface remains unchanged. If you decide to use a dictionary rather than a list internally - no one should care. But if you made all the internals of your class available, someone could write code that depends on the fact that you internally use a list. Imagine having to change all of the consumers when you want to change such choices about your implementation. The golden rule is: consumers of a class should not care how the class does what it does.
It is primarily a hiding and sharing thing. You may produce and use all your own code, but other people provide libraries, etc. to be used more widely.
Making things non-public allows you to explicitly define the external interface of your class. The non-public stuff is not part of the external interface, which means you can change anything you want internally without affecting anyone using the external interface.
You only want to expose the API and keep everything else hidden. Why?
OK, let's assume you want to make an awesome Matrix library, so you write:
class Matrix {
    public Object[][] data; // the data your matrix stores
    ...
    public Object[] getRow()
}
By default, any other programmer who uses your library will want to maximize the speed of his program by tapping into the underlying structure.
//Someone else's function
Object one() { return data[0][0]; }
Now you discover that using a flat, one-dimensional structure to emulate the matrix will increase performance, so you change data from
Object[][] data => Object[] data
which causes Object one() to break. In other words, by changing your implementation you broke backward compatibility :-(
By encapsulating, you separate the internal implementation from the external interface (achieved with the private modifier).
That way you can change the implementation as much as you like without breaking backward compatibility :D Profit!!!
Of course, if you are the only programmer who is ever going to modify or use that class, you might as well keep it public.
Note: There are other major benefits for encapsulating your stuff, this is just one of many. See Encapsulation for more details
I think the best reason for this is to provide layers of abstraction on your code.
As your application grows, you will need to have your objects interacting with other objects. Having publicly modifiable fields makes it harder to wrap your head around your entire application.
Limiting what you make public on your classes makes it easier to abstract your design so you can understand each layer of your code.
For some classes, it may seem ridiculous to have private members with a bunch of methods that just set and get those values. The reason for them is this: say you have a class where the members are public and directly accessible:
class A
{
public int i;
....
}
Now you go and use that class in a bunch of code you wrote, directly accessing i, and then you realize that i should have some constraints on it; say i should always be >= 0 and less than 100 (for argument's sake).
You could go through all of the code where you used i and check this constraint everywhere, or you could just wrap i in a public property (or setter) that does it for you:
class A
{
    private int i;

    public int I
    {
        get { return i; }
        set
        {
            if (value >= 0 && value < 100)
                i = value;
            else
                throw new ArgumentOutOfRangeException("value", "I must be >= 0 and less than 100.");
        }
    }
}
This hides all of that error checking. While the example is trite, situations like these come up quite often.
It is not related to security at all.
Access modifiers and scope are all about structure, layers, organization, and communication.
If you are the only programmer, it is probably fine until you have so much code even you can't remember. At that point, it's just like a team environment - the access modifiers and the structure of the code guide you to stay within the architecture.
A lot of times, in code on the internet or code from my co-workers, I see an object being created with just one method that only gets used once in the whole application. Like this:
class iOnlyHaveOneMethod{
    public function theOneMethod(){
        //loads and loads of code, say 100's of lines,
        // but it only gets used once in the whole application
    }
}
if($foo){
    $bar = new iOnlyHaveOneMethod;
    $bar->theOneMethod();
}
Is that really better than:
if($foo){
    //loads and loads of code which only gets used here and nowhere else
}
?
For readability it makes sense to move the loads and loads of code away, but shouldn't it just be in a function?
function loadsAndLoadsOfCode(){
//Loads and loads of code
}
if($foo){ loadsAndLoadsOfCode(); }
Is moving the code to a new object really better than just creating a function or putting the code in there directly?
To me the function makes more sense and seems more readable than creating an object which is hardly of any use, since it just holds one method.
The problem is not whether it's in a function or an object.
The problem is that you have hundreds of lines in one blob. Whether that mass of code is in a method of an object or in a plain function seems more or less irrelevant to me, just minor syntactic sugar.
What are those hundreds of lines doing? That's the place to look to implement object oriented best practice.
If your other developers really think using an object instead of a function makes it significantly more "object oriented" but having a several-hundred line function/method isn't seen as a code smell, then I think organisationally you have some education to do.
Well, if there really is "loads and loads" of code in the method, then it should be broken down into several protected methods in that class, in which case the use of a class scope is justified.
Perhaps that code isn't reusable because it hasn't been factored well into several distinct methods. By moving it into a class and breaking it down, you might find it could be better reused elsewhere. At least it would be much more maintainable.
Whilst the function with hundreds of lines of code clearly indicates a problem (as others have already pointed out), placing it in a separate instance class rather than a static function does have advantages, which you can exploit by rejigging your example a fraction:
// let's instead assume that $bar was set earlier using a setter
if($foo){
$bar = getMyBar();
$bar->theOneMethod();
}
This gives you a couple of advantages now:
This is a simple example of the Strategy Pattern. If $bar implements an interface that provides theOneMethod(), then you can dynamically switch implementations of that method;
Testing your class independently of $bar->theOneMethod() is dramatically easier, as you can replace $bar with a mock at testing time.
Neither of these advantages is available if you just use a static function.
I would argue that, whilst simple static functions have their place, non-trivial methods (as this clearly is by the 'hundreds of lines' comment) deserve their own instance anyway:
to separate concerns;
to aid testing;
to aid refactoring and reimplementation.
You are really asking two questions here:
Is just declaring a function better than creating an object to hold only this function?
Should any function contain "loads of code"?
The first part: If you want to be able to dynamically switch functions, you may need the explicit object encapsulation as a workaround in languages that cannot handle functions this way. Of course, having to allocate a new object, assign it to a variable, then call the function from that variable is a bit dumb when all you want to do is call a function.
The second part: Ideally not, but there is no clear definition of "loads", and it may be the appropriate thing to do in certain cases.
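Back to the first part: in a language with delegates or first-class functions (C# is used here purely for illustration), the one-method wrapper can often just be a function value, which is still easy to pass around and swap out:
using System;

class Program
{
    // The "one method" as a plain static function.
    static void DoTheWork()
    {
        Console.WriteLine("loads and loads of code");
    }

    static void Main()
    {
        bool foo = true;

        // A delegate gives you switchable behaviour without a wrapper class.
        Action work = DoTheWork;
        if (foo) work();
    }
}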
Yes, the presence of loads and loads of code is a Code Smell.
I'd say you almost never want to have either a block or a method with loads of code in it -- it doesn't matter whether it's in its own class or not.
Moving it to an object might be a first step in refactoring though - so it might make sense in that way. First move it to its own class, and later split it down into several smaller methods.
Well, I'd say it depends on how tightly coupled the block of code is with the calling section of code.
If it's so tightly coupled, that I can't imagine it being used anywhere else, I'd prefer sticking it in a private method of the calling class. That way it won't be visible to other parts of your system, guaranteeing it won't be misused by others.
On the other hand, if the block of code is generic enough (e.g. email validation) to possibly be interesting in other parts of the system, I'd have no problem extracting that part into its own class and then considering it a utility class, even if that means it will be a single-method class.
If your question was more in the lines of "what to do with hundreds and hundreds of lines of code", then you really need to be doing some refactoring.
As much as a single method with lots of code is a code smell, my first thought was to at least make the method static: there's no data in the class, so there's no need to create an object.
I think I would rephrase the question you are asking. What you really want to ask is: does my class support the single responsibility principle? Is there any way to decompose the pieces of your class into separate, smaller pieces that might change independently of each other (data access, parsing, etc.)? Can you unit test your class easily?
If you can say yes to the above items, I wouldn't worry about method versus new class, as the whole point here is that you have readable, maintainable code.
In my team we have a red flag if a class gets long (over x lines), but that is just a heuristic: if your class has 2,000 lines of code, it can probably be broken down and is probably not supporting the SRP.
For testability, it is definitely better to break it out into a separate class with separate method(s). It is a whole lot easier to write unit tests for single methods than as part of an inline if statement in a code-behind file or whatnot.
That being said, I agree with everyone else that the method should be broken out into single responsibility methods instead of hundreds of lines of code. This too will make it more readable and easier to test. And hopefully, you might get some reuse out of some of the logic contained in that big mess of code.
Is type checking considered bad practice even if you are checking against an interface? I understand that you should always program to an interface and not an implementation - is this what it means?
For example, in PHP, is the following OK?
if($class instanceof AnInterface) {
// Do some code
}
Or is there a better way of altering the behaviour of code based on a class type?
Edit: Just to be clear I am talking about checking whether a class implements an interface not just that it is an instance of a certain class.
As long as you follow the LSP, I don't see a problem. Your code must work with any implementation of the interface. It's not a problem that certain implementations cause you to follow different code paths, as long as you can correctly work with any implementation of the interface.
If your code doesn't work with all implementations of the interface, then you shouldn't use the interface in the first place.
If you can avoid type checking, you should; however, one scenario where I found it handy was a web service which took a message whose contents could change. We had to persist the message back into a database, and in order to get the right component to break the message down into its proper tables, we used type checking in a sense.
What I find more common and flexible than if ($class instanceof SomeOtherType) is to define an IProcessor strategy, for example, and then use a factory to create the correct class based on the type of $class.
So in C#, roughly this:
void Process(Message msg)
{
    IProcessor processor = ProcessingFactory.GetProcessor(msg.GetType());
    processor.Process(msg);
}
However, sometimes doing this can be overkill. If you're only dealing with one variation that won't change, implement it using a type check, and when/if you find you were wrong and it requires more checks, refactor it into a more robust solution.
In my practice any checking for type (as well as type casting) has always indicated that something is wrong with the code or with the language.
So I try to avoid it whenever possible.
Run-time type checking is often necessary in situations where an interface provides all the methods necessary to do something, but does not provide enough to do it well. A prime example of such a situation is determining the number of items in an enumerable sequence. It's possible to make such a determination by enumerating through the sequence, but many enumerable objects "know" how many items they contain. If an object knows how many items it contains, it will likely be more efficient to ask it than to enumerate through the collection and count the items individually.
Arguably, IEnumerable should have provided some methods to ask what it knows about the number of items it contains [recognizing the possibility that the object may know that the number is unbounded, or that it's at most 4,591 (but could be a lot less), etc.], but it doesn't. What might be ideal would be if a new version of the IEnumerable interface could be produced that included default implementations for any "new" methods it adds, and if such an interface could be considered to be implemented by any implementation of the present version. Unfortunately, because no such feature exists, the only way to get the count of an enumerable collection without enumerating it is to check whether it implements any known collection interfaces that include a Count member.
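For instance, here's a C# sketch of that run-time check; it's roughly the trick LINQ's Enumerable.Count() uses internally, with the helper name invented:
using System.Collections.Generic;

static class Counting
{
    public static int FastCount<T>(IEnumerable<T> source)
    {
        // Run-time type check: if the sequence already knows its count, ask it.
        ICollection<T> collection = source as ICollection<T>;
        if (collection != null)
            return collection.Count;      // O(1), no enumeration needed

        // Otherwise fall back to enumerating and counting by hand.
        int count = 0;
        foreach (T item in source)
            count++;
        return count;
    }
}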
Let's say you have a Person object and it has a method on it, promote(), that transforms it into a Captain object. What do you call this type of method/interaction?
It also feels like an inversion of:
myCaptain = new Captain(myPerson);
Edit: Thanks to all the replies. The reason I'm coming across this pattern (in Perl, but relevant anywhere) is purely for convenience. Without knowing any implementation details, you could say the Captain class "has a" Person (I realize this may not be the best example, but be assured it isn't a subclass).
Implementation I assumed:
// this definition only matches example A
Person.promote() {
return new Captain(this)
}
personable = new Person;
// A. this is what i'm actually coding
myCaptain = personable.promote();
// B. this is what my original post was implying
personable.promote(); // is magically now a captain?
So, literally, it's just a convenience method for the construction of a Captain. I was merely wondering if this pattern has been seen in the wild and if it had a name. And I guess yeah, it doesn't really change the class so much as it returns a different one. But it theoretically could, since I don't really care about the original.
Ken++, I like how you point out a use case. Sometimes it really would be awesome to change something in place, in, say, a memory-sensitive environment.
A method of an object shouldn't change its class. You should either have a member which returns a new instance:
myCaptain = myPerson->ToCaptain();
Or use a constructor, as in your example:
myCaptain = new Captain(myPerson);
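A hedged C# sketch of the first option, assuming (as the question's edit says) that Captain has-a Person rather than being a subclass; all names are illustrative:
public class Person
{
    public string Name { get; private set; }

    public Person(string name) { Name = name; }

    // Conversion method: this Person is left untouched; a new Captain is returned.
    public Captain ToCaptain()
    {
        return new Captain(this);
    }
}

public class Captain
{
    private readonly Person person;

    public Captain(Person person) { this.person = person; }

    public string Name { get { return person.Name; } }
}

// Captain myCaptain = myPerson.ToCaptain();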
I would call it a conversion, or even a cast, depending on how you use the object. If you have a value object:
Person person;
You can use the constructor method to implicitly cast:
Captain captain = person;
(This is assuming C++.)
A simpler solution might be making rank a property of Person. I don't know your data structure or requirements, but if you need to do something that is trying to break the basics of a language, it's likely that there is a better way to do it.
You might want to consider the "State Pattern", also sometimes called the "Objects for States" pattern. It is defined in the book Design Patterns, but you could easily find a lot about it on Google.
A characteristic of the pattern is that "the object will appear to change its class."
Here are some links:
Objects for States
Pattern: State
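For a flavor of the pattern, here's a minimal C# sketch with invented names: the object keeps its identity and delegates to a replaceable state object, so it appears to change its class.
public interface IRank
{
    string Title { get; }
}

public class Crewman : IRank
{
    public string Title { get { return "Crewman"; } }
}

public class CaptainRank : IRank
{
    public string Title { get { return "Captain"; } }
}

public class Person
{
    private IRank rank = new Crewman();   // the current state object

    // Promotion swaps the state; the Person instance and every reference to it stay the same.
    public void Promote()
    {
        rank = new CaptainRank();
    }

    public string Title { get { return rank.Title; } }
}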
Everybody seems to be assuming a C++/Java-like object system, possibly because of the syntax used in the question, but it is quite possible to change the class of an instance at runtime in other languages.
Lisp's CLOS allows changing the class of an instance at any time, and it's a well-defined and efficient transformation. (The terminology and structure is slightly different: methods don't "belong" to classes in CLOS.)
I've never heard a name for this specific type of transformation, though. The function which does this is simply called change-class.
Richard Gabriel seems to call it the "change-class protocol", after Kiczales' AMOP, which formalized as "protocols" many of the internals of CLOS for metaprogramming.
People wonder why you'd want to do this; I see two big advantages over simply creating a new instance:
faster: changing class can be as simple as updating a pointer, and updating any slots that differ; if the classes are very similar, this can be done with no new memory allocations
simpler: if a dozen places already have a reference to the old object, creating a new instance won't change what they point to; if you need to update each one yourself, that could add a lot of complexity for what should be a simple operation (2 words, in Lisp)
That's not to say it's always the right answer, but it's nice to have the ability to do this when you want it. "Change an instance's class" and "make a new instance that's similar to that one" are very different operations, and I like being able to say exactly what I mean.
The first interesting part would be to know: why do you want or need an object to change its class at runtime?
There are various options:
You want it to respond differently to some methods for a given state of the application.
You might want it to have new functionality that the original class doesn't have.
Others...
Statically typed languages such as Java and C# don't allow this to happen, because the type of the object must be known at compile time.
Other programming languages, such as Python and Ruby, may allow this (I don't know for sure, but I know they can add methods at runtime).
For the first option, the answer given by Charlie Flowers is correct: using the State pattern allows a class to behave differently while the object keeps the same interface.
For the second option, you would need to change the object type anyway and assign it to a new reference with the extra functionality. So you will need to create another distinct object and you'll end up with two different objects.