Using "Base" in a Class Name - oop

Is it acceptable to use the word 'Base' in a class name which is at the bottom of the inheritance tree?
I have always found this a bit of a cop-out, just wondering if anyone agrees with me.
For example, if I am refactoring certain elements from MyClassA and MyClassB into a common base class, I'd be tempted to create a MyBaseClass from which the two inherit.
But what happens if I ever need to refactor MyBaseClass? MyBaseBaseClass? Now that's just silly.
I know that Rocky Lhotka doesn't mind it in his CSLA framework, but I'm always uneasy about 'definites' in programming.
Thoughts?
Let me clarify why I'm even worrying about this.
I have two namespaces - MySpecificNamespace and MyCommonNamespace. MySpecificNamespace uses MyCommonNamespace, as you might expect.
Now, I like to make maximum use of namespaces wherever possible to describe the context of the problem, and avoid adding the context to the class name. So, for example, consider that I have a class in MySpecificNamespace which descends from one in MyCommonNamespace.
Option A
I could call this
class MySpecificClass : MyClass
{
}
But then I'm adding 'Specific' (the context) to the name - which is redundant as it's already in MySpecificNamespace.
Option B
class MyClass : MyCommonNamespace.MyClass
{
}
You can see how we could get confused here, right?
Option C
The one I think is fishy:
class MyClass : MyBaseClass
{
}

I tend to add a Base suffix to the name of the base class only if it exists from a technical perspective (to share some code) and doesn't really constitute a usable class on its own (so all of these classes are abstract). These are quite rare cases, though, and should be avoided just like Helper classes.
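Roughly what I mean, as a minimal C# sketch (all names are hypothetical): the Base class is abstract, exists purely to share code, and is never used as a type by callers.

using System;

public abstract class WidgetBase
{
    // Shared plumbing every concrete widget needs.
    protected void LogUsage(string action) =>
        Console.WriteLine($"{GetType().Name}: {action}");
}

public class ButtonWidget : WidgetBase
{
    public void Click() => LogUsage("clicked");
}

public class SliderWidget : WidgetBase
{
    public void Slide(int value) => LogUsage($"slid to {value}");
}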

"All your BaseClass are belong to us."
I side with a definitive no, with a single exception. If you are writing an app to manage military installations or baseball stadiums, go for it.

I side with "no" for exactly the refactoring reason you've cited.
A class should be named after what it logically represents, and nothing but the Object class is really really Base. Metaphysics ftw :)
re: Option B, there is nothing confusing about
namespace MySpecificNamespace
{
    class MyClass : MyCommonNamespace.MyClass
    {
    }
}

Classes that have the same name as their parent classes bug me to no end. In Java, java.sql.Date extends java.util.Date. This is very annoying because you have to specify the exact class you want to import or else specify the classname fully (including package/namespace).
Personally I prefer to name things as they are; if a Base or Abstract class exists only to provide a partial implementation of something, and doesn't represent the interface for that thing, it is often acceptable to put the word Abstract or Base in its name. However, if that class represents the interface as well, then you should just name it after what it does.
For example, in Java, we have the Connection interface (for DB connections). It's just called Connection, not IConnection. You use it like this:
Connection con = getConnectionFromSomewhere();
If you are making a JDBC driver and need to implement Connection, you could have a ConnectionBase or AbstractConnection, which is the lower layer of the implementation detail of your particular Connection. You might have
abstract class AbstractConnection implements Connection
class OracleConnection extends AbstractConnection
or something like that. The clients of your code, however, never see AbstractConnection nor do they see OracleConnection, they only see Connection.
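To make that layering concrete, here's a rough C# sketch of the same idea (the type names are illustrative only, not the actual JDBC or ADO.NET types):

using System;

// The public contract that client code programs against.
public interface Connection
{
    void Open();
    void Close();
}

// Shared plumbing lives here; the name says what it is, and clients never see it.
public abstract class AbstractConnection : Connection
{
    public void Open()
    {
        Console.WriteLine("opening " + GetType().Name);
        OpenCore();                       // vendor-specific step supplied by subclasses
    }

    public void Close() => Console.WriteLine("closed");

    protected abstract void OpenCore();
}

// One concrete driver; callers still only ever hold a Connection reference.
public class OracleConnection : AbstractConnection
{
    protected override void OpenCore() => Console.WriteLine("Oracle handshake");
}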
So, in general, classes that are meant to be generally useful should be named after what they represent/do, whereas classes that are helpers for code maintenance/organization can be named after what they are.
*ps I hate naming interfaces with I. Do people name all their classes with C? It's 2009! Your IDE can tell you what type of object that is, in the odd case when it even matters whether it's an interface or a class.

I think it's worth wiki-fying this question.
FWIW, I agree. I usually try to find a more "generic" term for my base classes. So if I have a "Customer" class and need to introduce a new base class for it, I'd go with "Contact" or something rather than "CustomerBase".

I too would suggest No, but not cast in stone...
Following the OO mantra, your naming system should best represent the underlying objects that the code is supposed to be encapsulating. There should really be no 'meta language' in there related to the syntactical makeup of your chosen programming language.
That said, if your object is truly abstract and you really don't see it changing anytime soon, there is an argument that adding 'Base' helps with general readability.
As with most things, there's no blanket right and wrong answer - it depends on the overall layout of your codebase, what this specific code is supposed to be representing and the in-house style that you have. Just try to be consistent.
Is base used anywhere else?

In Java I tend to provide a base implementation of an interface Foo in an abstract class FooBase. I think that is perfectly ok, and makes the connection to the interface very clear and regular.
Without the interface I would call the abstract base class Foo.
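As a C# rendering of that pattern (the comment above is about Java, and Foo/FooBase are just placeholder names):

// The interface defines the contract.
public interface Foo
{
    void Bar();
}

// Base implementation of the interface; the -Base suffix points straight back at Foo.
public abstract class FooBase : Foo
{
    public void Bar()
    {
        // shared behaviour common to all implementations...
        BarCore();
    }

    // the part each concrete implementation supplies
    protected abstract void BarCore();
}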

I also side with the no camp... place a Base in there today and in 6 months someone will whack a MyDerivedClass class into your code base while you're not looking.

"Abstract" prefix maybe?

I usually go with IFoo for the interface and AbstractFoo for the skeletal implementation, which is a mix of .NET and Java conventions.

I think it should probably be avoided where possible in favour of an identifier that actually describes what it is!
This question is difficult to answer because it's abstract. I might, for example, consider calling the base of MyClassA and MyClassB, "MyClass".

I agree, AbstractFoo is a decent solution. I try to pick names that don't need additional adjectives. I would shy away from using Base.

It seems like any principled answer will end up being no... However, comma, when I'm looking at code I'm not particularly familiar with, which happens a lot in Python (where the source code is sometimes the only dependable documentation), I find it really helpful when a class has Base in its name. Python differs from other OO languages in that a class isn't defined with an "abstract" or "interface" specifier, though. For naming, I like to ask myself "if I had never seen this code before, which way would make it easier for me to understand it?" (Then, depending on how lazy I'm feeling, I name it accordingly).

Related

OO principle: c#: design to interface and not concrete classes

I have some questions about the effects of using concrete classes and interfaces.
Say some chunk of code (call it chunkCode) uses concrete class A. Would I have to re-compile chunkCode if:
I add some new public methods to A? If so, isn't that a bit strange? After all, I still provide the interface chunkCode relies on. (Or do I have to re-compile because chunkCode may never otherwise know that this is true and that I haven't omitted some API?)
I add some new private methods to A?
I add a new public field to A?
I add a new private field to A?
Factory Design Pattern:
The main code doesn't care what the concrete type of the object is. It relies only on the API. But what would you do if there are a few methods which are relevant to only one concrete type? This type implements the interface but adds some more public methods. Would you use some if (A is type1) statements (or the like) in the main code?
Thanks for any clarification
1) Compiling is not an activity in OO. It is a detail of specific OO implementations. If you want an answer for a specific implementation (e.g. Java), then you need to clarify.
In general, some would say that adding to an interface is not considered a breaking change, whereas others say you cannot change an interface once it is published, and you have to create a new interface.
Edit: You specified C#, so check out this question regarding breaking changes in .Net. I don't want to do that answer a disservice, so I won't try to replicate it here.
2) People often hack their designs to do this, but it is a sign that you have a poor design.
Good alternatives:
Create a method in your interface that allows you to invoke the custom behavior, but not be required to know what that behavior is.
Create an additional interface (and a new factory) that supports the new methods. The new interface does not have to inherit the old interface, but it can if it makes sense (if an is-a relationship can be expressed between the interfaces).
If your language supports it, use the Abstract Factory pattern, and take advantage of Covariant Return Types in the concrete factory. If you need a specific derived type, accept a concrete factory instead of an abstract one.
Bad alternatives (anti-patterns):
Adding a method to the interface that does nothing in other derived classes.
Throwing an exception in a method that doesn't make sense for your derived class.
Adding query methods to the interface that tell the user if they can call a certain method.
Unless the method name is generic enough that the user wouldn't expect it to do anything (e.g. DoExtraProcessing), then adding a method that is no-op in most derived classes breaks the contract defined by that interface.
E.g.: Someone invoking bird.Fly() would expect it to actually do something. We know that chickens can't fly. So either a Chicken isn't a Bird, or Birds don't Fly.
Adding query methods is a poor work-around for this. E.g. Adding a boolean CanFly() method or property in your interface. So is throwing an exception. Neither of them get around the fact that the type simply isn't substitutable. Check out the Liskov Substitution Principle (LSP).
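A minimal C# sketch of that last point, with invented types: give the capability its own interface instead of a Fly() that is a no-op or throws.

public interface IBird
{
    void Eat();
}

public interface IFlyingBird : IBird
{
    void Fly();
}

public class Sparrow : IFlyingBird
{
    public void Eat() { /* peck */ }
    public void Fly() { /* flap */ }
}

public class Chicken : IBird    // a bird, but it never claims it can fly
{
    public void Eat() { /* peck */ }
}

public static class Migration
{
    // Can't be handed a Chicken by mistake; the compiler enforces the capability.
    public static void Send(IFlyingBird bird) => bird.Fly();
}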
For your first question the answer is NO for all your points. If it were otherwise, backward compatibility would not make any sense. You have to recompile chunkCode only if you break the API, that is, remove some functionality that chunkCode is using, change calling conventions, modify the number of parameters - those sorts of things == breaking changes.
For the second I usually, but only if I really have to, use dynamic_cast in those situations.
Note my answer is valid in the context of C++; I just saw the question is language-agnostic (kind of tired at this hour; I'll remove the answer if it offends anybody).
Question 1: Depends on what language you are talking about. It's always safer to recompile either way, though, mostly because chunkCode does not know what actually exists inside A; recompiling refreshes its memory. But it should work in Java without recompiling.
Question 2: No. The entire point of writing a Factory is to get rid of if (A is type1). These if statements are terrible from a maintenance perspective.
A Factory is designed to build objects of a similar type. If you find yourself needing such a statement, then that object is probably not of a similar type to the rest of the classes. If you are sure it is of a similar type with a similar interface, I would declare an extra function on all the concrete classes and give it a real implementation only on this one.
Ideally, all these concrete classes should have a common abstract base class or an interface that defines the API. Nothing other than what is declared in this interface should be expected to be called anywhere in the code, unless you are writing functions that take this specific class.

Doubt in using the interface?

Whenever I hear about interfaces I have the following doubt.
I have the following interface:
interface Imammals
{
    void walk();
    void eat();
    void run();
}
and I have two classes, Human and Cat, that implement this interface.
Anyway, the functionality of the methods is going to be different in the two classes.
For example, with walk() the functionality differs, as a cat uses four legs and a human uses two.
Then why do I need a common interface which ONLY declares these methods? Is my design here faulty?
If the functionality of the methods were going to be the same in both classes, I could go for class-based inheritance, where the parent implements the complete functionality and the child inherits and uses the parent class methods.
But here, do the interfaces just help us consolidate the method declarations, or is there anything more to them?
EDIT: walking(), eating(), running() were changed to walk(), eat(), run() and mammals was changed to Imammals.
In your scenario, either type-inheritance or interface-implementation would work - but interface based abstraction allows types outside of your existing type model to provide the functionality. It could be a mock object, or it could be some kind of super killer robot, that can walk run and eat but isn't really a mammal (so having it inherit from a Mammal class could be confusing or just impossible).
In particular, interfaces allow us to express this relationship neatly while avoiding the subtleties that come from C# having single (type) inheritance.
Using the interface you can have the following:
public void walkMyAnimal(Animal animal) {
    animal.walk();
}
without the need to know what animal exactly is passed.
An interface allows you to define behavior for implementing classes, so if you have a Donkey in the future you simply implement this interface and can be sure that your donkey will walk, run and eat.
Also, you can use composition instead of a concrete implementation if some of your objects have common behaviour.
Read a bit about the Strategy pattern; I think that will help.
One big advantage of interfaces is that even in languages like Java and C# where multiple inheritance is not allowed, a class can take on more than one interface. Something can be both Closable, for instance, and a List, but could not inherit from both (hypothetical) abstract base classes AbstractClosable and AbstractList.
It is also suitable for cases where you are writing a library or a plugin interface and want to provide a way for your code to use objects provided by library users or plugin writers, but you don't want (nor should you have) any say in the implementation. Think of the Listener interfaces in Java, for instance. Without those, there would be no possibility of an event model, since Java doesn't support callbacks.
In general, interfaces are good for cases where you want objects that have particular functionality, but the way that functionality is implemented can vary widely, and might not be the only thing a class does.
The reason you want an interface is to be able to treat them all alike when commanding them.
Whoever calls walking() (which is a rather odd name btw, it should probably be walk()) is just interested in telling your animal to do just that. The actual implementation will vary but that is not something the caller would care about.
Well, sometimes you'd want to be able to do something to "anything capable of running" without necessarily knowing at design time whether you're talking about a human or a cat or whatever. For instance, imagine a function mammal raceWinner(mammal m1, mammal m2){...}
to calculate which mammal would win in a race. To determine who wins, perhaps the function needs to call m1.running() and m2.running(). Of course, the mammals we pass in will really be cats or humans or whatever, and this class supplies the actual implementation of running(). But all raceWinner needs to know is that they have a running() method with the expected signature.
If we only defined running() on cat and human, we couldn't call m1.running() (because the compiler has no guarantee that m1 has a running() method; it only knows that m1 implements mammal). So instead we'd have to implement a raceWinner(human m1, cat m2) and likewise for two humans, two cats, or any other pair of mammals we had in mind, leading to a lot more work on our part.
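Here's a rough C# version of that sketch (I've written the interface as IMammal; the int speed returned by Run() is an invented detail, purely so there is something to compare):

public interface IMammal
{
    int Run();    // returns a speed
}

public class Human : IMammal
{
    public int Run() => 20;
}

public class Cat : IMammal
{
    public int Run() => 30;
}

public static class Races
{
    // One method covers every pair of mammals; no Human/Cat-specific overloads needed.
    public static IMammal RaceWinner(IMammal m1, IMammal m2) =>
        m1.Run() >= m2.Run() ? m1 : m2;
}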
An interface provides a contract. It doesn't provide an implementation. It's good practice to interface out your classes.
Of course, walking(), eating() will have different implementation in different animals. But they all walk, run, etc. That is all the interface is saying.
You could model this using inheritance, which would allow you to give default implementations for some or all of the methods. However, interfaces are really useful for declaring a set of features that apply to many unrelated types.
To continue your example, you could imagine a type Alien, which would probably have the same methods, but would not fit in your inheritance hierarchy.
The purpose of interfaces is to tell you what a class does, not how it does it.
This is especially important for accepting things that work differently -- each printer we attach to the PC works differently, so does each scanner, so does each external drive. If all programs needed to care about how each of them worked, you would need to recompile, say, Microsoft Office, for every model of printer that comes out.
One way to develop interfaces is to define an interface and a related class that implements the interface in a common, reasonable way. Having both the interface and the class, you can use the interface when a class already derives from another class; otherwise a class can derive directly from the interface implementation.
It's not always possible, but it solves many problems.
Having a common interface also lets you work with different objects through the interface alone (collecting them into a generic list, for example).
There isn't much difference between an entirely abstract class and an interface if you only have one base type. Interfaces can't have any implementation code, but abstract classes can. In this case, abstract classes can be more flexible.
Where interfaces are really useful is that you can assign multiple interfaces to a single implementation, but you can only assign one base class.
for instance, you could have:
class Cat : IMammal, IFourLeggedAnimal
{
}
class Human: IMammal, ITwoLeggedAnimal
{
}
Now you can treat both of them as Mammals, with a "walk()" method, or you can treat them as Four or two legged animals (not necessarily mammals).
What is really useful with an interface like mammal is that you can treat an array of objects (Humans and Cats) as of being of the same type when you want them to walk, eat or run.
For instance, if you were creating a game where you have a number of mammals on the screen saved in a collection (objects would be created dynamically, but just for example let's say 10 cats and 1 human), and just wanted them to walk on every turn, you could simply do:
foreach (mammals m in MamalsArrayList)
{
    m.walking();
}
without having to know whether any particular m is a cat or a human.
Note: I suggest you follow naming conventions and name your interfaces with "I" in front of them, so your example should be named IMammals.
Interfaces are hard to show on any particular snippet - but when you really need one you can see how useful they can be.
Of course they have other uses to (that are mentioned in other answers), I just focused on your example.
There are two issues here that are often confused. Inherited behaviour allows different 'commands' to be responded to in the same way, e.g. Man.walk() === Woman.walk(). Polymorphic behaviour allows the same 'command' to be responded to in different ways, e.g. Animal.move() for one object may be different from Animal.move() for another; the bird will choose to fly while the slug will slide.
Now, I would argue the second of these is good while the first is not. Why? Because in OOP we should be encapsulating functionality into objects, which in turn promotes code reuse and all the other nicenesses of OOP. So rather than inheriting behaviour we should delegate it out to a shared object. If you know patterns then this is what State and Strategy are doing.
The problem lies in the fact that normally when you inherit, you get both of these behaviours mixed together. I suggest that this is more trouble than it's worth and we should only be using interfaces, though sometimes we do have to make do with whatever the framework provides.
In your specific example Mammal is probably a bad interface name because it doesn't really tell me what it does and it has the potential to blowout to thousands of methods. It's better to divide interfaces into very specific cases. If you were modelling animals you might have a Moveable interface with one method, move(), to which each animal could respond by walking, running, flying, or crawling as appropriate.
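Something like this, as a sketch (I've adapted the names to C# style as IMoveable/Move(); the animal classes are invented for illustration):

public interface IMoveable
{
    void Move();
}

public class Slug : IMoveable
{
    public void Move() { /* slide */ }
}

public class Swallow : IMoveable
{
    public void Move() { /* fly */ }
}

public class Cheetah : IMoveable
{
    public void Move() { /* run */ }
}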

Are there established alternatives to ISomething / ISomethingable for interfaces?

The .NET standard of prefixing an interface name with an I seems to be becoming widespread and isn't just limited to .NET any more. I have come across a lot of Java code that uses this convention (so it wouldn't surprise me if Java used it before C# did). Also Flex uses it, and so on. The placing of an I at the start of the name smacks of Hungarian notation though and so I'm uncomfortable with using it.
So the question is, is there an alternative way of denoting that Something is an interface, rather than a class and is there any need to denote it like this anyway. Or is it a case its become a standard and so I should just accept it and stop trying to stir up "religious wars" by suggesting it be done differently?
From the Framework Design Guidelines book:
Interfaces representing roots of a hierarchy (e.g. IList) should also use nouns or noun phrases. Interfaces representing capabilities should use adjectives and adjective phrases (e.g. IComparable, IFormattable).
Also, from the annotations on interface naming:
KRZYSZTOF CWALINA: One of the few prefixes used is “I” for interfaces (as in ICollection), but that is for historical reasons. In retrospect, I think it would have been better to use regular type names. In a majority of the cases developers don’t care that something is an interface and not an abstract class, for example.
BRAD ABRAMS: On the other hand, the “I” prefix on interfaces is a clear recognition of the influence of COM (and Java) on the .NET Framework. COM popularized, even institutionalized, the notation that interfaces begin with “I.” Although we discussed diverging from this historic pattern we decided to carry forward the pattern as so many of our users were already familiar with COM.
JEFFREY RICHTER: Personally, I like the “I” prefix and I wish we had more stuff like this. Little one-character prefixes go a long way toward keeping code terse and yet descriptive. As I said earlier, I use prefixes for my private type fields because I find this very useful.
BRENT RECTOR: Note: this is really another application of Hungarian notation (though one without the disadvantages of the notation's use in variable names).
It has very much become a widely adopted standard, and while it is a form of Hungarian, as Brent states, it doesn't suffer from the disadvantages of using Hungarian notation in variable names.
I would just accept it, to be honest. I know what you mean about being a bit like Hungarian notation (or at least abuse of the same) but I think it gives sufficient value to be worth doing in this case.
With dependency injection being in vogue, often I find I end up with an interface and a single production implementation. It's handy to make them easily distinguishable just with the I prefix.
One little data point: I work with both Java and C# a fair amount, and I regularly find myself having to check which types in Java are actually interfaces, particularly around the collection framework. .NET just makes this simple. Maybe it doesn't bother other people, but it bothers me.
+1 for IFoo from me.
As a .NET programmer (for the most part), I actually prefer the Java convention of dropping the I here, for a simple reason: Often, small redesigns require the change from an interface into an abstract base class or vice versa. If you have to change the name, this might require a lot of unnecessary refactoring.
On the other hand, usage by the client should be transparent, so they shouldn't care about this type hint. Furthermore, the "-able" suffix in "Thingable" should be enough of a hint. It works well enough in Java.
/EDIT: I'd like to point out that the above reasoning had prompted me to drop the I prefix for private projects. However, upon checking one of them against the FxCop rule set, I promptly reverted to the usage of I. Consistency wins here, even though a foolish consistency is the hobgoblin of little minds.
It's all about style and readability. Prefixing interfaces with "I" is merely a naming convention and style guideline that has caught on. The compilers themselves couldn't care less.
My main assumption is that the most important thing is to maintain readability in domain part of the implementation. Therefore:
If you have one behaviour and one possible implementation, then just don't create an interface:
public class StackOverflowAnswerGenerator { }
If you have one behaviour and many possible implementations, then there is no problem and you can just drop the "I", and have:
public interface StackOverflowAnswerGenerator {}
public class StupidStackOverflowAnswerGenerator : StackOverflowAnswerGenerator {}
public class RandomStackOverflowAnswerGenerator : StackOverflowAnswerGenerator {}
public class GoogleSearchStackoverflowAnswerGenerator : StackOverflowAnswerGenerator {}
//...
The real problem comes when you have one behaviour and one possible implementation but you need an interface to describe its behaviour (for example for convenient testing, because of convention in your project, using some library/framework which enforces this, ...). Possible solutions, other from prefixing the interface are:
a) Prefix or suffix the implementation (as stated in some other answers in this topic)
b) Use a different namespace for interface:
namespace StackOverflowAnswerMachine.Interfaces
{
    public interface StackOverflowAnswerGenerator {}
}
namespace StackOverflowAnswerMachine
{
    public class StackOverflowAnswerGenerator : Interfaces.StackOverflowAnswerGenerator
    {}
}
c) Use a different namespace for implementation:
namespace StackOverflowAnswerMachine
{
    public interface StackOverflowAnswerGenerator {}
}
namespace StackOverflowAnswerMachine.Implementations
{
    public class StackOverflowAnswerGenerator : StackOverflowAnswerMachine.StackOverflowAnswerGenerator
    {}
}
Even though I think the last possibility is the cleanest, its one drawback is that although using StackOverflowAnswerMachine; gives you access to all domain objects, you must prefix all domain interfaces so they are not confused with their implementations. That may not feel very convenient, but in a clean design a class usually doesn't use many other domain objects, and mostly you need the prefix only in field declarations and constructor parameter lists. So, that is my current recommendation.
The client of domain functionality shouldn't need to know whether they're using an interface, an abstract class or a concrete class. If they need to know this, then there is some serious problem in such a project, because it has domain logic and infrastructural concerns mixed on the same abstraction layer. Therefore I recommend "a" or "c" solutions.
The coding standard for Symbian has interfaces (pure abstract C++ classes) denoted with an M rather than an I.
Otherwise, the only other way I have seen of denoting interfaces is through context.
For .NET, Microsoft's Framework Design Guidelines book absolutely recommends it, and yes, it is very much standard. I have never seen it done otherwise, and to create a new convention would only serve to confuse people.
I should add that I dislike Hungarian notation too, but this and the case of prefixing class variables with an underscore are good exceptions to me, because they make code so much more readable.
I've always thought this naming convention is a bit of a dinosaur. Nowadays IDEs are powerful enough to tell us that something is an interface. Adding that I makes the code harder to read so if you really want to have a naming convention that separates interfaces from classes I would append Impl to the name of the implementing class.
public class CustomerImpl implements Customer
You asked for an alternative, so here is one I have encountered:
Use no prefix on the interface class, but use a c or C prefix on the corresponding concrete classes. Most of your code will generally reference the interface, so why pollute it with the prefix and not the generally much less used concrete type.
This approach does introduce one inconsistency in that some concrete types will be prefixed (the ones with matching interfaces) and others will not. This may be useful since it reminds developers that an interface exists and its use should be preferred over the concrete type.
To be honest, I use the prefix on the interface, but I think it is more because I have become so accustomed and comfortable with it.

Is there a best way to handle naming fads?

In the last year and a bit of working on my team's code base I have noticed a steady progression of naming conventions.
For example, there are a lot of classes that are named to express that they are a class that helps you do something.
Here's the ones I've spotted:
MyClassUtil
MyClassFactory
MyClassHelper
MyClassManager
MyClassService
It just seems to me that over time people come up with naming conventions for relatively the same thing and so instead of having everything named in a consistent manner you wind up with a code base that has a bit of every convention. All the new stuff is named based on the latest fad naming convention and so you can pretty much tell the age of a bit of code by what convention was in fashion at the time.
What is the best way to deal with this tendency? Is it really a problem? As these naming fads come into vogue, should one use the latest fad? Should one rename all existing items with the new naming convention? Or should one just accept the variety as something that is inescapable?
They don't seem like fads... all these names hint at the purpose of the class, and those purposes are different. With programming, it's all in the name, and they should be chosen very carefully. The variety doesn't need to be escaped. The names vary because the purposes of the classes vary.
MyClassUtil
-Some utilities for working with MyClass that it didn't come with. Maybe MyClass belongs to a library you're using, but you often use some higher level functions with it and you need somewhere to put them.
MyClassFactory
-Creates instances of MyClass in an abstracted way. This allows you to write code that needs MyClass instances. It can get those new instances from a MyClassFactory. This would allow the Factory to be modified in the future to serve up different specific implementations of MyClass. Maybe under unit testing, the Factory just serves up dummy/mock MyClasses. This means a class that uses the factory can be tested without needing to change it; just change the factory, and voilà, you can isolate the class being tested (see the sketch after this list).
MyClassHelper
-Ok, I may agree, perhaps this one could be more specific. It does something to help with MyClass, but what? Maybe this is a bit similar to MyClassUtil. But probably MyClassUtil holds general functions that work with MyClass, whereas the helper is initialized with a specific instance of MyClass and can then do operations on that one instance. You need a new helper for each MyClass you want to help.
MyClassManager
-Maybe this deals with a pool of MyClass instances and stores or orchestrates them. Eg. in a CommunicationsManager, the class would handle wiring together classes that handle talking to a port or connection like ethernet or serial, and a class that deals with the comms protocol being sent over it so it can transport packets, and a class that deals with the messages in those packets.
MyClassService
-A service can do things for you, like convert a given postcode into a grid reference. Usually a service can resolve to many specific things. With the postcode example, this class might have implementations that can talk to different web sites to do the conversion.
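To make the Factory point concrete, here's a rough C# sketch (MyClass is the question's placeholder; the factory interface and Consumer class are invented for illustration):

public class MyClass { /* ... */ }

public interface IMyClassFactory
{
    MyClass Create();
}

public class DefaultMyClassFactory : IMyClassFactory
{
    public MyClass Create() => new MyClass();
}

// Code that consumes the factory never names a concrete source of MyClass instances,
// so a test can hand it a factory that serves up a dummy/mock instead.
public class Consumer
{
    private readonly IMyClassFactory factory;

    public Consumer(IMyClassFactory factory) => this.factory = factory;

    public void DoWork()
    {
        MyClass instance = factory.Create();
        // ... use instance ...
    }
}

Swapping DefaultMyClassFactory for a test factory is then a change in one place, not in every consumer.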
All of the names of classes you've given above indicate to me a striking departure from object-oriented principles. There's no way of telling what "MyClassUtil" or "MyClassService" does. It could be anything. Class naming should be specific, and should relay clearly the actual function of the class. None of these do. The best way to deal with this tendency is to brush up on object oriented programming skills and name the classes accordingly.
Now, it could be that these examples point out the function, within the application architecture, that these classes represent, and your use of "MyClass" is simply a placeholder for something more definitive at runtime, in which case, I wouldn't view these as naming fads, but rather as descriptive indicators of the function of the class itself, with a loose hint of the application's underlying architecture.
If this is pervasive, the team needs to spend some time studying OO design: reading the source code to well-respected OO frameworks, books on design patterns or books such as Evans "Domain Driven Design".
"Util" and "Manager" are often symptoms of poor design - "code smells". So is "Helper" outside of special contexts (Rails apps) where it's well entrenched.
"Factory" and "Service" have precise technical meanings, you can check the code to see if it conforms to those design patterns.
The general remedy is to sit down with the team, and have an explicit discussion about what benefits you're expecting from these naming schemes, what makes sense and what doesn't, and then over the next few months apply refactoring techniques to phase out the names you've all decided are code smells.
Naming is important. It shouldn't be taken lightly, nor is it a subjective matter. True, there is often more than one correct answer to a given naming issue. However, there are seldom many answers consistent with previous choices, which is key.
Renaming the names to better ones and refactoring the code so that each class has a clear responsibility, is recommended. To know what kind of names to use, read Tim Ottinger's article about Meaningful Names.
When a class does only one thing, then giving it a descriptive name is usually easy. Words such as "manager" are vague and may indicate that the class is responsible for doing so many unrelated things, that no simple name is able to describe what the class does. If you can know what the class does just by looking at the name of the class, then the class has a good name.
I don't really see how Factory or Service fit in to a particular fad...
Factory is a design pattern and if the class really is a factory then it's a perfectly appropriate name.
If a class is a Windows service what's wrong with calling it service?
There isn't a problem unless you find that performing all the rename refactors is too costly even though you really want to do them.
Why not use a static analysis tool to help enforce a set of style and consistency rules?
If you're in the .NET world Microsoft provides a tool called StyleCop
In the classname examples you give does "MyClass" stand for an actual class name, so that you are really seeing names like "PersonnelRecordUtil" or "GraphNodeFactory"? MyClassFactory is a really bad actual name for a class.

Why should you prevent a class from being subclassed?

What can be reasons to prevent a class from being inherited? (e.g. using sealed on a C# class)
Right now I can't think of any.
Because writing classes to be substitutably extended is damn hard and requires you to make accurate predictions of how future users will want to extend what you've written.
Sealing your class forces them to use composition, which is much more robust.
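A minimal sketch of what "forcing composition" looks like, with made-up names: the sealed class is wrapped and delegated to rather than subclassed.

public sealed class PriceCalculator
{
    public decimal Calculate(decimal amount) => amount * 1.2m;
}

public class DiscountingCalculator
{
    private readonly PriceCalculator inner = new PriceCalculator();

    // Reuse by delegation; PriceCalculator's invariants stay intact.
    public decimal Calculate(decimal amount, decimal discount) =>
        inner.Calculate(amount) - discount;
}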
How about if you are not sure about the interface yet and don't want any other code depending on the present interface? [That's off the top of my head, but I'd be interested in other reasons as well!]
Edit:
A bit of googling gave the following:
http://codebetter.com/blogs/patricksmacchia/archive/2008/01/05/rambling-on-the-sealed-keyword.aspx
Quoting:
There are three reasons why a sealed class is better than an unsealed class:
Versioning: When a class is originally sealed, it can change to unsealed in the future without breaking compatibility. (…)
Performance: (…) if the JIT compiler sees a call to a virtual method using a sealed type, the JIT compiler can produce more efficient code by calling the method non-virtually. (…)
Security and Predictability: A class must protect its own state and not allow itself to ever become corrupted. When a class is unsealed, a derived class can access and manipulate the base class’s state if any data fields or methods that internally manipulate fields are accessible and not private.(…)
I want to give you this message from "Code Complete":
Inheritance - subclasses - tends to work against the primary technical imperative you have as a programmer, which is to manage complexity. For the sake of controlling complexity, you should maintain a heavy bias against inheritance.
The only legitimate use of inheritance is to define a particular case of a base class, for example, when you inherit from Shape to derive Circle. To check this, look at the relation in the opposite direction: is a Shape a generalization of Circle? If the answer is yes, then it is OK to use inheritance.
So if you have a class for which there cannot be any particular cases that specialize its behavior, it should be sealed.
Also, due to the LSP (Liskov Substitution Principle), one can use a derived class where the base class is expected, and this is actually where inheritance has the greatest impact: code using the base class may be given an inherited class and it still has to work as expected. In order to protect external code when there is no obvious need for subclasses, you seal the class, and its clients can rely on its behavior not being changed. Otherwise, external code needs to be explicitly designed to expect possible changes of behavior in subclasses.
A more concrete example would be the Singleton pattern. You need to seal a singleton to ensure that no one can break its "singletonness".
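For example, a bare-bones sealed-singleton sketch (names invented): sealing guarantees no subclass can introduce a second way of constructing instances.

public sealed class Configuration
{
    private static readonly Configuration instance = new Configuration();

    private Configuration() { }            // no construction from outside

    public static Configuration Instance => instance;
}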
This may not apply to your code, but a lot of classes within the .NET framework are sealed purposely so that no one tries to create a sub-class.
There are certain situations where the internals are complex and require certain things to be controlled very specifically so the designer decided no one should inherit the class so that no one accidentally breaks functionality by using something in the wrong way.
#jjnguy
Another user may want to re-use your code by sub-classing your class. I don't see a reason to stop this.
If they want to use the functionality of my class they can achieve that with containment, and they will have much less brittle code as a result.
Composition seems to be often overlooked; all too often people want to jump on the inheritance bandwagon. They should not! Substitutability is difficult. Default to composition; you'll thank me in the long run.
I am in agreement with jjnguy... I think the reasons to seal a class are few and far between. Quite the contrary, I have been in the situation more than once where I want to extend a class, but couldn't because it was sealed.
As a perfect example, I was recently creating a small package (Java, not C#, but same principles) to wrap functionality around the memcached tool. I wanted an interface so in tests I could mock away the memcached client API I was using, and also so we could switch clients if the need arose (there are 2 clients listed on the memcached homepage). Additionally, I wanted to have the opportunity to replace the functionality altogether if the need or desire arose (such as if the memcached servers are down for some reason, we could potentially hot swap with a local cache implementation instead).
I exposed a minimal interface to interact with the client API, and it would have been awesome to extend the client API class and then just add an implements clause with my new interface. The methods that I had in the interface that matched the actual interface would then need no further details and so I wouldn't have to explicitly implement them. However, the class was sealed, so I had to instead proxy calls to an internal reference to this class. The result: more work and a lot more code for no real good reason.
That said, I think there are potential times when you might want to make a class sealed... and the best thing I can think of is an API that you will invoke directly, but allow clients to implement. For example, a game where you can program against the game... if your classes were not sealed, then the players who are adding features could potentially exploit the API to their advantage. This is a very narrow case though, and I think any time you have full control over the codebase, there really is little if any reason to make a class sealed.
This is one reason I really like the Ruby programming language... even the core classes are open, not just to extend but to ADD AND CHANGE functionality dynamically, TO THE CLASS ITSELF! It's called monkeypatching and can be a nightmare if abused, but it's damn fun to play with!
From an object-oriented perspective, sealing a class clearly documents the author's intent without the need for comments. When I seal a class I am trying to say that this class was designed to encapsulate some specific piece of knowledge or some specific service. It was not meant to be enhanced or subclassed further.
This goes well with the Template Method design pattern. I have an interface that says "I perform this service." I then have a class that implements that interface. But, what if performing that service relies on context that the base class doesn't know about (and shouldn't know about)? What happens is that the base class provides virtual methods, which are either protected or private, and these virtual methods are the hooks for subclasses to provide the piece of information or action that the base class does not know and cannot know. Meanwhile, the base class can contain code that is common for all the child classes. These subclasses would be sealed because they are meant to accomplish that one and only one concrete implementation of the service.
Can you make the argument that these subclasses should be further subclassed to enhance them? I would say no because if that subclass couldn't get the job done in the first place then it should never have derived from the base class. If you don't like it then you have the original interface, go write your own implementation class.
Sealing these subclasses also discourages deep levels of inheritence, which works well for GUI frameworks but works poorly for business logic layers.
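A rough C# sketch of that shape, with invented names:

using System;

public interface IReportService
{
    void Run();
}

public abstract class ReportServiceBase : IReportService
{
    // The template: the steps common to every report live here.
    public void Run()
    {
        Console.WriteLine("opening output");
        WriteBody();                        // hook for the piece the base class cannot know
        Console.WriteLine("closing output");
    }

    protected abstract void WriteBody();
}

// Sealed: this is one concrete realisation of the service, not a new base to build on.
public sealed class SalesReportService : ReportServiceBase
{
    protected override void WriteBody() => Console.WriteLine("sales figures");
}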
Because you always want to be handed a reference to the class and not to a derived one for various reasons:
i. invariants that you have in some other part of your code
ii. security
etc
Also, because it's a safe bet with regards to backward compatibility - you'll never be able to close that class for inheritance if it's released unsealed.
Or maybe you didn't have enough time to test the interface that the class exposes to be sure that you can allow others to inherit from it.
Or maybe there's no point (that you see now) in having a subclass.
Or you don't want bug reports when people try to subclass and don't manage to get all the nitty-gritty details - cut support costs.
Sometimes your class interface just isn't meant to be inherited. The public interface just isn't virtual, and while someone could override the functionality that's in place, it would just be wrong. Yes, in general they shouldn't override the public interface, but you can ensure that they don't by making the class non-inheritable.
The examples I can think of right now are customized container classes with deep clones in .NET. If you inherit from them you lose the deep clone ability. [I'm kind of fuzzy on this example; it's been a while since I worked with ICloneable.] If you have a true singleton class, you probably don't want inherited forms of it around, and a data persistence layer is not normally a place where you want a lot of inheritance.
Not everything that's important in a class is asserted easily in code. There can be semantics and relationships present that are easily broken by inheriting and overriding methods. Overriding one method at a time is an easy way to do this. You design a class/object as a single meaningful entity and then someone comes along and thinks if a method or two were 'better' it would do no harm. That may or may not be true. Maybe you can correctly separate all methods between private and not private or virtual and not virtual but that still may not be enough. Demanding inheritance of all classes also puts a huge additional burden on the original developer to foresee all the ways an inheriting class could screw things up.
I don't know of a perfect solution. I'm sympathetic to preventing inheritance but that's also a problem because it hinders unit testing.
I exposed a minimal interface to interact with the client API, and it would have been awesome to extend the client API class and then just add an implements clause with my new interface. The methods that I had in the interface that matched the actual interface would then need no further details and so I wouldn't have to explicitly implement them. However, the class was sealed, so I had to instead proxy calls to an internal reference to this class. The result: more work and a lot more code for no real good reason.
Well, there is a reason: your code is now somewhat insulated from changes to the memcached interface.
Performance: (…) if the JIT compiler sees a call to a virtual method using a sealed type, the JIT compiler can produce more efficient code by calling the method non-virtually. (…)
That's a great reason indeed. Thus, for performance-critical classes, sealed and friends make sense.
All the other reasons I've seen mentioned so far boil down to "nobody touches my class!". If you're worried someone might misunderstand its internals, you did a poor job documenting it. You can't possibly know that there's nothing useful to add to your class, or that you already know every imaginable use case for it. Even if you're right and the other developer shouldn't have used your class to solve their problem, using a keyword isn't a great way of preventing such a mistake. Documentation is. If they ignore the documentation, their loss.
Most of the answers (when abstracted) state that sealed/finalized classes are a tool to protect other programmers from potential mistakes. There is a blurry line between meaningful protection and pointless restriction. But as long as the programmer is the one who is expected to understand the program, I see hardly any reason to restrict him from reusing parts of a class. Most of you talk about classes. But it's all about objects!
In his first post, DrPizza claims that designing an inheritable class means anticipating possible extensions. Do I get it right that you think a class should be inheritable only if it's likely to be extended well? It looks as if you are used to designing software starting from the most abstract classes. Allow me a brief explanation of how I think when designing:
Starting from the very concrete objects, I find characteristics and [thus] functionality that they have in common, and I abstract it into a superclass of those particular objects. This is a way to reduce code duplication.
Unless I am developing some specific product such as a framework, I should care about my code, not others' (hypothetical) code. The fact that others might find it useful to reuse my code is a nice bonus, not my primary goal. If they decide to do so, it's their responsibility to ensure the validity of extensions. This applies team-wide. Up-front design is crucial to productivity.
Getting back to my idea: your objects should primarily serve your purposes, not some possible shoulda/woulda/coulda functionality of their subtypes. Your goal is to solve the given problem. Object-oriented languages use the fact that many problems (or more likely their subproblems) are similar, and therefore existing code can be used to accelerate further development.
Sealing a class forces people who could possibly take advantage of existing code WITHOUT ACTUALLY MODIFYING YOUR PRODUCT to reinvent the wheel. (This is a crucial idea of my thesis: Inheriting a class doesn't modify it! Which seems quite pedestrian and obvious, but it's being commonly ignored).
People are often scared that their "open" classes will be twisted into something that cannot substitute for its ancestors. So what? Why should you care? No tool can prevent a bad programmer from creating bad software!
I'm not trying to present inheritable classes as the ultimately correct way of designing; consider this more an explanation of my inclination toward inheritable classes. That's the beauty of programming - a virtually infinite set of correct solutions, each with its own pros and cons. Your comments and arguments are welcome.
And finally, my answer to the original question: I'd finalize a class to let others know that I consider the class a leaf of the hierarchical class tree and I see absolutely no possibility that it could become a parent node. (And if anyone thinks that it actually could, then either I was wrong or they don't get me).