Explanation of naming conventions for classes, methods and variables - naming-conventions

I'm currently in University and they're pretty particular about following their standards.
They've told me this:
All classes must start with a capital letter.
Correct:
public class MyClass {}
Incorrect:
public class myClass {}
public class _myClass {}

All methods must start with a lowercase letter.
Correct:
public void doSomething() {}
Incorrect:
public void DoSomething() {}
public void _doSomething() {}

All variables must start with a lowercase letter.
Correct:
string myString;
Incorrect:
string MyString;
string _myString;
Yet in my last year of programming, I've found that people use quite different rules. It wouldn't matter if it were just a few people using different rules, but I see these other practices being used almost everywhere.
So I just wanted to know what the reasoning behind the above standards is and why some of these other standards are being used: (are they wrong/old standards?)
Most methods I've seen start with a capital letter rather than a lowercase one; pretty much any of Microsoft's methods I've been using from their namespaces do this. This is probably the most common one I've seen that I don't understand.
A lot of people use _ for class variables.
I've seen capitals on variables, e.g. string MyString;
I know I've missed a few as well, if you can think of any that you could add in and give an explanation for that would be helpful. I know everyone develops their own coding styles, but many of these practices have reasons behind them and I would rather stick with what makes the most sense.
Thanks,
Matt

There is no compelling reason to choose one coding style over another.
The most important thing is to agree on a coding style with the people you are working with, and to help you all agree on one, your professor has told you which style to use.
Most of the time it is just a point of view, so simply follow your professor's coding style while you have to code at university.

standards are arbitrary, like which side of the road to drive on; just do it like they tell you to do it ;-)

Most people are talking about naming convention style, but there are other things to consider when approaching naming conventions, such as what you actually name a routine.
Routine (method, function, and procedure) names should typically be in the form of a strong verb + object, regardless of how you format them. For example:
paginateResponse()
or
empty_input_buffer()
as (respectively) opposed to
dealWithResponse()
or
process_input_buffer()
Both "dealWith" and "process" are verbs, but they are ambiguous and cause any other programmers working with your code in the future to have to consult the actual routine definition to determine what it really does.
"Strong" verbs, on the other hand, as shown in the first two examples, are much more powerful in their descriptive power and really pin down what the routine is doing.
This makes your code easier to read as it is self-documenting and leads to higher levels of cohesion.
Also, as a personal point of style, I try to avoid at all costs using "my" in any name.

Standards are only standards if they are followed, and every company or institution has their own standards. It is one of the worst parts of programming. :D
Speaking specifically about the leading _: from my experience, this is mostly used on variables that are declared private within a class. They are usually coupled with a method or property to retrieve them that has the same name without the leading _.
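In C#, for example, that pattern often looks something like the following (the class and member names here are purely illustrative):

public class Person
{
    // Private field, marked with the leading underscore.
    private string _name;

    // Public accessor with the same name, minus the underscore.
    public string Name
    {
        get { return _name; }
    }

    public Person(string name)
    {
        _name = name;
    }
}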

I am trying to follow the rules from Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries by Krzysztof Cwalina and Brad Abrams
Guidelines in this book are presented in four major forms: Do, Consider, Avoid, and Do not. These directives help focus attention on practices that should always be used, those that should generally be used, those that should rarely be used, and those that should never be used. Every guideline includes a discussion of its applicability, and most include a code example to help illuminate the dialogue.
Also, you can use FxCop to check your compliance with those rules.

Standards help with readability, and therefore improve maintainability. (because when you can read the code faster, easier and more accurately, you can debug and repair it, or enhance it, in less time and with less effort.)
They have no effect on reliability or availability, because the computer doesn't care what the variables are named or how the source code is formatted.
If your code is well-organized and readable, you have achieved the objective, regardless of whether or not it conforms exactly to anyone else's "standard".
This says nothing, of course, about how to handle the environment where "standards" are high on someone's list of developer evaluation tools, or management metrics...

I see logic behind capitalisation of classes and variables; it means you can do things like
Banana banana; // Makes a new Banana called banana
I've been learning Qt recently, and they follow your conventions to the letter. I wouldn't ever follow Microsoft's naming conventions!

The standards I've seen echo what's in the Framework Design Guidelines. In the examples you've stated above, I don't see you distinguishing between visibility (public/private).
For example:
Public facing methods should be PascalCase: public void MyMethod() ...
Parameters to methods should be camelCase: public void MyMethod(string myParameter) ...
Fields, which should always be private, should be camelCase. Some prefer the underscore prefix (I do) to distinguish them from method parameters.
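A small sketch pulling those three rules together (the class and its members are invented for illustration):

public class OrderProcessor
{
    // Private field: camelCase, with a leading underscore to set it apart from parameters.
    private readonly string _connectionString;

    // Public constructor and method names: PascalCase; parameters: camelCase.
    public OrderProcessor(string connectionString)
    {
        _connectionString = connectionString;
    }

    public void ProcessOrder(int orderId)
    {
        // ...
    }
}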
The best bet on standards is to have your team agree upon conventions up front when the project kicks off; you'll find everything much more consistent.

Coding styles are based on personal preferences and to a large extent the features of the language that you're using.
My personal take is that it's more important to be consistent with a convention than to pick the "right" one. People can be dogmatic about their preferred style, and things can often devolve into a religious war.
All classes must start with a capital letter - This goes hand-in-hand with variable naming and helps prevent confusion that would arise if you had both classes and variables named with the same rules. My preference is a capital letter because I'm used to it and it follows the guidelines for my preferred language (C#).
All methods must start with a lowercase letter - the same reasoning applies, although I start my methods with an uppercase character (as per the C# guidelines).
All variables must start with a lowercase letter - this, I believe, depends on your language's scoping features. People often prefix variables (usually with an underscore or a character like "g") to indicate a variable's scope ("g" might mean "global"). This can help prevent confusion where variables have the same names in different scopes. My C#-driven preference: all variables start with a lowercase letter, and I use "this." to refer to a class-level field of the same name where scope is ambiguous (this usually only comes up in a class's constructor).
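Here is roughly what that "this." disambiguation looks like in a constructor (illustrative names):

public class Customer
{
    // Field starts with a lowercase letter, no prefix.
    private string name;

    public Customer(string name)
    {
        // "this." refers to the field; the bare name refers to the parameter.
        this.name = name;
    }
}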
I can't let the third rule go by without mentioning Hungarian notation (which is grossly misused and misunderstood). Joel has a great article that helped me understand it better.

In addition to the main point (any specific standard is essentially arbitrary, but it's important to have some agreed-upon standard), I'd also add that some standards are ubiquitous enough to have achieved the status of the "correct" way to do things.
For example, in Java, class names in professional code are always in CamelCase. I'll qualify the "always" by saying that your code will compile if you break the standard, and you may occasionally find some open source projects that break the convention as well, but I believe most people would take that as a sign that the author is not too familiar with the language. Most of your professor's guidelines are fairly standard (for Java, in any case). Being radically different here, apart from annoying your professor, will probably irritate total strangers ;)
It's interesting to me that some languages seem to have taken this standardization to heart, and enforce capitalization to have specific meaning (e.g. Haskell).

The rules you're citing are those used pretty universally in the Java world.
Are you doing Java code at university? If not, it may be that they were previously teaching Java, then switched to C# but kept the naming conventions.

Related

Objective-C class naming convention vs Uncle Bob

In Chapter 2: Meaningful Names Uncle Bob writes:
Don't Add Gratuitous Context
In an imaginary application called "Gas Station Deluxe," it is a bad idea to prefix every class with GSD. Frankly, you are working against your tools. You type G and then press the completion key and are rewarded with a mile-long list of every class in your system.
Actually, that's what I discovered during my first days with Objective-C, a bit more than a year ago. After Java it was quite disappointing, but I thought I was the only one annoyed by it :)
I understand that the "Clean Code" book refers to Java most of the time, and Java has namespaces (packages), unlike Objective-C.
Do you use a 2-3 letter prefix in your classes if you're building an app, not a library?
What do you think: is it bad language design, a language "feature", or was Uncle Bob simply not right here?
Perhaps the key word here is gratuitous. In Objective-C, prefixes serve the important purpose of reducing the chance of name collisions. In other languages like Java and C++, the existence of support for namespaces makes the use of prefixes gratuitous (and a violation of the oft-cited DRY principle). In Objective-C, however, prefixes are meaningful, useful, and not gratuitous.
I was tempted to close this question, but I don't think I've seen a similar one asked before and it's a valid question. Here are my rather disorganized thoughts on the matter.
Many languages have a feature called namespaces, where the "fully qualified" class name is prefixed by a hierarchical series of names. For example, the String class in Java is, properly, java.lang.String, and a custom class is properly com.whatever.foobar.MyClass.
Unfortunately, namespaces have never been added to Objective-C, which means that Objective-C symbols (class names, protocol names, and a few various other types) cannot be placed in a namespace even when using Objective-C++ (which has a namespace feature for functions, constants, structures, etc.)
The only solution to prevent symbol collisions in shared code, then, is to use some form of name mangling to make your symbol names unique. In Objective-C, the convention is to use a prefix of two characters (sometimes the number varies) for all your classes.
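For contrast, this is roughly how a language with namespaces (C# here, analogous to the Java packages mentioned above) avoids the collision that Objective-C prefixes work around; the namespace and class names below are invented:

namespace ChatKit
{
    public class Message { }     // fully qualified: ChatKit.Message
}

namespace Billing
{
    public class Message { }     // fully qualified: Billing.Message - no clash with ChatKit.Message
}

public class Demo
{
    public static void Main()
    {
        // Client code says which one it means by qualifying the name.
        var chatMessage = new ChatKit.Message();
        var invoiceMessage = new Billing.Message();
    }
}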
This Uncle Bob fellow is a twit for telling you not to do this, because not only can you end up with a program that doesn't compile, you'll also lose whatever namespacing benefit prefixes still offer. Does your app use plugins? You need to prefix. Does your app have a public API? You need to prefix.
In theory, code within a single application that never touches the outside world can do without prefixes, but screw it: keep coding cleanly, and add a prefix even there. It'll save you grief later.
Personally, I almost never use prefixes. The only exceptions are classes that are somehow connected to each other and belong together as a group.
An example:
A client app for a chat. Let's call that chat ExampleChat.
Then I'd use ECMessage, ECUser, ECRoom, etc., so it's easy to see which classes belong to it.
Or if I make some custom cells for a UITableView, I'd use prefixes to keep them all close to each other and avoid hunting for them in a "mile-long list". Example:
ECTextMessageCell, ECSoundMessageCell, ECUploadMessageCell, ECJoinOrLeaveMessageCell, etc.
That's my personal opinion, which may not be the best one, but it's the easiest for me.
Hope it helps
Well, if you do not have namespaces, name conflicts are likely to occur. You can see that a lot of C libraries use some kind of prefix, so I guess there are good reasons to have those prefixes and other reasons not to use them. But how big a problem would it really be to change code completion either to ignore the prefix or to require typing three letters instead of just one?
So in the end it seems to me a matter of taste. I guess it is more important to have well-structured classes with prefixes than a mess of classes without them.
It has nothing to do with bad language design, IMHO. There was a time when software was not everywhere, so why would one spend extra effort on namespaces? And, as we can see, languages without namespaces are still in use today.
I would say the world is not black or white. I do my programming in Java, with packages, and yes, it is annoying to have a prefix on each class, just as it is annoying and arguable to start interfaces with an I (as .NET does).
Sometimes it does annoy me in Objective-C, but the practice has some legitimacy when your language has no packages, since it lets you 'build' artificial groups of classes such as 'NS', 'UI', 'MK' and so on in Objective-C and Cocoa.
Beyond avoiding collisions, one of the benefits that name prefixes give is that you're immediately aware of what type you're really dealing with. Suppose you had the following code:
Color c = ...;
MultiValueMap m = ...;
From a cursory glance at the code, and depending on which libraries you've used, those types could come from a number of different sources. You may have to look up which include/import statement was used to understand what the type can do (e.g. you want to modify it but it's missing a method that you're sure is there).
In the iOS world, you would immediately know whether it's a UIColor vs. a CGColor and gain immediate context.
In the past at WWDC, Apple would host a session where they explained Cocoa/Objective-C coding conventions. I believe they mention this aspect of name prefixes so you might want to find one of the recordings that are made available. Other C developers (e.g. Linux kernel developers) also do not seem to think highly of C++ namespaces (among other C++ features) for various reasons.

What's the benefit of case-sensitivity in a program language? [duplicate]

Possible Duplicate:
Is there any advantage of being a case-sensitive programming language?
My first programming experiences were with the Basic family (MSX BASIC, QBasic, VB).
These are all case-insensitive. Now, it might be because of these first experiences, but I've never grasped the benefit of a language being case-sensitive. On the contrary, I think it is a source of unneeded overhead and bugs, and it still annoys me when I use e.g. Java or C.
Now, I've just been reading about Clojure (a Lisp dialect) and noticed - to my surprise - that one of its differences from Lisp is case-sensitivity.
So: what is actually the benefit (to the programmer) of having a case-sensitive language?
The only things I can think of are:
double the number of symbols
visual feedback and easier reading for complex variables using techniques like CamelCase, e.g. HopCount
However, the first argument doesn't hold, because having hopcount and HopCount available as distinct names is a major source of bugs (it is bad practice to use both in one method).
The second argument doesn't hold either, as a decent IDE can provide this in another way. A good example is the VBA IDE, which has a very good approach: the language is case-insensitive, but as soon as you type a variable it is changed to the case used in its definition. For example, if you defined Dim thisIsMyVariable As String, any occurrence of thisismyvariable is changed to thisIsMyVariable. That provides the programmer with an immediate clue that the variable was actually typed correctly (because it changed appearance).
Edit: added ... benefit to the programmer ...
One point is, like you said, visual aid. Most programming languages (and even frameworks) have conventions on how to capitalize variables, names, etc.
Also, it enforces using uniform names everywhere, so you don't have a mess with the same variable referred to as "var", "Var" or even "VaR".
I can't remember ever having had bugs related to capitalization, so that point seems kind of contrived to me.
Using two variables of the same name but different capitalization sounds to me like a conscious attempt to shoot yourself in the foot. Different capitalization conventions almost everywhere signify objects of completely different kinds (classes, variables, methods and so on), so it's pretty hard to make such a mistake given the completely different semantics.
I'd like to think of it in this way: what do we gain by NOT having case-sensitivity?
We introduce ambiguity, we encourage sloppiness and poor style.
This is a slightly subjective matter of course.
Many naming conventions demand that symbols denoting objects from different semantic classes (types, functions, variables) have their own name casing rules. In Java, for example, type names always begin with an upper case letter, while variables, member function names etc. begin with a lower case letter. This effectively puts type names in a different namespace and gives a visual clue as to what a statement actually means.
// declare and initialize a new Point
Point point = new Point();
// calls a static member function of type Point
Point.fooBar();
// calls a member function of Point
point.moveTo(x, y);

History of access control modifiers such as public/private/protected

How did these keywords and concepts come to life? What were the forces and problems that made them appear? What was the first language to have them?
Actually, it's not just about public/private/protected, but rather the whole range of keywords that enforce some rules (abstract, final, internal).
But, please, do not assume things. Answer if you know at least part of the answer or answer if you lived those moments. References are greatly appreciated.
Simula (1967), considered to be the first OO language, has modifiers called protected and hidden. I assume that public is the default, I can't remember. It also uses virtual.
And, with thanks to Pavel, Simula introduced the most important keywords (and concepts) of class, this, new, downcasting and reference types.
Smalltalk (1980), a later but much more fundamental OO language, gave us methods responding to messages. This is basically the same functionality as virtual functions. Messages and classes were later imitated in C (non-OO) to give the Windows API polymorphic behavior, though it still needed ugly switch statements and function pointers to stand in for inheritance.
The first use of properties was, as far as I know, in Delphi (Object Pascal, before 1994).
The public, private and protected access modifiers come from C++. It seems that public and private already existed in "C with Classes", the short-lived precursor of C++. This is probably detailed in The Design and Evolution of C++.
I think abstract and final come from Java and internal from C#.
This sort of thing starts out with multiple language designers asking "what's a simple, logical name for this concept?" Then, over time, certain names become popular (sometimes because they're good names, sometimes just because). Add 20 years, and most people end up picking the same names, based on what they've seen.
A similar question, perhaps, to asking how new words get added to (say) the English language.
For C++, the origins of private and public protection date back beyond Stroustrup's experiments with C With Classes to an even older system - the Cambridge CAP computer. This is described in section 2.10 of "The Design & Evolution of C++".
As for protected, that has had a murkier past, and I don't have a good reference for it.

Are there established alternatives to ISomething / ISomethingable for interfaces?

The .NET standard of prefixing an interface name with an I seems to be becoming widespread and isn't just limited to .NET any more. I have come across a lot of Java code that uses this convention (so it wouldn't surprise me if Java used it before C# did). Also Flex uses it, and so on. The placing of an I at the start of the name smacks of Hungarian notation though and so I'm uncomfortable with using it.
So the question is: is there an alternative way of denoting that Something is an interface rather than a class, and is there any need to denote it like this anyway? Or is it a case of it having become a standard, so I should just accept it and stop trying to stir up "religious wars" by suggesting it be done differently?
From the Framework Design Guidelines book:
Interfaces representing roots of a hierarchy (e.g. IList) should also use nouns or noun phrases. Interfaces representing capabilities should use adjectives and adjective phrases (e.g. IComparable, IFormattable).
Also, from the annotations on interface naming:
KRZYSZTOF CWALINA: One of the few prefixes used is “I” for interfaces (as in ICollection), but that is for historical reasons. In retrospect, I think it would have been better to use regular type names. In a majority of the cases developers don’t care that something is an interface and not an abstract class, for example.
BRAD ABRAMS: On the other hand, the “I” prefix on interfaces is a clear recognition of the influence of COM (and Java) on the .NET Framework. COM popularized, even institutionalized, the notation that interfaces begin with “I.” Although we discussed diverging from this historic pattern, we decided to carry forward the pattern as so many of our users were already familiar with COM.
JEFFREY RICHTER: Personally, I like the “I” prefix and I wish we had more stuff like this. Little one-character prefixes go a long way toward keeping code terse and yet descriptive. As I said earlier, I use prefixes for my private type fields because I find this very useful.
BRENT RECTOR: Note: this is really another application of Hungarian notation (though one without the disadvantages of the notation's use in variable names).
It has very much become a widely adopted standard, and while it is a form of Hungarian, as Brent states, it doesn't suffer from the disadvantages of using Hungarian notation in variable names.
I would just accept it, to be honest. I know what you mean about being a bit like Hungarian notation (or at least abuse of the same) but I think it gives sufficient value to be worth doing in this case.
With dependency injection being in vogue, often I find I end up with an interface and a single production implementation. It's handy to make them easily distinguishable just with the I prefix.
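To make that concrete, the typical shape is an interface plus a single production class, where the I prefix keeps the pair easy to tell apart at a glance (all names below are invented):

public interface IReportSender
{
    void Send(string report);
}

// The single production implementation.
public class ReportSender : IReportSender
{
    public void Send(string report)
    {
        // e.g. e-mail the report somewhere
    }
}

// Consumers depend only on the interface, so a test double can be injected instead.
public class MonthEndJob
{
    private readonly IReportSender _sender;

    public MonthEndJob(IReportSender sender)
    {
        _sender = sender;
    }

    public void Run()
    {
        _sender.Send("month-end figures");
    }
}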
One little data point: I work with both Java and C# a fair amount, and I regularly find myself having to check which types in Java are actually interfaces, particularly around the collection framework. .NET just makes this simple. Maybe it doesn't bother other people, but it bothers me.
+1 for IFoo from me.
As a .NET programmer (for the most part), I actually prefer the Java convention of dropping the I here, for a simple reason: Often, small redesigns require the change from an interface into an abstract base class or vice versa. If you have to change the name, this might require a lot of unnecessary refactoring.
On the other hand, usage should be transparent to the client, so they shouldn't care about this type hint. Furthermore, the "able" suffix in "Thingable" should be enough of a hint. It works well enough in Java.
/EDIT: I'd like to point out that the above reasoning prompted me to drop the I prefix for private projects. However, upon checking one of them against the FxCop rule set, I promptly reverted to using I. Consistency wins here, even though a foolish consistency is the hobgoblin of little minds.
It's all about style and readability. Prefixing interfaces with "I" is merely a naming convention and style guideline that has caught on. The compilers themselves couldn't care less.
My main assumption is that the most important thing is to maintain readability in the domain part of the implementation. Therefore:
If you have one behaviour and one possible implementation, then just don't create an interface:
public class StackOverflowAnswerGenerator { }
If you have one behaviour and many possible implementations, then there is no problem: you can just drop the "I" and have:
public interface StackOverflowAnswerGenerator {}
public class StupidStackOverflowAnswerGenerator : StackOverflowAnswerGenerator {}
public class RandomStackOverflowAnswerGenerator : StackOverflowAnswerGenerator {}
public class GoogleSearchStackoverflowAnswerGenerator : StackOverflowAnswerGenerator {}
//...
The real problem comes when you have one behaviour and one possible implementation, but you need an interface to describe its behaviour (for convenient testing, because of a convention in your project, because a library/framework you use enforces it, ...). Possible solutions, other than prefixing the interface, are:
a) Prefix or suffix the implementation (as stated in some other answers in this topic)
b) Use a different namespace for interface:
namespace StackOverflowAnswerMachine.Interfaces
{
    public interface StackOverflowAnswerGenerator {}
}

namespace StackOverflowAnswerMachine
{
    public class StackOverflowAnswerGenerator : Interfaces.StackOverflowAnswerGenerator
    {}
}
c) Use a different namespace for implementation:
namespace StackOverflowAnswerMachine
{
    public interface StackOverflowAnswerGenerator {}
}

namespace StackOverflowAnswerMachine.Implementations
{
    public class StackOverflowAnswerGenerator : StackOverflowAnswerMachine.StackOverflowAnswerGenerator
    {}
}
Even though I think the last possibility is the cleanest, it has one drawback: although using StackOverflowAnswerMachine; gives you access to all domain objects, you must still qualify domain interfaces so that they are not confused with their implementations. That may feel inconvenient, but in a clean design a class usually doesn't use many other domain objects, and mostly you need the qualifier only in field declarations and constructor parameter lists. So that is my current recommendation.
The client of domain functionality shouldn't need to know whether they're using an interface, an abstract class or a concrete class. If they need to know, then there is a serious problem in the project, because it has domain logic and infrastructural concerns mixed in the same abstraction layer. Therefore I recommend solution "a" or "c".
The coding standard for Symbian has interfaces (pure abstract C++ classes) denoted with an M rather than an I.
Otherwise, the only other way I have seen of denoting interfaces is through context.
For .NET, Microsoft's Framework Design Guidelines book absolutely recommends it, and yes, it is very much standard. I have never seen it done otherwise, and to create a new convention would only serve to confuse people.
I should add that I dislike Hungarian notation too, but this and the case of prefixing class variables with an underscore are good exceptions to me, because they make code so much more readable.
I've always thought this naming convention is a bit of a dinosaur. Nowadays IDEs are powerful enough to tell us that something is an interface. Adding that I makes the code harder to read, so if you really want a naming convention that separates interfaces from classes, I would append Impl to the name of the implementing class.
public class CustomerImpl implements Customer
You asked for an alternative, so here is one I have encountered:
Use no prefix on the interface, but use a c or C prefix on the corresponding concrete classes. Most of your code will generally reference the interface, so why pollute it with a prefix rather than prefixing the much less frequently used concrete type?
This approach does introduce one inconsistency, in that some concrete types will be prefixed (the ones with matching interfaces) and others will not. That may actually be useful, since it reminds developers that an interface exists and that its use should be preferred over the concrete type.
To be honest, I use the prefix on the interface, but I think that is mostly because I have become so accustomed to and comfortable with it.

Is there a best way to handle naming fads?

In the last year and a bit of working on my team's code base I have noticed a steady progression of naming conventions.
For example, there are a lot of classes that are named to express that they are a class that helps you do something.
Here's the ones I've spotted:
MyClassUtil
MyClassFactory
MyClassHelper
MyClassManager
MyClassService
It just seems to me that over time people come up with naming conventions for roughly the same thing, and so instead of having everything named in a consistent manner, you wind up with a code base that has a bit of every convention. All the new stuff is named using the latest fad, and so you can pretty much tell the age of a piece of code by which convention was in fashion at the time.
What is the best way to deal with this tendency? Is it really a problem? As these naming fads come into vogue, should one use the latest fad? Should one rename all existing items with the new naming convention? Or should one just accept the variety as something that is inescapable?
They don't seem like fads... all these names hint at the purpose of the class, and those purposes are different. With programming, it's all in the name, and they should be chosen very carefully. The variety doesn't need to be escaped. The names vary because the purposes of the classes vary.
MyClassUtil
-Some utilities for working with MyClass that it didn't come with. Maybe MyClass belongs to a library you're using, but you often use some higher level functions with it and you need somewhere to put them.
MyClassFactory
-Creates instances of MyClass in an abstracted way. This allows you to write code that needs MyClass instances and gets them from a MyClassFactory. The factory can then be modified in the future to serve up different specific implementations of MyClass; under unit testing, for example, the factory might serve up dummy/mock MyClasses. This means a class that uses the factory can be tested without needing to change it: just change the factory, and voilà, you have isolated the class being tested (see the sketch after this list).
MyClassHelper
-Ok, I may agree; perhaps this one could be more specific. It does something to help with MyClass, but what? Maybe it's a bit similar to MyClassUtil, but MyClassUtil probably contains general functions that work with MyClass, whereas the helper is initialized with a specific instance of MyClass and can then perform operations on that one instance. You need a new helper for each MyClass you want to help.
MyClassManager
-Maybe this deals with a pool of MyClass instances and stores or orchestrates them. E.g. a CommunicationsManager might handle wiring together a class that talks to a port or connection (such as ethernet or serial), a class that deals with the comms protocol being sent over it so it can transport packets, and a class that deals with the messages in those packets.
MyClassService
-A service can do things for you, such as converting a postcode into a grid reference. Usually a service can resolve to many specific things; with the postcode example, this class might have implementations that talk to different web sites to do the conversion.
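As a minimal sketch of the Factory point above (MyClass is the same placeholder used in the question; the implementations are invented):

public interface MyClass
{
    void DoWork();
}

// The real implementation used in production.
public class RealMyClass : MyClass
{
    public void DoWork() { /* talk to the real system */ }
}

// A dummy implementation handed out under unit testing.
public class MockMyClass : MyClass
{
    public void DoWork() { /* do nothing, or record the call */ }
}

// The factory is the one place that decides which implementation callers get,
// so the classes that use MyClass never need to change.
public class MyClassFactory
{
    private readonly bool _testMode;

    public MyClassFactory(bool testMode)
    {
        _testMode = testMode;
    }

    public MyClass Create()
    {
        return _testMode ? (MyClass)new MockMyClass() : new RealMyClass();
    }
}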
All of the class names you've given above indicate to me a striking departure from object-oriented principles. There's no way of telling what "MyClassUtil" or "MyClassService" does. It could be anything. Class naming should be specific and should clearly convey the actual function of the class. None of these do. The best way to deal with this tendency is to brush up on object-oriented programming skills and name classes accordingly.
Now, it could be that these examples point out the function, within the application architecture, that these classes represent, and your use of "MyClass" is simply a placeholder for something more definitive at runtime, in which case, I wouldn't view these as naming fads, but rather as descriptive indicators of the function of the class itself, with a loose hint of the application's underlying architecture.
If this is pervasive, the team needs to spend some time studying OO design: reading the source code of well-respected OO frameworks, books on design patterns, or books such as Evans' "Domain-Driven Design".
"Util" and "Manager" are often symptoms of poor design - "code smells". So is "Helper" outside of special contexts (Rails apps) where it's well entrenched.
"Factory" and "Service" have precise technical meanings, you can check the code to see if it conforms to those design patterns.
The general remedy is to sit down with the team, and have an explicit discussion about what benefits you're expecting from these naming schemes, what makes sense and what doesn't, and then over the next few months apply refactoring techniques to phase out the names you've all decided are code smells.
Naming is important. It shouldn't be taken lightly, nor is it a subjective matter. True, there is often more than one correct answer to a given naming issue. However, there are seldom many answers consistent with previous choices, which is key.
Renaming things to better names and refactoring the code so that each class has a clear responsibility is the recommended approach. To know what kinds of names to use, read Tim Ottinger's article about Meaningful Names.
When a class does only one thing, then giving it a descriptive name is usually easy. Words such as "manager" are vague and may indicate that the class is responsible for doing so many unrelated things, that no simple name is able to describe what the class does. If you can know what the class does just by looking at the name of the class, then the class has a good name.
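A tiny illustration of that point, with invented names: a vague name like "manager" can absorb unrelated responsibilities, while classes that each do one thing practically name themselves.

public class User { }

// Vague: what exactly does it "manage"? Anything can end up in here.
public class UserManager
{
    public void Save(User user) { /* ... */ }
    public void SendWelcomeEmail(User user) { /* ... */ }
    public bool ValidatePassword(string password) { return password.Length >= 8; }
}

// One responsibility per class, and the descriptive name follows naturally.
public class UserRepository
{
    public void Save(User user) { /* ... */ }
}

public class WelcomeEmailSender
{
    public void Send(User user) { /* ... */ }
}

public class PasswordValidator
{
    public bool Validate(string password) { return password.Length >= 8; }
}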
I don't really see how Factory or Service fit in to a particular fad...
Factory is a design pattern and if the class really is a factory then it's a perfectly appropriate name.
If a class is a Windows service what's wrong with calling it service?
There isn't a problem unless you find that performing all the rename refactors is too costly even though you really want to do them.
Why not use a static analysis tool to help enforce a set of style and consistency rules?
If you're in the .NET world Microsoft provides a tool called StyleCop
In the class name examples you give, does "MyClass" stand for an actual class name, so that you are really seeing names like "PersonnelRecordUtil" or "GraphNodeFactory"? MyClassFactory is a really bad actual name for a class.